
Commit cb8e9a8: Update README.md
Authored by MONAI-Project
1 parent f56bbb0

1 file changed: README.md (+268 -1 lines)
**Project MONAI** (**M**edical **O**pen **N**etwork for **AI**)

_AI Toolkit for Healthcare Imaging_

_Contact: monai.[email protected]_

_This document identifies the key concepts of Project MONAI at a high level; the goal is to facilitate further technical discussion of requirements, roadmap, feasibility, and trade-offs._

1. **Vision**
    * Develop a community of academic, industrial and clinical researchers collaborating and working on a common foundation of standardized tools.
    * Create a state-of-the-art, end-to-end training toolkit for healthcare imaging.
    * Provide academic and industrial researchers with an optimized and standardized way to create and evaluate models.
2. **Targeted users**
    * Primarily healthcare researchers who develop deep learning models for medical imaging.
3. **Goals**
    * Deliver domain-specific workflow capabilities.
    * Address the end-to-end “pain points” of creating medical imaging deep learning workflows.
    * Provide a robust foundation with a performance-optimized system software stack that lets researchers focus on their research rather than on software engineering concerns.
4. **Guiding principles**
    1. Modularity
        * Pythonic -- object-oriented components
        * Compositional -- components can be combined to create workflows
        * Extensible -- easy to create new components and extend existing ones
        * Easy to debug -- loosely coupled, easy-to-follow code (e.g. in eager or graph mode)
        * Flexible -- interfaces for easy integration of external modules
    2. User friendly (see the sketch after this list)
        * Portable -- use components/workflows via Python “import”
        * Run well-known baseline workflows in a few commands
        * Access well-known public datasets in a few lines of code
    3. Standardisation
        * Unified/consistent component APIs with documentation specifications
        * Unified/consistent data and model formats, compatible with other existing standards
    4. High quality
        * Consistent coding style, extensive documentation, tutorials, and contributors’ guidelines
        * Reproducibility -- e.g. system-specific deterministic training
    5. Future proof
        * Task scalability -- in both datasets and computational resources
        * Support for advanced data structures -- e.g. graphs/structured text documents
    6. Leverage existing high-quality software packages whenever possible
        * E.g. low-level medical image format readers, image preprocessing with external packages
        * Rigorous risk analysis of the choice of foundational software dependencies
    7. Compatibility with external software
        * E.g. data visualisation, experiment tracking, management, orchestration
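
As a purely illustrative sketch of the “user friendly” principle, the intended experience is that a public dataset, a baseline network, and a ready-to-use workflow come together in a few lines of Python. Every name below (`monai.data`, `monai.networks`, `monai.workflows`, `load_decathlon`, `SegmentationTrainer`, and their parameters) is a hypothetical placeholder, not a committed API:

```python
# Hypothetical usage sketch; all module, function and parameter names
# are placeholders, not a committed MONAI API.
from monai.data import load_decathlon            # public dataset in a few lines
from monai.networks import UNet                  # baseline reference architecture
from monai.workflows import SegmentationTrainer  # ready-to-use workflow

train_ds, val_ds = load_decathlon(task="Task09_Spleen", download=True)
model = UNet(spatial_dims=3, in_channels=1, out_channels=2)

trainer = SegmentationTrainer(model=model, train_data=train_ds,
                              val_data=val_ds, max_epochs=100)
trainer.run()
```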
5. **Key capabilities**

| _Basic features_ | _Example_ | _Notes_ |
| --- | --- | --- |
| Ready-to-use workflows | Volumetric image segmentation | “Bring your own dataset” |
| Baseline/reference network architectures | Provide an option to use “U-Net” | |
| Intuitive command-line interfaces | | |
| Multi-GPU training | Configure the workflow to run data-parallel training | See the sketch below |
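
A minimal sketch of the multi-GPU capability above, assuming a PyTorch-style stack (this document does not mandate a backend); the toy network stands in for a real segmentation model:

```python
import torch
import torch.nn as nn

# Illustrative only: assumes a PyTorch-style backend, which this document
# does not mandate. The toy network stands in for a real 3D segmentation model.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 2, kernel_size=3, padding=1),
)

if torch.cuda.device_count() > 1:
    # Replicate the model across all visible GPUs; input batches are split
    # along dim 0 and gradients are reduced automatically.
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```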

| _Customisable Python interfaces_ | _Example_ | _Notes_ |
| --- | --- | --- |
| Training/validation strategies | Schedule a strategy of alternating between generator and discriminator model training | |
| Network architectures | Define new networks with the recent “Squeeze-and-Excitation” blocks | “Bring your own model” |
| Data preprocessors | Define a new reader to read training data from a database system | |
| Adaptive training schedule | Stop training when the loss becomes “NaN” | “Callbacks” (first sketch below) |
| Configuration-driven workflow assembly | Build workflow instances from a configuration file | Convenient for managing hyperparameters (second sketch below) |
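
Two brief sketches of the customisable interfaces above. First, an adaptive training schedule expressed as a callback that stops training on a NaN loss; the callback protocol here (an `on_batch_end` hook receiving a `trainer` handle) is hypothetical, not a defined interface:

```python
import math

class StopOnNaN:
    """Stop training when the loss becomes NaN.

    The callback protocol used here (an `on_batch_end` hook receiving a
    `trainer` handle) is a hypothetical sketch, not a defined interface.
    """

    def on_batch_end(self, trainer, loss):
        if math.isnan(loss):
            trainer.should_stop = True  # ask the training loop to exit cleanly
```

Second, configuration-driven workflow assembly; the JSON schema and the factory function are illustrative only, not a MONAI format:

```python
import json

# Hyperparameters live in a configuration file instead of code; the schema
# and the factory below are illustrative only.
config = json.loads("""
{
  "network": {"name": "UNet", "in_channels": 1, "out_channels": 2},
  "optimizer": {"name": "Adam", "lr": 0.0001},
  "max_epochs": 100
}
""")

def build_workflow(cfg):
    # A real factory would look names up in a registry of components.
    print(f"building {cfg['network']['name']} with lr={cfg['optimizer']['lr']}")

build_workflow(config)
```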

| _Model sharing & transfer learning_ | _Example_ | _Notes_ |
| --- | --- | --- |
| Sharing model parameters, hyperparameter configurations | Standardisation of the model archiving format | |
| Model optimisation for deployment | Model compression, TensorRT | |
| Fine-tuning from pre-trained models | | See the sketch below |
| Model interpretability | Visualising feature maps of a trained model | |
| Experiment tracking & management | | <https://polyaxon.com/> |
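
A hedged sketch of fine-tuning from a pre-trained model, again assuming a PyTorch-style stack; the checkpoint path and toy network are placeholders rather than shipped artifacts:

```python
import torch
import torch.nn as nn

# Illustrative fine-tuning sketch (PyTorch-style). "pretrained_unet.pt" is a
# hypothetical shared model archive; the toy network stands in for a real
# architecture.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 2, kernel_size=3, padding=1),
)
model.load_state_dict(torch.load("pretrained_unet.pt"))

# Freeze the early layers and fine-tune only the final block.
for param in model[0].parameters():
    param.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```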

| _Advanced features_ | _Example_ | _Notes_ |
| --- | --- | --- |
| Compatibility with external toolkits | XNAT as a data source, ITK as a preprocessor | |
| Advanced learning strategies | Semi-supervised and active learning | |
| High-performance preprocessors | Smart caching, multi-process data loading | See the sketch below |
| Multi-node distributed training | | |
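
A minimal sketch of the “smart caching” idea above (illustrative; not a MONAI class): deterministic preprocessing results are computed once and reused across epochs:

```python
class CachingDataset:
    """Sketch of "smart caching" (illustrative; not a MONAI class).

    Deterministic preprocessing is computed once per item and kept in
    memory, so subsequent epochs skip the expensive transform.
    """

    def __init__(self, items, preprocess):
        self.items = items
        self.preprocess = preprocess
        self._cache = {}

    def __len__(self):
        return len(self.items)

    def __getitem__(self, index):
        if index not in self._cache:
            self._cache[index] = self.preprocess(self.items[index])
        return self._cache[index]
```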
* Project licensing: Apache License, Version 2.0
