When training a model from .mha files using the `ITKReader` ImageReader class, memory usage is very high and unstable. This happens to a much lesser extent after converting the files to NIfTI format and using the `NibabelReader` ImageReader. We could also reduce the problem by adding an explicit `gc.collect()` call after each iteration. Still, the ramp-up of memory usage with the `ITKReader` is much faster than with the `NibabelReader`.
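For context, here is a minimal sketch of the kind of pipeline we use and the `gc.collect()` workaround; the file paths, transforms, and model below are placeholders rather than our actual training code:

```python
import gc

import torch
from monai.data import DataLoader, Dataset
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, ScaleIntensityd

# reader="ITKReader" is the configuration that shows the memory ramp-up;
# reader="NibabelReader" on NIfTI files is much less affected.
transforms = Compose([
    LoadImaged(keys=["image"], reader="ITKReader"),
    EnsureChannelFirstd(keys=["image"]),
    ScaleIntensityd(keys=["image"]),
])

files = [{"image": f"/data/case_{i:03d}.mha"} for i in range(100)]  # placeholder paths
loader = DataLoader(Dataset(data=files, transform=transforms), batch_size=1, num_workers=4)

model = torch.nn.Conv3d(1, 1, 3, padding=1)  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(150):
    for batch in loader:
        optimizer.zero_grad()
        out = model(batch["image"])
        loss = out.mean()  # placeholder loss
        loss.backward()
        optimizer.step()
        gc.collect()  # explicit collection after each iteration reduces the memory growth
```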
Any ideas on what causes this behaviour and how best to handle it? We would prefer to use .mha files for training, as that is the default format we use to handle image data.
Below are some plots of memory use, measured through `docker stats`. Each run was done in a separate container. The first plot shows the default `ITKReader` with .mha images, the second the `NibabelReader` with NIfTI images, and the third the `ITKReader` with explicit `gc.collect()` calls. Note that in the first plot the run crashes at around 60 epochs after exceeding the container's memory limit; the other two runs completed the full 150 epochs and then stopped.
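For reference, the memory samples were taken roughly like this; the container name, sampling interval, and output file are illustrative, not the exact script used:

```python
import csv
import subprocess
import time

CONTAINER = "monai-train"  # hypothetical container name

# Poll `docker stats` once per interval and append the memory usage to a CSV.
with open("mem_usage.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "mem_usage"])
    while True:
        mem = subprocess.run(
            ["docker", "stats", CONTAINER, "--no-stream", "--format", "{{.MemUsage}}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        writer.writerow([time.time(), mem])
        f.flush()
        time.sleep(10)  # sampling interval in seconds
```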