This repository has been archived by the owner on Sep 4, 2021. It is now read-only.

Memory Constraint

Yipeng Hu edited this page Sep 6, 2018 · 2 revisions

The default networks, in particular the LocalNet, can be constrained by GPU memory. For example, full-body CT images at their original resolution are almost certainly too large. As a rule of thumb, the network can just about squeeze one minibatch of 4 pairs of image volumes, each sized around 100x100x100, into a GPU with 12 GB of memory.

The good news is that we are working on a more memory-efficient architecture. In the meantime, there are a few things you can do:

  • Do you really need high-resolution, full image volumes to predict DDFs to useful precision? Can you crop them (cropping reduces size very quickly in 3D)? Can you down-sample your images first? I did both in the prostate MR-TRUS application;
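Cropping and down-sampling can be sketched in NumPy as follows; `centre_crop` and `downsample_by_two` are illustrative helpers for this page, not functions from this repository, and the 2x strided down-sampling is deliberately naive (no anti-aliasing):

```python
import numpy as np

def centre_crop(volume, target_shape):
    """Crop a 3D volume symmetrically around its centre."""
    starts = [(s - t) // 2 for s, t in zip(volume.shape, target_shape)]
    slices = tuple(slice(s, s + t) for s, t in zip(starts, target_shape))
    return volume[slices]

def downsample_by_two(volume):
    """Naive 2x down-sampling by striding every other voxel."""
    return volume[::2, ::2, ::2]

# Because volume scales cubically, each step shrinks memory quickly:
ct = np.random.rand(256, 256, 256).astype(np.float32)
cropped = centre_crop(ct, (200, 200, 200))
small = downsample_by_two(cropped)
print(small.shape)  # (100, 100, 100)
```

For real data, an interpolating resampler (e.g. `scipy.ndimage.zoom`) is usually preferable to plain striding, but the memory arithmetic is the same.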

  • Do you really need 32 initial channels? While it is case-dependent, some networks can be trained with 16 initial channels without losing generalisation ability. This can be configured in network/LocalNet (or any network you use) by setting a smaller number: self.num_channel_initial = 32 -> self.num_channel_initial = 16;
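As a rough back-of-the-envelope check of what this buys you, the snippet below estimates first-layer feature-map memory, assuming float32 activations; `feature_map_bytes` is a hypothetical helper, not part of the codebase, and ignores weights, gradients and optimiser state:

```python
def feature_map_bytes(shape, channels, bytes_per_float=4):
    """Memory of one float32 feature-map stack at full resolution."""
    d, h, w = shape
    return d * h * w * channels * bytes_per_float

vol = (100, 100, 100)
for nc0 in (32, 16):
    mb = feature_map_bytes(vol, nc0) / 1024 ** 2
    print(f"initial channels={nc0}: ~{mb:.0f} MB per volume per layer")
```

Halving the initial channel count roughly halves activation memory throughout the network, since later layers scale from this number.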

  • The u-net-like encoder follows a widely used scheme: at each level the number of feature maps doubles while the feature maps shrink to 0.5x0.5x0.5 of their size, and vice versa for the decoder. This is what is used here too, but it is largely heuristic and may be very inefficient, in terms of memory usage, at the higher resolutions. You can customise the number of channels at each resolution, e.g. by changing this line: nc = [int(self.num_channel_initial*(2**i)) for i in range(5)]
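A sketch of customising the per-resolution channel counts, assuming the network reads them from the list `nc` built in the line quoted above; the numbers here are illustrative, not a recommended configuration:

```python
num_channel_initial = 32

# Default doubling scheme: channels double at every coarser level.
nc_default = [int(num_channel_initial * (2 ** i)) for i in range(5)]
print(nc_default)  # [32, 64, 128, 256, 512]

# In 3D the voxel count falls ~8x per level, so the finest levels
# dominate memory despite having the fewest channels:
side = 100
for level, channels in enumerate(nc_default):
    voxels = (side // (2 ** level)) ** 3
    print(level, channels, voxels * channels)

# One option is therefore to slim down only the memory-hungry
# fine levels while keeping capacity at the cheap coarse levels:
nc_slim = [16, 32, 128, 256, 512]
```

Whether a slimmer fine level hurts registration accuracy is case-dependent, so treat this as a knob to experiment with rather than a drop-in fix.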

Please add anything you think may help reduce memory consumption ;)
