We hope that the new structure of nnU-Net v2 makes it much more intuitive to modify! We cannot give an extensive tutorial covering each and every bit that can be changed. It is better to search the repository for the place where the behavior you intend to change is implemented and work your way through the code from there. Setting breakpoints and debugging into nnU-Net really helps in understanding it and will thus make the necessary modifications easier!
Here are some things you might want to read before you start:
- Editing nnU-Net configurations through plans files is really powerful now and allows you to change a lot of things regarding preprocessing, resampling, network topology etc. Read the documentation on plans files!
- Image normalization and image I/O formats are easy to extend!
- Manual data splits can be defined via a `splits_final.json` file in the preprocessed dataset folder (a sketch of the file format follows after this list)
- You can chain arbitrary configurations together into cascades; this, too, is configured through the plans files
- Read about our support for region-based training
- If you intend to modify the training procedure (loss, sampling, data augmentation, lr scheduler, etc.) then you need to implement your own trainer class. Best practice is to create a class that inherits from nnUNetTrainer and implements the necessary changes. Head over to our trainer classes folder for inspiration! There will be trainers similar to what you intend to change and you can use them as a guide (a minimal example follows after this list). nnUNetTrainer is structured similarly to PyTorch Lightning trainers, which should also make things easier!
- Integrating new network architectures can be done in two ways:
  - Quick and dirty: implement a new nnUNetTrainer class and overwrite its `build_network_architecture` function (a sketch of this route follows after this list). Make sure your architecture is compatible with deep supervision (if not, use `nnUNetTrainerNoDeepSupervision` as a basis!) and that it can handle the patch sizes that are thrown at it! Your architecture should NOT apply any nonlinearities at the end (softmax, sigmoid etc.) - nnU-Net does that!
  - The 'proper' (but difficult) way: build a dynamically configurable architecture such as the `PlainConvUNet` class used by default. It needs some sort of GPU memory estimation method that can be used to evaluate whether certain patch sizes and topologies fit into a specified GPU memory target. Build a new `ExperimentPlanner` that can configure your new class and communicate with its memory budget estimation. Run `nnUNetv2_plan_and_preprocess` while specifying your custom `ExperimentPlanner` and a custom `plans_name`. Implement a nnUNetTrainer that can use the plans generated by your `ExperimentPlanner` to instantiate the network architecture. Specify your plans and trainer when running `nnUNetv2_train`. It always pays off to first read and understand the corresponding nnU-Net code and use it as a template for your implementation!
- Remember that multi-GPU training, region-based training, ignore label and cascaded training are now simply integrated into one unified nnUNetTrainer class; no separate classes are needed. Keep this in mind when implementing your own trainer classes: either ensure support for all of these features or raise a `NotImplementedError` (a sketch of such a guard follows after this list)!
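
For manual splits, the central artifact is the `splits_final.json` file. Below is a minimal sketch, assuming the nnU-Net v2 convention that this file lives in the preprocessed dataset folder and contains one train/val dictionary per fold; the dataset path and case identifiers are made up for illustration:

```python
import json

# splits_final.json is a list with one entry per fold. Each entry maps
# "train" and "val" to lists of case identifiers (file names without the
# channel suffix and file extension).
splits = [
    {"train": ["case_000", "case_001", "case_002"], "val": ["case_003"]},  # fold 0
    {"train": ["case_000", "case_001", "case_003"], "val": ["case_002"]},  # fold 1
]

# made-up path: place the file in the preprocessed folder of your dataset
with open("/path/to/nnUNet_preprocessed/Dataset042_Example/splits_final.json", "w") as f:
    json.dump(splits, f, indent=2)
```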
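
To give a feel for how small a trainer modification can be, here is a minimal sketch of a custom trainer that only swaps the optimizer and learning rate schedule. It assumes the `nnunetv2.training.nnUNetTrainer` module layout and the `initial_lr`/`weight_decay` attributes of the base class; verify these names against the nnUNetTrainer of your installed version:

```python
import torch

from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer


class nnUNetTrainer_SGDStepLRExample(nnUNetTrainer):
    """Hypothetical example: everything is inherited except the optimizer
    and the lr scheduler."""

    def configure_optimizers(self):
        # self.initial_lr and self.weight_decay are set by the base trainer
        optimizer = torch.optim.SGD(self.network.parameters(), self.initial_lr,
                                    weight_decay=self.weight_decay,
                                    momentum=0.95, nesterov=True)
        # step decay instead of the default poly schedule
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100,
                                                       gamma=0.9)
        return optimizer, lr_scheduler
```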
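
The 'quick and dirty' architecture route then looks roughly as follows. Note that the exact signature of `build_network_architecture` has changed between nnU-Net v2 releases, so copy it from the base class of your install; `MyUNet` is a hypothetical architecture of your own:

```python
from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer

from my_package.my_unet import MyUNet  # hypothetical custom architecture


class nnUNetTrainer_MyUNet(nnUNetTrainer):
    @staticmethod
    def build_network_architecture(plans_manager, dataset_json,
                                   configuration_manager, num_input_channels,
                                   enable_deep_supervision: bool = True):
        # the number of output channels depends on the label handling
        # (regions vs. plain labels)
        label_manager = plans_manager.get_label_manager(dataset_json)
        # must return raw logits: no softmax/sigmoid at the end!
        return MyUNet(in_channels=num_input_channels,
                      num_classes=label_manager.num_segmentation_heads,
                      deep_supervision=enable_deep_supervision)
```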
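
And finally a sketch of the 'fail loudly' pattern for features your trainer does not implement. It assumes the `label_manager` attribute with its `has_regions`/`has_ignore_label` properties from the v2 label handling; check these names in your version:

```python
from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer


class nnUNetTrainer_NoRegionsExample(nnUNetTrainer):
    """Hypothetical trainer that does not implement region-based training or
    the ignore label and therefore refuses to run with them."""

    def initialize(self):
        super().initialize()
        if self.label_manager.has_regions or self.label_manager.has_ignore_label:
            raise NotImplementedError(
                "This trainer supports neither region-based training "
                "nor the ignore label."
            )
```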