Port towards config file-based training #90

Merged — 67 commits, merged on Mar 7, 2024

Commits (the file changes below reflect 51 of the 67 commits)
718cb5e
port code towards config file-based training
naga-karthik Nov 29, 2023
35d7093
fix/update imports
naga-karthik Nov 29, 2023
893106a
create function for argument parser
naga-karthik Nov 29, 2023
c03a962
minor code improvements
naga-karthik Nov 29, 2023
28e99cc
add configs folder with soft_all config yaml
naga-karthik Nov 29, 2023
1382696
add citation bibtex info
naga-karthik Nov 24, 2023
b1b37c3
add arxiv badge
naga-karthik Nov 24, 2023
44b7b42
minor edit
naga-karthik Nov 24, 2023
c829090
Fixed typo
jcohenadad Dec 21, 2023
ace417a
Fixed wrong file name
jcohenadad Dec 21, 2023
1de1e8a
clarify help for checkpoint folder
naga-karthik Dec 21, 2023
ae95bad
Added setup.py
jcohenadad Dec 21, 2023
37258e8
Do not install requirements
jcohenadad Dec 21, 2023
31372e9
Get parser arguments from within main
jcohenadad Dec 21, 2023
40fbaaf
Instruct to download repos
jcohenadad Dec 21, 2023
f7c20a1
Fixed path to install package
jcohenadad Dec 21, 2023
902fc78
Added info to download model
jcohenadad Dec 21, 2023
3405c18
Updated README with install instructions
jcohenadad Dec 21, 2023
1613fb5
Updated README with correct syntax
jcohenadad Dec 21, 2023
8db546a
add code to dump train/val/test subjects into yaml file
naga-karthik Jan 15, 2024
eb3f1ff
add data splits yaml
naga-karthik Jan 15, 2024
ceab9a3
add info about finding train/val/test splits yaml file
naga-karthik Jan 15, 2024
2d9899d
simplify prepare_data function
naga-karthik Jan 22, 2024
1b6056a
remove unused input arg
naga-karthik Jan 22, 2024
c3a9fc0
binarize preds with 0.5 threshold
naga-karthik Jan 22, 2024
998fe5e
remove unused import
naga-karthik Jan 22, 2024
8d19401
fix checkpoint name while loading
naga-karthik Jan 28, 2024
91e57fa
add binarized soft labels
naga-karthik Jan 29, 2024
1b3e351
add script for generating binarized soft labels
naga-karthik Jan 31, 2024
dcb42af
remove unused function
naga-karthik Jan 31, 2024
70938c6
add option to create datalist for offline binarized soft labels
naga-karthik Feb 2, 2024
edae2ee
add script to compare csa across training GT
naga-karthik Feb 2, 2024
f5ec0f4
change variable name to softseg_soft
naga-karthik Feb 3, 2024
5740888
add script to generate csa violin plots for diff training label types
naga-karthik Feb 3, 2024
1aab0d4
rename training config file
naga-karthik Feb 5, 2024
5960989
revert to original version; no label binarization
naga-karthik Feb 5, 2024
e78cb74
add pycache
naga-karthik Feb 5, 2024
009d207
add joblib splits file for datalist creation
naga-karthik Feb 5, 2024
97d0a09
add script for nnunet inference
naga-karthik Feb 5, 2024
b26862f
add script for reproducing training
naga-karthik Feb 5, 2024
f2f4f43
add script for running inference and generating csa plots
naga-karthik Feb 5, 2024
b7f920e
add Readme
naga-karthik Feb 5, 2024
cb4c9de
rename perform_everything_on_device nnUNetPredictor arg
naga-karthik Feb 12, 2024
d3b0f75
add unified qc generation script
naga-karthik Feb 12, 2024
ff85f67
add einops requirement for swinunetr
naga-karthik Feb 17, 2024
aaa3971
add qc generation script for epi data
naga-karthik Feb 17, 2024
d1fe4c3
add script for comparing preds across thresholds
naga-karthik Feb 21, 2024
e04c7e4
add unified script to generate csa plots across methods and thresholds
naga-karthik Feb 22, 2024
953e6f0
update docstring
naga-karthik Feb 22, 2024
5a6eafb
remove old analyse_csa script
naga-karthik Feb 22, 2024
aecb98f
add v2 inference
naga-karthik Feb 23, 2024
c30ca03
fix issue with lambda for abs_csa_error plot
naga-karthik Feb 23, 2024
1b63685
add script for csa comparison across old & new models
naga-karthik Mar 5, 2024
eb90326
add script for csa comparison across resolutions for old & new models
naga-karthik Mar 5, 2024
2dc4cd3
update by adding support for analyzing csa across resolutions
naga-karthik Mar 5, 2024
5425989
add feature for training swinunetr & mednext models
naga-karthik Mar 5, 2024
47edbbd
add support for running inference with new models
naga-karthik Mar 5, 2024
8cafaab
add arg to output soft/hard sc seg masks
naga-karthik Mar 5, 2024
8911c97
add --pred-type soft input in segment_sc_MONAI function
naga-karthik Mar 5, 2024
d96c3a4
add keys for mednext and swinunetr models in training yaml
naga-karthik Mar 5, 2024
dadb666
add keep_largest_object postprocessing util
naga-karthik Mar 7, 2024
894db9c
update script to generate qc for all models
naga-karthik Mar 7, 2024
041ba27
update usage examples in docstring
naga-karthik Mar 7, 2024
d37f255
incorporate suggestions
naga-karthik Mar 7, 2024
b728ee7
update docstring usage example
naga-karthik Mar 7, 2024
907d927
Merge branch 'main' into nk/improve-training-procedure
naga-karthik Mar 7, 2024
c1c4e80
fix minor bug in --model args
naga-karthik Mar 7, 2024
3 changes: 2 additions & 1 deletion .gitignore
@@ -1 +1,2 @@
*.idea
*__pycache__/
17 changes: 17 additions & 0 deletions README.md
@@ -1,8 +1,23 @@
# Towards Contrast-agnostic Soft Segmentation of the Spinal Cord

[![arXiv](https://img.shields.io/badge/arXiv-2310.15402-b31b1b.svg)](https://arxiv.org/abs/2310.15402)

Official repository for contrast-agnostic spinal cord segmentation project using SoftSeg.

This repo contains all the code for data preprocessing, training and running inference on other datasets. The code is mainly based on [Spinal Cord Toolbox](https://spinalcordtoolbox.com) and [MONAI](https://github.com/Project-MONAI/MONAI) (PyTorch).

**CITATION INFO**: If you find this work and/or code useful for your research, please cite our paper:

```
@article{bedard2023towards,
  title={Towards contrast-agnostic soft segmentation of the spinal cord},
  author={B{\'e}dard, Sandrine and Enamundram, Naga Karthik and Tsagkas, Charidimos and Pravat{\`a}, Emanuele and Granziera, Cristina and Smith, Andrew and Weber II, Kenneth Arnold and Cohen-Adad, Julien},
  journal={arXiv preprint arXiv:2310.15402},
  year={2023},
  url={https://arxiv.org/abs/2310.15402}
}
```

## Table of contents
* [1. Main Dependencies](#1-main-dependencies)
* [2. Dataset](#2-dataset)
@@ -149,6 +164,8 @@ The training script expects a datalist file in the Medical Decathlon format cont…

```
python monai/create_msd_data.py -pd ~/duke/projects/ivadomed/contrast-agnostic-seg/data_processed_sg_2023-08-08_NO_CROP\data_processed_clean> -po ~/datasets/contrast-agnostic/ --contrast all --label-type soft --seed 42
```

The dataset split containing the training, validation, and test subjects can be found in the `monai/data_split_all_soft_seed15.yaml` file.

> **Note**
> The output of the above command is just a `.json` file pointing to the image-label pairs in the original BIDS dataset. It _does not_ copy the existing data to the output folder.
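
As a rough illustration, the resulting datalist can be inspected as sketched below. The split keys (`training`/`validation`/`test`), the per-entry `image`/`label` fields, and the output filename are assumptions based on the Medical Decathlon convention, not taken from this PR:

```python
# Sketch: inspect an MSD-style datalist produced by create_msd_data.py.
# Keys ("training"/"validation"/"test", "image"/"label") and the filename
# are assumptions following the Medical Decathlon format.
import json

with open("dataset.json") as f:  # hypothetical output filename
    datalist = json.load(f)

for split in ("training", "validation", "test"):
    pairs = datalist.get(split, [])
    print(f"{split}: {len(pairs)} image-label pairs")
    if pairs:
        print("  e.g.", pairs[0]["image"], "->", pairs[0]["label"])
```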

63 changes: 63 additions & 0 deletions configs/train_all.yaml
@@ -0,0 +1,63 @@
seed: 15
save_test_preds: True

directories:
  # Path to the saved models directory
  models_dir: /home/GRAMES.POLYMTL.CA/u114716/contrast-agnostic/saved_models/followup
  # Path to the saved results directory
  results_dir: /home/GRAMES.POLYMTL.CA/u114716/contrast-agnostic/results/models_followup
  # Path to the saved wandb logs directory
  # if None, starts training from scratch. Otherwise, resumes training from the specified wandb run folder
  wandb_run_folder: None

dataset:
  # Dataset name (will be used as "group_name" for wandb logging)
  name: spine-generic
  # Path to the dataset directory containing all datalists (.json files)
  root_dir: /home/GRAMES.POLYMTL.CA/u114716/contrast-agnostic/datalists/spine-generic/seed15
  # Type of contrast to be used for training. "all" corresponds to training on all contrasts
  contrast: all # choices: ["t1w", "t2w", "t2star", "mton", "mtoff", "dwi", "all"]
  # Type of label to be used for training.
  label_type: soft_bin # choices: ["hard", "soft", "soft_bin"]

preprocessing:
  # Online resampling of images to the specified spacing.
  spacing: [1.0, 1.0, 1.0]
  # Center crop/pad images to the specified size. (NOTE: done after resampling)
  # values correspond to R-L, A-P, I-S axes of the image after 1mm isotropic resampling.
  crop_pad_size: [64, 192, 320]

opt:
  name: adam
  lr: 0.001
  max_epochs: 200
  batch_size: 2
  # Interval between validation checks in epochs
  check_val_every_n_epochs: 10
  # Early stopping patience (this is until patience * check_val_every_n_epochs)
  early_stopping_patience: 20


model:
  # Model architecture to be used for training (also to be specified as args in the command line)
  nnunet:
    # NOTE: this info is typically taken from nnUNetPlans.json (if an nnUNet model is trained)
    base_num_features: 32
    max_num_features: 320
    n_conv_per_stage_encoder: [2, 2, 2, 2, 2, 2]
    n_conv_per_stage_decoder: [2, 2, 2, 2, 2]
    pool_op_kernel_sizes: [
      [1, 1, 1],
      [2, 2, 2],
      [2, 2, 2],
      [2, 2, 2],
      [2, 2, 2],
      [1, 2, 2]
    ]
    enable_deep_supervision: True

  unetr:
    feature_size: 16
    hidden_size: 768 # dimensionality of hidden embeddings
    mlp_dim: 2048 # dimensionality of the MLPs
    num_heads: 12 # number of heads in multi-head attention
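
Purely as an illustration (not part of this diff), a config in this shape could be consumed from the training script with PyYAML; the key names below mirror the YAML above, and the early-stopping arithmetic follows the comment in the `opt` block:

```python
# Sketch: load the training config shown above (assumes PyYAML is installed).
import yaml

with open("configs/train_all.yaml") as f:
    cfg = yaml.safe_load(f)

dataset, opt = cfg["dataset"], cfg["opt"]
# Per the comment in the config, early stopping can wait up to
# patience * check_val_every_n_epochs epochs: 20 * 10 = 200 with the values above.
max_wait = opt["early_stopping_patience"] * opt["check_val_every_n_epochs"]

print(f"contrast={dataset['contrast']}, label_type={dataset['label_type']}")
print(f"lr={opt['lr']}, batch_size={opt['batch_size']}, "
      f"early stopping after up to {max_wait} epochs without improvement")
```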