Try running contrast-agnostic model on the EPI data #23
I have run the contrast-agnostic model on the EPI data and visually the results look okay. Use the following command for reference:

```
python /home/GRAMES.POLYMTL.CA/robana/duke/temp/rohan/fmri_sc_seg/monai/run_inference_single_image.py --path-img {image_path} --chkp-path /home/GRAMES.POLYMTL.CA/robana/duke/temp/muena/contrast-agnostic/final_monai_model/nnunet_nf=32_DS=1_opt=adam_lr=0.001_AdapW_CCrop_bs=2_64x192x320_20230918-2253 --path-out /home/GRAMES.POLYMTL.CA/robana/duke/temp/rohan/fmri_sc_seg/results/monai_results --device cpu
```
Please produce a QC report so the team can conveniently look at the predictions (add the GT as well). @Nilser3 can help you with that.
I am attaching the QC reports of both the ground truths and the predictions of the contrast-agnostic model on the test set of the EPI data.
Relevant issue: sct-pipeline/contrast-agnostic-softseg-spinalcord#83
Updating the issue with what has been tried until now.

**Fine-tuning**

**Objective:** The main objective of the fine-tuning was to use the contrast-agnostic model's pre-trained weights to transfer knowledge to a model trained to segment the spinal cord on EPI data, so that the newly trained model would also be agnostic to EPI data. The fine-tuned model was expected to perform well, since its weights and biases would already carry a lot of "spinal cord" context.

**Path to checkpoint used:**

This PR sct-pipeline/contrast-agnostic-softseg-spinalcord#85 adds the functionality to initialize a model with pre-trained weights.

**Results and observations:**
After investigation, the cause of this poor result and unstable training was the crop size used for training.

**Conclusion:**
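As a side note, the warm-start idea used here (reusing pre-trained weights wherever they fit the new model, and leaving the rest randomly initialized) can be sketched in plain Python. The layer names and shapes below are hypothetical; the actual implementation is the one added in sct-pipeline/contrast-agnostic-softseg-spinalcord#85, and a real PyTorch model would do this via `load_state_dict(..., strict=False)`.

```python
# Hedged sketch of selecting which pre-trained tensors to transfer when
# warm-starting fine-tuning. Layer names and shapes are hypothetical.

def shape_of(tensor):
    """Shape of a nested-list 'tensor' (stand-in for a real tensor)."""
    shape = []
    while isinstance(tensor, list):
        shape.append(len(tensor))
        tensor = tensor[0]
    return tuple(shape)

def transferable_weights(pretrained, target_shapes):
    """Keep pre-trained tensors whose name and shape match the target
    model; mismatched layers stay randomly initialized."""
    return {
        name: tensor
        for name, tensor in pretrained.items()
        if target_shapes.get(name) == shape_of(tensor)
    }

# Hypothetical checkpoint: encoder matches, segmentation head does not.
pretrained = {
    "encoder.conv1": [[0.1, 0.2], [0.3, 0.4]],  # shape (2, 2)
    "head.conv": [[0.5]],                        # shape (1, 1)
}
target_shapes = {"encoder.conv1": (2, 2), "head.conv": (2, 1)}

kept = transferable_weights(pretrained, target_shapes)
# Only the encoder weights are transferred; the head is trained from scratch.
```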
According to @naga-karthik, there is indeed padding; however, the poor training quality is likely caused by the excessive padding, because the EPI images are much smaller than the ones used for the original contrast-agnostic model.
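To make the padding point concrete, here is a small sketch of how much of a fixed training crop ends up as zero-padding when the input volume is smaller than the crop. The crop size (64 x 192 x 320) comes from the checkpoint name above; the EPI volume dimensions are illustrative, not taken from the actual dataset.

```python
# Illustrative sketch: fraction of a fixed crop that is padding when the
# input image is smaller than the crop and centered inside it.

def padding_fraction(image_shape, crop_shape):
    """Fraction of crop voxels that must be zero-padding."""
    image_voxels = 1
    crop_voxels = 1
    for img_dim, crop_dim in zip(image_shape, crop_shape):
        image_voxels *= min(img_dim, crop_dim)
        crop_voxels *= crop_dim
    return 1.0 - image_voxels / crop_voxels

# Crop size from the contrast-agnostic training config.
crop = (64, 192, 320)
# Hypothetical small EPI volume.
epi = (20, 64, 64)
frac = padding_fraction(epi, crop)  # ~0.98: about 98% of the crop is padding
```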
We have decided to continue with nnU-Net for now.
If results look OK, I suggest re-training the contrast-agnostic model so it can also work for EPI data.