Here is a review of the last training I did with an nnU-Net for spinal canal segmentation.
I went back over the data I used for the previous training and the data used for the Kaggle competition to select the relevant subjects more precisely (excluding those with motion artefacts, for instance), mixing data from spine-generic, dcm-oklahoma, dcm-brno, and sci-paris. I corrected some segmentations and removed the ones from dcm-zurich; I will review those after an anatomy course with @maxradx because there are too many doubts about them.
I trained the model for 500 epochs, on a fold I chose: splits_final.json
So there were only 8 validation cases, i.e. about 10% of the training data. I wanted the model to learn from as much data as possible while keeping just a few heterogeneous validation cases (at least one from every dataset) to monitor progress.
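For context: with nnU-Net v2, the train/val assignment of each fold comes from splits_final.json in the preprocessed dataset folder, which is a list with one entry per fold, each holding a "train" and a "val" list of case identifiers. A minimal sketch of that structure (the subject names below are placeholders, not the actual cases used here; the real file lists all training cases plus the 8 validation cases):

```json
[
  {
    "train": ["sub-example01_T2w", "sub-example02_T2w", "sub-example03_T2w"],
    "val": ["sub-example70_T2w", "sub-example71_T2w"]
  }
]
```

Training on that single fold is then e.g. nnUNetv2_train <DATASET_ID> 3d_fullres 0, where the dataset ID and configuration are assumptions here, not values taken from this issue.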
Training progress:
Then I wanted to try the model on the data I really wanted to add to this model's training: whole-spine.
Here is the QC: https://drive.google.com/file/d/1jxdl9hzLBa7hbTac3lTOzn4tRAKP7371/view?usp=drive_link
I just tested the model on the whole dataset; note that some images already look odd in their original form (a sketch of the corresponding inference/QC commands is included below for reference).
Next step would be to:
I think it would be a good time to make at least a first release?
I created a new branch to upload the preprocessing pipeline and give a more up-to-date description of what this repository is about.
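For reference, the whole-spine testing described above boils down to running nnU-Net inference and then building an SCT QC report. A minimal sketch, assuming nnU-Net v2 and the Spinal Cord Toolbox are installed; the dataset ID, fold, and paths are placeholders rather than the actual values used for this model:

```python
import subprocess
from pathlib import Path

# Placeholder paths and dataset ID -- adapt to the actual whole-spine layout.
images_dir = Path("whole-spine_nnunet/imagesTs")  # images renamed to the nnU-Net convention (*_0000.nii.gz)
pred_dir = Path("whole-spine_predictions")
qc_dir = Path("qc_whole-spine")

# 1) Run inference with the trained fold (dataset ID 101 is an assumption).
subprocess.run([
    "nnUNetv2_predict",
    "-i", str(images_dir),
    "-o", str(pred_dir),
    "-d", "101",
    "-c", "3d_fullres",
    "-f", "0",
], check=True)

# 2) Build an SCT QC report, one entry per image/segmentation pair.
for seg in sorted(pred_dir.glob("*.nii.gz")):
    img = images_dir / seg.name.replace(".nii.gz", "_0000.nii.gz")
    subprocess.run([
        "sct_qc",
        "-i", str(img),
        "-s", str(seg),
        "-p", "sct_deepseg_sc",  # segmentation overlay QC template
        "-qc", str(qc_dir),
    ], check=True)
```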
> I trained the model for 500 epochs, on a fold I chose: splits_final.json
Instead of uploading the train/val split as an attachment to this issue, it would be better to create a folder within the repo (named, for example, datasplits) to track splits across model versions with GitHub. See the same logic in the contrast-agnostic project here.
Additionally, your splits_final.json currently contains subjects from all datasets in a single file, which makes it hard to track which subject comes from which dataset; this will only get harder as more datasets are added. You could either split it into multiple files, one per dataset, or use YAML (instead of JSON), which supports comments, so the datasets can be tracked with comments.
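For illustration, a YAML split with per-dataset comments could look like the sketch below (the file name, keys, and subject IDs are made up, just to show the comment-based grouping):

```yaml
# datasplits/splits_model_v1.yml (hypothetical layout)
train:
  # spine-generic
  - sub-example01_T2w
  - sub-example02_T2w
  # dcm-oklahoma
  - sub-example10_T2w
val:
  # sci-paris
  - sub-example20_T2w
```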
I'm also unsure whether you created the splits_final.json manually or using a script. Using a script would definitely be better for reproducibility purposes; see discussion here. It would also be great to track exact dataset versions; see function here.
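As an illustration of the script-based approach, something along these lines could both build the split and record the git commit of each source dataset; the function names, paths, and case lists below are hypothetical, not the actual contrast-agnostic code:

```python
import json
import random
import subprocess
from pathlib import Path


def dataset_version(dataset_path):
    """Return the current git commit of a dataset clone (also works for git-annex/datalad datasets)."""
    return subprocess.run(
        ["git", "-C", str(dataset_path), "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()


def build_split(datasets, val_fraction=0.1, seed=42):
    """datasets maps dataset name -> (path, list of case identifiers)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    train, val, versions = [], [], {}
    for name, (path, cases) in datasets.items():
        cases = sorted(cases)
        rng.shuffle(cases)
        n_val = max(1, round(len(cases) * val_fraction))  # at least one val case per dataset
        val += cases[:n_val]
        train += cases[n_val:]
        versions[name] = dataset_version(path)
    return {"train": sorted(train), "val": sorted(val)}, versions


if __name__ == "__main__":
    # Placeholder inputs -- replace with the real dataset paths and case lists.
    datasets = {
        "spine-generic": (Path("data/spine-generic"), ["sub-example01_T2w", "sub-example02_T2w"]),
        "sci-paris": (Path("data/sci-paris"), ["sub-example20_T2w", "sub-example21_T2w"]),
    }
    split, versions = build_split(datasets)
    Path("datasplits").mkdir(exist_ok=True)
    # nnU-Net expects a list with one entry per fold.
    Path("datasplits/splits_final.json").write_text(json.dumps([split], indent=2))
    Path("datasplits/dataset_versions.json").write_text(json.dumps(versions, indent=2))
```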
Great progress! I checked the QC for whole-spine and in general it looks good! There will be a few manual corrections, but it will be a great addition to the training set!