The purpose of this repository is to evaluate the performance of `sct_label_vertebrae` as new methods are implemented.
- Install `spinalcordtoolbox` from git:

  ```
  git clone https://github.com/spinalcordtoolbox/spinalcordtoolbox
  cd spinalcordtoolbox
  ./install_sct
  ```
- Check out the branch corresponding to the SCT disc labeling PR:

  ```
  git checkout lr/Deep_vertebral_labeling
  ```

  Important: Make sure you are checked out to this branch each time you run `sct_run_batch`. This is only necessary during the development of the SCT/ivadomed integration. Once `spinalcordtoolbox` PR#2679 has been merged, this step will change.
- Manually reinstall `ivadomed` in the SCT conda environment, specifically using the "HourglassNet" branch:

  ```
  cd $SCT_DIR
  source python/etc/profile.d/conda.sh
  conda activate venv_sct
  pip uninstall ivadomed -y
  pip install git+https://github.com/ivadomed/ivadomed.git@jn/539-intervertebral-disc-labeling-pose-estimation
  ```

  This is only necessary during the development of the HourglassNet model. Once `ivadomed` PR#852 has been merged, this step will change.
- Download a dataset of your choice:
  - spine-generic multi-subject dataset r20201001
  - For internal datasets (such as `sct-testing-large`), see `neuropoly/data-management/internal-server.md`
- Clone this repo and open it in your terminal:

  ```
  git clone https://github.com/sct-pipeline/vertebral-labeling-validation
  cd vertebral-labeling-validation
  ```
- First, edit `testing_list.txt` to include the list of subjects you want to process. By default, `testing_list.txt` contains a subset of viable subjects from the `sct-testing-large` dataset. (The full list of viable subjects is included in `viable-testing-subjects.txt`; however, that list includes over 500 subjects, so `testing_list.txt` is a smaller alternative.) If you're not using `sct-testing-large`, you will want to specify a different list of subjects.
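  As a rough illustration, the file is a plain-text list of subject identifiers (assuming one per line, as expected by the commands below); the IDs here are placeholders, so use the subject names from your own dataset:

  ```
  sub-subject01
  sub-subject02
  sub-subject03
  ```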
- (Optional) If you are using a `git-annex` dataset, make sure the files for these subjects are actually downloaded. For example:

  ```
  git clone git@data.neuro.polymtl.ca:datasets/sct-testing-large
  cd sct-testing-large
  # copy and paste these 3 lines and run as a single command
  xargs -a testing_list.txt -I '{}' \
      find . -type d -name "*{}*" -print |
      xargs -L 1 git annex get
  ```

  This pipes the subjects from `testing_list.txt` into `git annex get` to fetch all of the files that match each subject.
- Next, run the `retrieve_large.py` script as follows:

  ```
  python3 retrieve_large.py -l testing_list.txt -i {PATH_TO_DATASET} -o {PATH_TO_STORE_RAW_TESTING_FILES}
  ```

  This script simply copies the T1w/T2w anat and label files from the original dataset folder (`-i`) to a new folder (`-o`). We do this to avoid working in the original dataset folder:
  - It guarantees that the original folder always keeps a "fresh" copy of the raw/unprocessed files if we ever want to start over.
  - It also means we can apply `sct_run_batch` to the entire folder without having to explicitly specify a subset of subjects using `-include_list`.
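  As a rough sketch only (the actual `retrieve_large.py` may select and name files differently), the copy step amounts to something like:

  ```python
  import shutil
  from pathlib import Path

  # Hypothetical sketch: copy each listed subject's T1w/T2w files (images and
  # labels) from the source dataset into a fresh working folder, preserving
  # the relative folder structure. Paths below are placeholders.
  src = Path("/path/to/sct-testing-large")   # corresponds to -i
  dst = Path("/path/to/raw_testing_files")   # corresponds to -o
  subjects = Path("testing_list.txt").read_text().split()

  for sub in subjects:
      for f in src.rglob(f"*{sub}*"):
          if f.is_file() and ("T1w" in f.name or "T2w" in f.name):
              out = dst / f.relative_to(src)
              out.parent.mkdir(parents=True, exist_ok=True)
              shutil.copy2(f, out)
  ```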
- After that, edit the config file `prepare_seg_and_gt.yml` to match the filepaths on your computer.
  - NB: You may also want to update `jobs` if you have a more capable workstation (i.e. you are working on a lab server).
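  As a rough guide (the exact keys and values in this repo's config may differ), an `sct_run_batch` YAML config maps the tool's command-line options to values, along these lines:

  ```yaml
  # Hypothetical sketch of an sct_run_batch config -- adjust the paths and keys
  # to match the prepare_seg_and_gt.yml shipped with this repo.
  path_data: /path/to/raw_testing_files        # folder produced by retrieve_large.py
  path_output: /path/to/prepare_seg_and_gt_results
  script: prepare_seg_and_gt.sh
  jobs: 4                                      # increase on a more capable workstation
  ```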
- Next, run the preprocessing script:

  ```
  sct_run_batch -c prepare_seg_and_gt.yml
  ```

  The corresponding bash script (`prepare_seg_and_gt.sh`) projects the manual disc labels (the ground truth: a single voxel located at the posterior side of each intervertebral disc) onto the center of the spinal cord. We do this because `sct_label_vertebrae` outputs labels inside the cord (not at the posterior tip of each disc), so the ground truth needs to be moved there before it can be compared with the predictions.
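  Conceptually (this is an illustration with made-up filenames, not the actual commands run by `prepare_seg_and_gt.sh`), the projection moves each labeled voxel to the center of the cord segmentation in the same axial slice:

  ```python
  import nibabel as nib
  import numpy as np
  from scipy.ndimage import center_of_mass

  # Hypothetical filenames; assumes the third image axis is superior-inferior.
  disc_img = nib.load("sub-XX_T2w_labels-disc-manual.nii.gz")
  seg_img = nib.load("sub-XX_T2w_seg.nii.gz")
  disc, seg = disc_img.get_fdata(), seg_img.get_fdata()

  projected = np.zeros_like(disc)
  for x, y, z in zip(*np.nonzero(disc)):
      value = disc[x, y, z]                     # disc label value (e.g. 3 = C2-C3 disc)
      if seg[..., z].any():                     # cord segmentation present on this slice
          cx, cy = center_of_mass(seg[..., z])  # center of the cord in that slice
          projected[int(round(cx)), int(round(cy)), z] = value

  nib.save(nib.Nifti1Image(projected, disc_img.affine, disc_img.header),
           "sub-XX_T2w_labels-disc-projected.nii.gz")
  ```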
- First, edit the config file `run_prediction.yml` to match the filepaths on your computer.
  - NB: You may also want to update `jobs` if you have a more capable workstation (i.e. you are working on a lab server).
- Next, run the processing script:

  ```
  sct_run_batch -c run_prediction.yml
  ```

  The corresponding bash script (`run_prediction.sh`) calls `sct_label_vertebrae` for each method. It then invokes the `analyze_predictions.py` script to compare the predictions against the ground truth. Finally, it outputs metrics into a `results.csv` file, which you can then use to gauge the performance of all three methods.
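  Once the run completes, you can inspect `results.csv` with pandas. The column names below (`method`, `distance_error_mm`) are placeholders; check the header of your `results.csv` and adjust accordingly:

  ```python
  import pandas as pd

  # Summarize a hypothetical error metric per labeling method.
  df = pd.read_csv("results.csv")
  summary = df.groupby("method")["distance_error_mm"].agg(["mean", "std", "count"])
  print(summary)
  ```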