# vertebral-labeling-validation

The purpose of this repository is to evaluate the performance of `sct_label_vertebrae` as new labeling methods are implemented.

## 0. Prerequisites

1. Install `spinalcordtoolbox` from git:

   ```bash
   git clone https://github.com/spinalcordtoolbox/spinalcordtoolbox
   cd spinalcordtoolbox
   ./install_sct
   ```

2. Check out the branch corresponding to the SCT disc labeling PR:

   ```bash
   git checkout lr/Deep_vertebral_labeling
   ```

   **Important:** Make sure you are checked out on this branch each time you run `sct_run_batch`. This is only necessary during the development of the SCT/ivadomed integration. Once spinalcordtoolbox@PR#2679 has been merged, this step will change.
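
   To confirm you are on the right branch before a run, a quick check (assumes you are inside the `spinalcordtoolbox` clone):

   ```bash
   # Should print: lr/Deep_vertebral_labeling
   git branch --show-current
   ```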

3. Manually reinstall `ivadomed` in the SCT conda environment, specifically using the "HourglassNet" branch:

   ```bash
   cd $SCT_DIR
   source python/etc/profile.d/conda.sh
   conda activate venv_sct
   pip uninstall ivadomed -y
   pip install git+https://github.com/ivadomed/ivadomed.git@jn/539-intervertebral-disc-labeling-pose-estimation
   ```

   This will only be necessary during the development of the HourglassNet model. Once ivadomed@PR#852 has been merged, this step will change.
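
   To verify that the branch install succeeded, a quick sanity check inside the SCT environment:

   ```bash
   # Should import without error and show where ivadomed was installed
   python -c "import ivadomed; print(ivadomed.__file__)"
   ```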

4. Download a dataset of your choice:
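
   For example, the Preprocessing steps below use the sct-testing-large dataset, which can be cloned with:

   ```bash
   git clone git@data.neuro.polymtl.ca:datasets/sct-testing-large
   ```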

5. Clone this repo and open it in your terminal:

   ```bash
   git clone https://github.com/sct-pipeline/vertebral-labeling-validation
   cd vertebral-labeling-validation
   ```

## 1. Preprocessing

1. First, edit `testing_list.txt` to include the list of subjects you want to process, one subject ID per line (see the example below). By default, `testing_list.txt` contains a subset of viable subjects from the sct-testing-large dataset. (The full list of viable subjects is included in `viable-testing-subjects.txt`; however, that list includes over 500 subjects, so `testing_list.txt` is a smaller alternative.) If you're not using sct-testing-large, you will want to specify a different list of subjects.
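
   A minimal sketch of the expected format (these subject IDs are hypothetical; use IDs from your own dataset):

   ```bash
   cat testing_list.txt
   # sub-example01
   # sub-example02
   # sub-example03
   ```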

2. (Optional) If you are using a git-annex dataset, make sure the files for these subjects are actually downloaded. For example:

   ```bash
   git clone git@data.neuro.polymtl.ca:datasets/sct-testing-large
   cd sct-testing-large

   # Copy and paste these 3 lines and run them as a single command
   # (adjust the path to testing_list.txt if it is not in this directory)
   xargs -a testing_list.txt -I '{}' \
   find . -type d -name "*{}*" -print |
   xargs -L 1 git annex get
   ```

   This pipes the subjects from `testing_list.txt` into `git annex get` to fetch all of the files that match each subject. An equivalent loop form is sketched below.
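
   An equivalent, more explicit loop form of the same command (a sketch; assumes one subject ID per line):

   ```bash
   # For each subject, find its directory and fetch the annexed files
   while read -r subject; do
     find . -type d -name "*${subject}*" -print0 | xargs -0 -n 1 git annex get
   done < testing_list.txt
   ```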

3. Next, run the `retrieve_large.py` script as follows:

   ```bash
   python3 retrieve_large.py -l testing_list.txt -i {PATH_TO_DATASET} -o {PATH_TO_STORE_RAW_TESTING_FILES}
   ```

   This script simply copies the T1w/T2w anat and label files from the original dataset folder (`-i`) to a new folder (`-o`). We do this to avoid working in the original dataset folder:

   - It guarantees that the original folder will always have a "fresh" copy of the raw/unprocessed files if we ever want to start over.
   - It also means we can apply `sct_run_batch` to the entire folder without having to explicitly specify a subset of subjects using `-include_list`.

4. After that, edit the config file `prepare_seg_and_gt.yml` to match the filepaths on your computer (see the sketch below).

   - NB: You may also want to update `jobs` if you have a more capable workstation (i.e. you are working on a lab server).
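
   A minimal sketch of what the edited config might look like, assuming the standard `sct_run_batch` option names (the paths are placeholders, and any other keys already present in the file should be kept):

   ```bash
   # Hypothetical example values -- adjust the paths to your machine,
   # or simply edit prepare_seg_and_gt.yml in a text editor.
   cat <<'EOF' > prepare_seg_and_gt.yml
   path_data: /path/to/raw_testing_files       # the -o folder from retrieve_large.py
   path_output: /path/to/preprocessing_output
   script: prepare_seg_and_gt.sh
   jobs: 4                                     # raise this on a more capable workstation
   EOF
   ```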

5. Next, run the preprocessing script:

   ```bash
   sct_run_batch -c prepare_seg_and_gt.yml
   ```

   The corresponding bash script (`prepare_seg_and_gt.sh`) projects the manual disc labels (ground truth: a single voxel located at the posterior side of each intervertebral disc) onto the center of the spinal cord. We do this because `sct_label_vertebrae` outputs labels inside the cord (not at the posterior tip of the disc).

## 2. Testing

1. First, edit the config file `run_prediction.yml` to match the filepaths on your computer.

   - NB: You may also want to update `jobs` if you have a more capable workstation (i.e. you are working on a lab server).

2. Next, run the processing script:

   ```bash
   sct_run_batch -c run_prediction.yml
   ```

   The corresponding bash script (`run_prediction.sh`) calls `sct_label_vertebrae` for each method. It then invokes the `analyze_predictions.py` script to compare the predictions against the ground truth. Finally, it outputs the metrics into a `results.csv` file, which you can use to gauge the performance of all three methods. A quick way to inspect the results is shown below.
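
   For a quick, readable view of the metrics in your terminal (standard shell tools; the exact columns depend on `analyze_predictions.py`):

   ```bash
   # Pretty-print the CSV as aligned columns
   column -s, -t < results.csv | less -S
   ```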