
Multiple Instance Learning with Mixed Supervision in Gleason Grading

Multiple Instance Learning with Mixed Supervision in Gleason Grading, MICCAI 2022. [arXiv]
@article{bian2022multiple,
  title={Multiple Instance Learning with Mixed Supervision in Gleason Grading},
  author={Bian, Hao and Shao, Zhuchen and Chen, Yang and Wang, Yifeng and Wang, Haoqian and Zhang, Jian and Zhang, Yongbing},
  journal={arXiv preprint arXiv:2206.12798},
  year={2022}
}

Abstract: With the development of computational pathology, deep learning methods for Gleason grading through whole slide images (WSIs) have excellent prospects. Since WSIs are extremely large, a slide usually carries only a slide-level label or limited pixel-level labels. The current mainstream approach adopts multi-instance learning to predict Gleason grades. However, methods that consider only the slide-level label ignore the limited pixel-level labels, which contain rich local information, while methods that additionally use pixel-level labels ignore their inaccuracy. To address these problems, we propose a mixed supervision Transformer based on the multiple instance learning framework. The model utilizes both slide-level and instance-level labels to achieve more accurate slide-level Gleason grading. The impact of inaccurate instance-level labels is further reduced by introducing an efficient random masking strategy into the mixed supervision training process. We achieve state-of-the-art performance on the SICAPv2 dataset, and visual analysis shows accurate instance-level prediction results.

[Overview figure]
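To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of mixed supervision with random instance masking (the function and argument names are illustrative, not this repository's API): the slide-level loss is always applied, while the instance-level loss is computed only on a randomly kept subset of instances, so inaccurate instance labels contribute less to training.

# Hypothetical sketch of mixed supervision with random instance masking;
# names (mixed_supervision_loss, mask_ratio) are illustrative.
import torch
import torch.nn.functional as F

def mixed_supervision_loss(slide_logits, slide_label,
                           inst_logits, inst_labels,
                           mask_ratio=0.5, inst_weight=1.0):
    """slide_logits: (C,); slide_label: scalar; inst_logits: (N, C); inst_labels: (N,)."""
    # Slide-level supervision is always applied.
    slide_loss = F.cross_entropy(slide_logits.unsqueeze(0), slide_label.unsqueeze(0))
    # Randomly mask out ~mask_ratio of the instance labels so that
    # inaccurate instance-level labels are only partially trusted.
    keep = torch.rand(inst_logits.size(0)) > mask_ratio
    if keep.any():
        inst_loss = F.cross_entropy(inst_logits[keep], inst_labels[keep])
    else:
        inst_loss = inst_logits.new_zeros(())
    return slide_loss + inst_weight * inst_loss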

Data Preprocessing

SICAPv2 is a database of prostate histology whole slide images with both annotations of global Gleason scores and patch-level Gleason grades. We follow the data processing pipeline of SegGini (MICCAI 2021). We provide the processed data (containing the extracted instance features, slide-level labels, and generated instance-level labels). Download the processed_data and put it into data/SICAPv2. The directory structure is as follows (a loading sketch follows the tree):

data
└── SICAPv2
    ├── 16B0001851.bin
    ├── 16B0003388.bin
    :
    ├── 18B0006623J.bin
    └── 18B001071J.bin
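The exact serialization of the .bin files is determined by the preprocessing pipeline; assuming they were written with torch.save (an assumption, and the field names below are hypothetical), one slide can be inspected like this:

# Inspect one processed slide. Assumes torch.save serialization; the
# field names are hypothetical and should be checked against the files.
import torch

sample = torch.load("data/SICAPv2/16B0001851.bin", map_location="cpu")
print(type(sample))
if isinstance(sample, dict):
    # Expect something like instance features plus slide-level and
    # generated instance-level labels.
    for key, value in sample.items():
        print(key, getattr(value, "shape", value))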

Installation

  • Linux (Tested on Ubuntu 18.04)
  • NVIDIA GPU (Tested on a single NVIDIA GeForce RTX 3090)
  • Python (3.7.11), h5py (2.10.0), opencv-python (4.1.2.30), PyTorch (1.10.1), torchvision (0.11.2), pytorch-lightning (1.5.10), timm (0.5.4), histocartography (0.2.1), protobuf (3.19.1), dgl (0.4.3.post2).

Please refer to the following instructions.

# create and activate the conda environment
conda create -n mixed_supervision python=3.7 -y
conda activate mixed_supervision

# install pytorch (pinned to the versions listed above)
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

# install the remaining dependencies
pip install -r requirements.txt
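For reference, a requirements.txt consistent with the versions listed above would look roughly like this (PyTorch and torchvision are installed separately in the previous step):

h5py==2.10.0
opencv-python==4.1.2.30
pytorch-lightning==1.5.10
timm==0.5.4
histocartography==0.2.1
protobuf==3.19.1
dgl==0.4.3.post2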

Train & test

Basic, Fully Automated Run

CONFIG_FILE=configs/SICAPv2.yaml
GPU=0
bash run.sh $CONFIG_FILE $GPU

Two-Step Run

  • Train (e.g., train the fold 0 model):
CONFIG_FILE=configs/SICAPv2.yaml
GPU=0
FOLD=0
python main.py --config $CONFIG_FILE --stage train --gpus $GPU --fold $FOLD
  • Test (e.g., test the fold 0 model):
CONFIG_FILE=configs/SICAPv2.yaml
GPU=0
FOLD=0
python main.py --config $CONFIG_FILE --stage test --gpus $GPU --fold $FOLD
  • Statistical results of 4-fold cross-validation:
CONFIG_FILE=configs/SICAPv2.yaml
python metrics.py --config $CONFIG_FILE
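metrics.py aggregates the per-fold test results into cross-validation statistics. A minimal sketch of this kind of aggregation is shown below; the file layout and metric name are assumptions, so check them against the actual outputs of the test stage:

# Hypothetical sketch of 4-fold aggregation; the real metrics.py reads
# the config to locate per-fold outputs, and paths/keys may differ.
import json
import statistics

scores = []
for fold in range(4):
    with open(f"logs/SICAPv2/fold{fold}/test_metrics.json") as f:
        scores.append(json.load(f)["kappa"])  # e.g. quadratic-weighted kappa

print(f"kappa: {statistics.mean(scores):.4f} +/- {statistics.stdev(scores):.4f} over {len(scores)} folds")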

Acknowledgements, License & Usage

This implementation builds on TransMIL:

@article{shao2021transmil,
  title={Transmil: Transformer based correlated multiple instance learning for whole slide image classification},
  author={Shao, Zhuchen and Bian, Hao and Chen, Yang and Wang, Yifeng and Zhang, Jian and Ji, Xiangyang and others},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={2136--2147},
  year={2021}
}


This code is made available under the GPLv3 License for non-commercial academic purposes.