FocusLiteNN

This is the official PyTorch and MATLAB implementation of our MICCAI 2020 paper "FocusLiteNN: High Efficiency Focus Quality Assessment for Digital Pathology".

[Update Feb. 1, 2023] An online demo can be found at CODIDO. You can easily try our model with a few mouse clicks. These demo models are trained using the MSE loss.

[Update Jan. 31, 2023] The original loss used for training all models in the paper is PLCC, so those models do not produce scores on an absolute scale. This update adds three FocusLiteNN models (1-kernel, 2-kernel, 10-kernel) trained with the MSE loss on the entire FocusPath dataset, which do produce absolute-scale scores. This is beneficial for single-image testing and heatmap visualization. The pre-trained models are located at pretrained_model/focuslitenn-{1,2,10}kernel-mse.pt.
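
For reference, the difference between the two training objectives can be sketched as follows. This is a minimal illustration of the loss functions only, not the repository's training code: a PLCC-based loss is invariant to shifting and rescaling of the predictions (hence no absolute scale), while MSE ties predictions directly to the label scale.

import torch

def plcc_loss(pred, target, eps=1e-8):
    # 1 - Pearson linear correlation coefficient between predictions and labels.
    # Invariant to shifting/scaling of `pred`, so a model trained with it is
    # only defined up to an affine transform of the score scale.
    p = pred - pred.mean()
    t = target - target.mean()
    plcc = (p * t).sum() / (p.norm() * t.norm() + eps)
    return 1.0 - plcc

def mse_loss(pred, target):
    # Plain mean squared error: penalizes deviation from the absolute label values.
    return torch.mean((pred - target) ** 2)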

1. Brief Introduction

1.1 Background

  • Out-of-focus microscope lenses in digital pathology are a critical bottleneck in high-throughput Whole Slide Image scanning platforms, for which Focus Quality Assessment methods are highly desirable to help significantly accelerate clinical workflows.
  • While data-driven approaches such as Convolutional Neural Network (CNN) based methods have shown great promise, they are difficult to use in practice due to their high computational complexity.

1.2 Contributions

  • We propose FocusLiteNN, a highly efficient CNN-based model for Focus Quality Assessment that has only 148 parameters (see the parameter-count sketch after this list). It maintains impressive performance while being 100x faster than ResNet50.
  • We introduce TCGA@Focus, a comprehensive annotated dataset containing 14371 pathological images with in and out-of-focus labels.
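
The 148-parameter count mentioned above is consistent with a single 7x7 convolution kernel over the 3 color channels plus one bias term (3 x 7 x 7 + 1 = 148). Below is a minimal sketch of that arithmetic with a plain PyTorch layer; the kernel size and stride here are assumptions for illustration, not the repository's model definition.

import torch.nn as nn

# Stand-in for the convolution stage of the 1-kernel model: one 7x7 kernel
# over 3 input channels plus a bias (kernel size/stride are assumptions).
conv = nn.Conv2d(in_channels=3, out_channels=1, kernel_size=7, stride=5, bias=True)

print(sum(p.numel() for p in conv.parameters()))  # 3*7*7 + 1 = 148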

1.3 Results

  • Evaluation results on the proposed TCGA@Focus dataset (see the results figure in the repository).

  • Our proposed FocusLiteNN (1-kernel) model is highly efficient (see the timing comparison figure in the repository).

1.4 Citation

Please cite our paper if you find our model or the TCGA@Focus dataset useful.

@InProceedings{wang2020focuslitenn,
    title={FocusLiteNN: High Efficiency Focus Quality Assessment for Digital Pathology},
    author={Wang, Zhongling and Hosseini, Mahdi and Miles, Adyn and Plataniotis, Konstantinos and Wang, Zhou},
    booktitle={Medical Image Computing and Computer Assisted Intervention -- MICCAI 2020},
    year={2020},
    publisher="Springer International Publishing"
}

2. Dataset

2.1 TCGA@Focus [testing]

  • Download: The dataset is available on Zenodo under a Creative Commons Attribution license: DOI.
  • Content: Contains 14371 pathological image patches of size 1024x1024 with in and out-of-focus labels.
  • Testing: This is the testing dataset proposed and used in the paper. The full list of 14371 testing images can be found in data/TCGA@Focus.txt.

2.2 FocusPath Full [training]

  • Download: The dataset is available on Zenodo under a Creative Commons Attribution license: DOI.
  • Content: Contains 8640 pathological image patches of size 1024x1024 at 14 different z-levels (depths), i.e. 14 microscopic blur levels.
  • Training: This is the training dataset used in the paper. The 5200 training images of one of the ten folds are listed in data/FocusPath_full_split1.txt; a quick sanity check of the split file is sketched below.
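
A rough way to sanity-check a downloaded split file with pandas. The delimiter and column layout of these files are assumptions here, so inspect the header before relying on them.

import pandas as pd

# Sketch only: the exact delimiter/columns of the split files are assumptions.
split = pd.read_csv("data/FocusPath_full_split1.txt")
print(split.shape)   # one training fold should list 5200 images
print(split.head())  # check the actual column names (image name, label, ...)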

3. Prerequisites

3.1 Environment

The code has been tested on Ubuntu 18.04 with Python 3.8 and CUDA 10.2.

3.2 Packages

pytorch=1.4, torchvision=0.5, scipy, pandas, pillow (or pillow-simd)
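
A quick check (not part of the repository) that the installed versions match the tested configuration:

import torch, torchvision, scipy, pandas, PIL

# Tested configuration from this README: pytorch 1.4, torchvision 0.5, CUDA 10.2
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("scipy:", scipy.__version__)
print("pandas:", pandas.__version__)
print("pillow:", PIL.__version__)
print("CUDA available:", torch.cuda.is_available())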

3.3 Pretrained Models

  • Pretrained models can be found in the folder pretrained_model/.
  • Pretrained models for ResNet10, ResNet50 and ResNet101 are available at the Download Link. The downloaded models should be placed under pretrained_model/.

4. Running the code

  • Available architectures:
    • FocusLiteNN (1kernel, --arch FocusLiteNN --num_channel 1)
    • FocusLiteNN (2kernel, --arch FocusLiteNN --num_channel 2)
    • FocusLiteNN (10kernel, --arch FocusLiteNN --num_channel 10)
    • EONSS (--arch eonss)
    • DenseNet13 (--arch densenet13)
    • ResNet10 (--arch resnet10)
    • ResNet50 (--arch resnet50)
    • ResNet101 (--arch resnet101)
  • You may need to adjust --batch_size and --num_workers according to your machine configuration.
  • This section only shows basic usage; please refer to the code for more options.

4.1 Python Demo for testing a single image (heatmap available)

python demo.py --arch FocusLiteNN --num_channel 1 --img imgs/TCGA@Focus_patch_i_9651_j_81514.png

  • The score should be -1.548026 for imgs/TCGA@Focus_patch_i_9651_j_81514.png

  • To enable the (normalized) heatmap, add --heatmap True to the command. A rough sketch of the underlying single-image inference is given below.
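
For a rough idea of what demo.py does internally, the sketch below loads a checkpoint and scores one image. The model import path, constructor argument, checkpoint layout, and preprocessing are all assumptions here; demo.py is the authoritative reference.

import torch
from PIL import Image
import torchvision.transforms.functional as TF

from model import FocusLiteNN  # hypothetical import path; see the repository code

net = FocusLiteNN(num_channel=1)  # constructor argument assumed from the CLI flag
ckpt = torch.load("pretrained_model/focuslitenn-1kernel.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
net.load_state_dict(state)
net.eval()

img = Image.open("imgs/TCGA@Focus_patch_i_9651_j_81514.png").convert("RGB")
x = TF.to_tensor(img).unsqueeze(0)  # preprocessing assumed; demo.py may differ
with torch.no_grad():
    print(net(x).item())  # demo.py reports -1.548026 for this image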

4.2 MATLAB Demo for testing a single image (unoptimized implementation)

run matlab/FocusLiteNN-1kernel.m

4.3 Training FocusLiteNN on FocusPath Full

  1. Download and extract the FocusPath Full dataset under data/
  2. Basic usage: python train_model.py --use_cuda True --arch FocusLiteNN --num_channel 1 --trainset "data/FocusPath Full/FocusPath_full" --train_csv data/FocusPath_full_split1.txt

4.4 Testing FocusLiteNN on TCGA@Focus

  1. Download and extract the TCGA@Focus dataset under data/
  2. Basic usage: python test_model.py --use_cuda True --arch FocusLiteNN --num_channel 1 --ckpt_path pretrained_model/focuslitenn-1kernel.pt --testset "data/TCGA@Focus/Image Patches Database" --test_csv data/TCGA@Focus.txt

5. Code for compared models

For the other models compared in the paper, the code can be found at:

  1. FQPath: https://github.com/mahdihosseini/FQPath
  2. HVS-MaxPol: https://github.com/mahdihosseini/HVS-MaxPol
  3. Synthetic-MaxPol: https://github.com/mahdihosseini/Synthetic-MaxPol
  4. LPC-SI: https://ece.uwaterloo.ca/~z70wang/research/lpcsi/
  5. GPC: http://helios.mi.parisdescartes.fr/~moisan/sharpness/
  6. MLV: https://www.mathworks.com/matlabcentral/fileexchange/49991-maximum-local-variation-mlv-code-for-sharpness-assessment-of-images
  7. SPARISH: https://www.mathworks.com/matlabcentral/fileexchange/55106-sparish

6. License

FocusLiteNN is released under The Prosperity Public License 3.0.0.