Official PyTorch implementation of the paper **💎 DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET**, accepted at WACV 2025.
- Create environment:

  ```bash
  conda env create -n diamond --file requirements.yaml
  ```

- Activate environment:

  ```bash
  conda activate diamond
  ```
We used data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Japanese Alzheimer's Disease Neuroimaging Initiative (J-ADNI). Since we are not allowed to share our data, you will need to process the data yourself. Data for training, validation, and testing should be stored in separate HDF5 files, using the following hierarchical format (a minimal writer sketch follows the list):
- First level: A unique identifier, e.g. the image ID.
- The second level always has the following entries:
  - A group named `MRI/T1`, containing the T1-weighted 3D MRI data.
  - A group named `PET/FDG`, containing the 3D FDG PET data.
  - A string attribute `DX` containing the diagnosis label: `CN`, `Dementia/AD`, `FTD`, or `MCI`, if available.
  - A scalar attribute `RID` with the patient ID, if available.
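To illustrate this layout, here is a minimal writer sketch assuming `h5py`. The file name, subject ID, array shape, dtype, and attribute values are illustrative assumptions, not values prescribed by DiaMond:

```python
# Minimal sketch of writing one subject in the expected HDF5 layout.
# Assumes h5py; the shapes, dtypes, and IDs below are placeholders.
import h5py
import numpy as np

volume = np.zeros((128, 128, 128), dtype=np.float32)  # placeholder 3D scan

with h5py.File("train.h5", "w") as f:
    subj = f.create_group("I123456")             # first level: unique identifier, e.g. image ID
    subj.create_dataset("MRI/T1", data=volume)   # group with the T1-weighted 3D MRI data
    subj.create_dataset("PET/FDG", data=volume)  # group with the 3D FDG PET data
    subj.attrs["DX"] = "CN"                      # diagnosis label: CN, Dementia/AD, FTD, or MCI
    subj.attrs["RID"] = 1234                     # scalar patient ID
```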
The package uses PyTorch. To train and test DiaMond, execute the `src/train.py` script. The configuration file of the command arguments is stored in `config/config.yaml`.
The essential command line arguments are:

- `--dataset_path`: Path to the HDF5 files containing either the train, validation, or test data split.
- `--img_size`: Size of the input scan.
- `--test`: Set to `True` for model evaluation (see the example invocation below).
After specifying the config file, simply start training/evaluation with:

```bash
python src/train.py
```
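As a concrete, hypothetical example, an evaluation run on a held-out test split might look like the following; the file path, image size, and boolean flag syntax are assumptions, so check `config/config.yaml` for the actual defaults:

```bash
# Hypothetical invocation; adjust the path and values to your setup.
python src/train.py --dataset_path data/test.h5 --img_size 128 --test True
```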
For any questions, please contact: Yitong Li (yi_tong.li@tum.de)
If you find this repository useful, please consider giving a star 🌟 and citing the paper:
```bibtex
@inproceedings{li2024diamond,
    title={DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET},
    author={Li, Yitong and Ghahremani, Morteza and Wally, Youssef and Wachinger, Christian},
    eprint={2410.23219},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    year={2024},
    url={https://arxiv.org/abs/2410.23219},
}
```
WACV 2025 proceedings: coming soon.