Official implementation of the paper "Improving Cross-domain Few-shot Classification with Multilayer Perceptron".
Authors: Shuanghao Bai, Wanqi Zhou, Zhirong Luan, Donglin Wang, Badong Chen.
This codebase is tested on Ubuntu 18.04 LTS with Python 3.7. Follow the steps below to create the environment and install the dependencies.
- Set up the conda environment.
# Create a conda environment
conda create -y -n cdfsc_mlp python=3.7
# Activate the environment
conda activate cdfsc_mlp
# Install torch (requires version >= 1.5.0) and torchvision
# Please refer to https://pytorch.org/get-started/previous-versions/ if your cuda version is different
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
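After installation, you can optionally verify that PyTorch was installed correctly and can see your GPU. A minimal check, run from the activated environment:
import torch

# Print the installed PyTorch version and whether CUDA is usable
print(torch.__version__)
print(torch.cuda.is_available())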
- Clone the CDFSC-MLP repository and install the requirements.
# Clone the CDFSC-MLP codebase
git clone https://github.com/BaiShuanghao/CDFSC-MLP.git
cd CDFSC-MLP
# Install requirements
pip install -r requirements.txt
Please follow the instructions below to prepare all datasets. Dataset list:
- miniImageNet
- ChestX-ray
- CropDiseases
- DeepWeeds
- DTD
- EuroSAT
- Flower102
- ISIC
- Kaokore
- Omniglot
- Resisc45
- Sketch
- SVHN
For each dataset, put all images into a single folder named images and split the dataset into train.csv, val.csv, and test.csv, so that the directories are organized as follows (a minimal layout check is sketched after the tree).
ChestX/
├── images
├── train.csv
├── val.csv
└── test.csv
EuroSAT/
├── images
├── train.csv
├── val.csv
└── test.csv
...
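As a sanity check, the sketch below (a hypothetical helper, not part of this repository) walks a data root and reports any dataset folder that is missing the images directory or one of the three CSV split files:
import os

# Hypothetical helper: verify that each dataset folder under `data_root`
# contains an `images` directory plus train.csv, val.csv, and test.csv.
def check_layout(data_root):
    expected = ["images", "train.csv", "val.csv", "test.csv"]
    for dataset in sorted(os.listdir(data_root)):
        folder = os.path.join(data_root, dataset)
        if not os.path.isdir(folder):
            continue
        missing = [name for name in expected
                   if not os.path.exists(os.path.join(folder, name))]
        if missing:
            print(f"{dataset}: missing {missing}")
        else:
            print(f"{dataset}: OK")

check_layout("path/to/your/datasets")  # replace with your own data root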
We supply our splits in process_data/data_splits. You can also use the code in the process_data folder to process the 12 datasets and generate your own splits.
Note that for fair comparisons with SOTA methods, we use the test_compare_with_sota split files, which are generated by the code of the ATA method. If you use their code, you need to convert the JSON file it produces into a CSV file, and then place these split files in the corresponding dataset folders for the SOTA comparison experiments.
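A minimal conversion sketch is shown below. It assumes the ATA-generated JSON maps each class label to a list of image file names and that the target CSV uses two columns (filename, label); adjust the keys and columns to match the actual ATA output and the CSV format used in process_data/data_splits.
import csv
import json

# Hypothetical converter: read a JSON split that maps label -> list of
# image file names and write it out as a two-column CSV (filename, label).
def json_split_to_csv(json_path, csv_path):
    with open(json_path) as f:
        split = json.load(f)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "label"])
        for label, filenames in split.items():
            for filename in filenames:
                writer.writerow([filename, label])

# Example usage (file names are illustrative)
json_split_to_csv("test_compare_with_sota.json", "test_compare_with_sota.csv")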
Please follow the instructions below for training, evaluating, and reproducing the results. First, modify the name of the config file to train different models; the config files are located in our_config. Then run:
# Train a model
bash train.sh
# Evaluate on all datasets
bash test_all.sh
# Resume training from a checkpoint
bash train_resume.sh
If our code is helpful to your research or projects, please consider citing:
@inproceedings{bai2024improving,
title={Improving Cross-domain Few-shot Classification with Multilayer Perceptron},
author={Bai, Shuanghao and Zhou, Wanqi and Luan, Zhirong and Wang, Donglin and Chen, Badong},
booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={5250--5254},
year={2024},
organization={IEEE}
}
If you have any questions, please create an issue in this repository or contact us at baishuanghao@stu.xjtu.edu.cn.
Our code is based on the LibFewShot repository. We thank the authors for releasing their code. If you use their code, please consider citing their work as well.
@article{li2021LibFewShot,
title = {LibFewShot: A Comprehensive Library for Few-Shot Learning},
author = {Li, Wenbin and Wang, Ziyi and Yang, Xuesong and Dong, Chuanqi and Tian, Pinzhuo and Qin, Tiexin and Huo, Jing and Shi, Yinghuan and Wang, Lei and Gao, Yang and Luo, Jiebo},
journal = {IEEE Transactions on Pattern Analysis & Machine Intelligence},
year = {2023},
number = {01},
issn = {1939-3539},
pages = {1-18}
}