This repository contains the code for Attentive Graph Neural Networks for Few-Shot Learning.
## Environment
- Python 3.7.3
- PyTorch 1.7.1
- tensorboardX
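A quick way to confirm the environment matches the versions above (a minimal sketch; the exact CUDA setup depends on your machine):

```python
# Minimal environment check (a sketch; adjust to your setup).
import sys
import torch

print("Python:", sys.version.split()[0])         # expected: 3.7.x
print("PyTorch:", torch.__version__)              # expected: 1.7.1
print("CUDA available:", torch.cuda.is_available())
print("GPUs visible:", torch.cuda.device_count())
```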
## Datasets
- miniImageNet: download from Google Drive or Baidu Drive (extraction code: uk3o) (courtesy of DeepEMD)
- tieredImageNet (courtesy of Kwonjoon Lee)
Download the datasets and link the folders into `materials/` with the names `mini-imagenet`, `tiered-imagenet`, and `imagenet`. Note that `imagenet` refers to the ILSVRC-2012 1K dataset, with two directories `train` and `val` containing the class folders.
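A small sanity check of the expected layout under `materials/` (a sketch based only on the folder names mentioned above):

```python
# Sanity-check the dataset folders described above (sketch only).
import os

MATERIALS = "materials"
expected = [
    "mini-imagenet",
    "tiered-imagenet",
    "imagenet/train",   # ILSVRC-2012 1K training split (class folders)
    "imagenet/val",     # ILSVRC-2012 1K validation split (class folders)
]

for rel in expected:
    path = os.path.join(MATERIALS, rel)
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{path}: {status}")
```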
When running the Python programs, use `--gpu` to specify the GPUs to run on (e.g. `--gpu 0,1`).
For Classifier-Baseline, we train with 4 GPUs on miniImageNet and tieredImageNet, and with 8 GPUs on ImageNet-800. Meta-Baseline uses half the corresponding number of GPUs.
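A `--gpu` flag like this is commonly handled by setting `CUDA_VISIBLE_DEVICES` before any CUDA call; the snippet below is a hypothetical sketch of that pattern, not the repository's exact argument handling:

```python
# Hypothetical sketch of how a --gpu flag can select GPUs;
# the repository's actual argument handling may differ.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--gpu", default="0", help="comma-separated GPU ids, e.g. 0,1")
args = parser.parse_args()

# Must be set before the first CUDA initialization in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu

import torch  # imported after setting the environment variable
print("Using", torch.cuda.device_count(), "GPU(s)")
```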
In the following, we take miniImageNet as an example. For other datasets, replace `mini` with `tiered`.
By default the setting is 1-shot; modify `shot` in the config file for other shot numbers. Models are saved in `save/`.
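For illustration, the shot setting can also be switched programmatically before launching training; the snippet below is a hypothetical sketch that only assumes a top-level `shot` key, since the full config schema is not shown here:

```python
# Hypothetical illustration: switch the shot setting in a config file.
# Only the 'shot' key is assumed; the rest of the schema is not shown here.
import yaml

path = "configs/train_meta_mini.yaml"
with open(path) as f:
    config = yaml.safe_load(f)

config["shot"] = 5  # e.g. change from the default 1-shot to 5-shot
with open(path, "w") as f:
    yaml.safe_dump(config, f)
```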
Note: the models on miniImageNet and tieredImageNet use ConvNet-4 as the backbone; the channels in the four blocks are 64-96-128-256.
Train Classifier-Baseline:
`python train_classifier.py --config configs/train_classifier_mini.yaml`
The pretrained Classifier-Baseline models can be downloaded from Google Drive (miniImageNet, tieredImageNet). Unzip them and place the folders under the `save/` folder.
Train Meta-Baseline:
`python train_meta.py --config configs/train_meta_mini.yaml`
## Citation

@inproceedings{cheng2022attentive,
title={Attentive graph neural networks for few-shot learning},
author={Cheng, Hao and Zhou, Joey Tianyi and Tay, Wee Peng and Wen, Bihan},
booktitle={2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR)},
pages={152--157},
year={2022},
organization={IEEE}
}
@article{cheng2023graph,
title={Graph Neural Networks With Triple Attention for Few-Shot Learning},
author={Cheng, Hao and Zhou, Joey Tianyi and Tay, Wee Peng and Wen, Bihan},
journal={IEEE Transactions on Multimedia},
year={2023},
publisher={IEEE}
}
## Acknowledgements

We thank the following repositories for providing helpful components and functions used in our work.