[ECCV-2024] The official PyTorch implementation of "CLIP-Guided Generative Networks for Transferable Targeted Adversarial Attacks".
Hao Fang*, Jiawei Kong*, Bin Chen#, Tao Dai, Hao Wu, Shu-Tao Xia
We provide the environment configuration file exported by Anaconda, which lets you set up the environment conveniently:
conda env create -f environment.yml
conda activate CGNC
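After activation, a quick sanity check can confirm the environment is usable. This is a minimal sketch, not part of the repository; it only assumes that PyTorch is installed by environment.yml:

```python
# Minimal environment sanity check (not part of the repository).
import torch

print(torch.__version__, "| CUDA available:", torch.cuda.is_available())
```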
- Download the ImageNet training set (a quick loading check is sketched below).
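If you want to verify that the dataset is readable before training, the sketch below loads it via torchvision's standard class-subfolder layout. The path is a placeholder, and the repository's actual data pipeline lives in train.py:

```python
# Minimal sketch (not the repository's data pipeline): check that the
# ImageNet training set follows the class-subfolder layout torchvision expects.
import torchvision.transforms as T
from torchvision.datasets import ImageFolder

transform = T.Compose([T.Resize(299), T.CenterCrop(299), T.ToTensor()])  # Inception-v3 input size
train_set = ImageFolder("/path/to/ImageNet/train", transform=transform)
print(len(train_set), "images across", len(train_set.classes), "classes")
```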
Below we provide the command for training the CLIP-guided generator conditioned on 8 different target classes, following the same setting as prior work:
python train.py --train_dir $DATA_PATH/ImageNet/train --model_type incv3 --start_epoch 0 --epochs 10 --label_flag 'N8'
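For intuition, the sketch below illustrates the CLIP-guided conditioning idea: a target-class text prompt is encoded by CLIP's text encoder, and the embedding conditions the generator. This is a conceptual sketch only; the prompt template and the `generator` call are hypothetical stand-ins, not the actual train.py logic:

```python
# Conceptual sketch of CLIP-guided conditioning (hypothetical; see train.py
# for the real logic): encode a target-class prompt and condition a generator.
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

tokens = clip.tokenize(["a photo of a sea lion"]).to(device)  # target-class text
with torch.no_grad():
    text_emb = clip_model.encode_text(tokens).float()  # shape [1, 512]

# adv_images = generator(clean_images, text_emb)  # hypothetical conditional generator
```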
Alternatively, download our pretrained generators CGNC-Incv3 and CGNC-Res152, trained under the same 8-class setting.
Below we provide the command for finetuning the CLIP-guided generator on a single class if needed (taking class ID 150 as an example):
python train.py --train_dir $DATA_PATH/ImageNet/train --model_type incv3 --start_epoch 10 --epochs 15 --label_flag 'N8' --load_path $CKPT_DIR/incv3/model-9.pth --finetune --finetune_class 150
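Note that --start_epoch 10 resumes from the 8-class checkpoint (model-9.pth) and finetunes for five more epochs. The sketch below shows an assumed checkpoint-loading pattern; the real logic is in train.py, and `netG` is a hypothetical name for the generator:

```python
# Assumed checkpoint-resume pattern (a sketch; see train.py for the actual code).
import torch

ckpt = torch.load("checkpoints/incv3/model-9.pth", map_location="cpu")
# Checkpoints may store a raw state_dict or wrap it in a dict; handle both.
state = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
# netG.load_state_dict(state)  # netG: the CLIP-guided generator (hypothetical name)
```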
Below we provide the command for generating targeted adversarial examples on the ImageNet NeurIPS validation set (1k images) under our multi-class setting:
python eval.py --data_dir data/ImageNet1k/ --model_type incv3 --load_path $SAVE_CHECKPOINT --save_dir $IMAGES_DIR
Below we provide the command for generating targeted adversarial examples on the ImageNet NeurIPS validation set (1k images) under our single-class setting (taking class ID 150 as an example):
python eval.py --data_dir data/ImageNet1k/ --model_type incv3 --load_path $SAVE_CHECKPOINT --save_dir $IMAGES_DIR --finetune --finetune_class 150
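In both settings, generator-based attacks typically project the output back into an L∞ ball around the clean image before saving. The sketch below shows this standard projection under an assumed budget of 16/255, a common choice in targeted-attack papers; check eval.py for the exact value used here:

```python
# Standard L_inf projection for crafted adversarial examples
# (sketch with an assumed eps of 16/255; not copied from eval.py).
import torch

def project(adv: torch.Tensor, clean: torch.Tensor, eps: float = 16 / 255) -> torch.Tensor:
    """Clamp the perturbation to the eps ball and keep pixels valid in [0, 1]."""
    adv = torch.min(torch.max(adv, clean - eps), clean + eps)
    return adv.clamp(0.0, 1.0)
```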
The crafted targeted adversarial examples can then be directly used to test different models from torchvision.
Below we provide the command for testing our method against different black-box models:
python inference.py --test_dir $IMAGES_DIR --model_t vgg16
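Equivalently, the targeted success rate can be measured directly with torchvision. The sketch below is an assumed stand-alone evaluation; paths and the target class are placeholders, and inference.py remains the reference implementation:

```python
# Stand-alone sketch (assumed layout; inference.py is authoritative):
# targeted success rate of saved adversarial images against torchvision's VGG-16.
import torch
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
from torchvision.models import vgg16, VGG16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval().to(device)

transform = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
loader = DataLoader(ImageFolder("/path/to/adv_images", transform=transform), batch_size=32)

target_class = 150  # e.g., "sea lion" in ImageNet; match --finetune_class
hits = total = 0
with torch.no_grad():
    for x, _ in loader:
        preds = model(x.to(device)).argmax(dim=1)
        hits += (preds == target_class).sum().item()
        total += x.size(0)
print(f"Targeted success rate: {hits / total:.2%}")
```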
@article{fang2024clip,
title={CLIP-Guided Networks for Transferable Targeted Attacks},
author={Fang, Hao and Kong, Jiawei and Chen, Bin and Dai, Tao and Wu, Hao and Xia, Shu-Tao},
journal={arXiv preprint arXiv:2407.10179},
year={2024}
}