This repository contains the code for our paper *Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks*.
UEraser can be dropped into a standard PyTorch training loop:

```python
import torch
import torchvision as tv
from ueraser import adversarial_augmentation_loss

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Model
model = ...  # a PyTorch model
model = model.to(device)

# Optimizer
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

# Data
dataset = ...  # an unlearnable dataset
dataloader = torch.utils.data.DataLoader(
    dataset, batch_size=128, shuffle=True, num_workers=2)

max_epochs = 200  # total number of training epochs
aa_epochs = 60    # number of epochs with adversarial augmentations
repeat = 5        # number of repeated augmentation samples per batch

# UEraser training loop
for e in range(max_epochs):
    for images, labels in dataloader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        # Use adversarial augmentations for the first `aa_epochs` epochs,
        # then fall back to a single augmentation sample.
        r = repeat if e < aa_epochs else 1
        loss = adversarial_augmentation_loss(model, images, labels, r)
        loss.backward()
        optimizer.step()
```
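At its core, `adversarial_augmentation_loss` samples `r` strongly augmented views of each batch and trains on the worst-case (maximum) per-sample loss across those repeats. Below is a minimal sketch of this idea; the kornia pipeline (`RandomPlasmaBrightness`, `RandomChannelShuffle`, `RandomHorizontalFlip`) is illustrative and not necessarily the repository's exact augmentation policy:

```python
import torch
import torch.nn.functional as F
import kornia.augmentation as K

# Illustrative augmentation pipeline; the repository's actual policy may differ.
augment = torch.nn.Sequential(
    K.RandomPlasmaBrightness(p=0.5),
    K.RandomChannelShuffle(p=0.5),
    K.RandomHorizontalFlip(p=0.5),
)

def adversarial_augmentation_loss_sketch(model, images, labels, repeat):
    # Per-sample cross-entropy for each of the `repeat` augmented views,
    # stacked into a (repeat, batch_size) tensor.
    losses = torch.stack([
        F.cross_entropy(model(augment(images)), labels, reduction='none')
        for _ in range(repeat)
    ])
    # Adversarial selection: train on the hardest view of each sample.
    return losses.amax(dim=0).mean()
```

With `repeat` set to 1, this reduces to ordinary training on a single augmented view, matching the loop above, which switches to `r = 1` after `aa_epochs`.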
Please first install Python >= 3.10 and the following packages:

```sh
pip install torch torchvision numpy kornia scikit-learn einops
```
We provide examples of UEraser on CIFAR-10 poisons generated by EM and LSP. Detailed instructions for the EM poisons are available in `EM/QuickStart.ipynb`.
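To follow those instructions interactively, you can open the notebook with Jupyter (assuming it is installed in your environment):

```sh
jupyter notebook EM/QuickStart.ipynb
```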
For the LSP poisons, go to the `LSP` subfolder:

```sh
cd LSP/
```
Here are some example commands to test UEraser on LSP poisons. To evaluate the classification performance of UEraser when training on LSP CIFAR-10 poisons:

```sh
CUDA_VISIBLE_DEVICES=0 python cifar_train.py --model <model> --dataset <dataset> --mode <mode> --type <type>
```
The parameter choices for the above command are as follows (a complete example invocation is shown after the list):

- Dataset `<dataset>`: `c10`, `c100`, `svhn`.
- Model `<model>`: `resnet18`, `resnet50`, `densenet`.
- Mode of UEraser `<mode>`: `fast`, `standard`, `em`.
- Training data type `<type>`: `unlearn` (poisoned) or `clean`.
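For example, substituting concrete values from the list above, the following command trains a ResNet-18 with standard UEraser on unlearnable LSP CIFAR-10:

```sh
CUDA_VISIBLE_DEVICES=0 python cifar_train.py --model resnet18 --dataset c10 --mode standard --type unlearn
```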
- arXiv version:

  ```bibtex
  @article{qin2023learning,
    title={Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks},
    author={Qin, Tianrui and Gao, Xitong and Zhao, Juanjuan and Ye, Kejiang and Xu, Cheng-Zhong},
    journal={arXiv preprint arXiv:2303.15127},
    year={2023}
  }
  ```

- ICCVW version:

  ```bibtex
  @inproceedings{qin2023iccvw,
    title={Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks},
    author={Qin, Tianrui and Gao, Xitong and Zhao, Juanjuan and Ye, Kejiang and Xu, Cheng-Zhong},
    booktitle={4th Workshop on Adversarial Robustness In the Real World (AROW), ICCV 2023},
    url={https://iccv23-arow.github.io/pdf/arow-0025.pdf},
    year={2023}
  }
  ```
The training code is adapted from the EM and LSP repositories.