- Setup
- `git clone` this repository
- `source bin/activate`
- `pip install -r requirements.txt`
- Train a model, save it as `model.pt`, and report test accuracy
- `python main.py --execTrain 1 --saved 1`
- If you don't want to save the model, omit the `--saved` option
- `python main.py --execTrain 1`
- Demo
- `python demo.py`
- Kernel size: 5
- Stride: 2
- Final output count: 15
- Epochs: 50 / Learning rate: 0.01
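A quick sanity check of the layer arithmetic: with kernel size 5 and stride 2, each conv layer shrinks the spatial size as floor((size − 5) / 2) + 1. The 64×64 input resolution and the three-layer stack below are illustrative assumptions, not the repo's actual architecture; only the kernel size, stride, and the 15 final outputs come from the settings above.

```python
def conv_out(size, kernel=5, stride=2, padding=0):
    """Spatial output size of one conv layer: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Hypothetical 64x64 input passed through three conv layers with k=5, s=2:
size = 64
for layer in range(1, 4):
    size = conv_out(size)
    print(f"after conv{layer}: {size}x{size}")
# The flattened features would then feed a final linear layer with 15 outputs.
```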
- Loss and accuracy curves over training
- Demo
- Selects a random image from the 164 original images and predicts its class
- The clean image of the predicted subject is shown alongside for comparison
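The demo flow above might be sketched as follows; `images`, `labels`, and the linear `weights` standing in for the trained CNN are all placeholders, not demo.py's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 164 flattened "images" and 15 subject classes.
images = rng.random((164, 64 * 64))
labels = rng.integers(0, 15, size=164)
weights = rng.random((64 * 64, 15))   # stand-in for the trained model

idx = int(rng.integers(len(images)))  # select one image at random
logits = images[idx] @ weights        # forward pass of the stand-in model
pred = int(np.argmax(logits))
print(f"image {idx}: predicted class {pred}, true class {labels[idx]}")
```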
Create FGM adversarial examples with `art.attacks.evasion.FastGradientMethod`.
Adversarial training follows the method described in "Explaining and Harnessing Adversarial Examples".
More details on the CNN model trained with FGM examples
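`FastGradientMethod` implements the FGSM perturbation from that paper, x_adv = x + ε·sign(∇ₓJ(θ, x, y)). Below is a minimal NumPy sketch of that update on a tiny logistic model; the weights and inputs are illustrative, not the repo's CNN or the ART API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of the logistic model at input x."""
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(d loss / d x)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w              # closed-form input gradient of the BCE loss
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5]); b = 0.0
x = np.array([0.2, 0.1, 0.4]); y = 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)     # one signed gradient step raises the loss
```

Adversarial training then mixes such perturbed examples into each training batch, per the Goodfellow et al. objective.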
Create PGD adversarial examples with `art.attacks.evasion.ProjectedGradientDescent`.
Adversarial training follows the method described in "Towards Deep Learning Models Resistant to Adversarial Attacks".
More details on the CNN model trained with PGD examples
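`ProjectedGradientDescent` iterates small signed-gradient steps and, after each one, projects the point back into the ε-ball around the original input, as in the Madry et al. formulation. A NumPy sketch on a toy logistic model (the model, values, and step counts are illustrative, not ART's internals):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, y, w, b, eps=0.1, alpha=0.02, iters=10):
    """Iterated signed-gradient steps of size alpha, projected back into
    the L-infinity ball of radius eps around the original x."""
    x_adv = x.copy()
    for _ in range(iters):
        p = sigmoid(x_adv @ w + b)
        grad_x = (p - y) * w                      # input gradient of the BCE loss
        x_adv = x_adv + alpha * np.sign(grad_x)   # one FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the eps-ball
    return x_adv

w = np.array([1.0, -2.0, 0.5]); b = 0.0
x = np.array([0.2, 0.1, 0.4]); y = 1.0
x_adv = pgd(x, y, w, b)   # stays within eps of x, but lowers the true-class score
```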
Test accuracy of each model on each example set:

| | Original Model | FGM Model | PGD Model |
|---|---|---|---|
| Original Examples | 0.882 | 0.941 | 0.852 |
| FGM Examples | 0.500 | 0.941 | 0.941 |
| BIM Examples | 0.058 | 0.941 | 0.941 |
| PGD Examples | 0.029 | 0.852 | 0.970 |
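Each cell in the table is the accuracy of one trained model on one example set; a minimal sketch of the per-cell computation (the predictions here are made-up stand-ins):

```python
import numpy as np

def accuracy(preds, labels):
    """Fraction of predictions matching the true labels."""
    return float(np.mean(preds == labels))

labels = np.array([0, 1, 2, 1, 0])
preds = np.array([0, 1, 2, 0, 0])   # one wrong prediction out of five
print(accuracy(preds, labels))
# In the real evaluation, preds would come from running each trained model
# on each (clean or adversarial) example set.
```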
- Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy, "Explaining and Harnessing Adversarial Examples", ICLR 2015
- Alexey Kurakin, Ian J. Goodfellow, Samy Bengio, "Adversarial Examples in the Physical World", ICLR 2017
- Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, et al., "Towards Deep Learning Models Resistant to Adversarial Attacks", ICLR 2018
- UCSD Computer Vision, Yale Face Database Download
- Adversarial-Robustness-Toolbox Link