Developed robust image classification models to mitigate the effects of adversarial attacks.
Adversarial attacks on SRNet
Adversarial Attacks on Image data
Adversarial-Attacks-and-Defence
Notebook implementing several adversarial attack approaches using Python and PyTorch.
The Fast Gradient Sign Method (FGSM) combines a white-box approach with a misclassification goal: it perturbs an input just enough to make a neural network produce a wrong prediction. Here the technique is used to anonymize images.
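A minimal FGSM sketch in PyTorch, assuming a standard classifier and a batched input tensor; the function name and epsilon value are illustrative, not taken from the notebook:

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step FGSM: nudge each pixel by epsilon in the direction of the loss gradient's sign."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv_image = image + epsilon * image.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return adv_image.clamp(0, 1).detach()
```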
"Neural Computing and Applications" Published Paper (2023)
A classical-quantum (hybrid) neural network with adversarial defense protection
This project evaluates the robustness of image classification models against adversarial attacks using two key metrics: Adversarial Distance and CLEVER. The study employs variants of the WideResNet model, a standard model and a corruption-trained robust model, both trained on the CIFAR-10 dataset. Key insights reveal that the CLEVER Score serves as…
This repository contains implementations of three adversarial example attacks (FGSM, noise, and semantic attack) and a defensive distillation approach to defend against the FGSM attack.
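For context, defensive distillation trains a student network on a teacher's temperature-softened output probabilities rather than hard labels. A minimal sketch of one training step, assuming generic teacher/student models and an illustrative temperature value (not the repository's actual code):

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, x, temperature=20.0):
    """One defensive-distillation step: fit the student to the teacher's softened outputs."""
    with torch.no_grad():
        # Soft labels: teacher logits softened by the distillation temperature.
        soft_targets = F.softmax(teacher(x) / temperature, dim=1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=1)
    loss = F.kl_div(student_log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```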
Adversarial attacks on a deep neural network trained on ImageNet
A university project for the AI4Cybersecurity class.
This study was conducted in collaboration with the University of Prishtina (Kosovo) and the University of Oslo (Norway). This implementation is part of the paper entitled "Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method", published in the International Journal of Applied Artificial Intelligence by Taylo…
Implementations for several white-box and black-box attacks.
Learning adversarial robustness in machine learning, in both theory and practice.
This work focuses on enhancing the robustness of targeted classifier models against adversarial attacks. To achieve this, a convolutional autoencoder-based approach is employed to counter adversarial perturbations introduced into the input images.
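A minimal sketch of such a denoising front end; the architecture and layer sizes below are assumptions for illustration, not the project's actual model:

```python
import torch.nn as nn

class ConvDenoiser(nn.Module):
    """Small convolutional autoencoder that maps a (possibly perturbed) image
    back toward the clean-image manifold before classification."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

At inference time the classifier would receive the denoiser's output instead of the raw input, so small adversarial perturbations are partly removed before prediction.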
Adversarial attacks on a CNN using the FGSM technique.
Adversarial network attacks (PGD, pixel, FGSM) with noise on the MNIST image dataset using Python (PyTorch)
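PGD can be viewed as iterated FGSM with a projection back into an epsilon-ball around the original image. A minimal sketch, with step size, epsilon, and iteration count chosen only for illustration:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, image, label, epsilon=0.3, alpha=0.01, steps=40):
    """Projected Gradient Descent: repeated FGSM steps, projected into the L-inf epsilon-ball."""
    original = image.clone().detach()
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the epsilon-ball and valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = original + (adv - original).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1)
    return adv.detach()
```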
A classical or convolutional neural network model with adversarial defense protection
Adversarial Sample Generation