"Neural Computing and Applications" Published Paper (2023)
Adversarial defense by retrieval-based methods
A hybrid classical-quantum neural network with adversarial defense protection
Developed robust image classification models to mitigate the effects of adversarial attacks
Implementations of several white-box and black-box attacks.
This work enhances the robustness of targeted classifier models against adversarial attacks. To achieve this, a convolutional autoencoder is employed to counter adversarial perturbations introduced into the input images.
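A minimal sketch of such an autoencoder-based defense, assuming MNIST-shaped single-channel inputs: a perturbed image is passed through a convolutional autoencoder before it reaches the classifier, so reconstruction pulls the input back toward the clean data manifold. The architecture and names here are illustrative, not the repository's actual model.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Convolutional autoencoder used as a pre-classifier purification step.

    Illustrative architecture for 1x28x28 (MNIST-sized) inputs; the actual
    repository's layer sizes may differ.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),       # 7 -> 14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),    # 14 -> 28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = DenoisingAutoencoder()
x_adv = torch.rand(2, 1, 28, 28)  # stand-in for adversarially perturbed images
x_clean = ae(x_adv)               # purified reconstruction fed to the classifier
```

In practice the autoencoder is trained to map perturbed inputs back to their clean counterparts (e.g. with an MSE reconstruction loss), and the classifier only ever sees the reconstruction.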
Adversarial network attacks (PGD, pixel, FGSM) as noise on the MNIST image dataset, using Python (PyTorch)
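The simplest of these attacks, FGSM, takes a single signed-gradient step of size eps away from the correct label. A minimal PyTorch sketch, using an untrained stand-in model so the example is self-contained (the model and shapes are assumptions, not the repository's code):

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: one signed-gradient step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative stand-in for an MNIST classifier (untrained, shapes only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)           # fake MNIST batch in [0, 1]
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y, eps=0.1)
```

Because the update is a single eps-sized step followed by clamping to [0, 1], the perturbation is guaranteed to stay within an L-infinity ball of radius eps around the clean image.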
A classical or convolutional neural network model with adversarial defense protection
Implementation of the PGD attack on a model trained on the CIFAR-10 dataset in TensorFlow. The FID between original and generated images is also computed.
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
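PGD, the stronger of the two, iterates small FGSM-style steps and projects the result back into an L-infinity ball around the clean input after each step. A hedged sketch (the stand-in model, eps, alpha, and step count are illustrative defaults, not values from any of these repositories):

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.3, alpha=0.05, steps=10):
    """Projected Gradient Descent: iterated signed-gradient steps of size
    alpha, projected into the L-infinity eps-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball, then into the valid pixel range.
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Illustrative stand-in model and batch (untrained, shapes only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y, eps=0.3, alpha=0.05, steps=10)
```

Robustness evaluation then compares clean accuracy against accuracy on `x_adv` across a range of eps values.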
Vanilla training and adversarial training in PyTorch
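Adversarial training differs from vanilla training only in that each batch is replaced (or augmented) with adversarial examples crafted on the fly. A minimal sketch with FGSM as the inner attack; the model, data, and hyperparameters are stand-ins for illustration:

```python
import torch
import torch.nn as nn

# Illustrative stand-in model and batch (shapes mimic MNIST).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))

for step in range(3):  # a real loop would iterate over a DataLoader
    # Craft FGSM adversarial examples on the fly for this batch.
    x_req = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_req), y).backward()
    x_adv = (x + 0.1 * x_req.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on the adversarial batch (vanilla training would use x instead).
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
```

Stronger variants use PGD rather than FGSM for the inner maximization, at the cost of several extra forward/backward passes per batch.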
An ASR (Automatic Speech Recognition) adversarial attack repository.