"Neural Computing and Applications" Published Paper (2023)
A hybrid classical-quantum neural network with adversarial defense protection
A classical convolutional neural network model with adversarial defense protection
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
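Several of the repositories above evaluate robustness against FGSM, which perturbs an input one step in the direction of the sign of the loss gradient. A minimal sketch in PyTorch, assuming a toy linear model and a random MNIST-shaped batch as stand-ins (not any listed repository's actual code):

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: take one signed gradient step of
    size eps to increase the classification loss on (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # perturb along the gradient sign, then clamp to valid pixel range
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# placeholder model and batch to demonstrate the call
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y, eps=0.1)
```

Robustness is then measured by comparing the model's accuracy on `x` versus `x_adv`.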
Adversarial defense via retrieval-based methods
Robust image classification models that mitigate the effects of adversarial attacks
This work enhances the robustness of target classifier models against adversarial attacks. A convolutional autoencoder is placed in front of the classifier to counter adversarial perturbations added to the input images.
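The defense described above passes inputs through a convolutional autoencoder to strip adversarial noise before classification. A minimal PyTorch sketch of that idea, with an assumed MNIST-shaped architecture (the layer sizes are illustrative, not the paper's actual model):

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Convolutional autoencoder: reconstructs a clean image from a
    (possibly perturbed) input before it reaches the classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),        # 7x7 -> 14x14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),     # 14x14 -> 28x28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# at inference time, purify the input before classification
ae = DenoisingAutoencoder()
x_perturbed = torch.rand(4, 1, 28, 28)
x_clean = ae(x_perturbed)  # feed x_clean to the target classifier
```

In this scheme the autoencoder is trained to map perturbed images back to their clean originals (e.g. with an MSE reconstruction loss), and the classifier itself is left unchanged.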
An ASR (Automatic Speech Recognition) adversarial attack repository.
Adversarial network attacks (PGD, pixel, FGSM) adding noise to the MNIST image dataset, using Python (PyTorch)
Vanilla training and adversarial training in PyTorch
Implementations for several white-box and black-box attacks.
Implementation of a PGD attack on a model trained on the CIFAR-10 dataset in TensorFlow. The FID between original and adversarial images is also computed.
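PGD, used by several repositories in this listing, is essentially iterated FGSM: repeated signed-gradient steps, each followed by projection back into an L-infinity ball around the original input. A minimal PyTorch sketch with an assumed toy model and random CIFAR-10-shaped batch (hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent: iterated signed-gradient steps of
    size alpha, projected back into the eps-ball around x each step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project onto the L-infinity eps-ball, then the valid pixel range
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

# placeholder model and batch to demonstrate the call
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
```

Because each step is projected, the final perturbation never exceeds `eps` per pixel, which is what makes PGD a standard benchmark for L-infinity robustness.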