Machine learning project on classifying cervical cancer screening images before and after poisoning them with adversarial attacks

Project Overview

This project focuses on using machine learning for medical image processing, specifically classifying cervical cancer screening images by cervix type. By combining deep learning with adversarial training, it aims to improve both the accuracy and the robustness of the classifier.

Highlight

Roughly 9% improvement in classification accuracy on adversarially attacked images after adversarial training.

Key Features

  • Data Preprocessing: Image transformations for normalization and augmentation.
  • Model Training: Transfer learning with a ResNet50 architecture (a combined sketch of these two steps follows this list).
  • Adversarial Training: Incorporates adversarial attack scenarios to improve model resilience.
  • Performance Evaluation: Tests model accuracy on both clean and adversarially perturbed images.
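
The preprocessing and transfer-learning steps can be summarized in a minimal sketch like the one below. The input size, normalization statistics, augmentation choices, and the three output classes are assumptions for illustration, not values read from the notebook.

import torch.nn as nn
from torchvision import models, transforms

# Normalization and light augmentation for the training images
# (ImageNet statistics and a 224x224 input size are assumed).
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Transfer learning: start from ImageNet weights and replace the classifier
# head with a 3-class output layer (one per cervix type, assumed).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)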

The problem

[Figure: FGSM adversarial example — a panda image misclassified as "gibbon" after adding a small perturbation]

  • Original Image: A photo of a panda is shown with a label "panda" and the model's confidence in this classification is 57.7%.
  • Perturbation: The middle image shows a small perturbation (visual noise) computed from the sign of the gradient of the model's loss with respect to the input image, scaled by 0.007 to keep its magnitude small; the sketch after this list implements the same formula.
  • Adversarial Image: The resulting image on the right looks almost identical to the original panda image to the human eye, but it now includes the calculated perturbation. This slight modification causes the model to misclassify the image as a "gibbon" with 99.3% confidence.
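
This perturbation is the Fast Gradient Sign Method (FGSM). A minimal PyTorch sketch of the formula follows; the epsilon value and the cross-entropy loss are assumptions for illustration.

import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.007):
    # x_adv = x + eps * sign(grad_x loss(model(x), y))
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = images + eps * images.grad.sign()
    return adv_images.clamp(0, 1).detach()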

Cervical Cancer image classification

[Figure: example cervix images for Type 1, Type 2, and Type 3]

  • Type 1: This image shows a close-up view of the cervix with a clear and focused visual. The cervix appears slightly open, and there are visible vascular patterns. This type could represent a normal cervix or a particular stage of cervical health.
  • Type 2: This image includes a medical instrument, possibly a speculum, which is used during cervical exams to provide a clear view of the cervix. The cervix in this image looks similar to Type 1 but is being viewed under different conditions, which might highlight other features or abnormalities.
  • Type 3: The third type shows the cervix under a different lighting or imaging technique, possibly using a filter or enhancement to highlight specific features such as vascular patterns or surface texture.

Loss and Accuracy over Epochs

Without Adversarial Attacks

[Figure: loss and accuracy over epochs without adversarial attacks]

  • Loss (Blue Line): Starts high and decreases sharply, flattening out as the epochs increase.
  • Accuracy (Orange Line): Starts low and increases sharply, reaching a plateau as epochs increase.

With Adversarial Attacks without Adversarial Attack Training

[Figure: loss, accuracy, and adversarial accuracy over epochs without adversarial training]

  • Loss (Blue Line): Shows a steep decline initially, stabilizing towards the later epochs.
  • Accuracy (Orange Line): Increases sharply at the start and then levels off.
  • Adversarial Accuracy (Green Line): Starts relatively high and remains stable for the rest of the epochs.

With Adversarial Attacks and Adversarial Attack Training

[Figure: loss, accuracy, and adversarial accuracy over epochs with adversarial training]

  • Loss (Blue Line): Begins high, drops rapidly, and then flattens.
  • Accuracy (Orange Line): Begins low, rises quickly, and remains fairly constant.
  • Adversarial Accuracy (Green Line): Begins relatively high, surpasses the orange line, and then gradually stabilizes at a level around 9% above the clean-model accuracy.
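
A minimal sketch of the adversarial-training step behind these curves, assuming each batch is trained on both its clean and FGSM-perturbed images (reusing the fgsm_attack sketch above); the equal loss weighting and the hyperparameters are assumptions.

import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, eps=0.007):
    # Generate a perturbed copy of the batch, then train on both versions.
    adv_images = fgsm_attack(model, images, labels, eps)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()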

Performed Adversarial Attacks

Four adversarial attacks were implemented and executed for this project (FGSM is sketched earlier; minimal sketches of the other three follow this list):

  • FGSM attack
  • Random Perturbation attack
  • Gaussian Noise attack
  • BIM attack
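
Minimal sketches of the Random Perturbation, Gaussian Noise, and BIM attacks follow; the step sizes, noise levels, iteration counts, and clamping to [0, 1] are assumptions for illustration.

import torch
import torch.nn.functional as F

def random_perturbation_attack(images, eps=0.007):
    # Uniform random noise in [-eps, eps]; no gradient information is used.
    noise = torch.empty_like(images).uniform_(-eps, eps)
    return (images + noise).clamp(0, 1)

def gaussian_noise_attack(images, std=0.01):
    # Additive zero-mean Gaussian noise.
    return (images + std * torch.randn_like(images)).clamp(0, 1)

def bim_attack(model, images, labels, eps=0.007, alpha=0.002, steps=5):
    # Basic Iterative Method: repeated small FGSM steps, projected back
    # into an eps-ball around the original images.
    original = images.clone().detach()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        adv = adv + alpha * adv.grad.sign()
        adv = original + (adv - original).clamp(-eps, eps)
        adv = adv.clamp(0, 1).detach()
    return adv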

Installation

To run this project, ensure you have Python installed, along with the following major libraries:

  • PyTorch
  • torchvision
  • numpy
  • matplotlib
pip install torch torchvision numpy matplotlib

Usage

The notebook is structured to guide you through the process of data preparation, model training, and evaluation step-by-step. Execute each cell sequentially to reproduce the results.

Configuring Parameters

Adjust the training parameters and device configurations at the beginning of the notebook to suit your computational environment (e.g., GPU settings).
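
For example, a device and hyperparameter block near the top of the notebook might look like the following; the variable names and values are assumptions.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_epochs = 20       # assumed
batch_size = 32       # assumed
learning_rate = 1e-4  # assumed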

Running Adversarial Attacks

Detailed instructions are provided on how to generate and apply adversarial attacks to test the robustness of the trained model.
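
A minimal sketch of how such a robustness check can be run, comparing clean and adversarial accuracy on a test loader; the loader and attack names are assumptions (the attack sketches above can be plugged in).

import torch

def evaluate(model, loader, device, attack=None):
    # Returns accuracy on the loader; pass an attack (e.g. the fgsm_attack
    # or bim_attack sketches above) to measure adversarial accuracy instead.
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        if attack is not None:
            images = attack(model, images, labels)
        with torch.no_grad():
            predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.size(0)
    return correct / total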

Contributing

Contributions to this project are welcome. Please feel free to fork the repository, make changes, and submit a pull request.

Authors

  • Nikolaos Bakalis - Initial work and documentation

License

This project is licensed under the MIT License - see the LICENSE file for details.
