Fake Face Detection with Explainability

This repository contains a project for detecting fake faces with a deep learning model. The classifier is built on InceptionResnetV1 pretrained on the vggface2 dataset, and the GradCAM technique provides visual explanations for the model's predictions.

Overview

This project leverages MTCNN for face detection and InceptionResnetV1 for face classification. The model is trained to distinguish between real and fake faces and provides visual explanations for its predictions using GradCAM.
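
For orientation, the following is a minimal sketch of how such a pipeline is typically wired together, assuming the facenet-pytorch package provides MTCNN and InceptionResnetV1; the checkpoint path, two-class head, and class order are assumptions rather than the repository's exact code.

    # Sketch only: MTCNN face detection + InceptionResnetV1 classification (assumed stack).
    import torch
    from PIL import Image
    from facenet_pytorch import MTCNN, InceptionResnetV1

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Face detector: returns a cropped, normalized face tensor, or None if no face is found.
    mtcnn = MTCNN(image_size=160, margin=14, keep_all=False, device=device)

    # Classifier: InceptionResnetV1 pretrained on vggface2 with a 2-class head (real vs. fake),
    # loaded from a fine-tuned checkpoint (file name is hypothetical).
    model = InceptionResnetV1(pretrained="vggface2", classify=True, num_classes=2).to(device)
    model.load_state_dict(torch.load("checkpoint.pth", map_location=device))
    model.eval()

    def predict(image_path):
        """Return class probabilities for one image, or None if no face is detected."""
        face = mtcnn(Image.open(image_path).convert("RGB"))
        if face is None:
            return None
        with torch.no_grad():
            probs = torch.softmax(model(face.unsqueeze(0).to(device)), dim=1)[0]
        return {"real": float(probs[0]), "fake": float(probs[1])}  # assumed class order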

Features

  • Face Detection: Uses MTCNN to detect faces in images.
  • Fake Face Detection: Classifies faces as either real or fake using InceptionResnetV1.
  • Explainability: Provides visual explanations for predictions using GradCAM.
  • Interactive Interface: Uses Gradio to create an interactive web interface for users to upload images and get predictions.

Installation

To run this project, follow these steps:

  1. Clone the repository:

    git clone https://github.com/kirankonduru3/Deep-fake-Facial-Detection.git
    cd Deep-fake-Facial-Detection
  2. Create and activate a virtual environment:

    python -m venv venv
    source venv/bin/activate # On Windows, use `venv\Scripts\activate`
  3. Install the required packages (a sample requirements.txt is sketched after these steps):

    pip install -r requirements.txt
  4. Download the pre-trained model checkpoint and place it in the project directory.
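
The dependency list itself is not reproduced in this page. Based on the components described above (MTCNN, InceptionResnetV1, GradCAM, Gradio), a plausible requirements.txt might look like the following; the exact package names and versions, in particular grad-cam for the GradCAM implementation, are assumptions.

    torch
    torchvision
    facenet-pytorch   # MTCNN and InceptionResnetV1 (vggface2 weights)
    grad-cam          # GradCAM explanations (assumed package)
    gradio            # interactive web interface
    Pillow
    numpy
    scikit-learn      # accuracy and confusion matrix metrics
    matplotlib        # confusion matrix visualization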

Usage

Interactive Interface

You can launch an interactive web interface using Gradio by running:

python app.py

This will launch a web interface where you can upload an image and get predictions along with visual explanations.
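
A minimal sketch of what app.py might contain, assuming the Gradio Interface API and the facenet-pytorch stack from the Overview sketch; the checkpoint path and class order are assumptions.

    # Sketch only: Gradio front end for the fake-face classifier (assumed stack).
    import gradio as gr
    import torch
    from facenet_pytorch import MTCNN, InceptionResnetV1

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    mtcnn = MTCNN(image_size=160, device=device)
    model = InceptionResnetV1(pretrained="vggface2", classify=True, num_classes=2).to(device)
    model.load_state_dict(torch.load("checkpoint.pth", map_location=device))  # hypothetical path
    model.eval()

    def classify(image):
        # Gradio hands the upload over as a PIL image when the input uses type="pil".
        face = mtcnn(image)
        if face is None:
            return {"no face detected": 1.0}
        with torch.no_grad():
            probs = torch.softmax(model(face.unsqueeze(0).to(device)), dim=1)[0]
        return {"real": float(probs[0]), "fake": float(probs[1])}  # assumed class order

    demo = gr.Interface(
        fn=classify,
        inputs=gr.Image(type="pil"),
        outputs=gr.Label(num_top_classes=2),
        title="Fake Face Detection with Explainability",
    )
    demo.launch()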

Batch Processing

To process a batch of images and evaluate model performance, follow these steps:

  1. Place your images in a ZIP file and update the paths in the script.

  2. Run the script to extract images, make predictions, and calculate metrics (see the sketch after these steps):

    python batch_processing.py
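
A minimal sketch of what batch_processing.py might do, assuming the ZIP's folder names ("real" / "fake") encode the ground-truth labels; the paths, label convention, and checkpoint name are assumptions.

    # Sketch only: extract a ZIP of test images and collect predictions (assumed layout).
    import zipfile
    from pathlib import Path

    import torch
    from PIL import Image
    from facenet_pytorch import MTCNN, InceptionResnetV1

    ZIP_PATH = Path("test_images.zip")      # hypothetical path
    EXTRACT_DIR = Path("extracted_images")  # hypothetical path

    with zipfile.ZipFile(ZIP_PATH) as zf:
        zf.extractall(EXTRACT_DIR)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    mtcnn = MTCNN(image_size=160, device=device)
    model = InceptionResnetV1(pretrained="vggface2", classify=True, num_classes=2).to(device)
    model.load_state_dict(torch.load("checkpoint.pth", map_location=device))  # hypothetical path
    model.eval()

    y_true, y_pred = [], []
    for path in sorted(EXTRACT_DIR.rglob("*")):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        face = mtcnn(Image.open(path).convert("RGB"))
        if face is None:
            continue  # skip images where no face was detected
        with torch.no_grad():
            pred = model(face.unsqueeze(0).to(device)).argmax(dim=1).item()
        y_true.append(0 if "real" in path.parts else 1)  # assumed folder convention
        y_pred.append(pred)

    print(f"Processed {len(y_pred)} images")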

Example

Here's an example of how to use the interactive interface:

[Screenshot: Gradio interface]

Visual Explanation

The model provides visual explanations for its predictions using GradCAM:

[Screenshot: GradCAM heatmap overlay]
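
A minimal sketch of how a GradCAM overlay can be produced for the cropped face, assuming the pytorch-grad-cam package (installed as grad-cam); the choice of model.block8 as the target layer, the class index for "fake", and the file paths are assumptions.

    # Sketch only: GradCAM heatmap for one face crop (pytorch-grad-cam assumed).
    import numpy as np
    import torch
    from PIL import Image
    from facenet_pytorch import MTCNN, InceptionResnetV1
    from pytorch_grad_cam import GradCAM
    from pytorch_grad_cam.utils.image import show_cam_on_image
    from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

    mtcnn = MTCNN(image_size=160)
    model = InceptionResnetV1(pretrained="vggface2", classify=True, num_classes=2)
    model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))  # hypothetical path
    model.eval()

    face = mtcnn(Image.open("example.jpg").convert("RGB"))  # cropped face, roughly in [-1, 1]
    input_tensor = face.unsqueeze(0)

    # Attribute the "fake" class (index 1 assumed) to the last convolutional block.
    cam = GradCAM(model=model, target_layers=[model.block8])
    heatmap = cam(input_tensor=input_tensor, targets=[ClassifierOutputTarget(1)])[0]

    # Overlay the heatmap on the face crop, rescaled to [0, 1] for visualization.
    face_rgb = np.clip((face.permute(1, 2, 0).numpy() + 1.0) / 2.0, 0.0, 1.0)
    overlay = show_cam_on_image(face_rgb, heatmap, use_rgb=True)
    Image.fromarray(overlay).save("gradcam_explanation.png")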

Evaluation

You can calculate accuracy and confusion matrix metrics for your test dataset. Here's how:

python evaluate.py

This script will print the accuracy and confusion matrix, and save a visualization of the confusion matrix.
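
A minimal sketch of the metric computation such a script might perform, assuming the predictions have been collected into y_true / y_pred lists (0 = real, 1 = fake, as in the batch-processing sketch); the label convention and output file name are assumptions.

    # Sketch only: accuracy and confusion matrix from collected labels (scikit-learn).
    import matplotlib.pyplot as plt
    from sklearn.metrics import ConfusionMatrixDisplay, accuracy_score, confusion_matrix

    y_true = [0, 0, 1, 1, 1, 0]  # hypothetical ground-truth labels
    y_pred = [0, 1, 1, 1, 0, 0]  # hypothetical model predictions

    print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")

    cm = confusion_matrix(y_true, y_pred)
    print("Confusion matrix:")
    print(cm)

    # Save a visualization of the confusion matrix, as described above.
    ConfusionMatrixDisplay(cm, display_labels=["real", "fake"]).plot(cmap="Blues")
    plt.savefig("confusion_matrix.png")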

Contributing

Contributions are welcome! Please create an issue or submit a pull request.

License

This project is licensed under the MIT License.

Acknowledgements

This project builds on MTCNN for face detection, InceptionResnetV1 pretrained on the vggface2 dataset for classification, the GradCAM technique for visual explanations, and Gradio for the web interface.
