CNN architectures struggle to extract accurate facial features from non-visible thermal images. This work fine-tunes the VGG-16 model with transfer learning for classification, then applies the superpixel technique for face recognition and feature-embedding extraction from thermal images.

Th-VGG16: A Thermal Face Feature Extraction Model (Finetuned VGG-16 + SuperPixel)

Figure: first-hand face extraction result from the superpixel technique implementation.

Introduction

Th-VGG16 is the result of fine-tuning the VGG16 model with transfer learning on thermal images from the Terravic Facial IR Database, used to train and validate the enhanced feature extraction capabilities. It combines this fine-tuning with the superpixel technique described in the research paper Human Thermal Face Extraction Based on SuperPixel Technique to extract face feature embeddings from thermal image data and compare recognition rates on new images. The model is built with a combination of center and contrastive loss functions.

Key enhancements include:

  • Implementation of Quick-Shift and Otsu's thresholding methods to improve segmentation and feature extraction.

  • Use of combined center loss and contrastive loss functions to enhance the discriminative power of the feature embeddings.


Table of Contents

  1. Installation
  2. Usage
  3. Dataset
  4. Methodology
  5. Experiments and Results
  6. Contributing
  7. License
  8. Acknowledgments

Installation

Basic prerequisites

  • Python 3.x
  • TensorFlow
  • Keras
  • OpenCV
  • NumPy
  • Matplotlib
  • Scikit-learn
  • Jupyter Notebook

Setup

Clone the repository:

git clone https://github.com/neyedhayo/thermal-face-feature-extraction-model.git

# Navigate to the project directory
cd thermal-face-feature-extraction-model

# Install required Python packages
pip install -r requirements.txt

Usage

Running the Notebooks

  • Preprocessing: Run the preprocessing.py script to prepare the data for training.
  • Model Training: Use the finetunedVGG16_+_superpixel_embeddingfeatures.ipynb notebook to train the VGG16 model with the thermal images.
  • Superpixel Technique: Execute the superpixel.ipynb notebook to apply the superpixel technique.
  • Testing: The embedding_extraction_test.ipynb notebook contains the evaluation and testing steps.

Dataset

The project uses the Terravic Facial IR Database, which contains thermal images of human faces. The dataset can be downloaded from the Terravic Facial IR Database page.

Methodology

1.1. Finetuning VGG16 with Transfer Learning

The proposed method fine-tunes the VGG16 architecture by retaining the first 10 layers, discarding the last 9, and adding custom layers on top of the base model; only the last 3 layers are trained. The process involves:

  • Using a max-pooling layer after the 10th layer.
  • Applying batch-normalization.
  • Adding a softmax classifier.

The training steps (sketched in code after this list) include:

  • Loading Pre-trained VGG16: Without the top layers.
  • Adding Custom Layers: Including GlobalAveragePooling, Dense, and Dropout layers.
  • Compilation: Using the Adam optimizer and loss functions (center and contrastive loss).
  • Training: On the preprocessed thermal images.
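As a rough illustration of these steps, the sketch below truncates a pre-trained VGG16 at its 10th layer, adds a custom head, and freezes all but the last 3 layers. It assumes Keras/TensorFlow 2.x; the class count, embedding width, input size, and exact truncation index are placeholders, not values taken from the repository's notebooks.

# Minimal sketch of the fine-tuning setup (assumed values, not the exact notebook code).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 20           # assumption: number of identities in the dataset
IMG_SHAPE = (224, 224, 3)  # assumption: VGG16's default input size

base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)

# Retain only the first 10 layers of the pre-trained network (index is approximate).
truncated = models.Model(inputs=base.input, outputs=base.layers[10].output)

x = layers.MaxPooling2D()(truncated.output)        # max-pooling after the 10th layer
x = layers.BatchNormalization()(x)                 # batch normalization
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)        # assumption: 256-d embedding
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)  # softmax classifier

model = models.Model(inputs=truncated.input, outputs=outputs)

# Train only the last 3 layers; freeze everything else.
for layer in model.layers[:-3]:
    layer.trainable = False

model.compile(optimizer="adam",
              loss="categorical_crossentropy",   # center/contrastive losses are added on top
              metrics=["accuracy"])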

1.2. Superpixel Technique

The superpixel technique used in this project is based on the Quick-Shift method, which segments the image into meaningful regions and reduces computational complexity. The main steps are outlined below, followed by a short code sketch.

  • Parameter Tuning: Adjusting ratio, kernel size, and maximum distance.

  • Superpixel Generation: Using the Quick-Shift algorithm.

  • Thresholding: Applying Otsu's thresholding to convert superpixels into binary images.

For more technical details, refer to the paper Human Thermal Face Extraction Based on SuperPixel Technique.
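As an illustration, the sketch below applies Quick-Shift superpixels and Otsu's thresholding with scikit-image. It is an assumed pipeline with placeholder parameter values and file path, not the repository's exact code.

# Minimal sketch of the superpixel step, assuming scikit-image is available.
# The parameter values and the "thermal_face.png" path are placeholders.
import numpy as np
from skimage import io, color
from skimage.segmentation import quickshift
from skimage.filters import threshold_otsu

image = io.imread("thermal_face.png")
if image.ndim == 2:                      # thermal frames are often single-channel
    image = color.gray2rgb(image)        # quickshift expects an RGB image

# Quick-Shift with the tunable parameters named above (illustrative values).
segments = quickshift(image, ratio=0.5, kernel_size=5, max_dist=10)

# Replace each superpixel with its mean intensity, then binarize with Otsu.
gray = color.rgb2gray(image)
mean_map = np.zeros_like(gray)
for label in np.unique(segments):
    mask = segments == label
    mean_map[mask] = gray[mask].mean()

binary = mean_map > threshold_otsu(mean_map)   # binary face/background mask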

1.3. Loss Functions

The model combines center loss and contrastive loss to improve the discriminative ability of the feature embeddings, which are crucial for accurate face recognition.
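For reference, a minimal sketch of the two losses is given below using their standard formulations: center loss pulls embeddings toward per-class centers, while contrastive loss operates on embedding pairs. The weighting, margin, and variable names are assumptions, not the repository's settings.

# Minimal sketch of the combined losses, assuming TensorFlow 2.x.
import tensorflow as tf

def center_loss(embeddings, labels, centers):
    """Squared distance between each embedding and its class center."""
    picked = tf.gather(centers, labels)                        # (batch, dim)
    return 0.5 * tf.reduce_mean(
        tf.reduce_sum(tf.square(embeddings - picked), axis=1))

def contrastive_loss(emb_a, emb_b, is_same, margin=1.0):
    """Pull matching pairs together; push mismatched pairs beyond the margin."""
    d = tf.sqrt(tf.reduce_sum(tf.square(emb_a - emb_b), axis=1) + 1e-9)
    same = tf.cast(is_same, tf.float32)
    return tf.reduce_mean(same * tf.square(d) +
                          (1.0 - same) * tf.square(tf.maximum(margin - d, 0.0)))

# Example combination (the 0.5 weights are assumptions):
# total_loss = classification_loss + 0.5 * center_loss(...) + 0.5 * contrastive_loss(...)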

Experiments and Results

The project tested the model under various challenging conditions using the Terravic Facial IR Database, achieving promising results with high recognition rates even under occlusions like glasses and hats. Details of the experimental setup and results can be found in the corresponding Jupyter notebooks in the repository.

Contributing

Contributions to this project are welcome. Please fork the repository and submit pull requests with your features or fixes.

  1. Fork it (https://github.com/neyedhayo/thermal-face-feature-extraction-model/fork)
  2. Create your feature branch (git checkout -b feature/fooBar)
  3. Commit your changes (git commit -am 'Add some fooBar')
  4. Push to the branch (git push origin feature/fooBar)
  5. Create a new Pull Request

License

Distributed under the Apache-2.0 License. See LICENSE for more information.

Acknowledgments
