

Logo

Fasecure

View Demo · Report Bug · Request Feature

  • Our trained model can be downloaded here
  • The pretrained model can be downloaded here
  • The dataset we used for evaluation can be downloaded here
Table of Contents
  1. About The Project
  2. Getting Started
  3. Training and Evaluation
  4. Running Fasecure
  5. Issues
  6. License
  7. Contact
  8. Acknowledgements

About The Project

Product Screenshot

Fasecure is an application that simulates an access control system based on face recognition. The project provides the complete pipeline for training a model on a dataset of your own choice. On top of that, the application uses the trained model as the core of its facial recognition backend logic.

Fasecure was developed in the context of the advanced practical course "Application Challenges for Machine Learning on the example of IBM Power AI" at the Technical University of Munich. Our main task was to build a complete facial recognition system.

Face Recognition Pipeline

The main focus of this project is the implementation of face recognition. For this we used the approach described in the paper "FaceNet: A Unified Embedding for Face Recognition and Clustering".

Additionally, we implemented all tasks into which face recognition can be broken down: Detection, Alignment, Embedding and Recognition/Registration.
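
The following is a minimal, illustrative sketch of the recognition/registration idea in a FaceNet-style system: registered identities are stored as embedding vectors, and a query embedding is accepted only if its nearest registered embedding lies within a distance threshold. The function names, the 128-dimensional embedding size and the threshold value are assumptions for illustration, not Fasecure's actual API.

import numpy as np

database = {}  # registered name -> embedding vector

def register(name, embedding):
    # store an L2-normalised embedding for the new identity
    database[name] = embedding / np.linalg.norm(embedding)

def recognize(embedding, threshold=1.0):
    # compare the query against every registered embedding
    embedding = embedding / np.linalg.norm(embedding)
    best_name, best_dist = None, float("inf")
    for name, ref in database.items():
        dist = np.linalg.norm(embedding - ref)  # Euclidean distance in embedding space
        if dist < best_dist:
            best_name, best_dist = name, dist
    # accept only if the closest identity is within the threshold
    return best_name if best_dist <= threshold else None

# toy usage with random vectors standing in for real embeddings
rng = np.random.default_rng(0)
alice = rng.normal(size=128)
register("alice", alice)
print(recognize(alice + 0.01 * rng.normal(size=128)))  # -> "alice"
print(recognize(rng.normal(size=128)))                 # almost certainly -> None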

Built With

Getting Started

$ git clone https://github.com/Neihtq/IBM-labcourse.git

and you are ready to go.

Prerequisites

You can install all dependencies easily with pip. Simply run:

$ pip install -r requirements.txt

followed by

$ cd backend
$ pip install -e .

to install the Fasecure backend package.

Also make sure to have a working webcam.

Training and Evaluation

Training data

The VGG Face Dataset consists of multiple images of 2,622 distinct identities; overall the dataset takes about 69 GB of storage. From it, triplets (anchor, positive, negative) were generated and fed into the triplet loss function for learning.
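
As a rough illustration of that objective (not the project's actual training code), the following computes a triplet loss with PyTorch; the batch size, embedding dimension and margin are assumptions.

import torch
import torch.nn as nn

# Hypothetical batch of embeddings: anchor and positive belong to the same
# identity, negative to a different one. 128-d embeddings are assumed.
anchor   = torch.randn(32, 128, requires_grad=True)
positive = torch.randn(32, 128, requires_grad=True)
negative = torch.randn(32, 128, requires_grad=True)

# Triplet loss as in the FaceNet paper: push the anchor-positive distance
# below the anchor-negative distance by at least a margin.
criterion = nn.TripletMarginLoss(margin=0.2, p=2)
loss = criterion(anchor, positive, negative)
loss.backward()  # in real training this would update the embedding network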

Training

Please refer to the wiki page on how to train the model.

Evaluation Data

Labeled Faces in the Wild (LFW) was used for evaluating both the embedding and the recognition pipeline. On our local machines, we used DeepFace beforehand to crop and align the faces in each image.
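
A minimal sketch of that preprocessing step, assuming a recent DeepFace release that provides extract_faces (older releases exposed a detectFace function instead); the file paths and detector backend are placeholders, and the exact return format can differ between versions:

from deepface import DeepFace
from PIL import Image
import numpy as np

# detect, crop and align the face in a single LFW image
faces = DeepFace.extract_faces(
    img_path="lfw/Some_Person/Some_Person_0001.jpg",  # placeholder path
    detector_backend="mtcnn",
    align=True,
)

# each entry holds the aligned face as a float array scaled to [0, 1]
face = (faces[0]["face"] * 255).astype(np.uint8)
Image.fromarray(face).save("aligned.jpg")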

Evaluation

Please refer to the wiki page on how to evaluate the pipeline and its model.

Face Spoofing

We also came up with the idea of integrating face spoofing/liveness detection into the pipeline. However, we did not have enough time to develop a model with sufficient accuracy. Nonetheless, the face spoofing module can be tested separately:

$ cd backend
$ python face_recognition/face_spoofing.py

Running Fasecure

Backend

Make sure to place the model for face recognition in backend/face_recognition/results/models/.
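
A quick way to check that the model file is in place and loadable before starting the server (a sketch assuming the downloaded file is a PyTorch checkpoint; the file name is a placeholder):

import os
import torch

path = "backend/face_recognition/results/models/model.pt"  # placeholder file name
assert os.path.exists(path), f"model not found at {path}"
checkpoint = torch.load(path, map_location="cpu")  # a state dict or a full module
print(type(checkpoint))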

Run:

$ cd backend
$ python server.py

Frontend

Run:

$ cd frontend
$ python view.py

Issues

See the open issues for a list of proposed features (and known issues).

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Cao Son Ngoc Pham - @cacao - caoson@hotmail.de

Quang Thien Nguyen - @Neihtq - q.thien.nguyen@outlook.de

Simon Felderer - @simonfelderer - simon.felderer@tum.de

Tobias Zeulner - @Zeulni - ge93yan@mytum.de

Acknowledgements
