Facial recognition web service example

An example implementation of a real-time facial recognition server that processes WebRTC streams from web browsers, detecting and encoding faces in each frame. See this blog post for an in-depth explanation.

Demo 🚀

A demo application is available online. It has no backend and runs on pre-built data generated from the Labeled Faces in the Wild dataset (the "people whose names start with A" subset), processed by the same algorithm that runs if you host the backend yourself.

Usage

  • Run docker-compose up --force-recreate --abort-on-container-exit;
  • Wait for the "Application is running!" message;
  • Allow invalid certificates for resources loaded from localhost (see chrome://flags/#allow-insecure-localhost);
  • Navigate to localhost:3000.

Project structure

  • app/ - a Next.js client application and a backend that communicates with the worker and the WebRTC server;
  • worker/ - a stateless Python server that implements a facial recognition algorithm.

The WebRTC server is based on OpenVidu.
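This README doesn't spell out the worker's recognition step. A common approach in such pipelines (e.g. dlib-based libraries like face_recognition) is to reduce each detected face to a fixed-length encoding vector and match faces by Euclidean distance between encodings; the sketch below illustrates that idea only, with the 128-dimensional vectors and the 0.6 threshold being assumptions, not values taken from this repository:

```python
from typing import Optional

import numpy as np

# Assumption: faces are represented as 128-d encoding vectors, as in
# dlib-based pipelines. The threshold is a commonly used default.
MATCH_THRESHOLD = 0.6

def match_face(known_encodings: np.ndarray, candidate: np.ndarray) -> Optional[int]:
    """Return the index of the closest known face, or None if no match.

    known_encodings: (n, 128) array of stored encodings.
    candidate: (128,) encoding of the face to identify.
    """
    if len(known_encodings) == 0:
        return None
    # Euclidean distance from the candidate to every stored encoding.
    distances = np.linalg.norm(known_encodings - candidate, axis=1)
    best = int(np.argmin(distances))
    return best if distances[best] <= MATCH_THRESHOLD else None

# Toy usage with synthetic vectors:
known = np.vstack([np.zeros(128), np.ones(128)])
print(match_face(known, np.full(128, 0.01)))  # close to the first encoding
```

A nearest-neighbour search like this stays stateless, which fits the worker's design: all persistent data lives in the pre-built JSON, and each request carries everything needed to answer it.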

Use a custom dataset

The demo application can also run without a backend, using a custom pre-built image set. To do that, place your .jpg images in worker/data and run data.sh from the worker directory. The processed data is saved to a JSON file, which you then copy to app/data.json. A web application started with the STANDALONE=1 environment variable uses that JSON file instead of processing webcam data on the backend.
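The exact output format of data.sh isn't documented here. Purely as an illustration of the shape of that step, a hypothetical stand-in that scans worker/data for .jpg files and writes a JSON index might look like this (the real script also runs the recognition algorithm on each image; this sketch only records file names):

```python
import json
from pathlib import Path

def build_index(data_dir: str, out_path: str) -> list:
    """Hypothetical stand-in for data.sh: list the .jpg files in data_dir
    and write a JSON index to out_path. The real script additionally
    computes face encodings for each image."""
    entries = [{"name": p.stem, "file": p.name}
               for p in sorted(Path(data_dir).glob("*.jpg"))]
    Path(out_path).write_text(json.dumps(entries, indent=2))
    return entries

# Usage (paths follow the README's layout):
# build_index("worker/data", "app/data.json")
```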

Credits