An example implementation of a real-time facial recognition server that processes WebRTC streams from web browsers, detecting and encoding faces. See this blog post for an in-depth explanation.
A demo application is available online. It has no backend and runs on pre-built data made from the Labeled Faces in the Wild dataset (the "people with names starting with A" subset), processed by the same algorithm that is used if you run the backend yourself.
Run:

```
docker-compose up --force-recreate --abort-on-container-exit
```

- Wait for an "Application is running!" message;
- Allow invalid certificates for resources loaded from localhost (see `chrome://flags/#allow-insecure-localhost`);
- Navigate to `localhost:3000`.
- `app/` - a Next.js client application and a backend that communicates with the worker and the WebRTC server;
- `worker/` - a stateless Python server that implements a facial recognition algorithm.
The WebRTC server is based on OpenVidu.
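For illustration, the worker's detect-and-encode step can be sketched in a few lines with the face_recognition library credited below. This is a minimal sketch, not the actual worker code; the function names and the 0.6 distance threshold are assumptions (0.6 is the library's conventional default tolerance).

```python
import face_recognition
import numpy as np

def encode_faces(frame):
    """Detect faces in an RGB frame and return one 128-d encoding per face."""
    # Bounding boxes as (top, right, bottom, left) tuples.
    locations = face_recognition.face_locations(frame)
    return face_recognition.face_encodings(frame, known_face_locations=locations)

def best_match(encoding, known_encodings, names):
    """Return the name of the closest known face, or None if none is close enough."""
    if not known_encodings:
        return None
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = int(np.argmin(distances))
    # 0.6 is the library's default tolerance for a match (an assumption here).
    return names[best] if distances[best] <= 0.6 else None
```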
It is possible to run the demo application without a backend and with a custom pre-built image set. To do that, place your `.jpg` images in `worker/data` and run `data.sh` from the `worker` directory. The processed data will be saved to a JSON file that you need to copy to `app/data.json`. A web application started with the `STANDALONE=1` environment variable will use that JSON file instead of processing webcam data on the backend.
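For reference, a rough Python equivalent of this pre-building step might look like the sketch below. This is not the actual `data.sh` implementation; the output schema (image name mapped to a single 128-d encoding) and the `data.json` output path are assumptions for illustration only.

```python
import json
from pathlib import Path

import face_recognition

DATA_DIR = Path("worker/data")  # directory holding your .jpg images
OUTPUT = Path("data.json")      # hypothetical output path; copy the result to app/data.json

entries = {}
for image_path in sorted(DATA_DIR.glob("*.jpg")):
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        # Assumed schema: file stem -> the first face's encoding as a plain list.
        entries[image_path.stem] = encodings[0].tolist()

OUTPUT.write_text(json.dumps(entries))
```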
- Face recognition library by Adam Geitgey;
- Labeled Faces in the Wild dataset used in the demo application;
- Bootstrap theme by Start Bootstrap;
- Test card image by Ebnz.