🤗 Hugging Face - Here
📚 Product & Resources - Here
🛟 Help Center - Here
💼 KYC Verification Demo - Here
🙋♀️ Docker Hub - Here
```bash
sudo docker pull kbyai/face-liveness-detection:latest
sudo docker run -e LICENSE="xxxxx" -p 8080:8080 -p 9000:9000 kbyai/face-liveness-detection:latest
```
This repository demonstrates an advanced face liveness detection technology implemented via a Dockerized Flask API. It includes features that allow for testing face liveness detection using both image files and base64-encoded images.

In this repo, we integrated KBY-AI's Face Liveness Detection solution into a Linux Server SDK delivered as a Docker container. We can customize the SDK to align with your specific requirements.
| 🔽 Face Liveness Detection | Face Recognition |
|---|---|
| Face Detection | Face Detection |
| Face Liveness Detection | Face Recognition (Face Matching or Face Comparison) |
| Pose Estimation | Pose Estimation |
| 68 points Face Landmark Detection | 68 points Face Landmark Detection |
| Face Quality Calculation | Face Quality Calculation |
| Face Occlusion Detection | Face Occlusion Detection |
| Eye Closure Detection | Eye Closure Detection |
| Mouth Opening Check | Mouth Opening Check |
| No. | Repository | SDK Details |
|---|---|---|
| ➡️ | Face Liveness Detection - Linux | Face Liveness Detection |
| 2 | Face Liveness Detection - Windows | Face Liveness Detection |
| 3 | Face Recognition - Linux | Face Recognition |
| 4 | Face Recognition - Windows | Face Recognition |
To get the Face SDK (mobile), please visit our products here:
You can test the SDK using images from the following URL:
https://web.kby-ai.com
To test the API, you can use Postman. Here are the endpoints for testing:

- Test with an image file: send a POST request to http://18.221.33.238:8080/check_liveness.
- Test with a base64-encoded image: send a POST request to http://18.221.33.238:8080/check_liveness_base64.

You can download the Postman collection to easily access and use these endpoints: click here
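If you prefer scripting the test instead of Postman, the sketch below calls the two endpoints above with Python `requests`. The multipart field name (`file`) and the JSON key (`base64`) are assumptions; check the Postman collection or `app.py` for the exact names the API expects.

```python
import base64
import requests

BASE_URL = "http://18.221.33.238:8080"  # hosted demo endpoint listed above

# Test with an image file (multipart field name "file" is an assumption).
with open("test_image.jpg", "rb") as f:
    resp = requests.post(f"{BASE_URL}/check_liveness", files={"file": f})
print(resp.json())

# Test with a base64-encoded image (JSON key "base64" is an assumption).
with open("test_image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")
resp = requests.post(f"{BASE_URL}/check_liveness_base64", json={"base64": encoded})
print(resp.json())
```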
This project uses KBY-AI's Face Liveness Detection Server SDK, which requires a license per machine.

- The code below shows how to use the license:

  FaceLivenessDetection-Docker/app.py, lines 36 to 48 in 6aafd08

- To request the license, please provide us with the machine code obtained from the getMachineCode function.
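As a rough illustration of the flow referenced above (the actual code lives in app.py, lines 36 to 48): print the machine code so it can be sent to us, then activate the SDK with the license taken from the LICENSE environment variable or license.txt. The module name `facesdk` and variable names here are assumptions, not the real code.

```python
import os

from facesdk import getMachineCode, setActivation  # module name is an assumption

# Print the machine code; send this value to KBY-AI when requesting a license.
machineCode = getMachineCode()
print("machineCode:", machineCode.decode("utf-8"))

# The license can come from the LICENSE environment variable or from license.txt.
license_key = os.environ.get("LICENSE", "")
if not license_key and os.path.exists("license.txt"):
    with open("license.txt") as f:
        license_key = f.read().strip()

ret = setActivation(license_key.encode("utf-8"))
print("activation result:", ret)  # 0 (SDK_SUCCESS) means the license was accepted
```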
🧙 Email: contact@kby-ai.com
🧙 Telegram: @kbyai
🧙 WhatsApp: +19092802609
🧙 Skype: live:.cid.66e2522354b1049b
🧙 Facebook: https://www.facebook.com/KBYAI
- CPU: 2 cores or more (Recommended: 8 cores)
- RAM: 4 GB or more (Recommended: 8 GB)
- HDD: 4 GB or more (Recommended: 8 GB)
- OS: Ubuntu 20.04 or later
- Dependency: OpenVINO™ Runtime (Version: 2022.3)
- Clone the project:

  ```bash
  git clone https://github.com/kby-ai/FaceLivenessDetection-Docker.git
  ```

- Download the model from Google Drive: click here

  ```bash
  cd FaceLivenessDetection-Docker
  wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1bYl0p5uHXuTQoETdbRwYLpd3huOqA3wY' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1bYl0p5uHXuTQoETdbRwYLpd3huOqA3wY" -O data.zip && rm -rf /tmp/cookies.txt
  unzip data.zip
  ```
- Build the Docker image:

  ```bash
  sudo docker build --pull --rm -f Dockerfile -t kby-ai-live:latest .
  ```

- Run the Docker container:

  ```bash
  sudo docker run -v ./license.txt:/home/openvino/kby-ai-live/license.txt -p 8080:8080 kby-ai-live
  ```
- Send us the machine code and we will give you a license key. After that, update the license.txt file by overwriting it with the license key you received, then run the Docker container again.

- To test the API, you can use Postman. Here are the endpoints for testing:

  - Test with an image file: send a POST request to http://{xx.xx.xx.xx}:8080/check_liveness.
  - Test with a base64-encoded image: send a POST request to http://{xx.xx.xx.xx}:8080/check_liveness_base64.

  You can download the Postman collection to easily access and use these endpoints: click here
- Set up Gradio

  Ensure that you have the necessary dependencies installed. Gradio requires Python 3.6 or above. You can install Gradio using pip by running the following command:

  ```bash
  pip install gradio
  ```

- Run the demo

  Run it using the following commands:

  ```bash
  cd gradio
  python demo.py
  ```

- You can then test it at the following URL:

  http://127.0.0.1:9000
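For reference, a minimal Gradio front end along these lines (not the actual gradio/demo.py, whose field names and layout may differ) could forward an uploaded image to the running API and display the JSON response:

```python
import requests
import gradio as gr

API_URL = "http://127.0.0.1:8080/check_liveness"  # container started in the steps above

def check_liveness(image_path):
    # Forward the uploaded image to the Flask API; the "file" field name is an assumption.
    with open(image_path, "rb") as f:
        resp = requests.post(API_URL, files={"file": f})
    return resp.json()

demo = gr.Interface(
    fn=check_liveness,
    inputs=gr.Image(type="filepath", label="Input image"),
    outputs=gr.JSON(label="Liveness result"),
    title="KBY-AI Face Liveness Detection",
)

demo.launch(server_name="0.0.0.0", server_port=9000)
```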
- Step One

  First, obtain the machine code for activation and request a license based on the machine code.

  ```python
  machineCode = getMachineCode()
  print("machineCode: ", machineCode.decode('utf-8'))
  ```

- Step Two

  Next, activate the SDK using the received license.

  ```python
  setActivation(license.encode('utf-8'))
  ```

  If activation is successful, the return value will be SDK_SUCCESS. Otherwise, an error value will be returned.

- Step Three

  After activation, call the initialization function of the SDK.

  ```python
  initSDK("data".encode('utf-8'))
  ```

  The first parameter is the path to the model. If initialization is successful, the return value will be SDK_SUCCESS. Otherwise, an error value will be returned.
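Putting the three steps together, a hedged end-to-end initialization might look like the sketch below. The `facesdk` module name and the error handling are illustrative additions; the return codes follow the SDK_ERROR table in the next section.

```python
from facesdk import getMachineCode, setActivation, initSDK  # module name is an assumption

# Step One: obtain the machine code and request a license for it from KBY-AI.
machineCode = getMachineCode()
print("machineCode:", machineCode.decode("utf-8"))

# Step Two: activate the SDK with the license you received.
license = open("license.txt").read().strip()
ret = setActivation(license.encode("utf-8"))
if ret != 0:  # 0 == SDK_SUCCESS
    raise RuntimeError(f"Activation failed with error code {ret}")

# Step Three: initialize the SDK with the path to the downloaded model directory.
ret = initSDK("data".encode("utf-8"))
if ret != 0:
    raise RuntimeError(f"SDK initialization failed with error code {ret}")
```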
- SDK_ERROR

  This enumeration represents the return value of the initSDK and setActivation functions.

  | Feature | Value | Name |
  |---|---|---|
  | Successful activation or initialization | 0 | SDK_SUCCESS |
  | License key error | -1 | SDK_LICENSE_KEY_ERROR |
  | AppID error (not used in Server SDK) | -2 | SDK_LICENSE_APPID_ERROR |
  | License expiration | -3 | SDK_LICENSE_EXPIRED |
  | Not activated | -4 | SDK_NO_ACTIVATED |
  | Failed to initialize SDK | -5 | SDK_INIT_ERROR |
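If you prefer named constants over raw integers in your own client code, a small enum mirroring the table above might look like this (illustrative only; the SDK itself returns plain integers):

```python
from enum import IntEnum

class SdkError(IntEnum):
    """Return codes of initSDK and setActivation, per the table above."""
    SDK_SUCCESS = 0
    SDK_LICENSE_KEY_ERROR = -1
    SDK_LICENSE_APPID_ERROR = -2  # AppID error; not used in the Server SDK
    SDK_LICENSE_EXPIRED = -3
    SDK_NO_ACTIVATED = -4
    SDK_INIT_ERROR = -5

# Example: turn a raw return code into a readable name.
print(SdkError(-3).name)  # SDK_LICENSE_EXPIRED
```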
- FaceBox

  This structure represents the output of the face detection function.

  | Feature | Type | Name |
  |---|---|---|
  | Face rectangle | int | x1, y1, x2, y2 |
  | Liveness score (0 ~ 1) | float | liveness |
  | Face angles (-45 ~ 45) | float | yaw, roll, pitch |
  | Face quality (0 ~ 1) | float | face_quality |
  | Face luminance (0 ~ 255) | float | face_luminance |
  | Eye distance (pixels) | float | eye_dist |
  | Eye closure (0 ~ 1) | float | left_eye_closed, right_eye_closed |
  | Face occlusion (0 ~ 1) | float | face_occlusion |
  | Mouth opening (0 ~ 1) | float | mouth_opened |
  | 68 points facial landmark | float[] | landmarks_68 |
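Since the detection call below allocates `(FaceBox * maxFaceCount)()`, FaceBox appears to be a ctypes structure. A sketch matching the table might look as follows; the exact field order and the length of landmarks_68 (assumed here to be 68 x/y pairs, i.e. 136 floats) should be taken from the SDK's own Python bindings rather than this illustration.

```python
from ctypes import Structure, c_int, c_float

class FaceBox(Structure):
    """Illustrative layout of the face detection output; see the SDK bindings for the real one."""
    _fields_ = [
        ("x1", c_int), ("y1", c_int), ("x2", c_int), ("y2", c_int),  # face rectangle
        ("liveness", c_float),                                        # liveness score (0 ~ 1)
        ("yaw", c_float), ("roll", c_float), ("pitch", c_float),      # face angles (-45 ~ 45)
        ("face_quality", c_float),                                    # face quality (0 ~ 1)
        ("face_luminance", c_float),                                  # face luminance (0 ~ 255)
        ("eye_dist", c_float),                                        # eye distance in pixels
        ("left_eye_closed", c_float), ("right_eye_closed", c_float),  # eye closure (0 ~ 1)
        ("face_occlusion", c_float),                                  # face occlusion (0 ~ 1)
        ("mouth_opened", c_float),                                    # mouth opening (0 ~ 1)
        ("landmarks_68", c_float * 136),                              # 68 (x, y) landmark pairs (assumed length)
    ]
```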
- Face Detection

  The Face SDK provides a single API for detecting faces, performing liveness detection, determining face orientation (yaw, roll, pitch), assessing face quality, detecting facial occlusion, eye closure, and mouth opening, and identifying facial landmarks.

  The function can be used as follows:

  ```python
  faceBoxes = (FaceBox * maxFaceCount)()
  faceCount = faceDetection(image_np, image_np.shape[1], image_np.shape[0], faceBoxes, maxFaceCount)
  ```

  This function requires 5 parameters.

  - The first parameter: the byte array of the RGB image buffer.
  - The second parameter: the width of the image.
  - The third parameter: the height of the image.
  - The fourth parameter: the FaceBox array allocated with maxFaceCount for storing the detected faces.
  - The fifth parameter: the count allocated for the maximum FaceBox objects.

  The function returns the count of the detected faces.
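A fuller usage sketch, assuming the image is loaded with OpenCV and the SDK has already been activated and initialized as above. The faceDetection call follows the parameter list; the file name, maxFaceCount value, and the import of FaceBox/faceDetection from the SDK bindings are illustrative.

```python
import cv2

# FaceBox and faceDetection come from the SDK's Python bindings (module name varies).
maxFaceCount = 10

# Load an image and convert it to the RGB buffer the SDK expects.
image = cv2.imread("test_image.jpg")
image_np = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

faceBoxes = (FaceBox * maxFaceCount)()
faceCount = faceDetection(image_np, image_np.shape[1], image_np.shape[0], faceBoxes, maxFaceCount)

for i in range(faceCount):
    box = faceBoxes[i]
    print(f"face {i}: rect=({box.x1}, {box.y1}, {box.x2}, {box.y2}), "
          f"liveness={box.liveness:.3f}, quality={box.face_quality:.3f}")
```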
The default thresholds are as follows:

FaceLivenessDetection-Docker/app.py, lines 17 to 29 in 1e89ec0
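As an illustration of how such thresholds are typically applied (the actual names and default values live in app.py, lines 17 to 29; the numbers below are placeholders, not the real defaults):

```python
# Placeholder thresholds for illustration; see app.py for the real defaults.
LIVENESS_THRESHOLD = 0.5
FACE_QUALITY_THRESHOLD = 0.5

def classify(box):
    """Map a FaceBox result onto a coarse decision using the thresholds above."""
    if box.face_quality < FACE_QUALITY_THRESHOLD:
        return "low quality"
    return "real" if box.liveness >= LIVENESS_THRESHOLD else "spoof"
```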