FastAPI wrapper for Faster-Whisper
- Python 3.8+
- ~1.4 GB Docker image
- 500 MB+ RAM for the container
- Set variables: create a `.env` file with:

```
MODEL_SIZE="tiny"
PORT="8080"
```
- Run the container with `docker-compose`:

```
docker-compose up -d
```
- Wait while the Whisper model downloads on first start.
- http://127.0.0.1:8080/docs - FastAPI auto-generated documentation
- http://127.0.0.1:8080/transcribe - POST audio for transcription
- http://127.0.0.1:8080/health - healthcheck endpoint
```shell
curl -X 'POST' \
  'http://127.0.0.1:8080/transcribe' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -F 'audio=@voice.ogg;type=audio/ogg'
```
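The same request can be made from Python with only the standard library. This is a sketch: `encode_multipart` and `transcribe` are helper names introduced here (not part of this project), and the field name `audio` and default URL follow the curl example above.

```python
import io
import json
import urllib.request
import uuid


def encode_multipart(field: str, filename: str, data: bytes,
                     content_type: str = "audio/ogg"):
    """Build a multipart/form-data body by hand (no third-party deps)."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(data)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"


def transcribe(path: str,
               url: str = "http://127.0.0.1:8080/transcribe") -> dict:
    """POST an audio file to the /transcribe endpoint, return parsed JSON."""
    with open(path, "rb") as f:
        payload, content_type = encode_multipart("audio", path, f.read())
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": content_type, "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(transcribe("voice.ogg"))
```

For real projects the `requests` or `httpx` libraries make the multipart encoding a one-liner; the manual version above just avoids extra dependencies.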
Success response:

```json
{
  "status": "ok",
  "response": "transcribed words from audio"
}
```
Error response:

```json
{
  "status": "error",
  "response": "some error info"
}
```