- Clone this repository:

  ```shell
  git clone https://github.com/TiagoPrata/FastAPI-TensorFlow-Docker.git
  ```
- Start the containers:

  ```shell
  docker-compose -f docker-compose.prod.yml up -d
  ```

  Note: edit the yml file to adjust the number of cores before starting the containers.
- That's it! Now go to http://localhost:5000 and use it.
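Besides the browser, the web app can also be called programmatically. The sketch below only builds the request without sending it; the endpoint path and field names are assumptions, not taken from this repo — check the interactive docs at http://localhost:5000/docs for the routes the app actually exposes.

```python
import requests

# Hypothetical endpoint and field names -- check http://localhost:5000/docs
# (FastAPI's auto-generated docs) for the routes this app actually exposes.
PREDICT_URL = "http://localhost:5000/predict"

def build_predict_request(image_bytes: bytes) -> requests.PreparedRequest:
    """Prepare (but do not send) a multipart upload of an image."""
    req = requests.Request(
        "POST",
        PREDICT_URL,
        files={"image": ("photo.jpg", image_bytes, "image/jpeg")},
    )
    return req.prepare()

prepared = build_predict_request(b"\xff\xd8\xff\xe0")  # fake JPEG header bytes
```

To actually send the request: `requests.Session().send(prepared)`.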
- Python 3.8+
- Docker
- Docker Compose
- VSCode (optional, but recommended)
With all dependencies in place, open the cloned folder and create a new virtual environment:

```shell
pipenv install
```

Always use `pipenv install <package>` (or `pipenv uninstall <package>`) to add or remove a package. After any package update, re-export the Python dependencies.
- Export the Python dependencies to a requirements.txt file:

  ```shell
  pipenv lock --requirements > ./apps/pyserver/requirements.txt
  ```
- Clear the former Docker images.
Start the new containers:
docker-compose -f docker-compose.prod.yml up -d
- In the `main.py` file, replace the constant `TENSORFLOW_URL`.

  From:

  ```python
  TENSORFLOW_URL = "http://tensorflow:8501/v1/models/rfcn:predict"
  ```

  To:

  ```python
  TENSORFLOW_URL = "http://localhost:8501/v1/models/rfcn:predict"
  ```
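Hand-editing the constant for each environment is easy to forget. One option (an assumption, not part of this repo) is to read it from an environment variable with the in-Docker hostname as default:

```python
import os

# Assumed pattern (not in the repo): read the URL from the environment so no
# code edit is needed when switching between Docker and local debugging.
TENSORFLOW_URL = os.getenv(
    "TENSORFLOW_URL",
    "http://tensorflow:8501/v1/models/rfcn:predict",  # in-Docker default
)
```

For local debugging you would then export `TENSORFLOW_URL=http://localhost:8501/v1/models/rfcn:predict` instead of editing `main.py`.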
- Start the `intel/intel-optimized-tensorflow-serving:2.4.0` container only:

  ```shell
  docker-compose -f docker-compose.prod.yml up -d tensorflow
  ```
- That's it! You can now debug the `main.py` file as you normally do.
The pyserver application has a Dockerfile configured to enable debugging in Docker:
```dockerfile
[...]

############# Debugger
FROM base as debug
RUN pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org ptvsd

WORKDIR /app
CMD ["python", "-m", "ptvsd", "--host", "0.0.0.0", "--port", "5678", \
     "--wait", "--multiprocess", "-m", \
     "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]

############# Production
FROM base as prod
WORKDIR /app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]
```
So it is possible to start the pyserver container in debug mode:

```shell
docker-compose -f docker-compose.debug.yml up -d
```

Important: when docker-compose is started this way, the application will wait for a Remote Attach.
This repository contains a VSCode configuration file for remote attach on these Docker applications (see ./vscode/launch.json).
Important! By default, in debug mode the applications will NOT start automatically; they will wait for a Remote Attach. To change this behavior, remove the `--wait` parameter from the Dockerfiles.
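For reference, a Remote Attach entry in launch.json typically looks like the sketch below. This is a generic example, not the file shipped in the repo — the `localRoot` path in particular is an assumption, so prefer the repository's own configuration.

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach (pyserver)",
            "type": "python",
            "request": "attach",
            "host": "localhost",
            "port": 5678,
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/apps/pyserver/app",
                    "remoteRoot": "/app"
                }
            ]
        }
    ]
}
```

The port (5678) matches the one exposed by the ptvsd command in the Dockerfile above.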
FastAPI
FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints.
The key features are:
- Fast: Very high performance, on par with NodeJS and Go (thanks to Starlette and Pydantic). One of the fastest Python frameworks available.
- Fast to code: Increase the speed to develop features by about 200% to 300%.*
- Fewer bugs: Reduce about 40% of human (developer) induced errors.*
- Intuitive: Great editor support. Completion everywhere. Less time debugging.
- Easy: Designed to be easy to use and learn. Less time reading docs.
- Short: Minimize code duplication. Multiple features from each parameter declaration. Fewer bugs.
- Robust: Get production-ready code. With automatic interactive documentation.
- Standards-based: Based on (and fully compatible with) the open standards for APIs: OpenAPI (previously known as Swagger) and JSON Schema.
For more details check FastAPI on GitHub.
Pipenv
Pipenv is a tool that aims to bring the best of all packaging worlds (bundler, composer, npm, cargo, yarn, etc.) to the Python world. Windows is a first-class citizen, in our world.
It automatically creates and manages a virtualenv for your projects, as well as adds/removes packages from your Pipfile as you install/uninstall packages. It also generates the ever-important Pipfile.lock, which is used to produce deterministic builds.
The problems that Pipenv seeks to solve are multi-faceted:
- You no longer need to use `pip` and `virtualenv` separately. They work together.
- Managing a `requirements.txt` file can be problematic, so Pipenv uses the upcoming `Pipfile` and `Pipfile.lock` instead, which is superior for basic use cases.
- Hashes are used everywhere, always. Security. Automatically expose security vulnerabilities.
- Give you insight into your dependency graph (e.g. `$ pipenv graph`).
- Streamline development workflow by loading `.env` files.
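The `.env` loading mentioned in the last point is a simple idea; a stdlib-only sketch of it (an illustration of the concept, not pipenv's implementation):

```python
def load_dotenv_text(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments.

    Stdlib-only illustration of .env loading -- not pipenv's implementation.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

loaded = load_dotenv_text('# local settings\nAPI_PORT=5000\nMODEL="rfcn"\n')
```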
For more details check pipenv on GitHub.
Docker
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
For more details check Docker Hub.
TensorFlow
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.
For more details check tensorflow.org.
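The `rfcn:predict` URL used throughout this project is TensorFlow Serving's REST predict endpoint, whose request body is JSON with an `"instances"` list. A sketch of building such a payload (the input shape here is illustrative only; the real model expects full-size image tensors):

```python
import json

def build_predict_body(instances: list) -> str:
    """Serialize model inputs in TensorFlow Serving's REST 'predict' format."""
    return json.dumps({"instances": instances})

# A fake 2x2 RGB "image" as a nested list (shape is illustrative only).
body = build_predict_body([[[[0, 0, 0], [255, 255, 255]],
                            [[255, 0, 0], [0, 255, 0]]]])
```

The string produced this way is what gets POSTed to `http://tensorflow:8501/v1/models/rfcn:predict`.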