Note: This repository was archived by the owner on May 3, 2023, and is now read-only.
Simple Repfit ML Deployment

This is a simple Repfit ML model deployment using the Flask microframework.

Getting Started

1. Create a Virtual Environment (venv)

Create a virtual environment to install the packages listed in the requirement.txt file. Refer to this guide for instructions: https://gist.github.com/ryumada/c22133988fd1c22a66e4ed1b23eca233
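The step above can be sketched as follows on Linux/macOS (the directory name venv is an arbitrary choice, not mandated by the repository):

```shell
# Create a virtual environment in ./venv and activate it.
python3 -m venv venv
. venv/bin/activate

# On Windows, activate with: venv\Scripts\activate
```

Once activated, pip installs packages into the environment instead of the system Python.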

2. Install the Required Packages

Don't forget to activate the virtual environment before running this command.

pip install -r requirement.txt

This will take a while because of the large TensorFlow package (around 500 MB).

3. Run the Server

This command starts Flask's development server.

flask run

4. Test the Server

A shell script named RUN_xxxx.sh is provided to test whether the server is running properly.

You need to add execute permission to this file before you can run it.

chmod +x RUN_xxxx.sh

Then, you can execute the file.

./RUN_xxxx.sh

The server is working properly if the script's request returns a successful response.
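The script's exact contents are not shown here; a hypothetical sketch of such a test script might look like the following, assuming the app exposes a /predict endpoint on the default port 5000 (both the endpoint path and the JSON payload are assumptions, not taken from the repository):

```shell
#!/bin/sh
# Send a hypothetical prediction request to the local Flask server.
# Adjust URL and payload to match the actual API.
URL="http://localhost:5000/predict"
curl -s -X POST "$URL" \
     -H "Content-Type: application/json" \
     -d '{"instances": [[1.0, 2.0, 3.0]]}' \
  || echo "server not reachable at $URL"
```

If the server is up, the model's JSON response is printed; otherwise the fallback message tells you the request never reached it.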
