This is an example project demonstrating how to use Hatchet with FastAPI.
Before running this project, make sure you have the following:
- Python 3.10 or higher installed on your machine.
- Poetry package manager installed. You can install it by running `pip install poetry`, or by following the instructions in the Poetry docs.
- (Optional) If you would like to run the example frontend, Node.js, which can be installed from the Node.js website.
- Create a `.env` file in the `./backend` directory and set the required environment variables. This requires the `HATCHET_CLIENT_TOKEN` variable created in the Getting Started README. You will also need an `OPENAI_API_KEY`, which can be created on the OpenAI website.
If you're running Hatchet locally without TLS:

```
HATCHET_CLIENT_TLS_STRATEGY=none
HATCHET_CLIENT_TOKEN="<token>"
OPENAI_API_KEY="<openai-key>"
```
If you're using Hatchet Cloud:

```
HATCHET_CLIENT_TOKEN="<token>"
OPENAI_API_KEY="<openai-key>"
```
- Open a terminal and navigate to the project backend directory (`/fast-api-react/backend`).
- Run the following command to install the project dependencies:

  ```
  poetry install
  ```
To start the FastAPI server, run the following command in the terminal:

```
poetry run api
```

In a separate terminal, start the Hatchet worker by running the following command:

```
poetry run hatchet
```
We've included a basic chat engine frontend to play with the example workflow. To run it:

- Open a new terminal window and `cd` into the `fast-api-react/frontend` directory.
- Run `npm install`.
- Run `npm start`.

By default, you can access the application in your browser at `http://localhost:3000`, or by following the instructions in the terminal window.
The project contains two example workflows in the `./backend/src/workflows` directory. These workflows are registered with Hatchet in `./backend/src/workflows/main.py`, which is started when running `poetry run hatchet`.
- Simple Response Generation: a single-step workflow that makes a request to OpenAI.
- Basic Retrieval Augmented Generation: a multi-step workflow that loads the contents of a website with Beautiful Soup, reasons about the information, and generates a response with OpenAI.
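The multi-step shape of the RAG workflow can be sketched without the Hatchet SDK as a plain chain of step functions. This is an illustrative stand-in only: the function names (`load_docs`, `reason`, `generate`) are hypothetical, and in the real project each step would be registered with Hatchet and make actual Beautiful Soup and OpenAI calls.

```python
# Simplified sketch of a multi-step RAG-style pipeline. In the real
# workflow each function would be a Hatchet step, and the placeholder
# strings below would be replaced by Beautiful Soup / OpenAI calls.

def load_docs(url: str) -> str:
    # Placeholder for fetching and parsing a page with Beautiful Soup.
    return f"contents of {url}"

def reason(docs: str, question: str) -> str:
    # Placeholder for an LLM call that extracts the relevant facts.
    return f"facts from ({docs}) relevant to: {question}"

def generate(facts: str) -> str:
    # Placeholder for the final OpenAI completion call.
    return f"answer based on {facts}"

def run_workflow(url: str, question: str) -> str:
    # Each step consumes the previous step's output, mirroring how a
    # multi-step Hatchet workflow passes data between steps.
    return generate(reason(load_docs(url), question))
```

The point of splitting the pipeline into steps like this is that each step can be retried or observed independently, which is what running it under Hatchet buys you over a single function call.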
A common design pattern is to start a Hatchet workflow run from a REST API endpoint. In this way, you can handle authentication and authorization as you normally would and let Hatchet handle execution. The simple FastAPI example can be found at `./backend/src/api/main.py`.
The `POST /message` endpoint initiates a Hatchet workflow run, using the message body as input. Because Hatchet operates asynchronously, this endpoint immediately returns a run ID, which clients can use as a reference to track the status and results of the initiated run.
After initiating a workflow run and receiving a run ID, clients can subscribe to updates with a `GET /message/{id}` request. This allows clients to receive real-time notifications and results from the asynchronous Hatchet worker associated with their specific run ID.