Give immediate and accurate answers to common queries using widgets.
```bash
python3 -m venv ./venv
source ./venv/bin/activate
pip install -r requirements.txt
export FLASK_APP=serve.py; export FLASK_ENV=development; flask run
```
This project runs alongside widgets_client.
```
+-- serve.py            # Flask entry point
+-- gpt.py              # GPT utilities
|   +-- widgets
|   |   +-- core.py     # core logic
|   +-- prompts
|   |   +-- classifier.yaml
```
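The classifier prompt lives in prompts/classifier.yaml. Its actual schema isn't shown here; a hypothetical sketch of what such a prompt file might contain (the field names and labels below are illustrative, not the project's real ones):

```yaml
# Hypothetical structure -- the real classifier.yaml may differ.
name: classifier
system: |
  Map the user's question to exactly one widget label from the list
  below. Answer with the label only, or "unsupported" if none apply.
labels:
  - weather       # example label, not necessarily in the project
  - unsupported
```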
- User asks a question
- The LLM classifies the question into a widget label
- If the label is supported, the LLM extracts the parameters needed to evaluate the widget
- The widget evaluates the parameters, which may involve an API call or a function call
- The widget returns the answer to the client
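The flow above can be sketched end to end. This is a minimal illustration with stubs standing in for the LLM calls; the names (`classify_question`, `get_params`, `WIDGETS`) are assumptions, not the project's actual API:

```python
# Registry of supported widgets; each maps extracted params to an answer.
WIDGETS = {
    "weather": lambda params: f"Weather in {params['city']}: sunny",  # stand-in for an API call
}

def classify_question(question: str) -> str:
    # Stand-in for the LLM classifier driven by prompts/classifier.yaml.
    return "weather" if "weather" in question.lower() else "unsupported"

def get_params(question: str, label: str) -> dict:
    # Stand-in for the LLM parameter-extraction step.
    return {"city": question.split()[-1].rstrip("?")}

def answer(question: str):
    label = classify_question(question)
    if label not in WIDGETS:
        return None  # unsupported: caller falls back to a plain LLM answer
    params = get_params(question, label)
    return WIDGETS[label](params)

print(answer("What is the weather in Paris?"))
```

In the real service the two stubbed steps are each an LLM completion, which is what the latency comparison below measures.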
On latency: OpenAI's function-calling model vs. running the steps as separate calls.

Testing: running the classify and extract steps as separate calls took ~40% less time than a single request to the functions model.
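A simple harness like the one below is enough to compare the two strategies. The sleeps are placeholder latencies (assumptions chosen to mirror the observed gap), not real API calls:

```python
import time

def time_call(fn, *args) -> float:
    # Wall-clock duration of a single call.
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Stubs standing in for the two strategies; real code would hit the OpenAI API.
def classify_then_extract(question: str) -> None:
    time.sleep(0.01)  # first completion: classify into a widget label
    time.sleep(0.01)  # second completion: extract widget params

def functions_model(question: str) -> None:
    time.sleep(0.03)  # single function-calling request (slower in our testing)

separate = time_call(classify_then_extract, "q")
combined = time_call(functions_model, "q")
print(f"separate calls were {100 * (1 - separate / combined):.0f}% faster")
```

Swap the stubs for real API calls (and average over many runs) to reproduce the measurement.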