- Overview
- Features
- Structure
- Installation
- Usage
- Hosting
- License
- Authors
## Overview

This repository contains a Minimum Viable Product (MVP) for an AI Query Backend. It simplifies the process of interacting with OpenAI's language models by providing a Python-based backend service.
## Features

| Feature | Description |
|---------|-------------|
| Architecture | The codebase follows a layered architecture with separate directories for API routes, schemas, utilities, and database interactions, promoting modularity and maintainability. |
| Documentation | The repository includes a README file providing a detailed overview of the MVP, its dependencies, and usage instructions. |
| Dependencies | The codebase utilizes essential libraries like `fastapi`, `openai`, `jwt`, `sqlalchemy`, `psycopg2-binary`, and `python-dotenv` for API development, OpenAI integration, authentication, database management, and environment configuration. |
| Modularity | The modular structure promotes easier maintenance and code reusability, with dedicated sections for API routes, data schemas, utility functions, and database interactions. |
| Testing | Includes unit tests using Pytest to ensure the reliability and robustness of the codebase. |
| Performance | Employs efficient coding practices and performance optimization strategies for data processing and response generation. |
| Security | Enhances security through input validation, secure data storage, and authentication using JWT. |
| Version Control | Utilizes Git for version control with automated CI/CD workflows for building and deploying the application. |
| Integrations | Integrates with the OpenAI API for generating responses to user queries. |
| Scalability | The backend is designed to handle increasing user load and data volume, utilizing efficient database management techniques and caching strategies. |
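To make the OpenAI integration and layered design above more concrete, here is a minimal sketch of the kind of helper `api/utils/utils.py` could expose. It assumes the pre-1.0 `openai` Python client listed in the dependencies; the `generate_response` name and signature are illustrative, not taken from the repository.

```python
# Illustrative sketch of an OpenAI helper (not the repository's actual code).
import os

import openai

# The key is expected to be loaded from .env into the environment at startup.
openai.api_key = os.getenv("OPENAI_API_KEY")


def generate_response(model: str, query: str, temperature: float = 0.7, max_tokens: int = 256) -> str:
    """Send a prompt to an OpenAI completion model and return the generated text."""
    completion = openai.Completion.create(
        model=model,
        prompt=query,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    # The API returns a list of choices; use the first one.
    return completion.choices[0].text.strip()
```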
## Structure

```
api/
├── routers/
│   ├── query_router.py
│   └── auth_router.py
├── schemas/
│   └── schemas.py
├── utils/
│   └── utils.py
├── database/
│   ├── database.py
│   └── models.py
└── main.py
```
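To show how these modules fit together, the following is a minimal sketch of how `api/main.py` might assemble the application. The router attribute names are assumptions for illustration; the import path matches the `uvicorn api.main:app` command used in the installation steps below.

```python
# api/main.py (illustrative sketch; router module contents are assumed)
from fastapi import FastAPI

from api.routers import auth_router, query_router

app = FastAPI(title="AI Query Backend MVP")

# Each concern lives in its own router module, keeping main.py thin.
app.include_router(auth_router.router)
app.include_router(query_router.router)
```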
## Installation

Prerequisites:

- Python 3.9+
- PostgreSQL 15+
- Docker (recommended)
- Clone the repository:
  ```bash
  git clone https://github.com/coslynx/AI-Query-Backend-MVP.git
  cd AI-Query-Backend-MVP
  ```
- Create a virtual environment:
  ```bash
  python3 -m venv .venv
  ```
- Activate the virtual environment:
  ```bash
  source .venv/bin/activate
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Configure environment variables:
  ```bash
  cp .env.example .env
  # Edit the .env file with your OpenAI API key, database credentials, and JWT secret key.
  ```
- Start the FastAPI application:
  ```bash
  uvicorn api.main:app --host 0.0.0.0 --port 8000 --reload
  ```
## Usage

- The `.env` file is used for storing sensitive environment variables such as the OpenAI API key, database connection URL, and JWT secret key (see the loader sketch below).
- Modify these values in the `.env` file before running the application.
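As a rough sketch (not the repository's actual code) of how these values could be loaded at startup with `python-dotenv`, using the variable names documented in the Hosting section:

```python
# Illustrative settings loader; the module layout is an assumption.
import os

from dotenv import load_dotenv

# Copy key/value pairs from .env into the process environment.
load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
DATABASE_URL = os.getenv("DATABASE_URL")
JWT_SECRET_KEY = os.getenv("JWT_SECRET_KEY")

# Fail fast if anything required is missing.
if not all([OPENAI_API_KEY, DATABASE_URL, JWT_SECRET_KEY]):
    raise RuntimeError("Missing required environment variables; check your .env file.")
```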
- Send a query to the AI Query Backend (a Python client sketch follows the response example):
  ```bash
  curl -X POST http://localhost:8000/query \
    -H "Content-Type: application/json" \
    -d '{"model": "text-davinci-003", "query": "What is the meaning of life?", "temperature": 0.7, "max_tokens": 256}'
  ```
  Response:
  ```json
  {
    "id": 1,
    "model": "text-davinci-003",
    "query": "What is the meaning of life?",
    "response": "The meaning of life is a question that has been pondered by philosophers and theologians for centuries. There is no one definitive answer, as each individual must ultimately decide for themselves what meaning they find in life.",
    "created_at": "2024-01-01T12:00:00.000Z"
  }
  ```
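The same request can be issued from Python. This is a hypothetical client-side sketch using the `requests` package (a client dependency, not part of the backend's requirements); the endpoint and payload match the curl example above.

```python
# Hypothetical Python client for the /query endpoint.
import requests

payload = {
    "model": "text-davinci-003",
    "query": "What is the meaning of life?",
    "temperature": 0.7,
    "max_tokens": 256,
}

resp = requests.post("http://localhost:8000/query", json=payload, timeout=60)
resp.raise_for_status()

data = resp.json()
print(data["response"])    # generated answer
print(data["created_at"])  # timestamp recorded by the backend
```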
## Hosting

- Build the Docker image:
  ```bash
  docker build -t ai-query-backend .
  ```
- Push the Docker image to a registry (e.g., Docker Hub):
  ```bash
  docker push your_dockerhub_username/ai-query-backend:latest
  ```
- Deploy to a cloud platform (e.g., Heroku):
  - Follow Heroku's deployment instructions for Docker images.
  - Ensure that the required environment variables are set in the Heroku app's settings.
  - Refer to the Heroku documentation for detailed instructions.
Required environment variables:

- `OPENAI_API_KEY`: Your OpenAI API key.
- `DATABASE_URL`: The connection URL for your PostgreSQL database.
- `JWT_SECRET_KEY`: A strong, unique secret key for JWT authentication.
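As an illustration of how `DATABASE_URL` might be consumed, here is a minimal SQLAlchemy setup of the kind `api/database/database.py` could contain; the `SessionLocal` and `Base` names are conventional assumptions, not confirmed from the repository.

```python
# api/database/database.py (illustrative sketch; names are assumed)
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# psycopg2-binary supplies the PostgreSQL driver behind a URL such as
# postgresql://user:password@host:5432/dbname
engine = create_engine(os.getenv("DATABASE_URL"))

SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
Base = declarative_base()
```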
API endpoints (illustrative Pydantic models for these request and response bodies are sketched after the endpoint list):

- POST /query:
  - Description: Sends a query to the chosen OpenAI model and returns the response.
  - Body:
    ```json
    {
      "model": "text-davinci-003",             // OpenAI model name
      "query": "What is the meaning of life?", // User query
      "temperature": 0.7,                      // Controls the creativity of the response
      "max_tokens": 256                        // Maximum number of tokens in the response
    }
    ```
  - Response:
    ```json
    {
      "id": 1,
      "model": "text-davinci-003",
      "query": "What is the meaning of life?",
      "response": "The meaning of life is a question that has been pondered by philosophers and theologians for centuries. There is no one definitive answer, as each individual must ultimately decide for themselves what meaning they find in life.",
      "created_at": "2024-01-01T12:00:00.000Z"
    }
    ```
- POST /token:
  - Description: Generates a JWT token for authentication.
  - Body:
    ```json
    {
      "username": "your_username",
      "password": "your_password"
    }
    ```
  - Response:
    ```json
    {
      "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0dXNlciIsImV4cCI6MTY3NzU3NjI0N30.U4R70F7v3K_C0xR2bLq972843q66u6o98V0jD2o9s_w"
    }
    ```
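The request and response bodies above could be expressed as Pydantic models in `api/schemas/schemas.py` along these lines; the field names follow the JSON shown, while the class names are assumptions for illustration.

```python
# api/schemas/schemas.py (illustrative sketch; class names are assumed)
from datetime import datetime

from pydantic import BaseModel


class QueryRequest(BaseModel):
    model: str                 # OpenAI model name
    query: str                 # User query
    temperature: float = 0.7   # Controls the creativity of the response
    max_tokens: int = 256      # Maximum number of tokens in the response


class QueryResponse(BaseModel):
    id: int
    model: str
    query: str
    response: str
    created_at: datetime


class TokenRequest(BaseModel):
    username: str
    password: str


class TokenResponse(BaseModel):
    access_token: str
```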
- JWT (JSON Web Token) is used for authentication.
- Upon successful registration or login, a JWT token is issued to the user.
- This token should be included in the Authorization header of subsequent requests to protected API endpoints.
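A minimal sketch of how such a token could be issued, assuming the `jwt` dependency refers to PyJWT; the helper name and 30-minute expiry are illustrative assumptions. The curl example below shows the same flow from the client side.

```python
# Illustrative token creation (assumes the `jwt` dependency is PyJWT).
import os
from datetime import datetime, timedelta

import jwt

JWT_SECRET_KEY = os.getenv("JWT_SECRET_KEY")
ACCESS_TOKEN_EXPIRE_MINUTES = 30  # assumed value for illustration


def create_access_token(username: str) -> str:
    """Encode the username and an expiry claim into a signed HS256 JWT."""
    payload = {
        "sub": username,
        "exp": datetime.utcnow() + timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES),
    }
    return jwt.encode(payload, JWT_SECRET_KEY, algorithm="HS256")
```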
```bash
# Obtain a JWT token
curl -X POST http://localhost:8000/token \
  -H "Content-Type: application/json" \
  -d '{"username": "testuser", "password": "testpassword"}'

# Response
# {
#   "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0dXNlciIsImV4cCI6MTY3NzU3NjI0N30.U4R70F7v3K_C0xR2bLq972843q66u6o98V0jD2o9s_w"
# }

# Send a query using the generated JWT token
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0dXNlciIsImV4cCI6MTY3NzU3NjI0N30.U4R70F7v3K_C0xR2bLq972843q66u6o98V0jD2o9s_w" \
  -d '{"model": "text-davinci-003", "query": "What is the meaning of life?", "temperature": 0.7, "max_tokens": 256}'
```
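On the server side, protected routes could validate the bearer token with a FastAPI dependency along these lines (again a sketch assuming PyJWT; the function and scheme names are illustrative). A route would then declare `user: str = Depends(get_current_user)` to require a valid token.

```python
# Illustrative FastAPI dependency that validates the bearer token.
import os

import jwt
from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

bearer_scheme = HTTPBearer()


def get_current_user(
    credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme),
) -> str:
    """Decode the JWT from the Authorization header and return the subject (username)."""
    try:
        payload = jwt.decode(
            credentials.credentials,
            os.getenv("JWT_SECRET_KEY"),
            algorithms=["HS256"],
        )
    except jwt.PyJWTError:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid or expired token",
        )
    return payload["sub"]
```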
## License

This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.
## Authors

This MVP was entirely generated using artificial intelligence through CosLynx.com.
No human was directly involved in the coding process of the repository: AI-Query-Backend-MVP

For any questions or concerns regarding this AI-generated MVP, please contact CosLynx at:

- Website: CosLynx.com
- Twitter: @CosLynxAI

Create Your Custom MVP in Minutes With CosLynxAI!