
AI Query Backend - MVP

A Python backend service that simplifies user interactions with OpenAI's language models.

Developed with the software and tools below:

  • Framework: FastAPI
  • Language: Python
  • Database: PostgreSQL
  • LLMs: OpenAI

📑 Table of Contents

  • 📍 Overview
  • 📦 Features
  • 📂 Structure
  • 💻 Installation
  • 🏗️ Usage
  • 🌐 Hosting
  • 📜 API Documentation
  • 📜 License & Attribution
  • 📞 Contact

πŸ“ Overview

This repository contains a Minimum Viable Product (MVP) for an AI Query Backend. It simplifies the process of interacting with OpenAI's language models by providing a Python-based backend service.

📦 Features

  • ⚙️ Architecture: The codebase follows a layered architecture with separate directories for API routes, schemas, utilities, and database interactions, promoting modularity and maintainability.
  • 📄 Documentation: The repository includes a README providing a detailed overview of the MVP, its dependencies, and usage instructions.
  • 🔗 Dependencies: The codebase relies on fastapi, openai, jwt, sqlalchemy, psycopg2-binary, and python-dotenv for API development, OpenAI integration, authentication, database management, and environment configuration.
  • 🧩 Modularity: The modular structure promotes easier maintenance and code reuse, with dedicated modules for API routes, data schemas, utility functions, and database interactions.
  • 🧪 Testing: Includes unit tests written with Pytest to ensure the reliability and robustness of the codebase (a minimal test sketch follows this list).
  • ⚡️ Performance: Employs efficient coding practices and performance optimization strategies for data processing and response generation.
  • 🔐 Security: Enhances security through input validation, secure data storage, and JWT-based authentication.
  • 🔀 Version Control: Uses Git for version control, with automated CI/CD workflows for building and deploying the application.
  • 🔌 Integrations: Integrates with the OpenAI API to generate responses to user queries.
  • 📶 Scalability: Designed to handle increasing user load and data volume using efficient database management and caching strategies.
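To illustrate the testing setup, below is a minimal sketch of a Pytest test using FastAPI's TestClient. It only assumes that the application object is exposed as app in api/main.py and that the environment (e.g. DATABASE_URL) is configured; the repository's actual tests and route behaviour may differ.

# tests/test_api.py -- illustrative sketch, not the repository's actual test suite.
from fastapi.testclient import TestClient

from api.main import app

client = TestClient(app)


def test_openapi_schema_is_served():
    # FastAPI serves the generated OpenAPI schema at /openapi.json by default.
    response = client.get("/openapi.json")
    assert response.status_code == 200
    assert "paths" in response.json()


def test_query_rejects_empty_payload():
    # An empty body should be rejected, either by request validation (422)
    # or by the authentication layer (401), depending on the implementation.
    response = client.post("/query", json={})
    assert response.status_code in (401, 422)

The suite can then be run with pytest from the project root.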

📂 Structure

└── api
    ├── routers
    │   ├── query_router.py
    │   └── auth_router.py
    ├── schemas
    │   └── schemas.py
    ├── utils
    │   └── utils.py
    ├── database
    │   ├── database.py
    │   └── models.py
    └── main.py
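As a rough sketch of how these layers might be wired together in api/main.py (the router variable names and the table-creation call are assumptions, not taken from the repository):

# api/main.py -- illustrative wiring only; the repository's actual code may differ.
from fastapi import FastAPI

from api.routers import auth_router, query_router
from api.database.database import Base, engine  # assumed SQLAlchemy metadata and engine

# For an MVP, create the tables at startup; a migration tool would replace this in production.
Base.metadata.create_all(bind=engine)

app = FastAPI(title="AI Query Backend - MVP")

# Each feature area lives in its own router module, keeping main.py thin.
app.include_router(auth_router.router)
app.include_router(query_router.router)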

💻 Installation

🔧 Prerequisites

  • Python 3.9+
  • PostgreSQL 15+
  • Docker (recommended)

🚀 Setup Instructions

  1. Clone the repository:
    git clone https://github.com/coslynx/AI-Query-Backend-MVP.git
    cd AI-Query-Backend-MVP
  2. Create a virtual environment:
    python3 -m venv .venv
  3. Activate the virtual environment:
    source .venv/bin/activate
  4. Install dependencies:
    pip install -r requirements.txt
  5. Configure environment variables:
    cp .env.example .env
    # Edit the .env file with your OpenAI API key, database credentials, and JWT secret key. 

πŸ—οΈ Usage

πŸƒβ€β™‚οΈ Running the MVP

  1. Start the FastAPI application:
    uvicorn api.main:app --host 0.0.0.0 --port 8000 --reload

βš™οΈ Configuration

  • The .env file stores sensitive environment variables such as the OpenAI API key, the database connection URL, and the JWT secret key.
  • Set these values in the .env file before running the application.
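For illustration, the application would typically load these values at startup with python-dotenv (which is listed among the dependencies); this is a sketch, not the repository's actual configuration code:

# Illustrative sketch of loading the .env values with python-dotenv.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
DATABASE_URL = os.getenv("DATABASE_URL")
JWT_SECRET_KEY = os.getenv("JWT_SECRET_KEY")

# Fail fast if a required variable is missing.
missing = [name for name, value in {
    "OPENAI_API_KEY": OPENAI_API_KEY,
    "DATABASE_URL": DATABASE_URL,
    "JWT_SECRET_KEY": JWT_SECRET_KEY,
}.items() if not value]
if missing:
    raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")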

📚 Examples

  • Send a query to the AI Query Backend:
    curl -X POST http://localhost:8000/query \
    -H "Content-Type: application/json" \
    -d '{"model": "text-davinci-003", "query": "What is the meaning of life?", "temperature": 0.7, "max_tokens": 256}' 
    Response:
    {
      "id": 1,
      "model": "text-davinci-003",
      "query": "What is the meaning of life?",
      "response": "The meaning of life is a question that has been pondered by philosophers and theologians for centuries. There is no one definitive answer, as each individual must ultimately decide for themselves what meaning they find in life.",
      "created_at": "2024-01-01T12:00:00.000Z"
    }
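The same request can be sent from Python. This sketch uses the requests library, which is not part of the backend's own dependencies and would need to be installed separately (pip install requests):

# Illustrative client-side call to the /query endpoint.
import requests

payload = {
    "model": "text-davinci-003",
    "query": "What is the meaning of life?",
    "temperature": 0.7,
    "max_tokens": 256,
}

response = requests.post("http://localhost:8000/query", json=payload, timeout=30)
response.raise_for_status()  # raise an exception on HTTP errors
print(response.json()["response"])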

🌐 Hosting

🚀 Deployment Instructions

  1. Build the Docker image (tag it with your registry name so it can be pushed directly):
    docker build -t your_dockerhub_username/ai-query-backend:latest .
  2. Push the Docker image to a registry (e.g., Docker Hub):
    docker push your_dockerhub_username/ai-query-backend:latest
  3. Deploy to a cloud platform (e.g., Heroku):
    • Follow Heroku's deployment instructions for Docker images.
    • Ensure that the required environment variables are set in the Heroku app's settings.
    • Refer to the Heroku documentation for detailed instructions.

🔑 Environment Variables

  • OPENAI_API_KEY: Your OpenAI API key.
  • DATABASE_URL: The connection URL for your PostgreSQL database.
  • JWT_SECRET_KEY: A strong, unique secret key for JWT authentication.
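One way to generate a suitably strong value for JWT_SECRET_KEY is Python's standard secrets module:

# Prints a random 64-character hex string suitable for use as JWT_SECRET_KEY.
import secrets

print(secrets.token_hex(32))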

📜 API Documentation

🔍 Endpoints

  • POST /query:
    • Description: Sends a query to the chosen OpenAI model and returns the response (a hedged implementation sketch follows this endpoint list).
    • Body:
      {
        "model": "text-davinci-003", // OpenAI model name
        "query": "What is the meaning of life?", // User query
        "temperature": 0.7, // Controls the creativity of the response
        "max_tokens": 256 // Maximum number of tokens in the response
      }
    • Response:
      {
        "id": 1,
        "model": "text-davinci-003",
        "query": "What is the meaning of life?",
        "response": "The meaning of life is a question that has been pondered by philosophers and theologians for centuries. There is no one definitive answer, as each individual must ultimately decide for themselves what meaning they find in life.",
        "created_at": "2024-01-01T12:00:00.000Z"
      }
  • POST /token:
    • Description: Generates a JWT token for authentication.
    • Body:
      {
        "username": "your_username",
        "password": "your_password"
      }
    • Response:
      {
        "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0dXNlciIsImV4cCI6MTY3NzU3NjI0N30.U4R70F7v3K_C0xR2bLq972843q66u6o98V0jD2o9s_w"
      }
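To make the /query behaviour concrete, here is a minimal sketch of what the handler in api/routers/query_router.py might look like. It uses the legacy openai.Completion API (consistent with the completion-style text-davinci-003 model shown above); the schema and variable names are assumptions, database persistence is omitted, and the real implementation may differ.

# Illustrative sketch of api/routers/query_router.py -- not the repository's actual code.
import os

import openai
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter()
openai.api_key = os.getenv("OPENAI_API_KEY")


class QueryRequest(BaseModel):
    model: str = "text-davinci-003"
    query: str
    temperature: float = 0.7
    max_tokens: int = 256


@router.post("/query")
def create_query(request: QueryRequest):
    try:
        # Legacy completions endpoint, matching the completion-style model above.
        completion = openai.Completion.create(
            model=request.model,
            prompt=request.query,
            temperature=request.temperature,
            max_tokens=request.max_tokens,
        )
    except openai.error.OpenAIError as exc:
        raise HTTPException(status_code=502, detail=str(exc))
    # Persisting the query/response pair (and returning the stored row's id and
    # created_at, as in the documented response) is left to the database layer.
    return {
        "model": request.model,
        "query": request.query,
        "response": completion.choices[0].text.strip(),
    }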

🔒 Authentication

  • JWT (JSON Web Token) is used for authentication.
  • Upon successful registration or login, a JWT token is issued to the user.
  • This token should be included in the Authorization header of subsequent requests to protected API endpoints.
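As a sketch of how a protected endpoint might validate that header with PyJWT (the dependency list mentions jwt), assuming HS256 tokens signed with JWT_SECRET_KEY; the repository's actual auth utilities may differ:

# Illustrative FastAPI dependency that validates the Bearer token from the
# Authorization header -- a sketch, not the repository's actual implementation.
import os

import jwt  # PyJWT
from fastapi import Depends, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

bearer_scheme = HTTPBearer()


def get_current_user(credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme)) -> str:
    try:
        payload = jwt.decode(
            credentials.credentials,
            os.getenv("JWT_SECRET_KEY"),
            algorithms=["HS256"],
        )
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return payload["sub"]  # the username encoded when the token was issued

A protected route can then declare username: str = Depends(get_current_user) as a parameter to require a valid token.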

πŸ“ Examples

# Obtain a JWT token
curl -X POST http://localhost:8000/token \
-H "Content-Type: application/json" \
-d '{"username": "testuser", "password": "testpassword"}' 

# Response
{
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0dXNlciIsImV4cCI6MTY3NzU3NjI0N30.U4R70F7v3K_C0xR2bLq972843q66u6o98V0jD2o9s_w"
}

# Send a query using the generated JWT token 
curl -X POST http://localhost:8000/query \
-H "Content-Type: application/json" \
-H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZXN0dXNlciIsImV4cCI6MTY3NzU3NjI0N30.U4R70F7v3K_C0xR2bLq972843q66u6o98V0jD2o9s_w" \
-d '{"model": "text-davinci-003", "query": "What is the meaning of life?", "temperature": 0.7, "max_tokens": 256}' 

📜 License & Attribution

📄 License

This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.

🤖 AI-Generated MVP

This MVP was entirely generated using artificial intelligence through CosLynx.com.

No human was directly involved in the coding process of the repository: AI-Query-Backend-MVP

📞 Contact

For any questions or concerns regarding this AI-generated MVP, please contact CosLynx at:

🌐 CosLynx.com

Create Your Custom MVP in Minutes With CosLynxAI!