
Not-Only LLM Chat. An AI tool that enhances creativity and user experience beyond just LLM chat


NoLLM Chat

NoLLM Chat is a platform that goes beyond traditional LLM chat. It runs directly in the web browser and offers a versatile, visual interface for exploring AI technologies, so users can interact with language models in ways that boost creativity and enrich the experience beyond basic chat interactions.

Intro


✨✨✨ DEMO ✨✨✨

[■■■□□□□□□□] 25%

[Intro image]

[Demo]

Vision

  • Enhanced AI Interaction: Move beyond traditional LLM chat with a platform offering a more flexible and visual interface. Users can directly edit and guide AI to improve response quality, enabling richer interaction experiences.

  • Automated Personal Workflows: Create custom AI workflows tailored to your needs, enhancing productivity and personalization.

  • Comprehensive AI Learning: Utilize node-based tools that facilitate interaction with and learning about AI technologies. The platform supports LLMs, prompt engineering, function calls, and vector databases, allowing users to experiment and see the impact of different AI components.

  • Free and Browser-Based: Runs locally and free of charge in the browser, with the option to extend capabilities using services like OpenAI, ensuring accessibility and ease of use.

Project Structure

src/
│
├── assets/         # Static assets like images and fonts
├── components/     # Reusable React components
├── constants/      # Constant values and configuration settings
├── contexts/       # React context providers for global state management
├── css/            # Styling files (CSS or preprocessor files)
├── hooks/          # Custom React hooks
├── i18n/           # Internationalization setup and resources
├── lib/            # Utility libraries and third-party integrations
├── pages/          # Page components for different routes
├── services/       # API calls and service functions
├── states/         # State management files (e.g., Zustand)
├── utils/          # Utility functions and helpers
│
├── App.tsx         # Main application component
├── main.tsx        # Entry point of the application
└── routes.tsx      # Route configurations

Project Architecture

The application splits its work across separate threads so the UI stays responsive while complex processes such as database access and model inference run in the background.

  • Main Thread: Handles the UI application logic, ensuring a responsive user interface.
  • Database Worker Thread: Manages database operations using TypeORM and PGlite. This thread is responsible for data storage and retrieval without blocking the main UI thread.
  • LLM Thread: Dedicated to handling large language model processes using WebLLM and LangChain. This thread manages AI computations and interactions.
  • Embedding Thread: Focuses on handling the vector database and embedding models. It processes and manages embeddings for efficient data retrieval and manipulation.

graph LR
    A[Main Thread] <--> C[Database Worker Thread]
    C -->|uses| I((TypeORM))
    I -->|wraps| D((PGlite))
    A <--> E[LLM Thread]
    E -->|uses| J((LangChain))
    J -->|wraps| F((WebLLM))
    A <--> G[(Memory Vector Database)]
    G --> K[Embedding Thread]
    K -->|uses| L((Embedding Model))

    A -->|handles| B((UI Application Logic))
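
To make the thread split concrete, here is a minimal sketch of how the main thread could hand a query off to the database worker. The file names, message shape, and helper below are hypothetical illustrations of the pattern, not the project's actual code; in the real application the worker would run the query through TypeORM on top of PGlite.

    // db.worker.ts (hypothetical worker module)
    // Receives query requests from the main thread and answers without blocking the UI.
    self.onmessage = async (event: MessageEvent<{ id: number; sql: string }>) => {
      const { id, sql } = event.data
      // The real worker would execute the query via TypeORM + PGlite here;
      // this sketch just echoes the request to stay self-contained.
      const rows = [{ echoed: sql }]
      self.postMessage({ id, rows })
    }

    // main.ts (main thread): wrap the worker in a promise-based helper.
    const dbWorker = new Worker(new URL('./db.worker.ts', import.meta.url), { type: 'module' })
    let nextId = 0

    function runQuery(sql: string): Promise<unknown[]> {
      const id = nextId++
      return new Promise((resolve) => {
        const onMessage = (event: MessageEvent) => {
          if (event.data.id !== id) return
          dbWorker.removeEventListener('message', onMessage)
          resolve(event.data.rows)
        }
        dbWorker.addEventListener('message', onMessage)
        dbWorker.postMessage({ id, sql })
      })
    }

    // The UI stays responsive while the worker does the work.
    runQuery('SELECT 1').then(console.log)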

Libraries and Tools

  • Vite: Fast and modern build tool for web projects.
  • React: A popular JavaScript library for building user interfaces.
  • ReactFlow: A library for building node-based applications.
  • PGlite: A lightweight WASM build of Postgres that runs in the browser and Node.js.
  • Voy: A WASM vector similarity search engine written in Rust.
  • Memory Vector Database: An ephemeral vector store that keeps embeddings in memory and performs an exact, linear search for the most similar embeddings.
  • WebLLM: Runs large language models in the browser without server dependencies.
  • LangChain: A framework for developing applications powered by large language models (LLMs).
  • LangGraph: A library for building stateful, multi-actor LLM applications as graphs.
  • shadcn UI: A collection of accessible, customizable React components built on Radix UI and Tailwind CSS.
  • TypeORM: An ORM that runs in Node.js and the browser, with support for SQLite WASM.
  • Tailwind CSS: A utility-first CSS framework for quickly building custom designs.
  • i18next: An internationalization framework for the browser and other JavaScript environments.
  • React Router: Declarative routing for React applications.
  • Zustand: A small, fast, and scalable state management library for React.
  • ESLint: A pluggable and configurable linter for identifying and reporting on patterns in JavaScript.
  • Prettier: An opinionated code formatter that ensures consistent code style.
  • Components: Additional UI components from magicui and kokonut.
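
As a rough sketch of what running a model fully in the browser looks like with WebLLM: the model id below is illustrative and must be one of WebLLM's prebuilt models, and the progress handling and prompt are likewise just examples, not the project's actual configuration.

    import { CreateMLCEngine } from '@mlc-ai/web-llm'

    async function main() {
      // Downloads and caches the model weights on first run, then loads them into the engine.
      const engine = await CreateMLCEngine('Llama-3.1-8B-Instruct-q4f32_1-MLC', {
        initProgressCallback: (report) => console.log(report.text),
      })

      // WebLLM exposes an OpenAI-style chat completions API.
      const reply = await engine.chat.completions.create({
        messages: [{ role: 'user', content: 'Explain vector databases in one sentence.' }],
      })
      console.log(reply.choices[0].message.content)
    }

    main()

Because the weights are cached by the browser after the first download, subsequent loads start much faster, which is what makes the free, server-less setup described above practical.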

Getting Started

To get started with NoLLMChat, follow these steps:

  1. Clone the Repository:
    git clone git@github.com:zrg-team/NoLLMChat.git
  2. Install Dependencies:
    cd NoLLMChat
    yarn install
  3. Run the Development Server:
    yarn dev
  4. Open in Browser: Visit the local URL printed by the dev server (http://localhost:PORT) to start interacting with the AI assistant.

Contributing

We welcome contributions from the community! Whether it's bug fixes, new features, or documentation improvements, your help is appreciated. Please check our contribution guidelines for more information.

License

This project is licensed under the MIT License. See the LICENSE file for more details.

Contact

For questions, feedback, or suggestions, feel free to open an issue on GitHub or contact us at zerglingno2@outlook.com.