
LLM App with Memory (MemoryVault-LLM)

This Streamlit application functions as an AI-powered chatbot, utilizing OpenAI's GPT-4o model with a memory retention feature. It allows users to converse with the AI, maintaining context over multiple interactions for a seamless conversational experience.

Features

  • Utilizes OpenAI's GPT-4o model for generating responses.
  • Implements persistent memory using Mem0 and the Qdrant vector store.
  • Allows users to view their conversation history.
  • Provides a user-friendly interface through Streamlit.
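The memory feature follows a retrieve-then-prompt pattern: relevant past facts are fetched and prepended to each request so the model keeps context across turns. The sketch below illustrates that pattern in plain Python; it is a simplified stand-in (Mem0 itself embeds text and stores vectors in Qdrant, rather than using the word-overlap scoring shown here), and the function names are illustrative, not Mem0's API.

```python
# Toy illustration of the retrieve-then-prompt memory pattern.
# Mem0 + Qdrant do this with embeddings and vector search; word-overlap
# scoring is used here only to keep the example self-contained.

def retrieve_memories(store, query, top_k=3):
    """Return up to top_k stored memories most similar to the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(m.lower().split())), m) for m in store]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored[:top_k] if score > 0]

def build_prompt(store, user_message):
    """Prepend retrieved memories so the model keeps context across turns."""
    memories = retrieve_memories(store, user_message)
    context = "\n".join(f"- {m}" for m in memories)
    return f"Relevant memories:\n{context}\n\nUser: {user_message}"

store = ["The user's name is Priya", "The user prefers concise answers"]
print(build_prompt(store, "What is the user's name?"))
```

The resulting prompt carries prior facts into the current turn, which is what lets the chatbot maintain context over multiple interactions.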

How to Get Started

  1. Clone the GitHub repository:

         git clone https://github.com/0xPriyanshuJha/MemoryVault-LLM.git

  2. Install the required dependencies:

         pip install streamlit mem0ai openai

  3. Ensure Qdrant is running. The app expects Qdrant on localhost:6333; adjust the configuration in the code if your setup differs:

         docker pull qdrant/qdrant

         docker run -p 6333:6333 -p 6334:6334 \
             -v $(pwd)/qdrant_storage:/qdrant/storage:z \
             qdrant/qdrant

  4. Run the Streamlit app:

         streamlit run llm.py
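If your Qdrant instance runs on a different host or port, the Mem0 configuration in the code can be pointed at it. A minimal sketch of such a configuration dict is below; the key names follow Mem0's vector-store config convention, but exact keys may vary by Mem0 version, so check against the version you installed.

```python
# Sketch of pointing Mem0 at a Qdrant instance (adjust host/port to your
# deployment; key names assume Mem0's Qdrant vector-store provider).
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",  # change if Qdrant runs elsewhere
            "port": 6333,
        },
    },
}

# In the app this dict would be passed to mem0's Memory.from_config(config);
# building the dict itself requires no running Qdrant.
```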
