This project combines natural language processing and information retrieval techniques into an interactive system that answers user queries over a collection of PDF documents. The pipeline begins by loading the PDFs and segmenting them into smaller text chunks. Each chunk is embedded with a pre-trained Hugging Face model, and the embeddings are indexed in Pinecone, a vector search engine, for efficient similarity search. User queries are handled by a retrieval question-answering (QA) chain that combines the Pinecone index, a locally loaded Llama 2 model, and a defined prompt template. The goal is concise, accurate answers and seamless interaction between the user and the information stored in the PDF documents.
The project is built with:

- Python
- LangChain
- Flask
- Meta Llama 2
- Pinecone
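Concretely, the ingestion side of this pipeline might look like the following minimal sketch, assuming the classic LangChain and pinecone-client v2 APIs; the `data/` directory, chunk sizes, embedding model, and index name are illustrative assumptions rather than the project's exact values:

```python
import pinecone
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone

# Load every PDF in the data/ directory (path is an assumption).
loader = DirectoryLoader("data/", glob="*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()

# Segment the documents into small overlapping text chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
chunks = splitter.split_documents(documents)

# Embed each chunk with a pre-trained Hugging Face model.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Index the embeddings in Pinecone for similarity search; the index
# must already exist in your Pinecone project, and credentials come
# from the .env file described later in these instructions.
pinecone.init(api_key="<PINECONE_API_KEY>", environment="<PINECONE_API_ENV>")
docsearch = Pinecone.from_documents(chunks, embeddings, index_name="medical-chatbot")
```

With the index populated, answering a query amounts to embedding the question, retrieving the most similar chunks from Pinecone, and letting the Llama 2 model compose an answer from them; the model-loading sketch near the end of the setup instructions shows that half.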
To get a local copy up and running, follow these steps to install and set up the project directly from the GitHub repository:
1. **Clone the Repository**

   - Open your terminal or command prompt.
   - Navigate to the directory where you want to install the project.
   - Run the following command to clone the GitHub repository:

   ```bash
   git clone https://github.com/KalyanMurapaka45/Medical-Chatbot-using-Llama-2.git
   ```
2. **Create a Virtual Environment (Optional but recommended)**

   - It's a good practice to create a virtual environment to manage project dependencies. Run the following command:

   ```bash
   conda create -p <environment_name> python=<python_version> -y
   ```
3. **Activate the Virtual Environment (Optional)**

   - Activate the virtual environment:

   ```bash
   conda activate <environment_name>/
   ```
4. **Install Dependencies**

   - Navigate to the project directory:

   ```bash
   cd Medical-Chatbot-using-Llama-2
   ```

   - Run the following command to install project dependencies:

   ```bash
   pip install -r requirements.txt
   ```
5. **Run the Project**

   - Start the project by running the following command:

   ```bash
   python app.py
   ```
6. **Access the Project**

   - Open a web browser and go to the local address printed in the terminal when the Flask server starts (Flask defaults to http://127.0.0.1:5000 unless app.py sets a different host or port); a hypothetical sketch of app.py follows these steps.
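For orientation, app.py is presumably a small Flask server that forwards chat messages to the retrieval QA chain. The following is a hypothetical minimal shape, not the project's actual code; the route names, port, and stubbed `qa_run` helper are all assumptions:

```python
from flask import Flask, request

app = Flask(__name__)

# Stand-in for the RetrievalQA chain built in the sketches in this README,
# stubbed here so this file runs on its own.
def qa_run(message: str) -> str:
    return f"(placeholder answer for: {message})"

@app.route("/")
def home():
    # The real app presumably serves a chat UI template here.
    return "Medical chatbot is running."

@app.route("/get", methods=["POST"])
def chat():
    # Pass the user's message to the QA chain and return its answer.
    return qa_run(request.form["msg"])

if __name__ == "__main__":
    # Host and port are assumptions; check the console output for the URL.
    app.run(host="0.0.0.0", port=8080)
```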
Before running the app, create a `.env` file in the project root with your Pinecone credentials:

```ini
PINECONE_API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
PINECONE_API_ENV = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
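Inside the application, these values would typically be loaded with python-dotenv before the Pinecone client is initialized; a minimal sketch, assuming the classic pinecone-client v2 API and the variable names above:

```python
import os

import pinecone
from dotenv import load_dotenv  # pip install python-dotenv

# Read PINECONE_API_KEY and PINECONE_API_ENV from the .env file.
load_dotenv()

# Initialize the classic (v2) Pinecone client with those credentials.
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_API_ENV"],
)
```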
Download the quantized model from the link provided in the model folder and keep it in the `model` directory:

- Model: `llama-2-7b-chat.ggmlv3.q4_0.bin`
- URL: https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main
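Once the file is in place, a quantized GGML model like this one can be loaded locally through LangChain's CTransformers wrapper and combined with the Pinecone retriever into the retrieval QA chain described in the overview. A sketch under the same classic-LangChain assumptions; the prompt wording, generation settings, and index name are illustrative, not the project's exact values:

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import CTransformers
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Pinecone

# Load the quantized Llama 2 chat model from the model/ directory.
llm = CTransformers(
    model="model/llama-2-7b-chat.ggmlv3.q4_0.bin",
    model_type="llama",
    config={"max_new_tokens": 512, "temperature": 0.8},
)

# Illustrative prompt template; the project's actual wording may differ.
prompt = PromptTemplate(
    template=(
        "Use the following context to answer the question concisely.\n"
        "Context: {context}\nQuestion: {question}\nAnswer:"
    ),
    input_variables=["context", "question"],
)

# Reconnect to the index built during ingestion (assumes pinecone.init
# has already been called, as in the .env sketch above).
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
docsearch = Pinecone.from_existing_index("medical-chatbot", embeddings)

# Combine the retriever, the local LLM, and the prompt into a QA chain.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={"k": 2}),
    chain_type_kwargs={"prompt": prompt},
)

print(qa.run("What are the common symptoms of diabetes?"))
```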
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Report bugs: If you encounter a bug, open an issue describing the problem.
- Contribute code: If you are a developer and want to contribute, follow the instructions below to get started!
- Fork the Project
- Create your Feature Branch
- Commit your Changes
- Push to the Branch
- Open a Pull Request
- Suggestions: If you don't want to code but have some awesome ideas, open an issue explaining the updates or improvements you would like to see!
This project is licensed under the OSI-approved GNU General Public License v3.0 - see the LICENSE.txt file for details.
Hema Kalyan Murapaka - kalyanmurapaka274@gmail.com
We'd like to extend our gratitude to all individuals and organizations who have played a role in the development and success of this project. Your support, whether through contributions, inspiration, or encouragement, has been invaluable. Thank you for being a part of our journey.