Welcome to Tunga! An LLM inference project that introduces Retrieval-Augmented Generation (RAG)

This project walks you through one of the hottest areas of computer science today: large language models (LLMs).

To start, we introduce the Hugging Face Transformers library and show how to use it to generate text from a prompt. Along the way, we explore some of the important concepts behind LLMs, such as embeddings.
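For example, a few lines with the Transformers pipeline API are enough to generate text from a prompt. This is only a minimal sketch; the model name ("gpt2") and the prompt are illustrative choices, not ones the project prescribes.

# Generate text from a prompt with the Hugging Face Transformers pipeline API.
from transformers import pipeline

# "gpt2" is a small, freely downloadable model chosen here purely for illustration.
generator = pipeline("text-generation", model="gpt2")

prompt = "Retrieval Augmented Generation lets a language model"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The pipeline returns a list of dicts holding the generated text.
print(outputs[0]["generated_text"])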

Installation

If you already have Conda set up, run the command below from the root directory to create the environment:

make -f Makefile

Install the Python packages:

pip install -r requirements.txt 

Ollama

Install Ollama locally by following its installation instructions. Then run the Llama 2 7B model with the command below; Ollama will serve the model on port 11434.

ollama run llama2
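Once the model is running, you can sanity-check the server with a plain HTTP request. This is a minimal sketch assuming the default port (11434) and the llama2 model pulled above.

# Query the local Ollama server directly over its HTTP API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()

# With streaming disabled, the reply is a single JSON object whose
# "response" field contains the generated text.
print(resp.json()["response"])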

LlamaIndex

  • Indexing and vector-storage library

LangChain

  • Client framework for the Ollama-served Llama 2 7B model (see the sketch after this list)

ChromaDB

  • Vector Database
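As referenced above, here is a sketch of how LangChain can act as a client for the Ollama-served model. The import path matches recent langchain-community releases; older versions expose the same class as langchain.llms.Ollama, so treat the exact path as an assumption about your installed version.

# Use LangChain as a thin client for the locally served Llama 2 model.
from langchain_community.llms import Ollama

# Talks to the Ollama server on localhost:11434 started by `ollama run llama2`.
llm = Ollama(model="llama2")

answer = llm.invoke("Summarize Retrieval Augmented Generation in one sentence.")
print(answer)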

Collect the PDFs you want to index (we used some chapters from Artificial Intelligence: A Modern Approach) and place them in a directory named 'data' at the root of this project. This directory becomes the private knowledge base for the LLM.

Then run main.ipynb, found in the src directory.
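If you prefer a script to the notebook, the outline below sketches the same flow: load the PDFs from data, embed and store them in ChromaDB via LlamaIndex, and answer questions with the Ollama-served Llama 2 model. It is only a sketch, not the notebook's exact code; the import paths require the optional llama-index Chroma/Ollama/HuggingFace packages and vary by version, and the embedding model name is an illustrative choice.

# Build a private knowledge base over ./data and query it with Llama 2.
import chromadb
from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.chroma import ChromaVectorStore

# Local LLM and local embeddings, so no external API keys are needed.
Settings.llm = Ollama(model="llama2", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Persist the document embeddings in a local Chroma collection.
chroma_client = chromadb.PersistentClient(path="./chroma_db")
collection = chroma_client.get_or_create_collection("tunga")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Index the PDFs placed in the data directory.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Ask a question grounded in the indexed chapters.
query_engine = index.as_query_engine()
print(query_engine.query("What is a rational agent?"))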
