ProLlama is a chatbot interface written in Python using the Streamlit library. It connects to a local Llama 3.1 model through Ollama, so prompts and responses never leave your machine.
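As a rough illustration of how the app can talk to Ollama, here is a minimal sketch that calls Ollama's local HTTP chat endpoint with only the standard library. The helper names (`build_payload`, `chat`) are ours, not ProLlama's; the endpoint and payload shape follow Ollama's `/api/chat` API, and the server is assumed to be running on its default port 11434.

```python
# Hedged sketch: talking to a local Ollama server over HTTP.
# Function names are illustrative; only the endpoint/payload follow Ollama's API.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_payload(messages, model="llama3.1"):
    """Assemble the JSON body Ollama's chat endpoint expects."""
    return {"model": model, "messages": messages, "stream": False}

def chat(messages, model="llama3.1"):
    """Send the conversation to the local Ollama server (must be running)."""
    body = json.dumps(build_payload(messages, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

With the Ollama server running, `chat([{"role": "user", "content": "Hello"}])` returns the model's reply as a string; the `messages` list carries the whole conversation, so appending each turn gives the model chat history.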
-
Install Python 3 (on macOS, via Homebrew):
brew install python3
-
Install Ollama by following the instructions at ollama.com
-
Download and run the Llama 3.1 model:
ollama run llama3.1
-
Navigate to the project directory:
cd your-work-directory/prollama
-
Create and activate a virtual environment:
python3 -m venv env
source env/bin/activate
-
Install required packages:
pip install -r requirements.txt
-
Launch ProLlama:
streamlit run app.py
-
Build the Docker image:
docker buildx build --platform=linux/amd64 -t prollama:version-tag .
-
Run the Docker image:
docker run -p 8501:8501 prollama:version-tag
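One caveat when running in Docker: inside the container, `localhost` refers to the container itself, not to the machine where Ollama is running. One possible workaround, assuming the app's Ollama client honors the `OLLAMA_HOST` environment variable (the `ollama` Python client does; whether ProLlama uses it is an assumption), is to point it at the host via Docker Desktop's special hostname:

```shell
# Hedged sketch: reach the host's Ollama server from inside the container.
# host.docker.internal works on Docker Desktop (macOS/Windows); on Linux,
# add --add-host=host.docker.internal:host-gateway. OLLAMA_HOST being read
# by the app is an assumption, not a documented ProLlama feature.
docker run -p 8501:8501 \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  prollama:version-tag
```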