This project provides a containerized setup for running Ollama together with Ollama WebUI.
NOTE: Ensure that you have Podman and Podman Compose (or Docker Compose) installed.
To install Podman, run:

`sudo dnf install podman`

To install Podman Compose, use:

`pip3 install podman-compose`
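Once both are installed, you can do a quick sanity check that the tools are on your PATH (version numbers will vary):

```sh
# Confirm that Podman and Podman Compose are available
podman --version
podman-compose --version
```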
To run `myllm`, execute the following command:

`./myllm`
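The repository ships the actual `myllm` script. Purely as an illustration, a wrapper like this typically just drives `podman-compose` with the bundled compose file; the menu options and file layout below are assumptions, not the shipped script:

```sh
#!/bin/sh
# Hypothetical sketch of an interactive wrapper like myllm (not the real script).
# Assumes the compose file sits next to the script.
COMPOSE_FILE="$(dirname "$0")/myllm_docker-compose.yml"

echo "1) start the local LLM"
echo "2) stop the local LLM"
printf "Choose an option: "
read -r choice

case "$choice" in
  1) podman-compose -f "$COMPOSE_FILE" up -d ;;   # start Ollama + WebUI in the background
  2) podman-compose -f "$COMPOSE_FILE" down ;;    # stop and remove the containers
  *) echo "Unknown option: $choice" ;;
esac
```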
If you prefer to use `myllm` as a command, follow these steps:

- Copy the script and the compose file to `/opt`:
  `sudo cp myllm myllm_docker-compose.yml /opt`
- Add `/opt` to your `PATH` (see the note after this list for making the change permanent):
  `export PATH=$PATH:/opt`
- Now you can run:
  `myllm`
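The `export` above only affects the current shell session. To keep the change across sessions, you can append it to your shell profile (bash assumed here):

```sh
# Persist the PATH change for future shells (bash assumed)
echo 'export PATH=$PATH:/opt' >> ~/.bashrc
source ~/.bashrc
```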
- Run `myllm` or `./myllm` and enter `1` to start the local LLM.
- Open your browser and navigate to http://localhost:8080 to access the prompt UI (if the page does not load, see the quick check after this list).
- If this is your first time running the application on your machine, create a user account. All user data is stored locally.
- By default, this local LLM setup does not include any pre-installed models. You will need to download a model from the Ollama repository.
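If the WebUI does not come up right away, a quick way to verify that the containers started and the port is answering (container names depend on the compose file):

```sh
# List running containers and probe the WebUI port
podman ps
curl -sSf http://localhost:8080 > /dev/null && echo "WebUI is reachable"
```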
To download a model, go to Settings > Models. Under "Pull a model from ollama.com," enter the model name (e.g., `qwen2:0.5b`) and click the download button. You can find a list of available models in the Ollama library at https://ollama.com/library.
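If you prefer the command line over the UI, a model can also be pulled directly inside the Ollama container. The container name below is an assumption; check `podman ps` for the actual name used by your compose file:

```sh
# Pull a model from inside the Ollama container ("ollama" is an assumed container name)
podman exec -it ollama ollama pull qwen2:0.5b
```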
Happy Prompting! 😊