# PolyOllama

Run multiple open-source large language models, such as Llama 2, Mistral, and Gemma, in parallel, whether they are the same model or different ones, powered by Ollama.
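The core idea is fanning one prompt out to several models at once. A minimal sketch of that, assuming Ollama's HTTP API on its default port (the model names, prompt, and helper names here are illustrative, not PolyOllama's actual code):

```typescript
// Hypothetical sketch: send the same prompt to several Ollama models in
// parallel via Ollama's /api/generate endpoint (default port 11434).
const OLLAMA_URL = "http://localhost:11434";

// Build the JSON body for a non-streaming /api/generate call.
function buildGenerateRequest(model: string, prompt: string) {
  return { model, prompt, stream: false };
}

// Query every model concurrently and collect the responses in order.
async function askAll(models: string[], prompt: string): Promise<string[]> {
  const results = await Promise.all(
    models.map((model) =>
      fetch(`${OLLAMA_URL}/api/generate`, {
        method: "POST",
        body: JSON.stringify(buildGenerateRequest(model, prompt)),
      }).then((r) => r.json() as Promise<{ response: string }>)
    )
  );
  return results.map((r) => r.response);
}

// Example (requires a running Ollama instance with the models pulled):
// const answers = await askAll(["llama2", "mistral", "gemma"], "Why is the sky blue?");
```

`Promise.all` is what makes the calls run simultaneously rather than one after another; each model answers independently.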

## Demo

Screen.Recording.April.4.mov

## Instructions to run it locally

You need Ollama installed on your computer.

Keyboard shortcut: Cmd + K opens the chat prompt (Alt + K on Windows).

Start the backend:

```sh
cd backend
bun install
bun run index.ts
```

Then, in a separate terminal, start the frontend:

```sh
cd frontend
bun install
bun run dev
```

## Running in Docker containers (frontend + backend + Ollama)

On Windows:

```sh
docker compose -f docker-compose.windows.yml up
```

On Linux/macOS:

```sh
docker compose -f docker-compose.unix.yml up
```

The frontend is available at http://localhost:5173.

⚠️ Still a work in progress.