Run multiple open-source large language models — several instances of the same model, or different ones such as Llama 2, Mistral, and Gemma — in parallel, powered by Ollama.
Demo: Screen.Recording.April.4.mov
You need Ollama installed on your computer.
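Each model runs behind Ollama's local HTTP API, so "in parallel" boils down to fanning one prompt out to several models concurrently. Below is a minimal TypeScript sketch of that idea — not this project's actual backend code. It assumes Ollama's default port `11434` and its `/api/generate` endpoint; the model names are examples.

```typescript
// Sketch: send one prompt to several Ollama models at the same time.
type Generate = (model: string, prompt: string) => Promise<string>;

// Query Ollama's HTTP API for a single model (assumes the default
// localhost:11434 endpoint; adjust for your setup).
const askOllama: Generate = async (model, prompt) => {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
};

// Run the same prompt against every model simultaneously and collect
// the answers keyed by model name.
async function askAll(
  models: string[],
  prompt: string,
  ask: Generate = askOllama,
): Promise<Record<string, string>> {
  const answers = await Promise.all(models.map((m) => ask(m, prompt)));
  return Object.fromEntries(models.map((m, i) => [m, answers[i]]));
}
```

For example, `askAll(["llama2", "mistral", "gemma"], "Why is the sky blue?")` would dispatch all three requests at once via `Promise.all` rather than waiting for each model in turn.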
Press Cmd + K to open the chat prompt (Alt + K on Windows).
Start the backend:
cd backend
bun install
bun run index.ts
In a separate terminal, start the frontend:
cd frontend
bun install
bun run dev
Running in Docker containers: frontend + (backend + Ollama)
On Windows:
docker compose -f docker-compose.windows.yml up
On Linux/macOS:
docker compose -f docker-compose.unix.yml up
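The real service layout lives in `docker-compose.windows.yml` and `docker-compose.unix.yml`; the fragment below is only an illustrative sketch of the three-service shape described above (frontend + backend + Ollama) — service names, images, and ports are assumptions, so check the actual compose files.

```yaml
# Illustrative sketch, not the repository's real compose file.
services:
  ollama:
    image: ollama/ollama      # official Ollama image
    ports:
      - "11434:11434"         # Ollama's default API port
  backend:
    build: ./backend
    depends_on:
      - ollama                # backend talks to the Ollama API
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"           # dev server port exposed to the host
```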
The frontend is available at http://localhost:5173.
⚠️ Still work in progress