- 11-05-2024: Added support for vision on hosted APIs
- 11-01-2024: Added support for hosted APIs
- 10-27-2024: Added prompt refining
vnc-lm is a Discord bot with ollama, OpenRouter, Mistral, Cohere, and GitHub Models API integration.
Load and manage language models through local or hosted API endpoints. Configure parameters, branch conversations, and refine prompts to improve responses.
- Web scraping
- Model pulling with ollama
Load models using the `/model` command. The bot sends a notification upon successful model loading. Local models can be removed with the `remove` parameter. Download new models by sending a model tag link in Discord:

https://ollama.com/library/llama3.2:1b-instruct-q8_0

https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q8_0.gguf

🔧 Model downloading and removal is turned off by default and can be enabled by configuring the `.env`.
Configure model behavior by adjusting the `num_ctx` (context length), `system_prompt` (base instructions), and `temperature` (response randomness) parameters.
Messages longer than 1500 characters are automatically paginated during generation. Message streaming is available with ollama. Other APIs handle responses quickly without streaming. The context window accepts text files, web links, and images. Vision is available only through the supported hosted APIs. Even on hosted APIs, not all models support vision capabilities. Models running locally need OCR to process text from images. Deploy using Docker for a simplified setup.
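As a rough illustration of the pagination step (a minimal sketch, not the bot's actual implementation; the 1500-character threshold is configurable via `CHARACTER_LIMIT` below):

```typescript
// Minimal sketch of embed pagination, assuming a plain string response.
// Splits text into pages of at most `limit` characters, preferring to
// break at the last newline or space so words are not cut mid-way.
function paginate(text: string, limit = 1500): string[] {
  const pages: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    const window = rest.slice(0, limit);
    const breakAt = Math.max(window.lastIndexOf("\n"), window.lastIndexOf(" "));
    const cut = breakAt > 0 ? breakAt : limit; // fall back to a hard cut
    pages.push(rest.slice(0, cut));
    rest = rest.slice(cut).trimStart();
  }
  if (rest.length > 0) pages.push(rest);
  return pages;
}
```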
Switch conversations by selecting `Rejoin Conversation` from the context menu. Branch conversations from any message. Messages are cached and organized in `bot_cache.json`. Messages deleted in Discord are also deleted from the cache. The `entrypoint.sh` script maintains conversation history across Docker container restarts.
💡 Message `stop` to end message generation early.
Edit your last prompt to refine the model's response. The bot generates a new response using your edited prompt, replacing the previous output.
Docker: a platform designed to help developers build, share, and run containerized applications.
| Provider | Description |
| --- | --- |
| ollama | Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models. |
| OpenRouter | A unified interface for LLMs. Find the best models & prices for your prompts. Use the latest state-of-the-art models from OpenAI, Anthropic, Google, and Meta. |
| Mistral | Mistral AI is a research lab building the best open source models in the world. La Plateforme enables developers and enterprises to build new products and applications, powered by Mistral's open source and commercial LLMs. |
| Cohere | The Cohere platform builds natural language processing and generation into your product with a few lines of code. Our large language models can solve a broad spectrum of natural language use cases, including classification, semantic search, paraphrasing, summarization, and content generation. |
| GitHub Models | If you want to develop a generative AI application, you can use GitHub Models to find and experiment with AI models for free. Once you are ready to bring your application to production, you can switch to a token from a paid Azure account. |
💡 Each API offers a free tier.
```
git clone https://github.com/jake83741/vnc-lm.git
cd vnc-lm
```
Rename `.env.example` to `.env`. Configure the fields below; a sample `.env` follows the list:
- `TOKEN`: Discord bot token from the Discord Developer Portal. Set the required bot permissions.
- `OLLAMAURL`: ollama server URL. See the API documentation. For Docker: `http://host.docker.internal:11434`
- `NUM_CTX`: Context window size. Default: `2048`
- `TEMPERATURE`: Response randomness. Default: `0.4`
- `KEEP_ALIVE`: Model retention time in memory. Default: `45m`
- `CHARACTER_LIMIT`: Page embed character limit. Default: `1500`
- `API_RESPONSE_UPDATE_FREQUENCY`: Number of API response chunks received before the message updates. Low values can trigger Discord rate limiting. Default: `10`
- `ADMIN`: Discord user ID granted model-management permissions
- `REQUIRE_MENTION`: Toggle whether the bot must be mentioned. Default: `false`
- `USE_OCR`: Toggle OCR. Default: `false`
- `OPENROUTER`: OpenRouter API key from the OpenRouter Dashboard
- `OPENROUTER_MODELS`: Comma-separated list of OpenRouter models
- `MISTRAL_API_KEY`: Mistral API key from the Mistral Dashboard
- `MISTRAL_MODELS`: Comma-separated list of Mistral models
- `COHERE_API_KEY`: Cohere API key from the Cohere Dashboard
- `COHERE_MODELS`: Comma-separated list of Cohere models
- `GITHUB_API_KEY`: GitHub API key from the GitHub Models Dashboard
- `GITHUB_MODELS`: Comma-separated list of GitHub models
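For reference, a filled-in `.env` might look like this (every value below is a placeholder, and the OpenRouter model names are only examples):

```
TOKEN=your-discord-bot-token
OLLAMAURL=http://host.docker.internal:11434
NUM_CTX=2048
TEMPERATURE=0.4
KEEP_ALIVE=45m
CHARACTER_LIMIT=1500
API_RESPONSE_UPDATE_FREQUENCY=10
ADMIN=your-discord-user-id
REQUIRE_MENTION=false
USE_OCR=false
OPENROUTER=your-openrouter-api-key
OPENROUTER_MODELS=anthropic/claude-3.5-sonnet,meta-llama/llama-3.1-70b-instruct
```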
```
docker compose up --build
```
💡 Send `/help` for instructions on how to use the bot.
To run without Docker instead:

```
npm install
npm run build
npm start
```
Use `/model` to load, configure, and remove models. Quickly adjust model behavior using the optional parameters `num_ctx`, `system_prompt`, and `temperature`. Note that `num_ctx` only works with local ollama models.
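For example, loading a local model with a larger context window might look like this (the model name and exact option labels are illustrative; send `/help` for the actual options):

```
/model model:llama3.2:1b-instruct-q8_0 num_ctx:4096 temperature:0.4
```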
Refine prompts to modify model responses. Each refinement generates a new response that overwrites the previous one. Multiple refinements are supported. The latest prompt version is saved in `bot_cache.json`.
Send images to vision-enabled models to process visual content alongside text. Images are included directly in the conversation context. Images are encoded in Base64 before being sent to the API.
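A minimal sketch of that encoding step (illustrative only; `axios` is already among the project's dependencies, but this is not the bot's actual code):

```typescript
import axios from "axios";

// Downloads an image (e.g. a Discord attachment URL) and returns it as a
// Base64 string, ready to place in a vision-capable API request payload.
async function imageUrlToBase64(url: string): Promise<string> {
  const response = await axios.get<ArrayBuffer>(url, {
    responseType: "arraybuffer",
  });
  return Buffer.from(response.data).toString("base64");
}
```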
Access `Rejoin Conversation` in Discord's context menu to resume from any message. Hop between conversations while maintaining context. Create new conversation branches as needed. Continue conversations using different models and parameter settings.
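A hypothetical sketch of the kind of shape such a branch-aware cache could take (the real `bot_cache.json` layout may differ):

```typescript
// Hypothetical cache shapes for illustration; not the actual bot_cache.json schema.
interface CachedMessage {
  messageId: string;           // Discord message ID; pruned if deleted in Discord
  role: "user" | "assistant";
  content: string;             // latest text after any prompt refinements
  parentId?: string;           // message this one branches from, if any
}

interface CachedConversation {
  conversationId: string;
  model: string;               // model last used for this conversation
  messages: CachedMessage[];
}
```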
```
.
├── LICENSE
├── README.md
├── docker-compose.yaml
├── dockerfile
├── .env.example
├── package.json
├── screenshots
└── src
    ├── api-connections
    │   ├── config
    │   │   └── models.ts
    │   ├── factory.ts
    │   ├── index.ts
    │   ├── interfaces
    │   │   ├── base-client.ts
    │   │   └── model-manager.ts
    │   ├── models.ts
    │   └── provider
    │       ├── hosted
    │       │   └── client.ts
    │       └── ollama
    │           └── client.ts
    ├── bot.ts
    ├── commands
    │   ├── command-registry.ts
    │   ├── help-command.ts
    │   ├── model-command.ts
    │   ├── optional-params
    │   │   └── remove.ts
    │   └── rejoin-conversation.ts
    ├── managers
    │   ├── cache
    │   │   ├── entrypoint.sh
    │   │   ├── index.ts
    │   │   ├── manager.ts
    │   │   └── store.ts
    │   ├── generation
    │   │   ├── chunk.ts
    │   │   ├── create.ts
    │   │   └── preprocessing.ts
    │   ├── message
    │   │   └── manager.ts
    │   └── pages
    │       └── manager.ts
    ├── services
    │   ├── ocr.ts
    │   └── scraper.ts
    ├── utilities
    │   ├── constants.ts
    │   ├── index.ts
    │   ├── settings.ts
    │   └── types.ts
    └── tsconfig.json
```
```json
{
  "dependencies": {
    "@azure-rest/ai-inference": "latest",
    "@azure/core-auth": "latest",
    "@mozilla/readability": "^0.5.0",
    "@types/xlsx": "^0.0.35",
    "axios": "^1.7.2",
    "cohere-ai": "^7.14.0",
    "discord.js": "^14.15.3",
    "dotenv": "^16.4.5",
    "jsdom": "^24.1.3",
    "puppeteer": "^22.14.0",
    "sharp": "^0.33.5",
    "tesseract.js": "^5.1.0"
  },
  "devDependencies": {
    "@types/axios": "^0.14.0",
    "@types/dotenv": "^8.2.0",
    "@types/jsdom": "^21.1.7",
    "@types/node": "^18.15.25",
    "@types/pdf-parse": "^1.1.4",
    "typescript": "^5.1.3"
  }
}
```
- Set higher `num_ctx` values when using attachments with large amounts of text.
- Vision models may have difficulty with follow-up questions.
This project is licensed under the MIT License.