Use your locally running AI models to assist you in your web browsing
A generalized information-seeking agent system with Large Language Models (LLMs).
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
Read your local files and answer your queries
MVP of an idea using multiple local LLMs to simulate and play D&D
Local AI search assistant (web or CLI) for Ollama and llama.cpp. Lightweight and easy to run, offering a Perplexity-like experience.
Run GGUF LLM models in the latest version of TextGen-webui
Local AI Open Orca For Dummies is a user-friendly guide to running Large Language Models locally. Simplify your AI journey with easy-to-follow instructions and minimal setup. Perfect for developers tired of complex processes!
Alacritty + Fish + Zellij + Starship + Neovim + i3 + Ollama 🦙 = 🚀
ScrAIbe Assistant is designed to leverage Whisper for precise audio processing and local LLMs via Ollama for efficient summarization. This tool is perfect for tasks such as taking notes from team meetings or lectures, offering a secure environment where no data—be it text, audio, or otherwise—leaves your local machine.
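Several of the tools above build on Ollama's local HTTP server. As a minimal sketch of how such a tool talks to a locally running model, the snippet below targets Ollama's documented `/api/generate` route on its default port; the model tag `"llama3"` in the usage note is a placeholder for whatever model you have pulled locally.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    # Minimal request body for /api/generate; stream=False asks for a
    # single JSON response instead of a stream of chunks.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask_local_model(model: str, prompt: str) -> str:
    # Requires a locally running Ollama server (`ollama serve`);
    # no data leaves your machine.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `ask_local_model("llama3", "Summarize this page in one sentence.")` returns the model's completion as a string.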