ChatGPT is GPT-3.5 finetuned with RLHF (Reinforcement Learning from Human Feedback) for human instruction and chat.
Alternatives are projects featuring different instruction-finetuned language models for chat. Projects are not counted if they are:
- Alternative frontend projects which simply call OpenAI's APIs.
- Using language models which are not finetuned for human instruction or chat.
Tags:
- Bare: source code only; no data, no model weights, no chat system
- Standard: data and model weights available; bare chat via API
- Full: data and model weights available; full chat system, including TUI and GUI
- Complicated: semi open source, not really open source, based on a closed model, etc.
Other relevant lists:
- yaodongC/awesome-instruction-dataset: A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
Table of contents:
- The template
- The list
- lucidrains/PaLM-rlhf-pytorch
- togethercomputer/OpenChatKit
- oobabooga/text-generation-webui
- KoboldAI/KoboldAI-Client
- LAION-AI/Open-Assistant
- tatsu-lab/stanford_alpaca
- BlinkDL/ChatRWKV
- THUDM/ChatGLM-6B
- bigscience-workshop/xmtf
- carperai/trlx
- databrickslabs/dolly
- LianjiaTech/BELLE
- ethanyanjiali/minChatGPT
- cerebras/Cerebras-GPT
- TavernAI/TavernAI
- Cohee1207/SillyTavern
- h2oai/h2ogpt
- mlc-ai/web-llm
- Stability-AI/StableLM
- clue-ai/ChatYuan
- OpenLMLab/MOSS
# The template

Append new projects at the end of the file:

## [{owner}/{project-name}](https://github.com/link/to/project)

Description goes here

Tags: Bare/Standard/Full/Complicated
# The list

## [lucidrains/PaLM-rlhf-pytorch](https://github.com/lucidrains/PaLM-rlhf-pytorch)

Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.
Tags: Bare
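For context, a minimal sketch of the reward-model stage that RLHF implementations like this build on: a scalar reward head trained on human preference pairs with the standard pairwise ranking loss. The toy model, sizes, and random tensors below are hypothetical stand-ins, not this project's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Toy stand-in for a transformer with a scalar reward head."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens):
        hidden, _ = self.encoder(self.embed(tokens))
        return self.head(hidden[:, -1]).squeeze(-1)  # one scalar per sequence

model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One step on a batch of 8 (chosen, rejected) preference pairs of length 32.
chosen = torch.randint(0, 1000, (8, 32))    # responses labelers preferred
rejected = torch.randint(0, 1000, (8, 32))  # responses labelers rejected
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The trained reward model then scores policy samples during the PPO stage.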
## [togethercomputer/OpenChatKit](https://github.com/togethercomputer/OpenChatKit)

OpenChatKit provides a powerful, open-source base to create both specialized and general-purpose chatbots for various applications.
Tags: Full
## [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)

A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
Tags: Full
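As a rough illustration of what such a UI does (not this project's actual code), here is a minimal Gradio chat front-end over a small stand-in Hugging Face model:

```python
import gradio as gr
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for a chat LLM

def respond(message, history):
    # Real UIs build a prompt from `history`; this toy example ignores it.
    return generator(message, max_new_tokens=50)[0]["generated_text"]

gr.ChatInterface(respond).launch()  # serves a chat UI on localhost
```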
## [KoboldAI/KoboldAI-Client](https://github.com/KoboldAI/KoboldAI-Client)

This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author's Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play the game like AI Dungeon Unleashed.
Tags: Full
## [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
Tags: Full
## [tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)

This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model.
Related links:
- pointnetwork/point-alpaca: Released weights recreated from Stanford Alpaca, an experiment in fine-tuning LLaMA on a synthetic instruction dataset.
- tloen/alpaca-lora: Code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA); the technique is sketched below, after this section.
- ggerganov/llama.cpp: Ports for inferencing LLaMA in C/C++ running on CPUs; supports alpaca, gpt4all, etc.
- setzer22/llama-rs: Rust port of the llama.cpp project.
- juncongmoo/chatllama: Open-source implementation of LLaMA-based ChatGPT, runnable on a single GPU.
- Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT.
- nomic-ai/gpt4all: Demo, data, and code to train an assistant-style large language model with ~800k GPT-3.5-Turbo generations based on LLaMA.
- hpcaitech/ColossalAI#ColossalChat: An open-source solution for cloning ChatGPT with a complete RLHF pipeline.
- lm-sys/FastChat: An open platform for training, serving, and evaluating large language model based chatbots.
- nsarrazin/serge: A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.
Tags: Complicated
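The LoRA sketch referenced above: a minimal example using the Hugging Face peft library. The base model and hyperparameters are illustrative stand-ins, not alpaca-lora's exact configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for LLaMA
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the update
    target_modules=["c_attn"],  # attention projections to adapt (GPT-2 naming)
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the tiny adapter weights are trainable
```

Because only the adapters train, a 7B-class model can be instruction-tuned on a single consumer GPU, which is the point of the alpaca-lora line of work.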
## [BlinkDL/ChatRWKV](https://github.com/BlinkDL/ChatRWKV)

ChatRWKV is like ChatGPT, but powered by the RWKV (100% RNN) language model, and it is open source.
Tags: Full
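To illustrate why the 100% RNN design matters (a generic sketch only, not RWKV's actual formulation): an RNN-style LM carries a fixed-size state between tokens, so memory per generated token stays constant instead of growing like a transformer's KV cache.

```python
import torch
import torch.nn as nn

class TinyRNNLM(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.cell = nn.GRUCell(dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def step(self, token, state):
        state = self.cell(self.embed(token), state)  # state size never grows
        return self.out(state), state

model = TinyRNNLM()
state = torch.zeros(1, 64)   # the model's entire memory of the context
token = torch.tensor([0])
for _ in range(10):          # greedy autoregressive decoding, O(1) per step
    logits, state = model.step(token, state)
    token = logits.argmax(dim=-1)
```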
## [THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B)

ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6 GB of GPU memory is required at the INT4 quantization level); a loading sketch follows this section's links.
Related links:
- Alternative Web UI: Akegarasu/ChatGLM-webui
- Slim version (remove 20K image tokens to reduce memory usage): silver/chatglm-6b-slim
- Fine-tune ChatGLM-6B using low-rank adaptation (LoRA): lich99/ChatGLM-finetune-LoRA
- Deploying ChatGLM on Modelz: tensorchord/modelz-ChatGLM
- Docker image with built-in playground UI and streaming API compatible with OpenAI, using Basaran: peakji92/chatglm:6b
Tags: Full
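The loading sketch mentioned above, following the usage pattern in the project's README; exact method names may vary across repo versions.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = model.half().quantize(4).cuda().eval()  # INT4: ~6 GB of GPU memory

response, history = model.chat(tokenizer, "Hello", history=[])
print(response)
```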
## [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)

This repository provides an overview of all components used for the creation of BLOOMZ & mT0 and xP3 introduced in the paper Crosslingual Generalization through Multitask Finetuning.
Tags: Standard
## [carperai/trlx](https://github.com/carperai/trlx)

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF), supporting online RL up to 20b params and offline RL to larger models. Basically what you would use to finetune GPT into ChatGPT.
Tags: Bare
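A minimal sketch of what online RLHF training with trlX looks like; the reward function here is a toy stand-in, since real use would score samples with a trained reward model.

```python
import trlx

def reward_fn(samples, **kwargs):
    return [float(len(s)) for s in samples]  # toy: longer answers score higher

trainer = trlx.train(
    "gpt2",  # base model to optimize with PPO
    reward_fn=reward_fn,
    prompts=["Explain RLHF in one sentence:"] * 64,
)
```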
## [databrickslabs/dolly](https://github.com/databrickslabs/dolly)

Databricks' dolly-v2-12b is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. It is based on pythia-12b, fine-tuned on ~15k instruction/response records (databricks-dolly-15k) generated by Databricks employees in capability domains from the InstructGPT paper.
Tags: Standard
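Inference follows the standard transformers pipeline pattern from the model card; device_map="auto" assumes the accelerate library is installed.

```python
import torch
from transformers import pipeline

generate = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # the model ships a custom instruction pipeline
    device_map="auto",       # shard weights across available GPUs
)
print(generate("Explain what instruction finetuning is.")[0]["generated_text"])
```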
## [LianjiaTech/BELLE](https://github.com/LianjiaTech/BELLE)

The goal of this project is to promote the development of an open-source community for Chinese large-scale conversational models. It optimizes for Chinese on top of the original Stanford Alpaca, and the model finetuning uses only data generated via ChatGPT (no other data). This repo contains:
- 175 Chinese seed tasks used for generating the data (a generation sketch follows this section)
- code for generating the data
- 0.5M generated examples used for fine-tuning the model
- the model finetuned from BLOOMZ-7B1-mt on data generated by this project
Tags: Standard
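A hedged sketch of the self-instruct-style generation step the list above describes, assuming the pre-1.0 openai Python client; the seed tasks and prompt are toy stand-ins, not BELLE's actual prompts.

```python
import openai

seed_tasks = [
    "Translate the following sentence into English.",
    "Write a short poem about spring.",
]
prompt = (
    "Here are some example instruction tasks:\n"
    + "\n".join(f"- {t}" for t in seed_tasks)
    + "\nGenerate 10 new, diverse instruction tasks in the same style."
)
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # candidate tasks to filter and collect
```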
## [ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT)

A minimal example of aligning language models with RLHF, similar to ChatGPT.
Tags: Standard
## [cerebras/Cerebras-GPT](https://github.com/cerebras/Cerebras-GPT)

Seven open-source GPT-3-style models ranging from 111 million to 13 billion parameters, trained using the Chinchilla formula (worked out below). Model weights have been released under a permissive license (Apache 2.0 in particular).
Tags: Standard
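The Chinchilla formula prescribes roughly 20 training tokens per parameter; worked out here for three of the released sizes.

```python
# Chinchilla rule of thumb: ~20 training tokens per model parameter.
for params in (111e6, 1.3e9, 13e9):
    tokens = 20 * params
    print(f"{params / 1e9:6.3f}B params -> ~{tokens / 1e9:.0f}B training tokens")
# 0.111B -> ~2B, 1.3B -> ~26B, 13B -> ~260B tokens
```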
## [TavernAI/TavernAI](https://github.com/TavernAI/TavernAI)

Atmospheric adventure chat for AI language models: Pygmalion by default, with support for other backends such as KoboldAI, ChatGPT, and GPT-4.
Tags: Full
## [Cohee1207/SillyTavern](https://github.com/Cohee1207/SillyTavern)

SillyTavern is a fork of TavernAI 1.2.8 under more active development, with many major features added. At this point the two can be considered completely independent programs. On its own, Tavern is useless, as it is just a user interface; you need access to an AI backend that can act as the roleplay character. Supported backends include the OpenAI API (GPT), KoboldAI (running locally or on Google Colab), and more.
Tags: Full
## [h2oai/h2ogpt](https://github.com/h2oai/h2ogpt)

h2oGPT - The world's best open source GPT:
- Open-source repository with fully permissive, commercially usable code, data and models
- Code for preparing large open-source datasets as instruction datasets for fine-tuning of large language models (LLMs), including prompt engineering
- Code for fine-tuning large language models (currently up to 20B parameters) on commodity hardware and enterprise GPU servers (single or multi node)
- Code to run a chatbot on a GPU server, with shareable end-point with Python client API
- Code to evaluate and compare the performance of fine-tuned LLMs
Tags: Full
## [mlc-ai/web-llm](https://github.com/mlc-ai/web-llm)

Bringing large language models and chat to web browsers. Everything runs inside the browser with no server support.
Tags: Full
## [Stability-AI/StableLM](https://github.com/Stability-AI/StableLM)

This repository contains Stability AI's ongoing development of the StableLM series of language models and will be continuously updated with new checkpoints.
Related links:
- Chat demo: huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat
- StableVicuna: an RLHF fine-tune of Vicuna-13B v0, which itself is a fine-tune of LLaMA-13B.
Tags: Full
## [clue-ai/ChatYuan](https://github.com/clue-ai/ChatYuan)

ChatYuan: a large language model for dialogue in Chinese and English. (The repos are mostly in Chinese.)
Tags: Full
## [OpenLMLab/MOSS](https://github.com/OpenLMLab/MOSS)

MOSS: an open-source tool-augmented conversational language model from Fudan University. (Most examples are in Chinese.)
Tags: Full