Explore a comprehensive collection of resources, tutorials, papers, tools, and best practices for fine-tuning Large Language Models (LLMs). Perfect for ML practitioners and researchers!

Awesome LLMs Fine-Tuning

Welcome to the curated collection of resources for fine-tuning Large Language Models (LLMs) like GPT, BERT, RoBERTa, and their numerous variants! In this era of artificial intelligence, the ability to adapt pre-trained models to specific tasks and domains has become an indispensable skill for researchers, data scientists, and machine learning practitioners.

Large Language Models, trained on massive datasets, capture an extensive range of knowledge and linguistic nuances. However, to unleash their full potential in specific applications, fine-tuning them on targeted datasets is paramount. This process not only enhances the models' performance but also ensures that they align with the particular context, terminology, and requirements of the task at hand.

In this awesome list, we have meticulously compiled a range of resources, including tutorials, papers, tools, frameworks, and best practices, to aid you in your fine-tuning journey. Whether you are a seasoned practitioner looking to expand your expertise or a beginner eager to step into the world of LLMs, this repository is designed to provide valuable insights and guidelines to streamline your endeavors.
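To make the appeal of parameter-efficient methods such as LoRA concrete, here is a back-of-the-envelope comparison in plain Python. The layer size is illustrative (chosen to resemble a typical 7B-class attention projection), not taken from any specific model:

```python
# Full fine-tuning updates every weight in a layer; a parameter-efficient
# method such as LoRA trains only two small low-rank matrices instead.

def full_finetune_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when updating a dense d_in x d_out weight directly."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for a LoRA adapter: A (d_in x r) plus B (r x d_out)."""
    return rank * (d_in + d_out)

# Illustrative 4096 x 4096 projection, as found in many 7B-class models:
full = full_finetune_params(4096, 4096)   # 16,777,216 weights
lora = lora_params(4096, 4096, rank=8)    # 65,536 weights
print(f"LoRA trains {100 * lora / full:.2f}% of the full parameter count")
```

At rank 8 the adapter trains well under 1% of the layer's weights, which is why many of the projects listed below can fine-tune multi-billion-parameter models on a single GPU.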

Table of Contents

GitHub projects

  • LlamaIndex 🦙: A data framework for your LLM applications. (23010 stars)
  • Petals 🌸: Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (7768 stars)
  • LLaMA-Factory: An easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM3). (5532 stars)
  • lit-gpt: Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed. (3469 stars)
  • H2O LLM Studio: A framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/ (2880 stars)
  • Phoenix: AI Observability & Evaluation - evaluate, troubleshoot, and fine-tune your LLM, CV, and NLP models in a notebook. (1596 stars)
  • LLM-Adapters: Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models". (769 stars)
  • Platypus: Code for fine-tuning the Platypus family of LLMs using LoRA. (589 stars)
  • xtuner: A toolkit for efficiently fine-tuning LLMs (InternLM, Llama, Baichuan, QWen, ChatGLM2). (540 stars)
  • DB-GPT-Hub: Models, datasets, and fine-tuning techniques for DB-GPT, aimed at improving performance, especially on Text-to-SQL; a 13B LLM fine-tuned with this project achieved higher execution accuracy than GPT-4 on the Spider evaluation. (422 stars)
  • LLM-Finetuning-Hub: LLM fine-tuning and deployment scripts along with the authors' research findings. (416 stars)
  • Finetune_LLMs: A repo for fine-tuning causal LLMs. (391 stars)
  • MFTCoder: A high-accuracy, high-efficiency multi-task fine-tuning framework for code LLMs, with multi-model and multiple-training-algorithm support. (337 stars)
  • llmware: An enterprise-grade LLM-based development framework, tools, and fine-tuned models. (289 stars)
  • LLM-Kit: 🚀 A WebUI platform integrating the latest LLMs end to end: mainstream LLM APIs and open-source models, knowledge bases, databases, role play, text-to-image, LoRA and full-parameter fine-tuning, dataset creation, Live2D, and more. (232 stars)
  • h2o-wizardlm: An open-source implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning. (228 stars)
  • hcgf: Humanable Chat Generative-model Fine-tuning (LLM fine-tuning). (196 stars)
  • llm_qlora: Fine-tuning LLMs using QLoRA. (136 stars)
  • awesome-llm-human-preference-datasets: A curated list of human preference datasets for LLM fine-tuning, RLHF, and evaluation. (124 stars)
  • llm_finetuning: A convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes). (114 stars)
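Several of the projects above (LLaMA-Factory, lit-gpt, LLM-Adapters, llm_qlora) center on LoRA-style adapters. The core computation is small enough to sketch in plain Python; this toy version (illustrative sizes, not any project's actual code) shows the frozen weight plus the scaled low-rank update:

```python
# Minimal LoRA forward pass in plain Python (toy sizes; real implementations
# use GPU tensors). The frozen weight W is left untouched; only A and B
# would receive gradients during fine-tuning.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_forward(x, W, A, B, alpha, r):
    """y = x @ W + (alpha / r) * (x @ A @ B): frozen path plus low-rank update."""
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    scale = alpha / r
    return [[base[i][j] + scale * delta[i][j] for j in range(len(base[0]))]
            for i in range(len(base))]

# Toy example: 2-dim input, rank-1 adapter with B initialised to zero (as in
# the LoRA paper), so the adapted layer starts out identical to the frozen one.
x = [[1.0, 2.0]]
W = [[0.5, 0.0], [0.0, 0.5]]
A = [[1.0], [1.0]]          # 2 x 1
B = [[0.0, 0.0]]            # 1 x 2, zero-init
print(lora_forward(x, W, A, B, alpha=2, r=1))  # same as x @ W: [[0.5, 1.0]]
```

Because B starts at zero, fine-tuning begins from exactly the pre-trained behaviour and only gradually learns a task-specific correction.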

Articles & Blogs

Online Courses

Books

Research Papers

Videos

Tools & Software

  • LLaMA Efficient Tuning 🛠️: An easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon).
  • H2O LLM Studio 🛠️: A framework and no-code GUI for fine-tuning LLMs.
  • PEFT 🛠️: Parameter-Efficient Fine-Tuning (PEFT) methods for efficient adaptation of pre-trained language models to downstream applications.
  • Petals 🌸: Run large language models like BLOOM-176B collaboratively, loading a small part of the model and teaming up with others for inference or fine-tuning.
  • NVIDIA NeMo 🚀: A toolkit for building state-of-the-art conversational AI models, designed specifically for Linux.
  • Ludwig AI 🤖: A low-code framework for building custom LLMs and other deep neural networks. Easily train state-of-the-art LLMs with a declarative YAML configuration file.
  • bert4torch 🔥: An elegant PyTorch implementation of transformers. Loads weights of various open-source large models for inference and fine-tuning.
  • Alpaca.cpp 🦙: Run a fast ChatGPT-like model locally on your device - a combination of the LLaMA foundation model and an open reproduction of Stanford Alpaca for instruction fine-tuning.
  • promptfoo 📊: Evaluate and compare LLM outputs, catch regressions, and improve prompts using automatic evaluations and representative user inputs.
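Several projects in this list lean on weight quantization (e.g. lit-gpt's 4-bit and 8-bit modes, llm_finetuning's GPTQ and bitsandbytes support). The following is a pure-Python illustration of absmax 8-bit quantization - the basic idea only, not any library's actual implementation:

```python
# Absmax 8-bit quantization sketch: scale the largest-magnitude weight to
# 127, round every weight to the nearest integer code, and keep the scale
# so the floats can be approximately reconstructed later.

def quantize_absmax(weights):
    """Map a list of floats to int8 codes plus a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate float weights from codes and scale."""
    return [c * scale for c in codes]

w = [0.12, -0.50, 0.33, 0.07]
q, s = quantize_absmax(w)
print(q)                  # [30, -127, 84, 18]
print(dequantize(q, s))   # close to the original weights
```

The memory saving (one byte per weight instead of four, plus a single scale) is what lets the tools above fit large models onto consumer GPUs, at the cost of the small rounding error visible in the dequantized values.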

Conferences & Events

Slides & Presentations

Podcasts


This initial version of the Awesome List was generated with the help of the Awesome List Generator. It's an open-source Python package that uses the power of GPT models to automatically curate and generate starting points for resource lists related to a specific topic.
