This lesson was built using a number of core resources from OpenAI and Azure OpenAI as references for terminology and tutorials. Here is a non-comprehensive list for your own self-guided learning journey.
Title/Link | Description |
---|---|
Fine-tuning with OpenAI Models | Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, saving you costs, improving response quality, and enabling lower-latency requests. Get an overview of fine-tuning from OpenAI. |
What is Fine-Tuning with Azure OpenAI? | Understand what fine-tuning is (concept), why you should consider it (motivating problem), what data to use (training), and how to measure quality. |
Customize a model with fine-tuning | Azure OpenAI Service lets you tailor its models to your personal datasets using fine-tuning. Learn how to fine-tune (process) select models using Azure AI Studio, the Python SDK, or the REST API; a minimal Python SDK sketch follows this table. |
Recommendations for LLM fine-tuning | LLMs may not perform well on specific domains, tasks, or datasets, or may produce inaccurate or misleading outputs. When should you consider fine-tuning as a possible solution to this? |
Continuous Fine Tuning | Continuous fine-tuning is the iterative process of selecting an already fine-tuned model as a base model and fine-tuning it further on new sets of training examples. |
Fine-tuning and function calling | Fine-tuning your model with function calling examples can yield more accurate, consistently formatted outputs and can save costs; a sample training example is sketched after this table. |
Fine-tuning Models: Azure OpenAI Guidance | Look up this table to understand which models can be fine-tuned in Azure OpenAI and which regions they are available in. Look up their token limits and training data expiry dates if needed. |
To Fine Tune or Not To Fine Tune? That is the Question | This 30-min Oct 2023 episode of the AI Show discusses benefits, drawbacks and practical insights that help you make this decision. |
Getting Started With LLM Fine-Tuning | This AI Playbook resource walks you through data requirements, formatting, hyperparameter tuning, and the challenges and limitations you should know about. |
Tutorial: Azure OpenAI GPT-3.5 Turbo Fine-Tuning | Learn to create a sample fine-tuning dataset, prepare for fine-tuning, create a fine-tuning job, and deploy the fine-tuned model on Azure. |
Tutorial: Fine-tune a Llama 2 model in Azure AI Studio | Azure AI Studio lets you tailor large language models to your personal datasets using a UI-based workflow suitable for low-code developers. See this example. |
Tutorial: Fine-tune Hugging Face models for a single GPU on Azure | This article describes how to fine-tune a Hugging Face model on a single GPU using the Hugging Face transformers Trainer library on Azure Databricks. |
Training: Fine-tune a foundation model with Azure Machine Learning | The model catalog in Azure Machine Learning offers many open source models you can fine-tune for your specific task. Try this module from the AzureML Generative AI Learning Path. |
Tutorial: Azure OpenAI Fine-Tuning | Fine-tuning GPT-3.5 or GPT-4 models on Microsoft Azure using W&B allows for detailed tracking and analysis of model performance. This guide extends the concepts from the OpenAI Fine-Tuning guide with specific steps and features for Azure OpenAI. |
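
Several rows above describe the same programmatic workflow: upload a JSONL training file, create a fine-tuning job, and deploy the result. Here is a minimal sketch of that flow with the `openai` Python SDK against Azure OpenAI; the endpoint, API version, base model name, and file name are placeholder assumptions, so check the tutorials above for currently supported values.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed; check the Azure OpenAI docs for the latest version
)

# 1. Upload the JSONL file of chat-format training examples.
training_file = client.files.create(
    file=open("training_set.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create the fine-tuning job on a supported base model
#    (see the model/region guidance table above).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0613",  # assumed base model name
)

# 3. Poll job status; once it succeeds, deploy the resulting fine-tuned model.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```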
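
For the function-calling row, each training example is one JSONL line that carries the function definitions alongside the conversation. The sketch below builds one such line in Python; the function name and schema are hypothetical, and it uses the legacy `functions`/`function_call` fields, so confirm the current format in the linked docs.

```python
import json

# One hypothetical training example for function-calling fine-tuning.
example = {
    "messages": [
        {"role": "user", "content": "What is the weather in Seattle?"},
        {
            "role": "assistant",
            "function_call": {
                "name": "get_current_weather",
                "arguments": json.dumps({"location": "Seattle, WA"}),
            },
        },
    ],
    "functions": [
        {
            "name": "get_current_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }
    ],
}

# Each example occupies exactly one line in the JSONL training file.
with open("training_set.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```
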
This section captures additional resources that are worth exploring, but that we did not have time to cover in this lesson. They may be covered in a future lesson or as a secondary assignment option. For now, use them to build your own expertise and knowledge around this topic.
Title/Link | Description |
---|---|
OpenAI Cookbook: Data preparation and analysis for chat model fine-tuning | This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. See: Fine-tuning method for gpt-3.5-turbo. A minimal validation sketch follows this table. |
OpenAI Cookbook: Fine-Tuning for Retrieval Augmented Generation (RAG) with Qdrant | The aim of this notebook is to walk through a comprehensive example of how to fine-tune OpenAI models for Retrieval Augmented Generation (RAG). We will also be integrating Qdrant and Few-Shot Learning to boost model performance and reduce fabrications. |
OpenAI Cookbook: Fine-tuning GPT with Weights & Biases | Weights & Biases (W&B) is an AI developer platform with tools for training models, fine-tuning models, and leveraging foundation models. Read their OpenAI Fine-Tuning guide first, then try the Cookbook exercise. |
Community Tutorial: Phinetuning 2.0 - fine-tuning for Small Language Models | Meet Phi-2, Microsoft’s new small model, remarkably powerful yet compact. This tutorial guides you through fine-tuning Phi-2, demonstrating how to build a unique dataset and fine-tune the model using QLoRA. |
Hugging Face Tutorial: How to Fine-Tune LLMs in 2024 with Hugging Face | This blog post walks you through how to fine-tune open LLMs using Hugging Face TRL, Transformers & datasets in 2024. You define a use case, set up a dev environment, prepare a dataset, fine-tune the model, test and evaluate it, then deploy it to production; a minimal TRL sketch follows this table. |
Hugging Face: AutoTrain Advanced | Brings faster and easier training and deployment of state-of-the-art machine learning models. The repo has Colab-friendly tutorials for fine-tuning, with YouTube video guidance, and reflects the recent local-first update. Read the AutoTrain documentation. |
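
In the spirit of the data-preparation cookbook above, here is a minimal validation sketch that format-checks a chat-format JSONL dataset and estimates token counts with `tiktoken`. The file name is an assumption, and the cookbook notebook does considerably more (distribution statistics, cost estimates, and deeper format checks).

```python
import json
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding family used by gpt-3.5-turbo

with open("training_set.jsonl") as f:
    dataset = [json.loads(line) for line in f]

for i, example in enumerate(dataset):
    messages = example.get("messages", [])
    # Basic format check: every message needs a role; most need content.
    if not messages or any("role" not in m for m in messages):
        print(f"example {i}: malformed 'messages' list")
        continue
    tokens = sum(len(encoding.encode(m.get("content") or "")) for m in messages)
    print(f"example {i}: {len(messages)} messages, ~{tokens} content tokens")
```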
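
And to accompany the Hugging Face TRL tutorial above, a minimal supervised fine-tuning sketch. The base model and dataset names are placeholder assumptions, and the `SFTTrainer` API varies across TRL versions, so treat this as the shape of the workflow rather than a recipe.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder instruction dataset; substitute your own.
dataset = load_dataset("HuggingFaceH4/no_robots", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",  # assumed base model (gated; requires access)
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="llama2-sft",  # checkpoint directory
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
    ),
)
trainer.train()
```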