GitHub LLM Course from Maxime Labonne and Pietro Monticone: https://github.com/mlabonne/llm-course
Video from Maxime Labonne explaining what fine-tuning and merging LLMs are: https://youtu.be/uLrOI65XbDw?si=AI7KRy7hHGZLx7hN
Business use cases: https://www.turing.com/resources/finetuning-large-language-models
DataCamp tutorials
- Fine-tuning large language models: https://www.datacamp.com/tutorial/fine-tuning-large-language-models
- Model distillation with OpenAI: https://www.datacamp.com/tutorial/model-distillation-openai
Blogs from Meta
- Methods for LLM Adaptation (part 1): https://ai.meta.com/blog/adapting-large-language-models-llms/
- To fine-tune or not to fine-tune (part 2): https://ai.meta.com/blog/when-to-fine-tune-llms-vs-other-techniques/
- Data curation (part 3): https://ai.meta.com/blog/how-to-fine-tune-llms-peft-dataset-curation/
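Part 3 of the Meta series discusses parameter-efficient fine-tuning (PEFT); a minimal sketch of the idea with Hugging Face's peft library is shown below. The checkpoint name and LoRA hyperparameters are illustrative placeholders, not values recommended in the blog posts.

```python
# Minimal LoRA setup with Hugging Face peft (checkpoint and hyperparameters
# are illustrative assumptions, not values from the Meta posts).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-350m"  # example open checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters to the attention projections; only these are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

The wrapped model can then be trained with a standard Transformers training loop, while the frozen base weights stay untouched.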
Fine-tuning on AWS: https://aws.amazon.com/blogs/machine-learning/fine-tune-and-deploy-language-models-with-amazon-sagemaker-canvas-and-amazon-bedrock/
PyTorch tutorial on knowledge distillation: https://pytorch.org/tutorials/beginner/knowledge_distillation_tutorial.html
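The heart of that tutorial is a combined loss: a temperature-scaled KL term against the teacher's soft targets blended with the usual cross-entropy on the hard labels. Below is a minimal sketch of such a loss (the temperature and weighting values are illustrative choices, not the tutorial's exact settings).

```python
# Minimal knowledge-distillation loss: soften teacher and student logits with a
# temperature, compare them via KL divergence, and blend with hard-label CE.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions,
    # multiplied by T^2 so gradient magnitudes stay comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy example: batch of 4 samples, 10 classes.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```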
DataDreamer, a workflow library for streamlining instruction-tuning and model alignment: https://github.com/datadreamer-dev/DataDreamer
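As a rough illustration of the kind of output an instruction-tuning pipeline ultimately produces, each prompt/response pair is rendered into the model's chat template before supervised fine-tuning. The sketch below uses plain Transformers tokenizer utilities, not DataDreamer's own API, and the checkpoint name is just an example.

```python
# Formatting an instruction/response pair with a model's chat template
# (plain transformers utilities; not DataDreamer's API, which the linked
# repository documents).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")  # example checkpoint

messages = [
    {"role": "user", "content": "Summarize the benefits of parameter-efficient fine-tuning."},
    {"role": "assistant", "content": "It updates a small set of adapter weights, cutting memory and compute."},
]

# Render the conversation into the training text expected by the model.
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```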