This repository provides the official implementation for the paper:
KD-LoRA: A Hybrid Approach to Efficient Fine-Tuning with LoRA and Knowledge Distillation
KD-LoRA combines Low-Rank Adaptation (LoRA) with Knowledge Distillation (KD) to enable lightweight, effective, and efficient fine-tuning of large language models.
| Name | Email Address |
|---|---|
| Rambod Azimi | rambod.azimi@mail.mcgill.ca |
| Rishav Rishav | mail.rishav9@gmail.com |
| Marek Teichmann | marek@cm-labs.com |
| Samira Ebrahimi Kahou | samira.ebrahimi.kahou@gmail.com |
```bash
# Clone the repository
git clone https://github.com/rambodazimi/kd-lora.git
cd kd-lora

# Install dependencies
pip install -r requirements.txt
```
Detailed instructions for running the experiments and fine-tuning the models will be added here.
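In the meantime, the following is a minimal sketch of the general KD-LoRA idea: a LoRA-adapted student is trained with a standard knowledge-distillation objective against a frozen teacher. The model names, LoRA configuration, hyperparameters, and the `kd_loss` helper below are illustrative assumptions, not the exact setup used in the paper.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative checkpoints; the teacher is assumed to already be fine-tuned on the target task.
teacher_name = "bert-base-uncased"        # assumed teacher checkpoint
student_name = "distilbert-base-uncased"  # assumed smaller student checkpoint

tokenizer = AutoTokenizer.from_pretrained(student_name)
teacher = AutoModelForSequenceClassification.from_pretrained(teacher_name, num_labels=2).eval()
student = AutoModelForSequenceClassification.from_pretrained(student_name, num_labels=2)

# Wrap the student with LoRA adapters; only the low-rank matrices (and the
# classification head) are trained, while the backbone stays frozen.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # attention projections in DistilBERT
)
student = get_peft_model(student, lora_config)


def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=2.0):
    """Standard distillation objective: cross-entropy on labels plus KL to the teacher."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    return alpha * ce + (1 - alpha) * kl


optimizer = torch.optim.AdamW(student.parameters(), lr=2e-4)

# One illustrative training step on a toy batch.
batch = tokenizer(["a great movie", "a dull movie"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
with torch.no_grad():
    teacher_logits = teacher(**batch).logits
student_logits = student(**batch).logits

loss = kd_loss(student_logits, teacher_logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice, the distillation weight, temperature, LoRA rank, and target modules are per-task choices; refer to the paper for the configurations that were actually evaluated.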
A selection of fine-tuned models is available on my Hugging Face account. You can explore and use them at the following link:
🔗 https://huggingface.co/rambodazimi
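For instance, a checkpoint published as a full (merged) model can be loaded directly with `transformers`; the repository id below is a placeholder, not an actual model name. If a checkpoint is instead stored as a LoRA adapter, it would be loaded with `peft.PeftModel.from_pretrained` on top of its base model.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository id; substitute one of the checkpoints listed on the page above.
model_id = "rambodazimi/<model-name>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
```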
If you find this work helpful, please consider citing our paper:
This project is licensed under the MIT License - see the LICENSE file for details.