
KD-LoRA

This repository provides the official implementation for the paper:
KD-LoRA: A Hybrid Approach to Efficient Fine-Tuning with LoRA and Knowledge Distillation

Overview

KD-LoRA combines Low-Rank Adaptation (LoRA) and Knowledge Distillation to enable lightweight, effective, and efficient fine-tuning of large language models.
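At a high level, a full-size teacher that has already been fine-tuned on the task is kept frozen, while a smaller student equipped with LoRA adapters is trained on a mix of the ordinary task loss and a distillation loss over the teacher's soft targets. The sketch below shows one plausible way to wire this up with Hugging Face `transformers` and `peft`; the model names, LoRA target modules, loss weighting `alpha`, and `temperature` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the KD-LoRA objective (illustrative; model choices, LoRA
# settings, and loss weights are assumptions, not the paper's configuration).
import torch
import torch.nn.functional as F
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

# In practice the teacher would be a checkpoint already fine-tuned on the target task.
teacher = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
).eval()

# Compact student wrapped with LoRA adapters; only the adapters (and the small
# classification head) remain trainable.
student = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
student = get_peft_model(
    student,
    LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.1,
        target_modules=["q_lin", "v_lin"], task_type="SEQ_CLS",
    ),
)

def kd_lora_loss(batch, labels, alpha=0.5, temperature=2.0):
    """Task cross-entropy plus KL divergence to the frozen teacher's soft targets."""
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits
    student_logits = student(**batch).logits
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * ce + (1.0 - alpha) * kd
```

Because only the LoRA adapters and the classification head receive gradients, the student stays cheap to fine-tune while the teacher's predictions continue to guide it.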

Authors

| Name | Email Address |
| --- | --- |
| Rambod Azimi | rambod.azimi@mail.mcgill.ca |
| Rishav Rishav | mail.rishav9@gmail.com |
| Marek Teichmann | marek@cm-labs.com |
| Samira Ebrahimi Kahou | samira.ebrahimi.kahou@gmail.com |

Installation

```bash
# Clone the repository
git clone https://github.com/rambodazimi/kd-lora.git
cd kd-lora

# Install dependencies
pip install -r requirements.txt
```

Usage

Detailed instructions for running experiments and fine-tuning models will be added here. In the meantime, the illustrative sketch below outlines what a training run could look like.
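This is a minimal, assumed training loop on a small GLUE slice, not the repository's actual scripts. It reuses the `teacher`, `student`, and `kd_lora_loss` names from the Overview sketch; the dataset, learning rate, and per-example batching are placeholders.

```python
# Illustrative fine-tuning loop (assumed setup); depends on the Overview sketch above.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
train_split = load_dataset("glue", "sst2", split="train[:1%]")  # tiny slice for demonstration

# Only parameters left trainable by PEFT (LoRA adapters + classifier head) are optimized.
optimizer = torch.optim.AdamW(
    (p for p in student.parameters() if p.requires_grad), lr=2e-4
)

student.train()
for example in train_split:
    batch = tokenizer(example["sentence"], return_tensors="pt", truncation=True)
    labels = torch.tensor([example["label"]])
    loss = kd_lora_loss(batch, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```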

Models

A selection of fine-tuned models is available on my Hugging Face account. You can explore and use them at the following link:
🔗 https://huggingface.co/rambodazimi
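Any of these checkpoints can be pulled with `transformers` in the usual way. The repository id below is a placeholder; substitute a model name listed on the account. Depending on how a given checkpoint was exported, it may instead need to be loaded as a PEFT adapter via `peft.PeftModel.from_pretrained`.

```python
# Load a released checkpoint from the Hub.
# "rambodazimi/<model-name>" is a placeholder; pick a real id from the account above.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "rambodazimi/<model-name>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
```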

Citation

If you find this work helpful, please consider citing our paper:

License

This project is licensed under the MIT License - see the LICENSE file for details.
