Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only. (An illustrative 4-bit fine-tuning sketch appears after this list.)
[SIGIR'24] The official implementation code of MOELoRA.
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch. (A minimal layer sketch appears after this list.)
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Easy wrapper for inserting LoRA layers in CLIP.
Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans).
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
A curated list of Parameter Efficient Fine-tuning papers with a TL;DR
Advanced AI-driven tool for generating unique video game characters using Stable Diffusion, DreamBooth, and LoRA adaptations. Enhances creativity with customizable, high-quality character designs tailored for game developers and artists.
Efficiently fine-tuned large language model (LLM) for sentiment analysis on the IMDB dataset.
A simple, neat implementation of different LoRA methods for training/fine-tuning Transformer-based models (e.g., BERT, GPTs). [Research purposes]
Long-term project on a custom AI architecture. Consists of cutting-edge machine learning techniques such as Flash-Attention, Grouped-Query Attention, ZeRO-Infinity, BitNet, etc.
PersonifAI is an AI-powered platform delivering personalized, multilingual educational content and real-time adaptive learning for postgraduate students.
A Low-Rank Adaptation of a pretrained Stable Diffusion model that generates background scenery. Trained with PyTorch and deployed with AWS EC2 and Ngrok.
My lab work for the “Generative AI with Large Language Models” course offered by DeepLearning.AI and Amazon Web Services on Coursera.
Unlocking the Power of Generative AI: In-Context Learning, Instruction Fine-Tuning and Reinforcement Learning Fine-Tuning.
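All of the entries above build on the same core idea: freeze the pretrained weight W and learn a low-rank update (alpha/r) * B @ A. The sketch below is a minimal PyTorch illustration of that idea, not code from any listed repository; the class name, rank, and scaling values are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen nn.Linear wrapped with a trainable low-rank update: W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Down-projection A (small random init) and up-projection B (zero init),
        # so the wrapped layer starts out identical to the base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pretrained path plus the scaled low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage: wrap an existing projection layer and train only the LoRA parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 10, 768))
print(out.shape)  # torch.Size([2, 10, 768])
```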
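For the quantized LoRA fine-tuning described in the Chat LLaMA and Mistral-7B entries, a typical Hugging Face setup combines transformers, peft, and bitsandbytes roughly as sketched below. The model ID, target modules, and hyperparameters are assumptions for illustration and do not come from those repositories; running it requires a CUDA GPU with bitsandbytes installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumption: any LLaMA-style causal LM works similarly

# Load the base model with 4-bit (NF4) quantized weights to cut memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)  # prepare the quantized model for training

# Attach LoRA adapters to the attention projections; only these are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed module names for LLaMA-style attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of trainable LoRA weights
```

From here the PEFT-wrapped model can be passed to a standard `transformers` Trainer; the frozen 4-bit base weights stay fixed while only the LoRA adapters are updated.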