PyTorch Implementation of Attention Prompt Tuning: Parameter-Efficient Adaptation of Pre-Trained Models for Action Recognition
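The repository's title names attention prompt tuning for action recognition; as a rough orientation, the following is a minimal PyTorch sketch of the general prompt-tuning idea, not the repository's exact APT implementation: a small set of learnable prompt tokens is prepended to the token sequence of a frozen pre-trained backbone, and only the prompts and the classification head are trained. The backbone, embed_dim, num_prompts, and num_classes names are illustrative assumptions, and the backbone is assumed to accept already-embedded tokens.

import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Illustrative prompt tuning: learnable prompt tokens are prepended to a
    frozen backbone's token sequence; only prompts and the head are trained."""
    def __init__(self, backbone, embed_dim=768, num_prompts=16, num_classes=400):
        super().__init__()
        self.backbone = backbone              # assumed frozen transformer taking embedded tokens
        for p in self.backbone.parameters():  # freeze all pre-trained weights
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):                # tokens: (B, N, D) patch/frame embeddings
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        x = torch.cat([prompts, tokens], dim=1)   # prepend learnable prompt tokens
        x = self.backbone(x)                      # frozen transformer blocks
        return self.head(x[:, 0])                 # classify from the first token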
This repository contains code for evaluating two versions of the BERT architecture on the LexGLUE benchmark: the original BERT model is compared against a modified version that includes bottleneck adapter modules.
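As a rough illustration of the bottleneck adapter idea mentioned above (a minimal sketch, not this repository's code): each adapter down-projects the hidden states to a small bottleneck, applies a non-linearity, projects back up, and adds a residual connection; only these small modules are trained while the pre-trained BERT weights stay frozen. The hidden_size and bottleneck_size values are illustrative.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter inserted after a frozen BERT sub-layer:
    down-project, non-linearity, up-project, plus a residual connection."""
    def __init__(self, hidden_size=768, bottleneck_size=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # Residual keeps the frozen representation and adds a small learned correction.
        return hidden_states + self.up(self.act(self.down(hidden_states)))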
This repository fine-tunes the Qwen2 7B VLM to perform VQA (Visual Question Answering) on various kinds of patient radiology images and medical scans.
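A minimal sketch of how such a parameter-efficient VQA fine-tune might be set up, assuming the Hugging Face transformers Qwen2-VL classes and the peft library; the checkpoint name, LoRA hyperparameters, and target modules are illustrative assumptions, not necessarily what the repository uses.

import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2-VL-7B-Instruct"       # assumed checkpoint; the repo may use another
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach LoRA adapters so only a small set of weights is updated during VQA fine-tuning.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # typically well under 1% of the 7B parameters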