
ATTEMPT – Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts: A Replication Study

This repository contains a replication study of the work ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts, published in the proceedings of the EMNLP 2022 conference (Asai et al., 2022). The original implementation can be found in the authors' repository.
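
In brief, ATTEMPT pre-trains a set of soft prompts on high-resource source tasks, then learns a new target-task prompt together with an attention module that mixes the source prompts and the target prompt per input instance. The following PyTorch sketch illustrates only that mixing step; the shapes, the bottleneck attention sub-network, and all names here are our simplifications for illustration, not the authors' actual code (see their repository and the paper for the real implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptMixer(nn.Module):
    """Illustrative sketch of ATTEMPT-style attentional prompt mixing (simplified)."""
    def __init__(self, num_source_prompts, prompt_len, d_model, d_proj=128, temperature=1.0):
        super().__init__()
        # Source prompts are pre-trained on high-resource tasks and kept frozen.
        self.source_prompts = nn.Parameter(
            torch.randn(num_source_prompts, prompt_len, d_model), requires_grad=False)
        # The target-task prompt is trained on the target task.
        self.target_prompt = nn.Parameter(torch.randn(prompt_len, d_model))
        # Bottleneck attention sub-network: down-project, nonlinearity, up-project.
        self.down = nn.Linear(d_model, d_proj)
        self.up = nn.Linear(d_proj, d_model)
        self.norm = nn.LayerNorm(d_model)
        self.temperature = temperature

    def forward(self, x):
        # x: (batch, seq_len, d_model) input token embeddings.
        # Max-pool the input over tokens, then query the attention sub-network.
        q = self.norm(self.up(F.silu(self.down(x.max(dim=1).values))))
        # Candidate prompts: all source prompts plus the target prompt.
        prompts = torch.cat([self.source_prompts, self.target_prompt.unsqueeze(0)], dim=0)
        keys = prompts.max(dim=1).values                    # (num_source + 1, d_model)
        attn = (q @ keys.T / self.temperature).softmax(-1)  # instance-wise weights
        mixed = torch.einsum("bk,kld->bld", attn, prompts)  # weighted prompt mixture
        # Final instance prompt: target prompt plus the attended mixture.
        return self.target_prompt.unsqueeze(0) + mixed      # (batch, prompt_len, d_model)

For instance, PromptMixer(num_source_prompts=6, prompt_len=100, d_model=768) applied to a batch of T5 encoder embeddings yields one mixed prompt per instance, which is prepended to the encoder input, matching the six source tasks and prompt length of 100 used in the paper.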

This replication study is part of the Replication Challenge organized by DisAI.

How to run

python3 -m venv repl                # create a virtual environment
source repl/bin/activate            # activate it
pip install -r requirements.txt     # install the dependencies
python run.py [config]              # run an experiment with the chosen config file
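
The config argument selects the experiment; its exact format is defined by run.py in this repository. Purely as a hypothetical illustration of the pattern (the keys and behavior below are assumptions, not the repository's actual schema), such a runner reads a configuration file and dispatches training:

import json
import sys

def main(config_path):
    # Hypothetical: the real schema is defined by run.py in this repository.
    with open(config_path) as f:
        config = json.load(f)  # e.g. model name, task list, prompt length, seed
    print(f"Running experiment with: {config}")

if __name__ == "__main__":
    main(sys.argv[1])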

Experiment results

All of our experiment details and saved data can be found in our Weights & Biases projects.

References

ASAI, Akari, et al. ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022. p. 6655–6672.

MANGRULKAR, Sourab, et al. PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods. https://github.com/huggingface/peft. 2022.
