ATTEMPT – Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts: A Replication Study
This repository contains our replication study of the work *ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts*, published in the proceedings of EMNLP 2022. The original implementation can be found in the authors' repository.
This replication study is part of the Replication Challenge organized by DisAI.
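For context, ATTEMPT composes a new task's soft prompt as an attention-weighted mixture of pre-trained source-task prompts plus a target prompt. The sketch below illustrates this idea only in simplified form (the paper uses a learned attention sub-network rather than the mean-embedding dot-product scoring shown here); all function and variable names are our own, not from the original implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def mix_prompts(instance_repr, source_prompts, target_prompt):
    """Simplified ATTEMPT-style prompt mixing (illustrative only).

    instance_repr:  (d,)   representation of the input instance
    source_prompts: (K, L, d) K pre-trained soft prompts of length L
    target_prompt:  (L, d) task-specific prompt being trained
    """
    # Score each source prompt by the similarity of its mean
    # embedding to the instance representation.
    keys = source_prompts.mean(axis=1)        # (K, d)
    weights = softmax(keys @ instance_repr)   # (K,)
    # Attention-weighted mixture of the source prompts.
    mixed = np.tensordot(weights, source_prompts, axes=1)  # (L, d)
    return target_prompt + mixed
```

In the actual method, the resulting prompt is prepended to the input embeddings of a frozen language model, so only the prompts (and the attention module) are trained.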
```shell
python3 -m venv repl
source repl/bin/activate
pip install -r requirements.txt
python run.py [config]
```
All of our experiment details and saved data can be found in our Weights & Biases projects:
- Prompt tuning
- ATTEMPT single authors' prompts
- ATTEMPT single our prompts
- ATTEMPT multi authors' prompts
- ATTEMPT multi our prompts
ASAI, Akari, et al. ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022. p. 6655-6672.
MANGRULKAR, Sourab, et al. PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods. 2022.