- Here we collect information about all the works that may be useful for writing our paper
Note: this review table will be updated; it is not a final version.
Title | Year | Authors | Paper | Code | Summary |
---|---|---|---|---|---|
LLM-Informed Discrete Prompt Optimization | 2024 | Zeeshan Memon | paper | GitHub | TODO |
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks | 2017 | Chelsea Finn | paper | GitHub | Simple explanation: TODO |
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery | 2023 | Yuxin Wen, Neel Jain, John Kirchenbauer | paper | GitHub | Learns hard prompts for image generation via continuous optimization, building on existing gradient-reprojection schemes for optimizing text. The continuous prompt is projected onto the discrete token space at every step and then optimized with gradient descent as if it were continuous (see the first sketch below the table). |
How Hard Can It Prompt? Adventures in Cross-model Prompt Transferability | 2024 | Lola Solovyeva | paper | GitHub | Discretizes soft prompts by taking the cosine similarity between the embeddings of soft and hard tokens. The algorithm identifies a set of hard tokens using the gradients obtained while tuning soft prompts, then tests how well the derived hard prompts transfer between different models. Covers roughly the same ground as the previous paper, but as a more detailed, thesis-style write-up that extends the algorithm from the paper above (see the second sketch below the table). |
Dynamic Prompting: A Unified Framework for Prompt Tuning | 2023 | Xianjun Yang | paper | GitHub | TODO |
Automatic Prompt Optimization with “Gradient Descent” and Beam Search | 2023 | Reid Pryzant, Dan Iter | paper | GitHub | TODO |
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners | 2022 | Ningyu Zhang, Luoqiu Li | paper | GitHub | TODO |
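
Below is a minimal sketch of the per-step reprojection described in the Hard Prompts Made Easy row, assuming a PyTorch setup. `embedding_matrix`, `loss_fn`, and the cosine nearest-neighbour metric are illustrative assumptions, not the paper's exact API or metric.

```python
import torch
import torch.nn.functional as F

def project_to_hard(soft_prompt: torch.Tensor, embedding_matrix: torch.Tensor):
    """Snap each continuous prompt vector to its nearest token embedding
    (nearest neighbour by cosine similarity; the paper's metric may differ)."""
    sims = F.normalize(soft_prompt, dim=-1) @ F.normalize(embedding_matrix, dim=-1).T
    token_ids = sims.argmax(dim=-1)          # (num_prompt_tokens,)
    return embedding_matrix[token_ids], token_ids

def reprojection_step(soft_prompt, embedding_matrix, loss_fn, lr=0.1):
    """One optimization step: the forward pass uses the projected (hard)
    prompt, but the gradient is applied to the continuous copy."""
    hard_prompt, token_ids = project_to_hard(soft_prompt.detach(), embedding_matrix)
    # Straight-through trick: values come from hard_prompt,
    # gradients flow into soft_prompt.
    projected = soft_prompt + (hard_prompt - soft_prompt).detach()
    loss = loss_fn(projected)
    loss.backward()
    with torch.no_grad():
        soft_prompt -= lr * soft_prompt.grad
        soft_prompt.grad = None
    return loss.item(), token_ids
```

After training, the `token_ids` from the final projection decode into the readable hard prompt.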
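
And a sketch of the post-hoc discretization from the cross-model transferability row: after soft-prompt tuning finishes, each soft vector is mapped to the hard token whose embedding is most cosine-similar, and the decoded text can then be tried on a different model. The tokenizer/model names in the comments are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def discretize_soft_prompt(soft_prompt: torch.Tensor,
                           embedding_matrix: torch.Tensor) -> torch.Tensor:
    """Map each tuned soft-prompt vector to the token id whose embedding has
    the highest cosine similarity in the source model's embedding table."""
    sims = F.cosine_similarity(
        soft_prompt.unsqueeze(1),        # (num_prompt_tokens, 1, dim)
        embedding_matrix.unsqueeze(0),   # (1, vocab_size, dim)
        dim=-1,
    )                                    # (num_prompt_tokens, vocab_size)
    # Fine for a sketch; at real vocab sizes prefer a matmul over
    # pre-normalized embeddings to avoid the large broadcast intermediate.
    return sims.argmax(dim=-1)

# Hypothetical transferability check:
# hard_ids = discretize_soft_prompt(tuned_soft_prompt, source_embeddings)
# prompt_text = source_tokenizer.decode(hard_ids.tolist())
# score = evaluate(target_model, prompt_text)  # placeholder evaluation
```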