A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP.
[IPCAI'2024 (IJCARS special issue)] Surgical-DINO: Adapter Learning of Foundation Models for Depth Estimation in Endoscopic Surgery
[MICCAI'2024] EndoDAC: Efficient Adapting Foundation Model for Self-Supervised Depth Estimation from Any Endoscopic Camera
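Adapter learning methods like those above keep the foundation-model backbone frozen and train only a small add-on module. As a rough illustration (not the implementation of any listed paper), here is a minimal PyTorch sketch in the spirit of a CLIP-Adapter-style residual adapter on frozen image features; the names `ResidualAdapter`, `bottleneck_dim`, and `residual_ratio` are illustrative assumptions.

```python
# Minimal sketch of a residual adapter over frozen CLIP features (illustrative only).
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    def __init__(self, feat_dim: int = 512, bottleneck_dim: int = 64, residual_ratio: float = 0.2):
        super().__init__()
        # Small bottleneck MLP; these are the only parameters that get trained.
        self.adapter = nn.Sequential(
            nn.Linear(feat_dim, bottleneck_dim),
            nn.ReLU(inplace=True),
            nn.Linear(bottleneck_dim, feat_dim),
            nn.ReLU(inplace=True),
        )
        self.residual_ratio = residual_ratio

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Blend adapted features with the original frozen features.
        adapted = self.adapter(feats)
        return self.residual_ratio * adapted + (1 - self.residual_ratio) * feats

# Usage: the CLIP (or other foundation-model) encoder stays frozen; only the adapter is optimized.
features = torch.randn(8, 512)   # stand-in for frozen CLIP image features
adapter = ResidualAdapter()
adapted = adapter(features)      # shape (8, 512)
```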