This is a curated list of research on continual learning with pre-trained models, maintained by danelpeng.

## News

- [2024/10/25] Updated with latest papers.
- [2024/10/08] Created this repo.

## Contents
- Survey
- Prompt Based
- Adapter Based
- LoRA Based
- MoE/Ensemble Based
- VLM Based
- Diffusion Based
- Representation Based
- Application
## Survey

- Recent Advances of Multimodal Continual Learning: A Comprehensive Survey [Arxiv 2024.10] The Chinese University of Hong Kong, Tsinghua University, University of Illinois Chicago
- Continual Learning with Pre-Trained Models: A Survey [IJCAI 2024] Nanjing University
## Prompt Based

- Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach [NeurIPS 2024] University of Technology Sydney, Singapore Management University, University of Illinois at Chicago
- ModalPrompt: Dual-Modality Guided Prompt for Continual Learning of Large Multimodal Models [Arxiv 2024.10] Institute of Automation, CAS
- Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning [Arxiv 2024.10] VinAI Research, Monash University, Hanoi University of Science and Technology, University of Oregon, The University of Texas at Austin
- LW2G: Learning Whether to Grow for Prompt-based Continual Learning [Arxiv 2024.09] Zhejiang University, Nanjing University
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [ECCV 2024] Tsinghua University, SmartMore, CUHK, HIT(SZ), Meta Reality Labs, HKU
- Evolving Parameterized Prompt Memory for Continual Learning [AAAI 2024] Xi'an Jiaotong University
- Generating Prompts in Latent Space for Rehearsal-free Continual Learning [ACMMM 2024] East China Normal University
- Convolutional Prompting meets Language Models for Continual Learning [CVPR 2024] IIT Kharagpur, IML Amazon India
- Consistent Prompting for Rehearsal-Free Continual Learning [CVPR 2024] Sun Yat-sen University, HKUST
- Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning [WACV 2024] Rutgers University, Google Research, Google Cloud AI
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality [NeurIPS 2023] Tsinghua-Bosch Joint Center for ML, Tsinghua University
- When Prompt-based Incremental Learning Does Not Meet Strong Pretraining [ICCV 2023] Sun Yat-sen University, Peng Cheng Laboratory
- Introducing Language Guidance in Prompt-based Continual Learning [ICCV 2023] RPTU, DFKI, ETH Zurich, TUM, Google
- MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning [Arxiv 2023.07] ETS Montreal
- Progressive Prompts: Continual Learning for Language Models [ICLR 2023] University of Toronto & Vector Institute, Meta AI
- Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning [ICCV 2023] Kyung Hee University
- Self-regulating Prompts: Foundational Model Adaptation without Forgetting [ICCV 2023] Mohamed bin Zayed University of AI, Australian National University, Linkoping University, University of California, Merced, Google Research
- Generating Instance-level Prompts for Rehearsal-free Continual Learning [ICCV 2023 (oral)] Seoul National University, NAVER AI Lab, NAVER Cloud, AWS AI Labs
- CODA-Prompt: COntinual Decomposed Attention-Based Prompting for Rehearsal-Free Continual Learning [CVPR 2023] Georgia Institute of Technology, MIT-IBM Watson AI Lab, Rice University, IBM Research
- S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning [NeurIPS 2022] Xi’an Jiaotong University
- DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning [ECCV 2022] Northeastern University, Google Cloud AI, Google Research
- Learning to Prompt for Continual Learning [CVPR 2022] Northeastern University, Google Cloud AI, Google Research
## Adapter Based

- ATLAS: Adapter-Based Multi-Modal Continual Learning with a Two-Stage Learning Strategy [Arxiv 2024.10] Shanghai Jiao Tong University, ShanghaiTech University, Tsinghua University
- Adaptive Adapter Routing for Long-Tailed Class-Incremental Learning [Arxiv 2024.09] Nanjing University
- Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models [Arxiv 2024.08] KU Leuven
- Expand and Merge: Continual Learning with the Guidance of Fixed Text Embedding Space [IJCNN 2024] Sun Yat-sen University
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [ECCV 2024] Xi’an Jiaotong University
- Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer [CVPR 2024] Huazhong University of Science and Tech., DAMO Academy, Alibaba Group
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [CVPR 2024] Nanjing University
## LoRA Based

- InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning [CVPR 2024] Nanjing University
- Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation [NeurIPSW 2024] University of Texas at Austin
## MoE/Ensemble Based

- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [CVPR 2024] Dalian University of Technology, UESTC, Tsinghua University
- Learning Attentional Mixture of LoRAs for Language Model Continual Learning [Arxiv 2024.09] Nankai University
- Theory on Mixture-of-Experts in Continual Learning [Arxiv 2024.10] Singapore University of Technology and Design, University of Houston, The Ohio State University
- Weighted Ensemble Models Are Strong Continual Learners [ECCV 2024] Télécom-Paris, Institut Polytechnique de Paris
- LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models [Arxiv 2024.06] Nanjing University of Aeronautics and Astronautics
- Mixture of Experts Meets Prompt-Based Continual Learning [Arxiv 2024.05] The University of Texas at Austin, Hanoi University of Science and Technology, VinAI Research
- Learning More Generalized Experts by Merging Experts in Mixture-of-Experts [Arxiv 2024.05] KAIST
- MoRAL: MoE Augmented LoRA for LLMs’ Lifelong Learning [Arxiv 2024.02] Provable Responsible AI and Data Analytics (PRADA) Lab, KAUST, University of Macau
- Divide and not forget: Ensemble of selectively trained experts in Continual Learning [ICLR 2024] IDEAS-NCBR, Warsaw University of Technology
- An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training [Arxiv 2023.06] University of Massachusetts Amherst, University of California Berkeley, MIT-IBM Watson AI Lab
- Lifelong Language Pretraining with Distribution-Specialized Experts [ICML 2023] The University of Texas at Austin, Google
- Continual Learning Beyond a Single Model [CoLLAs 2023] Bosch Center for Artificial Intelligence, Washington State University, Apple
- Mixture-of-Variational-Experts for Continual Learning [Arxiv 2022.03] Ulm University
- CoSCL: Cooperation of Small Continual Learners is Stronger Than a Big One [ECCV 2022] Tsinghua University
- Ex-Model: Continual Learning from a Stream of Trained Models [CVPRW 2022] University of Pisa
- Routing Networks with Co-training for Continual Learning [ICMLW 2020] Google AI, Zurich
- A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning [ICLR 2020] Seoul National University
## VLM Based

- Continual learning with task specialist [Arxiv 2024.09] International Institute of Information Technology Bangalore, A*STAR
- A Practitioner’s Guide to Continual Multimodal Pretraining [Arxiv 2024.08] University of Tübingen, Helmholtz Munich, Munich Center for ML, Google DeepMind
- CLIP with Generative Latent Replay: a Strong Baseline for Incremental Learning [BMVC 2024] University of Modena and Reggio Emilia
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [ECCV 2024] Tsinghua University, SmartMore, CUHK, HIT(SZ), Meta Reality Labs, HKU
- Anytime Continual Learning for Open Vocabulary Classification [ECCV 2024 (oral)] University of Illinois at Urbana-Champaign
- Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models [ECCV 2024] National Taiwan University, NVIDIA
- Expand and Merge: Continual Learning with the Guidance of Fixed Text Embedding Space [IJCNN 2024] Sun Yat-sen University
- CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning [Arxiv 2024.05] Northwestern Polytechnical University, Singapore Management University, Zhejiang University, University of Adelaide
- TiC-CLIP: Continual Training of CLIP Models [ICLR 2024] Apple, Carnegie Mellon University
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [CVPR 2024] Dalian University of Technology, UESTC, Tsinghua University
- Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners [CVPR 2024] Kyung Hee University, Yonsei University
- MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning [Arxiv 2023.07] ETS Montreal
- Learning without Forgetting for Vision-Language Models [Arxiv 2023.05] Nanjing University, Nanyang Technological University
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models [ICCV 2023] National University of Singapore, UC Berkeley, The Chinese University of Hong Kong
- Self-regulating Prompts: Foundational Model Adaptation without Forgetting [ICCV 2023] Mohamed bin Zayed University of AI, Australian National University, Linkoping University, University of California, Merced, Google Research
- Continual Vision-Language Representation Learning with Off-Diagonal Information [ICML 2023] Zhejiang University, Huawei Cloud
- CLIP model is an Efficient Continual Learner [Arxiv 2022.10] Mohamed bin Zayed University of Artificial Intelligence, Australian National University, Monash University, Linkoping University
- Don’t Stop Learning: Towards Continual Learning for the CLIP Model [Arxiv 2022.07] Xidian University, University of Adelaide
- CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks [NeurIPS 2022] University of Southern California
- S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning [NeurIPS 2022] Xi’an Jiaotong University
- Robust Fine-Tuning of Zero-Shot Models [CVPR 2022] University of Washington, OpenAI, Columbia University, Google Research, Brain Team
## Diffusion Based

- Continual learning with task specialist [Arxiv 2024.09] International Institute of Information Technology Bangalore, A*STAR
- Diffusion Model Meets Non-Exemplar Class-Incremental Learning and Beyond [Arxiv 2024.08] BNRist, Tsinghua University
- Class-Prototype Conditional Diffusion Model with Gradient Projection for Continual Learning [Arxiv 2024.03] VinAI Research, Monash University
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [ECCV 2024 (oral)] South China University of Technology, HKUST, China University of Petroleum, WeBank, Pazhou Laboratory
- DiffClass: Diffusion-Based Class Incremental Learning [ECCV 2024] Northeastern University, ETH Zürich
- GUIDE: Guidance-based Incremental Learning with Diffusion Models [Arxiv 2024.03] Warsaw University of Technology
- SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection [CVPR 2024] UNIST, LG Electronics, KETI
- Class-Incremental Learning using Diffusion Model for Distillation and Replay [ICCVW 2023] Tokyo Institute of Technology, Artificial Intelligence Research Center
- DDGR: Continual Learning with Deep Diffusion-based Generative Replay [ICML 2023] Wuhan University
## Representation Based

- Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning [Arxiv 2024.10] Nanjing University
## Application

- Incremental Learning for Robot Shared Autonomy [Arxiv 2024.10] Robotics Institute, Carnegie Mellon University
- Task-unaware Lifelong Robot Learning with Retrieval-based Weighted Local Adaptation [Arxiv 2024.10] TU Delft, Booking.com, UCSD
- Vision-Language Navigation with Continual Learning [Arxiv 2024.09] Institute of Automation, Chinese Academy of Sciences
- Continual Vision-and-Language Navigation [Arxiv 2024.03] Seoul National University
- Online Continual Learning For Interactive Instruction Following Agents [ICLR 2024] Yonsei University, Seoul National University
- LLaCA: Multimodal Large Language Continual Assistant [Arxiv 2024.10] East China Normal University, Xiamen University, Tencent YouTu Lab
- Is Parameter Collision Hindering Continual Learning in LLMs? [Arxiv 2024.10] Peking University, DAMO Academy
- ModalPrompt: Dual-Modality Guided Prompt for Continual Learning of Large Multimodal Models [Arxiv 2024.10] Institute of Automation, CAS
- Learning Attentional Mixture of LoRAs for Language Model Continual Learning [Arxiv 2024.09] Nankai University
- Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting [EMNLP 2024] Nanyang Technological University
- Low-Rank Continual Personalization of Diffusion Models [Arxiv 2024.10] Warsaw University of Technology
- Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters [CVPRW 2024] Samsung Research America, Georgia Institute of Technology
- Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA [TMLR 2024] Samsung Research America, Georgia Institute of Technology
- Continual Learning of Diffusion Models with Generative Distillation [CoLLAs 2024] Master in Computer Vision (Barcelona), Apple, KU Leuven
- Conditioned Prompt-Optimization for Continual Deepfake Detection [ICPR 2024] University of Trento, Fondazione Bruno Kessler
- A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials [WACV 2023] ETH Zurich, Singapore Management University, Xi’an Jiaotong University, Harbin Institute of Technology, KU Leuven
- Figure from the article "Lifelong learning? Part-time undergraduate provision is in crisis."