- Image-Language Pretraining
- Video-Language Pretraining
- Image-Language Datasets
- Video-Language Datasets

Image-Language Pretraining

(NeurIPS2019_ViLBERT) ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks.
Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee.
[paper]
[code]
(ACL2020_VisualBERT) VisualBERT: A Simple and Performant Baseline for Vision and Language.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
[paper]
[code]
(EMNLP2019_B2T2) Fusion of Detected Objects in Text for Visual Question Answering.
Chris Alberti, Jeffrey Ling, Michael Collins, David Reitter.
[paper]
[code]
(AAAI2020_Unicoder-VL) Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou.
[paper]
(EMNLP2019_LXMERT) LXMERT: Learning Cross-Modality Encoder Representations from Transformers.
Hao Tan, Mohit Bansal.
[paper]
[code]
(ICLR2020_VL-BERT) VL-BERT: Pre-training of Generic Visual-Linguistic Representations.
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, Jifeng Dai.
[paper]
[code]
(AAAI2020_Unified-VLP) Unified Vision-Language Pre-Training for Image Captioning and VQA.
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, Jianfeng Gao.
[paper]
[code]
(ECCV2020_UNITER) UNITER: UNiversal Image-TExt Representation Learning.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu.
[paper]
[code]
(CVPR2020_M4C) Iterative Answer Prediction with Pointer-Augmented Multimodal Transformers for TextVQA.
Ronghang Hu, Amanpreet Singh, Trevor Darrell, Marcus Rohrbach.
[paper]
(CVPR2020_12-in-1) 12-in-1: Multi-Task Vision and Language Representation Learning.
Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, Stefan Lee.
[paper]
[code]
(ECCV2020_VisDial-BERT) Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline.
Vishvak Murahari, Dhruv Batra, Devi Parikh, Abhishek Das.
[paper]
[code]
(arXiv2020_ImageBERT) ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data.
Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, Arun Sacheti.
[paper]
(NAACL2021_MSB) Measuring Social Biases in Grounded Vision and Language Embeddings.
Candace Ross, Boris Katz, Andrei Barbu.
[paper]
[code]
(CVPR2020_PREVALENT) Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training.
Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, Jianfeng Gao.
[paper]
[code]
(INLG2020_VQG-BERT) What BERT Sees: Cross-Modal Transfer for Visual Question Generation.
Thomas Scialom, Patrick Bordes, Paul-Alexis Dray, Jacopo Staiano, Patrick Gallinari.
[paper]
(NLPCC2021_XGPT) XGPT: Cross-modal Generative Pre-Training for Image Captioning.
Qiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, Xin Liu, Ming Zhou.
[paper]
(arXiv2020_InterBERT) InterBERT: Vision-and-Language Interaction for Multi-modal Pretraining.
Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, Hongxia Yang.
[paper]
(arXiv2020_Pixel-BERT) Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Transformers.
Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, Jianlong Fu.
[paper]
(ECCV2020_Oscar) Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks.
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, Jianfeng Gao.
[paper]
[code]
(arXiv2020_MMF) Are we pretraining it right? Digging deeper into visio-linguistic pretraining.
Amanpreet Singh, Vedanuj Goswami, Devi Parikh.
[paper]
[code]
(ACMMM2020_MMNas) Deep Multimodal Neural Architecture Search.
Zhou Yu, Yuhao Cui, Jun Yu, Meng Wang, Dacheng Tao, Qi Tian.
[paper]
[code]
(EMNLP2020_VD-BERT) VD-BERT: A Unified Vision and Dialog Transformer with BERT.
Yue Wang, Shafiq Joty, Michael R. Lyu, Irwin King, Caiming Xiong, Steven C.H. Hoi.
[paper]
[code]
(ECCV2020_VALUE) Behind the Scene: Revealing the Secrets of Pre-trained Vision-and-Language Models.
Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, Jingjing Liu.
[paper]
[code]
(ACLSRW2020_AT) Adaptive Transformers for Learning Multimodal Representations.
Prajjwal Bhargava.
[paper]
[code]
(NeurIPS2020_VILLA) Large-Scale Adversarial Training for Vision-and-Language Representation Learning.
Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu.
[paper]
[code]
(CVPR2021_VirTex) VirTex: Learning Visual Representations from Textual Annotations.
Karan Desai, Justin Johnson.
[paper]
[code]
(AAAI2021_ERNIE-ViL) ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph.
Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.
[paper]
(ACMMM2020_DeVLBert) DeVLBert: Learning Deconfounded Visio-Linguistic Representations.
Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, Fei Wu.
[paper]
[code]
(Access2021_RVL-BERT) Visual Relationship Detection With Visual-Linguistic Knowledge From Multimodal Representations.
Meng-Jiun Chiou, Roger Zimmermann, Jiashi Feng.
[paper]
[code]
(EMNLP2020_X-LXMERT) X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers.
Jaemin Cho, Jiasen Lu, Dustin Schwenk, Hannaneh Hajishirzi, Aniruddha Kembhavi.
[paper]
[code]
(arXiv2020_CAPT) CAPT: Contrastive Pre-Training for Learning Denoised Sequence Representations.
Fuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, Xu Sun.
[paper]
(EMNLP2020_STL-CQA) STL-CQA: Structure-based Transformers with Localization and Encoding for Chart Question Answering.
Hrituraj Singh, Sumit Shekhar.
[paper]
(CVPR2021_DenseCL) Dense Contrastive Learning for Self-Supervised Visual Pre-Training.
Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, Lei Li.
[paper]
[code]
(TACL2021_MPU) Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs.
Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, Desmond Elliott.
[paper]
[code]
(arXiv2020_LAMP) LAMP: Label Augmented Multimodal Pretraining.
Jia Guo, Chen Zhu, Yilun Zhao, Heda Wang, Yao Hu, Xiaofei He, Deng Cai.
[paper]
(arXiv2020_MiniVLM) MiniVLM: A Smaller and Faster Vision-Language Model.
Jianfeng Wang, Xiaowei Hu, Pengchuan Zhang, Xiujun Li, Lijuan Wang, Lei Zhang, Jianfeng Gao, Zicheng Liu.
[paper]
(arXiv2020_MANGO) A Closer Look at the Robustness of Vision-and-Language Pre-trained Models.
Linjie Li, Zhe Gan, Jingjing Liu.
[paper]
(ACL2021_UNIMO) UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning.
Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang.
[paper]
[code]
(CVPR2021_VinVL) VinVL: Revisiting Visual Representations in Vision-Language Models.
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao.
[paper]
[code]
(AAAI2021_VisualMRC) VisualMRC: Machine Reading Comprehension on Document Images.
Ryota Tanaka, Kyosuke Nishida, Sen Yoshida.
[paper]
[code]
(AAAI2021_TDEN) Scheduled Sampling in Vision-Language Pretraining with Decoupled Encoder-Decoder Network.
Yehao Li, Yingwei Pan, Ting Yao, Jingwen Chen, Tao Mei.
[paper]
[code]
(ICML2021_VL-BART) Unifying Vision-and-Language Tasks via Text Generation.
Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal.
[paper]
[code]
(ICML2021_ViLT) ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision.
Wonjae Kim, Bokyung Son, Ildoo Kim.
[paper]
[code]
(ICML2021_ALIGN) Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
[paper]
[blog]
(ICCV2021_UniT) UniT: Multimodal Multitask Learning with a Unified Transformer.
Ronghang Hu, Amanpreet Singh.
[paper]
[code]
(ICML2021_CLIP) Learning Transferable Visual Models From Natural Language Supervision.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
[paper]
[code]
(arXiv2021_SemVLP) SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple Levels.
Chenliang Li, Ming Yan, Haiyang Xu, Fuli Luo, Wei Wang, Bin Bi, Songfang Huang.
[paper]
(NAACL2021_LightningDOT) LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval.
Siqi Sun, Yen-Chun Chen, Linjie Li, Shuohang Wang, Yuwei Fang, Jingjing Liu.
[paper]
[code]
(CVPR2021_Fast&Slow) Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers.
Antoine Miech, Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, Andrew Zisserman.
[paper]
(CVPR2021_UC2) UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training.
Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, Jingjing Liu.
[paper]
(ICCV2021_DistillVLM) Compressing Visual-linguistic Model via Knowledge Distillation.
Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lijuan Wang, Yezhou Yang, Zicheng Liu.
[paper]
(CVPR2021_SOHO) Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning.
Zhicheng Huang, Zhaoyang Zeng, Yupan Huang, Bei Liu, Dongmei Fu, Jianlong Fu.
[paper]
[code]
(EMNLP2021_GLUE) Effect of Visual Extensions on Natural Language Understanding in Vision-and-Language Models.
Taichi Iki, Akiko Aizawa.
[paper]
[code]
(ICCV2021_MDETR) MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding.
Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, Nicolas Carion.
[paper]
[code]
(CVPR2021_MCT) Multimodal Contrastive Training for Visual Representation Learning.
Xin Yuan, Zhe Lin, Jason Kuen, Jianming Zhang, Yilin Wang, Michael Maire, Ajinkya Kale, Baldo Faieta.
[paper]
(ACL2021_IAIS) Learning Relation Alignment for Calibrated Cross-modal Retrieval.
Shuhuai Ren, Junyang Lin, Guangxiang Zhao, Rui Men, An Yang, Jingren Zhou, Xu Sun, Hongxia Yang.
[paper]
[code]
(ICLR2022_CLIP-ViL) How Much Can CLIP Benefit Vision-and-Language Tasks?
Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer.
[paper]
[code]
(SIGIR2021_GilBERT) GilBERT: Generative Vision-Language Pre-Training for Image-Text Retrieval.
Weixiang Hong, Kaixiang Ji, Jiajia Liu, Jian Wang, Jingdong Chen, Wei Chu.
[paper]
(NeurIPS2021_ALBEF) Align before Fuse: Vision and Language Representation Learning with Momentum Distillation.
Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi.
[paper]
[code]
(NeurIPS2021_Frozen) Multimodal Few-Shot Learning with Frozen Language Models.
Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill.
[paper]
[project]
(ICLR2022_SimVLM) SimVLM: Simple Visual Language Model Pretraining with Weak Supervision.
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, Yuan Cao.
[paper]
(arXiv2021_MURAL) MURAL: Multimodal, Multitask Retrieval Across Languages.
Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, Jason Baldridge.
[paper]
(NAACL2022_KD-VLP) KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation.
Yongfei Liu, Chenfei Wu, Shao-yen Tseng, Vasudev Lal, Xuming He, Nan Duan.
[paper]
(CIKM2021_TDMR) Student Can Also be a Good Teacher: Extracting Knowledge from Vision-and-Language Model for Cross-Modal Retrieval.
Jun Rao, Tao Qian, Shuhan Qi, Yulin Wu, Qing Liao, Xuan Wang.
[paper]
(ICCV2021_COOKIE) COOKIE: Contrastive Cross-Modal Knowledge Sharing Pre-Training for Vision-Language Representation.
Keyu Wen, Jin Xia, Yuanyuan Huang, Linyang Li, Jiayan Xu, Jie Shao.
[paper]
[code]
(ICLR2022_DeCLIP) Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm.
Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, Junjie Yan.
[paper]
(arXiv2021_VLDeformer) VLDeformer: Learning Visual-Semantic Embeddings by Vision-Language Transformer Decomposing.
Lisai Zhang, Hongfa Wu, Qingcai Chen, Yimeng Deng, Zhonghua Li, Dejiang Kong, Zhao Cao, Joanna Siebert, Yunpeng Han.
[paper]
(NeurIPS2022_VLMo) VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts.
Wenhui Wang, Hangbo Bao, Li Dong, Furu Wei.
[paper]
[code]
(CVPR2022_METER) An Empirical Study of Training End-to-End Vision-and-Language Transformers.
Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, Michael Zeng.
[paper]
[code]
(NAACL2022_TAGS) Negative Sample is Negative in Its Own Way: Tailoring Negative Sentences for Image-Text Retrieval.
Zhihao Fan, Zhongyu Wei, Zejun Li, Siyuan Wang, Jianqing Fan.
[paper]
[code]
(ICLR2022_FILIP) FILIP: Fine-grained Interactive Language-Image Pre-Training.
Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu.
[paper]
(CVPR2022_LiT) LiT: Zero-Shot Transfer with Locked-image Text Tuning.
Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer.
[paper]
(ICML2022_X-VLM) Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts.
Yan Zeng, Xinsong Zhang, Hang Li.
[paper]
[code]
(arXiv2021_Florence) Florence: A New Foundation Model for Computer Vision.
Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang.
[paper]
(CVPR2022_GLIP) Grounded Language-Image Pre-training.
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao.
[paper]
[code]
(arXiv2021_ViT-BERT) Towards a Unified Foundation Model: Jointly Pre-Training Transformers on Unpaired Images and Text.
Qing Li, Boqing Gong, Yin Cui, Dan Kondratyuk, Xianzhi Du, Ming-Hsuan Yang, Matthew Brown.
[paper]
(ECCV2022_SLIP) SLIP: Self-supervision meets Language-Image Pre-training.
Norman Mu, Alexander Kirillov, David Wagner, Saining Xie.
[paper]
[code]
(CVPR2022_QB-NORM) Cross Modal Retrieval with Querybank Normalisation.
Simion-Vlad Bogolin, Ioana Croitoru, Hailin Jin, Yang Liu, Samuel Albanie.
[paper]
[code]
(ACLARR_PromptFuse) Prompting as Multimodal Fusing.
[paper]
(TCSVT2022_CSIC) Image-Text Retrieval with Cross-Modal Semantic Importance Consistency.
Zejun Liu, Fanglin Chen, Jun Xu, Wenjie Pei, Guangming Lu.
[paper]
(PMLR2022_VLUE) VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models.
Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang.
[paper]
[code]
(CVPR2022_TCL) Vision-Language Pre-Training with Triple Contrastive Learning.
Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang.
[paper]
[code]
(CVPR2022_CODIS) Multi-modal Alignment using Representation Codebook.
Jiali Duan, Liqun Chen, Son Tran, Jinyu Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi.
[paper]
(arXiv2022_LoopITR) LoopITR: Combining Dual and Cross Encoder Architectures for Image-Text Retrieval.
Jie Lei, Xinlei Chen, Ning Zhang, Mengjiao Wang, Mohit Bansal, Tamara L. Berg, Licheng Yu.
[paper]
(ACL2022_VLKD) Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation.
Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung.
[paper]
(ACL2022_CMKT) Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer.
Woojeong Jin, Dong-Ho Lee, Chenguang Zhu, Jay Pujara, Xiang Ren.
[paper]
[code]
(CVPR2022_ViSTA) ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval.
Mengjun Cheng, Yipeng Sun, Longchao Wang, Xiongwei Zhu, Kun Yao, Jie Chen, Guoli Song, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang.
[paper]
(CVPR2022_UniCL) Unified Contrastive Learning in Image-Text-Label Space.
Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Bin Xiao, Ce Liu, Lu Yuan, Jianfeng Gao.
[paper]
[code]
(CVPR2022_PSD) Robust Cross-Modal Representation Learning with Progressive Self-Distillation.
Alex Andonian, Shixing Chen, Raffay Hamid.
[paper]
(CVPR2022_COTS) COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval.
Haoyu Lu, Nanyi Fei, Yuqi Huo, Yizhao Gao, Zhiwu Lu, Ji-Rong Wen.
[paper]
(TMLR2023_LTD) Reducing Predictive Feature Suppression in Resource-Constrained Contrastive Image-Caption Retrieval.
Maurits Bleeker, Andrew Yates, Maarten de Rijke.
[paper]
[code]
(NeurIPS2022_PyramidCLIP) PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining.
Yuting Gao, Jinfeng Liu, Zihan Xu, Jun Zhang, Ke Li, Rongrong Ji, Chunhua Shen.
[paper]
(arXiv2022_HiVLP) HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text Retrieval.
Feilong Chen, Xiuyi Chen, Jiaxin Shi, Duzhen Zhang, Jianlong Chang, Qi Tian.
[paper]
(arXiv2022_COOKIE) Contrastive Cross-Modal Knowledge Sharing Pre-training for Vision-Language Representation Learning and Retrieval.
Keyu Wen, Zhenshan Tan, Qingrong Cheng, Cheng Chen, Xiaodong Gu.
[paper]
(CBMI2022_ALADIN) ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval.
Nicola Messina, Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Fabrizio Falchi, Giuseppe Amato, Rita Cucchiara.
[paper]
[code]
(NeurIPS2022_LOUPE) Fine-Grained Semantically Aligned Vision-Language Pre-Training.
Juncheng Li, Xin He, Longhui Wei, Long Qian, Linchao Zhu, Lingxi Xie, Yueting Zhuang, Qi Tian, Siliang Tang.
[paper]
[code]
(ECCV2022_GRIT-VLP) GRIT-VLP: Grouped Mini-batch Sampling for Efficient Vision and Language Pre-training.
Jaeseok Byun, Taebaek Hwang, Jianlong Fu, Taesup Moon.
[paper]
[code]
(arXiv2022_TokenFlow) TokenFlow: Rethinking Fine-grained Cross-modal Alignment in Vision-Language Retrieval.
Xiaohan Zou, Changqiao Wu, Lele Cheng, Zhongyuan Wang.
[paper]
(NeurIPS2022_Knowledge-CLIP) Contrastive Language-Image Pre-Training with Knowledge Graphs.
Xuran Pan, Tianzhu Ye, Dongchen Han, Shiji Song, Gao Huang.
[paper]
(CVPR2023_xCLIP) Non-Contrastive Learning Meets Language-Image Pre-Training.
Jinghao Zhou, Li Dong, Zhe Gan, Lijuan Wang, Furu Wei.
[paper]
(arXiv2022_X2-VLM) X2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks.
Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou.
[paper]
[code]
(BMVC2022_ViCHA) Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment.
Mustafa Shukor, Guillaume Couairon, Matthieu Cord.
[paper]
[code]
(ACMMM2022_CMAL) CMAL: A Novel Cross-Modal Associative Learning Framework for Vision-Language Pre-Training.
Zhiyuan Ma, Jianjun Li, Guohui Li, Kaiyan Huang.
[paper]
(ACMMM2022_MVPTR) MVPTR: Multi-Level Semantic Alignment for Vision-Language Pre-Training via Multi-Stage Learning.
Zejun Li, Zhihao Fan, Huaixiao Tou, Jingjing Chen, Zhongyu Wei, Xuanjing Huang.
[paper]
(CVPR2022_CLIP-Event) CLIP-Event: Connecting Text and Images with Event Structures.
Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang.
[paper]
[code]
(CVPR2023_TCL) Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs.
Junbum Cha, Jonghwan Mun, Byungseok Roh.
[paper]
[code]
(AAAI2023_NLIP) NLIP: Noise-robust Language-Image Pre-training.
Runhui Huang, Yanxin Long, Jianhua Han, Hang Xu, Xiwen Liang, Chunjing Xu, Xiaodan Liang.
[paper]
(ECIR2023_HADA) HADA: A Graph-based Amalgamation Framework in Image-text Retrieval.
Manh-Duy Nguyen, Binh T. Nguyen, Cathal Gurrin.
[paper]
[code]
(ICCV2023_LexLIP) LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Retrieval.
Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwen Lin, Daxin Jiang.
[paper]
[code]
(arXiv2023_VITR) VITR: Augmenting Vision Transformers with Relation-Focused Learning for Cross-Modal Information Retrieval.
Yan Gong, Georgina Cosma, Axel Finke.
[paper]
(arXiv2023_UKnow) UKnow: A Unified Knowledge Protocol for Common-Sense Reasoning and Vision-Language Pre-training.
Biao Gong, Xiaoying Xie, Yutong Feng, Yiliang Lv, Yujun Shen, Deli Zhao.
[paper]
[code]
(CVPR2023_SCL) Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning.
Yatai Ji, Rongcheng Tu, Jie Jiang, Weijie Kong, Chengfei Cai, Wenzhe Zhao, Hongfa Wang, Yujiu Yang, Wei Liu.
[paper]
(CVPR2023_RO-ViT) Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers.
Dahun Kim, Anelia Angelova, Weicheng Kuo.
[paper]
(ICCV2023_EqSim) Equivariant Similarity for Vision-Language Foundation Models.
Tan Wang, Kevin Lin, Linjie Li, Chung-Ching Lin, Zhengyuan Yang, Hanwang Zhang, Zicheng Liu, Lijuan Wang.
[paper]
[code]
(ICCV2023_SigLIP) Sigmoid Loss for Language Image Pre-Training.
Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer.
[paper]
[code]
(arXiv2023_CAVL) CAVL: Learning Contrastive and Adaptive Representations of Vision and Language.
Shentong Mo, Jingfei Xia, Ihor Markevych.
[paper]
(ICML2023_MERU) Hyperbolic Image-Text Representations.
Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, Justin Johnson, Ramakrishna Vedantam.
[paper]
[code]
(ACL2023_MI) Vision Language Pre-training by Contrastive Learning with Cross-Modal Similarity Regulation.
Chaoya Jiang, Wei Ye, Haiyang Xu, Ming Yan, Shikun Zhang, Jie Zhang, Fei Huang.
[paper]
(arXiv2023_Boon) Boon: A Neural Search Engine for Cross-Modal Information Retrieval.
Yan Gong, Georgina Cosma.
[paper]
(ACMMM2023_COPA) COPA: Efficient Vision-Language Pre-training Through Collaborative Object- and Patch-Text Alignment.
Chaoya Jiang, Haiyang Xu, Wei Ye, Qinghao Ye, Chenliang Li, Ming Yan, Bin Bi, Shikun Zhang, Ji Zhang, Fei Huang.
[paper]
(AAAI2024_EVE) EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE.
Junyi Chen, Longteng Guo, Jia Sun, Shuai Shao, Zehuan Yuan, Liang Lin, Dongyu Zhang.
[paper]
(NeurIPS2023_PAU) Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval.
Hao Li, Jingkuan Song, Lianli Gao, Xiaosu Zhu, Heng Tao Shen.
[paper]
[code]
(arXiv2023_TiC-CLIP) TiC-CLIP: Continual Training of CLIP Models.
Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri.
[paper]
(arXiv2023_MCAD) MCAD: Multi-teacher Cross-modal Alignment Distillation for Efficient Image-text Retrieval.
Youbo Lei, Feifei He, Chen Chen, Yingbin Mo, Si Jia Li, Defeng Xie, Haonan Lu.
[paper]
(arXiv2023_MLLMs-Augmented) MLLMs-Augmented Visual-Language Representation Learning.
Yanqing Liu, Kai Wang, Wenqi Shao, Ping Luo, Yu Qiao, Mike Zheng Shou, Kaipeng Zhang, Yang You.
[paper]
[code]
(CVPR2024_MAFA) MAFA: Managing False Negatives for Vision-Language Pre-training.
Jaeseok Byun, Dohoon Kim, Taesup Moon.
[paper]
[code]

Video-Language Pretraining

(ICCV2019_VideoBERT) VideoBERT: A Joint Model for Video and Language Representation Learning.
Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, Cordelia Schmid.
[paper]
(ICCV2019_HowTo100M) HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips.
Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, Josef Sivic.
[paper]
[code]
(arXiv2019_CBT) Learning Video Representations using Contrastive Bidirectional Transformer.
Chen Sun, Fabien Baradel, Kevin Murphy, Cordelia Schmid.
[paper]
(EMNLP2020_HERO) HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training.
Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, Jingjing Liu.
[paper]
[code]
(CVPR2020_ActBERT) ActBERT: Learning Global-Local Video-Text Representations.
Linchao Zhu, Yi Yang.
[paper]
(CVPR2021_ClipBERT) Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling.
Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu.
[paper]
[code]
(CVPRW2021_MDMMT) MDMMT: Multidomain Multimodal Transformer for Video Retrieval.
Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr Petiushko.
[paper]
[code]
(ICCV2021_Frozen) Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval.
Max Bain, Arsha Nagrani, Gül Varol, Andrew Zisserman.
[paper]
[code]
(ICCV2021_TEACHTEXT) TEACHTEXT: CrossModal Generalized Distillation for Text-Video Retrieval.
Ioana Croitoru, Simion-Vlad Bogolin, Marius Leordeanu, Hailin Jin, Andrew Zisserman, Samuel Albanie, Yang Liu.
[paper]
[code]
(Neurocomputing2022_CLIP4Clip) CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval.
Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, Tianrui Li.
[paper]
[code]
(ACL2021_VLM) VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding.
Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer.
[paper]
[code]
(arXiv2021_CLIP2Video) CLIP2Video: Mastering Video-Text Retrieval via Image CLIP.
Han Fang, Pengfei Xiong, Luhui Xu, Yu Chen.
[paper]
[code]
(ICCV2021_TACo) TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment.
Jianwei Yang, Yonatan Bisk, Jianfeng Gao.
[paper]
(arXiv2021_CAMoE) Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss.
Xing Cheng, Hezheng Lin, Xiangyu Wu, Fan Yang, Dong Shen.
[paper]
(EMNLP2021_VideoCLIP) VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding.
Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer.
[paper]
[code]
(arXiv2021_CLIP2TV) CLIP2TV: An Empirical Study on Transformer-based Methods for Video-Text Retrieval.
Zijian Gao, Jingyu Liu, Sheng Chen, Dedan Chang, Hao Zhang, Jinwei Yuan.
[paper]
(CVPR2022_OA-Transformer) Object-aware Video-language Pre-training for Retrieval.
Alex Jinpeng Wang, Yixiao Ge, Guanyu Cai, Rui Yan, Xudong Lin, Ying Shan, Xiaohu Qie, Mike Zheng Shou.
[paper]
[code]
(AAAI2023_RegionLearner) Video-Text Pre-training with Learned Regions.
Rui Yan, Mike Zheng Shou, Yixiao Ge, Alex Jinpeng Wang, Xudong Lin, Guanyu Cai, Jinhui Tang.
[paper]
[code]
(ECCV2022_LAFF) Lightweight Attentional Feature Fusion: A New Baseline for Text-to-Video Retrieval.
Fan Hu, Aozhu Chen, Ziyue Wang, Fangming Zhou, Jianfeng Dong, Xirong Li.
[paper]
[code]
(ACMMM2021_CoCo-BERT) CoCo-BERT: Improving Video-Language Pre-training with Contrastive Cross-modal Matching and Denoising.
Jianjie Luo, Yehao Li, Yingwei Pan, Ting Yao, Hongyang Chao, Tao Mei.
[paper]
(CVPR2022_ALPRO) Align and Prompt: Video-and-Language Pre-training with Entity Prompts.
Dongxu Li, Junnan Li, Hongdong Li, Juan Carlos Niebles, Steven C.H. Hoi.
[paper]
[code]
(CVPR2022_MCQ) Bridging Video-text Retrieval with Multiple Choice Questions.
Yuying Ge, Yixiao Ge, Xihui Liu, Dian Li, Ying Shan, Xiaohu Qie, Ping Luo.
[paper]
[code]
(arXiv2022_MDMMT-2) MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization.
Alexander Kunitsyn, Maksim Kalashnikov, Maksim Dzabraev, Andrei Ivaniuta.
[paper]
(arXiv2022_DRL) Disentangled Representation Learning for Text-Video Retrieval.
Qiang Wang, Yanhao Zhang, Yun Zheng, Pan Pan, Xian-Sheng Hua.
[paper]
[code]
(CVPR2023_All-in-One) All in One: Exploring Unified Video-Language Pre-training.
Alex Jinpeng Wang, Yixiao Ge, Rui Yan, Yuying Ge, Xudong Lin, Guanyu Cai, Jianping Wu, Ying Shan, Xiaohu Qie, Mike Zheng Shou.
[paper]
[code]
(arXiv2022_DemoVLP) Revitalize Region Feature for Democratizing Video-Language Pre-training.
Guanyu Cai, Yixiao Ge, Alex Jinpeng Wang, Rui Yan, Xudong Lin, Ying Shan, Lianghua He, Xiaohu Qie, Jianping Wu, Mike Zheng Shou.
[paper]
[code]
(CVPR2022_X-Pool) X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval.
Satya Krishna Gorti, Noel Vouitsis, Junwei Ma, Keyvan Golestan, Maksims Volkovs, Animesh Garg, Guangwei Yu.
[paper]
[code]
[project]
(CVPR2022_TAN) Temporal Alignment Networks for Long-term Video.
Tengda Han, Weidi Xie, Andrew Zisserman.
[paper]
[code]
(arXiv2022_HCMI) Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations.
Jie Jiang, Shaobo Min, Weijie Kong, Dihong Gong, Hongfa Wang, Zhifeng Li, Wei Liu.
[paper]
(CVPR2022_MILES) MILES: Visual BERT Pre-training with Injected Language Semantics for Video-text Retrieval.
Yuying Ge, Yixiao Ge, Xihui Liu, Alex Jinpeng Wang, Jianping Wu, Ying Shan, Xiaohu Qie, Ping Luo.
[paper]
[code]
(SIGIR2022_CenterCLIP) CenterCLIP: Token Clustering for Efficient Text-Video Retrieval.
Shuai Zhao, Linchao Zhu, Xiaohan Wang, Yi Yang.
[paper]
[code]
(arXiv2022_VL-BEiT) VL-BEiT: Generative Vision-Language Pretraining.
Hangbo Bao, Wenhui Wang, Li Dong, Furu Wei.
[paper]
(ACL2023_Singularity) Revealing Single Frame Bias for Video-and-Language Learning.
Jie Lei, Tamara L. Berg, Mohit Bansal.
[paper]
[code]
(arXiv2022_LaT) LaT: Latent Translation with Cycle-Consistency for Video-Text Retrieval.
Jinbin Bai, Chunhui Liu, Feiyue Ni, Haofan Wang, Mengying Hu, Xiaofeng Guo, Lele Cheng.
[paper]
(ACMMM2022_X-CLIP) X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval.
Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, Rongrong Ji.
[paper]
[code]
(ECCV2022_TS2-Net) TS2-Net: Token Shift and Selection Transformer for Text-Video Retrieval.
Yuqi Liu, Pengfei Xiong, Luhui Xu, Shengming Cao, Qin Jin.
[paper]
[code]
(CVPR2023_Clover) Clover: Towards A Unified Video-Language Alignment and Fusion Model.
Jingjia Huang, Yinan Li, Jiashi Feng, Xinglong Wu, Xiaoshuai Sun, Rongrong Ji.
[paper]
[code]
(SIGIR2022_CRET) CRET: Cross-Modal Retrieval Transformer for Efficient Text-Video Retrieval.
Kaixiang Ji, Jiajia Liu, Weixiang Hong, Liheng Zhong, Jian Wang, Jingdong Chen, Wei Chu.
[paper]
(ECCV2022_LocVTP) LocVTP: Video-Text Pre-training for Temporal Localization.
Meng Cao, Tianyu Yang, Junwu Weng, Can Zhang, Jue Wang, Yuexian Zou.
[paper]
[code]
(ICLR2023_CLIP-ViP) CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment.
Hongwei Xue, Yuchong Sun, Bei Liu, Jianlong Fu, Ruihua Song, Houqiang Li, Jiebo Luo.
[paper]
[code]
(NeurIPS2022_LGDN) LGDN: Language-Guided Denoising Network for Video-Language Modeling.
Haoyu Lu, Mingyu Ding, Nanyi Fei, Yuqi Huo, Zhiwu Lu.
[paper]
(NeurIPS2022_EMCL) Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations.
Peng Jin, Jinfa Huang, Fenglin Liu, Xian Wu, Shen Ge, Guoli Song, David A. Clifton, Jie Chen.
[paper]
[code]
(arXiv2022_MAC) Masked Contrastive Pre-Training for Efficient Video-Text Retrieval.
Fangxun Shu, Biaolong Chen, Yue Liao, Shuwen Xiao, Wenyu Sun, Xiaobo Li, Yousong Zhu, Jinqiao Wang, Si Liu.
[paper]
[code]
(CVPR2023_VindLU) VindLU: A Recipe for Effective Video-and-Language Pretraining.
Feng Cheng, Xizi Wang, Jie Lei, David Crandall, Mohit Bansal, Gedas Bertasius.
[paper]
[code]
(ICCV2023_HiTeA) HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training.
Qinghao Ye, Guohai Xu, Ming Yan, Haiyang Xu, Qi Qian, Ji Zhang, Fei Huang.
[paper]
(CVPR2023_BIKE) Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models.
Wenhao Wu, Xiaohan Wang, Haipeng Luo, Jingdong Wang, Yi Yang, Wanli Ouyang.
[paper]
[code]
(CVPR2023_Cap4Video) Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval?
Wenhao Wu, Haipeng Luo, Bo Fang, Jingdong Wang, Wanli Ouyang.
[paper]
[code]
(CVPR2023_STAN) Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring.
Ruyang Liu, Jingjia Huang, Ge Li, Jiashi Feng, Xinglong Wu, Thomas H. Li.
[paper]
[code]
(EMNLP2023_S3MA) Video-Text Retrieval by Supervised Sparse Multi-Grained Learning.
Yimu Wang, Peng Shi.
[paper]
[code]
(AAAI2023_STOA-VLP) STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training.
Weihong Zhong, Mao Zheng, Duyu Tang, Xuan Luo, Heng Gong, Xiaocheng Feng, Bing Qin.
[paper]
(AAAI2024_MuLTI) MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling.
Jiaqi Xu, Bo Liu, Yunkuo Chen, Mengli Cheng, Xing Shi.
[paper]
(CVPRW2023_Cali-NCE) Cali-NCE: Boosting Cross-modal Video Representation Learning with Calibrated Alignment.
Nanxuan Zhao, Jianbo Jiao, Weidi Xie, Dahua Lin.
[paper]
[code]
(CVPR2023_CLIPPING) CLIPPING: Distilling CLIP-Based Models with a Student Base for Video-Language Retrieval.
Renjing Pei, Jianzhuang Liu, Weimian Li, Bin Shao, Songcen Xu, Peng Dai, Juwei Lu, Youliang Yan.
[paper]
(CVPR2023_SViTT) SViTT: Temporal Learning of Sparse Video-Text Transformers.
Yi Li, Kyle Min, Subarna Tripathi, Nuno Vasconcelos.
[paper]
[code]
(CVPR2023_LAVENDER) LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling.
Linjie Li, Zhe Gan, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Ce Liu, Lijuan Wang.
[paper]
[code]
(ICCV2023_DiffusionRet) DiffusionRet: Generative Text-Video Retrieval with Diffusion Model.
Peng Jin, Hao Li, Zesen Cheng, Kehan Li, Xiangyang Ji, Chang Liu, Li Yuan, Jie Chen.
[paper]
[code]
(CVPR2023_MELTR) MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models.
Dohwan Ko, Joonmyung Choi, Hyeong Kyu Choi, Kyoung-Woon On, Byungseok Roh, Hyunwoo J. Kim.
[paper]
[code]
(CVPR2023_HBI) Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning.
Peng Jin, Jinfa Huang, Pengfei Xiong, Shangxuan Tian, Chang Liu, Xiangyang Ji, Li Yuan, Jie Chen.
[paper]
[code]
(ICCV2023_PIDRo) PIDRo: Parallel Isomeric Attention with Dynamic Routing for Text-Video Retrieval.
Peiyan Guan, Renjing Pei, Bin Shao, Jianzhuang Liu, Weimian Li, Jiaxi Gu, Hang Xu, Songcen Xu, Youliang Yan, Edmund Y. Lam.
[paper]
(ICCV2023_UCoFiA) Unified Coarse-to-Fine Alignment for Video-Text Retrieval.
Ziyang Wang, Yi-Lin Sung, Feng Cheng, Gedas Bertasius, Mohit Bansal.
[paper]
[code]
(ACMMM2023_DMAE) Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning.
Chen Jiang, Hong Liu, Xuzheng Yu, Qing Wang, Yuan Cheng, Jia Xu, Zhongyi Liu, Qingpei Guo, Wei Chu, Ming Yang, Yuan Qi.
[paper]
[code]
(ICLR2024_Norton) Multi-granularity Correspondence Learning from Long-term Noisy Videos.
Yijie Lin, Jie Zhang, Zhenyu Huang, Jia Liu, Zujie Wen, Xi Peng.
[paper]
[code]
(COLING2024_UNIFY) Unifying Latent and Lexicon Representations for Effective Video-Text Retrieval.
Haowei Liu, Yaya Shi, Haiyang Xu, Chunfeng Yuan, Qinghao Ye, Chenliang Li, Ming Yan, Ji Zhang, Fei Huang, Bing Li, Weiming Hu.
[paper]

Image-Language Datasets

(NIPS2011_SBU) Im2Text: Describing Images Using 1 Million Captioned Photographs.
Vicente Ordonez, Girish Kulkarni, Tamara Berg.
[paper]
(CACM2016_YFCC100M) YFCC100M: The New Data in Multimedia Research.
Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, Li-Jia Li.
[paper]
(IJCV2017_VG) Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, Fei-Fei Li.
[paper]
(ICCV2017_JFT-300M) Revisiting Unreasonable Effectiveness of Data in Deep Learning Era.
Chen Sun, Abhinav Shrivastava, Saurabh Singh, Abhinav Gupta.
[paper]
(ECCV2020_TextCaps) TextCaps: a Dataset for Image Captioning with Reading Comprehension.
Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, Amanpreet Singh.
[paper]
[code]
(SIGIR2021_WIT) WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning.
Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, Marc Najork.
[paper]
[code]
(CVPR2021_CC-12M) Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts.
Soravit Changpinyo, Piyush Sharma, Nan Ding, Radu Soricut.
[paper]
[code]
(NeurIPS2022_VLMo) VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts.
Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Furu Wei.
[paper]
[code]
(CVPR2022_LiT) LiT: Zero-Shot Transfer with Locked-image text Tuning.
Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer.
[paper]
[code]
(NeurIPS2021_RedCaps) RedCaps: web-curated image-text data created by the people, for the people.
Karan Desai, Gaurav Kaul, Zubin Aysola, Justin Johnson.
[paper]
[code]
(CVPR2022_ALT200M) Scaling Up Vision-Language Pre-training for Image Captioning.
Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, Lijuan Wang.
[paper]
[code]
(TMLR2022_GIT) GIT: A Generative Image-to-text Transformer for Vision and Language.
Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
[paper]
[code]
(ICLR2023_WebLI) PaLI: A Jointly-Scaled Multilingual Language-Image Model.
Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut.
[paper]
(NeurIPS2022_LAION-5B) LAION-5B: An open large-scale dataset for training next generation image-text models.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, Jenia Jitsev.
[paper]
[code]
(Github2022_COYO-700M) COYO-700M: Image-Text Pair Dataset.
Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim.
[code]
(Blog2023_LAION POP) LAION POP: 600,000 High-resolution Images with Detailed Descriptions.
Christoph Schuhmann, Peter Bevan.
[code]
(SIGIR2023_COCO-F30K-FG) Rethinking Benchmarks for Cross-modal Image-text Retrieval.
Weijing Chen, Linli Yao, Qin Jin.
[paper]
[code]

Video-Language Datasets

(ACL2011_MSVD) Collecting Highly Parallel Data for Paraphrase Evaluation.
David L. Chen, William B. Dolan.
[paper]
(CVPR2015_ActivityNet) ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding.
Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, Juan Carlos Niebles.
[paper]
(CVPR2016_MSR-VTT) MSR-VTT: A Large Video Description Dataset for Bridging Video and Language.
Jun Xu, Tao Mei, Ting Yao, Yong Rui.
[paper]
(CVPR2016_TGIF) TGIF: A New Dataset and Benchmark on Animated GIF Description.
Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, Jiebo Luo.
[paper]
(IJCV2017_LSMDC) Movie Description.
Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, Bernt Schiele.
[paper]
(arXiv2016_YouTube-8M) YouTube-8M: A Large-Scale Video Classification Benchmark.
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan.
[paper]
(AAAI2018_YouCook2) Towards Automatic Learning of Procedures from Web Instructional Videos.
Luowei Zhou, Chenliang Xu, Jason J. Corso.
[paper]
(CVPR2017_Kinetics-400) Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset.
Joao Carreira, Andrew Zisserman.
[paper]
(CVPR2018_AVA) AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions.
Chunhui Gu, Chen Sun, David A. Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, Jitendra Malik.
[paper]
(ICCV2017_Something-Something V2) The "something something" video database for learning and evaluating visual common sense.
Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzyńska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, Roland Memisevic.
[paper]
(ICCV2017_DiDeMo) Localizing Moments in Video with Natural Language.
Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell.
[paper]
(arXiv2018_Kinetics-600) A Short Note about Kinetics-600.
Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, Andrew Zisserman.
[paper]
(arXiv2018_How2) How2: A Large-scale Dataset for Multimodal Language Understanding.
Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loïc Barrault, Lucia Specia, Florian Metze.
[paper]
(ICCV2019_VATEX) VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research.
Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, William Yang Wang.
[paper]
(arXiv2019_Kinetics-700) A Short Note on the Kinetics-700 Human Action Dataset.
Joao Carreira, Eric Noland, Chloe Hillier, Andrew Zisserman.
[paper]
(ICCV2019_HowTo100M) HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips.
Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, Josef Sivic.
[paper]
(arXiv2020_WTS70M) Learning Video Representations from Textual Web Supervision.
Jonathan C. Stroud, Zhichao Lu, Chen Sun, Jia Deng, Rahul Sukthankar, Cordelia Schmid, David A. Ross.
[paper]
(ICCV2021_WebVid10M) Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval.
Max Bain, Arsha Nagrani, Gül Varol, Andrew Zisserman.
[paper]
(NeurIPS2021_YT-Temporal-180M) MERLOT: Multimodal Neural Script Knowledge Models.
Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi.
[paper]
(CVPR2022_HD-VILA-100M) Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions.
Hongwei Xue, Tiankai Hang, Yanhong Zeng, Yuchong Sun, Bei Liu, Huan Yang, Jianlong Fu, Baining Guo.
[paper]
(ECCV2022_VideoCC3M) Learning Audio-Video Modalities from Image Captions.
Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid.
[paper]
(CVPR2022_Tencent-MVSE) Tencent-MVSE: A Large-Scale Benchmark Dataset for Multi-Modal Video Similarity Evaluation.
Zhaoyang Zeng, Yongsheng Luo, Zhenhua Liu, Fengyun Rao, Dian Li, Weidong Guo, Zhen Wen.
[paper]
(CVPR2023_CNVid-3.5M) CNVid-3.5M: Build, Filter, and Pre-train the Large-scale Public Chinese Video-text Dataset.
Tian Gan, Qing Wang, Xingning Dong, Xiangyuan Ren, Liqiang Nie, Qingpei Guo.
[paper]
(arXiv2023_InternVid-10M) InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation.
Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinyuan Chen, Yaohui Wang, Ping Luo, Ziwei Liu, Yali Wang, Limin Wang, Yu Qiao.
[paper]
(arXiv2023_VideoCon) VideoCon: Robust Video-Language Alignment via Contrast Captions.
Hritik Bansal, Yonatan Bitton, Idan Szpektor, Kai-Wei Chang, Aditya Grover.
[paper]