# clip_discourse

This is my CS 2002 course project, in which I implemented a semi-supervised model for predicting discourse relations between images and captions. The work builds on CLIP (https://github.com/openai/CLIP) and SwAV (https://arxiv.org/pdf/2006.09882.pdf). CLIP is a self-supervised model that learns joint representations of images and captions; SwAV is an unsupervised model that learns visual features from images.
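
As a minimal sketch (not the project's actual training code), the openai/CLIP library can be used to extract paired image and caption embeddings that a downstream discourse-relation classifier could consume. The image path and caption string below are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the pretrained CLIP model and its image preprocessing pipeline.
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder inputs: replace with an image-caption pair from your dataset.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a caption describing the image"]).to(device)

with torch.no_grad():
    # Joint embedding space: both outputs are 512-dimensional for ViT-B/32.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Concatenated features could serve as input to a discourse-relation classifier.
pair_features = torch.cat([image_features, text_features], dim=-1)
print(pair_features.shape)  # torch.Size([1, 1024])
```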