Masked Siamese Networks
MSN: Masked Siamese Networks for Label-Efficient Learning
Lightly 1.2.28 comes with the new MSN model introduced in Masked Siamese Networks for Label-Efficient Learning. Please head over to our docs to see how to use MSN with Lightly: https://docs.lightly.ai/examples/msn.html
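For a quick intuition of what MSN optimizes, below is a minimal conceptual sketch of the MSN objective in plain PyTorch. This is not Lightly's API: the function name `msn_loss`, the tensor shapes, and the temperature/regularization values are illustrative assumptions based on the MSN paper. For the supported Lightly implementation, please use the linked docs example.

```python
import torch
import torch.nn.functional as F

def msn_loss(anchor_emb, target_emb, prototypes,
             temp_anchor=0.1, temp_target=0.025, lam=1.0):
    # Illustrative sketch of the MSN objective (not Lightly's API).
    # anchor_emb: embeddings of masked "anchor" views.
    # target_emb: embeddings of unmasked "target" views.
    # prototypes: learnable cluster prototypes.
    anchor_emb = F.normalize(anchor_emb, dim=1)
    target_emb = F.normalize(target_emb, dim=1)
    protos = F.normalize(prototypes, dim=1)

    # Soft prototype assignments; the target distribution uses a lower
    # temperature, so it is sharper than the anchor distribution.
    anchor_probs = F.softmax(anchor_emb @ protos.t() / temp_anchor, dim=1)
    with torch.no_grad():
        target_probs = F.softmax(target_emb @ protos.t() / temp_target, dim=1)

    # Cross-entropy between target and anchor assignments.
    ce = -(target_probs * torch.log(anchor_probs + 1e-6)).sum(dim=1).mean()

    # Mean-entropy maximization regularizer: encourages the anchors to
    # use all prototypes on average, preventing collapse.
    mean_probs = anchor_probs.mean(dim=0)
    me_max = -(mean_probs * torch.log(mean_probs + 1e-6)).sum()

    return ce - lam * me_max

# Toy usage with random embeddings (hypothetical shapes).
anchors = torch.randn(8, 256)
targets = torch.randn(8, 256)
prototypes = torch.randn(1024, 256)
loss = msn_loss(anchors, targets, prototypes)
```

In practice the target embeddings come from a momentum-averaged encoder and gradients flow only through the anchor branch, which the `torch.no_grad()` block above mimics.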
Other Changes
- Lightly is now compatible with PyTorch Lightning v1.7
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- BYOL: Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MSN: Masked Siamese Networks for Label-Efficient Learning, 2022
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020