SMoG
SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping
Lightly 1.2.30 comes with the SMoG model introduced in "Unsupervised Visual Representation Learning by Synchronous Momentum Grouping". Documentation and benchmarks will be released soon.
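Synchronous momentum grouping replaces instance-level contrast with group-level contrast: each feature is assigned to a group prototype, and the prototypes are updated with momentum in lockstep with the momentum encoder. The snippet below is a minimal plain-PyTorch sketch of that idea, not Lightly's API or the paper's exact procedure; the toy backbone, group count, momentum coefficients, and helper names are all illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

n_groups, dim, beta = 300, 128, 0.99  # illustrative hyperparameters

# Toy backbone standing in for a real encoder, plus its momentum (EMA) copy.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
momentum_backbone = copy.deepcopy(backbone)
for p in momentum_backbone.parameters():
    p.requires_grad = False

# Group prototypes, updated with momentum instead of gradients.
prototypes = F.normalize(torch.randn(n_groups, dim), dim=1)

def assign_groups(features):
    # Assign each feature to its nearest prototype by cosine similarity.
    return (F.normalize(features, dim=1) @ prototypes.T).argmax(dim=1)

@torch.no_grad()
def update_prototypes(features, assignments):
    # Synchronous momentum update: pull each assigned prototype
    # toward the features of its group members.
    for i, g in enumerate(assignments):
        prototypes[g] = F.normalize(
            beta * prototypes[g] + (1 - beta) * features[i], dim=0
        )

@torch.no_grad()
def momentum_step(m=0.99):
    # EMA update of the momentum branch, applied after each optimizer step.
    for p, p_m in zip(backbone.parameters(), momentum_backbone.parameters()):
        p_m.mul_(m).add_(p, alpha=1 - m)

# One toy training step on two augmented views of a batch.
x_query, x_key = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
z = F.normalize(backbone(x_query), dim=1)
with torch.no_grad():
    z_m = F.normalize(momentum_backbone(x_key), dim=1)
    groups = assign_groups(z_m)      # group targets from the momentum branch
    update_prototypes(z_m, groups)

logits = z @ prototypes.T / 0.1      # temperature-scaled group logits
loss = F.cross_entropy(logits, groups)
loss.backward()
momentum_step()
```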
Breaking Change
- In the ApiWorkflowClient, create_dataset now raises an error if a dataset with the same name already exists. To reuse an existing dataset, switch to set_dataset_id_by_name (see the migration sketch below).
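The migration is small, as sketched below; the token value is a placeholder and error handling is omitted:

```python
from lightly.api import ApiWorkflowClient

client = ApiWorkflowClient(token="YOUR_TOKEN")  # placeholder token

# create_dataset now raises an error if "my-dataset" already exists:
# client.create_dataset("my-dataset")

# To reuse the existing dataset, resolve it by name instead:
client.set_dataset_id_by_name("my-dataset")
```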
Other Changes
- OBS (Object Storage Service) remote datasources are now supported (see the sketch after this list)
- Documentation improvements
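For the new OBS datasources, a configuration sketch follows; the method and parameter names here are assumptions modeled on how Lightly configures other cloud datasources and may not match the actual API:

```python
from lightly.api import ApiWorkflowClient

client = ApiWorkflowClient(token="YOUR_TOKEN")  # placeholder token
client.set_dataset_id_by_name("my-dataset")

# Hypothetical call: name and parameters are assumptions, not confirmed API.
client.set_obs_config(
    resource_path="obs://bucket/dataset/",
    obs_endpoint="https://obs.example-region.example.com",
    obs_access_key_id="ACCESS_KEY",
    obs_secret_access_key="SECRET_KEY",
)
```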
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MSN: Masked Siamese Networks for Label-Efficient Learning, 2022
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020