Releases: lightly-ai/lightly
API update for working with delegated access
Api connections
- Delegated access now uses Lightly URLs (#875)
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Delegated Access and Documentation Updates
Documentation
- New account ID for delegated access (#872)
- Docs on loading a model from a Lightly worker checkpoint (#870)
- Additional speedup information for the max_epochs and num_workers settings (#873)
- Improved README (#871)
- Updated datasource documentation (#867)
Dependencies
- Relaxed the pyav version requirement (#868)
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Documentation updates and API connections
Documentation
- Improved docs for active learning (#862)
Api connections
- Datasource loading now supports a tqdm progress bar (#860)
- All API requests now have a timeout (#863)
- Video downloads also have a timeout (#864)
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Documentation Updates and Improved Video Loading
Documentation
- New docs on how to create frame predictions compatible with the Lightly platform (#857)
- New docs for sequence selection features in the Lightly worker (#856)
- Remove duplicated section in docs (#855)
- Updated docs for first steps with the Lightly worker (#858)
Video Loading
- Fixed loading of videos with wrong metadata (#853)
Other
- Removed trailing comma in filenames exported from the API (#859)
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
PIRL and more API helpers
Self-Supervised Learning of Pretext-Invariant Representations
- Support for the PIRL collate function has been added (#850). Special thanks to @shikharmn for contributing this!
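A minimal sketch of how the new collate function might be plugged into a standard PyTorch DataLoader. The class name PIRLCollateFunction and the n_grid parameter are assumed from the #850 feature description and should be treated as assumptions, not a confirmed API.

```python
import torch
from lightly.data import LightlyDataset
from lightly.data.collate import PIRLCollateFunction  # class name assumed from #850

# Build a dataset from a folder of images.
dataset = LightlyDataset(input_dir="path/to/images/")

# PIRL pairs each image with a jigsaw-transformed version of itself;
# n_grid is the assumed name of the jigsaw grid-size parameter.
collate_fn = PIRLCollateFunction(input_size=32, n_grid=3)

dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    collate_fn=collate_fn,
)
```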
Improvements
- Exposed functionality to export the filenames of the samples within a tag (#852); see the sketch after this list
- Better error handling of requests by passing sessions (#851)
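A hedged sketch of exporting the filenames for a tag via the ApiWorkflowClient. The method name export_filenames_by_tag_name is an assumption based on the #852 feature description, not confirmed by these notes.

```python
from lightly.api import ApiWorkflowClient

# Token and dataset ID come from the Lightly platform.
client = ApiWorkflowClient(token="MY_TOKEN", dataset_id="MY_DATASET_ID")

# Method name assumed from the #852 feature description; it should
# return the filenames of all samples within the given tag.
filenames = client.export_filenames_by_tag_name("initial-tag")
print(filenames)
```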
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Refresh docs and more API helpers
Docs
- @jwuphysics noticed and fixed some typos in the docs, thanks a lot!
- @MalteEbner found some more and fixed them too 🙂
Support for role-based access to S3 from the ApiWorkflowClient
- With #841 we added helpers to configure a Lightly dataset with delegated access rules; a sketch follows this list.
- #847 added the necessary documentation
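A hedged sketch of the delegated-access helper from #841. The method name set_s3_delegated_access_config and its parameter names are assumptions based on the feature description; the bucket, role ARN, and external ID are placeholders.

```python
from lightly.api import ApiWorkflowClient

client = ApiWorkflowClient(token="MY_TOKEN")
client.create_dataset("my-dataset")

# Method and parameter names assumed from the #841 feature description:
# configure the dataset to read from S3 via an IAM role (delegated access)
# instead of long-lived access keys.
client.set_s3_delegated_access_config(
    resource_path="s3://my-bucket/my-dataset/",
    region="eu-central-1",
    role_arn="arn:aws:iam::123456789012:role/lightly-access",
    external_id="my-external-id",
)
```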
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Examples for MAE and better docs
Masked Autoencoders Examples
We added examples for the MAE model.
Docs
- We added docs for the collapse detection helper
- We added docs for plotting positive and negative example images
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Masked Autoencoder, SSL debug utilities
Masked Autoencoders
We implemented the paper Masked Autoencoders Are Scalable Vision Learners (https://arxiv.org/abs/2111.06377). The paper suggests that a masked autoencoder, similar to pre-training in NLP, works very well as a pretext task for self-supervised learning. See #721 for more details. Thanks to @Atharva-Phatak for helping us figure out a good implementation method.
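To make the pretext task concrete, here is a minimal plain-PyTorch sketch of the core idea (random patch masking, encoding only visible patches, and a reconstruction loss on the masked patches). It is purely illustrative and not Lightly's actual implementation; a real MAE also adds positional embeddings and uses a ViT encoder.

```python
import torch

def random_mask(num_patches: int, mask_ratio: float = 0.75):
    # Shuffle patch indices and split them into visible / masked sets.
    perm = torch.randperm(num_patches)
    num_keep = int(num_patches * (1 - mask_ratio))
    return perm[:num_keep], perm[num_keep:]

batch_size, num_patches, dim = 8, 196, 768
patches = torch.randn(batch_size, num_patches, dim)  # flattened patch embeddings

keep_idx, mask_idx = random_mask(num_patches)

# 1) The encoder only processes the visible patches (25% of the sequence),
#    which is what makes MAE pre-training cheap.
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)
encoded = encoder(patches[:, keep_idx])

# 2) Learned mask tokens stand in for the masked positions and the full
#    sequence is passed through a lightweight decoder.
mask_token = torch.nn.Parameter(torch.zeros(1, 1, dim))
mask_tokens = mask_token.expand(batch_size, len(mask_idx), dim)
decoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=1,
)
decoded = decoder(torch.cat([encoded, mask_tokens], dim=1))

# 3) The loss is the reconstruction error on the masked patches only.
predictions = decoded[:, -len(mask_idx):]
loss = torch.nn.functional.mse_loss(predictions, patches[:, mask_idx])
```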
Collapse detection helper for SimSiam
We added a helper for detecting a collapsing SimSiam network. See https://ar5iv.labs.arxiv.org/html/2011.10566#S4.SS1 for more details.
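A hedged usage sketch, assuming the helper is std_of_l2_normalized in lightly.utils.debug: collapse shows up as the per-dimension standard deviation of the l2-normalized outputs dropping towards zero, while a healthy value is close to 1/sqrt(d).

```python
import torch
from lightly.utils.debug import std_of_l2_normalized  # helper name assumed

# Healthy representations: entries vary across the batch.
z_healthy = torch.randn(128, 256)
print(std_of_l2_normalized(z_healthy))   # roughly 1 / sqrt(256) = 0.0625

# Collapsed representations: every sample maps to the same vector.
z_collapsed = torch.ones(128, 256)
print(std_of_l2_normalized(z_collapsed))  # close to 0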
Plot positive and negative example images
We added a helper to plot positive and negative example images, which also makes it easy to see what the augmentations do. See #818 for more details.
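A sketch of how the plotting helper might be called, assuming it is plot_augmented_images in lightly.utils.debug and takes a list of PIL images together with a collate function; both of those names are assumptions, while SimCLRCollateFunction is the library's standard collate function.

```python
from PIL import Image
from lightly.data.collate import SimCLRCollateFunction
from lightly.utils.debug import plot_augmented_images  # helper name assumed

# Load a few example images from disk (paths are placeholders).
images = [Image.open(f"images/example_{i}.jpg") for i in range(4)]

# Show each image next to the augmented views produced by the
# collate function, to inspect what the augmentations do.
collate_fn = SimCLRCollateFunction(input_size=128)
fig = plot_augmented_images(images, collate_fn)
fig.savefig("augmentations.png")
```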
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Video Dataset initialisation improvements
Video Dataset initialisation speedup
When initialising a LightlyDataset on a directory with videos, all frames in all videos have to be counted to determine the number of frames in the dataset and their filenames. This process now uses multithreading over the videos and can thus be much faster.
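For reference, this is the construction that triggers the frame counting; LightlyDataset and get_filenames are part of the library, while the directory path is a placeholder.

```python
from lightly.data import LightlyDataset

# Initialising on a directory of videos counts all frames across all
# videos (now multithreaded); each frame becomes one dataset sample.
dataset = LightlyDataset(input_dir="path/to/videos/")
print(len(dataset))                  # total number of frames
print(dataset.get_filenames()[:5])   # frame filenames derived from the videos
```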
Video Dataset initialisation bugfix
We fixed a bug where the number of frames was wrongly estimated from the video length when using the pyav backend.
Video Dataset initialisation progress bar
When initialising the video dataset, a progress bar over the videos is now shown, which is helpful for datasets with many videos.
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020
Decoupled Contrastive Learning, Model heads with default parameters
Decoupled Contrastive Learning
We have implemented the Decoupled Contrastive Learning (DCL) loss. It allows faster training with smaller batch sizes than other self-supervised learning methods.
Documentation: https://docs.lightly.ai/examples/dcl.html
Decoupled Contrastive Learning paper: https://arxiv.org/abs/2110.06848
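A short sketch of plugging the loss into a training step, following the documented examples: DCLLoss in lightly.loss is the documented entry point, and the batch and projection sizes here are illustrative.

```python
import torch
from lightly.loss import DCLLoss

criterion = DCLLoss(temperature=0.1)

# z0 and z1 are the projections of two augmented views of the same batch;
# in a real training step they come from the model, so they require grad.
z0 = torch.randn(256, 128, requires_grad=True)
z1 = torch.randn(256, 128, requires_grad=True)

loss = criterion(z0, z1)
loss.backward()
```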
Model heads with default parameters
All model heads now have default parameters that follow the values from the original papers.
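For example, a projection head can now be constructed without arguments. A sketch using SimCLRProjectionHead from lightly.models.modules; the exact default values (assumed 2048 -> 2048 -> 128, matching the SimCLR paper) are an assumption.

```python
from lightly.models.modules import SimCLRProjectionHead

# With defaults following the paper (assumed: 2048 -> 2048 -> 128):
head = SimCLRProjectionHead()

# Explicit values are still possible, e.g. for a ResNet-18 backbone:
head = SimCLRProjectionHead(input_dim=512, hidden_dim=512, output_dim=128)
```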
Create custom metadata config
Custom metadata in the Lightly Platform can now easily be configured via ApiWorkflowClient.create_custom_metadata_config()
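A hedged sketch of the new client method. Only the method name create_custom_metadata_config comes from these notes; the structure of the configs payload below (entry keys and the data type value) is hypothetical and meant purely to illustrate mapping a metadata path to a display name and type.

```python
from lightly.api import ApiWorkflowClient

client = ApiWorkflowClient(token="MY_TOKEN", dataset_id="MY_DATASET_ID")

# The entry structure below is hypothetical; it illustrates mapping a
# metadata path to a display name and a data type.
client.create_custom_metadata_config(
    name="My metadata configuration",
    configs=[
        {
            "name": "Weather",
            "path": "weather.description",
            "defaultValue": "unknown",
            "valueDataType": "CATEGORICAL_STRING",
        },
    ],
)
```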
Progress bar for video dataset initialization
Constructing a LightlyDataset with large video datasets can take a long time, as all frames in all videos have to be counted. We added a tqdm progress bar for this step.
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020