An implementation of MoCo and the MoCo-v2 improvements, pre-trained on Imagenette.
Updated Jun 15, 2021 - Python
This project compares the performance of Swin-Transformer v2 implemented in JAX and PyTorch.
t-SNE visualisation of CNN features
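Not from that repository, but a minimal sketch of the general technique it describes: reduce high-dimensional CNN features to 2-D with t-SNE so they can be scatter-plotted. The random feature matrix below is a stand-in for real embeddings taken from a network's penultimate layer.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512)).astype(np.float32)  # stand-in for CNN features
labels = rng.integers(0, 10, size=200)                     # hypothetical class labels

# Project to 2-D; perplexity must be smaller than the number of samples.
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(embedded.shape)  # one 2-D point per sample, ready for a scatter plot
```

The 2-D points would then be plotted colour-coded by `labels` to inspect how well the features cluster by class.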
🍭 Unofficial reproduction of the paper "What's Hidden in a Randomly Weighted Neural Network?"
Image classification model for the Imagenette dataset.
Training different neural network architectures on different datasets to reach near-SOTA results with less time and compute.
Trained a ResNet50 model from scratch on the Imagewoof dataset, reaching 83% accuracy.
My Vision Transformer (ViT) implementation using PyTorch.
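As an illustration of the first step in any ViT implementation (not code from the repository above): the image is split into fixed-size patches and each patch is linearly projected to an embedding vector, which is conventionally implemented as a strided convolution.

```python
import torch
import torch.nn as nn

patch_size, embed_dim = 16, 192  # assumed hyperparameters (ViT-Tiny-like)
# A stride-16 conv over 16x16 kernels is equivalent to a per-patch linear projection.
proj = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

x = torch.rand(1, 3, 224, 224)               # stand-in image batch
patches = proj(x)                            # (1, embed_dim, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)  # (1, 196, embed_dim) token sequence
print(tokens.shape)
```

The resulting 196 tokens (plus a class token and position embeddings) are what the transformer encoder actually consumes.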
This repository contains implementations of two adversarial-example attack methods, FGSM and I-FGSM, and one input-transformation defense mechanism evaluated against both attacks on the ImageNet dataset.
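A minimal sketch of the FGSM attack idea, using a toy classifier rather than the repository's actual model: perturb the input by a small step `eps` in the direction of the sign of the loss gradient, then clip back to the valid pixel range.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in image in [0, 1]
y = torch.tensor([3])                             # stand-in true label
eps = 0.03                                        # perturbation budget

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM step: x_adv = x + eps * sign(grad_x loss), clipped to the valid range.
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

I-FGSM repeats this step several times with a smaller step size, re-computing the gradient at each iterate; the defense side transforms inputs before classification to disrupt such perturbations.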