Analysing Model Pruning and Unit Pruning on a large dense MNIST network
Updated Mar 25, 2023 - Jupyter Notebook
Pruning is <3
Implementation of Neuron Pruning with weight pruning
Code Implementation of On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee
Code for "Characterising Across Stack Optimisations for Deep Convolutional Neural Networks"
Neural network weight pruning in a static LoRA-like way
TensorFlow implementation of weight and unit pruning and sparsification
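The two techniques named above differ in granularity: weight pruning zeroes individual small-magnitude weights (unstructured), while unit pruning zeroes entire neurons, i.e. whole columns of a layer's weight matrix (structured). A minimal NumPy sketch of both, using magnitude/L2-norm criteria as an illustrative assumption (the listed repos may rank weights differently):

```python
import numpy as np

def weight_prune(w, sparsity):
    """Unstructured pruning: zero the smallest-magnitude individual weights.

    Ties at the threshold may zero slightly more than the requested fraction.
    """
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def unit_prune(w, sparsity):
    """Structured pruning: zero whole output units (columns) with the smallest L2 norm."""
    norms = np.linalg.norm(w, axis=0)          # one norm per output unit
    k = int(w.shape[1] * sparsity)
    pruned = w.copy()
    if k:
        idx = np.argsort(norms)[:k]            # indices of the weakest units
        pruned[:, idx] = 0.0
    return pruned
```

Unit pruning removes entire rows/columns and therefore shrinks the effective layer size (a real speedup on dense hardware), while weight pruning reaches higher sparsity at the same accuracy but needs sparse kernels to pay off.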
Feather is a module that enables effective sparsification of neural networks during training. This repository accompanies the paper "Feather: An Elegant Solution to Effective DNN Sparsification" (BMVC2023).
(Unstructured) Weight Pruning via Adaptive Sparsity Loss
Image captioning with weight pruning in PyTorch
Knowledge distillation from Ensembles of Iterative pruning (BMVC 2020)
[ICCV2023 Official PyTorch code] for Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution
[ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.
A research library for pytorch-based neural network pruning, compression, and more.
Learning Efficient Convolutional Networks through Network Slimming, In ICCV 2017.