Learning Efficient Convolutional Networks through Network Slimming, In ICCV 2017. (Python; updated May 13, 2019)
A research library for pytorch-based neural network pruning, compression, and more.
[ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.
Knowledge distillation from ensembles of iteratively pruned networks (BMVC 2020)
[ICCV 2023] Official PyTorch code for Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution
Image captioning with weight pruning in PyTorch
(Unstructured) Weight Pruning via Adaptive Sparsity Loss
Feather is a module that enables effective sparsification of neural networks during training. This repository accompanies the paper "Feather: An Elegant Solution to Effective DNN Sparsification" (BMVC2023).
TensorFlow implementation of weight and unit pruning and sparsification
Neural network weight pruning in a static, LoRA-like way
Implementation of several neural network compression techniques (knowledge distillation, pruning, quantization, factorization), in Haiku.
Implementation of neuron pruning combined with weight pruning
Code implementation of "On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee"
Code for "Characterising Across Stack Optimisations for Deep Convolutional Neural Networks"
Analysing model pruning and unit pruning on a large dense MNIST network
Pruning is <3
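Most of the repositories above implement some variant of unstructured, magnitude-based weight pruning: weights with the smallest absolute values are masked to zero. As a minimal illustration of that core operation (a sketch in NumPy, not code from any of the listed repos; `magnitude_prune` is a hypothetical helper name):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a binary mask that zeroes the smallest-magnitude weights.

    `sparsity` is the fraction of weights to remove
    (0.0 keeps everything, 1.0 removes everything).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones_like(weights)
    # Threshold at the k-th smallest magnitude (ties broken arbitrarily).
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Example: prune 50% of a toy 2x2 weight matrix.
w = np.array([[0.1, -2.0], [0.3, 0.05]])
mask = magnitude_prune(w, 0.5)   # zeroes the two smallest-magnitude entries
pruned = w * mask
```

In practice (e.g. with PyTorch's `torch.nn.utils.prune` utilities), the mask is typically applied during or after training and the network is fine-tuned to recover accuracy, rather than pruned in a single post-hoc step.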