3rd place solution for NeurIPS 2019 MicroNet challenge
Submission name: QualcommAI-EfficientNet. MicroNet Challenge (NeurIPS 2019) submission - Qualcomm AI Research
Experimental Adversarial Attack notebooks on CV models
Image classification implemented with MindSpore
FrostNet: Towards Quantization-Aware Network Architecture Search
Our work implements novel L2-Norm gradient (L2Grad) and weight-distribution variance (VarianceNorm) regularizers for quantization-aware training (a hedged sketch of this regularized-loss pattern appears after this list), so that the distribution of weights is more compatible with post-training quantization, especially for low bit-widths. We provide a theoretical basis that directly relates L2-Grad with post quan…
A simple formula that supports eight types of quantization
A tutorial on model quantization using TensorFlow
8-bit quantized Transformer for neural machine translation.
All methods of PyTorch quantization, demonstrated on ResNet-50
micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, reg…
Visualizing the effect of quantization on DNNs.
FakeQuantize with Learned Step Size (LSQ+) as observer in PyTorch (see the generic FakeQuantize QAT sketch after this list)
Quantization-aware training with spiking neural networks
Code for paper 'Multi-Component Optimization and Efficient Deployment of Neural-Networks on Resource-Constrained IoT Hardware'
Transformer quantization and binarization exploration
One Bit at a Time: Impact of Quantisation on Neural Machine Translation
Quantization for Object Detection in TensorFlow 2.x
Low-Precision Neural Networks for Classification on PYNQ with FINN
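The L2Grad/VarianceNorm entry above does not spell out the exact regularizer formulas, so the following is only a minimal sketch of the general pattern it describes: an auxiliary weight-distribution penalty added to the task loss during training. The weight_variance_penalty helper, the toy model, and the strength lam are hypothetical placeholders, not that repository's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a weight-distribution regularizer such as
# VarianceNorm: here we simply sum the variance of each layer's weights.
# The real L2Grad/VarianceNorm definitions are not given in the snippet above.
def weight_variance_penalty(model: nn.Module) -> torch.Tensor:
    penalty = torch.zeros(())
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            penalty = penalty + module.weight.var()
    return penalty

# Toy model and optimizer, purely for illustration.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
lam = 1e-3  # regularization strength (hypothetical value)

for _ in range(10):
    x = torch.randn(8, 16)
    y = torch.randint(0, 10, (8,))
    task_loss = nn.functional.cross_entropy(model(x), y)
    # Total loss = task loss + weight-distribution penalty.
    loss = task_loss + lam * weight_variance_penalty(model)
    opt.zero_grad()
    loss.backward()
    opt.step()
```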
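For the FakeQuantize/observer and QAT entries above, here is a minimal eager-mode quantization-aware-training sketch using PyTorch's stock FakeQuantize and moving-average observers. An LSQ+-style setup would swap these observers for ones that learn the step size (scale) and offset by gradient descent; TinyNet and every hyperparameter below are illustrative assumptions, not code from any of the listed repositories.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    DeQuantStub, FakeQuantize, MovingAverageMinMaxObserver,
    MovingAveragePerChannelMinMaxObserver, QConfig, QuantStub,
    convert, prepare_qat,
)

# Toy model with quant/dequant stubs required by eager-mode QAT.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.fc1 = nn.Linear(16, 32)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(32, 10)
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)

# FakeQuantize modules simulate int8 rounding/clamping during training.
# An LSQ+ variant would replace these observers with learned step sizes.
qat_qconfig = QConfig(
    activation=FakeQuantize.with_args(
        observer=MovingAverageMinMaxObserver,
        quant_min=0, quant_max=255,
        dtype=torch.quint8, qscheme=torch.per_tensor_affine),
    weight=FakeQuantize.with_args(
        observer=MovingAveragePerChannelMinMaxObserver,
        quant_min=-128, quant_max=127,
        dtype=torch.qint8, qscheme=torch.per_channel_symmetric),
)

model = TinyNet()
model.qconfig = qat_qconfig
model.train()
model = prepare_qat(model)  # inserts FakeQuantize modules

# Short training loop on random data, just to exercise the fake quantizers.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(10):
    x = torch.randn(8, 16)
    y = torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, convert the fake-quantized model to real int8 modules.
model.eval()
quantized = convert(model)
print(quantized)
```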