Thanks for your interest. As mentioned in the supplementary material, we fine-tune the quantized model with additional learnable layer-wise offsets for the activations, combined with knowledge distillation, following [1][2] (a rough sketch of this recipe is given below, after the references).
References:
[1] Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation. CVPR 2022.
[2] PROFIT: A Novel Training Method for sub-4-bit MobileNet Models. ECCV 2020.
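For concreteness, here is a minimal PyTorch sketch of what such a fine-tuning step could look like: a learnable layer-wise offset added to the activations before uniform quantization (with a straight-through estimator for the rounding), plus a knowledge-distillation loss against the full-precision teacher. All names (`OffsetActQuant`, `kd_loss`, `finetune_step`) and hyperparameters are illustrative assumptions, not this repository's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OffsetActQuant(nn.Module):
    """Uniform activation quantizer with a learnable layer-wise offset."""

    def __init__(self, n_bits: int = 4):
        super().__init__()
        self.n_levels = 2 ** n_bits - 1
        self.offset = nn.Parameter(torch.zeros(1))  # one offset per layer, learned
        self.scale = nn.Parameter(torch.ones(1))    # learnable clipping scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shift activations by the learnable offset, normalize, and clip.
        x = (x + self.offset) / self.scale
        x = torch.clamp(x, 0.0, 1.0)
        # Straight-through estimator: round in the forward pass,
        # identity gradient in the backward pass.
        x_q = torch.round(x * self.n_levels) / self.n_levels
        x = x + (x_q - x).detach()
        # Map back to the original activation range.
        return x * self.scale - self.offset


def kd_loss(student_logits, teacher_logits, T: float = 4.0):
    """Hinton-style soft-target distillation loss (KL divergence)."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


def finetune_step(student, teacher, images, labels, optimizer, alpha=0.5):
    """One fine-tuning step: cross-entropy plus distillation from the FP teacher."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(images)
    s_logits = student(images)
    loss = (1 - alpha) * F.cross_entropy(s_logits, labels) \
        + alpha * kd_loss(s_logits, t_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the offsets and scales are optimized jointly with the quantized weights; the exact schedule, loss weighting, and progressive-freezing tricks from [2] would come on top of this.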
Great job! In this paper, the top-1 accuracy of the 4-bit MobileNetV2 is 72.0%, which is beyond the current SOTA. Is there a way to reproduce this result?