How to train 4-bit MobileNet V2 to 72.0 (as in the paper)? #3

Open · talenz opened this issue Jan 25, 2023 · 2 comments

@talenz commented Jan 25, 2023

Great job! In the paper, the top-1 accuracy of 4-bit MobileNetV2 is 72.0, which is beyond the SOTA. Is there a way to reproduce this result?

@liujingcs (Collaborator) commented
Thanks for your interest. As mentioned in the supplementary material, we fine-tune the quantized model with additional learnable layer-wise offsets for the activations, together with knowledge distillation, following [1][2].
References:
[1] Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation. CVPR 2022.
[2] PROFIT: A Novel Training Method for sub-4-bit MobileNet Models. ECCV 2020.
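
For readers trying to reproduce this, here is a minimal PyTorch sketch of the two ingredients mentioned above: a uniform activation quantizer with a learnable layer-wise offset, and a soft-target distillation loss. All names below are illustrative rather than this repo's API, and details such as scale initialization, gradient scaling, and the exact KD formulation follow [1][2] in the actual paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OffsetActQuantizer(nn.Module):
    """4-bit uniform activation quantizer with a learnable scale and a
    learnable layer-wise offset, trained via the straight-through
    estimator. Illustrative sketch only; not the paper's exact code."""

    def __init__(self, n_bits: int = 4):
        super().__init__()
        self.qmin, self.qmax = 0, 2 ** n_bits - 1
        self.scale = nn.Parameter(torch.tensor(1.0))   # learnable step size
        self.offset = nn.Parameter(torch.tensor(0.0))  # learnable layer-wise offset

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shift by the learnable offset, scale, and clip to the 4-bit range.
        q = torch.clamp((x - self.offset) / self.scale, self.qmin, self.qmax)
        # Straight-through estimator: round in the forward pass,
        # identity gradient in the backward pass.
        q = q + (q.round() - q).detach()
        # De-quantize back to the original range.
        return q * self.scale + self.offset


def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 2.0) -> torch.Tensor:
    """Soft-target knowledge distillation loss between the quantized
    student and a full-precision teacher."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```

During fine-tuning, the total objective would presumably combine the usual cross-entropy on the labels with the distillation term, e.g. `loss = F.cross_entropy(student_logits, targets) + kd_loss(student_logits, teacher_logits)`; see [1][2] for the formulations actually used.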

@talenz (Author) commented Jan 26, 2023

Will you provide the script to reproduce this result?
