Replies: 1 comment
-
Last I checked it was problematic to export native PyTorch quantization to ONNX at all, but if there is a pathway to exporting it to …
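For reference, the export path being discussed looks roughly like the sketch below (the toy model, filename, and opset are made up for illustration, and dynamic quantization is just the simplest "native" flow to show). Whether the export succeeds, and which ops end up in the graph, has historically depended on the PyTorch version and quantization backend:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

# Made-up toy model, quantized with PyTorch's built-in dynamic quantization.
float_model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4)).eval()
qmodel = tq.quantize_dynamic(float_model, {nn.Linear}, dtype=torch.qint8)

# Direct ONNX export of the quantized module. Depending on the PyTorch version
# this can fail outright or emit backend-specific quantized ops rather than the
# standard QuantizeLinear/DequantizeLinear (QDQ) form most ONNX tooling expects.
torch.onnx.export(qmodel, torch.randn(1, 16), "quantized_toy.onnx", opset_version=13)
```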
-
I was wondering if anyone has looked into supporting conversion to QONNX from "native" PyTorch quantized models: https://pytorch.org/docs/stable/quantization.html
We are working on a project where this would come in handy, and I think more people may look to use this kind of quantization in general.
@maltanar @jicampos @nhanvtran
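For context, here is a minimal sketch of the eager-mode post-training static quantization flow from the linked docs (the module, shapes, and calibration data are made up). The converted model holds int8 weights plus scale/zero-point metadata, which is roughly what a conversion to QONNX would need to map onto QONNX Quant nodes:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class ToyNet(nn.Module):
    """Made-up example module; QuantStub/DeQuantStub mark where tensors
    enter and leave the quantized region in the eager-mode flow."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = ToyNet().eval()
model.qconfig = tq.get_default_qconfig("fbgemm")
tq.prepare(model, inplace=True)

# Calibrate the observers with representative data (random here, just for the sketch).
model(torch.randn(1, 3, 32, 32))

# Swap float modules for quantized ones; the result is a "native" PyTorch
# quantized model with int8 weights and per-tensor scale/zero-point.
tq.convert(model, inplace=True)
```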