From e01cc5add67e40cadb7ec80be30d40da1977b228 Mon Sep 17 00:00:00 2001
From: Alexander Kozlov
Date: Tue, 10 Sep 2024 11:30:58 +0400
Subject: [PATCH] Update
 docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst

Co-authored-by: Tatiana Savina
---
 .../model-optimization-guide/weight-compression.rst         | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst b/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst
index fb9d196f6f25fe..62350d04ace4ec 100644
--- a/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst
+++ b/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst
@@ -226,9 +226,9 @@ For data-aware weight compression refer to the following
 
 .. note::
 
-   Some of the methods can be stacked one on top of another to achieve a better
-   accuracy-performance trade-off after weight quantization. For example, Scale Estimation
-   method can be applied along with AWQ and mixed-precision quantization (``ratio`` parameter).
+   Some methods can be stacked on top of one another to achieve a better
+   accuracy-performance trade-off after weight quantization. For example, the Scale Estimation
+   method can be applied along with AWQ and mixed-precision quantization (the ``ratio`` parameter).
 
 The example below shows data-free 4-bit weight quantization applied on top of OpenVINO IR.
 Before trying the example, make sure Optimum Intel
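The note touched by this patch describes mixed-precision weight quantization, where a ``ratio`` parameter controls what fraction of weights get the lower 4-bit precision while the rest stay at 8-bit. The following is a minimal, self-contained sketch of that idea in plain Python; the helper names and the use of layer order as a stand-in for a sensitivity ranking are illustrative assumptions, not the NNCF API.

```python
# Sketch of mixed-precision weight quantization driven by a "ratio" knob,
# analogous in spirit to the ``ratio`` parameter in the patched note:
# ratio=0.75 puts ~75% of layers in INT4 and leaves the rest in INT8.
# All function names here are hypothetical, for illustration only.

def quantize_symmetric(weights, num_bits):
    """Symmetric per-tensor quantization: w ~= scale * q, q in [-qmax, qmax]."""
    qmax = 2 ** (num_bits - 1) - 1            # 7 for INT4, 127 for INT8
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax
    q = [round(w / scale) for w in weights]   # integer codes
    return scale, q

def mixed_precision_compress(layers, ratio):
    """Assign the first `ratio` fraction of layers to INT4, the rest to INT8.

    Real implementations rank layers by a sensitivity metric; here the
    given order of `layers` stands in for that ranking (an assumption).
    """
    n_int4 = round(len(layers) * ratio)
    plan = {}
    for i, (name, weights) in enumerate(layers):
        bits = 4 if i < n_int4 else 8
        scale, q = quantize_symmetric(weights, bits)
        plan[name] = (bits, scale, q)
    return plan

if __name__ == "__main__":
    layers = [
        ("fc1", [0.5, -1.0, 0.25, 0.75]),
        ("fc2", [2.0, -0.5, 1.5, -2.0]),
        ("fc3", [0.1, 0.2, -0.3, 0.05]),
        ("fc4", [1.0, -1.0, 0.5, -0.25]),
    ]
    plan = mixed_precision_compress(layers, ratio=0.75)
    print({name: bits for name, (bits, _, _) in plan.items()})
```

With ``ratio=0.75`` and four layers, three layers are assigned 4-bit codes and one keeps 8-bit; lowering ``ratio`` trades compression for accuracy by keeping more layers at the higher precision.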