From 20002a82b01bdb358eb36fc7f0388aa722b2e7ba Mon Sep 17 00:00:00 2001
From: Fang Xu
Date: Tue, 10 Sep 2024 18:16:42 +0800
Subject: [PATCH] Update
 docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst

Co-authored-by: Tatiana Savina
---
 .../optimizing-latency/model-caching-overview.rst          | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst
index 13e1365b6e9732..41c05026bce567 100644
--- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst
+++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst
@@ -161,5 +161,4 @@ When model caching is enabled, the model topology can be encrypted when saving t
 
 .. important::
 
-   Currently, this property is only supported by CPU plugin. For other HW plugins, setting this property will perform
-   normally but do not encrypt/decrypt the model topology in cache.
+   Currently, this property is supported only by the CPU plugin. For other HW plugins, setting this property will not encrypt/decrypt the model topology in cache and will not affect performance.