Commit

Update docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
xufang-lisa and tsavina authored Sep 10, 2024
1 parent 952aa3e commit 20002a8
Showing 1 changed file with 1 addition and 2 deletions.
@@ -161,5 +161,4 @@ When model caching is enabled, the model topology can be encrypted when saving t

.. important::

-   Currently, this property is only supported by CPU plugin. For other HW plugins, setting this property will perform
-   normally but do not encrypt/decrypt the model topology in cache.
+   Currently, this property is supported only by the CPU plugin. For other HW plugins, setting this property will not encrypt/decrypt the model topology in cache and will not affect performance.

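For context, the note above concerns the cache encryption option described in the model caching article. Below is a minimal sketch of how that option is typically enabled from C++, assuming the ``ov::cache_encryption_callbacks`` property and ``ov::EncryptionCallbacks`` struct covered by the article; the model path, cache directory, and XOR codec are placeholders for illustration only.

.. code-block:: cpp

    #include <openvino/openvino.hpp>
    #include <string>

    // Toy "encryption": XOR every byte with a fixed key. Illustrative only;
    // a real deployment would use a proper cipher.
    static std::string codec_xor(const std::string& source) {
        const char key = 0x5A;
        std::string result = source;
        for (char& c : result) {
            c ^= key;
        }
        return result;
    }

    int main() {
        ov::Core core;

        // Callbacks used to encrypt the cached blob on save and decrypt it on load.
        ov::EncryptionCallbacks encryption_callbacks;
        encryption_callbacks.encrypt = codec_xor;
        encryption_callbacks.decrypt = codec_xor;  // XOR is its own inverse

        // Per the note in this change, cache encryption currently takes effect only
        // with the CPU plugin; other plugins ignore the callbacks for the cache.
        auto compiled_model = core.compile_model("model.xml",
                                                 "CPU",
                                                 ov::cache_dir("model_cache"),
                                                 ov::cache_encryption_callbacks(encryption_callbacks));
        return 0;
    }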