
OpenVINO™ Documentation
=======================

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.

* Boost deep learning performance in computer vision, automatic speech recognition, natural language processing, and other common tasks
* Use models trained with popular frameworks like TensorFlow, PyTorch, and more
* Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud

OpenVINO allows you to process models built with Caffe, Keras, MXNet, TensorFlow, ONNX, and PyTorch. They can be easily optimized and deployed on devices running Windows, Linux, or macOS.
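As a minimal sketch of that conversion step, the snippet below uses the Model Optimizer Python API (`openvino.tools.mo.convert_model`, available since the 2022.1 release and requiring the `openvino-dev` package) to turn a framework model into OpenVINO IR; the model path and output file name are placeholders.

```python
def convert_to_ir(input_model: str, output_xml: str = "model.xml"):
    """Sketch: convert a TensorFlow/ONNX/PaddlePaddle model to OpenVINO IR.

    Assumes the `openvino-dev` package (2022.1+) is installed; imports are
    deferred so the function can be defined without OpenVINO present.
    """
    from openvino.tools.mo import convert_model   # Model Optimizer Python API
    from openvino.runtime import serialize        # writes IR to disk

    ov_model = convert_model(input_model)         # returns an openvino Model
    serialize(ov_model, output_xml)               # emits model.xml + model.bin
    return ov_model
```

The resulting `model.xml`/`model.bin` pair can then be loaded directly by OpenVINO Runtime on any supported device.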

Check the full range of supported hardware on the Supported Devices page and see how it performs on our Performance Benchmarks page.

OpenVINO Workflow
-----------------

Want to know more?
------------------

Learn how to download, install, and configure OpenVINO.

Browse through over 200 publicly available neural networks and pick the right one for your solution.

Learn how to convert your model and optimize it for use with OpenVINO.

Learn how to use OpenVINO based on our training material.

Try OpenVINO using ready-made applications explaining various use cases.

Learn about DL Workbench, the alternative, web-based version of OpenVINO. Installation of the DL Workbench container is required.

Learn about OpenVINO's inference mechanism, which executes IR, ONNX, and PaddlePaddle models on target devices.
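The basic Runtime flow can be sketched as follows, assuming the `openvino` package (2022.1+ API) is installed and an IR, ONNX, or PaddlePaddle model file is available; the model path, device name, and random input are placeholders.

```python
def run_inference(model_path: str, device: str = "CPU"):
    """Sketch: load a model (IR/.onnx/.pdmodel) and run one synchronous
    inference on random data. Imports are deferred so the function can be
    defined without OpenVINO installed."""
    import numpy as np
    from openvino.runtime import Core

    core = Core()                                  # entry point to the runtime
    model = core.read_model(model_path)            # reads IR, ONNX, or Paddle
    compiled = core.compile_model(model, device)   # device-specific compile

    # Random input matching the model's first input shape (static shapes only)
    data = np.random.rand(*compiled.input(0).shape).astype(np.float32)
    result = compiled([data])                      # synchronous inference
    return result[compiled.output(0)]              # numpy array of outputs
```

Swap `device` for `"GPU"` or another plugin name to target different hardware without changing the rest of the code.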

Model-level (e.g. quantization) and runtime-level (i.e. application-level) optimizations to make your inference as fast as possible.
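One example of a runtime-level optimization is a performance hint passed at compile time; the sketch below requests the `THROUGHPUT` hint, assuming the `openvino` package (2022.1+ API) is installed and `model_path` points to an existing model.

```python
def compile_for_throughput(model_path: str):
    """Sketch: compile a model with the THROUGHPUT performance hint, letting
    the CPU plugin pick stream and thread counts for maximum throughput
    rather than minimal latency. Import is deferred so the function can be
    defined without OpenVINO installed."""
    from openvino.runtime import Core

    core = Core()
    model = core.read_model(model_path)
    # Runtime-level optimization: high-level hint instead of manual tuning
    return core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
```

Model-level optimizations such as quantization are applied before compilation, with tools like POT/NNCF, and complement this runtime-level tuning.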

View performance benchmark results for various models on Intel platforms.

.. toctree::
   :maxdepth: 2
   :hidden:

   pages/get-started-guide
   pages/documentation
   tutorials
   api/api_reference
   model_zoo
   pages/resources