diff --git a/notebooks/action-recognition-webcam/README.md b/notebooks/action-recognition-webcam/README.md
index fca53961ff0..12feda5c0ee 100644
--- a/notebooks/action-recognition-webcam/README.md
+++ b/notebooks/action-recognition-webcam/README.md
@@ -10,7 +10,7 @@ Human action recognition finds actions over time in a video. The list of actions
 
 ## Notebook Contents
 
-This notebook demonstrates live human action recognition with OpenVINO, using the [Action Recognition Models](https://docs.openvino.ai/2024/omz_models_group_intel.html#action-recognition-models) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo), specifically the Encoder and Decoder from [action-recognition-0001](https://docs.openvino.ai/2024/omz_models_model_action_recognition_0001.html). Both models create a sequence to sequence (`"seq2seq"`)[1](#f1) system to identify the human activities for [Kinetics-400 dataset](https://arxiv.org/pdf/1705.06950.pdf). The models use the Video Transformer approach with ResNet34 encoder[2](#f2). The notebook shows how to create the following pipeline:
+This notebook demonstrates live human action recognition with OpenVINO, using the [Action Recognition Models](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/index.md#action-recognition-models) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo), specifically the Encoder and Decoder from [action-recognition-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/action-recognition-0001/README.md). Both models create a sequence-to-sequence (`"seq2seq"`)[1](#f1) system to identify the human activities for the [Kinetics-400 dataset](https://arxiv.org/pdf/1705.06950.pdf). The models use the Video Transformer approach with a ResNet34 encoder[2](#f2). The notebook shows how to create the following pipeline:

@@ -36,6 +36,6 @@ For details, please refer to [Installation Guide](../../README.md).
 
 * [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
 * [Model Conversion API](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html)
-* [Action Recognition Demo (OpenVINO - No notebooks)](https://docs.openvino.ai/2024/omz_demos_action_recognition_demo_python.html)
+* [Action Recognition Demo (OpenVINO - No notebooks)](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/action_recognition_demo/python/README.md)
diff --git a/notebooks/explainable-ai-1-basic/explainable-ai-1-basic.ipynb b/notebooks/explainable-ai-1-basic/explainable-ai-1-basic.ipynb
index 62ffc85506d..b78961069af 100644
--- a/notebooks/explainable-ai-1-basic/explainable-ai-1-basic.ipynb
+++ b/notebooks/explainable-ai-1-basic/explainable-ai-1-basic.ipynb
@@ -24,7 +24,7 @@
     "\n",
     "![](https://github.com/openvinotoolkit/openvino_xai/assets/17028475/ccb67c0b-c58e-4beb-889f-af0aff21cb66)\n",
     "\n",
-    "A pre-trained [MobileNetV3 model](https://docs.openvino.ai/2024/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial.\n",
+    "A pre-trained [MobileNetV3 model](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v3-small-1.0-224-tf/README.md) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial.\n",
     "\n",
     "\n",
     "#### Table of contents:\n",
diff --git a/notebooks/handwritten-ocr/README.md b/notebooks/handwritten-ocr/README.md
index 5ea657ecd17..2e8b1353f9a 100644
--- a/notebooks/handwritten-ocr/README.md
+++ b/notebooks/handwritten-ocr/README.md
@@ -11,7 +11,7 @@ This tutorial demonstrates optical character recognition for handwritten Chinese
 
 ## Notebook Contents
 
-This notebook provides a tutorial on how to use OCR for handwritten Japanese and simplified Chinese. Models used for this notebook are [`handwritten-japanese-recognition-0001`](https://docs.openvino.ai/2024/omz_models_model_handwritten_japanese_recognition_0001.html) and [`handwritten-simplified-chinese-0001`](https://docs.openvino.ai/2024/omz_models_model_handwritten_simplified_chinese_recognition_0001.html). To decode model output to readable text [`kondate_nakayosi`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/kondate_nakayosi.txt) and [`scut_ept`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/scut_ept.txt) charlists are used. Both models are available from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/).
+This notebook provides a tutorial on how to use OCR for handwritten Japanese and simplified Chinese. The models used in this notebook are [`handwritten-japanese-recognition-0001`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/handwritten-japanese-recognition-0001/README.md) and [`handwritten-simplified-chinese-0001`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/handwritten-simplified-chinese-recognition-0001/README.md). To decode model output to readable text, the [`kondate_nakayosi`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/kondate_nakayosi.txt) and [`scut_ept`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/scut_ept.txt) charlists are used. Both models are available from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/).
 
 ## Installation Instructions
 
diff --git a/notebooks/handwritten-ocr/handwritten-ocr.ipynb b/notebooks/handwritten-ocr/handwritten-ocr.ipynb
index 81b6e69d232..dc269e9f050 100644
--- a/notebooks/handwritten-ocr/handwritten-ocr.ipynb
+++ b/notebooks/handwritten-ocr/handwritten-ocr.ipynb
@@ -10,7 +10,7 @@
     "\n",
     "In this tutorial, we perform optical character recognition (OCR) for handwritten Chinese (simplified) and Japanese. An OCR tutorial using the Latin alphabet is available in [notebook 208](../optical-character-recognition/optical-character-recognition.ipynb). This model is capable of processing only one line of symbols at a time.\n",
     "\n",
-    "The models used in this notebook are [`handwritten-japanese-recognition-0001`](https://docs.openvino.ai/2024/omz_models_model_handwritten_japanese_recognition_0001.html) and [`handwritten-simplified-chinese-0001`](https://docs.openvino.ai/2024/omz_models_model_handwritten_simplified_chinese_recognition_0001.html). To decode model outputs as readable text [`kondate_nakayosi`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/kondate_nakayosi.txt) and [`scut_ept`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/scut_ept.txt) charlists are used. Both models are available on [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/).\n",
+    "The models used in this notebook are [`handwritten-japanese-recognition-0001`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/handwritten-japanese-recognition-0001/README.md) and [`handwritten-simplified-chinese-0001`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/handwritten-simplified-chinese-recognition-0001/README.md). To decode model outputs as readable text, the [`kondate_nakayosi`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/kondate_nakayosi.txt) and [`scut_ept`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/scut_ept.txt) charlists are used. Both models are available on [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/).\n",
     "\n",
     "\n",
     "#### Table of contents:\n",
diff --git a/notebooks/hello-detection/README.md b/notebooks/hello-detection/README.md
index dc528e2d619..8ae9ae04aab 100644
--- a/notebooks/hello-detection/README.md
+++ b/notebooks/hello-detection/README.md
@@ -12,7 +12,7 @@ This notebook demonstrates how to do inference with detection model.
 
 ## Notebook Contents
 
-In this basic introduction to detection with OpenVINO, the [horizontal-text-detection-0001](https://docs.openvino.ai/2024/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects text in images and returns blob of data in shape of `[100, 5]`. For each detection, a description is in the `[x_min, y_min, x_max, y_max, conf]` format.
+In this basic introduction to detection with OpenVINO, the [horizontal-text-detection-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/horizontal-text-detection-0001/README.md) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects text in images and returns a blob of data in the shape `[100, 5]`. For each detection, a description is in the `[x_min, y_min, x_max, y_max, conf]` format.
 
 ## Installation Instructions
 
diff --git a/notebooks/hello-detection/hello-detection.ipynb b/notebooks/hello-detection/hello-detection.ipynb
index 27e5bc54453..013582403c3 100644
--- a/notebooks/hello-detection/hello-detection.ipynb
+++ b/notebooks/hello-detection/hello-detection.ipynb
@@ -10,7 +10,7 @@
     "\n",
     "A very basic introduction to using object detection models with OpenVINO™.\n",
     "\n",
-    "The [horizontal-text-detection-0001](https://docs.openvino.ai/2024/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n",
+    "The [horizontal-text-detection-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/horizontal-text-detection-0001/README.md) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n",
     "`(x_min, y_min)` are the coordinates of the top left bounding box corner, `(x_max, y_max)` are the coordinates of the bottom right bounding box corner and `conf` is the confidence for the predicted class.\n",
     "\n",
     "\n",
diff --git a/notebooks/hello-segmentation/README.md b/notebooks/hello-segmentation/README.md
index 30fb7981048..2c6a99c6b2a 100644
--- a/notebooks/hello-segmentation/README.md
+++ b/notebooks/hello-segmentation/README.md
@@ -11,7 +11,7 @@ This notebook demonstrates how to do inference with segmentation model.
 
 ## Notebook Contents
 
-A very basic introduction to segmentation with OpenVINO. This notebook uses the [`road-segmentation-adas-0001`](https://docs.openvino.ai/2024/omz_models_model_road_segmentation_adas_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image downloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
+A very basic introduction to segmentation with OpenVINO. This notebook uses the [`road-segmentation-adas-0001`](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/road-segmentation-adas-0001/README.md) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image downloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
 
 ## Installation Instructions
 
diff --git a/notebooks/hello-segmentation/hello-segmentation.ipynb b/notebooks/hello-segmentation/hello-segmentation.ipynb
index b4145f85dea..88c6e6e35ae 100644
--- a/notebooks/hello-segmentation/hello-segmentation.ipynb
+++ b/notebooks/hello-segmentation/hello-segmentation.ipynb
@@ -10,7 +10,7 @@
     "\n",
     "A very basic introduction to using segmentation models with OpenVINO™.\n",
     "\n",
-    "In this tutorial, a pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/2024/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.\n",
+    "In this tutorial, a pre-trained [road-segmentation-adas-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/road-segmentation-adas-0001/README.md) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.\n",
     "\n",
     "\n",
     "#### Table of contents:\n",
diff --git a/notebooks/hello-world/hello-world.ipynb b/notebooks/hello-world/hello-world.ipynb
index df6dd605b9f..2535f0d965b 100644
--- a/notebooks/hello-world/hello-world.ipynb
+++ b/notebooks/hello-world/hello-world.ipynb
@@ -10,7 +10,7 @@
     "\n",
     "This basic introduction to OpenVINO™ shows how to do inference with an image classification model.\n",
     "\n",
-    "A pre-trained [MobileNetV3 model](https://docs.openvino.ai/2024/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../tensorflow-classification-to-openvino/tensorflow-classification-to-openvino.ipynb) tutorial.\n",
+    "A pre-trained [MobileNetV3 model](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v3-small-1.0-224-tf/README.md) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../tensorflow-classification-to-openvino/tensorflow-classification-to-openvino.ipynb) tutorial.\n",
     "\n",
     "\n",
     "#### Table of contents:\n",
diff --git a/notebooks/optical-character-recognition/README.md b/notebooks/optical-character-recognition/README.md
index 8437b202ae6..069efbc680d 100644
--- a/notebooks/optical-character-recognition/README.md
+++ b/notebooks/optical-character-recognition/README.md
@@ -10,7 +10,7 @@ In this tutorial optical character recognition is presented. This notebook is a
 
 ## Notebook Contents
 
-In addition to previously used [horizontal-text-detection-0001](https://docs.openvino.ai/2024/omz_models_model_horizontal_text_detection_0001.html) model, a[text-recognition-resnet](https://docs.openvino.ai/2024/omz_models_model_text_recognition_resnet_fc.html) model is used. This model reads tight aligned crop with detected text converted to a grayscale image and returns tensor that is easily decoded to predicted text. Both models are from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/).
+In addition to the previously used [horizontal-text-detection-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/horizontal-text-detection-0001/README.md) model, a [text-recognition-resnet](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/text-recognition-resnet-fc/README.md) model is used. This model reads a tightly aligned crop containing the detected text, converted to a grayscale image, and returns a tensor that is easily decoded to predicted text. Both models are from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/).
 
 ## Installation Instructions
 
diff --git a/notebooks/optical-character-recognition/optical-character-recognition.ipynb b/notebooks/optical-character-recognition/optical-character-recognition.ipynb
index 275ebf8266d..620d3f5d62a 100644
--- a/notebooks/optical-character-recognition/optical-character-recognition.ipynb
+++ b/notebooks/optical-character-recognition/optical-character-recognition.ipynb
@@ -10,7 +10,7 @@
     "\n",
     "This tutorial demonstrates how to perform optical character recognition (OCR) with OpenVINO models. It is a continuation of the [hello-detection](../hello-detection/hello-detection.ipynb) tutorial, which shows only text detection.\n",
     "\n",
-    "The [horizontal-text-detection-0001](https://docs.openvino.ai/2024/omz_models_model_horizontal_text_detection_0001.html) and [text-recognition-resnet](https://docs.openvino.ai/2024/omz_models_model_text_recognition_resnet_fc.html) models are used together for text detection and then text recognition.\n",
+    "The [horizontal-text-detection-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/horizontal-text-detection-0001/README.md) and [text-recognition-resnet](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/text-recognition-resnet-fc/README.md) models are used together for text detection and then text recognition.\n",
     "\n",
     "In this tutorial, Open Model Zoo tools including Model Downloader, Model Converter and Info Dumper are used to download and convert the models from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo). For more information, refer to the [model-tools](../model-tools/model-tools.ipynb) tutorial.\n",
     "\n",
diff --git a/notebooks/person-tracking-webcam/README.md b/notebooks/person-tracking-webcam/README.md
index 56eb5b6e02d..8a3b6b38de9 100644
--- a/notebooks/person-tracking-webcam/README.md
+++ b/notebooks/person-tracking-webcam/README.md
@@ -14,7 +14,7 @@ This notebook shows a person tracking scenario: it reads frames from an input vi
 
 ## Notebook Contents
 
 This tutorial uses the [Deep SORT](https://arxiv.org/abs/1703.07402) algorithm to perform object tracking.
-[person detection model]( https://docs.openvino.ai/2024/omz_models_model_person_detection_0202.html) is deployed to detect the person in each frame of the video, and [reidentification model]( https://docs.openvino.ai/2024/omz_models_model_person_reidentification_retail_0287.html) is used to output embedding vector to match a pair of images of a person by the cosine distance.
+A [person detection model](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-detection-0202/README.md) is deployed to detect the person in each frame of the video, and a [reidentification model](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-reidentification-retail-0287/README.md) is used to output an embedding vector to match a pair of images of a person by the cosine distance.
 
 ## Installation Instructions
 
@@ -26,6 +26,6 @@ For details, please refer to [Installation Guide](../../README.md).
 
 * [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
 * [Model Conversion API](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-to-ir.html)
-* [Pedestrian Tracker C++ Demo](https://docs.openvino.ai/2024/omz_demos_pedestrian_tracker_demo_cpp.html)
+* [Pedestrian Tracker C++ Demo](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/pedestrian_tracker_demo/cpp/README.md)
diff --git a/notebooks/person-tracking-webcam/person-tracking.ipynb b/notebooks/person-tracking-webcam/person-tracking.ipynb
index 72f09450ef2..d083649bef7 100644
--- a/notebooks/person-tracking-webcam/person-tracking.ipynb
+++ b/notebooks/person-tracking-webcam/person-tracking.ipynb
@@ -164,10 +164,10 @@
     "\n",
     "> **NOTE**: Using a model outside the list can require different pre- and post-processing.\n",
     "\n",
-    "In this case, [person detection model]( https://docs.openvino.ai/2024/omz_models_model_person_detection_0202.html) is deployed to detect the person in each frame of the video, and [reidentification model]( https://docs.openvino.ai/2024/omz_models_model_person_reidentification_retail_0287.html) is used to output embedding vector to match a pair of images of a person by the cosine distance.\n",
+    "In this case, a [person detection model](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-detection-0202/README.md) is deployed to detect the person in each frame of the video, and a [reidentification model](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/person-reidentification-retail-0287/README.md) is used to output an embedding vector to match a pair of images of a person by the cosine distance.\n",
     "\n",
     "\n",
-    "If you want to download another model (`person-detection-xxx` from [Object Detection Models list](https://docs.openvino.ai/2024/omz_models_group_intel.html#object-detection-models), `person-reidentification-retail-xxx` from [Reidentification Models list](https://docs.openvino.ai/2024/omz_models_group_intel.html#reidentification-models)), replace the name of the model in the code below."
+    "If you want to download another model (`person-detection-xxx` from [Object Detection Models list](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/index.md#object-detection-models), `person-reidentification-retail-xxx` from [Reidentification Models list](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/index.md#reidentification-models)), replace the name of the model in the code below."
    ]
   },
   {
diff --git a/notebooks/style-transfer-webcam/README.md b/notebooks/style-transfer-webcam/README.md
index 8ec9163c324..20474a4346e 100644
--- a/notebooks/style-transfer-webcam/README.md
+++ b/notebooks/style-transfer-webcam/README.md
@@ -22,5 +22,5 @@ For details, please refer to [Installation Guide](../../README.md).
 
 * [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
 * [Model Conversion API](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html)
-* [Image Processing Demo](https://docs.openvino.ai/2024/omz_demos_image_processing_demo_cpp.html)
+* [Image Processing Demo](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/image_processing_demo/cpp/README.md)
diff --git a/notebooks/tensorflow-classification-to-openvino/tensorflow-classification-to-openvino.ipynb b/notebooks/tensorflow-classification-to-openvino/tensorflow-classification-to-openvino.ipynb
index 868776eae2c..d7d5ec521eb 100644
--- a/notebooks/tensorflow-classification-to-openvino/tensorflow-classification-to-openvino.ipynb
+++ b/notebooks/tensorflow-classification-to-openvino/tensorflow-classification-to-openvino.ipynb
@@ -9,7 +9,7 @@
    "source": [
     "# Convert a TensorFlow Model to OpenVINO™\n",
     "\n",
-    "This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/2024/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets.html) (OpenVINO IR) format, using [Model Conversion API](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/2024/openvino-workflow/running-inference.html) and do inference with a sample image. \n",
+    "This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v3-small-1.0-224-tf/README.md) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets.html) (OpenVINO IR) format, using [Model Conversion API](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/2024/openvino-workflow/running-inference.html) and do inference with a sample image. \n",
     "\n",
     "\n",
     "#### Table of contents:\n",
diff --git a/notebooks/vision-monodepth/vision-monodepth.ipynb b/notebooks/vision-monodepth/vision-monodepth.ipynb
index 7aaad88f919..5710746ca09 100644
--- a/notebooks/vision-monodepth/vision-monodepth.ipynb
+++ b/notebooks/vision-monodepth/vision-monodepth.ipynb
@@ -13,7 +13,7 @@
    "source": [
     "# Monodepth Estimation with OpenVINO\n",
     "\n",
-    "This tutorial demonstrates Monocular Depth Estimation with MidasNet in OpenVINO. Model information can be found [here](https://docs.openvino.ai/2024/omz_models_model_midasnet.html).\n",
+    "This tutorial demonstrates Monocular Depth Estimation with MidasNet in OpenVINO. Model information can be found [here](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/midasnet/README.md).\n",
     "\n",
     "![monodepth](https://user-images.githubusercontent.com/36741649/127173017-a0bbcf75-db24-4d2c-81b9-616e04ab7cd9.gif)\n",
     "\n",