diff --git a/.gitignore b/.gitignore index 1cd70d1f9..04ea46037 100644 --- a/.gitignore +++ b/.gitignore @@ -3,6 +3,7 @@ *.pyc .DS_Store **.log +pretrained/ .pytest* .venv* .coverage diff --git a/README.md b/README.md index d417c1dc6..b1bef9ef4 100644 --- a/README.md +++ b/README.md @@ -1,14 +1,16 @@ -# Model Zoo for Intel® Architecture +# Intel® AI Reference Models This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors and Intel® Data Center GPUs. -Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html). +Containers for running the workloads can be found at the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html). -## Purpose of the Model Zoo +[Intel® AI Reference Models in a Jupyter Notebook](/notebooks/README.md) is also available for the [listed workloads](/notebooks/README.md#supported-models) - - Demonstrate the AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware - - Show how to efficiently execute, train, and deploy Intel-optimized models - - Make it easy to get started running Intel-optimized models on Intel hardware in the cloud or on bare metal +## Purpose of Intel® AI Reference Models + +Intel optimizes popular deep learning frameworks such as TensorFlow* and PyTorch* by contributing to the upstream projects. Additional optimizations are built into plugins/extensions such as the [Intel Extension for Pytorch*](https://github.com/intel/intel-extension-for-pytorch) and the [Intel Extension for TensorFlow*](https://github.com/intel/intel-extension-for-tensorflow). Popular neural network models running against common datasets are the target workloads that drive these optimizations. + +The purpose of the Intel® AI Reference Models repository (and associated containers) is to quickly replicate the complete software environment that demonstrates the best-known performance of each of these target model/dataset combinations. When executed in optimally-configured hardware environments, these software environments showcase the AI capabilities of Intel platforms. ***DISCLAIMER: These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit [https://www.intel.ai/blog](https://www.intel.ai/blog).*** @@ -16,12 +18,12 @@ For any performance and/or benchmarking information on specific Intel platforms, Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right. ## License -The Model Zoo for Intel® Architecture is licensed under [Apache License Version 2.0](https://github.com/IntelAI/models/blob/master/LICENSE). +The Intel® AI Reference Models is licensed under [Apache License Version 2.0](https://github.com/intel/ai-reference-models/blob/master/LICENSE). 
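> Editor's note (not part of the diff): the "Purpose" paragraph above names the Intel® Extension for PyTorch* as one of the plugins that carries these optimizations. As a minimal illustrative sketch only — the ResNet-50 model, bfloat16 dtype, and input shape below are placeholder choices, not taken from this repository's scripts — applying the extension typically looks like this:

```python
# Illustrative sketch: how Intel Extension for PyTorch* is typically applied
# before CPU inference. Model choice, dtype, and input size are placeholders.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # assumes the extension is installed

model = models.resnet50(weights=None).eval()
# ipex.optimize() applies operator fusion and dtype/layout optimizations for Intel CPUs
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(torch.randn(1, 3, 224, 224))
print(output.shape)
```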
## Datasets To the extent that any public datasets are referenced by Intel or accessed using tools or code on this site those datasets are provided by the third party indicated as the data source. Intel does not create the data, or datasets, and does not warrant their accuracy or quality. By accessing the public dataset(s) you agree to the terms associated with those datasets and that your use complies with the applicable license. -Please check the list of datasets used in Model Zoo for Intel® Architecture in [datasets directory](/datasets). +Please check the list of datasets used in Intel® AI Reference Models in [datasets directory](/datasets). Intel expressly disclaims the accuracy, adequacy, or completeness of any public datasets, and is not liable for any errors, omissions, or defects in the data, or for any reliance on the data. Intel is not liable for any liability or damages relating to your use of public datasets. @@ -30,7 +32,7 @@ The model documentation in the tables below have information on the prerequisites to run each model. The model scripts run on Linux. Certain models are also able to run using bare metal on Windows. For more information and a list of models that are supported on Windows, see the -[documentation here](/docs/general/Windows.md#using-intel-model-zoo-on-windows-systems). +[documentation here](/docs/general/Windows.md#using-intel-ai-reference-models-on-windows-systems). Instructions available to run on [Sapphire Rapids](https://www.intel.com/content/www/us/en/newsroom/opinion/updates-next-gen-data-center-platform-sapphire-rapids.html#gs.blowcx). @@ -73,7 +75,6 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec | Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset | | -------------------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- | -| [3D U-Net](https://arxiv.org/pdf/1606.06650.pdf) | TensorFlow | Inference | [FP32](/benchmarks/image_segmentation/tensorflow/3d_unet/inference/fp32/README.md) | [BRATS 2018](https://github.com/IntelAI/models/tree/master/benchmarks/image_segmentation/tensorflow/3d_unet/inference/fp32#datasets) | | [3D U-Net MLPerf*](https://arxiv.org/pdf/1606.06650.pdf) | TensorFlow | Inference | [FP32 BFloat16 Int8](/benchmarks/image_segmentation/tensorflow/3d_unet_mlperf/inference/README.md) | [BRATS 2019](https://www.med.upenn.edu/cbica/brats2019/data.html) | | [3D U-Net MLPerf*](https://arxiv.org/pdf/1606.06650.pdf) [Sapphire Rapids](https://www.intel.com/content/www/us/en/newsroom/opinion/updates-next-gen-data-center-platform-sapphire-rapids.html#gs.blowcx) | Tensorflow | Inference | [FP32 BFloat16 Int8 BFloat32](/quickstart/image_segmentation/tensorflow/3d_unet_mlperf/inference/cpu/README_SPR_Baremetal.md) | [BRATS 2019](https://www.med.upenn.edu/cbica/brats2019/data.html) | | [MaskRCNN](https://arxiv.org/abs/1703.06870) | TensorFlow | Inference | [FP32](/benchmarks/image_segmentation/tensorflow/maskrcnn/inference/fp32/README.md) | [MS COCO 2014](https://github.com/IntelAI/models/tree/master/benchmarks/image_segmentation/tensorflow/maskrcnn/inference/fp32#datasets-and-pretrained-model) | @@ -114,7 +115,6 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec | Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset | | ----------------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- | -| [Faster 
R-CNN](https://arxiv.org/pdf/1506.01497.pdf) | TensorFlow | Inference | [Int8](/benchmarks/object_detection/tensorflow/faster_rcnn/inference/int8/README.md) [FP32](/benchmarks/object_detection/tensorflow/faster_rcnn/inference/fp32/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | | [R-FCN](https://arxiv.org/pdf/1605.06409.pdf) | TensorFlow | Inference | [Int8 FP32](/benchmarks/object_detection/tensorflow/rfcn/inference/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | | [SSD-MobileNet*](https://arxiv.org/pdf/1704.04861.pdf)| TensorFlow | Inference | [Int8 FP32 BFloat16](/benchmarks/object_detection/tensorflow/ssd-mobilenet/inference/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | | [SSD-MobileNet*](https://arxiv.org/pdf/1704.04861.pdf) [Sapphire Rapids](https://www.intel.com/content/www/us/en/newsroom/opinion/updates-next-gen-data-center-platform-sapphire-rapids.html#gs.blowcx) | TensorFlow | Inference | [Int8 FP32 BFloat16 BFloat32](/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/cpu/README_SPR_baremetal.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | @@ -145,6 +145,9 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | TensorFlow | Training | [FP32](/benchmarks/recommendation/tensorflow/wide_deep_large_ds/training/README.md) | [Large Kaggle Display Advertising Challenge dataset](https://github.com/IntelAI/models/tree/master/benchmarks/recommendation/tensorflow/wide_deep_large_ds/training/fp32#dataset) | | [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Inference | [FP32 Int8 BFloat16 BFloat32](/quickstart/recommendation/pytorch/dlrm/inference/cpu/README.md) | [Criteo Terabyte](/quickstart/recommendation/pytorch/dlrm/inference/cpu/README.md#datasets) | | [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Training | [FP32 BFloat16 BFloat32](/quickstart/recommendation/pytorch/dlrm/training/cpu/README.md) | [Criteo Terabyte](/quickstart/recommendation/pytorch/dlrm/training/cpu/README.md#datasets) | +| [DLRM v2](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Inference | [FP32 FP16 BFloat16 BFloat32 Int8](/quickstart/recommendation/pytorch/torchrec_dlrm/inference/cpu/README.md) | [Criteo 1TB Click Logs dataset](/quickstart/recommendation/pytorch/torchrec_dlrm/inference/cpu#datasets) | +| [DLRM v2](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Training | [FP32 FP16 BFloat16 BFloat32](/quickstart/recommendation/pytorch/torchrec_dlrm/training/cpu/README.md) | [Random dataset](/quickstart/recommendation/pytorch/torchrec_dlrm/training/cpu#datasets) | +| [MEMREC-DLRM](https://arxiv.org/pdf/2305.07205.pdf) | PyTorch | Inference | [FP32](/quickstart/recommendation/pytorch/memrec_dlrm/inference/cpu/README.md) | [Criteo Terabyte](/quickstart/recommendation/pytorch/memrec_dlrm/inference/cpu/README.md#datasets) | ### Text-to-Speech @@ -189,6 +192,8 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Training | Max Series | 
[BFloat16](/quickstart/language_modeling/pytorch/bert_large/training/gpu/README.md) | |[BERT large](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | Max Series | [FP32 FP16](/quickstart/language_modeling/tensorflow/bert_large/inference/gpu/README.md) | | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Training | Max Series | [BFloat16](/quickstart/language_modeling/tensorflow/bert_large/training/gpu/README.md) | +| [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Inference | Max Series | [FP16](/quickstart/recommendation/pytorch/torchrec_dlrm/inference/gpu/README.md) | +| [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Training | Max Series | [BFloat16](/quickstart/recommendation/pytorch/torchrec_dlrm/training/gpu/README.md) | ## How to Contribute If you would like to add a new benchmarking script, please use [this guide](/Contribute.md). diff --git a/benchmarks/README.md b/benchmarks/README.md deleted file mode 100644 index 51b603deb..000000000 --- a/benchmarks/README.md +++ /dev/null @@ -1,105 +0,0 @@ -# Model Zoo Scripts - -Training and inference scripts with TensorFlow optimizations that use the -Intel® oneAPI Deep Neural Network Library (Intel® oneDNN) and -Intel® Extension for PyTorch. - -## Prerequisites - -The model documentation in the tables below have information on the -prerequisites to run each model. The model scripts run on Linux. Certain -models are also able to run using bare metal on Windows. For more information -and a list of models that are supported on Windows, see the -[documentation here](/docs/general/Windows.md#using-intel-model-zoo-on-windows-systems). - -For information on running more advanced use cases using the workload containers see the: -[advanced options documentation](/quickstart/common/tensorflow/ModelPackagesAdvancedOptions.md). 
- -## TensorFlow Use Cases - -| Use Case | Model | Mode | oneContainer Portal | Model Documentation | Dataset | -| ------------------------| ------------------ | --------- | ------------------- | ------------------- | ------- | -| Image Recognition | [DenseNet169](https://arxiv.org/pdf/1608.06993.pdf) | Inference | | [FP32](image_recognition/tensorflow/densenet169/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [Inception V3](https://arxiv.org/pdf/1512.00567.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv3-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html)| [Int8 FP32](image_recognition/tensorflow/inceptionv3/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [Inception V4](https://arxiv.org/pdf/1602.07261.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv4-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html) | [Int8 FP32](image_recognition/tensorflow/inceptionv4/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [MobileNet V1*](https://arxiv.org/pdf/1704.04861.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/mobilenetv1-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html) | [Int8 FP32 BFloat16](image_recognition/tensorflow/mobilenet_v1/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [MobileNet V2](https://arxiv.org/pdf/1801.04381.pdf) | Inference | | [Int8 FP32 BFloat16](image_recognition/tensorflow/mobilenet_v2/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [ResNet 101](https://arxiv.org/pdf/1512.03385.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet101-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet101-fp32-inference-tensorflow-container.html) | [Int8 FP32](image_recognition/tensorflow/resnet101/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [ResNet 50](https://arxiv.org/pdf/1512.03385.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50-fp32-inference-tensorflow-container.html) | [Int8 FP32](image_recognition/tensorflow/resnet50/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [ResNet 
50v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50v1-5-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50v1-5-fp32-inference-tensorflow-container.html) [BFloat16](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50v1-5-bfloat16-inference-tensorflow-container.html) | [Int8 FP32 BFloat16](image_recognition/tensorflow/resnet50v1_5/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [ResNet 50v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | Training | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50v1-5-fp32-training-tensorflow-container.html) [BFloat16](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50v1-5-bfloat16-training-tensorflow-container.html) | [FP32 BFloat16](image_recognition/tensorflow/resnet50v1_5/training/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [Vision Transformer](https://arxiv.org/abs/2010.11929) | Inference | Model Containers: | [FP32 BFloat16 FP16](https://github.com/IntelAI/models/benchmarks/image_recognition/tensorflow/vision_transformer/inference/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Recognition | [Vision Transformer](https://arxiv.org/abs/2010.11929) | Training | Model Containers: | [FP32 BFloat16 FP16](https://github.com/IntelAI/models/benchmarks/image_recognition/tensorflow/vision_transformer/training/README.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) | -| Image Segmentation | [3D U-Net](https://arxiv.org/pdf/1606.06650.pdf) | Inference | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/3d-unet-fp32-inference-tensorflow-container.html) | [FP32](image_segmentation/tensorflow/3d_unet/inference/fp32/README.md) | [BRATS 2018](https://github.com/IntelAI/models/tree/master/benchmarks/image_segmentation/tensorflow/3d_unet/inference/fp32#datasets) | -| Image Segmentation | [3D U-Net MLPerf*](https://arxiv.org/pdf/1606.06650.pdf) | Inference | | [FP32 BFloat16 Int8](image_segmentation/tensorflow/3d_unet_mlperf/inference/README.md) | [BRATS 2019](https://www.med.upenn.edu/cbica/brats2019/data.html) | -| Image Segmentation | [MaskRCNN](https://arxiv.org/abs/1703.06870) | Inference | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/mask-rcnn-fp32-inference-tensorflow-container.html) | [FP32](image_segmentation/tensorflow/maskrcnn/inference/fp32/README.md) | [MS COCO 2014](https://github.com/IntelAI/models/tree/master/benchmarks/image_segmentation/tensorflow/maskrcnn/inference/fp32#datasets-and-pretrained-model) | -| Image Segmentation | [UNet](https://arxiv.org/pdf/1606.06650.pdf) | Inference | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/unet-fp32-inference-tensorflow-container.html) | [FP32](image_segmentation/tensorflow/unet/inference/fp32/README.md) | -| Language Modeling | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | Inference | 
Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/bert-large-fp32-inference-tensorflow-container.html) [BFloat16](https://www.intel.com/content/www/us/en/developer/articles/containers/bert-large-bfloat16-inference-tensorflow-container.html) | [Int8](language_modeling/tensorflow/bert_large/inference/int8/README.md) [FP32](language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16](language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) | -| Language Modeling | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | Training | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/bert-large-fp32-training-tensorflow-container.html) [BFloat16](https://www.intel.com/content/www/us/en/developer/articles/containers/bert-large-bfloat16-training-tensorflow-container.html) | [FP32](language_modeling/tensorflow/bert_large/training/fp32/Advanced.md) [BFloat16](language_modeling/tensorflow/bert_large/training/bfloat16/Advanced.md) [FP16](language_modeling/tensorflow/bert_large/training/fp16/Advanced.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#fine-tuning-with-bert-using-squad-data) and [MRPC](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#classification-training-with-bert) | -| Language Modeling | [distilBERT](https://arxiv.org/abs/1910.01108) | Inference | Model Containers: | [FP32 BFloat16](https://github.com/IntelAI/models/benchmarks/language_modeling/tensorflow/distilbert_base/inference/README.md) | [SST-2](https://huggingface.co/datasets/sst2) | -| Language Translation | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | Inference | | [FP32](language_translation/tensorflow/bert/inference/README.md) | [MRPC](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#classification-training-with-bert) | -| Language Translation | [GNMT*](https://arxiv.org/pdf/1609.08144.pdf) | Inference | | [FP32](language_translation/tensorflow/mlperf_gnmt/inference/README.md) | [MLPerf GNMT model benchmarking dataset](https://github.com/IntelAI/models/tree/master/benchmarks/language_translation/tensorflow/mlperf_gnmt/inference/fp32#datasets) | -| Language Translation | [Transformer_LT_mlperf*](https://arxiv.org/pdf/1706.03762.pdf) | Training | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/transformer-lt-mlperf-fp32-training-tensorflow-container.html) [BFloat16](https://www.intel.com/content/www/us/en/developer/articles/containers/transformer-lt-mlperf-bfloat16-training-tensorflow-container.html) | [FP32 BFloat16](language_translation/tensorflow/transformer_mlperf/training/README.md) | [WMT English-German dataset](https://github.com/IntelAI/models/tree/master/datasets/transformer_data#transformer-language-mlperf-dataset) | -| Language Translation | [Transformer_LT_mlperf*](https://arxiv.org/pdf/1706.03762.pdf) | Inference | | [FP32 BFloat16 Int8](language_translation/tensorflow/transformer_mlperf/inference/README.md) | [WMT English-German data](https://github.com/IntelAI/models/tree/master/datasets/transformer_data#transformer-language-mlperf-dataset) | -| Language Translation | [Transformer_LT_Official](https://arxiv.org/pdf/1706.03762.pdf) | Inference | Model Containers: 
[FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/transformer-lt-official-fp32-inference-tensorflow-container.html) | [FP32](language_translation/tensorflow/transformer_lt_official/inference/README.md) | [WMT English-German dataset](https://github.com/IntelAI/models/tree/master/datasets/transformer_data#transformer-language-mlperf-dataset) | -| Object Detection | [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf) | Inference | | [Int8](object_detection/tensorflow/faster_rcnn/inference/int8/README.md) [FP32](object_detection/tensorflow/faster_rcnn/inference/fp32/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | -| Object Detection | [R-FCN](https://arxiv.org/pdf/1605.06409.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/rfcn-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/rfcn-fp32-inference-tensorflow-container.html) | [Int8 FP32](object_detection/tensorflow/rfcn/inference/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | -| Object Detection | [SSD-MobileNet*](https://arxiv.org/pdf/1704.04861.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/ssd-mobilenet-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/ssd-mobilenet-fp32-inference-tensorflow-container.html) | [Int8 FP32 BFloat16](object_detection/tensorflow/ssd-mobilenet/inference/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | -| Object Detection | [SSD-ResNet34*](https://arxiv.org/pdf/1512.02325.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/ssd-resnet34-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/ssd-resnet34-fp32-inference-tensorflow-container.html) | [Int8 FP32 BFloat16](object_detection/tensorflow/ssd-resnet34/inference/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) | -| Object Detection | [SSD-ResNet34](https://arxiv.org/pdf/1512.02325.pdf) | Training | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/ssd-resnet34-fp32-training-tensorflow-container.html) [BFloat16](https://www.intel.com/content/www/us/en/developer/articles/containers/ssd-resnet34-bfloat16-training-tensorflow-container.html) | [FP32](object_detection/tensorflow/ssd-resnet34/training/fp32/README.md) [BFloat16](object_detection/tensorflow/ssd-resnet34/training/bfloat16/README.md) | [COCO 2017 training dataset](https://github.com/IntelAI/models/tree/master/datasets/coco/README_train.md) | -| Recommendation | [DIEN](https://arxiv.org/abs/1809.03672) | Inference | | [FP32 BFloat16](/benchmarks/recommendation/tensorflow/dien/inference/README.md) | [DIEN dataset](https://github.com/IntelAI/models/tree/master/benchmarks/recommendation/tensorflow/dien/inference#datasets) | -| Recommendation | [DIEN](https://arxiv.org/abs/1809.03672) | Training | | 
[FP32](/benchmarks/recommendation/tensorflow/dien/training/README.md) | [DIEN dataset](https://github.com/IntelAI/models/tree/master/benchmarks/recommendation/tensorflow/dien#1-prepare-datasets-1) | -| Recommendation | [NCF](https://arxiv.org/pdf/1708.05031.pdf) | Inference | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/ncf-fp32-inference-tensorflow-container.html) | [FP32](/benchmarks/recommendation/tensorflow/ncf/inference/fp32/README.md) | [MovieLens 1M](https://github.com/IntelAI/models/tree/master/benchmarks/recommendation/tensorflow/ncf/inference/fp32#datasets) | -| Recommendation | [Wide & Deep](https://arxiv.org/pdf/1606.07792.pdf) | Inference | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/wide-deep-fp32-inference-tensorflow-container.html) | [FP32](/benchmarks/recommendation/tensorflow/wide_deep/inference/README.md) | [Census Income dataset](https://github.com/IntelAI/models/tree/master/benchmarks/recommendation/tensorflow/wide_deep/inference/fp32#dataset) | -| Recommendation | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | Inference | Model Containers: [Int8](https://www.intel.com/content/www/us/en/developer/articles/containers/wide-deep-large-dataset-int8-inference-tensorflow-container.html) [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/wide-deep-large-dataset-fp32-inference-tensorflow-container.html) | [Int8 FP32](/benchmarks/recommendation/tensorflow/wide_deep_large_ds/inference/README.md) | [Large Kaggle Display Advertising Challenge dataset](https://github.com/IntelAI/models/tree/master/datasets/large_kaggle_advertising_challenge/README.md) | -| Recommendation | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | Training | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/wide-deep-large-dataset-fp32-training-tensorflow-container.html) | [FP32](/benchmarks/recommendation/tensorflow/wide_deep_large_ds/training/README.md) | [Large Kaggle Display Advertising Challenge dataset](https://github.com/IntelAI/models/tree/master/benchmarks/recommendation/tensorflow/wide_deep_large_ds/training/fp32#dataset) | -| Text-to-Speech | [WaveNet](https://arxiv.org/pdf/1609.03499.pdf) | Inference | Model Containers: [FP32](https://www.intel.com/content/www/us/en/developer/articles/containers/wavenet-fp32-inference-tensorflow-container.html) | [FP32](/benchmarks/text_to_speech/tensorflow/wavenet/inference/fp32/README.md) | - -## TensorFlow Serving Use Cases - -| Use Case | Model | Mode | Model Documentation | -| -----------------------| ------------------- | --------- |---------------------| -| Image Recognition | [Inception V3](https://arxiv.org/pdf/1512.00567.pdf) | Inference | [FP32](image_recognition/tensorflow_serving/inceptionv3/README.md#fp32-inference-instructions) | -| Image Recognition | [ResNet 50v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | Inference | [FP32](image_recognition/tensorflow_serving/resnet50v1_5/README.md#fp32-inference-instructions) | -| Language Translation | [Transformer_LT_Official](https://arxiv.org/pdf/1706.03762.pdf) | Inference | [FP32](language_translation/tensorflow_serving/transformer_lt_official/README.md#fp32-inference-instructions) | -| Object Detection | [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | Inference | 
[FP32](object_detection/tensorflow_serving/ssd-mobilenet/README.md#fp32-inference-instructions) | - -## PyTorch Use Cases - -| Use Case | Model | Mode | Model Documentation | -| ----------------------- | --------- | ------------------- | --------------------| -| Image Recognition | [GoogLeNet](https://arxiv.org/abs/1409.4842) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/googlenet/inference/cpu/README.md) | -| Image Recognition | [Inception v3](https://arxiv.org/pdf/1512.00567.pdf) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/inception_v3/inference/cpu/README.md) | -| Image Recognition | [MNASNet 0.5](https://arxiv.org/abs/1807.11626) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/mnasnet0_5/inference/cpu/README.md) | -| Image Recognition | [MNASNet 1.0](https://arxiv.org/abs/1807.11626) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/mnasnet1_0/inference/cpu/README.md) | -| Image Recognition | [ResNet 50](https://arxiv.org/pdf/1512.03385.pdf) | Inference | [FP32 Int8 BFloat16 BFloat32](/quickstart/image_recognition/pytorch/resnet50/inference/cpu/README.md) | -| Image Recognition | [ResNet 50](https://arxiv.org/pdf/1512.03385.pdf) | Training | [FP32 BFloat16 BFloat32](/quickstart/image_recognition/pytorch/resnet50/training/cpu/README.md) | -| Image Recognition | [ResNet 101](https://arxiv.org/pdf/1512.03385.pdf) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/resnet101/inference/cpu/README.md) | -| Image Recognition | [ResNet 152](https://arxiv.org/pdf/1512.03385.pdf) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/resnet152/inference/cpu/README.md) | -| Image Recognition | [ResNext 32x4d](https://arxiv.org/abs/1611.05431) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/resnext-32x4d/inference/cpu/README.md) | -| Image Recognition | [ResNext 32x16d](https://arxiv.org/abs/1611.05431) | Inference | [FP32 Int8 BFloat16 BFloat32](/quickstart/image_recognition/pytorch/resnext-32x16d/inference/cpu/README.md) | -| Image Recognition | [VGG-11](https://arxiv.org/abs/1409.1556) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/vgg11/inference/cpu/README.md) | -| Image Recognition | [VGG-11 with batch normalization](https://arxiv.org/abs/1409.1556) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/vgg11_bn/inference/cpu/README.md) | -| Image Recognition | [Wide ResNet-50-2](https://arxiv.org/pdf/1605.07146.pdf) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/wide_resnet50_2/inference/cpu/README.md) | -| Image Recognition | [Wide ResNet-101-2](https://arxiv.org/pdf/1605.07146.pdf) | Inference | [FP32 BFloat16](/quickstart/image_recognition/pytorch/wide_resnet101_2/inference/cpu/README.md) | -| Language Modeling | [BERT base](https://arxiv.org/pdf/1810.04805.pdf) | Inference | [FP32 BFloat16](/quickstart/language_modeling/pytorch/bert_base/inference/cpu/README.md) | -| Language Modeling | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | Inference | [FP32 Int8 BFloat16 BFloat32](/quickstart/language_modeling/pytorch/bert_large/inference/cpu/README.md) | -| Language Modeling | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | Training | [FP32 BFloat16 BFloat32](/quickstart/language_modeling/pytorch/bert_large/training/cpu/README.md) | -| Language Modeling | [DistilBERT base](https://arxiv.org/abs/1910.01108) | Inference | [FP32 Int8 BFloat16 
BFloat32](/quickstart/language_modeling/pytorch/distilbert_base/inference/cpu/README.md) | -| Language Modeling | [RNN-T](https://arxiv.org/abs/2007.15188) | Inference | [FP32 BFloat16 BFloat32](/quickstart/language_modeling/pytorch/rnnt/inference/cpu/README.md) | -| Language Modeling | [RNN-T](https://arxiv.org/abs/2007.15188) | Training | [FP32 BFloat16 BFloat32](/quickstart/language_modeling/pytorch/rnnt/training/cpu/README.md) | -| Language Modeling | [RoBERTa base](https://arxiv.org/abs/1907.11692) | Inference | [FP32 BFloat16](/quickstart/language_modeling/pytorch/roberta_base/inference/cpu/README.md) | -| Language Modeling | [T5](https://arxiv.org/abs/1910.10683) | Inference | [FP32 Int8](/quickstart/language_modeling/pytorch/t5/inference/cpu/README.md) | -| Object Detection | [Faster R-CNN ResNet50 FPN](https://arxiv.org/abs/1506.01497) | Inference | [FP32 BFloat16](/quickstart/object_detection/pytorch/faster_rcnn_resnet50_fpn/inference/cpu/README.md) | -| Object Detection | [Mask R-CNN](https://arxiv.org/abs/1703.06870) | Inference | [FP32 BFloat16 BFloat32](/quickstart/object_detection/pytorch/maskrcnn/inference/cpu/README.md) | -| Object Detection | [Mask R-CNN](https://arxiv.org/abs/1703.06870) | Training | [FP32 BFloat16 BFloat32](/quickstart/object_detection/pytorch/maskrcnn/training/cpu/README.md) | -| Object Detection | [Mask R-CNN ResNet50 FPN](https://arxiv.org/abs/1703.06870) | Inference | [FP32 BFloat16](/quickstart/object_detection/pytorch/maskrcnn_resnet50_fpn/inference/cpu/README.md) | -| Object Detection | [RetinaNet ResNet-50 FPN](https://arxiv.org/abs/1708.02002) | Inference | [FP32 BFloat16](/quickstart/object_detection/pytorch/retinanet_resnet50_fpn/inference/cpu/README.md) | -| Object Detection | [SSD-ResNet34](https://arxiv.org/abs/1512.02325) | Inference | [FP32 Int8 BFloat16 BFloat32](/quickstart/object_detection/pytorch/ssd-resnet34/inference/cpu/README.md) | -| Object Detection | [SSD-ResNet34](https://arxiv.org/abs/1512.02325) | Training | [FP32 BFloat16 BFloat32](/quickstart/object_detection/pytorch/ssd-resnet34/training/cpu/README.md) | -| Recommendation | [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | Inference | [FP32 Int8 BFloat16 BFloat32](/quickstart/recommendation/pytorch/dlrm/inference/cpu/README.md) | -| Recommendation | [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | Training | [FP32 BFloat16 BFloat32](/quickstart/recommendation/pytorch/dlrm/training/cpu/README.md) | -| Shot Boundary Detection | [TransNetV2](https://arxiv.org/pdf/2008.04838.pdf) | Inference | [FP32 BFloat16](/quickstart/shot_boundary_detection/pytorch/transnetv2/inference/cpu/README.md) | -| AI Drug Design (AIDD) | [AlphaFold2](https://www.nature.com/articles/s41586-021-03819-2)| Inference | [FP32](/quickstart/aidd/pytorch/alphafold2/inference/README.md) | - -*Means the model belongs to [MLPerf](https://mlperf.org/) models and will be supported long-term. 
diff --git a/benchmarks/common/base_benchmark_util.py b/benchmarks/common/base_benchmark_util.py index 58cd8bbb9..bc254fa20 100644 --- a/benchmarks/common/base_benchmark_util.py +++ b/benchmarks/common/base_benchmark_util.py @@ -228,13 +228,6 @@ def _define_args(self): dest="experimental_gelu", choices=["True", "False"], default=False) - self._common_arg_parser.add_argument( - "--amp", - help="use grappler auto-mixed precision as opposed to \ - keras mixed precision", - dest="amp", choices=["True", "False"], - default=False) - # Note this can't be a normal boolean flag, because we need to know when the user # does not explicitly set the arg value so that we can apply the appropriate # default value, depending on the the precision. @@ -287,6 +280,14 @@ def _define_args(self): help="Run the benchmark script using GPU", dest="gpu", action="store_true") + # Check if OneDNN Graph is enabled + self._common_arg_parser.add_argument( + "--onednn-graph", + help="If Intel® Extension for TensorFlow* is installed, oneDNN Graph for INT8 will be enabled" + " by default. Otherwise, default value of this flag will be False.", + dest="onednn_graph", choices=["True", "False"], + default=None) + def _validate_args(self): """validate the args and initializes platform_util""" # check if socket id is in socket number range diff --git a/benchmarks/common/tensorflow/start.sh b/benchmarks/common/tensorflow/start.sh index 43323f8dc..80d338a6e 100644 --- a/benchmarks/common/tensorflow/start.sh +++ b/benchmarks/common/tensorflow/start.sh @@ -55,6 +55,7 @@ echo " PYTHON_EXE: ${PYTHON_EXE}" echo " PYTHONPATH: ${PYTHONPATH}" echo " DRY_RUN: ${DRY_RUN}" echo " GPU: ${GPU}" +echo " ONEDNN_GRAPH: ${ONEDNN_GRAPH}" # Enable GPU Flag gpu_arg="" @@ -80,6 +81,13 @@ if [ ${MODE} != "inference" ] && [ ${MODE} != "training" ]; then exit 1 fi +# Enable OneDNN Graph Flag +onednn_graph_arg="" +if [ ${ONEDNN_GRAPH} == "True" ]; then + onednn_graph_arg="--onednn-graph=True" + export ITEX_ONEDNN_GRAPH=1 +fi + # Determines if we are running in a container by checking for .dockerenv function _running-in-container() { @@ -408,7 +416,8 @@ ${output_results_arg} \ ${weight_sharing_arg} \ ${synthetic_data_arg} \ ${verbose_arg} \ -${gpu_arg}" +${gpu_arg} \ +${onednn_graph_arg}" if [ ${MOUNT_EXTERNAL_MODELS_SOURCE} != "None" ]; then CMD="${CMD} --model-source-dir=${MOUNT_EXTERNAL_MODELS_SOURCE}" @@ -1431,10 +1440,12 @@ function gpt_j() { if [ ${BENCHMARK_ONLY} == "True" ]; then CMD=" ${CMD} --max_output_tokens=${MAX_OUTPUT_TOKENS}" CMD=" ${CMD} --input_tokens=${INPUT_TOKENS}" - if [[ -z "${SKIP_ROWS}" ]]; then - SKIP_ROWS=0 + CMD=" ${CMD} --steps=${STEPS}" + CMD=" ${CMD} --warmup_steps=${WARMUP_STEPS}" + if [[ -z "${DUMMY_DATA}" ]]; then + DUMMY_DATA=0 fi - CMD=" ${CMD} --skip_rows=${SKIP_ROWS}" + CMD=" ${CMD} --dummy_data=${DUMMY_DATA}" fi CMD=${CMD} run_model else @@ -1683,6 +1694,7 @@ function vision_transformer() { if [ ${MODE} == "training" ]; then CMD="${CMD} $(add_arg "--init-checkpoint" ${INIT_CHECKPOINT})" + CMD="${CMD} $(add_arg "--epochs" ${EPOCHS})" fi if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || @@ -1775,6 +1787,58 @@ function rgat() { fi } +function stable_diffusion() { + if [ ${MODE} == "inference" ]; then + if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then + curr_dir=${pwd} + echo "Curr dir: " + echo ${curr_dir} + + infer_dir=${MOUNT_INTELAI_MODELS_SOURCE}/${MODE} + benchmarks_patch_path=${infer_dir}/patch + echo "benchmarks_patch_path:" + echo 
${benchmarks_patch_path} + + cd /tmp + rm -rf keras-cv + git clone https://github.com/keras-team/keras-cv.git + cd keras-cv + git reset --hard 66fa74b6a2a0bb1e563ae8bce66496b118b95200 + git apply ${benchmarks_patch_path} + pip install . + cd ${curr_dir} + + if [[ ${NOINSTALL} != "True" ]]; then + python3 -m pip install -r "${MOUNT_BENCHMARK}/${USE_CASE}/${FRAMEWORK}/${MODEL_NAME}/${MODE}/requirements.txt" + fi + + python -c $'from tensorflow import keras\n_ = keras.utils.get_file( + "bpe_simple_vocab_16e6.txt.gz", + "https://github.com/openai/CLIP/blob/main/clip/bpe_simple_vocab_16e6.txt.gz?raw=true", + file_hash="924691ac288e54409236115652ad4aa250f48203de50a9e4722a6ecd48d6804a", + )\n_ = keras.utils.get_file( + origin="https://huggingface.co/fchollet/stable-diffusion/resolve/main/kcv_encoder.h5", + file_hash="4789e63e07c0e54d6a34a29b45ce81ece27060c499a709d556c7755b42bb0dc4", + )\n_ = keras.utils.get_file( + origin="https://huggingface.co/fchollet/stable-diffusion/resolve/main/kcv_diffusion_model.h5", + file_hash="8799ff9763de13d7f30a683d653018e114ed24a6a819667da4f5ee10f9e805fe", + )\n_ = keras.utils.get_file( + origin="https://huggingface.co/fchollet/stable-diffusion/resolve/main/kcv_decoder.h5", + file_hash="ad350a65cc8bc4a80c8103367e039a3329b4231c2469a1093869a345f55b1962", + )' + + export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE} + + CMD="${CMD} $(add_arg "--steps" ${STEPS})" + CMD="${CMD} $(add_arg "--output-dir" ${OUTPUT_DIR})" + CMD=${CMD} run_model + else + echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo." + exit 1 + fi + fi +} + # Wide & Deep model function wide_deep() { if [ ${PRECISION} == "fp32" ]; then @@ -1857,7 +1921,7 @@ function wide_deep_large_ds() { function graphsage() { if [ ${MODE} == "inference" ]; then - if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then + if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ] || [ ${PRECISION} == "int8" ]; then export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE} if [ ${NUM_INTER_THREADS} != "None" ]; then @@ -1955,6 +2019,8 @@ elif [ ${MODEL_NAME} == "gpt_j" ]; then gpt_j elif [ ${MODEL_NAME} == "rgat" ]; then rgat +elif [ ${MODEL_NAME} == "stable_diffusion" ]; then + stable_diffusion else echo "Unsupported model: ${MODEL_NAME}" exit 1 diff --git a/benchmarks/image_recognition/tensorflow/densenet169/inference/README.md b/benchmarks/image_recognition/tensorflow/densenet169/inference/README.md index 84d5cb370..1c750ec3f 100644 --- a/benchmarks/image_recognition/tensorflow/densenet169/inference/README.md +++ b/benchmarks/image_recognition/tensorflow/densenet169/inference/README.md @@ -25,21 +25,21 @@ Set the `DATASET_DIR` to point to this directory when running DenseNet 169. | [`batch_inference.sh`](/quickstart/image_recognition/tensorflow/densenet169/inference/cpu/batch_inference.sh) | Runs batch inference (batch_size=100). | | [`accuracy.sh`](/quickstart/image_recognition/tensorflow/densenet169/inference/cpu/accuracy.sh) | Measures the model accuracy (batch_size=100). | - + ## Run the model Setup your environment using the instructions below, depending on if you are -using [AI Kit](/docs/general/tensorflow/AIKit.md): +using [AI Tools](/docs/general/tensorflow/AITools.md):
Setup using AI Kit on Linux | -Setup without AI Kit on Linux | -Setup without AI Kit on Windows | +Setup using AI Tools on Linux | +Setup without AI Tools on Linux | +Setup without AI Tools on Windows | ||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
- To run using AI Kit on Linux you will need: +To run using AI Tools on Linux you will need:
|
- To run without AI Kit on Linux you will need: +To run without AI Tools on Linux you will need:
|
- To run without AI Kit on Windows you will need: +To run without AI Tools on Windows you will need:
|
@@ -82,7 +82,7 @@ Set the environment variables and run quickstart script on either Linux or Windo
### Run on Linux
```
-# cd to your model zoo directory
+# cd to your AI Reference Models directory
cd models
export DATASET_DIR=
Setup using AI Kit on Linux | -Setup without AI Kit on Linux | -Setup without AI Kit on Windows | +Setup using AI Tools on Linux | +Setup without AI Tools on Linux | +Setup without AI Tools on Windows | |||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
- To run using AI Kit on Linux you will need: +To run using AI Tools on Linux you will need:
|
- To run without AI Kit on Linux you will need: +To run without AI Tools on Linux you will need:
|
- To run without AI Kit on Windows you will need: +To run without AI Tools on Windows you will need:
|
@@ -97,7 +97,7 @@ Set the environment variables and run quickstart script on either Linux or Windo
### Run on Linux
```
-# cd to your model zoo directory
+# cd to your AI Reference Models directory
cd models
export PRETRAINED_MODEL=
Setup using AI Kit on Linux | -Setup without AI Kit on Linux | -Setup without AI Kit on Windows | +Setup using AI Tools on Linux | +Setup without AI Tools on Linux | +Setup without AI Tools on Windows | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
- To run using AI Kit on Linux you will need: +To run using AI Tools on Linux you will need:
|
- To run without AI Kit on Linux you will need: +To run without AI Tools on Linux you will need:
|
- To run without AI Kit on Windows you will need: +To run without AI Tools on Windows you will need:
|
@@ -88,7 +88,7 @@ Set the environment variables and run quickstart script on either Linux or Windo
### Run on Linux
```
-# cd to your model zoo directory
+# cd to your AI Reference Models directory
cd models
export PRETRAINED_MODEL=
Setup using AI Kit on Linux | -Setup without AI Kit on Linux | -Setup without AI Kit on Windows | +Setup using AI Tools on Linux | +Setup without AI Tools on Linux | +Setup without AI Tools on Windows | |
---|---|---|---|---|---|---|
- To run using AI Kit on Linux you will need: +To run using AI Tools on Linux you will need:
|
- To run without AI Kit on Linux you will need: +To run without AI Tools on Linux you will need:
|
- To run without AI Kit on Windows you will need: +To run without AI Tools on Windows you will need:
|
@@ -95,7 +95,7 @@ Set the environment variables and run quickstart script on either Linux or Windo
### Run on Linux:
```
-# cd to your model zoo directory
+# cd to your AI Reference Models directory
cd models
export PRETRAINED_MODEL=
Setup using AI Kit on Linux | -Setup without AI Kit on Linux | -
---|---|
- To run using AI Kit on Linux you will need: -
|
-
- To run without AI Kit on Linux you will need: -
|
-