Sync with r2.12.1 (#1530)
* update driver version (#1429)

* p0 ipex rn50 ATS-M (#1426)

* add ipex stable diffusion

* change base image P0 ITEX rn50 (#1431)

* MaskRCNN ATS-M container (#1417)

* p0 ipex stable diffusion (#1424)

* yolov5 p0 ipex ATS-M (#1425)

* itex atsm stable diffusion (#1418)

* P0 ITEX Efficientnet B0,B3 (#1411)

* EOLing docker builder files for workload containers (#1437)

* removing dockerfiles directory

* removed docker builder spec, partials

* change precision to lowercase (#1456)

* Update IPEX cpu baremetal instructions (#1451)

* clean up ipex baremetal instructions

* update horovod version in docs (#1458)

* Remove all software.intel.com links (#1381)

* Corrected software.intel.com

* Removed dev catalog pages for EOL models

* Added and updated baremetal README for P0 GPU models (#1447)

* updated the GPU readme

* PYT SPR BERT Large (#1472)

* add avx-fp32

* Adapt newer BKC

* remove idsid

* update base image

* updated tpp files for 2.12.1 release (#1479)

* updated tpp files

* added yolo5

* another update to TPPs (#1503)

* resolve merge conflicts

* Bump mlflow in /datasets/cloud_data_connector/samples/interoperability (#1492)

Bumps [mlflow](https://github.com/mlflow/mlflow) from 2.5.0 to 2.6.0.

* Bump mlflow in /datasets/cloud_data_connector/samples/azure (#1491)

Bumps [mlflow](https://github.com/mlflow/mlflow) from 2.5.0 to 2.6.0.

* fix issues with resolving conflicts

* P0 models list (#1500)

* sync with r2.12.1

---------

Co-authored-by: mahathis <36486206+Mahathi-Vatsal@users.noreply.github.com>
Co-authored-by: Srikanth Ramakrishna <srikanth.ramakrishna@intel.com>
Co-authored-by: Tyler Titsworth <tyler.titsworth@intel.com>
Co-authored-by: Sharvil Shah <shahsharvil96@gmail.com>
Co-authored-by: Jitendra Patil <jitendra.patil@intel.com>
6 people authored Sep 15, 2023
1 parent 20a2fbb commit 56789bd
Showing 517 changed files with 12,035 additions and 22,284 deletions.
7 changes: 6 additions & 1 deletion README.md
@@ -2,7 +2,7 @@

This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors and Intel® Data Center GPUs.

Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://software.intel.com/containers).
Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html).

## Purpose of the Model Zoo

@@ -176,6 +176,11 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please check
| [SSD-MobileNet*](https://arxiv.org/pdf/1704.04861.pdf)| TensorFlow | Inference | Flex Series| [Int8](/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/README.md) |
| [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf)| PyTorch | Inference | Flex Series | [Int8](/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md) |
| [Yolo V4](https://arxiv.org/pdf/1704.04861.pdf)| PyTorch | Inference | Flex Series | [Int8](/quickstart/object_detection/pytorch/yolov4/inference/gpu/README.md) |
| [EfficientNet](https://arxiv.org/pdf/1905.11946.pdf) | TensorFlow | Inference | Flex Series | [FP16](/quickstart/image_recognition/tensorflow/efficientnet/inference/gpu/README.md) |
| [MaskRCNN](https://arxiv.org/pdf/1703.06870.pdf) | TensorFlow | Inference | Flex Series | [FP16](/quickstart/image_segmentation/tensorflow/maskrcnn/inference/gpu/README.md) |
| [Stable Diffusion](https://arxiv.org/pdf/2112.10752.pdf) | TensorFlow | Inference | Flex Series | [FP16 FP32](/quickstart/generative-ai/tensorflow/stable_diffusion/inference/gpu/README.md) |
| [Stable Diffusion](https://arxiv.org/pdf/2112.10752.pdf) | PyTorch | Inference | Flex Series | [FP16 FP32](/quickstart/generative-ai/pytorch/stable_diffusion/inference/gpu/README.md) |
| [Yolo V5](https://arxiv.org/pdf/2108.11539.pdf) | PyTorch | Inference | Flex Series | [FP16](/quickstart/object_detection/pytorch/yolov5/inference/gpu/README.md) |
| [ResNet 50v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | Inference | Max Series | [Int8 FP32 FP16](/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/README_Max_Series.md) |
| [ResNet 50 v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | Training | Max Series | [BFloat16](/quickstart/image_recognition/tensorflow/resnet50v1_5/training/gpu/README.md) |
| [ResNet 50 v1.5](https://arxiv.org/pdf/1512.03385.pdf) | PyTorch | Inference | Max Series |[Int8](/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/README_Max_Series.md) |
54 changes: 28 additions & 26 deletions benchmarks/README.md

Large diffs are not rendered by default.

255 changes: 251 additions & 4 deletions benchmarks/common/tensorflow/start.sh
@@ -562,9 +562,6 @@ function bert_options() {
if [[ -n "${OPTIMIZED_SOFTMAX}" && ${OPTIMIZED_SOFTMAX} != "" ]]; then
CMD=" ${CMD} --optimized-softmax=${OPTIMIZED_SOFTMAX}"
fi
if [[ -n "${AMP}" && ${AMP} != "" ]]; then
CMD=" ${CMD} --amp=${AMP}"
fi

if [[ -n "${MPI_WORKERS_SYNC_GRADIENTS}" && ${MPI_WORKERS_SYNC_GRADIENTS} != "" ]]; then
CMD=" ${CMD} --mpi_workers_sync_gradients=${MPI_WORKERS_SYNC_GRADIENTS}"
@@ -1418,6 +1415,38 @@ function transformer_mlperf() {
  fi
}

# GPT-J base model
function gpt_j() {
  if [ ${MODE} == "inference" ]; then
    if [[ (${PRECISION} == "bfloat16") || ( ${PRECISION} == "fp32") || ( ${PRECISION} == "fp16") ]]; then
      if [[ -z "${CHECKPOINT_DIRECTORY}" ]]; then
        echo "Checkpoint directory not found. The script will download the model."
      else
        export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
        export HF_HOME=${CHECKPOINT_DIRECTORY}
        export HUGGINGFACE_HUB_CACHE=${CHECKPOINT_DIRECTORY}
        export TRANSFORMERS_CACHE=${CHECKPOINT_DIRECTORY}
      fi

      if [ ${BENCHMARK_ONLY} == "True" ]; then
        CMD=" ${CMD} --max_output_tokens=${MAX_OUTPUT_TOKENS}"
        CMD=" ${CMD} --input_tokens=${INPUT_TOKENS}"
        if [[ -z "${SKIP_ROWS}" ]]; then
          SKIP_ROWS=0
        fi
        CMD=" ${CMD} --skip_rows=${SKIP_ROWS}"
      fi
      CMD=${CMD} run_model
    else
      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME}."
      exit 1
    fi
  else
    echo "Only inference use-case is supported for now."
    exit 1
  fi
}
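# Usage sketch for gpt_j (values are illustrative assumptions, not defaults; these
# variables are expected to be exported, typically by launch_benchmark.py, before
# start.sh runs):
#   CHECKPOINT_DIRECTORY=/path/to/hf_cache   # optional; when unset the model is downloaded
#   BENCHMARK_ONLY=True
#   MAX_OUTPUT_TOKENS=32 INPUT_TOKENS=32 SKIP_ROWS=0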

# Wavenet model
function wavenet() {
  if [ ${PRECISION} == "fp32" ]; then
@@ -1563,6 +1592,189 @@ function distilbert_base() {
  fi
}
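# Several of the functions below build CMD with an add_arg helper that is defined
# earlier in start.sh (not shown in this diff). A minimal sketch of its assumed
# behavior -- emit "--flag=value" only when the value is non-empty, so unset
# tuning knobs are silently skipped:
#   function add_arg() {
#     local arg_str=""
#     if [ -n "${2}" ]; then
#       arg_str=" ${1}=${2}"
#     fi
#     echo "${arg_str}"
#   }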

function gpt_j_6B() {
  if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "fp16" ] ||
     [ ${PRECISION} == "bfloat16" ]; then

    if [[ ${INSTALL_TRANSFORMER_FIX} != "True" ]]; then
      echo "Information: Installing transformers from Hugging Face...!"
      echo "python3 -m pip install git+https://github.com/intel-tensorflow/transformers@gptj_add_padding"
      python3 -m pip install git+https://github.com/intel-tensorflow/transformers@gptj_add_padding
    fi

    export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
    CMD="${CMD} $(add_arg "--warmup-steps" ${WARMUP_STEPS})"
    CMD="${CMD} $(add_arg "--steps" ${STEPS})"

    if [[ ${MODE} == "training" ]]; then
      if [[ -z "${TRAIN_OPTION}" ]]; then
        echo "Error: Please specify a train option (GLUE, Lambada)"
        exit 1
      fi

      CMD=" ${CMD} --train-option=${TRAIN_OPTION}"
    fi

    if [[ -z "${CACHE_DIR}" ]]; then
      echo "Checkpoint directory not found. The script will download the model."
    else
      export HF_HOME=${CACHE_DIR}
      export HUGGINGFACE_HUB_CACHE=${CACHE_DIR}
      export TRANSFORMERS_CACHE=${CACHE_DIR}
    fi

    if [ ${NUM_INTER_THREADS} != "None" ]; then
      CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
    fi

    if [ ${NUM_INTRA_THREADS} != "None" ]; then
      CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
    fi

    if [[ -n "${NUM_TRAIN_EPOCHS}" && ${NUM_TRAIN_EPOCHS} != "" ]]; then
      CMD=" ${CMD} --num-train-epochs=${NUM_TRAIN_EPOCHS}"
    fi

    if [[ -n "${LEARNING_RATE}" && ${LEARNING_RATE} != "" ]]; then
      CMD=" ${CMD} --learning-rate=${LEARNING_RATE}"
    fi

    if [[ -n "${NUM_TRAIN_STEPS}" && ${NUM_TRAIN_STEPS} != "" ]]; then
      CMD=" ${CMD} --num-train-steps=${NUM_TRAIN_STEPS}"
    fi

    if [[ -n "${DO_TRAIN}" && ${DO_TRAIN} != "" ]]; then
      CMD=" ${CMD} --do-train=${DO_TRAIN}"
    fi

    if [[ -n "${DO_EVAL}" && ${DO_EVAL} != "" ]]; then
      CMD=" ${CMD} --do-eval=${DO_EVAL}"
    fi

    if [[ -n "${TASK_NAME}" && ${TASK_NAME} != "" ]]; then
      CMD=" ${CMD} --task-name=${TASK_NAME}"
    fi

    if [[ -n "${CACHE_DIR}" && ${CACHE_DIR} != "" ]]; then
      CMD=" ${CMD} --cache-dir=${CACHE_DIR}"
    fi

    if [[ -n "${PROFILE}" && ${PROFILE} != "" ]]; then
      CMD=" ${CMD} --profile=${PROFILE}"
    fi

    if [ -z ${STEPS} ]; then
      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
    fi

    if [ -z $MAX_SEQ_LENGTH ]; then
      CMD="${CMD} $(add_arg "--max-seq-length" ${MAX_SEQ_LENGTH})"
    fi
    CMD=${CMD} run_model
  else
    echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
    exit 1
  fi
}
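# Note: when CHECKPOINT_DIRECTORY (gpt_j) or CACHE_DIR (gpt_j_6B) is provided, HF_HOME,
# HUGGINGFACE_HUB_CACHE, and TRANSFORMERS_CACHE all point at it, so Hugging Face
# downloads land there instead of the default ~/.cache/huggingface location.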


# vision-transformer base model
function vision_transformer() {

  if [ ${MODE} == "training" ]; then
    CMD="${CMD} $(add_arg "--init-checkpoint" ${INIT_CHECKPOINT})"
  fi

  if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] ||
     [ ${PRECISION} == "fp16" ]; then
    export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
    CMD="${CMD} $(add_arg "--warmup-steps" ${WARMUP_STEPS})"
    CMD="${CMD} $(add_arg "--steps" ${STEPS})"

    if [ ${NUM_INTER_THREADS} != "None" ]; then
      CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
    fi

    if [ ${NUM_INTRA_THREADS} != "None" ]; then
      CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
    fi

    if [ -z ${STEPS} ]; then
      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
    fi
    CMD=${CMD} run_model
  else
    echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
    exit 1
  fi
}

# mmoe base model
function mmoe() {
  if [ ${MODE} == "inference" ]; then
    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
      CMD="${CMD} $(add_arg "--warmup-steps" ${WARMUP_STEPS})"
      CMD="${CMD} $(add_arg "--steps" ${STEPS})"

      if [ ${NUM_INTER_THREADS} != "None" ]; then
        CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
      fi

      if [ ${NUM_INTRA_THREADS} != "None" ]; then
        CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
      fi

      if [ -z ${STEPS} ]; then
        CMD="${CMD} $(add_arg "--steps" ${STEPS})"
      fi

      CMD=${CMD} run_model
    else
      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
      exit 1
    fi
  elif [ ${MODE} == "training" ]; then
    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
      CMD="${CMD} $(add_arg "--train-epochs" ${TRAIN_EPOCHS})"
      CMD="${CMD} $(add_arg "--model_dir" ${CHECKPOINT_DIRECTORY})"
      CMD=${CMD} run_model
    else
      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
      exit 1
    fi
  fi
}

# rgat base model
function rgat() {
  if [ ${MODE} == "inference" ]; then
    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}

      # Install tensorflow_gnn from its main branch
      python3 -m pip install git+https://github.com/tensorflow/gnn.git@main

      if [ ${NUM_INTER_THREADS} != "None" ]; then
        CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
      fi

      if [ ${NUM_INTRA_THREADS} != "None" ]; then
        CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
      fi

      CMD="${CMD} $(add_arg "--graph-schema-path" ${GRAPH_SCHEMA_PATH})"
      CMD="${CMD} $(add_arg "--pretrained-model" ${PRETRAINED_MODEL})"
      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
      CMD=${CMD} run_model
    else
      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
      exit 1
    fi
  fi
}

# Wide & Deep model
function wide_deep() {
  if [ ${PRECISION} == "fp32" ]; then
@@ -1643,6 +1855,29 @@ function wide_deep_large_ds() {
  fi
}

function graphsage() {
  if [ ${MODE} == "inference" ]; then
    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}

      if [ ${NUM_INTER_THREADS} != "None" ]; then
        CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
      fi

      if [ ${NUM_INTRA_THREADS} != "None" ]; then
        CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
      fi

      CMD="${CMD} $(add_arg "--pretrained-model" ${PRETRAINED_MODEL})"
      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
      CMD=${CMD} run_model
    else
      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
      exit 1
    fi
  fi
}

LOGFILE=${OUTPUT_DIR}/${LOG_FILENAME}

MODEL_NAME=$(echo ${MODEL_NAME} | tr 'A-Z' 'a-z')
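# Example: passing --model-name GPT_J_6B to launch_benchmark.py arrives here as
# MODEL_NAME=gpt_j_6b after the lowercasing above, matching the dispatch below.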
@@ -1707,7 +1942,19 @@ elif [ ${MODEL_NAME} == "bert_large" ]; then
elif [ ${MODEL_NAME} == "dien" ]; then
dien
elif [ ${MODEL_NAME} == "distilbert_base" ]; then
distilbert_base
distilbert_base
elif [ ${MODEL_NAME} == "vision_transformer" ]; then
vision_transformer
elif [ ${MODEL_NAME} == "gpt_j_6b" ]; then
gpt_j_6B
elif [ ${MODEL_NAME} == "mmoe" ]; then
mmoe
elif [ ${MODEL_NAME} == "graphsage" ]; then
graphsage
elif [ ${MODEL_NAME} == "gpt_j" ]; then
gpt_j
elif [ ${MODEL_NAME} == "rgat" ]; then
rgat
else
echo "Unsupported model: ${MODEL_NAME}"
exit 1
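# Illustrative end-to-end invocation for one of the newly added models (a sketch only;
# the exact flags, dataset, and checkpoint arguments are documented in each model's README):
#   python launch_benchmark.py --model-name gpt_j_6b --framework tensorflow \
#       --mode inference --precision fp32 --batch-size 1 --output-dir /tmp/output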
@@ -123,7 +123,3 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
## Additional Resources

* To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [<int8 precision>](<int8 advanced readme link>) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
workload container:<br />
[https://www.intel.com/content/www/us/en/developer/articles/machine-learning-model/densenet169-fp32-inference-tensorflow-model.html](https://www.intel.com/content/www/us/en/developer/articles/machine-learning-model/densenet169-fp32-inference-tensorflow-model.html).

@@ -137,7 +137,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
## Additional Resources

* To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
workload container:<br />
[https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html).
[https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html).

@@ -128,7 +128,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
## Additional Resources

* To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
workload container:<br />
[https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html).
[https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html).

@@ -135,7 +135,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
## Additional Resources

* To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [BFloat16](bfloat16/Advanced.md) for calling the `launch_benchmark.py` script directly.
* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
workload container:<br />
[https://software.intel.com/content/www/us/en/develop/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html).
[https://www.intel.com/content/www/us/en/developer/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html).

@@ -135,7 +135,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
## Additional Resources

* To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
workload container:<br />
[https://software.intel.com/content/www/us/en/develop/articles/containers/resnet101-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/resnet101-fp32-inference-tensorflow-container.html).
[https://www.intel.com/content/www/us/en/developer/articles/containers/resnet101-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet101-fp32-inference-tensorflow-container.html).

@@ -135,7 +135,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
## Additional Resources

* To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
workload container:<br />
[https://software.intel.com/content/www/us/en/develop/articles/containers/resnet50-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/resnet50-fp32-inference-tensorflow-container.html).
[https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50-fp32-inference-tensorflow-container.html).

