Running MaskRCNN Inference with FP16 on Intel® Data Center GPU Flex Series using Intel® Extension for TensorFlow*
This document has instructions for running MaskRCNN inference using Intel® Extension for TensorFlow* with Intel® Data Center GPU Flex Series.
| Item | Detail |
| ---- | ------ |
| Host machine | Intel® Data Center GPU Flex Series 170 or 140 |
| Drivers | GPU-compatible drivers need to be installed: Download Driver |
| Software | Docker* |
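Before pulling the container, it is worth confirming that the driver is installed and the GPU's render device is exposed to the host. A minimal check, assuming a standard Linux driver installation (exact device names may vary):

```bash
# The DRM render nodes should be present once the GPU driver is installed;
# these are the devices passed into the container via --device=/dev/dri.
ls /dev/dri
# Typical output lists entries such as card0 and renderD128.
```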
This repository provides scripts to download and extract the COCO 2017 dataset. Download and pre-process the dataset using the `download_and_preprocess_coco.sh` script provided here, then set the `DATASET_DIR` environment variable to point to the TF records directory when running MaskRCNN.
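As a rough sketch of the dataset preparation step (the target path is illustrative, and the script's argument handling is an assumption — consult its usage output for the actual interface):

```bash
# Illustrative location for the processed TF records (adjust as needed).
export DATASET_DIR=$HOME/coco_tfrecords
mkdir -p ${DATASET_DIR}

# Download the COCO 2017 images/annotations and convert them to TF records.
# The script name comes from this document; passing the target directory as
# the first argument is an assumption -- check the script before running.
bash download_and_preprocess_coco.sh ${DATASET_DIR}
```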
| Script name | Description |
| ----------- | ----------- |
| `run_model.sh` | Runs batch and online inference for FP16 precision on Flex Series 170 and 140 |
```bash
docker pull intel/image-segmentation:tf-flex-gpu-maskrcnn-inference
```
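After the pull completes, you can confirm the image is available locally:

```bash
docker images intel/image-segmentation
```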
The MaskRCNN inference container includes the scripts, model, and libraries needed to run FP16 batch and online inference. To run the inference script using this container, you will need to provide volume mounts for the preprocessed COCO dataset and for an output directory where log files will be written.
```bash
# Optional
export BATCH_SIZE=<provide batch size; the default batch size for batch inference is 16>

# Required
export PRECISION=<provide precision; float16 is supported>
export OUTPUT_DIR=<path to output directory>
export DATASET_DIR=<path to the preprocessed COCO dataset>
export GPU_TYPE=<provide either flex_170 or flex_140>

IMAGE_NAME=intel/image-segmentation:tf-flex-gpu-maskrcnn-inference
SCRIPT=run_model.sh
```
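As a concrete example, a Flex 170 run with the default batch size might be configured like this (the directory paths are illustrative):

```bash
export PRECISION=float16
export OUTPUT_DIR=$HOME/maskrcnn_logs      # illustrative path
export DATASET_DIR=$HOME/coco_tfrecords    # illustrative path
export GPU_TYPE=flex_170
export BATCH_SIZE=16   # optional; 16 is the default for batch inference
```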
Then launch the container:

```bash
docker run -it \
  --device=/dev/dri \
  --ipc=host \
  --privileged \
  --env PRECISION=${PRECISION} \
  --env OUTPUT_DIR=${OUTPUT_DIR} \
  --env DATASET_DIR=${DATASET_DIR} \
  --env BATCH_SIZE=${BATCH_SIZE} \
  --env GPU_TYPE=${GPU_TYPE} \
  --env http_proxy=${http_proxy} \
  --env https_proxy=${https_proxy} \
  --env no_proxy=${no_proxy} \
  --volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
  --volume ${DATASET_DIR}:${DATASET_DIR} \
  --rm \
  $IMAGE_NAME \
  /bin/bash $SCRIPT
```
Note: Add `--cap-add=SYS_NICE` to the `docker run` command when executing `run_model.sh` on Flex Series 140.
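Once the container exits, the run's log files are written to the directory you mounted as `OUTPUT_DIR`. The exact file names depend on the script version, so the simplest check is to list the directory:

```bash
# List the output directory mounted into the container; log files from the
# run (names vary by script version) should appear here.
ls -l ${OUTPUT_DIR}
```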
Support for Intel® Extension for TensorFlow* is available via the Intel® AI Analytics Toolkit. The Intel® Extension for TensorFlow* team also tracks bugs and enhancement requests using GitHub issues. Before submitting a suggestion or bug report, please search the existing GitHub issues to see if your issue has already been reported.
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.