
Hotfix/docs (#19)
Updating documentation to reflect current purpose of this repo.
claynerobison authored May 30, 2019
1 parent 659895b commit 005c4c8
Showing 35 changed files with 296 additions and 314 deletions.
9 changes: 5 additions & 4 deletions README.md
@@ -1,14 +1,15 @@
# Model Zoo for Intel® Architecture

This repository contains **links to pre-trained models, benchmarking scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.

## Purpose of the Model Zoo

- Demonstrate the AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware
- Show how to efficiently execute, train, and deploy Intel-optimized models
- Make it easy to benchmark model performance on Intel hardware
- Make it easy to get started running Intel-optimized models on Intel hardware in the cloud or on bare metal

***DISCLAIMER: These scripts are not intended for benchmarking Intel platforms. Please see [https://www.intel.ai/blog](https://www.intel.ai/blog) for performance and/or benchmarking information on specific Intel platforms.***

## How to Use the Model Zoo

### Getting Started
@@ -17,10 +18,10 @@ This repository contains **links to pre-trained models, benchmarking scripts, be

### Directory Structure
The Model Zoo is divided into four main directories:
- **[benchmarks](/benchmarks)**: Look here for benchmarking scripts and complete instructions on downloading and benchmarking each Intel-optimized pre-trained model.
- **[benchmarks](/benchmarks)**: Look here for sample scripts and complete instructions on downloading and running each Intel-optimized pre-trained model.
- **[docs](/docs)**: General best practices and detailed tutorials for a selection of models and frameworks can be found in this part of the repo.
- **[models](/models)**: This directory contains optimized model code, such as dataset processing routines, that has not yet been upstreamed to its respective official repository.
There are no user-friendly READMEs in this directory, but many supporting modules used for benchmarking are here.
There are no user-friendly READMEs in this directory, but many supporting modules are here.
- **[tests](/tests)**: Look here for unit tests and information on how to run them.

The benchmarks, models, and docs folders share a common structure. Each model (or document) is organized first by *use case* and then by *framework*.
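
For illustration, one model's files under this convention would sit at paths like the ones below. Only the `benchmarks` path is taken from the use-case table in this repo; the `models` and `docs` entries are assumed examples of the shared layout.

```
benchmarks/object_detection/tensorflow/rfcn/README.md   # run instructions for the model
models/object_detection/tensorflow/rfcn/                # supporting model code (assumed path)
docs/object_detection/tensorflow/                       # tutorials and best practices (assumed path)
```
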
10 changes: 5 additions & 5 deletions benchmarks/README.md
@@ -1,10 +1,10 @@
# Benchmark scripts
# Model Zoo Scripts

Training and inference scripts with Intel-optimized MKL

## Prerequisites

The benchmarking scripts can be run on Linux and require the following
The model scripts can be run on Linux and require the following
dependencies to be installed:
* [Docker](https://docs.docker.com/install/)
* [Python](https://www.python.org/downloads/) 2.7 or later
@@ -13,7 +13,7 @@ dependencies to be installed:
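
A quick sanity check that the dependencies listed above are installed (only Docker and Python are visible in this hunk; the full list continues in the file):

```
$ docker --version
$ python --version   # should report 2.7 or later
```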

## Use Cases

| Use Case | Framework | Model | Mode | Benchmarking Instructions |
| Use Case | Framework | Model | Mode | Instructions |
| -----------------------| --------------| ------------------- | --------- |------------------------------|
| Adversarial Networks | TensorFlow | [DCGAN](https://arxiv.org/pdf/1511.06434.pdf) | Inference | [FP32](adversarial_networks/tensorflow/dcgan/README.md#fp32-inference-instructions) |
| Content Creation | TensorFlow | [DRAW](https://arxiv.org/pdf/1502.04623.pdf) | Inference | [FP32](content_creation/tensorflow/draw/README.md#fp32-inference-instructions) |
@@ -31,9 +31,9 @@ dependencies to be installed:
| Language Translation | TensorFlow | [GNMT](https://arxiv.org/pdf/1609.08144.pdf) | Inference | [FP32](language_translation/tensorflow/gnmt/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [Transformer Language](https://arxiv.org/pdf/1706.03762.pdf)| Inference | [FP32](language_translation/tensorflow/transformer_language/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [Transformer_LT_Official ](https://arxiv.org/pdf/1706.03762.pdf)| Inference | [FP32](language_translation/tensorflow/transformer_lt_official/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [R-FCN](https://arxiv.org/pdf/1605.06409.pdf) | Inference | [Int8](object_detection/tensorflow/rfcn/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [R-FCN](https://arxiv.org/pdf/1605.06409.pdf) | Inference | [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf) | Inference | [Int8](object_detection/tensorflow/faster_rcnn/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/faster_rcnn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | Inference | [Int8](object_detection/tensorflow/ssd-mobilenet/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/ssd-mobilenet/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | Inference | [FP32](object_detection/tensorflow/ssd-mobilenet/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | [SSD-ResNet34](https://arxiv.org/pdf/1512.02325.pdf) | Inference | [FP32](object_detection/tensorflow/ssd-resnet34/README.md#fp32-inference-instructions) |
| Recommendation | TensorFlow | [NCF](https://arxiv.org/pdf/1708.05031.pdf) | Inference | [FP32](recommendation/tensorflow/ncf/README.md#fp32-inference-instructions) |
| Recommendation | TensorFlow | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | Inference | [Int8](recommendation/tensorflow/wide_deep_large_ds/README.md#int8-inference-instructions) [FP32](recommendation/tensorflow/wide_deep_large_ds/README.md#fp32-inference-instructions) |
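
A minimal sketch of how the table is used: clone the repo, change into `benchmarks`, and open the README linked in the Instructions column (the R-FCN FP32 row is used here as an example; `less` is just one way to view it):

```
$ git clone https://github.com/IntelAI/models.git
$ cd models/benchmarks
$ less object_detection/tensorflow/rfcn/README.md   # FP32 inference instructions
```
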
11 changes: 5 additions & 6 deletions benchmarks/adversarial_networks/tensorflow/dcgan/README.md
@@ -4,7 +4,7 @@ This document has instructions for how to run DCGAN for the
following modes/precisions:
* [FP32 inference](#fp32-inference-instructions)

Benchmarking instructions and scripts for model training and inference.
Script instructions for model training and inference.

## FP32 Inference Instructions

@@ -35,19 +35,18 @@ repository:
$ git clone https://github.com/IntelAI/models.git
```

This repository includes launch scripts for running benchmarks and the
an optimized version of the DCGAN model code.
This repository includes launch scripts for running an optimized version of the DCGAN model code.

5. Navigate to the `benchmarks` directory in your local clone of
the [intelai/models](https://github.com/IntelAI/models) repo from step 4.
The `launch_benchmark.py` script in the `benchmarks` directory is
used for starting a benchmarking run in a optimized TensorFlow docker
used for starting a model script run in an optimized TensorFlow docker
container. It has arguments to specify which model, framework, mode,
precision, and docker image to use, along with your path to the external model directory
for `--model-source-dir` (from step 1), `--data-location` (from step 2), and `--checkpoint` (from step 3).


Run benchmarking for throughput and latency with `--batch-size=100` :
Run the model script for batch and online inference with `--batch-size=100`:
```
$ cd /home/<user>/models/benchmarks
@@ -66,7 +65,7 @@ $ python launch_benchmark.py \

6. Log files are located at the value of `--output-dir`.

Below is a sample log file tail when running benchmarking for throughput:
Below is a sample log file tail when running for batch inference:
```
Batch size: 100
Batches number: 500
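```

A full DCGAN invocation follows the same pattern as the DRAW and FaceNet commands shown further down. The sketch below is illustrative only: the `--model-name dcgan` value, the placeholder paths, and the docker image tag are assumptions, not values confirmed by the DCGAN README.

```
$ cd /home/<user>/models/benchmarks
$ python launch_benchmark.py \
    --precision fp32 \
    --model-name dcgan \
    --framework tensorflow \
    --mode inference \
    --batch-size 100 \
    --socket-id 0 \
    --checkpoint /home/<user>/dcgan_checkpoints \
    --data-location /home/<user>/dcgan_dataset \
    --model-source-dir /home/<user>/dcgan_model_source \
    --docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```
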
18 changes: 9 additions & 9 deletions benchmarks/content_creation/tensorflow/draw/README.md
@@ -18,7 +18,7 @@ modes/precisions:
```

The mnist directory will be passed as the dataset location when we
run the benchmarking script in step 4.
run the model script in step 4.

2. Download and extract the pretrained model:
```
@@ -27,21 +27,21 @@ modes/precisions:
```

3. Clone this [intelai/models](https://github.com/IntelAI/models) repo,
which contains the scripts that we will be using to run benchmarking
for DRAW. After the clone has completed, navigate to the `benchmarks`
which contains the DRAW model scripts.
After the clone has completed, navigate to the `benchmarks`
directory in the repository.

```
$ git clone https://github.com/IntelAI/models.git
$ cd models/benchmarks
```

4. Run benchmarking for either throughput or latency using the commands
4. Run the model for either batch or online inference using the commands
below. Replace the `--data-location` path with your `mnist`
dataset directory from step 1 and the `--checkpoint` path with the files that you
downloaded and extracted in step 2.

* Run benchmarking for latency (with `--batch-size 1`):
* Run DRAW for online inference (with `--batch-size 1`):
```
python launch_benchmark.py \
--precision fp32 \
@@ -54,7 +54,7 @@ modes/precisions:
--batch-size 1 \
--socket-id 0
```
* Run benchmarking for throughput (with `--batch-size 100`):
* Run DRAW for batch inference (with `--batch-size 100`):
```
python launch_benchmark.py \
--precision fp32 \
@@ -70,9 +70,9 @@ modes/precisions:
Note that the `--verbose` or `--output-dir` flag can be added to any of the above
commands to get additional debug output or change the default output location.

5. The log files for each benchmarking run are saved at the value of `--output-dir`.
5. The log files for each run are saved at the value of `--output-dir`.

* Below is a sample log file tail when benchmarking latency:
* Below is a sample log file tail when testing online inference:
```
...
Elapsed Time 0.006622
@@ -88,7 +88,7 @@ modes/precisions:
Log location outside container: {--output-dir value}/benchmark_draw_inference_fp32_20190123_012947.log
```

* Below is a sample log file tail when benchmarking throughput:
* Below is a sample log file tail when testing batch inference:
```
Elapsed Time 0.028355
Elapsed Time 0.028221
@@ -4,8 +4,7 @@ This document has instructions for how to run FaceNet for the
following modes/precisions:
* [FP32 inference](#fp32-inference-instructions)

Benchmarking instructions and scripts for model training and inference
other precisions are coming later.
Script instructions for model training and inference for other precisions are coming later.

## FP32 Inference Instructions

@@ -37,18 +36,17 @@ Instructions for downloading the dataset and converting it can be found in the d
5. Navigate to the `benchmarks` directory in your local clone of
the [intelai/models](https://github.com/IntelAI/models) repo from step 2.
The `launch_benchmark.py` script in the `benchmarks` directory is
used for starting a benchmarking run in a optimized TensorFlow docker
used for starting a model run in an optimized TensorFlow docker
container. It has arguments to specify which model, framework, mode,
precision, and docker image.

Substitute in your own `--checkpoint` pretrained model file path (from step 3),
and `--data-location` (from step 4).

FaceNet can be run for latency benchmarking, throughput
benchmarking, or accuracy. Use one of the following examples below,
depending on your use case.
FaceNet can be run for testing online inference, batch inference, or accuracy.
Use one of the examples below, depending on your use case.

* For latency (using `--batch-size 1`):
* For online inference (using `--batch-size 1`):

```
python launch_benchmark.py \
@@ -63,7 +61,7 @@ python launch_benchmark.py \
--model-source-dir /home/<user>/facenet/ \
--docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```
Example log tail when benchmarking for latency:
Example log tail for online inference:
```
Batch 979 elapsed Time 0.0297989845276
Batch 989 elapsed Time 0.029657125473
@@ -85,7 +83,7 @@ Ran inference with batch size 1
Log location outside container: {--output-dir value}/benchmark_facenet_inference_fp32_20190328_205911.log
```

* For throughput (using `--batch-size 100`):
* For batch inference (using `--batch-size 100`):

```
python launch_benchmark.py \
@@ -100,7 +98,7 @@ python launch_benchmark.py \
--model-source-dir /home/<user>/facenet/ \
--docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```
Example log tail when benchmarking for throughput:
Example log tail for batch inference:
```
Batch 219 elapsed Time 0.446497917175
Batch 229 elapsed Time 0.422048091888
@@ -134,7 +132,7 @@ python launch_benchmark.py \
--model-source-dir /home/<user>/facenet/ \
--docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```
Example log tail when benchmarking for accuracy:
Example log tail for accuracy:
```
Batch 219 elapsed Time 0.398629188538
Batch 229 elapsed Time 0.354953050613
@@ -4,8 +4,7 @@ This document has instructions for how to run MTCC for the
following modes/precisions:
* [FP32 inference](#fp32-inference-instructions)

Benchmarking instructions and scripts for the MTCC model training and inference
other precisions are coming later.
Instructions for MTCC model training and inference for other precisions are coming later.

## FP32 Inference Instructions

@@ -33,7 +32,7 @@ other precisions are coming later.
```

4. Clone the [intelai/models](https://github.com/intelai/models) repo.
This repo has the launch script for running benchmarking.
This repo has the launch script for running models.

```
$ git clone https://github.com/IntelAI/models.git
@@ -43,7 +42,7 @@ This repo has the launch script for running benchmarking.
5. Run the `launch_benchmark.py` script from the intelai/models repo with the appropriate parameters including: the `--model-source-dir` from step 1, `--data-location` from step 2,
and the `--checkpoint` from step 3.

Run benchmarking:
Run:
```
$ cd /home/<user>/models/benchmarks
@@ -61,7 +60,7 @@

6. The log file is saved to the value of `--output-dir`.

Below is a sample log file tail when running benchmarking for throughput,latency and accuracy:
Below is a sample log file tail when running for batch inference, online inference, and accuracy:

```
time cost 0.459 pnet 0.166 rnet 0.144 onet 0.149
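```

A full MTCC run likewise plugs the paths from steps 1-3 into `launch_benchmark.py`. The sketch below is illustrative only: the `--model-name mtcc` value, the placeholder paths, and the docker image tag are assumptions.

```
$ cd /home/<user>/models/benchmarks
$ python launch_benchmark.py \
    --precision fp32 \
    --model-name mtcc \
    --framework tensorflow \
    --mode inference \
    --socket-id 0 \
    --model-source-dir /home/<user>/mtcc_model_source \
    --data-location /home/<user>/mtcc_dataset \
    --checkpoint /home/<user>/mtcc_checkpoints \
    --docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl
```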