This repository has been archived by the owner on Aug 5, 2022. It is now read-only.

ReQuEST Artifact Installation Guide

guomingz edited this page Feb 13, 2018 · 6 revisions

Intel Caffe enabled 8-bit inference of convolutional neural networks in its 1.1.0 release, and we submitted a paper on this work to ReQuEST 2018. Correspondingly, we provide this step-by-step tutorial to reproduce the results on the Amazon AWS cloud.

0. Prerequisites

  • Select an AWS cloud instance that contains the pre-built Caffe;
  • Install the Intel C++ compiler on the AWS cloud instance;
  • Run source <compiler root>/bin/compilervars.sh {ia32 OR intel64} or source <compiler root>/bin/compilervars.csh {ia32 OR intel64}, e.g. source /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh intel64;
  • Download the benchmark zip file from Dropbox;
  • Unzip it and change the working directory to the benchmark folder.
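Before moving on, it can help to sanity-check that the prerequisites are in place. The snippet below is a minimal sketch, not part of the official artifact: it assumes that sourcing compilervars.sh puts icc on the PATH and that the unzipped archive folder is named benchmark (both are assumptions; adjust to your instance).

```python
import os
import shutil


def check_prerequisites(benchmark_dir="benchmark"):
    """Return a list of problems found; empty list means ready to benchmark."""
    problems = []
    # compilervars.sh adds the compiler's bin directory to PATH,
    # so finding icc is a reasonable proxy for a sourced environment.
    if shutil.which("icc") is None:
        problems.append("Intel C++ compiler not on PATH (source compilervars.sh first)")
    # The benchmark folder name is an assumption; rename to match your unzip output.
    if not os.path.isdir(benchmark_dir):
        problems.append("benchmark folder not found (unzip the Dropbox archive first)")
    return problems


if __name__ == "__main__":
    for problem in check_prerequisites():
        print("WARNING:", problem)
```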

1. Throughput Testing

  • Run the command python benchmark.py -m throughput.

2. Latency Testing

  • Run the command python benchmark.py -m latency.
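The internals of benchmark.py are not shown in this guide, but the two modes measure different things: throughput mode drives large batches and reports images per second, while latency mode runs batch size 1 and reports time per image. The sketch below illustrates that distinction with a dummy run_inference stand-in for a Caffe forward pass (the function names and parameters are illustrative assumptions, not the artifact's actual code).

```python
import time


def run_inference(batch):
    # Hypothetical stand-in for a Caffe forward pass; sleeps to simulate work.
    time.sleep(0.001)
    return [0] * len(batch)


def measure_throughput(batch_size=32, iterations=10):
    # Throughput mode: large batches, report images processed per second.
    start = time.perf_counter()
    for _ in range(iterations):
        run_inference([None] * batch_size)
    elapsed = time.perf_counter() - start
    return batch_size * iterations / elapsed


def measure_latency(iterations=10):
    # Latency mode: batch size 1, report average milliseconds per image.
    start = time.perf_counter()
    for _ in range(iterations):
        run_inference([None])
    elapsed = time.perf_counter() - start
    return elapsed / iterations * 1000.0


if __name__ == "__main__":
    print("throughput (images/s): %.1f" % measure_throughput())
    print("latency (ms/image): %.2f" % measure_latency())
```

Because throughput amortizes fixed per-call overhead across a batch, the two numbers are not reciprocals of each other, which is why the benchmark exposes them as separate modes.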

3. Accuracy Testing

  • Use the calibration tool to generate the quantized prototxt from the pre-trained FP32 weights, which can be downloaded from this link.

  • Copy the weights, the FP32 prototxt, and the quantized prototxt to the /path/to/benchmark/accuracy folder, and rename each file to match the name of the corresponding pre-existing example.

  • We strongly suggest checking the file path definitions in the prototxt files; use absolute paths rather than relative paths.

  • Run the command python benchmark.py -m accuracy.
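The point of this mode is to compare the quantized (INT8) model's classification accuracy against the FP32 baseline. The snippet below is a minimal sketch of that comparison using top-1 accuracy over hand-made scores; the score values and variable names are invented for illustration and are not outputs of the actual models.

```python
def top1_accuracy(predictions, labels):
    """predictions: per-image lists of class scores; labels: ground-truth indices."""
    correct = 0
    for scores, label in zip(predictions, labels):
        # argmax over the class scores gives the predicted class.
        if max(range(len(scores)), key=scores.__getitem__) == label:
            correct += 1
    return correct / len(labels)


# Hypothetical FP32 and quantized (INT8) softmax outputs for 3 images.
fp32_scores = [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1], [0.2, 0.3, 0.5]]
int8_scores = [[0.1, 0.7, 0.2], [0.6, 0.3, 0.1], [0.4, 0.3, 0.3]]
labels = [1, 0, 2]

print("FP32 top-1:", top1_accuracy(fp32_scores, labels))
print("INT8 top-1:", top1_accuracy(int8_scores, labels))
```

A well-calibrated quantized model should score within a small margin of the FP32 baseline on the same validation set.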
