> **Note:** This repository was archived by the owner on Aug 5, 2022. It is now read-only.
# ReQuEST Artifact Installation Guide

guomingz edited this page Feb 13, 2018 · 6 revisions
Intel Caffe enabled 8-bit inference of convolutional neural networks in its 1.1.0 release, and we submitted a corresponding paper to ReQuEST 2018. This step-by-step tutorial reproduces the paper's results on an Amazon AWS cloud instance.
- Select an AWS cloud instance that contains the pre-built Caffe;
- Install the Intel C++ compiler on the AWS cloud instance;
- Run `source <compiler root>/bin/compilervars.sh {ia32 OR intel64}` or `source <compiler root>/bin/compilervars.csh {ia32 OR intel64}`, e.g. `source /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh intel64`;
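To sanity-check that the compiler environment was actually exported into the current shell, you can look for the install prefix on variables such as `LD_LIBRARY_PATH`. A minimal sketch; the `/opt/intel` prefix and the helper name are illustrative, not part of the artifact:

```python
import os

def on_path_var(var_name, prefix, environ=os.environ):
    """Return True if any entry of the colon-separated variable starts with prefix."""
    value = environ.get(var_name, "")
    return any(entry.startswith(prefix) for entry in value.split(":") if entry)

# Synthetic environment mimicking what compilervars.sh exports:
fake_env = {"LD_LIBRARY_PATH": "/opt/intel/compilers_and_libraries/linux/lib/intel64:/usr/lib"}
print(on_path_var("LD_LIBRARY_PATH", "/opt/intel", fake_env))  # True
```

After sourcing the script, calling `on_path_var("LD_LIBRARY_PATH", "/opt/intel")` against the real environment should return `True`.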
- Download the benchmark zip file from Dropbox;
- Unzip it and change the working directory to the benchmark folder;
- Run the command `python benchmark.py -m throughput`;
- Run the command `python benchmark.py -m latency`;
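If you want to drive the two runs above from a script, a sketch is below. The `parse_metric` helper and the `Throughput: 245.3 images/sec` report format are assumptions for illustration, not the tool's documented output:

```python
import re
import subprocess

def run_benchmark(mode):
    """Invoke benchmark.py in the given mode and return its raw stdout.

    Assumes benchmark.py lives in the current working directory, as in the
    steps above; check=True raises if the run fails."""
    result = subprocess.run(["python", "benchmark.py", "-m", mode],
                            capture_output=True, text=True, check=True)
    return result.stdout

def parse_metric(stdout, label):
    """Pull a 'label: <number>' figure out of the report (hypothetical format)."""
    m = re.search(rf"{label}\s*[:=]\s*([0-9.]+)", stdout, re.IGNORECASE)
    return float(m.group(1)) if m else None

# e.g. parse_metric("Throughput: 245.3 images/sec", "Throughput") -> 245.3
```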
- Use the calibration tool to generate the quantized prototxt with the pre-trained FP32 weights;
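For background, calibration tools of this kind pick scale factors that map the observed FP32 value range onto the int8 range. A minimal sketch of symmetric linear quantization, assuming nothing about Intel Caffe's actual implementation:

```python
def quantize_symmetric(xs, num_bits=8):
    """Symmetric linear quantization: map [-max|x|, max|x|] onto the int8 range."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for 8 bits
    max_abs = max(abs(v) for v in xs) or 1.0
    scale = max_abs / qmax                       # FP32 step per int8 step
    q = [max(-qmax - 1, min(qmax, round(v * qmax / max_abs))) for v in xs]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

xs = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale = quantize_symmetric(xs)
err = max(abs(a - b) for a, b in zip(dequantize(q, scale), xs))
print(q)   # int8 codes
print(err) # small reconstruction error (well under 0.01 here)
```

The real calibration tool additionally samples activations over a calibration dataset to choose per-layer ranges, which is what the generated quantized prototxt records.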
- Copy the weights, the FP32 prototxt, and the quantized prototxt to the /path/to/benchmark/accuracy folder, and rename each file to match the corresponding pre-existing example;
- We strongly suggest checking the file path definitions in the prototxt files; it is better to use absolute paths rather than relative ones;
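A quick way to catch relative paths before the accuracy run is to scan the prototxt for the usual Caffe data-layer path fields. The field list here (`source`, `root_folder`, `mean_file`) is an assumption; extend it to match your prototxt:

```python
import re

def relative_paths(prototxt_text):
    """Return path values in source:/root_folder:/mean_file: fields that are not absolute.

    Field names are the common Caffe data-layer ones (an assumption here)."""
    fields = re.findall(r'(?:source|root_folder|mean_file):\s*"([^"]+)"', prototxt_text)
    return [p for p in fields if not p.startswith("/")]

sample = 'source: "examples/imagenet/ilsvrc12_val_lmdb"\nmean_file: "/data/mean.binaryproto"'
print(relative_paths(sample))  # ['examples/imagenet/ilsvrc12_val_lmdb']
```

Any path it reports should be rewritten as absolute before running the accuracy mode.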
- Run the command `python benchmark.py -m accuracy`.