Deep Learning accuracy validation framework

Installation

Prerequisites

Install prerequisites first:

1. Python

Accuracy Checker uses Python 3. Install it first:

sudo apt-get install python3 python3-dev python3-setuptools python3-pip

Python setuptools and the Python package manager (pip) install packages into the system directory by default. Installation of Accuracy Checker has been tested only inside a virtual environment.

To use a virtual environment, install and create it first:

python3 -m pip install virtualenv
python3 -m virtualenv -p `which python3` <directory_for_environment>

Before working inside the virtual environment, activate it:

source <directory_for_environment>/bin/activate

The virtual environment can be deactivated with:

deactivate

2. Frameworks

The next step is installing backend frameworks for Accuracy Checker.

To evaluate a model, the corresponding backend framework has to be installed. Accuracy Checker supports the frameworks listed in the Launchers section below (Caffe, OpenVINO Inference Engine (dlsdk), MXNet, TensorFlow, TensorFlow Lite, OpenCV, ONNX Runtime).

You can use any of them, or several at a time.

Install accuracy checker

Once all prerequisites are installed, you are ready to install Accuracy Checker:

python3 setup.py install
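
Putting the prerequisite and installation steps together, a typical sequence run from the Accuracy Checker source directory might look like this (the environment directory name accuracy_checker_env is a placeholder):

python3 -m pip install virtualenv
python3 -m virtualenv -p `which python3` accuracy_checker_env
source accuracy_checker_env/bin/activate
python3 setup.py install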

Usage

You may test your installation and get familiar with Accuracy Checker by running the sample.

Once Accuracy Checker is installed, you can evaluate your configurations with:

accuracy_check -c path/to/configuration_file -m /path/to/models -s /path/to/source/data -a /path/to/annotation

All relative paths in config files are prefixed with the values specified on the command line (see the example after this list):

  • -c, --config path to the configuration file.
  • -m, --models specifies the directory in which the models and weights declared in the config file will be searched for.
  • -s, --source specifies the directory in which input images will be searched for.
  • -a, --annotations specifies the directory in which annotation and meta files will be searched for.
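
For example, with the configuration shown in the Configuration section below, a hypothetical invocation such as

accuracy_check -c alexnet.yml -m /data/models -s /data/datasets -a /data/annotations

would look for the model file at /data/models/public/alexnet/caffe/bvlc_alexnet.prototxt, because the relative model path from the config is prefixed with the -m value (alexnet.yml and the /data/... directories are placeholders).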

Refer to -h, --help for the full list of command-line options. Some of the optional arguments are listed below, followed by an example invocation:

  • -r, --root prefix for all relative paths.
  • -d, --definitions path to the global configuration (definitions) file.
  • -e, --extensions directory with Inference Engine extensions.
  • -b, --bitstreams directory with bitstreams (for Inference Engine with the FPGA plugin).
  • -C, --converted_models directory to store models converted by Model Optimizer (used by the DLSDK launcher only).
  • -tf, --target_framework framework to run inference with.
  • -td, --target_devices devices to run inference on. You can specify several devices using space as a delimiter.
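
As a sketch with placeholder paths, an invocation that targets the DLSDK launcher on a specific device might look like:

accuracy_check -c alexnet.yml -m /data/models -s /data/datasets -a /data/annotations -tf dlsdk -td CPU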

Configuration

The validation process is declared in a config file. Every validated model has to have its own entry in the models list, with a distinct name and the other properties described below.

There is also a definitions file, which declares global options shared across all models. The config file has priority over the definitions file.

example:

models:
- name: model_name
  launchers:
    - framework: caffe
      model:   public/alexnet/caffe/bvlc_alexnet.prototxt
      weights: public/alexnet/caffe/bvlc_alexnet.caffemodel
      adapter: classification
      batch: 128
  datasets:
    - name: dataset_name

Launchers

A launcher is a description of how your model should be executed. Each launcher configuration starts with setting the framework name. Currently caffe, dlsdk, mxnet, tf, tf_lite, opencv, and onnx_runtime are supported. Launcher descriptions differ between frameworks; please refer to the framework-specific launcher documentation for details. A sketch of a dlsdk launcher entry follows.
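
For instance, a minimal sketch of a dlsdk launcher entry, assuming the model has already been converted to OpenVINO IR; the .xml/.bin paths are placeholders and are not taken from this document:

- framework: dlsdk
  # paths below are placeholders; the other fields mirror the caffe example above
  model:   public/alexnet/dlsdk/alexnet.xml
  weights: public/alexnet/dlsdk/alexnet.bin
  adapter: classification
  batch: 128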

Datasets

A dataset entry describes the data on which the model should be evaluated, all required preprocessing and postprocessing/filtering steps, and the metrics that will be used for evaluation.

If your dataset is a well-known competition problem (COCO, Pascal VOC, ...) and/or can potentially be reused for other models, it is reasonable to declare it in the global configuration file (definitions file). This way, your local configuration file only needs to provide the dataset name, and all required steps will be picked up from the global one. To pass the path to this global configuration, use the --definitions argument of the CLI. A sketch of this split is shown below.
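
A minimal sketch of that split, assembled from the fields used elsewhere in this document (the file names and the top-level datasets key of the definitions file are assumptions):

# definitions file (passed via -d / --definitions); top-level datasets key is assumed
datasets:
  - name: dataset_name
    data_source: images_folder
    annotation: annotation.pickle
    metrics:
      - type: accuracy

# local config file: the dataset is referenced by name only
models:
- name: model_name
  launchers:
    - framework: caffe
      model:   public/alexnet/caffe/bvlc_alexnet.prototxt
      weights: public/alexnet/caffe/bvlc_alexnet.caffemodel
      adapter: classification
      batch: 128
  datasets:
    - name: dataset_name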

Each dataset entry must have:

  • name - unique identifier of the dataset.
  • data_source - path to the directory where input data is stored.
  • metrics - list of metrics that should be computed.

And optionally:

  • preprocessing - list of preprocessing steps applied to the input data. If you want the calculated metrics to match the reported ones, you must reproduce the preprocessing from the canonical paper of your topology, or ask the topology author about the required steps if it is an ICV topology.
  • postprocessing - list of postprocessing steps.
  • reader - approach for data reading. The default reader is opencv_imread.

The dataset entry must also contain annotation-related data. You can convert the annotation in place using:

  • annotation_conversion - parameters for annotation conversion (a sketch is shown after this list)

or use an existing annotation file and dataset meta:

  • annotation - path to the annotation file. The annotation must first be converted to the representation of the dataset problem; choose one of the converters from annotation-converters if a converter for your dataset already exists, or write your own.
  • dataset_meta - path to the metadata file (generated by the converter). More detailed information about annotation conversion can be found in the Annotation Conversion Guide.
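
A minimal sketch of in-place annotation conversion inside a dataset entry; the converter name and its parameter are assumptions for illustration and are not taken from this document:

- name: dataset_name
  data_source: images_folder
  annotation_conversion:
    # converter name and parameter below are illustrative assumptions
    converter: imagenet
    annotation_file: val.txt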

example of dataset definition:

- name: dataset_name
  annotation: annotation.pickle
  data_source: images_folder

  preprocessing:
    - type: resize
      dst_width: 256
      dst_height: 256

    - type: normalization
      mean: imagenet

    - type: crop
      dst_width: 227
      dst_height: 227

  metrics:
    - type: accuracy

Preprocessing, Metrics, Postprocessing

Each entry in preprocessing, metrics, and postprocessing must have a type field; the other options are specific to the type. If you do not provide any other options, they will be picked up from the definitions file.

You may also find the per-type documentation for preprocessors, metrics, and postprocessors useful.

You may optionally provide a reference field for a metric if you want the calculated metric to be tested against a specific value (e.g. the value reported in the canonical paper).

Some metrics support vector results (e.g. mAP can return the average precision for each detection class). You can change the view mode for metric results using a presenter (e.g. print_vector, print_scalar); see the sketch after the example below.

example:

metrics:
- type: accuracy
  top_k: 5
  reference: 86.43
  threshold: 0.005
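
A sketch of selecting a presenter for a vector-capable metric; the map metric type name is an assumption for illustration:

metrics:
- type: map              # metric type name assumed for illustration
  presenter: print_vector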

Testing new models

The typical workflow for testing a new model includes:

  1. Convert the annotation of your dataset. Use one of the converters from annotation-converters, or write your own if there is no converter for your dataset. You can find detailed instructions on how to use converters in the Annotation Conversion Guide.
  2. Choose one of the adapters or write your own. An adapter converts the raw output produced by the framework into a high-level, problem-specific representation (e.g. ClassificationPrediction, DetectionPrediction, etc.).
  3. Reproduce the preprocessing, metrics, and postprocessing from the canonical paper.
  4. Create an entry in the config file and execute the evaluation (a sketch of a complete entry is shown below).
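
Putting these steps together, a sketch of a complete model entry assembled only from fields shown above (all names and paths are placeholders; the dataset could equally be declared in the definitions file and referenced by name):

models:
- name: new_model_name
  launchers:
    - framework: caffe
      model:   new_model.prototxt
      weights: new_model.caffemodel
      adapter: classification
      batch: 128
  datasets:
    - name: new_dataset_name
      data_source: images_folder
      annotation: annotation.pickle
      preprocessing:
        - type: resize
          dst_width: 256
          dst_height: 256
      metrics:
        - type: accuracy
          top_k: 5

The entry is then evaluated with the accuracy_check command shown in the Usage section.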