This repo contains the following tools for running MLPerf benchmarks:
- eval.py: For the MLPerf Tiny visual wake words (VWW) benchmark, this script downloads the evaluation dataset from SiLabs and runs both TFLite reference models (the int8 model and the float model) on the 1000 images listed in y_labels.csv to measure their accuracy (a minimal sketch of such an evaluation loop appears after this list).
- eval.ipynb: A Jupyter notebook generated from eval.py, which can be run directly from your browser.
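
For orientation, here is a minimal sketch of what such an evaluation loop looks like with the TFLite Python interpreter. The model path, CSV column order, image size, and input preprocessing below are assumptions for illustration, not eval.py's confirmed interface:

```python
# Minimal sketch of a TFLite accuracy-evaluation loop over images listed in a CSV.
# Assumptions (not confirmed by eval.py): model file name, CSV column order,
# 96x96 input size, and [0, 1] float preprocessing.
import csv
import numpy as np
import tensorflow as tf
from PIL import Image

MODEL_PATH = "vww_96_int8.tflite"  # hypothetical path to the int8 reference model
LABELS_CSV = "y_labels.csv"

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

correct = total = 0
with open(LABELS_CSV) as f:
    for row in csv.reader(f):
        filename, label = row[0].strip(), int(row[-1])  # assumed column order
        img = Image.open(filename).resize((96, 96))
        x = np.asarray(img, dtype=np.float32) / 255.0  # assumed preprocessing
        if inp["dtype"] == np.int8:
            # The int8 model needs its input quantized with the model's scale
            # and zero point; the float model takes the float32 array as-is.
            scale, zero_point = inp["quantization"]
            x = (x / scale + zero_point).astype(np.int8)
        interpreter.set_tensor(inp["index"], x[np.newaxis, ...])
        interpreter.invoke()
        pred = int(np.argmax(interpreter.get_tensor(out["index"])[0]))
        correct += pred == label
        total += 1

print(f"accuracy: {correct / total:.4f}")
```

The same loop works for both reference models because the quantization step is gated on the input tensor's dtype; only the model path changes between the int8 and float runs.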