Common CM interface to make it easier to prepare, run and reproduce experiments from research projects
This repository provides a unified Collective Mind interface (MLCommons CM) to access, run and reproduce experiments from research projects and benchmarks (ACM, IEEE, NeurIPS, ICML, MLCommons, MLPerf ...) in an automated way, using the artifact evaluation methodology from ACM/IEEE/cTuning, MLCommons and NeurIPS.
While working with the community to reproduce or replicate 150+ research papers during artifact evaluation, we observed that reviewers spend most of their time in the kick-the-tires phase, deciphering various READMEs and scripts to figure out how to prepare and run experiments.
This experience motivated us to develop a simple, technology-agnostic and human-friendly interface and automation language (MLCommons Collective Mind) that provides a common way to prepare, run and visualize experiments from any paper or research project.
The goal is to make it easier for the community and evaluators to start reproducing/replicating research results and even fully automate this process in the future.
Install the CM automation language as described here.
cm pull repo mlcommons@cm4mlops --branch=dev
cm pull repo ctuning@cm4research
cm find script --tags=reproduce,project
If you use Python, we suggest setting up a Python virtual environment via CM as follows:
cm run script "install python-venv" --name=reproducibility
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=reproducibility"
This virtual environment will be automatically activated for all subsequent CM commands while keeping your native Python installation intact.
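Note that CM_SCRIPT_EXTRA_CMD is a plain environment variable, so the setting above only lasts for the current shell session. A minimal sketch, assuming a bash shell, to make it persistent:

# Append the setting to your shell profile so new terminals
# keep using the same CM Python virtual environment
echo 'export CM_SCRIPT_EXTRA_CMD="--adr.python.name=reproducibility"' >> ~/.bashrc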
Each CM script wrapper should support the following variations:
- install_deps
- run
- analyze
- plot
- validate
- reproduce
You can run them as follows:
cm find script --tags=reproduce,project
cm run script {name from above list} --tags=_{variation}
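For example, assuming a hypothetical script found above with the tags reproduce,project,my-paper, a full cycle might look as follows (cmr is a built-in shortcut for cm run script):

# Hypothetical tags - substitute the tags of a script found via cm find script
cm run script --tags=reproduce,project,my-paper,_install_deps
cm run script --tags=reproduce,project,my-paper,_run
cmr "reproduce project my-paper _plot"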
Many CM commands download or produce new artifacts and store them in the CM cache (including CM Python virtual environments) to avoid polluting the user's system.
You can view the current CM cache as follows:
cm show cache
cm show cache --tags=python
You can clean the CM cache and start from scratch at any time using the following command:
cm rm cache -f
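You can also clean only selected cache entries by filtering them with the same tags shown by cm show cache; for example, to remove only the cached Python artifacts:

cm rm cache --tags=python -f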
You can simply copy any existing CM script, update its alias in _cm.yaml, and then extend it based on the following examples (see also the sketch after this list):
- CM script to reproduce results from a MICRO paper
- CM script to reproduce an IPOL journal paper
- CM tutorial to reproduce an IPOL journal paper.
- _cm.yaml description to download and extract files from Zenodo/Dropbox via CM
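As a rough illustration, here is a minimal sketch of a _cm.yaml for a new wrapper; the alias, uid, tags and script names below are hypothetical, and the exact meta keys should be checked against the examples above:

# Hypothetical _cm.yaml sketch for a new reproducibility wrapper
alias: reproduce-project-my-paper   # new alias for your script (hypothetical)
uid: 0123456789abcdef               # new unique ID (do not reuse the one from the copied script)
automation_alias: script
automation_uid: 5b4e0237da074764
tags:
- reproduce
- project
- my-paper
variations:
  install_deps:
    script_name: install_deps       # assumed to select install_deps.sh / install_deps.bat
  run:
    script_name: run                # assumed to select run.sh / run.bat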
After manually updating the meta descriptions of CM artifacts (_cm.yaml or _cm.json), you must reindex them as follows:
cm reindex repo
We are updating a tutorial on adding the CM interface to new projects and papers and will share it soon!
Don't hesitate to open tickets here or contact the cTuning foundation and cKnowledge.org (developers of the MLCommons CM automation framework).