CompilerGym

Reinforcement learning environments for compiler optimization tasks.

Check the website for more information.

Introduction

CompilerGym is a library of easy-to-use and performant reinforcement learning environments for compiler tasks. It allows ML researchers to interact with important compiler optimization problems in a language and vocabulary with which they are comfortable, and provides a toolkit for systems developers to expose new compiler tasks for ML research. We aim to act as a catalyst for making compilers faster using ML. Key features include:

  • Ease of use: built on the popular Gym interface; use Python to write your agent. With CompilerGym, building ML models for compiler research problems is as easy as building ML models to play video games.

  • Batteries included: includes everything required to get started. Wraps real world programs and compilers to provide millions of instances for training. Provides multiple kinds of pre-computed program representations: you can focus on end-to-end deep learning or features + boosted trees, all the way up to graph models. Appropriate reward functions and loss functions for optimization targets are provided out of the box.

  • Reproducible: provides validation for correctness of results, common baselines, and leaderboards for you to submit your results.

For a glimpse of what's to come, check out our roadmap.

News

  • November 2022: CompilerGym v0.2.5 adds support for Python 3.10 and drops support for 3.7. See release notes for full details.
  • April 2022: ⭐️ CompilerGym wins the distinguished paper award at CGO'22! You can read our work here.
  • April 2022: 📖 Our tutorial at CGO'22 was well attended. If you missed the event, you can work through the materials here.
  • September 2021: 📄 CompilerGym was featured on the Meta AI research blog. You can read the post here.

Installation

Install the latest CompilerGym release using:

pip install -U compiler_gym

See INSTALL.md for further details.
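
To check that the installation worked, you can import the package and print its version. This is a minimal sketch; it assumes the package's __version__ attribute, which recent releases expose:

>>> import compiler_gym                      # should import without error
>>> compiler_gym.__version__                 # e.g. '0.2.5'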

Usage

Starting with CompilerGym is simple. If you are not already familiar with the gym interface, refer to the getting started guide for an overview of the key concepts.

In Python, import compiler_gym to use the environments:

>>> import compiler_gym                      # imports the CompilerGym environments
>>> env = compiler_gym.make(                 # creates a new environment (same as gym.make)
...     "llvm-v0",                           # selects the compiler to use
...     benchmark="cbench-v1/qsort",         # selects the program to compile
...     observation_space="Autophase",       # selects the observation space
...     reward_space="IrInstructionCountOz", # selects the optimization target
... )
>>> env.reset()                              # starts a new compilation session
>>> env.render()                             # prints the IR of the program
>>> env.step(env.action_space.sample())      # applies a random optimization, updates state/reward/actions
>>> env.close()                              # closes the environment, freeing resources

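The snippet below grows this into a complete episode. It is a minimal random-agent sketch, assuming the same llvm-v0 environment as above, the classic Gym step API of (observation, reward, done, info), and an arbitrary cap of 100 steps:

import compiler_gym

# A minimal random-agent sketch. The 100-step cap is arbitrary.
with compiler_gym.make(
    "llvm-v0",
    benchmark="cbench-v1/qsort",
    observation_space="Autophase",
    reward_space="IrInstructionCountOz",
) as env:                                    # environments can be used as context managers
    env.reset()
    total_reward = 0.0
    for _ in range(100):
        action = env.action_space.sample()   # pick a random optimization pass
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:                             # the environment may end the episode early
            break
    print(f"Cumulative reward: {total_reward:.3f}")
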
See the examples directory for agent implementations, environment extensions, and more. See the documentation website for the API reference.

Leaderboards

These leaderboards track the performance of user-submitted algorithms for CompilerGym tasks. To submit a result, please see this document.

LLVM Instruction Count

LLVM is a popular open source compiler used widely in industry and research. The llvm-ic-v0 environment exposes LLVM's optimizing passes as a set of actions that can be applied to a particular program. The goal of the agent is to select the sequence of optimizations that leads to the greatest reduction in the instruction count of the program being compiled. Reward is the reduction in instruction count, scaled by the reduction achieved by LLVM's built-in -Oz pipeline.

This leaderboard tracks the results achieved by algorithms on the llvm-ic-v0 environment on the 23 benchmarks in the cbench-v1 dataset.

| Author | Algorithm | Links | Date | Walltime (mean) | Codesize Reduction (geomean) |
| --- | --- | --- | --- | --- | --- |
| Robin Schmöcker, Yannik Mahlau, Nicolas Fröhlich | PPO + Guided Search | write-up, results | 2022-02 | 69.821s | 1.070× |
| Facebook | Random search (t=10800) | write-up, results | 2021-03 | 10,512.356s | 1.062× |
| Facebook | Random search (t=3600) | write-up, results | 2021-03 | 3,630.821s | 1.061× |
| Facebook | Greedy search | write-up, results | 2021-03 | 169.237s | 1.055× |
| Anthony W. Jung | GATv2 + DD-PPO | write-up, results | 2022-06 | 258.149s | 1.047× |
| Facebook | Random search (t=60) | write-up, results | 2021-03 | 91.215s | 1.045× |
| Facebook | e-Greedy search (e=0.1) | write-up, results | 2021-03 | 351.611s | 1.041× |
| Jiadong Guo | Tabular Q (N=5000, H=10) | write-up, results | 2021-04 | 2,534.305s | 1.036× |
| Facebook | Random search (t=10) | write-up, results | 2021-03 | 42.939s | 1.031× |
| Patrick Hesse | DQN (N=4000, H=10) | write-up, results | 2021-06 | 91.018s | 1.029× |
| Jiadong Guo | Tabular Q (N=2000, H=5) | write-up, results | 2021-04 | 694.105s | 0.988× |
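
Below is a hedged sketch of how such an evaluation can be driven programmatically, iterating over the cbench-v1 benchmarks in the llvm-ic-v0 environment. The run_my_policy helper is hypothetical; substitute your own search or learned policy:

import compiler_gym

with compiler_gym.make("llvm-ic-v0") as env:
    for benchmark in env.datasets["cbench-v1"].benchmarks():
        env.reset(benchmark=benchmark)
        # run_my_policy(env)                  # hypothetical: apply your optimization policy here
        print(benchmark, env.episode_reward)  # cumulative reward relative to -Oz for this benchmark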

Contributing

We welcome contributions to CompilerGym. If you are interested in contributing, please see this document.

Citation

If you use CompilerGym in any of your work, please cite our paper:

@inproceedings{CompilerGym,
      title={{CompilerGym: Robust, Performant Compiler Optimization Environments for AI Research}},
      author={Chris Cummins and Bram Wasti and Jiadong Guo and Brandon Cui and Jason Ansel and Sahir Gomez and Somya Jain and Jia Liu and Olivier Teytaud and Benoit Steiner and Yuandong Tian and Hugh Leather},
      booktitle={CGO},
      year={2022},
}
