Flow is a computational framework for deep reinforcement learning (RL) and control experiments in traffic microsimulation.
See our website for more information on applying Flow to mixed-autonomy traffic scenarios, along with additional results and videos.
We welcome your contributions.
- Please report bugs or ask questions by submitting a GitHub issue.
- Submit contributions using pull requests.
If you use Flow for academic research, you are highly encouraged to cite our paper:
C. Wu, A. Kreidieh, K. Parvate, E. Vinitsky, A. Bayen, "Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control," CoRR, vol. abs/1710.05465, 2017. [Online]. Available: https://arxiv.org/abs/1710.05465
If you use the benchmarks, you are highly encouraged to also cite the following paper:
Vinitsky, E., Kreidieh, A., Le Flem, L., Kheterpal, N., Jang, K., Wu, F., ... & Bayen, A. M. (2018, October). Benchmarks for reinforcement learning in mixed-autonomy traffic. In Conference on Robot Learning (pp. 399-409).
Flow's contributors include Cathy Wu, Eugene Vinitsky, Aboudy Kreidieh, Kanaad Parvate, Nishant Kheterpal, Kathy Jang, Fangyu Wu, Mahesh Murag, Kevin Chien, and Jonathan Lin. Alumni contributors include Leah Dickstein, Ananth Kuchibhotla, and Nathan Mandi. Flow is supported by the Mobile Sensing Lab at UC Berkeley and by Amazon AWS Machine Learning research grants.