diff --git a/LICENSE.md b/LICENSE.md index a581b58..49fdaeb 100644 --- a/LICENSE.md +++ b/LICENSE.md @@ -1,6 +1,7 @@ MIT License -Copyright (c) 2019 Kai Arulkumaran +Copyright (c) 2019 Kai Arulkumaran (Original PlaNet parts) +Copyright (c) 2020 Yusuke Urakami (Dreamer parts) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: diff --git a/README.md b/README.md index 0af2dd8..1c99df1 100644 --- a/README.md +++ b/README.md @@ -1,40 +1,48 @@ -PlaNet +Dreamer implementation in PyTorch ====== [![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE.md) -PlaNet: A Deep Planning Network for Reinforcement Learning [[1]](#references). Supports symbolic/visual observation spaces. Supports some Gym environments (including classic control/non-MuJoCo environments, so DeepMind Control Suite/MuJoCo are optional dependencies). Hyperparameters have been taken from the original work and are tuned for DeepMind Control Suite, so would need tuning for any other domains (such as the Gym environments). +## Dreamer +This repo implements the Dreamer algorithm from [Dream to Control: Learning Behaviors by Latent Imagination](https://arxiv.org/pdf/1912.01603.pdf), based on [PlaNet-PyTorch](https://github.com/Kaixhin/PlaNet). It has been confirmed to work on DeepMind Control Suite/MuJoCo environments. Hyperparameters have been taken from the paper. -Run with `python.main.py`. For best performance with DeepMind Control Suite, try setting environment variable `MUJOCO_GL=egl` (see instructions and details [here](https://github.com/deepmind/dm_control#rendering)). 
+## Installation +To install all dependencies with Anaconda, run `conda env create -f conda_env.yml` and use `source activate dreamer` to activate the environment. +## Training (e.g. DMC walker-walk) +```bash +python main.py --algo dreamer --env walker-walk --action-repeat 2 --id name-of-experiment +``` -Results and pretrained models can be found in the [releases](https://github.com/Kaixhin/PlaNet/releases). - -Requirements ------------- - -- Python 3 -- [DeepMind Control Suite](https://github.com/deepmind/dm_control) (optional) -- [Gym](https://gym.openai.com/) -- [OpenCV Python](https://pypi.python.org/pypi/opencv-python) -- [Plotly](https://plot.ly/) -- [PyTorch](http://pytorch.org/) - -To install all dependencies with Anaconda run `conda env create -f environment.yml` and use `source activate planet` to activate the environment. +For best performance with DeepMind Control Suite, try setting the environment variable `MUJOCO_GL=egl` (see instructions and details [here](https://github.com/deepmind/dm_control#rendering)). -Links ------ +Use TensorBoard to monitor training: -- [Introducing PlaNet: A Deep Planning Network for Reinforcement Learning](https://ai.googleblog.com/2019/02/introducing-planet-deep-planning.html) -- [google-research/planet](https://github.com/google-research/planet) +`tensorboard --logdir results` -Acknowledgements ----------------- - -- [@danijar](https://github.com/danijar) for [google-research/planet](https://github.com/google-research/planet) and [help reproducing results](https://github.com/google-research/planet/issues/28) -- [@sg2](https://github.com/sg2) for [running experiments](https://github.com/Kaixhin/PlaNet/issues/9) - -References ---------- +## Results +Results and pretrained models can be found in the [releases](https://github.com/Kaixhin/PlaNet/releases). -[1] [Learning Latent Dynamics for Planning from Pixels](https://arxiv.org/abs/1811.04551) +Performance is compared with other SoTA algorithms as follows (Note! 
Tested once using seed 0.) +* [State-SAC](https://github.com/denisyarats/pytorch_sac) +* [PlaNet-PyTorch](https://github.com/Kaixhin/PlaNet) +* [SAC-AE](https://github.com/denisyarats/pytorch_sac_ae) +* [SLAC](https://github.com/ku2482/slac.pytorch) +* [CURL](https://github.com/MishaLaskin/curl) +* [Dreamer (tensorflow2 implementation)](https://github.com/danijar/dreamer) + + +
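The Dreamer algorithm this README implements trains its actor and critic on imagined latent trajectories scored with λ-returns. As a minimal, framework-free sketch of that return computation (the function name and list-based interface are illustrative, not this repo's API):

```python
def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Dreamer-style lambda-returns over an imagined rollout.

    rewards[t]: predicted reward at imagined step t (length H).
    values[t]:  critic estimate v(s_t) for t = 0..H; values[-1]
                bootstraps beyond the imagination horizon.
    """
    assert len(values) == len(rewards) + 1
    last = values[-1]  # bootstrap with the critic value at the horizon
    returns = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        # Blend the one-step TD target with the recursively built return.
        last = rewards[t] + gamma * ((1.0 - lam) * values[t + 1] + lam * last)
        returns[t] = last
    return returns
```

With `lam=1.0` this reduces to the discounted Monte-Carlo return bootstrapped by the critic; with `lam=0.0` it is the one-step TD target.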
+ + + + + + +
+ + +## Links +- [Introducing Dreamer: Scalable Reinforcement Learning Using World Models](https://ai.googleblog.com/2020/03/introducing-dreamer-scalable.html) +- [google-research/dreamer](https://github.com/google-research/dreamer) diff --git a/imgs/ball_in_cup-catch.png b/imgs/ball_in_cup-catch.png new file mode 100644 index 0000000..8e3914b Binary files /dev/null and b/imgs/ball_in_cup-catch.png differ diff --git a/imgs/cartpole-swingup.png b/imgs/cartpole-swingup.png new file mode 100644 index 0000000..98270f8 Binary files /dev/null and b/imgs/cartpole-swingup.png differ diff --git a/imgs/cheetah-run.png b/imgs/cheetah-run.png new file mode 100644 index 0000000..5287d52 Binary files /dev/null and b/imgs/cheetah-run.png differ diff --git a/imgs/finger-spin.png b/imgs/finger-spin.png new file mode 100644 index 0000000..d6bab1f Binary files /dev/null and b/imgs/finger-spin.png differ diff --git a/imgs/reacher-easy.png b/imgs/reacher-easy.png new file mode 100644 index 0000000..f6436a8 Binary files /dev/null and b/imgs/reacher-easy.png differ diff --git a/imgs/walker-walk.png b/imgs/walker-walk.png new file mode 100644 index 0000000..7392bb1 Binary files /dev/null and b/imgs/walker-walk.png differ