Dreamer implementation in PyTorch
======

[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE.md)

## Dreamer
This repo implements the Dreamer algorithm from [Dream to Control: Learning Behaviors by Latent Imagination](https://arxiv.org/pdf/1912.01603.pdf), building on [PlaNet-PyTorch](https://github.com/Kaixhin/PlaNet). It has been confirmed to work on the DeepMind Control Suite/MuJoCo environments. Hyperparameters are taken from the paper.
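Dreamer trains its action model entirely by latent imagination: the learned dynamics model is rolled forward in latent space, with no further environment interaction, and behaviors are optimized on these imagined trajectories. A minimal NumPy sketch of such a rollout; the linear `W_dyn`/`W_pi` stand-ins are illustrative assumptions, not the repo's actual RSSM or action model:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, action_dim = 8, 2

# Hypothetical stand-ins for the learned transition and action models.
W_dyn = rng.normal(size=(latent_dim, latent_dim + action_dim)) * 0.1
W_pi = rng.normal(size=(action_dim, latent_dim)) * 0.1

def imagine_rollout(state, horizon):
    """Roll the (stand-in) dynamics model forward purely in latent space."""
    trajectory = []
    for _ in range(horizon):
        action = np.tanh(W_pi @ state)                             # action model
        state = np.tanh(W_dyn @ np.concatenate([state, action]))   # transition model
        trajectory.append(state)
    return np.stack(trajectory)

traj = imagine_rollout(rng.normal(size=latent_dim), horizon=5)  # shape (5, 8)
```

In the real algorithm the value and action models are then trained by backpropagating value estimates through this differentiable imagined trajectory.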

## Installation
To install all dependencies with Anaconda, run `conda env create -f conda_env.yml` and use `source activate dreamer` to activate the environment.

For best performance with the DeepMind Control Suite, try setting the environment variable `MUJOCO_GL=egl` (see instructions and details [here](https://github.com/deepmind/dm_control#rendering)).

## Training (e.g. DMC walker-walk)
```bash
python main.py --algo dreamer --env walker-walk --action-repeat 2 --id name-of-experiment
```
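The `--action-repeat 2` flag means each selected action is applied for two consecutive environment steps, with the rewards summed. A minimal sketch of such a wrapper under the classic Gym `step`/`reset` API (illustrative only, not the repo's actual implementation):

```python
class ActionRepeat:
    """Apply each action `k` times, summing rewards (illustrative sketch)."""

    def __init__(self, env, k=2):
        self.env = env
        self.k = k

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, done, obs, info = 0.0, False, None, {}
        for _ in range(self.k):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:  # stop early if the episode ends mid-repeat
                break
        return obs, total_reward, done, info
```

Action repeat shortens the effective control horizon, which is why it is tuned per task in the paper (e.g. 2 for walker-walk).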
Use TensorBoard to monitor training:

`tensorboard --logdir results`

## Results
Results and pretrained models can be found in the [releases](https://github.com/Kaixhin/PlaNet/releases).

Performance is compared with the following SoTA algorithms (note: each result is from a single run with seed 0):
* [State-SAC](https://github.com/denisyarats/pytorch_sac)
* [PlaNet-PyTorch](https://github.com/Kaixhin/PlaNet)
* [SAC-AE](https://github.com/denisyarats/pytorch_sac_ae)
* [SLAC](https://github.com/ku2482/slac.pytorch)
* [CURL](https://github.com/MishaLaskin/curl)
* [Dreamer (TensorFlow 2 implementation)](https://github.com/danijar/dreamer)

<p align="center">
  <img width="800" src="./imgs/finger-spin.png">
  <img width="800" src="./imgs/walker-walk.png">
  <img width="800" src="./imgs/cheetah-run.png">
  <img width="800" src="./imgs/cartpole-swingup.png">
  <img width="800" src="./imgs/reacher-easy.png">
  <img width="800" src="./imgs/ball_in_cup-catch.png">
</p>

## Links
- [Introducing Dreamer: Scalable Reinforcement Learning Using World Models](https://ai.googleblog.com/2020/03/introducing-dreamer-scalable.html)
- [google-research/dreamer](https://github.com/google-research/dreamer)