Deep Hexagon (WIP)

Deep Hexagon (dex) is the first reinforcement learning environment toolkit specialized for continual learning, with an OpenAI Gym-like API. It contains hundreds of environments that range widely in difficulty while sharing the same basic objectives and obstacles.
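
Because the API mirrors OpenAI Gym, interaction follows the classic Gym loop. The sketch below drives a standard Gym environment for illustration; dex environments are intended to be driven the same way, but the actual dex environment ids and constructors live in the repository sources, so treat the specifics here as placeholders.

```python
import gym

# Classic (pre-0.26) Gym interaction loop; dex environments follow
# the same reset/step/done pattern.
env = gym.make("CartPole-v1")
state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # random policy as a stand-in agent
    state, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print(f"episode return: {total_reward}")
```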

This repository is currently a work in progress and may not work as intended. A more in-depth README will be written once development has stabilized.

[Demo] [Demo2] (animated gameplay GIFs)

Dex ships with state-of-the-art algorithms such as DDQN, A3C, and ACER for rapidly learning environments, and also supports integration with other methods. The algorithms are compatible with OpenAI Gym as well as the custom 'Open Hexagon' environment included with the repo, which is played from raw screen pixels.
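
For context on the DDQN side: the core of Double DQN is to select the next action with the online network but evaluate it with the target network. The sketch below shows that target computation generically; it is not code from dex_ddqn.py, and the Keras-style `q_online`/`q_target` predictors are assumptions.

```python
import numpy as np

def ddqn_targets(rewards, next_states, dones, gamma, q_online, q_target):
    """Double DQN targets: pick argmax actions with the online network,
    then score those actions with the target network, which reduces the
    Q-value overestimation plain DQN suffers from."""
    next_actions = np.argmax(q_online.predict(next_states), axis=1)
    next_q_all = q_target.predict(next_states)
    next_q = next_q_all[np.arange(len(next_actions)), next_actions]
    # Zero out bootstrapping on terminal transitions.
    return rewards + gamma * next_q * (1.0 - dones)
```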

Requirements

Optional Requirements

  • OpenAI Gym

Optional Visualization Requirements (for running visualization.py; a usage sketch follows the list)

  • cv2 (OpenCV's Python bindings)
  • vis
  • imageio
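
As a sketch of how these pieces fit together (not code from visualization.py), cv2 can blend a saliency heatmap over a game frame and imageio can write the sequence out as a GIF. The helper below is hypothetical and assumes grayscale uint8 frames paired with float saliency maps in [0, 1], all as equally sized 2-D numpy arrays.

```python
import cv2
import imageio
import numpy as np

def save_saliency_gif(frames, saliency_maps, path="saliency.gif"):
    """Overlay saliency heatmaps on grayscale frames and save as a GIF.
    Hypothetical helper; inputs are 2-D uint8 frames and float maps in [0, 1]."""
    blended = []
    for frame, sal in zip(frames, saliency_maps):
        heat = cv2.applyColorMap((sal * 255).astype(np.uint8), cv2.COLORMAP_JET)
        rgb = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        mix = cv2.addWeighted(rgb, 0.6, heat, 0.4, 0)
        blended.append(cv2.cvtColor(mix, cv2.COLOR_BGR2RGB))  # imageio expects RGB
    imageio.mimsave(path, blended)
```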

Algorithms

  • DDQN | dex_ddqn.py
  • ACER | dex_a3c.py

Setup Open Hexagon Environment

  • Extract OpenHexagonV1.92.7z and launch the game into the level you wish to learn.
  • Run the script for the desired algorithm; it will detect the game window and begin playing (a sketch of the screen-capture step follows below).
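
Since the scripts learn from screen pixels, the observation pipeline amounts to grabbing the game window and downsampling it. The sketch below is illustrative only; the bounding-box coordinates and input size are made-up placeholders, not the values the repo uses.

```python
import numpy as np
from PIL import ImageGrab

def grab_frame(bbox=(0, 0, 480, 480), size=(96, 96)):
    """Capture a screen region (hypothetical window coordinates),
    convert to grayscale, and resize to the network's input shape."""
    img = ImageGrab.grab(bbox=bbox).convert("L")
    img = img.resize(size)
    return np.asarray(img, dtype=np.uint8)
```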

Run

  • Run dex_ddqn.py for DDQN
  • Run dex_a3c.py for ACER

Currently, you will need to edit the code in these files directly to set the parameters you want. (WIP)
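
Until a configuration interface exists, the edits are to constants near the top of the scripts. The block below shows the kind of knobs a DDQN script typically exposes; the names and defaults are hypothetical, not dex_ddqn.py's.

```python
# Hypothetical DDQN hyperparameters; edit values like these in the script.
LEARNING_RATE = 2.5e-4      # optimizer step size
GAMMA = 0.99                # reward discount factor
MEMORY_SIZE = 100_000       # replay buffer capacity
BATCH_SIZE = 32             # replay minibatch size
EPSILON_START = 1.0         # initial exploration rate
EPSILON_MIN = 0.05          # final exploration rate
TARGET_SYNC_STEPS = 10_000  # steps between target-network updates
```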

Note: the ACER implementation here is my own and is not identical to the paper. Visualization is based on saliency maps.
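
For reference, a gradient-based saliency map asks how strongly each input pixel influences the chosen action's value. The function below is a generic sketch in TensorFlow 2, not the repo's visualization code; the Keras Q-network `model` and the H x W x C `state` layout are assumptions.

```python
import numpy as np
import tensorflow as tf

def saliency_map(model, state):
    """Gradient of the max Q-value w.r.t. input pixels, normalized to [0, 1].
    `model` and the H x W x C `state` layout are assumptions, not repo APIs."""
    x = tf.convert_to_tensor(state[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        top_q = tf.reduce_max(model(x), axis=1)
    grads = tape.gradient(top_q, x)[0].numpy()
    sal = np.abs(grads).max(axis=-1)  # collapse the channel axis
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```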
