Applying the DQN agent from keras-rl to the StarCraft II Learning Environment and modifying it to use the Rainbow DQN algorithms.
Final paper (German): read here
- Naive DQN using the basic keras-rl DQN agent
- Fully convolutional network with two outputs (described in this DeepMind paper)
- Double DQN (described here)
- Dueling DQN (described here)
- Prioritized experience replay (described here)
- Multi-step learning (described here)
- Noisy nets (described here)
- Distributional RL - running, but not learning (described here)
- Final rainbow agent without Distributional RL
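The components above combine independent improvements to DQN. As one illustration, here is a minimal NumPy sketch of the Double DQN target computation: the online network selects the greedy next action and the target network evaluates it. The arrays and names below are toy stand-ins for illustration; the agent in this repo builds the equivalent from Keras models inside keras-rl.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Q-value tables standing in for the online and target networks
# (4 sampled transitions, 3 discrete actions). Illustrative only.
q_online_next = rng.normal(size=(4, 3))
q_target_next = rng.normal(size=(4, 3))
rewards = np.array([1.0, 0.0, 0.5, 1.0])
dones = np.array([0.0, 0.0, 1.0, 0.0])  # 1.0 marks a terminal transition
gamma = 0.99

# Double DQN: the online network picks the argmax action,
# the target network supplies that action's value.
best_actions = q_online_next.argmax(axis=1)
double_q = q_target_next[np.arange(4), best_actions]

# Bootstrapping is masked out on terminal transitions.
targets = rewards + gamma * (1.0 - dones) * double_q
```

Decoupling action selection from action evaluation in this way is what reduces the overestimation bias of vanilla DQN.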
Make sure you have Python 3.6.
Follow the instructions in the pysc2 repository to install it, as well as StarCraft II and the required mini_games maps.
Follow the instructions on the keras-rl repository for installation.
Follow the instructions on the baselines repository for installation.
You will also need the following Python packages installed:
- tensorflow 1.12 (newer versions currently do not work with CUDA support for me)
- keras 2.2.4
- numpy
- matplotlib
If you want to use a CUDA-capable GPU, install tensorflow-gpu and keras-gpu as well. Make sure you have a compatible driver, the CUDA toolkit (9.0 works for me), and the cuDNN library (7.1.2 works for me) installed. This provides a 5x to 20x speedup and is therefore recommended for training.
Running on Linux is also recommended for training, because it is required for running the game headless, which gives up to a 2x speedup.
Download the project files:
git clone https://github.com/chucnorrisful/dqn.git
The entry point is exec.py: just set some hyperparameters and run it!
The plot.py file provides some visualisation, but you have to manually enter the path to a log file (created during execution).
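To locate the log file and sanity-check it before plotting, something like the following stdlib-only sketch can compute the summary statistics. The CSV layout shown here is hypothetical; adapt the parsing to the actual format the log files use.

```python
import io
import statistics

# Hypothetical episode log -- replace the StringIO with
# open("path/to/your/logfile") and adjust the columns as needed.
log = io.StringIO(
    "episode,reward\n"
    "1,18\n"
    "2,26\n"
    "3,31\n"
)

next(log)  # skip the header row
rewards = [float(line.strip().split(",")[1]) for line in log if line.strip()]
print(f"mean: {statistics.mean(rewards):.2f}, max: {max(rewards):.0f}")
```

The mean/max pair printed here matches the format of the per-map results listed below.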
- MoveToBeacon [mean: 25.64, max: 34]
- CollectMineralShards [mean: 89, max: 120]
- FindAndDefeatZerglings
- DefeatRoaches
- DefeatZerglingsAndBanelings
- CollectMineralsAndGas
- BuildMarines