
MADRL

This package provides implementations of multi-agent reinforcement learning environments.
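
As a rough illustration of the intended multi-agent interface, the sketch below creates one of the environments and steps it with random per-agent actions. The module path, class name, constructor keyword, and the agents attribute are assumptions about the package layout, not confirmed API.

# Hypothetical usage sketch; the import path, class name, and attributes are assumptions.
from madrl_environments.walker.multi_walker import MultiWalkerEnv

env = MultiWalkerEnv(n_walkers=2)   # assumed keyword for the number of agents
obs = env.reset()                   # one observation per agent
for _ in range(100):
    # sample an independent random action for each agent
    actions = [agent.action_space.sample() for agent in env.agents]
    obs, rewards, done, info = env.step(actions)
    if done:
        obs = env.reset()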

Requirements

This package requires both OpenAI Gym and a forked version of rllab (the multiagent branch). There are a number of other requirements, which can be found in the rllab/environment.yml file if you are using the Anaconda distribution.

Setup

The easiest way to install MADRL and its dependencies is to perform a recursive clone of this repository.

git clone --recursive git@github.com:sisl/MADRL.git

Then, add the relevant directories to your PYTHONPATH:

export PYTHONPATH=$(pwd):$(pwd)/rltools:$(pwd)/rllab:$PYTHONPATH

Install the required dependencies. If you are using the Anaconda distribution, the rllab/environment.yml file is a good place to start.
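
Once the dependencies are installed, a quick way to confirm that the PYTHONPATH entries above took effect is to import the bundled packages from the repository root. The module names below are inferred from the submodule directory names and are assumptions.

# Sanity check, run from the repository root after setting PYTHONPATH as above.
# Module names are inferred from the directory names in the recursive clone.
import rllab
import rltools
import madrl_environments
print("rllab, rltools and madrl_environments are importable")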

Usage

Example run with curriculum:

python3 runners/run_multiwalker.py rllab \
    --control decentralized \
    --policy_hidden 100,50,25 \
    --n_iter 200 \
    --n_walkers 2 \
    --batch_size 24000 \
    --curriculum lessons/multiwalker/env.yaml

Here rllab selects rllab as the training framework, --control decentralized uses the decentralized training protocol, --policy_hidden 100,50,25 sets the MLP policy hidden layer sizes, --n_iter 200 sets the number of training iterations, --n_walkers 2 sets the starting number of walkers, --batch_size 24000 sets the number of rollout waypoints per batch, and --curriculum supplies the curriculum definition.

Details

Policy definitions exist in rllab/sandbox/rocky/tf/policies.
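
For reference, below is a minimal sketch of constructing an MLP policy with the hidden layer sizes used in the example above. It assumes the rllab TF sandbox layout and an env that has already been wrapped in TfEnv; both are assumptions about the surrounding setup rather than code from this repository.

# Minimal sketch; assumes env is an environment already wrapped with
# sandbox.rocky.tf.envs.base.TfEnv, as rllab's TF sandbox expects.
from sandbox.rocky.tf.policies.gaussian_mlp_policy import GaussianMLPPolicy

policy = GaussianMLPPolicy(
    name="policy",
    env_spec=env.spec,            # spec of the TfEnv-wrapped environment
    hidden_sizes=(100, 50, 25),   # matches --policy_hidden 100,50,25 above
)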
