Preprint and supplementary material available online.
The implementation has been tested with Python 3.8 under Ubuntu 20.04.
- Clone this repo.
- Install the requirements:

  ```bash
  pip install -r requirements.txt
  ```
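If you prefer an isolated setup, a virtual environment works too. A minimal sketch (the environment name `venv` is arbitrary):

```bash
# Create and activate a Python 3.8 virtual environment, then install the pinned dependencies.
python3.8 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```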
For better reproducibility, we will soon release a Dockerfile to build a container with all the necessary dependencies. 👷
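In the meantime, a minimal Dockerfile along these lines should reproduce the tested setup (an illustrative sketch, not the official one):

```dockerfile
# Illustrative sketch only: base image matches the tested Python 3.8 setup.
FROM python:3.8

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the project and make it importable, as the experiments assume.
COPY . .
ENV PYTHONPATH=/app
```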
We assume that all the experiments are run from the project directory and that the project directory is added to the `PYTHONPATH` environment variable as follows:

```bash
export PYTHONPATH=$PYTHONPATH:$(pwd)
```
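To sanity-check that the project directory is on the import path (optional):

```bash
# The project directory should appear in the printed import path.
python -c "import sys; print('\n'.join(sys.path))"
```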
- For the multi-robot environment, run from the project directory:

  ```bash
  ./scripts/run_exp_baselines.sh [0-6]
  ```

  where the exp-id `[0-6]` denotes runs with PPOPID, PPOLag, CPO, IPO, DDPGLag, TD3Lag, and PPOSaute, respectively.
- Similarly, for the racing environment, run:

  ```bash
  ./scripts/run_exp_baselines.sh [7-13]
  ```

The results will be saved in the `logs/baselines` folder. An example invocation is shown below.
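For instance, to train PPOLag on the multi-robot environment, pass exp-id 1 (per the mapping above):

```bash
# exp-id 1 corresponds to PPOLag on the multi-robot environment.
./scripts/run_exp_baselines.sh 1
```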
We provide a couple of ablated models that augment the built-in controllers with adaptive safety in the `checkpoints` folder. To play with the trained models with adaptive safety, run:

```bash
./scripts/run_checkpoint_eval.sh [0-1]
```

where the exp-id `[0-1]` denotes runs for the particle-env and racing environments, respectively.
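For example, to evaluate the checkpoint for the racing environment:

```bash
# exp-id 1 evaluates the racing-environment checkpoint.
./scripts/run_checkpoint_eval.sh 1
```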
To cite this work:

```bibtex
@misc{berducci2023learning,
      title={Learning Adaptive Safety for Multi-Agent Systems},
      author={Luigi Berducci and Shuo Yang and Rahul Mangharam and Radu Grosu},
      year={2023},
      eprint={2309.10657},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
```