This repository is part of the Pitt RAS effort for IARC Mission 7. For an overview of the IARC competition as well as the team's efforts and technical approaches, check out our team website, and in particular the technical postmortem post for the project.
The goal of this project is to provide a high-level AI training environment for IARC Mission 7 ("herding roombas").
The simulator is built around the following principles:

- decoupled graphics - the simulator supports a graphical viewer that can run in realtime, but all of that code is extracted into a separate class that is not required for simulation
- time-independent simulation - simulations can run much faster than realtime at the same level of accuracy, which is important for reinforcement learning, where many thousands of rounds need to be simulated during the training phase
- highly configurable parameters - important constants are defined in `config.py` and can be redefined before simulation to change its behavior (see the sketch after this list)
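As a rough illustration of the last point, constants can be overridden by mutating the config module before the environment is constructed. This is only a sketch: the import path and constant name below are assumptions chosen for illustration, so check `config.py` for the real parameter names.

```python
import roombasim.config as cfg

# Hypothetical constant name, used purely to illustrate the pattern;
# the actual parameters live in config.py.
cfg.ROOMBA_LINEAR_SPEED = 0.25

# ...construct the environment / run the simulation after the override
# so that the new value is picked up.
```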
The simulator is written entirely in Python and requires the following Python libraries:
- numpy
- pyglet
The module structure is as follows:

- `roombasim`: utility functions and module-wide configuration
- `roombasim.agent`: drone simulation code
- `roombasim.ai`: AI, task, and state controllers
- `roombasim.environment`: code to simulate the IARC 7 mission
- `roombasim.graphics`: (optional) code to display a round in realtime
- `roombasim.<team>`: team-specific submodule
- `roombasim.<team>.ai`: custom AI controllers
- `roombasim.<team>.state`: custom state detectors
- `roombasim.<team>.task`: custom tasks
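For example, the Pitt RAS code referenced later in this README lives in the `roombasim.pittras` submodule, so a team-specific controller can be imported in the usual way (the import path mirrors the CLI arguments shown below):

```python
# WaypointDemoController is the controller used in the CLI example below.
from roombasim.pittras.ai import WaypointDemoController
```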
The high-level control structure uses three main class types:

- `roombasim.ai.Controller`: Base class for a high-level AI controller that takes current state inputs and chooses which tasks to perform.
- `roombasim.ai.Task`: Base class for a generic task that receives state information and issues high-level motion commands.
- `roombasim.ai.State`: Base class for a "sensor" that can compute (optionally noisy) sensor data from the environment.
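As a minimal sketch of how these pieces fit together, a custom controller subclasses `roombasim.ai.Controller` and decides which tasks to run based on state information. The hook name and docstring below are assumptions made for illustration, not the actual API; see `roombasim.ai` and the existing controllers (e.g. `WaypointDemoController`) for the real interface.

```python
from roombasim.ai import Controller


class ExampleController(Controller):
    """Illustrative controller sketch; the method name below is assumed,
    not taken verbatim from the roombasim.ai.Controller base class."""

    def update(self, delta, elapsed):
        # 1. Query State detectors for (possibly noisy) sensor data.
        # 2. Decide which Task should run next.
        # 3. The chosen Task then issues high-level motion commands to the drone.
        pass
```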
The following diagram shows the general flow of information through the system:
You can run a specific controller with the command line interface:
$ ./roombasim-cli.py run [config] [controller]
For instance, to load the `roombasim.pittras` configuration and run the `WaypointDemoController`, you can use the following command:
$ ./roombasim-cli.py run roombasim.pittras.config roombasim.pittras.ai.WaypointDemoController
This command will use default settings and create a window to preview the controller in real time.
To send actions to the drone manually, you can use the `human_player` script:
$ ./roombasim-cli.py human_player
Click anywhere to move the drone to that location, left click on a roomba to land on top of it, and right click to perform a block.
There are also a few demo controllers that can be run using different commands.
$ ./roombasim-cli.py demo {params}
Optional params:
- `-num_targets`
- `-num_obstacles`
- `-target_spawn_radius`
- `-obstacle_spawn_radius`
- `-timescale`
For example:
# default args
$ ./roombasim-cli.py demo
# custom args
$ ./roombasim-cli.py demo -num_targets=4 -num_obstacles=2 -target_spawn_radius=0.5 -obstacle_spawn_radius=2
To experiment with drone movement and controls, you can run the keyboard demo, which simulates manual control in a rough attitude mode. (Fair warning: this is really hard to control.)
$ ./roombasim-cli.py keydemo
Controls:
- `K`: pitch up
- `I`: pitch down
- `J`: roll left
- `L`: roll right
- `A`: yaw left
- `D`: yaw right
- `W`: throttle up
- `S`: throttle down