Adaptive Informative Path Planning Using Deep Reinforcement Learning for UAV-based Active Sensing

Aerial robots are increasingly being used for environmental monitoring and exploration. However, a key challenge is efficiently planning paths to maximize the information value of acquired data as an initially unknown environment is explored. To address this, we propose a new approach for informative path planning based on deep reinforcement learning (RL). Leveraging recent advances in RL for robotic applications, our method combines tree search with an offline-learned neural network that predicts informative sensing actions. We introduce several components that make our approach applicable to robotic tasks with high-dimensional states and large action spaces. By deploying the trained network during a mission, our method enables sample-efficient online replanning on platforms with limited computational resources. Simulations show that our approach performs on par with existing methods while reducing runtime by 8–10×. We validate its performance using real-world surface temperature data.

The paper can be found at https://doi.org/10.1109/ICRA46639.2022.9812025. If you find this work useful for your own research, feel free to cite it:

@INPROCEEDINGS{9812025,
  author={Rückin, Julius and Jin, Liren and Popović, Marija},
  booktitle={2022 International Conference on Robotics and Automation (ICRA)},
  title={Adaptive Informative Path Planning Using Deep Reinforcement Learning for UAV-based Active Sensing},
  year={2022},
  pages={4473-4479},
  doi={10.1109/ICRA46639.2022.9812025}
}

Installation

The installation is based on Docker for easy portability between different hardware setups, and a docker-compose setup is provided for smooth deployment. Please make sure Docker and docker-compose are installed, and that your docker-compose version is at least 1.26 (check with docker-compose --version). To upgrade docker-compose to the latest version, have a look at this Stack Overflow post.

To install the framework, run:

./build.sh
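
For reference, the following is a minimal, hypothetical sketch of what a build script in a docker-compose setup like this typically wraps; the actual build.sh in the repository may differ.

#!/usr/bin/env bash
# Hypothetical sketch only; see build.sh in the repository for the actual steps.
set -euo pipefail
docker-compose build   # build the images defined in the compose file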

Usage

Create a file named .env in the top-level repo directory to make configurable environment variables accessible to the Docker containers. The following environment variables are defined in the .env file:

REPO_DIR=/path/to/your/repo/ipp-rl/ # mandatory, absolute path on host machine
CONFIG_FILE_PATH=path/in/repo/to/config/file.yaml # optional, if not set use default: 'config/example.yaml'
LOG_DIR=subfolder/for/storing/logs/ # optional, if not set use default: 'logs'
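
For illustration, a concrete .env file could look as follows; the REPO_DIR path is a placeholder, and the other two values simply make the documented defaults explicit.

REPO_DIR=/home/user/workspace/ipp-rl/
CONFIG_FILE_PATH=config/example.yaml
LOG_DIR=logs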

To execute the framework, run:

./run.sh

To stop the execution, run:

./stop.sh

Each run's logs are stored in a separate, timestamped logfile in the specified LOG_DIR.
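
As a rough, hypothetical sketch (the actual scripts in the repository may differ), run.sh and stop.sh in a setup like this typically wrap docker-compose, with the run script redirecting output into the timestamped logfile:

#!/usr/bin/env bash
# Hypothetical sketch of run.sh; the real script may differ.
set -euo pipefail
source .env                          # pick up REPO_DIR, CONFIG_FILE_PATH, LOG_DIR
LOG_DIR="${LOG_DIR:-logs}"           # fall back to the documented default
mkdir -p "${REPO_DIR}/${LOG_DIR}"
docker-compose up 2>&1 | tee "${REPO_DIR}/${LOG_DIR}/run_$(date +%Y%m%d_%H%M%S).log"

Under the same assumption, stop.sh would essentially boil down to docker-compose down.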

Software Architecture

[Figure: software architecture diagram]

Development

Style Guidelines

In general, we follow the Python PEP 8 style guidelines. Please install black to format your Python code properly. To run the black code formatter with the project's 120-character line length, use the following command:

black -l 120 path/to/python/module/or/package/
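
To verify formatting without modifying files (for example, before committing), black's standard --check flag can be used with the same line length:

black -l 120 --check path/to/python/module/or/package/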

To optimize and clean up your imports, feel free to have a look at this solution for PyCharm.

Funding

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2070 – 390732324. Authors are with the Cluster of Excellence PhenoRob, Institute of Geodesy and Geoinformation, University of Bonn.
