
# Camera Projection Node for Drone Mapping

Combine Lidar scans, Mask-R-CNN predictions, and GPS data to create 3D landmarks for mapping and localization!

*(Visualization)*

## Table of Contents

- [Details](#details)
- [Pipeline](#pipeline)
- [Usage](#usage)
- [Acknowledgements](#acknowledgements)

## Details

This node listens to 4 ROS topics:

- `/dji_sdk/gps_position` for GPS latitude and longitude
- `/dji_sdk/imu` for IMU data (orientation only)
- `/velodyne_aggregated` or `/velodyne_points` for Lidar scans, depending on whether you use aggregated (described below) or individual scans. Using aggregated scans is slower but achieves better accuracy.
- `/cnn_predictions` for Mask-R-CNN predictions, as described below

and publishes to 2 ROS topics (or more for debugging):

- `/projected_predictions` for predictions of 3D landmarks (of type `predictions.msg`, found in this repo)
- `/vis_full` for live visualizations of the projection and estimated trunk depths
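For illustration, here is a minimal sketch of the node's ROS wiring in Python (rospy). It is not the actual implementation: the standard `sensor_msgs` types for the DJI and Velodyne topics are assumptions, and the `/cnn_predictions` input and `/projected_predictions` output, which use the custom `predictions.msg` type from this repo, are omitted.

```python
#!/usr/bin/env python
# Sketch only: standard sensor_msgs types are assumed for the DJI and
# Velodyne topics; callbacks just cache the most recent message.
import rospy
from sensor_msgs.msg import NavSatFix, Imu, PointCloud2, Image

state = {'gps': None, 'imu': None, 'cloud': None}

def on_gps(msg):
    state['gps'] = msg      # latitude / longitude

def on_imu(msg):
    state['imu'] = msg      # only the orientation is used

def on_cloud(msg):
    state['cloud'] = msg    # aggregated or individual Velodyne scan

rospy.init_node('camera_projection')
rospy.Subscriber('/dji_sdk/gps_position', NavSatFix, on_gps)
rospy.Subscriber('/dji_sdk/imu', Imu, on_imu)
rospy.Subscriber('/velodyne_aggregated', PointCloud2, on_cloud)
# /cnn_predictions (input) and /projected_predictions (output) use the
# repo's custom predictions.msg type and are omitted from this sketch.
vis_pub = rospy.Publisher('/vis_full', Image, queue_size=1)
rospy.spin()
```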

In summary, this node (1) transforms the Lidar point cloud to the image frame, (2) matches image pixels to Lidar points, (3) estimates the depth of the detected trunks in the image using surrounding Lidar points, (4) converts these 3D locations to GPS coordinates, and (5) publishes these for use in localization and mapping.
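To make steps (1) through (4) concrete, below is a minimal numpy sketch under some stated assumptions: `T_cam_lidar` (the extrinsic Lidar-to-camera transform) and `K` (the camera intrinsic matrix) are hypothetical calibration inputs; taking the median depth of Lidar points that project inside the predicted mask is one plausible reading of "surrounding Lidar points"; and the GPS conversion uses a flat-earth small-offset approximation. The actual node may differ on all three points.

```python
import math
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Steps (1)-(2): transform Nx3 Lidar points into the camera frame,
    then project them to pixel coordinates with a pinhole model."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide
    return uv, pts_cam[:, 2]               # pixel coords and depths (metres)

def estimate_trunk_depth(mask, uv, depths):
    """Step (3): estimate a trunk's depth from the Lidar points whose
    projections land inside its predicted mask (median is an assumption)."""
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = mask.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hits = depths[ok][mask[v[ok], u[ok]].astype(bool)]
    return float(np.median(hits)) if hits.size else None

def offset_to_gps(lat0, lon0, east_m, north_m):
    """Step (4): convert a local metric offset to latitude/longitude with
    a flat-earth approximation, valid only for small offsets."""
    R = 6378137.0                          # WGS-84 equatorial radius, metres
    dlat = math.degrees(north_m / R)
    dlon = math.degrees(east_m / (R * math.cos(math.radians(lat0))))
    return lat0 + dlat, lon0 + dlon
```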

## Pipeline

This node is part of a larger drone mapping pipeline with separate ROS nodes, each receiving and publishing relevant data.

Code locations for the other nodes are listed below:

- MASK-R-CNN - Predict locations of trunks (and other objects) in images
- AGGREGATION - Aggregate multiple Lidar scans using IMU and GPS data

## Usage

This node has been tested in combination with the other two nodes listed above and performs well offline.

For information on usage, please contact Aaron (aaronzberger@gmail.com).

## Acknowledgements

- Francisco Yandun for assistance throughout the creation of the pipeline