This work has been accepted by the IEEE Robotics and Automation Letters.
YouTube link to the introduction video: https://www.youtube.com/watch?v=AYBQHAEWBLM.
Bilibili link to the introduction video: https://www.bilibili.com/video/BV1s34y147UM/.
🎉🎉🎉News!
📣 The first mapping and localization framework based on LiDAR fiducial markers has been released here! Check out the instance reconstruction results below. The top row displays the ground truth on the left and ours on the right. The bottom row shows Livox Mapping on the left and LOAM Livox on the right.
📣 Our new work Fiducial Tag Localization on a 3D LiDAR Prior Map has been released!
Extensive research has been carried out on Visual Fiducial Marker (VFM) systems. However, no existing study exploits these systems to their full potential in LiDAR applications. In this work, we develop an Intensity Image-based LiDAR Fiducial Marker (IILFM) system that fills this gap. The proposed system only requires an unstructured point cloud with intensity as the input, and it outputs the detected markers' information and the 6-DOF pose that describes the transformation from the world coordinate system to the LiDAR coordinate system. Using the IILFM system is as convenient as using conventional VFM systems, with no restrictions on marker placement and shape. Different VFM systems, such as Apriltag 3, ArUco, and CCTag, can be easily embedded into the system. Hence, the proposed system inherits the functionality of the VFM systems, such as their coding and decoding methods.
One- and two-marker detection:
Apriltag grid (35 markers) detection:
The proposed system shows potential in augmented reality, SLAM, multi-sensor calibration, etc. Here, an augmented reality demo using the proposed system is presented. The teapot point cloud is transformed to the location of the marker in the LiDAR point cloud based on the pose provided by the IILFM system.
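Placing the teapot amounts to a rigid-body transformation of its points by the reported marker pose. Below is a minimal Python sketch of that step; it is not the code used in the demo, and the pose and point-cloud values are placeholders.

```python
# Minimal sketch (not the demo's actual code): move a "teapot" point cloud to the
# marker's location in the LiDAR frame using a 6-DOF pose such as the one IILFM
# reports. The translation/quaternion values below are placeholders.
import numpy as np

def quat_to_rot(x, y, z, w):
    """Unit quaternion (x, y, z, w) -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

t = np.array([1.0, 0.0, 0.5])            # placeholder translation (m)
R = quat_to_rot(0.0, 0.0, 0.0, 1.0)      # placeholder rotation (identity)

teapot = np.random.rand(1000, 3) * 0.1   # stand-in for the teapot point cloud (N x 3)
teapot_in_lidar = teapot @ R.T + t       # rigid-body transform into the LiDAR frame
```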
In this repository, we only release the version whose embedded detector is Apriltag 3. The versions with ArUco and CCTag detectors are coming soon. Replacing the embedded visual fiducial marker system is a very straightforward process, so, following the method introduced in our scripts, you may add any visual marker detector you like.
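For instance (this is only a rough sketch of the idea, not code from this repository, and the file name is hypothetical), an ArUco detector could be run on the same intensity image that is currently handed to Apriltag 3:

```python
# Rough sketch only -- not code from this repository. It illustrates the idea of
# swapping the marker detector that runs on the LiDAR intensity image.
# Requires opencv-contrib-python; note that the ArUco API changed in OpenCV >= 4.7.
import cv2

intensity_image = cv2.imread("intensity.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image file

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
try:
    # OpenCV >= 4.7 class-based API
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(intensity_image)
except AttributeError:
    # older OpenCV function-based API
    corners, ids, _ = cv2.aruco.detectMarkers(intensity_image, dictionary)

print("detected marker ids:", None if ids is None else ids.ravel())
```

The rest of the pipeline, which maps the 2D detections back to 3D points and estimates the pose, would remain unchanged.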
- Ubuntu 20.04
Other versions of Ubuntu could work if the following libraries are installed correctly.
- ROS Noetic
Ubuntu install of ROS Noetic
Lower ROS versions could work, yet you might have to deal with the conflicts between OpenCV 4 and OpenCV 3...
- PCL
sudo apt update
sudo apt install libpcl-dev
- OpenCV
sudo apt update
sudo apt install libopencv-dev python3-opencv
- catkin
sudo apt update
sudo apt install catkin
- yaml-cpp
sudo apt update
sudo apt-get install libyaml-cpp-dev
- Boost
sudo apt update
sudo apt-get install libboost-all-dev
git clone https://github.com/York-SDCNLab/IILFM.git
cd IILFM
catkin build
Modify 'yorktag.launch' in ~/IILFM/src/yorkapriltag/launch according to your LiDAR model (e.g., rostopic name, angular resolution, and so on) and the employed tag family. Then modify 'config.yaml' in ~/IILFM/src/yorkapriltag/resources based on your setup (define the locations of the marker vertices with respect to the world coordinate system); otherwise, the output pose is meaningless. Afterward, run
source ./devel/setup.bash
roslaunch yorkapriltag yorktag.launch
Open a new terminal in ~/IILFM/src/yorkapriltag/resources and run
rosbag play -l bagname.bag
To view the 6-DOF pose, open a new terminal and run
rostopic echo /iilfm/pose
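If you would rather work with a 4x4 homogeneous transform than the raw message, a sketch along the following lines can be used. It assumes the topic carries a geometry_msgs/PoseStamped; check the actual type with `rostopic type /iilfm/pose` and adjust accordingly.

```python
#!/usr/bin/env python3
# Sketch: convert the pose published on /iilfm/pose into a 4x4 homogeneous transform.
# Assumes geometry_msgs/PoseStamped; verify with `rostopic type /iilfm/pose`.
import rospy
import tf.transformations as tft
from geometry_msgs.msg import PoseStamped

def callback(msg):
    q = msg.pose.orientation
    p = msg.pose.position
    T = tft.quaternion_matrix([q.x, q.y, q.z, q.w])  # 4x4 matrix with the rotation block filled in
    T[0:3, 3] = [p.x, p.y, p.z]                      # insert the translation
    rospy.loginfo("pose as a 4x4 transform:\n%s", T)

if __name__ == "__main__":
    rospy.init_node("iilfm_pose_listener")
    rospy.Subscriber("/iilfm/pose", PoseStamped, callback)
    rospy.spin()
```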
To view the point cloud of the detected 3D fiducials in rviz, open a new terminal and run rviz. In rviz, change the 'Fixed Frame' to 'livox_frame', then select Add / By topic / iilfm / features / PointCloud2.
- By default, the settings in 'yorktag.launch' correspond to the Livox Mid-40. If you just want to try our system and see how it works, there is no need to modify 'yorktag.launch' and 'config.yaml'. You may simply run
source ./devel/setup.bash
roslaunch yorkapriltag yorktag.launch
Then open a new terminal in ~/IILFM/src/yorkapriltag/resources and run
rosbag play -l bagname.bag
Due to the page limit, we removed this large table from the manuscript submitted to RA-L and replaced it with a histogram. Since some readers might be interested in the ground truth, we present the table here. Please refer to our paper for the detailed experimental setup.
If you find this work helpful for your research, please cite our paper:
@ARTICLE{9774900,
author={Liu, Yibo and Schofield, Hunter and Shan, Jinjun},
journal={IEEE Robotics and Automation Letters},
title={Intensity Image-Based LiDAR Fiducial Marker System},
year={2022},
volume={7},
number={3},
pages={6542-6549},
doi={10.1109/LRA.2022.3174971}}