From 6dc8fec5b4f3a46fe077bbf0963ac8fe70eb29c9 Mon Sep 17 00:00:00 2001
From: "Yi-Hsiang Fang (Vivid)" <146902905+vividf@users.noreply.github.com>
Date: Fri, 12 Jul 2024 10:21:38 +0900
Subject: [PATCH] Update calibrators/marker_radar_lidar_calibrator/README.md

Co-authored-by: Kenzo Lobos Tsunekawa
---
 calibrators/marker_radar_lidar_calibrator/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/calibrators/marker_radar_lidar_calibrator/README.md b/calibrators/marker_radar_lidar_calibrator/README.md
index d813a2fa..d520bd26 100644
--- a/calibrators/marker_radar_lidar_calibrator/README.md
+++ b/calibrators/marker_radar_lidar_calibrator/README.md
@@ -46,7 +46,7 @@ Since it is not possible to directly differentiate individual reflector detectio
 
 After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.
 
-For the 2d SVD-based method, we reduce the problem to 2d transformation estimation since radar detections lack a z component. However, because lidar detections are in the lidar frame and likely involve a 3d transformation to the radar frame, we transform the lidar detections to a `radar parallel` frame and then set the z component to zero. The `radar parallel` frame has only a 2d transformation (x, y, yaw) relative to the radar frame. In autonomous vehicles, radars are mounted to minimize pitch and roll angles, maximizing their performance and ensuring accurate distance measurements. This means the radar sensors are aligned as parallel as possible to the ground plane, making the `base_link` a suitable choice for the `radar parallel` frame.
+For the 2d SVD-based method, we reduce the problem to 2d transformation estimation since radar detections lack a z component. However, because lidar detections are in the lidar frame and likely involve a 3d transformation to the radar frame, we transform the lidar detections to a `radar parallel` frame and then set the z component to zero. The `radar parallel` frame has only a 2d transformation (x, y, yaw) relative to the radar frame. In autonomous vehicles, radars are mounted in a way designed to minimize pitch and roll angles, maximizing their performance and ensuring accurate distance measurements. This means the radar sensors are aligned as parallel as possible to the ground plane, making the `base_link` a suitable choice for the `radar parallel` frame.
 
 Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the radar parallel frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation with the radar-parallel-to-lidar transformation. The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then calculates the translation component based on the rotation.
 
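For context on the README text touched by this patch: the SVD-based rigid transformation estimation it refers to (provided by PCL) can be sketched in 2d as follows. This is a minimal numpy illustration of the general technique (the Kabsch/Umeyama-style estimator), not the PCL implementation; the function name and point-set layout are assumptions for the example.

```python
import numpy as np

def estimate_rigid_transform_2d(src, dst):
    """Estimate R (2x2) and t (2,) such that dst_i ~= R @ src_i + t,
    using the SVD-based method: SVD yields the optimal rotation, and
    the translation is then computed from the rotation (as in the README)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Center both point sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance gives the optimal rotation.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    # Translation follows from the rotation and the centroids.
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In the calibrator's setting, `src` would play the role of the lidar detections expressed in the `radar parallel` frame (with z dropped) and `dst` the radar detections in the radar frame.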