diff --git a/calibrators/marker_radar_lidar_calibrator/README.md b/calibrators/marker_radar_lidar_calibrator/README.md
index d813a2fa..d520bd26 100644
--- a/calibrators/marker_radar_lidar_calibrator/README.md
+++ b/calibrators/marker_radar_lidar_calibrator/README.md
@@ -46,7 +46,7 @@ Since it is not possible to directly differentiate individual reflector detectio

 After matching detection pairs, we apply rigid transformation estimation algorithms to those pairs to estimate the transformation between the radar and lidar sensors. We currently support two algorithms: a 2d SVD-based method and a yaw-only rotation method.

-For the 2d SVD-based method, we reduce the problem to 2d transformation estimation since radar detections lack a z component. However, because lidar detections are in the lidar frame and likely involve a 3d transformation to the radar frame, we transform the lidar detections to a `radar parallel` frame and then set the z component to zero. The `radar parallel` frame has only a 2d transformation (x, y, yaw) relative to the radar frame. In autonomous vehicles, radars are mounted to minimize pitch and roll angles, maximizing their performance and ensuring accurate distance measurements. This means the radar sensors are aligned as parallel as possible to the ground plane, making the `base_link` a suitable choice for the `radar parallel` frame.
+For the 2d SVD-based method, we reduce the problem to 2d transformation estimation since radar detections lack a z component. However, because lidar detections are in the lidar frame and likely involve a 3d transformation to the radar frame, we transform the lidar detections to a `radar parallel` frame and then set the z component to zero. The `radar parallel` frame has only a 2d transformation (x, y, yaw) relative to the radar frame. In autonomous vehicles, radars are mounted in a way designed to minimize pitch and roll angles, maximizing their performance and ensuring accurate distance measurements. This means the radar sensors are aligned as parallel as possible to the ground plane, making the `base_link` a suitable choice for the `radar parallel` frame.

 Next, we apply the SVD-based rigid transformation estimation algorithm between the lidar detections in the radar parallel frame and the radar detections in the radar frame. This allows us to estimate the transformation between the lidar and radar by multiplying the radar-to-radar-parallel transformation with the radar-parallel-to-lidar transformation. The SVD-based algorithm, provided by PCL, leverages SVD to find the optimal rotation component and then calculates the translation component based on the rotation.
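
The following is a minimal sketch of the estimation step described in the hunk above, not the calibrator's actual code. The README only states that the SVD-based algorithm is provided by PCL; the use of `pcl::registration::TransformationEstimationSVD` here is an assumption about which PCL estimator that refers to, and the detection coordinates are made up for illustration.

```cpp
// Hypothetical sketch: estimating the rigid transform between matched
// reflector detection pairs with PCL's SVD-based estimator.
#include <iostream>

#include <Eigen/Core>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/transformation_estimation_svd.h>

int main()
{
  // Matched detection pairs: lidar detections already expressed in the
  // `radar parallel` frame with their z component set to zero, paired with
  // the corresponding radar detections in the radar frame (made-up values).
  pcl::PointCloud<pcl::PointXYZ> lidar_in_radar_parallel;
  pcl::PointCloud<pcl::PointXYZ> radar_detections;
  lidar_in_radar_parallel.push_back(pcl::PointXYZ(10.0f, 2.0f, 0.0f));
  radar_detections.push_back(pcl::PointXYZ(10.1f, 1.9f, 0.0f));
  lidar_in_radar_parallel.push_back(pcl::PointXYZ(15.0f, -3.0f, 0.0f));
  radar_detections.push_back(pcl::PointXYZ(15.2f, -2.8f, 0.0f));
  lidar_in_radar_parallel.push_back(pcl::PointXYZ(20.0f, 5.0f, 0.0f));
  radar_detections.push_back(pcl::PointXYZ(19.9f, 5.3f, 0.0f));

  // SVD finds the optimal rotation between the point sets; the translation
  // is then derived from that rotation and the point-set centroids.
  pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> estimator;
  Eigen::Matrix4f radar_parallel_to_radar;  // maps radar-parallel coords to radar coords
  estimator.estimateRigidTransformation(
    lidar_in_radar_parallel, radar_detections, radar_parallel_to_radar);

  // Composing this result with the known radar-parallel-to-lidar transform
  // yields the lidar-to-radar calibration described in the README.
  std::cout << radar_parallel_to_radar << std::endl;
  return 0;
}
```

Since both point sets are coplanar (z is zero on both sides), the estimate is effectively the 2d (x, y, yaw) transformation the README describes.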