This is a package for automatic extrinsic calibration between a 3D LiDAR and a camera, described in the paper: 3D LiDAR Intrinsic Calibration and Automatic System for LiDAR to Camera Calibration (PDF). It provides a target-based, automatic system for LiDAR-to-camera extrinsic calibration. Specifically, using the back end from the paper, a front end was developed that automatically takes in LiDAR and camera data of diamond-shaped planar targets, synchronizes them, extracts the LiDAR payloads and the AprilTag corners from the camera images, and then passes the data to the back end, which produces the rigid-body transformation between the camera and the LiDAR.
- Authors: Bruce JK Huang, Chenxi Feng, Madhav Achar, Maani Ghaffari, and Jessy W. Grizzle
- Maintainer: Bruce JK Huang, brucejkh[at]gmail.com
- Affiliation: The Biped Lab, the University of Michigan
This package has been tested under MATLAB R2019a and Ubuntu 16.04.
[Issues] If you encounter any issues, I am happy to help. If you cannot find a related one among the existing issues, please open a new one, and I will do my best to help!
Periodic intrinsic and extrinsic (re-)calibrations are essential for modern perception and navigation systems deployed on autonomous robots. To date, intrinsic calibration models for LiDARs have been based on hypothesized physical mechanisms for how a spinning LiDAR functions, resulting in anywhere from three to ten parameters to be estimated from data. Instead, we propose to abstract away from the physics of a LiDAR type (spinning vs. solid state, for example) and focus on the spatial geometry of the point cloud generated by the sensor. This leads to a unifying view of calibration. In experimental data, we show that it outperforms physics-based models for a spinning LiDAR. In simulation, we show how this perspective can be applied to a solid-state LiDAR. We complete the paper by reporting on an open-source automatic system for target-based extrinsic calibration from a LiDAR to a camera.
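To make the geometric view concrete, here is a minimal Python/NumPy sketch of a point-to-plane cost of the kind such a calibration minimizes: returns that truly lie on a planar target should have zero distance to the fitted plane, and the calibration parameters are chosen to shrink these residuals. The names and values are illustrative, not the paper's implementation.

```python
import numpy as np

def point_to_plane_residuals(points, normal, point_on_plane):
    """Signed distance from each 3D point to a plane.

    points:         (N, 3) LiDAR returns that should lie on a planar target
    normal:         (3,) normal vector of the fitted target plane
    point_on_plane: (3,) any point on that plane
    """
    n = normal / np.linalg.norm(normal)
    return (points - point_on_plane) @ n

def p2p_cost(points, normal, point_on_plane):
    # An intrinsic calibration in this spirit adjusts the correction
    # parameters applied to the raw returns so that this cost shrinks.
    return np.sum(point_to_plane_residuals(points, normal, point_on_plane) ** 2)

# Toy target: points nearly on the plane x = 1.
pts = np.array([[1.01, 0.0, 0.3], [0.98, 0.5, -0.2], [1.03, -0.5, 0.1]])
print(p2p_cost(pts, np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```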
Please check out the introduction video. It highlights some important key points in the paper!
A system diagram for automatic intrinsic and extrinsic calibration. The top shows the front-end of the pipeline. Its input is raw camera and LiDAR data, which are subsequently synchronized. The AprilTag and LiDARTag packages are used to extract the target information, which is then examined to remove outliers and ensure that targets are still properly synchronized. Each run of the front-end saves all related information into a ROS bagfile, which can then be sent to the back-end for further processing. The back-end takes in (possibly many) ROS bagfiles and does the following: (i) refines the image corners; and (ii) extracts vertices of the LiDAR targets. Correspondences are established between corners and vertices, and an extrinsic transformation is determined via PnP as in this paper. For intrinsic calibration, the resulting vertices of the LiDAR targets are used to extract normal vectors and a point on the plane. The calibration parameters are determined by minimizing the P2P distance from the plane to the target points provided by the LiDARTag package.
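As a rough illustration of the extrinsic step (not the package's code), the sketch below synthesizes four vertex-to-corner correspondences for one diamond-shaped target and recovers the rigid-body transform with OpenCV's solvePnP; the intrinsic matrix and poses are made-up placeholders.

```python
import cv2
import numpy as np

# Placeholder pinhole intrinsics (not from the package).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Four vertices of one diamond-shaped planar target, in the LiDAR frame.
vertices_lidar = np.array([[2.0,  0.0,  0.4],
                           [2.0,  0.4,  0.0],
                           [2.0,  0.0, -0.4],
                           [2.0, -0.4,  0.0]])

# Simulate image corners from a known camera-from-LiDAR pose, then recover
# that pose with PnP (a stand-in for the refined AprilTag corners and the
# estimated LiDAR vertices).
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.1], [0.0], [0.3]])
corners, _ = cv2.projectPoints(vertices_lidar, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(vertices_lidar, corners, K, None)
R, _ = cv2.Rodrigues(rvec)  # rotation taking LiDAR-frame points to the camera frame
print(R, tvec)              # the estimated rigid-body transformation
```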
The 3D-LiDAR map shown in the videos used this package to calibrate the LiDAR to the camera (i.e., to obtain the transformation between the LiDAR and the camera). Briefly speaking, we project point clouds from the LiDAR back onto the semantically labeled images using the obtained transformation and then associate the labels with the points to build the 3D LiDAR semantic map.
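The association step can be sketched in a few lines of Python/NumPy, assuming a 4x4 LiDAR-to-camera transform T_cam_lidar and a camera matrix K from the calibration; the function and the simple pinhole model are our illustration, not the package's API.

```python
import numpy as np

def label_lidar_points(points_lidar, T_cam_lidar, K, label_image):
    """Project LiDAR points into a semantically labeled image and return
    the pixel label for each point; -1 marks points that fall behind the
    camera or outside the image."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous (N, 4)
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]              # points in the camera frame
    proj = (K @ cam.T).T                                # (N, 3): [u*z, v*z, z]
    z = np.where(np.abs(proj[:, 2:3]) < 1e-9, 1e-9, proj[:, 2:3])
    pix = proj[:, :2] / z                               # pixel coordinates
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    h, w = label_image.shape[:2]
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(n, -1, dtype=int)
    labels[valid] = label_image[v[valid], u[valid]]
    return labels

# e.g., labels = label_lidar_points(cloud, T_cam_lidar, K, segmented_image)
```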
Halloween Edition: Cassie Autonomy
Autonomous Navigation and 3D Semantic Mapping on Bipedal Robot Cassie Blue (Shorter Version)
Autonomous Navigation and 3D Semantic Mapping on Bipedal Robot Cassie Blue (Longer Version)
Using the obtained transformation, LiDAR points are mapped onto a semantically segmented image. Each point is associated with the label of a pixel. The road is marked in white; static objects such as buildings in orange; the grass in yellow-green; and dark green indicates trees.
A calibration result is not usable if it has a few degrees of rotation error or a few percent of translation error. The image below shows how a calibration result with even a small disturbance deviates from the well-aligned image.
Any square target will work; its dimensions are assumed known. Note: you can place any number of targets with different sizes in different datasets.
- The front-end
- Please download the following packages and place them under a catkin workspace:
- The back-end
- MATLAB version and toolboxes used in this package:
- MATLAB R2019a
- optimization_toolbox
- phased_array_system_toolbox
- robotics_system_toolbox
- signal_blocks
For the front-end, please download from here.
For the back-end and the optimization process, please download from here.
All the datasets have to be downloaded.
- The front-end
In the sync_lidartag_apriltag package, run the sync_cam_lidar launch file first, and then run the alignment_node_only launch file to start the tag-pairing node. The alignment_msgs will be published as output, and one can record them.
- The back-end
Once all the data have been processed by the front-end node, i.e., saved as (possibly many) ROS bagfiles, place them under a folder, change path.bag_file_path in _automatic_calibration_main.m_, and run it.
For the method GL_1-R trained on S_1, the LiDAR point cloud has been projected into the image plane for the other data sets and marked in green. The red circles highlight various poles, door edges, desk legs, monitors, and sidewalk curbs where the quality of the alignment can be best judged. The reader may find other areas of interest. Enlarge in your browser for best viewing.
For the method GL_1-R, five sets of estimated LiDAR vertices for each target have been projected into the image plane and marked in green, while the target's point cloud is marked in red. Enlarging the image allows the numbers reported in the table to be visualized. The vertices are key.
- Jiunn-Kai Huang, Chenxi Feng, Madhav Achar, Maani Ghaffari, and Jessy W. Grizzle, "Global Unifying Intrinsic Calibration for Spinning and Solid-State LiDARs" (arXiv)
@article{huang2020global,
title={Global Unifying Intrinsic Calibration for Spinning and Solid-State LiDARs},
author={Huang, Jiunn-Kai and Feng, Chenxi and Achar, Madhav and Ghaffari, Maani and Grizzle, Jessy W},
journal={arXiv preprint arXiv:2012.03321},
year={2020}
}
- Jiunn-Kai Huang and J. Grizzle, "Improvements to Target-Based 3D LiDAR to Camera Calibration" (PDF)(arXiv)
@article{huang2019improvements,
title={Improvements to Target-Based 3D LiDAR to Camera Calibration},
author={Huang, Jiunn-Kai and Grizzle, Jessy W},
journal={arXiv preprint arXiv:1910.03126},
year={2019}
}
- Jiunn-Kai Huang, Maani Ghaffari, Ross Hartley, Lu Gan, Ryan M. Eustice, and Jessy W. Grizzle, "LiDARTag: A Real-Time Fiducial Tag using Point Clouds" (PDF)(arXiv)
@article{huang2019lidartag,
title={LiDARTag: A Real-Time Fiducial Tag using Point Clouds},
author={Huang, Jiunn-Kai and Ghaffari, Maani and Hartley, Ross and Gan, Lu and Eustice, Ryan M and Grizzle, Jessy W},
journal={arXiv preprint arXiv:1908.10349},
year={2019}
}