3DGS-Calib: 3D Gaussian Splatting for Multimodal SpatioTemporal Calibration
IROS 2024 (Oral)
Reliable multimodal sensor fusion algorithms require accurate spatiotemporal calibration. Recently, targetless calibration techniques based on implicit neural representations have proven to provide precise and robust results. Nevertheless, such methods are inherently slow to train given the high computational overhead caused by the large number of sampled points required for volume rendering. With the recent introduction of 3D Gaussian Splatting as a faster alternative to implicit representation methods, we propose to leverage this new rendering approach to achieve faster multi-sensor calibration. We introduce 3DGS-Calib, a new calibration method that relies on the speed and rendering accuracy of 3D Gaussian Splatting to achieve multimodal spatiotemporal calibration that is accurate, robust, and substantially faster than methods relying on implicit neural representations. We demonstrate the superiority of our proposal with experimental results on sequences from KITTI-360, a widely used driving dataset.
Pipeline of 3DGS-Calib: The Gaussians' positions are given as input to the neural network, which predicts their parameters. In parallel, the calibration parameters provide the input pose that transforms the Gaussians from the world frame to the image frame. The 3D Gaussians are then splatted using their predicted parameters to generate the rendered image. This image is compared to its ground-truth (GT) counterpart to compute the photometric loss. Finally, the gradients are backpropagated to the neural network and the calibration parameters.
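The two pipeline steps that drive the calibration, transforming the Gaussians with the current calibration pose and scoring the rendering against the GT image, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the function names are hypothetical, the splatting renderer is out of scope here, and a plain L1 photometric loss is assumed.

```python
import numpy as np

def world_to_camera(means_world, R, t):
    """Transform 3D Gaussian centers from the world frame to the camera frame.

    R (3x3 rotation) and t (3-vector translation) stand in for the pose
    derived from the calibration parameters being optimized.
    """
    return means_world @ R.T + t

def photometric_loss(rendered, gt):
    """L1 photometric loss between the splatted rendering and the GT image."""
    return np.abs(rendered - gt).mean()

# Toy example: identity rotation, small translation along the optical axis.
means = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 3.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])
means_cam = world_to_camera(means, R, t)

# Placeholder images in lieu of the splatted rendering and its GT counterpart.
loss = photometric_loss(np.zeros((4, 4)), np.ones((4, 4)))
```

In the actual method, the gradient of this loss flows back through the splatting operation into both the network predicting the Gaussian parameters and the calibration parameters themselves.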
Q. Herau, M. Bennehar, A. Moreau, N. Piasco, L. Roldão, D. Tsishkou, C. Migniot, P. Vasseur, C. Demonceaux. MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal calibration. arXiv preprint.