Initial applications of 3D Gaussian Splatting (3DGS) in Visual Simultaneous Localization and Mapping (VSLAM) demonstrate that high-quality volumetric reconstructions can be generated from monocular video streams. However, despite these promising advancements, current 3DGS integrations exhibit reduced tracking performance and lower operating speeds compared to traditional VSLAM. To address these issues, we propose integrating 3DGS with Direct Sparse Odometry (DSO), a monocular photometric SLAM system. Our preliminary experiments show that using Direct Sparse Odometry point-cloud outputs, rather than reconstructions from standard structure-from-motion methods, significantly shortens the training time needed to achieve high-quality renders. Reducing 3DGS training time enables the development of 3DGS-integrated SLAM systems that operate in real time on mobile hardware. These promising initial findings suggest that combining traditional VSLAM systems with 3DGS warrants further exploration.
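As a minimal sketch of the idea, the snippet below shows one way a sparse DSO point cloud (exported as arrays of point positions and colors) could seed a 3DGS model in place of an SfM reconstruction, following the common recipe of one Gaussian per point with isotropic scales set from nearest-neighbor distances. The function name, input layout, and parameter choices here are illustrative assumptions, not part of DSO or any particular 3DGS codebase.

```python
# Hedged sketch: seeding 3DGS from a DSO point cloud instead of SfM output.
# Assumes the DSO map points have been exported as an (N, 3) array of XYZ
# positions and an (N, 3) array of RGB colors in [0, 1]; names and defaults
# below are assumptions for illustration only.
import numpy as np
from scipy.spatial import cKDTree


def init_gaussians_from_points(xyz: np.ndarray, rgb: np.ndarray) -> dict:
    """Build initial Gaussian parameters from a sparse point cloud.

    One Gaussian per point; isotropic scale set from the mean distance
    to the 3 nearest neighbors, as in typical 3DGS initialization.
    """
    # Distance to the 3 nearest neighbors (index 0 is the point itself).
    dists, _ = cKDTree(xyz).query(xyz, k=4)
    mean_nn = dists[:, 1:].mean(axis=1)
    # Clamp to avoid degenerate (zero-size) Gaussians from duplicate points.
    log_scales = np.log(np.maximum(mean_nn, 1e-7))[:, None].repeat(3, axis=1)

    return {
        "means": xyz.astype(np.float32),          # Gaussian centers
        "colors": rgb.astype(np.float32),         # used for the SH DC term
        "log_scales": log_scales.astype(np.float32),
        "rotations": np.tile([1.0, 0.0, 0.0, 0.0], (len(xyz), 1)).astype(np.float32),
        "opacities": np.full((len(xyz), 1), 0.1, dtype=np.float32),
    }
```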