Traditional volumetric fusion algorithms preserve the spatial structure of 3D scenes, which is beneficial for many tasks in computer vision and robotics. However, they often lack realism in terms of visualization. Emerging 3D Gaussian splatting bridges this gap, but existing Gaussian-based reconstruction methods often suffer from artifacts and inconsistencies with the underlying 3D structure, and they struggle with real-time optimization, unable to provide users with immediate, high-quality feedback. One bottleneck arises from the massive number of Gaussian parameters that must be updated during optimization. Instead of using 3D Gaussians as a standalone map representation, we incorporate them into a volumetric mapping system to take advantage of geometric information, and we propose using a quadtree data structure on images to drastically reduce the number of splats initialized. In this way, we simultaneously generate a compact 3D Gaussian map with fewer artifacts and a volumetric map on the fly. Our method, GSFusion, significantly enhances computational efficiency without sacrificing rendering quality, as demonstrated on both synthetic and real datasets.
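The quadtree-based initialization mentioned above can be illustrated with a minimal sketch. The idea, as described in the abstract, is to subdivide an image adaptively so that flat regions contribute few splat seeds while detailed regions contribute more. The function below is a hypothetical illustration (the name `quadtree_seeds`, the variance criterion, and the thresholds are assumptions, not the paper's actual implementation): it recursively splits an image region while its intensity variance exceeds a threshold and emits one seed point per homogeneous leaf.

```python
import numpy as np

def quadtree_seeds(img, x, y, w, h, var_thresh=25.0, min_size=4):
    """Recursively subdivide an image region; emit one seed per leaf.

    Hypothetical sketch: flat regions stop subdividing early (few seeds),
    textured regions subdivide down to min_size (many seeds).
    """
    region = img[y:y + h, x:x + w]
    if w <= min_size or h <= min_size or region.var() <= var_thresh:
        # Homogeneous (or minimal) leaf: one seed at its center.
        return [(x + w // 2, y + h // 2)]
    hw, hh = w // 2, h // 2
    seeds = []
    # Recurse into the four quadrants (handles odd sizes via remainders).
    for dx, dy, sw, sh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
        seeds += quadtree_seeds(img, x + dx, y + dy, sw, sh,
                                var_thresh, min_size)
    return seeds

# A flat image collapses to a single seed; a noisy (high-detail) image
# produces many seeds, one per fine-grained leaf.
flat = np.zeros((64, 64))
noisy = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
print(len(quadtree_seeds(flat, 0, 0, 64, 64)))   # 1
print(len(quadtree_seeds(noisy, 0, 0, 64, 64)))  # hundreds
```

This is the sense in which a quadtree "drastically reduces the number of splats initialized": seed count scales with image detail rather than with pixel count, so large uniform surfaces no longer spawn dense, redundant Gaussians.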