Despite the photorealistic novel view synthesis (NVS) performance achieved by the original 3D Gaussian splatting (3DGS), its rendering quality degrades significantly with sparse input views. This drop stems mainly from the limited number of initial points generated from the sparse input, insufficient supervision during training, and inadequate regularization of oversized Gaussian ellipsoids. To address these issues, we propose LoopSparseGS, a loop-based 3DGS framework for sparse-input novel view synthesis. Specifically, we propose a loop-based Progressive Gaussian Initialization (PGI) strategy that iteratively densifies the initial point cloud using pseudo-images rendered during training. We then leverage the sparse but reliable depth from Structure from Motion, together with window-based dense monocular depth, to provide precise geometric supervision via the proposed Depth-alignment Regularization (DAR). Additionally, we introduce a novel Sparse-friendly Sampling (SFS) strategy to handle oversized Gaussian ellipsoids that cause large pixel errors. Comprehensive experiments on four datasets demonstrate that LoopSparseGS outperforms existing state-of-the-art methods for sparse-input novel view synthesis across indoor, outdoor, and object-level scenes at various image resolutions.
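As a rough illustration of the depth-supervision idea described above, the sketch below combines the two depth cues in a simple way: a dense monocular depth map (which is only defined up to scale and shift) is first aligned to the sparse but metrically reliable SfM depths via least squares, and the rendered depth is then penalized against both. The function name, weighting, and the exact alignment scheme are assumptions for illustration only; the paper's actual DAR formulation (including its window-based handling of monocular depth) may differ.

```python
import numpy as np

def depth_alignment_loss(rendered, mono, sfm_depth, sfm_mask, dense_weight=0.1):
    """Hedged sketch of a depth-alignment regularization term.

    rendered  : (H, W) depth rendered from the Gaussians
    mono      : (H, W) dense monocular depth (scale/shift ambiguous)
    sfm_depth : (H, W) sparse SfM depth, valid where sfm_mask is True
    sfm_mask  : (H, W) boolean mask of reliable SfM points
    """
    # Align the monocular depth to SfM scale: solve s * mono + t ≈ sfm_depth
    # in a least-squares sense over the sparse valid pixels.
    m = mono[sfm_mask]
    d = sfm_depth[sfm_mask]
    A = np.stack([m, np.ones_like(m)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, d, rcond=None)
    mono_aligned = s * mono + t

    # Sparse term: exact geometric supervision at the reliable SfM points.
    sparse_term = np.abs(rendered[sfm_mask] - d).mean()
    # Dense term: aligned monocular depth supervises every pixel.
    dense_term = np.abs(rendered - mono_aligned).mean()
    return sparse_term + dense_weight * dense_term
```

In this toy form, a rendered depth that matches the (aligned) ground-truth geometry drives the loss toward zero, while oversized or misplaced Gaussians that distort the rendered depth are penalized everywhere, not only at the sparse SfM points.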