Dynamic scene reconstruction has garnered significant attention in recent years owing to its ability to deliver high-quality, real-time rendering. Among various methodologies, constructing a 4D spatial-temporal representation, such as 4D-GS, has gained popularity for its high-quality rendered images. However, these methods often produce suboptimal surfaces, as the discrete 3D Gaussian point clouds fail to align precisely with the object's surface. To address this problem, we propose DynaSurfGS to achieve both photorealistic rendering and high-fidelity surface reconstruction of dynamic scenarios. Specifically, the DynaSurfGS framework first combines Gaussian features from 4D neural voxels with planar-based Gaussian Splatting to facilitate precise surface reconstruction. It leverages normal regularization to enforce the smoothness of the surfaces of dynamic objects. It also incorporates the as-rigid-as-possible (ARAP) constraint to maintain the approximate rigidity of local neighborhoods of 3D Gaussians between timesteps and to ensure that adjacent 3D Gaussians remain closely aligned throughout. Extensive experiments demonstrate that DynaSurfGS surpasses state-of-the-art methods in both high-fidelity surface reconstruction and photorealistic rendering.
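To make the ARAP idea concrete: a common way to approximate local rigidity between timesteps is to penalize changes in each Gaussian's distances to its nearest neighbors. The sketch below is a simplified illustration of this style of constraint, not the actual DynaSurfGS loss; the function names, the brute-force k-NN search, and the distance-preservation formulation are all assumptions for illustration.

```python
import numpy as np

def knn_indices(points, k):
    # Brute-force k-nearest neighbors (excluding self) for each point.
    # points: (N, 3) array of Gaussian centers at a reference timestep.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]  # (N, k) neighbor indices

def arap_loss(pos_t, pos_t1, k=4):
    """Simplified rigidity penalty: changes in distances to the k nearest
    neighbors between consecutive timesteps t and t+1."""
    nbrs = knn_indices(pos_t, k)                               # (N, k)
    d_t  = np.linalg.norm(pos_t[:, None, :]  - pos_t[nbrs],  axis=-1)
    d_t1 = np.linalg.norm(pos_t1[:, None, :] - pos_t1[nbrs], axis=-1)
    return float(np.mean((d_t - d_t1) ** 2))
```

A rigid motion (e.g., a pure translation) leaves all pairwise distances unchanged and thus incurs zero loss, while stretching or shearing a local neighborhood is penalized, which is the behavior the ARAP constraint is meant to encourage.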