Wild-GS: Real-Time Novel View Synthesis from Unconstrained Photo Collections

Photographs captured in unstructured tourist environments frequently exhibit variable appearances and transient occlusions, challenging accurate scene reconstruction and inducing artifacts in novel view synthesis. Although prior approaches have integrated the Neural Radiance Field (NeRF) with additional learnable modules to handle the dynamic appearances and eliminate transient objects, their extensive training demands and slow rendering speeds limit practical deployment. Recently, 3D Gaussian Splatting (3DGS) has emerged as a promising alternative to NeRF, offering superior training and inference efficiency along with better rendering quality. This paper presents Wild-GS, an innovative adaptation of 3DGS optimized for unconstrained photo collections while preserving its efficiency benefits. Wild-GS determines the appearance of each 3D Gaussian from its inherent material attributes, per-image global illumination and camera properties, and point-level local variance of reflectance. Unlike previous methods that model reference features in image space, Wild-GS explicitly aligns the pixel appearance features to the corresponding local Gaussians by sampling the triplane extracted from the reference image. This novel design effectively transfers the high-frequency detailed appearance of the reference view to 3D space and significantly expedites the training process. Furthermore, 2D visibility maps and depth regularization are leveraged to mitigate the transient effects and constrain the geometry, respectively. Extensive experiments demonstrate that Wild-GS achieves state-of-the-art rendering performance and the highest training and inference efficiency among all existing techniques.
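
The triplane-sampling step described above can be illustrated with a minimal sketch (not the authors' code): each Gaussian center is projected onto three axis-aligned feature planes extracted from the reference image, the planes are bilinearly sampled at those projections, and the results are fused into a per-Gaussian appearance feature. The function name, feature dimensions, and the summation-based fusion are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_triplane_features(planes: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
    """
    planes: (3, C, H, W) feature planes for the XY, XZ, and YZ projections
            (assumed to be extracted from the reference image).
    xyz:    (N, 3) Gaussian centers, assumed normalized to [-1, 1].
    returns (N, C) per-Gaussian appearance features.
    """
    # Project the 3D coordinates onto the three axis-aligned planes.
    coords = torch.stack(
        [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]], dim=0
    )  # (3, N, 2)
    # grid_sample expects a (B, H_out, W_out, 2) grid; treat the N points as a 1 x N grid.
    grid = coords.unsqueeze(1)  # (3, 1, N, 2)
    feats = F.grid_sample(planes, grid, mode="bilinear", align_corners=True)  # (3, C, 1, N)
    # Fuse the three plane features by summation (one common choice for triplanes).
    return feats.squeeze(2).sum(dim=0).transpose(0, 1)  # (N, C)

# Example usage with random tensors:
planes = torch.randn(3, 32, 64, 64)   # hypothetical triplane from a reference image
xyz = torch.rand(1000, 3) * 2 - 1     # 1000 Gaussian centers in [-1, 1]^3
appearance = sample_triplane_features(planes, xyz)  # (1000, 32)
```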
