# HDRSplat: Gaussian Splatting for High Dynamic Range 3D Scene Reconstruction from Raw Images

The recent advent of 3D Gaussian Splatting (3DGS) has revolutionized the 3D scene reconstruction space, enabling high-fidelity novel view synthesis in real time. However, with the exception of RawNeRF, all prior 3DGS- and NeRF-based methods rely on 8-bit tone-mapped Low Dynamic Range (LDR) images for scene reconstruction. Such methods struggle to achieve accurate reconstructions in scenes that require a higher dynamic range. Examples include scenes captured at night or in poorly lit indoor spaces with a low signal-to-noise ratio, as well as daylight scenes whose shadow regions exhibit extreme contrast. Our proposed method, HDRSplat, tailors 3DGS to train directly on 14-bit linear raw images in near darkness, which preserves the scenes' full dynamic range and content. Our key contributions are two-fold: Firstly, we propose a loss suited to linear HDR space that effectively extracts scene information from noisy dark regions and nearly saturated bright regions simultaneously, while also handling view-dependent colors without increasing the degree of spherical harmonics. Secondly, through careful rasterization tuning, we implicitly overcome 3DGS's heavy reliance on, and sensitivity to, point cloud initialization. This is critical for accurate reconstruction in regions of low texture, high depth of field, and low illumination. HDRSplat is the fastest method to date to perform 14-bit (HDR) 3D scene reconstruction, at ≤15 minutes per scene (∼30x faster than the prior state of the art, RawNeRF). It also boasts the fastest inference speed, at ≥120 fps. We further demonstrate the applicability of our HDR scene reconstruction by showcasing applications such as synthetic defocus, dense depth map extraction, and post-capture control of exposure, tone-mapping, and viewpoint.
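The abstract does not spell out the loss, but its description closely matches the family of weighted L2 losses that RawNeRF applies in linear raw space, where residuals are down-weighted by a stop-gradient copy of the predicted intensity so that the loss approximates an L2 error on log-tone-mapped values. Below is a minimal PyTorch sketch of that idea; the function name `linear_hdr_loss` and the default `eps` are illustrative assumptions, not taken from the paper.

```python
import torch

def linear_hdr_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Illustrative weighted L2 loss in linear HDR space (RawNeRF-style).

    Dividing residuals by a stop-gradient copy of the prediction approximates
    an L2 loss on log(y + eps), so noisy dark pixels and nearly saturated
    bright pixels contribute on comparable footing.

    pred, target: linear raw intensities of matching shape (hypothetical inputs).
    """
    # detach() keeps the per-pixel weight out of the gradient computation,
    # so it acts as a fixed scale rather than a trainable quantity.
    weight = 1.0 / (pred.detach() + eps)
    return ((weight * (pred - target)) ** 2).mean()
```

In training, such a loss would be applied between the rasterized raw output and the ground-truth raw frame, e.g. `loss = linear_hdr_loss(rendered_raw, gt_raw)`. How HDRSplat additionally handles view-dependent color without raising the spherical-harmonics degree is a separate component of the method and is not captured by this sketch.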
