Volumetric rendering based methods, such as NeRF, excel at HDR view synthesis from RAW images, especially for nighttime scenes. However, they suffer from long training times and cannot render in real time due to their dense sampling requirements. The advent of 3D Gaussian Splatting (3DGS) enables real-time rendering and faster training. However, implementing RAW-image-based view synthesis directly with 3DGS is challenging due to its inherent drawbacks: 1) in nighttime scenes, extremely low SNR leads to poor structure-from-motion (SfM) estimation in distant views; 2) the limited representation capacity of spherical harmonics (SH) is unsuitable for the RAW linear color space; and 3) inaccurate scene structure hampers downstream tasks such as refocusing. To address these issues, we propose LE3D (Lighting Every darkness with 3DGS). Our method introduces Cone Scatter Initialization to enrich the SfM estimation, and replaces SH with a Color MLP to represent the RAW linear color space. Additionally, we introduce depth-distortion and near-far regularizations to improve the accuracy of the scene structure for downstream tasks. These designs enable LE3D to perform real-time novel view synthesis, HDR rendering, refocusing, and tone-mapping changes. Compared to previous volumetric rendering based methods, LE3D reduces training time to 1% and improves rendering speed by up to 4,000× in FPS for 2K-resolution images.