Ev-GS: Event-based Gaussian splatting for Efficient and Accurate Radiance Field Rendering

Computational neuromorphic imaging (CNI) with event cameras offers advantages such as minimal motion blur and enhanced dynamic range compared to conventional frame-based methods. Existing event-based radiance field rendering methods are built on neural radiance fields, which are computationally heavy and slow to reconstruct. Motivated by these two aspects, we introduce Ev-GS, the first CNI-informed scheme to infer 3D Gaussian splatting from a monocular event camera, enabling efficient novel view synthesis. Leveraging 3D Gaussians with purely event-based supervision, Ev-GS overcomes challenges such as detecting fast-moving objects and coping with insufficient lighting. Experimental results show that Ev-GS outperforms methods that take frame-based signals as input, rendering realistic views with reduced blurring and improved visual quality. Moreover, it achieves competitive reconstruction quality with lower computational occupancy than existing methods, paving the way toward a highly efficient CNI approach to signal processing.
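The abstract mentions training 3D Gaussians with purely event-based supervision. A common way to do this in the event-based rendering literature is to compare the log-intensity change between two rendered views against the accumulated event frame over the same interval. The sketch below illustrates that idea only; `render_fn`, `event_frame`, and the contrast threshold `C` are illustrative assumptions, not the authors' actual code or loss.

```python
import torch

def event_supervision_loss(render_fn, pose_t0, pose_t1, event_frame, C=0.2):
    """Hypothetical event-based supervision sketch (not the paper's implementation).

    render_fn(pose) -> (H, W, 3) image rendered from the 3D Gaussians at a camera pose.
    event_frame     -> (H, W) signed event counts accumulated between t0 and t1 (assumed given).
    C               -> event-camera contrast threshold (assumed known per sensor).
    """
    # Render grayscale views at two camera poses along the trajectory.
    img_t0 = render_fn(pose_t0).mean(dim=-1)  # crude luminance approximation
    img_t1 = render_fn(pose_t1).mean(dim=-1)

    # Events encode log-intensity changes; predict the change from the renders.
    eps = 1e-6
    pred_log_diff = torch.log(img_t1 + eps) - torch.log(img_t0 + eps)

    # Each event contributes roughly one contrast threshold C.
    target_log_diff = C * event_frame

    # Supervise the Gaussians from events alone, with no frame-based ground truth.
    return torch.nn.functional.mse_loss(pred_log_diff, target_log_diff)
```

In such a setup, the loss is backpropagated through the differentiable Gaussian splatting renderer to update the Gaussian parameters, which is what allows training without any conventional intensity frames.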
