# Trimming the Fat: Efficient Compression of 3D Gaussian Splats through Pruning

The use of 3D models has recently gained traction, owing to the end-to-end training first offered by Neural Radiance Fields and, more recently, by 3D Gaussian Splatting (3DGS) models. The latter holds a significant advantage by inherently easing rapid convergence during training and offering extensive editability. However, despite rapid advances, the literature on the scalability of these models is still in its infancy. In this study, we take initial steps toward addressing this gap, presenting an approach that enables both the memory and computational scalability of such models. Specifically, we propose "Trimming the Fat", a post-hoc gradient-informed iterative pruning technique that eliminates redundant information encoded in the model. Our experiments on widely recognized benchmarks attest to the effectiveness of the approach: up to 75% of the Gaussians can be removed while maintaining, or even improving upon, baseline performance. Our approach achieves around 50× compression while preserving performance comparable to the baseline model, and speeds up rendering to as much as 600 FPS.
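The abstract only sketches the pruning procedure, so below is a minimal PyTorch sketch of what gradient-informed iterative pruning could look like. The saliency score (accumulated gradient magnitude per Gaussian), the `prune_gaussians` and `iterative_prune` helpers, and the per-round schedule are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def prune_gaussians(params: dict, scores: torch.Tensor, prune_fraction: float) -> dict:
    """Drop the prune_fraction of Gaussians with the lowest saliency scores.

    params -- per-Gaussian tensors (e.g. means, opacities, SH coefficients),
              each of shape (N, ...).
    scores -- one saliency value per Gaussian; here assumed to be the gradient
              magnitude accumulated over training views (an assumption).
    """
    n = scores.shape[0]
    keep = max(1, int(n * (1.0 - prune_fraction)))
    # Keep the indices of the most salient Gaussians.
    keep_idx = torch.topk(scores, keep).indices
    return {name: t[keep_idx] for name, t in params.items()}

def iterative_prune(params: dict, score_fn, rounds: int = 5,
                    fraction_per_round: float = 0.25) -> dict:
    """Post-hoc iterative pruning: score, prune a fraction, repeat.

    score_fn recomputes saliency on the surviving Gaussians each round;
    a short fine-tuning pass would typically follow each pruning step.
    """
    for _ in range(rounds):
        scores = score_fn(params)
        params = prune_gaussians(params, scores, fraction_per_round)
    return params

# Usage example with random stand-in data:
params = {
    "means": torch.randn(100_000, 3),
    "opacities": torch.rand(100_000, 1),
}
scores = torch.rand(100_000)  # stand-in for accumulated gradient magnitudes
pruned = prune_gaussians(params, scores, prune_fraction=0.75)
print(pruned["means"].shape)  # torch.Size([25000, 3])
```

Pruning in rounds rather than all at once lets the saliency scores be recomputed on the surviving Gaussians, which is the usual rationale for iterative over one-shot pruning.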
