
Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis

We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six-degree-of-freedom (6-DOF) tracking of all dense scene elements. We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians which are optimized to reconstruct input images via differentiable rendering. To model dynamic scenes, we allow Gaussians to move and rotate over time while enforcing that they have persistent color, opacity, and size. By regularizing Gaussians' motion and rotation with local-rigidity constraints, we show that our Dynamic 3D Gaussians correctly model the same area of physical space over time, including the rotation of that space. Dense 6-DOF tracking and dynamic reconstruction emerge naturally from persistent dynamic view synthesis, without requiring any correspondence or flow as input. We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.
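The key mechanism is the local-rigidity regularizer: each Gaussian's neighbors should keep the same offsets, expressed in that Gaussian's local (rotating) frame, from one timestep to the next. Below is a minimal PyTorch sketch of how such a loss could look; the function name, tensor layout, fixed nearest-neighbor indices, and per-pair weights are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def local_rigidity_loss(mu_prev, mu_curr, R_prev, R_curr, nn_idx, nn_w):
    """Illustrative local-rigidity penalty between two frames (sketch).

    mu_prev, mu_curr : (N, 3) Gaussian centers at times t-1 and t
    R_prev, R_curr   : (N, 3, 3) Gaussian rotations (assumed orthonormal)
    nn_idx           : (N, k) indices of each Gaussian's k nearest neighbors
    nn_w             : (N, k) weights for each Gaussian-neighbor pair
    """
    # Offsets from each Gaussian to its neighbors at both timesteps.
    off_prev = mu_prev[nn_idx] - mu_prev[:, None, :]          # (N, k, 3)
    off_curr = mu_curr[nn_idx] - mu_curr[:, None, :]          # (N, k, 3)

    # Map frame-t offsets back into each Gaussian's frame-(t-1) orientation:
    # for orthonormal rotations, R_prev @ R_curr^T = R_{t-1} R_t^{-1}.
    rel_rot = R_prev @ R_curr.transpose(-1, -2)               # (N, 3, 3)
    off_curr_in_prev = torch.einsum('nij,nkj->nki', rel_rot, off_curr)

    # Neighbors should sit at the same place in the local frame over time.
    residual = (off_prev - off_curr_in_prev).norm(dim=-1)     # (N, k)
    return (nn_w * residual).mean()
```

Because color, opacity, and size are held fixed while only centers and rotations vary, minimizing a residual like this alongside the photometric rendering loss is what lets dense 6-DOF tracks be read off directly from the optimized Gaussian trajectories.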
