Achieving high-resolution novel view synthesis (HRNVS) from low-resolution input views is a challenging task due to the lack of high-resolution data. Previous methods optimize a high-resolution Neural Radiance Field (NeRF) from low-resolution input views but suffer from slow rendering speed. In this work, we base our method on 3D Gaussian Splatting (3DGS) due to its capability of producing high-quality images at a faster rendering speed. To alleviate the shortage of data for higher-resolution synthesis, we propose to leverage off-the-shelf 2D diffusion priors by distilling 2D knowledge into 3D with Score Distillation Sampling (SDS). Nevertheless, applying SDS directly to Gaussian-based 3D super-resolution leads to undesirable and redundant 3D Gaussian primitives, owing to the randomness introduced by the generative priors. To mitigate this issue, we introduce two simple yet effective techniques to reduce the stochastic disturbances introduced by SDS. Specifically, we 1) shrink the range of diffusion timesteps in SDS with an annealing strategy; 2) randomly discard redundant Gaussian primitives during densification. Extensive experiments demonstrate that our proposed GaussianSR attains high-quality results for HRNVS with only low-resolution inputs on both synthetic and real-world datasets.
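The two stochasticity-reduction techniques above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; the hyperparameter names and values (e.g., `T_MAX_START`, `DROP_PROB`, a linear annealing schedule) are assumptions chosen only to make the idea concrete: the upper bound of the SDS diffusion timestep range shrinks over training, and newly densified Gaussian primitives are randomly discarded.

```python
import torch

# Assumed hyperparameters for illustration only (not from the paper).
T_MAX_START, T_MAX_END = 980, 500   # upper bound of the diffusion timestep range
T_MIN = 20                          # fixed lower bound
TOTAL_ITERS = 10_000                # total optimization iterations
DROP_PROB = 0.3                     # probability of discarding a newly densified Gaussian


def sample_annealed_timestep(iteration: int) -> int:
    """Sample an SDS diffusion timestep whose upper bound shrinks
    linearly over training (annealing strategy, sketch only)."""
    progress = min(iteration / TOTAL_ITERS, 1.0)
    t_max = int(T_MAX_START + progress * (T_MAX_END - T_MAX_START))
    return int(torch.randint(T_MIN, t_max + 1, (1,)).item())


def random_keep_mask(num_new_gaussians: int) -> torch.Tensor:
    """Boolean mask that keeps roughly (1 - DROP_PROB) of the Gaussian
    primitives produced by cloning/splitting during densification."""
    return torch.rand(num_new_gaussians) > DROP_PROB
```

In an optimization loop, `sample_annealed_timestep` would replace uniform timestep sampling in the SDS loss, and `random_keep_mask` would be applied to the set of candidate primitives before they are added to the Gaussian model.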