High-fidelity reconstruction of 3D human avatars has wide applications in virtual reality. In this paper, we introduce FAGhead, a method that enables fully controllable human portraits from monocular videos. We explicitly bind the traditional 3D morphable meshes (3DMM) to neutral 3D Gaussians and optimize them to reconstruct complex expressions. Furthermore, we employ a novel Point-based Learnable Representation Field (PLRF) with learnable Gaussian point positions to enhance reconstruction performance. Meanwhile, to effectively handle the edges of avatars, we introduce alpha rendering to supervise the alpha value of each pixel. Extensive experiments on open-source datasets and our own captured datasets demonstrate that our approach generates high-fidelity 3D head avatars with full control over the expression and pose of the virtual avatar, outperforming existing works.
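The alpha-rendering supervision mentioned above can be illustrated with a minimal sketch. Assuming the supervision takes the common form of an L1 loss between the alpha map rendered from the Gaussians and a ground-truth foreground mask (the paper's exact formulation may differ; the function name and toy data are hypothetical):

```python
import numpy as np

def alpha_supervision_loss(rendered_alpha, gt_mask):
    """Hypothetical L1 loss between the per-pixel alpha map rendered
    from the 3D Gaussians and the ground-truth foreground mask."""
    return float(np.mean(np.abs(rendered_alpha - gt_mask)))

# Toy example: a 2x2 rendered alpha map vs. a binary foreground mask.
alpha = np.array([[0.9, 0.1],
                  [0.8, 0.0]])
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
loss = alpha_supervision_loss(alpha, mask)  # mean of |0.1, 0.1, 0.2, 0.0| = 0.1
```

Penalizing the rendered alpha values directly encourages Gaussians to stay inside the avatar silhouette, which is what gives the method its cleaner edge management.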