2501.07104.md


RMAvatar: Photorealistic Human Avatar Reconstruction from Monocular Video Based on Rectified Mesh-embedded Gaussians

We introduce RMAvatar, a novel human avatar representation with Gaussian splatting embedded on a mesh, which learns a clothed avatar from a monocular video. We use the explicit mesh geometry to represent the motion and shape of a virtual human, and implicit appearance rendering with Gaussian splatting. Our method consists of two main modules: a Gaussian initialization module and a Gaussian rectification module. We embed Gaussians into triangular faces and control their motion through the mesh, which ensures low-frequency motion and surface deformation of the avatar. Due to the limitations of the LBS (Linear Blend Skinning) formulation, the human skeleton struggles to model complex non-rigid transformations. We therefore design a pose-related Gaussian rectification module to learn fine-detailed non-rigid deformations, further improving the realism and expressiveness of the avatar. In extensive experiments on public datasets, RMAvatar shows state-of-the-art performance in both rendering quality and quantitative evaluation.
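The mesh-embedded idea above can be sketched minimally: attach each Gaussian to a triangular face with fixed barycentric coordinates, so that deforming the mesh automatically carries the Gaussian centers along with the surface. This is a hypothetical illustration, not the paper's code; the function name, shapes, and the one-Gaussian-per-face choice are assumptions.

```python
import numpy as np

def gaussian_centers(vertices, faces, barycentric):
    """vertices: (V, 3) positions, faces: (F, 3) vertex indices,
    barycentric: (F, 3) weights summing to 1 per face.
    Returns one Gaussian center per face, shape (F, 3)."""
    tri = vertices[faces]                       # (F, 3, 3): the 3 corners of each face
    # weighted sum of the corners with the barycentric weights
    return np.einsum('fij,fi->fj', tri, barycentric)

# toy mesh: a single triangle in the xy-plane
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
bary = np.array([[1/3, 1/3, 1/3]])              # place the Gaussian at the centroid

centers = gaussian_centers(verts, faces, bary)
print(centers)   # [[0.333... 0.333... 0.]]

# deform the mesh: the embedded Gaussian follows the surface
verts_deformed = verts + np.array([0.0, 0.0, 0.5])
moved = gaussian_centers(verts_deformed, faces, bary)
print(moved)     # [[0.333... 0.333... 0.5]]
```

In this picture, the mesh (driven by LBS) supplies the low-frequency motion, while the paper's pose-related rectification module would add per-Gaussian non-rigid offsets on top of these centers.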
