Deblur-Avatar: Animatable Avatars from Motion-Blurred Monocular Videos

We introduce Deblur-Avatar, a novel framework for modeling high-fidelity, animatable 3D human avatars from motion-blurred monocular video inputs. Motion blur is prevalent in real-world dynamic video capture, and in 3D human avatar modeling it arises chiefly from human movement. Existing methods either (1) assume sharp image inputs, failing to address the detail loss introduced by motion blur, or (2) model only blur caused by camera movement, neglecting human motion blur, which is more common in animatable avatar settings. Our approach integrates a human-movement-based motion blur model into 3D Gaussian Splatting (3DGS). By explicitly modeling human motion trajectories during the exposure time, we jointly optimize these trajectories and the 3D Gaussians to reconstruct sharp, high-quality human avatars. We further employ a pose-dependent fusion mechanism to distinguish moving body regions, so that both blurred and sharp areas are optimized effectively. Extensive experiments on synthetic and real-world datasets demonstrate that Deblur-Avatar significantly outperforms existing methods in rendering quality and quantitative metrics, producing sharp avatar reconstructions and enabling real-time rendering under challenging motion blur conditions.
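To make the core idea concrete, here is a minimal PyTorch sketch of how a blurry frame can be simulated as the average of sharp renders along a body-motion trajectory during exposure, with the trajectory optimized jointly with the rendering model. Everything here is an illustrative assumption rather than the paper's actual formulation: `ToyPoseRenderer` is a mock stand-in for a differentiable 3DGS renderer, the trajectory is a simple linear pose interpolation, and the pose dimension (72, SMPL-style) and image size are placeholders.

```python
import torch

# Hypothetical stand-in for a differentiable renderer conditioned on a body
# pose; the actual method renders posed 3D Gaussians, which we mock here with
# a tiny linear map so the sketch runs end to end.
class ToyPoseRenderer(torch.nn.Module):
    def __init__(self, pose_dim=72, image_pixels=64 * 64):
        super().__init__()
        self.net = torch.nn.Linear(pose_dim, image_pixels)

    def forward(self, pose):
        return torch.sigmoid(self.net(pose))  # fake "image" in [0, 1]

def blur_render(renderer, pose_start, pose_end, n_samples=8):
    """Average sharp renders at poses sampled across the exposure window.

    This is the essence of a human-motion blur model: a blurry frame is the
    time integral (here, a discrete mean) of sharp renders along the body
    trajectory traversed during exposure.
    """
    ts = torch.linspace(0.0, 1.0, n_samples)
    frames = [renderer((1 - t) * pose_start + t * pose_end) for t in ts]
    return torch.stack(frames).mean(dim=0)

# Jointly optimize the renderer and the latent end-of-exposure pose so the
# simulated blur matches an observed blurry frame (placeholder data below).
renderer = ToyPoseRenderer()
pose_start = torch.zeros(72)                    # assumed known start pose
pose_end = torch.zeros(72, requires_grad=True)  # latent trajectory endpoint
observed_blurry = torch.rand(64 * 64)           # placeholder observation

opt = torch.optim.Adam(list(renderer.parameters()) + [pose_end], lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(
        blur_render(renderer, pose_start, pose_end), observed_blurry
    )
    loss.backward()
    opt.step()
```

Because the blur simulation is just an average of differentiable renders, gradients flow to both the scene representation and the pose trajectory, which is what allows the sharp avatar and the motion during exposure to be recovered together. The pose-dependent fusion mechanism described in the abstract, which weights blurred versus sharp body regions, is not reproduced in this sketch.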
