
Question About Custom Data Training for Arbitrary Self-Shot Video Reconstruction #33

Open
YaqiChang opened this issue Jul 2, 2024 · 2 comments

@YaqiChang
First, I'd like to express my gratitude for your outstanding work and congratulations on your acceptance to CVPR 2024!

I am currently testing the provided model on the AvatarRex dataset to animate avatars, and it works wonderfully. However, I am confused about the process of reconstructing an avatar from an arbitrary self-shot video.

According to GEN_DATA.md, it seems that I need to provide a dataset for a single avatar. Given that the THuman4.0 dataset uses 24 cameras per avatar, does this mean I need to capture an independent multi-view dataset for each new avatar I wish to reconstruct?

This requirement seems to suggest that the reconstruction cost for each avatar might be high. Could you please guide me on whether I have misunderstood the process?

Thank you for your assistance!

@lizhe00 (Owner) commented Jul 2, 2024

Hi, our work creates an animatable avatar from multi-view videos. If you want to reduce the device cost to a single camera, our model also supports a monocular video as input, given the SMPL-X registration. However, the quality will degrade because of the absence of 3D supervision and inaccurate SMPL-X fitting.
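For anyone preparing a monocular capture, a minimal sanity check of the per-frame SMPL-X registrations might look like the sketch below. The file layout, function names, and the 55-joint axis-angle / 10-coefficient shape convention are assumptions for illustration, not this repository's actual data format; consult GEN_DATA.md for the real specification.

```python
import numpy as np

# Common SMPL-X convention assumed here: 55 joints, 3 axis-angle
# parameters each, plus 10 shape coefficients (betas).
NUM_JOINTS = 55
NUM_BETAS = 10

def validate_registration(poses: np.ndarray, betas: np.ndarray) -> None:
    """Basic shape checks before feeding a monocular capture into training."""
    n_frames = poses.shape[0]
    # One flattened axis-angle pose vector per video frame.
    assert poses.shape == (n_frames, NUM_JOINTS * 3), "expected flattened axis-angle poses"
    # A single shared body-shape vector for the whole sequence.
    assert betas.shape == (NUM_BETAS,), "expected 10 shape coefficients"

# Synthetic stand-in for a real 120-frame capture:
poses = np.zeros((120, NUM_JOINTS * 3))
betas = np.zeros(NUM_BETAS)
validate_registration(poses, betas)
print("registration shapes OK")
```

This catches the most common data-preparation mistake (per-frame vs. per-sequence parameters) before a long training run starts.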

@YaqiChang (Author)
Thanks for your explanation!
