Zhuoqian Yang, Shikai Li, Wayne Wu, Bo Dai
[Video Demo] | [Project Page] | [Paper]
Abstract: We present 3DHumanGAN, a 3D-aware generative adversarial network (GAN) that synthesizes images of full-body humans with consistent appearances under different view angles and body poses. To tackle the representational and computational challenges of synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it allows us to harness the power of 2D GANs to generate photo-realistic images; ii) it produces consistent images under varying view angles and specifiable poses; iii) the model benefits from the 3D human prior. Our model is learned adversarially from a collection of web images, without the need for manual annotation.
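The snippet below is a minimal, illustrative PyTorch sketch of the idea described in the abstract: a pose mapping network turns per-pixel features of a posed 3D human mesh into modulation signals that condition a 2D convolutional backbone. All module names, dimensions, and the simple element-wise modulation used here are assumptions for illustration only and are not the implementation in this repository; please refer to the released code for the actual generator.

```python
# Illustrative sketch: a 2D convolutional backbone modulated by a 3D pose
# mapping network. Names and shapes are hypothetical, not the repo's API.
import torch
import torch.nn as nn

class PoseMappingNetwork(nn.Module):
    """Maps per-pixel 3D pose features (e.g., rendered from a posed human mesh)
    to modulation parameters for the 2D backbone."""
    def __init__(self, pose_dim=16, hidden_dim=128, mod_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, mod_dim),
        )

    def forward(self, pose_features):                        # (B, H, W, pose_dim)
        return self.mlp(pose_features).permute(0, 3, 1, 2)   # (B, mod_dim, H, W)

class Modulated2DBackbone(nn.Module):
    """A 2D convolutional generator whose intermediate features are modulated
    (here via simple element-wise scaling) by the pose mapping output."""
    def __init__(self, in_dim=64, mod_dim=64, out_channels=3):
        super().__init__()
        self.conv1 = nn.Conv2d(in_dim, mod_dim, 3, padding=1)
        self.conv2 = nn.Conv2d(mod_dim, out_channels, 3, padding=1)

    def forward(self, z_features, modulation):
        x = torch.relu(self.conv1(z_features))
        x = x * (1.0 + modulation)           # pose-conditioned modulation
        return torch.tanh(self.conv2(x))     # synthesized RGB image

# Example forward pass with random tensors standing in for real inputs.
B, H, W = 1, 64, 64
pose_features = torch.randn(B, H, W, 16)     # per-pixel features of a posed mesh
z_features = torch.randn(B, 64, H, W)        # latent appearance features
mapper, backbone = PoseMappingNetwork(), Modulated2DBackbone()
image = backbone(z_features, mapper(pose_features))
print(image.shape)                           # torch.Size([1, 3, 64, 64])
```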
Please see doc/INSTALL.md for setting up the project environment.
Please see doc/GET_STARTED.md for an inference tutorial.
- Release technical report.
- Release code and pretrained models for training and inference.
- Release preprocessed train-ready dataset.
- Add instructions and scripts for data preprocessing.
- Add instructions and code for evaluation.
- (ICCV 2023) OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs, Honglin He et al. [Paper], [Project Page]
- (ECCV 2022) StyleGAN-Human: A Data-Centric Odyssey of Human Generation, Jianglin Fu et al. [Paper], [Project Page], [Dataset]
If you find this work useful for your research, please consider citing our paper:
@inproceedings{yang20233dhumangan,
  title={3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping},
  author={Yang, Zhuoqian and Li, Shikai and Wu, Wayne and Dai, Bo},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={23008--23019},
  year={2023}
}