arXiv, December 2021.
Yuliang Xiu · Jinlong Yang · Dimitrios Tzionas · Michael J. Black
- Given an RGB image, you can get:
  - image (png): masked-out human, normal images (rendered from the body, predicted from the image)
  - mesh (obj): SMPL(-X) body, reconstructed clothed human (see the loading sketch below)
  - video (mp4): self-rotating clothed human

ICON's outputs from a single RGB image
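If you want to post-process these outputs, the exported meshes can be loaded with a standard library such as trimesh. A minimal sketch, assuming hypothetical output paths (the actual filenames depend on your input images and the chosen config):

```python
# Inspect ICON's exported meshes with trimesh (paths are illustrative).
import trimesh

clothed = trimesh.load("../results/icon-filter/obj/example_recon.obj")  # reconstructed clothed human
body = trimesh.load("../results/icon-filter/obj/example_smpl.obj")      # fitted SMPL(-X) body

print(clothed.vertices.shape, clothed.faces.shape)  # per-vertex positions and triangles
print(body.is_watertight)                           # the body mesh is a closed surface
```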
- If you want to create a realistic and animatable 3D clothed avatar directly from video / sequential images:
  - fully textured with per-vertex color
  - can be animated by SMPL pose parameters (see the sketch after this list)
  - natural pose-dependent clothing deformation

3D clothed avatar created from 400+ images using ICON+SCANimate, animated by AIST++
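Because the avatar is rigged to SMPL, reposing it amounts to feeding new pose parameters to the body model. A minimal sketch using the smplx package; the model path and the perturbed joint index are assumptions for illustration, not values from this repo:

```python
# Drive a SMPL body with new pose parameters via the smplx package.
import torch
import smplx

# Assumed location of the downloaded SMPL model files.
model = smplx.create("./data/smpl_models", model_type="smpl", gender="neutral")

# SMPL body pose: 23 joints x 3 axis-angle values; zeros = rest pose.
body_pose = torch.zeros(1, 69)
body_pose[0, 50] = 0.8  # perturb one axis-angle component (index is illustrative)

output = model(body_pose=body_pose, return_verts=True)
print(output.vertices.shape)  # (1, 6890, 3) for SMPL
```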
- testing code and pretrained models (*: self-implemented version)
- ICON (w/ & w/o global encoder)
- PIFu* (RGB image + predicted normal map as input)
- PaMIR* (RGB image + predicted normal map as input)
- colab notebook
- training code
- dataset processing pipeline
- Video-to-Avatar module
Please follow the Installation Instructions to set up all the required packages, extra data, and models.
```bash
cd ICON/apps

# PIFu* (*: re-implementation)
python infer.py -cfg ../configs/pifu.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# PaMIR* (*: re-implementation)
python infer.py -cfg ../configs/pamir.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# ICON w/ global filter (better visual details --> lower Normal Error)
python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# ICON w/o global filter (higher evaluation scores --> lower P2S/Chamfer Error)
python infer.py -cfg ../configs/icon-nofilter.yaml -gpu 0 -in_dir ../examples -out_dir ../results
```
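To sweep all four configurations over the same inputs, a small driver script can wrap the commands above. A sketch (not part of the repo), run from ICON/apps:

```python
# Run every supported config over the example images via subprocess.
import subprocess

for cfg in ["pifu", "pamir", "icon-filter", "icon-nofilter"]:
    subprocess.run(
        ["python", "infer.py", "-cfg", f"../configs/{cfg}.yaml",
         "-gpu", "0", "-in_dir", "../examples", "-out_dir", "../results"],
        check=True,  # stop if any run fails
    )
```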
Comparison with other state-of-the-art methods

Predicted normals on in-the-wild images with extreme poses
```bibtex
@article{xiu2021icon,
  title={ICON: Implicit Clothed humans Obtained from Normals},
  author={Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J},
  journal={arXiv preprint arXiv:2112.09127},
  year={2021}
}
```
We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions, Tsvetelina Alexiadis for her help with the AMT perceptual study, Taylor McConnell for her voice-over, Benjamin Pellkofer for the webpage, and Yuanlu Xu for his help in comparing with ARCH and ARCH++.

Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.
Here are some great resources we benefited from:
- MonoPortDataset for Data Processing
- PaMIR, PIFu, PIFuHD, and MonoPort for Benchmark
- SCANimate and AIST++ for Animation
- rembg for Human Segmentation
- smplx, PARE, PyMAF, and PIXIE for Human Pose & Shape Estimation
- CAPE and THuman for Dataset
- PyTorch3D for Differentiable Rendering
Some images used in the qualitative examples come from pinterest.com.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 (CLIPE Project).
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
For more questions, please contact icon@tue.mpg.de
For commercial licensing, please contact ps-licensing@tue.mpg.de