Unsupervised 3D Pose Transfer with Cross Consistency and Dual Reconstruction
Chaoyue Song, Jiacheng Wei, Ruibo Li, Fayao Liu, Guosheng Lin
TPAMI 2023.
- Clone this repo:
git clone https://github.com/ChaoyueSong/X-DualNet.git
cd X-DualNet
- Install the dependencies. Our code has been tested with Python 3.6 and PyTorch 1.8 (earlier versions also work; please install PyTorch according to your CUDA version). We also need pymesh and open3d.
conda env create -f environment.yml
conda activate x_dualnet
- Clone the Synchronized-BatchNorm-PyTorch repo.
cd models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../
We use SMPL for the human mesh data; please download the data here. Our animal mesh data is generated with SMAL; please download it here.
By default, we load the latest checkpoint for testing; this can be changed with --which_epoch.
Download the pretrained model from the pretrained model link, save it in checkpoints/human, and then run the command
python test.py --dataset_mode human --dataroot [Your data path] --gpu_ids 0
The results will be saved in test_results/human/. human_test_list is randomly chosen for testing and differs from the list used in 3D-CoreNet.
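If you want to inspect the generated meshes programmatically, a minimal sketch like the one below pulls the vertex positions out of a Wavefront OBJ file (this assumes the results are written as .obj, which is not stated above; open3d's o3d.io.read_triangle_mesh is an alternative if you prefer a full mesh object):

```python
def obj_vertices(lines):
    """Extract vertex positions from the lines of a Wavefront OBJ file.

    Only 'v x y z' records are parsed; faces, normals, and texture
    coordinates are ignored.
    """
    return [
        [float(x) for x in line.split()[1:4]]
        for line in lines
        if line.startswith("v ")
    ]

# Hypothetical usage on a result file (the actual filenames depend on the run):
# verts = obj_vertices(open("test_results/human/some_result.obj"))
```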
Download the pretrained model from the pretrained model link, save it in checkpoints/animal, and then run the command
python test.py --dataset_mode animal --dataroot [Your data path] --gpu_ids 0
The results will be saved in test_results/animal/. animal_test_list is randomly chosen for testing. For the calculation of CD and EMD, please refer to TMNet and MSN.
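For reference, the (squared) Chamfer Distance between two point sets can be sketched in NumPy as follows. This is a minimal brute-force illustration of the metric, not the exact implementation used in TMNet or MSN (which use CUDA kernels and may normalize differently); EMD additionally requires solving an optimal-assignment problem and is omitted here:

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric squared Chamfer Distance between point sets of shape (N, 3) and (M, 3).

    For each point, find its nearest neighbor in the other set, and
    average the squared distances in both directions.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d = np.sum((p1[:, None, :] - p2[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```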
To train new models on human meshes, please run:
python train.py --dataset_mode human --dataroot [Your data path] --niter 100 --niter_decay 100 --batchSize 4 --gpu_ids 0,1
The output meshes generated during training will be saved in output/human/.
To train new models on animal meshes, please run:
python train.py --dataset_mode animal --dataroot [Your data path] --niter 100 --niter_decay 100 --batchSize 6 --gpu_ids 0,1
The output meshes generated during training will be saved in output/animal/.
Please change the batch size and gpu_ids as desired.
To continue training from a checkpoint, use --continue_train.
If you find our work useful for your research, please consider citing the paper:
@ARTICLE{10076900,
author={Song, Chaoyue and Wei, Jiacheng and Li, Ruibo and Liu, Fayao and Lin, Guosheng},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={Unsupervised 3D Pose Transfer With Cross Consistency and Dual Reconstruction},
year={2023},
volume={},
number={},
pages={1-13},
doi={10.1109/TPAMI.2023.3259059}}
This code is heavily based on CoCosNet; we rewrite its pix2pix architecture to ver2ver. We also use the optimal transport and PointNet++-style convolution code from FLOT, the data and edge loss code from NPT, and Synchronized Batch Normalization.
We thank all authors for the wonderful code!