conda create python=3.9 --name GUESS
conda activate GUESS
Install the packages listed in requirements.txt and install PyTorch 1.12.1:
pip install -r requirements.txt
We tested our code on Python 3.9.12 and PyTorch 1.12.1.
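After installation, a quick way to confirm the environment matches the tested versions (a minimal sketch, not part of the original scripts):

```python
# Quick sanity check of the environment; versions should match the tested setup.
import sys
import torch

print(sys.version.split()[0])     # expect 3.9.x
print(torch.__version__)          # expect 1.12.1
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```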
Run the following scripts to download dependency materials:
bash prepare/download_smpl_model.sh
bash prepare/prepare_clip.sh
For text-to-motion evaluation:
bash prepare/download_t2m_evaluators.sh
For convenience, you can directly download the datasets we processed and put them into ./datasets/. Please cite the original papers if you use these datasets.
| Datasets | Google Cloud |
|---|---|
| HumanML3D | Download |
| KIT | Download |
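As a quick check that the datasets landed where the code expects them, a hedged sketch; the sub-folder names below are assumptions based on the dataset names above, so adjust them to whatever you unpacked:

```python
# Hedged sketch: verify the processed datasets exist under ./datasets/.
# The folder names are assumptions; adjust to match your download.
from pathlib import Path

for name in ("HumanML3D", "KIT-ML"):
    path = Path("datasets") / name
    print(f"{name}: {'found' if path.is_dir() else 'missing'} at {path}")
```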
Please first check the parameters in configs/config_vae_humanml3d.yaml, e.g. NAME, DEBUG.
Then, run the following command
python -m train --cfg configs/config_vae_humanml3d.yaml --cfg_assets configs/assets.yaml --batch_size 64 --nodebug
Please update the parameters in configs/config_mld_humanml3d.yaml, e.g. NAME, DEBUG, PRETRAINED_VAE (change it to the path of your latest checkpoint from the previous step).
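If you want to double-check the values before launching (this applies to the VAE config above as well), here is a minimal sketch using PyYAML; the field names follow this README, but their exact nesting inside the yaml file is an assumption, so verify against the file itself:

```python
# Hedged sketch: print a few fields from the training config with PyYAML.
# NAME, DEBUG, and the pretrained-VAE path are the fields mentioned above;
# their exact nesting inside the yaml file is an assumption.
import yaml

with open("configs/config_mld_humanml3d.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg.get("NAME"), cfg.get("DEBUG"))
print(cfg.get("TRAIN", {}).get("PRETRAINED_VAE"))  # assumed location of PRETRAINED_VAE
```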
Then, run the following command
python -m train --cfg configs/config_mld_humanml3d.yaml --cfg_assets configs/assets.yaml --batch_size 64 --nodebug
Please first set TEST.CHECKPOINT in configs/config_mld_humanml3d.yaml to the path of your trained model checkpoint.
Then, run the following command
python -m test --cfg configs/config_mld_humanml3d.yaml --cfg_assets configs/assets.yaml
We support text-file or keyboard input; the generated motions are saved as npy files.
Please check configs/assets.yaml for the path configuration; TEST.FOLDER is the output folder.
Then, run the following script
python demo.py --cfg ./configs/config_mld_humanml3d.yaml --cfg_assets ./configs/assets.yaml --example ./demo/example.txt
Some parameters:
- --example=./demo/example.txt: input file containing the text prompts
- --task=text_motion: generate motions from the test set of the dataset
- --task=random_sampling: random motion sampling from noise
- --replication: generate motions for the same input texts multiple times
- --allinone: store all generated motions in a single npy file with the shape of [num_samples, num_replication, num_frames, num_joints, xyz]
The outputs:
- npy file: the generated motions with the shape of (nframe, 22, 3)
- text file: the input text prompt
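To inspect the generated files, a minimal sketch; the file name below is hypothetical, use whatever appears in your TEST.FOLDER:

```python
# Hedged sketch: load a generated motion and check its shape.
import numpy as np

motion = np.load("results/example_motion.npy")  # hypothetical path under TEST.FOLDER
print(motion.shape)
# (nframe, 22, 3) for a single sample, or
# [num_samples, num_replication, num_frames, num_joints, xyz] when --allinone is used
```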
Refer to TEMOS-Rendering motions for the Blender setup, then install the following dependencies:
YOUR_BLENDER_PYTHON_PATH/python -m pip install -r prepare/requirements_render.txt
Run the following command using blender:
YOUR_BLENDER_PATH/blender --background --python render.py -- --cfg=./configs/render.yaml --dir=YOUR_NPY_FOLDER --mode=video --joint_type=HumanML3D
To fit SMPL meshes to the generated joints, run:
python -m fit --dir YOUR_NPY_FOLDER --save_folder TEMP_PLY_FOLDER --cuda
This outputs:
- mesh npy file: the generated SMPL vertices with the shape of (nframe, 6893, 3)
- ply files: the ply mesh files for Blender or MeshLab
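To verify the fitted meshes, a minimal sketch; the file name is hypothetical, use the files actually written to TEMP_PLY_FOLDER:

```python
# Hedged sketch: load the fitted SMPL vertices and check their shape.
import numpy as np

verts = np.load("TEMP_PLY_FOLDER/example_mesh.npy")  # hypothetical file name
print(verts.shape)  # expected (nframe, 6893, 3) per the note above
```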
Run the following command to render SMPL using blender:
YOUR_BLENDER_PATH/blender --background --python render.py -- --cfg=./configs/render.yaml --dir=YOUR_NPY_FOLDER --mode=video --joint_type=HumanML3D
Optional parameters:
- --mode=video: render an mp4 video
- --mode=sequence: render the whole motion in a single png image
If you find our code or paper helpful, please consider citing:
@ARTICLE{10399852,
  author={Gao, Xuehao and Yang, Yang and Xie, Zhenyu and Du, Shaoyi and Sun, Zhongqian and Wu, Yang},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  title={GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation},
  year={2024}
}
Thanks to MLD; our code partially borrows from theirs.
This code is distributed under an MIT LICENSE.
Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, and uses datasets which each have their own respective licenses that must also be followed.