
Interactive Character Control with Auto-Regressive Motion Diffusion Models

Links

page | paper | video | poster | slides

Implementation of Auto-regressive Motion Diffusion Model (A-MDM)

We developed a PyTorch framework for kinematics-based, auto-regressive motion generation models, supporting both training and inference. The framework also includes implementations of real-time inpainting and reinforcement-learning-based interactive control. If you have any questions about A-MDM, please feel free to reach out by opening an issue or sending us an email.

Update

  1. July 28, 2024: framework released.
  2. Aug 24, 2024: LAFAN1 15-step checkpoint released.
  3. Sep 5, 2024: 100STYLE 25-step checkpoint released.
  4. Stay tuned for support for more datasets.

Checkpoints

Download, unzip, and merge the contents into your output directory.

LAFAN1_15step 100STYLE_25step

Dataset Preparation

For each dataset, our dataloader automatically parses it into a sequence of 1D frames, saving the frames as data.npz and the essential normalization statistics as stats.npz within your dataset directory. We provide stats.npz so that users can perform inference without downloading the full dataset; providing a single file from the dataset is sufficient instead.
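
As a rough illustration, the Python sketch below shows how the two archives could be loaded and used to normalize a frame before feeding it to the model. The key names inside data.npz and stats.npz are assumptions made for this example only; inspect the archives first to find the actual field names.

import numpy as np

# Hypothetical example: load the preprocessed frames and the normalization statistics.
data = np.load("data/LAFAN/data.npz")      # path is an example; use your dataset directory
stats = np.load("data/LAFAN/stats.npz")

print(list(data.keys()), list(stats.keys()))   # check the actual field names first

# The keys used below ("mean", "std") are placeholders, not the repository's guaranteed names.
frames = data[list(data.keys())[0]]        # sequence of 1D frame vectors, shape (T, D)
mean, std = stats["mean"], stats["std"]

x_norm = (frames[0] - mean) / (std + 1e-8)     # normalize a frame for the model
x_denorm = x_norm * (std + 1e-8) + mean        # invert the normalization on model output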

LaFAN1:

Download and extract under the ./data/LAFAN directory. Note: files with the 'obstacle' prefix were not included in our experiments.

100STYLE:

Download and extract under the ./data/100STYLE directory.

Arbitrary BVH dataset:

Download and extract under the ./data/ directory, then create a YAML config file for it in ./config/model/ (the existing configs, e.g. config/model/amdm_lafan1.yaml, can serve as a template).

AMASS:

Follow the procedure described in the HuMoR repo and extract under the ./data/AMASS directory.

HumanML3D:

Follow the procedure described in the HumanML3D repo and extract under the ./data/HumanML3D directory.

Installation

conda create -n amdm python=3.7
conda activate amdm
pip install -r requirement.txt
mkdir output

Base Model

Training

python run_base.py --arg_file args/amdm_DATASET_train.txt

or

python run_base.py \
    --model_config config/model/amdm_lafan1.yaml \
    --log_file output/base/amdm_lafan1/log.txt \
    --int_output_dir output/base/amdm_lafan1/ \
    --out_model_file output/base/amdm_lafan1/model_param.pth \
    --mode train \
    --master_port 0 \
    --rand_seed 122

Training-time visualizations are saved to the directory specified by --int_output_dir.

Inference

python run_env.py --arg_file args/RP_amdm_DATASET.txt

Inpainting

python run_env.py --arg_file args/PI_amdm_DATASET.txt
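
For background, a common way to realize inpainting with a diffusion model is to overwrite the constrained dimensions of the sample with appropriately noised target values at every denoising step, so that the free dimensions are completed consistently. The sketch below only illustrates that general idea; the function and argument names (denoise_step, q_sample, etc.) are assumptions and do not reflect the repository's actual API.

def inpainting_step_sketch(model, x_t, t, target, mask, q_sample):
    # Conceptual diffusion-inpainting step (hypothetical interface, not this repo's API).
    # x_t:    current noisy frame, shape (D,)
    # target: known values for the constrained dimensions, shape (D,)
    # mask:   1.0 where a dimension is constrained, 0.0 where the model is free
    # q_sample(target, t): forward-diffuses the target to noise level t (assumed helper)
    x_t = mask * q_sample(target, t) + (1.0 - mask) * x_t   # enforce the constraint
    return model.denoise_step(x_t, t)                       # one reverse-diffusion step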

High-Level Controller

Training

python run_env.py --arg_file args/ENV_train_amdm_DATASET.txt

Inference

python run_env.py --arg_file args/ENV_test_amdm_DATASET.txt
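
Conceptually, the high-level controller is a policy that interacts with the environment wrapped around the base model: at every control step it observes the character state and the task, and its output steers the base model's auto-regressive sampling of the next frame. The loop below is only a schematic of that interaction; every name in it (env, policy, base_model and their methods) is hypothetical, and run_env.py with the ENV_* argument files is the actual entry point.

import torch

def control_loop_sketch(env, policy, base_model, num_steps=300):
    # Schematic policy-in-the-loop rollout; all interfaces here are hypothetical.
    obs = env.reset()                                      # e.g. character state + task goal
    frame = env.current_frame()                            # current 1D pose frame
    for _ in range(num_steps):
        with torch.no_grad():
            action = policy(obs)                           # policy output steering the sampler
            frame = base_model.sample_next(frame, action)  # next frame, sampled auto-regressively
        obs, reward, done, info = env.step(frame)
        if done:
            obs, frame = env.reset(), env.current_frame()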

For users who wish to create more variants from a mocap dataset

  1. Train the base model.
  2. Follow the main function in gen_base_bvh.py to generate diverse motions from any starting pose; a schematic sketch follows below.
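
As a rough sketch of what such a rollout looks like: start from an initial pose, repeatedly sample the next frame from the trained base model, and stack the frames into a sequence that can then be exported to BVH. The names below (sample_next and the rollout helper itself) are placeholders; see the main function of gen_base_bvh.py for the actual interface.

import torch

def rollout_sketch(base_model, start_frame, num_frames=240, num_samples=4):
    # Hypothetical auto-regressive rollout; method names are placeholders.
    motions = []
    for _ in range(num_samples):
        frame = torch.as_tensor(start_frame, dtype=torch.float32)
        seq = [frame]
        for _ in range(num_frames - 1):
            with torch.no_grad():
                # Stochastic reverse-diffusion sampling of the next frame, conditioned
                # on the previous one, yields diverse continuations from the same pose.
                frame = base_model.sample_next(frame)
            seq.append(frame)
        motions.append(torch.stack(seq))
    return motions  # list of (num_frames, D) tensors, ready to be converted to BVH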

Acknowledgement

Part of the RL modules used in our framework is based on the existing MotionVAE codebase. Please cite their work if you find using RL to guide auto-regressive motion generative models helpful to your research.

BibTeX

@article{shi2024amdm,
  author     = {Shi, Yi and Wang, Jingbo and Jiang, Xuekun and Lin, Bingkun and Dai, Bo and Peng, Xue Bin},
  title      = {Interactive Character Control with Auto-Regressive Motion Diffusion Models},
  year       = {2024},
  issue_date = {August 2024},
  publisher  = {Association for Computing Machinery},
  address    = {New York, NY, USA},
  volume     = {43},
  journal    = {ACM Trans. Graph.},
  month      = {jul},
  keywords   = {motion synthesis, diffusion model, reinforcement learning}
}