Large Motion Model for Unified Multi-Modal Motion Generation

1 S-Lab, Nanyang Technological University    2 SenseTime Research
* Co-first authors    + Corresponding author

Abstract: Human motion generation, a cornerstone technique in animation and video production, has widespread applications in various tasks like text-to-motion and music-to-dance. Previous works focus on developing specialist models tailored for each task without scalability. In this work, we present Large Motion Model (LMM), a motion-centric, multi-modal framework that unifies mainstream motion generation tasks into a generalist model. A unified motion model is appealing since it can leverage a wide range of motion data to achieve broad generalization beyond a single task. However, it is also challenging due to the heterogeneous nature of substantially different motion data and tasks. LMM tackles these challenges from three principled aspects: 1) Data: We consolidate datasets with different modalities, formats and tasks into a comprehensive yet unified motion generation dataset, MotionVerse, comprising 10 tasks, 16 datasets, a total of 320k sequences, and 100 million frames. 2) Architecture: We design an articulated attention mechanism, ArtAttention, that incorporates body part-aware modeling into the Diffusion Transformer backbone. 3) Pre-Training: We propose a novel pre-training strategy for LMM, which employs variable frame rates and masking forms, to better exploit knowledge from diverse training data. Extensive experiments demonstrate that our generalist LMM achieves competitive performance across various standard motion generation tasks over state-of-the-art specialist models. Notably, LMM exhibits strong generalization capabilities and emerging properties across many unseen tasks.
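
The ArtAttention mechanism itself is described in the paper rather than in this README. As a rough illustration only, the sketch below shows one way a body part-aware attention restriction can be expressed as a boolean attention mask in PyTorch; the class name, part grouping, and masking scheme are assumptions made for illustration and are not the repository's implementation.

# Hypothetical sketch of part-aware attention (NOT the repository's ArtAttention):
# tokens are grouped by body part, and self-attention is restricted to tokens of
# the same part via a boolean mask. Part count and dimensions are assumptions.
import torch
import torch.nn as nn

class PartAwareSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, part_ids: torch.Tensor) -> torch.Tensor:
        # x:        (batch, tokens, dim), one token per body part per frame
        # part_ids: (tokens,), the body-part index of each token
        blocked = part_ids.unsqueeze(0) != part_ids.unsqueeze(1)  # True = masked out
        out, _ = self.attn(x, x, x, attn_mask=blocked)
        return out

# Toy usage: 2 frames x 6 body parts = 12 tokens of dimension 64.
x = torch.randn(1, 12, 64)
part_ids = torch.arange(6).repeat(2)
print(PartAwareSelfAttention(dim=64)(x, part_ids).shape)  # torch.Size([1, 12, 64])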

Updates

[12/2024] Release code for LMM, FineMoGen, MoMat-MoGen, ReMoDiffuse and MotionDiffuse

Benchmark and Model Zoo

Supported methods

Citation

If you find our work useful for your research, please consider citing the following papers:

@inproceedings{zhang2025large,
  title={Large motion model for unified multi-modal motion generation},
  author={Zhang, Mingyuan and Jin, Daisheng and Gu, Chenyang and Hong, Fangzhou and Cai, Zhongang and Huang, Jingfang and Zhang, Chongzhi and Guo, Xinying and Yang, Lei and He, Ying and others},
  booktitle={European Conference on Computer Vision},
  pages={397--421},
  year={2025},
  organization={Springer}
}
@article{zhang2023finemogen,
  title={FineMoGen: Fine-grained spatio-temporal motion generation and editing},
  author={Zhang, Mingyuan and Li, Huirong and Cai, Zhongang and Ren, Jiawei and Yang, Lei and Liu, Ziwei},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  pages={13981--13992},
  year={2023}
}
@article{zhang2023remodiffuse,
  title={ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model},
  author={Zhang, Mingyuan and Guo, Xinying and Pan, Liang and Cai, Zhongang and Hong, Fangzhou and Li, Huirong and Yang, Lei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2304.01116},
  year={2023}
}
@article{zhang2022motiondiffuse,
  title={MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model},
  author={Zhang, Mingyuan and Cai, Zhongang and Pan, Liang and Hong, Fangzhou and Guo, Xinying and Yang, Lei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2208.15001},
  year={2022}
}

Installation

# Create Conda Environment
conda create -n mogen python=3.9 -y
conda activate mogen

# Install PyTorch
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -y

# Install MMCV
pip install "mmcv-full>=1.4.2,<=1.9.0" -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.12.1/index.html

# Install PyTorch3D
conda install -c bottler nvidiacub -y
conda install -c fvcore -c iopath -c conda-forge fvcore iopath -y
conda install pytorch3d -c pytorch3d -y

# Install tutel
python3 -m pip install --verbose --upgrade git+https://github.com/microsoft/tutel@main

# Install other requirements
pip install -r requirements/mogen.txt

# Install ImageBind
pip install --no-deps git+https://github.com/facebookresearch/ImageBind@main
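
After the steps above, a quick import check (a minimal sketch, not part of the repository) can confirm that the main dependencies resolve; the import names below are the standard ones for PyTorch, MMCV, PyTorch3D, tutel, and ImageBind.

# Minimal post-install sanity check (not part of this repository).
import torch
import mmcv
import pytorch3d
import tutel
import imagebind

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv:", mmcv.__version__, "| pytorch3d:", pytorch3d.__version__)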

Data Preparation

Please refer to the documentation for detailed instructions.

Model Inference

You can try our online demo on Hugging Face. Alternatively, download the pretrained weights from Google Drive and run the visualization script locally:

PYTHONPATH=".":$PYTHONPATH python tools/visualize_lmm.py ${CONFIG} ${CHECKPOINT} \
    --text ${TEXT} \
    --speech ${SPEECH_WAV_PATH} \
    --motion_length ${MOTION_LENGTH} \
    --out ${OUTPUT_ANIMATION_PATH} \
    --fps 20.0 \
    --device cpu
