---
title: Text2EMotionDiffuse
emoji: 🧠
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.44.1
app_file: text2motion/app.py
pinned: false
license: mit
tags:
---
Conditioning human motion on natural language (text-to-motion) is critical for many graphics-based applications, including training neural networks for motion-based tasks such as detecting changes in posture for medical applications. Recently, diffusion models have become popular for text-to-motion generation, but many are trained on human pose representations that lack face and hand details and thus fall short on prompts involving emotion or detailed object interaction. To fill this gap, we retrained the text-to-motion model MotionDiffuse on the new dataset Motion-X, which uses SMPL-X poses to include facial expressions and fully articulated hands.
See `text2motion/DTU_readme.md` for installation instructions.
To demo the model, see the Hugging Face Space or check out the notebook `text2motion/demo.ipynb`, which guides you through generating a motion from a text prompt. The same code is also available as a Python script, `text2motion/demo.py`.
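For example, running the script from the repository root might look like the following (a minimal sketch; any command-line arguments the script accepts are covered in the notebook, not assumed here):

```bash
# Generate a motion from a text prompt; see text2motion/demo.ipynb
# for a guided walkthrough of the same steps.
python text2motion/demo.py
```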
Note: to visualize the output, the `make gen` command must be run from the `text2motion` directory.
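Concretely (assuming a Makefile with a `gen` target lives in `text2motion`, as the note implies):

```bash
cd text2motion   # the gen target must be invoked from this directory
make gen         # visualizes the generated motion
```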
The group would like to thank the authors of the original paper for their work and for making their code available. A deep thank-you as well to Frederik Warburg for his support and technical guidance, and to the DTU HPC team for their assistance with the HPC cluster.