This repository provides unofficial pre-trained weights and inference code for Animate Anyone. It builds on the implementation of the MooreThreads/Moore-AnimateAnyone repository, with some adjustments to the training process and datasets.
Demo videos: demo1.mp4 | demo4.mp4 | demo2.mp4 | demo3.mp4
We recommend Python >= 3.10 and CUDA 11.7. Then build the environment as follows:
# [Optional] Create a virtual env
python -m venv .venv
source .venv/bin/activate
# Install with pip:
pip install -r requirements.txt
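As an optional sanity check before moving on, you can confirm that PyTorch was installed with working CUDA support:
# [Optional] Verify that PyTorch sees the GPU (should print the version and True)
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"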
Automatic downloading: run the following command to download all the weights:
python tools/download_weights.py
Weights will be placed under the ./pretrained_weights directory. The whole download may take a long time.
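Once the script finishes, listing the directory is a quick way to confirm the download completed (the exact subdirectory layout depends on the script):
# Inspect the downloaded weights
ls -R ./pretrained_weights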
Here is the CLI command for running the inference script:
python -m scripts.pose2vid --config ./configs/prompts/animation.yaml -W 512 -H 784 -L 64
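If you are short on VRAM, smaller values are a reasonable first try. The example below assumes -W/-H set the output width/height in pixels and -L the number of generated frames, matching the defaults shown above; run the script with --help to confirm the flag meanings:
# Assumed flag meanings: -W/-H = output width/height, -L = frame count
python -m scripts.pose2vid --config ./configs/prompts/animation.yaml -W 384 -H 576 -L 32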
You can follow the format of animation.yaml to add your own reference images or pose videos; a sketch of a custom config appears after the next command. To convert a raw video into a pose video (keypoint sequence), run:
python tools/vid2pose.py --video_path /path/to/your/video.mp4
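Putting the two steps together, here is a minimal sketch of what a custom prompt config might look like. The test_cases mapping (one reference image to a list of pose videos), the _kps output suffix, and all paths below are assumptions inferred from the stock animation.yaml; treat that file as the authoritative schema:
# A sketch only: key names and paths are placeholders. Verify against the
# shipped ./configs/prompts/animation.yaml, and check the actual pose-video
# path printed by tools/vid2pose.py (a _kps suffix is assumed here).
cat > ./configs/prompts/my_animation.yaml <<'EOF'
test_cases:
  "./inputs/my_reference.png":
    - "./inputs/my_video_kps.mp4"
EOF
python -m scripts.pose2vid --config ./configs/prompts/my_animation.yaml -W 512 -H 784 -L 64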
We've deployed this model on Novita AI; you can try it out in the Playground ➡️ https://novita.ai/playground#animate-anyone .
This project is based on MooreThreads/Moore-AnimateAnyone, which is licensed under the Apache License 2.0. We thank the authors of Animate Anyone and MooreThreads/Moore-AnimateAnyone for their open research and exploration.