This repository extends the official implementation of AnimateDiff with a video frame interpolation (VFI) plugin based on RIFE.
[Demo GIFs: Original AnimateDiff vs. AnimateDiff with RIFE]
The frame interpolation module uses a lightweight neural network to generate a higher-FPS video; the output FPS can be adjusted by the user.
This plugin adds the following new features to AnimateDiff:
- VFI-RIFE: VFI stands for video frame interpolation. Starting from the original inference result, the RIFE model predicts the in-between frames, and the plugin interleaves them with the original frames to produce a higher-FPS video (see the sketch after this list).
- FPS adjustment: adds an FPS control to the original inference task.
- PyTorch profiler: adds an option to run the PyTorch profiler and monitor performance metrics during inference.
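A minimal sketch of the interpolation idea, assuming the Practical-RIFE train_log API (`model.inference(img0, img1, timestep)`) and that the number of inserted frames per original pair is configurable; the plugin's actual code may differ:

```python
import torch

def interpolate_frames(model, frames, vfi_num=1):
    """Interleave RIFE-predicted in-between frames with the original frames.

    `frames` is a list of [1, 3, H, W] tensors in [0, 1]; `vfi_num` is the
    (assumed) number of frames to insert between each consecutive pair.
    """
    out = []
    for img0, img1 in zip(frames[:-1], frames[1:]):
        out.append(img0)
        for i in range(1, vfi_num + 1):
            t = i / (vfi_num + 1)  # evenly spaced timesteps in (0, 1)
            with torch.no_grad():
                out.append(model.inference(img0, img1, t))
    out.append(frames[-1])
    return out
```

With `vfi_num = 3`, a 16-frame clip becomes 16 + 15 × 3 = 61 frames, so an 8 fps animation can be re-encoded at a much higher FPS without re-running diffusion.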
To install the plugin, follow these steps:
git clone https://github.com/hohoXin/RIFE-AnimateDiff.git
cd RIFE-AnimateDiff
conda env create -f environment.yaml
conda activate animatediff
We provide two versions of our Motion Module, trained on stable-diffusion-v1-4 and fine-tuned on v1-5 respectively. It's recommended to try both of them for best results.
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
bash download_bashscripts/0-MotionModule.sh
You may also directly download the motion module checkpoints from Google Drive / HuggingFace / CivitAI, then put them in the models/Motion_Module/ folder.
Here we provide inference configs for 8 demo T2I models on CivitAI. You may run the following bash scripts to download their checkpoints.
bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh
Download the latest RIFE model provided by the hzwer/Practical-RIFE repository:
4.13.1 - 2023.12.05 | Google Drive | Baidu Netdisk
4.13.lite - 2023.11.27 | Google Drive | Baidu Netdisk
v4.12.2 - 2023.11.13 | Google Drive | Baidu Netdisk
Download a model from the model list and put the *.py file and flownet.pkl in rife/train_log/.
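As a quick sanity check that the checkpoint is in the right place, you can try loading it the way Practical-RIFE's own inference scripts do. The `rife.train_log.RIFE_HDv3` import path is an assumption based on this repository's layout and the downloaded *.py file's name; `Model.load_model` comes from that file:

```python
import torch
from rife.train_log.RIFE_HDv3 import Model  # module name comes from the downloaded *.py

model = Model()
model.load_model("rife/train_log", -1)  # loads flownet.pkl; -1 = no distributed rank
model.eval()
if torch.cuda.is_available():
    model.device()  # moves the flow network onto the GPU
print("RIFE model loaded successfully")
```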
After downloading the personalized T2I checkpoints above, run the following command to generate animations. The results will automatically be saved to the samples/ folder.
python -m scripts.RIFE-animate --config configs/prompts/v2-RIFE-ToonYou-test.yaml
To generate animations with a new DreamBooth/LoRA model, you may create a new config .yaml file in the following format:
NewModel:
  inference_config: "[path to motion module config file]"
  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"
  motion_module_lora_configs:
    - path: "[path to MotionLoRA model]"
      alpha: 1.0
    - ...
  dreambooth_path: "[path to your DreamBooth model .safetensors file]"
  lora_model_path: "[path to your LoRA model .safetensors file; leave it as an empty string if not needed]"
  seed: 114514
  steps: 25
  guidance_scale: 7.5
  VFI_flag: True
  VFI_num: 3
  fps: 24
  profiler: True
  prompt:
    - "[positive prompt]"
  n_prompt:
    - "[negative prompt]"
Then run the following command:
python -m scripts.RIFE-animate --config [path to the config file]
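When `profiler: True` is set, performance metrics are collected with the PyTorch profiler during inference. The plugin's exact integration lives in the inference script, but the general pattern is standard `torch.profiler` usage; the workload below is a stand-in, not the actual sampling call:

```python
import torch
from torch.profiler import profile, ProfilerActivity

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

# Stand-in workload; in the plugin, this would wrap the AnimateDiff sampling loop.
model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())
x = torch.randn(16, 512)

with profile(activities=activities, record_shapes=True, profile_memory=True) as prof:
    with torch.no_grad():
        model(x)

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```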
Please refer to the original README for more details.
- Official AnimateDiff repo: AnimateDiff
- Official RIFE repo: RIFE