- Train AnimateDiff for 24+ frames (by multiplying the existing motion module's positional-encoding weights by a scale factor and finetuning):

  ```python
  from einops import repeat

  # Multiply the pe (positional encoding) weights by the multiplier
  # to train on more than 24 frames
  if motion_module_pe_multiplier > 1:
      for key in motion_module_state_dict:
          if 'pe' in key:
              t = motion_module_state_dict[key]
              t = repeat(t, "b f d -> b (f m) d", m=motion_module_pe_multiplier)
              motion_module_state_dict[key] = t
  ```

  I trained up to 264 frames on an A100.
- Train AnimateDiff + LoRA/DreamBooth
- Infinite inference (credits to dajes), controlled via the `temporal_context` and `video_length` parameters.
- ControlNet (works with infinite inference). VRAM-heavy: only 120 frames can be inferred with a single ControlNet module on an A100.
- Prompt walking, e.g. start with "Egg" and finish with "Duck" (see the sketch after this list):

  ```python
  {0: "Egg", 10: "Duck"}
  ```
- Updated to the latest diffusers version
- Train LoRA (all layers, SD and motion module at once; they could be trained separately if needed)
- Region prompter
- FreeInit added
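As a rough illustration of prompt walking (not necessarily how this repository implements it), one can encode each keyframed prompt with the text encoder and linearly interpolate between neighbouring embeddings for the frames in between. The sketch below is a minimal version; `encode_fn` is a hypothetical callable standing in for the CLIP text encoder.

```python
import torch

def walk_prompt_embeddings(prompt_map, num_frames, encode_fn):
    """Minimal sketch: interpolate text embeddings between keyframed prompts.

    prompt_map: dict mapping frame index -> prompt, e.g. {0: "Egg", 10: "Duck"}
    encode_fn:  hypothetical callable returning a text-embedding tensor for a prompt
    """
    keyframes = sorted(prompt_map)
    embeds = {f: encode_fn(prompt_map[f]) for f in keyframes}
    per_frame = []
    for f in range(num_frames):
        if f <= keyframes[0]:            # before the first keyframe: hold the first prompt
            per_frame.append(embeds[keyframes[0]])
        elif f >= keyframes[-1]:         # after the last keyframe: hold the last prompt
            per_frame.append(embeds[keyframes[-1]])
        else:                            # otherwise blend the two surrounding keyframes
            lo = max(k for k in keyframes if k <= f)
            hi = min(k for k in keyframes if k > f)
            w = (f - lo) / (hi - lo)
            per_frame.append(torch.lerp(embeds[lo], embeds[hi], w))
    return torch.stack(per_frame)        # (num_frames, seq_len, dim)
```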
This repository is the official implementation of AnimateDiff.
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
Yuwei Guo,
Ceyuan Yang*,
Anyi Rao,
Yaohui Wang,
Yu Qiao,
Dahua Lin,
Bo Dai
*Corresponding Author
- Code Release
- Arxiv Report
- GPU Memory Optimization
- Gradio Interface
Our approach takes around 60 GB of GPU memory for inference; an NVIDIA A100 is recommended.
We updated our inference code with xformers and a sequential decoding trick. AnimateDiff now takes only ~12 GB of VRAM for inference and runs on a single RTX 3090!
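On a diffusers pipeline, xformers attention can be enabled with `pipe.enable_xformers_memory_efficient_attention()`. The sequential decoding trick is sketched below under the assumption that video latents are laid out as (batch, channels, frames, height, width); the helper name and exact scaling are illustrative, not this repository's exact code.

```python
import torch

def decode_latents_sequentially(vae, latents):
    """Sketch of sequential decoding: decode video latents frame by frame
    instead of all at once, trading a little speed for a much lower VRAM peak.

    latents are assumed to have shape (batch, channels, frames, height, width).
    """
    latents = latents / vae.config.scaling_factor
    frames = []
    for i in range(latents.shape[2]):
        with torch.no_grad():
            frame = vae.decode(latents[:, :, i]).sample  # decode a single frame
        frames.append(frame.cpu())                       # offload decoded frames to CPU
    return torch.stack(frames, dim=2)                    # (batch, channels, frames, height, width)
```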
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
conda env create -f environment.yaml
conda activate animatediff
We provide two versions of our Motion Module, which are trained on stable-diffusion-v1-4 and finetuned on v1-5 separately. It's recommended to try both of them for the best results.
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
bash download_bashscripts/0-MotionModule.sh
You may also directly download the motion module checkpoints from Google Drive, then put them in the models/Motion_Module/ folder.
Here we provide inference configs for 8 demo T2I models on CivitAI. You may run the following bash scripts to download these checkpoints.
bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh
After downloading the above personalized T2I checkpoints, run the following commands to generate animations. The results will automatically be saved to the samples/ folder.
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
python -m scripts.animate --config configs/prompts/6-Tusun.yaml
python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml
To generate animations with a new DreamBooth/LoRA model, you may create a new config .yaml
file in the following format:
```yaml
NewModel:
  path: "[path to your DreamBooth/LoRA model .safetensors file]"
  base: "[path to LoRA base model .safetensors file, leave it empty string if not needed]"

  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"

  steps: 25
  guidance_scale: 7.5

  prompt:
    - "[positive prompt]"

  n_prompt:
    - "[negative prompt]"
```
Then run the following commands:
python -m scripts.animate --config [path to the config file]
Here we demonstrate several of the best results we found in our experiments.
Model: ToonYou
Model: Counterfeit V3.0
Model: Realistic Vision V2.0
Model: majicMIX Realistic
Model: RCNZ Cartoon
Model: FilmVelvia
You can also generate longer animations by using overlapping sliding windows.
python -m scripts.animate --config configs/prompts/{your_config}.yaml --L 64 --context_length 16
- `L` - the length of the generated animation.
- `context_length` - the length of the sliding window (limited by the motion module's capacity); defaults to `L`.
- `context_overlap` - how much neighbouring contexts overlap; defaults to `context_length / 2`.
- `context_stride` - 2^`context_stride` is the max stride between two neighbouring frames; defaults to 0.
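As a rough sketch of how such overlapping windows can be scheduled (the scheduler actually used in this repository may differ, e.g. in how it handles multiple stride levels), the helper below produces the frame indices of each context window from the parameters above.

```python
def uniform_context_windows(video_length, context_length=16, context_overlap=8, context_stride=0):
    """Sketch: yield overlapping windows of frame indices covering `video_length` frames.

    Each window holds `context_length` frames, neighbouring windows share
    `context_overlap` frames, and indices are taken with a stride of
    2**context_stride (wrapping around at the end of the clip).
    """
    stride = 2 ** context_stride
    step = (context_length - context_overlap) * stride
    windows = []
    for start in range(0, video_length, max(step, 1)):
        window = [(start + i * stride) % video_length for i in range(context_length)]
        windows.append(window)
    return windows

# e.g. uniform_context_windows(64, context_length=16, context_overlap=8)
# -> [[0..15], [8..23], [16..31], ...]
```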
Model: ToonYou
Model: Realistic Vision V2.0
Here are some samples contributed by the community artists. Create a Pull Request if you would like to show your results here😚.
Character Model: Yoimiya (with an initial reference image; see the WIP fork for the extended implementation.)
Character Model: Paimon; Pose Model: Hold Sign
@misc{guo2023animatediff,
title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
author={Yuwei Guo and Ceyuan Yang and Anyi Rao and Yaohui Wang and Yu Qiao and Dahua Lin and Bo Dai},
year={2023},
eprint={2307.04725},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Yuwei Guo: [email protected]
Ceyuan Yang: [email protected]
Bo Dai: [email protected]
Codebase built upon Tune-a-Video.