📢 News
- [2023.12.04] Released the inference code and Gradio demo. We are working to improve MagicAnimate, so stay tuned!
- [2023.11.23] Released the MagicAnimate paper and project page.
🏃‍♂️ Getting Started
Please download the pretrained base models for Stable Diffusion V1.5 and the MSE-finetuned VAE.
Download our MagicAnimate checkpoints.
Place them as follows:
```
magic-animate
|----pretrained_models
  |----MagicAnimate
    |----appearance_encoder
      |----diffusion_pytorch_model.safetensors
      |----config.json
    |----densepose_controlnet
      |----diffusion_pytorch_model.safetensors
      |----config.json
    |----temporal_attention
      |----temporal_attention.ckpt
  |----sd-vae-ft-mse
    |----...
  |----stable-diffusion-v1-5
    |----...
|----...
```
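For reference, one way to fetch these models is with Git LFS. The Hugging Face repository names below, in particular the MagicAnimate checkpoint repo, are assumptions and should be checked against the project page and model cards:

```bash
# Sketch: download the pretrained weights into pretrained_models/ via git-lfs.
# Repository names are assumptions; verify them before use.
cd magic-animate/pretrained_models
git lfs install
git clone https://huggingface.co/zcxu-eric/MagicAnimate MagicAnimate
git clone https://huggingface.co/stabilityai/sd-vae-ft-mse sd-vae-ft-mse
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 stable-diffusion-v1-5
```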
⚒️ Installation
Prerequisites: python>=3.8, CUDA>=11.3, and ffmpeg.
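A quick way to sanity-check the prerequisites, assuming `nvcc` and `ffmpeg` are on your PATH:

```bash
python3 --version   # should report Python >= 3.8
nvcc --version      # CUDA toolkit should be >= 11.3
ffmpeg -version     # ffmpeg must be installed and on PATH
```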
Install with conda:
```
conda env create -f environment.yaml
conda activate manimate
```
or pip:
pip3 install -r requirements.txt
💃 Inference
Run inference on a single GPU:
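A minimal sketch, assuming the repository ships a single-GPU counterpart to `scripts/animate_dist.sh`:

```bash
bash scripts/animate.sh
```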
Run inference with multiple GPUs:
bash scripts/animate_dist.sh
🎨 Gradio Demo
Online Gradio Demo:
Quickly try our online Gradio demo.
Local Gradio Demo:
Launch the local Gradio demo on a single GPU:
python3 -m demo.gradio_animate
Launch the local Gradio demo if you have multiple GPUs:
python3 -m demo.gradio_animate_dist
Then open the Gradio demo in your local browser (Gradio prints the local URL, typically http://localhost:7860).
🙏 Acknowledgements
We would like to thank AK (@_akhaliq) and the Hugging Face team for their help in setting up the online Gradio demo.
🎓 Citation
If you find this codebase useful for your research, please cite it using the following entry.
```
@inproceedings{xu2023magicanimate,
    author    = {Xu, Zhongcong and Zhang, Jianfeng and Liew, Jun Hao and Yan, Hanshu and Liu, Jia-Wei and Zhang, Chenxu and Feng, Jiashi and Shou, Mike Zheng},
    title     = {MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model},
    booktitle = {arXiv},
    year      = {2023}
}
```