DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations (CVPR 2024)


Tianhao Qi*, Shancheng Fang, Yanze Wu✝, Hongtao Xie✉, Jiawei Liu,
Lang Chen, Qian He, Yongdong Zhang


(*Work done during an internship at ByteDance, ✝Project Lead, ✉Corresponding author)

From the University of Science and Technology of China and ByteDance.

🔆 Introduction

TL;DR: We propose DEADiff, a generic method facilitating the synthesis of novel images that embody the style of a given reference image and adhere to text prompts.

⭐⭐ Stylized Text-to-Image Generation.

Stylized text-to-image results. Resolution: 512 x 512. (Compressed)

⭐⭐ Style Transfer.

Style transfer results with ControlNet.

📝 Changelog

  • [2024.4.3]: 🔥🔥 Release the inference code and pretrained checkpoint.
  • [2024.3.5]: 🔥🔥 Release the project page.

⏳ TODO

  • Release the inference code.
  • Release training data.

⚙️ Setup

conda create -n deadiff python=3.9.2
conda activate deadiff
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install git+https://github.com/salesforce/LAVIS.git@20230801-blip-diffusion-edit
pip install -r requirements.txt
pip install -e .
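
As a quick sanity check (a minimal sketch, not part of the official setup), you can confirm that the pinned PyTorch 2.0.0 build and the CUDA 11.8 runtime are visible before running inference:

# sanity_check.py -- illustrative environment check, not part of DEADiff
import torch
print("torch version:", torch.__version__)          # expected: 2.0.0
print("built against CUDA:", torch.version.cuda)    # expected: 11.8
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))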

💫 Inference

  1. Download the pretrained model from Hugging Face and put it under ./pretrained/ (a scripted download is sketched after these steps).
  2. Run the following command in the terminal.
python3 scripts/app.py

The Gradio app lets you transfer the style of a reference image to newly generated images; try it out for more details.
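
If you prefer to script step 1, the checkpoint can also be fetched with huggingface_hub. This is only a sketch: the repository id and filename below are placeholders, and the official values are given by the Hugging Face link in step 1.

# download_ckpt.py -- illustrative sketch; repo_id and filename are PLACEHOLDERS,
# take the real values from the Hugging Face link referenced in step 1.
from huggingface_hub import hf_hub_download
ckpt_path = hf_hub_download(
    repo_id="<org>/<deadiff-checkpoint>",  # placeholder repo id
    filename="<checkpoint-file>",          # placeholder filename
    local_dir="./pretrained",              # directory expected by scripts/app.py
)
print("checkpoint saved to:", ckpt_path)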

Prompt: "A curly-haired boy"

Prompt: "A robot"

Prompt: "A motorcycle"

➕ Style Transfer with ControlNet

We support style transfer with structural control by combining DEADiff with ControlNet. This enables users to guide the spatial layout (e.g., edges or depth maps) of the generated images, while transferring the visual style from a reference image.

To perform style transfer with ControlNet, please download the following pretrained models:

  • control_sd15_canny.pth: Download → place it under ./pretrained/
  • control_sd15_depth.pth: Download → place it under ./pretrained/
  • dpt_hybrid-midas-501f0c75.pt (for depth estimation): Download → place it under ldm/controlnet/annotator/ckpts/

These checkpoints are required for the Canny- and depth-based ControlNet stylization modes. Then run the following commands in the terminal.
# Canny-based control
python3 scripts/app_canny_control.py
# Depth-based control
python3 scripts/app_depth_control.py
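
For the Canny-based mode, the structural condition is an edge map of a content image. The sketch below only shows how such an edge map can be produced with OpenCV; the file names and thresholds are illustrative placeholders, not values taken from this repository.

# make_canny_condition.py -- illustrative only; paths and thresholds are placeholders
import cv2
image = cv2.imread("content.png")                        # content image providing the layout
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)  # tune thresholds per image
cv2.imwrite("content_canny.png", edges)                  # structural condition (edge map)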

📢 Disclaimer

We developed this repository for RESEARCH purposes, so it may only be used for personal, research, or other non-commercial purposes.


✈️ Citation

@article{qi2024deadiff,
  title={DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations},
  author={Qi, Tianhao and Fang, Shancheng and Wu, Yanze and Xie, Hongtao and Liu, Jiawei and Chen, Lang and He, Qian and Zhang, Yongdong},
  journal={arXiv preprint arXiv:2403.06951},
  year={2024}
}

📭 Contact

If you have any comments or questions, please feel free to contact qth@mail.ustc.edu.cn.
