Simplest approach to finetune_taiyi_stable_diffusion: is it possible to train the CLIPTextModel and CLIPTokenizer separately, publish them on Hugging Face, and then load them later with the code below?
from diffusers import StableDiffusionPipeline
from transformers import CLIPTextModel, CLIPTokenizer

# Option 2: load the Taiyi Chinese text encoder and tokenizer, then plug them
# into the standard Stable Diffusion v1-4 pipeline. Subfolders inside a Hub
# repo must be selected with `subfolder=`, not appended to the repo ID.
model_id = "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1"
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", text_encoder=text_encoder, tokenizer=tokenizer
)
pipe.to("cpu")
image = pipe(prompt="一只猫咪").images[0]  # prompt: "a kitten"
image.save("cat2.png")