
How to use get_T2I_Flash_pipeline with Kandinsky 3.1 weights for inference on a single 40 GB A100 GPU or multiple GPUs? #21

@EYcab

Description


It seems get_T2I_Flash_pipeline requires more than 40 GB of memory on a single A100 GPU during inference. However, when I tried to load the model across 2x 40 GB A100 GPUs, it appears the code does not support that yet.

Could you provide a way to load this model across multiple GPUs?
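For reference, I am loading the pipeline as in the README, but with every component cast to float16 in `dtype_map` (the README defaults keep the UNet and MoVQ in float32). This is a sketch of what I am trying in the hope of fitting into one 40 GB card; I have not verified the quality impact of the half-precision UNet and decoder:

```python
import torch
from kandinsky3 import get_T2I_Flash_pipeline

# Half precision for every component to shrink the memory footprint;
# the README defaults keep the UNet and MoVQ decoder in float32.
device_map = torch.device('cuda:0')
dtype_map = {
    'unet': torch.float16,
    'text_encoder': torch.float16,
    'movq': torch.float16,
}

t2i_pipe = get_T2I_Flash_pipeline(device_map, dtype_map)
res = t2i_pipe("A cute corgi lives in a house made out of sushi.")
```

For the multi-GPU case, if `device_map` accepted a per-component dict keyed like `dtype_map` (an assumption on my side, I could not find this documented), something like the following could place the large FLAN-UL2 text encoder on the second card while keeping the UNet and MoVQ decoder on the first:

```python
import torch
from kandinsky3 import get_T2I_Flash_pipeline

# Hypothetical per-component placement: text encoder on cuda:1,
# UNet and MoVQ decoder on cuda:0. This assumes device_map may be
# a dict keyed like dtype_map; please confirm whether that is
# actually supported.
device_map = {
    'unet': torch.device('cuda:0'),
    'text_encoder': torch.device('cuda:1'),
    'movq': torch.device('cuda:0'),
}
dtype_map = {
    'unet': torch.float16,
    'text_encoder': torch.float16,
    'movq': torch.float16,
}

t2i_pipe = get_T2I_Flash_pipeline(device_map, dtype_map)
res = t2i_pipe("A cute corgi lives in a house made out of sushi.")
```

If neither of these is the intended way, an official example of multi-GPU loading would be very helpful.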
