Description
👋 Hello, I noticed the paper states:

> Notably, with our two-level invertible models, the peak memory use per GPU is suppressed to 14.64GB, making it manageable not only for A100 but also for other GPUs like 3090 and 4090 (24GB).
However, when reproducing on a single 3090, actual GPU memory usage is around 20GB.
My configuration is as follows:

- $T$ = 3
- batch_size = 8 (default)
- patch_size = 256 (default)

How can the memory usage be kept to 14GB? Thanks.
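While waiting for a definitive answer, a crude back-of-the-envelope check may help: in invertible/flow-style models, activation memory typically scales roughly linearly with batch_size and quadratically with patch_size, so shrinking either is the usual first lever. The sketch below is an assumption-laden approximation (not the authors' method), anchored to the ~20GB I observed at batch_size=8 / patch_size=256:

```python
def approx_peak_gb(batch_size, patch_size, ref_batch=8, ref_patch=256, ref_gb=20.0):
    """Scale an observed peak (ref_gb at ref_batch/ref_patch) to a new config,
    assuming peak memory ~ batch_size * patch_size**2 (a crude approximation
    that ignores fixed costs like weights and optimizer state)."""
    factor = (batch_size / ref_batch) * (patch_size / ref_patch) ** 2
    return ref_gb * factor

print(approx_peak_gb(4, 256))  # halving batch_size -> ~10.0 GB
print(approx_peak_gb(8, 192))  # smaller patches    -> ~11.25 GB
```

Under this rough model, halving batch_size alone should already land under 14GB, so gradient accumulation (e.g. batch_size=4 with 2 accumulation steps) might recover the effective batch size without the memory cost.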