2 suggestions about memory saving #312
TheLastBen started this conversation in Feature suggestions
-
Loading GFPGAN on demand is something I have been using on my personal version of the webui. It's the only way of using GFPGAN with a 4GB GPU.
-
Save 1 GB of VRAM by loading the GFPGAN and ESRGAN models only on demand: move load_GFPGAN() inside def run_GFPGAN (and do the same for ESRGAN) so the models aren't loaded at startup, where they eat up around 1 GB of VRAM. This requires changing a few lines to avoid a "model_name not defined" error.
Speed up model loading: inside def load_model_from_config I changed pl_sd = torch.load(ckpt, map_location="cpu") to pl_sd = torch.load(ckpt, map_location="cuda"), and model.cuda() to model.half(). I got the same VRAM usage but without going through system RAM, so loading is faster.
(This suggestion doesn't work with the optimized version.)
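Expressed as a diff against load_model_from_config, the change above amounts to the following (surrounding lines are approximate, the exact context differs per webui version):

```diff
 def load_model_from_config(config, ckpt):
-    pl_sd = torch.load(ckpt, map_location="cpu")
+    pl_sd = torch.load(ckpt, map_location="cuda")
     ...
-    model.cuda()
+    model.half()
```

With map_location="cuda" the checkpoint tensors are deserialized straight onto the GPU instead of staging through CPU RAM, and model.half() converts the weights to fp16 in place, which is why the final VRAM footprint is the same.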
With 8 GB of VRAM, I can generate resolutions up to 512x704 or 704x512.
UPDATE: using PYTORCH_CUDA_ALLOC_CONF=50 allowed me to get up to 512x768. With the smaller allocation block size, memory usage now climbs smoothly to 99% instead of jumping from 89% to over 100%, which previously caused an out-of-memory crash.
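For reference, PyTorch parses PYTORCH_CUDA_ALLOC_CONF as key:value pairs, and the variable must be set before CUDA is initialised; the "50" above corresponds to the documented max_split_size_mb key (a 50 MB cap on split blocks). A minimal sketch:

```python
import os

# Must run before the first CUDA allocation (i.e. before importing/using
# torch.cuda), or the allocator setting is ignored.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:50"
```

Setting it in the launch script or shell environment has the same effect.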
I don't use torch_gc(); from my experience, there is no need for it memory-wise.