Workaround for lora OOM on lowvram mode.
comfyanonymous committed Aug 7, 2024
1 parent 1208863 commit cb7c4b4
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions comfy/model_patcher.py
```diff
@@ -348,8 +348,8 @@ def __call__(self, weight):
                     m.comfy_cast_weights = True
                 else:
                     if hasattr(m, "weight"):
-                        self.patch_weight_to_device(weight_key, device_to)
-                        self.patch_weight_to_device(bias_key, device_to)
+                        self.patch_weight_to_device(weight_key) #TODO: speed this up without causing OOM
+                        self.patch_weight_to_device(bias_key)
                         m.to(device_to)
                         mem_counter += comfy.model_management.module_size(m)
                         logging.debug("lowvram: loaded module regularly {} {}".format(n, m))
```
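The idea behind the change: by dropping the `device_to` argument, the LoRA patch is applied to the weight on its current device (typically CPU) and only afterwards is the whole module moved to the target device, so the GPU never has to hold both the unpatched and patched copies at once. A minimal sketch of this pattern (the function and names below are illustrative, not ComfyUI's actual API):

```python
import torch

def patch_then_move(module, lora_delta, device_to):
    """Apply a LoRA-style weight delta on the weight's current device
    (usually CPU), then move the module to the target device.

    Hypothetical helper: this mirrors the commit's patch-on-CPU-then-move
    ordering, which avoids materializing the patched weight on the GPU
    while the original copy is still resident there."""
    with torch.no_grad():
        # Patch in place where the weight currently lives.
        module.weight += lora_delta.to(module.weight.device, module.weight.dtype)
    # Only now transfer the module; peak GPU memory holds one copy, not two.
    module.to(device_to)
    return module

# Usage: a zero delta leaves the weight unchanged.
m = torch.nn.Linear(4, 4)
w_before = m.weight.detach().clone()
patch_then_move(m, torch.zeros(4, 4), "cpu")
assert torch.equal(m.weight, w_before)
```

The trade-off noted in the `#TODO` is speed: patching on CPU is slower than patching on the GPU, but it sidesteps the out-of-memory failure in lowvram mode.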
