feat: Memory management improvements; force load SD15 models fully #318
Commits on Aug 26, 2024
-
docs: update the epistle at the top of `comfy_horde.py`
As with all comments/docs, the details of the diatribe targeted at would-be developers have become somewhat inaccurate. However, the overall message is still roughly correct, as is the fact that problems will arise if its warnings are not heeded.
(commit 76cb540)
-
docs/style: fix the bound member function types from comfy
From a static typing perspective, member functions are not `FunctionType`s (those are functions 'bound' to modules). This updates the types to use `Callable` and briefly explains their purposes, so a would-be dev doesn't need to dig into comfy to find the implementation.
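To illustrate the distinction this commit addresses, here is a minimal sketch (the `Loader` class and hook names are hypothetical, standing in for callables borrowed from comfy's modules):

```python
import types
from typing import Callable

# Hypothetical stand-in for a class whose bound method gets passed
# around, the way hooks pulled out of comfy are.
class Loader:
    def load(self, name: str) -> str:
        return f"loaded {name}"

bound = Loader().load

# A bound method is a MethodType, not a FunctionType, so annotating
# such hooks as FunctionType is wrong from a static typing perspective.
assert not isinstance(bound, types.FunctionType)
assert isinstance(bound, types.MethodType)

# `Callable` instead describes what callers actually rely on:
# the call signature.
load_hook: Callable[[str], str] = bound
assert load_hook("sd15") == "loaded sd15"
```

`Callable[[str], str]` accepts plain functions, bound methods, and lambdas alike, which is why it is the right annotation when the concrete kind of callable is an implementation detail.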
(commit 45ef5cf)
-
fix: suppress 'lowvram' comfyui message
When I introduced `logging` redirection to loguru, I also introduced many hundreds of irrelevant messages per minute to do with the comfyui internals of loading models in low-VRAM conditions. I was originally hesitant to remove these messages, thinking that they might contain some potentially useful information, but as we are going to be forcing models to always fully load, these messages are now less important. There will still be a message indicating if the 'full' load did not occur, so that will do for now.
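The general pattern for dropping a noisy message category looks something like the sketch below. It uses the stdlib `logging` module rather than loguru's interception machinery, and the logger name is a placeholder, not the one horde-engine actually filters:

```python
import logging

class SuppressLowVramMessages(logging.Filter):
    """Drop the high-frequency 'lowvram' load messages while letting
    every other record through."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False discards the record.
        return "lowvram" not in record.getMessage()

# Attach the filter wherever the redirected comfyui output flows
# through ("comfy.model_management" is a hypothetical logger name).
logger = logging.getLogger("comfy.model_management")
logger.addFilter(SuppressLowVramMessages())
```

loguru supports the same idea via the `filter` argument to `logger.add(...)`, so the equivalent can be done at the sink rather than on a stdlib logger.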
(commit 09d9372)
-
fix: force models loading to the GPU to be gpu-only
Prior to this change, comfyui was too likely to only partially load models onto the GPU. This had **major** performance implications. The hard-coded numbers which underpin comfyui's logic don't consider the worker use case and are overly cautious when speed is the chief consideration. I'm sure this change has the potential to increase worker instability, but I have seen a 10%-15% increase in throughput on my 2080 when used via the worker, so I am going to at least try to get this to work. There are a number of techniques the worker can use (manually clearing memory, or even killing a process) to manage this, so I believe it should be possible to make this viable.
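The shape of the change can be sketched as wrapping a loader so every call overrides the cautious default. Everything here is hypothetical: `load_models` and its `force_full_load` flag stand in for whichever comfyui internals actually decide between partial and full GPU loads:

```python
import functools

# Hypothetical stand-in for comfyui's loader, which by default may
# choose a partial load based on hard-coded memory heuristics.
def load_models(models, force_full_load=False):
    mode = "full" if force_full_load else "partial"
    return [(name, mode) for name in models]

def always_full(fn):
    """Wrap a loader so every call requests a full GPU load,
    regardless of the heuristic default."""
    @functools.wraps(fn)
    def wrapper(models, **kwargs):
        kwargs["force_full_load"] = True  # override the cautious default
        return fn(models, **kwargs)
    return wrapper

# Patching the module-level name means all callers get the forced
# behaviour without touching comfyui's own source.
load_models = always_full(load_models)
assert load_models(["sd15"]) == [("sd15", "full")]
```

The trade-off stated in the commit applies to the real version too: forcing full loads wins throughput but removes the safety margin the heuristics provided, so the worker has to own out-of-memory recovery.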
(commit cdcd32c)
-
feat: expose `interrupt_current_processing` from comfyui
I am hoping to one day implement a graceful way to interrupt generations in comfyui, such as from a user-initiated abort. I suspect that exposing this function will get us some percentage of the way toward accomplishing that goal. See also Haidra-Org/horde-worker-reGen#262.
(commit 569c971)
-
feat: `aggressive_unloading` for more regular garbage collection
ComfyUI's default mode of running (`main.py`) includes calls to `cleanup_models(...)` and `soft_empty_cache(...)` on a timer. I suspect issues that have arisen lately are rooted in the fact that horde-engine does not currently do something similar. I am adding this (default-on) option to the `HordeLib` class to call these after every pipeline run.
(commit eed7c5b)
-
(commit 8767454; no message shown)
-
fix: rely on comfyui internals for loading cascade series models
This is a patently "temporary" fix to prevent extremely large models, such as cascade models, from being forced to fully load. I will track the resolution of this in an issue: #312
(commit 44e3981)
-
(commit 18881b1; no message shown)
-
(commit 1ff7fe8; no message shown)
-
tests: `disable_smart_memory=True`
This is an attempt to get the tests to run on high-VRAM cards.
(commit 2d58136)