
feat: Memory management improvements; force load SD15 models fully #318

Merged
11 commits merged on Aug 26, 2024

Conversation

tazlin
Member

@tazlin tazlin commented Aug 26, 2024

As with all comments/docs, the details of the diatribe targeted at would-be developers have become somewhat inaccurate. However, the overall message is still roughly correct, as is the fact that problems would arise if its warnings are not heeded.
Member functions are not `FunctionType`s (which are 'bound' to modules) from a static typing perspective. This updates the types to use `Callable` and briefly explains their purposes so a would-be dev doesn't need to dig into comfy to find the implementation.
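The distinction can be demonstrated in a few lines. This is an illustrative sketch (the class and method names are hypothetical, not the actual hordelib types): a bound method fails an `isinstance` check against `FunctionType` but satisfies `Callable`, which is why `Callable` is the right annotation here.

```python
from collections.abc import Callable
from types import FunctionType


class ModelManager:
    """Hypothetical stand-in for a class whose methods were annotated."""

    def unload(self) -> None:
        pass


mm = ModelManager()

# Accessed via an instance, `unload` is a bound method (MethodType),
# so it is NOT a FunctionType...
assert not isinstance(mm.unload, FunctionType)

# ...but it is still a Callable, which is what the annotations now say.
assert isinstance(mm.unload, Callable)

# The plain function object on the class, by contrast, IS a FunctionType.
assert isinstance(ModelManager.unload, FunctionType)
```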
When I introduced `logging` redirection to loguru, I also introduced many hundreds of irrelevant messages per minute to do with the comfyui internals of loading models in low vram conditions. I was originally hesitant to remove these messages, thinking there may have been some potentially useful information in them, but as we are going to be forcing models to always fully load, these messages are now less important. There will still be a message which indicates if the 'full' load did not occur, so that will do for now.
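One way to suppress such messages during stdlib-logging redirection is a `logging.Filter` on the relevant logger. This is a minimal sketch, not the actual hordelib code; the logger name and the `"lowvram"` substring are assumptions for illustration.

```python
import logging

def drop_lowvram_noise(record: logging.LogRecord) -> bool:
    """Return False to suppress high-frequency low-vram load messages."""
    return "lowvram" not in record.getMessage()

# Hypothetical logger name; attach the filter before records reach loguru.
comfy_logger = logging.getLogger("comfy.model_management")
comfy_logger.addFilter(drop_lowvram_noise)
```

The same predicate could equally be applied as a loguru sink filter; the key point is that the noisy records are dropped at the logging layer rather than by editing comfy internals.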
Prior to this change, comfyui was too likely to only partially load models to the GPU. This had **major** performance implications. The hard-coded numbers which underpin comfyui's logic don't consider the worker use case and are overly cautious when speed is the chief consideration. I'm sure this change has the potential to increase worker instability, but I have seen a 10%-15% increase in throughput on my 2080 when used via the worker, so I am going to at least try and get this to work. There are a number of techniques the worker can use (manually clearing memory or even killing a process) to manage this, so I believe it should be possible to make this viable.
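The policy difference can be sketched abstractly. This is not ComfyUI's actual API (its real heuristics live in `comfy.model_management`); the function and threshold below are illustrative assumptions showing why a cautious heuristic splits a model that would in fact fit.

```python
def bytes_to_load(model_size: int, free_vram: int,
                  headroom: int = 512 * 1024**2) -> int:
    """How many bytes of the model to place on the GPU.

    Illustrative only: a cautious policy reserves generous headroom and may
    partially load; the worker policy loads fully whenever the model fits.
    """
    if model_size <= free_vram:
        # Worker policy: the model fits, so force a full load for throughput.
        return model_size
    # Fallback when it genuinely cannot fit: load what we can, minus headroom.
    return max(0, free_vram - headroom)

# A 4 GiB SD15 model on an 8 GiB card is loaded in full, not split.
assert bytes_to_load(4 * 1024**3, 8 * 1024**3) == 4 * 1024**3
```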
I am hoping to one day implement a graceful way to interrupt generations in comfyui, such as from a user-initiated abort. I suspect that exposing this function will get us some percent of the way to accomplishing that goal. See also Haidra-Org/horde-worker-reGen#262.
ComfyUI's default mode of running (`main.py`) includes calls to `cleanup_models(...)` and `soft_empty_cache(...)` on a timer. I suspect issues that have arisen lately are rooted in the fact that horde-engine does not currently do something similar. I am adding this (default on) option to the `HordeLib` class to call these after every pipeline run.
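A self-contained sketch of that option follows. `cleanup_models(...)` and `soft_empty_cache(...)` are the real ComfyUI function names; the stand-in stubs and the `HordeLib` wiring here are illustrative, not the actual implementation.

```python
calls: list[str] = []

def cleanup_models() -> None:
    """Stand-in for comfy.model_management.cleanup_models."""
    calls.append("cleanup_models")

def soft_empty_cache() -> None:
    """Stand-in for comfy.model_management.soft_empty_cache."""
    calls.append("soft_empty_cache")

class HordeLib:
    def __init__(self, aggressive_unloading: bool = True) -> None:
        # Default on, matching the PR description.
        self.aggressive_unloading = aggressive_unloading

    def run_pipeline(self) -> str:
        try:
            return "result"  # stand-in for the real pipeline run
        finally:
            # Mirror ComfyUI main.py, which runs these on a timer,
            # by running them after every pipeline invocation instead.
            if self.aggressive_unloading:
                cleanup_models()
                soft_empty_cache()
```

Using `try`/`finally` means the cleanup runs even if the pipeline raises, which matters for the worker stability concerns noted above.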
This is a patently "temporary" fix to prevent extremely large models, such as cascade models, from being forced to fully load. I will track the resolution of this in an issue: #312
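The exclusion amounts to a small gate in front of the force-full-load path. This sketch is hypothetical: the exclusion list, its contents, and the name-matching approach are assumptions for illustration, not the actual check.

```python
# Hypothetical exclusion list: model families too large to force-load fully.
FORCE_FULL_LOAD_EXCLUSIONS = ("cascade",)

def should_force_full_load(model_name: str) -> bool:
    """True if this model should be fully loaded to the GPU."""
    name = model_name.lower()
    return not any(excluded in name for excluded in FORCE_FULL_LOAD_EXCLUSIONS)

assert should_force_full_load("sd15_anything_v3")
assert not should_force_full_load("stable_cascade_stage_b")
```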
This is an attempt to get the tests to run on high-VRAM cards.
@tazlin tazlin added the release:patch Version _._.x label Aug 26, 2024
@tazlin tazlin merged commit 7481314 into releases Aug 26, 2024
1 of 2 checks passed
@tazlin tazlin mentioned this pull request Sep 14, 2024