
Execution Model Inversion #2666

Merged: 39 commits, Aug 15, 2024

Conversation

@guill (Contributor) commented Jan 29, 2024

This PR inverts the execution model -- from recursively calling nodes to using a topological sort of the nodes. This change allows for modification of the node graph during execution. This allows for two major advantages:

1. The implementation of lazy evaluation in nodes. For example, if a
"Mix Images" node has a mix factor of exactly 0.0, the second image
input doesn't even need to be evaluated (and vice versa if the mix
factor is 1.0).

2. Dynamic expansion of nodes. This allows for the creation of dynamic
"node groups". Specifically, custom nodes can return subgraphs that
replace the original node in the graph. This is an incredibly
powerful concept. Using this functionality, it was easy to
implement:
    a. Components (a.k.a. node groups)
    b. Flow control (i.e. while loops) via tail recursion
    c. All-in-one nodes that replicate the WebUI functionality
    d. and more
All of those were able to be implemented entirely via custom nodes,
so those features are *not* a part of this PR. (There are some
front-end changes that should occur before that functionality is
made widely available, particularly around variant sockets.)

The custom nodes associated with this PR can be found at:
https://github.com/BadCafeCode/execution-inversion-demo-comfyui

Note that some of them require that variant socket types ("*") be enabled.
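
For illustration, here is a minimal sketch of what lazy evaluation could look like from a node author's perspective. It assumes the mechanics discussed in this thread and the follow-up commits (a "lazy" flag on inputs plus a check_lazy_status method that receives None for not-yet-evaluated inputs and returns the names of inputs it still needs); treat the exact signatures as illustrative rather than authoritative.

class LazyMixImages:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            # Inputs marked lazy are only evaluated if check_lazy_status
            # reports that they are needed.
            "image1": ("IMAGE", {"lazy": True}),
            "image2": ("IMAGE", {"lazy": True}),
            "mix_factor": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0}),
        }}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "mix"
    CATEGORY = "image"

    def check_lazy_status(self, mix_factor, image1, image2):
        # Unevaluated lazy inputs arrive as None; return the names of the
        # inputs that still need to be computed for this mix_factor.
        needed = []
        if mix_factor < 1.0 and image1 is None:
            needed.append("image1")
        if mix_factor > 0.0 and image2 is None:
            needed.append("image2")
        return needed

    def mix(self, image1, image2, mix_factor):
        if mix_factor <= 0.0:
            return (image1,)
        if mix_factor >= 1.0:
            return (image2,)
        return (image1 * (1.0 - mix_factor) + image2 * mix_factor,)

With a mix_factor of exactly 0.0, only image1's upstream subgraph ever runs; the branch feeding image2 is skipped entirely.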

@comfyanonymous (Owner) commented

If anyone can test and report whether it works with their most complex workflows, that would be very helpful.

@Bocian-1 commented

All errors I get on this one seem to be from custom nodes. The missing-input and invalid-image-file errors happen regardless, due to how this workflow is built.
Interaction OpenPose.json

got prompt
ERROR:root:Failed to validate prompt for output 2003:
ERROR:root:* ImageReceiver 3135:
ERROR:root:  - Custom validation failed for node: image - Invalid image file: ImgSender_temp_uvqrp_00001_.png [temp]
ERROR:root:  - Custom validation failed for node: link_id - Invalid image file: ImgSender_temp_uvqrp_00001_.png [temp]
ERROR:root:  - Custom validation failed for node: save_to_workflow - Invalid image file: ImgSender_temp_uvqrp_00001_.png [temp]
ERROR:root:  - Custom validation failed for node: image_data - Invalid image file: ImgSender_temp_uvqrp_00001_.png [temp]
ERROR:root:  - Custom validation failed for node: trigger_always - Invalid image file: ImgSender_temp_uvqrp_00001_.png [temp]
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3862:6:
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3133:5:
ERROR:root:* (prompt):
ERROR:root:  - Return type mismatch between linked nodes: a, INT != INT,FLOAT,IMAGE,LATENT
ERROR:root:  - Return type mismatch between linked nodes: b, FLOAT != INT,FLOAT,IMAGE,LATENT
ERROR:root:* MathExpression|pysssss 3133:5:
ERROR:root:  - Return type mismatch between linked nodes: a, INT != INT,FLOAT,IMAGE,LATENT
ERROR:root:  - Return type mismatch between linked nodes: b, FLOAT != INT,FLOAT,IMAGE,LATENT
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3059:11:
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3822:16:
ERROR:root:* MathExpression|pysssss 3133:4:
ERROR:root:  - Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT,IMAGE,LATENT
ERROR:root:  - Return type mismatch between linked nodes: b, INT != INT,FLOAT,IMAGE,LATENT
ERROR:root:* ImpactConditionalBranch 3822:15:
ERROR:root:  - Required input is missing: tt_value
ERROR:root:* ImpactConditionalBranch 3822:14:
ERROR:root:  - Required input is missing: tt_value
ERROR:root:* KSampler 3822:3:
ERROR:root:  - Required input is missing: latent_image
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3678:6:
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3244:
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 2322:
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3861:6:
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 3133:4:
ERROR:root:* (prompt):
ERROR:root:  - Return type mismatch between linked nodes: a, FLOAT != INT,FLOAT,IMAGE,LATENT
ERROR:root:  - Return type mismatch between linked nodes: b, INT != INT,FLOAT,IMAGE,LATENT
ERROR:root:Output will be ignored
Exception in thread Thread-8 (prompt_worker):
Traceback (most recent call last):
  File "threading.py", line 1045, in _bootstrap_inner
  File "threading.py", line 982, in run
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\main.py", line 111, in prompt_worker
    e.execute(item[2], prompt_id, item[3], item[4])
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 470, in execute
    execution_list.add_node(node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 110, in add_node
    self.add_strong_link(from_node_id, from_socket, unique_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 136, in add_strong_link
    super().add_strong_link(from_node_id, from_socket, to_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 87, in add_strong_link
    self.add_node(from_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 110, in add_node
    self.add_strong_link(from_node_id, from_socket, unique_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 136, in add_strong_link
    super().add_strong_link(from_node_id, from_socket, to_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 87, in add_strong_link
    self.add_node(from_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 110, in add_node
    self.add_strong_link(from_node_id, from_socket, unique_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 136, in add_strong_link
    super().add_strong_link(from_node_id, from_socket, to_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 87, in add_strong_link
    self.add_node(from_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 110, in add_node
    self.add_strong_link(from_node_id, from_socket, unique_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 136, in add_strong_link
    super().add_strong_link(from_node_id, from_socket, to_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 87, in add_strong_link
    self.add_node(from_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 110, in add_node
    self.add_strong_link(from_node_id, from_socket, unique_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 136, in add_strong_link
    super().add_strong_link(from_node_id, from_socket, to_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 87, in add_strong_link
    self.add_node(from_node_id)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\comfy\graph.py", line 108, in add_node
    is_lazy = "lazy" in input_info and input_info["lazy"]
              ^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not iterable

@blepping (Contributor) commented Jan 29, 2024

started trying to test this. first the job-iterator extension has to be disabled. it's also not compatible with rgthree's nodes, even if you disable the executor stuff in it, because it will always try to patch the executor regardless. i commented out that part and rgthree didn't seem to be causing problems after that.

those problems are basically expected; stuff that messes with the executor is not going to be compatible with these changes.

>> info 363 VAE -- None None None
Exception in thread Thread-6 (prompt_worker):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/raid/vantec/ai/models/sd/ComfyUI/main.py", line 111, in prompt_worker
    e.execute(item[2], prompt_id, item[3], item[4])
  File "/raid/vantec/ai/models/sd/ComfyUI/execution.py", line 470, in execute
    execution_list.add_node(node_id)
  File "/raid/vantec/ai/models/sd/ComfyUI/comfy/graph.py", line 109, in add_node
    is_lazy = "lazy" in input_info and input_info["lazy"]
              ^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not iterable

this is more of an issue. i added a debug print after line 107, which calls self.get_input_info: print(">> info", unique_id, input_name, "--", input_type, input_category, input_info)

the method can return None, but the downstream code doesn't check for that.

>> info 363 VAE -- None None None

seems like it failed fetching info for the standard VAE node. maybe other weird stuff is going on here but there definitely should be more graceful handling for methods that can return None.
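
for reference, a guard along these lines would avoid the crash (the PR later landed a fix titled "Allow `input_info` to be of type `None`"; the exact code may differ):

def is_lazy_input(input_info):
    # get_input_info can return None for inputs it cannot resolve (e.g. ones
    # injected by extensions), so guard before doing a membership test on it.
    return input_info is not None and bool(input_info.get("lazy", False))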

don't want to seem negative, i definitely appreciate the work you put into these changes and a better approach to execution is certainly very welcome and needed!


edit: did some more digging, the issue seemed to be an incompatibility with the use everywhere nodes (was using that to broadcast the VAE so maybe it wasn't a standard VAE node after all).

are these changes expected to be compatible with use everywhere?

@ltdrdata (Collaborator) commented Feb 1, 2024

> started trying to test this. first the job-iterator extension has to be disabled. [...]
>
> are these changes expected to be compatible with use everywhere?

ping @ali1234, @rgthree.

I see that compatibility of existing custom nodes may be compromised with the new structure.

More important than immediate compatibility breaking is verifying whether each extension can provide compatibility patches for the new structure.

@ali1234 commented Feb 1, 2024

Job iterator only patches the executor to let it run multiple times in a loop. This pull request should make it obsolete.

@rgthree (Contributor) commented Feb 1, 2024

Thanks for the ping and opportunity to fix before breaking. The execution optimization was meant to be forward compatible, but it did assume methods weren't being removed.

I just pushed rgthree/rgthree-comfy@6aa0392, which is forwards compatible with this PR: it no longer attempts to patch the optimization into ComfyUI's execution if the recursive methods don't exist, as is the case in this PR. I've patched this PR in and verified it.

Question: I assume this PR will render the rgthree optimization obsolete? Currently, the patch I provide reduces iterations and times dramatically (from 250,496,808 iterations to just 142, and from 158.13 seconds to ~0.0, as tested on my machine).

Also, I noticed the client API events have changed (client-side progress bar shows 100% complete, even though the workflow is still running). Are there more details on breaking changes?

@Trung0246 commented Feb 1, 2024

Crashed for the following nodes while attempting to create info['input_order']:

ttN pipeLoader
ttN pipeLoaderSDXL

Potentially problematic line:
https://github.com/TinyTerra/ComfyUI_tinyterraNodes/blob/main/tinyterraNodes.py#L1534

It looks like the problematic key is my_unique_id, which results in the "UNIQUE_ID" string, therefore the .keys() call will fail.


[FIXED]

Assertion crash during execution. Looks like there's some incorrect assumption when calling BasicCache.set_prompt. The following is a simplified call hierarchy:

https://github.com/guill/ComfyUI/blob/36b2214e30db955a10b27ae0d58453bab99dac96/execution.py#L457
https://github.com/guill/ComfyUI/blob/36b2214e30db955a10b27ae0d58453bab99dac96/comfy/caching.py#L141

Then the rest:

CacheKeySetInputSignature.add_keys
CacheKeySetInputSignature.get_node_signature
CacheKeySetInputSignature.get_immediate_node_signature
IsChangedCache.get
get_input_data
cached_output = outputs.get(input_unique_id)
BasicCache._get_immediate

Crash at https://github.com/guill/ComfyUI/blob/36b2214e30db955a10b27ae0d58453bab99dac96/comfy/caching.py#L175

The root cause is kinda hard to explain, but take this workflow for instance (workflow embedded in the image):
[image: workflow_buggy_topo]

This should run fine if everything is as it is (notice that the node id order MUST BE 10, 8, 9 such that the middle node always has the lowest id). But the moment I change the source code of Concat Text_O to this:

class concat_text_O:
    """
    This node will concatenate two strings together
    """
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "text1": ("STRING", {"multiline": True, "defaultBehavior": "input"}),
            "text2": ("STRING", {"multiline": True, "defaultBehavior": "input"}),
            "separator": ("STRING", {"multiline": False, "default": ","}),
        }}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "fun"
    CATEGORY = "O/text/operations"

    @staticmethod
    def fun(text1, separator, text2):
        return (text1 + separator + text2,)

    @classmethod
    def IS_CHANGED(cls, *args, **kwargs):
        return float("NaN")

Compared with https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92/blob/ebcaad0edbfbd8783eb6ad3cb979f23ee3e71c5e/src/QualityOfLifeSuit_Omar92.py#L1189, the new code adds IS_CHANGED, which is typical for some nodes. However, now IsChangedCache.get is forced to call get_input_data, and at that point BasicCache.cache_key_set is not yet initialized, hence the assertion crash.


[FIXED]

Another bug: a node gets added but does not get executed. This occurs when all of the node's inputs are optional (which may include link inputs, not value inputs) AND when the same output is connected to two nodes: the process node and the prompt-expand node. When this node gets added to the expansion by the prompt-expand node, it is somehow ignored and no longer executes, and this includes all subsequent process nodes.

Not sure how to proceed with this. The only way I can think of is adding a way to force-add a node to execution_list through the same expand dict by checking certain keys, with the current codebase.

@asagi4 (Contributor) commented Feb 1, 2024

I managed to trigger an error like this by running out of VRAM mid-generation and then trying to run another generation.

Traceback (most recent call last):
  File "/home/sd/.conda/envs/sd/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/home/sd/.conda/envs/sd/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/home/sd/git/ComfyUI/main.py", line 111, in prompt_worker
    e.execute(item[2], prompt_id, item[3], item[4])
  File "/home/sd/git/ComfyUI/execution.py", line 476, in execute
    self.handle_execution_error(prompt_id, dynamic_prompt.original_prompt, current_outputs, executed, error, ex)
  File "/home/sd/git/ComfyUI/execution.py", line 438, in handle_execution_error
    "current_outputs": error["current_outputs"],
                       ~~~~~^^^^^^^^^^^^^^^^^^^
KeyError: 'current_outputs'

ComfyUI then seems to get stuck, unable to do anything.

I use ComfyBox as a frontend to ComfyUI. Didn't yet try reproducing this without it.

I fixed it by changing the code to use error.get("current_outputs", []) instead. I don't know why current_outputs is missing in this case, but using get makes ComfyUI recover from the OOM without having to restart.
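
A minimal sketch of that workaround, assuming the dict-shaped error payload seen in the traceback above (the eventual upstream fix, "Handle errors (like OOM) more gracefully", may differ):

def error_status(error: dict) -> dict:
    # .get() tolerates a missing "current_outputs" key (observed here after an
    # OOM mid-generation) instead of raising KeyError and wedging the worker.
    return {
        "current_outputs": error.get("current_outputs", []),
    }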

@WeeBull commented Feb 3, 2024

Update: I've managed to test the PR now, and apart from having to disable rgthree, my flows seem to work fine. My fear that mapping over lists might be broken seems to be unfounded.

Original: I'm not able to test this at the moment, but does this preserve the behaviour that a node will map over lists given as inputs? Looking at the change, it appears that behaviour might have been removed.

For example, using the impact pack read prompts from file followed by unzip prompts will give you lists of +ve and -ve prompts, which a CLIP prompt encode will turn into lists of conditioning, which a ksampler will turn into lists of latents.

Also really useful for shmoo-ing parameters over ranges with CR Float / Integer range list.
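
For context, a rough sketch of the list-mapping behaviour being asked about, loosely modeled on the executor's map_node_over_list (illustrative only; the real function also handles interrupts, UI output, and callbacks):

def map_over_lists(fn, inputs: dict) -> list:
    # Call fn once per index; shorter lists are broadcast by repeating
    # their last element, so a length-1 list behaves like a constant.
    longest = max((len(v) for v in inputs.values()), default=0)
    results = []
    for i in range(longest):
        kwargs = {k: v[i] if i < len(v) else v[-1] for k, v in inputs.items()}
        results.append(fn(**kwargs))
    return results

For example, map_over_lists(lambda a, b: a + b, {"a": [1, 2, 3], "b": [10]}) returns [11, 12, 13].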

@Seedsa commented Feb 5, 2024

ERROR:root:Failed to validate prompt for output 197:
ERROR:root:* (prompt):
ERROR:root:  - Return type mismatch between linked nodes: model, * != MODEL
ERROR:root:  - Return type mismatch between linked nodes: latent_image, * != LATENT
ERROR:root:* Efficient Loader 190:
ERROR:root:  - Return type mismatch between linked nodes: positive, * != STRING
ERROR:root:* KSampler (Efficient) 197:
ERROR:root:  - Return type mismatch between linked nodes: model, * != MODEL
ERROR:root:  - Return type mismatch between linked nodes: latent_image, * != LATENT
ERROR:root:Output will be ignored
ERROR:root:Failed to validate prompt for output 9:
ERROR:root:* (prompt):
ERROR:root:  - Return type mismatch between linked nodes: images, * != IMAGE
ERROR:root:* SaveImage 9:
ERROR:root:  - Return type mismatch between linked nodes: images, * != IMAGE
ERROR:root:Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': 'Return type mismatch between linked nodes: model, * != MODEL\nReturn type mismatch between linked nodes: latent_image, * != LATENT\nReturn type mismatch between linked nodes: images, * != IMAGE', 'extra_info': {}}

@doctorpangloss commented Feb 9, 2024

suppose i already have a way to distributed-compute whole workflows robustly and transparently inside python. with these changes, is it a small lift to distribute pieces of the graph / individual nodes? the idea would be to make it practicable to integrate many models together performantly - each individual model may take up the entire VRAM, but distributed on different machines. this is coming from the POV of having a working implementation.

if the signature of node execution were async, it would be a very small lift to parallelize individual node execution among consumers (aka workers). it would bring no changes to the current blocking behavior in the ordinary 1-producer-1-consumer case.

@guill (Contributor, Author) commented Feb 9, 2024

> Update: I've managed to test the PR now, and apart from having to disable rgthree, my flows seem to work fine. My fear that mapping over lists might be broken seems to be unfounded.

@WeeBull Yeah, the intention is that functionality is maintained. I tested it via some contrived test cases, but it's not functionality I use much in my actual workflows. Good to hear that it seems to work for you!

@Seedsa As mentioned in the PR summary, this PR does not enable "*"-type sockets. You'll have to manually re-enable those via a patch just as you would on mainline. @comfyanonymous How do you feel about a command-line argument that enables "*" types so people don't have to circulate another patch file for it after this PR?

@doctorpangloss That should be a good bit easier with this architecture than the old one, though it's likely any old working code won't transfer over unchanged. In this execution model, nodes can already return ExecutionResult.SLEEPING to say "do other nodes that don't require my result and come back to me later". You could fairly easily kick off whatever remote process you want in that first call to Execute, add an artificial prerequisite to the node (that gets removed when you get the result of the remote process), and sleep in the meantime.

There might be a way to wrap the Execute call in such a way that normal Python async semantics do that automatically for you, but I'd have to do some research. Python isn't really my area of expertise. 😅

@guill (Contributor, Author) commented Feb 15, 2024

@Trung0246 I believe the issue with ttN pipeLoader is actually a bug in the node. The declaration of "my_unique_id": "UNIQUE_ID" should be inside of the "hidden" category for it to actually function. Right now, it's doing nothing at all (and the my_unique_id argument will always be None). Honestly, an error with a callstack might be preferable to the current situation where it silently ignores that argument. I can add additional checks if people disagree though.
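
For illustration, the corrected declaration would look something like this (a hypothetical minimal node; only the placement of the UNIQUE_ID entry matters):

class ExampleNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {},
            # UNIQUE_ID only takes effect as a *hidden* input; declared under
            # "required" or "optional" it does nothing, and the my_unique_id
            # argument arrives as None.
            "hidden": {"my_unique_id": "UNIQUE_ID"},
        }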

I'm unable to reproduce any of your other issues. Any chance you could upload some workflows? (I'm not sure what node pack the 'process' or 'prompt_expand' nodes are from.)

This allows the use of nodes that have sockets of type '*' without
applying a patch to the code.
@guill (Contributor, Author) commented Feb 15, 2024

Pasting a code block that @Trung0246 gave me on Matrix here so that it's documented:

class TautologyStr(str):
	# A string that never compares unequal: socket-type validation uses "!="
	# comparisons, so this type always passes them.
	def __ne__(self, other):
		return False

class ByPassTypeTuple(tuple):
	# Clamp every index to 0 and wrap string entries in TautologyStr,
	# making each declared return type behave like a wildcard.
	def __getitem__(self, index):
		if index > 0:
			index = 0
		item = super().__getitem__(index)
		if isinstance(item, str):
			return TautologyStr(item)
		return item

class StubNode:
	@classmethod
	def INPUT_TYPES(cls):
		return {
			"required": {
				"text": ("STRING", {
					"default": "TEST",
					"multiline": False
				}),
			},
			"optional": {
				"_stub_in": (TautologyStr("*"), ),
				"_stub": ("STUB_TYPE", )
			},
			"hidden": {
				"_id": "UNIQUE_ID",
				"_prompt": "PROMPT",
				"_workflow": "EXTRA_PNGINFO"
			}
		}
	
	RETURN_TYPES = ByPassTypeTuple(("*", "*"))
	RETURN_NAMES = ("_stub_out", "_stub_out_all")
	INPUT_IS_LIST = True
	OUTPUT_IS_LIST = (True, True)
	# OUTPUT_NODE = True
	FUNCTION = "execute"
	CATEGORY = "_for_testing"

	def execute(self, **kwargs):
		return (kwargs.get("text", ["???"]), kwargs.get("_stub_in", ["STUB"]))
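
In short: socket-type validation compares types with != (as visible in the error logs earlier in this thread), so TautologyStr's overridden __ne__ makes any comparison pass, while ByPassTypeTuple applies the same trick to every entry of RETURN_TYPES. This is the kind of workaround that first-class variant ("*") socket support is meant to replace.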

This could happen when attempting to evaluate `IS_CHANGED` for a node
during the creation of the cache (in order to create the cache key).
@guill (Contributor, Author) commented Feb 19, 2024

I believe all the issues reported in this PR so far have been addressed. Please let me know if you encounter any new ones (or I'm wrong about the existing reports being resolved).

@ricklove (Contributor) commented

Getting this error in a complex workflow - working on identifying the source so I can upload a minimal test case:

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "D:\Projects\ai\comfyui\ComfyUI\execution.py", line 313, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Projects\ai\comfyui\ComfyUI\execution.py", line 191, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Projects\ai\comfyui\ComfyUI\execution.py", line 166, in map_node_over_list
    results.append(getattr(obj, func)(**input_dict))
TypeError: LoadImage.load_image() got an unexpected keyword argument 'upload'

@ricklove (Contributor) commented Feb 21, 2024

> Getting this error in a complex workflow - working on identifying the source so I can upload a minimal test case:
> [...]
> TypeError: LoadImage.load_image() got an unexpected keyword argument 'upload'

Looks like just the Load Image node will cause this.

Edit: I verified this is still happening after merging in the latest master branch, so something is breaking the LoadImage node.

Behavior should now match the master branch with regard to undeclared
inputs. Undeclared inputs that are socket connections will be used while
undeclared inputs that are literals will be ignored.
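
A sketch of the rule that commit message describes (a hypothetical helper; in the prompt JSON, a socket connection is represented as a [node_id, output_index] list, while widget values are plain literals):

def filter_undeclared_inputs(inputs: dict, declared: set) -> dict:
    # Declared inputs pass through untouched. Undeclared socket connections
    # are kept; undeclared literals (like LoadImage's "upload" widget value)
    # are dropped, matching master-branch behavior.
    def is_link(value):
        return isinstance(value, list) and len(value) == 2
    return {name: value for name, value in inputs.items()
            if name in declared or is_link(value)}
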
comfyanonymous pushed a commit that referenced this pull request Aug 22, 2024
* Fix task.status.status_str caused by 2666 regression

* fix

* fix
bigcat88 added a commit to Visionatrix/Visionatrix that referenced this pull request Aug 24, 2024
I had slightly checked, looks like this PR
comfyanonymous/ComfyUI#2666 does not break
anything in Visionatrix, which is very surprising for me =)

Signed-off-by: Alexander Piskun <bigcat88@icloud.com>
@hananbeer commented

nice.

but... I actually liked workflows going only in one direction; it simplifies things a lot.
yeah, some things are more difficult without it, and if batching is possible it can just take too much RAM.

I may be late here but if I may join the discussion:
I would approach it differently and use functions instead.
let's say you want to decrease denoise over time - instead of the proposed loop nodes over ksamplers, I would just prefer to pass a generator object, as returned in python by enumerate(np.arange(1, 0, -0.2)), which steps through the values [1.0, 0.8, 0.6, 0.4, 0.2]
(a bit of a side track - since range() doesn't support floats and np.arange isn't a generator, I simply wrapped it with enumerate)

then the ksampler node (and any node, for that matter) needs to unpack this generator.
in python that's just calling next(), e.g.:

>>> enumerate(np.arange(1,0,-0.2))
<enumerate object at 0x0000028898F91A80>
>>> a=_
>>> next(a)
(0, 1.0)
>>> next(a)
(1, 0.8)
>>> next(a)
(2, 0.6000000000000001)
>>> next(a)
(3, 0.40000000000000013)
>>> next(a)
(4, 0.20000000000000018)
>>> next(a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

it should probably just be an arbitrary function instead of a generator; I just wanted to use the range() example for the sake of making my point.

now nodes should treat every input as a function; a "normal" value (a constant) would just be the identity function.
this solution is cleaner imo and reduces the complexity of loops.

the only issue is that existing nodes don't expect functions, but a possible solution could be a new function-typed input: if the execution engine sees a function going to a non-function input type, it will just unpack it automatically (take the first value and error if there is more than one). see the sketch below.
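
a minimal sketch of that idea (hypothetical, not part of this PR): constants get wrapped so every input presents the same generator interface, and a generator arriving at a plain input gets auto-unpacked.

def as_stream(value):
    # identity wrapper: a constant becomes a one-value generator, so a node
    # can call next() on every input uniformly.
    yield value

def unpack_plain(maybe_gen):
    # auto-unpacking for legacy inputs: accept exactly one value, else error.
    values = list(maybe_gen)
    if len(values) != 1:
        raise ValueError(f"expected exactly 1 value, got {len(values)}")
    return values[0]

for example, unpack_plain(as_stream(0.5)) returns 0.5, while a multi-value generator raises.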

chenbaiyujason added a commit to NeoWorldTeam/ComfyUI that referenced this pull request Sep 4, 2024
commit f067ad15d139d6e07e44801759f7ccdd9985c636
Author: Silver <65376327+silveroxides@users.noreply.github.com>
Date:   Wed Sep 4 01:16:38 2024 +0200

    Make live preview size a configurable launch argument (#4649)

    * Make live preview size a configurable launch argument

    * Remove import from testing phase

    * Update cli_args.py

commit 483004dd1d379837a06e1244e8e833ab1369dd50
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Sep 3 17:02:19 2024 -0400

    Support newer glora format.

commit 00a5d081038b5c64c9076f06a2c95c4dee394e97
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Sep 3 01:25:05 2024 -0400

    Lower fp8 lora memory usage.

commit d043997d30d91ab057f770d3396c2e288e37b38a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Sep 2 08:22:15 2024 -0400

    Flux onetrainer lora.

commit f1c2301697cb1cd538f8d4190741935548bb6734
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Sun Sep 1 14:44:49 2024 -0700

    fix typo in stale-issues (#4735)

commit 8d31a6632fdfbe240828f3a4c77d1464b30023c8
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Sep 1 17:29:31 2024 -0400

    Speed up inference on nvidia 10 series on Linux.

commit b643eae08b7f0c8eb69b77bd61e31009bfb325b9
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Sep 1 01:01:54 2024 -0400

    Make minimum_inference_memory() depend on --reserve-vram

commit baa6b4dc36a5262deee2d69e8c0199e6a4db3847
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 31 03:59:42 2024 -0400

    Update manual install instructions.

commit d4aeefc29772b080ba8d46c4397594232a3b9b44
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Fri Aug 30 22:57:18 2024 -0700

    add github action to automatically handle stale user support issues (#4683)

    * add github action to automatically handle stale user support issues

    * improve stale message

    * remove token part

commit 587e7ca6542c8c9bffe35dd9dfd9b6a91c961906
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 31 01:45:41 2024 -0400

    Remove github buttons.

commit c90459eba0daf15796de2fd7b9e7d8d42f7c7e71
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Fri Aug 30 19:32:10 2024 -0400

    Update ComfyUI_frontend to 1.2.40 (#4691)

    * Update ComfyUI_frontend to 1.2.40

    * Add files

commit 04278afb1014b7c7b31b75ebd771205c98fa2d51
Author: Vedat Baday <54285744+badayvedat@users.noreply.github.com>
Date:   Sat Aug 31 02:26:47 2024 +0300

    feat: return import_failed from init_extra_nodes function (#4694)

commit 935ae153e154813ace36db4c4656a5e96f403eba
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 30 12:48:42 2024 -0400

    Cleanup.

commit e91662e7849aed594abf425e88660cb09f502926
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Fri Aug 30 12:46:37 2024 -0400

    Get logs endpoint & system_stats additions (#4690)

    * Add route for getting output logs

    * Include ComfyUI version

    * Move to own function

    * Changed to memory logger

    * Unify logger setup logic

    * Fix get version git fallback

    ---------

    Co-authored-by: pythongosssss <125205205+pythongosssss@users.noreply.github.com>

commit 63fafaef451f03cb88b1ec5399a78385b5e3ca21
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 30 04:58:41 2024 -0400

    Fix potential issue with hydit controlnets.

commit ec28cd91363a4de6c0e7a968aba61fd035a550b9
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 29 16:48:48 2024 -0700

    swap legacy sdv15 link (#4682)

    * swap legacy sdv15 link

    * swap v15 ckpt examples to safetensors

    * link the fp16 copy of the model by default

commit 6eb5d645227033aaea327f0949a8774920fa07c4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 29 19:07:23 2024 -0400

    Fix glora lowvram issue.

commit 10a79e989869f8878e27a8f373d85aef31822415
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 29 18:41:22 2024 -0400

    Implement model part of flux union controlnet.

commit ea3f39bd6906dd455c867198d4d94152e76ad074
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 29 02:14:19 2024 -0400

    InstantX depth flux controlnet.

commit b33cd610703213dbe73baa6aaa3fdc2c61a84adc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 28 18:56:33 2024 -0400

    InstantX canny controlnet.

commit 34eda0f853daffdfcab04e1b3187de40f1d30bbf
Author: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
Date:   Thu Aug 29 06:46:30 2024 +0900

    fix: remove redundant useless loop (#4656)

    fix: potential error of undefined variable

    https://github.com/comfyanonymous/ComfyUI/discussions/4650

commit d31e226650ad01daefff66ec202992b8c3bf8384
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 28 16:18:39 2024 -0400

    Unify RMSNorm code.

commit b79fd7d92c7355b5e9cd5c1ea746d7dd06c27351
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 28 16:12:24 2024 -0400

    ComfyUI supports more than just stable diffusion.

commit 38c22e631ad090a4841e4a0f015a30c565a9f7fc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 27 18:46:55 2024 -0400

    Fix case where model was not properly unloaded in merging workflows.

commit 6bbdcd28aee104db1ae83e2146512dd45dbbad6e
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Tue Aug 27 13:55:37 2024 -0400

    Support weight padding on diff weight patch (#4576)

commit ab130001a8b966ed788f7436aa3b689d038e42a3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 27 02:41:56 2024 -0400

    Do RMSNorm in native type.

commit ca4b8f30e0bf40cf58dcb3f3e6118832a60348c8
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Tue Aug 27 02:07:25 2024 -0400

    Cleanup empty dir if frontend zip download failed (#4574)

commit 70b84058c1737eac0ee41911b2e3b5e0edb49bc7
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Mon Aug 26 23:06:12 2024 -0700

    Add relative file path to the progress report. (#4621)

commit 2ca8f6e23d41a8e4cf064ca9f2ebe5c768e155dc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 26 15:12:06 2024 -0400

    Make the stochastic fp8 rounding reproducible.

commit 7985ff88b9a7099378b5f2026bee5da63d3fc53f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 26 12:33:57 2024 -0400

    Use less memory in float8 lora patching by doing calculations in fp16.

commit c6812947e98eb384250575d94108d9eb747765d9
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 26 02:07:32 2024 -0400

    Fix potential memory leak.

commit 9230f658232fd94d0beeddb94aed093a1eca82b5
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 25 05:43:55 2024 -0400

    Fix some controlnets OOMing when loading.

commit 6ab1e6fd4a2f7cc5945310f0ecfc11617aa9a2cb
Author: guill <guill@users.noreply.github.com>
Date:   Sat Aug 24 12:34:58 2024 -0700

    [Bug #4529] Fix graph partial validation failure (#4588)

    Currently, if a graph partially fails validation (i.e. some outputs are
    valid while others have links from missing nodes), the execution loop
    could get an exception resulting in server lockup.

    This isn't actually possible to reproduce via the default UI, but is a
    potential issue for people using the API to construct invalid graphs.

commit 07dcbc3a3e19e638098121fb6b437f07601334a4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 24 02:31:03 2024 -0400

    Clarify how to use high quality previews.

commit 8ae23d8e80620835495248cd29d5ae17c80622ca
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 23 17:52:47 2024 -0400

    Fix onnx export.

commit 7df42b9a2364bae6822fbd9e9fa10cea2e319ba3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 23 04:58:59 2024 -0400

    Fix dora.

commit 5d8bbb72816c263df1cff53c3c4a7835eeecc636
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 23 04:06:27 2024 -0400

    Cleanup.

commit 2c1d2375d6bc62b6095e0b8c1b31df1112ec2ea2
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 23 04:04:55 2024 -0400

    Fix.

commit 64ccb3c7e3e1f3e4075669b71b870aae66cec077
Author: Simon Lui <502929+simonlui@users.noreply.github.com>
Date:   Fri Aug 23 00:59:57 2024 -0700

    Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). (#4562)

commit 9465b23432da552ed1439ebe1e365104cfc48ea0
Author: Scorpinaus <85672737+Scorpinaus@users.noreply.github.com>
Date:   Fri Aug 23 15:57:08 2024 +0800

    Added SD15_Inpaint_Diffusers model support for unet_config_from_diffusers_unet function (#4565)

commit bb4416dd5b2d7c2f34dc17e18761dd6b3d8b6ead
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Thu Aug 22 17:38:30 2024 -0400

    Fix task.status.status_str caused by #2666 (#4551)

    * Fix task.status.status_str caused by 2666 regression

    * fix

    * fix

commit c0b0da264b30c8583ad54a7646b9d55286d499ca
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 22 17:20:39 2024 -0400

    Missing imports.

commit c26ca272076262c8b21a8f2e094cf538d88b9e46
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 22 17:12:00 2024 -0400

    Move calculate function to comfy.lora

commit 7c6bb84016d9f7cb668e056d7b13840471f88fc3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 22 17:05:12 2024 -0400

    Code cleanups.

commit c54d3ed5e628984114f9c2af6161b52141dfc28e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 22 15:57:40 2024 -0400

    Fix issue with models staying loaded in memory.

commit c7ee4b37a1e1d91496bba34c246485c3c2c7393a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 22 15:15:47 2024 -0400

    Try to fix some lora issues.

commit 7b70b266d843150adffdb7262394e6b777e0d6ce
Author: David <david@dbenton.com>
Date:   Thu Aug 22 12:24:21 2024 -0500

    Generalize MacOS version check for force-upcast-attention (#4548)

    This code automatically forces upcasting attention for MacOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.

    I am running MacOS Sonoma 14.6.1 (latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.

    See comfyanonymous/ComfyUI#3521

commit 8f60d093ba066eee953d5997136962ff605e801a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 22 10:38:24 2024 -0400

    Fix issue.

commit dafbe321d2dfbe2cabd9eb32e88f8088996cd524
Author: guill <guill@users.noreply.github.com>
Date:   Wed Aug 21 20:38:46 2024 -0700

    Fix a bug where cached outputs affected IS_CHANGED (#4535)

    This change fixes a bug where non-constant values could be passed to the
    IS_CHANGED function. This would result in workflows taking an extra
    execution before they acted as if they were cached.

    The actual change is like 4 characters -- the rest is adding unit tests.

commit 5f84ea63e824debfbe36efc6773e6190e8da324c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 21 23:36:58 2024 -0400

    Add a shortcut to the nightly package to run with --fast.

commit 843a7ff70c6f0ad107c22124951f17599a6f595b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 21 23:23:50 2024 -0400

    fp16 is actually faster than fp32 on a GTX 1080.

commit a60620dcea1302ef5c7f555e5e16f70b39c234ef
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 21 16:38:26 2024 -0400

    Fix slow performance on 10 series Nvidia GPUs.

commit 015f73dc4941ae6e01e01b934368f031c7fa8b8d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 21 16:17:15 2024 -0400

    Try a different type of flux fp16 fix.

commit 904bf58e7d27eb254d20879e306042653debc4b3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 21 14:01:41 2024 -0400

    Make --fast work on pytorch nightly.

commit 5f502630883450e1605518c46b486040239b628d
Author: Svein Ove Aas <sveina@gmail.com>
Date:   Wed Aug 21 16:21:48 2024 +0100

    Replace use of .view with .reshape (#4522)

    When generating images with fp8_e4_m3 Flux and batch size >1, using --fast, ComfyUI throws a "view size is not compatible with input tensor's size and stride" error pointing at the first of these two calls to view.

    As reshape is semantically equivalent to view except for working on a broader set of inputs, there should be no downside to changing this. The only difference is that it clones the underlying data in cases where .view would error out. I have confirmed that the output still looks as expected, but cannot confirm that no mutable use is made of the tensors anywhere.

    Note that --fast is only marginally faster than the default.

commit 5e806f555d1704d10ae02f4fafbc1b85713d389f
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Tue Aug 20 23:04:42 2024 -0700

    add a get models list api route (#4519)

    * get models list api route

    * remove copypasta

commit f07e5bb5225ff8b9706866a7a5105c987c92b330
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Tue Aug 20 22:25:06 2024 -0700

    Add GET /internal/files.  (#4295)

    * Create internal route table.

    * List files.

    * Add GET /internal/files.

    Retrieves list of files in models, output, and user directories.

    * Refactor file names.

    * Use typing_extensions for Python 3.8

    * Fix tests.

    * Remove print statements.

    * Update README.

    * Add output and user to valid directory test.

    * Add missing type hints.

commit 03ec517afba721fbd13af5d55408489470b07e43
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 21 00:47:19 2024 -0400

    Remove useless line, adjust windows default reserved vram.

commit f257fc999fde8e9695e4755902c75e7f7192fe2b
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Wed Aug 21 00:01:34 2024 -0400

    Add optional deprecated/experimental flag to node class (#4506)

    * Add optional deprecated flag to node class

    * nit

    * Add experimental flag

commit bb50e6983970e3e01d1f2516a38e532883a416ae
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Wed Aug 21 00:00:49 2024 -0400

    Update frontend to 1.2.30 (#4513)

commit 510f3438c1fab09657061f966d8272327ea1fd42
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 20 22:53:26 2024 -0400

    Speed up fp8 matrix mult by using better code.

commit ea63b1c092e2974be9bcf4753d5305213f188e25
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 20 12:05:13 2024 -0400

    Simpletrainer lycoris format.

commit 9953f22fce0ba899da0676a0b374e5d1f72bf259
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 20 11:49:33 2024 -0400

    Add --fast argument to enable experimental optimizations.

    Optimizations that might break things/lower quality will be put behind
    this flag first and might be enabled by default in the future.

    Currently the only optimization is float8_e4m3fn matrix multiplication on
    4000/ADA series Nvidia cards or later. If you have one of these cards you
    will see a speed boost when using fp8_e4m3fn flux for example.

commit d1a6bd684551affeef538147228c0ec6ee0ca94a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 20 10:42:40 2024 -0400

    Support loading long clipl model with the CLIP loader node.

commit 83dbac28ebaac7c3230bf48b647826621954b39f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 20 10:00:16 2024 -0400

    Properly set if clip text pooled projection instead of using hack.

commit 538cb068bc10c8eec3fc2884f0b79c71a3c0b75a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 20 00:50:39 2024 -0400

    Make cast_to a nop if weight is already good.

commit 1b3eee672cec4bf89d06a72ddd5866f8e331aa7e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 20 00:31:04 2024 -0400

    Fix potential issue with multi devices.

commit 5a69f84c3c38481dd0a78b25eb1111e82ade04b0
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Mon Aug 19 18:25:20 2024 -0400

    Update README.md (Add shield badges) (#4490)

commit 9eee470244b39ba24b83a3f8fa9b5cfd92d2214a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 17:36:35 2024 -0400

    New load_text_encoder_state_dicts function.

    Now you can load text encoders straight from a list of state dicts.

commit 045377ea893d0703e515d87f891936784cb2f5de
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 17:16:18 2024 -0400

    Add a --reserve-vram argument if you don't want comfy to use all of it.

    --reserve-vram 1.0 for example will make ComfyUI try to keep 1GB vram free.

    This can also be useful if workflows are failing because of OOM errors but
    in that case please report it if --reserve-vram improves your situation.

commit 4d341b78e8a04f76a289d9bcec264e4f38997c64
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 16:28:55 2024 -0400

    Bug fixes.

commit 6138f92084cff2ad380aa232c208d0b448887620
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 15:35:25 2024 -0400

    Use better dtype for the lowvram lora system.

commit be0726c1ed9c8ae5d02733dcc2fd5b997bd265de
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 15:24:07 2024 -0400

    Remove duplication.

commit 766ae119a86e51bf0ee0b068c26fa3acc0e1b815
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 14:00:56 2024 -0400

    CheckpointSave node name.

commit fc90ceb6bae5d8dac79acfb813fc5d67e8b2179e
Author: Yoland Yan <4950057+yoland68@users.noreply.github.com>
Date:   Mon Aug 19 10:41:30 2024 -0700

    Update issue template config.yml to direct frontend issues to frontend repos (#4486)

    * Update config.yml

    * Typos

commit 4506ddc86a413779cd2274305694c73af02c3892
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 13:38:03 2024 -0400

    Better subnormal fp8 stochastic rounding. Thanks Ashen.

commit 20ace7c853ae980272ded5957e196b7dcf57ee8c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 12:48:59 2024 -0400

    Code cleanup.

commit b29b3b86c5be1d95737fe966598400ea09f63900
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Mon Aug 19 07:12:32 2024 -0400

    Update README to include frontend section (#4468)

    * Update README to include frontend section

    * nit

commit 22ec02afc0c900ee86ba6f0387bfd9dc0e34fb83
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 19 05:19:59 2024 -0400

    Handle subnormal numbers in float8 rounding.

commit 39f114c44bb99d4a221e8da451d4f2a20119c674
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 18 16:53:17 2024 -0400

    Less broken non blocking?

commit 6730f3e1a362d5f3ed44f8541517b03356e7bf0e
Author: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Date:   Sun Aug 18 14:38:09 2024 -0400

    Disable non blocking.

    It fixed some perf issues but caused other issues that need to be debugged.

commit 73332160c8c9843876680fb04f037793c73d55b6
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 18 10:29:33 2024 -0400

    Enable non blocking transfers in lowvram mode.

commit 2622c55aff9433d425a62e5f6c379cf22a42139e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 18 00:47:25 2024 -0400

    Automatically use RF variant of dpmpp_2s_ancestral if RF model.

commit 1beb348ee2dafea19615cdcfdcfc096501c2f519
Author: Ashen <>
Date:   Sat Aug 17 17:32:27 2024 -0700

    dpmpp_2s_ancestral_RF for rectified flow (Flux, SD3 and Auraflow).

commit 9aa39e743cf94c55ef3b365e3ac3184a85be08c2
Author: bymyself <abolkonsky.rem@gmail.com>
Date:   Sat Aug 17 20:52:56 2024 -0700

    Add new shortcuts to readme (#4442)

commit d31df04c8aa84da61f96862a3967d2a9900bb9b0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 23:00:44 2024 -0400

    Indentation.

commit e68763f40c32da0f5fc921f0bd7112fb6319dc92
Author: Xrvk <eero.heikkinen@gmail.com>
Date:   Sun Aug 18 05:58:23 2024 +0300

    Add Flux model support for InstantX style controlnet residuals (#4444)

    * Add Flux model support for InstantX style controlnet residuals

    * Refactor Flux controlnet residual step to a separate method

    * Rollback minor change

    * New format for applying controlnet residuals: input->double_blocks, output->single_blocks

    * Adjust XLabs Flux controlnet to fit new syntax of applying Flux controlnet residuals

    * Remove unnecessary import and minor style change

commit 310ad09258cd6ba80c44d31e183063455726dcb8
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 21:31:15 2024 -0400

    Add a ModelSave node.

commit 4f7a3cb6fbd58d7546b3c76ec1f418a2650ed709
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 21:28:36 2024 -0400

    unet -> diffusion_models.

commit bb222ceddb232aafafa99cd4dec38b3719c29d7d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 14:07:19 2024 -0400

    Fix loras having a weak effect when applied on fp8.

commit 14af129c5509d10504113a1520c45b0ebcf81f14
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 11:36:10 2024 -0400

    Improve execution UX.

    Some branches with VAELoader -> VAEDecode -> Preview were being executed
    last. With this change they will be executed earlier.

commit fca42836f26d06b14a3149c0c94fc1c7f264f633
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 10:15:13 2024 -0400

    Add model_options for text encoder.

commit 858d51f91a6039387d749c107d15ee9690cb7f1b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 04:08:54 2024 -0400

    Fix VAEDecode -> Preview not being executed first.

commit cd5017c1c9b3a7b0fec892e80290a32616bbff38
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 17 01:06:08 2024 -0400

    calculate_weight function to use a different dtype.

commit 83f343146ae1e8ccaf21da5b012bf59c78b97179
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 16 17:12:42 2024 -0400

    Fix potential lowvram issue.

commit b021cf67c7517aed914810d0f9dae3343d3305d4
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Fri Aug 16 15:25:02 2024 -0400

    Update frontend to 1.2.26 (#4415)

commit 1770fc77ed91348f4060e3c0b040c1519d6f91d0
Author: Matthew Turnshek <matthewturnshek@gmail.com>
Date:   Fri Aug 16 12:53:13 2024 -0400

    Implement support for taef1 latent previews (#4409)

    * add taef1 handling to several places

    * remove guess_latent_channels and add latent_channels info directly to flux model

    * remove TODO

    * fix numbers

commit 05a9f3faa1ac4236705681cdf5f289267f4d3b9a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 16 08:50:17 2024 -0400

    Log a warning when there's an issue with IS_CHANGED.

commit 86c5970ac030df5735bfa03495695e64ecc90984
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 16 08:40:31 2024 -0400

    Fix custom nodes hooking the map_node_over_list and breaking things.

commit bfc214f434e9c92d05d755857c597733a7fd3b91
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Thu Aug 15 16:50:25 2024 -0400

    Use new TS frontend uncompressed (#4379)

    * Swap frontend uncompressed

    * Add uncompressed files

commit 3f5939add69c2a8fea2b892a46a48c2937dc4128
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 15 13:48:18 2024 -0400

    Tell github not to count the web directory in language stats.

commit 5960f946a9353f4a8ff97e92f82e0541caa32bf7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 15 09:37:30 2024 -0400

    Move a few files from comfy -> comfy_execution.

    Python code in the comfy folder should not import things from outside it.

commit 5cfe38f41c7091b0fd954877d9d7427a8b438b1a
Author: guill <guill@users.noreply.github.com>
Date:   Thu Aug 15 08:21:11 2024 -0700

    Execution Model Inversion (#2666)

    * Execution Model Inversion

    This PR inverts the execution model -- from recursively calling nodes to
    using a topological sort of the nodes. This change allows for
    modification of the node graph during execution. This allows for two
    major advantages:

        1. The implementation of lazy evaluation in nodes. For example, if a
        "Mix Images" node has a mix factor of exactly 0.0, the second image
        input doesn't even need to be evaluated (and vice versa if the mix
        factor is 1.0).

        2. Dynamic expansion of nodes. This allows for the creation of dynamic
        "node groups". Specifically, custom nodes can return subgraphs that
        replace the original node in the graph. This is an incredibly
        powerful concept. Using this functionality, it was easy to
        implement:
            a. Components (a.k.a. node groups)
            b. Flow control (i.e. while loops) via tail recursion
            c. All-in-one nodes that replicate the WebUI functionality
            d. and more
        All of those were able to be implemented entirely via custom nodes,
        so those features are *not* a part of this PR. (There are some
        front-end changes that should occur before that functionality is
        made widely available, particularly around variant sockets.)

    The custom nodes associated with this PR can be found at:
    https://github.com/BadCafeCode/execution-inversion-demo-comfyui

    Note that some of them require that variant socket types ("*") be
    enabled.

    * Allow `input_info` to be of type `None`

    * Handle errors (like OOM) more gracefully

    * Add a command-line argument to enable variants

    This allows the use of nodes that have sockets of type '*' without
    applying a patch to the code.

    * Fix an overly aggressive assertion.

    This could happen when attempting to evaluate `IS_CHANGED` for a node
    during the creation of the cache (in order to create the cache key).

    * Fix Pyright warnings

    * Add execution model unit tests

    * Fix issue with unused literals

    Behavior should now match the master branch with regard to undeclared
    inputs. Undeclared inputs that are socket connections will be used while
    undeclared inputs that are literals will be ignored.

    * Make custom VALIDATE_INPUTS skip normal validation

    Additionally, if `VALIDATE_INPUTS` takes an argument named `input_types`,
    that variable will be a dictionary of the socket type of all incoming
    connections. If that argument exists, normal socket type validation will
    not occur. This removes the last hurdle for enabling variant types
    entirely from custom nodes, so I've removed that command-line option.

    I've added appropriate unit tests for these changes.

    * Fix example in unit test

    This wouldn't have caused any issues in the unit test, but it would have
    bugged the UI if someone copy+pasted it into their own node pack.

    * Use fstrings instead of '%' formatting syntax

    * Use custom exception types.

    * Display an error for dependency cycles

    Previously, dependency cycles that were created during node expansion
    would cause the application to quit (due to an uncaught exception). Now,
    we'll throw a proper error to the UI. We also make an attempt to 'blame'
    the most relevant node in the UI.

    * Add docs on when ExecutionBlocker should be used

    * Remove unused functionality

    * Rename ExecutionResult.SLEEPING to PENDING

    * Remove superfluous function parameter

    * Pass None for uneval inputs instead of default

    This applies to `VALIDATE_INPUTS`, `check_lazy_status`, and lazy values
    in evaluation functions.

    * Add a test for mixed node expansion

    This test ensures that a node that returns a combination of expanded
    subgraphs and literal values functions correctly.

    * Raise exception for bad get_node calls.

    * Minor refactor of IsChangedCache.get

    * Refactor `map_node_over_list` function

    * Fix ui output for duplicated nodes

    * Add documentation on `check_lazy_status`

    * Add file for execution model unit tests

    * Clean up Javascript code as per review

    * Improve documentation

    Converted some comments to docstrings as per review

    * Add a new unit test for mixed lazy results

    This test validates that when an output list is fed to a lazy node, the
    node will properly evaluate previous nodes that are needed by any inputs
    to the lazy node.

    No code in the execution model has been changed. The test already
    passes.

    * Allow kwargs in VALIDATE_INPUTS functions

    When kwargs are used, validation is skipped for all inputs as if they
    had been mentioned explicitly.

    * List cached nodes in `execution_cached` message

    This was previously just bugged in this PR.

commit 0f9c2a78224ce3179c773fe3af63722f438b0613
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 14 23:08:54 2024 -0400

    Try to fix SDXL OOM issue on some configurations.

commit 153d0a8142d14c6c0d71eb0ba98d3e09c7e7abea
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 14 22:29:23 2024 -0400

    Add a update/update_comfyui_stable.bat to the standalones.

commit ab4dd19b913be95dc7f1a8080cd49aa940345c96
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Wed Aug 14 21:01:06 2024 -0400

    Remove legacy ui test files (#4316)

commit f1d6cef71c70719cc3ed45a2455a4e5ac910cd5e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 14 08:38:07 2024 -0400

    Revert "Disable cuda malloc by default."

    This reverts commit 50bf66e5c44fe3637f29999034c10a0c083c7600.

commit 33fb282d5c12cbc037ec89b45ed53caee4a060a6
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 14 02:51:47 2024 -0400

    Fix issue.

commit 50bf66e5c44fe3637f29999034c10a0c083c7600
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 14 02:49:25 2024 -0400

    Disable cuda malloc by default.

commit e60e19b175f93f2c8fac037063de9f13259be9d4
Author: pythongosssss <125205205+pythongosssss@users.noreply.github.com>
Date:   Wed Aug 14 06:22:10 2024 +0100

    Add support for simple tooltips (#3842)

    * Add support for simple tooltips

    * Fix overflow

    * Add tooltips for nodes in the default workflow

    * new line

    * Prevent potential crash

    * PR feedback

    * Hide tooltip when clicking (e.g. combo widget)

    * Refactor tooltips, add node level support

    * Fix

    * move

    * Fix test (and undo last change)

    * Fixed indent

    * Fix dom widgets, dont show tooltip if not over canvas

commit a5af64d3ceb4c67f4048aa00083b093c96efc527
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 14 01:05:17 2024 -0400

    Revert "Not sure if this actually changes anything but it can't hurt."

    This reverts commit 34608de2e9ba253db7f9ba23d1e6314172d02b7d.

commit 3e52e0364cf81764f58e5aa4f53f0b702f4d4a81
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Tue Aug 13 12:48:52 2024 -0700

    Add model downloading endpoint. (#4248)

    * Add model downloading endpoint.

    * Move client session init to async function.

    * Break up large function.

    * Send "download_progress" as websocket event.

    * Fixed

    * Fixed.

    * Use async mock.

    * Move server set up to right before run call.

    * Validate that model subdirectory cannot contain relative paths.

    * Add download_model test checking for invalid paths.

    * Remove DS_Store.

    * Consolidate DownloadStatus and DownloadModelResult

    * Add progress_interval as an optional parameter.

    * Use tuple type from annotations.

    * Use pydantic.

    * Update comment.

    * Revert "Use pydantic."

    This reverts commit 7461e8eb0073add315c65c6f5e361f0891bffc7d.

    * Add new line.

    * Add newline EOF.

    * Validate model filename as well.

    * Add comment to not rely on internals.

    * Restrict downloading to safetensor files only.

commit 34608de2e9ba253db7f9ba23d1e6314172d02b7d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 13 12:35:25 2024 -0400

    Not sure if this actually changes anything but it can't hurt.

commit 39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 13 03:57:55 2024 -0400

    Fix bug when model cannot be partially unloaded.

commit 74e124f4d784b859465e751a7b361c20f192f0f9
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 23:42:21 2024 -0400

    Fix some issues with TE being in lowvram mode.

commit a562c17e8ac52c6a3cb14902af43dee5a6f1adf4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 23:18:54 2024 -0400

    load_unet -> load_diffusion_model with a model_options argument.

commit 5942c17d5558e3a6a9065e24e86971db3bce0f7f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 21:56:18 2024 -0400

    Order of operations matters.

commit c032b11e074530a6892c4de8d9b457a3d268698e
Author: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Date:   Mon Aug 12 21:22:22 2024 -0400

    xlabs Flux controlnet implementation. (#4260)

    * xlabs Flux controlnet.

    * Fix not working on old python.

    * Remove comment.

commit b8ffb2937f9daeaead6e9225f8f5d1dde6afc577
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 15:03:33 2024 -0400

    Memory tweaks.

commit ce37c11164ebc452592f3b0e67fb63c8c16374c0
Author: Vladimir Semyonov <20096510+vovsemenv@users.noreply.github.com>
Date:   Mon Aug 12 19:32:34 2024 +0300

    add DS_Store to gitignore (#4324)

commit b5c3906b38fdb493b22d113c4e191ef17801652f
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Mon Aug 12 09:32:16 2024 -0700

    Automatically link the Comfy CI page on PRs (#4326)

    also use_prior_commit so it doesn't get a janked merge commit instead of the real one

commit 5d43e75e5b94c203075e315e4516fee47c4d6950
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 12:27:54 2024 -0400

    Fix some issues with the model sometimes not getting patched.

commit 517f4a94e4a5c45edc64594d70585ec8aeb787e0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 11:38:06 2024 -0400

    Fix some lora loading slowdowns.

commit 52a471c5c7e4af28423b3c690cbb6e1238ea9d60
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 10:35:06 2024 -0400

    Change name of log.

commit ad76574cb8b28ee498f3dceafc9d00b56f12f992
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 00:23:29 2024 -0400

    Fix some potential issues with the previous commits.

commit 9acfe4df41b3b0ad8c600fc2d70a3af5c82cf4a4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 00:06:01 2024 -0400

    Support loading directly to vram with CLIPLoader node.

commit 9829b013eaef91a29e47128d1addf98fb0f1ea48
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 12 00:00:17 2024 -0400

    Fix mistake in last commit.

commit 5c69cde0374207c1d4b6ec1ec033cfd5592d6de0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 11 23:50:01 2024 -0400

    Load TE model straight to vram if certain conditions are met.

commit e9589d6d9246d1ce5a810be1507ead39fff50e04
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 11 08:50:34 2024 -0400

    Add a way to set model dtype and ops from load_checkpoint_guess_config.

commit 0d82a798a5c9ec3c70617c3445ba8144833ac444
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 11 08:37:35 2024 -0400

    Remove the ckpt_path from load_state_dict_guess_config.

commit 925fff26fd6e7e313751a9873964d9cbfde70e6b
Author: ljleb <lebel.louisjacob@gmail.com>
Date:   Sun Aug 11 08:36:52 2024 -0400

    alternative to `load_checkpoint_guess_config` that accepts a loaded state dict (#4249)

    * make alternative fn

    * add back ckpt path as 2nd argument?

commit 75b9b55b221fc95f7137a91e2349e45693e342b8
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 10 21:28:24 2024 -0400

    Fix issues with #4302 and support loading diffusers format flux.

commit 1765f1c60c862f8fc9f3346384a37c5d6d13d35b
Author: Jaret Burkett <jaretburkett@gmail.com>
Date:   Sat Aug 10 19:26:41 2024 -0600

    FLUX: Added full diffusers mapping for FLUX.1 schnell and dev. Adds full LoRA support from diffusers LoRAs. (#4302)

commit 1de69fe4d56cfb0c1dbf5a14944c60079ba09d23
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 10 15:29:36 2024 -0400

    Fix some issues with inference slowing down.

commit ae197f651b07389bfb778b690575043205a9a5c5
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 10 07:36:27 2024 -0400

    Speed up hunyuan dit inference a bit.

commit 1b5b8ca81a5bc141ed40a94919fa5b6c81d8babb
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 21:45:21 2024 -0400

    Fix regression.

commit 6678d5cf65894e6bd46614a4e03c8036894d9a6a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 14:02:38 2024 -0400

    Fix regression.

commit e172564eeaa3e1d61319f94b31c82b1c98fe1dcb
Author: TTPlanetPig <152850462+TTPlanetPig@users.noreply.github.com>
Date:   Sat Aug 10 01:40:05 2024 +0800

    Update controlnet.py to fix the default controlnet weight as constant (#4285)

commit a3cc3267489bfd44e5a994d98d52481c0cc80730
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 12:16:25 2024 -0400

    Better fix for lowvram issue.

commit 86a97e91fcbbcd6dc06e24540da39b2838801814
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 12:08:58 2024 -0400

    Fix controlnet regression.

commit 5acdadc9f3a62eabf363f96f12797d45343635ca
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 03:58:28 2024 -0400

    Fix issue with some lowvram weights.

commit 55ad9d5f8c8b906102313e894e471d2f5e833577
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 03:36:40 2024 -0400

    Fix regression.

commit a9f04edc5887095f312bc16d9a6617e08c764678
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 03:21:10 2024 -0400

    Implement text encoder part of HunyuanDiT loras.

commit a475ec2300abb4eab845510ad0da596114174274
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 02:35:19 2024 -0400

    Cleanup HunyuanDit controlnets.

    Use the ControlNetApply SD3 and HunyuanDiT node.

commit 06eb9fb426706550fe46aa4e36e2abcba9af241d
Author: 来新璐 <35400185+CrazyBoyM@users.noreply.github.com>
Date:   Fri Aug 9 14:59:24 2024 +0800

    feat: add support for HunYuanDit ControlNet (#4245)

    * add support for HunYuanDit ControlNet

    * fix hunyuandit controlnet

    * fix typo in hunyuandit controlnet

    * fix typo in hunyuandit controlnet

    * fix code format style

    * add control_weight support for HunyuanDit Controlnet

    * use control_weights in HunyuanDit Controlnet

    * fix typo

commit 413322645e713bdda69836620a97d4c9ca66b230
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 22:09:29 2024 -0400

    Raw torch is faster than einops?

commit 11200de9700aed41011ed865a164f43d27b62d82
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 20:07:09 2024 -0400

    Cleaner code.

commit 037c38eb0fff2b18344faec3323c2703eadf2ec7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 17:28:35 2024 -0400

    Try to improve inference speed on some machines.

commit 1e11d2d1f5535bc5bb50ce2843213203da8bca7d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 17:05:16 2024 -0400

    Better prints.

commit 65ea6be38f6365dcdc057e4cf60ae9e601121f6e
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 14:20:48 2024 -0700

    PullRequest CI Run: use pull_request_target to allow the CI Dashboard to work (#4277)

    '_target' allows secrets to pass through, and we're just using the secret that allows uploading to the dashboard and are manually vetting PRs before running this workflow anyway

commit 5df6f57b5d2c9e599aed333abb62e70d81f19a1a
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 13:30:59 2024 -0700

    minor fix on copypasta action name (#4276)

    my bad sorry

commit 6588bfdef99919f249668a4cd171688e056c0efc
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 13:24:49 2024 -0700

    add GitHub workflow for CI tests of PRs (#4275)

    When the 'Run-CI-Test' label is added to a PR, it will be tested by the CI, on a small matrix of stable versions.

commit 50ed2879eff33f5adaf9ce86b806536df0b4f818
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 12:40:07 2024 -0700

    Add full CI test matrix GitHub Workflow (#4274)

    automatically runs a matrix of full GPU-enabled tests on all new commits to the ComfyUI master branch

commit 66d42332101107d66a9dc8e18d781ec49991cce8
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 15:16:51 2024 -0400

    Fix.

commit 591010b7efc317f994a647d2e805f386e583b17c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 14:45:52 2024 -0400

    Support diffusers text attention flux loras.

commit 08f92d55e934c19f753b47ec4c51760c68bbe2b7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 03:27:37 2024 -0400

    Partial model shift support.

commit 8115d8cce97a3edaaad8b08b45ab37c6782e1cb4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 15:08:39 2024 -0400

    Add Flux fp16 support hack.

commit 6969fc9ba457067dbf61d478256c7dbe9adc4f61
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 15:00:06 2024 -0400

    Make supported_dtypes a priority list.

commit cb7c4b4be3b3ed0602c5d68d06a14c5d8d4f6f45
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 14:30:54 2024 -0400

    Workaround for lora OOM on lowvram mode.

commit 1208863eca8fe1b88330652eb4fee891ee3653b2
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 13:49:31 2024 -0400

    Fix "Comfy" lora keys.

    They are in this format now:
    diffusion_model.full.model.key.name.lora_up.weight

commit e1c528196ef77e8c69b67d96dc909b8ccb776007
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 13:30:45 2024 -0400

    Fix bundled embed.

commit 17030fd4c03331545698c8f1e299a17e1b93b8c6
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 13:18:32 2024 -0400

    Support for "Comfy" lora format.

    The keys are just: model.full.model.key.name.lora_up.weight

    It is supported by all models that comfyui supports.

    Now people can just convert loras to this format instead of having to
    ask me to implement them.

commit c19dcd362f5e32ce4800e600b91d09c89b19ab4f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 12:59:28 2024 -0400

    Controlnet code refactor.

commit 1c08bf35b49879115dedd8ec6bc92d9e8d8fd871
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 03:45:25 2024 -0400

    Support format for embeddings bundled in loras.

commit 2a02546e2085487d34920e5b5c9b367918531f32
Author: PhilWun <philipp.wundrack@live.de>
Date:   Wed Aug 7 03:59:34 2024 +0200

    Add type hints to folder_paths.py (#4191)

    * add type hints to folder_paths.py

    * replace deprecated standard collections type hints

    * fix type error when using Python 3.8

commit b334605a6631c12bbe7b3aff6d77526f47acdf42
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 6 13:27:48 2024 -0400

    Fix OOMs happening in some cases.

    A cloned model patcher sometimes reported a model was loaded on a device
    when it wasn't.

commit de17a9755ecb8419cb167fe8504791df5b07246f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 6 03:30:28 2024 -0400

    Unload all models if there's an OOM error.

commit c14ac98fedd0176686d285d384abec5e4c0140c2
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 6 03:22:39 2024 -0400

    Unload models and load them back in lowvram mode when there is no free vram.

commit 2894511893b0ad27151b615c7488380bc0aa73f8
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Mon Aug 5 22:46:09 2024 -0700

    Clone taesd with depth of 1 to reduce download size. (#4232)

commit f3bc40223a3bd58db51dc44da8bafe2aba8d6bc3
Author: Silver <65376327+silveroxides@users.noreply.github.com>
Date:   Tue Aug 6 07:45:24 2024 +0200

    Add format metadata to CLIP save to make compatible with diffusers safetensors loading (#4233)

commit 841e74ac402e602471af48594d387496b0f76f4f
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Tue Aug 6 01:27:28 2024 -0400

    Change browser test CI python to 3.8 (#4234)

commit 2d75df45e6eb354acb800707bbb6b91f184d4ede
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 5 21:58:28 2024 -0400

    Flux tweak memory usage.

commit 1abc9c8703abbb1f4d666cc2d6be34c9e13480c3
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Mon Aug 5 17:07:16 2024 -0700

    Stable release uses cached dependencies (#4231)

    * Release stable based on existing tag.

    * Update default cuda to 12.1.

commit 8edbcf520900112d4e11f510ba33949503b58f51
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 5 16:24:04 2024 -0400

    Improve performance on some lowend GPUs.

commit e545a636baae052000abb1250a69e1cac32b2bae
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 5 12:31:12 2024 -0400

    This probably doesn't work anymore.

commit 33e5203a2a7bc90dc4c6577ed645456abc530155
Author: bymyself <abolkonsky.rem@gmail.com>
Date:   Mon Aug 5 09:25:28 2024 -0700

    Don't cache index.html (#4211)

commit a178e25912b01abf436eba1cfaab316ba02d272d
Author: a-One-Fan <100067309+a-One-Fan@users.noreply.github.com>
Date:   Mon Aug 5 08:26:20 2024 +0300

    Fix Flux FP64 math on XPU (#4210)

commit 78e133d0415784924cd2674e2ee48f3eeca8a2aa
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 21:59:42 2024 -0400

    Support simple diffusers Flux loras.

commit 7afa985fbafc15b2b603a4428917f4a600560699
Author: Silver <65376327+silveroxides@users.noreply.github.com>
Date:   Sun Aug 4 23:10:02 2024 +0200

    Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' (#4200)

commit ddb6a9f47cd2e680aa821f320d52e909f0a03fc3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 15:59:02 2024 -0400

    Set the step in EmptySD3LatentImage to 16.

    These models work better when the res is a multiple of 16.

commit 3b71f84b5051905be8f3311abeb39d725743d15b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 15:45:43 2024 -0400

    ONNX tracing fixes.

commit 0a6b0081176c6233015ec00d004c534c088ddcb0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 10:03:33 2024 -0400

    Fix issue with some custom nodes.

commit 56f3c660bf79769bbfa003c0e4152dfb50feadc5
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 04:06:00 2024 -0400

    ModelSamplingFlux now takes a resolution and adjusts the shift with it.

    If you want to sample Flux dev exactly how the reference code does, use
    the same resolution as your image in this node.

commit f7a5107784cded39f92a4bb7553507575e78edbe
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 16:55:38 2024 -0400

    Fix crash.

commit 91be9c2867ef9ae5b255f038665649536c1e1b8b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 16:34:27 2024 -0400

    Tweak lowvram memory formula.

commit 03c5018c98b9dd2654dc4942a0978ac53e755900
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 15:14:07 2024 -0400

    Lower lowvram memory to 1/3 of free memory.

commit 2ba5cc8b867bc1aabe59fdaf0a8489e65012d603
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 15:06:40 2024 -0400

    Fix some issues.

commit 1e68002b87a3fb70afc7030c1b4dc6a31fea965e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 14:50:20 2024 -0400

    Cap lowvram to half of free memory.

commit ba9095e5bd7914c2456b2dfe939c06180e97b1ad
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 13:45:19 2024 -0400

    Automatically use fp8 for diffusion model weights if:

    Checkpoint contains weights in fp8.

    There isn't enough memory to load the diffusion model in GPU vram.

commit f123328b826dcd122d307b75288f89ea301fa25b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 12:39:33 2024 -0400

    Load T5 in fp8 if it's in fp8 in the Flux checkpoint.

commit 63a7e8edba76b30e3c01190345126ae75c94777d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 11:53:30 2024 -0400

    More aggressive batch splitting.

commit 0eea47d58086d31695f3e8e9d7ef36c6a6986faa
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 03:54:38 2024 -0400

    Add ModelSamplingFlux to experiment with the shift value.

    Default shift on Flux Schnell is 0.0

commit 7cd0cdfce601a52c52252ace517b9f52f6237fdb
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 23:20:30 2024 -0400

    Add advanced model merge node for Flux model.

commit ea03c9dcd2e2b223c0eb25f24be6b3e1995e2c44
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 18:08:21 2024 -0400

    Better per model memory usage estimations.

commit 3a9ee995cfbb9425227df9aff534dea12c1af532
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 17:34:30 2024 -0400

    Tweak regular SD memory formula.

commit 47da42d9283815a58636bd6b42c0434f70b24c9c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 17:02:35 2024 -0400

    Better Flux vram estimation.

commit 17bbd83176268c76a8597bb3a88768d325536651
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 13:14:28 2024 -0400

    Fix bug loading flac workflow when it contains = character.

commit bfb52de866ce659c281a6c243c8485109e4d8b8b
Author: fgdfgfthgr-fox <60460773+fgdfgfthgr-fox@users.noreply.github.com>
Date:   Sat Aug 3 02:29:03 2024 +1200

    Lower SAG scale step for finer control (#4158)

    * Lower SAG step for finer control

    Since the introduction of cfg++, which uses very low cfg values, a step of 0.1 in SAG might be too high for finer control. Even a SAG of 0.1 can be too high when cfg is only 0.6, so I changed the step to 0.01.

    * Lower PAG step as well.

    * Update nodes_sag.py

commit eca962c6dae395cab1258456529030880c188734
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 10:24:53 2024 -0400

    Add FluxGuidance node.

    This lets you adjust the guidance on the dev model, which is a parameter
    passed to the diffusion model.

commit c1696cd1b5f572d7694dde223764861255d2398b
Author: Jairo Correa <jn.j41r0@gmail.com>
Date:   Fri Aug 2 10:34:12 2024 -0300

    Add missing import (#4174)

commit 369f459b2058f793c1230472f04edc9fd9471b46
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 22:19:53 2024 -0400

    Fix no longer working on old pytorch.

commit ce9ac2fe0581288d4a24869dae2e04a3c2b67061
Author: Alexander Brown <DrJKL0424@gmail.com>
Date:   Thu Aug 1 18:40:56 2024 -0700

    Fix clip_g/clip_l mixup (#4168)

commit e638f2858a93ea3b94edc2938b213ebc1fcf4e20
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 21:03:26 2024 -0400

    Hack to make all resolutions work on Flux models.

commit a531001cc772305364a319a760fcd5034e28411a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 18:53:25 2024 -0400

    Add CLIPTextEncodeFlux.

commit d420bc792af0b61a6ef7410c65fa2d4dcc646c56
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 17:49:46 2024 -0400

    Tweak the memory usage formulas for Flux and SD.

commit d965474aaae2f1b461e0925a7e1519b740393994
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 16:39:59 2024 -0400

    Make ComfyUI split batches a higher priority than weight offload.

commit 1c61361fd2478068e69816e78a0689db6664b65d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 16:28:11 2024 -0400

    Fast preview support for Flux.

commit a6decf1e620907347c9c5d8c815172f349b19c21
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 16:18:14 2024 -0400

    Fix bfloat16 potentially not being enabled on mps.

commit 48eb1399c02bdae7e14b2208c448b69b382d0090
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 13:41:27 2024 -0400

    Try to fix mac issue.

commit b4f6ebb2e88a43876caa1d0b2b8eb1e99ac57adb
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 13:33:30 2024 -0400

    Rename UNETLoader node to "Load Diffusion Model".

commit d7430a1651a300e8230867ce3e6d86cc0101facc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 13:28:41 2024 -0400

    Add a way to load the diffusion model in fp8 with UNETLoader node.

commit f2b80f95d2a3384609c1ffdec08457c4724d1d20
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 12:55:28 2024 -0400

    Better Mac support on flux model.

commit 1aa9cf3292499303260533780d25bbff99e076c8
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 12:11:57 2024 -0400

    Make lowvram more aggressive on low memory machines.

commit 2f88d19ef311dc078a89ef39c1f9bc9265d6435d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 11:48:19 2024 -0400

    Add link to Flux examples to readme.

commit eb96c3bd82ba7eda5cac343ea2465baf6715b6d0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 11:32:58 2024 -0400

    Fix .sft file loading (they are safetensors files).

commit 5f98de7697de9aed79561a3f764c0e7d9766f8f1
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 11:05:56 2024 -0400

    Load flux t5 in fp8 if weights are in fp8.

commit 8d34211a7abd06d659279cfa2d9d3d0cada75f58
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 09:57:01 2024 -0400

    Fix old python versions no longer working.

commit 1589b58d3e29e44623a1f3f595917b98f2301c3e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 04:03:59 2024 -0400

    Basic Flux Schnell and Flux Dev model implementation.

commit 7ad574bffd31b80c7c89c828c87e5b0557a29b99
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 09:42:17 2024 -0400

    Mac supports bf16; just make sure you are using the latest pytorch.

commit e2382b6adb70c65416f3e90a168cbbc5ffe491bd
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 03:58:58 2024 -0400

    Make lowvram less aggressive when there are large amounts of free memory.

commit c24f897352238f040e162a81d253c290635d44fd
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 02:00:19 2024 -0400

    Fix to get fp8 working on T5 base.

commit a5991a7aa6f5dae3af820151abe15cb63ac86ac8
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 01:34:57 2024 -0400

    Fix hunyuan dit text encoder weights always being in fp32.

commit 2c038ccef0f819ee8693a94dd880f05a4eb3808c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 01:32:35 2024 -0400

    Lower CLIP memory usage by a bit.

commit b85216a3c0d4fc443c87cd2af362c1f4d3be50ce
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 00:52:34 2024 -0400

    Lower T5 memory usage by a few hundred MB.

commit 82cae45d44df3bd2d042c32d9b56cd3056bd0f7f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 30 14:20:28 2024 -0400

    Fix potential issue with non clip text embeddings.

commit 25853d0be8be6622195afaba2bc92e49e518bdcc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 30 05:03:20 2024 -0400

    Use common function for casting weights to input.

commit 79040635dace9466a9f3fe20b5604b8e8e79f44f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 30 05:01:34 2024 -0400

    Remove unnecessary code.

commit 66d35c07ce44b07011314ad7a28b2bdbcbb4e4cc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 29 20:27:40 2024 -0400

    Improve artifacts on hydit, auraflow and SD3 on specific resolutions.

    This breaks seeds for resolutions that are not a multiple of 16 in pixel
    resolution by using circular padding instead of reflection padding, but it
    should lower the amount of artifacts when doing img2img at those
    resolutions.

commit c75b50607b4ab78ef9c7c5c9e0c8672146ead91b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 29 11:15:37 2024 -0400

    Less confusing exception if pillow() function fails.

commit 4ba7fa0244badcf901f2b8ddbfb8539c6398672f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Jul 28 01:19:20 2024 -0400

    Refactor: Move sd2_clip.py to text_encoders folder.

commit ab76abc7676ea30726661b85c4fb85f84a3ff5aa
Author: bymyself <abolkonsky.rem@gmail.com>
Date:   Sat Jul 27 20:34:19 2024 -0700

    Active workflow use primary fg color (#4090)

commit 93000580261971971ebb12aff03f6bc3ce30a9f2
Author: Silver <65376327+silveroxides@users.noreply.github.com>
Date:   Sat Jul 27 22:19:50 2024 +0200

    Add dpmpp_2s_ancestral as custom sampler (#4101)

    Adding dpmpp_2s_ancestral as custom sampler node to enable its use with eta and s_noise when using custom sampling.

commit f82d09c9b40fd9ebbc080bc5662ddb39787b1ec9
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 04:48:19 2024 -0400

    Update packaging workflow.

commit e6829e7ac5bef5db8099005b5b038c49e173e87c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 04:41:46 2024 -0400

    Add a way to set custom dependencies in the release workflow.

commit 07f6a1a685d16a1b7e2c4b05d94670d2543ec29e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 03:15:22 2024 -0400

    Handle case in the updater when master branch is not in local repo.

commit e746965c5051660ffa10330c00461094270f0207
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 01:20:18 2024 -0400

    Update nightly package workflow.

commit 45a2842d7fe2ed92c3402c0a17b55bd46366b407
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 14:52:00 2024 -0400

    Set stable releases as a prerelease initially.

    This should give time to test the standalone package before making it live.

commit 17b41f622ef64ba9a9ce38da0f1038e718508dc6
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Fri Jul 26 11:37:40 2024 -0700

    Change windows standalone URL to stable release. (#4065)

commit cf4418b806af5f7f67e3ce5b6ee386360b410bbb
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 13:07:39 2024 -0400

    Don't treat Bert model like CLIP.

    Bert can accept up to 512 tokens so any prompt with more than 77 should
    just be passed to it as is instead of splitting it up like CLIP.

commit 6225a7827c17bc237167f8500029d282f4c91950
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 13:04:48 2024 -0400

    Add CLIPTextEncodeHunyuanDiT.

    Useful for testing what each text encoder does.

commit b6779d8df310bcac115d9949fcc6c7502b4c9551
Author: filtered <176114999+webfiltered@users.noreply.github.com>
Date:   Sat Jul 27 02:25:42 2024 +1000

    Fix undo incorrectly undoing text input (#4114)

    Fixes an issue where under certain conditions, the ComfyUI custom undo / redo functions would not run when intended to.

    When trying to undo an action like deleting several nodes, instead the native browser undo runs - e.g. a textarea gets focus and the last typed text is undone.  Clicking outside the text area and typing again just keeps doing the same thing.

commit 8328a2d8cdabd0e42b856dd0193ebc24ea41c359
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 12:11:32 2024 -0400

    Let hunyuan dit work with all prompt lengths.

commit afe732bef960d753661acb5b886ba42573dd3720
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 11:52:58 2024 -0400

    Hunyuan dit can now accept longer prompts.

commit a9ac56fc0db5777de0edf2fe4b8ed628ccab1293
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 04:32:33 2024 -0400

    Own BertModel implementation that works with lowvram.

commit 25b51b1a8b6be9c3dadd1d755c78394009c4d1d4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 25 22:42:54 2024 -0400

    Hunyuan DiT lora support.

commit 61a2b00bc2f06dec3b570dfde2eb43890613d054
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 25 19:06:43 2024 -0400

    Add HunyuanDiT support to readme.

commit a5f4292f9f41ab78759e6e2de490fe6240f5d9ea
Author: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Date:   Thu Jul 25 18:21:08 2024 -0400

    Basic hunyuan dit implementation. (#4102)

    * Let tokenizers return weights to be stored in the saved checkpoint.

    * Basic hunyuan dit implementation.

    * Fix some resolutions not working.

    * Support hydit checkpoint save.

    * Init with right dtype.

    * Switch to optimized attention in pooler.

    * Fix black images on hunyuan dit.

commit f87810cd3ed2cc3922811422181d0572f98b103d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 25 10:52:09 2024 -0400

    Let tokenizers return weights to be stored in the saved checkpoint.

commit 10c919f4c77b3615f0efa9014e8b77f294c23a2d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 16:43:53 2024 -0400

    Make it possible to load tokenizer data from checkpoints.

commit ce80e69fb89f5f8e48273977e94b7a3b0421f6e6
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 13:50:34 2024 -0400

    Avoid loading the dll when it's not necessary.

commit 19944ad252106e46a40e0b952c86c1cebc8486ab
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 12:49:29 2024 -0400

    Add code to fix issues with new pytorch version on the standalone.

commit 10b43ceea52d86395d10a7029eda2822fd65dfd1
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 01:12:59 2024 -0400

    Remove duplicate code.

commit 0a4c49c57ca792535174d27aea87449257527b2f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 23 15:35:28 2024 -0400

    Support MT5.

commit 88ed89303463ab030654ef87f7f54d72ee3979bb
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 23 14:17:42 2024 -0400

    Allow SPieceTokenizer to load model from a byte string.

commit 334ba48cea2961994e92c2fb25de9417b19897ed
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 23 14:13:32 2024 -0400

    More generic unet prefix detection code.

commit 14764aa2e2e2b282c4a4dffbfab4c01d3e46e8a7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 22 12:21:45 2024 -0400

    Rename LLAMATokenizer to SPieceTokenizer.

commit b2c995f623fd7169db87dfd6d356e535d0bb99ce
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 22 11:30:38 2024 -0400

    "auto" type is only relevant to the SetUnionControlNetType node.

commit 4151fbfa8ab705d0a7e632c7d7c23ea5c436a60b
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Mon Jul 22 11:27:32 2024 -0400

    Add error message on union controlnet (#4081)

commit 6045ed31f898e278e4693f2a4e210393fc9153d0
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Sun Jul 21 21:15:01 2024 -0400

    Suppress frontend exception on unhandled message type (#4078)

    * Suppress frontend exception on unhandled message type

    * nit

commit f836e69346844c65dcf2418346db9a9a9b32a445
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Jul 21 16:16:45 2024 -0400

    Fix bug with SaveAudio node with --gpu-only

commit 5b69cfe7c343e8672fc350ec35e17f0d046297ca
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Sun Jul 21 15:29:10 2024 -0400

    Add timestamp to execution messages (#4076)

    * Add timestamp to execution messages

    * Add execution_end message

    * Rename to execution_success

commit 95fa9545f167ccf4010849c70045a67a8800aa31
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 20 12:27:42 2024 -0400

    Only append zero to noise schedule if last sigma isn't zero.

commit 11b74147…
@LiChangyi

@guill Good job. I see that we now select one node to execute at a time based on the get_ready_nodes results. Why not execute them all together? During a run, some nodes are CPU-bound, some are GPU-bound, and some even make network requests. If parallel execution were possible, graphs would run enormously faster.

@guill
Contributor Author

guill commented Sep 13, 2024

@LiChangyi You might be interested in this discussion: #3683

Parallel execution is definitely something I think we all want to see, but there are some challenges in accomplishing it -- especially since so many users are already running close to their machine's RAM/VRAM limits.

@mijuku233

mijuku233 commented Sep 13, 2024

@guill

Nodes in loops don't update state.

As shown below, control_after_generate is set to decrement, but after the first run the seed value no longer decreases, and the image preview does not update.

[gif: AI_decrement]

I tried to write a node that "outputs different values in sequence", which depends on loops.
But a node inside a loop does not update its state; it can only output the same value, so I can't realize my idea.

My Workflow: loop.json

@Amorano

Amorano commented Sep 13, 2024

@guill

Nodes in loops don't update state.

I tried to write a node that "outputs different values in sequence", which depends on loops. But a node inside a loop does not update its state; it can only output the same value, so I can't realize my idea.

My Workflow: loop.json

You should use the examples from his repository. You need to accumulate the result in a list. You cannot preview the execution as it runs like that.

https://github.com/BadCafeCode/execution-inversion-demo-comfyui

hth

@mijuku233

@guill

Nodes in loops don't update state.

I tried to write a node that "outputs different values in sequence", which depends on loops. But a node inside a loop does not update its state; it can only output the same value, so I can't realize my idea.
My Workflow: loop.json

You should use the examples from his repository. You need to accumulate the result in a list. You cannot preview the execution as it runs like that.

https://github.com/BadCafeCode/execution-inversion-demo-comfyui

hth

I was using his example node to demonstrate the loop flaw.
This flaw prevented the node I was writing from working as expected.

@guill
Contributor Author

guill commented Sep 14, 2024

@mijuku233 "control_after_generate" means "update after the workflow executes", not "update after the node executes". If you want the seed to decrement with each iteration of the loop, you'll want to convert it to an input and use an Int Math operation on it (either adding remaining to the initial seed or subtracting 1 from the seed and passing it to the next iteration of the loop).

@mijuku233

mijuku233 commented Sep 14, 2024

@guill
I just used "control_after_generate" to show you the problem.

My node code contains simple logic that increments a count after each run so that I can use the count to switch between different values.
I connected my node inside a loop node, similar to the KSampler in the gif above, but the returned value does not increment with each loop iteration.

    class MyCounter:  # illustrative class name; the original snippet omitted the class line
        def __init__(self):
            self.count = 0

        def doit(self):
            self.count += 1  # relies on instance state persisting between runs
            return (self.count,)

@guill
Contributor Author

guill commented Sep 14, 2024

You generally shouldn't rely on nodes being stateful. Data stored on node objects should be limited to cached information that can be recalculated if necessary. If you rely on statefulness in the way you're doing now, results will change depending on what workflow you happened to run previously.

As to why you're seeing this within a single execution: nodes aren't actually executed multiple times in a loop. Iterations after the first are entirely different instances that map to the original node in the UI.
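
To make that concrete, here is a minimal sketch (names illustrative) of the counter above rewritten so the count is threaded through the graph as an input and output instead of stored on the object:

    class CounterStep:
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {"count": ("INT", {"default": 0})}}

        RETURN_TYPES = ("INT",)
        FUNCTION = "doit"
        CATEGORY = "example"  # illustrative category

        def doit(self, count):
            # No instance state: every expanded instance computes the same
            # function of its inputs.
            return (count + 1,)

Feed the returned count into the next iteration's input and the value will increment with the loop, even though each iteration runs on a different node instance.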

@mijuku233

The unique_id of new instances generated in a loop is abnormal.

[screenshot: uid]

@Amorano

Amorano commented Sep 18, 2024

The unique_id of new instances generated in a loop is abnormal.

If you are seeing multiple problems, I would suggest you make multiple NEW tickets.

This PR is already merged, and this is just adding actual problems to a dead-end ticket.

return False
selfset = set(self.split(','))
otherset = set(other.split(','))
return not selfset.issubset(otherset)
@acorderob

acorderob commented Oct 1, 2024

This seems to be backwards, because selfset will have more elements than otherset if self has multiple types and we are comparing with only one type.

Also, shouldn't all this be available to custom nodes instead of being only for tests?
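
A quick plain-Python illustration of the first point (hypothetical type strings):

    selfset = set("STRING,INT".split(","))  # self declares two types
    otherset = set("INT".split(","))        # comparing against a single type

    print(selfset.issubset(otherset))       # False: {"STRING", "INT"} is not a subset of {"INT"}
    print(not selfset.issubset(otherset))   # True, i.e. reported as a mismatch
    # If the intent is to match when the two sets share a type, checking
    # bool(selfset & otherset) or otherset.issubset(selfset) may be what
    # was meant instead.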

Labels
Run-CI-Test This is an administrative label to tell the CI to run full automatic testing on this PR now.