
Make default hash lib configurable without code changes via CLI argument #3947

Merged

Conversation

@teward (Contributor) commented Jul 4, 2024

This is an extension to #3465 that adds a CLI argument to specify the comparison hashing function used by compare_image_hash.

While it uses an eval(), the eval is safe because the CLI argument only allows one of four specific strings to be selected, preventing unsafe input. The alternative would be to write a parser that maps the string to the hash function, but an inline eval is safe for this case.

Since the fix for #3465 provides no way to specify the hash function, this makes it configurable, which allows the use of other common hashing functions (known to be part of hashlib).

Uses an argument added in cli_args to specify the type of hashing to default to for duplicate hash checking. Uses an `eval()` to identify the specific hashlib class to use, but this operates safely because the argument parser restricts the value to a fixed set of choices, so there is no unsafe input.
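As a rough sketch of why the restricted argument keeps the eval approach safe (the flag name follows the first commit in this PR; the exact set of choices and the cli_args wiring are assumptions):

```python
import argparse
import hashlib

parser = argparse.ArgumentParser()
# Because argparse rejects anything outside `choices`, the value interpolated
# into eval() below can only ever be one of these four known hashlib names.
# The list of choices here is an assumption based on the discussion.
parser.add_argument("--duplicate-check-hash-function", type=str, default="sha256",
                    choices=["md5", "sha1", "sha256", "sha512"])
args = parser.parse_args()

hash_fn = eval(f"hashlib.{args.duplicate_check_hash_function}")  # e.g. hashlib.sha256
```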
@shawnington (Contributor) left a comment
Looks good to me, although maybe it can be abstracted into a helper function that returns the hash function so that it is accessible to more functions:

```python
def hasher():
    return eval(f"hashlib.{args.duplicate_check_hash_function}")
```

somewhere like node_helpers, or another place where it can be called in a function like:

```python
a = node_helpers.hasher()
b = node_helpers.hasher()
```

@comfyanonymous (Owner)
Is there a reason why you would want to use other hash functions?

@teward (Contributor, Author) commented Jul 4, 2024

> Is there a reason why you would want to use other hash functions?

In some cases, people have CPU or resource constraints that make using slower hash functions unfeasible. This was mentioned in #3465 (comment), which stated "It can be swapped out for whatever hash function you want", but that requires a local code change that will break updates/pulls to server.py in local trees/branches.

By making it configurable, this lets people who want to use weaker but faster hash functions do so.

Personally, I'd prefer using SHA512 sums and only SHA512 sums, but not everyone has a CPU powerful enough to rapidly compute those hashes.

@shawnington (Contributor)

> Is there a reason why you would want to use other hash functions?

I can't think of one unless there are issues with a specific function on specific platforms.

@shawnington (Contributor)

> Is there a reason why you would want to use other hash functions?
>
> In some cases, people have CPU or resource constraints that make using slower hash functions unfeasible. This was mentioned in #3465 (comment), which stated "It can be swapped out for whatever hash function you want", but that requires a local code change that will break updates/pulls to server.py in local trees/branches.
>
> By making it configurable, this lets people who want to use weaker but faster hash functions do so.
>
> Personally, I'd prefer using SHA512 sums and only SHA512 sums, but not everyone has a CPU powerful enough to rapidly compute those hashes.

In the case of this function, you are only hashing images, and only when you load a new image, and only if there is a name collision. Even on a very slow system where you run into an extreme edge case of having, say, 100 images with the same name, hashing completes very fast.

I personally have no opposition to having the ability to select a hash function through the CLI, though, as this is not the only node that uses hashlib.

@teward (Contributor, Author) commented Jul 4, 2024

FYI, I just took @shawnington's recommendation and moved hasher() into node_helpers so it can be used elsewhere. The argument is also renamed to default-hashing-function so the node_helpers helper can serve as the default hashing-function selection for other nodes as well. Many other nodes, including custom nodes, can then use it to default to a specific hashing function based on the CLI selection, which in turn defaults to sha256 because of the argument configuration.
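A hedged sketch of how a node might call the helper once it lives in node_helpers; the function name and file handling below are hypothetical illustrations, not the actual server.py code:

```python
import node_helpers

def image_matches_existing(existing_path: str, uploaded_bytes: bytes) -> bool:
    # Hypothetical example: compare an uploaded image against a file on disk
    # using whichever hash function was chosen via --default-hashing-function
    # (sha256 unless the user overrides it).
    hash_fn = node_helpers.hasher()
    with open(existing_path, "rb") as f:
        existing_digest = hash_fn(f.read()).hexdigest()
    uploaded_digest = hash_fn(uploaded_bytes).hexdigest()
    return existing_digest == uploaded_digest
```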

Working from a laptop without my IDE, so feel free to squash commits at merge time, 'cause I'm stuck using the GitHub editor.

@teward changed the title from "compare_image_hash: Make hash lib configurable without code changes via CLI argument" to "Make default hash lib configurable without code changes via CLI argument" Jul 4, 2024
@mcmonkey4eva added the Good PR label ("This PR looks good to go, it needs comfy's final review.") Jul 8, 2024
node_helpers.py: review comment (outdated, resolved)
@mcmonkey4eva added the Feature label ("A new feature to add to ComfyUI.") and removed the Good PR label Jul 16, 2024
Uses a safer handling method than `eval` to evaluate default hashing function.
@teward (Contributor, Author) commented Jul 16, 2024

@mcmonkey4eva updates made; this should address your concern about eval. My 'switch' is a dict of key-value pairs instead of a long chain of if statements. That's just me though. :)
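A minimal sketch of the dict-based selection being described; the function and attribute names follow the traceback quoted below (hashfuncs, args.default_hashing_function), while the import path and the exact set of supported choices are assumptions:

```python
import hashlib

from comfy.cli_args import args  # assumed import path for ComfyUI's parsed CLI arguments

def hasher():
    # Map the CLI choice to the matching hashlib constructor instead of eval()'ing a string.
    hashfuncs = {
        "md5": hashlib.md5,
        "sha1": hashlib.sha1,
        "sha256": hashlib.sha256,
        "sha512": hashlib.sha512,
    }
    return hashfuncs[args.default_hashing_function]
```

Since hasher() returns the constructor itself, a caller computes a digest with, for example, hasher()(data).hexdigest().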

node_helpers.py: review comment (outdated, resolved)
@mcmonkey4eva added the Good PR label ("This PR looks good to go, it needs comfy's final review.") Jul 16, 2024
@comfyanonymous (Owner)
  File "/mnt/fast_ssd/stable_diff/comfy_ui/node_helpers.py", line 37
    return hashfuncs[args.default_hashing_function]
                                                   ^
IndentationError: unindent does not match any outer indentation level

I get an error.

@teward (Contributor, Author) commented Jul 16, 2024

Interesting. Let me check the indentation.

Somehow when I hit save I didn't notice I missed a space to make indentation work proper.  Oops!
@teward (Contributor, Author) commented Jul 16, 2024

> Interesting. Let me check the indentation.

Fixed! Looks like when I was using the GitHub editor I didn't notice a missing space, which threw the indentation out of line.

@shawnington (Contributor) left a comment

Looks good, I like the dict much better than using eval.

@comfyanonymous merged commit c5a48b1 into comfyanonymous:master Jul 16, 2024
2 checks passed
@teward deleted the configurable-comparison-hashlib branch Jul 16, 2024 22:29
pmason314 pushed a commit to pmason314/ComfyUI that referenced this pull request Jul 26, 2024
Make default hash lib configurable without code changes via CLI argument (comfyanonymous#3947)

* cli_args: Add --duplicate-check-hash-function.

* server.py: compare_image_hash configurable hash function

Uses an argument added in cli_args to specify the type of hashing to default to for duplicate hash checking.  Uses an `eval()` to identify the specific hashlib class to utilize, but ultimately safely operates because we have specific options and only those options/choices in the arg parser.  So we don't have any unsafe input there.

* Add hasher() to node_helpers

* hashlib selection moved to node_helpers

* default-hashing-function instead of dupe checking hasher

This makes a default-hashing-function option instead of previous selected option.

* Use args.default_hashing_function

* Use safer handling for node_helpers.hasher()

Uses a safer handling method than `eval` to evaluate default hashing function.

* Stray parentheses are evil.

* Indentation fix.

Somehow when I hit save I didn't notice I missed a space to make indentation work proper.  Oops!
chenbaiyujason added a commit to NeoWorldTeam/ComfyUI that referenced this pull request Aug 6, 2024
chenbaiyujason added a commit to NeoWorldTeam/ComfyUI that referenced this pull request Aug 10, 2024
commit ae197f6
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 10 07:36:27 2024 -0400

    Speed up hunyuan dit inference a bit.

commit 1b5b8ca
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 21:45:21 2024 -0400

    Fix regression.

commit 6678d5c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 14:02:38 2024 -0400

    Fix regression.

commit e172564
Author: TTPlanetPig <152850462+TTPlanetPig@users.noreply.github.com>
Date:   Sat Aug 10 01:40:05 2024 +0800

    Update controlnet.py to fix the default controlnet weight as constant (comfyanonymous#4285)

commit a3cc326
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 12:16:25 2024 -0400

    Better fix for lowvram issue.

commit 86a97e9
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 12:08:58 2024 -0400

    Fix controlnet regression.

commit 5acdadc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 03:58:28 2024 -0400

    Fix issue with some lowvram weights.

commit 55ad9d5
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 03:36:40 2024 -0400

    Fix regression.

commit a9f04ed
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 03:21:10 2024 -0400

    Implement text encoder part of HunyuanDiT loras.

commit a475ec2
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 9 02:35:19 2024 -0400

    Cleanup HunyuanDit controlnets.

    Use the: ControlNetApply SD3 and HunyuanDiT node.

commit 06eb9fb
Author: 来新璐 <35400185+CrazyBoyM@users.noreply.github.com>
Date:   Fri Aug 9 14:59:24 2024 +0800

    feat: add support for HunYuanDit ControlNet (comfyanonymous#4245)

    * add support for HunYuanDit ControlNet

    * fix hunyuandit controlnet

    * fix typo in hunyuandit controlnet

    * fix typo in hunyuandit controlnet

    * fix code format style

    * add control_weight support for HunyuanDit Controlnet

    * use control_weights in HunyuanDit Controlnet

    * fix typo

commit 4133226
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 22:09:29 2024 -0400

    Raw torch is faster than einops?

commit 11200de
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 20:07:09 2024 -0400

    Cleaner code.

commit 037c38e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 17:28:35 2024 -0400

    Try to improve inference speed on some machines.

commit 1e11d2d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 17:05:16 2024 -0400

    Better prints.

commit 65ea6be
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 14:20:48 2024 -0700

    PullRequest CI Run: use pull_request_target to allow the CI Dashboard to work (comfyanonymous#4277)

    '_target' allows secrets to pass through, and we're just using the secret that allows uploading to the dashboard and are manually vetting PRs before running this workflow anyway

commit 5df6f57
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 13:30:59 2024 -0700

    minor fix on copypasta action name (comfyanonymous#4276)

    my bad sorry

commit 6588bfd
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 13:24:49 2024 -0700

    add GitHub workflow for CI tests of PRs (comfyanonymous#4275)

    When the 'Run-CI-Test' label is added to a PR, it will be tested by the CI, on a small matrix of stable versions.

commit 50ed287
Author: Alex "mcmonkey" Goodwin <4000772+mcmonkey4eva@users.noreply.github.com>
Date:   Thu Aug 8 12:40:07 2024 -0700

    Add full CI test matrix GitHub Workflow (comfyanonymous#4274)

    automatically runs a matrix of full GPU-enabled tests on all new commits to the ComfyUI master branch

commit 66d4233
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 15:16:51 2024 -0400

    Fix.

commit 591010b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 14:45:52 2024 -0400

    Support diffusers text attention flux loras.

commit 08f92d5
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 8 03:27:37 2024 -0400

    Partial model shift support.

commit 8115d8c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 15:08:39 2024 -0400

    Add Flux fp16 support hack.

commit 6969fc9
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 15:00:06 2024 -0400

    Make supported_dtypes a priority list.

commit cb7c4b4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 14:30:54 2024 -0400

    Workaround for lora OOM on lowvram mode.

commit 1208863
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 13:49:31 2024 -0400

    Fix "Comfy" lora keys.

    They are in this format now:
    diffusion_model.full.model.key.name.lora_up.weight

commit e1c5281
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 13:30:45 2024 -0400

    Fix bundled embed.

commit 17030fd
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 13:18:32 2024 -0400

    Support for "Comfy" lora format.

    The keys are just: model.full.model.key.name.lora_up.weight

    It is supported by all comfyui supported models.

    Now people can just convert loras to this format instead of having to ask
    for me to implement them.

commit c19dcd3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 12:59:28 2024 -0400

    Controlnet code refactor.

commit 1c08bf3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Aug 7 03:45:25 2024 -0400

    Support format for embeddings bundled in loras.

commit 2a02546
Author: PhilWun <philipp.wundrack@live.de>
Date:   Wed Aug 7 03:59:34 2024 +0200

    Add type hints to folder_paths.py (comfyanonymous#4191)

    * add type hints to folder_paths.py

    * replace deprecated standard collections type hints

    * fix type error when using Python 3.8

commit b334605
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 6 13:27:48 2024 -0400

    Fix OOMs happening in some cases.

    A cloned model patcher sometimes reported a model was loaded on a device
    when it wasn't.

commit de17a97
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 6 03:30:28 2024 -0400

    Unload all models if there's an OOM error.

commit c14ac98
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Aug 6 03:22:39 2024 -0400

    Unload models and load them back in lowvram mode no free vram.

commit 2894511
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Mon Aug 5 22:46:09 2024 -0700

    Clone taesd with depth of 1 to reduce download size. (comfyanonymous#4232)

commit f3bc402
Author: Silver <65376327+silveroxides@users.noreply.github.com>
Date:   Tue Aug 6 07:45:24 2024 +0200

    Add format metadata to CLIP save to make compatible with diffusers safetensors loading (comfyanonymous#4233)

commit 841e74a
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Tue Aug 6 01:27:28 2024 -0400

    Change browser test CI python to 3.8 (comfyanonymous#4234)

commit 2d75df4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 5 21:58:28 2024 -0400

    Flux tweak memory usage.

commit 1abc9c8
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Mon Aug 5 17:07:16 2024 -0700

    Stable release uses cached dependencies (comfyanonymous#4231)

    * Release stable based on existing tag.

    * Update default cuda to 12.1.

commit 8edbcf5
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 5 16:24:04 2024 -0400

    Improve performance on some lowend GPUs.

commit e545a63
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Aug 5 12:31:12 2024 -0400

    This probably doesn't work anymore.

commit 33e5203
Author: bymyself <abolkonsky.rem@gmail.com>
Date:   Mon Aug 5 09:25:28 2024 -0700

    Don't cache index.html (comfyanonymous#4211)

commit a178e25
Author: a-One-Fan <100067309+a-One-Fan@users.noreply.github.com>
Date:   Mon Aug 5 08:26:20 2024 +0300

    Fix Flux FP64 math on XPU (comfyanonymous#4210)

commit 78e133d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 21:59:42 2024 -0400

    Support simple diffusers Flux loras.

commit 7afa985
Author: Silver <65376327+silveroxides@users.noreply.github.com>
Date:   Sun Aug 4 23:10:02 2024 +0200

    Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' (comfyanonymous#4200)

commit ddb6a9f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 15:59:02 2024 -0400

    Set the step in EmptySD3LatentImage to 16.

    These models work better when the res is a multiple of 16.

commit 3b71f84
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 15:45:43 2024 -0400

    ONNX tracing fixes.

commit 0a6b008
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 10:03:33 2024 -0400

    Fix issue with some custom nodes.

commit 56f3c66
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Aug 4 04:06:00 2024 -0400

    ModelSamplingFlux now takes a resolution and adjusts the shift with it.

    If you want to sample Flux dev exactly how the reference code does use
    the same resolution as your image in this node.

commit f7a5107
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 16:55:38 2024 -0400

    Fix crash.

commit 91be9c2
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 16:34:27 2024 -0400

    Tweak lowvram memory formula.

commit 03c5018
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 15:14:07 2024 -0400

    Lower lowvram memory to 1/3 of free memory.

commit 2ba5cc8
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 15:06:40 2024 -0400

    Fix some issues.

commit 1e68002
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 14:50:20 2024 -0400

    Cap lowvram to half of free memory.

commit ba9095e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 13:45:19 2024 -0400

    Automatically use fp8 for diffusion model weights if:

    Checkpoint contains weights in fp8.

    There isn't enough memory to load the diffusion model in GPU vram.

commit f123328
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 12:39:33 2024 -0400

    Load T5 in fp8 if it's in fp8 in the Flux checkpoint.

commit 63a7e8e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 11:53:30 2024 -0400

    More aggressive batch splitting.

commit 0eea47d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Aug 3 03:54:38 2024 -0400

    Add ModelSamplingFlux to experiment with the shift value.

    Default shift on Flux Schnell is 0.0

commit 7cd0cdf
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 23:20:30 2024 -0400

    Add advanced model merge node for Flux model.

commit ea03c9d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 18:08:21 2024 -0400

    Better per model memory usage estimations.

commit 3a9ee99
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 17:34:30 2024 -0400

    Tweak regular SD memory formula.

commit 47da42d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 17:02:35 2024 -0400

    Better Flux vram estimation.

commit 17bbd83
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 13:14:28 2024 -0400

    Fix bug loading flac workflow when it contains = character.

commit bfb52de
Author: fgdfgfthgr-fox <60460773+fgdfgfthgr-fox@users.noreply.github.com>
Date:   Sat Aug 3 02:29:03 2024 +1200

    Lower SAG scale step for finer control (comfyanonymous#4158)

    * Lower SAG step for finer control

    Since the introduction of cfg++ which uses very low cfg value, a step of 0.1 in SAG might be too high for finer control. Even SAG of 0.1 can be too high when cfg is only 0.6, so I change the step to 0.01.

    * Lower PAG step as well.

    * Update nodes_sag.py

commit eca962c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Aug 2 10:24:53 2024 -0400

    Add FluxGuidance node.

    This lets you adjust the guidance on the dev model which is a parameter
    that is passed to the diffusion model.

commit c1696cd
Author: Jairo Correa <jn.j41r0@gmail.com>
Date:   Fri Aug 2 10:34:12 2024 -0300

    Add missing import (comfyanonymous#4174)

commit 369f459
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 22:19:53 2024 -0400

    Fix no longer working on old pytorch.

commit ce9ac2f
Author: Alexander Brown <DrJKL0424@gmail.com>
Date:   Thu Aug 1 18:40:56 2024 -0700

    Fix clip_g/clip_l mixup (comfyanonymous#4168)

commit e638f28
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 21:03:26 2024 -0400

    Hack to make all resolutions work on Flux models.

commit a531001
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 18:53:25 2024 -0400

    Add CLIPTextEncodeFlux.

commit d420bc7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 17:49:46 2024 -0400

    Tweak the memory usage formulas for Flux and SD.

commit d965474
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 16:39:59 2024 -0400

    Make ComfyUI split batches a higher priority than weight offload.

commit 1c61361
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 16:28:11 2024 -0400

    Fast preview support for Flux.

commit a6decf1
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 16:18:14 2024 -0400

    Fix bfloat16 potentially not being enabled on mps.

commit 48eb139
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 13:41:27 2024 -0400

    Try to fix mac issue.

commit b4f6ebb
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 13:33:30 2024 -0400

    Rename UNETLoader node to "Load Diffusion Model".

commit d7430a1
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 13:28:41 2024 -0400

    Add a way to load the diffusion model in fp8 with UNETLoader node.

commit f2b80f9
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 12:55:28 2024 -0400

    Better Mac support on flux model.

commit 1aa9cf3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 12:11:57 2024 -0400

    Make lowvram more aggressive on low memory machines.

commit 2f88d19
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 11:48:19 2024 -0400

    Add link to Flux examples to readme.

commit eb96c3b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 11:32:58 2024 -0400

    Fix .sft file loading (they are safetensors files).

commit 5f98de7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 11:05:56 2024 -0400

    Load flux t5 in fp8 if weights are in fp8.

commit 8d34211
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 09:57:01 2024 -0400

    Fix old python versions no longer working.

commit 1589b58
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 04:03:59 2024 -0400

    Basic Flux Schnell and Flux Dev model implementation.

commit 7ad574b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 09:42:17 2024 -0400

    Mac supports bf16 just make sure you are using the latest pytorch.

commit e2382b6
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Aug 1 03:58:58 2024 -0400

    Make lowvram less aggressive when there are large amounts of free memory.

commit c24f897
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 02:00:19 2024 -0400

    Fix to get fp8 working on T5 base.

commit a5991a7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 01:34:57 2024 -0400

    Fix hunyuan dit text encoder weights always being in fp32.

commit 2c038cc
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 01:32:35 2024 -0400

    Lower CLIP memory usage by a bit.

commit b85216a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 31 00:52:34 2024 -0400

    Lower T5 memory usage by a few hundred MB.

commit 82cae45
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 30 14:20:28 2024 -0400

    Fix potential issue with non clip text embeddings.

commit 25853d0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 30 05:03:20 2024 -0400

    Use common function for casting weights to input.

commit 7904063
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 30 05:01:34 2024 -0400

    Remove unnecessary code.

commit 66d35c0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 29 20:27:40 2024 -0400

    Improve artifacts on hydit, auraflow and SD3 on specific resolutions.

    This breaks seeds for resolutions that are not a multiple of 16 in pixel
    resolution by using circular padding instead of reflection padding but
    should lower the amount of artifacts when doing img2img at those
    resolutions.

commit c75b506
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 29 11:15:37 2024 -0400

    Less confusing exception if pillow() function fails.

commit 4ba7fa0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Jul 28 01:19:20 2024 -0400

    Refactor: Move sd2_clip.py to text_encoders folder.

commit ab76abc
Author: bymyself <abolkonsky.rem@gmail.com>
Date:   Sat Jul 27 20:34:19 2024 -0700

    Active workflow use primary fg color (comfyanonymous#4090)

commit 9300058
Author: Silver <65376327+silveroxides@users.noreply.github.com>
Date:   Sat Jul 27 22:19:50 2024 +0200

    Add dpmpp_2s_ancestral as custom sampler (comfyanonymous#4101)

    Adding dpmpp_2s_ancestral as custom sampler node to enable its use with eta and s_noise when using custom sampling.

commit f82d09c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 04:48:19 2024 -0400

    Update packaging workflow.

commit e6829e7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 04:41:46 2024 -0400

    Add a way to set custom dependencies in the release workflow.

commit 07f6a1a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 03:15:22 2024 -0400

    Handle case in the updater when master branch is not in local repo.

commit e746965
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 27 01:20:18 2024 -0400

    Update nightly package workflow.

commit 45a2842
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 14:52:00 2024 -0400

    Set stable releases as a prerelease initially.

    This should give time to test the standalone package before making it live.

commit 17b41f6
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Fri Jul 26 11:37:40 2024 -0700

    Change windows standalone URL to stable release. (comfyanonymous#4065)

commit cf4418b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 13:07:39 2024 -0400

    Don't treat Bert model like CLIP.

    Bert can accept up to 512 tokens so any prompt with more than 77 should
    just be passed to it as is instead of splitting it up like CLIP.

commit 6225a78
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 13:04:48 2024 -0400

    Add CLIPTextEncodeHunyuanDiT.

    Useful for testing what each text encoder does.

commit b6779d8
Author: filtered <176114999+webfiltered@users.noreply.github.com>
Date:   Sat Jul 27 02:25:42 2024 +1000

    Fix undo incorrectly undoing text input (comfyanonymous#4114)

    Fixes an issue where under certain conditions, the ComfyUI custom undo / redo functions would not run when intended to.

    When trying to undo an action like deleting several nodes, instead the native browser undo runs - e.g. a textarea gets focus and the last typed text is undone.  Clicking outside the text area and typing again just keeps doing the same thing.

commit 8328a2d
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 12:11:32 2024 -0400

    Let hunyuan dit work with all prompt lengths.

commit afe732b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 11:52:58 2024 -0400

    Hunyuan dit can now accept longer prompts.

commit a9ac56f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 26 04:32:33 2024 -0400

    Own BertModel implementation that works with lowvram.

commit 25b51b1
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 25 22:42:54 2024 -0400

    Hunyuan DiT lora support.

commit 61a2b00
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 25 19:06:43 2024 -0400

    Add HunyuanDiT support to readme.

commit a5f4292
Author: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Date:   Thu Jul 25 18:21:08 2024 -0400

    Basic hunyuan dit implementation. (comfyanonymous#4102)

    * Let tokenizers return weights to be stored in the saved checkpoint.

    * Basic hunyuan dit implementation.

    * Fix some resolutions not working.

    * Support hydit checkpoint save.

    * Init with right dtype.

    * Switch to optimized attention in pooler.

    * Fix black images on hunyuan dit.

commit f87810c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 25 10:52:09 2024 -0400

    Let tokenizers return weights to be stored in the saved checkpoint.

commit 10c919f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 16:43:53 2024 -0400

    Make it possible to load tokenizer data from checkpoints.

commit ce80e69
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 13:50:34 2024 -0400

    Avoid loading the dll when it's not necessary.

commit 19944ad
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 12:49:29 2024 -0400

    Add code to fix issues with new pytorch version on the standalone.

commit 10b43ce
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 24 01:12:59 2024 -0400

    Remove duplicate code.

commit 0a4c49c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 23 15:35:28 2024 -0400

    Support MT5.

commit 88ed893
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 23 14:17:42 2024 -0400

    Allow SPieceTokenizer to load model from a byte string.

commit 334ba48
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 23 14:13:32 2024 -0400

    More generic unet prefix detection code.

commit 14764aa
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 22 12:21:45 2024 -0400

    Rename LLAMATokenizer to SPieceTokenizer.

commit b2c995f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 22 11:30:38 2024 -0400

    "auto" type is only relevant to the SetUnionControlNetType node.

commit 4151fbf
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Mon Jul 22 11:27:32 2024 -0400

    Add error message on union controlnet (comfyanonymous#4081)

commit 6045ed3
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Sun Jul 21 21:15:01 2024 -0400

    Suppress frontend exception on unhandled message type (comfyanonymous#4078)

    * Suppress frontend exception on unhandled message type

    * nit

commit f836e69
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Jul 21 16:16:45 2024 -0400

    Fix bug with SaveAudio node with --gpu-only

commit 5b69cfe
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Sun Jul 21 15:29:10 2024 -0400

    Add timestamp to execution messages (comfyanonymous#4076)

    * Add timestamp to execution messages

    * Add execution_end message

    * Rename to execution_success

commit 95fa954
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 20 12:27:42 2024 -0400

    Only append zero to noise schedule if last sigma isn't zero.
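A rough sketch of that guard, assuming the schedule is a torch tensor of sigmas and illustrative names:

import torch

def append_zero_if_needed(sigmas: torch.Tensor) -> torch.Tensor:
    # Only add a trailing zero when the schedule does not already end at zero.
    if sigmas[-1] != 0:
        sigmas = torch.cat([sigmas, sigmas.new_zeros(1)])
    return sigmas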

commit 11b7414
Author: Greg Wainer <gregcebowainer@hotmail.com>
Date:   Fri Jul 19 17:39:04 2024 -0500

    Fix/webp exif little endian (comfyanonymous#4061)

    * Fix for isLittleEndian flag in parseExifData.

    * Add break after reading first exif chunk in getWebpMetadata.

commit 6ab8cad
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 19 17:44:56 2024 -0400

    Implement beta sampling scheduler.

    It is based on: https://arxiv.org/abs/2407.12173

    Add "beta" to the list of schedulers and the BetaSamplingScheduler node.
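A hedged sketch of how a beta-distribution scheduler can pick timesteps; the alpha/beta defaults and function name here are assumptions, not necessarily what the BetaSamplingScheduler node uses:

import numpy as np
import scipy.stats

def beta_timesteps(num_steps, total_timesteps, alpha=0.6, beta=0.6):
    # Spread quantiles evenly, then map them through the Beta inverse CDF so
    # sampling steps concentrate where the distribution puts more mass.
    quantiles = 1.0 - np.linspace(0.0, 1.0, num_steps, endpoint=False)
    ts = scipy.stats.beta.ppf(quantiles, alpha, beta) * (total_timesteps - 1)
    return np.rint(ts).astype(int)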

commit 011b11d
Author: bymyself <abolkonsky.rem@gmail.com>
Date:   Thu Jul 18 18:59:18 2024 -0700

    LoadAudio restores file value from workflow (comfyanonymous#4043)

    * LoadAudio restores file value from workflow

    * use onAfterGraphConfigured

    * Don't use anonymous function

commit ff6ca2a
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 18 17:20:05 2024 -0400

    Move PAG to model_patches/unet section.

    Move other unet model_patches nodes to model_patches/unet section.

commit 374e093
Author: bymyself <abolkonsky.rem@gmail.com>
Date:   Wed Jul 17 13:11:10 2024 -0700

    Disable audio widget trying to get previews (comfyanonymous#4044)

commit 8557894
Author: 喵哩个咪 <wailovet@163.com>
Date:   Thu Jul 18 01:12:50 2024 +0800

    support clip-vit-large-patch14-336 (comfyanonymous#4042)

    * support clip-vit-large-patch14-336

    * support clip-vit-large-patch14-336

commit 6f7869f
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 17 13:05:38 2024 -0400

    Get clip vision image size from config.

commit 281ad42
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 17 10:16:31 2024 -0400

    Fix lowvram union controlnet bug.

commit 1cde6b2
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Tue Jul 16 21:15:08 2024 -0400

    Disallow use of eval with pylint (comfyanonymous#4033)

commit c5a48b1
Author: Thomas Ward <teward@thomas-ward.net>
Date:   Tue Jul 16 18:27:09 2024 -0400

    Make default hash lib configurable without code changes via CLI argument (comfyanonymous#3947)

    * cli_args: Add --duplicate-check-hash-function.

    * server.py: compare_image_hash configurable hash function

    Uses an argument added in cli_args to specify the default hash function for duplicate hash checking.  Uses an `eval()` to identify the specific hashlib class to utilize, but this operates safely because the argument parser only allows a fixed set of choices, so no unsafe input can reach the `eval()`.

    * Add hasher() to node_helpers

    * hashlib selection moved to node_helpers

    * default-hashing-function instead of dupe checking hasher

    This replaces the previously added duplicate-check-specific option with a general default-hashing-function option.

    * Use args.default_hashing_function

    * Use safer handling for node_helpers.hasher()

    Uses a safer method than `eval` to resolve the default hashing function (a sketch follows this commit message).

    * Stray parentheses are evil.

    * Indentation fix.

    Somehow when I hit save I didn't notice I had missed a space needed to make the indentation work properly.  Oops!
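A minimal sketch of the eval-free lookup described in the bullets above, assuming the import path and the exact set of argparse choices; only hash names allowed by the parser can reach `getattr`:

import hashlib

from comfy.cli_args import args  # assumed import path

def hasher():
    # args.default_hashing_function is restricted by argparse choices
    # (e.g. "md5", "sha1", "sha256", "sha512"), so getattr is safe here.
    return getattr(hashlib, args.default_hashing_function)

# Usage: hasher() returns a constructor such as hashlib.sha256.
digest = hasher()(b"image bytes").hexdigest()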

commit f229879
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Tue Jul 16 18:20:39 2024 -0400

    Fix annotation (comfyanonymous#4035)

commit 60383f3
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 16 17:08:25 2024 -0400

    Move controlnet nodes to conditioning/controlnet.

commit 8270c62
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 16 17:01:40 2024 -0400

    Add SetUnionControlNetType to set the type of the union controlnet model.

commit 821f938
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 16 15:18:24 2024 -0400

    Allow model sampling to set number of timesteps.

commit e163039
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Tue Jul 16 11:29:38 2024 -0400

    Allow version names like v0.0.1 for the FrontendManager.

commit 99458e8
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Tue Jul 16 11:26:11 2024 -0400

    Add `FrontendManager` to manage non-default front-end impl (comfyanonymous#3897)

    * Add frontend manager

    * Add tests

    * nit

    * Add unit test to github CI

    * Fix path

    * nit

    * ignore

    * Add logging

    * Install test deps

    * Remove 'stable' keyword support

    * Update test

    * Add web-root arg

    * Rename web-root to front-end-root

    * Add test on non-exist version number

    * Use repo owner/name to replace hard coded provider list

    * Inline cmd args

    * nit

    * Fix unit test

commit 33346fd
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 15 20:36:03 2024 -0400

    Fix bug with custom nodes on other drives.

commit 136c93c
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 15 20:01:49 2024 -0400

    Fix bug with workflow not registering change.

    There was an issue when only the class type of a node changed with all the
    inputs staying the same.

commit 1305fb2
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Mon Jul 15 17:36:24 2024 -0400

    Refactor: Move some code to the comfy/text_encoders folder.

commit 7914c47
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sun Jul 14 10:07:36 2024 -0400

    Quick fix for the promax controlnet.

commit 79547ef
Author: pythongosssss <125205205+pythongosssss@users.noreply.github.com>
Date:   Sun Jul 14 07:04:40 2024 +0100

    New menu fixes - fix send to workflow (comfyanonymous#3909)

    * Fix send to workflow
    Fix center align of close workflow dialog
    Better support for elements around canvas

    * More resilient to extra elements added to body

commit a3dffc4
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Sat Jul 13 13:51:40 2024 -0400

    Support AuraFlow Lora and loading model weights in diffusers format.

    You can load model weights in diffusers format using the UNETLoader node.

commit ce2473b
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 12 15:25:07 2024 -0400

    Add link to AuraFlow example in Readme.

commit 4ca9b9c
Author: Robin Huang <robin.j.huang@gmail.com>
Date:   Fri Jul 12 10:33:57 2024 -0700

    Add Github Workflow for releasing stable versions and standalone bundle. (comfyanonymous#3949)

    * Add stable release.

    * Only build CUDA 12.1 + 3.11 Python.

    * Upgrade checkout and setup-python to latest version.

    * lzma2

    * Update artifact name to be ComfyUI_windows_portable_nvidia.7z

commit 29c2e26
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Fri Jul 12 01:08:45 2024 -0400

    Better tokenizing code for AuraFlow.

commit b6f09cf
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 11 22:58:03 2024 -0400

    Add sentencepiece dependency.

commit 8e01204
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 11 17:51:56 2024 -0400

    Add a ModelSamplingAuraFlow node to change the shift value.

    Set the default AuraFlow shift value to 1.73 (sqrt(3)).
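For illustration only, one common way a shift parameter reshapes a flow-matching noise schedule (an assumption here, not necessarily AuraFlow's exact formula):

import math

def shifted_sigma(t, shift=math.sqrt(3)):  # default shift of 1.73 ≈ sqrt(3)
    # A larger shift pushes more of the schedule toward high-noise timesteps.
    return shift * t / (1 + (shift - 1) * t)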

commit 9f291d7
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 11 16:51:06 2024 -0400

    AuraFlow model implementation.

commit f45157e
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 11 11:46:51 2024 -0400

    Fix error message never being shown.

commit 5e1fced
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Thu Jul 11 11:37:31 2024 -0400

    Cleaner support for loading different diffusion model types.

commit ffe0bb0
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 10 20:33:12 2024 -0400

    Remove useless code.

commit 391c104
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 10 20:06:50 2024 -0400

    More flexibility with text encoder return values.

    Text encoders can now return other values to the CONDITIONING than the cond
    and pooled output.
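A hedged illustration of the idea with hypothetical names: the encoder returns an extra dictionary alongside the cond and pooled output, and the caller folds those entries into the CONDITIONING:

def encode(tokens):
    cond = run_text_encoder(tokens)                 # hypothetical encoder call
    pooled = cond.mean(dim=1)                       # placeholder pooled output
    extra = {"attention_mask": build_mask(tokens)}  # extra values for CONDITIONING
    return cond, pooled, extra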

commit e44fa56
Author: comfyanonymous <comfyanonymous@protonmail.com>
Date:   Wed Jul 10 19:31:22 2024 -0400

    Support returning text encoder attention masks.

commit 90389b3
Author: Chenlei Hu <chenlei.hu@mail.utoronto.ca>
Date:   Wed Jul 10 11:28:15 2024 -0400

    Update bug issue template (comfyanonymous#3996)

    * Update issue template

    * nit

# Conflicts:
#	comfy/controlnet.py   resolved by upstream/master version
#	comfy/ldm/flux/layers.py   resolved by upstream/master version
#	comfy/ldm/hydit/attn_layers.py   resolved by upstream/master version
#	comfy/ldm/hydit/models.py   resolved by upstream/master version
#	comfy/model_detection.py   resolved by upstream/master version
#	comfy/model_management.py   resolved by upstream/master version
#	comfy/supported_models.py   resolved by upstream/master version
#	comfy/utils.py   resolved by upstream/master version
#	comfy_extras/nodes_hunyuan.py   resolved by upstream/master version
#	folder_paths.py   resolved by upstream/master version