Issues with ComfyUI crashing with and without error message using API and Flux - !!! Exception during processing !!! Given groups=1, weight of size [4, 4, 1, 1], expected input[1, 16, 90, 160] to have 4 channels, but got 16 channels instead. Flux is 16 channels, why the error! #4831
Replies: 21 comments 16 replies
-
There should be detailed error messages in the terminal.
-
It always crashes right after the line "Using pytorch attention in VAE"... got prompt
-
I have tried MANY different workflows using 3 different Flux models to try to get this working. The API is broken! Here are some of the API files I have tried that cause it to crash. They all work fine in the web interface but crash ComfyUI when run through the API! diffusion API test.json
-
Here is a screenshot showing SDXL working, then trying to run a Flux model and crashing with no error. I have tried torch 2.3.1 and 2.4.1, with and without xformers. I have tried checkpoint, GGUF, and diffusion models. I have tried every released version from 0.0.8 to 0.2.2.
-
Here is the complete error even when using a Flux fp8 model... (after successfully running SDXL)
got prompt
Prompt executed in 127.55 seconds
-
Here is the latest API file where that error is from.
-
This clearly shows I am loading the Flux model. You stated Flux is 16 channels, and I am loading a Flux model, so why am I getting an error saying 16 channels is a problem?
model weight dtype torch.float8_e4m3fn, manual cast: torch.float16
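For what it's worth, the error message itself decodes cleanly: the weight of size [4, 4, 1, 1] belongs to a layer that expects 4-channel input (an SD/SDXL-style VAE layer), while the input [1, 16, 90, 160] is a 16-channel Flux latent at 1280x720 / 8. A minimal sketch in plain PyTorch (illustrative only, not ComfyUI's actual code) reproduces the exact message:

```python
import torch
import torch.nn as nn

# A 1x1 conv with 4 input channels, like a layer from an SD/SDXL VAE;
# its weight has shape [4, 4, 1, 1], matching the error message.
vae_layer = nn.Conv2d(in_channels=4, out_channels=4, kernel_size=1)

# A Flux latent has 16 channels; 90x160 is the latent size for a
# 1280x720 image downscaled by 8.
flux_latent = torch.randn(1, 16, 90, 160)

try:
    vae_layer(flux_latent)
except RuntimeError as e:
    print(e)  # Given groups=1, weight of size [4, 4, 1, 1], expected
              # input[1, 16, 90, 160] to have 4 channels, but got 16 ...
```

So the error is not about Flux being mis-detected as 4 channels; it suggests the API run is decoding the 16-channel Flux latent with a 4-channel (SD-style) VAE.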
-
This is the Flux STOIQ model running from the web interface with no issues. Look at the memory allocation. This is across two runs with the same prompt, showing the first-time load and the second run. got prompt
-
This is the API attempting to do the exact same thing and crashing. Again, look at the memory allocation. Same Flux STOIQ model, same workflow, just run through the API. got prompt
-
Can you please look at my thread again... I am loading a Flux model, which is 16 channels, but I am getting an error saying 16 channels is wrong and it is supposed to be 4.
…On Sat, Sep 7, 2024 at 11:10 PM Dr.Lt.Data ***@***.***> wrote:
Cases where a process is killed without any error message are generally
due to insufficient RAM, causing the OS to forcibly kill the process.
Occasionally, it can be due to a torch issue, but this is rare.
-
Here is the information from running my diffusion workflow...
Using pytorch attention in VAE
Prompt executed in 44.74 seconds
-
Here is the output from running that exact same workflow through the API. Crashed as usual! No output, since there was no generation, so no logger info. Ignore the SDXL output; I am running a website, so someone just generated an SDXL image through the API on my other card.
got prompt
ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-09-12 20:11:20.763956
Prestartup times for custom nodes:
Total VRAM 24474 MB, total RAM 65464 MB
-
I am running the newest version of ComfyUI again, the new one that is only one Comfy directory deep. Same error. got prompt At least now it gets through the generation; it's just the problem of it thinking 16 is wrong when it's not! Should be an easy fix.
-
Can you please look into fixing this? It's been over 2 months, and I'm pretty sure this is broken for everybody. I really want to get Flux working through the API to add to my website: https://AiImageCentral.com. I have tried torch 2.3.1, 2.4.0, and 2.4.1.
-
Create
This will print the prompt.
-
Here are both DBG results. They are different even though it's exactly the same workflow I exported to the API.
Working prompt from workflow:
Debug prompt from API:
Even when I copy the DBG prompt from the workflow it still fails. It's not recognizing 16 channels as valid for Flux in the API. Error again, captured after got prompt:
got prompt
Prompt executed in 84.25 seconds
Running the latest torch (2.4.1) and xformers.
-
I even tried combining all the prompt info from both together to see if that fixes it. NOPE.
prompt_text = """
Requested to load Flux
Prompt executed in 81.35 seconds
-
It always says given groups=1, which I think must be wrong. What should groups be equal to with 16 channels?
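On the groups question: groups=1 is simply PyTorch's default dense convolution and is correct for both SDXL and Flux; it is not something that changes with the channel count. The mismatch in the error is in_channels. A sketch of the two layer shapes (illustrative, not ComfyUI's actual code):

```python
import torch.nn as nn

# groups=1 (the default) means a dense convolution and is not the bug.
# The real mismatch is in_channels: 4 for SD/SDXL latents, 16 for Flux.
sd_layer = nn.Conv2d(in_channels=4, out_channels=4, kernel_size=1)
flux_layer = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=1)

print(tuple(sd_layer.weight.shape))    # (4, 4, 1, 1)   -- the weight in the error
print(tuple(flux_layer.weight.shape))  # (16, 16, 1, 1) -- what a 16-channel layer has
```

So groups stays 1; the layer reporting a [4, 4, 1, 1] weight simply is not a 16-channel layer, which points at the wrong VAE being applied rather than a wrong groups value.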
-
I also can't run SDXL at 1080p through the API anymore, as it crashes exactly the same way now! I was doing 1080p for about 8 months until this happened (roughly 2-2.5 months ago). Now I can only do 1280x720 through the API. This should not be that difficult; it used to work! Just tried again after a clean git clone (on 9/22/2024) running torch 2.4.1: everything works from workflows, but the API is still very messed up. RTX 2070 (8 GB) for SDXL generation; P40 (24 GB) for Flux, if the API ever works with it, and for RemBG. Both are used for my website https://AiImageCentral.com. I would really appreciate it if you could figure this out. I know it's not my setup; I have run through so many variations and versions, fresh installs, git clones, and released versions. I don't think the API works with Flux for anyone.
-
Same issue with v0.3.7 Windows portable. In the browser the workflow works, but if I export the API JSON and run a script with it, the server crashes without any log. My workaround is to grab the API payload JSON in the browser instead of using the exported API JSON, remove the client_id, and modify the code a little:

import json
from urllib import request

payload = 'xxx'  # API payload JSON captured in the browser, client_id removed

def queue_prompt(prompt):
    data = json.dumps(prompt).encode('utf-8')
    req = request.Request("http://127.0.0.1:8188/prompt", data=data)
    request.urlopen(req)

prompt = json.loads(payload)
# set diff parameters here
prompt["prompt"]["6"]["inputs"]["text"] = "comic style"
queue_prompt(prompt)
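A likely reason the browser payload works while the exported file does not: the file from "Save (API Format)" is the bare node graph, while the /prompt endpoint expects that graph wrapped under a "prompt" key (this is what ComfyUI's own basic_api_example.py script does). A minimal sketch, with the node id "6" and the filename workflow_api.json as assumptions taken from the examples above:

```python
import json
from urllib import request

def build_payload(graph, client_id=None):
    # The exported API-format file is the bare node graph; /prompt expects
    # {"prompt": <graph>} with an optional client_id.
    payload = {"prompt": graph}
    if client_id is not None:
        payload["client_id"] = client_id
    return payload

def queue_prompt(graph, server="http://127.0.0.1:8188"):
    data = json.dumps(build_payload(graph)).encode("utf-8")
    req = request.Request(server + "/prompt", data=data,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

# With a running ComfyUI server you would do something like:
#   graph = json.load(open("workflow_api.json"))
#   graph["6"]["inputs"]["text"] = "comic style"
#   queue_prompt(graph)
```

Posting the bare exported graph without the wrapper gives the server a payload shape it does not expect, which would explain scripts failing while the browser (which always sends the wrapped form) succeeds.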
-
I have been having issues with the API ever since Flux was added to the mix. My workflows work fine when I run them locally through the web interface, but when I export the exact same workflow as an API and try to use it with my website, it crashes without an error message. I believe it's memory issues. This happens when attempting to run SDXL at 1920x1080 or when running any of the Flux models. Again, they work fine through the web interface but cause ComfyUI to crash, with or without an error, when run from the API. I have tried all releases and they are all affected. I have only been able to run 720p and 1440p resolutions without crashing for about the last 2 months now. Also, in the latest release (0.2.2), RemBG is broken as well. I have tried fresh installs about 5 times now, going back to the earliest releases available, and they are all broken.
Is anyone else having these issues? I posted an issue on this about a month ago and still no reply. I am running a P40 and an RTX 2070.