torch.OutOfMemoryError: Allocation on device #6073
Comments
The first time it runs, it encounters an OOM (out of memory) error. However, if you click again, it runs successfully. After that, every subsequent click results in an OOM error.
No, I can't generate any image when this pops up, even when I reload and queue again.
Did you encounter this problem as well? Has it been solved?
How can I configure PyTorch to use more cache? It's reporting an OOM (out-of-memory) error, but I still have around 10 GB of VRAM free.
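There is no setting that makes PyTorch "use more cache", but its caching allocator can be tuned. When an OOM is reported even though VRAM appears free, fragmentation of the allocator is a common cause, and the documented `PYTORCH_CUDA_ALLOC_CONF` environment variable sometimes helps. A minimal sketch, assuming a recent PyTorch build; the option choice is illustrative, not a setting confirmed by this thread:

```python
# Sketch: tune PyTorch's CUDA caching allocator before ComfyUI starts.
# Must be set before CUDA is initialized (i.e. before the first allocation).
# max_split_size_mb:<N> is an alternative knob for fragmentation-related OOMs.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

if torch.cuda.is_available():
    # Inspect reserved vs. allocated memory after a run to see fragmentation.
    print(torch.cuda.memory_summary())
```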
I am getting this error on my PC, which has a 4070 card with 16 GB of VRAM, yet on my laptop, which has a 3080 with 8 GB of VRAM, the same workflow, models, etc. run slowly but fine. What's going on? Is there something wrong with the way I installed CUDA/PyTorch on my PC? I don't really want to do a fresh install of ComfyUI; it takes forever to compile everything and redownload the nodes.
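One quick way to rule out an install problem is to compare what PyTorch actually sees on the desktop versus the laptop. A small sketch using only standard torch calls, nothing ComfyUI-specific:

```python
# Print the PyTorch build, the CUDA version it was built against, and the
# GPU it will use, so the two machines can be compared side by side.
import torch

print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    free_b, total_b = torch.cuda.mem_get_info(0)
    print(f"GPU: {props.name}")
    print(f"VRAM total: {total_b / 1024**3:.1f} GiB, free: {free_b / 1024**3:.1f} GiB")
```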
I'm not sure either, but I got an alert here saying that the weight file wasn't loaded; I guess that might be the reason. I'm looking for that file to see if I can manually adjust how VRAM is allocated.
My problem is always this "ApplyPulidFlux: Allocation on device" error.
I am having similar Flux memory issues. I solved the OOM with the Garbage Collector node from ControlFlowUtils.
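For reference, the cleanup such a node performs between runs roughly amounts to forcing Python garbage collection and emptying PyTorch's CUDA cache. A plain-PyTorch sketch of that idea, not the ControlFlowUtils node itself:

```python
# Drop unreachable Python objects, then hand cached CUDA blocks back to the
# driver so the next workflow starts with as much free VRAM as possible.
import gc
import torch

def free_vram() -> None:
    gc.collect()                      # release Python-side references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # return cached allocator blocks
        torch.cuda.ipc_collect()      # tidy up inter-process CUDA handles

free_vram()
```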
Your question
SamplerCustomAdvanced
Allocation on device
Logs
Other
128 GB of system RAM; RTX 4090D with 24 GB of VRAM.