[Bug]: SDXL 1.0 model doesn't start #12081
Comments
The minimum graphics memory requirement for SDXL 1.0 is 12GB+
It needs 12GB VRAM; it can't start with 8GB 😒😒
@mooonwalker1983 you ran out of regular RAM. But also use this VAE, and name it the same name as your model.
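(If I'm reading that advice right, it refers to A1111's automatic VAE pairing: with the SD VAE setting on Automatic, a VAE file placed next to the checkpoint with the same base name gets loaded along with it. Something like:

```
models/Stable-diffusion/
    sd_xl_base_1.0.safetensors
    sd_xl_base_1.0.vae.safetensors   <- same base name as the checkpoint
```
)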
That's nonsense, it even runs just fine with 6GB @mooonwalker1983
I am having the exact same issue with an RTX 3060 12GB and 24GB system RAM
These two will produce 1024x1024 on 4-6GB; run with --lowvram --opt-sdp-attention / --xformers*. --lowvram is slow: it trades speed for memory efficiency. You can replace it with --medvram (a middle ground) or remove it entirely. The original stability-ai repo does not use these optimizations, so their recommendations are higher. *The latest xformers / edit: opt-sdp-attention is still not 100% image reproduction
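For example, on a Windows install these flags usually go into webui-user.bat; the combination below is just one reasonable low-VRAM setup, not the only valid one:

```bat
rem webui-user.bat -- example low-VRAM launch flags for SDXL
set COMMANDLINE_ARGS=--medvram --opt-sdp-attention
```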
Just tried based on that branch and the same error is occurring, though it does now error faster than before. And I have checkpoint caches set to 0. Here is my error log:
Launching Web UI with arguments: --opt-split-attention --medvram --no-half --no-half-vae --autolaunch --listen --api --cors-allow-origins=http://localhost:7860/ --enable-insecure-extension-access --xformers --disable-model-loading-ram-optimization
To create a public link, set
Failed to create model quickly; will retry using slow method.
I tried to start the SDXL model with this VAE and low params, unsuccessfully
What happens to memory usage in the Task Manager?
@mooonwalker1983 Can you try --lowram --medvram so the checkpoint gets loaded directly to VRAM?
@dhwz Just tried this and am still getting the same issues. (This is on 1.5.1.) (Though it did reduce my RAM usage.)
|
@mooonwalker1983 just a guess: is swap (pagefile) disabled on your Windows? If yes, try enabling it; if not, can you maybe increase the size? Running out of ideas. @rmdtech are you on an AMD GPU? If not, please try without --no-half
@dhwz I used to use an AMD GPU but recently upgraded; I accidentally copied that over when setting up. I have removed it, but unfortunately no luck. I've also already got the pagefile enabled, with 12GB allocated
Also test on --lowvram (not --lowram)
I enabled auto mode for swap in Windows 10 and it works!!!!!! Very, very slowly, but it works!
@mooonwalker1983 speed should improve if --medvram is used? Also try the sdp or sdp-no-mem setting in the optimization settings
nice
I've got the latest Nvidia drivers, but you're right, I can't see any reason why this wouldn't work. It works fine for non-SDXL models, but anything SDXL-based fails to load :/
It really works. Thank you for helping me! Can it be any faster? "cat in park" Time taken: 32.9 sec.
Unfortunately, I've already got identical settings. Though it might be because I need more space on my C:/ drive; since I run this in a VM, I'll try increasing the drive storage and will report back
I use old drivers (see the pinned issue); I didn't wanna risk VRAM being loaded into RAM
I think that's close to what is possible right now; your GPU isn't the fastest. I'm getting similar results with my 2070 Super, ~38 secs. Just remember the resolution is 4x that of the 1.5 model. If you reduce the resolution you'll get similar speed but bad quality.
@dhwz have you seen a model anywhere with the built-in fixed VAE? It's a bit odd to suggest everyone use it. It is also the 0.9 VAE; there may be a difference.
I had the same problem until A1111 was started with the "old" parameters (--precision full --no-half); I then changed to (--no-half-vae) and now it starts fine.
Nope, I've already asked if we can have an updated version of the fp16 VAE. Right now we need to stay on the 0.9 VAE, but I haven't seen any big difference in results
@ClashSAN I have to correct my answer now: someone pushed an SDXL 1.0 base model with a baked-in fp16 VAE
Suspect I'm in the same boat, with 1.5.1 and all of requirements.txt installed, although I'm on an EC2 instance: 16GB RAM, 12GB VRAM. As soon as I select SDXL from the checkpoints drop-down I wait a bit, then the system runs out of RAM. It appears fine for the first few seconds, then this is the last breath:
From the console, memory use soars once this final line is printed:
Net result is me stopping the EC2 instance to start it again. Uncertain what this "Creating model from config..." does.
That's probably not enough RAM for SDXL; you need a lot of RAM while the model is loading.
Pagefiles for EC2 aren't "normal". Not saying I can't do it, but even with --lowram, 16GB didn't work. I'm expecting a lot of memory optimisations to make this stuff bearable.
I can confirm this is 100% a RAM issue. Since I'm lucky enough to run this in a VM, I've simply allocated more RAM, and that has resolved the issue; running at 32GB solved it. Thank you all for your help
It's definitely much more RAM intensive. If the SDXL base model is already loaded and I've enabled checkpoint cache, then loading the refiner and afterwards another model runs me OOM even with 64GB RAM.
FWIW the latest ComfyUI does launch and renders some images with SDXL on my EC2. It will crash eventually (possibly RAM) but doesn't take the VM with it; as a comparison, that one "works".
I have the same issues (1070 8GB, 32GB RAM). The weird thing is: yesterday it worked. Today I moved SD from HDD to NVMe; old checkpoints work, but SDXL refuses to. Installed from scratch: same problems. Changed the pagefile to auto: same. *edit* I realised the only difference was the drive and the swapfile settings. I DISABLED the swapfile on the drive where A1111 is located -> now it works. Weird, but it does ;)
args: I am on WSL and all it says is But if I do so definitely looks like an out-of-memory issue here... Edit: following this guide on increasing memory helped. Works now!
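For other WSL users: the WSL2 memory cap can typically be raised via %UserProfile%\.wslconfig (the linked guide presumably does something like this; sizes below are illustrative, not a recommendation):

```ini
# %UserProfile%\.wslconfig -- raise the WSL2 memory/swap limits (example values)
[wsl2]
memory=24GB
swap=16GB
```

Run wsl --shutdown afterwards so the new limits take effect.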
I downloaded the SDXL 1.0 base, refiner, and LoRA and placed them where they should be. After firing up A1111, when I went to select SDXL 1.0, it tries to load and reverts back to the previous 1.5 model. System spec: Calculating sha256 for F:\StableDiffusion\stable-diffusion-webui-master\stable-diffusion-webui-master\models\Stable-diffusion\SDXL\sd_xl_base_1.0.safetensors: 31e35c80fc4829d14f90153f4c74cd59c90b779f6afe05a74cd6120b893f7e5b
@thegreatsai you're not on the latest webui?
I am probably not. How do you manually update it? I didn't do a git pull when I first installed it.
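For reference, if the webui folder was cloned with git, updating is just a pull from inside it; a zip download has no git metadata, in which case a fresh git clone is the cleaner route:

```sh
cd stable-diffusion-webui   # path to your install
git pull                    # fetch and apply the latest commits
```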
I ended up doing another fresh install and moved over my existing models and other stuff. SDXL 1.0 does load now and works! :D
Same problem. I also deactivated all extensions & tried keeping some enabled afterwards; that didn't work either. Honestly idk. Memory issues?
The same thing happened to me with the refiner, and after trying several arguments without success, I noticed that I had "Checkpoints to cache in RAM = 2" configured; setting it to "0" freed up enough RAM (I have 32GB of RAM) to load the refiner. COMMANDLINE_ARGS= --disable-safe-unpickle --opt-sdp-attention --no-half-vae --medvram --xformers
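For what it's worth, that UI option should correspond to the sd_checkpoint_cache key in the webui's config.json, so it can also be zeroed there directly (key name is from memory; verify against your own config.json):

```json
{ "sd_checkpoint_cache": 0 }
```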
I added swap to my EC2. I was able to switch to SDXL. Prior to running anything else, here's the
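For anyone else on EC2, a common way to add swap on a Linux instance looks like the below; the 16G size is illustrative, pick what fits your disk:

```sh
# create and enable a 16 GB swapfile
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```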
|
Guys, try to allocate virtual RAM in the performance settings. If you don't know what this is, try searching "adding virtual ram windows"
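To check what's currently configured, one way on Windows is the WMI pagefile class in PowerShell; a system-managed pagefile may not show its settings here, so double-check in the Performance Options dialog too:

```powershell
# List pagefiles currently in use and their allocated sizes (MB)
Get-CimInstance Win32_PageFileUsage
```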
1.6.0-RC should have resolved this. Also see the wiki for more information: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Optimum-SDXL-Usage |
Let's help stop this misinformation floating around the internet 👌🙂 SDXL 1.0 doesn't need 12GB+ SDXL 1.0 works just fine with 8GB RAM. But if you're using it with the AUTOMATIC1111 UI, then yeah, you'll need 12GB+ |
The wiki already explains that. 4GB even works. |
Is there an existing issue for this?
What happened?
I have installed and updated automatic1111 and put the SDXL model in models, but it doesn't run; it tries to start but fails. It works in ComfyUI, though. RTX 4060 Ti 8GB, 32GB RAM, Ryzen 5 5600
Steps to reproduce the problem
I don't know
What should have happened?
errors
Version or Commit where the problem happens
1.5.0
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
Console logs
Additional information
No response