
[Bug]: Launch Prevention after recent update #12223

Closed
1 task done
KimoriWasTaken opened this issue Jul 31, 2023 · 14 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@KimoriWasTaken

KimoriWasTaken commented Jul 31, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Ran git pull as always; the launch failed afterwards.
Proceeded to try a clean install, which didn't work either.
=> Unable to launch sd-webui at the moment, as I don't know how to roll back

Steps to reproduce the problem

  1. Launch webui-user.bat from a fresh install or after updating

What should have happened?

It should have launched.

Version or Commit where the problem happens

Version: 1.5.1

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

AMD GPUs (RX 5000 below)

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Mozilla Firefox, Google Chrome

Command Line Arguments

none

List of extensions

clean install

Console logs

Creating venv in directory C:\Users\tommo\stable-diffusion-webui-directml\venv using python "C:\Users\tommo\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\Users\tommo\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: ed20ba7f9ff593bcf012db79278930a381a55748
Installing torch and torchvision
Collecting torch==2.0.0
  Using cached torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.1
  Using cached torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
Collecting torch-directml
  Using cached torch_directml-0.2.0.dev230426-cp310-cp310-win_amd64.whl (8.2 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting networkx
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting filelock
  Using cached filelock-3.12.2-py3-none-any.whl (10 kB)
Collecting jinja2
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting sympy
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting numpy
  Downloading numpy-1.25.2-cp310-cp310-win_amd64.whl (15.6 MB)
     ---------------------------------------- 15.6/15.6 MB 12.1 MB/s eta 0:00:00
Collecting requests
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached Pillow-10.0.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl (17 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.2.0-cp310-cp310-win_amd64.whl (96 kB)
Collecting urllib3<3,>=1.21.1
  Downloading urllib3-2.0.4-py3-none-any.whl (123 kB)
     ---------------------------------------- 123.9/123.9 kB 7.1 MB/s eta 0:00:00
Collecting idna<4,>=2.5
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2023.7.22-py3-none-any.whl (158 kB)
     ---------------------------------------- 158.3/158.3 kB 9.9 MB/s eta 0:00:00
Collecting mpmath>=0.19
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torch-directml
Successfully installed MarkupSafe-2.1.3 certifi-2023.7.22 charset-normalizer-3.2.0 filelock-3.12.2 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 numpy-1.25.2 pillow-10.0.0 requests-2.31.0 sympy-1.12 torch-2.0.0 torch-directml-0.2.0.dev230426 torchvision-0.15.1 typing-extensions-4.7.1 urllib3-2.0.4

[notice] A new release of pip available: 22.2.1 -> 23.2.1
[notice] To update, run: C:\Users\tommo\stable-diffusion-webui-directml\venv\Scripts\python.exe -m pip install --upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into C:\Users\tommo\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai...
Cloning into 'C:\Users\tommo\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 574, done.
remote: Counting objects: 100% (304/304), done.
remote: Compressing objects: 100% (86/86), done.
remote: Total 574 (delta 244), reused 218 (delta 218), pack-reused 270
Receiving objects: 100% (574/574), 73.43 MiB | 11.87 MiB/s, done.
Resolving deltas: 100% (276/276), done.
Cloning Stable Diffusion XL into C:\Users\tommo\stable-diffusion-webui-directml\repositories\generative-models...
Cloning into 'C:\Users\tommo\stable-diffusion-webui-directml\repositories\generative-models'...
remote: Enumerating objects: 357, done.
remote: Counting objects: 100% (180/180), done.
remote: Compressing objects: 100% (104/104), done.
remote: Total 357 (delta 121), reused 76 (delta 76), pack-reused 177
Receiving objects: 100% (357/357), 22.26 MiB | 11.88 MiB/s, done.
Resolving deltas: 100% (159/159), done.
Cloning K-diffusion into C:\Users\tommo\stable-diffusion-webui-directml\repositories\k-diffusion...
Cloning into 'C:\Users\tommo\stable-diffusion-webui-directml\repositories\k-diffusion'...
remote: Enumerating objects: 735, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 735 (delta 4), reused 6 (delta 4), pack-reused 724
Receiving objects: 100% (735/735), 143.59 KiB | 11.04 MiB/s, done.
Resolving deltas: 100% (482/482), done.
Cloning CodeFormer into C:\Users\tommo\stable-diffusion-webui-directml\repositories\CodeFormer...
Cloning into 'C:\Users\tommo\stable-diffusion-webui-directml\repositories\CodeFormer'...
remote: Enumerating objects: 594, done.
remote: Counting objects: 100% (245/245), done.
remote: Compressing objects: 100% (98/98), done.
Receiving objects:  99% (589/594), 11.43 MiB | 11.38 MiB/s
remote: Total 594 (delta 176), reused 167 (delta 147), pack-re
Resolving deltas: 100% (287/287), done.
Cloning BLIP into C:\Users\tommo\stable-diffusion-webui-directml\repositories\BLIP...
Cloning into 'C:\Users\tommo\stable-diffusion-webui-directml\repositories\BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 10.32 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements for CodeFormer
Installing requirements
Installing diffusers
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Traceback (most recent call last):
  File "C:\Users\tommo\stable-diffusion-webui-directml\launch.py", line 39, in <module>
    main()
  File "C:\Users\tommo\stable-diffusion-webui-directml\launch.py", line 35, in main
    start()
  File "C:\Users\tommo\stable-diffusion-webui-directml\modules\launch_utils.py", line 443, in start
    import webui
  File "C:\Users\tommo\stable-diffusion-webui-directml\webui.py", line 54, in <module>
    from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, queue_lock  # noqa: F401
  File "C:\Users\tommo\stable-diffusion-webui-directml\modules\call_queue.py", line 6, in <module>
    from modules import shared, progress, errors
  File "C:\Users\tommo\stable-diffusion-webui-directml\modules\shared.py", line 95, in <module>
    directml_do_hijack()
  File "C:\Users\tommo\stable-diffusion-webui-directml\modules\dml\__init__.py", line 69, in directml_do_hijack
    _set_memory_provider()
  File "C:\Users\tommo\stable-diffusion-webui-directml\modules\dml\__init__.py", line 14, in _set_memory_provider
    from modules.shared import opts, cmd_opts, log
ImportError: cannot import name 'opts' from partially initialized module 'modules.shared' (most likely due to a circular import) (C:\Users\tommo\stable-diffusion-webui-directml\modules\shared.py)
Press any key to continue . . .

Additional information

No response
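The ImportError at the end of the console log above can be reproduced in miniature. A minimal sketch of the failure mode, using hypothetical module names (`shared_demo` and `dml_demo` stand in for the fork's `modules.shared` and `modules.dml`): a module calls into a second module during its own import, and the second module tries to import a name from the first before that name has been defined.

```python
# Minimal sketch of the circular-import failure in the traceback above.
# shared_demo and dml_demo are hypothetical stand-ins for modules.shared
# and modules.dml in the directml fork.
import os
import sys
import tempfile
import textwrap

workdir = tempfile.mkdtemp()

# shared_demo imports dml_demo and calls into it *before* defining 'opts',
# mirroring shared.py calling directml_do_hijack() during its own import.
with open(os.path.join(workdir, "shared_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import dml_demo
        dml_demo.do_hijack()   # runs while shared_demo is still initializing
        opts = {}              # defined too late for do_hijack's import
    """))

with open(os.path.join(workdir, "dml_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        def do_hijack():
            # Fails: shared_demo is in sys.modules but only partially built.
            from shared_demo import opts
    """))

sys.path.insert(0, workdir)
try:
    import shared_demo
    outcome = "imported cleanly"
except ImportError as exc:
    outcome = str(exc)

print(outcome)
```

Running this prints the same "cannot import name 'opts' from partially initialized module ... (most likely due to a circular import)" message as the log, which is why the error points at shared.py rather than at any one package version.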

@KimoriWasTaken KimoriWasTaken added the bug-report Report of a bug, yet to be confirmed label Jul 31, 2023
@XeroCreator

I was JUST about to post this...

I saw an update last night, but this morning I couldn't restart Auto1111.

Did an entire reinstall, to no avail.

Already up to date.
venv "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: 1.5.1
Commit hash: ed20ba7f9ff593bcf012db79278930a381a55748
Launching Web UI with arguments: --autolaunch --upcast-sampling --opt-sub-quad-attention --opt-split-attention-v1 --no-half-vae --medvram
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Traceback (most recent call last):
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\launch.py", line 39, in <module>
    main()
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\launch.py", line 35, in main
    start()
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\modules\launch_utils.py", line 443, in start
    import webui
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\webui.py", line 54, in <module>
    from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, queue_lock  # noqa: F401
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\modules\call_queue.py", line 6, in <module>
    from modules import shared, progress, errors
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\modules\shared.py", line 95, in <module>
    directml_do_hijack()
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\modules\dml\__init__.py", line 69, in directml_do_hijack
    _set_memory_provider()
  File "C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\modules\dml\__init__.py", line 14, in _set_memory_provider
    from modules.shared import opts, cmd_opts, log
ImportError: cannot import name 'opts' from partially initialized module 'modules.shared' (most likely due to a circular import) (C:\AI Art\Auto1111-SD\stable-diffusion-webui-directml\modules\shared.py)

@Sumpherien

I have the same issue on a previously working install; it was working like 12 hours ago. I guess something went wrong in the most recent update.

@XeroCreator

Had to be. I generated one image this morning before work to test that I could run a batch, and then something happened and it suddenly failed. Hoping for a fix in the next 2 hours :D lol (when I'm off work)

@KimoriWasTaken
Author

I can't seem to find anything that changed in the last ~12 hours: no commit here, no Windows update, nothing really. Guess we have to wait for someone.

@phataku

phataku commented Jul 31, 2023

I have this too. I don't know enough to fix it myself, and it seems like attempting to fix it might break any forthcoming fixes.

"from modules.shared import opts, cmd_opts, log
ImportError: cannot import name 'opts' from partially initialized module 'modules.shared' (most likely due to a circular import) (K:\stable diffusion\webui\stable-diffusion-webui-directml\modules\shared.py)
Press any key to continue . . ."

@XeroCreator

I have this too. I don't know enough to fix it myself. Seems like attempting to fix it might break any fixes forthcoming.

"from modules.shared import opts, cmd_opts, log ImportError: cannot import name 'opts' from partially initialized module 'modules.shared' (most likely due to a circular import) (K:\stable diffusion\webui\stable-diffusion-webui-directml\modules\shared.py) Press any key to continue . . ."

AFAIK you can mess with your own cloned repo and it won't impact the main one at all (not that you could anyway).
I may try to fix it at home if it isn't working, but I'm no programmer; this just seems like something that might be solved with a small change :x

@gkoogz

gkoogz commented Aug 1, 2023

I have this exact issue. It started about 12 hours ago. Waiting on a fix.

@XeroCreator

XeroCreator commented Aug 1, 2023

I figured out the temp fix: roll back to the previous commit. You can switch back to the latest version later when an update hits.

Temp fix for Auto1111 on AMD:
  1. Go to the folder where your 'stable-diffusion-webui-directml' folder is
  2. Right-click, open Git Bash
  3. Type git checkout 4873e6a

Use Auto1111 again.
Thank me later.

It might work for others too, though I'm not sure, since I use the fork for AMD and DirectML. I assume Nvidia users would just go to the main repo and check out the last commit that was good (if they're even having issues?).

@gkoogz

gkoogz commented Aug 1, 2023

@XeroCreator

OH MY GOSH THAT FIXED IT FOR NOW! THANK YOU

@cl0ck-byte

cl0ck-byte commented Aug 1, 2023

This is due to shoddy support for RDNA1 (and lower) cards (like the RX5700XT): in my experience torch>=2.0.0 won't run on these, at least on ROCm. This was apparently fixed in #11048, but someone forgot about it and forced the torch version to >=2.0.0 in requirements.txt.

I figured out the temp fix to roll back to the previous. You can switch back to the main repo later when an update hits.

Temp fix for Auto 1111 on AMD Go to folder where your 'stable-diffusion-webui-directml' folder is right click, open git bash type git checkout 4873e6a

use Auto1111 again. thank me later.

It might work on others, not sure though since I use the fork for AMD and directml, I assume nvidia would just go to the main repo and go to the last commit that was good (if they are even having issues?

Though I am a bit worried, since apparently RDNA2 cards aren't working either (#12228), and they did work before with torch>=2.0.0 (there's no workaround present in the shell script, so I'm assuming that's true), so it could be due to the newest torch version?
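If the forced pin in requirements.txt is indeed the culprit, one possible local workaround would be relaxing that pin to a pre-2.0 pairing. The version numbers below are illustrative only (1.13.1/0.14.1 was the last torch/torchvision pair before 2.0), and any local edit to requirements.txt gets overwritten by the next git pull:

```
# requirements.txt (hypothetical local edit; reverted by the next git pull)
torch==1.13.1
torchvision==0.14.1
```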

@gkoogz

gkoogz commented Aug 1, 2023

This is due to shoddy support of RDNA1 cards (like RX5700XT)
Though, I am a bit worried, since RDNA2 cards aren't working apparently too #12228 and they did before (no workaround present in shell script, so assuming that it's true) with torch>=2.0.0 so it could be due to newest torch version?

I'm on RDNA3. Same problem.

@cl0ck-byte

cl0ck-byte commented Aug 1, 2023

I'm on RDNA3. Same problem.

Try downgrading torch to the latest stable version (2.0.0) and report whether it works. torch 2.0.1 was released back in May, so there's no way it could be what broke things now.

Or downgrade to a nightly build from before 2023-07-31, unless I'm mistaken and other requirements (or something else outside my scope), like torch-directml or torchvision, are causing this issue.

@dhwz
Contributor

dhwz commented Aug 1, 2023

You know you're not using A1111, right? That's a fork, so please don't open issues on the A1111 repository.
Post your issue there: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues
This issue is invalid.

@KimoriWasTaken
Author

KimoriWasTaken commented Aug 1, 2023

You know you're not using A1111 right? That's a forks so please don't open issues on the A1111 repository. Post your issue on there: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues The issue is invalid.

In fact, I did not know.
The wiki for installing webui on AMD GPUs is on A1111:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

so unless you read each step super extra carefully, you won't notice.

edit:
My wording sounds a little offensive, but I meant to be apologetic, as it's an oversight on my part.
I'll close the issue here and mark your answer as the solution.
