[bug]: ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor' torchvision 0.17 problem #5108
Comments
We've pinned torchvision to 0.16.0, which does not have this problem. The pin will be in the next release and should fix this for you. Hopefully the upstream PR gets merged.
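For reference, the pin described above can be captured in a requirements fragment (version numbers taken from this thread; adjust for your platform and package index):

```
torch==2.1.0
torchvision==0.16.0
```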
I edited it myself; doing this fixed it.
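As far as the linked workaround suggests, the edit referred to here is a one-line import change in basicsr's degradations.py: `torchvision.transforms.functional_tensor` was removed in torchvision 0.17, but `rgb_to_grayscale` is still available from `torchvision.transforms.functional`. A sketch of that edit with sed, using a stand-in file because the real path depends on your venv layout:

```shell
# Stand-in for the real file, whose location varies per install, e.g.
# .venv/lib/python3.10/site-packages/basicsr/data/degradations.py
f="degradations.py"
printf 'from torchvision.transforms.functional_tensor import rgb_to_grayscale\n' > "$f"

# Point the stale import at the module that still provides the function.
sed -i 's/torchvision\.transforms\.functional_tensor/torchvision.transforms.functional/' "$f"
cat "$f"
```

Note that this edits a file inside site-packages, so any reinstall undoes it; the pinned torchvision release or the upstream BasicSR fix is the durable solution.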
@KEDI103 torch 2.2 doesn't exist. torch 2.1.0 is fine with torchvision 0.16.0. While the fix you applied works, it's not a good way to resolve this issue. If we end up needing torchvision 0.17.0, we can figure out a solution (possibly just a fork of basicsr with the fix).
The latest torch release is 2.1.1, released just yesterday. You appear to have a dev/nightly build of torch installed; InvokeAI doesn't use those.
I was forced to use it; we went through months of testing to deal with PCIe atomics problems on ROCm. Also, could the updater get an option not to always force-uninstall torch? As it is, I have extra steps to delete and reinstall my pinned torch versions.
What kind of options do you need? Do you mean installing different versions of certain packages?
I meant that when I update, the Invoke updater detects my PyTorch version, uninstalls it every time, and reinstalls the one that is broken for me. Then I have to reinstall the PyTorch version I need from the dev console. An option to keep the installed PyTorch would be awesome; it would save me a lot of time.
@lstein Can we provide a special optional package group for this? Say, ".[rocm-fix]", something like that? I'm not sure how this all interacts. @KEDI103, which GPUs are affected?
Affected GPUs are gfx906 based GPUs with no PCIe atomics, e.g. Radeon VII, Instinct MI50, etc. |
Actually, this is not a GPU problem; it's a missing PCIe atomics problem.
Next week's nightly pre-build will show us. Again, it's not a GPU problem; it's the motherboard+CPU lacking PCIe atomics support.
Ah, I see. Thanks for clarifying. Do we have an ETA on torch 2.2.0? |
Well, right now I generate with torch-2.2.0.dev20231114+rocm5.7-cp310-cp310-linux_x86_64 and torchvision-0.17.0+rocm5.7-cp310-cp310-linux_x86_64.
Sorry, what I meant is, do we know when torch 2.2.0 is expected to be released? |
Maybe next week, because the wheel I am using is going to be merged into the official nightly pre-build.
Usually there are only a few torch minor releases per year (they say three per year is typical). We just got 2.1.0 at the beginning of October, so I'd be surprised if 2.2.0 arrives so soon after. If we can just wait until torch 2.2.0 is released, that seems a lot simpler than updating the installation process with a new UI for installing specific versions of things.
`basicsr` has a hard dependency on torchvision <= 0.16 and is unmaintained. Extract the code we need from it and remove the dep. Closes #5108
Still have this issue in 2.2.2 on Mac. Name: torch
@ohmerhe You are on an old version. Please follow the instructions here to update: https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/ |
Is there an existing issue for this?
OS
Linux
GPU
amd
VRAM
16
What version did you experience this issue on?
All versions contain this problem.
What happened?
Screenshots
No response
Additional context
The cure is:
AUTOMATIC1111/stable-diffusion-webui#13985 (comment)
I tested it for AUTOMATIC1111 and it works, but for InvokeAI the venv folder is hidden and only accessible through the dev console, so I could use a bit of help editing it.
The real fix, awaiting approval:
XPixelGroup/BasicSR#650
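An alternative that avoids touching files inside the hard-to-reach venv: Python lets you alias a removed module path at runtime via `sys.modules` before the offending import runs. This is a sketch of the general technique, demonstrated with a stdlib module so it is self-contained; the torchvision call in the comment is an assumption about where the function moved, not something verified here:

```python
import importlib
import sys

def install_module_alias(missing: str, replacement: str) -> None:
    """Register `replacement` under the name `missing` so stale
    `import missing` lines resolve without editing third-party code."""
    sys.modules[missing] = importlib.import_module(replacement)

# Demonstrated with a stdlib module so the sketch runs anywhere.
# For the basicsr case you would presumably call, before `import basicsr`:
#   install_module_alias("torchvision.transforms.functional_tensor",
#                        "torchvision.transforms.functional")
install_module_alias("legacy_operator", "operator")
import legacy_operator  # resolves via the alias in sys.modules

print(legacy_operator.add(2, 3))  # -> 5
```

Because the import machinery consults `sys.modules` before searching for a module, the alias must be installed before anything imports the removed path.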
Contact Details
discord: b_cansin