What do I need to do to fix this?
open-oasis\generate.py", line 19, in <module>
model = model.to(device).eval()
^^^^^^^^^^^^^^^^
File "C:\Python312\Lib\site-packages\torch\nn\modules\module.py", line 1340, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\Python312\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
module._apply(fn)
File "C:\Python312\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
module._apply(fn)
File "C:\Python312\Lib\site-packages\torch\nn\modules\module.py", line 927, in apply
param_applied = fn(param)
^^^^^^^^^
File "C:\Python312\Lib\site-packages\torch\nn\modules\module.py", line 1326, in convert
return t.to(
^^^^^
File "C:\Python312\Lib\site-packages\torch\cuda_init.py", line 310, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
I believe this model was written with NVIDIA GPUs in mind, but you might be able to make it use OpenCL or SYCL instead, which should support AMD GPUs. I've heard SYCL can be competitive with CUDA in some workloads.
EDIT: You may want to check out this issue; it looks like they've fixed it there.
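In the meantime, if you just want the script to start on a machine without a CUDA-enabled build, here's a minimal CPU-fallback sketch. It assumes generate.py hard-codes `device = "cuda"` around line 19 (I don't have the file in front of me), and the `nn.Linear` below is only a hypothetical stand-in for the real model:

```python
import torch
import torch.nn as nn

# Pick CUDA only when the installed torch build actually supports it;
# otherwise fall back to CPU, which avoids the
# "Torch not compiled with CUDA enabled" AssertionError.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical placeholder for the real model constructed in generate.py.
model = nn.Linear(8, 8)
model = model.to(device).eval()

print(f"Running on {device}")
```

CPU inference will be painfully slow for a model like this, but at least it gets past the crash while you sort out the GPU backend.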
I think this can be closed as a duplicate. I'd also suggest either using WSL (not sure whether ROCm is available there) or Linux on bare metal; you're likely to get the best results that way, since PyTorch with ROCm support isn't available on Windows for AMD GPUs. Last time I checked that might change soon, but it's been a while and I haven't heard much since. If you want to confirm what your current wheel was actually built for, the snippet below prints the relevant backend info.
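This is generic PyTorch introspection, not anything open-oasis-specific:

```python
import torch

# Report which accelerator backend, if any, this torch wheel was built with.
print("torch:", torch.__version__)
print("CUDA build:", torch.version.cuda)                       # None on CPU-only wheels
print("ROCm/HIP build:", getattr(torch.version, "hip", None))  # None unless it's a ROCm wheel
print("CUDA runtime available:", torch.cuda.is_available())
```

On the default CPU-only Windows wheel you'd expect both build fields to be None, which matches the AssertionError above.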