
RuntimeError: CUDA error: no kernel image is available for execution on the device #6

Closed
xielongze opened this issue Feb 2, 2021 · 27 comments


@xielongze

I'm trying to run the sample code but it raises an error. I'm running on an RTX 3090 with CUDA 11.1 (as the description recommends) and cuDNN 8.0.5. The message is attached below.
[screenshot: error message]

I'm able to run pytorch with cuda.
[screenshot: PyTorch CUDA check]
Do you have any idea how to solve this problem? Thanks in advance!
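For context, "no kernel image is available for execution on the device" typically means the installed PyTorch binary does not include compiled kernels for the GPU's compute capability (sm_86 for the RTX 3090). A minimal sketch of the matching logic, where `is_capability_supported` is a hypothetical helper; PyTorch exposes the real data via `torch.cuda.get_arch_list()` and `torch.cuda.get_device_capability()`:

```python
# Hypothetical helper illustrating why this error occurs: the PyTorch
# binary ships kernels only for certain compute capabilities; if the
# GPU's capability is not covered, no kernel image can run on it.

def is_capability_supported(device_cap, compiled_arches):
    """Return True if kernels compiled for `compiled_arches` can run on
    a GPU with capability `device_cap` (a (major, minor) tuple).

    A `compute_XX` (PTX) entry can be JIT-compiled for newer GPUs;
    a plain `sm_XX` entry only runs on that exact architecture.
    """
    major, minor = device_cap
    cap = major * 10 + minor
    for arch in compiled_arches:
        kind, num = arch.split("_")
        num = int(num)
        if kind == "sm" and num == cap:
            return True   # exact binary match
        if kind == "compute" and num <= cap:
            return True   # PTX can be JIT-compiled for a newer GPU
    return False

# RTX 3090 is compute capability 8.6 (sm_86).
print(is_capability_supported((8, 6), ["sm_37", "sm_61", "sm_70", "sm_75"]))  # False -> error
print(is_capability_supported((8, 6), ["sm_70", "sm_75", "sm_80", "sm_86"]))  # True  -> works
```

On a real install, comparing `torch.cuda.get_device_capability(0)` against `torch.cuda.get_arch_list()` shows at a glance whether the wheel supports the card.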

@xielongze (Author)

[screenshot: PyTorch version]
A quick update of my pytorch version

@johndpope

johndpope commented Feb 2, 2021

what nvidia driver are you using? I have a 3090 and recommend the latest cudatoolkit 11.2 / driver 460 - you can check with
nvidia-smi. I have this working with Docker, no problem; yet to try directly on the host. Am using the nightly pytorch build -

- pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html

https://developer.nvidia.com/Cuda-downloads

related
pytorch/pytorch#51080

@xielongze (Author)

> what nvidia driver are you using? I have 3090 and recommend latest cudatoolkit 11.2 / driver 460 - you can check with nvidia-smi […]

I followed your suggestions and reinstalled everything, but the error persists.

@johndpope

I used run_docker.sh / maybe try updating Docker? I made a PR that added extra params for the 3090 card, though it worked without them. Do you have 2 GPUs or just 1? Are you sure you're on CUDA 11.2, not 11.1?

@xielongze (Author)

> I used run_docker.sh / maybe try updating docker? […] Are you sure you're on cuda11.2 not 11.1?

[screenshot: configuration]

Here is my configuration. I only have 1 GPU. I'll try Docker later, but I do hope the issue can be solved on Windows, since I've never used Docker before.

@nurpax (Contributor)

nurpax commented Feb 3, 2021

Hi, can you post some extra details about your environment? Here's the procedure that the pytorch bug process requires. Preferably run the script and post the results (as text please, not a screenshot for this one).

Environment

Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).

You can get the script and run it with:

wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
  • PyTorch Version (e.g., 1.0):
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, source):
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • Any other relevant information:

@johndpope

johndpope commented Feb 3, 2021

I'd just throw in another hard drive and boot to Ubuntu / Pop!_OS. https://pop.system76.com/ You can get the Linux subsystem to work with Windows, but you could waste days getting it working. I'm using an iMac and connect to an HP workstation running Pop!_OS via RemoteDesktop.google.com

@nurpax (Contributor)

nurpax commented Feb 3, 2021

Many people have been able to get started with StyleGAN2-ADA on native PyTorch on Windows 10 without problems. PyTorch is pretty well packaged, so it's a lot easier than what it used to be with SG2. While I prefer Linux myself, I wouldn't give up on Windows 10 just yet..

Waiting for @xielongze to post the results of the env collection script.

@xielongze (Author)

xielongze commented Feb 3, 2021

> Hi, can you post some extra details about your environment? Here's the procedure that pytorch bug process requires. […]

Thanks a lot!
I have some additional information about my current environment.

  1. I installed Visual Studio somewhere other than C:/, and I added 'E:/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.28.29333/bin/Hostx64/x64' to _find_compiler_bindir in /torch_utils/custom_ops.py. I noticed that I have both 14.16.27023 and 14.28.29333 under the MSVC directory. Is this the right thing to do?
  2. My Windows system language is Chinese rather than English. I don't know if that's relevant.

Below is the script result.
Collecting environment information...
PyTorch version: 1.7.1
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Professional (专业版)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.19.3

Python version: 3.7 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 3090
Nvidia driver version: 460.89
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.3
[pip3] torch==1.7.1
[pip3] torchvision==0.8.2
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 11.0.3 h3f58a73_6 conda-forge
[conda] mkl 2020.2 256 defaults
[conda] mkl-service 2.3.0 py37h196d8e1_0 defaults
[conda] mkl_fft 1.2.0 py37h45dec08_0 defaults
[conda] mkl_random 1.1.1 py37h47e9c7a_0 defaults
[conda] numpy 1.20.0 pypi_0 pypi
[conda] numpy-base 1.19.2 py37ha3acd2a_0 defaults
[conda] pytorch 1.7.1 py3.7_cuda110_cudnn8_0 pytorch
[conda] torchvision 0.8.2 py37_cu110 pytorch

@nurpax (Contributor)

nurpax commented Feb 3, 2021

Thanks!

Is PyTorch working for you otherwise on the GPU? Try running the below commands in your Python interpreter:

$ python
Python 3.8.5 (default, Sep  4 2020, 07:30:14) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.version.cuda
'11.0'
>>> torch.backends.cudnn.version()
8005
>>> torch.tensor([1.0, 2.0])
tensor([1., 2.])
>>> torch.tensor([1.0, 2.0]).cuda()
tensor([1., 2.], device='cuda:0')
>>> 

@xielongze (Author)

> Is PyTorch working for you otherwise on the GPU? Try running the below commands in your Python interpreter: […]

As far as I can tell it works fine, but the output is slightly different from yours. Is there a version incompatibility here?

(style-gan) F:/>python
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.version.cuda
'11.0'
>>> torch.backends.cudnn.version()
8004
>>> torch.tensor([1.0, 2.0])
tensor([1., 2.])
>>> torch.tensor([1.0, 2.0]).cuda()
tensor([1., 2.], device='cuda:0')
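The only visible difference between the two transcripts is the cuDNN build (8005 vs 8004), a patch-level gap. A small sketch of decoding the integer that `torch.backends.cudnn.version()` returns; the `major*1000 + minor*100 + patch` encoding is an assumption inferred from the 8005 → 8.0.5 values seen in this thread:

```python
# Decode a cuDNN version integer like 8005 into (major, minor, patch).
# Encoding assumed: major*1000 + minor*100 + patch, matching the
# 8.0.5 -> 8005 values quoted above; verify for other cuDNN releases.

def decode_cudnn_version(v):
    """Split a cuDNN version integer into a (major, minor, patch) tuple."""
    major = v // 1000
    minor = (v % 1000) // 100
    patch = v % 100
    return major, minor, patch

print(decode_cudnn_version(8005))  # (8, 0, 5) -- nurpax's output
print(decode_cudnn_version(8004))  # (8, 0, 4) -- xielongze's output
```

A patch-level difference like this is very unlikely to be the cause of a missing kernel image.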

@nurpax (Contributor)

nurpax commented Feb 3, 2021

I'm using the Anaconda3 2020.11 version from the Anaconda website and I'm running Linux. I guess that explains the difference.

Worth trying out also: delete this directory and its contents and rerun:

%USERPROFILE%\AppData\Local\torch_extensions\torch_extensions\Cache
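The cache deletion above can be scripted; a sketch using the standard library, with `clear_extension_cache` being a hypothetical helper name (the real cache path is the `%USERPROFILE%` one quoted above):

```python
import os
import shutil
import tempfile

def clear_extension_cache(cache_dir):
    """Delete cached extension builds so they are rebuilt from scratch
    on the next run; returns True if anything was removed."""
    if os.path.isdir(cache_dir):
        shutil.rmtree(cache_dir)
        return True
    return False

# Demo against a throwaway directory; on Windows you would pass
# os.path.expandvars(r"%USERPROFILE%\AppData\Local\torch_extensions\torch_extensions\Cache").
demo = os.path.join(tempfile.mkdtemp(), "Cache")
os.makedirs(demo)
print(clear_extension_cache(demo))  # True  (removed)
print(clear_extension_cache(demo))  # False (already gone)
```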

@xielongze (Author)

> Worth trying out also: delete this directory and its contents and rerun: %USERPROFILE%\AppData\Local\torch_extensions\torch_extensions\Cache

Did it but still no luck.

@johndpope

Can you run
pip freeze > test.txt
and post the results?

Did you configure packages for CUDA using conda or pip? I'm aware of cuda toolkit having problems with the 3090 on the latest conda commit.

@xielongze (Author)

Can you run
Pip freeze > test.txt
And post results ?

Did you configure packages for cuda using conda or pip ? I’m aware of cuda toolkit having problems with 3090 on conda latest commit.

Here is the result of pip freeze
backcall==0.2.0
brotlipy==0.7.0
certifi==2020.12.5
cffi @ file:///C:/ci/cffi_1606255207413/work
chardet @ file:///C:/ci/chardet_1607706910910/work
click==7.1.2
colorama==0.4.3
cryptography @ file:///C:/ci/cryptography_1607639129468/work
decorator==4.4.2
idna @ file:///home/linux1/recipes/ci/idna_1610986105248/work
imageio-ffmpeg==0.4.3
ipykernel==5.3.4
ipython==7.18.1
ipython-genutils==0.2.0
jedi==0.17.2
jupyter-client==6.1.7
jupyter-core==4.6.3
mkl-fft==1.2.0
mkl-random==1.1.1
mkl-service==2.3.0
ninja==1.10.0.post2
numpy==1.19.3
olefile==0.46
parso==0.7.1
pickleshare==0.7.5
Pillow @ file:///C:/ci/pillow_1609786872067/work
prompt-toolkit==3.0.7
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
Pygments==2.7.1
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
PySocks @ file:///C:/ci/pysocks_1594394709107/work
pyspng==0.1.0
pywin32==228
pyzmq==19.0.2
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
six @ file:///C:/ci/six_1605205426665/work
torch==1.7.1
torchvision==0.8.2
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1611857934208/work
traitlets==5.0.4
typing-extensions @ file:///tmp/build/80754af9/typing_extensions_1611751222202/work
urllib3 @ file:///tmp/build/80754af9/urllib3_1611694770489/work
wcwidth==0.2.5
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306165655/work
wincertstore==0.2

I did install cuda using conda, but I'm not sure if the problem is with conda.


@nurpax (Contributor)

nurpax commented Feb 3, 2021

I don't know if this helps, but when we install PyTorch, we install it using conda (as per pytorch.org instructions for CUDA 11.0 and conda). This automatically installs whatever cudatoolkit package pytorch needs.

But in addition, you need a separate CUDA toolkit installation from NVIDIA's website so that custom extension builds work (they spawn nvcc among other things -- these tools are not installed when you install pytorch.). I think you already have this, just mentioning this for completeness.

I'm running out of things to try. Probably one more thing to try is to do a full reinstall of Python and ensure that you're running what you intended. For us the full Anaconda distribution version 2020.11 has worked well, you need only a couple of extra packages on top of that. But like John says above, some versions apparently have problems with 3090. FWIW, I successfully ran StyleGAN2-ADA pytorch on RTX 3090 yesterday on Linux and it was working just fine. Also a colleague at work was running it on Windows with Anaconda using what I believe was pytorch 1.7-cu110 with either a CUDA 11.1 or CUDA 11.2 toolkit installed with NVIDIA's installers.

One more thing: please post the build.ninja file from here %USERPROFILE%\AppData\Local\torch_extensions\torch_extensions\Cache. That might have some clues about what tools get picked up in extension builds.

@johndpope

johndpope commented Feb 3, 2021

Can you get torch-1.8.0.dev20210129 installed? You're on 1.7 - don't use conda to install pytorch.
pytorch/pytorch#51080

I had a similar issue - this worked and resolved all issues:

pip uninstall torch
pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html

Successfully installed torch-1.8.0.dev20210129+cu110

@xielongze (Author)

> I'm running out of things to try. Probably one more thing to try is to do a full reinstall of Python and ensure that you're running what you intended. […]

By any chance, could you list all of the dependencies of this project, including the CUDA and cuDNN versions as well as the Python packages and their versions? I'll try installing the same packages and see if that works. Thanks in advance.

@nurpax (Contributor)

nurpax commented Feb 3, 2021

I don't have that at hand as I work on Linux, but in a nutshell here's what I know has been working (a colleague used this yesterday - but you should double and triple check what John's been saying in this thread too):

  1. Blow away your Python installation in its entirety and make sure none of it remains in PATH.
  2. Install https://www.anaconda.com/products/individual (this will install a lot of packages) -- for Windows 10 64-bit the direct link is https://repo.anaconda.com/archive/Anaconda3-2020.11-Windows-x86_64.exe
  3. Install pytorch with conda (with options like in the below screenshot). I don't create a separate conda environment, I've just installed pytorch in the base environment.
  4. Install NVIDIA's CUDA toolkit 11.1 from NVIDIA's website

Ensure that you don't have other versions of the CUDA toolkit anywhere on your system.

You should NOT need to separately install cuDNN, as step 3 above installs it with pytorch.

[screenshot: pytorch.org install selector options]

I can't give much more precise information as I don't have a Windows machine to debug this on and I cannot reproduce this on my system.

BTW: I think you missed my earlier question: "One more thing: please post the build.ninja file from here %USERPROFILE%\AppData\Local\torch_extensions\torch_extensions\Cache. That might have some clues about what tools get picked up in extension builds."

@nerdyrodent

Don't know if it helps, but Jeff Heaton has a Windows guide - https://www.youtube.com/watch?v=BCde68k6KXg

@xielongze (Author)

xielongze commented Feb 5, 2021

> I don't have that at hand as I work on Linux, but in a nutshell here's what I know has been working […] BTW: I think you missed my earlier question: "One more thing: please post the build.ninja file from here %USERPROFILE%\AppData\Local\torch_extensions\torch_extensions\Cache."

Sorry I missed that. Below is the file content. I noticed that I did download and install cuDNN from NVIDIA separately. Is that a problem? I'll try to reinstall everything as you suggest next week and see if that helps.

ninja_required_version = 1.3
cxx = cl
nvcc = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin\nvcc

cflags = -DTORCH_EXTENSION_NAME=bias_act_plugin -DTORCH_API_INCLUDE_EXTENSION_H -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include\TH -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include" -IC:\ProgramData\Anaconda3\envs\style-gan\Include -D_GLIBCXX_USE_CXX11_ABI=0 /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc
post_cflags =
cuda_cflags = -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=bias_act_plugin -DTORCH_API_INCLUDE_EXTENSION_H -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include\TH -IC:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include" -IC:\ProgramData\Anaconda3\envs\style-gan\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --use_fast_math
cuda_post_cflags =
ldflags = /DLL c10.lib c10_cuda.lib torch_cpu.lib torch_cuda.lib -INCLUDE:?warp_size@cuda@at@@yahxz torch.lib torch_python.lib /LIBPATH:C:\ProgramData\Anaconda3\envs\style-gan\libs /LIBPATH:C:\ProgramData\Anaconda3\envs\style-gan\lib\site-packages\torch\lib "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib/x64" cudart.lib

rule compile
command = cl /showIncludes $cflags -c $in /Fo$out $post_cflags
deps = msvc

rule cuda_compile
command = $nvcc $cuda_cflags -c $in -o $out $cuda_post_cflags

rule link
command = "E$:\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29333\bin\Hostx64\x64/link.exe" $in /nologo $ldflags /out:$out

build bias_act.o: compile F$:\python_projects\stylegan2-ada-pytorch-main\torch_utils\ops\bias_act.cpp
build bias_act.cuda.o: cuda_compile F$:\python_projects\stylegan2-ada-pytorch-main\torch_utils\ops\bias_act.cu

build bias_act_plugin.pyd: link bias_act.o bias_act.cuda.o

default bias_act_plugin.pyd
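The `-gencode=arch=compute_61,code=sm_61` flag in the `cuda_cflags` above stands out: kernels built only for sm_61 (Pascal) cannot execute on an sm_86 card like the RTX 3090, which would produce exactly this error. A hedged sketch of one possible workaround, assuming the extension build honors `TORCH_CUDA_ARCH_LIST` (which `torch.utils.cpp_extension` reads when choosing gencode flags):

```shell
# Pin the compute capability the extension build targets (8.6 = RTX 3090),
# then delete the cached build so it is regenerated with the new flag.
# On Windows cmd the equivalent is:  set TORCH_CUDA_ARCH_LIST=8.6
export TORCH_CUDA_ARCH_LIST="8.6"
echo "$TORCH_CUDA_ARCH_LIST"
```

This is a sketch, not a fix confirmed in this thread; clearing the `torch_extensions` cache afterwards is what forces the rebuild.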

@nurpax (Contributor)

nurpax commented Feb 5, 2021

> I did download and install cudnn from Nvidia separately

Not sure if it's a problem, just mentioning that this should not be necessary with PyTorch.

@xielongze (Author)

Finally, I am able to get the project to work! I still have no idea what went wrong last time, but below is what I did to make it right. Hope it's helpful.

First, as @nurpax mentioned, completely remove Python/Anaconda and install Anaconda3-2020.11. Make sure none of it remains in PATH after removing.
Second, reinstall Visual Studio 2019 Community in the default directory, then add "C:\Program Files (x86)\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build" to the system PATH. Last time I installed it in a customized directory, which caused an error in custom_ops.py.
Third, remove all CUDA components, every version of them, then reinstall CUDA 11.1. Last time I had several CUDA versions (10.1, 11.0, 11.1), so even if "nvcc -V" returns the right version, it may still cause trouble. I didn't install cuDNN from NVIDIA's official source, just CUDA.
Next, install pytorch with "conda install pytorch torchvision torchaudio cudatoolkit=10.1" in a Python 3.8.5 virtual environment. I didn't add the channel flag because I used the Tsinghua channel.
After all this, I'm able to get generate.py running.

Thank you soooooooo much for your help!

@lff12940

Hi, how was the problem solved in the end? I have encountered the same problem.

@johnkuan506

Hi @lff12940, you can try changing your pytorch version using https://pytorch.org/get-started/previous-versions/. It worked for me when I installed pytorch with conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge. If pytorch==1.8.0 doesn't work for you, try another pytorch version instead. BTW, I didn't update Anaconda or make other adjustments, only updated pytorch to another version. Hope this helps.

@hosein-cnn

Hi, I am also facing this problem, but I don't use PyTorch; I just want to write simple CUDA C++ code.

I use the following:

  1. Windows 10, 19044 (21H2)
  2. Visual Studio 2019 or 2022
  3. Nvidia GeForce GTX 960M, Maxwell, compute capability 5.0
  4. CUDA Toolkit 11.7, 11.8, or 12.0

After I installed all the C++ packages for VS2019, I installed the CUDA Toolkit.
When I run sample code, I get the following error:

  • No kernel image is available for execution on the device.

I have tried CUDA 11.7, 11.8, and 12.0 on both VS2019 and VS2022, but the error still exists.
I have been facing this error for 20 days and am really fed up. deviceQuery.exe, nvidia-smi, and nvcc --version all run fine.
I have also checked NVIDIA's and other sites, and my GPU has no compatibility problem with these CUDA versions.
What could be causing this error?

Please help me out.
Thanks in advance.
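One frequent cause of this error outside PyTorch, offered here as an editorial suggestion rather than a fix confirmed in this thread, is that the project compiles for a default architecture different from the card's. For a GTX 960M (compute capability 5.0), the build would need to target sm_50 explicitly, e.g.:

```shell
# Hypothetical build command: compile the sample for Maxwell sm_50.
# In Visual Studio the same setting lives under
# CUDA C/C++ -> Device -> Code Generation (compute_50,sm_50).
nvcc -gencode arch=compute_50,code=sm_50 kernel.cu -o kernel.exe
```

If the Code Generation property is left at a value the card cannot run (for example only a newer sm_XX), deviceQuery and nvcc will still work fine, yet custom kernels will fail with exactly this error.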
