When I run
`pip install --global-option="--no-networks" git+https://github.com/NVlabs/tiny-cuda-nn#subdirectory=bindings/torch`
the following error is shown:
```
WARNING: Implying --no-binary=:all: due to the presence of --build-option / --global-option / --install-option. Consider using --config-settings for more flexibility.
DEPRECATION: --no-binary currently disables reading from the cache of locally built wheels. In the future --no-binary will not influence the wheel cache. pip 23.1 will enforce this behaviour change. A possible replacement is to use the --no-cache-dir option. You can use the flag --use-feature=no-binary-enable-wheel-cache to test the upcoming behaviour. Discussion can be found at pypa/pip#11453
Collecting git+https://github.com/NVlabs/tiny-cuda-nn#subdirectory=bindings/torch
Cloning https://github.com/NVlabs/tiny-cuda-nn to /tmp/pip-req-build-9_cmwgll
Running command git clone --quiet https://github.com/NVlabs/tiny-cuda-nn /tmp/pip-req-build-9_cmwgll
Resolved https://github.com/NVlabs/tiny-cuda-nn to commit 14053e9a87ebf449d32bda335c0363dd4f5667a4
Running command git submodule update --init --recursive -q
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "/tmp/pip-req-build-9_cmwgll/bindings/torch/setup.py", line 30, in
raise EnvironmentError("Unknown compute capability. Specify the target compute capabilities in the TCNN_CUDA_ARCHITECTURES environment variable or install PyTorch with the CUDA backend to detect it automatically.")
OSError: Unknown compute capability. Specify the target compute capabilities in the TCNN_CUDA_ARCHITECTURES environment variable or install PyTorch with the CUDA backend to detect it automatically.
Building PyTorch extension for tiny-cuda-nn version 1.7
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Regardless, this is an issue with tiny-cuda-nn, not nvdiffrec, so I recommend filing an issue at https://github.com/NVlabs/tiny-cuda-nn or checking there for a recommended solution.
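For reference, the error message itself points at the usual workaround: the build environment has no CUDA-enabled PyTorch, so setup.py cannot auto-detect the GPU's compute capability and needs `TCNN_CUDA_ARCHITECTURES` set explicitly. A minimal sketch, assuming a compute capability of 8.6 (e.g. an RTX 30-series card); substitute the value for your own GPU:

```bash
# Tell tiny-cuda-nn's setup.py which compute capability to target,
# since no CUDA-enabled PyTorch is available to detect it automatically.
# 86 is an assumption for an RTX 30-series GPU; use your card's value.
export TCNN_CUDA_ARCHITECTURES=86

# Matches the CUDA_HOME path reported in the log above.
export CUDA_HOME=/usr/local/cuda

pip install git+https://github.com/NVlabs/tiny-cuda-nn#subdirectory=bindings/torch
```

Alternatively, installing a CUDA build of PyTorch first lets setup.py detect the compute capability on its own, as the error message suggests.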