
compiler errors: common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies: #10

Closed
dumpinfo opened this issue Jan 2, 2022 · 6 comments · Fixed by #15

Comments


dumpinfo commented Jan 2, 2022

tiny-cuda-nn/include/tiny-cuda-nn/common_device.h(75): error: more than one conversion function from "tcnn::network_precision_t" to a built-in type applies:
function "__half::operator float() const"
function "__half::operator short() const"
function "__half::operator unsigned short() const"
function "__half::operator int() const"
function "__half::operator unsigned int() const"
function "__half::operator long long() const"
function "__half::operator unsigned long long() const"
function "__half::operator __nv_bool() const"
detected during:
instantiation of "void tcnn::warp_activation<T,fragment_t>(tcnn::Activation, const fragment_t &, fragment_t &) [with T=tcnn::network_precision_t, fragment_t=tcnn::vector_fragment_t<tcnn::network_precision_t, 8U>]"
(245): here
instantiation of "void tcnn::kernel_activation(uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t, N=8U]"
(287): here
instantiation of "void tcnn::activation_gpu(cudaStream_t, uint32_t, tcnn::Activation, const T *, T *) [with T=tcnn::network_precision_t]"

Environment:
Ubuntu 18.04
GTX 1080
g++ 9.4.0
CUDA 11.0


Tom94 commented Jan 4, 2022

Unfortunately, I'm at a loss as to what's wrong with the compilation here. Two things of note, though:

  • Compatibility with older CUDA versions and newer compilers is not always given. It may be worth either upgrading to the latest CUDA version (11.5, Update 1) or trying to match the CUDA/compiler versions of CI. (see https://github.com/NVlabs/tiny-cuda-nn/runs/4695456983?check_suite_focus=true)
  • GTX 1000-series GPUs are technically not supported by the framework as they don't have TensorCores. There's a chance they'll still run, but I have no guarantees. Could you tell me which compute architecture is detected by CMake?


dumpinfo commented Jan 4, 2022

> Unfortunately, I'm at a loss as to what's wrong with the compilation here. Two things of note, though:
>
>   • Compatibility with older CUDA versions and newer compilers is not always given. It may be worth either upgrading to the latest CUDA version (11.5, Update 1) or trying to match the CUDA/compiler versions of CI. (see https://github.com/NVlabs/tiny-cuda-nn/runs/4695456983?check_suite_focus=true)
>   • GTX 1000-series GPUs are technically not supported by the framework as they don't have TensorCores. There's a chance they'll still run, but I have no guarantees. Could you tell me which compute architecture is detected by CMake?

It is an old Linux server.
Motherboard: Supermicro X9DR3-F, dual Socket R (LGA 2011)
CPU: 2× Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz


Tom94 commented Jan 4, 2022

Could you tell me the detected GPU architecture? (this line in CMake:)
-- Targeting GPU architectures: XX


dumpinfo commented Jan 4, 2022

> Could you tell me the detected GPU architecture? (this line in CMake:)
> -- Targeting GPU architectures: XX

-- The CXX compiler identification is GNU 9.4.0
-- The CUDA compiler identification is NVIDIA 11.0.194
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/local/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- No release type specified. Setting to 'Release'.
-- Automatically detected GPU architectures: 61;52
-- Configuring done
-- Generating done
nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 980 Off | 00000000:06:00.0 Off | N/A |
| 0% 36C P8 16W / 180W | 277MiB / 4043MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 1080 Off | 00000000:82:00.0 Off | N/A |
| 27% 32C P8 6W / 180W | 11MiB / 8119MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+


pwais commented Jan 14, 2022

I got the same compiler error with a K80 installed. I set export TCNN_CUDA_ARCHITECTURES=86 manually and things compiled. It's important to do this before running cmake; if you set it afterwards, you need to delete your build directory (for an out-of-source build) and re-run cmake. (Sometimes the cmake-generated files will adapt, but it looks like the compute setting gets embedded in the build artifacts.)
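The override described above can be done from a clean checkout; a sketch, assuming an out-of-source build in ./build (the paths and the "86" value are examples, not a recommendation for any particular GPU):

```shell
# Set the architecture override BEFORE the first cmake invocation.
export TCNN_CUDA_ARCHITECTURES=86

# Delete any stale build: the compute setting is baked into its artifacts.
rm -rf build

cmake . -B build
cmake --build build -j
```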

@Tom94 Is there any chance we might get support for older hardware like K80s, even if it's a bit slower? OP does not have Tensor Core GPUs, but a 1080 is pretty fast and has plenty of CUDA cores.


pwais commented Jan 17, 2022

@Tom94 FWIW I tried the latest commit (well, of instant-ngp at least) on Kepler / K80 / compute 37, and it looks like the really old hardware might still need some help. Context: K80s are really cheap in the cloud and common on Colab / Kaggle etc., and they have 11 GB of RAM (times two for a K80).

https://gist.github.com/pwais/4283a069c4b736d59e364e22048ae0ce
