nvcc wrapper: Add -ccbin only if not already supplied #54
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…).
(NB: If the same is needed on Windows, then I'll leave it for someone else to address, i.e. I don't have time/nerve for Windows CMD hacking.)
If needed, I'll handle the Windows part once this PR is merged!
This is running into the same gcc-vs-nvcc version problem that I had in conda-forge/faiss-split-feedstock#17 and conda-forge/faiss-split-feedstock#18:
Possible solutions are a manual downgrade (won't survive a rerender), an across-the-board downgrade to gcc=7 on linux, or tackling conda-forge/conda-forge-pinning-feedstock#1000.
Thanks for working on this Marcell! 😄
Generally this makes sense. CMake has gotten a bit smarter with setting these things. So it needs less hand-holding, which is nice to see 🙂
Made one suggestion below
recipe/install_nvcc.sh
Outdated
```sh
for i ; do
  case \$i in -ccbin)
    exec "\${CUDA_HOME}/bin/nvcc" "\${@}"
  esac
done
```
Would it make sense to just loop through `${@}` and see if `-ccbin` is there without the `exec`? Might simplify things a bit 🙂
> Would it make sense to just loop through `${@}`

This is exactly what `for i` does ;)

> without the `exec`?

The `exec` is just an independent improvement: if we have a launcher script, it makes sense to `exec` the called process so that our shell process can be stopped/replaced.
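(A minimal sketch of the equivalence claimed here, for reference — omitting the `in word ...` part of a `for` loop makes it iterate over the positional parameters:)

```sh
# the following two loops behave identically in POSIX sh
for i; do
  printf '%s\n' "$i"
done

for i in "$@"; do
  printf '%s\n' "$i"
done
```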
Interesting, would have expected that to look more like `for i in "${@}"`. It might be clearer to write that (even if `bash` is doing some magic under-the-hood).
> (even if `bash` is doing some magic under-the-hood).

It's standard POSIX shell: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_04_03

> Interesting, would have expected that to look more like `for i in "${@}"`. It might be clearer to write that

I don't mind either way -- I can add a commit to change that :).
It would make it clearer, I think. We are already `cat`-ing into a script (my fault 😞), so it's already a bit confusing. The less confusing we can make it, the better 🙂
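(Putting the pieces of this thread together, a minimal sketch of what such a launcher could look like once written to disk, i.e. with the heredoc escaping resolved; the fallback `-ccbin "${CC}"` default at the end is an assumption for illustration, not necessarily the exact recipe contents:)

```sh
#!/bin/bash
# Forward to the real nvcc unchanged if the caller already supplied -ccbin;
# otherwise append a default host compiler.
for i; do                      # iterates over "$@" (POSIX behaviour)
  case "$i" in
    -ccbin)
      # caller chose a host compiler: pass the arguments through untouched
      exec "${CUDA_HOME}/bin/nvcc" "$@"
      ;;
  esac
done
# no -ccbin seen: default to the conda-provided host compiler (assumed to be $CC)
exec "${CUDA_HOME}/bin/nvcc" -ccbin "${CC}" "$@"
```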
recipe/meta.yaml
Outdated
```yaml
# due to https://github.com/conda/conda-build/issues/3974
# => use hardcoded compiler package names to workaround that.
- {{ compiler("c") }}  # [not (linux64 and (cuda_compiler_version or "").startswith(("9.", "10.")))]
- gcc_linux-64 7       # [linux64 and (cuda_compiler_version or "").startswith(("9.", "10."))]
```
Does this have a `run_export`? If so, can we do `<=7` so users can still use an older compiler if desired? Similarly, it would be good to handle the CUDA 11.0 and 11.1 requirements: #51
Something like:
```diff
- - gcc_linux-64 7     # [linux64 and (cuda_compiler_version or "").startswith(("9.", "10."))]
+ - gcc_linux_64 <=7   # [linux64 and (cuda_compiler_version or "").startswith(("9.2", "10.0"))]
+ - gcc_linux_64 <=8   # [linux64 and (cuda_compiler_version or "").startswith(("10.1", "10.2"))]
+ - gcc_linux_64 <=9   # [linux64 and (cuda_compiler_version or "").startswith("11.0")]
+ - gcc_linux_64 <=10  # [linux64 and (cuda_compiler_version or "").startswith("11.1")]
```
This is just the compiler version used during the build/test. We only have `run_exports` for the runtimes (`libgcc-ng`/`libstdcxx-ng`) but not for the compilers themselves. You should be able to build downstream packages with older compilers either way.

I've used the exact versions because that's how we generally build packages on conda-forge, with a fixed compiler version (`7` or `9` recently). I've also not special-cased `nvcc=10.1|10.2` to have a consistent version with other packages (we hadn't pinned GCC `8` before).

My goal was to do the least necessary for the build and rather wait until someone fixes the mentioned conda-build bug instead of investing time in workarounds.

That said, I think it's best if I split this change off into another PR and just use a temporary workaround in the CI files for the intended change of this PR.
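(As a generic illustration of the `run_exports` mechanism mentioned above — the package name here is a placeholder, not one of the actual compiler recipes:)

```yaml
# hypothetical recipe for a runtime library "libfoo"
build:
  run_exports:
    # anything that lists libfoo in host automatically gets this run-time pin
    - {{ pin_subpackage('libfoo', max_pin='x.x') }}
```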
I hadn't looked at the whole recipe until now...

I don't really understand the purpose of the `{{ compiler('c') }}` dependency in `host` here. It is not used in any way. I also don't understand the other duplicate deps for sysroot/glibc.

I can't deduce what the intentions of the dependency declarations overall are. They need some cleanup, but without knowing the intended use, I can't help with that.
`nvcc` uses the host compiler during its compilation. Depending on the CUDA version, there are compatibility constraints on the host compiler version that we need to respect.

Additionally, as of CUDA 11, CentOS 6 is no longer supported, so we needed a way to force CentOS 7+ only.
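(For context, a hedged sketch of one common way a CentOS 7 floor is expressed in conda-forge recipes, via the `sysroot_linux-64` package; this is illustrative and not necessarily how this recipe does it:)

```yaml
requirements:
  host:
    # CentOS 7 ships glibc 2.17, so pinning the sysroot keeps builds off CentOS 6
    - sysroot_linux-64 2.17  # [linux64]
```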
> `nvcc` uses the host compiler during its compilation. Depending on the CUDA version, there are compatibility constraints on the host compiler version that we need to respect.

This recipe only creates wrapper/(de-)activation scripts => the compiler is not used. If it's meant to be declared for downstream packages, something has to be declared in `requirements/run` (see the sketch below).

> Additionally, as of CUDA 11, CentOS 6 is no longer supported, so we needed a way to force CentOS 7+ only.

The duplication of that in `host`/`run`/`test/requires` is what I meant.
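(Purely as a hypothetical sketch of the `requirements/run` idea above — the package name and pin are illustrative assumptions, not what the recipe does:)

```yaml
# hypothetical: export the host compiler to downstream consumers at run time
requirements:
  run:
    - gcc_linux-64 7  # [linux64]
```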
This reverts commit 337557b.
This change will be overridden with the next rerender.
Thanks @jaimergp, much appreciated!
@conda-forge/nvcc, this is ready for review. I opted to postpone the unrelated GCC version pinning to a later PR (not opened yet, because I had problems understanding what the desired actions would be, see #54 (comment)).
Will let @jakirkham take another look as well before merging.
Merging as no response from others.
Checklist
- Re-rendered with the latest `conda-smithy` (use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)

Closes gh-34
Probably fixes conda-forge/faiss-split-feedstock#17 (comment)
cc @h-vetinari