Add CUDA 11.8 cross-compilation support #261
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
It would be nice if this actually turns out to be this painless (given the struggle to get this right for 11.2)!

I would still suggest actually letting this run through execution here before merging. At the time of #210, I had locally pulled in a (cross-enabled) 11.2 migrator, so that the CI would generate the respective jobs here and actually execute the branches in question.

In your case, this would be a cross-enabled version of conda-forge/conda-forge-pinning-feedstock#4834 (plus `local: True`).

I don't necessarily expect that nvidia's artefact naming has changed a lot between 11.2 & 11.8, but IMO it's better to double-check. As one concrete issue, the `extra="11-2-"` further down will need adaptation as well.
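To make that concrete, here is a hedged sketch of the kind of adaptation meant above, in the style of a feedstock build script. Only the `extra="11-2-"` fragment comes from this PR; the branching, the surrounding script, and the exact artefact-name pattern are assumptions.

```bash
# Sketch (assumption): choose the version infix that appears in NVIDIA's
# cross-compilation artefact names, e.g. "...-cross-aarch64-11-8" rather
# than "...-11-2". ${cuda_compiler_version} is provided by conda-build.
if [[ "${cuda_compiler_version}" == 11.8* ]]; then
    extra="11-8-"
elif [[ "${cuda_compiler_version}" == 11.2* ]]; then
    extra="11-2-"
fi
```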
Thanks Axel! 🙏 Appreciate the helpful tips. Have tried to push some changes to address those. Please let me know how that looks.
```diff
@@ -2,6 +2,7 @@ migrator_ts: 1692828152
 __migrator:
   kind:
     version
+  use_local: true  # TODO: Remove this line after completing testing
```
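For anyone following along, a minimal sketch of how a local migrator like this gets exercised, assuming it lives under `.ci_support/migrations/` in the feedstock (the filename below is hypothetical):

```bash
# With use_local: true, conda-smithy applies the migrator committed to the
# feedstock itself instead of the copy shipped by conda-forge-pinning.
git add .ci_support/migrations/cuda118.yaml  # hypothetical filename
conda smithy rerender --commit auto          # regenerates the CI jobs
```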
Not sure that we want to have the cross-compiled jobs for merging. We don't need them from the POV of the CI setup itself, just for testing the branches here in CI. I suggested adding them in #236, but never got a response.
Yep, no strong feelings here. Happy to keep or drop as preferred.
CI is failing; addressing with PR: conda-forge/nvcc-feedstock#99
Tried to merge main here after #265, but it seems you've unchecked the box that says "allow edits by maintainers".
Thanks Axel! 🙏 Unfortunately, since this is an org fork (as opposed to a user fork), that checkbox isn't even available (I couldn't check it if I wanted to). Have clicked the update branch button, so we should be good to go. Also FWIW, closing and reopening PRs has the same effect (it deletes the merge ref for the PR and recreates it using the latest upstream branch), which will also update what CI builds from as a result.
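For reference, the merge ref mentioned above is a standard read-only ref GitHub maintains per PR; a quick illustration using this PR's number:

```bash
# GitHub exposes pull/<N>/head (the PR branch) and pull/<N>/merge (a test
# merge against the latest target branch). Closing and reopening the PR
# deletes and recreates the merge ref, which is what CI builds from.
git fetch origin pull/261/merge
git checkout -b pr-261-merge FETCH_HEAD
```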
Ah, I had forgotten that this depends on conda-forge/nvcc-feedstock#99. 🙃
Yeah, no worries. This wound up being a bit of a yak shave (though that was mostly expected).
It looks like this is going to need … in …
I did this in #267 and it's passing! 🥳 If we do not want to keep the cross-compilation jobs (which would be purely for testing purposes), then you could rebase/revert out the migrator and merge this from my POV. 👍
Commits:
- … necessary to test cross-compiled CUDA
- …nda-forge-pinning 2023.08.30.17.25.59
Thanks Axel! 🙏 Went ahead and cherry-picked the last 2 commits. Hope that is OK.
Hi! This is the friendly conda-forge automerge bot! I considered the following status checks when analyzing this PR: …

Thus the PR was passing and merged! Have a great day!
Forgot to bump the version 🤦‍♂️ Fixed with commit 22b4448.
This adds additional logic to the CUDA 11 cross-compilation case here to support CUDA 11.8.
Needed as part of the CUDA 11.8 migration: conda-forge/conda-forge-pinning-feedstock#4834
Checklist
- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest conda-smithy (use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)