Follow proposal to fix cross #13

Merged (46 commits) on May 24, 2023

Conversation

@adibbley (Contributor) commented May 5, 2023

Update to the proposed layout for cross-compilation support. For reference, the resulting package contents:

cuda-nvcc-12.0.76-h59595ed_3
├── bin
│   ├── bin2c
│   ├── crt
│   │   ├── link.stub
│   │   └── prelink.stub
│   ├── cudafe++
│   ├── fatbinary
│   ├── nvcc -> ../targets/x86_64-linux/bin/nvcc
│   ├── __nvcc_device_query
│   ├── nvcc.profile
│   ├── nvlink
│   └── ptxas
├── etc
│   └── conda
│       ├── activate.d
│       │   └── ~cuda-nvcc_activate.sh
│       └── deactivate.d
│           └── ~cuda-nvcc_deactivate.sh
├── lib
│   └── libnvptxcompiler_static.a -> ../targets/x86_64-linux/lib/libnvptxcompiler_static.a
├── nvvm
│   ├── bin
│   ├── include
│   ├── lib64
│   └── libdevice
└── targets
    └── x86_64-linux
        └── bin
            └── nvcc

cuda-nvcc-impl_linux-64-12.0.76-h59595ed_3
├── include
│   ├── crt -> ../targets/x86_64-linux/include/crt
│   ├── fatbinary_section.h -> ../targets/x86_64-linux/include/fatbinary_section.h
│   └── nvPTXCompiler.h -> ../targets/x86_64-linux/include/nvPTXCompiler.h
└── lib
    └── libnvptxcompiler_static.a -> ../targets/x86_64-linux/lib/libnvptxcompiler_static.a

cuda-nvcc_linux-64-12.0.76-ha770c72_3
└── targets
    └── x86_64-linux
        ├── include
        │   ├── crt
        │   ├── fatbinary_section.h
        │   └── nvPTXCompiler.h
        └── lib
            └── libnvptxcompiler_static.a
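
For reference, a heavily simplified sketch of how a multi-output meta.yaml could produce these three packages (the package names come from the listing above; the pin_subpackage wiring and comments are illustrative assumptions, not the recipe's actual contents):

    outputs:
      # real target files (headers, static libraries) for this platform
      - name: cuda-nvcc_{{ target_platform }}
      # symlink view of those files under the conventional include/ and lib/ prefixes
      - name: cuda-nvcc-impl_{{ target_platform }}
        requirements:
          run:
            - {{ pin_subpackage("cuda-nvcc_" + target_platform, exact=True) }}
      # compiler binaries plus the activation scripts shown above
      - name: cuda-nvcc
        requirements:
          run:
            - {{ pin_subpackage("cuda-nvcc-impl_" + target_platform, exact=True) }}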

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

Fixes #12
Closes #11

@conda-forge-webservices (Contributor)

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

Review threads on recipe/build.sh and recipe/meta.yaml (outdated, resolved)
@adibbley mentioned this pull request on May 8, 2023
@adibbley changed the title from "[WIP] Follow proposal to fix cross" to "Follow proposal to fix cross" on May 9, 2023
@adibbley marked this pull request as ready for review on May 9, 2023

@adibbley (Contributor, Author) commented May 9, 2023

@conda-forge-admin, please rerender

conda-forge-webservices[bot] and others added 2 commits May 9, 2023 19:01
@adibbley (Contributor, Author) commented May 9, 2023

@conda-forge-admin, please rerender

@bdice (Contributor) left a review:

I think this is a step in the right direction. We have a few more things to iron out for cross compilation, but I think this is an improvement over the current package and would allow us to parallelize some work on other packages that can now use (and thus test) this package more fully.

Note: I was able to build cuda-python with this PR locally. However, it seems my most recent test commit builds without these changes, too.

@isuruf (Member) commented May 9, 2023

This is completely in the wrong direction. This does not follow the proposal at all.

@jakirkham (Member)

Could you please add suggestions in the PR diff? Think this will help us to have a more constructive conversation about what we would like to do here

@isuruf (Member) commented May 9, 2023

I've added a detailed description in #12

@jakirkham (Member)

And devs here have tried to incorporate what they understood from that discussion into this PR

Would suggest that if there are changes desired in this PR, they are proposed via suggestions in the diff. That should hopefully help us converge on a reasonable solution.

@isuruf (Member) commented May 9, 2023

Can you explain what's hard to understand about the top comment in #12?

@jakirkham (Member)

Think most folks found it easier to have a discussion about code changes in a PR where there is an ability to see a diff and discuss any finer points that need fixing in the relevant context for those changes. Does this seem like a reasonable way for us to have this discussion?

Alex has even seeded some discussion above based on points he was unsure of. Do you have thoughts on any of those?

Are there any other points in the diff that would benefit from further discussion? If so, could you please point them out?

@isuruf (Member) commented May 9, 2023

Does this seem like a reasonable way for us to have this discussion?

Not really. The issue seems like the best way to have a discussion. For example, I don't see any cuda-nvcc-tools package here, nor any noarch:generic package.

@adibbley (Contributor, Author)

I don't see any cuda-nvcc-tools package

This renaming isn't clear to me. The contents of cuda-nvcc cover this. Same with cuda-nvcc-dev_{{ target_platform }} instead of cuda-nvcc_{{ target_platform }}.

nor any noarch:generic package here

Here: https://github.com/conda-forge/cuda-nvcc-feedstock/pull/13/files#diff-f3725a55bf339595bf865fec73bda8ac99f283b0810c205442021f29c06eea9aR82-R84

@isuruf (Member) commented May 10, 2023

This renaming isn't clear to me. The contents of cuda-nvcc cover this. Same with cuda-nvcc-dev_{{ target_platform }} instead of cuda-nvcc_{{ target_platform }}.

Would you mind looking at the package structure for the gcc/g++ compilers in ctng-compilers-feedstock and ctng-compiler-activation-feedstock and getting back to me? There seems to be a lack of fundamental understanding about compiler packaging here; looking at those feedstocks should give you an understanding of how cross compilers work.
Basically, the cross-compiler environment (not the package itself) from linux-64 to linux-aarch64 should have bin/nvcc and targets/include/sbsa-linux, but not targets/include/x86_64-linux. The PR here does not achieve that at all.
As seen in all of the compiler packages, the compiler tools and the {{ compiler }}_{{ cross_target_platform }} packages cannot be in the same feedstock, because we need to package target-dependent files like targets/include/sbsa-linux in the tools feedstock when target_platform == cross_target_platform. Otherwise, every time you bump the compiler version, the cross-compiler build would fail, and then you would have to merge with failing CI and restart later on.
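
To make the desired end state concrete, the cross-compiler environment isuruf describes (building on linux-64, targeting linux-aarch64) would look roughly like this, following the targets/ layout convention from the trees above (an illustrative sketch, not the output of an actual build):

    cross environment (linux-64 host, linux-aarch64 target)
    ├── bin
    │   └── nvcc                  # host tool, runs on linux-64
    └── targets
        └── sbsa-linux            # headers/libs for the aarch64 target
            ├── include
            └── lib

with no targets/x86_64-linux present.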

robertmaynard and others added 2 commits May 10, 2023 14:22
Co-authored-by: Bradley Dice <bdice@bradleydice.com>
Review thread on recipe/activate.sh (outdated, resolved)
robertmaynard and others added 2 commits May 10, 2023 15:52
Co-authored-by: adibbley <103537006+adibbley@users.noreply.github.com>
Expand testing to include most of CMake's cuda tests
@isuruf (Member) commented May 23, 2023

Wow. That's over-engineering if I ever saw one.

@jakirkham (Member)

It would be nice if it wasn't needed and we could rely on Jinja. However, it doesn't seem like conda-build parses that correctly. We can raise an issue about it.

@isuruf (Member) commented May 23, 2023

Why not use Alex's original code? Just sysroot_{{ cross_target_platform }} 2.17.* instead of the dozens of lines in your suggestion that do the same.
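
For readers following along, the variant being advocated here amounts to a single line in the host requirements of meta.yaml (a sketch showing only the relevant lines):

    requirements:
      host:
        - sysroot_{{ cross_target_platform }} 2.17.*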

@jakirkham (Member) commented May 23, 2023

Open to that. So just hard-coding it in host and using pin_compatible in run?

Is there a reason that change was dropped?

Edit: Though this will present a challenge later as we will want to do different things for different architectures in terms of minimum versions. So there will be a few more lines in any event

@h-vetinari (Member)

Open to that. So just hard-coding it in host and using pin_compatible in run?

The sysroot has a run-export, so nothing is required in run.

Though this will present a challenge later as we will want to do different things for different architectures in terms of minimum versions.

Why not cross that bridge when we get to it?

@jakirkham (Member) commented May 23, 2023

Open to that. So just hard-coding it in host and using pin_compatible in run?

the sysroot has a run-export, so nothing is required in run.

We are trying to ensure only compatible versions of sysroot are installed. The default (2.12) isn't compatible. So we are trying to constrain it to 2.17+

Though this will present a challenge later as we will want to do different things for different architectures in terms of minimum versions.

Why not cross that bridge when we get to it?

The point is that more lines will still be needed in this alternative proposal. IOW, the simpler proposal is not much simpler.

@adibbley (Contributor, Author) commented May 23, 2023

I've tried using just sysroot_{{ cross_target_platform }} 2.17.* as well as John's suggestion above. The dependencies of both approaches resolve to sysroot_linux-64 >=2.17,<3.0a0 as expected.

There is a difference in the build log; using John's suggestion I see:

  "cuda-nvcc_linux-64-12.0.76-hd9db254": {
    "recipe": {
      ...
      "sysroot_linux_64": "2.17",
      ...
    }

There is no sysroot entry here when only using sysroot_{{ cross_target_platform }} 2.17.*. It's not clear to me if this is a relevant difference or not.
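
For what it's worth, a key like sysroot_linux_64 in that rendered-recipe block is typically how conda-build records a variant from conda_build_config.yaml, which would suggest John's suggestion routes the version through a variant, roughly along these lines (the key spelling and file contents here are assumptions for illustration):

    # conda_build_config.yaml (hypothetical)
    sysroot_linux_64:
      - "2.17"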

@h-vetinari (Member)

The default (2.12) isn't compatible. So we are trying to constrain it to 2.17+

I understand that, hence I didn't comment further about the already-mentioned sysroot_{{ cross_target_platform }} 2.17.* in host.

The point is that more lines will still be needed in this alternative proposal. IOW the simplification is not much simpler

What else would be needed?

@jakirkham (Member)

Different versions may be needed for different architectures. Think the 2 proposals simply come down to where those live

@h-vetinari (Member)

Different versions may be needed for different architectures.

My point (resp. opinion) is: "may" -> not at the moment -> cross that bridge when we get to it

@jakirkham (Member)

Think we may be talking past each other

Some fix is needed for the sysroot version constraint. 2 proposals have been presented. The discussion above is just evaluating them

Am just trying to point out the 2 proposals are not all that different (the architecture version point is just why that is the case; not a thing that requires substantial effort)

Anyways would suggest we return to evaluating those 2 proposals so we can select one and move to the next item

@h-vetinari (Member)

Anyways would suggest we return to evaluating those 2 proposals so we can select one and move to the next item

I thought that's what we are doing. Unless I'm misunderstanding something yet again, I'm in favour of the proposal that's (closest to) sysroot_{{ cross_target_platform }} 2.17.* in host, nothing in run & CBC.

You said you were open to that; I just responded (to your question) that nothing is needed in run.

@adibbley (Contributor, Author)

Updated to only have sysroot_{{ cross_target_platform }} 2.17.* in host. We can follow up with any version splits later if that becomes an issue.

@isuruf (Member) commented May 24, 2023

You also need - {{ pin_compatible("sysroot_" + cross_target_platform, max_pin="x.x") }} in run.
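
Put together, the arrangement converged on here would look roughly like this in meta.yaml (a sketch combining the two lines discussed above; the rest of the recipe is omitted):

    requirements:
      host:
        - sysroot_{{ cross_target_platform }} 2.17.*
      run:
        - {{ pin_compatible("sysroot_" + cross_target_platform, max_pin="x.x") }}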

Review thread on recipe/meta.yaml (outdated, resolved)
Co-authored-by: Isuru Fernando <isuruf@gmail.com>
Co-authored-by: jakirkham <jakirkham@gmail.com>
@adibbley (Contributor, Author)

@conda-forge-admin, please rerender

@jakirkham added the automerge (Merge the PR when CI passes) label on May 24, 2023
@github-actions (bot) merged commit 7350ccc into conda-forge:main on May 24, 2023

@github-actions (Contributor)

Hi! This is the friendly conda-forge automerge bot!

I considered the following status checks when analyzing this PR:

  • linter: passed
  • travis: passed
  • azure: passed

Thus the PR was passing and merged! Have a great day!
