Update artifacts.auto.tfvars for 2.2 release #5908
Conversation
I am not sure if we still need
LGTM, thanks!
pytorch_git_rev = "v2.2.0"
package_version = "2.2.0"
accelerator = "cuda"
cuda_version = "12.0"
Which CUDA version are we targeting? Should we just pick the same CUDA version torch will use in their default wheel?
How about we leave 11.8 and 12.1 here?
I found PyTorch only has CUDA 11.8 and 12.1 support. cc @vanbasten23 to double check the need for a CUDA 12.0 wheel.
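For illustration, a minimal sketch of what the two CUDA entries could look like, reusing the field names from the diff above (the python_version value mirrors the other entries in this PR and is an assumption here, not part of this snippet):

{
  git_tag = "v2.2.0"
  pytorch_git_rev = "v2.2.0"
  package_version = "2.2.0"
  accelerator = "cuda"
  cuda_version = "11.8"
  python_version = "3.10"
},
{
  git_tag = "v2.2.0"
  pytorch_git_rev = "v2.2.0"
  package_version = "2.2.0"
  accelerator = "cuda"
  cuda_version = "12.1"
  python_version = "3.10"
},

These mirror the CUDA versions PyTorch publishes official wheels for; a 12.0 entry would not match any upstream wheel.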
Hi @lsy323, do we still need CUDA 11.8 for the 2.2 release?
Hi, @will-cromar, thanks. Do we know how to confirm which CUDA version torch will use in their default wheel? Would they mention it in the blog for the 2.2 release?
Also, since there were many PRs adding specific CUDA versions to the 2.1 release, like #5683, there might be requirements to build the 2.2 wheel with those CUDA versions. And if the build failed, we could catch the failure early too.
As for resources, do we want to delete the triggers for 2.1 to limit the resources used for the release here?
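For instance, deleting a 2.1 trigger would just mean dropping its object from versioned_builds. A sketch, shown diff-style like the hunks below; the 2.1 entry itself is hypothetical and only assumes the same shape as the 2.2 entries in this file:

-  {
-    git_tag = "v2.1.0"
-    pytorch_git_rev = "v2.1.0"
-    package_version = "2.1.0"
-    accelerator = "cuda"
-    cuda_version = "11.8"
-    python_version = "3.10"
-  },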
@@ -58,6 +58,46 @@ xrt_versioned_builds = [
# Built on push to specific tag.
versioned_builds = [
  # Remove libtpu from PyPI builds
  {
    git_tag = "v2.2.0"
    pytorch_git_rev = "v2.2.0"
FYI, PyTorch doesn't apply the v2.2.0 tag until the final release. They'll have v2.2.0rc0, v2.2.0rc1, etc. I actually suggest we do the same for this release rather than updating the v2.2.0 tag multiple times.
Thanks, can you elaborate more about the solution? Should we add v2.2.0rc0, v2.2.0rc1 builds etc. here in advance?
Thanks for this, Will
Hi, @zpcore, FYI: since we want to add a trigger for the 2.2.0 wheel to test in xl-ml-dashboard, and we won't catch the pytorch_git_rev tag v2.2.0 until PyTorch releases 2.2, and we also won't catch any other tag like v2.2.0rc0/v2.2.0rc1 until the PyTorch 2.2.0 branch cut date, we could merge this PR now, but the trigger won't be able to build the wheel successfully until they cut the tag.
So for the wheel on xl-ml-dashboard, let's use the nightly wheels of torch and torch_xla until the branch cut date.
cc @lsy323
> Thanks, can you elaborate more about the solution? Should we add v2.2.0rc0, v2.2.0rc1 builds etc. here in advance?
Let's modify git_tag and pytorch_git_rev to "v2.2.0rc0" in this PR; we would modify them again once PyTorch has other wheels built.
cc @will-cromar, would you mind helping to confirm?
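Concretely, the proposed change would make the new entry look something like this (a sketch: the non-tag fields are carried over from the diffs in this PR, and the exact rc spelling follows whatever tag PyTorch pushes -- later commits here ended up using the dashed form v2.2.0-rc1):

{
  git_tag = "v2.2.0rc0"
  pytorch_git_rev = "v2.2.0rc0"
  package_version = "2.2.0"
  accelerator = "cuda"
  cuda_version = "11.8"
  python_version = "3.10"
},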
Thanks, we have removed pytorch_git_rev.
LGTM
python_version = "3.10"
pytorch_git_rev = "v2.1.0"
package_version = "2.1.0+xrt"
git_tag = "v2.2.0-rc1"
Hi, @zpcore, thanks for the update. I'm not sure how PyTorch tags 2.2 at branch cut day; in the 2.1 release, PyTorch's wheel version was 2.1.0-rc0 at the branch cut day. If we want to catch and start testing the 2.2 release from branch cut day, do we want to use v2.2.0-rc0?
Hi @ManfeiBai, I checked the pytorch repo but didn't find any tag ending with -rc0. Can you help double check if we should start from -rc0? They may delete it after the release. Thanks.
Thanks for confirming this, @zpcore, you are right, I didn't find rc0. We could start from -rc1 based on the 2.1.0 release history.
LGTM
{
  git_tag = "v2.2.0-rc1"
  package_version = "2.2.0-rc1"
  accelerator = "cuda"
  cuda_version = "11.8"
  python_version = "3.10"
},
@vanbasten23 do we need this?
* add release build for v2.2 for xl-ml test
* remove 2.2 xrt build
* update release tag name
* update package version name
Add the 2.2 release build, similar to 2.1. We will trigger the 2.2 wheel build for the xl-ml test. Refer to the 2.1 release PR.
The tag for 2.2 has also been added -- https://github.com/pytorch/xla/tags.
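Putting the thread's conclusions together, the 2.2 entries that land in versioned_builds look roughly like the sketch below (assembled from the diffs above: pytorch_git_rev was dropped per the review discussion and the tag starts from -rc1; only the CUDA 11.8 entry appears verbatim in this PR, so the 12.1 entry is an assumption based on the CUDA-version discussion):

{
  git_tag = "v2.2.0-rc1"
  package_version = "2.2.0-rc1"
  accelerator = "cuda"
  cuda_version = "11.8"
  python_version = "3.10"
},
{
  git_tag = "v2.2.0-rc1"
  package_version = "2.2.0-rc1"
  accelerator = "cuda"
  cuda_version = "12.1"
  python_version = "3.10"
},

Once PyTorch pushes later rc tags and eventually the final v2.2.0 tag, these entries get updated again, as discussed above.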