Update setup.py and README for r2.1 release #5669
Conversation
There is a place in the README that mentions `pip install torch_xla[tpuvm]`. Can you add a note saying that only works for releases prior to 2.0? For nightly, do we use `[tpuvm]` or `[tpu]`?
I left `tpuvm` on master (see the comment in setup.py) so I don't break people's scripts. We should remove it eventually, so I'd rather leave this as an undocumented "feature". The instruction in this doc already says "before PyTorch/XLA 2.0".

Since I touched these lists anyway, I also added `<details>` to make the lists less overwhelming.
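For reference, a minimal sketch of the two invocations in question; the `[tpu]` extra name for 2.0+ releases is an assumption based on this thread, not confirmed by it:

```bash
# Pre-2.0 releases: the [tpuvm] extra (kept on master only as an
# undocumented alias, per the reply above)
pip install torch_xla[tpuvm]

# 2.0+ and nightly: assumed to use the [tpu] extra instead
pip install torch_xla[tpu]
```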
LGTM
| 2.0 (Python 3.8) | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
| nightly >= 2023/04/25 (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
| nightly >= 2023/04/25 (Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl` |
| 2.1 (CUDA 12.0 + Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.0/torch_xla-2.1.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
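For context, each wheel in the table installs directly from its GCS URL; a minimal sketch, assuming a Python 3.8 TPU VM:

```bash
# Install the 2.0 TPU VM wheel from the table above (Python 3.8)
pip3 install https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-2.0-cp38-cp38-linux_x86_64.whl
```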
PyTorch uses CUDA 12.1 instead of 12.0: `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu121`. Do we want to do the same?
Oh interesting. I just went off of what we had built. Do you know if our 12.0 wheel is compatible with their 12.1 wheel? Or do we need to add a new build config for the 2.1 release?
> Do you know if our 12.0 wheel is compatible with their 12.1 wheel?

If we do `pip3 install torch torchvision torchaudio`, it'll install CUDA 12.1 by default per https://screenshot.googleplex.com/BDFoq3C7TXiWeXn. I can do some testing.

But before that, we do need to add two new build configs for the 2.1 release: one for CUDA 12.1 + Python 3.8 and another for CUDA 12.1 + Python 3.10. Once we get the wheels and validate that they are compatible with PyTorch's, we can update our GPU doc. Wdyt?
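A sketch of the validation being proposed here; the torch_xla CUDA 12.1 wheel did not exist yet at this point, so no URL is assumed for it:

```bash
# Install torch from the cu121 test channel, as quoted above
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu121

# Once a matching torch_xla CUDA 12.1 wheel is built and installed,
# a quick sanity check that the pairing imports and reports CUDA 12.1:
python -c "import torch, torch_xla; print(torch.version.cuda)"
```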
Sounds good to me. Thanks for catching the issue.
Hi @ManfeiBai, it seems we need CUDA 12.1 wheels instead of 12.0, because PyTorch uses CUDA 12.1 by default. Can you please generate a py=3.8 cuda=12.1 wheel and a py=3.10 cuda=12.1 wheel for torch_xla release 2.1?

cc @JackCaoG
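A hedged sketch of what a local CUDA wheel build looks like; `XLA_CUDA=1` is the build flag from the repo's contributing docs of that era, and the release wheels themselves come out of CI rather than this recipe:

```bash
# Assumed local build recipe, not the CI pipeline that produces release wheels
XLA_CUDA=1 python setup.py bdist_wheel
```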
PR for the CUDA 12.1 wheels: #5683