diff --git a/README.md b/README.md
index 7f2498c1dde..235c459d7ff 100644
--- a/README.md
+++ b/README.md
@@ -14,14 +14,7 @@ TPUs](https://cloud.google.com/tpu/). You can try it right now, for free, on a
 single Cloud TPU VM with
 [Kaggle](https://www.kaggle.com/discussions/product-feedback/369338)!
 
-Take a look at one of our [Kaggle
-notebooks](https://github.com/pytorch/xla/tree/master/contrib/kaggle) to get
-started:
-
-* [Stable Diffusion with PyTorch/XLA
-  2.0](https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb)
-* [Distributed PyTorch/XLA
-  Basics](https://github.com/pytorch/xla/blob/master/contrib/kaggle/distributed-pytorch-xla-basics-with-pjrt.ipynb)
+Tutorials for the latest release are available on our [GitHub page](https://github.com/pytorch/xla).
 
 ## Installation
 
@@ -148,153 +141,8 @@ Our comprehensive user guides are available at:
 
 ## Available docker images and wheels
 
-### Python packages
-
-PyTorch/XLA releases starting with version r2.1 will be available on PyPI. You
-can now install the main build with `pip install torch_xla`. To also install the
-Cloud TPU plugin, install the optional `tpu` dependencies:
-
-```
-pip install torch_xla[tpu] -f https://storage.googleapis.com/libtpu-releases/index.html
-```
-
-GPU, XRT (legacy runtime), and nightly builds are available in our public GCS
-bucket.
-
-| Version | Cloud TPU/GPU VMs Wheel |
-| --- | ----------- |
-| 2.2 (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.2.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
-| 2.2 (Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl` |
-| 2.2 (CUDA 12.1 + Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.2.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
-| 2.2 (CUDA 12.1 + Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl` |
-| nightly (Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-| nightly (Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl` |
-| nightly (CUDA 12.1 + Python 3.8) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.1/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-
-<details>
-<summary>older versions</summary>
-
-| Version | Cloud TPU VMs Wheel |
-|---------|-------------------|
-| 2.1 (XRT + Python 3.10) | `https://storage.googleapis.com/pytorch-xla-releases/wheels/xrt/tpuvm/torch_xla-2.1.0%2Bxrt-cp310-cp310-manylinux_2_28_x86_64.whl` |
-| 2.1 (Python 3.8) | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-2.1-cp38-cp38-linux_x86_64.whl` |
-| 2.0 (Python 3.8) | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
-| 1.13 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.13-cp38-cp38-linux_x86_64.whl` |
-| 1.12 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.12-cp38-cp38-linux_x86_64.whl` |
-| 1.11 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.11-cp38-cp38-linux_x86_64.whl` |
-| 1.10 | `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.10-cp38-cp38-linux_x86_64.whl` |
-
-</details>
-
-Note: For TPU Pod customers using XRT (our legacy runtime), we have custom
-wheels for `torch` and `torch_xla` at
-`https://storage.googleapis.com/tpu-pytorch/wheels/xrt`.
-
-| Package | Cloud TPU VMs Wheel (XRT on Pod, Legacy Only) |
-| --- | ----------- |
-| torch_xla | `https://storage.googleapis.com/pytorch-xla-releases/wheels/xrt/tpuvm/torch_xla-2.1.0%2Bxrt-cp310-cp310-manylinux_2_28_x86_64.whl` |
-| torch | `https://storage.googleapis.com/pytorch-xla-releases/wheels/xrt/tpuvm/torch-2.1.0%2Bxrt-cp310-cp310-linux_x86_64.whl` |
-
-| Version | GPU Wheel + Python 3.8 |
-| --- | ----------- |
-| 2.1 + CUDA 11.8 | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/11.8/torch_xla-2.1.0-cp38-cp38-manylinux_2_28_x86_64.whl` |
-| 2.0 + CUDA 11.8 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/118/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
-| 2.0 + CUDA 11.7 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/117/torch_xla-2.0-cp38-cp38-linux_x86_64.whl` |
-| 1.13 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.13-cp38-cp38-linux_x86_64.whl` |
-| nightly + CUDA 12.0 >= 2023/06/27 | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/12.0/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-| nightly + CUDA 11.8 <= 2023/04/25 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/118/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-| nightly + CUDA 11.8 >= 2023/04/25 | `https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/11.8/torch_xla-nightly-cp38-cp38-linux_x86_64.whl` |
-
-| Version | GPU Wheel + Python 3.7 |
-| --- | ----------- |
-| 1.13 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.13-cp37-cp37m-linux_x86_64.whl` |
-| 1.12 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.12-cp37-cp37m-linux_x86_64.whl` |
-| 1.11 | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl` |
-| nightly | `https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-nightly-cp37-cp37-linux_x86_64.whl` |
-
-| Version | Colab TPU Wheel |
-| --- | ----------- |
-| 2.0 | `https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl` |
-
-You can also add `+yyyymmdd` after `torch_xla-nightly` to get the nightly wheel
-of a specified date. To get the companion pytorch and torchvision nightly wheel,
-replace the `torch_xla` with `torch` or `torchvision` on above wheel links.
-
-#### Installing libtpu (before PyTorch/XLA 2.0)
-
-For PyTorch/XLA release r2.0 and older and when developing PyTorch/XLA, install
-the `libtpu` pip package with the following command:
-
-```
-pip3 install torch_xla[tpuvm]
-```
-
-This is only required on Cloud TPU VMs.
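The `+yyyymmdd` convention described above can be sketched concretely. The date below is an arbitrary example (not a guaranteed build), the Python tag is taken from the nightly rows of the tables, and the companion `torch`/`torchvision` wheels are assumed to follow the same URL pattern per the note above:

```shell
# Build dated-nightly wheel URLs by appending +yyyymmdd after "nightly".
# DATE is a hypothetical example date; substitute the build you need.
DATE=20230701
BASE="https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm"
for PKG in torch_xla torch torchvision; do
  echo "${BASE}/${PKG}-nightly+${DATE}-cp38-cp38-linux_x86_64.whl"
done
```

Each printed URL can then be handed to `pip3 install` on the target VM.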
-
-### Docker
-
-| Version | Cloud TPU VMs Docker |
-| --- | ----------- |
-| 2.2 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.2.0_3.10_tpuvm` |
-| 2.1 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.1.0_3.10_tpuvm` |
-| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_tpuvm` |
-| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.8_tpuvm` |
-| nightly python | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm` |
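A hedged sketch of pulling and starting one of the TPU VM images listed above. The tag is copied from the 2.2 row; the `--privileged --net host --shm-size` flags are an assumption about what TPU device access typically needs, not an invocation taken from this README:

```shell
# Compose the image reference from the table, then print the commands to run
# on a Cloud TPU VM (printed rather than executed here).
IMAGE="us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.2.0_3.10_tpuvm"
echo "docker pull ${IMAGE}"
echo "docker run --privileged --net host --shm-size 16G -it ${IMAGE} /bin/bash"
```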
-
-| Version | GPU CUDA 12.1 Docker |
-| --- | ----------- |
-| 2.2 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.2.0_3.10_cuda_12.1` |
-| 2.1 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.1.0_3.10_cuda_12.1` |
-| nightly | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_12.1` |
-| nightly at date | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_12.1_YYYYMMDD` |
-
-| Version | GPU CUDA 11.8 + Docker |
-| --- | ----------- |
-| 2.1 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.1.0_3.10_cuda_11.8` |
-| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_cuda_11.8` |
-| nightly | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.8` |
-| nightly at date | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.8_YYYYMMDD` |
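The `nightly at date` rows above take a `YYYYMMDD` suffix; a small sketch of composing such a tag. The date is an arbitrary example, and `--gpus all` is an assumed flag for exposing host GPUs to the container, not taken from this README:

```shell
# Compose a dated nightly CUDA image tag per the YYYYMMDD pattern above.
DATE=20240115  # hypothetical build date
IMAGE="us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_12.1_${DATE}"
echo "docker run --gpus all -it ${IMAGE} /bin/bash"
```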
-
-
-<details>
-<summary>older versions</summary>
-
-| Version | GPU CUDA 11.7 + Docker |
-| --- | ----------- |
-| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_cuda_11.7` |
-
-</details>
-
-| Version | GPU CUDA 11.2 + Docker (Python 3.8) |
-| --- | ----------- |
-| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.8_cuda_11.2` |
-
-| Version | GPU CUDA 11.2 + Docker (Python 3.7) |
-| --- | ----------- |
-| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.7_cuda_11.2` |
-| 1.12 | `gcr.io/tpu-pytorch/xla:r1.12_3.7_cuda_11.2` |
-
-To run on [compute instances with
-GPUs](https://cloud.google.com/compute/docs/gpus/create-vm-with-gpus).
+For all builds and versions of `torch_xla`, see our main
+[GitHub README](https://github.com/pytorch/xla).
 
 ## Troubleshooting