Update TPU docs for installation #6794
Conversation
@@ -64,8 +64,7 @@ To get a TPU on colab, follow these steps:

.. code-block::

    !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
    !python pytorch-xla-env-setup.py --version 1.7 --apt-packages libomp5 libopenblas-dev
    !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
Well, here the XLA version should be aligned with the PyTorch version, right?
The instructions are for Colab, and they have updated the runtime to use torch>=1.8.0.
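For reference, a quick way to confirm what a given runtime ships (just a sanity check, not part of the docs change):

```python
# Print the torch version bundled with the notebook runtime.
import torch
print(torch.__version__)  # e.g. "1.8.0" on Colab, 1.7.x on Kaggle at the time of writing
```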
Does this same command work for Kaggle?
@lezwon Just checked. It's 1.7 for Kaggle notebooks. We could add `{torch.__version__}` to the URL as a placeholder.
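A minimal sketch of that idea for a notebook cell; the variable name and the version parsing are illustrative, and it assumes the wheel filenames keep the `torch_xla-<major.minor>` naming and that IPython's `{...}` interpolation into shell commands is available:

```python
import torch

# Wheel names use major.minor only, so trim e.g. "1.8.0+cu101" down to "1.8".
version = ".".join(torch.__version__.split("+")[0].split(".")[:2])

!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-{version}-cp37-cp37m-linux_x86_64.whl
```

With something like that, the same cell would pick up 1.8 on Colab and 1.7 on Kaggle without hard-coding either.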
Codecov Report

@@            Coverage Diff            @@
##           master   #6794      +/-   ##
=========================================
- Coverage      92%     87%       -5%
=========================================
  Files         192     192
  Lines       12174   12248       +74
=========================================
- Hits        11147   10618      -529
- Misses       1027    1630      +603
* update readme by v1.2.x (#6728)
* [bugfix] Add support for omegaconf and tpu (#6741)
* fix_hydra
* update changelog
  Co-authored-by: Your Name <you@example.com>
* [docs] Update Bolts link (#6743)
* Update Bolts link
* Update Bolts link
* format
  Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update logic for checking TPUs availability (#6767)
* Update logic for checking TPUs availability
* fix flake8
* add fix
* resolve bug (#6781)
* Fix validation progress counter with check_val_every_n_epoch > 1 (#5952)
  Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
  Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Remove extinct parameters from lightning_module.rst (#6801); fixes #6800
* Update TPU docs for installation (#6794)
* fix boolean check on iterable dataset when len not defined (#6828)
* fix iterable dataset len check
* update predict and validate
* add validate to test
* add changelog
* add predict
* Sanitize `None` params during pruning (#6836)
* sanitize none params during pruning
* amend
* Fix `unfreeze_and_add_param_group` expects `modules` rather than `module` (#6822)
* Enforce an epoch scheduler interval when using SWA (#6588)
  Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
* Fix DPP + SyncBN (#6838): ensure that the model is already on the correct GPU before applying the SyncBN conversion
* Fix order of SyncBN for ddp_spawn
* [Fix] TPU Training Type Plugin (#6816)
* Fix support for symlink save_dir in TensorBoardLogger (#6730)
* Add test for symlink support and initial fix
* Respond to comment and add docstring
* Update CHANGELOG.md
* Simplify
* Update pytorch_lightning/utilities/cloud_io.py
  Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Make `LightningLocalFileSystem` protected
  Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Fixed missing arguments in `lr_find` call (#6784): there seem to be 3 arguments missing in the `lr_find` call in the tuning.py file
* Update Changelog & version
* Fix TPU tests for checkpoint: skip advanced profiler for torch > 1.8, skip pytorch profiler for torch > 1.8, fix save checkpoint logic for TPUs

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: thomas chaton <thomas@grid.ai>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Akihiro Nitta <nitta@akihironitta.com>
Co-authored-by: Yuan-Hang Zhang <sailordiary@users.noreply.github.com>
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Elizaveta Logacheva <elimohl@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Karthik Prasad <prasadkr@uci.edu>
Co-authored-by: Sadiq Jaffer <sadiq@toao.com>
Co-authored-by: Michael Baumgartner <m.baumgartner@ymail.com>
Co-authored-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
Co-authored-by: Tharindu Hasthika <tharindubathigama@gmail.com>
What does this PR do?
Ref: pytorch/xla#2813