Commit

Merge branch 'master' into ref/tuner
rohitgr7 authored Feb 16, 2021
2 parents 0c28e26 + 141316f commit 1b85744
Showing 55 changed files with 668 additions and 2,067 deletions.
18 changes: 9 additions & 9 deletions .github/mergify.yml
Original file line number Diff line number Diff line change
@@ -36,15 +36,15 @@ pull_request_rules:
label:
remove: [ "has conflicts" ]

- name: update PR
conditions:
- -conflict
- -draft # filter-out GH draft PRs
- base=master # apply only on master
- -title~=(?i)wip # skip all PR that title contains “WIP” (ignoring case)
- "#approved-reviews-by>=3" # number of review approvals
actions:
update: {}
#- name: update PR
# conditions:
# - -conflict
# - -draft # filter-out GH draft PRs
# - base=master # apply only on master
# - -title~=(?i)wip # skip all PR that title contains “WIP” (ignoring case)
# - "#approved-reviews-by>=3" # number of review approvals
# actions:
# update: {}

- name: add core reviewer
conditions:
18 changes: 8 additions & 10 deletions .github/workflows/ci_dockers.yml
@@ -75,21 +75,21 @@ jobs:
matrix:
include:
# todo: see notes in Dockerfile
#- python_version: 3.7
# pytorch_version: 1.8
- python_version: 3.8
pytorch_version: 1.7
- python_version: 3.7
pytorch_version: 1.6
- python_version: 3.6
pytorch_version: 1.4
- python_version: 3.7
pytorch_version: 1.6
- python_version: 3.8
pytorch_version: 1.7
#- python_version: 3.9
# pytorch_version: 1.8
steps:
- name: Checkout
uses: actions/checkout@v2

# for PT 1.3 and 1.4 we need to use CUDA 10.1
# for PT 1.4 we need to use CUDA 10.1
- run: |
cuda=$(python -c "print(10.2 if float(${{matrix.pytorch_version}}) > 1.4 else 10.1)" 2>&1)
cuda=$(python -c "print(10.2 if float(${{matrix.pytorch_version}}) >= 1.5 else 10.1)" 2>&1)
echo "::set-output name=CUDA::$cuda"
id: extend
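The changed `python -c` one-liner derives the CUDA version from the PyTorch version in the build matrix. A standalone sketch of the same selection rule (the helper name is ours, not part of the workflow):

```python
def cuda_for_pytorch(pytorch_version: str) -> str:
    """Mirror the workflow's one-liner: PyTorch >= 1.5 builds against
    CUDA 10.2, while older releases (e.g. 1.4) stay on CUDA 10.1."""
    return "10.2" if float(pytorch_version) >= 1.5 else "10.1"

print(cuda_for_pytorch("1.7"))  # 10.2
print(cuda_for_pytorch("1.4"))  # 10.1
```

Note that comparing version strings as floats holds for the versions in this matrix but would misorder a hypothetical 1.10 release, so the rule is tied to the listed matrix entries.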
@@ -114,8 +114,6 @@ jobs:
fail-fast: false
matrix:
include:
- python_version: 3.8
pytorch_version: 1.6
- python_version: 3.6
pytorch_version: 1.4
- python_version: 3.7
2 changes: 0 additions & 2 deletions .github/workflows/events-nightly.yml
Expand Up @@ -14,8 +14,6 @@ jobs:
steps:
# does nightly releases from feature branch
- uses: actions/checkout@v2
with:
ref: release/1.2-dev
- uses: actions/setup-python@v2
with:
python-version: 3.7
3 changes: 2 additions & 1 deletion .gitignore
@@ -153,4 +153,5 @@ wandb
cifar-10-batches-py
*.pt
# ctags
tags
tags
data
4 changes: 0 additions & 4 deletions .yapfignore
@@ -1,5 +1 @@
.git/*


# TODO
pytorch_lightning/plugins/legacy/*
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -193,6 +193,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Update `lr_finder` to check for attribute if not running `fast_dev_run` ([#5990](https://github.com/PyTorchLightning/pytorch-lightning/pull/5990))


- LightningOptimizer manual optimizer is more flexible and expose `toggle_model` ([#5771](https://github.com/PyTorchLightning/pytorch-lightning/pull/5771))



### Deprecated

- Function `stat_scores_multiple_classes` is deprecated in favor of `stat_scores` ([#4839](https://github.com/PyTorchLightning/pytorch-lightning/pull/4839))
40 changes: 25 additions & 15 deletions README.md
@@ -115,41 +115,51 @@ Simple installation from PyPI
pip install pytorch-lightning
```

<!-- following section will be skipped from PyPI description -->
<details>
<summary>Other installation options</summary>
<!-- following section will be skipped from PyPI description -->

#### Install with optional dependencies (CPU)

```bash
pip install pytorch-lightning['cpu-extra']
```

#### Install with optional dependencies (GPU, TPU)
#### Install with optional dependencies

```bash
pip install pytorch-lightning['extra']
```

#### Conda

```bash
conda install pytorch-lightning -c conda-forge
```

#### Install stable - future 1.1.x

the actual status of 1.1 [stable] is following:

![CI base testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20base%20testing/badge.svg?branch=release%2F1.1.x&event=push)
![CI complete testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=release%2F1.1.x&event=push)
![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=release%2F1.1.x&event=push)
![TPU tests](https://github.com/PyTorchLightning/pytorch-lightning/workflows/TPU%20tests/badge.svg?branch=release%2F1.1.x&event=push)
![Docs check](https://github.com/PyTorchLightning/pytorch-lightning/workflows/Docs%20check/badge.svg?branch=release%2F1.1.x&event=push)

Install future release from the source
```bash
pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@release/1.1.x --upgrade
```

#### Install bleeding-edge - future 1.2

Install future release from the source (no guarantees)
Install nightly from the source (no guarantees)
```bash
pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@release/1.2-dev --upgrade
pip install https://github.com/PyTorchLightning/pytorch-lightning/archive/master.zip
```
or nightly from testing PyPI

or from testing PyPI
```bash
pip install -iU https://test.pypi.org/simple/ pytorch-lightning
```

<!-- end skipping PyPI description -->
</details>

<!-- end skipping PyPI description -->

### Step 1: Add these imports

@@ -369,8 +379,8 @@ class LitAutoEncoder(pl.LightningModule):
## Community
The lightning community is maintained by
- [16 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, Ph.D. students from top AI labs.
- 280+ community contributors.
- [16 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, and Ph.D. students from top AI labs.
- 400+ community contributors.
Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support.
2 changes: 1 addition & 1 deletion dockers/base-xla/Dockerfile
@@ -115,4 +115,4 @@ RUN \
conda info && \
pip list && \
python -c "import sys; assert sys.version[:3] == '$PYTHON_VERSION', sys.version" && \
python -c "import torch; ver = '$XLA_VERSION' ; ver = dict(nightly='1.8').get(ver, ver) ; assert torch.__version__[:3] == ver, torch.__version__"
python -c "import torch; ver = '$XLA_VERSION' ; ver = dict(nightly='1.9').get(ver, ver) ; assert torch.__version__[:3] == ver, torch.__version__"
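The updated assertion resolves the literal `nightly` tag to a concrete release (now 1.9) before comparing against `torch.__version__`. The same lookup in isolation (the helper name is ours):

```python
def expected_torch_prefix(xla_version: str) -> str:
    """Map the literal 'nightly' XLA tag to the upcoming torch release;
    any explicit version string is used unchanged."""
    return {"nightly": "1.9"}.get(xla_version, xla_version)

print(expected_torch_prefix("nightly"))  # 1.9
print(expected_torch_prefix("1.7"))      # 1.7
```

In the Dockerfile this resolved prefix is then asserted against the first three characters of `torch.__version__`.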
12 changes: 6 additions & 6 deletions docs/source/advanced/multi_gpu.rst
@@ -580,9 +580,9 @@ Below are the possible configurations we support.

Implement Your Own Distributed (DDP) training
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.init_ddp_connection`.
If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.training_type.ddp.DDPPlugin.init_ddp_connection`.

If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.configure_ddp`.
If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.training_type.ddp.DDPPlugin.configure_ddp`.


----------
@@ -679,20 +679,20 @@ In addition, we use Gradient Checkpointing to reduce GPU memory requirements fur

Reference: https://arxiv.org/abs/1811.06965

.. note:: DDPSequentialPlugin is currently supported only for Pytorch 1.6.
.. note:: RPCSequentialPlugin is currently supported only for Pytorch 1.6.

To get started, install FairScale using the command below. We install a specific branch which contains PyTorch related fixes for Sequential Parallelism.

.. code-block:: bash
pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip
pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.2.0.zip
To use Sequential Model Parallelism, you must define a :class:`nn.Sequential <torch.nn.Sequential>` module that defines the layers you wish to parallelize across GPUs.
This should be kept within the ``sequential_module`` variable within your ``LightningModule`` like below.

.. code-block:: python
from pytorch_lightning.plugins.legacy.ddp_sequential_plugin import DDPSequentialPlugin
from pytorch_lightning.plugins.training_type.rpc_sequential import RPCSequentialPlugin
from pytorch_lightning import LightningModule
class MyModel(LightningModule):
@@ -702,7 +702,7 @@ This should be kept within the ``sequential_module`` variable within your ``Ligh
# Split my module across 4 gpus, one layer each
model = MyModel()
plugin = DDPSequentialPlugin(balance=[1, 1, 1, 1])
plugin = RPCSequentialPlugin(balance=[1, 1, 1, 1])
trainer = Trainer(accelerator='ddp', gpus=4, plugins=[plugin])
trainer.fit(model)
