Hardware specific parts of Accelerator Refactoring #5719

Merged · 41 commits · Feb 1, 2021
Changes shown below are from the first 14 commits.

Commits (41)
d86861d  add basic accelerator class. (justusschock, Jan 30, 2021)
8b32117  pep8 (justusschock, Jan 30, 2021)
1269a47  add cpu accelerator (justusschock, Jan 30, 2021)
a9caca0  add gpu accelerator (justusschock, Jan 30, 2021)
1121ee3  add tpu accelerator (justusschock, Jan 30, 2021)
31b9b29  add accelerator connector (justusschock, Jan 30, 2021)
b58be97  add single device training (justusschock, Jan 30, 2021)
9f985b4  add single tpu (justusschock, Jan 30, 2021)
1c4db3e  add tpu spawn (justusschock, Jan 30, 2021)
398abb9  make on_colab_kaggle utility func (justusschock, Jan 31, 2021)
a9608d3  add basic accelerator class. (justusschock, Jan 30, 2021)
1f33b9f  pep8 (justusschock, Jan 30, 2021)
7b76e7a  add cpu accelerator (justusschock, Jan 30, 2021)
863e723  add gpu accelerator (justusschock, Jan 30, 2021)
85590b5  add tpu accelerator (justusschock, Jan 30, 2021)
b60086b  add accelerator connector (justusschock, Jan 30, 2021)
6dbaf17  add single device training (justusschock, Jan 30, 2021)
976defa  add single tpu (justusschock, Jan 30, 2021)
a8762cb  add tpu spawn (justusschock, Jan 30, 2021)
96d1a51  make on_colab_kaggle utility func (justusschock, Jan 31, 2021)
f905b88  fixes (Borda, Jan 31, 2021)
92d928f  move (Borda, Jan 31, 2021)
82ad2f4  yapf (Borda, Jan 31, 2021)
e561096  . (Borda, Jan 31, 2021)
618f6d8  . (Borda, Jan 31, 2021)
36d469c  . (Borda, Jan 31, 2021)
df0900c  flake8 (Borda, Jan 31, 2021)
0a20f95  sync accelerator connector changes from dev1.2 (awaelchli, Feb 1, 2021)
1085a23  changelog (awaelchli, Feb 1, 2021)
224c8ee  merge (justusschock, Feb 1, 2021)
4695882  fix tpu handling (justusschock, Feb 1, 2021)
f461f2b  tpu (Borda, Feb 1, 2021)
a4190fd  aval (Borda, Feb 1, 2021)
5571ce6  yapf (Borda, Feb 1, 2021)
90e379e  Update pytorch_lightning/plugins/training_type/tpu_spawn.py (justusschock, Feb 1, 2021)
725d18e  Update pytorch_lightning/accelerators/accelerator_connector.py (justusschock, Feb 1, 2021)
d5dd1bb  Update pytorch_lightning/plugins/training_type/tpu_spawn.py (justusschock, Feb 1, 2021)
90bb35d  Update tpu_spawn.py (justusschock, Feb 1, 2021)
53338f8  Update pytorch_lightning/accelerators/accelerator_connector.py (justusschock, Feb 1, 2021)
3a44c73  indentation (awaelchli, Feb 1, 2021)
ea88661  stupid formatting (awaelchli, Feb 1, 2021)
docs/source/common/trainer.rst (2 changes: 1 addition & 1 deletion)

@@ -1121,7 +1121,7 @@ To define your own behavior, subclass the relevant class and pass it in. Here's

     .. code-block:: python

-        from pytorch_lightning.cluster_environments import cluster_environment
+        from pytorch_lightning.environments import cluster_environment

         class MyCluster(ClusterEnvironment):
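For orientation, this hunk only renames the import path; a user-defined environment keeps the same shape. Below is a minimal sketch of a custom ClusterEnvironment under the new pytorch_lightning.environments package. The overridden method names (master_address, master_port, world_size) are assumptions based on the interface of that era, not part of this diff.

import os

from pytorch_lightning.environments.cluster_environment import ClusterEnvironment


class MyCluster(ClusterEnvironment):
    """Hypothetical custom environment; method names are assumed, not taken from this diff."""

    def master_address(self) -> str:
        # Resolve the coordinator's address from the scheduler's environment.
        return os.environ.get("MASTER_ADDR", "127.0.0.1")

    def master_port(self) -> int:
        # Port used for the initial process-group rendezvous.
        return int(os.environ.get("MASTER_PORT", "12910"))

    def world_size(self) -> int:
        # Total number of processes participating in training.
        return int(os.environ.get("WORLD_SIZE", "1"))

An instance of the subclass would then be passed to the Trainer, as the surrounding docs describe.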
docs/source/extensions/accelerators.rst (2 changes: 1 addition & 1 deletion)

@@ -70,7 +70,7 @@ First, implement your own ClusterEnvironment. Here is the torch elastic implementation

     import os
     from pytorch_lightning import _logger as log
     from pytorch_lightning.utilities import rank_zero_warn
-    from pytorch_lightning.cluster_environments.cluster_environment import ClusterEnvironment
+    from pytorch_lightning.environments.cluster_environment import ClusterEnvironment

     class TorchElasticEnvironment(ClusterEnvironment):
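To make the excerpt self-contained, here is a sketch of how such a torch-elastic environment might read the variables an elastic launcher exports (MASTER_ADDR, MASTER_PORT, WORLD_SIZE, LOCAL_RANK). It mirrors the imports shown above, but it is a reconstruction for illustration, not the verbatim PL source.

import os

from pytorch_lightning import _logger as log
from pytorch_lightning.utilities import rank_zero_warn
from pytorch_lightning.environments.cluster_environment import ClusterEnvironment


class TorchElasticEnvironment(ClusterEnvironment):
    """Illustrative reconstruction; method names and fallbacks are assumptions."""

    def master_address(self):
        # torch elastic exports MASTER_ADDR; warn and fall back if it is missing.
        if "MASTER_ADDR" not in os.environ:
            rank_zero_warn("MASTER_ADDR environment variable is not defined. Set as localhost")
            os.environ["MASTER_ADDR"] = "127.0.0.1"
        log.debug(f"MASTER_ADDR: {os.environ['MASTER_ADDR']}")
        return os.environ["MASTER_ADDR"]

    def master_port(self):
        if "MASTER_PORT" not in os.environ:
            rank_zero_warn("MASTER_PORT environment variable is not defined. Set as 12910")
            os.environ["MASTER_PORT"] = "12910"
        return int(os.environ["MASTER_PORT"])

    def world_size(self):
        # The launcher sets WORLD_SIZE to the total number of workers.
        return int(os.environ["WORLD_SIZE"])

    def local_rank(self):
        # Rank of this process on its local node.
        return int(os.environ["LOCAL_RANK"])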
pytorch_lightning/accelerators/accelerator.py (3 changes: 1 addition & 2 deletions)

@@ -347,8 +347,7 @@ def amp_backend(self) -> Optional[LightningEnum]:
             return AMPType.APEX
         elif isinstance(self.precision_plugin, NativeMixedPrecisionPlugin):
             return AMPType.NATIVE
-        else:
-            return None
+        return None

     @property
     def precision(self) -> int:
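This change drops a redundant else: both preceding branches return, so control only reaches the final line when neither precision plugin type matched. A sketch of the resulting property follows; the opening APEX branch is inferred from context, since the diff shows only its return statement.

# Fragment of the Accelerator class; Optional, LightningEnum, AMPType and the
# precision-plugin classes come from the file's existing imports.
@property
def amp_backend(self) -> Optional[LightningEnum]:
    # The opening `if` line is inferred from context, not shown in the diff.
    if isinstance(self.precision_plugin, ApexMixedPrecisionPlugin):
        return AMPType.APEX
    elif isinstance(self.precision_plugin, NativeMixedPrecisionPlugin):
        return AMPType.NATIVE
    # Both branches above return, so the `else:` wrapper was redundant;
    # falling through to here already means no AMP backend is configured.
    return None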