Commit

changelog + update docstring
ananthsub committed Sep 21, 2021
1 parent 2aa4497 commit 22fd170
Showing 2 changed files with 5 additions and 6 deletions.
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -336,6 +336,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Removed deprecated properties `DeepSpeedPlugin.cpu_offload*` in favor of `offload_optimizer`, `offload_parameters` and `pin_memory` ([#9244](https://github.com/PyTorchLightning/pytorch-lightning/pull/9244))


- Removed `call_configure_sharded_model_hook` property from `Accelerator` and `TrainingTypePlugin` ([#9612](https://github.com/PyTorchLightning/pytorch-lightning/pull/9612))


### Fixed


8 changes: 2 additions & 6 deletions pytorch_lightning/core/hooks.py
@@ -297,12 +297,8 @@ def configure_sharded_model(self) -> None:
where we'd like to shard the model instantly, which is useful for extremely large models since it can save
memory and initialization time.
The accelerator manages whether to call this hook at every given stage.
For sharded plugins where model parallelism is required, the hook is usually only called once
to initialize the sharded parameters, and is not called again in the same process.
By default, for accelerators/plugins that do not use model sharding techniques,
this hook is called during each of the fit/val/test/predict stages.
This hook is called during each of the fit/val/test/predict stages in the same process, so ensure that
the implementation of this hook is idempotent.
"""


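For context (not part of the diff), here is a minimal sketch of how a user might override `configure_sharded_model` in line with the updated docstring. The module name, layer sizes, and the `None` guard are illustrative assumptions, not code from the PyTorch Lightning repository:

```python
import torch
import pytorch_lightning as pl


class LargeModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Defer creation of the heavy submodule until configure_sharded_model,
        # so it can be instantiated in a distributed-aware context.
        self.block = None

    def configure_sharded_model(self) -> None:
        # Guard against repeated calls: the hook may run during each of the
        # fit/val/test/predict stages in the same process, so the override
        # must be idempotent.
        if self.block is None:
            self.block = torch.nn.Sequential(
                torch.nn.Linear(32, 32),
                torch.nn.ReLU(),
                torch.nn.Linear(32, 2),
            )

    def forward(self, x):
        return self.block(x)
```

The `if self.block is None` check is one simple way to satisfy the idempotency requirement stated in the new docstring text.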
