remove deprecated model hooks #3980

Merged (1 commit, Oct 8, 2020)
2 changes: 0 additions & 2 deletions docs/source/hooks.rst
@@ -40,15 +40,13 @@ Training loop
^^^^^^^^^^^^^

- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_epoch_start`
- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_batch_start`
Contributor: shall we add `on_train_batch_start`?

Member (Author): Good point, mind making it a suggestion here?

Contributor: GitHub says "Applying suggestions on deleted lines is not supported."

- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_train_batch_start`

- :meth:`~pytorch_lightning.core.lightning.LightningModule.tbptt_split_batch`
- :meth:`~pytorch_lightning.core.lightning.LightningModule.training_step`
- :meth:`~pytorch_lightning.core.lightning.LightningModule.training_step_end` (optional)
- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_before_zero_grad`
- :meth:`~pytorch_lightning.core.hooks.ModelHooks.backward`
- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_after_backward`
- ``optimizer.step()``
- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_batch_end`
Contributor: should `on_train_batch_end` go here too?

Contributor (suggested addition):
- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_train_batch_end`

- :meth:`~pytorch_lightning.core.lightning.LightningModule.training_epoch_end`
- :meth:`~pytorch_lightning.core.hooks.ModelHooks.on_epoch_end`

21 changes: 0 additions & 21 deletions pytorch_lightning/core/hooks.py
@@ -209,27 +209,6 @@ def on_test_model_train(self) -> None:
"""
self.train()

def on_batch_start(self, batch: Any) -> None:
"""
Called in the training loop before anything happens for that batch.

If you return -1 here, you will skip training for the rest of the current epoch.

Args:
batch: The batched data as it is returned by the training DataLoader.

.. warning:: Deprecated in 0.9.0, will be removed in 1.0.0 (use `on_train_batch_start` instead)
"""
# do something when the batch starts

def on_batch_end(self) -> None:
"""
Called in the training loop after the batch.

.. warning:: Deprecated in 0.9.0, will be removed in 1.0.0 (use `on_train_batch_end` instead)
"""
# do something when the batch ends

def on_epoch_start(self) -> None:
"""
Called in the training loop at the very beginning of the epoch.
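The deleted `on_batch_start`/`on_batch_end` pair maps onto the `on_train_batch_*` hooks that remain in the training-loop list above. A minimal stubbed migration sketch (no `pytorch_lightning` import; the extra `batch_idx`/`dataloader_idx` parameters are assumed from the 1.0-era hook signatures):

```python
calls = []

class MigratedModule:
    # replacement hooks: they also receive the batch index (and, in this
    # era of the API, a dataloader index), unlike the removed hooks
    def on_train_batch_start(self, batch, batch_idx, dataloader_idx=0):
        calls.append(("start", batch_idx))

    def on_train_batch_end(self, outputs, batch, batch_idx, dataloader_idx=0):
        calls.append(("end", batch_idx))

# simulate what the Trainer's training loop would do for two batches
m = MigratedModule()
for i, batch in enumerate([[1, 2], [3, 4]]):
    m.on_train_batch_start(batch, i)
    outputs = sum(batch)  # stand-in for training_step
    m.on_train_batch_end(outputs, batch, i)

print(calls)  # [('start', 0), ('end', 0), ('start', 1), ('end', 1)]
```

Note that the old hooks' "return -1 to skip the rest of the epoch" contract does not carry over unchanged; consult the release notes when migrating.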
32 changes: 0 additions & 32 deletions pytorch_lightning/core/lightning.py
@@ -685,13 +685,6 @@ def validation_epoch_end(self, val_step_outputs):
See the :ref:`multi_gpu` guide for more details.
"""

def validation_end(self, outputs):
"""
Warnings:
Deprecated in v0.7.0. Use :meth:`validation_epoch_end` instead.
Will be removed in 1.0.0.
"""

def validation_epoch_end(
self, outputs: List[Any]
) -> None:
@@ -868,13 +861,6 @@ def test_epoch_end(self, output_results):
See the :ref:`multi_gpu` guide for more details.
"""

def test_end(self, outputs):
"""
Warnings:
Deprecated in v0.7.0. Use :meth:`test_epoch_end` instead.
Will be removed in 1.0.0.
"""

def test_epoch_end(
self, outputs: List[Any]
) -> None:
@@ -1288,24 +1274,6 @@ def get_progress_bar_dict(self):

return tqdm_dict

def get_tqdm_dict(self) -> Dict[str, Union[int, str]]:
"""
Additional items to be displayed in the progress bar.

Return:
Dictionary with the items to be displayed in the progress bar.

Warning:
Deprecated since v0.7.3.
Use :meth:`get_progress_bar_dict` instead.
"""
rank_zero_warn(
"`get_tqdm_dict` was renamed to `get_progress_bar_dict` in v0.7.3"
" and this method will be removed in v1.0.0",
DeprecationWarning,
)
return self.get_progress_bar_dict()

@classmethod
def _auto_collect_arguments(cls, frame=None) -> Tuple[Dict, Dict]:
""""""
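The deleted `get_tqdm_dict` body shows the shim pattern this codebase used through 0.x: warn under the old name, then delegate to the new one. A self-contained sketch of that pattern, using a hypothetical mixin in place of `LightningModule`:

```python
import warnings

class ProgressBarMixin:
    """Hypothetical stand-in for the LightningModule progress-bar API."""

    def get_progress_bar_dict(self):
        # the supported hook going forward
        return {"loss": "0.123", "v_num": 7}

    def get_tqdm_dict(self):
        # deprecated shim: emit a DeprecationWarning, then delegate
        warnings.warn(
            "`get_tqdm_dict` was renamed to `get_progress_bar_dict` in v0.7.3"
            " and this method will be removed in v1.0.0",
            DeprecationWarning,
        )
        return self.get_progress_bar_dict()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = ProgressBarMixin().get_tqdm_dict()

print(result)                                              # {'loss': '0.123', 'v_num': 7}
print(issubclass(caught[0].category, DeprecationWarning))  # True
```

Callers on the old name keep working for a release cycle while seeing the warning; this PR is the final step where the shim itself is deleted.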
29 changes: 0 additions & 29 deletions tests/test_deprecated.py
@@ -52,32 +52,3 @@ def test_dataloader(self):

def test_end(self, outputs):
return {'test_loss': torch.tensor(0.7)}


# def test_tbd_remove_in_v1_0_0_model_hooks():
#
# model = ModelVer0_6()
#
# with pytest.deprecated_call(match='will be removed in v1.0. Use `test_epoch_end` instead'):
# trainer = Trainer(logger=False)
# trainer.test(model)
# assert trainer.logger_connector.callback_metrics == {'test_loss': torch.tensor(0.6)}
#
# with pytest.deprecated_call(match='will be removed in v1.0. Use `validation_epoch_end` instead'):
# trainer = Trainer(logger=False)
# # TODO: why `dataloder` is required if it is not used
# result = trainer._evaluate(model, dataloaders=[[None]], max_batches=1)
# assert result[0] == {'val_loss': torch.tensor(0.6)}
#
# model = ModelVer0_7()
#
# with pytest.deprecated_call(match='will be removed in v1.0. Use `test_epoch_end` instead'):
# trainer = Trainer(logger=False)
# trainer.test(model)
# assert trainer.logger_connector.callback_metrics == {'test_loss': torch.tensor(0.7)}
#
# with pytest.deprecated_call(match='will be removed in v1.0. Use `validation_epoch_end` instead'):
# trainer = Trainer(logger=False)
# # TODO: why `dataloder` is required if it is not used
# result = trainer._evaluate(model, dataloaders=[[None]], max_batches=1)
# assert result[0] == {'val_loss': torch.tensor(0.7)}