Progress bar callback #1450

Merged 19 commits on Apr 24, 2020
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -33,6 +33,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Changed

- Changed the default behaviour to no longer include a NaN check with each training iteration. ([#1475](https://github.com/PyTorchLightning/pytorch-lightning/pull/1475))
- Decoupled the progress bar from trainer. It is a callback now and can be customized or even be replaced entirely ([#1450](https://github.com/PyTorchLightning/pytorch-lightning/pull/1450)).

- Changed lr schedule step interval behavior to update every backwards pass instead of every forwards pass ([#1476](https://github.com/PyTorchLightning/pytorch-lightning/issues/1476))
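The "decoupled progress bar" entry above means progress reporting now goes through the same hook interface as any other callback, so it can be subclassed or swapped out. The following framework-free sketch illustrates that pattern; the class names echo the `ProgressBarBase`/`ProgressBar` names exported by this PR, but the hook signatures here are simplified stand-ins, not the library's actual API.

```python
class ProgressBarBase:
    """Minimal stand-in for a progress bar implemented as a callback:
    it only receives hook calls and keeps counters, so the training
    loop never needs to know how progress is rendered."""

    def __init__(self):
        self.train_batch_idx = 0
        self.total_train_batches = 0

    def on_epoch_start(self, total_train_batches):
        # reset counters at the start of every epoch
        self.train_batch_idx = 0
        self.total_train_batches = total_train_batches

    def on_batch_end(self):
        self.train_batch_idx += 1

    @property
    def percent(self):
        return 100.0 * self.train_batch_idx / self.total_train_batches


class PrintingBar(ProgressBarBase):
    """Customization point: subclass and override only what you need."""

    def on_batch_end(self):
        super().on_batch_end()
        # a real bar would redraw here; we just record the latest text
        self.last_line = f"{self.percent:.0f}%"
```

Because the bar is just another callback, replacing it entirely amounts to passing a different object to the trainer's callback list.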

@@ -41,6 +42,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Deprecated

- Deprecated `training_tqdm_dict` in favor of `progress_bar_dict` ([#1450](https://github.com/PyTorchLightning/pytorch-lightning/pull/1450)).
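A rename like `training_tqdm_dict` → `progress_bar_dict` is typically shipped with a deprecation shim so old call sites keep working while emitting a warning. A minimal sketch of that pattern (the `Trainer` class and the metric values here are illustrative stand-ins, not the library's implementation):

```python
import warnings


class Trainer:
    """Toy stand-in demonstrating a property rename with a deprecation shim."""

    @property
    def progress_bar_dict(self):
        # the metrics shown in the progress bar (illustrative values)
        return {"loss": 0.25}

    @property
    def training_tqdm_dict(self):
        # old name: forward to the new property, but warn the caller
        warnings.warn(
            "`training_tqdm_dict` was renamed to `progress_bar_dict`",
            DeprecationWarning,
        )
        return self.progress_bar_dict
```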


### Removed
6 changes: 6 additions & 0 deletions docs/source/callbacks.rst
@@ -78,3 +78,9 @@ We successfully extended functionality without polluting our super clean
_save_model,
_abc_impl,
check_monitor_top_k,

---------

.. automodule:: pytorch_lightning.callbacks.progress
    :noindex:
    :exclude-members:
1 change: 1 addition & 0 deletions docs/source/trainer.rst
@@ -19,5 +19,6 @@ Trainer
slurm_job_id,
tng_tqdm_dic,
training_tqdm_dict,
progress_bar_dict,
init_optimizers,
configure_schedulers
3 changes: 3 additions & 0 deletions pytorch_lightning/callbacks/__init__.py
@@ -2,10 +2,13 @@
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks.gradient_accumulation_scheduler import GradientAccumulationScheduler
from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint
from pytorch_lightning.callbacks.progress import ProgressBarBase, ProgressBar

__all__ = [
'Callback',
'EarlyStopping',
'ModelCheckpoint',
'GradientAccumulationScheduler',
'ProgressBarBase',
'ProgressBar',
]
24 changes: 24 additions & 0 deletions pytorch_lightning/callbacks/base.py
@@ -22,6 +22,14 @@ def on_init_end(self, trainer):
"""Called when the trainer initialization ends, model has not yet been set."""
pass

def on_sanity_check_start(self, trainer, pl_module):
"""Called when the validation sanity check starts."""
pass

def on_sanity_check_end(self, trainer, pl_module):
"""Called when the validation sanity check ends."""
pass

def on_epoch_start(self, trainer, pl_module):
"""Called when the epoch begins."""
pass
@@ -34,6 +42,22 @@ def on_batch_start(self, trainer, pl_module):
"""Called when the training batch begins."""
pass

def on_validation_batch_start(self, trainer, pl_module):
"""Called when the validation batch begins."""
pass

def on_validation_batch_end(self, trainer, pl_module):
"""Called when the validation batch ends."""
pass

def on_test_batch_start(self, trainer, pl_module):
"""Called when the test batch begins."""
pass

def on_test_batch_end(self, trainer, pl_module):
"""Called when the test batch ends."""
pass

def on_batch_end(self, trainer, pl_module):
"""Called when the training batch ends."""
pass
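The new `on_validation_batch_*` and `on_test_batch_*` hooks follow the usual callback-dispatch pattern: the trainer iterates over its registered callbacks and invokes the hook by name at each point in the loop. A self-contained sketch of that dispatch mechanism (the `MiniTrainer` class is a hypothetical stand-in, not the library's Trainer):

```python
class Callback:
    """No-op base class: subclasses override only the hooks they need."""

    def on_validation_batch_start(self, trainer, pl_module):
        pass

    def on_validation_batch_end(self, trainer, pl_module):
        pass


class CountingCallback(Callback):
    """Example subclass: counts completed validation batches."""

    def __init__(self):
        self.val_batches = 0

    def on_validation_batch_end(self, trainer, pl_module):
        self.val_batches += 1


class MiniTrainer:
    """Toy trainer showing how hooks are dispatched to every callback."""

    def __init__(self, callbacks):
        self.callbacks = callbacks

    def _call_hook(self, name):
        for cb in self.callbacks:
            getattr(cb, name)(self, None)  # pl_module omitted in this sketch

    def validate(self, num_batches):
        for _ in range(num_batches):
            self._call_hook("on_validation_batch_start")
            # ... run the actual validation step here ...
            self._call_hook("on_validation_batch_end")
```

This is exactly what lets the progress bar live outside the trainer: it only consumes these hook calls, like any other callback.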