
[1/2] Deprecate outputs in on_train_epoch_end hooks #7339

Merged (15 commits) on May 5, 2021

Conversation

@ananthsub (Contributor) commented May 4, 2021

What does this PR do?

This addresses part of #6865

Traditionally, the differentiator between LightningModule.training_epoch_end and the on_train_epoch_end hook has been that training_epoch_end received all of the batch outputs for the epoch from that rank for post-processing, whereas on_train_epoch_end took no arguments and did not require the trainer to cache those outputs.

We deprecate outputs in on_train_epoch_end because:

  • We need a hook that runs at the end of the epoch without signaling the trainer to cache outputs. Caching can unintentionally inflate memory requirements and severely slow down training, putting large-scale use cases at risk of OOMs. This is the primary performance concern.
  • Having two hooks that both run at the end of the epoch and both receive outputs is confusing for users: when should they use which? This is the secondary usability concern.
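For illustration, a minimal sketch of how the two hooks compare, assuming the v1.3-era signatures (the outputs parameter on the callback hook is the one this PR deprecates):

    import pytorch_lightning as pl

    class MyModule(pl.LightningModule):
        def training_epoch_end(self, outputs):
            # receives the cached per-batch results for the epoch from this rank;
            # overriding this forces the trainer to store every batch output
            ...

    class MyCallback(pl.Callback):
        # deprecated: accepting `outputs` here also forces output caching
        # def on_train_epoch_end(self, trainer, pl_module, outputs): ...

        def on_train_epoch_end(self, trainer, pl_module):
            # preferred: no `outputs` argument, so the trainer caches nothing
            ...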

This PR stores the per-batch results across the epoch only if either of these conditions holds:

  • If the LightningModule overrides training_epoch_end
  • If the LightningModule overrides on_train_epoch_end and includes outputs in its signature (until v1.5)

The outputs were originally added here: #4369
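Conceptually, the check works roughly as follows; this is a simplified sketch for illustration only, not the actual trainer internals (the helper name is hypothetical):

    import inspect
    from pytorch_lightning import LightningModule

    def should_cache_epoch_outputs(module: LightningModule) -> bool:
        cls = type(module)
        # condition 1: the module overrides training_epoch_end
        if getattr(cls, "training_epoch_end", None) is not getattr(LightningModule, "training_epoch_end", None):
            return True
        # condition 2 (deprecated, supported until v1.5): on_train_epoch_end is
        # overridden and declares an `outputs` parameter
        hook = getattr(cls, "on_train_epoch_end", None)
        if hook is not None and hook is not getattr(LightningModule, "on_train_epoch_end", None):
            return "outputs" in inspect.signature(hook).parameters
        return False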

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

Make sure you had fun coding 🙃

@pep8speaks commented May 4, 2021

Hello @ananthsub! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-05-05 12:25:47 UTC

@codecov (bot) commented May 4, 2021

Codecov Report

Merging #7339 (c84577d) into master (1a6dcbd) will decrease coverage by 4%.
The diff coverage is 98%.

@@           Coverage Diff           @@
##           master   #7339    +/-   ##
=======================================
- Coverage      92%     87%    -4%     
=======================================
  Files         200     200            
  Lines       12953   12985    +32     
=======================================
- Hits        11883   11360   -523     
- Misses       1070    1625   +555     

@ananthsub changed the title Remove outputs from on_train_epoch_end → [wip] Deprecate outputs in on_train_epoch_end hooks May 4, 2021
@ananthsub added the design and refactor labels May 4, 2021
@ananthsub added this to the v1.3 milestone May 4, 2021
@ananthsub changed the title [wip] Deprecate outputs in on_train_epoch_end hooks → Deprecate outputs in on_train_epoch_end hooks May 4, 2021
@ananthsub changed the title Deprecate outputs in on_train_epoch_end hooks → [1/2] Deprecate outputs in on_train_epoch_end hooks May 4, 2021
@ananthsub linked an issue May 4, 2021 that may be closed by this pull request
@ethanwharris (Member) left a comment:

LGTM 😃 minor queries

Review comments (resolved) on:

  • pytorch_lightning/accelerators/accelerator.py
  • pytorch_lightning/callbacks/base.py
  • pytorch_lightning/trainer/callback_hook.py
  • pytorch_lightning/trainer/training_loop.py

On this snippet in the training loop:

    # if the PL module doesn't have the hook then call the accelerator
    # used to auto-reduce things for the user with Results obj
    elif hasattr(self.trainer.accelerator, hook_name):

a contributor commented:

Not a huge fan of this. Better to use call_hook and maybe perform the signature analysis somewhere else.

@ananthsub (Author) replied May 4, 2021:

From the comment, call_hook enforces that the accelerator, trainer, and module all take the exact same arguments for the hook, which might not be the case here. This is the same pattern @kaushikb11 followed in #6120.

I'm not really a fan either, but call_hook calls across 3 distinct interfaces which aren't enforced to be compatible.

Maybe something we can look at for v1.4 is how to simplify/strengthen this? Maybe the techniques @SkafteNicki used for metrics collections could apply here, but that seems beyond the scope of this PR.

One thing I can do is add comments to Trainer.call_hook indicating that this override is applied in the training loop, and that any changes to call_hook must also be applied there.
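To make the concern concrete, here is a sketch of the failure mode (illustrative only, not the actual Trainer.call_hook):

    # a generic dispatcher that fans the same arguments out to every interface
    def call_hook(trainer, hook_name: str, *args):
        for obj in (trainer.lightning_module, trainer.accelerator):
            fn = getattr(obj, hook_name, None)
            if callable(fn):
                # this only works if every interface declares an identical
                # signature for `hook_name`; if e.g. the module's hook takes
                # `outputs` and the accelerator's does not, one of these
                # calls raises a TypeError
                fn(*args)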

@mergify (bot) added the has conflicts label May 4, 2021
@SeanNaren (Contributor) left a comment:

Thanks @ananthsub! I've strayed away from these hooks because of the caching logic, and this is clearer.

@mergify (bot) removed the has conflicts label May 4, 2021
@ananthsub force-pushed the fix-rm-outputs-train-epoch-end branch from 31314ee to d18455c on May 4, 2021 15:10
@awaelchli (Contributor) commented May 5, 2021:

For callback implementers who need the outputs in the callback, what do you suggest? Caching through the batch_end callback methods?

@mergify (bot) added the has conflicts label May 5, 2021
@Borda added the _Will label May 5, 2021
@mergify (bot) removed the has conflicts label May 5, 2021
@ananthsub (Author) commented May 5, 2021:

> For callback implementers who need the outputs in the callback, what do you suggest? Caching through the batch_end callback methods?

Yes, there are at least these options to support this:

  • If the callback is meant to work across multiple LightningModules and also needs access to the outputs, it can cache the batch results in the batch-end hooks (see the sketch after this list). Downside: if multiple callbacks do this independently, memory is wasted on redundant copies.
  • The LightningModule can cache the results in its epoch-end hook to make them available for callbacks. This avoids the redundant copies above, but requires controlling both the module and the callback.
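A minimal sketch of the first option, assuming the callback hook signatures of this release (the class name is illustrative):

    from pytorch_lightning import Callback

    class CacheTrainEpochOutputs(Callback):
        """Cache per-batch outputs manually instead of relying on the
        deprecated `outputs` argument of on_train_epoch_end."""

        def on_train_epoch_start(self, trainer, pl_module):
            self._outputs = []

        def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
            # keep only what you need (e.g. detached scalars) so memory stays bounded
            self._outputs.append(outputs)

        def on_train_epoch_end(self, trainer, pl_module):
            # post-process the cached outputs for the epoch, then free them
            ...
            self._outputs = []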

The content of outputs is entirely LightningModule-specific, so more of this logic should move closer to the module. What do you think?

@Borda (Member) commented May 5, 2021:

@ananthsub can you please add an example to the docs showing how to properly use caching in this case?

@ananthsub (Author) replied:

> @ananthsub can you please add an example to the docs showing how to properly use caching in this case?

On the docs site? In the callback/model hooks?

@Borda (Member) commented May 5, 2021:

> On the docs site? In the callback/model hooks?

Wherever you feel is the better place :]
So let's merge this and add this doc in an extra 3/2 PR, ok?

@awaelchli (Contributor) commented:

@ananthsub Yes, I also see it that way, and I think it's the most straightforward solution. I just wanted to make sure we know what to recommend when someone asks for this, since we're removing a feature that was requested. For everything we deprecate, we should have a solution for people who rely on it.

Labels: design (includes a design discussion), refactor

Successfully merging this pull request may close these issues:

  • Remove duplicate epoch_end hooks in the Lightning Module

10 participants