[IPU] Call accelerator hooks regardless if LM hook overridden 1/n #7826
@@ -1237,11 +1237,13 @@ def call_hook(self, hook_name: str, *args, **kwargs) -> Any:
         hook_fx = getattr(model_ref, hook_name)
         output = hook_fx(*args, **kwargs)

-        # if the PL module doesn't have the hook then call the accelerator
-        # used to auto-reduce things for the user with Results obj
-        elif hasattr(self.accelerator, hook_name):
+        # call the accelerator hook
+        if hasattr(self.accelerator, hook_name):
             accelerator_hook = getattr(self.accelerator, hook_name)
-            output = accelerator_hook(*args, **kwargs)
+            accelerator_output = accelerator_hook(*args, **kwargs)
+            # Rely on the accelerator output if lightningModule hook returns nothing
+            if output is None:
+                output = accelerator_output

         if not skip:
             self._cache_logged_metrics()
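Taken together, the patch means the accelerator hook always gets a chance to run instead of acting only as a fallback when the module hook is absent. Below is a minimal sketch of the resulting control flow, written as a free function for illustration; the trainer, lightning_module, and accelerator attributes mirror the diff context, while everything else is a simplifying assumption, not the real Trainer code:

from typing import Any

def call_hook(trainer, hook_name: str, *args: Any, **kwargs: Any) -> Any:
    # Sketch only: a simplified reconstruction of the patched flow; the real
    # method also handles profiling, trainer-level hooks, and logging bookkeeping.
    output = None

    # run the LightningModule hook when the module provides it
    model_ref = trainer.lightning_module
    if hasattr(model_ref, hook_name):
        output = getattr(model_ref, hook_name)(*args, **kwargs)

    # after this PR the accelerator hook runs regardless of the above;
    # previously (elif) it ran only when the module hook was absent
    if hasattr(trainer.accelerator, hook_name):
        accelerator_output = getattr(trainer.accelerator, hook_name)(*args, **kwargs)
        # the accelerator result is used only if the module hook returned nothing
        if output is None:
            output = accelerator_output

    return output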
Review conversation on the new if hasattr(self.accelerator, hook_name): line:

- I feel less comfortable adding more dependencies to call_hook. Separately (unrelated to this PR), we're aggregating all of these hooks together in the profiling results. The model/callbacks/accelerator could all have different behavior; I think it's worth splitting these out into separate profiling sections to provide more fine-grained visibility.
- I agree, the accelerator hooks shouldn't be a part of call_hook. Also agree on the profiling comment.
- thanks @ananthsub! For the IPU integration we'll be moving as is, but afterwards I want to remove this logic for the accelerators from this call and call them explicitly through the loop functions, as this can foster some nasty bugs as you suggested. Does that sound fair?
- @SeanNaren that sounds good!
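As a rough illustration of the profiling suggestion above, here is a sketch of per-source profiler sections. The section names are invented, and it assumes the profiler exposes a profile(name) context manager in the style of Lightning's profilers:

def call_hook_with_split_profiling(trainer, hook_name, *args, **kwargs):
    # Hypothetical: one profiler section per hook source instead of a single
    # aggregated section named after the hook, for finer-grained visibility.
    output = None
    model_ref = trainer.lightning_module
    if hasattr(model_ref, hook_name):
        with trainer.profiler.profile(f"LightningModule.{hook_name}"):
            output = getattr(model_ref, hook_name)(*args, **kwargs)
    if hasattr(trainer.accelerator, hook_name):
        with trainer.profiler.profile(f"Accelerator.{hook_name}"):
            accelerator_output = getattr(trainer.accelerator, hook_name)(*args, **kwargs)
        if output is None:
            output = accelerator_output
    return output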
A follow-up review comment gives a concrete example of the bug risk:

- One example is training_step_end for DP, which is handled in the plugin. If the user doesn't do the reduction, we do it in the accelerator. But now with this change, if the user has training_step_end overridden, you still call training_step_end on the DP plugin, which will do another automatic reduction.
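To make that hazard concrete, here is a hypothetical, self-contained sketch using plain-Python stand-ins; the function names and the mean reduction are assumptions, not Lightning's actual DP plugin code:

# one loss per GPU, as gathered by DataParallel (hypothetical values)
per_device_losses = [1.0, 3.0]

def dp_plugin_training_step_end(output):
    # stand-in for the DP plugin's automatic reduction: mean over devices
    return sum(output) / len(output)

def user_training_step_end(output):
    # a user override that already performs the reduction itself
    return sum(output) / len(output)

# old behaviour (elif): the plugin hook ran only when the user did NOT override it
reduced = user_training_step_end(per_device_losses)  # 2.0

# new behaviour: the plugin hook runs as well, so the automatic reduction is
# applied a second time, to a value that is no longer a list of per-device
# outputs; uncommenting the call below raises TypeError ('float' is not iterable)
# dp_plugin_training_step_end(reduced)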