New modular metric interface #2528
Conversation
This pull request is now in conflict... :(
Overall, I really like these changes!
I feel like we should automatically sync the output of `forward` whenever we are running on DDP, and then call `aggregate` on it. We could also think about defaults (i.e. `aggregate` could be a simple mean by default).
So the next step would be to revisit all metrics for reduction?
Edit: This should also make it simpler to pickle metrics, since we only need to make sure the converters can be pickled :)
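As a minimal sketch of that idea (the class and method names here are assumptions for illustration, not this PR's actual API): the base class gathers the per-process output of `forward` when DDP is initialized and then hands it to `aggregate`, which defaults to a simple mean.

```python
import torch
import torch.distributed as dist


class SyncedMetric:
    """Hypothetical base class sketch, not the PR's actual API."""

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError

    def aggregate(self, outputs: torch.Tensor) -> torch.Tensor:
        # Sensible default, as suggested above: a simple mean.
        return outputs.mean()

    def __call__(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        out = self.forward(pred, target)
        if dist.is_available() and dist.is_initialized():
            # Automatically gather the per-process outputs under DDP ...
            gathered = [torch.zeros_like(out) for _ in range(dist.get_world_size())]
            dist.all_gather(gathered, out)
            out = torch.stack(gathered)
        # ... then call `aggregate` on the (possibly gathered) result.
        return self.aggregate(out)
```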
@justusschock thanks, it took me a while to figure out how to keep this as close to native PyTorch as possible, but still expressive enough to support the features we need. As for aggregation over multiple batches, one way to achieve it is to introduce a new …
Codecov Report

```diff
@@           Coverage Diff            @@
##           master    #2528    +/-   ##
========================================
- Coverage      90%      81%     -9%
========================================
  Files          81       84      +3
  Lines        7858     9321   +1463
========================================
+ Hits         7034     7518    +484
- Misses        824     1803    +979
```
@SkafteNicki I see your point. But isn't it basically the same whether you accumulate across nodes or across different batches on the same node? This would probably avoid some code duplication. We should have a chat on Slack about the ideal integration once we have finished the API for metrics.
@justusschock it probably is the same, syncing between nodes and accumulating on the same node, so I agree that they should be handled in much the same way. The only difference is that accumulation on the same node should probably be a feature that the user can enable/disable (i.e. it could be a … For now, I will rename the …
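To illustrate the shared-reduction idea (all names here are hypothetical, not from this PR): cross-batch accumulation can reuse the same reduction as cross-node syncing, behind a user-controlled flag.

```python
import torch


class AccumulatingMetric:
    """Hypothetical sketch: cross-batch accumulation reuses the same
    reduction as cross-node syncing, behind a user-controlled flag."""

    def __init__(self, accumulate: bool = True):
        self.accumulate = accumulate  # user can enable/disable, as discussed
        self._outputs = []

    def update(self, batch_output: torch.Tensor) -> None:
        if not self.accumulate:
            self._outputs.clear()  # keep only the latest batch
        self._outputs.append(batch_output)

    def reduce(self) -> torch.Tensor:
        # The same reduction could serve both cross-node syncing and
        # cross-batch accumulation, avoiding code duplication.
        return torch.stack(self._outputs).mean(dim=0)
```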
This pull request is now in conflict... :(
@SkafteNicki How is it going with this PR?
This pull request is now in conflict... :(
LGTM 🐰
@SkafteNicki some metric tests seem to be hanging...
@Borda which tests are hanging? I cannot figure it out from the Drone details.
See this build - http://35.192.60.23/PyTorchLightning/pytorch-lightning/8999
@Borda fixed the bug, we can merge this now :]
What does this PR do?
Fixes #3069
This is a proposal for how an extension of the modular interface for metric packages could look. What our interface is missing is the option to do computations after DDP sync. Consider the following example for RMSE:
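(A hedged reconstruction, not this PR's actual snippet.) Averaging per-process RMSE values gives the wrong answer: the squared errors must be summed across processes first, and the square root taken only after the sync.

```python
import torch
import torch.distributed as dist


def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Per-process intermediate quantities.
    sq_err = torch.sum((pred - target) ** 2)
    n = torch.tensor(float(target.numel()), device=pred.device)
    if dist.is_available() and dist.is_initialized():
        # Sum the intermediates across processes (default op is SUM) ...
        dist.all_reduce(sq_err)
        dist.all_reduce(n)
    # ... and only then take the square root. sqrt(mean(x)) != mean(sqrt(x)),
    # so this final step must run *after* the DDP sync.
    return torch.sqrt(sq_err / n)
```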
This PR therefore proposes to go from our decorator-oriented modular interface to a hook-based interface. All hooks are optional, such that the user only needs to implement `forward` if they inherit from either `TensorMetric` or `NumpyMetric`. Hooks to add: see the sketch after this description.
Note: this PR just implements the hooks; I still need to go over each metric and fix those where the `compute` hook is needed. That will be done in a follow-up PR, since this one is already extensive enough.
Tagging @justusschock and @Borda for opinions.
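A rough sketch of the hook-based shape (only the hooks actually named in this thread, `forward`, `aggregate`, and `compute`, are shown; the signatures are assumptions, not the PR's final API):

```python
import torch


class Metric(torch.nn.Module):
    def forward(self, *args, **kwargs) -> torch.Tensor:
        # The only required method when inheriting from TensorMetric or
        # NumpyMetric: the per-batch metric computation.
        raise NotImplementedError

    def aggregate(self, *tensors: torch.Tensor) -> torch.Tensor:
        # Optional hook: reduce values gathered across processes/batches.
        return torch.stack(tensors).mean(dim=0)

    def compute(self, aggregated: torch.Tensor) -> torch.Tensor:
        # Optional hook: final computation after DDP sync, e.g. the
        # square root at the end of RMSE.
        return aggregated
```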
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃