ENH: Support mask in groupby cumprod #48138
Conversation
# Conflicts: # pandas/core/groupby/ops.py # pandas/tests/groupby/test_groupby.py
# Conflicts: # doc/source/whatsnew/v1.6.0.rst
Comparing to the plain (non-grouped) sum/prod, those currently also overflow:
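(A minimal sketch of the kind of overflow meant here; the concrete snippet from the original comment is not preserved, so the values below are made up.)

```python
import numpy as np
import pandas as pd

s = pd.Series([2] * 100, dtype="int64")

print(2 ** 100)               # the exact product, for comparison
print(np.prod(s.to_numpy()))  # int64 wraps around: nowhere near 2**100
print(s.prod())               # the plain (non-grouped) pandas method overflows the same way
```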
So it seems sensible that the groupby variants follow this as well. In general, we should maybe better document those constraints and expectations around overflow (not sure if this is now documented somewhere?)
doc/source/whatsnew/v1.6.0.rst (outdated)

@@ -100,6 +100,7 @@ Deprecations

Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- Performance improvement in :meth:`.GroupBy.cumprod` for extension array dtypes (:issue:`37493`)
This also now uses int64 instead of float64 for the numpy dtypes? So that also changes behaviour in those cases regarding overflow?
Yes, should we mention this in the whatsnew?
I think so, yes. Maybe as notable bug fix, as it has some behaviour change?
@@ -641,10 +641,10 @@ def test_groupby_cumprod():
    tm.assert_series_equal(actual, expected)

    df = DataFrame({"key": ["b"] * 100, "value": 2})
    df["value"] = df["value"].astype(float)
We can maybe keep this as int (or test both in addition), so we have a test for the silent overflow behaviour?
Added a new test explicitly testing that overflow is consistent with numpy.
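(A hedged sketch of what such a test could look like; the test name and exact assertions here are assumptions, not necessarily the code actually added in the PR.)

```python
import pandas._testing as tm
from pandas import DataFrame


def test_groupby_cumprod_overflow_matches_numpy():
    # hypothetical test: int64 cumprod should overflow silently,
    # wrapping around exactly like numpy's cumprod does
    df = DataFrame({"key": ["b"] * 100, "value": 2})
    result = df.groupby("key")["value"].cumprod()
    expected = df["value"].to_numpy().cumprod()  # numpy wraps on int64 overflow
    tm.assert_numpy_array_equal(result.to_numpy(), expected)
```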
@@ -641,10 +641,10 @@ def test_groupby_cumprod():
    tm.assert_series_equal(actual, expected)

    df = DataFrame({"key": ["b"] * 100, "value": 2})
    df["value"] = df["value"].astype(float)
    actual = df.groupby("key")["value"].cumprod()
    # if overflows, groupby product casts to float
    # while numpy passes back invalid values
This comment can probably be updated
Done
# Conflicts: # doc/source/whatsnew/v1.6.0.rst # pandas/_libs/groupby.pyx
# Conflicts: # doc/source/whatsnew/v1.6.0.rst
So this is the last one of the groupby algos. We can start refactoring the groupby ops code paths after this is through.
doc/source/whatsnew/v1.6.0.rst (outdated)

In previous versions we cast to float when applying ``cumsum`` and ``cumprod`` which
lead to incorrect results even if the result could be hold by ``int64`` dtype.
Additionally, the aggregation overflows consistent with numpy when the limit of
I would maybe mention that it is making it consistent with the DataFrame method as well? (without groupby)
Added a reference to the methods
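(For context, a hedged illustration of the consistency with the non-grouped methods that the note now references; the values are made up and this assumes a pandas build that includes this change.)

```python
import pandas as pd

df = pd.DataFrame({"key": ["a"] * 70, "value": 2})

grouped = df.groupby("key")["value"].cumprod()  # goes through the groupby cython algo
plain = df["value"].cumprod()                   # plain Series method, numpy-backed

# Per the discussion above, both should now stay int64 and overflow the
# same way, instead of the grouped op silently casting to float64.
print(grouped.dtype, plain.dtype)  # int64 int64
print(grouped.equals(plain))       # True, including the overflowed tail values
```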
I am still a bit uneasy about this change, since it is silently changing actual results that you get (a previously somewhat correct result (an inexact float) could silently become completely incorrect (an overflowed int)). To what extent would it be possible to split the overflow behaviour change from the mask introduction, so we could for example leave that behaviour change for 2.0? (not sure myself whether this is worth it, just wondering)
It is a bit unfortunate, this is true. But we can now preserve precision where possible; this was buggy before, and since the behaviour is aligned with numpy and the regular DataFrame behaviour, this should be ok imo. In the end it probably does not matter how far off your values are if they are off. We could cast to float before calling the algos, which would keep the current behaviour but would lose the performance gains and the precision fixes (it would also hit cumsum, which is already merged). Since we intend to do 2.0 as the next release anyway, would it be ok to merge this and revert to casting to float before passing the array to the cython algos if we do an unexpected 1.6 next?
Sounds good!
Great! Thanks. @mroeschke Would you mind having a look before merging?
# Conflicts: # doc/source/whatsnew/v1.6.0.rst
Thanks @phofl |
* ENH: Support mask in groupby cumprod
* Add whatsnew
* Move whatsnew
* Adress review
* Fix example
* Clarify
* Change dtype access
This is a general issue here. If we overflow int64 we get garbage. Previously we were working with float64, which gave us back numbers, but they were incorrect. Now we keep precision as long as our numbers fit into int64, which was not the case previously since we were casting to float64 beforehand; imo this is more important.
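(A small illustration of the trade-off being described, using plain numpy and made-up values: past the int64 limit the wrapped result is garbage, while float64 stays in the right ballpark but is inexact; below the limit int64 is exact where float64 may already have rounded.)

```python
import numpy as np

vals = np.full(50, 3, dtype=np.int64)

exact = [3 ** k for k in range(1, 51)]            # Python ints, arbitrary precision
as_int64 = np.cumprod(vals)                       # wraps around once past the int64 limit
as_float64 = np.cumprod(vals.astype(np.float64))  # never wraps, but rounds past 2**53

print(int(as_int64[37]) == exact[37])    # True: 3**38 fits in int64, result is exact
print(int(as_float64[37]) == exact[37])  # False: float64 has already rounded here
print(int(as_int64[-1]) == exact[-1])    # False: 3**50 overflowed, the value is garbage
print(as_float64[-1] / exact[-1])        # ~1.0: inexact, but the right order of magnitude
```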
cc @jorisvandenbossche