Implement some reductions for string Series #31757

Closed. Wants to merge 30 commits.
Changes from 18 commits
5 changes: 5 additions & 0 deletions doc/source/whatsnew/v1.0.2.rst
@@ -28,6 +28,11 @@ Fixed regressions
Bug fixes
~~~~~~~~~

**ExtensionArray**
Contributor comment: this is likely too invasive for 1.0.2; move to 1.1.


- Fixed issue where taking the minimum or maximum of a ``StringArray`` or ``Series`` with ``StringDtype`` type would raise. (:issue:`31746`)
jreback marked this conversation as resolved.
Contributor comment: say .min() or .max()

-

**Categorical**

- Fixed bug where :meth:`Categorical.from_codes` improperly raised a ``ValueError`` when passed nullable integer codes. (:issue:`31779`)
16 changes: 15 additions & 1 deletion pandas/core/arrays/string_.py
@@ -4,6 +4,7 @@
import numpy as np

from pandas._libs import lib, missing as libmissing
from pandas.compat.numpy import function as nv

from pandas.core.dtypes.base import ExtensionDtype
from pandas.core.dtypes.common import pandas_dtype
@@ -12,7 +13,7 @@
from pandas.core.dtypes.inference import is_array_like

from pandas import compat
from pandas.core import ops
from pandas.core import nanops, ops
from pandas.core.arrays import PandasArray
from pandas.core.construction import extract_array
from pandas.core.indexers import check_array_indexer
@@ -274,8 +275,21 @@ def astype(self, dtype, copy=True):
        return super().astype(dtype, copy)

    def _reduce(self, name, skipna=True, **kwargs):
        if name in ["min", "max"]:
            return getattr(self, name)(skipna=skipna, **kwargs)

        raise TypeError(f"Cannot perform reduction '{name}' with string dtype")

    def min(self, axis=None, out=None, keepdims=False, skipna=True):
        nv.validate_min((), dict(out=out, keepdims=keepdims))
        result = nanops.nanmin(self._ndarray, axis=axis, skipna=skipna)
        return libmissing.NA if isna(result) else result

    def max(self, axis=None, out=None, keepdims=False, skipna=True):
        nv.validate_max((), dict(out=out, keepdims=keepdims))
        result = nanops.nanmax(self._ndarray, axis=axis, skipna=skipna)
        return libmissing.NA if isna(result) else result

jreback marked this conversation as resolved.
Member comment: There should be no need to explicitly pass through the axis keyword, I think.

    def value_counts(self, dropna=False):
        from pandas import value_counts

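As a quick sanity check of the behavior added above, the new reductions can be exercised directly on a ``string``-dtype Series. A minimal sketch, assuming a pandas version in which this PR's changes have landed:

```python
import pandas as pd

s = pd.Series(["x", None, "z"], dtype="string")

# min/max skip missing values by default and return plain Python strings
assert s.min() == "x"
assert s.max() == "z"

# with skipna=False the missing value propagates as pd.NA
assert s.min(skipna=False) is pd.NA
```

``_reduce`` dispatches only ``"min"`` and ``"max"`` to these methods; any other reduction name still raises ``TypeError``.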
8 changes: 8 additions & 0 deletions pandas/core/nanops.py
@@ -854,6 +854,8 @@ def reduction(
    mask: Optional[np.ndarray] = None,
) -> Dtype:

    na_mask = isna(values)
Contributor comment: you should already have the mask (pass it in when you call this).


    values, mask, dtype, dtype_max, fill_value = _get_values(
        values, skipna, fill_value_typ=fill_value_typ, mask=mask
    )
@@ -864,6 +866,12 @@
            result.fill(np.nan)
        except (AttributeError, TypeError, ValueError):
            result = np.nan
    elif is_object_dtype(dtype) and values.ndim == 1 and na_mask.any():
Contributor comment: do you have a test case that fails on non ndim==1?

Member (author) reply: Yes, I was getting a couple of test failures otherwise, I think for reductions when the entire DataFrame has object dtype (I can't recall which tests exactly). I figured the subsetting values[~mask] only makes sense when values has one dimension.

        # Need to explicitly mask NA values for object dtypes
Contributor comment: why?

        if skipna:
            result = getattr(values[~na_mask], meth)(axis)
Member comment: This masking could also be done in the min/max functions? (as you had before?)

Or, another option might be to add a min/max function to mask_ops.py, similarly to what I am doing for sum in #30982 (it should be simpler for min/max, as those don't need to handle the min_count).

Member (author) reply: I think a benefit of having it here is that this also fixes a bug for Series: pd.Series(["a", np.nan]).min() currently raises even though it shouldn't.

Member reply: Ah, that's a good point. Can you add a test for that, then?

Now, that aside, I think longer term we still want the separate min/max in mask_ops.py, so it can also be used for the int dtypes. But that can certainly be done in a separate PR.

        else:
            result = np.nan
    else:
        result = getattr(values, meth)(axis)

23 changes: 23 additions & 0 deletions pandas/tests/arrays/string_/test_string.py
@@ -269,3 +269,26 @@ def test_value_counts_na():
    result = arr.value_counts(dropna=True)
    expected = pd.Series([2, 1], index=["a", "b"], dtype="Int64")
    tm.assert_series_equal(result, expected)


@pytest.mark.parametrize("func", ["min", "max"])
@pytest.mark.parametrize("skipna", [True, False])
def test_reduction(func, skipna):
    s = pd.Series(["x", "y", "z"], dtype="string")
    result = getattr(s, func)(skipna=skipna)
    expected = "x" if func == "min" else "z"

    assert result == expected

jreback marked this conversation as resolved.
jorisvandenbossche marked this conversation as resolved.


@pytest.mark.parametrize("func", ["min", "max"])
@pytest.mark.parametrize("skipna", [True, False])
def test_reduction_with_na(func, skipna):
    s = pd.Series([pd.NA, "y", "z"], dtype="string")
    result = getattr(s, func)(skipna=skipna)

    if skipna:
        expected = "y" if func == "min" else "z"
        assert result == expected
    else:
        assert result is pd.NA

dsaxton marked this conversation as resolved.
6 changes: 6 additions & 0 deletions pandas/tests/extension/base/reduce.py
@@ -25,6 +25,12 @@ class BaseNoReduceTests(BaseReduceTests):

    @pytest.mark.parametrize("skipna", [True, False])
    def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna):
        if isinstance(data, pd.arrays.StringArray) and all_numeric_reductions in [
            "min",
            "max",
        ]:
            pytest.skip("These reductions are implemented")
Member comment: Can you see if you can rather update this in test_string.py? It might be that we now need to subclass the ReduceTests instead of NoReduceTests. (Ideally the base tests remain dtype agnostic.)

Member (author) reply: By updating in test_string.py, do you mean adding tests using the fixtures data and all_numeric_reductions, only checking for the "correct" output (and skipping over those reductions that aren't yet implemented)?

Member reply: Hmm, actually looking at the base reduction tests now: they are not really written in a way that will pass for strings.

But you can copy this test to tests/extension/test_strings.py (and so override the base one), and then do the string-array-specific adaptation there. It gives some duplication of test code, but it's not long, and it gives a clearer separation of concerns (the changes for string array live in test_string).

Member (author) reply: Ok, so we can remove the special cases for StringArray in BaseNoReduceTests without getting test failures, as long as they're handled in TestNoReduce in test_string.py? I'm not too familiar with how these particular tests actually get executed during CI.


        op_name = all_numeric_reductions
        s = pd.Series(data)

4 changes: 2 additions & 2 deletions pandas/tests/frame/test_apply.py
@@ -1406,8 +1406,8 @@ def test_apply_datetime_tz_issue(self):
    @pytest.mark.parametrize("method", ["min", "max", "sum"])
    def test_consistency_of_aggregates_of_columns_with_missing_values(self, df, method):
        # GH 16832
        none_in_first_column_result = getattr(df[["A", "B"]], method)().sort_index()
        none_in_second_column_result = getattr(df[["B", "A"]], method)().sort_index()
Member (author) comment: Previously the column with the missing value was getting dropped from the result, so it only had a single row and the order didn't matter.

jorisvandenbossche marked this conversation as resolved.

        tm.assert_series_equal(
            none_in_first_column_result, none_in_second_column_result
        )