Merge remote-tracking branch 'upstream/master' into ea-divmod
TomAugspurger committed Oct 3, 2018
2 parents 0671e7d + ee80803 commit 35d4213
Showing 38 changed files with 785 additions and 310 deletions.
4 changes: 2 additions & 2 deletions azure-pipelines.yml
@@ -18,8 +18,8 @@ jobs:
- template: ci/azure/windows.yml
parameters:
name: Windows
vmImage: vs2017-win2017
vmImage: vs2017-win2016
- template: ci/azure/windows-py27.yml
parameters:
name: WindowsPy27
vmImage: vs2017-win2017
vmImage: vs2017-win2016
2 changes: 1 addition & 1 deletion ci/travis-27.yaml
@@ -44,7 +44,7 @@ dependencies:
# universal
- pytest
- pytest-xdist
- moto
- moto==1.3.4
- hypothesis>=3.58.0
- pip:
- backports.lzma
33 changes: 31 additions & 2 deletions doc/source/whatsnew/v0.24.0.txt
@@ -532,6 +532,35 @@ Current Behavior:
...
OverflowError: Trying to coerce negative values to unsigned integers

.. _whatsnew_0240.api.crosstab_dtypes:

Crosstab Preserves Dtypes
^^^^^^^^^^^^^^^^^^^^^^^^^

:func:`crosstab` will now preserve dtypes in some cases that previously would
cast from integer dtype to floating dtype (:issue:`22019`)

Previous Behavior:

.. code-block:: ipython

In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2], 'b': [3, 3, 4, 4, 4],
...: 'c': [1, 1, np.nan, 1, 1]})
In [4]: pd.crosstab(df.a, df.b, normalize='columns')
Out[4]:
b 3 4
a
1 0.5 0.0
2 0.5 1.0

Current Behavior:

.. code-block:: ipython

In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2], 'b': [3, 3, 4, 4, 4],
...: 'c': [1, 1, np.nan, 1, 1]})
In [4]: pd.crosstab(df.a, df.b, normalize='columns')

Datetimelike API Changes
^^^^^^^^^^^^^^^^^^^^^^^^

@@ -666,7 +695,7 @@ Timedelta
- Bug in :class:`Index` with numeric dtype when multiplying or dividing an array with dtype ``timedelta64`` (:issue:`22390`)
- Bug in :class:`TimedeltaIndex` incorrectly allowing indexing with ``Timestamp`` object (:issue:`20464`)
- Fixed bug where subtracting :class:`Timedelta` from an object-dtyped array would raise ``TypeError`` (:issue:`21980`)
-
- Fixed bug in adding a :class:`DataFrame` with all-``timedelta64[ns]`` dtypes to a :class:`DataFrame` with all-integer dtypes returning incorrect results instead of raising ``TypeError`` (:issue:`22696`)
-
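
As an illustration (not part of this diff): a minimal sketch of the new behavior described in the ``timedelta64[ns]`` entry above. The column name and sample values are assumptions.

import pandas as pd

tdf = pd.DataFrame({'a': pd.to_timedelta(['1 day', '2 days'])})  # all timedelta64[ns]
idf = pd.DataFrame({'a': [1, 2]})                                # all integer

# Per the entry above, this addition is expected to raise TypeError rather
# than silently returning incorrect results.
try:
    tdf + idf
except TypeError as exc:
    print('TypeError:', exc)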

Timezones
@@ -794,6 +823,7 @@ Groupby/Resample/Rolling
- Bug in :meth:`Resampler.asfreq` when frequency of ``TimedeltaIndex`` is a subperiod of a new frequency (:issue:`13022`).
- Bug in :meth:`SeriesGroupBy.mean` when values were integral but could not fit inside of int64, overflowing instead. (:issue:`22487`)
- :func:`RollingGroupby.agg` and :func:`ExpandingGroupby.agg` now support multiple aggregation functions as parameters (:issue:`15072`)
- Bug in :meth:`DataFrame.resample` and :meth:`Series.resample` when resampling by a weekly offset (``'W'``) across a DST transition (:issue:`9119`, :issue:`21459`)
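
Illustrative sketch (not from the diff) of weekly resampling across a DST transition, as referenced in the resample entry above; the dates and timezone are assumptions:

import pandas as pd

# Daily data spanning the US DST transition on 2018-11-04 (assumed example).
idx = pd.date_range('2018-10-29', '2018-11-12', freq='D', tz='America/New_York')
s = pd.Series(range(len(idx)), index=idx)

# With the fix, weekly bins stay aligned even though one week contains the
# extra hour from the DST change.
print(s.resample('W').sum())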

Sparse
^^^^^^
@@ -834,4 +864,3 @@ Other
- :meth:`DataFrame.nlargest` and :meth:`DataFrame.nsmallest` now return the correct ``n`` values when ``keep != 'all'``, even when there are ties on the first columns (:issue:`22752`)
- :meth:`~pandas.io.formats.style.Styler.bar` now also supports tablewise application (in addition to rowwise and columnwise) with ``axis=None`` and setting clipping range with ``vmin`` and ``vmax`` (:issue:`21548` and :issue:`21526`). ``NaN`` values are also handled properly.
- Logical operations ``&, |, ^`` between :class:`Series` and :class:`Index` will no longer raise ``ValueError`` (:issue:`22092`)
-
7 changes: 1 addition & 6 deletions pandas/core/arrays/interval.py
@@ -108,12 +108,7 @@ class IntervalArray(IntervalMixin, ExtensionArray):
_na_value = _fill_value = np.nan

def __new__(cls, data, closed=None, dtype=None, copy=False,
fastpath=False, verify_integrity=True):

if fastpath:
return cls._simple_new(data.left, data.right, closed,
copy=copy, dtype=dtype,
verify_integrity=False)
verify_integrity=True):

if isinstance(data, ABCSeries) and is_interval_dtype(data):
data = data.values
2 changes: 1 addition & 1 deletion pandas/core/arrays/period.py
@@ -264,7 +264,7 @@ def asfreq(self, freq=None, how='E'):
if self.hasnans:
new_data[self._isnan] = iNaT

return self._simple_new(new_data, self.name, freq=freq)
return self._shallow_copy(new_data, freq=freq)

# ------------------------------------------------------------------
# Arithmetic Methods
2 changes: 1 addition & 1 deletion pandas/core/computation/pytables.py
@@ -411,7 +411,7 @@ def visit_Subscript(self, node, **kwargs):
slobj = self.visit(node.slice)
try:
value = value.value
except:
except AttributeError:
pass

try:
2 changes: 1 addition & 1 deletion pandas/core/dtypes/common.py
@@ -467,7 +467,7 @@ def is_timedelta64_dtype(arr_or_dtype):
return False
try:
tipo = _get_dtype_type(arr_or_dtype)
except:
except (TypeError, ValueError, SyntaxError):
return False
return issubclass(tipo, np.timedelta64)

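The change above narrows a bare ``except:`` to the specific exceptions the dtype lookup can raise. A self-contained sketch of the same pattern (a hypothetical helper, not pandas code):

import numpy as np

def is_timedelta_like(arg):
    try:
        dtype = np.dtype(arg)
    except (TypeError, ValueError):
        # Only the failures expected from an invalid dtype argument; a bare
        # "except:" would also swallow KeyboardInterrupt, SystemExit and
        # unrelated programming errors.
        return False
    return np.issubdtype(dtype, np.timedelta64)

print(is_timedelta_like('timedelta64[ns]'))  # True
print(is_timedelta_like(object()))           # False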
8 changes: 4 additions & 4 deletions pandas/core/dtypes/dtypes.py
@@ -358,11 +358,11 @@ def construct_from_string(cls, string):
try:
if string == 'category':
return cls()
except:
else:
raise TypeError("cannot construct a CategoricalDtype")
except AttributeError:
pass

raise TypeError("cannot construct a CategoricalDtype")

@staticmethod
def validate_ordered(ordered):
"""
@@ -519,7 +519,7 @@ def __new__(cls, unit=None, tz=None):
if m is not None:
unit = m.groupdict()['unit']
tz = m.groupdict()['tz']
except:
except TypeError:
raise ValueError("could not construct DatetimeTZDtype")

elif isinstance(unit, compat.string_types):
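Based on the rewritten ``construct_from_string`` above, a quick sketch of the expected behavior (illustrative, not from the diff):

from pandas.api.types import CategoricalDtype

# Only the exact string 'category' constructs the dtype; anything else raises.
print(CategoricalDtype.construct_from_string('category'))

try:
    CategoricalDtype.construct_from_string('int64')
except TypeError as exc:
    print('TypeError:', exc)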
22 changes: 10 additions & 12 deletions pandas/core/frame.py
@@ -3260,7 +3260,7 @@ def _ensure_valid_index(self, value):
if not len(self.index) and is_list_like(value):
try:
value = Series(value)
except:
except (ValueError, NotImplementedError, TypeError):
raise ValueError('Cannot set a frame with no defined index '
'and a value that cannot be converted to a '
'Series')
@@ -3629,7 +3629,8 @@ def align(self, other, join='outer', axis=None, level=None, copy=True,
fill_axis=fill_axis,
broadcast_axis=broadcast_axis)

@Appender(_shared_docs['reindex'] % _shared_doc_kwargs)
@Substitution(**_shared_doc_kwargs)
@Appender(NDFrame.reindex.__doc__)
@rewrite_axis_style_signature('labels', [('method', None),
('copy', True),
('level', None),
@@ -4479,7 +4480,8 @@ def f(vals):
# ----------------------------------------------------------------------
# Sorting

@Appender(_shared_docs['sort_values'] % _shared_doc_kwargs)
@Substitution(**_shared_doc_kwargs)
@Appender(NDFrame.sort_values.__doc__)
def sort_values(self, by, axis=0, ascending=True, inplace=False,
kind='quicksort', na_position='last'):
inplace = validate_bool_kwarg(inplace, 'inplace')
@@ -4521,7 +4523,8 @@ def sort_values(self, by, axis=0, ascending=True, inplace=False,
else:
return self._constructor(new_data).__finalize__(self)

@Appender(_shared_docs['sort_index'] % _shared_doc_kwargs)
@Substitution(**_shared_doc_kwargs)
@Appender(NDFrame.sort_index.__doc__)
def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
kind='quicksort', na_position='last', sort_remaining=True,
by=None):
@@ -4886,7 +4889,7 @@ def _arith_op(left, right):
left, right = ops.fill_binop(left, right, fill_value)
return func(left, right)

if this._is_mixed_type or other._is_mixed_type:
if ops.should_series_dispatch(this, other, func):
# iterate over columns
return ops.dispatch_to_series(this, other, _arith_op)
else:
@@ -4896,7 +4899,6 @@ def _arith_op(left, right):
copy=False)

def _combine_match_index(self, other, func, level=None):
assert isinstance(other, Series)
left, right = self.align(other, join='outer', axis=0, level=level,
copy=False)
assert left.index.equals(right.index)
@@ -4916,11 +4918,7 @@ def _combine_match_columns(self, other, func, level=None, try_cast=True):
left, right = self.align(other, join='outer', axis=1, level=level,
copy=False)
assert left.columns.equals(right.index)

new_data = left._data.eval(func=func, other=right,
axes=[left.columns, self.index],
try_cast=try_cast)
return self._constructor(new_data)
return ops.dispatch_to_series(left, right, func, axis="columns")

def _combine_const(self, other, func, errors='raise', try_cast=True):
if lib.is_scalar(other) or np.ndim(other) == 0:
@@ -7747,7 +7745,7 @@ def convert(v):
values = np.array([convert(v) for v in values])
else:
values = convert(values)
except:
except (ValueError, TypeError):
values = convert(values)

else:
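The decorator changes in the frame.py hunks above swap %-formatted shared docstrings for a ``Substitution`` plus ``Appender`` pair: ``Appender`` attaches the parent method's docstring and ``Substitution`` fills in its placeholders. A minimal self-contained sketch of that pattern (simplified stand-ins, not pandas' actual decorators):

def appender(addendum):
    """Attach a shared docstring to the decorated function."""
    def decorate(func):
        func.__doc__ = addendum
        return func
    return decorate


def substitution(**kwargs):
    """Fill %-style placeholders in the decorated function's docstring."""
    def decorate(func):
        if func.__doc__:
            func.__doc__ = func.__doc__ % kwargs
        return func
    return decorate


class Parent:
    def sort_values(self):
        """Sort the values of this %(klass)s."""


class Child(Parent):
    @substitution(klass='DataFrame')        # applied second: fills %(klass)s
    @appender(Parent.sort_values.__doc__)   # applied first: copies the shared doc
    def sort_values(self):
        pass


print(Child.sort_values.__doc__)  # "Sort the values of this DataFrame."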