DEPR: rename Timestamp.offset to .freq
closes #12160

Author: sinhrks <sinhrks@gmail.com>

Closes #13593 from sinhrks/depr_timestamp_offset and squashes the following commits:

c7749d5 [sinhrks] DEPR: rename Timestamp.offset to .freq
sinhrks authored and jreback committed Jul 10, 2016
1 parent 1edc1df commit 2a96ab7
Showing 20 changed files with 187 additions and 142 deletions.
32 changes: 14 additions & 18 deletions doc/source/whatsnew/v0.19.0.txt
@@ -194,15 +194,15 @@ Other enhancements
pd.to_numeric(s, downcast='unsigned')
pd.to_numeric(s, downcast='integer')

- ``Index`` now supports ``.str.extractall()`` which returns a ``DataFrame``, see :ref:`documentation here <text.extractall>` (:issue:`10008`, :issue:`13156`)
- ``Index`` now supports ``.str.extractall()`` which returns a ``DataFrame``, see :ref:`docs here <text.extractall>` (:issue:`10008`, :issue:`13156`)
- ``.to_hdf/read_hdf()`` now accept path objects (e.g. ``pathlib.Path``, ``py.path.local``) for the file path (:issue:`11773`)

.. ipython:: python

idx = pd.Index(["a1a2", "b1", "c1"])
idx.str.extractall("[ab](?P<digit>\d)")

- ``Timestamp`` s can now accept positional and keyword parameters like :func:`datetime.datetime` (:issue:`10758`, :issue:`11630`)
- ``Timestamp`` can now accept positional and keyword parameters similar to :func:`datetime.datetime` (:issue:`10758`, :issue:`11630`)

.. ipython:: python

@@ -227,8 +227,7 @@ Other enhancements
- Consistent with the Python API, ``pd.read_csv()`` will now interpret ``+inf`` as positive infinity (:issue:`13274`)
- The ``DataFrame`` constructor will now respect key ordering if a list of ``OrderedDict`` objects are passed in (:issue:`13304`)
- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
- A ``union_categorical`` function has been added for combining categoricals, see :ref:`Unioning Categoricals<categorical.union>` (:issue:`13361`)
- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
- A top-level function :func:`union_categorical` has been added for combining categoricals, see :ref:`Unioning Categoricals<categorical.union>` (:issue:`13361`)
- ``Series`` has gained the properties ``.is_monotonic``, ``.is_monotonic_increasing``, ``.is_monotonic_decreasing``, similar to ``Index`` (:issue:`13336`)
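The new monotonicity properties mirror the long-standing ``Index`` ones; conceptually, ``is_monotonic_increasing`` is a non-strict element-wise comparison. A minimal pure-Python sketch of the semantics (not the vectorized pandas implementation):

```python
def is_monotonic_increasing(values):
    # non-strict: equal neighbors still count as monotonic
    return all(a <= b for a, b in zip(values, values[1:]))

def is_monotonic_decreasing(values):
    return all(a >= b for a, b in zip(values, values[1:]))
```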

.. _whatsnew_0190.api:
@@ -238,9 +237,16 @@ API changes


- Non-convertible dates in an excel date column will be returned without conversion and the column will be ``object`` dtype, rather than raising an exception (:issue:`10001`)
- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
- An ``UnsupportedFunctionCall`` error is now raised if NumPy ufuncs like ``np.mean`` are called on groupby or resample objects (:issue:`12811`)
- Calls to ``.sample()`` will respect the random seed set via ``numpy.random.seed(n)`` (:issue:`13161`)
- ``Styler.apply`` is now more strict about the outputs your function must return. For ``axis=0`` or ``axis=1``, the output shape must be identical. For ``axis=None``, the output must be a DataFrame with identical columns and index labels. (:issue:`13222`)
- ``Float64Index.astype(int)`` will now raise ``ValueError`` if ``Float64Index`` contains ``NaN`` values (:issue:`13149`)
- ``TimedeltaIndex.astype(int)`` and ``DatetimeIndex.astype(int)`` will now return ``Int64Index`` instead of ``np.array`` (:issue:`13209`)
- ``.filter()`` enforces mutual exclusion of the keyword arguments. (:issue:`12399`)
- ``PeriodIndex`` can now accept ``list`` and ``array`` which contain ``pd.NaT`` (:issue:`13430`)
- ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`)


.. _whatsnew_0190.api.tolist:

@@ -361,7 +367,7 @@ We are able to preserve the join keys
pd.merge(df1, df2, how='outer').dtypes

Of course if you have missing values that are introduced, then the
resulting dtype will be upcast (unchanged from previous).
resulting dtype will be upcast, which is unchanged from previous.

.. ipython:: python

@@ -419,17 +425,6 @@ Furthermore:
- Passing duplicated ``percentiles`` will now raise a ``ValueError``.
- Bug in ``.describe()`` on a DataFrame with a mixed-dtype column index, which would previously raise a ``TypeError`` (:issue:`13288`)

.. _whatsnew_0190.api.other:

Other API changes
^^^^^^^^^^^^^^^^^

- ``Float64Index.astype(int)`` will now raise ``ValueError`` if ``Float64Index`` contains ``NaN`` values (:issue:`13149`)
- ``TimedeltaIndex.astype(int)`` and ``DatetimeIndex.astype(int)`` will now return ``Int64Index`` instead of ``np.array`` (:issue:`13209`)
- ``.filter()`` enforces mutual exclusion of the keyword arguments. (:issue:`12399`)
- ``PeriodIndex`` can now accept ``list`` and ``array`` which contain ``pd.NaT`` (:issue:`13430`)
- ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`)

.. _whatsnew_0190.deprecations:

Deprecations
@@ -439,6 +434,7 @@ Deprecations
- ``buffer_lines`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13360`)
- ``as_recarray`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13373`)
- top-level ``pd.ordered_merge()`` has been renamed to ``pd.merge_ordered()`` and the original name will be removed in a future version (:issue:`13358`)
- ``Timestamp.offset`` property (and named arg in the constructor) has been deprecated in favor of ``freq`` (:issue:`12160`)
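The rename follows the usual attribute-deprecation pattern: the constructor still accepts the old keyword but warns, and the old property becomes a warning alias for the new one. A minimal pure-Python sketch of that pattern (illustrative only — the real ``Timestamp`` is implemented in Cython):

```python
import warnings

class Stamp:
    """Illustrative stand-in for Timestamp: ``freq`` is canonical,
    ``offset`` survives only as a deprecated alias."""

    def __init__(self, value, freq=None, offset=None):
        if offset is not None:
            # accept the old keyword, but steer callers toward ``freq``
            warnings.warn("offset is deprecated. Use freq instead",
                          FutureWarning, stacklevel=2)
            freq = freq if freq is not None else offset
        self.value = value
        self.freq = freq

    @property
    def offset(self):
        # reading the old attribute still works, with a warning
        warnings.warn("offset is deprecated. Use freq instead",
                      FutureWarning, stacklevel=2)
        return self.freq
```

pandas deprecations of this era use ``FutureWarning`` rather than ``DeprecationWarning`` so that the message is visible to end users by default.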

.. _whatsnew_0190.performance:

@@ -503,7 +499,7 @@ Bug Fixes
- Bug in ``pd.read_csv()`` in which the ``nrows`` argument was not properly validated for both engines (:issue:`10476`)
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which infinities of mixed-case forms were not being interpreted properly (:issue:`13274`)
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which trailing ``NaN`` values were not being parsed (:issue:`13320`)
- Bug in ``pd.read_csv()`` with ``engine='python'`` when reading from a tempfile.TemporaryFile on Windows with Python 3 (:issue:`13398`)
- Bug in ``pd.read_csv()`` with ``engine='python'`` when reading from a ``tempfile.TemporaryFile`` on Windows with Python 3 (:issue:`13398`)
- Bug in ``pd.read_csv()`` that prevents ``usecols`` kwarg from accepting single-byte unicode strings (:issue:`13219`)
- Bug in ``pd.read_csv()`` that prevents ``usecols`` from being an empty set (:issue:`13402`)
- Bug in ``pd.read_csv()`` with ``engine=='c'`` in which null ``quotechar`` was not accepted even though ``quoting`` was specified as ``None`` (:issue:`13411`)
@@ -516,7 +512,7 @@ Bug Fixes


- Bug in ``pd.to_datetime()`` when passing invalid datatypes (e.g. bool); will now respect the ``errors`` keyword (:issue:`13176`)
- Bug in ``pd.to_datetime()`` which overflowed on ``int8``, `int16`` dtypes (:issue:`13451`)
- Bug in ``pd.to_datetime()`` which overflowed on ``int8`` and ``int16`` dtypes (:issue:`13451`)
- Bug in extension dtype creation where the created types were not is/identical (:issue:`13285`)

- Bug in ``NaT`` - ``Period`` raises ``AttributeError`` (:issue:`13071`)
11 changes: 6 additions & 5 deletions pandas/io/packers.py
@@ -481,12 +481,12 @@ def encode(obj):
tz = obj.tzinfo
if tz is not None:
tz = u(tz.zone)
offset = obj.offset
if offset is not None:
offset = u(offset.freqstr)
freq = obj.freq
if freq is not None:
freq = u(freq.freqstr)
return {u'typ': u'timestamp',
u'value': obj.value,
u'offset': offset,
u'freq': freq,
u'tz': tz}
if isinstance(obj, NaTType):
return {u'typ': u'nat'}
@@ -556,7 +556,8 @@ def decode(obj):
if typ is None:
return obj
elif typ == u'timestamp':
return Timestamp(obj[u'value'], tz=obj[u'tz'], offset=obj[u'offset'])
freq = obj[u'freq'] if 'freq' in obj else obj[u'offset']
return Timestamp(obj[u'value'], tz=obj[u'tz'], freq=freq)
elif typ == u'nat':
return NaT
elif typ == u'period':
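Because msgpack payloads written before this change stored the key ``'offset'``, the decoder falls back to the legacy key. The same backward-compatible lookup can be sketched in plain Python (hypothetical helper name; the real decoder also rebuilds a ``Timestamp`` and handles many other types):

```python
def decode_timestamp(obj):
    """Prefer the new 'freq' key, falling back to the legacy
    'offset' key found in payloads written by older pandas."""
    freq = obj['freq'] if 'freq' in obj else obj.get('offset')
    return {'value': obj['value'], 'tz': obj.get('tz'), 'freq': freq}
```

Both old and new payload shapes decode to the same result, which is what keeps the legacy-format round-trip tests below passing.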
12 changes: 10 additions & 2 deletions pandas/io/tests/generate_legacy_storage_files.py
@@ -5,7 +5,7 @@
SparseSeries, SparseDataFrame,
Index, MultiIndex, bdate_range, to_msgpack,
date_range, period_range,
Timestamp, Categorical, Period)
Timestamp, NaT, Categorical, Period)
from pandas.compat import u
import os
import sys
@@ -140,6 +140,13 @@ def create_data():
int16=Categorical(np.arange(1000)),
int32=Categorical(np.arange(10000)))

timestamp = dict(normal=Timestamp('2011-01-01'),
nat=NaT,
tz=Timestamp('2011-01-01', tz='US/Eastern'),
freq=Timestamp('2011-01-01', freq='D'),
both=Timestamp('2011-01-01', tz='Asia/Tokyo',
freq='M'))

return dict(series=series,
frame=frame,
panel=panel,
Expand All @@ -149,7 +156,8 @@ def create_data():
sp_series=dict(float=_create_sp_series(),
ts=_create_sp_tsseries()),
sp_frame=dict(float=_create_sp_frame()),
cat=cat)
cat=cat,
timestamp=timestamp)


def create_pickle_data():
28 changes: 20 additions & 8 deletions pandas/io/tests/test_packers.py
@@ -8,7 +8,7 @@
from distutils.version import LooseVersion

from pandas import compat
from pandas.compat import u
from pandas.compat import u, PY3
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, period_range, Index, Categorical)
from pandas.core.common import PerformanceWarning
@@ -58,6 +58,19 @@ def check_arbitrary(a, b):
assert_series_equal(a, b)
elif isinstance(a, Index):
assert_index_equal(a, b)
elif isinstance(a, Categorical):
# Temp,
# Categorical.categories is changed from str to bytes in PY3
# maybe the same as GH 13591
if PY3 and b.categories.inferred_type == 'string':
pass
else:
tm.assert_categorical_equal(a, b)
elif a is NaT:
assert b is NaT
elif isinstance(a, Timestamp):
assert a == b
assert a.freq == b.freq
else:
assert(a == b)

@@ -815,8 +828,8 @@ def check_min_structure(self, data):
for typ, v in self.minimum_structure.items():
assert typ in data, '"{0}" not found in unpacked data'.format(typ)
for kind in v:
assert kind in data[
typ], '"{0}" not found in data["{1}"]'.format(kind, typ)
msg = '"{0}" not found in data["{1}"]'.format(kind, typ)
assert kind in data[typ], msg

def compare(self, vf, version):
# GH12277 encoding default used to be latin-1, now utf-8
@@ -839,8 +852,8 @@ def compare(self, vf, version):

# use a specific comparator
# if available
comparator = getattr(
self, "compare_{typ}_{dt}".format(typ=typ, dt=dt), None)
comp_method = "compare_{typ}_{dt}".format(typ=typ, dt=dt)
comparator = getattr(self, comp_method, None)
if comparator is not None:
comparator(result, expected, typ, version)
else:
@@ -872,9 +885,8 @@ def read_msgpacks(self, version):
n = 0
for f in os.listdir(pth):
# GH12142 0.17 files packed in P2 can't be read in P3
if (compat.PY3 and
version.startswith('0.17.') and
f.split('.')[-4][-1] == '2'):
if (compat.PY3 and version.startswith('0.17.') and
f.split('.')[-4][-1] == '2'):
continue
vf = os.path.join(pth, f)
try:
6 changes: 6 additions & 0 deletions pandas/io/tests/test_pickle.py
@@ -46,6 +46,12 @@ def compare_element(self, result, expected, typ, version=None):
if typ.startswith('sp_'):
comparator = getattr(tm, "assert_%s_equal" % typ)
comparator(result, expected, exact_indices=False)
elif typ == 'timestamp':
if expected is pd.NaT:
assert result is pd.NaT
else:
tm.assert_equal(result, expected)
tm.assert_equal(result.freq, expected.freq)
else:
comparator = getattr(tm, "assert_%s_equal" %
typ, tm.assert_almost_equal)
1 change: 1 addition & 0 deletions pandas/lib.pxd
@@ -1,3 +1,4 @@
# prototypes for sharing

cdef bint is_null_datetimelike(v)
cpdef bint is_period(val)
5 changes: 1 addition & 4 deletions pandas/src/inference.pyx
@@ -33,7 +33,7 @@ def is_bool(object obj):
def is_complex(object obj):
return util.is_complex_object(obj)

def is_period(object val):
cpdef bint is_period(object val):
""" Return a boolean if this is a Period object """
return util.is_period_object(val)

@@ -538,9 +538,6 @@ def is_time_array(ndarray[object] values):
return False
return True

def is_period(object o):
from pandas import Period
return isinstance(o,Period)

def is_period_array(ndarray[object] values):
cdef Py_ssize_t i, n = len(values)
7 changes: 5 additions & 2 deletions pandas/src/period.pyx
@@ -24,7 +24,7 @@ cimport cython
from datetime cimport *
cimport util
cimport lib
from lib cimport is_null_datetimelike
from lib cimport is_null_datetimelike, is_period
import lib
from pandas import tslib
from tslib import Timedelta, Timestamp, iNaT, NaT
@@ -484,8 +484,11 @@ def extract_freq(ndarray[object] values):

for i in range(n):
p = values[i]

try:
return p.freq
# now Timestamp / NaT has freq attr
if is_period(p):
return p.freq
except AttributeError:
pass

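The ``is_period`` guard above matters because ``Timestamp`` (and ``NaT``) now expose a ``freq`` attribute too, so ``extract_freq`` can no longer duck-type on ``.freq`` alone; it must confirm the object really is a ``Period``. A pure-Python sketch of that logic, with stand-in classes in place of the Cython types:

```python
class Period:
    """Stand-in for pandas.Period."""
    def __init__(self, freq):
        self.freq = freq

class Stamp:
    """Stand-in for Timestamp, which now also carries ``freq``."""
    def __init__(self, freq=None):
        self.freq = freq

def extract_freq(values):
    # return the freq of the first actual Period, skipping
    # Timestamp-like objects even though they have a .freq attribute
    for p in values:
        if isinstance(p, Period):
            return p.freq
    raise ValueError('freq not specified and cannot be inferred')
```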
2 changes: 1 addition & 1 deletion pandas/tests/indexing/test_indexing.py
@@ -965,7 +965,7 @@ def test_indexing_with_datetime_tz(self):
# indexing - fast_xs
df = DataFrame({'a': date_range('2014-01-01', periods=10, tz='UTC')})
result = df.iloc[5]
expected = Timestamp('2014-01-06 00:00:00+0000', tz='UTC', offset='D')
expected = Timestamp('2014-01-06 00:00:00+0000', tz='UTC', freq='D')
self.assertEqual(result, expected)

result = df.loc[5]
4 changes: 2 additions & 2 deletions pandas/tests/series/test_constructors.py
@@ -426,10 +426,10 @@ def test_constructor_with_datetime_tz(self):
# indexing
result = s.iloc[0]
self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500',
tz='US/Eastern', offset='D'))
tz='US/Eastern', freq='D'))
result = s[0]
self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500',
tz='US/Eastern', offset='D'))
tz='US/Eastern', freq='D'))

result = s[Series([True, True, False], index=s.index)]
assert_series_equal(result, s[0:2])
2 changes: 1 addition & 1 deletion pandas/tests/test_multilevel.py
@@ -2365,7 +2365,7 @@ def test_reset_index_datetime(self):
'a': np.arange(6, dtype='int64')},
columns=['level_0', 'level_1', 'a'])
expected['level_1'] = expected['level_1'].apply(
lambda d: pd.Timestamp(d, offset='D', tz=tz))
lambda d: pd.Timestamp(d, freq='D', tz=tz))
assert_frame_equal(df.reset_index(), expected)

def test_reset_index_period(self):
7 changes: 4 additions & 3 deletions pandas/tseries/index.py
@@ -558,7 +558,7 @@ def _generate(cls, start, end, periods, name, offset,

@property
def _box_func(self):
return lambda x: Timestamp(x, offset=self.offset, tz=self.tz)
return lambda x: Timestamp(x, freq=self.offset, tz=self.tz)

def _convert_for_op(self, value):
""" Convert value to be insertable to ndarray """
@@ -1199,8 +1199,9 @@ def __iter__(self):
for i in range(chunks):
start_i = i * chunksize
end_i = min((i + 1) * chunksize, l)
converted = tslib.ints_to_pydatetime(
data[start_i:end_i], tz=self.tz, offset=self.offset, box=True)
converted = tslib.ints_to_pydatetime(data[start_i:end_i],
tz=self.tz, freq=self.freq,
box=True)
for v in converted:
yield v

35 changes: 18 additions & 17 deletions pandas/tseries/tests/test_base.py
@@ -124,10 +124,11 @@ def test_minmax(self):

def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
self.assertEqual(np.min(dr), Timestamp(
'2016-01-15 00:00:00', offset='D'))
self.assertEqual(np.max(dr), Timestamp(
'2016-01-20 00:00:00', offset='D'))

self.assertEqual(np.min(dr),
Timestamp('2016-01-15 00:00:00', freq='D'))
self.assertEqual(np.max(dr),
Timestamp('2016-01-20 00:00:00', freq='D'))

errmsg = "the 'out' parameter is not supported"
tm.assertRaisesRegexp(ValueError, errmsg, np.min, dr, out=0)
@@ -148,11 +149,11 @@ def test_round(self):
elt = rng[1]

expected_rng = DatetimeIndex([
Timestamp('2016-01-01 00:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 01:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 02:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 02:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 01:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 02:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 02:00:00', tz=tz, freq='30T'),
])
expected_elt = expected_rng[1]

@@ -175,10 +176,10 @@ def test_repeat(self):
freq='30Min', tz=tz)

expected_rng = DatetimeIndex([
Timestamp('2016-01-01 00:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, freq='30T'),
])

tm.assert_index_equal(rng.repeat(reps), expected_rng)
@@ -192,10 +193,10 @@ def test_numpy_repeat(self):
freq='30Min', tz=tz)

expected_rng = DatetimeIndex([
Timestamp('2016-01-01 00:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, offset='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 00:00:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, freq='30T'),
Timestamp('2016-01-01 00:30:00', tz=tz, freq='30T'),
])

tm.assert_index_equal(np.repeat(rng, reps), expected_rng)
