From 7ab15e8e805437b6de5a09020ae0d1da2c25d119 Mon Sep 17 00:00:00 2001 From: Keewis Date: Fri, 6 Dec 2019 15:36:48 +0100 Subject: [PATCH 01/30] fix the deprecated section and update links to drop --- doc/api.rst | 3 +++ doc/howdoi.rst | 2 +- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/doc/api.rst b/doc/api.rst index a1fae3deb03..7678266852a 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -679,6 +679,9 @@ arguments for the ``from_store`` and ``dump_to_store`` Dataset methods: Deprecated / Pending Deprecation ================================ +.. autosummary:: + :toctree: generated/ + Dataset.drop DataArray.drop Dataset.apply diff --git a/doc/howdoi.rst b/doc/howdoi.rst index 91644ba2718..3eddc322093 100644 --- a/doc/howdoi.rst +++ b/doc/howdoi.rst @@ -22,7 +22,7 @@ How do I ... * - change the order of dimensions - :py:meth:`DataArray.transpose`, :py:meth:`Dataset.transpose` * - remove a variable from my object - - :py:meth:`Dataset.drop`, :py:meth:`DataArray.drop` + - :py:meth:`Dataset.drop_vars`, :py:meth:`DataArray.drop_vars` * - remove dimensions of length 1 or 0 - :py:meth:`DataArray.squeeze`, :py:meth:`Dataset.squeeze` * - remove all variables with a particular dimension From dbc8847c1acb274d202d185686175765d5647f2e Mon Sep 17 00:00:00 2001 From: Keewis Date: Fri, 6 Dec 2019 15:39:50 +0100 Subject: [PATCH 02/30] link to interp_like instead of interpolate_like --- doc/howdoi.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/howdoi.rst b/doc/howdoi.rst index 3eddc322093..80266bd3b84 100644 --- a/doc/howdoi.rst +++ b/doc/howdoi.rst @@ -48,7 +48,7 @@ How do I ... * - write xarray objects with complex values to a netCDF file - :py:func:`Dataset.to_netcdf`, :py:func:`DataArray.to_netcdf` specifying ``engine="h5netcdf", invalid_netcdf=True`` * - make xarray objects look like other xarray objects - - :py:func:`~xarray.ones_like`, :py:func:`~xarray.zeros_like`, :py:func:`~xarray.full_like`, :py:meth:`Dataset.reindex_like`, :py:meth:`Dataset.interpolate_like`, :py:meth:`Dataset.broadcast_like`, :py:meth:`DataArray.reindex_like`, :py:meth:`DataArray.interpolate_like`, :py:meth:`DataArray.broadcast_like` + - :py:func:`~xarray.ones_like`, :py:func:`~xarray.zeros_like`, :py:func:`~xarray.full_like`, :py:meth:`Dataset.reindex_like`, :py:meth:`Dataset.interp_like`, :py:meth:`Dataset.broadcast_like`, :py:meth:`DataArray.reindex_like`, :py:meth:`DataArray.interp_like`, :py:meth:`DataArray.broadcast_like` * - replace NaNs with other values - :py:meth:`Dataset.fillna`, :py:meth:`Dataset.ffill`, :py:meth:`Dataset.bfill`, :py:meth:`Dataset.interpolate_na`, :py:meth:`DataArray.fillna`, :py:meth:`DataArray.ffill`, :py:meth:`DataArray.bfill`, :py:meth:`DataArray.interpolate_na` * - extract the year, month, day or similar from a DataArray of time values From 7dbacee4d70be6d71c0786b7bdc24cf8a87f3e19 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sat, 7 Dec 2019 00:18:15 +0100 Subject: [PATCH 03/30] update links in the manually written parts of the documentation --- doc/groupby.rst | 6 +++--- doc/indexing.rst | 2 +- doc/interpolation.rst | 4 ++-- doc/io.rst | 14 +++++++------- doc/pandas.rst | 2 +- doc/plotting.rst | 2 +- doc/terminology.rst | 4 ++-- 7 files changed, 17 insertions(+), 17 deletions(-) diff --git a/doc/groupby.rst b/doc/groupby.rst index f5943703765..927e192eb6c 100644 --- a/doc/groupby.rst +++ b/doc/groupby.rst @@ -94,7 +94,7 @@ Apply ~~~~~ To apply a function to each group, you can use the flexible -:py:meth:`~xarray.DatasetGroupBy.map` method. 
The resulting objects are automatically +:py:meth:`~xarray.core.groupby.DatasetGroupBy.map` method. The resulting objects are automatically concatenated back together along the group axis: .. ipython:: python @@ -104,8 +104,8 @@ concatenated back together along the group axis: arr.groupby('letters').map(standardize) -GroupBy objects also have a :py:meth:`~xarray.DatasetGroupBy.reduce` method and -methods like :py:meth:`~xarray.DatasetGroupBy.mean` as shortcuts for applying an +GroupBy objects also have a :py:meth:`~xarray.core.groupby.DatasetGroupBy.reduce` method and +methods like :py:meth:`~xarray.core.groupby.DatasetGroupBy.mean` as shortcuts for applying an aggregation function: .. ipython:: python diff --git a/doc/indexing.rst b/doc/indexing.rst index e8482ac66b3..cfbb84a8343 100644 --- a/doc/indexing.rst +++ b/doc/indexing.rst @@ -132,7 +132,7 @@ use them explicitly to slice data. There are two ways to do this: The arguments to these methods can be any objects that could index the array along the dimension given by the keyword, e.g., labels for an individual value, -Python :py:func:`slice` objects or 1-dimensional arrays. +Python :py:class:`slice` objects or 1-dimensional arrays. .. note:: diff --git a/doc/interpolation.rst b/doc/interpolation.rst index 7c750506cf3..63e9a7cd35e 100644 --- a/doc/interpolation.rst +++ b/doc/interpolation.rst @@ -48,7 +48,7 @@ array-like, which gives the interpolated result as an array. # interpolation da.interp(time=[2.5, 3.5]) -To interpolate data with a :py:func:`numpy.datetime64` coordinate you can pass a string. +To interpolate data with a :py:doc:`numpy.datetime64 ` coordinate you can pass a string. .. ipython:: python @@ -128,7 +128,7 @@ It is now possible to safely compute the difference ``other - interpolated``. Interpolation methods --------------------- -We use :py:func:`scipy.interpolate.interp1d` for 1-dimensional interpolation and +We use :py:class:`scipy.interpolate.interp1d` for 1-dimensional interpolation and :py:func:`scipy.interpolate.interpn` for multi-dimensional interpolation. The interpolation method can be specified by the optional ``method`` argument. diff --git a/doc/io.rst b/doc/io.rst index 8f8a776f73a..f8e4caa9b24 100644 --- a/doc/io.rst +++ b/doc/io.rst @@ -47,7 +47,7 @@ read/write netCDF V4 files and use the compression options described below). __ https://github.com/Unidata/netcdf4-python We can save a Dataset to disk using the -:py:meth:`~Dataset.to_netcdf` method: +:py:meth:`~xarray.Dataset.to_netcdf` method: .. ipython:: python @@ -79,7 +79,7 @@ We can load netCDF files to create a new Dataset using ds_disk Similarly, a DataArray can be saved to disk using the -:py:attr:`DataArray.to_netcdf ` method, and loaded +:py:meth:`~xarray.DataArray.to_netcdf` method, and loaded from disk using the :py:func:`~xarray.open_dataarray` function. As netCDF files correspond to :py:class:`~xarray.Dataset` objects, these functions internally convert the ``DataArray`` to a ``Dataset`` before saving, and then convert back @@ -142,10 +142,10 @@ To do so, pass a ``group`` keyword argument to the string, e.g., to access subgroup ``'bar'`` within group ``'foo'`` pass ``'/foo/bar'`` as the ``group`` argument. In a similar way, the ``group`` keyword argument can be given to the -:py:meth:`~xarray.Dataset.to_netcdf` method to write to a group +:py:meth:`Dataset.to_netcdf ` method to write to a group in a netCDF file. 
When writing multiple groups in one file, pass ``mode='a'`` to -:py:meth:`~xarray.Dataset.to_netcdf` to ensure that each call does not delete the file. +:py:meth:`Dataset.to_netcdf ` to ensure that each call does not delete the file. .. _io.encoding: @@ -445,9 +445,9 @@ Invalid netCDF files The library ``h5netcdf`` allows writing some dtypes (booleans, complex, ...) that aren't allowed in netCDF4 (see -`h5netcdf documentation `_. -This feature is availabe through :py:func:`DataArray.to_netcdf` and -:py:func:`Dataset.to_netcdf` when used with ``engine="h5netcdf"`` +`h5netcdf documentation `_). +This feature is availabe through :py:meth:`DataArray.to_netcdf ` and +:py:meth:`Datset.to_netcdf ` when used with ``engine="h5netcdf"`` and currently raises a warning unless ``invalid_netcdf=True`` is set: .. ipython:: python diff --git a/doc/pandas.rst b/doc/pandas.rst index 72abf6609f6..c403fa8b44c 100644 --- a/doc/pandas.rst +++ b/doc/pandas.rst @@ -65,7 +65,7 @@ For datasets containing dask arrays where the data should be lazily loaded, see To create a ``Dataset`` from a ``DataFrame``, use the :py:meth:`~xarray.Dataset.from_dataframe` class method or the equivalent -:py:meth:`pandas.DataFrame.to_xarray ` method: +:py:meth:`~pandas.DataFrame.to_xarray` method: .. ipython:: python diff --git a/doc/plotting.rst b/doc/plotting.rst index 270988b99de..371acb63873 100644 --- a/doc/plotting.rst +++ b/doc/plotting.rst @@ -227,7 +227,7 @@ It is required to explicitly specify either Thus, we could have made the previous plot by specifying ``hue='lat'`` instead of ``x='time'``. If required, the automatic legend can be turned off using ``add_legend=False``. Alternatively, -``hue`` can be passed directly to :py:func:`xarray.plot` as `air.isel(lon=10, lat=[19,21,22]).plot(hue='lat')`. +``hue`` can be passed directly to :py:func:`xarray.plot.plot` as `air.isel(lon=10, lat=[19,21,22]).plot(hue='lat')`. ======================== diff --git a/doc/terminology.rst b/doc/terminology.rst index d1265e4da9d..2ba10fb71cb 100644 --- a/doc/terminology.rst +++ b/doc/terminology.rst @@ -3,7 +3,7 @@ Terminology =========== -*Xarray terminology differs slightly from CF, mathematical conventions, and pandas; and therefore using xarray, understanding the documentation, and parsing error messages is easier once key terminology is defined. This glossary was designed so that more fundamental concepts come first. Thus for new users, this page is best read top-to-bottom. Throughout the glossary,* ``arr`` *will refer to an xarray* :py:class:`DataArray` *in any small examples. For more complete examples, please consult the relevant documentation.* +*Xarray terminology differs slightly from CF, mathematical conventions, and pandas; and therefore using xarray, understanding the documentation, and parsing error messages is easier once key terminology is defined. This glossary was designed so that more fundamental concepts come first. Thus for new users, this page is best read top-to-bottom. Throughout the glossary,* ``arr`` *will refer to an xarray* :py:class:`~xarray.DataArray` *in any small examples. For more complete examples, please consult the relevant documentation.* ---- @@ -19,7 +19,7 @@ Terminology .. note:: - The :py:class:`Variable` class is low-level interface and can typically be ignored. However, the word "variable" appears often enough in the code and documentation that is useful to understand. + The :py:class:`~xarray.Variable` class is low-level interface and can typically be ignored. 
However, the word "variable" appears often enough in the code and documentation that is useful to understand. ---- From 4d9f9e19a6e676609f54af4f0b29dd184aa4e7eb Mon Sep 17 00:00:00 2001 From: Keewis Date: Sat, 7 Dec 2019 00:33:08 +0100 Subject: [PATCH 04/30] add missing methods for DatasetGroupBy, DataArrayGroupBy and Variable --- doc/api-hidden.rst | 98 +++++++++++++++++++++++++++++++++++++++++ xarray/core/variable.py | 2 + 2 files changed, 100 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 027c732697f..05308c8ce47 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -34,6 +34,18 @@ core.groupby.DatasetGroupBy.fillna core.groupby.DatasetGroupBy.quantile core.groupby.DatasetGroupBy.where + core.groupby.DatasetGroupBy.all + core.groupby.DatasetGroupBy.any + core.groupby.DatasetGroupBy.argmax + core.groupby.DatasetGroupBy.argmin + core.groupby.DatasetGroupBy.count + core.groupby.DatasetGroupBy.mean + core.groupby.DatasetGroupBy.median + core.groupby.DatasetGroupBy.min + core.groupby.DatasetGroupBy.prod + core.groupby.DatasetGroupBy.std + core.groupby.DatasetGroupBy.sum + core.groupby.DatasetGroupBy.var Dataset.argsort Dataset.astype @@ -77,6 +89,18 @@ core.groupby.DataArrayGroupBy.fillna core.groupby.DataArrayGroupBy.quantile core.groupby.DataArrayGroupBy.where + core.groupby.DataArrayGroupBy.all + core.groupby.DataArrayGroupBy.any + core.groupby.DataArrayGroupBy.argmax + core.groupby.DataArrayGroupBy.argmin + core.groupby.DataArrayGroupBy.count + core.groupby.DataArrayGroupBy.mean + core.groupby.DataArrayGroupBy.median + core.groupby.DataArrayGroupBy.min + core.groupby.DataArrayGroupBy.prod + core.groupby.DataArrayGroupBy.std + core.groupby.DataArrayGroupBy.sum + core.groupby.DataArrayGroupBy.var DataArray.argsort DataArray.clip @@ -91,6 +115,80 @@ DataArray.cumprod DataArray.rank + Variable.all + Variable.any + Variable.argmax + Variable.argmin + Variable.argsort + Variable.astype + Variable.broadcast_equals + Variable.chunk + Variable.clip + Variable.coarsen + Variable.compute + Variable.concat + Variable.conj + Variable.conjugate + Variable.copy + Variable.count + Variable.cumprod + Variable.cumsum + Variable.equals + Variable.fillna + Variable.get_axis_num + Variable.identical + Variable.isel + Variable.isnull + Variable.item + Variable.load + Variable.max + Variable.mean + Variable.median + Variable.min + Variable.no_conflicts + Variable.notnull + Variable.pad_with_fill_value + Variable.prod + Variable.quantile + Variable.rank + Variable.reduce + Variable.roll + Variable.rolling_window + Variable.round + Variable.searchsorted + Variable.set_dims + Variable.shift + Variable.squeeze + Variable.stack + Variable.std + Variable.sum + Variable.to_base_variable + Variable.to_coord + Variable.to_dict + Variable.to_index + Variable.to_index_variable + Variable.to_variable + Variable.transpose + Variable.unstack + Variable.var + Variable.where + + Variable.T + Variable.attrs + Variable.chunks + Variable.data + Variable.dims + Variable.dtype + Variable.encoding + Variable.imag + Variable.nbytes + Variable.ndim + Variable.real + Variable.shape + Variable.size + Variable.sizes + Variable.values + ufuncs.angle ufuncs.arccos ufuncs.arccosh diff --git a/xarray/core/variable.py b/xarray/core/variable.py index aa04cffb5ea..c118d9419cf 100644 --- a/xarray/core/variable.py +++ b/xarray/core/variable.py @@ -1693,6 +1693,7 @@ def quantile(self, q, dim=None, interpolation="linear", keep_attrs=None): This optional parameter specifies the interpolation method to use when the 
desired quantile lies between two data points ``i < j``: + * linear: ``i + (j - i) * fraction``, where ``fraction`` is the fractional part of the index surrounded by ``i`` and ``j``. @@ -1700,6 +1701,7 @@ def quantile(self, q, dim=None, interpolation="linear", keep_attrs=None): * higher: ``j``. * nearest: ``i`` or ``j``, whichever is nearest. * midpoint: ``(i + j) / 2``. + keep_attrs : bool, optional If True, the variable's attributes (`attrs`) will be copied from the original object to the new one. If False (default), the new From 097d36a9aa07ff35b878bb77610077ab1bb9fa84 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sat, 7 Dec 2019 01:12:01 +0100 Subject: [PATCH 05/30] update references in whats-new.rst --- doc/whats-new.rst | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/doc/whats-new.rst b/doc/whats-new.rst index 96e5eeacf95..a02564763b7 100644 --- a/doc/whats-new.rst +++ b/doc/whats-new.rst @@ -28,8 +28,8 @@ New Features - :py:meth:`Dataset.quantile`, :py:meth:`DataArray.quantile` and ``GroupBy.quantile`` now work with dask Variables. By `Deepak Cherian `_. -- Added the :py:meth:`count` reduction method to both :py:class:`DatasetCoarsen` - and :py:class:`DataArrayCoarsen` objects. (:pull:`3500`) +- Added the :py:meth:`count` reduction method to both :py:class:`~core.coarsen.DatasetCoarsen` + and :py:class:`~core.coarsen.DataArrayCoarsen` objects. (:pull:`3500`) By `Deepak Cherian `_ Bug fixes @@ -137,7 +137,7 @@ New Features invoked. (:issue:`3378`, :pull:`3446`, :pull:`3515`) By `Deepak Cherian `_ and `Guido Imperiale `_. -- Add the documented-but-missing :py:meth:`DatasetGroupBy.quantile`. +- Add the documented-but-missing :py:meth:`~core.groupby.DatasetGroupBy.quantile`. (:issue:`3525`, :pull:`3527`). By `Justus Magin `_. Bug fixes @@ -269,7 +269,7 @@ New functions/methods Enhancements ~~~~~~~~~~~~ -- :py:class:`~xarray.core.GroupBy` enhancements. By `Deepak Cherian `_. +- :py:class:`~core.groupby.GroupBy` enhancements. By `Deepak Cherian `_. - Added a repr (:pull:`3344`). Example:: @@ -304,7 +304,7 @@ Bug fixes - Fix error in concatenating unlabeled dimensions (:pull:`3362`). By `Deepak Cherian `_. - Warn if the ``dim`` kwarg is passed to rolling operations. This is redundant since a dimension is - specified when the :py:class:`DatasetRolling` or :py:class:`DataArrayRolling` object is created. + specified when the :py:class:`~core.rolling.DatasetRolling` or :py:class:`~core.rolling.DataArrayRolling` object is created. (:pull:`3362`). By `Deepak Cherian `_. Documentation @@ -377,7 +377,7 @@ Breaking changes - Reindexing with variables of a different dimension now raise an error (previously deprecated) - ``xarray.broadcast_array`` is removed (previously deprecated in favor of :py:func:`~xarray.broadcast`) -- :py:meth:`Variable.expand_dims` is removed (previously deprecated in favor of +- ``Variable.expand_dims`` is removed (previously deprecated in favor of :py:meth:`Variable.set_dims`) New functions/methods @@ -611,7 +611,7 @@ New functions/methods By `Alan Brammer `_ and `Ryan May `_. -- :py:meth:`~xarray.core.GroupBy.quantile` is now a method of ``GroupBy`` +- :py:meth:`~core.groupby.GroupBy.quantile` is now a method of ``GroupBy`` objects (:issue:`3018`). By `David Huard `_. @@ -1153,7 +1153,7 @@ Announcements of note: for more details. - We have a new :doc:`roadmap` that outlines our future development plans. -- `Dataset.apply` now properly documents the way `func` is called. 
+- ``Dataset.apply`` now properly documents the way `func` is called. By `Matti Eskelinen `_. Enhancements @@ -1585,7 +1585,7 @@ Backwards incompatible changes Enhancements ~~~~~~~~~~~~ -- Added :py:func:`~xarray.dot`, equivalent to :py:func:`np.einsum`. +- Added :py:func:`~xarray.dot`, equivalent to :py:func:`numpy.einsum`. Also, :py:func:`~xarray.DataArray.dot` now supports ``dims`` option, which specifies the dimensions to sum over. (:issue:`1951`) @@ -2390,7 +2390,7 @@ Enhancements By `Stephan Hoyer `_ and `Phillip J. Wolfram `_. -- New aggregation on rolling objects :py:meth:`DataArray.rolling(...).count()` +- New aggregation on rolling objects :py:meth:`~core.rolling.DataArrayRolling.count` which providing a rolling count of valid values (:issue:`1138`). Bug fixes From 446806d1419b9164d209b2083c6e1b9bf28b89f6 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sat, 7 Dec 2019 02:03:31 +0100 Subject: [PATCH 06/30] fix a few mistakes in the reference targets --- doc/whats-new.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/whats-new.rst b/doc/whats-new.rst index a02564763b7..8238419ce4b 100644 --- a/doc/whats-new.rst +++ b/doc/whats-new.rst @@ -28,7 +28,7 @@ New Features - :py:meth:`Dataset.quantile`, :py:meth:`DataArray.quantile` and ``GroupBy.quantile`` now work with dask Variables. By `Deepak Cherian `_. -- Added the :py:meth:`count` reduction method to both :py:class:`~core.coarsen.DatasetCoarsen` +- Added the ``count`` reduction method to both :py:class:`~core.coarsen.DatasetCoarsen` and :py:class:`~core.coarsen.DataArrayCoarsen` objects. (:pull:`3500`) By `Deepak Cherian `_ @@ -133,7 +133,7 @@ New Features `_ for xarray objects. Note that xarray objects with a dask.array backend already used deterministic hashing in previous releases; this change implements it when whole - xarray objects are embedded in a dask graph, e.g. when :py:meth:`DataArray.map` is + xarray objects are embedded in a dask graph, e.g. when :py:meth:`DataArray.map_blocks` is invoked. (:issue:`3378`, :pull:`3446`, :pull:`3515`) By `Deepak Cherian `_ and `Guido Imperiale `_. 
From e5f1be904489e9f2985484ee53c5b65e21a21713 Mon Sep 17 00:00:00 2001 From: Keewis Date: Mon, 9 Dec 2019 23:36:56 +0100 Subject: [PATCH 07/30] add missing methods for Data*Rolling, Data*GroupBy and Data*Resample --- doc/api-hidden.rst | 77 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 77 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 05308c8ce47..0096d32f10c 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -39,6 +39,7 @@ core.groupby.DatasetGroupBy.argmax core.groupby.DatasetGroupBy.argmin core.groupby.DatasetGroupBy.count + core.groupby.DatasetGroupBy.max core.groupby.DatasetGroupBy.mean core.groupby.DatasetGroupBy.median core.groupby.DatasetGroupBy.min @@ -47,6 +48,44 @@ core.groupby.DatasetGroupBy.sum core.groupby.DatasetGroupBy.var + core.resample.DatasetResample.all + core.resample.DatasetResample.any + core.resample.DatasetResample.apply + core.resample.DatasetResample.argmax + core.resample.DatasetResample.argmin + core.resample.DatasetResample.assign + core.resample.DatasetResample.assign_coords + core.resample.DatasetResample.bfill + core.resample.DatasetResample.count + core.resample.DatasetResample.ffill + core.resample.DatasetResample.fillna + core.resample.DatasetResample.first + core.resample.DatasetResample.last + core.resample.DatasetResample.map + core.resample.DatasetResample.max + core.resample.DatasetResample.mean + core.resample.DatasetResample.median + core.resample.DatasetResample.min + core.resample.DatasetResample.prod + core.resample.DatasetResample.quantile + core.resample.DatasetResample.reduce + core.resample.DatasetResample.std + core.resample.DatasetResample.sum + core.resample.DatasetResample.var + core.resample.DatasetResample.where + + core.rolling.DatasetRolling.argmax + core.rolling.DatasetRolling.argmin + core.rolling.DatasetRolling.count + core.rolling.DatasetRolling.max + core.rolling.DatasetRolling.mean + core.rolling.DatasetRolling.median + core.rolling.DatasetRolling.min + core.rolling.DatasetRolling.prod + core.rolling.DatasetRolling.std + core.rolling.DatasetRolling.sum + core.rolling.DatasetRolling.var + Dataset.argsort Dataset.astype Dataset.clip @@ -94,6 +133,7 @@ core.groupby.DataArrayGroupBy.argmax core.groupby.DataArrayGroupBy.argmin core.groupby.DataArrayGroupBy.count + core.groupby.DataArrayGroupBy.max core.groupby.DataArrayGroupBy.mean core.groupby.DataArrayGroupBy.median core.groupby.DataArrayGroupBy.min @@ -102,6 +142,43 @@ core.groupby.DataArrayGroupBy.sum core.groupby.DataArrayGroupBy.var + core.resample.DataArrayResample.all + core.resample.DataArrayResample.any + core.resample.DataArrayResample.apply + core.resample.DataArrayResample.argmax + core.resample.DataArrayResample.argmin + core.resample.DataArrayResample.assign_coords + core.resample.DataArrayResample.bfill + core.resample.DataArrayResample.count + core.resample.DataArrayResample.ffill + core.resample.DataArrayResample.fillna + core.resample.DataArrayResample.first + core.resample.DataArrayResample.last + core.resample.DataArrayResample.map + core.resample.DataArrayResample.max + core.resample.DataArrayResample.mean + core.resample.DataArrayResample.median + core.resample.DataArrayResample.min + core.resample.DataArrayResample.prod + core.resample.DataArrayResample.quantile + core.resample.DataArrayResample.reduce + core.resample.DataArrayResample.std + core.resample.DataArrayResample.sum + core.resample.DataArrayResample.var + core.resample.DataArrayResample.where + + core.rolling.DataArrayRolling.argmax + 
core.rolling.DataArrayRolling.argmin + core.rolling.DataArrayRolling.count + core.rolling.DataArrayRolling.max + core.rolling.DataArrayRolling.mean + core.rolling.DataArrayRolling.median + core.rolling.DataArrayRolling.min + core.rolling.DataArrayRolling.prod + core.rolling.DataArrayRolling.std + core.rolling.DataArrayRolling.sum + core.rolling.DataArrayRolling.var + DataArray.argsort DataArray.clip DataArray.conj From aaa862856e8c00d88084e599a06ada431bba486d Mon Sep 17 00:00:00 2001 From: Keewis Date: Mon, 9 Dec 2019 23:44:17 +0100 Subject: [PATCH 08/30] add all CFTimeIndex methods --- doc/api-hidden.rst | 90 +++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 89 insertions(+), 1 deletion(-) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 0096d32f10c..c2e7ee50f7c 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -331,6 +331,94 @@ plot.FacetGrid.set_ticks plot.FacetGrid.map + CFTimeIndex.all + CFTimeIndex.any + CFTimeIndex.append + CFTimeIndex.argmax + CFTimeIndex.argmin + CFTimeIndex.argsort + CFTimeIndex.asof + CFTimeIndex.asof_locs + CFTimeIndex.astype + CFTimeIndex.contains + CFTimeIndex.copy + CFTimeIndex.delete + CFTimeIndex.difference + CFTimeIndex.drop + CFTimeIndex.drop_duplicates + CFTimeIndex.droplevel + CFTimeIndex.dropna + CFTimeIndex.duplicated + CFTimeIndex.equals + CFTimeIndex.factorize + CFTimeIndex.fillna + CFTimeIndex.format + CFTimeIndex.get_duplicates + CFTimeIndex.get_indexer + CFTimeIndex.get_indexer_for + CFTimeIndex.get_indexer_non_unique + CFTimeIndex.get_level_values + CFTimeIndex.get_loc + CFTimeIndex.get_slice_bound + CFTimeIndex.get_value + CFTimeIndex.get_values + CFTimeIndex.groupby + CFTimeIndex.holds_integer + CFTimeIndex.identical + CFTimeIndex.insert + CFTimeIndex.intersection + CFTimeIndex.is_ + CFTimeIndex.is_boolean + CFTimeIndex.is_categorical + CFTimeIndex.is_floating + CFTimeIndex.is_integer + CFTimeIndex.is_interval + CFTimeIndex.is_lexsorted_for_tuple + CFTimeIndex.is_mixed + CFTimeIndex.is_numeric + CFTimeIndex.is_object + CFTimeIndex.is_type_compatible + CFTimeIndex.isin + CFTimeIndex.isna + CFTimeIndex.isnull + CFTimeIndex.item + CFTimeIndex.join + CFTimeIndex.map + CFTimeIndex.max + CFTimeIndex.memory_usage + CFTimeIndex.min + CFTimeIndex.notna + CFTimeIndex.notnull + CFTimeIndex.nunique + CFTimeIndex.putmask + CFTimeIndex.ravel + CFTimeIndex.reindex + CFTimeIndex.rename + CFTimeIndex.repeat + CFTimeIndex.searchsorted + CFTimeIndex.set_names + CFTimeIndex.set_value CFTimeIndex.shift - CFTimeIndex.to_datetimeindex + CFTimeIndex.slice_indexer + CFTimeIndex.slice_locs + CFTimeIndex.sort + CFTimeIndex.sort_values + CFTimeIndex.sortlevel CFTimeIndex.strftime + CFTimeIndex.summary + CFTimeIndex.symmetric_difference + CFTimeIndex.take + CFTimeIndex.to_datetimeindex + CFTimeIndex.to_flat_index + CFTimeIndex.to_frame + CFTimeIndex.to_list + CFTimeIndex.to_native_types + CFTimeIndex.to_numpy + CFTimeIndex.to_series + CFTimeIndex.tolist + CFTimeIndex.transpose + CFTimeIndex.union + CFTimeIndex.unique + CFTimeIndex.value_counts + CFTimeIndex.view + CFTimeIndex.where From d931eff54b13ec39f1278bee721b793d218eeb1d Mon Sep 17 00:00:00 2001 From: Keewis Date: Tue, 10 Dec 2019 01:04:10 +0100 Subject: [PATCH 09/30] fix a few more broken links in whats-new.rst --- doc/api-hidden.rst | 4 ++++ doc/whats-new.rst | 28 ++++++++++++++-------------- xarray/core/resample.py | 4 ++++ 3 files changed, 22 insertions(+), 14 deletions(-) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index c2e7ee50f7c..1f30f12a6da 100644 --- 
a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -27,6 +27,8 @@ Dataset.std Dataset.var + core.rolling.DatasetCoarsen + core.groupby.DatasetGroupBy.assign core.groupby.DatasetGroupBy.assign_coords core.groupby.DatasetGroupBy.first @@ -122,6 +124,8 @@ DataArray.std DataArray.var + core.rolling.DataArrayCoarsen + core.groupby.DataArrayGroupBy.assign_coords core.groupby.DataArrayGroupBy.first core.groupby.DataArrayGroupBy.last diff --git a/doc/whats-new.rst b/doc/whats-new.rst index fcc49506f75..9cdc457fe91 100644 --- a/doc/whats-new.rst +++ b/doc/whats-new.rst @@ -28,8 +28,8 @@ New Features - :py:meth:`Dataset.quantile`, :py:meth:`DataArray.quantile` and ``GroupBy.quantile`` now work with dask Variables. By `Deepak Cherian `_. -- Added the ``count`` reduction method to both :py:class:`~core.coarsen.DatasetCoarsen` - and :py:class:`~core.coarsen.DataArrayCoarsen` objects. (:pull:`3500`) +- Added the ``count`` reduction method to both :py:class:`~core.rolling.DatasetCoarsen` + and :py:class:`~core.rolling.DataArrayCoarsen` objects. (:pull:`3500`) By `Deepak Cherian `_ Bug fixes @@ -66,7 +66,7 @@ Internal Changes :py:meth:`DataArray.isel`, and :py:meth:`DataArray.__getitem__` when indexing by int, slice, list of int, scalar ndarray, or 1-dimensional ndarray. (:pull:`3533`) by `Guido Imperiale `_. -- Removed internal method ``Dataset._from_vars_and_coord_names``, +- Removed internal method ``Dataset._from_vars_and_coord_names``, which was dominated by ``Dataset._construct_direct``. (:pull:`3565`) By `Maximilian Roos `_ @@ -93,8 +93,8 @@ Breaking changes New Features ~~~~~~~~~~~~ -- Added the ``sparse`` option to :py:meth:`~xarray.DataArray.unstack`, - :py:meth:`~xarray.Dataset.unstack`, :py:meth:`~xarray.DataArray.reindex`, +- Added the ``sparse`` option to :py:meth:`~xarray.DataArray.unstack`, + :py:meth:`~xarray.Dataset.unstack`, :py:meth:`~xarray.DataArray.reindex`, :py:meth:`~xarray.Dataset.reindex` (:issue:`3518`). By `Keisuke Fujii `_. - Added the ``fill_value`` option to :py:meth:`DataArray.unstack` and @@ -104,13 +104,13 @@ New Features :py:meth:`~xarray.Dataset.interpolate_na`. This controls the maximum size of the data gap that will be filled by interpolation. By `Deepak Cherian `_. - Added :py:meth:`Dataset.drop_sel` & :py:meth:`DataArray.drop_sel` for dropping labels. - :py:meth:`Dataset.drop_vars` & :py:meth:`DataArray.drop_vars` have been added for + :py:meth:`Dataset.drop_vars` & :py:meth:`DataArray.drop_vars` have been added for dropping variables (including coordinates). The existing :py:meth:`Dataset.drop` & :py:meth:`DataArray.drop` methods remain as a backward compatible option for dropping either labels or variables, but using the more specific methods is encouraged. (:pull:`3475`) By `Maximilian Roos `_ -- Added :py:meth:`Dataset.map` & :py:meth:`GroupBy.map` & :py:meth:`Resample.map` for +- Added :py:meth:`Dataset.map` & ``GroupBy.map`` & ``Resample.map`` for mapping / applying a function over each item in the collection, reflecting the widely used and least surprising name for this operation. The existing ``apply`` methods remain for backward compatibility, though using the ``map`` @@ -129,7 +129,7 @@ New Features - :py:func:`xarray.dot`, and :py:meth:`DataArray.dot` now support the ``dims=...`` option to sum over the union of dimensions of all input arrays (:issue:`3423`) by `Mathias Hauser `_. 
-- Added new :py:meth:`Dataset._repr_html_` and :py:meth:`DataArray._repr_html_` to improve +- Added new ``Dataset._repr_html_`` and ``DataArray._repr_html_`` to improve representation of objects in Jupyter. By default this feature is turned off for now. Enable it with ``xarray.set_options(display_style="html")``. (:pull:`3425`) by `Benoit Bovy `_ and @@ -147,13 +147,13 @@ New Features Bug fixes ~~~~~~~~~ -- Ensure an index of type ``CFTimeIndex`` is not converted to a ``DatetimeIndex`` when +- Ensure an index of type ``CFTimeIndex`` is not converted to a ``DatetimeIndex`` when calling :py:meth:`Dataset.rename`, :py:meth:`Dataset.rename_dims` and :py:meth:`Dataset.rename_vars`. By `Mathias Hauser `_. (:issue:`3522`). - Fix a bug in :py:meth:`DataArray.set_index` in case that an existing dimension becomes a level variable of MultiIndex. (:pull:`3520`). By `Keisuke Fujii `_. - Harmonize ``_FillValue``, ``missing_value`` during encoding and decoding steps. (:pull:`3502`) - By `Anderson Banihirwe `_. + By `Anderson Banihirwe `_. - Fix regression introduced in v0.14.0 that would cause a crash if dask is installed but cloudpickle isn't (:issue:`3401`) by `Rhys Doyle `_ - Fix grouping over variables with NaNs. (:issue:`2383`, :pull:`3406`). @@ -168,7 +168,7 @@ Bug fixes - Rolling reduction operations no longer compute dask arrays by default. (:issue:`3161`). In addition, the ``allow_lazy`` kwarg to ``reduce`` is deprecated. By `Deepak Cherian `_. -- Fix :py:meth:`GroupBy.reduce` when reducing over multiple dimensions. +- Fix ``GroupBy.reduce`` when reducing over multiple dimensions. (:issue:`3402`). By `Deepak Cherian `_ - Allow appending datetime and bool data variables to zarr stores. (:issue:`3480`). By `Akihiro Matsukawa `_. @@ -218,7 +218,7 @@ Internal Changes - Enable type checking on default sentinel values (:pull:`3472`) By `Maximilian Roos `_ -- Add :py:meth:`Variable._replace` for simpler replacing of a subset of attributes (:pull:`3472`) +- Add ``Variable._replace`` for simpler replacing of a subset of attributes (:pull:`3472`) By `Maximilian Roos `_ .. _whats-new.0.14.0: @@ -274,7 +274,7 @@ New functions/methods Enhancements ~~~~~~~~~~~~ -- :py:class:`~core.groupby.GroupBy` enhancements. By `Deepak Cherian `_. +- ``core.groupby.GroupBy`` enhancements. By `Deepak Cherian `_. - Added a repr (:pull:`3344`). Example:: @@ -616,7 +616,7 @@ New functions/methods By `Alan Brammer `_ and `Ryan May `_. -- :py:meth:`~core.groupby.GroupBy.quantile` is now a method of ``GroupBy`` +- ``GroupBy.quantile`` is now a method of ``GroupBy`` objects (:issue:`3018`). By `David Huard `_. diff --git a/xarray/core/resample.py b/xarray/core/resample.py index fb388490d06..2b3b7da6217 100644 --- a/xarray/core/resample.py +++ b/xarray/core/resample.py @@ -184,6 +184,7 @@ def map(self, func, shortcut=False, args=(), **kwargs): Apply uses heuristics (like `pandas.GroupBy.apply`) to figure out how to stack together the array. The rule is: + 1. If the dimension along which the group coordinate is defined is still in the first grouped array after applying `func`, then stack over this dimension. @@ -196,11 +197,13 @@ def map(self, func, shortcut=False, args=(), **kwargs): Callable to apply to each array. shortcut : bool, optional Whether or not to shortcut evaluation under the assumptions that: + (1) The action of `func` does not depend on any of the array metadata (attributes or coordinates) but only on the data and dimensions. 
(2) The action of `func` creates arrays with homogeneous metadata, that is, with the same dimensions and attributes. + If these conditions are satisfied `shortcut` provides significant speedup. This should be the case for many common groupby operations (e.g., applying numpy ufuncs). @@ -275,6 +278,7 @@ def map(self, func, args=(), shortcut=None, **kwargs): Apply uses heuristics (like `pandas.GroupBy.apply`) to figure out how to stack together the datasets. The rule is: + 1. If the dimension along which the group coordinate is defined is still in the first grouped item after applying `func`, then stack over this dimension. From 02685e0e6e3a6b870710dbf23752bedddcb5b386 Mon Sep 17 00:00:00 2001 From: Keewis Date: Tue, 10 Dec 2019 16:02:47 +0100 Subject: [PATCH 10/30] remove documentation links for some non-public methods / functions --- doc/whats-new.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/whats-new.rst b/doc/whats-new.rst index 9cdc457fe91..be97e689e10 100644 --- a/doc/whats-new.rst +++ b/doc/whats-new.rst @@ -1775,7 +1775,7 @@ Bug fixes coordinates of target, destination and keys. If there are any conflict among these coordinates, ``IndexError`` will be raised. By `Keisuke Fujii `_. -- Properly point :py:meth:`DataArray.__dask_scheduler__` to +- Properly point ``DataArray.__dask_scheduler__`` to ``dask.threaded.get``. By `Matthew Rocklin `_. - Bug fixes in :py:meth:`DataArray.plot.imshow`: all-NaN arrays and arrays with size one in some dimension can now be plotted, which is good for @@ -1987,7 +1987,7 @@ Enhancements - Support for :py:class:`pathlib.Path` objects added to :py:func:`~xarray.open_dataset`, :py:func:`~xarray.open_mfdataset`, - :py:func:`~xarray.to_netcdf`, and :py:func:`~xarray.save_mfdataset` + ``xarray.to_netcdf``, and :py:func:`~xarray.save_mfdataset` (:issue:`799`): .. 
ipython:: From 9ee174557f659646ae1d53eb6754eddb9657730e Mon Sep 17 00:00:00 2001 From: Keewis Date: Tue, 10 Dec 2019 16:23:03 +0100 Subject: [PATCH 11/30] add missing methods to Data*Coarsen --- doc/api-hidden.rst | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 1f30f12a6da..aec5b9056b4 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -28,6 +28,19 @@ Dataset.var core.rolling.DatasetCoarsen + core.rolling.DatasetCoarsen.all + core.rolling.DatasetCoarsen.any + core.rolling.DatasetCoarsen.argmax + core.rolling.DatasetCoarsen.argmin + core.rolling.DatasetCoarsen.count + core.rolling.DatasetCoarsen.max + core.rolling.DatasetCoarsen.mean + core.rolling.DatasetCoarsen.median + core.rolling.DatasetCoarsen.min + core.rolling.DatasetCoarsen.prod + core.rolling.DatasetCoarsen.std + core.rolling.DatasetCoarsen.sum + core.rolling.DatasetCoarsen.var core.groupby.DatasetGroupBy.assign core.groupby.DatasetGroupBy.assign_coords @@ -125,6 +138,19 @@ DataArray.var core.rolling.DataArrayCoarsen + core.rolling.DataArrayCoarsen.all + core.rolling.DataArrayCoarsen.any + core.rolling.DataArrayCoarsen.argmax + core.rolling.DataArrayCoarsen.argmin + core.rolling.DataArrayCoarsen.count + core.rolling.DataArrayCoarsen.max + core.rolling.DataArrayCoarsen.mean + core.rolling.DataArrayCoarsen.median + core.rolling.DataArrayCoarsen.min + core.rolling.DataArrayCoarsen.prod + core.rolling.DataArrayCoarsen.std + core.rolling.DataArrayCoarsen.sum + core.rolling.DataArrayCoarsen.var core.groupby.DataArrayGroupBy.assign_coords core.groupby.DataArrayGroupBy.first From 397e2b1aecbc28ed3b395338b0b44ee17a3c47bf Mon Sep 17 00:00:00 2001 From: Keewis Date: Tue, 10 Dec 2019 17:17:37 +0100 Subject: [PATCH 12/30] move the coarsen objects into their own section in api.rst --- doc/api-hidden.rst | 2 -- doc/api.rst | 10 ++++++++++ 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index aec5b9056b4..b35b19cd02d 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -27,7 +27,6 @@ Dataset.std Dataset.var - core.rolling.DatasetCoarsen core.rolling.DatasetCoarsen.all core.rolling.DatasetCoarsen.any core.rolling.DatasetCoarsen.argmax @@ -137,7 +136,6 @@ DataArray.std DataArray.var - core.rolling.DataArrayCoarsen core.rolling.DataArrayCoarsen.all core.rolling.DataArrayCoarsen.any core.rolling.DataArrayCoarsen.argmax diff --git a/doc/api.rst b/doc/api.rst index 7678266852a..61d96a0cb2a 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -564,6 +564,16 @@ Rolling objects core.rolling.DatasetRolling.reduce core.rolling_exp.RollingExp +Coarsen objects +=============== + +.. autosummary:: + :toctree: generated/ + + core.rolling.DataArrayCoarsen + core.rolling.DatasetCoarsen + + Resample objects ================ From a5c05ca1f0fafd3cf084e55a2db5bbb1c8c1ce9e Mon Sep 17 00:00:00 2001 From: Keewis Date: Tue, 10 Dec 2019 22:12:06 +0100 Subject: [PATCH 13/30] use currentmodule instead of prefixing with ~xarray --- doc/io.rst | 84 ++++++++++++++++++++++----------------------- doc/pandas.rst | 23 +++++++------ doc/terminology.rst | 5 +-- 3 files changed, 57 insertions(+), 55 deletions(-) diff --git a/doc/io.rst b/doc/io.rst index f8e4caa9b24..1bae148275f 100644 --- a/doc/io.rst +++ b/doc/io.rst @@ -1,3 +1,4 @@ +.. currentmodule:: xarray .. 
_io: Reading and writing files @@ -23,8 +24,8 @@ netCDF The recommended way to store xarray data structures is `netCDF`__, which is a binary file format for self-described datasets that originated in the geosciences. xarray is based on the netCDF data model, so netCDF files -on disk directly correspond to :py:class:`~xarray.Dataset` objects (more accurately, -a group in a netCDF file directly corresponds to a to :py:class:`~xarray.Dataset` object. +on disk directly correspond to :py:class:`Dataset` objects (more accurately, +a group in a netCDF file directly corresponds to a to :py:class:`Dataset` object. See :ref:`io.netcdf_groups` for more.) NetCDF is supported on almost all platforms, and parsers exist @@ -47,7 +48,7 @@ read/write netCDF V4 files and use the compression options described below). __ https://github.com/Unidata/netcdf4-python We can save a Dataset to disk using the -:py:meth:`~xarray.Dataset.to_netcdf` method: +:py:meth:`Dataset.to_netcdf` method: .. ipython:: python @@ -65,13 +66,13 @@ the ``format`` and ``engine`` arguments. .. tip:: Using the `h5netcdf `_ package - by passing ``engine='h5netcdf'`` to :py:meth:`~xarray.open_dataset` can + by passing ``engine='h5netcdf'`` to :py:meth:`open_dataset` can sometimes be quicker than the default ``engine='netcdf4'`` that uses the `netCDF4 `_ package. We can load netCDF files to create a new Dataset using -:py:func:`~xarray.open_dataset`: +:py:func:`open_dataset`: .. ipython:: python @@ -79,9 +80,9 @@ We can load netCDF files to create a new Dataset using ds_disk Similarly, a DataArray can be saved to disk using the -:py:meth:`~xarray.DataArray.to_netcdf` method, and loaded -from disk using the :py:func:`~xarray.open_dataarray` function. As netCDF files -correspond to :py:class:`~xarray.Dataset` objects, these functions internally +:py:meth:`DataArray.to_netcdf` method, and loaded +from disk using the :py:func:`open_dataarray` function. As netCDF files +correspond to :py:class:`Dataset` objects, these functions internally convert the ``DataArray`` to a ``Dataset`` before saving, and then convert back when loading, ensuring that the ``DataArray`` that is loaded is always exactly the same as the one that was saved. @@ -108,9 +109,9 @@ is modified: the original file on disk is never touched. xarray's lazy loading of remote or on-disk datasets is often but not always desirable. Before performing computationally intense operations, it is often a good idea to load a Dataset (or DataArray) entirely into memory by - invoking the :py:meth:`~xarray.Dataset.load` method. + invoking the :py:meth:`Dataset.load` method. -Datasets have a :py:meth:`~xarray.Dataset.close` method to close the associated +Datasets have a :py:meth:`Dataset.close` method to close the associated netCDF file. However, it's often cleaner to use a ``with`` statement: .. ipython:: python @@ -135,17 +136,17 @@ to the original netCDF file, regardless if they exist in the original dataset. Groups ~~~~~~ -NetCDF groups are not supported as part of the :py:class:`~xarray.Dataset` data model. +NetCDF groups are not supported as part of the :py:class:`Dataset` data model. Instead, groups can be loaded individually as Dataset objects. To do so, pass a ``group`` keyword argument to the -:py:func:`~xarray.open_dataset` function. The group can be specified as a path-like +:py:func:`open_dataset` function. The group can be specified as a path-like string, e.g., to access subgroup ``'bar'`` within group ``'foo'`` pass ``'/foo/bar'`` as the ``group`` argument. 
In a similar way, the ``group`` keyword argument can be given to the -:py:meth:`Dataset.to_netcdf ` method to write to a group +:py:meth:`Dataset.to_netcdf` method to write to a group in a netCDF file. When writing multiple groups in one file, pass ``mode='a'`` to -:py:meth:`Dataset.to_netcdf ` to ensure that each call does not delete the file. +:py:meth:`Dataset.to_netcdf` to ensure that each call does not delete the file. .. _io.encoding: @@ -155,7 +156,7 @@ Reading encoded data NetCDF files follow some conventions for encoding datetime arrays (as numbers with a "units" attribute) and for packing and unpacking data (as described by the "scale_factor" and "add_offset" attributes). If the argument -``decode_cf=True`` (default) is given to :py:func:`~xarray.open_dataset`, xarray will attempt +``decode_cf=True`` (default) is given to :py:func:`open_dataset`, xarray will attempt to automatically decode the values in the netCDF objects according to `CF conventions`_. Sometimes this will fail, for example, if a variable has an invalid "units" or "calendar" attribute. For these cases, you can @@ -164,8 +165,8 @@ turn this decoding off manually. .. _CF conventions: http://cfconventions.org/ You can view this encoding information (among others) in the -:py:attr:`DataArray.encoding ` and -:py:attr:`DataArray.encoding ` attributes: +:py:attr:`DataArray.encoding` and +:py:attr:`DataArray.encoding` attributes: .. ipython:: :verbatim: @@ -206,13 +207,13 @@ Reading multi-file datasets NetCDF files are often encountered in collections, e.g., with different files corresponding to different model runs or one file per timestamp. xarray can straightforwardly combine such files into a single Dataset by making use of -:py:func:`~xarray.concat`, :py:func:`~xarray.merge`, :py:func:`~xarray.combine_nested` and -:py:func:`~xarray.combine_by_coords`. For details on the difference between these +:py:func:`concat`, :py:func:`merge`, :py:func:`combine_nested` and +:py:func:`combine_by_coords`. For details on the difference between these functions see :ref:`combining data`. Xarray includes support for manipulating datasets that don't fit into memory with dask_. If you have dask installed, you can open multiple files -simultaneously in parallel using :py:func:`~xarray.open_mfdataset`:: +simultaneously in parallel using :py:func:`open_mfdataset`:: xr.open_mfdataset('my/files/*.nc', parallel=True) @@ -221,7 +222,7 @@ single xarray dataset. It is the recommended way to open multiple files with xarray. For more details on parallel reading, see :ref:`combining.multi`, :ref:`dask.io` and a `blog post`_ by Stephan Hoyer. -:py:func:`~xarray.open_mfdataset` takes many kwargs that allow you to +:py:func:`open_mfdataset` takes many kwargs that allow you to control its behaviour (for e.g. ``parallel``, ``combine``, ``compat``, ``join``, ``concat_dim``). See its docstring for more details. @@ -246,14 +247,14 @@ See its docstring for more details. .. _dask: http://dask.pydata.org .. _blog post: http://stephanhoyer.com/2015/06/11/xray-dask-out-of-core-labeled-arrays/ -Sometimes multi-file datasets are not conveniently organized for easy use of :py:func:`~xarray.open_mfdataset`. +Sometimes multi-file datasets are not conveniently organized for easy use of :py:func:`open_mfdataset`. One can use the ``preprocess`` argument to provide a function that takes a dataset and returns a modified Dataset. 
-:py:func:`~xarray.open_mfdataset` will call ``preprocess`` on every dataset +:py:func:`open_mfdataset` will call ``preprocess`` on every dataset (corresponding to each file) prior to combining them. -If :py:func:`~xarray.open_mfdataset` does not meet your needs, other approaches are possible. +If :py:func:`open_mfdataset` does not meet your needs, other approaches are possible. The general pattern for parallel reading of multiple files using dask, modifying those datasets and then combining into a single ``Dataset`` is:: @@ -446,8 +447,8 @@ Invalid netCDF files The library ``h5netcdf`` allows writing some dtypes (booleans, complex, ...) that aren't allowed in netCDF4 (see `h5netcdf documentation `_). -This feature is availabe through :py:meth:`DataArray.to_netcdf ` and -:py:meth:`Datset.to_netcdf ` when used with ``engine="h5netcdf"`` +This feature is availabe through :py:meth:`DataArray.to_netcdf` and +:py:meth:`Dataset.to_netcdf` when used with ``engine="h5netcdf"`` and currently raises a warning unless ``invalid_netcdf=True`` is set: .. ipython:: python @@ -480,7 +481,7 @@ The Iris_ tool allows easy reading of common meteorological and climate model fo (including GRIB and UK MetOffice PP files) into ``Cube`` objects which are in many ways very similar to ``DataArray`` objects, while enforcing a CF-compliant data model. If iris is installed xarray can convert a ``DataArray`` into a ``Cube`` using -:py:meth:`~xarray.DataArray.to_iris`: +:py:meth:`DataArray.to_iris`: .. ipython:: python @@ -492,7 +493,7 @@ installed xarray can convert a ``DataArray`` into a ``Cube`` using cube Conversely, we can create a new ``DataArray`` object from a ``Cube`` using -:py:meth:`~xarray.DataArray.from_iris`: +:py:meth:`DataArray.from_iris`: .. ipython:: python @@ -594,7 +595,7 @@ over the network until we look at particular values: .. image:: _static/opendap-prism-tmax.png Some servers require authentication before we can access the data. For this -purpose we can explicitly create a :py:class:`~xarray.backends.PydapDataStore` +purpose we can explicitly create a :py:class:`backends.PydapDataStore` and pass in a `Requests`__ session object. For example for HTTP Basic authentication:: @@ -657,8 +658,8 @@ this version of xarray will work in future versions. When pickling an object opened from a NetCDF file, the pickle file will contain a reference to the file on disk. If you want to store the actual - array values, load it into memory first with :py:meth:`~xarray.Dataset.load` - or :py:meth:`~xarray.Dataset.compute`. + array values, load it into memory first with :py:meth:`Dataset.load` + or :py:meth:`Dataset.compute`. .. _dictionary io: @@ -666,7 +667,7 @@ Dictionary ---------- We can convert a ``Dataset`` (or a ``DataArray``) to a dict using -:py:meth:`~xarray.Dataset.to_dict`: +:py:meth:`Dataset.to_dict`: .. ipython:: python @@ -674,7 +675,7 @@ We can convert a ``Dataset`` (or a ``DataArray``) to a dict using d We can create a new xarray object from a dict using -:py:meth:`~xarray.Dataset.from_dict`: +:py:meth:`Dataset.from_dict`: .. ipython:: python @@ -709,7 +710,7 @@ Rasterio GeoTIFFs and other gridded raster datasets can be opened using `rasterio`_, if rasterio is installed. Here is an example of how to use -:py:func:`~xarray.open_rasterio` to read one of rasterio's `test files`_: +:py:func:`open_rasterio` to read one of rasterio's `test files`_: .. ipython:: :verbatim: @@ -768,8 +769,7 @@ Xarray's Zarr backend allows xarray to leverage these capabilities. 
Xarray can't open just any zarr dataset, because xarray requires special metadata (attributes) describing the dataset dimensions and coordinates. At this time, xarray can only open zarr datasets that have been written by -xarray. To write a dataset with zarr, we use the -:py:attr:`Dataset.to_zarr ` method. +xarray. To write a dataset with zarr, we use the :py:attr:`Dataset.to_zarr` method. To write to a local directory, we pass a path to a directory .. ipython:: python @@ -816,7 +816,7 @@ can be omitted as it will internally be set to ``'a'``. To store variable length strings use ``dtype=object``. To read back a zarr dataset that has been created this way, we use the -:py:func:`~xarray.open_zarr` method: +:py:func:`open_zarr` method: .. ipython:: python @@ -885,12 +885,12 @@ opening the store. (For more information on this feature, consult the If you have zarr version 2.3 or greater, xarray can write and read stores with consolidated metadata. To write consolidated metadata, pass the ``consolidated=True`` option to the -:py:attr:`Dataset.to_zarr ` method:: +:py:attr:`Dataset.to_zarr` method:: ds.to_zarr('foo.zarr', consolidated=True) To read a consolidated store, pass the ``consolidated=True`` option to -:py:func:`~xarray.open_zarr`:: +:py:func:`open_zarr`:: ds = xr.open_zarr('foo.zarr', consolidated=True) @@ -912,7 +912,7 @@ GRIB format via cfgrib xarray supports reading GRIB files via ECMWF cfgrib_ python driver and ecCodes_ C-library, if they are installed. To open a GRIB file supply ``engine='cfgrib'`` -to :py:func:`~xarray.open_dataset`: +to :py:func:`open_dataset`: .. ipython:: :verbatim: @@ -934,7 +934,7 @@ Formats supported by PyNIO xarray can also read GRIB, HDF4 and other file formats supported by PyNIO_, if PyNIO is installed. To use PyNIO to read such files, supply -``engine='pynio'`` to :py:func:`~xarray.open_dataset`. +``engine='pynio'`` to :py:func:`open_dataset`. We recommend installing PyNIO via conda:: @@ -956,7 +956,7 @@ identify readers heuristically, or format can be specified via a key in `backend_kwargs`. To use PseudoNetCDF to read such files, supply -``engine='pseudonetcdf'`` to :py:func:`~xarray.open_dataset`. +``engine='pseudonetcdf'`` to :py:func:`open_dataset`. Add ``backend_kwargs={'format': ''}`` where `` options are listed on the PseudoNetCDF page. diff --git a/doc/pandas.rst b/doc/pandas.rst index c403fa8b44c..a84c89ab938 100644 --- a/doc/pandas.rst +++ b/doc/pandas.rst @@ -1,3 +1,4 @@ +.. currentmodule:: xarray .. _pandas: =================== @@ -32,9 +33,9 @@ Tabular data is easiest to work with when it meets the criteria for __ http://www.jstatsoft.org/v59/i10/ -In this "tidy data" format, we can represent any :py:class:`~xarray.Dataset` and -:py:class:`~xarray.DataArray` in terms of :py:class:`pandas.DataFrame` and -:py:class:`pandas.Series`, respectively (and vice-versa). The representation +In this "tidy data" format, we can represent any :py:class:`Dataset` and +:py:class:`DataArray` in terms of :py:class:`~pandas.DataFrame` and +:py:class:`~pandas.Series`, respectively (and vice-versa). The representation works by flattening non-coordinates to 1D, and turning the tensor product of coordinate indexes into a :py:class:`pandas.MultiIndex`. @@ -42,7 +43,7 @@ Dataset and DataFrame --------------------- To convert any dataset to a ``DataFrame`` in tidy form, use the -:py:meth:`Dataset.to_dataframe() ` method: +:py:meth:`Dataset.to_dataframe()` method: .. 
ipython:: python @@ -61,11 +62,11 @@ use ``DataFrame`` methods like :py:meth:`~pandas.DataFrame.reset_index`, :py:meth:`~pandas.DataFrame.stack` and :py:meth:`~pandas.DataFrame.unstack`. For datasets containing dask arrays where the data should be lazily loaded, see the -:py:meth:`Dataset.to_dask_dataframe() ` method. +:py:meth:`Dataset.to_dask_dataframe()` method. To create a ``Dataset`` from a ``DataFrame``, use the -:py:meth:`~xarray.Dataset.from_dataframe` class method or the equivalent -:py:meth:`~pandas.DataFrame.to_xarray` method: +:py:meth:`Dataset.from_dataframe` class method or the equivalent +:py:meth:`pandas.DataFrame.to_xarray` method: .. ipython:: python @@ -83,7 +84,7 @@ DataArray and Series -------------------- ``DataArray`` objects have a complementary representation in terms of a -:py:class:`pandas.Series`. Using a Series preserves the ``Dataset`` to +:py:class:`~pandas.Series`. Using a Series preserves the ``Dataset`` to ``DataArray`` relationship, because ``DataFrames`` are dict-like containers of ``Series``. The methods are very similar to those for working with DataFrames: @@ -109,7 +110,7 @@ Multi-dimensional data Tidy data is great, but it sometimes you want to preserve dimensions instead of automatically stacking them into a ``MultiIndex``. -:py:meth:`DataArray.to_pandas() ` is a shortcut that +:py:meth:`DataArray.to_pandas()` is a shortcut that lets you convert a DataArray directly into a pandas object with the same dimensionality (i.e., a 1D array is converted to a :py:class:`~pandas.Series`, 2D to :py:class:`~pandas.DataFrame` and 3D to ``pandas.Panel``): @@ -122,7 +123,7 @@ dimensionality (i.e., a 1D array is converted to a :py:class:`~pandas.Series`, df To perform the inverse operation of converting any pandas objects into a data -array with the same shape, simply use the :py:class:`~xarray.DataArray` +array with the same shape, simply use the :py:class:`DataArray` constructor: .. ipython:: python @@ -143,7 +144,7 @@ preserve all use of multi-indexes: However, you will need to set dimension names explicitly, either with the ``dims`` argument on in the ``DataArray`` constructor or by calling -:py:class:`~xarray.Dataset.rename` on the new object. +:py:class:`~Dataset.rename` on the new object. .. _panel transition: diff --git a/doc/terminology.rst b/doc/terminology.rst index 2ba10fb71cb..ab6d856920a 100644 --- a/doc/terminology.rst +++ b/doc/terminology.rst @@ -1,9 +1,10 @@ +.. currentmodule:: xarray .. _terminology: Terminology =========== -*Xarray terminology differs slightly from CF, mathematical conventions, and pandas; and therefore using xarray, understanding the documentation, and parsing error messages is easier once key terminology is defined. This glossary was designed so that more fundamental concepts come first. Thus for new users, this page is best read top-to-bottom. Throughout the glossary,* ``arr`` *will refer to an xarray* :py:class:`~xarray.DataArray` *in any small examples. For more complete examples, please consult the relevant documentation.* +*Xarray terminology differs slightly from CF, mathematical conventions, and pandas; and therefore using xarray, understanding the documentation, and parsing error messages is easier once key terminology is defined. This glossary was designed so that more fundamental concepts come first. Thus for new users, this page is best read top-to-bottom. Throughout the glossary,* ``arr`` *will refer to an xarray* :py:class:`DataArray` *in any small examples. 
For more complete examples, please consult the relevant documentation.* ---- @@ -19,7 +20,7 @@ Terminology .. note:: - The :py:class:`~xarray.Variable` class is low-level interface and can typically be ignored. However, the word "variable" appears often enough in the code and documentation that is useful to understand. + The :py:class:`Variable` class is low-level interface and can typically be ignored. However, the word "variable" appears often enough in the code and documentation that is useful to understand. ---- From 0a5395be108c36ee3400d4d736b01f3da602d642 Mon Sep 17 00:00:00 2001 From: Keewis Date: Wed, 11 Dec 2019 00:30:29 +0100 Subject: [PATCH 14/30] add a new tutorial section --- doc/api.rst | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/doc/api.rst b/doc/api.rst index 02545c8df46..3f5cb38dea8 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -654,6 +654,14 @@ Faceting plot.FacetGrid.set_xlabels plot.FacetGrid.set_ylabels +Tutorial +======== + +.. autosummary:: + :toctree: generated/ + + tutorial.open_dataset + tutorial.load_dataset Testing ======= From 57e67fd47caa8f9bf35f1aa4652b36cc9461c695 Mon Sep 17 00:00:00 2001 From: Keewis Date: Wed, 11 Dec 2019 01:57:47 +0100 Subject: [PATCH 15/30] add show_versions and set_options --- doc/api.rst | 2 ++ xarray/util/print_versions.py | 7 +++++++ 2 files changed, 9 insertions(+) diff --git a/doc/api.rst b/doc/api.rst index 3f5cb38dea8..ecbf6cd3a31 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -31,6 +31,8 @@ Top-level functions ones_like dot map_blocks + show_versions + set_options Dataset ======= diff --git a/xarray/util/print_versions.py b/xarray/util/print_versions.py index 0d6d147f0bb..6a0e62cc9dc 100755 --- a/xarray/util/print_versions.py +++ b/xarray/util/print_versions.py @@ -78,6 +78,13 @@ def netcdf_and_hdf5_versions(): def show_versions(file=sys.stdout): + """ print the versions of xarray and its dependencies + + Parameters + ---------- + file : file-like, optional + print to the given file-like object. Defaults to sys.stdout. + """ sys_info = get_sys_info() try: From a46d4b3f7df081555b7b6d0aae181c969cc44145 Mon Sep 17 00:00:00 2001 From: Keewis Date: Wed, 11 Dec 2019 02:05:35 +0100 Subject: [PATCH 16/30] add FacetGrid to api.rst and update links --- doc/api.rst | 1 + doc/plotting.rst | 17 +++++++++-------- doc/whats-new.rst | 5 ++--- 3 files changed, 12 insertions(+), 11 deletions(-) diff --git a/doc/api.rst b/doc/api.rst index ecbf6cd3a31..2316fdbee7e 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -637,6 +637,7 @@ Plotting plot.imshow plot.line plot.pcolormesh + plot.FacetGrid Faceting -------- diff --git a/doc/plotting.rst b/doc/plotting.rst index 1168090e249..782aa065903 100644 --- a/doc/plotting.rst +++ b/doc/plotting.rst @@ -1,3 +1,4 @@ +.. currentmodule:: xarray .. _plotting: Plotting @@ -10,8 +11,8 @@ Labeled data enables expressive computations. These same labels can also be used to easily create informative plots. xarray's plotting capabilities are centered around -:py:class:`xarray.DataArray` objects. -To plot :py:class:`xarray.Dataset` objects +:py:class:`DataArray` objects. +To plot :py:class:`Dataset` objects simply access the relevant DataArrays, ie ``dset['var1']``. Dataset specific plotting routines are also available (see :ref:`plot-dataset`). Here we focus mostly on arrays 2d or larger. If your data fits @@ -94,7 +95,7 @@ One Dimension Simple Example ================ -The simplest way to make a plot is to call the :py:func:`xarray.DataArray.plot()` method. 
+The simplest way to make a plot is to call the :py:func:`DataArray.plot()` method. .. ipython:: python @@ -256,7 +257,7 @@ made using 1D data. The argument ``where`` defines where the steps should be placed, options are ``'pre'`` (default), ``'post'``, and ``'mid'``. This is particularly handy -when plotting data grouped with :py:func:`xarray.Dataset.groupby_bins`. +when plotting data grouped with :py:meth:`Dataset.groupby_bins`. .. ipython:: python @@ -295,7 +296,7 @@ Two Dimensions Simple Example ================ -The default method :py:meth:`xarray.DataArray.plot` calls :py:func:`xarray.plot.pcolormesh` by default when the data is two-dimensional. +The default method :py:meth:`DataArray.plot` calls :py:func:`xarray.plot.pcolormesh` by default when the data is two-dimensional. .. ipython:: python @@ -573,7 +574,7 @@ Faceted plotting supports other arguments common to xarray 2d plots. FacetGrid Objects =================== -The object returned, ``g`` in the above examples, is a :py:class:`~xarray.plot.FacetGrid`` object +The object returned, ``g`` in the above examples, is a :py:class:`~xarray.plot.FacetGrid` object that links a :py:class:`DataArray` to a matplotlib figure with a particular structure. This object can be used to control the behavior of the multiple plots. It borrows an API and code from `Seaborn's FacetGrid @@ -612,11 +613,11 @@ they have been plotted. plt.draw() -:py:class:`~xarray.FacetGrid` objects have methods that let you customize the automatically generated +:py:class:`~xarray.plot.FacetGrid` objects have methods that let you customize the automatically generated axis labels, axis ticks and plot titles. See :py:meth:`~xarray.plot.FacetGrid.set_titles`, :py:meth:`~xarray.plot.FacetGrid.set_xlabels`, :py:meth:`~xarray.plot.FacetGrid.set_ylabels` and :py:meth:`~xarray.plot.FacetGrid.set_ticks` for more information. -Plotting functions can be applied to each subset of the data by calling :py:meth:`~xarray.plot.FacetGrid.map_dataarray` or to each subplot by calling :py:meth:`FacetGrid.map`. +Plotting functions can be applied to each subset of the data by calling :py:meth:`~xarray.plot.FacetGrid.map_dataarray` or to each subplot by calling :py:meth:`~xarray.plot.FacetGrid.map`. TODO: add an example of using the ``map`` method to plot dataset variables (e.g., with ``plt.quiver``). diff --git a/doc/whats-new.rst b/doc/whats-new.rst index a4501427c66..9f4952459f0 100644 --- a/doc/whats-new.rst +++ b/doc/whats-new.rst @@ -152,7 +152,7 @@ New Features - xarray now respects the ``DataArray.encoding["coordinates"]`` attribute when writing to disk. See :ref:`io.coordinates` for more. (:issue:`3351`, :pull:`3487`) By `Deepak Cherian `_. -- Add the documented-but-missing :py:meth:`DatasetGroupBy.quantile`. +- Add the documented-but-missing :py:meth:`~core.groupby.DatasetGroupBy.quantile`. (:issue:`3525`, :pull:`3527`). By `Justus Magin `_. Bug fixes @@ -477,8 +477,7 @@ Enhancements - ``xarray.Dataset.drop`` now supports keyword arguments; dropping index labels by using both ``dim`` and ``labels`` or using a - :py:class:`~xarray.core.coordinates.DataArrayCoordinates` object are - deprecated (:issue:`2910`). + ``DataArrayCoordinates`` object are deprecated (:issue:`2910`). By `Gregory Gundersen `_. 
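As a rough illustration of the plotting behaviour documented in this file — one line per ``hue`` value, ``pcolormesh`` for two-dimensional data, and a ``FacetGrid`` for faceting — the sketch below assumes ``matplotlib`` is available and that the ``air_temperature`` tutorial dataset can be downloaded via ``xarray.tutorial.open_dataset``:

.. code:: python

    import matplotlib.pyplot as plt
    import xarray as xr

    air = xr.tutorial.open_dataset("air_temperature").air

    # 1D: one line per selected latitude, legend added automatically
    air.isel(lon=10, lat=[19, 21, 22]).plot.line(x="time", hue="lat")

    # 2D: plot() dispatches to pcolormesh for two-dimensional data
    plt.figure()
    air.isel(time=0).plot()

    # faceting returns a FacetGrid whose labels can be customised
    g = air.isel(time=slice(0, 4)).plot(col="time", col_wrap=2)
    g.set_xlabels("longitude")
    plt.show()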
- Added examples of :py:meth:`Dataset.set_index` and From 89032648b19cd6a8c9d999964bcbc401bfcc279b Mon Sep 17 00:00:00 2001 From: Keewis Date: Wed, 11 Dec 2019 02:17:54 +0100 Subject: [PATCH 17/30] use plot.line instead of plot.plot --- doc/plotting.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/plotting.rst b/doc/plotting.rst index 782aa065903..ea9816780a7 100644 --- a/doc/plotting.rst +++ b/doc/plotting.rst @@ -228,7 +228,7 @@ It is required to explicitly specify either Thus, we could have made the previous plot by specifying ``hue='lat'`` instead of ``x='time'``. If required, the automatic legend can be turned off using ``add_legend=False``. Alternatively, -``hue`` can be passed directly to :py:func:`xarray.plot.plot` as `air.isel(lon=10, lat=[19,21,22]).plot(hue='lat')`. +``hue`` can be passed directly to :py:func:`xarray.plot.line` as `air.isel(lon=10, lat=[19,21,22]).plot.line(hue='lat')`. ======================== From 610710fc4397103a2b31693334004af4a34794bb Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 16:02:57 +0100 Subject: [PATCH 18/30] add the CFTimeIndex properties to api-hidden.rst --- doc/api-hidden.rst | 38 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index b35b19cd02d..4c09601e04d 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -450,3 +450,41 @@ CFTimeIndex.value_counts CFTimeIndex.view CFTimeIndex.where + + CFTimeIndex.T + CFTimeIndex.array + CFTimeIndex.asi8 + CFTimeIndex.base + CFTimeIndex.data + CFTimeIndex.date_type + CFTimeIndex.day + CFTimeIndex.dayofweek + CFTimeIndex.dayofyear + CFTimeIndex.dtype + CFTimeIndex.dtype_str + CFTimeIndex.empty + CFTimeIndex.flags + CFTimeIndex.has_duplicates + CFTimeIndex.hasnans + CFTimeIndex.hour + CFTimeIndex.inferred_type + CFTimeIndex.is_all_dates + CFTimeIndex.is_monotonic + CFTimeIndex.is_monotonic_increasing + CFTimeIndex.is_monotonic_decreasing + CFTimeIndex.is_unique + CFTimeIndex.itemsize + CFTimeIndex.microsecond + CFTimeIndex.minute + CFTimeIndex.month + CFTimeIndex.name + CFTimeIndex.names + CFTimeIndex.nbytes + CFTimeIndex.ndim + CFTimeIndex.nlevels + CFTimeIndex.second + CFTimeIndex.shape + CFTimeIndex.size + CFTimeIndex.strides + CFTimeIndex.values + CFTimeIndex.year From 223d48d58506bc253883298eacc5b8de1d195ce3 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 16:03:30 +0100 Subject: [PATCH 19/30] add the backend objects' methods to api-hidden.rst --- doc/api-hidden.rst | 101 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 101 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 4c09601e04d..0a9861ac854 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -488,3 +488,104 @@ CFTimeIndex.strides CFTimeIndex.values CFTimeIndex.year + + backends.NetCDF4DataStore.close + backends.NetCDF4DataStore.encode + backends.NetCDF4DataStore.encode_attribute + backends.NetCDF4DataStore.encode_variable + backends.NetCDF4DataStore.get + backends.NetCDF4DataStore.get_attrs + backends.NetCDF4DataStore.get_dimensions + backends.NetCDF4DataStore.get_encoding + backends.NetCDF4DataStore.get_variables + backends.NetCDF4DataStore.items + backends.NetCDF4DataStore.keys + backends.NetCDF4DataStore.load + backends.NetCDF4DataStore.open + backends.NetCDF4DataStore.open_store_variable + backends.NetCDF4DataStore.prepare_variable + backends.NetCDF4DataStore.set_attribute + backends.NetCDF4DataStore.set_attributes + backends.NetCDF4DataStore.set_dimension + 
backends.NetCDF4DataStore.set_dimensions + backends.NetCDF4DataStore.set_variable + backends.NetCDF4DataStore.set_variables + backends.NetCDF4DataStore.store + backends.NetCDF4DataStore.store_dataset + backends.NetCDF4DataStore.sync + backends.NetCDF4DataStore.values + + backends.H5NetCDFStore.close + backends.H5NetCDFStore.encode + backends.H5NetCDFStore.encode_attribute + backends.H5NetCDFStore.encode_variable + backends.H5NetCDFStore.get + backends.H5NetCDFStore.get_attrs + backends.H5NetCDFStore.get_dimensions + backends.H5NetCDFStore.get_encoding + backends.H5NetCDFStore.get_variables + backends.H5NetCDFStore.items + backends.H5NetCDFStore.keys + backends.H5NetCDFStore.load + backends.H5NetCDFStore.open_store_variable + backends.H5NetCDFStore.prepare_variable + backends.H5NetCDFStore.set_attribute + backends.H5NetCDFStore.set_attributes + backends.H5NetCDFStore.set_dimension + backends.H5NetCDFStore.set_dimensions + backends.H5NetCDFStore.set_variable + backends.H5NetCDFStore.set_variables + backends.H5NetCDFStore.store + backends.H5NetCDFStore.store_dataset + backends.H5NetCDFStore.sync + backends.H5NetCDFStore.values + + backends.PydapDataStore.close + backends.PydapDataStore.get + backends.PydapDataStore.get_attrs + backends.PydapDataStore.get_dimensions + backends.PydapDataStore.get_encoding + backends.PydapDataStore.get_variables + backends.PydapDataStore.items + backends.PydapDataStore.keys + backends.PydapDataStore.load + backends.PydapDataStore.open + backends.PydapDataStore.open_store_variable + backends.PydapDataStore.values + + backends.ScipyDataStore.close + backends.ScipyDataStore.encode + backends.ScipyDataStore.encode_attribute + backends.ScipyDataStore.encode_variable + backends.ScipyDataStore.get + backends.ScipyDataStore.get_attrs + backends.ScipyDataStore.get_dimensions + backends.ScipyDataStore.get_encoding + backends.ScipyDataStore.get_variables + backends.ScipyDataStore.items + backends.ScipyDataStore.keys + backends.ScipyDataStore.load + backends.ScipyDataStore.open_store_variable + backends.ScipyDataStore.prepare_variable + backends.ScipyDataStore.set_attribute + backends.ScipyDataStore.set_attributes + backends.ScipyDataStore.set_dimension + backends.ScipyDataStore.set_dimensions + backends.ScipyDataStore.set_variable + backends.ScipyDataStore.set_variables + backends.ScipyDataStore.store + backends.ScipyDataStore.store_dataset + backends.ScipyDataStore.sync + backends.ScipyDataStore.values + + backends.FileManager.acquire + backends.FileManager.acquire_context + backends.FileManager.close + + backends.CachingFileManager.acquire + backends.CachingFileManager.acquire_context + backends.CachingFileManager.close + + backends.DummyFileManager.acquire + backends.DummyFileManager.acquire_context + backends.DummyFileManager.close From 47a55047af4c89e020741e80e91ddc9e11adc102 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 16:04:03 +0100 Subject: [PATCH 20/30] add missing dict methods to api.rst --- doc/api.rst | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doc/api.rst b/doc/api.rst index 2316fdbee7e..57b3e8f73d0 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -76,7 +76,9 @@ and values given by ``DataArray`` objects. 
Dataset.__setitem__ Dataset.__delitem__ Dataset.update + Dataset.get Dataset.items + Dataset.keys Dataset.values Dataset contents From 78eda590e2ea0065c60119ae292af5d4f625e88f Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 16:04:29 +0100 Subject: [PATCH 21/30] add the coordinates objects to api.rst --- doc/api-hidden.rst | 24 ++++++++++++++++++++++++ doc/api.rst | 9 +++++++++ doc/whats-new.rst | 2 +- 3 files changed, 34 insertions(+), 1 deletion(-) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 0a9861ac854..5f334f89062 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -27,6 +27,18 @@ Dataset.std Dataset.var + core.coordinates.DatasetCoordinates.get + core.coordinates.DatasetCoordinates.items + core.coordinates.DatasetCoordinates.keys + core.coordinates.DatasetCoordinates.merge + core.coordinates.DatasetCoordinates.to_dataset + core.coordinates.DatasetCoordinates.to_index + core.coordinates.DatasetCoordinates.update + core.coordinates.DatasetCoordinates.values + core.coordinates.DatasetCoordinates.dims + core.coordinates.DatasetCoordinates.indexes + core.coordinates.DatasetCoordinates.variables + core.rolling.DatasetCoarsen.all core.rolling.DatasetCoarsen.any core.rolling.DatasetCoarsen.argmax @@ -136,6 +148,18 @@ DataArray.std DataArray.var + core.coordinates.DataArrayCoordinates.get + core.coordinates.DataArrayCoordinates.items + core.coordinates.DataArrayCoordinates.keys + core.coordinates.DataArrayCoordinates.merge + core.coordinates.DataArrayCoordinates.to_dataset + core.coordinates.DataArrayCoordinates.to_index + core.coordinates.DataArrayCoordinates.update + core.coordinates.DataArrayCoordinates.values + core.coordinates.DataArrayCoordinates.dims + core.coordinates.DataArrayCoordinates.indexes + core.coordinates.DataArrayCoordinates.variables + core.rolling.DataArrayCoarsen.all core.rolling.DataArrayCoarsen.any core.rolling.DataArrayCoarsen.argmax diff --git a/doc/api.rst b/doc/api.rst index 57b3e8f73d0..a8ac994560b 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -541,6 +541,15 @@ DataArray methods DataArray.unify_chunks DataArray.map_blocks +Coordinates objects +=================== + +.. autosummary:: + :toctree: generated/ + + core.coordinates.DataArrayCoordinates + core.coordinates.DatasetCoordinates + GroupBy objects =============== diff --git a/doc/whats-new.rst b/doc/whats-new.rst index 9f4952459f0..84637604f27 100644 --- a/doc/whats-new.rst +++ b/doc/whats-new.rst @@ -477,7 +477,7 @@ Enhancements - ``xarray.Dataset.drop`` now supports keyword arguments; dropping index labels by using both ``dim`` and ``labels`` or using a - ``DataArrayCoordinates`` object are deprecated (:issue:`2910`). + :py:class:`~core.coordinates.DataArrayCoordinates` object are deprecated (:issue:`2910`). By `Gregory Gundersen `_. 
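A short sketch of the mapping-style methods listed above and of the coordinates objects they sit alongside; the dataset built here is invented purely for illustration:

.. code:: python

    import numpy as np
    import xarray as xr

    ds = xr.Dataset(
        {"t2m": (("time", "station"), np.random.rand(3, 2))},
        coords={"time": [0, 1, 2], "station": ["a", "b"]},
    )

    # dictionary-style access on the Dataset itself
    list(ds.keys())   # data variable names
    ds.get("t2m")     # like dict.get, returns None for missing keys
    dict(ds.items())

    # ds.coords is a DatasetCoordinates object with a similar interface
    list(ds.coords.keys())
    ds.coords["time"]
    ds.coords.to_index()    # pandas index over the dimension coordinates
    ds.coords.to_dataset()  # promote the coordinates to their own Dataset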
- Added examples of :py:meth:`Dataset.set_index` and From 78a8d8310ecc26ade6ea116e6a8a09c49240f885 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 16:22:26 +0100 Subject: [PATCH 22/30] add the data store properties to api-hidden.rst --- doc/api-hidden.rst | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 5f334f89062..e553fbc3571 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -538,6 +538,14 @@ backends.NetCDF4DataStore.store_dataset backends.NetCDF4DataStore.sync backends.NetCDF4DataStore.values + backends.NetCDF4DataStore.attrs + backends.NetCDF4DataStore.autoclose + backends.NetCDF4DataStore.dimensions + backends.NetCDF4DataStore.ds + backends.NetCDF4DataStore.format + backends.NetCDF4DataStore.is_remote + backends.NetCDF4DataStore.lock + backends.NetCDF4DataStore.variables backends.H5NetCDFStore.close backends.H5NetCDFStore.encode @@ -563,6 +571,10 @@ backends.H5NetCDFStore.store_dataset backends.H5NetCDFStore.sync backends.H5NetCDFStore.values + backends.H5NetCDFStore.attrs + backends.H5NetCDFStore.dimensions + backends.H5NetCDFStore.ds + backends.H5NetCDFStore.variables backends.PydapDataStore.close backends.PydapDataStore.get @@ -576,6 +588,9 @@ backends.PydapDataStore.open backends.PydapDataStore.open_store_variable backends.PydapDataStore.values + backends.PydapDataStore.attrs + backends.PydapDataStore.dimensions + backends.PydapDataStore.variables backends.ScipyDataStore.close backends.ScipyDataStore.encode @@ -601,6 +616,10 @@ backends.ScipyDataStore.store_dataset backends.ScipyDataStore.sync backends.ScipyDataStore.values + backends.ScipyDataStore.attrs + backends.ScipyDataStore.dimensions + backends.ScipyDataStore.ds + backends.ScipyDataStore.variables backends.FileManager.acquire backends.FileManager.acquire_context From 7fc929112c8b3bb49be1101d66aa9a1e9e9fd9c1 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 16:31:41 +0100 Subject: [PATCH 23/30] add IndexVariable methods and properties to api-hidden.rst --- doc/api-hidden.rst | 76 +++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 75 insertions(+), 1 deletion(-) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index e553fbc3571..50b4e73911f 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -301,7 +301,6 @@ Variable.unstack Variable.var Variable.where - Variable.T Variable.attrs Variable.chunks @@ -318,6 +317,81 @@ Variable.sizes Variable.values + IndexVariable.all + IndexVariable.any + IndexVariable.argmax + IndexVariable.argmin + IndexVariable.argsort + IndexVariable.astype + IndexVariable.broadcast_equals + IndexVariable.chunk + IndexVariable.clip + IndexVariable.coarsen + IndexVariable.compute + IndexVariable.concat + IndexVariable.conj + IndexVariable.conjugate + IndexVariable.copy + IndexVariable.count + IndexVariable.cumprod + IndexVariable.cumsum + IndexVariable.equals + IndexVariable.fillna + IndexVariable.get_axis_num + IndexVariable.identical + IndexVariable.isel + IndexVariable.isnull + IndexVariable.item + IndexVariable.load + IndexVariable.max + IndexVariable.mean + IndexVariable.median + IndexVariable.min + IndexVariable.no_conflicts + IndexVariable.notnull + IndexVariable.pad_with_fill_value + IndexVariable.prod + IndexVariable.quantile + IndexVariable.rank + IndexVariable.reduce + IndexVariable.roll + IndexVariable.rolling_window + IndexVariable.round + IndexVariable.searchsorted + IndexVariable.set_dims + IndexVariable.shift + IndexVariable.squeeze + IndexVariable.stack + 
IndexVariable.std + IndexVariable.sum + IndexVariable.to_base_variable + IndexVariable.to_coord + IndexVariable.to_dict + IndexVariable.to_index + IndexVariable.to_index_variable + IndexVariable.to_variable + IndexVariable.transpose + IndexVariable.unstack + IndexVariable.var + IndexVariable.where + IndexVariable.T + IndexVariable.attrs + IndexVariable.chunks + IndexVariable.data + IndexVariable.dims + IndexVariable.dtype + IndexVariable.encoding + IndexVariable.imag + IndexVariable.level_names + IndexVariable.name + IndexVariable.nbytes + IndexVariable.ndim + IndexVariable.real + IndexVariable.shape + IndexVariable.size + IndexVariable.sizes + IndexVariable.values + ufuncs.angle ufuncs.arccos ufuncs.arccosh From 51202bc09b0ef900df6a2db3f97809b34914fc6f Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 16:41:29 +0100 Subject: [PATCH 24/30] add properties for *Coarsen, *GroupBy, *Resample and *Rolling to api-hidden.rst --- doc/api-hidden.rst | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 50b4e73911f..2e9d6a4bc80 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -52,6 +52,12 @@ core.rolling.DatasetCoarsen.std core.rolling.DatasetCoarsen.sum core.rolling.DatasetCoarsen.var + core.rolling.DatasetCoarsen.boundary + core.rolling.DatasetCoarsen.coord_func + core.rolling.DatasetCoarsen.obj + core.rolling.DatasetCoarsen.side + core.rolling.DatasetCoarsen.trim_excess + core.rolling.DatasetCoarsen.windows core.groupby.DatasetGroupBy.assign core.groupby.DatasetGroupBy.assign_coords @@ -73,6 +79,8 @@ core.groupby.DatasetGroupBy.std core.groupby.DatasetGroupBy.sum core.groupby.DatasetGroupBy.var + core.groupby.DatasetGroupBy.dims + core.groupby.DatasetGroupBy.groups core.resample.DatasetResample.all core.resample.DatasetResample.any @@ -99,6 +107,8 @@ core.resample.DatasetResample.sum core.resample.DatasetResample.var core.resample.DatasetResample.where + core.resample.DatasetResample.dims + core.resample.DatasetResample.groups core.rolling.DatasetRolling.argmax core.rolling.DatasetRolling.argmin @@ -111,6 +121,12 @@ core.rolling.DatasetRolling.std core.rolling.DatasetRolling.sum core.rolling.DatasetRolling.var + core.rolling.DatasetRolling.center + core.rolling.DatasetRolling.dim + core.rolling.DatasetRolling.min_periods + core.rolling.DatasetRolling.obj + core.rolling.DatasetRolling.rollings + core.rolling.DatasetRolling.window Dataset.argsort Dataset.astype @@ -173,6 +189,12 @@ core.rolling.DataArrayCoarsen.std core.rolling.DataArrayCoarsen.sum core.rolling.DataArrayCoarsen.var + core.rolling.DataArrayCoarsen.boundary + core.rolling.DataArrayCoarsen.coord_func + core.rolling.DataArrayCoarsen.obj + core.rolling.DataArrayCoarsen.side + core.rolling.DataArrayCoarsen.trim_excess + core.rolling.DataArrayCoarsen.windows core.groupby.DataArrayGroupBy.assign_coords core.groupby.DataArrayGroupBy.first @@ -193,6 +215,8 @@ core.groupby.DataArrayGroupBy.std core.groupby.DataArrayGroupBy.sum core.groupby.DataArrayGroupBy.var + core.groupby.DataArrayGroupBy.dims + core.groupby.DataArrayGroupBy.groups core.resample.DataArrayResample.all core.resample.DataArrayResample.any @@ -218,6 +242,8 @@ core.resample.DataArrayResample.sum core.resample.DataArrayResample.var core.resample.DataArrayResample.where + core.resample.DataArrayResample.dims + core.resample.DataArrayResample.groups core.rolling.DataArrayRolling.argmax core.rolling.DataArrayRolling.argmin @@ -230,6 +256,12 @@ core.rolling.DataArrayRolling.std 
core.rolling.DataArrayRolling.sum core.rolling.DataArrayRolling.var + core.rolling.DataArrayRolling.center + core.rolling.DataArrayRolling.dim + core.rolling.DataArrayRolling.min_periods + core.rolling.DataArrayRolling.obj + core.rolling.DataArrayRolling.window + core.rolling.DataArrayRolling.window_labels DataArray.argsort DataArray.clip From 3e137112559e7396ac84006635adae00f0379b9a Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 17:09:35 +0100 Subject: [PATCH 25/30] add IndexVariable.get_level_variable to api-hidden.rst --- doc/api-hidden.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 2e9d6a4bc80..35afdb23239 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -370,6 +370,7 @@ IndexVariable.equals IndexVariable.fillna IndexVariable.get_axis_num + IndexVariable.get_level_variable IndexVariable.identical IndexVariable.isel IndexVariable.isnull From f0893a095c85ab04d9c41f0dee787a08fc4a603e Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 17:09:54 +0100 Subject: [PATCH 26/30] add the accessor methods / properties to api-hidden.rst --- doc/api-hidden.rst | 66 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 66 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 35afdb23239..9be814eb9b4 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -276,6 +276,72 @@ DataArray.cumprod DataArray.rank + core.accessor_dt.DatetimeAccessor.ceil + core.accessor_dt.DatetimeAccessor.floor + core.accessor_dt.DatetimeAccessor.round + core.accessor_dt.DatetimeAccessor.strftime + core.accessor_dt.DatetimeAccessor.day + core.accessor_dt.DatetimeAccessor.dayofweek + core.accessor_dt.DatetimeAccessor.dayofyear + core.accessor_dt.DatetimeAccessor.days_in_month + core.accessor_dt.DatetimeAccessor.daysinmonth + core.accessor_dt.DatetimeAccessor.hour + core.accessor_dt.DatetimeAccessor.microsecond + core.accessor_dt.DatetimeAccessor.minute + core.accessor_dt.DatetimeAccessor.month + core.accessor_dt.DatetimeAccessor.nanosecond + core.accessor_dt.DatetimeAccessor.quarter + core.accessor_dt.DatetimeAccessor.season + core.accessor_dt.DatetimeAccessor.second + core.accessor_dt.DatetimeAccessor.time + core.accessor_dt.DatetimeAccessor.week + core.accessor_dt.DatetimeAccessor.weekday + core.accessor_dt.DatetimeAccessor.weekday_name + core.accessor_dt.DatetimeAccessor.weekofyear + core.accessor_dt.DatetimeAccessor.year + + core.accessor_str.StringAccessor.capitalize + core.accessor_str.StringAccessor.center + core.accessor_str.StringAccessor.contains + core.accessor_str.StringAccessor.count + core.accessor_str.StringAccessor.decode + core.accessor_str.StringAccessor.encode + core.accessor_str.StringAccessor.endswith + core.accessor_str.StringAccessor.find + core.accessor_str.StringAccessor.get + core.accessor_str.StringAccessor.index + core.accessor_str.StringAccessor.isalnum + core.accessor_str.StringAccessor.isalpha + core.accessor_str.StringAccessor.isdecimal + core.accessor_str.StringAccessor.isdigit + core.accessor_str.StringAccessor.islower + core.accessor_str.StringAccessor.isnumeric + core.accessor_str.StringAccessor.isspace + core.accessor_str.StringAccessor.istitle + core.accessor_str.StringAccessor.isupper + core.accessor_str.StringAccessor.len + core.accessor_str.StringAccessor.ljust + core.accessor_str.StringAccessor.lower + core.accessor_str.StringAccessor.lstrip + core.accessor_str.StringAccessor.match + core.accessor_str.StringAccessor.pad + core.accessor_str.StringAccessor.repeat + 
core.accessor_str.StringAccessor.replace + core.accessor_str.StringAccessor.rfind + core.accessor_str.StringAccessor.rindex + core.accessor_str.StringAccessor.rjust + core.accessor_str.StringAccessor.rstrip + core.accessor_str.StringAccessor.slice + core.accessor_str.StringAccessor.slice_replace + core.accessor_str.StringAccessor.startswith + core.accessor_str.StringAccessor.strip + core.accessor_str.StringAccessor.swapcase + core.accessor_str.StringAccessor.title + core.accessor_str.StringAccessor.translate + core.accessor_str.StringAccessor.upper + core.accessor_str.StringAccessor.wrap + core.accessor_str.StringAccessor.zfill + Variable.all Variable.any Variable.argmax From 578350efa7fedb6957acca85a0d7c8523ce148e0 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 17:28:30 +0100 Subject: [PATCH 27/30] add the RollingExp method to api-hidden.rst --- doc/api-hidden.rst | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 9be814eb9b4..4e54a1ea2ce 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -128,6 +128,8 @@ core.rolling.DatasetRolling.rollings core.rolling.DatasetRolling.window + core.rolling_exp.RollingExp.mean + Dataset.argsort Dataset.astype Dataset.clip From 65752fd67f250f8f86ae3c264b551ac542487708 Mon Sep 17 00:00:00 2001 From: Keewis Date: Sun, 15 Dec 2019 18:25:19 +0100 Subject: [PATCH 28/30] fix the docstring of StringAccessor.replace --- xarray/core/accessor_str.py | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/xarray/core/accessor_str.py b/xarray/core/accessor_str.py index 8838e71e6ca..6a975b948eb 100644 --- a/xarray/core/accessor_str.py +++ b/xarray/core/accessor_str.py @@ -854,12 +854,10 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=True): ---------- pat : string or compiled regex String can be a character sequence or regular expression. - repl : string or callable Replacement string or a callable. The callable is passed the regex match object and must return a replacement string to be used. See :func:`re.sub`. - n : int, default -1 (all) Number of replacements to make from start case : boolean, default None @@ -873,7 +871,7 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=True): - If True, assumes the passed-in pattern is a regular expression. - If False, treats the pattern as a literal string - Cannot be set to False if `pat` is a compiled regex or `repl` is - a callable. + a callable. Returns ------- From 216fc6c3f2fa4c0fc678414bf9c197cc8b421697 Mon Sep 17 00:00:00 2001 From: Keewis Date: Tue, 17 Dec 2019 16:43:45 +0100 Subject: [PATCH 29/30] mention load_store instead of from_store and generate a page for dump_to_store --- doc/api-hidden.rst | 1 + doc/api.rst | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst index 4e54a1ea2ce..37170cb5b1e 100644 --- a/doc/api-hidden.rst +++ b/doc/api-hidden.rst @@ -141,6 +141,7 @@ Dataset.cumsum Dataset.cumprod Dataset.rank + Dataset.dump_to_store DataArray.ndim DataArray.nbytes diff --git a/doc/api.rst b/doc/api.rst index a8ac994560b..d3491e020fd 100644 --- a/doc/api.rst +++ b/doc/api.rst @@ -713,7 +713,7 @@ Advanced API These backends provide a low-level interface for lazily loading data from external file-formats or protocols, and can be manually invoked to create -arguments for the ``from_store`` and ``dump_to_store`` Dataset methods: +arguments for the ``load_store`` and ``dump_to_store`` Dataset methods: .. 
autosummary::
   :toctree: generated/

From c1aacb9bc94d08f91de5469a8639b978f84817f8 Mon Sep 17 00:00:00 2001
From: Keewis
Date: Tue, 17 Dec 2019 17:23:55 +0100
Subject: [PATCH 30/30] also add load_store

---
 doc/api-hidden.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/api-hidden.rst b/doc/api-hidden.rst
index 37170cb5b1e..c117b0f4fc7 100644
--- a/doc/api-hidden.rst
+++ b/doc/api-hidden.rst
@@ -141,6 +141,8 @@
   Dataset.cumsum
   Dataset.cumprod
   Dataset.rank
+
+   Dataset.load_store
   Dataset.dump_to_store

   DataArray.ndim
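To make the ``load_store`` / ``dump_to_store`` pair referenced above concrete, here is a hedged sketch of the low-level store workflow; it assumes ``scipy`` is installed (so that ``ScipyDataStore`` is usable) and uses an arbitrary file name — for everyday work the high-level ``to_netcdf`` / ``open_dataset`` functions remain the recommended path:

.. code:: python

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({"x": ("dim", np.arange(3))})

    # dump_to_store writes into an already-open, writeable data store
    store = xr.backends.ScipyDataStore("example.nc", mode="w")
    ds.dump_to_store(store)
    store.close()

    # load_store is the inverse: build a Dataset from an open store
    store = xr.backends.ScipyDataStore("example.nc", mode="r")
    restored = xr.Dataset.load_store(store).load()
    store.close()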
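Similarly, a small assumed example of the ``pat``, ``repl`` and ``regex`` parameters described in the ``StringAccessor.replace`` docstring touched in PATCH 28, applied to an invented array of strings:

.. code:: python

    import xarray as xr

    names = xr.DataArray(["air_temp", "sea_temp", "soil_temp"], dims="sensor")

    names.str.replace("_temp", "")             # literal replacement
    names.str.replace(r"^s", "S", regex=True)  # regular-expression replacement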