Silence sphinx warnings #3516

Merged
merged 26 commits into from
Nov 19, 2019
Merged
Changes from 21 commits
26 commits
c9b53bd
silence sphinx warnings
keewis Nov 12, 2019
5d9d263
silence more sphinx warnings
keewis Nov 12, 2019
fb559c0
fix some references
keewis Nov 12, 2019
192eebd
fix the docstrings of Dataset reduce methods
keewis Nov 12, 2019
4b93534
mark the orphaned files as such
keewis Nov 13, 2019
9b24219
silence some nit-picky warnings
keewis Nov 14, 2019
65fc7c3
Merge branch 'master' into silence-sphinx-warnings
dcherian Nov 15, 2019
e452482
Merge branch 'master' into silence-sphinx-warnings
keewis Nov 17, 2019
939c60d
convert all references to xray to double backtick quoted text
keewis Nov 17, 2019
1170910
silence more warnings in whats-new.rst
keewis Nov 17, 2019
4f8d0f1
require a whatsnew format of Name <https://github.com/user>
keewis Nov 17, 2019
7c4211e
rename the second cf conventions link
keewis Nov 17, 2019
cee59e6
silence more sphinx warnings
keewis Nov 17, 2019
562567b
get interpolate_na docstrings in sync with master
keewis Nov 18, 2019
6e223ea
fix sphinx warnings for interpolate_na docstrings
keewis Nov 18, 2019
c8559e4
update references to old documentation sections
keewis Nov 18, 2019
f2cf661
cut the link to h5netcdf.File
keewis Nov 18, 2019
58d243f
use the correct reference types for numpy
keewis Nov 18, 2019
0112190
update the reference to atop (dask renamed it to blockwise)
keewis Nov 18, 2019
935f68c
rewrite numpy docstrings
keewis Nov 18, 2019
c61d19b
guard against non-str documentation
keewis Nov 18, 2019
bdc8594
pass name to skip_signature
keewis Nov 18, 2019
a6453d2
remove links to pandas.Panel
keewis Nov 18, 2019
a220c7a
convince sphinx to create pages astype and groupby().quantile
keewis Nov 18, 2019
17fe69d
more warnings
keewis Nov 19, 2019
20dbc51
Merge branch 'master' into silence-sphinx-warnings
dcherian Nov 19, 2019
2 changes: 2 additions & 0 deletions doc/README.rst
@@ -1,3 +1,5 @@
:orphan:

xarray
------

2 changes: 2 additions & 0 deletions doc/api-hidden.rst
@@ -2,6 +2,8 @@
.. This extra page is a work around for sphinx not having any support for
.. hiding an autosummary table.

:orphan:

.. currentmodule:: xarray

.. autosummary::
6 changes: 3 additions & 3 deletions doc/combining.rst
@@ -255,11 +255,11 @@ Combining along multiple dimensions
``combine_nested``.

For combining many objects along multiple dimensions xarray provides
:py:func:`~xarray.combine_nested`` and :py:func:`~xarray.combine_by_coords`. These
:py:func:`~xarray.combine_nested` and :py:func:`~xarray.combine_by_coords`. These
functions use a combination of ``concat`` and ``merge`` across different
variables to combine many objects into one.

:py:func:`~xarray.combine_nested`` requires specifying the order in which the
:py:func:`~xarray.combine_nested` requires specifying the order in which the
objects should be combined, while :py:func:`~xarray.combine_by_coords` attempts to
infer this ordering automatically from the coordinates in the data.

@@ -310,4 +310,4 @@ These functions can be used by :py:func:`~xarray.open_mfdataset` to open many
files as one dataset. The particular function used is specified by setting the
argument ``'combine'`` to ``'by_coords'`` or ``'nested'``. This is useful for
situations where your data is split across many files in multiple locations,
which have some known relationship between one another.
which have some known relationship between one another.
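
To see the documented distinction in action, here is a minimal sketch with invented data (not part of the diff):

import numpy as np
import xarray as xr

ds0 = xr.Dataset({"temperature": ("x", np.zeros(3))}, coords={"x": [0, 1, 2]})
ds1 = xr.Dataset({"temperature": ("x", np.ones(3))}, coords={"x": [3, 4, 5]})

# combine_by_coords infers the concatenation order from the "x" coordinate,
# so the inputs can be passed in any order.
by_coords = xr.combine_by_coords([ds1, ds0])

# combine_nested requires the order (and the dimension) to be spelled out.
nested = xr.combine_nested([ds0, ds1], concat_dim="x")
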
6 changes: 3 additions & 3 deletions doc/computation.rst
@@ -325,8 +325,8 @@ Broadcasting by dimension name
``DataArray`` objects are automatically align themselves ("broadcasting" in
the numpy parlance) by dimension name instead of axis order. With xarray, you
do not need to transpose arrays or insert dimensions of length 1 to get array
operations to work, as commonly done in numpy with :py:func:`np.reshape` or
:py:const:`np.newaxis`.
operations to work, as commonly done in numpy with :py:func:`numpy.reshape` or
:py:data:`numpy.newaxis`.

This is best illustrated by a few examples. Consider two one-dimensional
arrays with different sizes aligned along different dimensions:
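
The example elided from this hunk is along these lines (a sketch, not the documentation's exact code):

import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(3), dims="x")
b = xr.DataArray(np.arange(4), dims="y")

# No numpy.reshape or numpy.newaxis needed: broadcasting is by dimension
# name, so the product has dims ("x", "y") and shape (3, 4).
product = a * b
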
@@ -566,7 +566,7 @@ to set ``axis=-1``. As an example, here is how we would wrap

Because ``apply_ufunc`` follows a standard convention for ufuncs, it plays
nicely with tools for building vectorized functions, like
:func:`numpy.broadcast_arrays` and :func:`numpy.vectorize`. For high performance
:py:func:`numpy.broadcast_arrays` and :py:class:`numpy.vectorize`. For high performance
needs, consider using Numba's :doc:`vectorize and guvectorize <numba:user/vectorize>`.
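
As a sketch of the pattern being described (invented function and data):

import numpy as np
import xarray as xr

def magnitude(u, v):
    return np.sqrt(u ** 2 + v ** 2)

x = xr.DataArray([3.0, 6.0], dims="point")
y = xr.DataArray([4.0, 8.0], dims="point")

# apply_ufunc aligns and broadcasts by dimension name, then calls the
# NumPy-level function on the underlying arrays.
result = xr.apply_ufunc(magnitude, x, y)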

In addition to wrapping functions, ``apply_ufunc`` can automatically parallelize
2 changes: 1 addition & 1 deletion doc/dask.rst
@@ -285,7 +285,7 @@ automate `embarrassingly parallel
<https://en.wikipedia.org/wiki/Embarrassingly_parallel>`__ "map" type operations
where a function written for processing NumPy arrays should be repeatedly
applied to xarray objects containing Dask arrays. It works similarly to
:py:func:`dask.array.map_blocks` and :py:func:`dask.array.atop`, but without
:py:func:`dask.array.map_blocks` and :py:func:`dask.array.blockwise`, but without
requiring an intermediate layer of abstraction.
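
A minimal sketch of the pattern, assuming the xarray.map_blocks API this section documents:

import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(10.0))}).chunk({"x": 5})

def demean(block):
    # Receives a plain NumPy-backed xarray object, one chunk at a time.
    return block - block.mean()

result = xr.map_blocks(demean, ds)
result.compute()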

For the best performance when using Dask's multi-threaded scheduler, wrap a
240 changes: 119 additions & 121 deletions doc/whats-new.rst

Large diffs are not rendered by default.

8 changes: 5 additions & 3 deletions xarray/backends/api.py
@@ -729,13 +729,13 @@ def open_mfdataset(
``combine_by_coords`` and ``combine_nested``. By default the old (now deprecated)
``auto_combine`` will be used, please specify either ``combine='by_coords'`` or
``combine='nested'`` in future. Requires dask to be installed. See documentation for
details on dask [1]. Attributes from the first dataset file are used for the
details on dask [1]_. Attributes from the first dataset file are used for the
combined dataset.

Parameters
----------
paths : str or sequence
Either a string glob in the form "path/to/my/files/*.nc" or an explicit list of
Either a string glob in the form ``"path/to/my/files/*.nc"`` or an explicit list of
files to open. Paths can be given as strings or as pathlib Paths. If
concatenation along more than one dimension is desired, then ``paths`` must be a
nested list-of-lists (see ``manual_combine`` for details). (A string glob will
@@ -745,7 +745,7 @@ def open_mfdataset(
In general, these should divide the dimensions of each dataset. If int, chunk
each dimension by ``chunks``. By default, chunks will be chosen to load entire
input files into memory at once. This has a major impact on performance: please
see the full documentation for more details [2].
see the full documentation for more details [2]_.
concat_dim : str, or list of str, DataArray, Index or None, optional
Dimensions to concatenate files along. You only need to provide this argument
if any of the dimensions along which you want to concatenate is not a dimension
@@ -761,6 +761,7 @@
'no_conflicts', 'override'}, optional
String indicating how to compare variables of the same name for
potential conflicts when merging:

* 'broadcast_equals': all values must be equal when variables are
broadcast against each other to ensure common dimensions.
* 'equals': all values and dimensions must be the same.
@@ -770,6 +771,7 @@
must be equal. The returned dataset then contains the combination
of all non-null values.
* 'override': skip comparing and pick variable from first dataset

preprocess : callable, optional
If provided, call this function on each dataset prior to concatenation.
You can find the file-name from which each dataset was loaded in
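
A typical call matching this docstring, with a hypothetical glob pattern:

import xarray as xr

# combine must now be given explicitly, since auto_combine is deprecated.
ds = xr.open_mfdataset("path/to/my/files/*.nc", combine="by_coords")
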
2 changes: 1 addition & 1 deletion xarray/core/alignment.py
@@ -108,7 +108,7 @@ def align(

Returns
-------
aligned : same as *objects
aligned : same as `*objects`
Tuple of objects with aligned coordinates.

Raises
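
A minimal sketch of the documented return behaviour (invented data):

import xarray as xr

a = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 1, 2]})
b = xr.DataArray([10, 20, 30], dims="x", coords={"x": [1, 2, 3]})

# A tuple of new objects with the same types as the inputs, aligned on "x".
a2, b2 = xr.align(a, b, join="inner")
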
2 changes: 2 additions & 0 deletions xarray/core/combine.py
@@ -531,6 +531,7 @@ def combine_by_coords(
* 'all': All data variables will be concatenated.
* list of str: The listed data variables will be concatenated, in
addition to the 'minimal' data variables.

If objects are DataArrays, `data_vars` must be 'all'.
coords : {'minimal', 'different', 'all' or list of str}, optional
As per the 'data_vars' kwarg, but for coordinate variables.
@@ -747,6 +748,7 @@ def auto_combine(
'no_conflicts', 'override'}, optional
String indicating how to compare variables of the same name for
potential conflicts:

- 'broadcast_equals': all values must be equal when variables are
broadcast against each other to ensure common dimensions.
- 'equals': all values and dimensions must be the same.
26 changes: 18 additions & 8 deletions xarray/core/common.py
@@ -91,15 +91,23 @@ def wrapped_func(self, dim=None, **kwargs):  # type: ignore

return wrapped_func

_reduce_extra_args_docstring = """dim : str or sequence of str, optional
_reduce_extra_args_docstring = dedent(
"""
dim : str or sequence of str, optional
Dimension(s) over which to apply `{name}`. By default `{name}` is
applied over all dimensions."""
applied over all dimensions.
"""
).strip()

_cum_extra_args_docstring = """dim : str or sequence of str, optional
_cum_extra_args_docstring = dedent(
"""
dim : str or sequence of str, optional
Dimension over which to apply `{name}`.
axis : int or sequence of int, optional
Axis over which to apply `{name}`. Only one of the 'dim'
and 'axis' arguments can be supplied."""
and 'axis' arguments can be supplied.
"""
).strip()


class AbstractArray(ImplementsArrayReduce):
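
Wrapping the templates in dedent(...).strip() keeps the interpolated docstrings at a consistent indentation, which is presumably what stops sphinx/numpydoc from warning here. A sketch of how such a template is consumed (the format call is illustrative, not the exact call site):

from textwrap import dedent

_reduce_extra_args_docstring = dedent(
    """
    dim : str or sequence of str, optional
        Dimension(s) over which to apply `{name}`. By default `{name}` is
        applied over all dimensions.
    """
).strip()

# Each generated reduce method interpolates its own name into the template.
print(_reduce_extra_args_docstring.format(name="mean"))
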
@@ -454,7 +462,7 @@ def assign_coords(self, coords=None, **coords_kwargs):
def assign_attrs(self, *args, **kwargs):
"""Assign new attrs to this object.

Returns a new object equivalent to self.attrs.update(*args, **kwargs).
Returns a new object equivalent to ``self.attrs.update(*args, **kwargs)``.

Parameters
----------
@@ -481,7 +489,7 @@ def pipe(
**kwargs,
) -> T:
"""
Apply func(self, *args, **kwargs)
Apply ``func(self, *args, **kwargs)``

This method replicates the pandas method of the same name.
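
For instance (a sketch with an invented function):

import xarray as xr

ds = xr.Dataset(attrs={"units": "K"})

# Equivalent to calling the lambda directly with ds as its first argument,
# but reads left to right when chaining.
result = ds.pipe(lambda d, note: d.assign_attrs(note=note), note="checked")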

@@ -810,6 +818,7 @@ def rolling_exp(
----------
window : A single mapping from a dimension name to window value,
optional

dim : str
Name of the dimension to create the rolling exponential window
along (e.g., `time`).
@@ -848,6 +857,7 @@ def coarsen(
----------
dim: dict, optional
Mapping from the dimension name to the window size.

dim : str
Name of the dimension to create the rolling iterator
along (e.g., `time`).
@@ -858,7 +868,7 @@
multiple of the window size. If 'trim', the excess entries are
dropped. If 'pad', NA will be padded.
side : 'left' or 'right' or mapping from dimension to 'left' or 'right'
coord_func: function (name) that is applied to the coordintes,
coord_func : function (name) that is applied to the coordintes,
or a mapping from coordinate name to function (name).

Returns
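
A short usage sketch for coarsen (invented data):

import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12.0), dims="time")

# Mean over non-overlapping windows of length 3 along "time".
coarse = da.coarsen(time=3).mean()
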
@@ -921,7 +931,7 @@ def resample(
Parameters
----------
indexer : {dim: freq}, optional
Mapping from the dimension name to resample frequency. The
Mapping from the dimension name to resample frequency [1]_. The
dimension must be datetime-like.
skipna : bool, optional
Whether to skip missing values when aggregating in downsampling.
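
For example (a sketch with invented data; "QS" is a pandas offset alias):

import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.arange(12.0),
    dims="time",
    coords={"time": pd.date_range("2000-01-01", periods=12, freq="M")},
)

# Downsample monthly values to quarterly means.
quarterly = da.resample(time="QS").mean()
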
2 changes: 1 addition & 1 deletion xarray/core/computation.py
@@ -947,7 +947,7 @@ def earth_mover_distance(first_samples,
appropriately for use in `apply`. You may find helper functions such as
numpy.broadcast_arrays helpful in writing your function. `apply_ufunc` also
works well with numba's vectorize and guvectorize. Further explanation with
examples are provided in the xarray documentation [3].
examples are provided in the xarray documentation [3]_.

See also
--------
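
The repeated [n] -> [n]_ changes in this PR matter because reST only treats the trailing-underscore form as a footnote reference; a bare [n] is plain text, and a reference without a matching target is exactly the kind of thing sphinx warns about. Schematically, in a docstring (hypothetical example):

def convert(x):
    """Convert x, as described in the documentation [1]_.

    References
    ----------
    .. [1] https://example.com/details
    """
    return x
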
1 change: 1 addition & 0 deletions xarray/core/concat.py
@@ -45,6 +45,7 @@ def concat(
* 'all': All data variables will be concatenated.
* list of str: The listed data variables will be concatenated, in
addition to the 'minimal' data variables.

If objects are DataArrays, data_vars must be 'all'.
coords : {'minimal', 'different', 'all' or list of str}, optional
These coordinate variables will be concatenated together:
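
To illustrate the data_vars options (invented data):

import xarray as xr

ds0 = xr.Dataset({"a": ("x", [1.0, 2.0]), "b": ("y", [0.0, 1.0])})
ds1 = xr.Dataset({"a": ("x", [3.0, 4.0]), "b": ("y", [0.0, 1.0])})

# 'minimal': only variables that already contain dim "x" (here "a") are
# concatenated; "b" must then agree across datasets and is carried over.
# 'all' would instead concatenate "b" too, along a new "x" dimension.
out = xr.concat([ds0, ds1], dim="x", data_vars="minimal")
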
10 changes: 6 additions & 4 deletions xarray/core/dataarray.py
@@ -1315,7 +1315,7 @@ def interp(
values.
kwargs: dictionary
Additional keyword passed to scipy's interpolator.
**coords_kwarg : {dim: coordinate, ...}, optional
``**coords_kwarg`` : {dim: coordinate, ...}, optional
The keyword arguments form of ``coords``.
One of coords or coords_kwargs must be provided.

@@ -2044,6 +2044,7 @@ def interpolate_na(
provided.
- 'barycentric', 'krog', 'pchip', 'spline', 'akima': use their
respective :py:class:`scipy.interpolate` classes.

use_coordinate : bool, str, default True
Specifies which index to use as the x values in the interpolation
formulated as `y = f(x)`. If False, values are treated as if
@@ -2063,6 +2064,7 @@
- a string that is valid input for pandas.to_timedelta
- a :py:class:`numpy.timedelta64` object
- a :py:class:`pandas.Timedelta` object

Otherwise, ``max_gap`` must be an int or a float. Use of ``max_gap`` with unlabeled
dimensions has not been implemented yet. Gap length is defined as the difference
between coordinate values at the first data point after a gap and the last value
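
A sketch of how max_gap filters which gaps are filled (invented data; gap length is measured on the coordinate, as described above):

import numpy as np
import xarray as xr

da = xr.DataArray(
    [0.0, np.nan, np.nan, 3.0, np.nan, 5.0],
    dims="x",
    coords={"x": np.arange(6)},
)

# The NaN at x=4 sits in a gap of length 2 (x=3 to x=5) and is filled;
# the two NaNs between x=0 and x=3 form a gap of length 3 and are left alone.
filled = da.interpolate_na(dim="x", method="linear", max_gap=2)
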
@@ -2946,7 +2948,7 @@ def quantile(
is a scalar. If multiple percentiles are given, first axis of
the result corresponds to the quantile and a quantile dimension
is added to the return array. The other dimensions are the
dimensions that remain after the reduction of the array.
dimensions that remain after the reduction of the array.

See Also
--------
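
For example (invented data):

import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12.0).reshape(3, 4), dims=("x", "y"))

# Two quantiles: the result gains a leading "quantile" dimension of size 2,
# and the reduced dimension "y" disappears.
q = da.quantile([0.25, 0.75], dim="y")
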
@@ -3071,8 +3073,8 @@ def integrate(
Coordinate(s) used for the integration.
datetime_unit: str, optional
Can be used to specify the unit if datetime coordinate is used.
One of {'Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns',
'ps', 'fs', 'as'}
One of {'Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns', 'ps',
'fs', 'as'}

Returns
-------
15 changes: 10 additions & 5 deletions xarray/core/dataset.py
@@ -1509,7 +1509,7 @@ def to_netcdf(
Nested dictionary with variable names as keys and dictionaries of
variable specific encodings as values, e.g.,
``{'my_variable': {'dtype': 'int16', 'scale_factor': 0.1,
'zlib': True}, ...}``
'zlib': True}, ...}``

The `h5netcdf` engine supports both the NetCDF4-style compression
encoding parameters ``{'zlib': True, 'complevel': 9}`` and the h5py
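
The docstring's encoding example, as a runnable sketch (hypothetical file and variable names):

import numpy as np
import xarray as xr

ds = xr.Dataset({"my_variable": ("x", np.linspace(0.0, 1.0, 5))})

# Per-variable encodings are nested dicts, exactly as in the docstring.
ds.to_netcdf(
    "out.nc",
    encoding={"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}},
)
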
@@ -2118,7 +2118,7 @@ def thin(
indexers: Union[Mapping[Hashable, int], int] = None,
**indexers_kwargs: Any,
) -> "Dataset":
"""Returns a new dataset with each array indexed along every `n`th
"""Returns a new dataset with each array indexed along every `n`-th
value for the specified dimension(s)

Parameters
@@ -2127,7 +2127,7 @@ def thin(
A dict with keys matching dimensions and integer values `n`
or a single integer `n` applied over all dimensions.
One of indexers or indexers_kwargs must be provided.
**indexers_kwargs : {dim: n, ...}, optional
``**indexers_kwargs`` : {dim: n, ...}, optional
The keyword arguments form of ``indexers``.
One of indexers or indexers_kwargs must be provided.

@@ -3476,6 +3476,7 @@ def merge(
'no_conflicts'}, optional
String indicating how to compare variables of the same name for
potential conflicts:

- 'broadcast_equals': all values must be equal when variables are
broadcast against each other to ensure common dimensions.
- 'equals': all values and dimensions must be the same.
@@ -3484,6 +3485,7 @@
- 'no_conflicts': only values which are not null in both datasets
must be equal. The returned dataset then contains the combination
of all non-null values.

join : {'outer', 'inner', 'left', 'right', 'exact'}, optional
Method for joining ``self`` and ``other`` along shared dimensions:
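
A minimal sketch of a merge that passes the default checks (invented data; the join options themselves are elided from this hunk):

import xarray as xr

ds1 = xr.Dataset({"a": ("x", [1, 2])})
ds2 = xr.Dataset({"b": ("x", [3, 4])})

# Disjoint variable names over identical dimensions: nothing conflicts.
merged = ds1.merge(ds2)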

@@ -3624,7 +3626,7 @@ def drop_sel(self, labels=None, *, errors="raise", **labels_kwargs):
in the dataset. If 'ignore', any given labels that are in the
dataset are dropped and no error is raised.
**labels_kwargs : {dim: label, ...}, optional
The keyword arguments form of ``dim`` and ``labels`
The keyword arguments form of ``dim`` and ``labels``

Returns
-------
@@ -3914,6 +3916,7 @@ def interpolate_na(
----------
dim : str
Specifies the dimension along which to interpolate.

method : str, optional
String indicating which method to use for interpolation:

@@ -3925,6 +3928,7 @@
provided.
- 'barycentric', 'krog', 'pchip', 'spline', 'akima': use their
respective :py:class:`scipy.interpolate` classes.

use_coordinate : bool, str, default True
Specifies which index to use as the x values in the interpolation
formulated as `y = f(x)`. If False, values are treated as if
@@ -3944,6 +3948,7 @@
- a string that is valid input for pandas.to_timedelta
- a :py:class:`numpy.timedelta64` object
- a :py:class:`pandas.Timedelta` object

Otherwise, ``max_gap`` must be an int or a float. Use of ``max_gap`` with unlabeled
dimensions has not been implemented yet. Gap length is defined as the difference
between coordinate values at the first data point after a gap and the last value
@@ -5251,7 +5256,7 @@ def integrate(self, coord, datetime_unit=None):
datetime_unit
Can be specify the unit if datetime coordinate is used. One of
{'Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns', 'ps', 'fs',
'as'}
'as'}

Returns
-------
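
A short sketch of Dataset.integrate on a numeric coordinate (invented data):

import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"speed": ("time", np.array([1.0, 2.0, 4.0]))},
    coords={"time": np.array([0.0, 1.0, 2.0])},
)

# Trapezoid-rule integral of each data variable along "time".
total = ds.integrate("time")
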
6 changes: 4 additions & 2 deletions xarray/core/groupby.py
@@ -728,17 +728,19 @@ def map(self, func, shortcut=False, args=(), **kwargs):
Callable to apply to each array.
shortcut : bool, optional
Whether or not to shortcut evaluation under the assumptions that:

(1) The action of `func` does not depend on any of the array
metadata (attributes or coordinates) but only on the data and
dimensions.
(2) The action of `func` creates arrays with homogeneous metadata,
that is, with the same dimensions and attributes.

If these conditions are satisfied `shortcut` provides significant
speedup. This should be the case for many common groupby operations
(e.g., applying numpy ufuncs).
args : tuple, optional
``*args`` : tuple, optional
Positional arguments passed to `func`.
**kwargs
``**kwargs``
Used to call `func(ar, **kwargs)` for each array `ar`.

Returns
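
A sketch of map with a function that satisfies the shortcut assumptions above (invented data):

import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6.0),
    dims="x",
    coords={"label": ("x", ["a", "a", "b", "b", "c", "c"])},
)

# Standardize within each group: metadata-independent and shape-preserving.
standardized = da.groupby("label").map(lambda g: g - g.mean())
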
2 changes: 1 addition & 1 deletion xarray/plot/plot.py
@@ -269,7 +269,7 @@ def line(
if None, use the default for the matplotlib function.
add_legend : boolean, optional
Add legend with y axis coordinates (2D inputs only).
*args, **kwargs : optional
``*args``, ``**kwargs`` : optional
Additional arguments to matplotlib.pyplot.plot
"""
# Handle facetgrids first
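
A sketch of the fall-through arguments (invented data; requires matplotlib):

import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(10), dims="x")

# The format string and keywords are forwarded to matplotlib.pyplot.plot.
da.plot.line("o--", color="tab:blue", linewidth=1)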