forked from pydata/xarray
Merge remote-tracking branch 'upstream/master' into fix/user-coordinates
* upstream/master: (35 commits)
  fix plotting with transposed nondim coords. (pydata#3441)
  make coarsen reductions consistent with reductions on other classes (pydata#3500)
  Resolve the version issues on RTD (pydata#3589)
  Add bottleneck & rasterio git tip to upstream-dev CI (pydata#3585)
  update whats-new.rst (pydata#3581)
  Examples for quantile (pydata#3576)
  add cftime intersphinx entries (pydata#3577)
  Add pyXpcm to Related Projects doc page (pydata#3578)
  Reimplement quantile with apply_ufunc (pydata#3559)
  add environment file for binderized examples (pydata#3568)
  Add drop to api.rst under pending deprecations (pydata#3561)
  replace duplicate method _from_vars_and_coord_names (pydata#3565)
  propagate indexes in to_dataset, from_dataset (pydata#3519)
  Switch examples to notebooks + scipy19 docs improvements (pydata#3557)
  fix whats-new.rst (pydata#3554)
  Tweaks to release instructions (pydata#3555)
  Clarify conda environments for new contributors (pydata#3551)
  Revert to dev version
  0.14.1 whatsnew (pydata#3547)
  sparse option to reindex and unstack (pydata#3542)
  ...
dcherian committed Dec 5, 2019
2 parents 670f97f + 577d3a7 commit 77c7c0f
Showing 65 changed files with 3,814 additions and 1,723 deletions.
39 changes: 39 additions & 0 deletions .binder/environment.yml
@@ -0,0 +1,39 @@
name: xarray-examples
channels:
- conda-forge
dependencies:
- python=3.7
- boto3
- bottleneck
- cartopy
- cdms2
- cfgrib
- cftime
- coveralls
- dask
- distributed
- dask_labextension
- h5netcdf
- h5py
- hdf5
- iris
- lxml # Optional dep of pydap
- matplotlib
- nc-time-axis
- netcdf4
- numba
- numpy
- pandas
- pint
- pip
- pydap
- pynio
- rasterio
- scipy
- seaborn
- sparse
- toolz
- xarray
- zarr
- pip:
- numbagg
2 changes: 2 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,2 @@
github: numfocus
custom: http://numfocus.org/donate-to-xarray
51 changes: 40 additions & 11 deletions HOW_TO_RELEASE → HOW_TO_RELEASE.md
@@ -1,9 +1,11 @@
-How to issue an xarray release in 15 easy steps
+How to issue an xarray release in 14 easy steps

Time required: about an hour.

1. Ensure your master branch is synced to upstream:
-git pull upstream master
+```
+git pull upstream master
+```
2. Look over whats-new.rst and the docs. Make sure "What's New" is complete
(check the date!) and consider adding a brief summary note describing the
release at the top.
@@ -12,37 +14,53 @@ Time required: about an hour.
- Function/method references should include links to the API docs.
- Sometimes notes get added in the wrong section of whats-new, typically
due to a bad merge. Check for these before a release by using git diff,
-e.g., ``git diff v0.X.Y whats-new.rst`` where 0.X.Y is the previous
+e.g., `git diff v0.X.Y whats-new.rst` where 0.X.Y is the previous
release.
3. If you have any doubts, run the full test suite one final time!
-py.test
+```
+pytest
+```
4. On the master branch, commit the release in git:
```
git commit -a -m 'Release v0.X.Y'
```
5. Tag the release:
```
git tag -a v0.X.Y -m 'v0.X.Y'
```
6. Build source and binary wheels for pypi:
```
git clean -xdf # this deletes all uncommited changes!
python setup.py bdist_wheel sdist
```
7. Use twine to register and upload the release on pypi. Be careful, you can't
take this back!
```
twine upload dist/xarray-0.X.Y*
```
You will need to be listed as a package owner at
https://pypi.python.org/pypi/xarray for this to work.
8. Push your changes to master:
```
git push upstream master
git push upstream --tags
```
9. Update the stable branch (used by ReadTheDocs) and switch back to master:
```
git checkout stable
git rebase master
git push upstream stable
git checkout master
-It's OK to force push to 'stable' if necessary.
-We also update the stable branch with `git cherrypick` for documentation
-only fixes that apply the current released version.
```
+It's OK to force push to 'stable' if necessary. (We also update the stable
+branch with `git cherrypick` for documentation only fixes that apply the
+current released version.)
10. Add a section for the next release (v.X.(Y+1)) to doc/whats-new.rst.
11. Commit your changes and push to master again:
-git commit -a -m 'Revert to dev version'
+```
+git commit -a -m 'New whatsnew section'
+git push upstream master
+```
You're done pushing to master!
12. Issue the release on GitHub. Click on "Draft a new release" at
https://github.com/pydata/xarray/releases. Type in the version number, but
@@ -53,11 +71,22 @@ Time required: about an hour.
14. Issue the release announcement! For bug fix releases, I usually only email
xarray@googlegroups.com. For major/feature releases, I will email a broader
list (no more than once every 3-6 months):
-pydata@googlegroups.com, xarray@googlegroups.com,
-numpy-discussion@scipy.org, scipy-user@scipy.org,
-pyaos@lists.johnny-lin.com
+- pydata@googlegroups.com
+- xarray@googlegroups.com
+- numpy-discussion@scipy.org
+- scipy-user@scipy.org
+- pyaos@lists.johnny-lin.com

Google search will turn up examples of prior release announcements (look for
"ANN xarray").
+You can get a list of contributors with:
+```
+git log "$(git tag --sort="v:refname" | sed -n 'x;$p').." --format="%aN" | sort -u
+```
+or by replacing `v0.X.Y` with the _previous_ release in:
+```
+git log v0.X.Y.. --format="%aN" | sort -u
+```

Note on version numbering:

6 changes: 4 additions & 2 deletions ci/azure/install.yml
@@ -16,16 +16,18 @@ steps:
--pre \
--upgrade \
matplotlib \
-numpy \
pandas \
scipy
+# numpy \ # FIXME https://github.com/pydata/xarray/issues/3409
pip install \
--no-deps \
--upgrade \
git+https://github.com/dask/dask \
git+https://github.com/dask/distributed \
git+https://github.com/zarr-developers/zarr \
-git+https://github.com/Unidata/cftime
+git+https://github.com/Unidata/cftime \
+git+https://github.com/mapbox/rasterio \
+git+https://github.com/pydata/bottleneck
condition: eq(variables['UPSTREAM_DEV'], 'true')
displayName: Install upstream dev dependencies

6 changes: 5 additions & 1 deletion ci/requirements/doc.yml
@@ -6,16 +6,20 @@ dependencies:
- python=3.7
- bottleneck
- cartopy
- cfgrib
- h5netcdf
- ipykernel
- ipython
- iris
- jupyter_client
- nbsphinx
- netcdf4
- numpy
- numpydoc
- pandas<0.25 # Hack around https://github.com/pydata/xarray/issues/3369
- rasterio
- seaborn
- sphinx
- sphinx-gallery
- sphinx_rtd_theme
- xarray
- zarr
2 changes: 1 addition & 1 deletion ci/requirements/py36.yml
@@ -25,7 +25,7 @@ dependencies:
- nc-time-axis
- netcdf4
- numba
-numpy<1.18 # FIXME https://github.com/pydata/xarray/issues/3409
+numpy
- pandas
- pint
- pip
2 changes: 1 addition & 1 deletion ci/requirements/py37.yml
@@ -25,7 +25,7 @@ dependencies:
- nc-time-axis
- netcdf4
- numba
-numpy<1.18 # FIXME https://github.com/pydata/xarray/issues/3409
+numpy
- pandas
- pint
- pip
2 changes: 2 additions & 0 deletions doc/README.rst
@@ -1,3 +1,5 @@
:orphan:

xarray
------

5 changes: 5 additions & 0 deletions doc/api-hidden.rst
@@ -2,6 +2,8 @@
.. This extra page is a work around for sphinx not having any support for
.. hiding an autosummary table.
:orphan:

.. currentmodule:: xarray

.. autosummary::
@@ -30,9 +32,11 @@
core.groupby.DatasetGroupBy.first
core.groupby.DatasetGroupBy.last
core.groupby.DatasetGroupBy.fillna
core.groupby.DatasetGroupBy.quantile
core.groupby.DatasetGroupBy.where

Dataset.argsort
Dataset.astype
Dataset.clip
Dataset.conj
Dataset.conjugate
@@ -71,6 +75,7 @@
core.groupby.DataArrayGroupBy.first
core.groupby.DataArrayGroupBy.last
core.groupby.DataArrayGroupBy.fillna
core.groupby.DataArrayGroupBy.quantile
core.groupby.DataArrayGroupBy.where

DataArray.argsort
9 changes: 9 additions & 0 deletions doc/api.rst
@@ -675,3 +675,12 @@ arguments for the ``from_store`` and ``dump_to_store`` Dataset methods:
backends.FileManager
backends.CachingFileManager
backends.DummyFileManager

Deprecated / Pending Deprecation
================================

Dataset.drop
DataArray.drop
Dataset.apply
core.groupby.DataArrayGroupBy.apply
core.groupby.DatasetGroupBy.apply
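
For orientation, a minimal sketch of the replacements these deprecations point to (assuming the `drop_vars` and `map` methods introduced around 0.14.1; the dataset here is hypothetical):

```python
import xarray as xr

ds = xr.Dataset({"t": ("x", [1.0, 2.0, 3.0])}, coords={"x": [10, 20, 30]})

# instead of the deprecated ds.drop("t"):
ds_no_t = ds.drop_vars("t")

# instead of the deprecated ds.apply(func):
ds_doubled = ds.map(lambda da: da * 2)
```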
6 changes: 3 additions & 3 deletions doc/combining.rst
@@ -255,11 +255,11 @@ Combining along multiple dimensions
``combine_nested``.

For combining many objects along multiple dimensions xarray provides
-:py:func:`~xarray.combine_nested`` and :py:func:`~xarray.combine_by_coords`. These
+:py:func:`~xarray.combine_nested` and :py:func:`~xarray.combine_by_coords`. These
functions use a combination of ``concat`` and ``merge`` across different
variables to combine many objects into one.

-:py:func:`~xarray.combine_nested`` requires specifying the order in which the
+:py:func:`~xarray.combine_nested` requires specifying the order in which the
objects should be combined, while :py:func:`~xarray.combine_by_coords` attempts to
infer this ordering automatically from the coordinates in the data.
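
A minimal sketch of the difference (a hypothetical two-dataset example, not part of this diff):

```python
import xarray as xr

ds0 = xr.Dataset({"t": ("x", [10.0, 20.0])}, coords={"x": [0, 1]})
ds1 = xr.Dataset({"t": ("x", [30.0, 40.0])}, coords={"x": [2, 3]})

# combine_nested: the list order states how the pieces fit together
nested = xr.combine_nested([ds0, ds1], concat_dim="x")

# combine_by_coords: ordering is inferred from the "x" coordinate values
by_coords = xr.combine_by_coords([ds1, ds0])
```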

@@ -310,4 +310,4 @@ These functions can be used by :py:func:`~xarray.open_mfdataset` to open many
files as one dataset. The particular function used is specified by setting the
argument ``'combine'`` to ``'by_coords'`` or ``'nested'``. This is useful for
situations where your data is split across many files in multiple locations,
which have some known relationship between one another.
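
A short sketch of that usage (the file pattern is hypothetical):

```python
import xarray as xr

# many files, one dataset; ordering is inferred from the coordinates
ds = xr.open_mfdataset("data/part-*.nc", combine="by_coords")
```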
9 changes: 6 additions & 3 deletions doc/computation.rst
@@ -95,6 +95,9 @@ for filling missing values via 1D interpolation.
Note that xarray slightly diverges from the pandas ``interpolate`` syntax by
providing the ``use_coordinate`` keyword which facilitates a clear specification
of which values to use as the index in the interpolation.
+xarray also provides the ``max_gap`` keyword argument to limit the interpolation to
+data gaps of length ``max_gap`` or smaller. See :py:meth:`~xarray.DataArray.interpolate_na`
+for more.
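
A minimal sketch of the new keyword (assuming the documented gap definition: the coordinate span between the valid points bounding a run of NaNs):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    [0.0, np.nan, np.nan, 3.0, np.nan, 5.0],
    dims="x",
    coords={"x": [0, 1, 2, 3, 4, 5]},
)

# the single NaN at x=4 sits in a gap of length 5 - 3 = 2, so it is filled;
# the run at x=1..2 spans a gap of length 3 - 0 = 3 and stays NaN
filled = da.interpolate_na(dim="x", max_gap=2)
```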

Aggregation
===========
@@ -322,8 +325,8 @@ Broadcasting by dimension name
``DataArray`` objects automatically align themselves ("broadcasting" in
the numpy parlance) by dimension name instead of axis order. With xarray, you
do not need to transpose arrays or insert dimensions of length 1 to get array
-operations to work, as commonly done in numpy with :py:func:`np.reshape` or
-:py:const:`np.newaxis`.
+operations to work, as commonly done in numpy with :py:func:`numpy.reshape` or
+:py:data:`numpy.newaxis`.

This is best illustrated by a few examples. Consider two one-dimensional
arrays with different sizes aligned along different dimensions:
@@ -563,7 +566,7 @@ to set ``axis=-1``. As an example, here is how we would wrap
Because ``apply_ufunc`` follows a standard convention for ufuncs, it plays
nicely with tools for building vectorized functions, like
-:func:`numpy.broadcast_arrays` and :func:`numpy.vectorize`. For high performance
+:py:func:`numpy.broadcast_arrays` and :py:class:`numpy.vectorize`. For high performance
needs, consider using Numba's :doc:`vectorize and guvectorize <numba:user/vectorize>`.
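
For instance, a minimal sketch of wrapping an axis-aware NumPy function with `apply_ufunc` (the `center` helper is illustrative, not from this diff):

```python
import numpy as np
import xarray as xr

def center(arr):
    # plain NumPy function that works along the last axis
    return arr - arr.mean(axis=-1, keepdims=True)

da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("y", "x"))

centered = xr.apply_ufunc(
    center,
    da,
    input_core_dims=[["x"]],   # move "x" to the last axis before calling
    output_core_dims=[["x"]],  # the result keeps an "x" dimension
)
```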

In addition to wrapping functions, ``apply_ufunc`` can automatically parallelize
39 changes: 26 additions & 13 deletions doc/conf.py
@@ -15,10 +15,16 @@

import datetime
import os
+import pathlib
import subprocess
import sys
from contextlib import suppress

+# make sure the source version is preferred (#3567)
+root = pathlib.Path(__file__).absolute().parent.parent
+os.environ["PYTHONPATH"] = str(root)
+sys.path.insert(0, str(root))

import xarray

allowed_failures = set()
@@ -76,20 +82,24 @@
"numpydoc",
"IPython.sphinxext.ipython_directive",
"IPython.sphinxext.ipython_console_highlighting",
"sphinx_gallery.gen_gallery",
"nbsphinx",
]

extlinks = {
"issue": ("https://github.com/pydata/xarray/issues/%s", "GH"),
"pull": ("https://github.com/pydata/xarray/pull/%s", "PR"),
}

-sphinx_gallery_conf = {
-    "examples_dirs": "gallery",
-    "gallery_dirs": "auto_gallery",
-    "backreferences_dir": False,
-    "expected_failing_examples": list(allowed_failures),
-}
+nbsphinx_timeout = 600
+nbsphinx_execute = "always"
+nbsphinx_prolog = """
+{% set docname = env.doc2path(env.docname, base=None) %}
+You can run this notebook in a `live session <https://mybinder.org/v2/gh/pydata/xarray/doc/examples/master?urlpath=lab/tree/doc/{{ docname }}>`_ |Binder| or view it `on Github <https://github.com/pydata/xarray/blob/master/doc/{{ docname }}>`_.
+.. |Binder| image:: https://mybinder.org/badge.svg
+:target: https://mybinder.org/v2/gh/pydata/xarray/master?urlpath=lab/tree/doc/{{ docname }}
+"""

autosummary_generate = True
autodoc_typehints = "none"
@@ -137,7 +147,7 @@

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build"]
exclude_patterns = ["_build", "**.ipynb_checkpoints"]

# The reST default role (used for this markup: `text`) to use for all
# documents.
@@ -340,9 +350,12 @@
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
"python": ("https://docs.python.org/3/", None),
"pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
"iris": ("http://scitools.org.uk/iris/docs/latest/", None),
"numpy": ("https://docs.scipy.org/doc/numpy/", None),
"numba": ("https://numba.pydata.org/numba-doc/latest/", None),
"matplotlib": ("https://matplotlib.org/", None),
"pandas": ("https://pandas.pydata.org/pandas-docs/stable", None),
"iris": ("https://scitools.org.uk/iris/docs/latest", None),
"numpy": ("https://docs.scipy.org/doc/numpy", None),
"scipy": ("https://docs.scipy.org/doc/scipy/reference", None),
"numba": ("https://numba.pydata.org/numba-doc/latest", None),
"matplotlib": ("https://matplotlib.org", None),
"dask": ("https://docs.dask.org/en/latest", None),
"cftime": ("https://unidata.github.io/cftime", None),
}
4 changes: 3 additions & 1 deletion doc/contributing.rst
@@ -151,7 +151,9 @@ We'll now kick off a two-step process:
.. code-block:: none
# Create and activate the build environment
-conda env create -f ci/requirements/py36.yml
+# This is for Linux and MacOS. On Windows, use py37-windows.yml instead.
+conda env create -f ci/requirements/py37.yml
conda activate xarray-tests
# or with older versions of Anaconda:
2 changes: 1 addition & 1 deletion doc/dask.rst
@@ -285,7 +285,7 @@ automate `embarrassingly parallel
<https://en.wikipedia.org/wiki/Embarrassingly_parallel>`__ "map" type operations
where a function written for processing NumPy arrays should be repeatedly
applied to xarray objects containing Dask arrays. It works similarly to
-:py:func:`dask.array.map_blocks` and :py:func:`dask.array.atop`, but without
+:py:func:`dask.array.map_blocks` and :py:func:`dask.array.blockwise`, but without
requiring an intermediate layer of abstraction.
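
A minimal sketch of that pattern (assuming the `xarray.map_blocks` function added in 0.14.1):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.0), dims="x").chunk({"x": 5})

# the wrapped function sees each block as an ordinary in-memory DataArray
doubled = xr.map_blocks(lambda block: block * 2, da)
doubled.compute()
```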

For the best performance when using Dask's multi-threaded scheduler, wrap a