
Commit

merge master
aulemahal committed Dec 12, 2022
2 parents ceac955 + 503c506 commit 2a76211
Showing 29 changed files with 1,569 additions and 1,345 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -3,7 +3,7 @@ default_language_version:

repos:
- repo: https://github.com/asottile/pyupgrade
rev: v3.2.3
rev: v3.3.0
hooks:
- id: pyupgrade
args: [ "--py38-plus" ]
19 changes: 10 additions & 9 deletions HISTORY.rst
@@ -9,6 +9,13 @@ Contributors to this version: Trevor James Smith (:user:`Zeitsperre`), Pascal Bo
New features and enhancements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Virtual modules can add variables to ``xclim.core.utils.VARIABLES`` through the new `variables` section of the yaml files. (:issue:`1129`, :pull:`1231`).
* ``xclim.core.units.convert_units_to`` can now perform automatic conversions based on the standard name of the input when needed. (:issue:`1205`, :pull:`1206`).
- Conversion from amount (thickness) to flux (rate), using ``amount2rate`` and ``rate2amount``.
- Conversion from amount to thickness for liquid water quantities, using the new ``amount2lwethickness`` and ``lwethickness2amount``. This is similar to the implicit transformations enabled by the "hydro" unit context.
  - Passing ``context='infer'`` will activate the "hydro" context if the source or the target is a DataArray with a compatible standard name, as decided by the new ``xclim.core.units.infer_context`` function.
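
  A minimal usage sketch of the inferred-context conversion described above (illustrative only; the sample values and the ``precipitation_flux`` standard name are assumptions, not taken from this changeset)::

      import xarray as xr
      from xclim.core.units import convert_units_to

      # A mass flux whose CF standard name is compatible with the "hydro" context.
      pr = xr.DataArray(
          [1e-5, 2e-5, 5e-6],
          dims="time",
          attrs={"units": "kg m-2 s-1", "standard_name": "precipitation_flux"},
      )

      # context="infer" lets infer_context read the standard name, activate the
      # "hydro" context, and convert the mass flux to a liquid water equivalent rate.
      pr_mmd = convert_units_to(pr, "mm/d", context="infer")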

Breaking changes
^^^^^^^^^^^^^^^^
@@ -29,14 +36,9 @@ Bug fixes
^^^^^^^^^
* The weighted ensemble statistics are now performed within a context in order to preserve data attributes. (:issue:`1232`, :pull:`1234`).
* The `make docs` Makefile recipe was failing with an esoteric error. This has been resolved by splitting the `linkcheck` and `docs` steps into separate actions. (:issue:`1248`, :pull:`1251`).

* The `pytest` setup step needed rework because test data files were being accessed and modified by multiple tests at once, causing segmentation faults in some tests. This has been resolved by splitting functions into those that fetch or generate test data (under `xclim.testing.tests.data`) and the fixtures that supply accessors to them (under `xclim.testing.tests.conftest`). (:issue:`1238`, :pull:`1254`).
* Relaxed the expected output of ``test_spatial_analogs[friedman_rafsky]`` to support results from `scikit-learn` 1.2.0.
* The MBCn example in the documentation has been fixed to properly imitate the source. (:issue:`1249`, :pull:`1250`).

Internal changes
^^^^^^^^^^^^^^^^
@@ -47,7 +49,6 @@ Internal changes
* Documentation restructured to include `ReadMe` page (as `About`) with some minor changes to documentation titles. (:pull:`1233`).
* `xclim` development build now uses `nbqa` to effectively run black checks over notebook cells. (:pull:`1233`).


0.39.0 (2022-11-02)
-------------------
Contributors to this version: Trevor James Smith (:user:`Zeitsperre`), Abel Aoun (:user:`bzah`), Éric Dupuis (:user:`coxipi`), Travis Logan (:user:`tlogan2000`), Pascal Bourgault (:user:`aulemahal`).
1 change: 1 addition & 0 deletions docs/conf.py
@@ -120,6 +120,7 @@ def _indicator_table(module):
r"https://www.ouranos.ca/.*", # bad ssl certificate
r"https://doi.org/10.1080/.*", # tandfonline does not allow linkcheck requests (error 403)
r"https://www.tandfonline.com/.*", # tandfonline does not allow linkcheck requests (error 403)
r"http://www.utci.org/.*", # Added on 2022-12-08: site appears to be down (timeout)
]
linkcheck_exclude_documents = [r"readme"]

60 changes: 19 additions & 41 deletions docs/notebooks/sdba.ipynb
@@ -516,7 +516,7 @@
"##### Stack the variables to multivariate arrays and standardize them\n",
"The standardization process ensure the mean and standard deviation of each column (variable) is 0 and 1 respectively.\n",
"\n",
"`hist` and `sim` are standardized together so the two series are coherent. We keep the mean and standard deviation to be reused when we build the result."
"`scenh` and `scens` are standardized together so the two series are coherent. As we'll see further, we do not need to keep the mean and standard deviation as we only keep the rank order information from the `NpdfTransform` output."
]
},
{
@@ -533,9 +533,9 @@
"# Standardize\n",
"ref, _, _ = sdba.processing.standardize(ref)\n",
"\n",
"allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), \"time\"))\n",
"hist = allsim.sel(time=scenh.time)\n",
"sim = allsim.sel(time=scens.time)"
"allsim_std, _, _ = sdba.processing.standardize(xr.concat((scenh, scens), \"time\"))\n",
"scenh_std = allsim_std.sel(time=scenh.time)\n",
"scens_std = allsim_std.sel(time=scens.time)"
]
},
{
@@ -559,21 +559,17 @@
"with set_options(sdba_extra_output=True):\n",
" out = sdba.adjustment.NpdfTransform.adjust(\n",
" ref,\n",
" hist,\n",
" sim,\n",
" scenh_std,\n",
" scens_std,\n",
" base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment.\n",
" base_kws={\"nquantiles\": 20, \"group\": \"time\"},\n",
" n_iter=20, # perform 20 iteration\n",
" n_escore=1000, # only send 1000 points to the escore metric (it is realy slow)\n",
" )\n",
"\n",
"scenh = out.scenh.rename(time_hist=\"time\") # Bias-adjusted historical period\n",
"scens = out.scen # Bias-adjusted future period\n",
"extra = out.drop_vars([\"scenh\", \"scen\"])\n",
"\n",
"# Un-standardize (add the mean and the std back)\n",
"scenh = sdba.processing.unstandardize(scenh, savg, sstd)\n",
"scens = sdba.processing.unstandardize(scens, savg, sstd)"
"scenh_npdft = out.scenh.rename(time_hist=\"time\") # Bias-adjusted historical period\n",
"scens_npdft = out.scen # Bias-adjusted future period\n",
"extra = out.drop_vars([\"scenh\", \"scen\"])"
]
},
{
@@ -593,8 +589,8 @@
"metadata": {},
"outputs": [],
"source": [
"scenh = sdba.processing.reordering(hist, scenh, group=\"time\")\n",
"scens = sdba.processing.reordering(sim, scens, group=\"time\")"
"scenh = sdba.processing.reordering(scenh_npdft, scenh, group=\"time\")\n",
"scens = sdba.processing.reordering(scens_npdft, scens, group=\"time\")"
]
},
{
@@ -613,7 +609,7 @@
"source": [
"##### There we are!\n",
"\n",
"Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call."
"Let's trigger all the computations. The use of `dask.compute` allows the three DataArrays to be computed at the same time, avoiding repeating the common steps."
]
},
{
@@ -625,16 +621,10 @@
"from dask import compute\n",
"from dask.diagnostics import ProgressBar\n",
"\n",
"tasks = [\n",
" scenh.isel(location=2).to_netcdf(\"mbcn_scen_hist_loc2.nc\", compute=False),\n",
" scens.isel(location=2).to_netcdf(\"mbcn_scen_sim_loc2.nc\", compute=False),\n",
" extra.escores.isel(location=2)\n",
" .to_dataset()\n",
" .to_netcdf(\"mbcn_escores_loc2.nc\", compute=False),\n",
"]\n",
"\n",
"with ProgressBar():\n",
" compute(tasks)"
" scenh, scens, escores = compute(\n",
" scenh.isel(location=2), scens.isel(location=2), extra.escores.isel(location=2)\n",
" )"
]
},
{
@@ -650,8 +640,6 @@
"metadata": {},
"outputs": [],
"source": [
"scenh = xr.open_dataset(\"mbcn_scen_hist_loc2.nc\")\n",
"\n",
"fig, ax = plt.subplots()\n",
"\n",
"dref.isel(location=2).tasmax.plot(ax=ax, label=\"Reference\")\n",
@@ -667,20 +655,10 @@
"metadata": {},
"outputs": [],
"source": [
"escores = xr.open_dataarray(\"mbcn_escores_loc2.nc\")\n",
"diff_escore = escores.differentiate(\"iterations\")\n",
"diff_escore.plot()\n",
"plt.title(\"Difference of the subsequent e-scores.\")\n",
"plt.ylabel(\"E-scores difference\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"diff_escore"
"escores.plot()\n",
"plt.title(\"E-scores for each iteration.\")\n",
"plt.xlabel(\"iteration\")\n",
"plt.ylabel(\"E-score\")"
]
},
{
10 changes: 8 additions & 2 deletions setup.cfg
@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.39.11-beta
current_version = 0.39.12-beta
commit = True
tag = False
parse = (?P<major>\d+)\.(?P<minor>\d+).(?P<patch>\d+)(\-(?P<release>[a-z]+))?
@@ -79,7 +79,13 @@ extend-ignore =
test = pytest

[tool:pytest]
addopts = --verbose --cov=xclim --cov-report term-missing --numprocesses=auto --maxprocesses=6 --dist=loadscope
addopts =
--verbose
--cov=xclim
--cov-report=term-missing
--numprocesses=auto
--maxprocesses=6
--dist=loadscope
norecursedirs = docs/notebooks/*
filterwarnings =
ignore::UserWarning
2 changes: 1 addition & 1 deletion setup.py
@@ -12,7 +12,7 @@
AUTHOR = "Travis Logan"
AUTHOR_EMAIL = "logan.travis@ouranos.ca"
REQUIRES_PYTHON = ">=3.8.0"
VERSION = "0.39.11-beta"
VERSION = "0.39.12-beta"
LICENSE = "Apache Software License 2.0"

with open("README.rst") as readme_file:
2 changes: 1 addition & 1 deletion xclim/__init__.py
@@ -11,7 +11,7 @@

__author__ = """Travis Logan"""
__email__ = "logan.travis@ouranos.ca"
__version__ = "0.39.11-beta"
__version__ = "0.39.12-beta"


# Load official locales