Documentation adjustments #287

Merged · 41 commits · Jun 1, 2023
Changes from 4 commits
7a2f807
structuring, typo fixes, environment-rtd.yml fixes
Zeitsperre May 29, 2023
a43d308
suppress myst warnings
Zeitsperre May 29, 2023
f4d77b2
docs recipe no longer reliant on linkcheck
Zeitsperre May 29, 2023
0a12860
notebook cleanup, config changes for running things locally
Zeitsperre May 29, 2023
8b94808
fix often-failing tests
Zeitsperre May 29, 2023
c8b4592
add raven-hydro
Zeitsperre May 29, 2023
e798d70
re-enable GDAL
Zeitsperre May 29, 2023
d9ba1e3
only build PDF
Zeitsperre May 29, 2023
5b2448f
docstring adjustments
Zeitsperre May 29, 2023
9686edd
split tests
Zeitsperre May 29, 2023
bbe65a5
do not install GIS libs, add seaborn
Zeitsperre May 29, 2023
c5d179f
update xfail error
Zeitsperre May 29, 2023
5cb9387
Fix NB02.
huard May 30, 2023
66b2295
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] May 30, 2023
39fe1dd
whitespace
Zeitsperre May 30, 2023
a917fdf
loose pinning
Zeitsperre May 30, 2023
ffd6600
fix command examples
Zeitsperre May 30, 2023
94a8c36
formatting
Zeitsperre May 30, 2023
be231a8
Fix NB10
huard May 31, 2023
0c26d60
Ignore NumbaDeprecation warnings
huard May 31, 2023
fe3460f
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] May 31, 2023
5a789c5
Update HydroShare NB to work with hsclient and token authentication
huard May 31, 2023
0f94089
merge
huard May 31, 2023
b17a327
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] May 31, 2023
f21d4fe
Simplified logic in NB08
huard May 31, 2023
b4bf001
merge
huard May 31, 2023
dbf8fb3
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] May 31, 2023
6b3fbc6
Fix NB Canopex. Suggest weird error caused by repeatedly accessing hy…
huard May 31, 2023
3c79514
merge
huard May 31, 2023
abd0a34
removed numba warnings from nb 10
huard May 31, 2023
e19be27
rerun with default kernel
huard May 31, 2023
039e372
manually remove references to birdy kernel
Zeitsperre May 31, 2023
86cbfad
try to reduce run time of nb cells
huard May 31, 2023
f0bd1f5
replace hs_restclient by hsclient in docs dependencies
huard Jun 1, 2023
829a3a7
docs: fail on warnings
huard Jun 1, 2023
18d76f3
exclude Notebooks for execution. Only md jupytext files should be exe…
huard Jun 1, 2023
d41f789
Merge branch 'master' into fix_docs
huard Jun 1, 2023
ee1e570
fix sphinx warnings
huard Jun 1, 2023
b07b457
Merge branch 'fix_docs' of github.com:CSHS-CWRA/RavenPy into fix_docs
huard Jun 1, 2023
44172f5
readd apidoc. Fix block warning
huard Jun 1, 2023
5aba957
user_api preferred for external hyperlinks, fix pydantic docstrings, …
Zeitsperre Jun 1, 2023
2 changes: 1 addition & 1 deletion Makefile
Original file line number Diff line number Diff line change
@@ -76,7 +76,7 @@ autodoc: clean-docs ## create sphinx-apidoc files:
linkcheck: autodoc ## run checks over all external links found throughout the documentation
$(MAKE) -C docs linkcheck

docs: linkcheck ## generate Sphinx HTML documentation, including API docs
docs: autodoc ## generate Sphinx HTML documentation, including API docs
$(MAKE) -C docs html
ifndef READTHEDOCS
$(BROWSER) docs/_build/html/index.html
5 changes: 4 additions & 1 deletion docs/conf.py
@@ -154,7 +154,10 @@
todo_include_todos = False

# Suppress "WARNING: unknown mimetype for ..." when building EPUB.
suppress_warnings = ["epub.unknown_project_files"]
suppress_warnings = [
"epub.unknown_project_files",
"mystnb.unknown_mime_type"
]

# -- Options for HTML output -------------------------------------------

@@ -28,16 +28,13 @@
"source": [
"# We need to import a few packages required to do the work\n",
"import os\n",
"import xml.etree.ElementTree as et\n",
"\n",
"import geopandas as gpd\n",
"import matplotlib as mpl\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import rasterio\n",
"import requests\n",
"import rioxarray as rio\n",
"import xarray as xr\n",
"from birdy import WPSClient\n",
"\n",
"from ravenpy.utilities.testdata import get_file\n",
8 changes: 4 additions & 4 deletions docs/notebooks/03_Extracting_forcing_data.ipynb
@@ -25,9 +25,9 @@
"import tempfile\n",
"from pathlib import Path\n",
"\n",
"import fsspec\n",
"import fsspec # noqa\n",
"import intake\n",
"import s3fs\n",
"import s3fs # noqa\n",
"import xarray as xr\n",
"from clisops.core import subset\n",
"\n",
@@ -224,7 +224,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Here we will write the files to disk in a temporary folder since the root folder containing these notebooks is read-only. \n",
"### Here we will write the files to disk in a temporary folder since the root folder containing these notebooks is read-only.\n",
"You can change the path here to your own preferred path in your writable workspace. Alternatively, if you copy this notebook to your writable-workspace as shown in the introduction documentation, you can save just the filename (no absolute path) and the file will appear \"beside\" the notebooks, ready to be read by the next series of notebooks."
]
},
@@ -251,7 +251,7 @@
"source": [
"We now have daily precipitation and minimum/maximum temperatures to drive our Raven Model, which we will do in the next notebook!\n",
"\n",
"Note that our dataset generated here is very short (1 year) but the same dataset for the period 1980-12-31 to 1991-01-01 has been pre-generated and stored on the server for efficiency."
"Note that our dataset generated here is very short (1 year) but the same dataset for the period 1980-12-31 to 1991-01-01 has been pre-generated and stored on the server for efficiency.\n"
]
}
],
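The notebook 03 hunks above write forcing data to a temporary folder because the notebook directory is read-only. A minimal stdlib sketch of that pattern — the `ERA5_subset.nc` name is illustrative, and in the notebook the file would actually come from xarray's `ds.to_netcdf(...)`:

```python
import tempfile
from pathlib import Path

# Create a writable scratch directory; on hosted deployments the notebook
# folder itself is read-only, so outputs go here instead.
tmp_dir = Path(tempfile.mkdtemp())

# Hypothetical output name; in the notebook this would be the subsetted
# forcing file written with ds.to_netcdf(out_file).
out_file = tmp_dir / "ERA5_subset.nc"
out_file.write_bytes(b"")  # placeholder for the real netCDF write

print(out_file.exists())  # → True
```

Copying the notebook into a writable workspace, as the prose suggests, lets you drop the temporary directory and use a bare filename instead.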
7 changes: 2 additions & 5 deletions docs/notebooks/04_Emulating_hydrological_models.ipynb
@@ -51,8 +51,6 @@
"import tempfile\n",
"from pathlib import Path\n",
"\n",
"import xarray as xr\n",
"\n",
"from ravenpy.config import commands as rc\n",
"from ravenpy.utilities.testdata import get_file"
]
@@ -74,7 +72,6 @@
"source": [
"# Define the hydrological response unit. We can use the information from the tutorial notebook #02! Here we are using\n",
"# arbitrary data for a test catchment.\n",
"hru = {}\n",
"hru = dict(\n",
" area=4250.6,\n",
" elevation=843.0,\n",
@@ -265,7 +262,7 @@
"# we are providing. We will generate the list now and pass it later to Ravenpy as an argument to the model.\n",
"data_type = [\"TEMP_MAX\", \"TEMP_MIN\", \"PRECIP\"]\n",
"\n",
"# Setup the gauge using the second method, i.e., using a single file that contains all meteorological inputs. As\n",
"# Set up the gauge using the second method, i.e., using a single file that contains all meteorological inputs. As\n",
"# you can see, a single gauge is added, but it contains all the information we need.\n",
"default_emulator_config[\"Gauge\"] = [\n",
" rc.Gauge.from_nc(\n",
@@ -802,7 +799,7 @@
" 7.469583e-01,\n",
" ),\n",
" **default_emulator_config,\n",
")"
")\n"
]
}
],
4 changes: 1 addition & 3 deletions docs/notebooks/05_Advanced_RavenPy_configuration.ipynb
@@ -79,7 +79,6 @@
"source": [
"from ravenpy import OutputReader\n",
"from ravenpy.ravenpy import run\n",
"from ravenpy.utilities.nb_graphs import hydrographs\n",
"\n",
"run_name = \"raven-gr4j-salmon\" # As can be seen in the config above, this is the name of the .rvX files.\n",
"configdir = config[\n",
@@ -104,7 +103,7 @@
"# Read the output files at the output_path\n",
"\n",
"outputs = OutputReader(run_name=None, path=outputs_path) # Get the outputs\n",
"# Note. We setup the run_name to None, because we didnt rename the output files. If you gave a different name to your file\n",
"# Note. We set up the run_name to None, because we didn't rename the output files. If you gave a different name to your file\n",
"# compared to the one above, you should change the run_name value to this new name. It's important though that you keep the end\n",
"# of the filename the same\n",
"\n",
@@ -211,7 +210,6 @@
"import datetime as dt\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import xarray as xr\n",
"\n",
"from ravenpy import Emulator\n",
"from ravenpy.config import commands as rc\n",
13 changes: 6 additions & 7 deletions docs/notebooks/06_Raven_calibration.ipynb
@@ -28,7 +28,6 @@
"outputs": [],
"source": [
"import datetime as dt\n",
"import tempfile\n",
"\n",
"import spotpy\n",
"\n",
@@ -56,12 +55,12 @@
"from ravenpy.utilities.testdata import get_file\n",
"\n",
"# We get the netCDF for testing on a server. You can replace the getfile method by a string containing the path to your own netCDF\n",
"nc_files = get_file(\n",
"nc_file = get_file(\n",
" \"raven-gr4j-cemaneige/Salmon-River-Near-Prince-George_meteo_daily.nc\"\n",
")\n",
"\n",
"# Display the datasets that we will be using\n",
"print(nc_files)"
"# Display the dataset that we will be using\n",
"print(nc_file)"
]
},
{
@@ -111,10 +110,10 @@
"\n",
"# We need to create the desired model with its parameters the same way as in the Notebook 04_Emulating_hydrological_models.\n",
"model_config = GR4JCN(\n",
" ObservationData=[rc.ObservationData.from_nc(nc_files, alt_names=\"qobs\")],\n",
" ObservationData=[rc.ObservationData.from_nc(nc_file, alt_names=\"qobs\")],\n",
" Gauge=[\n",
" rc.Gauge.from_nc(\n",
" nc_files,\n",
" nc_file,\n",
" alt_names=alt_names,\n",
" data_kwds={\"ALL\": {\"elevation\": hru[\"elevation\"]}},\n",
" )\n",
@@ -185,7 +184,7 @@
"# be sure of all the configuration above before executing with a high number of model evaluations.\n",
"model_evaluations = 10\n",
"\n",
"# Setup the spotpy sampler with the method, the setup configuration, a run name and other options. Please refer to\n",
"# Set up the spotpy sampler with the method, the setup configuration, a run name and other options. Please refer to\n",
"# the spotpy documentation for more options. We recommend sticking to this format for efficiency of most applications.\n",
"sampler = spotpy.algorithms.dds(\n",
" spot_setup, dbname=\"RAVEN_model_run\", dbformat=\"ram\", save_sim=False\n",
15 changes: 6 additions & 9 deletions docs/notebooks/07_Making_and_using_hotstart_files.ipynb
@@ -15,11 +15,11 @@
"\n",
"Hydrological models have state variables that describe the snow pack, soil moisture, underground reservoirs, etc. Typically, those cannot be measured empirically, so one way to estimate those values is to run the model for a period before the period we are actually interested in, and save the state variables at the end of this *warm-up* simulation.\n",
"\n",
"This notebook shows how to save those state variables and use them to configure another Raven simulation. These *states* are configured by the `:HRUStateVariableTable` and `:BasinStateVariables` commands, but `ravenpy` has a convenience function `set_solution` to update those directly from the `solution.rvc` simulation output. \n",
"This notebook shows how to save those state variables and use them to configure another Raven simulation. These *states* are configured by the `:HRUStateVariableTable` and `:BasinStateVariables` commands, but `ravenpy` has a convenience function `set_solution` to update those directly from the `solution.rvc` simulation output.\n",
"\n",
"In the following, we run the model on two years then save the final states. Next, we use those final states to configure the initial state of a second simulation over the next two years. If everything is done correctly, these two series should be identical to a simulation over the full four years. \n",
"In the following, we run the model on two years then save the final states. Next, we use those final states to configure the initial state of a second simulation over the next two years. If everything is done correctly, these two series should be identical to a simulation over the full four years.\n",
"\n",
"## Model configuration \n",
"## Model configuration\n",
"\n",
"At this point the following blocks of code should be quite familiar! If not, please go back to notebook \"04 - Emulating hydrological models\" to understand what is happening.\n"
]
@@ -34,15 +34,13 @@
"import datetime as dt\n",
"import warnings\n",
"\n",
"import xarray as xr\n",
"from matplotlib import pyplot as plt\n",
"\n",
"from ravenpy import Emulator, RavenWarning\n",
"from ravenpy.config import commands as rc\n",
"\n",
"# Import the GR4JCN model\n",
"from ravenpy.config.emulators import GR4JCN\n",
"from ravenpy.ravenpy import run\n",
"from ravenpy.utilities.testdata import get_file"
]
},
@@ -58,7 +56,6 @@
"end_date = dt.datetime(1988, 1, 1)\n",
"\n",
"# Define HRU\n",
"hru = {}\n",
"hru = dict(\n",
" area=4250.6,\n",
" elevation=843.0,\n",
@@ -126,7 +123,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Now let's run the model for the next two years, setting the initial conditions to the final states of the first simulation. \n",
"### Now let's run the model for the next two years, setting the initial conditions to the final states of the first simulation.\n",
"\n",
"The path to the `solution.rvc` file can be found in `out1.files[\"solution\"]`.\n",
"The content itself can be displayed with `out1.solution`"
@@ -160,7 +157,7 @@
"source": [
"## Compare with simulation over entire period\n",
"\n",
"Now in theory, those two simulations should be identical to one simulation over the whole period of four years, let's confirm this. "
"Now in theory, those two simulations should be identical to one simulation over the whole period of four years, let's confirm this."
]
},
{
Expand Down Expand Up @@ -207,7 +204,7 @@
"\n",
"delta1.plot(label=\"Part 1\")\n",
"delta2.plot(label=\"Part 2\")\n",
"plt.title(\"Difference between two parts and full simulation\")"
"plt.title(\"Difference between two parts and full simulation\")\n"
]
}
],
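Notebook 07 above argues that a warm-up run whose final states seed a second run should reproduce a single run over the full period. A toy, stdlib-only analogue of that claim — a one-reservoir stand-in, not Raven's `solution.rvc` / `set_solution` mechanism:

```python
def step(state, forcing):
    """Toy linear-reservoir update: storage decays, then receives forcing."""
    return 0.9 * state + forcing

def run(state, forcings):
    """Run the toy model over a forcing series, returning the final state."""
    for f in forcings:
        state = step(state, f)
    return state

forcings = [1.0, 2.0, 0.5, 1.5]

# One simulation over the full period...
full = run(0.0, forcings)

# ...equals a warm-up run whose final state seeds a second run.
warm_state = run(0.0, forcings[:2])      # analogous to saving solution.rvc
restart = run(warm_state, forcings[2:])  # analogous to reloading those states

print(abs(full - restart) < 1e-12)  # → True
```

Because the restarted run applies exactly the same operations in the same order, the split and full simulations agree to machine precision, which is the check the notebook performs on the two hydrograph halves.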
21 changes: 10 additions & 11 deletions docs/notebooks/08_Getting_and_bias_correcting_CMIP6_data.ipynb
@@ -13,7 +13,7 @@
"source": [
"## Applying bias correction on climate model data to perform climate change impact studies on hydrology\n",
"\n",
"This notebook will guide you on how to conduct bias correction of climate model outputs that will be fed as inputs to the hydrological model `Raven` to perform climate change impact studies on hydrology. \n",
"This notebook will guide you on how to conduct bias correction of climate model outputs that will be fed as inputs to the hydrological model `Raven` to perform climate change impact studies on hydrology.\n",
"\n",
"## Geographic data\n",
"In this tutorial, we will be using the shapefile or GeoJSON file for watershed contours as generated in previous notebooks. The file can be uploaded to your workspace here and used directly in the cells below. In this notebook, we present a quick demonstration of the bias-correction approach on a small and predetermined dataset, but you can use your own basin according to your needs."
@@ -175,7 +175,7 @@
"Now, we can be more selective about what we want to get from the CMIP6 project data:\n",
"\n",
"- source_id: The climate model, in this case 'MIROC6'\n",
"- experiment_id: The forcing scenario. Here we will use 'historical' (for the historical period) and for future data we could use any of the SSP simulations, such as 'ssp585' or 'ssp245'. \n",
"- experiment_id: The forcing scenario. Here we will use 'historical' (for the historical period) and for future data we could use any of the SSP simulations, such as 'ssp585' or 'ssp245'.\n",
"- table_id: The timestep of the model simulation. Here we will use 'day' for daily data, but some models have monthly and 3-hourly data, for example.\n",
"- variable_id: The codename for the variable of interest. Here we will want 'tasmin', 'tasmax', and 'pr' for minimum temperature, maximum temperature and total precipitation, respectively.\n",
"- member_id: The code identifying the model member. Some models are run multiple times with varying initial conditions to represent natural variability. Here we will only focus on the first member 'r1i1p1f1'.\n",
@@ -231,7 +231,7 @@
"metadata": {},
"source": [
"The final step is to open the dataset with xarray by using the 'open_zarr()' function. The following block performs multiple operations to get the data that we want:\n",
" \n",
"\n",
"- It opens the data using xarray\n",
"- It extracts only the times that we need for the reference/historical period\n",
"- It then subsets it spatially by getting only the points within the catchment boundaries. If your catchments is too small and this fails, try with a larger basin or apply a buffer around your boundaries.\n",
@@ -276,11 +276,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that we have a single chunk of 10 years of tasmin data, as expected! However, you might also have noticed that there is no metadata, such as units and variable properties left in the data array. We can fix that by wrapping the code in a block that forces xarray to keep the metadata. \n",
"We can see that we have a single chunk of 10 years of tasmin data, as expected! However, you might also have noticed that there is no metadata, such as units and variable properties left in the data array. We can fix that by wrapping the code in a block that forces xarray to keep the metadata.\n",
"\n",
"Also, since we will need to use this block of code for each variable, it might become tedious. Therefore, to simplify the code, we can combine everything into a single line, like this:\n",
" \n",
" "
"\n"
]
},
{
Expand Down Expand Up @@ -458,12 +457,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The model is now going to be trained to find correction factors between the reference dataset (observations) and historical dataset (climate model outputs for the same time period). The correction factors obtained are then applied to both reference and future climate outputs to correct them. This step is called the bias correction. In this test-case, we apply a method named `detrended quantile mapping`. \n",
"The model is now going to be trained to find correction factors between the reference dataset (observations) and historical dataset (climate model outputs for the same time period). The correction factors obtained are then applied to both reference and future climate outputs to correct them. This step is called the bias correction. In this test-case, we apply a method named `detrended quantile mapping`.\n",
"\n",
"Here we use the `xclim` utilities to bias-correct CMIP6 GCM data using ERA5 reanalysis data as the reference. See `xclim` documentation for more options! (https://xclim.readthedocs.io/en/stable/notebooks/sdba.html)\n",
"\n",
"## WARNING: \n",
"This following block of code will take a while to run, and some warning messages will appear during the process (related to longitude wrapping and other information on calendar types). Unless an error message appears, the code should run just fine!"
"> **Warning**\n",
"> This following block of code will take a while to run, and some warning messages will appear during the process (related to longitude wrapping and other information on calendar types). Unless an error message appears, the code should run just fine!"
]
},
{
Expand Down Expand Up @@ -564,7 +563,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"#### We will now do as we did with Notebook 03 - Extracting forcing data: We will save the data to a temporary location due to this folder being read-only. \n",
"#### We will now do as we did with Notebook 03 - Extracting forcing data: We will save the data to a temporary location due to this folder being read-only.\n",
"If you copy this Notebook to your writable workspace, you can save the file to that workspace directly and use it in other notebooks. For now, we will keep things simple and provide pre-computed files for the next notebooks."
]
},
Expand Down Expand Up @@ -600,7 +599,7 @@
"outputs": [],
"source": [
"# Compare it to the future precipitation without bias-correction.\n",
"future_pr.plot()"
"future_pr.plot()\n"
]
}
],
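Notebook 08 applies `xclim`'s detrended quantile mapping. As a conceptual illustration only, here is a stdlib sketch of plain additive quantile mapping — per-quantile offsets learned between reference and historical series, then applied to a simulation. All data below are made up, and this is not xclim's algorithm:

```python
from bisect import bisect_left
from statistics import quantiles

def train_qm(ref, hist, n=4):
    """Learn additive per-quantile offsets (ref - hist) between two series."""
    q_ref = quantiles(ref, n=n)
    q_hist = quantiles(hist, n=n)
    return q_hist, [r - h for r, h in zip(q_ref, q_hist)]

def adjust(sim, q_hist, offsets):
    """Correct each simulated value with the offset of its quantile bin."""
    out = []
    for x in sim:
        i = min(bisect_left(q_hist, x), len(offsets) - 1)
        out.append(x + offsets[i])
    return out

# Hypothetical data: the model runs ~2 units too warm relative to observations.
ref = [10, 12, 14, 16, 18, 20]   # observations (e.g. the ERA5 reference)
hist = [12, 14, 16, 18, 20, 22]  # model output over the same period
q_hist, offsets = train_qm(ref, hist)

corrected = adjust([13, 17, 21], q_hist, offsets)
print(corrected)  # each value shifted down by the learned ~2-unit bias
```

The real method additionally detrends before mapping and works quantile-by-quantile on multi-decade daily series; see the xclim `sdba` documentation linked in the notebook for the production version.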
@@ -30,7 +30,6 @@
"import datetime as dt\n",
"import warnings\n",
"\n",
"import xarray as xr\n",
"from matplotlib import pyplot as plt\n",
"\n",
"from ravenpy import Emulator\n",
@@ -60,7 +59,6 @@
"source": [
"# Define the hydrological response unit. We can use the information from the tutorial notebook #02! Here we are using\n",
"# arbitrary data for a test catchment.\n",
"hru = {}\n",
"hru = dict(\n",
" area=4250.6,\n",
" elevation=843.0,\n",
@@ -195,7 +193,7 @@
"outputs": [],
"source": [
"# Work with the hydrograph data directly:\n",
"outputs_future.hydrograph"
"outputs_future.hydrograph\n"
]
}
],
10 changes: 4 additions & 6 deletions docs/notebooks/10_Data_assimilation.ipynb
@@ -27,7 +27,6 @@
"# Import packages\n",
"import datetime as dt\n",
"import tempfile\n",
"from glob import glob\n",
"from pathlib import Path\n",
"\n",
"import matplotlib.pyplot as plt\n",
@@ -45,7 +44,6 @@
")\n",
"\n",
"# Define HRU\n",
"hru = {}\n",
"hru = dict(\n",
" area=4250.6,\n",
" elevation=843.0,\n",
@@ -205,7 +203,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Get the paths to all of the ens_1...ens_N folders, one per member\n",
"# Get the paths to all the ens_1...ens_N folders, one per member\n",
"paths_spinup = list(tmp_path.glob(\"ens_*\"))\n",
"\n",
"# Read those into memory in an EnsembleReader object\n",
@@ -262,7 +260,7 @@
"# new 3-day period.\n",
"loop = Emulator(config=conf_loop, workdir=tmp_path, overwrite=True).run(overwrite=True)\n",
"\n",
"# Get the paths to all of the ens_1...ens_N folders, one per member\n",
"# Get the paths to all the ens_1...ens_N folders, one per member\n",
"paths_loop = list(tmp_path.glob(\"ens_*\"))\n",
"\n",
"# Repeat the same process as the spinup to look at model results:\n",
@@ -327,7 +325,7 @@
"\n",
" # Extract the results for this 3-day hydrograph and store it into our \"total_hydrograph\" which keeps track\n",
" # of the flows for each of the 3-day periods.\n",
" ens_loop = EnsembleReader(crun_name=onf_loop.run_name, paths=paths_loop)\n",
" ens_loop = EnsembleReader(crun_name=conf_loop.run_name, paths=paths_loop)\n",
" total_hydrograph = xr.concat([total_hydrograph, ens_loop.hydrograph], dim=\"time\")\n",
"\n",
"\n",
@@ -430,7 +428,7 @@
"plt.ylabel(\"Streamlfow (m³/s)\")\n",
"plt.title(\"Forecast after assimilation\")\n",
"plt.xlim([dt.date(1997, 11, 29), dt.date(1997, 12, 29)])\n",
"plt.show()"
"plt.show()\n"
]
}
],
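The assimilation loop in notebook 10 runs each ensemble member over a short window, carries the final states forward, and concatenates the per-window hydrographs. A stdlib toy of that control flow — the perturbed-decay model and member count are invented, and plain lists stand in for `EnsembleReader` and `xr.concat`:

```python
import random

random.seed(42)

n_members = 5
window = 3    # days per assimilation cycle
n_cycles = 4

def run_member(state, days):
    """Toy member run: perturbed decay, returns (flow series, final state)."""
    series = []
    for _ in range(days):
        state = 0.8 * state + random.gauss(1.0, 0.1)  # perturbed forcing
        series.append(state)
    return series, state

states = [0.0] * n_members
total_hydrograph = [[] for _ in range(n_members)]  # stands in for xr.concat over time

for _ in range(n_cycles):
    for m in range(n_members):
        series, states[m] = run_member(states[m], window)
        total_hydrograph[m].extend(series)  # concatenate along time
    # A real assimilation step would nudge `states` toward observed flow here.

print(len(total_hydrograph[0]))  # → 12 (4 cycles × 3 days)
```

The commented nudge is where the notebook's ensemble Kalman-style update of the member states would go before launching the next window.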