Hy2DL: Hybrid Hydrological modeling using Deep Learning methods


Hy2DL is a Python library for building hydrological models for rainfall-runoff prediction using deep learning methods. The main idea of this repository is to provide models that are 'easy' to understand, interpret, and implement. This ease naturally comes at the cost of code modularity and, to some extent, flexibility. The logic of the codes presented here is heavily based on 'NeuralHydrology --- A Python library for Deep Learning research in hydrology' (https://github.com/neuralhydrology/neuralhydrology.git). For a more flexible, robust, and modular implementation of deep learning methods in hydrological modeling, we advise the use of NeuralHydrology.

In addition to Long Short-Term Memory (LSTM) network architectures, the repository features hybrid hydrological models, which combine an LSTM network with a process-based rainfall-runoff model, as well as a transformer-based hydrological model. Regional hydrological models for several datasets, namely CAMELS_GB, CAMELS_US, CAMELS_CH, CAMELS_DE, and Caravan, are readily available to the user.
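To illustrate the hybrid idea, below is a minimal sketch of an LSTM that produces dynamic (time-varying) parameters for a simple linear-reservoir bucket. The class name, parameter names, and the single-bucket model are hypothetical and deliberately simplistic; the repository's actual hybrid models are more elaborate:

```python
import torch
import torch.nn as nn

class HybridLSTMBucket(nn.Module):
    """Hypothetical sketch: an LSTM predicts time-varying parameters of a
    single linear-reservoir ('bucket') rainfall-runoff model."""

    def __init__(self, n_inputs: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden_size, batch_first=True)
        # Two dynamic parameters per time step: an effective-precipitation
        # fraction (alpha) and an outflow coefficient (k).
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, x: torch.Tensor, precip: torch.Tensor) -> torch.Tensor:
        # x: [batch, time, n_inputs] meteorological forcings
        # precip: [batch, time] precipitation
        h, _ = self.lstm(x)
        params = torch.sigmoid(self.head(h))  # constrain parameters to (0, 1)
        alpha, k = params[..., 0], params[..., 1]
        storage = x.new_zeros(x.shape[0])
        flows = []
        for t in range(x.shape[1]):           # explicit bucket water balance
            storage = storage + alpha[:, t] * precip[:, t]
            q = k[:, t] * storage             # linear-reservoir outflow
            storage = storage - q
            flows.append(q)
        return torch.stack(flows, dim=1)      # simulated discharge [batch, time]
```

Because the water-balance loop is written in plain PyTorch, gradients flow through the bucket and the LSTM can be trained end-to-end against observed discharge.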


Structure of the repository:

The codes in this repository are provided as Python scripts. Additionally, several experiments are available as Jupyter notebooks for easy reproduction and execution. Detailed documentation for the repository can be found at Hy2DL.readthedocs.io.

The following is a quick overview of the repository structure:

  • data: Information necessary to run the codes. The dataset chosen for the analysis should be added here. This folder also contains a .txt file with the catchment IDs, in a format consistent with the nomenclature of the original dataset.
  • aux_functions: Auxiliary functions used to run the codes as Python scripts.
  • benchmarks: Information from other studies that was used to benchmark our models.
  • conceptual_models: Codes to calibrate the basin-wise, process-based hydrological models. The calibration routines are based on the SPOTPY library (https://spotpy.readthedocs.io/en/latest/); a minimal calibration sketch is shown after this list. The process-based models are used as baselines against which the hybrid models are compared.
  • datasetzoo: Codes to process the datasets and incorporate them into the models.
  • experiments: Jupyter notebooks to run the experiments.
  • modelzoo: Codes of the different models that can be used.
  • results: Folder where the results generated by all the codes are stored.
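As referenced in the conceptual_models item above, the process-based baselines are calibrated with SPOTPY. The following self-contained sketch only illustrates the general SPOTPY workflow; the one-parameter bucket, the class name BucketSetup, and the synthetic data are hypothetical, not the repository's actual calibration code:

```python
import numpy as np
import spotpy

class BucketSetup:
    """Hypothetical SPOTPY setup calibrating a one-parameter linear bucket."""

    def __init__(self, precip, q_obs):
        self.precip, self.q_obs = precip, q_obs
        # Single parameter to calibrate: outflow coefficient k.
        self.params = [spotpy.parameter.Uniform("k", 0.01, 0.99)]

    def parameters(self):
        return spotpy.parameter.generate(self.params)

    def simulation(self, vector):
        k, storage, q_sim = vector[0], 0.0, []
        for p in self.precip:          # simple bucket water balance
            storage += p
            q = k * storage            # linear-reservoir outflow
            storage -= q
            q_sim.append(q)
        return q_sim

    def evaluation(self):
        return self.q_obs

    def objectivefunction(self, simulation, evaluation):
        # SCE-UA minimizes the objective, so return a cost (here: RMSE).
        return spotpy.objectivefunctions.rmse(evaluation, simulation)

# Calibrate against synthetic 'observations' with the SCE-UA algorithm
precip = np.random.rand(365)
q_obs = list(0.3 * precip)             # placeholder observed discharge
setup = BucketSetup(precip, q_obs)
sampler = spotpy.algorithms.sceua(setup, dbname="bucket_calib", dbformat="csv")
sampler.sample(500)
```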

Dependencies

The packages required to run the codes are listed at the beginning of each notebook. Note that the data-driven models run considerably faster on a GPU, so a PyTorch version with GPU support should be installed.
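A quick, generic way to check that the installed PyTorch build can actually see a GPU (this is a standard check, not a snippet from the notebooks themselves):

```python
import torch

# Use the GPU if PyTorch was installed with CUDA support, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
```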

Citation:

This code is part of our study:

Acuña Espinoza, E., Loritz, R., Álvarez Chaves, M., Bäuerle, N., and Ehret, U.: To bucket or not to bucket? Analyzing the performance and interpretability of hybrid hydrological models with dynamic parameterization, Hydrol. Earth Syst. Sci., 28, https://doi.org/10.5194/hess-28-2705-2024, 2024.
  • If you want to reproduce the experiments of this paper, run the notebooks Hybrid_LSTM_SHM.ipynb, Hybrid_LSTM_Bucket.ipynb, Hybrid_LSTM_NonSense.ipynb, LSTM_CAMELS_GB.ipynb, and LSTM_CAMELS_US.ipynb located in Hy2DL/experiments/.
  • If you want to reproduce the figures without re-running the experiments, copy the files located in the results folder of the Zenodo record https://zenodo.org/records/11103634 into Hy2DL/results/, then run the notebook Results_Analysis.ipynb.

Authors:

Disclaimer:

No warranty is expressed or implied regarding the usefulness or completeness of the information and documentation provided. References to commercial products do not imply endorsement by the Authors. The concepts, materials, and methods used in the algorithms and described in the documentation are for informational purposes only. The Authors have made substantial effort to ensure the accuracy of the algorithms and the documentation, but neither the Authors nor their employers or funding sponsors shall be held liable for calculations and/or decisions made on the basis of the scripts and documentation. The information is provided "as is", and anyone who chooses to use it is responsible for their own choices as to what to do with the data and for the results that follow from those decisions.

This website contains external links to other websites and information provided by third parties. There may be technical inaccuracies, typographical or other errors, programming bugs, or computer viruses contained within the website or its contents. Users may use the information and links at their own risk. The Authors of this website exclude all warranties, whether express, implied, statutory, or otherwise, relating in any way to this website or its use, as well as liability (including for negligence) to users in respect of any loss or damage (including special, indirect, or consequential loss or damage such as loss of revenue, unavailability of systems, or loss of data) arising from or in connection with any use of the information on, or access through, this website, for any reason whatsoever (including negligence).