Identification of compounding drivers of river floods

This repository contains the code to reproduce the main procedure for identifying the compounding drivers of river floods and the associated flood complexity, as presented in the paper:

Jiang et al. (2024). Compounding effects in flood drivers challenge estimates of extreme river floods. Science Advances, 10(13), eadl4005.

Overview

The aim of this research is to investigate the compounding effects of various drivers on river floods and to understand their influence on flood severity and estimates of extreme floods. The code and data provided in this repository allow for the reproduction of the main analyses and figures presented in the paper.

The repository is structured as follows:

|- data/
|   |- sample.csv                     # Demo data for a river basin
|- libs/                              # Custom functions
|   |- plots.py
|   |- utils.py
|- outputs/                           # Folder to save the output to
|- analyze_individual_catchment.ipynb # Jupyter Notebook for the demo to obtain Figs. S4 and 4A
|- requirements.txt                   # PyPI dependencies
|- results.csv                        # Main results for all catchments
|- run.py                             # Standalone script to obtain results

Quick Start

The code was tested with Python 3.8 (on Windows 10/11 and macOS Ventura). To use this code, run the following on the command line:

a) Change into this directory

cd /path/to/flood-compounding-drivers

b) Install dependencies (conda/virtualenv is recommended to manage packages):

pip install -r ./requirements.txt

Note: a full run of the analysis (100 replicates of 5-fold cross-validation) takes roughly 30 minutes.

c) Start Jupyter Notebook and run analyze_individual_catchment.ipynb in the browser:

jupyter notebook

d) Alternatively, run the standalone script to get the results:

python run.py --input_path=./data/sample.csv --basin_size=827.00 --output_dir=./outputs/

The result should be as follows:

prop_rr  prop_tg  prop_sm  prop_sp  prop_mu  mag_ratio  mag_ttest_p  flood_com  flood_com_p  est_err
0.622    0.081    0.297    0.378    0.514    1.261      0.057        0.073      0           -37.017
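For downstream use, the whitespace-delimited table printed above can be parsed into a pandas DataFrame. This is a sketch based on the demo output shown here; the exact output format of run.py may differ slightly.

```python
import io

import pandas as pd

# The table printed by run.py for the demo catchment (copied from above).
output = """prop_rr  prop_tg  prop_sm  prop_sp  prop_mu  mag_ratio  mag_ttest_p  flood_com  flood_com_p  est_err
0.622    0.081    0.297    0.378    0.514    1.261      0.057        0.073      0           -37.017
"""

# Whitespace-delimited text parses directly with a regex separator.
df = pd.read_csv(io.StringIO(output), sep=r"\s+")

# prop_mu is the share of AM floods attributed to multiple drivers;
# in the demo basin, more than half of the floods are multi-driver.
print(df.loc[0, "prop_mu"])  # 0.514
```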

Description of results.csv (used to generate Figs. 2-5 in the paper)

results.csv contains one row per catchment, with the following columns:

- lat:         Latitude of the gauging station
- long:        Longitude of the gauging station
- prop_rr:     Proportion of annual maximum (AM) floods with recent rainfall as the main driver
- prop_tg:     Proportion of AM floods with recent temperature as the main driver
- prop_sm:     Proportion of AM floods with soil moisture as the main driver
- prop_sp:     Proportion of AM floods with snowpack as the main driver
- prop_mu:     Proportion of multi-driver floods
- mag_ratio:   Magnitude ratio of multi-driver floods to single-driver floods
- mag_ttest_p: T-test p-value for the mean magnitude of multi-driver floods vs. single-driver floods
- flood_com:   Flood complexity
- flood_com_p: Combined p-value for the flood complexity slope
- est_err:     Estimation error in the magnitude of the largest observed floods
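A typical query against these columns is to select catchments where multi-driver floods are significantly larger than single-driver floods. The sketch below uses a few invented rows in place of the shipped results.csv (the values are illustrative, not taken from the actual file):

```python
import pandas as pd

# Hypothetical rows standing in for results.csv (illustrative values only).
results = pd.DataFrame(
    {
        "prop_mu": [0.51, 0.22, 0.64],
        "mag_ratio": [1.26, 0.98, 1.41],
        "mag_ttest_p": [0.057, 0.430, 0.012],
    }
)

# Catchments where multi-driver floods are larger than single-driver
# floods (mag_ratio > 1) and the difference is significant at the 5% level.
significant = results[(results["mag_ratio"] > 1) & (results["mag_ttest_p"] < 0.05)]
print(len(significant))  # 1
```

With the real file, the same filter would be applied after `pd.read_csv("results.csv")`.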

Prepare the dataset

The catchment dataset used in the study is derived from gridded source data. After all grid-based data are prepared, the catchment-average series are computed with the tool pyscissor: https://github.com/nzahasan/pyscissor
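Conceptually, pyscissor derives, for each grid cell, the fraction of the cell that overlaps the basin polygon, and the catchment average is then a weighted mean over cells. A minimal numpy sketch of that final averaging step (the weights here are invented for illustration; in practice pyscissor computes them from a shapefile):

```python
import numpy as np

# A 2x2 grid of, say, daily rainfall values (mm).
grid = np.array([[10.0, 20.0],
                 [30.0, 40.0]])

# Fractional overlap of each cell with the basin polygon
# (illustrative numbers, not produced by pyscissor).
weights = np.array([[1.0, 0.5],
                    [0.25, 0.0]])

# Catchment average = overlap-weighted mean over all cells.
catchment_avg = np.average(grid, weights=weights)
print(catchment_avg)
```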

Contact Information

For any questions or inquiries about this research or repository, please contact the corresponding author of the paper.

License

This project is licensed under the MIT License.

When using the code from this repository, we kindly request that you cite the paper.