
Commit

Add descriptions
hendrikmakait committed Oct 30, 2024
1 parent 9260c11 commit 36dcc86
Showing 2 changed files with 28 additions and 1 deletion.
17 changes: 16 additions & 1 deletion tests/geospatial/test_cloud_optimize.py
@@ -16,6 +16,21 @@ def test_cloud_optimize(
"large": {"n_workers": 200},
},
):
"""
This benchmark loads the NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP-CMIP6)
dataset stored in NetCDF, rechunks it from time-oriented chunks to spatial chunks, and writes it
to a Zarr dataset.
The benchmark can be scaled across these dimensions:
* Models
* Variables
* Time
* Space
* Cluster size
At the moment, it is not scaled along the temporal or spatial dimensions.
"""
with client_factory(
**scale_kwargs[scale], **cluster_kwargs
) as client: # noqa: F841
@@ -96,5 +111,5 @@ def test_cloud_optimize(
# Rechunk from "pancake" to "pencil" format
ds = ds.chunk({"time": -1, "lon": "auto", "lat": "auto"})

# Write out to a Zar dataset
# Write out to a Zarr dataset
ds.to_zarr(s3_url)
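
For context on the step above: NetCDF inputs like these typically store one "pancake" chunk (a full lat/lon slab) per time step, while the Zarr output uses "pencil" chunks that span the whole time axis over a small spatial window. A minimal sketch of the same pattern with xarray and Dask, where the input glob and output store are placeholders rather than paths from this repository:

import xarray as xr

# Placeholder input: any set of NetCDF files with (time, lat, lon) variables.
ds = xr.open_mfdataset("data/*.nc", engine="h5netcdf", chunks={})

# Rechunk from "pancake" (one lat/lon slab per time step) to "pencil"
# (all time steps over a small spatial window); "auto" lets Dask pick
# lat/lon chunk sizes near its default target chunk size.
ds = ds.chunk({"time": -1, "lon": "auto", "lat": "auto"})

# Write the rechunked dataset to a placeholder local Zarr store.
ds.to_zarr("cloud_optimized.zarr", mode="w")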
12 changes: 12 additions & 0 deletions tests/geospatial/test_satellite_filtering.py
@@ -67,6 +67,18 @@ def test_satellite_filtering(
"large": {"n_workers": 100},
},
):
"""
This benchmark processes Sentinel-2 satellite imagery. It computes the monthly average of a humidity index
and stores the result to a Zarr dataset.
The benchmark can be scaled across these dimensions:
* Indices to calculate
* Time
* Space
* Cluster size
At the moment, the spatial extent is fixed to Germany and only a single index is derived.
"""
with client_factory(
**scale_kwargs[scale],
env={
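
The docstring does not name the humidity index, so the following is only a hedged sketch of one plausible computation: a normalized moisture index (NDMI, from Sentinel-2 bands B8A and B11) averaged per month with xarray. The file names and the choice of index are assumptions, not taken from this repository:

import xarray as xr

# Assumed inputs: NIR (B8A) and SWIR (B11) reflectance stacks with a "time"
# axis; the file names are placeholders, not paths from this repository.
nir = xr.open_dataarray("sentinel2_b8a.nc")
swir = xr.open_dataarray("sentinel2_b11.nc")

# NDMI is one common moisture index; the benchmark's actual index is unnamed.
ndmi = (nir - swir) / (nir + swir)

# Monthly average over time, then persist the result as Zarr.
monthly = ndmi.resample(time="1MS").mean()
monthly.to_dataset(name="ndmi").to_zarr("ndmi_monthly.zarr", mode="w")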
