This repository contains benchmarks of Zarr V3 implementations.
> [!NOTE]
> Contributions are welcome for additional benchmarks, more implementations, or otherwise cleaning up this repository.
> Also consider restarting development of the official Zarr benchmark repository: https://github.com/zarr-developers/zarr-benchmark
## Implementations

- [LDeakin/zarrs](https://github.com/LDeakin/zarrs) via [LDeakin/zarrs_tools](https://github.com/LDeakin/zarrs_tools)
  - Read executable: `zarrs_benchmark_read_sync`
  - Round trip executable: `zarrs_reencode`
- Python (v3.12.7):
  - [google/tensorstore](https://github.com/google/tensorstore)
  - [zarr-developers/zarr-python](https://github.com/zarr-developers/zarr-python)
    - With and without the `ZarrsCodecPipeline` from [ilan-gold/zarrs-python](https://github.com/ilan-gold/zarrs-python)
    - With and without `dask`
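For reference, a minimal sketch of how the `ZarrsCodecPipeline` can be enabled in `zarr-python` (based on the `zarrs-python` documentation; the store path below is hypothetical, not taken from the benchmark scripts):

```python
import zarr
import zarrs  # noqa: F401  (provides the ZarrsCodecPipeline)

# Route zarr-python's encoding/decoding through the Rust-backed pipeline.
zarr.config.set({"codec_pipeline.path": "zarrs.ZarrsCodecPipeline"})

# Subsequent reads and writes use the zarrs codec pipeline transparently.
arr = zarr.open_array("data/benchmark_compress.zarr")  # hypothetical path
data = arr[:]
```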
Benchmark scripts are in the `scripts` folder, and implementation versions are listed in the benchmark charts.
> [!WARNING]
> Python benchmarks are subject to the overheads of Python and may not be using an optimal API/parameters.
> Please open a PR if you can improve these benchmarks.
## Running the benchmarks

The following targets are available:

- `pydeps`: install Python dependencies (recommended to activate a venv first)
- `zarrs_tools`: install `zarrs_tools` (set `CARGO_HOME` to override the installation directory)
- `generate_data`: generate the benchmark data
- `benchmark_read_all`: run the read-all benchmark
- `benchmark_read_chunks`: run the chunk-by-chunk read benchmark
- `benchmark_roundtrip`: run the round trip benchmark
- `benchmark_all`: run all benchmarks
## Benchmark data

All datasets are `uint16` arrays.
| Name | Chunk Shape | Shard Shape | Compression | Size |
| --- | --- | --- | --- | --- |
| Uncompressed | | | None | 8.0 GB |
| Compressed | | | blosclz 9 + bitshuffling | 377 MB |
| Compressed + Sharded | | | blosclz 9 + bitshuffling | 1.1 GB |
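As an illustration of how such a dataset can be created, here is a sketch using `zarr-python`; the store path and the array/chunk/shard shapes are placeholders, not the benchmark's actual parameters (those are set by `generate_data`):

```python
import zarr
from zarr.codecs import BloscCodec, BloscShuffle

# Placeholder shapes; the real shapes are defined by the data
# generation scripts in this repository.
arr = zarr.create_array(
    store="data/benchmark_compress_shard.zarr",  # hypothetical path
    shape=(1024, 1024, 1024),
    shards=(256, 256, 256),  # shard shape (outer chunks)
    chunks=(32, 32, 32),     # inner chunk shape
    dtype="uint16",
    compressors=BloscCodec(cname="blosclz", clevel=9,
                           shuffle=BloscShuffle.bitshuffle),
)
arr[:] = 42  # write some data
```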
## Benchmark system

- AMD Ryzen 5900X
- 64GB DDR4 3600MHz (16-19-19-39)
- 2TB Samsung 990 Pro
- Ubuntu 22.04 (in Windows 11 WSL2, swap disabled, 32GB available memory)
## Round trip benchmark

This benchmark measures the time and peak memory usage to "round trip" a dataset: read it and write it back, potentially chunk-by-chunk.
- The disk cache is cleared between each measurement
- These are best of 3 measurements
Table of raw measurements (`benchmarks_roundtrip.md`)
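For intuition, a chunk-by-chunk round trip with `zarr-python` might look like the following simplified sketch (the real measurements use the executables and scripts listed above; paths here are hypothetical):

```python
import itertools
import resource
import time

import zarr

src = zarr.open_array("data/benchmark_compress.zarr")  # hypothetical path
dst = zarr.create_array("data/roundtrip.zarr", shape=src.shape,
                        chunks=src.chunks, dtype=src.dtype, overwrite=True)

start = time.perf_counter()
# Iterate over the chunk grid, copying one chunk-aligned region at a time
# so peak memory stays near the size of a single decoded chunk.
ranges = [range(0, s, c) for s, c in zip(src.shape, src.chunks)]
for origin in itertools.product(*ranges):
    region = tuple(slice(o, min(o + c, s))
                   for o, c, s in zip(origin, src.chunks, src.shape))
    dst[region] = src[region]
elapsed = time.perf_counter() - start

peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # kilobytes on Linux
print(f"{elapsed:.2f} s, peak RSS ~{peak_kb / 1024:.0f} MiB")
```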
## Read benchmark (chunk-by-chunk)

This benchmark measures the minimum time and peak memory usage to read a dataset chunk-by-chunk into memory.
- The disk cache is cleared between each measurement
- These are best of 1 measurements (single runs)
Table of raw measurements (`benchmarks_read_chunks.md`)
> [!NOTE]
> `zarr-python` benchmarks with sharding are not visible in this plot.
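A sketch of what reading chunk-by-chunk means here, again with `zarr-python` for illustration (the same chunk-grid iteration as the round trip sketch above, without the write; the path is hypothetical):

```python
import itertools

import zarr

arr = zarr.open_array("data/benchmark_compress.zarr")  # hypothetical path

# Read one chunk-aligned region at a time; nothing is retained, so peak
# memory stays near the size of a single decoded chunk.
ranges = [range(0, s, c) for s, c in zip(arr.shape, arr.chunks)]
for origin in itertools.product(*ranges):
    region = tuple(slice(o, min(o + c, s))
                   for o, c, s in zip(origin, arr.chunks, arr.shape))
    chunk = arr[region]
```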
## Read benchmark (read all)

This benchmark measures the minimum time and peak memory usage to read an entire dataset into memory.
- The disk cache is cleared between each measurement
- These are best of 3 measurements
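A sketch of the measurement protocol these bullets describe, assuming a Linux page-cache drop between runs (requires root; the actual harness lives in the `scripts` folder, and the path below is hypothetical):

```python
import subprocess
import time

import zarr

def drop_page_cache() -> None:
    # Flush dirty pages, then drop the Linux page cache (requires root).
    subprocess.run(["sync"], check=True)
    subprocess.run(["sudo", "tee", "/proc/sys/vm/drop_caches"],
                   input=b"3", stdout=subprocess.DEVNULL, check=True)

def read_all_seconds(path: str) -> float:
    start = time.perf_counter()
    zarr.open_array(path)[:]  # materialise the entire array in memory
    return time.perf_counter() - start

# Best of 3, clearing the disk cache before each measurement.
times = []
for _ in range(3):
    drop_page_cache()
    times.append(read_all_seconds("data/benchmark.zarr"))  # hypothetical path
print(f"best of 3: {min(times):.2f} s")
```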