docs(frontend-python): write benchmarking guide
umut-sahin committed Sep 19, 2024
1 parent c6633eb commit b906721
Showing 2 changed files with 139 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -105,6 +105,7 @@
* [Runtime dialect](explanations/RTDialect.md)
* [SDFG dialect](explanations/SDFGDialect.md)
* [Call FHE circuits from other languages](explanations/call_from_other_language.md)
* [Benchmarking](dev/benchmarking.md)
* [Making a release](explanations/releasing.md)
* [Release note](https://github.com/zama-ai/concrete/releases)
* [Feature request](https://github.com/zama-ai/concrete/issues/new?assignees=\&labels=feature\&projects=\&template=features.md)
138 changes: 138 additions & 0 deletions docs/dev/benchmarking.md
@@ -0,0 +1,138 @@
# Benchmarking

This document gives an overview of the benchmarking infrastructure of Concrete.

## Concrete Python

Concrete Python uses [progress-tracker-python](https://github.com/zama-ai/progress-tracker-python) to run its benchmarks. Please refer to its README to learn how it works.

### How to run all benchmarks?

Use a makefile target:

```shell
make benchmark
```

Note that this command removes the previous benchmark results before running the benchmarks.

### How to run a single benchmark?

Since the full benchmark suite takes a long time to run, it's not recommended during development. Instead, use the following command to run a single benchmark:

```shell
TARGET=foo make benchmark-target
```

This command only runs the benchmarks defined in `benchmarks/foo.py`. It also retains the results of previous runs, so it can be executed back to back to collect data from multiple benchmarks.
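
For example, to run only the primitive operation benchmarks (defined in `benchmarks/primitive.py`, described below):

```shell
TARGET=primitive make benchmark-target
```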

### What is benchmarked?

Every Python file inside the `benchmarks` directory is a benchmark script:

### primitive.py

Purpose: benchmarking simple operations (e.g., tlu, matmul, ...)

Parameters:
- operation
- inputset (e.g., different bit-widths, different shapes)
- configuration (e.g., different strategies)

Collected Metrics:
- Compilation Time (ms)
- Complexity
- Key Generation Time (ms)
- Evaluation Key Size (MB)
- Input Ciphertext Size (MB)
- Output Ciphertext Size (MB)
- Encryption Time (ms)
- Evaluation Time (ms)
- Decryption Time (ms)
- Accuracy
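
As a hypothetical illustration of how such a script can parameterize its targets, here is a minimal sketch; the operation name and bit widths are made up for illustration and are not the actual contents of `benchmarks/primitive.py`:

```python
# A made-up example: one target per bit width for a table lookup benchmark.
# See benchmarks/primitive.py for the real target definitions.
targets = [
    {
        "id": f"tlu :: bit_width = {bit_width}",
        "name": f"Table Lookup with {bit_width}-bit inputs",
        "parameters": {"bit_width": bit_width},
    }
    for bit_width in [2, 4, 6, 8]
]
```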

### static_kvdb.py

Purpose: benchmarking the key-value database example

Parameters:
- number of entries
- key and value sizes
- chunk size

Collected Metrics:
- Evaluation Time (ms) of inserting into the database
- Evaluation Time (ms) of querying the database
- Evaluation Time (ms) of replacing within the database
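
These timings map naturally onto `progress.measure` blocks; here is a minimal sketch, assuming metric ids and labels of our own choosing (they are not necessarily the ones used by the actual script):

```python
import py_progress_tracker as progress

# Illustrative ids and labels; the actual script may name its metrics differently.
with progress.measure(id="insertion-time-ms", label="Evaluation Time of Insertion (ms)"):
    ...  # insert an entry into the encrypted database

with progress.measure(id="query-time-ms", label="Evaluation Time of Query (ms)"):
    ...  # query an entry from the encrypted database

with progress.measure(id="replacement-time-ms", label="Evaluation Time of Replacement (ms)"):
    ...  # replace an entry within the encrypted database
```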

### levenshtein_distance.py

Purpose: benchmarking the Levenshtein distance example

Parameters:
- alphabet
- maximum input size

Collected Metrics:
- Evaluation Time (ms) of the worst case (i.e., both inputs have maximum input size)

### game_of_life.py

Purpose: benchmarking the Game of Life example

Parameters:
- dimensions
- implementations

Collected Metrics:
- Evaluation Time (ms) of computing the next state

### How to add new benchmarks?

Simply add a new Python script in the `benchmarks` directory and write your logic.

The recommended file structure is as follows:

```python
# import progress tracker
import py_progress_tracker as progress

# import any other dependencies
from concrete import fhe

# example parameter values (placeholders, replace with your actual parameters)
foo = 3
bar = 10

# create a list of targets to benchmark
targets = [
    {
        "id": (
            f"name-of-the-benchmark :: "
            f"parameter1 = {foo} | parameter2 = {bar}"
        ),
        "name": (
            f"Name of the benchmark with parameter1 of {foo} and parameter2 of {bar}"
        ),
        "parameters": {
            "parameter1": foo,
            "parameter2": bar,
        },
    }
]

# write the benchmark logic
@progress.track(targets)
def main(parameter1, parameter2):
    ...

    # to track timings
    with progress.measure(id="some-metric-ms", label="Some metric (ms)"):
        # execution time of this block will be measured
        ...

    ...

    # to track values
    progress.measure(id="another-metric", label="Another metric", value=some_metric)

    ...
```

Feel free to check `benchmarks/primitive.py` to see this structure in action.
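
Once your script is in place (say, `benchmarks/my_benchmark.py`; the name here is just an example), you can run it in isolation with the single-benchmark target described above:

```shell
TARGET=my_benchmark make benchmark-target
```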
