# Benchmarking

This document gives an overview of the benchmarking infrastructure of Concrete.

## Concrete Python

Concrete Python uses [progress-tracker-python](https://github.com/zama-ai/progress-tracker-python) to run its benchmarks. Please refer to its README to learn how it works.

### How to run all benchmarks?

Use the Makefile target:

```shell
make benchmark
```

Note that this command removes any previous benchmark results before running the benchmarks.

### How to run a single benchmark?

Since the full benchmark suite takes a long time to run, it's not recommended during development. Instead, use the following command to run just a single benchmark:

```shell
TARGET=foo make benchmark-target
```

This command only runs the benchmarks defined in `benchmarks/foo.py`. It also retains the results of previous runs, so it can be run back to back to collect data from multiple benchmarks.
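
For example, `TARGET=primitive make benchmark-target` runs only the benchmarks defined in `benchmarks/primitive.py`.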

### What is benchmarked?

Every Python file inside the `benchmarks` directory is a benchmark script:

### primitive.py

Purpose: benchmarking simple operations (e.g., tlu, matmul, ...)

Parameters:
- operation
- inputset (e.g., different bit-widths, different shapes)
- configuration (e.g., different strategies)

Collected Metrics:
- Compilation Time (ms)
- Complexity
- Key Generation Time (ms)
- Evaluation Key Size (MB)
- Input Ciphertext Size (MB)
- Output Ciphertext Size (MB)
- Encryption Time (ms)
- Evaluation Time (ms)
- Decryption Time (ms)
- Accuracy
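
As a rough illustration of how these metrics map onto the progress tracker, here is a minimal sketch in the spirit of `primitive.py`. It is not the actual script: the target definition, the operation, and the inputset are invented for this example, only a subset of the metrics above is collected, and the actual script sweeps many more operations, bit-widths, and configurations.

```python
# a minimal, illustrative primitive-style benchmark (not the actual primitive.py)
import random

import py_progress_tracker as progress

from concrete import fhe

targets = [
    {
        "id": "floor-division :: bit_width = 4",
        "name": "Floor division of 4-bit values by 2",
        "parameters": {"bit_width": 4},
    },
]

@progress.track(targets)
def main(bit_width):
    # the operation being benchmarked (a simple univariate operation)
    @fhe.compiler({"x": "encrypted"})
    def operation(x):
        return x // 2

    inputset = [random.randint(0, 2**bit_width - 1) for _ in range(100)]

    with progress.measure(id="compilation-time-ms", label="Compilation Time (ms)"):
        circuit = operation.compile(inputset)

    # complexity is a plain value rather than a timing
    progress.measure(id="complexity", label="Complexity", value=circuit.complexity)

    with progress.measure(id="key-generation-time-ms", label="Key Generation Time (ms)"):
        circuit.keygen()

    sample = random.randint(0, 2**bit_width - 1)

    with progress.measure(id="encryption-time-ms", label="Encryption Time (ms)"):
        encrypted = circuit.encrypt(sample)

    with progress.measure(id="evaluation-time-ms", label="Evaluation Time (ms)"):
        result = circuit.run(encrypted)

    with progress.measure(id="decryption-time-ms", label="Decryption Time (ms)"):
        output = circuit.decrypt(result)

    assert output == sample // 2
```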

### static_kvdb.py

Purpose: benchmarking the key-value database example

Parameters:
- number of entries
- key and value sizes
- chunk size

Collected Metrics:
- Evaluation Time (ms) of inserting into the database
- Evaluation Time (ms) of querying the database
- Evaluation Time (ms) of replacing within the database

### levenshtein_distance.py

Purpose: benchmarking the Levenshtein distance example

Parameters:
- alphabet
- maximum input size

Collected Metrics:
- Evaluation Time (ms) of the worst case (i.e., both inputs have the maximum input size)

### game_of_life.py

Purpose: benchmarking the Game of Life example

Parameters:
- dimensions
- implementations

Collected Metrics:
- Evaluation Time (ms) of computing the next state

### How to add new benchmarks?

Simply add a new Python script in the `benchmarks` directory and write your logic there.

The recommended file structure is as follows:

```python
# import progress tracker
import py_progress_tracker as progress

# import any other dependencies
from concrete import fhe

# create a list of targets to benchmark
targets = [
    {
        "id": (
            f"name-of-the-benchmark :: "
            f"parameter1 = {foo} | parameter2 = {bar}"
        ),
        "name": (
            f"Name of the benchmark with parameter1 of {foo} and parameter2 of {bar}"
        ),
        "parameters": {
            "parameter1": foo,
            "parameter2": bar,
        },
    }
]

# write the benchmark logic
@progress.track(targets)
def main(parameter1, parameter2):
    ...

    # to track timings
    with progress.measure(id="some-metric-ms", label="Some metric (ms)"):
        # execution time of this block will be measured
        ...

    ...

    # to track values
    progress.measure(id="another-metric", label="Another metric", value=some_metric)

    ...
```

Feel free to check `benchmarks/primitive.py` to see this structure in action.
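
Once the new script is in place, it is picked up by `make benchmark` along with the rest of the suite, and it can be run on its own using the `TARGET` mechanism described above.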