This is an API performance test comparing several Python API frameworks, including Litestar, Starlite, Starlette, and FastAPI, using the bombardier HTTP benchmarking tool.
Setup is identical for all frameworks.
- Applications reside in the `frameworks` folder and consist of a single file named `<framework_name>_app.py` (see the sketch below)
- All tests are run sync and async
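As a rough sketch of what one of these single-file apps looks like, here is a Starlette variant with one sync and one async plaintext endpoint. This is illustrative only, not the suite's actual code; route paths, handler names, and payload construction are assumptions:

```python
# frameworks/starlette_app.py -- illustrative sketch, not the benchmark's real handlers.
# Each app file defines one endpoint per test case, in sync and async variants.
from starlette.applications import Starlette
from starlette.responses import PlainTextResponse
from starlette.routing import Route

PAYLOAD_100B = b"a" * 100  # 100 bytes of plaintext


async def plaintext_100b_async(request):
    return PlainTextResponse(PAYLOAD_100B)


def plaintext_100b_sync(request):
    # Starlette runs sync endpoints in a threadpool.
    return PlainTextResponse(PAYLOAD_100B)


app = Starlette(
    routes=[
        Route("/async-plaintext-100B", plaintext_100b_async),
        Route("/sync-plaintext-100B", plaintext_100b_sync),
    ]
)
```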
Plaintext responses:

- Sending 100 bytes plaintext
- Sending 1kB plaintext
- Sending 10kB plaintext
- Sending 100kB plaintext
- Sending 500kB plaintext
- Sending 1MB plaintext
- Sending 5MB plaintext
Serializing a dictionary into JSON (a sketch follows the list):
- Serializing and sending 1kB JSON
- Serializing and sending 10kB JSON
- Serializing and sending 100kB JSON
- Serializing and sending 500kB JSON
- Serializing and sending 1MB JSON
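A hedged sketch of one of these JSON cases; the dict contents below are invented, and only the rough payload size matters. Wiring the handler into an app mirrors the plaintext sketch above:

```python
# Illustrative only: the framework serializes a plain dict into JSON on every
# request, so serialization cost is part of what gets measured.
from starlette.responses import JSONResponse

# A dict that serializes to roughly 1kB of JSON.
JSON_1K = {f"key_{i}": "x" * 10 for i in range(50)}


async def json_1k(request):
    return JSONResponse(JSON_1K)
```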
Serialization of nested objects, only supported by Litestar, Starlite, and FastAPI (the data shape is sketched after the list):
- Serializing 50 dataclass objects each referencing 2 more dataclass objects
- Serializing 100 dataclass objects each referencing 5 more dataclass objects
- Serializing 500 dataclass objects each referencing 3 more dataclass objects
- Serializing 50 pydantic objects each referencing 2 more pydantic objects
- Serializing 100 pydantic objects each referencing 5 more pydantic objects
- Serializing 500 pydantic objects each referencing 3 more pydantic objects
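The nesting pattern behind these tests looks roughly like this; field names and values are invented, and only the "N objects, each referencing M more objects" shape mirrors the list above:

```python
# Sketch of the serialization test data shape; illustrative field names only.
from dataclasses import dataclass, field


@dataclass
class Child:
    id: int
    name: str


@dataclass
class Parent:
    id: int
    children: list[Child] = field(default_factory=list)


def make_payload(num_objects: int, num_children: int) -> list[Parent]:
    """E.g. make_payload(50, 2): 50 objects, each referencing 2 more objects."""
    return [
        Parent(
            id=i,
            children=[Child(id=j, name=f"child-{i}-{j}") for j in range(num_children)],
        )
        for i in range(num_objects)
    ]


# The pydantic variants follow the same structure with pydantic.BaseModel subclasses.
payload_50x2 = make_payload(50, 2)
```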
Binary file responses (sketched below):

- Sending a 100-byte binary file
- Sending a 1kB binary file
- Sending a 50kB binary file
- Sending a 1MB binary file
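A minimal sketch of a file endpoint, again using Starlette for illustration; the file path is hypothetical, and the suite serves pre-generated files of the sizes listed above:

```python
# Illustrative file endpoint: streams a pre-generated binary file from disk.
from starlette.responses import FileResponse


async def file_50k(request):
    return FileResponse("test_data/50K.file", media_type="application/octet-stream")
```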
Path and query parameter handling, where all responses return "No Content" (a handler sketch follows the list):
- No path parameters
- Single path parameter, coerced into an integer
- Single query parameter, coerced into an integer
- A path and a query parameter, coerced into integers
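For illustration, a FastAPI handler set along these lines would cover the three parameterized cases; paths and handler names are assumptions, not the suite's actual routes:

```python
# Illustrative parameter-handling endpoints: empty 204 responses,
# with parameters coerced into integers by the framework.
from fastapi import FastAPI, Response

app = FastAPI()


@app.get("/path-param/{item_id}")
async def path_param(item_id: int):
    return Response(status_code=204)


@app.get("/query-param")
async def query_param(value: int):
    return Response(status_code=204)


@app.get("/mixed-params/{item_id}")
async def mixed_params(item_id: int, value: int):
    return Response(status_code=204)
```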
Dependency injection, not supported by Starlette (see the sketch after the list):
- Resolving 3 nested synchronous dependencies
- Resolving 3 nested asynchronous dependencies (only supported by Litestar, Starlite, and FastAPI)
- Resolving 3 nested synchronous, and 3 nested asynchronous dependencies (only supported by Litestar, Starlite, and FastAPI)
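As an illustration of what "nested dependencies" means here, a FastAPI sketch with three chained synchronous dependencies (names and return values are invented):

```python
# Illustrative nested-dependency endpoint: dep_three depends on dep_two,
# which depends on dep_one.
from fastapi import Depends, FastAPI, Response

app = FastAPI()


def dep_one() -> str:
    return "one"


def dep_two(one: str = Depends(dep_one)) -> str:
    return f"{one}-two"


def dep_three(two: str = Depends(dep_two)) -> str:
    return f"{two}-three"


@app.get("/dependencies-sync")
def sync_dependencies(value: str = Depends(dep_three)):
    # All three dependencies are resolved before this handler runs; the body stays empty.
    return Response(status_code=204)


# The async variants are the same chain built from `async def` dependencies.
```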
Modifying responses, where all responses return "No Content" (sketched after the list):
- Setting response headers
- Setting response cookies
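A sketch of the response-modification cases, using Starlette's `Response` for illustration; header and cookie names are made up:

```python
# Illustrative endpoint: a 204 response with an extra header and a cookie set on it.
from starlette.responses import Response


async def set_header_and_cookie(request):
    response = Response(status_code=204, headers={"x-benchmark": "header-value"})
    response.set_cookie("benchmark-cookie", "cookie-value")
    return response
```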
To run the benchmarks:

- Clone this repo
- Run `poetry install`
- Run tests with `poetry run bench run --rps --latency`
After the run, the results will be stored in `results/run_<run_number>.json`
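If you want to inspect a run programmatically, the results file is plain JSON. A quick, schema-agnostic way to peek at a run (assuming the default `results/` location) is:

```python
# Pick the last results file by name and show its top-level structure.
# The exact JSON layout isn't documented here, so this only lists top-level keys.
import json
from pathlib import Path

latest = sorted(Path("results").glob("run_*.json"))[-1]
data = json.loads(latest.read_text())
print(latest.name, "->", list(data))
```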
To select a framework, simply pass its name to the run command:

`bench run --rps litestar starlite starlette fastapi`
You can also benchmark a specific version or source of a framework:

- A version available on PyPI: `bench run --rps litestar@v2.0.0`
- A version from git: `bench run --rps litestar@git+branch_or_tag_name`
- A version from a specific git repository: `bench run --rps litestar@git+https://github.com/litestar-org/litestar.git@branch_or_tag_name`
- A local file: `bench run --rps litestar@file+/path/to/litestar`
You can run a single test by specifying its full name and category:

`bench run --rps litestar -t json:json-1K`
Available options for the `run` command:

| Flag | Description |
| ---- | ----------- |
| `-r, --rebuild` | rebuild docker images |
| `-L, --latency` | run latency tests |
| `-R, --rps` | run RPS tests |
| `-w, --warmup` | duration of the warmup period (default: 5s) |
| `-e, --endpoint-mode [sync\|async]` | endpoint types to select (default: sync, async) |
| `-c, --endpoint-category [plaintext\|json\|files\|params\|dynamic-response\|dependency-injection\|serialization\|post-json\|post-body]` | test types to select (default: all) |
| `-d, --duration` | duration of the RPS benchmarks (default: 15s) |
| `-l, --limit` | max requests per second for latency benchmarks (default: 20) |
| `-r, --requests` | total number of requests for latency benchmarks (default: 1000) |
- Run `bench results` to generate plots from the latest test results
- Run `bench results -s` to generate plots from the latest test results and split them into separate files for each category
PRs are welcome.
Please make sure to install pre-commit on your system, and then execute `pre-commit install` in the repository root - this will ensure the pre-commit hooks are in place.

After doing this, open a PR with your changes and a clear description of them.