This benchmark measures communication efficiency between services using REST versus gRPC unary calls. The specific scenario is a BFF (backend for frontend) that consumes data from a downstream service.
With client-side load balancing, the client knows about the available endpoints and selects a different one for each gRPC call, spreading the load across them. Endpoints are resolved from an address using the `dns` scheme, for example `dns:///my-example-host`.
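A minimal channel setup for this scenario, assuming the `Grpc.Net.Client.Configuration` load-balancing APIs and a hypothetical generated `Greeter.GreeterClient`, might look like:

```csharp
using Grpc.Core;
using Grpc.Net.Client;
using Grpc.Net.Client.Configuration;

// The dns scheme resolves my-example-host to its endpoints;
// round_robin distributes calls across the resolved endpoints.
var channel = GrpcChannel.ForAddress(
    "dns:///my-example-host",
    new GrpcChannelOptions
    {
        Credentials = ChannelCredentials.Insecure,
        ServiceConfig = new ServiceConfig { LoadBalancingConfigs = { new RoundRobinConfig() } }
    });

var client = new Greeter.GreeterClient(channel); // hypothetical generated client
```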
The gRPC client factory offers a centralized way to configure channels and automatically reuses the underlying channels.
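A sketch of such a registration in an ASP.NET Core BFF, using the hypothetical `Greeter.GreeterClient` and a placeholder address:

```csharp
var builder = WebApplication.CreateBuilder(args);

// The factory manages the channel/handler for this client type and reuses it across resolutions.
builder.Services.AddGrpcClient<Greeter.GreeterClient>(o =>
{
    o.Address = new Uri("https://my-example-host"); // placeholder address
});
```

The registered client can then be injected wherever the BFF needs to call the service.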
Each call that has to establish a new HTTP/2 connection requires multiple network round-trips between the client and the server (reusing a single channel avoids this cost, as sketched after the list):
- Opening a socket
- Establishing TCP connection
- Negotiating TLS
- Starting HTTP/2 connection
- Making the gRPC call
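A channel-reuse sketch (the `Greeter` client and its `SayHello` call are placeholders): because the channel is created once, that setup cost is paid only by the first call.

```csharp
using Grpc.Net.Client;

// Create the channel once and reuse it for every call.
var channel = GrpcChannel.ForAddress("https://my-example-host");
var client = new Greeter.GreeterClient(channel);

// Only the first call pays for socket/TCP/TLS/HTTP/2 setup; later calls reuse the connection.
var first = await client.SayHelloAsync(new HelloRequest { Name = "warm-up" });
var second = await client.SayHelloAsync(new HelloRequest { Name = "reused" });
```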
HTTP/2 connections typically limit the maximum number of concurrent streams (active calls) on a single connection at one time. Setting `EnableMultipleHttp2Connections` lets the client open additional connections when that limit is reached:
```csharp
var handler = new SocketsHttpHandler
{
    // Open additional HTTP/2 connections when the concurrent stream limit is reached
    EnableMultipleHttp2Connections = true
};
```
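The handler is then attached to the channel (the address is a placeholder):

```csharp
var channel = GrpcChannel.ForAddress("https://my-example-host", new GrpcChannelOptions
{
    HttpHandler = handler // the SocketsHttpHandler configured above
});
```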
Keep alive pings keep HTTP/2 connections open during periods of inactivity. Having an existing connection ready when an app resumes activity allows the initial gRPC calls to be made quickly, without the delay of re-establishing the connection:
```csharp
var handler = new SocketsHttpHandler
{
    // Never close idle connections in the pool
    PooledConnectionIdleTimeout = Timeout.InfiniteTimeSpan,
    // Send a keep alive ping after 60 seconds without activity
    KeepAlivePingDelay = TimeSpan.FromSeconds(60),
    // Consider the connection dead if a ping isn't answered within 30 seconds
    KeepAlivePingTimeout = TimeSpan.FromSeconds(30)
};
```
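When the gRPC client factory is used, the same handler settings can be applied per client; a sketch reusing the hypothetical `Greeter.GreeterClient` registration from above:

```csharp
builder.Services
    .AddGrpcClient<Greeter.GreeterClient>(o => o.Address = new Uri("https://my-example-host"))
    .ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler
    {
        EnableMultipleHttp2Connections = true,
        PooledConnectionIdleTimeout = Timeout.InfiniteTimeSpan,
        KeepAlivePingDelay = TimeSpan.FromSeconds(60),
        KeepAlivePingTimeout = TimeSpan.FromSeconds(30)
    });
```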
The repository's `./docker-compose.yaml` provisions all system dependencies with the minimal resources needed for the benchmark:

```bash
docker-compose up -d
```
```bash
cd ./test

# REST use cases
docker run --network=internal --name k6 --rm -i grafana/k6 run - <usecase-rest-health.js
docker run --network=internal --name k6 --rm -i grafana/k6 run - <usecase-rest-retrieve.js
docker run --network=internal --name k6 --rm -i grafana/k6 run - <usecase-rest-submission.js

# gRPC use cases
docker run --network=internal --name k6 --rm -i grafana/k6 run - <usecase-grpc-health.js
docker run --network=internal --name k6 --rm -i grafana/k6 run - <usecase-grpc-retrieve.js
docker run --network=internal --name k6 --rm -i grafana/k6 run - <usecase-grpc-submission.js
```