madara-alliance/madara-bench



A simple Starknet RPC benchmarking tool

How it works

MADARA bench runs various Starknet RPC nodes in isolated, resource-constrained containers for testing. These nodes are set up automatically and their RPC endpoints are exposed for you to test through an online API (served with FastAPI).

Dependencies

Tip

If you are using NixOS or the Nix package manager, you can skip to running (you will still need to specify secrets).

MADARA bench currently only supports Linux and requires docker and docker-compose to be installed on the host system.

Step 1: installing build-essential

This is needed to build certain Python packages:

sudo apt update
sudo apt install build-essential

Step 2: installing python 3.12

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install -y python3.12 python3.12-venv

Step 3: installing poetry (python package manager)

official instructions

curl -sSL https://install.python-poetry.org | python3 -

Step 4: specifying secrets

The following are required for MADARA bench to start:

echo *** > secrets/gateway_key.secret
echo *** > secrets/rpc_api.secret
echo *** > secrets/rpc_api_ws.secret
echo *** > secrets/db_password.secret 
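
Note that the secrets/ directory must exist before writing to it. A small sketch to create it and check that every secret was actually written (the file list matches the commands above):

```shell
# Ensure the secrets directory exists before writing the files above.
mkdir -p secrets

# Verify that every required secret file exists and is non-empty.
for f in gateway_key rpc_api rpc_api_ws db_password; do
  if [ -s "secrets/$f.secret" ]; then
    echo "ok: secrets/$f.secret"
  else
    echo "missing or empty: secrets/$f.secret"
  fi
done
```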

Warning

Make sure you pick the right db_password: you will not be able to change it without resetting the database with make clean-db.

You can get an RPC API key from Alchemy, Infura or BlastAPI, amongst others. gateway_key is a special key which has been given out to node development teams in the ecosystem; it is used to bypass sequencer rate limits for testing purposes and is not available publicly.

Important

By default, MADARA bench runs on Starknet testnet, and your RPC API keys will need to point to Ethereum Sepolia for this to work.

Running

To start MADARA bench, run the following command:

make start

This will automatically build Docker images for each RPC node, create individual volumes for each database, start the nodes and serve a FastAPI endpoint at 0.0.0.0:8000/docs.
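
If you want to wait for the endpoint from a script, you can poll it with curl. A sketch, assuming the default 0.0.0.0:8000 address (FastAPI serves its schema at /openapi.json by default):

```shell
# Poll the FastAPI endpoint until it responds or the retries run out.
# Assumes the default bind address 0.0.0.0:8000 used by `make start`.
wait_for_api() {
  retries=${1:-30}        # number of attempts (default 30)
  while [ "$retries" -gt 0 ]; do
    if curl -fsS http://0.0.0.0:8000/openapi.json > /dev/null 2>&1; then
      return 0            # API is up
    fi
    retries=$((retries - 1))
    sleep 2
  done
  return 1                # timed out
}
```

Usage: `wait_for_api && echo "MADARA bench is up"`.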

Warning

If this is the first time you are running MADARA bench, sit back and grab a cup of coffee as building the node images can take a while.

To stop MADARA bench, run:

make stop

Or if you are using the Nix runner, just CTRL-C. You can also get a list of all available commands by running:

make help

You can restart the benchmarks from zero, erasing all data in the process, by running:

make clean-db

Nix

If you are using Nixos or the nix package manager, you do not need to install any dependencies and can instead just run:

nix develop --extra-experimental-features "nix-command flakes" .#start

This will download every dependency into a development shell, independent from the rest of your system, and start MADARA bench. This is the preferred way of running MADARA bench; it will also handle auto-closing Docker containers for you.


As a service

Important

The following instructions assume you have set up MADARA bench to run under nix. Otherwise, you will have to install the required dependencies system-wide.

Make sure the user running MADARA bench as a service is part of the docker group. This way you can run it as a user service instead of a root service.

To run MADARA bench as a user service, follow these instructions:

  1. Replace /path/to/madara-bench in madara-bench.service with its actual path on your machine.

  2. If it does not exist already, create $HOME/.config/systemd/user/:

mkdir -p $HOME/.config/systemd/user

  3. Copy madara-bench.service over to $HOME/.config/systemd/user/:

cp madara-bench.service $HOME/.config/systemd/user

  4. Start the service:

systemctl --user daemon-reload
systemctl --user enable madara-bench.service
systemctl --user start madara-bench.service
journalctl --user -u madara-bench -f
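
For reference, a user unit for this kind of setup generally looks like the following. This is an illustrative sketch, not the madara-bench.service shipped with the repository: the WorkingDirectory and ExecStart values are assumptions based on the Nix run command above.

```ini
[Unit]
Description=MADARA bench
After=network-online.target

[Service]
# Assumed path; adjust to where you cloned the repository.
WorkingDirectory=/path/to/madara-bench
# Assumed start command, mirroring the Nix runner described above.
ExecStart=/usr/bin/env nix develop --extra-experimental-features "nix-command flakes" .#start
Restart=on-failure

[Install]
WantedBy=default.target
```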

Benchmarks

Once you have started MADARA bench, start by heading to your FastAPI endpoint. There you will see multiple sections:

  • bench: read system and RPC benchmarks, or generate performance graphs
  • read: query individual RPC methods on each node
  • trace: run tracing RPC calls on each node
  • debug: display useful extra information

RPC benchmarks are procedural and run continuously in a background thread; that is to say, inputs are generated automatically as the chain keeps making progress. This way you do not need to worry about running the tests yourself or passing valid, up-to-date parameters: you can just focus on the results.

Note

When needed, RPC method inputs are generated by sampling from a random point in the last 2000 blocks of the chain. For a more concrete example of how this works, check generators.py.
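The sampling step can be sketched as follows. This is a toy illustration, not the actual code in generators.py; in practice the chain head would come from the nodes themselves (e.g. starknet_blockNumber) rather than being passed in by hand:

```shell
# Pick a random block number from the last 2000 blocks of the chain.
# `head` is the latest block number, passed in as an argument here
# purely for illustration.
sample_block() {
  head=$1
  window=2000
  awk -v head="$head" -v w="$window" \
    'BEGIN { srand(); print head - w + int(rand() * (w + 1)) }'
}
```

For example, `sample_block 100000` prints a block number between 98000 and 100000.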
