The goal of this project is to run repeatable performance tests against different technical stacks. It is primarily designed to be executed on an AWS account, but the stack scenarios are developed and designed to run in Docker.
| Stack | Health Check | Hash SHA 256 | Cipher Bcrypt | SieveOfEratosthenes |
| --- | --- | --- | --- | --- |
| Elixir (Plug Cowboy) | ✅ | ✅ | ✅ | ✅ |
| Go (Gin) | ✅ | ✅ | ✅ | |
| Java (Spring Boot MVC) | ✅ | ✅ | ✅ | ✅ |
| Java (Spring Boot Webflux) | ✅ | ✅ | ✅ | ✅ |
| NodeJs (Express) | ✅ | ✅ | ✅ | ✅ |
| NodeJs (Fastify) | ✅ | ✅ | ✅ | ✅ |
| NodeJs (NestJs) | ✅ | ✅ | ✅ | ✅ |
| NodeJs (NestJs & Rxjs) | ✅ | ✅ | ✅ | |
| NodeJs (NestJs & Fastify Adapter & Rxjs) | ✅ | ✅ | ✅ | |
| Rust (Actix) | ✅ | ✅ | ✅ | |
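Each scenario is exposed as an HTTP endpoint on the stack under test. As an illustration only, assuming the service listens on port 8080 (the port the security group must open) and using hypothetical route paths, the scenarios could be exercised like this:

```sh
# Route paths are hypothetical; check each stack's source for the real ones.
curl http://localhost:8080/health              # Health Check: trivial response
curl http://localhost:8080/hash?text=hello     # Hash SHA 256: hashes an input
curl http://localhost:8080/cipher?text=hello   # Cipher Bcrypt: CPU-heavy password hashing
curl http://localhost:8080/sieve?limit=1000    # SieveOfEratosthenes: primes up to a limit
```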
See the current results on our GitHub Page.
To run this project you need:
- aws-cli and an AWS account
- nodejs for the reports
- a terminal or emulator with ssh for remote connections
- jq, a lightweight and flexible command-line JSON processor. Download here.
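A quick sanity check that the prerequisites are installed (these are the standard version flags for each tool):

```sh
aws --version   # aws-cli
node --version  # nodejs, used for the reports
ssh -V          # ssh client for remote connections
jq --version    # jq JSON processor
```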
- Clone the repo

  ```sh
  git clone https://github.com/bancolombia/performance-benchmark-stacks
  cd performance-benchmark-stacks
  ```
- Install NPM packages for the reports

  ```sh
  npm install
  ```
- Build your configuration in `config.json`

  ```sh
  cp .config.json config.json
  ```
{ "instance": "t2.micro", # AWS instance type "key": "reactive", # Instance private key name, if name is `reactive` the key file should be in root of this project with `reactive.pem` name. "securityGroup": "sg-00000000000000000", # Security group for your instances, should allow requests to the 8080 port "subnet": "subnet-00000000000000000", "amiUser": "ubuntu", # default user of the ami "ami": "ami-03d315ad33b9d49c4", # ami id, if you want to change it, you should change the docker installation file, located in the infra folder "benchRepo": "https://github.com/bancolombia/performance-benchmark-stacks.git", "perfImage": "bancolombia/distributed-performance-analyzer:0.2.1" }
In the start_all.sh script you can edit the scenarios and stacks arrays to choose what to run; the script then invokes start.sh for each stack with the list of desired scenarios (see the sketch below). The start.sh script creates two instances: the first is where the stack is deployed, and the second is the performance instance.
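As a sketch of that edit, assuming the arrays are plain Bash variables (the actual names and values in start_all.sh may differ):

```sh
# Hypothetical variable names and values; match them to the ones defined in start_all.sh.
stacks=("go-gin" "nodejs-express" "rust-actix")
scenarios=("healthcheck" "hash-sha256" "cipher-bcrypt")

for stack in "${stacks[@]}"; do
  ./start.sh "$stack" "${scenarios[@]}"
done
```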
The performance tool is the distributed performance analyzer project, also available as a Docker image on Docker Hub.
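The image and tag pinned in config.json under `perfImage` can be pulled directly if you want to inspect the tool locally:

```sh
docker pull bancolombia/distributed-performance-analyzer:0.2.1
```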
It then runs every scenario against the stack and downloads the results into the .tmp/results folder.
The results will be visualized as graphs.
Run the performance tests:

```sh
./start_all.sh
```
See the open issues for a list of proposed features (and known issues).
Any contributions you make are greatly appreciated.
Please see the contribution guide.
Distributed under the MIT License. See LICENSE for more information.