This repository contains code for benchmarking LP/MILP solvers, and an interactive website for analyzing the results.
Before you begin, make sure your development environment includes Python.
Preferred versions:
- python: 3.12.4
- pip: 24.1.2
We use Python virtual environments to manage the dependencies for the website. This is how to create a virtual environment:
python -m venv venv
This is how to activate one:
- Windows
.\venv\Scripts\activate
- Linux/MacOS
source venv/bin/activate
And this is how to install the required dependencies once a venv is activated:
- Website:
pip install -r website/requirements.txt
We also use the conda package manager to run benchmarks using different solver versions, so please make sure it is installed before running the benchmark runner.
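You can quickly verify that conda is available before invoking the runner, for example:
conda --version   # check that conda is installed
conda env list    # list existing environments, including any created by the benchmark runner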
We use the ruff code linter and formatter, and GitHub Actions runs various pre-commit checks to ensure code and files are clean.
You can install a git pre-commit hook that ensures your changes are formatted and free of lint issues before new commits are created:
pip install pre-commit
pre-commit install
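The same checks can also be run manually at any time, assuming ruff and pre-commit are installed in your environment:
ruff check .                  # lint the repository
ruff format .                 # apply the formatter
pre-commit run --all-files    # run all configured pre-commit hooks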
If you want to skip these pre-commit steps for a particular commit, you can run:
git commit --no-verify
- The PyPSA benchmarks in benchmarks/pypsa/ can be generated using the Dockerfile present in that directory. Please see the instructions there for more details.
- The JuMP-HiGHS benchmarks in benchmarks/jump_highs_platform/ contain only the metadata for the benchmarks hosted at https://github.com/jump-dev/open-energy-modeling-benchmarks/tree/main/instances. These are fetched automatically from GitHub by the benchmark runner.
- The metadata of all benchmarks under benchmarks/ is collected into a unified results/metadata.yaml file by running:
  python benchmarks/merge_metadata.py
- The file benchmarks/benchmark_config.yaml specifies the names, sizes (instances), and URLs of the LP/MPS files for each benchmark. It is used by the benchmark runner; a hypothetical sketch of its structure is shown after this list.
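For illustration only, an entry in benchmarks/benchmark_config.yaml might look roughly like the sketch below. The field names are assumptions based on the description above (benchmark names, sizes/instances, and LP/MPS file URLs), so consult the actual file for the real schema.
benchmarks:
  - name: example-benchmark          # hypothetical benchmark name
    sizes:
      - name: small                  # hypothetical instance label
        url: https://example.com/example-benchmark-small.lp    # hypothetical LP file URL
      - name: large
        url: https://example.com/example-benchmark-large.mps   # hypothetical MPS file URL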
The benchmark runner script creates conda environments containing the solvers and other prerequisites, so a Python virtual environment is not needed here.
./runner/benchmark_all.sh ./benchmarks/benchmark_config.yaml
The script will save the measured runtime and memory consumption into a CSV file in results/ that the website will then read and display.
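If you want a quick look at the raw numbers without starting the website, you can inspect the CSV directly; the exact filename depends on the runner, so the wildcard below is only a convenience:
head -n 5 results/*.csv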
The script has options, e.g. to run only particular years, that you can see with the -h flag:
Usage: ./runner/benchmark_all.sh [-a] [-y "<space separated years>"] <benchmarks yaml file>
Runs the solvers from the specified years (default all) on the benchmarks in the given file
Options:
-a Append to the results CSV file instead of overwriting. Default: overwrite
-y A space separated string of years to run. Default: 2020 2021 2022 2023 2024
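For example, based on the usage above, the following appends results for only the 2023 and 2024 solvers to the existing CSV file:
./runner/benchmark_all.sh -a -y "2023 2024" ./benchmarks/benchmark_config.yaml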
The benchmark_all.sh script activates the appropriate conda environment and then calls python runner/run_benchmarks.py.
This script can also be called directly, if required, but you must be in a conda environment that contains the solvers you want to benchmark.
For example:
python runner/run_benchmarks.py benchmarks/benchmark_config.yaml 2024
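Including the conda activation step, a direct invocation might look like this (the environment name is hypothetical; use whichever environment contains the solvers you want):
conda activate solvers-2024   # hypothetical environment containing the solvers
python runner/run_benchmarks.py benchmarks/benchmark_config.yaml 2024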
Remember to activate the virtual environment containing the website's requirements, and then run:
streamlit run website/app.py
The website will be running on: http://localhost:8501
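If port 8501 is already in use, Streamlit accepts a different port on the command line, for example:
streamlit run website/app.py --server.port 8502
The website will then be available at http://localhost:8502 instead.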
You can also package the website as a Docker image snapshot:
- Build the Image:
  docker build -t benchmark-website-snapshot .
- Run the Docker Container:
  docker run -p 8501:8501 benchmark-website-snapshot
- Save the Image:
  docker save -o benchmark-website-snapshot.tar benchmark-website-snapshot
- Load the Image:
  docker load < benchmark-website-snapshot.tar
- Run the Docker Container:
  docker run -p 8501:8501 benchmark-website-snapshot