
Continuous Performance Benchmarking


Qi uses continuous performance benchmarking as part of its CI workflows in order to (1) measure performance on every commit, and (2) track performance changes across commits. The results may be viewed here:

Competitive benchmarks report

This report uses the vlibench library, with qi-sdk/benchmarks/competitive/report.scrbl as its entry point. It may be generated either locally or on CI via the command make new-benchmarks.
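
To regenerate it locally, run the Make target named above:

    make new-benchmarks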

Performance trends

This report is generated using github-action-benchmark.

How It's Set Up

  • There is a workflow file in .github/workflows/benchmarks.yml
  • Like most development workflows, the CI workflow is configured to run a Makefile target, in this case make report-benchmarks, to generate the performance data. This can be run locally as well, and produces the data in JSON format on standard output (STDOUT).
  • In the benchmarks.yml workflow, the output from the Make target is dumped into a file called benchmarks.txt.
  • github-action-benchmark is then invoked. It expects its input as a JSON-formatted text file, so we simply tell it, via the workflow config, to use the benchmarks.txt file produced by the previous step (the full sequence is sketched after this list).
  • This GitHub Action works by pushing the data and an HTML facade to the gh-pages branch of the Qi repo, at the specified folder (benchmarks).
  • Note that the gh-pages branch is also used to host backup docs via the GitHub Pages Deploy action. That workflow (docs.yml) therefore needs to exclude the benchmarks folder from its "clean" step, so that it doesn't delete the benchmarks whenever the documentation is updated.
  • The deployed HTML page, including the benchmark data in charts indexed by commit, is available at https://drym-org.github.io/qi/benchmarks/
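
Putting these pieces together, the workflow looks roughly like the following. This is a minimal sketch, not the actual benchmarks.yml: the checkout and Racket setup steps are elided, and the customSmallerIsBetter tool name and the auto-push flag are assumptions -- only the make report-benchmarks target, the benchmarks.txt filename, the "Run benchmark" step name, and the benchmarks destination folder come from the description above.

    name: Benchmarks

    on:
      push:
        branches: [main]

    jobs:
      benchmarks:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # ... Racket and Qi setup steps elided ...
          - name: Run benchmark
            run: make report-benchmarks | tee benchmarks.txt
          - name: Publish results
            uses: benchmark-action/github-action-benchmark@v1
            with:
              tool: 'customSmallerIsBetter'        # assumed; a JSON input format
              output-file-path: benchmarks.txt     # produced by the previous step
              benchmark-data-dir-path: benchmarks  # folder on the gh-pages branch
              github-token: ${{ secrets.GITHUB_TOKEN }}
              auto-push: true                      # push results to gh-pages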

Manually Modifying Data

Data is automatically tracked on commits to the main branch, and manually editing the data should generally be avoided. But there are a few cases where you might want to do this: for instance, if the data got corrupted and you'd like to fix it, or if you've added a new data point to the tracking and would like to retroactively record its value from before some recent changes that you believe affected it.

Manually Adding Data

In cases where you want to retroactively add data to the report, the data should be generated using GitHub Actions directly, and not locally or via some form of estimation from local analysis, to ensure that it is accurate. Here are the steps to do this:

  1. Create a new branch from the specific commit you'd like to benchmark
  2. Add the necessary configuration you need in order to produce the value you are interested in
  3. Modify the benchmarks.yml workflow file so that (1) it only generates the data and doesn't invoke the github-action-benchmark step -- i.e. remove that step -- and (2) it runs on any commit, not only on main branch commits -- i.e. modify the on trigger condition. (A sketch of this temporary modification appears after these steps.)
  4. Push your branch to GitHub, and find the generated data in the output for the "Run benchmark" step
  5. Re-run the job as many times as you need, and compute the average or another statistically reasonable value
  6. Check out the gh-pages branch locally
  7. Manually modify benchmarks/data.js (the JSON data file) to include the new data point(s) -- its layout is sketched under Fixing Corrupted Data below
  8. Ensure the JSON passes a validation tool
  9. Commit it and push it upstream
  10. The new data should show up in a few minutes, after the "pages deploy" job has completed
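
For step 3, the temporary modification might look something like this -- a sketch against the workflow outlined earlier, to be adapted to the actual contents of benchmarks.yml:

    # Temporarily run on every push, not only commits to main:
    on: push

    jobs:
      benchmarks:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # ... setup steps unchanged ...
          - name: Run benchmark
            run: make report-benchmarks | tee benchmarks.txt
          # The github-action-benchmark step is removed entirely, so
          # nothing is pushed to gh-pages from this throwaway branch.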

See this PR for an example.

Fixing Corrupted Data

The benchmark data is a JSON file in the gh-pages branch of the repo. It can end up in a bad state: for instance, if you change the name of the report, new data points will be published as a fresh report appended after the end of the original one, instead of the existing report simply being renamed. Here's how you can fix such problems:

  1. Locally check out the gh-pages branch and ensure it is up to date with the upstream drym-org branch.
  2. Manually edit the JSON data in benchmarks/data.js and fix it (see the sketch after these steps).
  3. Commit it and push it upstream.
  4. Any push upstream should automatically get deployed to the Pages site (you should see this happen in the Actions tab), so just wait a few minutes and then visit https://drym-org.github.io/qi/benchmarks/
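
For orientation when editing, the data file maintained by github-action-benchmark is a JavaScript file wrapping a JSON object, roughly shaped as below. All the values here are invented placeholders -- match the structure of the real entries already in the file. Each key under entries is one named report, holding a list of measurements ordered by commit.

    window.BENCHMARK_DATA = {
      "lastUpdate": 1710500000000,
      "repoUrl": "https://github.com/drym-org/qi",
      "entries": {
        "<report name>": [
          {
            "commit": {
              "id": "<full commit sha>",
              "message": "<commit message>",
              "url": "<link to the commit>"
            },
            "date": 1710500000000,
            "tool": "customSmallerIsBetter",
            "benches": [
              { "name": "<benchmark name>", "value": 42, "unit": "ms" }
            ]
          }
        ]
      }
    };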