We currently have automation that periodically runs the benchmark suite against the tip of master and the last stable release. The results are posted to http://opa-benchmark-results.s3-website-us-east-1.amazonaws.com/. This helps us identify performance regressions during development.
The script that generates the benchstat results takes about half an hour to run because it has to run the entire benchmark suite 2×N times: N times on the tip of master and N times on the latest release (benchstat needs multiple samples from each version to report statistically significant differences).
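For reference, a minimal sketch of that flow in Go; the file names, the value of N, and the elided checkout steps are illustrative and not taken from the actual script:

```go
// bench_compare.go: illustrative sketch of the 2×N benchmark/benchstat flow.
// Assumes `go` and `benchstat` are on PATH; N and file names are made up.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runBenchmarks runs the full benchmark suite once and appends the raw
// output to outFile. It is called N times per version so that benchstat
// has enough samples to work with.
func runBenchmarks(outFile string) error {
	f, err := os.OpenFile(outFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()

	// -run=^$ skips unit tests so only benchmarks execute.
	cmd := exec.Command("go", "test", "-run=^$", "-bench=.", "-benchmem", "./...")
	cmd.Stdout = f
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func must(err error) {
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func main() {
	const n = 5 // repetitions per version (illustrative)

	// Elided: check out the last stable release here.
	for i := 0; i < n; i++ {
		must(runBenchmarks("old.txt"))
	}

	// Elided: check out the tip of master here.
	for i := 0; i < n; i++ {
		must(runBenchmarks("new.txt"))
	}

	// benchstat compares the two sample sets and flags significant deltas.
	out, err := exec.Command("benchstat", "old.txt", "new.txt").CombinedOutput()
	must(err)
	fmt.Print(string(out))
}
```

The 2×N repetitions are what dominate the half-hour runtime: each pass over the suite is serial, so the wall-clock cost scales linearly with N.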
It would be nice if we could come up with a way to make the benchmark results more visible without introducing so much latency into the pre-merge check. Perhaps we could generate the benchstat results post-merge and then add a comment to the PR. This way, developers would be reminded to review the results on each merge.
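A hypothetical sketch of that post-merge step, posting the benchstat output as a PR comment via the GitHub issues-comment API. The PR number, the GITHUB_TOKEN environment variable, and the benchstat output string are all stand-ins that the real CI job would have to supply:

```go
// comment_pr.go: hypothetical sketch of the proposed post-merge step.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// postComment adds a comment to a PR. Comments of this kind go through the
// GitHub issues endpoint, which works for pull requests as well.
func postComment(prNumber int, body string) error {
	url := fmt.Sprintf(
		"https://api.github.com/repos/open-policy-agent/opa/issues/%d/comments",
		prNumber,
	)
	payload, err := json.Marshal(map[string]string{"body": body})
	if err != nil {
		return err
	}
	req, err := http.NewRequest("POST", url, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	// In CI the PR number would be derived from the merge commit or the CI
	// environment; it is hard-coded here purely for illustration.
	benchstatOutput := "" // benchstat results generated post-merge
	err := postComment(1234, "Benchmark results:\n```\n"+benchstatOutput+"\n```")
	must(err)
}

func must(err error) {
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Running this after the merge keeps the expensive 2×N benchmarking off the pre-merge critical path while still surfacing the results where developers will see them.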