I decided to test the Profile-Guided Optimization (PGO) technique to optimize the library's performance. For reference, results for other projects are available at https://github.com/zamazan4ik/awesome-pgo . Since PGO has helped many different libraries, I decided to apply it to cel-rust to see whether a performance win (or loss) can be achieved. Here are my benchmark results.
This information can be interesting for anyone who wants to achieve more performance with the library in their use cases.
For PGO optimization I use the cargo-pgo tool. The Release bench results were collected with the taskset -c 0 cargo bench command. The PGO training phase is done with taskset -c 0 cargo pgo bench, and the PGO optimization phase with taskset -c 0 cargo pgo optimize bench.
taskset -c 0 is used to reduce the OS scheduler's influence on the results. All measurements are done on the same machine, with the same background "noise" (as much as I can guarantee).
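For reference, the full workflow described above can be sketched as the following shell session (a sketch, not a verbatim transcript; note that cargo-pgo also requires the llvm-tools-preview rustup component to be installed):

```shell
# One-time setup: install the cargo-pgo helper and the LLVM tools it needs.
cargo install cargo-pgo
rustup component add llvm-tools-preview

# Baseline: regular Release benchmarks, pinned to a single CPU core
# to reduce OS scheduler noise.
taskset -c 0 cargo bench

# Training phase: build with PGO instrumentation and run the benchmarks
# to collect runtime profiles (expect a slowdown during this phase).
taskset -c 0 cargo pgo bench

# Optimization phase: rebuild using the collected profiles and benchmark
# the PGO-optimized binary.
taskset -c 0 cargo pgo optimize bench
```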
According to the results, PGO measurably improves the library's performance.
Further steps
At the very least, the library's users can find this performance report and decide to enable PGO for their applications if they care about the library's performance in their workloads. Maybe a small note somewhere in the documentation (the README file?) will be enough to raise awareness about this possible performance improvement.
Please don't treat this issue as an actual problem report - it's just a benchmark report (since Discussions are disabled for this repo).
Thank you.
Perhaps I'm misreading the benchmarks but I see "Performance has regressed" in almost all cases when looking at your comparison between PGO and default. How should I interpret these results?
Yeah, I need to explain a bit. You should read the "PGO optimized compared to Release" results - these show performance after applying PGO optimization, compared to the regular Release build. The "PGO instrumented compared to Release" results are shown just for reference - these come from the PGO training phase.
PGO is a two-step process:
Collect runtime metrics with PGO instrumentation
Use the collected metrics during PGO optimization
Since collecting metrics at runtime has some overhead, performance regresses during the instrumentation phase. I show this information only as an estimate of how much performance can regress during the training phase (this can be important for anyone who wants to perform PGO instrumentation directly in a production environment).
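As a loose illustration of where that training-phase overhead comes from (hypothetical plain-Rust code, not what the compiler actually emits), PGO instrumentation amounts to inserting counters at branches and function entries; the collected counts later guide optimizations such as code layout and inlining:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical counters: PGO instrumentation inserts updates like these
// at branches; each update is extra work, hence the instrumented build
// is slower than a plain Release build.
static TAKEN: AtomicU64 = AtomicU64::new(0);
static NOT_TAKEN: AtomicU64 = AtomicU64::new(0);

fn is_small(x: u64) -> bool {
    if x < 100 {
        TAKEN.fetch_add(1, Ordering::Relaxed); // counter update = runtime overhead
        true
    } else {
        NOT_TAKEN.fetch_add(1, Ordering::Relaxed);
        false
    }
}

fn main() {
    // "Training run": exercise the code to populate the counters.
    for x in 0..1000 {
        is_small(x);
    }
    // The resulting "profile" tells the optimizer which branch is hot.
    println!(
        "taken={} not_taken={}",
        TAKEN.load(Ordering::Relaxed),
        NOT_TAKEN.load(Ordering::Relaxed)
    );
}
```

During the real optimization phase the compiler consumes such profiles and rebuilds the code without the counters, which is why the final PGO-optimized binary has no instrumentation overhead.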
Repository owner locked and limited conversation to collaborators on Oct 2, 2024.
Test environment

cel-rust version: master branch, commit a5c6c2dbb658b13acf69f7b96c313288ae81d29b