
Evaluate using Profile-Guided Optimization (PGO) in the future #2

Closed
zamazan4ik opened this issue Oct 27, 2024 · 0 comments
@zamazan4ik

Hi!

This is not an actual problem, just a possible improvement idea for the project. I created an issue since Discussions are disabled for the repository.

I decided to test the Profile-Guided Optimization (PGO) technique to optimize Fennec's performance. For reference, results for other projects are available at https://github.com/zamazan4ik/awesome-pgo . Since PGO has helped many projects (including compilers, code formatters, language servers, linters, etc.), I applied it to Fennec to see whether it yields a performance win (or loss) here as well. Here are my benchmark results.

Test environment

  • Fedora 40
  • Linux kernel 6.10.12
  • AMD Ryzen 9 5900X
  • 48 GiB RAM
  • SSD: Samsung 980 Pro 2 TB
  • Compiler: rustc 1.82.0
  • Fennec version: main branch at commit 636ebde54be22d52413cef7bfcb05e59127fb31e
  • Turbo Boost disabled

Benchmark

For benchmark purposes, I use the fennec lint command to lint the PHP-CS-Fixer codebase. For PGO optimization I use the cargo-pgo tool. The PGO training workload is the same: fennec lint on the PHP-CS-Fixer project, run with a PGO-instrumented fennec (built with cargo pgo build).
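For reference, the cargo-pgo flow looks roughly like this (a minimal sketch; the checkout paths and the target-triple placeholder are assumptions, adjust them to your layout):

# Build an instrumented fennec (cargo-pgo adds the profile-generate flags)
cargo pgo build

# Run the training workload with the instrumented binary to collect profiles
cd /path/to/PHP-CS-Fixer
/path/to/fennec/target/<target-triple>/release/fennec lint   # path depends on the target triple cargo-pgo builds for

# Merge the collected profiles and rebuild with them applied
cargo pgo optimize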

taskset -c 0 is used to reduce the OS scheduler's influence on the results. All measurements are done on the same machine, with the same background "noise" (as much as I can guarantee), and are repeated multiple times with hyperfine.

Results

I got the following results.

hyperfine --warmup 3 -i 'taskset -c 0 ../fennec_release lint' 'taskset -c 0 ../fennec_optimized lint'
Benchmark 1: taskset -c 0 ../fennec_release lint
  Time (mean ± σ):      1.516 s ±  0.013 s    [User: 1.010 s, System: 0.497 s]
  Range (min … max):    1.504 s …  1.544 s    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 2: taskset -c 0 ../fennec_optimized lint
  Time (mean ± σ):      1.419 s ±  0.012 s    [User: 0.917 s, System: 0.493 s]
  Range (min … max):    1.403 s …  1.441 s    10 runs

  Warning: Ignoring non-zero exit code.

Summary
  'taskset -c 0 ../fennec_optimized lint' ran
    1.07 ± 0.01 times faster than 'taskset -c 0 ../fennec_release lint'

where:

  • fennec_release: the default Release profile
  • fennec_optimized: the default Release profile + PGO optimization

According to the results, PGO gives a measurable performance improvement: roughly 7% faster overall on this benchmark.

Further steps

I can suggest the following action points:

  • Mention somewhere user-visible that PGO brings measurable performance improvements for the project
  • Integrate PGO into the build pipeline (like it's done in CPython and other projects; a minimal manual-PGO sketch follows this list)
  • Optimize prebuilt binaries (if any) with PGO
  • Test PGO for other Fennec parts: the code formatter, the LSP server, etc. According to the awesome-pgo results, such tools can also benefit from PGO
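For the build-pipeline point above, here is a minimal sketch of what a manual PGO build could look like without cargo-pgo, assuming a Linux host where llvm-profdata matches the LLVM version used by rustc (the profile directory and the training workload are arbitrary choices for illustration):

# 1. Build an instrumented binary
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release

# 2. Run a representative training workload (here: linting a real PHP codebase)
./target/release/fennec lint

# 3. Merge the raw profiles into a single file
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data

# 4. Rebuild using the merged profile
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release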

Also, Post-Link Optimization (PLO) can be tested after PGO, e.g. with tools like LLVM BOLT. However, it's a much less mature optimization technique than PGO.
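For completeness, a BOLT flow on Linux could look roughly like the following. This is a sketch under assumptions: the binary should be linked with relocations kept (e.g. -Wl,--emit-relocs), the exact llvm-bolt flags vary between BOLT versions, and hardware branch sampling depends on the CPU; cargo-pgo also provides BOLT integration that wraps these steps.

# Profile the PGO-optimized binary with Linux perf (branch sampling needs hardware support)
perf record -e cycles:u -j any,u -- ./fennec_optimized lint

# Convert the perf data into a BOLT profile
perf2bolt -p perf.data -o fennec.fdata ./fennec_optimized

# Produce a post-link-optimized binary
llvm-bolt ./fennec_optimized -o ./fennec_bolt -data=fennec.fdata \
    -reorder-blocks=ext-tsp -reorder-functions=hfsort -split-functions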

Since the project is in its early stages, I don't recommend spending much time on integrating PGO into the project pipelines for now. But once most of the features are implemented, PGO will be a good addition to the project.

Thank you.

@carthage-software carthage-software locked and limited conversation to collaborators Oct 27, 2024
@azjezz azjezz converted this issue into discussion #4 Oct 27, 2024
