Possible performance regression on Windows #347
I just updated to 1.0.80, and I'm seeing the same performance issue. Reverting back to 1.0.76 fixes the issue.
I traced the regression down to d6acd22. I added a pinned anyhow dependency to our Cargo.toml (see the sketch below) and then changed the revision until I found the earliest commit (according to git log) that exhibits the performance issue in our CI. That was d6acd22200de94b65804e08ca41a4bdff3404512. The preceding commit, 5121cd2, appears to be fine.
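A git pin of roughly this shape is enough for that kind of bisection (a sketch, not necessarily the exact override used; the rev field is what gets changed between CI runs):

```toml
# Sketch: pin anyhow to a specific upstream commit so each CI run tests one revision.
[patch.crates-io]
anyhow = { git = "https://github.com/dtolnay/anyhow", rev = "d6acd22200de94b65804e08ca41a4bdff3404512" }
```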
This looks like the backtrace functionality that was stabilized in the Rust std is performing differently / worse than the one in the external `backtrace` crate. EDIT: Looks like with Rust >= 1.65, backtrace support is now enabled unconditionally, so rather than things being slower with std than with the external crate, it may simply be that backtraces are now captured in builds where the optional backtrace feature previously left them off.
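To make that comparison concrete, here is a rough timing sketch (mine, not from the thread) that captures and fully resolves one backtrace with each implementation; it assumes `backtrace = "0.3"` has been added as a dependency:

```rust
// Rough comparison: capture and resolve one backtrace with the std
// implementation and one with the external `backtrace` crate.
use std::hint::black_box;
use std::time::Instant;

fn main() {
    let start = Instant::now();
    let bt = std::backtrace::Backtrace::force_capture();
    black_box(bt.to_string()); // formatting resolves the lazily captured frames
    println!("std::backtrace capture + resolve: {:?}", start.elapsed());

    let start = Instant::now();
    let bt = backtrace::Backtrace::new(); // captures and resolves eagerly
    black_box(format!("{bt:?}"));
    println!("backtrace crate capture + resolve: {:?}", start.elapsed());
}
```

Run it as a release build so the numbers are comparable to a real application.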
Hello, I just stubbed my toe on this too and wasted a few hours combing through our dependency upgrades. This change ruins the performance of our game on Windows. Capturing a single backtrace on Windows takes enough time to cause noticeable hitching. That's a huge amount of time spent on backtraces that my users aren't even seeing! Edit: It seems like setting the backtrace environment variables so that capture stays disabled avoids the cost.
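For context on that workaround: std's `Backtrace::capture()` consults `RUST_LIB_BACKTRACE` and then `RUST_BACKTRACE`, and returns a cheap disabled value when neither enables backtraces, so the expensive stack walk never happens. A minimal sketch of that behaviour:

```rust
use std::backtrace::{Backtrace, BacktraceStatus};

fn main() {
    // capture() only walks the stack when RUST_BACKTRACE or RUST_LIB_BACKTRACE
    // enables it; otherwise it returns a Disabled value almost for free.
    let bt = Backtrace::capture();
    match bt.status() {
        BacktraceStatus::Captured => println!("captured:\n{bt}"),
        BacktraceStatus::Disabled => println!("disabled (backtrace env vars not set)"),
        _ => println!("unsupported on this platform"),
    }
}
```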
The ~current nightly version of the standard library should have performance improvements for obtaining backtraces, specifically on Windows. I would be interested in knowing how bad the regression is, comparatively, in applications compiled with the recent nightly stdlib (versus those compiled with the latest stable, which will be slightly behind that). Note that I do expect it will still be very harsh, but it will at least use a less... clumsy, ancient, deprecated API for obtaining the trace.
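One way to get those stable-versus-nightly numbers (a sketch; `backtrace_timing` is a made-up name standing in for whatever small binary does the measurement, e.g. the timing sketch above):

```sh
# Run the same measurement with both toolchains on the same Windows machine.
rustup toolchain install nightly
cargo +stable run --release --bin backtrace_timing
cargo +nightly run --release --bin backtrace_timing
```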
feat(rooch-benchmarks): bench anyhow with/without std backtrace

The new version of anyhow with the std backtrace is much slower: dtolnay/anyhow#347

```
cargo bench --bench bench_utils
...
anyhow_error_bench/anyhow_v1.0.93
        time:   [15.040 ns 15.148 ns 15.259 ns]
        change: [+1.5448% +2.1215% +2.7219%] (p = 0.00 < 0.05)
        Performance has regressed.
anyhow_error_bench/anyhow_v1.0.76
        time:   [6.2104 ns 6.2182 ns 6.2278 ns]
        change: [-0.4029% -0.1998% +0.0002%] (p = 0.06 > 0.05)
        No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
  2 (2.00%) high mild
  8 (8.00%) high severe
```

fix(deps): downgrade anyhow version to 1.0.76

Downgraded the anyhow dependency from version 1.0.93 to 1.0.76. This won't break anything but gains better performance.
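For anyone who wants to reproduce this kind of measurement locally, a minimal Criterion sketch is below (not the rooch-benchmarks source; the benchmark name is made up, and the two anyhow versions are compared by re-running the bench with the dependency pinned to each release):

```rust
// Minimal Criterion benchmark of anyhow::Error construction, which is where
// the backtrace handling happens. Goes in benches/ with `harness = false`
// and `criterion` as a dev-dependency in Cargo.toml.
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

fn anyhow_error_bench(c: &mut Criterion) {
    c.bench_function("anyhow_error_new", |b| {
        b.iter(|| black_box(anyhow::anyhow!("benchmark error")))
    });
}

criterion_group!(benches, anyhow_error_bench);
criterion_main!(benches);
```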
As part of our release process, we update all of our dependencies. Prior to starting the release process for sequoia-openpgp 1.18.0, we had anyhow 1.0.75 in our `Cargo.lock` file. `cargo update` updated anyhow to the latest version, 1.0.79.

Our CI is configured to build Sequoia in various configurations on Linux, and on Windows. We have two Windows jobs: windows-msvc-cng, which uses MSVC in 64-bit mode, and windows-msvc32-cng, which uses MSVC in 32-bit mode. After applying this change, the time to run the windows-msvc-cng job went from ~4 minutes to 23 minutes, and the windows-msvc32-cng job went from 3 minutes to not finishing after ~3 hours.
At first we thought it was the Windows VM, but retrying old pipelines worked, in the sense that the Windows jobs finished in the expected amount of time. We then theorized that some package had a performance regression. I spent some time updating a few packages at a time and watching what happened with the CI jobs. When the performance problem kicked in, I removed packages from that batch until I found the one that induces the performance problem. It turned out to be anyhow.
I then tried to figure out which version of anyhow first induced the problem. 1.0.76 is fine; 1.0.77, however, exhibits the performance problem.
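Pinning the exact releases for that comparison, without editing Cargo.toml, can be done with cargo update's --precise flag; a sketch using the versions above:

```sh
# Pin anyhow in Cargo.lock to one release at a time and re-run the CI job.
cargo update --package anyhow --precise 1.0.76   # fine
cargo update --package anyhow --precise 1.0.77   # exhibits the performance problem
```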
Here's what the CI output from the 1.0.77 log looks like:
Here are the tests, which don't look suspicious to me:
Some details of the environment: