question: Slowdown after upgrading from 1.0.26 to 1.0.28 #395
In any case, I recommend trying 1.0.27 as well, and once the first good and the first bad tag are found, I'd bisect with a local clone of flate2, overriding the dependency via [patch."crates-io"] with flate2 = { path = "/path/to/clone" }. As this issue can't be reproduced here, I recommend closing it while considering the submission of a PR with a fix that works for you.
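The override described in that comment would go in the Cargo.toml of the project under test. A minimal sketch, with the path as a placeholder for wherever the local clone of flate2 lives:

```toml
# Cargo.toml of the project being bisected. The path is a placeholder;
# point it at a local checkout of flate2, checked out at the tag under test.
[patch."crates-io"]
flate2 = { path = "/path/to/clone" }
```

With this in place, `cargo build` uses the local checkout instead of the crates.io release, so individual commits between the good and bad tags can be tested.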
Thank you @Byron! We have identified why there is a slowdown locally but not on GitHub Actions: locally I ran my project in the default debug profile, while on GitHub the package is built with --release. Locally, running both versions with cargo run --release (1.0.26 and 1.0.28) shows no difference.
Thanks for digging in and for solving the puzzle :). I am glad it isn't anything more serious.
A good candidate for the cause of the slowdown is #373. That PR explicitly prioritized correctness over performance, so it is reassuring that the release-mode timings show no change.
flate2 1.0.28 built unoptimized is significantly slower than before an unsafe unsoundness was fixed. This unnecessarily slows down some of our tests which use `flate2` during test setup. For more info please check rust-lang/flate2-rs#395 This change makes sure that `flate2` is optimized even for a `dev` build.
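The workaround the commit message describes can be expressed with Cargo's per-package profile overrides; a sketch of what such a change might look like:

```toml
# Workspace Cargo.toml: compile flate2 (and only flate2) with full
# optimizations even when the rest of the project builds in `dev` mode,
# so debug-build tests are not slowed down by the unoptimized zero-filling.
[profile.dev.package.flate2]
opt-level = 3
```

Dependencies built this way are compiled once with optimizations and cached, so the extra compile cost is paid only on the first build.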
Thanks for bringing this to my attention! I took a look at the flamegraph in vectordotdev/vector#19981 and noticed that the runtime is dominated by calls to an unaligned memset function. This is probably caused by the zero-filling introduced in #373. Something I couldn't make sense of, though, is one claim in the linked report.
The change in #373 definitely does not allocate (or deallocate). The only real change is that it can fill unused capacity with zeroes the first time. Thus I think it's more about how output vectors are used during compression on the caller's side. I'd wait and see how this turns out.
The current implementation indeed suffers a performance degradation when exposed to many small writes during compression. The problem is that the internal buffer is created with 32 KiB of capacity, yet with each small write it is memset to its full capacity, only to be truncated to what was actually written right after. These calls accumulate into something very costly. The typical solution is to avoid many small writes by buffering them, so in a way, leaving this as-is seems like a net positive: it can reveal such usage issues while being easy to fix in the caller's code.
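The buffering fix suggested above can be sketched with the standard library alone. `CountingSink` below is a hypothetical stand-in for the compressor: it models a writer whose cost is dominated by the number of `write()` calls it receives (as with the per-write memset described above), and wrapping it in `std::io::BufWriter` coalesces many small writes into a few large ones.

```rust
use std::io::{BufWriter, Write};

// Hypothetical stand-in for a compressor whose per-write cost is roughly
// constant (e.g. a memset of a large internal buffer) regardless of how
// few bytes the write carries.
#[derive(Debug)]
struct CountingSink {
    calls: usize,
}

impl Write for CountingSink {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        self.calls += 1; // one "expensive" call per write reaching the sink
        Ok(buf.len())
    }
    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}

/// Feed `n` one-byte writes into the sink, optionally through an
/// 8 KiB `BufWriter`, and return how many writes reached the sink.
fn sink_calls(n: usize, buffered: bool) -> usize {
    if buffered {
        let mut w = BufWriter::with_capacity(8 * 1024, CountingSink { calls: 0 });
        for _ in 0..n {
            w.write_all(&[0u8]).unwrap();
        }
        // into_inner flushes whatever remains in the buffer.
        w.into_inner().unwrap().calls
    } else {
        let mut w = CountingSink { calls: 0 };
        for _ in 0..n {
            w.write_all(&[0u8]).unwrap();
        }
        w.calls
    }
}

fn main() {
    // 10_000 one-byte writes hit the sink 10_000 times directly,
    // but only a handful of times through the BufWriter.
    println!("direct:   {}", sink_calls(10_000, false));
    println!("buffered: {}", sink_calls(10_000, true));
}
```

In a real caller the same pattern would be wrapping the flate2 encoder in a `BufWriter`, so that each write reaching the encoder is buffer-sized rather than a few bytes.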
I also noticed the performance regression when updating from 1.0.27 to 1.0.28.
Hello!
This will be more of a question, sorry in advance if something is missing, but I'm not sure what details might help!
Anyway, I'm not sure why we are experiencing a slowdown (more than 2x) while running tests in our project locally after bumping flate2 from 1.0.26 to 1.0.28. Everything else in the project stays the same!
Cargo.toml:
flate2 = { version = "1.0.28" }
Cargo.lock:
The problem is that when we run the same tests on github actions there is no slowdown between these versions!
I personally am running tests on Ubuntu on WSL2... I can reproduce this behavior every time between versions!
More information needed