Regression in compile time when optimizing for size #91809
@crawford could you use … Or the new …
@camelid I gave that a shot last night, but I was having trouble getting it to run correctly. Your suggestion to use …

@hkratz I'm on NixOS, which requires some patching of binaries before they can be run, so (long story short) I'm using an older pre-built version which doesn't have the timeout flag. I do have the …

Thank you for both of your suggestions. I'm really glad I was able to get this running on my system. What a fantastic tool! Here's the output (regression in ef4b306):
cc @the8472
No concrete idea, just stabs in the dark: since this is optimization-level dependent, I suspect LLVM is going crazy somewhere. This should be testable with …

The added inline annotation itself shouldn't cause rustc to generate massive amounts of additional IR, I think. Unless maybe the code somehow contains many more …

Edit: Another possibility is that your test cases contain some code triggering pathological behavior while the release build doesn't.
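The inline-annotation hypothesis can be illustrated with a hedged sketch. This is not the code from the issue — the helper and the call counts are invented — but it shows the mechanism: once a function carries `#[inline]`, its body can be duplicated into every call site, so LLVM has correspondingly more IR to chew through at `opt-level = "s"`.

```rust
// Hypothetical sketch (not the issue's code): a small #[inline] helper.
// When such a function is inlined into many call sites, its body is
// duplicated into each caller, multiplying the IR LLVM must optimize.
#[inline]
fn step(x: u32) -> u32 {
    x.wrapping_mul(3).wrapping_add(1)
}

fn main() {
    // Each call here can become a separate inlined copy of `step`.
    let total: u32 = (0..100).map(step).sum();
    println!("{total}"); // prints 14950
}
```

The effect is usually harmless; the comment's point is that it only becomes pathological if the codebase contains far more such call sites than expected.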
The profiles differ in the …

Edit: Additionally, they also differ in the defaulted …

That covers …
This comment has been minimized.
Hmmm, I'm not sure what happened in my test, but I must have set the timeout too low. It also occurred to me that sccache may have been influencing my tests, so I went ahead and temporarily disabled that. I've re-run the bisection numerous times with varying timeouts (mostly to try to reproduce the original result, but to no avail) and I'm seeing a different result from what I had originally reported:
So it looks like a16f686 is actually the culprit. Sorry for the false alarm, @the8472.

In the meantime, I let a full build finish (it took 10m 10s) and it resulted in a test binary that is 1.5 GB in size! For comparison, the test binary without optimization is 11 MB. I also tried altering the optimization levels (LTO is disabled in all of these) and found that …
Hmm, it's possible, but it seems quite unlikely to me that that's the cause.
I'm also getting a16f686 as the culprit, but I agree it doesn't look like the correct commit since it's cargo-related.

Full output: …
FWIW, that doesn't mean those commits regressed; it just means the regression was present in them. I.e., it very well may have occurred before them.
This seems to be cargo-related. I've copied cargo 1.56.1 to stable (1.57.0) and after that it compiles as fast as before. I can only reproduce it when building with …

Seems to be rust-lang/cargo#9943
@wtfsck Could you run …
@hkratz It was …
@crawford checking in about this issue. Did you have a chance to try the new stable 1.58? There are substantial improvements in compile time (I can't say, though, whether they apply to your codebase).
@apiraino Sorry I disappeared after initially filing the issue. I haven't had much time to revisit this to help narrow it down. I just gave 1.58 a try and am still seeing the slowdown when optimizing for size. My current workaround is still functional though:

```toml
[profile.test]
opt-level = 0
```
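A less drastic variant of this workaround — my own suggestion, not something proposed in the thread — would be to keep dependencies optimized for size while building only the workspace's own test code unoptimized, using Cargo's per-package profile overrides:

```toml
# Hypothetical alternative workaround: leave opt-level = "s" for
# dependencies, but skip optimization for the local crate's test build.
[profile.test]
opt-level = 0

[profile.test.package."*"]
opt-level = "s"
```

Whether this helps depends on where the regression actually spends its time; if the slowdown is in the local crate's own codegen, this keeps most of the speedup while preserving optimized dependencies.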
Assigning priority as discussed in the Zulip thread of the Prioritization Working Group.

@rustbot label -I-prioritize +P-medium
@rustbot label +S-needs-repro -E-needs-mcve |
After updating to Rust 1.57, I'm seeing a huge increase in compilation time for an embedded project that I'm working on. The code is unfortunately not open source, but I'm hoping I can provide enough information here to work around that. If I've forgotten to include anything or made bad assumptions, please let me know.
I don't have specific numbers for the slowdown because I haven't waited long enough for the compilation to finish, but I can say that what used to take ~90 seconds for a full build is now taking more than 10 minutes. The project targets thumbv7em-none-eabihf and is optimized for size. Here are the profiles defined in the cargo manifest:
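(The profile block itself did not survive the page capture. A typical size-optimized embedded setup — values here are assumptions, not the reporter's actual manifest, apart from LTO being disabled, which a later comment confirms — looks something like this:)

```toml
# Hypothetical reconstruction; the actual profiles were lost from the page.
[profile.release]
opt-level = "s"
lto = false

[profile.test]
opt-level = "s"
```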
Here's the interesting thing: we have seen no increase in compilation time when building the main binary. It's the tests that are actually exhibiting the behavior. The tests target x86_64-unknown-linux-gnu and are built with `cargo test`. If I remove `opt-level = "s"` from the profiles, I'm able to build the tests just as quickly as before.

I bisected the nightlies and determined that this slowdown seems to have been introduced in nightly-2021-10-14; nightly-2021-10-13 and 1.56.1 don't exhibit this behavior.