
Experiment: force debug_assertions in consteval context #97467

Closed

Conversation

clarfonthey
Contributor

@clarfonthey clarfonthey commented May 27, 2022

This is probably very cursed, but I saw this as a potential avenue toward solving problems like #95332 and decided to just try implementing it, since it felt comically simple.

A lot of unsafe code uses debug assertions to verify safety constraints in debug mode, even though they're removed in release mode. Since compile time is expected to be longer in release mode anyway due to optimisations, why not enable these assertions during consteval if it means we'll catch more problems? While a fully featured UB check would be helpful, it's far easier to reuse all of the checks we already have for making sure unsafe code behaves correctly.

Of course, I have no idea how well this will work, or if it will cause a serious dip in performance, but the code is so small I figured I'd open a PR anyway and see what people think about this idea.

I did test this, and noticed a few UI tests that explicitly look for the UB-checker behaviour and disable the debug-assert behaviour; I'm not sure how to deal with those. But other than that, everything seems to work with this turned on.


Note: I think a full implementation of this would involve modifying conditional compilation in constants so that cfg!(debug_assertions) always returns true in consteval, but that's much more complicated, since it would mean actually compiling things twice for consteval. Plus, even if we modify the cfg! macro, #[cfg] attributes would be left unchanged.

This is basically a hack to just apply this to debug_assert! for now, and we can figure out the finer details later.


Okay, I'm genuinely unsure how tests managed to pass for this when ./x.py test --stage 1 failed locally. But 🤷🏻, cool.

@rustbot rustbot added the T-libs Relevant to the library team, which will review and decide on the PR/issue. label May 27, 2022
@rust-highfive
Collaborator

Hey! It looks like you've submitted a new PR for the library teams!

If this PR contains changes to any rust-lang/rust public library APIs, then please comment with r? rust-lang/libs-api @rustbot label +T-libs-api -T-libs to request review from a libs-api team reviewer. If you're unsure where your change falls, no worries: just leave it as is, and the reviewer will take a look and forward it on if necessary.

Examples of T-libs-api changes:

  • Stabilizing library features
  • Introducing insta-stable changes such as new implementations of existing stable traits on existing stable types
  • Introducing new or changing existing unstable library APIs (excluding permanently unstable features / features without a tracking issue)
  • Changing public documentation in ways that create new stability guarantees
  • Changing observable runtime behavior of library APIs

@rust-highfive
Collaborator

r? @scottmcm

(rust-highfive has picked a reviewer for you, use r? to override)

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label May 27, 2022
@clarfonthey
Contributor Author

r? @oli-obk (since I know you're also reviewing #95377, and this has a similar goal)

also cc @saethlin as the author of that PR

@rust-highfive rust-highfive assigned oli-obk and unassigned scottmcm May 27, 2022
@scottmcm scottmcm added the I-libs-nominated Nominated for discussion during a libs team meeting. label May 27, 2022
@clarfonthey
Contributor Author

(Thanks @scottmcm for noticing regardless and nominating it. I have no idea whether the compiler & lang teams should also be involved on this, and mostly just pinged Oli since I know he was helping out solve this issue via other means.)

@saethlin
Member

saethlin commented May 28, 2022

Most of the standard library's UB-detecting debug assertions use `assert_unsafe_precondition!`, which you didn't update.
If you update that macro and this PR works as advertised, you should be able to get a compile error from #95332

@clarfonthey
Contributor Author

So I will admit that I saw that macro and just left it alone because I wasn't sure what it was trying to accomplish. But in hindsight, those calls could probably just be replaced with debug_asserts, and I'll try that.

@saethlin
Member

those calls could probably just be replaced with debug_assert

Whether or not you can, you shouldn't, per the comment in that macro.

@clarfonthey
Contributor Author

Right, the macro uses abort instead of panicking to reduce code size. I'm not sure why though, since it's intended to be run only on debug builds.

@RalfJung
Member

Right, the macro uses abort instead of panicking to reduce code size. I'm not sure why though, since it's intended to be run only on debug builds.

Making debug builds even slower can make them useless, so some amount of codegen quality matters even for them, particularly for the extremely common operations this macro is used in.
Also, callers might (rightfully) assume that certain operations do not panic, so making them panic can lead to hard-to-debug secondary effects.

@oli-obk
Contributor

oli-obk commented May 28, 2022

Okay, I'm genuinely unsure how tests managed to pass for this when ./x.py test --stage 1 failed locally. But 🤷🏻, cool.

Probably because libstd is built without debug assertions in CI? Technically your code should still emit them in const contexts, but I didn't give it a thorough reading yet.

@bors try @rust-timer queue

@bors
Contributor

bors commented May 28, 2022

⌛ Trying commit fb55d43 with merge de9eef9c0d94a63b720e98be8eafa11f5a5e967f...

@rust-timer
Collaborator

Awaiting bors try build completion.

@rustbot label: +S-waiting-on-perf

@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 28, 2022
@bors
Contributor

bors commented May 28, 2022

☀️ Try build successful - checks-actions
Build commit: de9eef9c0d94a63b720e98be8eafa11f5a5e967f

@rust-timer
Collaborator

Queued de9eef9c0d94a63b720e98be8eafa11f5a5e967f with parent 19abca1, future comparison URL.

@rust-timer
Collaborator

Finished benchmarking commit (de9eef9c0d94a63b720e98be8eafa11f5a5e967f): comparison URL.

Instruction count

  • Primary benchmarks: 😿 relevant regressions found
  • Secondary benchmarks: 😿 relevant regressions found
|                             | mean¹ | max   | count² |
|-----------------------------|-------|-------|--------|
| Regressions 😿 (primary)    | 29.5% | 59.4% | 251    |
| Regressions 😿 (secondary)  | 25.1% | 56.1% | 269    |
| Improvements 🎉 (primary)   | N/A   | N/A   | 0      |
| Improvements 🎉 (secondary) | N/A   | N/A   | 0      |
| All 😿🎉 (primary)          | 29.5% | 59.4% | 251    |

Max RSS (memory usage)

  • Primary benchmarks: mixed results
  • Secondary benchmarks: 😿 relevant regressions found

|                             | mean¹ | max   | count² |
|-----------------------------|-------|-------|--------|
| Regressions 😿 (primary)    | 4.2%  | 13.3% | 200    |
| Regressions 😿 (secondary)  | 4.3%  | 11.2% | 224    |
| Improvements 🎉 (primary)   | -1.2% | -1.2% | 1      |
| Improvements 🎉 (secondary) | N/A   | N/A   | 0      |
| All 😿🎉 (primary)          | 4.1%  | 13.3% | 201    |

Cycles

  • Primary benchmarks: 😿 relevant regressions found
  • Secondary benchmarks: 😿 relevant regressions found

|                             | mean¹ | max   | count² |
|-----------------------------|-------|-------|--------|
| Regressions 😿 (primary)    | 32.2% | 68.3% | 251    |
| Regressions 😿 (secondary)  | 28.6% | 80.2% | 269    |
| Improvements 🎉 (primary)   | N/A   | N/A   | 0      |
| Improvements 🎉 (secondary) | N/A   | N/A   | 0      |
| All 😿🎉 (primary)          | 32.2% | 68.3% | 251    |

If you disagree with this performance assessment, please file an issue in rust-lang/rustc-perf.

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: +S-waiting-on-review -S-waiting-on-perf +perf-regression

Footnotes

  1. the arithmetic mean of the percent change

  2. number of relevant changes

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels May 28, 2022
@saethlin
Member

since it's intended to be run only on debug builds.

It's intended to be run on builds with debug assertions enabled. That's not the same as builds with optimizations disabled because runtime doesn't matter. I know of two crates off the top of my head which enable optimizations in the dev and test profiles because opt-level = 0 is too slow. Additionally, debug assertions are generally a bug detection mechanism which is too expensive to deploy into production. That makes them a perfect fit for fuzzers; one of the motivations for my PR that added that macro was toggling it on and being able to detect more UB with cargo-fuzz.
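As an illustration of the profile tweak mentioned above (a generic example, not taken from any particular crate), Cargo lets a project enable optimizations while keeping debug assertions on in the dev and test profiles:

```toml
# Illustrative Cargo.toml profiles: optimized dev/test builds that still
# run debug assertions, the combination described above.
[profile.dev]
opt-level = 2            # avoid the slowness of opt-level = 0
debug-assertions = true  # keep the bug-detecting checks enabled

[profile.test]
opt-level = 2
debug-assertions = true
```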

@joshtriplett
Member

We talked about this in today's @rust-lang/libs meeting. We felt that this level of performance hit isn't something we'd be able to merge, at all. We're open to the possibility of a conditional mechanism for this, if that would provide value for debugging, and if that didn't have any substantial performance hit when not enabled.

@clarfonthey
Contributor Author

I suspected as much; when I saw the benchmark results, I figured this was a non-starter.

Will close this since I think more in-depth discussion is necessary before a proper change is proposed.

@clarfonthey clarfonthey closed this Jun 9, 2022
@clarfonthey clarfonthey deleted the cursed_eval_debug_assertions branch June 9, 2022 04:28
@dtolnay dtolnay removed the I-libs-nominated Nominated for discussion during a libs team meeting. label Sep 9, 2024