Cache declarative macro expansion on disk (for incremental comp.), based on #128605 (#128747)
base: master
Conversation
Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @petrochenkov (or someone else) some time within the next two weeks. Please see the contribution instructions for more information.
@rustbot author — No need for a review for now, I think, since this is already linked in the previous PR.
Actually, since we are mostly waiting for the base PR, maybe this is better: @rustbot blocked
@bors try @rust-timer queue
Cache declarative macro expansion on disk (for incremental comp.). Based on rust-lang#128605

**NOTE: Don't merge yet; this PR is mostly here for CI, and is also rebased on top of rust-lang#128605!**

This PR enables on-disk caching for incremental compilation of declarative macro expansions. The base mechanism is added in rust-lang#128605, but is not yet enabled for incremental compilation there.

r? `@petrochenkov` since you are in the loop here, but feel free to un-/reassign.
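As a rough illustration of the idea behind the PR (this is not the actual rustc implementation; all names here are invented, and rustc's real incremental cache is keyed via the query system and stable hashing), an expansion cache keyed by a hash of the invocation might look like:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Stand-in for the on-disk incremental cache: maps a hash of the
/// macro invocation to its (serialized) expansion.
struct ExpansionCache {
    on_disk: HashMap<u64, String>,
}

impl ExpansionCache {
    /// Return the cached expansion for `invocation`, or run `expand_fn`
    /// and store its result on a cache miss.
    fn expand<F: FnMut(&str) -> String>(
        &mut self,
        invocation: &str,
        expand_fn: &mut F,
    ) -> String {
        let mut h = DefaultHasher::new();
        invocation.hash(&mut h);
        let key = h.finish();
        self.on_disk
            .entry(key)
            .or_insert_with(|| expand_fn(invocation))
            .clone()
    }
}

fn main() {
    let mut cache = ExpansionCache { on_disk: HashMap::new() };
    let mut real_expansions = 0;
    let mut expander = |inv: &str| {
        real_expansions += 1; // counts how often we actually expand
        format!("expanded({inv})")
    };
    let first = cache.expand("vec![1, 2, 3]", &mut expander);
    let second = cache.expand("vec![1, 2, 3]", &mut expander); // cache hit
    assert_eq!(first, second);
    assert_eq!(real_expansions, 1); // the second call reused the cached result
    println!("cache hit avoided re-expansion");
}
```

The trade-off discussed later in the thread is visible even in this sketch: the hashing and lookup are pure overhead whenever `expand_fn` itself is cheap.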
☀️ Try build successful - checks-actions
Finished benchmarking commit (33076b4): comparison URL.

Overall result: ❌✅ regressions and improvements - BENCHMARK(S) FAILED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so, since this PR may lead to changes in compiler perf.

Next steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never.

- **Instruction count**: This is a highly reliable metric that was used to determine the overall result at the top of this comment.
- **Max RSS (memory usage)**: Results (primary 6.6%, secondary 33.5%). This is a less reliable metric that may be of interest but was not used to determine the overall result.
- **Cycles**: Results (primary 7.4%, secondary 27.7%). Less reliable; not used to determine the overall result.
- **Binary size**: Results (primary -0.2%, secondary -0.2%). Less reliable; not used to determine the overall result.

Bootstrap: 760.99s -> 765.527s (0.60%)
These are my takeaways from the perf run:
### Unconditional Decl Macro Caching Seems Not to Be Worth It

I think the big takeaway is that unconditionally caching all declarative macro expansions for incremental compilation is not worth it. I'd assume the overhead of caching + retrieving is simply much bigger than re-evaluation for many/most of the invocations. However, the big -10% gain for html5ever shows that some crates can benefit from incremental caching. As far as I remember, html5ever is very heavy on declarative macro usage (I think it predates proc macros by quite a bit; it was around at/before Rust 1.0, IIRC?), and thus might have some macro expansions that take enough time for disk caching to be worth it.

### Needs a Cost Heuristic to Be Useful, but Potential Gains Uncertain

At this point, a next step could be to try to figure out some cut-off/condition for when to cache decl macro expansions, i.e., some kind of "complex enough that disk caching is (probably) useful" heuristic. However, as we saw from the perf run in #128605, integration with the query system has a non-zero cost, which we would thus need to overcome as well (for almost all crates that invoke decl macros); this would need to be tried out/experimented on. How big the possible gains are is also hard to say: for many crates, all decl macro invocations might just be cheap enough that caching is never worth it, leaving only the query-system integration overhead. This overhead might also be improvable, but I don't really have any idea about that. One possibility that could make this still useful would be if integration into the query system enabled more parts of the dependency graph to stay "green" (I think?) that currently always need to be recomputed. But given the performance regressions, that at least doesn't seem to be the case at the moment (and probably applies to many other operations as well).
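Such a cut-off could, at its simplest, be a threshold on some proxy for expansion cost. A minimal sketch, assuming a token count as the proxy (the function name and threshold are invented, not rustc internals, and a real heuristic would be calibrated against rustc-perf data):

```rust
/// Decide whether a declarative macro expansion is (probably) expensive
/// enough that writing it to / reading it from the incremental cache
/// pays off. Uses the number of produced tokens as a rough cost proxy.
fn worth_caching(expansion_token_count: usize) -> bool {
    // Made-up tuning parameter; would need empirical calibration.
    const MIN_TOKENS_TO_CACHE: usize = 10_000;
    expansion_token_count >= MIN_TOKENS_TO_CACHE
}

fn main() {
    // Small expansions (the common case) skip the cache entirely...
    assert!(!worth_caching(120));
    // ...while large, html5ever-style expansions would be cached.
    assert!(worth_caching(50_000));
    println!("heuristic sketch ok");
}
```

Note that even with such a gate, the per-invocation query-system overhead mentioned above would still apply to every invocation, cached or not, which is exactly why the potential gains are uncertain.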
### Implementation Probably Needs to Wait for #124141 Anyway

The build failures of hyper and libc seem to be due to …

Ok, that said: since my initial motivation was to cache proc macro expansions (see #99515 (comment)), I tackled this mostly because it seemed to be a good first step implementation-wise (and because I wanted to). However, it seems that performance-wise it isn't a good first step 😅 But it has given me a basis for how incremental caching and macro expansion work in the compiler, which should still help when tackling proc macros. I'm not sure if I want to immediately start on that; I also have some other stuff coming up, and I'm a bit bummed out by this work not being useful in the end.

I didn't know that this might not be a certain good first step (due to potential perf losses), but assumed it was because the other PR had been commented on and discussed, so I thought it was the right direction. Turns out it wasn't; I guess that can only be known afterwards. Oh well, good experience I guess, and reviewers already have enough to do anyway :) So thanks a lot for engaging and taking the time to review, @petrochenkov! :)

It would be ok for me to close this PR, since I don't plan to work on it again soon (I'd rather tackle proc macro caching); I'm not sure what the standard procedure here is. I hope that is okay. I guess it would be nice to have the perf results "findable" somehow, but I'll leave that up to you.
…=petrochenkov

refactor(rustc_expand::mbe): Don't require full `ExtCtxt` when not necessary

Refactor `mbe::diagnostics::failed_to_match_macro()` to not require a full `ExtCtxt`, but only a `&ParseSess`. It hard-required the `ExtCtxt` only for a call to `cx.trace_macros_diag()`, which we instead move to the only call site of the function.

Note: This could be a potential change in observed behavior, because a call to `cx.trace_macros_diag()` now always happens after `failed_to_match_macro()` is called, whereas before it was only called at the end of the main return path of the function. But since `trace_macros_diag()` "flushes" out any not-yet-reported errors, it should be ok to call it for all paths, since there shouldn't be any on the non-main paths, I think. However, I don't know the rest of the codebase well enough to say that with 100% confidence, but `tests/ui` still passes, which gives at least some confidence in the change.

Also concretize the return type from `Box<dyn MacResult>` to `(Span, ErrorGuaranteed)`, because this function will _always_ return an error, and never any other kind of result.

Was part of rust-lang#128605 and rust-lang#128747, but is a standalone refactoring.

r? ``@petrochenkov``
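For illustration, the shape of the narrowed interface can be sketched with stand-in types (these are minimal dummies, not the real `rustc_session`/`rustc_span`/`rustc_errors` definitions):

```rust
struct ParseSess; // stand-in for rustc_session::parse::ParseSess

#[derive(Debug, PartialEq)]
struct Span(u32); // stand-in for rustc_span::Span

#[derive(Debug, PartialEq)]
struct ErrorGuaranteed; // stand-in for rustc_errors::ErrorGuaranteed

// Before (roughly): fn failed_to_match_macro(cx: &mut ExtCtxt<'_>, ...) -> Box<dyn MacResult>
// After: only the parse session is needed, and the error return is concrete,
// encoding "this function always reports an error" in the type.
fn failed_to_match_macro(_psess: &ParseSess, sp: Span) -> (Span, ErrorGuaranteed) {
    // A real implementation emits the "no rules matched" diagnostic here;
    // the stand-in just returns the span plus the error token.
    (sp, ErrorGuaranteed)
}

fn main() {
    let (sp, guar) = failed_to_match_macro(&ParseSess, Span(7));
    assert_eq!(sp, Span(7));
    assert_eq!(guar, ErrorGuaranteed);
    println!("narrowed signature ok");
}
```

The design gain is that callers no longer need a mutable expansion context just to report a match failure, and the `(Span, ErrorGuaranteed)` return makes the always-an-error contract checkable by the compiler instead of being hidden behind `Box<dyn MacResult>`.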
Rollup merge of rust-lang#128798 - futile:refactor/mbe-diagnostics, r=petrochenkov
…ng, r=<try>

Experimental: Add Derive Proc-Macro Caching

# On-Disk Caching For Derive Proc-Macro Invocations

This PR adds on-disk caching for derive proc-macro invocations using rustc's query system to speed up incremental compilation. The implementation is (intentionally) a bit rough/incomplete, as I wanted to see whether this helps with performance before fully implementing it/RFCing etc. I did some ad-hoc performance testing.

## Rough, Preliminary Eval Results

Using a version built through `DEPLOY=1 src/ci/docker/run.sh dist-x86_64-linux` (which I got from [here](https://rustc-dev-guide.rust-lang.org/building/optimized-build.html#profile-guided-optimization)).

### [Some Small Personal Project](https://github.com/futile/ultra-game)

```console
# with -Zthreads=0 as well
$ touch src/main.rs && cargo +dist check
```

Caused a re-check of 1 crate (the only one). Result:

| Configuration | Time (avg. ~5 runs) |
|---------------|---------------------|
| Uncached      | ~0.54s              |
| Cached        | ~0.54s              |

No visible difference.

### [Bevy](https://github.com/bevyengine/bevy)

```console
$ touch crates/bevy_ecs/src/lib.rs && cargo +dist check
```

Caused a re-check of 29 crates. Result:

| Configuration | Time (avg. ~5 runs) |
|---------------|---------------------|
| Uncached      | ~6.4s               |
| Cached        | ~5.3s               |

Roughly 1s, or ~17% speedup.

### [Polkadot-Sdk](https://github.com/paritytech/polkadot-sdk)

Basically this script (not mine): https://github.com/coderemotedotdev/rustc-profiles/blob/d61ad38c496459d82e35d8bdb0a154fbb83de903/scripts/benchmark_incremental_builds_polkadot_sdk.sh

TL;DR: Two full `cargo check` runs to fill the incremental caches (for cached & uncached). Then 10 repetitions of `touch $some_file && cargo +uncached check && cargo +cached check`.

```console
$ cargo update # `time` didn't build because compiler too new/dep too old
$ ./benchmark_incremental_builds_polkadot_sdk.sh # see above
```

_Huge_ workspace with ~190 crates. Not sure how many were re-built/re-checked on each invocation.

Result:

| Configuration | Time (avg. 10 runs) |
|---------------|---------------------|
| Uncached      | 99.4s               |
| Cached        | 67.5s               |

Very visible speedup of 31.9s, or ~32%.

---

**-> Based on these results, I think it makes sense to do a rustc-perf run and see what that reports.**

---

## Current Limitations/TODOs

I left some `FIXME(pr-time)`s in the code for things I wanted to bring up/draw attention to in this PR, usually when I wasn't sure if I had found a (good) solution or when I knew there might be a better way to do something; see the diff for these.

### High-Level Overview of What's Missing For "Real" Usage

* [ ] Add caching for `Bang`- and `Attr`-proc macros (currently only `Derive`).
  * Not a big change; I just focused on `derive`-proc macros for now, since I felt like these should be most cacheable and are used very often in practice.
* [ ] Allow marking specific macros as "do not cache" (currently only all-or-nothing).
  * Extend the unstable option to support, e.g., `-Z cache-derive-macros=some_pm_crate::some_derive_macro_fn` for easy testing using the nightly compiler.
  * After testing: add a `#[proc_macro_cacheable]` annotation to allow proc-macro authors to "opt in" to caching (or sth. similar). Would probably need an RFC?
* [ ] Might make sense to try to combine this with rust-lang#99515, so that external dependencies can be picked up and taken into account as well.

---

So, just since you were in the loop on the attempt to cache declarative macro expansions: r? `@petrochenkov` Please feel free to re-/unassign!

Finally: I hope this isn't too big a PR. I'll also show up in Zulip, since I read that that is usually appreciated. Thanks a lot for taking a look! :)

(Kind of related/very similar approach, old declarative macro caching PR: rust-lang#128747)
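As a quick arithmetic check on the Polkadot-Sdk numbers above (99.4s uncached vs. 67.5s cached), the stated absolute and relative speedups follow directly:

```rust
fn main() {
    let uncached = 99.4_f64; // avg. over 10 runs, from the table above
    let cached = 67.5_f64;

    let saved = uncached - cached; // absolute seconds saved per run
    let pct = saved / uncached * 100.0; // relative speedup

    assert!((saved - 31.9).abs() < 1e-6); // matches the reported 31.9s
    assert!((pct - 32.0).abs() < 0.2); // matches the reported ~32%

    println!("saved {saved:.1}s ({pct:.1}%)");
}
```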