
Cache declarative macro expansion on disk (for incremental comp.). Based on #128605 (#128747)

Draft · wants to merge 8 commits into base: master
Conversation

futile (Contributor) commented Aug 6, 2024

NOTE: Don't merge yet, mostly here for CI, also rebased on top of #128605!

This PR enables on-disk caching for incremental compilation of declarative macro expansions. The base mechanism is added in #128605, but not enabled for incremental comp. there yet.
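Conceptually, the mechanism boils down to memoizing expansion results under a fingerprint of the macro input, so an unchanged invocation can be answered from the cache in a later compilation session. A toy, self-contained sketch of that hit/miss shape (this is not rustc's actual query or on-disk-cache API; all names here are invented for illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Hypothetical cache: maps a fingerprint of a macro invocation to its
/// already-expanded output. rustc's real mechanism goes through the
/// query system and persists entries in the incremental cache on disk.
struct ExpansionCache {
    entries: HashMap<u64, String>,
}

impl ExpansionCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Fingerprint of (macro, input); rustc would use a stable hash here.
    fn fingerprint(macro_name: &str, input: &str) -> u64 {
        let mut h = DefaultHasher::new();
        macro_name.hash(&mut h);
        input.hash(&mut h);
        h.finish()
    }

    /// Return the cached expansion, or run `expand` and store the result.
    /// The bool reports whether this was a cache hit.
    fn get_or_expand(
        &mut self,
        macro_name: &str,
        input: &str,
        expand: impl FnOnce(&str) -> String,
    ) -> (String, bool) {
        let key = Self::fingerprint(macro_name, input);
        if let Some(out) = self.entries.get(&key) {
            return (out.clone(), true); // cache hit: skip re-expansion
        }
        let out = expand(input);
        self.entries.insert(key, out.clone());
        (out, false) // cache miss: expanded and stored
    }
}

fn main() {
    let mut cache = ExpansionCache::new();
    let expand = |input: &str| format!("expanded({input})");
    let (out1, hit1) = cache.get_or_expand("my_macro", "1, 2, 3", expand);
    let (out2, hit2) = cache.get_or_expand("my_macro", "1, 2, 3", expand);
    assert_eq!(out1, out2);
    assert!(!hit1 && hit2);
    println!("hit on second expansion: {hit2}");
}
```

This sketch only shows the lookup shape; the real implementation additionally has to produce stable keys across sessions and serialize the results to disk.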

r? @petrochenkov since you are in the loop here, but feel free to un-/reassign.

rustbot (Collaborator) commented Aug 6, 2024

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @petrochenkov (or someone else) some time within the next two weeks.

Please see the contribution instructions for more information. Namely, to keep review-time lag to a minimum, PR authors and assigned reviewers should make sure the review state labels (S-waiting-on-review and S-waiting-on-author) stay updated, invoking these commands when appropriate:

  • @rustbot author: the review is finished, PR author should check the comments and take action accordingly
  • @rustbot review: the author is ready for a review, this PR will be queued again in the reviewer's queue

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels Aug 6, 2024
futile (Contributor, Author) commented Aug 6, 2024

@rustbot author

Don't need a review for now, I think, since it's already linked in the previous PR.

@rustbot rustbot added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Aug 6, 2024
futile (Contributor, Author) commented Aug 6, 2024

Actually, since we are mostly waiting for the base PR, maybe this is better:

@rustbot blocked

@rustbot rustbot added S-blocked Status: Blocked on something else such as an RFC or other implementation work. and removed S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Aug 6, 2024

petrochenkov (Contributor) commented:

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Aug 6, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Aug 6, 2024
Cache declarative macro expansion on disk (for incremental comp.). Based on rust-lang#128605

## NOTE: Don't merge yet, mostly here for CI, also rebased on top of rust-lang#128605!

This PR enables on-disk caching for incremental compilation of declarative macro expansions. The base mechanism is added in rust-lang#128605, but not enabled for incremental comp. there yet.

r? `@petrochenkov` since you are in the loop here, but feel free to un-/reassign.
bors (Contributor) commented Aug 6, 2024

⌛ Trying commit f2cf758 with merge 33076b4...

bors (Contributor) commented Aug 6, 2024

☀️ Try build successful - checks-actions
Build commit: 33076b4 (33076b42c5fbe698a1d1887910d8bc5f2fc93c2a)


rust-timer (Collaborator) commented:

Finished benchmarking commit (33076b4): comparison URL.

Overall result: ❌✅ regressions and improvements - BENCHMARK(S) FAILED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

❗ ❗ ❗ ❗ ❗
Warning ⚠️: The following benchmark(s) failed to build:

  • hyper-0.14.18
  • libc-0.2.124

❗ ❗ ❗ ❗ ❗

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 3.8% | [0.2%, 58.4%] | 135 |
| Regressions ❌ (secondary) | 16.5% | [0.2%, 87.5%] | 49 |
| Improvements ✅ (primary) | -10.2% | [-11.5%, -9.6%] | 6 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 3.2% | [-11.5%, 58.4%] | 141 |

Max RSS (memory usage)

Results (primary 6.6%, secondary 33.5%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 6.6% | [1.0%, 34.4%] | 105 |
| Regressions ❌ (secondary) | 34.5% | [2.0%, 156.1%] | 34 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.3% | [-1.3%, -1.3%] | 1 |
| All ❌✅ (primary) | 6.6% | [1.0%, 34.4%] | 105 |

Cycles

Results (primary 7.4%, secondary 27.7%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 7.8% | [0.9%, 55.2%] | 55 |
| Regressions ❌ (secondary) | 27.7% | [2.1%, 93.1%] | 27 |
| Improvements ✅ (primary) | -1.6% | [-1.6%, -1.5%] | 2 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 7.4% | [-1.6%, 55.2%] | 57 |

Binary size

Results (primary -0.2%, secondary -0.2%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.2% | [-0.8%, -0.0%] | 30 |
| Improvements ✅ (secondary) | -0.2% | [-0.3%, -0.0%] | 9 |
| All ❌✅ (primary) | -0.2% | [-0.8%, -0.0%] | 30 |

Bootstrap: 760.99s -> 765.527s (0.60%)
Artifact size: 336.87 MiB -> 337.33 MiB (0.14%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Aug 7, 2024
@petrochenkov petrochenkov added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-blocked Status: Blocked on something else such as an RFC or other implementation work. labels Aug 7, 2024
futile (Contributor, Author) commented Aug 7, 2024

These are my takeaways from the perf run:

  1. Moderate-to-large regressions on many benchmarks, and small-to-moderate regressions on many others.
  2. Only html5ever sees improvements, but there it is a noticeable -10% in instruction counts across most configurations.
  3. hyper and libc both fail to build due to an AttrId in the argument TokenStream, which could not be flattened or cached; TokenStream::flattened() is evidently not sufficient in practice.
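To illustrate the failure mode in item 3: a cache key has to be computed by stably hashing the invocation's tokens, and a session-local id (such as an AttrId) embedded in the stream has no stable value to hash; "flattening" would have to replace it with the underlying tokens first. A toy sketch with invented types (not rustc's actual TokenStream/AttrId):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-ins for illustration only; rustc's real token types have
// far more structure.
enum Token {
    Ident(String),
    // An opaque, session-local id (in the spirit of rustc's AttrId):
    // its numeric value is not stable across compilation sessions, so
    // hashing it directly would produce unstable cache keys.
    Opaque(u32),
}

/// Try to compute a stable hash of a token stream. An unflattened
/// opaque token makes this impossible -- roughly the situation the
/// hyper and libc builds ran into.
fn stable_hash(tokens: &[Token]) -> Result<u64, &'static str> {
    let mut h = DefaultHasher::new();
    for t in tokens {
        match t {
            Token::Ident(s) => s.hash(&mut h),
            Token::Opaque(_) => return Err("unflattened opaque token in stream"),
        }
    }
    Ok(h.finish())
}
```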

Unconditional Decl Macro Caching Seems to Not Be Worth It

I think the big takeaway is that unconditionally caching all declarative macro expansions for incremental compilation is not worth it. I'd assume the overhead of caching and retrieving is simply much larger than re-expansion for most invocations. However, the -10% gain for html5ever shows that some crates can benefit from incremental caching. As far as I remember, html5ever is very heavy on declarative macro usage (it predates proc macros by quite a bit; I think it was around at or before Rust 1.0?), and thus might have some macro expansions that take long enough for disk-caching to pay off.

Needs Cost-Heuristic to Be Useful, but Potential Gains Uncertain

A possible next step would be to figure out a cut-off/condition for when to cache decl macro expansions, i.e., some kind of "complex enough that disk-caching is (probably) worth it" heuristic. However, as the perf run in #128605 showed, integration with the query system has a non-zero cost, which would have to be overcome as well (for almost all crates that invoke decl macros); that would need experimentation. The potential gains are also hard to estimate: for many crates, all decl macro invocations might be cheap enough that caching is never worth it, leaving only the query-system integration overhead. That overhead might itself be improvable, but I don't have a concrete idea for that.
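As a rough illustration of what such a cut-off could look like (the threshold, growth factor, and names are invented here for illustration, not a tested heuristic):

```rust
/// Hypothetical cost heuristic: only cache an expansion when the
/// invocation is large enough that re-expansion likely costs more than
/// a cache round-trip. Both constants are made-up cut-offs.
fn worth_caching(input_tokens: usize, expansion_tokens: usize) -> bool {
    const MIN_INPUT_TOKENS: usize = 256; // assumed "big input" cut-off
    const MIN_GROWTH_FACTOR: usize = 4;  // expansion much larger than input
    input_tokens >= MIN_INPUT_TOKENS
        || expansion_tokens >= input_tokens.saturating_mul(MIN_GROWTH_FACTOR)
}
```

In practice such a heuristic would have to be tuned against rustc-perf, since the query-system overhead applies to every invocation that goes through the cache check.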

One thing that could still make this useful is if integration into the query system allowed more parts of the dependency graph to stay "green" (I think?), where those parts currently always need to be recomputed. But given the performance regressions, that at least doesn't seem to be the case at the moment (and the same probably applies to many other operations as well).

Implementation Probably Needs to Wait For #124141 Anyway

The build failures of hyper and libc seem to be due to TokenStream::flattened()/trying to stable-hash the TokenStream, which seems to not work yet. So it probably doesn't make sense to try this before #124141 anyway.


Ok, that said: since my initial motivation was to cache proc macro expansions (see #99515 (comment)), I tackled this mostly because it seemed like a good first step implementation-wise (and because I wanted to). Performance-wise, however, it turned out not to be a good first step 😅 Still, it has given me a solid basis in how incremental caching and macro expansion work in the compiler, which should help when tackling proc macros.

I'm not sure I want to start on that immediately; I have some other things coming up, and I'm a bit bummed out that this work wasn't useful in the end. I didn't know beforehand that this might not be a clear win (due to potential perf losses), but assumed it was the right direction because the other PR had active discussion. Turns out it wasn't; I guess that can only be known in hindsight. Oh well, good experience, and reviewers have enough to do anyway :)

So thanks a lot for engaging and taking the time to review, @petrochenkov! :) I'd be fine with closing this PR, since I don't plan to work on it again soon (I'd rather tackle proc macro caching); I'm not sure what the standard procedure here is. It would be nice to keep the perf results findable somehow, but I'll leave that up to you.

petrochenkov (Contributor) commented:

I expect #124141 to be merged in a couple of months, so I'd rather keep this PR open and then retry testing and benchmarking after the #124141's merge to see what the effect is.

tgross35 added a commit to tgross35/rust that referenced this pull request Aug 8, 2024
…=petrochenkov

refactor(rustc_expand::mbe): Don't require full ExtCtxt when not necessary

Refactor `mbe::diagnostics::failed_to_match_macro()` to not require a full `ExtCtxt`, but only a `&ParseSess`. It hard-required the `ExtCtxt` only for a call to `cx.trace_macros_diag()`, which we move instead to the only call-site of the function.

Note: This could be a potential change in observed behavior, because a call to `cx.trace_macros_diag()` now always happens after `failed_to_match_macro()` was called, where before it was only called at the end of the main return path of the function. But since `trace_macros_diag()` "flushes" out any not-yet-reported errors, it should be ok to call it for all paths, since there shouldn't be any on the non-main paths I think. However, I don't know the rest of the codebase well enough to say that with 100% confidence, but `tests/ui` still pass, which gives at least some confidence in the change.

Also concretize the return type from `Box<dyn MacResult>` to `(Span, ErrorGuaranteed)`, because this function will _always_ return an error, and never any other kind of result.

Was part of rust-lang#128605 and rust-lang#128747, but is a standalone refactoring.

r? ``@petrochenkov``
rust-timer added a commit to rust-lang-ci/rust that referenced this pull request Aug 8, 2024
Rollup merge of rust-lang#128798 - futile:refactor/mbe-diagnostics, r=petrochenkov

@petrochenkov petrochenkov added S-blocked Status: Blocked on something else such as an RFC or other implementation work. and removed S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Aug 8, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Aug 14, 2024
…ng, r=<try>

Experimental: Add Derive Proc-Macro Caching

# On-Disk Caching For Derive Proc-Macro Invocations

This PR adds on-disk caching for derive proc-macro invocations using rustc's query system to speed up incremental compilation.

The implementation is (intentionally) a bit rough/incomplete, as I wanted to see whether this helps with performance before fully implementing it/RFCing etc.

I did some ad-hoc performance testing.

## Rough, Preliminary Eval Results:

Using a version built through `DEPLOY=1 src/ci/docker/run.sh dist-x86_64-linux` (which I got from [here](https://rustc-dev-guide.rust-lang.org/building/optimized-build.html#profile-guided-optimization)).

### [Some Small Personal Project](https://github.com/futile/ultra-game):

```console
# with -Zthreads=0 as well
$ touch src/main.rs && cargo +dist check
```

Caused a re-check of 1 crate (the only one).

Result:
| Configuration | Time (avg. ~5 runs) |
|--------|--------|
| Uncached | ~0.54s |
| Cached | ~0.54s |

No visible difference.

### [Bevy](https://github.com/bevyengine/bevy):

```console
$ touch crates/bevy_ecs/src/lib.rs && cargo +dist check
```

Caused a re-check of 29 crates.

Result:
| Configuration | Time (avg. ~5 runs) |
|--------|--------|
| Uncached | ~6.4s |
| Cached | ~5.3s |

Roughly 1s, or ~17% speedup.

### [Polkadot-Sdk](https://github.com/paritytech/polkadot-sdk):

Basically this script (not mine): https://github.com/coderemotedotdev/rustc-profiles/blob/d61ad38c496459d82e35d8bdb0a154fbb83de903/scripts/benchmark_incremental_builds_polkadot_sdk.sh

TL;DR: Two full `cargo check` runs to fill the incremental caches (for cached & uncached). Then 10 repetitions of `touch $some_file && cargo +uncached check && cargo +cached check`.

```console
$ cargo update # `time` didn't build because compiler too new/dep too old
$ ./benchmark_incremental_builds_polkadot_sdk.sh # see above
```

_Huge_ workspace with ~190 crates. Not sure how many were re-built/re-checked on each invocation.

Result:
| Configuration | Time (avg. 10 runs) |
|--------|--------|
| Uncached | 99.4s |
| Cached | 67.5s |

Very visible speedup of 31.9s or ~32%.

---

**-> Based on these results I think it makes sense to do a rustc-perf run and see what that reports.**

---

## Current Limitations/TODOs

I left some `FIXME(pr-time)`s in the code for things I wanted to bring up/draw attention to in this PR. Usually when I wasn't sure if I found a (good) solution or when I knew that there might be a better way to do something; See the diff for these.

### High-Level Overview of What's Missing For "Real" Usage:

* [ ] Add caching for `Bang`- and `Attr`-proc macros (currently only `Derive`).
  * Not a big change, I just focused on `derive`-proc macros for now, since I felt like these should be most cacheable and are used very often in practice.
* [ ] Allow marking specific macros as "do not cache" (currently only all-or-nothing).
  * Extend the unstable option to support, e.g., `-Z cache-derive-macros=some_pm_crate::some_derive_macro_fn` for easy testing using the nightly compiler.
  * After Testing: Add a `#[proc_macro_cacheable]` annotation to allow proc-macro authors to "opt-in" to caching (or sth. similar). Would probably need an RFC?
  * Might make sense to try to combine this with rust-lang#99515, so that external dependencies can be picked up and be taken into account as well.
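For the proposed `-Z cache-derive-macros=...` allow-list, the flag handling itself would be straightforward. A hypothetical sketch (this flag does not currently exist in rustc; the function names are invented):

```rust
/// Parse an allow-list flag value such as
/// `-Z cache-derive-macros=serde_derive::Serialize,serde_derive::Deserialize`
/// into the list of macro paths that opt in to caching.
fn parse_cache_allowlist(value: &str) -> Vec<String> {
    value
        .split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(str::to_owned)
        .collect()
}

/// Check whether a given derive macro (by path) is on the allow-list.
fn is_cacheable(allowlist: &[String], macro_path: &str) -> bool {
    allowlist.iter().any(|m| m == macro_path)
}
```

An opt-in attribute like `#[proc_macro_cacheable]` would move the same decision from a compiler flag to the proc-macro author, which is why it would likely need an RFC.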

---

So, just since you were in the loop on the attempt to cache declarative macro expansions:

r? `@petrochenkov`

Please feel free to re-/unassign!

Finally: I hope this isn't too big a PR, I'll also show up in Zulip since I read that that is usually appreciated. Thanks a lot for taking a look! :)

(Kind of related/very similar approach, old declarative macro caching PR: rust-lang#128747)