Speed up compilation of large constant arrays #51833
Conversation
This is a different approach to rust-lang#51672 as suggested by @oli-obk. Rather than write each repeated value one-by-one, we write the first one and then copy its value directly into the remaining memory.
This saves an additional 6 seconds on the test program.
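The fill strategy described above can be sketched in safe Rust. This is an illustrative toy, not the interpreter's actual code (which operates on raw allocation bytes): write the first element once, then repeatedly copy the already-initialized prefix into the rest of the buffer, doubling the copied region each time.

```rust
// Illustrative sketch: fill `buf` with repeats of `elem` by writing it
// once and then copying the initialized prefix forward, doubling each
// iteration instead of writing element-by-element.
fn fill_repeated(buf: &mut [u8], elem: &[u8]) {
    let size = elem.len();
    assert!(size > 0 && buf.len() % size == 0);
    buf[..size].copy_from_slice(elem);
    let mut filled = size;
    while filled < buf.len() {
        // Copy at most `filled` bytes: the largest fully-initialized prefix.
        let n = filled.min(buf.len() - filled);
        let (src, dst) = buf.split_at_mut(filled);
        dst[..n].copy_from_slice(&src[..n]);
        filled += n;
    }
}

fn main() {
    let mut buf = vec![0u8; 12];
    fill_repeated(&mut buf, &[1, 2, 3]);
    assert_eq!(buf, vec![1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]);
}
```

The doubling means the fill takes O(log n) copy calls rather than n per-element writes, which is where the bulk of the speedup comes from.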
Nice work!
For the first patch in #51672 I inlined several methods in addition to
Those are some awesome improvements! After the review is addressed we'll throw it into the perf tests.
src/librustc_mir/interpret/memory.rs
Outdated
```diff
-        self.copy_undef_mask(src, dest, size)?;
+        self.copy_undef_mask(src, dest, size * length)?;
         // copy back the relocations
         self.get_mut(dest.alloc_id)?.relocations.insert_presorted(relocations);
```
I think you need to repeat this, too (and offset the indices).
Try a `[&FOO; 500]` (for a non-ZST `FOO`) and then access any element but the first (at compile time! at runtime you'll get a segfault). If I'm reading the code correctly, this will tell you about a dangling pointer.
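What "repeat the relocations and offset the indices" could look like can be sketched with plain pairs; this is hypothetical illustration, not the interpreter's real relocation map. Each pointer recorded in the first element must be re-inserted at `offset + i * elem_size` for every copy `i`, or later elements end up with byte patterns that no longer count as pointers.

```rust
// Hypothetical sketch: `(offset, alloc_id)` pairs stand in for the real
// relocation entries. Repeating the first element's relocations means
// shifting each entry by one element size per copy.
fn repeat_relocations(first: &[(u64, u32)], elem_size: u64, count: u64) -> Vec<(u64, u32)> {
    let mut out = Vec::with_capacity(first.len() * count as usize);
    for i in 0..count {
        for &(offset, alloc_id) in first {
            out.push((offset + i * elem_size, alloc_id));
        }
    }
    out
}

fn main() {
    // One relocation at offset 0 in an 8-byte element, repeated 3 times.
    assert_eq!(
        repeat_relocations(&[(0, 7)], 8, 3),
        vec![(0, 7), (8, 7), (16, 7)]
    );
}
```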
Got it, thanks! Can you double check my math?
src/librustc_mir/interpret/memory.rs
Outdated
```diff
-        self.copy_undef_mask(src, dest, size)?;
+        self.copy_undef_mask(src, dest, size * length)?;
```
While this produces the correct result, it does `n^2/2` copies instead of `n` copies. Inside the function itself we should probably move the `self.get(src.alloc_id)?` out of the loops, too. We can probably also improve the nonoverlapping case enormously by not requiring an intermediate allocation.
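The hoisting suggestion can be sketched as follows; the names here are stand-ins for the interpreter's real structures, not the actual rustc API. The point is simply that the allocation lookup is loop-invariant, so it should happen once before the copy loops rather than on every iteration.

```rust
use std::collections::HashMap;

// Illustrative stand-in for an interpreter allocation.
struct Alloc {
    bytes: Vec<u8>,
}

fn total_copied(allocs: &HashMap<u32, Alloc>, src_id: u32, repeat: usize) -> usize {
    // Hoisted out of the loop: one lookup instead of `repeat` lookups.
    let src = &allocs[&src_id];
    let mut total = 0;
    for _ in 0..repeat {
        // ... the per-repeat copy would go here ...
        total += src.bytes.len();
    }
    total
}

fn main() {
    let mut allocs = HashMap::new();
    allocs.insert(0, Alloc { bytes: vec![0u8; 8] });
    assert_eq!(total_copied(&allocs, 0, 5), 40);
}
```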
It would be interesting to see if this could be further sped up by copying larger chunks of memory at a time. Right now this makes
This saves 3 seconds on the test program.
Looks like the logic for that already exists in core: line 422 in 3515dab.
Maybe there's a way to extract the copying part into an unsafe
@scottmcm Great find!
@nnethercote That's a good idea. I tried that and it saved an additional 3 seconds of compile time (now 10 seconds).
This saves 4.5 seconds and takes the compile time down to 5.5 seconds.
This saves 0.5 seconds on the test compilation.
This saves 2.5 seconds on the test program.
@michaelwoerister @scottmcm I tried that, but it only saved about 100ms. Edit: I can push that commit too if you think it's worth pursuing.
With the latest changes, the compile time is down to 2.8 seconds on my machine.
A 22.5x speedup!
Awesome improvements. Just a nit so that Miri (the tool) keeps working.
```diff
@@ -882,25 +882,16 @@ impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> Memory<'a, 'mir, 'tcx, M> {
     ) -> EvalResult<'tcx> {
         // The bits have to be saved locally before writing to dest in case src and dest overlap.
```
This comment makes me think that we should not do this commit, otherwise we'll run into trouble in the future (and in Miri right now). Can you add an `if` checking whether there is overlap and, if there is, just run the old code?
Hmm. I thought I preserved the existing behavior by cloning the source allocation's `undef_mask` before writing to the destination's. Is that sufficient?
Oh right, sorry. I misread the code.
I still think the code isn't doing the right thing. It's only copying once, when it should be copying N-1 times.
You can try this out by creating an array of types with padding; everything starting at the third element will probably not have undef masks for the padding. (You'll need unions to get at the bits and then attempt to use them for an array length to actually get a compiler error from that.)
I'm afraid I'm not quite following. We do call this function with `size * length`, so shouldn't it cover all of the repeated copies? Can you provide a sample program that will fail?
Yes, you are using the length, but that just means that the entire array is copied from `0..N` to `1..=N`, not that the first element is copied N times.
I'll make a regression test
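The distinction being made here can be shown with a toy example (not the real interpreter code): one block copy of `size * length` bytes moves whatever happens to be in the source range, which is not the same as duplicating the first element `length` times.

```rust
// A single block copy moves the whole source range, stale bytes included.
fn block_copy(src: &[u8], dst: &mut [u8]) {
    dst.copy_from_slice(src);
}

// The repeat operation must duplicate the first element into every slot.
fn repeat_first(elem: &[u8], dst: &mut [u8]) {
    for chunk in dst.chunks_mut(elem.len()) {
        chunk.copy_from_slice(elem);
    }
}

fn main() {
    // Source range where only the first element [1, 2] is initialized;
    // the remaining bytes still hold stale data (9s).
    let src = [1u8, 2, 9, 9, 9, 9];
    let mut a = [0u8; 6];
    let mut b = [0u8; 6];
    block_copy(&src, &mut a);        // carries the stale bytes along
    repeat_first(&src[..2], &mut b); // repeats the first element
    assert_eq!(a, [1, 2, 9, 9, 9, 9]);
    assert_eq!(b, [1, 2, 1, 2, 1, 2]);
}
```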
I'm fairly certain that the following test will compile successfully on your PR: http://play.rust-lang.org/?gist=1d0183fcfb65164d1ca58ccd9614c33c
src/librustc_mir/interpret/memory.rs
Outdated
```rust
        );
    }
    for i in 0..size.bytes() {
        let defined = undef_mask.get(src.offset + Size::from_bytes(i));
```
If you pass a repeat counter to the function, you should be able to just modulo the `i` here over the size and have the for loop go from `0` to `size.bytes() * repeat`.
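The suggested modulo trick can be sketched like this (illustrative only; the real code operates on the interpreter's bit-level undef mask, not a `Vec<bool>`): with a repeat count passed in, a single loop over `size * repeat` positions reads the source with `i % size` instead of restarting once per element.

```rust
// Sketch of the modulo approach: one loop covers all repeats, and
// `i % size` wraps the index back into the source mask.
fn repeat_mask(src_mask: &[bool], repeat: usize) -> Vec<bool> {
    let size = src_mask.len();
    (0..size * repeat).map(|i| src_mask[i % size]).collect()
}

fn main() {
    assert_eq!(
        repeat_mask(&[true, false], 3),
        vec![true, false, true, false, true, false]
    );
}
```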
Please also add a test for http://play.rust-lang.org/?gist=1c0e90ac9064edfa12fbd286902e20ef to make sure we always properly copy the relocations.
Added tests and fixed that issue.
Force-pushed from 7c64a63 to 46512e0.
Fixed tidy.
@bors r+ We should probably add a bunch of tests to the perf tests to ensure this doesn't regress.
📌 Commit 46512e0 has been approved by
…i-obk Speed up compilation of large constant arrays

This is a different approach to #51672 as suggested by @oli-obk. Rather than write each repeated value one-by-one, we write the first one and then copy its value directly into the remaining memory. With this change, the [toy program](https://github.com/rust-lang/rust/blob/c2f4744d2db4e162df824d0bd0b093ba4b351545/src/test/run-pass/mir_heavy_promoted.rs) goes from 63 seconds to 19 seconds on my machine.

Edit: Inlining `Size::bytes()` saves an additional 6 seconds, dropping the total time to 13 seconds on my machine.

Edit2: Now down to 2.8 seconds.

r? @oli-obk cc @nnethercote @eddyb
☀️ Test successful - status-appveyor, status-travis
Thanks for giving it a try, @wesleywiser. Great work!
…i-obk Update a FIXME in memory.rs In rust-lang#51833, I improved the performance of `copy_undef_mask()`. As such, the old FIXME wasn't appropriate anymore. The main remaining thing left to do is to implement a fast path for non-overlapping copies (per @oli-obk). r? @oli-obk
This is a different approach to #51672 as suggested by @oli-obk. Rather
than write each repeated value one-by-one, we write the first one and
then copy its value directly into the remaining memory.
With this change, the toy program goes from 63 seconds to 19 seconds on my machine.
Edit: Inlining `Size::bytes()` saves an additional 6 seconds, dropping the total time to 13 seconds on my machine.

Edit2: Now down to 2.8 seconds.
r? @oli-obk
cc @nnethercote @eddyb