interpret: make overflowing binops just normal binops #125359
Conversation
rustbot has assigned @compiler-errors.
Some changes occurred to the CTFE / Miri engine (cc @rust-lang/miri)
Some changes occurred to MIR optimizations (cc @rust-lang/wg-mir-opt)
The Miri subtree was changed (cc @rust-lang/miri)
@@ -223,7 +223,7 @@ enum Value<'tcx> {
     NullaryOp(NullOp<'tcx>, Ty<'tcx>),
     UnaryOp(UnOp, VnIndex),
     BinaryOp(BinOp, VnIndex, VnIndex),
-    CheckedBinaryOp(BinOp, VnIndex, VnIndex),
+    CheckedBinaryOp(BinOp, VnIndex, VnIndex), // FIXME get rid of this, work like MIR instead
I think there's more clean-up possible here (and in dataflow_const_prop as well), but I'll leave that to someone else.
Cc @cjgillot
let val_field = self.project_field(dest, 0)?;
self.write_scalar(val.to_scalar(), &val_field)?;
let overflowed_field = self.project_field(dest, 1)?;
self.write_scalar(Scalar::from_bool(overflowed), &overflowed_field)?;
Removing this code means we now assume that the return type of the WithOverflow ops actually is a ScalarPair. I think we should enforce that our primitive operations return data we can represent with our primitive types, so IMO this is a reasonable assumption. At some point we added the codepath here to account for layout randomization, but looking through the layout code, I think tuples use ReprFlags::default(), which will not have RANDOMIZE_LAYOUT set, so this shouldn't happen any more? If we ever make it happen again, we should not randomize tuples of the form ($int, bool).
Looking at codegen, I can't see support there for checked binops returning something that does not have the ScalarPair ABI.
EDIT: Ah, that was in preparation for #97861, which never landed. Seems fine to undo then, IMO.
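As a standalone illustration of the semantics being assumed (plain Rust, not the interpreter code itself): each *WithOverflow op yields a (wrapped value, overflow flag) pair, which is exactly the shape the stable `overflowing_*` methods return, and which the interpreter can now write as a single scalar-pair immediate.

```rust
// Plain-Rust model of the (value, bool) pair an overflowing binop produces;
// the interpreter internals differ, but the resulting data is the same shape.
fn main() {
    // 200 + 100 wraps around in u8: 300 mod 256 = 44, with the overflow flag set.
    assert_eq!(200u8.overflowing_add(100), (44, true));
    // No wrap-around: the flag stays false.
    assert_eq!(1u8.overflowing_add(2), (3, false));
    println!("ok");
}
```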
👍 to saying it has to be a scalar pair. We could, potentially, still allow layout randomization on it within that constraint, however -- not that we should do that (at least right now), though.
I think we could add layout-aware scalar-pair constructors (and destructors) that work like
rust/compiler/rustc_codegen_ssa/src/mir/rvalue.rs, lines 713 to 724 at 54cdc13:
let mut inputs = ArrayVec::<Bx::Value, 2>::new();
let mut input_scalars = ArrayVec::<abi::Scalar, 2>::new();
for field_idx in layout.fields.index_by_increasing_offset() {
    let field_idx = FieldIdx::from_usize(field_idx);
    let op = self.codegen_operand(bx, &fields[field_idx]);
    let values = op.val.immediates_or_place().left_or_else(|p| {
        bug!("Field {field_idx:?} is {p:?} making {layout:?}");
    });
    inputs.extend(values);
    let scalars = self.value_kind(op.layout).scalars().unwrap();
    input_scalars.extend(scalars);
}
But yeah, no need to do that for now, and the way we do layout optimizations for structs puts bigger, higher-alignment types first anyway, so (T, bool) will, I think, always be in that order right now anyway. (Or we could add a repr(linear) type for WithOverflow<T> that these could return, if needed, though that seems like overkill.)
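A quick standalone check of that layout claim (ordinary Rust, not compiler-internal; the exact offsets are not guaranteed by the language for the default repr):

```rust
fn main() {
    let pair: (u32, bool) = (42, true);
    let base = &pair as *const _ as usize;
    // Field offsets observed at runtime; with the current "bigger,
    // higher-alignment fields first" heuristic, the u32 lands at offset 0
    // and the bool right after it.
    let int_offset = &pair.0 as *const u32 as usize - base;
    let bool_offset = &pair.1 as *const bool as usize - base;
    println!("u32 at offset {int_offset}, bool at offset {bool_offset}");
}
```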
(To elaborate on this, one thing that cg_ssa does now is that the in-register type doesn't need to be an exact match for the in-memory type. So we could -- potentially and in the future -- do something like allow the 8 possible randomizations of (u32, bool) while still saying that it's a scalar pair.)
That's for Rvalue::Aggregate, I don't see how it is related.
But yeah, we could make ScalarPair more flexible; we just need to add the information at which offset the two fields live. Then all variants of ($int, bool) can be represented. (Also see here and here for the more general topic of refactoring our Abi handling.)
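A hedged sketch of what that extra offset information could look like; the types below are trimmed stand-ins for the rustc_abi ones, not the real definitions:

```rust
#[derive(Debug)]
struct Size(u64); // stand-in for rustc_abi::Size

#[derive(Debug)]
enum Scalar { Int { bits: u64 }, Bool } // stand-in for rustc_abi::Scalar

// Hypothetical: a ScalarPair that records where each half lives, so any
// placement of ($int, bool) within the pair's size is representable.
#[derive(Debug)]
enum Abi {
    ScalarPair {
        a: Scalar,
        a_offset: Size, // today implicitly 0
        b: Scalar,
        b_offset: Size, // today implicitly fixed by a's size and b's alignment
    },
}

fn main() {
    // One of the eight placements of (u32, bool) mentioned above:
    // the bool at offset 7 instead of the usual 4.
    let abi = Abi::ScalarPair {
        a: Scalar::Int { bits: 32 },
        a_offset: Size(0),
        b: Scalar::Bool,
        b_offset: Size(7),
    };
    println!("{abi:?}");
}
```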
Force-pushed from f18d8a3 to aa3a9ea
// Overflows must be ignored here.
// The overflowing operators are handled in `handle_assign`.
I don't understand why the overflowing operators can't be handled here, but it matches the previous structure, where Rvalue::CheckedBinaryOp was handled in handle_assign, not in handle_rvalue, for some reason.
As checked ops return scalar pairs, the routine needs to project the destination as 2 scalars, which handle_rvalue is not designed to do.
Would be good to add a comment in the code explaining that.
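Something like the following (a sketch of the suggested comment, not the PR's actual wording) would capture it:

```rust
// Overflowing binops return a ScalarPair, but `handle_rvalue` only produces a
// single scalar. `handle_assign` can project the destination into its two
// fields, so the `*WithOverflow` cases are handled there instead.
```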
Force-pushed from aa3a9ea to c0b4b45
self.binary_op(BinOp::SubWithOverflow, &a_offset, &b_offset)?
    .to_scalar_pair()
Oh, nice. I like the approach of it returning ImmTy so the method doesn't need to talk about tuples 👍
I did an (admittedly fairly quick) scan over this and it seems good to me, but you should probably get a review from someone more familiar with the nuances of CTFE/Miri.
-        let (res, overflow) = match un_op {
-            Not => (self.truncate(!val, layout), false), // bitwise negation, then truncate
+        let res = match un_op {
+            Not => self.truncate(!val, layout), // bitwise negation, then truncate
             Neg => {
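For context on the truncate call that both the old and new arms keep, here is a standalone model (the interpreter stores scalars in a container wider than the value's own type, simulated here with a u128):

```rust
// Model of why `Not` needs a truncate: negating a u8 held in a u128 container
// also flips the 120 bits above the value's width, which must be masked off.
fn truncate(val: u128, bits: u32) -> u128 {
    if bits >= 128 { val } else { val & ((1u128 << bits) - 1) }
}

fn main() {
    let val: u128 = 0b0000_1010; // a u8 value, zero-extended into the container
    let negated = !val; // sets every high bit of the container too
    assert_eq!(truncate(negated, 8), 0b1111_0101);
    // Matches real u8 semantics:
    assert_eq!(truncate(negated, 8) as u8, !0b0000_1010u8);
    println!("ok");
}
```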
nothing to do for this PR: Interesting that we don't have a checked version of neg in MIR anywhere. I guess it's like Div in that lowering to MIR deals with that.
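A small demonstration of how that works out in practice (surface Rust; the MIR lowering emits an assert against MIN rather than a *WithOverflow op):

```rust
fn main() {
    // i32::MIN has no positive counterpart, so negating it overflows.
    assert_eq!(i32::MIN.checked_neg(), None);
    assert_eq!((-5i32).checked_neg(), Some(5));
    // With overflow checks enabled, plain `-x` on i32::MIN panics with
    // "attempt to negate with overflow" via the assert inserted during lowering.
    println!("ok");
}
```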
Co-authored-by: scottmcm <scottmcm@users.noreply.github.com>
Force-pushed from d549576 to cb53194
r? @oli-obk
@bors try @rust-timer queue

r=me with perf clean. I'll give the known-panics pass another look for more cleanups once this lands.
…r=<try> interpret: make overflowing binops just normal binops Follow-up to rust-lang#125173 (Cc `@scottmcm`)
☀️ Try build successful - checks-actions
FWIW I think GVN/const-prop have the more obvious cleanup possibilities. But yeah, the known-panics lint can probably also be cleaned up a bit more.
Finished benchmarking commit (d373d09): comparison URL.

Overall result: ❌ regressions - no action needed

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

@bors rollup=never

Instruction count: this is a highly reliable metric that was used to determine the overall result at the top of this comment.

Max RSS (memory usage): this benchmark run did not return any relevant results for this metric.

Cycles: results (primary -2.9%, secondary -2.1%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size: this benchmark run did not return any relevant results for this metric.

Bootstrap: 672.666s -> 674.077s (0.21%)
…piler-errors Stop using `to_hir_binop` in codegen This came up in rust-lang#125359 (comment), and looking into it we can just use the `mir::BinOp`s directly instead of `hir::BinOpKind`s. (AKA rather than going `mir::BinOp` → `hir::BinOpKind` → `IntPredicate`, just go `mir::BinOp` → `IntPredicate`.)
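A minimal sketch of the shortcut that commit describes, using trimmed placeholder enums rather than the real rustc/cg_ssa types:

```rust
// Map a MIR comparison op straight to an LLVM-style integer predicate,
// skipping the old hop through `hir::BinOpKind`.
#[allow(dead_code)]
#[derive(Debug)]
enum MirBinOp { Eq, Ne, Lt, Le } // placeholder subset of mir::BinOp

#[allow(dead_code)]
#[derive(Debug)]
enum IntPredicate { IntEQ, IntNE, IntSLT, IntSLE } // placeholder subset

fn to_int_predicate(op: MirBinOp) -> IntPredicate {
    match op {
        MirBinOp::Eq => IntPredicate::IntEQ,
        MirBinOp::Ne => IntPredicate::IntNE,
        // Assuming signed operands here; the real code also picks unsigned
        // predicates based on the operand type.
        MirBinOp::Lt => IntPredicate::IntSLT,
        MirBinOp::Le => IntPredicate::IntSLE,
    }
}

fn main() {
    println!("{:?}", to_int_predicate(MirBinOp::Lt));
}
```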
Seems clean enough to me -- a barely significant regression that only affects some build configurations of the stress test.
Agreed. The bot flagged it as needing no action, and while cycles are a less reliable metric, they aren't showing an issue either.
Rollup merge of rust-lang#125399 - scottmcm:less-hir-in-cg_ssa, r=compiler-errors Stop using `to_hir_binop` in codegen This came up in rust-lang#125359 (comment), and looking into it we can just use the `mir::BinOp`s directly instead of `hir::BinOpKind`s. (AKA rather than going `mir::BinOp` → `hir::BinOpKind` → `IntPredicate`, just go `mir::BinOp` → `IntPredicate`.)
☀️ Test successful - checks-actions
Finished benchmarking commit (5293c6a): comparison URL.

Overall result: no relevant changes - no action needed

@rustbot label: -perf-regression

Instruction count: this benchmark run did not return any relevant results for this metric.

Max RSS (memory usage): results (primary -1.1%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: this benchmark run did not return any relevant results for this metric.

Binary size: this benchmark run did not return any relevant results for this metric.

Bootstrap: 674.126s -> 672.751s (-0.20%)
Ah, great -- no regressions post-merge, so the two "highly reliable" ones pre-merge definitely weren't an issue.
Follow-up to #125173 (Cc @scottmcm)