overloaded assignment operations a += b
#953
Conversation
# Alternatives

Alternatively, we could change the traits to take the RHS by value. This makes
I would like to see discussion of why the proposed design differs from the plain `Add`, `Sub`, etc. traits. (I don't think it's a bad idea, but it seems worthy of at least mentioning briefly.)
Added a section on by-ref vs by-value.
An alternative would be to delegate
I'd prefer taking RHS by value, like the "normal" binary operator traits do, for consistency. I think we can deal with the ergonomics loss with another RFC (auto-referencing for operands), which will work uniformly across "normal" binary operators and their assignment counterparts. (Actually currently I am not quite convinced that we should add auto-referencing, but I'd prefer "uniform auto-referencing for operands" to "assignment operators taking RHS by reference".)
@kennytm I doubt that's not the primary concern - what about
👍 to this proposal, and even though I prefer by-value RHS, most of the places where I've wanted to use this feature could've worked fine with by-ref. I am (pleasantly) surprised to see this get a chance before 1.0; for some reason I thought it wouldn't even be considered for now.
I think the same can also be said about the "normal" binary operators. We generally want by-value for primitives and by-ref for custom types. And the actual problem is that our "operator traits" approach cannot support both (without ergonomics loss when calling). I still prefer by-value traits (and maybe with uniform auto-referencing support for operands). Besides consistency, suppose we have a

But if we decide to use by-ref traits, then I suggest adding a `Ref` suffix to the trait names. It is as if we have:

trait Add { ... }          // binary `+`, which takes both operands by value. (Existing.)
trait AddRef { ... }       // binary `+`, which takes both operands by reference. (Not added.)
trait AddAssign { ... }    // `+=`, which takes the RHS by value. (Not added.)
trait AddAssignRef { ... } // `+=`, which takes the RHS by reference. (Added.)

This also means renaming the proposed `AddAssign` trait accordingly.
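For reference, here is a minimal sketch of the two trait shapes being compared in this thread. The by-value form is the signature that `std::ops::AddAssign` eventually stabilized with; the by-reference form reflects the RFC's original proposal, written with the hypothetical `Ref`-suffixed naming suggested above.

// By-value RHS (the shape std::ops::AddAssign ended up with):
pub trait AddAssign<Rhs = Self> {
    fn add_assign(&mut self, rhs: Rhs);
}

// By-reference RHS, as originally proposed, under the `Ref` suffix from the
// comment above (hypothetical trait and method names):
pub trait AddAssignRef<Rhs = Self> {
    fn add_assign_ref(&mut self, rhs: &Rhs);
}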
@eddyb Ah right. Unfortunately
Do we need to worry about supporting atomicity ever? Maybe we want to implement
@shepmaster that wouldn't be doable because a shared atomic would be
Using the current
Question: Should the
Both might be abuse of the operator, and I don't know of Rust versions
For a linear algebra library, this would lead to an API with unnecessary cloning and slower operations, for example if you want to write:

// x, y, z: owned matrices (`Mat`) that contain `f32` elements
// `impl Mul<f32> for Mat { type Output = Mat; .. }` eager scaling - O(N) time
// `impl Add<Mat> for Mat { type Output = Mat; .. }` saxpy call - O(N) time
y *= 2.;     // y = y * 2.;
y += x * 3.; // y = y + x * 3.;
z += x * 4.; // z = z + x * 4.; error: `x` has been moved

Instead you need to write:

y *= 2.;             // O(N) time from eager scaling
y += x.clone() * 3.; // O(N) memory use from clone
                     // O(N) time from eager scaling (3.0 * x.clone())
                     // O(N) time from saxpy (y += temp)
                     // ???? time from calling destructor on temp
z += x * 4.;         // O(N) time from eager scaling (4.0 * x)
                     // O(N) time from saxpy (z += temp)
                     // ???? time calling destructor on temp
                     // and you can't use `x` afterwards

However with the design below:

// `struct Scaled<'a>(f32, &'a Mat)`
// `impl MulAssign<f32> for Mat { .. }` eager scaling - O(N) time
// `impl<'a> Mul<f32> for &'a Mat { type Output = Scaled<'a>; .. }` lazy scaling - O(1) time
// `impl AddAssign<Scaled<'a>> for Mat { .. }` saxpy call - O(N) time
y *= 2.;
y += x * 3.; // error: `Mul<f32>` not implemented for `Mat`
z += x * 4.; // error: `Mul<f32>` not implemented for `Mat`

Here the compiler errors lead the user to more efficient operations:

y *= 2.;      // O(N) time from eager scaling
y += &x * 3.; // O(1) time from lazy scaling (3.0 * &x)
              // O(N) from saxpy (y += scaled)
z += &x * 4.; // O(1) time from lazy scaling (4.0 * &x)
              // O(N) from saxpy (z += scaled)
              // and `x` *can* be used afterwards

Note that here it is important to provide

Even if we changed the signature of
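A minimal compilable version of the lazy-scaling design sketched above, assuming today's `std::ops` traits; `Mat` and `Scaled` are toy stand-ins (a `Vec<f32>` rather than a real matrix type), not any particular library's API.

use std::ops::{AddAssign, Mul, MulAssign};

// Toy matrix: just a flat buffer of f32 elements.
struct Mat {
    data: Vec<f32>,
}

// Lazy "alpha * &matrix" expression: constructing it is O(1) and allocates nothing.
struct Scaled<'a>(f32, &'a Mat);

// Eager in-place scaling: `y *= 2.0` touches every element once.
impl MulAssign<f32> for Mat {
    fn mul_assign(&mut self, alpha: f32) {
        for v in &mut self.data {
            *v *= alpha;
        }
    }
}

// Lazy scaling: `&x * 3.0` only captures the scalar and a borrow of `x`.
impl<'a> Mul<f32> for &'a Mat {
    type Output = Scaled<'a>;
    fn mul(self, alpha: f32) -> Scaled<'a> {
        Scaled(alpha, self)
    }
}

// The saxpy step: `y += alpha * x` in a single pass, with no temporary matrix.
impl<'a> AddAssign<Scaled<'a>> for Mat {
    fn add_assign(&mut self, rhs: Scaled<'a>) {
        let Scaled(alpha, x) = rhs;
        for (yi, xi) in self.data.iter_mut().zip(&x.data) {
            *yi += alpha * *xi;
        }
    }
}

fn main() {
    let x = Mat { data: vec![1.0; 4] };
    let mut y = Mat { data: vec![2.0; 4] };
    y *= 2.0;      // eager scaling, one O(N) pass
    y += &x * 3.0; // lazy `Scaled` + saxpy, one O(N) pass; `x` is not moved
    assert_eq!(y.data, vec![7.0; 4]); // 2*2 + 3*1
}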
No, I don't think so. In fact, I usually do it the other way and define:

// X: An owned value (`Mat`)
// Y: A reference-like value (`Scaled<'a>`)
impl Add<Y> for X {
    type Output = X;
    fn add(mut self, rhs: Y) -> X {
        self += rhs;
        self
    }
}

But I don't think one trait should depend on the other. Nor do I think that if one
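A compilable toy version of the delegation pattern described above, with `Add` reusing the `AddAssign` implementation; the `Counter` type is invented purely for illustration.

use std::ops::{Add, AddAssign};

#[derive(Debug, PartialEq)]
struct Counter(u32);

impl AddAssign<u32> for Counter {
    fn add_assign(&mut self, rhs: u32) {
        self.0 += rhs;
    }
}

// `Add` just forwards to the in-place implementation.
impl Add<u32> for Counter {
    type Output = Counter;
    fn add(mut self, rhs: u32) -> Counter {
        self += rhs;
        self
    }
}

fn main() {
    assert_eq!(Counter(1) + 2, Counter(3));
}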
Could you link to examples of this sugar?
What do you have in mind here? AFAIK, the set operations provided by stdlib are lazy (return an iterator).
@japaric We currently provide
@gankro, I think set operations are a fine use case of operator overloading; that said, having lazy named methods but eager operators may or may not surprise people. (Again I think the status quo is okay.)
This looks good to me, basically like what I had in mind, modulo the question of by-value or by-ref. I think I am sympathetic to @CloudiDust's point that we should handle by-ref uniformly (and I am vaguely in favor of doing so). It seems to me that consuming the rhs could be more efficient in some cases. For example, if you were to write
@nikomatsakis, an assignment operator that consumes its RHS would be quite surprising. I'd expect that only
@CloudiDust, why? Consuming is the default just about everywhere in Rust. Including the addition operator
@glaebhoerl, I'd say arithmetic operators are a bit different from normal named functions: their operands are not expected to be mutated/consumed. (For an assignment operator, only the LHS is expected to be mutated.) I think on the doc page of the
I do think we should clearly distinguish between "mutate" ("by
(We may have been talking past each other? As far as I can tell, you came out in favor of
@glaebhoerl, oops, what was I thinking. (Note: In the two previous comments, when I talked about "assignment operators", I always meant "compound assignment operators", stressing the "arithmetic operator" parts. Not that this correction makes my previous statements true, though.) I thought that
Time to get some rest.
@gankro That's operator abuse, because
@nikomatsakis Thanks for the example, I agree that taking RHS by value would lead to better performance in that case. Though I'd expect
C# event handlers: https://msdn.microsoft.com/en-us/library/aa645739%28v=vs.71%29.aspx (search for += and -=; you can add multiple handlers to an event; there's also lots of confusion about closures in event handlers: http://stackoverflow.com/questions/2226510/closures-in-c-sharp-event-handler-delegates)
@oli-obk thanks for the links! I'd personally prefer methods like
I guess another case where the RHS needs to be by-value is pushing elements into a
I'm pretty negative on doing collection operations with operators, personally. Not least of which because it opens a huge coherence-asymmetry can of worms. It creates a huge mess of implementations to handle all the different combinations, too.
@japaric I noticed you did not have the trait inherit from `Add`. Was this considered? I didn't see it listed on the alternatives, but I also do not see any immediately obvious reason why one wouldn't have it.

In particular, I would be curious to know if the trait could offer a default impl:

trait AddAssign<Rhs=Self> : Add<Rhs> {
    fn add_assign(&mut self, rhs: Rhs) {
        unsafe {
            ptr::write(self as *mut Self, Add::add(ptr::read(self as *const Self), rhs));
        }
    }
}

so that an unoptimized impl would just look like:

impl AddAssign for Foo { }

(though an unsafe default impl admittedly might be a really really bad idea ...)

Update: Obviously we do not need any default method impl to add this feature. And maybe a where clause on the method would suffice, rather than requiring all impls of `AddAssign` to also implement `Add`.

Update 2: Ah, reviewing the discussion here, I see that @oli-obk already raised this point and you responded. Please do transcribe a summary of your argument into the RFC document; part of the point of the process is that the final document contains succinct discussions of matters like this. Thanks!
@oli-obk btw you asked:

Your first intuition was correct: if we made `AddAssign` inherit from `Add`, then `+` would be usable from an `AddAssign` bound alone, e.g.:

fn foo<X: Copy + AddAssign>(x: X) -> X { x + x }
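To illustrate the implication, here is a sketch with a made-up trait (named `AddAssignDemo` to avoid clashing with `std::ops::AddAssign`) that does declare `Add` as a supertrait; with such a bound, `+` becomes usable from the assign bound alone.

use std::ops::Add;

// Hypothetical trait showing what an `AddAssign: Add` relationship would imply.
trait AddAssignDemo<Rhs = Self>: Add<Rhs, Output = Self> + Sized {
    fn add_assign_demo(&mut self, rhs: Rhs);
}

impl AddAssignDemo for u32 {
    fn add_assign_demo(&mut self, rhs: u32) {
        *self += rhs;
    }
}

// Thanks to the supertrait bound, `x + x` type-checks with only the
// "assign" trait in the bounds list.
fn foo<X: Copy + AddAssignDemo>(x: X) -> X {
    x + x
}

fn main() {
    assert_eq!(foo(2u32), 4);
}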
Just wanted to register my support for this RFC. I really don't see why
Suggestion: allow deriving `FooAssign`, which would generate an implementation based on `Foo`, like
I'm still a little unclear on the motivation for additional traits rather than e.g. adding a method with this signature to the existing `Add` trait.
This would also eliminate the trait inheritance questions. Am I missing something?
Follow-up: I missed the relevance of some of the earlier discussion about cases where you want to overload
So I agree that the relationship to

Felix's unsafe code is unsafe for a reason -- it's not exception safe. What if a panic occurs in the middle, for example? It would leave the

If we had specialization, we could provide a default bridge impl of this form:

#[low_priority]
impl<R, T: Add<R> + Clone> AddAssign<R> for T {
    fn add_assign(&mut self, rhs: R) {
        let tmp = self.clone() + rhs;
        *self = tmp;
    }
}

In that case, if you know that
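The blanket impl above needs specialization, which is not available, but the same clone-then-write-back bridge can be expressed today as a plain helper function (a hypothetical free function, not part of any API); it stays exception safe because `*lhs` is only overwritten after the addition has succeeded.

use std::ops::Add;

// Hypothetical helper: emulate `lhs += rhs` for any Clone + Add type.
fn add_assign_via_clone<T, R>(lhs: &mut T, rhs: R)
where
    T: Add<R, Output = T> + Clone,
{
    let tmp = lhs.clone() + rhs; // if this panics, `*lhs` is still intact
    *lhs = tmp;
}

fn main() {
    let mut s = String::from("foo");
    add_assign_via_clone(&mut s, "bar");
    assert_eq!(s, "foobar");
}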
@nikomatsakis Ah, good point -- I wasn't thinking carefully about @pnkfelix's implementation. I agree that making
To be precise, I used
Also, just to be clear: I am happy without
True. :)
I believe this would also be useful for the
Hear ye, hear ye. This RFC is moving into final comment period. The primary change since the RFC was first introduced is that the RHS was changed to be by-value, not by-reference.
Looking forward to this. Seems pretty solid and a perfect fit for Ramp (which fits solidly into the intended use case for the feature).
👍 I would find it pretty useful for vectors and matrices.
This is not a comment on the feature itself but on the documentation. Currently, https://doc.rust-lang.org/reference.html#compound-assignment-expressions states that “
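To make the documentation concern concrete, here is a small sketch assuming the trait shape that was eventually stabilized: with an `AddAssign` impl, `t += 2` dispatches to `add_assign` rather than being rewritten as `t = t + 2` (the type below does not even implement `Add`); `Tally` is a made-up example type.

use std::ops::AddAssign;

struct Tally(u64);

impl AddAssign<u64> for Tally {
    fn add_assign(&mut self, rhs: u64) {
        self.0 += rhs;
    }
}

fn main() {
    let mut t = Tally(1);
    t += 2; // calls AddAssign::add_assign(&mut t, 2); `Tally` has no `Add` impl
    assert_eq!(t.0, 3);
}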
i.e. only `impl AddAssign<i32> for i32 { .. }`.

Add an `op_assign` feature gate. When the feature gate is enabled, the compiler
will consider these traits when typecheking `a += b`. Without the feature gate
typo: "typecheking"
👍
Congratulations. The language design subteam has decided to accept this RFC. The canonical text can now be found in the
Rendered
Implementation
cc @bjz @aturon @nikomatsakis