
Always require an explicit modifier at the call-site when passing a parameter with reference semantics #1136

Closed
VisualMelon opened this issue Nov 22, 2017 · 20 comments

Comments

@VisualMelon

VisualMelon commented Nov 22, 2017

Always require an explicit modifier at the call-site when passing a parameter with reference semantics

First off, an apology about how terrible I am at communication. I've been whining about this for a couple of months now (and have lost sleep over it), and I'm still not sure anyone understands what I'm saying.

This is really a response to the in proposal. I believe the current in proposal is dangerous primarily because it allows pass-by-reference semantics to be used without this being explicitly acknowledged at the call-site. I will refer to it as in, as this is what the current proposal goes with. I would prefer ref readonly/readonly ref, but that is a different issue (which I absolutely support): #1133

Here is why in should be declared at the call-site:

  • This is the case in C# pre-in: ref and out both require the caller to explicitly acknowledge that they are passing by reference: in should be no different. Everyone agrees that requiring ref and out at the call-site is a good idea (which isn't the case in C++), so let's keep employing good ideas.

  • The semantics of in are very different from normal 'ByVal' at the call-site when handling an lvalue, so this should be acknowledged. Examples below will outline some of the differences.

  • Hardly anybody understands the semantics of in, so when the syntax looks the same and they read stuff saying "it's a readonly reference", they will subconsciously believe the semantics are the same as ByVal.

The main use-case considered for in is performance-hungry libraries (e.g. math libraries performing matrix multiplication) where there is a significant cost in copying medium-to-large structs around, so people use ref instead, which fails to enforce or convey the readonly intent of such arguments. in is terrible, because it also fails to enforce or convey the intent by neglecting to require the call-site to acknowledge that the parameter probably isn't meant to be changed, leaving the callee to hope that the caller doesn't pass in anything too exotic, or it will fail in exciting and hard-to-debug ways.
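
To make that use-case concrete, here is a minimal sketch (Matrix4x4 here is a hypothetical 64-byte struct, not any existing library type):

public struct Matrix4x4
{
    public float M11, M12, M13, M14,
                 M21, M22, M23, M24,
                 M31, M32, M33, M34,
                 M41, M42, M43, M44;
}

public static class MatrixMath
{
    // Pre-7.2: pass by ref to avoid the 64-byte copy, losing the readonly intent.
    public static float TraceByRef(ref Matrix4x4 m)
        => m.M11 + m.M22 + m.M33 + m.M44;

    // C# 7.2: in avoids the copy and records the readonly intent in the signature,
    // but Trace(m) and Trace(in m) both compile, so nothing forces the caller
    // to acknowledge the reference semantics.
    public static float Trace(in Matrix4x4 m)
        => m.M11 + m.M22 + m.M33 + m.M44;
}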

It has been stated that it is beneficial for in to not be required at the call-site, because it will allow library providers to change their APIs without the callers updating their code: unfortunately, changing the API is a breaking change, so the whole package gives you a silent breaking change. This isn't a "in C#x, foreach works a bit differently", this is a "I can change my API whenever I like and nothing will tell you about it"... When the semantics of an API change, I think we can all agree that it should be treated like any breaking change, and require both the implementer and consumer to reconsider the code. I consider this stated goal a disastrous misjudgement that undermines the confidence that anyone can have in their own code.

We can of course argue about whether the semantics are sufficiently similar that we can just ignore the fact. I shall argue that they are sufficiently different by presenting examples of where they are different and discussing how painful it could be to spot/debug.

public static void DoSomethingWithThing(in Thing thing)
{
    thing.DoStuff();
    thing.DoOtherStuff(); // this might be a different thing
}

Any time you call another method, there is a chance that the in parameter will change: it could even change to a different type or null!

But I'm the implementer: I can declare that if someone changes the parameter while I'm using it, that's their fault; my API doesn't support that. What I, the implementer, am not going to do is defend against this, because that would defeat the whole point of in:

public static void DoSomethingWithThing(in Thing thing)
{
    Thing fixedThing = thing; // NO!
    // now I can use fixedThing all I like and know that it won't change
}

Now an innocent API-consumer comes along and writes some code:

public class MySimpleClass
{
    private Thing _thing;

    void MyInnocentLookingMethod()
    {
        DoSomethingWithThing(_thing);
    }

    // more code... probably a constructor somewhere
}

What does this code do? I have no idea. This code is not 'thread-safe' by any stretch of the imagination, and yet there is nothing iffy-looking here at all. It is passing (let's say) a reference type to a static method: nothing to go wrong if this were ByVal. But it's in, so if another thread modifies _thing, DoSomethingWithThing might not work (hopefully it will crash violently and give us a stack trace, but it might not, and instead leave us in an unintended state). How is the caller supposed to debug this? They have no choice but to check the signature of DoSomethingWithThing, and then make an explicit copy to make their code safe again.

You could suggest that if _thing is expected to change, special care has to be taken to address thread safety, but who on earth wants to write this code, just in case the API silently changes in their multi-threaded application?

public class MySimpleClass // the same class, now written defensively
{
    private Thing _thing;

    void MyInnocentLookingMethod()
    {
        Thing localThing = _thing; // better cache this, so that I can pass an lvalue without expressing any intent
        DoSomethingWithThing(localThing);
    }
}

This is awful. Quite honestly, I do not know what the behaviour of DoSomethingWithThing would be if the target of the in parameter were changed by another thread: I really hope it isn't undefined, but I can imagine that it could be a function of optimisation strategies. This innocent-looking method is unpredictable, and the caller has no idea what is going on.

But threading is difficult, so let's ignore that, and contemplate how it can go wrong without invoking threading:

public static void DoSomethingWithThing(in Thing thing)
{
    thing.DoStuff();
    thing.DoOtherStuff(); // this might be a different thing, even on the same thread
}

It didn't take long to find that example. Any external call (e.g. thing.DoStuff(), which may reach back and mutate _thing) could change the value of the in parameter. This isn't as hard to debug as non-deterministic thread racing, but it is the caller that has to debug it, and all they will see is their parameter going in, changing (which may well be expected), and DoSomethingWithThing not behaving as expected. Again, this might have worked when it was first written, and be broken by a silently changing API.
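
To spell this out, here is a complete sketch (all names hypothetical) in which the in parameter changes under the callee's feet on a single thread:

public static class Demo
{
    public struct Thing
    {
        public int Value;
    }

    private static Thing _thing = new Thing { Value = 1 };

    public static void Main()
    {
        DoSomethingWithThing(_thing); // no hint that _thing is passed by reference
    }

    private static void DoSomethingWithThing(in Thing thing)
    {
        System.Console.WriteLine(thing.Value); // prints 1
        LooksHarmless();                       // an 'external call'
        System.Console.WriteLine(thing.Value); // prints 2: thing aliases _thing
    }

    private static void LooksHarmless() => _thing.Value = 2;
}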

The caller can also pass delegates to the method which it can use to shoot itself in the foot. Those delegates might come from somewhere else. The possibilities are endless.

And even better, when you work out that you are passing to a method with in semantics (after you stop crying), you can decide that this method is to blame, and resolve to fix it. But first you have to check every method call in the method, in case your parameter is being passed to those methods as in, and it might be one of those methods that is to blame.

Even more fun can be had if we combine in with ref readonly returns:

void MyMethod()
{
    int n = 17;
    ref readonly int result = ref DoStuff(n);
    n = 18;
    // result is now 18: it still refers to n
}

// where, for instance:
ref readonly int DoStuff(in int x) => ref x;

The caller can change the value of the 'readonly' ref returned to it. Another variation:

void MyMethod(in Thing inThing, ref Thing refThing)
{
    // inThing and refThing may be the same thing... the caller has no idea the first param isn't copied

    refThing = process(inThing); // this may change inThing
}
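
And a possible call-site for that MyMethod, where nothing flags the aliasing on the first argument (a sketch; Thing as before):

private Thing _thing;

void CallIt()
{
    MyMethod(_thing, ref _thing);    // compiles: the in argument needs no modifier
    // MyMethod(in _thing, ref _thing); // the mandatory form would at least hint at it
}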

I'm sure we can procure more examples in the comments as necessary, but hopefully the point is made that in allows the call-site to seriously mess with the method, perhaps not even intentionally.

None of this is meant to attack the semantics of in. They make sense for readonly ref, but they are utterly different to 'ByVal', so there should be no pretending that they are not. Such a complicated (and powerful) tool should be explicitly acknowledged at the call-site.

I recall reading somewhere also that it is not impossible for an in parameter to be updated by the callee through seriously dubious means that obviously no-one would ever write, so it can't even guarantee readonlyness, which 'ByVal' does.
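
For the record, one such dubious route (a sketch using the System.Runtime.CompilerServices.Unsafe package, with Thing as in the examples above; no sane callee would write this, which is rather the point):

using System.Runtime.CompilerServices;

static void DoSomethingDubious(in Thing thing)
{
    Unsafe.AsRef(in thing) = default; // silently overwrites the caller's variable
}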

Summary

in does more than just (fail to) express intent, it invokes a whole load of implementation details which I do not want to think about when I'm calling 'normal looking' methods. All of the potential misery can be cast aside if we simply require the semantics to be acknowledged at the call-site.

A feature that is being added to help in places where performance is a serious concern could easily pollute code where it shouldn't be used, and undermine all the confidence we currently have in a boring method call such as System.Console.WriteLine(count) in any code.

It is a licence to silently break APIs.

Common Arguments and brief Counter-arguments

  • The semantics for in are the same as ByVal (as far as I can tell... obviously it's complicated, but I think I understand) when passing in an rvalue (i.e. an expression: something you can't take the address of). It has been argued that accordingly there should be no indication at the call-site that it is different, because it isn't different. ...Unfortunately, method overloading is a thing. There is an issue trying to untangle this already: it wouldn't be an issue if we just enforced the modifier at the call-site (see the sketch after this list). And the rules suggested for overload resolution are not simple.

  • We want people to reap the benefit of in without having to change existing calling code. As stated above, I believe the semantics are more than sufficiently different that this is a daft notion. It has been suggested that libraries will update to using in, and nobody will notice. However... what I suspect is much more likely is that the libraries will provide 2 versions (one with in, one without in), and nobody will notice they are using an overload, or indeed know which one they are using. I would also argue that this is downright irresponsible, because it explicitly condones changing APIs silently. I will personally be outraged if the BCL is changed to use in as a breaking change (yes, it's still a breaking change if the code still compiles: it's worse than if the code didn't compile, because when it stops working I have no idea why: a breaking change is a breaking change and I want to know when my code breaks). Furthermore, the APIs that this is targeting already use ref for performance reasons: so they will still have to change the calling code.

  • We want people to reap the benefit of in without having to type when writing new code. This is an appeal to the fact that typing causes RSI and may cause long term health effects. I'll risk the RSI if it means I am guaranteed my sanity.

  • Operators and Extension Methods taking in this parameters can't be explicitly acknowledged at the call-site: there is no nice syntax for that. Firstly, just because operators may be ambiguous doesn't mean 'normal' calls have to be ambiguous. Secondly, operators don't have to support in. This feature is about considered performance benefits: if you want performance, you can probably accept that you have to type a method name instead of a plus sign. Extension methods are even less of a concern: just write them as a static call. Thirdly, I'm sure we could come up with some ugly syntax for it: our inventive minds must be able to come up with something better than nothing.
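
Here is the overloading wrinkle mentioned above, as a sketch (M is hypothetical):

public static class Overloads
{
    public static void M(int x)    => System.Console.WriteLine("by value");
    public static void M(in int x) => System.Console.WriteLine("by readonly reference");

    public static void Main()
    {
        int n = 42;
        M(n);    // both overloads apply; the tie-break prefers by-value
        M(in n); // the explicit modifier is the only way to select the in overload
    }
}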

Misc relevant stuff

@orthoxerox

Do you have a real-life example of code that will be silently semantically broken if an in modifier is introduced to increase performance?

@VisualMelon
Author

VisualMelon commented Nov 22, 2017

@orthoxerox I've not looked. Personally I hardly ever work with threading (where it will be easiest to trip up), and I'm not going to pretend that unthreaded code whose behaviour this could change is common. The point is that code like this could exist, and debugging it could well be a nightmare. The point is that in introduces needless ambiguity, and I'm tired of defending the position that this is bad, and having to attack the position that covertly changing APIs is good. I can see nothing in favour of not requiring in at the call-site (except for less physical typing), and plenty of ways it can go wrong.

Your point is of course to question whether this will really be an issue in the real world, and I can't answer that, because I'm not nearly real-worldly enough; I'm a miserable theorist who only sees how things can go wrong (yeah, I have a boring life). I shall have a peruse of some code I am vaguely familiar with, but naturally invite others to present some as well. Part of what is so troubling for me is that I don't really know where to look: I wrote the code knowing it would be ByVal, and I moved on. There was nothing in the past saying "oh, better remember that was ByVal just in case it changes to ByRef" (which of course will be the case if this goes through, something I am less than happy about).

I completely appreciate the point that if this isn't going to create many problems, then it probably isn't a problem, but I do wish to point out that I am accepting a challenge to find bugs that I believe will be hard to find when they show themselves, let alone when they don't! (Also, ref readonly returns haven't been released yet, so I obviously can't find an example of those.) It is also made harder because being certain that the behaviour will change relies on having both the method implementation and the calling code, so I can't check any interfaces with closed-source APIs... though in theory the implementation can change anyway to break it once it uses in.

@CyrusNajmabadi
Member

and I'm tired of defending the position that this is bad

The onus is on the person making the claim :)

Counter argument:
Let's begin with me believing that you're right and that there could be problems here for the reasons you've stated. What you need to do is convince me that these problems are sufficient to warrant taking your solution.

Today you could go to practically any C# language feature and extol all the ways it could be abused. Heck, take a C# 1.0 example: implicit conversions. I still hear from people how they think implicit conversions are bad because of all the ways that people will abuse them and how it means they can't even know what their code actually means. Many arguments you made are brought up in regard to them. For example:

The semantics of assignment [with implicit conversions] are very different from [normal assignments], so it should be acknowledged

Or

This isn't a "in C#x, foreach works a bit differently", this is a "I can change my API whenever I like and nothing will tell you about it"

That applies to implicit conversions as well. Someone could return a new type, and have it implicitly convert to the old type.

Why do I bring this up? Because, in practice, we don't see people abusing things all over the place. API authors are not going and maliciously using features to try to muck with callers. They're following sane and reasonable practices.

--

Also, many of your arguments apply to C# as it exists today. For example:

But it's in, so if another thread modifies _thing, DoSomethingWithThing might not work (hopefully it will crash violently and give us a stack trace, but it might not, and instead leave us in an unintended state).

That already exists today. Say another thread mutates _thing in some way. DoSomethingWithThing may not expect that, and can crash because of it. This is an issue regardless of 'in' or not. When you use threads you need to be careful. Today if you have threads and you share state with anything, you had better be darn well certain about what you're doing.

@CyrusNajmabadi
Member

Individual rebuttals:

public static void DoSomethingWithThing(in Thing thing)
{
    thing.DoStuff();
    thing.DoOtherStuff(); // this might be a different thing
}

// Any time you call another method, there is a chance that the in parameter will change:
// it could even change to a different type or null!

Yes. And? :) That's how things work today already with 'ref'. In practice, it's not an issue. That's the nature of references. :) It's like saying "any time you operate over an array, the values in the array might be different if you or another thread is mutating them" or "any time you reference a field in a class, it might be different if you or another thread mutates it". Yes :) That's all true.

If you want to operate in a way where that isn't the case, you do what you always have had to do: copy the value into a local. But know that if you do that, you still aren't protected from the data that thing is pointing at potentially changing (unless you made your world entirely immutable).
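
A sketch of that second caveat (hypothetical names; note the reference-type field):

public struct Thing
{
    public int Value;
    public int[] Data; // reference-type field
}

public static void DoSomethingWithThing(in Thing thing)
{
    Thing local = thing; // local.Value can no longer change under us...
    // ...but local.Data still refers to the same array the caller (or another
    // thread) sees, so local.Data[0] can still change at any time.
}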

@VisualMelon
Author

VisualMelon commented Nov 22, 2017

@CyrusNajmabadi

The onus is on the person making the claim :)

The day I have to defend the correctness of my code is the day I pack up my computer and move to Canada to live with the wolves (though that is pretty tempting already). It strikes me as very odd that I need to defend the position that code which everybody knows is ambiguous is ambiguous, and that this is not ideal. My position is that there is no good reason for this ambiguity, ambiguity generally makes life worse, so let's avoid it.

Heck, take a c# 1.0 example: implicit conversions.

Indeed, and I think that implicit conversions are dangerous as well. I am particularly frustrated every time C# implicitly converts int to float. However, the availability of implicit conversions themselves is not a problem; specific implicit conversions (defined by the programmer/API author) are a problem (i.e. like int to float). My complaint with in is that perfectly well implemented APIs (i.e. methods) can be (inadvertently) misused/abused by the caller without the caller realising, and that this silent behaviour can then change. No one is at fault here, whereas someone was definitely at fault for explicitly writing the code to implicitly convert float to int.

Edit: and this is irrelevant anyway: C# 1 is set in the past; I'm trying to help avoid adding features that don't make sense to the future C#.

Also, many of your arguments apply to C# as it exists today. For example:
Yes. And? :) That's how things work today already with 'ref'.

This is all true. I have not said otherwise. I have explicitly stated that I do not object to the semantics of in/readonly ref. What I object to is that the semantics are not acknowledged (hence not apparent) at the call-site, and that the semantics can change silently. Just because I can shoot myself in the foot by explicitly using ref doesn't mean I should be able to shoot myself in the foot accidentally with in.

If you want to operate in a way that that isn't case, you do what you always have had to do: copy the value into a local

This is both ugly, and currently completely unnecessary in places where we all know for absolute certain that ByVal semantics are being used. The in proposal removes that certainty, and I have no intention (as already stated in the OP) of explicitly caching every lvalue I pass to a method I have any reason to be suspicious of.

I want the static type checker to help me. It can't help me if it doesn't know what I want. And it can't know what I want if I don't tell it.

@CyrusNajmabadi
Member

and that this is not ideal.

If everyone agreed on what was best for a language, language design would be easy :)

@VisualMelon
Author

VisualMelon commented Nov 22, 2017

@CyrusNajmabadi find me something that states that ambiguity is good for programmers, and I will turn myself in to a psychiatric ward forthwith. (end hyperbole)

@sharwell
Member

💭 This appears to be a code style request which is easily addressed by an analyzer/code fix combination.

@CyrusNajmabadi
Member

Indeed, and I think that implicit conversions are dangerous as well

Note: I was talking about user-defined implicit conversions.

However, to still address this: note that this danger exists, and yet it's not really a big deal. It's easy to have implicit conversions all over your code. For example, see if you can answer how many implicit conversions might appear here: Foo((a, b)) (hint: it's a TON). And yet, despite all that danger, by and large things keep on trucking just fine.
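
A sketch of what can hide in that one call (everything here hypothetical):

public struct Celsius
{
    public double Degrees;
    public static implicit operator Celsius(double d) => new Celsius { Degrees = d };
}

public static class Demo
{
    static void Foo((Celsius temp, double weight) p) { }

    public static void Main()
    {
        int a = 20, b = 75;
        // One implicit tuple-literal conversion, containing an implicit
        // int -> double -> Celsius chain on the first element and an implicit
        // int -> double conversion on the second: none visible at the call.
        Foo((a, b));
    }
}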

I think that's my overall point. You may be 100% correct that there are potential concerns here. But the important question is: are these concerns substantial enough to warrant a language change? You've started with the foregone conclusion that they do. I'm pointing out it's not at all that clear (to me at least) :)

@CyrusNajmabadi
Member

find me something that states that ambiguity is good for programmers,

That's not my argument. My argument is that while ambiguity may not be good, that doesn't mean it must be eliminated. Here's a simple example:

a.b.c = z;

What does that mean? There are many interpretations and tons of code could be executing implicitly, despite never being called out in code. Is that bad? At times, yes. Is it bad enough to warrant needing to be explicit about each and every thing? Clearly, we've already decided that no, that's not the case :)

@VisualMelon
Author

VisualMelon commented Nov 22, 2017

@CyrusNajmabadi

Note: I was talking about user-defined implicit conversions.

As was I. The point is that a user (programmer) made a decision to write the implicit conversion, and it can be judged whether or not this was a good or bad idea on a case by case basis. float to int is lossy, so clearly shouldn't exist, to give a cheap example.

Using in in a method is also a case-by-case concern, but what it can't take into account is all the code that calls the method which is (quite literally) expecting a different method.

see if you can answer how many implicit conversions might appear here...

No clue, and I certainly agree with you that implicit conversions have the undesirable property of allowing APIs to be covertly changed. Perhaps I would have opposed them if they'd been suggested here ;) But again, they are in C#, they are not going away. in is not in my VS yet, so I can keep fighting!

What does that mean? There are many interpretations and tons of code could be executing implicitly...

This is certainly a good point, and you are right that this is something we have (as such) agreed is acceptable; however, this is because it has some discernible/perceived benefit. The 'implicit' execution of code (e.g. from properties) is something that C# is built upon because it was deemed an expressive way of writing such things. There is a clear benefit there. I believe there is no benefit to leaving in ambiguous at the call-site: I have attempted to address the 2 usual arguments as to why it might be in the OP (less typing, less changing existing code) (I really hope there were better reasons, but I can't find them).

The design decisions must of course be a balancing act, but again, I'm not saying we shouldn't have in, all I ask is that it be acknowledged at the call-site. This is the simplest, least ambiguous, and most flexible solution, and one which removes any concerns with breaking behaviour (be they real or not, we won't know until it is released).

@VisualMelon
Author

@sharwell

This appears to be a code style request which is easily addressed by an analyzer/code fix combination.

I'd rather it didn't come to that, but thanks for the suggestion nonetheless: perhaps I can cling to my sanity a little longer!

@CyrusNajmabadi
Member

Perhaps I would have opposed them if they'd been suggested here ;) But again, they are in C#, they are not going away. in is not in my VS yet, so I can keep fighting!

Here's my point. You seem vehemently against these things because of the presupposed conclusion that they will be a major problem. But, as we've seen with many existing features, these don't really tend to be problems. I'm hoping that by showing this to you it will help you reassess the vehemence of your position and consider that sometimes there can be pragmatic approaches that are beneficial even when there are potential drawbacks that come along with them.

@sharwell
Member

❓ If you compile a library with a method with an in parameter, and then use that method from a library using the old C# compiler, will it compile? If it's usable, does the call site "just work" or does it require ref?

@CyrusNajmabadi
Member

If it's usable, does the call site "just work" or does it require ref?

I can't see how it couldn't require ref. It's just a ref parameter with a special attribute on it.
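
For reference, roughly what the compiler emits for an in parameter, to the best of my understanding (treat this as a sketch, not a spec):

// public static void DoSomethingWithThing(
//     [System.Runtime.CompilerServices.IsReadOnly] ref Thing thing)
//
// For virtual/abstract/interface members, a modreq of
// System.Runtime.InteropServices.InAttribute is emitted as well, which an
// older compiler will refuse to consume rather than treat as a writable ref.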

@VisualMelon
Author

VisualMelon commented Nov 22, 2017

@CyrusNajmabadi

You seem vehemently against these things because of hte presupposed conclusion that they will be a major problem

That's a pretty fair appraisal, though I would suggest it is less that I have supposed they will be a major problem, and more that I see potential for misery where I think there needn't be any. I'm certainly guilty of using hyperbole and exaggeration (sarcasm is how I communicate in the real world; it doesn't translate so well into text), which doesn't help, but I do maintain my (reasoned) opinion that the benefit of not requiring in is minimal, and the cost is (if potentially small) immeasurable (it may indeed be significant).

I would like to believe that I would take to task any proposed feature that creates opportunity for ambiguity. I just happened to have discovered in and (apparently) noticed some potential concerns which I felt have been understated. My intent is very much to present a case which I haven't seen strongly presented elsewhere. Simply, I see some downsides to not requiring in at the call-site (personally, I would weight such ambiguity above most things), and I don't see any real benefits:

  • less typing: ... this shouldn't even be considered when correctness is on the table

  • less changing of old code: assuming that nothing goes wrong, this is a one-time cost (which could in theory be automated, a possibility sharwell has just put me onto). After jnm2 challenged me to find some breakable code, I had a look back, and the 2 math libraries I have used most are both practically immune to covert breakage (unless you consider in float to be a sufficient condition, in which case there are a myriad of potential bugs): one uses ref everywhere; the other uses reference types for everything, so has no reason to use ref or in anywhere.

A 'real' benefit would be something that enables greater expressiveness, or some new, powerful, nominal type construct. Not requiring in at the call-site provides no such new capability.

Because we seem rather at a philosophical stalemate, perhaps I should just restate my position that I think in is a valuable feature (though I'd prefer a different keyword), but I think there is no good reason for incurring the risk that comes with not requiring acknowledgement of the most definitively different semantics at the call-site.

Sorry that was so long!

@CyrusNajmabadi
Member

CyrusNajmabadi commented Nov 22, 2017

but I do maintain my (reasoned) opinion that the benefit of not requiring in is minimal

That's fair. Though your opinion is driven by certain value statements that are certainly not universally shared. For example:

Secondly, operators don't have to support in

I would very much like operators to support 'in'. I have my own math libraries, and I want to be able to use operators and use in. I would rather have that than not. And I would definitely want to keep using operators instead of having all the excessive verbosity of explicit method calls.
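
For instance (a sketch; Vector3d is a hypothetical type from such a library):

public readonly struct Vector3d
{
    public readonly double X, Y, Z;
    public Vector3d(double x, double y, double z) { X = x; Y = y; Z = z; }

    // C# 7.2 permits in on operator parameters; since a + b has no call-site
    // modifier syntax anyway, a mandatory modifier couldn't apply here.
    public static Vector3d operator +(in Vector3d a, in Vector3d b)
        => new Vector3d(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
}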

This is a case where there is a clear tradeoff to your position. You may think that tradeoff is fine and your approach is a clear win, but that's definitely not how everyone (including myself) will see things. When your own proposal negatively affects core use cases that people do care about, it's worthwhile to be cognizant of this and to understand that it's not a "slam dunk" to go with your approach.

I don't see any real benefits:

Saying that you don't see real benefits tells me that you're not considering other people's perspectives here. For example, I can definitely see the benefits of your approach. But your approach also comes with downsides that are important to me.

--

Also, you get more flies with honey and all that :)

@VisualMelon
Author

VisualMelon commented Nov 22, 2017

@CyrusNajmabadi

I would very much like operators to support 'in'. I have my own math libraries, and I want to be able to use operators and use in

I didn't mean to suggest that in operator semantics shouldn't be supported, but I hoped to indicate that the consistency of not requiring any call-site modifier for any type of call is also not a slam-dunk suggestion: it is possible, perhaps preferable, to take clarity in 'normal' method calls and accept ambiguity where it can't be avoided without some other cost (e.g. ugly syntax): I just wanted to suggest that there are many options here, and the 'normal' call-site syntax shouldn't be tied to other contexts. To qualify this point, I am not 'happy' that operators may be ambiguous, but there is no obvious cost free way out, so I can perhaps live with that, even if it is at odds with my miserable world-view. 'Normal' calls are different, because there is an obvious and (I would argue) low cost solution.

You may think that tradeoff is fine and your approach is a clear win, but that's definitely not how everyone (including myself) will see things

Quite right, and I don't necessarily expect others to align, but I do feel that the objective risk that is inherent in ambiguity must be given significant weight, and I didn't get the impression it was taken especially seriously in previous discussions (not that I wish to suggest it wasn't; I haven't seen much of a paper trail indicating a calculated consideration, though I'm sure it has been discussed at length somewhere). Moreover, I didn't feel like the community as a whole was aware of the risks (as has, to some extent, been evidenced by the general confusion regarding the lack of equivalence between in and readonly non-ref). This is not to say that the proposal is confusing, just that presently it is not necessarily well-understood (and I wouldn't wish to imply I'm especially well informed).

worthwhile to be cognizant of this and to understand that it's not a "slam dunk"

You are absolutely right, and I am very keen to hear why people would support the proposal, because it is my perception thus far that the benefits will be short-lived, but that the consequences may never go away. I struggle to believe that these are the real reasons, and the decision is genuinely surprising to me (accounting for the inevitable differences in requirement and opinion).

Also, you get more flies with honey and all that :)

Indeed (not that I should wish to compare anyone on this forum to flies!): I must confess that the in proposal has put me under an inordinate amount of stress, so I'm perhaps not my usual amicable self (if anyone could believe that!) on this particular forum. I made a concerted effort to avoid the OP being too passionate, but my commentary has certainly taken a turn for the worse: there is a reason I have been putting off writing it up as its own issue for so long! Please (everyone) accept my apologies for what continues to be such a drawn out miserable experience.

@jnm2
Contributor

jnm2 commented Dec 4, 2017

C# 7.2 is now RTW with VS 15.5, so this issue would be a breaking change.

@gafter
Member

gafter commented Dec 21, 2017

The decision to permit omitting in at the call site was an explicit and intentional decision of the LDM leading up to the release of C# 7.2. As it is now in the language, it would be a breaking change to remove it. So closing because it is a change we will not make.

Although the issue is closed, you are welcome to continue discussion here if you want.
