
Possible bug when typing a function that receives a function and its parameters #16871

Closed
leoasis opened this issue Jun 30, 2017 · 12 comments
Labels: Design Limitation (constraints of the existing architecture prevent this from being fixed)

Comments

leoasis commented Jun 30, 2017

TypeScript Version: 2.3.4

Code

interface Payload {
    a: string,
    b: number
}

let doFoo: (payload: Payload) => void;

let executeAction: <P>(
    action: (payload: P) => void, payload: P
) => void;


executeAction(doFoo, { a: 'hello', b: 2 }); //no errors, ok
executeAction(doFoo, {}); //no errors, wrong!
executeAction(doFoo, { qwe: 2 }); //errors, ok
executeAction(doFoo, { a: 'hola' }); //no errors, wrong!

doFoo({ a: 'hello', b: 2 }); //no errors, ok
doFoo({}); //errors, ok
doFoo({ qwe: 2 }); //errors, ok
doFoo({ a: 'hola' }); //errors, ok

Expected behavior:

The lines commented with "wrong!" should show errors.

Actual behavior:

Those lines don't show errors.

I added the direct calls to doFoo at the bottom to clarify what I'm expecting. If I'm doing something wrong, please let me know.

@hrajchert

Reviewing this issue with @leoasis, I reduced the example down to this:

interface Payload {
    a: string,
    b: number
}

let doFoo: (payload: Payload) => void;
let doBar: (payload: {}) => void;


doBar = doFoo; // I think this should throw an error but it doesn't

let doZar: (payload: { qwe: number }) => void;
doFoo = doZar; // This throws an error as expected

sylvanaar commented Jun 30, 2017

It seems to behave correctly. Remember the rules for function type compatibility.

let x = (a: number) => 0;
let y = (b: number, s: string) => 0;

y = x; // OK
x = y; // Error

interface Payload {
    a: string,
    b: number
}

let doFoo: (payload: Payload) => void;
let doBar: (payload: {}) => void;

let a: Payload;
let b: {};

a = b; // error
b = a; // OK


doBar = doFoo; // I think this should throw an error but it doesn't

let doZar: (payload: { qwe: number }) => void;
doFoo = doZar; // This throws an error as expected

leoasis commented Jun 30, 2017

@sylvanaar I don't think you applied the same rule: the example you quoted uses the same type for the first parameter on both sides, whereas here we have a mismatch in the type of that parameter itself.

Also, it's not behaving correctly; here is an invalid program that type-checks successfully:

interface Payload {
    a: string,
    b: number
}

let doFoo = function (payload: Payload): void {
  console.log(payload.a + payload.b.toString());
};
let doBar: (payload: {}) => void;

doBar = doFoo

doBar({}); // will throw a runtime error
doBar({ whatever: 123 }) // will throw a runtime error

@sylvanaar

Look at the test I did to show you the relationship between {} and Payload: you can store a Payload in a {}, but not the other way around.

That is the result you saw.
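
A minimal sketch of how that one-way assignability feeds the function assignment, under the bivariant parameter checking this thread is about (Payload and doFoo as in the snippets above):

interface Payload {
    a: string;
    b: number;
}

declare let doFoo: (payload: Payload) => void;

// Payload is assignable to {} (but not vice versa). Bivariant parameter
// checking only needs one of the two directions to hold, so the assignment
// below is accepted even though it is unsound for the caller.
let doBar: (payload: {}) => void = doFoo;

doBar({}); // type-checks, yet doFoo now receives an object with no 'a' or 'b'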

leoasis commented Jul 2, 2017

I still don't understand how that relates to the examples I showed. Maybe I need to know more about the rules you're applying in order to connect them, sorry about that :(

Anyway, don't you agree that the last example, even if it is "behaving according to the rules", is not what you want from a type checker?

Why would it be OK, from a type checker's perspective, to allow me to call a function with a value of a type that is more general than the one specified in its signature? That is never safe to assume; I can't think of a scenario where that is a good assumption.

Also, I found this fragment of the documentation, in https://www.typescriptlang.org/docs/handbook/type-compatibility.html#function-parameter-bivariance:

When comparing the types of function parameters, assignment succeeds if either the source parameter is assignable to the target parameter, or vice versa. This is unsound because a caller might end up being given a function that takes a more specialized type, but invokes the function with a less specialized type. In practice, this sort of error is rare, and allowing this enables many common JavaScript patterns.

It is followed by an example that doesn't explain why the "vice versa" has to be that way. Also, in my own experience, this sort of error is not rare; in fact, it is very common.
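
For concreteness, here is a minimal sketch of the unsound direction the quoted passage allows; the type names are illustrative and not taken from the handbook:

interface BaseEvt { timestamp: number; }
interface MouseEvt extends BaseEvt { x: number; y: number; }

let handleMouse = (e: MouseEvt) => console.log(e.x, e.y);

// The source parameter type (MouseEvt) is assignable to the target parameter
// type (BaseEvt), and under the bivariant rule that one direction is enough,
// so the assignment is accepted:
let handleAny: (e: BaseEvt) => void = handleMouse;

// The caller may now pass a plain BaseEvt, which handleMouse was never
// written to handle; e.x and e.y are simply undefined at runtime.
handleAny({ timestamp: 0 });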

I'd love to understand the rationale behind this a bit more, and whether there's a way to make it safe. At the very least, I'd love to understand it well enough to improve the documentation and examples, since right now it's not clear why this decision was made.

sylvanaar commented Jul 3, 2017

You should really read the FAQ

https://github.com/Microsoft/TypeScript/wiki/FAQ#why-are-all-types-assignable-to-empty-interfaces

I do agree that bivariance is a real flaw, but I guess they haven't been getting many complaints about it until recently.

ahejlsberg (Member) commented Jul 3, 2017

The reason parameters in function types are related bi-variantly is that otherwise common types such as Array and Map (and other interfaces that represent read-write protocols) would not relate co-variantly (but rather invariantly). The common intuition is that such types are co-variant, even though it is technically only safe for reading.

The alternative would be to have read-only, write-only, and read-write forms of every type, along with a strict requirement that every use of a particular type be accordingly annotated. That would make type relationships significantly more rigid and inflexible (in the name of soundness) and would obviously be a huge breaking change for all existing code (e.g. every existing .d.ts file would have to be revised to include proper read-only, write-only, or read-write annotations for every property and parameter in every type). Or, for backwards compatibility, we'd possibly need a fourth "unknown" state for read/write-ability along with checking rules.

It's not that this couldn't be done, but we have to weigh the benefits against the additional complexity it would introduce, and it's not clear that it would be worthwhile, particularly now that we already handle callback parameters co-variantly (see #15104).
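
As a small illustration of the trade-off described above, using Map (one of the read-write types mentioned; the DOM types and variable names are only for the example):

const divsById = new Map<string, HTMLDivElement>();

// Treated as co-variant: a Map of divs is accepted where a Map of elements is
// expected, matching the common intuition...
const elementsById: Map<string, HTMLElement> = divsById;

// ...which is safe for reading,
const el: HTMLElement | undefined = elementsById.get("header");

// but not for writing: this puts a paragraph element into a map of divs.
elementsById.set("footer", document.createElement("p"));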

ahejlsberg added the Design Limitation label on Jul 3, 2017

leoasis commented Jul 4, 2017

Thanks for taking the time to answer @ahejlsberg!

However, I'm still not fully understanding the problem, sorry about that. I don't want to take too much of your time with this, so feel free to refer me to documentation or somewhere else if I'm talking nonsense or trivial stuff.

Maybe I need more examples to fully understand what you're saying, since I still find that most of the time you want in-parameters to be contra-variant and out-parameters to be co-variant. In JS the most common way to get out-parameters is callbacks, and co-variant checking for those is something you added in 2.4.1, so that covers most of those use cases without adding extra syntax. However, I fail to see any common pattern that would break if in-parameters defaulted to contra-variant, provided we used other techniques (already available in TypeScript) to handle them.

The example shown in https://www.typescriptlang.org/docs/handbook/type-compatibility.html#function-parameter-bivariance (the listenEvent example), instead of relying on bi-variant parameters, could have been solved by function overloading, right? In that particular example you have to encode the fact that a given event type in the enum corresponds to a particular event object type; that relationship is not made explicit by relying on bi-variance, but it would be explicit with overloading.
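
A sketch of what that overload-based alternative could look like; the enum and event types here are illustrative stand-ins for the handbook's listenEvent example:

enum EventKind { Mouse, Keyboard }

interface BaseEvt { timestamp: number; }
interface MouseEvt extends BaseEvt { x: number; y: number; }
interface KeyEvt extends BaseEvt { keyCode: number; }

// Each overload makes the enum-to-event relationship explicit:
function listenEvent(kind: EventKind.Mouse, handler: (e: MouseEvt) => void): void;
function listenEvent(kind: EventKind.Keyboard, handler: (e: KeyEvt) => void): void;
function listenEvent(kind: EventKind, handler: (e: any) => void): void {
    // registration logic elided
}

// The handler parameter is contextually typed from the matching overload:
listenEvent(EventKind.Mouse, e => console.log(e.x, e.y));     // e: MouseEvt
listenEvent(EventKind.Keyboard, e => console.log(e.keyCode)); // e: KeyEvt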

Anyway, I'm not sure how that matches the goals of this project, but I'd love to see more examples where this breaks things and another technique (such as overloading) wouldn't suffice; if it would suffice, we could keep those use cases supported and be safer at the same time.

Again, sorry if I'm talking trivial stuff, and feel free to redirect me to other resources where this has already been discussed.

ahejlsberg (Member) commented Jul 5, 2017

Imagine the following scenario:

function getDivElements(): HTMLDivElement[] {
    // Return a list of HTMLDivElements
    return [];
}

function processElements(e: HTMLElement[]) {
    // Do something with list of HTMLElements
}

processElements(getDivElements());

The call to processElements is permitted because Array<T> is co-variant with respect to T and therefore an HTMLDivElement[] is assignable to an HTMLElement[]. However, since Array<T> uses T in both input positions and output positions (e.g. the push and pop methods), it would end up being invariant with respect to T if we said that input positions are strictly contra-variant and output positions are strictly co-variant. Therefore the code above would become an error without a type assertion. (Specifically, in a structural comparison of the two array types, the push method of HTMLDivElement[] would not be assignable to the push method of HTMLElement[] because HTMLDivElement is not a supertype of HTMLElement as contra-variance would require.) However, by treating input positions as bi-variant (both co- and contra-variant), type parameters are always at least co-variant.
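
A stripped-down sketch of the input/output-position point above; Box is an illustrative stand-in for a read-write container such as Array, and the DOM element types are just convenient sub/supertypes:

interface Box<T> {
    get(): T;            // T in an output position
    set(value: T): void; // T in an input position
}

declare let divBox: Box<HTMLDivElement>;
declare let elemBox: Box<HTMLElement>;

// Accepted because the parameter of 'set' is checked bi-variantly, which makes
// Box co-variant in T overall:
elemBox = divBox;

// Safe for reading...
const el: HTMLElement = elemBox.get();

// ...but unsafe for writing: a <span> ends up inside a Box<HTMLDivElement>.
elemBox.set(document.createElement("span"));

// If input positions were strictly contra-variant instead, the 'set' members
// would not relate and the assignment above would be an error (Box would be
// invariant in T).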

Technically, arrays are invariant and in a perfect world the processElements function above would indicate that it doesn't modify the array by taking a ReadonlyArray<HTMLElement> (which is safely co-variant because it uses T only in output positions). But strictly enforcing the difference between read-only, write-only, and read-write arrays introduces a lot of friction and we have chosen instead to be safe for reading by default (because reading is far more common than writing). This was a conscious choice based on weighing advantages (safety) against disadvantages (complexity). Type systems are basically like connected dials--the more you dial up the safety, the more you dial up the pain, and this is an area where we felt the pain wasn't worth the extra gain.
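
And a quick sketch of the ReadonlyArray alternative mentioned above (getDivElements as in the previous snippet; processElementsReadonly is a hypothetical renaming):

declare function getDivElements(): HTMLDivElement[];

// T appears only in output positions on ReadonlyArray, so co-variance is sound here.
function processElementsReadonly(e: ReadonlyArray<HTMLElement>) {
    e.forEach(el => console.log(el.tagName)); // reading is fine
    // e.push(...) would be a compile error: ReadonlyArray has no mutating members
}

processElementsReadonly(getDivElements()); // OK, and actually safe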

leoasis commented Jul 5, 2017

Thank you so much for the detailed and thorough response. I've learned a lot and this made me research more about the problem.

I think I now understand the problem with parameterized read-write structures, which would become invariant if the in-parameters of functions were forced to be contra-variant.

However, I still think there's a case for revisiting this decision, since the goal of being safe for reading only holds as long as you don't do things like the following (which is what I stumbled upon in the original post):

function getDivElements(): HTMLDivElement[] {
    return [];
}

function processElements(e: HTMLElement[]) {
    // Do something with list of HTMLElements
}

processElements(getDivElements());

// Added a function that returns an array of more general items than HTMLElement
function getElements(): Element[] {
    return [];
}

// This correctly type checks, but will cause a runtime error if the original `processElements`
// tries to read, from the items, any member that exists on HTMLElement but NOT on Element.
const processFoo : (e: Element[]) => void = processElements;
processFoo(getElements());

Also, I know you are aware of this already, but Flow seems to have that restriction and it doesn't seem to be much of a problem; it's actually a good thing. In these cases the problem can be solved by slightly modifying the signature of the function like this (or by using ReadonlyArray as you suggest):

function processElements<T extends HTMLElement>(e: Array<T>) {
    // Do something with list of HTMLElements
}

and that would type check fine even with the contra-variant restriction.
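
A quick usage check of that generic signature (reusing getDivElements and getElements from the snippet above; the error comment is paraphrased):

declare function getDivElements(): HTMLDivElement[];
declare function getElements(): Element[];

function processElements<T extends HTMLElement>(e: Array<T>) {
    // Do something with the elements
}

processElements(getDivElements()); // OK: T is inferred as HTMLDivElement
processElements(getElements());    // Error: Element does not satisfy the HTMLElement constraint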

Also, from my own experience when researching which type checker to use, this tradeoff comes up a lot in the wild when comparing Flow vs TypeScript, and it is mentioned as a disadvantage of TypeScript, not as a benefit, so maybe there could be a way to poll the current TypeScript user base to see whether this would be a desirable addition. Maybe it could be implemented similarly to how strictNullChecks was introduced, and eventually become the default.

Thanks a lot for the time you took to comment and explain the rationale. Even if this doesn't end up being added to TypeScript, I hope it serves as an explanation of the rationale that can improve the docs. I truly think what you just wrote in this issue is way clearer than what is currently in the docs, so I'm willing to contribute some changes to that section based on what I learned from you.

@ahejlsberg (Member)

@leoasis You're welcome. Definitely feel free to contribute a change to the docs. Meanwhile, we'll continue to think about possible non-breaking ways to solve this issue.

mhegazy (Contributor) commented Aug 17, 2017

Automatically closing this issue for housekeeping purposes. The issue labels indicate that it is unactionable at the moment or has already been addressed.

mhegazy closed this as completed on Aug 17, 2017
microsoft locked and limited conversation to collaborators on Jun 14, 2018