
How do we expect required context types to interact with await? #3575

Open
leafpetersen opened this issue Jan 19, 2024 · 17 comments
Labels
type-inference Type inference, issues or improvements

Comments

@leafpetersen
Member

In #3471, there is a proposal to require that all expressions be a subtype of their context type. This issue is to clarify the intended treatment of a specific corner of the language under this proposal.

Consider the following program:

void main() async {
  dynamic x = Future<int?>.value(3);
  int y = await x;
}

This program currently passes static analysis. I believe that the current treatment chooses FutureOr<int> as the context type when doing inference on x. This has no effect: the static type of x remains dynamic. The static type of await x is therefore dynamic as well, and an implicit downcast to int is inserted at the assignment to y. Semantically, it is as if we had written int y = (await x) as int.

This program also runs to completion currently. I believe that the semantic interpretation of await x in this case is essentially to check whether x is Future<Object?> and if so to unwrap the value from the Future and otherwise to simply return the value. In this case, the instance check returns true, the value is unwrapped, and await x evaluates to 3 and the implicit cast succeeds.

My natural interpretation of the proposal in #3471 is that we would change the above as follows.

Statically, we would instead say that since x does not satisfy its context type, we should insert a cast to the context type (here FutureOr<int>) inside of the await. The result would be that, semantically, it is as if we had written await (x as FutureOr<int>). The static type of await x would then be int, and no implicit downcast outside of await x would be inserted.

Assuming the above, the runtime behavior of the program will change: the cast inserted on x will fail, and the program will terminate with a cast failure.

Is my interpretation of the intended semantics above correct? Is this change expected/desired? If so, do we have any data on how breaking it would be?

cc @dart-lang/language-team

@lrhn
Member

lrhn commented Jan 19, 2024

If we are completely consistent, then an expression inside await has a context type, and if dynamic, it should be downcast to the context type.

I'm not absolutely sure we need to be consistent in this particular case. Because await is special, and dynamic is special, and the combination could be allowed to be double special if we want it to.
That's mostly a feeling, that await is not like the other operators, or more like a control flow operator than a data operator. In some ways the operand is more like a receiver than a parameter - the await doesn't so much "make the future complete", it tells the future to complete itself, and then come back.

If we had a suffix await (#25) then int i = dynamicExpression.await; would (if treated consistently) have no context type, and the type of dynamicExpression.await would be dynamic, which will be downcast to int. And nobody would bat an eye, because that's how getters work, and it looks like a getter.

(But if we get @chloestefantsova's selector-chain inference, then we'd probably end up giving the receiver dynamicExpression a context type of FutureOr<int> through that as well. So not having a context type in receiver position might just be due to not having a good enough inference.)

By writing it as int i = await dynamicExpression;, we introduce a context type, and it is one of the few places where we introduce a union type, which wasn't there before. That's caused a lot of trouble. (The other place I know is C x = _ ?? e; where _ gets a context type of C?, which has also caused some issues, but fewer, probably because nullability is easier to work with, and doesn't nest - int?? is just int?, but FutureOr<FutureOr<int>> is more. Even if we don't actually want those nested futures at all.)

All this to say that if we want await dynamicExpression to ignore the context type for the dynamic expression, and await first, we can probably specify that. It'd be a "dynamic await", dynamic operation because it's done on a dynamic value.
That just means no implicit downcast before awaiting, the await dynamicExpression is considered a single dynamic operation when the operand is dynamic, not something that passes a value from one type to another.

But the consistent behavior is to downcast to FutureOr<C>.
And that's also consistent with how we generally treat dynamic operations on dynamic receivers: The runtime behavior is what it would have been, if the runtime type had been the static type. (Just without any statically inserted coercions or extensions.)
So int i = await (Future<int?>.value(3) as dynamic); would be consistent in working like int i = await Future<int?>.value(3);, which is an error. It'll just be a runtime error, which is exactly what downcasting to FutureOr<int> will give us.

So that can work too.

TL;DR: Either way works for me. (Short term. Long term, I'd love to do something more to await. E.g., change the context type to Future<C> instead of FutureOr<C>, if at all possible.)
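(For context on that last point: the FutureOr<C> operand context is what lets non-future values through await today. A minimal sketch, not from the thread, of code that a Future<C>-only context would presumably reject:)

```dart
void main() async {
  // Legal today because the operand context of `await` is FutureOr<int>:
  // a plain `int` satisfies it, and `await` simply returns the value.
  int y = await 3;
  print(y); // prints 3
}
```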

I do worry that we will have some occurrences of code that will break with this change, and they'll be hard to find precisely because the code is typed as dynamic. So the only way to find out is to try.

@eernstg
Member

eernstg commented Jan 19, 2024

You could say that the newly introduced breakage in

void main() async {
  dynamic x = Future<int?>.value(3);
  int y = await x; // Would throw if it means `await (x as FutureOr<int>)`.
}

is consistent with another part of the language:

Future<int> f(dynamic d) async {
  // Consider an invocation where `d` is a `Future<int?>`.
  if (../*just an excuse for having two return statements*/..) {
    return d; // Throws.
  } else {
    return await d; // Succeeds today;
    // .. would throw if it means `return await (d as FutureOr<int>);`
  }
}
 
void main() async {
  int y = await f(Future<int?>.value(3));
}

The breakage is real, but it might be "good breakage" because it makes the current (implicit) await behave more like an explicit await (which would be needed if we adopt #870).

Also, we already decided (with return statements) that a Future<S> is a dynamic error if the future value type isn't a supertype of S (it is not enough that S is assignable to the future value type, for example, it is not enough that S is dynamic).

In other words, "we will not wait for an object unless it is guaranteed to have the required type". The semantics for await that this issue proposes makes the same sentence fit the behavior of await.

@lrhn
Member

lrhn commented Jan 20, 2024

I wouldn't say that we decided that return should behave differently from other awaits. It's more that the person specifying return was not the same as the person specifying the special case for nested FutureOrs in await, and was not aware that the special case existed. (Or, if the special case came last, was not aware that return duplicates the behavior description instead of just referencing await.)

@stereotype441
Member

@leafpetersen said:

Is my interpretation of the intended semantics above correct?

Yes, I think so.

Is this change expected/desired? If so, do we have any data on how breaking it would be?

It's certainly not something I'd fully understood when writing up #3471, so in that sense, no, it's not something I expected. Thanks for bringing it up!

In fact, thinking about it more, this problem doesn't just apply to await. It applies to all constructs in a larger category that I guess I would call "unwrapping" constructs. The following constructs all take a subexpression and "unwrap" its type, so to account for that, during downwards inference, the incoming type context gets "wrapped":

  • for (var pattern in e) ... (infers e with context Iterable<T>, where T is the type schema of pattern)
  • [e1, ...e2] (infers e2 with context Iterable<T>, where T is the type context that's used for inferring e1)
  • await e (infers e with context FutureOr<T>, where T is the incoming context, provided that T doesn't involve futures)

All of these work today because the implicit downcast happens after unwrapping:

dynamic x = <int?>[3];
for (var (int y) in x) print(y); // prints `3` (the list element is downcast, not the list)

dynamic x = <int?>[3];
print(<int>[...x]); // prints `[3]` (again, the list elements are downcast)

And of course your example:

dynamic x = Future<int?>.value(3);
int y = await x; // sets `y` to `3` (the awaited result is downcast, not the future)

I believe all of these will stop working if we go ahead with #3471. In a sense, it's a consequence of that proposal pushing all coercions down, not just the coercions associated with conditionals and switch expressions.

I can't think of any other constructs with the same problem, but I might be missing something.

Incidentally, the user can create generic functions with the same kind of "unwrapping" behavior, but that leads to a runtime error even today. For example:

T getFirstListElement<T>(List<T> list) => list.first;
dynamic x = <int?>[3];
int y = getFirstListElement(x); // Runtime error because `x` is downcast to `List<int>`

So I suppose you could argue that #3471 is a good change, because by pushing the downcast down into await, for in, and ..., it's making them behave consistently with generic functions.

Counterpoint: with generic functions, if you want the downcast to happen after the unwrapping, you can make that happen by introducing an explicit type argument:

dynamic x = <int?>[3];
int y = getFirstListElement<dynamic>(x); // Works

But there's no corresponding place to shove a dynamic argument into await, for in, or ... syntax.
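(One conceivable workaround, sketched here as an assumption rather than as part of the proposal: splitting the operation across an intermediate dynamic local keeps the awaited operand's context at dynamic, so the unwrapping happens before any downcast, as it does today.)

```dart
void main() async {
  dynamic x = Future<int?>.value(3);
  // Workaround sketch: `tmp` gives `await x` a `dynamic` context, so no
  // cast to `FutureOr<int>` would be inserted around `x`; the downcast to
  // `int` then happens after the await, at the assignment to `y`.
  dynamic tmp = await x;
  int y = tmp;
  print(y); // prints 3 under today's semantics
}
```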

I certainly think that if we do wind up going ahead with #3471, these potential breakages argue for doing it as a language-versioned change, so that people have more control over when the breakage happens to them.

I'll spend some time investigating these "unwrapping" constructs to see how much breakage we would be talking about here.

@stereotype441
Member

One other thought: In some other issues recently I've been talking about the idea of extending type schemas with a coercibleTo operator. If we did that, we could potentially address these issues. Considering Leaf's example again:

void main() async {
  dynamic x = Future<int?>.value(3);
  int y = await x;
}

Since await is an unwrapping construct, when pushing the context int down through it, instead of creating the context FutureOr<int>, we could create the context FutureOr<coercibleTo(int)>. Since dynamic is coercible to anything, x would be considered to satisfy this context, so no implicit downcast would be inserted, and the code would continue to be interpreted as:

  int y = (await x) as int;

@stereotype441
Member

I've done some additional prototyping along the lines suggested in this discussion, and I'm less certain about whether required context types are necessarily a good idea anymore. If they are a good idea, they're certainly not the "slam dunk" I once thought they were.

The main issue I've run into with my additional prototyping is for loops. It's not uncommon for someone to try to do something like this:

f(dynamic d) {
  for (int x in d) {
    print(x);
  }
}

Today, this works at runtime as long as d implements Iterable<Object?> and every object yielded by the iterable is an int. If we make contexts required, then the context for d will be Iterable<int>, and so this will only work at runtime if d implements Iterable<int>. This seems particularly bad, because this sort of thing tends to happen at the interface boundary between strongly typed code and more dynamic logic (for example, when processing deserialized JSON data), and in that sort of circumstance it's very common to have lists whose element type is Object?, even though in practice the data they contain is all of some more precise type. I wouldn't want to break this sort of use case.
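(To make the JSON case concrete, a sketch using dart:convert; the "scores" payload is made up:)

```dart
import 'dart:convert';

void main() {
  // `jsonDecode` returns `dynamic`, and JSON arrays decode to List<dynamic>.
  dynamic data = jsonDecode('{"scores": [1, 2, 3]}');
  // Works today: each element is downcast to `int` as it is yielded.
  // With required context types, `data['scores']` would presumably be cast
  // to Iterable<int> up front, and that cast would fail at runtime, since
  // the runtime type is List<dynamic>.
  for (int x in data['scores']) {
    print(x);
  }
}
```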

I have one other idea I would like to try before I give up, which is to see what would happen if we say that every expression is required to satisfy its context or have type dynamic. In this idea, if the expression has type dynamic, then a dynamic downcast isn't necessarily inserted; downcasting only happens as necessary to preserve soundness (which is similar to what happens today). So for example, the for loop above would be interpreted as:

f(dynamic d) {
  for (var tmp in d as Iterable<Object?>) {
    var x = tmp as int;
    print(x);
  }
}
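(A quick check, as a sketch, that this desugaring preserves the use case above: the outer cast succeeds as long as the receiver is an Iterable at all, and the int cast is applied per element.)

```dart
void f(dynamic d) {
  for (var tmp in d as Iterable<Object?>) {
    var x = tmp as int;
    print(x);
  }
}

void main() {
  // A List<Object?> whose elements happen to all be ints: the cast to
  // Iterable<Object?> succeeds, and each element is then cast to int.
  f(<Object?>[1, 2, 3]); // prints 1, 2, 3
}
```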

@lrhn
Member

lrhn commented Feb 6, 2024

if we say that every expression is required to satisfy its context or have type dynamic

That would mean that the other coercions (.call insertion and generic instantiation) would always happen eagerly, but downcast from dynamic would not.
The question is then: When does the downcast from dynamic happen?

I'll refer to #3363 and say always downcast before any use of UP, before any join-point, because UP(dynamic, any-not-top) is dynamic, which throws away good information.
Other than that, when the value would lose the dynamic type, which means assignment to a variable or occurring as an argument. The receiver of that argument has its own type.
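(The UP concern from #3363, concretely; a sketch of how joining with dynamic discards the other branch's type:)

```dart
void main() {
  dynamic d = 2;
  bool b = false;
  // UP(dynamic, int) is dynamic, so the inferred type of `z` is dynamic
  // even though one branch has a precise type. Downcasting `d` before the
  // join point would let `z` be inferred as `int` instead.
  var z = b ? d : 1;
  print(z + 1); // a dynamic invocation; prints 2
}
```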

@eernstg
Member

eernstg commented Feb 7, 2024

@stereotype441 wrote:

every expression is required to satisfy its context or have type dynamic.

@lrhn wrote:

That would mean that the other coercions (.call insertion and generic instantiation) would always happen eagerly, but downcast from dynamic would not.

I think this is very promising! To me it looks like #3471 is maturing, not falling apart.

However, I think we might be able to adjust #3471 such that it behaves in a useful manner with these examples: Coercions should not be pushed down into the operand of an await expression. The fact that we'd get (await e) as T rather than await (e as FutureOr<T>) would follow, and it probably doesn't matter that we're going to get (await e).call (like today) as opposed to await (e.call). This just means that we're changing a breaking change such that it is slightly less breaking. ;-)

Similarly, we might (very well!) want to avoid pushing coercions into cascades (see 'Coercions are pushed down into cascade targets' in #3471).

Finally, I suspect that we want to avoid pushing coercions into a member access if and when we introduce receiver type inference.

When we have 3 kinds of expressions where coercions are applied directly (rather than being pushed into subexpressions), it's a significant part of the design, not a weird exception. Conceptually, it might work out to claim that propagation occurs into constructs involving a choice, otherwise there's no propagation (just like today). Also, I'd expect the behavior where coercions aren't pushed into these specific kinds of expressions to be useful (more useful than the alternative where the coercions are pushed in).

@stereotype441, is there a technical reason why it would be difficult (like hard to implement, unsound, or in some other way not working) to omit the propagation of coercions into subexpressions in these particular cases?

@stereotype441
Member

@eernstg

@stereotype441, is there a technical reason why it would be difficult (like hard to implement, unsound, or in some other way not working) to omit the propagation of coercions into subexpressions in these particular cases?

It all depends on what we mean by omitting the propagation of coercions into subexpressions. As I see it, whether or not coercions are "propagated" from one place to another is an emergent property of the type analysis rules we choose. If we want different behavior we have to pick different rules, but it's not always clear to me what rules we're implying when we discuss alternatives like this. Currently we have a set of rules that says:

  1. Type inference of expressions operates by performing a single-pass depth-first traversal of the expression tree.
  2. The input to each traversal step is a type schema, which we call the "context".
  3. The output of each traversal step is a static type, plus potential modifications to the expression.
  4. With a few exceptions, coercions are performed only when this depth-first traversal does an assignability check that's mandated by the spec, and the expression's static type is not a subtype of the type required by the assignability check.

These rules have the emergent property that when there's a "choice" of when to apply a coercion (e.g. whether to apply a coercion to the target of a cascade or to the entire cascade expression), it tends to happen nearer to the top of the expression tree. But there's no code anywhere that's explicitly "making" that choice; it just emerges as a result of the rules.

In #3471, I've imagined replacing rule 4 with a rule that says that coercions are performed whenever the single-pass depth-first traversal of the expression tree encounters an expression whose static type would otherwise not be a subtype of its context. That's not too difficult to do, since there is a single place in the codebase where the new rule would need to be added, and removal of the old rule could be spread out over time as a clean-up activity. But as we've discussed, it looks like it's a pretty seriously breaking change.

In #3575 (comment), I mentioned the idea of requiring every expression to satisfy its context or have type dynamic. This would amount to replacing rule 4 with a slightly different rule (and so, not too difficult), but then since this new rule 4 wouldn't do dynamic downcasts, we would need to add a rule 5 that says that we still do dynamic downcasts at assignability checks. Again, not too difficult, since the logic to do this already exists; it's just that today it does all coercions, not just dynamic downcasts. I haven't had a chance to see how deeply breaking this change would be.

Another possible rule change we've talked about would be to split contexts into a required part and an aspirational part. Concretely, we would change rule 2 to say that "The input to each traversal step is a type schema, which we call the 'aspirational context', and a required type context". Then we would change rule 4 to say that coercions are performed when the static type of an expression would not otherwise be a subtype of the required type. I explored this idea a few months ago and I found it to be a lot of work, both in terms of implementation (because it required changing all expression visitor methods to accept this additional input) and in terms of specification (because it required deciding what the required type should be for every recursive invocation). Also, it gets tricky because when inferring generic type parameters on method and function calls, sometimes we don't know precisely what the type of a subexpression will be required to be until after we've visited it (because we haven't fully committed to what types we're going to infer yet).

So getting back to your question, to determine whether it will be difficult to push coercions down into some kinds of expressions but not others, we need to first figure out what rules will lead to that emergent property. Of the rule changes I've considered, only the idea of splitting contexts into a required part and an aspirational part would let us push some coercions down but not others. And that one looks like a lot of work. But I'm open to more brainstorming.

@leafpetersen
Member Author

These rules have the emergent property that when there's a "choice" of when to apply a coercion (e.g. whether to apply a coercion to the target of a cascade or to the entire cascade expression), it tends to happen nearer to the top of the expression tree. But there's no code anywhere that's explicitly "making" that choice; it just emerges as a result of the rules.

I think another way of saying this is there is no choice to be made about where to insert the coercions. Or at least, the coercions from dynamic. That choice follows naturally from the choice of what type to infer for expressions. That is, type inference is the process of assigning to every sub-expression a type. In certain places, we have a choice about how to distribute that assignment. So for example, given int x = b ? i : d where i has type int and d has type dynamic, we can infer this as int x = (b::bool ? i::int : d::dynamic) :: dynamic or we can infer this as int x = (b::bool ? i::int : d::int) :: int where e :: int is pseudo-syntax for "The expression e inferred at type int". Both of those inference choices result in an expression being used at a type which it does not satisfy. In order to mediate that, we must insert a coercion. We insert coercions at the point of type mismatch, and so the two choices result in coercions in different places.

The point is that coercions are intended to mediate soundness, and soundness is the property that an expression of type T can only evaluate to a value whose runtime type is a subtype of T. So choosing the inference int x = (b::bool ? i::int : d::int) :: int but inserting the cast as int x = ((b::bool ? i::int : d::int) :: int) as int is a violation of soundness, because we have inferred d::int, but d may evaluate to something other than an int. This may "work" here (not all static soundness violations actually result in runtime soundness violations), but it will not work in general. If we infer a cascade of the form int x = d..isEven as int x = ((d::int)..isEven)::int but choose to insert the cast on the "outside" as int x = ((d::int)..isEven)::int as int then we will have a statically dispatched call to isEven which will fail if d does not evaluate to an int, which is a soundness violation.
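(Today's behavior for the cascade case, as a runnable sketch:)

```dart
void main() {
  dynamic d = 7;
  // Today: the cascade target keeps static type `dynamic`, `isEven` is
  // dispatched dynamically, and the whole cascade expression is downcast
  // to `int` at the assignment. A cascade evaluates to its target, so
  // `x` ends up as 7.
  int x = d..isEven;
  print(x); // prints 7
}
```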

So I believe that the correct way to think about this is to think directly about it in terms of what types we infer for the various sub-expressions. The coercions then follow naturally from that. Am I missing anything there?

@lrhn
Member

lrhn commented Feb 7, 2024

I'd say we have one more option here:

int x = (b::bool ? i::int : d::dynamic) :: int;

That one isn't mentioned, but we can define an inference that gives it:

The static type of e1 ? e2 : e3 ... context C ... e2 with context C has static type T2 ... e3 with context C has static type T3 ...

  • If T2 is not assignable to C, compile-time error.
  • If T2 is subtype of (greatest closure of?) C, let S2 be T2
  • Otherwise coerce e2 to type S2 where S2 is a subtype of C (guaranteed by being assignable).
  • Ditto T3 to S3
  • Static type of e1 ? e2 : e3 is UP(S2, S3)

Inference is about assigning static types to expression, and about deriving the static type of an expression from the static types of its subexpressions.
If anything, it's the last part, the semantics of the operation itself, based on its subexpressions, that we can use to insert coercions at the points where we want them.

If we have a general rule of "If you would infer static type T for e with context C and T is assignable-to-but-no-subtype-of C, then a coercion is inserted at this point, from T towards C, giving a type S, and then the static type of e becomes S instead", which is how we've mostly talked about it so far, then we are adding coercions in the gaps between the semantics of our operations.

The way it's currently implemented for dynamic downcasts, it's more like coercions are part of the semantics of assignments, function calls, and any other operations where we add them. We don't need to change the static type of the subexpression, but we coerce the value before we use it, as part of the semantics of the operation that has that subexpression.

I think that can be easier to specify precisely, because coercion happens at specific points in the semantics, where we specify that they do. Rather than at some implicit point between the inference of the subexpression, and the containing expression seeing the result of that inference. That's not really a point that exists, and likely why it hasn't been implemented that way. ("The static type of the subexpression" has two different answers at different points in the algorithm, and we really need to say something like that we do "pre-inference" on the sub-expression, which is the inference we have currently specified, and actual "inference" of the subexpression takes the result of the pre-inference and possibly coerces it, so we can swap back-and-forth, interleaving the two algorithms. Just so we can give different names to the two types assigned to the same expression.)

The cost of putting the coercion into the operation semantics is that we have to be explicit about every coercion. If we forget to mention it in one place, no coercion happens there. The "coercions happen all the time, between any two expressions" has it easier, but ... well ... we didn't actually implement it. I'm wagering that there's a reason for that.

@leafpetersen
Member Author

leafpetersen commented Feb 8, 2024

@lrhn

int x = (b::bool ? i::int : d::dynamic) :: int;

I don't know where you're going with this. As written, this is an unsound typing, because it gives the conditional expression the type int, while it can evaluate to something which is not an int. I think you're just getting tripped up by the weakness of my informal notation, and really you're just pointing out a distinction without a difference. The problem with my informal notation above is that it doesn't really leave room to distinguish between places where we are filling in a missing type (which doesn't provide a place for casting) vs annotating an expression (which does allow for casting to make it true). So let me try again to see if it's clearer, and then if you're seeing something I'm missing, maybe the more precise notation will help express it in a way I can see.

So first, we need a way to say what the type of a conditional expression is. Let's add that as a missing type parameter on the ?. So the post-inference AST for a conditional expression will look like e ?<T> e1 : e2. A job of type inference is to fill in T, and if the types of e1 and e2 are not subtypes (and I do mean subtypes) of T, it must be an error (otherwise the inference is unsound).

Second, we need a way to say what the "inferred type" of an expression is, separate from its underlying "inherent" type. So for an expression e, let's use the notation (e)::T to mean that e has been inferred to have type T, but that it is possible that a coercion will be needed to mediate between the static type of e and T.

Finally, to reduce pointless ambiguity, let's say that we're going to require as well that at assignments to variables, the expression being assigned must have a type which is a subtype of the variable being assigned to. This reduces the pointless distinction between int x = (d)::dynamic and int x = (d)::int. We'll also insist that we only annotate one level, so that the infinite set of pointless choices possible of the form ((d)::dynamic)::int and (((d)::dynamic)::int)::int aren't a thing.

Ok, given that, let's try again. Given int x = b ? i : d where i has type int and d has type dynamic:

  • we can infer this as int x = ((b)::bool ?<dynamic> (i)::int : (d)::dynamic) :: int
    • This leads to the cast insertion as follows: int x = ((b)::bool ?<dynamic> (i)::int : (d)::dynamic) as int :: int
  • we can infer this as int x = ((b)::bool ?<int> (i)::int : (d)::int) :: int
    • This leads to the cast insertion as follows: int x = ((b)::bool ?<int> (i)::int : (d) as int::int) :: int

I don't think there are any other ways to infer this that are not either unsound, or just pointlessly redundant in a similar way to what I've tried to forbid above. Am I missing something? I couldn't really follow where you were going with the rest of your comment, so maybe if there's something I'm missing the syntax above will give us a common language to express it in?

@lrhn
Member

lrhn commented Feb 8, 2024

@leafpetersen

TL;DR: I agree that what you present matches the intended currently specified behavior. I’m not sure that behavior is well specified, and we already know it’s not what’s implemented. I’m suggesting that there can be a model where coercions happen during the evaluation of operations, not between them, and that this is actually what we have currently implemented. That model is internally consistent, so I think it’s too early to say that the behavior it describes, or could describe with small tweaks to specific operations, cannot happen.

Your description matches our current attempt at specifying coercions.
It still leaves, even in the expanded version, some things implicit that I think it is important to make explicit, and makes some assumptions, inherent to the current approach, that I am trying to question. If we take those for granted, then things do have to be this one way.

Let's start at the root:

the "inferred type" of an expression is, separate from it's underlying "inherent" type.

The quotes are there for a reason, because this is the distinction that I think our specification fails to make, which again may be a contributing factor to not actually implementing the specification. Because when the specification is ambiguous, it's not clear whether an implementation follows it or not.

The <T> added to e ?<T> e1 : e2 is an attempt to make this "inherent" type manifest, but that doesn't scale when every kind of expression can have an "inherent" and "inferred" type.

The specification is ambiguous because it calls both of these "the static type of e", or both “the inferred type of e", even if they may be different types. Or some parts of it simply predates that distinction even existing, and have never been updated.

I have a hard time pointing to the precise places where the current semantics are specified, because it’s combined from a number of different documents, some maybe not committed yet, but for the static typing of e1 ? e2 : e3, the only actual specification I can find is the pre-null-safety language specification, which is still partly oblivious to inference. It states that for c = e1 ? e2 : e3,

The static type of c is the least upper bound (leastUpperBounds) of
the static type of e2 and the static type of e3.

That is, the input types to the static semantics of the conditional expression, the types of the subexpressions, has the same kind as the output type. There is no distinction between "inherent" and "inferred" type.

So, as a first step, let’s try to make the current specified behavior (or what I understand as its intent) explicit.

Let’s use completely new names, to avoid implicit assumptions.

  • The initial inferred type of an expression with a context type scheme, is what the current specification computes and assigns as the “static type” of an expression, like the conditional expression above. The initial inferred type should be assignable to the context type scheme.
  • The final inferred type of an expression with a context type is the static coercion of the initial inferred type with that context type, coerced to that context type (which may be the same type if no coercion is needed). The final inferred type should be a subtype of the context type scheme.
  • The coercion of a type T to a context type (scheme) C is defined as a static semantic operation. (Which means it’s not a pure function: it can introduce compile-time errors, and fail to compute a value if it does; it’s an abstraction over static semantic reasoning.) If it succeeds, the result is a new (coerced) type that is guaranteed to be a subtype of C.
    • If T is a subtype of (the greatest closure of) C, then the result is T. (Or should we use some other predicate to say that T fits into C, like the full T <# C […] -> K? Probably, but I can’t find where this is actually used for a concrete expression like e1 ? e2 : e3, so I’m not entirely sure how to use it.)
    • Otherwise if T is assignable to (the greatest closure of?) C, then there is a set of coercions that can convert a value of type T to a value (or fail and throw) so that the value is a subtype of C. The result of coercion is then the static type of the result of those coercions (for downcast, the type is C, for tear-off/instantiation, when T is a callable type or generic function type, it’s a function type).
    • Otherwise coercion of T to C fails. (Don’t know if that should be made a compile-time error here or somewhere else.)
  • Wherever the current static semantics talk about the static type of a subexpression, we assume it refers to the final inferred type of that subexpression. This is what forces eager coercion.
  • When it assigns a static type to an expression, it assigns the initial inferred type.

Coercions are then the (implicit) step that goes from initial inferred type to final inferred type, based on the context type scheme. The step is implicitly introduced between any expression and its subexpressions, because the inference on the subexpression only finds the initial inferred type, and the parent expression needs and uses the final inferred type.

Using meta-syntax like above, the inference of an expression e gives a result that can be described as e:T::S, where T is the initial inferred type and S is the final inferred type, we get:

int x = ((b):bool::bool ? (i):int::int : (d):dynamic::int) :: int

It’s nigh impossible to specify coercions this way and not get eager coercion, because the coercion of a subexpression is specified independently of the expression it’s part of. We implicitly say that an expression always refers to the final inferred type of its subexpressions. That implies, to ensure soundness, that the expression must produce a value of that final inferred type, which means doing the coercion that produces the final inferred type.
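
As a concrete Dart sketch of the two coercion placements (my own illustration; the casts in the comments are hypothetical desugarings, not code anyone writes):

```dart
void main() {
  bool b = false;
  int i = 1;
  dynamic d = 2;

  // Eager coercion: the dynamic branch is coerced to the context type
  // inside the conditional, as if written:
  int x1 = b ? i : (d as int); // conditional has static type int

  // Late coercion (what implementations do): the conditional keeps the
  // type UP(int, dynamic) = dynamic, and the cast happens at the
  // assignment, as if written:
  int x2 = (b ? i : d) as int; // conditional has static type dynamic

  print('$x1 $x2'); // both succeed here, since d actually holds an int
}
```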

Now my point, which is that we didn’t actually implement that, and maybe the implementations were right.

I don’t know why implementations didn’t do eager coercion. A contributing factor was likely that the specification was very implicit about it, so it was hard to point to a place that said “do eager coercions”. Also, the implementations could keep doing what they already did in Dart 1, which was to do type checks when type checks were needed, and it still worked well for almost all reasonable programs. And we never actually wrote tests for the edge cases of the specific behavior. Maybe the people doing the specification weren’t entirely clear on precisely what was specified, and whether it matched what they wanted (I know I wasn’t).

A way to describe the model that is actually implemented is that coercions happen internally in the semantics of specific syntactic constructs, not in the gaps between an expression and its subexpressions.

Or stated differently, the semantics of an expression refers only to the initial inferred types of its subexpressions, and it has to explicitly perform static and runtime coercions as part of its semantics, if it wants or needs those values.

The (currently implemented, late-coerced) semantics of an expression like e1 ? e2 : e3 with context type scheme C could be described as:

Let T1 and S1 be the initial and final inferred types of e1 with context type bool. This implies a compile-time error if the initial inferred type of e1, T1, is not assignable to bool.

Let T2 be the initial inferred type of e2 with context type C, and T3 the initial inferred type of e3 with context type C. Then the initial inferred type of e1 ? e2 : e3 is Up(T2, T3).

Evaluation of e1 ? e2 : e3 proceeds as follows:

  • Evaluate e1 to a value v of type T1.
    • If T1 is not a subtype of bool, coerce v from type T1 to the context type bool, and let w be the resulting value. Coercion is a semantic function, paralleling the static type coercion by doing the similar operations on the value at runtime. It can throw, which as usual means so does the surrounding evaluation.
    • Otherwise let w be v.
  • If w is the value true, evaluate e2 to a value r.
  • Otherwise w is the value false, then evaluate e3 to a value r.
  • Then e1 ? e2 : e3 evaluates to r.

There is a coercion at the condition expression, because it needs to have type bool, not just be assignable to bool. But the semantics of the branches do not coerce, and the initial inferred type of the entire conditional expression is defined as the upper bound of the initial (non-coerced) inferred types of the branches. That’s only possible to express because we can refer to the initial inferred type of subexpressions.

Similarly x = e where x is a variable with type C:

Let T be the initial inferred type and S be the final inferred type of e with context type C. It’s a compile-time error if S is not a subtype of C, which happens exactly when the initial inferred type T is not assignable to C.

Evaluation of x = e proceeds as follows:

  • Evaluate e to a value v, whose runtime type must be a subtype of T.
  • If T is not a subtype of C, then:
    • T is guaranteed to be assignable to C.
    • Coerce v from type T to context type C, let w be the resulting value. Then w is guaranteed to have a runtime type which is a subtype of C.
  • Otherwise let w be v.
  • Assign w to the variable x.

(We can, and would, choose to abstract over these coercion steps and just write “Evaluate e to a final value w of type S” which will imply the steps above, when the initial inferred type of e is not the same as the final inferred type.)

That is, coercions are not something that happens between the inference and evaluation of expressions; it’s something that happens as part of the static and runtime semantics of an operation, if that operation chooses to do so.

Currently, the implementations choose to apply coercions at specific operations:

  • Argument expressions of function invocations.
  • RHS of assignment expressions.
  • Instantiation at static function tear-off.

which we were mostly aware of, but also, implicitly, inside “unwrapping” operations (as Paul named it) like:

  • for/in (if the supposed iterable is dynamic, we do dynamic operations on it, but do not coerce to a context type until we reach assigning a value to the loop variable).
  • spreads (performs an entire iteration dynamically, then casts the individual values).
  • await (where we await any future, then try to downcast the result, rather than coerce first to FutureOr<C>).
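
The await case is exactly the example from the top of this issue; as a Dart sketch (my own, with the hypothetical desugarings in the comments):

```dart
Future<void> main() async {
  dynamic x = Future<int?>.value(3);

  // Late coercion (current behavior): await unwraps any future, then the
  // result is downcast, as if written `(await x) as int`:
  int y = (await x) as int; // succeeds: the unwrapped value is 3
  print(y);

  // Eager coercion would coerce before awaiting, as if written:
  //   int z = await (x as FutureOr<int>);
  // which throws, because a Future<int?> is not a FutureOr<int>.
}
```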

If we want to actually specify these operations properly (not as the current desugaring which effectively introduces new syntax after inference), then I think we’ll need something like this to do so.

The multiple discussions related to #3471 suggest, at least a little, that just maybe the implementations were on the right track. Eagerly downcasting dynamic removes some capabilities. Eagerly downcasting before await (this issue) does the same. It is as if dynamic is best used late: some operations are not simple invocations, they do multiple smaller operations on a value before coming back to user code, and currently all we need is that the final result ends up being down-castable; it doesn’t matter how we got there. (Not so sure about .call insertion, but if we remove it entirely, that’d be one less worry.)
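
For for/in, the late/eager difference is observable (my own example; the eager variant is hypothetical):

```dart
void main() {
  dynamic xs = <Object>[1, 2, 3];

  // Current (late) behavior: .iterator, .moveNext() and .current are
  // dynamic invocations; only each element is downcast, at the point
  // where it is assigned to the loop variable:
  for (int i in xs) {
    print(i); // each element is cast to int individually, and succeeds
  }

  // Eager coercion would instead require something like
  // `xs as Iterable<int>` up front, which would throw here:
  // a List<Object> is not an Iterable<int>, even if every element is an int.
}
```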

So, when you say about:

int x = (b::bool ? i::int : d::dynamic) :: int;

that it gives the conditional expression a type int, while it can evaluate to a value that is not an int, it is true in the currently specified semantics, because ?/: is specified as it currently is. We could specify it differently (And #3363 is a request to do so). It’s just hard with the current coercion specification which forces eager coercion.

But as things are currently implemented, where the conditional expression doesn’t coerce its branches at all, we can give ?/: a semantics where it’s explicit that it does coercion:

Given an expression e of the form e1 ? e2 : e3 with context type C.

Let S1 be the final inferred type of e1 with context type bool. (Which is a compile-time error if S1 is not a subtype of bool.)
Let S2 be the final inferred type of e2 with context type C.
Let S3 be the final inferred type of e3 with context type C.
The initial inferred type of e1?e2:e3 is UP(S2, S3)

Evaluation of e proceeds as follows:

  • Evaluate e1 to a final value b of type S1. (Aka. evaluate e1 to a value v of the initial inferred type of e1, then coerce it to a value of type S1 if necessary.)
  • If b is true, evaluate e2 to a final value v of type C.
  • Otherwise b is false, then evaluate e3 to a final value v of type C.
  • Then e evaluates to the value v.

This forces coercion of the taken branch, which means that it’s a compile-time error if the final inferred type (initial inferred type coerced to context type) of each branch is not a subtype (or the more complicated version) of the context type. That will make it much easier to change the use of UP to be at most C, e.g. “Let R be UP(S2, S3). If R <# C […] -> K then the initial inferred type of e is R with constraints K plus whatever happened before. Otherwise the initial inferred type of e is the greatest closure of C.”

When coercion is built into the semantics of individual expressions, we only need to assign one type to each expression (the initial inferred type); another operation can then find the final inferred type from that if it needs it, and that operation must also introduce a coercion of the runtime value to that final inferred type.

And we have to be explicit about every coercion: if we forget one, coercion won’t happen. At least, as long as the static and dynamic semantics agree on whether coercion happens, the result should be sound.

@leafpetersen
Member Author

TL;DR: I agree that what you present matches the intended currently specified behavior. I’m not sure that behavior is well specified, and we already know it’s not what’s implemented.

I'm confused. There is no spec. There is an implementation, and I believe it matches one of the two options I presented. The discussion we've been having is about changing it from one to the other. What am I missing? What is the other option that you see?

It’s nigh impossible to specify coercions this way, and not get eager coercion,

I don't know where you're going here. I'm highly confident that I can specify coercion insertion to produce either of the two coercion reifications that I presented above. If you feel this is impossible, I'll need more explanation as to why.

  • Wherever the current static semantics talk about the static type of a subexpressions, we assume it refers to the final inferred type of that subexpression. This is what forces eager coercion.

No, this is where you're going wrong, at least as I see it. The point I am making is that there is a choice of how to produce what you are calling the initial inferred type, and once you have made that choice, the insertion of coercions is fully determined. It is true that the current choice of how to assign inferred types forces eager coercions. My point is just that there are two ways to assign initial inferred types (in your terminology) and one of them naturally leads to eager coercions and one naturally leads to late coercions.

I don’t know why implementations didn’t do eager coercion.

I implemented this. For the first implementation of strong mode, I implemented type inference in the analyzer, and then implemented coercion insertion as a separate pass (I called it "coercion reification"). I wasn't smart enough about the internals of the analyzer to realize that it was practical/possible to attach a type to conditional expressions, so I by default made the choice to make the type of conditional expressions implicit as the LUB of the types of their arms. The coercion reifier simply worked as I describe above: it looked at every term, checked whether its type was a subtype of the destination, and if not, checked whether a coercion could be inserted to satisfy it. The composition of those two design choices gives you the current implementation choice. All I'm trying to say here is that if I had simply made the choice to annotate conditional expressions with a type based on the context (where present), the coercion reifier would then naturally have inserted the coercions inside of the branches, instead of outside. No magic there - the types drive the coercions, so all of the decisions are about how you distribute the types across composite terms.

@lrhn
Member

lrhn commented Feb 9, 2024

I am possibly confused.
And not making it better by discussing two things at the same time: Implementation and specification.

We have an implementation, which is what our compilers currently do. We're talking about whether and how to change that.

We have (a dream of) a specification. I am mainly talking about specification here, which relates to implementation in that some specification strategies may have problems specifying certain actual behaviors.

I have been under the impression that we generally consider implicit downcast, and implicit coercions in general, to be (intended to be) specified by something like:

If expression e with context type C has static type T, and T is assignable to C, but not a subtype,
then we insert the implied coercion operations, which converts a value of type T to a value of a type S,
where S is a subtype of C.

That's how we have generally talked about it in discussion, and we have discussed how the implementations don't actually match that.

It isn't actually written out (possibly anywhere, maybe just not where I can find it), so it's quite possible that that's just, like, my opinion, and everybody else has had some other idea about how things are intended to work, and it's been vague enough that we haven't actually noticed the discrepancy.

What I am saying is that "something like" the above, where the choice of whether to do coercion is based only on the static type and context type of the expression, will unavoidably specify eager coercion. If coercion can happen for that expression type and context type anywhere, it will happen for the same expression type and context type everywhere, because it's defined entirely in terms of those.
If that's not the kind of specification we're considering, then that point is moot.

You are then saying that we can have specification like this, and get late coercion. That's where I think I'm missing something, maybe something unstated, because there need to be some extra information to trigger the coercion only in some positions.
In the description of the implementation, you say:

it looked at every term, checked whether its type was a subtype of the destination, and if not, checked whether a coercion could be inserted to satisfy it.

I think the missing extra information might be the "destination", and I'm not sure what that is. It's not just the context type, because then coercion would be eager.

If a "destination" is something that only assignment, argument or test positions have, the positions that really require coercion to have happened, then that explains why that's where we coerce. But that also means that coercion depends on the expression that the coerced expression is a subexpression of. Doing Function f = (e); puts e in a context without a destination, as a subexpression of (e), but (e) in a context with a destination, as a subexpression of Function f = (e);.
That behavior is what I've been trying to find a way to specify, except that I expressed it as the coercion being part of the parent expression's semantics. What you are describing can probably also be specified by an extra flag on the inference context, a "must coerce to context type" flag, which forces specific subexpressions to be coerced if needed. At least if the "destination" always has the same type as the context type.

So for my example of a conditional expression which forces coercions to its context type,
((b)::bool ? (i)::int : (d)::dynamic)::int, that typing makes perfect sense to me, if ?/: is the operation which does the coercion.
It has a context type of int, a subexpression of type dynamic, so it introduces a coercion from dynamic to int as part of the conditional expression's static and dynamic semantics. The result is an expression of type int, found as UP(int, T) where T is the result of coercing from dynamic to int. (Aka, int, that one is easy, the function type coercions may not yield the precise context type.)

If we instead use the view that d is in a position with a "destination" of type int, then I agree the similar type-notation would be ((b)::bool ? (i)::int : (d)::int)::int, because we do the coercion before getting to ?/:, which is still the cause of it by saying that the subexpression is in a context with a destination of type int.

I have a hunch that the "parent expression semantics explicitly specifies coercions" approach is necessary for formally specifying something like the current for (int i in somethingDynamic)... behavior, where the only runtime type check is from dynamic to int, and all the .iterator, .moveNext() and .current invocations are dynamic invocations. (If that's a behavior we want to keep — which I think I do).
The current specification-by-desugaring isn't really well-defined. Maybe it can be made so, but I'd rather specify the behavior of for/in directly, using the normal "evaluate e to value v"-type steps, without introducing new synthetic syntax. And that means using the pre-coerced dynamic-typed iterable expression value in the semantics, and doing repeated dynamic invocations where necessary, then only doing coercion at the point where we assign a value to the loop variable declaration.
But that also means performing a coercion in the middle of the semantics of a larger syntactic construct, one which is not on any expression that occurs in the original program. It's coercing the result of a dynamic invocation of a .current getter on another intermediate dynamic typed object.
(That's what desugaring bypasses, it introduces synthetic code like var it = e.iterator; var b = it.moveNext(); where we allow those intermediate variables to have type dynamic, then at some point we do int i = it.current;, and only there do we get a coercion, because that's the first time a dynamic is assigned to another type. And then we can say that this coercion is a normal subexpression coercion on the synthetic it.current expression. But desugaring has problems, so I really want to move away from it.)
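
Written out, the synthetic code described above looks something like this (illustrative names only; this is the desugaring style I want to move away from):

```dart
// Sketch of the desugaring of `for (int i in e) body;` where `e` has
// static type dynamic:
void desugaredForIn(dynamic e) {
  dynamic it = e.iterator;   // dynamic getter invocation, no coercion
  while (it.moveNext()) {    // dynamic method invocation, no coercion
    int i = it.current;      // the only coercion: dynamic downcast to int
    print(i);                // stands in for the loop body
  }
}
```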

If we want to have a formal specification like that, it means that coercion cannot solely happen at the boundaries between expressions of the program. It can be invoked for a context type and a value that arose in the middle of sequence of semantic steps, which was never the value of any expression with that context type in the source program.

We don't have a specification, so we don't have established terminology (or we do, and I just don't know it). That makes it hard to figure out if we agree, or are even talking about the same things. I think we do mostly agree on the behavior, but not really how to talk about it.
Maybe defining a consistent and precise terminology for these things should be our next sub-goal.

@leafpetersen
Member Author

I have been under the impression that we generally consider implicit downcast, and implicit coercions in general, to be (intended to be) specified by something like:

If expression e with context type C has static type T, and T is assignable to C, but not a subtype,
then we insert the implied coercion operations, which converts a value of type T to a value of a type S,
where S is a subtype of C.

Ok, this is helpful. I agree that that description leads naturally to eager coercions. But I don't consider something like the above a specification, or at least the only way to do a specification. It's a declarative statement that tries to capture "how things work", but it doesn't really specify anything. What is a context? Where are they introduced? What does it mean for an expression to have a static type? etc. Additionally, it's not by any means the only way to do either an informal or formal specification.

I think (but am not sure) that a similar "specification" (which as I say above is not a specification) of lazy coercions would be something like this:

When we introduce a new context C to infer a type for an expression E, and the static type inferred for E is T, and T is not a subtype of C, then we insert an implied coercion on E to convert it to a value of type S where S is a subtype of C (if such a coercion exists).

I believe that this is essentially a declarative description of lazy coercions as we implement them, but I'm not completely sure. As I say, whatever it is, it's not a specification, since, as with your description above, it doesn't actually say what contexts are, where they are introduced, what the static types of terms are, etc. My claim is simply that once you do all of that, the choices you make about what the static types of terms are will essentially (within the degrees of freedom granted by redundant coercions, etc) have fully defined the coercion strategy. But really, that claim is kind of irrelevant to the main point, which is that when we do fully specify the inference process, we can specify coercion insertion however we want, again, so long as it is compatible with the typing.

Generally speaking, for an actual specification here, what I would propose (and certainly the way that I think about this) is that we simply specify (via syntax directed inference rules, or algorithm, or whatever) exactly how we mean to assign types and coercions to transform a piece of raw un-annotated syntax into a fully typed and annotated AST. To make this concrete, I'll try to give an example of how we might specify this.

So first off, what is it that we are specifying? Well, in Dart post 2.0, we allow programmers to elide some of the types, and some of the coercions, that would otherwise be necessary to fully specify a typed Dart 2.0+ program. We do type reconstruction to rewrite terms into a form which is fully annotated with types and coercions. Note that not all of the types needed for a fully typed Dart program can even be written in Dart syntax. Two examples: Dart syntax has no way to write the return type on a lambda, and yet a fully type inferred Dart program must specify what the return types of lambdas are; and Dart syntax has no way to write the type on a conditional expression (or a switch expression) and yet a fully type inferred Dart program must specify what the type of the conditional expression is. I'm being loose with what I mean by "fully type inferred" here, but it basically means that I can produce the full type of an expression simply by looking at the outermost node of the AST for that expression (more or less, constant factor), without doing any significant computation. So I consider the term (bool b) { if (b) return true; return 1; } to not be fully type inferred, because to read off the type I have to find all of the return expressions, read off their types, and know how to combine them to produce the final return type. On the other hand, using some pseudo-syntax for a lambda with a fixed return type, (bool b) : Object {....} is fully type inferred, because I can read off the type (Object Function(bool)) simply by looking at the outer structure of the AST. In addition to reconstructing types, we also need to insert coercions at appropriate places, which we can notate informally just using regular casts.
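
As a small sketch of the lambda example (my own; the reconstructed return type is computed from the return statements):

```dart
void main() {
  // Not "fully type inferred" as written: the return type has to be
  // found by collecting the return expressions' types.
  var f = (bool b) {
    if (b) return true;
    return 1;
  };
  // Inference reconstructs the return type as UP(bool, int) = Object,
  // so f gets the static type Object Function(bool).
  print(f(true));  // true
  print(f(false)); // 1
}
```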

Ok, so how might we specify this? If we're using markdown friendly algorithmic rules, we might say something like this.

(Note: this isn't an actual attempt at a formal specification here - I'm banging this out in a comment thread, so please don't get stuck on typos, or minor details that I'm forgetting to handle - I'm just trying to give the general flavor here).

INFER

We define a type reconstruction algorithm INFER(E, K) which, given an expression E and a type schema K (see definition elsewhere) produces a fully type inferred expression E2 and its static type T (note that for a fully type inferred expression, it should always be possible to read off T directly from E2).

Option 1: eager coercions (sample of some rules).

Commentary: it should be an invariant of this option that if INFER(E, K) produces E2 with static type T, then T is a subtype of the greatest closure of K. I'm being vague about where the error checking happens; you can be more or less eager with it.

Declarations:

Given a variable declaration with initializer of the form S x = E for some S, x, and E, then the fully type annotated form of the declaration is S x = E2 where INFER(E, S) is E2 of static type T.

More declaration forms here

Expressions:

We define INFER(E, K) inductively on the structure of terms as follows:

If E is of the form x where x is an identifier, then INFER(E, K) is E2 of static type T where:

  • S is the greatest closure of K
  • T2 is the static type of x
  • if T2 is a subtype of S then E2 is x and T is T2.
  • Otherwise if T2 is dynamic then E2 is x as S and T is S
  • Commentary: if we wish to account for other coercions, add them here, or better, abstract all of this into a helper

If E is of the form (b) ? E_t : E_f then INFER(E, K) is E2 of static type T where:

  • INFER(b, bool) is E_b
  • INFER( E_t, K) is E2_t
  • INFER( E_f, K) is E2_f
  • The greatest closure of K is S
  • E2 is (E_b) ?<S> E2_t : E2_f
    • Commentary: I'm using this notation for a fully type annotated conditional
  • T is S

More expression cases here

Option 2: lazy coercions (sample of some rules).

Commentary: it should be an invariant of this option that if INFER(E, K) produces E2 with static type T, then either T is a subtype of the greatest closure of K, or T is dynamic. This is assuming we are only dealing with implicit casts. To account for other coercions lazily as well, we generalize, left as an exercise to the reader.

Declarations:

Given a variable declaration with initializer of the form S x = E for some S, x, and E, then the fully type annotated form of the declaration is S x = E2 where COERCE(E, S) is E2 of static type T.

More declaration forms here

Expressions:

We start by defining a helper relation COERCE(E, K) on terms and type schemas which produces a fully type inferred term E2 of static type T where T is a subtype of the greatest closure of K as follows:

  • Let S be the greatest closure of K
  • Let INFER(E, K) be E_i of static type S_i
  • If S_i is a subtype of S then E2 is E_i and T is S_i
  • Otherwise (by invariant we should have that S_i is dynamic, but it doesn't really matter) E2 is E_i as S and T is S.

We define INFER(E, K) inductively on the structure of terms as follows:

If E is of the form x where x is an identifier, then INFER(E, K) is x of static type T where:

  • S is the greatest closure of K
  • T2 is the static type of x
  • if T2 is a subtype of S then E2 is x and T is T2.
  • Otherwise if T2 is dynamic then E2 is x and T is dynamic
    • Commentary: if we wish to account for other coercions, we allow arbitrary types here instead of just dynamic, and probably leave the error checking to the point where we try to do the coercion.

If E is of the form (b) ? E_t : E_f then INFER(E, K) is E2 of static type T where:

  • COERCE(b, bool) is E_b
  • INFER( E_t, K) is E2_t of type T_t
  • INFER( E_f, K) is E2_f of type T_f
  • T_c is UP(T_t, T_f)
  • E2 is (E_b) ?<T_c> E2_t : E2_f
    • Commentary: I'm using this notation for a fully type annotated conditional
  • T is T_c

More expression cases here

Option 3: lazy coercions with eager conditional expression typing (sample of some rules).

I think this is basically how we would specify the change that I'm asking @stereotype441 to make in the short term if we are not able to move directly to eager coercions (which it seems like will be hard, if even possible/desirable).

Commentary: it should be an invariant of this option that if INFER(E, K) produces E2 with static type T, then either T is a subtype of the greatest closure of K, or T is dynamic. This is assuming we are only dealing with implicit casts. To account for other coercions lazily as well, we generalize, left as an exercise to the reader.

Declarations:

As in option 2

Expressions:

We start by defining a helper relation COERCE(E, K) as in option 2 above.

We define INFER(E, K) inductively on the structure of terms as follows:

If E is of the form x where x is an identifier, then INFER(E, K) is x of static type T as defined in option 2 above.

If E is of the form (b) ? E_t : E_f then INFER(E, K) is E2 of static type T where:

  • COERCE(b, bool) is E_b
  • INFER( E_t, K) is E2_t of type T_t
  • INFER( E_f, K) is E2_f of type T_f
  • S is the greatest closure of K
  • T_c is UP(T_t, T_f)
  • If K is not the empty context, and if T_t is a subtype of S and T_f is a subtype of S then:
    • E2 is (E_b) ?<S> E2_t : E2_f
    • T is S
  • Otherwise:
    • E2 is (E_b) ?<T_c> E2_t : E2_f
    • T is T_c

Commentary: I'm doing this from memory, but I think this is basically what @stereotype441 proposed. Basically, if the two branches of the conditional are each subtypes of the context (and we have a non-trivial context) we use the context as the type of the conditional. Otherwise, we fall back to the upper bound.
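
A concrete example (my own, not from the thread) of the kind of code this rule is aimed at; whether it is accepted depends on which option is in force:

```dart
import 'dart:async';

void main() {
  bool b = true;
  // Each branch type (int and Future<int>) is a subtype of the context
  // FutureOr<int>, so option 3 gives the conditional the context type
  // FutureOr<int>. Under the plain UP rule (option 2), the conditional
  // would instead get UP(int, Future<int>) = Object, which does not fit
  // the context.
  FutureOr<int> x = b ? 1 : Future<int>.value(2);
  print(x);
}
```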

More expression cases here

So that's how I would propose we think about this, and when we sit down to specify it, how we actually specify it. Again - apologies if I've made typos, missed some cases, or am not getting things quite right. This is just a quick comment to try to explain where I'm coming from here. And I'm open to other approaches! So for example, as I said earlier, we could probably do this in two stages where we first infer types (allowing for type mismatches) and then specify a second relation which looks for type mismatches and inserts coercions where required. Or... something else? But this is how I would go about it, and where I'm coming from when I say that we can specify lazy or eager coercions equally well simply by specifying how the types are inferred.

@lrhn
Member

lrhn commented Feb 11, 2024

That does look like the approach I'd also suggest, with the caveat that I'd avoid creating a new expression if at all possible.
The "expressions" we are working on are abstract syntax trees (at least one abstraction level over the raw grammar productions), which is why we can choose to ignore trailing commas and assume expressions always have the most general form. We haven't formalized that, but it's close enough to real grammatical syntax that there isn't any confusion.

If we have something like Future.value(1) as an expression, and we want to infer type arguments to the constructor, it's problematic to say that we introduce a new expression, or insert the missing type arguments, so the program becomes Future<int>.value(1). The most important reason is that it requires us to go from the inferred static type int, which is a semantic type, an entity in the type system, and convert it to the syntax int, which is a type clause (from the <type> production) which denotes the semantic type int. (It's potentially confusing to use the same notation for both the syntactic type terms and the semantic types. It's manageable, but we need to be very careful about mixing categories.)
So if we say that we insert the <int> type argument, we really insert <T> where T is a syntactic type that denotes the type int. It's not obvious that that's always possible, if the type's declaration isn't accessible in the library where the type argument is inserted. There may be no valid Dart syntax we can insert. We can patch that by saying that all libraries implicitly export non-private freshly-named aliases for all type declarations, and all libraries have implicit fresh-name prefixed imports of all other libraries in the program, so that we can always add syntax which denotes any type. (That will then wreak havoc with the field promotion rules, which we then have to patch to ignore these synthetic imports.)

I'd suggest that instead of patching the syntax, we say that type inference annotates every expression which uses type arguments, whether some were given explicitly in the syntax or not, with the semantic type arguments that will be used later. That means that if you write Future<List>.value([]), type inference will assign to the constructor invocation the single semantic type argument List<dynamic>, the semantic type denoted by instantiating the syntactic argument List to bounds.
To that end, we could say that abstract syntax expressions have empty slots for semantic values, and we are filling in those, not the holes in the syntax. Or that inference converts the program from one abstract syntax to another abstract syntax with more "fields" for each abstract syntax node. Or, probably equivalently, just say that inference assigns metadata to the expression that can be referenced later, like we do with "static type" today. (That assumes that the expression has an identity, which rewrites do not need, but we assume that in many other places, like when referring to the lexical scope of an expression.)

The static type would also be such an extra semantic slot/assigned metadata on the abstract syntax expression.

Just because I want to, let me try to specify ?/: the way I'd want it.

  • Let e be e1 ? e2 : e3.
  • The static type of e with context type scheme K is inferred as:
    • Infer the static type of e1 with context type scheme bool to the type T1.
    • It's a compile-time error if T1 is not assignable to bool.
    • Infer the static type of e2 with context type scheme K to the type T2.
    • Infer the static type of e3 with context type scheme K to the type T3.
    • It is a compile-time error if T2 or T3 is not assignable to K.
    • Let S2 be STATIC_COERCE(T2, K). (Performs any needed coercion at the type level to ensure S2 <: K.)
    • Let S3 be STATIC_COERCE(T3, K).
    • Let S be UP(S2,S3)
    • If S <: K, the static type of e is S.
    • Otherwise the static type of e is K. (Probably add some "greatest closure of" in a few places.)
  • Evaluation of e proceeds as follows:
    • Evaluate e1 to a value v1.
    • Let b be the result of COERCE(v1, T1, bool) (a semantic function which performs the runtime operations needed to coerce a value of type T1, which is assignable to bool, to a value whose runtime type is a subtype of bool. It's a no-op if T1 <: bool.)
    • If b is true:
      • evaluate e2 to a value v2.
      • let v be COERCE(v2, T2, K).
    • otherwise b is false:
      • evaluate e3 to a value v3.
      • let v be COERCE(v3, T3, K).
    • Then e evaluates to v.
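
To make the static-type selection concrete, here is a toy Python model (illustrative only; the tiny num/int/double lattice and the trivial UP are my assumptions, not Dart's real algorithms) of the steps above: coerce both branch types against the context K, take UP, and keep UP's result only if it still satisfies K.

```python
# Toy model of the conditional expression's static type computation.

SUPERS = {"int": "num", "double": "num"}

def is_subtype(t, k):
    if k == "dynamic" or t == k:
        return True
    s = SUPERS.get(t)
    return s is not None and is_subtype(s, k)

def static_coerce(t, k):
    # dynamic coerces (implicit downcast) to the context type; any other
    # assignable type is left unchanged in this reduced model
    return k if t == "dynamic" else t

def up(a, b):
    # toy least upper bound over the tiny lattice
    if is_subtype(a, b):
        return b
    if is_subtype(b, a):
        return a
    if SUPERS.get(a) is not None and SUPERS.get(a) == SUPERS.get(b):
        return SUPERS[a]
    return "dynamic"

def conditional_static_type(t2, t3, k):
    s2 = static_coerce(t2, k)            # STATIC_COERCE(T2, K)
    s3 = static_coerce(t3, k)            # STATIC_COERCE(T3, K)
    s = up(s2, s3)                       # UP(S2, S3)
    return s if is_subtype(s, k) else k  # use UP if it's better than context
```

For example, two int branches under context num yield int (UP is more precise than the context), while an int branch and a double branch fall back to num.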

and STATIC_COERCE(T, K) where T is assignable to K would be S s.t.

  • If T <: K (greatest closure of), then S = T.
  • If T is dynamic then S = K (GCO)
  • If T is a non-function type and K (GCO) is a function type, let F be the function type of T.call. Then S is STATIC_COERCE(F, K)
  • If T is a generic function type and K (GCO) is a non-generic function type, let <T1,...,Tn> be the type arguments that proved T to be assignable to K. Then S is the generic instantiation of T with <T1,...,Tn>.

and COERCE(v, T, K) where T is assignable to K is w, the result of the semantic computation:

  • If T <: K (greatest closure of), then w = v
  • If T is dynamic then:
    • If the runtime type of v is a subtype of K (GCO), then w = v
    • Otherwise a runtime type error occurs.
  • If T is a non-function type and K (GCO) is a function type, then let f be the tear-off of the call method of v, and let F be the function type of T.call. Then w is the result of COERCE(f, F, K).
  • If T is a generic function type and K (GCO) is a non-generic function type, let <T1,...,Tn> be the type arguments that proved T to be assignable to K. Then w is the generic instantiation of v with <T1,...,Tn>.
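
A runtime-level counterpart, again as a reduced Python sketch (the two-type hierarchy is assumed for illustration; the call tear-off and generic instantiation cases are omitted). A Dart object is modeled as a (runtime_type, payload) pair:

```python
# Reduced model of STATIC_COERCE / COERCE, showing the no-op case, the
# successful dynamic downcast, and the failing dynamic downcast.

SUPERS = {"int": "num"}

def is_subtype(t, k):
    if k == "dynamic" or t == k:
        return True
    s = SUPERS.get(t)
    return s is not None and is_subtype(s, k)

def static_coerce(t, k):
    if is_subtype(t, k):
        return t                      # S = T: no coercion needed
    if t == "dynamic":
        return k                      # S = K: implicit downcast
    raise TypeError("tear-off/instantiation cases omitted in this model")

def coerce(value, t, k):
    if is_subtype(t, k):
        return value                  # no-op when T <: K
    if t == "dynamic":
        runtime_type, _ = value
        if is_subtype(runtime_type, k):
            return value              # downcast succeeds
        raise TypeError("runtime type error")  # downcast fails
    raise TypeError("tear-off/instantiation cases omitted in this model")
```

Here coerce(("int", 3), "dynamic", "num") returns the value unchanged, while coerce(("String", "x"), "dynamic", "num") raises, mirroring a failed implicit downcast.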

Something like that. (Forced coercion before UP, use UP if it's better than context, context if not.)
I really want to figure out where the inference T1 <# T2 [L] -> C rules fit into this. It feels like some of these <:s should be able to introduce constraints.

Going back to await, with that approach to specification, and wanting to preserve the "dynamic as long as possible" approach:

  • Let e be await e1.
  • Inference of e with context type scheme K:
    • Infer the static type, T1, of e1 with context type scheme awaitContextType(K) (the function we're discussing elsewhere, which tries to avoid nested Futures).
    • It's a compile-time error if T1 is not assignable to FutureOr<K>.
    • the static type of e is S = flatten(T1)
  • Evaluation of e proceeds as follows:
    • Evaluate e1 to a value v.
    • If the runtime type of v implements Future<S> then "await the future v to a result w". Then e evaluates to w.
    • Otherwise suspend the current async function until a later microtask. When it resumes, e evaluates to v.

Then we will need to explain how we know this is sound. We only require T1 to be assignable to FutureOr<K>, and with that we can show that the result S is assignable to K. But we need to actually do that; it's not obviously true, and it only holds because flatten(dynamic) is dynamic.
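
The key fact, flatten(dynamic) = dynamic, is easy to check in a stripped-down model (this covers only the cases used here; Dart's real flatten handles more, e.g. types that merely implement Future):

```python
# Stripped-down model of flatten. Types are "dynamic", a ("Future", S) or
# ("FutureOr", S) pair, or a plain type name.

def flatten(t):
    if t == "dynamic":
        return "dynamic"   # the case the soundness argument hinges on
    if isinstance(t, tuple) and t[0] in ("Future", "FutureOr"):
        return t[1]        # unwrap one layer of Future / FutureOr
    return t               # non-future type: flatten(T) = T
```

So an await whose operand has static type dynamic again has static type dynamic, which is assignable to any context K.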

Let's try for/in:

  • Let s be for (d1 in e2) s3 where d1 is a declaration of a variable x, which can now be a variable declaration or a pattern assignment declaration.
  • Inference of s is:
    • Let K be the type scheme of d1. (For patterns, it's the context type from the first step of the three-step inference, for plain variables, it's the type denoted by the declared type clause, or _ if no type.)
    • Infer T1, the static type of e2 with context type scheme Iterable<K>.
    • If T1 implements Iterable<T2> for some T2,
      • Let I be the return type of the T1.iterator getter.
      • If I implements Iterator<T3> for some T3, let E be the return type of the I.current getter.
      • Otherwise, if BOTTOM(I), let E be Never.
    • Otherwise if T1 is dynamic, let I and E be dynamic.
    • Otherwise if BOTTOM(T1), let I and E be Never.
    • Otherwise it's a compile-time error.
    • Infer types for d1 with E as matched value type.
    • Let R be the required type of d1.
    • It's a compile-time error if E is not assignable to R.
    • Let S be the scope introduced by d1 (declared variables and their declared types, after inference.)
    • Infer s3 in a scope which extends the lexical scope of s with S.
  • Execution of s proceeds as follows:
    • Evaluate e2 to a value v of type T1.
    • If T1 is dynamic, let v2 be COERCE(v, T1, Iterable<dynamic>), otherwise let v2 be v.
    • Let i be the result of a getter invocation of iterator on v2. (Must implement Iterator<E>)
    • Then begin a loop:
      • Perform a method invocation of .moveNext() on i, with return value b.
      • If b is false execution of s completes normally.
      • Otherwise b is true.
      • Perform a getter invocation of .current on i, with return value c.
      • Let c2 be COERCE(c, E, R).
      • Perform declaration binding of d1 with value c2, giving a binding of S2.
      • Let S3 be the current runtime scope extended with S2.
      • Execute s3 in a runtime scope with the bindings of S3
      • If this execution completes normally, restart the loop.
      • If this execution completes by continuing with no label, or by continuing with a label matching a label on this for/in loop, restart the loop.
      • If this execution completes by breaking with no label, or by breaking with a label matching a label on this for/in loop, execution of s completes normally.
      • Otherwise execution of s completes in the same way (throwing, returning, breaking or continuing to an unrelated label).

I believe this should match the current behavior. It allows both dynamic and Iterable<dynamic> to be iterated at a specific element type, and even allows Iterable<Callable> to be iterated with a function-typed loop variable, but it also requires a dynamic iterable value to implement Iterable at runtime, by doing the implicit down-casts from dynamic to Iterable<dynamic> along the way, and then a further downcast from dynamic to R for each value.
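
As a sanity check of those runtime steps, here is a small Python analogue (all names are illustrative, not part of any spec; require_int stands in for COERCE(c, E, R) with R = int): the value must be iterable at all, corresponding to the downcast from dynamic to Iterable<dynamic>, and each element is then coerced to the loop variable's required type.

```python
# Python analogue of the for/in execution steps above (names illustrative).

def run_for_in(value, element_coerce, body):
    try:
        it = iter(value)   # the `iterator` getter; failing here corresponds to
    except TypeError:      # the downcast dynamic -> Iterable<dynamic> failing
        raise TypeError("value does not implement Iterable")
    results = []
    for c in it:                                  # moveNext() / current
        results.append(body(element_coerce(c)))   # COERCE(c, E, R), run body
    return results

def require_int(c):
    # stands in for COERCE(c, dynamic, int): a per-element implicit downcast
    if not isinstance(c, int):
        raise TypeError("element is not an int")
    return c
```

Iterating [1, 2, 3] succeeds, while a non-iterable value fails up front and a list containing a non-int fails at that element, matching the two layers of downcasts described above.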
