Support self-types containing ParamSpec #15903
Conversation
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Diff from mypy_primer, showing the effect of this PR on open source code: vision (https://github.com/pytorch/vision): typechecking got 1.38x faster (42.5s -> 30.9s)
(Performance measurements are based on a single noisy sample)
typeargs = infer_type_arguments(
-    func.variables, self_param_type, TypeType(original_type), is_supertype=True
+    self_vars, self_param_type, TypeType(original_type), is_supertype=True
Just to check my intuition here: I presume the speedup comes from decreasing the number of type vars inferred here? (And a couple of lines above.)
Yes, decreasing the number of type vars to solve for will speed things up slightly, but the main speed-up is from not using a nested function in relatively hot code (which especially affects mypyc, which anecdotally emits inefficient code for nested functions).
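A hypothetical sketch of the general pattern being described (none of this is mypy's actual code): a nested helper is re-created on every call of the enclosing function, which, as noted above, mypyc anecdotally handles inefficiently; doing the work without the closure avoids that.

```python
# Hypothetical hot-path code. Before: the nested closure `expand` is
# re-created (capturing `mapping`) on every call of `bind_before`.
def bind_before(items: list[str], mapping: dict[str, str]) -> list[str]:
    def expand(item: str) -> str:
        return mapping.get(item, item)
    return [expand(item) for item in items]

# After: the same work without a per-call nested function.
def bind_after(items: list[str], mapping: dict[str, str]) -> list[str]:
    return [mapping.get(item, item) for item in items]
```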
-    ret_type = expand(func.ret_type)
-    variables = [v for v in func.variables if v.id not in ids]
+    func = expand_type(func, {tv.id: arg for tv, arg in zip(self_vars, to_apply)})
+    variables = [v for v in func.variables if v not in self_vars]
Seeing this whole "expand a type with certain type vars then remove those certain type vars" dance feels weird. Is there a reason `expand_type` doesn't remove the replaced type variables?
OK, it is time for a type theory lesson. There are two kinds of things that people usually call "generic types": type constructors, and polymorphic types. This terminology is still vague (and I am not sure it is standard), so it is better to explain with examples. For example, `Callable` is a type constructor: by itself it is not a valid type, it expects some type arguments, so e.g. `Callable[[int], int]` is a valid type. Now when you write:

def foo(x: T) -> T: ...

assuming `T` is not bound in the current scope, this defines a polymorphic type for `foo`. Such a type is valid by itself and does not expect any type arguments. You can imagine it as an infinite intersection or an infinite overload `forall T . Callable[[T], T]`.
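For intuition, here is a minimal runnable sketch of what instantiating that infinite overload looks like at call sites (the comments paraphrase mypy's `reveal_type` output; this snippet is illustrative, not part of the PR):

```python
from typing import TypeVar

T = TypeVar("T")

def foo(x: T) -> T: ...

# Each call picks one "member" of forall T . Callable[[T], T]:
reveal_type(foo(42))       # roughly "int" -- the member with T = int
reveal_type(foo("hello"))  # roughly "str" -- the member with T = str
```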
(An implementation note: in an ideal world `variables` should not be a property of `CallableType`, we would have `ForAllType(variables, template)`, and there would be no `fallback` in callable types; in an ideal world the type of `foo` would be `ForAllType([T], IntersectionType(Callable[[T], T], Instance("types.FunctionType")))`, but we are not living in an ideal world.)
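To make that hypothetical representation concrete, here is a purely illustrative sketch; none of these classes exist in mypy, they are the imagined `ForAllType`/`IntersectionType` from the note above:

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical classes sketching the "ideal world" representation described
# above; mypy's real CallableType instead carries `variables` and `fallback`
# on the callable itself.
@dataclass
class ForAllType:
    variables: list[str]   # the bound type variables, e.g. ["T"]
    template: Any          # the type they are bound in, e.g. a callable type

@dataclass
class IntersectionType:
    items: list[Any]       # e.g. [Callable[[T], T], Instance("types.FunctionType")]

# foo would then be represented roughly as:
# ForAllType(["T"], IntersectionType([Callable[[T], T], Instance("types.FunctionType")]))
```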
If we are dealing with a type constructor (in particular with generic type aliases), we never need to "remove certain type vars" (what would that even mean?). The only thing we should do is formally substitute the type parameters expected by the constructor with the type arguments we got. For example, with `A[T] = Callable[[T], T]` (it is sad that this is not valid syntax; having explicit vars for type aliases would save a lot of ambiguity/confusion), and with `A[int]`, we just do `expand_type(Callable[[T], T], {T -> int}) = Callable[[int], int]`, end of story.
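A small runnable illustration of that substitution, using the ordinary implicit alias syntax (the comment paraphrases mypy's output; this is not code from the PR):

```python
from typing import Callable, TypeVar

T = TypeVar("T")

A = Callable[[T], T]   # a generic alias: a type constructor, not a type by itself

def f(x: A[int]) -> None:
    # mypy expands A[int] by substituting {T -> int} into the alias target:
    reveal_type(x)     # roughly "def (int) -> int"
```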
Now when we talk about type inference this usually involves several steps:

1. Given an infinite overload (ranging over some type variables), figure out how the types of the arguments passed in the `CallExpr` narrow down the range of each type variable (this is called inferring constraints, done in `constraints.py`). For `foo(42)` this would be `T :> int`.
2. Find the matching overload by solving those constraints (done in `solve.py`). In the above example `T = int` is the best solution.
3. Construct the type of the matching overload. This is usually done in `applytype.py` and has three sub-steps:
   a. Validate that the solution found actually satisfies type variable bounds, values, etc.
   b. Call `expand_type()` on the r.h.s. of the `forall` expression with the solutions found.
   c. Update the free variables on the l.h.s. of the `forall` expression (usually this means just removing some, but not always).
So what you are seeing here is an ad-hoc version of this logic. Of course it all could be refactored to reduce code duplication, but FWIW I don't think it is a big deal in this case, since the amount of code duplication is really small here.
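To make steps 1–3 concrete, here is a toy, self-contained sketch for the `foo(42)` example; these are plain dicts and lists, not mypy's real data structures, and the real logic lives in `constraints.py`, `solve.py`, and `applytype.py`:

```python
# forall T . Callable[[T], T], represented as toy data:
template = {"args": ["T"], "ret": "T", "variables": ["T"]}

# Step 1: infer constraints for the call foo(42): matching the argument
# type int against the parameter type T yields the constraint T :> int.
constraints = {"T": ["int"]}

# Step 2: solve the constraints; with a single lower bound per variable,
# the best solution is just that bound.
solution = {tv: bounds[0] for tv, bounds in constraints.items()}   # {"T": "int"}

# Step 3a: validation of bounds/values would happen here (skipped in this toy).
applied = {
    # Step 3b: "expand" the r.h.s. of the forall with the solution.
    "args": [solution.get(a, a) for a in template["args"]],        # ["int"]
    "ret": solution.get(template["ret"], template["ret"]),         # "int"
    # Step 3c: update the free variables on the l.h.s. (here: drop T).
    "variables": [v for v in template["variables"] if v not in solution],  # []
}

print(applied)   # {'args': ['int'], 'ret': 'int', 'variables': []}
```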
And one more clarifying comment on why we don't have the validation step (3a) in the case of a type constructor: this is because validation can be completed during semantic analysis, see `semanal_typeargs.py`.
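For example, a bad application of a type constructor is rejected while the annotation itself is being analyzed, with no call site involved (error text paraphrased from mypy):

```python
from typing import Dict

# Reported during semantic analysis of the annotation itself, roughly:
# '"dict" expects 2 type arguments, but 1 given'
bad: Dict[int]
```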
This PR seems low risk, so I will be merging it unless there are objections.
Fixes #14968
Fixes #13911
The fix is simple: as I predicted on Discord, we simply should use `get_all_type_vars()` instead of `get_type_vars()` (which specifically returns only `TypeVarType`). I also use this opportunity to tidy up the code in `bind_self()`; it should now be more readable, and much faster (especially when compiled with mypyc).

cc @A5rocks
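For context, a hypothetical illustration (not the exact repro from the linked issues) of the kind of code this PR is about: a method on a class generic over a `ParamSpec`, whose self type therefore contains that `ParamSpec`.

```python
from typing import Callable, Generic, TypeVar
from typing_extensions import ParamSpec, Self

P = ParamSpec("P")
R = TypeVar("R")

class Task(Generic[P, R]):
    def __init__(self, fn: Callable[P, R]) -> None:
        self.fn = fn

    def clone(self) -> Self:
        # The type of `self` here is Task[P, R], which contains the ParamSpec P,
        # so binding `self` has to handle P as well as ordinary TypeVars.
        return self

def use(t: Task[[int, int], str]) -> None:
    reveal_type(t.clone())   # roughly "Task[[int, int], str]"
```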