From ce2275c2abe33446c29cdfa3fe55d703fcf8a3f9 Mon Sep 17 00:00:00 2001 From: Shuhei Kadowaki Date: Thu, 19 Aug 2021 22:09:51 +0900 Subject: [PATCH 1/2] introduce `@nospecializeinfer` macro to tell the compiler to avoid excess inference MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This commit introduces a new compiler annotation called `@nospecializeinfer`, which allows us to request that the compiler avoid excessive inference. \## `@nospecialize` mechanism To discuss `@nospecializeinfer`, let's first understand the behavior of `@nospecialize`. Its docstring says that > This is only a hint for the compiler to avoid excess code generation. In practice, it works by suppressing specialization on the runtime types of the annotated arguments, so callers do not need to dispatch on them at runtime. This can be understood with the example below: ```julia julia> function call_func_itr(func, itr) local r = 0 r += func(itr[1]) r += func(itr[2]) r += func(itr[3]) r end; julia> _isa = isa; # just for the sake of explanation, global variable to prevent inlining julia> func_specialize(a) = _isa(a, Function); julia> func_nospecialize(@nospecialize a) = _isa(a, Function); julia> dispatchonly = Any[sin, muladd, nothing]; # untyped container can cause excessive runtime dispatch julia> @code_typed call_func_itr(func_specialize, dispatchonly) CodeInfo( 1 ─ %1 = π (0, Int64) │ %2 = Base.arrayref(true, itr, 1)::Any │ %3 = (func)(%2)::Any │ %4 = (%1 + %3)::Any │ %5 = Base.arrayref(true, itr, 2)::Any │ %6 = (func)(%5)::Any │ %7 = (%4 + %6)::Any │ %8 = Base.arrayref(true, itr, 3)::Any │ %9 = (func)(%8)::Any │ %10 = (%7 + %9)::Any └── return %10 ) => Any julia> @code_typed call_func_itr(func_nospecialize, dispatchonly) CodeInfo( 1 ─ %1 = π (0, Int64) │ %2 = Base.arrayref(true, itr, 1)::Any │ %3 = invoke func(%2::Any)::Any │ %4 = (%1 + %3)::Any │ %5 = Base.arrayref(true, itr, 2)::Any │ %6 = invoke func(%5::Any)::Any │ %7 = (%4 + %6)::Any │ %8 = Base.arrayref(true, itr, 3)::Any │ %9 = invoke func(%8::Any)::Any │ %10 = (%7 + %9)::Any └── return %10 ) => Any ``` The calls to `func_specialize` remain `:call` expressions (so they are dispatched and compiled at runtime), while the calls to `func_nospecialize` are resolved as `:invoke` expressions. This is because `@nospecialize` requests that the compiler compile `func_nospecialize` for the declared argument types rather than the runtime argument types, allowing `call_func_itr(func_nospecialize, dispatchonly)` to avoid runtime dispatches and the accompanying JIT compilations (i.e. "excess code generation"). The difference is evident when checking `specializations`: ```julia julia> call_func_itr(func_specialize, dispatchonly) 2 julia> length(Base.specializations(only(methods(func_specialize)))) 3 # w/ runtime dispatch, multiple specializations julia> call_func_itr(func_nospecialize, dispatchonly) 2 julia> length(Base.specializations(only(methods(func_nospecialize)))) 1 # w/o runtime dispatch, a single specialization ``` The problem here is that `@nospecialize` influences dispatch only and does not intervene in inference in any way.
So there is still a possibility of "excess inference" when the compiler sees highly complex argument types during inference: ```julia julia> func_specialize(a) = _isa(a, Function); # redefine func to clear the specializations julia> @assert length(Base.specializations(only(methods(func_specialize)))) == 0; julia> func_nospecialize(@nospecialize a) = _isa(a, Function); # redefine func to clear the specializations julia> @assert length(Base.specializations(only(methods(func_nospecialize)))) == 0; julia> withinfernce = tuple(sin, muladd, "foo"); # typed container can cause excessive inference julia> @time @code_typed call_func_itr(func_specialize, withinfernce); 0.000812 seconds (3.77 k allocations: 217.938 KiB, 94.34% compilation time) julia> length(Base.specializations(only(methods(func_specialize)))) 4 # multiple method instances inferred julia> @time @code_typed call_func_itr(func_nospecialize, withinfernce); 0.000753 seconds (3.77 k allocations: 218.047 KiB, 92.42% compilation time) julia> length(Base.specializations(only(methods(func_nospecialize)))) 4 # multiple method instances inferred ``` The purpose of this PR is to implement a mechanism that allows us to avoid excessive inference and thus reduce compilation latency when inference encounters highly complex argument types. \## Design Here are some ideas to implement the functionality: 1. make `@nospecialize` block inference 2. add the nospecializeinfer effect when a `@nospecialize`'d method is annotated as `@noinline` 3. implement as a `@pure`-like boolean annotation to request the nospecializeinfer effect on top of `@nospecialize` 4. implement as an annotation that is orthogonal to `@nospecialize` After trying approaches 1 to 3, I decided to submit approach 3. \### 1. make `@nospecialize` block inference This is almost the same as what Jameson has done at . It turned out that this approach performs very badly because some `@nospecialize`'d arguments still need inference for the code to perform reasonably. For example, it's obvious that Base's definition of `getindex(@nospecialize(t::Tuple), i::Int)` would perform very badly if `@nospecialize` blocked inference, because the succeeding optimizations would be left without useful type information (see the sketch below).
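For reference, here is a minimal sketch of that definition (paraphrased; the actual method in `base/tuple.jl` also threads through bounds-check information, so the exact form differs). The point is that without inference on the `@nospecialize`'d argument, every tuple-indexing site would infer as `Any`:

```julia
# Simplified sketch of Base's Tuple `getindex` (not the verbatim definition).
# If `@nospecialize` also blocked inference, the element type of `t[i]` could
# never be inferred, pessimizing essentially every tuple access downstream.
getindex(@nospecialize(t::Tuple), i::Int) = getfield(t, i)
```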
\### 2. add the nospecializeinfer effect when a `@nospecialize`'d method is annotated as `@noinline` The important observation is that we often use `@nospecialize` even when we expect inference to forward type and constant information. Conversely, we may be able to exploit the fact that we usually don't expect inference to forward information to a callee when we annotate it with `@noinline` (i.e. when adding `@noinline`, we're usually fine with disabling inter-procedural optimizations other than resolving dispatch). So the idea is to enable the inference suppression when a `@nospecialize`'d method is also annotated as `@noinline`. It's a reasonable choice and can be efficiently implemented with #41922. But it sounds a bit weird to me to associate the no-infer effect with `@noinline`, and I also think there may be cases where we want to inline a method while partly avoiding inference, e.g.: ```julia \# the compiler will always infer with `f::Any` @noinline function twof(@nospecialize(f), n) # this method body is very simple and should be eligible for inlining if occursin('+', string(typeof(f).name.name::Symbol)) 2 + n elseif occursin('*', string(typeof(f).name.name::Symbol)) 2n else zero(n) end end ``` \### 3. implement as a `@pure`-like boolean annotation to request the nospecializeinfer effect on top of `@nospecialize` This is what this commit implements. It basically replaces the previous `@noinline` flag with a newly introduced annotation named `@nospecializeinfer`. It is still associated with `@nospecialize` and only has an effect when used together with `@nospecialize`, but it is no longer tied to `@noinline`, which helps us reason about the behavior of `@nospecializeinfer` and experiment with its effect more safely: ```julia \# the compiler will always infer with `f::Any` Base.@nospecializeinfer function twof(@nospecialize(f), n) # the compiler may or may not inline this method if occursin('+', string(typeof(f).name.name::Symbol)) 2 + n elseif occursin('*', string(typeof(f).name.name::Symbol)) 2n else zero(n) end end ``` \### 4. implement as an annotation that is orthogonal to `@nospecialize` Actually, we can have `@nospecialize` and `@nospecializeinfer` separately, which would allow us to configure compilation strategies in a more fine-grained way. ```julia function noinfspec(Base.@nospecializeinfer(f), @nospecialize(g)) ... end ``` I'm fine with this approach, but at the same time I'm afraid of having too many related annotations (I expect we would need to annotate both `@nospecializeinfer` and `@nospecialize` in this scheme). Co-authored-by: Mosè Giordano Co-authored-by: Tim Holy --- base/compiler/abstractinterpretation.jl | 4 + base/compiler/utilities.jl | 19 +++- base/essentials.jl | 3 +- base/expr.jl | 39 +++++++- doc/src/base/base.md | 2 + src/ast.c | 2 + src/gf.c | 9 +- src/ircode.c | 10 +- src/jltypes.c | 14 ++- src/julia.h | 2 + src/julia_internal.h | 2 + src/method.c | 5 + stdlib/Serialization/src/Serialization.jl | 11 ++- test/compiler/inference.jl | 107 ++++++++++++++++++++-- test/compiler/irutils.jl | 18 +++- 15 files changed, 216 insertions(+), 31 deletions(-) diff --git a/base/compiler/abstractinterpretation.jl b/base/compiler/abstractinterpretation.jl index 0f2011fd07c3c..097bd56d913ce 100644 --- a/base/compiler/abstractinterpretation.jl +++ b/base/compiler/abstractinterpretation.jl @@ -521,6 +521,10 @@ function abstract_call_method(interp::AbstractInterpreter, sigtuple = unwrap_unionall(sig) sigtuple isa DataType || return MethodCallResult(Any, false, false, nothing, Effects()) + if is_nospecializeinfer(method) + sig = get_nospecializeinfer_sig(method, sig, sparams) + end + # Limit argument type tuple growth of functions: # look through the parents list to see if there's a call to the same method # and from the same method.
diff --git a/base/compiler/utilities.jl b/base/compiler/utilities.jl index 836c370b98bd4..cb5f916e76914 100644 --- a/base/compiler/utilities.jl +++ b/base/compiler/utilities.jl @@ -107,6 +107,10 @@ function is_inlineable_constant(@nospecialize(x)) return count_const_size(x) <= MAX_INLINE_CONST_SIZE end +is_nospecialized(method::Method) = method.nospecialize ≠ 0 + +is_nospecializeinfer(method::Method) = method.nospecializeinfer && is_nospecialized(method) + ########################### # MethodInstance/CodeInfo # ########################### @@ -154,8 +158,16 @@ function get_compileable_sig(method::Method, @nospecialize(atype), sparams::Simp isa(atype, DataType) || return nothing mt = ccall(:jl_method_get_table, Any, (Any,), method) mt === nothing && return nothing - return ccall(:jl_normalize_to_compilable_sig, Any, (Any, Any, Any, Any), - mt, atype, sparams, method) + return ccall(:jl_normalize_to_compilable_sig, Any, (Any, Any, Any, Any, Cint), + mt, atype, sparams, method, #=int return_if_compileable=#1) +end + +function get_nospecializeinfer_sig(method::Method, @nospecialize(atype), sparams::SimpleVector) + isa(atype, DataType) || return method.sig + mt = ccall(:jl_method_table_for, Any, (Any,), atype) + mt === nothing && return method.sig + return ccall(:jl_normalize_to_compilable_sig, Any, (Any, Any, Any, Any, Cint), + mt, atype, sparams, method, #=int return_if_compileable=#0) end isa_compileable_sig(@nospecialize(atype), sparams::SimpleVector, method::Method) = @@ -203,6 +215,9 @@ function specialize_method(method::Method, @nospecialize(atype), sparams::Simple if isa(atype, UnionAll) atype, sparams = normalize_typevars(method, atype, sparams) end + if is_nospecializeinfer(method) + atype = get_nospecializeinfer_sig(method, atype, sparams) + end if preexisting # check cached specializations # for an existing result stored there diff --git a/base/essentials.jl b/base/essentials.jl index e2035601f4fb5..63e209331b6f0 100644 --- a/base/essentials.jl +++ b/base/essentials.jl @@ -85,7 +85,8 @@ f(y) = [x for x in y] !!! note `@nospecialize` affects code generation but not inference: it limits the diversity of the resulting native code, but it does not impose any limitations (beyond the - standard ones) on type-inference. + standard ones) on type-inference. Use [`Base.@nospecializeinfer`](@ref) together with + `@nospecialize` to additionally suppress inference. # Example diff --git a/base/expr.jl b/base/expr.jl index e45684f95a34f..5952904b3d17b 100644 --- a/base/expr.jl +++ b/base/expr.jl @@ -342,7 +342,6 @@ macro noinline(x) return annotate_meta_def_or_block(x, :noinline) end - """ @constprop setting [ex] @@ -763,6 +762,44 @@ function compute_assumed_setting(@nospecialize(setting), val::Bool=true) end end +""" + Base.@nospecializeinfer function f(args...) + @nospecialize ... + ... + end + Base.@nospecializeinfer f(@nospecialize args...) = ... + +Tells the compiler to infer `f` using the declared types of `@nospecialize`d arguments. +This can be used to limit the number of compiler-generated specializations during inference. 
+ +# Example + +```julia +julia> f(A::AbstractArray) = g(A) +f (generic function with 1 method) + +julia> @noinline Base.@nospecializeinfer g(@nospecialize(A::AbstractArray)) = A[1] +g (generic function with 1 method) + +julia> @code_typed f([1.0]) +CodeInfo( +1 ─ %1 = invoke Main.g(_2::AbstractArray)::Any +└── return %1 +) => Any +``` + +In this example, `f` will be inferred for each specific type of `A`, +but `g` will only be inferred once with the declared argument type `A::AbstractArray`, +meaning that the compiler will not likely see the excessive inference time on it +while it can not infer the concrete return type of it. +Without the `@nospecializeinfer`, `f([1.0])` would infer the return type of `g` as `Float64`, +indicating that inference ran for `g(::Vector{Float64})` despite the prohibition on +specialized code generation. +""" +macro nospecializeinfer(ex) + esc(isa(ex, Expr) ? pushmeta!(ex, :nospecializeinfer) : ex) +end + """ @propagate_inbounds diff --git a/doc/src/base/base.md b/doc/src/base/base.md index 7e45e2176478d..5556578bcc245 100644 --- a/doc/src/base/base.md +++ b/doc/src/base/base.md @@ -285,6 +285,8 @@ Base.@inline Base.@noinline Base.@nospecialize Base.@specialize +Base.@nospecializeinfer +Base.@constprop Base.gensym Base.@gensym var"name" diff --git a/src/ast.c b/src/ast.c index 97bbc6e8227ba..9da3cd6dfe995 100644 --- a/src/ast.c +++ b/src/ast.c @@ -83,6 +83,7 @@ JL_DLLEXPORT jl_sym_t *jl_aggressive_constprop_sym; JL_DLLEXPORT jl_sym_t *jl_no_constprop_sym; JL_DLLEXPORT jl_sym_t *jl_purity_sym; JL_DLLEXPORT jl_sym_t *jl_nospecialize_sym; +JL_DLLEXPORT jl_sym_t *jl_nospecializeinfer_sym; JL_DLLEXPORT jl_sym_t *jl_macrocall_sym; JL_DLLEXPORT jl_sym_t *jl_colon_sym; JL_DLLEXPORT jl_sym_t *jl_hygienicscope_sym; @@ -342,6 +343,7 @@ void jl_init_common_symbols(void) jl_isdefined_sym = jl_symbol("isdefined"); jl_nospecialize_sym = jl_symbol("nospecialize"); jl_specialize_sym = jl_symbol("specialize"); + jl_nospecializeinfer_sym = jl_symbol("nospecializeinfer"); jl_optlevel_sym = jl_symbol("optlevel"); jl_compile_sym = jl_symbol("compile"); jl_force_compile_sym = jl_symbol("force_compile"); diff --git a/src/gf.c b/src/gf.c index 6d55e479babfe..35bea787f5355 100644 --- a/src/gf.c +++ b/src/gf.c @@ -2565,7 +2565,8 @@ JL_DLLEXPORT int32_t jl_invoke_api(jl_code_instance_t *codeinst) return -1; } -JL_DLLEXPORT jl_value_t *jl_normalize_to_compilable_sig(jl_methtable_t *mt, jl_tupletype_t *ti, jl_svec_t *env, jl_method_t *m) +JL_DLLEXPORT jl_value_t *jl_normalize_to_compilable_sig(jl_methtable_t *mt, jl_tupletype_t *ti, jl_svec_t *env, jl_method_t *m, + int return_if_compileable) { jl_tupletype_t *tt = NULL; jl_svec_t *newparams = NULL; @@ -2589,7 +2590,7 @@ JL_DLLEXPORT jl_value_t *jl_normalize_to_compilable_sig(jl_methtable_t *mt, jl_t if (!is_compileable) is_compileable = jl_isa_compileable_sig(tt, env, m); JL_GC_POP(); - return is_compileable ? (jl_value_t*)tt : jl_nothing; + return (!return_if_compileable || is_compileable) ? 
(jl_value_t*)tt : jl_nothing; } jl_method_instance_t *jl_normalize_to_compilable_mi(jl_method_instance_t *mi JL_PROPAGATES_ROOT) @@ -2600,7 +2601,7 @@ jl_method_instance_t *jl_normalize_to_compilable_mi(jl_method_instance_t *mi JL_ jl_methtable_t *mt = jl_method_get_table(def); if ((jl_value_t*)mt == jl_nothing) return mi; - jl_value_t *compilationsig = jl_normalize_to_compilable_sig(mt, (jl_datatype_t*)mi->specTypes, mi->sparam_vals, def); + jl_value_t *compilationsig = jl_normalize_to_compilable_sig(mt, (jl_datatype_t*)mi->specTypes, mi->sparam_vals, def, 1); if (compilationsig == jl_nothing || jl_egal(compilationsig, mi->specTypes)) return mi; jl_svec_t *env = NULL; @@ -2633,7 +2634,7 @@ jl_method_instance_t *jl_method_match_to_mi(jl_method_match_t *match, size_t wor JL_UNLOCK(&mt->writelock); } else { - jl_value_t *tt = jl_normalize_to_compilable_sig(mt, ti, env, m); + jl_value_t *tt = jl_normalize_to_compilable_sig(mt, ti, env, m, 1); if (tt != jl_nothing) { JL_GC_PUSH2(&tt, &env); if (!jl_egal(tt, (jl_value_t*)ti)) { diff --git a/src/ircode.c b/src/ircode.c index 4121d6691aa5b..bc5cc61e7f892 100644 --- a/src/ircode.c +++ b/src/ircode.c @@ -434,13 +434,14 @@ static void jl_encode_value_(jl_ircode_state *s, jl_value_t *v, int as_literal) } } -static jl_code_info_flags_t code_info_flags(uint8_t inferred, uint8_t propagate_inbounds, - uint8_t has_fcall, uint8_t inlining, uint8_t constprop) +static jl_code_info_flags_t code_info_flags(uint8_t inferred, uint8_t propagate_inbounds, uint8_t has_fcall, + uint8_t nospecializeinfer, uint8_t inlining, uint8_t constprop) { jl_code_info_flags_t flags; flags.bits.inferred = inferred; flags.bits.propagate_inbounds = propagate_inbounds; flags.bits.has_fcall = has_fcall; + flags.bits.nospecializeinfer = nospecializeinfer; flags.bits.inlining = inlining; flags.bits.constprop = constprop; return flags; @@ -785,8 +786,8 @@ JL_DLLEXPORT jl_string_t *jl_compress_ir(jl_method_t *m, jl_code_info_t *code) 1 }; - jl_code_info_flags_t flags = code_info_flags(code->inferred, code->propagate_inbounds, - code->has_fcall, code->inlining, code->constprop); + jl_code_info_flags_t flags = code_info_flags(code->inferred, code->propagate_inbounds, code->has_fcall, + code->nospecializeinfer, code->inlining, code->constprop); write_uint8(s.s, flags.packed); write_uint8(s.s, code->purity.bits); write_uint16(s.s, code->inlining_cost); @@ -885,6 +886,7 @@ JL_DLLEXPORT jl_code_info_t *jl_uncompress_ir(jl_method_t *m, jl_code_instance_t code->inferred = flags.bits.inferred; code->propagate_inbounds = flags.bits.propagate_inbounds; code->has_fcall = flags.bits.has_fcall; + code->nospecializeinfer = flags.bits.nospecializeinfer; code->purity.bits = read_uint8(s.s); code->inlining_cost = read_uint16(s.s); diff --git a/src/jltypes.c b/src/jltypes.c index 1a30df637a706..810e1b954633d 100644 --- a/src/jltypes.c +++ b/src/jltypes.c @@ -2903,7 +2903,7 @@ void jl_init_types(void) JL_GC_DISABLED jl_code_info_type = jl_new_datatype(jl_symbol("CodeInfo"), core, jl_any_type, jl_emptysvec, - jl_perm_symsvec(21, + jl_perm_symsvec(22, "code", "codelocs", "ssavaluetypes", @@ -2921,11 +2921,12 @@ void jl_init_types(void) JL_GC_DISABLED "inferred", "propagate_inbounds", "has_fcall", + "nospecializeinfer", "inlining", "constprop", "purity", "inlining_cost"), - jl_svec(21, + jl_svec(22, jl_array_any_type, jl_array_int32_type, jl_any_type, @@ -2943,17 +2944,18 @@ void jl_init_types(void) JL_GC_DISABLED jl_bool_type, jl_bool_type, jl_bool_type, + jl_bool_type, jl_uint8_type, jl_uint8_type, 
jl_uint8_type, jl_uint16_type), jl_emptysvec, - 0, 1, 20); + 0, 1, 22); jl_method_type = jl_new_datatype(jl_symbol("Method"), core, jl_any_type, jl_emptysvec, - jl_perm_symsvec(29, + jl_perm_symsvec(30, "name", "module", "file", @@ -2980,10 +2982,11 @@ void jl_init_types(void) JL_GC_DISABLED "nkw", "isva", "is_for_opaque_closure", + "nospecializeinfer", "constprop", "max_varargs", "purity"), - jl_svec(29, + jl_svec(30, jl_symbol_type, jl_module_type, jl_symbol_type, @@ -3010,6 +3013,7 @@ void jl_init_types(void) JL_GC_DISABLED jl_int32_type, jl_bool_type, jl_bool_type, + jl_bool_type, jl_uint8_type, jl_uint8_type, jl_uint8_type), diff --git a/src/julia.h b/src/julia.h index 286bef615c92d..d214509c7d0b6 100644 --- a/src/julia.h +++ b/src/julia.h @@ -302,6 +302,7 @@ typedef struct _jl_code_info_t { uint8_t inferred; uint8_t propagate_inbounds; uint8_t has_fcall; + uint8_t nospecializeinfer; // uint8 settings uint8_t inlining; // 0 = default; 1 = @inline; 2 = @noinline uint8_t constprop; // 0 = use heuristic; 1 = aggressive; 2 = none @@ -359,6 +360,7 @@ typedef struct _jl_method_t { // various boolean properties uint8_t isva; uint8_t is_for_opaque_closure; + uint8_t nospecializeinfer; // uint8 settings uint8_t constprop; // 0x00 = use heuristic; 0x01 = aggressive; 0x02 = none uint8_t max_varargs; // 0xFF = use heuristic; otherwise, max # of args to expand diff --git a/src/julia_internal.h b/src/julia_internal.h index 49f0b19ec4209..1dcf40b3d920b 100644 --- a/src/julia_internal.h +++ b/src/julia_internal.h @@ -607,6 +607,7 @@ typedef struct { uint8_t inferred:1; uint8_t propagate_inbounds:1; uint8_t has_fcall:1; + uint8_t nospecializeinfer:1; uint8_t inlining:2; // 0 = use heuristic; 1 = aggressive; 2 = none uint8_t constprop:2; // 0 = use heuristic; 1 = aggressive; 2 = none } jl_code_info_flags_bitfield_t; @@ -1552,6 +1553,7 @@ extern JL_DLLEXPORT jl_sym_t *jl_aggressive_constprop_sym; extern JL_DLLEXPORT jl_sym_t *jl_no_constprop_sym; extern JL_DLLEXPORT jl_sym_t *jl_purity_sym; extern JL_DLLEXPORT jl_sym_t *jl_nospecialize_sym; +extern JL_DLLEXPORT jl_sym_t *jl_nospecializeinfer_sym; extern JL_DLLEXPORT jl_sym_t *jl_macrocall_sym; extern JL_DLLEXPORT jl_sym_t *jl_colon_sym; extern JL_DLLEXPORT jl_sym_t *jl_hygienicscope_sym; diff --git a/src/method.c b/src/method.c index c207149032fb9..9583ead272dca 100644 --- a/src/method.c +++ b/src/method.c @@ -321,6 +321,8 @@ static void jl_code_info_set_ir(jl_code_info_t *li, jl_expr_t *ir) li->inlining = 2; else if (ma == (jl_value_t*)jl_propagate_inbounds_sym) li->propagate_inbounds = 1; + else if (ma == (jl_value_t*)jl_nospecializeinfer_sym) + li->nospecializeinfer = 1; else if (ma == (jl_value_t*)jl_aggressive_constprop_sym) li->constprop = 1; else if (ma == (jl_value_t*)jl_no_constprop_sym) @@ -477,6 +479,7 @@ JL_DLLEXPORT jl_code_info_t *jl_new_code_info_uninit(void) src->inferred = 0; src->propagate_inbounds = 0; src->has_fcall = 0; + src->nospecializeinfer = 0; src->edges = jl_nothing; src->constprop = 0; src->inlining = 0; @@ -682,6 +685,7 @@ static void jl_method_set_source(jl_method_t *m, jl_code_info_t *src) } } m->called = called; + m->nospecializeinfer = src->nospecializeinfer; m->constprop = src->constprop; m->purity.bits = src->purity.bits; jl_add_function_to_lineinfo(src, (jl_value_t*)m->name); @@ -811,6 +815,7 @@ JL_DLLEXPORT jl_method_t *jl_new_method_uninit(jl_module_t *module) m->primary_world = 1; m->deleted_world = ~(size_t)0; m->is_for_opaque_closure = 0; + m->nospecializeinfer = 0; m->constprop = 0; m->purity.bits = 0; 
m->max_varargs = UINT8_MAX; diff --git a/stdlib/Serialization/src/Serialization.jl b/stdlib/Serialization/src/Serialization.jl index dd901d6910abf..7c1043f33bdfe 100644 --- a/stdlib/Serialization/src/Serialization.jl +++ b/stdlib/Serialization/src/Serialization.jl @@ -80,7 +80,7 @@ const TAGS = Any[ const NTAGS = length(TAGS) @assert NTAGS == 255 -const ser_version = 23 # do not make changes without bumping the version #! +const ser_version = 24 # do not make changes without bumping the version #! format_version(::AbstractSerializer) = ser_version format_version(s::Serializer) = s.version @@ -418,6 +418,7 @@ function serialize(s::AbstractSerializer, meth::Method) serialize(s, meth.nargs) serialize(s, meth.isva) serialize(s, meth.is_for_opaque_closure) + serialize(s, meth.nospecializeinfer) serialize(s, meth.constprop) serialize(s, meth.purity) if isdefined(meth, :source) @@ -1026,10 +1027,14 @@ function deserialize(s::AbstractSerializer, ::Type{Method}) nargs = deserialize(s)::Int32 isva = deserialize(s)::Bool is_for_opaque_closure = false + nospecializeinfer = false constprop = purity = 0x00 template_or_is_opaque = deserialize(s) if isa(template_or_is_opaque, Bool) is_for_opaque_closure = template_or_is_opaque + if format_version(s) >= 24 + nospecializeinfer = deserialize(s)::Bool + end if format_version(s) >= 14 constprop = deserialize(s)::UInt8 end @@ -1054,6 +1059,7 @@ function deserialize(s::AbstractSerializer, ::Type{Method}) meth.nargs = nargs meth.isva = isva meth.is_for_opaque_closure = is_for_opaque_closure + meth.nospecializeinfer = nospecializeinfer meth.constprop = constprop meth.purity = purity if template !== nothing @@ -1195,6 +1201,9 @@ function deserialize(s::AbstractSerializer, ::Type{CodeInfo}) if format_version(s) >= 20 ci.has_fcall = deserialize(s) end + if format_version(s) >= 24 + ci.nospecializeinfer = deserialize(s)::Bool + end if format_version(s) >= 21 ci.inlining = deserialize(s)::UInt8 end diff --git a/test/compiler/inference.jl b/test/compiler/inference.jl index 5987e10401bc8..385315d614de2 100644 --- a/test/compiler/inference.jl +++ b/test/compiler/inference.jl @@ -1167,25 +1167,18 @@ let typeargs = Tuple{Type{Int},Type{Int},Type{Int},Type{Int},Type{Int},Type{Int} @test only(Base.return_types(promote_type, typeargs)) === Type{Int} end -function count_specializations(method::Method) - specs = method.specializations - specs isa Core.MethodInstance && return 1 - n = count(!isnothing, specs::Core.SimpleVector) - return n -end - # demonstrate that inference can complete without waiting for MAX_TYPE_DEPTH copy_dims_out(out) = () copy_dims_out(out, dim::Int, tail...) = copy_dims_out((out..., dim), tail...) copy_dims_out(out, dim::Colon, tail...) = copy_dims_out((out..., dim), tail...) @test Base.return_types(copy_dims_out, (Tuple{}, Vararg{Union{Int,Colon}})) == Any[Tuple{}, Tuple{}, Tuple{}] -@test all(m -> 4 < count_specializations(m) < 15, methods(copy_dims_out)) # currently about 5 +@test all(m -> 4 < length(Base.specializations(m)) < 15, methods(copy_dims_out)) # currently about 5 copy_dims_pair(out) = () copy_dims_pair(out, dim::Int, tail...) = copy_dims_pair(out => dim, tail...) copy_dims_pair(out, dim::Colon, tail...) = copy_dims_pair(out => dim, tail...) 
@test Base.return_types(copy_dims_pair, (Tuple{}, Vararg{Union{Int,Colon}})) == Any[Tuple{}, Tuple{}, Tuple{}] -@test all(m -> 3 < count_specializations(m) < 15, methods(copy_dims_pair)) # currently about 5 +@test all(m -> 3 < length(Base.specializations(m)) < 15, methods(copy_dims_pair)) # currently about 5 # splatting an ::Any should still allow inference to use types of parameters preceding it f22364(::Int, ::Any...) = 0 @@ -4160,6 +4153,102 @@ Base.getproperty(x::Interface41024Extended, sym::Symbol) = x.x end |> only === Int +function call_func_itr(func, itr) + local r = 0 + r += func(itr[1]) + r += func(itr[2]) + r += func(itr[3]) + r += func(itr[4]) + r += func(itr[5]) + r +end + +global inline_checker = c -> c # untyped global, a call of this func will prevent inlining +# if `f` is inlined, `GlobalRef(m, :inline_checker)` should appear within the body of `invokef` +function is_inline_checker(@nospecialize stmt) + isa(stmt, GlobalRef) && stmt.name === :inline_checker +end + +function func_nospecialized(@nospecialize a) + c = isa(a, Function) + inline_checker(c) # dynamic dispatch, preventing inlining +end + +@inline function func_nospecialized_inline(@nospecialize a) + c = isa(a, Function) + inline_checker(c) # dynamic dispatch, preventing inlining (but forced by the annotation) +end + +Base.@nospecializeinfer function func_nospecializeinfer(@nospecialize a) + c = isa(a, Function) + inline_checker(c) # dynamic dispatch, preventing inlining +end + +Base.@nospecializeinfer @inline function func_nospecializeinfer_inline(@nospecialize a) + c = isa(a, Function) + inline_checker(c) # dynamic dispatch, preventing inlining (but forced by the annotation) +end + +Base.@nospecializeinfer Base.@constprop :aggressive function func_nospecializeinfer_constprop(c::Bool, @nospecialize a) + if c + return inline_checker(a) # dynamic dispatch, preventing inlining/constprop (but forced by the annotation) + end + return false +end +Base.@nospecializeinfer func_nospecializeinfer_constprop(@nospecialize a) = func_nospecializeinfer_constprop(false, a) + +itr_dispatchonly = Any[sin, muladd, "foo", nothing, missing] # untyped container can cause excessive runtime dispatch +itr_withinfernce = tuple(sin, muladd, "foo", nothing, missing) # typed container can cause excessive inference + +@testset "compilation annotations" begin + @testset "@nospecialize" begin + # `@nospecialize` should suppress runtime dispatches of `nospecialize` + @test call_func_itr(func_nospecialized, itr_dispatchonly) == 2 + @test length(Base.specializations(only(methods((func_nospecialized))))) == 1 + # `@nospecialize` should allow inference to happen + @test call_func_itr(func_nospecialized, itr_withinfernce) == 2 + @test length(Base.specializations(only(methods((func_nospecialized))))) == 6 + @test count(is_inline_checker, @get_code call_func_itr(func_nospecialized, itr_dispatchonly)) == 0 + + # `@nospecialize` should allow inlinining + @test call_func_itr(func_nospecialized_inline, itr_dispatchonly) == 2 + @test length(Base.specializations(only(methods((func_nospecialized_inline))))) == 1 + @test call_func_itr(func_nospecialized_inline, itr_withinfernce) == 2 + @test length(Base.specializations(only(methods((func_nospecialized_inline))))) == 6 + @test count(is_inline_checker, @get_code call_func_itr(func_nospecialized_inline, itr_dispatchonly)) == 5 + end + + @testset "@nospecializeinfer" begin + # `@nospecialize` should suppress runtime dispatches of `nospecialize` + @test call_func_itr(func_nospecializeinfer, itr_dispatchonly) == 2 + 
@test length(Base.specializations(only(methods((func_nospecializeinfer))))) == 1 + # `@nospecializeinfer` suppresses inference also + @test call_func_itr(func_nospecializeinfer, itr_withinfernce) == 2 + @test length(Base.specializations(only(methods((func_nospecializeinfer))))) == 1 + @test !any(is_inline_checker, @get_code call_func_itr(func_nospecializeinfer, itr_dispatchonly)) + + # `@nospecializeinfer` should allow inlinining + @test call_func_itr(func_nospecializeinfer_inline, itr_dispatchonly) == 2 + @test length(Base.specializations(only(methods((func_nospecializeinfer_inline))))) == 1 + @test call_func_itr(func_nospecializeinfer_inline, itr_withinfernce) == 2 + @test length(Base.specializations(only(methods((func_nospecializeinfer_inline))))) == 1 + @test any(is_inline_checker, @get_code call_func_itr(func_nospecializeinfer_inline, itr_dispatchonly)) + + # `@nospecializeinfer` should allow constprop + @test Base.return_types((Any,)) do x + Val(func_nospecializeinfer_constprop(x)) + end |> only == Val{false} + @test call_func_itr(func_nospecializeinfer_constprop, itr_dispatchonly) == 0 + for m = methods(func_nospecializeinfer_constprop) + @test length(Base.specializations(m)) == 1 + end + @test call_func_itr(func_nospecializeinfer_constprop, itr_withinfernce) == 0 + for m = methods(func_nospecializeinfer_constprop) + @test length(Base.specializations(m)) == 1 + end + end +end + @testset "fieldtype for unions" begin # e.g. issue #40177 f40177(::Type{T}) where {T} = fieldtype(T, 1) for T in [ diff --git a/test/compiler/irutils.jl b/test/compiler/irutils.jl index 95ac0d555ef88..00de9b2472de4 100644 --- a/test/compiler/irutils.jl +++ b/test/compiler/irutils.jl @@ -1,10 +1,17 @@ -import Core: CodeInfo, ReturnNode, MethodInstance -import Core.Compiler: IRCode, IncrementalCompact, VarState, argextype, singleton_type -import Base.Meta: isexpr +using Core: CodeInfo, ReturnNode, MethodInstance +using Core.Compiler: IRCode, IncrementalCompact, singleton_type, VarState +using Base.Meta: isexpr +using InteractiveUtils: gen_call_with_extracted_types_and_kwargs -argextype(@nospecialize args...) = argextype(args..., VarState[]) +argextype(@nospecialize args...) = Core.Compiler.argextype(args..., VarState[]) code_typed1(args...; kwargs...) = first(only(code_typed(args...; kwargs...)))::CodeInfo +macro code_typed1(ex0...) + return gen_call_with_extracted_types_and_kwargs(__module__, :code_typed1, ex0) +end get_code(args...; kwargs...) = code_typed1(args...; kwargs...).code +macro get_code(ex0...) + return gen_call_with_extracted_types_and_kwargs(__module__, :get_code, ex0) +end # check if `x` is a statement with a given `head` isnew(@nospecialize x) = isexpr(x, :new) @@ -45,3 +52,6 @@ function fully_eliminated(@nospecialize args...; retval=(@__FILE__), kwargs...) return length(code) == 1 && isreturn(code[1]) end end +macro fully_eliminated(ex0...) 
+ return gen_call_with_extracted_types_and_kwargs(__module__, :fully_eliminated, ex0) +end From 1dc2ed644597ad5e8c8cf61ec7b7735155028fd1 Mon Sep 17 00:00:00 2001 From: Shuhei Kadowaki Date: Wed, 12 Apr 2023 19:19:35 +0900 Subject: [PATCH 2/2] experiment `@nospecializeinfer` on `Core.Compiler` This commit adds `@nospecializeinfer` macro on various `Core.Compiler` functions and achieves the following sysimage size reduction: | | this commit | master | % | | --------------------------------- | ----------- | ----------- | ------- | | `Core.Compiler` compilation (sec) | `66.4551` | `71.0846` | `0.935` | | `corecompiler.jl` (KB) | `17638080` | `18407248` | `0.958` | | `sys.jl` (KB) | `88736432` | `89361280` | `0.993` | | `sys-o.a` (KB) | `189484400` | `189907096` | `0.998` | --- base/compiler/abstractinterpretation.jl | 30 +++++------ base/compiler/abstractlattice.jl | 50 +++++++++---------- base/compiler/typelattice.jl | 66 ++++++++++++------------- base/compiler/typelimits.jl | 23 +++++---- base/compiler/utilities.jl | 4 +- 5 files changed, 86 insertions(+), 87 deletions(-) diff --git a/base/compiler/abstractinterpretation.jl b/base/compiler/abstractinterpretation.jl index 097bd56d913ce..35ffcac8f4279 100644 --- a/base/compiler/abstractinterpretation.jl +++ b/base/compiler/abstractinterpretation.jl @@ -2645,18 +2645,18 @@ struct BestguessInfo{Interp<:AbstractInterpreter} end end -function widenreturn(@nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn(@nospecialize(rt), info::BestguessInfo) return widenreturn(typeinf_lattice(info.interp), rt, info) end -function widenreturn(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo) return widenreturn(widenlattice(𝕃ᵢ), rt, info) end -function widenreturn_noslotwrapper(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn_noslotwrapper(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo) return widenreturn_noslotwrapper(widenlattice(𝕃ᵢ), rt, info) end -function widenreturn(𝕃ᵢ::MustAliasesLattice, @nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn(𝕃ᵢ::MustAliasesLattice, @nospecialize(rt), info::BestguessInfo) if isa(rt, MustAlias) if 1 ≤ rt.slot ≤ info.nargs rt = InterMustAlias(rt) @@ -2668,7 +2668,7 @@ function widenreturn(𝕃ᵢ::MustAliasesLattice, @nospecialize(rt), info::Bestg return widenreturn(widenlattice(𝕃ᵢ), rt, info) end -function widenreturn(𝕃ᵢ::ConditionalsLattice, @nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn(𝕃ᵢ::ConditionalsLattice, @nospecialize(rt), info::BestguessInfo) ⊑ᵢ = ⊑(𝕃ᵢ) if !(⊑(ipo_lattice(info.interp), info.bestguess, Bool)) || info.bestguess === Bool # give up inter-procedural constraint back-propagation @@ -2705,7 +2705,7 @@ function widenreturn(𝕃ᵢ::ConditionalsLattice, @nospecialize(rt), info::Best isa(rt, InterConditional) && return rt return widenreturn(widenlattice(𝕃ᵢ), rt, info) end -function bool_rt_to_conditional(@nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function bool_rt_to_conditional(@nospecialize(rt), info::BestguessInfo) bestguess = info.bestguess if isa(bestguess, InterConditional) # if the bestguess so far is already `Conditional`, try to convert @@ -2723,7 +2723,7 @@ function bool_rt_to_conditional(@nospecialize(rt), info::BestguessInfo) end return rt end -function bool_rt_to_conditional(@nospecialize(rt), slot_id::Int, 
info::BestguessInfo) +@nospecializeinfer function bool_rt_to_conditional(@nospecialize(rt), slot_id::Int, info::BestguessInfo) ⊑ᵢ = ⊑(typeinf_lattice(info.interp)) old = info.slottypes[slot_id] new = widenslotwrapper(info.changes[slot_id].typ) # avoid nested conditional @@ -2742,13 +2742,13 @@ function bool_rt_to_conditional(@nospecialize(rt), slot_id::Int, info::Bestguess return rt end -function widenreturn(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo) return widenreturn_partials(𝕃ᵢ, rt, info) end -function widenreturn_noslotwrapper(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn_noslotwrapper(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo) return widenreturn_partials(𝕃ᵢ, rt, info) end -function widenreturn_partials(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo) +@nospecializeinfer function widenreturn_partials(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo) if isa(rt, PartialStruct) fields = copy(rt.fields) local anyrefine = false @@ -2771,21 +2771,21 @@ function widenreturn_partials(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info: return widenreturn(widenlattice(𝕃ᵢ), rt, info) end -function widenreturn(::ConstsLattice, @nospecialize(rt), ::BestguessInfo) +@nospecializeinfer function widenreturn(::ConstsLattice, @nospecialize(rt), ::BestguessInfo) return widenreturn_consts(rt) end -function widenreturn_noslotwrapper(::ConstsLattice, @nospecialize(rt), ::BestguessInfo) +@nospecializeinfer function widenreturn_noslotwrapper(::ConstsLattice, @nospecialize(rt), ::BestguessInfo) return widenreturn_consts(rt) end -function widenreturn_consts(@nospecialize(rt)) +@nospecializeinfer function widenreturn_consts(@nospecialize(rt)) isa(rt, Const) && return rt return widenconst(rt) end -function widenreturn(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo) +@nospecializeinfer function widenreturn(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo) return widenconst(rt) end -function widenreturn_noslotwrapper(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo) +@nospecializeinfer function widenreturn_noslotwrapper(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo) return widenconst(rt) end diff --git a/base/compiler/abstractlattice.jl b/base/compiler/abstractlattice.jl index a84050816cb21..719b5fcf325e4 100644 --- a/base/compiler/abstractlattice.jl +++ b/base/compiler/abstractlattice.jl @@ -161,7 +161,7 @@ If `𝕃` is `JLTypeLattice`, this is equivalent to subtyping. """ function ⊑ end -⊑(::JLTypeLattice, @nospecialize(a::Type), @nospecialize(b::Type)) = a <: b +@nospecializeinfer ⊑(::JLTypeLattice, @nospecialize(a::Type), @nospecialize(b::Type)) = a <: b """ ⊏(𝕃::AbstractLattice, a, b) -> Bool @@ -169,7 +169,7 @@ function ⊑ end The strict partial order over the type inference lattice. This is defined as the irreflexive kernel of `⊑`. """ -⊏(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = ⊑(𝕃, a, b) && !⊑(𝕃, b, a) +@nospecializeinfer ⊏(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = ⊑(𝕃, a, b) && !⊑(𝕃, b, a) """ ⋤(𝕃::AbstractLattice, a, b) -> Bool @@ -177,7 +177,7 @@ This is defined as the irreflexive kernel of `⊑`. This order could be used as a slightly more efficient version of the strict order `⊏`, where we can safely assume `a ⊑ b` holds. 
""" -⋤(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = !⊑(𝕃, b, a) +@nospecializeinfer ⋤(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = !⊑(𝕃, b, a) """ is_lattice_equal(𝕃::AbstractLattice, a, b) -> Bool @@ -186,7 +186,7 @@ Check if two lattice elements are partial order equivalent. This is basically `a ⊑ b && b ⊑ a` in the lattice of `𝕃` but (optionally) with extra performance optimizations. """ -function is_lattice_equal(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) +@nospecializeinfer function is_lattice_equal(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) a === b && return true return ⊑(𝕃, a, b) && ⊑(𝕃, b, a) end @@ -197,14 +197,14 @@ end Determines whether the given lattice element `t` of `𝕃` has non-trivial extended lattice information that would not be available from the type itself. """ -has_nontrivial_extended_info(𝕃::AbstractLattice, @nospecialize t) = +@nospecializeinfer has_nontrivial_extended_info(𝕃::AbstractLattice, @nospecialize t) = has_nontrivial_extended_info(widenlattice(𝕃), t) -function has_nontrivial_extended_info(𝕃::PartialsLattice, @nospecialize t) +@nospecializeinfer function has_nontrivial_extended_info(𝕃::PartialsLattice, @nospecialize t) isa(t, PartialStruct) && return true isa(t, PartialOpaque) && return true return has_nontrivial_extended_info(widenlattice(𝕃), t) end -function has_nontrivial_extended_info(𝕃::ConstsLattice, @nospecialize t) +@nospecializeinfer function has_nontrivial_extended_info(𝕃::ConstsLattice, @nospecialize t) isa(t, PartialTypeVar) && return true if isa(t, Const) val = t.val @@ -212,7 +212,7 @@ function has_nontrivial_extended_info(𝕃::ConstsLattice, @nospecialize t) end return has_nontrivial_extended_info(widenlattice(𝕃), t) end -has_nontrivial_extended_info(::JLTypeLattice, @nospecialize(t)) = false +@nospecializeinfer has_nontrivial_extended_info(::JLTypeLattice, @nospecialize(t)) = false """ is_const_prop_profitable_arg(𝕃::AbstractLattice, t) -> Bool @@ -220,9 +220,9 @@ has_nontrivial_extended_info(::JLTypeLattice, @nospecialize(t)) = false Determines whether the given lattice element `t` of `𝕃` has new extended lattice information that should be forwarded along with constant propagation. 
""" -is_const_prop_profitable_arg(𝕃::AbstractLattice, @nospecialize t) = +@nospecializeinfer is_const_prop_profitable_arg(𝕃::AbstractLattice, @nospecialize t) = is_const_prop_profitable_arg(widenlattice(𝕃), t) -function is_const_prop_profitable_arg(𝕃::PartialsLattice, @nospecialize t) +@nospecializeinfer function is_const_prop_profitable_arg(𝕃::PartialsLattice, @nospecialize t) if isa(t, PartialStruct) return true # might be a bit aggressive, may want to enable some check like follows: # for i = 1:length(t.fields) @@ -236,7 +236,7 @@ function is_const_prop_profitable_arg(𝕃::PartialsLattice, @nospecialize t) isa(t, PartialOpaque) && return true return is_const_prop_profitable_arg(widenlattice(𝕃), t) end -function is_const_prop_profitable_arg(𝕃::ConstsLattice, @nospecialize t) +@nospecializeinfer function is_const_prop_profitable_arg(𝕃::ConstsLattice, @nospecialize t) if isa(t, Const) # don't consider mutable values useful constants val = t.val @@ -245,24 +245,24 @@ function is_const_prop_profitable_arg(𝕃::ConstsLattice, @nospecialize t) isa(t, PartialTypeVar) && return false # this isn't forwardable return is_const_prop_profitable_arg(widenlattice(𝕃), t) end -is_const_prop_profitable_arg(::JLTypeLattice, @nospecialize t) = false +@nospecializeinfer is_const_prop_profitable_arg(::JLTypeLattice, @nospecialize t) = false -is_forwardable_argtype(𝕃::AbstractLattice, @nospecialize(x)) = +@nospecializeinfer is_forwardable_argtype(𝕃::AbstractLattice, @nospecialize(x)) = is_forwardable_argtype(widenlattice(𝕃), x) -function is_forwardable_argtype(𝕃::ConditionalsLattice, @nospecialize x) +@nospecializeinfer function is_forwardable_argtype(𝕃::ConditionalsLattice, @nospecialize x) isa(x, Conditional) && return true return is_forwardable_argtype(widenlattice(𝕃), x) end -function is_forwardable_argtype(𝕃::PartialsLattice, @nospecialize x) +@nospecializeinfer function is_forwardable_argtype(𝕃::PartialsLattice, @nospecialize x) isa(x, PartialStruct) && return true isa(x, PartialOpaque) && return true return is_forwardable_argtype(widenlattice(𝕃), x) end -function is_forwardable_argtype(𝕃::ConstsLattice, @nospecialize x) +@nospecializeinfer function is_forwardable_argtype(𝕃::ConstsLattice, @nospecialize x) isa(x, Const) && return true return is_forwardable_argtype(widenlattice(𝕃), x) end -function is_forwardable_argtype(::JLTypeLattice, @nospecialize x) +@nospecializeinfer function is_forwardable_argtype(::JLTypeLattice, @nospecialize x) return false end @@ -281,9 +281,9 @@ External lattice `𝕃ᵢ::ExternalLattice` may overload: """ function widenreturn end, function widenreturn_noslotwrapper end -is_valid_lattice(𝕃::AbstractLattice, @nospecialize(elem)) = +@nospecializeinfer is_valid_lattice(𝕃::AbstractLattice, @nospecialize(elem)) = is_valid_lattice_norec(𝕃, elem) && is_valid_lattice(widenlattice(𝕃), elem) -is_valid_lattice(𝕃::JLTypeLattice, @nospecialize(elem)) = is_valid_lattice_norec(𝕃, elem) +@nospecializeinfer is_valid_lattice(𝕃::JLTypeLattice, @nospecialize(elem)) = is_valid_lattice_norec(𝕃, elem) has_conditional(𝕃::AbstractLattice) = has_conditional(widenlattice(𝕃)) has_conditional(::AnyConditionalsLattice) = true @@ -306,12 +306,12 @@ has_extended_unionsplit(::JLTypeLattice) = false const fallback_lattice = InferenceLattice(BaseInferenceLattice.instance) const fallback_ipo_lattice = InferenceLattice(IPOResultLattice.instance) -⊑(@nospecialize(a), @nospecialize(b)) = ⊑(fallback_lattice, a, b) -tmeet(@nospecialize(a), @nospecialize(b)) = tmeet(fallback_lattice, a, b) -tmerge(@nospecialize(a), 
@nospecialize(b)) = tmerge(fallback_lattice, a, b) -⊏(@nospecialize(a), @nospecialize(b)) = ⊏(fallback_lattice, a, b) -⋤(@nospecialize(a), @nospecialize(b)) = ⋤(fallback_lattice, a, b) -is_lattice_equal(@nospecialize(a), @nospecialize(b)) = is_lattice_equal(fallback_lattice, a, b) +@nospecializeinfer @nospecialize(a) ⊑ @nospecialize(b) = ⊑(fallback_lattice, a, b) +@nospecializeinfer @nospecialize(a) ⊏ @nospecialize(b) = ⊏(fallback_lattice, a, b) +@nospecializeinfer @nospecialize(a) ⋤ @nospecialize(b) = ⋤(fallback_lattice, a, b) +@nospecializeinfer tmeet(@nospecialize(a), @nospecialize(b)) = tmeet(fallback_lattice, a, b) +@nospecializeinfer tmerge(@nospecialize(a), @nospecialize(b)) = tmerge(fallback_lattice, a, b) +@nospecializeinfer is_lattice_equal(@nospecialize(a), @nospecialize(b)) = is_lattice_equal(fallback_lattice, a, b) # Widenlattice with argument widenlattice(::JLTypeLattice, @nospecialize(t)) = widenconst(t) diff --git a/base/compiler/typelattice.jl b/base/compiler/typelattice.jl index 700a6d333cbc4..75071d2a8a2e0 100644 --- a/base/compiler/typelattice.jl +++ b/base/compiler/typelattice.jl @@ -244,7 +244,7 @@ const CompilerTypes = Union{MaybeUndef, Const, Conditional, MustAlias, NotFound, # slot wrappers # ============= -function assert_nested_slotwrapper(@nospecialize t) +@nospecializeinfer function assert_nested_slotwrapper(@nospecialize t) @assert !(t isa Conditional) "found nested Conditional" @assert !(t isa InterConditional) "found nested InterConditional" @assert !(t isa MustAlias) "found nested MustAlias" @@ -252,7 +252,7 @@ function assert_nested_slotwrapper(@nospecialize t) return t end -function widenslotwrapper(@nospecialize typ) +@nospecializeinfer function widenslotwrapper(@nospecialize typ) if isa(typ, AnyConditional) return widenconditional(typ) elseif isa(typ, AnyMustAlias) @@ -261,7 +261,7 @@ function widenslotwrapper(@nospecialize typ) return typ end -function widenwrappedslotwrapper(@nospecialize typ) +@nospecializeinfer function widenwrappedslotwrapper(@nospecialize typ) if isa(typ, LimitedAccuracy) return LimitedAccuracy(widenslotwrapper(typ.typ), typ.causes) end @@ -271,7 +271,7 @@ end # Conditional # =========== -function widenconditional(@nospecialize typ) +@nospecializeinfer function widenconditional(@nospecialize typ) if isa(typ, AnyConditional) if typ.thentype === Union{} return Const(false) @@ -285,7 +285,7 @@ function widenconditional(@nospecialize typ) end return typ end -function widenwrappedconditional(@nospecialize typ) +@nospecializeinfer function widenwrappedconditional(@nospecialize typ) if isa(typ, LimitedAccuracy) return LimitedAccuracy(widenconditional(typ.typ), typ.causes) end @@ -294,7 +294,7 @@ end # `Conditional` and `InterConditional` are valid in opposite contexts # (i.e. 
local inference and inter-procedural call), as such they will never be compared -function issubconditional(lattice::AbstractLattice, a::C, b::C) where {C<:AnyConditional} +@nospecializeinfer function issubconditional(lattice::AbstractLattice, a::C, b::C) where {C<:AnyConditional} if is_same_conditionals(a, b) if ⊑(lattice, a.thentype, b.thentype) if ⊑(lattice, a.elsetype, b.elsetype) @@ -307,7 +307,7 @@ end is_same_conditionals(a::C, b::C) where C<:AnyConditional = a.slot == b.slot -is_lattice_bool(lattice::AbstractLattice, @nospecialize(typ)) = typ !== Bottom && ⊑(lattice, typ, Bool) +@nospecializeinfer is_lattice_bool(lattice::AbstractLattice, @nospecialize(typ)) = typ !== Bottom && ⊑(lattice, typ, Bool) maybe_extract_const_bool(c::Const) = (val = c.val; isa(val, Bool)) ? val : nothing function maybe_extract_const_bool(c::AnyConditional) @@ -315,12 +315,12 @@ function maybe_extract_const_bool(c::AnyConditional) (c.elsetype === Bottom && !(c.thentype === Bottom)) && return true nothing end -maybe_extract_const_bool(@nospecialize c) = nothing +@nospecializeinfer maybe_extract_const_bool(@nospecialize c) = nothing # MustAlias # ========= -function widenmustalias(@nospecialize typ) +@nospecializeinfer function widenmustalias(@nospecialize typ) if isa(typ, AnyMustAlias) return typ.fldtyp elseif isa(typ, LimitedAccuracy) @@ -329,13 +329,13 @@ function widenmustalias(@nospecialize typ) return typ end -function isalreadyconst(@nospecialize t) +@nospecializeinfer function isalreadyconst(@nospecialize t) isa(t, Const) && return true isa(t, DataType) && isdefined(t, :instance) && return true return isconstType(t) end -function maybe_const_fldidx(@nospecialize(objtyp), @nospecialize(fldval)) +@nospecializeinfer function maybe_const_fldidx(@nospecialize(objtyp), @nospecialize(fldval)) t = widenconst(objtyp) if isa(fldval, Int) fldidx = fldval @@ -352,7 +352,7 @@ function maybe_const_fldidx(@nospecialize(objtyp), @nospecialize(fldval)) return fldidx end -function form_mustalias_conditional(alias::MustAlias, @nospecialize(thentype), @nospecialize(elsetype)) +@nospecializeinfer function form_mustalias_conditional(alias::MustAlias, @nospecialize(thentype), @nospecialize(elsetype)) (; slot, vartyp, fldidx) = alias if isa(vartyp, PartialStruct) fields = vartyp.fields @@ -401,7 +401,7 @@ ignorelimited(typ::LimitedAccuracy) = typ.typ # lattice order # ============= -function ⊑(lattice::InferenceLattice, @nospecialize(a), @nospecialize(b)) +@nospecializeinfer function ⊑(lattice::InferenceLattice, @nospecialize(a), @nospecialize(b)) r = ⊑(widenlattice(lattice), ignorelimited(a), ignorelimited(b)) r || return false isa(b, LimitedAccuracy) || return true @@ -420,7 +420,7 @@ function ⊑(lattice::InferenceLattice, @nospecialize(a), @nospecialize(b)) return b.causes ⊆ a.causes end -function ⊑(lattice::OptimizerLattice, @nospecialize(a), @nospecialize(b)) +@nospecializeinfer function ⊑(lattice::OptimizerLattice, @nospecialize(a), @nospecialize(b)) if isa(a, MaybeUndef) isa(b, MaybeUndef) || return false a, b = a.typ, b.typ @@ -430,7 +430,7 @@ function ⊑(lattice::OptimizerLattice, @nospecialize(a), @nospecialize(b)) return ⊑(widenlattice(lattice), a, b) end -function ⊑(lattice::AnyConditionalsLattice, @nospecialize(a), @nospecialize(b)) +@nospecializeinfer function ⊑(lattice::AnyConditionalsLattice, @nospecialize(a), @nospecialize(b)) # Fast paths for common cases b === Any && return true a === Any && return false @@ -450,7 +450,7 @@ function ⊑(lattice::AnyConditionalsLattice, @nospecialize(a), @nospecialize(b) return 
⊑(widenlattice(lattice), a, b)
 end
 
-function ⊑(𝕃::AnyMustAliasesLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function ⊑(𝕃::AnyMustAliasesLattice, @nospecialize(a), @nospecialize(b))
     MustAliasT = isa(𝕃, MustAliasesLattice) ? MustAlias : InterMustAlias
     if isa(a, MustAliasT)
         if isa(b, MustAliasT)
@@ -463,7 +463,7 @@ function ⊑(𝕃::AnyMustAliasesLattice, @nospecialize(a), @nospecialize(b))
     return ⊑(widenlattice(𝕃), a, b)
 end
 
-function ⊑(lattice::PartialsLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function ⊑(lattice::PartialsLattice, @nospecialize(a), @nospecialize(b))
     if isa(a, PartialStruct)
         if isa(b, PartialStruct)
             if !(length(a.fields) == length(b.fields) && a.typ <: b.typ)
@@ -526,7 +526,7 @@ function ⊑(lattice::PartialsLattice, @nospecialize(a), @nospecialize(b))
     return ⊑(widenlattice(lattice), a, b)
 end
 
-function ⊑(lattice::ConstsLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function ⊑(lattice::ConstsLattice, @nospecialize(a), @nospecialize(b))
     if isa(a, Const)
         if isa(b, Const)
             return a.val === b.val
@@ -548,7 +548,7 @@ function ⊑(lattice::ConstsLattice, @nospecialize(a), @nospecialize(b))
     return ⊑(widenlattice(lattice), a, b)
 end
 
-function is_lattice_equal(lattice::InferenceLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function is_lattice_equal(lattice::InferenceLattice, @nospecialize(a), @nospecialize(b))
     if isa(a, LimitedAccuracy)
         isa(b, LimitedAccuracy) || return false
         a.causes == b.causes || return false
@@ -560,7 +560,7 @@ function is_lattice_equal(lattice::InferenceLattice, @nospecia
     return is_lattice_equal(widenlattice(lattice), a, b)
 end
 
-function is_lattice_equal(lattice::OptimizerLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function is_lattice_equal(lattice::OptimizerLattice, @nospecialize(a), @nospecialize(b))
     if isa(a, MaybeUndef) || isa(b, MaybeUndef)
         # TODO: Unwrap these and recurse to is_lattice_equal
         return ⊑(lattice, a, b) && ⊑(lattice, b, a)
@@ -568,7 +568,7 @@ function is_lattice_equal(lattice::OptimizerLattice, @nospecia
     return is_lattice_equal(widenlattice(lattice), a, b)
 end
 
-function is_lattice_equal(lattice::AnyConditionalsLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function is_lattice_equal(lattice::AnyConditionalsLattice, @nospecialize(a), @nospecialize(b))
     ConditionalT = isa(lattice, ConditionalsLattice) ? Conditional : InterConditional
     if isa(a, ConditionalT) || isa(b, ConditionalT)
         # TODO: Unwrap these and recurse to is_lattice_equal
@@ -577,7 +577,7 @@ function is_lattice_equal(lattice::AnyConditionalsLattice, @nospecialize(a), @no
     return is_lattice_equal(widenlattice(lattice), a, b)
 end
 
-function is_lattice_equal(lattice::PartialsLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function is_lattice_equal(lattice::PartialsLattice, @nospecialize(a), @nospecialize(b))
     if isa(a, PartialStruct)
         isa(b, PartialStruct) || return false
         length(a.fields) == length(b.fields) || return false
@@ -600,7 +600,7 @@ function is_lattice_equal(lattice::PartialsLattice, @nospecial
     return is_lattice_equal(widenlattice(lattice), a, b)
 end
 
-function is_lattice_equal(lattice::ConstsLattice, @nospecialize(a), @nospecialize(b))
+@nospecializeinfer function is_lattice_equal(lattice::ConstsLattice, @nospecialize(a), @nospecialize(b))
     a === b && return true
     if a isa Const
         if issingletontype(b)
@@ -625,7 +625,7 @@ end
 # lattice operations
 # ==================
 
-function tmeet(lattice::PartialsLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(lattice::PartialsLattice, @nospecialize(v), @nospecialize(t::Type))
     if isa(v, PartialStruct)
         has_free_typevars(t) && return v
         widev = widenconst(v)
@@ -663,7 +663,7 @@ function tmeet(lattice::PartialsLattice, @nospecialize(v), @nospecialize(t::Type
     return tmeet(widenlattice(lattice), v, t)
 end
 
-function tmeet(lattice::ConstsLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(lattice::ConstsLattice, @nospecialize(v), @nospecialize(t::Type))
     if isa(v, Const)
         if !has_free_typevars(t) && !isa(v.val, t)
             return Bottom
@@ -673,7 +673,7 @@ function tmeet(lattice::ConstsLattice, @nospecialize(v), @nospecialize(t::Type))
     tmeet(widenlattice(lattice), widenconst(v), t)
 end
 
-function tmeet(lattice::ConditionalsLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(lattice::ConditionalsLattice, @nospecialize(v), @nospecialize(t::Type))
     if isa(v, Conditional)
         if !(Bool <: t)
             return Bottom
@@ -683,33 +683,33 @@ function tmeet(lattice::ConditionalsLattice, @nospecialize(v), @nospecialize(t::
     tmeet(widenlattice(lattice), v, t)
 end
 
-function tmeet(𝕃::MustAliasesLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(𝕃::MustAliasesLattice, @nospecialize(v), @nospecialize(t::Type))
     if isa(v, MustAlias)
         v = widenmustalias(v)
     end
     return tmeet(widenlattice(𝕃), v, t)
 end
 
-function tmeet(lattice::InferenceLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(lattice::InferenceLattice, @nospecialize(v), @nospecialize(t::Type))
     # TODO: This can probably happen and should be handled
     @assert !isa(v, LimitedAccuracy)
     tmeet(widenlattice(lattice), v, t)
 end
 
-function tmeet(lattice::InterConditionalsLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(lattice::InterConditionalsLattice, @nospecialize(v), @nospecialize(t::Type))
     # TODO: This can probably happen and should be handled
     @assert !isa(v, AnyConditional)
     tmeet(widenlattice(lattice), v, t)
 end
 
-function tmeet(𝕃::InterMustAliasesLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(𝕃::InterMustAliasesLattice, @nospecialize(v), @nospecialize(t::Type))
     if isa(v, InterMustAlias)
         v = widenmustalias(v)
     end
     return tmeet(widenlattice(𝕃), v, t)
 end
 
-function tmeet(lattice::OptimizerLattice, @nospecialize(v), @nospecialize(t::Type))
+@nospecializeinfer function tmeet(lattice::OptimizerLattice, @nospecialize(v), @nospecialize(t::Type))
     # TODO: This can probably happen and should be handled
     @assert !isa(v, MaybeUndef)
     tmeet(widenlattice(lattice), v, t)
@@ -727,7 +727,7 @@ widenconst(m::MaybeUndef) = widenconst(m.typ)
 widenconst(::PartialTypeVar) = TypeVar
 widenconst(t::PartialStruct) = t.typ
 widenconst(t::PartialOpaque) = t.typ
-widenconst(t::Type) = t
+@nospecializeinfer widenconst(@nospecialize t::Type) = t
 widenconst(::TypeVar) = error("unhandled TypeVar")
 widenconst(::TypeofVararg) = error("unhandled Vararg")
 widenconst(::LimitedAccuracy) = error("unhandled LimitedAccuracy")
@@ -743,7 +743,7 @@ function smerge(lattice::AbstractLattice, sa::Union{NotFound,VarState}, sb::Unio
     return VarState(tmerge(lattice, sa.typ, sb.typ), sa.undef | sb.undef)
 end
 
-@inline schanged(lattice::AbstractLattice, @nospecialize(n), @nospecialize(o)) =
+@nospecializeinfer @inline schanged(lattice::AbstractLattice, @nospecialize(n), @nospecialize(o)) =
     (n !== o) && (o === NOT_FOUND || (n !== NOT_FOUND && !(n.undef <= o.undef && ⊑(lattice, n.typ, o.typ))))
 
 # remove any lattice elements that wrap the reassigned slot object from the vartable
diff --git a/base/compiler/typelimits.jl b/base/compiler/typelimits.jl
index 191820951fae1..957796f6f5c49 100644
--- a/base/compiler/typelimits.jl
+++ b/base/compiler/typelimits.jl
@@ -304,7 +304,7 @@ end
 
 # A simplified type_more_complex query over the extended lattice
 # (assumes typeb ⊑ typea)
-function issimplertype(𝕃::AbstractLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function issimplertype(𝕃::AbstractLattice, @nospecialize(typea), @nospecialize(typeb))
     typea isa MaybeUndef && (typea = typea.typ) # n.b. does not appear in inference
     typeb isa MaybeUndef && (typeb = typeb.typ) # n.b. does not appear in inference
     @assert !isa(typea, LimitedAccuracy) && !isa(typeb, LimitedAccuracy) "LimitedAccuracy not supported by simplertype lattice" # n.b. the caller was supposed to handle these
@@ -415,7 +415,7 @@ function merge_causes(causesa::IdSet{InferenceState}, causesb::IdSet{InferenceSt
     end
 end
 
-@noinline function tmerge_limited(lattice::InferenceLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer @noinline function tmerge_limited(lattice::InferenceLattice, @nospecialize(typea), @nospecialize(typeb))
     typea === Union{} && return typeb
     typeb === Union{} && return typea
 
@@ -466,7 +466,7 @@ end
     return LimitedAccuracy(tmerge(widenlattice(lattice), typea, typeb), causes)
 end
 
-function tmerge(lattice::InferenceLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function tmerge(lattice::InferenceLattice, @nospecialize(typea), @nospecialize(typeb))
     if isa(typea, LimitedAccuracy) || isa(typeb, LimitedAccuracy)
         return tmerge_limited(lattice, typea, typeb)
     end
@@ -476,7 +476,7 @@ function tmerge(lattice::InferenceLattice, @nospecialize(typea), @nospecialize(t
     return tmerge(widenlattice(lattice), typea, typeb)
 end
 
-function tmerge(lattice::ConditionalsLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function tmerge(lattice::ConditionalsLattice, @nospecialize(typea), @nospecialize(typeb))
     # type-lattice for Conditional wrapper (NOTE never be merged with InterConditional)
     if isa(typea, Conditional) && isa(typeb, Const)
         if typeb.val === true
@@ -511,7 +511,7 @@ function tmerge(lattice::ConditionalsLattice, @nospecialize(typea), @nospecializ
     return tmerge(widenlattice(lattice), typea, typeb)
 end
 
-function tmerge(lattice::InterConditionalsLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function tmerge(lattice::InterConditionalsLattice, @nospecialize(typea), @nospecialize(typeb))
     # type-lattice for InterConditional wrapper (NOTE never be merged with Conditional)
     if isa(typea, InterConditional) && isa(typeb, Const)
         if typeb.val === true
@@ -546,7 +546,7 @@ function tmerge(lattice::InterConditionalsLattice, @nospecialize(typea), @nospec
     return tmerge(widenlattice(lattice), typea, typeb)
 end
 
-function tmerge(𝕃::AnyMustAliasesLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function tmerge(𝕃::AnyMustAliasesLattice, @nospecialize(typea), @nospecialize(typeb))
     typea = widenmustalias(typea)
     typeb = widenmustalias(typeb)
     return tmerge(widenlattice(𝕃), typea, typeb)
@@ -554,7 +554,7 @@ end
 
 # N.B. This can also be called with both typea::Const and typeb::Const to
 # to recover PartialStruct from `Const`s with overlapping fields.
-function tmerge_partial_struct(lattice::PartialsLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function tmerge_partial_struct(lattice::PartialsLattice, @nospecialize(typea), @nospecialize(typeb))
     aty = widenconst(typea)
     bty = widenconst(typeb)
     if aty === bty
@@ -612,7 +612,7 @@ function tmerge_partial_struct(lattice::PartialsLattice, @nospecialize(typea), @
     return nothing
 end
 
-function tmerge(lattice::PartialsLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function tmerge(lattice::PartialsLattice, @nospecialize(typea), @nospecialize(typeb))
     # type-lattice for Const and PartialStruct wrappers
     aps = isa(typea, PartialStruct)
     bps = isa(typeb, PartialStruct)
@@ -655,8 +655,7 @@ function tmerge(lattice::PartialsLattice, @nospecialize(typea), @nospecialize(ty
 
     return tmerge(wl, typea, typeb)
 end
-
-function tmerge(lattice::ConstsLattice, @nospecialize(typea), @nospecialize(typeb))
+@nospecializeinfer function tmerge(lattice::ConstsLattice, @nospecialize(typea), @nospecialize(typeb))
     acp = isa(typea, Const) || isa(typea, PartialTypeVar)
     bcp = isa(typeb, Const) || isa(typeb, PartialTypeVar)
     if acp && bcp
@@ -668,7 +667,7 @@ function tmerge(lattice::ConstsLattice, @nospecialize(typea), @nospecialize(type
     return tmerge(wl, typea, typeb)
 end
 
-function tmerge(::JLTypeLattice, @nospecialize(typea::Type), @nospecialize(typeb::Type))
+@nospecializeinfer function tmerge(::JLTypeLattice, @nospecialize(typea::Type), @nospecialize(typeb::Type))
     # it's always ok to form a Union of two concrete types
     act = isconcretetype(typea)
     bct = isconcretetype(typeb)
@@ -684,7 +683,7 @@ function tmerge(::JLTypeLattice, @nospecialize(typea::Type), @nospecialize(typeb
     return tmerge_types_slow(typea, typeb)
 end
 
-@noinline function tmerge_types_slow(@nospecialize(typea::Type), @nospecialize(typeb::Type))
+@nospecializeinfer @noinline function tmerge_types_slow(@nospecialize(typea::Type), @nospecialize(typeb::Type))
     # collect the list of types from past tmerge calls returning Union
     # and then reduce over that list
     types = Any[]
diff --git a/base/compiler/utilities.jl b/base/compiler/utilities.jl
index cb5f916e76914..e7ce41a3be92a 100644
--- a/base/compiler/utilities.jl
+++ b/base/compiler/utilities.jl
@@ -327,7 +327,7 @@ end
 # types #
 #########
 
-function singleton_type(@nospecialize(ft))
+@nospecializeinfer function singleton_type(@nospecialize(ft))
     ft = widenslotwrapper(ft)
     if isa(ft, Const)
         return ft.val
@@ -339,7 +339,7 @@ function singleton_type(@nospecialize(ft))
     return nothing
 end
 
-function maybe_singleton_const(@nospecialize(t))
+@nospecializeinfer function maybe_singleton_const(@nospecialize(t))
     if isa(t, DataType)
         if issingletontype(t)
             return Const(t.instance)