Merge pull request #41931 from JuliaLang/avi/noinfer
* introduce `@nospecializeinfer` macro to tell the compiler to avoid excess inference

This commit introduces a new compiler annotation called `@nospecializeinfer`,
which allows us to request that the compiler avoid excessive inference.

## `@nospecialize` mechanism

To discuss `@nospecializeinfer`, let's first understand the behavior of
`@nospecialize`.

Its docstring says:

> This is only a hint for the compiler to avoid excess code generation.

It works by suppressing specialization on the runtime types of the annotated
arguments, so that complex runtime occurrences of those arguments do not
trigger extra code generation. The example below illustrates this:
```julia
julia> function call_func_itr(func, itr)
           local r = 0
           r += func(itr[1])
           r += func(itr[2])
           r += func(itr[3])
           r
       end;

julia> _isa = isa; # just for the sake of explanation, global variable to prevent inlining

julia> func_specialize(a) = _isa(a, Function);

julia> func_nospecialize(@nospecialize a) = _isa(a, Function);

julia> dispatchonly = Any[sin, muladd, nothing]; # untyped container can cause excessive runtime dispatch

julia> @code_typed call_func_itr(func_specialize, dispatchonly)
CodeInfo(
1 ─ %1  = π (0, Int64)
│   %2  = Base.arrayref(true, itr, 1)::Any
│   %3  = (func)(%2)::Any
│   %4  = (%1 + %3)::Any
│   %5  = Base.arrayref(true, itr, 2)::Any
│   %6  = (func)(%5)::Any
│   %7  = (%4 + %6)::Any
│   %8  = Base.arrayref(true, itr, 3)::Any
│   %9  = (func)(%8)::Any
│   %10 = (%7 + %9)::Any
└──       return %10
) => Any

julia> @code_typed call_func_itr(func_nospecialize, dispatchonly)
CodeInfo(
1 ─ %1  = π (0, Int64)
│   %2  = Base.arrayref(true, itr, 1)::Any
│   %3  = invoke func(%2::Any)::Any
│   %4  = (%1 + %3)::Any
│   %5  = Base.arrayref(true, itr, 2)::Any
│   %6  = invoke func(%5::Any)::Any
│   %7  = (%4 + %6)::Any
│   %8  = Base.arrayref(true, itr, 3)::Any
│   %9  = invoke func(%8::Any)::Any
│   %10 = (%7 + %9)::Any
└──       return %10
) => Any
```

The calls of `func_specialize` remain `:call` expressions (so they are
dispatched and compiled at runtime), while the calls of
`func_nospecialize` are resolved as `:invoke` expressions. This is
because `@nospecialize` requests that the compiler compile
`func_nospecialize` for the declared argument types rather than the
runtime argument types, allowing `call_func_itr(func_nospecialize, dispatchonly)`
to avoid runtime dispatch and the accompanying JIT compilation
(i.e. "excess code generation").

The difference is evident when checking `specializations`:
```julia
julia> call_func_itr(func_specialize, dispatchonly)
2

julia> length(Base.specializations(only(methods(func_specialize))))
3 # w/ runtime dispatch, multiple specializations

julia> call_func_itr(func_nospecialize, dispatchonly)
2

julia> length(Base.specializations(only(methods(func_nospecialize))))
1 # w/o runtime dispatch, the single specialization
```

The problem is that `@nospecialize` only influences dispatch and does not
intervene in inference in any way. So there is still a possibility of
"excess inference" when the compiler sees considerably complex
argument types during inference:
```julia
julia> func_specialize(a) = _isa(a, Function); # redefine func to clear the specializations

julia> @assert length(Base.specializations(only(methods(func_specialize)))) == 0;

julia> func_nospecialize(@nospecialize a) = _isa(a, Function); # redefine func to clear the specializations

julia> @assert length(Base.specializations(only(methods(func_nospecialize)))) == 0;

julia> withinference = tuple(sin, muladd, "foo"); # typed container can cause excessive inference

julia> @time @code_typed call_func_itr(func_specialize, withinference);
  0.000812 seconds (3.77 k allocations: 217.938 KiB, 94.34% compilation time)

julia> length(Base.specializations(only(methods(func_specialize))))
4 # multiple method instances inferred

julia> @time @code_typed call_func_itr(func_nospecialize, withinference);
  0.000753 seconds (3.77 k allocations: 218.047 KiB, 92.42% compilation time)

julia> length(Base.specializations(only(methods(func_nospecialize))))
4 # multiple method instances inferred
```

The purpose of this PR is to implement a mechanism that avoids
excessive inference, and thus reduces compilation latency, when
inference encounters considerably complex argument types.

## Design

Here are some ideas to implement the functionality:
1. make `@nospecialize` block inference
2. add nospecializeinfer effect when `@nospecialize`d method is annotated as `@noinline`
3. implement as `@pure`-like boolean annotation to request nospecializeinfer effect on top of `@nospecialize`
4. implement as annotation that is orthogonal to `@nospecialize`

After trying approaches 1–3, I decided to submit 3.

### 1. make `@nospecialize` block inference

This is almost the same as what Jameson did at <vtjnash@8ab7b6b>.
It turned out that this approach performs very badly, because some
`@nospecialize`'d arguments still need inference to perform reasonably.
For example, the following definition of
`getindex(@nospecialize(t::Tuple), i::Int)` would perform very badly if
`@nospecialize` blocked inference, because succeeding optimizations
would lack useful type information:
<https://github.com/JuliaLang/julia/blob/12d364e8249a07097a233ce7ea2886002459cc50/base/tuple.jl#L29-L30>
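To make the concern concrete, here is a minimal sketch (with a made-up helper name, not the actual `Base` code) of what would be lost:
```julia
# Paraphrase of the pattern at the link above, using a hypothetical name to
# avoid clashing with `Base.getindex`. The tuple argument is not specialized
# on, yet callers still rely on inference to propagate the element type.
my_getindex(@nospecialize(t::Tuple), i::Int) = getfield(t, i)

# Today inference still sees the concrete tuple type, so this addition is
# inferred as `Int`. If `@nospecialize` also blocked inference, the call would
# be inferred as `Any` and the `+` would require a runtime dispatch.
first_plus_one(t::Tuple{Int,Int}) = my_getindex(t, 1) + 1
```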

### 2. add nospecializeinfer effect when `@nospecialize`d method is annotated as `@noinline`

The important observation is that we often use `@nospecialize` even when
we expect inference to forward type and constant information.
Conversely, we can exploit the fact that we usually don't expect
inference to forward information to a callee when we annotate it
with `@noinline` (i.e. when adding `@noinline`, we're usually fine with
disabling inter-procedural optimizations other than resolving dispatch).
So the idea is to enable inference suppression when a `@nospecialize`'d
method is also annotated as `@noinline`.
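Under this approach, something like the following hypothetical sketch (not code from this PR) would have been enough to opt in:
```julia
# Hypothetical usage under approach 2: combining the two existing annotations
# would itself suppress inference, so `f` is only ever inferred as the
# declared `Any` inside the body, and the method is never inlined.
@noinline function summarize(@nospecialize(f))
    return string(typeof(f))  # fine with only the declared type
end
```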

It's a reasonable choice and can be implemented efficiently with #41922.
But it sounds a bit weird to me to associate the no-infer effect with
`@noinline`, and I also think there may be cases where we want to inline
a method while partly avoiding inference, e.g.:
```julia
# the compiler will always infer with `f::Any`
@noinline function twof(@nospecialize(f), n) # this method body is very simple and should be eligible for inlining
    if occursin('+', string(typeof(f).name.name::Symbol))
        2 + n
    elseif occursin('*', string(typeof(f).name.name::Symbol))
        2n
    else
        zero(n)
    end
end
```

### 3. implement as `@pure`-like boolean annotation to request nospecializeinfer effect on top of `@nospecialize`

This is what this commit implements. It basically replaces the previous
`@noinline` flag with a newly introduced annotation named `@nospecializeinfer`.
It is still tied to `@nospecialize` and only has an effect when
used together with `@nospecialize`, but it is no longer associated with
`@noinline`, which should help us reason about the behavior of `@nospecializeinfer`
and experiment with its effect more safely:
```julia
# the compiler will always infer with `f::Any`
Base.@nospecializeinfer function twof(@nospecialize(f), n) # the compiler may or may not inline this method
    if occursin('+', string(typeof(f).name.name::Symbol))
        2 + n
    elseif occursin('*', string(typeof(f).name.name::Symbol))
        2n
    else
        zero(n)
    end
end
```
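As a quick illustration, here is a sketch that applies the new annotation to the earlier experiment (the reported count is an expectation, not output measured for this commit):
```julia
# Reuses `_isa`, `call_func_itr`, and `withinference` from the examples above.
Base.@nospecializeinfer function func_nospecinfer(@nospecialize a)
    return _isa(a, Function)
end

@code_typed call_func_itr(func_nospecinfer, withinference);

# With `@nospecializeinfer`, inference is expected to widen `a` to the
# declared type, so only a single method instance should be inferred:
length(Base.specializations(only(methods(func_nospecinfer))))  # expected: 1
```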

### 4. implement as annotation that is orthogonal to `@nospecialize`

Actually, we could have `@nospecialize` and `@nospecializeinfer` as separate
annotations, which would allow us to configure compilation strategies in a more
fine-grained way:
```julia
function noinfspec(Base.@nospecializeinfer(f), @nospecialize(g))
    ...
end
```

I'm fine with this approach, but at the same time I'm afraid of having too
many closely related annotations (I expect we would need to write both
`@nospecializeinfer` and `@nospecialize` in this scheme).

---

experiment with `@nospecializeinfer` on `Core.Compiler`

This commit applies the `@nospecializeinfer` macro to various `Core.Compiler`
functions and achieves the following compile-time and sysimage size reductions:

|                                   | this commit | master      | ratio   |
| --------------------------------- | ----------- | ----------- | ------- |
| `Core.Compiler` compilation (sec) | `66.4551`   | `71.0846`   | `0.935` |
| `corecompiler.jl` (bytes)         | `17638080`  | `18407248`  | `0.958` |
| `sys.jl` (bytes)                  | `88736432`  | `89361280`  | `0.993` |
| `sys-o.a` (bytes)                 | `189484400` | `189907096` | `0.998` |

---------

Co-authored-by: Mosè Giordano <giordano@users.noreply.github.com>
Co-authored-by: Tim Holy <tim.holy@gmail.com>
3 people authored May 23, 2023
2 parents 944b28c + 1dc2ed6 commit f44be79
Showing 18 changed files with 302 additions and 118 deletions.
34 changes: 19 additions & 15 deletions base/compiler/abstractinterpretation.jl
@@ -508,6 +508,10 @@ function abstract_call_method(interp::AbstractInterpreter,
sigtuple = unwrap_unionall(sig)
sigtuple isa DataType || return MethodCallResult(Any, false, false, nothing, Effects())

if is_nospecializeinfer(method)
sig = get_nospecializeinfer_sig(method, sig, sparams)
end

# Limit argument type tuple growth of functions:
# look through the parents list to see if there's a call to the same method
# and from the same method.
@@ -2645,18 +2649,18 @@ struct BestguessInfo{Interp<:AbstractInterpreter}
end
end

function widenreturn(@nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn(@nospecialize(rt), info::BestguessInfo)
return widenreturn(typeinf_lattice(info.interp), rt, info)
end

function widenreturn(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo)
return widenreturn(widenlattice(𝕃ᵢ), rt, info)
end
function widenreturn_noslotwrapper(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn_noslotwrapper(𝕃ᵢ::AbstractLattice, @nospecialize(rt), info::BestguessInfo)
return widenreturn_noslotwrapper(widenlattice(𝕃ᵢ), rt, info)
end

function widenreturn(𝕃ᵢ::MustAliasesLattice, @nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn(𝕃ᵢ::MustAliasesLattice, @nospecialize(rt), info::BestguessInfo)
if isa(rt, MustAlias)
if 1 ≤ rt.slot ≤ info.nargs
rt = InterMustAlias(rt)
@@ -2668,7 +2672,7 @@ function widenreturn(𝕃ᵢ::MustAliasesLattice, @nospecialize(rt), info::Bestg
return widenreturn(widenlattice(𝕃ᵢ), rt, info)
end

function widenreturn(𝕃ᵢ::ConditionalsLattice, @nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn(𝕃ᵢ::ConditionalsLattice, @nospecialize(rt), info::BestguessInfo)
⊑ᵢ = ⊑(𝕃ᵢ)
if !(⊑(ipo_lattice(info.interp), info.bestguess, Bool)) || info.bestguess === Bool
# give up inter-procedural constraint back-propagation
@@ -2705,7 +2709,7 @@ function widenreturn(𝕃ᵢ::ConditionalsLattice, @nospecialize(rt), info::Best
isa(rt, InterConditional) && return rt
return widenreturn(widenlattice(𝕃ᵢ), rt, info)
end
function bool_rt_to_conditional(@nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function bool_rt_to_conditional(@nospecialize(rt), info::BestguessInfo)
bestguess = info.bestguess
if isa(bestguess, InterConditional)
# if the bestguess so far is already `Conditional`, try to convert
@@ -2723,7 +2727,7 @@ function bool_rt_to_conditional(@nospecialize(rt), info::BestguessInfo)
end
return rt
end
function bool_rt_to_conditional(@nospecialize(rt), slot_id::Int, info::BestguessInfo)
@nospecializeinfer function bool_rt_to_conditional(@nospecialize(rt), slot_id::Int, info::BestguessInfo)
⊑ᵢ = ⊑(typeinf_lattice(info.interp))
old = info.slottypes[slot_id]
new = widenslotwrapper(info.changes[slot_id].typ) # avoid nested conditional
@@ -2742,13 +2746,13 @@ function bool_rt_to_conditional(@nospecialize(rt), slot_id::Int, info::Bestguess
return rt
end

function widenreturn(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo)
return widenreturn_partials(𝕃ᵢ, rt, info)
end
function widenreturn_noslotwrapper(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn_noslotwrapper(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo)
return widenreturn_partials(𝕃ᵢ, rt, info)
end
function widenreturn_partials(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo)
@nospecializeinfer function widenreturn_partials(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info::BestguessInfo)
if isa(rt, PartialStruct)
fields = copy(rt.fields)
local anyrefine = false
@@ -2771,21 +2775,21 @@ function widenreturn_partials(𝕃ᵢ::PartialsLattice, @nospecialize(rt), info:
return widenreturn(widenlattice(𝕃ᵢ), rt, info)
end

function widenreturn(::ConstsLattice, @nospecialize(rt), ::BestguessInfo)
@nospecializeinfer function widenreturn(::ConstsLattice, @nospecialize(rt), ::BestguessInfo)
return widenreturn_consts(rt)
end
function widenreturn_noslotwrapper(::ConstsLattice, @nospecialize(rt), ::BestguessInfo)
@nospecializeinfer function widenreturn_noslotwrapper(::ConstsLattice, @nospecialize(rt), ::BestguessInfo)
return widenreturn_consts(rt)
end
function widenreturn_consts(@nospecialize(rt))
@nospecializeinfer function widenreturn_consts(@nospecialize(rt))
isa(rt, Const) && return rt
return widenconst(rt)
end

function widenreturn(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo)
@nospecializeinfer function widenreturn(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo)
return widenconst(rt)
end
function widenreturn_noslotwrapper(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo)
@nospecializeinfer function widenreturn_noslotwrapper(::JLTypeLattice, @nospecialize(rt), ::BestguessInfo)
return widenconst(rt)
end

50 changes: 25 additions & 25 deletions base/compiler/abstractlattice.jl
@@ -161,23 +161,23 @@ If `𝕃` is `JLTypeLattice`, this is equivalent to subtyping.
"""
function ⊑ end

⊑(::JLTypeLattice, @nospecialize(a::Type), @nospecialize(b::Type)) = a <: b
@nospecializeinfer ⊑(::JLTypeLattice, @nospecialize(a::Type), @nospecialize(b::Type)) = a <: b

"""
⊏(𝕃::AbstractLattice, a, b) -> Bool
The strict partial order over the type inference lattice.
This is defined as the irreflexive kernel of `⊑`.
"""
⊏(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = ⊑(𝕃, a, b) && !⊑(𝕃, b, a)
@nospecializeinfer ⊏(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = ⊑(𝕃, a, b) && !⊑(𝕃, b, a)

"""
⋤(𝕃::AbstractLattice, a, b) -> Bool
This order could be used as a slightly more efficient version of the strict order `⊏`,
where we can safely assume `a ⊑ b` holds.
"""
⋤(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = !⊑(𝕃, b, a)
@nospecializeinfer ⋤(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b)) = !⊑(𝕃, b, a)

"""
is_lattice_equal(𝕃::AbstractLattice, a, b) -> Bool
@@ -186,7 +186,7 @@ Check if two lattice elements are partial order equivalent.
This is basically `a ⊑ b && b ⊑ a` in the lattice of `𝕃`
but (optionally) with extra performance optimizations.
"""
function is_lattice_equal(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b))
@nospecializeinfer function is_lattice_equal(𝕃::AbstractLattice, @nospecialize(a), @nospecialize(b))
a === b && return true
return ⊑(𝕃, a, b) && ⊑(𝕃, b, a)
end
@@ -197,32 +197,32 @@ end
Determines whether the given lattice element `t` of `𝕃` has non-trivial extended lattice
information that would not be available from the type itself.
"""
has_nontrivial_extended_info(𝕃::AbstractLattice, @nospecialize t) =
@nospecializeinfer has_nontrivial_extended_info(𝕃::AbstractLattice, @nospecialize t) =
has_nontrivial_extended_info(widenlattice(𝕃), t)
function has_nontrivial_extended_info(𝕃::PartialsLattice, @nospecialize t)
@nospecializeinfer function has_nontrivial_extended_info(𝕃::PartialsLattice, @nospecialize t)
isa(t, PartialStruct) && return true
isa(t, PartialOpaque) && return true
return has_nontrivial_extended_info(widenlattice(𝕃), t)
end
function has_nontrivial_extended_info(𝕃::ConstsLattice, @nospecialize t)
@nospecializeinfer function has_nontrivial_extended_info(𝕃::ConstsLattice, @nospecialize t)
isa(t, PartialTypeVar) && return true
if isa(t, Const)
val = t.val
return !issingletontype(typeof(val)) && !(isa(val, Type) && hasuniquerep(val))
end
return has_nontrivial_extended_info(widenlattice(𝕃), t)
end
has_nontrivial_extended_info(::JLTypeLattice, @nospecialize(t)) = false
@nospecializeinfer has_nontrivial_extended_info(::JLTypeLattice, @nospecialize(t)) = false

"""
is_const_prop_profitable_arg(𝕃::AbstractLattice, t) -> Bool
Determines whether the given lattice element `t` of `𝕃` has new extended lattice information
that should be forwarded along with constant propagation.
"""
is_const_prop_profitable_arg(𝕃::AbstractLattice, @nospecialize t) =
@nospecializeinfer is_const_prop_profitable_arg(𝕃::AbstractLattice, @nospecialize t) =
is_const_prop_profitable_arg(widenlattice(𝕃), t)
function is_const_prop_profitable_arg(𝕃::PartialsLattice, @nospecialize t)
@nospecializeinfer function is_const_prop_profitable_arg(𝕃::PartialsLattice, @nospecialize t)
if isa(t, PartialStruct)
return true # might be a bit aggressive, may want to enable some check like follows:
# for i = 1:length(t.fields)
@@ -236,7 +236,7 @@ function is_const_prop_profitable_arg(𝕃::PartialsLattice, @nospecialize t)
isa(t, PartialOpaque) && return true
return is_const_prop_profitable_arg(widenlattice(𝕃), t)
end
function is_const_prop_profitable_arg(𝕃::ConstsLattice, @nospecialize t)
@nospecializeinfer function is_const_prop_profitable_arg(𝕃::ConstsLattice, @nospecialize t)
if isa(t, Const)
# don't consider mutable values useful constants
val = t.val
@@ -245,24 +245,24 @@ function is_const_prop_profitable_arg(𝕃::ConstsLattice, @nospecialize t)
isa(t, PartialTypeVar) && return false # this isn't forwardable
return is_const_prop_profitable_arg(widenlattice(𝕃), t)
end
is_const_prop_profitable_arg(::JLTypeLattice, @nospecialize t) = false
@nospecializeinfer is_const_prop_profitable_arg(::JLTypeLattice, @nospecialize t) = false

is_forwardable_argtype(𝕃::AbstractLattice, @nospecialize(x)) =
@nospecializeinfer is_forwardable_argtype(𝕃::AbstractLattice, @nospecialize(x)) =
is_forwardable_argtype(widenlattice(𝕃), x)
function is_forwardable_argtype(𝕃::ConditionalsLattice, @nospecialize x)
@nospecializeinfer function is_forwardable_argtype(𝕃::ConditionalsLattice, @nospecialize x)
isa(x, Conditional) && return true
return is_forwardable_argtype(widenlattice(𝕃), x)
end
function is_forwardable_argtype(𝕃::PartialsLattice, @nospecialize x)
@nospecializeinfer function is_forwardable_argtype(𝕃::PartialsLattice, @nospecialize x)
isa(x, PartialStruct) && return true
isa(x, PartialOpaque) && return true
return is_forwardable_argtype(widenlattice(𝕃), x)
end
function is_forwardable_argtype(𝕃::ConstsLattice, @nospecialize x)
@nospecializeinfer function is_forwardable_argtype(𝕃::ConstsLattice, @nospecialize x)
isa(x, Const) && return true
return is_forwardable_argtype(widenlattice(𝕃), x)
end
function is_forwardable_argtype(::JLTypeLattice, @nospecialize x)
@nospecializeinfer function is_forwardable_argtype(::JLTypeLattice, @nospecialize x)
return false
end

@@ -281,9 +281,9 @@ External lattice `𝕃ᵢ::ExternalLattice` may overload:
"""
function widenreturn end, function widenreturn_noslotwrapper end

is_valid_lattice(𝕃::AbstractLattice, @nospecialize(elem)) =
@nospecializeinfer is_valid_lattice(𝕃::AbstractLattice, @nospecialize(elem)) =
is_valid_lattice_norec(𝕃, elem) && is_valid_lattice(widenlattice(𝕃), elem)
is_valid_lattice(𝕃::JLTypeLattice, @nospecialize(elem)) = is_valid_lattice_norec(𝕃, elem)
@nospecializeinfer is_valid_lattice(𝕃::JLTypeLattice, @nospecialize(elem)) = is_valid_lattice_norec(𝕃, elem)

has_conditional(𝕃::AbstractLattice) = has_conditional(widenlattice(𝕃))
has_conditional(::AnyConditionalsLattice) = true
@@ -306,12 +306,12 @@ has_extended_unionsplit(::JLTypeLattice) = false
const fallback_lattice = InferenceLattice(BaseInferenceLattice.instance)
const fallback_ipo_lattice = InferenceLattice(IPOResultLattice.instance)

⊑(@nospecialize(a), @nospecialize(b)) = ⊑(fallback_lattice, a, b)
tmeet(@nospecialize(a), @nospecialize(b)) = tmeet(fallback_lattice, a, b)
tmerge(@nospecialize(a), @nospecialize(b)) = tmerge(fallback_lattice, a, b)
⊏(@nospecialize(a), @nospecialize(b)) = ⊏(fallback_lattice, a, b)
⋤(@nospecialize(a), @nospecialize(b)) = ⋤(fallback_lattice, a, b)
is_lattice_equal(@nospecialize(a), @nospecialize(b)) = is_lattice_equal(fallback_lattice, a, b)
@nospecializeinfer ⊑(@nospecialize(a), @nospecialize(b)) = ⊑(fallback_lattice, a, b)
@nospecializeinfer ⊏(@nospecialize(a), @nospecialize(b)) = ⊏(fallback_lattice, a, b)
@nospecializeinfer ⋤(@nospecialize(a), @nospecialize(b)) = ⋤(fallback_lattice, a, b)
@nospecializeinfer tmeet(@nospecialize(a), @nospecialize(b)) = tmeet(fallback_lattice, a, b)
@nospecializeinfer tmerge(@nospecialize(a), @nospecialize(b)) = tmerge(fallback_lattice, a, b)
@nospecializeinfer is_lattice_equal(@nospecialize(a), @nospecialize(b)) = is_lattice_equal(fallback_lattice, a, b)

# Widenlattice with argument
widenlattice(::JLTypeLattice, @nospecialize(t)) = widenconst(t)
(diffs for the remaining 16 changed files are not shown)

4 comments on commit f44be79

@aviatesk (Member Author):
@nanosoldier runbenchmarks("inference", vs="@d2f5bbd7cfbac902db952b465b83d242efcf6f08")

@nanosoldier (Collaborator):
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.

@aviatesk (Member Author):
@nanosoldier runbenchmarks("inference", vs="@944b28c9ec1f1629d0d9116b1dfc5cbc29002249")

@nanosoldier (Collaborator):
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here.
