Change MutableArithmetics.rewrite to move_factors_into_sums=false #3125
Conversation
Codecov Report

```
@@            Coverage Diff            @@
##           master    #3125     +/-   ##
==========================================
+ Coverage   98.05%   98.11%   +0.06%
==========================================
  Files          34       34
  Lines        4924     4924
==========================================
+ Hits         4828     4831       +3
+ Misses         96       93       -3
```

☔ View full report in Codecov by Sentry.
Urgh. We're back to the following error:

```julia
julia> import MutableArithmetics

julia> const MA = MutableArithmetics
MutableArithmetics

julia> MA.operate(*, [1, 2]', Vector{Union{String,Float64}}([1.0, 2.0]))
5.0

julia> MA.operate(*, [1 2; 3 4], Vector{Union{String,Float64}}([1.0, 2.0]))
ERROR: MethodError: no method matching zero(::Type{Union{Float64, String}})
Closest candidates are:
  zero(::Union{Type{P}, P}) where P<:Dates.Period at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/Dates/src/periods.jl:53
  zero(::MutableArithmetics.AbstractMutable) at /Users/oscar/.julia/dev/MutableArithmetics/src/dispatch.jl:784
  zero(::SparseArrays.AbstractSparseArray) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/SparseArrays/src/SparseArrays.jl:55
  ...
Stacktrace:
 [1] _instantiate_zero(#unused#::Type{Union{Float64, String}})
   @ MutableArithmetics ~/.julia/dev/MutableArithmetics/src/interface.jl:28
 [2] promote_operation_fallback(op::typeof(*), #unused#::Type{Int64}, #unused#::Type{Union{Float64, String}})
   @ MutableArithmetics ~/.julia/dev/MutableArithmetics/src/interface.jl:51
 [3] promote_operation(::typeof(*), ::Type, ::Type)
   @ MutableArithmetics ~/.julia/dev/MutableArithmetics/src/interface.jl:113
 [4] promote_sum_mul(T::Type, S::Type)
   @ MutableArithmetics ~/.julia/dev/MutableArithmetics/src/implementations/LinearAlgebra.jl:186
 [5] promote_array_mul(#unused#::Type{Matrix{Int64}}, #unused#::Type{Vector{Union{Float64, String}}})
   @ MutableArithmetics ~/.julia/dev/MutableArithmetics/src/implementations/LinearAlgebra.jl:191
 [6] operate(#unused#::typeof(*), A::Matrix{Int64}, B::Vector{Union{Float64, String}})
   @ MutableArithmetics ~/.julia/dev/MutableArithmetics/src/implementations/LinearAlgebra.jl:388
 [7] top-level scope
   @ REPL[5]:1
```
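For readers following along, here is a minimal self-contained sketch of the failure mode. The fallback for `promote_operation` infers a result type by instantiating `zero(T)`, and small `Union` types have no `zero` method; a dot-style product can avoid this by seeding its accumulator from the first term. The helper `sum_mul_first` below is hypothetical, not MA's API:

```julia
# The missing method that triggers the MethodError above:
T = Union{Float64, String}
@assert !hasmethod(zero, Tuple{Type{T}})

# Hypothetical workaround pattern: seed the accumulator from the first
# product instead of from `zero(T)`, so no zero method is required.
function sum_mul_first(a::AbstractVector, b::AbstractVector)
    acc = a[1] * b[1]
    for i in 2:length(a)
        acc += a[i] * b[i]
    end
    return acc
end

@assert sum_mul_first([1, 2], Vector{T}([1.0, 2.0])) == 5.0
```

This is why the adjoint-vector case above returns `5.0` while the matrix-vector case, which must materialize an output vector of inferred eltype, fails.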
Force-pushed from 61f4000 to 8310930.
Run the larger JuMP benchmark suite.
I ran the benchmarks here: https://github.com/jump-dev/JuMP.jl/blob/master/test/perf/JuMPBenchmarks.jl. No major changes.
The linear examples use more memory because we're no longer collapsing sums into each other. If you write

```julia
model = Model()
@variable(model, x[1:3])
@expression(model, sum(x[i] for i in 1:3) + sum(x[i] for i in 1:3))
```

the old MA produced

```julia
quote
    var"#11###286" = (JuMP.MutableArithmetics).Zero()
    for i = 1:3
        var"#11###286" = (JuMP.MutableArithmetics).operate!!((JuMP.MutableArithmetics).add_mul, var"#11###286", x[i])
    end
    var"#12###285" = var"#11###286"
    for i = 1:3
        var"#12###285" = (JuMP.MutableArithmetics).operate!!((JuMP.MutableArithmetics).add_mul, var"#12###285", x[i])
    end
    var"#13###284" = var"#12###285"
end
```

whereas now we produce

```julia
quote
    var"#11###284" = MutableArithmetics.Zero()
    for i = 1:3
        var"#11###284" = (MutableArithmetics.operate!!)(MutableArithmetics.add_mul, var"#11###284", x[i])
    end
    var"#12###285" = MutableArithmetics.Zero()
    for i = 1:3
        var"#12###285" = (MutableArithmetics.operate!!)(MutableArithmetics.add_mul, var"#12###285", x[i])
    end
    var"#13###286" = (MutableArithmetics.operate!!)(MutableArithmetics.add_mul, var"#11###284", var"#12###285")
end
```

The new rewrite follows the syntax (there are two sums), at the cost of creating an additional intermediate expression.

Here's the output of the benchmarks:

(base) oscar@Oscars-MBP JuMP % git checkout master
Switched to branch 'master'
Your branch is up to date with 'master'.
(base) oscar@Oscars-MBP JuMP % julia --project=. test/perf/JuMPBenchmarks.jl -f expr.txt
... lines omitted ...
(base) oscar@Oscars-MBP JuMP % git checkout od/ma2
Switched to branch 'od/ma2'
Your branch is up to date with 'origin/od/ma2'.
(base) oscar@Oscars-MBP JuMP % julia --project=. test/perf/JuMPBenchmarks.jl -f expr.txt --compare
========== Results ==========
bench_macro_linear_010
BenchmarkTools.TrialJudgement:
time: -31.57% => improvement (5.00% tolerance)
memory: +191.15% => regression (1.00% tolerance)
bench_dense_axis_constraints_index
BenchmarkTools.TrialJudgement:
time: -0.68% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_array_sum_050
BenchmarkTools.TrialJudgement:
time: -2.69% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_vector_sum_050
BenchmarkTools.TrialJudgement:
time: -0.01% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_sparse_axis_constraints_iterate
BenchmarkTools.TrialJudgement:
time: -4.00% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_matrix_quadratic_product_010
BenchmarkTools.TrialJudgement:
time: -1.49% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_matrix_affine_product_010
BenchmarkTools.TrialJudgement:
time: -0.45% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_sparse_axis_constraints_index
BenchmarkTools.TrialJudgement:
time: +2.37% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_vector_sum_010
BenchmarkTools.TrialJudgement:
time: -0.43% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_dense_axis_constraints_iterate
BenchmarkTools.TrialJudgement:
time: -0.04% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_matrix_sum_050
BenchmarkTools.TrialJudgement:
time: +4.17% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_macro_quad_050
BenchmarkTools.TrialJudgement:
time: +12.98% => regression (5.00% tolerance)
memory: -15.85% => improvement (1.00% tolerance)
bench_matrix_affine_product_050
BenchmarkTools.TrialJudgement:
time: -1.21% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_matrix_quadratic_product_050
BenchmarkTools.TrialJudgement:
time: +0.96% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_macro_linear_050
BenchmarkTools.TrialJudgement:
time: -39.13% => improvement (5.00% tolerance)
memory: +341.27% => regression (1.00% tolerance)
bench_array_sum_010
BenchmarkTools.TrialJudgement:
time: -2.52% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_matrix_sum_010
BenchmarkTools.TrialJudgement:
time: -1.35% => invariant (5.00% tolerance)
memory: +0.00% => invariant (1.00% tolerance)
bench_macro_quad_010
BenchmarkTools.TrialJudgement:
time: +5.64% => regression (5.00% tolerance)
memory: +7.79% => regression (1.00% tolerance)
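The memory regressions in `bench_macro_linear_*` are consistent with the explanation above: the new rewrite keeps one accumulator per sum and merges them at the end. A toy sketch, with plain integers standing in for JuMP expressions (function names are illustrative, not JuMP's code):

```julia
# Old rewrite: thread a single accumulator through both sums.
function collapsed_sums()
    acc = 0
    for i in 1:3
        acc += i        # first sum
    end
    for i in 1:3
        acc += i        # second sum reuses the same accumulator
    end
    return acc
end

# New rewrite: one accumulator per sum, combined afterwards. With mutable
# expression types the combining step allocates an extra intermediate.
function separate_sums()
    a = 0
    for i in 1:3
        a += i
    end
    b = 0
    for i in 1:3
        b += i
    end
    return a + b        # extra combining step
end

@assert collapsed_sums() == separate_sums() == 12
```

Both produce the same value; only the allocation pattern differs, which is why time improves while memory regresses.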
Now that I've used the nonlinear branch a bunch, I'm pretty confident that this change is safe to make. I'd like to get it in separately from the nonlinear PR so we're not changing too much in a single go.
@pulsipher this breaks the tests of InfiniteOpt: https://github.com/jump-dev/JuMP.jl/actions/runs/5216388377/jobs/9415059599. It's the same set of changes: we now build expressions that match what you type, instead of somewhat arbitrary rearrangements. Thoughts on merging this separately from the nonlinear stuff?
This also seemed to fix a series of test failures I'd seen on Alpine, likely because we're passing more sensible formulations to the solver.
I am definitely in favor of this change to MutableArithmetics. Naturally, it would be easiest for me to have this merged at about the same time as #3106, but it shouldn't be that much work to break up my existing pull request in InfiniteOpt accordingly. So, in short, I have no problem with this being merged.
If it didn't break your tests I'd merge this ASAP, but perhaps it's okay as a PR immediately before #3106, so that they both come out in the same JuMP release.
I guess the other point for readers is that this isn't a breaking change per se. It only breaks the tests in which @pulsipher compares two expressions: the expressions are mathematically equivalent, but they have a different structure in code.
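To illustrate the distinction, here is a hedged sketch using hypothetical stand-in expressions (term lists, not JuMP's actual types): a structural comparison of the two rewrites fails, while a canonicalized, mathematical comparison succeeds.

```julia
# The same mathematical expression with different code structure:
old = [(:x, 2)]               # stand-in for a collapsed expression, 2x
new = [(:x, 1), (:x, 1)]      # stand-in for x + x kept as two terms

# Structural comparison (what the broken tests effectively do):
@assert old != new

# Mathematical comparison: canonicalize by merging coefficients first.
function canon(e)
    d = Dict{Symbol,Int}()
    for (v, c) in e
        d[v] = get(d, v, 0) + c
    end
    return d
end

@assert canon(old) == canon(new)
```

Tests that canonicalize before comparing are robust to this kind of rewrite change.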
Closing for now.
Needs jump-dev/MutableArithmetics.jl#170

I've been experimenting with a drop-in replacement for `MA.@rewrite` and `MA.rewrite`. JuMP throws up some interesting challenges, so it took me a while to get this correct. The tests are now passing with a couple of minimal changes:

- a change related to `@expression(model, x <= 1)`
- a change to `@constraint(m, x[i = 1] <= 1)`, so that it matches the non-macro error obtained from `x[i = 1]`
- some expressions change from `y * x` to `x * y`. This is just cosmetic, and depends on the order in which we encounter the variables, but the new behavior more closely resembles what the user wrote.

The biggest surprises were some of our design choices:
- we rewrite `a + b` to `operate!!(add_mul, a, b)` instead of `operate!!(+, a, b)`
- we rewrite `a - b` to `operate!!(sub_mul, a, b)` instead of `operate!!(-, a, b)`
- we rewrite only `+` and `-` to their mutable `add_mul` and `sub_mul` equivalents; we don't rewrite something like `operate!!(/, a, b)`, even if the operation is supported
- generators with dependent indices, such as `sum(i + j for i = 1:2, j = i:2)`

There are a couple of options for merging this: keep it in JuMP as `rewrite_expr`?
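As a rough illustration of the first two design choices, the sketch below uses toy definitions (not MA's actual implementations): `add_mul` and `sub_mul` generalize `+` and `-`, so rewriting `a + b` to `operate!!(add_mul, a, b)` is sound. The dependent-index generator is also shown in its flattened form, since Julia's comma form builds a product iterator whose ranges cannot reference earlier generator variables.

```julia
# Toy stand-ins for the rewrite targets:
# add_mul(a, x, ys...) == a + x * ys..., and likewise for sub_mul.
add_mul(a, x, ys...) = a + *(x, ys...)
sub_mul(a, x, ys...) = a - *(x, ys...)

# Two-argument add_mul reduces to plain addition, which is why
# `a + b` can become `operate!!(add_mul, a, b)`:
@assert add_mul(1, 2) == 1 + 2
@assert add_mul(1, 2, 3) == 1 + 2 * 3
@assert sub_mul(10, 2, 3) == 10 - 2 * 3

# Dependent indices need the nested (flattened) generator form:
@assert sum(i + j for i = 1:2 for j = i:2) == 9  # (1+1)+(1+2)+(2+2)
```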