SDP-relaxation in polynomial optimization #34
Interesting, thanks for the note; we should update the docs. The original paper (linked in the docs) used SDPA and SeDuMi, not SCS, although I'm not sure exactly how they formulated the problem. With plain SDPA (via `SDPAFamily.jl` with `variant=:sdpa`), I find that

```julia
julia> r = 10
10

julia> m = relaxed_pop(r)
A JuMP Model
Feasibility problem with:
Variables: 20
`Array{GenericAffExpr{Float64,VariableRef},1}`-in-`MathOptInterface.PositiveSemidefiniteConeSquare`: 3 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
Names registered in the model: v

julia> set_optimizer(m, () -> SDPAFamily.Optimizer{Float64}(presolve=true, variant=:sdpa))

julia> optimize!(m)
Warning: Step length is too small. :: line 198 in sdpa_dataset.cpp
Warning: cannot move: step length is too short :: line 176 in sdpa_solve.cpp

julia> objective_value(m)
0.0005971314478685485
```

(Note the above code calls plain SDPA, not SDPA-GMP.) It's not a silent failure, though. I wonder if maybe the paper is just a bit out of date and the solvers can now more or less handle these problems? Or if maybe the paper authors also used some kind of non-optimal formulation like the one Convex emits here.

By the way, I suspect Convex's formulation here is inefficient but not truly incorrect (not that you were implying it was!), although it is definitely not good that it leads to the wrong answer in this case. I think the solution is for Convex to use MOI more directly, as JuMP does, instead of essentially assembling the MPB standard form and passing that to MOI. It'll take some work, though.

---
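For readers without the rest of the thread, here is a toy sketch of the kind of model involved. This is not the actual `relaxed_pop` from the docs (which is not reproduced here); it is a hypothetical stand-in with the same shape: a JuMP model with a PSD constraint on affine expressions, solved with plain SDPA via `variant=:sdpa`.

```julia
using JuMP, SDPAFamily

# Hypothetical stand-in for `relaxed_pop`: a tiny moment-style relaxation of
# min x^2, encoded as min v2 subject to [1 v1; v1 v2] ⪰ 0 (which forces v2 ≥ v1^2).
function toy_relaxation()
    m = Model()
    @variable(m, v[1:2])
    @constraint(m, [1 v[1]; v[1] v[2]] in PSDCone())
    @objective(m, Min, v[2])
    return m
end

m = toy_relaxation()
set_optimizer(m, () -> SDPAFamily.Optimizer{Float64}(presolve=true, variant=:sdpa))
optimize!(m)
objective_value(m)  # expected ≈ 0 up to solver tolerance
```

The optimum of the relaxation is 0 (attained at v₁ = v₂ = 0), so any value near zero indicates the solver handled this toy instance.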
I remember seeing different formulations of the same problem that performed differently with SCS, so that was my guess ;) just tried with

---
Btw, I tried the same using MOI and SDPAFamily directly and hit

```julia
julia> let m = relaxed_pop(25);
       set_optimizer(m, SDPAFamily.Optimizer);
       optimize!(m)
       #@show termination_status(m)
       #objective_value(m)
       end
ERROR: TypeError: in typeassert, expected MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.EqualTo{Float64}}, got MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{BigFloat},MathOptInterface.EqualTo{BigFloat}}
Stacktrace:
[1] getindex at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/copy.jl:75 [inlined]
[2] #112 at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/copy.jl:123 [inlined]
[3] iterate at ./generator.jl:47 [inlined]
[4] _collect(::Array{MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.EqualTo{Float64}},1}, ::Base.Generator{Array{MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.EqualTo{Float64}},1},MathOptInterface.Utilities.var"#112#113"{MathOptInterface.Utilities.IndexMap}}, ::Base.EltypeUnknown, ::Base.HasShape{1}) at ./array.jl:678
[5] collect_similar at ./array.jl:607 [inlined]
[6] map at ./abstractarray.jl:2072 [inlined]
[7] pass_attributes(::SDPAFamily.Optimizer{BigFloat}, ::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, ::Bool, ::MathOptInterface.Utilities.IndexMap, ::Array{MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.EqualTo{Float64}},1}, ::Function) at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/copy.jl:123
[8] pass_constraints(::SDPAFamily.Optimizer{BigFloat}, ::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, ::Bool, ::MathOptInterface.Utilities.IndexMap, ::Array{DataType,1}, ::Array{Array{#s118,1} where #s118<:(MathOptInterface.ConstraintIndex{MathOptInterface.SingleVariable,S} where S),1}, ::Array{DataType,1}, ::Array{Array{#s13,1} where #s13<:(MathOptInterface.ConstraintIndex{MathOptInterface.VectorOfVariables,S} where S),1}, ::typeof(MathOptInterface.Utilities.allocate_constraints), ::typeof(MathOptInterface.Utilities.allocate)) at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/copy.jl:266
[9] allocate_load(::SDPAFamily.Optimizer{BigFloat}, ::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, ::Bool) at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/copy.jl:684
[10] #automatic_copy_to#109 at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/copy.jl:17 [inlined]
[11] #copy_to#22 at /home/kalmar/.julia/packages/SDPAFamily/rG0xe/src/MOI_wrapper.jl:211 [inlined]
[12] attach_optimizer(::MathOptInterface.Utilities.CachingOptimizer{SDPAFamily.Optimizer{BigFloat},MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}) at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/cachingoptimizer.jl:149
[13] optimize!(::MathOptInterface.Utilities.CachingOptimizer{SDPAFamily.Optimizer{BigFloat},MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}) at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/cachingoptimizer.jl:185
[14] optimize!(::MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SDPAFamily.Optimizer{BigFloat},MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}) at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Bridges/bridge_optimizer.jl:239
[15] optimize!(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}) at /home/kalmar/.julia/packages/MathOptInterface/RmalA/src/Utilities/cachingoptimizer.jl:189
[16] optimize!(::Model, ::Nothing; bridge_constraints::Bool, ignore_optimize_hook::Bool, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/kalmar/.julia/packages/JuMP/CZ8vV/src/optimizer_interface.jl:131
[17] optimize! at /home/kalmar/.julia/packages/JuMP/CZ8vV/src/optimizer_interface.jl:107 [inlined] (repeats 2 times)
[18] top-level scope at REPL[10]:3
```

---
Ah, we should figure out how to throw a better error message for that. The fix is to use

---
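A minimal sketch of the workaround, under an assumption on my part: the `TypeError` above is a `BigFloat`/`Float64` mismatch (the default `SDPAFamily.Optimizer` is `BigFloat`-typed while JuMP models are `Float64`), and the earlier snippet in this thread already passed `SDPAFamily.Optimizer{Float64}` explicitly, so presumably requesting `Float64` makes the two sides agree. The toy model below is hypothetical, not the thread's `relaxed_pop`.

```julia
using JuMP, SDPAFamily

# Toy stand-in model (the real `relaxed_pop` is not shown in this thread copy):
# min x subject to [1 x; x 1] ⪰ 0, i.e. |x| ≤ 1, so the optimum is -1.
m = Model()
@variable(m, x)
@constraint(m, [1 x; x 1] in PSDCone())
@objective(m, Min, x)

# Passing the numeric type parameter explicitly keeps the optimizer's type
# in agreement with JuMP's Float64 models, avoiding the typeassert error.
set_optimizer(m, () -> SDPAFamily.Optimizer{Float64}())
optimize!(m)
objective_value(m)  # expected ≈ -1
```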
Indeed, that's a tricky question.

---
The simplest thing actually might be to just not choose a default at all and throw a clear error message when one passes just

Maybe a nicer solution is if MOI can support numeric promotion (some kind of

It's a little unfortunate that `SDPAFamily.Optimizer` needs to choose a numeric type and the modelling language's model (either Convex or JuMP, the latter of which is hard-coded to `Float64`) also needs to choose a numeric type; these choices must agree, but one cannot determine the other. Even in Convex, one has to do

---
This is just a note for the documentation, namely the example here:
https://ericphanson.github.io/SDPAFamily.jl/dev/examples/#SDP-relaxation-in-polynomial-optimization-problem-1

The fact that SCS couldn't solve the problem could be a consequence of how Convex formulates it. I just tried the JuMP version and it works just fine: