Add OptimizationSenseAtom #636
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##           master     #636   +/-  ##
=======================================
  Coverage   97.86%   97.87%
=======================================
  Files          88       89     +1
  Lines        5114     5128    +14
=======================================
+ Hits         5005     5019    +14
  Misses        109      109
```
I don't know what's up with nightly.
Looks good. Maybe add a docstring for what this is used for? IIUC, this is to get an error if we don't use it the right way, e.g.,

```julia
t = Variable()
add_constraint!(t, t >= x)
add_constraint!(t, t >= -x)
maximize(t)
```

Here, because we maximize (`maximize(OptimizationSenseAtom(t, MOI.MIN_SENSE))`), you get an error saying it's not DCP because …
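To make the failure mode concrete: with the epigraph constraints `t >= x` and `t >= -x`, the feasible values of `t` are exactly `[|x|, ∞)`, so minimizing `t` recovers `|x|`, while maximizing `t` is unbounded. A tiny sketch of that reasoning (plain Python, purely illustrative; the function name is made up and has nothing to do with the Convex.jl API):

```python
def feasible_t_lower_bound(x):
    """With constraints t >= x and t >= -x, the smallest feasible t."""
    return max(x, -x)  # == abs(x)

# Minimizing t over [abs(x), inf) recovers abs(x):
assert feasible_t_lower_bound(3.0) == 3.0
assert feasible_t_lower_bound(-2.0) == 2.0
# Maximizing t over the same set is unbounded, which is why the
# maximization form should be rejected as non-DCP.
```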
Looking at this more, I think we are close to this working with:

```julia
using Convex, LinearAlgebra, Clarabel
using Convex: AbstractExpr, Problem

# monkeypatch the existing `getproperty` so a `Problem` can be used
# like an expression: it needs a `size`, and `optval` should be
# `nothing` until the problem has actually been solved
function Base.getproperty(p::Problem, s::Symbol)
    if s === :optval
        if getfield(p, :status) == Convex.MOI.OPTIMIZE_NOT_CALLED
            return nothing
        else
            return Convex.objective_value(p)
        end
    elseif s === :size
        return p.objective.size
    end
    return getfield(p, s)
end

function lamb_min(A::AbstractExpr)
    t = Variable()
    n = size(A, 1)
    n == size(A, 2) || throw(ArgumentError("A must be square"))
    # epigraph formulation: λ_min(A) = max { t : A - t*I ⪰ 0 }
    p = maximize(t, A - t * Matrix(1.0I, n, n) ⪰ 0)
    return p
end
```
```julia
A = Variable(2, 2)  # assumption: A's definition is missing from this excerpt; 2×2 is inferred from the printed model below
p = maximize(lamb_min(A) + 1, [A >= 0, A[1, 1] == 2.0])
solve!(p, Clarabel.Optimizer)
```

```
julia> print(p.model)
Maximize ScalarAffineFunction{Float64}:
 1.0 + 1.0 v[5]

Subject to:

VectorAffineFunction{Float64}-in-Zeros
 ┌               ┐
 │-2.0 + 1.0 v[1]│
 └               ┘ ∈ Zeros(1)
VectorAffineFunction{Float64}-in-Nonnegatives
 ┌              ┐
 │0.0 + 1.0 v[1]│
 │0.0 + 1.0 v[2]│
 │0.0 + 1.0 v[3]│
 │0.0 + 1.0 v[4]│
 └              ┘ ∈ Nonnegatives(4)
VectorAffineFunction{Float64}-in-PositiveSemidefiniteConeSquare
 ┌                                    ┐
 │0.0 + 1.0 v[1] - 1.0 v[5]           │
 │0.0 + 1.0 v[3] + 1.0 v[2] - 1.0 v[3]│
 │0.0 + 1.0 v[3]                      │
 │0.0 + 1.0 v[4] - 1.0 v[5]           │
 └                                    ┘ ∈ PositiveSemidefiniteConeSquare(2)
```

However, it's not fully correct yet, since if I flip … IMO if we get this working, it will be easier for users than introducing a new atom.
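The `lamb_min` helper above relies on the epigraph identity λ_min(A) = max { t : A − tI ⪰ 0 } for symmetric A. A quick numeric sanity check of that identity (Python/NumPy here, purely illustrative; the matrix is an arbitrary choice):

```python
import numpy as np

# Arbitrary symmetric test matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

lam_min = np.linalg.eigvalsh(A)[0]  # smallest eigenvalue

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness via eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

I = np.eye(2)
# A - t*I is PSD exactly when t <= lam_min(A):
assert is_psd(A - lam_min * I)              # feasible at t = lam_min
assert not is_psd(A - (lam_min + 0.1) * I)  # infeasible just above it
```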
Okay, let me take a look.
Closing for now. I'll take another shot at this. |
Closes #310