Underflow warnings trigger too often #22

Open
ericphanson opened this issue Sep 20, 2019 · 5 comments
Labels
bug Something isn't working

Comments

@ericphanson
Owner

If the objective is zero, the warnings always trigger, because the current condition we use is

```julia
(norm(optimizer.b, Inf) < eps(norm(xMatvec, Inf)) || norm(optimizer.b, Inf) < eps(norm(yMatvec, Inf)))
```

and optimizer.b represents the objective function. So we need to change this condition for detecting underflows. Moreover, we should at least only trigger the warning for non-BigFloat problems, since BigFloats should almost always have enough precision.
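
To see why a zero objective always trips the check, here is a minimal standalone illustration (the vectors are made up and merely stand in for the optimizer fields): the left-hand side is exactly zero, while eps of anything is strictly positive.

```julia
using LinearAlgebra

# Stand-ins for optimizer.b and xMatvec, not the package's actual fields.
b = zeros(3)              # objective vector when the objective is zero
xMatvec = [1.0, 2.0, 3.0]

# eps(·) is strictly positive, so the comparison holds no matter what
# xMatvec contains; the warning fires on every such problem.
norm(b, Inf) < eps(norm(xMatvec, Inf))  # true
```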

For more on why we have underflow problems: SDPA-GMP solves the problem at a precision specified by the precision parameter, which we default to 200 for non-Float64 problems and 80 for Float64 problems. However, Float64 only has 52 bits of mantissa. So when we read the solution from the file into, e.g., a Float64, we lose some information. Sometimes that's OK -- since we're solving the problem at Float64 precision, we can only expect to recover those 52 bits -- but a problem arises if, in one of the bridges or other calculations performed on the Julia side, two very large numbers are subtracted, killing off many of the high-order bits (or some other precision-losing calculation is performed). Then we actually need those low-order bits in order to have an accurate final answer, even to, e.g., 10^-2 precision. (Thanks to @JiazhengZhu for explaining this to me.)
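
A toy demonstration of that cancellation effect (the numbers are invented for illustration, not taken from a real solve):

```julia
setprecision(200) do
    x = BigFloat("1e10") + BigFloat("1e-8")  # pretend this is 200-bit solver output
    y = BigFloat("1e10")
    exact = x - y                    # ≈ 1e-8: the high-precision subtraction is fine
    lossy = Float64(x) - Float64(y)  # rounding to Float64 first discards the low bits
    @show exact lossy                # lossy == 0.0: the answer is gone
end
```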

So, if I'm understanding the problem correctly, the only time this can happen is if we get output from the solver that is much larger than the final objective value (after post-processing, e.g. bridges) or the final variable values, because that's how we lose the low-order bits that are actually significant. I'm not sure to what extent we can detect any post-processing at the level of SDPAFamily.jl, however, so I'm not sure how to figure out what the final objective value is in order to compare it to the size of the outputs we get from the solver. The SDPAFamily.Optimizer object doesn't "know" whether it's part of a larger chain of bridges and solvers, as far as I can see.

However, the parameters lowerbound and upperbound are supposed to bound the optimal objective value and dual objective value. So maybe we should just re-use them: if the solver returns values that fall outside those bounds, that's cause to warn the user (at least when the precision of the problem's numeric type is smaller than the precision parameter, so that reading the output into that type can actually lose bits).
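
A rough sketch of what such a check could look like; the function and field names here (maybe_warn_underflow, params.lowerBound, params.upperBound, params.precision) are hypothetical stand-ins, not the package's actual API:

```julia
# Hypothetical sketch, not SDPAFamily.jl's real API.
function maybe_warn_underflow(primal_obj::T, dual_obj::T, params) where {T}
    # Reading the solver's output into T can only destroy low-order bits
    # when T holds fewer bits than the solver's working precision.
    precision(T) >= params.precision && return
    inbounds(x) = params.lowerBound <= x <= params.upperBound
    if !inbounds(primal_obj) || !inbounds(dual_obj)
        @warn "Objective outside [lowerBound, upperBound]; the output may have lost significant low-order bits."
    end
end
```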

However, I don't quite see how we can be getting huge results from the solver given that it is supposed to be using these upper and lower bounds as well. Perhaps the objective values remain small but some of the entries of the variables become huge?

I should look into some example problems more.

@JiazhengZhu
Collaborator

JiazhengZhu commented Sep 20, 2019

Yes, I think the upper and lower bounds are used to decide whether a problem is unbounded. If the objective values reach those bounds, SDPA just gives up and declares it unbounded. These limits only apply to the objectives, though.

I’ve been doing a tiny bit of research, and it seems there was a Google Summer of Code project on LP and SDP presolve for Convex.jl, but I can only find the LP part: https://github.com/ramcha24/Presolve. The author also gave a talk about the project on YouTube. So, given that our final result only depends on the difference of two entries, it’s possible that we still have some redundancy in our problem. Maybe the ultimate solution will be to implement a facial reduction algorithm?

@ericphanson
Owner Author

Ah, that's cool! I didn't know about that project. Unfortunately, we may not be able to make much use of it: it looks like it implements the presolve at the level of the MPB standard form, which we no longer construct on the Convex#MathOptInterface branch. Moreover, with MOI bridges, doing the presolve at the level of Convex may not be sufficient anyway.

Maybe we can check whether underflows correspond to redundant constraints that weren't removed by the presolve. For example, we could implement versions of the partial transpose tests using exact rationals (which should be handled perfectly by the presolve) and see if we still get underflow-type problems.

That will be a bit tricky the way things are now, since if we specify the problem with rationals, we expect to write and read rationals to the files, which won't work (since SDPA doesn't understand rationals and moreover we can get irrational solutions to SDPs which are specified with rational numbers, as shown by that quantum guessing example).

I wonder if somehow we can set things up so that we can switch numeric types partway through: allow the problem to be specified and presolved with rationals, and then swap to BigFloats to write and read the problem afterwards. That seems like it would be helpful. Convex#MathOptInterface currently uses a MOIU.Model, which only allows for one numeric type, but we could make a custom MOI model, and maybe we could do it that way.
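
As a rough sketch of that type swap (with invented data; the real change would live in the model layer), the exact arithmetic happens over Rational{BigInt}, and there is exactly one rounding step, right before file I/O:

```julia
# Exact data for the presolve/bridge stage; entries invented for illustration.
A_exact = Rational{BigInt}[1//3 2//7; 2//7 1]

# One rounding step, immediately before writing the SDPA input file.
A_solver = setprecision(256) do
    BigFloat.(A_exact)
end
```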

@JiazhengZhu
Collaborator

We could do this at the presolve level: test whether the entries in F_i are integers or simple fractions, and if not, just use floating point as the default?
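
One illustrative way to implement that test (the helper name and the denominator cutoff are made up): accept an entry only if an exact rationalization has a small denominator, and otherwise keep floating point.

```julia
# Made-up helper: treat x as a "simple fraction" only if its exact
# (tol = 0) rationalization has a small denominator.
is_simple_fraction(x; maxden = 10^6) =
    denominator(rationalize(BigInt, x; tol = 0)) <= maxden

F = [0.25 0.5; 0.5 0.75]
if all(is_simple_fraction, F)
    F_exact = rationalize.(BigInt, F; tol = 0)  # presolve exactly over rationals
else
    F_exact = F                                 # fall back to floating point
end
```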

@ericphanson
Owner Author

I think that might be pretty fragile; many numbers aren’t exactly representable by Float64s, so either we only convert the few problems in which every number is exactly representable (meaning that if you change anything slightly, the presolve might fail), or else we add a tolerance, which could just introduce more error (if, say, a problem uses an irrational number that we then convert to a rational). I think the hope is that this conversion could undo floating-point error, but I think it’s just as likely to introduce more.
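
Concretely, for an irrational entry the tolerance-based conversion silently substitutes a nearby rational, adding error rather than removing it:

```julia
x = sqrt(2.0)
r = rationalize(BigInt, x; tol = 1e-6)  # a nearby rational, e.g. 1393//985
Float64(r) - x                          # nonzero: we have moved away from √2
```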

I think it would be pretty nice to be able to do presolve and bridges at high precision, even if we want to solve the problem at a lower precision, since those should be relatively simple transformations and maybe the slowdown there is worth it, compared to doing the whole solve at high precision.

Some of the bridges use irrationals, so we won’t be able to use rationals for all problems, but hopefully for some we could. I think that would need some changes on Convex’s side though, as well.

We could also add a check on Convex’s side, by comparing the final objective value and variable values to those output by SDPA. I’m not sure if that sort of thing would be relevant to any other solvers though.

@ericphanson
Owner Author

ericphanson commented Sep 20, 2019

We could just try solving some problems with T = Union{Rational{BigInt}, BigFloat} and specifying the problem with rationals. I think that wouldn’t require any code changes, but I’m not quite sure it would work. We’d have to make sure the data doesn’t get printed as rationals, because the binary won’t understand those.

Edit: yeah, I think before printing we should call float to convert the numbers to floats. We could even try some custom numeric type T that is the rationals extended with sqrt(2) and/or other irrationals, so we can do exact presolves, and then convert to a float for solving.... Maybe we could even generate the type on demand from the problem data so it always works :o
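
A quick sketch of that float call (with invented data): convert exact entries to floating point immediately before they are printed, so the binary never sees a literal like 1//3.

```julia
T = Union{Rational{BigInt}, BigFloat}
entries = T[big(1)//3, big"1.5"]

# float(::Rational{BigInt}) produces a BigFloat; BigFloat entries pass through.
print_ready = float.(entries)  # Vector{BigFloat}, safe to print to the input file
```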
