Decimal precision issues


Christian Kloimüllner

Jan 31, 2017, 4:43:13 AM
to Gurobi Optimization
Dear all,

I am currently working on an optimization problem in which I aim to maximize some flow subject to special problem-specific constraints. I modeled it as a MIP and tried to solve it with Gurobi, but unfortunately ran into some issues. Judging from the warnings Gurobi prints, I suspect decimal precision issues. Gurobi shows the following warnings (not for every instance, but for some of them):

Warning: Markowitz tolerance tightened to 0.5
Warning: max constraint violation (1.1405e-06) exceeds tolerance
Warning: max bound violation (1.1211e-06) exceeds tolerance
very big Kappa = 2.28756e+14, try parameter NumericFocus
Warning: 1 variables dropped from basis
Warning: switch to quad precision

I have already tried setting the parameter NumericFocus to 2 and also setting ScaleFlag to 0, but the solution procedure then becomes very slow, in fact too slow for my purposes. With these parameter settings I still get the warnings, but I can solve the MIPs. Without them, the root relaxation is sometimes reported as infeasible although it should not be. I should note that these problems mostly occur only on larger problem instances. An example of such a "larger" instance is:

Optimize a model with 756907 rows, 3114763 columns and 11188649 nonzeros
Variable types: 3113996 continuous, 767 integer (0 binary)
Coefficient statistics:
  Matrix range     [1e-02, 3e+03]
  Objective range  [1e+00, 1e+00]
  Bounds range     [4e-07, 1e+06]
  RHS range        [3e-14, 1e+06]

The problem is probably caused by the input data, although I currently do not know exactly what makes it hard for the solver. Maybe rounding the input data to fewer decimal places (e.g., 2) could help? Or maybe it is some other characteristic of the input data that I have not thought of yet?
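
For reference, this is roughly how I set the parameters mentioned above (a minimal gurobipy sketch; the file name is just a placeholder, and my actual model-building code is omitted):

import gurobipy as gp

# Placeholder instance file; in my code the model is built directly via the API.
model = gp.read("instance.mps")

# Parameter settings I experimented with, as described above.
model.Params.NumericFocus = 2   # put more emphasis on numerical stability
model.Params.ScaleFlag = 0      # disable Gurobi's automatic scaling

# Rounding idea mentioned above: truncate input coefficients before adding them,
# e.g. coeff = round(raw_coeff, 2)

model.optimize()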

If you have any ideas or hints about my problem(s), I would appreciate your help.

Tobias Achterberg

Jan 31, 2017, 5:50:11 AM
to gur...@googlegroups.com
This really looks like you have numerical issues in the model, although the
model statistics do not look terrible. The matrix coefficient range from 1e-2 to
3e+3 is small, and the tiny values (4e-7, 3e-14) for the bounds and right hand
sides should not matter much.

Thus, the numerical issues seem to arise from the structure of the matrix. The
very big condition number of Kappa = 2.2e+14 you are encountering suggests that
this is the case. Consider for example a system

x0 - 2x1 = 0
x1 - 2x2 = 0
x2 - 2x3 = 0
x3 - 2x4 = 0
...
x99 - 2x100 = 0

All coefficients are nice, but we get x0 = 2^100 x100 in the factorization of
the matrix.
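
To make this concrete, here is a small numerical sketch (not part of the original reply) that builds such a chain with numpy and prints its condition number; the chain length of 40 is arbitrary:

import numpy as np

def chain_matrix(n):
    # Upper bidiagonal matrix of the system x_i - 2*x_{i+1} = 0 for i = 0..n-2,
    # closed with a plain unit diagonal entry in the last row.
    A = np.eye(n)
    for i in range(n - 1):
        A[i, i + 1] = -2.0
    return A

A = chain_matrix(40)
print(np.linalg.cond(A))  # on the order of 2**40, i.e. roughly 1e12

Even 40 such "nice" rows already give a condition number around 1e12; with 100 rows it exceeds what double precision arithmetic can resolve.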

Unfortunately, even though this analysis may help you identify why you are
having trouble, it is a different question how you could change the model to get
rid of the problem. I don't think that you will be successful by changing solver
parameters only. My guess is that you need to somehow reformulate your model.
But it is hard to give any hints without having further information on what you
want to model.

In any case, you should check where the tiny bound and rhs coefficients come
from. Maybe this points you to some issue in your model that may also be the
cause for the structural problem you have in the matrix.
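
As a sketch, such a check could look like this in gurobipy (the file name and the 1e-6 threshold are placeholders, not values taken from your model):

import gurobipy as gp

# Scan a model for suspiciously tiny bounds and right-hand sides.
model = gp.read("instance.mps")
TINY = 1e-6

for v in model.getVars():
    for b in (v.LB, v.UB):
        if b != 0.0 and abs(b) < TINY:
            print("tiny bound %g on variable %s" % (b, v.VarName))

for c in model.getConstrs():
    if c.RHS != 0.0 and abs(c.RHS) < TINY:
        print("tiny RHS %g in constraint %s" % (c.RHS, c.ConstrName))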


Regards,

Tobias

Christian Kloimüllner

Feb 1, 2017, 1:20:56 PM
to Gurobi Optimization
Dear Tobias,

thank you very much for your reply. I appreciate your help, and I could indeed already identify some problems which, once fixed, made the whole model easier to solve. The problems with the bounds and the RHS values were caused by the input data. I work with randomly generated instances, which also contained some very small values, but I think that for the problem type I am working on all "really small" values can be discarded. Thus, I defined a threshold below which values are not considered in the optimization.

This eliminated many of the warnings I got before, but these two still remain:

Warning: max constraint violation (1.1405e-06) exceeds tolerance
Warning: max bound violation (1.1211e-06) exceeds tolerance

and I am not able to figure out why. I tried to debug my model in detail and also checked for the issue you described, where the individual coefficients look harmless but huge values accumulate in the factorization; this does not seem to be the case in my scenario. I also tried printing the LHS, RHS, and slack values for each constraint after the optimization, but these values do not look unusual either. Until now I had not debugged my LP/MIP models at this level of detail, but what I noticed is that I often get very small slack values in my constraints. Could this be a hint at the problem?
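
For completeness, this is roughly how I inspected the constraints after optimizing (a gurobipy sketch; the file name and the tightness threshold are placeholders):

import gurobipy as gp

# Print LHS activity, RHS and slack for every constraint of the solved model.
model = gp.read("instance.mps")
model.optimize()

for c in model.getConstrs():
    activity = model.getRow(c).getValue()  # LHS value in the current solution
    print(c.ConstrName, "lhs:", activity, "rhs:", c.RHS, "slack:", c.Slack)
    if abs(c.Slack) < 1e-9:
        print("  -> constraint is (nearly) tight")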

If you could give me a hint on how to figure out the reason for those warning messages, it would be very helpful. As you already mentioned, I may need to reformulate my model, but I would need some indication of where to look, i.e., what causes all the trouble.

An example of an instance where I run into trouble and obtain the above-mentioned warnings can be found at https://drive.google.com/open?id=0B266pSY-_bM7LUs2WjB1ZjFBU0k. Maybe the problem is the large number of variables, or maybe I should round my values before passing them to Gurobi (fewer decimal places might avoid these warnings). In this rather small instance I have, for example, 799,416 continuous variables and 200 integer variables. Could this also be a possible origin of those errors?

Best regards,
Christian.

Tobias Achterberg

Feb 1, 2017, 4:55:32 PM
to gur...@googlegroups.com
Hi Christian,

I have trouble reading your debug.lp.bz2 file. When uncompressing it, I get some additional binary data at the beginning and the end, and deleting this in an editor still leaves a somehow corrupted file that Gurobi refuses to read. Maybe my bunzip2 version is too old.

Could you please save your model as a *.mps file (this makes sure that everything is really captured correctly, bit by bit; the *.lp file format may truncate some coefficients and does not preserve the order of the variables)? And then perhaps use good old *.zip to compress it.
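
If you build the model through gurobipy, that is just a call to Model.write, roughly like this (file names are placeholders):

import gurobipy as gp

# Build or load the model as usual, then export it; gurobipy infers the
# MPS format from the file extension.
model = gp.Model("example")
model.write("debug.mps")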


Thanks,

Tobias

Christian Kloimüllner

Feb 2, 2017, 1:03:52 AM
to Gurobi Optimization
Hi Tobias,

I have created the mps file and zipped it: https://drive.google.com/open?id=0B266pSY-_bM7TDZMelNpNXR0WkE (14 MB)

and just in case the zip file does not work, here is the original uncompressed mps: https://drive.google.com/open?id=0B266pSY-_bM7dldVVldJUjdYWjQ (176 MB)

I used Gurobi 7.0.

Thanks and best regards,
Christian.

Tobias Achterberg

Feb 2, 2017, 4:05:59 AM
to gur...@googlegroups.com
Thanks, I got your model.

You have a number of constraints in which all coefficients are on the order of 100 or 1000 and share a common divisor, so they can be divided by their gcd. Gurobi's presolve does this. The solution of the presolved model is then within the feasibility tolerance of 1e-6, but if you plug it into the original model (where the coefficients are larger), you get a larger violation.

The constraint with the largest violation (in my run, which I aborted after about a minute) is one with coefficients of 198. The violation is on the order of 1e-5, so this is consistent with a violation of <1e-6 in the presolved model.

It seems that your model is close to infeasible (or actually infeasible) if exact arithmetic is applied, so there may not be any solution without constraint violations. I tried FeasibilityTol=1e-8 and got slightly better results (violation 1.5e-6), but with FeasibilityTol=1e-9 Gurobi claims the model is infeasible.

You might be able to improve the numerics by scaling the model yourself before
passing it to Gurobi, for example by dividing the coefficients of each row by
the maximum absolute value or by their gcd.
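
As a rough illustration of the row-scaling idea, here is a sketch that divides each row (and its right-hand side) by the largest absolute coefficient in that row; it uses gurobipy, the file name is a placeholder, and dividing by the gcd instead would work the same way:

import gurobipy as gp

model = gp.read("instance.mps")

for c in model.getConstrs():
    row = model.getRow(c)
    if row.size() == 0:
        continue
    scale = max(abs(row.getCoeff(i)) for i in range(row.size()))
    if scale <= 1.0:
        continue  # dividing by a value <= 1 would only enlarge the coefficients
    for i in range(row.size()):
        model.chgCoeff(c, row.getVar(i), row.getCoeff(i) / scale)
    c.RHS = c.RHS / scale

model.update()
model.optimize()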


Best regards,

Tobias