
Is all internal computation done with double-precision floating point?


Mark Streich

Mar 2, 2023, 3:49:23 PM
to lp_solve
I'm running into an issue where my software correctly handles values of 6,299,616 (binary 11000000001111111100000) and lower for one item, but fails for values of 6,299,617 (binary 11000000001111111100001) and above.

I'm using Java doubles throughout.

The problem is too complex to share easily, so I thought I'd just check if anyone knows if single precision floats are used anywhere in the code?

The mantissa in single-precision FP is 23 bits, and both of those values are 23-bit integers, although in my software they're doubles.

As some of the computations involve addition or subtraction, it's possible I'm exceeding the available precision.

I'd appreciate any suggestions.

-- Mark

Peter Notebaert

Mar 2, 2023, 5:55:46 PM
to Mark Streich, lp_solve
I don't fully understand your question. I can only say that lpsolve uses doubles all the way and never float.
I don't understand what you mean by 23 bits. I hope you understand that doubles cannot exactly represent all decimal numbers, and certainly not all the numbers from our decimal system, because a computer stores doubles with a base-2 exponent. Also, the solve process does a lot of floating-point calculations (in double precision), and that can give rounding errors and numerical instabilities. Also know that 'integer' variables are internally still treated as doubles.
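As a quick illustration (a standalone Java snippet since you use Java; nothing lp_solve-specific), take the classic case of a decimal number with no exact base-2 representation:

public class DecimalDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation, so rounding creeps in:
        System.out.println(0.1 + 0.2);         // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3);  // prints false

        // Printing the value actually stored for 0.1 shows why:
        System.out.println(new java.math.BigDecimal(0.1));
    }
}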

Without more information it is difficult to say more about this. You say that the problem is too complex to share easily. Did you know that you can write your model in LP or MPS format? Would it not be possible to illustrate your issue that way?

Peter


Allin Cottrell

Mar 2, 2023, 7:49:26 PM
to lp_solve
This may or may not be relevant to your problem, but the range over which floating-point types can represent integers exactly depends on the number of bits in the mantissa. A single-precision float stores 23 bits plus an implicit leading bit, so you should be OK up to 2^24 = 16,777,216. Doubles have a 52-bit mantissa, so they can represent integers exactly up to 2^53, a much bigger number than you're talking about. Beyond that range you only get approximations to integers, but that in itself is not going to destroy the accuracy of FP calculations. What do you mean by "failing"?
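You can verify those limits in Java (the language Mark is using) with a small standalone check:

public class ExactIntegerDemo {
    public static void main(String[] args) {
        // float: 23 stored mantissa bits + 1 implicit bit => exact integers up to 2^24
        float f = 16_777_216f;            // 2^24
        System.out.println(f + 1f == f);  // true: 16,777,217 is not representable

        // double: 52 stored mantissa bits + 1 implicit bit => exact integers up to 2^53
        double d = 9_007_199_254_740_992d;  // 2^53
        System.out.println(d + 1d == d);    // true: 2^53 + 1 is not representable

        // 6,299,617 is far below both limits as a double, so it is stored exactly
        System.out.println(6_299_617.0 == 6_299_617L);  // true
    }
}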

Mark Streich

Mar 3, 2023, 3:42:44 PM
to lp_solve
I've generated both "good" and "bad" result MPS files, and captured some more output in the document.

Here's the Google Drive folder, with notes and MPS files.


I was not clear. The solver doesn't "fail", but it does come up with suboptimal results, and whether it does depends on one input to the model.

Essentially, I'm trying to split a value into A and B such that we maximize A+B, and minimize the difference between A and B.  

It's slightly more complicated than that, as I allow a small delta and then iteratively relax the constraint a bit until I find a solution under the different optimizations.

I'm considering scaling the input numbers down to create the model, and then scaling the results back up afterwards.


Peter Notebaert

Mar 4, 2023, 7:34:43 AM
to Mark Streich, lp_solve
Your problem certainly has to do with scaling / big values.
For example, when I convert Using-6300000-BadResult to LP format I see:

/* Objective function */
min: -6300000 C2 -6300000 C3 -6300000 C4 -6300000 C5 +1e-05 C6 -6300000 C7 -6300000 C8;

/* Constraints */
R1: +C1 >= 1;
R2: +C1 <= 1;
R3: +C6 >= 0;
R4: +C6 <= 1;
R5: +C4 >= 0;
R6: +C4 <= 1;
R7: +C7 >= 0;
R8: +C7 <= 1;
R9: +C8 >= 0;
R10: +C8 <= 1;
R11: +C2 >= 0;
R12: +C2 <= 1;
R13: +C3 >= 0;
R14: +C3 <= 1;
R15: +C5 >= 0;
R16: +C5 <= 1;
R17: +C6 >= 0;
R18: +C6 <= 1;
R19: +C4 >= 0;
R20: +C4 <= 1;
R21: +C7 >= 0;
R22: +C7 <= 1;
R23: +C8 >= 0;
R24: +C8 <= 1;
R25: +C2 >= 0;
R26: +C2 <= 1;
R27: +C3 >= 0;
R28: +C3 <= 1;
R29: +C5 >= 0;
R30: +C5 <= 1;
R31: -6300000 C3 -6300000 C4 +6299995 C6 <= 0.1;
R32: -6300000 C3 -6300000 C4 +6299995 C6 >= 0;
R33: -6300000 C5 -6300000 C7 <= 0.1;
R34: -6300000 C5 -6300000 C7 >= 0;
R35: -3150000 C2 -3150000 C3 +3150000 C4 -3150000 C5 +3150000 C7 +3150000 C8 <= 0.25;
R36: -3150000 C2 -3150000 C3 +3150000 C4 -3150000 C5 +3150000 C7 +3150000 C8 >= -0.25;
R37: +100 C2 +100 C6 +100 C8 <= 100;
R38: +100 C2 +100 C6 +100 C8 >= 99.999;

These very big coefficients cause numerical instabilities.

If I use a different scaling option (in this case no scaling) in lp_solve, I do get the good result. For example:

lp_solve -mps d:\brol\Using-6300000-BadResult.mps  -wlp d:\brol\Using-6300000-BadResult.lp -v4 -s0

gives:

Value of objective function: -6300000.00000000

Actual values of the variables:
C1                              1
C2                            0.5
C3                              0
C4                              0
C5                              0
C6                              0
C7                              0
C8                            0.5

The default scaling option of lpsolve may not behave well with your big coefficients. So I would try other scaling options with your models and see what that gives.

This also has to do with the tolerances used by the solver. Because solving such a model involves a lot of floating-point operations, the adding, subtracting, multiplying and dividing produce rounding errors, and the solver must cope with that via tolerances.

Your big coefficients are also large enough that the rounding errors reach the range of the default tolerances.

For example when I use the option -epsel 1e-13 I also get:

Value of objective function: -6300000.00000000

Actual values of the variables:
C1                              1
C2                            0.5
C3                              0
C4                              0
C5                              0
C6                              0
C7                              0
C8                            0.5

So you will have to either try to scale your model better or play with the lpsolve options.
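If you want to set this from your Java code rather than the command line, something like the sketch below should work. I'm assuming the Java wrapper's usual camelCase mirrors of the C API here (set_scaling -> setScaling, set_epsel -> setEpsel):

import lpsolve.LpSolve;
import lpsolve.LpSolveException;

public class NumericalTweaks {
    // 'lp' is an already-built model; this applies the equivalents of -s0 and -epsel 1e-13
    static void apply(LpSolve lp) throws LpSolveException {
        lp.setScaling(0);    // -s0: no scaling at all
        // ...or try another mode instead, e.g.:
        // lp.setScaling(LpSolve.SCALE_GEOMETRIC | LpSolve.SCALE_DYNUPDATE);
        lp.setEpsel(1e-13);  // -epsel 1e-13: a tighter general epsilon
    }
}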

Peter



Peter Notebaert

Mar 4, 2023, 8:07:43 AM
to Mark Streich, lp_solve
By the way.

If I print more precision I get:

lp_solve -mps d:\brol\Using-6300000-BadResult.mps  -wlp d:\brol\Using-6300000-BadResult.lp -ip

Value of objective function: -6299997.49999421


Actual values of the variables:
C1                   1
C2                   0.49999984126977837
C3                   0
C4                   0.49999976190469902
C5                   0
C6                   0.50000015873022163
C7                   0
C8                   0

Substituting this result into all your constraints gives a valid solution.

So as far as I can tell, this solution is valid.

Peter

Mark Streich

Mar 4, 2023, 8:22:36 AM
to lp_solve
On Saturday, March 4, 2023 at 8:07:43 AM UTC-5 Peter wrote:
So as far as I can tell, this solution is valid.

Yes, it is valid. That's partly because the model tries various alternatives (at a high level). But it was strange that one input gives me the expected result and input+1 gives a different one, which is why I was wondering about the internal computation.

Thank you for pointing me to the scaling options and rounding tolerances. I'll try those against my extensive test suite and see what works better. I tend not to play with parameters I don't understand...

Mark



Peter Notebaert

Mar 4, 2023, 9:16:18 AM
to Mark Streich, lp_solve
Note also that although it is valid, it is not the optimal solution; -6300000 is the optimum.

Peter


Peter Notebaert

Mar 4, 2023, 9:24:56 AM
to Mark Streich, lp_solve
Another thing you can try is presolve.

The original Using-6300000-BadResult.mps file in LP format is shown above. When you use the presolve option -presolve, the model becomes:

/* Objective function */
min: -6300000 C2 -6300000 C3 -6300000 C4 +1e-05 C6 -6300000 C8;

/* Constraints */

R31: -6300000 C3 -6300000 C4 +6299995 C6 <= 0.1;
R32: -6300000 C3 -6300000 C4 +6299995 C6 >= 0;
R35: -3150000 C2 -3150000 C3 +3150000 C4 +3150000 C8 <= 0.25;
R36: -3150000 C2 -3150000 C3 +3150000 C4 +3150000 C8 >= -0.25;

R37: +100 C2 +100 C6 +100 C8 <= 100;
R38: +100 C2 +100 C6 +100 C8 >= 99.999;

/* Variable bounds */
C2 <= 1;
C3 <= 1;
C4 <= 1;
C6 <= 1e+29;
C8 <= 1;

And this also gives the correct result:

Value of objective function: -6300000.00000000

Actual values of the variables:
C1                              1
C2                            0.5
C3                              0
C4                              0
C5                              0
C6                              0
C7                              0
C8                            0.5

In fact, what presolve did here was take all your constraints that act on only a single variable and replace them with bounds on the variables.

For example:

R3: +C6 >= 0;
R4: +C6 <= 1;

These can just be replaced by a bound on C6:

C6 <= 1;

Note that all variables are >= 0 by default.

Or you could adapt the program that generates this model to emit bounds on variables directly instead of extra constraints, as in the sketch below.
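In Java that would look roughly like this, again assuming the wrapper's camelCase names (set_lowbo -> setLowbo, set_upbo -> setUpbo):

import lpsolve.LpSolve;
import lpsolve.LpSolveException;

public class BoundsNotConstraints {
    static void demo() throws LpSolveException {
        LpSolve lp = LpSolve.makeLp(0, 8);  // 0 rows, 8 columns (C1..C8)

        // Instead of adding the rows "R3: +C6 >= 0;" and "R4: +C6 <= 1;",
        // set bounds directly on column 6 (columns are 1-based):
        lp.setLowbo(6, 0.0);  // redundant, since 0 is already the default lower bound
        lp.setUpbo(6, 1.0);

        lp.deleteLp();  // free the native model when done
    }
}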

Peter


Mark Streich

Mar 4, 2023, 10:16:53 AM
to lp_solve
On Saturday, March 4, 2023 at 9:24:56 AM UTC-5 Peter wrote:
Another thing you can try is presolve.

Does the -presolve command-line option set ALL of the set_presolve() options (https://lpsolve.sourceforge.net/5.5/set_presolve.htm)?

I see there are a number of -presolve* command-line options for the various alternatives, but I wondered whether -presolve by itself goes all-in.

I'll investigate, but will have to see if/how this will impact my code:
"PRESOLVE_LINDEP can result in deletion of rows (the linear dependent ones)"


Peter Notebaert

Mar 4, 2023, 10:37:00 AM
to Mark Streich, lp_solve
-presolve sets PRESOLVE_ROWS | PRESOLVE_COLS
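In the API that would be roughly the sketch below; setPresolve mirrors set_presolve(lp, do_presolve, maxloops), and I'm assuming getPresolveloops() is the wrapper's accessor for the default loop count:

import lpsolve.LpSolve;
import lpsolve.LpSolveException;

public class PresolveSetup {
    static void enable(LpSolve lp) throws LpSolveException {
        // The same bits the -presolve flag sets; keep the default maximum loop count.
        lp.setPresolve(LpSolve.PRESOLVE_ROWS | LpSolve.PRESOLVE_COLS,
                       lp.getPresolveloops());
    }
}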

Peter


Mark Streich

Mar 5, 2023, 5:24:20 PM
to lp_solve
Following up on Peter's suggestions, just to close the loop. I don't require any additional help at this time.

I tried plain -presolve (or the API equivalent, PRESOLVE_ROWS | PRESOLVE_COLS), and it broke my models, as I worried it might, since it deletes rows and columns. I don't have time to dig into what's happening or what I would need to change, so I tried the other suggestions.

I tried the scaling option(s), but they broke some of my tests or were unstable. I thought geometric scaling was the default (according to the manual), but set_scaling(SCALE_GEOMETRIC) broke some tests that consistently pass when the defaults are used. I'm using the Java interface, so I don't know if anything is hidden there.

The option that worked best for my usage and test suite was setting the epsilon (-epsel) to something smaller than the default. It keeps my code working and works across my inputs, so I'll go with it.

Thank you VERY MUCH, Peter, for the pointers and library.
