Yalmip-Mosek precision

Dylan Caverly

Aug 26, 2016, 10:45:12 AM
to YALMIP
I'm using MOSEK to solve a positive semidefinite linear matrix inequality. Is there any way to improve the precision? I'm dealing with numbers like 2 + 1.0e-7 and need that 1.0e-7 to be included.

Johan Löfberg

Aug 28, 2016, 3:59:12 AM
to YALMIP
You'll simply have to look through the available options and pick reasonable candidates:

K>> ops = sdpsettings;
K>> ops.mosek

ans = 

                       MSK_DPAR_ANA_SOL_INFEAS_TOL: 1.000000000000000e-06
...


Erling D. Andersen

Aug 29, 2016, 4:07:00 AM
to YALMIP
I would suggest using MOSEK version 8. That usually provides better accuracy than version 7.

If that does not help, then I suggest you think about how you model things. Is the scaling right, for instance? Next you can try to change the MOSEK tolerances, but in most cases that will have no effect, since MOSEK tends to report as accurate a solution as it can. MOSEK cannot solve a problem to arbitrarily high accuracy, since all computations are done in finite precision.

In other words, the need to change the solver tolerances is often a consequence of a bad model.
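Erling's scaling point can be seen without any solver at all. A minimal Python sketch (plain floating-point arithmetic, not MOSEK code) of why the original 2 + 1.0e-7 is safe while the same 1e-7 term next to data of order 1e9 is not:

```python
# Double precision carries about 16 significant digits, so what a
# computation can preserve depends on the magnitude of the surrounding
# data, not on any absolute threshold.

well_scaled = (2.0 + 1.0e-7) - 2.0        # the 1e-7 term next to data of order 1
badly_scaled = (1.0e9 + 1.0e-7) - 1.0e9   # the same term next to data of order 1e9

# Relative error of the recovered 1e-7 term:
print(abs(well_scaled - 1.0e-7) / 1.0e-7)    # tiny, below 1e-8
print(abs(badly_scaled - 1.0e-7) / 1.0e-7)   # ~0.19: a fifth of it is already gone
```

Interior-point termination tolerances are relative (typically around 1e-8), so the same logic carries over: with data of order 1 a 1e-7 coefficient sits well above the noise floor, while data of order 1e9 buries it.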


Pedro Ascencio

Sep 29, 2016, 9:50:43 AM
to YALMIP
Great improvements!
For my SDP problem, the accuracy went from 1e-6 (version 7) to 1e-14 (version 8), and it is stable.

Mark L. Stone

Sep 29, 2016, 2:24:54 PM
to YALMIP
@Erling D. Andersen Hop on the Michael Saunders bandwagon and go quad precision (e.g., Quad MINOS). I, myself, am waiting for hardware octuple (or higher) precision so that I can accurately enough do conditional Normal calculations (Schur complement type stuff) on extremely ill-conditioned covariance matrices, without incurring the orders-of-magnitude slowdown from using software multiple precision.

On a serious note, if a quad precision version is easy to do, have it available for when users really need the accuracy, or when the problem is intrinsically ill-conditioned. And for regular problems, just use the hardware double precision version.
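For illustration only, Python's standard-library decimal module can stand in for quad precision (the real thing, as in Quad MINOS, is IEEE binary128 with roughly 34 significant digits). A sketch of the kind of cancellation Mark is describing:

```python
from decimal import Decimal, getcontext

getcontext().prec = 34   # roughly the 34 significant digits of IEEE binary128

# Hardware double precision carries ~16 digits, so a 1e-17 perturbation
# of a quantity of order 1 vanishes entirely:
print((1.0 + 1e-17) - 1.0)                          # 0.0

# Emulated quad precision keeps it:
print(Decimal(1) + Decimal("1e-17") - Decimal(1))   # 1E-17
```

Software multiple-precision arithmetic like this is orders of magnitude slower than hardware doubles, which is exactly the trade-off under discussion here.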

Erling D. Andersen

Sep 29, 2016, 3:31:33 PM
to YALMIP
Quad precision is interesting. Currently, quad precision is badly supported on Windows; MS dropped it from their compiler years ago.

Also, Kahan is pessimistic about quad support in hardware. And then it is going to be slow.

PS. My experience says it is not going to be easy, though.

Mark L. Stone

Sep 29, 2016, 4:22:34 PM
to YALMIP
Well, Quad MINOS exists now. So what I'm suggesting is: if a quad version is "easy" to do, then it can be used for the really difficult problems, and we just accept that doing it in software will be very slow. I leave it to you how feasible/effective a hybrid approach is, where computation starts in hardware double precision and then switches to software quad precision when needed, or does just small parts of the calculation in quad.

BTW, I think I did some 128-bit extended precision, i.e., quad precision more or less, on an IBM 370/168 in 1980 (yes, I'm ancient).

Yeah, maybe Kahan's right. In 1990, I thought that within 10 to 15 years, quad precision would replace double precision as the standard, much as double precision had recently replaced single precision as the standard. Instead, we now have people computing with massive data sets in single precision on GPUs, using numerically unstable algorithms. Gee, not much can go wrong there.

Mark L. Stone

Sep 29, 2016, 4:34:35 PM
to YALMIP
Yeah, MINOS is in FORTRAN, which has better support for quad than most C/C++.

There is f2c.  Is there c2f, ha ha?

Magnus Nilsson

Sep 30, 2016, 11:27:59 AM
to YALMIP
This topic caught my attention, in particular Erling's suggestion about scaling during modeling.

I wonder if anyone can give useful general pointers to scaling of LMIs during the modeling phase?

The "practical" case is when I use software such as YALMIP or CVX to model an optimization problem. 

I am still a non-expert on applying LMIs in optimization problems and have not yet reached a good intuition on how to do this.


Mark L. Stone

Sep 30, 2016, 2:23:42 PM
to YALMIP
These are all legally available for free.

There's a brief write up in sections 2.2.5 and 2.2.6 of the MOSEK Modeling Cookbook http://docs.mosek.com/MOSEKModelingCookbook-letter.pdf .

If you read and understand the whole MOSEK Modeling Cookbook, that will help you a lot in formulating LMIs and other optimization problems.

Also read and work some problems in http://stanford.edu/~boyd/cvxbook/ - that will help you formulate SOCPs, LMIs, etc., but I don't think it discusses scaling or tolerances.

If you have a strong constitution and the math "chops", you can advance to "Linear Matrix Inequalities in System and Control Theory" by Stephen Boyd, Laurent El Ghaoui, E. Feron, and V. Balakrishnan, at http://stanford.edu/~boyd/lmibook/

Johan Löfberg

Sep 30, 2016, 4:30:20 PM
to YALMIP
Simplest rule: keep your data around 1 and your optimal solution nicely around 1. The further away from this you are, the more the solver will struggle. Of course that is not the whole truth, but it is a reasonable first-order approximation.
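To make that rule concrete, here is a small Python sketch (hypothetical numbers, no solver involved) of what diagonal rescaling, i.e. a change of variables, does to the conditioning of a badly scaled symmetric 2x2 data block:

```python
from math import sqrt

def cond_sym2(a, b, c):
    """Condition number of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mid = (a + c) / 2.0
    rad = sqrt(((a - c) / 2.0) ** 2 + b * b)
    return abs(mid + rad) / abs(mid - rad)   # |lambda_max| / |lambda_min|

# Entries spread over 12 orders of magnitude: hard for any solver.
a, b, c = 1e6, 1.0, 3e-6
print(cond_sym2(a, b, c))        # ~5e11

# Rescale by D = diag(1/sqrt(a), 1/sqrt(c)); the scaled matrix D*A*D
# is [[1, s], [s, 1]] with everything of order 1:
s = b / sqrt(a * c)
print(cond_sym2(1.0, s, 1.0))    # ~3.7
```

In YALMIP terms this amounts to substituting x = D*y for a suitable diagonal D (and rescaling the constraints likewise) before building the LMI, so that both the data and the optimal solution end up around 1.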

Mark L. Stone

Jul 18, 2020, 10:48:37 PM
to YALMIP
Erling D. Andersen wrote: "Kahan is pessimistic about quad support in hardware."

Yeah, I guess Kahan was right. Instead of quad-precision hardware, we have half-precision hardware.
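The half-precision point is easy to demonstrate from Python, whose struct module has supported the IEEE half-precision 'e' format since version 3.6:

```python
import struct

# IEEE half precision has a 10-bit significand: roughly 3 significant
# decimal digits, versus ~16 for double precision.

x = 1.0 / 3.0
half = struct.unpack('e', struct.pack('e', x))[0]   # round-trip through float16

print(half)            # 0.333251953125
print(abs(half - x))   # ~8e-5
```

That is plenty for many machine-learning workloads, but nowhere near the 1e-7 resolution asked about at the top of this thread.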