How is the fourth output "res" calculated in solvesos


Yalmiper

Oct 14, 2015, 3:31:56 AM
to YALMIP
I have noticed that sometimes, at a given point, the difference between p and the z^T Q z obtained from solvesos can be huge (say 10^89), but since the value of z^T Q z itself is even bigger (say 10^92), the relative difference is small. However, I had an example where clean(p - z'*Q*z, 1e-7) makes the difference vanish entirely and Q is strictly positive definite, yet p takes the value -10^3 at some point.
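For context, my understanding of YALMIP's clean(p, tol) is that it drops terms whose coefficients are smaller in magnitude than tol. A minimal sketch of that idea on a plain coefficient vector (the numbers and the helper clean_coeffs are made up for illustration):

```python
import numpy as np

def clean_coeffs(coeffs, tol):
    """Zero out coefficients smaller in magnitude than tol
    (a rough stand-in for what clean(p, tol) does to a polynomial)."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(coeffs) < tol, 0.0, coeffs)

# Hypothetical coefficients of p - z'*Q*z: tiny mismatches plus one real term.
diff = [1e-9, -2e-8, 3.0]
print(clean_coeffs(diff, 1e-7))  # → [0. 0. 3.] -- the small terms vanish
```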

Johan Löfberg

Oct 14, 2015, 3:41:53 AM
to YALMIP
It's ||A(Q)-b||_inf

If you have values on the order of 10^89, you have nothing but garbage

z'*Q*z having a value is irrelevant, as sos never explicitly works with values of z. Perhaps you are simply saying that Q is insanely large (i.e., complete numerical failure).
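To illustrate the formula ||A(Q)-b||_inf with a toy example (a sketch of the bookkeeping only, not YALMIP's actual implementation): matching p = z'*Q*z coefficient by coefficient is an affine system A(Q) = b in the entries of Q, and res is the largest violation. Here p(x) = x^2 + 2x + 1 with basis z = [1, x] and a deliberately imperfect Gram matrix:

```python
import numpy as np

# Coefficients b of p in the monomial order [1, x, x^2].
p_coeffs = np.array([1.0, 2.0, 1.0])

# Candidate Gram matrix Q for z = [1, x]; slightly off on the cross term.
Q = np.array([[1.0, 0.9],
              [0.9, 1.0]])

# z'*Q*z = Q[0,0]*1 + (Q[0,1]+Q[1,0])*x + Q[1,1]*x^2 -- this is A(Q).
zQz_coeffs = np.array([Q[0, 0], Q[0, 1] + Q[1, 0], Q[1, 1]])

res = np.max(np.abs(zQz_coeffs - p_coeffs))
print(res)  # ~0.2: the x-coefficient is matched only to within 0.2
```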

Johan Löfberg

Oct 14, 2015, 3:45:55 AM
to YALMIP
i.e., what I am saying is that if you ever see values like 10^89, you should not trust anything related to that result. Numerical optimization cannot be done with that kind of numerics. I get nervous above, say, 10^6 and below 10^-6.

Yalmiper

Oct 14, 2015, 6:00:18 AM
to YALMIP
Hi Johan,
 
But in this case, everything looks fine. I know it is the same issue: lhs has coefficients spanning a large range of magnitudes. But the solver reported NEAR_OPTIMAL, res looks fine, and in p = z^T Q z the coefficients can be cleaned at 1e-6. In fact, I do believe the first sos constraint holds strictly (the first one is the most important). Yet at x = (1000,1), for example, both sides of the first sos constraint take values on the order of 1e32, and the difference is on the order of 10^29. Should I trust the result?

clear;
x = sdpvar(2,1);
T = [1,0;0,100];
y = T*x;
recRuls = [1 0;-1 0;0 1;0 -1];
alpha = 0;
ktc = 1/6;
kdm = 1/(3.6*(1+alpha*100));
kta = 1/60;
kdx = 1e-4;

fedmon = 1 + alpha*y(2);
recRate = [ktc;kdm*y(1)*fedmon;kta*y(1)*fedmon;kdx*y(2)*fedmon];

mono = monolist(x,3);
monoq = replace(mono,x,x.^2);
P = sdpvar(length(mono));
C = [P>=0];

V = mono'*P*mono;

qV=0;
for r = 1:size(recRuls,1)
    ds = y + recRuls(r,:)';
    ds = T\ds;
    qV = qV + recRate(r)*(replace(V,x,ds) - V);
end

[coq,moq] = coefficients(qV,x);
moqq = replace(moq,x,x.^2);
qqV = coq'*moqq;

fedmonq = replace(fedmon,x,x.^2);

b = sdpvar(1);
lhs = b-qqV;
C = C + sos(lhs);

B = [0,0;100,1000];
B = B/T;
a = B(1,:)';
c = B(2,:)';

N = [0,10;10,200];
N = N/T;
d = N(1,:)';
e = N(2,:)';
g = x.^2-e;
g = [g;d(2)-x(2)^2];

for i=1:length(g)
    l = sdpvar(1); 
    C = C + [l>=0] + sos(-(qqV+1)-l*g(i,:)');
end

ops = sdpsettings('solver','mosek');
[sol,v,Q,res] = solvesos(C,b,ops);
for k = 1:length(Q)
    rhs{k} = v{k}'*Q{k}*v{k};
end

eP = value(P);
eb = value(b);
V = mono'*eP*mono;

qV=0;
for r = 1:size(recRuls,1)
    ds = y + recRuls(r,:)';
    ds = T\ds;
    qV = qV + recRate(r)*(replace(V,x,ds) - V);
end

[coq,moq] = coefficients(qV,x);
moqq = replace(moq,x,x.^2);
vqV = coq'*moqq;

lhst = b-vqV;

Johan Löfberg

Oct 14, 2015, 6:20:17 AM
to YALMIP
Which version of matlab/yalmip/mosek are you running? It crashes in the call to mosek here

Johan Löfberg

Oct 14, 2015, 6:21:23 AM
to YALMIP
Scratch that, my mistake.

Johan Löfberg

Oct 14, 2015, 6:39:05 AM
to YALMIP
Don't understand what you are talking about. res is on the order of 10^-6 here, and with the eigenvalues of the 4 blocks in Q on the order of 10^-5 to 10^-7, no theoretical guarantee is possible. Likely a good solution, but you do not have a numerically validated proof.

Yalmiper

Oct 14, 2015, 7:45:43 AM
to YALMIP
OK. I thought the res was not bad.

So in my problems the numerical coefficients of "lhs" always span a large range of magnitudes. How exactly does this make the problem difficult? Is it because it makes the G_i, as in http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Blog.Strictly-feasible-SOS-solutions, ill-conditioned, so that when using an interior-point method the linear systems that need to be solved along the central path are ill-conditioned?

Also, in these solutions Q is ill-conditioned. If a polynomial can only have an ill-conditioned decomposition, does this mean it will be very hard for the solver to find it?

Johan Löfberg

Oct 14, 2015, 7:51:38 AM
to YALMIP
It means that the small values of lhs will disappear in the general tolerances of the solver and numerics. They will essentially be zero in the big picture
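A minimal illustration of small coefficients vanishing "in the big picture": when magnitudes span roughly 16 orders, double precision alone erases the small terms, before any solver tolerance even enters.

```python
# Adding 1 to 1e16 is a no-op in IEEE double precision: 1.0 is below
# half an ulp of 1e16, so the small term is simply absorbed.
big, small = 1e16, 1.0
print((big + small) - big)  # 0.0 -- the small coefficient has disappeared
```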

Yalmiper

Oct 14, 2015, 7:55:04 AM
to YALMIP
OK. Thank you!

Unfortunately this seems inherent in a lot of the examples I am trying to solve. I believe the key is to choose a different basis for the SOS decomposition, so I am looking forward to seeing alternative bases implemented. On the other hand, I do not have much of a clue about what kind of basis I should choose, which is probably the reason other basis choices are not implemented.

Johan Löfberg

Oct 14, 2015, 8:23:35 AM
to YALMIP
Pablo's student looked a bit at different bases, in the context of an alternative formulation that we worked on.

Yalmiper

Oct 14, 2015, 8:39:36 AM
to YALMIP
Thank you!  This looks helpful.

I asked you earlier why you had not implemented the Lagrangian basis in YALMIP, as you and Prof. Pablo indicated in that conference paper. In any case, I first need to know what kind of basis is best for my problem.

Yalmiper

Oct 14, 2015, 9:47:15 AM
to YALMIP
I have another example that behaves strangely. The following optimization problem is basically the same as before, but this time I only have one sos constraint. This problem should have optimal value 0 and optimal solution value(P) = some constant. In the third line I have T = [1,0;0,1], which is a linear transformation of the variables used in the polynomials. The strange thing is that with the following code, MOSEK, SDPT3 and SeDuMi all run into problems and give very poor solutions. If I set T = [1,0;0,100], they all give the expected solution. Yet when T = [1,0;0,1], getbase(lhs) has a much smaller span of magnitudes, i.e., 10^-4 to 1, compared with 10^-12 to 1 in the case of T = [1,0;0,100].

clear;
x = sdpvar(2,1);
T = [1,0;0,1];

y = T*x;
recRuls = [1 0;-1 0;0 1;0 -1];

kdm = 0.02;
ktc = 0.01;
kdx = 2.5e-4;
kta = 0.05;

recRate = [ktc;kdm*y(1);kta*y(1);kdx*y(2)];


mono = monolist(x,3);
monoq = replace(mono,x,x.^2);
P = sdpvar(length(mono));
C = [P>=0];

V = mono'*P*mono;

qV=0;
for r = 1:size(recRuls,1)
    ds = y + recRuls(r,:)';
    ds = T\ds;
    qV = qV + recRate(r)*(replace(V,x,ds) - V);
end

[coq,moq] = coefficients(qV,x);
moqq = replace(moq,x,x.^2);
qqV = coq'*moqq;

b = sdpvar(1);
lhs = b-qqV;
C = C + sos(lhs);

Johan Löfberg

Oct 14, 2015, 10:10:31 AM
to YALMIP
Numerical optimization ain't always fun.

Yalmiper

Oct 14, 2015, 10:13:17 AM
to YALMIP
I just wish I had time now to set a bunch of breakpoints in YALMIP and SDPT3 to see what the heck is going on...

Yalmiper

Oct 15, 2015, 4:58:50 AM
to YALMIP
Right. I think I have a theory why this might be happening.

Do

B = coefficients(lhs,x);
A = getbase(B);
C = A*A';

and the conditioning of C is very, very bad in the case T = [1,0;0,1], while when T = [1,0;0,100] it is relatively good. This does not show up when you just look at getbase(lhs).
I think it makes some sense to evaluate the conditioning of A*A' for an underdetermined system Ax = b, but of course it is just a guess.
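A toy numeric illustration of that guess (made-up matrices, not the actual YALMIP data): for an underdetermined system Ax = b, interior-point methods repeatedly factor systems built from A*A', so badly scaled columns blow up cond(A*A'), while a diagonal rescaling of the variables (the role T plays in the script above) can repair it.

```python
import numpy as np

# Two equations, three unknowns; one variable enters six orders of
# magnitude more weakly than the others.
A = np.array([[1.0, 0.0,  0.0],
              [0.0, 1e-6, 0.0]])
print(np.linalg.cond(A @ A.T))   # ~1e12: the normal-equations matrix is awful

# Hypothetical variable rescaling x -> D*x, analogous to choosing T.
D = np.diag([1.0, 1e6, 1.0])
As = A @ D
print(np.linalg.cond(As @ As.T))  # 1.0: rescaling fixes the conditioning
```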

Johan Löfberg

Oct 15, 2015, 5:03:23 AM
to YALMIP
Freund, Ordonez & Toh have worked a bit on estimating condition numbers for SDPs