feasibility problem and optimization problem


Yulin

May 26, 2014, 2:23:53 PM
to yal...@googlegroups.com
Hello, a quick question.

When I set the optimization objective to [], YALMIP gives me an answer:

 Result:  best value of t: -1.788087e-07
          f-radius saturation:  4.241% of R =  1.00e+09

So the problem is feasible.

However, when I instead set the objective to one of the decision variables and try to find its minimum value, YALMIP reports

 Result:  could not establish feasibility nor infeasibility

 The LMI constraints were found INFEASIBLE

Why?

Johan Löfberg

May 26, 2014, 2:42:30 PM
to yal...@googlegroups.com

Yulin

May 27, 2014, 3:30:11 AM
to yal...@googlegroups.com
OK, I changed the solver and used SDPT3 instead. However, it still behaves the same way: when I solve the problem as a feasibility problem, the answer is feasible; when I solve it as an optimization problem, it goes wrong and Matlab reports sol.info as "Numerical problems (SDPT3-4)".

Still, why?

Thanks!

Yulin

May 27, 2014, 3:35:01 AM
to yal...@googlegroups.com
Here is the code:


clear all
load('model.mat')
X=sdpvar(6,6,'full');
P=sdpvar(6,6);
Y=sdpvar(2,6);
S1=sdpvar(6,6);
S2=sdpvar(6,6,'full');
S3=sdpvar(6,6);
inve=sdpvar(1);
rho=sdpvar(1);
alpha=sdpvar(1);
M=sdpvar(12,12);
bigX=blkdiag(X,X);
bigS=[S1,S2;S2',S3];

G11=-S3+d^2*S1+delta1*A*X'+delta1*X*A';
G12=S3+delta1*B*Y+X*A';
G13=-S2';
G14=d^2*S2+P-delta1*X'+X*A'*delta3;
G15=delta1*D*inve+X*L';
G16=delta1*Dstar;
G17=X;
G18=zeros(6,2);
G22=-S3+B*Y+Y'*B';
G23=S2';
G24=-X'+delta3*Y'*B';
G25=D*inve;
G26=Dstar;
G27=zeros(6,6);
G28=Y';
G33=-S1;
G34=zeros(6,6);
G35=zeros(6,2);
G36=zeros(6,2);
G37=zeros(6,6);
G38=zeros(6,2);
G44=d^2*S3-delta3*X'-delta3*X;
G45=delta3*D*inve;
G46=delta3*Dstar;
G47=zeros(6,6);
G48=zeros(6,2);
G55=-2*inve*eye(2);
G56=zeros(2,2);
G57=zeros(2,6);
G58=zeros(2,2);
 
G66=-rho*eye(2);
G67=zeros(2,6);
G68=zeros(2,2);
G77=-inv(Q);
G78=zeros(6,2);
G88=-inv(R);
G1=[G11,G12,G13,G14,G15,G16,G17,G18;
    G12',G22,G23,G24,G25,G26,G27,G28;
    G13',G23',G33,G34,G35,G36,G37,G38;
    G14',G24',G34',G44,G45,G46,G47,G48;
    G15',G25',G35',G45',G55,G56,G57,G58;
    G16',G26',G36',G46',G56',G66,G67,G68;
    G17',G27',G37',G47',G57',G67',G77,G78;
    G18',G28',G38',G48',G58',G68',G78',G88];
 
G2=[S1,S2;S2',S3];

G3=[alpha,x0';x0,l1*X'+l1*X-l1^2*P];
G4=[M,N';N,l2*bigX+l2*bigX'-l2^2*bigS];
Cons=[G1<0,P>0,G2>0,G3>0,G4>0,S1>0,S3>0,inve>0,rho>0,alpha>0,M>0];
Obs = alpha+trace(M);
% Obs=[]
options = sdpsettings('verbose',1,'solver','sdpt3');
sol = solvesdp(Cons,Obs,options);
% format longEng,
format
if sol.problem == 0
 Xopt = double(X)
 Yopt = double(Y)
 barPopt = double(P)
 barS1opt = double(S1)
 barS2opt  = double(S2)
 barS3opt = double(S3)
else
 display('Hmm, something went wrong!');
 yalmiperror(sol.problem)
end
(attachment: model.mat)

Johan Löfberg

May 27, 2014, 4:30:31 AM
to yal...@googlegroups.com
The problem simply looks badly scaled. As an optimal solution is approached, some variables have to grow huge and some have to shrink. The feasibility problem is apparently easier, as the solver isn't squeezed into a problematic corner to find the solution. I would recommend that you try to come up with a better model.

The following bisection hack indicates that the optimal objective is around 4.2e5:

% solve feasibility problem with a solver that works for that
sol = solvesdp(Cons)
upper = double(Obs);
lower = 0;
ops = sdpsettings('verbose',1); % Turn off when we know it works
while upper-lower > 1
    sol = solvesdp([Cons, Obs <= (lower+upper)/2],[],ops);
    if min(checkset(Cons)) > -1e-10
        % This was a feasible upper bound on optimal objective
        upper = (lower+upper)/2
    else
        % Nope, seems like the solver failed to find a reasonable solution
        lower = (lower+upper)/2;
    end
end


BTW, strict inequalities have no relevance:
http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Blog.Prepare-your-code

Johan Löfberg

May 27, 2014, 4:46:17 AM
to yal...@googlegroups.com
Your model is weird. rho is only used in the G66 block, where it enters as -rho*eye(2).

You want G1 to be negative definite, hence you must have G66 = -rho*eye(2) < 0. A trivial choice is rho = inf, since that effectively removes that row and column.


Johan Löfberg

May 27, 2014, 5:02:54 AM
to yal...@googlegroups.com
...and you have a bunch of redundant constraints, such as M>0, S1>0, S3>0 etc. They are all already forced to be positive (semi)definite, since they appear as diagonal blocks of other semidefinite constraints.
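For intuition, here is a standalone sketch (hypothetical 2x2 blocks, not the model above) of why such constraints are redundant: if the block matrix is positive semidefinite, then for any vector v the choice [v;0] gives v'*S1*v >= 0, so S1 >= 0 comes for free (and likewise S3 via [0;v]).

S1 = sdpvar(2,2); S2 = sdpvar(2,2,'full'); S3 = sdpvar(2,2);
G2 = [S1 S2; S2' S3];
Cons = G2 >= 0;   % already implies S1 >= 0 and S3 >= 0; adding them is redundant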

Yulin

May 27, 2014, 5:29:45 AM
to yal...@googlegroups.com
Yes. However, when I delete the lines involving rho in G1, i.e. I delete G16, G26, ..., G66 and their transposes, and solve the problem again, I get the same thing.

So is it the structure of G1 that prevents YALMIP from finding the optimal solution?

Yulin

May 27, 2014, 5:31:32 AM
to yal...@googlegroups.com
Yes, thanks, I noticed it. However, will it affect the solution?

Johan Löfberg

May 27, 2014, 5:42:41 AM
to yal...@googlegroups.com
Simplifying the model completely, i.e. removing all trivially redundant constraints, still leads to issues in all solvers I've tried. Your model is ill-posed somehow (related to the fact that alpha has to be huge while other matrices seem to have to be very small).

I think you must investigate the data and the feasible solutions to see if there is some trivial structure you can eliminate.

As a start, you have numerical noise in N (look at N: elements on the order of 10^-20!):

N(abs(N)<=1e-10)=0;


Solve the problem, and then look at M. It has almost-zero rows and columns. Permuting a bit reveals that the matrix is a permuted block-diagonal matrix with some scalar blocks.

solvesdp(Cons)
MM = double(M); MM(abs(MM)<=1e-9) = 0;
[p,r] = dmperm(MM);
MM(p,p)

Perhaps you should understand why, and then parameterize M as a permuted matrix of that form?

etc

Yulin

May 27, 2014, 8:30:33 AM
to yal...@googlegroups.com
I still don't understand. Let us drop N and the condition G4>0, and just solve the optimization problem min(alpha) with the constraints G1<0, G2>0, G3>0, P>0 and inve>0. It still cannot find the optimal solution, but it can give a feasible one, and in the feasible solution alpha is very big while the other matrices are small.

However, if we just use one feasible solution to calculate alpha,
alpha > x0'*(l1*X'+l1*X-l1^2*P)*x0 = 0.5616, but the solver gives a feasible alpha = 4.9694e+06!

Johan Löfberg

May 27, 2014, 8:36:30 AM
to yal...@googlegroups.com
Numerics in semidefinite programs is not obvious. Here, it seems relatively easy to find a feasible solution, but hard to find an optimal one. On the way towards optimality, the solvers run into numerical problems; some will then think the problem is infeasible, and some will return highly sub-optimal solutions while warning about numerical issues.

Johan Löfberg

May 27, 2014, 8:58:23 AM
to yal...@googlegroups.com
Trivial example (with x and y scalar sdpvars):

solvesdp([x 1;1 y]>=0,x,sdpsettings('solver','sedumi'))

It is trivial to find a feasible solution (any pair such that x*y>=1, for instance (2,2)). However, SeDuMi runs into numerical trouble when solving the optimization problem. The reason is that the optimal (unattainable) solution is x = 0+ and y = inf.
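To make the degeneracy concrete, here is a small numerical sketch (plain MATLAB, no solver needed): by a Schur-complement argument, [x 1;1 y] >= 0 with y > 0 is equivalent to x >= 1/y, so pushing the objective x towards 0 forces y to blow up while the matrix stays exactly on the PSD boundary.

for y = 10.^(0:4)
    x = 1/y;                  % tightest feasible x for this y (x*y = 1)
    e = min(eig([x 1; 1 y])); % smallest eigenvalue is 0: feasible, but extreme
    fprintf('y = %-8g x = %-10g min eig = %g\n', y, x, e);
end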

Johan Löfberg

May 27, 2014, 9:31:35 AM
to yal...@googlegroups.com
BTW, here is an extension (which can be extended further to see that there are some subtleties in the claim that semidefinite programming has polynomial complexity). It fails in all solvers I tried:

solvesdp([[x 1;1 y]>=0, [z y;y 1]>=0, [w z;z 1]>=0],x)

Can you see why this model is highly problematic?

I am not saying that this is what you see in your model, just pointing out that trivially feasible problems can cause massive trouble when you try to optimize.
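One way to see the difficulty (my reading of the example, not spelled out above): the Schur complements give x*y >= 1, z >= y^2 and w >= z^2 >= y^4, so as the objective x is pushed towards 0, y must blow up and each additional block must grow as the square of the previous one; with a chain of k such blocks the variables need magnitudes roughly doubly exponential in k, quickly exceeding floating-point range and solver tolerances.

% plain-MATLAB sketch of the forced growth along the chain
for y = 10.^(1:4)
    x = 1/y; z = y^2; w = z^2;   % tightest feasible choices
    fprintf('x = %-8g y = %-8g z = %-8g w = %g\n', x, y, z, w);
end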