Non-convex SDP


Ying Ma

Apr 29, 2015, 10:00:30 PM
to yal...@googlegroups.com
Hi,
I am trying to use YALMIP to solve a problem like this:
min log(2*det(X))
s.t. X > eye(3)
The MATLAB code is
X = sdpvar(3);
% Reformulate psd constraint
R = triu(sdpvar(3));
Model = [X == R*R' + eye(3)];
sol = optimize(Model, log(2*det(X)))
The answer is X = eye(3).
So I used the same method to solve the problem
min -log(2*det(X))
s.t. X < eye(3)
The MATLAB code is
X = sdpvar(3);
% Reformulate psd constraint
R = triu(sdpvar(3));
Model = [X == R*R' + eye(3)];
sol = optimize(Model, -log(2*det(X)))
I know the answer is X = eye(3), but something goes wrong; the output is:
   yalmiptime: 0.1627
    solvertime: 1.9453
          info: 'Maximum iterations or time limit exceeded (FMINCON)'
       problem: 3
Then I changed my MATLAB code to
X = sdpvar(3);
Model = [X <= eye(3)];
sol = optimize(Model, -log(2*det(X)))
It still doesn't work; the output is:
    solvertime: 0
          info: 'No suitable solver'
       problem: -2
    yalmiptime: 0.1420

I cannot figure out the reason. Could you tell me what the problem is? How can I solve the problem:
min -log(2*det(X))
s.t. X < eye(3)

Johan Löfberg

Apr 30, 2015, 1:51:53 AM
to yal...@googlegroups.com
First, log(det(X)) is a concave operator, so minimizing it is a nonconvex problem. Additionally, you have the nonconvex equality R*R' = .... Furthermore, working explicitly with the determinant is no fun, as it is a horribly complex operator (derive an explicit expression for a 4x4 matrix and you will get a full A4 page). Consequently, you will not solve that formulation.

The closest you can get is to maximize log(det(X)) under convex constraints. Note the use of the logdet operator below. Some solvers exploit this form directly (SDPT3), but if they don't, YALMIP can reformulate the logdet expression using SDP and SOCP constraints (i.e., you need a good SDP solver: MOSEK, SeDuMi, SDPT3, ...).

optimize(X <= eye(3), -logdet(X))
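Spelled out as a complete script (a sketch; it assumes an SDP-capable solver such as SDPT3, SeDuMi or MOSEK is installed):

X = sdpvar(3);
% Maximize log(det(X)) subject to X <= I. The logdet operator keeps the
% problem in a form the solver (or YALMIP's reformulation) can exploit.
sol = optimize(X <= eye(3), -logdet(X));
value(X)  % should return (approximately) the identity matrix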



Ying Ma

Apr 30, 2015, 3:09:51 AM
to yal...@googlegroups.com
For the first problem,
min log(2*det(X))
s.t. X > eye(3)
although it is a nonconvex problem and I used the nonconvex equality R*R' = ..., it is solved correctly.
But for the second problem,
min -log(2*det(X))
s.t. X < eye(3),
it is convex and I used the same logic to write the MATLAB code, but it goes wrong. I don't know what the difference is here.

Ying Ma

Apr 30, 2015, 3:32:29 AM
to yal...@googlegroups.com
The final problem I want to solve is
M_hat = eye(4);
X = sdpvar(4,4);
FI = sdpvar(4,4);
% H_1 is given data, defined elsewhere in my code
Constraints = [[X, M_hat; M_hat, H_1*FI*H_1'*1e4 + eye(4)] >= 0; FI(1,1) <= 1; FI(2,2) <= 1; FI(3,3) <= 1; FI(4,4) <= 1];
Objective = log(det(M_hat'*X*M_hat + eye(4)));
sol = optimize(Constraints, Objective);

But it said "No suitable solver". Which solver do I need?

Johan Löfberg

Apr 30, 2015, 4:22:17 AM
to yal...@googlegroups.com
You cannot minimize log(det()) directly using the solvers available in YALMIP. BTW, in this form, there is no reason to keep the logarithm. You can always try to solve it using a nonlinear SDP solver such as PENLAB and hope for a locally optimal solution, but as the objective function det() is seriously ugly, I would be surprised if it worked.
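For reference, selecting PENLAB (assuming you have it installed) would look like:

sol = optimize(Constraints, Objective, sdpsettings('solver','penlab'));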

Ying Ma

Apr 30, 2015, 5:49:09 AM
to yal...@googlegroups.com
Several months ago, you helped me reformulate the problem
min log(2*det(X))
s.t. X > eye(3)
as follows:
A = randn(3); A = A*A';

X = sdpvar(3);

% Reformulate psd constraint
R = triu(sdpvar(3));
Model = [X - A == R*R'];
optimize(Model, log(2*det(X)))
It works very well. Could you help me reformulate my final problem, please? It is really crucial to my whole system. You can use some approximation.
M_hat = eye(4);
X = sdpvar(4,4);
FI = sdpvar(4,4);

Constraints = [[X, M_hat; M_hat, H_1*FI*H_1'*1e4 + eye(4)] >= 0; FI(1,1) <= 1; FI(2,2) <= 1; FI(3,3) <= 1; FI(4,4) <= 1];
Objective = log(det(M_hat'*X*M_hat + eye(4)));
sol = optimize(Constraints, Objective);
The matrices are not very big. Maybe MATLAB will run for a long time, but as long as it produces an answer, that is OK for me.
Thank you very much for your help.

Johan Löfberg

Apr 30, 2015, 6:27:16 AM
to yal...@googlegroups.com
So why don't you simply use exactly the same strategy to get rid of the SDP constraint, as you already know what to do? See the sketch below.

BTW, you can skip the log; it makes no difference as it is monotone.
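Concretely, something along these lines (a sketch; H_1 is your data, and fmincon is just one possible general nonlinear solver):

R = triu(sdpvar(8));
S = [X, M_hat; M_hat, H_1*FI*H_1'*1e4 + eye(4)];
% Replace the SDP constraint S >= 0 by the nonconvex quadratic equality
Model = [S == R*R', FI(1,1) <= 1, FI(2,2) <= 1, FI(3,3) <= 1, FI(4,4) <= 1];
% The log is dropped: det and log(det) have the same minimizers here
% (as long as the determinant stays positive)
sol = optimize(Model, det(M_hat'*X*M_hat + eye(4)), sdpsettings('solver','fmincon'));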

Ying Ma

Apr 30, 2015, 8:34:33 AM
to yal...@googlegroups.com
Do you mean changing it like this?

R = triu(sdpvar(8));
Constraints = [[X, M_hat; M_hat, H_1*FI*H_1'*1e4 + eye(4)] == R*R'; FI(1,1) <= 1; FI(2,2) <= 1; FI(3,3) <= 1; FI(4,4) <= 1];
Objective = det(M_hat'*X*M_hat + eye(4));
The answer is still wrong. Does it solve the problem using Newton's method or steepest descent? If so, it cannot obtain the right answer, because the iteration stops when the first-order derivative is small. It doesn't use the monotonicity.

Johan Löfberg

Apr 30, 2015, 9:19:55 AM
to yal...@googlegroups.com
Yes, a quadratic decomposition constraint is what you apparently used before.

Which algorithm is used to solve the problem depends on which solver you use. If you use fmincon, it uses various quasi-Newton methods; you will have to look at the documentation and change options accordingly.
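For example (a sketch; ops.fmincon is an optimset structure, so the standard fmincon option fields apply):

ops = sdpsettings('solver','fmincon','verbose',1);
ops.fmincon.MaxIter = 10000;      % raise fmincon's iteration limit
ops.fmincon.MaxFunEvals = 50000;  % and its function-evaluation limit
sol = optimize(Constraints, Objective, ops);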

The fact that it doesn't work is just a consequence of the simple fact that the problem is hard. There is no guarantee that these nonlinear solvers solve your nonconvex problem.

Your comment about monotonicity makes no sense. What I am saying is that the problems min f(x) s.t. g(x) <= 0 and min h(f(x)) s.t. g(x) <= 0 have the same set of minimizers if h is monotonically increasing. The convergence and practical behavior might be completely different, of course.

Ying Ma

Apr 30, 2015, 9:31:01 AM
to yal...@googlegroups.com
Thank you very much for your help. Maybe there is no solver that can solve it. I think I will give up working on it.
Thank you again for your kind help and patient analysis.

Mark L. Stone

Apr 30, 2015, 12:11:32 PM
to yal...@googlegroups.com
I tried this "Cholesky factorization" type constraint approach for enforcing a psd condition, and the non-convexity presents a challenge to solvers.

An alternative approach, which worked much better for me, but which I don't think is doable in YALMIP due to lack of eigenvalue support (Johan can correct me if I am wrong), is to use a constraint on the min eigenvalue of the matrix which is being constrained to be psd. Even though the constraint is non-differentiable at a repeated eigenvalue, I have used this approach very successfully with KNITRO on several problems, and in principle it should work with FMINCON as well, although FMINCON does not necessarily handle it as gracefully. That said, the non-differentiability of the eigenvalue constraint could potentially cause difficulties in some problems, even though it did not in mine. I believe you can use U*U' as the gradient of the min eigenvalue, where U is a unit-length eigenvector corresponding to the min eigenvalue of the matrix being constrained to be psd. However, as I stated, this gradient is not correct (it doesn't exist) if the min eigenvalue is repeated; nevertheless, the approach seems to work on mine (and many) problems by just blasting through it. It can be used for LMIs or BMIs.

If instead one wishes to constrain a 2-norm to be <= some value, a similar approach can be taken by constraining the maximum singular value (i.e., the 2-norm) to be <= that value. The gradient would then be U(:,1)*V(:,1)', where U(:,1) and V(:,1) are respectively the unit-length left and right singular vectors corresponding to the largest singular value of the matrix being constrained, i.e., the U and V from [U,S,V] = svd(matrix being constrained) in MATLAB.

It seems to me (as a naive outsider) that YALMIP could be extended to handle eigenvalue and 2-norm constraints in this manner, although maybe a warning should be provided, as using the gradients I have shown technically violates "the law". This would allow the approach to be used with YALMIP as a front-end to KNITRO, FMINCON, or similarly interfaced solvers.
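For concreteness, a minimal sketch of the min-eigenvalue approach in plain MATLAB/FMINCON (no YALMIP). The 3x3 problem, the variable packing, and the function names are made up for illustration, and it relies on fmincon's default finite-difference gradients rather than the U*U' gradient discussed above:

function mineig_demo
% Minimize trace(X) subject to X - eye(3) psd, enforced through
% min(eig(X - eye(3))) >= 0. The optimum should be X = eye(3).
x0 = 2*eye(3); x0 = x0(tril(true(3)));   % pack the lower triangle of 2*I
x = fmincon(@(x) trace(unpack(x)), x0, [], [], [], [], [], [], @psdcon);
disp(unpack(x))
end

function X = unpack(x)
% Rebuild a symmetric 3x3 matrix from its 6 lower-triangular entries
X = zeros(3); X(tril(true(3))) = x; X = X + tril(X,-1)';
end

function [c, ceq] = psdcon(x)
% fmincon convention is c(x) <= 0, so c = -min(eig(...)) enforces psd
c = -min(eig(unpack(x) - eye(3)));
ceq = [];
end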

Mark L. Stone

Apr 30, 2015, 12:27:52 PM
to yal...@googlegroups.com
Well, just for the heck of it, I tried the min eig approach in YALMIP, which I didn't think supported eig, and this is what happened:
Q = sdpvar(2,2)
optimize(min(eig(Q)) >= 0, [], sdpsettings('solver','fmincon')) produced a feasible Q (value(Q) = diag(0.1984))
optimize(min(eig(Q)) >= 0, trace(Q), sdpsettings('solver','fmincon')) reported infeasible (using default tolerances)
optimize(min(eig(Q)) >= 0, [], sdpsettings('solver','knitro')) crashed MATLAB
optimize(min(eig(Q)) >= 0, trace(Q), sdpsettings('solver','knitro')) crashed MATLAB

Johan Löfberg

Apr 30, 2015, 1:03:29 PM
to yal...@googlegroups.com
Hmm, neither did I. I guess I've been playing around.

Mark L. Stone

May 8, 2015, 9:01:53 PM
to yal...@googlegroups.com
Just for the record, let me correct a "typo" in my first post in this thread. That should be W*V' as the gradient of the min eigenvalue, where V is a unit-length right eigenvector and W is a unit-length left eigenvector corresponding to the min eigenvalue, i.e., the columns of V and W corresponding to the min eigenvalue from [V,D,W] = eig(matrix being constrained to be psd).
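In code, a sketch (A stands for the matrix being constrained; for symmetric A with unit-length eigenvectors this is the plain outer product, otherwise divide by W(:,k)'*V(:,k)):

[V, D, W] = eig(A);      % V: right eigenvectors, W: left eigenvectors
[~, k] = min(diag(D));   % index of the min eigenvalue
g = W(:,k) * V(:,k)';    % gradient of that eigenvalue w.r.t. the entries of A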