Possible size of SDP


MaMü

Jun 18, 2014, 5:54:54 AM
to yal...@googlegroups.com
I have a question on the possible size of an SDP that I can solve with Yalmip/SeDuMi.

I am trying to solve an SDP of size 500, i.e. the positive semidefinite matrices I am looking for have this size. The problem has roughly 130,000 linear constraints on the matrix entries (some of them are redundant, but I cannot identify which). If I try to solve it with the "solvesdp" command, using SeDuMi, the program runs for 10 hours without showing any progress and then usually crashes. Now my question is: is this a problem of my hardware, i.e. can I expect the program to run better on a better computer, or does the size of the problem exceed what Yalmip/SeDuMi can handle in principle?

Thanks for any kind of answer!

Johan Löfberg

Jun 18, 2014, 6:44:37 AM
to yal...@googlegroups.com
With that size, you are definitely reaching the limit of what can be expected from vanilla second-order SDP solvers. I am surprised you even got SeDuMi started; your problem must be insanely sparse, or you have a massive amount of memory on your machine. I run into memory issues far earlier on my machine (use the 'debug' flag to see what causes the crash).

From your description, it sounds as if you have a model which naturally fits a primal-space interpretation. Hence, I assume you are using the dualization procedure (or you have somehow done that manually).

Running a small experiment with 1000 equalities and sedumi:

yalmip('clear')
X = sdpvar(500);
m = 1000;
A = sprandn(m,500^2,1e-4);
b = randn(m,1);

F = [X >= 0, A*X(:) == b];
solvesdp(F,trace(X),sdpsettings('dualize',1,'solver','sedumi','debug',1))

This takes 60 seconds on my machine. Since complexity is cubic w.r.t. the number of equalities m in the primal (a linear system of equations of size m is one of the dominant operations), I would expect on the order of 60*13^3 seconds for sedumi to complete the 13x larger case, i.e. around 36 hours...
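For concreteness, a quick back-of-the-envelope check of the cubic-scaling estimate above (the numbers are simply the ones quoted there, not new measurements):

t_small = 60;                    % seconds for the m = 1000 experiment
scale   = 13;                    % assumed ratio between the large and the small case
t_large = t_small*scale^3/3600   % predicted time in hours, roughly 36.6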

Try the first-order solver SDPNAL instead. It solves the same problem in 18 seconds, and the case m=8000 in 36 seconds (I get memory problems for larger instances). Precision might be lower, as is to be expected from first-order methods, but at least the problem is approximately solved.
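In case it helps, a minimal sketch of switching the experiment above over to SDPNAL; this assumes SDPNAL is installed and on the MATLAB path, and that 'sdpnal' is the solver tag YALMIP expects:

% same model F, trace(X) as above, now with the first-order solver SDPNAL
solvesdp(F,trace(X),sdpsettings('solver','sdpnal','debug',1))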

Erling D. Andersen

Jun 18, 2014, 7:53:10 AM
to yal...@googlegroups.com
Since you have a lot of linear constraints, optimizers like SeDuMi and MOSEK will have to compute a Cholesky factor of a 130K-by-130K dense matrix. That is going to be slow.

If you can split your semidefinite variable into a block-diagonal matrix, then it will be faster (see the sketch below). If most of your constraints will be nonbinding at the optimum, then a constraint generation approach might work well.
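To illustrate the first suggestion, a minimal YALMIP sketch of modelling with blocks instead of one big variable; the two 250x250 blocks are purely hypothetical and would have to come from the actual problem structure:

% hypothetical: two independent 250x250 blocks instead of one 500x500 X
X1 = sdpvar(250);
X2 = sdpvar(250);
F  = [X1 >= 0, X2 >= 0];
% the linear constraints would then act on entries of X1 and X2 separately

With such a split, the solver works with two small semidefinite cones rather than one large one.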

Johan Löfberg

Jun 18, 2014, 8:13:49 AM
to yal...@googlegroups.com
Just want to make sure my understanding is correct: even if we managed to write X as a composition of many small blocks, the most likely dense 13k x 13k Cholesky would remain. Small blocks in X simply mean faster compilation of ADA (which of course might be dominating the factorization).

Erling D. Andersen

Jun 18, 2014, 8:45:57 AM
to yal...@googlegroups.com
I think about it like this.

Assume that the i'th constraint looks like this

     sum_j <A_{ij},X_j>  = b_i

and X_j is the j'th block and A_{ij} is a symmetric matrix. Now let us create a symbolic matrix called H.

    H_{ij} = 0 if A_{ij} = 0, and H_{ij} = 1 otherwise
  
Note

    H_{ij} is a scalar
    A_{ij} is a matrix

Then the sparsity pattern of HH' determines the sparsity pattern of the matrix (the Schur complement) we are going to factor.
So we can factor HH' cheaply if it has a nice sparsity structure, for instance an arrow matrix.

I made the natural assumption that if there is only ONE semidefinite variable, then A_{i1} != 0 for all i and hence H will consist of one dense column.

If you have many small semidefinite variables the situation can be much better.
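As a small illustration (my own sketch, not from the thread), here is how one might build this symbolic H in MATLAB, assuming the constraint data is stored in a cell array A{i,j} holding the matrix applied to block j in constraint i:

% assumed data layout: A{i,j} is the matrix for block j in constraint i
% m = number of constraints, p = number of semidefinite blocks (given by the model)
H = sparse(m, p);
for i = 1:m
    for j = 1:p
        H(i,j) = any(A{i,j}(:));   % 1 if block j enters constraint i, 0 otherwise
    end
end
spy(H*H')   % sparsity pattern of the Schur complement the solver has to factor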

[Now I hope my memory of this is intact and I am not telling you nonsense. The Japanese SDP group is talking about correlative sparsity and I think that is the same or at least related.
Once again my memory may be vague.] 

Johan Löfberg

Jun 18, 2014, 9:01:53 AM
to yal...@googlegroups.com
Yes, sparse structure in the data can lead to structure in the Schur matrix. I was thinking of the general unstructured (but sparse) case.

Anyway, some more info from the original poster would be interesting.

Erling D. Andersen

Jun 18, 2014, 9:07:27 AM
to yal...@googlegroups.com
MOSEK will do the usual sparsity-preserving ordering of HH' known from the linear case. So if HH' is reasonably sparse (this implies no dense columns in H), then it will be cheap to factor.
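To make the ordering remark concrete, a small self-contained illustration (my own example, not MOSEK internals) of how a fill-reducing ordering changes the Cholesky fill-in of an HH'-like matrix:

H = sprand(200, 50, 0.05);              % random sparse pattern standing in for H
S = H*H' + speye(200);                  % shifted so the pattern is positive definite
p = symamd(S);                          % sparsity-preserving minimum-degree ordering
fprintf('Cholesky fill-in: %d (natural) vs %d (reordered)\n', ...
        nnz(chol(S)), nnz(chol(S(p,p))))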

More info could be interesting. Agreed.

MaMü

Jun 18, 2014, 12:00:02 PM
to yal...@googlegroups.com
Thanks for your answers! I will try SDPNAL, and I will also try to move to a machine with more capacity. I don't think I can manually detect any sparsity or block structure that would help me simplify the problem. Anyway, I will keep you informed about any progress.

Thanks again!