Slow in Building Constraints


XB G

Nov 21, 2018, 12:20:42 AM
to YALMIP
Hi Johan,
Is there any method to speed up building the following constraints:

    n = 500; m = 2*n;
    x = sdpvar(n,n,'full');
    A = rand(n,n,m);
    b = rand(n,m);
    constr = [];
    for i = 1:n
        constr = [constr;
            A(:,:,i)*x(:,i) <= b(:,i)];
    end

This takes many seconds to build constr. Are there any better ways?

I also noticed that the time to build the constraints appears to grow exponentially (?) with the size n (code attached).

[attachment: untitled.png — plot of constraint-building time vs. n]


Thanks!




[attachment: test_var.m]

Johan Löfberg

Nov 21, 2018, 5:14:03 AM
to YALMIP
It should be not exponential but quadratic: you run a loop over n, and every iteration has complexity n in the subtractions and subsref calls.

However, the model you are building will completely explode due to memory; you are primarily timing memory issues. Does it really make sense to have 1000 dense 1000x1000 matrices?

You could improve the performance of the sdpvar manipulations by using a cell format for x instead, and thus avoid a lot of redundant subsref calls:

x = sdpvar(repmat(n,1,n),repmat(1,1,n))

and then use x{i} instead
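
Spelled out against the original loop, that suggestion would look like this (a sketch, not benchmarked here; assumes YALMIP is on the path and A, b, n are as in the first post):

```matlab
% Cell of n column vectors instead of one n-by-n sdpvar: x{i} replaces x(:,i)
x = sdpvar(repmat(n,1,n), ones(1,n));   % 1-by-n cell, each entry an n-by-1 sdpvar
constr = [];
for i = 1:n
    % x{i} is a plain cell lookup, so no subsref into one big symbolic matrix
    constr = [constr; A(:,:,i)*x{i} <= b(:,i)];
end
```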

XB G

Nov 21, 2018, 11:15:45 AM
to YALMIP
Thanks!

I just tested it; it saves ~50% of the time when building the constraints, but the growth still looks quadratic.

Unfortunately my matrix A is dense and large, but the slices along the 3rd dimension are similar, so there may be redundancy I can exploit. This could be improved.

Similarly, if my x variable is a 3-D array, x = sdpvar(n,n,m), I can define a 2-D cell array X so that X{i,j} is the same as x(:,i,j). I figured this would save even more time, since it reduces redundant subsref calls, but it turns out to be more time consuming; see the figure below:

[attachment: untitled.png — timing plot]


I did save time when constructing constr, but I spent even more time creating the cell array. How do I efficiently create a multi-dimensional sdpvar cell array?


Currently this is what I'm doing:

    x = cell(n,m);
    for i = 1:n
        x(i,:) = sdpvar( n*ones(1,m), ones(1,m) );
    end







[attachment: test_var.m]

Johan Löfberg

Nov 21, 2018, 11:47:08 AM
to YALMIP
To get a matrix-shaped cell, I would just reshape:

reshape(sdpvar([5 5 5 5],[3 3 3 3]),2,2)

However, which variant is fastest on all these things unfortunately depends on the MATLAB version and operating system. In some versions I've seen strange performance issues in cell manipulations.

Johan Löfberg

Nov 21, 2018, 11:49:53 AM
to YALMIP
And I don't see any way to get rid of the quadratic complexity, because that's what you have. Simply consider computing A*x - b n times for numerical data, where A has size n-by-m; that's O(n^2). And here it has to be done symbolically, using the full basis of x (which is a sparse matrix of size n*n), etc.

XB G

Nov 21, 2018, 2:00:00 PM
to YALMIP
You are right, the quadratic complexity is built in; I will always have a time complexity like alpha*n^2.

What I'm worried about is doing something inefficient at every step, ending up with a large alpha and wasting a lot of time...

After using reshape(), the total time is reduced a lot:

[attachment: untitled.png — timing plot]



Thanks!

Johan Löfberg

Nov 21, 2018, 2:24:28 PM
to yal...@googlegroups.com
Of course, if this really is the model you have, you would simply create the stacked matrix first:

allA = [];
for i = 1:n
  allA = blkdiag(allA, A(:,:,i));  % n-by-n blocks on the diagonal
end
x = sdpvar(n^2, 1);
bb = b(:, 1:n);                    % only the first n columns of b are used in the loop
constr = allA*x <= bb(:);

because in the end, allA and the stacked right-hand side are the data that would be sent to the solver anyway. If A is dense, this is going to be extremely expensive, as the block-diagonal matrix will be huge and take a lot of space; and if you try to counteract this by using sparse storage, it will still be expensive, since the nonzero blocks are dense. A bad model no matter what, if you intend to use a standard solver.
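
For readers without MATLAB/YALMIP at hand, the stacking in the last post can be checked on purely numerical data. Here is a small Python/NumPy sketch (n, A, x, b are small random stand-ins, not the thread's actual variables) showing that blkdiag(A_1, ..., A_n) applied to vec(x) reproduces the per-column products A(:,:,i)*x(:,i) from the original loop:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
n = 5                        # small n just to illustrate; the thread uses n = 500
A = rng.random((n, n, n))    # stand-in slices A(:,:,1) ... A(:,:,n)
x = rng.random((n, n))       # numeric stand-in for the decision variable

# Loop form: one product per column, as in the original MATLAB loop.
loop_lhs = np.concatenate([A[:, :, i] @ x[:, i] for i in range(n)])

# Stacked form: blkdiag(A_1, ..., A_n) * vec(x), as in the last post.
allA = block_diag(*[A[:, :, i] for i in range(n)])
stacked_lhs = allA @ x.flatten(order="F")   # column-major vec, like MATLAB's x(:)

assert np.allclose(loop_lhs, stacked_lhs)
```

The equivalence is exactly why the vectorized constraint sends the same data to the solver; the cost moves from symbolic bookkeeping to building (and storing) the large block-diagonal matrix.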