Vectorizing sum of many nuclear norms


Silvia Sellán

Jul 18, 2019, 8:22:27 PM
to YALMIP
Hello everyone,

I'm a PhD student using YALMIP for a geometry processing project. In my code, I build a YALMIP tensor G(:,:,:) (which depends on my sdpvar decision variables) such that G(i,:,:) is a 2-by-2 matrix for every integer i between 1 and a large number n. The objective function I want to minimize is *the sum of the nuclear norms of all the slices G(i,:,:)*. Currently, I am doing a for loop; in pseudocode,

objective = 0;
for i = 1:n
    objective = objective + norm(G(i,:,:),'nuclear');
end
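
(In symbols: minimize \(\sum_{i=1}^{n} \lVert G(i,:,:) \rVert_*\), where \(\lVert \cdot \rVert_*\) is the nuclear norm, i.e. the sum of the singular values.)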

However, even for "reasonable" values of n (10000), this for loop takes many hours. Is there any YALMIP-supported way of vectorizing this expression?

Thanks!

Silvia

Johan Löfberg

Jul 19, 2019, 2:56:33 AM
to YALMIP
First, you must be doing something else, because that code does not run: norm does not apply to nd-arrays, which is what G(i,:,:) will be.

Saying n = 10000 is reasonable sort of reveals that you don't know how computationally expensive semidefinite programming is (and/or how the nuclear norm is modelled). The nuclear norms will introduce 2n new matrix variables of size 2x2, and n semidefinite constraints of size 4x4. In other words, you will have a very large SDP. Sure, n = 10000 is a reasonable number, but it leads to a very large SDP.
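
For reference, that count comes from the standard semidefinite representation of the nuclear norm (written here as the textbook epigraph model, which is the form the count above corresponds to): for each 2x2 block G_i,

    \|G_i\|_* \le t_i \;\Longleftrightarrow\; \exists\, W_i, V_i \text{ symmetric}:\;
    \begin{bmatrix} W_i & G_i \\ G_i^\top & V_i \end{bmatrix} \succeq 0, \qquad
    \tfrac{1}{2}\big(\operatorname{tr} W_i + \operatorname{tr} V_i\big) \le t_i

i.e., two new 2x2 matrix variables and one 4x4 semidefinite constraint per term.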

As I said though, you must be talking about something else, because writing that for-loop, fixed so that it actually runs, takes just over a minute here:

yalmip('clear')
n = 10000;
tic
G = sdpvar(2,2,n);    % n matrix variables, each 2x2
obj = 0;
for i = 1:n           % sum of nuclear norms, one slice at a time
    obj = obj + norm(G(:,:,i),'nuclear');
end
toc
Elapsed time is 73.662825 seconds.


Silvia Sellán

Jul 19, 2019, 1:35:46 PM
to YALMIP
Firstly, thank you so much for your answer and for YALMIP in general. It is not only an amazing library, but it is also incredible that you reply to these questions as fast and as well as you do. I apologize if my question sounded like a critique of the library, which it definitely was not intended to be. I am very happy with YALMIP and all I want is to be sure I am using it to its maximum potential.

For clarity, I am attaching the Matlab script I'm running. As I'm sure you can see, it relates to a 2D discretization of a square with some edge length hx (see line 4). I would like to run more refined examples (either for larger domains or for lower values of hx, in the 0.001 to 0.1 range). I have noticed that the time bottleneck seems to be the nuclear norm aggregation (lines 48 to 54), and I was wondering if there is a way of vectorizing it so that the bottleneck becomes the SDP solver (that is, the call to "optimize" on line 69), or of making these lines faster in any way.

Just to clarify, if the answer is "no", that's perfectly fine, and I realize this is an intrinsic issue of SDP and not of the library. Again, I just want to make sure I am using YALMIP to its best potential, since I'm just a student and am not familiar with all of its nuances.

Have a nice day, and thanks again for your reply :)

Silvia
mwe.m

Johan Löfberg

Jul 19, 2019, 2:14:55 PM
to yal...@googlegroups.com
The problem here is that your object CC is massive. The basis for the representation of this object is 

>> M = (getbase(CC));
>> size(M)

ans =

     5988006     1002002

I.e., a sparse matrix with roughly six million rows and one million columns. This means any operation on it takes a lot of time. Even indexing is expensive:

>> tic;CC(1,k);toc
Elapsed time is 0.840794 seconds.

A first step would thus be to generate H without the intermediate operations. Still, with almost a million terms, I don't think any solver available today will be able to cope with this, and a dedicated solver would be necessary.
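
For a generic illustration of the pattern (made-up dimensions, names, and index data, since this is not your actual script; it assumes the object is linear in a decision vector x):

n = 1000;                                  % number of decision variables (made up)
m = 5000;                                  % number of rows of the assembled object (made up)
x = sdpvar(n,1);
i = randi(n,m,1); j = randi(n,m,1);        % hypothetical index data from a discretization
% Slow: assemble the symbolic object entry by entry; every indexed
% assignment touches the big internal basis matrix
%   H = sdpvar(m,1);
%   for e = 1:m, H(e) = x(i(e)) - x(j(e)); end
% Faster: build the numeric coefficient matrix with sparse triplets,
% then apply it to the decision variables in a single multiplication
A = sparse([1:m 1:m].', [i; j], [ones(m,1); -ones(m,1)], m, n);
H = A*x;                                   % one operation on the sdpvar object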
