How to minimize the maximum of a vector


Gael Bringout

unread,
Sep 8, 2014, 3:46:39 AM9/8/14
to yal...@googlegroups.com
Hi,

I'm continuing my quest for optimization, which started on the OPTI toolbox group here: https://groups.google.com/d/topic/opti-toolbox-forum/Hn6IUS9noSc/discussion.

Here is the last post from Johan Löfberg, as a reminder:
Are you saying you want to maximize a function involving convex quadratic functions? I hope you are aware that this is a very, very hard problem.

The sqrt operator in YALMIP is limited to scenarios where it can be implemented using an SOCP representation. Since you have nonconvex stuff, you don't have any solver which can mix the SOCP cone and nonconvex stuff. This is easily fixed, though, by using the sqrtm operator instead. With this, you allow YALMIP to model it directly as the square root (i.e., a callback in the solver) and thus simply use a nonlinear solver. No guarantees that you will find any solution though, as it is a nonconvex problem (you could always try the global solver bmibnb if the problem is small enough).

BTW, you are welcome over to the YALMIP google groups instead for further discussion (not to pollute OPTI support forum with YALMIP issues)

So, I want to minimize the maximum of a function.
I didn't know that it is a very, very hard problem (I am not an optimization specialist :) ). To be honest, I do not understand what this means: does it require a lot of time (the QP problems with 202 variables need around 2 seconds to be solved, while the new minimize-the-maximum problem, with 240 variables, didn't succeed after something like 60 hours of computation)? Or that it simply won't succeed?

I'm not sure if this function is convex, as it is built from non-square matrices (and thus I cannot test it by looking at the eigenvalues of a matrix), and my skills aren't sufficient to prove from the original equations that it is.

Also, what may help is that this uses only real numbers. I may drop the sqrt operator completely if it helps the solver.

I'm going to test other solvers.

Again, thank you for your time and inputs!

Johan Löfberg

unread,
Sep 8, 2014, 4:00:54 AM9/8/14
to yal...@googlegroups.com
I am not sure what your notation is saying. Do you want to minimize the maximum of a set of functions involving square roots?

A nonconvex problem can in the worst case be practically impossible to solve. It is not too hard to generate problems with a few (10) variables which will not finish in several days, and with 100 variables you can generate problems which, in theory, won't finish until the universe has met its heat death.

However, small changes can turn a model from completely intractable to trivial. Minor modelling mistakes can create very hard models, where the problem really can be modelled in a much better form. Hence, show us your model.

Gael Bringout

unread,
Sep 8, 2014, 4:50:21 AM9/8/14
to yal...@googlegroups.com
I do not really know where I should start with the description of my model. Do you need a description of each matrix I use, or the whole set of equations needed to calculate those matrices?

(I'm also looking into the "small change" idea. I can try to reduce the boundary conditions, making part of the matrix positive semidefinite.)

Johan Löfberg

unread,
Sep 8, 2014, 5:10:08 AM9/8/14
to yal...@googlegroups.com
So why don't you just post the code you ran?

Gael Bringout

unread,
Sep 8, 2014, 5:47:35 AM9/8/14
to yal...@googlegroups.com
Well, the code relies heavily on matrices calculated elsewhere (here for the coil-related properties: https://github.com/gBringout/CoilDesign and there for the induced electrical field equations: http://etheses.nottingham.ac.uk/629/).

To give an idea, I have a coil (represented by the value given to x here) which has to be optimized to minimize the maximal induced electrical field (calculated in the objective).

Here it is:
% pre-calculation from
% omega
% coil.Rfull - size : 240x240
% kk = 1
% coupling(kk).Ex_part1 - size : 6418x240
% coupling(kk).Ey_part1 - size : 6418x240
% coupling(kk).Ez_part1 - size : 6418x240
% coupling(kk).Ex_part2 - size : 6418x240
% coupling(kk).Ey_part2 - size : 6418x240
% coupling(kk).Ez_part2 - size : 6418x240
% coil.Cx - size : 66x240
% coil.Cy - size : 66x240
% coil.Cz - size : 66x240
% coil.btarget - size : 1x198
% coil.error - scalar


    x = sdpvar(size(coil.Rfull,2),1);
    Ex_part1 = coupling(kk).Ex_part1*omega*x;
    Ey_part1 = coupling(kk).Ey_part1*omega*x;
    Ez_part1 = coupling(kk).Ez_part1*omega*x;
    % + A^t
    Ex_part2 = coupling(kk).Ex_part2*omega*x;
    Ey_part2 = coupling(kk).Ey_part2*omega*x;
    Ez_part2 = coupling(kk).Ez_part2*omega*x;

    Exsquare = (Ex_part1.^2 + 2*Ex_part1.*Ex_part2 + Ex_part2.^2);
    Eysquare = (Ey_part1.^2 + 2*Ey_part1.*Ey_part2 + Ey_part2.^2);
    Ezsquare = (Ez_part1.^2 + 2*Ez_part1.*Ez_part2 + Ez_part2.^2);
    Objective = max(sqrtm(Exsquare+Eysquare+Ezsquare));
   
    coil.CtargetFull = [coil.Cx;coil.Cy;coil.Cz];
   
    % error constraints
    lr = zeros(1,size(coil.btarget,2));
    ur = zeros(1,size(coil.btarget,2));
    for j=1:size(coil.CtargetFull,1)
        if coil.btarget(j)<0
            % lr and ur stand for "lower" and "upper"
            lr(j) = (1+coil.error)*coil.btarget(j);
            ur(j) = (1-coil.error)*coil.btarget(j);
        elseif coil.btarget(j)>0
            lr(j) = (1-coil.error)*coil.btarget(j);
            ur(j) = (1+coil.error)*coil.btarget(j);
        else
            lr(j) = -0.001;
            ur(j) = 0.001;
        end
    end

    % the bounds, calculated without reduction
    % 20000 is used to reduce the possibilities,
    % knowing that the normal coil solution has values between
    % -7069 and 7129
    lb = ones(size(shield.Anorm,2),1)*-20000;
    ub = ones(size(shield.Anorm,2),1)*20000;
    index = 1;
    for i=1:size(coil.subBoundaries,1)
        for j=1:size(coil.subBoundaries(i).node,1)
            lb(index) = 0; %boundary of the mesh have to be null
            ub(index) = 0; %boundary of the mesh have to be null
            index = index+1;
        end
    end

    Constraints = [lr' <= coil.CtargetFull*x <= ur',...
                   lb<=x<=ub];
   
    options = sdpsettings('verbose',2);
    options = sdpsettings(options,'solver','bmibnb','showprogress',1);
    options = sdpsettings(options,'showprogress',1);

    solvesdp(Constraints,Objective,options)
    coil.s = double(x);
    displayStreamFunction(coil.listTriangle,coil.s,coil.listNode)

Johan Löfberg

unread,
Sep 8, 2014, 6:27:29 AM9/8/14
to
This looks like a much easier problem than I thought you had.

First, f(x) = sqrt(x) is a concave operator, hence f() <= t (which you get when you model max) will not satisfy the convexity propagation rules. Hence the set is not guaranteed to be convex, and YALMIP will not perform an epigraph formulation but will go for a general nonlinear formulation (which is why you have to use sqrtm instead of sqrt). Put a quadratic inside the square root, and the convexity propagation becomes even trickier to perform, and convexity even harder to detect algorithmically.

However, what you seem to have is simply a bunch of norms. norm(linear expression) is a convex operator, hence max(norms) is a convex formulation, as it simply is norm(expression_i) <= t, which is convex.

So,

bad

minimize max [sqrt(quadratic1) sqrt(quadratic2) ...]

good

minimize max [norm(linear1) norm(linear2) ...]

good (your sqrt is unnecessary)

minimize max [quadratic1 quadratic2 ...]
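
To make the "good" form concrete, here is a minimal YALMIP sketch (not the model from this thread; A1, A2 and the constraint are made-up placeholders) of minimizing the maximum of norms of linear expressions:

    A1 = randn(3,5); A2 = randn(3,5);              % placeholder data
    x  = sdpvar(5,1);
    Objective   = max([norm(A1*x); norm(A2*x)]);   % max of convex norms: convex
    Constraints = [sum(x) == 1];                   % placeholder constraint to bound the problem
    solvesdp(Constraints, Objective)               % YALMIP derives the SOCP epigraph form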


Gael Bringout

unread,
Sep 8, 2014, 8:07:35 AM9/8/14
to yal...@googlegroups.com
Thank you very much for your clear explanation!

It seems that YALMIP does not like this line (that is, MATLAB stays in the "busy" state for a long time, more than 1 hour; I'm wondering if this is what was blocking over the weekend):

Exsquare = (Ex_part1.^2 + 2*Ex_part1.*Ex_part2 + Ex_part2.^2);

Which is the expansion of (Ex_part1+Ex_part2).^2.

Do you have an idea here? :)

Johan Löfberg

unread,
Sep 8, 2014, 8:18:28 AM9/8/14
to
If, e.g., coupling(kk).Ex_part1 is dense, each element in Ex_part1 will involve 240 variables. When you square an expression involving 240 variables you get a quadratic term which will involve 28920 monomials, all of which YALMIP has to work with symbolically. This you do 6418 times. Hence, YALMIP has to shuffle around polynomials 185608560 times, just for one squared vector. This will take an awful amount of time. That is why you should try to write everything without introducing squares and square roots.

E.g., instead of sqrt(x1^2+x2^2), work with norm([x1;x2]). Instead of sqrt(3*x1^2+2*x1*x2+3*x2^2), which is sqrt(x'*[3 1;1 3]*x), work with norm(chol([3 1;1 3])*x).
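
As a tiny illustration of the difference (a sketch using the 2x2 example above, not the actual coil data):

    Q = [3 1;1 3];
    x = sdpvar(2,1);
    slow = sqrtm(x'*Q*x);        % YALMIP expands the quadratic and tracks every monomial
    fast = norm(chol(Q)*x);      % chol(Q)'*chol(Q) = Q, so this equals sqrt(x'*Q*x),
                                 % but the expression inside the operator stays linear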

Gael Bringout

unread,
Sep 8, 2014, 8:24:13 AM9/8/14
to yal...@googlegroups.com
Understood!

Johan Löfberg

unread,
Sep 8, 2014, 8:37:09 AM9/8/14
to yal...@googlegroups.com
You are still going to run into a lot of overhead when you create a large number of norm operators. Hence, you should use the low-level cone operator instead, and, even more detailed, the low-level vectorized form of the cone operator. I think your model is, with t representing the maximum,

      sdpvar t

      Constraint = cone([repmat(t,1,size(Ex_part1,1));
                         (Ex_part1+Ex_part2)';
                         (Ey_part1+Ey_part2)';
                         (Ez_part1+Ez_part2)']);


more constraints...

Write it using norm first to make sure that works and you understand what you are doing, and then optimize it as above with cone and manual modelling of the max instead.
http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Commands.Cone
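
Putting the pieces together, a sketch (assuming x, the E-field expressions, lr/ur, lb/ub and options are defined as in the earlier post; t itself becomes the objective):

    sdpvar t
    V = [(Ex_part1+Ex_part2)'; (Ey_part1+Ey_part2)'; (Ez_part1+Ez_part2)'];  % 3 x 6418
    Constraints = [cone([repmat(t,1,size(V,2)); V]), ...                     % norm of each column of V <= t
                   lr' <= coil.CtargetFull*x <= ur', lb <= x <= ub];
    solvesdp(Constraints, t, options)                                        % minimizing t models the max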


Gael Bringout

unread,
Sep 8, 2014, 8:42:58 AM9/8/14
to yal...@googlegroups.com
Thank you very much for the details and the tip. I will try to make it work and understand it. And I will come back with the results (hopefully!). Otherwise, I will come back with more questions (very likely)!

Gael Bringout

unread,
Sep 8, 2014, 10:22:14 AM9/8/14
to yal...@googlegroups.com
Ok, first question about this part:
instead of sqrt(3*x1^2+2*x1*x2+3*x2^2), which is sqrt(x'*[3 1;1 3]*x), work with norm(chol([3 1;1 3])*x)

Unfortunately, I have sqrt(x1^2+2*x1*x2+x2^2), which is sqrt(x'*[1 1;1 1]*x), which should be norm(chol([1 1;1 1])*x). But as [1 1;1 1] is not positive definite, the function chol can't be used. Do you have another tip here? :)

Thank you,
Gael

Johan Löfberg

unread,
Sep 8, 2014, 10:42:24 AM9/8/14
to yal...@googlegroups.com
The nondefiniteness comes from the fact that you are working with (a+b)^2, so you should not try to factorize [1 1;1 1], since you know it comes from [1;1]*[1 1].
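
In other words (a small sketch; x stands for the two variables involved):

    x = sdpvar(2,1);
    % x'*[1 1;1 1]*x = (x(1)+x(2))^2, so the factor is simply the row [1 1]:
    e = norm([1 1]*x);           % equals sqrt((x(1)+x(2))^2) = |x(1)+x(2)|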

Gael Bringout

unread,
Sep 8, 2014, 10:54:20 AM9/8/14
to yal...@googlegroups.com
ok, Thank you!

Gael Bringout

unread,
Sep 9, 2014, 8:26:02 AM9/9/14
to yal...@googlegroups.com
Ok,
I tested first the version using the norm operators, as:

    for i=1:size(Ex_part2,1)
        E3(i) = norm([Ex_part1(i)+Ex_part2(i);Ey_part1(i)+Ey_part2(i);Ez_part1(i)+Ez_part2(i)]);
    end

As you predicted, just building the matrix E3 took 1 hour. Solving also took hours.

Then I tried your version, using the cone function.
The definition of the matrices and everything is way faster (it takes, with solving, less than 30 sec).

    sdpvar t
    maxVector = repmat(t,1,size(Ex_part1,1));
    vector = [(Ex_part1+Ex_part2)';(Ey_part1+Ey_part2)';(Ez_part1+Ez_part2)'];
    Constraints =   [cone([maxVector;vector]),...
                    lr' <= coil.Ctarget*x <= ur',...
                    lb<=x<=ub];

Great!
Then, when I looked at the results, they make sense (i.e., it gives a nice coil), but the value I tried to minimize is now bigger.

So, using the previous QP (without knowledge of the value to minimize, so I minimize a coil-related property; the other constraints are the same), I get a maximum of 135.
Using the method described here, I get a maximum of 160. I would have liked 120 or lower.

So, if you have other ideas, do not hesitate! And again, thank you for your help!

Johan Löfberg

unread,
Sep 9, 2014, 8:41:05 AM9/9/14
to yal...@googlegroups.com
Is it possible for you to create a smaller instance, which doesn't take so long to set up for the quadratic model?

Also, to continue debugging, you would have to dump all data and code into a zip and attach it here, or something like that.

Gael Bringout

unread,
Sep 9, 2014, 8:50:26 AM9/9/14
to yal...@googlegroups.com
I'll try to make a sort of minimal example with precalculated matrices, which should look like one (big) .mat file and one .m file.

Johan Löfberg

unread,
Sep 9, 2014, 8:52:02 AM9/9/14
to yal...@googlegroups.com
Good. Make sure the code clearly illustrates the issue you have (claimed suboptimality of the cone formulation).

Gael Bringout

unread,
Sep 9, 2014, 9:20:34 AM9/9/14
to yal...@googlegroups.com
Ok, the .m file is here: https://dl.dropboxusercontent.com/u/25415701/SmallExample.m
the .mat (114 MB) is here: https://dl.dropboxusercontent.com/u/25415701/gBringout_Induced.mat

Just adapt the first 2 lines of the file to your setup.

The QP formulation (without knowledge of the induced electrical field) gives a maximal E value of 151 V/m.
The cone formulation (with knowledge of the induced electrical field) gives a maximal E value of 165 V/m.

Johan Löfberg

unread,
Sep 9, 2014, 9:34:27 AM9/9/14
to yal...@googlegroups.com
Without knowing what you do here, are you sure about this?

for i=1:size(coil.subBoundaries,1)
    lb(i) = 0; %boundary of the mesh have to be null
    ub(i) = 0; %boundary of the mesh have to be null
end


i will be 1 and 2. Maybe you mean lb(coil.subBoundaries(i).node) = 0?

Johan Löfberg

unread,
Sep 9, 2014, 9:36:46 AM9/9/14
to yal...@googlegroups.com
Here it sounds as if you already know these are redundant

% the bounding calculation with reduction
% 20000 is used to reduce the possibilities,
% knowing that the normal coil is given with value between
% 7129 and -7069
lb = ones(size(coil.Ctarget,2),1)*-20000;
ub = ones(size(coil.Ctarget,2),1)*20000;


so why are you adding these anyway then?

Johan Löfberg

unread,
Sep 9, 2014, 9:49:38 AM9/9/14
to
Which solver are you using? I run out of memory in all the ones I tried (cplex, mosek, gurobi), i.e., I need an even smaller example.

Gael Bringout

unread,
Sep 9, 2014, 10:05:38 AM9/9/14
to yal...@googlegroups.com
Good questions!

For the first, yes, I am sure that the first 2 bounds should be zero. The short story is that the nodes on the boundaries of the coil should all have a value of 0. But it suffices to assign the value to only one node on each boundary (which guarantees some properties), and thus the mesh size is reduced by (the number of nodes on the boundaries) - (the number of boundaries).
In this case, we have a cylinder with 2 boundaries. To "easily" do the reduction, I re-order the mesh so that all nodes on the boundaries are at the top of the list, and then proceed to the reduction. Thus the first 2 nodes of the solution should be zero, as those are always my nodes on the boundaries. The nodes you are referring to are the ones before the mesh reduction.

For the second remark, I am not sure I understand, but I suppose that the answer to the first remark should answer it. The reduction I am referring to here is the mesh reduction. Does that answer your question?

The solver I am using is: DSDP-OPTI. My PC has 16 GB of RAM.

Thanks for your help!

Gael Bringout

unread,
Sep 9, 2014, 10:06:43 AM9/9/14
to yal...@googlegroups.com
Ok, I'll try to make a smaller example!

Gael Bringout

unread,
Sep 9, 2014, 10:26:05 AM9/9/14
to yal...@googlegroups.com
Ok, new .mat file (23 MB): https://dl.dropboxusercontent.com/u/25415701/gBringout_Induced2.mat

But now I get the expected behaviour, which is:
QP problem: 148 V/m
Cone formulation: 136 V/m

For this new, simpler example, the mesh is of better quality (that is, the values used to calculate the Ex_part1 matrix are closer to the expected values (I am doing some numerical integration to get them)) and has a simpler form (a sphere in this case; before it was a human). Could this have an influence on the optimization?

Johan Löfberg

unread,
Sep 9, 2014, 11:02:51 AM9/9/14
to yal...@googlegroups.com
The reason I asked about the zero bounds was because it looks like a classical bug where you mean

for i = 1:length(data)
    x(data(i)) = 0;
end

but forget the indexing and instead do

for i = 1:length(data)
    x(i) = 0;
end

Regarding the general bounds, it looked as if you added bounds just because you think you have to have bounds. That is not the case, and adding huge bounds just to add something is typically detrimental to the solver. Also, infinite bounds are completely redundant; they will be eliminated by YALMIP and/or the solver. For the QP problem you set up, you can simply do

Constraints = [lr' <= coil.Ctarget*x <= ur',...
    x(1:size(coil.subBoundaries,1))==0];

(replacing equalities with double-sided inequalities is typically a bad idea)

Finally, the fact that you are using DSDP is probably the answer to the question. DSDP is a specialized solver developed for very large, very sparse semidefinite programs. You have a problem with a huge amount of small dense SOCP constraints. As you apparently don't have any SOCP solver installed, YALMIP reformulates the 4x1 SOCP constraints to 4x4 SDP constraints. Apart from being a sub-optimal choice of solver, DSDP is, as I remember it, rather unstable on numerically challenging problems (which you have, as the data in your model seem to range from very small to very large, a typical recipe for problems).

Install an SOCP solver such as gurobi or mosek (or cplex, if you are ok with an overly complex and slow download and licensing process). They are all free for academia.

Gael Bringout

unread,
Sep 9, 2014, 11:10:56 AM9/9/14
to yal...@googlegroups.com
Thank you for the input!

Ok, I will remove the unnecessary bounds on the x variables. I used them sometimes to limit the actual amplitude of the solution, and afterwards simply set them to infinity or big numbers to avoid "non-realistic" solutions. But I will now always start without them.

Thank you for the clear explanation. I already tried to get the license for CPLEX, but it didn't seem to work for me :D.
I will try the 2 other ones and come back with new results (hopefully)!

Thanks again for your time and your help!

Gael Bringout

unread,
Sep 10, 2014, 7:29:43 AM9/10/14
to yal...@googlegroups.com

Hi again,

I have tested Mosek and Gurobi; they both succeed in the minimization of the maximal induced electrical field!


For the SOCP, Mosek takes 25 sec and Gurobi 945 sec. The results are very similar.

In order to obtain "nice" coils, I have to add further constraints. Right now I'm adding x'*coil.R*x (the coil's dissipated power), coil.R being a positive definite matrix. Do I have to make the SOC transformation of those constraints myself (as given here: http://en.wikipedia.org/wiki/Second-order_cone_programming#Example:_Quadratic_constraint), or does YALMIP do it for me? Right now it is working with x'*coil.R*x<5000 as a constraint, but is this the best way? Also, if I have constraints which use non-positive-definite matrices, how can I include them?
Which leads me to a big question: can YALMIP help with multi-objective optimization?
Another one: is it possible to study the sensitivity of the optimization, especially, in my case, to variations of the btarget constraint?

A more general question: which book can you recommend for a complete explanation of the usage of the cone operator and/or the formulation of my problem as an SOCP? I thought of looking in the direction of "modeling" books, such as the 2 recommended in Nocedal, Jorge and Wright, Stephen, Numerical Optimization, Springer:
  1. G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey, 1963.
  2. R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL: A Modeling Language for Mathematical Programming, The Scientific Press, South San Francisco, Calif., 1993.
But what is your opinion?

Thank you for your help!

Johan Löfberg

unread,
Sep 10, 2014, 7:39:23 AM9/10/14
to
That was peculiarly slow by gurobi (or fast with mosek). Is it some code I can run?

No, you don't have to write it as a second-order cone constraint manually; YALMIP will do that for you. If R is very large, though, it might be slow, as YALMIP has to work with the quadratic expression symbolically, so then you could avoid that and simply write norm(chol(R)*x) <= sqrt(5000) instead.

If you mean positive semidefinite, no difference, except that you cannot use a Cholesky factorization but have to compute the factor using, e.g., an SVD (or you let YALMIP do it). If you mean possibly indefinite, you are in deep water, as that is a much, much harder problem.
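
A sketch of both cases, with a made-up matrix R standing in for coil.R (the 5000 bound is the one mentioned above):

    n = 10;
    A = randn(n); R = A*A';                 % symmetric positive (semi)definite test matrix
    x = sdpvar(n,1);

    % Positive definite R: Cholesky factor F, with F'*F = R
    F  = chol(R);
    C1 = [norm(F*x) <= sqrt(5000)];         % same feasible set as x'*R*x <= 5000

    % Only positive semidefinite R: build a factor from an SVD instead
    [U,S,~] = svd(R);
    F2 = sqrt(S)*U';                        % F2'*F2 = R for symmetric PSD R
    C2 = [norm(F2*x) <= sqrt(5000)];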

Here is a reading list (your reading list contains books written (long) before the whole conic field became relevant):
http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Main.Reading

No, YALMIP does not do multi-objective stuff, but you can easily code that manually as you basically just want to compute solutions for many different objectives.

One way is to set up an optimizer object and then send multiple parameter values at the same time (help optimizer has a small example), or you use the trick that you can send several objectives (the overhead is not eliminated here, though):
http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Commands.optimizer
http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Extra.MultipleSolutions
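
For the manual approach, a rough sketch (obj1, obj2, Constraints, options and x are placeholders; this just sweeps a weight between two convex objectives and re-solves, which is what coding it manually boils down to):

    weights = linspace(0,1,11);
    sols = zeros(length(x), length(weights));
    for k = 1:length(weights)
        w = weights(k);
        solvesdp(Constraints, w*obj1 + (1-w)*obj2, options);
        sols(:,k) = double(x);              % one point on the trade-off curve per weight
    end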

Johan Löfberg

unread,
Sep 10, 2014, 7:40:40 AM9/10/14
to yal...@googlegroups.com
And sensitivity is, I guess, that you want to extract the dual of that constraint. You do that with the command dual.
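
A minimal sketch of that (assuming the btarget-related constraint is stored first in the constraint list, using the names from the earlier posts):

    Constraints = [lr' <= coil.Ctarget*x <= ur', lb <= x <= ub];
    solvesdp(Constraints, Objective, options);
    lambda = dual(Constraints(1));          % dual variables of the btarget constraint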

Gael Bringout

unread,
Sep 10, 2014, 9:14:06 AM9/10/14
to yal...@googlegroups.com
I observe the same behavior on the 2 examples I gave yesterday: Mosek is way faster than Gurobi. The one I ran is similar to the first example, which was too big for your computer :(

Cool, YALMIP is great! I didn't notice any performance issues in my first test.

Yes, I meant positive semidefinite, sorry for the mistake. Ok, understood regarding the indefinite case, which may/will be problematic for me.

Thank you for the up-to-date list, I will start there then :D

And again, thank you very much for your input! I will put them to good use and keep you updated.

Gael Bringout

unread,
Sep 10, 2014, 11:34:12 AM9/10/14
to yal...@googlegroups.com
By the way, there are quite a number of dead links on the http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Main.Reading page :)

Johan Löfberg

unread,
Sep 10, 2014, 12:03:01 PM9/10/14
to yal...@googlegroups.com
OK, thanks for alerting me.