norm maximization with single-element constraints

vr7

Jul 15, 2018, 4:14:30 AM
to YALMIP
Hello, 
I am trying to solve this optimization problem with YALMIP.


z = sdpvar(n, 1, 'full', 'complex');

obj = norm(z, 2)^2;
cons = [];
for m = 1:n
    cons = [cons, abs(z(m,1)) <= P];
end

sol = optimize(cons, -obj);


where P and n are given.

At the optimum of this maximization every element of z should have magnitude P, but the solver returns all zeros...

Thank you!

Johan Löfberg

Jul 15, 2018, 5:11:52 AM
to YALMIP
The problem is solved by a local solver by default; hence, for this nonconvex problem you have no guarantee of obtaining the globally optimal solution.

Johan Löfberg

Jul 15, 2018, 5:14:37 AM
to YALMIP
A better model is to get rid of the nonsmooth abs operator and use quadratic constraints instead.


If you want to find a global solution, you will have to use a global solver, such as the built-in bmibnb:
cons = [];
for m = 1:n
    cons = [cons, imag(z(m,1))^2 + real(z(m,1))^2 <= P^2];
end
sol = optimize(cons, -obj, sdpsettings('solver','bmibnb'))
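Putting the two posts together, a complete script might look like the following sketch (the values of n and P are chosen purely for illustration; any given values work the same way):

```matlab
% Illustrative values; P and n are assumed given in the original problem
n = 4;  P = 2;

z   = sdpvar(n, 1, 'full', 'complex');
obj = norm(z, 2)^2;

% Smooth quadratic constraints replacing the nonsmooth abs(z(m)) <= P
cons = [];
for m = 1:n
    cons = [cons, real(z(m,1))^2 + imag(z(m,1))^2 <= P^2];
end

% Global solver, since the maximization is nonconvex
sol = optimize(cons, -obj, sdpsettings('solver', 'bmibnb'));

% At the global optimum every |z(m)| = P, so the objective equals n*P^2
value(obj)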




vr7

Jul 15, 2018, 5:38:57 AM
to YALMIP
Thank you very much! Could you please tell me why this problem is nonconvex?

Johan Löfberg

Jul 15, 2018, 6:13:35 AM
to YALMIP
You are maximizing a convex function.
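For context, the problem being posed is

```latex
\max_{z \in \mathbb{C}^n} \; \|z\|_2^2
\quad \text{s.t.} \quad |z_m| \le P, \; m = 1, \dots, n .
```

Minimizing a convex function over a convex set is a convex problem, but maximizing one is not: the solver minimizes $-\|z\|_2^2$, which is concave, so the minimization is nonconvex even though the feasible set is convex.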

vr7

Jul 15, 2018, 6:31:02 AM
to YALMIP
Right, thank you again! :)

Mark L. Stone

Jul 15, 2018, 7:31:59 AM
to YALMIP
It would be "nice" if these local solvers (e.g., fmincon, snopt, knitro) were finding a local optimum (minimum). Instead, they start from the default starting value of zeros(n,1), check the first-order optimality conditions, see that they are met within tolerance, and declare that a locally optimal solution has been found, when in fact they have "found" the global maximum of the minimization problem (the worst possible point) rather than a local minimum. They incorrectly declare victory without even trying.

In this case, using a starting point such as 1e-6*ones(n,1), at which the first-order optimality conditions are not met within tolerance, is sufficient to get the solver actually doing something; it then finds a true locally optimal solution, which in this case happens to be globally optimal. The same holds for Johan's alternate formulation (except that with the default starting value of zeros(n,1), ipopt declares victory without even trying on Johan's formulation, but tries and fails on the original formulation).
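In YALMIP, a starting point like this can be supplied with assign together with the 'usex0' option (a sketch; the choice of fmincon as the local solver is only an example, and z, cons, obj are from the earlier posts):

```matlab
% Nonzero initial point, so the first-order conditions are not already
% satisfied at the starting iterate and the solver has to move
assign(z, 1e-6*ones(n,1));

% 'usex0' tells YALMIP to pass the assigned values as the initial point
sol = optimize(cons, -obj, sdpsettings('solver', 'fmincon', 'usex0', 1));
```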
