How to solve a dual minimization/maximization problem with YALMIP?


Francisco Ronay López Estrada

Jan 11, 2013, 4:23:52 AM
to yal...@googlegroups.com
Dear Prof.,

I have the following dual problem:


I need to minimize gamma and maximize beta. The first problem is that the LMIs are quadratic in these variables, which I handle by treating the squared quantities as the decision variables; once the solution is complete I will obtain the real values by taking the square roots of gamma and beta.

However, I do not know how to set up the dual min/max problem in YALMIP using the solvesdp function. My code is in the attached .m file.

Thanks in advance for your help.



LMIdual.m

Johan Löfberg

Jan 11, 2013, 6:10:10 AM
to yal...@googlegroups.com
If you want an optimal minimal gamma and an optimal maximal beta, you have a multi-objective problem, and there is no single solution to the problem. You have to solve a compromise, such as minimizing w*gamma + (1-w)*(-beta) for some 0<=w<=1. (It is like saying you want the fastest and most fuel-efficient car. There is no such thing.)
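
A minimal sketch of such a scalarization in YALMIP, assuming g and b are the sdpvar scalars standing for gamma^2 and beta^2 (the names from your code) and Con collects the LMI constraints:

w = 0.5;                               % trade-off weight, 0 <= w <= 1
sol = solvesdp(Con, w*g + (1-w)*(-b)); % minimize g, maximize b
if sol.problem == 0
    gamma = sqrt(double(g));           % recover gamma from g = gamma^2
    beta  = sqrt(double(b));           % recover beta  from b = beta^2
end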

You probably have a typo in your model. It should be -beta^2*I, not beta^2*I. However, with your objective, it doesn't make sense. If there is a typo and it should be -beta^2*I and beta should be maximized, the optimal value is infinity. If the notation is correct, the only feasible beta is 0.

BTW, don't use strict inequalities such as Con=[LMI1<0,LMI2<0,P>0,V>0,g>0,b>0]; use non-strict ones: Con=[LMI1<=0,LMI2<=0,P>=0,V>=0,g>=0,b>=0];

Francisco Ronay López Estrada

Jan 11, 2013, 8:18:24 AM
to yal...@googlegroups.com
As always, thanks for your answer. I have modified the code as follows:

w=0.9;
solvesdp(Con, w*g + (1-w)*(-b)) 

However, the solution is the same as when I simply run
solvesdp(Con)

So, how can I know whether the multi-objective optimization was actually carried out? Do other compromise functions exist?

Thanks for your help

Johan Löfberg

Jan 11, 2013, 8:26:51 AM
to yal...@googlegroups.com
As you have written it above (and coded it), the only feasible beta is beta=0. This immediately forces V=0 and QD=BF.

However, in the code supplied, you fixed b (i.e. beta^2) to 1.5, which will lead to infeasibility.

In other words, you get the same solution because the solver hasn't actually computed a solution (look at the return code in the output from solvesdp).
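
For example, a minimal sketch of that check (solvesdp returns a diagnostics struct, and yalmiperror translates the numeric code to text):

sol = solvesdp(Con, w*g + (1-w)*(-b));
if sol.problem ~= 0
    % nonzero means no reliable optimum (e.g. 1 = infeasible, 2 = unbounded)
    disp(yalmiperror(sol.problem))
end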

Francisco Ronay López Estrada

Jan 11, 2013, 8:40:45 AM
to yal...@googlegroups.com
Sorry, it was a typing mistake; I forgot to delete the previous values for beta and gamma.

Considering  beta and gamma as sdpvar variables
and

Con=[LMI1<=0,LMI2<=0, P>=0,V>=0,g>=0,b>=0]; 
w=0.99;
solvesdp(Con, w*g +(1-w)*(-b)) 

 info: 'Unbounded objective function (SeDuMi-1.3)'

What does this error mean?

Then I changed to
solvesdp(Con, w*g +(1-w)*(b))


with the following results: gamma = 1.1933e-06, b = 4.6899e-06, which is not a useful solution, because the matrices P, Q and V are practically zero.

Then I changed to
Con=[LMI1<=0,LMI2<=0, P>=0,V>=0,g>=0.1,b>=0.1]; 
w=0.99;
solvesdp(Con, w*g +(1-w)*(b)) 

As a result, gamma = 0.3162 and beta = 0.4472.
The last result looks more reasonable. Nevertheless, I do not know whether it is correct to make these changes, and whether other compromise functions exist.


Johan Löfberg

Jan 11, 2013, 8:47:57 AM
to yal...@googlegroups.com
You must have changed something in the code, because the code you attached, with the new objective, is trivially bounded, and the optimal solution is gamma = beta = P = Q = V = 0.

Francisco Ronay López Estrada

Jan 11, 2013, 8:51:29 AM
to yal...@googlegroups.com
I only deleted the lines where I was assigning values to gamma and beta, and modified the term
b^2*eye(2)
by 
-b^2*eye(2)

as you suggested. I attach the modified code.


LMIdual.m

Johan Löfberg

Jan 11, 2013, 8:57:20 AM
to yal...@googlegroups.com
Your new code does not show any unboundedness (even after removing your strange lower bounds on b and g). I suspect you had a messed-up workspace when you obtained the unbounded result.

Anyway, reiterating

1. Your model makes no sense, since you have beta^2*I in the (2,2) block of the second LMI.
2. Your model makes no sense, since the optimal solution is either the all-zeros solution, or infinite if comment (1) is fixed.

So
1. Should it be beta^2*I as you have written? If so, the model is very strange.
2. If zero isn't an allowed solution, why not? It satisfies your constraints.

Johan Löfberg

Jan 11, 2013, 9:09:04 AM
to yal...@googlegroups.com
PS, I think one of your problems can be attributed to a cute issue in MATLAB. You wrote the objective as

w*g +(1-w)*(b)

Note that this is a 1x2 vector in MATLAB (try, e.g., [1 +2] in MATLAB)! Be careful with spaces. You should have

w*g+(1-w)*(b)

or 

w*g + (1-w)*(b)

Hence, with the typo, YALMIP solved two optimization problems, one with the objective w*g, and one with (1-w)*(b).
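
A quick sketch you can paste into MATLAB to see the parsing difference:

a = [1 +2];   % space before a unary +: parsed as the 1x2 vector [1 2]
b = [1 + 2];  % binary +: parsed as the scalar 3
size(a)       % ans = 1 2
size(b)       % ans = 1 1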

Johan Löfberg

Jan 11, 2013, 9:17:20 AM
to yal...@googlegroups.com
Sorry, I missed the fact that you updated the code to -beta^2*I.

Hence, to summarize, your only remaining issue is that the optimal solution to your problem (after removing your strange lower bounds on b and g) is zero, since nothing in your original LMIs tells the solver that zero (or a solution arbitrarily close to zero) is infeasible.

Francisco Ronay López Estrada

Jan 11, 2013, 9:20:28 AM
to yal...@googlegroups.com
Yes, the equation is strange, but it's OK. In fact, I am trying to reproduce a paper, and the equation results from considering the H_- index performance.
However, as you say, there is probably a mistake in the paper, and it should more correctly be -beta^2*I.

> If zero isn't an allowed solution, why not?

The problem is that in that case P and V are zero as well. Since P and V are the gains of an observer, a zero solution is useless.
Sorry if I am being insistent, but do you know whether there are other compromise functions besides
w*g + (1-w)*(b)

Johan Löfberg

Jan 11, 2013, 9:30:49 AM
to yal...@googlegroups.com
The solution IS optimal. The solver is not psychic, i.e., it knows nothing beyond what you have told it.

If you want to maximize beta, and you have a block -beta^2*I, it is trivially optimal to pick beta = infinity. With that choice, all you have to ensure is to have LMI1 feasible and block (1,1) of LMI2 feasible. A feasible (and trivially optimal) choice is g=P=Q=V=0.

You have missed some crucial constraint in the paper, interpreted it wrong, or the paper is incorrect. What is the reference?

Johan Löfberg

Jan 11, 2013, 9:33:05 AM
to yal...@googlegroups.com
The objective

w*g + (1-w)*(b)

does not represent a compromise between minimizing g and maximizing b. It represents a compromise of minimizing both b and g. If you want to minimize g and maximize b, you should use w*g + (1-w)*(-b)

Francisco Ronay López Estrada

Jan 11, 2013, 9:35:49 AM
to yal...@googlegroups.com
This is the reference

Guo, J., Huang, X., & Cui, Y. (2009). Design and analysis of robust fault detection filter using LMI tools. Computers & Mathematics with Applications, 57(11-12), 1743–1747. 

However, I suppose they used the MATLAB toolbox, because they propose an iterative method that solves the problem starting from initial values of gamma and beta. Nevertheless, I understand what you say, and I also accept that YALMIP only tries to give the best solution. I will try other LTI systems in order to test the code. As always, thanks a lot for your help and teaching.




Johan Löfberg

Jan 11, 2013, 9:56:24 AM
to yal...@googlegroups.com
Yes, I found it by googling one of the values in the B-matrix together with the words LMI and observer. Google is cool!

You are missing that the LMIs are used in a loop, i.e., the phi-expressions in (3.3) are vital for the method to work (read the text after Theorem 2.2). You first solve (3.1) to obtain a solution V, and then effectively linearize the LMI at the top of page 4 to obtain LMI (3.3). It is nothing but a standard heuristic for solving a bilinear SDP.
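
For reference, the standard bound behind this kind of linearization (a sketch; write $X$ for the bilinear factor, e.g. $X = VC$, and $X_i$ for its value at the current iterate):

$$(X - X_i)^\top (X - X_i) \succeq 0 \;\Rightarrow\; X^\top X \succeq X_i^\top X + X^\top X_i - X_i^\top X_i,$$

so the concave term $-X^\top X$ can be replaced by the affine upper bound

$$-X^\top X \preceq X_i^\top X_i - X_i^\top X - X^\top X_i,$$

which is exactly the phi-type expression built from the previous iterate.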

You are now effectively starting at the value Vi=0, which doesn't work. Additionally, the problem has no predefined scale but requires P to be positive definite. Hence you should constrain P to be positive definite using, e.g., P>=I.

Hence, start by solving (3.1). However, solving (3.1) can easily lead to V=0, so here is one starting solve that gives you a reasonable linearization point:

solvesdp([P>=eye(4), LMI1<=0],norm(V-eye(2)))

Now, run a loop

while 1
    Vi = double(V);        % current linearization point
    % then, define the linearization constants
    phi1 = Vi'*V...
    phi2 = Vi'*V...
    LMI3 = (3.3)           % the LMI in equation (3.3)
    solvesdp([P>=eye(4)...)...
end



Francisco Ronay López Estrada

Jan 11, 2013, 10:34:09 AM
to yal...@googlegroups.com
I changed the code as you suggested. Considering b^2*I, the problem becomes infeasible; in the second case, considering -b^2*I, the result always gives V = I.

clear('yalmip')
P  = sdpvar(4);
Hb = sdpvar(4,2,'full');
V  = sdpvar(2,2,'full');
% sdpvar g b
g = 1.2;
b = 1.5;

LMI1 = blkvar;
LMI1(1,1) = A'*P + P*A - Hb*C - C'*Hb';
LMI1(1,2) = P*Bd - Hb*Dd;
LMI1(1,3) = C'*V';
LMI1(2,2) = -g*eye(3);
LMI1(2,3) = Dd'*V';
LMI1(3,3) = -eye(2);
LMI1 = sdpvar(LMI1);

solvesdp([P>=eye(4), LMI1<=0], norm(V-eye(2)))

while 1
    Vo  = double(V)
    Vci = Vo*C;
    Vfi = Vo*Df;
    vphi1 = Vci'*Vci - Vci'*V*C - C'*V'*Vci;
    vphi2 = Vfi'*Vfi - Vfi'*V*Df - Df'*V'*Vfi;

    LMI2 = blkvar;
    LMI2(1,1) = A'*P + P*A - Hb*C - C'*Hb' + 2*vphi1;
    LMI2(1,2) = Hb*Df - P*Bf;
    LMI2(1,3) = C'*V';
    LMI2(2,2) = b*eye(1) + 2*vphi2;
    LMI2(2,3) = Df'*V';
    LMI2(3,3) = -eye(2);
    LMI2 = sdpvar(LMI2);

    solvesdp([P>=eye(4), LMI1<=0, LMI2<=0], norm(V-eye(2)))
    g = g - 0.1;  % decreasing gamma
    b = b + 0.1;  % increasing beta
end

Johan Löfberg

Jan 11, 2013, 11:59:55 AM
to yal...@googlegroups.com
The code I wrote was only a simple skeleton. You have to work with the error codes in the output from solvesdp inside your loop, to check whether the solution is feasible etc.
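
For instance, a sketch of such a check inside your loop, using the names from your code:

sol = solvesdp([P>=eye(4), LMI1<=0, LMI2<=0], norm(V-eye(2)));
if sol.problem ~= 0
    disp(['Stopping: ' yalmiperror(sol.problem)])
    break   % the last successful iterate remains in double(V), double(P)
end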

However, the paper leaves a lot to be desired. Details are missing and it looks pretty hard to reproduce the results. They say they start with proper values of gamma and beta. What is that? How do they initialize V? If you simply pick the solution I coded for (the unit matrix), it will not work, since LMI2 is infeasible at this linearization point. How are gamma and beta updated? When the feasibility problems are solved, is any objective used? It is a classical case of research with poor reproducibility.

Francisco Ronay López Estrada

Jan 11, 2013, 12:38:35 PM
to yal...@googlegroups.com
Yes, you are right; I will check other works. Thanks a lot for your time and your help. Have a nice weekend!

Edi

Aug 13, 2015, 8:04:22 AM
to YALMIP
Dear Dr Johan,

I would like to solve a min-max problem and a max-min problem with YALMIP. I also have the following structure: with respect to one of the decision variables the problem is convex, but with respect to the other variable it is a quadratic fractional objective function optimization problem (non-convex). However, I have some a-priori knowledge of the objective function for the latter and know its minimum (hence I have a closed-form expression for this decision variable). The problem is that this optimal decision variable depends on the other decision variable that I would like to solve for (refer to "a3" and "D_opt" in the code below). The code is as follows:


%=========================================================================
M_tot = 12;
M     = 12;
N     = 20;
M_1   = N;

hk = [1.6171 1.5502 1.3527 0.8077 0.6157 1.3338 1.3156 1.7 0.8077 0.6157 0.5511 0.4638];
No = 0.1;
A  = 3;
Pt = 60;
P  = 0.1;
Store_objective = [];
a9 = [];

sigma = 0.3162;
sk = [0.022 0.0011 0.18 0.02 0.0143 0.0011 0.0024 0.2 0.06 0.09 0.0143 0.15];

SNR_i = (sk.^2)./sigma.^2;

%=========================================================================
pk    = sdpvar(1,M);
D_opt = sdpvar(1,1);

assign(pk, .5);

qk = (1/2)*log2((1+(pk.*hk./No)));
sigmaq1 = (A^2)./(3*(1+pk.*hk/No));

% the maximizer of the objective function
a3 = ((N*sigma.^2).*SNR_i-2.*P.*D_opt)./((2*N.*(sigma.^4).*(1+2*SNR_i)+2*P*(1-P)*((N^2).*sigma.^4)+P*(1-P)*((2*N.*sigma.^2).*D_opt+D_opt.^2)+sigmaq1));
% the optimum minimizer of the objective function
D_opt = (sum((a3.*(N*sigma.^2).*SNR_i)./(2*P*sum(a3))));

Pfa = 0.1;

% deflection coefficient
Object_ive = ((sum(a3.*(((N*sigma.^2).*SNR_i)-2*P.*D_opt)))^2)/((sum((a3.^2).*(2*N*(sigma.^4).*(1+2*SNR_i)+2*P*(1-P)*((N^2)*sigma.^4)+P*(1-P)*((2*N*sigma.^2).*D_opt+D_opt.^2)+sigmaq1))));

Constraints = [sum(pk) == Pt, pk >= 0];

Object_ive = 1/Object_ive;
double(Object_ive)

ops = sdpsettings('solver','bmibnb','bmibnb.maxiter',50,'usex0',0);
sol = solvesdp(Constraints, Object_ive, ops)
%=========================================================================

I would be happy if you could help me tackle this problem.


Kind Regards
Edi

Johan Löfberg

Aug 13, 2015, 8:51:50 AM
to YALMIP
The first obvious fix is to not minimize 1/Objective, but to maximize Objective, i.e., minimize -Objective. Introducing that extra division just complicates things unnecessarily.
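
For instance, a sketch of that change, reusing the names from your code (before the 1/Object_ive line):

% maximize the deflection coefficient by minimizing its negative
sol = solvesdp(Constraints, -Object_ive, ops);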

Also, the intention of the code is a bit hard to follow, as you reuse the same variable D_opt. Hence it is not clear whether you actually want one of the assignments to be an equality constraint.

This model

sdpvar Dopt
a3 = 1 + Dopt;
Dopt = a3 + Dopt;
Objective = Dopt;

is very different from

sdpvar Dopt
a3 = 1 + Dopt;
Constraints = [Dopt == a3 + Dopt];
Objective = Dopt;



Edi

Aug 13, 2015, 9:11:05 AM
to YALMIP
Basically, my problem is a zero-sum game.
I would like to minimize the objective w.r.t. D_opt for D_opt>=0 and then maximize it w.r.t. pk. I have a solution that attains the minimum (i.e., D_optimum=(sum((a3.*(N*sigma.^2).*SNR_i)./(2*P*sum(a3))));). However, this solution obviously depends on the other variable pk (through the a3 expression). So I would now like to substitute D_optimum and maximize the objective w.r.t. pk.

For the case of max min, I have managed to solve it by gridding D=[0:0.1:100], doing the maximization for each D, and collecting the objective values in a vector; then I find the minimum among all these values. But for the opposite case, I am stuck.
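
For reference, a sketch of that gridding procedure; solve_inner_max is a hypothetical helper (not from this thread) standing for the maximization over pk at a fixed D:

Dgrid = 0:0.1:100;
vals  = zeros(size(Dgrid));
for k = 1:numel(Dgrid)
    vals(k) = solve_inner_max(Dgrid(k));  % hypothetical: optimal objective over pk for fixed D
end
[opt_value, kstar] = min(vals);           % outer minimum over the grid
D_star = Dgrid(kstar);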


In other words, I would like to verify whether an equilibrium exists (i.e., show whether min max = max min) for this case.


Thank you for your fast reply!


Edi

Aug 13, 2015, 9:13:50 AM
to YALMIP
Sorry, I meant that it is the min max case I have managed to solve; I would like to solve the max min.

Johan Löfberg

Aug 13, 2015, 9:20:54 AM
to YALMIP
I don't care about the actual background; I am just saying that the code looks odd.

As it is written now, it is way too complex and will not be solved. The objective has 30000 terms. It is just too complicated in this form.