Yalmip does not return optimal value when I use loop to construct the object function

148 views

Syed Adnan Akhtar

unread,
Feb 14, 2020, 1:24:42 PM2/14/20
to YALMIP
Hi all!

I am trying to solve a linear objective with an LMI constraint. However, the objective function involves a summation over variables (I use a loop to build it). The following is my objective function (also attached as a picture):


\begin{align}
    & \min_{\substack{\theta \in \Theta \\ \lambda_i \geq 0 \\ \gamma_i}} \sum_{i} C_{\theta}(\hat{x}_i, \hat{u}_i) + \frac{1}{2}\gamma_i + \langle L, \lambda_i \rangle \\
    & \textbf{s.t. } \begin{bmatrix} \theta_{uu} & v_i \\ v_i^T & \gamma_i \end{bmatrix} \succ 0
\end{align}
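(A note on the formulation, which is implicit in the thread: assuming $\theta_{uu} \succ 0$, the Schur complement shows that the LMI is the epigraph of a matrix-fractional term, which is why $\gamma_i$ can appear linearly in the objective:)

\begin{align}
    \begin{bmatrix} \theta_{uu} & v_i \\ v_i^T & \gamma_i \end{bmatrix} \succ 0
    \quad\Longleftrightarrow\quad
    \theta_{uu} \succ 0 \ \text{ and } \ \gamma_i > v_i^T \theta_{uu}^{-1} v_i
\end{align}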

When I run my optimization code, YALMIP does not return the optimal values of theta_xu and theta_uu. I say this because when I substitute other values of these variables (satisfying all the constraints) into the loss (objective) function, I obtain an even lower loss. What could be going wrong? Is there an issue with using a loop to construct the objective function? I am using MOSEK as the solver. The following code is for reference:

n = size(x0,1);
M = [1 0 -1 0;0 1 0 -1]';
L = [2 2 2 2]';
theta_uu = sdpvar(2,2,'symmetric');
theta_xu = sdpvar(6,2);
lambda = sdpvar(n,size(M,1));
gamma = sdpvar(n,1);
ops = sdpsettings('verbose',0);
obj = 0;
cnst = [];
for i=1:n 
    obj = obj + 2*x0(i,:) * theta_xu * uOpt(i,:)' + uOpt(i,:) * theta_uu * uOpt(i,:)';
    obj = obj + 0.5*gamma(i) + lambda(i,:)*L; 
    cnst = [cnst, [theta_uu M'*lambda(i,:)'+theta_xu'*x0(i,:)'; (M'*lambda(i,:)'+theta_xu'*x0(i,:)')' gamma(i)] >= 0];
end
cnst = [cnst, theta_uu >= eye(2),lambda(:,:) >= 0];
cnst = [cnst, theta_xu(:,:) <= 100, theta_xu(:,:) >= -100, theta_uu(:,:) >= -100, theta_uu(:,:) <= 100];
optimize(cnst,obj,ops);
[Attachment: objfun.PNG]

Johan Löfberg

unread,
Feb 14, 2020, 4:02:37 PM2/14/20
to YALMIP
Undefined function or variable 'x0'.

Syed Adnan Akhtar

unread,
Feb 14, 2020, 9:02:17 PM2/14/20
to YALMIP
Thanks for your reply, Johan. My question was about the problem formulation, which is why I did not include the full code (the syntax and everything works fine). However, the following code can be run (although the data does not make sense):

x0 = rand(200,6);
uOpt = rand(200,2);
n = size(x0,1);
M = [1 0 -1 0;0 1 0 -1]';
L = [2 2 2 2]';
theta_uu = sdpvar(2,2,'symmetric');
theta_xu = sdpvar(6,2);
lambda = sdpvar(n,size(M,1));
gamma = sdpvar(n,1);
ops = sdpsettings('verbose',0);
obj = 0;
cnst = [];
for i=1:n 
    obj= obj + 2*x0(i,:) * theta_xu * uOpt(i,:)' + uOpt(i,:) * theta_uu * uOpt(i,:)';
    obj = obj + 0.5*gamma(i) + lambda(i,:)*L; 
    cnst = [cnst, [theta_uu M'*lambda(i,:)'+theta_xu'*x0(i,:)'; (M'*lambda(i,:)'+theta_xu'*x0(i,:)')' gamma(i)] >= 0];
end
cnst = [cnst, theta_uu >= eye(2),lambda(:,:) >= 0];
cnst = [cnst, theta_xu(:,:) <= 100, theta_xu(:,:) >= -100, theta_uu(:,:) >= -100, theta_uu(:,:) <= 100];
optimize(cnst,obj,ops);

The cost function's picture is also attached. Thanks!
[Attachment: objfun.PNG]

Johan Löfberg

unread,
Feb 15, 2020, 1:34:11 AM2/15/20
to YALMIP
Well, that setup is infeasible for every x0 I tested:

ans = 
  struct with fields:

    yalmipversion: '20200116'
       yalmiptime: 1.5242
       solvertime: 0.1648
             info: 'Infeasible problem (MOSEK)'
          problem: 1

And my general answer is: if the problem is solved for some feasible data, the solver will compute the optimal solution. If you think you can generate a better solution, you are simply mistaken.

Syed Adnan Akhtar

unread,
Feb 19, 2020, 8:05:15 PM2/19/20
to YALMIP
Well, it says it's DUAL_INFEASIBLE. The problem is bounded (and CVX gives me a solution!). The problem is formulated as an LMI for YALMIP, whereas it's a different formulation for CVX (but the same problem!). The solution returned by CVX (theta_xu and theta_uu) is actually a feasible and optimal solution for the LMI written in YALMIP. I am including working code below for your reference (it needs "dat.mat", attached here).

load dat.mat

n = size(dat.x0,1);
M = [1 0 -1 0;0 1 0 -1]';
L = [2 2 2 2]';

%Solve using CVX
cvx_begin 
   variable theta_uu(2,2) symmetric
   variable theta_xu(6,2)
   variable lambda(n,size(M,1))
    obj = 0;
    for i=1:n 
        obj= obj + 2*dat.x0(i,:) * theta_xu * dat.uOpt(i,:)' + ...
            dat.uOpt(i,:) * theta_uu * dat.uOpt(i,:)';
        obj = obj + lambda(i,:)*L + 0.25*matrix_frac(M'*lambda(i,:)' + 2*theta_xu'*dat.x0(i,:)',theta_uu);
    end
    minimize(obj)  
    subject to
    lambda(:,:) >= 0;theta_uu >= eye(2);
    theta_xu(:,:) <= 100;theta_xu(:,:) >= -100;
    theta_uu(:,:) >= -100;theta_uu(:,:) <= 100; 
cvx_end
fprintf('CVX Objective: %f\n',obj);

%Solve using Yalmip
theta_uu = sdpvar(2,2,'symmetric');theta_xu = sdpvar(6,2);
lambda = sdpvar(n,size(M,1));gamma = sdpvar(n,1);
ops = sdpsettings('verbose',2);
obj = 0;cnst = [];
for i=1:n 
    obj= obj + 2*dat.x0(i,:) * theta_xu * dat.uOpt(i,:)' + ...
        dat.uOpt(i,:) * theta_uu * dat.uOpt(i,:)';
    obj = obj + 0.25*gamma(i) + lambda(i,:)*L; 
    cnst = [cnst, [theta_uu M'*lambda(i,:)'+2*theta_xu'*dat.x0(i,:)';...
        (M'*lambda(i,:)'+2*theta_xu'*dat.x0(i,:)')' gamma(i)] >= 0];
end
cnst = [cnst, theta_uu >= eye(2),lambda(:,:) >= 0];
cnst = [cnst, theta_xu(:,:) >= -100, theta_xu(:,:) <= 100];
cnst = [cnst, theta_uu(:,:) >= -100, theta_uu(:,:) <= 100];
diagnostics = optimize(cnst,obj,ops);  % avoid shadowing the built-in diag
fprintf('Yalmip Objective: %f \n',value(obj));
[Attachment: dat.mat]

Johan Löfberg

unread,
Feb 20, 2020, 2:15:48 AM2/20/20
to YALMIP
The problem is that you think A(:,:) returns a vector. It does not: theta_uu(:,:) is the same as theta_uu.

>> theta_uu(:,:)
Linear matrix variable 2x2 (symmetric, real, 3 variables)

You should use A(:) to get a vector containing the elements of a matrix.
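A minimal sketch of the difference (assuming YALMIP is on the MATLAB path; sizes are the ones from the posted code):

```matlab
theta_uu = sdpvar(2,2,'symmetric');

size(theta_uu(:,:))   % [2 2] -- still the symmetric matrix itself
size(theta_uu(:))     % [4 1] -- a vector of the matrix elements

% So an elementwise box constraint should be written with (:),
% giving 4 scalar inequalities rather than a matrix inequality:
cnst = [-100 <= theta_uu(:) <= 100];
```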

Johan Löfberg

unread,
Feb 20, 2020, 2:31:21 AM2/20/20
to YALMIP
And I think you are mistaken about how CVX works too. I presume you intend theta_uu >= eye(2) to be a semidefinite constraint, and theta_uu(:,:) >= -100 to be interpreted elementwise. Both of those will be interpreted as elementwise in CVX (visible in the display from SDPT3, or whichever solver you are using: it says there are 4 SDP cones; in this case those come from the matrix_frac objective, since n = 4 and 4 LMIs are required to model the objective).

In the YALMIP code, I suspect you want sdpvar(n,size(M,1),'full'), since when n = size(M,1) the matrix will be symmetric and thus >= is interpreted in the SDP cone.
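With that interpretation, the constraint block from the posted code would read something like this (a sketch; only the indexing changes, variable names as in the original):

```matlab
% LMI on theta_uu, elementwise nonnegativity on all entries of lambda
cnst = [cnst, theta_uu >= eye(2), lambda(:) >= 0];
% Elementwise box constraints via (:), not (:,:)
cnst = [cnst, -100 <= theta_xu(:) <= 100];
cnst = [cnst, -100 <= theta_uu(:) <= 100];
```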

Michal Adamaszek

unread,
Feb 20, 2020, 3:48:26 AM2/20/20
to YALMIP
Oh I didn't see this thread but I am glad we made a similar diagnosis :)).  https://groups.google.com/forum/#!topic/mosek/VuEIhiz6nZ0

Mark L. Stone

unread,
Feb 20, 2020, 9:16:27 AM2/20/20
to YALMIP
To expand on Johan's answer, in CVX there are two ways of specifying the LMI constraint that a square symmetric matrix A is PSD (see http://cvxr.com/cvx/doc/sdp.html):

1)
  A == semidefinite(n)
where n is the matrix dimension. Or, if A is a variable and not an expression, it can simply be declared
variable A(n,n) semidefinite

2) Begin CVX with
cvx_begin sdp
then specify
A >= 0
Approach 2 invokes the semidefinite programming mode of CVX.

Basically, YALMIP is always in what corresponds to CVX's semidefinite programming mode, except that YALMIP makes square matrices symmetric by default, whereas CVX makes them full by default.
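For instance, the two CVX styles for a hypothetical 2x2 variable A look like this (a sketch, assuming CVX is installed; both are plain feasibility problems):

```matlab
% Style 1: explicit cone membership (default mode)
cvx_begin
    variable A(2,2) symmetric
    A == semidefinite(2);
cvx_end

% Style 2: SDP mode, where >= between a symmetric matrix
% and 0 is read as a semidefinite constraint
cvx_begin sdp
    variable A(2,2) symmetric
    A >= 0;
cvx_end
```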

Syed Adnan Akhtar

unread,
Feb 20, 2020, 9:34:57 AM2/20/20
to YALMIP
Thanks, Johan!

The code works now. I was mistaken about the elementwise inequality: I changed theta_uu(:,:) to theta_uu(:) to get an elementwise inequality, and did the same for lambda and theta_xu. (I did not have to change anything with sdpvar(n,size(M,1)); n is not equal to size(M,1).)

