Matrix Decomposition Problem


drw5ff

Sep 9, 2014, 3:45:15 PM
to yal...@googlegroups.com, Daniel Wagner
Hey folks,

I'm having some problems with my code. I want to define variables for a decomposed matrix such that

lambda = sdpvar(1,1);
W_hat = sdpvar(1,1);
P = sdpvar(2,2);

where lambda and W_hat are scalar unknown variables for this case, with these known scalars

Ar = -1;
B = 1;

So that the decomposed matrix is based on

A1 = [ -Ar 0; 0 -lambda];
A2 = [B 0; -lambda 0];
A0 = A1 + A2*W_hat;

I've got the following constraints, based on P being positive definite and a solution for A0'*P+P*A0 < 0

L = [lambda >= 0, trace(lambda) == 1]
F = [-5 <= W_hat <= 5, uncertain(W_hat)]
W = [P >= eye(2), trace(P) == 1, A0'*P+P*A0 <= -eye(2)]

In addition to minimizing lambda, I'd like to maximize the absolute value of W_hat to determine what range is feasible

objective = -(lambda);
solvesdp(L + F + W, objective)

YALMIP does not like the matrix decomposition in this form, nor does it like its original form

A0 = [-Ar + B*W_hat, B; -lambda*W_hat, -lambda];

I'm still relatively new to using LMIs and YALMIP. I apologize for any ambiguities. Any feedback would be greatly appreciated.

Johan Löfberg

Sep 9, 2014, 3:50:43 PM
"Does not like"?

Your problem is bilinear in lambda and P, so you have a BMI and you most likely don't have any solver for (the very hard problem class of) BMIs

Of course, this particular model makes no sense as lambda can only take the value 1, but I guess you only created this minimal model to show the issue
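
For reference, a minimal sketch of how the bilinear structure can be made visible and how an explicit BMI-capable solver would be selected (this assumes a solver such as PENBMI is installed; sdisplay, sdpsettings, and solvesdp are standard YALMIP calls, while Constraints below is just a placeholder for whatever constraint list is being solved):

% Display the Lyapunov expression symbolically; products such as lambda*P
% in the entries are what make this a BMI rather than an LMI
sdisplay(A0'*P + P*A0)

% Select an explicit BMI-capable solver (assumes PENBMI is on the path)
ops = sdpsettings('solver','penbmi','verbose',1);
% solvesdp(Constraints, objective, ops)   % pass the options as the third argument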

drw5ff

Sep 15, 2014, 3:02:34 PM
to yal...@googlegroups.com
Thanks for getting back to me.

That's correct. I set lambda to 1 to show the issue. What about an extremely simplified scenario where you have the Lyapunov equation, k is some time-varying parameter between -inf and 1, and P is a (1,1) positive definite variable? The objective would be to minimize k.

%Set variables
k = sdpvar(1,1);
P = sdpvar(1,1);

%Set matrices
A0 = -1 + k;

%Set conditions and solve gevp
F = [-inf <= k <= 1, uncertain(k)]
W = [P >= eye(1), A0'*P+P*A0 <= -eye(1)]
objective = -k;
solvesdp(F + W, objective)


You were correct: this problem is bilinear. I took your suggestion, installed a BMI solver (Tomlab), removed the Tomsys path (conflicting functions), and called it using sdpsettings, but I haven't had any luck. Thanks again for the feedback.
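
As a side note, a small hedged sketch of how the solver outcome can be inspected with standard YALMIP calls, which usually narrows down what "no luck" means (the names follow the scalar example above):

sol = solvesdp(F + W, objective);
sol.problem                  % 0 = solved; nonzero codes flag infeasibility, missing solver, etc.
sol.info                     % human-readable status string from YALMIP/the solver
yalmiperror(sol.problem)     % text description of the error code
check(F + W)                 % constraint residuals at the returned solution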


 

Johan Löfberg

Sep 15, 2014, 3:07:32 PM
to yal...@googlegroups.com
This model makes no sense

The minimax problems we solve are of the form

min_x max_k f(x,k)  s.t.  g(x,k) >= 0 for all k in K

so here: min max -k  s.t.  g(x,k) >= 0 for all k in K, i.e.

min t  s.t.  {g(x,k) >= 0, t >= -k} for all k in K

Infeasible (as there exists no t larger than -k for all -inf <= k <= 1)

drw5ff

Sep 15, 2014, 3:20:47 PM
to yal...@googlegroups.com
I must have the problem set up wrong. Intuitively, if k were very small, then A0 -> -1. LMI Toolbox uses something called a dilation factor: you input constraints for your uncertain variable and it returns some multiple of them that represents your lower and upper bounds for feasibility. I'm interested in that range of feasibility for k so that A0 is stable and satisfies A0'*P+P*A0 < 0. Is that not possible?

Johan Löfberg

Sep 15, 2014, 3:31:22 PM
to yal...@googlegroups.com
You must have misunderstood something about the robust optimization framework and standard robust optimization. When you use the uncertain command, you are saying: I want the following constraints to hold *for all* k in this set, and I want the worst-case cost. Clearly, the worst-case cost here is infinity, so the problem is ill-posed.

The dilation you speak of sounds like some specialized high-level function in that toolbox, which you can easily mimic with a bit of code. It sounds like you simply want to perform a bisection on a constant L in the model:


%Set matrices
A0 = -1 + k;

%Set conditions and solve gevp
F = [L <= k <= 1, uncertain(k)]
W = [P >= eye(1), A0'*P+P*A0 <= -eye(1)]
solvesdp([F, W])


I.e., find the lowest L such that the uncertain model above is feasible?




Johan Löfberg

Sep 15, 2014, 3:39:46 PM
to yal...@googlegroups.com
Also, your description in combination with the model makes things unclear. Do you want to:

1. Find the range of values for which stability is guaranteed with a single P, or

2. Find the lowest possible value of k, such that there exists a P for any k larger than that value (and less than 1)

drw5ff

Sep 15, 2014, 4:40:02 PM
to yal...@googlegroups.com
Thanks Johan,

It would be the first scenario. I apologize for the lack of clarity. 

Johan Löfberg

Sep 15, 2014, 4:48:32 PM
to yal...@googlegroups.com
OK, still infeasible though as k=1 is infeasible

Johan Löfberg

Sep 15, 2014, 5:14:54 PM
to yal...@googlegroups.com
Adapted from the gevp example on the wiki (don't run this on your trivial model, as it is unbounded in k, so the first loop won't terminate):

Lup = 0.99;
Llow = -1;
sol = solvesdp([Llow <= k <= .99, uncertain(k), TheModel]);
% Find bad k
while sol.problem == 0
    Llow = Llow*2;
    sol = solvesdp([Llow <= k <= .99, uncertain(k), TheModel]);
end
% bisect
while Lup - Llow > desiredtolerance
    Ltest = (Llow + Lup)/2;
    sol = solvesdp([Ltest <= k <= .99, uncertain(k), TheModel]);
    if sol.problem
        Llow = Ltest;
    else
        Lup = Ltest;
        Lworks = Ltest;
    end
end
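
For completeness, a minimal hedged setup that the snippet above assumes; TheModel and desiredtolerance are placeholders, and the values below simply mirror the scalar example from earlier in the thread:

k = sdpvar(1,1);
P = sdpvar(1,1);
A0 = -1 + k;
TheModel = [P >= eye(1), A0'*P + P*A0 <= -eye(1)];
desiredtolerance = 1e-3;
% As warned above, this trivial model stays feasible for arbitrarily negative k,
% so the doubling loop would never terminate on it; start from a finite lower
% bound of interest (or skip the doubling loop) in that case.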



