piecewise functions, objective on the conditions


Hassan D

Apr 11, 2018, 10:49:43 PM
to YALMIP


Dear Johan,

I'm trying to build an (n+m)-by-(n+m) matrix A using the following logic. Consider an n-by-1 vector x and an m-by-1 vector s.

A(i,j) = 1   if |x_i - x_j| <= c1,   i,j = 1,...,n
A(i,j) = 0   if |x_i - x_j| >  c1,   i,j = 1,...,n

also

A(i,j+n) = 1   if |x_i - s_j| <= c2,   i = 1,...,n,  j = 1,...,m
A(i,j+n) = 0   if |x_i - s_j| >  c2,   i = 1,...,n,  j = 1,...,m

and

A(i,j) = 0   for i = n+1,...,n+m  and  j = 1,...,n+m

That is, the matrix A will be as follows:

A = [ A_11 , A_12 ;
        0  ,   0  ];

Let D=diag(sum(A,2)), and define L=D-A, which is the Laplacian of the graph associated with the adjacency matrix A.
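For concreteness, a small numeric sketch of this construction for fixed, made-up values of x, s, c1 and c2 (just to illustrate how A and L are assembled; not part of the optimization model):

x  = [0.1; 0.4; 0.9];                  % hypothetical n = 3
s  = [0.2; 0.8];                       % hypothetical m = 2
c1 = 0.5;  c2 = 0.3;                   % hypothetical thresholds
n  = numel(x);  m = numel(s);

A11 = double(abs(x - x') <= c1);       % n-by-n block
A12 = double(abs(x - s') <= c2);       % n-by-m block
A   = [A11, A12; zeros(m, n+m)];       % last m rows are zero

D   = diag(sum(A,2));
L   = D - A;
L11 = L(1:n,1:n);
eig(L11)                               % goal: all eigenvalues positive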

What I'm trying to do is find c1 and c2 such that the eigenvalues of L_11 (the block with the same dimensions as A_11) are all positive.


I wasn't sure how to write the code. I don't think A11 and A12 should be sdpvars, but I couldn't think of anything else. Is this even possible?


clc
clear
close all

c  = sdpvar;   % threshold c1
cl = sdpvar;   % threshold c2

model = [0 <= c <= 1, 0 <= cl <= 1];

n = 3;
m = 2;
N = n + m;

f = sort(unifrnd(0,1,n,1));     % plays the role of x
s = sort(unifrnd(0,0.7,m,1));

A21 = zeros(m,n);
A22 = zeros(m,m);

df = binvar(2);
ds = binvar(2);

ep = 0.0000001;

A11 = sdpvar(n,n);

for i = 1:n
    for j = 1:n
        model = [model,
                 sum(df) == 1,
                 implies(df(1), [c >= abs(f(i)-f(j)), A11(i,j) == 1])
                 implies(df(2), [abs(f(i)-f(j)) >= c + ep, A11(i,j) == 0])];
    end
end

A12 = sdpvar(n,m,'full');

for i = 1:n
    for j = 1:m
        model = [model,
                 sum(ds) == 1,
                 implies(ds(1), [cl >= abs(f(i)-s(j)), A12(i,j) == 1])
                 implies(ds(2), [abs(f(i)-s(j)) >= cl + ep, A12(i,j) == 0])];
    end
end

A = [A11, A12; A21, A22];

D = diag(sum(A,2));
L = D - A;

L11 = L(1:n,1:n);

model = [model, eig(L11) > 0]

optimize(model)

Lv   = value(L)
L11v = value(L11)
Av   = value(A)
eig(L11v)




Johan Löfberg

Apr 12, 2018, 1:54:54 AM
to YALMIP
Several issues

First, it looks strange that df and ds are 2x2 symmetric matrices. In the end, you only use two of the elements anyway.

Second, eig is an experimental hack operator you use for nonsmooth brute-force with nonlinear solvers. It typically never works, so the model you have here is almost guaranteed to fail.

Third, why even try to use optimization? There is only a small number of possible matrices. Simply enumerate those cases.
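A rough sketch of what such an enumeration could look like (hypothetical, plain MATLAB rather than YALMIP; it assumes fixed f and s as in the script above, and uses c+cl as an arbitrary ranking since the actual objective has not been stated). The adjacency pattern only changes when c crosses one of the distances |f(i)-f(j)|, and likewise for cl and |f(i)-s(j)|, so only those candidate values need to be tested:

% Brute-force enumeration over the finitely many threshold patterns
cand_c  = unique(abs(f - f'));         % candidate values for c
cand_cl = unique(abs(f - s'));         % candidate values for cl

best = [inf inf];
for ci = cand_c(:)'
    for cli = cand_cl(:)'
        A11 = double(abs(f - f') <= ci);
        A12 = double(abs(f - s') <= cli);
        A   = [A11, A12; zeros(m, n+m)];
        L   = diag(sum(A,2)) - A;
        L11 = L(1:n,1:n);
        if all(eig(L11) > 0) && ci + cli < sum(best)   % arbitrary ranking, see above
            best = [ci cli];
        end
    end
end
best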

Hassan D

Apr 12, 2018, 2:51:30 AM
to YALMIP

Thanks for your reply. 

You're right. df and ds should be 2x1. (Do I have to define new binary variables for each iteration? I mean, do I have to iterate df and ds?)

For my problem, I'm trying to minimize c1 and c2. In fact, I need the eigenvalues of (L11) to be positive for the smallest possible values of c1 and c2. That's why I thought optimization would help.

I think a milder constraint would be det(L11) ~=0. 


Johan Löfberg

Apr 12, 2018, 3:11:41 AM
to YALMIP
You define as many df and ds vectors as you need, one vector for every disjunction you introduce.

Saying you want to minimize c1 and c2 (c and cl?) is not clear. The sum? The maximum? The product?

det() ~= 0 is also a horrible condition for anything but trivially small problems. However, with only binary variables in A it is MILP-representable, as you can compute the determinant explicitly, and that binary polynomial can be linearized with binmodel.

A11 = binvar(n,n);

for i = 1:n
    for j = 1:n
        df = sdpvar(2,1);   % a fresh pair of indicator variables for every (i,j) disjunction
        model = [model,
                 sum(df) == 1,
                 implies(df(1), [c >= abs(f(i)-f(j)), A11(i,j) == 1])
                 implies(df(2), [abs(f(i)-f(j)) >= c + ep, A11(i,j) == 0])];
    end
end

A12 = binvar(n,m,'full');

for i = 1:n
    for j = 1:m
        ds = binvar(2,1);
        model = [model,
                 sum(ds) == 1,
                 implies(ds(1), [cl >= abs(f(i)-s(j)), A12(i,j) == 1])
                 implies(ds(2), [abs(f(i)-s(j)) >= cl + ep, A12(i,j) == 0])];
    end
end

A   = [A11, A12; A21, A22];
D   = diag(sum(A,2));
L   = D - A;
L11 = L(1:n,1:n);

% determinant as an explicit polynomial in the binaries, linearized with binmodel,
% then forced away from zero via a disjunction
p = det(L11,'polynomial');
[plinear,cuts] = binmodel(p);
q = binvar(2,1);
model = [model, cuts, sum(q) == 1, implies(q(1), plinear >= 1), implies(q(2), plinear <= -1)];
optimize(model, c + cl)


Johan Löfberg

Apr 12, 2018, 3:27:58 AM
to YALMIP
And since A11 is symmetric, I think you trivially have that diag(A11) == 1, and the inner loop in the first loop can be reduced to j = i+1:n.
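A minimal sketch of that reduction, assuming the same f, c, ep and model variables as in the code above (an illustration only, not necessarily the exact formulation intended):

A11 = binvar(n,n);                        % symmetric by default in YALMIP
model = [model, diag(A11) == 1];          % |f(i)-f(i)| = 0 <= c always holds
for i = 1:n
    for j = i+1:n                         % strict upper triangle only
        df = binvar(2,1);
        model = [model,
                 sum(df) == 1,
                 implies(df(1), [c >= abs(f(i)-f(j)), A11(i,j) == 1])
                 implies(df(2), [abs(f(i)-f(j)) >= c + ep, A11(i,j) == 0])];
    end
end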

Johan Löfberg

Apr 12, 2018, 3:29:51 AM
to YALMIP
To reduce the number of variables, and thus the complexity of the determinant, you could enforce that by parameterization:

A11 = binvar(n,n);
A11 = A11 - diag(diag(A11)) + diag(ones(n,1));

Johan Löfberg

Apr 12, 2018, 4:14:04 AM
to YALMIP
should be df = binvar(2,1) of course

Hassan D

Apr 12, 2018, 8:40:09 AM
to YALMIP
Thank you for your help. This is really kind of you. 

By minimizing c and cl, I meant the following.

As is obvious, c and cl are allowed to be between 0 and 1. I know that in my model, beyond certain values of c and cl, det(L11) will be nonzero. Therefore, I'm trying to find the smallest values that c and cl can take such that det(L11) ~= 0. Obviously, I'm not trying to minimize their sum or product. I'm not sure I fully understand what you have done, so I have to read the documentation first, but thanks very much for your time.

cheers,

Johan Löfberg

Apr 12, 2018, 9:05:13 AM
to yal...@googlegroups.com
It is still not clear what you mean by "the smallest value c and cl can have."

What is best: c = cl = 0.5, or c = 0.6 and cl = 0.3, or c = 0.7 and cl = 0?

Hassan D

Apr 12, 2018, 7:16:53 PM
to YALMIP
I actually know that for large values of c and cl, e.g. c=0.8 and cl=0.9, det(L11) will be nonzero. However, as I decrease these values, at one point for a c* and cl* the det will be zero. So, somewhere between 0 and 1, c and cl have values above which det(L11) will always be nonzero. The simplest case for me would be to assume cl is known. Then I would have to find the value of c above which det(L11) ~= 0.

Johan Löfberg

Apr 13, 2018, 2:07:49 AM
to YALMIP
The statement "at one point for a c* and cl*" doesn't make sense. That should be a set of points, a surface, and thus to get an optimization problem for that, you have to encode a point of interest on that switching surface by defining a scalar objective.

Anyway, whatever you use (minimize c, or minimize max(c,cl), or minimize c+cl), they can all be represented as MILPs with the det criterion as implemented above. The problems explode in size for increasing n and m (for n = 5, m = 4 there will be 26000 binary variables and it takes YALMIP several minutes to derive the linear model), but the actual problem is trivially solved (solved in the root node).
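For example, the only thing that would change is the objective passed to optimize; a sketch of the three alternatives (max is an operator YALMIP models automatically):

optimize(model, c)             % minimize c only
optimize(model, max(c, cl))    % minimize the larger of the two
optimize(model, c + cl)        % minimize the sum, as in the code above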

However, for cl fixed, the problem is really trivially solved by simply gridding or trial and error in c. For any fixed c, you can easily compute everything and check the eigenvalues. Hence, start at c=1 and work towards 0, perhaps with a bisection strategy if you know that the condition is monotonic in decreasing c. You will never be able to beat that.
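For instance, a minimal sketch of that approach, assuming cl is fixed and that feasibility is indeed monotone in c (buildL11 is a hypothetical helper, plain MATLAB with no YALMIP involved):

cl_fixed = 0.3;                          % assumed known
isfeas = @(cc) all(eig(buildL11(f, s, cc, cl_fixed)) > 0);

clo = 0; chi = 1;                        % assumes the condition holds at c = 1
for k = 1:30                             % bisect to ~1e-9 resolution
    cmid = (clo + chi)/2;
    if isfeas(cmid), chi = cmid; else, clo = cmid; end
end
c_star = chi                             % smallest c (to tolerance) with positive eigenvalues

function L11 = buildL11(f, s, c, cl)     % hypothetical helper: build L11 for fixed thresholds
    n = numel(f);  m = numel(s);
    A11 = double(abs(f - f') <= c);
    A12 = double(abs(f - s') <= cl);
    A   = [A11, A12; zeros(m, n+m)];
    L   = diag(sum(A,2)) - A;
    L11 = L(1:n,1:n);
end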