Model creation failed for a network agents position control problem


Rubby

May 5, 2015, 8:38:43 PM
to yal...@googlegroups.com
Hello all,

I ran into some problems when implementing the following SDP, most probably because of the way I wrote the constraints (this is a network agents position control problem).

Here, alpha is the objective and it is a scalar. The decision variables are alpha and the matrix x(k+1). x_i(k+1) and x_j(k+1) are rows of x(k+1), each with 2 elements representing the positions of agents i and j. ||x_ij(k+1)|| denotes the distance between agents i and j, and Z_ij(k+1) is the square of that distance. L_G(k+1) is a square symmetric matrix (the Laplacian of the network) whose entries depend on the distances x_ij(k+1). Specifically, its off-diagonal element is equal to

and the diagonal elements follow from the zero row-sum property. The last constraint means the distance between agents i and j should be no less than rho (a constant).

All symbols with index (k) are known, and those with (k+1) are variables. In the simulation I choose n = 6 as an example.

This is my first time implementing an SDP, and I am having difficulty making the constraints convex in the YALMIP implementation. The code is attached; many thanks for your advice and generous help.

Best regards,

main_problem.m

Mark L. Stone

May 5, 2015, 10:11:56 PM
to yal...@googlegroups.com
You wrote "first constraint. do not know why creation failed".  it succeeded for me, and if you remove the ; you'll see YALMIP says " Matrix inequality 6x6".

You wrote "I use norm to calculate the distance, and maybe this is a problem that destroys convexity".  Yes, you used an equality constraint with norm; that destroys convexity.

I didn't look at the rest of your code, but maybe that can get you started.

Mark L. Stone

May 5, 2015, 10:17:20 PM
to yal...@googlegroups.com
If you have norm(affine function of decision variables) <= number, then that is convex. If it is an equality, it is not convex. If norm(affine function of decision variables) >= number, that is not convex either, but you may be able to use the convex-concave optimization procedure, with all its caveats, described in stephen_boyd's answer at http://ask.cvxr.com/t/how-to-handle-nonlinear-equality-constraints/22/3 to handle such a "wrong way" constraint.
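A minimal YALMIP sketch of the three cases, with made-up data rather than your model:

x = sdpvar(2,1);
Convex = [norm(x - [1;1]) <= 5];      % norm(affine) <= constant: convex, fine
% norm(x - [1;1]) == 5                % equality on a norm: not convex
% norm(x - [1;1]) >= 5                % "wrong way" lower bound: not convex
optimize(Convex, norm(x - [3;4]))     % the convex case solves as an SOCP

Only the first form can be handled directly; the commented-out ones are the constraints that break convexity.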

Johan Löfberg

May 6, 2015, 2:09:49 AM
to yal...@googlegroups.com
Like Mark, I have no problems setting up the model, i.e., it does not fail as you indicate in the comments. However, you are setting up a nonconvex model (draw the set norm(x) == 1 in 2D on paper!), and you do this using a convexity-restricted operator (the 2-norm), which has no fallback method for the nonconvex case (if a nonconvex 1-norm is encountered, YALMIP switches to a mixed-integer approach).

If you reformulate your model using quadratics instead of norms, you at least get a model which YALMIP can compile and theoretically solve using, e.g., a local BMI solver such as penlab, a global solver such as penbmi, or semidefinite relaxations through solvemoment. I wouldn't hope for much, though. A simple heuristic might work much better; heuristics for these distance problems are a pretty active field.
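A rough sketch of what I mean for a single pair of agents, with made-up names and numbers rather than your attached code:

rho = 1;                 % assumed minimum separation
xi  = sdpvar(2,1);       % position of agent i
xj  = sdpvar(2,1);       % position of agent j
d   = xi - xj;

% quadratic instead of norm: still nonconvex, but a polynomial model
% that can be compiled and handed to a BMI/global solver
C = [d'*d >= rho^2, -10 <= [xi;xj] <= 10];   % explicit bounds help a global solver

obj = (xi - [2;3])'*(xi - [2;3]);            % toy objective: pull agent i towards a target

optimize(C, obj, sdpsettings('solver','bmibnb'))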

Mark L. Stone

May 6, 2015, 7:39:10 AM
to yal...@googlegroups.com
Johan, you wrote "using, e.g., a local BMI solver such as penlab, a global solver such as penbmi". I thought both are local solvers, and that the main difference between them is that penlab is, more or less, the free version of penbmi, perhaps not quite up to the same performance and reliability standards.

Johan Löfberg

May 6, 2015, 7:40:13 AM
to yal...@googlegroups.com
Typo, I meant the global solver bmibnb.

Johan Löfberg

May 6, 2015, 8:49:33 AM
to yal...@googlegroups.com
btw, your code minimizes gamma; it doesn't maximize it, as you appear to want from the figure.
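Assuming your constraint list is called Constraints, maximization is done by minimizing the negated objective:

optimize(Constraints, -gamma)    % optimize() minimizes, so negate gamma to maximize it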

Johan Löfberg

May 6, 2015, 9:12:44 AM
to yal...@googlegroups.com
Also note that when you relax the nonconvex equalities to convex inequalities and simply skip the lower-bound norm constraint, the solution you get is essentially feasible for the original problem.
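For a single pair of agents, the relaxation would look something like this (toy names, not your attached code):

xi = sdpvar(2,1);  xj = sdpvar(2,1);  Z = sdpvar(1,1);
d  = xi - xj;

% nonconvex original:  Z == d'*d   together with   norm(d) >= rho
% convex relaxation:   keep only the epigraph inequality and drop the lower bound
C_relaxed = [Z >= d'*d];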