Optimize L2-norm


Ilya Grigoriev

unread,
Jan 27, 2017, 6:18:35 PM1/27/17
to CVXOPT
Dear cvxopt-ers,

  I would like to regularize an expression by the L2-norm of a vector. That is, I want to minimize the function $\sqrt{x^T x} - q^T x$, subject to some linear constraints. `cvxopt.solvers.qp` can be used to optimize this expression if we remove the square root: $x^T x - q^T x$. Is there a way to keep the square root in there?

  I've seen similar questions asked in this forum, but without answers. I'm sorry if it is a silly question; is cvxopt the wrong tool for the job? The L2-norm is a convex function, so I had hoped this would be a tractable problem.

  Thank you!
               Ilya.

Dima Pasechnik

unread,
Jan 28, 2017, 3:13:32 AM1/28/17
to CVXOPT
There is no need to keep the sqrt there. Indeed,
you can minimise the convex function
<x-q/2, x-q/2> = <x,x> - <q,x> + <q,q>/4 (e.g., just change variables: y := x - q/2).
Now your $\sqrt{<x,x> - <q,x>} = \sqrt{<x-q/2, x-q/2> - <q,q>/4}$, as required.
This is a totally standard trick...

HTH
Dima


Martin

unread,
Jan 28, 2017, 6:34:30 AM1/28/17
to CVXOPT
The problem

   minimize    norm(x) - q.T*x
   subject to  A*x == b

is equivalent to 

   minimize    t - q.T*x 
   subject to  A*x == b 
               norm(x) <= t

where norm(x) <= t means that (t,x) belongs to a second order cone, i.e., you can use cvxopt.solvers.conelp or cvxopt.solvers.socp.

Ilya Grigoriev

unread,
Jan 29, 2017, 2:45:09 AM1/29/17
to CVXOPT
Thank you both very much for your help!

@Martin: I'm pretty sure this solved my problem. It took me a while to figure out why `socp` does exactly what is needed, so I wrote it out (even though it's just renaming things). I take the n-vector `x` and prepend a scalar `t` to make an (n+1)-vector `w`. In the language of the `socp` documentation, if I pick the matrix `G_1` to be minus the identity and the vector `h_1` to be zero, one of the conditions `socp` enforces becomes `s_1 = h_1 - G_1 w = w`. Since `socp` guarantees `s_10 >= \norm{s_11}`, this means `t >= \norm{x}`, which is exactly what we want.
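The renaming above can be checked numerically. This is just a sanity check with made-up numbers: picking G_1 = -I and h_1 = 0 turns the slack s_1 = h_1 - G_1*w into w itself, so the cone condition s_1[0] >= norm(s_1[1:]) becomes t >= norm(x):

```python
import numpy as np

n = 3
x = np.array([1.0, -2.0, 2.0])       # a made-up n-vector, norm(x) = 3
t = np.linalg.norm(x)                # put t on the cone boundary
w = np.concatenate(([t], x))         # w = (t, x), an (n+1)-vector

G1 = -np.eye(n + 1)                  # G_1 = minus the identity
h1 = np.zeros(n + 1)                 # h_1 = 0

s1 = h1 - G1 @ w                     # the slack socp introduces
assert np.allclose(s1, w)            # s_1 = w, as claimed
assert s1[0] >= np.linalg.norm(s1[1:])   # i.e. t >= norm(x)
```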

@Dima: Unless I misunderstood something, your suggestion optimizes \sqrt{<x,x> - <q,x>} as opposed to \sqrt{<x,x>} - <q,x> (which is what I wanted). I didn't understand what the change of variables buys me. What am I missing?

  Thanks again,
                     Ilya.


Dima Pasechnik

unread,
Jan 29, 2017, 7:41:55 PM1/29/17
to CVXOPT


Sorry, never mind, I misread your formula,
and then read Martin's post while still confused :-)

maksimilian musik

unread,
Apr 20, 2017, 6:01:42 AM4/20/17
to CVXOPT
@Martin

Do you have any resources that show how the epigraph form would be programmed in CVXOPT? I am working on a problem that also uses a t-substitution to move one of the objective terms into the constraints; however, my problem is a QCQP, and the part that has been moved into the constraints is the quadratic portion.

Martin

unread,
Apr 21, 2017, 7:22:59 AM4/21/17
to CVXOPT
I can recommend the "MOSEK Modeling Manual" (http://docs.mosek.com/generic/modeling-letter.pdf); I think §3.3.1 may be what you are looking for.