norm 2 for a vector


Lorenzo Nespoli

Nov 8, 2016, 4:30:27 AM
to YALMIP


I have two basic questions concerning the norm operator.
I couldn't find the answer in the documentation or in the forum.

I assume that the norm 2 operator for a vector returns the following:

norm(x,2) = (x'x)^0.5

But when I use the two forms in my optimizations I get different results.
Is my assumption correct? How is the optimization carried out in the two cases?
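
For what it's worth, the identity itself is easy to check numerically. A minimal sketch (plain Python rather than MATLAB/YALMIP, purely for illustration of the definition above):

```python
import math

def norm2(x):
    """Euclidean (2-)norm: sqrt of the sum of squares."""
    return math.sqrt(sum(xi * xi for xi in x))

def inner(x, y):
    """Inner product x'y."""
    return sum(a * b for a, b in zip(x, y))

x = [3.0, 4.0]
print(norm2(x))                # 5.0
print(math.sqrt(inner(x, x)))  # 5.0, same value: norm(x,2) = (x'x)^0.5
```

So the two expressions agree as functions of x; any difference in the optimization results must come from how the solver treats the two formulations, not from the math.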

Also, I have found that sometimes minimizing (x'x)^0.5 is much faster than minimizing
(x'x), and that minimizing them in sequence is faster than directly minimizing (x'x): first minimize (x'x)^0.5,
then minimize (x'x) starting from the previous solution.
Is there a theoretical background for this, or is it just a lucky case? (My problem includes constraints.)

Thank you very much for your help.




Johan Löfberg

Nov 8, 2016, 4:37:05 AM
to YALMIP
Since you're not showing this in context, it is impossible to answer. The norm has a different gradient than the squared norm, and many odd things can happen with the non-smooth norm. Additionally, in some cases you might have a problem which is SOCP-representable, which YALMIP will automatically extract from quadratic expressions. If you use sqrt(x'*x) you will get a general nonlinear program, even though the problem would be SOCP-representable if you had used the norm operator or squared the expression.

If you have a non-convex problem, any minor change can lead to huge differences in solution and solution time.
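
The gradient difference mentioned above is concrete: the gradient of ||x|| is x/||x|| (undefined at x = 0, hence non-smooth), while the gradient of x'x is 2x (smooth everywhere). A small sketch in plain Python, for illustration only:

```python
import math

def grad_norm(x):
    """Gradient of ||x||: x / ||x||. Fails at x = 0 (non-smooth point)."""
    n = math.sqrt(sum(xi * xi for xi in x))
    return [xi / n for xi in x]

def grad_sq(x):
    """Gradient of x'x: 2x. Defined and smooth everywhere."""
    return [2.0 * xi for xi in x]

x = [3.0, 4.0]
print(grad_norm(x))  # [0.6, 0.8] -- always unit length, regardless of |x|
print(grad_sq(x))    # [6.0, 8.0] -- scales with x
```

The norm's gradient has constant unit length, while the squared norm's gradient shrinks toward zero near the optimum; a gradient-based solver can behave quite differently on the two objectives even though they share the same minimizer.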