Optimization on manifolds vs. classical optimization


Artashes Asoyan

Jun 28, 2022, 4:02:38 AM
to Manopt

Hello, 
What is the benefit of using the Manifold Optimization Approach (MOA)? I understand that it is more elegant: instead of dealing with constraints, we redefine the space as a manifold on which the variable lives, so the classical constrained optimization problem becomes an unconstrained one in MOA. But in terms of results, (numerical) efficiency, or convergence, are there any remarkable differences?

If I can use both methods, why should I choose MOA over classical constrained optimization? Or in which cases should I choose MOA over classical constrained optimization?

Best, 
Artashes

Ronny Bergmann

Jun 28, 2022, 8:08:45 AM
to Manopt
Hej,
sure, it is sometimes a tradeoff if you compare constrained optimisation versus unconstrained optimisation on a manifold. The first question is, for example, whether the constraint yields a nice manifold (one where exp/log can be computed efficiently, or at least a retraction and its inverse). So one thing to consider is that "moving around" gets a little more complicated compared to (just) plus and minus.

Concerning convergence, for e.g. gradient descent you get the same convergence results.

But there is more to consider, namely that certain problems are (geodesically) convex in the manifold setting, especially functions involving the Riemannian distance (squared).
This is because convexity on manifolds is defined along geodesics (which are curves other than just straight lines).

So if
- the manifold describing your constraint is nice
- your problem is convex on the manifold

I would recommend considering unconstrained optimisation on manifolds.
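To make the "moving around with a retraction instead of plus" point concrete, here is a minimal numpy sketch (my own generic example, not code from this thread or from Manopt): Riemannian gradient descent on the unit sphere for the Rayleigh quotient f(x) = xᵀAx, using the metric-projection retraction.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2  # a symmetric cost matrix

def retract(x, v):
    """Metric-projection retraction on the sphere: step in the ambient space, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_grad(x):
    """Euclidean gradient of f(x) = x' A x, projected onto the tangent space at x."""
    g = 2 * A @ x
    return g - (x @ g) * x

x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(1000):
    x = retract(x, -0.05 * riemannian_grad(x))

# The minimizer is an eigenvector for the smallest eigenvalue of A.
print(x @ A @ x, np.linalg.eigvalsh(A)[0])
```

Every iterate stays exactly feasible (unit norm), which is the practical payoff of the unconstrained-on-a-manifold viewpoint.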

If, for example, some constraints yield a nice manifold (but you have further ones left), you can also consider constrained optimisation on a manifold; see for example https://arxiv.org/pdf/1901.10000.pdf or https://arxiv.org/pdf/1804.06214.pdf or https://arxiv.org/pdf/2110.04882

Best,
Ronny

Spencer Kraisler

Jun 28, 2022, 10:33:46 AM
to Artashes Asoyan, Manopt
Hi,

This is a great question. One big reason is invariance under parameterization. When you work in the ambient space, your choice of parameterization (e.g. unit quaternions vs. Euler angles vs. rotation matrices for SO(3)) will directly affect the limit point, the rate of convergence, and even whether the algorithm converges at all. This is especially true if your sequence of gradient-descent iterates is global on the manifold (i.e. you start far from the optimal point). If you stick to Riemannian optimization, your choice of parameterization has no effect on these quantities: whether you use unit quaternions or rotation matrices, you get the same sequence of points, the same limit point, the same rate of convergence, etc.
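A toy numpy illustration of this dependence (my own sketch, not from Spencer's post, using the circle instead of SO(3) to keep it short): the same cost under two different parameterizations produces genuinely different Euclidean gradient-descent iterates, because the chain rule rescales the gradient.

```python
import numpy as np

# Minimize f(x) = -x[0] over the unit circle, under two parameterizations:
#   1) x = (cos t, sin t)    -> f(t) = -cos(t)
#   2) x = (cos 3u, sin 3u)  -> f(u) = -cos(3u)
# Both describe the same problem, but Euclidean GD behaves differently.

def point(theta):
    return np.array([np.cos(theta), np.sin(theta)])

alpha = 0.3
t, u = 2.0, 2.0 / 3.0  # identical starting point on the circle
for _ in range(10):
    t -= alpha * np.sin(t)           # gradient of -cos(t)
    u -= alpha * 3 * np.sin(3 * u)   # chain rule introduces a factor of 3
print(point(t))      # converges towards (1, 0)
print(point(3 * u))  # effectively a 9x larger step on the circle: it oscillates
```

Riemannian gradient descent on the embedded circle itself would produce one sequence of points, independent of which chart you used to describe it.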

Best,
Spencer  


Artashes Asoyan

Jun 28, 2022, 11:45:28 AM
to Manopt
Hi Ronny,
thank you for your reply, and a bigger thank you for the references.

I buy your argument about convexity, but not the "nice" one.

I will also take any article recommendation comparing the convergence speed of different methods.

Ronny Bergmann

Jun 28, 2022, 11:49:18 AM
to Manopt
Hi,
I am not claiming anything vague — but what is your problem with "nice"? If the constraints yield a manifold where every exponential-map evaluation requires solving a complicated (i.e. computationally inefficient) ODE, then maybe it's easier to use a constrained optimisation solver.

Uff, in theory the speed of convergence (i.e. in number of iterations) is the same. In practice: which problems do you know that can be solved on _different_ manifolds, such that you _can_ compare them? And what, in practice, would you want to compare?

Concerning Spencer's argument: on manifolds you often have the problem that everything only holds locally, for example convexity, so you often end up in local minimisers. That might still be a problem overall, sure.

Best
Ronny

Artashes Asoyan

Jun 28, 2022, 12:07:58 PM
to Manopt
Well, saying that algorithm A is nicer than B is not a scientific reason to disqualify the use of B (if B can do the same job as well as A).
"Nice" is just a matter of personal preference :).

Thank you Spencer; I'm meditating on your answer. I have to convince myself that the parameterization has no effect on convergence.

Best,
Artashes

Ronny Bergmann

Jun 28, 2022, 12:11:36 PM
to Manopt
Sure, "nice" per se is not a scientific measure. Then let's classify manifolds instead by: is the exponential map known in closed form, or do we have to solve an ODE in every step (and analogously for the logarithmic map or parallel transport)? In general the computational effort might increase for certain manifolds (compared to a Euclidean case with just a +, though then constrained optimisation algorithms have to be used).
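To make that classification concrete: on the unit sphere, both the exponential map and a cheap retraction are available in closed form. A generic numpy sketch of the standard formulas (my own example, not code from any toolbox):

```python
import numpy as np

def sphere_exp(x, v):
    """Closed-form exponential map on the unit sphere: follow the great circle."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return x.copy()
    return np.cos(t) * x + np.sin(t) * (v / t)

def sphere_retract(x, v):
    """First-order retraction: step in the ambient space, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.1, 0.0])   # tangent vector at x (orthogonal to x)
print(sphere_exp(x, v))         # exact point on the geodesic
print(sphere_retract(x, v))     # agrees with exp for small v
```

On a manifold where no such closed forms exist, each of these evaluations would instead require a numerical ODE solve, which is exactly when a constrained Euclidean solver may become the cheaper option.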

BM

Jun 29, 2022, 1:26:10 AM
to Manopt
Hello all,

Great questions and answers. 

My two cents. 

Broadly, manifold optimization tackles two classes of constraints: one, submanifolds (examples include the sphere and the Stiefel manifold), and two, quotient manifolds (examples include the Grassmann manifold).

For submanifolds (and the positive definite matrices manifold), one can definitely try standard constrained Euclidean algorithms like projected GD or Lagrangian-based methods. There have been other works as well. No doubt, each of these approaches has merits. However, the main selling point of the manifold viewpoint for these classes of problems is that it provides a very generic principle that fits these problems well, including convergence analysis. Note that constrained Euclidean algorithms do come with convergence analysis, but it is limited.

For quotient manifolds, I have yet to find Euclidean optimization algorithms that fit these cases well. Here, the manifold optimization point of view, which requires an understanding of symmetries, has clear-cut advantages.

Additionally, specific works on understanding geodesic convexity for problems on positive definite matrices give a definite advantage to the manifold viewpoint.
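As a concrete instance (my own sketch; the thread does not spell this out): the standard affine-invariant Riemannian distance on symmetric positive definite matrices is d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. Its square is geodesically convex in each argument, and it is invariant under any congruence A ↦ GAGᵀ — a property no Euclidean distance on this set has.

```python
import numpy as np

def sym_apply(S, fun):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * fun(w)) @ V.T

def spd_dist(A, B):
    """Affine-invariant distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    iS = sym_apply(A, lambda w: w ** -0.5)   # A^{-1/2}
    return np.linalg.norm(sym_apply(iS @ B @ iS, np.log), 'fro')

rng = np.random.default_rng(1)
def random_spd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

A, B = random_spd(4), random_spd(4)
G = rng.standard_normal((4, 4))   # any invertible congruence transform
print(np.isclose(spd_dist(A, B), spd_dist(G @ A @ G.T, G @ B @ G.T)))  # True
```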

Regards,
Bamdev

Artashes Asoyan

Jun 29, 2022, 7:48:25 AM
to Manopt
Thank you all for enlightening me! 

Nicolas Boumal

Jul 1, 2022, 6:35:58 AM
to Manopt
Hello,

Thanks everyone for your replies. Here is mine:


> What is the benefit of using Manifold Optimization Approach (MOA)?

Optimization problems come in many different types. We always want to find an element $x$ in some set $S$ such that $f(x)$ is as small as possible, where $f \colon S \to \mathbb{R}$ is the cost function, defined over the search space $S$. Depending on the extra structure that $S$ and $f$ may have, a certain collection of algorithms can be applied to that particular problem. Which one should we choose? It depends on the problem, and it depends on what the user wants to achieve.

Optimization on manifolds provides a unified theoretical framework to tackle optimization problems where $S$ is a smooth manifold (be it an embedded submanifold of some bigger space, or a more abstract one such as a quotient manifold, which Bamdev mentioned too).

This means that when $S$ is a smooth manifold, then optimization on manifolds is part of the collection of algorithms you can choose from.

There may be other algorithms in that collection of options.

For example, to solve problems on embedded submanifolds defined by a single set of equations $h(x) = 0$, you could also try penalty methods or augmented Lagrangian methods (ALM).

If that's the case, then the only way to know what works best for you is to run them and compare.
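For concreteness, here is what the quadratic-penalty alternative looks like on the simplest embedded submanifold, the sphere h(x) = ||x||² − 1 = 0 (my own minimal sketch with crude fixed step sizes, not a tuned solver): the iterates are infeasible throughout, and only approach the constraint as the penalty weight ρ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2  # minimize x' A x subject to h(x) = ||x||^2 - 1 = 0

def penalty_gd(x, rho, steps=3000):
    """Euclidean GD on the penalized cost x' A x + (rho/2) h(x)^2."""
    lr = 0.05 / rho  # crude step-size scaling; larger rho stiffens the problem
    for _ in range(steps):
        h = x @ x - 1.0
        x = x - lr * (2 * A @ x + 2 * rho * h * x)
    return x

x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for rho in [1.0, 10.0, 100.0]:  # classic continuation: warm-start, increase rho
    x = penalty_gd(x, rho)
    print(rho, abs(x @ x - 1.0))  # constraint violation shrinks roughly like 1/rho
```

A Riemannian method on the same problem keeps every iterate exactly on the sphere; comparing the two on your own problem is exactly the experiment suggested above.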

Part of how you may judge algorithms is performance; but another part may also be practicality.

Riemannian optimization shines most on manifolds for which we have good retractions. For a general description h(x) = 0, such retractions may not be available.

On the other hand, some nice and smooth sets are not defined by a single set of equations h(x) = 0 (they are only locally described as such, with a collection of functions h). Then penalty / ALM methods are impractical, but optimization on manifolds may still be practical.

For optimization on abstract manifolds, I don't know of another approach, so that would be an easy win for Riemannian optimization, provided the manifold is nice enough for even that to be an option.



> I understand that it is more elegant: instead of dealing with constraints, we redefine the space as a manifold on which the variable lives, so the classical constrained optimization problem becomes an unconstrained one in MOA.

Elegance is indeed part of the appeal. But more importantly, for some manifolds it turns out that Riemannian optimization algorithms are also more efficient and reliable: that's why people care in the end.



> But in terms of results, (numerical) efficiency, or convergence, are there any remarkable differences?

The theory is more satisfying in Riemannian optimization than in penalty / ALM type methods, because you do not need to worry about infeasibility: (mostly) everything happens "as if" your problem was unconstrained. This normally yields stronger guarantees.

Practical performance can indeed be superior for nice enough manifolds (e.g., the ones implemented in Manopt).


> If I can use both methods why should I choose MOA over classical optimization with constraints?

If you can use both: compare them and pick the one you prefer.

I'd be happy to hear about your experience with such comparisons.

Artashes Asoyan

Jul 7, 2022, 6:30:27 AM
to Manopt
Hello Nicolas, 

thank you for your clear and detailed answer!
In fact I am solving a constrained optimization problem using a poor dataset, and my goal is to improve the precision of the estimated parameters.
I am comparing different methods, and manifold optimization will be one of them.
I'll share my results once I finish the comparison.

Best,
Artashes
