reaching maximum iterations

Mohammad Ali Nazari

Oct 19, 2021, 1:36:38 PM10/19/21
to manopt...@googlegroups.com
Hello,
I am using Manopt to solve an optimization problem. The unknown X is a structure whose components live on manifolds as well as in Euclidean space, so I formulate the problem over a product of manifolds.
The cost function involves very large numbers, so the absolute value of the cost is large. The solver reaches the maximum number of iterations, and I cannot rely on the solution it returns. Please see the attached picture.
Could you please give me some advice? Perhaps I should use some general programming technique.
Best regards,
Mohammad 
manopt.png

Nicolas Boumal

Oct 20, 2021, 5:52:55 AM10/20/21
to Manopt
Dear Mohammad,

It's difficult to know without more information. Did you run "checkgradient" and (as the case may be) "checkhessian" on your problem?
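In case it helps, here is a minimal sketch of how such a check is run (mycost and myegrad are placeholders for your own functions, and M for your product manifold):

problem.M = M;                    % your product manifold
problem.cost  = @(x) mycost(x);   % your cost function
problem.egrad = @(x) myegrad(x);  % your Euclidean gradient
checkgradient(problem);           % the log-log plot should show a slope of 2
% checkhessian(problem);          % only meaningful if problem.ehess is supplied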

Best,
Nicolas

Mohammad Ali Nazari

Oct 29, 2021, 11:32:28 AM10/29/21
to Manopt

Hi Nicolas,

Thanks for the reply, and sorry that I am replying late.
Yes, I ran checkgradient. I did not know that I could run checkhessian without supplying the Hessian to the problem (?)

Back to my main question:
I am using the conjugate gradient solver.
The optimization space is a product of a rotations manifold and a few Euclidean manifolds.
Since I last wrote here, after the solver kept reaching the maximum number of iterations, I tried to investigate further by optimizing over each manifold separately instead of over the product of manifolds.
It seems that the variable on one particular manifold is the bottleneck: the toolbox cannot improve it any further, while the variables on the other manifolds could still improve.

Can this conjecture be true in principle? That is, could the solver get stuck at a stationary point of one subspace while it could still make progress on the other variables by moving along their corresponding gradients?

Let me also ask the question another way: with a product of manifolds, is there only a single step size? If so, my conjecture could well be true, because sometimes my problem reaches the maximum number of iterations, and sometimes it stops after a single iteration upon reaching the minimum allowed step size.
In that case, do you have any suggestions for jointly optimizing the variables belonging to different manifolds?

Thank you!
Best regards,
Mohammad 

Nicolas Boumal

Nov 2, 2021, 12:46:15 PM11/2/21
to Manopt
Hello,

>  I did not know that I could run checkhessian without supplying the Hessian to the problem (?)

No need to run it in that case, indeed.

To your main question: the default metric for a Riemannian product manifold as implemented in Manopt can indeed be a poor choice. In effect, if we have two manifolds M and N and we consider their product M x N, then the inner product (Riemannian metric) on M x N is simply the sum of the inner products on M and N. But if M and N have very different scales (which is your case, because you have one compact manifold and one noncompact manifold), then simply taking the sum might make little sense.
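
For concreteness, here is a minimal sketch of such a product as it might look in your case (the component types and sizes are hypothetical):

elems.R = rotationsfactory(3);     % compact component (SO(3))
elems.v = euclideanfactory(5, 1);  % noncompact component (R^5)
M = productmanifold(elems);
% The default metric on M simply adds the component metrics:
%   <u, v>_x = <u.R, v.R>_{x.R} + <u.v, v.v>_{x.v}
% so components that live on very different scales get equal weight.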

There are three ways to fix this:

 * Modify the productmanifold tool to allow for other metrics. For example, if taking the product of k manifolds, you could take as input a vector of k positive numbers and use the weighted sum of inner products as your Riemannian metric (a rough sketch of this idea follows after this list).

 * Implement a new factory just for your product, with the metric of your choosing.

 * Define a preconditioner with problem.precon and/or problem.preconsqrt: the effect of preconditioning is pretty much the same as that of changing the metric (the differences would lead to a longer discussion). A second sketch below illustrates this.
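
As a rough sketch of the first option, with the hypothetical product above and hypothetical weights, one could override the metric of the product (note that a fully consistent metric change also requires adapting egrad2rgrad, and ehess2rhess if you use a Hessian, which is why a dedicated factory is the cleaner route):

M = productmanifold(elems);
w.R = 1;  w.v = 1e-6;    % hypothetical weights, to be tuned to your problem
M.inner = @(x, u, v) w.R * elems.R.inner(x.R, u.R, v.R) + ...
                     w.v * elems.v.inner(x.v, u.v, v.v);
M.norm  = @(x, u) sqrt(M.inner(x, u, u));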

See the tutorial on manopt.org for more about all of these options.
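
And a sketch of the preconditioner route, again with hypothetical weights (the preconditioner should act as a symmetric positive definite operator on tangent vectors; a component-wise rescaling like this mimics a weighted metric without touching the factories):

wR = 1;   % scale for the rotation component
wv = 1e6; % hypothetical: make this larger where the cost varies fastest
problem.precon = @(x, u) struct('R', u.R / wR, 'v', u.v / wv);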

Best,
Nicolas