Question about the combination of lift and Product manifolds

Zhao Xingyu

Nov 17, 2025, 1:22:16 AM
to Manopt
I have an optimization problem with two optimization variables: {x ∈ R^n : −1 ≤ x_i ≤ 1 for all i} and {Y ∈ C^{n×m} : ‖Y‖_F = 1}.

Clearly, Y lies on the complex sphere manifold, while x can be optimized through the cubeslift operation. My question is: building on the cubeslift operation, can product manifolds be used to optimize both x and Y simultaneously? Could you provide relevant examples or literature for reference? Thank you very much!

Nicolas Boumal

Nov 17, 2025, 8:21:04 AM
to Manopt

Hello!

Exactly as you said: separately, we could optimize for Y on the complex sphere, and for x through a cubes lift. Currently, "products" are not supported for lifts (lifts were implemented fairly recently).

Since lifts are implemented as a change of variable, the most direct approach is probably to implement that change of variable manually, as follows.

Use productmanifold to define the product {z \in R^n} x {Y : ||Y|| = 1}, like this:

elems.z = euclideanfactory(n);        % unconstrained variable, later mapped to x = sin(z)
elems.Y = spherecomplexfactory(n, m); % complex n-by-m matrices with unit Frobenius norm
manifold = productmanifold(elems);    % the product of the two

Then define your problem structure with a manifold and a cost function, like so:

problem.M = manifold;
problem.cost = @mycostfunction;

function f = mycostfunction(X)

   % a point X on the product manifold is a structure with two fields, named z and Y (see "elems" above)
   z = X.z;
   Y = X.Y;

   x = sin(z); % this smooth change of variable ensures -1 <= x_i <= 1

   f = ...; % implement your function of x and Y, as you normally would

end


You likely also want to implement the gradient, and maybe the Hessian. It's important to note that you technically need to implement the gradient of g(z, Y) = f(sin(z), Y), where f(x, Y) is your original cost function and g is the cost function that Manopt actually optimizes.

The gradient should also be a structure with two fields: one for the gradient with respect to z, and one with respect to Y. You can implement problem.egrad for example (as opposed to problem.grad, without the "e") if you just want to provide the Euclidean gradient; Manopt then automatically converts it to the Riemannian gradient.
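For instance, the Euclidean gradient could be supplied along these lines (only a sketch, mirroring mycostfunction above: the "..." placeholders stand for the Euclidean gradients of your own f with respect to x and Y):

problem.egrad = @myegradfunction;

function g = myegradfunction(X)

   z = X.z;
   Y = X.Y;

   x = sin(z);

   egrad_f_x = ...; % Euclidean gradient of f with respect to x, at (x, Y)
   egrad_f_Y = ...; % Euclidean gradient of f with respect to Y, at (x, Y)

   % Chain rule for the change of variable x = sin(z):
   % the z-gradient of g(z, Y) = f(sin(z), Y) is cos(z), entrywise,
   % times the x-gradient of f evaluated at x = sin(z).
   g.z = cos(z) .* egrad_f_x;
   g.Y = egrad_f_Y;

end

You can verify the implementation numerically with checkgradient(problem) before running a solver.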

When you run an optimization algorithm as

X = trustregions(problem);

your solution is 

   z = X.z;
   Y = X.Y;
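and, from the change of variable above, the solution in your original variable is

   x = sin(z); % back in the cube: -1 <= x_i <= 1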

I hope this helps.

Best,
Nicolas

Zhao Xingyu

Dec 2, 2025, 5:38:33 AM
to Manopt
Thank you for your answer; I have learned a lot. However, will the fact that the map z -> sin(z) is not one-to-one affect the results of the numerical optimization, for example its convergence?

Nicolas Boumal

Dec 2, 2025, 7:18:52 AM
to Manopt
It should be ok: you can read more about the effect of a change of variable (a reparameterization) on the landscape of an optimization problem here:

The effect of smooth parametrizations on nonconvex optimization landscapes
Eitan Levin, Joe Kileel, Nicolas Boumal

IIRC, we did not explicitly cover the case of z -> sin(z), but this is indeed a "good" change of variable: roughly, its derivative cos(z) vanishes only where sin(z) = ±1, that is, on the boundary of the cube, so any spurious stationary points the reparameterization introduces can only map to boundary points. It won't create serious difficulties.
