Pardon the simple question, as I am new to Riemannian optimization theory and to the tool. I have a loss function L(M), where M is a parametric d×d matrix. It can be factorized as M = AA^T. I want to constrain M to lie on the manifold of symmetric positive semi-definite matrices of fixed rank (say k). Initially, I formulated my loss function in terms of M, i.e. L(M), and computed its Euclidean gradient egrad(L(M)). But from the description on the page for this manifold (http://www.manopt.org/manifold_documentation_symfixedrank.html), it seems we have to compute and pass the Euclidean gradient in terms of A instead.
My question is: do I need to reformulate my loss function in terms of A as L(A) (which would make the formulation non-convex, something I wanted to avoid) and compute its Euclidean gradient with respect to A to pass to the tool? Am I understanding this correctly?
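For what it's worth, here is the chain rule I would use to get the gradient in terms of A without rederiving anything: if L(A) := L(AA^T) and G = egrad_M L evaluated at M = AA^T, then egrad_A L = (G + G^T) A. A small numerical sketch (a toy quadratic loss stands in for my real L; loss_M, egrad_M, and egrad_A are just illustrative names), checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 2
A = rng.standard_normal((d, k))

# Toy smooth loss L(M) = 0.5 * ||M - C||_F^2; the real loss would go here.
C = rng.standard_normal((d, d))

def loss_M(M):
    return 0.5 * np.sum((M - C) ** 2)

def egrad_M(M):
    # Euclidean gradient of the toy loss with respect to M
    return M - C

def egrad_A(A):
    # Chain rule for L(A) = L(A A^T): egrad_A = (G + G^T) A
    G = egrad_M(A @ A.T)
    return (G + G.T) @ A

# Finite-difference check of the chain rule
eps = 1e-6
num = np.zeros_like(A)
for i in range(d):
    for j in range(k):
        E = np.zeros_like(A)
        E[i, j] = eps
        num[i, j] = (loss_M((A + E) @ (A + E).T)
                     - loss_M((A - E) @ (A - E).T)) / (2 * eps)

assert np.allclose(num, egrad_A(A), atol=1e-5)
print("chain-rule gradient matches finite differences")
```

So at least mechanically, the gradient in terms of A is cheap to obtain from the gradient in terms of M (and when G is symmetric it reduces to 2GA); my question is whether this reparameterized, non-convex formulation is the intended way to use the tool.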
Thanks
Ujjal
Can you point me to some recent papers showing that dropping convexity is not a problem for PSD optimization? I know of one (https://arxiv.org/abs/1509.03917), but any more would be really helpful.
Thanks
Ujjal