I am trying to understand egrad2rgrad, maybe document it better, and most of all, implement something similar in Julia. But I am getting stuck somewhere.
First I thought it performs a change of metric in the following sense:
Let g_1 (with matrix representation G_1) and g_2 (with G_2) be two metrics on T_pM. Transforming the metric would, for me, mean that for all tangent vectors X and Y
g_1(X,Y) = g_2(BX,BY)
If we use an ONB in T_pM and matrix representations (symmetric positive definite) with respect to this basis, this turns X and Y into their basis coefficient vectors x and y, and the metric equation into (let's only consider the real case)
x^T G_1 y = x^T B^T G_2 B y
So since G_1 = B^T G_2 B and both are SPD, we can use Cholesky factorizations to solve for B. Ok.
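To make sure I am not fooling myself, here is a minimal numerical sketch of that Cholesky-based solve in Julia (the names G1, G2, B are mine, just for illustration, not part of any package API):

```julia
using LinearAlgebra

# two SPD matrices as metric representations G_1 and G_2
A1 = randn(3, 3); G1 = Symmetric(A1 * A1' + 3I)
A2 = randn(3, 3); G2 = Symmetric(A2 * A2' + 3I)

# with Cholesky factors G1 = C1*C1' and G2 = C2*C2',
# B = C2' \ C1' satisfies B' * G2 * B == C1 * C1' == G1
C1 = cholesky(G1).L
C2 = cholesky(G2).L
B = C2' \ C1'

# check g_1(x, y) = g_2(Bx, By) on random coefficient vectors
x, y = randn(3), randn(3)
@assert x' * G1 * y ≈ (B * x)' * G2 * (B * y)
```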
Example: the SPD manifold with the linear affine metric as G_2 and the Euclidean metric as G_1 (i.e. a Euclidean vector comes along and we want to move it to the affine metric). Then the above idea reads
g_E(X,Y) = tr(XY) = g_A(BX,BY) = tr(p^{-1}BX p^{-1}BY)
So the transformation B would just be B=p (the point we are at).
So in this derivation, M.egrad2rgrad(p,X) = pX.
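A quick numerical check in Julia (again just a sketch, with names of my own choosing) confirms that B = p does satisfy this first identity:

```julia
using LinearAlgebra

# an SPD point p and two symmetric matrices X, Y as tangent vectors
A = randn(3, 3); p = Symmetric(A * A' + 3I)
X = Symmetric(randn(3, 3)); Y = Symmetric(randn(3, 3))

gE(X, Y) = tr(X * Y)                 # Euclidean metric
gA(p, X, Y) = tr((p \ X) * (p \ Y))  # affine metric tr(p^{-1} X p^{-1} Y)

# B = p gives g_E(X, Y) = g_A(BX, BY)
@assert gE(X, Y) ≈ gA(p, p * X, p * Y)
```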
But comparing this to what Manopt does makes me wonder what I am getting wrong:
what Manopt actually does is M.egrad2rgrad(p,X) = pXp.
This would mean that we instead look for a transformation f(X) of X such that
g_E(X,Y) = tr(XY) = g_A(f(X),Y) = tr(p^{-1}f(X)p^{-1}Y)
holds for all tangent vectors Y? Then here, surely, f(X) = pXp as in the code.
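And indeed this second identity also checks out numerically (same sketch setup as above):

```julia
using LinearAlgebra

A = randn(3, 3); p = Symmetric(A * A' + 3I)   # an SPD point
X = Symmetric(randn(3, 3)); Y = Symmetric(randn(3, 3))

gE(X, Y) = tr(X * Y)
gA(p, X, Y) = tr((p \ X) * (p \ Y))

# f(X) = p*X*p gives g_A(f(X), Y) = tr(p^{-1} pXp p^{-1} Y) = tr(XY),
# i.e. g_E(X, Y) = g_A(f(X), Y), for every Y
@assert gE(X, Y) ≈ gA(p, p * X * p, Y)
```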
But why? I thought the transformation should be the first one, but I do not see why it should be the second (i.e. the one that is implemented). What am I missing? And is there a reference/literature/documentation explaining why egrad2rgrad does what it does?
Best
Ronny