Hello,
In Amari's setting, the manifold (that is, the set over which we optimize) is a linear space: for example, simply R^n. That linear space is turned into a Riemannian manifold by defining a Riemannian metric, that is, an inner product on tangent vectors which can vary as a function of the point x on the manifold. That metric is described by a positive definite matrix G(x), so that <u, v>_x = u^T G(x) v.
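To make that concrete, here is a minimal numpy sketch of such a point-dependent inner product; the particular choice of G below is my own, just for illustration:

```python
import numpy as np

def metric_inner(x, u, v, G):
    """Inner product <u, v>_x = u^T G(x) v defined by the metric G."""
    return u @ G(x) @ v

# Illustrative metric on R^2: G(x) is positive definite and varies with x,
# so lengths and angles of tangent vectors depend on the base point.
G = lambda x: np.diag(1.0 + x**2)

x = np.array([1.0, 2.0])
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
print(metric_inner(x, u, u, G))  # squared length of u at x: 2.0
print(metric_inner(x, u, v, G))  # u and v happen to be G(x)-orthogonal: 0.0
```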
Since Amari's manifold is linear, we don't really need a retraction: we can move away from x along any chosen direction v, and x + tv will remain on the manifold. Contrast this with a nonlinear manifold such as a sphere: if x is a point on the sphere and v is a nonzero tangent vector to the sphere at x, then x + tv leaves the sphere, and the retraction is what maps that step back onto the manifold.
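Here is a small numpy sketch of that contrast, with the unit sphere and normalization as the retraction (all names are just for illustration):

```python
import numpy as np

def step_linear(x, v, t):
    """On a linear space such as R^n, x + t*v stays on the manifold."""
    return x + t * v

def retract_sphere(x, v, t):
    """On the unit sphere, x + t*v leaves the sphere; renormalizing
    (the metric-projection retraction) maps the step back."""
    y = x + t * v
    return y / np.linalg.norm(y)

x = np.array([1.0, 0.0, 0.0])   # point on the unit sphere
v = np.array([0.0, 1.0, 0.0])   # tangent vector at x (orthogonal to x)
print(np.linalg.norm(step_linear(x, v, 0.5)))     # ~1.118: off the sphere
print(np.linalg.norm(retract_sphere(x, v, 0.5)))  # 1.0: back on the sphere
```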
This should clarify why there are retractions in one setting and not in the other.
Now, for the G(x)^{-1} in the expression for the gradient: that is exactly the same in Amari's setting and in the more general Riemannian optimization setting. (It takes a bit of effort to see it, but it's the same in the end.) The reason is that the Riemannian gradient of f at x is defined as the unique tangent vector grad f(x) such that <grad f(x), v>_x = Df(x)[v] for all tangent vectors v; with <u, v>_x = u^T G(x) v, this forces grad f(x) = G(x)^{-1} ∇f(x), where ∇f(x) is the usual (Euclidean) gradient.
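If it helps, here is a minimal numpy sketch of that computation; the particular f and G are my own choices, just for illustration:

```python
import numpy as np

def riemannian_gradient(x, euclid_grad, G):
    """Riemannian gradient w.r.t. the metric G(x): solve G(x) g = ∇f(x),
    i.e. g = G(x)^{-1} ∇f(x) (Amari's natural gradient)."""
    return np.linalg.solve(G(x), euclid_grad(x))

# Illustrative example: f(x) = 0.5*||x||^2 on R^2, so ∇f(x) = x.
f_grad = lambda x: x
G = lambda x: np.diag(1.0 + x**2)

x = np.array([1.0, 2.0])
print(riemannian_gradient(x, f_grad, G))  # [0.5, 0.4] rather than ∇f(x) = [1, 2]
```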
Best,
Nicolas