Hello Florentin,
The implementation in Manopt stops when the gradient norm is smaller than options.tolgradnorm. This can happen even if the Hessian is far from positive semidefinite, so the answer is: no, there is no formal guarantee.
In practice, you do expect the Hessian to be close to positive semidefinite (psd) by the time the algorithm terminates; but this is just an empirical observation.
Theory-wise, you can force RTR to guarantee approximate satisfaction of second-order conditions, though; see for example:
"Global rates of convergence for nonconvex optimization on manifolds"
Boumal, Absil, Cartis, IMA Journal of Numerical Analysis, 2018
Very concretely, in Manopt, you could force this by doing the following:
options.tolgradnorm = 0; % this means we only stop if the gradient is exactly zero, which is unlikely to happen.
options.stopfun = @mystopfun;
function stopnow = mystopfun(problem, x, info, last)
    % Evaluate your desired conditions here; return true if the
    % algorithm should terminate, and false otherwise.
    stopnow = false;
end
This would just force trustregions to push on until the first- and second-order conditions are approximately met. Notice that this can get expensive. In general, you would first test the gradient norm and return false if it is too large; only if it is small would you check the second-order condition.
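For concreteness, here is one way mystopfun could implement that two-stage check, using the hessianspectrum tool from Manopt to compute the eigenvalues of the Hessian at x. The tolerances 1e-6 are arbitrary choices for illustration, and this is only a sketch: hessianspectrum builds the full spectrum, which is exactly the expensive part mentioned above.

```matlab
function stopnow = mystopfun(problem, x, info, last)
    % First-order check: cheap, reuses the gradient norm already
    % computed by the solver and stored in the info struct-array.
    if info(last).gradnorm > 1e-6   % tolerance chosen for illustration
        stopnow = false;
        return;
    end
    % Second-order check: expensive. hessianspectrum computes the
    % eigenvalues of the (Riemannian) Hessian of the cost at x.
    lambdas = hessianspectrum(problem, x);
    % Terminate only if the Hessian is approximately psd.
    stopnow = min(lambdas) > -1e-6;
end
```

On large manifolds you would want to replace the full spectrum computation with an estimate of the smallest eigenvalue only, but the structure of the check stays the same.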
But that's not such a great way to do it. The paper linked above describes a better approach (at least, in theory). The thing is: it would require changing trustregions.m, and I expect it'd be quite expensive computationally.
So here's a question: is this something you need in practice, or do you just need a theoretical guarantee that "it could be done"?
Best,
Nicolas