Hi,
We have a non-convex function to optimize in roughly 100-2000 dimensions, where we want solutions that stay close to the starting point, and where time is a limiting factor (but we have analytical Hessians).
I was thinking of using the algorithm described in [ByrdSchnabel] (since Nocedal & Wright say that it helps in the indefinite-Hessian case), which seems to be implemented in Ceres as the SUBSPACE_DOGLEG method (?).
The full Hessian isn't mentioned in the general unconstrained optimization section of the documentation, so I wanted to check: is there some way, perhaps undocumented, to use the trust region methods with a full Newton step through the package?
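For context, this is how I was planning to select that strategy; a minimal sketch assuming the option names from the Ceres documentation (trust_region_strategy_type, dogleg_type) are current, with the full-Hessian part still an open question:

```cpp
#include <ceres/ceres.h>

// Sketch (untested): select the two-dimensional subspace dogleg
// trust region strategy of Byrd, Schnabel & Shultz in Ceres.
void ConfigureSolver(ceres::Solver::Options* options) {
  options->minimizer_type = ceres::TRUST_REGION;
  options->trust_region_strategy_type = ceres::DOGLEG;
  // SUBSPACE_DOGLEG minimizes the model over the 2D subspace
  // spanned by the gradient and the (Gauss-)Newton step.
  options->dogleg_type = ceres::SUBSPACE_DOGLEG;
}
```

As far as I can tell, Ceres builds its model from residuals and Jacobians (a Gauss-Newton approximation), which is why I'm unsure how an analytical Hessian would be fed in here.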
[ByrdSchnabel] Byrd, Richard H., Robert B. Schnabel, and Gerald A. Shultz. "Approximate solution of the trust region problem by minimization over two-dimensional subspaces." Mathematical Programming 40.1 (1988): 247-263.