# Using Ceres with an available Hessian


### Arthur Bawin

Jun 21, 2023, 11:00:57 PM
to Ceres Solver
Hello and thank you for providing this library!

I have a scalar function to minimize f(x_1, ..., x_n), where the number of parameters n can vary from a few hundreds to several thousands, maybe tens of thousands (these are the coordinates of vertices in a mesh). Both the gradient and the hessian of f w.r.t. the x_i are available. So far I've been trying to minimize f through unconstrained minimization by modelling it with a FirstOrderFunction and solving with a GradientProblemSolver, but this way I can't make use of the available hessian. I don't have a strong background in optimization, so I'm not sure what would be a better way of solving this.
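For reference, the FirstOrderFunction setup described above looks roughly like the sketch below. The quadratic objective and the class name `MeshEnergy` are placeholders, not the actual mesh energy; a real implementation would evaluate the mesh quality functional and its analytic gradient:

```cpp
#include <vector>
#include "ceres/ceres.h"

// Hypothetical stand-in for the mesh energy: f(x) = 0.5 * ||x||^2.
class MeshEnergy : public ceres::FirstOrderFunction {
 public:
  explicit MeshEnergy(int num_parameters) : n_(num_parameters) {}

  bool Evaluate(const double* x, double* cost,
                double* gradient) const override {
    *cost = 0.0;
    for (int i = 0; i < n_; ++i) {
      *cost += 0.5 * x[i] * x[i];
      if (gradient != nullptr) gradient[i] = x[i];  // df/dx_i
    }
    return true;
  }

  int NumParameters() const override { return n_; }

 private:
  const int n_;
};

int main() {
  std::vector<double> x(100, 1.0);  // initial vertex coordinates
  // GradientProblem takes ownership of the function.
  ceres::GradientProblem problem(
      new MeshEnergy(static_cast<int>(x.size())));
  ceres::GradientProblemSolver::Options options;
  ceres::GradientProblemSolver::Summary summary;
  ceres::Solve(options, problem, x.data(), &summary);
  return 0;
}
```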

As I understand it, I can't provide the Hessian for the unconstrained minimization. Could I instead use the nonlinear least squares routines to minimize the sum of ||df/dx_i||^2 while providing the Hessian, thus effectively using the nonlinear least squares solver to solve grad f = 0? If yes, what would be the best class/structure to implement to easily compute the residuals and Jacobians for an arbitrary number of parameters?

Arthur Bawin

### Sameer Agarwal

Jun 21, 2023, 11:03:45 PM
Arthur,
Using the Hessian is not possible.
What GradientProblemSolver configuration are you using?
Have you tried LBFGS and experimented with the rank of the Hessian approximation?
Sameer


### Arthur Bawin

Jun 21, 2023, 11:09:10 PM
to Ceres Solver
Thank you for the quick reply.
I've been solving with these options:

```cpp
options.function_tolerance = 1e-12;
options.max_num_iterations = 1000;
options.max_num_line_search_step_size_iterations = 100;
// options.line_search_direction_type = ceres::BFGS;
options.line_search_interpolation_type = ceres::BISECTION;
```

I've tried both BFGS and LBFGS; they seem to give similar results.
Would the second option work, though: using nonlinear least squares to solve grad f = 0?

Arthur

### Sameer Agarwal

Jun 21, 2023, 11:31:02 PM
My advice would be: use LBFGS, vary the rank, and use a cubic line search.
You can also use a nonlinear least squares formulation. But in that case the objective should just be the distances between matched vertices (and you supply their Jacobians), so one residual block per pair of vertices. Will the sparsity of your Jacobian remain the same over the course of the optimization? If not, you will have to enable dynamic sparsity.
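Concretely, the first suggestion maps onto the solver options like this (a sketch of the GradientProblemSolver configuration; `max_lbfgs_rank` controls the rank of the LBFGS inverse-Hessian approximation, and the value 20 is just its default, worth sweeping):

```cpp
ceres::GradientProblemSolver::Options options;
options.line_search_direction_type = ceres::LBFGS;
// Rank of the inverse Hessian approximation: higher retains more
// curvature information at the cost of memory. Try e.g. 5 to 50.
options.max_lbfgs_rank = 20;
// Cubic interpolation in the line search, per the advice above.
options.line_search_interpolation_type = ceres::CUBIC;
```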
Sameer

### Arthur Bawin

Jun 22, 2023, 12:35:22 AM
to Ceres Solver
OK, I'll try that, thanks.
The sparsity pattern is constant because the mesh topology stays the same during the optimization. I'm not sure I understand why I need a residual block per pair of vertices; I was planning on adding each df/dx_i as a residual, so one residual per vertex coordinate (hence n residuals in total, where n = 2N for N vertices in a 2D mesh). Is that wrong?
Arthur