Hi y'all.
I've previously used Powell's dogleg method, instead of
Levenberg-Marquardt (LM), to solve nonlinear least-squares problems. In
my applications I've observed significant performance gains with dogleg over LM.
The dogleg method is described in many places, for instance here:
http://www.mathworks.com/help/toolbox/optim/ug/brnoyhf.html
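For reference, the standard dogleg construction (as described in, e.g., Nocedal
and Wright's "Numerical Optimization") combines the Cauchy point p^U along the
steepest-descent direction with the Gauss-Newton step p^B. Writing g for the
gradient and B for the Gauss-Newton Hessian approximation J^T J:

```latex
p^{U} = -\frac{g^{\top} g}{g^{\top} B g}\, g,
\qquad
p^{B} = -B^{-1} g,
\qquad
p(\tau) =
\begin{cases}
  \tau\, p^{U}, & 0 \le \tau \le 1,\\[2pt]
  p^{U} + (\tau - 1)\,(p^{B} - p^{U}), & 1 \le \tau \le 2,
\end{cases}
```

with tau chosen so that ||p(tau)|| equals the trust-region radius (or tau = 2
when the full Gauss-Newton step already fits inside the region).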
Computational efficiency is gained because:
1. To retry an unsuccessful step with a smaller trust region, the dogleg
method does not require re-solving the linear system.
2. If the trust region is very small, the dogleg method takes a pure
gradient-descent step; again, without solving the linear system.
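To make the two points above concrete, here is a minimal sketch of dogleg step
selection (a hypothetical standalone helper, not the code in the branch). The
Gauss-Newton step p_gn and the Cauchy step p_u are computed once per outer
iteration; picking the step for any trust radius delta is then just cheap
vector algebra, so shrinking the region and retrying needs no new linear solve:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
  double s = 0;
  for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}
static double norm(const Vec& a) { return std::sqrt(dot(a, a)); }

// Select the dogleg step for trust radius delta, given the Cauchy step
// p_u (along -gradient) and the Gauss-Newton step p_gn, both precomputed.
Vec DoglegStep(const Vec& p_u, const Vec& p_gn, double delta) {
  const size_t n = p_u.size();
  // Case 1: the full Gauss-Newton step fits inside the trust region.
  if (norm(p_gn) <= delta) return p_gn;
  // Case 2: the region is so small that even the Cauchy step doesn't fit;
  // take a truncated gradient-descent step (no linear solve involved).
  if (norm(p_u) >= delta) {
    Vec p(n);
    const double s = delta / norm(p_u);
    for (size_t i = 0; i < n; ++i) p[i] = s * p_u[i];
    return p;
  }
  // Case 3: walk along the segment from p_u to p_gn until ||p|| == delta,
  // i.e. solve ||p_u + t (p_gn - p_u)||^2 = delta^2 for t in [0, 1].
  Vec d(n);
  for (size_t i = 0; i < n; ++i) d[i] = p_gn[i] - p_u[i];
  const double a = dot(d, d);
  const double b = 2 * dot(p_u, d);
  const double c = dot(p_u, p_u) - delta * delta;
  const double t = (-b + std::sqrt(b * b - 4 * a * c)) / (2 * a);
  Vec p(n);
  for (size_t i = 0; i < n; ++i) p[i] = p_u[i] + t * d[i];
  return p;
}
```

A retry loop therefore only re-invokes DoglegStep() with a smaller delta,
reusing the same p_u and p_gn; LM, by contrast, must re-factorize and re-solve
the damped normal equations for each new damping parameter.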
I thus wrote a dogleg-based solver for Ceres, as a potential
replacement/addition to the current LM solver. The code is at
https://github.com/dkogan/ceres in the "dogleg" branch.
To use the new solver, do this before solving the problem:
options.minimizer_type = ceres::DOGLEG;
The implementation is modeled on my standalone nonlinear optimization library:
https://github.com/dkogan/libdogleg
Most of the bugs are worked out, I think, and all the supplied examples
work. The unit tests do not yet all pass; I will look at that shortly.
Surprisingly (to me), the performance of the new solver on the supplied
examples is very similar to that of the old solver; I do not see the
expected drop in computation time. My hypothesis is that all the
supplied examples have well-behaved cost functions; the LM solver never
retries a step on any of them, for instance. Does this
sound right? Are there some more challenging problems that I'm not
seeing? I tried the simple_bundle_adjuster and the bundle_adjuster, the
latter with no options, with --use_quaternions, and with
'--use_quaternions --use_local_parameterization'.
If anybody has more challenging problems lying around, I'd be interested
in hearing about the performance of the new solver backend.
Comments welcome!