Powell's dogleg solver backend


Dima Kogan

May 16, 2012, 4:40:39 AM
to ceres-...@googlegroups.com
Hi y'all.

I've previously used Powell's dogleg method to solve nonlinear least
squares problems, instead of Levenberg-Marquardt. In my applications
I've observed significant performance gains with that method over LM.

The dogleg method is described in many places, for instance here:
http://www.mathworks.com/help/toolbox/optim/ug/brnoyhf.html

Computational efficiency is gained because

1. To retry an unsuccessful step with a smaller trust region, the dogleg
method does not require re-solving the linear system
2. If the trust region is very small, the dogleg method uses pure
gradient descent; again, without solving the linear system
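
A minimal sketch of how those two savings arise (my own illustration in plain C++, not code from the branch; DoglegStep and the 2-vector helpers are hypothetical names): the Gauss-Newton step h_gn and the Cauchy (steepest-descent) step h_sd are computed once per linearization, and retrying with a smaller trust radius delta only recombines them:

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;  // toy 2-parameter problem

static double Dot(const Vec2& a, const Vec2& b) { return a[0]*b[0] + a[1]*b[1]; }
static double Norm(const Vec2& a) { return std::sqrt(Dot(a, a)); }

// Classic dogleg step: h_gn is the (already solved) Gauss-Newton step,
// h_sd the Cauchy/steepest-descent step, delta the trust-region radius.
// No linear system is solved here, so shrinking delta after a rejected
// step is essentially free.
Vec2 DoglegStep(const Vec2& h_gn, const Vec2& h_sd, double delta) {
  if (Norm(h_gn) <= delta) {
    return h_gn;  // full Gauss-Newton step fits in the trust region
  }
  if (Norm(h_sd) >= delta) {
    // Tiny trust region: pure (scaled) gradient descent, point (2) above.
    const double s = delta / Norm(h_sd);
    return {s * h_sd[0], s * h_sd[1]};
  }
  // Walk from h_sd toward h_gn until the trust-region boundary:
  // find beta in [0, 1] with |h_sd + beta * (h_gn - h_sd)| = delta.
  const Vec2 d = {h_gn[0] - h_sd[0], h_gn[1] - h_sd[1]};
  const double a = Dot(d, d);
  const double b = 2.0 * Dot(h_sd, d);
  const double c = Dot(h_sd, h_sd) - delta * delta;
  const double beta = (-b + std::sqrt(b * b - 4.0 * a * c)) / (2.0 * a);
  return {h_sd[0] + beta * d[0], h_sd[1] + beta * d[1]};
}
```

For example, with h_gn = (3, 4) and h_sd = (1, 0), shrinking delta from 6 to 2 to 0.5 moves the returned step from the full Gauss-Newton step, to an interpolated step of norm 2, to a pure scaled-gradient step, all without touching the linear solver.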


I thus wrote a dogleg-based solver for Ceres, as a potential
replacement/addition to the current LM solver. The code is at
https://github.com/dkogan/ceres in the "dogleg" branch.

To use the new solver, do this before solving the problem:
options.minimizer_type = ceres::DOGLEG;

The implementation blueprint is my nonlinear optimization library:
https://github.com/dkogan/libdogleg

Most of the bugs are worked out, I think, and all the supplied examples
work. The unit tests do not yet all pass; I will look at that shortly.

Surprisingly (to me), the performance of the new solver on the supplied
examples is very similar to the old solver; I do not see the expected
drop in computation time. I'm hypothesizing that the reason for this is
that all the supplied examples have well-behaved cost functions. The LM
solvers have no retried steps on any of them, for instance. Does this
sound right? Are there some more challenging problems that I'm not
seeing? I tried the simple_bundle_adjuster and the bundle_adjuster, the
latter with no options, with --use_quaternions and with
'--use_quaternions --use_local_parameterization'.

If anybody has more challenging problems lying around, I'd be interested
in hearing about the performance of the new solver backend.

Comments welcome

whatnick

May 16, 2012, 8:36:58 AM
to ceres-...@googlegroups.com
I have read some papers asserting that dogleg is better for large bundle adjustment problems. I have been working on hitching Ceres to Bundler for SfM work, as some others have already done. That should provide a good test of whether dogleg helps with badly behaved problems.

Cheers,

whatnick.

PS: Does it build on Windows?

Sameer Agarwal

May 16, 2012, 11:44:19 AM
to ceres-...@googlegroups.com
On Wed, May 16, 2012 at 1:40 AM, Dima Kogan <go...@dima.secretsauce.net> wrote:
> Hi y'all.
>
> I thus wrote a dogleg-based solver for Ceres, as a potential
> replacement/addition to the current LM solver. The code is at
> https://github.com/dkogan/ceres in the "dogleg" branch.
>
> To use the new solver, do this before solving the problem:
>   options.minimizer_type = ceres::DOGLEG;

Thanks for taking the initiative to do this. A couple of suggestions and questions based on a very cursory look.

1. Do you know how dogleg works with inexact Gauss-Newton steps? This would determine whether we can use it with iterative solvers, which is an important consideration.
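
[For context on question (1), an aside not from the thread: the classic dogleg path assumes the exact Gauss-Newton minimizer is available, which is what makes pairing it with inexact, iterative solves delicate. The standard way to bake a trust region directly into an iterative solve is Steihaug-Toint truncated CG, which stops as soon as an iterate crosses the trust-region boundary. A rough self-contained sketch on a dense 2x2 model, assuming a positive-definite A and illustrative only, not the Ceres implementation:]

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>;  // row-major 2x2

static double Dot(const Vec2& a, const Vec2& b) { return a[0]*b[0] + a[1]*b[1]; }
static double Norm(const Vec2& a) { return std::sqrt(Dot(a, a)); }
static Vec2 MatVec(const Mat2& A, const Vec2& x) {
  return {Dot(A[0], x), Dot(A[1], x)};
}
static Vec2 Axpy(double s, const Vec2& x, const Vec2& y) {  // s*x + y
  return {s * x[0] + y[0], s * x[1] + y[1]};
}

// Steihaug-Toint truncated CG: approximately minimize the quadratic model
//   m(p) = 0.5 p^T A p + g^T p   subject to  |p| <= delta,
// stopping CG early if an iterate leaves the trust region. (Negative
// curvature handling is omitted since A is assumed positive definite.)
Vec2 SteihaugCG(const Mat2& A, const Vec2& g, double delta) {
  Vec2 p = {0, 0};
  Vec2 r = g;                  // residual of A p + g at p = 0
  Vec2 d = {-g[0], -g[1]};
  for (int iter = 0; iter < 2; ++iter) {  // at most n = 2 CG steps
    const Vec2 Ad = MatVec(A, d);
    const double alpha = Dot(r, r) / Dot(d, Ad);
    const Vec2 p_next = Axpy(alpha, d, p);
    if (Norm(p_next) >= delta) {
      // Step crosses the boundary: back off to tau with |p + tau d| = delta.
      const double a = Dot(d, d), b = 2 * Dot(p, d),
                   c = Dot(p, p) - delta * delta;
      const double tau = (-b + std::sqrt(b * b - 4 * a * c)) / (2 * a);
      return Axpy(tau, d, p);
    }
    const Vec2 r_next = Axpy(alpha, Ad, r);
    if (Norm(r_next) < 1e-12) return p_next;  // converged inside the region
    const double beta = Dot(r_next, r_next) / Dot(r, r);
    d = Axpy(beta, d, {-r_next[0], -r_next[1]});
    p = p_next;
    r = r_next;
  }
  return p;
}
```

For A = 2I and g = (-2, -4), the unconstrained minimizer is (1, 2); with delta = 1, CG is truncated at the boundary and returns a step of norm exactly 1.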

2. The BAL webpage contains tons of problems, some of them reasonably hard.  Since you mentioned that you have had experience with better performance in your applications, perhaps you can compare and report performance on those. 

3. You are storing two copies of the Jacobian in your loop. This works for small problems, but for large problems this is prohibitive.

4. We follow the Google C++ style guide, your current code does not quite follow it.

5. Finally, if we decide to go ahead with this, since there is substantial overlap with the LM loop, it would be worth refactoring the common bits out of them into something that can be shared.

> Surprisingly (to me), the performance of the new solver on the supplied
> examples is very similar to the old solver; I do not see the expected
> drop in computation time. I'm hypothesizing that the reason for this is
> that all the supplied examples have well-behaved cost functions. The LM
> solvers have no retried steps on any of them, for instance. Does this
> sound right? Are there some more challenging problems that I'm not
> seeing? I tried the simple_bundle_adjuster and the bundle_adjuster, the
> latter with no options, with --use_quaternions and with
> '--use_quaternions --use_local_parameterization'.

The example data that ships with the code is meant to be quite simple; it is not meant to torture-test the solvers.

Sameer

Hordur Johannsson

May 16, 2012, 12:07:39 PM
to ceres-...@googlegroups.com
Lourakis et al. compared LM and DL for bundle adjustment and got
favorable results:

M.L.A. Lourakis and A.A. Argyros, "Is Levenberg-Marquardt the most
efficient optimization algorithm for implementing bundle adjustment?,"
in Proc. Tenth IEEE International Conference on Computer Vision (ICCV
2005), vol. 2, pp. 1526-1531, Oct. 2005.

Also, Rosen et al. compared LM, dogleg, and Gauss-Newton for pose graph
optimization:

D.M. Rosen, M. Kaess, and J.J. Leonard, "An Incremental Trust-Region
Method for Robust Online Sparse Least-Squares Estimation," in Proc.
IEEE Intl. Conf. on Robotics and Automation (ICRA), St. Paul, MN, May
2012.


Sameer Agarwal

May 16, 2012, 12:13:03 PM
to ceres-...@googlegroups.com
The LM loop implemented by Lourakis's SBA code has some issues (e.g. it does not use the diagonal of the Gauss-Newton Hessian), which make comparisons to the dogleg code a bit suspect. I would like to see a head-to-head comparison with the existing LM loop in Ceres.

Sameer

Sameer Agarwal

May 16, 2012, 7:59:42 PM
to ceres-...@googlegroups.com
Dima,

Your code reminded me just how badly structured the code for the LM loop in Ceres is. I am in the process of refactoring it right now, which should make trying out ideas like yours much simpler. I would recommend holding off for a few more days (maybe a bit more) while I get this cleanup done.

Sameer Agarwal

May 21, 2012, 2:46:24 PM
to ceres-...@googlegroups.com
On Wednesday, May 16, 2012 at 4:59:42 PM UTC-7, Sameer Agarwal wrote:
> Dima,
>
> Your code reminded me just how badly structured the code for the LM loop
> in Ceres is. I am in the process of refactoring it right now, which should
> make trying out ideas like yours much simpler. I would recommend holding
> off for a few more days (maybe a bit more) while I get this cleanup done.

The first bits of this effort are underway in the trust-region-deux branch. It's still a ways away from being finished, but it provides an easy path for integrating different trust region strategies. You are welcome to comment and follow along if you like.

Once this is integrated, you should only need to implement a subclass of TrustRegionStepSolver that does the dogleg computation, and perhaps a simple TrustRegionStepSizer object to go along with it.

Sameer