Switching from automatic differentiation to numerical differentiation?


Jae-Hak Kim

Feb 2, 2016, 1:16:03 AM
to Ceres Solver
Dear Ceres-Solver team and users,

I have two questions about Ceres Solver's Jacobian computation.

(1) Could you please let me know how to detect whether Ceres Solver switches to numerical differentiation when my supplied automatic differentiation fails? If it switches to numeric diff, my program would run slowly even though I supply auto diff.

One of my colleagues mentioned:

"Ceres can change automatic differentiation to numerical differentiation even if you configure it for the automatic differentiation. It is like Ceres tries to use the automatic one but under certain conditions it could give up and switch to the numerical differentiation without a warning. Also, the only way to know is to print the full report at the end of the optimisation and check which method it has actually used for the jacobian computation."

So, I am trying to figure out whether my program happened to hit those certain conditions and switched to numerical differentiation regardless of the automatic differentiation I supplied. Unfortunately, I have not found any useful information in the full report, nor a member variable of the ceres::Solver::Summary class associated with this.

The second question is,

(2) How much improvement can we expect if we use an analytic Jacobian instead of automatic differentiation in Ceres Solver? My program uses automatic differentiation and I would like to speed it up.

Thank you in advance.
Jae-Hak




Sameer Agarwal

Feb 2, 2016, 9:53:02 AM
to ceres-...@googlegroups.com
Dear Jae-Hak,
My comments are inline.

<SNIP>
 
(1) Could you please let me know how to detect whether Ceres Solver switches to numerical differentiation when my supplied automatic differentiation fails? If it switches to numeric diff, my program would run slowly even though I supply auto diff.

One of my colleagues mentioned:

"Ceres can change automatic differentiation to numerical differentiation even if you configure it for the automatic differentiation. It is like Ceres tries to use the automatic one but under certain conditions it could give up and switch to the numerical differentiation without a warning. Also, the only way to know is to print the full report at the end of the optimisation and check which method it has actually used for the jacobian computation."

Your colleague, to put it mildly, has no idea what he is talking about. Everything in the above paragraph is false.

Ceres works off the CostFunction interface and provides ways of constructing cost functions using numeric, automatic, and analytic differentiation. But the CostFunction interface itself does not (and cannot) communicate anything about the method used for computing the Jacobians/derivatives to the solver.

So, I am trying to figure out whether my program happened to hit those certain conditions and switched to numerical differentiation regardless of the automatic differentiation I supplied. Unfortunately, I have not found any useful information in the full report, nor a member variable of the ceres::Solver::Summary class associated with this.

The second question is,

(2) How much improvement can we expect if we use an analytic Jacobian instead of automatic differentiation in Ceres Solver? My program uses automatic differentiation and I would like to speed it up.

It depends on the actual function you are evaluating, the size of the parameter blocks and residual blocks etc. There is no blanket answer. A good place to start is to look at the FullSummary to see how much time is being spent in the Jacobian evaluation and see if it is actually substantial enough.

Sameer

 






Jae-Hak Kim

Feb 3, 2016, 1:18:26 AM
to Ceres Solver
Dear Sameer,

Thank you so much for your quick answer.

I believe my program spends a significant amount of time in Jacobian evaluation using auto diff. I am pasting one of the FullSummary reports below and would appreciate any comments. The report is from a Mac (Intel 3.2 GHz Core i5) without OpenMP (single thread only). The problem is, simply speaking, optimising photometric errors for image alignment.


iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  2.679188e-06    0.00e+00    1.50e-03   0.00e+00   0.00e+00  1.00e+04        0    2.19e-01    2.28e-01
   1  2.606526e-06    7.27e-08    1.56e-03   1.66e-02   7.29e-01  1.11e+04        1    4.64e-01    6.93e-01
   2  2.606526e-06   -1.41e-08    0.00e+00   6.62e-04  -1.59e+00  5.53e+03        1    1.29e+00    1.99e+00
   3  2.606526e-06   -1.42e-08    0.00e+00   6.55e-04  -1.60e+00  1.38e+03        1    1.29e+00    3.27e+00
   4  2.606526e-06   -1.48e-08    0.00e+00   6.20e-04  -1.68e+00  1.73e+02        1    1.29e+00    4.57e+00
   5  2.606526e-06   -1.73e-08    0.00e+00   5.02e-04  -2.10e+00  1.08e+01        1    1.07e+00    5.64e+00
   6  2.606526e-06   -1.67e-08    0.00e+00   3.07e-04  -2.45e+00  3.37e-01        1    1.10e+00    6.74e+00
   7  2.606526e-06   -5.74e-09    0.00e+00   2.27e-05  -2.66e+00  5.27e-03        1    8.66e-01    7.61e+00

Solver Summary (v 1.11.0-eigen-(3.2.6)-lapack-suitesparse-(4.4.4)-cxsparse-(3.1.4)-no_openmp)

                                     Original                  Reduced
Parameter blocks                         7437                     7437
Parameters                               7448                     7448
Residual blocks                          7436                     7436
Residual                                14872                    14872

Minimizer                        TRUST_REGION

Sparse linear algebra library    SUITE_SPARSE
Trust region strategy     LEVENBERG_MARQUARDT

                                        Given                     Used
Linear solver          SPARSE_NORMAL_CHOLESKY   SPARSE_NORMAL_CHOLESKY
Threads                                    24                        1
Linear solver threads                      24                        1

Cost:
Initial                          2.679188e-06
Final                            2.606526e-06
Change                           7.266241e-08

Minimizer iterations                        7
Successful steps                            1
Unsuccessful steps                          6

Time (in seconds):
Preprocessor                           0.0091

  Residual evaluation                  0.1763
    Line search cost evaluation        0.0000
  Jacobian evaluation                  7.7850
    Line search gradient evaluation    7.3511
  Linear solver                        0.0593
  Line search polynomial minimization  0.0006
Minimizer                              8.0463

Postprocessor                          0.0016
Total                                  8.0570

Termination:                      CONVERGENCE (Parameter tolerance reached. Relative step_norm: 6.290270e-09 <= 1.000000e-08.)



Thank you!!

Jae-Hak

Sameer Agarwal

Feb 3, 2016, 9:29:57 AM
to Ceres Solver
Jae-Hak,

It looks like you have bounds constraints on your problem, and almost all the time is being spent in the line search. This is a known problem. The handling of bounds in Ceres is okay but not great right now.

Your optimization problem is also not making any progress after the first iteration, which might imply that it is getting stuck in some corner of the feasible region.

If you can transform your problem into an unconstrained problem via reparameterization, you will see a significant improvement in performance.

Sameer


