Numeric Diff of Bicubic interpolation vs. Analytic diff.


Georg Halmetschlager

Sep 14, 2017, 5:10:52 PM
to Ceres Solver
Hi,
first of all, thank you very much for this great library.

I'm currently working on an RGB-D camera calibration problem that includes depth offset compensation. Simplified, I plan to use a sparse grid that is superimposed on the depth image to compensate for local offsets. This grid is interpolated during the optimization via bicubic interpolation.
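In Ceres terms, the idea is roughly the following (a minimal sketch with made-up grid dimensions and names, not my actual calibration code):

#include "ceres/cubic_interpolation.h"

#include <vector>

int main() {
  const int kRows = 8, kCols = 10;                  // hypothetical grid size
  std::vector<double> offsets(kRows * kCols, 0.0);  // coarse depth-offset grid

  // Wrap the raw grid values and build the bicubic interpolator.
  ceres::Grid2D<double> grid(offsets.data(), 0, kRows, 0, kCols);
  ceres::BiCubicInterpolator<ceres::Grid2D<double>> interpolator(grid);

  // Query the interpolated offset and its derivatives at a (row, col) position.
  double f, dfdr, dfdc;
  interpolator.Evaluate(3.7, 5.2, &f, &dfdr, &dfdc);
  return 0;
}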

At the moment I'm using Ceres' gradient checking functionality to debug my analytic cost functions. One of them showed large discrepancies between the numeric and the user-defined (analytic) derivatives. I was able to trace the problem back to the Jacobian evaluated by the bicubic interpolation.

For further debugging, I set up an analytic cost function that contains only a bicubic interpolation. It takes the two interpolation coordinates [x, y] as optimization parameters, and the residual is the interpolated value itself.
To get a convex optimization problem, I initialized the constant interpolation parameters (grid values) simply with 0.01 * index, which should result in a minimum of 0 at [x, y] = [0, 0]. Finally, I set up a new problem and let Ceres solve it. Again, the numeric and user-defined derivatives showed huge discrepancies, and the problem did not converge to the minimum.
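Roughly, the setup looks like the following simplified sketch (the names are made up here, not my actual code):

#include "ceres/ceres.h"
#include "ceres/cubic_interpolation.h"

#include <vector>

using Grid = ceres::Grid2D<double>;
using Interpolator = ceres::BiCubicInterpolator<Grid>;

// Residual = interpolated grid value at (x, y); the Jacobian is filled from
// the derivatives returned by the interpolator.
class InterpolationCost : public ceres::SizedCostFunction<1, 2> {
 public:
  explicit InterpolationCost(const Interpolator& interpolator)
      : interpolator_(interpolator) {}

  bool Evaluate(double const* const* parameters,
                double* residuals,
                double** jacobians) const override {
    const double x = parameters[0][0];
    const double y = parameters[0][1];
    double f, dfdx, dfdy;
    interpolator_.Evaluate(x, y, &f, &dfdx, &dfdy);
    residuals[0] = f;
    if (jacobians != nullptr && jacobians[0] != nullptr) {
      jacobians[0][0] = dfdx;
      jacobians[0][1] = dfdy;
    }
    return true;
  }

 private:
  const Interpolator& interpolator_;
};

int main() {
  const int kRows = 5, kCols = 5;
  std::vector<double> values(kRows * kCols);
  for (int i = 0; i < kRows * kCols; ++i) values[i] = 0.01 * i;  // 0.01 * index

  Grid grid(values.data(), 0, kRows, 0, kCols);
  Interpolator interpolator(grid);

  double xy[2] = {2.0, 2.0};  // start away from the expected minimum at (0, 0)
  ceres::Problem problem;
  problem.AddResidualBlock(new InterpolationCost(interpolator), nullptr, xy);

  ceres::Solver::Options options;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  return 0;
}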

As a next step, I defined a numeric-diff cost function with the same setup. This time the problem converged to a point close to the minimum.
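The numeric-diff variant looks roughly like this (again a sketch with made-up names, using the same Grid/Interpolator aliases as in the sketch above):

#include "ceres/ceres.h"
#include "ceres/cubic_interpolation.h"

using Grid = ceres::Grid2D<double>;
using Interpolator = ceres::BiCubicInterpolator<Grid>;

// The functor only returns the interpolated value; Ceres differentiates it
// numerically.
struct InterpolationFunctor {
  explicit InterpolationFunctor(const Interpolator& interpolator)
      : interpolator(interpolator) {}

  bool operator()(const double* xy, double* residual) const {
    double f, dfdx, dfdy;
    interpolator.Evaluate(xy[0], xy[1], &f, &dfdx, &dfdy);
    *residual = f;
    return true;
  }

  const Interpolator& interpolator;
};

ceres::CostFunction* MakeNumericDiffCost(const Interpolator& interpolator) {
  // CENTRAL differences are the default finite-difference scheme.
  return new ceres::NumericDiffCostFunction<InterpolationFunctor,
                                            ceres::CENTRAL, 1, 2>(
      new InterpolationFunctor(interpolator));
}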

Finally, I implemented my own analytic version of the bicubic interpolation, which produced nearly the same residual values and derivatives as the Ceres-internal function. However, that problem also didn't converge properly.

Now I'm a little bit stuck and I have a few open questions.

Why does the problem converge properly with the numeric-diff cost function but not with the analytic derivatives? Shouldn't it be exactly the other way round? Where do these huge discrepancies between the numeric and analytic derivatives come from? Have you had similar experiences with the bicubic interpolation?

Thank you very much for your help,
Georg

Sameer Agarwal

Sep 15, 2017, 12:24:21 AM
to ceres-...@googlegroups.com
Georg,

As far as I can tell from your description of the problem, Ceres is computing the derivatives/Jacobians correctly, but you are having convergence problems with the correct derivatives, right?

So a couple of comments:

1. Convergence is not an indicator of good derivatives, since it is entirely possible that the coarser derivatives given by numeric differentiation guide the solver to the minimum while skipping over some local minima. So I would not use that as a diagnostic.

2. If you are seeing large discrepancies between numeric and analytic derivatives, then it is possible that your function is highly oscillatory and the step size being used by numeric differentiation is not detecting that. One way to get around this is to use a more accurate numeric differentiation scheme like Ridders' method. You can do that by changing a template parameter of the NumericDiffCostFunction object, as sketched below.
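For reference, a minimal sketch of that change, assuming a numeric-diff functor like the InterpolationFunctor from the earlier sketch in this thread (the names are not from the original code):

// Ridders' method is selected purely via the NumericDiffCostFunction
// template parameter; everything else stays the same.
ceres::CostFunction* ridders_cost =
    new ceres::NumericDiffCostFunction<InterpolationFunctor,
                                       ceres::RIDDERS, 1, 2>(
        new InterpolationFunctor(interpolator));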

Sameer


Georg Halmetschlager

Sep 15, 2017, 6:10:33 AM
to Ceres Solver
Hi Sameer,

thank you for the incredibly quick reply!

Yes, I'm facing convergence problems.
I also thought about local minima that I'm not aware of.
To be sure, I used MAPLE to compute the Hessian for my set of interpolation parameters and its determinant, to check that the problem is convex. The determinant came out as 0, which gives a positive semi-definite Hessian and an analytic argument that the problem is convex.
Hence convergence shouldn't be a problem at all if the derivatives are right. This made me go through my code again, and I found a stupid bug: I have to scale the coordinates and simply forgot to take this scaling into account when composing the Jacobians!
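To illustrate the class of bug (not my original code; the scale factor and helper name below are made up): if the interpolator is queried at scaled coordinates, the chain rule requires the same scale factor to appear in the Jacobian entries.

#include "ceres/cubic_interpolation.h"

// Hypothetical helper: query the interpolator at scaled coordinates
// (u, v) = (s * x, s * y) and return derivatives w.r.t. the unscaled x, y.
void EvaluateScaled(
    const ceres::BiCubicInterpolator<ceres::Grid2D<double>>& interpolator,
    double x, double y, double s,  // s is a made-up coordinate scale
    double* f, double* dfdx, double* dfdy) {
  double dfdu, dfdv;
  interpolator.Evaluate(s * x, s * y, f, &dfdu, &dfdv);
  // Chain rule: df/dx = s * df/du. Forgetting this factor was the bug.
  *dfdx = s * dfdu;
  *dfdy = s * dfdv;
}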

Now everything works. 

Thank you, Sameer, for making me rethink the problem. :)