reset parameter blocks at iteration callback or alternative ways

Aitor Aldomà Buchaca

May 7, 2015, 4:22:35 PM
to ceres-...@googlegroups.com
Hi folks,

Since this is my first post here, let me say that I have been using Ceres for a while now and I am really happy with it (great performance, autodiff rocks, and the documentation is excellent).

That said, I am implementing an inverse compositional planar tracker that minimizes the photometric discrepancy between a template and the current image. To give some context, it is similar to the libmv planar tracker in Blender, with the difference that the Jacobian remains constant through the whole tracking process, since it depends only on the template close to the identity warp. In a nutshell, at each iteration a parameter increment is computed around the identity, inverted, and composed with the current estimate (i.e. a warp from the template location to the current image) to form the estimate for the next iteration. Before the next iteration, the parameters are reset to the identity and the process repeats with the updated estimate.
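
For concreteness, the update step I mean looks roughly like the sketch below. It assumes an 8-parameter homography with the identity at p = 0; that parameterization and the two helpers are just illustrative choices on my side, not anything taken from libmv.

#include <Eigen/Dense>

// 8-parameter homography with identity at p = 0 (illustrative choice).
Eigen::Matrix3d ParamsToMatrix(const double p[8]) {
  Eigen::Matrix3d H;
  H << 1.0 + p[0], p[1],       p[2],
       p[3],       1.0 + p[4], p[5],
       p[6],       p[7],       1.0;
  return H;
}

void MatrixToParams(const Eigen::Matrix3d& H, double p[8]) {
  const Eigen::Matrix3d Hn = H / H(2, 2);  // renormalize
  p[0] = Hn(0, 0) - 1.0;  p[1] = Hn(0, 1);        p[2] = Hn(0, 2);
  p[3] = Hn(1, 0);        p[4] = Hn(1, 1) - 1.0;  p[5] = Hn(1, 2);
  p[6] = Hn(2, 0);        p[7] = Hn(2, 1);
}

// Inverse compositional update: the increment delta_p is estimated around the
// identity on the template side, then inverted and composed with the current
// template -> image warp.
void InverseCompositionalUpdate(const double delta_p[8], double warp[8]) {
  const Eigen::Matrix3d W = ParamsToMatrix(warp);
  const Eigen::Matrix3d dW = ParamsToMatrix(delta_p);
  MatrixToParams(W * dW.inverse(), warp);  // W <- W o dW^{-1}
}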

I have been able to compute the Jacobian at the identity using the internal::AutoDiff::Differentiate wrapper, and it worked well.
Then, I used the base CostFunction (analytic-derivatives style) to feed the constant Jacobian, as well as the residuals, into Ceres.
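
Concretely, the cost function looks roughly like this sketch. ComputeResiduals and kNumPixels are placeholders of mine; the point is only that Evaluate() copies out the precomputed Jacobian instead of recomputing it.

#include <cstring>
#include <vector>
#include "ceres/sized_cost_function.h"

// Analytic-style cost function with a constant, precomputed Jacobian.
// jacobian_at_identity is row-major, kNumPixels x 8, computed once at the
// identity warp (e.g. with internal::AutoDiff::Differentiate).
template <int kNumPixels>
class InverseCompositionalCost : public ceres::SizedCostFunction<kNumPixels, 8> {
 public:
  explicit InverseCompositionalCost(const std::vector<double>& jacobian_at_identity)
      : jacobian_at_identity_(jacobian_at_identity) {}

  bool Evaluate(double const* const* parameters,
                double* residuals,
                double** jacobians) const override {
    // Placeholder: compare the template against the image warped by parameters[0].
    ComputeResiduals(parameters[0], residuals);
    if (jacobians != nullptr && jacobians[0] != nullptr) {
      std::memcpy(jacobians[0], jacobian_at_identity_.data(),
                  sizeof(double) * kNumPixels * 8);
    }
    return true;
  }

 private:
  void ComputeResiduals(const double* warp, double* residuals) const;
  std::vector<double> jacobian_at_identity_;
};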

My initial plan was to use an IterationCallback to reset the parameter block back to the identity (if the iteration was successful) before the next iteration, so that the next perturbation again happens at the identity but with the updated current estimate. The problem I am facing is that the parameter block address used internally by Ceres (and passed to the cost function) differs from the one allocated in user code, so as far as I understand I have no control over the parameter estimate.
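
For reference, the callback I tried looks roughly like the sketch below (InverseCompositionalUpdate is the helper from the sketch above). It does not do what I hoped, precisely because of the address issue I just described.

#include <algorithm>
#include "ceres/iteration_callback.h"

// After every successful step, fold the increment into an externally kept
// estimate and reset the user-side block to the identity. This does NOT work
// as intended: Ceres iterates on its own internal copy of the parameters, not
// on the block registered by the user.
class ResetToIdentityCallback : public ceres::IterationCallback {
 public:
  ResetToIdentityCallback(double* delta, double* current_estimate)
      : delta_(delta), current_estimate_(current_estimate) {}

  ceres::CallbackReturnType operator()(
      const ceres::IterationSummary& summary) override {
    if (summary.step_is_successful) {
      InverseCompositionalUpdate(delta_, current_estimate_);
      std::fill(delta_, delta_ + 8, 0.0);  // back to the identity (p = 0)
    }
    return ceres::SOLVER_CONTINUE;
  }

 private:
  double* delta_;             // the parameter block added to the Problem
  double* current_estimate_;  // externally maintained template -> image warp
};

As far as I can tell, even with Solver::Options::update_state_every_iteration set to true, the user-visible block is only written from the solver's internal state; edits made in the callback are not read back.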

My question, then, is whether it is possible to do something like this in Ceres, or whether you know of any viable alternatives I could use to implement this kind of scheme. I would really like to be able to use the cool features in Ceres (solvers, loss functions, ...) instead of coming up with a sloppy homemade solver implementation.

Sorry for the rather lengthy text and thanks in advance for any help. Please, let me know if clarification is needed.
Cheers,

Aitor

P.S.: The forward tracker does well in terms of convergence; however, it spends too much time computing the Jacobian, hence the interest in the inverse compositional scheme.

Keir Mierle

May 8, 2015, 1:04:22 PM
to ceres-...@googlegroups.com
Hi Aitor,

I wrote the libmv planar tracker, which as you have probably noticed uses Ceres instead of a custom KLT loop. I'd love to hear what your application is, and perhaps you would consider improving the one in libmv instead of making a separate one. With that said, I think you will not have a great time doing inverse-compositional tracking using Ceres. If you modify the parameters during a solve, you are breaking one of Ceres's key invariants, producing undefined behavior. At a fundamental level, ICT is making an approximation assumption that is not valid in the general case, and so Ceres does not offer a way to exploit it.

Have you tried using the libmv tracker? It has a bunch of tweaks to make it fast, including prediction using a Kalman filter. The Kalman filter prediction is extremely cheap and often shaves off 80% of the Ceres iterations, since the prediction is already so close to the minimum. In some cases the Kalman predictions give a 20X overall tracking speed improvement, due to avoiding brute-force search for fast-moving targets.
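
To be clear about why this helps, the prediction is simply used as the initial guess for the solve. A minimal sketch of the idea (a constant-velocity extrapolation stands in for the Kalman filter here; this is not the actual libmv code):

#include "ceres/ceres.h"

// warp is the parameter block registered with *problem. Seeding it with a
// predicted value means only a few iterations are needed.
void TrackFrame(ceres::Problem* problem,
                const double prev_warp[8],
                const double prev_prev_warp[8],
                double warp[8]) {
  for (int i = 0; i < 8; ++i) {
    // Extrapolate the last inter-frame motion as the initial guess.
    warp[i] = prev_warp[i] + (prev_warp[i] - prev_prev_warp[i]);
  }
  ceres::Solver::Options options;
  ceres::Solver::Summary summary;
  ceres::Solve(options, problem, &summary);
}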

Thanks,
Keir

Aitor Aldomà

May 11, 2015, 12:14:31 PM
to ceres-...@googlegroups.com
Hi Keir,

Thanks for your answer.

On Fri, May 8, 2015 at 7:04 PM, Keir Mierle <mie...@gmail.com> wrote:
Hi Aitor,

I wrote the libmv planar tracker, which as you have probably noticed uses Ceres instead of a custom KLT loop. I'd love to hear what your application is, and perhaps you would consider improving the one in libmv instead of making a separate one. With that said, I think you will not have a great time doing inverse-compositional tracking using Ceres. If you modify the parameters during a solve, you are breaking one of Ceres's key invariants, producing undefined behavior. At a fundamental level, ICT is making an approximation assumption that is not valid in the general case, and so Ceres does not offer a way to exploit it.

Yes, I used your implementation in the libmv tracker as a basis for a simpler planar target tracker (for augmented reality) using Ceres; it works quite well (especially if a pyramid is used) but still needs 20 to 80 ms, depending on the target transformation and the number of pyramid levels. Anyway, I really liked the way you mixed the image gradients with the automatic derivatives of the warp (that is really handy for visual odometry as well; I have tried the same trick to align RGB-D streams and it works great). Is that still the official way, or are we supposed to use the CostFunctionToFunctor functionality? I have seen a few TODOs in the tracker suggesting merging the jet extensions upstream.
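
For anyone else reading along, this is the pattern I am asking about, written with CostFunctionToFunctor. It is only a sketch: sample_image stands in for a cost function with one 2-dimensional parameter block (x, y) and one residual (the intensity), whose 1x2 Jacobian is the numerically computed image gradient, so the warp itself can still be auto-differentiated.

#include "ceres/ceres.h"
#include "ceres/cost_function_to_functor.h"

// Photometric residual for one template pixel (tx, ty): warp the pixel with an
// 8-parameter homography (auto-differentiated) and sample the current image
// through CostFunctionToFunctor, which splices the image gradient into the jets.
class PixelResidual {
 public:
  PixelResidual(double tx, double ty, double template_intensity,
                ceres::CostFunction* sample_image)
      : tx_(tx), ty_(ty), template_intensity_(template_intensity),
        sample_image_(sample_image) {}

  template <typename T>
  bool operator()(const T* warp, T* residual) const {
    // Homography with identity at warp = 0.
    const T d = warp[6] * T(tx_) + warp[7] * T(ty_) + T(1.0);
    T xy[2];
    xy[0] = ((T(1.0) + warp[0]) * T(tx_) + warp[1] * T(ty_) + warp[2]) / d;
    xy[1] = (warp[3] * T(tx_) + (T(1.0) + warp[4]) * T(ty_) + warp[5]) / d;

    T intensity;
    if (!sample_image_(xy, &intensity)) {
      return false;
    }
    residual[0] = intensity - T(template_intensity_);
    return true;
  }

 private:
  double tx_, ty_, template_intensity_;
  ceres::CostFunctionToFunctor<1, 2> sample_image_;
};

Each pixel (or block of pixels) would then be wrapped in a ceres::AutoDiffCostFunction<PixelResidual, 1, 8> as usual.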

I appreciate your suggestion to extend the libmv tracker, but I am afraid I am not allowed to contribute to open-source projects for now... sorry.

Regarding ICT, I was afraid that it would not be possible to modify the parameters during a solve; thanks for the confirmation. Could you elaborate a bit more on the approximation you mention? Theoretically, both approaches should be roughly equivalent. I have been looking at the code doing the trust-region minimization and I don't fully see why changing the parameters after successful iterations would cause problems. Is it related to the update of the trust region? Is it possible at all to use a trust-region strategy for ICT (outside Ceres)? I am interested in your opinion here, since I might give it a serious try depending on your answer ;)
 

Have you tried using the libmv tracker? It has a bunch of tweaks to make it fast, including prediction using a Kalman filter. The Kalman filter prediction is extremely cheap and often shaves off 80% of the Ceres iterations, since the prediction is already so close to the minimum. In some cases the Kalman predictions give a 20X overall tracking speed improvement, due to avoiding brute-force search for fast-moving targets.

Yes, we have an extra module that uses a Kalman filter to integrate other sensor modalities (IMUs) as well as motion models, so in my strand of work I just care about the visual part.

Thanks again for your time.

Cheers,
Aitor

Sameer Agarwal

Jun 18, 2015, 6:08:46 PM
to ceres-...@googlegroups.com
Aitor,
Sorry for the delayed reply. My comments are inline.

That said, I am implementing an inverse compositional planar tracker that minimizes the photometric discrepancy between a template and the current image. To give some context, it is similar to the libmv planar tracker in Blender, with the difference that the Jacobian remains constant through the whole tracking process, since it depends only on the template close to the identity warp. In a nutshell, at each iteration a parameter increment is computed around the identity, inverted, and composed with the current estimate (i.e. a warp from the template location to the current image) to form the estimate for the next iteration. Before the next iteration, the parameters are reset to the identity and the process repeats with the updated estimate.

So if I understand you correctly, the actual parameter block you are solving for does not really change; it is the data it operates on that changes, as a consequence of the transformation you are computing.

Put another way:

You have a transformation T with parameters x, some source data A, and some target B. The "normal" way of doing this would be

min_x |T(A, x) - B|

and the iterative algorithm updates x (starting from zero) until the error is minimized. Instead, what you would like to do is

min_delta |T(T(A, x), delta) - B|

where once delta has been computed, you solve a new optimization problem

min_delta_new |T(T(A, x + delta), delta_new) - B|

So, to be honest, this does not really fit the Ceres optimization model.

The sort-of-right way to do this would be to treat A as the parameter block and associate with it a local parameterization of the size of x, but I do not think that is going to be terribly practical.
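
To make the formulation above concrete, the literal reading is an outer loop of solves, something like the sketch below. CreatePhotometricCost and ComposeWarp are hypothetical helpers: the cost measures |T(T(A, x), delta) - B| as a function of delta (with the data already warped by the current x), and ComposeWarp folds the computed delta into x. Each pass pays the full problem setup cost, and the trust region state is not carried over between passes.

#include <algorithm>
#include "ceres/ceres.h"

// Hypothetical helpers, see the comment above.
ceres::CostFunction* CreatePhotometricCost(const double x[8]);
void ComposeWarp(const double delta[8], double x[8]);

void SolveByRecomposition(double x[8], int max_outer_iterations) {
  for (int i = 0; i < max_outer_iterations; ++i) {
    double delta[8];
    std::fill(delta, delta + 8, 0.0);  // every pass starts at the identity

    ceres::Problem problem;
    problem.AddResidualBlock(CreatePhotometricCost(x), nullptr, delta);

    ceres::Solver::Options options;
    options.max_num_iterations = 1;  // one inner step per outer composition
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

    // x <- x + delta in the notation above (or the inverted composition in the
    // inverse compositional variant).
    ComposeWarp(delta, x);

    if (summary.final_cost >= summary.initial_cost) {
      break;  // no more progress
    }
  }
}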


I have been able to compute the Jacobian at the identity using the internal::AutoDiff::Differentiate wrapper, and it worked well.
Then, I used the base CostFunction (analytic-derivatives style) to feed the constant Jacobian, as well as the residuals, into Ceres.

My initial plan was to use an IterationCallback to reset the parameter block back to the identity (if the iteration was successful) before the next iteration, so that the next perturbation again happens at the identity but with the updated current estimate. The problem I am facing is that the parameter block address used internally by Ceres (and passed to the cost function) differs from the one allocated in user code, so as far as I understand I have no control over the parameter estimate.

Yes, this is how Ceres works; the solver evaluates and updates its own internal copy of the state.
 

My question, then, is whether it is possible to do something like this in Ceres, or whether you know of any viable alternatives I could use to implement this kind of scheme. I would really like to be able to use the cool features in Ceres (solvers, loss functions, ...) instead of coming up with a sloppy homemade solver implementation.

I do not know of a way of doing this in Ceres as it stands :/

Sameer
