Assign a component of a ceres::Jet<double,6> variable to a double variable


Miguel Algaba Borrego

Jun 13, 2012, 3:07:22 AM
to ceres-...@googlegroups.com
Hi, I want to create a residual vector of size 307200 with 6 input parameters (x, y, z, yaw, pitch, roll). I need to access each component of the x vector and convert each component into a double variable. I paste a small piece of code to illustrate what I want to do:

class myCostFunction {
 public:
  template <typename T> bool operator()(const T* const x,T* residuals) const {

    double yaw = x[3]; //This fails

... //Residual computation

    return true;
  }
};

...

int main(int argc, char** argv) {
  google::ParseCommandLineFlags(&argc, &argv, true);
  google::InitGoogleLogging(argv[0]);

... //Some stuff

  // Build the problem.
  Problem problem;
  problem.AddResidualBlock(
      new AutoDiffCostFunction<myCostFunction, 307200,6>(
          new myCostFunction),
      NULL,
      &x);

...

}

The "double yaw = x[3];" statement doesn't compile; the error says it is not possible to assign a ceres::Jet<double,6> to a double variable. How can I perform this conversion?

Thanks in advance and best regards,

Miguel Algaba.


Miguel Algaba Borrego

Jun 13, 2012, 3:19:43 AM
to ceres-...@googlegroups.com
Thank you very much for your fast reply, I will try to change the inner error function computation to always operate with T variables instead of double.

Best regards,
Miguel Algaba.

2012/6/13 Sameer Agarwal <sameer...@google.com>
Miguel,
If you are trying to use automatic differentiation (which would be the reason to use the templated functor), 
then you cannot convert to double. Everything has to be done in terms of the type T, because this functor will 
be evaluated using doubles and Jets to evaluate the derivatives automatically.

The right thing to do here is to say

T yaw = x[3]

Sameer
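The reason the conversion is disallowed: during automatic differentiation the functor is instantiated with T = ceres::Jet<double, 6>, which carries a value together with its six partial derivatives, and collapsing it to a plain double would silently drop the derivatives. A minimal self-contained sketch of the idea (MiniJet is an illustrative stand-in, not the real ceres::Jet):

```cpp
#include <cassert>

// Simplified stand-in for ceres::Jet<double, N>: a value plus N partial
// derivatives. The real Jet is far richer, but the principle is the same.
template <int N>
struct MiniJet {
  double a;     // value
  double v[N];  // partial derivatives w.r.t. the N parameters

  MiniJet(double value = 0.0) : a(value) {
    for (int i = 0; i < N; ++i) v[i] = 0.0;
  }
};

// Product rule: derivatives propagate through every arithmetic operation.
template <int N>
MiniJet<N> operator*(const MiniJet<N>& x, const MiniJet<N>& y) {
  MiniJet<N> r(x.a * y.a);
  for (int i = 0; i < N; ++i) r.v[i] = x.a * y.v[i] + x.v[i] * y.a;
  return r;
}

// A functor templated on T, as AutoDiffCostFunction requires (hypothetical).
struct SquareYaw {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    T yaw = x[3];          // OK: stays in terms of T
    // double yaw = x[3];  // would not compile when T = MiniJet<6>
    residual[0] = yaw * yaw;
    return true;
  }
};
```

Seeding x[3] with value 0.5 and derivative 1 in slot 3 yields a residual of 0.25 whose slot-3 derivative is 1.0, i.e. d(yaw^2)/d(yaw) = 2 * 0.5 — exactly what assigning through a double would have thrown away.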







--
----------------------------------------
Ceres Solver Google Group
http://groups.google.com/group/ceres-solver?hl=en


Sameer Agarwal

Jun 13, 2012, 3:29:40 AM
to ceres-...@googlegroups.com
Happy to help.

Also, just to be sure: are you trying to create a single cost function that outputs 300k residuals, or are you going to combine a bunch of cost functions whose residuals sum to 300k?

Sameer

Keir Mierle

Jun 13, 2012, 3:30:44 AM
to ceres-...@googlegroups.com
Hi Miguel,

Since the number of residuals is so large, you may have issues setting it at compile time. If so, consider using the dynamic-sized residual capability of autodiff (see the autodiff_cost_function.h header for details).

Out of curiosity, what is the form of your problem that produces a 300,000-sized residual block?

Keir

Miguel Algaba Borrego

Jun 13, 2012, 4:08:27 AM
to ceres-...@googlegroups.com
I'm implementing a photoconsistency error function to estimate the 6DoF rigid transformation that aligns two RGBD frames (intensity and depth images). I have already implemented this method using the Levenberg-Marquardt implementation of the MRPT library, but I would like to use Ceres for performance. The error function computes the per-pixel intensity error from images of size 640x480=307200 pixels. Conceptually each pixel residual is computed this way:

e(i,j) = imgIntensity1(i,j) - projectPointTo2D( Rt * point3D( imgIntensity0(i,j), imgDepth0(i,j) ) )

//Rt is a 4x4 rotation and translation matrix computed from the state vector (x,y,z,yaw,pitch,roll)

Essentially the error function is a 2D matrix of size 640x480 where each entry contains the e(i,j) value. Should I organize the problem in another way to solve it? I think a possible organization could be to add a single residual block for each pixel, but I am not completely sure.

Thanks again for your help, I really appreciate it :)

Kind regards,
Miguel Algaba.


Keir Mierle

Jun 13, 2012, 4:30:39 AM
to ceres-...@googlegroups.com
Hi Miguel,

In this case, using a single residual is reasonable and the formulation you have sounds sane. With that said, your solve time will be bounded by function and Jacobian evaluations rather than the linear solve time in Ceres, since your system will be 10x10 (tiny!). This means that for this problem, there may be less of a performance difference with other optimizers.

There are two ways I can think of to mitigate this:
  1. Split the function into N cost functions, where N is the number of threads/CPUs. Ceres can thread the evaluation of the overall problem by evaluating multiple cost functions in parallel; using one cost function defeats this. This may not work currently, since our OpenMP use is naive and may require tweaking to use threads even for small loops. If you try this, let us know. The relevant OpenMP use in Ceres is in program_evaluator.h.
  2. Use threading in your cost function (easy with OpenMP).
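A sketch of option 2, using std::thread instead of OpenMP so the example is self-contained; the per-pixel residual here is a placeholder (observed minus predicted), and all names are hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Each worker writes a disjoint slice of the residual array, so no
// synchronization is needed.
void ComputeResidualSlice(const std::vector<double>& predicted,
                          const std::vector<double>& observed,
                          std::size_t begin, std::size_t end,
                          std::vector<double>* residuals) {
  for (std::size_t i = begin; i < end; ++i)
    (*residuals)[i] = observed[i] - predicted[i];  // placeholder residual
}

// Split the residuals into num_threads contiguous chunks and evaluate
// them in parallel inside the cost function.
void ComputeResidualsThreaded(const std::vector<double>& predicted,
                              const std::vector<double>& observed,
                              int num_threads,
                              std::vector<double>* residuals) {
  const std::size_t n = predicted.size();
  const std::size_t chunk = (n + num_threads - 1) / num_threads;
  std::vector<std::thread> workers;
  for (int t = 0; t < num_threads; ++t) {
    const std::size_t begin = t * chunk;
    const std::size_t end = std::min(n, begin + chunk);
    if (begin >= end) break;
    workers.emplace_back(ComputeResidualSlice, std::cref(predicted),
                         std::cref(observed), begin, end, residuals);
  }
  for (std::thread& w : workers) w.join();
}
```

With autodiff the same slicing applies, except the arrays hold values of the templated type T rather than double.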
Let me know if you have further questions.

Keir

Miguel Algaba Borrego

Jun 13, 2012, 4:38:43 AM
to ceres-...@googlegroups.com
Thanks again for the advice. I will try both ways, starting with the second, which will also speed up my current implementation.

Best regards,
Miguel Algaba.


Sameer Agarwal

Jun 13, 2012, 11:55:35 AM
to ceres-...@googlegroups.com
Why not have one cost function per pixel? Ceres will take care of multithreading, or did I miss something?
Sameer

Keir Mierle

Jun 13, 2012, 12:54:05 PM
to ceres-...@googlegroups.com
Whether using a per-pixel cost function is worthwhile depends on the per-pixel overhead. It sounds like the computation for each pixel's residual in this case is cheap; possibly less than the Ceres per-CostFunction overhead (think copying the Jacobians, virtual overhead, etc). That's why I suggested batching some residuals together.

Keir

Miguel Algaba Borrego

Jun 13, 2012, 1:25:56 PM
to ceres-...@googlegroups.com
That's correct, the per-pixel cost function computation is cheap. I have several ideas in mind to speed up the algorithm, starting with multithreading the code, performing optimization at different scales, etc. I have also thought about computing the residuals in parallel with CUDA, but this last idea seems incompatible with auto-differentiation, right? In that case the only way to operate would be using numeric differentiation, is that correct? Thanks again.

Cheers,
Miguel Algaba.

Sameer Agarwal

Jun 13, 2012, 1:53:53 PM
to ceres-...@googlegroups.com
Yes, if you use CUDA you will be on your own to compute derivatives, either analytically or numerically.
As for multiscale optimization... I have some ideas I am working on; I may have something to share on that front, especially for problems on image grids, some time soon :)

Sameer

Miguel Algaba Borrego

Jun 13, 2012, 2:18:35 PM
to ceres-...@googlegroups.com
Sounds great, I will follow the improvements in that area :). Thanks a lot!

Miguel Algaba.

Miguel Algaba

Jun 16, 2012, 6:33:04 AM
to ceres-...@googlegroups.com
Hi again, I am trying to transform an image of double numbers into a matrix of numbers of type T so that I can apply auto-differentiation (the idea I described in my previous comments). I have tried to use the Eigen::Matrix<T,480,640> type to represent an image of Jet numbers, but I would like to transform the image into the matrix representation only once, not each time the () operator is called. Can you give me some hints/recommendations on how to do this?

Thanks in advance and best regards,
Miguel Algaba.

Sameer Agarwal

Jun 16, 2012, 1:25:16 PM
to ceres-...@googlegroups.com
Hi Miguel,

Instead of defining a functor, define a subclass of CostFunction.

In that class you will be responsible for computing the derivatives, but you can use jets yourself and compute the derivatives using them. In that class you can cache whatever information you want.

Keir may have some more suggestions. He has been playing with image derivatives and mixing automatic and numeric differentiation.

Sameer

Keir Mierle

Jun 18, 2012, 5:58:53 PM
to ceres-...@googlegroups.com
Hi Miguel,

Sorry about my slow reply. I have integrated Ceres into Blender (www.blender.org) for motion tracking. There, I use mixed auto/numeric diff. I have some patches in the Blender fork of Ceres that will help with this. I am planning on adding this to upstream Ceres but haven't had time to do it yet and it probably won't make it into the 1.2.2 release tonight.

Here's the Blender planar tracker, which relies on Ceres. Search for "Chain::Rule" to see the mixing of numeric/automatic derivatives:
http://projects.blender.org/scm/viewvc.php/branches/soc-2011-tomato/extern/libmv/libmv/tracking/track_region.cc?view=markup&root=bf-blender

You can see the use of "Chain::Rule" that I placed in jet.h (not in upstream). Search for "ArgumentType" to find it.

These need to get ported into Ceres proper, with better names and some other fixes.

Let me know if you have further questions.
Keir

Miguel Algaba Borrego

Jun 19, 2012, 3:17:25 PM
to ceres-...@googlegroups.com
Thank you very much again; both your and Sameer's comments are very useful for my project. Let's see if I understood what has to be done to combine numeric/auto diff when using images:

The image is essentially a 2D (sampled) function whose derivatives are unknown a priori. So, we need to compute and propagate the image gradients to 'substitute' the image derivatives with the image's numerical gradients. This way we can compute Jet error functions that depend on non-Jet functions (images, for example), chaining the Jet function derivatives with a Jet representation of the numerical derivatives of the non-Jet functions.

Now going back to my case; each pixel residual is calculated this way (in my previous post I wrote the error function the wrong way):

T point3D[4] = obtain3DPointFromDepthImage(imgDepth0(i,j)); //3D point in homogeneous coordinates of the first image [imgDepth0]
T warpedPoint3D[4] = Rt * point3D; //Rt is a 4x4 translation and rotation matrix that depends on the (x,y,z,yaw,pitch,roll) state vector
T warpedPoint2D[2] = project3DPointTo2D(warpedPoint3D); //simply computes the projected 2D coordinates of the 3D point on the image plane
e(i,j) = imgIntensity1(warpedPoint2D[0], warpedPoint2D[1]) - imgIntensity0(i,j);

The warpedPoint2D depends on the (x,y,z,yaw,pitch,roll) state vector, so I have to compute its coordinates into an array of type T so that auto-diff can compute the derivatives of warpedPoint2D with respect to each parameter. Then, I have to compute the numerical image gradients and chain them to the warpedPoint2D derivatives to have a Jet representation of the whole error function. The procedure makes sense to me, but I will need some more time to completely understand how to implement it using Chain::Rule. I will try to analyze your code in depth in the following days. If I made a mistake in my explanation, please correct me so that I can have a better understanding of the whole process.

Thanks again and best regards,
Miguel Algaba.




Sameer Agarwal

Jun 19, 2012, 3:53:25 PM
to ceres-...@googlegroups.com
Hi Miguel,
 

Yes, this sounds about right. Chain::Rule is just a composition of derivatives, with some Jet-related quirks thrown in. I think in your particular case things may be much simpler if you did not use the template-based autodiff machinery and instead defined a CostFunction object.

Inside this CostFunction, when derivatives are asked for, you use the autodiff mechanism to compute the Jacobian of the warpedPoint with respect to the parameters. Examples of this can be found in autodiff_test.cc. Once you have these derivatives, composing the numeric and analytic derivatives is straightforward.

It sounds more tedious than it is, and will actually let you debug things more easily.
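The composition can be sketched in a self-contained way. Here an analytic function stands in for the sampled image (so the result can be checked), its gradient is computed numerically as it would be for real image data, and a toy 2-parameter warp replaces the full 6-DoF transform; all names are illustrative, not Ceres API:

```cpp
#include <cassert>
#include <cmath>

// "Image": an analytic stand-in whose true gradient we know. In a real
// problem this would be a sampled intensity image.
double Image(double x, double y) { return x * x + 3.0 * y; }

// Numeric image gradient (central differences) -- the part that cannot
// come from autodiff, because the image is sampled data.
void NumericImageGradient(double x, double y, double* ix, double* iy) {
  const double h = 1e-5;
  *ix = (Image(x + h, y) - Image(x - h, y)) / (2.0 * h);
  *iy = (Image(x, y + h) - Image(x, y - h)) / (2.0 * h);
}

// Residual r(tx, s) = Image(s*x + tx, s*y). The warp Jacobian is analytic,
// the image gradient numeric, and they are composed via the chain rule:
//   dr/dtx = Ix * d(x')/dtx + Iy * d(y')/dtx = Ix
//   dr/ds  = Ix * x + Iy * y
void EvaluateResidualAndJacobian(double x, double y, double tx, double s,
                                 double* r, double* dr_dtx, double* dr_ds) {
  const double xp = s * x + tx;  // warped point
  const double yp = s * y;
  double ix, iy;
  NumericImageGradient(xp, yp, &ix, &iy);
  *r = Image(xp, yp);
  *dr_dtx = ix;              // chain rule, tx column of the Jacobian
  *dr_ds = ix * x + iy * y;  // chain rule, s column of the Jacobian
}
```

In a real CostFunction::Evaluate, the warp Jacobian would come from running autodiff on the warp alone, and the gradient from a presampled gradient image.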

Sameer

Keir Mierle

Jun 19, 2012, 3:54:44 PM
to ceres-...@googlegroups.com
Hi Miguel,

This code is doing exactly what you need. Here's the relevant portion from track_region.cc:
// Sample the image at position (x, y) but use the gradient, if present, to
// propagate derivatives from x and y. This is needed to integrate the numeric
// image gradients with Ceres's autodiff framework.
template<typename T>
static T SampleWithDerivative(const FloatImage &image_and_gradient,
                              const T &x,
                              const T &y) {
  float scalar_x = JetOps<T>::GetScalar(x);
  float scalar_y = JetOps<T>::GetScalar(y);

  // Note that sample[1] and sample[2] will be uninitialized in the scalar
  // case, but that is not an issue because the Chain::Rule below will not read
  // the uninitialized values.
  float sample[3];
  if (JetOps<T>::IsScalar()) {
    // For the scalar case, only sample the image.
    sample[0] = SampleLinear(image_and_gradient, scalar_y, scalar_x, 0);
  } else {
    // For the derivative case, sample the gradient as well.
    SampleLinear(image_and_gradient, scalar_y, scalar_x, sample);
  }
  T xy[2] = { x, y };
  return Chain<float, 2, T>::Rule(sample[0], sample + 1, xy);
}
This does the sampling in the same way you need; e.g.

e(i,j) = imgIntensity1(warpedPoint2D[0], warpedPoint2D[1]) - imgIntensity0(i,j) 

would become

e(i, j) = SampleWithDerivative(imgIntensity1_and_gradient, warpedPoint2D[0], warpedPoint2D[1]) - imgIntensity0(i, j)
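SampleLinear above is libmv's bilinear sampler. For those adapting the code to their own image structures, the core of bilinear sampling at a non-integer position looks roughly like this (a hypothetical sketch, not libmv's implementation; it assumes 0 <= x < width-1 and 0 <= y < height-1):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Bilinear sampling of a row-major grayscale image at non-integer (x, y).
double SampleBilinear(const std::vector<double>& image, int width,
                      double x, double y) {
  const int x0 = static_cast<int>(x);
  const int y0 = static_cast<int>(y);
  const double fx = x - x0;  // fractional offsets inside the pixel cell
  const double fy = y - y0;
  // Blend the four surrounding pixels by their opposite-corner areas.
  return (1 - fy) * ((1 - fx) * image[y0 * width + x0] +
                     fx * image[y0 * width + x0 + 1]) +
         fy * ((1 - fx) * image[(y0 + 1) * width + x0] +
               fx * image[(y0 + 1) * width + x0 + 1]);
}
```

The gradient-carrying version then samples the precomputed gradient channels the same way and feeds them to Chain::Rule, as in the snippet above.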

Keir

Miguel Algaba Borrego

Jun 20, 2012, 3:09:48 PM
to ceres-...@googlegroups.com
Hi again, I have tried to use the SampleWithDerivative code in my project. To do that I have substituted my jet.h header with the one you gave me, and compiled/linked the Libmv library to have access to FloatImage and SampleLinear. The problem is that the SampleLinear function that handles the derivative case is not implemented in the Libmv library I downloaded from https://github.com/libmv/libmv. Are you using a newer version of Libmv? Or are SampleLinear and FloatImage defined anywhere else?


Thanks in advance and best regards,
Miguel Algaba.

Keir Mierle

Jun 20, 2012, 3:17:24 PM
to ceres-...@googlegroups.com
Wow! I wasn't expecting you to go to all the trouble of using libmv; it makes more sense for you to adapt the code to your existing image libraries. It should be straightforward to adapt it to your case.

Either way, the libmv in upstream blender is considerably forked from the one on github. I am entirely to blame for this. If you take the sample.h from extern/libmv in blender, that should work. With that said, I don't advise this path; consider instead adapting the algorithms to your data structures.

Keir

Miguel Algaba Borrego

Jun 22, 2012, 1:44:15 PM
to ceres-...@googlegroups.com
Hi again Keir, I'm adapting the algorithms to use my data structures as you suggested. Now I'm implementing the image gradient computation, but I don't know exactly which kernel you use to convolve the intensity image. Do you use Sobel? Scharr? Some other, more precise kernel?

Thank you very much and best regards,
Miguel Algaba.



Keir Mierle

Jun 22, 2012, 1:59:16 PM
to ceres-...@googlegroups.com

Look in sample.h and convolve.h in libmv, where the computation is done. It's just convolving with a Gaussian and the derivative of a Gaussian. It turns out getting image gradients right is a bit tricky. Take a look here:


Libmv is MIT licensed so it's no problem to integrate the code in whatever you are working on.
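The derivative-of-Gaussian filtering can be sketched generically in 1-D; this is an illustration of the idea, not libmv's convolve.h code. The kernel is normalized so that filtering a linear ramp returns its exact slope:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// 1-D derivative-of-Gaussian kernel with the given radius and sigma,
// normalized so that convolving a linear ramp recovers its slope.
std::vector<double> DerivativeOfGaussianKernel(int radius, double sigma) {
  std::vector<double> kernel(2 * radius + 1);
  double norm = 0.0;
  for (int k = -radius; k <= radius; ++k) {
    const double g = std::exp(-0.5 * k * k / (sigma * sigma));
    kernel[k + radius] = -k * g;  // convolution weights w[k] = -k * g(k)
    norm += k * k * g;            // makes sum_k k * w[k] = -1 after dividing
  }
  for (double& w : kernel) w /= norm;
  return kernel;
}

// Convolve a 1-D signal with the kernel; only interior samples are
// computed (borders are left at zero).
std::vector<double> Gradient1D(const std::vector<double>& signal,
                               int radius, double sigma) {
  const std::vector<double> kernel = DerivativeOfGaussianKernel(radius, sigma);
  const int n = static_cast<int>(signal.size());
  std::vector<double> out(signal.size(), 0.0);
  for (int i = radius; i + radius < n; ++i) {
    double sum = 0.0;
    for (int k = -radius; k <= radius; ++k)
      sum += kernel[k + radius] * signal[i - k];  // true convolution (flipped)
    out[i] = sum;
  }
  return out;
}
```

For 2-D images the same 1-D kernel is applied separably: a Gaussian along one axis and its derivative along the other.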

Keir


Miguel Algaba Borrego

Jun 22, 2012, 2:26:21 PM
to ceres-...@googlegroups.com
Thanks a lot! Very useful information :)


Miguel Algaba Borrego

Jun 22, 2012, 3:29:14 PM
to ceres-...@googlegroups.com
Hi, when running my program, Eigen prints this runtime error message:

/usr/include/eigen3/Eigen/src/Core/DenseStorage.h:69: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 16>::plain_array() [with T = double, int Size = 6, int MatrixOrArrayOptions = 0]: Assertion `(reinterpret_cast<size_t>(array) & 0xf) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! ****"' failed.

I'm not using Eigen variables in my program, so I must be using the auto-diff framework in a wrong way. I imagine this problem is related to the fixed size of the residuals (it has 307200 elements). I have tried to use the AutoDiffCostFunction(CostFunctor* functor, int num_residuals) constructor to specify the number of residuals at runtime, but I must be missing something (it doesn't compile). How should I specify the number of residuals at runtime? Do you think this will fix the Eigen problem?

This is how I add the residual block to the problem:

  problem.AddResidualBlock(
      new AutoDiffCostFunction<ResidualRGBDPhotoconsistency,307200,6>(
          new ResidualRGBDPhotoconsistency),
      NULL,
      x);

And this is the class I defined to compute the residuals:

class ResidualRGBDPhotoconsistency {
 public:

  template <typename T> bool operator()(const T* const stateVector,
                                        T* residuals) const {

          //Residuals computation
          ...
          return true;
  }
};

Thanks in advance!

Cheers,
Miguel Algaba.



Keir Mierle

Jun 22, 2012, 3:31:56 PM
to ceres-...@googlegroups.com

Can you please paste the compile error? I don't believe Eigen will work with fixed size matrices that large.

Miguel Algaba Borrego

Jun 22, 2012, 3:32:07 PM
to ceres-...@googlegroups.com
Sorry, I forgot to paste the state vector definition:

  // The variable to solve for with its initial value.
  double initial_x[] = {0,0,0,0,0,0}; //x y z yaw pitch roll
  double x[] = {0,0,0,0,0,0}; //x y z yaw pitch roll

Thanks!

Miguel Algaba Borrego

Jun 22, 2012, 3:45:11 PM
to ceres-...@googlegroups.com
Sorry Keir, I meant runtime, not compile time. This is the error I get when adding the residual block to the problem (trying the dynamic size):

The runtime error:

F0622 21:43:06.002919  6579 sized_cost_function.h:53] Check failed: kNumResiduals > 0 || kNumResiduals == DYNAMIC Cost functions must have at least one residual block.
*** Check failure stack trace: ***
    @ 0xb6bbcc9a  google::LogMessage::Fail()
    @ 0xb6bbfafa  google::LogMessage::SendToLog()
    @ 0xb6bbc857  google::LogMessage::Flush()
    @ 0xb6bc04a2  google::LogMessageFatal::~LogMessageFatal()
    @  0x805044a  main
    @ 0xb6826e37  (unknown)
    @  0x804f2f1  (unknown)
Abortado


This is how I add the residual block to the problem:

  problem.AddResidualBlock(
      new AutoDiffCostFunction<ResidualRGBDPhotoconsistency,0/*This is not used?*/,6>(
          new ResidualRGBDPhotoconsistency,307200 /*dynamic size*/),
      NULL,
      x);

Thanks again!
Cheers,

Miguel Algaba.


Sameer Agarwal

Jun 22, 2012, 3:58:33 PM
to ceres-...@googlegroups.com
Are you on a 32-bit system or a 64-bit system?
There is an issue filed against Ceres about 32-bit alignment being broken.

Eigen is used for implementing automatic differentiation; I am pretty sure that's where the error is coming from.

Sameer

Sameer Agarwal

Jun 22, 2012, 4:01:26 PM
to ceres-...@googlegroups.com
As the header documentation (autodiff_cost_function.h) mentions, you need to pass ceres::DYNAMIC as the first template argument if you are going to use dynamic sizing with autodiff.

 problem.AddResidualBlock(
      new AutoDiffCostFunction<ResidualRGBDPhotoconsistency, ceres::DYNAMIC, 6>(
          new ResidualRGBDPhotoconsistency, 307200 /*dynamic size*/),
      NULL,
      x);

Sameer

Miguel Algaba Borrego

Jun 22, 2012, 4:05:21 PM
to ceres-...@googlegroups.com
Hi Sameer,

I compiled everything as 32-bit. Can you give me some pointers on how to fix this problem?


Thanks a lot!
Miguel Algaba.


Sameer Agarwal

Jun 22, 2012, 4:08:34 PM
to ceres-...@googlegroups.com
Yes this is going to be a problem.

I am going to look into it as soon as I can get my hands on a 32 bit system. Unfortunately I do not have access to one right now :/.

I am not sure if the changes in 1.2.2 introduced this problem or it's been there for a while.

Two things will help me debug this.

1. Can you get the stack trace by compiling Ceres and your application in debug mode, running it in gdb, and reporting the full stack trace back?

2. Can you check if you get the same error with Ceres version 1.2.1?

Sameer

Miguel Algaba Borrego

unread,
Jun 22, 2012, 4:18:47 PM6/22/12
to ceres-...@googlegroups.com
Ok, I will compile it tomorrow in debug mode and try to give you as much information as I can. Thank you very much to you and Keir for your invaluable help :)

Cheers,
Miguel Algaba.



Sameer Agarwal

Jun 22, 2012, 4:20:11 PM
to ceres-...@googlegroups.com
Here is the relevant issue to keep track of:

Miguel Algaba Borrego

Jun 23, 2012, 8:14:11 AM
to ceres-...@googlegroups.com
Hi Sameer, I have compiled Ceres 1.2.0 and my application in debug mode. Here is the output from the debugger:

Starting debugger:
done
Registered new type: wxString
Registered new type: STL String
Registered new type: STL Vector
Setting breakpoints
Debugger name and version: GNU gdb (Ubuntu/Linaro 7.2-1ubuntu11) 7.2
Program received signal SIGABRT, Aborted.
In __kernel_vsyscall () ()
At /home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/residual_block.cc:105

And this is the Call stack:

#0 (    0x0012e416 in __kernel_vsyscall() (??:??)
#1 0x15f9e71    raise() (/lib/i386-linux-gnu/libc.so.6:??)
#2 0x15fd34e    abort() (/lib/i386-linux-gnu/libc.so.6:??)
#3 0x15f2888    __assert_fail() (/lib/i386-linux-gnu/libc.so.6:??)
#4 0x8548942    ceres::internal::AutoDiff<ResidualRGBDPhotoconsistency, double, 6, 0, 0, 0, 0, 0>::Differentiate(ResidualRGBDPhotoconsistency const&, double const* const*, int, double*, double**) () (??:??)
#6 0x85559d0    ceres::internal::ResidualBlock::Evaluate(this=0x89875f0, cost=0xbfffec08, residuals=0xb519f020, jacobians=0x8994200, scratch=0xb74e5008) (/home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/residual_block.cc:105)
#7 0x8570b45    ceres::internal::ProgramEvaluator<ceres::internal::ScratchEvaluatePreparer, ceres::internal::DenseJacobianWriter>::Evaluate(.omp_data_i=0xbfffec68) (/home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/program_evaluator.h:181)
#8 0x857385d    ceres::internal::ProgramEvaluator<ceres::internal::ScratchEvaluatePreparer, ceres::internal::DenseJacobianWriter>::Evaluate(this=0x8aa4780, state=0x8a42ff0, cost=0xbfffeeb8, residuals=0xb519f020, jacobian=0x8a43050) (/home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/program_evaluator.h:196)
#9 0x8564d80    ceres::internal::TrustRegionMinimizer::Minimize(this=0xbfffefec, options=..., parameters=0x8a42ff0, summary=0xbffff4ac) (/home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/trust_region_minimizer.cc:165)
#10 0x855cc0c    ceres::internal::SolverImpl::Minimize(options=..., program=0x8aa4680, evaluator=0x8aa4780, linear_solver=0x89955c0, parameters=0x8a42ff0, summary=0xbffff4ac) (/home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/solver_impl.cc:160)
#11 0x855d1d0    ceres::internal::SolverImpl::Solve(original_options=..., problem=0xbffff7ac, summary=0xbffff4ac) (/home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/solver_impl.cc:282)
#12 0x855b953    ceres::Solve(options=..., problem=0xbffff7ac, summary=0xbffff4ac) (/home/miguel/Libs/ceres-solver-1.2.0/internal/ceres/solver.cc:60)
#13 (    0x085436db in main() (??:??)

I will try now with Ceres 1.2.2 to see if there is any difference.

Best regards,
Miguel Algaba.



Miguel Algaba Borrego

Jun 23, 2012, 9:50:30 AM
to ceres-...@googlegroups.com
I have compiled ceres 1.2.2 and the output is the same:

Here is the debugger output:


Starting debugger:
done
Registered new type: wxString
Registered new type: STL String
Registered new type: STL Vector
Setting breakpoints
Debugger name and version: GNU gdb (Ubuntu/Linaro 7.2-1ubuntu11) 7.2
Program received signal SIGABRT, Aborted.
In __kernel_vsyscall () ()
At /home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/residual_block.cc:105

And the call stack:


#0 (    0x0012e416 in __kernel_vsyscall() (??:??)
#1 0x15f9e71    raise() (/lib/i386-linux-gnu/libc.so.6:??)
#2 0x15fd34e    abort() (/lib/i386-linux-gnu/libc.so.6:??)
#3 0x15f2888    __assert_fail() (/lib/i386-linux-gnu/libc.so.6:??)
#4 0x854d852    ceres::internal::AutoDiff<ResidualRGBDPhotoconsistency, double, 6, 0, 0, 0, 0, 0>::Differentiate(ResidualRGBDPhotoconsistency const&, double const* const*, int, double*, double**) () (??:??)
#6 0x855a96c    ceres::internal::ResidualBlock::Evaluate(this=0x89903c8, cost=0xbfffeb88, residuals=0xb519f020, jacobians=0x8aadd98, scratch=0xb74e5008) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/residual_block.cc:105)
#7 0x857876d    ceres::internal::ProgramEvaluator<ceres::internal::ScratchEvaluatePreparer, ceres::internal::DenseJacobianWriter>::Evaluate(.omp_data_i=0xbfffebe8) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/program_evaluator.h:181)
#8 0x857b485    ceres::internal::ProgramEvaluator<ceres::internal::ScratchEvaluatePreparer, ceres::internal::DenseJacobianWriter>::Evaluate(this=0x8a4d058, state=0x8a4d0a0, cost=0xbfffee28, residuals=0xb519f020, jacobian=0x8a4d088) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/program_evaluator.h:196)
#9 0x856a44a    ceres::internal::TrustRegionMinimizer::Minimize(this=0xbfffef64, options=..., parameters=0x8a4d0a0, summary=0xbffff49c) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/trust_region_minimizer.cc:164)
#10 0x8561d72    ceres::internal::SolverImpl::Minimize(options=..., program=0x8aae3c0, evaluator=0x8a4d058, linear_solver=0x89b6690, parameters=0x8a4d0a0, summary=0xbffff49c) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/solver_impl.cc:166)
#11 0x8562382    ceres::internal::SolverImpl::Solve(original_options=..., problem_impl=0x89b1fc0, summary=0xbffff49c) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/solver_impl.cc:303)
#12 0x85608be    ceres::Solver::Solve(this=0xbffff34c, options=..., problem=0xbffff7ac, summary=0xbffff49c) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/solver.cc:52)
#13 0x856090d    ceres::Solve(options=..., problem=0xbffff7ac, summary=0xbffff49c) (/home/miguel/Libs/ceres-solver-1.2.2/internal/ceres/solver.cc:60)
#14 0x085485eb in main() (??:??)

I attach the disassembly because it is too large to paste here:


Thanks in advance!

Cheers,
Miguel Algaba.

Disassembly.txt

Sameer Agarwal

unread,
Jun 23, 2012, 5:55:48 PM6/23/12
to ceres-...@googlegroups.com
Miguel,

Thanks for the traces. This bug has been around for a while, it seems.

I was able to find a 32-bit machine and replicate the problem. I have pushed a fix for review. It should be in the master branch as soon as Keir has had a chance to review it.

Sameer

Sameer Agarwal

unread,
Jun 24, 2012, 2:55:10 PM6/24/12
to ceres-...@googlegroups.com
Miguel,
This should be fixed now in the master branch. Please try it out.
We will cut a 1.2.3 release soonish.
Sameer

Miguel Algaba Borrego

unread,
Jun 27, 2012, 7:13:31 PM6/27/12
to ceres-...@googlegroups.com
Thanks a lot Sameer and Keir, this fix solved my auto-diff alignment issue and now the optimization problem converges to the right solution. Over the next few days I will work on further improving the computation time. Thank you very much again for making this possible :)

Best regards,
Miguel Algaba.

Keir Mierle

unread,
Jun 27, 2012, 7:36:46 PM6/27/12
to ceres-...@googlegroups.com
If you wanted to share some results of the optimizations, we'd love to collect them together on a page showing off what people have done with Ceres.

Thanks again,
Keir

Miguel Algaba Borrego

unread,
Jun 28, 2012, 3:10:05 AM6/28/12
to ceres-...@googlegroups.com
Sure, that would be an honor for me :). Is it ok if I send you my results next week? I would like to make some optimizations, and I believe they will be ready in one week.

Best regards,
Miguel Algaba.


On Thursday, June 28, 2012 1:36:46 AM UTC+2, Keir Mierle wrote:
If you wanted to share some results of the optimizations, we'd love to collect them together on a page showing off what people have done with Ceres.

Thanks again,
Keir

Keir Mierle

unread,
Jun 28, 2012, 5:49:38 AM6/28/12
to ceres-...@googlegroups.com
That would be fantastic!

Miguel Algaba Borrego

unread,
Jul 2, 2012, 4:31:07 PM7/2/12
to ceres-...@googlegroups.com
Hi again,

I am working on some optimizations to my photoconsistency odometry project to make the algorithm faster. One of them consists of performing the error function optimization at different (increasing) image scales, to obtain a first rough approximation to the solution and then refine it at successive optimization levels. I attach a simple flow diagram to illustrate the idea.

In my experiments I have found that the optimizer computes a very good approximation to the solution if I feed the algorithm images at a certain scale. I would like to use that good approximation to initialize the optimizer with the same images at increasing scales to refine the solution. I was expecting to obtain a more refined solution, but when I start the optimization at the next image scale, the optimizer updates the state vector with a big step, making the solution worse than before.

My idea is to reduce the Levenberg-Marquardt damping factor when increasing the image scale, so that the step size is smaller at the next optimization level. I have tried adjusting the optimizer parameters to get the desired result, but I don't see evident changes. Could you give me some recommendations for adjusting the parameters?

Here is an example of the parameters I have used to optimize at 3 levels. Level 0 is the last optimization level and corresponds to the largest image scale. The optimization starts at level 2 with 50 iterations, an initial trust region radius of 1e4, a max trust region radius of 1e8, etc.

max_num_iterations (at each level): [3, 10, 50]
initial_trust_region_radius (at each level): [1e40, 1e25, 1e4]
max_trust_region_radius (at each level): [1e50, 1e35, 1e8]
min_trust_region_radius (at each level): [1e-32, 1e-32, 1e-32]
min_relative_decrease (at each level): [1e-3, 1e-3, 1e-3]
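In case it helps to see the per-level wiring in one place, here is a sketch of how these settings would be applied (this is a sketch, not verbatim project code: BuildProblem() is a hypothetical helper that constructs the photoconsistency problem at a given image scale, and the option values are the ones listed above):

```cpp
// Sketch only: BuildProblem() is a hypothetical helper that adds the
// photoconsistency residual block for the given image scale. Level 2
// (coarsest) runs first; level 0 (largest image scale) runs last.
const int kLevels = 3;
const int    max_iterations[kLevels] = {3, 10, 50};       // [level 0, 1, 2]
const double initial_radius[kLevels] = {1e40, 1e25, 1e4};
const double max_radius[kLevels]     = {1e50, 1e35, 1e8};

double x[6] = {0, 0, 0, 0, 0, 0};  // (x, y, z, yaw, pitch, roll)
for (int level = kLevels - 1; level >= 0; --level) {
  ceres::Problem problem;
  BuildProblem(level, x, &problem);

  ceres::Solver::Options options;
  options.max_num_iterations          = max_iterations[level];
  options.initial_trust_region_radius = initial_radius[level];
  options.max_trust_region_radius     = max_radius[level];
  options.min_trust_region_radius     = 1e-32;
  options.min_relative_decrease       = 1e-3;

  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);  // x is refined in place
}
```

Each level solves over the same parameter block x, so the finer levels start from the coarser levels' estimate.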

If you find anything strange in the above algorithm description or parameter setup, please let me know.

Thank you very much in advance and best regards,
Miguel Algaba.


On Thursday, June 28, 2012 11:49:38 AM UTC+2, Keir Mierle wrote:
That would be fantastic!
Photoconsistencyodometry.pdf

Sameer Agarwal

unread,
Jul 2, 2012, 6:49:31 PM7/2/12
to ceres-...@googlegroups.com
Miguel,

Do you mean that the optimizer actually computes and accepts a worse solution than the one you started with when you move to a finer scale? Or is it that the first step is worse, and then the trust region radius gets resized appropriately until it starts computing and accepting steps correctly?

Sameer


Miguel Algaba Borrego

unread,
Jul 3, 2012, 3:00:56 AM7/3/12
to ceres-...@googlegroups.com
Hi Sameer,

I think the second case you mentioned is the one that is happening; however, I have found that sometimes the step drifts the solution too far, making the algorithm get stuck in a local minimum.

Best regards,
Miguel Algaba.


On Tuesday, July 3, 2012 12:49:31 AM UTC+2, Sameer Agarwal wrote:
Miguel,

Miguel Algaba Borrego

unread,
Jul 20, 2012, 4:13:23 PM7/20/12
to ceres-...@googlegroups.com
Hi, it's been a while since my last comment; I apologize for taking so long to share my results. Today I released the code for the Photoconsistency Visual Odometry project I have been working on, and I wanted to let you know since you helped me a lot with it. Thank you both for your invaluable help during this time.


Thank you very much and best regards,
Miguel Algaba.

Sameer Agarwal

unread,
Jul 20, 2012, 4:55:05 PM7/20/12
to ceres-...@googlegroups.com
Thanks for sharing Miguel and congratulations on the release. Nice video.
Sameer