About covariance computation with overparameterized parameters


Maxime Boucher

unread,
Jan 22, 2014, 8:19:08 PM1/22/14
to ceres-...@googlegroups.com
Hi,

First of all, let me thank you (very much) for releasing the Ceres library; it is efficient, convenient, and very well documented. This really is an impressive piece of work.

I work on SLAM, and I want to compare two variants of my SLAM implementation against each other. As I have no ground truth, I thought about comparing the determinants of the cameras' pose covariance matrices (the generalized variance).
I represent rotations with quaternions, and when building the problem I set the parameter blocks corresponding to the quaternions as locally parameterized (using the QuaternionParameterization provided by Ceres).
However, as I feed quaternions in as 4-vectors, the Jacobian obtained from Problem::Evaluate is rank deficient. So when I call Covariance::Compute, I do it on small problems consisting of only one camera and its observed points, so that DENSE_SVD doesn't take an enormous amount of time.
Then I compute the determinant (using OpenCV's implementation) of the resulting covariance matrix.

My problem is that I observe extravagant variations in the determinant values, ranging from -e+11 to -e-12 for the first poses of the system. I am very surprised by these.
Does it look like I have used Ceres badly?

For the very first pose of the system, I observed that the first row and first column of the covariance matrix are filled with zeros. Is this an effect of the LocalParameterization?
The pose covariance matrices being of the form [[qq, qt]; [tq, tt]], does this mean I should always drop the parts corresponding to the first component of the quaternions?


Maybe these questions are more theoretical than practical; if so, sorry for the disturbance.
Anyway, thank you for your time,

Maxime

Sameer Agarwal

unread,
Jan 22, 2014, 8:35:56 PM1/22/14
to ceres-...@googlegroups.com
Hi Maxime,

First of all, let me thank you (very much) for releasing the Ceres library; it is efficient, convenient, and very well documented. This really is an impressive piece of work.

Thank you for your kind words.
 
I work on SLAM, and I want to compare two variants of my SLAM implementation against each other. As I have no ground truth, I thought about comparing the determinants of the cameras' pose covariance matrices (the generalized variance).

Why are you comparing determinants? The determinant sounds like a really bad idea to me. The trace is actually the total variance and would be a more sensible quantity.

 
I represent rotations with quaternions, and when building the problem I set the parameter blocks corresponding to the quaternions as locally parameterized (using the QuaternionParameterization provided by Ceres).
However, as I feed quaternions in as 4-vectors, the Jacobian obtained from Problem::Evaluate is rank deficient. So when I call Covariance::Compute, I do it on small problems consisting of only one camera and its observed points, so that DENSE_SVD doesn't take an enormous amount of time.
Then I compute the determinant (using OpenCV's implementation) of the resulting covariance matrix.

If you only use one camera and the points observed by it, your problem will be ill-posed and the resulting Jacobian rank deficient. In that case the covariance will be entirely meaningless, as you are observing. You are going to need at least two views, and you will need to hold one view and the scale of the reconstruction constant.
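A toy NumPy sketch (mine, not from the thread or Ceres) of why such a covariance is meaningless: a Jacobian with a flat gauge direction gives a singular J^T J, so the covariance H^{-1} simply does not exist.

```python
import numpy as np

# Hypothetical Jacobian for a problem with one gauge freedom: the
# residuals only constrain x0 - x1, so shifting both parameters by the
# same amount leaves the cost unchanged (analogous to the free gauge of
# a single-view reconstruction).
J = np.array([[1.0, -1.0],
              [2.0, -2.0],
              [0.5, -0.5]])

H = J.T @ J                       # Gauss-Newton approximation of the Hessian
print(np.linalg.matrix_rank(H))   # 1, i.e. rank deficient for 2 parameters

# Inverting H to get the covariance fails outright:
try:
    np.linalg.inv(H)
except np.linalg.LinAlgError as e:
    print("no covariance:", e)
```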

Sameer



 

My problem is that I observe extravagant variations in the determinant values, ranging from -e+11 to -e-12 for the first poses of the system. I am very surprised by these.
Does it look like I have used Ceres badly?

For the very first pose of the system, I observed that the first row and first column of the covariance matrix are filled with zeros. Is this an effect of the LocalParameterization?
The pose covariance matrices being of the form [[qq, qt]; [tq, tt]], does this mean I should always drop the parts corresponding to the first component of the quaternions?


Maybe these questions are more theoretical than practical; if so, sorry for the disturbance.
Anyway, thank you for your time,

Maxime

--
You received this message because you are subscribed to the Google Groups "Ceres Solver" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ceres-solver...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ceres-solver/5aa82bcc-c6d7-4aa5-90bb-76726b702ed4%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Maxime Boucher

unread,
Jan 22, 2014, 9:24:58 PM1/22/14
to ceres-...@googlegroups.com
Thank you Sameer,


On Thursday, January 23, 2014 at 2:35:56 AM UTC+1, Sameer Agarwal wrote:
 
I work on SLAM, and I want to compare two variants of my SLAM implementation against each other. As I have no ground truth, I thought about comparing the determinants of the cameras' pose covariance matrices (the generalized variance).

Why are you comparing determinants? The determinant sounds like a really bad idea to me. The trace is actually the total variance and would be a more sensible quantity.

The idea came from H. Strasdat's paper "Monocular SLAM: Why Filter?". To me it seemed to allow representing all of the variance-covariance values with a single real number.
  
I represent rotations with quaternions, and when building the problem I set the parameter blocks corresponding to the quaternions as locally parameterized (using the QuaternionParameterization provided by Ceres).
However, as I feed quaternions in as 4-vectors, the Jacobian obtained from Problem::Evaluate is rank deficient. So when I call Covariance::Compute, I do it on small problems consisting of only one camera and its observed points, so that DENSE_SVD doesn't take an enormous amount of time.
Then I compute the determinant (using OpenCV's implementation) of the resulting covariance matrix.

If you only use one camera and the points observed by it, your problem will be ill-posed and the resulting Jacobian rank deficient. In that case the covariance will be entirely meaningless, as you are observing. You are going to need at least two views, and you will need to hold one view and the scale of the reconstruction constant.

Indeed!
If you know of a way, may I ask for advice on how to hold the scale constant?

Thank you for your advice,


Maxime

Sameer Agarwal

unread,
Jan 23, 2014, 9:01:05 AM1/23/14
to ceres-...@googlegroups.com
Why are you comparing determinants? The determinant sounds like a really bad idea to me. The trace is actually the total variance and would be a more sensible quantity.

The idea came from H. Strasdat's paper "Monocular SLAM: Why Filter?". To me it seemed to allow representing all of the variance-covariance values with a single real number.

I recommend using the trace rather than the determinant. The trace of a covariance matrix is the total variance of the variables. As far as I know, the determinant does not have any statistical interpretation.
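To illustrate the difference with a toy example (mine, not from the thread): the trace is the sum of the eigenvalues of the covariance, the determinant their product, so a single almost perfectly constrained (or almost unconstrained) direction drags the determinant across orders of magnitude while the trace barely moves.

```python
import numpy as np

# Two hypothetical 2x2 pose covariances with nearly identical total
# variance (trace) but very different determinants: the trace is the
# *sum* of the eigenvalues, the determinant their *product*.
balanced = np.diag([1.0, 1.0])
skewed = np.diag([2.0, 1e-6])   # one direction almost perfectly constrained

print(np.trace(balanced), np.linalg.det(balanced))   # 2.0 1.0
print(np.trace(skewed), np.linalg.det(skewed))       # ~2.0, ~2e-06
```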
 
  
I represent rotations with quaternions, and when building the problem I set the parameter blocks corresponding to the quaternions as locally parameterized (using the QuaternionParameterization provided by Ceres).
However, as I feed quaternions in as 4-vectors, the Jacobian obtained from Problem::Evaluate is rank deficient. So when I call Covariance::Compute, I do it on small problems consisting of only one camera and its observed points, so that DENSE_SVD doesn't take an enormous amount of time.
Then I compute the determinant (using OpenCV's implementation) of the resulting covariance matrix.

If you only use one camera and the points observed by it, your problem will be ill-posed and the resulting Jacobian rank deficient. In that case the covariance will be entirely meaningless, as you are observing. You are going to need at least two views, and you will need to hold one view and the scale of the reconstruction constant.

Indeed!
If you know of a way, may I ask for advice on how to hold the scale constant?

That depends on your parameterization. But no, there is no simple way to do this.

Sameer
 

Thank you for your advice,


Maxime


Maxime Boucher

unread,
Jan 23, 2014, 12:31:04 PM1/23/14
to ceres-...@googlegroups.com


On Thursday, January 23, 2014 at 3:01:05 PM UTC+1, Sameer Agarwal wrote:

Why are you comparing determinants? The determinant sounds like a really bad idea to me. The trace is actually the total variance and would be a more sensible quantity.

The idea came from H. Strasdat's paper "Monocular SLAM: Why Filter?". To me it seemed to allow representing all of the variance-covariance values with a single real number.

I recommend using the trace rather than the determinant. The trace of a covariance matrix is the total variance of the variables. As far as I know, the determinant does not have any statistical interpretation.
 
  
I represent rotations with quaternions, and when building the problem I set the parameter blocks corresponding to the quaternions as locally parameterized (using the QuaternionParameterization provided by Ceres).
However, as I feed quaternions in as 4-vectors, the Jacobian obtained from Problem::Evaluate is rank deficient. So when I call Covariance::Compute, I do it on small problems consisting of only one camera and its observed points, so that DENSE_SVD doesn't take an enormous amount of time.
Then I compute the determinant (using OpenCV's implementation) of the resulting covariance matrix.

If you only use one camera and the points observed by it, your problem will be ill-posed and the resulting Jacobian rank deficient. In that case the covariance will be entirely meaningless, as you are observing. You are going to need at least two views, and you will need to hold one view and the scale of the reconstruction constant.

Indeed!
If you know of a way, may I ask for advice on how to hold the scale constant?

That depends on your parameterization. But no, there is no simple way to do this.

Thank you Sameer,


Maxime 

Keir Mierle

unread,
Jan 27, 2014, 2:07:46 PM1/27/14
to ceres-...@googlegroups.com
Hi Maxime,

Although I haven't personally tried the technique explained in this paper, I believe it addresses exactly the case you have.

"Gauges and Gauge Transformations for Uncertainty Description of Geometric Structure with Indeterminacy" by Kanatani and Morris

Personally, I find this paper hard to read. It has little intuitive explanation and no clear practical recommendations. However, after studying it, I believe it boils down to this simple procedure: take the SVD of the second derivative (the Hessian) of the cost function; drop the smallest K singular values, where K is the number of excess gauge freedoms (e.g. for a single-quaternion optimization, K would be 1, since there are only 3 true degrees of freedom), to get a rank-(N-K) matrix of derivatives in the tangent space; then invert that for the covariance (trivially, by inverting the remaining diagonal entries of S).
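As I read it, the procedure sketched above could look roughly like this (an illustrative NumPy sketch; the function name and toy Jacobian are mine, not from the paper):

```python
import numpy as np

def gauge_fixed_covariance(J, num_gauge_freedoms):
    """Covariance via the truncated SVD of H = J^T J: drop the smallest
    `num_gauge_freedoms` singular values (the flat gauge directions) and
    invert only the remaining ones."""
    H = J.T @ J
    U, s, Vt = np.linalg.svd(H)
    keep = H.shape[0] - num_gauge_freedoms
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]
    return (Vt.T * s_inv) @ U.T    # V * diag(s_inv) * U^T

# Toy problem with one gauge freedom (residuals only constrain x0 - x1):
J = np.array([[1.0, -1.0],
              [2.0, -2.0]])
cov = gauge_fixed_covariance(J, num_gauge_freedoms=1)
print(cov)   # ~ [[0.05, -0.05], [-0.05, 0.05]]
```

For this toy case the result coincides with the Moore-Penrose pseudo-inverse of J^T J, which is what dropping the exactly-zero gauge directions amounts to.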

As I mentioned, I have not tried this procedure personally, but it makes intuitive sense to me when I think about what the "extra" degrees of freedom give you at the minimum of the cost surface. I have been meaning to run some experiments and write up a short note about this, but I have not gotten to it.

Sameer will probably chime in to disagree since we debated this paper at length when we were implementing the covariance code.

We have some of this implemented already in Ceres; see the note about gauge freedoms in the covariance header, and the null_space_rank parameter in Covariance::Options.

If you try this technique, please let me know how it goes, since I haven't tried it myself.

Thanks,
Keir




Maxime Boucher

unread,
Jan 31, 2014, 7:11:21 AM1/31/14
to ceres-...@googlegroups.com


On Monday, January 27, 2014 8:07:46 PM UTC+1, Keir Mierle wrote:
Hi Maxime,

Although I haven't personally tried the technique explained in this paper, I believe it addresses exactly the case you have.

"Gauges and Gauge Transformations for Uncertainty Description of Geometric Structure with Indeterminacy" by Kanatani and Morris

Personally, I find this paper hard to read. It has little intuitive explanation and no clear practical recommendations. However, after studying it, I believe it boils down to this simple procedure: take the SVD of the second derivative (the Hessian) of the cost function; drop the smallest K singular values, where K is the number of excess gauge freedoms (e.g. for a single-quaternion optimization, K would be 1, since there are only 3 true degrees of freedom), to get a rank-(N-K) matrix of derivatives in the tangent space; then invert that for the covariance (trivially, by inverting the remaining diagonal entries of S).

As I mentioned, I have not tried this procedure personally, but it makes intuitive sense to me when I think about what the "extra" degrees of freedom give you at the minimum of the cost surface. I have been meaning to run some experiments and write up a short note about this, but I have not gotten to it.

Sameer will probably chime in to disagree since we debated this paper at length when we were implementing the covariance code.

We have some of this implemented already in Ceres; see the note about gauge freedoms in the covariance header, and the null_space_rank parameter in Covariance::Options.

If you try this technique, please let me know how it goes, since I haven't tried it myself.

Hi Keir,

Thank you for sharing your knowledge. All right, I'll let you know if I end up implementing this paper (it seems a bit complex).

Keir Mierle

unread,
Jan 31, 2014, 5:02:06 PM1/31/14
to ceres-...@googlegroups.com
Trying options.null_space_rank = N is fairly straightforward; if the docs are not clear, then we should probably fix them!

