Extrinsic camera calibration problem requires adding degrees of freedom with each new set of observations


Scott Johnson

Jun 28, 2017, 3:26:00 PM
to Ceres Solver

I am using the OpenPTrack method of doing extrinsic camera calibration, where there is a fixed set of RGBD cameras mounted in a room and the problem is to determine their positions and orientations (relative to camera 0). As a curious engineer I like to understand how it works and how it is using Ceres to solve the problem.

The details are in the paper "A Distributed Calibration Algorithm for Color and Range Camera Networks" by Basso et al. The important part for this question is that it formulates a bundle adjustment problem and uses Ceres to solve it. A large checkerboard is presented to subsets of the cameras, the 2D corner positions of the checkerboard are observed, and they are matched against the expected 3D model of where the corners are on the checkerboard surface. Each time the checkerboard is presented it generates a set of observations based on which cameras can see it. As a bundle adjustment problem, the X vector is the cameras' intrinsics, the cameras' extrinsics, and the locations and orientations of the checkerboards when the images were grabbed. The checkerboard locations are needed in order to project the checkerboard corners into the image space of each camera for comparison with the observed 2D corner points.
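For concreteness, the residual for a single observed corner can be sketched like this (a Python/NumPy stand-in for the actual C++ Ceres cost function; the names and the undistorted pinhole model are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def reprojection_residual(corner_board, board_pose, cam_extrinsics, K, observed_px):
    """Residual for one checkerboard corner seen by one camera.

    corner_board   -- 3D corner position in the checkerboard's own frame
    board_pose     -- (R, t) taking board frame -> world frame (solved for)
    cam_extrinsics -- (R, t) taking world frame -> camera frame (solved for)
    K              -- 3x3 camera intrinsics matrix
    observed_px    -- the detected 2D corner position in the image
    """
    R_b, t_b = board_pose
    R_c, t_c = cam_extrinsics
    p_world = R_b @ corner_board + t_b     # board frame -> world
    p_cam = R_c @ p_world + t_c            # world -> this camera's frame
    uvw = K @ p_cam                        # pinhole projection (no distortion)
    return uvw[:2] / uvw[2] - observed_px  # reprojection error in pixels

# Toy check: with identity poses, a corner 2 m straight ahead projects to
# the principal point, so the residual against that observation is zero.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
identity = (np.eye(3), np.zeros(3))
corner = np.array([0.0, 0.0, 2.0])
r = reprojection_residual(corner, identity, identity, K, np.array([320.0, 240.0]))
```

Each camera that sees a given presentation contributes one such residual per corner, all sharing that presentation's board pose.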

So now we come to my question. I can understand a problem where you have a fixed set of parameters that you are trying to solve for, and you get more and more observations to reduce the error of the residuals using those parameters. This is how a simple least-squares problem works. But it nags at me that something is wrong when adding observations increases the number of parameters you are trying to solve for. In this case, every time you present a checkerboard to a group of cameras you add the need to solve for that checkerboard's location. I really want to solve for the camera extrinsics; the checkerboard locations are just a necessary side effect. The initial guess for the camera-extrinsics part of X comes from SolvePnP, which is itself the result of a nonlinear optimization and, from experimentation, is pretty good. So how do I know that when Ceres optimizes across the entire state vector it hasn't left the camera extrinsics alone (the part I care about) and merely iterated on the checkerboard locations (which I don't directly care about)? I can tell from the Ceres full report that the overall residual is reduced, but again that could be just the checkerboard locations being tweaked, and the camera extrinsics initial guess was as good as it was going to get.

I am new to Ceres and I must admit that this is my first direct use of nonlinear optimizers, yet it seems wrong to add degrees of freedom with every set of observations. Is my nagging feeling justified? There is an old Far Side cartoon where a kid pulls out a bigger brain midway through a test. Writing to this forum is my attempt to pull out a bigger brain.

Scott

William Rucklidge

Jun 28, 2017, 4:15:20 PM
to ceres-...@googlegroups.com
That's the nature of bundle adjustment problems: you add observations, which introduces more parameters (the world locations of the observed points) that you may not care about, if you're just trying to calibrate the camera intrinsics/extrinsics. The trick is that while you're adding degrees of freedom, you're also adding constraints - and if you've set things up correctly, you add more constraints than degrees of freedom. Setting things up incorrectly would involve doing something obviously wrong, like making a one-point checkerboard and presenting it so that only one camera can see it.
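To make that counting concrete, here is the back-of-envelope arithmetic with assumed numbers (an 8x6 inner-corner board and two cameras seeing each presentation; neither number is from the paper):

```python
# Each checkerboard presentation adds 6 unknowns (the board's pose),
# but every camera that sees it contributes 2 residuals (u and v) per
# detected corner, so the constraints swamp the new degrees of freedom.
corners = 48                                 # 8x6 inner corners (assumed)
cams_seeing = 2                              # cameras that see this presentation
new_unknowns = 6                             # 3 rotation + 3 translation
new_constraints = cams_seeing * corners * 2  # (u, v) per corner per camera
print(new_unknowns, new_constraints)         # 6 vs 192 per presentation
```

The degenerate case above (a one-point checkerboard seen by one camera) gives 2 constraints against 6 new unknowns, which is exactly where the setup breaks down.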

Something unintuitive is that as you add observations (in your case, by presenting the checkerboard additional times) in some sense the camera parameters may appear to get worse, if you look at the problem one way. Say I've presented the checkerboard 8 different ways O_1...O_8, and run Ceres to solve for a camera parameter vector (all parameters for all cameras) C_8. Now I add an additional observation O_9 and solve using O_1...O_9, giving C_9. Then (assuming Ceres did its job and didn't get stuck in a local minimum)
Error(C_8, O_1...O_8) <= Error(C_9, O_1...O_8)
which is just a restatement that C_8 was the minimum. Equality is really unlikely, so in general
Error(C_8, O_1...O_8) < Error(C_9, O_1...O_8): the new camera parameter vector fits the old observations worse than the old camera parameter vector. But C_8 is a function not just of the true observations (ground-truth reality), but also the noise in those observations, and the hope is that since C_9 is able to balance across more observations and thus average out more noise, the error between C_9 and ground truth is less than the error between C_8 and ground truth.
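Here is a toy numeric illustration of both halves of that argument, with straight-line fitting standing in for bundle adjustment (Python/NumPy; the noise level and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
true_params = np.array([2.0, -1.0])      # slope, intercept (ground truth)
x8, x9 = np.arange(8.0), np.arange(9.0)  # fixed measurement positions

def fit(x, y):
    """Least-squares line fit: returns (slope, intercept)."""
    A = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def err(params, x, y):
    return np.sum((params[0] * x + params[1] - y) ** 2)

trials = 5000
mse8 = mse9 = 0.0
for _ in range(trials):
    y9 = true_params[0] * x9 + true_params[1] + rng.normal(0, 0.5, 9)
    c8, c9 = fit(x8, y9[:8]), fit(x9, y9)
    # C_8 is the minimizer over the first 8 points, so this always holds:
    assert err(c8, x8, y9[:8]) <= err(c9, x8, y9[:8]) + 1e-9
    mse8 += np.sum((c8 - true_params) ** 2)
    mse9 += np.sum((c9 - true_params) ** 2)

# On average, the 9-observation fit is closer to ground truth, even
# though it fits the first 8 observations no better than C_8 does.
print(mse8 / trials, mse9 / trials)
```

The per-trial assertion is the restatement of "C_8 was the minimum"; the averaged parameter errors show the extra observation paying off against ground truth.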



--
You received this message because you are subscribed to the Google Groups "Ceres Solver" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ceres-solver+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ceres-solver/f87b3701-0ade-4b43-bc9c-bbf9ac1baf79%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Sameer Agarwal

Jun 29, 2017, 1:38:00 PM
to ceres-...@googlegroups.com
Scott,

This is a fantastic question and something I have been thinking about for a while myself. But I do not have a solution.

Here is a simpler version of this problem.

Consider the problem of fitting an ellipse to a set of points in the plane.

Let f(x, theta) = 0 be the equation of the ellipse, where theta are the parameters of the ellipse and x is a point in the plane.

Then given points y_i \in R^2, one optimization problem you may want to solve is

\min \sum_i |x_i - y_i|^2
s.t. \forall i, f(x_i, \theta) = 0

What we are saying here is: find the ellipse with parameters theta such that
the sum of squared distances between the points y_i and their "projections" x_i onto the ellipse is minimized.

This has the same problem that you described, where every time you add an observation y_i, you have to add a variable x_i to the problem and solve for it.

I believe this falls within the purview of what is known as the "Nuisance Parameter" problem, where x_i are the nuisance parameters. 
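A sketch of that ellipse problem, using scipy.optimize.least_squares and eliminating the constraint by writing each projection as a point on the ellipse itself, x_i = (a cos t_i, b sin t_i). An axis-aligned, origin-centered ellipse is assumed for brevity; the angles t_i are the nuisance parameters, one per observation:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
a_true, b_true = 3.0, 1.5
t_data = rng.uniform(0, 2 * np.pi, 40)
y = np.column_stack([a_true * np.cos(t_data), b_true * np.sin(t_data)])
y += rng.normal(0, 0.02, y.shape)   # noisy points near the ellipse

def residuals(params):
    a, b = params[0], params[1]     # the 2 parameters we actually care about
    t = params[2:]                  # one nuisance angle per observation
    x = np.column_stack([a * np.cos(t), b * np.sin(t)])
    return (x - y).ravel()          # minimizing this is sum_i |x_i - y_i|^2

# Initial guess: unit circle, angles read straight off the data points.
x0 = np.concatenate([[1.0, 1.0], np.arctan2(y[:, 1], y[:, 0])])
sol = least_squares(residuals, x0)
a_est, b_est = sol.x[:2]
```

Note how the parameter vector grows with the data: 2 ellipse parameters plus one t_i per point, mirroring the per-presentation checkerboard poses in the original question.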

While I do not have a solution for you, I can point you to some people in the statistics community who are thinking about similar problems.


That said, what these studies are missing is that the number of parameters grows with the data. I have not found any good references or studies on that yet.

Sameer
