Can it be used to calibrate a multi-camera system with known internal parameters?


chocolate

Oct 30, 2012, 9:42:59 PM
to multica...@googlegroups.com
Hi,

Can it be used to calibrate a multi-camera system with known internal parameters? Or can we just compute the essential matrix or the trifocal tensor to get R and T? I don't know which approach is best.

Thank you!
Harry

Andrew Straw

Oct 31, 2012, 3:46:34 AM
to multica...@googlegroups.com
Dear Harry - what are you trying to do? My thought is that if you don't let MCSC calculate both the extrinsic and intrinsic parameters, you're going to get a lower-quality multi-camera calibration. Mathematically, however, it seems like it should be possible to set things up to estimate only R and T, but I haven't tried anything like that. But in that case, wouldn't it be simpler to image one set of 3D points with your cameras and estimate the poses of your cameras from that? That seems like a much simpler problem.
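[Editor's note: the pose-from-known-points idea above can be sketched concretely. With known intrinsics, one shared set of 3D points reduces each camera's calibration to a resection problem. Below is a minimal numpy sketch using DLT; it is illustrative only (not MCSC code), assumes noise-free, non-coplanar points, and omits the RANSAC and nonlinear refinement a real pipeline would add.]

```python
import numpy as np

def pose_from_points(K, pts3d, pts2d):
    """Estimate a camera pose [R|t] from 3D-2D correspondences via DLT.

    Assumes known intrinsics K (3x3), at least 6 non-coplanar world
    points pts3d (n,3), and their pixel observations pts2d (n,2).
    Sketch only: no noise handling or nonlinear refinement.
    """
    # Remove the known intrinsics: x = K^-1 [u, v, 1]^T
    uv1 = np.column_stack([pts2d, np.ones(len(pts2d))])
    x = (np.linalg.inv(K) @ uv1.T).T
    # Stack the linear system A @ vec([R|t]) = 0
    rows = []
    for (X, Y, Z), (u, v, _) in zip(pts3d, x):
        P = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([P, np.zeros(4), -u * P]))
        rows.append(np.concatenate([np.zeros(4), P, -v * P]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    M = Vt[-1].reshape(3, 4)                # [R|t] up to scale and sign
    M /= np.mean(np.linalg.norm(M[:, :3], axis=1))
    if np.linalg.det(M[:, :3]) < 0:         # resolve the sign ambiguity
        M = -M
    U, _, Vt2 = np.linalg.svd(M[:, :3])     # project onto SO(3)
    return U @ Vt2, M[:, 3]
```

Run once per camera against the same world points and all poses land in one common coordinate frame, which is exactly the multi-camera extrinsic calibration being discussed.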

Certainly you can pass in a model of the distortion through a .rad file and this has "focal length" and "principal point" parameters, but these are used only for distortion of projected points and actually have nothing to do with focal length per se.
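[Editor's note: the point that the .rad "focal length" and "principal point" exist only to serve the distortion model can be illustrated with a Bouguet-style radial model, where those numbers merely normalize pixel coordinates before the distortion polynomial is applied. The parameter layout below is an assumption for illustration, not the actual .rad file format.]

```python
import numpy as np

def undistort_points(pts, fc, cc, kc):
    """Iteratively undo radial distortion (Bouguet-style model).

    fc (2,) and cc (2,) only normalize pixels before the polynomial
    in kc = (k1, k2) is applied; they are distortion-model parameters,
    not a metric focal length or true optical center.
    """
    x = (pts - cc) / fc            # normalized distorted coordinates
    xu = x.copy()
    for _ in range(20):            # fixed-point iteration
        r2 = (xu ** 2).sum(axis=1, keepdims=True)
        radial = 1 + kc[0] * r2 + kc[1] * r2 ** 2
        xu = x / radial
    return xu * fc + cc            # back to pixel coordinates
```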

-Andrew
-- 
Andrew D. Straw, Ph.D.
Research Institute of Molecular Pathology (IMP)
Vienna, Austria
http://strawlab.org/

chocolate

Oct 31, 2012, 4:46:46 AM
to multica...@googlegroups.com
Dear Andrew,

I never thought I would get such a quick reply, many thanks!

I'm trying to calibrate a multi-camera system, and I'm not sure whether I should calibrate the extrinsic and intrinsic parameters at the same time, or first calibrate the intrinsic parameters and then the extrinsic parameters. I have no idea which method would give me a more precise result.

For our system, the cameras' intrinsic parameters won't change very often.

And sorry for my poor English, but what do you mean by "But in that case wouldn't it just be simpler to image one set of 3D points with your cameras and estimate the poses of your cameras from that? That seems like a much simpler problem."? My plan was to first calibrate the intrinsic parameters using Zhang's method (Jean-Yves Bouguet's MATLAB code), then wave an LED around and treat its positions as points on a virtual 3D object, so that I can compute the essential matrix between pairs of cameras and recover the extrinsic parameters. Do you mean this is an easy/good way?

So far I have calibrated the intrinsic parameters using Zhang's method, both with a chessboard and with a circle board, but the two results differ, which confuses me. By the way, why do we usually calibrate intrinsic parameters with a chessboard, while commercial software uses a circle board?
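[Editor's note: the essential-matrix route described above ends with decomposing E into R and t. A numpy sketch of the standard four-candidate decomposition follows; note t is recovered only up to scale, and picking the physically valid candidate requires a cheirality check (triangulated points in front of both cameras), omitted here.]

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates encoded in an essential matrix.

    t is returned as a unit vector (scale is unrecoverable from E);
    the valid candidate is the one passing the cheirality check.
    """
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Because the scale of t is lost, a pairwise essential-matrix chain also needs at least one known distance to fix the overall scale of the rig.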

Thank you.
Harry



On Wednesday, October 31, 2012 at 3:46:40 PM UTC+8, Andrew Straw wrote:

Tomas Svoboda

Oct 31, 2012, 5:08:41 AM
to multica...@googlegroups.com
Hi,

> I'm trying to calibrate a multi-cameras system. And I'm not sure if I
> should calibrate the extrinsic and intrinsic parameters at the same time
> or first calibrate the intrinsic parameters and then calibrate the
> extrinsic parameters. I have no idea which method could give me a more
> precise result.

From what I read, I suggest you use the calibration package as is: simply calibrate everything together (extrinsic + intrinsic). It runs automatically.

One reason for a separate calibration could be that certain software dictates using pre-defined parameters. The whole thing then gets a bit more difficult and requires some understanding of the underlying math.

best,
Tomas


--
----------------------------------------------------------------------
Tomas Svoboda mailto: svo...@cmp.felk.cvut.cz
Center for Machine Perception http://cmp.felk.cvut.cz/~svoboda
Department of Cybernetics http://cyber.felk.cvut.cz
Faculty of Electrical Engineering phone: (+420) 224.35.74.48
Czech Technical University in Prague fax: (+420) 224.35.73.85

CHEN Xing

Oct 31, 2012, 12:25:54 PM
to multica...@googlegroups.com
I have this problem as well. In my experiments, I found that although the reprojection error of the calibration is very small, the intrinsic parameters don't make much sense. (I'm using standard consumer-grade cameras, but the results show that the principal points are very far from the image center. Also, the intrinsic parameters of cameras of the same model, under the same settings, vary widely...) The reason is probably that I was not able to cover the views of all the cameras.

I am wondering if we can put some constraints on the intrinsic parameters when doing the calibration, for example, forcing the principal point to the center of the image, forcing zero distortion, and so on. Would that improve the overall result?
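[Editor's note: as an aside, fixing the principal point can indeed simplify the estimation. Under the additional assumptions of zero skew and square pixels, the Zhang-style constraints from planar homographies become linear in the single unknown 1/f². A numpy sketch of this constrained closed form follows; it is purely illustrative, not MCSC functionality.]

```python
import numpy as np

def focal_from_homographies(Hs, cx, cy):
    """Estimate a single focal length f from plane homographies,
    with the principal point fixed at (cx, cy), zero skew, fx == fy.

    With omega = K^-T K^-1, each homography H = [h1 h2 h3] gives
        h1^T omega h2 = 0   and   h1^T omega h1 = h2^T omega h2,
    and with the principal point known, both are linear in 1/f^2.
    """
    # omega = (1/f^2) * B + e3 e3^T for this constrained K
    B = np.array([[1.0, 0.0, -cx],
                  [0.0, 1.0, -cy],
                  [-cx, -cy, cx * cx + cy * cy]])
    a, b = [], []  # stacked constraints: a * (1/f^2) = b
    for H in Hs:
        h1, h2 = H[:, 0], H[:, 1]
        a.append(h1 @ B @ h2)
        b.append(-h1[2] * h2[2])
        a.append(h1 @ B @ h1 - h2 @ B @ h2)
        b.append(h2[2] ** 2 - h1[2] ** 2)
    inv_f2 = np.linalg.lstsq(np.array(a)[:, None],
                             np.array(b), rcond=None)[0][0]
    return 1.0 / np.sqrt(inv_f2)
```

Frontoparallel views of the plane make these constraints degenerate, so the boards must be tilted; that is one reason good view coverage matters for stable intrinsics.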

Thanks!

CHEN Xing / 陈醒

Andrew Straw

Oct 31, 2012, 1:19:30 PM
to multica...@googlegroups.com
Are the intrinsic parameters you're referring to just the ones used for the distortion model? If so, then disable the distortion fitting. (If your cameras do have significant distortion, measure it first and supply it in the input .rad files before running MCSC.)

-Andrew

CHEN Xing

Oct 31, 2012, 1:28:11 PM
to multica...@googlegroups.com
On Wed, Oct 31, 2012 at 10:19 AM, Andrew Straw <andrew...@imp.ac.at> wrote:
> Are the intrinsic parameters you're referring to just the ones used for the
> distortion model? If so, then disable the distortion fitting.

Thanks for the rapid reply! I've done that, and there are no problems with the distortion correction now. The problem now is that the 3x3 camera matrix has weird values: the principal point can sometimes be quite far from the center, and the focal lengths vary widely among cameras of the same model under the same settings.


CHEN Xing / 陈醒