Hi,
> I'm trying to calibrate a multi-camera system, and I'm not sure whether I
> should calibrate the extrinsic and intrinsic parameters at the same time,
> or first calibrate the intrinsic parameters and then calibrate the
> extrinsic parameters. I have no idea which method would give me a more
> precise result.
From what I read, I suggest you use the calibration package as is: simply
calibrate everything together (extrinsic + intrinsic). It runs
automatically.
One reason for a separate calibration would be that certain software
dictates using pre-defined parameters. That makes the whole thing a bit
more difficult and requires some understanding of the underlying math.
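To make the difference concrete, here is a minimal numpy sketch of the pinhole model, p ~ K(RX + t). Joint calibration estimates the intrinsic matrix K together with the extrinsics R and t; with K pre-calibrated, pixel observations can be normalized by K^-1, so that only R and t remain to be estimated. All numeric values (focal length, pose, the 3D point) are made up for illustration.

```python
import numpy as np

# Hypothetical intrinsics K: focal lengths fx, fy and principal point cx, cy.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: a 5-degree rotation about the y-axis plus a translation.
a = np.deg2rad(5.0)
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.1, 0.0, 0.0])

X = np.array([0.2, -0.1, 3.0])          # a 3D point in world coordinates

# Full pinhole projection: joint calibration fits K, R and t to many such points.
x_cam = R @ X + t                       # world -> camera frame (extrinsics)
p_h = K @ x_cam                         # camera frame -> homogeneous pixels (intrinsics)
p = p_h[:2] / p_h[2]                    # pixel coordinates

# With K already known, each pixel maps back to a normalized ray,
# leaving a pure extrinsic (R, t) estimation problem:
x_norm = np.linalg.inv(K) @ np.array([p[0], p[1], 1.0])
print(p, x_norm[:2])
```

Note that this sketch ignores lens distortion, which both Zhang's method and the calibration package also model.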
best,
Tomas
>
> For our system, the cameras' intrinsic parameters won't change very often.
>
> And sorry for my poor English, but what do you mean by "But in that case
> wouldn't it just be simpler to image one set of 3D points with your
> cameras and estimate the poses of your cameras from that? That seems
> like a much simpler problem."? I planned to first calibrate the
> intrinsic parameters using Zhang's method (Jean-Yves Bouguet's MATLAB
> code), then use an LED to generate points on a virtual 3D object, so that
> I can calculate the essential matrix between two of the cameras and get the
> extrinsic parameters. Do you mean this is an easy/good way?
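
The essential-matrix step described here can be sketched in a few lines of numpy (not Bouguet's toolbox): for two calibrated cameras related by rotation R and translation t, the essential matrix is E = [t]_x R, and normalized image points of the same 3D point satisfy the epipolar constraint x2^T E x1 = 0. The pose and point values below are made up for illustration.

```python
import numpy as np

# Hypothetical relative pose: camera 1 at the origin, camera 2 rotated
# 10 degrees about the y-axis and translated by t.
a = np.deg2rad(10.0)
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.1])

def skew(v):
    """Cross-product matrix [v]_x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[ 0.0, -v[2],  v[1]],
                     [ v[2],  0.0, -v[0]],
                     [-v[1],  v[0],  0.0]])

E = skew(t) @ R                 # essential matrix E = [t]_x R

# One 3D point (e.g. the LED) seen by both cameras; world frame = camera-1 frame.
X = np.array([0.5, -0.3, 4.0])
x1 = X / X[2]                   # normalized image point in camera 1
Xc2 = R @ X + t
x2 = Xc2 / Xc2[2]               # normalized image point in camera 2

residual = x2 @ E @ x1          # epipolar constraint, ~0 for noise-free data
print(residual)
```

In practice it goes the other way: E is estimated from many such point correspondences (e.g. with the 8-point algorithm) and then decomposed into R and t, with t recoverable only up to scale.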
>
> So far I have calibrated the intrinsic parameters using Zhang's method,
> both with a chessboard and with a circle board, but the results differ
> from each other, so I'm confused. By the way, why do we calibrate the
> intrinsic parameters with a chessboard when commercial software uses a
> circle board?
>
> Thank you.
> Harry
>
>
>
> ? 2012?10?31????UTC+8??3?46?40??Andrew Straw???
> Dear Harry - what are you trying to do? My thought is that if you don't let MCSC
> calculate both the extrinsic and intrinsic parameters, then you're going to have a
> lower-quality multi-camera calibration. Mathematically, however, it seems like it
> should be possible to set things up to estimate only R and T, but I haven't tried
> anything like that. But in that case, wouldn't it just be simpler to image one set of
> 3D points with your cameras and estimate the poses of your cameras from that? That
> seems like a much simpler problem.
>
> Certainly you can pass in a model of the distortion through a .rad file and this has
> "focal length" and "principal point" parameters, but these are used only for
> distortion of projected points and actually have nothing to do with focal length per
> se.
>
> -Andrew
>
> On 10/31/2012 02:42 AM, chocolate wrote:
> Hi,
>
> Can it be used to calibrate a multi-camera system with known internal parameters?
> Or can we just calculate the essential matrix or the trifocal tensor to get R and
> T? I don't know which one is best.
>
> Thank you!
> Harry
>
>
>
>
--
----------------------------------------------------------------------
Tomas Svoboda mailto:
svo...@cmp.felk.cvut.cz
Center for Machine Perception
http://cmp.felk.cvut.cz/~svoboda
Department of Cybernetics
http://cyber.felk.cvut.cz
Faculty of Electrical Engineering phone: (+420) 224.35.74.48
Czech Technical University in Prague fax: (+420) 224.35.73.85