Multiple camera pose estimation


NikolasT

Feb 3, 2021, 10:09:23 AM
to Ceres Solver
Hi all,

I have a system that consists of N cameras at fixed positions. My goal is to find the relative poses between them, assuming that the 1st camera's pose is (0,0,0), (0,0,0) (my world coordinate system is the 1st camera's coordinate system).

What I've done so far is to run cv2.stereoCalibrate on each pair of cameras, and by chaining their relative poses I could express each camera's pose with respect to my reference camera. This didn't seem adequate, as errors from each stereo calibration were propagated to the other camera pairs.
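
For concreteness, here is a minimal numpy sketch of that pose chaining, assuming cv2.stereoCalibrate was run with camera i as the first camera, so each (R_ij, t_ij) maps points from camera i's frame into camera j's frame (X_j = R_ij @ X_i + t_ij); the variable names are illustrative:

```python
import numpy as np

def compose(R_ab, t_ab, R_bc, t_bc):
    """Chain the a->b and b->c transforms into the a->c transform."""
    R_ac = R_bc @ R_ab
    t_ac = R_bc @ t_ab + t_bc
    return R_ac, t_ac

# Pairwise extrinsics from stereoCalibrate, e.g. rel[(1, 2)] = (R_12, t_12).
rel = {}  # filled from your calibration results

# Pose of every camera expressed in camera 1's frame (the world frame).
poses = {1: (np.eye(3), np.zeros((3, 1)))}
for (i, j), (R_ij, t_ij) in sorted(rel.items()):
    if i in poses and j not in poses:
        poses[j] = compose(*poses[i], R_ij, t_ij)
```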

To further optimise the poses I'm using the Ceres bundle adjustment example, feeding it the poses found above as initial guesses, the cameras' intrinsics, some object points, and the observations.
For the object points' coordinates I'm using AprilTag detections from the 1st camera (since that camera's coordinate system is my reference).
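
For reference, here is a rough sketch of the reprojection residual that such a bundle adjustment minimizes, written with scipy.optimize.least_squares rather than Ceres just to make the structure explicit. In this sketch the first camera is held fixed as the world frame and the AprilTag object points are kept constant (in the actual Ceres example the points can be parameter blocks as well); all names and array shapes are illustrative assumptions:

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def project(rvec, tvec, K, X):
    """Project 3D world points X (M, 3) into a camera with pose (rvec, tvec)."""
    R, _ = cv2.Rodrigues(rvec)
    Xc = X @ R.T + tvec                  # world -> camera coordinates
    x = Xc[:, :2] / Xc[:, 2:3]           # perspective division
    return x @ K[:2, :2].T + K[:2, 2]    # apply intrinsics (no lens distortion here)

def residuals(params, K_list, points3d, observations):
    """observations: list of (cam_index, point_index, observed_xy), cam 0 = world."""
    res = []
    for cam, pt, uv in observations:
        if cam == 0:                     # the first camera defines the world frame
            rvec, tvec = np.zeros(3), np.zeros(3)
        else:                            # cameras 2..N: 6 parameters each
            off = 6 * (cam - 1)
            rvec, tvec = params[off:off + 3], params[off + 3:off + 6]
        proj = project(rvec, tvec, K_list[cam], points3d[pt:pt + 1])[0]
        res.extend(proj - uv)            # 2D reprojection error for this observation
    return np.asarray(res)

# x0 stacks the initial (rvec, tvec) guesses for cameras 2..N from the pairwise
# stereo calibration; points3d / observations come from the AprilTag detections.
# result = least_squares(residuals, x0, args=(K_list, points3d, observations))
```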

While the solver seems to do the job amazingly well and converges very quickly, I'd like to raise this question: is a single photo taken by all cameras enough, or should I consider using a moving target? The problem with the second approach, however, is that for each image every detection would have to be treated as a new object point.

Any insight or suggestion would be more than welcome.
Thanks in advance

Alan GAO

Feb 4, 2021, 9:13:07 PM
to Ceres Solver
Hi NikolasT,

In my opinion, this is a classical SfM (Structure from Motion) problem: you want to calibrate the cameras to obtain accurate poses for them. With N cameras at fixed positions, a three-dimensional object with many features (such as a sculpture) is recommended as the target. Keep the object/target still and don't move it while taking the pictures.
There are many open-source SfM packages available; I recommend openMVG (https://github.com/openMVG/openMVG), which uses Ceres to solve the BA problems.

Best,
Alan