Hi Andrew,
We are really struggling to calibrate our 14-camera array. I suspect part of the problem is that the working volume is large enough that most cameras do not see most of the volume, although every point in the volume is seen by at least 3 cameras, and often more.
We have tried many configurations and once achieved convergence, but it seems quite tricky, so I'm wondering if there is anything we can do to make convergence more reliable.
Main question:
I vaguely recall there may be a way to seed multicamselfcal with an initial guess of the camera positions and orientations. Is there indeed such an option? If so, do you know where and how one would set it?
Other things:
1. We have done the checkerboard calibrations; the lenses do not introduce much distortion.
2. We ensure that every camera has ~1000 surviving points, and coverage is relatively consistent across cameras.
3. We checked the braidz file, and the data look to be of high quality.
Thanks for any suggestions!
- Floris