wjr, Sameer,
thanks for your replies, and excuse my late response; I had to digest what you both said.
First, the missing context about my program. As Sameer noticed, my program runs on image pair structures. It relies on the images being in the correct order (ordered datasets). The pairs are simply constructed like this:
Pair1 = image1, image2.
Pair2 = image2, image3.
Pair3 = image3, image4 and so on.
There is always overlap between pairs. This is why there are two AddResidualBlock calls per pair.
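For reference, the overlapping pairing above can be sketched like this (image indices only; `makePairs` is just an illustrative name, not from my actual code):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Build overlapping (i, i+1) index pairs: {0,1}, {1,2}, {2,3}, ...
// Each interior image appears in two consecutive pairs, which is why
// every pair contributes two sets of residual blocks.
std::vector<std::pair<std::size_t, std::size_t>> makePairs(std::size_t numImages) {
    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    for (std::size_t i = 0; i + 1 < numImages; ++i) {
        pairs.emplace_back(i, i + 1);
    }
    return pairs;
}
```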
@wjr - Yes, the gradient jumping all over the place definitely isn't good. I will adopt your idea of using a "perfect" (or close to perfect) dataset of points and cameras for testing, but I have to find such a dataset first.
For the normalization part - I'm using OpenCV for the first part of the program (feature matching, camera poses, triangulation), which gives point coordinates usually within [-10, 10] on all axes (after filtering and rejecting "wild" 3D points). I will try to scale them down some more.
@Sameer - I'm not sure if I understood correctly, but what you wrote suggests there should be correspondences between partial point clouds included in the Problem. This indicates that I have missed an essential part of how bundle adjustment works. My program uses ICP to align partial point clouds, so it already has a 3D correspondence search algorithm in it. It's been 6 days since your reply, so I had some time to experiment, and this is what I came up with:
Restructuring all partial point clouds into a single vector where two types of points are present: those that have a correspondence with the "next" point cloud and those that don't.
Points with a correspondence are visible in 3 cameras - 2 cameras from the N'th pair and 1 camera from the N+1'th pair - those are (hopefully) the "ties" you mentioned.
Points without a correspondence have 2 cameras, just like before.
It's all packed in a vector of structures:
struct PointBA {
    double p3d[3];                              // 3D point, shared by all observations
    std::vector<std::array<double, 9>> cams;    // one 9-parameter camera per observation
    std::vector<std::array<double, 2>> p2d;     // matching 2D observation per camera
};
Now, if a 3D point has a correspondence in the "next" partial point cloud, its 3D coordinates are slightly different in the two clouds. ICP aligns the partial point clouds, but it's not perfect because there is noise and distortion in those clouds. For now I'm just averaging the 3D coordinates.
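The averaging step can be as simple as the sketch below (pure arithmetic; `mergeTiePoint` and the parameter names are mine, not from my actual code). Since BA will move the point anyway, the mean is only an initial guess:

```cpp
#include <array>

// A tie point is triangulated once in pair N and once in pair N+1
// (after ICP alignment), so it has two slightly different 3D estimates.
// A cheap merge is the component-wise mean; a weighted mean (e.g. by
// reprojection error) would be a possible refinement.
std::array<double, 3> mergeTiePoint(const std::array<double, 3>& fromPairN,
                                    const std::array<double, 3>& fromPairN1) {
    std::array<double, 3> merged;
    for (int i = 0; i < 3; ++i) {
        merged[i] = 0.5 * (fromPairN[i] + fromPairN1[i]);
    }
    return merged;
}
```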
Finally I'm passing all this into residual blocks like this:
for (size_t p = 0; p < pointsBA.size(); p++) {
    for (size_t c = 0; c < pointsBA[p].cams.size(); c++) {
        ceres::CostFunction* cost_function = SnavelyReprojectionError::Create(
            pointsBA[p].p2d[c][0], pointsBA[p].p2d[c][1]);
        problem.AddResidualBlock(cost_function, huber,
                                 pointsBA[p].cams[c].data(), pointsBA[p].p3d);
    }
}
Is this a more reasonable approach?
I have already done some tests with the above configuration. It's a little better now, but all previous issues remain. The point clouds don't "explode" anymore; they are just distorted. I ran some tests with small datasets (~7-10 images), and in some cases the optimisation didn't converge after 1000 steps. There are a lot of unsuccessful steps.
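In case it helps with diagnosing the unsuccessful steps, this is roughly the solver configuration I'd expect for BA (a fragment, not my actual setup; the option names are real Ceres fields, but the values are guesses, and `problem` comes from the code above):

```cpp
#include <ceres/ceres.h>
#include <iostream>

// Sketch of a typical BA solver setup. SPARSE_SCHUR exploits the
// camera/point block structure of the problem. Runs of unsuccessful
// steps show up in summary.FullReport() together with a shrinking
// trust-region radius, which often points at bad scaling or outliers.
ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_SCHUR;  // DENSE_SCHUR is fine for ~10 images
options.max_num_iterations = 200;
options.minimizer_progress_to_stdout = true;

ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
std::cout << summary.FullReport() << "\n";
```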
Any more help highly appreciated.
Thank you