Hi all, I'm trying to solve standard visual SLAM using feature-point matching on a monocular video (500-1000 frames per video).
Since I don't have a good initial guess for the poses (for now I estimate the essential matrix between consecutive frames and use the recovered relative poses), trying to solve the entire video in batch doesn't converge well.
So I thought about using ISAM2 to solve this incrementally, but I keep getting the infamous IndeterminantLinearSystemException, even when using the QR linear solver.
I created a small test example with only 10 frames and a baseline of ~1 m between them so I won't have degenerate images. When I solve this small set in batch it converges fine with the Gauss-Newton, Dogleg, and LM solvers (DL and GN with the QR solver; Cholesky throws the indeterminant error).
However, when I use ISAM2 and add the measurements incrementally, it throws the error. I think what happens is that the problematic landmarks have only ~3 observations which, due to matching noise, don't align well; once the batch solvers see all the observations for those landmarks (6-7), it's not a problem.
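One way I've been checking whether a landmark is actually well constrained is to look at the parallax between its viewing rays: a landmark seen from nearly the same direction in all frames has almost no depth information, which is exactly the kind of variable that makes the linear system indeterminate. A minimal sketch (pure NumPy, function name is my own):

```python
import numpy as np

def max_parallax_deg(cam_centers, landmark):
    """Largest angle in degrees between any two viewing rays from the
    camera centers to a triangulated landmark. Tiny parallax means the
    landmark's depth is poorly constrained."""
    landmark = np.asarray(landmark, dtype=float)
    rays = []
    for c in cam_centers:
        v = landmark - np.asarray(c, dtype=float)
        rays.append(v / np.linalg.norm(v))
    best = 0.0
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            cosang = np.clip(np.dot(rays[i], rays[j]), -1.0, 1.0)
            best = max(best, float(np.degrees(np.arccos(cosang))))
    return best
```

I gate landmarks on this before adding their factors (a threshold of roughly 1-2 degrees is a tuning choice on my part, not something from GTSAM).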
My main issue is that the program crashes when incrementally adding the measurements to ISAM2, and I don't know how to proceed. How can I recover from this? Can I remove the problematic landmark from ISAM2 until I get more observations for it and it becomes stable? I didn't see a way to do that.
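The workaround I'm considering, since I haven't found a supported way to delete a variable from ISAM2 once it's added, is to buffer observations outside ISAM2 and only insert a landmark after it has enough views. A minimal sketch of the bookkeeping (pure Python; class and names are my own, and the minimum-observation count is a tuning choice):

```python
from collections import defaultdict

class LandmarkStager:
    """Hold back a landmark's measurements until it has been observed in
    at least `min_obs` frames, then release the whole backlog at once so
    the landmark enters the smoother well constrained."""

    def __init__(self, min_obs=4):
        self.min_obs = min_obs
        self.pending = defaultdict(list)  # landmark id -> buffered (frame_id, uv)
        self.active = set()               # landmark ids already in the smoother

    def add_observation(self, lm_id, frame_id, uv):
        """Return the list of (frame_id, uv) observations that should be
        turned into projection factors now; empty while still buffering."""
        if lm_id in self.active:
            return [(frame_id, uv)]       # landmark is live: factor goes straight in
        self.pending[lm_id].append((frame_id, uv))
        if len(self.pending[lm_id]) >= self.min_obs:
            self.active.add(lm_id)
            return self.pending.pop(lm_id)  # release the whole backlog
        return []
```

Each released `(frame_id, uv)` would become a projection factor in the next `isam.update()` call, with the landmark's initial value triangulated from the released observations.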
Is this a commonly known problem when solving feature-point-based visual SLAM with ISAM2?
I'm using GTSAM through the Python wrapper.