Hi all, I'm seeing some weird behavior from GTSAM when trying to optimize a large graph (~250k factors).
The reason I think it's related to the size of the graph is that all the individual factors and values seem sane: I don't see any NaNs, Infs, or all-zero Jacobians when I linearize each factor individually, or even when I linearize the entire graph with the values.
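For reference, this is roughly the per-factor sanity check I ran. It's a minimal sketch using the Python wrapper; the variable and function names are mine, and I'm using augmentedJacobian() only because it gives a single dense [A | b] block that's easy to test for NaN/Inf:

```python
import numpy as np
import gtsam

def check_linearization(graph: gtsam.NonlinearFactorGraph, values: gtsam.Values):
    """Linearize every factor on its own and flag NaN/Inf or all-zero Jacobians."""
    for i in range(graph.size()):
        gf = graph.at(i).linearize(values)   # GaussianFactor for factor i
        A = gf.augmentedJacobian()           # dense [A | b] block
        if not np.all(np.isfinite(A)):
            print(f"factor {i}: NaN/Inf in Jacobian or error")
        elif np.allclose(A[:, :-1], 0.0):
            print(f"factor {i}: all-zero Jacobian")
```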
If I take a subset of the graph and try to optimize it, everything goes well, for instance the first half or the second half; I managed to optimize up to ~80% of the graph without a problem. Only when I try to optimize it all together does it die with a segfault.
This reproduces with the LM, GN, and Dogleg optimizers.
I also tried optimizing both halves separately and then using the results as initial values for an optimization of the entire graph, and I get the same segfault, so I really doubt this is caused by one of the factors producing a bad error or Jacobian.
Unfortunately I can't share an example of the graph because it uses factors I wrote in my local project, but basically the graph is a fusion of GPS and speed/yaw measurements for a vehicle: it contains a GPSFactor for each GPS measurement, a "between"-style factor that constrains consecutive poses to propagate according to the speed/yaw measurements, and another prior factor that keeps the speed/yaw close to the measurements.
Basically, I have 71k Pose3 values and 280k factors in the graph.
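Since I can't post the actual factors, here is a rough sketch of the graph structure using stock GTSAM factors instead (GPSFactor, plus BetweenFactorPose3 standing in for my custom speed/yaw propagation factor; the separate speed/yaw prior is left out). The noise values and initialization here are placeholders, not what I actually use:

```python
import numpy as np
import gtsam
from gtsam import symbol

def build_graph(gps_points, rel_poses):
    """gps_points: list of (x, y, z) GPS fixes; rel_poses: list of gtsam.Pose3,
    the relative motion predicted from the speed/yaw measurements."""
    graph = gtsam.NonlinearFactorGraph()
    initial = gtsam.Values()
    gps_noise = gtsam.noiseModel.Isotropic.Sigma(3, 1.0)
    odo_noise = gtsam.noiseModel.Diagonal.Sigmas(
        np.array([0.05, 0.05, 0.05, 0.5, 0.5, 0.5]))

    for i, xyz in enumerate(gps_points):
        key = symbol('x', i)
        graph.add(gtsam.GPSFactor(key, gtsam.Point3(*xyz), gps_noise))
        if i > 0:
            # pose-to-pose constraint predicted from speed/yaw
            graph.add(gtsam.BetweenFactorPose3(symbol('x', i - 1), key,
                                               rel_poses[i - 1], odo_noise))
        initial.insert(key, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(*xyz)))
    return graph, initial
```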
Has anyone encountered a problem like this before, or have any clue as to what the source might be?
As a workaround I can just optimize the graph in two chunks and use the combined result, but it still bothers me that it fails; I've optimized graphs with millions of factors in GTSAM in the past without any problems.
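For completeness, the workaround is essentially just splitting the factor list in half and merging the two results. This is a simplified sketch that assumes Pose3-only values and takes any shared boundary key from whichever half is optimized first:

```python
import gtsam

def optimize_in_chunks(graph, initial):
    """Optimize the first and second half of the factor list separately and merge."""
    n = graph.size()
    merged = gtsam.Values()
    for idx in (range(0, n // 2), range(n // 2, n)):
        sub = gtsam.NonlinearFactorGraph()
        for i in idx:
            sub.add(graph.at(i))
        # restrict the initial values to the keys this half actually uses
        sub_init = gtsam.Values()
        for key in sub.keyVector():
            if not sub_init.exists(key):
                sub_init.insert(key, initial.atPose3(key))
        result = gtsam.LevenbergMarquardtOptimizer(sub, sub_init).optimize()
        for key in result.keys():
            if not merged.exists(key):
                merged.insert(key, result.atPose3(key))
    return merged
```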