Differences in the solution between iSAM2 and batch optimization


Davide Cucci

Apr 11, 2025, 4:03:54 AM
to gtsam users
Dear all,

First of all greetings to everybody and kudos for the excellent package you've put together over all these years.

I am solving a navigation problem that fuses GPS with several odometry sources. I have non-trivial noise models with hidden states to recover, and I make wide use of robust estimators. I have an implementation using Ceres which we use in a product, and it works like a charm. I am now looking into GTSAM for the incremental solving capabilities of iSAM2.

I have implemented the same problem in GTSAM (there are some custom factors, but nothing crazy; they are variations of BetweenFactor) and solve the problem in batch using the LevenbergMarquardtOptimizer with default parameters. My implementation matches the results I obtain with Ceres exactly, up to insignificant variations. This is very cool already.

Then I moved to iSAM2. Instead of solving in batch, for each odometry/GPS epoch I create the new factors, call update, and then calculateEstimate.

My issue is that the solution is largely different from the batch one, in the sense that it is much noisier and the error with respect to the ground truth is much larger. If I look at the errorAfter field returned by update, I see the value growing epoch by epoch in a way that seems wrong. It seems to me that iSAM2 does not "optimize until the end", and that there are assignments of the variables that would yield lower residuals.

I have played with all sorts of parameters, e.g. relinearizeThreshold, relinearizeSkip, forceFullSolve, force_relinearize, etc., without success. I have to admit that I have only a high-level understanding of the iSAM2 algorithm, whereas LM is clear to me.

Any clue as to what could be wrong in my setup?

Additionally, my batch implementation with LevenbergMarquardtOptimizer is approximately 1-2 orders of magnitude slower than the Ceres implementation. Maybe this is out of scope here, but I've read the docs, I build in Release, etc. Are you aware of anything else I could try to improve the speed?

Thanks a lot for the help,

best

Davide Cucci

Rafael Spring

Apr 11, 2025, 4:13:34 AM
to Davide Cucci, gtsam users
I think I recall reading that update() in iSAM2 only runs one iteration and may need to be called several times per timestep to converge. Others please correct me if I am wrong here.

W.r.t. LM being slower than in Ceres, this might be a consequence of Ceres using external specialized libraries (such as SuiteSparse) for its sparse matrix solvers, whereas GTSAM does everything "in house", IIRC.

Rafael Spring
CTO, Co-Founder
DotProduct LLC / DotProduct GmbH



Dellaert, Frank

Apr 11, 2025, 9:22:37 AM
to Rafael Spring, Davide Cucci, gtsam users

Indeed, update only does one iteration, amortizing the cost over successive calls to update.

About batch being slower: this depends on many things. Try the Metis ordering, make sure the convergence criteria are the same, and use TBB for parallel solving (especially if using Metis).

Best

FD


Davide Cucci

Apr 14, 2025, 6:31:15 AM
to gtsam users
Thanks a lot to everybody for the help!

I have tried calling update and calculateEstimate multiple times, but the second time around I don't have any new factors to add. If I call update with an empty set of new factors, the results don't change with respect to the first call. I guess I would have to modify the iSAM2 algorithm internally somehow. Any suggestions?

I am a bit surprised that this case has not shown up before. In navigation one typically cares only about the most recent epoch. Does this mean that my problem is somehow unusual?

Navid Mahmoudian Bidgoli

Jul 8, 2025, 6:14:46 AM
to gtsam users

Hello,

I'm encountering a similar issue in my navigation problem: the solution quality differs between gtsam::ISAM2 and batch optimization of a gtsam::NonlinearFactorGraph.

Davide, have you found any solutions or further insights into why the iSAM2 results appear noisier and have larger errors compared to the batch solution, even after adjusting parameters like relinearizeThreshold?

Any advice or experiences from the group on how to achieve closer agreement between iSAM2's incremental solution and the batch-optimized result would be greatly appreciated.

Thank you!
