Is this pose graph formulation correct for fusing LiDAR, Visual, and IMU data?


Chun Jye Beh

Aug 7, 2025, 3:39:11 AM
to gtsam users

Hi, I'd like to confirm whether the pose graph structure in the attached diagram is correct for a multi-sensor fusion system using GTSAM. Here's a summary of what I've modeled:

  • Green circles (GT_0, GT_1, GT_2) represent the estimated robot poses (LiDAR frame).

  • Magenta triangles represent visual odometry poses, connected with BetweenFactors.

  • Blue ellipses are IMUFactors between adjacent LiDAR poses.

  • Gray squares are BetweenFactors representing LiDAR odometry between consecutive poses.

  • The yellow diamond is a PriorFactor on the initial pose GT_0.

All measurements (LiDAR odometry, visual odometry, and IMU preintegration) contribute to refining the trajectory of the GT_i poses.
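
In code, this is roughly what I am doing (a simplified sketch with the GTSAM Python API; the noise values are placeholders, and num_poses, lidar_delta and visual_delta_in_lidar stand in for my own data):

import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, V, B   # poses, velocities, IMU bias

graph = gtsam.NonlinearFactorGraph()

# Yellow diamond: prior on the initial pose GT_0
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01, 0.01, 0.01, 0.05, 0.05, 0.05]))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
# (the IMU factors also need priors on the initial velocity V(0) and bias B(0), omitted here)

# IMU preintegration setup (covariances are placeholders)
imu_params = gtsam.PreintegrationParams.MakeSharedU(9.81)
imu_params.setAccelerometerCovariance(np.eye(3) * 1e-3)
imu_params.setGyroscopeCovariance(np.eye(3) * 1e-4)
imu_params.setIntegrationCovariance(np.eye(3) * 1e-7)
bias0 = gtsam.imuBias.ConstantBias()

lidar_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.02, 0.05, 0.05, 0.05]))
visual_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.10, 0.10, 0.10]))

for i in range(num_poses - 1):
    # Gray squares: LiDAR odometry between consecutive poses
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), lidar_delta[i], lidar_noise))

    # Magenta triangles: visual odometry between the same poses (see question 1 below)
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), visual_delta_in_lidar[i], visual_noise))

    # Blue ellipses: IMU preintegration between adjacent poses
    pim = gtsam.PreintegratedImuMeasurements(imu_params, bias0)
    # ... pim.integrateMeasurement(acc, gyro, dt) for every IMU sample in this interval ...
    graph.add(gtsam.ImuFactor(X(i), V(i), X(i + 1), V(i + 1), B(0), pim))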

Does this structure correctly reflect best practices for fusing LiDAR, visual, and IMU data in a factor graph?

Additional Questions:

  1. Should I transform the visual odometry frame into the LiDAR frame before adding it to the factor graph?
    Currently, the visual odometry poses are connected to the same trajectory nodes (GT_i) as the LiDAR poses via BetweenFactors. Should I first transform the visual odometry into the LiDAR frame using the known extrinsic calibration (e.g., T_lidar_visual) before creating those factors? (See the sketch after these questions for the transform I have in mind.)

  2. Should I periodically add a LiDAR pose as a PriorFactor to prevent drift accumulation?
    Right now, I only have a single PriorFactor on GT_0. In practice, would it help with long-term consistency to anchor some later LiDAR poses with weaker PriorFactors, or is that unnecessary when using tightly integrated IMU factors?

Any suggestions for improvement or error correction would be greatly appreciated!

[Attached image: pose graph diagram]

Chun Jye Beh

Aug 7, 2025, 12:17:27 PM
to gtsam users
I have attached the diagram here. May I know if my graph is constructed correctly?
Screenshot 2025-08-08 at 12.17.56 AM.png

Dellaert, Frank

Aug 7, 2025, 12:44:24 PM
to Chun Jye Beh, gtsam users
One immediate comment is that I’m not sure you need 2 pose chains. Presumably the camera and the LIDAR platform are rigidly connected. So, if they are synchronized, you would only need one pose variable per time stamp. 

If they are not synchronized, you might have to do something more sophisticated with interpolation based on time stamps, although two pose chains might be an acceptable way to achieve the same goal.
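
If you go the interpolation route, a rough (untested) sketch of getting a pose at a camera time stamp from the two surrounding LiDAR-rate poses might look like:

import gtsam

def interpolate_pose(pose_a, pose_b, t_a, t_b, t):
    # Interpolate on SE(3) via the exponential map, for t_a <= t <= t_b.
    alpha = (t - t_a) / (t_b - t_a)
    delta = pose_a.between(pose_b)
    return pose_a.compose(gtsam.Pose3.Expmap(alpha * gtsam.Pose3.Logmap(delta)))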

Best!
Frank
