Charles Hamesse
Jul 25, 2023, 10:38:53 AM
to gtsam users
Hi all,
I'm looking to implement various multi-sensor SLAM systems (LiDAR, visual, inertial), and GTSAM seems like an excellent choice for the optimization part. Browsing through existing solutions, I notice that most current LiDAR(-inertial) systems use an error-state iterated Kalman filter (ESIKF) for odometry. The obvious approach for a LiDAR-inertial-visual system would be to take the output of the ESIKF and feed it into GTSAM as a pose observation factor, then add other factors (IMU preintegration and visual factors such as feature reprojection).
But then I have a couple of questions:
1. Is there a particular reason why there doesn't seem to be any GTSAM-based LiDAR odometry system? Most modern LiDAR odometry systems also use some sort of feature-based approach (see e.g. VoxelMap), so the scale of the problem would not be much different from that of visual odometry systems.
2. If you know of such a system, could you mention it here? I believe using point-to-plane LiDAR registration factors (or similar) directly, instead of the pose factors proposed above, would make the overall system more elegant. Also, many systems already use GTSAM in the back-end for pose graph optimization, so it is already a dependency; why not use it in the front-end as well? Is there any reason not to use GTSAM for a LiDAR odometry front-end, i.e. adapting FAST-LIO, VoxelMap, etc. to use iSAM2 instead of the ESIKF? This could make the formulation more elegant and make it easier to add extensions for other sensors, but am I missing something?
Thank you very much.
Kind regards,
Charles