Motion Based Calibration of IMU and a Pose sensor


Subodh Mishra

Jan 17, 2021, 3:46:50 AM
to gtsam users
Hi,
I want to solve the motion-based calibration problem (the problem is briefly explained in the attached pdf, as I could not post the images here for some reason) using GTSAM. In my setup I have a Lidar and an IMU. The goal is to determine the spatial separation (extrinsic calibration) between the Lidar and the IMU by solving a nonlinear least-squares problem over the two trajectories.

Between two consecutive IMU poses (at the corresponding lidar scan timestamps), we have a set of IMU measurements which give us a preintegration factor (depending on these two unknown IMU poses). We also have a lidar scan-matching odometry factor obtained from consecutive lidar scans, which depends on the same two consecutive IMU poses and also on the unknown extrinsic calibration between the sensors (details in the pdf).

I have read the GTSAM documentation, and to me it seems that using ExpressionFactors is a good way to go about it. Can anyone please let me know if I am approaching this problem in the right direction?

Thanks!
Subodh.
Lidar_IMU_Calibration_GTSAM.pdf

Dellaert, Frank

Jan 17, 2021, 11:35:30 AM
to Subodh Mishra, gtsam users
I am in the process of starting up two classes this semester, so I have no time to look at this in detail. But ExpressionFactor has come in handy for problems such as this. Main advice: unit-test!

Frank
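A minimal sketch of the kind of unit test this advice points at (standard GTSAM practice, not code from this thread; the measurement function h and the test values are placeholders): check hand-written Jacobians against numerical differentiation.

#include <CppUnitLite/TestHarness.h>
#include <gtsam/base/numericalDerivative.h>
#include <gtsam/geometry/Pose3.h>

using namespace gtsam;

// Hypothetical measurement function under test, with analytic Jacobians.
Pose3 h(const Pose3& T1, const Pose3& T2,
        OptionalJacobian<6, 6> H1 = boost::none,
        OptionalJacobian<6, 6> H2 = boost::none) {
  return T1.between(T2, H1, H2);
}

TEST(Calibration, Jacobians) {
  Pose3 T1(Rot3::RzRyRx(0.1, 0.2, 0.3), Point3(1, 2, 3));
  Pose3 T2(Rot3::RzRyRx(-0.2, 0.1, 0.0), Point3(3, 2, 1));
  Matrix66 H1, H2;
  h(T1, T2, H1, H2);
  // Numerical Jacobians of h (without the optional arguments) as ground truth.
  auto f = [](const Pose3& a, const Pose3& b) { return h(a, b); };
  EXPECT(assert_equal(numericalDerivative21<Pose3, Pose3, Pose3>(f, T1, T2), Matrix(H1), 1e-7));
  EXPECT(assert_equal(numericalDerivative22<Pose3, Pose3, Pose3>(f, T1, T2), Matrix(H2), 1e-7));
}

int main() { TestResult tr; return TestRegistry::runAllTests(tr); }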


Subodh Mishra

Jan 17, 2021, 2:09:05 PM
to gtsam users
Thanks for your advice :)

Abhishek Goudar

Jan 18, 2021, 11:35:07 AM
to gtsam users
Your approach sounds correct. I was able to do sensor extrinsic calibration for a different, range-only sensor. However, I did not use Expression factors but instead calculated the Jacobians manually. In your case it's a range + bearing sensor, so the Jacobian would have additional components, but the general idea should still be applicable.

Subodh Mishra

Jan 18, 2021, 11:42:22 AM
to gtsam users

Thanks for your response Abhishek! I am curious to know which factors you used. In fact, ExpressionFactors also ask us to provide Jacobians which need to be calculated manually.

Abhishek Goudar

Jan 18, 2021, 11:59:28 AM
to Subodh Mishra, gtsam users
I created custom factors (unary factors in my case, since it was landmark-based localization). The inputs to the custom factors were a Pose3 (extracted from a NavState) and an additional Point3 variable reflecting the sensor-to-sensor offset.

I misspoke: with ExpressionFactors one can compose Jacobian blocks automatically. In my case I had to do this manually.
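A rough sketch of what such a factor might look like (my reconstruction under stated assumptions, not Abhishek's actual code: GTSAM 4.x with boost::optional Jacobians, a known landmark, and the Point3 offset expressed in the body frame):

#include <gtsam/geometry/Pose3.h>
#include <gtsam/nonlinear/NonlinearFactor.h>

using namespace gtsam;

// Range measurement to a known landmark, taken by a sensor mounted at an
// unknown Point3 offset in the body frame.
class RangeWithOffsetFactor : public NoiseModelFactor2<Pose3, Point3> {
  using Base = NoiseModelFactor2<Pose3, Point3>;
  double measuredRange_;
  Point3 landmark_;  // known landmark position in the world frame

 public:
  RangeWithOffsetFactor(Key poseKey, Key offsetKey, double range,
                        const Point3& landmark, const SharedNoiseModel& model)
      : Base(model, poseKey, offsetKey), measuredRange_(range), landmark_(landmark) {}

  Vector evaluateError(const Pose3& pose, const Point3& offset,
                       boost::optional<Matrix&> H1 = boost::none,
                       boost::optional<Matrix&> H2 = boost::none) const override {
    // Sensor position in the world frame, with Jacobians from GTSAM.
    Matrix36 D_sensor_pose;
    Matrix33 D_sensor_offset;
    Point3 sensor = pose.transformFrom(offset, D_sensor_pose, D_sensor_offset);
    // Range to the landmark; chain the Jacobians for each requested block.
    Matrix13 D_range_sensor;
    double range = distance3(sensor, landmark_, D_range_sensor);
    if (H1) *H1 = D_range_sensor * D_sensor_pose;
    if (H2) *H2 = D_range_sensor * D_sensor_offset;
    return (Vector1() << range - measuredRange_).finished();
  }
};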

Subodh Mishra

Jan 18, 2021, 12:15:43 PM
to gtsam users
I understood the math part of your answer, except that I am still trying to grasp how to use Expressions (or avoid using them) for my problem. My grasp of the programming aspects of GTSAM is a little tenuous right now. Hopefully I will get it soon :)

Abhishek Goudar

Jan 19, 2021, 1:45:24 PM
to Subodh Mishra, gtsam users
I believe Expressions are there to make your life easier and also to make the code more modular/reusable. There are two examples, SFMExample.cpp and SFMExampleExpressions.cpp, that highlight the differences. If you are able to derive the end-to-end Jacobian, you don't have to use Expressions. Hopefully that helps.

Dellaert, Frank

Jan 19, 2021, 1:52:36 PM
to Abhishek Goudar, Subodh Mishra, gtsam users
Expressions are in fact a small template language that does reverse automatic differentiation (backprop) for you. You only need to implement the Jacobians for the building blocks: any function that has OptionalJacobians can be used as a building block. Note that Expressions.h also implements some AD for operators like “+”.

Frank
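To make that concrete, here is a small sketch of the building-block idea (assuming GTSAM 4.x; the function, keys, and frames are made up for illustration). Any function exposing OptionalJacobian arguments can be lifted into an Expression, and the expression tree then chains the Jacobians during linearization:

#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/slam/expressions.h>

using namespace gtsam;

// Building block: a plain function with (hand-written or delegated) Jacobians.
Point3 sensorPoint(const Pose3& wTb, const Point3& p_b,
                   OptionalJacobian<3, 6> H1, OptionalJacobian<3, 3> H2) {
  return wTb.transformFrom(p_b, H1, H2);  // GTSAM fills H1/H2 when requested
}

void buildExpression() {
  Pose3_ wTb(Symbol('x', 0));   // unknown body pose
  Point3_ p_b(Symbol('p', 0));  // unknown point in the body frame
  // Lift the building block into the expression language; reverse AD
  // through the resulting tree is automatic from here on.
  Point3_ p_w(&sensorPoint, wTb, p_b);
}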

Subodh Mishra

Jan 19, 2021, 2:06:39 PM
to gtsam users
Thanks for your inputs Abhishek and Frank.

Yes, I do have the end-to-end Jacobians; I derived them using the methods described in the paper "On-Manifold Preintegration for Real-Time Visual-Inertial Odometry". However, I am still trying to understand how I can get away with not using Expressions.

Regards,
Subodh.

Dellaert, Frank

Jan 19, 2021, 2:10:13 PM
to Subodh Mishra, gtsam users
If you don’t want to use expressions, you need to derive from NoiseModelFactor or NonlinearFactor and implement the abstract methods.

Frank
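For illustration, a bare-bones sketch of that route for the lidar-odometry model discussed in this thread (a reconstruction under assumptions, not Subodh's code: GTSAM 4.x with boost::optional Jacobians, and C denoting the unknown IMU-to-lidar extrinsic):

#include <gtsam/geometry/Pose3.h>
#include <gtsam/nonlinear/NonlinearFactor.h>

using namespace gtsam;

class LidarOdometryFactor : public NoiseModelFactor3<Pose3, Pose3, Pose3> {
  using Base = NoiseModelFactor3<Pose3, Pose3, Pose3>;
  Pose3 measured_;  // scan-to-scan lidar odometry

 public:
  LidarOdometryFactor(Key x1, Key x2, Key calibKey, const Pose3& measured,
                      const SharedNoiseModel& model)
      : Base(model, x1, x2, calibKey), measured_(measured) {}

  // The abstract method to implement: unwhitened error and its Jacobians.
  Vector evaluateError(const Pose3& T1, const Pose3& T2, const Pose3& C,
                       boost::optional<Matrix&> H1 = boost::none,
                       boost::optional<Matrix&> H2 = boost::none,
                       boost::optional<Matrix&> HC = boost::none) const override {
    // Predicted lidar odometry: (T1 * C)^{-1} * (T2 * C).
    Matrix6 D_L1_T1, D_L1_C, D_L2_T2, D_L2_C, D_pred_L1, D_pred_L2, D_err_pred;
    Pose3 L1 = T1.compose(C, D_L1_T1, D_L1_C);
    Pose3 L2 = T2.compose(C, D_L2_T2, D_L2_C);
    Pose3 predicted = L1.between(L2, D_pred_L1, D_pred_L2);
    // Manifold error, with the chain rule applied per requested block.
    Vector6 error = measured_.localCoordinates(predicted, boost::none, D_err_pred);
    if (H1) *H1 = D_err_pred * D_pred_L1 * D_L1_T1;
    if (H2) *H2 = D_err_pred * D_pred_L2 * D_L2_T2;
    if (HC) *HC = D_err_pred * (D_pred_L1 * D_L1_C + D_pred_L2 * D_L2_C);
    return error;
  }
};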


Brice Rebsamen

Jan 20, 2021, 1:11:39 PM
to gtsam users
Subodh,

I'm curious: how do you compute the lidar odometry? Since we typically do scan matching on a pointcloud collected over time, your pointcloud depends on your pose too. If you have uncalibrated IMU-Lidar extrinsics, then you will also have distorted pointclouds, and therefore suboptimal lidar odometry, and therefore suboptimal IMU bias estimates, leading to a suboptimal extrinsics computation. I'd typically approach this by doing multiple iterations: starting from the prior extrinsics, compute poses from the IMU, then compute the lidar odometry, then set up and solve the problem, then compute new lidar odometry based on the new extrinsics and poses, and start again. Is that your approach?

I understand it might not be possible, but I hope you will be able to share more of your method and results.

This reminds me of this work from my friend Mike Bosse in the Zebedee project: https://confluence.csiro.au/display/ASL/Zebedee. Maybe you can dig into their literature for inspiration, perhaps with this as a starting point: https://www.researchgate.net/publication/224557584_Continuous_3D_scan-matching_with_a_spinning_2D_laser
As well as work from Jesse Levinson at Stanford: http://driving.stanford.edu/papers/ISER2010.pdf

Best,
Brice

Dellaert, Frank

Jan 20, 2021, 2:00:44 PM
to Brice Rebsamen, gtsam users
Zebedee is awesome :-)

Subodh Mishra

Jan 20, 2021, 2:46:15 PM
to gtsam users
Hi Brice,
Thanks for spending time on this. The concerns you have raised regarding motion-distorted lidar scans are valid, and I indeed have to do multiple steps/iterations to get better estimates. Here is a brief overview of the steps I am following now:

Step 1: Collect lidar-inertial data from a sensor suite on which the two sensors are rigidly mounted with some spatial separation between them. The data collection process involves exciting all 6 degrees of freedom of the sensor suite (moving it randomly along each), so that all the degrees of freedom become observable.

Step 2a: Use the motion-distorted lidar scans to calculate scan-to-scan NDT-based lidar odometry (in this step we only use the rotation between two frames). This gives us ^{L_{k-1}}R_{L_k} between consecutive scans.
         2b: Integrate the gyro measurements between the scan instants/timestamps to obtain the IMU rotation between two scans. This gives us ^{I_{k-1}}R_{I_k} between consecutive scans. (I use GTSAM's functions to do the gyro integration.)
         2c: Solve a nonlinear least-squares problem which minimizes the sum of squared rotational hand-eye calibration residuals. A single residual between two consecutive scans is
                r = ^{I_{k-1}}R_{I_k} ^I R_L - ^I R_L ^{L_{k-1}}R_{L_k}
              I use the Ceres solver for this step. (A GTSAM-flavored sketch of this residual appears after these steps.)
         Output: An initial estimate of the rotational component of the extrinsic calibration between the lidar and IMU, i.e. ^I \hat{R}_L. This estimate is bias-uncompensated and time-offset-uncompensated, but it is reasonably good (at most 2-3 deg error across all axes) for initializing the next step.
         Note: I don't think estimating the translational component is possible without also estimating other states (unlike in step 2), such as the IMU pose, velocity, and biases, because they are all mathematically coupled. So, in the next step I use a (Kalman) filtering approach to estimate the translational component, which also gives us those other states. I use the initial ^I \hat{R}_L in step 3.

Step 3: This is a Kalman filtering step which takes as input (one measurement at a time, not as a batch) the distorted lidar scans (from step 1), the IMU measurements (from step 1), and the initial estimate ^I \hat{R}_L (from step 2), and estimates the IMU pose at all scan instants, the IMU biases, the time offset between the lidar and IMU, and, most importantly, ^I R_L & ^I t_L. The idea is that the IMU measurements are propagated and the lidar odometry gives us the measurement equations. The state vector includes the extrinsic calibration, so it is estimated in the process as well. Importantly, before the lidar odometry is calculated, the lidar scans are deskewed/motion-compensated using the best available estimate of the extrinsic calibration (along with the time offset and the IMU measurements in a certain time window). This gives us decent results; I am attaching the translational-component results as an image here. As can be seen, the translational component converges to a fixed value. I suspect ISAM2 could be used instead of a filtering approach, but a KF is something I understand well, so I went ahead with it.

Step 4: This is the step I am currently working on, and I want to use GTSAM for it. First, I want to optimize all the residuals together rather than recursively as in step 3, and second, I want to introduce a planar target constraint which projects points on a known plane in any scan onto the same plane detected in the first scan (when the suite was stationary). This constraint involves all the state variables, and minimizing it gives a batch estimate of the states. The state we are interested in is the extrinsic calibration (obviously the other states also need to be well estimated for the extrinsic calibration to be correct). This is where I am now: I am looking into how to use GTSAM for this batch optimization. As far as I understand, I will have to use ExpressionFactors; I have derived the Jacobian blocks, but I am still working out the details of the implementation.
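As referenced in step 2c above, here is a compact sketch of that hand-eye rotation residual in GTSAM terms (my rewrite under assumptions: the residual is expressed as a log-map error on SO(3), which vanishes exactly when the hand-eye constraint holds, rather than the matrix difference minimized with Ceres):

#include <gtsam/geometry/Rot3.h>

// One residual per scan pair: zero when dR_I * R_IL == R_IL * dR_L, i.e. when
// the IMU and lidar relative rotations agree through the extrinsic R_IL.
gtsam::Vector3 handEyeResidual(const gtsam::Rot3& R_IL,  // ^I R_L (unknown)
                               const gtsam::Rot3& dR_I,  // ^{I_{k-1}}R_{I_k} from gyro integration
                               const gtsam::Rot3& dR_L)  // ^{L_{k-1}}R_{L_k} from NDT odometry
{
  return gtsam::Rot3::Logmap((R_IL * dR_L).between(dR_I * R_IL));
}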

Thanks for the links you have shared; I am going to look into them now. I appreciate your comments and inputs.

Regards,
Subodh.
KFResult.png

Subodh Mishra

Jan 20, 2021, 2:57:48 PM
to gtsam users
I had recorded a video to show the effect of deskewing/compensating for motion distortion using the best estimate of the extrinsic calibration as the Kalman filter estimates it. Here it is: https://youtu.be/YmTyoQA4NaY . The first 1:20 shows the raw lidar scans, where one can see that a planar board kept in front appears deformed in the scans when the system undergoes fast motion. The next 1:20 shows the scans corrected and deskewed using the inertial measurements and the current best estimate of the extrinsic calibration, so that the planar target no longer appears deformed, i.e. the jello effect visible in the first half is not visible in the second.

Subodh Mishra

Jan 20, 2021, 10:40:04 PM
to gtsam users
I have a question regarding the usage of ExpressionFactors. As far as I understand from the explanation provided here https://bitbucket.org/gtborg/gtsam/wiki/GTSAM_Expressions and from a small line-fitting example I tried out here https://github.com/SubMishMar/gtsam_tutorial/blob/main/tutorials/linefitting/src/linefittingExpressions.cpp , one has to provide a measurement function "h" and also the Jacobians H_x of "h" with respect to the state variables "h" is a function of.

My measurement function "h" is a function of elements in SE(3) (and hence in SO(3) as well), and I know how to calculate the Jacobians using the retraction explained in the paper "On-Manifold Preintegration for Real-Time Visual-Inertial Odometry". The question is: do I really need to calculate the Jacobians, given that GTSAM provides the Jacobians for each operation on the Lie group?

My measurement function would roughly look like this:

gtsam::Pose3 predictedMeasurement(const gtsam::Pose3& T1, const gtsam::Pose3& T2, const gtsam::Pose3& T3,
                                  gtsam::OptionalJacobian<6, 6> H1,
                                  gtsam::OptionalJacobian<6, 6> H2,
                                  gtsam::OptionalJacobian<6, 6> H3) {

  gtsam::Pose3 h = T1.inverse().compose((T2.between(T3)).compose(T1));

  //// All the code related to Jacobians

  return h;
}


Here T1, T2, T3 are all state variables of type gtsam::Pose3. Do I have to provide/derive the Jacobian blocks H1, H2, H3 w.r.t. T1, T2, T3 respectively, or is there some way to extract them using GTSAM? Or do I not need to bother because GTSAM will take care of them on its own, so that I can get away with just providing the measurement function "h"?

I feel I can use this as an inspiration: https://groups.google.com/u/1/g/gtsam-users/c/Z6Uhsw62B8s/m/TXWj0neVAwAJ . Any hints?

Thanks.
Subodh.

Dellaert, Frank

Jan 21, 2021, 8:49:18 AM
to Subodh Mishra, gtsam users
Look in expressions.h (there are two of them) and testExpressions.cpp for inspiration. “between” is already there in expression form, and “compose” can be done using the overloaded operator defined in Expression.h.

Frank


Subodh Mishra

Jan 21, 2021, 11:20:16 PM
to gtsam users
Thanks for your inputs Frank. I have a working solution now. Obviously I need to make it better.

Here it is:
"
/// Add Lidar Odometry Between Factor
gtsam::Pose3 lidarOdometryMeasurement = prevLidarPose_.between(curLidarPose);
gtsam::Expression<gtsam::Pose3> lidarOdometryPrediction =
    gtsam::between(gtsam::Pose3_(gtsam::compose(gtsam::Pose3_(X(key - 1)), gtsam::Pose3_(C(0)))),
                   gtsam::Pose3_(gtsam::compose(gtsam::Pose3_(X(key)), gtsam::Pose3_(C(0)))));
graphFactorsExpressions.addExpressionFactor(lidarOdometryPrediction, lidarOdometryMeasurement, correctionNoise);
"

Here C(0) is the key for the extrinsic calibration parameter, of type Pose3.

Regards,
Subodh.

Dellaert, Frank

Jan 21, 2021, 11:41:37 PM
to Subodh Mishra, gtsam users
Nice, looks correct, but unit tests should decide 🙂
Hint: you could make use of sub-expressions, overloading, and auto to make it easier to read!
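For reference, a sketch of how that hint might look applied to the snippet above (assuming the same X/C key shorthands and the Pose3_ typedef from gtsam/slam/expressions.h; operator* is the overloaded compose from Expression.h):

using gtsam::Pose3_;

Pose3_ calib(C(0));  // shared extrinsic calibration sub-expression
auto prevLidar = Pose3_(X(key - 1)) * calib;  // sub-expression via overloaded compose
auto curLidar = Pose3_(X(key)) * calib;
auto lidarOdometryPrediction = gtsam::between(prevLidar, curLidar);
graphFactorsExpressions.addExpressionFactor(lidarOdometryPrediction, lidarOdometryMeasurement, correctionNoise);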



Subodh Mishra

Jan 22, 2021, 12:06:38 AM
to gtsam users
Sure! Still trying to get the parlance :)

Brice Rebsamen

Jan 22, 2021, 12:47:19 PM
to Subodh Mishra, gtsam users
Hey Subodh,

Thanks for the video. Two things strike me in it:
- This appears to be a very dense lidar you are working with; what lidar is this?
- What's the strange artefact at the center? It seems to me like the seam due to composing a full spin. If that's the case, I suggest rotating your lidar 180 degrees, either physically or in software, so that the seam is at the back and thus not visible in your videos (that will make for nicer videos in the future).

Regarding your method, I'm surprised by your idea of using a "planar target constraint which projects points in a known plane in any scan to the same plane detected in the first scan". I suppose the idea is that it lets you do a continuous optimization, since you should be able to get a Jacobian from it, which is neat. I wonder how that will work though; in particular, I wonder whether it will let you fully constrain the problem. Starting from a close-enough initial calibration it might be enough anyway.

The alternative would be an iterative approach (as in Jesse's CLAMS paper), where you would consider small scans and align them to the map of the whole room you obtained while the sensor was stationary, giving you lidar unary constraints (not odometric), then solve for pose and calibration, then do the scan matching again, and so on. Because you would then use more data, with planes all around you, it would be a lot more constrained.

Also: if you start stationary, then move the rig around a bit and bring it back to the exact same position, move it again and bring it back, etc., that gives you natural loop-closure points that you can add to your graph, which will let you solve for trajectory and biases even before you start looking at the lidar data. Then you can use that trajectory to construct nice point clouds to work with.

Again, I'm very curious about how your approach goes, so please keep posting about your results as you go along.

Here's a nice paper related to scan matching: https://vladlen.info/publications/fast-global-registration (and source code is available for it).

Best,
Brice

Dellaert, Frank

Jan 22, 2021, 12:56:01 PM
to Brice Rebsamen, Subodh Mishra, gtsam users
Cool video indeed.
BTW, also check out TEASER++ for certifiable scan matching from Luca's group: a fast and certifiably-robust point cloud registration library written in C++, with Python and MATLAB bindings.

Subodh Mishra

Jan 22, 2021, 2:02:09 PM
to gtsam users
Thank you Frank! Yes, I have checked TEASER++ and their videos. I just needed something easy to work with, so I picked NDT. Surely one may need better scan-matching techniques in an advanced implementation.

Thank you Brice for checking the video and for your inputs. We are using an Ouster-128. It is dense (and also noisy, I must add). You are absolutely correct: the lidar's rear is facing the planar target, which is why you see those artifacts, and I most certainly should rotate the cloud by 180 degrees to get better videos. The primary reason for using a known planar target is that it eases the problem of planar data association between frames. It is also easy, as you remarked, to calculate the Jacobians required in the optimization routine. In my opinion, if one has sufficiently excited all degrees of freedom, then the data will constrain not only the motion-based (hand-eye) residual but also the planar target residual.

I don't understand what you mean by "continuous optimization". There are papers which do continuous-time batch optimization using splines (the work that went into Kalibr by Paul Furgale) or Gaussian processes (by Tim Barfoot), but I am not aiming to do anything like that. I just want to detect the (motion-compensated) planar points in all the scans, project them to the first scan, do a batch optimization, and probably keep iterating the process until I get a satisfactory result. Obviously it is difficult to validate calibration results, especially when they involve proprioceptive sensors like IMUs or wheel encoders. The hope is that the batch optimization will converge well when it uses the initialization from the Kalman filtering step. Another advantage of having a dominant planar surface right in front of the sensor suite (or behind it, as in the video I shared, but that's not important) is that we can see the effect of motion compensation very clearly. This, I feel, is a weak argument in support of having a planar target; nevertheless, it helps to discern the effect of motion compensation easily.

I get the point you have made with reference to CLAMS; yes, I agree that using the entire scan and matching it against the stationary map is a better alternative. I should look into that approach; thanks for giving me this idea. My only question to you is: don't you think scan-to-map matching (which gives unary factors) will have more noise than scan-to-scan (frame-to-frame odometry) matching? I think the answer is yes if we just do one iteration, but if we do several iterations it will get better.

Thank you once again for your inputs and your interest. I will keep you posted.

Regards,
Subodh.

Subodh Mishra

Mar 4, 2021, 2:53:31 AM
to gtsam users
At this point, I am more inclined to take the route @Brice suggested, which is to use all the planes in the environment. Although I am more or less certain about what to do as far as using all the planes is concerned, I am now a little lost on how to model the time delay between the lidar and IMU within the GTSAM framework. Time delay estimation in a batch optimization framework sounds a little tricky to me, because the value of this delay determines which IMU measurements I must choose between two scans for IMU preintegration.

Is there any work or example where GTSAM is used for online/offline time delay estimation between inertial and other sensors?

I could find several visual-inertial estimation papers which model time delay in a Kalman filtering approach or a batch optimization approach (e.g., VINS-Fusion, which uses Ceres), but I could not find any work where GTSAM is used for temporal calibration. I will be grateful if anybody here can refer me to an example where GTSAM is used for temporal calibration (in addition to estimation of the other state variables).

gtsamVSLAM

Jun 23, 2022, 10:05:35 AM
to gtsam users
@Subodh, did you finish the time-offset calibration using GTSAM? If so, can you share some ideas, references, or code?