Estimation of rotations from relative measurements with loop closure constraints


saurav agarwal

Jun 8, 2016, 5:41:41 PM
to Manopt
Hi Guys,

Thank you for sharing the Manopt toolbox! I work in robotics on the problem of SLAM (Simultaneous Localization and Mapping), a form of sensor localization problem, and would like to apply Manopt to it. I'm not sure whether you are familiar with the details of this problem, so bear with me while I give a little background; feel free to skip it if you already know it :)

In this problem, a robot starts from a known pose and takes sensor readings of features or shapes in the world while collecting odometry (gyroscope, accelerometer) data. Using this data, it builds a map of the world and estimates its pose within that stochastic map. Between every two robot poses we have a relative orientation measurement from the gyro, and possibly another one obtained by aligning two successive lidar scans, or a constraint based on observations of the same set of features from two successive poses. Assume that the noise in all relative measurements is Gaussian. One key problem in SLAM is the estimation of the robot's orientation.

Generally, the error in the orientation estimate grows as the robot moves away from its start location. To correct for these errors, we often drive the robot back to its starting point; this is called loop closure. By making relative observations to the known starting pose, the orientation estimates can be greatly improved.

I found Nicolas's paper "Robust estimation of rotations from relative measurements by maximum likelihood", which is very applicable to my work. One thing I did not see explicitly mentioned in the paper is the constraint introduced when there are cycles in the relative measurement graph. Since a loop closure is basically a cycle in the graph, I would like to introduce this constraint and apply your method to my problem. (Think of it as a sequence of rotations that eventually brings me back to my original orientation.)

Could you please share any insights into how one would model this constraint in Manopt?


Thanks
Saurav

PS: apologies for the long question

Nicolas Boumal

Jun 10, 2016, 9:51:08 AM
to Manopt
Hello Saurav,

Thank you for your detailed question.

In the paper* you are referring to, there is no need to explicitly enforce loop constraints, because we consider there to be N distinct poses (actually, just rotations) to estimate, and we use all available relative measurements to do so. Once we have an estimate, it is necessarily consistent with itself. (This is as opposed to some other techniques where one does not estimate the individual rotations at the nodes, but rather tries to estimate the relative rotations (to denoise them), and hence also has to enforce consistency on top of that, using cycle information.)

I suppose that in SLAM, you would have one rotation to estimate for each point where the robot stopped, and you would implement loop closure by declaring that the first and the last node are actually the same?
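
Just to make that concrete, here is a rough sketch of how one might set this up with Manopt. The function name, the edge list (I, J, H) and the plain least-squares cost below are only illustrative assumptions (not the actual code or the maximum-likelihood cost from the paper); loop closures simply contribute extra edges, and no explicit cycle constraint appears anywhere:

    function Rhat = synchronize_rotations_sketch(I, J, H, N)
    % Estimate N rotations Rhat(:,:,1..N) in SO(3) from relative measurements
    % H(:,:,k), each modelling Rhat(:,:,I(k))' * Rhat(:,:,J(k)) up to noise.
    % Loop closures are just additional entries in I, J, H.
    % Requires Manopt on the path.

        problem.M = rotationsfactory(3, N);   % product manifold SO(3)^N
        problem.cost  = @mycost;
        problem.egrad = @myegrad;             % Euclidean gradient; Manopt
                                              % projects it to the tangent space.
        Rhat = trustregions(problem);         % Riemannian trust-regions solver

        function f = mycost(R)
            f = 0;
            for k = 1 : numel(I)
                E = R(:, :, I(k))' * R(:, :, J(k)) - H(:, :, k);
                f = f + norm(E, 'fro')^2;
            end
        end

        function G = myegrad(R)
            G = zeros(size(R));
            for k = 1 : numel(I)
                i = I(k);  j = J(k);
                E = R(:, :, i)' * R(:, :, j) - H(:, :, k);      % residual
                G(:, :, i) = G(:, :, i) + 2 * R(:, :, j) * E';  % d/dR_i
                G(:, :, j) = G(:, :, j) + 2 * R(:, :, i) * E;   % d/dR_j
            end
        end
    end

Since no Hessian is supplied, Manopt will warn that it approximates it numerically; that is fine for a sketch like this, and trustregions could be swapped for another solver.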

On a related topic: there is also this paper with my collaborators about Cramér-Rao bounds for synchronization of rotations. It's interesting to use such bounds to compare (ideally) achievable accuracies as a function of the measurement graph topology (path, cycle, star...). You essentially just need to work out the Laplacian of your measurement graph (degree matrix - adjacency matrix), compute its pseudo-inverse, and take the trace of that. A large number is bad. How that number grows with N and with the topology is informative.
"Cramér-Rao bounds for synchronization of rotations"


Best,
Nicolas

* available here and here:
(Otherwise, see my personal webpage.)

saurav agarwal

Jun 11, 2016, 10:50:16 AM
to Manopt
Hi Nicolas,

Thank you for the detailed answer. You are right: in general, we have a rotation for certain key poses (key frames in the computer vision literature), and we impose constraints exactly as you said, by revisiting the same location. I went through your website and got your code for the paper. What you said about the estimate being consistent with itself makes complete sense now!

I'm playing around with the code you shared to see how graph structures like the ones we have in robotics affect the accuracy. For now, I'm working out what our solution looks like with additional cost terms coming from measurements of the environment. I will definitely check out the CRLB paper. Thanks again for sharing your research; it was very helpful!

Cheers
Saurav

pierr...@gmail.com

Jun 20, 2016, 6:07:11 AM
to Manopt
Just for info, here is a related paper that has just appeared:
A Survey on Rotation Optimization in Structure From Motion
Roberto Tron, Xiaowei Zhou, Kostas Daniilidis; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2016, pp. 77-85
http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w23/html/Tron_A_Survey_on_CVPR_2016_paper.html

They do not seem to be aware of your work, Nicolas.

Best,
PA
