Hi Guys,
Thank you for sharing the Manopt toolbox! I work in robotics on SLAM (Simultaneous Localization and Mapping), a form of sensor localization problem, and would like to apply Manopt to it. I'm not sure how familiar you are with the details of this problem, so bear with me while I give a little background; feel free to skip it if you already know it :)
In this problem, a robot starts from a known pose and takes sensor readings of features or shapes in the world while collecting odometry (gyroscope, accelerometer) data. From this data it builds a stochastic map of the world and estimates its own pose within that map. Between every two successive robot poses we have a relative orientation measurement from the gyro, and possibly another one obtained by aligning two successive lidar scans or from observations of the same set of features at both poses. Assume the noise on all relative measurements is Gaussian. A key sub-problem in SLAM is estimating the robot's orientation.
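Concretely, the model I have in mind for each relative measurement is roughly (my own notation, not taken from the paper)

\[
\hat{R}_{ij} = Z_{ij}\, R_i R_j^\top, \qquad R_i, R_j \in SO(3),
\]

where R_i and R_j are the absolute orientations of the two poses and Z_{ij} is a small random rotation representing the (assumed Gaussian) noise.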
Generally, the error in the orientation estimate grows as the robot moves away from its starting location. To correct this accumulated error (drift), we often drive the robot back to its starting point; this is called loop closure. By making relative observations to the known starting pose, the orientation estimates can be greatly improved.
I found Nicolas's paper, "Robust estimation of rotations from relative measurements by maximum likelihood", which seems very relevant to my work. One thing I did not see explicitly mentioned in the paper is the constraint introduced when there are cycles in the relative measurement graph. Since a loop closure is essentially a cycle in that graph, I would like to introduce this constraint and apply your method to my problem. (Think of it as a sequence of relative rotations that eventually brings me back to my original orientation.)
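In other words, if a cycle visits poses i_1 -> i_2 -> ... -> i_k -> i_1, the corresponding noise-free relative rotations should compose to the identity (again in my notation):

\[
R_{i_1 i_2}\, R_{i_2 i_3} \cdots R_{i_k i_1} = I, \qquad \text{where } R_{ab} = R_a R_b^\top .
\]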
Could you please share any insights into how one would model this constraint in Manopt?
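To make the question concrete, here is a rough sketch of how I currently imagine setting this up in Manopt, treating the loop closure simply as one more relative measurement (an edge from pose N back to pose 1) in a least-squares-type cost over rotationsfactory(3, N). The edge lists I and J, the placeholder measurements H, and the helper functions synccost/syncegrad are my own, using the convention H_e ~ R_i * R_j'; this is just my guess at a reasonable setup, not something taken from the paper.

% Rough sketch (my own placeholder setup, please correct me if wrong):
% estimate N absolute rotations R_1..R_N in SO(3) from noisy relative
% measurements H(:,:,e) ~ R_i * R_j' on edges e = (I(e), J(e)), where the
% last edge (N, 1) is the loop closure back to the start pose.

n = 3;                 % rotations in SO(3)
N = 50;                % number of robot poses (example value)
I = [1:N-1, N];        % edge sources: odometry chain plus loop closure
J = [2:N,   1];        % edge targets
H = repmat(eye(n), [1, 1, numel(I)]);   % placeholder for measured relative rotations

problem.M = rotationsfactory(n, N);     % manifold of N rotation matrices
problem.cost  = @(R) synccost(R, H, I, J);
problem.egrad = @(R) syncegrad(R, H, I, J);

R0 = problem.M.rand();                  % random initial guess
R  = trustregions(problem, R0);         % Riemannian trust-regions solver

% Least-squares-type cost: minimize -sum_e trace(H_e' * R_i * R_j'),
% equivalent (up to constants) to minimizing sum_e ||H_e - R_i R_j'||_F^2.
function f = synccost(R, H, I, J)
    f = 0;
    for e = 1:numel(I)
        f = f - trace(H(:,:,e)' * R(:,:,I(e)) * R(:,:,J(e))');
    end
end

% Euclidean gradient of the cost above; Manopt converts it to the
% Riemannian gradient on SO(3)^N via the manifold's egrad2rgrad.
function G = syncegrad(R, H, I, J)
    G = zeros(size(R));
    for e = 1:numel(I)
        i = I(e);  j = J(e);
        G(:,:,i) = G(:,:,i) - H(:,:,e)  * R(:,:,j);
        G(:,:,j) = G(:,:,j) - H(:,:,e)' * R(:,:,i);
    end
end

Here the loop closure enters only as the extra edge (N, 1); I am not sure whether the cycle constraint needs any additional explicit handling beyond that, which is really the heart of my question.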
Thanks
Saurav
PS: Apologies for the long question.