Both tf and all ROS SLAM solutions are based on Euclidean coordinate systems. ECEF is one such system for the globe, but numerical precision can become an issue (esp. for noobs) and it becomes a pain to impose simple constraints like "the robot's scanning lidar is ground parallel." Lat/long and UTM are more common formats we're likely to get out of sensors or geo-referenced databases, but not being Euclidean makes them hard for tf and SLAM. Finally, there's this: http://xkcd.com/977/
Why is this relevant? Let's say I have a plane with a nice GPS/INS taking pictures or lidar scans from 10,000 feet, or even a satellite. It's currently very tough to use this kind of data with ROS, but the tf and SLAM solutions in ROS would still apply.
Thoughts:
* often, a local translation offset can be used in conjunction with ECEF coords... transformation operations on sets of points could do this under-the-hood and be transparent to the user
* is the GPS time offset (15 or so seconds from UTC?) accounted for "naturally"? ... since tf lookups are all time-based.
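The first thought above can be sketched in a few lines of plain Python (all names here are made up for illustration): keep global positions in double-precision ECEF, but hand consumers coordinates relative to a nearby origin so their magnitudes stay small.

```python
# Sketch of the "local translation offset" idea: store global positions
# in double-precision ECEF, but expose points relative to a nearby
# origin so their magnitudes stay small. ECEF_ORIGIN is a made-up site.
ECEF_ORIGIN = (-2430601.8, -4702442.7, 3546587.4)  # metres, hypothetical

def to_local(p_ecef, origin=ECEF_ORIGIN):
    """Translate a global ECEF point into small local coordinates."""
    return tuple(p - o for p, o in zip(p_ecef, origin))

def to_global(p_local, origin=ECEF_ORIGIN):
    """Invert to_local: recover the global ECEF point."""
    return tuple(p + o for p, o in zip(p_local, origin))
```

A library doing this under the hood would only need to re-add the origin when a user asks for global coordinates, keeping the offset invisible in normal use.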
On Fri, May 24, 2013 at 5:27 PM, Nick Armstrong-Crews <nickarmst...@gmail.com> wrote:
> Both tf and all ROS SLAM solutions are based on Euclidean coordinate systems. ECEF is one such system for the globe, but numerical precision can become an issue (esp. for noobs) and it becomes a pain to impose simple constraints like "the robot's scanning lidar is ground parallel." Lat/long and UTM are more common formats we're likely to get out of sensors or geo-referenced databases, but not being Euclidean makes them hard for tf and SLAM. Finally, there's this: http://xkcd.com/977/

UTM *is* Euclidean. It is a Transverse Mercator projection, used within a few degrees East or West of a central meridian of longitude. Almost all of my autonomous vehicle work used UTM. As long as you stay within a grid cell several hundred kilometers across, it works fine. There are no problems transforming vehicle-relative laser scans, etc.

> Why is this relevant? Let's say I have a plane with a nice GPS/INS taking pictures or lidar scans from 10,000 feet, or even a satellite. It's currently very tough to use this kind of data with ROS, but the tf and SLAM solutions in ROS would still apply.
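That vehicle-relative transform is just a 2D rigid-body motion as long as both points stay in one zone. A toy sketch in plain Python (the function name is invented for illustration):

```python
import math

def vehicle_to_utm(x_v, y_v, utm_e, utm_n, yaw):
    """Transform a point from the vehicle frame into UTM coordinates.

    (x_v, y_v): point in the vehicle frame, metres.
    (utm_e, utm_n): vehicle position, UTM easting/northing, metres.
    yaw: vehicle heading, radians, counter-clockwise from grid East.
    """
    c, s = math.cos(yaw), math.sin(yaw)
    return (utm_e + c * x_v - s * y_v,
            utm_n + s * x_v + c * y_v)
```

For example, a laser return 10 m ahead of a vehicle facing grid East simply lands 10 m east of the vehicle's UTM position.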
> Thoughts:
> * often, a local translation offset can be used in conjunction with ECEF coords... transformation operations on sets of points could do this under-the-hood and be transparent to user
Beware that rviz (and OGRE) do all their graphical 3D computations in 32-bit float, not double. I had problems with that displaying coordinates in meters just within a single UTM latitude band and MGRS grid cell. For that, you need to define a local region small enough for float calculations.

IIRC, tf does use 64-bit double, so it's not a problem there.
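The float32 problem is easy to demonstrate with nothing but the standard library: round-tripping a UTM-scale northing through single precision (as rviz/OGRE effectively do with every vertex) loses on the order of a decimetre.

```python
import struct

def as_float32(x):
    """Round-trip a Python float through IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

northing = 3825205.37  # metres; a typical UTM northing magnitude
error = as_float32(northing) - northing
# At this magnitude, float32 resolution is 0.25 m, so centimetre-level
# geometry is simply unrepresentable; small local coords are exact.
```

Subtracting a local origin first keeps coordinate magnitudes small enough that float32 resolution is no longer an issue.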
> * is the GPS time offset (15 or so seconds from UTC?) accounted for "naturally"? ... since tf lookups are all time-based.

I believe the tf buffers only track information for the previous five seconds, so this could become an issue.

--
joq
--
You received this message because you are subscribed to the Google Groups "ros-sig-geographic-info" group.
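On the GPS time question above: the offset from UTC isn't handled "naturally" anywhere; it has to be subtracted explicitly before stamping data, or (given a roughly five-second buffer) every tf lookup would miss entirely. A minimal sketch, assuming the then-current leap-second count (constant and function names are made up):

```python
# GPS time runs ahead of UTC by a whole number of leap seconds --
# 16 s as of July 2012, and the value grows whenever a new leap
# second is added, so this must not be hard-coded in real systems.
GPS_UTC_LEAP_SECONDS = 16

def gps_to_utc_seconds(t_gps):
    """Convert a GPS-time stamp (in seconds) to UTC seconds."""
    return t_gps - GPS_UTC_LEAP_SECONDS
```

A real driver would look the current offset up (many GPS receivers report it in their navigation message) rather than bake it in.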
Wait - UTM is not Euclidean! It's a piecewise non-linear projection that maps the curved surface of the Earth onto a cylinder. Can't do that without distortion!
But as you say, within a Zone, it's darn close (I think <0.1% error in translation/distance, <0.5% error in orientation). I understand that more serious problems can happen when you cross zone boundaries...
So how would tf calculate a transform between the feet of a person straddling a UTM zone boundary? One foot at 12S 724475mE 3825205mN and the other at 13S 230304mE 3825205mN. Should we define a global tf tree with a coordinate frame at each UTM boundary corner? We'd also need logic to look up the nearest parent frame for the robot's moving frame (and users would have to be careful about zone changes, since if they're looking only at coords and not the zone, there will be a huge discontinuity at the zone change).
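Just how huge that discontinuity is can be checked in plain Python with the two feet above: naively differencing coordinates from different zones, as if they shared a frame, gives an answer off by half a million metres.

```python
import math

# The two feet from the example above, each in its *own* zone's coords.
foot_a = (724475.0, 3825205.0)  # easting/northing, zone 12S
foot_b = (230304.0, 3825205.0)  # easting/northing, zone 13S

# Treating the two zones as one frame gives a wildly wrong distance:
naive = math.hypot(foot_a[0] - foot_b[0], foot_a[1] - foot_b[1])
# naive is ~494 km, even though both feet belong to one person
# straddling the boundary. One coordinate must be reprojected into
# the other's zone (or both into a shared frame such as ECEF) first.
```

This is exactly why any tf tree spanning zones needs a shared parent frame rather than raw per-zone coordinates.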
Again, these concerns are probably not relevant for common ROS use cases (indoor turtlebot), but wouldn't it be nice to be able to track the path of a plane from LAX to NYC? Maybe try to do visual odometry / stitching on the way? And view in rviz?
I think ECEF is a reasonable back-end for tf to do calculations, but it may still be worthwhile to consider some sort of automated "local offset" system of coordinate frames, so that rviz and other 32-bit precision computations don't get in trouble.
It should be noted that ECEF is not great for spaceships - the Earth is not *really* in a fixed position for them (the sun is a better reference).