TagSLAM for mobile robot in simulation


Srividya Prasad

Oct 13, 2024, 9:42:56 AM
to tagslam
Hello Bernd, 
We are using TagSLAM for a mobile robot in simulation. We are trying to use it both with and without wheel odometry.
We did not get good odometry results. Could you please tell us what we are doing wrong?

For camera_poses.yaml:
rosrun tf tf_echo /imu_frame /kinect_camera_optical 
For cameras.yaml:
echoing the camera_info topic of the robot simulation
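For example (the exact topic name depends on the simulation; the one below is hypothetical):

rostopic echo -n 1 /kinect_camera/camera_info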

Note:
Results with wheel odometry turned out worse.
We had to add all tags to tagslam.yaml as the tags were not being detected in the right locations in online mode when only 1 tag was in the config file.
Ideally we want fewer tags in the config file (please suggest the minimum number needed to get good results).
We saw in this discussion that the config you updated for the mobile robot had great results. https://groups.google.com/g/tagslam/c/a9iDzE8UP08/m/G5N2_4xZBgAJ
How can we achieve such results?

In the same setup, these are the kind of results we get compared to the wheel odometry: 
(Uploaded rosbag is a different trajectory)
tagslam1.jpeg
tagslam.yaml
camera_poses.yaml
cameras.yaml

Srividya Prasad

Oct 13, 2024, 9:45:23 AM
to tagslam
These are the commands that we are running:

rosparam set use_sim_time true
roslaunch tagslam tagslam.launch run_online:=true
roslaunch tagslam apriltag_detector_node.launch
rviz -d `rospack find tagslam`/example/tagslam_example.rviz
rosbag play rosbag.bag

Bernd Pfrommer

Oct 14, 2024, 7:29:28 PM
to tagslam
Downloading your rosbag right now. It's quite big, so this will take a while. I will have a look at it during the next few days.
As far as the tag poses go: first disable odometry and run on vision only. You will want to first do a run to localize where the tags are. So when you move the camera (robot), try to get overlap between tags, such that multiple tags are in an image. That allows tagslam to discover the poses of the tags relative to the root tag (the one whose pose you specify in tagslam.yaml).
At the end of the mapping run, do a ROS service call to /dump; tagslam will then write the discovered tag poses to ~/.ros/poses.yaml.
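For reference, the dump call would be something like this (assuming the node runs under its default name "tagslam"; adjust the namespace if yours differs):

rosservice call /tagslam/dump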
Then copy those poses into your tagslam.yaml. You should first get that working perfectly well. Verify with rviz that all your discovered tags lie reasonably well in the same z plane (I see you specified z=0.2 for all tags). When this works you know your camera calibration is good. Only then should you switch on odometry.
When you turn on odometry, bear in mind that the use of odometry implies a robot "body" frame, i.e. the odom gives the transform T_world_body. The body frame that your odometry is assuming must be identical to the one that tagslam is using. The transform between the TagSLAM body frame and the camera is given in camera_poses.yaml, so you can tell what your robot body frame is by looking at camera_poses.yaml. If that robot body frame does *not* coincide with the body frame that your odom is using, then you must specify the transform between those two frames in tagslam.yaml as T_body_odom. It is crucial to get that transform right, or else the odom will do harm rather than good.
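For illustration, the odometry-related entries under the moving body in tagslam.yaml might look like the sketch below. The key name T_body_odom is from the text above; the surrounding keys and the nested position/rotation layout are assumptions based on the TagSLAM example files, and all values are placeholders:

odom_topic: "/odom"          # your wheel odometry topic
odom_frame_id: "odom"
T_body_odom:                 # transform between the odometry body frame and the TagSLAM body frame
  position:
    x: 0.0
    y: 0.0
    z: 0.0
  rotation:                  # log(R) format: unit axis multiplied by the angle
    x: 0.0
    y: 0.0
    z: 0.0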

Srividya Prasad

Oct 15, 2024, 5:49:23 AM
to tagslam
Screenshot from 2024-10-15 15-04-00.png
This is the orientation of our tags; all tags have x facing into the warehouse.
For tag 1, looking at the world file below, we set the pose of tag 1 to the pose specified in the world file. Is that correct?
And the poses of all other tags detected by tagslam will be relative to this pose, correct? Would it be a problem that the frame conventions for a tag in the gazebo world and in tagslam are different?
Please advise us on what pose should be set for tag 1.

The camera calibration is from the camera_info topic, so no problem there.
The camera_poses.yaml pose, however, was previously set with respect to the IMU; we have now changed it to be with respect to base_footprint, since the wheel odometry child frame is base_footprint.
We have disabled odom.
Are we ready to get good results?

When we previously ran it around the warehouse to collect tag poses (without wheel odom and with only tag 1 in the config file), the tag locations were estimated incorrectly, at random z values. We will try again, keeping in mind to have two tags in a single frame.

cameras.yaml
camera_poses.yaml
tagslam.yaml
small_warehouse_apriltags_old.world

Srividya Prasad

Oct 15, 2024, 10:37:47 AM
to tagslam
Screenshot from 2024-10-15 20-06-16.png
We are observing spikes in the output odometry even with wheel odometry enabled. Is this normal when no tags are in frame? If not, how can we correct it?

Bernd Pfrommer

Oct 16, 2024, 8:41:03 AM
to tagslam
There are serious issues with the tag detections.

You are using the tag family 16h5. The Apriltag library produces a copious number of false positives when using this tag family: a lot of things kinda look like a 16h5 tag, at least to the tag detector. The Umich detector is particularly prone to false positives, but even the MIT detector has plenty of them. Switch to tag family 36h11, and use the Umich detector.

If you run the attached sync_and_detect.launch file, and pass in an argument for the bag:

roslaunch ./sync_and_detect.launch bag:=`pwd`/aroundthewarehouse.b5.bag

it will produce an output bag. Look at the annotated images in the output bag (rqt_bag is useful here) and observe all the false positives.

About odometry:

tagslam wants the odometry messages to be synchronized with the camera images, i.e. the odom message must have an identical header time stamp (header.stamp), or else the image and the odom message are dropped. In the sync_and_detect launch file you can see that I set "use_approximate_sync" to true. This will cause sync_and_detect to coerce the odom message time stamps to match the camera time stamps. Not ideal, but if your odom has a high enough frequency, the error committed is not too grave.
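For reference, that setting is an ordinary ROS parameter inside the launch file, along the lines of the snippet below (the full node definition is in the attached sync_and_detect.launch):

<param name="use_approximate_sync" value="true"/>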

But again: no point integrating odometry yet as long as your tag detection is broken. Let's fix tag detection first by switching to a different tag family, then worry about odom.
sync_and_detect.launch

Srividya Prasad

Oct 16, 2024, 8:47:37 AM
to Bernd Pfrommer, tagslam

Okay, we will try this and let you know. Thank you very much!!



Srividya Prasad

Oct 16, 2024, 3:57:15 PM
to tagslam
Screenshot from 2024-10-17 01-22-39.png

We have switched to tag family 36h11 and used the Umich detector as suggested.
These are the tag detections for tags attached to four walls at 90° to one another. We also removed obstacles while collecting the poses, to avoid false positive detections; we did not see any false positives during the run.

The poses of the tags on the first two walls we traversed (bottom and left in the picture) were good, but the tags on the next two walls were not estimated accurately.
What do you suggest?
Should we go for coordinate measurements or plane measurements? Or is there anything else we can correct?

Srividya Prasad

Oct 16, 2024, 4:01:55 PM
to tagslam

We also want to know if it's because we need to move close to and in front of the tags at all times, which is hard with a differential-drive robot.


Bernd Pfrommer

Oct 17, 2024, 3:02:57 AM
to tagslam
Without the bag with the images (which is probably quite large) I cannot really say why the tag poses are poor. Here are a couple of pointers:
- double and triple check that your tag size is specified correctly
- look at the reprojection error map (do a "dump" service call at the end of your mapping run, and look at the files in ~/.ros/ afterwards). Are there any big errors on the tag projections? If the error is much greater than about 1.0, something has probably gone wrong (e.g. a wrong tag size). Go back to the image that corresponds to the time stamp in the error map file, and see if tag corners were badly misdetected (happens!).
- if there are no systematic errors (see above), then the poor tag poses are due to poor images: tags are only seen from afar with a low-res camera (640x480 isn't exactly a lot of pixels). Also, remember that for localization you need to see multiple tags in the same image. That connects the tag poses together. Triangulation is the keyword here. Poor triangulation -> poor tag pose. So for every image, ask yourself: how well could I localize the newly visible tag with respect to the tags of known pose? In the bag you sent me, you were running the robot along the walls. That is far from ideal, because many tags will only be seen at an acute angle, which introduces large pose errors.
- to get good tag poses, tags should occupy as many pixels as possible in the image and you should see at least two tags (one new tag, one whose pose has already been established). Keep that in mind when you plan your robot mapping run.
- try starting out with an empty room, place the robot at the center, and rotate it 360 deg. That should give you good bearings for the tags, but probably poor tag distances, because the tags will only occupy a few pixels in the image. After the 360 deg rotation, try going closer to the walls, but not too close, to avoid acute viewing angles.
- in this situation odometry can help, but I would not introduce odometry yet until you have good tag poses to start with.
- you can introduce plane constraints for the tags, which force them to lie in a certain plane, and you could also specify e.g. the z-coordinate of a tag corner, see https://berndpfrommer.github.io/tagslam_web/measurements/
  Such external constraints may give you prettier results but may not be helpful for your project if you are expected to discover tags without prior knowledge of their location.

What is the goal of the project, anyway?

Srividya Prasad

Oct 18, 2024, 4:52:21 AM
to tagslam
We were able to get good results of tag locations! Thanks for the help!!!
We removed the obstacles in the warehouse to collect the locations and made sure to have 2 tags in frame.
Please do confirm how the pose of tag 1 should be specified w.r.t. grasp_lab as our warehouse world.

One issue: the tags and odometry are in the XZ plane instead of the XY plane. Even the camera is travelling in the XZ plane in rviz. Why could that be?



Bernd Pfrommer

Oct 18, 2024, 5:04:14 AM
to tagslam
That the tags are in the XZ plane probably has to do with the pose you specify for the root tag (I assume you specify just one tag, and the rest are discovered).
Here is a pointer to the convention for the apriltag coordinate frames:
Note that the "rotation" part of the pose is expected to be in the so-called log(Rotation) format, which is like the better-known axis-angle representation, but with the unit vector multiplied by the angle.

I suggest you start by giving the identity transform initially; then look in rviz: the tag should be flat on the ground, in the x/y plane. Now rotate along an axis by specifying a rotation like, for example, [pi/2, 0, 0], check again in rviz how the tag looks, and after a while you'll get the knack of it.
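As a concrete sketch, the root-tag entry in tagslam.yaml might look like the following (the layout follows the TagSLAM example files; the id, size, and noise values here are placeholders):

tags:
  - id: 1
    size: 0.4
    pose:
      position:
        x: 0.0
        y: 0.0
        z: 0.0
      rotation:          # identity: tag lies flat on the ground, in the x/y plane
        x: 0.0
        y: 0.0
        z: 0.0
      position_noise:
        x: 0.0001
        y: 0.0001
        z: 0.0001
      rotation_noise:
        x: 0.000001
        y: 0.000001
        z: 0.000001

Changing rotation to x: 1.5708 (pi/2 about the x axis, in log(R) form) would then stand the tag upright; check in rviz after each change.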


Bernd Pfrommer

Oct 18, 2024, 5:27:52 AM
to tagslam
When I look at your image, the tag looks good, i.e. it is placed against the wall as it should. The coordinate frame that is drawn (red = x, green = y, blue = z) does *not* correspond to the convention that TagSLAM uses for tag poses (see link in my earlier reply).

Srividya Prasad

Oct 18, 2024, 5:39:53 AM
to Bernd Pfrommer, tagslam

Thank you for your response! 

Should our tag gazebo models necessarily match the tag convention? And is this for performance's sake, or to correct the XZ plane?


Bernd Pfrommer

Oct 18, 2024, 8:24:03 AM
to tagslam
I don't quite understand your question, so cannot answer directly.
1) Set the tags in gazebo such that they look right in the image (it looks like you already did this). 
2) Bear in mind that the coordinate frame orientation that you use in gazebo to make the tags look right may not be the same as the one you have to enter in tagslam, because tagslam has the convention that X and Y are in the plane of the tag, and Z points out of the tag; see my earlier post, the tagslam documentation linked above, and the worked example after this list.
3) This means in tagslam.yaml you will probably have to enter a different orientation for the root tag than what you have in your gazebo world.
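As a worked example (the numbers are hypothetical): suppose a tag hangs on a wall so that its face, and therefore its z axis, points along the world +x axis. The rotation that maps tag coordinates into world coordinates is then a 90 deg rotation about the world y axis, which in log(R) form is the rotation vector [0, pi/2, 0], i.e. rotation: {x: 0.0, y: 1.5708, z: 0.0}. The gazebo model of the same tag may well use a different orientation entry, because the gazebo mesh need not have z pointing out of the tag face.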

Srividya Prasad

Oct 23, 2024, 3:33:37 AM
to tagslam
WhatsApp Image 2024-10-23 at 11.20.51 AM.jpegWhatsApp Image 2024-10-23 at 11.20.52 AM.jpeg

WhatsApp Image 2024-10-23 at 11.21.25 AM.jpeg
Thanks a lot for the help!! We now have all the tags in the right XY plane and in the right positions!
We now want to improve the odometry using wheel odometry, but we are not seeing much of a difference with wheel odometry enabled.

We noticed we have two separate tf trees. We want to know if this should be corrected, and how.
Also, the output odometry is being published between map and base_link, but there is no tf between these frames in the tf tree.


This project is for our mobile robot odometry research.
WhatsApp Image 2024-10-23 at 12.16.44 AM.jpeg
With wheel odometry it still struggles at places where there are relatively few tags, i.e. at the turn.

Bernd Pfrommer

Oct 23, 2024, 3:55:42 AM
to tagslam
One of the transform trees is published by gazebo, the other by tagslam, so they are independent of each other. You can join them at the root by running a static transform publisher that publishes a transform between the "map" frame (tagslam) and the "odom" frame (gazebo). That way you can see the ground truth poses right next to the poses discovered by tagslam.
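For example, with an identity transform (substitute the actual offset between your map and odom origins; the trailing argument is the publishing period in milliseconds):

rosrun tf static_transform_publisher 0 0 0 0 0 0 map odom 100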

Regarding the odometry: please prepare a bag where the robot does a turn near the wall. Ideally there would be a moment where the robot sees *no* tags, such that it has to rely entirely on the odom. Make the bag as short as possible while retaining the essentials. Send me the bag and all the yaml files (tagslam.yaml, cameras.yaml, camera_poses.yaml) and I will have a look at it. You should get a marked improvement from odometry (given that the odom is of reasonable quality, and that the transforms are correct).

Srividya Prasad

Oct 23, 2024, 8:43:14 AM
to tagslam

Srividya Prasad

Oct 23, 2024, 8:46:57 AM
to tagslam
The link to the rosbag was sent in the previous message.
The files are below.
This bag has one turn; its APE RMSE is 0.23 m and its APE max is 0.46 m.
Another run with two turns (three quarters of a loop around the world) has an APE RMSE of 1 m and a max of 1.63 m.
The odometry source is wheel odometry, whose APE RMSE is 0.003 m.
camera_poses.yaml
apriltag_detector_node.launch
tagslam.yaml
tagslam.launch
cameras.yaml

Bernd Pfrommer

Oct 24, 2024, 5:22:22 AM
to tagslam
Problems were:
1) Your tag poses were poor. If you visualize them with rviz, they are not on a straight line. You should first use the ground-truth tag poses (which you have) and ground-truth odometry, and make sure tagslam works as it should and you have low reprojection errors. Once all of that works, start discovering the tags.
2) The pose of your camera relative to the rig is wrong. The camera isn't even upright with respect to the odometry (which, judging from its quality, I assume is ground truth).

I "fixed" the camera pose by rotating it at least to the point that it is upright (see attached file), and I let tagslam discover all tags except for tag 8, which is the root tag. I get the trajectory below. It looks good of course because I'm fully leaning onto the ground truth odometry by specifying odom rotation and translation noise of 0.001, see attached tagslam.yaml.

The strategy going forward is now:
1) Leave the odometry noise setting very low (you can go below 0.001) when you do your mapping run to discover the tag poses; see the sketch after this list. You should specify at least two tags as root tags, not just one like I did (this is the reason why the trajectory is not well aligned with the square grid). This should give you decent tag poses. Copy them into tagslam.yaml and *check that they look good in rviz*. They should be in a straight line, square, etc., if that is how they are in your ground-truth world. A little error is permissible, but not as much as in your file. Compare against the ground-truth poses of your tags, and don't settle until they match well or you understand where the error comes from.
2) Now increase the noise on the odometry to the point that the tags actually influence the trajectory in a meaningful way.
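For step 1, the odometry settings under the moving body might look like the sketch below (key names as in the TagSLAM example files; topic and frame names are placeholders for your robot):

odom_topic: "/odom"
odom_frame_id: "odom"
odom_translation_noise: 0.001   # very low: lean heavily on the (ground truth) odom while mapping
odom_rotation_noise: 0.001      # raise both in step 2 so the tags can meaningfully pull the trajectory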

Your answer regarding my question about what this project is for ("odometry research") is vague to the point that it's insulting. It would have been more polite to answer that you don't want to say. In the present form things make no sense. Why would you want to discover tags in a simulator where you have ground truth? I suppose this should go on a real robot at some point? I can be of more help if I know what you are aiming at. As it stands this seems a waste of my time. 




run_2.png
tagslam.yaml
camera_poses.yaml