How do I use tagslam


Foollor Lol

Feb 17, 2023, 7:54:33 PM
to tagslam
Hi,

I just started learning ROS and wanted to use this to estimate my robot's pose more accurately.

I have managed to create a sample world in Gazebo with AprilTags and did the intrinsic calibration, and I am currently stuck on camera_poses.yaml. May I know how to do an extrinsic calibration run using TagSLAM to determine the poses of all the AprilTags and the noise covariance matrix R?

Lastly, after populating the tags in tagslam.yaml and providing all the required inputs, do I get a pose for the robot that I can feed into the AMCL pose estimate?

Thanks.

Bernd Pfrommer

Feb 18, 2023, 9:15:02 AM
to tagslam
Do you have just a single camera or multiple ones? If it's just a single one you can simply set the pose to the identity and specify a large diagonal R-matrix (inverse of the covariance!) in camera_poses.yaml. See some of the single-camera examples in the tagslam_test repository, under "tests/".
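For the single-camera case, camera_poses.yaml then looks roughly like the sketch below (field names and values are patterned after the single-camera tests in tagslam_test, so double-check against those files rather than copying this verbatim):

cam0:
  pose:
    position:
      x: 0.0
      y: 0.0
      z: 0.0
    rotation:
      x: 0.0
      y: 0.0
      z: 0.0
    # large diagonal R (inverse of the covariance) pins the camera pose to the identity
    R: [1.0e+06, 0, 0, 0, 0, 0,
        0, 1.0e+06, 0, 0, 0, 0,
        0, 0, 1.0e+06, 0, 0, 0,
        0, 0, 0, 1.0e+06, 0, 0,
        0, 0, 0, 0, 1.0e+06, 0,
        0, 0, 0, 0, 0, 1.0e+06]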
If you have multiple cameras, then in camera_poses.yaml you specify just the pose of one camera, e.g. camera 0. You put up some tags (you must specify the world pose of at least one of them) and make a recording. TagSLAM can then figure out the poses of all the tags and it will also give you the extrinsics of the cameras that you did not specify in camera_poses.yaml. You can find these files in the ~/.ros/ subdirectory after you have completed the run. Note that you must also perform the service call ("dump") for tagslam to actually write those files. See documentation.
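The dump is just a ROS service call on the tagslam node; assuming it runs under its default name, something along the lines of:

rosservice call /tagslam/dump

should make it write the output files into ~/.ros/.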
Once you have the extrinsic calibration, TagSLAM will give you the poses of the tags and the camera poses (and thereby the robot pose). Only as long as it sees tags, of course, and the accuracy is limited if the tags are few and small. How the AMCL pose figures into that I don't know; I'm not familiar with that package.
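If it helps: the body pose comes out as a regular nav_msgs/Odometry message on a topic named after the body you define in tagslam.yaml, so you can sanity-check it with something like (the body name "rig" here is just an example, use your own):

rostopic echo /tagslam/odom/body_rig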

Foollor Lol

Feb 25, 2023, 11:13:08 PM
to tagslam
Hi. Thanks for the reply. I'm not sure what happened to my previous message; it seems to be missing.

I am currently using one camera in Gazebo to mimic a Raspberry Pi camera, and it only looks vertically up at the ceiling. I tried to use tagslam.launch in online mode, but it seems unable to detect my camera, which is quite puzzling to me. May I know how the camera topic is picked up in the code, or what I should change so it can read from Gazebo's camera? Thanks.

Also, I would like to know if TagSLAM works when the camera is pointing vertically upwards.

Attachments: robot1.PNG, robot2.PNG, robot3.PNG, tagslam.yaml, camera_poses.yaml, cameras.yaml

Bernd Pfrommer

Feb 26, 2023, 4:26:47 AM
to Foollor Lol, tagslam
TagSLAM does not directly process the image topic; that is done by the tag detector. So first check that the tag detector is running, is subscribed to the right topics, and properly sees the tags, meaning it emits debug images with the tags drawn in.
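A quick way to check is with the standard ROS introspection tools (node and topic names below are only examples, substitute whatever your launch files use):

# see which image topic the detector node actually subscribes to
rosnode info /apriltag_detector
# list the image topics that gazebo is publishing
rostopic list | grep image
# view the debug image with the detected tags drawn in
rqt_image_view

If the detector's subscribed topic and the Gazebo camera topic don't match, remap the topic in the detector's launch file.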


Foollor Lol

Feb 28, 2023, 6:52:46 AM
to tagslam
I see. Thanks. I have tried using the continuous detector from the AprilTag ROS package to confirm that it does work, but it reports the wrong coordinates. I will try to fix that issue and use TagSLAM again.