Hi All,
Thought I'd share some recent experiences with webcams. I'm off overseas for a month, and wanted to at least have something portable with ROS I could take with me. Getting the webcam working seemed like a good start.
Finding this out took a bit of research, but uvc_camera (part of the camera_umd stack) is probably the best option for a webcam driver in ROS:
I installed it:
sudo apt-get install ros-groovy-camera-umd
(Note that command doesn't appear anywhere on the wiki page for it. Sigh).
I can then launch the camera node:
roslaunch uvc_camera camera_node.launch
And that'll make available an image topic I can view with RViz. However, it's just a raw image with no calibration data, so any software that needs the camera's intrinsics won't be able to do anything with it.
Calibrating the camera involves finding the lens parameters. I followed the tutorial here:
Printed out the checkerboard pattern and taped it to a book to keep it flat, then ran the calibration node. After moving the board around to collect a diverse set of samples, I hit calibrate.
This created a problem: the calibration tool didn't have permission to save the results to the example.yaml file in the uvc_camera directory, which is owned by root. I had to create the file myself and give it appropriate permissions.
roscd uvc_camera
sudo touch example.yaml
sudo chmod a+rw example.yaml
I was then able to 'commit' the detected camera calibration to a file. The tool exited with an error, but the file had saved in the directory OK, and even after I restarted the camera node the calibration was still loaded.
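To see what those calibration numbers actually buy you: the camera matrix in the saved file lets software map a 3D point in the camera frame to a pixel. Here's a minimal sketch of the pinhole model it describes; the fx, fy, cx, cy values below are made up for illustration, not taken from my calibration.

```python
# Sketch of the pinhole camera model that the calibration YAML describes.
# The intrinsics passed in below are invented -- use the camera_matrix
# values from your own calibration file.

def project(point_3d, fx, fy, cx, cy):
    """Project a 3D point (metres, camera frame) to pixel coordinates."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point 2 m straight ahead of the lens lands at the principal point:
print(project((0.0, 0.0, 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```

This is the mapping that calibration-aware nodes (like the tag tracker below) rely on, which is why the uncalibrated raw image wasn't enough on its own.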
Ok, now that the calibration homework was done it was time to have some fun!
I wanted a way to detect tags in the environment. It'd be awesome if we could put up some QR-codes, or similar things in the space as visual cues to robots. This package seemed to fit the bill:
I installed it:
sudo apt-get install ros-groovy-ar-track-alvar
and printed the set of tags from their website.
I then changed into the ar_track_alvar directory and created a launch file with this inside:
<launch>
<arg name="marker_size" default="4.4" />
<arg name="max_new_marker_error" default="0.08" />
<arg name="max_track_error" default="0.2" />
<arg name="cam_image_topic" default="/image_raw" />
<arg name="cam_info_topic" default="/camera_info" />
<arg name="output_frame" default="/camera" />
<node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkersNoKinect" respawn="false" output="screen" args="$(arg marker_size) $(arg max_new_marker_error) $(arg max_track_error) $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame)" />
</launch>
And ran it.
Straight away I could see it was working. I ran rostopic echo on the /visualization_marker topic: it updated rapidly whenever a tag was in view of the camera, and stopped when it wasn't. Perfect.
I opened up RViz, set the fixed frame to 'camera', and added a Marker display to visualise. Voila, now my markers showed up in glorious 3D!
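Once marker poses are coming through, you can do simple geometry on them. Here's a sketch (pure Python, no ROS) of turning a marker position in the camera frame into a range and bearing; the position values in the example are invented:

```python
import math

def range_bearing(x, y, z):
    """Range (m) and horizontal bearing (deg) of a marker position given
    in an optical camera frame (x right, y down, z forward, as ROS uses)."""
    rng = math.sqrt(x * x + y * y + z * z)
    bearing = math.degrees(math.atan2(x, z))  # 0 deg = straight ahead
    return rng, bearing

# A marker 1 m ahead and 1 m to the right sits at ~45 degrees:
rng, bearing = range_bearing(1.0, 0.0, 1.0)
print(round(rng, 3), round(bearing, 1))
```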
So that was a long one, and involved a fair bit of troubleshooting, but it was worth it. And now I can leave the ar_track_alvar node running on my robot and have software query markers if they need to. For example I could have markers near a charging port, 'forbidden' markers near a dangerous doorway, etc. There's a lot of room to play with in the future.
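The "query markers" idea can start out as simple as a lookup table from tag IDs to meanings. A sketch of that mapping logic in pure Python; the IDs and labels here are invented examples, not anything ar_track_alvar defines:

```python
# Hypothetical mapping from Alvar tag IDs to semantic labels for the robot.
MARKER_MEANINGS = {
    0: "charging_port",
    1: "forbidden_zone",
    2: "home_base",
}

def classify_markers(visible_ids):
    """Return the meanings of whichever known tags are currently in view."""
    return [MARKER_MEANINGS[i] for i in visible_ids if i in MARKER_MEANINGS]

print(classify_markers([1, 7, 0]))  # unknown IDs (like 7) are ignored
```

On the robot, visible_ids would come from whatever the tag-tracking node publishes; the behaviour layer only needs this lookup to decide what a sighting means.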
Hope this was interesting, cheers,
Gav