Robomagellan cone color(s) and cmvision

KM6VV

Feb 19, 2013, 17:51:56
To: hbrob...@googlegroups.com, Seattle Robotics
Sorry for the cross post.

I'm building a 'bot for Robogames (San Mateo) Robomagellan. I've
FINALLY made a little progress with cmvision on ROS Electric.

My current procedure:
http://pharos.ece.utexas.edu/wiki/index.php/How_to_Detect_Blobs_using_ROS_Package_cmvision

colorgui.launch works, but only occasionally. Is there some special start-up
sequence?

#1 Any ideas how to get it running reliably?


I got blobfinder.launch to run, and it found the little orange/red
Sparkfun box I use as a target in my office.

From a previous good colorgui run, I got the colors and hue of the box.
But that's only with a certain amount of light on the box. I understand
that hue (YUV) ranges are what you want, to reduce the dependence on
illumination.

#2 What's the procedure for getting a good set of colors and hues?
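
(For reference, the colors file cmvision reads looks something like the
following, per the tutorial above; the display color, merge parameter,
expected blob count, and Y:U:V threshold ranges here are only placeholders,
not values I'd trust:

[Colors]
(255, 100, 0) 0.000000 10 Orange

[Thresholds]
( 25:164, 80:120, 150:240 )

It's those Y:U:V ranges that I don't know how to pick well.)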

Thanks!
Alan KM6VV

Tim Craig

Feb 19, 2013, 20:20:41
To: hbrob...@googlegroups.com
Alan,

In the work I've done with blob tracking (I use the HSV color space and
ignore the V), I've found that for best results you need to determine your
color parameters on site, as close to the time of use as possible. While HSV
is less dependent on illumination than RGB, there is still some dependence.
Also, depending on how much of a stickler whoever sets up the course is, the
cones themselves could be from different batches or even different
manufacturers, so you need to get a range.

Tests I've wanted to run but haven't yet are whether it's better to use
one color mask that's a bit wider than what you expect or apply several
tighter masks and see if you get solid hits in any of them. Remember,
too, that you don't need to get all the pixels in the cone, just enough
for a reasonable assurance that the blob IS the cone. By the time you
get close enough to be concerned with hitting it, the cone should be
filling a large part of the image.
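
If you want to try the several-tighter-masks idea, the test itself is only a
few lines with OpenCV's Python bindings. An untested sketch; the HSV ranges
below are placeholders, not numbers to trust:

import cv2
import numpy as np

# Placeholder ranges; the real numbers have to come from sampling on site.
CANDIDATE_RANGES = [
    ((3, 120, 80), (12, 255, 255)),    # tight "cone orange"
    ((0, 100, 60), (18, 255, 255)),    # wider fallback
]

def mask_hits(bgr_image):
    """Count in-range pixels for each candidate HSV range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return [cv2.countNonZero(cv2.inRange(hsv, np.array(lo), np.array(hi)))
            for lo, hi in CANDIDATE_RANGES]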

Tim

KM6VV

Feb 19, 2013, 20:46:49
To: hbrob...@googlegroups.com
Hi Tim,

HSV and not YUV? Can I get that from cmvision (the package I finally
got working)?

What package are you using?

I can work out "steering" from a found blob, but getting the mask right
looks like it could be a tough job.

Thanks for the thoughts!

Alan KM6VV

Tim Craig

Feb 20, 2013, 05:15:23
To: hbrob...@googlegroups.com
Alan,

Crap, Thunderbird just ate my long winded reply. Going to copy this one.

I guess HSV (or HSL, both by the same person) because YUV was designed
more for video streams meant for human consumption, while HSV is meant to
reduce the dependence on illumination for machine vision. Also, I started
using it first and it worked well, so I never got around to pushing it much
further.

I'm using OpenCV for the underlying library, though mostly that's for
image capture and general image handling. I wrote my own blob
segmentation package because the ones I found were poorly documented, and it
was easier than reverse engineering them to be sure of what I was getting.
I also wrote a small application that lets me view an image stream in
real time, grab one of the images to work with, then select areas of
interest. Once an area is selected, I display histograms of the color
space channels. Then I can decide on the color mask to use, enter it,
and restart the image stream to see how well it works by displaying the
original images and the masked blobs in separate windows. I'll attach a
couple of pictures showing the results using my test cone in its native
habitat, Chipotle :).
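
The histogram part is nothing fancy; the idea boils down to something like
this (an untested Python/OpenCV sketch, not my actual code; the ROI comes
from whatever rectangle you select):

import cv2

def roi_hue_sat_histograms(bgr_image, x, y, w, h):
    """Hue and saturation histograms over a selected region of interest;
    V is available too but I ignore it when picking the mask."""
    roi = cv2.cvtColor(bgr_image[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([roi], [0], None, [180], [0, 180]).ravel()
    sat_hist = cv2.calcHist([roi], [1], None, [256], [0, 256]).ravel()
    return hue_hist, sat_hist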

A couple of years ago at RoboGames I checked the program in actual
conditions, walking around with my laptop and webcam. It was pretty good at
finding the cones. Of course, orange against green shouldn't be too hard.
It's the extraneous stuff, like guys in orange jumpsuits walking around,
that causes the problems. I did find two false positives in my short test:
two metal pipes, about 4 inches in diameter, stuck in the ground by the
building and painted red. Parts of them were picked up. Under good
circumstances, your GPS and knowing your robot pose would help you
eliminate them. And once you're close to a real cone, it should stand
out and be easy to home in on.

Tim
Orange Cone-02.jpg
Orange Cone-08.jpg

KM6VV

Feb 20, 2013, 13:19:31
To: hbrob...@googlegroups.com
Hi Tim,

I'll look into HSV and OpenCV. OpenCV would have been my first choice,
but cmvision was what I found first, and it had tutorials.

I like your results. Are you running your new routines under ROS?

I was looking forward to interfacing my BlackFin camera to ROS, and
doing the tracking on its embedded ARM processor. That will have to
wait. I need a simple solution. For now I have to write a node to
generate tracking commands for the pan/tilt camera mount, and also to
generate a steering error message.
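
Something along these lines is what I have in mind for that node (an
untested sketch; the topic names, the Float64 error messages, and the
cmvision Blobs field names are my guesses, so don't hold me to them):

#!/usr/bin/env python
import rospy
from std_msgs.msg import Float64
from cmvision.msg import Blobs

class ConeTracker(object):
    def __init__(self):
        # Topic names are placeholders.
        self.pan_error_pub = rospy.Publisher('pan_error', Float64)
        self.steer_error_pub = rospy.Publisher('steering_error', Float64)
        rospy.Subscriber('blobs', Blobs, self.blobs_cb)

    def blobs_cb(self, msg):
        if msg.blob_count == 0:
            return
        blob = max(msg.blobs, key=lambda b: b.area)   # track the largest blob
        # Normalized horizontal offset of the centroid, -1 (left) to +1 (right).
        err_x = (blob.x - msg.image_width / 2.0) / (msg.image_width / 2.0)
        self.pan_error_pub.publish(Float64(err_x))
        self.steer_error_pub.publish(Float64(err_x))  # same error for steering, for now

if __name__ == '__main__':
    rospy.init_node('cone_tracker')
    ConeTracker()
    rospy.spin()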

So how does one implement subsumption with ROS? Is there already a node
for that? ;>)
I have steering commands that can come from waypoints or a camera, and
steering commands that can come from obstacle avoidance or tracking. I
have different behaviors that I want to implement.

I'm totally on board with ROS. I'm just not getting the published
packages to build and run as well as I'd like. It's taking much more
effort than I had anticipated. I need a better way to learn, I guess.

There's a new differential drive package for ROS, although I'd have to
rewrite it anyway for my motor driver. rosserial or Patrick's
ros_arduino_bridge looks like a good place to start.

Thanks for your comments. I'd love to hear more about your new functions!

Thanks for the pix. Is that the same cone used on the field? Where can
I find one?

Thanks!
Alan KM6VV

Tim Craig

Feb 20, 2013, 16:51:49
To: hbrob...@googlegroups.com
Alan,

No, I haven't gotten around to actually doing anything with ROS yet. I
would think the cone tracking would simply supply cone / no cone, and if
it has a cone, an azimuth angle for steering. You'd have to propagate
that back through the pan and tilt angles to the robot frame to steer
relative to the current pose.
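
Ignoring tilt and assuming a flat course, that propagation is roughly just
this (a Python sketch; the horizontal field of view and the pan angle have
to come from your camera and mount):

import math

def cone_azimuth(pixel_x, image_width, hfov_rad, pan_rad):
    """Approximate cone bearing in the robot frame: the camera pan angle
    plus the angular offset of the blob centroid from the image center."""
    offset = (pixel_x - image_width / 2.0) / image_width * hfov_rad
    return pan_rad + offset

# Example: blob centered at pixel 400 of a 640-wide image, 60 degree HFOV,
# camera panned -10 degrees.
bearing = cone_azimuth(400, 640, math.radians(60.0), math.radians(-10.0))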

I would think subsumption would be controlled by how far up the hierarchy
messages are passed for action, with the commands going back down to the
actuators. "Lower" levels can be as disjointed from the "higher"
functions as you want. Sufficiency of data to form a decision would
determine the form of the tree.

The cone is a "trinket" someone gave me when I talked about needing a
cone to search for. It's about 2 inches tall. In the one picture,
that's a soda cup to the right of it and it's sitting on a table at
Chipotle.

Tim

KM6VV

Feb 20, 2013, 17:32:39
To: hbrob...@googlegroups.com
Tim,

That's right, it was Dave with ROS on the mini-ITX. I can't remember
anything anymore!

I found cones at Home Depot. 18" tall should give me something to play
with.

I get the levels of control for the subsumption; I was just wondering if
there was some standard way to implement it. There are messages to
subscribe to ("twist" for one), so for subsumption one would want to
disable the message from one node and enable it from another. Then the
subscriber(s) get the appropriate message, depending on which behavior
was in control. OK, it's a little more complicated than that, but I'd have
thought ROS would have that built in? Maybe one has to remap the twist
messages to allow them to be selected and re-transmitted.
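
Something like this hand-rolled priority arbiter is what I'm picturing; I
gather topic_tools also ships a mux node that switches between input topics
via a service call, which might do the same job. (Untested sketch; all the
topic names are made up.)

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

# Highest priority first; names are placeholders.
BEHAVIOR_TOPICS = ['avoid_cmd_vel', 'track_cmd_vel', 'waypoint_cmd_vel']
TIMEOUT = 0.5  # seconds a command stays "fresh"

class TwistArbiter(object):
    def __init__(self):
        self.pub = rospy.Publisher('cmd_vel', Twist)
        self.last = {}  # topic -> (receive time, Twist)
        for topic in BEHAVIOR_TOPICS:
            rospy.Subscriber(topic, Twist, self.make_cb(topic))
        rospy.Timer(rospy.Duration(0.05), self.arbitrate)

    def make_cb(self, topic):
        def cb(msg):
            self.last[topic] = (rospy.get_time(), msg)
        return cb

    def arbitrate(self, _event):
        now = rospy.get_time()
        for topic in BEHAVIOR_TOPICS:  # first topic with a fresh command wins
            stamp, msg = self.last.get(topic, (None, None))
            if stamp is not None and now - stamp < TIMEOUT:
                self.pub.publish(msg)
                return

if __name__ == '__main__':
    rospy.init_node('twist_arbiter')
    TwistArbiter()
    rospy.spin()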

Alan KM6VV

Ryan Hickman

Feb 20, 2013, 23:14:09
To: hbrob...@googlegroups.com, Jim Bruce, Seattle Robotics

Adding Jim Bruce, author of cmvision :)
