Alan,
Crap, Thunderbird just ate my long-winded reply. Going to copy this one before I send it.
I went with HSV (or HSL, both by the same person) because YUV was designed
more for video streams for human consumption, while HSV reduces the
dependence on illumination for machine vision. Also, I started using it
first and it worked well, so I never got around to pushing much further.
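To illustrate that illumination point, here's a minimal per-pixel sketch using only the stdlib `colorsys` module (OpenCV's `cv2.cvtColor` plus `cv2.inRange` do the same thing over whole images); the threshold values are made up for the example, not Tim's actual mask:

```python
import colorsys

def is_cone_orange(r, g, b, hue_lo=0.02, hue_hi=0.10, s_min=0.5, v_min=0.3):
    # Convert 8-bit RGB to HSV with all components in [0, 1].
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Hue carries the "orange-ness"; saturation/value floors just reject
    # washed-out or near-black pixels.
    return hue_lo <= h <= hue_hi and s >= s_min and v >= v_min

# The darker orange has roughly the same hue as the bright one, so the
# mask still passes it -- that's the reduced illumination dependence.
print(is_cone_orange(255, 120, 0))   # bright orange -> True
print(is_cone_orange(140, 66, 0))    # darker orange -> True
print(is_cone_orange(0, 200, 0))     # green grass   -> False
```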
I'm using OpenCV for the underlying library, though mostly that's for
image capture and general image handling. I wrote my own blob
segmentation package because the ones I found were poorly documented, and
it was easier than reverse engineering them to be sure what I was getting.
I also wrote a small application that lets me view an image stream in
real time, grab one of the images to work with, and then select areas of
interest. Once an area is selected, I display histograms of the color
space channels. Then I can decide on the color mask to use, enter it,
and restart the image stream to see how well it works by displaying the
original images and the masked blobs in separate windows. I'll attach a
couple of pictures showing the results using my test cone in its native
habitat, Chipotle :).
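The histogram step above can be sketched like this, again stdlib-only (a real tool would likely use `cv2.calcHist` on the selected region); the bin count and sample pixels here are hypothetical:

```python
import colorsys
from collections import Counter

def hue_histogram(pixels, bins=18):
    # pixels: iterable of (r, g, b) tuples from the selected region.
    # Returns a Counter mapping hue-bin index -> pixel count; the peak
    # bins suggest the hue bounds to enter for the color mask.
    counts = Counter()
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        counts[min(int(h * bins), bins - 1)] += 1
    return counts

# A region dominated by cone-orange pixels peaks in a low hue bin,
# with a stray green pixel landing far away in the histogram.
region = [(255, 120, 0), (250, 110, 10), (0, 180, 0)]
print(hue_histogram(region).most_common(1))  # [(1, 2)]
```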
A couple of years ago at RoboGames I checked the program under actual
conditions, walking around with my laptop and webcam. It was pretty good
at finding the cones. Of course, orange against green shouldn't be too
hard. It's the extraneous stuff, like guys in orange jumpsuits walking
around, that causes the problems. I did find two false positives in my
short test. There were two metal pipes, about 4 inches in diameter, stuck
in the ground by the building that were painted red. Parts of them were
picked up. Under good circumstances, your GPS and knowing your robot pose
would help you eliminate them. And once you're close to a real cone, it
should stand out and be easy to home in on.
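For that last homing step, the usual move is to steer toward the centroid of the masked blob (OpenCV's `cv2.moments` gives you this directly); here's a toy sketch of the idea on a mask represented as a 2-D list of 0/1 values:

```python
def blob_centroid(mask):
    # mask: 2-D list of 0/1 values, e.g. the output of the color mask.
    # Returns the (row, col) centroid of the "on" pixels, or None if
    # nothing passed the mask.
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0]]
print(blob_centroid(mask))  # (1.5, 1.5)
```

Comparing the centroid's column against the image center gives the steering error for homing.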
Tim