Robot Vision / OpenCV Group


Chris Mayer

unread,
Oct 3, 2017, 6:01:31 PM10/3/17
to hbrob...@googlegroups.com
Now that MottBot2 can see, I'll be doing more research into various vision projects.  Any interest and help is welcome!

In addition to using the R Pi camera, Intel RealSense, and Etron EX8029, I plan on getting the cheapest USB camera I can find. My premise is that a $4.99 webcam can be almost as good as a more expensive camera for some applications. It would be nice to have a small group sharing software experimentation on the same cheap hardware. So, if interested, let's find a cheap webcam and buy a few.

Although I usually roll my own software, I'd like to also start an OpenCV learning group. A quick search for "OpenCV" in this group finds posts as far back as 2009! So, although this has been done before, for me and any other OpenCV newbies, I hope some experienced people can also join in for any much appreciated guidance.

Here are a few things I hope to play with in the near future.

- 2D color-based object detection

- 2D vision-based telemetry

- 3D mapping from sequenced photos while moving

- Biometric recognition
  (Facial Features and Hand Geometry)

- Movement detection security cam

- Line following

- Orange cone finding (for Robo Magellan)






Chuck Untulis

unread,
Oct 3, 2017, 6:28:41 PM10/3/17
to Homebrew Robotics
Please include me on the list of interested but newbie people. I have watched from the sidelines but would be willing to help if I can.

Chuck

On Tue, Oct 3, 2017 at 3:01 PM, Chris Mayer <cmay...@gmail.com> wrote:
Now that MottBot2 can see, I'll be doing more research into various vision projects.  Any interest and help is welcome!

--
You received this message because you are subscribed to the Google Groups "HomeBrew Robotics Club" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hbrobotics+unsubscribe@googlegroups.com.
To post to this group, send email to hbrob...@googlegroups.com.
Visit this group at https://groups.google.com/group/hbrobotics.
For more options, visit https://groups.google.com/d/optout.

Andy Jang

unread,
Oct 3, 2017, 6:48:06 PM10/3/17
to hbrob...@googlegroups.com, cmay...@gmail.com
Hi Chris,

Computer Vision for RPi sounds like fun. I am in the Nano Driverless Car class and we are using OpenCV, but
it seems to use a lot of CPU power. But I noticed there are some groups in SF.

Opencv Meetup SF

and saw something about fpga and opencv.

Xilinx with Opencv

Good luck!

On Tue, Oct 3, 2017 at 3:01 PM, Chris Mayer <cmay...@gmail.com> wrote:
Now that MottBot2 can see, I'll be doing more research into various vision projects.  Any interest and help is welcome!

Chris Albertson

unread,
Oct 3, 2017, 7:37:26 PM10/3/17
to hbrob...@googlegroups.com
I'm 100% with you on the idea of a cheap webcam. A good one is the Sony PlayStation Eye camera. It can do low-res video at 120 FPS and costs well under $10. They are popular with people doing vision research. https://en.wikipedia.org/wiki/PlayStation_Eye
  

If you need stereo, you can google for people who have hacked these so that two cameras use synchronized shutters. You need synced shutters for stereo when things are moving fast. This is one of the problems with using cheap webcams. The 120 FPS helps when things are moving, and the resolution really is high enough for most tasks.

I bought 3 of these cameras and then found it is REALLY hard to use a camera for software development, because what I need are "known" shots. That is, I need a photo of a dog labeled "dog" and one of a truck labeled "truck", and if all I have is a camera then I have to record and label pictures by hand. And I need tens of thousands of them. So I stopped and downloaded a million or so already-labeled JPG files. I'm using the JPG files for controlled input.

Then after the software works try the camera.

We are at a turning point. Currently people are finding that artificial neural networks are outperforming hand-coded vision algorithms. You can try all those things you listed using OpenCV and hand-coded functions, or you can try training a network. Actually, most people now will use OpenCV to do some "feature engineering" and then pass that to the network. But even that is being beaten, because a network can learn which features are most useful.

There is no way you could do anything sophisticated without using OpenCV. No one could live long enough to write that much code themselves. Just use OpenCV and Python. But like I said, this kind of hand-written algorithm is being beaten badly by newer methods. My favorite framework now is Keras; I use OpenCV only for front-end pre-processing. See here: https://keras.io  Notice the part on the linked page where it says "a 30 second introduction". That simple program could likely do the orange cone finding task, and certainly line following.

It's good to live in a time when technology is moving fast. Also, prices are falling. All you need to get onto the cutting edge is an NVIDIA GPU card. Even my GTX 1050 is enough.

All that said, a "real" video camera works really well. Get an old used "miniDV" tape camera. These did 480-line video but, and this is KEY, they have a FireWire 400 output for uncompressed video, and the optical quality is really good. OpenCV will interface with them over FW400.
They have much better dynamic range than webcams, and better quality optics too.




On Tue, Oct 3, 2017 at 3:01 PM, Chris Mayer <cmay...@gmail.com> wrote:
Now that MottBot2 can see, I'll be doing more research into various vision projects.  Any interest and help is welcome!




--

Chris Albertson
Redondo Beach, California

Chris Albertson

unread,
Oct 3, 2017, 7:52:52 PM10/3/17
to hbrob...@googlegroups.com
Andy,

I'm the "other" Chris, but I have some ideas...

and saw something about fpga and opencv.

Yes, FPGAs can work, but they are VERY software intensive and not easy to use, and you can't simply stop in at Best Buy and pick one up.

If you need "compute power" the GPU is the way to go.  

The GTX 1080 card (that you can get at Best Buy) does 9 TFLOPS. Yes, "T" as in "tera": 9 trillion 32-bit floating point operations per second. And it has many gigabytes per second of memory bandwidth.

Not only that, but the hardware is set up to do basic matrix operations for linear algebra, so it can do really useful stuff and quickly handle calculations in real time on hundred-million-point datasets. Even real-time video is easy when you have a few TFLOPS available. Google's TensorFlow is becoming very, very popular, as it works so well when paired with an NVIDIA GPU.
But as I wrote above, Keras is, I think, a cleaner way to use TensorFlow.

OpenCV can directly use a GPU too. To make it work you need to compile OpenCV from source, but you get quite a boost in performance.
 

Brian Higgins

unread,
Oct 4, 2017, 8:15:33 AM10/4/17
to hbrob...@googlegroups.com
I'm interested as well, although I'm still missing the point. As a professional photographer for 25 years, I would love to see a camera involved in a robot. I also know that vision is a human's primary sense, and that human vision is very complex. I have heard about cameras ever since getting involved in robotics; I'm just trying to wrap my head around the concept.

Brian

Sent from my iPhone X

Jeffrey Cicolani

unread,
Oct 4, 2017, 11:26:54 AM10/4/17
to hbrob...@googlegroups.com
Andy,

NVIDIA put some of that GTX 1080 processing power into a convenient and portable platform in the form of the Jetson line of embedded computers. My autonomous robot (WIP) has one at its heart, and a colleague is using one in an object recognition robotics application. NVIDIA had the TX1 marketed to developers as the TX1SE for $199 (limit 1 per person). If that offer is still available, it's less than half of the original retail price.

Brian,

Computer vision in robots is being applied for much the same reasons it's so important to us as people: navigation and object recognition. With the newer processors, such as the aforementioned Jetson, there is a lot that can be done on-board a mobile robot, and even more if you want to off-board some of the processing. My Nomad project will be using stereo vision to build a 3D map of the environment for navigation. I still have a lot to learn in that area, but I'm well on my way.
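The math underneath the stereo mapping Jeffrey describes comes down to one relation: depth = focal_length * baseline / disparity. Here is a small Python sketch of that relation, plus OpenCV's real block-matcher API for producing a disparity map; the camera numbers and file names are made up for illustration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters for one matched pixel pair (pinhole stereo model)."""
    if disparity_px <= 0:
        return float("inf")  # no match, or a point at infinity
    return focal_px * baseline_m / disparity_px

def disparity_map(left_path, right_path):
    """Needs OpenCV and a rectified left/right grayscale image pair."""
    import cv2
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return matcher.compute(left, right)
```

For example, with a 500-pixel focal length and a 10 cm baseline, a 100-pixel disparity puts the point half a meter away, which is why synced shutters matter: a timing mismatch shifts the disparity and therefore the depth.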

Cheers.

Jeffrey Cicolani
President; The Robot Group; http://therobotgroup.org
Chair; Chupacabracon; http://chupacabracon.com

See my most recent geekiness at www.cicolanistudios.com and http://nomadrobot.info

"Power to the people powerful enough to crush the other people" Joss Whedon
"Always be yourself... unless you suck." Joss Whedon

Chris Mayer

unread,
Oct 4, 2017, 12:31:16 PM10/4/17
to HomeBrew Robotics Club
Thanks for all the expert advice so far!

As this is intended as an intro for newbies, I'll be keeping the hardware cheap and simple - just a Raspberry Pi and a 2D camera for now. Any version will do, and I have a box of RPi 1s if anyone is short. Neural nets on NVIDIA GPUs will come, but next year for me.

Rather than let this thread grow as we progress through each step, I was thinking of starting an online blog where people could go to follow along and ask questions. I've never done one before and have seen a lot of cool ones done by members here, so please reply privately with any suggestions. Facebook or Google+ ?

The first class this weekend will be a simple OpenCV "Hello World".  Let's just get everything installed and snapping photos from a webcam.
Raspistill is already installed for the PiCam, and there are MANY sites showing how to get this up and running.
The first app I want to write is a simple security cam: take a photo every so often and do a simple pixel compare. If the delta exceeds a threshold, do something.
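The pixel-compare idea is simple enough to sketch now. Below is a minimal Python outline; the function names and thresholds are my own placeholders, and the capture loop assumes OpenCV with a webcam on device 0:

```python
import numpy as np

def frame_delta(prev, curr, pixel_thresh=25):
    """Fraction of pixels whose absolute difference exceeds pixel_thresh."""
    diff = np.abs(prev.astype(np.int16) - curr.astype(np.int16))
    return np.count_nonzero(diff > pixel_thresh) / diff.size

def motion_detected(prev, curr, pixel_thresh=25, area_thresh=0.01):
    """True when more than area_thresh of the frame has changed."""
    return frame_delta(prev, curr, pixel_thresh) > area_thresh

def run_security_cam():
    """Polling loop; needs OpenCV and a webcam on device 0."""
    import cv2, time
    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    while ok:
        time.sleep(1.0)                  # "every so often"
        ok, curr = cap.read()
        if ok and motion_detected(prev, curr):
            print("motion!")             # "do something" goes here
        prev = curr
    cap.release()
```

Comparing raw pixel counts like this is deliberately naive; it will trigger on lighting changes too, which is fine for a first exercise.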

I have a special version of the Etron SDK that was ported to the RPi.  Reply privately if you plan on buying one of these. I've arranged for a small discount if we all buy together on a single PO, but even at $179.10 it is more than I care to put on my MottBot2s.

I have an Intel RealSense, but have no desire to spend time on it.
I also have a box of Beaglebones and Arduino cameras. Unless there is interest, I do not plan on spending time with these. However, the Arduino cams are very cheap, and I will eventually get these working as well. A less-than-$10 vision-capable bot seems pretty cool to me!

Wayne C. Gramlich

unread,
Oct 4, 2017, 12:49:02 PM10/4/17
to hbrob...@googlegroups.com, Wayne C. Gramlich
On 10/04/2017 09:31 AM, Chris Mayer wrote:
> Thanks for all the expert advice so far!
>
[snippage]
>
> Rather than let this thread grow as we progress through each step, I was thinking of starting an online blog where people could go to follow along and ask questions. I've never done one before and have seen a lot of cool ones done by members here, so please reply privately with any suggestions. Facebook or Google+ ?

Chris:

I'd suggest you let the vision/OpenCV threads continue on this list for a while. After
2 to 3 weeks, when people have identified that they are really interested in the topic,
it can be moved to another list. I'm just trying to be helpful here.

Regards,

-Wayne

[snippage]

Albert Margolis

unread,
Oct 4, 2017, 1:00:57 PM10/4/17
to hbrob...@googlegroups.com
A great resource for OpenCV is https://www.learnopencv.com/.

The site has free articles on installing OpenCV (which is more work than most libraries these days) and lots of useful code samples. They also sell online classes on advanced topics. I have only used the free stuff, which  I have found very useful.  

Chris Albertson

unread,
Oct 4, 2017, 1:04:57 PM10/4/17
to hbrob...@googlegroups.com
If you start thinking about it from the camera end of the process it is hard.  You think "OK the camera snapped a photo.  Now what?"  What to do is not easy because you have no goal.

But start from the other end of the process. Let's say you have a robot arm and the task is to find and grip a bolt from the parts tray. Now you have a very well-defined goal. You want to find the center of the bolt, create a vector from that to the center of the gripper, and then command the arm to make a relative movement equal to the vector.

So now you take an image and, with that specific goal in mind, set to work finding edges and looking for a 3D transformation of a bolt that best fits the list of edges you computed from the image. Then you know the location and orientation of the bolt. Then you command the arm to go to the correct position and orientation to grip the bolt.
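The arithmetic at the end of that pipeline is just vector subtraction: the relative move is the vector from the gripper center to the bolt center. A tiny sketch (function names are illustrative, not any real arm API):

```python
import numpy as np

def centroid(points):
    """Center of a set of 2D points, e.g. edge pixels belonging to the bolt."""
    return np.mean(np.asarray(points, dtype=float), axis=0)

def relative_move(bolt_center, gripper_center):
    """Vector the arm should move by (in whatever coordinates you calibrated)."""
    return np.asarray(bolt_center, dtype=float) - np.asarray(gripper_center, dtype=float)
```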

Same if the job were to recognize a face. Given that task, you'd take one exposure with the camera, try to detect a face, extract it from the frame, locate features, and then measure the locations of those features and compare them to a database.
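That detect-then-compare pipeline can be sketched in a few lines. The cascade file below is the one that ships with OpenCV's Python package; the feature vectors and the database-matching step are placeholders for whatever measurements you settle on:

```python
import numpy as np

def best_match(features, database):
    """Name in `database` whose stored feature vector is nearest (Euclidean)."""
    names = list(database)
    dists = [np.linalg.norm(np.asarray(features) - np.asarray(database[n]))
             for n in names]
    return names[int(np.argmin(dists))]

def detect_faces(gray_image):
    """Bounding boxes of faces; needs OpenCV with its bundled Haar cascades."""
    import cv2
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
```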

In a more complex case, like a self-driving car, there might be a dozen processes that each do something different with every frame. One looks for pedestrians, one for the edges of traffic lanes, and another for traffic signals, and there are likely more than a dozen.

In all cases I think it is best to start thinking about the information you want first.  Be it a vector, a person's name or whatever.

Albert Margolis

unread,
Oct 4, 2017, 1:51:44 PM10/4/17
to hbrob...@googlegroups.com
I absolutely agree with the need to start with a specific application. If you start with the idea that you want to learn ALL ABOUT computer vision, you will need to dedicate the next few decades to learning the equivalent of several PhD degrees. On the other hand, for some applications there are ways to get started in a weekend. This is a huge subject area, so you need to stay focused.

Another product to look at is OpenMV. This is a dedicated ARM controller board with an integrated camera, running MicroPython. It has both low-level vision functions inspired by OpenCV and higher-level functions that simplify certain types of applications. I haven't used it, but some of the competitors at the Oakland Pipe Warehouse races have had success with the platform.

Here is a complete autonomous vehicle build for about $85 using OpenMV:

The main OpenMV developer is located in the Bay Area and has been competing at the Oakland Pipe Warehouse races. I'm guessing that it would be fairly easy to have him come to a meeting for at least a demo and introductory talk. I have his business card.

Chris Albertson

unread,
Oct 4, 2017, 1:56:24 PM10/4/17
to hbrob...@googlegroups.com
I have to second that. There are so many of these single-subject blogs that I forget to look at them. Anyone interested in robots should also be interested in computer vision.

When the subject becomes narrower, then move it to a project-specific email list, centered around maybe a GitHub repository for that project.

About the question of where to learn. First of all, what to learn. We are in a transitional period where people are moving fast from hand-coded solutions that tend to be "brittle" to solutions based on machine learning. Even so, OpenCV will continue to play a large role in any computer vision system, even if just for data normalization and cleaning.

A few web sites offer blogs where they cover small topics.
These two authors are pretty good and a lot alike; if you like one you will like the other.

But neither has any depth; they are bloggers trying to attract clicks by providing good content. This subject requires book-length treatment.
Everyone knows about O'Reilly. Anything they publish is first rate and worth buying, but it has been YEARS since they released anything about OpenCV; OpenCV is just not exciting anymore. I think it is fundamental but not cutting edge. Another good, low-cost eBook publisher is Packt. Their books are quickly written and a bit formulaic, but they have a 5-for-$25 sale going all the time. Instant downloads and no DRM.
Both of the above offer books, video classes, and subscriptions to their entire library.

What to learn? As I said, OpenCV is fundamental. You can do things like open a file and read an image without having to worry whether it came from a still or video camera, or whether it is a PNG, JPG, or TIFF. It can make all that transparent to your software. Then it can resize and apply any kind of transformation: Haar transforms, convolutions, and simple face detection.

Anything more complex is today going to be done with a CNN (Convolutional Neural Network), and of late "Keras" seems to be the framework for building those networks.

It is important to pick a project. At first, do the "standard" ones that everyone does. The MNIST dataset of handwritten digits is the "hello world" of computer image classification. It is dead easy to get an 80% correct score, but I'd say 90% is a "passing" grade, and no one gets 100%. The "hello world" programs get around 85% and should be everyone's first exercise.
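The MNIST exercise can be outlined in the Keras style mentioned above. A minimal sketch, assuming TensorFlow/Keras is installed; the network shape here is a common small choice, not canonical, and the one-hot helper is my own:

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Digit labels like [3, 1] -> one-hot rows for training targets."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def build_model():
    """A small dense network for 28x28 digit images; needs TensorFlow/Keras."""
    from tensorflow import keras
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),    # flatten the image
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),  # one output per digit
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Trained for a few epochs on the MNIST training set, a network like this comfortably clears the 85% mark described above.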

Lane keeping is a good exercise too.  Using vision to stay in the center of a sidewalk.  Very much like line following but with a bigger sensor with more pixels.

I've got a project that for me is a "high bar" and I'll be at it for at least another year.   A mobile robot will drive around more or less at random and photograph whatever it finds.   Later I can ask it "Did you see a cell phone?" or "Where was the last place you saw Lilly?" (Lilly is my dog)
The robot will need to be able to use some reasoning about what it sees.

Currently I can give it a set of photos and ask it questions like "In which photos are there animals that can eat grass where there is also a person in the same photo."  The robot knows a horse is a grass eating quadruped and that there are two images of a person with a horse.   So I have a proof of concept working.  The goal is a robot who can explain what it sees or go looking for some object you describe.  This is very much possible without requiring PhD level new research.



On Wed, Oct 4, 2017 at 9:48 AM, Wayne C. Gramlich <wayne.gra...@gmail.com> wrote:

Chris Albertson

unread,
Oct 4, 2017, 2:41:27 PM10/4/17
to hbrob...@googlegroups.com
Albert,

Yes. Pick a project. I just suggested several in my last email. (I'd set the bar a little higher than driving around on a prepared track.)

As for platforms, ARM controller boards are nice for deploying a project but it is MUCH easier, like by an order of magnitude easier to do this kind of work on a more powerful desktop machine.

It is so much easier to work on algorithms when you don't also have to deal with very limited resources.   So I'd advise getting your stuff to work first, then later getting it to work on such limited hardware.  

Also, the methods in use today depend on the availability of huge amounts of processing power. I watched a lecture by someone from Google talking about the computer in the trunk of their self-driving car. That little machine ranks on the list of the top 500 supercomputers on Earth. That is WAY too expensive for a hobby project. But you are going to need a couple of TFLOPS if your goal is to break out of the class of line following and other simple hard-coded behaviors.

If your robot runs ROS, it is pretty easy to place some of the nodes on a desktop and some on the Raspberry Pi and use WiFi. If you have the budget, look to the NVIDIA Jetson TX2. But really, the best plan is to develop on the desktop PC; then, when you are done, look to see how much RAM and processing you are using, and then and ONLY then choose a mobile platform that allows for 50% to 100% growth. Hate to say it, but choosing the execution platform first is backwards. And if you wait, the hardware gets cheaper. A bonus for those with patience. Already I've seen the Jetson TK1 selling for $99.

Andy Jang

unread,
Oct 4, 2017, 5:46:26 PM10/4/17
to hbrob...@googlegroups.com
Hopefully hardware-accelerated ROS will be a step closer in 2018!

Bob Smith

unread,
Oct 4, 2017, 7:04:27 PM10/4/17
to hbrob...@googlegroups.com
On 10/04/2017 10:51 AM, Albert Margolis wrote:
> I absolutely agree with the need to start with a specific application.

> On Wed, Oct 4, 2017 at 10:04 AM, Chris Albertson <alberts...@gmail.com>
> wrote:
>> If you start thinking about it from the camera end of the process it is
>> hard. You think "OK the camera snapped a photo. Now what?" What to do is
>> not easy because you have no goal.


I kind of disagree. The first line of Chris Mayer's
email mentioned newbies and just taking a snapshot
is a reasonable goal for a newbie. You have to learn
how to install OpenCV, how to write a program that
uses the OpenCV API calls, how to link in the OpenCV
libraries, and then how to run the program. Yes, yes, I
know this is easy for you but for a newbie it might
not be.



On 10/04/2017 09:31 AM, Chris Mayer wrote:
> As this is intended as in intro for newbies,

Chris: when and where?

thanks
Bob Smith

Chris Mayer

unread,
Oct 4, 2017, 7:22:24 PM10/4/17
to HomeBrew Robotics Club
Newbies always appreciate help from experts!

I like the idea of specific goals to guide what to do next.
The first goal is to set up the hardware, install OpenCV, and write our first program that grabs an image and saves a BMP file.
The second goal is to take that and write a security camera program: take pictures every so often and, when the delta exceeds a threshold, do something.
I think I'll just have MottBot2 face whatever is moving with a quick motor rotate pulse. Or, if you have a servo with a laser pointer mounted....
I think this will be a good first goal to get everyone up and running.
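The first goal reduces to a few lines of OpenCV: grab one frame and write it out as a BMP. A sketch, assuming OpenCV and a webcam on device 0; the timestamped-filename helper is my own convention:

```python
import time

def snapshot_name(prefix="snap", ext="bmp", t=None):
    """Timestamped filename, e.g. snap_20171007-093000.bmp."""
    stamp = time.strftime("%Y%m%d-%H%M%S", time.localtime(t))
    return "%s_%s.%s" % (prefix, stamp, ext)

def grab_and_save():
    """Grab a single frame and save it; needs OpenCV and a webcam on device 0."""
    import cv2
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite(snapshot_name(), frame)  # .bmp extension selects the BMP writer
    return ok
```

From there, the security-camera goal is just calling this in a loop and comparing successive frames.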

This weekend here in Belmont - I'll be here Saturday morning and Sunday all day. Email privately for my phone number and address.

sdey...@hotmail.com

unread,
Oct 5, 2017, 10:00:47 PM10/5/17
to HomeBrew Robotics Club
I'll be watching closely also; I have RPi 1, 2 & 3, a camera, and other stuff. I use Python a lot.