Implementing Vision

Jason von Nieda

unread,
Jun 28, 2012, 2:59:19 AM6/28/12
to ope...@googlegroups.com
Hi everyone,

The last big task to implement in the OpenPnP software before it can be used for real work is basic computer vision support. I had some plans on how I wanted to implement this, but a recent message I saw threw a wrench in those plans so I wanted to open this up for discussion and brainstorming.

To start with, here's a short list of tasks we need to be able to do with the vision system:
1. Accurately find the pick up location for a tape drag feeder. (Top Vision)
2. Identify failure to pick. (Bottom Vision)
3. Identify the center of a picked part. (Bottom Vision)
4. Find and center on fiducials on a PCB for proper board location. (Top Vision, Wide)

You'll note (Top Vision) and (Bottom Vision) in the list above. These refer to either cameras that face down and are generally mounted to the moving head (Top Vision), or cameras that face up from the underside of the machine and are fixed (Bottom Vision).

What we have so far is:
* Pretty good camera support. We can obtain images from lots of different types of cameras, configure their physical location and size in pixels and map pixels to physical lengths. 
* An abstract and extendable vision provider interface.
* Basic vision provider support using RoboRealm.
* Proof of concept dot homing using vision. This uses the RoboRealm vision provider in a closed loop to find and center on a dot of a specific size. This provides extremely accurate homing of the machine.

So, with that in mind, I'll get into each of the four tasks we need to do. I'll describe the problems and some possible solutions, but I'd just like to open this up for any kind of comments or brainstorming you might have.

1. Accurately find the pick up location for a tape drag feeder. (Top Vision)

One of the features of OpenPnP is support for the common hobby "drag" style of tape feeder. This is where an extendable pin on the head is used to drag the tape full of parts along a slot, and the part is then picked up from a predefined location. I consider this goal number one. By implementing this, OpenPnP can be used for real work almost immediately.

In practice, we have found that this works some of the time, but inevitably, through friction or something else, the tape position gets screwed up and then it all falls apart. What we need to be able to do is use vision to look at the tape and determine where to properly insert the pin to feed accurately.

This is the part that got the wrench thrown in. What I had been working on is identifying the hole that the pin needs to insert into and then determine if that hole is not where it should be. If it's not, we would adjust the insert location and then we should be able to expect an accurate feed. Another list member recently mentioned the existence of transparent tapes, which will be very difficult, if not impossible, to use this method with. 

Instead, I am wondering if it might make sense to have the vision system identify the part that is to be picked and determine if it is off center. If it is, an adjustment can be 
made by either moving the tape further or by just picking the part at an offset.

One difficulty with this is that we will need a way to identify each part package. This could mean either training for each part, or maybe building and including a part database. I'm curious how commercial packages do this.

2. Identify failure to pick. (Bottom Vision)

If the machine fails to pick up a part for any reason, we want to identify that and either notify the operator or retry the operation.

This should be pretty easy. We can either use the data discussed above or simply depend on a light background above the pick nozzle. Anything we see blocking that background is a part.
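
As a rough illustration of that check, a minimal sketch in Python with OpenCV might look like the following. The threshold and minimum-blob-fraction values are placeholders that would need tuning to the real camera and backlight, and in practice you would probably also subtract a reference image of the bare nozzle:

import cv2

def part_on_nozzle(frame_bgr, dark_thresh=200, min_dark_fraction=0.01):
    # Convert the up-looking camera frame to grayscale.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Count pixels darker than the (bright) background; the nozzle itself
    # should be masked out or subtracted beforehand in a real setup.
    dark_pixels = int((gray < dark_thresh).sum())
    # Anything blocking the light background shows up as a dark blob.
    return dark_pixels / float(gray.size) > min_dark_fraction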

3. Identify the center of a picked part. (Bottom Vision)

By using a fixed, bottom mounted camera we can move the head over the camera once it has picked a part and determine if the part is centered on the nozzle. If it's not, we can adjust internal offsets so that when we place it it is centered properly.

I see this as an addition to #1. We're just trying to find the center of a part. The only difference is that we have to do it from the bottom this time. I wonder if we can use the same data set as #1 or if it will be quite different.

For reference, I was thinking that we might be able to get away with defining package data as something like:
Outline (mm): 4x5
Pad 1: 2mmx2mm @ 0,0
Pad 2: 2mmx2mm @ 4,0
Pad 3: 2mmx2mm @ 0, 4
Pad 4: 2mmx2mm @ 4,4
etc.

If we had that data, are there vision algorithms that would let us identify the pads both from the top and bottom, or do we need different data and different algorithms? With my limited vision experience I would expect that we could use some type of threshold, edge finding and then blob location with fixed physical sizes to find the pads.
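
For what it's worth, a hedged sketch of that threshold-and-blob approach in Python with OpenCV (4.x-style bindings) might look like the following. The units-per-pixel value would come from the existing camera calibration; the 30% size tolerance and the assumption that pads appear brighter than the background are illustrative guesses, not part of the plan above:

import cv2

def find_pads(gray, units_per_pixel_mm, pad_w_mm, pad_h_mm, tol=0.3):
    # Binarize with Otsu's threshold; pads assumed brighter than background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Find blobs and keep only those whose physical size matches the pad spec.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pads = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        w_mm = w * units_per_pixel_mm
        h_mm = h * units_per_pixel_mm
        if (abs(w_mm - pad_w_mm) <= tol * pad_w_mm and
                abs(h_mm - pad_h_mm) <= tol * pad_h_mm):
            pads.append((x + w / 2.0, y + h / 2.0))  # blob center in pixels
    return pads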

4. Find and center on fiducials on a PCB for proper board location. (Top Vision, Wide)

By quickly being able to center on fiducials on the PCBs we can very quickly identify the location and orientation of the board to be placed. Additionally, by using a wide angle, fixed, top mounted camera it might be possible to identify boards as soon as they are placed on the machine and automatically orient them.

This is very similar to the dot homing algorithm I am currently using. I'm not too worried about this one, but I welcome discussion on it.
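
For illustration, finding a single round fiducial and turning it into a move is conceptually just a circle detection plus an offset from the image center. A minimal Python/OpenCV sketch, with placeholder Hough parameters and not the actual dot-homing code:

import cv2

def fiducial_offset_px(gray, expected_radius_px, tol_px=5):
    # Smooth a little so the Hough transform is not distracted by noise.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=expected_radius_px * 2,
        param1=100, param2=30,
        minRadius=int(expected_radius_px - tol_px),
        maxRadius=int(expected_radius_px + tol_px))
    if circles is None:
        return None
    x, y, _ = circles[0][0]
    # Offset of the fiducial from the image center, in pixels; multiply by
    # units-per-pixel to get the machine move that centers the camera on it.
    return (x - gray.shape[1] / 2.0, y - gray.shape[0] / 2.0)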



So, I open the discussion to you all. Does anyone have ideas or input they would like to add? I'm especially interested to hear from people who have experience with existing vision systems. You may be able to provide some non-obvious ideas on how things can best work.

Algorithms, processes, designs, ideas, etc. are all welcome. 

Thanks,
Jason

Daniel Dumitru

unread,
Jun 28, 2012, 3:20:50 AM6/28/12
to ope...@googlegroups.com
Hello Jason,
I have spent some time on the same topic and have made some tests using OpenCV.
My opinion is that for these operations we need template matching for rotated objects.
Unfortunately, as far as I know, RoboRealm doesn't offer support for rotated objects.

Kind Regards,
Daniel



David Armstrong

unread,
Jun 28, 2012, 3:25:13 AM6/28/12
to ope...@googlegroups.com
Jason,

Just a few points or ideas I have; I'm open to being totally wrong of course.

My thought is to implement a picture database of parts, in that the camera can take a shot of the part for the database.
The vision system can then reference the picked part to the one stored in the database, using matching.
Centre-point picking based on vision should be OK, as using the stored picture the vision system should have a good matched outline to follow.

Again, using pad matching of part to PCB should also work.
I'm looking at using infrared LEDs to help get a consistent, clean picture without shadows and to help with image retrieval.

Parts not picked could be detected by measuring the vacuum pressure on pickup, and the same vacuum pressure sensing would come into play for parts dropped.
Simple and easy to do.

Strip feeder: rather than have one toothed wheel, run the tape over two wheels and the sprockets will then stay synchronised and in a straight line for picking.

Is RoboRealm going to be the software of choice here, or ROS with OpenCV?
I'm just about to start looking at vision system integration and found a lot going for ROS and OpenCV etc.

Has any work been done? I'd be interested in picking up any starting code or pointers which may help.

 


David Armstrong

unread,
Jun 28, 2012, 3:30:07 AM6/28/12
to ope...@googlegroups.com
Daniel,

Ah yes, good point. I came across the same thoughts while looking around for ideas.
I'm using a base of ROS with OpenCV (ros.org).

Perhaps if someone has started we can pick up some common point of code and go from there, or we will all be working in different directions.

Dave




Richard Spelling

unread,
Jun 28, 2012, 11:23:17 AM6/28/12
to ope...@googlegroups.com
I'm going to answer before reading everyone else's response, so forgive
me if I duplicate.

In no particular order:
- The parts are located between the pin holes, and centered. I.e., if the pin hole section is 3mm wide and the part section is 5mm wide, the part is centered between the pin holes in the part section. If you can find the geometric center of the part you can easily calculate the pin hole location, even if you can't see it (see the sketch after this list).
- Some people use drag feeders with rubber feet and don't even worry about the holes.
- Also, some people use a single camera pointed down. You have a closely calibrated "camera offset", so when you center the camera you know exactly where the end of the pickup needle is in relation to that.
- For seeing the parts on the needle, why not move the head over a mirror? If you put it in the same position every time you could subtract the image of the head without a part on it, and very easily know the shape and orientation of the part.
- The fiducial marks my software puts out are for aligning the transparencies in photo etching; they are not on the actual board.
- For board finding use edge finding, or just alignment pins.
- What you really need is pad finding, so use the downward-facing camera with known offset.

- I'm going to release my feeder (and my head) design, so this may mitigate some of the issues with tape advancement for OpenPnP. I'm going to wait till they have been tested and any bugs worked out first. I'm also going to wait till I'm set up to sell the electronics board for them, as well as some of the other parts needed to assemble them. That way I can make enough money off them to recoup some of my expense. People will be free to make as little or as much of the feeder design themselves as they care to, and buy the rest.
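
To make the first point above concrete, the hole location is just the detected pocket center plus a fixed offset. A tiny sketch; the offset numbers here are placeholders, not real tape dimensions, and would come from the tape spec or a one-time measurement per feeder:

def drag_hole_location(part_center_xy_mm, hole_offset_xy_mm=(-2.0, 3.5)):
    # part_center_xy_mm: geometric center of the part found by vision,
    # in machine coordinates. hole_offset_xy_mm: fixed vector from the
    # pocket center to the sprocket hole the drag pin should use.
    px, py = part_center_xy_mm
    ox, oy = hole_offset_xy_mm
    return (px + ox, py + oy)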

Richard Spelling

unread,
Jun 28, 2012, 11:27:42 AM6/28/12
to ope...@googlegroups.com


On 06/28/2012 02:25 AM, David Armstrong wrote:
> Jason,
>
> Just a few points or ideas I have; I'm open to being totally wrong of course.
>
> My thought is to implement a picture database of parts, in that the camera can take a shot of the part for the database.

A good idea, especially if you limit it to the parts you actually use. For instance, I only use about 30 different parts for all the boards I sell.

> The vision system can then reference the picked part to the one stored in the database, using matching. Centre-point picking based on vision should be OK, as using the stored picture the vision system should have a good matched outline to follow.
>
> Again, using pad matching of part to PCB should also work. I'm looking at using infrared LEDs to help get a consistent, clean picture without shadows and to help with image retrieval.

Another good idea, though the lights in my shop throw off lots of IR (they trip my feeders if I point them at the lights). But if you use fluorescent lights, then IR illumination would be dandy for cleaning up the image. Note that the transparent tapes are also transparent to IR.


David Armstrong

unread,
Jun 28, 2012, 11:31:23 AM6/28/12
to ope...@googlegroups.com
I have also been looking at this idea, although I haven't tested it yet. Looking at ideas etc.:
http://www.codeproject.com/Articles/24809/Image-Alignment-Algorithms
 

David Armstrong

unread,
Jun 28, 2012, 11:42:10 AM6/28/12
to ope...@googlegroups.com
Almost forgot when I closed the last mail. Here's one other source I've had some success with; perhaps it has some options worth applying:
 
http://www.codeproject.com/Articles/239849/Multiple-face-detection-and-recognition-in-real-ti
 

 

David Armstrong

unread,
Jun 29, 2012, 8:32:59 AM6/29/12
to openpnp
Jason,
You may find this interesting, as I presume you would like to stay with Java. Java's not my strong point.
https://sites.google.com/site/qingzongtseng/template-matching-ij-plugin

However, I have run it and I think it may have potential as a starting block.
 
Dave


Jason von Nieda

unread,
Jun 29, 2012, 9:05:26 PM6/29/12
to ope...@googlegroups.com
Hi folks,

Thank you all for your responses. I am going to collect them into this
one email and respond instead of sending a bunch of emails:

> Template Matching

Several people suggested Template Matching for finding the parts. I
was/am not very familiar with this technique so I did some reading. I
found that this is generally used for matching images to images. I
found evidence that it is used in some commercial systems, where it is
referred to as taking a "snap" photo of the part and then referencing it.

I think that this will work, and probably work well, and furthermore
that it's probably quite easy to implement. That said, I was hoping to
be able to do something a bit more sophisticated that would work "out
of the box" without having to photograph each of your feeders. My
thought was that by describing the part as an outline and a series of
pads, we might be able to have a single part database that can be
distributed with the program and will work for all systems. Perhaps
that will only work for bottom vision since for top vision not all of
the pads will be visible.

Does anyone have any thoughts on that?

> parts not picked could be done by measuring the vacuum pressure on pickup

This is something I have considered in the past and then sort of
forgot about. I appreciate the reminder :) This is probably the
easiest and most effective way to do it. Pressure sensors are cheap
and easy to interface with.

> is roborealm going to be the software of choice here , or use ros with opencv

I am currently using RoboRealm because it was very easy to get going
with and its user interface makes it very, very easy to experiment
with different algorithms. In the end though, it is not going to work
for OpenPnP. I had intended to eventually switch to OpenCV.

I have not yet looked at ROS. I'll spend some time exploring it.

> also some people use a single camera pointed down. you have a closely calibrated "camera offset"

This is how OpenPnP currently works. You can have head attached
cameras or fixed cameras. Head attached cameras have a tool offset and
the system always knows where the tool is in comparison to the center
of the camera image.

> for seeing the parts on the needle, why not move the head over a mirror?

This is an interesting idea. Something I will need to explore more
when I have a machine actually up and running :)

> the fiduciary marks my software puts out are for aligning the transparencies in photo etching, they are not on the actual board.

This is not something I have a lot of experience with, but I have seen
quite a few videos where the machine determines the board position by
fiducial marks. It seems like a neat feature to have and is easy to
do. It's a bit down the road though.

> I'm going to release my feeder (and my head) design...

Glad to hear this! I think having mechanical feeders is clearly a
better solution than drag feeders. I do still intend to make drag
feeding work as well as possible as a low cost alternative to having
actual feeders.

> has anywork been done , i'd be interested in picking up any starting code or points which may help

The system currently supports RoboRealm as a vision provider. I am
working on integrating OpenCV. Once I have that done it should be
possible for other folks to start working on specific vision modules.

Thanks,
Jason

David Armstrong

unread,
Jul 6, 2012, 6:10:35 AM7/6/12
to openpnp
I'm getting the following error on the latest git pull; have I missed a file somewhere?

java.lang.Exception: Error while reading parts.xml (Element 'feeder-locations' does not have a match in class org.openpnp.model.Part at line 3)
    at org.openpnp.model.Configuration.load(Configuration.java:173)

Karl Backström

unread,
Jul 6, 2012, 11:04:08 AM7/6/12
to ope...@googlegroups.com
I got the same error, but after wiping my settings directory
(~/.openpnp/) it started working.

/ Karl



--
Blog: http://www.akafugu.jp/blog/
Twitter: akafuguJP
Facebook: http://www.facebook.com/akafugu

David Armstrong

unread,
Jul 6, 2012, 11:19:03 AM7/6/12
to openpnp
Thanks Karl, worked for me!

 

Jason von Nieda

unread,
Jul 6, 2012, 12:10:33 PM7/6/12
to ope...@googlegroups.com
Yep, sorry about that. I didn't realize anyone out there was using the
stuff in Git. I am in the process of building out the feeder vision
system and as part of that I refactored PartLocation out into Part and
Feeder.

Out of curiosity, how are you folks using the system? Are you actually
running machines with it or just tinkering?

Jason

David Armstrong

unread,
Jul 6, 2012, 12:19:56 PM7/6/12
to openpnp
Hi Jason,

Haha, don't worry about it. At the moment just tinkering, but building a test machine rig for everything, feeders etc.

Any reason for using Google SketchUp? It's rubbish... Much prefer SolidWorks / DraftCAD, so much easier for laser cutting etc.

I need to get into Java a bit more, so I'm following you through it all so I can get a feel for the flow.

Are you using a Java IDE for development, i.e. IntelliJ IDEA or Eclipse?

Dave

Karl Backström

unread,
Jul 6, 2012, 12:29:56 PM7/6/12
to ope...@googlegroups.com
No worries Jason, things breaking because of added functionality is a
good thing, especially in the alpha stage.

I am currently building my P&P machine based on a ShapeOko, progress
can be seen here:
http://www.akafugu.jp/posts/blog/2012_07_06-Making-a-Pick_place-machine_-part-1/

A bit of information on how to set up the GUI to talk to an Arduino
with grbl would be appreciated.
Also, in some parts of the UI you can toggle between inches and mm; other
places seem to only accept inches.

/ Karl

Jason von Nieda

unread,
Jul 6, 2012, 12:30:55 PM7/6/12
to ope...@googlegroups.com
> Haha, don't worry about it. At the moment just tinkering, but building a
> test machine rig for everything, feeders etc.

Glad to hear it!

> Any reason for using Google SketchUp? It's rubbish... Much prefer
> SolidWorks / DraftCAD, so much easier for laser cutting etc.

Unfortunately SketchUp is really the only free, Mac compatible option
I have found. If anyone knows of something better, please let me know.
I'd love to use SolidWorks but a) I don't have a spare $10k+ to buy it
and b) I don't want to produce files for an open source project in a
format that no one can look at without spending thousands of dollars.

This is a constant thorn in my side, so if there is a better solution
please let me know. I have actually decided that my next project after
OpenPnP is going to be a cross platform, open source 3D CAD :)


> Are you using a Java IDE for development, i.e. IntelliJ IDEA or Eclipse?

I use Eclipse. You can generate the Eclipse files using the Maven pom
included with the project by running, from the command line, mvn
eclipse:eclipse. Then you can just load the project right into
Eclipse.

Jason

Jason von Nieda

unread,
Jul 6, 2012, 12:37:23 PM7/6/12
to ope...@googlegroups.com
> I am currently building my P&P machine based on a ShapeOko, progress
> can be seen here:
> http://www.akafugu.jp/posts/blog/2012_07_06-Making-a-Pick_place-machine_-part-1/

Ah, sorry, I should have remembered. I haven't had a chance to respond
to your other email yet. I almost pulled the trigger on buying a
ShapeOko last week but a friend came through and loaned me a machine
so I will soon have a machine to do testing on. This should speed up
software development immensely :)

> A bit of information on how to set up the gui to talk to a arduino
> with grbl would be appreciated.

I'll be going through that process next week step by step and I will
document how to do it then.

> Also, in some part of the UI you can toggle between inch and mm, other
> places seems to only accept inches.

This is something I'm struggling with. I originally thought it would
be useful to be able to switch back and forth easily but I think that
was probably a mistake. I think that in the near future I will be
making the units a single settable configuration parameter and
everything will just convert automatically. You will still be able to
enter values in whatever unit you like, the value will just convert to
the program's units when you press enter.

Currently, the only places that will not allow you to enter units are
places that require units to be in machine native units. This is
something that is enforced by the machine driver. Everything else
should allow you to enter units with a value and is converted in
memory during job runtime.

David Armstrong

unread,
Jul 6, 2012, 12:52:04 PM7/6/12
to openpnp
Jason
http://sourceforge.net/apps/mediawiki/free-cad/
 
 

 

Jason von Nieda

unread,
Jul 6, 2012, 5:52:18 PM7/6/12
to ope...@googlegroups.com
I tried FreeCAD a few months back, and then again a month or so ago
and I just downloaded it again. My previous tries were plagued by
crashes. It does seem to start up and run now, so I will give it
another shot :)

Have you actually used it? Do you like it?

Jason

Christian O

unread,
Jul 7, 2012, 1:47:48 AM7/7/12
to ope...@googlegroups.com
Hi guys

Just thought I'd mention Alibre Design PE at the link below. It's not free, I know, but it is quite cheap at $199 and has a workflow like SolidWorks, just not as many features. I'm not affiliated in any way, just did a Google search after reading this discussion.


cheers

Christian

Jason von Nieda

unread,
Jul 7, 2012, 4:49:44 AM7/7/12
to ope...@googlegroups.com
Alibre is nice, but the PE edition is quite limited. They force you to
upgrade to Pro if you want to do many useful things.

http://www.alibre.com/products/hobby/features.asp

For instance, you can only run a 32 bit version of PE. 64 bit requires
Pro. Additionally, your only real 3D export option is STL. There is no
good way to do interchange with people who do not own the program.

SketchUp, even with all its limitations, is free and cross platform.
Anyone can download, open, modify and export the OpenPnP designs using
it. Another alternative might be to use OpenSCAD, but I would prefer
to do that once I have finished the design. It's easier to work within
SketchUp to finalize a part and then convert it to OpenSCAD.

Jason

David Armstrong

unread,
Jul 7, 2012, 5:44:58 AM7/7/12
to openpnp
I do use FreeCAD at times, although I'm spoilt with a full SolidWorks and CAM setup, but I do feel that FreeCAD gives a good balance between being easy to use and a bit better than a drawing package. At least converting files to CAM systems etc. is painless.

I appreciate the point of looking for and using free tools, which can make the decisions not ideal, but I'd say FreeCAD is a good starting point.

Once the designs are reasonable I don't mind making a SolidWorks package to complement, if it's requested.
 

 

Jason von Nieda

unread,
Jul 7, 2012, 11:41:51 PM7/7/12
to ope...@googlegroups.com
Hi all, I just hit a major milestone with feeder vision so I thought I
would show my progress:

http://screencast.com/t/3e6ROh7o1we

This shows two tape feeders using vision to re-center over their
respective parts. What is happening here is:

1. I press the Feed button for a Feeder.
2. The Head moves to where it expects the part to be, based on the
original setup of the feeder.
3. The Feeder is asked to perform the feed operation.
4. FOR TESTING ONLY: The Feeder adds a random error to the position
and moves the head. This simulates the drag tape feeder error.
5. The Feeder runs a template match using the previously recorded part image.
6. The template match tells the Feeder how far off of center the part
is located.
7. The Feeder re-centers over the part using the vision data.

In normal operation, what will happen is that during the first feed
operation for a particular feeder, the feeder will perform the vision
upfront, feed the tape and then perform the vision again. Front
loading the vision operation in this way allows us to then only
perform the vision after each feed operation, saving a second move to
the pick location. The vision offsets will be used to determine where
the feed hole should be for the next feed operation.
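
To illustrate steps 5 and 6 above, a hedged Python/OpenCV sketch of a template match that returns a physical offset (OpenPnP itself does this through JavaCV; the 0.7 confidence floor is an arbitrary example value):

import cv2

def template_offset_mm(frame_gray, template_gray, units_per_pixel_mm):
    # Slide the previously recorded part image over the current frame.
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < 0.7:  # arbitrary confidence floor; tune per setup
        return None
    # Center of the best match, in pixels.
    th, tw = template_gray.shape[:2]
    match_cx = max_loc[0] + tw / 2.0
    match_cy = max_loc[1] + th / 2.0
    # Offset from the frame center, converted to millimetres.
    fh, fw = frame_gray.shape[:2]
    return ((match_cx - fw / 2.0) * units_per_pixel_mm,
            (match_cy - fh / 2.0) * units_per_pixel_mm)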

This milestone also represents the first use of OpenCV instead of
RoboRealm for vision operations. RoboRealm turned out to be too
complex to use for this task so I spent quite a bit of time getting
OpenCV integrated with the project. We now have full access to OpenCV
via JavaCV in the OpenPnP project.

There are still a few days of work to get this finished, but the hard
part is now done. The rest is just polish.

Jason

Richard Spelling

unread,
Jul 7, 2012, 11:55:17 PM7/7/12
to ope...@googlegroups.com
Can't see anything at your link, but cool beans.

Note, even my powered feeders don't always stop in *exactly* the same
spot. It has to do with detecting the teeth, which are angled, and
eccentricities (wobble) in the cog wheel.

It's minor, and it cycles per revolution, but your vision system should
compensate without problems.

Looking forward to converting to your software. Maybe Zippy Mark II will
use MakerSlide, my feeders and head, an Arduino Mega/RAMPS, and your
software.

I'll have to make it work on Linux first, though. :-) I don't do that
Microsoft stuff. Mac OS X is basically FreeBSD (a Unix flavor), so it
shouldn't be too awfully hard.

Anyway, great progress.

Jason von Nieda

unread,
Jul 8, 2012, 12:22:54 AM7/8/12
to ope...@googlegroups.com
On Sat, Jul 7, 2012 at 8:55 PM, Richard Spelling
<r...@richardspelling.com> wrote:
> can't see anything at your link, but cool beans.

It should take you to a large Flash movie with a big play button in
the middle. Hit the play button to watch. Is that not showing up for
you?

> note, even my powered feeders don't always stop in *exactly* the same
> spot. it has to do with detecting the teeth, which are angled, and
> eccentricities (wobble) in the cog wheel.

Depending on how far off it is, it might not be worth the added time
to do feeder vision, but next on the list is bottom vision which would
detect any offset of the picked up part and that would be great for
your feeders.

> I'll have to make work it on Linux first, though. :-) I don't do that
> Microsoft stuff. Mac OS10+ is basically FreeBSD (Linux/Unix flavor), so
> shouldn't be to awful hard.

OpenPnP should run fine on Linux. The only thing that is missing
currently is a serial driver, but the next alpha release will include
a Linux one for that. The OpenCV distribution I am using includes
Linux binaries as well.

> anyway, great progress.

Thanks!

Richard Spelling

unread,
Jul 8, 2012, 1:06:14 AM7/8/12
to ope...@googlegroups.com
Didn't get along with Firefox; worked around it.

Sweet, quite impressive.

I noticed some of the parts are rotated (slightly) in the tape pockets.
Can your software do rotational compensation as well? (Not a request, just
curious.)

Yeah, bottom vision would speed things up.

I like to keep things simple: USB microscope or borescope camera on the
head, use it for down vision, then move over a mirror and use the same
camera for up vision. :-)

You could even use two mirrors, at 90 degrees from each other, 45 degrees
off the table, distance between mirrors = camera offset to pickup needle.

That way you will always be looking straight "up" at the part... :-)

Jason von Nieda

unread,
Jul 8, 2012, 1:57:40 AM7/8/12
to ope...@googlegroups.com
On Sat, Jul 7, 2012 at 10:06 PM, Richard Spelling
<r...@richardspelling.com> wrote:
> didn't get along with firefox, worked around it.

Great!

> I noticed some of the parts are rotated (slightly) in the tape pockets.
> can your software do rotational compensation as well? (not a request, just
> curious)

If it doesn't, it will. I'm actually not sure right now :) The
algorithm I was using in RoboRealm did, and it worked great. I am
still not 100% sure what the best path under OpenCV is but I fully
expect to find one that is rotation invariant. I'll probably be asking
the group with a more detailed question about that soon.

> I like to keep things simple, usb microscope or bore scope camera on the
> head, use for down vision, then move over a mirror and use same camera for
> up vision. :-)

I agree, this is probably a great way to do it. My design does not
currently account for the concept of one camera being both Up and Down
facing, so I will need to think about that.

Do you have a particular camera that you like or recommend? I'm
shopping for one.

> you could even use two mirrors, at 90 degrees from each other, 45 degrees
> off table, distance between mirrors = camera offset to pickup needle.

Yep, I think this would be ideal. Having a single mirror would make
the vision part much trickier, although it could probably be worked
out.

Jason

Bryan

unread,
Jul 8, 2012, 7:20:17 AM7/8/12
to ope...@googlegroups.com
Very cool Jason! So neat to see things progressing so fast.

As for the mirror(s) idea ... I'll have to 'reflect' on that for a while. :-P Seriously, the two mirror idea sounds like it should work. It'll come down to which is easiest to procure and attach, I suppose -- mirror or another camera. (Me being so useless with mechanics, they both sound about as difficult to me!)

Bryan.

David Armstrong

unread,
Jul 8, 2012, 8:16:01 AM7/8/12
to openpnp
Great work Jason,
Just about the way I envisioned it...

I'm looking at using a stepper drive on the pickup needle to be able to rotate the part to the placement pad.
I'd think (I may be wrong of course) my idea of the action is: pick the part at its centre point; at this point the camera has the pickup location, the centre and the rotational information, and I'd also presume the location of, say, pin one or an ident mark. Then, once the part has travelled to the placement location, the camera can verify the pin layout and match the part to that location by rotating the pickup, and then place.

If I'm correct, or if this is feasible, then I don't see a need for a second camera.

Or, if using a second camera, this could be done using a prism at the side of the pickup vac, swinging a lens underneath for the camera to pick up whilst on its way to placement. My thoughts are to have an infrared LED light source at the pickup; this should give a better, enhanced location picture as the camera is usually more sensitive to IR.

Just my ideas anyhow, to throw in the pot.

Dave
 

Richard Spelling

unread,
Jul 8, 2012, 9:20:48 AM7/8/12
to ope...@googlegroups.com
I was intending to use a borescope camera from eBay on mine, and even
printed a holder for it. Turns out it was like 35 degrees off from pointing
directly down. Annoying.

But the concept is still valid: borescopes are designed for close
vision, and Ubuntu picks it up as a webcam. Plus the head is tiny and a
simple cylinder, easy to print a mount for.
>>>> "If at first the idea is not absurd, then there is no hope for it.�-
>>>> Albert Einstein
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "OpenPnP" group.
>>>> To post to this group, send email to ope...@googlegroups.com.
>>>> To unsubscribe from this group, send email to
>>>> openpnp+u...@googlegroups.com.
>>>> For more options, visit this group at
>>>> http://groups.google.com/group/openpnp?hl=en.
>>>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups
>>> "OpenPnP" group.
>>> To post to this group, send email to ope...@googlegroups.com.
>>> To unsubscribe from this group, send email to
>>> openpnp+u...@googlegroups.com.
>>> For more options, visit this group at
>>> http://groups.google.com/group/openpnp?hl=en.
>>>
>>>
>>>
>>
>>
>> --
>> Visit my online store for solar electronics: http://www.spellingbusiness.com
>> ------------------------------------------------------------------------------
>> "If at first the idea is not absurd, then there is no hope for it.�-
>> Albert Einstein
>>
>> --
>> You received this message because you are subscribed to the Google Groups "OpenPnP" group.
>> To post to this group, send email to ope...@googlegroups.com.
>> To unsubscribe from this group, send email to openpnp+u...@googlegroups.com.
>> For more options, visit this group at http://groups.google.com/group/openpnp?hl=en.
>>
>

--
Visit my online store for solar electronics: http://www.spellingbusiness.com
------------------------------------------------------------------------------

Richard Spelling

unread,
Jul 8, 2012, 9:25:08 AM7/8/12
to ope...@googlegroups.com
The parts are always oriented the same way in the tape, well, at least
the ones that orientation matters on.

I simply have a "tape orientation" entry in the "tape positions" database.

I assume that with the camera system, since we are training it, it doesn't matter.

I like the IR diode and IR filter idea. I use IR lasers to activate my
feeders. Of course, the halogen lights in my shop (and incandescent
flashlights) put out enough IR to trip them. <sigh>
>> >>> "If at first the idea is not absurd, then there is no hope for it.�-
>> >>> Albert Einstein
>> >>>
>> >>> --
>> >>> You received this message because you are subscribed to the Google
>> >>> Groups "OpenPnP" group.
>> >>> To post to this group, send email to ope...@googlegroups.com.
>> >>> To unsubscribe from this group, send email to
>> >>> openpnp+u...@googlegroups.com.
>> >>> For more options, visit this group at
>> >>> http://groups.google.com/group/openpnp?hl=en.
>> >>>
>> >>
>> >> --
>> >> You received this message because you are subscribed to the Google
> Groups
>> >> "OpenPnP" group.
>> >> To post to this group, send email to ope...@googlegroups.com.
>> >> To unsubscribe from this group, send email to
>> >> openpnp+u...@googlegroups.com.
>> >> For more options, visit this group at
>> >> http://groups.google.com/group/openpnp?hl=en.
>> >>
>> >>
>> >>
>> >
>> >
>> > --
>> > Visit my online store for solar electronics:
> http://www.spellingbusiness.com
>> >
> ------------------------------------------------------------------------------
>> > "If at first the idea is not absurd, then there is no hope for it.�-
>> > Albert Einstein
>> >
>> > --
>> > You received this message because you are subscribed to the Google
> Groups "OpenPnP" group.
>> > To post to this group, send email to ope...@googlegroups.com.
>> > To unsubscribe from this group, send email to
> openpnp+u...@googlegroups.com.
>> > For more options, visit this group at
> http://groups.google.com/group/openpnp?hl=en.
>> >
>>
>> --
>> You received this message because you are subscribed to the Google
> Groups "OpenPnP" group.
>> To post to this group, send email to ope...@googlegroups.com.
>> To unsubscribe from this group, send email to
> openpnp+u...@googlegroups.com.
>> For more options, visit this group at
> http://groups.google.com/group/openpnp?hl=en.
>>
>
> --
> You received this message because you are subscribed to the Google
> Groups "OpenPnP" group.
> To post to this group, send email to ope...@googlegroups.com.
> To unsubscribe from this group, send email to
> openpnp+u...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/openpnp?hl=en.

--
Visit my online store for solar electronics: http://www.spellingbusiness.com
------------------------------------------------------------------------------

David Armstrong

unread,
Jul 8, 2012, 9:49:18 AM7/8/12
to openpnp

Don't know whereabouts you are located, Richard, but if you need anything machining, I can soon sort it and pop it in the post, so don't worry if you need anything specific. I'm in the UK.

Dave
 

David Armstrong

unread,
Jul 8, 2012, 10:19:32 AM7/8/12
to openpnp
Richard, could you pulse the IR light and then measure the frequency to enable the trip point etc.?

Not sure on your schematic etc., but if you're using a PIC it should be a bit easier (if you need a hand let me know). Or shield the receiver. Another way is to use a dual TX and RX IR sensor and just place a reflector on the pickup to reflect the light back on activation. I use this idea quite a bit on equipment; you'll find the activation point is around 3/16 of an inch in front of the sensor.

It also means you don't get other light sources bothering it.
Why use an IR laser? A simple IR LED will do (or is that what you mean?)

Dave
 
Richard Spelling

unread,
Jul 8, 2012, 10:51:25 AM7/8/12
to ope...@googlegroups.com
I could pulse it or something, but that makes it complicated. Easier to
just turn out the lights. Eventually I'll replace that overhead with
fluorescent.

I use a single (dual) J-K flip-flop chip, quantum enfolding, and a paper
clip.

I trip it about 3 inches from the feeder. Sure, I could have used an IR
diode if I could have gotten close to it, but the circuit board is
mounted next to the wheel, and I didn't want to have to futz with a
remote sensor.

Plus, the laser is cooler, about the same price as a directional LED,
and I don't have to get close to trip only one of them.

My red and green laser pointers will trip the feeders. Originally I
wanted to use a visible one, but the laser modules in red won't trip it.
Bummer.

I thought of using a reflective photo interrupter as you suggest, but
again, I would have to get close to it.

Karl Backström

unread,
Jul 9, 2012, 10:27:08 AM7/9/12
to ope...@googlegroups.com
Great work Jason!

I pulled the latest source from the tree and after removing ~/.openpnp/ it
started correctly.
I am very eager to try the new functionality on my P&P hardware. I saw
there was not yet any wizard to add an OpenCV camera. Could you share
your settings file so I can see if it is possible to manually get it
to work?

Best regards,
Karl

Jason von Nieda

unread,
Jul 9, 2012, 11:41:51 AM7/9/12
to ope...@googlegroups.com
Hi Karl,

You should be able to add an OpenCVCamera by going to the Cameras tab,
hitting New Camera and then selecting it from the list. Please be
aware that the OpenCVCamera is brand new and does not yet have a
configuration system in place, so if it works at all you will only get
your computer's default camera. I'll be fleshing the OpenCVCamera out
more in the next day or two.

Jason

Karl Backström

unread,
Jul 9, 2012, 12:00:16 PM7/9/12
to ope...@googlegroups.com
Hi Jason,

Thanks for the quick reply.
I did just that but I didn't realize I got an error in the console:
backstrom$ ./openpnp.sh
java.lang.NullPointerException
at com.googlecode.javacv.FrameGrabber.create(FrameGrabber.java:101)
at com.googlecode.javacv.FrameGrabber.createDefault(FrameGrabber.java:124)
at org.openpnp.machine.reference.camera.OpenCvCamera.<init>(OpenCvCamera.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
[SNIP]

Do I need anything additional installed to use it? Libraries etc?
Also, have you tried using multiple USB cameras with OpenCV? I guess
that was a problem with the old implementation?

Sorry for being so eager to try this out :)

/ Karl

Jason von Nieda

unread,
Jul 9, 2012, 12:06:30 PM7/9/12
to ope...@googlegroups.com
Sorry, I forgot! You will need OpenCV 2.4.0 or better installed in
your system path for OpenPnP to find it.

You should be able to find OpenCV binaries from here:
http://opencv.willowgarage.com/wiki/

I have not yet tried using it with more than one camera. I hope that's
not a limitation, otherwise I've got more work to do :)

Jason

Karl Backström

unread,
Jul 9, 2012, 12:42:03 PM7/9/12
to ope...@googlegroups.com
Thanks, that was it!
I had OpenCV 2.3.1 installed but a "brew update; brew install opencv"
later and the camera is working.

While at it I changed "fg = FrameGrabber.createDefault(0);" to (1) and
it happily showed the picture from my secondary camera :)
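
The device index is the same idea in plain OpenCV terms: for anyone trying to work out which index maps to which USB camera, a small Python sketch (not OpenPnP code) that probes the first few indices:

import cv2

def list_cameras(max_index=5):
    # Try each device index and keep the ones that open and deliver a frame.
    available = []
    for index in range(max_index):
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            ok, _ = cap.read()
            if ok:
                available.append(index)
        cap.release()
    return available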

Great work!

/ Karl

Jason von Nieda

unread,
Jul 9, 2012, 12:45:06 PM7/9/12
to ope...@googlegroups.com
Great! Glad to hear it!

I'll be working on making this configurable this evening and then
we'll be able to try it with multiple cameras at once.

Jason

mojalovaa1

unread,
Feb 13, 2016, 11:36:55 AM2/13/16
to OpenPnP
Jason, can you please tell me about the up-looking camera and part centering options: do they work in OpenPnP now, or are they planned to work at some point?
Can you please explain how to set up the down-looking camera so it can do its job correctly in OpenPnP?
 

Jason von Nieda

unread,
Feb 13, 2016, 2:12:40 PM2/13/16
to OpenPnP
Moja,

You have surely set up a down-looking camera in OpenPnP before. What specifically do you need help with?

Jason



mojalovaa1

unread,
Feb 13, 2016, 2:32:22 PM2/13/16
to OpenPnP
 
I am asking about the up-looking camera: can that camera work in an OpenPnP job at the moment, for things like centering the nozzle position, looking at part pins and similar?
My machine has an up-looking camera and I haven't been able to get it working in OpenPnP. If it can work, what can it do and how do I set it up?

Jason von Nieda

unread,
Feb 13, 2016, 2:33:30 PM2/13/16
to OpenPnP
Moja,

Sorry, no, that feature is still under development. You can track it here: https://github.com/openpnp/openpnp/issues/104

Jason



mojalovaa1

unread,
Feb 13, 2016, 2:37:10 PM2/13/16
to OpenPnP
Then I suggest  this not will on options some time ?

Jason von Nieda

unread,
Feb 13, 2016, 2:38:46 PM2/13/16
to OpenPnP
Moja,

I'm sorry, I don't understand. Can you expand on what you mean?

Jason



mojalovaa1

unread,
Feb 13, 2016, 2:49:16 PM2/13/16
to OpenPnP
Sorry, my English is a disaster.

I am asking when that option will be usable on the machine, so that it can be used for real work.
My machine has an up-looking and a down-looking camera. The down-looking one is working, but I can't get the up-looking camera running yet. Cri.s helped me, but he says it is not possible at this moment to use that camera for real work.
Can you please tell me more about how to use that camera, whether I can use it, and if so, what the main use for that camera is?
 

Jason von Nieda

unread,
Feb 13, 2016, 2:53:28 PM2/13/16
to OpenPnP
Moja,

I can't say when it will be working. It's my primary development focus right now and I am slowly making progress but I have no idea when I'll be done. The latest will be by Maker Faire in May, but I hope to be done well before that. This is the next major new feature for OpenPnP, so when it's finished I will make an announcement and include instructions for using it.

Jason



mojalovaa1

unread,
Feb 13, 2016, 3:35:04 PM2/13/16
to OpenPnP
 
Jason, how hard is it to make the up-looking camera work like the down-looking camera, just so that it can center the nozzle and components?

That is the first thing that would be usable right away.

Jason von Nieda

unread,
Feb 13, 2016, 3:47:58 PM2/13/16
to OpenPnP
Moja,

That is what I am working on. I don't know how long it will take, but I will let you know when it's finished.

Jason



mojalovaa1

unread,
Feb 13, 2016, 6:22:30 PM2/13/16
to OpenPnP

Robert Walter

unread,
Feb 14, 2016, 12:02:21 AM2/14/16
to OpenPnP
Jason,

#1, Really cool idea to determine drag feeder position using the previous image. Still a little confused as to how you can determine hole position accurately as the part may be somewhat skewed or misaligned in the pocket (front / rear / twisted), but I guess when it comes to clear tapes, close is better than nothing. A tapered drag pin should be able to accommodate the inaccuracies of the part in the pocket. From your initial tests, your results look fantastic!

#2, I am by no means an expert in OpenCV, but I did do some late night reading on the library, and as you likely already know, there are almost infinite ways to do just about anything. I do have considerable experience in industrial vision, so I can only put my $0.02 worth into the opinion pile. 

I started doing some testing with openCV and some raw image files using Python (it's what I am familiar with, but should be very easy to port to Java). Template matching will work, but as you said, you will need reference images, or to create descriptor files from templates to help increase speed.

What I found works quite well (limited testing) is to create a universal centering routine that is component agnostic. Using a quick test routine, I did the following:

 1) Use OpenCV thresholding to create a binary representation of the component over the camera. This will make all the pins white, and everything else black. You may end up with a white hole in the middle of the component body due to reflection of the lighting, but we will take care of this in steps 2-4.

2) Copy the image and do a white flood fill to create a mask from the binary image. This way, everything outside of the component body is white in the mask. It removes the pins and only leaves the body, with the hole in the middle.

3) Invert the mask so that everything white becomes black, and everything black becomes white

4) OR the mask with the binary image created in step #1. This fills in any holes in the body due to reflections.

5) Now we have a reasonably clean binary image of the component body and pins. Background is white, Pins and Component Body are black.

6) Use contour search to find all pins and body perimeters

7) Add all contour objects to an array and feed into the OpenCV MinAreaRect function. This creates the smallest rectangle object that can encompass all of the pins and body. The rectangle is automatically rotated to best fit the pins and body, and returns a set of center co-ordinates, the length and width of the rectangle, and the rotation angle.

8) We already know the nozzle co-ordinates, we just need to use the rectangle center co-ordinates to determine offset, and the rotation angle is absolute. We can now re-position the component to obtain center.
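
For concreteness, a minimal Python/OpenCV sketch of steps 1-8 (this is not the actual test routine; the threshold value and the assumption that the thresholded component comes out white on a black background are illustrative):

```python
import cv2
import numpy as np

def find_part_center(gray):
    # Step 1: threshold; assume the component (pins and body) comes out
    # white on a black background. The 100 is a placeholder value.
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)

    # Steps 2-4: flood fill a copy from the corner so everything outside
    # the component becomes white, invert it, and OR it with the original
    # binary image to fill any reflection holes inside the body.
    flooded = binary.copy()
    h, w = flooded.shape
    cv2.floodFill(flooded, np.zeros((h + 2, w + 2), np.uint8), (0, 0), 255)
    filled = binary | cv2.bitwise_not(flooded)

    # Step 6: find the outer contours of the pins and body.
    # ([-2] keeps this working across OpenCV 2/3/4 return signatures.)
    contours = cv2.findContours(filled, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None

    # Steps 7-8: fit the smallest rotated rectangle around all contour
    # points; its center and angle give the offset and rotation to correct.
    points = np.vstack([c.reshape(-1, 2) for c in contours])
    (cx, cy), (rw, rh), angle = cv2.minAreaRect(points)
    return (cx, cy), (rw, rh), angle
```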

I tried with some TQFN / SOT-23 / SO16 images, and it worked perfectly. I just snagged the images from google so not ideal, but will try and snap some images from my PNP this week and try with some real world images. This way I can play with lighting and contrast to get some better results. But so far, no configuration per part other than adjusting contrast and thresholds based on image input.

The rectangle object can also be easily converted to a box object and superimposed on the image to show the component outline.

Anyhow, I will keep toying around, but please feel free to ask any questions. I can email you a copy of the Python code once I clean it up a little.

Rob.

Jason von Nieda

unread,
Feb 14, 2016, 12:56:29 AM2/14/16
to OpenPnP
Hi Robert,

Sounds like an interesting algorithm. Please do post some examples when you have a chance.

I've had pretty good luck with a similar algorithm, although not quite as complete as yours. What's actually standing in my way right now isn't the CV portion, though, it's more logistical. I just finished building a good uplooking camera mount with lighting integrated and got that mounted, and now I am working on adding exposure control to the camera system. Exposure control is pretty critical here as most of the cheap USB webcams we use tend to have auto exposure and it tends to make bad decisions. 

Unfortunately, OpenCV's camera capture system doesn't support exposure control, at least on Mac. So I'm working on a new UVC based camera driver that lets us have better control.

Anyway, my goal is to get past these initial hurdles in the coming weeks and then when I finally get to the CV part I hope to put together a very basic reference implementation and then hopefully others with more CV experience can jump in and make improvements to the CV part without having to worry about the complexities of the rest of the system.

Jason



Cri S

unread,
Feb 14, 2016, 1:58:43 AM2/14/16
to OpenPnP
If you want help with OpenCV, drop me a mail, but send me real images from the camera. Also tell me whether the camera has manual exposure control or not, as this requires a different algorithm.

Jason von Nieda

unread,
Feb 14, 2016, 2:20:29 AM2/14/16
to OpenPnP
Thanks Cri, I will certainly do that once all the pieces are in place.

Jason



mojalovaa1

unread,
Feb 14, 2016, 5:59:43 AM2/14/16
to OpenPnP
Hi Robert

Are you using an up-looking camera?

Malte R.

unread,
Feb 14, 2016, 2:09:20 PM2/14/16
to OpenPnP
Hi Jason,

not sure if this helps but Adam (wayoutwest) from the LitePlacer community had a lot of success with libuvc:
https://int80k.com/libuvc/doc/

I believe there are several open source Java wrappers available for that.

Apparently the lib allowed him to override auto exposure and set it manually on popular Logitech C270 and other cameras.

Find related discussion here:
http://liteplacer.com/phpBB/viewtopic.php?f=4&t=144&p=935&hilit=uvc#p935

Regards
Malte

Jason von Nieda

unread,
Feb 14, 2016, 2:30:05 PM2/14/16
to OpenPnP
Hi Malte,

Yes, libuvc is what I am using. Unfortunately the Java wrappers that I've been able to find don't work well, so I wrote (generated, really) a new one. 


There's really just some cleanup and polish to do before it's ready. I've been using it for a few weeks now.

Jason



Anatoly

unread,
Feb 17, 2016, 10:45:40 AM2/17/16
to OpenPnP
Hi, guys.
Hi, Robert

I've been reading this forum for a long time but decided to write only now.

On 14 Feb 2016 at 12:02:21 UTC+7, Robert Walter wrote:
...

5) Now we have a reasonably clean binary image of the component body and pins. Background is white, Pins and Component Body are black.

I have tried to implement this algorithm with OpenCV, and I think that only the pins should be left on the image. Look at this picture:
- the two upper screens show the center and the angle computed using pins and body
- the two lower screens show the center and the angle computed using pins only

(image: https://lh3.googleusercontent.com/-CRdLCYLjTKE/VsSJSPgH5fI/AAAAAAAAAAk/HLC5JrNQh9w/s1600/Angle_and_Position.png)

Jason von Nieda

unread,
Feb 17, 2016, 11:00:47 AM2/17/16
to OpenPnP
Hi Anatoly,

Welcome to the forum! This looks very nice. Can you share the code or the algorithm?

Jason


Robert Walter

unread,
Feb 17, 2016, 3:00:53 PM2/17/16
to OpenPnP
Anatoly / Jason


You are right Anatoly, you may get a better analysis using pins only, but in reality, this is just a threshold value, which should be adjustable for each component type in OpenCV once implemented. By adjusting this value, you should be able to easily isolate pins from body....

But it really all comes down to component type, number of pins and body color. I use my fair share of optocouplers / SSRs that are white-bodied, so it is next to impossible to isolate the body from the pins.

We can get somewhat fancy and start analyzing each of the contour sizes using OpenCV and comparing them to a user-entered pin size, thus ensuring pin-only centering, but as Jason has mentioned, getting a usable framework is the priority.
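
As a rough idea, once a pin size is entered, isolating the pins could be a simple filter over the contour list before the MinAreaRect step (a sketch; min_area / max_area are hypothetical values derived from the user-entered pin size):

```python
import cv2

def pins_only(contours, min_area, max_area):
    # Keep only contours whose area falls in the expected pin range,
    # dropping the body blob (and any noise) before fitting the rectangle.
    return [c for c in contours
            if min_area <= cv2.contourArea(c) <= max_area]
```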

In the end, I think the threshold as well as the minimum and maximum length and width of the component should be configurable per component, so that we not only capture the component for centering / rotation, but can also do a pass / fail test for size. This way, if we get a completely over / under exposed or erroneous image that is not within a reasonable tolerance of the actual component size, we can fail the part, dump it, re-feed and re-pick.

Maybe implementing min / max detection contour dimensions would also be advisable in the base UI framework, and we can choose to implement the pins-only functionality later.
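
A minimal sketch of that kind of pass / fail size test, assuming the rotated rectangle from MinAreaRect and a known mm-per-pixel scale (the names and the 20% tolerance are placeholders):

```python
def size_ok(rect, expected_w_mm, expected_h_mm, mm_per_pixel, tol=0.2):
    # rect is the (center, (w, h), angle) tuple returned by cv2.minAreaRect.
    (_, _), (w_px, h_px), _ = rect
    # Convert to millimetres and sort both pairs so orientation doesn't matter.
    measured = sorted((w_px * mm_per_pixel, h_px * mm_per_pixel))
    expected = sorted((expected_w_mm, expected_h_mm))
    # Fail the part if either dimension is outside the fractional tolerance.
    return all(abs(m - e) <= e * tol for m, e in zip(measured, expected))
```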

Also, having as much camera / lighting control as possible for each component (brightness / exposure / light intensity) through the camera API would be awesome. Then we can get the best possible picture for each component and just worry about the threshold to get the ideal binary image. From there it is just contour validation...

Rob.


Cri S

unread,
Feb 17, 2016, 5:38:13 PM2/17/16
to ope...@googlegroups.com
There is a difference in the center of gravity (COG) between the package outline and the pins. Layout software generally defines library footprints with the COG of the pins as zero. One example is a micro USB connector: the COG of the pins is a lot different from the COG of the body.


Robert Walter

unread,
Feb 17, 2016, 11:15:28 PM2/17/16
to OpenPnP
Cri S,

I get what you are saying, but....

Would it not make more sense to get a simple version of vision working, with a framework that can support expansion / modification?

Center of gravity handling for non-standard components would be a neat feature, but it is just some math and offsets. I don't think it is all that hard to do, but someone will have to sit down and come up with a clean algorithm, a set of parameters, and the formulas to shift the position when rotating around a point on the object that is not its center. It would be simple to find the physical center and rotational error of the component when picked at the COG; however, one would need to know the distance between the physical center and the COG to translate that into the offsets for the final placement.
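
For what it's worth, the translation itself is just a 2D rotation: if the offset from the pick point (pin COG) to the desired placement reference is known with the part at 0 degrees, it only needs to be rotated by the measured angle before being added to the placement position. A sketch (names are illustrative):

```python
import math

def rotated_offset(dx, dy, theta_deg):
    # dx, dy: offset from the picked point (pin COG) to the placement
    # reference, measured with the part at 0 degrees.
    # theta_deg: rotation the part will be placed at.
    t = math.radians(theta_deg)
    # Standard 2D rotation of the offset vector.
    return (dx * math.cos(t) - dy * math.sin(t),
            dx * math.sin(t) + dy * math.cos(t))
```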

I am not against this, but using the right nozzle, you can usually place most components without the need to take COG into account. I would bet that most users at the moment would be extremely happy to get some form of bottom vision working to place 99% of their components accurately.

Rob.

mojalovaa1

unread,
Feb 18, 2016, 12:02:58 AM2/18/16
to OpenPnP
Anatoly, if you can, please make a free version of your program available so we can test it on images and see how it works with other cameras.
Could you make a version that is open source, so we have a starting point to develop it together?
Jason, likewise, if you have a test program for alignment with the up-looking camera, or any open source code as a starting point for development, that would be nice.

Robert Walter

unread,
Feb 18, 2016, 12:51:34 AM2/18/16
to OpenPnP
Jason,

Here is my python code attached. Pretty simple, but it does work.

It should be easily ported to Java. I am just getting my feet wet in OpenCV, so I will try and get more familiar as time permits.

Hope it helps.


Rob.

openCV_icCenterRot.py

Anatoly

unread,
Feb 18, 2016, 2:38:09 AM2/18/16
to OpenPnP
Hi, guys


 On 17 Feb 2016 at 23:00:47 UTC+7, Jason von Nieda wrote:
 Can you share the code or the algorithm?

Yes, you can download the zip here: http://rghost.ru/68ZPSDBtx
In this archive you will find:
- AnglePos3.exe - an executable for Windows XP 32-bit (tested) and Windows 7+ (not tested)
- the OpenCV libraries (*.dll)
- some test pictures (*.png)
- the source code, AnglePos3.pb, written in the PureBasic language (https://www.purebasic.com) with the OpenCV addition I took from http://www.purebasic.fr/english/viewtopic.php?f=12&t=57457

I wanted to build my own machine in 2008, but circumstances changed and I stopped work on the project.
However, my interest in pick and place machines remained, and I keep reading forums about them.

Good luck!

Anatoly

unread,
Feb 18, 2016, 2:42:14 AM2/18/16
to OpenPnP
One addition: the pictures must be 640x480 pixels!

Cri S

unread,
Feb 18, 2016, 3:43:18 AM2/18/16
to OpenPnP
This simple code works well with cameras that have manual exposure control. If we can assume that type of camera, things get a lot simpler.

Yes, there is even more advanced code translated to Java, and even an interface for an external program to allow rapid prototyping. Drop me a mail with some example images.

Anatoly

unread,
Feb 18, 2016, 7:25:13 AM2/18/16
to OpenPnP
You can set camera parameters with 
cvSetCaptureProperty

The parameters are:
CV_CAP_PROP_FRAME_WIDTH
CV_CAP_PROP_FRAME_HEIGHT
CV_CAP_PROP_BRIGHTNESS
CV_CAP_PROP_CONTRAST
CV_CAP_PROP_SATURATION
CV_CAP_PROP_HUE
CV_CAP_PROP_GAIN
CV_CAP_PROP_EXPOSURE
CV_CAP_PROP_SHARPNESS
CV_CAP_PROP_GAMMA

All my webcams support these settings.
My last cam is UVC ELP 1280x720
www.aliexpress.com/item/Full-HD-H-264-720p-OV9712-Mjpeg-YUY2-uvc-micro-mini-cmos-usb-camera-module/32259206697.html?spm=2114.01010208.3.47.R8BT9Y&ws_ab_test=searchweb201556_2,searchweb201644_5_505_506_503_504_502_10001_10002_10017_10010_10005_10011_10006_10012_10003_10004_10009_10008,searchweb201560_2,searchweb1451318400_-1,searchweb1451318411_6449&btsid=d7cc72b2-416b-4b84-b757-4413c2171e89
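
For reference, with the newer cv2 Python bindings the same properties are set through VideoCapture.set(); whether the backend actually honors them depends on the OS and driver (as Jason found on the Mac), and the exposure units and auto-exposure codes vary between cameras. A minimal sketch:

```python
import cv2

cap = cv2.VideoCapture(0)                 # camera index is machine-specific
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
# Many UVC cameras need auto exposure disabled before a manual value is
# accepted; the value that means "manual" differs between drivers.
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
cap.set(cv2.CAP_PROP_EXPOSURE, -6)        # units and range vary by driver
cap.set(cv2.CAP_PROP_GAIN, 0)
ok, frame = cap.read()
```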

Jason von Nieda

unread,
Feb 18, 2016, 1:13:20 PM2/18/16
to ope...@googlegroups.com
Thanks for posting this Anatoly!

Jason



mojalovaa1

unread,
Feb 18, 2016, 2:34:34 PM2/18/16
to OpenPnP
I see that there is some difference between what the camera sees and what is actually on the PCB; I suspect the problem is lens distortion?

Here are some interesting links, maybe you have a suggestion?

The lens distortion makes a radius of 11.2 mm look like a distance of 11.9 mm.
Does the program have a correction for that?

http://lensfun.sourceforge.net/calibration-tutorial/lens-distortion.html
http://www.theiatech.com/calculator/

http://paulbourke.net/miscellaneous/lenscorrection/

Jason von Nieda

unread,
Feb 18, 2016, 2:39:49 PM2/18/16
to OpenPnP
Hi Moja,

OpenPnP has the ability to apply lens distortion correction, but doesn't yet have a way to generate the parameters. This must currently be done by hand and it's somewhat complex. I intend to add support for this but it's low on the list right now.

If you can generate your lens distortion parameters using OpenCV you can plug them into OpenPnP in the machine.xml. The parameters are the ones documented here http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
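
For anyone who wants to try it, a rough sketch of generating those parameters with OpenCV's chessboard calibration, following the tutorial above (the 9x6 pattern size and the file glob are assumptions; the resulting camera matrix and distortion coefficients are the values to copy into machine.xml):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                          # inner corners of the printed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for name in glob.glob("calib_*.png"):     # snapshots of the board from the machine camera
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix holds fx, fy, cx, cy; dist_coeffs holds k1, k2, p1, p2, k3.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print(rms, camera_matrix, dist_coeffs)
```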

Jason



Anatoly

unread,
Feb 18, 2016, 9:24:30 PM2/18/16
to OpenPnP
Hi, mojalovaa1


I see that there is some difference between what the camera sees and what is actually on the PCB; I suspect the problem is lens distortion?

mojalovaa1

unread,
Feb 19, 2016, 3:39:41 AM2/19/16
to OpenPnP
Hi Anatoly

I am using an endoscope camera at the moment.
The problem for me is the space for the camera; the camera PCB and lens are about 20x25 mm.
Do you have any problems with centering the lens on the camera for the vision test program?

Anatoly

unread,
Feb 19, 2016, 6:50:55 AM2/19/16
to OpenPnP
Hi, mojalovaa1


I am using an endoscope camera at the moment.


I think your endoscope uses a short-focus lens, so you will always get some geometric distortion.

Do you have any problems with centering the lens on the camera for the vision test program?
