Python PnP


Trampas Stern

unread,
May 5, 2017, 3:22:00 PM5/5/17
to OpenPnP
I have been wanting to improve OpenPnP but have not gotten far enough into the software's architecture to figure out how to add functionality. Maybe someone who knows the software could give me a primer at some point, if they have time...

Until I am able to add functionality to OpenPnP, I figured I would start prototyping some of the vision algorithms and fixes in Python. Of course, then I realized I needed jog controls, a camera view, a reticle, etc., so the code is slowly becoming PnP software on its own. Again, I really don't want to write PnP software, I just want my machine to work...

One thing I needed was the pixels per mm, to implement some of the more advanced vision features. I thought about using an object of known size, but I wanted it to be easy, so I had the software measure it for me. Specifically, since the machine is calibrated with the steps per mm for the stepper motors, if I move the head 1mm I can measure the number of pixels the image moved. This also means I can measure the camera's rotation, the X and Y backlash, and whether the vertical and horizontal flip of the camera is correct.
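The move-and-measure idea can be sketched with a plain FFT cross-correlation. This is only a sketch of the technique, not Trampas's actual code; the function names are mine, and a real version would also want sub-pixel peak interpolation:

```python
import numpy as np

def measure_shift_px(frame_a, frame_b):
    """Integer-pixel (dy, dx) shift between two grayscale frames,
    found as the peak of the FFT cross-correlation."""
    cross = np.fft.ifft2(np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(cross)), cross.shape), float)
    # shifts past half the frame wrap around; map those to negative values
    wrap = peak > np.array(frame_a.shape) / 2
    peak[wrap] -= np.array(frame_a.shape)[wrap]
    return peak

def pixels_per_mm(frame_before, frame_after, move_mm=1.0):
    """Grab a frame, move the head move_mm, grab another frame, and
    divide the observed pixel displacement by the commanded distance."""
    dy, dx = measure_shift_px(frame_after, frame_before)
    return float(np.hypot(dx, dy)) / move_mm
```

The same (dy, dx) vector gives the extras mentioned above: the angle of the shift for a pure-X move is the camera rotation relative to the machine axes, and its sign tells you whether the image flips are set correctly.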

I have also implemented lens calibration, but I have not gotten the GUI working for that yet, and it is not turned on in the image below.



Again, I really don't want to write PnP software, and I really hate GUI programming, but I do want to have my machine running.


At the moment I have a crude but functional strip feeder setup working, but I need to improve that system a bit more. Currently I am working on importing the board pick and place file and on getting the software to do manual board alignment (offset and rotation, as well as an affine transformation). One nice thing about board alignment is that if you pick 3 or more points, the software should be able to tell you whether your machine is square and whether the steps per mm are correct (assuming the board is correct). Additionally, the affine board transformation should correct any PCB scaling issues. Once I have the board imported and the feeders working, I am hoping I can finally populate a board.
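For the 3-or-more-point alignment, the least-squares affine fit is a few lines of numpy. A generic sketch, not the code from the thread:

```python
import numpy as np

def fit_affine(board_pts, machine_pts):
    """Least-squares 2x3 affine transform A mapping board coordinates to
    machine coordinates, so that machine ~= [x, y, 1] @ A.T.
    Needs at least 3 non-collinear point pairs."""
    src = np.asarray(board_pts, dtype=float)
    dst = np.asarray(machine_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ A = dst
    return A.T                                     # (2, 3)

def apply_affine(A, pts):
    """Transform (N, 2) points by a 2x3 affine matrix."""
    pts = np.asarray(pts, dtype=float)
    return pts @ A[:, :2].T + A[:, 2]
```

With exactly 3 points the fit is exact; with more, the residuals are exactly what reveals whether the machine is square and the steps per mm correct.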



Trampas

www.misfittech.net












dzach

unread,
May 5, 2017, 4:54:12 PM5/5/17
to OpenPnP
@Trampas,
have you checked  https://github.com/openpnp/openpnp/wiki/Scripting ?
There is a Python scripting engine with everything in it. It uses Jython: http://www.jython.org/currentdocs.html.
You can do almost anything with scripts in OpenPnP, as I have myself discovered lately. I prototype some ideas in JavaScript or Beanshell and then try them in Java.
@Cri-s is a regular user of scripts and has published lots of them here and on GitHub, where you can get a good idea of how to proceed. It's a pity to start from scratch when there is so much already available, I believe.

Jason von Nieda

unread,
May 5, 2017, 5:03:09 PM5/5/17
to ope...@googlegroups.com
Hi Trampas,

Do you have any specific parts of the software architecture you'd like explained? 

I believe I've already pointed out https://github.com/openpnp/openpnp/wiki/Developers-Guide#system-architecture which covers the architecture in general terms, so if you have questions that aren't covered there, just ask, and I am happy to provide information.

For instance, if you said "I want to add automatic units per pixel calibration to OpenPnP" I could direct you to the ReferenceCameraConfigurationWizard where that would be pretty easy to add.

Jason


--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/dce07dd4-8e65-41e4-859d-e3d215cad042%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Trampas Stern

unread,
May 5, 2017, 5:47:38 PM5/5/17
to OpenPnP
I have read all the documentation I could find but still have a hard time understanding the code and the abstraction.

My goal was not to implement a camera calibration wizard. Rather, I wanted to get strip feeders working, which required calibration of the camera.
I looked at implementing a new strip feeder class, but I did not understand the abstraction well enough to figure out the call path or how to add the new class.

For me, a block diagram of the classes and abstractions would help. A video code walkthrough would be awesome.

However, it might be that if I cannot understand the code, I should not be modifying it.


Trampas



Jason von Nieda

unread,
May 5, 2017, 6:07:53 PM5/5/17
to OpenPnP
Hi Trampas,

Adding a new type of feeder would roughly consist of:

1. Create a new class that implements the Feeder interface. Typically I start by extending ReferenceFeeder, which gives you most of the interface already.
2. Add your new class to ReferenceMachine.getCompatibleFeederClasses, which is how the UI gets a list of feeders a user can add to their machine.
3. Implement the leftover abstract methods. Eclipse makes this easy: right-click and choose Source -> Override/Implement Methods. The main ones of interest are feed(), which performs the feed operation, and getPickLocation(), which tells OpenPnP where to pick the part.

At that point, when you launch OpenPnP and attempt to add a new feeder, your class will be available to select.

In general, if you want to modify something or add new code, what I recommend is to find the thing most similar to what you want to work on in machine.xml. For instance, if you want to change how a strip feeder works, find that feeder in machine.xml. You'll see a class= attribute. That attribute tells you exactly which class does the work. Then you can go edit that class and make changes. Likewise, if you want to add something new and there is already something similar, just copy the class and go to work.


I have some ideas on how I might put together an introduction to the codebase in video form, so I will see about doing that for my first new video, and if we need to focus in on some things after that, we can.

Jason



Paul Kelly

unread,
May 5, 2017, 6:23:06 PM5/5/17
to ope...@googlegroups.com

For the record, I admit that I hadn’t found the scripting page on the wiki either. Not sure how I missed it, especially given that it answers the top 5 ‘How do I?’ questions that I intended to ask when we get our machine moving…

So I guess that adds ‘every topic on the site’ to your video list, simply because we all have short attention spans. 

Can there be motorbike tricks and girls in the videos? We might stand a chance of watching them to the end then... :-)

 

PK


Matt Brocklehurst

unread,
May 6, 2017, 1:57:14 AM5/6/17
to ope...@googlegroups.com
I do think the "design pattern" (abstract classes, interfaces, reference implementations) used here, whilst uber-flexible, could be a bit daunting if you're not used to it (i.e. if you come from, say, a Python background). The name for this style escapes me. I'm C++ in my day job, so it sits kind of naturally with me.





Sent from my iPhone

Cri S

unread,
May 6, 2017, 6:06:07 AM5/6/17
to OpenPnP
Just out of curiosity, why are you not able to place parts with OpenPnP?

Trampas Stern

unread,
May 6, 2017, 8:30:41 AM5/6/17
to OpenPnP
The strip feeders are problematic for my machine. They work for the first 5 or so parts and then miss parts.

I have tried various workarounds, but with no good results.

Trampas

Jason von Nieda

unread,
May 6, 2017, 8:44:47 AM5/6/17
to OpenPnP
The strip feeder vision code is at https://github.com/openpnp/openpnp/blob/develop/src/main/java/org/openpnp/machine/reference/feeder/ReferenceStripFeeder.java#L256

SadMan on IRC has been experimenting with this recently and had some good results by increasing the blur and playing with the distances a bit. I think there may also be value in capturing a couple of images and combining the results from them all. I notice in my tests that if I am just watching the video with the holes superimposed, sometimes holes disappear for a few frames for no discernible reason. It probably needs better filtering and noise removal.

I also recommend looking at the debug images in log/vision. When debug logging is turned on these are recorded for every run and can tell you a lot about what is going on.

Jason



Cri S

unread,
May 6, 2017, 9:49:28 AM5/6/17
to OpenPnP
If what SadMan suggested works, OK; otherwise post 2 images of the feeder, taken at approx 40mm distance if possible.

Trampas Stern

unread,
May 7, 2017, 9:41:52 AM5/7/17
to OpenPnP
I appreciate the help and support! 

I have thought about it for some time, and decided that it takes a long time to become proficient at a programming language, and Java is not on my list. Note this is not a decision about language semantics, but more about learning the libraries, tools, and "design patterns" for the language. I feel my time is better spent doing what I am doing in Python, and then hoping the good ideas that come out will be picked up by OpenPnP.

Again, my goal is to get my machine running. This is more than just picking and placing parts; it also requires setting up the machine. That is, I need something to help me verify my machine is set up correctly and direct me toward resolving any problems. This also helps me learn about the problem space and understand the design decisions OpenPnP has made.

Jason once told me a truism along the lines of "if your machine is not working with OpenPnP, check your assumptions." For that particular problem, I had assumed my machine was still square, and it was not. It took me a while to figure that out. So I ended up wasting a lot of Jason's time and my own because of a bad assumption, and I realized I could have written a Python script to verify that assumption in less time than it took to find the problem. I view this as one of the many "pre-run" thresholds people have to overcome.

I will continue to work on Python scripts testing my assumptions, since OpenPnP strip feeders work for a lot of people every day and they do not seem to have these problems. Therefore something is wrong with my machine, and the only way I will find the problem is by testing all the assumptions until my machine works.

Again, I do not want to write PnP software, and my hope is that I figure out which bad assumption(s) I have made and OpenPnP will start working for me. Until then I will continue with Python and testing my assumptions; if I get to where I can build boards in Python before OpenPnP works for me, then my goal will have been achieved.

As far as the strip feeders go, I have posted the problems before and found that the biggest ones are me forgetting to press the apply button, the vision not always working, and OpenPnP extrapolating from the first two points. I have tried to work around all of these, but have not found a stable workaround, so again it is something in my hardware assumptions. For example, my camera delay could be set too low, so I am currently testing in Python what the camera delay needs to be for my machine to be stable. It could be that my camera is doing automatic exposure control, it could be that my lighting is off, etc. Once I find the optimal settings for these assumptions on my machine, I will plug them into OpenPnP and try again.

Trampas

Cri S

unread,
May 7, 2017, 11:35:27 AM5/7/17
to OpenPnP
Maybe your tools are wrong.
For testing machine squareness, use a metal ruler like this https://thumbs.dreamstime.com/z/hand-holding-meter-metal-young-31754019.jpg and don't measure from 0 but from the 10cm mark.
I have explained this already. As an example, mark X=30cm and Y=40cm (ruler units, not PnP units). The measured distance between the points (X30, Y0) and (X0, Y40) should be 50cm. If the error is more than a few mm, you have a problem.
It can be compensated in SW: compute the square factor, no need to adjust it in HW.
Camera delay needs to be measured, not estimated, and this should be done before installing the camera on the machine. Run OpenPnP, run a job using the null driver, and display a running timer that shows milliseconds.
Example here: http://imgur.com/a/3cMYl. Take a screenshot, add 24ms variance plus one frame time ((1000/fps) ms), and that is the value to configure in OpenPnP.
Maybe among the ton of videos von Nieda is producing there are several hours of hardware setup video; sorry for the sarcasm.
Thinking that you can estimate pixels/mm from a 1mm move of the machine may be the same kind of wrong assumption. There are other methods, but they involve opening a graphics editor and checking a captured screenshot from OpenPnP.
You could use a coin (check Wikipedia for the coin size), or paper with a quadratic grid, etc.

Trampas Stern

unread,
May 10, 2017, 4:44:02 PM5/10/17
to OpenPnP
I have the board registration system almost complete. What this allows you to do is right-click on a part in the table and set the user-"picked" location from the head camera position; once you have picked 3 or more points, it will do a least-squares fit to the board.
You can also right-click on a part in the table and move the camera (or nozzle) to that part's location.


Additionally, it will tell you how good the registration fit was.

BoardFile.py-181 - DEBUG - Scale x -1.0089
BoardFile.py-182 - DEBUG - Scale y -1.0045
BoardFile.py-183 - DEBUG - rot -0.6034 deg
BoardFile.py-184 - DEBUG - translation x 114.7878
BoardFile.py-185 - DEBUG - translation y 149.5994
BoardFile.py-186 - DEBUG - Maximum error 0.0678


From the transformation matrix I calculate the X and Y scaling factors; these should be 1.0 if the PCB is made correctly and the machine is correctly calibrated. For example, if your steps per mm is off, this scaling factor will be off (assuming the PCB is correct). The maximum error is the largest distance between the points the user "picked" and the transformed "machine" location points.


Note that if your board is manufactured wrong, or if your machine's steps per mm are off, this algorithm will do its best to compensate so you populate correctly. Also, with this information, if your machine is out of calibration or square, it will be apparent before you try to place a part. Of course, if the error is too high, you can add more registration points and/or adjust the existing ones.
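The numbers in that log fall out of a fitted 2x3 affine matrix like this. A sketch, not the thread's BoardFile.py; note the negative scales in the log imply a reflection, which column norms alone cannot show:

```python
import numpy as np

def describe_affine(A, src, dst):
    """Report per-axis scale, rotation, translation, and maximum fit error
    for a 2x3 affine A mapping src -> dst (both (N, 2) arrays).
    Column norms give unsigned scales; a negative determinant of the 2x2
    part means the transform additionally includes a reflection."""
    M, t = A[:, :2], A[:, 2]
    scale_x = np.hypot(M[0, 0], M[1, 0])   # norm of the X column
    scale_y = np.hypot(M[0, 1], M[1, 1])   # norm of the Y column
    rot_deg = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    residual = src @ M.T + t - dst
    max_err = np.linalg.norm(residual, axis=1).max()
    return scale_x, scale_y, rot_deg, t, max_err
```

Scale x/y near 1.0 means the steps per mm (and the PCB) are right; the maximum residual is the "Maximum error" line in the log.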


Next up is the feeders. 


Trampas

www.misfittech.net

Trampas Stern

unread,
May 12, 2017, 10:44:08 AM5/12/17
to OpenPnP
For the strip feeders, rather than recording the location of only the first and second hole, what I do is take the first and second hole, then move the camera to each subsequent hole and measure its center, and then save the part locations as a CSV file.



The advantage of this method is that the CSV file has all the "pick locations", that is, the machine X, Y, Z and rotation for each part. This way there is no need for vision during the build.

The algorithm looks for the next hole and measures the Euclidean distance between the last hole and the current hole. If the distance error is too high, it re-runs the hole detection up to 3 more times; if it still cannot get the error low enough, it goes back and measures the previous hole again. Doing the vision and finding the parts during setup instead of at run time reduces the risk of missing a part during board population and hosing a build, which is important to me.
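The walk-and-verify loop described above might look like this. A sketch only: `find_hole_near` is a hypothetical stand-in for the move-camera-and-detect step, and the pitch/tolerance constants are assumptions:

```python
import math

HOLE_PITCH_MM = 4.0    # sprocket hole pitch on standard tape
MAX_ERR_MM = 0.2       # assumed tolerance, tuned on the real machine
MAX_RETRIES = 3

def walk_holes(first_hole, n_holes, find_hole_near):
    """Walk the tape one sprocket hole at a time.

    find_hole_near(xy) stands in for the machine step: move the camera
    near xy and return the measured hole center, or None on failure.
    """
    holes = [tuple(first_hole)]
    while len(holes) < n_holes:
        guess = (holes[-1][0] + HOLE_PITCH_MM, holes[-1][1])  # tape along X
        for _ in range(1 + MAX_RETRIES):
            found = find_hole_near(guess)
            if found is not None and \
               abs(math.dist(found, holes[-1]) - HOLE_PITCH_MM) <= MAX_ERR_MM:
                holes.append(tuple(found))
                break
        else:
            # could not confirm this hole: re-measure the previous one
            # (a real implementation needs a bail-out to avoid looping forever)
            holes[-1] = tuple(find_hole_near(holes[-1]))
    return holes
```

Because every hole is verified against the 4mm pitch, a single bad detection gets retried instead of silently corrupting all downstream pick locations.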

I am currently running my camera at 1920x1080 on an old Core 2 Duo laptop, and it takes around 1 second per part to move to the location and find the hole. I am moving the camera one hole at a time in this first implementation, since lens effects could affect the circle detection when the camera is not centered over the hole, so moving the camera over each hole yields the best possible result. I will optimize the algorithm after lens calibration and compare results, but for now the objective is to make it work repeatably and reliably, then optimize for speed.

It would be nice if OpenPnP had the ability to load a CSV file with part locations (X, Y, Z and rotation); then I could use this strip feeder script to write the CSV file and run the job with OpenPnP.

Note that with the same strip shown in the image above, OpenPnP's default strip feeder vision system could not find parts after picking the ~7th part.

Trampas

Cri S

unread,
May 12, 2017, 11:34:20 AM5/12/17
to ope...@googlegroups.com
Post an example of the CSV file (several, if you use several different formats), and I will make you a script to load it into OpenPnP.


Trampas Stern

unread,
May 12, 2017, 12:57:50 PM5/12/17
to OpenPnP
Cri S, 

Enclosed is an example CSV file. 

Trampas
strip.csv

Cri S

unread,
May 12, 2017, 4:51:46 PM5/12/17
to OpenPnP
Tell me if you want any changes.
bsh.zip

Juha Kuusama

unread,
May 13, 2017, 5:07:23 AM5/13/17
to OpenPnP
You get faster and potentially more accurate results by measuring only the first and last hole for the parts you need. In particular, the tape alignment error measurement will be more accurate. Or maybe every tenth hole or so, to allow for the tape not being exactly straight.

Trampas Stern

unread,
May 13, 2017, 6:58:27 AM5/13/17
to OpenPnP
One thing I have noticed is that if I am not careful to stretch the cut tape, it will give a bit and the parts are not 4mm apart. Also, when this happens, the tape is not straight. Even stretching the tape over the ~230mm run, the tape is not straight.

I am still working on the algorithm, and my thought was that moving one hole at a time and measuring the center would be easy to implement and provide good results. It may not be the fastest. What I figured is that each hole measurement has noise, but knowing that on average the distance is 4mm, I should be able to filter the results and obtain a really good measure of the part locations. This good measurement could then be used as "ground truth" against which I could measure the accuracy of results as I try to improve the speed.
However, plans never survive their first contact with reality...

Currently, OpenPnP's default implementation is to measure only the first and second hole and extrapolate from there. If you use the vision option in OpenPnP, it will only extrapolate to the next hole and then save the last two holes in memory to predict the next. An issue with this is that if you restart OpenPnP after, say, the 10th part, it does not save the last hole locations to a file, so on restart it extrapolates from the first two holes to try to find the 11th part. Now if you are like me and have 80+ parts on a strip, if you did not pick the holes with better than 1-pixel accuracy, you will have huge errors.
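The "huge errors" are easy to quantify: an error in the two reference holes extrapolates linearly with distance down the strip. Back-of-the-envelope arithmetic, using the ~66 px/mm figure reported elsewhere in the thread:

```python
px_per_mm = 66.0                 # camera resolution reported in the thread
pick_err_mm = 1.0 / px_per_mm    # a one-pixel error picking a reference hole
baseline_steps = 1               # reference holes are one 4 mm step apart
n_steps = 80                     # parts down the strip

# The angular error from the reference pair grows linearly with distance:
# at step N the lateral error is roughly pick_err * N / baseline.
err_at_last_part_mm = pick_err_mm * n_steps / baseline_steps
print(round(err_at_last_part_mm, 2))   # about 1.21 mm at part 80
```

So a single-pixel picking error already exceeds a 0402 pad by the end of an 80-part strip, which is why per-hole measurement (or persisting the measured holes) matters.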

Trampas Stern

unread,
May 13, 2017, 6:59:28 AM5/13/17
to OpenPnP
Cri s,

Thank you, I will give it a try later today. I am going to try to get Yamaha feeders running with OpenPnP today and will try your macro.

Trampas

Cri S

unread,
May 13, 2017, 8:29:32 AM5/13/17
to ope...@googlegroups.com
Do you have black and transparent carrier photos that you can share, and please the whole photo of the white carrier? The vision parameters need to be adjusted.
If not, I will check only the white carrier. Please also add a photo with a missing component, always taken from above the pocket.
If possible, can you take a picture of white paper and mm graph paper too?

dzach

unread,
May 13, 2017, 10:38:20 AM5/13/17
to OpenPnP

> Currently OpenPnP default implementation is to only measure the first and second hole and extrapolate from there.

I think Jason has stated elsewhere that OpenPnP can actually interpolate between the "Reference hole location" and what is labeled "Next hole location". These two do not need to be separated by only one tape step. It could possibly be clearer if it were labeled "Other hole location". You can try this on your machine and see the results.

Trampas Stern

unread,
May 13, 2017, 12:40:15 PM5/13/17
to OpenPnP
Yes, this is correct; you have to be careful and make sure that the part spacing is correct when doing this.

Additionally, if the strip is not straight, then when you pick the first and last holes, it will assume a straight line. In my opinion, it would be nicer if you manually picked N points.

With the OpenPnP strip feeder with vision, I also dislike that the vision is done during placement. I prefer that all the vision that can be done at setup be done at setup; this would allow you to review the results and correct errors manually, like adding more points to the strip feeder, before running boards and having a placement error.

Trampas

Trampas Stern

unread,
May 13, 2017, 3:45:01 PM5/13/17
to OpenPnP

Here are the images of the strips.

Trampas Stern

unread,
May 13, 2017, 4:03:49 PM5/13/17
to OpenPnP

The camera has roughly 66 pixels per mm in X and Y; the image above does not include lens calibration.

Cri S

unread,
May 13, 2017, 7:11:33 PM5/13/17
to ope...@googlegroups.com
The vignetting is very high; you should try 720p resolution and check whether the vignetting is a lot better.
Probably the lens is for 720p, if not for VGA resolution, and another chip size.
The lighting is a bit strange and really different from the lighting of the grid, I don't know why.
The lighting is bad, really bad, and for this case MSER works well.
I include the output of MSER. I understand that the Hough circle doesn't work well on these images; blob detection with a circularity check works better if you want to stick with circle detection.
If you need to keep this vignetting, use the blue channel: gray2bgr and then bgr2gray.
The difference between the red/blue/green channels is high in the distorted areas.
If you want to try it, use GIMP: split graph.jpg into RGB channels, view the layers, and enable/disable the RGB channel layers.


clear_.png
black_.png
strip_.png

Trampas Stern

unread,
May 14, 2017, 1:04:58 PM5/14/17
to OpenPnP
Cri S, 

Do you have a recommendation for a light source? I hacked something together but was thinking about designing one with white and UV LEDs. One thing I would like is a light source option that is almost horizontal to the part; I find lighting from an angle close to horizontal seems to make the markings on chips more visible.

Also, if you have a recommendation for cameras, I don't mind buying new ones. I would really like a global shutter camera, especially for the bottom camera, so that it can be faster. I found this camera but was not sure it would be good for the up-looking camera: https://www.e-consystems.com/1MP-USB3-Globalshutter-Camera.asp


For the images I provided, I did set a fixed exposure on the camera; however, I did not use a histogram to adjust the exposure correctly. Also, the images were a bit out of focus, since my PCB is about 2-3mm lower than my strip feeders.

MSER does give good results. I was previously thinking about storing a top and bottom reference image for each part, then extracting features to do the final alignment.

Trampas


Cri S

unread,
May 15, 2017, 6:15:25 AM5/15/17
to OpenPnP
Can you take an up-looking picture of the nozzles (if possible with both nozzles in one image), and a picture of your actual lighting solution, with a phone or camera, so I can get an idea of what can be made better for the lighting?
Global shutter cameras need to be synced with the motion controller using the flash trigger output from the camera. With smaller components, alignment can be done using LEDs or dedicated fiducials, screws, holes, ...

Cri S

unread,
May 16, 2017, 12:07:46 AM5/16/17
to OpenPnP
Without gamma adjustment, the clear strip has 3 holes detected and the black strip 1.


2017-05-16 5:59 GMT+02:00, Cri S <phon...@gmail.com>:
> Hi Trampas,
> I have reviewed the images. With preprocessing it can work, with the exception of strip.jpg.
> Question: did you remove the cover tape for the strip.jpg image?
> If no, I need an image with the cover tape removed. If yes, what happened with the lighting? It seems the lighting angle is different from the other images, or the angle is way too low, so that it works only with transparent or wide (black) pockets but not with normal (white) strips; just my guess.
> Anyway, here are the images with the Hough circles checked using the pipeline, and the results in the png images.
> The Canny settings are the same as in the Fluent code.
>
> <cv-pipeline>
>   <stages>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRead" name="0" enabled="true" file="C:\Users\Gast\Desktop\OpenCv\t\b.jpg"/>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.Function" name="3" enabled="true" mode="Gamma" param-1="2.6" param-2="50.0" param-3="0" flag="false" flag-1="false" flag-2="false"/>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.BlurGaussian" name="2" enabled="true" kernel-size="7"/>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.ConvertColor" name="1" enabled="true" conversion="BGR2GRAY"/>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.DetectCirclesHough" name="6" enabled="true" min-distance="120" min-diameter="60" max-diameter="90" dp="1.0" param-1="80.0" param-2="10.0"/>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRecall" name="7" enabled="true" image-stage-name="3"/>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.DrawCircles" name="8" enabled="true" circles-stage-name="6" thickness="2">
>       <color r="255" g="0" b="0" a="255"/>
>       <center-color r="255" g="0" b="0" a="255"/>
>     </cv-stage>
>     <cv-stage class="org.openpnp.vision.pipeline.stages.ImageWrite" name="10" enabled="true" file="C:\Users\Gast\Desktop\OpenCv\t\x.png"/>
>   </stages>
> </cv-pipeline>

Trampas Stern

unread,
May 16, 2017, 8:04:59 AM5/16/17
to OpenPnP
Cri S,

Thanks for the information and help. 

My lighting solution for the cameras is not right, and I was looking at purchasing better lighting options. I have some LED rings, but they are not diffused enough. I might have to spin a PCB for the head camera lighting, unless someone else has a good working solution? I started looking into making a PCB for the lighting and would be interested in any advice or lessons learned from others' head lighting. For example, has anyone tried UV LEDs? Or has anyone tried angled lights to make the writing on chips easier to read?

Also on the camera and lens if someone has a good working solution I would appreciate knowing what it is. 

Thanks

Cri S

unread,
May 16, 2017, 8:37:29 AM5/16/17
to OpenPnP
With CLAHE and gamma adjustment, the images should work with ReferenceStripFeeder.
The images were too big, so I resized them. The png is the Hough circle result made with the vision pipeline, using the same settings as the StripFeeder.
For the LED ring: the smaller, the better, and add two sheets of diffuser, with 1-2cm of space between the two diffusers; check the Liteplacer for an example image.
Initially, before upgrading, I used 3 layers of paper to diffuse the light. You must also check the camera FPS to make sure the camera gets sufficient light. Don't use a histogram to check the quality; use, for example, the GIMP threshold operator. You can adjust the slider interactively and the image gets updated.


x.zip