Extending the MatchTemplate functionality - pipeline documentation


Sebastian Pichelhofer

Mar 27, 2017, 1:59:50 PM
to ope...@googlegroups.com
As my original "issue" report turned into a development discussion, I want to continue it here:

https://github.com/openpnp/openpnp/issues/476

It started with my explaining that the current MatchTemplate pipeline stage isn't very useful:

In theory, comparing a component's bottom-vision image to a predefined reference template would be THE ideal method to recognize and orient any package of any shape or pin layout, even with index pins, or to recognize the orientation of labels/prints.

Unfortunately, the current MatchTemplate OpenCV function does not work when your object is rotated relative to your reference template.

cri-s added some code snippets showing how this could be improved, but I'm afraid I currently don't have the time to dive into the code, so I want to discuss here what everyone thinks, or what pipelines you are currently using with different parts.


Regards Sebastian



Trampas Stern

Mar 27, 2017, 2:04:07 PM
to OpenPnP
Never underestimate the power of brute force....

Take the reference image, rotate it one degree at a time, and do a template match. Take the best match and repeat with fractions of a degree.
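As a rough sketch of that search (Python; `score` is a hypothetical callback that rotates the template by the given angle and returns the best template-match response, so only the coarse-to-fine search structure is shown here):

```python
def best_angle(score, coarse_step=1.0, refine_step=0.1, span=360.0):
    """Coarse-to-fine 1-D angle search. `score(angle)` is assumed to
    rotate the template by `angle` degrees, run the template matcher,
    and return the best response (higher = better match)."""
    # Coarse pass: evaluate the match score every `coarse_step` degrees.
    n = int(span / coarse_step)
    coarse = max(range(n), key=lambda i: score(i * coarse_step)) * coarse_step
    # Fine pass: re-scan one coarse step either side in `refine_step` increments.
    k = int(round(coarse_step / refine_step))
    return max((coarse + j * refine_step for j in range(-k, k + 1)), key=score)
```

The coarse pass costs one full match per degree, so in practice the fine pass is cheap by comparison.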

Trampas

Jason von Nieda

Mar 27, 2017, 2:06:17 PM
to ope...@googlegroups.com
Hi Sebastian,

I asked a couple of times in the other thread but didn't get an answer, so I will repeat it here: What is the reason that you consider template matching to be ideal, and are you having trouble with the default pipeline?

The default pipeline is designed to work in a variety of lighting and machine configurations, and requires no per part training. Using template matching complicates things in that you would have to train every part before using it.

Template matching is well known for not being rotation invariant. It does have its uses, but rotation- and scale-invariant object matching is not really one of them. For that it's better to use contours, as the default pipeline does, or more complex feature matchers like SIFT.

So, it goes back to my original question: If you are having trouble with the default pipeline, which does not suffer from the issues described above, what is the trouble? And can we fix it without throwing out the entire thing? What benefits would you get from using template matching?

Jason




--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/CAJ_3Q41-0FktRL3HZWe9AHVUpKgSdH6Pkbj5u3Y_Uns-4wW3og%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Sebastian Pichelhofer

Mar 27, 2017, 2:16:28 PM
to ope...@googlegroups.com
On Mon, Mar 27, 2017 at 8:04 PM, Trampas Stern <tra...@gmail.com> wrote:
Never under estimate the power of brute force.... 

Take the reference image rotate one degree at a time and do template match. Take the best match and repeat with fraction of a degree. 

That's actually what we ended up considering :)

Regards Sebastian
 

Sebastian Pichelhofer

Mar 27, 2017, 4:12:42 PM
to ope...@googlegroups.com
On Mon, Mar 27, 2017 at 8:06 PM, Jason von Nieda <ja...@vonnieda.org> wrote:
Hi Sebastian,

I asked a couple times in the other thread, but didn't get an answer, so I will repeat here: What is the reason that you consider template matching to be ideal, and are you having trouble with the default pipeline?

The default pipeline is designed to work in a variety of lighting and machine configurations, and requires no per part training. Using template matching complicates things in that you would have to train every part before using it.


Sorry, I wasn't aware that the default pipeline is intended to work with all kinds of parts. I considered it a starting-point reference for tuning my own per-part pipelines, which is why I tried the MatchTemplate stage offered in the pipeline.


Template matching is well known for not being rotation invariant. It does have it's uses, but rotation and scale invariant object matching is not really one of them. For this it's better to use contours, as the default pipeline does, or more complex feature matchers like SIFT.

So, it goes back to my original question: If you are having trouble with the default pipeline, which does not suffer from the issues described above, what is the trouble? And can we fix it without throwing out the entire thing? What benefits would you get from using template matching?

I am not having any trouble; I am just thinking about how to extend what is there for the future.

Such a future could bring bottom/top vision additions like:

-) Index pin, shape, or label orientation identification
-) Recognizing a footprint mismatch -> wrong part picked up
-) Identifying bent pins on chips -> discard part

Regards Sebastian
 


Jason von Nieda

Mar 27, 2017, 4:23:41 PM
to ope...@googlegroups.com
One thing I have considered in the past, and had good luck with in experiments, is something like this:

1. Use something similar to the existing BV pipeline, which gives you the offset and angle of the bounding box.
2. Rotate your template by the angle found above, perform templateMatch, and store the result.
3. Rotate the template (or the image) by 90° and perform templateMatch again.
4. Repeat step 3 two more times.
5. Determine which of the four templateMatches was the best match.

Doing this you can skip the initial "rotate 1 degree and match, repeat" step, since you already know the part is rotated by a certain amount. Doing four rotations and checks allows you to determine which of the four possible orientations the part is in.

This is the algorithm I have been considering for loose part pickup. I don't particularly think it's needed for bottom vision, since with bottom vision we always expect the part to have been picked within 45 degrees of nominal, but for loose part pickup we don't know the pick angle at all. Doing this would allow us to find the correct orientation of the part, accounting for pin 1, and then bottom vision would clean up any final offset or rotation.
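A minimal sketch of steps 2-5 (Python/numpy; `match_score` is a hypothetical stand-in for a real matcher such as OpenCV's matchTemplate, and the template is assumed square so `np.rot90` keeps its shape):

```python
import numpy as np

def best_quadrant(image, template, match_score):
    """Score the template at 0/90/180/270 degrees against the image and
    return the rotation (in degrees) with the best score. `match_score`
    stands in for a real template matcher; the initial fine-angle
    rotation from the BV pipeline is assumed to already be applied."""
    scores = {k * 90: match_score(image, np.rot90(template, k)) for k in range(4)}
    return max(scores, key=scores.get)
```

With the fine angle already known from the bounding box, only these four matches are needed to resolve the pin-1 orientation.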

Jason




alex

Mar 27, 2017, 6:37:17 PM
to OpenPnP
Current bottom vision works well. I just want to mention a couple of possible issues and improvements.

1) I am using a 503 Juki nozzle for 0603 components. The nozzle end is just a little bit smaller than the 0603 width, and very often the part is a bit displaced on the tip, so the tip sticks out past the part boundaries in the bottom-vision image and is recognised as part of the part. Not really a big problem, because 0603 allows some displacement, but...

2) In some conditions a SOT-23-3 part may be detected wrongly, because there are two possible min-area rectangles to draw around the three legs of the part. One is obvious, and the second is off by about 45 degrees from the right one. I solved this problem by using side light and setting up the pipeline not only to detect the legs but also the black case of the part.

3) One great and simple improvement would be min/max thresholds for rectangle size/position/rotation. Say we specify that the rectangle must be between 8x8mm and 10x10mm, with a max center displacement of 2mm and max rotation of 15 degrees. This check would catch 99% of possible bottom-vision and pick issues, such as:
-No part on the nozzle, but the nozzle tip is detected as a part (important for metal nozzles)
-Part picked up in the wrong way (sometimes 0603 parts, for example, are picked up by their side, not their top)
-Some flares in the camera view that cause a wrong rectangle result
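In Python, such an envelope check might look like this (a sketch: the limits are the example values above, and in practice they would come from per-part configuration):

```python
def check_rect(w_mm, h_mm, cx_mm, cy_mm, angle_deg,
               size_min=8.0, size_max=10.0, max_offset_mm=2.0, max_rot_deg=15.0):
    """Envelope check for a bottom-vision rotated rect. Center offsets
    are relative to the camera center; limits are illustrative."""
    long_side, short_side = max(w_mm, h_mm), min(w_mm, h_mm)
    if not (size_min <= short_side and long_side <= size_max):
        return False  # wrong part picked, or nozzle tip detected as the part
    if (cx_mm ** 2 + cy_mm ** 2) ** 0.5 > max_offset_mm:
        return False  # part displaced too far on the nozzle
    # A min-area rect's angle is only meaningful modulo 90 degrees,
    # so normalize to [-45, 45) before comparing.
    angle = ((angle_deg + 45.0) % 90.0) - 45.0
    return abs(angle) <= max_rot_deg
```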

Jason von Nieda

Mar 27, 2017, 8:15:50 PM
to ope...@googlegroups.com
Hi Alex,

This is great info, thank you for sharing. I have a few thoughts below:


On Mon, Mar 27, 2017 at 5:37 PM alex <al...@sai.msu.ru> wrote:
Current bottom vision works well. Just want to mention a couple of possible issues and improvements.

1)I am using 503 Juki for 0603 components. The nozzle end is just a little bit smaller than the 0603 width and very often the part is a bit displaced on the tip, and the tip goes out of the part boundaries on bottom vision image and is recognised as a part. Not really a big problem because 0603 allows some displacements, but...

One thing you can do here is add a MaskCircle stage that is the size of the NozzleTip. By making it black you can be sure that there will never be a bright spot from the nozzle, and since the nozzle is always centered above the camera you know the centered mask will cover it.
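A numpy sketch of that idea, with a hypothetical `mask_center_circle` standing in for the MaskCircle stage (whose exact semantics may differ):

```python
import numpy as np

def mask_center_circle(img, diameter_px):
    """Black out a centered circle (the nozzle tip) so it can never show
    up as a bright spot in later stages. Works because the nozzle is
    always centered above the camera."""
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2 <= (diameter_px / 2.0) ** 2
    out = img.copy()
    out[inside] = 0
    return out
```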
 
2)In some conditions SOT-23-3 part may be detected wrong, because there are two possible min area rectangles to draw around three legs of the part. One is obvious and the second one is by about 45 degrees of the right one. I solved this problem by using side light, and setting up pipeline not only to detect legs, but also to detect the black case of the part.


I have seen this before but hadn't really taken the time to understand what was happening. Your description makes that clear. 
 
3)One great and simple improvement would be min/max thresholds for rectangle size/position/rotation. Say we setup that the rectangle must be between 8x8mm to 10x10mm with max displacement of center 2mm and max rotation 15 degrees. This check will catch 99% of bottom vision and pick possible issues such as:

I had eventually intended to use the package footprint body width and body height values to do the rectangle size part of this, but since you mentioned other factors I think it might make sense to add these to per part configuration for bottom vision. We could use defaults (or just null) for all the values and then people could just fill them in for troublesome parts.

Anyone interested in doing a Pull Request? :)

Jason

 

Cri S

Mar 28, 2017, 4:46:43 AM
to OpenPnP

On Mon, Mar 27, 2017 at 5:37 PM alex <al...@sai.msu.ru> wrote:
Current bottom vision works well. Just want to mention a couple of possible issues and improvements.

1)I am using 503 Juki for 0603 components. The nozzle end is just a little bit smaller than the 0603 width and very often the part is a bit displaced on the tip, and the tip goes out of the part boundaries on bottom vision image and is recognised as a part. Not really a big problem because 0603 allows some displacements, but...

One thing you can do here is add a MaskCircle stage that is the size of the NozzleTip. By making it black you can be sure that there will never be a bright spot from the nozzle, and since the nozzle is always centered above the camera you know the centered mask will cover it.

This issue is present with 0201, and sometimes 0402, parts, or when not using the best-sized nozzle.
There are two possibilities: mask out the nozzle, or subtract the nozzle.
Subtracting requires not using height correction on the up-looking camera, using a fixed height for parts smaller than 0.8mm or similar, and subtracting a photo of the plain nozzle at that height, rotation, and position.
Both systems have advantages and disadvantages. For the mask operation, if the center is too far from the nozzle but the width is correct, the longer side of the body is calculated from the numbers and the center is adjusted from those calculations, because the other end of the resistor/cap/... is masked out and only one side of the body is visible. Using this algorithm the drop rate goes from approx. 40% to less than 5%.
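A sketch of the subtraction approach (Python/numpy; `noise_floor` is an illustrative parameter, not an OpenPnP setting):

```python
import numpy as np

def subtract_nozzle(frame, nozzle_ref, noise_floor=12):
    """Subtract a stored bare-nozzle reference image from the
    bottom-vision frame so that (ideally) only the part remains. As
    noted above, this only works if both images were taken at the same
    height, rotation, and position."""
    diff = np.abs(frame.astype(np.int16) - nozzle_ref.astype(np.int16))
    out = diff.astype(np.uint8)
    out[out < noise_floor] = 0  # suppress residual sensor noise
    return out
```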
 
 
2)In some conditions SOT-23-3 part may be detected wrong, because there are two possible min area rectangles to draw around three legs of the part. One is obvious and the second one is by about 45 degrees of the right one. I solved this problem by using side light, and setting up pipeline not only to detect legs, but also to detect the black case of the part.


I have seen this before but hadn't really taken the time to understand what was happening. Your description makes that clear. 
 
You need to use the top/bot algorithm, or, if keeping the current algorithm, draw a circle into the image (with, for example, 30% of the size of the enclosing circle) at the contour center after you have merged the contours, and then do contour matching again. However, top/bot is more effective and useful.
Top/bot works like this: you divide the image into two halves using an ROI (submat). Perform contour detection in each half, group the contours, and find their center points. From the two center points, calculate the angle of the component, and the new center is just the sum of the two center points divided by two (or, here, multiplied by 0.5). Depending on the pin-1 orientation your pnp system uses internally, it could be left-right instead of top-bot. I use the convention where the wider side is horizontal and pin 1 is on the lower-left edge; others use the same convention as for centroids, where pin 1 is on the upper-left edge and components are vertical.
As a consequence, their cameras are rotated in order to have a larger FOV on the vertical side. It's a system-design issue.
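A numpy sketch of the top/bot idea, using per-half pixel centroids of a binary image in place of grouped contour centers:

```python
import numpy as np

def top_bot_pose(binary):
    """Split a binary (0/1) image into top and bottom halves, take the
    centroid of the white pixels in each half, and derive the component
    center (mean of the two centroids) and a rotation angle from the
    line joining them. Angle convention here: 0 deg = centroids
    vertically aligned; sign gives the tilt direction."""
    h = binary.shape[0] // 2

    def centroid(img, y_off=0):
        ys, xs = np.nonzero(img)
        return np.array([xs.mean(), ys.mean() + y_off])

    c_top = centroid(binary[:h])
    c_bot = centroid(binary[h:], y_off=h)
    center = 0.5 * (c_top + c_bot)
    d = c_bot - c_top
    angle = np.degrees(np.arctan2(d[0], d[1]))
    return center, angle
```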

 
3)One great and simple improvement would be min/max thresholds for rectangle size/position/rotation. Say we setup that the rectangle must be between 8x8mm to 10x10mm with max displacement of center 2mm and max rotation 15 degrees. This check will catch 99% of bottom vision and pick possible issues such as:

I had eventually intended to use the package footprint body width and body height values to do the rectangle size part of this, but since you mentioned other factors I think it might make sense to add these to per part configuration for bottom vision. We could use defaults (or just null) for all the values and then people could just fill them in for troublesome parts.

How do you validate your input if you don't use the part body width/height, and maybe dedicated part width/height values for vision that are filled in automatically when they are 0?

 
-No part on the nozzle, but nozzletip is detected as part (important for metal nozzles)
The nozzle needs to be masked out, either as a black hole or by image subtraction.
Not only for small components. The reason is that if you black out the nozzle after thresholding for the metallic pins, it is possible to count the white pixels. If the count is extremely low, there is no component on the nozzle, or there may be a problem with the camera returning only a black image, if the real image still remains zero after histogram equalization.
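A sketch of that presence check (Python/numpy; `min_white` is an illustrative limit, not an OpenPnP setting):

```python
import numpy as np

def part_present(thresholded, nozzle_mask, min_white=50):
    """Count white pixels left after thresholding for the metallic pins
    and blacking out the nozzle area; an extremely low count means no
    component is on the nozzle (or the camera returned a black frame).
    Both inputs are boolean arrays of the same shape."""
    visible = thresholded & ~nozzle_mask
    return int(np.count_nonzero(visible)) >= min_white
```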
 
-Part picked in a wrong way (sometimes, for example 0603 parts are picked by their side, not top
-Some flares in the camera view that cause wrong rectangle result
Maybe an illumination issue?
Every part match needs to be verified, and eventually an alternative algorithm used or the part dropped.
 
I have different part/package/footprint classes, don't use the FluentCv code, and use another gamma and luminance instead of lightness:
https://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
So my stages are not ready to be used in official OpenPnP, and, as an example, OpenPnP doesn't rotation-correct the offsets anymore; there are some significant differences.
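For reference, the luminosity-weighted conversion from that article, as opposed to the lightness method ((max(R,G,B) + min(R,G,B)) / 2):

```python
def luminance(r, g, b):
    """Rec. 601 luma-weighted grayscale (the 'luminosity' method from
    the linked article). Weights sum to 1.0, so a pure gray stays at
    the same level."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```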




Sebastian Pichelhofer

Mar 30, 2017, 4:34:35 PM
to ope...@googlegroups.com
Lots of good ideas here, but most of them are over my head to test myself, I'm afraid.

But I collected some samples to try them on: https://cloud.gerade.org/index.php/apps/gallery/s/aDmtVupNJyejzpl#

I would be thankful for pipeline XMLs to try.

Also, feel free to help document the different pipeline stages here: https://github.com/openpnp/openpnp/wiki/CvPipeline

Regards Sebastian




Cri S

Mar 31, 2017, 6:03:41 AM
to OpenPnP
I understand your need for template matching now.
Your lighting is too diffuse, if you are using any at all. What is the reason for those near-45-degree angles? Specs say max ±15 degrees, and for some selected components ±22 degrees, but for the components shown it should be ±7 degrees.
Are you having serious problems with vacuum?

Sebastian Pichelhofer

Mar 31, 2017, 6:22:04 AM
to ope...@googlegroups.com
I tried using the experimental loose parts feeder to pick up the larger chips. But rotation was indeed a big issue (often being 90° off), so I am now switching to trays.

Regards Sebastian
 


Cri S

Mar 31, 2017, 9:13:14 AM
to OpenPnP
Can you provide pictures from the loose tray feeder, and the pipeline used (if you are using one)? It is probably easy to correct that error.
There is a lot of nozzle-center offset between the provided images. This makes it impossible to blank out the nozzle.
Here is a simple pipeline for outlines; it doesn't work when the nozzle center is visible.
<cv-pipeline>
   <stages>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRead" name="0" enabled="true" file="C:\Users\Gast\Downloads\bv_result_1242518690705263179.png"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.BlurGaussian" name="1" enabled="true" kernel-size="7"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.DetectEdgesCanny" name="9" enabled="true" threshold-1="200.0" threshold-2="240.0"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.MinAreaRect" name="7" enabled="true" threshold-min="5" threshold-max="99999"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRecall" name="17" enabled="true" image-stage-name="0"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.DrawRotatedRects" name="10" enabled="true" rotated-rects-stage-name="7" thickness="2">
         <color r="0" g="255" b="0" a="255"/>
      </cv-stage>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageWrite" name="3" enabled="true" file="C:\Users\Gast\Downloads\pipeline.png"/>
   </stages>
</cv-pipeline>


This is another generic pipeline that also works on resistors, based on the images you have given.
<cv-pipeline>
   <stages>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRead" name="0" enabled="true" file="C:\Users\Gast\Downloads\bv_source_7226194360883052410.png"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.BlurGaussian" name="1" enabled="true" kernel-size="7"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ConvertColor" name="2" enabled="true" conversion="Bgr2Gray"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.Normalize" name="4" enabled="true"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ThresholdAdaptive" name="6" enabled="true" adaptive-method="Mean" invert="false" block-size="5" c-parm="-2"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.FindContours" name="8" enabled="true" retrieval-mode="External" approximation-method="None"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRecall" name="9" enabled="true" image-stage-name="6"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.DrawContours" name="11" enabled="true" contours-stage-name="8" thickness="4" index="-1">
         <color r="0" g="0" b="0" a="255"/>
      </cv-stage>
      <cv-stage class="org.openpnp.vision.pipeline.stages.BlurGaussian" name="15" enabled="true" kernel-size="3"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.Threshold" name="16" enabled="true" threshold="100" auto="false" invert="false"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.MinAreaRect" name="7" enabled="true" threshold-min="120" threshold-max="255"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.MinAreaRectContours" name="14" enabled="true"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRecall" name="17" enabled="true" image-stage-name="0"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.DrawRotatedRects" name="10" enabled="true" rotated-rects-stage-name="7" thickness="2">
         <color r="0" g="255" b="0" a="255"/>
      </cv-stage>
   </stages>
</cv-pipeline>


Cri S

Mar 31, 2017, 9:16:32 AM
to OpenPnP
You can delete stage 14; it does nothing and only consumes time. I forgot to delete it.

Sebastian Pichelhofer

Mar 31, 2017, 9:22:56 AM
to ope...@googlegroups.com
On Fri, Mar 31, 2017 at 3:13 PM, Cri S <phon...@gmail.com> wrote:
Can you provide pictures from the loose tray feeder ? and the pipeline used (if using a pipeline) probably is easy correct that error.

Will do, on Monday!

Many thanks!

Regards Sebastian
 


Sebastian Pichelhofer

Apr 3, 2017, 8:19:22 AM
to ope...@googlegroups.com
Pipeline:

<cv-pipeline>
   <stages>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageCapture" name="9" enabled="true" settle-first="true"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageWrite" name="7" enabled="true" file="C:\Users\pnp\Desktop\BV pipeline tests\loosepartsfeeder"/>

      <cv-stage class="org.openpnp.vision.pipeline.stages.ConvertColor" name="2" enabled="true" conversion="Bgr2Gray"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.BlurMedian" name="5" enabled="true" kernel-size="3"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.Threshold" name="1" enabled="true" threshold="50" auto="false" invert="true"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.FindContours" name="3" enabled="true" retrieval-mode="External" approximation-method="Tc89Kcos"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.FilterContours" name="6" enabled="true" contours-stage-name="3" min-area="800.0" max-area="500000.0"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.MinAreaRectContours" name="results" enabled="true" contours-stage-name="6"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.MinAreaRect" name="0" enabled="false" threshold-min="10" threshold-max="500"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRecall" name="8" enabled="true" image-stage-name="9"/>
      <cv-stage class="org.openpnp.vision.pipeline.stages.DrawContours" name="4" enabled="true" contours-stage-name="0" thickness="1" index="-1">
         <color r="0" g="204" b="204" a="255"/>
      </cv-stage>
      <cv-stage class="org.openpnp.vision.pipeline.stages.DrawRotatedRects" name="10" enabled="true" rotated-rects-stage-name="results" thickness="3">
         <color r="255" g="0" b="255" a="255"/>
      </cv-stage>
   </stages>
</cv-pipeline>

Images:
https://cloud.gerade.org/index.php/apps/gallery/s/aDmtVupNJyejzpl#debug4772697552364939746.png
https://cloud.gerade.org/index.php/apps/gallery/s/aDmtVupNJyejzpl#debug4331831279684054309.png

Finding the location works pretty well with a black chip on a white tray.
But the orientation is different every time...



Sebastian Pichelhofer

Apr 3, 2017, 8:45:33 AM
to ope...@googlegroups.com
Thanks, this pipeline seems to work quite well with all the samples I uploaded.

Unfortunately, with the live camera it is not as reliable as I would have hoped.

What seems to have helped is adjusting the Z-height over the BV camera so that the nozzle tip is already slightly out of focus - still, the results are not always right.

Regards Sebastian


 



Cri S

Apr 3, 2017, 11:03:21 AM
to ope...@googlegroups.com
I gave you two pipelines.
The second (I think) is for the resistors.
If adjusting Z works, then adjusting the Gaussian blur gives the same result.
Have you used the correct algorithm for the resistors?




Cri S

Apr 3, 2017, 11:13:16 AM
to ope...@googlegroups.com
For the angle: the result is always used as angle MOD 45. Does the result of that angle % 45 change, or only the angle?

Sebastian Pichelhofer

Apr 3, 2017, 12:56:15 PM
to ope...@googlegroups.com
On Mon, Apr 3, 2017 at 5:03 PM, Cri S <phon...@gmail.com> wrote:
I have given you two pipelines.
The second (i think) for the resistors.

Correct, that's the one I used.
With a 0402 resistor.

 
If adjusting Z works, then adjusting gaussian blur gives the same result.

Changing the Z depth is not the same as applying a Gaussian blur. Changing the distance moves the focus plane, so the component stays in focus while the nozzle tip in the background becomes more blurred.
The biggest issue currently is that the edge detection can't differentiate between the edges of the nozzle tip and the edges of the component.
 
Have you used the correct algorithm for the resistors ?

The second one you provided.

Feel free to run the source pngs I linked to through the pipeline yourself to see the results.

Many thanks for the help, it's much appreciated!

Regards Sebastian
 




Cri S

Apr 3, 2017, 3:44:34 PM
to OpenPnP
If you want easy detection, subtract the nozzle image.
You must take 3 images, at 0.3, 0.6, and 0.9mm part height.
The requirement is that rotation and position for the parts are correct.
In your previous images, that was not the case.
I don't know if it's your machine or only your testing.
I am checking it; give me a bit of time.



Cri S

unread,
Apr 3, 2017, 10:34:11 PM4/3/17
to OpenPnP
Test whether this works better.

<cv-pipeline>
  <stages>
    <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRead" name="0" enabled="true" file="C:\Users\Gast\Downloads\openpnp-bv\openpnp-bv\topvision_source3540908279455379385.png"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.ConvertColor" name="2" enabled="true" conversion="Bgr2Gray"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.Normalize" name="4" enabled="true"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.BlurGaussian" name="1" enabled="true" kernel-size="33"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.ThresholdAdaptive" name="6" enabled="true" adaptive-method="Gaussian" invert="false" block-size="15" c-parm="-4"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.MaskCircle" name="5" enabled="true" diameter="160"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.BlurGaussian" name="3" enabled="true" kernel-size="11"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.Threshold" name="13" enabled="true" threshold="130" auto="false" invert="false"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.BlurGaussian" name="9" enabled="true" kernel-size="11"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.Threshold" name="11" enabled="true" threshold="80" auto="false" invert="false"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.MinAreaRect" name="7" enabled="true" threshold-min="120" threshold-max="255"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.ImageRecall" name="17" enabled="true" image-stage-name="4"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.ConvertColor" name="8" enabled="true" conversion="Gray2Bgr"/>
    <cv-stage class="org.openpnp.vision.pipeline.stages.DrawRotatedRects" name="10" enabled="true" rotated-rects-stage-name="7" thickness="2">
      <color r="0" g="0" b="255" a="255"/>
    </cv-stage>
  </stages>
</cv-pipeline>
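For readers unfamiliar with the ThresholdAdaptive stage in the pipeline above (which wraps OpenCV's adaptiveThreshold), its core idea is to compare each pixel against the mean of its block-size neighbourhood offset by c-parm, so it tolerates uneven lighting that defeats a global threshold. A simplified numpy sketch follows; it uses a plain box mean via an integral image, whereas the stage's "Gaussian" mode weights the neighbourhood, so treat it as an approximation:

```python
import numpy as np

def adaptive_threshold_mean(img, block_size=15, c=-4):
    """Mean-based adaptive threshold (simplified model of the stage).

    A pixel becomes 255 when it exceeds the mean of its
    block_size x block_size neighbourhood minus c.
    """
    assert block_size % 2 == 1
    r = block_size // 2
    padded = np.pad(img.astype(np.float64), r, mode="edge")
    # Summed-area table (integral image) for O(1) window sums.
    integ = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    s = (integ[block_size:, block_size:] - integ[:-block_size, block_size:]
         - integ[block_size:, :-block_size] + integ[:-block_size, :-block_size])
    local_mean = s / block_size**2
    return np.where(img > local_mean - c, 255, 0).astype(np.uint8)

# A horizontal brightness ramp defeats any single global threshold,
# but the adaptive version still isolates one locally bright pixel.
img = np.tile(np.arange(100, dtype=np.uint8), (20, 1))  # ramp background
img[10, 50] = 200                                       # the "feature"
out = adaptive_threshold_mean(img)
```

The negative c-parm of -4 in the pipeline effectively demands that a pixel beat its neighbourhood mean by 4 grey levels before it is kept, which suppresses the slow ramp while keeping genuine local contrast.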
