Vision Pipeline stress


Balázs Kiss

Apr 13, 2021, 3:38:44 PM
to OpenPnP
Hi guys,


I was wondering: does everybody else have a lot of hassle adjusting the vision pipelines for each different part, or is it just me? It takes a lot of effort and sometimes only luck brings the solution. I have some graphics background and I have thought of some improvements regarding pipeline adjustment.

How do you do it on a daily basis? Are there any best practices? 

PS: yeah, I know - pipelines are not for the faint hearted :) Just wondering, how it could be more user friendly.


Best regards,
Balazs

Clemens Koller

Apr 13, 2021, 10:50:23 PM
to ope...@googlegroups.com
Hi, Balazs!

On 13/04/2021 21.38, 'Balázs Kiss' via OpenPnP wrote:
> I was wondering: does everybody else have a lot of hassle adjusting the vision pipelines for each different part, or is it just me? It takes a lot of effort and sometimes only luck brings the solution. I have some graphics background and I have thought of some improvements regarding pipeline adjustment.

Well, to be honest, I don't know how to get the pipeline work easily out of its way without deeper knowledge of image operators. I believe the current pipelines can be optimized to be a bit more robust to handle, but that's about it. (I.e. replace the Gaussian with a median filter, replace Hough with template matching where possible, ...)
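To illustrate why swapping the Gaussian for a median filter helps robustness, here is a minimal numpy sketch (not OpenPnP code, just the principle): an averaging (Gaussian-like) filter smears an impulse outlier across its neighbours, while a median filter rejects it outright.

```python
import numpy as np

def sliding_filter(signal, size, reduce_fn):
    """Apply reduce_fn over a sliding window (edges left untouched)."""
    out = signal.astype(float).copy()
    half = size // 2
    for i in range(half, len(signal) - half):
        out[i] = reduce_fn(signal[i - half:i + half + 1])
    return out

# A flat signal with one salt-and-pepper outlier.
signal = np.array([10, 10, 10, 255, 10, 10, 10], dtype=float)

mean_filtered = sliding_filter(signal, 3, np.mean)      # outlier smeared over neighbours
median_filtered = sliding_filter(signal, 3, np.median)  # outlier removed outright

print(mean_filtered[2:5])    # roughly [91.7, 91.7, 91.7]
print(median_filtered[2:5])  # [10. 10. 10.]
```

The same effect applies in 2-D (OpenCV's `medianBlur` vs `GaussianBlur`): impulse noise from dust or specular glints survives Gaussian smoothing as a blurred blob but vanishes under the median.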

> How do you do it on a daily basis? Are there any best practices?

Copy & paste & reconfigure & fail & optimize.
I wish I could easily manage the image-processing pipelines outside of OpenPnP, store the configurations in MariaDB, and attach them to the components at PCB design time...

> PS: yeah, I know - pipelines are not for the faint hearted :) Just wondering, how it could be more user friendly.

Luckily I have a deeper background in image analysis, so once I got used to how OpenCV/OpenPnP does its stuff, my learning curve flattened. (I worked with the Halcon framework in the past.)
There are still some issues left, i.e. unexpected results such as position jitter of the Hough transform. I believe there are also some inconsistencies in how sub-pixel results are handled in some cases, but well.

In the long term: we could try to add more generic, more robust (= more computationally intensive) image operators. This might need some assisted learning in software, histogram analysis, spatial analysis, etc. Then it might be possible to have generic operators such as: "give me the offset + rotation of, say, a rectangular object out of whatever is not the nozzle tip or image background". This gets into the area of object-based image analysis.

I also thought about reusing code from a SUSAN-filter tool I wrote some time ago as a basis for implementing image segmentation. But there is heavily optimized C++ pointer magic involved, and I don't see a way for me to get that ported over to Java.


Clemens

Balázs Kiss

Apr 14, 2021, 2:35:13 AM
to OpenPnP
Hi Clemens,


thank you for your detailed reply! I am starting to understand where the road is leading and where we are now.

Don't get me wrong, I am satisfied with all the logic already built in - I think the OpenCV tools provided are very handy and can achieve everything that is needed. For the first steps I wouldn't go too deep - the processing and image functions are the next step for me. First, I would make use of the hardware acceleration that is already built into OpenCV, and make the pipeline-step adjustment user interface more efficient by replacing (or extending) the edit boxes with sliders that update the image "in real time" or "on change" (additionally with mouse scroll-wheel control attached). It would be a low-hanging fruit to speed up finding good MaskHSV results.
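The reason slider-driven feedback is cheap here: an HSV range mask is a single vectorized comparison per parameter change. A minimal numpy sketch of the masking step (mirroring the behaviour of OpenCV's `inRange`, which OpenPnP's MaskHSV stage builds on; the sample pixel values are made up):

```python
import numpy as np

def mask_hsv(hsv, lower, upper):
    """Keep pixels whose H, S and V all fall inside [lower, upper].
    Like OpenCV's inRange(): output is 255 where the pixel passes, else 0."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    inside = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return inside.astype(np.uint8) * 255

# Tiny 1x3 "image" in HSV: green-ish, red-ish, green-ish pixels.
hsv = np.array([[[60, 200, 200], [0, 200, 200], [65, 180, 150]]], dtype=np.uint8)

# Range a slider UI would be adjusting live: hue 50..70 keeps the greens.
mask = mask_hsv(hsv, lower=(50, 100, 100), upper=(70, 255, 255))
print(mask)  # [[255   0 255]]
```

Each slider tick would just re-run this one call and redraw, which is why "on change" preview is feasible even without GPU acceleration.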

If I can be any help here, please let me know!


Best regards,
Balázs

ma...@makr.zone

Apr 14, 2021, 7:09:49 AM
to ope...@googlegroups.com

Some ideas have been circulated here:

https://groups.google.com/g/openpnp/c/7DeSdX4cFUE/m/VYDG6x6-AAAJ

and more specifically towards simplification here ("Ideal Solution"):

https://groups.google.com/g/openpnp/c/7DeSdX4cFUE/m/9C0KYDLKAAAJ

_Mark


Jan

Apr 16, 2021, 5:41:29 PM
to ope...@googlegroups.com
Hi Clemens!
From my possibly limited perspective I would say that the input image
is the most important factor. All image processing is limited by the
quality of the input. (And the requirements are low: some Chinese PnPs
use VGA analog cameras with amazingly stable results.) Second, I would
say that a metric for "quality" would be of great help. Like in physics,
where any measurement is nothing without an error. Is it possible to get
some kind of confidence level out of the pipeline? For the Hough
transformation it should be possible to calculate such a metric by
evaluating the height of the peak with respect to the average in the
parameter space, or by calculating the width of the peak with respect to
each parameter.
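The peak-height-over-average idea can be sketched in a few lines of numpy (a toy illustration on synthetic accumulators, not OpenPnP code; the threshold values are arbitrary):

```python
import numpy as np

def hough_peak_confidence(accumulator):
    """Toy confidence metric for a Hough accumulator: ratio of the
    highest vote count to the mean vote count. A sharp, well-supported
    peak gives a large ratio; a peak barely above the background noise
    gives a ratio near 1."""
    peak = accumulator.max()
    mean = accumulator.mean()
    return peak / mean if mean > 0 else 0.0

# A clean detection: one strong, isolated peak over a quiet background.
clean = np.ones((10, 10))
clean[4, 7] = 80.0

# An ambiguous detection: the "peak" barely exceeds the noise floor.
noisy = np.full((10, 10), 10.0)
noisy[4, 7] = 14.0

print(hough_peak_confidence(clean))  # large ratio, trustworthy
print(hough_peak_confidence(noisy))  # near 1, suspect
```

A pipeline stage could refuse a result (or fall back to a slower operator) whenever this ratio drops below some calibrated threshold.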

Jan

Clemens Koller

Apr 17, 2021, 10:00:06 AM
to ope...@googlegroups.com
Hi, Jan!

On 14/04/2021 09.11, 'Jan' via OpenPnP wrote:
> Hi Clemens!
> From my possibly limited perspective I would say that the input image
> is the most important factor. All image processing is limited by the
> quality of the input. (And the requirements are low: some Chinese PnPs
> use VGA analog cameras with amazingly stable results.)

I agree. But the image requirements for OpenPnP are not always as "low" as some images I have seen here and from my own cameras.

You need to treat "image quality" in more than one dimension: spatial resolution (pixels/linear spacing) as well as radiometric resolution (bit depth/pixel), high dynamic range, and low noise (high SNR).
For precision spatial image analysis, monochrome sensors are usually used because of the modulation transfer function [1]. (If you really need color, you apply different wavelengths sequentially by using RGB+x LED illumination.)
For PnP applications it might be better to go for fast image acquisition (higher frame rate, lower pixel count, but still high dynamic range (12 bit/pixel)) and do the image analysis at sub-pixel precision afterwards.
But as always: it depends a lot and YMMV.
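For readers unfamiliar with sub-pixel analysis: one common technique (an illustration, not necessarily what OpenPnP uses internally) is to fit a parabola through a detected peak and its two neighbours to refine an integer position to a fractional one.

```python
import numpy as np

def subpixel_peak(values, i):
    """Refine an integer peak index i to sub-pixel precision by fitting
    a parabola through the peak sample and its two neighbours.
    Returns i + delta, with delta in (-0.5, 0.5)."""
    left, center, right = values[i - 1], values[i], values[i + 1]
    denom = left - 2.0 * center + right
    if denom == 0:  # perfectly flat neighbourhood: no refinement possible
        return float(i)
    delta = 0.5 * (left - right) / denom
    return i + delta

# Samples of a peak whose true maximum lies between indices 2 and 3.
profile = np.array([1.0, 4.0, 9.0, 8.0, 3.0])
i = int(np.argmax(profile))       # integer peak at index 2
print(subpixel_peak(profile, i))  # roughly 2.33
```

Applied along an edge profile or a correlation ridge, this is how a modest-resolution camera can still localize features to a fraction of a pixel.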

> Second, I would
> say that a metric for "quality" would be of great help. Like in physics,
> where any measurement is nothing without an error. Is it possible to get
> some kind of confidence level out of the pipeline?

That seems to be possible here and there, but it's not standard in OpenCV as used in OpenPnP (yet).
Using such measurements could help an enormous amount to adjust operators in the pipeline dynamically.
I.e. in OpenPnP there is almost always some prior information available, like:
- This area is definitely image background. You can reliably adapt to it.
- At that center position is either the hole in the nozzle, when black (component missing), or otherwise it's very likely some known region of a component, to which you can adapt as well.
- You can also take into account that a lot of components have their pads arranged in a somewhat symmetric way, so you can search only for features which fit that information.
- etc. pp.

> For the Hough
> transformation it should be possible to calculate such a metric by
> evaluating the height of the peak with respect to the average in the
> parameter space, or by calculating the width of the peak with respect to
> each parameter.

Yes, if you could access the Hough space manually. But that's not what OpenCV's HoughCircles() [2] allows you to do.
I didn't look further into OpenCV in detail, but they use what they call the "Hough gradient method", which is basically a Canny edge detector plus a Hough transform in one operation.
Personally, I would prefer to have the steps for image preparation/conditioning, image segmentation, feature extraction and feature analysis/measurement in separate (intelligently controlled) operators.
HoughCircles() seems to be a bit limited in that sense. And I still don't get my head around the "dp" parameter... I need to read the code here.

Clemens

[1] https://en.wikipedia.org/wiki/Optical_transfer_function
[2] https://docs.opencv.org/3.4/d4/d70/tutorial_hough_circle.html

sebastian...@gmail.com

Apr 17, 2021, 7:12:24 PM
to OpenPnP
Clemens,

> And I still don't get my head around the "dp" parameter....

dp determines the size of the accumulator array compared to your input image.
The image resolution divided by dp is the accumulator resolution. It defaults to 1, which is full resolution.
Greater dp values will reduce the accumulator resolution and hence increase robustness at the cost of accuracy.
You're basically binning votes for dp > 1.
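The binning effect can be mimicked with plain histograms (a toy numpy sketch of the idea, not OpenCV internals; the vote positions are invented): at full resolution, jittered votes for a circle centre spread over several cells, while a coarser accumulator collects them into one strong cell.

```python
import numpy as np

# Circle-centre "votes" jittered by edge noise around the true centre at x = 50.
votes = np.array([49, 50, 50, 51, 51, 52])

# dp = 1: full-resolution accumulator (1-px bins) - votes spread over several cells.
acc_dp1, _ = np.histogram(votes, bins=np.arange(0, 101))
# dp = 2: half-resolution accumulator (2-px bins) - jittered votes pile into one cell.
acc_dp2, _ = np.histogram(votes, bins=np.arange(0, 101, 2))

print(acc_dp1.max())  # strongest full-resolution cell: 2 votes
print(acc_dp2.max())  # strongest coarse cell: 4 votes
```

The coarse peak stands out more clearly above the noise floor (robustness), but the centre is now only known to within a 2-px cell (accuracy) - exactly the trade-off dp expresses.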

Sebastian

Clemens Koller

Apr 18, 2021, 10:46:10 AM
to ope...@googlegroups.com
Hi, Sebastian!

On 18/04/2021 01.12, sebastian...@gmail.com wrote:
>> And I still don't get my head around the "dp" parameter....
>
> dp determines the size of the accumulator array compared to your input image.
> The image resolution divided by dp is the accumulator resolution. It defaults to 1, which is full resolution.
> Greater dp values will reduce the accumulator resolution and hence increase robustness at the cost of accuracy.
> You're basically binning votes for dp > 1.

Thank you! I've read up on it in the meantime as well. I need more time to experiment further.

I also discovered a quite interesting comparison of the ELSD vs. Etemadi vs. Hough algorithms.
If you are interested, have a look at the images on this website:
http://ubee.enseeiht.fr/vision/ELSD/

Especially in the Results section, it's interesting how the dark-grey-on-light-grey octagon was misdetected by the Hough (the last row there):

http://ubee.enseeiht.fr/vision/ELSD/images/oct33.png
http://ubee.enseeiht.fr/vision/ELSD/images/lsead3.png
http://ubee.enseeiht.fr/vision/ELSD/images/etem11.png
http://ubee.enseeiht.fr/vision/ELSD/images/octHC.png

Have a look at octHC.png: the Hough delivers three circles where we would expect only one circle, as in lsead3.png.
I believe I have seen similar "jitter" issues when applying the Hough in OpenPnP, which makes life much more difficult.

Greets,

Clemens