Hi, Jan!
On 14/04/2021 09.11, 'Jan' via OpenPnP wrote:
> Hi Clemens!
> From my possibly limited perspective I would say that the input image
> is the most important factor. All image processing is limited by the
> quality of the input. (And the requirements are low: some Chinese PnPs
> use VGA analog cameras with amazingly stable results.)
I agree. But the image requirements for OpenPnP are not always as "low" as some images I have seen here and from my own cameras.
You need to treat "image quality" in more than one dimension: spatial resolution (pixels per unit length) as well as radiometric resolution (bits per pixel), i.e. high dynamic range and low noise (high SNR).
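To make the "high SNR" dimension concrete, here is a minimal numpy sketch of how one might estimate the SNR of a camera from a nominally uniform image patch. The sample values are invented for illustration; real measurements would use a flat-field target and many frames:

```python
import numpy as np

# Pixel values from a nominally uniform (flat-field) patch; any variation
# here is sensor noise. Values are illustrative, not from a real camera.
patch = np.array([100, 102, 99, 101, 98, 100], dtype=np.float64)

# SNR in dB: ratio of mean signal to noise standard deviation.
snr_db = 20 * np.log10(patch.mean() / patch.std())
# Higher bit depth and lower noise give a higher SNR, i.e. more usable
# radiometric levels for sub-pixel analysis.
```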
For precision spatial image analysis, monochrome sensors are usually used, because their modulation transfer function [1] is not degraded by a Bayer color filter array. (If you really need color, you can apply different wavelengths sequentially by using RGB+x LED illumination.)
For PnP applications it might be better to go for fast image acquisition (higher frame rate, lower pixel count, but still high dynamic range (12 bit/pixel)) and do the image analysis with sub-pixel precision afterwards.
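As a minimal sketch of what "sub-pixel precision afterwards" can mean: an intensity-weighted centroid locates a feature to a fraction of a pixel, which is why radiometric depth can substitute for raw pixel count. This is a generic illustration, not OpenPnP code:

```python
import numpy as np

def subpixel_centroid(img):
    """Intensity-weighted centroid of a grayscale patch, in (x, y).

    Resolution is limited by SNR, not by the pixel grid: the gray
    values interpolate the position between pixels.
    """
    img = img.astype(np.float64)
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * img).sum() / total, (ys * img).sum() / total

# A bright spot straddling pixels x=1 and x=2 in row y=2:
patch = np.zeros((5, 5))
patch[2, 1] = patch[2, 2] = 1.0
cx, cy = subpixel_centroid(patch)
# cx == 1.5, cy == 2.0  -- finer than the pixel grid
```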
But as always: it depends a lot and YMMV.
> Second, I would
> say that a metric for "quality" would be of great help. Like in physics,
> where any measurement is nothing without an error. Is it possible to get
> some kind of confidence level out of the pipeline?
That seems to be possible here and there, but it's not standard in OpenCV as used in OpenPnP (yet).
Using measurements could help an enormous amount to adjust operators in the pipeline dynamically.
E.g. in OpenPnP there is almost always some prior information available, such as:
- This area is definitely image background. You can reliably adapt to this.
- At the center position there is either the hole in the nozzle, if it appears black (component missing), or otherwise very likely some known region of a component, to which you can adapt as well.
- You can take into account that a lot of components have their pads arranged in a somewhat symmetric way, so you can search only for features that fit that prior knowledge.
- etc.
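The first point above (a region known to be background) can be sketched in a few lines: derive the binarization threshold from that region instead of hard-coding it. This is an illustrative numpy sketch with invented names and data, not an OpenPnP operator:

```python
import numpy as np

def threshold_from_background(img, bg_region):
    """Derive a binarization threshold from a region known to be background.

    bg_region is a (y0, y1, x0, x1) slice into img; the name and signature
    are hypothetical, for illustration only.
    """
    y0, y1, x0, x1 = bg_region
    bg = img[y0:y1, x0:x1].astype(np.float64)
    # Mean plus a few standard deviations of the background noise,
    # instead of a manually tuned constant.
    return bg.mean() + 3.0 * bg.std()

img = np.full((10, 10), 20.0)
img[4:6, 4:6] = 200.0                               # bright "component"
t = threshold_from_background(img, (0, 2, 0, 10))   # top rows: known background
mask = img > t                                      # segments the component
```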
> For the Hough
> transformation it should be possible to calculate such a metric by
> evaluating the height of the peak with respect to the average in the
> parameter space. Or by calculating the width of the peak with respect to
> each parameter.
Yes, if you could access the Hough space directly. But that's not something OpenCV's HoughCircles() [2] allows you to do.
I didn't look into OpenCV in more detail, but they use what they call the "Hough gradient method", which is basically a Canny edge detector plus a Hough transform in one operation.
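To show what Jan's peak-height metric could look like if you did have access to the accumulator, here is a small numpy sketch of a Hough circle vote for one fixed radius, returning peak height over accumulator mean as a rough confidence figure. It is deliberately naive and not OpenCV's implementation:

```python
import numpy as np

def hough_circle_confidence(edge_points, shape, radius):
    """Vote edge points into a (cy, cx) accumulator for one fixed radius.

    Returns (best_center, peak / mean) -- the peak height relative to the
    accumulator average, as a rough confidence metric. Illustrative
    sketch only, not OpenCV's Hough gradient method.
    """
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        # Each edge point votes for all centers at distance `radius`.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    center = np.unravel_index(acc.argmax(), shape)
    return center, acc.max() / acc.mean()

# Synthetic circle of radius 10 around (50, 50):
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = [(50 + 10 * np.sin(a), 50 + 10 * np.cos(a)) for a in angles]
center, confidence = hough_circle_confidence(pts, (100, 100), 10)
# center lands near (50, 50); confidence >> 1 for a clean circle
```

Peak width with respect to each parameter, Jan's second suggestion, could be read off the same accumulator around `center`.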
Personally, I would prefer to have the steps for the image preparation/conditioning, image segmentation, feature extraction and feature analysis/measurements in separate (intelligently controlled) operators.
HoughCircles() seems to be a bit limited in that sense. And I still can't get my head around the "dp" parameter... I need to read the code here.
Clemens
[1]
https://en.wikipedia.org/wiki/Optical_transfer_function
[2]
https://docs.opencv.org/3.4/d4/d70/tutorial_hough_circle.html