To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/a9c7ed4e-b749-4590-987b-7c491650424fn%40googlegroups.com.
Hi guys,
just my 2 cents.
OpenPnP bottom vision currently works by isolating bright contacts from anything else (body of the part, visible parts of the nozzle tip, background). Because these contacts are usually metallic, they should reflect light more than anything else.
One crucial step of bottom vision is applying a threshold.
Anything brighter than a certain brightness value is isolated; the idea is to detect only the contacts. The threshold can easily be tuned in OpenPnP nowadays, with no need to edit CV pipelines anymore (see the animation here):
https://github.com/openpnp/openpnp/wiki/Bottom-Vision#tuning-bottom-vision
This threshold principle has certain ramifications:
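The thresholding step described above can be sketched in a few lines. This is only an illustration of the principle, not OpenPnP's actual pipeline code; the toy frame and the threshold value 200 are made-up example data.

```python
# Minimal sketch of brightness thresholding: everything brighter than the
# threshold is kept (the bright metallic contacts), everything else is
# blanked out (part body, nozzle tip, background).

def threshold_mask(gray, threshold):
    """Return a binary mask: 255 where brightness exceeds threshold, else 0."""
    return [[255 if px > threshold else 0 for px in row] for row in gray]

# Toy grayscale frame: two bright "contact" strips (240) on a dark body (40).
frame = [
    [40, 40, 240, 240, 40, 40, 240, 240],
    [40, 40, 240, 240, 40, 40, 240, 240],
    [40, 40,  40,  40, 40, 40,  40,  40],
]

mask = threshold_mask(frame, 200)
```

In the real pipeline this corresponds to a binary threshold stage; tuning it simply moves the cut-off between "contact" and "everything else".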
The vision is extremely impressive!
Note, regarding the first two questions: I'm the last person to doubt that your solution is still very usable, even if there are limitations. We all know that the small passives are the ones that come in large numbers, so those must be fast; if one MCU is a bit slower, it does not matter.
After all, that's why I made this 😎:
https://makr.zone/openpnp-multi-shot-bottom-vision/736/
_Mark
> How 'big' is big enough, in terms of the part in the image. Presumably something like 10x pixels relative to minimum feature size or something?
With pre-rotate enabled (which is very much recommended), the parts are held at any angle. Therefore, only the central circle that fully fits into the camera view can really be used for vision. Bottom vision will also use a circular mask to blot everything outside it away (as you have discovered yourself). Therefore, any diffuser that is larger than that full central circle should be fine, i.e. yours is good.
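The geometry here is simple: because the part can sit at any angle, the usable area is the largest circle inscribed in the camera frame, and everything outside it gets masked. A small sketch of that idea (illustrative only; OpenPnP's own mask stage is not this code, and the 8×6 frame size is just example data):

```python
import math

def circular_mask(width, height):
    """Boolean mask keeping only the largest centered circle that fully
    fits in a width x height frame (True = usable pixel, False = masked)."""
    cx, cy = (width - 1) / 2, (height - 1) / 2
    radius = min(width, height) / 2  # inscribed circle, limited by the short side
    return [
        [math.hypot(x - cx, y - cy) <= radius for x in range(width)]
        for y in range(height)
    ]

# Toy 8x6 frame: center pixels survive, corners are blotted away.
mask = circular_mask(8, 6)
```

A diffuser only has to cover that inscribed circle; anything beyond it is outside the usable area anyway.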
You can see this visualized in Vision Compositing (even if you don't use multi-shots):
https://github.com/openpnp/openpnp/wiki/Vision-Compositing/5d2e803da169bfcd3de95fee2fde333abe01208d
_Mark