About illumination of components for vision


Florian Chende

Jun 25, 2019, 4:44:36 AM
to OpenPnP
Hi guys,
I am in the middle of designing a pnp machine and I need a bit of help, if possible.
Regarding various aspects of the build (vision, mechanics, servos, controllers, etc.) I have some uncertainties, and I will open a topic on each of them so it is easier to follow in case somebody else encounters the same problem.
This one is about illuminating the component for vision recognition.

I've seen in various DIY builds that a concentric ring of white LEDs (wide-angle, probably) pointing perpendicular to the PCB does not give very good results; the illumination does not look uniform.
I also performed some basic tests with a 640x480 USB camera and a fluorescent ring light, and the reflections it creates are bad. Recognition is neither consistent nor precise.
I was thinking of another approach: illumination from the side, with very narrow-angle LEDs (±4°) pointing at the component from all four sides. The selected LED is a 5 mm super-bright red one. The goal is to illuminate the component as fully and as uniformly as possible, and only the component, not the nozzle in the background and whatever is behind it. I've attached a picture to illustrate what I mean: an array of 3x5 LEDs covers a 40x40 mm component, and of course the component is illuminated from all four sides (the picture shows only one side as an example). Further refinement is possible: for a 40x40 mm component the whole array must be lit, but for parts from 0402/0603 up to ~15x15 mm only the center LED of each side array would have to be lit, keeping the background in the dark.
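As a rough sanity check of that idea, here is a small Python sketch of the selection rule for one 5-LED side row. The LED spacing and beam width are illustrative guesses, not measured values from the build:

```python
# Hypothetical sketch: pick which LEDs of one side row to light so that
# only the component, not the background, is illuminated. Assumes a row
# of five narrow-angle LEDs spanning 40 mm, beam ~8 mm wide at the part.
LED_POSITIONS_MM = [-16.0, -8.0, 0.0, 8.0, 16.0]  # LED centers along one side

def leds_to_light(component_size_mm: float, beam_width_mm: float = 8.0):
    """Return indices of LEDs whose beam overlaps the component."""
    half = component_size_mm / 2.0
    return [i for i, x in enumerate(LED_POSITIONS_MM)
            if abs(x) - beam_width_mm / 2.0 < half]

print(leds_to_light(40.0))  # -> [0, 1, 2, 3, 4]  (large part: whole row)
print(leds_to_light(1.0))   # -> [2]              (0402: centre LED only)
```

With these assumed numbers the rule matches the idea in the post: the full row for a 40x40 mm part, only the centre of each side array for tiny parts.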
Do you think that's a good approach? 
Thanks.
illumination 4 grd.png

Florian Chende

Jun 25, 2019, 4:55:57 AM
to OpenPnP
Forgot one question: can I control the illumination parameters from OpenPnP (e.g. only the center LEDs lit vs. the full array) depending on the component type?
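OpenPnP does expose actuators and scripting hooks that builders have used to switch lighting, though the exact hook-up depends on the machine configuration. Purely as an illustration of the decision logic (the names below are invented, not the OpenPnP API):

```python
# Illustrative only -- invented names, not the real OpenPnP API.
def led_pattern_for(body_size_mm: float) -> str:
    """Pick an LED pattern from the component body size."""
    if body_size_mm <= 15.0:
        return "CENTER_ONLY"   # keep the background dark for small parts
    return "FULL_ARRAY"        # large parts need the whole array lit

print(led_pattern_for(0.6))    # e.g. an 0402 body -> CENTER_ONLY
print(led_pattern_for(40.0))   # -> FULL_ARRAY
```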

Mike Menci

Jun 25, 2019, 5:01:46 AM
to OpenPnP
This subject is under discussion here: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/openpnp/OtT6iLNAufU
Your nozzle should be at PCB level, not above it (as in your sketch), for best results.

Mike

Florian Chende

Jun 25, 2019, 5:19:36 AM
to OpenPnP
Thanks. I saw the post about coaxial lighting, but that's not for me right now; it's a bit too complex/expensive.
I know lowering the component would help cut background illumination, but I want to avoid that because of the time cost. My head will have at least 6 nozzles, and it is bad enough that I cannot do gang vision and have to move every component over the camera individually. If I also have to lower and raise each one, cycle time will grow considerably.

Mark

Jun 25, 2019, 6:03:28 AM
to ope...@googlegroups.com

Hi Florian

 

bottom vision usually tries to “see” the legs, pads or even balls of parts. The goal is to see them bright against a dark backdrop.

 

Most of the time these legs, pads or balls are made of metal and are very reflective, so light is mostly reflected in a specular (mirror-like) way and much less in a diffuse way.

 

https://en.wikipedia.org/wiki/Reflection_(physics)#Reflection_of_light

 

This means that most of your LED's light will only hit the camera sensor in the relatively rare case where the leg, pad or ball forms a mirror surface at the precise angle where light is reflected towards the camera. For a ball that is usually just a tiny speck: the bright spot from an LED is often really, really tiny.
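A quick numerical sketch of this geometry (directions are toy values): a mirror-like pad facing the bottom camera, lit almost edge-on from the side, sends its specular reflection far away from the lens, while near-coaxial light comes straight back.

```python
import math

def reflect(d, n):
    """Mirror-reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def angle_deg(a, b):
    """Angle between two direction vectors, in degrees."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

n = (0.0, 0.0, -1.0)          # pad normal, pointing down at the camera
to_camera = (0.0, 0.0, -1.0)  # camera sits straight below the part

side_light = (1.0, 0.0, 0.2)  # mostly sideways, slightly upward
coax_light = (0.0, 0.0, 1.0)  # straight up, as with coaxial lighting

print(round(angle_deg(reflect(side_light, n), to_camera), 1))  # ~78.7 deg off-axis
print(round(angle_deg(reflect(coax_light, n), to_camera), 1))  # 0.0: hits the lens
```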

 

You really want to see the whole outline of your legs, pads or balls otherwise the bottom vision will not be accurate.

 

Of course the leg, pad or ball metal surface is never perfectly mirror-like, so some light will also be reflected diffusely. If your LEDs are extremely strong, then maybe that’s enough. Some industrial machines seem to do it that way.

 

But for more reliable lighting you usually want a large-area diffuser in front of the LEDs, so the light comes from a lot of angles and the chance is high that some of it is reflected towards the camera.

 

The problem with diffusers is that they normally require a hole for the camera to look through. The hole creates a dark area on flat reflective surfaces such as the PCB (when locating fiducials with the down-looking camera). That’s where coaxial lighting shines (literally). But it is not needed IMHO. Just mount the diffuser relatively close to the camera so the hole is small.
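The "mount the diffuser close" advice follows from simple cone geometry; with illustrative numbers (the viewing angle is an assumption, not from the post), the required hole radius grows linearly with the diffuser's distance from the lens:

```python
import math

# Illustrative geometry: the hole in the diffuser must clear the camera's
# view cone, so its minimum radius scales with distance from the lens.
def min_hole_radius_mm(distance_mm: float, half_fov_deg: float) -> float:
    return distance_mm * math.tan(math.radians(half_fov_deg))

# Assumed 20-degree half field of view:
for d in (5.0, 20.0, 50.0):
    print(d, "mm ->", round(min_hole_radius_mm(d, 20.0), 1), "mm radius")
```

At an assumed 20° half angle, a diffuser 5 mm from the lens needs only a ~1.8 mm hole, but at 50 mm it needs ~18 mm, a much larger dark patch.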

 

To increase the range of angles, the diffuser can also be funnel shaped. Some users have also used indirect lighting. Instead of using a diffuser they use a white dome with the LEDs in reverse. There have been many discussions in the group.

 

Personally I would favor white LEDs. They allow better/more precise color masking ("green screen") than red LEDs alone and do not suffer from ambient light. It is also a nicer view for the user. IMHO the days are gone when red LEDs had any relevant price or performance advantage.
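The color-masking point can be illustrated with a toy hue test: under white light the part keeps its true colors, so a colored background (e.g. green) can be separated by hue. The RGB samples and thresholds below are made up for the sketch:

```python
import colorsys

# Toy "green screen" classifier; sample values and thresholds are invented.
def is_green_background(r, g, b, hue_lo=0.25, hue_hi=0.45):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_lo <= h <= hue_hi and s > 0.3   # green-ish hue, saturated

print(is_green_background(40, 200, 60))    # soldermask-green pixel -> True
print(is_green_background(210, 205, 200))  # grey metal pad -> False
```

Under red-only light, every pixel collapses toward the same hue and this kind of separation is no longer possible.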

 

_Mark

 

Marek T.

Jun 25, 2019, 9:47:59 AM
to OpenPnP
Hi Florian,

The dark area created by the hole in the diffuser that Mark mentioned is not a big problem for bottom vision, because this area usually falls between the pins of the part, so the pins still shine well. Small parts like 0402/0201 can be covered by this shadow, but not by much. For bigger parts the problem does not exist at all.
You don't need to lower the part over the bottom camera if your camera resolution resolves small parts well from a distance. I use 1600x1200 px and recognize parts from 0402 up to 50x50 mm.
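A back-of-envelope check of that resolution claim (the field of view below is an assumed figure, not stated in the post; it just has to exceed the 50 mm parts):

```python
# How many pixels a part spans at a given sensor resolution and field of view.
def pixels_across(part_mm: float, sensor_px: int, fov_mm: float) -> float:
    return part_mm * sensor_px / fov_mm

# 1600 px spread over an assumed 60 mm field of view -> ~26.7 px/mm,
# so a 0402 body (1.0 x 0.5 mm) spans roughly 27 x 13 pixels.
print(round(pixels_across(1.0, 1600, 60.0), 1))  # -> 26.7
print(round(pixels_across(0.5, 1600, 60.0), 1))  # -> 13.3
```

A couple dozen pixels across the part is generally enough for OpenPnP's contour-based bottom vision, which is why the part doesn't have to be lowered toward the lens.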

That same dark area, though, is a real horror for the top camera, because you mainly need that camera for fiducials, which sit directly in the centre and are covered by the shadow. ENIG pads are flat and evenly finished; they are not problematic and shine perfectly. But HAL is awful: it is never flat and reflects the light uncontrollably, so the fiducial is hard to recognize. Here coaxial lighting sounds perfect!
A second option to eliminate this shadow is to move the centre of the sensor away from the centre of the lens using transforms in the OpenPnP settings. Then the shadow is off-centre and you can see the fiducial well.

For the bottom camera, the illuminator I use is the one from here, with a diffuser added over the LEDs (a flat hexagonal sheet with a hole for the lens). I use it with an ELP camera:

For the top camera I ran a lot of experiments, and the best result came from this one, after some redesigns for my machine's specifics, without any diffuser. However, it is still not perfect and I plan to run tests with coaxial light. See "Tunnel dome v2":

Friedrich Mäckle

Jun 26, 2019, 12:42:53 AM
to OpenPnP

Florian Chende

Jun 26, 2019, 3:31:20 AM
to OpenPnP
Great information guys, thanks a lot.

Florian Chende

Jul 6, 2019, 6:27:59 AM
to OpenPnP
OK, to conclude this version of the illuminator: I purchased some narrow-angle LEDs and built a test fixture. Merging the individual LED spots into a uniform illumination field is more problematic in practice than in theory. I didn't like the outcome, so I am dropping this idea for the moment. I am moving on to coaxial illumination, to see if I can make that work.

Marek T.

Jul 6, 2019, 6:54:47 AM
to OpenPnP
In theory, and judging by various pictures on the net, it seems to be the perfect choice. I'm going to change my lighting to coaxial too, but when...

John Plocher

Jul 12, 2019, 11:32:59 AM
to ope...@googlegroups.com
Just saw this new research paper - it twigged all the buzzwords :-)
Unfortunately, these so-called "event stream" cameras (like the iniVation DVS) retail for ~US$3,000!

Event-based Vision, Event Cameras, Event Camera SLAM. Event cameras, such as Inivation's Dynamic Vision Sensor (DVS), are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames.


High Speed and High Dynamic Range Video with an Event Camera


"Our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (> 20%), while comfortably running in real-time. We show that the network is able to synthesize high framerate videos (> 5,000 frames per second) of high-speed phenomena (e.g. a bullet hitting an object) and is able to provide high dynamic range reconstructions in challenging lighting conditions. "

