Vertical sided up camera lighting

485 views

JW

Apr 19, 2024, 8:21:15 AM
to OpenPnP
One of the last bits of design for my machine that I need to complete is the up camera, specifically the lighting.

I'm aware of the fundamentals: maximise lighting on the part, minimise lighting on the machinery hardware above the part to maximise contrast, etc., and that this is typically achieved with angled lighting.

However, I've noticed a lot of the high-end professional machines use vertical-sided lighting boxes on white PCBs, such as this on an Essemtec Fox:


Presumably this system works by ensuring the 'light box' is flooded with (presumably high CRI) white light, resulting in very low lighting levels above the part.

That said, what am I missing - how does the underside of the part facing the camera itself receive good lighting? Presumably just from all the reflections of the light in the box off the reflective white PCBs?

Jan

Apr 19, 2024, 8:35:05 AM
to ope...@googlegroups.com
Hi JW!
To my limited understanding, professional machines use template
matching for bottom vision. Good contrast and clear edges are therefore
more important than an overall view of the part's underside. However, for
things like LEDs there is an orientation mark on the bottom that's very
likely part of the template.
I've seen some nice lighting using triangular or trapezoidal
PCBs with multiple rows of LEDs mounted at an angle of about 45° in a
square. That should provide an even, homogeneous light distribution
with little direct light into the camera. If I had to design a light
again, I'd evaluate that design.

Jan

JW

Apr 19, 2024, 8:41:49 AM
to OpenPnP
Thanks Jan,

That's good to know. I've seen a lot of the trapezoidal and even hex-shaped, angled light boxes, which was my original intention before I saw these vertical-sided light boxes.

It seems a lot of people have had good success with these too, so I'm still strongly leaning towards this design.

I have a lot to learn on the machine vision side of things, the more I read the more questions I have!

Wayne Black

Apr 19, 2024, 12:24:29 PM
to ope...@googlegroups.com
I agree with Jan regarding the angle and not being parallel or perpendicular. I think the angle is related to nozzle height vs light box width. This is just an assumption on my end though.



--
Wayne Black
Owner
Black Box Embedded, LLC

JW

Apr 19, 2024, 6:50:32 PM
to OpenPnP
It certainly seems to be, based on some tests this evening. For a given angle, increasing the box size lifts the point in Z at which the light 'beams' from each side of the box intersect, so I'm doing some tests to determine the required height (and thus box size) to illuminate my largest parts, plus margin, whilst keeping parts within view and still imaging small parts well.

It'll presumably end up being a trade off of all requirements, as it always is!
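The relation JW describes can be sketched in a few lines of Python (my own illustration with hypothetical names, not anyone's actual design tool): for LEDs on two opposite walls, tilted toward the center, the central rays cross on the axis at a height set by the box width and the tilt angle, so a wider box pushes the crossing point up.

```python
import math

def beam_intersection_height(box_width_mm, led_angle_deg):
    """Height above the LEDs at which the central rays from two opposite
    walls cross, for LEDs tilted led_angle_deg from vertical toward the
    center of a box that is box_width_mm wide."""
    return (box_width_mm / 2) / math.tan(math.radians(led_angle_deg))

# e.g. a 60mm-wide box with 45-degree LEDs crosses at ~30mm;
# tilting the LEDs further from vertical lowers the crossing point
print(beam_intersection_height(60, 45))
print(beam_intersection_height(60, 60))
```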

JW

Apr 20, 2024, 8:03:00 AM
to OpenPnP
Further thoughts - does anybody have any idea of the typical luminous flux for a lightbox? I've seen everything from very cheap, almost indicator-grade LED light rings up to the light boxes in professional machines, which appear many orders of magnitude brighter.

Almost all high-CRI LEDs fall into the mid-power group, which, if run anywhere near rated current, results in a lightbox of ~1500 lumens - that's a LOT of light!
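As a rough sanity check on that figure (a sketch with assumed numbers, not a measurement): 60 mid-power 2835 LEDs at a typical ~25 lm each lands right around 1500 lm, and pulsed drive keeps the average current, and thus heat, low.

```python
def lightbox_flux_lm(n_leds, flux_per_led_lm):
    # total flux is just the sum over all emitters
    return n_leds * flux_per_led_lm

def avg_current_ma(pulse_ma, pulse_ms, period_ms):
    # pulsed drive: average current scales with the duty cycle
    return pulse_ma * pulse_ms / period_ms

print(lightbox_flux_lm(60, 25))    # -> 1500 lm total
print(avg_current_ma(200, 1, 100)) # -> 2.0 mA average for a 1ms pulse every 100ms
```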

SM

Apr 20, 2024, 10:23:03 AM
to OpenPnP
I'm using Luxeon 2835 LEDs (4000K, CRI 90) and pulsing them at 200mA for 1ms.
But I'm not using a ring or lightbox, just a classic coaxial light with a flat array of 32 LEDs.
This strong flash allows very short exposure times without heating everything up.

JW

Apr 20, 2024, 11:01:59 AM
to OpenPnP
Well, that's most encouraging. I'm using LiteOn JB2835AWT-W-U40GA0000-N0000001, which is also a 4000K, CRI 90, 2835-size LED - albeit in a square, 45-degree angled light box, and, perhaps overkill, with 64 LEDs. That said, they're driven by a CC driver per quadrant board, so I can turn down the current.

How are you getting away with such a short pulse? Presumably your camera frame rate is still 30/60/90fps or so - are you saying that a short 1ms pulse provides enough light even at such a low frame rate? Or is frame rate irrelevant here, and it's all about exposure time per frame? You can tell I've never done much photography of any sort!

SM

Apr 20, 2024, 12:32:59 PM
to OpenPnP
Yes, the frame rate is completely irrelevant in my case (I don't use OpenPnP) because the images are only taken on demand and the light control is done directly from the camera.

In order to get sharp, high-contrast images with e.g. a 25µs exposure time, I need almost 3000 lm, as the mirror only lets through 50% and the lens (C-mount, 16mm) also absorbs a lot.
This short exposure time is needed so that the camera delivers good images when the head flies over it at the appropriate speed (~0.5 to 1m/s) without having to stop at every nozzle. But I think all the effort involved in using industrial cameras, expensive electronics, servos and ball screws is certainly not justified for most hobbyists - it's better to buy something like that ready-made ;-)
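The numbers SM quotes tie together neatly: the image blur from flying over the camera is simply speed multiplied by exposure time. A quick arithmetic sketch (my own helper, not SM's code):

```python
def motion_blur_um(speed_m_per_s, exposure_us):
    # distance the head travels during the exposure;
    # (m/s) * (µs) conveniently comes out in µm
    return speed_m_per_s * exposure_us

print(motion_blur_um(1.0, 25))  # -> 25.0 µm of blur at 1 m/s with a 25µs exposure
print(motion_blur_um(0.5, 25))  # -> 12.5 µm at 0.5 m/s
```

At the ~25µm/pixel scale discussed later in this thread, that is roughly one pixel of blur at full speed, which explains why such short exposures are needed for on-the-fly imaging.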

JW

Apr 20, 2024, 2:37:51 PM
to OpenPnP

Interesting... I'm not sure I understand why that makes frame rate irrelevant, though. That may be my lack of detailed understanding of how an image is actually captured with a continuously streaming frame rate.

Presumably your camera isn't streaming at all, you just take a single image on demand.

My machine's got servos and high-lead ballscrews, so the hardware is there for imaging on the fly, but I'm just using a cheap ELP camera to get the machine running, so I'll still be pausing over the camera.

SM

Apr 20, 2024, 4:16:41 PM
to OpenPnP
The part alignment in OpenPnP is one-dimensional and stream-based, which makes it particularly cost-effective and, thanks to openpnp-capture and Java, operating-system-independent. However, even with the fastest mechanics, there are limits to performance. Each nozzle is positioned individually above the camera, the system waits until the image is stable enough, and the frame is processed. With many nozzles on the head, this takes a lot of time.

An alternative is e.g. analog cameras (AHD) for each nozzle and multi-channel grabber cards.
All nozzles are positioned above the cameras and the images are evaluated in parallel - this is what many Asian manufacturers do for the small components. For the large chips, a USB camera with a different FOV is often installed.

But the trend (with the low-cost machines) is slowly moving towards a single fast camera whose image resolution (ROI) can be varied depending on the component size.
The images are not streamed, but the 4 to 10 nozzles are photographed one after the other at the right moment, completely without settling time. However, the use of these fast industrial cameras often requires the use of manufacturer-specific SDKs, which makes universal, operating system-independent use impossible. The triggering of the camera must also be done close to the hardware, e.g. via suitable servo drives in which the trigger points are stored. The frame rate of the camera is not so important, but rather the shortest possible exposure times and small shutter release delays. For a head with only two nozzles, this effort is rather pointless.

In the high-end range (and as a hobby user you can only dream of this) the cameras (or laser scanners) are located directly on the head, which speeds things up enormously.

Jarosław Karwik

Apr 20, 2024, 4:31:48 PM
to ope...@googlegroups.com
There is a nice, relatively cheap option I always wanted to try as an external, independent image processor: https://openmv.io
I bought one some time ago and played with it a bit. It might not be suitable for large, complicated components, but it would work perfectly for most small passives.


SM

Apr 20, 2024, 5:22:29 PM
to OpenPnP
>> There is nice cheap (relatively)option I always wanted to try as external independent image processor: https://openmv.io 

Reminds me of the YY1, where "vision is done on the edge".

JW

Apr 20, 2024, 6:20:42 PM
to OpenPnP
Thank you, yet more interesting info.

Your messages are intriguing... you speak as if you're using professional machines - Juki, MyData, Siemens etc. - yet sentences like "I'm using Luxeon 2835 (4000K, CRI 90) and pulsing it at 200mA for 1ms." suggest the machine is your own build, but not running OpenPnP. Are any details out there, out of interest?

Also, I've attached some images of the up-camera design; thought I'd put them out there for comments/suggestions before I kick off manufacturing. The housing is an SLS-printed part, which screws down to the machine bed such that the camera and lighting are below the bed of the machine. Connectors protruding from the back of the light boards will daisy-chain power to each board; each board has 3 constant-current drivers, driving 5 LEDs each, for a total of 60 LEDs (not 64 as I said earlier).

The boards are standard FR4, well within the thermal limits of standard PCB construction, especially given the pulsed operation - and they'll likely be secured into the housing with a product such as 3M VHB.

I've sized the light box so that, at 58mm from the top surface of the camera board to the underside of a part, I can fit a TQFP100 within frame with ~20% headroom, and the intersection of the 4 light boards hits the middle of the underside of the part.

Attachments: UpCamera1.PNG, UpCamera2.PNG, UpCamera3.PNG

Chris Campbell

Apr 20, 2024, 11:28:33 PM
to ope...@googlegroups.com
A bit off topic maybe, but fwiw here is my experiment with one of the OpenMV units that Jarosław was talking about.

The global shutter can take a snapshot and find simple blobs in about 20ms at 160x128 resolution, and the trigger timing is very precise. After making that video, with a little more work I was able to have it output the angle and centroid of the convex hull of the blobs, which I think would be sufficient for making placement adjustments. It can receive commands and output results over UART. I only experimented with pieces of paper though, so there could be a bunch more problems to solve when using actual components. I also found that when printing actual size component footprints onto paper instead of drawing relatively large blobs by hand, the resolution needed to be increased to 320x240 which slowed it down some. But the overall direction looks quite promising.
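For anyone curious how angle-from-blob works in principle, here is a minimal NumPy sketch using second-order image moments (the OpenMV firmware has its own implementation; this is just the underlying idea, with hypothetical function names):

```python
import numpy as np

def blob_orientation_deg(mask):
    """Orientation of a binary blob, from its second-order central moments."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    # standard moments-based orientation: axis of least second moment
    return 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))

# a rectangular blob elongated along x should report ~0 degrees
m = np.zeros((50, 50), dtype=bool)
m[20:25, 5:45] = True
print(blob_orientation_deg(m))
```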

You can read a little more about the details here:


SM

Apr 21, 2024, 6:29:13 AM
to OpenPnP
JW, your light box looks really great and will provide beautiful light with enormous power reserves.

>>
suggests the machine is your own build, but not using OpenPnP - any details out there out of interest?
Yes, I'm now on DIY machine version 4. The first machines have long since been dismantled or recycled.

And I'm always delighted when someone makes the enormous effort to design and build a more sophisticated PnP machine themselves, given that the ready-made low-cost solutions from the Far East are becoming cheaper and more powerful every year; self-building, which only costs more, must be viewed as a sporting challenge.

What I find particularly interesting for do-it-yourselfers when it comes to performance is the maximum permitted component height and the choice of nozzle type.

Reducing the component height to a few millimeters (e.g. max. 7mm) and designing the required heights (head/feeder/PCB) accordingly can, under certain circumstances, increase performance enormously, since movement on the Z axes costs valuable time.
My first machine still had an enormous stroke for components up to 30mm, which in reality you never actually need - except when changing nozzles, for example if you use the still very popular 31mm-high Juki clones. With CP45Neo clones, on the other hand, the nozzle change can be carried out much faster and with less force, and less Z-stroke is required.

I'm also a fan of minimizing Z-moves as much as possible. I've been doing it this way for a long time: the components are raised from the feeders to the transport height (= bottom-camera focus height), so no extra Z-moves are necessary when flying over the camera. Of course, this is only possible if you have a separate Z motor for each nozzle. Calibrating the nozzle offsets is more complex than setting the focus of the bottom camera to PCB height, since no Z axis will ever run perfectly at right angles, no matter how hard you try.

By the way: on a modern 64-bit PC (i5 CPU), part alignment for e.g. an 0402 (400x400px image) with OpenCV takes around 600-800µs.

JW

Apr 21, 2024, 7:58:45 AM
to OpenPnP
A sporting challenge is at least half of it, but I think one of the main concerns with the machines coming out of the Far East is parts availability, support, and the longevity of the manufacturers themselves.

I personally would never consider a machine from a relatively unknown manufacturer who may not be around in a few years; this, for me, is where the OpenPnP proposition shines. Buy from Samsung or Fujitsu or somebody and it's a different proposition - but so is the bill!

So, if you're not running OpenPnP, what are you running?

mark maker

Apr 21, 2024, 10:13:53 AM
to ope...@googlegroups.com

Hi guys,

just my 2 cents.

OpenPnP bottom vision currently works by isolating bright contacts from anything else (body of the part, visible parts of the nozzle tip, background). Because these contacts are usually metallic, they should reflect light more than anything else.

One crucial step of bottom vision is applying a threshold. Anything brighter than a certain brightness value is isolated. The idea is to only detect the contacts. The threshold can be easily tuned in OpenPnP nowadays, no need to edit CV pipelines anymore (see the animation here):

Parametric-Pipeline

https://github.com/openpnp/openpnp/wiki/Bottom-Vision#tuning-bottom-vision

This threshold principle has certain ramifications:

  1. We want the threshold to be well inside the dynamic range of the camera, i.e. the camera image must include a palette of darker and brighter tones, so we can discriminate properly. If there is too much light, even the black plastic of the parts will appear too bright and clip. Looking at your "LED volcano" there, I fear this could happen. 😎
  2. There is only one global threshold. For it to work, we need a uniform effective intensity of lighting across the image.
  3. There are two principles of reflection: diffuse reflection and specular (mirror-like) reflection. Mirror-like reflection is very non-uniform, specifically the specular reflection of a LED creates an extreme peak at its center (the so-called specular highlight). A part's (plastic) body, or the nozzle tip might not be perfectly matte all around, and so it will specular-reflect a lot of light, even when its apparent color is dark.

    https://en.wikipedia.org/wiki/Diffuse_reflection
    https://en.wikipedia.org/wiki/Specular_reflection

  4. Hence we want to avoid specular reflection and favor diffuse reflection instead. To get it, we need diffuse light.
  5. The cheap-o method of creating diffuse light is to use a diffuser in front of the LEDs. It can be as simple as certain matte half-transparent papers, close to the lens front and with the smallest possible hole for the camera to peek through:





  6. I believe the Essemtec machines (that were mentioned) use another method: indirect lighting. The LEDs are deliberately far away (sideways) from the camera, mounted on a vertical PCB, so they will not be reflected in a specular way. Instead, I assume they illuminate the white wall and floor material in that camera shaft, and, provided it is sufficiently matte, it in turn reflects diffuse light up towards the part underside.
    https://youtu.be/tJ-N-6DMIe4
    If you want to emulate that, I guess you should keep the LEDs mounted on vertical PCBs and farther away. The camera should again peek through the smallest possible hole in the white reflective material. The camera shaft should be rather deep and the camera should have a long focal length, which also helps prevent specular reflections from the LEDs ever entering the lens (no shallow angles can do so; see the red example ray).



  7. A word about frame rates and short "flash" lighting: there are two types of shutter, global and rolling. Most cheap cameras have rolling shutters.
    https://expertphotography.com/rolling-shutter-vs-global-shutter/
    With a rolling shutter, the subject must be absolutely still for the whole duration of the frame; if you were to fire a "flash" shorter than the frame, you would get a bright band and the rest dark. Obviously, a camera with a higher FPS is better, as it requires a shorter wait. Conversely, professional PnP cameras have global shutters, and the frame/shutter is triggered by the machine controller together with a strong and extremely short "flash". That way you can do "flying" vision. Such signal generation at specific way-points must also be supported by the motion controller.
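Mark's global-threshold principle (points 1 and 2 above) can be demonstrated with a few lines of NumPy on a synthetic frame. This is a toy illustration of the idea, not OpenPnP's actual pipeline:

```python
import numpy as np

# synthetic bottom-vision frame: black background, dark part body, bright pads
img = np.zeros((120, 120), dtype=np.uint8)
img[40:80, 35:85] = 60     # dark plastic body
img[50:70, 10:30] = 220    # left row of contacts
img[50:70, 90:110] = 220   # right row of contacts

# one global threshold isolates only the bright metallic contacts;
# the body (60) and background (0) both fall below it
mask = img > 128
ys, xs = np.nonzero(mask)
cx, cy = xs.mean(), ys.mean()  # centroid of the contacts ~ part center
print(cx, cy)
```

If the scene were overexposed so the body also crossed the threshold, the centroid and outline would be corrupted, which is exactly why the threshold must sit well inside the camera's dynamic range.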
_Mark

SM

Apr 21, 2024, 10:56:51 AM
to OpenPnP
>> So, if you're not running OpenPnP, what are you running?

When I started, there was hardly anything comparable for familiarizing yourself with the DIY PnP world - I also played around with OpenPnP until around 2019.

And I have a lot of respect for the (academically) trained professional programmers here - unfortunately I'm not one, but as an old weirdo I still managed to write primitive software, adapted to my hardware/controllers, that meets my requirements. My goal is maximum performance with consistently high precision and as little effort as possible. But in order to place more than two small components per second precisely, you have to put in a bit of effort ;-)

SM

Apr 21, 2024, 11:12:28 AM
to OpenPnP
>> Conversely, professional PnP cameras have global shutter, and the frame/shutter is triggered by the machine controller, together with a strong and extremely short "flash". That way you can do "flying" vision. Such signal generation at certain way-points must also be supported by the motion controller

Exactly, but in my case I use the servo drive to trigger the camera, because it has low latency (less than 5µs).
Here's an early test video at 5kcph: https://streamable.com/0ws3us

JW

Apr 21, 2024, 11:20:45 AM
to OpenPnP
Mark, you've just thoroughly schooled me; that's given me yet more questions to go and answer - but that's exactly what I need, thank you!

My down camera is a flat LED panel of the same LEDs, capped by a frosted polycarbonate diffuser cover, so I think what I'll do for the time being is add a pocket within the up-camera frame that allows me to fit a diffuser cover to it, with the hole through the middle as mentioned.

I'd heard the terms rolling and global shutter before now, but did not understand the difference; it certainly explains some images I've seen before. I'll drop an image in later of the change...

mark maker

Apr 21, 2024, 1:41:03 PM
to ope...@googlegroups.com

The vision is extremely impressive!

Please show us the placed parts close up, then the same placements without alignment, so we can see how well it actually works. I'm not doubting you, just curious!
  1. A coaxial light was mentioned. That's easy to get right for small parts/narrow camera views, but hard for large ones, i.e. being able to use the same camera for both tiny passives and clunky ICs. What view size do you get at the focal plane? Does the coaxial light fully light it to the edges?
  2. Your video proves you have a super fast camera. But at what resolution does this work? Can you use the same camera setup for large ICs, i.e. providing enough pixel estate and resolution?
  3. Being one of the contributors of OpenPnP, I was astonished when you said you wrote your own software. I know a thing or two about the implied complexity 😬, so I'm massively impressed. Is this a universal solution? Can you show us a video with heterogeneous parts, feeders and not-in-a-row placements, including large clunky ICs, please?

Note, regarding the first two questions, I'm the last person to doubt your solution is still very usable, even if there are limitations. We all know that the small passives are the ones that come in large numbers. So those must be fast, if there is one MCU that's a bit slower, it does not matter.

After all, that's why I made this 😎:
https://makr.zone/openpnp-multi-shot-bottom-vision/736/

_Mark

SM

Apr 21, 2024, 2:42:25 PM
to OpenPnP
Oh, I'm glad you like it, and I'd be honored to answer all your questions in a new thread next week so as not to hijack Julian's topic even more.

JW

Apr 21, 2024, 9:27:24 PM
to OpenPnP
I'll keep an eye out for that thread... sounds interesting!

Images of the diffused version of the up camera: it's just a 3mm-thick frosted acrylic cover. I need to experiment with the viewport/hole size of course, but otherwise I think I'm happy with this now.

Attachments: Diffused camera.PNG, Diffused camera 2.PNG, Diffused camera 3.PNG

Marshall S. (Alakuu)

May 6, 2024, 12:59:20 PM
to OpenPnP
JW,

What camera did you decide to go with for the bottom? Any chance you'd be willing to release your PCB files for the LEDs? I'm looking to replace the bottom vision on my machine, as I think it's a big bottleneck, and your design sure looks fantastic!

JW

May 6, 2024, 2:38:21 PM
to OpenPnP
One of the 720p ELP modules, I can get the part number later today.

The printed housing will arrive in a few days, so I'll do some tests to see if anything needs changing before releasing the files. It should be OK, but I don't want to release something that's a disaster!

The housing is an SLS printed design, though I'm sure with some changes FDM would be possible.

I've actually got some more of the PCBs left over from the MOQ, so I could pop them up on eBay or something if they work well.

JW

May 6, 2024, 6:20:51 PM
to OpenPnP
Ok, camera is ELP-USB100W05MT-L36.

Here's a pic of the 4 boards, assembled and ready for test when the housing is delivered. The CC driver output current is set by a single resistor (per driver); I've built the boards to drive 20mA through the LEDs, but this can easily be tripled. I'll do some testing with different packages etc. and post the results here; then let me know if you're interested, and if we agree on a sensible price I can order another housing, build the boards and ship the finished assembly. What country are you in?

Excuse all the fluff in the picture, I'd just wiped the isopropyl alcohol off the bench!

Camera boards.PNG

I also built a down board, using the same LEDs and drivers. I haven't got any pictures of the diffusers for either up or down, so I'll post them here along with the test results.

Down camera boards 2.PNG Down camera boards.PNG

To be honest, it seems very difficult to buy OpenPnP camera and light 'assemblies' other than some very 'undocumented' pieces from China, so if there's interest I'd happily build a batch.

JW

May 6, 2024, 6:24:36 PM
to OpenPnP
I also mentioned the housing is SLS - sort of. It was designed with SLS in mind, but I've actually had the parts printed using MJF, including vibro-polishing and black dye, so they will look like this - close to injection-moulded parts.

JW

May 6, 2024, 6:44:31 PM
to OpenPnP
This is the down camera assembly, the 'stack' is screwed together with long M2 fasteners, directly into the diffuser at the bottom.

The diffuser is Opal 050 Perspex®.

I just wanted to try this out first to make sure the brightness, diffuser etc. work well - then I'll do a nice printed housing that is visually much cleaner and replaces the long screws and small M2 standoffs. Just drop in the LED board and screw it into place with the diffuser, then drop the camera module in the top, screw that into place, and screw the module to the head.

Connectors are Molex Pico-Lock on this one as they needed to be ultra low profile.

Attachments: Down camera 2.PNG, Down camera.PNG

Marshall S. (Alakuu)

May 7, 2024, 8:32:02 PM
to OpenPnP
I'd be interested in the bottom assembly. I'm surprised you went with SLS; I'd assume standard 3D printing would likely do the job. Though SLS is fantastic, because the nylon will take any heat from the lighting assembly - if there's even any to be worried about!

I'm in the US - Ohio specifically. 

You wouldn't need to go through the trouble of the assembly process. I could probably get it to build on my PnP, and I'd mess with a 3D-printed housing (I do have a little adaptation I'd need to make to mount it to the existing plate that my bottom camera is set up on).

I'd gladly buy the extra boards from you if you have them and the components! Toss me an email!

JW

May 7, 2024, 10:32:23 PM
to OpenPnP
To be honest, it's just to save on the hassle of cleaning up support material. Will drop you an email.

JW

May 27, 2024, 5:03:49 PM
to OpenPnP
First tests are in, with a TSSOP-5 and a 2835 LED package.

Looks like the design will achieve a wide range of image types, but I'm having a few troubles with cameras in general, which I think are related to USB controllers etc.; I'll start a separate thread for that.

These images are captured with the diffuser panel in place, which just has a 20mm hole in the middle to look through.

But these tests have got me thinking a few things:

- How 'big' is big enough, in terms of the part in the image? Presumably something like 10 pixels relative to the minimum feature size, or similar?
- What is the priority for OpenPnP: is it using template matching, or edge/corner finding? That is, do we want the pads to be the most heavily contrasted part of the image, or the package outline?

Attachments: Up1.PNG, up3.PNG

JW

May 27, 2024, 5:06:36 PM
to OpenPnP
Ah, I've just seen Mark's post about OpenPnP's vision processing, thresholds etc...

mark maker

May 28, 2024, 3:07:05 AM
to ope...@googlegroups.com

> How 'big' is big enough, in terms of the part in the image. Presumably something like 10x pixels relative to minimum feature size or something?

With pre-rotate enabled (which is very much recommended), the parts are held at any angle. Therefore, only the central circle that fully fits into the camera view can really be used for vision. Bottom vision will also use a circular mask to blot everything else away (as you have discovered yourself). Therefore any diffuser that is larger than that full central circle should be fine, i.e. yours is good.

You can see this visualized in Vision Compositing (even if you don't use multi-shots):

https://github.com/openpnp/openpnp/wiki/Vision-Compositing/5d2e803da169bfcd3de95fee2fde333abe01208d
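That inscribed-circle constraint can be sketched in a couple of lines (my own helper with hypothetical names, not OpenPnP code): with pre-rotate, the part's body diagonal must fit inside the largest circle inscribed in the camera view.

```python
import math

def fits_prerotate_view(fov_w_mm, fov_h_mm, part_w_mm, part_l_mm):
    # the part may appear at any angle, so its diagonal must fit inside
    # the circle inscribed in the (usually rectangular) camera view
    usable_circle_mm = min(fov_w_mm, fov_h_mm)
    return math.hypot(part_w_mm, part_l_mm) <= usable_circle_mm

print(fits_prerotate_view(24, 18, 10, 10))  # diagonal ~14.1mm fits in an 18mm circle
print(fits_prerotate_view(24, 18, 15, 15))  # diagonal ~21.2mm does not
```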

_Mark

SM

May 28, 2024, 2:29:10 PM
to OpenPnP
>> How 'big' is big enough, in terms of the part in the image. Presumably something like 10x pixels relative to minimum feature size or something?

That is a good question.

The field of view (FOV) of your system is calculated easily:

FOV = distance * pixelsize * pixelcount / focallength

The distance is measured from the camera sensor to the object.
The pixel size can be found in the sensor manufacturer's datasheet.
The pixel count corresponds to the native resolution along the corresponding axis (vertical or horizontal).
The focal length is written on the camera lens.

From this you get the units per pixel (UPP):

UPP = FOV / pixelcount

In my humble opinion, the UPP should not be much larger than 25µm/pixel in order to recognize the position of an 0201 reliably, as long as the quality of the lens and sensor allows it. On better sensors (SNR greater than 40dB) this value can also be higher.
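SM's two formulas translate directly into code. The numbers below are hypothetical example values, not anyone's actual camera setup:

```python
def fov_mm(distance_mm, pixel_size_um, pixel_count, focal_length_mm):
    # FOV = distance * pixelsize * pixelcount / focallength
    return distance_mm * (pixel_size_um / 1000.0) * pixel_count / focal_length_mm

def upp_um_per_px(fov, pixel_count):
    # UPP = FOV / pixelcount, converted to µm per pixel
    return fov * 1000.0 / pixel_count

# hypothetical setup: 100mm sensor-to-part distance, 3µm pixels,
# 1280px horizontal resolution, 16mm lens
f = fov_mm(100, 3.0, 1280, 16)
print(f)                       # ≈ 24 mm horizontal FOV
print(upp_um_per_px(f, 1280))  # ≈ 18.75 µm/px, inside the ~25µm/px guideline
```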

Sudesh .s

Jun 23, 2024, 11:50:17 PM
to ope...@googlegroups.com
This error is showing in bottom vision:


Attachment: error1.png

JW

Jun 24, 2024, 6:58:37 AM
to OpenPnP
This just means you have not configured an up-facing camera.

Post a screenshot of your settings in Machine Setup > Cameras.
