New up vision setup in my chm-t

vespaman

unread,
Apr 9, 2025, 12:10:54 PMApr 9
to OpenPnP
Hi Guys,

I have been testing some up-light stuff recently, and thought I should share..

My starting point was good; I had an imx290 based USB3 camera giving 1920x1080@60Hz (cropped to 1080x1080), and I never had any issues whatsoever. The depth of focus was huge, many centimeters. But my units per pixel figure was a bit high (or so I thought) at 29. I have a dual nozzle machine (chm-t 48vb).

In my setup, I typically had around 60ms of settle time, and then about 30ms (IIRC) of alignment CPU time.
Anyway, I thought that I could squeeze a little more out of this if I had a 120FPS camera, so I got one of those AR0234 sensors, only to find out that it gave no real improvement at all. The frame rate was OK, but the processing time between frames was now higher instead. I concluded that this was because of the extra noise the AR0234 gives over the imx290, even with a very strong up light. Maybe also the global shutter means that it is harder for OpenCV to find differences between frames? After tweaking the setup with a little bit more denoise, and a smaller crop in the motion detector, I arrived at something a little better than the imx290. Kind of disappointing, but such is life.

Since I have the time, I also wanted to try up vision at Z=0, since the nozzle dipping is rather slow on these machines (they use springs to pull the nozzles up, and the mass makes it slow). This seems to work pretty well, and gave me about 200CPH better performance (this is an estimate, since I have changed quite a few things since I last dipped the components into the pit). But even removing the component height adjustment (thanks @jan for pointing it out) made a noticeable difference.


But, why stop here?
Early on, when I converted my machine, I wanted to try dual up vision cameras, since I had a bunch of imx290 sensors, but I could never find a good way to interface them to OpenPnP without adding buffers here and there (they only have a MIPI interface), so I parked that project. But now I thought it would be nice to continue this, so I modded the OpenPnP code and modded my machine. I now use two of the same AR0234 sensors.

Since the chmt machines have a round hole, I decided to use a 3d printed mockup that I had made earlier for the Z=0 focus/lens distance testing, and arrived at a cut-out size like this:
(The tripod mounts in the model are for testing FOV and adjusting focus)
chmt_camhole_model.jpg

testing focus.jpg

I made a saw guide and cut it with a jigsaw using a special aluminium blade from Bosch. Much easier than I anticipated; it took less than 15 minutes. I did not even pull the top from the machine, only pulled it forward as much as possible, so I was free to handle the jigsaw from below, where the table top is flat.
jigsaw_guide.jpgsawguide_inplace.jpg

Then I did a couple of different designs to be able to adjust the camera distance, rotation etc. This is very important, and very sensitive. The pictures show the first version; I have changed the camera sensor holding part since then, but forgot to take new pics. I am still not happy with what I have at the moment, but it serves for testing.
 
assy1.jpg
In the machine from below...

assy1_in_machine.jpg

In order to get the Z=0 test working, I had to ditch the original LED light and create a new (but compatible) solution that is strong enough for the AR0234, and that also seems to cover the full FOV. As luck would have it, I could reuse the original cover glass! :-)

camera_pit.jpg

For the code changes in OpenPnP, I basically followed the relevant points from Mark's list from some years ago, and I'm sure there are things I have missed. Also, the code has changed since then, so maybe there's stuff that I don't take care of like I should. But this is only a test after all, and I am not a Java programmer, so this is really just a mockup.
 
 Does it work? Looks like it.
 Does it improve performance? Looks like it.

Since I have been doing a lot of testing lately (with this camera stuff, and new stepper drivers etc.), I put together a python script with Gemini to see time differences between different setups. This shows the exact difference between running the same job with a single cam and dual cams, everything else the same:

Skärmbild_20250409_175715.png
(never mind the last cycle - it depends whether there's a feed needed or not (and first cycle includes the wait for vacuum to be generated))
(once I have fixed this script up a bit, I'll publish it on my github space)

There are tons of things that could be done to make this much faster.
 a. As it is now, the head is still moving between the vision shots (albeit very little), and this adds a huge delay.
 b. If a) was handled, we could remove the second (unnecessary) settle time.
 c. I imagine if b) was handled, we could use a thread pool to do the alignments in parallel.


I have no idea if it is possible to sort out a) easily. I'll create yet another mechanical setup to see if I can pinpoint the tip center even better than now, but I think a future solution should be in software somehow. If that is at all possible? Maybe it is possible to align a two-camera machine mechanically perfectly, but a four++ nozzle machine will probably not be possible.

What are your thoughts on this?

  - Micael

vespaman

unread,
Apr 9, 2025, 12:34:42 PMApr 9
to OpenPnP
Here's a short video of the miracle :-) short test job

Wayne Black

unread,
Apr 9, 2025, 12:40:41 PMApr 9
to ope...@googlegroups.com
Wow, great work Micael!

--
Wayne Black
Owner
Black Box Embedded, LLC

vespaman

unread,
Apr 10, 2025, 7:39:40 AMApr 10
to OpenPnP
Thanks Wayne!

I am now trying to think of a way to pinpoint the mechanical settings of the cameras. One issue I have is that once I run the advanced calibration on the cameras, the coordinates change, and on top of that, the nozzle runout is also an obstacle. For now I have done the mechanical adjustments before doing any advanced calibration, but this gives me more error afterwards.
I guess I can iterate everything until I'm "home", but this is rather time consuming (doing advanced calibration on each iteration).

But I need to re-design the sensor alignment again, so that it is easier. My thought now is to skip rotation, and only allow Y adjustment on the first camera, and only X on the second camera. Then iterate between the cameras, one at a time, doing advanced calibration only on the camera that I changed.


Another strange thing I have discovered is that the camera settle test tab gives me a rather good result, but in a job, the result is way worse. This does not make sense to me. One reason is obvious - in a job the speed gets higher, since the distance is longer (in the settle tests, the distance is capped to 10cm). But even on the 0.5mm move between cam1/nozzle1 and cam2/nozzle2, the settle of this super short move is still much higher than what I see if I do a test settle.


 - Micael

Wayne Black

unread,
Apr 10, 2025, 12:14:37 PMApr 10
to ope...@googlegroups.com
@Micael
I confess I'm not 100% sure what exactly you are doing, but I find it super interesting. I hope you keep posting your progress and findings on this. :)



Javier Hernández

unread,
Apr 10, 2025, 12:31:21 PMApr 10
to ope...@googlegroups.com
It looks pretty cool. Maybe I will use this camera for my new project. I was using the ELP 720HP that is typical for OpenPnP, and I was looking for something better, with Global Shutter.

vespaman

unread,
Apr 10, 2025, 1:56:54 PMApr 10
to OpenPnP
@Wayne,
What I'm trying to achieve is to align both cameras so that they are dead spot-on where OpenPnP thinks they are after running the advanced calibration for both cameras. Advanced calibration changes/adjusts the coordinates of the center of the cameras, to where it thinks they are (I don't really understand why). So one option could also be to disable the advanced calibration, but I guess there would be a penalty on placement then. If advanced calibration changes the coordinates for one or both cameras, it means that when OpenPnP goes to the first camera it will be spot on, but the second is off by the difference of both adjusted cameras. So I presume (haven't tested) that if I disable the move that OpenPnP wants to do (in the neighbourhood of 0.2-0.4mm), up vision will probably struggle, since it thinks the part is not centered on the tip (while it is).
Yes, I understand it is hard to follow... :-)

By the way, I have only used double-sided adhesive on one board so far, but that looked like better accuracy than I had before (spot on); I guess the new stepper drivers gave me that present. The test job I'm running is all 0402's, so I think it would be good to place some tricky components also, but the amount of test runs that I'm doing means a lot of component waste... I'll revisit the placement accuracy once I'm happy with the setup.

@Javier, yes I also started with the ELP 720, before I got the imx290 (which is a rolling shutter). I'm not convinced by the AR0234, but at least they don't cost an arm and a leg. Maybe I'll learn to tweak them better, but the noise floor is much higher at the moment than on my trusty imx290, so I need to accept a much higher threshold (over 400) in the motion settle settings, where I had about 200 before.
One thing to know with the MIPI-USB3 board that I got is that the media processor gets really hot, so it needs a heat sink or a fan (I opted for both).

Tomorrow I'll start to test a new alignment solution, and also move the sensors closer to the nozzle; they are a bit unnecessarily far away now. Hopefully this will help OpenCV and the settle times.

 -  Micael

tonyl...@gmail.com

unread,
Apr 11, 2025, 9:09:01 AMApr 11
to OpenPnP
> What I'm trying to achieve, is to align both cameras, so that they are dead spot on where OpenPnP thinks they are after running the advanced calibration for both cameras. Advanced calibration is changing/adjusting the coordinates of the center of the cameras, where it thinks they are (I don't really understand why).

Can you explain more about what you mean by "... so they are dead spot on where OpenPnP thinks they are...."?

>Advanced calibration is changing/adjusting the coordinates of the center of the cameras, where it thinks they are (I don't really understand why).

One thing Advanced calibration does is compensate for any tilt of the camera (without this, physically parallel lines will appear in the camera images as if they converge/diverge from one another). It does this by remapping the pixels of the raw camera image to new locations in the image you see in the camera view (and that gets processed through the pipeline). One part of the remapping process is to map the pixel whose ray is exactly perpendicular to the machine's X-Y axes to the point where the camera view's crosshairs intersect. Before Advanced calibration is run, the intersection point is just at the pixel at the center of the image (which is on a ray that is tilted by the same amount as the camera is tilted). On the tilted ray, objects will appear to change position in X and/or Y depending on their Z distance from the camera. On the perpendicular ray, the X-Y position of an object remains the same regardless of its distance from the camera. See https://github.com/openpnp/openpnp/wiki/Advance-Camera-Calibration---Camera-Mounting-Errors. The bottom line is that the location of the camera gets changed because the intersection of the crosshairs has changed to a different pixel.
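
As a rough back-of-the-envelope illustration of the tilt effect (made-up numbers, not from OpenPnP): an object on a ray tilted by angle theta appears shifted by roughly z*tan(theta), where z is its distance from the camera.

// Illustrative only - not OpenPnP code.
static double apparentShiftMm(double tiltDeg, double zMm) {
    // apparent X/Y shift of an object on a ray tilted by tiltDeg degrees,
    // as a function of its Z distance zMm from the camera
    return zMm * Math.tan(Math.toRadians(tiltDeg));
}
// e.g. a 1 degree tilt shifts an object 10 mm away by about 0.17 mm,
// while an object on the perpendicular ray stays put regardless of Z.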

Tony

vespaman

unread,
Apr 11, 2025, 11:42:52 AMApr 11
to OpenPnP

>Can you explain more about what you mean by "... so they are dead spot on where OpenPnP thinks they are...."?

Well, I suppose I can describe what I want to achieve: to put the cameras mechanically in a position so that no move is needed before the alignment vision commences on both nozzle tips.
I also have both nozzle tips at their highest position, which I a bit carelessly call Z zero.
This Z zero is the primary calibration point in advanced calibration, and also where I am eyeballing my manual setup.

I guess that advanced calibration also takes the runout into consideration, something that I have not done so far while eyeballing. I intend to set the nozzle tips spinning the next time I give it a try, so at least I'll be able to get a little bit closer.

>One thing Advanced calibration does is compensate for any tilt of the camera
[..]
>The bottom line is that the location of the camera gets changed because the intersection of the crosshairs has changed to a different pixel.

I see! That all makes sense

Would it be possible to make the advanced calibration take multiple cameras into consideration when doing this remapping?
If not, I suppose the only way is to iterate like I am doing now, and perhaps accept some alignment error (which will be directly transferred to a placement error), depending on how many iterations one can endure :-)
The way I'm doing this now, is to first calibrate N1 over C1 as best I can.
Then I run Advanced Calibration on N1. Then calibrate N2 over C2 as best I can.
Then I run Advanced Calibration on N2.
Then I place the head at N1/C1, and notice how much I need to move to get N2 over C2 again.
Then I adjust C1 for Y difference, and C2 for X difference given the above numbers.
Then again I start over with advanced calibration of N1/C1, N2/C2, adjust etc etc.

Or is there another way?


  - Micael

Toby Dickenson

unread,
Apr 11, 2025, 1:17:22 PMApr 11
to ope...@googlegroups.com
> Or is there another way?

The better way would be to change the bottom vision logic to process
the second nozzle in its natural location - fairly close to the centre
of the second camera - rather than moving it a tiny distance to put
the second nozzle exactly in the center of the second camera. I have
no idea how easy that would be.

Thank you for the updates. This is an interesting project.

vespaman

unread,
Apr 11, 2025, 1:50:36 PMApr 11
to OpenPnP

>The better way would be to change the bottom vision logic to process the second nozzle in its natural location

That makes so much sense, now that you mention it! I'll see if I can get my head around how that part is working.

Thanks for your valuable input!


   - Micael

Wayne Black

unread,
Apr 11, 2025, 2:45:15 PMApr 11
to ope...@googlegroups.com
What Toby says is how I initially thought you were implementing this: little to no XY motion for aligning both nozzles. My nozzles are as close together as you can get using nema8. Ganging 'micro' cameras centered on both nozzles from the same location would be sweet. This seems like a more attainable approach in OpenPnP than a 'flyby' system.

Keep us posted :)


Jan

unread,
Apr 11, 2025, 5:50:16 PMApr 11
to ope...@googlegroups.com
Hi Micael!
As others said, that's really awesome work! Many thanks for sharing it
and your progress.
I'll try to put my 2cents in on the open questions:

- To me the precise camera calibration does not matter. It's
overwritten/corrected by nozzle tip runout calibration. This serves as
the reference measurement for precise bottom vision results. And as runout
calibration is recommended and will always indicate a slightly
different offset, you'll always have this tiny move you don't like.
Here is how I understand the current code flow: on alignment, the
location of the nozzle on the head is calculated taking the runout into
account. Then the head is sent to the camera's location for bottom
vision. I would try to change that by checking if the distance (in XY)
is small (maybe with respect to the camera's roaming radius or a new
parameter) and if so, convert the move into an offset the bottom vision
results are corrected for (a rough sketch of this idea follows below).
You may find an idea how that could be done by checking the camera
calibration algos I&S is using (VisionSolutions.java?). They explicitly
offset the nozzle over the camera and use bottom vision to calculate the
location error.
You may have to check if settling always takes place even if no motion
is ongoing, or only selectively after motion.

- Processing bottom vision asynchronously is not as easy as I initially
thought. I tried it a few weeks back and failed because of
synchronization issues (I think). I moved the entire bottom vision
procedure into a new thread (without synchronizing/waiting for the
results). If the thread is requested to run immediately it times out, and
if the thread is scheduled as a machine task, it runs when the job has
finished. This leads me to the assumption that the image taking, which
is part of the pipeline, requires exclusive machine access, which it can
not have as long as the job processor is running.
There are two options I'm considering: a) if synchronization is added,
the job processor might release the machine, allowing a bottom vision
thread to take it, or b) intercept the pipeline processing such that
the image is taken by the job processor and the actual processing is done
in a separate thread.
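
A very rough sketch of that small-move-to-offset idea, with hypothetical
types and names (not the actual OpenPnP classes):

// Hypothetical sketch only - not OpenPnP code.
record Offset(double x, double y) {
    Offset minus(Offset o) { return new Offset(x - o.x, y - o.y); }
}

class AlignmentSketch {
    // threshold below which the physical move (and its settle) is skipped;
    // could become a user setting
    static final double MAX_SKIPPABLE_MOVE_MM = 0.5;

    // Returns the offset to correct the bottom vision result with,
    // or null if a normal move to the camera was performed.
    static Offset alignOverCamera(Offset nozzleXY, Offset cameraXY) {
        Offset delta = cameraXY.minus(nozzleXY);
        if (Math.abs(delta.x()) < MAX_SKIPPABLE_MOVE_MM
                && Math.abs(delta.y()) < MAX_SKIPPABLE_MOVE_MM) {
            return delta; // skip the move and the settle, correct the vision result instead
        }
        moveToCameraAndSettle(); // otherwise behave exactly as today
        return null;
    }

    static void moveToCameraAndSettle() { /* as today */ }
}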

Jan

vespaman

unread,
Apr 11, 2025, 5:52:21 PMApr 11
to OpenPnP

Wayne, there will be no 'little motion' with Toby's suggestion. Little motion is what I have today, but that spoils the efficiency greatly. The idea is now instead to (somehow) feed the delta to the alignment in the up vision code: if this is not the first nozzle in the cycle, the code accepts this delta from the current position as the natural place of the component. For some reason my mind was stuck on fixing the issue by the camera, and not within the vision/alignment at run time. But this is why it is always good to get different views.

Having thought about it, since this is a test only, I'll concentrate on the dual nozzle setup. Maybe more nozzles will be more complicated, but for now the order is simple: if it arrives first at the first nozzle, the next can only be nozzle two, and the other way around. If there are more nozzles, I suppose there needs to be more logic, at least if there are fewer cameras than nozzles. But then again, I suppose that if one is going for multiple cameras, one camera per nozzle is the natural choice.

From one thing to another, I realized that maybe the up light could be part of the relatively noisy picture (according to the settle test without moving) - it is driven with one of these LED drivers, i.e. switching. Normally this is not a problem, since they are pretty stable, but this LED driver is a low cost china thing that arrived with the LED ring, and from what I could make out from the data sheet, it is driven above its intended specs. So I'll try to DC-feed the up light once I have the new parts in the machine, to see if it shows things in a better light (pun intended).

I hoped I'd have time to do more today, but I only made a new set of the camera adjustment parts (changing the distance and the alignment parts). Not sure what happened to the rest of the day... :-) Not sure how much time I can spend on this tomorrow, but I'll keep you updated whenever there's progress.



 - Micael

vespaman

unread,
Apr 11, 2025, 6:12:02 PMApr 11
to OpenPnP
Hi Jan,

Exactly, this is what Toby suggested above. Thanks for the code pointers, I'll be looking into them. Since we anyway have to disable any move and settle, could we not "just" trap the motion request (wherever it originates from), and use the new X/Y as a delta to feed back to the up vision code?
I think it is good to take it step by step, so getting rid of the move and settle will be my first milestone. This is the part that brings the most bang for the buck, and once this is done, the threaded up vision stuff can be next up.

It sounds like you have already spent some time looking into this - great! :-)
The way I thought the up vision threading could be done (probably naive, I have not looked into this at all) was to fire the up vision alignments of all nozzles from a thread pool after settling on arrival at the first nozzle tip in the cycle. But maybe this is too brutal a change of the code, and/or brings other issues further down the line? I am sure this is not easy in any way.

  - Micael

javier.hern...@gmail.com

unread,
Apr 14, 2025, 4:55:06 AMApr 14
to OpenPnP
And why Rolling Shutter instead of Global Shutter? Isn't Global Shutter supposed to be better with OpenPnP?

Chuck Hackett

unread,
Apr 14, 2025, 6:43:51 PMApr 14
to ope...@googlegroups.com
I shudder to offer an opinion as I am a newbie but ...

My understanding is that global shutter is better for capturing moving objects but more expensive.  Rolling shutter is cheaper and fine for still objects.

Since our objects are/will be still at exposure I assume rolling shutter is fine  ... but I'm a total newbie at this ...

Chuck

javier.hern...@gmail.com

unread,
Apr 14, 2025, 8:01:45 PMApr 14
to OpenPnP
50€ original Raspberry Pi Camera

A specialised 1.6-megapixel camera that can capture rapid motion without introducing artefacts typical of rolling shutter cameras. Ideal for fast motion photography and machine vision applications.


Sensor: Sony IMX296LQR-C

Screenshot from 2025-04-15 00-59-32.png
I think that Global Shutter is better than Rolling Shutter for OpenPnP.

javier.hern...@gmail.com

unread,
Apr 14, 2025, 8:18:20 PMApr 14
to OpenPnP
Another interesting video.

Maybe someone has a reason why Rolling Shutter is better for OpenPnP, but I think Global Shutter is better.

javier.hern...@gmail.com

unread,
Apr 14, 2025, 8:21:42 PMApr 14
to OpenPnP
AI is awesome :)

Screenshot from 2025-04-15 01-21-06.png

vespaman

unread,
Apr 15, 2025, 2:29:42 AMApr 15
to OpenPnP
Hi Javier, maybe you can put your camera questions in a new thread?

Cheers,
 - Micael

vespaman

unread,
Apr 15, 2025, 9:51:00 AMApr 15
to OpenPnP
Small update;
The new redesign of the (much simpler) alignment setup is now sufficiently good. I can dial the two cameras in to pretty near perfection, although securing them through the front opening (4 small allen screws each) is something that needs tons of patience. The younger version of myself would not have made it. :-) It is probably much faster to lift the top to 45° when doing this. If I ever redesign this again, I'll put in small funnels for the allen keys to be guided to the screws. Then it should be OK to secure them from the small front hole.
PXL_20250414_121623236.jpg

However, I have realized that I'm now too close, so I get lens distortion (losing focus) in the corners, so the advanced camera adjustment loses the tip during its run, and therefore the result is far from good. My dilemma is that I don't want to move it back to where it was, since that gave me about 34 units per pixel with the current 4mm lens. Where I am now is at 29 UPP, which is OK-ish, were it not for the distortion.
So I figure I have dialed the cameras in so many times now that I cannot endure changing it again until I get longer lenses. That will probably take some time. Hopefully I can do more tests in the mean time.
The AR0234 has a 1/2.6" die, so not all lenses will work well. There are a few on local amazon, but many more on Ali, which is why it will take some time.

I am also waiting for new firmware for the cameras, since the manual exposure does not work properly, and automatic exposure is not useful in my setup.

I have not looked too much into the code, trying to find a solution on the "don't move, just emit delta" obstacle.

- Micael

tonyl...@gmail.com

unread,
Apr 15, 2025, 10:33:47 AMApr 15
to OpenPnP
> get lens distortion (loosing focus) in the corners, so the advanced camera adjustment looses the tip during its run

Have you tried cropping the image down to a square before running Advanced calibration? There are entries on the Advanced calibration tab that let you set the cropping.

vespaman

unread,
Apr 15, 2025, 10:55:24 AMApr 15
to OpenPnP
> Have you tried cropping the image down to a square before running Advanced calibration? There are entries on the Advanced calibration tab that let you set the cropping.


So, the cameras give 1920x1080 (I think the sensor does 1920x1200, but this is not available with the setup of the media processor), but I have selected the 960x1080 output mode, and cropped it to 960x960. I might go lower until I get the proper lens setup, but it feels 'wasteful'.
When I get the updated firmware for the cameras, I can maybe play a little bit more with lighting; right now it is too strong in my opinion (to please the very short exposure time available in manual exposure) - there are hot spots on the green backdrop of the nozzle when the tip is approaching the corners, so not only is the tip getting out of focus, but the green backdrop is also drawing attention. But maybe this is the 'human way of thinking', not something OpenCV has an actual problem with.

The orientation of the sensors gives 1920 max in X, which might be nice to have in the future, so hopefully the longer lenses will accept this, or at least get closer to it, for larger components.
The problem with purchasing lenses on Amazon etc. is that the sellers rarely have proper data on them, especially minimum focus distance, so it is a bit of a gamble.

 - Micael

Toby Dickenson

unread,
May 9, 2025, 5:57:08 AMMay 9
to ope...@googlegroups.com
Can I ask how you have the OpenPnP software support working for this?

I am considering doing something similar for the top camera. I have
the problem that my current top camera is blocked from viewing the
pick area on Push/Pull feeders due to head geometry. So I am
considering adding a second tiny (borescope?) camera in between the
two nozzles.

vespaman

unread,
May 9, 2025, 10:01:31 AMMay 9
to OpenPnP
Cool!

So, basically, I associated each up-looking camera with a nozzle. OpenPnP already had everything in place, since all use of the up-looking cam goes through a call for getting the up-looking camera (getBottomVisionCamera()). I simply added the current nozzle as a parameter to this function, and then, in this function, returned the associated camera.

The proper way to select/associate the camera would be a camera drop-down menu on each nozzle, but I struggled with this UI bit, so for now I simply bind them together with a number in the name. E.g. "Nozzle 1" will return "Upcamera1" etc. There are a few places where there's no nozzle involved; there I simply return the first camera.
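
If anyone wants the flavour of it, the name matching could look roughly like this (a simplified sketch, not the actual patch; the helper signature is made up):

// Simplified sketch, not the actual patch: pair a nozzle with the bottom
// vision camera whose name contains the same digits, falling back to the
// first camera when there is no nozzle or no match.
static Camera getBottomVisionCamera(Nozzle nozzle, List<Camera> bottomCameras) {
    if (nozzle != null) {
        String digits = nozzle.getName().replaceAll("\\D", "");
        if (!digits.isEmpty()) {
            for (Camera camera : bottomCameras) {
                if (camera.getName().contains(digits)) {
                    return camera; // e.g. "Nozzle 1" -> "Upcamera1"
                }
            }
        }
    }
    return bottomCameras.get(0); // places where no nozzle is involved
}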

I haven't had any time to work more with this, but at least I have new firmware for the cameras and new lenses, so I will continue once I can find some time.

 - Micael

Toby Dickenson

unread,
May 10, 2025, 2:30:30 PMMay 10
to ope...@googlegroups.com
Thanks for the help. It is awesome that the software changes are so simple.

vespaman

unread,
May 10, 2025, 5:33:54 PMMay 10
to OpenPnP
Yes, awesome!

Not much help from me on the down-looking side though, sorry for that. I did a quick search just now, and by the looks of it, maybe the function getDefaultCamera() is a place to start. I don't know for sure, but it looks like it could share the same treatment (adding a new argument like a nozzle), but maybe the callers need to be examined to see if the nozzle is, or can be, made available prior to calling getDefaultCamera().

 - Micael

vespaman

unread,
May 11, 2025, 1:50:31 AMMay 11
to OpenPnP
Correction to myself; in your case, I suppose the feeder would be the argument, and you would assign a camera to each feeder?

 -  Micael

vespaman

unread,
Oct 11, 2025, 10:56:30 AMOct 11
to OpenPnP
Hi Guys,

Finally time for an update, sorry for the slow progress!

I received the lenses mentioned above, but they did not make me much happier, and then the summer came along. But some time ago, I decided to give it a try again, and, as discussed in the other thread, the issues I had could eventually be solved with yet another pair of lenses.
This gave me the boost I needed to continue.

So, regarding hardware, the new lenses (8mm) needed to have their distance slightly adjusted, so I reprinted part of the camera holder, only to realize that the hole I had previously made was now too small.
I therefore printed a new saw guide, and had some quality time with my jigsaw again. Now the hole is as big as it can be. Not sure why I did not make it like this from the beginning.. :-)

mounted.jpgsaw_guide2.jpg

And while doing this, I also took the opportunity to create a new light ring on top of the camera pit. I did this in case I find some problems with large tricky parts in the future.
The solution was super quick; I just 3d printed the white ring, which has pockets for high voltage Acrich LEDs that I had a reel of. These have about 19V forward voltage, and their pads are located so that there's no PCB needed: just put them in their pockets and solder two thin wires (I used stripped wire-wrap wire) around the full circle, so the LEDs are in parallel. Then a 0.25W resistor to 24V. Took me about an hour all in all.
new_light2.jpgnew_light.jpg
I don't like that the bottom LED ring is yellow and the top white. Hopefully I'll get used to it. :-o



Then I had a go at the code, to see if I could figure something out, and I think I have something worth using now, even if it is not finished. 


Multi bottom vision cameras
------8<-----------------8<-----------------8<-----------
1. Bottom vision camera-to-nozzle association is by name only.
E.g. a nozzle named nozzle7a will be associated with a camera named asdf7qwerty, for instance.

2. Once the machine has dealt with the first nozzle from the job order, the decision to do a physical move (as before) is made by checking if the wanted X/Y move is >= 0.5mm, Z > 0.1mm, or rotation > 0.1°.
If not, the move is not commenced, and the motion settle is skipped.
In my machine, the wanted move is typically around 0.22mm in X and 0.01 in Y (depending on nozzle tip runout); maybe this should be a user setting if someone cannot reach below 0.5mm in camera adjustments. Or we can just increase it a bit?

3. Once it is decided to skip the move, three properties are added to the camera: displacementActive, displacementX & displacementY. I chose 'displacement' instead of offset, since there's a gazillion offsets already in the code base, and only displacement on some feeders. (A rough sketch of points 2-4 follows after this list.)
I saved them all on Camera, since the camera is also available in the pipeline code, even if one could argue that they could/should belong to the nozzle instead.

4. Then, if displacement is active, the placement is adjusted with the previously saved X/Y.

5. The machine being set to PreRotate is necessary to gain the most speed. Otherwise a physical move will always take place regardless.


Remaining issues;
6. As of now I'm relying on the grace of the "Pick tolerance" setting, which gets taxed by the displacement. It would be nicer to adjust the pipeline with the proper displacement X/Y. I am not sure how to solve this yet. I guess the image size could be trimmed, to have the image center adjusted.

7. Since the next camera capture happens literally instantly after the first nozzle has been dealt with, the new pic is taken before the controller has been able to turn the up light on. At least on my machine, where the up light is software PWM. I will not tell you how many HOURS I spent before realizing this :-) :-o
Of course, I could change my controller to use a digital IO setting for the up LED, but I think I am not alone in having PWM on the LED, so a better solution would be handy.
Ideally, I'd like to find a way not to switch the LED off after the previous shot, but then we need to move the control of the up light out of the pipeline process() (ImageCapture.java), and get some knowledge about "entering pit/leaving pit" instead.



Threaded parallel capture/alignment.
This is not implemented as part of this exercise. On my machine, I think that this would save us about 100ms, so it is substantial (even on a 2-nozzle machine). The more nozzles, the more gain, of course. But this needs a change of the flow that needs proper discussion, I think.
(Maybe one way would be to, instead of having one nozzle operation in a step (as today), have one camera pit operation in a step. But this is just how I think about it now, off the top of my head.)


Any suggestions about 6) and 7) would be highly appreciated. Or, really any kind of feed-back at all. ;-)


------8<-----------------8<-----------------8<-----------
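
To make points 2-4 a bit more concrete, here is a rough sketch of the idea (hypothetical names, not the actual diff; the sign of the correction depends on how the displacement is defined):

// Hypothetical sketch only - not the actual changes.
class DisplacementSketch {
    // thresholds from point 2; could be made user settings
    static final double XY_MM = 0.5, Z_MM = 0.1, ROT_DEG = 0.1;

    static boolean skipMove(double dxMm, double dyMm, double dzMm, double dRotDeg) {
        return Math.abs(dxMm) < XY_MM && Math.abs(dyMm) < XY_MM
                && Math.abs(dzMm) < Z_MM && Math.abs(dRotDeg) < ROT_DEG;
    }

    // point 3: stored on the camera because the camera is visible in the pipeline
    static class CameraDisplacement {
        boolean displacementActive;
        double displacementX, displacementY;
    }

    // point 4: correct the detected part location with the stored displacement
    static double[] corrected(double detectedX, double detectedY, CameraDisplacement d) {
        if (!d.displacementActive) {
            return new double[] { detectedX, detectedY };
        }
        return new double[] { detectedX - d.displacementX, detectedY - d.displacementY };
    }
}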



So, what does this mean in terms of performance enhancement?
On my machine, almost exactly 500CPS over the same settings using a single camera.
single_dual.png

As before, the first cycle includes vacuum, so disregard that. Also note that since there is no 'Settle 2' any longer in the dual setup, the colours (red) are a bit misleading.


 -  Micael

vespaman

unread,
Oct 11, 2025, 10:58:22 AMOct 11
to OpenPnP
... I meant 500 Parts Per Hour, not CPS. :-o

vespaman

unread,
Oct 11, 2025, 11:32:30 AMOct 11
to OpenPnP
Regarding 6) above - what a mess! I meant to say that it is only using the pick tolerance setting as of now. But it would be nicer to adjust the pipeline so the new offset is applied, and the pick tolerance stays just that - pick tolerance, nothing else.

Adding the mandatory clip of the miracle. :-)

vespaman

unread,
Oct 13, 2025, 11:37:40 AMOct 13
to OpenPnP
Hi all,

So, having boiled down the number 7 issue - the LED up light turning off when it should ideally be left on for the next shot - I think there are three ways to think about this;

1. The code is correct as is - the LED should switch on/off for each capture.
This would mean that some controller implementations out there, (especially/at least) the ones that use the soft PWM setting on smoothieware, are likely to fail. This may also depend on how fast the serial comms are between the controller and OpenPnP - in my case, the 'OK' will arrive in a matter of microseconds. On slower comms, it will probably be milliseconds, increasing the chance that the controller will have turned the light on properly before the OK has arrived at the PC and the image is captured. But the frame rate of the camera also comes into play here. In my setup, a new capture will arrive within 0-9ms, and the higher the frame rate, the higher the chance there's a frame available exactly after the controller OK, but before the controller has turned the light on.
Actually, I can 100% accept it if this is the way to view this. But I also don't want to just consider my own setup, where I can change my controller FW/setup. What do you think?

2. The best way (IMHO) to deal with this would be if there's a way to peek forward to the next step in the job processor, to see if the next step is also 'align', and if so, not switch off the up light in the current step.
I have not seen any support for doing this, and it might not be simple to implement (not sure I understand the details of this yet). @Jan, I see you have done some changes here recently. Perhaps you know?

3. Another way would be to always switch the up light off in the placement steps.
But this is not so nice (IMO), since it is not symmetric with the switch-on. And it would also leave the light on in discard cycles etc.

Then, I guess those of us that have this problem could also set the light to be on all the time, and use scripting functions for enabling/disabling the cameras wherever we would like. This could be the fallback if 1) is thought to be a controller issue. But that is also not so nice.

It is very quiet on the list at the moment, so I guess everyone is busy, which is fine - there's no hurry - I just wanted to put my thoughts into writing before I forget about it. :-)

Regards, 
  Micael

Jan

unread,
Oct 15, 2025, 6:12:06 PMOct 15
to ope...@googlegroups.com
Hi Micael!
Interesting progress you're making!
Sounds like your ideas could result in a new feature to take a single
image of both nozzles and use it for bottom vision. With a nozzle offset
fed into the pipeline we could just mask the relevant part and - supposing
the lens is good enough - perform bottom vision using the edge of the image.
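
For what it's worth, the masking could be as simple as cropping a region
of interest around the known pixel offset of each nozzle, roughly like
this (illustrative OpenCV sketch; offsets and crop size are made-up
numbers):

import org.opencv.core.Mat;
import org.opencv.core.Rect;

class RoiSketch {
    // Crop a square region around a nozzle sitting at a known pixel offset
    // from the image center, then run the normal pipeline on just that part.
    static Mat roiAroundNozzle(Mat frame, int offsetXpx, int offsetYpx, int sizePx) {
        int cx = frame.cols() / 2 + offsetXpx;
        int cy = frame.rows() / 2 + offsetYpx;
        Rect roi = new Rect(cx - sizePx / 2, cy - sizePx / 2, sizePx, sizePx);
        return new Mat(frame, roi); // submat view, no pixel copy
    }
}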

On 13.10.2025 17:37, vespaman wrote:
> Hi all,
>
> So, having boiled down the number 7 issue - LED up light turning off
> while it should ideally be left on for the next shot, I think there are
> a three of ways to think about this;
>
> 1. The code is correct as is - LED should switch on/off for each capture.
> This would mean that there are some controller implementations out there
> (especially/at least) the ones that uses soft pwm setting on
> smoothieware that are likely to fail. This may also be depending on how
> fast the serial comm's are between controller and OpenPnP - in my case,
> the 'OK' will arrive in a matter of micro seconds. On a slower comm's,
> it will probably be milliseconds, increasing the chance that the
> controller will turn the light on properly before the OK has arrived to
> the PC, and image are captured. But also the frame rate of the  camera
> will come to play here. In my setup, a new capture will arrive within
> 0-9ms, and the higher frame rate, the higher the chance there's a frame
> available exactly after the controller OK, but before the controller has
> commenced the light on.
> Actually, I can 100% accept if this is the way to view this.  But I also
> don't want to just consider my own setup, where I can change my
> controller FW/setup. What do you think?
>
I think you'd have to use the settling feature to take this into
account... Did you try to configure the light as digital output? Does
it switch faster then? Do you see a glitch in the image if you switch
the light off and on while taking a picture?

> 2. The best way (IMHO) to deal with this, would be if there's a way to
> peek forward to the next step in the job processor, to see if the next
> step is also 'align', and if so, not switch off the up light in the
> current step.
> I have not seen any support for doing this, and it might not be simple
> to implement (not sure I understand the details of this yet). @Jan, I
> see you have done some changes here recently. Perhaps you know?
>
I'm sorry, I don't think that's possible. The light actuator is a
property of the camera and the camera is operated as part of the
pipeline. There is IMHO no option to peek forward...

> 3. Another way would be to always switch the up light off in the
> placement steps.
> But this is not so nice (IMO), since it is not symmetric with the switch
> on. And also would leave the light on, on discard cycles etc.
>
That should work. You'd need to find all possible routes after bottom
vision to safely switch the light off.


vespaman

unread,
Oct 16, 2025, 8:16:08 AMOct 16
to OpenPnP
Hi Jan!

> 1. The code is correct as is - LED should switch on/off for each capture.
> This would mean that there are some controller implementations out there
> (especially/at least) the ones that uses soft pwm setting on
> smoothieware that are likely to fail. This may also be depending on how
> fast the serial comm's are between controller and OpenPnP - in my case,
> the 'OK' will arrive in a matter of micro seconds. On a slower comm's,
> it will probably be milliseconds, increasing the chance that the
> controller will turn the light on properly before the OK has arrived to
> the PC, and image are captured. But also the frame rate of the  camera
> will come to play here. In my setup, a new capture will arrive within
> 0-9ms, and the higher frame rate, the higher the chance there's a frame
> available exactly after the controller OK, but before the controller has
> commenced the light on.
> Actually, I can 100% accept if this is the way to view this.  But I also
> don't want to just consider my own setup, where I can change my
> controller FW/setup. What do you think?
>
> I think you'd have to use the settling feature to take this into
> account...

Sorry, I don't understand - what do you mean by that?
 
> Did you try to configure the light as digital output? Does
> it switch faster then?

I have not tested, no. But I can see in the code that it takes more time with the soft PWM vs the digital pin (which is done directly). In my case, I don't really use PWM on the up light anymore, so I could switch to digital.
My up light still takes 30us to switch on in hardware though, since I use a constant current generator. But I could easily add a delay before sending back 'ok'. The 'ok' also takes a few us.
I can try that, and if this turns out OK, I'll leave light control as is.
In the controller, the problem could also be fixed by delaying the "off" by e.g. 100ms, then it will receive the new "on" before going off. But this is also a bit of a workaround.
 
> Do you see a glitch in the image if you switch
> the light off and on while taking a picture?

As it is now, the first image (first nozzle) is always OK (of course), then it switches the light off, switches the light back on, and takes the snap for the second nozzle, then the light is off again. The second image is always totally black.
This makes part detection fail, and a retry is issued; light on, capture, (black image result) and repeat. After three tries, we get the "No result found" exception.


I am actually not sure about when the OpenCV image is taken; it might well be the last buffered image that is returned, so that would mean that the light has to be on for at least one frame time (~9ms in my case) prior to the call, to be sure.


> 2. The best way (IMHO) to deal with this, would be if there's a way to
> peek forward to the next step in the job processor, to see if the next
> step is also 'align', and if so, not switch off the up light in the
>
> I'm sorry, I don't think that's possible.

Ah, that's a pity.

> 3. Another way would be to always switch the up light off in the
> placement steps.
> But this is not so nice (IMO), since it is not symmetric with the switch
> on. And also would leave the light on, on discard cycles etc.
>
> That should work. You'd need to find all possible routes after bottom
> vision to safely switch the light off.

Yeah, I think this will work, but I think it will have a cost on the readability of the code, and maybe introduce bugs here or there. So maybe it is better to focus on fixing this in the controller.
If more people go for multi bottom vision going forward, the light could also be kept on during the job. But on my machine that is rather irritating, since the camera is very close to the operator.

Regarding the rest, would you say that the changes I have made so far (functionality wise) could be good enough for me to start wrapping things up for submitting a PR, or do you think UI elements or anything else are needed?


  - Micael

Toby Dickenson

unread,
Oct 16, 2025, 8:38:39 AMOct 16
to ope...@googlegroups.com
> 3. Another way would be to always switch the up light off in the
> placement steps.
> But this is not so nice (IMO), since it is not symmetric with the switch
> on. And also would leave the light on, on discard cycles etc.

This sounds good to me. Have new PreAlignmentStep and PostAlignmentStep classes in the job processor.
 
> That should work. You'd need to find all possible routes after bottom
> vision to safely switch the light off.

> Yeah, I think this will work, but I think it will have a cost on the readability of the code, and maybe introduce bugs here or there.

Does this work? (A rough sketch follows after the list.)
* ReferenceMachine has two new methods; StartCameraBatchOperation and FinishCameraBatchOperation. Job processor PreAlignmentStep calls ReferenceMachine.StartCameraBatchOperation etc.
* ReferenceMachine needs a flag to record that it is in the middle of a camera batch operation.
* ReferenceMachine has a new list of cameras that need to have their lights turned off by FinishCameraBatchOperation.
* ReferenceCamera actuateLightAfterCapture method needs to interact with ReferenceMachine. If the machine is in the middle of a batch operation then it adds itself to that list rather than switching off immediately.
* Ending a job, and maybe other cases, need to call FinishCameraBatchOperation too.
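
To make the intent concrete, here is a bare-bones sketch of those pieces (hypothetical code following the naming above, not existing OpenPnP classes):

// Hypothetical sketch following the outline above - not existing OpenPnP code.
import java.util.ArrayList;
import java.util.List;

class CameraBatchSketch {
    interface Light { void off(); }

    private boolean inBatch = false;
    private final List<Light> pendingOff = new ArrayList<>();

    // called by the job processor's PreAlignmentStep
    synchronized void startCameraBatchOperation() {
        inBatch = true;
    }

    // called by a camera instead of switching its light off immediately
    synchronized void lightOffAfterCapture(Light light) {
        if (inBatch) {
            pendingOff.add(light); // defer until the batch is finished
        } else {
            light.off();
        }
    }

    // called by PostAlignmentStep, and also when a job ends or aborts
    synchronized void finishCameraBatchOperation() {
        inBatch = false;
        pendingOff.forEach(Light::off);
        pendingOff.clear();
    }
}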

Toby

vespaman

unread,
Oct 16, 2025, 9:10:06 AMOct 16
to OpenPnP
Hi Toby,

I think I understand your solution, and it sounds like a good way of solving it!

Not sure I'll be able to implement this though :-o So far I have been able to solve everything by small injections of code here and there. I am struggling with the class inheritance, overrides and whatnot in Java, and the job processor in particular looks a bit scary to me (I'm just a simple C programmer). :-)

I'll see if Copilot can help me along the road. So far he has not been very helpful, but at least I have learnt from his mistakes. If only he wasn't so convincing every time.. :-)

 - Micael

Toby Dickenson

unread,
Oct 16, 2025, 11:14:56 AMOct 16
to ope...@googlegroups.com
My cameras are currently set up to use the anti-glare feature. Not because my lights cause glare, but because it is the best compromise for getting them powered down some of the time without having an annoying strobe effect. So this change will benefit me too.

Shall I implement this bit then you can merge my branch into yours?




vespaman

unread,
Oct 16, 2025, 11:35:28 AMOct 16
to OpenPnP
That would be super! 
I am still on some spring version of OpenPnP, so I'll have to take care to update my fork, which is good anyway, because of the new goodies.
Currently my machine is inoperable (again), since I had to order some vacuum/blow hose and connectors, so I can't really do any testing/debugging anyway. (And since I am a Java rookie, I prefer to test the code as I write it, to make sure stuff works.)

Jan

unread,
Oct 17, 2025, 4:36:19 AMOct 17
to ope...@googlegroups.com
Hi Micael!

On 16.10.2025 14:16, vespaman wrote:
[...]
> I think you'd have to use the settling feature to take this into
> account...
>
>
> Sorry, I don't understand - what do you mean by that?
>
I mean one of the methods to make sure the camera image is stable before
it is used for bottom vision. One is a delay which you could use to make
sure the light is on and the camera has adapted to it.

> Did you tried to configure the light as digital output? Does
> it switch faster then?
>
>
> I have not tested, no. But I can see in the code that it takes more time
> with the sw pwm vs the digital pin (which is done directly). In my case,
> I don't really use pwm on up light anymore, so I could switch to digital.
> My up light still takes 30us to switch on in hardware though, since I
> use a constant current generator.  But i could easily add a delay before
> sending back 'ok'. The 'ok' also takes a few us.
> I can try that, and if this turns out OK, I'll leave light control as is.
> In the controller, the problem could also be fixed by delaying the "off"
> by e.g. 100ms, then it will receive the new "on" before going off. But
> this is also a bit of a workaround.
>
Ok, even with digital it might fail. If the light is switched while the
sensor is taking an image, this image will get distorted. With rolling
shutters this is always the case, so a delay of a full frame is required.

> Do you see a glitch in the image if you switch
> the light off and on while taking a picture?
>
>
> As it is now, first image (first nozzle) is always OK (of course), then
> it switches light off, switches light back on, and take the snap for the
> second nozzle, then light is off again. Second image is always totally
> black.
> This makes part detection fail, and a retry is issued; light on,
> capture, (black image result) and repeat. After three tries, we get the
> "No result found" exception.
>
That's what I'd expect if settling is not configured correctly for the
second camera/image: the first has some settling which works, but for
the second, there is no motion and hence one might think that no
settling is required. But in fact the switching of the light affects the
image and hence settling is required for compensation.
Your idea to keep the light on is probably the best, but also requires
extra work as the light is currently controlled by the camera itself.
You might want to check the vacuum pump handling: there are features
like delayed switch-off that could be useful here too. It tracks
the number of nozzles that require vacuum and only switches the pump off
if it is guaranteed that it's not needed anymore.
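
Just to illustrate the pattern I mean (a rough sketch only, not the actual
OpenPnP pump code - the class name and the 100 ms grace period are made up):

// Sketch of a "delayed off" shared resource, here applied to a shared up-light.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class SharedUpLight {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private int users = 0;                  // cameras currently needing the light
    private ScheduledFuture<?> pendingOff;  // scheduled switch-off, if any

    synchronized void acquire() {
        if (pendingOff != null) {           // cancel a pending off if we come back in time
            pendingOff.cancel(false);
            pendingOff = null;
        }
        if (users++ == 0) {
            setLight(true);                 // switch on only for the first user
        }
    }

    synchronized void release() {
        if (--users == 0) {
            // keep the light on a little longer in case the second camera follows immediately
            pendingOff = scheduler.schedule(() -> setLight(false), 100, TimeUnit.MILLISECONDS);
        }
    }

    private void setLight(boolean on) {
        // here the real light output (actuator) would be driven
        System.out.println("up light " + (on ? "ON" : "OFF"));
    }
}

A shared light could be handled this way: each camera acquires it before capture
and releases it afterwards, and the off is only executed if nobody re-acquires it
within the grace period.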

> I am actually not sure about when the openCV image is taken, it might
> well be the last buffered image that is returned, so that would mean
> that the light have to be on for at least one frame time (~9ms in my
> case) prior to the call, to be sure.
>
Image capture is part of the pipeline. It just requests the next image
from the camera. Whatever "next" means in this context... There is also
some image capture as part of some of the settling methods which is
outside of the pipeline. (Side note: I once tried to put bottom vision
processing into a dedicated thread, which failed because the image capture
requires exclusive access to the "machine", which is not available if the
machine thread forks a bottom vision thread. If at the end of your
improvements the image capture is relocated outside of the pipeline,
this would provide new opportunities...)

> > 2. The best way (IMHO) to deal with this, would be if there's a
> way to
> > peek forward to the next step in the job processor, to see if the
> next
> > step is also 'align', and if so, not switch off the up light in the
> >
> I'm sorry, I don't think that's possible.
>
>
> Ah, that's a pity.
>
The camera light is (usually) a property of the camera and hence switched
on and off by the camera as needed. If you have two cameras sharing a
light, you either get what you reported (it's switched off by one and
immediately on by the other) or you have to bind the processing to some
other events. However, make sure to keep the functionality for features
like test alignment.

> > 3. Another way would be to always switch the up light off in the
> > placement steps.
> > But this is not so nice (IMO), since it is not symmetric with the
> switch
> > on. And also would leave the light on, on discard cycles etc.
> >
> That shall work. You'd need to find all possible routes after bottom
> vision to safely switch the light off.
>
>
> Yeah, I think this will work, but I think it will have a cost on the
> readability of the code, and maybe introducing bugs here or there. So
> maybe better to focus on fixing this in the controller.
> If there's more people going for multi bottom vision going forward, the
> light could also be kept on during the job. But on my machine it is
> rather irritating, since the camera is very close to the operator.
>
You might add that using scripting: switch the light on using
Camera.BeforeSettle and off at Job.Finished. In addition you could
switch it off at Vision.PartAlignment.After if you make sure it's the
second bottom vision operation. This would keep the lights off as much
as possible.

> Regarding the rest, would you say that the changes I have made so far
> (functionality wise), could be good enough for me to start wrap things
> up for submitting a PR, or do you think UI elements or anything else are
> needed?
>
Starting a PR is always a good idea so that others can view and help with
what you did. It also clearly indicates what changes you're proposing. A PR
is only fixed once it has been merged. Until then you can change it as you
like, incl. adding new stuff, rewriting stuff, reverting changes; even
reorganizing the entire PR is possible (GitHub still records the
operations as part of the PR's documentation but they do not appear in
the repo after merging).
I'd suggest partitioning your changes into multiple small PRs rather
than one large one; they are easier to understand and quicker to merge.

Jan

vespaman

unread,
Oct 17, 2025, 8:54:33 AMOct 17
to OpenPnP

Hi Jan,

I mean one of the methods to make sure the camera image is stable before
it is used for bottom vision. One is a delay which you could use to make
sure the light is on and the camera has adapted to it.

Aha, yes. But getting rid of settling was the main thing I wanted to achieve, so obviously there's no settling left for the second cam.
 
Ok, even with digital it might fail. If the light is switched while the
sensor is taking an image, this image will get distorted. With rolling
shutters this is always the case, so a delay of a full frame is required.

Yes, rolling shutters may get a partially black frame, but global shutters might get either a black frame or a lit frame.
As you may recall, I have two AR0234s, which are global shutter. I think I should have stayed with rolling shutter, since settling looks to be faster with that (as I wrote earlier in the thread).
But now it is what it is - can't be bothered to change cameras again. For a future top cam upgrade, I have a spare AR0234 camera (same as I use in the bottom vision). Hopefully it will make the QR code reading a bit less unreliable. But this time I will do real tests against a rolling shutter camera before deciding to use one over the other.

 But in fact the switching of the light affects the
image and hence settling is required for compensation.

No, not really - but the image capture needs light, and therefore the up light needs to be on when the camera is getting the image.
 
> I am actually not sure about when the openCV image is taken, it might
> well be the last buffered image that is returned, so that would mean
> that the light have to be on for at least one frame time (~9ms in my
> case) prior to the call, to be sure.
>
Image capture is part of the pipeline. It just requests the next image
from the camera. Whatever "next" means in this context...
 
Exactly.
But at the end of the day, I think this does not matter for my problem; the picture needs the light to be on regardless.

In the demo video I put up on YouTube, I only run my test job, which always processes camera 2 before camera 1, so I could simply let camera 2 keep the light on while camera 1 always shuts it off after image capture.
Only for show. :-)

(Side note: I once tried to put bottom vision
processing into a dedicated thread, which failed because the image capture
requires exclusive access to the "machine", which is not available if the
machine thread forks a bottom vision thread. If at the end of your
improvements the image capture is relocated outside of the pipeline,
this would provide new opportunities...)

Yes, that was my idea - the image capture happens before the real work is started, so I think this is a clean cut.
How to manage the threads is the tricky thing. As each camera might fail for a number of reasons (e.g. missing/tombstoned component OR actual capture problems), there needs to be a way for it to fall back, and the user interface must also be clear about which part/nozzle the failure occurred on.
But I don't have the full overview of this, I am still learning. Especially the JobProcessor I find tricky. So I will not even try to accomplish threading now - that will be for a different time. Roughly, the shape I have in mind is something like the sketch below.
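
(Just a sketch of the split I mean - captureWithLight() and alignFromImage() are
invented placeholder names, not real OpenPnP methods; the point is only where
the thread boundary would sit.)

import java.awt.image.BufferedImage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelAlignSketch {
    private final ExecutorService cvPool = Executors.newFixedThreadPool(2);

    void alignBothNozzles() throws Exception {
        // capture both images while we still hold the machine (light handled once)
        BufferedImage img1 = captureWithLight("N1");
        BufferedImage img2 = captureWithLight("N2");

        // hand the pure OpenCV work to worker threads; the machine thread
        // would be free to start the next move in the meantime
        Future<double[]> offsets1 = cvPool.submit(() -> alignFromImage(img1));
        Future<double[]> offsets2 = cvPool.submit(() -> alignFromImage(img2));

        // collect the results (and any per-nozzle errors) before placement
        double[] n1 = offsets1.get();
        double[] n2 = offsets2.get();
    }

    // placeholders only - in reality these would call into the camera and the vision pipeline
    private BufferedImage captureWithLight(String nozzle) { return null; }
    private double[] alignFromImage(BufferedImage image) { return new double[] { 0, 0 }; }
}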
 
But: I think we are closing in on the max performance one might expect from these machines now; threading the CV is probably one of the last "big" improvements.
There are still several smaller ones, but they will not bring much time improvement on their own. 
I'd like to reach 4k pph before calling it quits, but I'm not sure this will be possible without threading the CV.


Starting a PR is always a good idea

Good! I think it is much cleaner without any GUI components. If needed by someone, it can always be added later on.
 
-  Micael

Jan

unread,
Oct 17, 2025, 9:35:43 AMOct 17
to ope...@googlegroups.com
Hi Micael!

On 17.10.2025 14:54, vespaman wrote:
[...]
> Aha, yes. But getting rid of settling was the main thing I wanted to
> achieve, so obviously there's no settling left for the second cam.
>
This might be a problem as long as the light is switched off/on: the
image you're requesting has been captured earlier. So in fact you might
read an image that was captured *before* the light was switched off by
the other camera. Adding some settling delay could suffice as a
workaround as long as the light issue has not been solved in a better
way. I expect that 1 frame time shall be enough.

> Ok, even with digital it might fail. If the light is switched while the
> sensor is taking an image, this image will get distorted. With rolling
> shutters this is always the case, so a delay of a full frame is
> required.
>
>
> Yes, rolling shutters may get a partially black frame, but global
> shutters might either get a black frame or a lit frame.
> As you may recall, I have two AR0234, which is global shutters. I /
> think/ I should have stayed with rolling shutter, since settle looks to
> be faster with that (as I wrote earlier in the thread).
> But now it is what it is - can't be bothered to change cameras again.
> For future top cam upgrade, I have a spare AR0234 camera (same as I use
> in the bottom vision). Hopefully it will make the QR code reading a bit
> less unreliable. But that time I will do real tests against a rolling
> shutter camera before deciding to use one over the other.
>
So far, I don't have experience with global shutter cameras in bottom
vision, I have to admit. However, I'd love to understand why rolling
shutters might settle faster than global. I would have expected it the
other way around, like with fly-by vision where you take the image at
full speed. What settling method do you use? I expect that global
shutters will generate a larger error signal when motion has almost
stopped, as the entire image content is still changing, while for rolling
shutter only a few lines are affected.

>  But in fact the switching of the light affects the
> image and hence settling is required for compensation.
>
> No, it is not really, but the image capture needs light. And therefore
> the up light needs to be on when the camera is getting the image.
>
Yes, exposure requires light - and strong light if using a global shutter.

[...]
> There are still several smaller ones, but they will not bring much
> time improvement on their own.
> I'd like to reach 4k pph before calling it quits, but I'm not sure this
> will be possible without threading the CV.
>
You're already pretty fast compared to other videos I've seen. And you
shall get another ~10% by upgrading to the 2.5 test version with Tobi's
improvements to the JobProcessor in choosing better pairs of placements.

>
> Starting a PR is always a good idea
>
> Good! I think it is much cleaner without any GUI components. If needed
> by someone, it can always be added later on.

You have multiple options: changes that do not affect others do not
need to be controllable. Changes that eventually might affect others, or
are of less use to most others, are probably better wrapped into a
configuration option. Users could then manually enable or disable a
feature by editing their machine.xml. Changes that are configurable to
personal preferences/needs shall be accessible via the UI.
Once you have your PR ready, I or someone else can have a look and
might suggest one or the other.

Jan

vespaman

unread,
Oct 17, 2025, 10:49:07 AMOct 17
to OpenPnP
Hi Jan,

This might be a problem as long as the light is switched off/on: the
image you're requesting has been captured earlier. So in fact you might
read an image that was captured *before* the light was switched off by
the other camera. Adding some settling delay could suffice as a
workaround as long as the light issue has not been solved in a better
way. I expect that 1 frame time shall be enough. 

I think we are probably misunderstanding each other, and we are trying to explain the same thing to each other. :-)
I can say one thing though - the second image capture always happens after the switch-off from the first camera, since it is switched off after capture, but before CV alignment.
The CV alignment on my machine takes about 80-90ms, give or take. So the LED is definitely off from the last capture.
Anyway, hopefully the proposed fix by Toby will work fully. The last thing I would like to do is add delays in the alignment routine.
 

> For future top cam upgrade, I have a spare AR0234 camera (same as I use
> in the bottom vision). Hopefully it will make the QR code reading a bit
> less unreliable. But that time I will do real tests against a rolling
> shutter camera before deciding to use one over the other.
>
So far, I don't have experience with global shutter camera in bottom
vision, I've to admit. However, I'd love to understand why rolling
shutters might settle faster then global. 

Settle, in this context, is waiting for the head to be still, right? So this is the task of the settle routine - comparing each image to the previous one, and once they are similar enough (as decided by a user setting), the head is deemed still enough. 
The results I shared earlier in this thread showed that my previous cam, the imx290 (rolling shutter) at 60FPS, was faster, especially considering that its frame rate was half that of the AR0234. 
I think there are two reasons coming into play: the AR0234 is noisier, and the image from a non-still rolling shutter capture is much more deformed, which might make it easier for the CV motion comparison to find the difference faster. Motion CV checking takes longer on the AR0234 (I don't remember the exact number anymore) and is more variable.
It is hard to investigate fully which of the above theories weighs more. I use much more denoise on the AR0234 to accomplish the same thing, and still it takes more time. I also let it have a slightly higher threshold for what is considered still enough. 
Mind you - I'm talking about differences of a couple of tens of ms at most. And by now my memory of the old setup is fading.
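
To be concrete, the kind of comparison I mean boils down to something like this (a rough OpenCV sketch, not the actual OpenPnP settle code; the blur size and threshold are just example values):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

class MotionSettleSketch {
    // returns true when two consecutive frames are "similar enough" to call the head still
    static boolean isStill(Mat prev, Mat curr, double threshold) {
        Mat a = new Mat(), b = new Mat(), diff = new Mat();
        Imgproc.cvtColor(prev, a, Imgproc.COLOR_BGR2GRAY);
        Imgproc.cvtColor(curr, b, Imgproc.COLOR_BGR2GRAY);
        // the blur is the "denoise" knob: the noisier the sensor, the more of it
        // you need before two still frames ever look equal
        Imgproc.GaussianBlur(a, a, new Size(5, 5), 0);
        Imgproc.GaussianBlur(b, b, new Size(5, 5), 0);
        Core.absdiff(a, b, diff);
        double meanDiff = Core.mean(diff).val[0];   // average per-pixel difference
        return meanDiff < threshold;                // below the user threshold -> settled
    }
}

A noisier sensor pushes the mean difference up even when nothing moves, which is exactly why I need more denoise and a higher threshold on the AR0234.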
 
[...]
> There are still several smaller ones, but they will not bring much
> time improvement on their own.
> I'd like to reach 4k pph before calling it quits, but I'm not sure this
> will be possible without threading the CV.
>
You're already pretty fast compared to other videos I've seen. And you
shall get another ~10% by upgrading to the 2.5 test version with Tobi's
improvements to the JobProcessor in choosing better pairs of placements.

That sounds very interesting! 
Apart from threaded CV, I will revisit the old simulated 3rd motion again; I now know more about things than the last time I had a go at it.
Primarily, the Z motion takes some time on our machines - it currently takes about 60ms to dip/pull up (I think you have the same numbers here as I do). That means 60+60+60+60 on a full pick/place cycle.
I played some with blow-off last week (again), but I don't think there's much to be had there, maybe 5-8ms per placement, so I'll leave that as is for now. One thing I'd like to investigate is to (somehow) stop the vacuum 10ms before it is currently commanded, i.e. in the middle/end of the Z move. As you might know, the valve takes about 10ms before it opens, during which we are now just dead waiting - almost half of the dwell time. This probably needs help from the FW to be accomplished. I also started to test with part-off vacuum sensing, but for some reason it did not have a huge impact; in fact it was notoriously random. I still have to investigate why that is. 
I have an idea on lowering the feed time, but maybe that's another thread.
Do you know of any other time bandits?


 - Micael

Jan

unread,
Oct 20, 2025, 7:51:42 AMOct 20
to ope...@googlegroups.com
Hi Micael!

On 17.10.2025 16:49, vespaman wrote:
> Hi Jan,
>
>
> This might be of a problem as long as the light is switched off/on: the
> image you're requesting, has been captured earlier. So in fact you
> might
> read the image, that was captured *before* the light was switched
> off by
> the other camera. Adding some settling delay could surfer as a
> workaround as long as the light issue has not been solved in a better
> way. I expect, that 1 frame time shall be enough.
>
>
> I think we are probably misunderstanding each other, and we are trying
> to explain the same thing to each other. :-)
> I can say one thing though - the second image capture always happens
> after the switch off from the first camera, since it is switched off
> after capture, but before cv alignment.
> The cv alignment on my machine takes about 80-90ms give or take. So LED
> is definitely off from last capture.
> Anyway, hopefully, the proposed fix by Toby will work fully. The last
> thing I would like to do is to add delays in the alignment routine.
>
Yes, it might well be possible that I don't fully understand your concerns.
What I now got is that you try to read the image from the camera
without a settling delay and without waiting for the camera's light to
switch on. In the way you proposed and discussed with Toby this is
possible and will work. However, I'd like to add some details here for
completeness: the light is - as it's part of the camera - operated when
the camera is asked for a new image. If that happens, the light is
switched on, any configured settling method is executed and then the image
is requested from the camera driver. In reality most (all) cameras operate
asynchronously, triggering themselves, and the camera driver is constantly
receiving the latest image. That means that the image you get when
asking the camera driver has been captured some time before. To
correctly handle the camera light and this "some time before" you'd
actually need a delay between switching the light on and requesting the
image, to make sure the image has been captured with the light actually
switched on.

> > For future top cam upgrade, I have a spare AR0234 camera (same as
> I use
> > in the bottom vision). Hopefully it will make the QR code reading
> a bit
> > less unreliable. But that time I will do real tests against a
> rolling
> > shutter camera before deciding to use one over the other.
> >
> So far, I don't have experience with global shutter camera in bottom
> vision, I've to admit. However, I'd love to understand why rolling
> shutters might settle faster then global.
>
>
> Settle, in this context is waiting for being still, right? So this is
> the task of the settle routine - checking each image to the previous
> one, and once they are similar enough (as decided by user setting), head
> is decided to be still enough.

Yes. In fact you can choose among six methods for how settling is handled.
Basically you need a delay to make sure the image the camera has
captured was taken when you requested it (see discussion above). In
addition there might be mechanical reasons why one would add more delays,
to wait until the machine/head/nozzle is in a steady state and avoid that
the image captured by a rolling shutter camera is distorted. One
method would be to take two images and calculate their difference. If
"large", assume the machine is still moving/shaking too much. For global
shutter cameras like yours, there shall be no such motion-induced
distortion (assuming your exposure time is short enough). I'd therefore
try to use FixedTime settling with 2 × 1/fps. (IIRC Mark once explained
that he had to add the CV based motion settling methods to handle the
poor mechanics of the liteplacer which would otherwise require a very
long fixed delay.) You may also check your logs for the actual settling
time required.
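(For your 120 fps that would be roughly 2 × 1/120 s ≈ 17 ms of FixedTime settling.)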

> The results I shared earlier in this thread, showed that my previous cam
> imx290 (rolling shutter) at 60FPS, was faster esp. taking into
> consideration that the frame rate where half of the AR0234.
> I think there are two reasons coming to play; The AR0234 is nosier, and
> the image of a non-still rolling shutter capture is much more deformed,
> which might make it easier for the cv motion comparison to find the
> difference faster. Motion cv checking takes longer time on AR0234 (don't
> remember exact number anymore), and is more variable.
> It is hard to investigate fully which of the above theories is more true
> over the other. I have much more de noise on the AR0234 to accomplish
> the same thing, and still it takes more time. And also I let it have a
> slightly higher threshold of what is considered still enough.
> Mind you - I'm talking about differences in a couple of 10ms at most.
> And now memory of the old setup is fading away.
>
Noise in the sensor can be fought with more light and shorter
exposure times... (I suppose a lens with a larger aperture is no option
for you anymore.) (Also keep in mind that the exposure time limits the
refresh rate. A short exposure time is needed to reach the promised
frame rates.)
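(At 120 fps the frame period is only 1/120 s ≈ 8.3 ms, so the exposure can never
exceed that if the full frame rate is to be sustained.)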

[...]
>
> That sound very interesting!
> Apart from threaded cv, I will revisit the old simulated 3'rd motion
> again, I now know more about stuff than the last time I had a go at it.
> Primarily, the Z motion takes some time on our machines - it is

Going down is limited by the motor and going up by the spring, where the
spring is the weaker of the two. You might add a small wire between the
cam wheel and the sledge it pushes, to let it pull up faster.
I know that I can run at an acceleration of 30000 mm/s^2 but actually
limited it to 8000 mm/s^2 because I once sheared off a 502 nozzle.

> currently takes about 60ms to dip/pull up (I think you have the same
> numbers here as I do). That means 60+60+60+60 on a full pick.
> I played some with blow off last week (again), but I don't think there
> much to be had there, maybe 5-8ms per placement, so I'll leave that as
> is for now. One thing that I'd like to investigate is to (somehow) stop
> vacuum 10ms before it is now commenced. I.e. in the middle/end of Z
> move. As you might know, the valve takes about 10ms before it is
> opening, where we now are just dead waiting now. Almost half of the
> dwell time. This probably needs help from FW to be accomplished. I also
> started to test with part off vacuum sensing, but it did, for some
> reason not have a huge impact, in fact it was notoriously random. Still
> have to investigate why that is.
> I have an idea on lowering the feed time, but maybe that's another thread.
> Do you know of any other time bandits?
>
It's the usual things you already mentioned: make the motors faster, check
pick/place dwell, make vacuum/blow-off faster (maybe a small reservoir on the
head could help), shorten settling for bottom vision, make bottom vision faster.
From all this, I'd say executing bottom vision in parallel with motion
shall be the most effective one, as it would save you about 80ms (if I
recall the numbers you claimed correctly) - supposing you get it right on
the first go. Otherwise it will cost you much more than 80ms.

Jan

vespaman

unread,
Oct 20, 2025, 8:54:48 AMOct 20
to OpenPnP
Hi Jan,

Yes. In fact you can choose among six methods for how settling is handled.
Basically you need a delay to make sure the image the camera has
captured was taken when you requested it (see discussion above). In
addition there might be mechanical reasons why one would add more delays,
to wait until the machine/head/nozzle is in a steady state and avoid that
the image captured by a rolling shutter camera is distorted. One
method would be to take two images and calculate their difference. If
"large", assume the machine is still moving/shaking too much. For global
shutter cameras like yours, there shall be no such motion-induced
distortion (assuming your exposure time is short enough). I'd therefore
try to use FixedTime settling with 2 × 1/fps. (IIRC Mark once explained
that he had to add the CV based motion settling methods to handle the
poor mechanics of the liteplacer which would otherwise require a very
long fixed delay.) You may also check your logs for the actual settling
time required.

OK, now I think I understand the point you are trying to make. 
But: remember, I need the machine to stop - not because I can't get a good picture, but to get the exact location. If I take the snapshot too early, the head may still be in the backlash or otherwise not at the exact position. This will then be carried over as a placement inaccuracy, and the next image, using the new displacement, will get the same error.
I use the motion settling method, since there's a huge difference in settle time between the different distances and directions of arrival. A more or less pure X arrival is super fast, maybe on the order of 20-30ms, whereas a long Y travel takes about 3 times that. Combined is probably even worse.
(The Y travel is not only longer/gains more speed, the axis is also heavier.) The settle motion tests unfortunately only accept up to 100mm of motion, over which the travel speed never has time to reach its maximum, so the result here pretty much always indicates better than the actual real-world settle times. 

Maybe I need to rerun my X/Y axes calibration (sneak up) again; it has been a long time since I last did that.


> The results I shared earlier in this thread, showed that my previous cam
> imx290 (rolling shutter) at 60FPS, was faster esp. taking into
> consideration that the frame rate where half of the AR0234.
> I think there are two reasons coming to play; The AR0234 is nosier, and
> the image of a non-still rolling shutter capture is much more deformed,
> which might make it easier for the cv motion comparison to find the
> difference faster. Motion cv checking takes longer time on AR0234 (don't
> remember exact number anymore), and is more variable.
> It is hard to investigate fully which of the above theories is more true
> over the other. I have much more de noise on the AR0234 to accomplish
> the same thing, and still it takes more time. And also I let it have a
> slightly higher threshold of what is considered still enough.
> Mind you - I'm talking about differences in a couple of 10ms at most.
> And now memory of the old setup is fading away.
>
Noise in the sensor can be fought with more light and shorter
exposure times... (I suppose a lens with a larger aperture is no option
for you anymore.) (Also keep in mind that the exposure time limits the
refresh rate. A short exposure time is needed to reach the promised
frame rates.)

It can, but the imx290 is far superior to the AR0234 with regard to noise. 
I had a very short exposure time in the beginning of this journey, still with so-so results.
But maybe I need to revisit this again. I think I used to have the exposure at about '1', but now I have 3 or 4, if memory serves me right.
 
Going down is limited by the motor and going up by the spring where the
spring is the weaker of the two. You might add a small wire between the
cam wheel and the sledge it pushes to let it pull up faster.

A wire will not work, unfortunately, since it would hinder the second nozzle. 
But the solution that some machines have, with only one wheel starting at the top instead of the two normal ones, is something I could consider. But that is a rather big change, so it will have to wait for a future rainy day.


 
I'd say executing bottom vision in parallel with motion
shall be the most effective one, as it would save you about 80ms (if I
recall the numbers you claimed correctly) - supposing you get it right on
the first go. Otherwise it will cost you much more than 80ms.

Jan

I think that, ideally, one should also be able to get about 80-100ms out of a pick cycle, if there were just a way to make the first and last nozzle dip/raise smarter, e.g. lowering the nozzle tip to just above the screw heads on the feeder while traveling to the feeder, and starting to move away from the feeder as soon as we are above the screw heads again. Maybe this can be somewhat accomplished with "3rd simulated..", I don't know.

  - Micael
