Can BootVision be configured for two different cameras?

ByronDong

Jun 22, 2022, 3:33:38 AM
to OpenPnP
Hi everyone,

Can the bottom camera be configured as two cameras, one for small-device alignment and one for large-device alignment?
The smallest device I use is an 0402 chip, and the largest is a 28×28 mm QFP. It is difficult to cover both applications with a regular UVC camera.
So I thought using two cameras might be an easy option, and I would like to hear your opinions.

Byron

mark maker

Jun 22, 2022, 4:02:09 AM
to ope...@googlegroups.com

Hi Byron,

The idea was discussed before, but unfortunately nobody has implemented such a thing yet. 😎

https://groups.google.com/g/openpnp/c/O-J2KPbHCE4/m/gvNPefiEAQAJ

My old answer is still almost the same:

> Does OpenPNP support the use of Multiple Up Cameras?

No, the current implementation does not support the use of multiple cameras, but yes, OpenPnP does allow you to define multiple Up Cameras and has the architecture to support this, with relatively few things missing and (more importantly) nothing standing in the way. It always amazes me how great Jason's underlying architecture is!

Quasi-parallel vision, i.e. dedicating one camera to each nozzle but still performing the alignments one after the other, could probably be done in a few lines of code, the GUI to associate a nozzle with a camera being the hardest part ;-). The gain is reduced motion time to position each nozzle, plus some avoided settle times. This is probably only worth it if you do the vision at retracted nozzle height, so no Safe Z up/down motion is needed (EDIT: or with dedicated one-per-nozzle Z axes). Doing it at a different focal plane than the PCB plane introduces some parallax problems that we luckily already solved (Marek has this on his machine and his testing helped me develop a solution) :-)

https://makr.zone/improved-runout-compensation-and-bottom-camera-calibration/346/

However, true ganged-up bottom vision, i.e. doing it at the very same time, would be a much taller order: there is much more to reprogram in OpenPnP. Iterative/multi-pass bottom vision would either only work for rotation (and only if you have non-shared C axes) or spoil most of the time gained. You would have to calibrate camera (pixel) centers and/or provide a way to mechanically adjust the cameras to the nozzles in X/Y and/or plane. Again, this only makes sense if done at retracted nozzle height (EDIT: or with dedicated one-per-nozzle Z axes).

One important thing to keep in mind is the added CPU load from running so many high-quality USB cameras.

https://makr.zone/camera-fps-cpu-load-and-lighting-exposure/519/

_Mark


ByronDong

Jun 22, 2022, 8:37:04 PM
to OpenPnP
Mark,

Thank you for your patient explanation; I understand the difficulty of the code implementation.

Is there any chance of replacing it with a better industrial camera, such as a 5-megapixel global-shutter industrial camera? The difficulty is that some use protocols such as "GenICam".

Byron

mark maker

Jun 23, 2022, 2:36:50 AM
to ope...@googlegroups.com

Hi Byron,

The recommended camera is an ELP 720p USB camera. It has proven to be the best choice again and again. You can connect two via a hub to the same USB port.

https://github.com/openpnp/openpnp/wiki/Build-FAQ#what-should-i-build

720p is enough resolution for both the top and bottom camera.

Good image quality (low compression) and full manual settings are more important than more resolution, and the ELP cameras have both.

The ELP 1080p camera can also be used, alternatively in 720p mode, where it has double the frame rate (60 fps). This is what I use. But because this doubles the bandwidth, you need a dedicated USB root hub/computer port for each camera, i.e. you cannot connect both cameras via the same hub!

More resolution is counter-productive: you need much more processing power for computer vision, without any better results. Note that the few computer vision applications that really count (fiducials, sprocket holes etc.) do sub-pixel accuracy nowadays, so you get positional accuracy many times finer than the nominal pixel resolution! (It became a bit of an obsession when I made the DetectCircularSymmetry stage, where I measured 4-8 times the pixel resolution on my machine.)

Higher resolution needs more bandwidth; otherwise the stream will just be compressed more, and compression artifacts worsen computer vision accuracy. Compression is optimized for human viewers (perceptual coding), so even if modern codecs can compress more, it might be detrimental to positional accuracy.

You also need more light for effective higher resolution, unless the sensor is larger. But a larger sensor means a clunkier lens and less depth of field, which is bad, at least for the top camera.

There are also many cameras out there that fake high resolution, and/or whose lens optical quality, lens speed etc. do not match the resolution, so they have to "pretty up" the image with algorithms, which makes computer vision inaccurate. Not being able to switch off denoising, sharpening and other "pretty" algorithms is bad.

Also be mindful that low fps is bad, so never trade fps for more resolution. And low fps can be caused by not enough light.

_Mark

Jarosław Karwik

Jun 23, 2022, 2:53:26 AM
to OpenPnP
I once made such a change (on my private branch), but as I sold the machine it was done for, I never published it.
I can contribute such a change again, but you would have to help me test it, as the change is rather widespread: it requires adding several new dialog options, because the bottom camera is used for very many things. It also adds entries to the settings, breaking backward compatibility.

Mark and Jason would have to specify how such a change should be done:
- Do we add an option to select the bottom camera for each operation the camera is used for?
- Do we make pipelines camera-specific, or do we just add a pipeline option to select the camera?

There is also a lightweight way of adding second-camera support:
- Currently you can specify several bottom cameras, but only the first is used in all vision algorithms.
- It would be easy and relatively safe (in terms of code integrity) to add a pipeline module that switches the camera selection for a single vision operation (until the pipeline reaches its conclusion).

This is kind of a big decision impacting the architecture, so the big guys should decide...

mark maker

Jun 23, 2022, 6:29:07 AM
to ope...@googlegroups.com

Hi Jarosław,

Thanks for the offer, this is great!

These are some thoughts...

  1. All uses of the bottom camera that I am aware of are looking at a nozzle (with or without a part on).
  2. So I suggest we make a drop-down on the nozzle to define which bottom camera should be used.
  3. We add a nozzle parameter to the already existing VisionUtils.getBottomVisionCamera() function (see the sketch after this list).
  4. All callers (currently 17) must now pass the nozzle.
  5. We perhaps need to check whether some code does not go through the VisionUtils.getBottomVisionCamera() function, and make it so.
  6. We probably need to introduce a "nozzle order" field that orders the nozzles in a way that allows aligned vision.
  7. To explain: in the case of a four-nozzle machine that has the nozzles in a rectangular configuration but "only" two cameras, they need to go through the bottom vision steps in aligned pairs. Note that the cameras could be aligned in X or in Y, for various reasons.
  8. Make the notion of "default nozzle" aware of the camera that addresses it. Say a user presses "move nozzle to camera" (e.g. through drag-jogging on the bottom camera view) without saying which nozzle (i.e. the selected tool is not a nozzle); it should respect the camera-to-nozzle assignment.
  9. See MachineControlsPanel.getSelectedNozzle().
  10. See Head.getDefaultNozzle(), which should probably get an overload that takes a camera as a parameter; all callers need to be checked for whether they are in the context of a particular camera.
  11. Double-check that Issues & Solutions calibrates all bottom cameras (should already be the case).
  12. Make Issues & Solutions use a "default nozzle" as described above (not currently the case).
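
To make points 3 and 4 concrete, here is a minimal sketch of what the nozzle-aware overload could look like. This is hypothetical code, not existing OpenPnP; in particular, getBottomVisionCameraId() is an assumed new getter on Nozzle that the proposed drop-down would back:

```java
// In org.openpnp.util.VisionUtils (hypothetical sketch):
import org.openpnp.model.Configuration;
import org.openpnp.spi.Camera;
import org.openpnp.spi.Nozzle;

public static Camera getBottomVisionCamera(Nozzle nozzle) throws Exception {
    if (nozzle != null) {
        // Camera id assigned via the proposed drop-down on the nozzle;
        // getBottomVisionCameraId() is an assumed new getter, not real API.
        String cameraId = nozzle.getBottomVisionCameraId();
        for (Camera camera : Configuration.get().getMachine().getCameras()) {
            if (camera.getId().equals(cameraId)
                    && camera.getLooking() == Camera.Looking.Up) {
                return camera;
            }
        }
    }
    // Fall back to the existing behavior: the first up-looking camera.
    return getBottomVisionCamera();
}
```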

_Mark

Jarosław Karwik

Jun 24, 2022, 11:03:07 AM
to OpenPnP

Well, what you recommend is selecting the camera by nozzle.
This may work; it is a relatively nice and simple concept.

Just that I usually misuse my nozzles: to avoid changing nozzles, I use a "mid" size for both small components (like small transistors) and large components (like SO16..SO24).
That is why the original idea was to associate it with the component, not the nozzle. This, however, would be more complicated, as it introduces more settings, a nozzle/camera matrix etc.
Hence the idea to add the camera selection in the pipeline.

mark maker

Jun 24, 2022, 12:07:03 PM
to ope...@googlegroups.com

Ah, I understand, you would forfeit "ganged-up" operation.

I guess this could be added on top: because we have the nozzle (point 3 in my list), we can also get the part that is currently on the nozzle. And this part can then override the standard affinity.

However, I would not add it inside the pipeline, but outside, in the vision settings, so users can change it without needing special pipeline editing skills.

Furthermore, this is also technically more straightforward, as the "camera" is passed into the pipeline as a property and is queried by various stages for dimensions, units per pixel etc.

Such properties should not suddenly be changed by the pipeline itself.

https://github.com/search?q=org%3Aopenpnp+pipeline.setProperty%28+camera&type=code

https://github.com/search?q=org%3Aopenpnp+pipeline.getProperty%28+camera+%29&type=code
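
For reference, the convention from those search results looks roughly like this (a simplified sketch; the real callers, such as the bottom vision alignment, pass more properties):

```java
import org.openpnp.spi.Camera;
import org.openpnp.vision.pipeline.CvPipeline;

// Simplified sketch of the existing convention: the caller injects the
// camera as a pipeline property before processing; stages query it for
// units per pixel, dimensions etc. Any camera selection logic belongs
// here, in the caller, never inside the pipeline.
void runBottomVision(CvPipeline pipeline, Camera camera) throws Exception {
    pipeline.setProperty("camera", camera);
    pipeline.process();
    // ... evaluate the pipeline results as usual ...
}
```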

This way the pipeline also remains neutral, i.e. it can be exchanged between users/OpenPnP installations.

_Mark


ByronDong

Jun 26, 2022, 12:37:20 AM
to OpenPnP
Good to know that there is one more possibility. Looking forward to testing.

byron

Jarosław Karwik

Jun 27, 2022, 2:34:34 PM
to OpenPnP
I think there is a way to combine several things.

The setting should be in the vision settings for the part, but it should not point to a camera directly; rather, it should be a property used later on when selecting the proper camera. In that case even "ganged-up" operation would be possible.

From the part we should be able to determine the part size, right? So we could use it to match against the camera view area.

Basically the change would need:
- passing the part to VisionUtils.getBottomVisionCamera() (I guess in the case of operations without a part, the default camera would be used)
- adding an additional setting to the part vision configuration
- adding 'camera selector' logic, either in VisionUtils or somewhere where such high-level decisions are made (see the sketch after this list)
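
A rough sketch of such a selector, assuming the part size can be taken from the existing Footprint body dimensions (the selector itself is hypothetical, and unit conversion between footprint units and camera units-per-pixel is glossed over):

```java
import org.openpnp.model.Configuration;
import org.openpnp.model.Footprint;
import org.openpnp.model.Location;
import org.openpnp.model.Part;
import org.openpnp.spi.Camera;

// Hypothetical 'camera selector': pick the up-looking camera with the
// smallest field of view that still covers the part body.
static Camera selectBottomVisionCamera(Part part) throws Exception {
    Footprint footprint = part.getPackage().getFootprint();
    double partW = footprint.getBodyWidth();
    double partH = footprint.getBodyHeight();
    Camera best = null;
    double bestArea = Double.MAX_VALUE;
    for (Camera camera : Configuration.get().getMachine().getCameras()) {
        if (camera.getLooking() != Camera.Looking.Up) {
            continue;
        }
        Location upp = camera.getUnitsPerPixel();
        double viewW = camera.getWidth() * upp.getX();
        double viewH = camera.getHeight() * upp.getY();
        if (viewW >= partW && viewH >= partH) { // the part fits the view
            double area = viewW * viewH;
            if (area < bestArea) { // prefer the tightest fit
                bestArea = area;
                best = camera;
            }
        }
    }
    return best; // null: caller falls back to the default bottom camera
}
```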

Need to refresh my OpenPnP repo and tools...

mark maker

Jun 28, 2022, 10:07:42 AM
to ope...@googlegroups.com

Yes, that sounds even better.

Jarosław Karwik

Jul 1, 2022, 1:28:07 PM
to OpenPnP
I have played a bit with the latest OpenPnP version (I took the source and recompiled locally).

With the current way bottom vision is organized, I would add the additional setting in the "Bottom vision settings", as it already collects all the relevant selections for bottom vision.
And using different cameras means that these settings might be a little different.

mark maker

Jul 1, 2022, 1:50:46 PM
to ope...@googlegroups.com

> And using different cameras means that these settings might be a little different.

I see two use cases (so far):

  1. Use multiple equal cameras for some sort of "ganged-up" vision, i.e. avoiding moves between nozzles (or at least making these moves very small and fast).
  2. Use multiple different cameras with different properties, like lens focal length, e.g. for small and large parts.

In (1) the vision settings should be equal and not contain a camera selection. Instead, the camera should be taken from the nozzle.

In (2) the camera must actually be selected by properties like the package size.

As you know, in the new vision settings system there is "inheritance" in place, so if no "override" vision settings are assigned to a Part or Package, the next level is inherited:

Part <-- Package <-- Default

It would be relatively easy to make the selection of the "Default" subject to filter properties of the Part/Package:

Part <-- Package <-- Small Default
                 <-- Large Default

Then the camera can be assigned to the Vision Settings. It can then still be overridden on certain Parts or Packages.

Even a mix of (1) and (2) is possible that way, using three cameras. Only the "Large Default" would assign a large-view camera; the "Small Default" would not, and instead the nozzle assignment would be effective.
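
A hedged sketch of that lookup: the Part/Package override getters follow the new vision settings system, while the small/large split and the size threshold are assumptions:

```java
import org.openpnp.model.BottomVisionSettings;
import org.openpnp.model.Part;

// Sketch only: resolve vision settings along Part <-- Package <-- Default,
// with the "Default" level split by a part-size filter property.
static BottomVisionSettings resolve(Part part,
        BottomVisionSettings smallDefault, BottomVisionSettings largeDefault,
        double thresholdMm) {
    // Part-level override wins, then the Package-level override.
    if (part.getBottomVisionSettings() != null) {
        return part.getBottomVisionSettings();
    }
    if (part.getPackage().getBottomVisionSettings() != null) {
        return part.getPackage().getBottomVisionSettings();
    }
    // Otherwise choose the default by body size (the filter property).
    double size = Math.max(part.getPackage().getFootprint().getBodyWidth(),
            part.getPackage().getFootprint().getBodyHeight());
    return size > thresholdMm ? largeDefault : smallDefault;
}
```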

_Mark

Jarosław Karwik

Jul 1, 2022, 3:58:35 PM
to OpenPnP
I have not seen a machine with "ganged-up" vision in this forum. It is most likely due to the fact that it is not supported, so nobody builds it; a chicken-and-egg dilemma.
Is there a plan to make the job processor and parallel pipeline support for it?

I have some experience with both systems, one with property inheritance and one with global settings. Neither is perfect, but an inheritance system with so many levels is harder to control: you would have settings in Part/Nozzle/Package.
I even suspect that you cannot always force the camera from the nozzle for (1), as some parts may be too large to fit for "ganged-up" operation (in such a case only every second nozzle might be used; I think I have seen such an issue on a cheap Chinese multi-nozzle setup).

But let's assume a compromise:
- Part, Package and Nozzle would get an additional field, "Bottom Camera Selection Preference" (how do we name it nicely in a short way?)
- It would contain the following selections (see the enum sketch after this list):
  + "Default" (use the global settings)
  + "Match Size" (select the best camera for component coverage of the camera view)
  + "Closest Camera" (hmm, is it reasonable to put more cameras around the machine to limit travel distance??)
  + ???? (we could allow selecting a camera by name, but this would be ugly)
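
The proposed field could be as simple as an enum; this is a pure sketch, naming open, and nothing of it exists in OpenPnP yet:

```java
// Sketch of the proposed per-Part/Package/Nozzle preference field.
public enum BottomCameraSelectionPreference {
    Default,       // defer to the global/inherited setting
    MatchSize,     // best camera-view coverage for the component
    ClosestCamera  // camera nearest the current position, minimizing travel
}
```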

I am not sure how you would select the cameras for "ganged-up" vision (by camera name?).

The priority would be

Part <-- Package <-- Default ... and where to put the Vision Settings?

Damn, it gets complicated once you get into detailed planning. I do not believe you can solve all these requirements with simple inherited properties; you would need a bottom camera processor that takes into account part/package size and machine/camera geometry. Yet currently, 99% of the cases need just a simple selection of a camera per part...

mark maker

Jul 1, 2022, 4:19:09 PM
to ope...@googlegroups.com

Hi Jarosław,

Just to make it clear: it would not be true parallel ganged-up vision. The vision would still be performed sequentially. But if multiple nozzles are exactly aligned with multiple cameras, bottom vision is done at nozzle balance Z (not PCB Z), and the JobProcessor is slightly updated to allow custom sorting of the nozzles, then it would still be much faster, because there would be no move between the alignment steps of the nozzles (or only a tiny adjustment move if the nozzles are not perfectly aligned). With adaptive Camera Settle, the time between camera shots would be virtually zero! So, almost "ganged-up".

I've laid it all out in my post of Jun 23, 2022, 12:29:07 UTC+2 (in this discussion). If somebody builds that machine and is ready to do some thorough testing, I will implement it (the offer stands for the next two months or so; implementing might take some weeks).

As for too-large parts: I'm still planning to do a multi-shot bottom vision extension (where part corners are centered above the camera and aligned between multiple shots). Usually you only have very few large parts (typically one large MCU), and you can afford for them to take a bit longer. Just make sure that you can move even the largest part sideways to the corners, at any rotation, and still not bump into anything.

_Mark

bert shivaan

Jul 2, 2022, 7:11:38 AM
to OpenPnP
On the ganged-up idea: will the vision be able to correct for cameras not perfectly aligned to the nozzles? For instance, if the cameras are placed 1 mm further apart, can that be adjusted in software so the builder does not have to build a perfect machine?


mark maker

Jul 2, 2022, 7:47:34 AM
to ope...@googlegroups.com

In the proposed first approach it would just have to move 1 mm between shots. But that's still much better than the full move between nozzles. And I expect users could come up with a clever adjustable camera holder, so we can do better than 1 mm.

In a later revision, I'm sure we could just crop the camera images to make the imperfect nozzle center become the pixel center (a half-way adjustment per camera). No more move required.
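
As an illustration of the cropping idea (plain Java, not OpenPnP code), the crop simply re-centers the frame on the measured nozzle position:

```java
import java.awt.image.BufferedImage;

// Illustration only: crop a frame so that the measured nozzle center
// (nozzleX/nozzleY, in pixels) becomes the new image center. With two
// cameras, each would absorb half of the misalignment ("half-way adjust").
static BufferedImage centerOnNozzle(BufferedImage frame, int nozzleX, int nozzleY) {
    // Largest crop that is symmetric around the nozzle center.
    int halfW = Math.min(nozzleX, frame.getWidth() - nozzleX);
    int halfH = Math.min(nozzleY, frame.getHeight() - nozzleY);
    return frame.getSubimage(nozzleX - halfW, nozzleY - halfH,
            2 * halfW, 2 * halfH);
}
```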

_Mark

bert shivaan

Jul 2, 2022, 8:26:10 AM
to OpenPnP
Mark, how much of a test rig would you need me to have for testing this? Does it need to be a fully working PnP machine, or could it be a two-nozzle head that can move over two cameras? Will it need to pick up components, or will looking at nozzle tips be enough?


mark maker

Jul 2, 2022, 5:25:38 PM
to ope...@googlegroups.com

I'm afraid it would require almost all of the PnP functionality to be conclusive; I mean, more conclusive than simulation.

_Mark

bert shivaan

Jul 2, 2022, 5:33:49 PM
to OpenPnP
OK, no problem. I might throw together something small to play with.


Jarosław Karwik

Jul 8, 2022, 4:13:22 AM
to OpenPnP
Mark,

I have checked the code a bit and went over our discussion again.
I do not see any way to make it match your expectations; there are too many open questions, and from my experience it would always go a different way, and my implementation would not be accepted by you.

But I would still like to contribute; it would just have to be done in a bit of a different way.
I could be the muscle, but you would have to be the brain and simply specify the changes to be done and the logic to be implemented. Then I could do the real work.

mark maker

Jul 8, 2022, 5:57:03 AM
to ope...@googlegroups.com

Hi Jarosław,

I'm currently implementing multi-shot bottom vision. It's a larger problem than I thought, but I'm getting there.

So maybe your use case for using two cameras (one small, one large) is going to be obsolete?

Naturally, multi-shot is going to be slower, but for typical projects, where you only have one or two large ICs, this will be insignificant. And I'm making sure it'll be as efficient as possible.

Multi-shot can also improve the precision, because it will capture the package corners in the center of the camera, with no parallax. The following illustration shows one example where this could matter: a typical wide-angle camera and a large part that is not held precisely planar (exaggerated):

[illustration: Parallax-Multi-Shot.png]
Having said that, in the multi-shot feature I'm also making sure that nothing will be in the way of having multiple cameras in the future.

_Mark

Jarosław Karwik

Jul 8, 2022, 7:11:27 AM
to OpenPnP
If (when, actually) it works, then for sure there is no need for a second camera.

I have also tried to imagine how multiple cameras would work on my CHM-T36VA, for the "ganged-up" vision case.
It would be a challenge: the distance between the nozzles is quite small, so an ordinary ELP would be too big. You would need finger-type cameras, and such cameras would work only for small components.
So no go :-(

But one day I will have more time to play with OpenMV and on-the-fly vision...

Jim Freeman

Jul 8, 2022, 8:26:55 AM
to OpenPnP
Mark, that sounds very interesting. Will the 4 corner points be made available for post-processing? Also, do you think this will be more accurate than processing a single image? If the machine (mine, for instance) has a 50 micron positioning error, then each corner measurement will have that on top of the accuracy of finding the corner.
Best, Jim

mark maker

Jul 8, 2022, 9:25:24 AM
to ope...@googlegroups.com

Hi Jim

> Will the 4 corner points be made available for post-processing?

What do you mean by post-processing?

> Also, do you think this will be more accurate than processing a single image? If the machine (mine, for instance) has a 50 micron positioning error, then each corner measurement will have that on top of the accuracy of finding the corner.

If these errors are essentially random, then yes, you can expect an improvement, simply by the laws of probability (confidence interval). The more corners you probe, the more accurate the overall result. This will obviously only reduce the bottom vision error, not the placement error...
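
In numbers: assuming independent corner measurement errors with standard deviation σ, the averaged result improves with the square root of the number of corners probed:

```latex
\sigma_{\text{mean}} = \frac{\sigma}{\sqrt{n}}
\qquad n = 4:\ \tfrac{\sigma}{2}, \qquad n = 8:\ \approx 0.35\,\sigma
```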

In my implementation you will be able to tell it to probe extra corners to improve accuracy (at the cost of some extra bottom vision time).

If the errors are not random, then chances are you can reduce them by backlash compensation (or by fixing some underlying problem).

_Mark


vespaman

Jul 8, 2022, 9:26:01 AM
to OpenPnP
[image: dual_cam.jpg]
When this thread started, I instantly got interested for my CHM-T48VB. I measured that using only the sensors, on M12-optics flex cables, it will fit.
But of course those output MIPI, and that complicates things a bit.
I am currently doing a hardware design on an i.MX 8M Plus with 2 MIPI cams like this, so there are some synergies. But there can only be 2 MIPI cameras on each i.MX 8M Plus; maybe a third/fourth camera could interface through USB3. If, dreaming on, this could then be a vision/sensor hub that presents images over Ethernet for OpenPnP. Or maybe it could even host OpenPnP itself and make use of the AI/ML neural core, but that might be too much work for those little ARM cores.

But back to reality :)
Maybe there are more generic MIPI-to-USB boards out there that could work? Or maybe even rip the IC off a China USB3 camera (I suspect they interface the sensors using MIPI, but maybe not?).
Or change the optics on a finger cam (like the down cam). But that is a bit hacky, and only USB2 AFAIK.
Anyway, I just wanted to say that it is not really a "no go"! :)

mark maker

Jul 8, 2022, 9:47:45 AM
to ope...@googlegroups.com

I was thinking a bit more...

One problem is that you'd need to shoot the parts at balanced Z height; otherwise you still need to move Z, which would probably spoil most of the gain.

But by nailing Z you cannot account for different part heights, and the part undersides will be blurred on tall parts. You could still use it for small passives (which are typically the most numerous), but probably not in general. 🙁

Ideally, you would use a quadro-head: two pairs of nozzles with shared axes. The pairs could be side by side (one row), or in two rows, one pair behind the other, with the two rows further apart than the nozzles, to make space for the cameras.

The cameras would be aligned with one nozzle of each pair, either skipping one if in a row, or being aligned with one of the two rows.

Either way, both nozzles could independently move their Z at the same time, and we would get all the accuracy benefits of a focal plane on the same Z as the PCB. When they move to the second set of shots, there is time to move X and the two Z axes at the same time.

This would really rock. 😎

_Mark

bert shivaan

Jul 8, 2022, 9:14:21 PM
to OpenPnP
Just spitballing here: if the nozzles are too close to see with 2 cameras, can we use a pic of both of them for processing? It would seem that nozzle calibration would give us the "center" position of each nozzle in the FOV. Then parts are calculated from the center of the nozzle they are on.

I am sure this is infinitely easier for me to describe than to actually implement. But as I said, just spitballing here.

mark maker

Jul 9, 2022, 2:49:10 AM
to ope...@googlegroups.com

> if the nozzles are too close to see with 2 cameras, can we use a pic of both of them

It is the cameras' PCBs/housings etc. that are colliding, not their view areas.

Having a large camera view cover multiple nozzles is certainly possible, as some Neoden machines show. But you probably need a hi-res camera in still-shot mode, a long focal length, and consequently a large camera distance, so the parallax errors remain reasonable.

And this will very likely only work for small parts (which could still be very useful, as these are typically the most numerous).

But the required changes to the code would be very profound; they would reach deep into the Job Planner and Job Processor. Unlikely to happen soon 😬.

_Mark

bing luo

Aug 21, 2022, 6:53:57 AM
to OpenPnP
[image: double camera.jpg]
How about this camera?

Working principle:

Connect the 2 cameras through a hub and transmit through one USB cable; you get 2 independent video devices, and the 2 cameras work at the same time.
The binocular camera is driver-free, conforming to the standard UVC protocol, which facilitates secondary development and gives good versatility; it provides a programming interface for settings (brightness, contrast, saturation, tone, sharpness, white balance, exposure, gain).
DirectShow, OpenCV and other software can be used for development on Windows, and V4L2 on Linux.
Users can use the MJPEG format to output dual video at any resolution; the maximum is 2560 x 720 (1280 x 720 x 2).

Blowtorch

Aug 30, 2022, 12:04:22 PM
to OpenPnP
What about mapping the bottom camera to the feeder? That way you may also be able to optimize by travel, and not just by part ID.

mark maker

Aug 30, 2022, 1:00:36 PM
to ope...@googlegroups.com

Good point. But I'd prefer an automatic optimization rather than a static assignment. I guess that can easily be obtained.

_Mark
