Develop: new Bottom Vision / Fiducial Locator assignment


ma...@makr.zone

unread,
Jan 29, 2021, 8:00:40 AM
to OpenPnP
Hi everybody

I take geo0rpo's question as the trigger to finally discuss this.

This is my take on things; if you think that's a bad idea, please speak up:

> shouldn't the part bottom vision pipelines be at the "packages" tab

Absolutely, and that's very high on my list.

I'll even go one step further: The pipeline should not be in the package either, but multiple "Bottom Visions" should be defined in the Vision tree:

Bottom Vision Mockup.png

Each Bottom Vision setup would define the settings and pipeline together, optimized for each other (no change from today).

My assumption is that, if done well, you only ever need a handful of different pipeline/alignment setups. One of the setups would be marked as the "Default". Obviously you would define "Default" for the most widely used setup (probably the one for small passives).

In the Package you would then select one of the Bottom Vision setups from a combo-box, if (and only if) anything other than the "Default" is needed.

In the Part the same, but with the default given by the Package.

So we get a dynamically inherited assignment: Default -> Package -> Part.
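A minimal sketch of how this dynamic inheritance could resolve at runtime (hypothetical names, not actual OpenPnP classes; `vision_settings` stands in for an assigned Bottom Vision setup, with `None` meaning "inherit from the next level"):

```python
from types import SimpleNamespace

def resolve_vision_settings(part, package, machine_default):
    """Walk Part -> Package -> Default and return the first explicit setting."""
    if part.vision_settings is not None:        # explicit override on the Part
        return part.vision_settings
    if package.vision_settings is not None:     # explicit override on the Package
        return package.vision_settings
    return machine_default                      # machine-wide "Default" setup

default = "Default (small passives)"
package = SimpleNamespace(vision_settings=None)
part = SimpleNamespace(vision_settings=None)

assert resolve_vision_settings(part, package, default) == default  # inherits Default
package.vision_settings = "QFP setup"
assert resolve_vision_settings(part, package, default) == "QFP setup"  # Package wins
```

The key property is that a part with no explicit assignment always tracks whatever the Package (or the machine Default) currently says.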

The migration of existing machine.xml files would be the most difficult part of implementing this. The migration algorithm must group all equal (Pipeline + "Pre-Rotate") settings, create Bottom Vision setups from them, find the "Default" as the most commonly used one, on both the Machine and Package level, and assign the non-defaults where they don't match.
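The grouping step could be sketched like this (a rough illustration with made-up data shapes, not the real machine.xml model):

```python
from collections import Counter

def migrate(old_settings):
    """old_settings: (part_id, pipeline_xml, pre_rotate) tuples from machine.xml.
    Groups equal (pipeline + pre-rotate) combinations, makes the most common
    one the "Default", and returns the explicit assignments for the rest."""
    groups = Counter((pipe, rot) for _, pipe, rot in old_settings)
    default_key = groups.most_common(1)[0][0]          # most used becomes Default
    names = {key: f"BottomVision-{i + 1}" for i, key in enumerate(groups)}
    names[default_key] = "Default"
    # Explicit assignments are only needed where the Default doesn't match.
    assignments = {part_id: names[(pipe, rot)]
                   for part_id, pipe, rot in old_settings
                   if (pipe, rot) != default_key}
    return names, assignments

names, assignments = migrate([("R1", "p1", True), ("R2", "p1", True),
                              ("C1", "p1", True), ("U1", "p2", False)])
# Three parts share pipeline "p1", so that group becomes "Default";
# only U1 ends up with an explicit assignment.
```

The same pass would run once per Package to find package-level defaults before falling back to the machine-wide one.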

Fiducials the same way.

What do you think?

_Mark

bert shivaan

unread,
Jan 29, 2021, 8:25:20 AM
to OpenPnP
My vote still goes to the package if I must choose one over the other.
But I think it would be awesome to have both. So your newly proposed idea gets the correct pipeline to the package, then fine-tune for each package.
Some of us use template matching to be sure the size is correct (it allows finding edge picks and tombstone picks). The size is package dependent.

--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/c690532c-8c06-4295-9303-5e97adcce4edn%40googlegroups.com.

geo0rpo

unread,
Jan 29, 2021, 8:45:03 AM
to OpenPnP
I am too "young" in OpenPnP to even have an opinion, but I will just say that the Philips Emerald machine that I used to operate years ago had default pipelines associated with the packages.
Of course this brings up the need to have, let's say, 0805 capacitor AND 0805 resistor default pipelines (R0805, C0805 etc.), because as far as vision goes, they are different to "see" even if they have the same size.
Then, after you assign the packages to the parts, you can make adjustments to the part's pipeline without affecting the package pipeline.
So, if I may, I will go vote for Mark's idea.

Thanks.

geo0rpo

unread,
Jan 29, 2021, 8:53:09 AM
to OpenPnP
Also in my setup there is no bottom vision tree like yours. I only have Bottom Vision and Fiducial Locator. No tree under Bottom vision.
Am I missing something?

james.edwa...@gmail.com

unread,
Jan 29, 2021, 9:27:35 AM
to ope...@googlegroups.com

Hi, Bert. This is a side question from your comment. What is the “canonical” way to find which part (or package) is on the nozzle for the bottom vision pipeline? Do you have a separate pipeline for each package, or is there a way to determine, for instance, that the current part is a 0402 or a 0603 or …

Thanks a lot, Jim


johanne...@formann.de

unread,
Jan 29, 2021, 9:31:19 AM
to OpenPnP
Hi Mark,

really like the idea.
Even with my setup that heavily relies on vision, I see no real problems, but improvements in my workflow....
(default would be something like the current default, package for the most used ones, part to (temporarily) select a training pipeline (creating a template) or to take care of a part with strange properties (e.g. other colors that cause problems with the default pipeline))

greetings
Johannes

ma...@makr.zone

unread,
Jan 29, 2021, 9:55:26 AM
to ope...@googlegroups.com

> No tree under Bottom vision.

This is just a mock-up of how it could look in the future.

_Mark


Jason von Nieda

unread,
Jan 29, 2021, 10:12:51 AM
to ope...@googlegroups.com
Hi Mark,

Thanks for diving into this. I have a bunch of thoughts on it, but let me start with one (okay two) question: Do you see the pipelines being editable at all at the Part / Package level, or only selectable from the list of defined ones? And since I suspect the answer is "No", I'd ask what the workflow would look like if someone is running a job and one particular part just keeps acting weird. Do they create a new pipeline in the Machine Setup and then select it for the part?

Thanks,
Jason


--

ma...@makr.zone

unread,
Jan 29, 2021, 10:20:22 AM
to ope...@googlegroups.com
> Some of us use template matching to be sure the size is correct (it allows finding edge picks and tombstone picks). The size is package dependent.

I have other ideas for this: The pipeline would not handle this, but Vision (Bottom / Fiducial) would:
  1. The first (few) times a part is recognized it would collect statistical info about the vision result and store those per Package, optionally per Part.
  2. For instance it would record average width/height of a RotatedRectangle that is returned.
  3. It can observe vision result properties in terms of both average values and variance.
  4. This could go as far as to record and average expected template images, i.e. how the part/package is supposed to look.
  5. Then after a configurable amount of training, it would start complaining if the vision result is unexpected taking into account the observed variance.
  6. The settings for this (number of training rounds/level of confidence) would again be configurable on the common Vision setup.
  7. Future option: while training, it could even interactively ask the user "does this look OK to you?"
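Steps 1-5 could be sketched as a running mean/variance per package (a Welford-style accumulator; all names here are hypothetical, not OpenPnP code):

```python
import math

class VisionResultStat:
    """Running statistics on one vision result property (e.g. RotatedRect width)."""
    def __init__(self, training_rounds=10, sigma=3.0):
        self.training_rounds = training_rounds   # step 5/6: configurable training
        self.sigma = sigma                       # step 6: level of confidence
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                            # sum of squared deviations (Welford)

    def observe(self, value):
        """Feed a measurement; False means 'unexpected result, complain'."""
        if self.n >= self.training_rounds:
            std = math.sqrt(self.m2 / (self.n - 1))
            if abs(value - self.mean) > self.sigma * std:
                return False                     # outlier: do not pollute the stats
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return True
```

With zero observed variance any deviation would be flagged, so a real implementation would also need a minimum tolerance floor.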

How does this sound, Bert?

_Mark

ma...@makr.zone

unread,
Jan 29, 2021, 11:41:57 AM
to ope...@googlegroups.com

> Do you see the pipelines being editable at all at the Part / Package level, or only selectable from the list of defined ones?

This is open for discussion.

Ideal Solution

Ideally, Pipelines are not editable on the Part/Package level.  Maybe I'm mistaken, but I get the impression that tweaking pipelines is quite hard for the average user anyway.

Each Pipeline typically has a few very specific properties that need trimming routinely. Thresholds of all sorts, sometimes dimensions such as mask diameters, min/max feature sizes etc. If I think about Bottom Vision and Fiducial Locator pipelines, I see maybe three to six properties per pipeline, or even less if you have multiple preconfigured setups to choose from.

These properties could be exposed as per Package/per Part trimming controls. If the body of your IC is shiny, you can trim the cutoff threshold.  If this nearby contact gets mistaken as a fiducial, trim the mask diameter down.

These trim controls could look like the camera properties in the OpenPnpCaptureCamera. With an "auto" checkbox (default) and a slider to tweak.

As seen in the OpenPnpCaptureCamera, a slider can equally control discrete (boolean) choices. So it should be reasonably easy to wire stage properties to sliders in a generic way.

Each trim would have a stage name, property name plus a range for Doubles/Integers (allowed to reverse upper/lower bound to make some adjustments more intuitive). It could optionally have a second stage name for the Result image you want to display.

Each trim control change would immediately re-trigger the transiently cloned and modified pipeline and show the result image in the camera view. So if you trim a mask diameter, you immediately see it applied in the pipeline. After a timeout, it would again display the end result image, hopefully with the pipeline now nailing the subject. 
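One way the trim descriptor could look (purely a sketch; the pipeline is modeled here as a plain dict of stage properties, not the real CvPipeline):

```python
import copy
from dataclasses import dataclass

@dataclass
class Trim:
    stage: str     # pipeline stage whose property is exposed
    prop: str      # property name within that stage
    low: float     # value at slider position 0.0
    high: float    # value at 1.0; may be below 'low' to reverse the slider

    def value_at(self, fraction):
        """Map a 0..1 slider position into the (possibly reversed) range."""
        return self.low + (self.high - self.low) * fraction

def apply_trims(pipeline, trims, fractions):
    """Transiently clone the shared pipeline and overwrite the trimmed properties."""
    clone = copy.deepcopy(pipeline)        # the shared setup is never modified
    for trim, f in zip(trims, fractions):
        clone[trim.stage][trim.prop] = trim.value_at(f)
    return clone

pipeline = {"mask": {"diameter": 100.0}}
trimmed = apply_trims(pipeline, [Trim("mask", "diameter", 50.0, 150.0)], [0.25])
# trimmed["mask"]["diameter"] is now 75.0; pipeline itself still says 100.0
```

An optional second stage name for the result image would just be one more field on `Trim`.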

These trim controls could live inside a new Machine Controls tab: so if a vision op fails inside the job, the controls could become available immediately, without having to navigate to the Package/Part. A message box where you can choose between "Fail Vision", "Trim Package Vision" or "Trim Part Vision", perhaps.

I believe this could eventually lead to a level of abstraction, where we could ship many specific high quality pipelines with OpenPnP (for "Juki", "Samsung", etc.) and where the need for users to really dig into them is greatly reduced or eliminated.


Easier Solution

There could be a "clone and adjust" button on both part and package. It would clone the current (inherited) Vision setup, assign it to the part/package and enter the Pipeline Editor (as today).  Some clever auto-naming by package or part could take place.

If the Setup was already cloned earlier (i.e. single Part/Package assignment), it would just re-open the existing Pipeline instead of creating another duplicate. 

Maybe these ad-hoc setups could be marked as such and be subject to a mass cleanup routine after the job is done, i.e. to keep the list tidy. A clone would have to remember the id of the template, to restore the original assignment in the part/package (including null to inherit the default), when it is deleted.
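The clone-and-adjust bookkeeping might look roughly like this (hypothetical data shapes: setups as dicts keyed by name, `template_id` marking an ad-hoc clone):

```python
import copy

def clone_and_adjust(holder, setups, default_name="Default"):
    """Clone the holder's effective setup once; later calls reuse the clone."""
    current = holder["vision_settings"]
    if current is not None and current.get("template_id"):
        return current                              # already an ad-hoc clone
    base = current if current is not None else setups[default_name]
    clone = copy.deepcopy(base)
    clone["name"] = holder["id"] + "-adhoc"         # simple auto-naming
    clone["template_id"] = base["name"]             # remember the template's id
    holder["vision_settings"] = clone
    return clone

def delete_clone(holder, setups):
    """Restore the original assignment (None = inherit the default again)."""
    clone = holder["vision_settings"]
    if clone and clone.get("template_id"):
        original = clone["template_id"]
        holder["vision_settings"] = None if original == "Default" else setups[original]

setups = {"Default": {"name": "Default"}}
holder = {"id": "R0805", "vision_settings": None}
c1 = clone_and_adjust(holder, setups)
c2 = clone_and_adjust(holder, setups)   # second call re-opens the same clone
```

Remembering the template id is what makes the mass cleanup safe: deleting a clone restores either the shared setup or the inherited default.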

_Mark


bert shivaan

unread,
Jan 29, 2021, 11:58:28 AM
to OpenPnP
Hi Mark - It sounds GREAT to me. Likely an issue here is what things are called. So the pipeline is not (according to my understanding of your idea) the singular thing that determines whether the part is aligned correctly. It is the tool that allows something else to decide if the part matches the template.

I am most likely not saying this correctly, but I think it sounds really awesome if I have it right.

My old commercial machine actually looked for the pins on the part, and would reject the part if a pin was bent. I am not saying we are there yet, but sounds like you are pushing in that direction.
James, in my thinking the pipeline would be assigned per package. So the vision would not "determine" the part on the nozzle, only verify it is the expected part.

ozzy_sv

unread,
Jan 29, 2021, 12:02:00 PM
to OpenPnP
My choice is both ways, because some parts behave strangely / differently and for them it is necessary to adjust the pipeline individually.
But at this stage, I would ask to add the ability to copy the properties of the pipeline to a group of selected parts, since it is tedious to make changes for several tens of resistor values.

On Friday, January 29, 2021 at 19:41:57 UTC+3, ma...@makr.zone wrote:

Jim Freeman

unread,
Jan 29, 2021, 12:07:24 PM
to OpenPnP
Thanks, Bert. A follow-on question: How does the pipeline for a given package get the size parameters that are stored for that package? I had a hack that doesn't seem to work in the newest OpenPnP, to which I am trying to migrate.
Best, Jim


Clemens Koller

unread,
Jan 29, 2021, 2:24:44 PM
to ope...@googlegroups.com
Hi!

On 29/01/2021 18.02, ozzy_sv wrote:
> My choice is both ways, because some parts behave strangely / differently and for them it is necessary to adjust the pipeline individually.

I redo most of the pipelines to get them more precise or reliable. On my research list: Why is the HoughCircleDetection so wobbly?
MatchTemplate is way more precise, if set up properly.


> But at this stage, I would ask to add the ability to copy the properties of the pipeline to a group of selected parts, since it is tedious to make changes for several tens of resistor values.

+1 It should be as easy as copy & pasting text blocks within machine.xml - at least! ;-)

However, one could also work with groups of parts. In my EDA workflow, I can handle PartNames + Values + Footprints + ... each separately. They are all stored in a MariaDB database. I am even thinking of storing the Vision Pipelines with the Parts in the database, where they could be handled automatically when I create a 4.7k R-1005M Resistor from a 1k one.


Clemens


ma...@makr.zone

unread,
Jan 29, 2021, 2:38:25 PM
to ope...@googlegroups.com
> HoughCircleDetection

In some default pipelines there is a Canny edge detection stage in front of HoughCircleDetection. But HoughCircleDetection has its own built-in Canny edge detection. The double Canny leads to a doubling of the edges. The circles will therefore "wobble" between the doubled edges. This should be fixed, but my time is limited :)
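The doubling is easy to see in one dimension. This toy sketch (plain Python, not OpenCV) shows that a single intensity step gives one edge response, but re-detecting edges on that thin response line finds both of its sides:

```python
def edges(signal, threshold=0.5):
    """Toy 1-D edge detector: indices where the value jumps."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

scanline = [0, 0, 0, 1, 1, 1]          # one physical edge (dark body, bright side)
first = edges(scanline)                 # -> [3]: a single edge, as expected
edge_map = [1 if i in first else 0 for i in range(len(scanline))]
second = edges(edge_map)                # -> [3, 4]: the one edge has doubled
```

Hough's internal edge detection then sees two candidate contours per physical edge, so the fitted circle oscillates between them.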

_Mark

Jason von Nieda

unread,
Jan 29, 2021, 3:11:40 PM
to ope...@googlegroups.com
Hi Mark,

Here are some rough, quick thoughts. This is a topic I'd like to really dive into because it's starting to approach the stuff I'm really interested in changing in OpenPnP. So, some quick thoughts now to move the conversation along and then we can dig in more as we reach consensus:

To start with, a recap of my "vision": I eventually want to get to the point where a job is entirely self-contained with regard to parts and packages. Changing some properties of a part when running job A should not have an effect when running job B. I know this is somewhat controversial, but this is based on real-world experience of hundreds of hours of job setup and thousands of hours of machine operation.

Once a job works correctly, you want it to work like that every time you load it. Job setup and machine operation are often performed by different people. The operator will likely have to make tweaks as a job runs for the first production run, and it's critical they can do that while knowing they aren't blowing up their other jobs. Oftentimes those tweaks will not be "good". They are going to make little tweaks based on what is going on that day and it's highly likely those tweaks are not applicable to another job, even for the same part. We can wish that the operator will give deep thought to every change they make, but that is not the reality when deadlines approach.

An example of this is some of the recent issues with feeders and deleted parts. This should never happen. If one person comes along and deletes a part because they think it's not being used, or they are sick of the clutter, and that breaks another job, that is a bad result.

So, with that out of the way:

- Pipelines, or vision parameters, should ultimately move into the job file, along with part, package, and feeder information. (I have an idea of how this works for feeders, but let's not get into it for now)

- The default pipelines should be much more robust than they are now for a *fixed quality of image*. By that I mean that if you tune your camera image to be of a certain quality the default pipelines should "just work".

I think we should provide a tool, similar to the vacuum and motion tools you've created, in the vision setup areas, where we judge the quality of the image and suggest changes to lighting, background, contrast, etc. I don't think this is too hard, and it means that for most cases users will never have to touch pipelines.

- Having a way to easily move part and package settings between jobs is important. I see this as being able to "push" and "pull" information into and out of a central database, either locally, or maybe even cloud based. I think this is something that could happen down the road, though. Ultimately, there should be very little data to set up for a new part: select a package, set its size, maybe adjust a couple of vision settings, but hopefully the defaults are fine.

So, that is all a long way to say I almost completely agree with your Ideal Solution with a few caveats:

1. The settings on the vision object (pre-rotate, min/max distance, etc.) should be considered templates, not defaults. In other words, when you create a new part the template settings are copied into that part and from then on are independent of the setting in the vision object. There should be no concept of a part or package having a null property or using the default at runtime.

2. Pursuant to your earlier comment to Bert, I think we should do things like size and tombstone checks in code, not in pipelines. These should be features of ReferenceBottomVision and we should create whatever model properties we need to support them. Ideally, the pipeline should really just focus on cleaning up the image and picking out the features for the code to make decisions on.

3. We should work hard to limit how many trims / tweaks there are in the part/package settings. Rather than having dozens of knobs to turn, we should focus on making sure the camera is getting us a good baseline image and then improve the code or pipeline.
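The difference between point 1 and runtime inheritance boils down to copy-versus-reference semantics. A small illustration (hypothetical settings keys, not the real model):

```python
import copy

def effective(part_settings, default):
    """Inherit-at-runtime: a null override falls back to the live default."""
    return part_settings if part_settings is not None else default

template = {"pre_rotate": True, "max_distance": 0.5}

# Point 1 (template semantics): settings are copied when the part is created,
# then live independently of the template.
part_copied = copy.deepcopy(template)

# Inheritance semantics: the part stores nothing and resolves at runtime.
part_inherited = None

template["pre_rotate"] = False       # a later central change...
# ...does not reach the copied part, but does reach the inheriting one:
assert part_copied["pre_rotate"] is True
assert effective(part_inherited, template)["pre_rotate"] is False
```

Copy-on-create isolates jobs from central edits; runtime inheritance lets central improvements propagate, which is exactly the trade-off under discussion here.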

Thanks,
Jason



Clemens Koller

unread,
Jan 29, 2021, 3:38:54 PM
to ope...@googlegroups.com
Hi, Mark!

On 29/01/2021 20.38, ma...@makr.zone wrote:
> > HoughCircleDetection
>
> In some default pipeline there is a Canny Edge detection in front of HoughCircleDetection.
> But HoughCircleDetection has its own built-in Canny edge detection. The double Canny leads to doubling of the edges. The circles will therefore "wobble" between the doubled edges. This should be fixed but my time is limited :)

Yes, I would consider these old pipelines just as a bad arrangement and kick them out. Canny + Hough with a Gradient operator upfront doesn't make sense.


I am talking about the performance of the (latest-test build) DetectCirclesHough CvStage. I am on OpenCV 4.3.0 or 4.5.1.

I personally wish to be able to use a standalone Hough-Transform and deal with the Hough-Space manually. Of course it is necessary to understand what the Hough-Transform does mathematically!

Usually, a Hough-Transform on its own should achieve superb sub-pixel resolution and precision (depending on the amount of features in the image to transform) if exercised properly. But in our case, I see a jitter of several pixels in the detected position.
I might have to take a closer look into this after some assembly is done. It might be an idea to play around with the dp value, which seems to be fed directly to Imgproc.HoughCircles().
A good explanation of the parameters is here: https://dsp.stackexchange.com/questions/22648/in-opecv-function-hough-circles-how-does-parameter-1-and-2-affect-circle-detecti
(I did not yet focus on computational efficiency... that's a different story.)

In the meanwhile, I can stick to MatchTemplate, which is way more precise than the Hough*Stuff for what I need (i.e. nozzle tip calibration).

Clemens


ma...@makr.zone

unread,
Jan 30, 2021, 3:44:24 AM
to ope...@googlegroups.com

I agree, I see that jitter too, despite having cleaned up the pipeline stages. It seems to exist on a pixel level, so I assume it is some kind of "rounding" artifact. It never bothered me in real applications; my camera resolution is high enough that it doesn't matter.

I leave that to you!

_Mark

ma...@makr.zone

unread,
Jan 30, 2021, 4:46:57 AM
to ope...@googlegroups.com

Hi Jason

Your explanations make a lot of sense, though I have some remarks:

  1. I agree with almost everything, so consider us mostly on the same page. :)
  2. I can see how it makes sense to encapsulate single jobs in a large manufacturing environment. But I would argue that most of us users are still not in that environment. For true DIY users you would probably want to carry your adjustments into the next job and not start from scratch each time. If you have a prototyping shop, you might want to encapsulate the project, but not the job of that particular prototype revision, so again, tweaked settings would carry into the next job of that project (but not into other projects). So I think the unit of encapsulation should be determinable by the user.
  3. I would suggest a simple "Database" approach. A database would contain Parts and Packages. It could be as simple as a sub-directory. The user could Open..., Save, Close, Save As... the database.
  4. A job has an association with the database. If you open the job, it will automatically open the right database. If none is assigned yet, it would associate itself with the currently open database. If none is open, it could trigger the Open... dialog. You can reassign a job by opening another database while the job is loaded. You can make an existing database job-specific by saving it as a new database.
  5. A global setting could make job specific databases the standard and perhaps make it so that the files of job, parts and packages are saved side-by-side with the job name as a common prefix. You could then save the bunch on a cloud drive etc.
  6. I agree that Part and Package specific Pipelines, or vision parameters should be saved in the Parts, Packages i.e. in the database.
  7. I do not agree that all the Pipelines, or vision parameters, should automatically be copied to the Parts and Packages and no longer inherit defaults (your point 1). If such a "snapshot" is really needed, it should be made a deliberate user action "Freeze all part and package visions settings".
  8. Personally, I am very confident that I can make my pipelines fool-proof, and I will never clone them on a package or part level (just assign them by reference, maybe trim some values).
  9. For my situation (and I think this would be the same for many users), if a pipeline has to be modified, it is probably due to a global change, like when I finally replace that crappy LED ring with a better one, or when the OpenPnP version evolves and has better stages, or when I have bought parts that look different. I would absolutely hate the maintenance nightmare of not being able to evolve a handful of pipelines centrally.

  10. One tough nut you postponed: "I have an idea of how this works for feeders, but let's not get into it for now"...
  11. I do see difficult feeder-to-part association problems once we have multiple databases. The feeders remain physically loaded on the machine. But if the part in the feeder vanishes because I load a different database, OpenPnP has a problem.
  12. We have some slot feeders etc., but in most DIY/prototyping environments we don't have those job-specific feeder trolleys where you can afford to reserve tens of thousands of dollars worth of feeders and parts for one job alone and dock the trolley into the machine whenever you work on that particular job.
  13. The only solution I see at the moment would drag missing Parts and Packages referenced from Feeders over into any new database that OpenPnP touches. This also means that OpenPnP would always have to remember what database it had loaded before. And it would have a problem if that database vanished...
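Point 13 could be sketched like this (purely illustrative; databases modeled as dicts of part id to part data):

```python
def switch_database(feeders, previous_db, new_db):
    """Carry parts that are still physically loaded in feeders over into the
    newly opened database, if the new database doesn't know them."""
    for feeder in feeders:
        part_id = feeder["part_id"]
        if part_id not in new_db:
            # Requires remembering which database was loaded before; if that
            # database has vanished, there is nothing to drag over.
            new_db[part_id] = previous_db[part_id]
    return new_db

previous = {"R1-10k": {"package": "R0805"}}
current = switch_database([{"part_id": "R1-10k"}], previous, {})
# "R1-10k" is now present in the new database because a feeder still holds it.
```

The unhandled `KeyError` when the previous database is gone is exactly the open problem described above.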

_Mark

Am 29.01.2021 um 21:11 schrieb Jason von Nieda:
Hi Mark,

Here are some rough, quick thoughts. This is a topic I'd like to really dive into because it's starting to approach the stuff I'm really interested in changing in OpenPnP. So, some quick thoughts now to move the conversation along and then we can dig in more as we reach consensus:

To start with, a recap of my "vision": I eventually want to get to the point where a job is entirely self contained with regards to parts and packages. Changing some properties of a part when running job A should not have an effect when running job B. I know this is somewhat controversial, but this is based on real world experience of hundreds of hours of job setup and thousands of hours of machine operation.

Once a job works correctly, you want it to work like that every time you load it. Job setup and machine operation are often performed by different people. The operator will likely have to make tweaks as a job runs for the first production run, and it's critical they can do that while knowing they aren't blowing up their other jobs. Oftentimes those tweaks will not be "good". They are going to make little tweaks based on what is going on that day and it's highly likely those tweaks are not applicable to another job, even for the same part. We can wish that the operator will give deep thought to every change they make, but that is not the reality when deadlines approach.

An example of this is some of the recent issues with feeders and deleted parts. This should never happen. If one person comes along and deletes a part because they think it's not being used, or they are sick of the clutter, and that breaks another job, that is a bad result.

So, with that out of the way:

- Pipelines, or vision parameters, should ultimately move into the job file, along with part and package data (I have an idea of how this works for feeders, but let's not get into it for now)

bert shivaan

unread,
Jan 30, 2021, 7:04:33 AM1/30/21
to OpenPnP
Quick question: if I have a reel of parts and I change jobs, will the vision system still know how to see the parts, or did I lose that when the job closed?

ma...@makr.zone

unread,
Jan 30, 2021, 7:43:35 AM1/30/21
to ope...@googlegroups.com

See points 2 and 8.

_Mark

bert shivaan

unread,
Jan 30, 2021, 7:52:43 AM1/30/21
to OpenPnP

John Plocher

unread,
Jan 30, 2021, 1:05:29 PM1/30/21
to ope...@googlegroups.com

Jumping in late...
 

 One of the setups would be marked as the "Default". Obviously you would define "Default" for the most widely used setup (probably the one for small passives).


From a usability perspective, this definition of a default should be a selection in a preference somewhere, and never the literal name "Default" - which, with no context, has no meaning to the job operator.
My Chinesium machine does this wrong: the default choice that comes up checked is labeled with the Mandarin word for "default" - which provides absolutely no clue as to what that might be :-)
Passives? 0805 resistors? Anything vaguely rectangular? Who knows, especially since the other choices are things like "special", "big", and one that simply says "SOT" :-)

The process of setting up a job involves rationalizing a BOM, centroids, PCBs and feeders. At some point one needs to associate items in the BOM with feeders and vision pipelines.
In this repetitive effort, the following observations stand out:
  • Packages by themselves have a strong affinity to vision pipelines, with variations possibly coming into play for individual components (consider a 0603 package, with R,C,L, black body, blue body, tan body...)
  • Generalizations are important.  There should be (if possible) a single pipeline for "small rectangular passives" instead of a dozen specialized pipelines for [0805, 0603, 0402, 0201] X [R,C,L]
    • (That may mean smartness in the generic pipeline that can identify and use those specialized sub-cases, but don't unnecessarily expose them to the job operator!)
  • There is a finite and bounded set of required pipelines needed to support the vast majority of jobs, probably on the order of a dozen or less
  • Packages and BOMs used between jobs may be based on the same component library (and thus share learnings...), or may not (at which point, any previous learnings are invalid)
    • Consider a color sensitive pipeline with Job 1's R/0805 using grungy old stock black resistors, and job 2's R/0805 using automotive grade blue resistors...
Off to read the rest of this long thread...

  -John

ma...@makr.zone

unread,
Jan 30, 2021, 1:39:31 PM1/30/21
to ope...@googlegroups.com

Hi John

Thanks for the feedback. I'm very interested in practical experiences and therefore I'd like to ask some questions back:

So far I keep being surprised that custom pipelines are such an issue. The usual pipeline uses the contacts to recognize the part; whether those are pins, pads or balls, they are typically metal and shiny, so if you have a proper diffuser in front of the camera, the contacts will create nice bright reflective spots. It is true that I haven't worked with a wide variety of parts yet, but so far I only ever defined one pipeline and it worked without individual tweaking for all parts, from the smallest passive to the largest IC I have. The body of a part is completely irrelevant. I have bright white ceramic, brown ceramic, black matte plastic, ... everything is clearly less bright than the contacts. So I'm really surprised if you say we have to account for different body colors.

Again, my experience is severely limited, so I'm really interested what the practical issues are.

Sometimes I wonder if people have their camera exposure too high. My camera image is quite dark for human perception; the dynamic range must cover the contacts' reflections so they can be properly discriminated, and there must be no clipping even for the shiniest contacts.

https://github.com/openpnp/openpnp/wiki/Bottom-Vision#tips

_Mark

--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.

Jason von Nieda

unread,
Jan 30, 2021, 2:38:43 PM1/30/21
to ope...@googlegroups.com
Hi Mark,

I've responded to several of the points below, but I've also provided a summary at the bottom. I think it might be best if we focus on the summary and avoid the long explanations, otherwise I think we'll never get anywhere :)

On Sat, Jan 30, 2021 at 3:46 AM ma...@makr.zone <ma...@makr.zone> wrote:

Hi Jason

Your explanations make a lot of sense, though I have some remarks:

  1. I agree to almost everything, therefore consider us mostly on the same page. :)
  2. I can see how it makes sense to encapsulate single jobs in a large manufacturing environment. But I would argue that most of us users are still not in that environment. For true DIY users you would probably want to carry your adjustments into the next job and not start from scratch each time. If you have a prototyping shop, you might want to encapsulate the project, but not the job of that particular prototype revision, so again, tweaked settings would carry into the next job of that project (but not into other projects). So I think the unit of encapsulation should be determinable by the user.
I mostly agree with the part I highlighted ^. That's certainly the ideal. I think if we can come up with a good UI for this we'll have a big win.

One system I have a lot of experience with has a single central database and you can push and pull from it. When you start setting up a new job you have a bunch of part identifiers, usually either MPN or your own ID system, and you ask it to fill in any that are known from the database. Then you just fix up the ones that weren't. When you are done, you can push those, along with any changes, back into the database.

This actually works very well, if you can remember to push back into the database.

I think an important consideration is differentiating things that are normally true, and things that are true during a job run. It makes sense to have an authoritative set of parameters for, say, ATMEGA328P-AUR. That is a real world part that will never change. It has a height, width, length, body color, number of pins, package type, etc. There's no reason to have to input that data every time.

But for the run I'm doing TODAY, for whatever reason, the vision keeps failing the size check and if I just adjust the body length by 0.1mm it passes and the placements look good. Maybe there is a reflection, or
someone bumped the camera, or anything else. I don't want to change the body length for that part in the global sense, I just want to change it for this job so I can go home and have dinner :)

It is important that we remember that OpenPnP is not just for DIY. More and more people are using it for regular production, and my goal since day one has always been to eventually replace all other pick and place software :) Keeping the needs of regular production in mind is very important to me.

  3. I would suggest a simple "Database" approach. A database would contain Parts and Packages. It could be as simple as a sub-directory. The user could Open..., Save, Close, Save as... the database.
  4. A job has an association with the database. If you open the job, it will automatically open the right database. If none is assigned yet, it would associate itself with the current database. If none is open, it could trigger the Open... dialog. You can reassign a job by opening another database while the job is loaded. You can make an existing database job-specific by saving it as a new database.
I think we may have different ideas of simple :) That sounds like a lot of mental load to me, although I don't think we're too far off from agreeing. In my mind, the job "database" is just the part information embedded in the job. Just to reiterate, I very much want to get to a single file job. No more board.xml, no more dependency on parts.xml or packages.xml. All of that information is in the job. An entire OpenPnP setup to run a job should consist of machine.xml and job.xml. The job is self-contained and includes everything it needs to run - it just needs hardware (machine.xml) to run on.

  5. A global setting could make job-specific databases the standard and perhaps make it so that the files of job, parts and packages are saved side-by-side with the job name as a common prefix. You could then save the bunch on a cloud drive etc.
  6. I agree that Part- and Package-specific Pipelines, or vision parameters, should be saved in the Parts and Packages, i.e. in the database.
  7. I do not agree that all the Pipelines, or vision parameters, should automatically be copied to the Parts and Packages and no longer inherit defaults (your point 1). If such a "snapshot" is really needed, it should be made a deliberate user action: "Freeze all part and package vision settings".
  8. Personally, I am very confident that I can make my pipelines fool-proof, and I will never clone them on a package or part level (just assign them by reference, maybe trim some values).
  9. For my situation (and I think this would be the same for many users), if a pipeline has to be modified, it is probably due to a global change, like when I finally replace that crappy LED ring with a better one, or when the OpenPnP version evolves and has better stages, or when I buy parts that look different. I would absolutely hate the maintenance nightmare of not being able to evolve a handful of pipelines centrally.

I agree, that goes back to my point 1 and my general point that we should move away from maintaining a pipeline per part / package. We should only maintain settings that apply to the vision system associated with that part. This is really just your idea of central vision systems with trimmable values. The part that should be copied into the parts / packages is not the pipeline, but the trimmable values.

  10. One tough nut you postponed: "I have an idea of how this works for feeders, but let's not get into it for now"...
  11. I do see difficult feeder-to-part association problems once we have multiple databases. The feeders remain physically loaded on the machine. But if the part in the feeder vanishes because I load a different database, OpenPnP has a problem.
  12. We have some slot feeders etc., but in most DIY/prototyping environments we don't have these job-specific feeder trolleys where you can afford to reserve tens of thousands of dollars' worth of feeders and parts for one job alone and dock the trolley into the machine whenever you work on that particular job.
For what it's worth, in the production shop I work in, we reload most of the feeders for each job, or move around the ones that are already loaded with a part we're going to use. Here's a concrete example that I think actually applies very well to DIY:

- I've just finished job A, so the machine is set up for job A and the feeders are all loaded with parts for job A.
- It's time to set up job B. I go through the loaded feeders and remove any that aren't in job B, and move the ones that are into the correct slots if they are different.
- I unload the extraneous feeders, and reload them with parts for job B, and put them back on the machine.

This workflow could be supported very easily by some simple colors in the feeder list: White is ready, Yellow is used in this job but in the wrong position, Red is not used in this job.
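That three-color check boils down to a tiny classification function. A minimal sketch (plain Python with hypothetical names, purely to illustrate the idea - OpenPnP itself is Java and has no such function today):

```python
def feeder_status(feeder_part, feeder_slot, job_requirements):
    """Classify one loaded feeder against the current job.

    job_requirements maps part id -> slot the job expects it in.
    Returns "ready" (white), "wrong-position" (yellow) or "unused" (red).
    """
    if feeder_part not in job_requirements:
        return "unused"          # red: this part is not in the job
    if job_requirements[feeder_part] != feeder_slot:
        return "wrong-position"  # yellow: used in the job, wrong slot
    return "ready"               # white: nothing to do, click run
```

The feeder list UI would just run this over every mounted feeder and color the rows accordingly.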

A summary of concrete changes that I think we should make:

1. Move the vision settings into Machine Setup, as you described. The user can create vision setups and give them names. In a perfect world there would just be one.
2. Make changes to ReferenceBottomVision, or perhaps create a new default implementation, that has trimmable settings that focus on real world values. The implementation should interpret the real world values and feed them to the pipeline if and as needed. Importantly, these should not get into things like diameters, Hough settings, HSV masks, etc. Honestly, I don't think there should really be any more than things like Pre-Rotate and some limits.
3. The resulting trimmable settings should be stored with the part, not the package. I don't think we need another layer of abstraction here. I still want to eventually merge part and most of package.
4. Focus on making the default pipelines robust, and providing tools for making a camera setup produce images that work with the pipelines. I agree that users should not need to tweak pipelines for every part, and furthermore, I think that if they are tweaking pipelines at all we've got more work to do. The pipelines should be an implementation detail for developers, not a knob for users to turn.

When I look at the support requests that come in, I'd say the top three difficulties that people have with new setups are Fiducials, Bottom Vision, and general machine hardware setup, e.g. How do I make my solenoid turn on? I think your work on Advanced Motion and Issues and Solutions has gone a long way to fixing the third, and I think what we're talking about here could help a lot with the first two. The goal should be to simplify and automate as much as we can.

Jason




  13. The only solution I see at the moment would be to drag missing Parts and Packages referenced from Feeders over into any new database that OpenPnP touches. This also means that OpenPnP would always have to remember what database it had loaded before. And it would have a problem if that database vanished...

_Mark

On 29.01.2021 at 21:11, Jason von Nieda wrote:
Hi Mark,

John Plocher

unread,
Jan 30, 2021, 3:45:03 PM1/30/21
to ope...@googlegroups.com
On Sat, Jan 30, 2021 at 10:39 AM ma...@makr.zone <ma...@makr.zone> wrote:

Hi John

Thanks for the feedback. I'm very interested in practical experiences and therefore I'd like to ask some questions back:

So far I keep being surprised that custom pipelines are such an issue. The usual pipeline uses the contacts to recognize the part; whether those are pins, pads or balls, they are typically metal and shiny, so if you have a proper diffuser in front of the camera, the contacts will create nice bright reflective spots. It is true that I haven't worked with a wide variety of parts yet, but so far I only ever defined one pipeline and it worked without individual tweaking for all parts, from the smallest passive to the largest IC I have. The body of a part is completely irrelevant. I have bright white ceramic, brown ceramic, black matte plastic, ... everything is clearly less bright than the contacts. So I'm really surprised if you say we have to account for different body colors.


Part of the issue is semantics, and part is roles...

Semantics:
OpenPnP has the idea of pipelines that encompass a bunch of, let's say, basic optical functionality and cleanup issues. They also handle (in conjunction with other stuff) package identification/recognition, component rotation/alignment, polarity and verification.  (Maybe we need different words here instead of overloading 'pipeline', but you get the point...)

That former thing should be considered basic hygiene, and squarely within the realm of the machine creator to get right.  That is, if your lights don't illuminate correctly, if your camera isn't calibrated or doesn't have good depth of field or ... yadda ... , your machine and its pipelines aren't ready for "production" use, and nothing built on top of those pipelines will ever work well.

The latter assumes that the basic camera/vision/pipeline functionality just works, and that it can be used as a foundation for higher level services, such as OCR and QR recognition, tombstone detection, polarity detection, rotation/alignment, etc.  These high level features will depend on underlying pipelines - and they may extend them with additional stages that encompass their own custom functionality...

The machines I've used don't expose many "pipeline setup" details to the operator (other than as part of machine calibration/verification...).

Roles:
Operators are exposed to pipelines only as a list to choose from, as part of the BOM import/job setup process - the idea being that the operator is given a small set of predetermined functionality to choose from. Changing and tuning the machine's underlying configuration is a job for the tech/engineer/machine owner....

Much of this thread feels like machine design stuff, and not operator stuff. That's OK, just don't conflate the two too much :-)  Some of the Chinesium software hell comes from poor choices in that differentiation - too much, not enough, ...   Hopefully OpenPnP will evolve to "just right" :-)

Again, my experience is severely limited, so I'm really interested what the practical issues are.

Sometimes I wonder if people have their camera exposure too high. My camera image is quite dark for human perception; the dynamic range must cover the contacts' reflections so they can be properly discriminated, and there must be no clipping even for the shiniest contacts.


Again, my experience is that there is, in OpenPnP terms, often a pipeline for human visualization, with different pipelines for machine vision use cases - they are NOT the same.
 
  -John

John Plocher

unread,
Jan 30, 2021, 4:55:59 PM1/30/21
to ope...@googlegroups.com
On Sat, Jan 30, 2021 at 11:38 AM Jason von Nieda and Mark wrote:
It is important that we remember that OpenPnP is not just for DIY.
  3. I would suggest a simple "Database" approach. A database would contain Parts and Packages.

This is important, as it represents the component inventory available to place a job, and evolves (via purchasing availability...) independently from those jobs.
This database meets a Job at the point where a BOM's [value, package] is mapped to a component in a Feeder, which can happen either at placement job run time or before.  There are pros and cons both ways (think dynamic feeder changes as component reels are consumed...) 

My "happy place" workflow involves
  • Customers generally provide
    • Boards (physical pcbs fab'd from some CAD system design)
    • BOMs (a list of the components that go on a Board - Centroids, partID, [value, package], x, y, rotation....)
  • A Fab operation generally provides
    • Inventory (a logical source of available components, indexed by [value, package])
      • Many Inventories have more items than can fit in the available Feeders, 
      • the same item may be available in more than one Feeder, and 
      • many different Inventory items may map to a single [value, package] component (e.g., 2k2, 2K2, 2.2K, 2200, 2K...)
    • Feeders (a physical source for components in my inventory - reels, strips, trays, yadda that can be matched to the [value, package] tuple found in the Inventory).  
    • Machine (a device that can access a set of feeders, Boards and nozzles), 
  • Job Definition (a file that pulls all these together)
    • Produced by an "operator", combines a customer's artifacts with the fab operation's capacity
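The value-alias point above (2k2, 2K2, 2.2K, 2200 all naming the same resistor) is exactly the kind of thing an Inventory lookup has to normalize before matching. A minimal sketch, assuming only resistor-style R/k/M notation (illustrative Python, not any OpenPnP API):

```python
import re

MULTIPLIERS = {"": 1, "r": 1, "k": 1_000, "m": 1_000_000}

def normalize_value(value: str) -> float:
    """Collapse aliases like '2k2', '2K2', '2.2K' and '2200' into ohms.

    Handles the infix style (2k2 == 2.2k) and the suffix style (2.2K,
    2200). Real inventory matching would also need units for C/L parts,
    tolerances, etc. - this only shows the idea.
    """
    v = value.strip().lower()
    # Infix multiplier: 2k2 -> 2.2k, 4r7 -> 4.7 ohms
    m = re.fullmatch(r"(\d+)([rkm])(\d+)", v)
    if m:
        return float(f"{m.group(1)}.{m.group(3)}") * MULTIPLIERS[m.group(2)]
    # Suffix multiplier or plain number: 2.2k, 2k, 2200
    m = re.fullmatch(r"(\d+(?:\.\d+)?)([rkm]?)", v)
    if m:
        return float(m.group(1)) * MULTIPLIERS[m.group(2)]
    raise ValueError(f"unrecognized value: {value!r}")
```

With something like this, many Inventory items can map onto one [value, package] key without the operator having to clean up the BOM by hand.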

My workflow involves 
  • creating a Job definition: importing a BOM, choosing components from my Inventory, associating it with a Machine, and selecting which Feeders and nozzles I wish to use, and
  • running a job: importing a Job definition, mounting the "Feeders" in the Machine as per the job's requirements, calibrating everything, then cycling thru placing components on a set of Boards

 
I think we may have different ideas of simple :) That sounds like a lot of mental load to me, although I don't think we're too far off from agreeing. In my mind, the job "database" is just the part information embedded in the job. Just to reiterate, I very much want to get to a single file job. No more board.xml, no more dependency on parts.xml or packages.xml. All of that information is in the job. An entire OpenPnP setup to run a job should consist of machine.xml and job.xml. The job is self-contained and includes everything it needs to run - it just needs hardware (machine.xml) to run on.


Within the caveat that this single Job File is informed by some sort of inventory and feeder database, sounds good.
For me, the key is the ability to manage a parts inventory independently from the Jobs - and to be able to have existing Jobs take advantage of future changes to my Inventory. 

Examples:
The SparkFun CharmHigh workflow takes a database (Google Sheets) and a CAD file (Eagle/ KiCAD), munges them together, and creates a Job file for the CharmHigh machine.  Because it is database driven, all the required component details are also populated in the Job file (pick/place heights, component sizes, feeder/tray locations,...)

The YX/HGC workflow takes a CSV centroid file (and an optional BOM csv), but does NOT have the concept of a database/inventory.  Thus, it requires manual cross referencing and component detail entry to create a Job, for every Job.   While there are supposedly mechanisms to clone Jobs to transfer these details, they are not documented and don't work reliably.
 
For what it's worth, in the production shop I work in, we reload most of the feeders for each job, or move around the ones that are already loaded with a part we're going to use. Here's a concrete example that I think actually applies very well to DIY:

- I've just finished job A, so the machine is set up for job A and the feeders are all loaded with parts for job A.
- It's time to set up job B. I go through the loaded feeders and remove any that aren't in job B, and move the ones that are into the correct slots if they are different.
- I unload the extraneous feeders, and reload them with parts for job B, and put them back on the machine.

While this is the workflow that gets the most attention (because it allows job placement speed optimizations based on short head movements and large production runs), the other is probably more valid for this DIY crowd, where one's entire Inventory of components can be loaded into Feeders and pre-mounted onto a machine in unchanging locations.   In this other environment, optimizing for multiple different short production runs with minimal changeover time may be much more interesting.

$0.02,
   -John

bert shivaan

unread,
Jan 30, 2021, 5:21:12 PM1/30/21
to OpenPnP
I assume we can change the slot the feeder is in for job B instead of actually moving it after job A?


ma...@makr.zone

unread,
Jan 30, 2021, 5:45:53 PM1/30/21
to ope...@googlegroups.com

Hi Jason

I'm a bit confused now. :-)

  • On one hand you describe this central database with an "authoritative set of parameters", push pull capability, etc. that I assume can even go across multiple machines and certainly across jobs.  
  • On the other hand you proclaim "no more dependency on parts.xml or packages.xml".

How do these two go together??

I certainly embrace the first goal, but I don't see how that second goal can ever be achieved (at least before we all have these feeder trolleys ;-).

> The [info] that should be copied into the parts / packages is not the pipeline, but the trimmable values.

Good to know. In fact, may I offer another variant?

  1. Parts, Packages are still global and separate from jobs. They only have the canonical Vision setup assigned.
  2. The trimmable values would live inside the job as job specific Part/Package tweaks (lookup table).

Personally, I would still want them to have inheritance. I still believe that most adjustments needed in DIY are global and permanent. But we can have a "freeze in job" option for those that want it.
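That inherited lookup (job-specific tweak, then Part, then Package, then machine default; first hit wins) is cheap to implement. A sketch with plain dicts standing in for the real data model - the names here are hypothetical, not OpenPnP's actual structures:

```python
def resolve_setting(name, job_tweaks, part, package, defaults):
    """Resolve one vision setting through the inheritance chain.

    The first scope that defines the setting wins:
    job-specific tweak -> Part -> Package -> machine default.
    """
    for scope in (job_tweaks, part, package, defaults):
        if name in scope:
            return scope[name]
    raise KeyError(f"setting {name!r} not defined anywhere")
```

A "freeze in job" action would then simply copy every resolved value into the job-tweak layer, after which the lower layers can evolve without affecting that job.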

> The resulting trimmable settings should be stored with the part, not the package. I don't think we need another layer of abstraction here. I still want to eventually merge part and most of package.

I agree, not another layer of abstraction, but please consider merging the other way around! All settings on the package, nothing on the part. Even my DIY project has 34 parts with the R0402 package; they all look perfectly the same. I don't want to trim 34 sliders when one vision property is wrong! ;-)

If one part is special, clone the assigned package and tweak it. We can provide a one-click function on the part for that, and we could optionally make it permanent, i.e. not to be overwritten in the next ECAD import.

> I go through the loaded feeders and remove any that aren't in job B, and move the ones that are into the correct slots if they are different.

I still believe that DIY and prototyping users have a much more continuous usage of their feeders. The most likely scenario is a handful of evolving projects and products created with the same ECAD tools and libraries. The same engineer(s) doing things in similar ways each time, using proven parts and sub-circuits. Agreed, that's completely different from the assembly shop that does contract work for hundreds of customers. But I really hope OpenPnP will never abandon us hobbyists and prototypers. <:-/

Yes, there are still those that use StripFeeders and similar... I think it speaks for itself.

Most DIY feeders that are slot feeders in theory frankly don't look like they can be quickly replaced or moved. Even those that have pro feeders usually can't afford to buy a feeder for each part, and they will have to unload/reload reels. Judging from some videos, this is also no snap.

And unlike in pro shops, the changeover is not for a Million-PCBs run, but for a few PCBs, so it must be doable in reasonable time.

All this leads to a strategy that will do everything to keep as many feeders mounted and in place as possible. I really strongly believe OpenPnP must continue to support this workflow nicely.

_Mark

John Plocher

unread,
Jan 30, 2021, 6:36:53 PM1/30/21
to ope...@googlegroups.com
On Sat, Jan 30, 2021 at 2:45 PM ma...@makr.zone <ma...@makr.zone> wrote:

Hi Jason

I'm a bit confused now. :-)

  • On one hand you describe this central database with an "authoritative set of parameters", push pull capability, etc. that I assume can even go across multiple machines and certainly across jobs.  
  • On the other hand you proclaim "no more dependency on parts.xml or packages.xml".

How do these two go together?? 


I took this to mean that as part of OpenPnP's job file creation process, the contents of the parts, packages and BOM/Centroid files were smartly merged together into a single "Job" file, such that the Job file became a stand-alone item that contained everything necessary to run a Job on a machine that has a tuned and working machine.xml file.  You would bring a USB stick with myjob.xml on it, and would not need to include any parts.xml or packages.xml "library" files as well to make use of it...

  -John


Jason von Nieda

unread,
Jan 30, 2021, 6:40:53 PM1/30/21
to ope...@googlegroups.com
On Sat, Jan 30, 2021 at 4:45 PM ma...@makr.zone <ma...@makr.zone> wrote:

Hi Jason

I'm a bit confused now. :-)

  • On one hand you describe this central database with an "authoritative set of parameters", push pull capability, etc. that I assume can even go across multiple machines and certainly across jobs.  
  • On the other hand you proclaim "no more dependency on parts.xml or packages.xml".

How do these two go together??


The database is there to give you a place to put things between jobs, and someday, to be a cloud sourced place for part data. Let me describe an example:

- You are setting up your first OpenPnP job.
- It has two parts: R0805-1k-1% and ATMEGA328P-AUR which is a 32 pin TQFP.
- The database is currently empty, so after you've set the important properties of those two parts (width, height, length, nozzle tip, etc.) in the job, you "push" or "save" them to the database. It now contains those two parts. You don't have to do this - the job could just stand on its own, but you think you might use those parts in the future, so you do.
- As of that moment, the database and the job now have duplicate data for the two parts. You could delete those parts from the database and not affect the job, and vice versa.
- Now you are setting up your second job. You import your CSV and it contains a placement with ATMEGA328P-AUR. OpenPnP sees that part in the database and copies the information from the database to the job. Zero clicks, and if the part is correct in the database you are done.

If we extend this concept to the cloud, why should I have to look up the width, length, height, package, etc. of a ATMEGA328P-AUR when others before me have already done it? If they've pushed it to a shared database OpenPnP can just pull it in if it doesn't exist locally.

In a way this is actually quite similar to your concept of multiple databases, just without the mental load of remembering what goes with what. The job.xml has its "database", your OpenPnP instance has its database, http://openpnp.org has a public database, etc.
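The zero-click import described above reduces to copy-on-import with a lookup chain (local database first, then a shared/cloud one). A rough sketch, with hypothetical names and plain dicts standing in for the databases:

```python
def import_placements(part_ids, local_db, shared_db=None):
    """Fill in job part data by copying from known databases.

    Parts are copied, not referenced, so the resulting job stands on
    its own; part ids found nowhere are returned for manual setup.
    """
    job_parts, missing = {}, []
    for part_id in part_ids:
        if part_id in local_db:
            job_parts[part_id] = dict(local_db[part_id])
        elif shared_db is not None and part_id in shared_db:
            job_parts[part_id] = dict(shared_db[part_id])
        else:
            missing.append(part_id)
    return job_parts, missing
```

Because the job gets a copy, deleting a part from either database later does not affect the job, and editing the job does not touch the database - exactly the decoupling described above.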

I certainly embrace the first goal, but I don't see how that second goal can ever be achieved (at least before we all have these feeder trolleys ;-).

I'm not sure what this has to do with feeder trolleys? For what it's worth, none of what I am describing is for a giant contract assembler. The shop I work in is building its own boards for its own products and nothing more. We have about 50 feeders total and we have to swap reels in and out of them constantly. We run about 1000 boards per job, and we do a lot of prototypes of 10-20 boards each. I think for most DIYers this is also going to be the reality for auto feeders, and likely strip feeders as well.

This does not preclude reusing feeders in any way. If I open a job that uses ATMEGA328P-AUR and there is already a feeder on the machine configured with ATMEGA328P-AUR then I have nothing to do. Click run.

> The [info] that should be copied into the parts / packages is not the pipeline, but the trimmable values.

Good to know. In fact, may I offer another variant?

  1. Parts, Packages are still global and separate from jobs. They only have the canonical Vision setup assigned.
  2. The trimmable values would live inside the job as job specific Part/Package tweaks (lookup table).
Skipping this one for now to see if things are more clear with my explanation above.

Personally, I would still want them to have inheritance. I still believe that most adjustments needed in DIY are global and permanent. But we can have a "freeze in job" option for those that want it.

> The resulting trimmable settings should be stored with the part, not the package. I don't think we need another layer of abstraction here. I still want to eventually merge part and most of package.

I agree, not another layer of abstraction, but please reconsider to merge the other way around! All settings to the package, nothing on the part. Even my DIY project has 34 parts on the R0402 package, they all look perfectly the same. I don't want to trim 34 sliders when one vision property is wrong! ;-)

Is this your experience in practice? It isn't mine. A heavy 10uF 0805 responds very differently to speed, acceleration, vacuum pressure, etc. than a super light 0.1uF 0805. Ostensibly they are the same package, but they are very different parts and often require different settings. If I turn down the speed for the 10uF that doesn't mean I want to turn down the speed for the other 34 0805s. For the single specific case of vision, I agree - probably no need to differentiate, but in my experience I should never really need to tweak the vision settings for something this common anyway.

The reason I tend to focus on Part being the result of the merger is that Part is where there is likely to be the larger number of things a user would change. At the least, a Part almost always has a specific height that is not part of the package. I'm also likely to want to make nozzle tip and speed changes at the part level.

If one part is special, clone the assigned package and tweak it. We can provide a one-click function on the part for that, and we could optionally make it permanent, i.e. not to be overwritten in the next ECAD import.

> I go through the loaded feeders and remove any that aren't in job B, and move the ones that are into the correct slots if they are different.

I still believe that DIY and prototyping users have a much more continuous usage of their feeders. The most likely scenario are a handful of evolving projects and products created with the same ECAD tools and libraries. The same engineer(s) doing things in similar ways each time, using proven parts and sub-circuits. Agreed, that's completely different from the assembly shop that does contract work for hundreds of customers. But I really hope OpenPnP will never abandon us hobbyists and prototypers. <:-/

I think we actually agree here, and I hope my clarification above shows that. There is no need to insinuate that I'm trying to leave DIY behind. I know why I started this project :) I'm trying to streamline job setup and make it faster and easier to get a job up and running, and in the process reduce the amount of questions that have to get answered by experts.

Yes, there are still those that use StripFeeders and similar... I think it speaks for itself. 

Most DIY feeders, even those that are slot feeders in theory, frankly don't look like they can be quickly replaced or moved. Even those who have pro feeders usually can't afford to buy a feeder for each part, so they will have to unload/reload reels. Judging from some videos, this is no snap either.

And unlike in pro shops, the changeover is not for a Million-PCBs run, but for a few PCBs, so it must be doable in reasonable time.

All this leads to a strategy that will do everything to keep as many feeders mounted and in place as possible. I really strongly believe OpenPnP must continue to support this workflow nicely.

Yes, this is actually a core goal of what I'm trying to do. In a perfect world you would not have to swap any feeder between jobs. You could have 2000mm x 1000mm table with hundreds of strip feeders and just keep all your parts loaded at all times. If you load a job and the parts are already there - you are ready to run! More likely, you have all your common resistors and caps you keep loaded and then you have to swap in some unique ICs and values.

Thanks,
Jason



Jason von Nieda

Jan 30, 2021, 6:42:06 PM
to ope...@googlegroups.com


I took this to mean that as part of OpenPnP's job file creation process, the contents of the parts, packages and BOM/Centroid files were smartly merged together into a single "Job" file, such that the Job file became a stand alone item that contained everything necessary to run a Job on a machine that has a tuned and working machine.xml file.  You would bring a zip stick with myjob.xml on it, and would not need to include any parts.xml or packages.xml "library" files as well to make use of it...


This is exactly right :) And even more to that point, when someone is having trouble with a job, they can easily send us the job file and we have everything we need to help debug it.

Jason

ma...@makr.zone

Jan 31, 2021, 5:56:25 AM
to ope...@googlegroups.com

Hi Jason

(I really hope, this does not sound offending in any way. If it does, please be assured it was not meant so! )

Now I understand, you want to redundantly copy everything into jobs. Kind of like Eagle copies library parts/footprints into the board/schematic.

John said:

> For me, the key is the ability to manage a parts inventory independently from the Jobs - and to be able to have existing Jobs take advantage of future changes to my Inventory.  

That would be absolutely essential for me too, and it must be easy and fast to use. But ...

Begin Advocatus Diaboli.

So we'd have two Part and Package tabs, one central, one job oriented? Or two separate executables, one for operation and setup and one for library maintenance (like in those programs from the Eighties)? Plus we need an "update parts from central" functionality, and "push parts back to central"?

So the OpenPnP group would soon overflow with messages like: "Oh no, I pushed the outdated footprints from that job I resurrected from 2015 and overwrote central!", or "Darn, I should have only pulled in the new package (part?) nozzle tip compatibilities, but left the job's custom vision trims alone; now everything is overwritten. It took me two days to get those right!".

To make this work without too much pain, you'd need full-fledged three-way versioning between each job and the central database, so you could push or pull only the changed parts and attributes. You'd need conflict resolution if the same parts/attributes were changed on both sides. For cloud operation you'd also need very far-reaching version independence both ways, no more "I don't know this new attribute, so I give up" XML stubbornness.

We'd need super-complex push-pull dialogs where you can determine which attributes to actually sync and which parts/packages to include. And you'd still suffer the consequences when you didn't think of that dependency between two attributes, or of that update of the database two weeks ago...

I agree that this would open many new possibilities, like sharing part/package settings across very different machines and camera setups. But it's one hell of an endeavor to develop. And it's necessarily complex to use (at least if you want it to be both safe and work-efficient).

End Advocatus Diaboli.

IMHO, in comparison, the multiple Database approach does not sound so complicated and frightening after all.

You can have multiple databases to reflect multiple sequential or parallel "evolutions" of your workflow. This "evolution" also reflects the OpenPnP version, i.e. the capabilities/settings a version has at the time. You can let a job ride on the newest database, much like continuous integration is now common in software development. Or you can create a project/customer-specific database to encapsulate it from other, completely different projects/customers. It still evolves, but only when that project proceeds or the customer orders, and as a separate "species".

Or you can create a job "time capsule" with a momentary snapshot of your database (like I said, this could be made into a self-contained file bundle). One day you might decide to bring that job back "to the future". You can, because you can re-associate it with an evolved database. But that would then be all-or-nothing (you can't travel backwards in time). If you see it's too much work, you can choose a less far evolved database, or give up and connect it back to that old time-capsule database, without ever overwriting or losing anything. The only thing we'd do is clone missing parts/packages over into the newly associated database, i.e. it's purely non-destructive.

And it could be developed in sensible time.

Some of these databases could be cloud based. Communities could form around a specific database (they'd all have to run the right OpenPnP version). Backup/restore, versioning, sending in, etc. are all straight-forward. Databases are just files/directories.

>> Even my DIY project has 34 parts on the R0402 package, they all look perfectly the same. I don't want to trim 34 sliders when one vision property is wrong! ;-)

> Is this your experience in practice? It isn't mine. A heavy 10uF 0805 responds very differently to speed, acceleration, vacuum pressure, etc. than a super light 0.1uF 0805.

Yes, I would say that is my experience (how "practical" it really is, is another question). My parts all look essentially the same per footprint. For the capacitor example you used, I know because I actually wrote a small script in Eagle to automatically select the package based on capacitance and max voltage. I looked at Digikey and there is a very clear association between capacitance/voltage and footprint (and price). These parts grow in all dimensions, not just Z ;-). I guess part manufacturing is also easiest when everything remains the same per package. This page even suggests it's standardized:

https://www.ultralibrarian.com/resources/blog/2020/06/02/0402-package-footprint-resistor-sizes-and-parameters-ulc/

Going beyond capacitors, packages are often already discriminated in the ECAD. They're a different package, if they need that extra cooling or mounting pad, or the extra pad clearance for higher voltage, for example (all real world examples out of my project). Increasingly, ECAD software also maintains 3D data, which further drives package discrimination. Parts that are more "3D" and heavy like inductors have proprietary footprints anyways. Admittedly, I never planned with those tall electrolytic capacitors.

So 99 out of 100 times I would be quite confident that all parts of a package can be handled in completely the same way by OpenPnP. For the remaining 1% of the cases, I can clone-and-assign an extra package in OpenPnP.

The alternative is simply not feasible. I surely don't want to set nozzle tip compatibilities 34 times for the R0402 plus 14 times for C0402 (data from my project).

And for the next project I import, I don't want to go check which of the R0402 are new, and config them from scratch.

I'm a bit surprised here. Is it really done like that in the pro software you used?

Finally, I should also say, I was not the one advocating to put all the settings on one object, either Part or Package. I was fine with a dynamic inheritance. Set defaults globally, inherit to package. Override on the package, inherit to part. Override on the part, inherit to Part in Job. Override on Part in Job.

This may sound very complicated, but in fact the GUI could always look exactly the same; users would just choose at which level to override.
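For illustration only, that override chain could resolve roughly like this (a minimal sketch with made-up setting names and values, not a proposed implementation): the most specific layer that defines a setting wins, everything else is inherited.

```python
# Hypothetical sketch of the dynamic inheritance described above:
# global default -> package -> part -> part-in-job, most specific wins.

def effective(setting, layers):
    """layers are ordered most-specific first; first hit wins."""
    for layer in layers:
        if setting in layer:
            return layer[setting]
    raise KeyError(setting)

global_defaults = {"speed": 1.0, "vision": "Default", "nozzle_tips": ["N045"]}
package_c0805   = {"vision": "SmallPassives"}  # package-level override
part_10uF       = {"speed": 0.5}               # heavy part: slow it down
part_in_job     = {}                           # no job-specific tweaks

layers = [part_in_job, part_10uF, package_c0805, global_defaults]
effective("speed", layers)        # 0.5 (overridden on the part)
effective("vision", layers)       # "SmallPassives" (from the package)
effective("nozzle_tips", layers)  # ["N045"] (global default)
```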

_Mark

--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.

John Plocher

Jan 31, 2021, 10:55:02 AM
to ope...@googlegroups.com
Just a quick Sunday Morning comment:

Just like with eCAD systems, where I have NO DESIRE to use someone else's "unvalidated by me" footprints, I believe the same will be true here.
Having a Cloud Library source for footprints is great, as long as I can copy them to, and use them from a library of my own.  You see, until I've validated it myself, I can't afford to trust anyone else's assumptions.
I also doubt you would want any of my local tweaks to overwrite your cloud version, because that would invalidate other people's validated parts...

 It may be that there are/needs to be TWO job files - a master job that is not bound to parts/packages yet, and a transportable job that has all the parts and package stuff in it.  These could be ONE file if there was a function similar to EagleCad's "Update Library"...

  -John

Jason von Nieda

Jan 31, 2021, 11:04:35 AM
to ope...@googlegroups.com
On Sun, Jan 31, 2021 at 9:55 AM John Plocher <john.p...@gmail.com> wrote:
Just a quick Sunday Morning comment:

Just like with eCAD systems, where I have NO DESIRE to use someone else's "unvalidated by me" footprints, I believe the same will be true here.
Having a Cloud Library source for footprints is great, as long as I can copy them to, and use them from a library of my own.  You see, until I've validated it myself, I can't afford to trust anyone else's assumptions.
I also doubt you would want any of my local tweaks to overwrite your cloud version, because that would invalidate other people's validated parts...

I don't think it applies quite as much here. The reason you (we) use self made footprints is that we all have our own ideas of what the best footprint is for keepout, solder mask, paste aperture, etc.

OpenPnP doesn't need to know any of that data. Right now the only thing footprints are used for is visualization. As of today, the only thing OpenPnP *needs* to know is part height. That's literally the only required field for a part. In the near future I want to add length and width to that so that we can do tombstone checks in bottom vision, and so that we can automatically select or suggest a nozzle tip based on the available ones. Additionally adding the JEDEC standard package type could help with selecting different bottom vision strategies.

Currently, when I need to input this data I either copy it from a known part, or I grab the datasheet and find the right values.

My point being that the things I'm talking about being available publicly are fixed and constant values for a given MPN. And, of course, you don't have to use it :)



 It may be that there are/needs to be TWO job files - a master job that is not bound to parts/packages yet, and a transportable job that has all the parts and package stuff in it.  These could be ONE file if there was a function similar to EagleCad's "Update Library"...

Yes, I'll address this in a response to Mark, but his comment that my idea is like Eagle is spot on. And yes, when I say "push" and "pull" from the database I'm talking about similar functionality to Eagle's "Update Library", although I think we can do a lot better in terms of UI.

Jason


  -John


Jason von Nieda

Jan 31, 2021, 12:00:14 PM
to ope...@googlegroups.com
On Sun, Jan 31, 2021 at 4:56 AM ma...@makr.zone <ma...@makr.zone> wrote:

Hi Jason

(I really hope, this does not sound offending in any way. If it does, please be assured it was not meant so! )

Now I understand, you want to redundantly copy everything into jobs. Kind of like Eagle copies library parts/footprints into the board/schematic.


Yes, this is a great comparison and is exactly what I mean, and for the same reasons. I see immense value in knowing that if I open an Eagle schematic ten years down the road nothing will have changed. That schematic will look exactly as it did the last time I opened it. And, with OpenPnP, the job will run exactly as it did the last time I ran it. If I need to pull in some changes I can "Update Library" as John mentioned. But, in my experience, that is likely to never happen. If the job ran right last time, why change it?

John said:

> For me, the key is the ability to manage a parts inventory independently from the Jobs - and to be able to have existing Jobs take advantage of future changes to my Inventory.  

That would be absolutely essential for me too and it must be easy and fast to use. But  ...


Let's not conflate what John said with what I think you might mean: OpenPnP is not an inventory management system. To me "manage a parts inventory independently" means that you need to know how many parts were consumed in a job run, and nothing else. You import that (maybe automatically) into your inventory system. The common identifier is the part ID, which is probably either the MPN or your own ID system. This could be as simple as dumping out a CSV at the end of each job run. (Or making a REST request, or a million other things, but let's not design an inventory integration system right now too)

Begin Advocatus Diaboli.

So we'd have two Part and Package tabs, one central, one job oriented? Or two separate executables, one for operation and setup and one for library maintenance (like in those programs from the Eighties)? Plus we need an "update parts from central" functionality, and "push parts back to central"?

No, it could be as simple as showing a row in the Parts tab as "synced" or "not synced". Remember, for a job, the authoritative source of part information is the job. The initial dataset was pulled in or created during setup of that job.

So the OpenPnP group would soon overflow with messages like: "Oh no, I pushed the outdated footprints from that job I resurrected from 2015 and overwrote central!", or "Darn, I should have only pulled in the new package (part?) nozzle tip compatibilities, but left the job's custom vision trims alone; now everything is overwritten. It took me two days to get those right!".

Right now *everything* is global. Making a change anywhere makes it everywhere. Is the group currently full of messages like this?

To make this work without too much pain, you'd need full-fledged three-way versioning between each job and the central database, so you could push or pull only the changed parts and attributes. You'd need conflict resolution if the same parts/attributes were changed on both sides. For cloud operation you'd also need very far-reaching version independence both ways, no more "I don't know this new attribute, so I give up" XML stubbornness.

We'd need super-complex push-pull dialogs where you can determine which attributes to actually sync and which parts/packages to include. And you'd still suffer the consequences when you didn't think of that dependency between two attributes, or of that update of the database two weeks ago...

No, of course not.  First, if you have to manage this myriad of attributes for a part we've failed as developers. Setting up a job should basically consist of importing a CSV, setting up any parts that weren't already in the database, setting up any missing feeders and clicking "Go".

I agree that this would open many new possibilities, like sharing part/package settings across very different machines and camera setups. But it's one hell of an endeavor to develop. And it's necessarily complex to use (at least if you want it to be both safe and work-efficient).

End Advocatus Diaboli.

IMHO, in comparison, the multiple Database approach does not sound so complicated and frightening after all.

You can have multiple databases to reflect multiple sequential or parallel "evolutions" of your workflow. This "evolution" also reflects the OpenPnP version, i.e. the capabilities/settings a version has at the time. You can let a job ride on the newest database, much like continuous integration is now common in software development. Or you can create a project/customer-specific database to encapsulate it from other, completely different projects/customers. It still evolves, but only when that project proceeds or the customer orders, and as a separate "species".

Or you can create a job "time capsule" with a momentary snapshot of your database (like I said, this could be made into a self-contained file bundle). One day you might decide to bring that job back "to the future". You can, because you can re-associate it with an evolved database. But that would then be all-or-nothing (you can't travel backwards in time). If you see it's too much work, you can choose a less far evolved database, or give up and connect it back to that old time-capsule database, without ever overwriting or losing anything. The only thing we'd do is clone missing parts/packages over into the newly associated database, i.e. it's purely non-destructive.

And it could be developed in sensible time.

Some of these databases could be cloud based. Communities could form around a specific database (they'd all have to run the right OpenPnP version). Backup/restore, versioning, sending in, etc. are all straight-forward. Databases are just files/directories.

>> Even my DIY project has 34 parts on the R0402 package, they all look perfectly the same. I don't want to trim 34 sliders when one vision property is wrong! ;-)

> Is this your experience in practice? It isn't mine. A heavy 10uF 0805 responds very differently to speed, acceleration, vacuum pressure, etc. than a super light 0.1uF 0805.

Yes, I would say that is my experience (how "practical" it really is, is another question). My parts all look essentially the same per footprint. For the capacitor example you used, I know because I actually wrote a small script in Eagle to automatically select the package based on capacitance and max voltage. I looked at Digikey and there is a very clear association between capacitance/voltage and footprint (and price). These parts grow in all dimensions, not just Z ;-). I guess part manufacturing is also easiest when everything remains the same per package. This page even suggests it's standardized:

https://www.ultralibrarian.com/resources/blog/2020/06/02/0402-package-footprint-resistor-sizes-and-parameters-ulc/

Going beyond capacitors, packages are often already discriminated in the ECAD. They're a different package, if they need that extra cooling or mounting pad, or the extra pad clearance for higher voltage, for example (all real world examples out of my project). Increasingly, ECAD software also maintains 3D data, which further drives package discrimination. Parts that are more "3D" and heavy like inductors have proprietary footprints anyways. Admittedly, I never planned with those tall electrolytic capacitors.

So 99 out of 100 times I would be quite confident that all parts of a package can be handled in completely the same way by OpenPnP. For the remaining 1% of the cases, I can clone-and-assign an extra package in OpenPnP.

The alternative is simply not feasible. I surely don't want to set nozzle tip compatibilities 34 times for the R0402 plus 14 times for C0402 (data from my project).

And for the next project I import, I don't want to go check which of the R0402 are new, and config them from scratch.

I'm a bit surprised here. Is it really done like that in the pro software you used?

No. In fact, it works a lot like what I am describing in this thread :)  When I import Centroids I've almost always used the part before so it is imported automatically, and when it's not I just select a known good part and "Copy". For a job with 150 unique parts it might take 10 minutes to resolve the missing ones.

Finally, I should also say, I was not the one advocating to put all the settings on one object, either Part or Package. I was fine with a dynamic inheritance. Set defaults globally, inherit to package. Override on the package, inherit to part. Override on the part, inherit to Part in Job. Override on Part in Job.

This may sound very complicated, but in fact the GUI could always look exactly the same; users would just choose at which level to override.


There seems to be a lot of focus on footprint here, and I'm not sure why. We don't use footprints currently and I don't expect to in the future. Right now the primary attributes you set for a part are height and nozzle tip. Height comes from the datasheet or via measurement, and nozzle tip is usually an operator call, but it can easily be defaulted automatically by having the width and height of the part. Adding width and height as required, or at least suggested, gets us the ability to do better bottom vision, tombstone detection, bad pick detection, etc. without having to use pipeline hacks.

I think that with part width, length, and height we can default nearly everything else. I think this because I've been doing it in practice now for 2 years with hundreds of unique parts and dozens of jobs.

As an example, and by way of summary, imagine setting up a new job like this:

1. Create a new Job.
2. Import CSV which contains: Reference, X, Y, Rotation, Part ID.
3. The Placements table shows in red any Placements that are not fully configured. This will most often mean either the Part needs work or there is no Feeder.
4. The Parts table now shows *only* the parts used in this job. The required fields in the table are ID, Width, Length, Height, and Nozzle Tip. If any of those required fields are missing, or there is no feeder, the row is red.
5. Part IDs that were already in the database had their attributes imported automatically. Those that weren't can either be typed in manually, or selected from the database via a dialog, or dropdown, or whatever UI treatment we like.

For example, if you are importing R0805-10k-1% and you've never used that before, you could choose to import the attributes from R0805-10k-5% which you've set up previously. When you've set up a part manually or copied it from another part we push it back to the database because why not? It wasn't there before so no risk of overwriting anything.

In addition if we add length and width, we can automatically set / propose nozzle tip compatibility.

6. Set up any feeders that don't already have a part for the job.
7. Go!
(8.) There is no Packages tab :)

I don't see any reason it needs to be more complicated than this. You get a path to setting up a job very quickly, you build a solid database of part defaults over time, we have good data for defaulting almost everything, and you get jobs that don't suddenly change out from under you between runs.
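The red-row check in steps 3-5 above could be sketched roughly like this; the field names and the feeder model are made up for illustration, not OpenPnP's actual data model:

```python
# A row is flagged ("red") when required part fields are missing or when
# no loaded feeder holds the part. All names here are hypothetical.

REQUIRED = ("width", "length", "height", "nozzle_tip")

def row_is_red(part, loaded_feeders):
    missing_field = any(part.get(f) is None for f in REQUIRED)
    no_feeder = part["id"] not in loaded_feeders
    return missing_field or no_feeder

feeders = {"R0805-10k-1%"}
known = {"id": "R0805-10k-1%", "width": 1.25, "length": 2.0,
         "height": 0.5, "nozzle_tip": "N08"}
fresh = {"id": "R0805-10k-5%", "width": 1.25, "length": 2.0,
         "height": None, "nozzle_tip": None}  # attributes not yet copied in

row_is_red(known, feeders)  # False: ready to run
row_is_red(fresh, feeders)  # True: needs part setup and a feeder
```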

Jason



bert shivaan

Jan 31, 2021, 1:04:23 PM
to OpenPnP
With no packages tab, where will we input the package size?
Right now there are some that use a stage in their pipelines to call a script. With this script they check that the vision result matches the size of the package. 
The package size is hard coded in the script, I think, but it could easily be passed in from the package info instead.

I would hate to lose that possibility. 
Maybe you are not suggesting we will, just trying to be clear. 
I hope to help have openPNP do this checking automatically in the future.

Jason von Nieda

Jan 31, 2021, 1:08:00 PM
to ope...@googlegroups.com
Hi Bert,

Part would have Width, Length, and Height. Bottom vision would use Width and Length for size detect, tombstone detect, etc. To be clear, I am absolutely suggesting we make size detect a feature of ReferenceBottomVision, rather than a pipeline hack via the Size check.

I mentioned this here:

Adding width and height as required, or at least suggested, gets us the ability to do better bottom vision, tombstone detection, bad pick detection, etc. without having to use pipeline hacks.
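As an illustration only (not the proposed implementation), such a built-in size/tombstone check could compare the vision-detected body size against the part's nominal Width and Length within a tolerance; a tombstoned or edge-picked part typically reports a badly wrong size. The dimensions and tolerance below are made up:

```python
def size_ok(detected_w, detected_l, nominal_w, nominal_l, tol=0.15):
    """True if the detected size matches nominal within +/- tol (fraction).
    Dimensions are sorted before comparing, since the part may sit rotated
    90 degrees on the nozzle."""
    detected = sorted((detected_w, detected_l))
    nominal = sorted((nominal_w, nominal_l))
    return all(abs(d - n) <= tol * n for d, n in zip(detected, nominal))

# 0402 chip, nominal 1.0 x 0.5 mm body:
size_ok(0.52, 1.02, 1.0, 0.5)   # True: normal pick
size_ok(0.50, 0.55, 1.0, 0.5)   # False: likely tombstoned or edge-picked
```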
 
Jason


bert shivaan

Jan 31, 2021, 1:39:26 PM
to OpenPnP
Ok, maybe I am starting to get it now. 
It is still hard for me to grasp ditching the idea of a package. 
You guys are talking about ditching the package and have each part defined within the part?
So every 0402 cap will have the .04" length and .02" width saved with the part, instead of having a single package used on all the parts.
Correct?

Jason von Nieda

Jan 31, 2021, 1:48:47 PM
to ope...@googlegroups.com
Yes, that's exactly what I mean. I don't think Mark and I are on the same page yet, but I hope we get there :)

Packages tab goes away. Part tab gets (most of) the properties that were on Package. And we make it easy to copy properties between Parts.

Jason


bert shivaan

Jan 31, 2021, 1:56:50 PM
to OpenPnP
I hope I am wrong (I usually am, to be sure).
There is a pit in my stomach thinking about how we currently have to assign parts to nozzles, and I hope this will not become like that.

I am sure I am wrong, but can't help it.

John Plocher

Jan 31, 2021, 1:58:48 PM
to ope...@googlegroups.com
One of the POORLY handled parts of eagle is the idea of a common package definition shared across parts.
JEDEC et al define footprints such that a SOIC24 package is the same, whether it is an I2C chip or an H-Bridge or MCU part that is fab'd into it...
Having parts define their own SOIC24 package info seems to be asking for confusion and errors...

Sorry if I've missed something - busy day and PnP email is getting hard to follow ...  

  -John


Jason von Nieda

Jan 31, 2021, 2:00:01 PM
to ope...@googlegroups.com
Hi Bert,

First, no pits in stomachs. This is all just discussion not dictation. If it turns out to be a bad idea we don't do it.

Can you explain a bit more what you mean by this?

Here is how it would work in my mind:

Currently, we assign nozzle tips to packages in the Packages tab. You are using v2, right? This was way harder in v1, but in v2 it's pretty nice.

The change is that this Nozzle Tips tab just moves to the Part. So, yes, you do assign Nozzle Tips to a Part, but remember, once you've done it for a part you never have to do it again when you use that part, because it's in the database.

Jason



Jason von Nieda

Jan 31, 2021, 2:02:33 PM
to ope...@googlegroups.com
Hi John,

I agree - and that's how it works in OpenPnP right now! That's what I want to change. Right now you select a Package for each Part and that Package ends up being common for all the Parts that use it. I am suggesting we just put those properties on the Part to be managed per Part.

And again, we *don't use* footprints. This is a *complete list* of the properties I see as important on Part:
- ID
- Dimensions: Width, Length, Height.
- Vision Type: (Dropdown from Mark's list of vision systems)
- Compatible Nozzle Tips.

That's it.
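For concreteness, that "complete list" could be written out as a minimal record like the sketch below; the field names and example values are illustrative, not OpenPnP's actual data model:

```python
# The Part property list above as a minimal record: ID, dimensions,
# vision type, and compatible nozzle tips. Names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Part:
    id: str
    width: float   # mm
    length: float  # mm
    height: float  # mm
    vision_type: str = "Default"  # one of the named vision setups
    compatible_nozzle_tips: List[str] = field(default_factory=list)

r0402 = Part("R0402-10k-1%", width=0.5, length=1.0, height=0.35,
             vision_type="SmallPassives", compatible_nozzle_tips=["N045"])
```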

Jason


bert shivaan

Jan 31, 2021, 2:15:35 PM
to OpenPnP
Sorry Jason, thinking about V1.0
Have not tested v2.0 yet, still building my V2.0 machine

:)
Like I said, I am wrong as usual!

ma...@makr.zone

Jan 31, 2021, 3:56:54 PM
to ope...@googlegroups.com

Jason,

> The change is that this Nozzle Tips tab just moves to the Part. So, yes, you do assign Nozzle Tips to a Part, but remember, once you've done it for a part you never have to do it again when you use that part, because it's in the database.

I must really be missing something. Sorry, this idea really troubles me...

Steam emergency release: on.

So it would be expected of users to enter the same data over and over again, for each of the sometimes dozens of parts that belong to the same package? And if my database contains a hundred R0402s after a while and I import a new project, I have to take out the fine-tooth comb, go through all of my R0402s, find the ones that still don't have a nozzle tip compatibility assigned, no speed, no vision, no width, no length and no height ... and enter it all manually? No more smart feeder setup cloning based on the package level? And in the future: no per-package training data, no template image, no pipeline trimming?

Why should changing a 0402 resistor from 100k to 110k in the ECAD and re-importing it into OpenPnP mean that I have to run around and enter and potty-train all that redundant data from scratch? Isn't it enough that I have to organize a new feeder for the 110k resistor? Why the artificial workload?

And if one day I want to add a second compatible nozzle tip to my R0402s, or reassign them to an optimized vision setup, do I need to go in there and manually change a hundred parts?

I do hope I am missing something veeery big and juicy.

Steam emergency release: off.

_Mark

Jason von Nieda

unread,
Jan 31, 2021, 4:04:25 PM1/31/21
to ope...@googlegroups.com
Mark,

Did you read my response to your message from earlier, where I started with "Yes, this is a great comparison and is exactly what I mean"?

You would simply select a compatible part from the database and it would copy the data in. You could even have a "reference" part in the database if you like, say R0402-Master. Change it, select the parts you want to import it for and done. Two clicks to update everything.
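A sketch of that two-click update, assuming parts are plain records (the field list and the `copy_from` helper are hypothetical, not an existing OpenPnP API):

```python
from copy import deepcopy

# Hypothetical part records; only the shared, package-like fields are copied.
SHARED_FIELDS = ("width", "length", "height", "vision_type", "nozzle_tips")

def copy_from(master, parts):
    """Copy the shared fields of a 'reference' part onto each selected part."""
    for part in parts:
        for f in SHARED_FIELDS:
            part[f] = deepcopy(master[f])  # avoid sharing mutable values

master = {"id": "R0402-Master", "width": 0.5, "length": 1.0, "height": 0.35,
          "vision_type": "SmallPassives", "nozzle_tips": ["N045"]}
parts = [{"id": "R0402-100k"}, {"id": "R0402-110k"}]
copy_from(master, parts)
```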

Jason


ma...@makr.zone

unread,
Jan 31, 2021, 4:44:45 PM1/31/21
to ope...@googlegroups.com

Hi John

Somehow I can't parse your Email (I'm not a native speaker).

The first sentence seems to be along Jason's line.

> One of the POORLY handled parts of eagle is the idea of a common package definition shared across parts.

But then you seem to say the opposite:

> Having parts define their own SOIC24 package info seems to be asking for confusion and errors..

What do you really mean?

I'm sure you know all this, but for the purpose of the discussion and for other readers, I'd like to elaborate. Eagle has even three levels:

Parts (with value)->Devices->Packages

(a Device also has one or more Symbols for the Schematic).

A device represents the "I2C chip or a H-Bridge or MCU" you mention. The package stands for the "JEDEC et al. footprint" (and more). Some devices can be parametrized with a value (resistance, capacitance etc., mostly for the passives), others have a fixed value assigned. Each unique value/device/package combo forms a separate part when imported into OpenPnP. Devices and packages are fully independent: a package can be used by many devices, and vice versa, a device can have multiple packages.

The pads/pins of each device-associated package must be logically wired up to the symbol pins, so I can switch the package any time without touching the schematic. I find this very useful, and KiCAD missing this feature is one of the reasons I'm not in a hurry to jump (but I'm also still on pre-subscription Eagle 7.7 :-).
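What that device-specific wiring amounts to is a per-package map from symbol pins to pads. A toy model (all names invented), where the DFN variant routes its extra thermal pad to GND without the symbol changing:

```python
# One device, two packages: each package wires the same symbol pins to its
# own pads; the DFN variant adds a thermal pad on the GND net.
device = {
    "symbol_pins": ["IN", "OUT", "GND"],
    "packages": {
        "SOIC-8": {"IN": ["1"], "OUT": ["8"], "GND": ["4"]},
        "DFN-8":  {"IN": ["1"], "OUT": ["8"], "GND": ["4", "PAD"]},
    },
}

def pads_for(device, package, pin):
    """Pads connected to a symbol pin, for the chosen package."""
    return device["packages"][package][pin]
```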

I'm not saying this is the best, I just know Eagle and a bit of KiCAD.

Can you explain what you mean is "POORLY handled", John?

_Mark

ma...@makr.zone

unread,
Jan 31, 2021, 4:56:19 PM1/31/21
to ope...@googlegroups.com

> Did you read my response to your message from earlier, where I started with "Yes, this is a great comparison and is exactly what I mean"?

Yes, but that's an entirely different question, namely whether we want to have redundant copies of Parts and Packages in the job file or not.

This question is much more fundamental: whether we want to drop Package support in OpenPnP. 

> You would simply select a compatible part from the database and it would copy the data in. You could even have a "reference" part in the database if you like, say R0402-Master.

Seriously? I have to hand-curate "master parts" and on each import hand-assign them because OpenPnP has become too "simpleton" to care about ECAD-asserted Part->Package associations?

And what about this question?

>> And if one day I want to add a second compatible nozzle tip to my R0402s, or reassign them to an optimized vision setup, do I need to go in there and manually change a hundred parts?

Suddenly, this seems so old-school:

https://en.wikipedia.org/wiki/Database_normalization
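The normalization argument in miniature: when parts reference one shared package record, adding a nozzle tip is a single edit (a toy sketch of the principle, not OpenPnP's data model):

```python
# Normalized: each part references a shared package record instead of
# carrying its own copy of the package data.
packages = {"R0402": {"nozzle_tips": ["N045"]}}
parts = [{"id": f"R0402-{v}", "package": "R0402"} for v in ("100k", "110k", "1M")]

# One edit on the shared package...
packages["R0402"]["nozzle_tips"].append("N08")

# ...is seen by every part that references it, with no per-part bookkeeping.
tips_per_part = [packages[p["package"]]["nozzle_tips"] for p in parts]
```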

_Mark

John Plocher

unread,
Jan 31, 2021, 5:33:15 PM1/31/21
to ope...@googlegroups.com
_Mark asked:

What do you really mean?
I'm sure you know all this, but for the purpose of the discussion and for other readers, I'd like to elaborate. Eagle has even three levels:
Parts (with value)->Devices->Packages


I'd expand your statement to:
Eagle has a Library called ref_packages.lbr, with hundreds of common footprints
A new part is created in a library by associating a Symbol with a set of Packages.  While both are intended to be copied from elsewhere if possible (i.e., all resistors should share the same R Symbol, and all wide small outline 24-pin ICs should share the same SO24W package), figuring out how to copy/use an existing item was sufficiently difficult that all the tutorials simply showed how to create a new Symbol or Package from scratch - which is what most people did.
A new Project is created by adding parts from libraries and giving them values (schematic) and then placing specific packages on a PCB (board editor).

In Eagle, this gives us
    1. reference libraries exist that contain an (incomplete) set of industry defined packages and symbols
    2. distribution libraries are created to hold parts (part = symbol + set of packages, with pin bindings)
    3. personal libraries are often used, which are populated with parts that have been validated by a particular designer
    4. parts from a library are added to schematics and given values (in eagle, this also copies all the part definition info from a library into the project file)
    5. A chosen package associated with each part is placed on PCBs
    6. PCBs are processed to give BOM and centroid information to OpenPnP
    7. parts in libraries can be edited/updated, and
    8. the parts in libraries used by a project remain unchanged even if the original library is updated, until the project developer explicitly asks to update the library from the project file.
In the Eagle world the intent way back when was to use the ref_packages library of all common part footprints to (as Jason said) "select a compatible part from the database and it would copy the data in. You could even have a "reference" part in the database if you like".

Instead, we end up with footprint details for a part ensconced into every brd file, with no easy way to tell if a particular package has changed upstream - or needs updating in downstream projects.  That's what I meant by "poorly".

We all can see how well that went - not!  Every library that I "stole" parts from by copying them to my personal library seems to have its own different set of package footprints, either because nobody could figure out how to leverage the existing ref_packages, or because the ref_packages footprints changed over the years, or because designers felt that their variations were superior.  20 years later, the effort to rationalize my existing designs required me to manually visit every one of my historical projects, click on "update library", edit each sch and brd file to verify (and fix) all the errors that update introduced (mostly rotation / 0˚, but some origins as well...), ... yadda...  Over hundreds of projects, this took weeks of work to get to my goal of ensuring that ALL of the same parts used in all my board designs would use the exact same validated footprints.  Yet, still, if I find I need to update a package footprint for some reason, I'll need to repeat this manual across the board 'update library' effort again!  

I hear Jason - currently, OpenPnP's use of package metadata is limited, so merging Parts and Packages is low impact. I also hear Mark say that he has relatively few unique packages in his ecosystem, but hundreds of different parts based on those packages. In words from my experience (above), I'd say that Jason is talking about the effort to introduce a brand new component into a design flow, while Mark is also concerned about the adoption/migration effort involved in adopting this change across his existing board/Job inventory.

Getting rid of the eCAD concept of Package and its relationship to Parts feels like the wrong way to go, which probably means I'm tired and don't understand something.  Sorry to blather on and on...

  -John

John Plocher

unread,
Jan 31, 2021, 5:36:21 PM1/31/21
to ope...@googlegroups.com


On Sun, Jan 31, 2021 at 2:33 PM John Plocher <john.p...@gmail.com> wrote:
8. the parts in libraries used by a project remain unchanged 

Should have said: 8. the parts from libraries used by a project remain unchanged 

  -John

 

Jason von Nieda

unread,
Jan 31, 2021, 5:51:38 PM1/31/21
to ope...@googlegroups.com
On Sun, Jan 31, 2021 at 3:56 PM ma...@makr.zone <ma...@makr.zone> wrote:

> Did you read my response to your message from earlier, where I started with "Yes, this is a great comparison and is exactly what I mean"?

Yes, but that's an entirely different question, namely whether we want to have redundant copies of Parts and Packages in the job file or not.

This question is much more fundamental: whether we want to drop Package support in OpenPnP. 

True, it's two different things, but part of the same "direction", I believe.  I'd be happy to address each point separately, but it seems the thread is going in a lot of directions, so I'm trying to address as much as I can in each message.

For context, this started with my mostly agreeing with the direction you wanted to take with the core thought being 'but multiple "Bottom Visions" should be defined in the Vision tree'. My addition was that I'd like for the selection of the vision instance, and the associated trimming values to be stored in the job, rather than in parts.xml or packages.xml.

My goal in bringing up the Part / Package merger was to avoid extra work. As I've mentioned several times in this thread, and in others, I think Package should be merged with Part, and by merging I mean that Package goes away and its properties move to Part. With that context in mind, it would be a waste of work to create additional dependencies on Package.

So, really, we've ended up talking about three big changes:

1. Moving Bottom Vision settings into the Machine Setup tree, and being able to reference them through a dropdown, rather than putting all those settings on each Package. The original topic of the thread.
2. Merging Part and Package into just Part.
3. Storing Part (and via #2, Package) information in the Job, instead of in a central, global database.
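For #1, the effective setup could resolve through the Default -> Package -> Part chain Mark proposed; a minimal sketch (the function and setup names are invented):

```python
def resolve_vision(part_setting=None, package_setting=None, machine_default="Default"):
    """Return the effective Bottom Vision setup name: an explicit Part
    setting wins, then the Package's, then the machine-wide Default."""
    return part_setting or package_setting or machine_default

# Only non-default choices ever need to be stored.
effective = [
    resolve_vision(),                            # plain part in a plain package
    resolve_vision(package_setting="LargeQFP"),  # package-level override
    resolve_vision("CustomBGA", "LargeQFP"),     # part-level override
]
```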

So, perhaps it was my mistake for bringing it up.

#2 and #3 are long term goals for me, in an effort to ease job setup and management. They don't depend on #1 and they also don't depend on each other, so I think it would be fine to drop it from the discussion and just focus on #1. We can start new threads for #2 and #3 if folks would like to discuss those now.

Thanks,
Jason

VersTop NYC

unread,
Jan 31, 2021, 5:53:33 PM1/31/21
to ope...@googlegroups.com
The conditions under which those parts will be delivered remain the same.

--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.

VersTop NYC

unread,
Jan 31, 2021, 5:56:04 PM1/31/21
to ope...@googlegroups.com
It’s been high on your list for 7 years. Time’s up.

On Fri, Jan 29, 2021 at 8:00 AM ma...@makr.zone <ma...@makr.zone> wrote:
Hi everybody

I take geo0rpo's question as the trigger to finally discuss this.

This is my take of things, if you think that's a bad idea, please speak up:

> shouldn't the part bottom vision pipelines be at the "packages" tab

Absolutely, and that's very high on my list.

I'll even go one step further: The pipeline should not be in the package either, but multiple "Bottom Visions" should be defined in the Vision tree:

Bottom Vision Mockup.png

Each Bottom Vision setup would define the settings and pipeline together, optimized for each other (no change from today).

My assumption is that, if done well, you only ever need a handful of different pipeline/alignment setups. One of the setups would be marked as the "Default". Obviously you would define "Default" for the most widely used setup (probably the one for small passives).

In the Package you would then select one of the Bottom Visions setups from a combo-box, if (and only if) anything other than the "Default" is needed.

In the Part the same, but with the default given by the Package.

So we get a dynamically inherited assignment: Default -> Package -> Part.

The migration of existing machine.xml would be the most difficult part of implementing this. The migration algorithm must group all equal (Pipeline + "Pre-Rotate") settings, create Bottom Vision setups from them, find the "Default" as the most commonly used one, on both the Machine and Package level, and assign the non-defaults where they don't match.
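That grouping step amounts to counting the unique (pipeline, pre-rotate) combinations and electing the most common one as the Default; roughly (a sketch, with the settings reduced to a tuple):

```python
from collections import Counter

def migrate(part_settings):
    """part_settings: one (pipeline, pre_rotate) tuple per existing part.
    Returns (default_name, setups), where setups maps each unique
    combination to a freshly created Bottom Vision setup name."""
    groups = Counter(part_settings)
    setups = {combo: f"BottomVision-{i}" for i, combo in enumerate(groups)}
    default_combo, _count = groups.most_common(1)[0]
    return setups[default_combo], setups

default, setups = migrate([("p1", True), ("p1", True), ("p2", False)])
```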

Fiducials the same way.

What do you think?


_Mark


Niclas Hedhman

unread,
Feb 1, 2021, 2:44:50 AM2/1/21
to ope...@googlegroups.com
On Sun, Jan 31, 2021 at 10:44 PM ma...@makr.zone <ma...@makr.zone> wrote:
The pads/pins of each device associated package must be logically wired up to the symbol pins, so I can switch the package any time, without touching the schematic. I find this very useful, and KiCAD missing this feature, is one of the reasons I'm not in a hurry to jump (but I'm also still on pre-subscription Eagle 7.7 :-).

Unless I am completely missing your point, it is not true that "KiCad is missing this feature". In KiCad, Symbols and Packages are separated into libraries, and in the design process I choose which package goes with which part. When I came to KiCad from Eagle, the workflow felt "awkward", but now (IMVHO) I think it is far superior and I can't imagine going back to Eagle (maybe it has changed since 2016, 6.x).
A huge amount of effort has gone into creating high-quality packages for KiCad in the last few years; many are programmatically generated to avoid mistakes, with precise/unified naming, and most are complete with a 3D model as well. To the point where I now trust their libraries more than my own, and it no longer makes economic sense to spend time making my own footprints when I can spin a board for a few dollars just to validate the dimensions.


Cheers
Niclas

image.png

Jarosław Karwik

unread,
Feb 1, 2021, 4:50:21 AM2/1/21
to OpenPnP
Well, as in life - you cannot satisfy everybody.

Could you just make a simple interface which imports and exports parts/packages/jobs into something reasonable? Preferably readable and editable in Excel (so please no XML) and generic (compatible between OpenPnP versions, as much as possible). It would be trivial later on to make Python-based tools (well, PyQt I guess, to have a nice interface) to set up/store/restore production based on personal preferences.

I did something like this once for OpenPnP xml files (and I can prepare things like this), but there would have to be wider agreement on such an approach.

ma...@makr.zone

unread,
Feb 1, 2021, 8:07:31 AM2/1/21
to ope...@googlegroups.com

Hi Niclas

OT...

In Eagle you can place a device in the schematic, wire it up etc. Then later, you decide you want the smaller version of the device, you just right click the part on the schematic or board and select another package.

You change it from this...

... to this:


All the signals are still connected, the schematic is untouched. The extra thermal pad is properly connected to GND (if you look closely in the symbol, you see one extra connection to the GND pin).

I have many devices like that, because my project kept evolving to be smaller and smaller... ;-)

Last time I checked, KiCAD couldn't do that (yet).

_Mark


Mike Menci

unread,
Feb 1, 2021, 8:38:06 AM2/1/21
to OpenPnP
Enclosed EagleCad ref-packages.lib (rename .txt to .lib) - might be interesting...

ref-packages.txt

Niclas Hedhman

unread,
Feb 2, 2021, 7:35:45 AM2/2/21
to ope...@googlegroups.com
OT;

Yeah, KiCad has done this "forever" (at least since 4.x)

Edit on U1 brings up this dialog, with the Footprint choosable.

image.png


OR, one can edit that in tabular form, which is a lot neater when changing all R from 0603 to 0402.

image.png

OR, one can do it in Board layout....

AND one can have a footprint in the Parts Library itself (which I recall was the default in Eagle), when one "knows" that there will not be additional footprints.

I think one can conclude that convergence is happening.

Good Talk
Niclas




ma...@makr.zone

unread,
Feb 2, 2021, 12:40:51 PM2/2/21
to ope...@googlegroups.com

Test: Does GG create a new topic, if I change the subject line on the email?

___

Hi Niclas,

Continuation of https://groups.google.com/g/openpnp/c/7DeSdX4cFUE/m/5nCodBX3AQAJ

When I go to the library and open the device, I see only one Footprint selection.

And I see no way to wire the symbol to the footprint in a footprint specific way.

The way I understand it, it only works half-automatically, if the symbol pin names/numbers match the package contact names. And that won't allow for varied wiring, e.g. when you have that extra thermal pad, or when power devices have multiple pins/pads on the same signal to allow for higher currents on smaller packages.

_Mark

johanne...@formann.de

unread,
Feb 2, 2021, 12:52:03 PM2/2/21
to OpenPnP
Additional thermal pads or additional pins on the same signal are no problem.
The footprint "only" has to match the numbering, but it can have the same pin on multiple pads, or even pads without any connection.
E.g. a simple switch has two used "pins" in the schematic, but matching footprints could have pin number one once or twice (or even more), and the same for pin number two.
(some examples included)
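In data terms (a toy check, names invented): the footprint's pad numbering just has to cover the symbol's pins, however many pads share a number:

```python
# A simple switch uses pins "1" and "2" in the schematic; the footprint may
# fan each pin out to several pads and add unnumbered mechanical pads.
symbol_pins = {"1", "2"}
footprint_pads = ["1", "1", "2", "2", None]  # None = pad without a connection

numbered = {p for p in footprint_pads if p is not None}
compatible = symbol_pins <= numbered  # every symbol pin has at least one pad
```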

Screenshot from 2021-02-02 18-47-04.pngScreenshot from 2021-02-02 18-47-58.pngScreenshot from 2021-02-02 18-48-41.pngScreenshot from 2021-02-02 18-49-03.png

ma...@makr.zone

unread,
Feb 2, 2021, 12:55:47 PM2/2/21
to ope...@googlegroups.com

Johannes, could you please re-post this in the new thread?

_Mark
