Vision compositing corner position


Daniel Guerrero

Aug 26, 2022, 11:20:53 AM
to OpenPnP
Hi,

I am using the vision compositing to look at a rectangular part (fullpart.png attached). The default pipeline works without problems. I would like to get the position of the four corners. I notice that the pipeline executes MinAreaRect four times (1 per corner, the four shots attached). The MinAreaRect returns a result here [1]. Does this result contain the corner position information? Otherwise, could you please give some hints on how I could get the corner positions?

Any help will be greatly appreciated. 
shot4.png
shot3.png
shot1.png
shot2.png
fullpart.png

mark maker

Aug 26, 2022, 12:01:37 PM
to ope...@googlegroups.com

The problem is that bottom vision normally expects bright elements (contacts) that it detects. It dismisses both green elements and black stuff (usually the plastic body).

So for your use case, you need a pipeline that goes for the green only.

We had the almost same situation here: 

https://groups.google.com/g/openpnp/c/5e3US2kiliU/m/SM7hfeTGBgAJ

If you follow this discussion to the end, you should be able to do it (read it all the way through; I forgot some things at first and added them later).

If this is unclear, just ask.

_Mark

--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/6d8a26b6-10c7-4a60-b834-f0d39f8ac229n%40googlegroups.com.

Daniel Guerrero

Aug 26, 2022, 4:26:08 PM
to OpenPnP
Hi Mark,

I am actually working on the same project with Jim; thank you for your previous help, as well as your help here!

An update since Jim's post: it seems the pipeline was able to find the four corners. Our ultimate goal is to know the dimensions of the sides of our part from the relative positions of the corners. So we are hoping to obtain the exact position of each corner with respect to some global origin of our pick and place machine. The MinAreaRect produced one "Result" per corner, in this line of code here [1], and the result, when printed, looks like this:

 { {1462.8710652795055, 1294.7743770357495} 2925x2589 * -1.2860436190038762E-14 }

Are we able to extract the position information of each corner from this? If not, could you point us in the right direction?

Thanks,
Daniel

mark maker

Aug 27, 2022, 4:37:35 AM
to ope...@googlegroups.com

I'm not sure I understand. Are you trying to use the pipeline outside of bottom vision?

_Mark

Daniel Guerrero

Aug 29, 2022, 11:06:34 PM
to OpenPnP
Hi Mark. I would like to create a new stage in the pipeline which uses the position of the four corners found in the MinAreaRect stage. However, I don't know how to get these positions from the MinAreaRect results. 

Daniel

mark maker

Aug 30, 2022, 7:25:27 AM
to ope...@googlegroups.com

The stage is documented here:

https://github.com/openpnp/openpnp/wiki/MinAreaRect

As you see there (screenshot below), you tell it which edge(s) to detect (subject to rotation by expectedAngle). So it is these edges you can evaluate. The other edges are just arbitrarily shifted out of the camera view, so only the selected edges are visible:

MinAreaRect partial edges

   corner = center +/- (half width/height, rotated by angle)
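That formula can be sketched in Python. This is a minimal illustration, not OpenPnP code; the function name is mine, and the example values are taken from the Result quoted earlier in this thread (corner ordering is just a convention choice here):

```python
import math

def rotated_rect_corners(cx, cy, w, h, angle_deg):
    """Corners of a rotated rectangle: rotate the four half-extent
    offsets by the rectangle angle, then shift by the center."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        dx, dy = sx * w / 2.0, sy * h / 2.0
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

# Values from the MinAreaRect result quoted earlier:
# { {1462.871..., 1294.774...} 2925x2589 * -1.286...E-14 }
for x, y in rotated_rect_corners(1462.871, 1294.774, 2925, 2589, -1.286e-14):
    print(f"({x:.1f}, {y:.1f})")
```

At an angle of (numerically) zero, this reduces to center ± half-size in each axis, which is the sanity check to run first.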

Bottom Vision does all this for you, but it has to at least roughly know the size of the object.

I cannot help you more specifically if you don't explain to me how you intend to use this.

_Mark

Daniel Guerrero

Aug 31, 2022, 12:22:45 AM
to OpenPnP
Hi Mark. 

I wanted to explain what we intend to do in the following items: 

 1. We are placing "hand-made" parts that are all roughly the same size. 

 2. We would like the optical bottom vision pipeline to "survey" these parts to make sure that they are the right size. 

 3. Our "hand-made" parts are very highly reflective, using basically the best reflective material we can find (Vikuiti foil). We can't use front lighting because of the reflections, so we use back lighting. We have a pretty uniform green back light. So the parts we see look black on a green background. 

 4. We have an existing pipeline stage that works, using the results from MinAreaRect as a starting point, then we use Hough lines to find the actual boundaries, and write the measurements in a .csv file. We check the size and if it is OK, we place the part. 

 5. One of the steps in our existing pipeline stage is to find the 4 intersection points of the Hough lines that form the boundaries of our parts. However, this method has accuracy issues when one of the lines is pretty vertical. Also, the binning of the Hough lines algorithm is not so great. 

 6. We feel your new algorithm (vision compositing) could be a lot more accurate, especially as you are always using the center of the field of view, which minimizes lens distortion. Our parts take up a large part of our field of view (at least 50%), so without compositing we are limited by how well the lens calibration correction works. Our goal for a 5 cm wide part is to find the size to better than 100 microns. We can achieve such accuracy "some of the time" with our existing algorithm. So we would love to be able to use the 4 "corner points" instead of the Hough lines method to improve accuracy. 
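Once four corner points are available (points 5 and 6 above), side lengths follow from plain Euclidean distances, with no Hough-line intersections involved. A minimal sketch with hypothetical coordinates (the numbers and tolerance are illustrative, not from the thread):

```python
import math

def side_lengths(corners):
    """Side lengths of a quadrilateral, corners given in order."""
    return [math.dist(corners[i], corners[(i + 1) % 4]) for i in range(4)]

# Hypothetical corners (mm) of a roughly 50 mm wide trapezoidal part:
corners = [(0.0, 0.0), (50.0, 0.0), (49.5, 20.0), (0.5, 20.0)]
widths = side_lengths(corners)

# Compare against nominal side lengths with a 100 micron budget:
nominal = [50.0, 20.006, 49.0, 20.006]
ok = all(abs(m - n) < 0.1 for m, n in zip(widths, nominal))
print(widths, ok)
```

The accuracy of the result is then limited only by how accurately the four corner points themselves are measured.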

Thanks, 
Daniel

mark maker

Aug 31, 2022, 4:58:19 AM
to ope...@googlegroups.com

> We are placing "hand-made" parts that are all roughly the same size.

I think you could use the whole bottom vision (alignment) function of OpenPnP for that, instead of trying to build your own motion, pipeline and math.  It would do all the positioning and calculations for you, including iteration (when you enable pre-rotate, which is always better, even when no rotation is needed).

Create a Part and Package.

Enter the rough part size as a single pad footprint (not the body!) of the Package.

Modify the standard pipeline as discussed earlier.

Use regular pick and alignment.

The size will be reported as the end result, as a properly composited RotatedRect (reported in the log, which you could parse).
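The exact log line format depends on the OpenPnP version, but assuming the RotatedRect prints in the `{ {cx, cy} WxH * angle }` form seen earlier in this thread, a parsing sketch might look like this (the regex and function are mine, not part of OpenPnP):

```python
import re

# Assumed format, matching the Result printed earlier in this thread:
# { {1462.871..., 1294.774...} 2925x2589 * -1.286...E-14 }
ROTRECT_RE = re.compile(
    r"\{\s*\{([-\d.Ee+]+),\s*([-\d.Ee+]+)\}"
    r"\s*([-\d.Ee+]+)x([-\d.Ee+]+)\s*\*\s*([-\d.Ee+]+)\s*\}"
)

def parse_rotated_rect(line):
    """Extract (cx, cy, width, height, angle) from a logged
    RotatedRect string, or None if the line doesn't contain one."""
    m = ROTRECT_RE.search(line)
    if not m:
        return None
    cx, cy, w, h, angle = (float(g) for g in m.groups())
    return cx, cy, w, h, angle

print(parse_rotated_rect(
    "{ {1462.8710652795055, 1294.7743770357495} "
    "2925x2589 * -1.2860436190038762E-14 }"
))
```

Verify the regex against a few real log lines from your own installation before trusting it, since the format is an assumption here.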

You could even do a Part Size Check (PadExtents) with tolerance:

https://github.com/openpnp/openpnp/wiki/Bottom-Vision#part-configuration

If I missed something, tell me. Otherwise, you need to completely let go of your earlier elaborate plans and let OpenPnP do its job 😁

_Mark

Daniel Guerrero

Aug 31, 2022, 10:32:49 AM
to OpenPnP
Hi Mark. 

In my previous message I forgot to mention that the parts have an isosceles trapezoid shape: 2 parallel sides (the bases) and 2 sides at 1.25 degrees from perpendicular (the legs). The wider base is about 1 mm wider than the other. We would like to sense that using the bottom vision pipeline and record it.
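As a sanity check on that geometry: assuming both legs lean 1.25° from perpendicular, the base difference is 2·h·tan(1.25°) for a part of height h, so a ~1 mm difference implies a height of roughly 23 mm (the height is my inference, not stated in the thread):

```python
import math

LEG_ANGLE_DEG = 1.25  # leg angle, measured from perpendicular to the bases

def base_difference(height_mm):
    """Difference between the two bases of the isosceles trapezoid:
    each leg contributes height * tan(angle) on its side."""
    return 2.0 * height_mm * math.tan(math.radians(LEG_ANGLE_DEG))

def height_for_difference(diff_mm):
    """Part height that produces a given base difference."""
    return diff_mm / (2.0 * math.tan(math.radians(LEG_ANGLE_DEG)))

print(height_for_difference(1.0))  # roughly 22.9 mm
```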

Thanks,
Daniel

Daniel Guerrero

Aug 31, 2022, 1:03:04 PM
to OpenPnP
The parts/trapezoids are placed on a PCB. The PCB will be loaded with 64 trapezoids, of 4 different types. Each type varies by a few mm from the next larger or the next smaller. The PCB is a pizza-pie shaped board with the parts planned to be placed with very tight tolerance, such that the entire board is covered.

Thanks,
Daniel

mark maker

Aug 31, 2022, 1:46:40 PM
to ope...@googlegroups.com

Well that's tricky. As you can guess, the corner detection expects 90° corners.

The only thing that comes to mind is to mask the corners small enough that the 1.25° don't matter.

You can trick OpenPnP into doing it by building the "footprint" out of multiple pads.

The imaginary trapezoid hull must be symmetric in X, and "halved" in Y.

Two outer bars hug the isosceles trapezoid. There will be a minimal error due to the 1.25°.

Two inner bars are larger and deliberately asymmetric, so OpenPnP cannot take them as candidates for corners.

Because these inner bars form a concave hull, OpenPnP must isolate the corners of the outer bars, i.e. it must apply circular masks (marked yellow).

The height of the bars is probably difficult to get right:

Large enough that the corner can be detected well. Plus they must be larger than the pick tolerance of your parts (which must include the size tolerance).

But small enough that the 1.25° won't matter.
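That trade-off can be made concrete: across a bar of height h, a leg slanted 1.25° from vertical shifts horizontally by h·tan(1.25°), which is the error from treating it as a straight vertical edge. A sketch with hypothetical limits (the pick tolerance value is assumed; the 100 µm budget is the goal stated earlier in the thread):

```python
import math

LEG_ANGLE_DEG = 1.25

def edge_offset(bar_height_mm):
    """Horizontal shift of a 1.25-degree leg across the bar height,
    i.e. the error of treating the slanted leg as a vertical edge."""
    return bar_height_mm * math.tan(math.radians(LEG_ANGLE_DEG))

pick_tolerance_mm = 1.0  # assumed, not from the thread
error_budget_mm = 0.1    # the 100 micron goal mentioned earlier

for h in (1.5, 3.0, 5.0):
    err = edge_offset(h)
    usable = h > pick_tolerance_mm and err < error_budget_mm
    print(f"bar height {h} mm: slant error {err * 1000:.0f} um, usable={usable}")
```

With these numbers, a bar a few mm tall stays well inside the budget, while a 5 mm bar already eats the entire 100 µm on slant error alone.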



Note: OpenPnP will give you back the bounding rectangle.

_Mark

Daniel Guerrero

Aug 31, 2022, 10:08:16 PM
to OpenPnP
Thank you for the suggestion. If we get the four corners using the inner and outer pads as you suggest, is there a way to get the four associated X-Y points? We could use them in our existing pipeline stage to find the trapezoid sides.

Daniel

mark maker

Sep 2, 2022, 2:01:18 AM
to ope...@googlegroups.com

Not in the pipeline. The pipeline is called separately for each corner shot, the compositing happens outside the pipeline.

But I could probably collect the detected corner locations and then add the corner location array to the Vision.PartAlignment.After script parameters, so you could do in a script, whatever it is you want to do with that information.

_Mark

Daniel Guerrero

Sep 8, 2022, 5:03:00 PM
to OpenPnP
Hi Mark,

We managed to collect the four outputs from the MinAreaRect (one per corner). However, when we compute corner positions from the MinAreaRect result (rectangle size and center), it seems that the compositing (Method=SingleCorners, MaxPickTolerance=5.0, MinAngleLeverage=0.7) is getting the corners of the entire image rather than the corners of our part. 

To debug this, we looked at the input pixels to MinAreaRect (shots attached). We think the reason for getting the wrong rectangle is that the input pixels contain additional contours (masked circle and image border) that pass the brightness threshold and are being used to calculate the rectangle in MinAreaEdges. Given our input pixels, does this make sense to you? Is it possible to remove the additional contours by adding other steps in the pipeline (attached)?

Thanks,
Daniel
pipeline.xml
minarearect_input.png
shot.png

mark maker

Sep 9, 2022, 3:51:29 AM
to ope...@googlegroups.com

Are you guys paying attention? We should be long past that.

Please re-read my first answer in this conversation, especially the instructions about the pipeline that I linked (my earlier answers to Jim, read until the end).

_Mark

mark maker

Sep 9, 2022, 11:45:55 AM
to ope...@googlegroups.com

Sorry, my reaction was a bit harsh. Bad work day. 😇 But the pointers are still valid.

Daniel Guerrero

Sep 14, 2022, 8:43:50 AM
to OpenPnP
Hi Mark,

Sorry we missed parts of your comments in the other thread! We looked back at it in detail and found the problem in our pipeline. We fixed it, and now the compositing gets the four corners of the part (shots attached). We are now working on reconstructing the overall trapezoid shape.

Thanks again for your help.

Best,
Daniel
2_1.png
3_1.png
1_1.png
4_2.png
3_2.png
4_1.png
1_2.png
2_2.png