







On 2 Jun 2021, at 19.39, fxframes <mp5...@gmail.com> wrote:
Thanks for the suggestion.
From the images below it does seem to turn a bit more than 90º.
Would it be the case of adjusting steps/mm?
Start: <Screenshot 2021-06-02 at 18.34.54.png>  End: <Screenshot 2021-06-02 at 18.35.29.png>
On 2 Jun 2021, at 23.34, fxframes <mp5...@gmail.com> wrote:
Interesting... I’ll take a look tomorrow. Thanks.
Hi fxframes
Some thoughts:
- If you use pre-rotate, then a moderately wrong steps/degree setting is automatically compensated. So during these tests, do not use pre-rotate. But during production do use it (it is way better).
- If it is a steps/degree issue, then different placement angles should result in different angle offsets: parts at 0° should show no offset, at 90° some offset, at 180° double that (see the sketch after this list). If this does not happen, then it is not a steps/degree issue.
- If it is a steps/degree issue, then OpenPnP is not the place to fix it. Check your controller settings instead. The images you posted earlier would suggest that.
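A minimal sketch of that diagnostic (Python, purely illustrative; both steps/degree values below are made up) just to show how a scaling error grows linearly with the commanded angle:

configured_steps_per_degree = 8.888   # what the controller is set to (assumed)
actual_steps_per_degree = 8.533       # what the mechanics really need (assumed)

for commanded in (0, 90, 180):
    # The controller emits commanded * configured steps; the axis really turns
    # that step count divided by the true steps/degree.
    actual = commanded * configured_steps_per_degree / actual_steps_per_degree
    print(f"commanded {commanded:3}° -> actual {actual:6.1f}° (offset {actual - commanded:+.1f}°)")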


> I've disabled pre-rotation BUT the parts still rotate on their way to the bottom camera?
> ...but both rotate ~180º before arriving at the bottom camera.
The feeder itself and the part inside the tape can also each have
a rotation, so this is normal. The difference is that the part
must be visible at 0° in the camera when it is aligned. The
rotation 0° means: "I see the part in the same orientation as when
I look at the footprint as drawn in the E-CAD library".
Conversely, with pre-rotate: "I see the part as it will be placed
on the PCB on the machine". So it will already have the rotation
of the design plus the rotation of the PCB itself. The advantage
of pre-rotate is that any inaccuracies through the rotation
(including runout, backlash etc.) will already be compensated out.
Important for large parts, where a few degrees of offset will result in relatively large pad offsets, due to leverage.
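To make the two modes concrete, here is a tiny illustrative sketch (not OpenPnP code; the function name and arguments are made up) of what angle the part is expected to show at the bottom camera:

def expected_camera_angle(design_rotation_deg, board_rotation_deg, pre_rotate):
    """Angle the aligned part should show at the bottom camera (illustrative)."""
    if pre_rotate:
        # "I see the part as it will be placed": design rotation plus board rotation.
        return (design_rotation_deg + board_rotation_deg) % 360
    # Without pre-rotate the part is presented at footprint orientation, i.e. 0 deg.
    return 0.0

print(expected_camera_angle(90, 10, pre_rotate=False))  # 0.0
print(expected_camera_angle(90, 10, pre_rotate=True))   # 100.0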

_Mark


Good catch Tony! I'll fix it.
@fxframes, in the meantime you can set it manually. I would be surprised if this is related to the problem here, though...
https://github.com/openpnp/openpnp/wiki/GcodeDriver%3A-Command-Reference#home_command
_Mark
» The feeder itself and the part inside the tape can also each have a rotation, so this is normal. The difference is that the part must be visible at 0° in the camera when it is aligned. The rotation 0° means: "I see the part in the same orientation as when I look at the footprint as drawn in the E-CAD library”.




> If I’m understanding this correctly Mark, should the part be correctly aligned before it leaves the bottom camera and is moved over to the pcb to be placed?
Well, not really. When pre-rotate is disabled, what you
see is what OpenPnP initially thinks is zero degrees. So
what you effectively see is the pick angle error.
In your images it is huge and both the part and the nozzle angle
(visible as the crosshairs) are strange.
Something is really wrong here.
_Mark

The Strip Feeder calculates the feeder rotation by looking at the sprocket holes. The Rotation in Tape is on top of that. Looking at your photos, your parts may well be turned 180°.
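As an aside, a rough Python sketch of that geometry (purely illustrative, not the ReferenceStripFeeder implementation; the coordinates are made up):

import math

def tape_rotation_deg(hole_a, hole_b):
    # hole_a, hole_b: (x, y) machine coordinates of two detected sprocket holes.
    return math.degrees(math.atan2(hole_b[1] - hole_a[1], hole_b[0] - hole_a[0]))

feeder_rotation = tape_rotation_deg((100.0, 50.0), (104.0, 50.1))  # nearly horizontal tape
rotation_in_tape = 180.0          # e.g. parts loaded "backwards" in the pockets
pick_angle = (feeder_rotation + rotation_in_tape) % 360
print(round(feeder_rotation, 2), round(pick_angle, 2))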

Now that I look at your feeder image: there is only one hole highlighted in red!

It is possible that all this comes from faulty Strip Feeder sprocket hole recognition! Yes, that could finally explain why both the part and the nozzle are rotated so oddly.
Are you sure the feeder sprocket hole recognition pipeline works well? After all, these seem to be transparent or black plastic tapes, which are very difficult!
I never managed to create a reliable pipeline with these on my PushPullFeeder:

I only recently developed a new vision stage that can do it reliably, but that is not yet ready for multi-hole recognition.
https://github.com/openpnp/openpnp/pull/1179#issuecomment-823295084
_Mark
» Now that I look at your feeder image: there is only one hole highlighted in red!
I'm no expert on the strip feeder; I always thought it needs multiple holes. But maybe that's only for Auto Setup.
The last thing that comes to my mind is that the strip feeder will do very crazy things when not set up exactly right, for instance when the reference/last holes do not match reality. The strip feeder only tries to correct the tape's "course"; it cannot correct its initial position.
See this animation:
https://makr.zone/strip-feeder-crazy.gif
So if you perhaps shifted your home coordinate and all your feeder holes are off by a certain distance, then the strange pick angle could happen.
But then again, alignment should fix this. Gotta go to bed...
_Mark

Thanks for reporting this back. This was one heck of an odyssey!
;-)
> Mark, just regarding the strip feeder pipeline, if you ever figure out if it needs to find multiple holes even in manual setup mode, would you please let me know?
Yeah, I had a look. It does not need more than one hole in update mode; multiple holes are only needed in Auto Setup.
_Mark
Oh, and don't forget to re-enable pre-rotate. Like I said, it
gives better results.
On 5 Jun 2021, at 20:51, Clemens Koller <cleme...@gmx.net> wrote:
Hi!
This doesn't look okay or robust in my opinion.
The DetectCirclesHough is supposed to detect all circles in the image (unless masked).
My thoughts are:
Do not use the MaskCircle d=200, as you cannot really see how robust the circle detection is. I recommend using a MaskRectangle if necessary.
Do not use BlurGaussian at all (I tend to say: generally in OpenPnP) - use BlurMedian 5 because of its edge- and round-corner-preserving behaviour.
(In OpenPnP, the only reason I am using BlurGaussian followed by a Threshold operation is to do some lazy erosion/dilation.)
Do not use DetectEdgesCanny to prepare an image for DetectCirclesHough, as DetectCirclesHough (unfortunately?) already has a Canny edge detector built in. If you use DetectEdgesCanny before DetectCirclesHough, you get the Hough operation working on edges of edges (= two edges), which leads to positional jitter.
I strongly suggest replacing OpenPnP's default image pipelines with that in mind.
Attached is my image pipeline for regular 1608M resistors in a white tape on black background.
Since all my tapes are aligned horizontally quite close to each other, I am using a MaskRectangle accordingly. But this is optional.
Greets,
Clemens
<imgpipeline-R1608M-Hough.xml>
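For readers who want to see the shape of that advice outside the pipeline editor, here is a hedged OpenCV (Python) sketch - not an OpenPnP pipeline; the file name, radii and thresholds are made-up placeholders. It uses a median blur instead of a Gaussian and no explicit Canny pass, since HoughCircles runs its own Canny internally (param1 is its upper threshold):

import cv2
import numpy as np

img = cv2.imread("tape.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
img = cv2.medianBlur(img, 5)                          # the "BlurMedian 5" step

# Radius limits assume roughly 0.04 mm/pixel and 1.5 mm sprocket holes (assumed).
circles = cv2.HoughCircles(
    img, cv2.HOUGH_GRADIENT, dp=1, minDist=80,
    param1=100,    # high threshold of the Canny run internally by HoughCircles
    param2=30,     # accumulator threshold: lower finds more (weaker) circles
    minRadius=15, maxRadius=25)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}), radius {r} px")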
> Do not use BlurGaussian at all (I tend to say: generally in OpenPnP) - use BlurMedian 5 because of its edge- and round-corner-preserving behaviour.
If you put it that generally, I disagree ;-)
BlurMedian should only be used with essentially binary (black and
white/very high contrast) images, to erode away insignificant
specks typically after a thresholding/channel
masking/edge detection operation has taken place.
On color/gray-scale images with soft gradients, BlurMedian loses
location information, i.e. it unpredictably "shifts around" the
image by as much as its kernel size. An unevenly lighted gradient
- e.g. a rounded edge on a paper sprocket hole, a ball on a BGA or
a bevel on a pin lighted slightly from the side - may appear to
shift to one side. As you cannot determine the percentile (it
always takes the fiftieth percentile, a.k.a. the median), it
effectively creates an artificial edge at an
unpredictable gradient level.
https://makr.zone/blur-median.gif
Source:
https://www.tutorialspoint.com/opencv/opencv_median_blur.htm
See how the rucksack is "growing" into the back of the boy, how
the top is "lifted", how the boy's face seems to be "pushed in",
how the balloon seen closest to him seemingly shifts position
down/left.
Of course the effect is exaggerated here, but you get the idea.
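A tiny numeric illustration of that "pushed in" effect (a made-up toy image, just to show the mechanism): a bright corner under a 5x5 median blur gets pulled all the way to black, while a Gaussian of the same size only softens it in place.

import cv2
import numpy as np

img = np.zeros((20, 20), np.uint8)
img[5:, 5:] = 200                      # a bright square with a sharp corner at (5, 5)

med = cv2.medianBlur(img, 5)
gau = cv2.GaussianBlur(img, (5, 5), 0)

# At the corner pixel, only 9 of the 25 window pixels are bright, so the median
# is 0: the corner is eaten away (it appears to move inward). The Gaussian keeps
# an intermediate value at the same place.
print(img[5, 5], med[5, 5], gau[5, 5])   # 200, 0, and an intermediate value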
_Mark
> I was however not talking about edge displacement / shifting (loosing spatial information), which can be a problem in some cases. I was talking about edge preserving behaviour (maintaining contrast information...
OK, I understand. For filters in photography, you're absolutely right. Sorry, I was talking about performing machine vision not about erm... beautifying photos ;-) I guess you agree, for machine vision, it is 100% about spatial information.
> ... which is what Canny chews on
Canny specifically should be preceded by a Gaussian filter:
https://en.wikipedia.org/wiki/Canny_edge_detector#Gaussian_filter
https://docs.opencv.org/master/da/d5c/tutorial_canny_detector.html
If you want to improve on it, you'd need a special replacement
for the Gaussian filter "in order to reach high accuracy of
detection of the real edge":
https://en.wikipedia.org/wiki/Canny_edge_detector#Replace_Gaussian_filter
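In OpenCV terms that looks roughly like this (illustrative values only; the file name is a placeholder):

import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image
img = cv2.GaussianBlur(img, (3, 3), 0)                # the noise-reduction step Canny expects
edges = cv2.Canny(img, 50, 150)                       # low/high hysteresis thresholds (illustrative)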
Edges are always fickle, lots of tuning needed. The most robust solution is not to detect edges in the first place but instead work probabilistically. Like with template image matching.
... or with my circular symmetry stage, see the "Example Images"
section here:
https://github.com/openpnp/openpnp/pull/1179
I since tested it on the machine. It nails everything, zero detection failures so far (that were not sitting behind the keyboard). Sprocket holes in tapes of all colors/transparent on
all backgrounds, nozzle tips, (bad) fiducials. Even completely out
of focus with barely any contrast. Doesn't care a bit about
changing ambient light.
One original pipeline, zero setup: All it requires is the
expected diameter. Once the camera units per pixel are known, it's
a no-brainer. Everybody can use a caliper on a nozzle tip or read
the datasheet, no pipeline editing skills required. OpenPnP
provides the diameter dynamically to the pipeline from easy to set
GUI settings or existing data (like footprints if available).
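To give a feel for the principle (this is NOT Mark's DetectCircularSymmetry implementation, just a rough and slow Python illustration of the idea): score every candidate center by how little the pixel values vary along each ring around it, and pick the best-scoring center within the search range.

import numpy as np

def symmetry_score(gray, cx, cy, max_r):
    """Lower is better: relative variance of pixels within rings around (cx, cy)."""
    # Assumes the search disk of radius max_r lies fully inside the image.
    ys, xs = np.indices(gray.shape)
    r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2).astype(int)
    mask = r <= max_r
    vals, rings = gray[mask].astype(float), r[mask]
    within = sum(vals[rings == i].var() * (rings == i).sum() for i in range(max_r + 1))
    return within / (vals.var() * vals.size + 1e-9)

def find_center(gray, expected_radius, search=20, step=2):
    """Brute-force search around the image center (the expected position)."""
    h, w = gray.shape
    cy0, cx0 = h // 2, w // 2
    best = min(
        (symmetry_score(gray, cx, cy, expected_radius * 2), cx, cy)
        for cy in range(cy0 - search, cy0 + search + 1, step)
        for cx in range(cx0 - search, cx0 + search + 1, step)
    )
    return best[1], best[2]

# usage: cx, cy = find_center(gray_image_as_2d_numpy_array, expected_radius=20)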
In the meantime I gave it sub-pixel accuracy. Working on
multi-target-detection now...
Sorry about the blab, I'm just really happy how this turned out. 8-D
_Mark
P.S. It took me a while to come back to the idea...
https://groups.google.com/g/openpnp/c/0-S2DMXe3t0/m/0FCu8kTzBQAJ
Hi Clemens,
To explain why I bother: I was originally responding to this:
> Do not use BlurGaussian at all (I tend to say: generally in OpenPnP) - use BlurMedian 5 because of its edge- and round-corner-preserving behaviour.
All I was saying is that this is not true in its general and
absolute form. I would still argue it is more often wrong
than true.
And I started to care because this has the potential to mislead users.
> "For small to moderate levels of Gaussian noise,
the median filter is demonstrably better than Gaussian blur at
removing noise whilst preserving edges for a given, fixed window
size."
Well I still believe this sentence does apply to
photography. Immediately before that sentence you cited, it says:
"Edges are of critical importance to the
visual appearance of images, for example."
https://en.wikipedia.org/wiki/Median_filter#Edge_preservation_properties
It does preserve an edge, yes, but not necessarily at the right
place, as I demonstrated with the boy+balloons image:
https://makr.zone/blur-median.gif
Like I said the median blur is fine if the image at hand is
already very high contrast, ideally already binary. If there are
no relevant smooth gradients or artifacts involved in or around
the edge, then OK.
If in doubt, use Gaussian. Gaussian better preserves spatial information, at least above the channel (integral) resolution and noise level. Hence it is a benign choice for reducing noise and other artifacts. Most common cameras use MJPEG or other compression
methods that produce artifacts. These look nice in our brains but
are bad for machine vision. Compression often involves an
underlying 8x8-pixel block size. Gaussian will typically restore a
weaker, but more likely correct edge signal out of that
(probabilistically speaking).
> I am looking forward to test this and read the code when I setup the next PCB on the machine.
You can already do that, if you want. It's already in newer OpenPnP 2.0 (not yet with sub-pixel accuracy). The pipelines are posted in the PR. Just paste them and try. First version code is
also linked:
https://github.com/openpnp/openpnp/pull/1179
_Mark

Hi fxframes
This is not yet supported in the current stage; it can only detect one hole. But I'm just in the process of testing this. ;-)
Coming soon!
_Mark
Ah yes, if you're only using it for the feed vision and not for
Auto Setup then it should work.
But the PR is already done:
https://github.com/openpnp/openpnp/pull/1217
Please update your OpenPnP 2.0 version.
You could help me by testing the pipelines
as proposed in the PR. ;-)
They should work out-of-the-box, the goal is no tuning
with any tape color or material, any background color or material,
transparent tapes etc.
Also for Auto Setup.
But be mindful that you need quite accurate Units per Pixel set
on the camera and your feeders must be close to the camera focal
plane in Z.
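One quick way to sanity-check the Units per Pixel value (a hedged sketch with an assumed measurement): the EIA 481 sprocket hole pitch is 4 mm, so measuring the pixel distance between two neighbouring holes in a camera image gives mm per pixel directly.

pixels_between_holes = 194.0                    # assumed measurement from a debug image
units_per_pixel = 4.0 / pixels_between_holes    # EIA 481 sprocket pitch is 4 mm
print(round(units_per_pixel, 4))                # ~0.0206 mm/pixel for this example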
If it does not work as is with your feeders, please send the input images (insert an ImageWriteDebug stage right after ImageCapture). Appreciated!
_Mark









On 16 Jun 2021, at 10:30, fxframes <mp5...@gmail.com> wrote:
Hello Mark,
This doesn't seem quite right yet. Out of the box and running auto setup it gives me:
<Screenshot 2021-06-16 at 10.20.31.png>
This is what the pipeline looks like. You can see the hole in the center isn't "found".
<Screenshot 2021-06-16 at 10.21.27.png>
<Screenshot 2021-06-16 at 10.21.50.png>
Also when maxTargetCount=1 it finds the wrong hole.
<Screenshot 2021-06-16 at 10.24.19.png>
On Wednesday, June 16, 2021 at 9:50:41 AM UTC+1 fxframes wrote:
Thanks Mark, I will test this and report back.
Hi fxframes
thanks for testing!
The DetectCircularSymmetry stage has a search range (the maxDistance property) that limits the search to this maximum distance from the expected position, which is usually the camera center (but it can be overridden by the vision operation, such as in the camera calibration associated with nozzle tip calibration).
The search range controls both the scope and the computational cost of the stage. Remember: I had to develop it in Java, and Java is clearly not a good match for this low-level pixel crunching stuff. Conclusion: with the search range in place, there is no need for a mask.
But both the expected position and the search range will only be
parametrized by OpenPnP when inside the actual vision function
of the specific (feeder) operation. It is different in Auto
Setup (range goes to camera edge) and in feed operation
(range is only half a sprocket pitch).
In the Editor, the search range is like in Auto Setup, i.e. to the camera edge; therefore the number and selection of holes detected might be misleading.
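A small back-of-the-envelope sketch of what those two ranges mean in pixels (all numbers here are assumptions, not values from this thread):

units_per_pixel_mm = 0.04                  # assumed down-camera calibration
auto_setup_range_mm = 12.0                 # roughly "to the camera edge" (assumed)
feed_range_mm = 4.0 / 2                    # half the EIA 481 sprocket pitch

for name, rng_mm in (("Auto Setup / Editor", auto_setup_range_mm), ("feed operation", feed_range_mm)):
    radius_px = rng_mm / units_per_pixel_mm
    # The number of candidate centers (and thus the cost) grows with the area.
    print(f"{name}: search radius ~{radius_px:.0f} px, ~{3.14 * radius_px ** 2:.0f} candidate centers")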
You can make the search range visible by enabling the diagnostics
switch. Then the difference becomes visible in the result
images with the overlaid heat map, see the ImageWriteDebug images
below.
The important question for me is
this: Does it work when used in normal operation, i.e.
with Auto Setup and with feed operations?
For a feed operation (positional calibration):

In Auto-Setup / Editor (note: my camera view is too narrow at the moment, because I lifted the table by 20mm but not yet the camera, so the Auto-Setup range is a bit limited)

_Mark
Could you take this pipeline (if you haven't already):
https://github.com/openpnp/openpnp/wiki/DetectCircularSymmetry#referencestripfeeder
Then enable the first ImageWriteDebug stage and send me the images?
Found here:
$HOME/.openpnp2/org.openpnp.vision.pipeline.stages.ImageWriteDebug/
Don't forget to disable it again; it creates a ton of images in Auto Setup.
_Mark
Got the images. Can you please post your down-looking camera's
Units per Pixel?
Thanks,
Mark

Yes, very difficult.
These are the gotchas I see:
- Your light diffuser has a hole in the middle, where the camera looks through. The clear tape therefore has no reflection there. Having a co-axial light (a half-way mirror in front of the camera, bouncing light down) would help (I have the same problem on mine).
- The feeder holder has a strong layer pattern that is seen through the clear tape, because of point 1. The layer pattern disrupts circular symmetry outside the hole. You may have to reduce subSampling down from 8, so it is not fooled by interference effects. The stage might then be slower.
- The feeder holder has an outcropping that keeps the tape in. This outcropping (or its shadow) goes right to the sprocket hole edge. It therefore breaks the circular symmetry around the hole. Ideally you would reduce the outcropping for the next feeders you print; it seems less would do.
- You can try to work around that by reducing the outerMargin to 0.1, so the ring margin will not be cut so much. However, this may not work (i.e. you'll have to experiment with other values) due to the next point.
- In your video it seems the Units per Pixel are not accurate for the tape surface. I guess it is higher in Z than the PCB surface, i.e. it appears larger. You see how the machine moves much farther than what you clicked. And you see how four ticks on the crosshairs do not align with the sprocket hole pitch (4mm).
- <noljkpaaiceennln.png>
- I tried reconstructing Units per Pixel and got ~0.0206mm/pixel. I guess your calibrated value is significantly higher, if I got that right ;-).
- Once I apply the right Units per Pixel and reduce outerMargin to 0.1, I get detection on the image that seems like the one that fails in the video: strip_7555303524774678967.png
- <docdkaioemdihpdm.png>
Regarding the feeder surface Z:
@tonyluken has introduced "3D" Units per Pixel. Unfortunately it
is not yet applied to feeder Z. Once this is available, it will be
possible to compensate:
https://github.com/openpnp/openpnp/pull/1112
However, until then you should have the feeder tape surfaces very
close in Z to the PCB surface i.e. where you calibrated your Units
per Pixel. Everything must ideally be on the same Z plane.
Otherwise, you will likely always have some problems, because
these feeders' vision works with well known absolute
geometry from the EIA 481 standard.
For the ReferenceStripFeeder the issue is mostly with Auto-Setup
(I guess that's why you didn't use it even before trying this new
stage). There is some tolerance in the code and maybe you can get
it working by playing with the innerMargin/outerMargin.
For the routine feed vision, the camera will be centered on the
sprocket hole and Units per Pixel will not be so important (maybe
for 0402 or 0201 parts where the hole offset detected with the
wrong Units per Pixel might start to matter, but I doubt it).
Conclusion: The reason it failed can be well explained (so far)
and most of these issues will create similar problems with other
stages.
I'm afraid the new stage cannot perform miracles ;-)
_Mark

Glad it works out.
> One final note, did you get one of those coax LEDs for yourself? There doesn’t seem to be a lot of them around. This one seems like it could work.
If you want even lighting in the full camera view plus high
Z clearance, co-axial lighting becomes problematic, because the
half-mirror needs to reach the edge of the reflecting light cone
(or pyramid?). The mirror glass will need to be large and reach
far away from the lens front and reduce Z clearance. You will
likely need a longer focal length (which in itself is good, but
requires buying a new lens), and a higher camera mounting point
(which is difficult on an existing machine design).
Therefore, I was thinking about creating a hybrid design, with
only the center part (where the camera needs to peek through a
diffuser) being half-mirrored and the rest conventional. You could
use one of these very thin microscope cover glasses, which are available in optical quality. LEDs would be pointing up from a
ring towards the diffuser, and towards the mirror from a small
"side-car" PCB angled at 90°.
But I only got to design a very basic "light cone" in OpenSCAD so
far (6.2mm lens):

Another design is not co-axial but has a diffuser that only leaves a tiny gap (hope you get the two pictures):


_Mark