To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/openpnp/a32ca87c-d253-4431-829a-3855c566ad64%40googlemail.com.
For fly-by-vision as the ultimate goal, we still need to find out if it
can be done using a simple UVC camera. UVC cameras have the advantage
that everyone can use them, since they are supported as generic
cameras. Without the availability of UVC cameras with a hardware-trigger
option, I don't see how OpenPnP could ever provide fly-by-vision in a
generic fashion.

For flyover camera testing I need to automatically trigger the ELP camera when the nozzle passes through a specific X-axis zone (155-165 mm).
What I have working:
✅ Script runs manually from Scripts tab
✅ Camera triggering via Arduino Mega (M240 command) works
✅ Nozzle position reading works in script
✅ Vision pipeline with script stage works (but requires manual triggering)
The challenge:
I need the script to run automatically during X-axis motion when the nozzle enters the camera zone. I've tried:
MOVE_TO_COMPLETE_COMMAND with ${SCRIPT:...} - Times out (OpenPnP sends script to Smoothie board)
Vision pipeline script stage - Only runs on manual vision operations
Actuator script configuration - Only runs on manual actuator calls
My setup:
Main board: Smoothie (PNPBOARDV1.7) for motion
Camera control: Arduino Mega (sends M240 for camera trigger)
Script: snapOverCamera.bsh that checks nozzle position and triggers camera
The question:
What's the correct way to automatically execute a position-checking script during nozzle motion in OpenPnP?
Should I be using:
Machine-level script hooks?
Different driver configuration?
Custom job processors?
Another approach I'm missing?
Any guidance would be greatly appreciated!
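For reference, the position check itself is simple once something actually calls it during motion; the hard part is finding the hook. A minimal sketch of the one-shot zone logic the script performs (the real snapOverCamera.bsh is BeanShell inside OpenPnP; this Python sketch and its names are hypothetical):

```python
# One-shot X-zone trigger: fires once per pass through the camera zone
# (hypothetical sketch of the logic in snapOverCamera.bsh).

ZONE_MIN_MM = 155.0  # camera zone lower X bound (from the post)
ZONE_MAX_MM = 165.0  # camera zone upper X bound

class ZoneTrigger:
    """Fires exactly once each time the nozzle enters the X zone."""
    def __init__(self, zone_min, zone_max):
        self.zone_min = zone_min
        self.zone_max = zone_max
        self.armed = True

    def update(self, x_mm):
        """Return True once when x enters [zone_min, zone_max]."""
        inside = self.zone_min <= x_mm <= self.zone_max
        if inside and self.armed:
            self.armed = False      # disarm until the nozzle leaves the zone
            return True             # caller would send M240 to the Arduino here
        if not inside:
            self.armed = True       # re-arm after leaving the zone
        return False

trig = ZoneTrigger(ZONE_MIN_MM, ZONE_MAX_MM)
fires = [trig.update(x) for x in (100, 150, 158, 160, 170, 158)]
# fires -> [False, False, True, False, False, True]
```

The one-shot arming avoids re-triggering on every position sample while the nozzle is still inside the zone.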
勇気とユーモア ("Courage and humor")
Hi Artur,
Thank you so much for your generous offer to help! Your expertise would be incredibly valuable for the OpenPnP pick-and-place flyover camera feature.
Project Goal: Trigger a camera when the nozzle flies over a fixed camera position during rapid XY moves (Z optional).
Hardware Setup:
Controller: Smoothieboard
Motion: X and Y axes
Camera: External camera triggered via digital output
Core Functionality:
Trigger a digital output when the machine passes through a configured XY position zone during G1 moves
Support for both X and Y axis position checking
Configurable trigger zone (center position + tolerance)
Desired Features:
Configurable trigger position (X, Y coordinates)
Configurable trigger zone size (tolerance around center point)
Multiple trigger modes:
Trigger on zone entry
Trigger on zone exit
Trigger continuously while in zone
Speed compensation (if feasible)
Enable/disable via Gcode command
Technical Details:
Trigger output: Specific digital pin (configurable)
Trigger pulse: Configurable duration
Position sampling: As frequent as Smoothie's motion planning allows
Gcode interface: Commands to set trigger parameters
I want to keep the solution integrated within Smoothie rather than relying on external encoder hardware (e.g. servo encoders), to maintain simplicity and make use of Smoothie's precise motion planning.
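The requested entry/exit/continuous modes can be sketched as a tiny state machine. This is only an illustration of the specified behavior (all names hypothetical, Python for brevity); a real Smoothie module would implement it in C++ inside the motion loop:

```python
# Illustration of the three requested trigger modes for a configurable
# XY zone (center + tolerance). Hypothetical sketch, not Smoothie code.

ENTRY, EXIT, CONTINUOUS = "entry", "exit", "continuous"

def in_zone(pos, center, tol):
    """True if pos (x, y) lies within +/-tol of center on both axes."""
    return all(abs(p - c) <= tol for p, c in zip(pos, center))

class FlyoverTrigger:
    def __init__(self, center, tol, mode):
        self.center, self.tol, self.mode = center, tol, mode
        self.was_inside = False

    def sample(self, pos):
        """Called per position sample; True means pulse the output now."""
        inside = in_zone(pos, self.center, self.tol)
        fire = (
            (self.mode == ENTRY and inside and not self.was_inside) or
            (self.mode == EXIT and not inside and self.was_inside) or
            (self.mode == CONTINUOUS and inside)
        )
        self.was_inside = inside
        return fire

path = [(150, 100), (158, 100), (162, 100), (170, 100)]
entry = FlyoverTrigger(center=(160, 100), tol=5, mode=ENTRY)
print([entry.sample(p) for p in path])   # one pulse on zone entry
```

Pulse duration, pin assignment, and G-code arming would sit on top of this core edge/level detection.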
Testing: Initial testing on your 3D printer (simulating nozzle movement); I can test on my PnP machine once you have a prototype.
Would this specification work for a custom Smoothie module? We're happy to provide any additional details needed.
ELP camera specification enclosed for info.
Typical PnP speeds: 50-300 mm/s
At 1kHz: 0.05-0.30mm position error
Camera FOV: 2-5mm typically
1 kHz provides sufficient precision for our needs.
A 1 kHz implementation would be perfect! The added complexity of higher sample rates isn't needed for PnP camera triggering.
The correction is:
1 kHz = ±1000 µs error vs. your ±5 µs capability
At 300 mm/s: 1000 µs = 0.3 mm position error
With 100 steps/mm: that's 30 steps of error!
My Yaskawa servos can do much better (±0.015 mm)
Required accuracy: < 0.05 mm for reliable vision
"Thank you for the offer, but after recalculating, I realize Smoothie v1's 1kHz poll rate provides insufficient precision (±0.3mm error) for my PnP accuracy requirements.
I'll explore hardware-level solutions using my Yaskawa servo encoders instead."
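The arithmetic behind these numbers is easy to verify. A quick worked check (values taken from the messages above):

```python
# Worked check of the trigger timing-error arithmetic.

def position_error_mm(speed_mm_s, timing_error_s):
    """Position error caused by a trigger timing error at constant speed."""
    return speed_mm_s * timing_error_s

speed = 300.0                                # mm/s, fast PnP move
err_1khz = position_error_mm(speed, 1e-3)    # 1 kHz poll -> up to 1 ms late
err_5us  = position_error_mm(speed, 5e-6)    # hardware trigger, +/-5 us

assert abs(err_1khz - 0.3) < 1e-12           # mm, matches "0.3 mm position error"
assert round(err_1khz * 100) == 30           # steps at 100 steps/mm -> "30 steps"
print(f"1 kHz: {err_1khz} mm, 5 us trigger: {err_5us} mm")
```

At 300 mm/s a 1 ms polling quantum really does translate to 0.3 mm, while a ±5 µs hardware trigger keeps the error at 0.0015 mm.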
Feeder pocket clearance: ±0.1-0.3mm
Part placement in tape: ±0.2mm
Nozzle pick accuracy: ±0.1mm
Total pick uncertainty: ±0.4-0.6mm
Even with perfect ±0.015mm camera triggering, the part itself arrives at the camera with ±0.5mm uncertainty!
The Real Requirement - Camera FOV Considerations:
Typical PnP camera FOV: 3×3 mm to 6×6 mm
Part size + uncertainty: Needs margin in FOV
For 0603 component (0.6mm): ±0.5mm uncertainty = 1.6mm total
Required FOV: At least 2×2mm for reliable inspection
Your ±0.3mm (1kHz) trigger error = acceptable
Part position uncertainty = approx. ±0.5mm (dominant error)
Camera FOV = 3-6mm (plenty of margin)
Smoothie v1 with 1kHz (±0.3mm) trigger accuracy IS SUFFICIENT for practical PnP applications because:
Part position uncertainty (±0.5mm) dominates over trigger error
Camera FOV (3-6mm) provides ample margin
Vision correction can handle the combined errors
Cost/benefit favors simpler solution
"After reconsidering practical SMT tolerances, I realize that ±0.3mm trigger accuracy is actually sufficient for this application. The dominant error comes from part positioning in feeders (approx. ±0.2mm), making ultra-precise triggering unnecessary.
Your 1kHz solution should work!"
Am I right - we were over-engineering for theoretical precision that doesn't matter in practice!
For 0402 and larger components, the ±0.3mm trigger accuracy is FINE!
Component Size Analysis:
0402 Components:
Size: 1.0mm × 0.5mm
With ±0.5mm pick uncertainty: needs ~2.0mm FOV
With ±0.3mm trigger error: still fits in 2.5mm FOV
Typical camera FOV: 3×3mm to 6×6mm
0201 Components:
Size: 0.6mm × 0.3mm
With ±0.5mm pick uncertainty: needs ~1.6mm FOV
With ±0.3mm trigger error: pushes to ~2.2mm FOV
Still fits in most PnP camera FOVs
01005 Components:
Size: 0.4mm × 0.2mm
This becomes challenging with combined errors
But most hobbyist/small PnPs don't handle 01005 anyway
What actually matters:
Camera FOV size (your safety margin)
Vision algorithm robustness (can find part within FOV)
Part never arrives perfectly centered anyway
Typical bottom camera: 3×3mm to 6×6mm FOV
Combined error: max ±0.3mm (pick) + ±0.3mm (trigger) = ±0.6mm
Total uncertainty: 1.2mm diameter
Fits easily in 3mm FOV with 0.8mm margin each side
For 0402 and larger components (which covers 95% of typical SMT work), Smoothie v1's ±0.3mm trigger accuracy is ADEQUATE!
The part positioning uncertainty is the dominant error source, not the trigger timing.
I was over-engineering camera position precision that doesn't improve real-world results.
Artur's 1 kHz solution should work for the ELP camera and its needs!
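The FOV-margin argument can be sanity-checked with a rough worst-case error budget (numbers from this thread; the helper name is made up, and worst-case summing rather than RSS is deliberately conservative):

```python
# Rough worst-case error budget: is there margin left in the camera FOV
# after stacking pick and trigger uncertainties? (Illustrative sketch.)

def fov_margin_mm(fov_mm, part_mm, uncertainties_mm):
    """Per-side margin left in the FOV after worst-case error stacking."""
    total_uncertainty = sum(uncertainties_mm)   # worst case, not RSS
    needed = part_mm + 2 * total_uncertainty    # uncertainty on each side
    return (fov_mm - needed) / 2

# 0402 part (1.0 mm long axis), +/-0.5 mm pick + +/-0.3 mm trigger, 3 mm FOV
margin = fov_margin_mm(fov_mm=3.0, part_mm=1.0, uncertainties_mm=[0.5, 0.3])
print(f"margin per side: {margin:.2f} mm")      # positive -> part stays in view
```

Even in the tightest case quoted here (3 mm FOV, 0402 part), the worst-case stack leaves positive margin, which is the crux of the "trigger error is not the dominant term" conclusion.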
On 10 Nov 2025 at 09:00, vespaman <micael....@gmail.com> wrote:
Jan - thank you for jumping in with those excellent technical specifications! Your requirements align perfectly with what I need for my ELP camera flyover testing.
To Artur: Jan's specification looks ideal for my use case. The key features I'd love to see implemented are:
Multiple XY trigger points (for different nozzle positions)
Rising edge pulse with configurable duration
Simple area/band detection as Jan described
Step/dir generator alignment for precision
One-shot trigger arming
Jan's approach of using the step/dir generator for trigger timing and a background service for reset sounds like the perfect balance of precision and simplicity.
This would give me exactly what I need for testing camera synchronization during motion, while being flexible enough for future multi-nozzle setups.
Thank you all for the fantastic technical discussion!
I see this as a brilliant idea!
Adding a trigger parameter to G1 commands would be incredibly elegant. Syntax like G1 X160 Y100 F10000 T1 to trigger at position would be simple to implement, perfectly synchronized with motion, and would work seamlessly with existing motion planning (though note that T is conventionally the tool-select word in G-code, so a different letter might be safer). This might be the cleanest solution yet!
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
Hi Chris,
Thank you for your emails, and for pointing me to your excellent detailed technical analysis from October 21st. I hadn't seen that earlier discussion, and you raise absolutely valid points about the substantial system integration challenges.
Regarding your AI comment - I'll note that many of us are non-native English speakers and use tools to communicate clearly in complex technical discussions. The focus should remain on the technical content.
You're right that the real complexity begins after triggering: the OpenPnP architectural limitations, motion controller awareness requirements, USB timing constraints, and multi-nozzle coordination are indeed the "mortar" that makes flyover vision truly work.
My approach is incremental: I'm starting with camera triggering because it's a discrete, testable component that can be validated independently. Even if full real-time correction isn't immediately achievable, solving the triggering problem enables:
Practical testing of image quality during motion
Speed characterization for different camera types
Proof-of-concept validation of basic synchronization
Foundation building for the larger system
The "crude wire trigger" approach you mentioned wouldn't work for our needs - we require sub-millisecond precision for global shutter synchronization, which demands hardware-level solutions.
You've clearly done deep thinking on this problem. Rather than dismissing the triggering work as trivial, could you share which of the larger integration challenges you think should be addressed first? Your experience could help guide the community toward the most critical next steps.
I appreciate the reality check about the bigger picture - it helps keep the work focused on practical outcomes.
If someone has no idea about programming and no other experience with solving complex problems, then perhaps years of begging and struggling for support will help. Eventually. Maybe.
I know from my experiments that everything depends on the quality of the trigger source, which allows only a few microseconds of tolerance at higher fly-by speeds, as otherwise inaccuracies in the further processing of the images lead to poor component placement quality.
The use of windows or radii for a trigger area is therefore complete nonsense and not effective if you want to take precise images. And how are you supposed to determine the offsets from the images if more than one axis is involved?
I'm already looking forward to the upcoming AI mud...
Hi Jan and Arthur,
- G1 from X150 to X170 means TTL stays HIGH for the entire 20mm move
My belt-driven test jig does not reach 0.1 mm accuracy...
Typical belt accuracy: ±0.1 mm to ±0.3 mm, plus backlash: an additional 0.1 mm+ of uncertainty
And back to where I was trying to explain the trigger issues :) I'm not saying the trigger-in-G-code idea isn't excellent, but it may not be portable across platforms. That does not mean we should not use it, just that maybe we should look for other ways as well.
Mike, you already have a means to trigger when you want, given a command from OpenPnP, correct? I think the "flyover run" should ALWAYS start from the same position, ALWAYS move to a known position, and ALWAYS be at the same speed. Given these three conditions, the nozzle should always be at the same position relative to when that move started. So you can send your trigger command to the Arduino when the flyover run is started; then it should just be a matter of adjusting the timing to hit that nail on the head.
Henrik, I agree the best solution is to have the motion control board fire the trigger when the position is where it should be, but that is not a great option given the number of different boards and software people are using.
From the beginning I liked having an optical trigger with an adjustable delay based on when that trigger is closed. It will always be reliable and not dependent on backlash or any other variables. Simply have your Arduino wait for the trigger to fire the camera. It could even have an ignore command so it doesn't fire every time the axis moves past it.
Artur, can Smoothie watch an input and use it to time-delay a trigger?
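Mike's fixed start/end/speed idea reduces the problem to computing a single time delay per run. A sketch under stated assumptions (constant cruise speed, optional trapezoidal acceleration ramp; all names hypothetical):

```python
# Time-delay trigger for a repeatable "flyover run": same start position,
# same target, same speed every time. Illustrative sketch only.

def trigger_delay_s(start_mm, trigger_mm, speed_mm_s, accel_mm_s2=None):
    """Delay after move start at which the nozzle crosses trigger_mm."""
    dist = trigger_mm - start_mm
    if accel_mm_s2 is None:
        return dist / speed_mm_s               # constant-speed approximation
    # Trapezoidal ramp: time/distance to reach cruise speed, then cruise.
    t_acc = speed_mm_s / accel_mm_s2
    d_acc = 0.5 * accel_mm_s2 * t_acc ** 2
    if dist <= d_acc:                          # trigger point inside the ramp
        return (2 * dist / accel_mm_s2) ** 0.5
    return t_acc + (dist - d_acc) / speed_mm_s

# Run always starts at X=100 mm, camera at X=160 mm, cruise 300 mm/s
print(trigger_delay_s(100, 160, 300))                    # naive: 0.2 s
print(trigger_delay_s(100, 160, 300, accel_mm_s2=1500))  # with 1500 mm/s^2 ramp
```

The gap between the naive and ramp-aware answers shows why the "timing adjust" Mike mentions is needed: the acceleration phase shifts the crossing time noticeably unless the run is long enough to be dominated by the cruise.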
Hi CNC ..
I did some testing with the Mega and Smoothie, and the Mega passthrough breaks OpenPnP's position tracking.
Updated Summary for OpenPnP Group:
I conducted extensive ELP camera testing and achieved maximum performance optimization for vision capture. Here are my key findings:
Performance Breakthrough Achieved:
Through rigorous optimization, I reduced 7-position calibration from 3.35 seconds to 2.56 seconds - a 24% performance improvement. This appears to be the practical limit of OpenPnP's architecture on this hardware.
What Doesn't Work - True Fly-Over:
OpenPnP's moveTo() command always includes M400 wait for completion, blocking script execution until movement finishes. This makes true motion capture impossible through scripts. Every capture happens AFTER movement completes, regardless of timing attempts.
What Works Excellently - Optimized Micro-Stop Capture:
My optimized approach achieves:
7-position calibration: 2.56 seconds
Single nozzle setup: 2.9 seconds
Full multi-nozzle setup: 29.2 seconds
Average per position: 365ms
Key Optimizations That Worked:
Removed settle moves (saved 130ms per position)
Reduced micro-stop time to 15ms
Direct movements only, no intermediate positions
Maximum speed settings (hardware limits)
Hardware Limits Reached:
Movement time: ~280ms per 10mm move (cannot be faster)
Camera capture: ~30ms (hardware limit)
Micro-stop: 15ms (minimum for stability)
OpenPnP architecture prevents real-time position updates
Production Impact:
The 2.9-second nozzle calibration enables true production-line speeds. Full machine calibration in under 30 seconds makes this approach production-ready, though it requires working within OpenPnP's architectural constraints rather than true fly-over capture.
Test Setup: OpenPnP/Windows with a dedicated camera-testing jig with fixed nozzle height, Yaskawa servo X-axis with encoder, and NEMA 17 Y-axis - specifically designed for camera qualification, not full PnP operation.
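As a quick consistency check of the reported figures (365 ms per position, 24% improvement):

```python
# Consistency check of the calibration timings reported in the summary.

positions = 7
total_s = 2.56                     # optimized 7-position calibration
baseline_s = 3.35                  # before optimization

per_position_ms = total_s / positions * 1000
improvement_pct = (baseline_s - total_s) / baseline_s * 100

print(f"{per_position_ms:.0f} ms per position")   # ~366 ms, close to the reported 365 ms
print(f"{improvement_pct:.0f}% faster")           # ~24%, matches the reported 24%
```

Both derived numbers agree with the summary within rounding, so the reported averages are internally consistent.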