But:
1. Don't you think you have too weak a vacuum, or too small a nozzle, for a part like the one you show? I think my machine is faster and more aggressive than yours, but I've never seen parts slip as badly as in your video.
2. Sending M204 together with G0 (in moveTo) instead of F really fixes a lot of problems. Pure speed adjustment is not very usable for PnP.
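A tiny sketch of what point 2 means in practice: pair each motion command with an M204 acceleration override (M204 S sets acceleration on Smoothieware) instead of only scaling F. The function name, the factor-squared scaling, and the default limits here are illustrative assumptions, not OpenPnP's actual code.

```python
def move_command(x, y, speed_factor, max_feed=50000, max_accel=2000):
    """Build a G-code pair that applies a speed factor to acceleration
    (M204) as well as to the feedrate, instead of to F alone."""
    # assumption: acceleration scales with the square of the speed factor,
    # so halving "speed" quarters the acceleration
    accel = max_accel * speed_factor ** 2
    feed = max_feed * speed_factor
    return f"M204 S{accel:.0f}\nG0 X{x:.3f} Y{y:.3f} F{feed:.0f}"
```

For example, `move_command(100.0, 50.0, 0.5)` would emit an `M204 S500` line followed by the `G0` move at half feedrate.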
--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/708392a2-0e3b-43af-9313-d6e8f64aa0d8%40googlegroups.com.
I went from LinuxCNC to Smoothie. I am using the same drivers for both; I just made a DB25 breakout for the Smoothie clone (step, dir, etc.). My bot is old; I made it over 10 years ago. It is big and heavy with smallish steppers, but it gets moving very fast. I have noticed, just looking at the cam, that I get a lot of bounce. I do not remember getting any of that before, but this system also sat for 10 years, so I was thinking the belts might have gotten weak.
Before, I could run at 100% speed, just putting the parts from a strip onto a PCB and from the PCB back into the strip, with just raw G-code done by hand. I can't get anywhere near that accuracy now. I was thinking it was just due to aging of the bot, but now I think the Smoothie might be a lot of the problem. I know I had to slow my Z to almost 1/2 to keep the stepper from skipping once in a while. I never had that with LinuxCNC. Overall I have been disappointed with Smoothie. I went to Smoothie just to run OpenPnP, as it seems to be what it is based around. If I get everything working I may put back a real CNC motion controller. All my bots have encoders (not wired yet), but I am used to mills and such, where closed loop is the norm.
I was wondering if you could add an encoder to a motor or to an idler pulley. Then you could compare and see if it matches. (I think it will.)
That's also my plan (in some free time): to flash my spare MKS 1.3 LPC1768 board with Marlin to see how it works compared to the Smoothie firmware.
However, Jaroslaw's controller project sounds even better.
Now, since we will (currently) *always* want the initial acceleration and jerk values to be 0, we set P_i = P_0 = P_1 = P_2 (initial velocity) and P_t = P_3 = P_4 = P_5 (target velocity), which, after simplification, resolves to ...
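Reading the P's as control points of a 5th-order Bernstein (Bézier) polynomial over velocity (my assumption from the context), the collapsed profile can be checked numerically. With three repeated control points at each end, the profile reduces to the familiar quintic "smoothstep" ramp, which indeed has zero acceleration and jerk at both ends:

```python
from math import comb

def velocity(tau, v_i, v_t):
    """5th-order Bernstein velocity profile with P0=P1=P2=v_i, P3=P4=P5=v_t.
    tau runs from 0 to 1 over the move segment."""
    P = [v_i] * 3 + [v_t] * 3
    return sum(P[k] * comb(5, k) * tau**k * (1 - tau)**(5 - k) for k in range(6))

def velocity_simplified(tau, v_i, v_t):
    """Closed form after simplification: the quintic smoothstep ramp.
    Its first and second derivatives vanish at tau = 0 and tau = 1."""
    s = 6 * tau**5 - 15 * tau**4 + 10 * tau**3
    return v_i + (v_t - v_i) * s
```

The first derivative of the ramp is 30 τ² (1 − τ)² (v_t − v_i) and the second is 60 τ (1 − τ) (1 − 2τ) (v_t − v_i), both zero at the segment boundaries, matching the "initial acceleration and jerk = 0" requirement above.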
Hi Mark,
I'm fully on board with all of this. I think it's the right direction to improve the current, extremely limited motion system.
Hi dc42,
Yep, the Eigenfrequency.
I've already thought about how to measure this per axis (you can already glimpse the sinusoid in the Camera Settle graphs) and then try to suppress it. But I thought it goes both ways (symmetrically), i.e. to also increase jerk/acceleration when (or slightly before) the pendulum wants to swing back.
I thought I could integrate the product of the past/future moments of jerk/acceleration with the sin() and cos() of the Eigenfrequency, get amplitude and phase, and feed it back negatively.
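That integration, as I understand it, amounts to a single-frequency Fourier projection of the recorded excitation. A rough sketch (the function name and normalization are illustrative; windowing and the negative-feedback step are left out):

```python
import math

def resonance_component(samples, dt, f_eigen):
    """Correlate a recorded jerk/acceleration trace with sin() and cos()
    at the eigenfrequency (a single-bin Fourier projection) to recover
    the amplitude and phase of the excited oscillation."""
    w = 2 * math.pi * f_eigen
    n = len(samples)
    s = sum(a * math.sin(w * k * dt) for k, a in enumerate(samples)) * 2 / n
    c = sum(a * math.cos(w * k * dt) for k, a in enumerate(samples)) * 2 / n
    amplitude = math.hypot(s, c)
    phase = math.atan2(c, s)  # phase relative to sin()
    return amplitude, phase
```

Feeding the recovered amplitude and phase back with inverted sign is then the suppression step described above.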
Found this helpful to understand the principles without too many formulas :-)
https://youtu.be/spUNpyF58BY
_Mark
> Have at it Mark. I trust you'll get it right :)
Thanks, Jason.
I've already had my run-ins with the tests, and your new Gcode Server is a great addition because it tests the relevant driver ;-) The tests correctly caught some unfinished conversion and demonstrated that it works!
> if, as you find things that are changing significantly, ...
This is half answer, half a note to self. If you see something that is irking you, just tell me.
Outside of the obvious Axes rework, up until now, these are the things that are changing in semantics (rather than just in hidden Axis implementation):
- I want all HeadMountable moveTo() calls to go through the head; currently the moveToSafeZ() variants went directly to the driver. Now moveToSafeZ() is just a wrapper for moveTo(). The following will make clear why:
- BIGGEST CHANGE: the headOffset translation (back and forth) must be handled by the HeadMountable rather than by the driver; I believe that's the correct place to encapsulate this. This will allow for future variable offsets, like Renee's revolver head.
- Axis mapping and transforms will be coordinated by the Head (using functionality on the HeadMountable), and it will call all the drivers that have one or more axes mapped in the operation.
- The order of calling drivers is always guaranteed to match the one in the Machine Configuration. The user has arrow buttons to permute it.
- No more NaNs to the driver. They must be substituted by the HeadMountables, before head offset translation and axis transformation, as this is essential for multi-axis transforms (like Non-Squareness Compensation) to work.
- The drivers will work purely with raw coordinates.
- Visual Homing is now in the Head and available for all drivers. You could even Visual Home a machine that has the X axis on a different driver than the Y axis.
- It works with transformed axes (including Non-Squareness Compensation), therefore the fiducial coordinates will finally match. For existing machine.xml files the fiducial coordinate is migrated to "unapply" Squareness Compensation, so captured coordinates on existing machines will not be broken.
- Visual Homing now aborts the home() when Vision fails (I had some near-miss experiences when I tested Camera Settle and it was misconfigured).
I saw some existing test cases that I believe will fail due to 2. I will need to adjust these and try to add some new testing there.
_Mark
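A rough sketch of the moveTo() pipeline order described in that list: NaN substitution first, then head-offset translation, then axis transforms, with the driver seeing only raw coordinates. All names and the shear-style transform below are illustrative assumptions, not the actual OpenPnP code.

```python
import math

def move_to(requested, current, head_offset, squareness_k=0.0):
    """Sketch: the HeadMountable substitutes NaNs and applies head offsets
    and transforms; the driver only ever sees raw coordinates."""
    # 1. substitute NaNs with the current coordinates (before any transform,
    #    so multi-axis transforms always see complete coordinate vectors)
    x, y = (c if math.isnan(r) else r for r, c in zip(requested, current))
    # 2. head-offset translation (HeadMountable -> head coordinates)
    hx, hy = x - head_offset[0], y - head_offset[1]
    # 3. axis transform, e.g. non-squareness compensation modeled as a shear
    raw_x = hx + squareness_k * hy
    raw_y = hy
    # 4. the driver works purely with raw coordinates
    return raw_x, raw_y
```

Note that swapping steps 1 and 3 would break multi-axis transforms: a NaN in Y would poison the transformed X, which is exactly why the substitution has to happen first.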
Hi Jarosław
> quite many interesting features were in GCode driver only.
This will change now. Most features will be outside the driver and be available for all drivers/machines.
> I mean that instead of sending several G-codes for all small operations - you would send something like "pick/place component on certain Z and return result"? This was the way my previous Chinese machine was working on the protocol level.
OpenPnP has this kind of abstraction, starting from the Job Processor steps. They are really extremely basic and versatile. The "Reference" classes in OpenPnP are only one possible implementation. But if you wanted to replace those, you would need to replace large chunks of code with your own. The important message is that it is possible within OpenPnP (unlike in any other framework I know).
_Mark
Hi Duncan
thanks a lot!
It seems to work.
Some impressions of your machine after fully automatic migration:
Axis mapping using drop downs:
Actuator Mapping:
Simpler management through mapping. Only what is mapped can have Gcode:
_Mark
Hi Duncan,
Hi Everybody,
your feedback prompted some deeper scrutiny and I actually changed some things again. Thanks for that. :-)
> Will be interesting to see if this breaks anything.
The only things now known to break are these:
If 2. is a problem, I'd rather create a new sub-class DelegatedActuator that calls on a second actuator for select functions.
Naturally, script access to machine objects might need to be adapted. Some APIs have changed, and if you're doing something with them, you need to adapt your scripts:
> Maybe as well to warn everyone to take a backup of machine.xml first?
Always do that when upgrading OpenPnP!! Do not expect an extra warning.
_Mark
Hi Mike,
I'm referring to your PM but posting this on the list for others to see.
Some impressions from the result of your machine's migration. As you see, it has created the four-nozzle, dual rack-and-pinion Z configuration:
Btw., the MappedAxis can do any axis scaling and offset, and also combinations like full axis reversal, in an easy-to-understand way: you just relate two points on the axis to each other in an "input" --> "output" manner.
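The two-point "input --> output" idea boils down to a simple linear map; a sketch (not the actual MappedAxis code):

```python
def mapped_axis(input_pts, output_pts):
    """Build a linear axis map from two corresponding points,
    (i0 -> o0) and (i1 -> o1). Covers scaling, offset, and reversal."""
    (i0, i1), (o0, o1) = input_pts, output_pts
    scale = (o1 - o0) / (i1 - i0)
    return lambda v: o0 + (v - i0) * scale

# full axis reversal of a 0..400 mm axis: 0 -> 400 and 400 -> 0
reverse = mapped_axis((0, 400), (400, 0))
```

Relating two points is enough because a linear map has exactly two degrees of freedom (scale and offset), so the pair of correspondences determines it uniquely.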
Migrated Visual Homing:
Axis mapping with simple drop-downs:
Actuators mapped to second driver:
Looks good!
Thanks again for the machine.xml
@Everybody, the call for your OpenPnP 2.0 machine.xml file still stands!
_Mark
Am 13.05.2020 um 17:24 schrieb M. Mencinger:
>
> I refer to OpenPnP :
> What I now need is your machine.xml, especially if you have special mappings, special transforms, shared axes, sub-drivers, moveable actuators etc. pp.
>
> I have smoothie 6axis mapping - OpenPnP 2.0 see attached
> Please let me know how it works out.
>
> Thanks
>
> --
> Mike
Sorry Bert, only 2.0. The XML reader in OpenPnP can't be switched into "tolerant" mode, unfortunately, so migrating this is a hassle.
But you can send me a partially stripped-down version that opens in 2.0. It's mostly the nozzle tip section that changes; you could just delete that for this test, plus handle any remaining errors when trying to open. Perhaps you have a different computer to do this, so your original setup is safe.
_Mark
Hi Bert
I think I've got enough examples for now, some via PM, thanks. All the important Transforms, Non-Squareness and Multi-Driver setups are covered.
If somebody has a ScaleTransform, please send it :-)
_Mark
Hi Jason
I wanted to add some tests for the new axis & motion stuff (future and present). But foraging into this has turned into a veritable side show.
See the video:
https://makr.zone/SimulatedImperfectMachine.mp4
As you can see, I can simulate many more aspects of the machine. And btw., this is all with the new Axes implementation, the basis of which is the auto-migrated default machine.xml. The new framework for this is in a new sub-class of ReferenceMachine, so we can test axes across multiple drivers. To enable it, one needs to change the machine class in the .xml.
What is missing is the "human-less" success monitoring. I'd like to add little Vision operations that look at the pick and place locations and match for the footprint: rendered black, as a template for the black strip pockets, on Pick, and rendered like in Alignment, as a so-so template for the solder lands, on Place. Triggered by the Vacuum Valve actuate() in the NullDriver. It could test both the location and the rotation.
If this works, I guess I could then create a TestUnit. But then we need to talk about test duration, or how to run tests at different "scrutiny" levels.
Jason,
is there a way to determine the current placement in the Job from outside?
More specifically, I need the PartAlignmentOffset for the current part on the nozzle, if it was aligned. As an offset for the Place check, obviously. So far I've found a way for most things, but this bugger really eludes me.
Thanks for your help!
If you're interested: the system already works, as currently all the alignments are perfect in the SimulatedUpCamera, but I'd like to change that, in order to really test Alignment in the simulated imperfect machine.
Just to show you how the PnP location check works, this is a series of alternating Pick and Place triplets of 1) what it sees at the location (the PCB HSV-masked to just the solder lands), 2) the blurred part template, and 3) the match map.
It's also a proof of concept of the "solder land to bottom vision match" we talked about. If you blur one of the two, it will find the natural sweet spot, even if the two don't really match in individual pad/land size. This is nicely visible in the R0603 here:
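The blur-then-match idea can be sketched without any vision library. A box blur stands in for the Gaussian blur, and plain sum-of-squared-differences for the real match metric; both are simplifications of what the actual pipeline would do:

```python
def box_blur(img, r=1):
    """Crude box blur (edge-clamped); stands in for a Gaussian blur."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [img[y + dy][x + dx]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)
                    if 0 <= y + dy < H and 0 <= x + dx < W]
            out[y][x] = sum(vals) / len(vals)
    return out

def best_match(scene, template):
    """Exhaustive sum-of-squared-differences match; (y, x) of the best fit."""
    H, W, h, w = len(scene), len(scene[0]), len(template), len(template[0])
    best, best_yx = float("inf"), None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = sum((scene[y + dy][x + dx] - template[dy][dx]) ** 2
                        for dy in range(h) for dx in range(w))
            if score < best:
                best, best_yx = score, (y, x)
    return best_yx
```

Blurring one side widens the basin around the best offset, which is the "natural sweet spot" effect mentioned above when pad and land sizes differ.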
_Mark
Hi Jason
thanks. If I'm not mistaken, this data only lives transiently on the stack of the JobProcessor thread. No way to access it.
I propose eventually moving the alignment offsets to the Nozzle, where org.openpnp.spi.Nozzle.getPart() already is. The two belong together, IMHO. This will also allow for ad-hoc alignment and accurate placement outside the JobProcessor. A "Place part at camera location" function could be useful during early machine setup and testing (at least with a Z probe or ContactProbeNozzle).
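A sketch of that proposal: the alignment offset lives on the nozzle next to the part, so any caller (not just the JobProcessor) can place accurately. Illustrative Python only, not the actual org.openpnp.spi.Nozzle interface:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Nozzle:
    """Sketch: the nozzle carries both the part and its alignment offset."""
    part: Optional[str] = None
    alignment_offset: Optional[Tuple[float, float, float]] = None  # dx, dy, d_rot

    def pick(self, part: str) -> None:
        self.part, self.alignment_offset = part, None  # offset invalid after pick

    def align(self, offset: Tuple[float, float, float]) -> None:
        self.alignment_offset = offset                 # stored by bottom vision

    def place_location(self, target: Tuple[float, float, float]):
        """Target location corrected by the stored alignment offset
        (the sign convention here is arbitrary)."""
        off = self.alignment_offset or (0.0, 0.0, 0.0)
        return tuple(t - o for t, o in zip(target, off))
```

With the offset on the nozzle, a "Place part at camera location" helper only needs the nozzle reference, with no reach into JobProcessor state.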
_Mark
Hi everybody (TinyG users?),
who uses MOVE_TO_COMPLETE_REGEX?
If the following link is still valid, then it is used for TinyG, and it captures a response coming from the G0/G1 commands, right?
<![CDATA[.*stat:3.*]]>
https://github.com/openpnp/openpnp/wiki/TinyG
Can anybody explain, or provide an example log?
I'm asking because I'd like to separate the wait for a move to complete from the moveTo() command (as laid out in the original post of this topic). This is also important for multi-(sub-)driver machines where axes are mixed across drivers: it should only wait for completion after all the controllers have been commanded to do their moves. Otherwise the moves will be done in sequence (slow and completely uncoordinated!).
If this <![CDATA[.*stat:3.*]]> report is sent back without prompt (i.e. directly as a result of the move itself), this might become a bad source of race conditions.
@tonyluken, is this the "unsolicited response" you meant?
Wanted is a command that you can send to the controller at any time, that either reliably waits for completion before returning (like M400 on Smoothieware) or that sends a status report that can be used in a loop to wait (like the position report M114.1 on Smoothieware).
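The two wanted behaviours can be sketched against a hypothetical serial interface. The `send`/`readline` callables and the position-report format below are assumptions of this sketch; M400 and M114.1 are the Smoothieware commands mentioned above:

```python
import re
import time

def wait_move_complete(send, readline, target=None, use_m400=True,
                       timeout=10.0, eps=0.01):
    """Two wait-for-completion strategies: M400 (controller answers 'ok'
    only after all queued moves finished) vs. polling a position report
    until it matches the commanded target coordinates."""
    deadline = time.monotonic() + timeout
    if use_m400:
        send("M400")
        while time.monotonic() < deadline:
            if readline().strip() == "ok":
                return True
    else:
        while time.monotonic() < deadline:
            send("M114.1")  # realtime position report
            # assumed report shape "X:... Y:..."; real firmware output differs
            m = re.search(r"X:([-\d.]+)\s+Y:([-\d.]+)", readline())
            if m and all(abs(float(v) - t) <= eps
                         for v, t in zip(m.groups(), target)):
                return True
            time.sleep(0.05)
    return False
```

The key property of both variants is that the wait is a separate, repeatable command, so it can be issued after all controllers have received their moves, rather than being baked into each moveTo().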
I've tried to find answers myself, but gave up when I read this:
https://github.com/openpnp/openpnp/wiki/TinyG#quirks
... if this is all still true, how can people even use TinyG with OpenPnP at all? Thanks.
_Mark
Thanks Tony!
What happens if you send the same move command again, i.e. with unchanged coordinates? Is there a second report?
Or do you know any other command that may prompt a reliable status or completion report?
_Mark
Hi Jason,
another issue. The Camera Rotation Jogging you presented here...
https://www.youtube.com/watch?v=0TvqQBkTGP8
...didn't work any more in the NullDriver.
Turns out the reason is that the NullDriver now also uses proper Axis Mapping. For others reading this, the current GcodeDriver-only Axis Mapping is described here:
https://github.com/openpnp/openpnp/wiki/GcodeDriver%3A-Axis-Mapping#mapping-axes-to-headmountables
The Wiki example maps the Z and C axes only to the Nozzles. That's what I would expect, have on my machine, and see in other people's machine.xml files they recently sent me. Furthermore, with multi-nozzle machines there are multiple C axes, and one wouldn't know which C axis to map the camera to, so I guess not mapping the camera is still the correct thing to do, right?
But the code here...
...always talks to the Camera, so with the unmapped axis, getLocation() will always return the constant rotation from the head offsets. The Jog will also not work; the rotation is ignored.
I haven't tried, but this must also never have worked on the GcodeDriver, if the C axis wasn't mapped to the camera.
So this is a bug, and we should rotate the selected tool instead, correct? Plus we could hide the rotation handle when no C axis is mapped to the selected tool.
As a reference: the reticles/crosshairs are also drawn with C taken from the selected tool:
As a bonus this will then also work for the bottom camera that is currently excluded here:
If you agree, I can fix this in my PR-to-be and also conveniently test it in the NullDriver. This will be a big one anyway :-)
_Mark
Hi Jason,
Yes, there is this discrepancy. It is a bit too conveniently hidden in the NullDriver with its head offset of 0, i.e. you are effectively looking through the nozzle. But until we have stereoscopic down-looking cameras and 3D reconstruction of the image underneath, plus perfect cam-to-tip alignment, zero runout etc., we need to live with offsets. And offsets mean switching between tools, right?
And what about the second, the third, the nth nozzle? We need a way to select... says the guy with the one-nozzle machine :-) But even with one nozzle there are those areas that the camera can't reach... and it is very useful to be able to use the nozzle tip for capture (and not to waste that area).
I don't think there is an easy way to completely overturn the current system. But there is room for improvement in how the selected tool is handled, so you'd hardly ever have to think about it. It should be automatically selected whenever a Position button is pressed. Not only the obvious ones, but also the Feeders panel's pick location buttons etc.
When Camera-Jogging is used - even with just a click - it should likewise select the camera.
OpenPnP would need to remember the last selected non-camera tool and restore it when toggling between camera and tool (especially when doing that in the Machine Controls).
The Camera View/Machine Controls could then also delegate the Camera's unmapped C axis coordinate to the last selected tool, while bypassing the runout compensation (that's now a feature of the new transform system, greetings to @doppelgrau). So we can use rotation when the camera is selected, while keeping the X/Y calibrated for the camera.
The runout offset that is visible when you rotate while the nozzle is the selected tool has been vexing users, myself included, for a long time... that would finally be gone.
_Mark
Hi everybody,
IMPORTANT CALL FOR YOUR ASSISTANCE:
As described in the original post of this thread, I'm working on a new GUI-based global Axes and Drivers mapping solution in OpenPnP 2.0.
In short: you can now easily define the Axes and Axis Transforms in the GUI and map them to the Head Mountables with simple drop-downs (see the image). No more machine.xml hacking.
Furthermore, you can add as many Drivers as you like and mix and match different types. All Axes and Actuators can be mapped to the drivers (again in the GUI) and OpenPnP will then automatically just talk to the right driver(s).
Most of the GcodeDriver-specific goodies were extracted from the driver and are now available to all the drivers (some of that still work in progress):
- Axis Mapping (obviously)
- Axis Transforms
- Visual Homing
- Non-Squareness Compensation
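For illustration, here is one common shear formulation of Non-Squareness Compensation and its inverse (the "unapply" used for fiducial migration elsewhere in this thread). This is an assumption of the sketch; the exact OpenPnP formula may differ:

```python
def apply_nonsquareness(x, y, k):
    """Shear model of non-squareness compensation: X is corrected by k * Y,
    where k is the (small) non-squareness factor."""
    return x + k * y, y

def unapply_nonsquareness(x, y, k):
    """Exact inverse, as described for migrating stored fiducial coordinates
    so captured locations on existing machines keep working."""
    return x - k * y, y
```

Because the shear is exactly invertible, round-tripping a coordinate through apply and unapply returns the original values, which is what makes the migration lossless.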
The idea is to automatically migrate your machine.xml to the new solution.
What I now need is your machine.xml, especially if you have special mappings, special transforms, shared axes, sub-drivers, moveable actuators etc. pp.
Thanks, Alex.
I guess there is a mistake in that config. It had an unused sub-driver with a second set of Z and Rotation axes mapped that are never used (no second nozzle).
Note, the migration is still perfect and this machine will work; it now just shows you the superfluous stuff more prominently:
_Mark
This would mean that Camera.getLocation() would not reflect the rotation. Which would mean that capturing camera coordinates would need to go through the camera view instead of just on the camera, unless I am missing something? I'm not a fan of this solution. I think it would be better for the camera to just maintain a "virtual" rotation coordinate.
OK, I understand now.
Can we agree that there are two completely independent questions?
About 1.:
I'll create a VirtualAxis that is mapped to down-looking cameras'
C and serves as a coordinate store.
Users could also assign a virtual Z. It is sometimes convenient
to switch back and forth between camera and nozzle tip such as
when setting up a feeder (height). I always hate it when the
camera forgets the Z.
To be clear, a Z probe measurement would still be favored on
capture. We could perhaps add a modifier Key (holding down Shift)
when you don't want the probe measurement.
The question: should I map a virtual Z per default on migrate?
Btw. I have made it easy to set the NullDriver up for 3D
operation (currently everything is on Z=0) so Z handling can be
tested. It sets all feeder's Z, the BoardLocations' Z and the
bottom cameras' Z.
About 2.:
> Only during machine setup, IMHO. My goal is for the runtime UI to not have a selected tool. So, certainly when you are setting up a nozzle changer, there is an implied selected / current tool, but my end goal is that all of the "operator" functions will just use the right tool automatically.
I'm not sure I understand. Sometimes I feel you underestimate
your own brilliant work. :-)
You don't need any of that control GUI during normal Job
operation, neither virtual nor real. Machine Controls can already
be hidden along with the selected tool combo-box and we could even
add an option to hide them automatically when you hit Run on the
Job. Your system is already brilliant, no need for a Verschlimmbesserung.
But as soon as the job is stuck and you need to trouble-shoot something you need to be back in full control mode, and fast. I wouldn't want a dumbed-own GUI when I need to fix a bad feeder pick location etc.. I can't imagine a single realistic interrupted Job/trouble-shoot use case that I can fix with just the camera view and virtual axes. It would be infuriating to have the real problem at hand and then also having to wrestle free of some Noobs Are King GUI. Sorry I'm quite emotional about these questions, as I'm already a bleeding victim of that *** mega trend.
About the unreachable area, I would lose so much space! One use case see here
https://youtu.be/dGde59Iv6eY?t=250
It would also make most Liteplacer users' life hard, as they are advised to set up their changer in the unreachable area here:
>Attach the holder to your work table. The down looking camera doesn’t need to see the nozzles, a good place is above the hole location, on the left.
https://www.liteplacer.com/assembling-the-nozzle-holder/
Not on my machine, though.
_Mark
This would mean that Camera.getLocation() would not reflect the rotation. Which would mean that capturing camera coordinates would need to go through the camera view instead of just on the camera, unless I am missing something? I'm not a fan of this solution. I think it would be better for the camera to just maintain a "virtual" rotation coordinate.OK, I understand now.
Can we agree that there are two completely independent questions?
- Whether the camera has virtual coordinates.
- Whether we should have a selected tool on the UI.
About 1.:
I'll create a VirtualAxis that is mapped to down-looking cameras' C and serves as a coordinate store.
Users could also assign a virtual Z. It is sometimes convenient to switch back and forth between camera and nozzle tip such as when setting up a feeder (height). I always hate it when the camera forgets the Z.
To be clear, a Z probe measurement would still be favored on capture. We could perhaps add a modifier Key (holding down Shift) when you don't want the probe measurement.
The question: should I map a virtual Z by default on migration?
Btw. I have made it easy to set the NullDriver up for 3D operation (currently everything is on Z=0) so Z handling can be tested. It sets all feeders' Z, the BoardLocations' Z and the bottom cameras' Z.
About 2.:
> Only during machine setup, IMHO. My goal is for the runtime UI to not have a selected tool. So, certainly when you are setting up a nozzle changer, there is an implied selected / current tool, but my end goal is that all of the "operator" functions will just use the right tool automatically.
I'm not sure I understand. Sometimes I feel you underestimate your own brilliant work. :-)
You don't need any of that control GUI during normal Job operation, neither virtual nor real. Machine Controls can already be hidden along with the selected tool combo-box and we could even add an option to hide them automatically when you hit Run on the Job. Your system is already brilliant, no need for a Verschlimmbesserung.
But as soon as the job is stuck and you need to trouble-shoot something, you need to be back in full control mode, and fast. I wouldn't want a dumbed-down GUI when I need to fix a bad feeder pick location etc. I can't imagine a single realistic interrupted Job/trouble-shoot use case that I can fix with just the camera view and virtual axes. It would be infuriating to have the real problem at hand and then also having to wrestle free of some Noobs Are King GUI. Sorry I'm quite emotional about these questions, as I'm already a bleeding victim of that *** mega trend.
About the unreachable area: I would lose so much space! See one use case here:
https://youtu.be/dGde59Iv6eY?t=250
It would also make most Liteplacer users' life hard, as they are advised to set up their changer in the unreachable area here:
>Attach the holder to your work table. The down looking camera doesn’t need to see the nozzles, a good place is above the hole location, on the left.
https://www.liteplacer.com/assembling-the-nozzle-holder/
Not on my machine, though.
--
_Mark
(Clearly I have to do more testing on the machine and less in the simulator)
Well, one of my goals is to make the simulator much more
realistic and testing as results-oriented as possible. This should
now all behave much more like a real machine i.e. the same type of
discrepancies between drivers should no longer be possible.
Ideally, instead of using the NullDriver, this would one day also
work with a regular GcodeDriver talking to a GcodeServer. But this
needs a true Gcode interpreter for a full jobs test. Once there,
people could even test their real-life machine.xml setup
with arbitrary, reasonably standardized Gcode commands and the
comms temporarily bypassed to the GcodeServer. The next step would
be a machine scan using the down-looking camera to get the user's
machine table into the ImageCamera. Imagine how cool ;-) Shouldn't
be too difficult to implement if kept simple.
I'll keep the motion planner building blocks universal, so they can be plugged in as soon as such a Gcode interpreter is available. I'm unfamiliar with advanced Regexes, but maybe this could actually be quite easy.
_Mark
> Good to know it's compatible with the changes, though.
Yes, the real-world tests still have to be done; some
bugs are to be expected and I hope people will help me test it. I
will do my best to make it work for those who do.
_Mark
Hi Marek
no, vacuum sensing simulation is not included and not planned this time. Already too much work with motion.
But the basic framework is there. So this could be added later.
_Mark
Hi Tony
I suspect the following:
If you send two, three or any number of G0/G1 commands quickly,
only one such status report will be issued. And if some
of the commands are somewhat delayed (due to other stuff happening
on the OpenPnP side) it becomes nondeterministic how many
reports are to be expected. It will become impossible to determine
safely when the sequence of commands has completed.
Please try to find a solution. Otherwise TinyG will not be usable
with the advanced GCode driver mode.
Note: TinyG is open source.
https://github.com/synthetos/TinyG/blob/master/firmware/tinyg/gcode_parser.c
Perhaps it would be easier to add a proper M400 command, than to try to coerce it into doing what we want with a crystal ball, tweezers and a crow bar.
https://www.reprap.org/wiki/G-code#M400:_Wait_for_current_moves_to_finish
Have you tried
G4 P0
?
_Mark
Could the driver have the capability to provide an incrementing line number to each G code command and then use that to disambiguate the responses?
Yes, that sounds promising. Can the numbers be reused cyclically when the count overflows past 99999?
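As a sketch of how that disambiguation could work (Python for brevity; the `N`-word prefix and the 99999 ceiling are classic Gcode line-numbering conventions assumed here, not TinyG specifics):

```python
# Sketch: tag each Gcode command with a wrapping line number so controller
# responses can be matched back to the command that caused them.

class LineNumberer:
    MODULUS = 100000  # numbers 0..99999, then wrap around

    def __init__(self):
        self.next_number = 0
        self.in_flight = {}  # line number -> original command

    def tag(self, command):
        n = self.next_number
        self.next_number = (self.next_number + 1) % self.MODULUS
        self.in_flight[n] = command
        return f"N{n} {command}"

    def acknowledge(self, n):
        # Reuse after wrap-around is safe as long as fewer than MODULUS
        # commands are ever in flight at once.
        return self.in_flight.pop(n, None)
```

Reuse after the wrap is then only a problem if 100000 commands could be in flight simultaneously, which no serial buffer will hold.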
> BTW - you may have already answered this somewhere and I just missed it but if you are going to stream commands to the controller without waiting for each to complete, how do you know that the controller can accept another command without overflowing its internal buffer?
I really hope that each controller has proper serial flow control
that is blocking on the internal command queue. It never even
occurred to me that this could be missing. I think it is a common
pattern to just blindly stream Gcode for a 3D printer, so I'm
quite confident.
But having said that, don't overestimate the potential. I need
the asynchronous streaming for bursts of fine-grained interpolated
motion. There will still be frequent interlocks between those
bursts. It's not that OpenPnP can send the whole job and then sit
back and wait. :-)
We need an interlock in all vision operations and also some kind of interlock on vacuum valve switching and sensing. That's because the vacuum-reading Gcode is asynchronous, i.e. the command returns values immediately, in parallel to any ongoing motion. So we need to interlock on the valve switch to know on the OpenPnP side when that happened. The valve switch in turn is not asynchronous, i.e. the controller will by itself wait for motion to complete and then switch the valve. At least that's the behavior on Smoothie.
That last part is unfortunate btw. as it spoils the possibility
to do "on-the-fly" (while moving) Part-Off checking. :-(
BEGIN FUTURE IDEAS
Maybe I'll find a way to trick Smoothie and other controllers or
maybe I'll even hack it to do asynchronous switching.
I plan to add some kind of "soft wait" that allows waiting for
some event without machine still-stand. The new writer thread on
the driver would periodically issue position report commands (not
needed on TinyG). The position reports would be monitored
continuously and a "soft wait" would be released as soon as some
preset positional predicate turns true (Java Function).
The machine could still be in full motion in the background, but
we can trigger on-the-fly actions on the OpenPnP side. Like
checking the vacuum level, as soon as some Z height is reached
after a pick etc.
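A rough Python sketch of that soft-wait idea (OpenPnP itself is Java, and the GRBL-style report format here is purely illustrative; only the shape of the mechanism matters):

```python
import re

# Hypothetical position report line, e.g. "<Run|MPos:10.000,20.000,5.000>".
REPORT_RE = re.compile(r"MPos:([-\d.]+),([-\d.]+),([-\d.]+)")

def soft_wait(report_stream, predicate):
    """Consume periodic position reports until the predicate on (x, y, z)
    turns true. The machine keeps moving the whole time; we only watch."""
    for line in report_stream:
        m = REPORT_RE.search(line)
        if not m:
            continue
        x, y, z = map(float, m.groups())
        if predicate(x, y, z):
            # Released: trigger the on-the-fly action (e.g. vacuum check).
            return (x, y, z)
    return None

# Example: release as soon as Z has risen past 5.0 after a pick.
reports = ["<Run|MPos:10.0,20.0,1.0>", "<Run|MPos:10.0,20.0,5.5>"]
```

The predicate plays the role of the Java Function mentioned above; the driver's writer thread would be the one feeding `report_stream`.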
The next idea is to add a "Monitoring" switch on the Actuator.
The Actuator will no longer be read on demand, but continually
monitored. The Gcode writer thread would issue the
ACTUATOR_READ_COMMAND periodically and as soon as the report comes
back, match it against all the monitoring Actuator REGEXes and
store the measurement. When OpenPnP code reads the Actuator it
will just immediately return the stored last measurement. Often
multiple Actuators also share the same ACTUATOR_READ_COMMAND, as
one Gcode command reports multiple measurements, so a lot of
round-trip delays can be saved. E.g. @doppelgrau's screenshots
showed ~15ms round-trips on vacuum reads, so this can add up to
significant savings.
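A minimal sketch of such a monitoring cache (report format and actuator names invented for illustration):

```python
import re

# One periodic report line is matched against several actuator regexes;
# reads then return the cached value without a serial round-trip.

class MonitoredActuators:
    def __init__(self, patterns):
        # patterns: actuator name -> regex with one capture group
        self.patterns = {name: re.compile(p) for name, p in patterns.items()}
        self.last = {}

    def on_report(self, line):
        # One report can update several actuators at once, saving one
        # round-trip (~15 ms each, per the thread) per extra actuator.
        for name, rx in self.patterns.items():
            m = rx.search(line)
            if m:
                self.last[name] = float(m.group(1))

    def read(self, name):
        # Non-blocking: immediately returns the last stored measurement.
        return self.last.get(name)
```

A background writer thread would call `on_report()` as lines arrive, while OpenPnP code only ever calls `read()`.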
The next step would be to dynamically set Alarm ranges on
Actuators. After the Pick, i.e. as soon as the soft wait on SafeZ
is triggered, an Alarm range on the vacuum level could be set. The
vacuum level would then be continuously monitored during the whole
cycle until the alarm range is removed again in the Place step. An
Alarm status on the Actuator would be set by background monitoring
and evaluated on the next JobProcessor Step. So even a temporary
dip could trigger e.g. a Part-Off alarm. I guess this could be
handled like a "Deferred Exception" in a universal way i.e. the
handling of these Alarms on the JobProcessor Step side could be
generic. This way additional Actuators could monitor other machine
parameters and raise Alarms (like the pump reservoir level or
perhaps a stepper temperature, etc.).
END FUTURE IDEAS
TinyG's line number position reports now seem to enable us to know exactly where it is. I guess this even beats Smoothie's capabilities, where I will need to count the reports or even match up coordinates with the motion plan.
Thanks Tony.
_Mark
> Full, maximum set ?
Like on Christmas? 8-)
OK, here's my list.
Integral planner
Most open source firmwares I looked at handle motion as something
special. Other commands are not planned the same way. This stands
in the way of important capabilities.
Most importantly, being synchronous should not mean "bring the machine to a full still-stand, then do it". It should just mean "as soon as that last (movement) command is done, do it". If more movement commands are queued after the synchronous command, the machine should keep its speed up.
E.g.
G0 X100
M10 ; switch valve on
G0 X200
M11 ; switch valve off
G0 X300
should move 100mm then on-the-fly switch the valve on, then without decelerating move on to 200mm, then on-the-fly-switch the valve off then move the rest to 300mm. Acceleration/Deceleration should only happen at the beginning and end of the sequence.
If I do that on the Smoothie, it will bring the machine to a full still-stand after 100mm then switch the valve, then accelerate again etc. Sloooow!
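The cost of those intermediate still-stands is easy to estimate with constant-acceleration kinematics (velocity and acceleration values below are invented example numbers, not measurements from any particular machine):

```python
import math

def move_time(distance, v_max, accel):
    """Time for a trapezoidal (or triangular) move from rest to rest."""
    d_ramp = v_max ** 2 / accel  # accel + decel distance combined
    if distance >= d_ramp:
        return 2 * v_max / accel + (distance - d_ramp) / v_max
    # Short move: never reaches v_max (triangular profile).
    return 2 * math.sqrt(distance / accel)

v, a = 500.0, 2000.0  # mm/s, mm/s^2 -- example values only

# Stopping at each valve switch: three separate 100 mm moves.
with_stops = 3 * move_time(100.0, v, a)
# On-the-fly switching: one continuous 300 mm move.
on_the_fly = move_time(300.0, v, a)
```

With these numbers the stop-and-go sequence takes roughly 1.34 s versus 0.85 s for the continuous move, and the short moves never even reach full speed.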
Still-stand command
Having said that, a proper M400 command is still an absolute MUST.
So if you want the still-stand behavior, a working
M400 command should give it to you.
Synchronous/Asynchronous operation
Ideally, switching and other "actuator" commands should be
provided in an asynchronous variant (and vice versa).
E.g. a separate M10.1 variant would switch immediately,
without waiting for the queue.
Unique queue acknowledgments
It seems like it should be an obvious feature, but I haven't
found it on any open source controller.
A synchronous "echo" command should report a unique string back
to OpenPnP as soon as the queue reaches this command. So on the
OpenPnP side we know exactly when that happens. Again it should be
done on-the-fly without stopping motion, if it is inserted between
motion commands.
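From the host side, such an acknowledgment could be sketched like this, assuming the controller offers an echo/print command that is executed in order with the queue (Marlin's M118 is one candidate, but whether it is planner-synchronized varies by firmware and would need checking):

```python
import itertools

# Sketch of "unique queue acknowledgments": append an echo command with a
# unique token to a burst of commands, then watch the receive stream for
# that token to learn exactly when the queue has reached this point.

_token_counter = itertools.count()

def with_ack(commands):
    token = f"OPNP-ACK-{next(_token_counter)}"
    return commands + [f"M118 {token}"], token

def saw_ack(received_lines, token):
    # Tolerates firmware-added prefixes like "echo:".
    return any(token in line for line in received_lines)
```

If the echo really is queued like a motion command, this gives the on-the-fly interlock without ever stopping the machine.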
Uncoordinated moves
A controller should implement G0 vs. G1 properly i.e. allow axes to move in uncoordinated fashion, if we want it. Uncoordinated moves can speed things up.
Ideally, and in a special variant only, this could even work across multiple G0 commands. So if you tell it this:
G0 X100 Y200
G0.1 B180
Then X, Y move in uncoordinated "hockey stick" fashion for best speed. Plus B would start to move before the X, Y move is complete. This would be worth a lot when you want to move other stuff on the machine, like feeder actuators, conveyor belts etc. As one example consider this separated "drag" feeder on the SmallSMT machines:
Support path blending
This is probably a tall order but ideally you would support what LinuxCNC can do with the G64 command i.e. "cut corners" in smoothed-out motion to speed up things:
http://www.linuxcnc.org/docs/2.6/html/gcode/gcode.html#sec:G64
That's what I can think of at the moment.
:-)
_Mark
Bert,
I think if you read the beginning of this thread, things will
become clear. Then perhaps also re-read that last post about the
FUTURE IDEAS... This will shave off significant time if done
right.
Otherwise ask again.
> by the very nature of pick and place, it seems like control decisions need to be made at the completion of every move?
I disagree. If you don't see it after re-reading the beginning of
this thread, we can discuss it more specifically.
_Mark
> Optimization of moves by the (TinyG) planner is much more of a concern for CNC machining and 3D printing than it is for pick-and-place, so I don't think we need to worry about the planner queue at all.
PS. See here about S curves ( PM me if you have questions) https://groups.google.com/forum/#!msg/openpnp/W7FLuodpUNA/09OXDHQxAgAJ
Jaroslaw or Mark, are either of you interested in sharing how to calculate the S curve stuff for accel and decel? Not in this thread but maybe another or in PM?
I disagree with some of your assumptions. :-)
PnP is a high speed application and most (all?) open source
firmwares are optimized for slow applications like milling or
3D-printing. As long as controller architects keep thinking inside
that milling/3D-printing box, I don't expect much improvement.
That "inside the box thinking" is also the reason I'll try to
mend it from outside the controllers i.e. to try and find
a solution for all (or let's say most) controllers. I've talked
about that motivation earlier
in this thread. There are other ideas that drive this, that
don't make sense to discuss yet. :)
When you have short or slow moves like on 3D-printing or milling,
the acceleration is (almost) always short i.e. the few positioning
and tool changing moves between the long and slow milling and
extrusion moves don't matter in the sum. So having a rigid S is
not so bad.
The problem: The controller reaches the maximum acceleration just
for a moment, in the middle of "the S". So on a PnP application
you would tune it to the maximum acceleration the motors can
reliably take. But when the S is scaled up on long moves it takes
ages to reach the middle point. You can't compress the S because
then you would exceed the maximum acceleration the motors can
take.
What we want is an "Integration Symbol" shaped curve, not an "S" shaped curve. In the middle there needs to be a long constant acceleration segment that exploits the limits of the motors for the longest time possible, while still controlling the jerk at the beginning and end of the curve. That's true 3rd order motion control.
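The difference can be quantified. With jerk limit j and acceleration limit a (symbols and example values below are mine, for illustration), the "rigid S" must be stretched on long ramps so that its momentary acceleration peak stays within a, while the "integration symbol" profile holds a for a long middle segment:

```python
def time_rigid_s(v, a):
    # "Rigid S": acceleration ramps straight up to the limit a and back
    # down, touching it only for an instant in the middle. To respect the
    # limit, the whole S must be stretched so the ramp-up to velocity v
    # takes t = 2*v/a.
    return 2.0 * v / a

def time_third_order(v, a, j):
    # "Integration symbol": jerk-limited ramp up to a, long constant-
    # acceleration segment at a, jerk-limited ramp back down:
    # t = v/a + a/j (valid when v >= a*a/j, so the middle segment exists).
    return v / a + a / j

v, a, j = 700.0, 2000.0, 40000.0  # mm/s, mm/s^2, mm/s^3 -- example values
```

With these numbers the rigid S needs 0.70 s to reach full speed, the 3rd-order profile only 0.40 s, and the gap widens the faster you want to go.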
That's what I'm doing with my "simulated jerk control". I try to simulate what the controllers should actually do, by using fine-grained interpolated constant acceleration motion. One simple example I demonstrated with the "simulated jerk control" in the opening post of this thread:
https://makr.zone/wp-content/uploads/2020/04/SimulatedJerkControl.mp4
That's proof of concept, simply calculated in an Excel sheet and
sent with Pronterface.
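In the same spirit as that spreadsheet, here is a minimal Python sketch of sampling a jerk-limited ramp-up into fine-grained constant-acceleration G1 segments (the 20 ms step matches the test above; the envelope, numbers and function name are my own illustration, not the actual spreadsheet):

```python
def jerk_limited_gcode(v_max, a_max, jerk, dt=0.02):
    """Emit G1 segments approximating a jerk-limited ramp-up in X with
    piecewise-constant acceleration -- feeding a 3rd-order envelope from
    outside to a controller that only understands 2nd-order motion."""
    lines, x, v, t = [], 0.0, 0.0, 0.0
    t_jerk = a_max / jerk                 # duration of each jerk phase
    t_flat = v_max / a_max - t_jerk       # constant-acceleration phase
    t_total = 2 * t_jerk + t_flat         # assumes v_max >= a_max^2 / jerk
    while t < t_total - 1e-9:
        tm = t + dt / 2                   # sample acceleration at midpoint
        if tm < t_jerk:
            a = jerk * tm                 # ramp acceleration up
        elif tm < t_jerk + t_flat:
            a = a_max                     # hold maximum acceleration
        else:
            a = max(0.0, a_max - jerk * (tm - t_jerk - t_flat))
        x += v * dt + 0.5 * a * dt * dt
        v += a * dt
        t += dt
        # Per-segment feedrate; F is in mm/min in standard Gcode.
        lines.append(f"G1 X{x:.4f} F{v * 60:.1f}")
    return lines
```

Each emitted segment has constant acceleration, so the controller's own planner only sees short trapezoids, yet the sum approximates the smooth 3rd-order envelope.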
Now I need to be able to do that from OpenPnP. For that I
need fast sending, buffering and queuing capabilities i.e. fully
asynchronous operation from OpenPnP.
Of course, ideally, a controller would do the true 3rd order
control autonomously. But as I said before, there are other ideas
in my head that definitely go beyond the capabilities of an MCU
and a simple G-Code dialect. So I know that I will likely need the
fine-grained motion path sending capability anyway. Even
if one of you guys could implement the true 3rd order motion
control on their controller and could convince me and the
majority of users to rip their old controllers out of their
machines and implant yours ;-).
I hope this clarifies things a bit. :-)
... on the other hand, if you can provide true 3rd order Bézier
motion paths with segment for segment 3rd order limits control,
then I might really be tempted! 8-P
_Mark
Hi Jarosław
So why are you calling them "S-curves"?
Sorry, I somehow only looked at the last of your graphs.
But the first one is good!
Can you change acceleration in the middle, i.e. have acceleration != 0 at the nodes? Can you do Bézier?
Will this be true OpenSource? (Sorry I have a vague memory of this being asked before but I don't remember the answer).
_Mark
> "That's proof of concept, simply calculated in an Excel sheet and sent with Pronterface."
Did you share this spreadsheet, or are you willing to share it?
Assuming my summary above is correct: if the control boards behaved properly with respect to accel curves, you would not need to cache?
After all, in a pick and place, something is done at the end of every movement, I think? For instance, there is no need to move X 10mm, then move it 10mm more; just move it 20mm in the first place. UNLESS you do something at the end of the first 10mm.
Hi Jason
Yes, but it seems they have only implemented the
Section 5.1 Ideal S-Curve
and not
Section 5.5 S-Curve with Linear Period
(and I didn't know it is still called an "S-Curve" with the
linear period).
The TinyG comment you linked says:
A full trapezoid is divided into 5 periods. Periods 1 and 2 are the
first and second halves of the acceleration ramp (the concave and convex
parts of the S curve in the "head"). Periods 3 and 4 are the first
and second parts of the deceleration ramp (the tail). There is also
a period for the constant-velocity plateau of the trapezoid (the body).
For what it's worth, that is the same document that TinyG is based on: https://github.com/synthetos/TinyG/blob/d7855101d8d3ed1cbcc108221ea33b80abcef020/firmware/tinyg/plan_exec.c#L111
Jason
On Sun, May 24, 2020 at 4:01 PM Jarosław Karwik <jarosla...@gmail.com> wrote:
Because they are S curves ;-) Just that they are kind of split, with a constant acceleration phase once the jerk calculation reaches the maximum allowed acceleration.
There was a link on my thread:
Enjoy the math ;-)
That by itself is not always enough, though - you have seen my pictures: sometimes there is not enough time to reach full speed, so the shape will be simpler.
My project is open source (both sw and hw). I will have the first proto running next week.
(Almost) perfect summary, Bert.
:-)
Except for the LinuxCNC part. In my application, the controller
is still needed to control the constant acceleration ramps for the
segments and generate the steps in real-time. I'm only giving it
the "envelope" of the motion and I give it up-front, not being
concerned with (difficult!) real-time issues. LinuxCNC on the
other hand is fully involved, including in the real-time, i.e. there
is no separate controller, last time I checked it out (they have
FPGA cards but as far as I understood they only do the last bit of
step generation).
_Mark
Hi bert
The spreadsheet does no real move calculation. There is no useful
math in it. It just applies fixed phases of constant jerk and
generates Gcode for this simple test. I.e. no move length is
pre-determined; it just lands wherever the 3rd order integration
lands, so it is not useful as-is. :-)
The purpose was not the math but to test Smoothie's ability to
receive and parse Gcode and plan the motion at sufficient speed. A
20ms simulation step seemed to be no problem. I haven't tried
smaller values because this results in tiny first steps (depending
on the jerk limit), so that seemed fine enough. The example was
optimized for making the video, not for the maximum values of the
stepper.
See attached.
I haven't checked with a scope or whatever so I don't really know
if the sent motion envelope was completely (full-) filled by the
controller. The motion just "feels" right and the
anti-jerk/anti-vibration effect (that is the goal after all) is
obvious in the video and even more obvious in real life when you
see and hear it before you.
_Mark
>Cutting corners in X/Z or Y/Z or even X/Y/Z just kind of negates the safe Z concept. So what you are really saying here is that safe Z is lower than where we said it is. It is safe to start moving X/Y when Z hits some point lower. So at that point we start moving X/Y while Z is still moving up. If that is the case then just send that move sequence. I think Z is a short move anyway so no big benefit there from not letting it stop between moves.
My idea is to add a "Safe Z Head-Room Zone" rather than a single
Z value. This means any optimization will be restricted to that
Zone.
In effect, we do not need to wait for Z to decelerate. It will go
at full speed to Safe Z and as soon as it has passed the
threshold, the controller can start to accelerate in X/Y. So Z
will overshoot but if your machine has the Z head-room, there is
no harm, au contraire it will be a much smoother ride for the part
on the nozzle i.e. it can be done at greater speed even for large
or heavy parts. The same on re-entry: Z will be higher than Safe Z
and start to accelerate down before X and Y have reached the entry
point. We can also wrap the backlash-compensation into that
re-entry trajectory. Even more time saved.
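Roughly quantified, letting Z overshoot into the head-room saves the Z deceleration time on every transit, at the cost of some overshoot distance (formulas are basic kinematics; the Z axis numbers are invented examples):

```python
def decel_time(v, a):
    # Time spent decelerating from speed v at acceleration a; with a
    # head-room zone, X/Y can start roughly this much earlier.
    return v / a

def overshoot_distance(v, a):
    # Distance Z travels past Safe Z while decelerating from full speed:
    # d = v^2 / (2a). This is the head-room the machine must physically have.
    return v * v / (2 * a)

vz, az = 200.0, 3000.0  # mm/s, mm/s^2 -- example Z axis values

saved_per_transit = decel_time(vz, az)
headroom_needed = overshoot_distance(vz, az)
```

With these example values that is about 67 ms saved per transit for under 7 mm of head-room, and it happens twice per pick-and-place cycle (exit and re-entry).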
On a shared Z two-nozzle machine, this means the nozzles will
sometimes limp above their respective Safe Z (i.e. are not
leveled). No harm in that either. If you value esthetics over
speed, you could always leave the Headroom Zone at 0.0 :-).
I think that video speaks for itself, although the "rounding" would of course be much less pronounced in real applications:
Do you ride a bicycle? Try going around 90° corners fast. Observe
your intuitively chosen path :-)
_Mark
The point is not bringing it to Z0 but just to let it decelerate
in the head-room. Ideally it will not use the whole head room.
Imagine two 100m sprint tracks (or whatever they are properly
called in English).
One has a tall brick wall at the finishing line. The other has room for the runners to decelerate.
On which track will they do better times?
Or why is this bicyclist taking what seems like a wide
detour around the corner to be faster?
https://youtu.be/FP7_fe4bxBA?t=189
_Mark