motion path blending?


Dmitry Yurtaev

Aug 12, 2018, 10:16:41 PM
to OpenPnP
hi!

i've been toying with the idea of a pnp build for a while, and finally put together a tiny machine, mostly from parts that had been lying around for years. i've been into mitsubishi servo hacking for a number of years, so the machine uses mr-j3-b servos with 50/100/200w motors for z/x/y. those are pretty powerful for the machine size, so i can get some speed and acceleration out of them. recently i got it moving under LinuxCNC control and started learning OpenPnP. i haven't hooked it up yet, but browsing thru the code got me thinking about how it will behave. as far as i understand, the driver sends single linear-segment motion commands one by one, and that will not allow linuxcnc to perform path blending (corner rounding) to maintain speed throughout the trajectory from pick to place locations.

am i correct? if i am, is it possible to do anything about it? i'm not into java, but i have some general programming experience and i'm willing to work on that...

here's a short clip of my machine running g-code which simulates a (faulty) pnp operation:


if i turn off path blending, i'm afraid it will be several times slower for the same acceleration limit...

Jason von Nieda

Aug 12, 2018, 10:28:04 PM
to ope...@googlegroups.com
Hi Dmitry,

You're right - OpenPnP sends one segment at a time. This is done because in most cases it needs to do something between segments. Sometimes that is a vision operation, or advancing a feeder, or moving a drag pin, measuring a vacuum level, etc. This is one way that PnP differs pretty extensively from normal CNC or 3D printing. It's not really just a matter of blasting commands to the controller as fast as possible. It's usually: run a command, check that something happened, run another command, maybe repeat the previous command, etc.

This is a bit of a difficult problem to overcome with software like OpenPnP. If you were writing PnP software for a specific machine, with a specific set of known moves, it's easy. But OpenPnP is designed to work with *every* machine, and that means that sometimes we have to work at a "lowest common denominator" level of efficiency.

Jason



Dmitry Yurtaev

Aug 12, 2018, 11:06:13 PM
to OpenPnP
oh, thanks for the quick reply! :)  yes, i understand that there're many such events. that's fine. linuxcnc also deals with similar situations - they call 'em "queue busters"...

how common are those? what can happen between e.g. a pick (vacuum on) and a place (vacuum off), or a vision op? i.e. the points where a nozzle really has to come to a complete stop?

is it (at least theoretically) possible to join z-up, xyc-move, z-down segments into a single op at some level of abstraction in the code? :)

Brynn Rogers

Aug 12, 2018, 11:37:33 PM
to OpenPnP
Hi Dmitry,
    
   For an optimal Z-up, XYC-move, Z-down combined move, you would really need to start the Z-up, and only AFTER it has moved up to Zmin (which is like the height of the part plus extra to clear feeder shutters and stuff) is it safe to start the XYC move. If you knew that the Z-down was going to take Zdtime, you still can't start that move Zdtime before the XYC is finished, because you might hit other tall parts that are already placed on the board.

  The machine I am building has linear motors for X and Y, so it could be crazy fast like yours. So you're making me think about some of these things too.
I hadn't even thought of starting the XYC before the Zup was done. My controller will report a move is done before the motion completely damps out, and I figured that I could safely start the Zdown at that time, in the hopes that XYC finishes oscillating around the commanded coordinate before the Zdown gets to the board.

  I'm not planning on worrying about those little bits of optimization until long after I'm making boards.
One thing I am thinking about eventually looking at, though, is a line scan camera - basically the head just flies the part over the up-looking camera without stopping.
If doable, this would be a huge timesaver. But even that I am not worried about till I have made boards.

Brynn

Dmitry Yurtaev

Aug 13, 2018, 12:15:04 AM
to OpenPnP
   For an optimal Z-up, XYC-move, Z-down combined move, you would really need to start the Z-up, and only AFTER it has moved up to Zmin (which is like the height of the part plus extra to clear feeder shutters and stuff) is it safe to start the XYC move. If you knew that the Z-down was going to take Zdtime, you still can't start that move Zdtime before the XYC is finished, because you might hit other tall parts that are already placed on the board.

sure! but if Z has some headroom above Zmin, it should be faster to allow it to overshoot and stop there after XYC has started to move. same for Z-down - start descending before XY reaches the target position, so as to cross Zmin when we're at the right place.
 
  The machine I am building has linear motors for X and Y, so it could be crazy fast like yours. So you're making me think about some of these things too.

yeah, cool, i've seen your video. can't even compare that to my vibration-inducing stretchy X belt drive :)
 
I hadn't even thought of starting the XYC before the Zup was done. My controller will report a move is done before the motion completely damps out, and I figured that I could safely start the Zdown at that time, in the hopes that XYC finishes oscillating around the commanded coordinate before the Zdown gets to the board.

motion controllers usually have an "in position" signal. with properly tuned servos they should not leave a predefined range around the target position. definitely they should not oscillate much...
 
One thing I am thinking about eventually looking at, though, is a line scan camera - basically the head just flies the part over the up-looking camera without stopping.
If doable, this would be a huge timesaver.

hm. interesting. cannibalize a CCD sensor from a flatbed scanner? :)
 

Brynn Rogers

Aug 13, 2018, 12:44:20 AM
to OpenPnP


On Sunday, August 12, 2018 at 11:15:04 PM UTC-5, Dmitry Yurtaev wrote:
   For an optimal Z-up, XYC-move, Z-down combined move, you would really need to start the Z-up, and only AFTER it has moved up to Zmin (which is like the height of the part plus extra to clear feeder shutters and stuff) is it safe to start the XYC move. If you knew that the Z-down was going to take Zdtime, you still can't start that move Zdtime before the XYC is finished, because you might hit other tall parts that are already placed on the board.

sure! but if Z has some headroom above Zmin, it should be faster to allow it to overshoot and stop there after XYC has started to move. same for Z-down - start descending before XY reaches the target position, so as to cross Zmin when we're at the right place.

Ultimately that would be ideal for fastest placement. I imagine if you can make your controller say that Zup is 'done' as soon as it has cleared Zmin, then OpenPnP would move on to the XYC move. And you could get your controller to say XYC is done early, by the amount of time it will take the Zdown to cross Zmin.
I think it would need to know what that Zmin would be, and how long the Zdown will take to cross Zmin, all at the moment the XYC move is commanded.
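Roughly, at the moment the XYC move is commanded you'd have to evaluate something like this (a back-of-the-envelope sketch in plain Java, all names and numbers made up, assuming a simple trapezoidal velocity profile):

class ZLeadTime {
    // How long a Z-down starting from rest needs to travel dz (down to Zmin),
    // given velocity/acceleration limits vMax and aMax.
    static double zDownLeadTime(double dz, double vMax, double aMax) {
        double dAccel = vMax * vMax / (2 * aMax); // distance needed to reach vMax
        if (dz <= dAccel) {
            return Math.sqrt(2 * dz / aMax);      // never reaches vMax
        }
        return vMax / aMax + (dz - dAccel) / vMax; // accelerate, then cruise
    }

    public static void main(String[] args) {
        // e.g. 10 mm of Z travel down to Zmin at 300 mm/s and 5000 mm/s^2
        System.out.println(zDownLeadTime(10, 300, 5000) + " s");
    }
}

Report the XYC 'done' that many seconds early, and the nozzle crosses Zmin right as XY settles.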


 
  The machine I am building has linear motors for X and Y, so it could be crazy fast like yours. So you're making me think about some of these things too.

yeah, cool, i've seen your video. can't even compare that to my vibration-inducing stretchy X belt drive :)
 
I hadn't even thought of starting the XYC before the Zup was done. My controller will report a move is done before the motion completely damps out, and I figured that I could safely start the Zdown at that time, in the hopes that XYC finishes oscillating around the commanded coordinate before the Zdown gets to the board.

motion controllers usually have an "in position" signal. with properly tuned servos they should not leave a predefined range around the target position. definitely they should not oscillate much...

This one I can make say it is done as soon as it reaches the coord, or have it wait till it stays within N counts of the coord, and/or until the velocity is within V of zero.
It can also flag an error if it takes too long to damp out the move. If I have it tuned correctly you're right, it shouldn't oscillate much. I think the mass of the head is one of the tuning variables. The biggest reason to stick with the Gemini controller is that all 20-some tuning variables needed are known just by typing in the slide part number.
  
One thing I am thinking about eventually looking at, though, is a line scan camera - basically the head just flies the part over the up-looking camera without stopping.
If doable, this would be a huge timesaver.

hm. interesting. cannibalize a CCD sensor from a flatbed scanner? :)

I've found that I'm better off just getting a brand new, modern linear CCD with a proper data sheet. The old CCDs have goofy multiple power supplies and weird clocking. Newer ones are much easier to talk to, and have better specs anyway.

 
 

Daren Schwenke

Aug 13, 2018, 7:44:56 AM
to OpenPnP
I would think the linear CCD is a good idea, provided you can hit it at a range of angles and then recover the original geometry fast enough to have it make sense.
If not, what about firing a strobe? You would get your same instantaneous stopped position in space and need no 'new' hardware beyond a decent camera. I imagine someone has done this, but it's just what popped into my head.

 
 

Dmitry Yurtaev

Aug 13, 2018, 8:29:58 AM
to OpenPnP
Ultimately that would be ideal for fastest placement. I imagine if you can make your controller say that Zup is 'done' as soon as it has cleared Zmin, then OpenPnP would move on to the XYC move. And you could get your controller to say XYC is done early, by the amount of time it will take the Zdown to cross Zmin.
I think it would need to know what that Zmin would be, and how long the Zdown will take to cross Zmin, all at the moment the XYC move is commanded.

not obvious (if possible at all) how to time it without look-ahead... 

If not, what about firing a strobe? You would get your same instantaneous stopped position in space and need no 'new' hardware beyond a decent camera. I imagine someone has done this, but it's just what popped into my head.

i looked at some linear image sensor datasheets and it appears that they are not that fast. older ones specify characteristics for a 10ms integration/line time. way too slow... imho, a strobe light is a better idea. but to mitigate the rolling shutter effect the flash length should be less than 1 row time. so the light intensity needs to be at least 1000 times higher than normal lighting to deliver the same amount of photons... possible? :)
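just to sanity-check that factor (illustrative numbers only - the 10us row time is my guess for a modern sensor):

class StrobeBudget {
    public static void main(String[] args) {
        double continuousExposure = 10e-3; // s, the 10ms integration/line time
        double flashLength = 10e-6;        // s, a flash shorter than one row time
        // same photon count Q = intensity * time, so:
        System.out.printf("needed intensity factor: %.0fx%n",
                continuousExposure / flashLength); // prints 1000x
    }
}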

Tuxsoft

Aug 13, 2018, 9:04:52 AM
to OpenPnP
Talking crazy fast: in my experience some components like qfp100+ move slightly on the nozzle if I set my speed too fast, and this can lead you down the garden path looking at vision for the problem. I am of the school that believes that if you are placing $xK parts per hour you might consider spending $xxxK on your machine. Could be I need more vacuum.

Daren Schwenke

Aug 13, 2018, 9:45:58 AM
to OpenPnP
I'm guessing a decent camera/global shutter is needed, and good timing to fire the strobe as the camera is capturing. A real strobe is pretty bright. 100 small LEDs would work too (the COB chips are a lot slower, I believe). I would probably just buy a commercial flash unit (preferred), or for a cheapskate like me, grab a couple of Amazon LED strobes for the LED array in them and drive them with a fast mosfet. You can send crazy amounts of current through an LED if the pulses are short enough. I think you can get 2 or 3 of those for like $20. You might burn one up figuring out the limits, but that might get you there for 1/3rd the cost.

Another advantage here, beyond the obvious time savings, is that you come to a stop and accelerate one less time, so that's one less chance for your component to shift.

Jason von Nieda

Aug 13, 2018, 9:56:18 AM
to ope...@googlegroups.com
A typical placement looks like:
Feed
Move to feeder
Lower nozzle
Pick
Raise nozzle
Vacuum check
Move to camera
Lower nozzle
Bottom vision
Raise nozzle
Move to board
Lower nozzle
Place
Raise nozzle

Some of those could certainly be combined but it would require that the driver is aware of the stop points and we don’t currently have a way to do that.

With that said, you could probably get rid of most of the stops just by having the driver check if the move is a z only move and then combine it with the previous and next moves without waiting for motion to complete.
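Something along these lines, as a rough sketch only (invented names, not the actual GcodeDriver code):

class ZBlendSketch {
    double lastX, lastY, lastZ, lastC; // last commanded position

    void moveTo(double x, double y, double z, double c) {
        // A Z-only move going *up* can blend into the following XY move.
        // A Z-down must still finish before e.g. pick/place fires the vacuum.
        boolean zUpOnly = x == lastX && y == lastY && c == lastC && z > lastZ;
        send(String.format("G1 X%.4f Y%.4f Z%.4f E%.4f", x, y, z, c));
        if (!zUpOnly) {
            send("M400"); // wait for the motion queue to drain
        }
        lastX = x; lastY = y; lastZ = z; lastC = c;
    }

    void send(String line) { System.out.println(line); } // stand-in for serial I/O
}

Blending the XY move into the following Z-down as well would need one move of lookahead, which the current one-call-per-move interface doesn't give us.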

I’d be open to reworking the driver interface to support higher performance motion if someone wants to put together a proposal.

Jason


Daren Schwenke

Aug 13, 2018, 10:00:58 AM
to OpenPnP
The reason this was in the front of my mind... I was actually thinking of using this method to maybe avoid having to wait for my swingarm optics to settle as much. But... I'm using the RPi camera now, and have not experimented with how much light I can get out of my strips, or whether they are even fast enough, being made out of COB chips. The rise time is decent, but they fall off slowly, so not so much a strobe anymore.

Daren Schwenke

Aug 13, 2018, 10:05:28 AM
to OpenPnP
You wouldn't have to implement this yourself, beyond the ability to send two subsequent move commands instead of one, for Machinekit/LinuxCNC at least.
The internal Q factor for path blending would determine how rounded your corners are between the subsequent moves, provided you actually have a queue and not a single command sitting in the buffer.
Set it low for nice sharp corners, set it high for smooth rounded ones.
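For reference, a driver could set that up once at connect time with something like this (G61/G64 are real LinuxCNC G-codes; the tolerances and the send() helper are made up):

class BlendingPreamble {
    static void send(String line) { System.out.println(line); } // stand-in for the connection

    public static void main(String[] args) {
        send("G64 P0.5 Q0.1"); // blend the path: allow up to 0.5mm deviation,
                               // and fold runs of tiny segments within 0.1mm
        // send("G61");        // the opposite: exact path mode, sharp corners
    }
}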

Brynn Rogers

Aug 13, 2018, 10:51:48 AM
to OpenPnP
Tuxsoft,
   If you watch a $xxx,xxx machine, sure, they move crazy fast for the small parts, but as soon as it picks a big qfp100 or a 6mm tall electrolytic cap you see that it moves at a fraction of the speed.
Clearly even those machines would have the big parts moving around on the nozzle or flying off if they didn't slow down for them. It's obvious they have slowed the top speed for such a move, and I assume they also reduce the acceleration on those moves - tough to see that with the naked eye.

   Are you using a nozzle with a bigger hole for the big parts? The bigger the hole, the greater the holding force from the vacuum.

Brynn

Dmitry Yurtaev

Aug 13, 2018, 10:55:54 AM
to OpenPnP
On Monday, August 13, 2018 at 4:56:18 PM UTC+3, Jason von Nieda wrote:

I’d be open to reworking the driver interface to support higher performance motion if someone wants to put together a proposal. 

great! i guess here's what i'm gonna do: finish my mechanics/electronics and get it running under OpenPNP control. then i'll try some quick and dirty hack on your nice code to make it combine moves between stops into one operation. test it and show the results. and if it appears to be worth the effort, i'll try to come up with a proposal...

and one more question. i see a getLocation() method in the ReferenceDriver interface, which the LinuxCNC class implements by returning values of local variables that get updated during movement calls from the host. there's no position feedback from the machine. probably there's no reason to have one. is that true? does the code make the assumption that the machine stays at the last commanded position? is it ok to, say, move it by hand between runs?

ma...@makr.zone

Aug 13, 2018, 10:56:08 AM
to OpenPnP
Hi Dmitry

nice video!!

sure! but if Z has some headroom above Zmin, it should be faster to allow it to overshoot and stop there after XYC has started to move. same for Z-down - start descending before XY reaches the target position, so as to cross Zmin when we're at the right place.

I have thought about this for some time now and also looked into how to implement it in OpenPNP.

Problems:
  1. No open source controller known to me implements a proper G0 command, a true "uncoordinated" move that allows axes to accelerate differently etc. for best speed.
  2. Few controllers (except LinuxCNC, afaik) support the G64/G61 commands that allow motion blending, overshoot, cutting corners etc.
  3. If you want to optimize the path from software (i.e. from OpenPNP moveTo()) you need to know all the parameters of the machine, such as max acceleration, max feed rate etc. for each actuator, in order to shape the path correctly. This is further complicated by the CamTransform on dual-head seesaw or rocker Z axes.
  4. Like Jason said, OpenPNP currently assumes a move is complete when exiting the moveTo() method. Just leaving out the M400 (wait for command queue empty) command in the G-code driver will not work, as some code in OpenPNP (such as computer vision, vacuum sensing, feeder action on sub-controllers, etc.) assumes it is coordinated with the motion.
  5. Furthermore, the nice OpenPNP system of "pluggable" drivers only works with a very simple interface. Changing that is probably out of the question.
Solutions:

I have some ideas to do this without changing the driver interface and the client software, but it would be quite a challenge:
  1. Optimize only above safe Z.
  2. We need some headroom (like Dmitry wrote), so seesaw Z handling must be properly managed, i.e. the second head must not be moved to safe Z if it is already above it. This also means the head will likely move with slightly limping nozzles.
  3. Use a heuristic to determine if a move must be coordinated / synchronous, i.e. finished with the M400 command.
  4. The heuristic would be this (see the sketch after this list):
    a) am I a HeadMountable that may move asynchronously above/at safe Z? (i.e. everything except a down-looking Camera, or better yet, add a checkbox on the HeadMountable)
    b) AND is any of the XY axes mapped to me?
    c) AND is the end-point above/at safe Z?
  5. If yes, just record the move in a queue.
  6. If no, record this last move in the queue, then optimize and execute the path, as follows:
  7. Queued segments going through the safe Z ceiling (either way) will be broken into two segments at safe Z, so a segment in the queue is either all-in or all-out.
  8. Queued segments below safe Z will not be path optimized (executed as today).
  9. Queued segments above/at safe Z will be optimized into more elaborate paths:
    a) ignore any Z moves (see point 2)
    b) combine multiple moves into one, especially the C axis rotation moves - why? see this:
    https://www.youtube.com/watch?v=nBnfRhdnPks
    c) plan a path in XY for uncoordinated, fastest, non-linear, "hockey-stick" positioning.
    d) plan a path in Z both ways: deceleration going up (overshoot), acceleration diving down (like Dmitry suggested).
    e) NOTE that unless we add obstacle evasion in the future, there is no case of XY zigzagging above/at safe Z in OpenPNP known to me.
    f) NOTE that even the move to a Camera position, or to the dive point into the "non-safe" Z zone, can be optimized; it just needs to be precise at the endpoint.
  10. The last segment in the queue is to be offset by the backlash compensation, regardless of whether it is inside or outside the safe Z zone.
  11. Now do the final positioning (the final backlash-compensation move).
  12. Now finally issue the M400.
  13. CAVEAT: All scripts and G-code fragments that rely on finished motion inside safe Z will have to issue the M400 too. Note that for feeder actuation this is usually not wanted; it is good for it to be executed in parallel (someone asked for this some weeks ago).
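Point 4 as code, just as a minimal sketch (HeadMountable and safe Z exist in OpenPNP; everything else here is invented for illustration):

interface AsyncCheck {
    boolean allowsAsyncAboveSafeZ(); // (a) e.g. false for a down-looking Camera
    boolean hasMappedXyAxis();       // (b)
}

class Heuristic {
    static boolean mayJustQueue(AsyncCheck hm, double targetZ, double safeZ) {
        return hm.allowsAsyncAboveSafeZ()
            && hm.hasMappedXyAxis()
            && targetZ >= safeZ;     // (c) end-point above/at safe Z
    }
}

If it returns true, just record the move (point 5); otherwise flush, optimize and execute the queue (points 6 to 12).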

I would like to give it a go (when time permits). My only concern is adding all these accelerations and feed rates to the UI. I have no idea how the UI works. And it should be an option, hidden away a bit, for a better beginner's experience :)

_Mark

ma...@makr.zone

Aug 13, 2018, 11:00:49 AM
to OpenPnP
Small correction: in 4 b) it should simply say "AND is any of the axes mapped to me?"
Notably it should include C too.

Dmitry Yurtaev

Aug 13, 2018, 11:49:24 AM
to OpenPnP


On Monday, August 13, 2018 at 5:56:08 PM UTC+3, ma...@makr.zone wrote:

I have some ideas to do this without changing the driver interface and the client software, but it would be quite a challenge:

everything you say makes perfect sense. i don't know anything yet about OpenPNP internals, so please forgive my ignorance.
but i thought about the possibility of introducing a kind of middle-layer motion planner with higher-level calls like:

"pick a part, move along a path to XYZC, above Zmin, with specified speed/acceleration, sensing vaccum - stop and signal an error if leaking"

a reference implementation which directly maps onto the existing drivers should be straightforward (except that it will need to interface with a ton of different objects: heads, sensors, nozzles, valves - whatever they are). and for g-code/linuxcnc those compound moves could be implemented as parametrized g-code subroutines...
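purely as a sketch, such a call could look like this (Nozzle and Location are OpenPNP's existing types; the planner interface and the exception are invented):

interface MotionPlanner {
    // pick at 'from', travel above zMin to 'to' with the given
    // speed/acceleration, sensing vacuum the whole way; stop the motion
    // and signal an error if the part is leaking.
    void pickAndTravel(Nozzle nozzle, Location from, Location to,
                       double zMin, double speed, double acceleration)
            throws PartLostException;
}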

but i'm sure everything is not that simple... :)

ma...@makr.zone

Aug 13, 2018, 3:49:32 PM
to OpenPnP

On Monday, August 13, 2018 at 5:49:24 PM UTC+2, Dmitry Yurtaev wrote:
 
...i thought about the possibility of introducing a kind of middle-layer motion planner with higher-level calls like:

"pick a part, move along a path to XYZC, above Zmin, with specified speed/acceleration, sensing vacuum - stop and signal an error if leaking"

a reference implementation which directly maps onto the existing drivers should be straightforward ...)

I planned to put my new stuff into the GcodeDriver. I got the impression that this is the only driver that really matters nowadays. It contains a lot of cool stuff that is actually not driver-specific, like sub-drivers, visual homing, non-squareness compensation, backlash compensation. So I assumed it is common practice now to put new cool stuff only into this driver. It could actually be called a "TextlineDriver", as it can send and interpret anything with a standard textual numerical representation and newlines; it does not have to be Gcode. It can interface serial ports and TCP/IP. So even very simple Arduino sketches, local or remote daemons etc. can be interfaced; there's no need for a veritable Gcode interpreter there. Is there anything in the LinuxCNC driver that the Gcode driver cannot do?

As to a "middle layer": Currently the actual "business logic" (JobProcessor etc.) generates very abstract moveTo() commands against the Movable objects in the machine model (Nozzles, Cameras, etc.). The various objects themselves then propagate these move commands to the driver stack.

In the case of the GcodeDriver you can nest sub-drivers, sub-sub-drivers etc. in a tree structure. However, it only allows for nested Gcode drivers. You can't plug, say, a LinuxCNC driver underneath a Gcode driver, at least not from the UI. Maybe that works from the XML?

Sub-driver stacking would be ideal for any type and number of "middle layers". By allowing any type of sub-driver underneath the GcodeDriver (i.e. LinuxCNC) and giving the sub-driver a "delegate" flag, all the visual homing, non-squareness compensation, backlash compensation, the stuff we're talking about, and all future GcodeDriver additions could be done by the super-driver and then delegated to its sub-drivers for the actual driving. Currently it just propagates the original move location to the sub-driver(s), but this could be very easily changed.

If there ever was a need for a proprietary non-textual/binary protocol (like the one Jason is just now decrypting) or any of the other drivers, it could simply be plugged underneath the top Gcode driver and benefit from all the smartness cascading down. This could even be stacked with various dedicated "middle layer" drivers etc. sandwiched in.

_Mark

Jason von Nieda

Aug 14, 2018, 12:30:13 AM
to ope...@googlegroups.com
Here's a couple quick, random thoughts, which may generate some discussion:

1. GcodeDriver was never meant to be as "fat" as it is now. Non-squareness compensation, visual homing, sub-drivers, backlash compensation, axis mapping, etc. are all things that do not belong in GcodeDriver. Some of them should be high level features, and some should be mid-level features that other drivers can use or extend.

2. I think it's time to revisit the Driver interface. It has served us well for 6 years now, but it's a little long in the tooth. Some things I've been mulling over:
    1. Make the driver more of a HAL - the driver tells the machine what devices exist, rather than the current way, which is backwards. In other words, if the machine supports vacuum sensors, and controllable lighting, the driver should tell the machine about that rather than the user having to set that up separately and then link them together.
    2. More and more people are asking about performance, like in this thread :) At the very least we could probably add waitForMovesToComplete() (sketched below) and only call it when we know we need motion to stop. Then anything leading up to a wait could be queued. Personally, I think this will end up being less of a performance gain than folks guess, but we can try it and see. My reasoning is that e.g. OpenPnP basically does no XY -> XY moves. Most moves are XY -> Z -> XY. If you are comfortable with curving off the top of a Z move, I guess that's okay, but in a lot of cases that's a bad idea.
    3. (Maybe) Remove the Drivers dealing with head offsets. This was intended to support complex machines that would figure out their own offsets and which might have variable offsets, but in the end I think it just adds complexity that will never be used.

Ultimately, I think drivers should be a lot simpler than the GcodeDriver. The driver should be nothing more than a simple translator for the commands to and from the machine. 
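As a strawman only (HeadMountable and Location are the existing model types; nothing here is a committed design), the core of such a driver could shrink to:

interface MotionDriver {
    void home() throws Exception;

    // Queue a move; may return before the machine stops moving.
    void moveTo(HeadMountable hm, Location location, double speed) throws Exception;

    // Block until everything queued so far has physically completed.
    void waitForMovesToComplete() throws Exception;
}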

If someone is interested in spending some time working on this, that would be great. We need to come up with something that won't mean rewriting the entire application, but I think that's possible.

Jason



Mark

Aug 14, 2018, 9:14:51 AM
to ope...@googlegroups.com

Thanks Jason

 

> 1. GcodeDriver was never meant to be as "fat" as it is now. Non-squareness compensation, visual homing, sub-drivers, backlash compensation, axis mapping, etc. are all things that do not belong in GcodeDriver. Some of them should be high level features, and some should be mid-level features that other drivers can use or extend.

 

Yes, indeed.

 

 

> 2. I think it's time to revisit the Driver interface. It has served us well for 6 years now, but it's a little long in the tooth. Some things I've been mulling over:

>    1. Make the driver more of a HAL - the driver tells the machine what devices exist, rather than the current way, which is backwards. In other words, if the machine supports vacuum sensors, and controllable lighting, the driver should tell the machine about that rather than the user having to set that up separately and then link them together.

 

Could you please elaborate a bit? Sorry, I don't understand.

 

I think the current building blocks are quite nice if we clean them up a bit and streamline things.

 

Reality check: we have an M:N relation between drivers and machine model objects. One single controller will often do many things, like control some but not all axes, control lights, control the vacuum pump but not the valve; it might control the drag pin actuator but not the drag pin positioning axis, etc. The different axes of the same Movable might be controlled by different drivers. There is no clean relationship between the controller and the machine model objects. So yes, axis mapping and sub-drivers may be a bewildering mess, but such a linking scheme is necessary because the user does not want to spend extra dollars just to dedicate a separate controller board to a clean group of machine model objects. Also, a driver is ultimately 1:1-associated with a connection (serial port/tcp), so you can't break that up, unless you make it more complex on the controller side, which I guess would be no gain in the end.

 

What I think could be done...

 

First, an illustration of what the UI could look like; explanation below:

 

 

1. Convert the sub-driver tree structure into a linear pipeline, much like in CV.
2. Let the user choose on the UI which type of driver to add.
3. Let the user add the new driver at any position in the pipeline, so drivers can be sandwiched into the middle and on top.
4. Add waitForMovesToComplete() like you suggested.
5. While we're at it, add an extensible "mode" parameter object to moveTo() to allow for different move modes (like G0 vs. G1, precision move vs. fast move, future ideas about relative motor current, and so on).
6. Add all the axes to the Location type; perhaps use a new DriverLocation type that is only used inside the driver pipeline. This will support AxisMapping + PathBlending (see below) rolled into one, i.e. the machine could move both Z axes of a four-nozzle head at once and could pre-rotate all four nozzles while travelling to the pickup location, saving the time to do that when it is there. This would also support conveyor and feeder axis moves being blended in.
7. Controller-specific drivers should never do transformations like AxisTransform, non-squareness compensation and backlash compensation themselves. All they are allowed to do is talk the language of the controller.
8. Instead, create simple, separate transformation drivers for various purposes.
9. Transformation must go forwards in moveTo(), with the drivers being called in sequence of the pipeline.
10. Transformation must go backwards in home(), with the drivers being called in reverse sequence of the pipeline (reverse transformations being applied).
11. Drivers always work in the transformed location space as passed down from their predecessors (or the reverse transformation passed up from their descendants). If this is not wanted, an untransformed coordinate must be copied to a new "virtual" axis by the LinearTransformDriver (see below).
12. There could be a VisualHomingDriver. It can be sequenced well above the actual controller driver.
13. There could be an AxisMappingDriver per Movable object where you can choose which Location axes are mapped to which DriverLocation axes (matrix of X, Y, Z, C dropdowns, see image). No more XML editing. In the home() method the AxisMappingDriver will set the current axis coordinates after having received the reverse-transformed coordinates from its descendants.
14. There could be a MultiNozzleDriver covering the usual seesaw/rocker nozzle Z transform. NOTE that it would transform the two axes Z1, Z2 that are separately mapped in the AxisMappingDriver into one Z1. Very clean and universal. No more XML editing.
15. There could be many multi-axis transformation drivers in the future, with robot arm solutions etc. These must be sequenced below the axis mapping stuff but above the actual controller driver.
16. There could be a NonSquarenessDriver. Or perhaps a more general LinearTransformationDriver.
17. There could be a PathBlendingDriver (as discussed previously, but without the heuristics) sequenced underneath all transformation drivers but above the actual controller driver.
18. There could be a BacklashCompensationDriver underneath all that, but above the actual controller driver. It would transform all coordinates by the backlash offset. Only waitForMovesToComplete() would then issue the final precision move to the target location.
19. Finally, the various controller- and machine-specific drivers (including a slimmed-down GcodeDriver) would reside underneath.

 

As you see this would create quite a pipeline of drivers. Because every driver does only one task it could still be quite simple to understand. Divide and conquer.
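In code, points 9 to 11 could boil down to something this simple (a toy illustration, all names invented):

abstract class PipelineDriver {
    protected PipelineDriver next; // the driver below us in the pipeline

    void moveTo(double[] axes) {
        // forwards: transform, then pass down (point 9)
        if (next != null) next.moveTo(transform(axes));
    }

    double[] home() {
        // backwards: take the homed coordinates from below and
        // reverse-transform them on the way up (point 10)
        double[] axes = (next != null) ? next.home() : new double[4];
        return untransform(axes);
    }

    abstract double[] transform(double[] axes);   // e.g. non-squareness, backlash
    abstract double[] untransform(double[] axes); // the exact inverse
}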

 

CAVEAT: how to migrate peoples’ old machine.xml settings?  

 

Given time I could do most of the internal implementation, I guess, if you could add the UI and XML migration (if at all possible) on top of that.

 

_Mark

 

[attached image: image002.png - UI mockup]

Brynn Rogers

Aug 18, 2018, 6:00:10 PM
to OpenPnP
I've been looking at the Gcode driver some, and as mentioned before, it can almost do all I need to put out the 'Gemini' protocol.
I think your ideas below are good.
The new driver should have the features you're talking about, but I think if it could parse a little more it would handle everything the Gemini needs, and probably other serial drivers too.

For example, we have these lines that the user can change in the machine setup:
G0 {X:X%.4f} {Y:Y%.4f} {Z:Z%.4f} {Rotation:E%.4f} F{FeedRate:%.0f} ; Send standard Gcode move
M400 ; Wait for moves to complete before returning

For the Gemini I would need lines like this:
0_D{(X/5):X%.0f}
1_D{(Y/5):Y%.0f}
2_D{(Z/5):Z%.0f}
3_D{Rotation:E%.0f}
{Feedrate*ScaleFactor:A=%.0f}
{Feedrate*ScaleFactor/2:AA=%.0f}
GO

Because the Gemini needs all parameters as integers, and the units I have are 'counts', which depend on encoder resolution; on this one, 1 count = 5um for the 850mm travel tables
(the 100mm travel tables I probably won't use have 1 count = 0.1um). So having a built-in way to do arbitrary math would solve the Gemini problem.
Having the feedrate be 0 to 100% means that I'll also have to translate that into the three values A, AA, and V (A=Accel, AA=AveAccel, V=Velocity).
I can also have different values for the deceleration, but I don't know that I want everything that complicated.
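If the template language grew that arbitrary math, the driver would effectively be doing this (illustrative only; assumes the position arrives in micrometers at my 5 um per count, and maxA/maxV come from machine setup):

class GeminiScale {
    static long toCounts(double positionUm) {
        return Math.round(positionUm / 5.0); // 1 count = 5um on the 850mm tables
    }

    // Map a 0-100% feedrate onto the three Gemini values, with AA = A/2
    // like in the template above.
    static String feedParams(double percent, double maxA, double maxV) {
        double a = maxA * percent / 100.0;
        return String.format("A=%.0f%nAA=%.0f%nV=%.0f",
                a, a / 2.0, maxV * percent / 100.0);
    }
}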


Also, it seems like the wait for moves to complete should not be in the moveTo command; it should maybe be its own call, along with a waitForZsafe call or something like that.

Brynn

Bernd Walter

Aug 18, 2018, 6:21:33 PM
to ope...@googlegroups.com
On Mon, Aug 13, 2018 at 11:29:59PM -0500, Jason von Nieda wrote:
> Here's a couple quick, random thoughts, which may generate some discussion:
>
> 1. GcodeDriver was never meant to be as "fat" as it is now. Non-squareness
> compensation, visual homing, sub-drivers, backlash compensation, axis
> mapping, etc. are all things that do not belong in GcodeDriver. Some of
> them should be high level features, and some should be mid-level features
> that other drivers can use or extend.
>
> 2. I think it's time to revisit the Driver interface. It has served us well
> for 6 years now, but it's a little long in the tooth. Some things I've been
> mulling over:
> 1. Make the driver more of a HAL - the driver tells the machine what
> devices exist, rather than the current way, which is backwards. In other
> words, if the machine supports vacuum sensors, and controllable lighting,
> the driver should tell the machine about that rather than the user having
> to set that up separately and then link them together.
> 2. More and more people are asking about performance, like in this
> thread :) At the very least we could probably add waitForMovesToComplete()
> and only call it when we know we need motion to stop. Then anything leading
> up to a wait could be queued. Personally, I think this will end up being
> less of a performance gain than folks guess, but we can try it and see. My
> reasoning is that e.g. OpenPnP basically does no XY -> XY moves. Most moves
> are XY -> Z -> XY. If you are comfortable with curving off the top of a Z
> move, I guess that's okay, but in a lot of cases that's a bad idea.

I like the idea of isolating the wait into a separate call.
The reason is my machine setup, with 2 controller boards for the motion.
XY are on a different board than rotation, which means it moves XY and then
starts rotating the nozzle.
My setup prohibits rotating the nozzle during the XY move.