Develop: More Controlled Motion


ma...@makr.zone

unread,
Apr 30, 2020, 10:52:10 AM4/30/20
to OpenPnP
Hi Jason
Hi Everybody

Problem

I always knew that both the Liteplacer mechanics and Smoothieware have their limits and trade-offs. But with the new diagnostics of the camera settle, more of that rough truth has emerged. Just to give you an idea, watch this GIF:



Vibration.png


But don't think it's just a problem of the Liteplacer or any "mechanically challenged" machine. Watch a part slipping on the nozzle tip (this shows a 100° rotation and back):


PartSlipping.png



Note how reducing the Speed (feedrate) did not help: it's not the maximum feedrate that matters, but the acceleration (and the jerk), i.e. the inertia of the part fighting the acceleration and/or the vacuum grip fighting the jerk. The maximum feedrate limited by OpenPnP is often irrelevant, because the machine will never reach the limit in short moves.
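To put numbers on this (a quick sketch with illustrative values, not measurements from any particular machine):

```python
import math

def peak_velocity(d_mm, a_mm_s2, v_max_mm_s):
    """Peak velocity actually reached in a move of distance d.

    If the move is too short to reach the feedrate limit, the profile is a
    triangle (accelerate half way, decelerate half way) and the peak is
    sqrt(a * d), independent of the feedrate limit.
    """
    return min(v_max_mm_s, math.sqrt(a_mm_s2 * d_mm))

# A 2 mm nozzle move at a = 500 mm/s^2 with a 100 mm/s feedrate limit:
print(round(peak_velocity(2.0, 500.0, 100.0), 1))  # 31.6, nowhere near 100
```

So lowering the feedrate limit does exactly nothing for such a move until it drops below the peak the acceleration allows.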

To make matters worse: what you see in the GIFs is an already "tamed" machine, with the acceleration already halved from what the steppers would actually take.

If I wanted to fix this, I would have to reduce the default acceleration so much that my machine would resemble Flash in Zootopia.

Theory

Both problems make it clear to me that advanced motion "shaping" control is needed in OpenPnP. An S-curve/jerk-controlled controller will fix some, but not all, of these problems. It is still useful for everybody to read and understand the following link before going on:


The solution lies in limiting acceleration and deceleration (and possibly jerk) smartly, with the application logic behind it. I still want the machine to accelerate as fast as possible when it does not matter. OpenPnP already has a "Speed %" on the Part (that won't help, as shown above), but you get the idea of how this should work: as long as the part is on the nozzle, the acceleration/jerk should be reduced. Etc.

But doing so requires OpenPnP to know more about the machine specs: acceleration, jerk etc., and they might be different for each axis, especially for the rotation axes, where currently degrees are handled like millimeters and a half turn is treated like a 180mm move! So I want to be able to configure this individually, but uniformly, for each axis (i.e. not in one huge driver setup page). It is also not really driver specific, i.e. the control of these properties should be available for all drivers.

This brings me to the following rather large changes. Please don't dismiss them too easily; I thought long and hard about them (this also goes back a long time, the camera settling was just the confirmation of how badly this is needed).

Solutions

So I am proposing this:
  1. Expose Axes to the GUI (this would also help virtually every OpenPnP newbie get this right!)
  2. Introduce multiple Axis subclasses: RawAxis, TransformedAxis and many subclasses.
  3. AxisTransforms are no longer "children" of one single Axis but separate Axes that are based on other Axes and apply a transformation.
  4. Multi-Axis Transforms are possible, i.e. it can be based on more than one input Axis.
  5. Make Squareness Compensation a multi AxisTransform based on X and Y.
  6. Of course make the shared Z transforms like CamTransform available again.
  7. Some Transforms have multiple outputs (such as the CamTransform), so an OutputAxis could be based on the transforming Axis and expose the other result(s) such as the negated CamTransform. 
  8. As a further example: If Renee's Revolver Head makes it into reality, it would be relatively easy to add the needed revolver transformation.
  9. Reverse the current Axis Mapping, i.e....
  10. ... Choose the X and Y axis on the Head with simple drop downs, and the Z and C axes on each HeadMountable.
  11. ... Leave an Axis empty to not map it.
  12. Handle axis transforms on the HeadMountable, before calling the driver (like it is done with runout compensation for instance).
  13. Remove all Axis handling including squareness compensation from the GCodeDriver. 
  14. Whether this could all be migrated using an elaborate @Commit handler, I don't know yet.
  15. Now that we have the RawAxis in our hands, add properties for nominal feedrate, acceleration and jerk.
  16. Also Backlash Compensation can be configured on the axis so it is available for all drivers.
  17. GCode driver can then not only scale feedrate, but also acceleration (and jerk, for controllers that have it).
  18. Note: if we want 50% speed, we need to shape acceleration to 25% (squared) and jerk to 12.5% (cubed).
  19. I would also break out the waitForMovesToComplete() (M400 GCode) from the moveTo() and call it from the necessary places. Now the driver/controller knows when to wait for a move to complete and where it can smooth the ride.
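Point 18 above, in code (a sketch, the function name is my own):

```python
def scaled_limits(speed_factor, feedrate, acceleration, jerk):
    """Scale the kinematic limits so a move keeps its shape at reduced speed:
    stretching time by 1/s scales the n-th derivative of position by s^n,
    so velocity scales with s, acceleration with s^2 and jerk with s^3.
    """
    s = speed_factor
    return feedrate * s, acceleration * s ** 2, jerk * s ** 3

# 50% speed -> 25% acceleration and 12.5% jerk:
print(scaled_limits(0.5, 1000.0, 2000.0, 8000.0))  # (500.0, 500.0, 1000.0)
```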
This would also form the basis for more:
  1. Add a GCodeDriver subclass to go even further:
  2. I can simulate jerk control by sending a series of path segments with stepped accelerations. This is not 100% smooth, but much better. Watch the video of standard Smoothieware constant acceleration vs. simulated jerk control via G-code segments with 20ms time steps. The moves take the same time, i.e. the same average acceleration, so it's a fair comparison:
    https://makr.zone/wp-content/uploads/2020/04/SimulatedJerkControl.mp4
  3. I can do more than what an S-curve controller could do: e.g. asymmetric acceleration/deceleration = fast start but smooth stop, when I know I need a vibration-free standstill.
  4. Also I don't want a true S-curve (too slow for long moves), I want more of an "Integral symbol" curve, i.e. with a straight maximum-acceleration segment in the middle (that is what I simulated in the video).
  5. Many more ideas, not spilling everything here, hehe.
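The stepped-acceleration idea from point 2 can be sketched like this (illustrative only; the segmentation logic is simplified from what a real implementation would need):

```python
def stepped_accel_segments(a_max, jerk, dt):
    """Approximate a constant-jerk acceleration ramp (0 -> a_max) with
    piecewise-constant acceleration segments of duration dt, which a
    constant-acceleration controller can execute as separate G-code segments.

    Each segment gets the ramp's average acceleration over its interval,
    so the overall velocity change (and move time) stays the same.
    """
    t_ramp = a_max / jerk                 # time for the jerk-limited ramp
    n = max(1, round(t_ramp / dt))        # number of G-code segments
    return [jerk * (i + 0.5) * t_ramp / n for i in range(n)]

# Ramp to 2000 mm/s^2 with 20000 mm/s^3 jerk, 20 ms steps (as in the video):
segs = stepped_accel_segments(2000.0, 20000.0, 0.02)
print(len(segs), round(segs[0], 6), round(segs[-1], 6))  # 5 200.0 1800.0
```

The segments average out to the same acceleration as the ramp they replace, which is what makes the video comparison fair.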
What do you think?
_Mark

Marek T.

unread,
Apr 30, 2020, 12:09:22 PM4/30/20
to OpenPnP
A better acceleration/deceleration scheme is a nice idea, and I believe you'll do something smart, as usual ;).

But
1. Don't you think you have too weak a vacuum, or just too small a nozzle for the part you show? I think my machine is faster and sharper than yours, but I've never seen part slippage as bad as in your video.
2. Sending M204 together with G0 (in moveTo) instead of F really fixes a lot of problems. Pure speed adjustment is not very usable for PnP.
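In practice this looks like the following sketch (hypothetical helper name; the exact M204 parameter letter varies by firmware):

```python
def move_with_acceleration(x_mm, y_mm, accel_mm_s2):
    """Build a move that first caps the acceleration (M204) and then travels
    (G0), instead of only scaling the feedrate with F.

    Note: the M204 parameter letter varies by firmware. Smoothieware and
    older Marlin accept S; newer Marlin splits it into P/T/R. Check yours.
    """
    return [f"M204 S{accel_mm_s2:.0f}", f"G0 X{x_mm:.3f} Y{y_mm:.3f}"]

print(move_with_acceleration(100.0, 50.0, 500.0))
# ['M204 S500', 'G0 X100.000 Y50.000']
```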

Jarosław Karwik

unread,
Apr 30, 2020, 12:16:53 PM4/30/20
to OpenPnP
I may be able to help you a bit with the Smoothie limitations, with my new controller as a replacement:


Just ordered the PCBs, and the firmware is well advanced.
I plan no more than jerk-level S-curves for now, but with your tools we can try even higher orders; I finally understood the math and can go as far as it makes sense.

Jason von Nieda

unread,
Apr 30, 2020, 4:35:27 PM4/30/20
to ope...@googlegroups.com
Hi Mark,

I'm fully on board with all of this. I think it's the right direction to improve the current extremely limited motion system.

But, I also agree with Marek that it probably does not require an overhaul of the entire system to get you picking and placing. Lots and lots of people are having plenty of success with extremely basic motion controllers and the current motion system. Perhaps they are all going very slow, and you aren't, but it's possible also that your vacuum system needs improvement.

That aside - again - I am on board with all these changes. The only thing that jumped out as tricky is the backlash compensation since the current model requires Gcode hacks to work. Are you considering changing this to traditional measured backlash where you add that value to the axis whenever it changes direction?
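The traditional direction-change scheme described here can be sketched like this (a toy model; class and method names are hypothetical):

```python
class BacklashAxis:
    """Traditional measured-backlash model (a sketch of the idea): track the
    direction of travel and offset the raw coordinate by the measured
    backlash whenever the axis moves in the negative direction, so the
    mechanics' slack is always taken up the same way.
    """
    def __init__(self, backlash):
        self.backlash = backlash
        self.last = 0.0
        self.direction = +1

    def raw(self, target):
        if target > self.last:
            self.direction = +1
        elif target < self.last:
            self.direction = -1
        self.last = target
        # negative-going moves are offset by the backlash amount
        return target - (self.backlash if self.direction < 0 else 0.0)

axis = BacklashAxis(0.1)   # 0.1 mm measured backlash
print(axis.raw(10.0))  # 10.0 (moving +)
print(axis.raw(5.0))   # 4.9  (reversed: raw command offset by the backlash)
print(axis.raw(7.0))   # 7.0  (reversed again)
```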

Thanks,
Jason


--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/708392a2-0e3b-43af-9313-d6e8f64aa0d8%40googlegroups.com.

Keith Hargrove

unread,
Apr 30, 2020, 4:46:19 PM4/30/20
to OpenPnP
I went from LinuxCNC to Smoothie, using the same stepper drivers for both; I just made a DB25 breakout for the Smoothie clone (step, dir, etc.). My bot is old, I made it over 10 years ago. It is big and heavy with smallish steppers, but it gets moving very fast. I have noticed, just looking at the cam, that I get a lot of bounce. I do not remember getting any of that before, but this system also sat for 10 years, so I was thinking the belts might have gotten weak. Before, I could run at 100% speed just putting parts from a strip onto a PCB and from the PCB back into the strip, with just raw G-code done by hand; I can't get anywhere near that accuracy now. I was thinking it was just due to aging of the bot, but now I think the Smoothie might be a lot of the problem. I know I had to slow my Z to almost 1/2 to keep the stepper from skipping once in a while; I never had that with LinuxCNC. Overall I have been disappointed with Smoothie. I went to Smoothie just to do OpenPnP, as it seems to be what it is based around. If I get everything working I may put back a real CNC motion controller. All my bots have encoders (not wired yet), but I am used to mills and such, where closed loop is the norm.
 
I was wondering if you could add an encoder to a motor or an idler pulley. Then you could compare and see if it matches (I think it will).
Steppers can be bouncy on their own, and microstepping makes it even worse. I was planning on making small servos using RC car BLDCs way back then, but things happened and I had to drop that. With most CNC controllers, if you get some feedback you can tune the overshoot out: they give you a scope-like screen, you move back and forth and tune, more or less PID tuning, and you can get very quick without any overshoot.
I do not know if Smoothie will ever get there, but if you could tune it with an encoder it might be possible to then put the numbers into a simple controller like Smoothie. An S-curve is needed, with a PID-like tuned block feeding it.

It looks like S-curve is on the map.
Encoders have been pooh-poohed by the Smoothie devs, which is too bad. Thinking like that is making Smoothie a dead end in the long run.

I can see why more and more people are jumping ship and flashing Marlin 2 on their boards.
That might be something to test with, as they have S-curve support.
Encoder support is kind of foggy; it seems to have been there at some point, but it's not clear if it's there now.
 

Marek T.

unread,
Apr 30, 2020, 5:11:17 PM4/30/20
to OpenPnP
That's also my plan (in some free time): to flash my spare MKS 1.3 LPC1768 board with Marlin, to see how it works compared to the Smoothie firmware.
However, Jarosław's controller project sounds even better.

ma...@makr.zone

unread,
Apr 30, 2020, 7:47:52 PM4/30/20
to OpenPnP
Hi Jason, everybody

Thanks for your comments.

Yes, the vacuum system can be improved, I'm sure. Well, everything on my machine can be improved on the hardware side ;-). But that's not my point. Even if I improve the hardware, this will just help me on my machine. Nobody else will benefit. If I make the software smarter, to trade in some brains for muscle, I can hope to make (almost) everybody's machine better. And it's fun! For me the coolest achievement is not the ultimate mechanical Übermachine that nobody can ever hope (or afford) to replicate, but something everybody can put together, or buy as a kit or whatever. I admit it has become a bit of a game to me, to just see how far I can push this funny rig. I still haven't replaced my banged-up nozzle tip holder, although I have the replacement lying around. It has a grotesque ~0.2mm runout and works flawlessly thanks to runout compensation. How cool is that?!

Even if I improve the hardware, that's usually just pushing the limit a bit further. True, you can go faster even with cruder control. But with the improved software, you can go faster still. It might make a less impressive difference then, OK, but that's how it works: it gets harder and harder towards the limit. The harder it gets, the higher the cost, so the value is still in there, even for better machines. For my machine the idea is to push the software first, then maybe invest in some hardware improvements later, where it really counts. It's also about gaining knowledge first. Informed decisions.

This also explains why I'd like to provide a solution for any controller. I certainly agree that nth-order motion control in the controller is better than any simulation. But my idea, again, is that people will be able to upgrade OpenPnP and then just get a benefit with whatever controller they already have. And for those fancy new controllers you also need some type of dynamic control of these nth-order parameters, because I don't believe for a second that such a controller will truly shine with a "one-size-fits-all" static setup. And for that dynamic control, I believe the drafted axis-centered approach is equally valid and valuable. Plus I do plan to add more advanced ideas on this basis, not everything has been disclosed yet ;-)

Regarding backlash: that's not high on my list. The first iteration might just leave it alone, inside the GCodeDriver, and (if you agree) simply change the place where the values are stored (i.e. on the individual axes). That makes sense for a future implementation away from the driver, or to let each driver roll its own. Who knows, perhaps one of the fancy controllers that have been announced will do backlash compensation autonomously as well. Whether the direction-change method can be used, I don't know. What I was thinking about was calibrating backlash automatically, vision based (X, Y and C).

I'm glad you're on board with these changes. But of course this is all still open for further discussion.

_Mark

ma...@makr.zone

unread,
Apr 30, 2020, 7:58:27 PM4/30/20
to OpenPnP
On Thursday, April 30, 2020 at 6:09:22 PM UTC+2, Marek T. wrote:
1. Don't you think you have too weak a vacuum, or just too small a nozzle for the part you show? I think my machine is faster and sharper than yours, but I've never seen part slippage as bad as in your video.

Your machine is certainly sharper and faster in X, Y, Z but I'm not sure about C. I might have been a tad ambitious with that acceleration limit of 18000°/s² and 360000°/min feedrate. What are your values?

These values work nicely with small passives, but I'm not saying this is needed for a large MCU; that's why I want to control the acceleration :-)

_Mark

Marek T.

unread,
Apr 30, 2020, 8:21:36 PM4/30/20
to OpenPnP
C - I really don't remember, I'll check.

Balázs buglyó

unread,
May 1, 2020, 4:01:20 AM5/1/20
to ope...@googlegroups.com
I have the same board with Marlin 2.0 and S-curve enabled, with Trinamic 2209 chips.
The sound is good. I have optical encoders on the sides, but not working as a closed-loop system, just informing me where the rails are. I have a problem with my Y axis, that's why I bought them: over 300mm I have a 1mm offset, and also a 0.2mm difference with the rail.
I bought the Misumi linear rails, but to me they seem a bit loose...

Back to the topic: the SKR 1.3 and Marlin 2.0 work well with the TMC2209.


Marek T. <marek.tw...@gmail.com> wrote on Apr 30, 2020 at 23:11 (Thu):
That's also my plan (in some free time): to flash my spare MKS 1.3 LPC1768 board with Marlin, to see how it works compared to the Smoothie firmware.
However, Jarosław's controller project sounds even better.


Marek T.

unread,
May 1, 2020, 4:29:00 AM5/1/20
to OpenPnP
The SKR 1.3 is much better for Trinamics than the MKS (serial port wiring). I have an SKR with 2208s in my 3D printer.
Please tell me, do you use the thermistor inputs to read vacuum sensors with Marlin?
(Mark, only this question and we'll shut up about Marlin here.)

Balázs buglyó

unread,
May 1, 2020, 4:35:02 AM5/1/20
to ope...@googlegroups.com
I did not try it yet. I want to set up the axes, and after that comes the vacuum. But I will use Michael's expansion board for the Mega: 2 vacuum sensors and a lot of space for his feeder. But I will try this too.
In that case I don't need to use another board.
But I think it is easier to write my own small code than to modify Marlin; I did not check it yet.

Balazs 


Marek T.

unread,
May 1, 2020, 4:51:38 AM5/1/20
to OpenPnP
Understand.
Let's meet later in some new separate thread, when you have done some tests and decided how to go. I'll also do some tests here.

ma...@makr.zone

unread,
May 1, 2020, 10:49:21 AM5/1/20
to OpenPnP
For reference, the S-Curve implementation of Marlin:

Most notably:
Now, since we will (currently) *always* want the initial acceleration and jerk values to be 0,
We set P_i = P_0 = P_1 = P_2 (initial velocity), and P_t = P_3 = P_4 = P_5 (target velocity),
which, after simplification, resolves to ...

Ouch!

Notes to self:
  1. This will not work with my simulation. Marlin would need to be rebuilt with S-curves disabled.
  2. They also just substitute the constant-acceleration trapezoid ramp with a standard-shaped S-curve with the same average acceleration as the ramp, completely oblivious of how large the velocity change is. This will result in extremely soft acceleration on large moves.
  3. Might be fine for 3D printing micro-moves, but no good for PnP. We need an "Integral symbol" shape!
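Writing out the simplified quintic (zero first and second velocity-derivatives at both endpoints) makes note 2 quantitative (my own derivation, not from the Marlin source):

```latex
v(\tau) = v_0 + \Delta v \left(10\tau^3 - 15\tau^4 + 6\tau^5\right),
    \qquad \tau = t/T,\;\; \Delta v = v_1 - v_0

a(\tau) = \frac{1}{T}\frac{dv}{d\tau}
        = \frac{30\,\Delta v}{T}\,\tau^2(1-\tau)^2,
    \qquad a_{\mathrm{peak}} = a\!\left(\tfrac12\right)
        = \frac{15}{8}\,\frac{\Delta v}{T}
```

So for the same ramp time T (the same average acceleration Δv/T), the peak acceleration is 15/8 of the trapezoid's constant value, and since there is no constant-acceleration middle segment, the gentle lead-in and tail-out stretch with T: the larger the velocity change, the longer the move spends at low acceleration.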
_Mark


ma...@makr.zone

unread,
May 2, 2020, 6:12:34 PM5/2/20
to OpenPnP
On Thursday, April 30, 2020 at 10:35:27 PM UTC+2, Jason von Nieda wrote:
Hi Mark,

I'm fully on board with all of this. I think it's the right direction to improve the current extremely limited motion system.

Hi Jason

I bumped into sub-drivers.

As planned, Axes are now out of the Gcode driver and Axes can be mapped to Head and HeadMountable (some parts still work in progress). So mappings and transformations can be used by any driver.

But I still need a way to map an axis to its driver if there are multiple drivers, aka sub-drivers. I originally planned to identify the proper (sub-) driver on the Axis itself, as a dropdown. So I needed a list and an ID on the Driver, but then.....

I only now realized how special the handling of the driver is (and the choice of the class to instantiate). Sub-drivers are specific to the GcodeDriver, and you can't instantiate another driver class as a sub-driver (only a GcodeDriver again). So you can't have a NeoDen4 driving the machine and a GcodeDriver driving some extra stuff. Also I realized (looking at the actuator code you recently improved) that the way sub-drivers are driven is sometimes a bit of a "fishing expedition", i.e. it checks through all the drivers for the presence of Gcode fragments to more or less "guess" which driver is doing what.

What I now did was streamline it to be like all the other machine objects in OpenPnP, i.e. you can have as many as you like and mix them freely.


The previous driver and sub-drivers are already automatically migrated to the new system.

The mapping is then done on the Axis and on the Actuators (a dropdown with the proper driver driving it).

As already discussed, it is the responsibility of the HeadMountable to assign the axes, and it will then call each driver that has at least one axis assigned.

Likewise the Actuators: they already have a getDriver() method, I will just have to implement the property to store the reference on the actuator. From the presence of getDriver() I somehow suspect you already wanted to do such a thing, right?  :-)

I hope this is all OK, otherwise please stop me quick! I'm advancing...

_Mark

ma...@makr.zone

unread,
May 3, 2020, 3:40:01 AM5/3/20
to OpenPnP
Hi Jason
Hi Everybody

Another thing. I planned to map the X and Y on the Head and only Z and C on the HeadMountable. That's obviously the easy to understand Cartesian solution.

But there are situations when HeadMountables might have their own way in X and Y too. One example just popped up with Behnam Max's SMD Taxi video:


The other would be Renee's Revolver Head.

So I'll map all the axes on the HeadMountable. 

The Axes themselves can then define the proper transformations, so you can move them on top of each other (like the Revolver). Or you can use them directly, if you want relative moves (like in the SMD Taxi).

Hope that's all OK.

_Mark



dc42

unread,
May 3, 2020, 9:17:10 AM5/3/20
to OpenPnP
Looks like you are getting ringing. S-curve acceleration will only help if the period of the ringing is significantly shorter than the deceleration time. In the opposite situation (which is more common, at least in 3D printers), Dynamic Acceleration Adjustment (DAA) is more effective. 

DAA is implemented in RepRapFirmware, the standard firmware for Duet motion control electronics. See https://duet3d.dozuki.com/Wiki/Gcode?revisionid=HEAD#Section_M593_Configure_Dynamic_Acceleration_Adjustment for how to use it. Its main limitation is that it can only cancel ringing at a single frequency. So if your X and Y axes both suffer from serious ringing but at different frequencies, then you can only cancel one of them.
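The principle, as the M593 documentation describes it: if the acceleration phase lasts a whole number of ringing periods, the vibration excited at its start is cancelled by its end. A sketch of that idea (my paraphrase, not RRF's actual code):

```python
import math

def daa_acceleration(delta_v, a_requested, ringing_hz):
    """Pick an acceleration so the acceleration phase lasts a whole number
    of ringing periods, cancelling the vibration the phase itself excites.
    A sketch of the DAA principle, not RepRapFirmware's actual code.
    """
    t_accel = delta_v / a_requested              # requested phase duration
    period = 1.0 / ringing_hz
    n = max(1, math.ceil(t_accel / period))      # round up: only slow down
    return delta_v / (n * period)

# 100 mm/s velocity change, 3000 mm/s^2 requested, 40 Hz ringing:
print(round(daa_acceleration(100.0, 3000.0, 40.0), 3))  # 2000.0 (2 periods)
```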

Jason von Nieda

unread,
May 3, 2020, 1:13:32 PM5/3/20
to ope...@googlegroups.com
Hi Mark - all sounds fine to me. I had hoped to eventually make sub-drivers a more high level thing anyway so that you could, e.g. use the NeoDen4Driver for your primary stuff and an Arduino (or whatever) for some add on stuff. So, sounds good, full steam ahead :)

Jason



ma...@makr.zone

unread,
May 3, 2020, 3:13:05 PM5/3/20
to ope...@googlegroups.com

Hi dc42

Yep, the Eigenfrequency.

I already thought about how to measure this per axis (you can already glimpse the sinusoid in the Camera Settle graphs) and then try to suppress it. But I thought it goes both ways (symmetrically), i.e. to also increase jerk/acceleration when (or slightly before) the pendulum wants to swing back.

I thought I could integrate the product of the past/future moments of jerk/acceleration with the sin() and cos() of the Eigenfrequency, get amplitude and phase, and feed it back negatively.
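That integral is just the Fourier component of the acceleration profile at the Eigenfrequency; a sketch of the measurement (my interpretation, not existing OpenPnP code):

```python
import math

def excitation_at(freq_hz, accel_profile, dt):
    """Fourier component of an acceleration profile at a single frequency:
    integrate the profile against cos and sin at the Eigenfrequency to get
    the amplitude and phase of the ringing it excites.
    """
    w = 2.0 * math.pi * freq_hz
    re = sum(a * math.cos(w * i * dt) for i, a in enumerate(accel_profile)) * dt
    im = sum(a * math.sin(w * i * dt) for i, a in enumerate(accel_profile)) * dt
    return math.hypot(re, im), math.atan2(im, re)

# An acceleration phase lasting exactly one ringing period (25 ms at 40 Hz)
# excites (numerically almost) nothing, which is the cancellation DAA uses:
amp, phase = excitation_at(40.0, [1000.0] * 25, 0.001)
print(amp < 1e-9)  # True
```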

Found this helpful to understand the principles without too many formulas :-)
https://youtu.be/spUNpyF58BY

_Mark


dc42

unread,
May 3, 2020, 3:24:28 PM5/3/20
to OpenPnP
In RRF we adjust both the acceleration and deceleration times to cancel ringing at the specified frequency. This means there is no need to integrate anything, because it's only acceleration and deceleration that induce ringing. We don't normally use jerk on travel moves because it isn't necessary, and that makes the cancellation more accurate.

We designed this to reduce ringing in 3D printers, but I don't see any reason why it isn't applicable to PnP as well.

RENEE Berty

unread,
May 6, 2020, 6:12:20 AM5/6/20
to OpenPnP
Wow, that's so cool...

ma...@makr.zone

unread,
May 7, 2020, 6:16:44 AM5/7/20
to OpenPnP
Hi Jason

I'm currently in a phase, where I could unify things more, but that would mean greater changes to existing code.

Example:

Every HeadMountable has its own set of moveTo(), moveToSafeZ() etc. implementations. I need to change a section in each of them anyway. But generally the HeadMountables do not differ, and where they do, it's often a benign difference, e.g. NaN substitution could be a good thing for all of them.

So I'd like to move those methods to the new AbstractHeadMountable and only keep the few Overrides that have to be different (e.g. NozzleTip runout compensation). In fact I'd like to refactor some of those differences out. For instance I'd like to separate the Safe Z calculation from the moveToSafeZ(), so Overrides can just change that behavior separately (like for the relatively new part-height-on-nozzle calculation).

Why not do that later?

I believe it's fair to say that unifying is always a good thing for maintenance and code quality. It would eliminate a second round of testing (frankly, I might have lost momentum and energy once a second round could be envisaged).

But it would mean even more time and patience on your side for reviewing these changes. GitHub diffs won't do that kind of rework justice; the actual changes are semantically smaller than they look in a diff.

Jason von Nieda

unread,
May 7, 2020, 9:50:57 AM5/7/20
to ope...@googlegroups.com
Have at it Mark. I trust you'll get it right :)

It would be nice if, as you find things that are changing significantly, you added some tests that show the new works the same as the old.

Jason



ma...@makr.zone

unread,
May 7, 2020, 11:06:45 AM5/7/20
to ope...@googlegroups.com

> Have at it Mark. I trust you'll get it right :)

Thanks, Jason

I've already had my run-ins with the tests, and your new Gcode Server is a great addition because it tests the relevant driver ;-) The tests correctly caught some unfinished conversion and demonstrated that it works!

> if, as you find things that are changing significantly, ...

This is half answer, half note to self. If you see something that is irking you, just tell me.

Outside of the obvious Axes rework, up until now, there are these things that are changing in semantics (rather than just hidden Axis implementation):

  1. I want all HeadMountable moveTo() calls to go through the head; currently the moveToSafeZ() variants went directly to the driver. Now moveToSafeZ() is just a wrapper for moveTo(). The following will make clear why:
  2. BIGGEST CHANGE: the headOffset translation (back and forth) must be handled by the HeadMountable rather than by the driver, I believe that's the correct place to encapsulate this. This will allow for future variable offsets, like Renee's revolver head.
  3. Axis mapping and transform will be coordinated by the Head (using functionality on the HeadMountable) and it will call all the drivers that have one or more axis mapped in the operation.
  4. The order of calling drivers is always guaranteed to match the one in the Machine Configuration. The user has arrow buttons to reorder them.
  5. No more NaNs to the driver. They must be substituted by the HeadMountables, before head offset translation and axis transformation as this is essential for multi-axis transforms (like Non-Squareness Compensation) to work.
  6. The drivers will work purely with raw coordinates.
  7. Visual Homing is now in the Head and available for all drivers. You could even Visual Home a machine that has the X axis on a different driver than the Y axis.
  8. It works with transformed axes (including Non-Squareness Compensation) therefore the fiducial coordinates will finally match. For existing machine.xml the fiducial coordinate is migrated to "unapply" Squareness Compensation, so captured coordinates on existing machines will not be broken.
  9. Visual Homing now aborts the home() when Vision fails (I had some near-miss experiences when I tested Camera Settle and it was misconfigured).
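Point 5 in miniature (a Python sketch for illustration; OpenPnP itself is Java): substitute NaN first, then transform, because a multi-axis transform cannot work on NaN inputs:

```python
import math

def substitute_unspecified(target, current):
    """Replace NaN ('leave this axis alone') coordinates with the current
    position BEFORE head-offset translation and axis transformation."""
    return [c if math.isnan(t) else t for t, c in zip(target, current)]

def non_squareness(xy, factor):
    """Toy multi-axis transform: raw X depends on both X and Y, so a NaN
    in either input would poison the result."""
    x, y = xy
    return [x + factor * y, y]

current = [10.0, 20.0]
target = [float("nan"), 25.0]           # move Y only, keep X where it is
resolved = substitute_unspecified(target, current)
print(non_squareness(resolved, 0.01))   # [10.25, 25.0]
```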

I saw some existing test cases that I believe will fail due to 2. I will need to adjust these and try to add some new testing there.

_Mark

Jarosław Karwik

unread,
May 7, 2020, 11:18:24 AM5/7/20
to OpenPnP
Once you are making changes, would you consider making them in such a way that more intelligent G-code controllers would be supported?

I mean that instead of sending several G-codes for all the small operations, you would send something like "pick/place component at a certain Z and return the result"? This was the way my previous Chinese machine worked at the protocol level.
It was able to combine the Z operation with valve control and vacuum check. It worked more like an actuator.

I am just making my own controller, and I would support something like that, if it was easy to configure in OpenPnP.

It would not need anything fancy in OpenPnP, more like just the possibility to bypass/omit several operations and forward them to an external controller.
Something like one actuator to which you write 'Z' and another one to get the operation result...

Jason von Nieda

unread,
May 7, 2020, 11:18:25 AM5/7/20
to ope...@googlegroups.com
On Thu, May 7, 2020 at 10:06 AM ma...@makr.zone <ma...@makr.zone> wrote:

> Have at it Mark. I trust you'll get it right :)

Thanks, Jason

I've already had my run-ins with the tests, and your new Gcode Server is a great addition because it tests the relevant driver ;-) The tests correctly caught some unfinished conversion and demonstrated that it works!

Glad to hear it! I think it's going to be a big help as we build up a test suite for it. I just took a note the other day: "It would be really cool to write a TinygServer, SmoothieServer, etc. to emulate their basic functionality and then write tests that use GcodeDriver to talk to them. Thinking about the new PR for the buffer flush here."

As the project grows, and as more people use increasingly complex and different controllers, it would be really nice if we knew for sure, when making a change, that it would work for at least the most common controllers.

> if, as you find things that are changing significantly, ...

This is half answer, half note to self. If you see something that is irking you, just tell me.

Outside of the obvious Axes rework, up until now, there are these things that are changing in semantics (rather than just hidden Axis implementation):

  1. I want all HeadMountable moveTo() calls to go through the head; currently the moveToSafeZ() variants went directly to the driver. Now moveToSafeZ() is just a wrapper for moveTo(). The following will make clear why:
  2. BIGGEST CHANGE: the headOffset translation (back and forth) must be handled by the HeadMountable rather than by the driver, I believe that's the correct place to encapsulate this. This will allow for future variable offsets, like Renee's revolver head.
  3. Axis mapping and transform will be coordinated by the Head (using functionality on the HeadMountable) and it will call all the drivers that have one or more axis mapped in the operation.
  4. The order of calling drivers is always guaranteed to match the one in the Machine Configuration. The user has arrow buttons to reorder them.
  5. No more NaNs to the driver. They must be substituted by the HeadMountables, before head offset translation and axis transformation as this is essential for multi-axis transforms (like Non-Squareness Compensation) to work.
  6. The drivers will work purely with raw coordinates.
  7. Visual Homing is now in the Head and available for all drivers. You could even Visual Home a machine that has the X axis on a different driver than the Y axis.
  8. It works with transformed axes (including Non-Squareness Compensation) therefore the fiducial coordinates will finally match. For existing machine.xml the fiducial coordinate is migrated to "unapply" Squareness Compensation, so captured coordinates on existing machines will not be broken.
  9. Visual Homing now aborts the home() when Vision fails (I had some near-miss experiences when I tested Camera Settle and it was misconfigured).
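
To make items 2 and 5 concrete, here is a minimal sketch (hypothetical names and simplified helpers, NOT the actual OpenPnP signatures) of why NaN substitution has to happen on the HeadMountable side, before head-offset translation and any multi-axis transform:

```java
// Sketch only: hypothetical, simplified helpers illustrating the ordering
// described in items 2 and 5 (these are NOT actual OpenPnP signatures).
public class MoveToSketch {

    // Item 5: substitute NaN ("keep current") coordinates with the current
    // location BEFORE offset translation and axis transformation.
    static double[] substituteNaNs(double[] target, double[] current) {
        double[] out = new double[target.length];
        for (int i = 0; i < target.length; i++) {
            out[i] = Double.isNaN(target[i]) ? current[i] : target[i];
        }
        return out;
    }

    // Item 2: the head offset is applied by the HeadMountable, so the
    // driver only ever sees raw axis coordinates.
    static double[] subtractHeadOffset(double[] loc, double[] offset) {
        double[] out = new double[loc.length];
        for (int i = 0; i < loc.length; i++) {
            out[i] = loc[i] - offset[i];
        }
        return out;
    }

    // Why the order matters: a multi-axis transform such as non-squareness
    // compensation mixes X and Y. A NaN left in Y would poison the
    // transformed X as well (NaN * factor == NaN).
    static double[] nonSquareness(double[] xy, double factor) {
        return new double[] { xy[0] + factor * xy[1], xy[1] };
    }
}
```

The point is the ordering: once a transform couples axes, a NaN that survives to the transform stage corrupts the coupled axis, which is why substitution must come first.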

This all sounds great Mark. Many of these items have been on my list for a long time, so thank you for diving in to this. I'm excited about the removal of NAN in the drivers, as that has irked me for a long time, and I'm excited about moving head offsets and transforms into the machine as it should greatly simplify the drivers. Nice work!

Thanks,
Jason

Jason von Nieda

unread,
May 7, 2020, 11:32:53 AM5/7/20
to ope...@googlegroups.com
This is supported today via a custom Machine (and associated child classes) implementation. High level OpenPnP code tells a Nozzle class to pick(). How it does that is up to the implementation. The Reference implementation calls into a driver which generally talks to a serial port, but a custom implementation could do anything that accomplishes the task.

Have a look at the SPI (Service Provider Interfaces): http://openpnp.github.io/openpnp/develop/
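
For illustration only, the SPI idea can be sketched like this (the interface and classes below are made up for the example; the real interfaces live in org.openpnp.spi and look different):

```java
// Illustration only: a made-up mini version of the SPI idea. High-level
// code depends on the interface, not on any particular implementation.
interface NozzleSpi {
    // Returns a description of what was done, purely for this demo.
    String pick();
}

// A Reference-style implementation: translate pick() into G-code sent
// through a driver over a serial port.
class SerialGcodeNozzle implements NozzleSpi {
    public String pick() {
        return "sent vacuum-on G-code over serial";
    }
}

// A custom implementation: one high-level command to a smart controller.
class OneShotProtocolNozzle implements NozzleSpi {
    public String pick() {
        return "sent a single PICK command to a proprietary controller";
    }
}
```

Either implementation satisfies the same contract, so the job processor above never needs to know how the pick is accomplished.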

Jason


To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/4a6aca1b-727b-4246-8c8c-5aaf6b239ce0%40googlegroups.com.

Jarosław Karwik

unread,
May 7, 2020, 11:47:25 AM5/7/20
to OpenPnP
Well,

I went this way with the SmallSmt driver. It is possible, but high maintenance, and quite a few interesting features were available in the GcodeDriver only.

But there is actually an even smarter way to do it with the existing Gcode principles and implementation.

I will simply cache G-codes in my driver, sending confirmations even before completing them.
It is quite easy to recognize commands which need to be finalized before responding (like reading actuators etc.).
And with vision commands I can always add an actuator operation to the pipeline.

This may be troublesome with more subdrivers, but it still looks like the path of least resistance :-)





ma...@makr.zone

unread,
May 7, 2020, 2:35:05 PM5/7/20
to ope...@googlegroups.com

Hi Jarosław

> quite many interesting features were in GCode driver only.

This will change now. Most features will be outside the driver and be available for all drivers/machines.

> I mean that instead of sending several G codes for all small operations - you would send something like - "pick/place component on certain Z and return result" ?  This was the way my previous chinese machine was working on protocol level.

OpenPnP has this kind of abstraction, starting from the Job Processor steps. They are really extremely basic and versatile. The "Reference" classes in OpenPnP are only one possible implementation. But if you wanted to replace those, you would need to replace large chunks of code with your own. The important message is that it is possible within OpenPnP (unlike any other framework I know).

_Mark


ma...@makr.zone

unread,
May 9, 2020, 2:22:11 PM5/9/20
to OpenPnP
Hi

Is anyone using the GcodeDriver OffsetTransform?

It is complicated to migrate because the offset is per HeadMountable. If it is not in frequent use, I will not migrate it automatically.

According to the original commit, it was added to address shared C axis issues. But we solved this in the meantime.

If nobody speaks up, the transform will not be migrated automatically. There are new means to add it back manually.

_Mark


ma...@makr.zone

unread,
May 9, 2020, 3:50:39 PM5/9/20
to OpenPnP
Hi everybody

IMPORTANT CALL FOR YOUR ASSISTANCE:

As described in the original post of this thread, I'm working on a new GUI based global Axes and Drivers mapping solution in OpenPnP 2.0.

In short:

You can now easily define the Axes and Axis Transforms in the GUI and map them to the Head Mountables with simple drop-downs (see the image). No more machine.xml hacking.

Furthermore, you can add as many Drivers as you like and mix and match different types. All Axes and Actuators can be mapped to the drivers (again in the GUI) and OpenPnP will then automatically just talk to the right driver(s). 

Most of the GcodeDriver specific goodies were extracted from the driver and are now available to all the drivers (some of this is still work in progress):
  • Axis Mapping (obviously)
  • Axis Transforms
  • Visual Homing
  • Non-Squareness Compensation
The idea is to automatically migrate your machine.xml to the new solution.

What I now need is your machine.xml, especially if you have special mappings, special transforms, shared axes, sub-drivers, moveable actuators etc.

The goal is to make the migration fully* automatic. This will only succeed with your help!

*with the exception of OffsetTransforms, as mentioned earlier.


Thanks,
Mark

Duncan Ellison

unread,
May 10, 2020, 7:52:58 AM5/10/20
to OpenPnP
Hi Mark,

I see you are not getting a lot of traction on this request.  Here's mine anyway.

Conventional machine, 2 heads - Chinese see-saw type

Using the special smoothie 'mid way' Z homing sensor hack thing  :-0

Duncan
machine.xml

ma...@makr.zone

unread,
May 10, 2020, 3:35:19 PM5/10/20
to ope...@googlegroups.com

Hi Duncan

thanks a lot!

It seems to work.

Some impressions of your machine after fully automatic migration:


Axis mapping using drop downs:

Actuator Mapping:

Simpler management through mapping. Only what is mapped can have Gcode:



_Mark


Jason von Nieda

unread,
May 10, 2020, 4:01:54 PM5/10/20
to ope...@googlegroups.com
This is looking awesome Mark! I'm very excited to see the final results!

Jason


Duncan Ellison

unread,
May 10, 2020, 4:09:45 PM5/10/20
to OpenPnP
Wow !  - Pictures of the head as well - what can I say.

Will be interesting to see if this breaks anything.  Maybe it would be a good idea to warn everyone to take a backup of machine.xml first?

I know I struggled with the axes until I got my head around it.  I think I was one of the first to purchase these Chinese pre-built heads, but I see a lot of others are seeing them on Robotdigg and elsewhere and planning to use them as a way to speed up the build.  Configuring the transforms correctly, however, is a bit non-intuitive.

The biggest problem with these heads is resolving the Z homing issue, but that works nicely for me now with an optical sensor on one of the motors and the modified smoothieware.

The weird 'Feed Nmm' actuator is a kludge to drive the 'Simon' feeder from Erich's lab.  I hope this doesn't break  :-0    

ma...@makr.zone

unread,
May 12, 2020, 7:45:42 AM5/12/20
to ope...@googlegroups.com

Hi Duncan,
Hi Everybody,

Your feedback prompted some deeper scrutiny and I actually changed some things again. Thanks for that. :-)

> Will be interesting to see if this breaks anything.

The only things now known to break are these:

  1. Same Axis mapped to multiple controllers.
  2. Features of one Actuator distributed across multiple controllers, e.g. boolean-actuate valve on controller 1, read back the vacuum level on controller 2.
  3. Use of OffsetTransform, as announced earlier (not really broken, but it needs manual work after migration).
If 1. is a problem, we need to talk. I see no legitimate use case, but I might overlook something.

If 2. is a problem, I'd rather create a new sub-class DelegatedActuator that calls on a second actuator for select functions.

Naturally, script access to machine objects might need to be adapted. Some APIs have changed, and if you're doing something with them, you need to adapt your scripts:

  1. All the already mentioned capabilities and properties that moved out of the driver to the various machine objects. E.g. Axis Mapping, Axis Transform, Non-Squareness, Visual Homing, Backlash offsets.
  2. All driver coordinates are now raw controller coordinates, i.e. all transformations are done outside of drivers in HeadMountables and Head.
  3. Most notably the Head offset is now handled on the HeadMountable i.e. no longer subtracted in the driver, so be careful if you used direct driver.moveTo(). But in this case your scripts will fail anyway due to changed signature.
  4. All driver coordinates are now handled using a new AxesLocation rather than Location, so the mapped axis is always specified along with the coordinate. The multi-dimensional implementation is now done using simple for() loops rather than four-times duplicated code. This is also ready for future improvements where we might want to move more than one axis of the same Axis.Type at the same time, e.g. multiple rotation axes of a multi-nozzle head on the trip to alignment (smart motion blending), or even ganged picks and ganged alignments in the far future ;-).
  5. ReferenceDriver.home() is now done with the Machine instead of the Head, as a machine with multiple Heads should not home the same controller multiple times.
  6. Instead, Visual homing is now done on the Head (if enabled).
  7. Sign of Non-Squareness Compensation has flipped (it is now a transformation like all the others, going from raw controller axes forward).
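
A hedged sketch of the AxesLocation idea from item 4 above (illustrative names, not OpenPnP's actual API): every coordinate travels together with its mapped axis, and per-axis arithmetic becomes one loop instead of four duplicated X/Y/Z/C branches:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch of the AxesLocation idea (illustrative, not OpenPnP's
// actual API): coordinates are keyed by their mapped axis, and axis math
// is a single loop instead of duplicated per-axis code paths.
public class AxesLocationSketch {
    final Map<String, Double> coords = new LinkedHashMap<>();

    AxesLocationSketch put(String axis, double value) {
        coords.put(axis, value);
        return this;
    }

    // Works for any set of axes, including multiple rotation axes
    // (e.g. C1, C2 on a multi-nozzle head).
    AxesLocationSketch subtract(AxesLocationSketch other) {
        AxesLocationSketch result = new AxesLocationSketch();
        for (Map.Entry<String, Double> e : coords.entrySet()) {
            result.put(e.getKey(),
                    e.getValue() - other.coords.getOrDefault(e.getKey(), 0.0));
        }
        return result;
    }
}
```

Because the axis name rides along with each coordinate, the same code naturally extends to a head with any number of Z or C axes.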

> Maybe as well to warn everyone to take a backup of machine.xml first?

Always do that when upgrading OpenPnP!! Do not expect an extra Warning.

_Mark


ma...@makr.zone

unread,
May 14, 2020, 6:10:24 AM5/14/20
to ope...@googlegroups.com

Hi Mike,

I'm referring to your PM but posting this on the list for others to see.

Some impressions from the result of your machine's migration.

As you see it has created the four nozzle dual rack and pinion Z configuration:

Btw, the MappedAxis can do any axis scaling and offset, but also combinations like full axis reversal, in an easy-to-understand way: you just relate two points on the axis to each other in an "input" --> "output" manner.
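
As a sketch of that two-point idea (hypothetical class, not the actual MappedAxis implementation): relating (in0 --> out0) and (in1 --> out1) determines a scale and an offset, which covers plain scaling, offsets, and full reversal in one formula:

```java
// Hypothetical sketch of the two-point "input --> output" mapping idea
// (not the actual MappedAxis code). Two related points fully determine
// a linear map; scale, offset, and full reversal all fall out of the
// same formula.
public class TwoPointMapping {
    final double scale;
    final double offset;

    TwoPointMapping(double in0, double out0, double in1, double out1) {
        scale = (out1 - out0) / (in1 - in0);
        offset = out0 - scale * in0;
    }

    // Transformed (machine) coordinate to raw controller coordinate.
    double toRaw(double transformed) {
        return scale * transformed + offset;
    }

    // Raw controller coordinate back to transformed coordinate.
    double toTransformed(double raw) {
        return (raw - offset) / scale;
    }
}
```

For example, relating 0 --> 0 and 100 --> -100 yields a fully reversed axis.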

Migrated Visual Homing:

Axis mapping with simple drop-downs:

Actuators mapped to second driver:

Looks good!

Thanks again for the machine.xml

@Everybody, the call for your OpenPnP 2.0 machine.xml file still stands!

_Mark

Am 13.05.2020 um 17:24 schrieb M. Mencinger:
>
> I refer to OpenPnP :


> What I now need is your machine.xml, especially if you have special mappings, special transforms, shared axes, sub-drivers, moveable actuators etc.
>

> I have smoothie 6axis mapping - OpenPnP 2.0 see attached
> Please let me know how it works out.
>
> Thanks
>
> --
> Mike



bert shivaan

unread,
May 14, 2020, 6:49:22 AM5/14/20
to OpenPnP
Mark I don't have a 2.0 version, but am happy to attach my 1.0 if it is helpful. This took care of 6 nozzles with shared C and shared Z. Had separate blow and vac valves for all 6. Also the nozzles moved up and down pneumatically. Z was simply a movable stop to set how far a nozzle could move down. X,Y are pretty normal. Visual homing.


old machine.xml

Marek T.

unread,
May 14, 2020, 6:59:32 AM5/14/20
to OpenPnP
Hi Mark,

Attaching my machine for 2.0. I don't use 2.0 until it is functional with picksRetries, but I hope the day will come.
Shared-C and no Z motors.
As you maybe remember, I use pneumatic nozzles. They are controlled with actuators created in the MarekNozzle class added (really appreciated) by Jason to the official 2.0 repo.
machine.xml

ma...@makr.zone

unread,
May 14, 2020, 7:26:19 AM5/14/20
to ope...@googlegroups.com

Sorry Bert, only 2.0. The XML reader in OpenPnP can't be switched into "tolerant" mode, unfortunately, so migrating this is a hassle.

But you can send me a partially stripped-down version that opens in 2.0. It's mostly the nozzle tip section that changed; you could just delete that for this test, plus handle any remaining errors when trying to open. Perhaps you have a different computer to do this, so your original setup is safe.

_Mark

bert shivaan

unread,
May 14, 2020, 7:30:33 AM5/14/20
to OpenPnP
No worries, it is not hard to attach something even if it is of no value :)
Unless it really helps you in some way, I have no need to convert. That machine has been scrapped for the new one I am building now. 
If it is helpful, I will try to get it to open in 2.0 then re-attach here.

ma...@makr.zone

unread,
May 14, 2020, 8:01:26 AM5/14/20
to ope...@googlegroups.com

Hi Bert

I think I have enough examples for now, some received in PM, thanks. All the important Transforms, Non-Squareness and Multi-Driver setups are covered.

If somebody has a ScaleTransform, please send it :-)

_Mark

ma...@makr.zone

unread,
May 14, 2020, 1:12:20 PM5/14/20
to ope...@googlegroups.com

Hi Jason

I wanted to add some tests for the new axis & motion stuff (future and present). But foraying into this has turned into a veritable side show.

See the video:

https://makr.zone/SimulatedImperfectMachine.mp4

As you can see, I can simulate many more aspects of the Machine. And btw. this is all with the new Axes implementation, the basis of which is the auto-migrated default machine.xml.

The new framework for this is in a new sub-class of ReferenceMachine so we can test axes across multiple drivers. To enable it, one needs to change the machine class in the .xml.

What is missing is the "human-less" success monitoring. I'd like to add little Vision operations that look at the pick and place locations and match for the footprint, rendered black as a template for the black strip pockets on Pick, and rendered like in Alignment as a so-so template for the solder lands on Place. Triggered by the Vacuum Valve actuate() in the Null driver, it could test both the location and rotation.
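
To illustrate the principle of such a location check (OpenPnP's real vision pipeline uses OpenCV; this toy sketch only shows the idea of a match map), a brute-force sum-of-absolute-differences search for a rendered template inside a camera image could look like this:

```java
// Toy illustration of the template-match principle (the real pipeline
// uses OpenCV): brute-force sum-of-absolute-differences over a small
// grayscale image, returning where the rendered template fits best.
public class MatchSketch {

    // Returns {bestX, bestY}: top-left position of the best match.
    static int[] bestMatch(int[][] image, int[][] template) {
        int maxY = image.length - template.length;
        int maxX = image[0].length - template[0].length;
        long best = Long.MAX_VALUE;
        int[] pos = {0, 0};
        for (int y = 0; y <= maxY; y++) {
            for (int x = 0; x <= maxX; x++) {
                long sad = 0;
                for (int ty = 0; ty < template.length; ty++) {
                    for (int tx = 0; tx < template[0].length; tx++) {
                        sad += Math.abs(image[y + ty][x + tx] - template[ty][tx]);
                    }
                }
                if (sad < best) {
                    best = sad;
                    pos = new int[] {x, y};
                }
            }
        }
        return pos;
    }
}
```

A pass/fail check would then compare the best-match position (and, with rotated templates, the rotation) against the expected pick or place location within some tolerance.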

If this works, I guess I could then create a TestUnit. But then we need to talk about test duration or how to run tests at different "scrutiny" levels.

Jason von Nieda

unread,
May 14, 2020, 1:27:43 PM5/14/20
to ope...@googlegroups.com
Hey Mark, I think that sounds great. Any time we can make the tests closer to reality that is a win.

I am not too worried about test duration. It's a big, complicated system, and if it takes a while to test that's fine with me. Developers can easily disable tests while developing (mvn -DskipTests) to speed up compile/test cycles, and it's easy to run single tests from Eclipse (right click, Run As JUnit Test) when developing new tests. So, I'd say don't worry about test duration until it's a problem.

Jason



Mike M.

unread,
May 15, 2020, 11:18:10 AM5/15/20
to OpenPnP
Thank you Mark !
Mike

ma...@makr.zone

unread,
May 15, 2020, 3:09:54 PM5/15/20
to ope...@googlegroups.com

Jason,

is there a way to determine the current placement in the Job from outside?

More specifically, I need the PartAlignmentOffset for the current part on the nozzle, if it was aligned - as an offset for the Place check, obviously. So far I've found a way for most things, but this bugger really eludes me.

Thanks for your help!

If you're interested: The system already works, as currently all the alignments are perfect in the SimulatedUpCamera, but I'd like to change that, in order to really test Alignment in the simulated imperfect machine.

Just to show you how the PnP location check works, this is a series of alternating Pick and Place triplets of 1) what it sees at the location (PCB HSV masked to just see the solder lands), 2) the blurred part template, 3) the match map.

It's also a proof-of-concept of the "solder land to bottom vision match" we talked about. If you blur one of the two, it will find the natural sweet spot, even if the two don't really match in individual pad/land size. This is nicely visible in the R0603 here:


_Mark


Jason von Nieda

unread,
May 16, 2020, 11:29:08 AM5/16/20
to ope...@googlegroups.com
The short answer is no. The way I usually do stuff like this for the sake of testing is to add non-interface methods to the implementations and then cast the object in the test.

Jason


--
Sent from my BeOS enabled toaster

ma...@makr.zone

unread,
May 17, 2020, 8:45:58 AM5/17/20
to ope...@googlegroups.com

Hi Jason

Thanks. If I'm not mistaken, this data only lives transiently on the stack of the JobProcessor thread. No way to access it.

I propose eventually moving the alignment offsets to the Nozzle, where org.openpnp.spi.Nozzle.getPart() already is. The two belong together IMHO. This will also allow for ad-hoc alignment and accurate placement outside the JobProcessor. A "Place part at camera location" function could be useful during early machine setup and testing (at least with a Z probe or ContactProbeNozzle).

_Mark

Jason von Nieda

unread,
May 17, 2020, 12:15:53 PM5/17/20
to ope...@googlegroups.com
Hi Mark,

Officially, it's on the heap in PlannedPlacement. During a cycle the list of planned placements is valid, so you *could* expose that, but it's messy at best.

Moving alignment offsets to Nozzle - probably makes sense.

Jason




ma...@makr.zone

unread,
May 18, 2020, 11:57:47 AM5/18/20
to ope...@googlegroups.com

Hi everybody (TinyG users?)

who uses MOVE_TO_COMPLETE_REGEX?

If the following link is still valid, then it is used for TinyG and it captures a response coming from the G0, G1 command, right?

<![CDATA[.*stat:3.*]]>

https://github.com/openpnp/openpnp/wiki/TinyG

Can anybody explain/provide an example log?

I'm asking because I'd like to separate the wait for a move to complete from the moveTo() command (as laid out in the original post of this topic). This is also important for multi-(sub-)driver machines where axes are mixed across drivers. The wait for completion should only happen after all the controllers have been commanded to do their moves. Otherwise the moves will be done in sequence (slow and completely uncoordinated!).

If this <![CDATA[.*stat:3.*]]> report is sent back without prompt (i.e. directly as a result of the move itself), this might become a bad source of race conditions. @tonyluken, is this the "unsolicited response" you meant? 

What is wanted is a command that you can send to the controller at any time, that either reliably waits for completion before returning (like M400 on Smoothieware) or that sends a status report that can be used in a loop to wait (like the position report M114.1 on Smoothieware).
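
As a simplified sketch of the decoupled wait (the real GcodeDriver logic also handles timeouts and queued/stale reports), the completion check is just the MOVE_TO_COMPLETE_REGEX applied to incoming reports, with TinyG's customary `.*stat:3.*` as an example:

```java
import java.util.regex.Pattern;

// Simplified sketch of the decoupled completion wait: first command all
// controllers, then poll each one's reports until MOVE_TO_COMPLETE_REGEX
// matches. The regex below is TinyG's customary ".*stat:3.*"; the real
// GcodeDriver additionally handles timeouts and stale queued reports.
public class MoveCompleteWait {
    static final Pattern MOVE_TO_COMPLETE = Pattern.compile(".*stat:3.*");

    // True if this status report signals "machine stopped, move complete".
    static boolean isComplete(String report) {
        return MOVE_TO_COMPLETE.matcher(report).matches();
    }
}
```

Running this check only after all drivers have been commanded is what allows the moves on different controllers to overlap instead of executing in sequence.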

I've tried to find answers myself, but gave it up when I read this,

https://github.com/openpnp/openpnp/wiki/TinyG#quirks

... if this is all still true, how can people even use TinyG with OpenPnP at all?

Thanks.

_Mark

Tony Luken

unread,
May 18, 2020, 2:01:08 PM5/18/20
to OpenPnP
Mark,

Yes, I use the MOVE_TO_COMPLETE_REGEX to recognize when a move has completed on my TinyG.  But I have mine just set to:

.*stat:3.*

TinyG automatically reports the status whenever a move completes, but it also reports the status on a timer interval during moves (generally with a "stat:5"). I suspect there may be certain other commands that also cause a status report to be sent, although I have never been able to confirm it, and I don't think they are the main source of the problem as they probably aren't used during the normal course of operation.

I think most if not all of the spurious status reports I have seen are due to the timer interval expiring in some small time window just before/after a move completes, so that two status reports, both with "stat:3", end up being sent (probably due to some race condition in the TinyG FW). If the MOVE_TO_COMPLETE loop happens to catch the first one before the second one arrives, the second one ends up stuck in the response queue. Up until my recent change, that response would then be misinterpreted as the completion of the next moveTo command.

I was originally seeing the spurious responses when I had the timer interval set to 200 ms. Now I have set the timer interval to a very large number (for TinyG users: I added $si=4e9 to my CONNECT_COMMAND to set the timer interval to over 46 days) and since doing that I haven't seen any spurious responses.

Here is a short snippet of a move with the large status interval:
2020-05-17 16:35:18.181 ReferenceCamera DEBUG: moveTo((111.997952, 44.592722, NaN, 0.000000 mm), 1.0)
2020-05-17 16:35:18.182 GcodeDriver DEBUG: sendCommand(G1 X112.4944 Y44.9927   F8000, 12000)...
2020-05-17 16:35:18.182 GcodeDriver TRACE: [serial://COM5] >> G1 X112.4944 Y44.9927   F8000
2020-05-17 16:35:18.202 GcodeDriver TRACE: [serial://COM5] << tinyg [mm] ok>
2020-05-17 16:35:18.202 GcodeDriver DEBUG: sendCommand(serial://COM5 G1 X112.4944 Y44.9927   F8000, 12000) => [tinyg [mm] ok>]
2020-05-17 16:35:18.202 GcodeDriver TRACE: [serial://COM5] << qr:31, qi:1, qo:0
2020-05-17 16:35:18.202 GcodeDriver DEBUG: sendCommand(G1 X112.0944 Y44.5927   F8000, 12000)...
2020-05-17 16:35:18.203 GcodeDriver TRACE: [serial://COM5] >> G1 X112.0944 Y44.5927   F8000
2020-05-17 16:35:18.218 GcodeDriver TRACE: [serial://COM5] << tinyg [mm] ok>
2020-05-17 16:35:18.218 GcodeDriver DEBUG: sendCommand(serial://COM5 G1 X112.0944 Y44.5927   F8000, 12000) => [qr:31, qi:1, qo:0, tinyg [mm] ok>]
2020-05-17 16:35:18.218 GcodeDriver TRACE: [serial://COM5] << qr:30, qi:1, qo:0
2020-05-17 16:35:18.218 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-17 16:35:18.469 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [qr:30, qi:1, qo:0]
2020-05-17 16:35:18.469 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-17 16:35:18.720 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => []
2020-05-17 16:35:18.720 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-17 16:35:18.954 GcodeDriver TRACE: [serial://COM5] << qr:31, qi:0, qo:1
2020-05-17 16:35:18.971 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [qr:31, qi:0, qo:1]
2020-05-17 16:35:18.971 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-17 16:35:19.103 GcodeDriver TRACE: [serial://COM5] << posx:112.094,posy:44.593,posz:0.000,posa:0.000,feed:8000.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3
2020-05-17 16:35:19.103 GcodeDriver TRACE: Position report: posx:112.094,posy:44.593,posz:0.000,posa:0.000,feed:8000.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3
2020-05-17 16:35:19.103 GcodeDriver TRACE: [serial://COM5] << qr:32, qi:0, qo:1
2020-05-17 16:35:19.222 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [posx:112.094,posy:44.593,posz:0.000,posa:0.000,feed:8000.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3, qr:32, qi:0, qo:1]
2020-05-17 16:35:19.222 ReferenceCamera DEBUG: moveTo((111.997952, 44.592722, 0.000000, 0.000000 mm), 1.0)

And in case you're interested, here is a snippet of a move with a 200 ms status interval (although without a spurious response - those are fairly rare):
2020-05-18 12:31:49.103 ReferenceCamera DEBUG: moveTo((103.029642, 24.494094, NaN, 0.000000 mm), 1.0)
2020-05-18 12:31:49.103 GcodeDriver DEBUG: sendCommand(G1 X103.4826 Y24.8941   F8000, 12000)...
2020-05-18 12:31:49.103 GcodeDriver TRACE: [serial://COM5] >> G1 X103.4826 Y24.8941   F8000
2020-05-18 12:31:49.122 GcodeDriver TRACE: [serial://COM5] << tinyg [mm] ok>
2020-05-18 12:31:49.123 GcodeDriver TRACE: [serial://COM5] << qr:31, qi:1, qo:0
2020-05-18 12:31:49.123 GcodeDriver DEBUG: sendCommand(serial://COM5 G1 X103.4826 Y24.8941   F8000, 12000) => [tinyg [mm] ok>]
2020-05-18 12:31:49.123 GcodeDriver DEBUG: sendCommand(G1 X103.0826 Y24.4941   F8000, 12000)...
2020-05-18 12:31:49.123 GcodeDriver TRACE: [serial://COM5] >> G1 X103.0826 Y24.4941   F8000
2020-05-18 12:31:49.139 GcodeDriver TRACE: [serial://COM5] << tinyg [mm] ok>
2020-05-18 12:31:49.139 GcodeDriver TRACE: [serial://COM5] << qr:30, qi:1, qo:0
2020-05-18 12:31:49.139 GcodeDriver DEBUG: sendCommand(serial://COM5 G1 X103.0826 Y24.4941   F8000, 12000) => [qr:31, qi:1, qo:0, tinyg [mm] ok>]
2020-05-18 12:31:49.139 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-18 12:31:49.323 GcodeDriver TRACE: [serial://COM5] << posx:5.871,posy:1.412,posz:0.000,posa:0.000,feed:8000.00,vel:5314.83,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:49.324 GcodeDriver TRACE: Position report: posx:5.871,posy:1.412,posz:0.000,posa:0.000,feed:8000.00,vel:5314.83,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:49.390 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [qr:30, qi:1, qo:0, posx:5.871,posy:1.412,posz:0.000,posa:0.000,feed:8000.00,vel:5314.83,unit:1,coor:1,dist:0,frmo:1,stat:5]
2020-05-18 12:31:49.390 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-18 12:31:49.520 GcodeDriver TRACE: [serial://COM5] << posx:29.447,posy:7.084,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:49.521 GcodeDriver TRACE: Position report: posx:29.447,posy:7.084,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:49.640 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [posx:29.447,posy:7.084,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5]
2020-05-18 12:31:49.640 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-18 12:31:49.717 GcodeDriver TRACE: [serial://COM5] << posx:54.649,posy:13.147,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:49.717 GcodeDriver TRACE: Position report: posx:54.649,posy:13.147,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:49.891 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [posx:54.649,posy:13.147,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5]
2020-05-18 12:31:49.891 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-18 12:31:49.910 GcodeDriver TRACE: [serial://COM5] << posx:79.851,posy:19.209,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:49.910 GcodeDriver TRACE: Position report: posx:79.851,posy:19.209,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:50.106 GcodeDriver TRACE: [serial://COM5] << posx:100.847,posy:24.260,posz:0.000,posa:0.000,feed:8000.00,vel:3553.30,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:50.106 GcodeDriver TRACE: Position report: posx:100.847,posy:24.260,posz:0.000,posa:0.000,feed:8000.00,vel:3553.30,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:50.142 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [posx:79.851,posy:19.209,posz:0.000,posa:0.000,feed:8000.00,vel:8000.00,unit:1,coor:1,dist:0,frmo:1,stat:5, posx:100.847,posy:24.260,posz:0.000,posa:0.000,feed:8000.00,vel:3553.30,unit:1,coor:1,dist:0,frmo:1,stat:5]
2020-05-18 12:31:50.142 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-18 12:31:50.250 GcodeDriver TRACE: [serial://COM5] << qr:31, qi:0, qo:1
2020-05-18 12:31:50.304 GcodeDriver TRACE: [serial://COM5] << posx:103.428,posy:24.839,posz:0.000,posa:0.000,feed:8000.00,vel:301.57,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:50.304 GcodeDriver TRACE: Position report: posx:103.428,posy:24.839,posz:0.000,posa:0.000,feed:8000.00,vel:301.57,unit:1,coor:1,dist:0,frmo:1,stat:5
2020-05-18 12:31:50.392 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [qr:31, qi:0, qo:1, posx:103.428,posy:24.839,posz:0.000,posa:0.000,feed:8000.00,vel:301.57,unit:1,coor:1,dist:0,frmo:1,stat:5]
2020-05-18 12:31:50.392 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-18 12:31:50.398 GcodeDriver TRACE: [serial://COM5] << posx:103.083,posy:24.494,posz:0.000,posa:0.000,feed:8000.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3
2020-05-18 12:31:50.399 GcodeDriver TRACE: Position report: posx:103.083,posy:24.494,posz:0.000,posa:0.000,feed:8000.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3
2020-05-18 12:31:50.399 GcodeDriver TRACE: [serial://COM5] << qr:32, qi:0, qo:1
2020-05-18 12:31:50.643 GcodeDriver DEBUG: sendCommand(serial://COM5 null, 250) => [posx:103.083,posy:24.494,posz:0.000,posa:0.000,feed:8000.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3, qr:32, qi:0, qo:1]
2020-05-18 12:31:50.643 ReferenceCamera DEBUG: moveTo((103.029642, 24.494094, 0.000000, 0.000000 mm), 1.0)

Other than this issue, I haven't really experienced any other noticeable problems with TinyG.

Tony


Doppelgrau

unread,
May 18, 2020, 2:10:53 PM5/18/20
to OpenPnP
a short example from my machine:
2020-05-18 20:06:32.518 ReferenceNozzle DEBUG: N1.moveTo((281.751000, -67.225000, -15.000000, 0.000000 mm), 1.0)
2020-05-18 20:06:32.518 GcodeDriver DEBUG: sendCommand(G0 X279.7310    F32000, 10000)...
2020-05-18 20:06:32.518 GcodeDriver TRACE: [serial://ttyUSB0] >> G0 X279.7310    F32000
2020-05-18 20:06:32.528 GcodeDriver TRACE: [serial://ttyUSB0] << G0 X279.7310    F32000
2020-05-18 20:06:32.532 GcodeDriver TRACE: [serial://ttyUSB0] << tinyg [mm] ok>
2020-05-18 20:06:32.533 GcodeDriver DEBUG: sendCommand(serial://ttyUSB0 G0 X279.7310    F32000, 10000) => [G0 X279.7310    F32000, tinyg [mm] ok>]
2020-05-18 20:06:32.533 GcodeDriver DEBUG: sendCommand(G1 X279.8810    F1600, 10000)...
2020-05-18 20:06:32.533 GcodeDriver TRACE: [serial://ttyUSB0] >> G1 X279.8810    F1600
2020-05-18 20:06:32.538 GcodeDriver TRACE: [serial://ttyUSB0] << G1 X279.8810    F1600
2020-05-18 20:06:32.542 GcodeDriver TRACE: [serial://ttyUSB0] << tinyg [mm] ok>
2020-05-18 20:06:32.543 GcodeDriver DEBUG: sendCommand(serial://ttyUSB0 G1 X279.8810    F1600, 10000) => [G1 X279.8810    F1600, tinyg [mm] ok>]
2020-05-18 20:06:32.544 GcodeDriver DEBUG: sendCommand(null, 250)...
2020-05-18 20:06:32.723 GcodeDriver TRACE: [serial://ttyUSB0] << posx:279.881,posy:11.525,posz:-15.000,posa:0.000,feed:1600.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3
2020-05-18 20:06:32.798 GcodeDriver DEBUG: sendCommand(serial://ttyUSB0 null, 250) => [posx:279.881,posy:11.525,posz:-15.000,posa:0.000,feed:1600.00,vel:0.00,unit:1,coor:1,dist:0,frmo:1,stat:3]

The reason why I use the TinyG: it was installed when I bought the used Liteplacer and I didn't want to start too many different places of work at the same time.
I'm toying with the idea of another controller again and again, but an original Smoothieboard 5X seems very hard to get, and other boards seem a bit of a gamble (e.g. SKR 1.3 with the 2209 or something like that).

ma...@makr.zone

unread,
May 18, 2020, 2:13:09 PM5/18/20
to ope...@googlegroups.com

Thanks Tony!

What happens if you send the same move command again, i.e. with unchanged coordinates? Is there a second report?

Or do you know any other command that may prompt a reliable status or completion report?

_Mark


ma...@makr.zone

unread,
May 18, 2020, 3:25:11 PM5/18/20
to ope...@googlegroups.com

Hi Jason,

another issue. The Camera Rotation Jogging you presented here...

https://www.youtube.com/watch?v=0TvqQBkTGP8

...didn't work any more in the NullDriver.

Turns out the reason is that the NullDriver now also uses proper Axis Mapping. For others reading this, the current GcodeDriver-only Axis Mapping is described here:

https://github.com/openpnp/openpnp/wiki/GcodeDriver%3A-Axis-Mapping#mapping-axes-to-headmountables

The Wiki example maps the Z and C axes only to the Nozzles. That's what I would expect, what I have on my machine, and what I see in the machine.xml files other people recently sent me. Furthermore, multi-nozzle machines have multiple C axes, and one wouldn't know which C axis to map the camera to, so I guess not mapping the camera is still the correct thing to do, right?

But the code here...

https://github.com/openpnp/openpnp/blob/c6c100e175a71460ebfe969a7a880dd46f4e8c61/src/main/java/org/openpnp/gui/components/CameraView.java#L1341-L1343

...always talks to the Camera, so with the axis unmapped, getLocation() will always return the constant rotation from the head offsets. Jogging doesn't work either; the rotation is ignored.

I haven't tried it, but this can also never have worked with the GcodeDriver if the C axis wasn't mapped to the camera.

So this is a bug, and we should rotate the selected tool instead, correct?

Plus we could hide the rotation handle when no C axis is mapped to the selected tool.

As a reference: the reticles/crosshairs are also drawn with C taken from the selected tool:

https://github.com/openpnp/openpnp/blob/c6c100e175a71460ebfe969a7a880dd46f4e8c61/src/main/java/org/openpnp/gui/components/CameraView.java#L554-L555

As a bonus this will then also work for the bottom camera that is currently excluded here:

https://github.com/openpnp/openpnp/blob/c6c100e175a71460ebfe969a7a880dd46f4e8c61/src/main/java/org/openpnp/gui/components/CameraView.java#L1337-L1339

If you agree, I can fix this in my PR-to-be and also conveniently test it in the NullDriver. This will be a big one anyway :-)

_Mark


Jason von Nieda

unread,
May 18, 2020, 4:14:57 PM5/18/20
to ope...@googlegroups.com
Hi Mark,

I will need to think about this a bit more in depth later tonight, but the main thing that jumps out to me is that if jogging happens on the selected tool instead of the camera then capturing camera coordinates (which is used everywhere) will no longer do what people expect. Maybe it never really has, though.

In general, it is important to me that things become *more* camera focused and *less* selected tool focused. My intention is for the global selected tool to go away someday.

So, I think I would prefer for the camera to have its rotation stored and maybe just not mapped / sent to the driver? I haven't seen your new implementation, so I am not sure what the best way to do that would be.

Jason




ma...@makr.zone

unread,
May 18, 2020, 5:17:21 PM5/18/20
to ope...@googlegroups.com

Hi Jason,

Yes, there is this discrepancy. It is a bit too conveniently hidden in the NullDriver with its head offset of 0, i.e. you are effectively looking through the nozzle. But until we have stereoscopic down-looking cameras and 3D reconstruction of the image underneath, plus perfect cam-to-tip alignment, zero runout etc, we need to live with offsets. And offsets mean switching between tools, right?

And what about the second, the third, the nth nozzle? We need a way to select ... says the guy with the one-nozzle machine :-) But even with one nozzle there are those areas that the camera can't reach... and it is very useful to be able to use the nozzle tip for capture (and not waste that area).

I don't think there is an easy way to completely overturn the current system. But there is room for improvement in how the selected tool is handled, so you'd hardly ever have to think about it. It should be automatically selected whenever a Position button is pressed; not only the obvious ones, but also the Feeders panel's pick location buttons etc.

When camera jogging is used, even with just a click, it should likewise select the camera.

OpenPnP would need to remember the last selected non-camera tool and restore it when toggling between camera and tool (especially when doing that in the Machine Controls).

The Camera View/Machine Controls could then also delegate the Camera's unmapped C axis coordinate to the last selected tool while bypassing the runout compensation (that's now a feature of the new transform system, greetings to @doppelgrau).  So we can use rotation when the camera is selected, while keeping the X/Y calibrated for the camera.

The runout offset that is visible when you rotate while the nozzle is the selected tool has been vexing users, myself included, for a long time... that would finally be gone.

_Mark

Jason von Nieda

unread,
May 18, 2020, 5:58:44 PM5/18/20
to ope...@googlegroups.com
On Mon, May 18, 2020 at 4:17 PM ma...@makr.zone <ma...@makr.zone> wrote:

Hi Jason,

Yes, there is this discrepancy. It is a bit too conveniently hidden in the NullDriver with its head offset of 0, i.e. you are effectively looking through the nozzle. But until we have stereoscopic down-looking cameras and 3D reconstruction of the image underneath, plus perfect cam-to-tip alignment, zero runout etc, we need to live with offsets. And offsets mean switching between tools, right?

Only during machine setup, IMHO. My goal is for the runtime UI to not have a selected tool. So, certainly when you are setting up a nozzle changer, there is an implied selected / current tool, but my end goal is that all of the "operator" functions will just use the right tool automatically.

And what about the second, the third, the nth nozzle? We need a way to select ... says the guy with the one-nozzle machine :-) But even with one nozzle there are those areas that the camera can't reach... and it is very useful to be able to use the nozzle tip for capture (and not to waste that area).

Areas the camera cannot reach is a sore subject for me. Personally, I do not think this is a valid machine configuration and I am willing to leave these users at the door. I am definitely not willing to make usability concessions for this extremely rare case.

I don't think there is an easy way to completely overturn the current system. But, there is room for improvement how the selected tool is handled, so you'd hardly ever have to think about it. It should be automatically selected, whenever a Position button is pressed. Not only the obvious ones but also the Feeders Panel's pick location buttons etc. 

When Camera-Jogging is used - even with just a click, it should likewise select the camera.


I think this is a reasonable intermediary step, but as I said up there ^, it's not where I want the UI to go long term.

OpenPnP would need to remember the last selected non-camera-tool and restore it, when toggling between camera and tool (especially when doing that in the Machine Controls).

The Camera View/Machine Controls could then also delegate the Camera's unmapped C axis coordinate to the last selected tool while bypassing the runout compensation (that's now a feature of the new transform system, greetings to @doppelgrau).  So we can use rotation when the camera is selected, while keeping the X/Y calibrated for the camera.

This would mean that Camera.getLocation() would not reflect the rotation. Which would mean that capturing camera coordinates would need to go through the camera view instead of just on the camera, unless I am missing something? I'm not a fan of this solution. I think it would be better for the camera to just maintain a "virtual" rotation coordinate.

The runout offset that is visible when you rotate and the nozzle is the selected tool, is vexing users, myself included for a long time... that would finally be gone.

I'm not sure what this refers to. Can you expand?

Thanks,
Jason

Duncan Ellison

unread,
May 18, 2020, 6:28:00 PM5/18/20
to OpenPnP
Hey Jason

Excuse me for jumping in as I haven't really been contributing to this thread, but ....

Areas the camera cannot reach is a sore subject for me. Personally, I do not think this is a valid machine configuration and I am willing to leave these users at the door. I am definitely not willing to make usability concessions for this extremely rare case.

I don't see that this is so rare. Unless your camera is situated right between the nozzles, aren't you always going to have some dead space in front where the nozzle can reach before you hit the endstops, but the camera can't? Similarly in the X direction, the nozzles will always be able to reach places the camera can't. OK, maybe not by much, but I think for the majority of users bed space is at a premium.

I'd ask you to look kindly on those of us that either didn't think that far ahead or just have a configuration that means you simply can't reach every part of the XY travel with the camera. I've only just managed to re-configure my feeder setup so the camera can reach the pick location without crashing the nozzle into the feeder itself, but it only leaves me 2mm to spare before the nozzles end up in Juki heaven ;-)

Feel free to shoot me down if I'm in the wrong here.



Jason von Nieda

unread,
May 18, 2020, 6:35:17 PM5/18/20
to ope...@googlegroups.com
Hey Duncan,

I should clarify: I mean areas that the camera can't reach that the user would want to use for targeting. My response was geared towards my assumption that Mark was referring to feeders that the nozzle can reach but the camera can't.

In other words, if you have a feeder whose pick position cannot be seen with the camera, I am not willing to make significant UI concessions for that use case. My opinion is that if you intend to store a position in OpenPnP for some reason, the camera should be able to see that position.

Jason



Tony Luken

unread,
May 18, 2020, 7:11:09 PM5/18/20
to OpenPnP
Mark,

If I send exactly the same Gcode string as the prior move, I do not receive another status report, just an ok> (which matches my COMMAND_COMPLETE_REGEX of .*(ok>|SYSTEM READY).*).

If I change one of the coordinates in the command by 0.0001 mm, I do get an ok> and another status report. I tried smaller changes to the coordinates but couldn't determine a consistent cutoff for when a status report would or would not be generated.

I can request a status report by sending {sr:n}, but then TinyG, for some reason, sends two status reports. This command also switches TinyG to JSON mode (as opposed to text mode), so my COMMAND_COMPLETE_REGEX no longer worked correctly. I was able to fix that by changing my COMMAND_COMPLETE_REGEX to .*(ok>|SYSTEM READY|f:\[1,0,).* so that it also detects the JSON footer with a status code of "ok" (the 0 following the 1).
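For reference, that extended regex can be exercised in isolation against response strings like the ones in these logs. A plain java.util.regex sketch (stand-alone, not OpenPnP driver code):

```java
import java.util.regex.Pattern;

// Checks controller responses against the extended completion regex,
// which matches the text-mode "ok>" prompt, the "SYSTEM READY" banner,
// and the JSON footer f:[1,0,...] whose second field (0) means status "ok".
public class CommandCompleteMatcher {
    static final Pattern COMMAND_COMPLETE =
            Pattern.compile(".*(ok>|SYSTEM READY|f:\\[1,0,).*");

    public static boolean isComplete(String line) {
        return COMMAND_COMPLETE.matcher(line).matches();
    }
}
```

A response with a non-zero status code in the footer, e.g. {r:{},f:[1,108,16,156]}, correctly does not match, so errors won't be mistaken for completion.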

Just for grins, I tried sending this command "G1 X23.5690 Y17.7339   F8000\nG1 X23.5690 Y27.7339   F8000\n{sr:n}".  Unexpectedly, the status report that was requested at the end came out in the middle of the move (but with the correct status of stat:5, so it didn't mess anything up).

Tony

Alexander Goldstone

unread,
May 18, 2020, 7:24:39 PM5/18/20
to OpenPnP

Hi Mark.

Only just seen your request for machine.xml files sorry.

A fair few people have backed the crowdfunded SimplePNP project which is currently being manufactured. This is the machine.xml file the creator has shared so far:


Regards.

Alex


On Saturday, May 9, 2020 at 8:50:39 PM UTC+1, ma...@makr.zone wrote:
Hi everybody

IMPORTANT CALL FOR YOUR ASSISTANCE:

As described in the original post of this thread, I'm working on a new GUI-based global Axes and Drivers mapping solution in OpenPnP 2.0.

In short:

You can now easily define the Axes and Axis Transforms in the GUI and map them to the Head Mountables with simple drop-downs (see the image). No more machine.xml hacking.

Furthermore, you can add as many Drivers as you like and mix and match different types. All Axes and Actuators can be mapped to the drivers (again in the GUI) and OpenPnP will then automatically just talk to the right driver(s). 

Most of the GcodeDriver-specific goodies were extracted from the driver and are now available to all drivers (some of this is still work in progress):
  • Axis Mapping (obviously)
  • Axis Transforms
  • Visual Homing
  • Non-Squareness Compensation
The idea is to automatically migrate your machine.xml to the new solution.

What I now need is your machine.xml, especially if you have special mappings, special transforms, shared axes, sub-drivers, moveable actuators, etc.

ma...@makr.zone

unread,
May 19, 2020, 3:30:37 AM5/19/20
to ope...@googlegroups.com

Thanks, Alex.

I guess there is a mistake in that config: it has an unused sub-driver with a second set of Z and Rotation axes mapped that are never used (there is no second nozzle).

Note: the migration is still perfect and this machine will work; it now just shows you the superfluous stuff more prominently:

_Mark


ma...@makr.zone

unread,
May 19, 2020, 6:00:12 AM5/19/20
to ope...@googlegroups.com
This would mean that Camera.getLocation() would not reflect the rotation. Which would mean that capturing camera coordinates would need to go through the camera view instead of just on the camera, unless I am missing something? I'm not a fan of this solution. I think it would be better for the camera to just maintain a "virtual" rotation coordinate.

OK, I understand now.

Can we agree that there are two completely independent questions?

  1. Whether the camera has virtual coordinates.
  2. Whether we should have a selected tool on the UI.

About 1.:

I'll create a VirtualAxis that is mapped to the down-looking cameras' C and serves as a coordinate store.
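A minimal sketch of what such an axis boils down to (a hypothetical stand-alone class, not the actual OpenPnP implementation): a named coordinate store whose "moves" never reach a driver.

```java
// Hypothetical sketch of a virtual axis: it stores a coordinate for a
// head mountable (e.g. the down camera's C rotation) but is never mapped
// to a driver, so moving it produces no machine motion.
public class VirtualAxis {
    private final String name;
    private double coordinate; // millimeters or degrees, by convention

    public VirtualAxis(String name) {
        this.name = name;
    }

    public String getName() { return name; }

    public double getCoordinate() { return coordinate; }

    // A "move" just records the target; there is no G-code to send.
    public void moveTo(double target) {
        this.coordinate = target;
    }
}
```

With this, Camera.getLocation() can still report a rotation (or a remembered Z), because the coordinate survives even though nothing is sent to the controller.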

Users could also assign a virtual Z. It is sometimes convenient to switch back and forth between camera and nozzle tip such as when setting up a feeder (height). I always hate it when the camera forgets the Z.

To be clear, a Z probe measurement would still be favored on capture. We could perhaps add a modifier Key (holding down Shift) when you don't want the probe measurement.

The question: should I map a virtual Z by default on migration?

Btw. I have made it easy to set the NullDriver up for 3D operation (currently everything is at Z=0) so Z handling can be tested. It sets all the feeders' Z, the BoardLocations' Z and the bottom cameras' Z.

About 2.:

> Only during machine setup, IMHO. My goal is for the runtime UI to not have a selected tool. So, certainly when you are setting up a nozzle changer, there is an implied selected / current tool, but my end goal is that all of the "operator" functions will just use the right tool automatically.

I'm not sure I understand. Sometimes I feel you underestimate your own brilliant work. :-)

You don't need any of that control GUI during normal Job operation, neither virtual nor real. The Machine Controls can already be hidden along with the selected tool combo-box, and we could even add an option to hide them automatically when you hit Run on the Job. Your system is already brilliant, no need for a Verschlimmbesserung (an "improvement" that makes things worse).

But as soon as the job is stuck and you need to troubleshoot something, you need to be back in full control mode, and fast. I wouldn't want a dumbed-down GUI when I need to fix a bad feeder pick location etc. I can't imagine a single realistic interrupted-job troubleshooting use case that I could fix with just the camera view and virtual axes. It would be infuriating to have the real problem at hand and then also have to wrestle free of some Noobs Are King GUI. Sorry, I'm quite emotional about these questions, as I'm already a bleeding victim of that *** mega trend.

About the unreachable area: I would lose so much space! For one use case, see here:

https://youtu.be/dGde59Iv6eY?t=250

It would also make most Liteplacer users' lives hard, as they are advised to set up their changer in the unreachable area:

>Attach the holder to your work table. The down looking camera doesn’t need to see the nozzles, a good place is above the hole location, on the left.

https://www.liteplacer.com/assembling-the-nozzle-holder/

Not on my machine, though.

_Mark


Jason von Nieda

unread,
May 19, 2020, 10:19:29 AM5/19/20
to ope...@googlegroups.com

On Tue, May 19, 2020 at 5:00 AM ma...@makr.zone <ma...@makr.zone> wrote:
This would mean that Camera.getLocation() would not reflect the rotation. Which would mean that capturing camera coordinates would need to go through the camera view instead of just on the camera, unless I am missing something? I'm not a fan of this solution. I think it would be better for the camera to just maintain a "virtual" rotation coordinate.

OK, I understand now.

Can we agree that there are two completely independent questions?

  1. Whether the camera has virtual coordinates.
  2. Whether we should have a selected tool on the UI.

Yes, agreed. And there's not really any reason to get into #2 right now. It's just a goal for the future.

About 1.:

I'll create a VirtualAxis that is mapped to down-looking cameras' C and serves as a coordinate store.

Users could also assign a virtual Z. It is sometimes convenient to switch back and forth between camera and nozzle tip such as when setting up a feeder (height). I always hate it when the camera forgets the Z.

To be clear, a Z probe measurement would still be favored on capture. We could perhaps add a modifier Key (holding down Shift) when you don't want the probe measurement.

Sounds good.

The question: should I map a virtual Z per default on migrate?

I think that sounds reasonable. Then it will work the way I had intended for it to work, whether it ever did or not :) (Clearly I have to do more testing on the machine and less in the simulator)

Btw. I have made it easy to set the NullDriver up for 3D operation (currently everything is on Z=0) so Z handling can be tested. It sets all feeder's Z, the BoardLocations' Z and the bottom cameras' Z.

About 2.:

> Only during machine setup, IMHO. My goal is for the runtime UI to not have a selected tool. So, certainly when you are setting up a nozzle changer, there is an implied selected / current tool, but my end goal is that all of the "operator" functions will just use the right tool automatically.

I'm not sure I understand. Sometimes I feel you underestimate your own brilliant work. :-)

You don't need any of that control GUI during normal Job operation, neither virtual nor real. Machine Controls can already be hidden along with the selected tool combo-box and we could even add an option to hide them automatically when you hit Run on the Job.  Your system is already brilliant, no need for a Verschlimmbesserung.

But as soon as the job is stuck and you need to troubleshoot something, you need to be back in full control mode, and fast. I wouldn't want a dumbed-down GUI when I need to fix a bad feeder pick location etc. I can't imagine a single realistic interrupted-job troubleshooting use case that I could fix with just the camera view and virtual axes. It would be infuriating to have the real problem at hand and then also have to wrestle free of some Noobs Are King GUI. Sorry, I'm quite emotional about these questions, as I'm already a bleeding victim of that *** mega trend.


As I said above, I don't think we need to discuss #2 right now. It's a plan for the future and we can get into it when it's time to look at those changes. I have a bunch of other things I have to clear off my plate before even thinking about it. The only comment I'll add is that I think there is a very large gap between where things are now and a dumbed down GUI.

About the unreachable area, I would lose so much space! One use case see here

https://youtu.be/dGde59Iv6eY?t=250

It would also make most Liteplacer users' life hard, as they are advised to set up their changer in the unreachable area here:

>Attach the holder to your work table. The down looking camera doesn’t need to see the nozzles, a good place is above the hole location, on the left.

https://www.liteplacer.com/assembling-the-nozzle-holder/

Not on my machine, though.

There will always be a way to use the right tool to do setup tasks. I just don't think it needs to be global. But again, let's save it for another day. Nothing is changing today.

Jason

_Mark



Alexander Goldstone

unread,
May 19, 2020, 10:48:46 AM5/19/20
to OpenPnP
Thanks Mark. Possibly incomplete, as there are single and dual nozzle versions of the machine. Good to know it's compatible with the changes though.

ma...@makr.zone

unread,
May 19, 2020, 11:25:37 AM5/19/20
to ope...@googlegroups.com

Am 19.05.2020 um 16:19 schrieb Jason von Nieda:
> I think that sounds reasonable. Then it will work the way I had intended for it to work

Cool!

(Clearly I have to do more testing on the machine and less in the simulator)

Well, one of my goals is to make the simulator much more realistic and testing as results-oriented as possible. This should now all behave much more like a real machine, i.e. the same type of discrepancies between drivers should no longer be possible.

Ideally, instead of using the NullDriver, this would one day also work with a regular GcodeDriver talking to a GcodeServer. But this needs a true Gcode interpreter for a full job test. Once there, people could even test their real-life machine.xml setup with arbitrary, reasonably standardized Gcode commands, with the comms temporarily bypassed to the GcodeServer. The next step would be a machine scan using the down-looking camera to get the user's machine table into the ImageCamera. Imagine how cool ;-) Shouldn't be too difficult to implement if kept simple.

I'll keep the motion planner building blocks universal, so they can be plugged in as soon as such a Gcode interpreter is available. I'm unfamiliar with advanced regexes, but maybe this could actually be quite easy.
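As a rough illustration of how far a simple regex already gets for Gcode: a line can be split into letter/value "words". This is only a sketch, not the planned GcodeServer interpreter; comments, checksums and modal state are ignored.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal Gcode word parser sketch: splits a line like "G1 X279.881 F1600"
// into letter/value pairs, preserving the order the words appear in.
public class GcodeWords {
    private static final Pattern WORD =
            Pattern.compile("([A-Za-z])\\s*([-+]?[0-9]*\\.?[0-9]+)");

    public static Map<Character, Double> parse(String line) {
        Map<Character, Double> words = new LinkedHashMap<>();
        Matcher m = WORD.matcher(line);
        while (m.find()) {
            words.put(Character.toUpperCase(m.group(1).charAt(0)),
                      Double.parseDouble(m.group(2)));
        }
        return words;
    }
}
```

A real interpreter would still need modal state (the last G0/G1, units, coordinate system), but the tokenizing part really is this small.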

_Mark


ma...@makr.zone

unread,
May 19, 2020, 11:29:07 AM5/19/20
to ope...@googlegroups.com

> Good to know its compatible with the changes though.

Yes, the real-world tests still have to be done; some bugs are to be expected, and I hope people will help me test it. I will do my best to make it work for those who do.

_Mark


Marek T.

unread,
May 19, 2020, 12:35:18 PM5/19/20
to OpenPnP
Hi Mark,
I'm not sure whether it's a matter of the simulator function you're creating, but have you solved, or thought about, how to test picks/placements with different vacuum values (proper/expected and improper/fault)? That is, to see how the machine will behave when at some point the vacuum test value is not proper.

ma...@makr.zone

unread,
May 19, 2020, 2:20:48 PM5/19/20
to ope...@googlegroups.com

Hi Marek

No, vacuum sensing simulation is not included and not planned this time; there is already too much work with the motion alone.

But the basic framework is there. So this could be added later.

_Mark

Jason von Nieda

unread,
May 19, 2020, 2:26:17 PM5/19/20
to ope...@googlegroups.com
FYI: This is part of why I added the GcodeServer - so that I could test the pick retry error with vacuum checks. So yes, as Mark said, the framework is now there and it's relatively easy to add tests that check this.

Jason



Marek T.

unread,
May 19, 2020, 3:04:50 PM5/19/20
to OpenPnP
Great. It's a pain in the ass during tests (I do them using terminals instead of the machine, but the problem is almost the same).

ma...@makr.zone

unread,
May 23, 2020, 6:05:21 AM5/23/20
to ope...@googlegroups.com

Hi Tony

I suspect the following:

If you send two, three, or any number of G0/G1 commands quickly, only one such status report will be issued. And if some of the commands are somewhat delayed (due to other stuff happening on the OpenPnP side), it becomes nondeterministic how many reports are to be expected. It then becomes impossible to safely determine when the sequence of commands has completed.

Please try to find a solution; otherwise TinyG will not be usable with the advanced Gcode driver mode.

Note: TinyG is open source.

https://github.com/synthetos/TinyG/blob/master/firmware/tinyg/gcode_parser.c

Perhaps it would be easier to add a proper M400 command than to try to coerce it into doing what we want with a crystal ball, tweezers and a crowbar.

https://www.reprap.org/wiki/G-code#M400:_Wait_for_current_moves_to_finish

Have you tried 

G4 P0

?

_Mark


Tony Luken

unread,
May 23, 2020, 12:48:49 PM5/23/20
to OpenPnP
Mark,

Yep, I think that is exactly how TinyG would behave if you are no longer waiting for each individual move to complete before sending the next move. I tried G4 P0 and it appears to pause forever instead of for a zero duration, so I don't think that's a solution.

I have another idea though: TinyG supports Gcode line numbers (in the range 0 to 99999) and can be set up to include the line number of the last completed command in its status reports. Could the driver have the capability to provide an incrementing line number with each Gcode command and then use that to disambiguate the responses?

Here's a snippet of a log where I sent a series of Gcodes to move the head four times along a square path (note that \N is the escape sequence for the linefeed separating each move command):
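The numbering/disambiguation idea could be sketched like this (a hypothetical helper in plain Java, not the actual driver code): prefix each outgoing command with an incrementing N-word, then extract the line: field from each status report to see which command the report refers to.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of line-number disambiguation: number each outgoing command with
// an incrementing N-word, then read the "line:" field back out of TinyG
// status reports to tell which command a report belongs to.
public class LineNumberer {
    private static final Pattern LINE = Pattern.compile("line:(\\d+)");
    private int next = 1;

    // Prefix the command with the next line number, e.g. "N1 G0 X110 Y100".
    public String number(String command) {
        return "N" + (next++) + " " + command;
    }

    // Returns the completed line number from a status report, or -1 if absent.
    public static int reportedLine(String statusReport) {
        Matcher m = LINE.matcher(statusReport);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }
}
```

The log below shows exactly this pattern: N-numbered commands going out, and status reports carrying a line: field coming back.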

2020-05-23 11:13:43.701 GcodeDriver DEBUG: sendCommand(N1 G0 X110 Y100\NN2 G0 Y110\NN3 G0 X100\NN4 G0 Y100\NN5 G0 X110\NN6 G0 Y110\NN7 G0 X100\NN9 G0 Y100\NN10 G0 X110\NN11 G0 Y110\NN12 G0 X100\NN13 G0 Y100\NN14 G0 X110\NN15 G0 Y110\NN16 G0 X100\NN17 G0 Y100, 5000)...
2020-05-23 11:13:43.702 GcodeDriver TRACE: [serial://COM5] >> N1 G0 X110 Y100
N2 G0 Y110
N3 G0 X100
N4 G0 Y100
N5 G0 X110
N6 G0 Y110
N7 G0 X100
N9 G0 Y100
N10 G0 X110
N11 G0 Y110
N12 G0 X100
N13 G0 Y100
N14 G0 X110
N15 G0 Y110
N16 G0 X100
N17 G0 Y100
2020-05-23 11:13:43.713 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,16,156]}
2020-05-23 11:13:43.713 GcodeDriver DEBUG: sendCommand(serial://COM5 N1 G0 X110 Y100
N2 G0 Y110
N3 G0 X100
N4 G0 Y100
N5 G0 X110
N6 G0 Y110
N7 G0 X100
N9 G0 Y100
N10 G0 X110
N11 G0 Y110
N12 G0 X100
N13 G0 Y100
N14 G0 X110
N15 G0 Y110
N16 G0 X100
N17 G0 Y100, 5000) => [{r:{},f:[1,0,16,156]}]
2020-05-23 11:13:43.959 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:100.000,posy:100.000,posz:0.000,posa:0.000,vel:1.26,stat:5}}
2020-05-23 11:13:43.960 GcodeDriver TRACE: Position report: {sr:{line:1,posx:100.000,posy:100.000,posz:0.000,posa:0.000,vel:1.26,stat:5}}
2020-05-23 11:13:43.960 GcodeDriver TRACE: [serial://COM5] << {qr:31,qi:1,qo:0}
2020-05-23 11:13:43.960 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,11,151]}
2020-05-23 11:13:43.961 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:100.004,posy:100.000,posz:0.000,posa:0.000,vel:30.43,stat:5}}
2020-05-23 11:13:43.961 GcodeDriver TRACE: Position report: {sr:{line:1,posx:100.004,posy:100.000,posz:0.000,posa:0.000,vel:30.43,stat:5}}
2020-05-23 11:13:43.961 GcodeDriver TRACE: [serial://COM5] << {qr:30,qi:1,qo:0}
2020-05-23 11:13:43.961 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,11,151]}
2020-05-23 11:13:43.962 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:100.026,posy:100.000,posz:0.000,posa:0.000,vel:125.09,stat:5}}
2020-05-23 11:13:43.962 GcodeDriver TRACE: Position report: {sr:{line:1,posx:100.026,posy:100.000,posz:0.000,posa:0.000,vel:125.09,stat:5}}
2020-05-23 11:13:43.962 GcodeDriver TRACE: [serial://COM5] << {qr:29,qi:1,qo:0}
2020-05-23 11:13:43.962 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,11,151]}
2020-05-23 11:13:43.963 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:100.084,posy:100.000,posz:0.000,posa:0.000,vel:302.68,stat:5}}
2020-05-23 11:13:43.963 GcodeDriver TRACE: Position report: {sr:{line:1,posx:100.084,posy:100.000,posz:0.000,posa:0.000,vel:302.68,stat:5}}
2020-05-23 11:13:43.963 GcodeDriver TRACE: [serial://COM5] << {qr:28,qi:1,qo:0}
2020-05-23 11:13:43.963 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,11,151]}
2020-05-23 11:13:43.964 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:100.199,posy:100.000,posz:0.000,posa:0.000,vel:563.05,stat:5}}
2020-05-23 11:13:43.964 GcodeDriver TRACE: Position report: {sr:{line:1,posx:100.199,posy:100.000,posz:0.000,posa:0.000,vel:563.05,stat:5}}
2020-05-23 11:13:43.964 GcodeDriver TRACE: [serial://COM5] << {qr:27,qi:1,qo:0}
2020-05-23 11:13:43.964 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,11,151]}
2020-05-23 11:13:43.965 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:100.389,posy:100.000,posz:0.000,posa:0.000,vel:892.33,stat:5}}
2020-05-23 11:13:43.965 GcodeDriver TRACE: Position report: {sr:{line:1,posx:100.389,posy:100.000,posz:0.000,posa:0.000,vel:892.33,stat:5}}
2020-05-23 11:13:43.965 GcodeDriver TRACE: [serial://COM5] << {qr:26,qi:1,qo:0}
2020-05-23 11:13:43.965 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,11,151]}
2020-05-23 11:13:43.966 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:100.667,posy:100.000,posz:0.000,posa:0.000,vel:1266.87,stat:5}}
2020-05-23 11:13:43.966 GcodeDriver TRACE: Position report: {sr:{line:1,posx:100.667,posy:100.000,posz:0.000,posa:0.000,vel:1266.87,stat:5}}
2020-05-23 11:13:43.966 GcodeDriver TRACE: [serial://COM5] << {qr:25,qi:1,qo:0}
2020-05-23 11:13:43.966 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,11,151]}
2020-05-23 11:13:43.967 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:101.040,posy:100.000,posz:0.000,posa:0.000,vel:1657.15,stat:5}}
2020-05-23 11:13:43.967 GcodeDriver TRACE: Position report: {sr:{line:1,posx:101.040,posy:100.000,posz:0.000,posa:0.000,vel:1657.15,stat:5}}
2020-05-23 11:13:43.967 GcodeDriver TRACE: [serial://COM5] << {qr:24,qi:1,qo:0}
2020-05-23 11:13:43.967 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.968 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:101.506,posy:100.000,posz:0.000,posa:0.000,vel:2031.69,stat:5}}
2020-05-23 11:13:43.968 GcodeDriver TRACE: Position report: {sr:{line:1,posx:101.506,posy:100.000,posz:0.000,posa:0.000,vel:2031.69,stat:5}}
2020-05-23 11:13:43.968 GcodeDriver TRACE: [serial://COM5] << {qr:23,qi:1,qo:0}
2020-05-23 11:13:43.968 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.969 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:102.057,posy:100.000,posz:0.000,posa:0.000,vel:2360.97,stat:5}}
2020-05-23 11:13:43.969 GcodeDriver TRACE: Position report: {sr:{line:1,posx:102.057,posy:100.000,posz:0.000,posa:0.000,vel:2360.97,stat:5}}
2020-05-23 11:13:43.969 GcodeDriver TRACE: [serial://COM5] << {qr:22,qi:1,qo:0}
2020-05-23 11:13:43.970 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.970 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:102.678,posy:100.000,posz:0.000,posa:0.000,vel:2690.06,stat:5}}
2020-05-23 11:13:43.970 GcodeDriver TRACE: Position report: {sr:{line:1,posx:102.678,posy:100.000,posz:0.000,posa:0.000,vel:2690.06,stat:5}}
2020-05-23 11:13:43.971 GcodeDriver TRACE: [serial://COM5] << {qr:21,qi:1,qo:0}
2020-05-23 11:13:43.971 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.972 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:103.580,posy:100.000,posz:0.000,posa:0.000,vel:2839.24,stat:5}}
2020-05-23 11:13:43.972 GcodeDriver TRACE: Position report: {sr:{line:1,posx:103.580,posy:100.000,posz:0.000,posa:0.000,vel:2839.24,stat:5}}
2020-05-23 11:13:43.972 GcodeDriver TRACE: [serial://COM5] << {qr:20,qi:1,qo:0}
2020-05-23 11:13:43.972 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.973 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:104.286,posy:100.000,posz:0.000,posa:0.000,vel:2909.14,stat:5}}
2020-05-23 11:13:43.973 GcodeDriver TRACE: Position report: {sr:{line:1,posx:104.286,posy:100.000,posz:0.000,posa:0.000,vel:2909.14,stat:5}}
2020-05-23 11:13:43.973 GcodeDriver TRACE: [serial://COM5] << {qr:19,qi:1,qo:0}
2020-05-23 11:13:43.973 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.974 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:105.000,posy:100.000,posz:0.000,posa:0.000,vel:2923.97,stat:5}}
2020-05-23 11:13:43.974 GcodeDriver TRACE: Position report: {sr:{line:1,posx:105.000,posy:100.000,posz:0.000,posa:0.000,vel:2923.97,stat:5}}
2020-05-23 11:13:43.974 GcodeDriver TRACE: [serial://COM5] << {qr:18,qi:1,qo:0}
2020-05-23 11:13:43.976 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.977 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:105.714,posy:100.000,posz:0.000,posa:0.000,vel:2918.39,stat:5}}
2020-05-23 11:13:43.977 GcodeDriver TRACE: Position report: {sr:{line:1,posx:105.714,posy:100.000,posz:0.000,posa:0.000,vel:2918.39,stat:5}}
2020-05-23 11:13:43.978 GcodeDriver TRACE: [serial://COM5] << {qr:17,qi:1,qo:0}
2020-05-23 11:13:43.978 GcodeDriver TRACE: [serial://COM5] << {r:{},f:[1,0,12,152]}
2020-05-23 11:13:43.978 GcodeDriver TRACE: [serial://COM5] << {sr:{line:1,posx:106.420,posy:100.000,posz:0.000,posa:0.000,vel:2870.58,stat:5}}
2020-05-23 11:13:43.978 GcodeDriver TRACE: Position report: {sr:{line:1,posx:106.420,posy:100.000,posz:0.000,posa:0.000,vel:2870.58,stat:5}}
2020-05-23 11:13:43.979 GcodeDriver TRACE: [serial://COM5] << {qr:16,qi:1,qo:0}
2020-05-23 11:13:44.119 GcodeDriver TRACE: [serial://COM5] << {qr:17,qi:0,qo:1}
2020-05-23 11:13:44.520 GcodeDriver TRACE: [serial://COM5] << {qr:18,qi:0,qo:1}
2020-05-23 11:13:44.919 GcodeDriver TRACE: [serial://COM5] << {qr:19,qi:0,qo:1}
2020-05-23 11:13:45.319 GcodeDriver TRACE: [serial://COM5] << {qr:20,qi:0,qo:1}
2020-05-23 11:13:45.719 GcodeDriver TRACE: [serial://COM5] << {qr:21,qi:0,qo:1}
2020-05-23 11:13:46.119 GcodeDriver TRACE: [serial://COM5] << {qr:22,qi:0,qo:1}
2020-05-23 11:13:46.519 GcodeDriver TRACE: [serial://COM5] << {qr:23,qi:0,qo:1}
2020-05-23 11:13:46.918 GcodeDriver TRACE: [serial://COM5] << {qr:24,qi:0,qo:1}
2020-05-23 11:13:47.319 GcodeDriver TRACE: [serial://COM5] << {qr:25,qi:0,qo:1}
2020-05-23 11:13:47.719 GcodeDriver TRACE: [serial://COM5] << {qr:26,qi:0,qo:1}
2020-05-23 11:13:48.102 GcodeDriver TRACE: [serial://COM5] << {qr:27,qi:0,qo:1}
2020-05-23 11:13:48.503 GcodeDriver TRACE: [serial://COM5] << {qr:28,qi:0,qo:1}
2020-05-23 11:13:48.903 GcodeDriver TRACE: [serial://COM5] << {qr:29,qi:0,qo:1}
2020-05-23 11:13:49.303 GcodeDriver TRACE: [serial://COM5] << {qr:30,qi:0,qo:1}
2020-05-23 11:13:49.703 GcodeDriver TRACE: [serial://COM5] << {qr:31,qi:0,qo:1}
2020-05-23 11:13:50.124 GcodeDriver TRACE: [serial://COM5] << {sr:{line:17,posx:100.000,posy:100.000,posz:0.000,posa:0.000,vel:0.00,stat:3}}
2020-05-23 11:13:50.124 GcodeDriver TRACE: Position report: {sr:{line:17,posx:100.000,posy:100.000,posz:0.000,posa:0.000,vel:0.00,stat:3}}
2020-05-23 11:13:50.124 GcodeDriver TRACE: [serial://COM5] << {qr:32,qi:0,qo:1}


BTW - you may have already answered this somewhere and I just missed it, but if you are going to stream commands to the controller without waiting for each to complete, how do you know that the controller can accept another command without overflowing its internal buffer?  TinyG outputs queue reports like {qr:24,qi:0,qo:1} to inform the sender of its internal queue status for that purpose.  I think the first number is the number of slots available, the second is how many slots were filled since the last report, and the third is how many slots were emptied since the last report.  Do other controllers have a similar mechanism?
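The sender-side bookkeeping this enables can be sketched roughly as follows. This is a minimal Python sketch, not OpenPnP code; the TARGET_FREE threshold and function names are invented for illustration, and the field interpretation follows the reading of the report given above:

```python
import re

# Assumed tuning value, not an official TinyG constant: keep streaming
# only while at least this many planner slots are reported free.
TARGET_FREE = 12

QR_PATTERN = re.compile(r'\{qr:(\d+),qi:(\d+),qo:(\d+)\}')

def parse_queue_report(line):
    """Extract (slots_free, entered, exited) from a {qr:..,qi:..,qo:..} report."""
    m = QR_PATTERN.search(line)
    if not m:
        return None
    return tuple(int(g) for g in m.groups())

def may_send(slots_free):
    """Simple throttle policy: stream another command only while enough slots remain."""
    return slots_free >= TARGET_FREE

report = parse_queue_report("{qr:24,qi:0,qo:1}")  # from the log above
```

The real sender would call `parse_queue_report()` on every received line and gate its writer thread on `may_send()`.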

Tony

ma...@makr.zone

unread,
May 24, 2020, 9:05:21 AM5/24/20
to ope...@googlegroups.com
Am 23.05.2020 um 18:48 schrieb Tony Luken:
Could the driver have the capability to provide an incrementing line number to each G code command and then use that to disambiguate the responses?

Yes, that sounds promising. Can the same number be reused in a cycle if an overflow of the 99999 happens?

> BTW - you may have already answered this somewhere and I just missed it but if you are going to stream commands to the controller without waiting for each to complete, how do you know that the controller can accept another command without overflowing its internal buffer?

I really hope that each controller has proper serial flow control that blocks on the internal command queue. It never even occurred to me that this could be missing. I think it is a common pattern to just blindly stream Gcode to a 3D printer, so I'm quite confident.

But having said that, don't overestimate the potential. I need the asynchronous streaming for bursts of fine-grained interpolated motion. There will still be frequent interlocks between those bursts. It's not that OpenPnP can send the whole job and then sit back and wait. :-)

We need an interlock in all vision operations and also need some kind of interlock on vacuum valve switching and sensing. That's because the vacuum reading Gcode is asynchronous, i.e. the command returns values immediately and in parallel to any ongoing motion. So we need to interlock on the valve switch to know on the OpenPnP side when that happened. The valve switch in turn is not asynchronous, i.e. the controller will by itself wait for motion to complete and then switch the valve. At least that's the behavior on Smoothie.

That last part is unfortunate btw. as it spoils the possibility to do "on-the-fly" (while moving) Part-Off checking. :-(

BEGIN FUTURE IDEAS

Maybe I'll find a way to trick Smoothie and other controllers or maybe I'll even hack it to do asynchronous switching.

I plan to add some kind of "soft wait" that allows waiting for some event without machine still-stand. The new writer thread on the driver would periodically issue position report commands (not needed on TinyG). The position reports would be monitored continuously and a "soft wait" would be released as soon as some preset positional predicate turns true (a Java Function). The machine could still be in full motion in the background, but we could trigger on-the-fly actions on the OpenPnP side - like checking the vacuum level as soon as some Z height is reached after a pick, etc.
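A minimal sketch of such a "soft wait", with all class and field names invented for illustration (in OpenPnP this would presumably live next to the driver's reader thread, and the predicate would be the Java Function mentioned above):

```python
import threading

class SoftWait:
    """Block a worker thread until a positional predicate on incoming
    position reports turns true - the machine keeps moving throughout."""

    def __init__(self, predicate):
        self.predicate = predicate        # e.g. lambda pos: pos["z"] >= safe_z
        self.event = threading.Event()

    def on_position_report(self, pos):
        # Called from the reader thread for every {sr:...} style report.
        if not self.event.is_set() and self.predicate(pos):
            self.event.set()

    def wait(self, timeout=None):
        return self.event.wait(timeout)

# Release the wait once Z has reached 10.0 (illustrative Safe Z value).
wait = SoftWait(lambda pos: pos["z"] >= 10.0)
wait.on_position_report({"z": 4.0})    # still below - not released
wait.on_position_report({"z": 10.5})   # predicate true - released
```

A job thread would call `wait.wait()` and then, for example, read the vacuum level while motion continues in the background.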

The next idea is to add a "Monitoring" switch on the Actuator. The Actuator will no longer be read on demand, but continually monitored. The Gcode writer thread would issue the ACTUATOR_READ_COMMAND periodically and, as soon as the report comes back, match it against all the monitoring Actuator REGEXes and store the measurement. When OpenPnP code reads the Actuator, it will just immediately return the stored last measurement. Often multiple Actuators also share the same ACTUATOR_READ_COMMAND, as one Gcode command reports multiple measurements, so a lot of round-trip delays can be saved. E.g. @doppelgrau's screenshots showed ~15ms round-trips on vacuum reads, so this can add up to significant savings.

The next step would be to dynamically set Alarm ranges on Actuators. After the Pick, i.e. as soon as the soft wait on SafeZ is triggered, an Alarm range on the vacuum level could be set. The vacuum level would then be continuously monitored during the whole cycle until the alarm range is removed again in the Place step. An Alarm status on the Actuator would be set by background monitoring and evaluated on the next JobProcessor Step. So even a temporary dip could trigger e.g. a Part-Off alarm. I guess this could be handled like a "Deferred Exception" in a universal way i.e. the handling of these Alarms on the JobProcessor Step side could be generic. This way additional Actuators could monitor other machine parameters and raise Alarms (like the pump reservoir level or perhaps a stepper temperature, etc.).

END FUTURE IDEAS
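To illustrate the monitoring-plus-alarm-range idea above, here is a minimal sketch. None of these class or method names exist in OpenPnP; the vacuum numbers are invented, and the point is only the behavior: cached reads with no round-trip, and an alarm that latches even on a temporary dip:

```python
class MonitoredActuator:
    def __init__(self, name):
        self.name = name
        self.last_value = None
        self.alarm_range = None   # (low, high) or None while not armed
        self.alarm = None         # latched "deferred exception" text

    def on_report(self, value):
        """Called by the background reader thread for each matching report."""
        self.last_value = value
        if self.alarm_range is not None:
            low, high = self.alarm_range
            if not (low <= value <= high) and self.alarm is None:
                # Even a temporary dip latches the alarm until handled
                # by the next JobProcessor step.
                self.alarm = "%s out of range: %s" % (self.name, value)

    def read(self):
        """Reads return the cached value immediately: no serial round-trip."""
        return self.last_value

vac = MonitoredActuator("Vacuum")
vac.alarm_range = (-80.0, -40.0)          # armed after the pick, cleared at place
for reading in (-60.0, -35.0, -58.0):     # middle reading dips out of range
    vac.on_report(reading)
```

After the loop the cached value is healthy again, but the alarm stays latched - which is exactly what would let a Part-Off be detected despite a transient dip.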

TinyG's line number position reports now seem to enable us to know exactly where it is. I guess this even beats Smoothie's capabilities, where I will need to count the reports or even match up coordinates with the motion plan.

Thanks Tony.

_Mark


Jarosław Karwik

unread,
May 24, 2020, 10:40:21 AM5/24/20
to ope...@googlegroups.com
I will have soon my new controller operational.
Could you specify what you would like to get ? Full, maximum set ?
I planned caching gcodes - I have so much memory that the capacity would be hundreds of lines.

--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.

bert shivaan

unread,
May 24, 2020, 11:28:54 AM5/24/20
to OpenPnP
Somehow I have missed why we need to cache movement commands? By the very nature of pick and place, it seems like control decisions need to be made at the completion of every move?

As for the asynchronous part such as the vac being turned on at the end of the move, isn't that solved easily by sending that command before the move?
Sorry if I am simplifying this too much and missing some intricacies you have found and are solving for. Just trying to catch up here.

I too am in the process of custom firmware for a custom control board, so am quite interested in this to be sure.

Jaroslaw or Mark, are either of you interested in sharing how to calculate the S curve stuff for accel and decel? Not in this thread but maybe another or in PM?


Jarosław Karwik

unread,
May 24, 2020, 12:04:05 PM5/24/20
to OpenPnP
Well,

I planned to "cache" commands in such a way that I send "OK" to all move commands and execute them in the background.
It is possible as long as you do not need vacuum readings or camera pictures - all of this could be detected either directly (vacuum read) or with a little help from an actuator (you could even put it in the pipeline). The plan was to fool OpenPnP, as such an implementation would not require support there and would still be very generic and portable.

In reality you would not need more than a few commands ahead to get rid of the communication delays.

We would need something portable - I could e.g. implement the whole "pick" and "place" functionality as single commands in my controller - but that would not be portable.



PS. See here about S curves ( PM me if  you have questions) https://groups.google.com/forum/#!msg/openpnp/W7FLuodpUNA/09OXDHQxAgAJ


Tony Luken

unread,
May 24, 2020, 1:09:34 PM5/24/20
to OpenPnP
Mark,

Yes, I tested reusing line numbers and it seemed to work OK, so I don't think there is any issue with wrapping the numbers at some point.

WRT flow control - I've been searching through the TinyG documentation and this is what I've gleaned so far:
TinyG has an input serial buffer that can hold about 8 lines of ASCII text (~1000 characters). The serial buffer does support hardware flow control to prevent its overflow. As TinyG parses the content of the serial buffer, the flow splits into two different paths. Synchronous commands like G code go into the planner queue for in-order processing, while asynchronous commands, such as a feed hold, bypass the planner and go directly to the motion controller. In order to prevent the asynchronous commands from having a long delay before they take effect, it is recommended to keep the serial buffer only about half full (this is done by looking for the >ok response for each line). In addition, in order to allow the planner to make optimal planning decisions, it is desirable to keep about 12 G code commands in the planner buffer at any one time (this is done by looking at the queue reports). It's not entirely clear from the documentation, but I believe if the planner queue fills up (I think it has room for either 28 or 32 commands), then the serial buffer will begin to back up.

Now all that said, I think optimization of moves by the (TinyG) planner is much more of a concern for CNC machining and 3D printing than it is for pick-and-place so I don't think we need to worry about the planner queue at all.  The only real concern then would be if we ever plan to send asynchronous commands (like a feed hold) and don't want to have potentially large delays before they take effect.  I suspect that many of the modern 32-bit based controllers pre-process the serial stream to pull-out the asynchronous commands before they ever go into a buffer so they don't even have that concern.

Tony

ma...@makr.zone

unread,
May 24, 2020, 1:59:17 PM5/24/20
to ope...@googlegroups.com

> Full, maximum set ?

Like on Christmas? 8-)

OK, here's my list.

Integral planner

Most open source firmwares I looked at handle motion as something special. Other commands are not planned the same way. This stands in the way of important capabilities.

Most importantly, being synchronous should not mean "bring the machine to a full still-stand then do it". It should just mean "as soon as that last (movement) command is done, do it". If more movement commands are queued after the synchronous command, then the machine should keep the speed up.

E.g.

G0 X100

M10 ; switch valve on

G0 X200

M11 ; switch valve off

G0 X300

should move 100mm then on-the-fly switch the valve on, then without decelerating move on to 200mm, then on-the-fly-switch the valve off then move the rest to 300mm. Acceleration/Deceleration should only happen at the beginning and end of the sequence.

If I do that on the Smoothie, it will bring the machine to a full still-stand after 100mm then switch the valve, then accelerate again etc. Sloooow!
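The difference can be sketched with a toy "integral planner" that splits the command stream into one continuous motion path plus on-the-fly triggers fired when the preceding move completes, instead of stopping for each synchronous command. This is purely illustrative - not any real firmware's internals:

```python
def plan(commands):
    """Split a Gcode stream into a continuous motion path plus
    (path_index, action) triggers, fired on the fly when the move
    queued before each action finishes - no still-stand in between."""
    path, triggers = [], []
    for cmd in commands:
        if cmd.startswith("G0"):
            path.append(cmd)
        else:
            # fire this action as soon as the moves queued so far are done
            triggers.append((len(path), cmd))
    return path, triggers

# The valve example from above: the three moves form one blended path,
# with the valve switches attached at positions 1 and 2 along it.
path, triggers = plan(["G0 X100", "M10", "G0 X200", "M11", "G0 X300"])
```

With this structure the planner can accelerate once at the start and decelerate once at the end, exactly the behavior asked for above.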

Still-stand command

Having said that, a proper M400 command is still an absolute MUST. So if you wanted the still-stand behavior, you should be able to add a working M400 command.

Synchronous/Asynchronous operation

Ideally, switching and other "actuator" commands should be provided in an asynchronous variant (and vice versa).

E.g. a separate M10.1 variant would switch immediately, without waiting for the queue.

Unique queue acknowledgments

It seems it should be an obvious feature but I haven't found anything on open source controllers.

A synchronous "echo" command should report a unique string back to OpenPnP as soon as the queue reaches this command. So on the OpenPnP side we know exactly when that happens. Again it should be done on-the-fly without stopping motion, if it is inserted between motion commands.

Uncoordinated moves

A controller should implement G0 vs. G1 properly i.e. allow axes to move in uncoordinated fashion, if we want it. Uncoordinated moves can speed things up.

Ideally, and in a special variant only, this could even work across multiple G0 commands. So if you tell it this:

G0 X100 Y200

G0.1 B180

Then X, Y move in uncoordinated "hockey stick" fashion for best speed. Plus, the move in B would start before the X, Y move is complete. This would be worth a lot when you want to move other stuff on the machine, like feeder actuators, conveyor belts etc. As one example, consider this separated "drag" feeder on the SmallSMT machines:

https://youtu.be/Ip4dyV0exlM

Support path blending

This is probably a tall order but ideally you would support what LinuxCNC can do with the G64 command i.e. "cut corners" in smoothed-out motion to speed up things:

http://www.linuxcnc.org/docs/2.6/html/gcode/gcode.html#sec:G64

https://youtu.be/csFE-4XwaYE


That's what I can think of at the moment.

:-)

_Mark

ma...@makr.zone

unread,
May 24, 2020, 2:09:05 PM5/24/20
to ope...@googlegroups.com

Bert,

I think if you read the beginning of this thread, things will become clear. Then perhaps also re-read that last post about the FUTURE IDEAS... This will shave off significant time if done right.

Otherwise ask again.

> by the very nature of pick and place, it seems like control decisions need to be made at the completion of every move?

I disagree. If you don't see it after re-reading the beginning of this thread, we can discuss it more specifically.

_Mark


Jarosław Karwik

unread,
May 24, 2020, 3:46:47 PM5/24/20
to OpenPnP
Lets make a list:
1) Integral planner 
        Not really a big deal; however, by definition it would work only on co-linear moves.
2) Still-stand command (M400)
        No problem.
3)Synchronous/Asynchronous operation
       Not sure when you would use it, but no problem either.
4) Unique queue acknowledgments
      Still no problem.
5) Uncoordinated moves
      Well, it actually takes some work to make them coordinated, so this is the simpler case.
      Even this crossing G command might be possible, although please remember that I plan only a 2/3 axis controller (X/Y with encoders, possible coordinated moves, and independent Z - more axes only with stacking controllers). By design it keeps X/Y control and head control independent.
6) Support path blending
     I have actually implemented something like this for CNC - when CAM was approximating curves with linear paths.
     When would you expect such paths in a pick&place scenario? Here you have well-defined end points - usually connected by straight lines only (and not many corners).
     I have not seen a single arc path (even with linear approximation) here.

ma...@makr.zone

unread,
May 24, 2020, 4:17:41 PM5/24/20
to ope...@googlegroups.com
This is for Tony and Bert and Jarosław and everybody interested :-)

Am 24.05.2020 um 19:09 schrieb Tony Luken:
optimization of moves by the (TinyG) planner is much more of a concern for CNC machining and 3D printing than it is for pick-and-place so I don't think we need to worry about the planner queue at all.

Am 24.05.2020 um 18:04 schrieb Jarosław Karwik:
PS. See here about S curves ( PM me if  you have questions) https://groups.google.com/forum/#!msg/openpnp/W7FLuodpUNA/09OXDHQxAgAJ

Am 24.05.2020 um 17:28 schrieb bert shivaan:
Jaroslaw or Mark, are either of you interested in sharing how to calculate the S curve stuff for accel and decel? Not in this thread but maybe another or in PM?

I disagree with some of your assumptions. :-)

PnP is a high speed application and most (all?) open source firmwares are optimized for slow applications like milling or 3D-printing. As long as controller architects keep thinking inside that milling/3D-printing box, I don't expect much improvement.

That "inside the box" thinking is also the reason I'll try to mend it from outside the controllers, i.e. try to find a solution for all (or let's say most) controllers. I've talked about that motivation early in this thread. There are other ideas that drive this that don't make sense to discuss yet. :)

And no, S-Curves will not do! They too are optimized only for slow operation like milling and 3D printing. They use the same trapezoid planner as a constant-acceleration controller. Then - in order to limit jerk - they replace the ramp with a rigid S. The idea is just to have the same average acceleration in the S as the original ramp would have had.

When you have short or slow moves, as in 3D printing or milling, the acceleration phase is (almost) always short, i.e. the few positioning and tool-changing moves between the long and slow milling and extrusion moves don't matter in the sum. So having a rigid S is not so bad.

But on PnP we have many long and fast moves. S-Curves are no good. This illustration should definitely show why:

The problem: The controller reaches the maximum acceleration just for a moment, in the middle of "the S". So on a PnP application you would tune it to the maximum acceleration the motors can reliably take. But when the S is scaled up on long moves it takes ages to reach the middle point. You can't compress the S because then you would exceed the maximum acceleration the motors can take.

What we want is an "Integration Symbol" shaped curve, not an "S" shaped curve. In the middle there needs to be a long constant-acceleration segment that exploits the limits of the motors for the longest time possible, while still controlling the jerk at the beginning and end of the curve. That's true 3rd order motion control.
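To put rough numbers on the difference, here is a small sketch comparing the ramp times to the same end velocity. The jerk, acceleration and velocity values are illustrative, not tuned for any real machine; the pure-S comparison uses the average-acceleration argument made above (peak reached only for an instant, so the average is half the peak):

```python
J = 20000.0   # max jerk, mm/s^3 (illustrative)
A = 2000.0    # max acceleration, mm/s^2 (illustrative)
V = 700.0     # target velocity, mm/s (illustrative)

# "Integration Symbol" (3rd-order) ramp: jerk up to A, hold A, jerk down.
t_j = A / J                  # time of each jerk-limited phase
v_jerk = A * t_j             # velocity gained in the two jerk phases combined
t_a = (V - v_jerk) / A       # constant-acceleration (linear) phase
t_third_order = 2 * t_j + t_a

# Rigid S-curve to the same velocity, touching A only at its midpoint:
# average acceleration is A/2, so the ramp takes twice as long.
t_s_curve = 2 * V / A
```

With these numbers the 3rd-order ramp reaches 700 mm/s in 0.45 s versus 0.7 s for the rigid S - and the gap grows with the target velocity, which is why long fast PnP moves suffer most.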


That's what I'm doing with my "simulated jerk control". I try to simulate what the controllers should actually do, by using fine-grained interpolated constant acceleration motion. One simple example I demonstrated with the "simulated jerk control" in the opening post of this thread:

https://makr.zone/wp-content/uploads/2020/04/SimulatedJerkControl.mp4

That's a proof of concept, simply calculated in an Excel sheet and sent with Pronterface.

Now I need to be able to do that from OpenPnP. For that I need fast sending, buffering and queuing capabilities i.e. fully asynchronous operation from OpenPnP.

Of course, ideally, a controller would do the true 3rd order control autonomously. But as I said before, there are other ideas in my head that definitely go beyond the capabilities of an MCU and a simple G-Code dialect. So I know that I will likely need the fine-grained motion path sending capability anyway. Even if one of you guys could implement the true 3rd order motion control on their controller and could convince me and the majority of users to rip their old controllers out of their machines and implant yours ;-).

I hope this clarifies things a bit. :-)

... on the other hand, if you can provide true 3rd order Bézier motion paths with segment for segment 3rd order limits control, then I might really be tempted! 8-P

_Mark


Jarosław Karwik

unread,
May 24, 2020, 4:22:59 PM5/24/20
to ope...@googlegroups.com
Hey, 
I agree - that is why I have implemented S curves with a fixed linear acceleration phase - see my previous link. It gives the shape you wanted.


ma...@makr.zone

unread,
May 24, 2020, 4:31:26 PM5/24/20
to ope...@googlegroups.com

Hi Jarosław

So why are you calling them "S-curves"?

Sorry, I somehow only looked at the last of your graphs.

s3.jpg

But the first one is good!

s1.jpg

Can you change acceleration in the middle i.e. have acceleration != 0 at the nodes?

Can you do Bézier?

Will this be true OpenSource? (Sorry I have a vague memory of this being asked before but I don't remember the answer).

_Mark

ma...@makr.zone

unread,
May 24, 2020, 4:43:29 PM5/24/20
to ope...@googlegroups.com

Adding this for reference:

https://makr.zone/Why_S-Curves_are_no_good.svg

_Mark

Jarosław Karwik

unread,
May 24, 2020, 5:01:38 PM5/24/20
to ope...@googlegroups.com
Because they are S curves ;-)
Just that they are kind of split with a constant acceleration phase, once the jerk calculation reaches the max allowed acceleration.

There was a link on my thread:

Enjoy the math ;-)

This by itself is not enough - you have seen my pictures; sometimes there is not enough time to reach full speed, so the shape will be simpler.

My project is open source (both SW and HW). I will have the first proto running next week.




Tony Luken

unread,
May 24, 2020, 5:10:15 PM5/24/20
to OpenPnP
Mark,

Yep, I think I agree with all that. I wasn't trying to say that what the motion control planner does isn't important to pick-and-place operation, but rather that the "optimizations" the TinyG planner actually does based on the commands in its queue are not of much concern, because there is not much we can do about them anyway (unless someone wants to rewrite the TinyG firmware). It may not be optimal in the sense of getting from one place to another in the shortest possible time, but it should still work.

Tony

Jason von Nieda

unread,
May 24, 2020, 5:14:04 PM5/24/20
to ope...@googlegroups.com

bert shivaan

unread,
May 24, 2020, 7:25:41 PM5/24/20
to OpenPnP
Ok, so I think I have it - let me summarize if I may:

You want to replace the accel/decel function implemented now in the controller with sending commands from OpenPnP instead. This would function similarly to LinuxCNC sending steps, correct?

You want to be able to send commands (referred to as M-codes here) during motion so as to for instance turn on vac while X is moving.

To achieve the first, you need to be able to cache the commands ahead of execution, or the controller will starve and all improvement will be lost while it waits for the next micro-position command.
To know what step the controller is on, you need something better than 'OK' as a response.
If you can get all this to work, you have some other bigger plans in mind after that?

How did I do?

bert shivaan

unread,
May 24, 2020, 7:54:04 PM5/24/20
to OpenPnP
hey Mark,

"That's proof of concept, simply calculated in an Excel sheet and sent with Pronterface."

Did you share this spreadsheet somewhere, or are you willing to share it?


bert shivaan

unread,
May 24, 2020, 8:07:50 PM5/24/20
to OpenPnP
Assuming my summary above is correct: if the control boards behaved properly WRT accel curves, you would not need to cache?
After all, in a pick and place, something is done at the end of every movement, I think? For instance there is no need to move X 10mm, then move it 10mm more - just move it 20mm in the first place. UNLESS you do something at the end of the first 10mm.
I think that is what you are saying is needed for "on the fly" stuff? I like the concept that things can happen while an axis is moving.
To this I would propose that the controller be able to "execute" some things while axes are moving. I.e., send
G0X20
then send command to turn on pump
pump turns on while X is somewhere between start and end of move.
This would require a set of "while axis in motion M-codes" for instance. maybe they have a new G code with them also like G103

Then no need to cache.

Next is if you want that pump to turn on at a specific spot, or better strobe a light when X passes 12mm.
This too could/should be handled in the controller by sending something like  G103X12M731.1

Where G103 means do Mxxx now or at optional parameter
in this case the parameter is X12
M731.1 is turn on blow valve

I actually really like this idea and think I will add it to my control.
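Bert's G103 idea could be sketched roughly like this (a minimal proof of concept, not real firmware; the class name and dispatch logic are assumptions): the controller keeps a list of pending triggers and fires each action the first time the named axis crosses its threshold, without interrupting step generation.

```python
# Hypothetical sketch of the proposed "G103X12M731.1" behavior: fire an
# auxiliary action (here: turn on the blow valve) the first time an axis
# crosses a given coordinate during a move. All names are made up for
# illustration, not taken from any real firmware.

class MotionTrigger:
    def __init__(self, axis, threshold, action):
        self.axis = axis            # e.g. "X"
        self.threshold = threshold  # e.g. 12.0 (mm)
        self.action = action        # callable to run, e.g. valve on
        self.fired = False

    def check(self, positions):
        # Would be called from the controller's step-generation loop
        # on every position update.
        if not self.fired and positions[self.axis] >= self.threshold:
            self.action()
            self.fired = True

# Demo: a simulated G0X20 move with a trigger at X=12.
events = []
trigger = MotionTrigger("X", 12.0, lambda: events.append("blow valve on"))
for step in range(21):                  # X moves 0..20 mm in 1 mm steps
    trigger.check({"X": float(step)})
print(events)  # the action fired exactly once, mid-move
```

The `fired` flag makes the trigger one-shot, so the action runs once even though the check is evaluated on every position update.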

ma...@makr.zone

unread,
May 25, 2020, 4:40:56 AM
to ope...@googlegroups.com

Hi Jason

Yes, but it seems they have only implemented the

Section 5.1 Ideal S-Curve

and not

Section 5.5 S-Curve with Linear Period

(and I didn't know it is still called an "S-Curve" with the linear period).

The TinyG comment you linked says:

A full trapezoid is divided into 5 periods. Periods 1 and 2 are the first and second halves of the acceleration ramp (the concave and convex parts of the S-curve in the "head"). Periods 3 and 4 are the first and second parts of the deceleration ramp (the tail). There is also a period for the constant-velocity plateau of the trapezoid (the body). The S-Curve with Linear Period has 7 periods.

Jarosław seems to have the first real Open Source implementation of the full 7-period curve, AFAIK*. It does sound promising.
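For reference, the durations of the 7 periods can be sketched in a few lines, assuming a move long enough that both the acceleration and velocity limits are actually reached (shorter moves degenerate to fewer phases, which this sketch does not handle):

```python
def s_curve_phases(distance, vmax, amax, jmax):
    """Durations of the 7 periods of an S-curve with linear period, for a
    move long enough to reach both amax and vmax (assumed; shorter moves
    drop phases). Order: jerk+, const accel, jerk-, cruise, jerk-, const
    decel, jerk+."""
    tj = amax / jmax              # time to ramp acceleration up or down
    ta = vmax / amax - tj         # constant-acceleration time (>= 0 assumed)
    t_acc = 2 * tj + ta           # total time from 0 to vmax
    d_acc = vmax * t_acc / 2.0    # distance covered while accelerating
                                  # (average velocity is vmax/2 by symmetry)
    tc = (distance - 2 * d_acc) / vmax   # constant-velocity plateau
    return [tj, ta, tj, tc, tj, ta, tj]

# Example: 1000 mm move, vmax 200 mm/s, amax 1000 mm/s^2, jmax 10000 mm/s^3
phases = s_curve_phases(1000, 200, 1000, 10000)
print(phases)          # seven durations in seconds
print(sum(phases))     # total move time
```

With the example numbers, each jerk-limited ramp takes 0.1 s, the constant-acceleration periods take 0.1 s each, and the cruise phase fills the remaining distance.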

One thing I'm still missing is that the entry and exit velocity and acceleration can be != 0.

That's something I want to simulate as well. It comes into play in path blending, i.e. if you're accelerating around smoothed-out corners. I believe that is a significant time saver for PnP's frequent moveToLocationAtSafeZ() pattern, i.e. Z <-> X/Y corners.

_Mark

* my "research" in the matter is very shallow and informal. I just quickly googled and looked at source codes of firmwares that according to my search claimed to have 3rd order motion control i.e. TinyG, g2, and Marlin.


Am 24.05.2020 um 23:13 schrieb Jason von Nieda:

Jason


On Sun, May 24, 2020 at 4:01 PM Jarosław Karwik <jarosla...@gmail.com> wrote:
Because they are S curves ;-)
Just that they are kind of split with constant acceleration phase once jerk calculation reaches max allowed acceleration. 

There was link on my thread:

Enjoy the math ;-)

This by itself is not enough - you have seen my pictures; sometimes there is not enough time to reach full speed, so the shape will be simpler.

My project is open source (both SW and HW). I will have the first proto running next week.


..

ma...@makr.zone

unread,
May 25, 2020, 4:43:01 AM
to ope...@googlegroups.com

(Almost) perfect summary, Bert.

:-)

Except for the LinuxCNC part. In my application, the controller is still needed to control the constant acceleration ramps for the segments and to generate the steps in real-time. I'm only giving it the "envelope" of the motion, and I give it up-front, so I'm not concerned with (difficult!) real-time issues. LinuxCNC, on the other hand, is fully involved, including in real-time, i.e. there is no separate controller, last time I checked (they have FPGA cards, but as far as I understood they only do the last bit of step generation).

_Mark

ma...@makr.zone

unread,
May 25, 2020, 5:12:36 AM
to ope...@googlegroups.com

Hi bert

The spreadsheet is not a real move calculation. There is no useful math in it. It just applies fixed phases of constant jerk and generates G-code for this simple test. I.e., there is no pre-determined move length; it just lands wherever the 3rd-order integration lands, so it is not generally useful. :-)

The purpose was not the math, but to test Smoothie's ability to receive and parse G-code and plan the motion at sufficient speed. A 20ms simulation step seemed to be no problem. I haven't tried smaller values, because this results in tiny first steps (it depends on the jerk limit), so that seemed fine enough. The example was optimized for making the video, not for the maximum values of the stepper.

See attached.

I haven't checked with a scope or anything, so I don't really know if the sent motion envelope was completely (ful-)filled by the controller. The motion just "feels" right, and the anti-jerk/anti-vibration effect (that is the goal, after all) is obvious in the video, and even more obvious in real life when you see and hear it in person.

In OpenPnP I plan to do this not using "formula" math but using a numerical solver, which also allows me to apply non-linear constraints. Many ideas :)
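The fixed-phase, fixed-step approach described above could look roughly like this (a sketch assumed from the description, not the actual spreadsheet): integrate constant jerk in 20 ms steps and emit one G1 segment per step, each with the segment's average feedrate.

```python
# Sketch of the proof-of-concept approach (assumed from the description):
# integrate fixed constant-jerk phases in 20 ms steps and emit one G1
# segment per step. The controller then runs each tiny segment with its
# own feedrate, approximating the S-curve envelope.

DT = 0.020  # simulation step in seconds (the 20 ms mentioned above)

def gcode_envelope(jerk_phases):
    """jerk_phases: list of (jerk mm/s^3, duration s) pairs. Returns G-code
    lines approximating the motion envelope, plus the final (x, v, a).
    Like the spreadsheet, the end position is simply wherever the
    3rd-order integration lands."""
    x = v = a = 0.0
    lines = []
    for jerk, duration in jerk_phases:
        steps = round(duration / DT)
        for _ in range(steps):
            x0 = x
            # exact 3rd-order integration over one constant-jerk step
            x += v * DT + a * DT ** 2 / 2 + jerk * DT ** 3 / 6
            v += a * DT + jerk * DT ** 2 / 2
            a += jerk * DT
            feed = (x - x0) / DT * 60.0  # segment feedrate in mm/min
            lines.append("G1 X%.4f F%.1f" % (x, feed))
    return lines, (x, v, a)

# jerk up for 100 ms, hold acceleration, jerk back down: acceleration
# returns to 0 and the axis is left moving at constant velocity
lines, (x, v, a) = gcode_envelope([(1000.0, 0.1), (0.0, 0.1), (-1000.0, 0.1)])
print(len(lines), "segments, final velocity", v, "mm/s, accel", a)
```

Because every segment is tiny and already pre-shaped, the controller only has to run its ordinary constant-acceleration planner over the queue; the jerk limiting lives entirely in the host-side envelope.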

_Mark

simulated s-curve.gcode
Simulated S-Curves v5.xlsx

bert shivaan

unread,
May 25, 2020, 5:41:11 AM
to OpenPnP
Yea, at least I get the point now :)
So I was being me about LinuxCNC - only conveying 10% of what I mean. I understand what you said above. My comparison was really just that LinuxCNC sends out the steps at the speed it wants to move the motor. The controller has NO IDEA about the motion.
You are wanting to send out snippets of the move so the controller does not need to know anything about the motion, just execute the snippet at full speed.

As for the corner cutting - I completely HATE this idea. This is something that happened when folks first started to create machines that could do HSM (high speed machining). But there is no real use case for it in the machining world. Sometimes it can be helpful on roughing passes, I suppose.

I don't see any instance where cutting corners in X/Y would be helpful. That said, if it can, well, no harm there.
Cutting corners in X/Z or Y/Z or even X/Y/Z just kind of negates the safe Z concept. So what you are really saying here is that safe Z is lower than where we said it is: it is safe to start moving X/Y when Z hits some lower point. So at that point we start moving X/Y while Z is still moving up. If that is the case, then just send that move sequence. I think Z is a short move anyway, so no big benefit from not letting it stop between moves.
I do agree having the next move cached so it can start right away would be awesome!
I do think that openPNP should be 1 move ahead to make the machine run smoother.

I also agree that the "formula" does not need to be crunched. There should be a way to get the Y values based on the X value. This is how industrial machines generate arcs: they DO NOT do trig on the fly.

ma...@makr.zone

unread,
May 25, 2020, 6:07:08 AM
to ope...@googlegroups.com

>Cutting corners in X/Z or Y/Z or even X/Y/Z just kind of negates the safe Z concept. So what you are really saying here is that safe Z is lower than where we said it is. It is safe to start moving X/Y when Z hits some point lower. So at that point we start moving X/Y while Z is still moving up. If that is the case then just send that move sequence. I think Z is a short move anyway so no big benefit there from not letting it stop between moves.

My idea is to add a "Safe Z Head-Room Zone" rather than a single Z value. This means any optimization will be restricted to that Zone.

In effect, we do not need to wait for Z to decelerate. It will go at full speed to Safe Z and as soon as it has passed the threshold, the controller can start to accelerate in X/Y. So Z will overshoot but if your machine has the Z head-room, there is no harm, au contraire it will be a much smoother ride for the part on the nozzle i.e. it can be done at greater speed even for large or heavy parts. The same on re-entry: Z will be higher than Safe Z and start to accelerate down before X and Y have reached the entry point. We can also wrap the backlash-compensation into that re-entry trajectory. Even more time saved.

On a shared Z two-nozzle machine, this means the nozzles will sometimes limp above their respective Safe Z (i.e. are not leveled). No harm in that either. If you value esthetics over speed, you could always leave the Headroom Zone at 0.0 :-).
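The headroom idea boils down to a simple braking-distance check (a sketch of the concept only; function names and numbers are made up): X/Y may start accelerating as soon as Z has passed Safe Z, provided the Z velocity at the crossing can be absorbed inside the headroom zone.

```python
# Sketch of the "Safe Z head-room zone" concept: Z may cross Safe Z at
# speed and overshoot into the headroom while decelerating, as long as
# its braking distance fits inside the zone. Names and numbers are
# illustrative assumptions, not OpenPnP code.

def braking_distance(v, decel):
    # distance needed to stop from velocity v at constant deceleration
    return v * v / (2.0 * decel)

def max_safe_z_crossing_speed(headroom, decel):
    # the fastest Z may still be moving when it crosses Safe Z, such that
    # the overshoot stays inside the headroom: v = sqrt(2 * a * d)
    return (2.0 * decel * headroom) ** 0.5

# Example (made-up numbers): 10 mm headroom, Z decelerates at 2000 mm/s^2
v_cross = max_safe_z_crossing_speed(10.0, 2000.0)
print(v_cross)                               # 200.0 mm/s
print(braking_distance(v_cross, 2000.0))     # 10.0 mm, the full headroom
```

So with 10 mm of headroom in this example, Z never has to decelerate before Safe Z at all: it can cross at up to 200 mm/s and brake entirely inside the zone while X/Y is already moving.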

I think that video speaks for itself, although the "rounding" would of course be much less pronounced in real applications:

https://youtu.be/csFE-4XwaYE

Do you ride a bicycle? Try going around 90° corners fast. Observe your intuitively chosen path :-)

_Mark


bert shivaan

unread,
May 25, 2020, 7:32:20 AM
to OpenPnP
Yes, it is mesmerizing to watch.
I FULLY understand the benefit of cutting the corner. Of course we can't ride our bicycle around sharp 90s unless very slow. But my point is, in the pick and place world there is always a lemonade stand at the corner and we want to get a drink. :)

I have zero love for safe Z. I ran a fake job on my first machine that placed 300 components using real Z values. It took over an hour to run. Ran the same job with safe Z set such that my Z stop did not need to move: time was 15 min for the same job. So I would lose 45 mins or more just running my Z stop up and down. My nozzles were pneumatic, so I was still picking and placing parts.

Does OpenPnP allow X/Y movement along with Z now? If not, then clearly it would be great to have that enhancement.
Your safe Z headroom sounds like what I was describing. I.e.:
Z travel is from 0 to -30mm
Actual safe Z is at -20mm
so move z to -20
now move X/Y to target position while moving Z to 0

But the question to me is why bother to move Z to 0?
The only answer I can think of is on a 2 nozzle see-saw head we need to start bringing the other Z down to Z-20


ma...@makr.zone

unread,
May 25, 2020, 8:13:44 AM
to ope...@googlegroups.com

The point is not bringing it to Z0 but just to let it decelerate in the head-room. Ideally it will not use the whole head room.

Imagine two 100m sprint tracks (or whatever they are properly called in English).

One has a tall brick wall at the finishing line. The other has room for the runners to decelerate.

On which track will they do better times?

Or why is this bicyclist taking what seems like a wide detour around the corner to be faster?

https://youtu.be/FP7_fe4bxBA?t=189

_Mark
