Gcode Driver for LinuxCNC


justin White

unread,
Apr 17, 2023, 10:27:01 PM4/17/23
to OpenPnP
My friend is going to help me get a LinuxCNC G-code driver going for OpenPnP. I have OpenPnP and LinuxCNC installed, but I haven't really used OpenPnP yet as I'm sorting out this controller business. I'm going through the OpenPnP docs and came upon a couple of questions.

The jog buttons on the UI are a little confusing, probably because I'm used to LinuxCNC interfaces. From what I gather they emit a MOVE_TO_COMMAND, and the example given is a G0. It's not quite clear how the "UP" button is differentiated from the "DOWN" button, but I assume down just inverts the distance value? Like up would be G0 Y10 and down would be G0 Y-10? In LinuxCNC this is a "jog incremental" command that doesn't need a G-code, and I can make this work as-is in the driver.

Is there any way to alter the behavior of the jog buttons to emit some sort of "jog-stop" command set, or have a "jog-continuous" button that does so? Where the jog buttons issue a command while pressed, then issue another command when released? For example:
Jog "UP" pressed = set jog y 5.0
Jog "UP" released = set jog_stop y

tonyl...@gmail.com

unread,
Apr 17, 2023, 11:04:33 PM4/17/23
to OpenPnP
AFAIK, OpenPnP always expects to operate in absolute mode. In other words, if the machine is sitting at Y = 150 and you want to jog Y by +10, OpenPnP will send G0 Y160 to the controller. The same goes for all other axes as well. Actually, the example you mention is probably out of date; typically G01 would be used instead of G00, but it all depends on what you have set up in your MOVE_TO_COMMAND.
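
In shell-arithmetic terms, the conversion looks like this (illustrative only; the variable names are mine, not OpenPnP's):

```shell
# A relative jog request becomes the absolute move the controller receives.
current_y=150      # machine's current absolute Y position
jog_step=10        # requested jog increment (negative for "down")
target_y=$((current_y + jog_step))
printf 'G0 Y%d\n' "$target_y"   # prints: G0 Y160
```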

>Is there any way to alter the behavior of the jog buttons to....

Not easily, but it's all controlled by software so just about anything is possible given enough programming skill, time, and effort.

Tony

justin White

unread,
Apr 17, 2023, 11:18:13 PM4/17/23
to OpenPnP
As I'm going through it I came to that conclusion, because there seem to be few OpenPnP commands. I see there is a park button that I suspect sends the same MOVE_TO_COMMAND, presumably with preset coordinates, so I can't use jog_incr with LinuxCNC because park wouldn't work. That's no big deal, but I also noticed the speed slider in the UI: G0 does not include a speed, while G1 accepts a feedrate, so the slider must be intended for use with G1.

My plan is to get this working within the confines of what OpenPnP already does. Part of the issue is that I hate the 3D printer firmwares, so I will be making a BOB (breakout board) for a smaller Mesa card. I will make all of this available open source, so maybe those who contribute to OpenPnP will be inclined to make accommodations for some of the conveniences of LinuxCNC.

justin White

unread,
Apr 17, 2023, 11:43:09 PM4/17/23
to OpenPnP
Here's another question as I'm going through it. The docs explain jerk control and acceleration. What is that actually doing in a typical install? Is it passing firmware parameters to the motion controllers? Acceleration is not considered in G-code commands, so I don't quite understand what this is doing. Either way I'd need to turn this kind of thing off, since LinuxCNC handles all of this internally. Can these things be nulled out?

justin White

unread,
Apr 18, 2023, 12:33:09 AM4/18/23
to OpenPnP
Also, the PICK_COMMAND and PLACE_COMMAND are a little confusing. The wiki shows the old usage, which is fairly straightforward, but says it's deprecated in newer versions. The newer version says to use the async driver and assign everything through the UI, as if it's being handled internally. Is it OK to use the old method in the gcode driver, and if not, how should I handle this? For example, the actual commands I need to send to LinuxCNC for the actuator will be something like "set mdi mode", "set mdi m105" to turn an actuator on. I'm just not sure what to do if I'm already manually writing a specific gcode driver for everything else.

mark maker

unread,
Apr 18, 2023, 6:27:41 AM4/18/23
to ope...@googlegroups.com

Hi Justin,

> My friend is going to help me get a LinuxCNC gcode driver going for openPnP.

That's good news. It would really be a nice addition.

However, your series of questions has a certain "trajectory", that I'm not sure aims in the right direction.

Telling you this as the developer who added the whole advanced motion stack to OpenPnP, which includes a 3rd order 7-segment motion planner, plus a simple G-code interpreter for simulated machine testing. I also implemented the Issues & Solutions system that proposes the G-code templates for various G-code controllers. Plus I fixed and extended some of the open-source G-code controller firmwares for use with OpenPnP and to better adhere to the NIST standard. I guess I know a thing or two about this stuff 😎

The assumption is you want to use OpenPnP as intended, i.e. for Pick&Place of electronic components. If that is true, then you should first ask yourself how LinuxCNC can be employed to implement OpenPnP's needs, and not the other way around. 😉 Once you get the basic functionality going, you can always come back a second time, and think about making special LinuxCNC goodies available from OpenPnP.

In other words (and with no offense intended): I think you should get your priorities straight.  😅

As an example: Arrow-Jogging will be used in the (early) setup of the machine to go capture coordinates, but once you've done that, you will hardly ever use it. Yes, jogging implementation in OpenPnP is rudimentary, but I don't consider it a problem, simply because it is adequate for the few uses it has. For finer navigation there is camera view "drag"-jogging, which I consider quite sophisticated, i.e. beats arrow-jogging every time. Ergo, I recommend you don't lose time and sleep over some jogging features.

Much in OpenPnP is computer vision based, i.e. stored coordinates are only approximations, and OpenPnP needs to interactively and sometimes iteratively center in on things it "sees". This means that sophisticated logic inside OpenPnP must interact with the controller, i.e., with LinuxCNC, in a tight and rapid manner. Consequently, G-code is generated on the fly, in reaction to the computer vision results, to adjust the alignment of a part, or center in on a fiducial on a PCB, for instance. This even includes deciding/branching on what happens next, e.g. when a pick failed and the vision size check detects a part is missing or tomb-stoned on the nozzle tip, so it has to discard and retry. This is completely different from almost all other NC applications, where the whole G-code is generated up front, and where "interactiveness" (if there is any at all) is restricted to simple canned cycles like probing.

This also explains your next question: PICK_COMMAND and PLACE_COMMAND are no longer used, because the complexity and interactiveness of these operations has since become much more sophisticated. We optionally use vacuum sensing to establish the required vacuum levels in minimal time. And to check if a part has been successfully picked, and again this decides what happens next (discard and retry a few times, then set error state and skip). We integrate Contact Probing into the pick and place operations, to auto-learn part heights or the precise PCB + solder paste height for instance. All this is also highly configurable and parametric, e.g. by the nozzle tip that is currently loaded.

There isn't a chance we can implement all the required logic in one piece of G-code, hence PICK_COMMAND and PLACE_COMMAND that are simply defined globally or per nozzle are no longer adequate. Instead this is decomposed into the smaller machine objects that are involved: the vacuum valve as an actuator, the vacuum sensing actuator, the contact probing actuator, the pump actuator, etc. OpenPnP Issues & Solutions will create and wire up all of those properly for you.

Issues & Solutions will then go through all this and propose G-code snippets for you. Some are mostly automatic, others you must tell it. Issues & Solutions will query the type of the controller (M115) and adapt to known firmware dialects. For unknown firmwares, you can use the "Generic" profile.

> Accelleration is not considered in Gcode commands so I don't quite understand what this is doing. Either way I'd need to turn this kind of thing off since LinuxCNC handles all of this internally.

OpenPnP is dynamically setting the allowable feed-rate, acceleration and (optionally) jerk limits.

Example:

When you pick & place small passives you want it to be as fast as possible, because you have dozens or hundreds of these per board. So you want to configure your machine to have the highest feed-rate, acceleration and (optionally) jerk limits.

On the other hand, when you pick & place a heavy inductor or a large fine-pitch IC you need to make sure it does not slip on the nozzle tip. So you need to tone down on feed-rate, acceleration and (optionally) jerk limits.

That's why OpenPnP can set a per part speed limit. If you set 50% it will limit the feed-rate to 50%, the acceleration to 25% and the jerk to 12.5%, which due to the linear/quadratic/cubic nature of these limits, will result in exactly half the overall motion speed, i.e. double the motion time.
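
The scaling works out like this (illustrative arithmetic only, not OpenPnP code):

```shell
# A speed setting s scales the three limits linearly, quadratically
# and cubically, which is what halves the overall motion speed.
s=0.5
awk -v s="$s" 'BEGIN {
  printf "feed-rate factor:    %g\n", s      # 0.5
  printf "acceleration factor: %g\n", s*s    # 0.25
  printf "jerk factor:         %g\n", s*s*s  # 0.125
}'
```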

The importance of controlling these limits to make moves "gentler" is illustrated by these experiments:

https://www.youtube.com/watch?v=6SBDApObbz0

Not surprisingly, the actual "jerking around" that creates vibration in a machine or might make a part slip on the nozzle tip comes from the jerk limit, not from the feed-rate limit.

The speed control is also typically used for gentler nozzle tip changer moves and gentler feeder actuation (drag, push-pull, etc.) without shaking parts out.

https://youtu.be/5QcJ2ziIJ14?t=240

The following shows you the principle of simulated 3rd order motion control ("jerk control") on a controller that does not have jerk control but only acceleration control. It is very important for reducing vibrations etc. on affordable, i.e. mechanically not very stiff and heavy, machines. It is based on shaping acceleration limits to mimic jerk-controlled ramps:

https://www.youtube.com/watch?v=cH0SF2D6FhM

A quick search shows that LinuxCNC does not seem to have G-code (M201, M204) to set acceleration limits dynamically. I only found static settings in an .ini file. Without acceleration control, you can still use OpenPnP, but it would be a severe limitation.

Frankly, I'm surprised and I hope I missed something! If this is confirmed, and given the available alternatives of external controllers, I would consider this a no-go.

_Mark

--
You received this message because you are subscribed to the Google Groups "OpenPnP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/ab96c695-3954-473a-b631-c49569ad7f8cn%40googlegroups.com.

justin White

unread,
Apr 18, 2023, 1:09:42 PM4/18/23
to ope...@googlegroups.com
Mark,

Thanks for the detailed response, it's definitely good to have people that are willing to go out of their way.

> However, your series of questions has a certain "trajectory", that I'm not sure aims in the right direction.

Well, I think you took a couple of my questions to mean I was going to lose sleep over those issues, as referenced by some of your other comments. This G-code driver is something I have to get help with, so if there's something I can knock out while I'm getting that help, I may as well. The jog thing was just a question; I didn't intend to make it sound like a dealbreaker.
> ..... open-source G-code controller firmwares for use with OpenPnP and to better adhere to the NIST standard.

As a side note, LinuxCNC is actually an '80s-era NIST grant project originally called EMC. The code is littered with the EMC reference, but the name had to be changed sometime after it went open source due to some kind of claim, hence LinuxCNC.

> This also explains your next question: PICK_COMMAND and PLACE_COMMAND are no longer used, because the complexity and interactiveness of these operations has since become much more sophisticated. We optionally use vacuum sensing to establish the required vacuum levels in minimal time. And to check if a part has been successfully picked, and again this decides what happens next (discard and retry a few times, then set error state and skip). We integrate Contact Probing into the pick and place operations, to auto-learn part heights or the precise PCB + solder paste height for instance. All this is also highly configurable and parametric, e.g. by the nozzle tip that is currently loaded.
>
> There isn't a chance we can implement all the required logic in one piece of G-code, hence PICK_COMMAND and PLACE_COMMAND that are simply defined globally or per nozzle are no longer adequate. Instead this is decomposed into the smaller machine objects that are involved. The vacuum valve as an actuator, the vacuum sensing actuator, the contact probing actuator, the pump actuator, etc. OpenPnP Issues & Solutions will create and wire up all those properly for you.

When I had Marlin firmware on the Octopus board I initially used, I went through the Issues & Solutions wizard; all I was getting was "not supported on this platform", and when I selected the GcodeAsyncDriver it just told me to use the normal GcodeDriver. OpenPnP could read the firmware on the Marlin and I could issue G0 moves, so I know things were connected.

My intention is not to reinvent OpenPnP; I'm trying to understand how to integrate the necessary functions into the gcode driver. I understand that a lot of stuff was pushed into the Issues & Solutions wizard, but as I said above it really wasn't working for me on Marlin, and given that LinuxCNC seems never to have really been explored with OpenPnP, I didn't think it was going to help. The thing about LinuxCNC is that it is extremely versatile, but the linuxcncrsh telnet server has some limitations and it's not widely used. If I understand what this new method of PICK and PLACE is actually trying to shoot out the pipe, I may be able to resolve this. I didn't mean to suggest that the old method was better; it's just defined and can be implemented. If you can explain exactly what the new method is doing and how I can bake it into a driver, I can hopefully come up with something. Otherwise, I assume I can still bake the old method into the GcodeDriver and use it while looking for a way to implement the new method?

> A quick search shows that LinuxCNC does not seem to have G-code (M201, M204) to set acceleration limits dynamically. I only found static settings in an .ini file. Without acceleration control, you can still use OpenPnP, but it would be a severe limitation.
LinuxCNC has user-defined G-codes M100-M199; that is how I am currently setting up ACTUATE_BOOLEAN_COMMAND and ACTUATE_DOUBLE_COMMAND. The ini file is just initial parameters. It's not common to alter acceleration mid-program, but it's easily doable. The M1xx codes are defined by a file; that file can have many instructions, and 2 values can be passed into the M-code with P and Q. The ini says something like:
[JOINT_1]
MAX_VELOCITY = 6
MAX_ACCELERATION = 15
That is actually a linked variable to the HAL file, where the hardware is configured. LinuxCNC's HAL implementation was designed by electrical engineers, so someone like me who knows little about programming sees HAL as basically a wiring diagram, and without having to be concerned with code semantics it's far more useful than some other implementations. Anyway, joints and axes are separate concepts in LinuxCNC, but for all intents and purposes Joint 1 is axis Y, and this is what sets it:
setp   hm2_[HOSTMOT2](BOARD).0.stepgen.01.maxaccel         [JOINT_1]STEPGEN_MAXACCEL
setp   hm2_[HOSTMOT2](BOARD).0.stepgen.01.maxvel           [JOINT_1]STEPGEN_MAXVEL
So if I want to alter acceleration from G-code, I'll call it M110 and write a file for it:
M110
#!/bin/bash
maxaccel=$1   # the P word
maxvel=$2     # the Q word
halcmd setp hm2_[HOSTMOT2](BOARD).0.stepgen.01.maxaccel $maxaccel
halcmd setp hm2_[HOSTMOT2](BOARD).0.stepgen.01.maxvel $maxvel
exit 0
If the GcodeDriver passes M110 P10 Q5, max acceleration is set to 10 and max velocity is set to 5. I can only pass 2 values with one M-code, but I can have it set as many axes to those values as necessary, and you can obviously do some standard bash things in there to derive a different value for something else, but you get the point.

What's not documented is how to read a hardware pin value into linuxcncrsh for like the vacuum sensors but I have an idea about that, should be fairly simple.

I don't want to derail this too much because I'm looking for help with specific issues in this thread, but I have to say: my affinity for LinuxCNC is obvious because it's what I know. With the PnP machine there's another direction I could go, as some others have, and use LinuxCNC straight up. Users have created configs and scripts, some of which have been mentioned on this group. I can't imagine that those efforts are as well implemented as OpenPnP on the vision side, which is why I'm trying this route.

What's true of me and LinuxCNC is probably true of the OpenPnP project itself. Every time LinuxCNC is mentioned around here it's sort of shunned rather than taken seriously. I understand, the bandwidth for development is limited, but if I can get this going, I guarantee any of the developers doing OpenPnP could have done the same in 1/10th of the time. Some basic support for LinuxCNC would probably go a long way, considering the typical motion controller used with LinuxCNC is an FPGA-based card that only performs hard-realtime tasks like step generation and encoder counting, not G-code interpretation or any actual motion planning. I have no idea when these 3D printer style controllers hit the wall with OpenPnP, but a 180 MHz MCU is nowhere near as capable as a 400-600 MHz FPGA running each stepgen as a parallel soft CPU. The FPGA is the magic bullet with LinuxCNC for good reason.

I'm not suggesting OpenPnP should go out of its way to do things the "LinuxCNC way", but maybe just give some serious consideration to a GcodeDriver done the way I'm trying to do it. Everything you're trying to do can be done without really altering OpenPnP itself; it's just a matter of how it's exposed from LinuxCNC. Once this stuff is figured out, it's done, and anyone can use it.




mark maker

unread,
Apr 18, 2023, 2:42:16 PM4/18/23
to ope...@googlegroups.com

> Everytime LinuxCNC is mentioned around here it's sort of shunned off rather than taken seriously.

I beg to differ. I tried to help multiple times now, but then never heard back. There are hurdles to be taken seriously, that's all I'm saying.

Regarding your bash file, setting the acceleration:

Be aware that these need to be settable fully "on the fly", during ongoing motion. They must only be effective for the motion commands following later, not the ones already executing, and not the ones already in the queue. I'm not saying it won't work the way you said, just that you should double check.

_Mark

justin White

unread,
Apr 18, 2023, 2:57:13 PM4/18/23
to ope...@googlegroups.com

> Regarding your bash file, setting the acceleration:
>
> Be aware that these need to be settable fully "on the fly", during ongoing motion. They must only be effective for the motion commands following later, not the ones already executing, and not the ones already in the queue. I'm not saying it won't work the way you said, just that you should double check.
>
> _Mark

Well, as I was looking for the method to read out the value of a pin directly (like for a vacuum sensor), I found another way to set the pins directly, like for setting the acceleration. Actually it was suggested to me by a LinuxCNC guru...
linuxcncrsh is basically the remote G-code server; it doesn't have direct access to HAL, hence the M-code method.
halrmt is implemented almost exactly the same as linuxcncrsh, in that it is another telnet server, but it does not use the G-code interpreter. It does, however, have direct access to HAL, which is basically everything and anything LinuxCNC does or knows.
So I have to figure out how to set up the OpenPnP gcode driver, or maybe use 2 drivers: one connects to, say, port 5007 for linuxcncrsh, the other connects to port 5006 for halrmt.
I push Gcodes into linuxcncrsh along the lines of "set mdi g0x4.5y3.2"
I push direct hardware stuff into halrmt like "set HAL SetP [pinname] <value>" or "get HAL PinVals [<substring>]"
For me, the LinuxCNC side of this is easy-ish; it's the OpenPnP side that I will have to figure out.
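
A rough sketch of that two-port plumbing from the host side (a sketch, not a working client: it assumes a local LinuxCNC instance and omits the login/enable handshake linuxcncrsh expects):

```shell
# Commands on both channels are plain text lines terminated with CRLF,
# pushed to two different TCP ports.
frame() { printf '%s\r\n' "$1"; }   # format one protocol line

# G-code goes to the linuxcncrsh port, e.g.:
#   frame "set mdi g0x4.5y3.2" | nc -w 1 localhost 5007
# direct HAL access goes to the halrmt port, e.g.:
#   frame "get HAL PinVals vacuum" | nc -w 1 localhost 5006
frame "set mdi g0x4.5y3.2"
```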

justin White

unread,
Apr 18, 2023, 3:20:27 PM4/18/23
to OpenPnP

Also, as I mentioned, I'll be using a Mesa FPGA card for this. Mesa has plenty of hardware that can do just about anything, but nothing in the form factor of a 3D printer style controller you might use with a smaller PnP.

So what I intend to do is take a 7i92 and make a breakout for it. I intend to make this breakout board public to hopefully help get some interest in this on the OpenPnP side. Personally I will desolder the IDC connectors from the top of the 7i92 and place open pin headers on the bottom so it will plug straight into the BOB, but that isn't really necessary; ribbons can still be used. This BOB will have Pololu-style stepper headers but also headers for sending step+direction to an external driver. GPIO will be more suitable for a PnP style machine than using the HC heater outputs for stuff. I'll likely put SPI vacuum sensors directly on the PCB as well. The best part of it: I don't have to mess with firmware. Mesa will just assign the firmware modules to the pins I designate, as they use a modular firmware, and that will be that.

Jarosław Karwik

unread,
Apr 19, 2023, 3:52:55 AM4/19/23
to OpenPnP
As I made my own controller for OpenPnP, I can give you some hints.
All of OpenPnP's needs can be covered with the following G-code commands:
- G0/G1 - movements
- G4 - delay
- G20/G21/G90/G91/G92 - coordinate system setup
- G28 - homing
- M42 - IO
- M105 - temperature/pressure read
- M114 - position
- M115 - firmware info
- M201/M204 - acceleration/jerk setup
- M400 - synchronisation

If you have support for these (e.g. over a TCP terminal) then you are set. It then just needs careful OpenPnP setup of the commands, but there are templates for different controllers so you do not have to start from scratch.

justin White

unread,
Apr 19, 2023, 11:45:31 AM4/19/23
to OpenPnP
Most of the problem is the M-codes. None of those exist in RS274NGC; it's all 3D printer specific stuff. A few of them were spoofed into Machinekit, which is an old fork of LinuxCNC that did a lot with 3D printers. M204 is tough though. What happens if M204 is omitted? If it's a big deal I can come up with something down the line.

Jarosław Karwik

unread,
Apr 19, 2023, 11:53:19 AM4/19/23
to ope...@googlegroups.com
You do not need the actual M-codes, but the functionality they provide - in any way it can be encapsulated in text. OpenPnP uses command templates (you write them, OpenPnP provides values for them), and the response interpretation is done using regular expressions. So while it is called the Gcode driver, it does not need to use the G-code command convention at all. Anything text based goes; you get a 'printf'-like engine.
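
That "templates out, regexes back" round trip can be sketched like this (the reply text is a Marlin-style M114 line, used purely as example input; LinuxCNC would return something different):

```shell
# Pull axis values out of a position reply with sed captures - the same
# job OpenPnP's response regexes do with named capture groups.
reply="X:6.3005 Y:1.2391 Z:0.0000 A:0.0000"
x=$(printf '%s' "$reply" | sed -n 's/.*X:\([0-9.-]*\).*/\1/p')
y=$(printf '%s' "$reply" | sed -n 's/.*Y:\([0-9.-]*\).*/\1/p')
echo "parsed X=$x Y=$y"   # prints: parsed X=6.3005 Y=1.2391
```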

justin White

unread,
Apr 19, 2023, 5:41:46 PM4/19/23
to OpenPnP
I get that, but OpenPnP expects to use an M-code based on what it's supposed to do. While most are straightforward, M204 specifically is not. I'd have to dig through Marlin's planner to see what it considers "Print" etc. acceleration, so I know what accelerations to set based on which switch. I can figure it out; it might take some monkeying around, but I'd rather not deal with it right away if it's possible to avoid that specific M-code. Is the "old" method possible to use?

tonyl...@gmail.com

unread,
Apr 19, 2023, 6:58:47 PM4/19/23
to OpenPnP
> OpenPnP is expecting to use an Mcode based on what it's supposed to do.

Only for controllers it "knows" about. It doesn't "know" anything about LinuxCNC, so you have to manually set up OpenPnP (as opposed to using Issues & Solutions) so that it sends the command strings LinuxCNC needs. For instance, the MOVE_TO_COMMAND for my TinyG looks like this:

M201.3 {XJerk:X%.0f} {YJerk:Y%.0f} {ZJerk:Z%.0f} {AJerk:A%.0f}
G1 {X:X%.4f} {Y:Y%.4f} {Z:Z%.4f} {A:A%.4f} {FeedRate:F%.2f}


The stuff in the curly braces gets substituted just before OpenPnP sends the commands to the TinyG so it ends up sending something like:

M201.3 X50000 Y9834
G1 X6.3005 Y1.2391   F4841.10


The point is that you can setup your MOVE_TO_COMMAND to send any strings you want.  Maybe something like:

SET_LINUXCNC_X_ACCELERATION_TO {XAcceleration:%.0f}
SET_LINUXCNC_Y_ACCELERATION_TO {YAcceleration:%.0f}
SET_LINUXCNC_Z_ACCELERATION_TO {ZAcceleration:%.0f}
SET_LINUXCNC_A_ACCELERATION_TO {AAcceleration:%.0f}
GOTO {X:X%.4f} {Y:Y%.4f} {Z:Z%.4f} {A:A%.4f} {FeedRate:F%.2f}

And when OpenPnP sends the command, LinuxCNC would see something like:

SET_LINUXCNC_X_ACCELERATION_TO 3337
SET_LINUXCNC_Y_ACCELERATION_TO 7455
GOTO X6.3005 Y1.2391   F4841.10

What strings you put into your commands is totally up to you. You can also just leave the acceleration control lines out of your MOVE_TO_COMMAND if you prefer and OpenPnP won't care - it just won't be able to control your machine's acceleration, so you will need to use one of the simple Motion Control Types (ToolpathFeedRate? - Mark should be able to help you there).

All the other commands OpenPnP needs to send are user definable similarly.

Once you get a good example of OpenPnP working with LinuxCNC, maybe Mark would add support for it in Issues & Solutions so that the setup process is much simpler for other folks.

Tony

justin White

unread,
Apr 19, 2023, 9:15:00 PM4/19/23
to OpenPnP

Most of that I get. If you look at the docs, it says not to use the old PICK and PLACE methods and to use the new method, which was basically described as being a lot more complicated. The old method example was this:
Example:
M808 ; Turn on pump
M800 ; Turn on nozzle 1 vacuum solenoid

That example is not a problem to replicate:
PICK_COMMAND
        set mdi m160 P2

Where M160 is a user-defined code and P2 is the nozzle number variable. Then inside the M-code file I just define what turning the pump on is, and what activating each nozzle is, hardware-wise.
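
For illustration, a dry-run sketch of what the M160 file could contain (the pin names are made up for the sketch, and it only echoes the halcmd calls it would make instead of executing them):

```shell
#!/bin/bash
# Hypothetical M160 handler: the P word arrives as $1 (the nozzle number).
nozzle=${1:-1}
echo "halcmd setp pump-enable 1"                # placeholder pin name
echo "halcmd setp nozzle-${nozzle}-vacuum 1"    # placeholder pin name
```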

I also get that it can just send a string:
HOME_COMMAND
       set mdi home -1

So I probably went a bit off track when Jarosław mentioned his G-codes, one of which is M204. It makes sense for a PnP to use that for the PICK command, but its variables are completely related to 3D printing, and only a 3D printer firmware would define Print, Retract and Travel accelerations. I can't replicate a "Print" acceleration unless I know what the printer assumes a "print" speed is. That's all a moot point if I can just use the standard GcodeDriver and define PICK myself and OpenPnP will not have a problem with it.

One other thing I noticed in the docs: POSITION_REPORT_REGEX is specified in the regex doc, but nothing in the GcodeDriver doc is specified to invoke the response. POSITION_REPORT_REGEX isn't in the example GcodeDriver.xml, so I don't even know if it's used. ACTUATOR_READ_REGEX's response is invoked by ACTUATOR_READ_COMMAND, so I would assume something like a POSITION_REPORT_READ_COMMAND would exist, but I haven't seen it. Here again the doc says Issues & Solutions will do it for you.

tonyl...@gmail.com

unread,
Apr 20, 2023, 1:22:40 PM4/20/23
to OpenPnP
Ignore the old Pick and Place commands.  The vacuum pump and valves are now controlled via Actuators - you will need to set them up and add your commands (strings) to turn on/off your pump and work your valves in their respective ACTUATE_BOOLEAN_COMMANDs.

>but its variables are completely related to 3D printing ....

That's not true.  The accelerations (or jerk) that it controls are for each machine axis - the command doesn't care what kind of machine is connected to those axes. If those axes are connected to a 3D printer, then they control the acceleration of a 3D printer, but if those axes are connected to a pick-and-place machine, then they control the accelerations of a pick-and-place machine.

The command for requesting a position report is GET_POSITION_COMMAND - typically M114 but you can use something else.

You will also need to implement the following commands as well:
COMMAND_CONFIRM_REGEX - every command line sent by OpenPnP needs to have a response come back that this regex matches - typically something like ^ok.*

SET_GLOBAL_OFFSETS_COMMAND - typically G92 or G28.3 is used but you can use something else.

MOVE_TO_COMPLETE_COMMAND - typically M400 is used but you can use something else - whatever you use, it needs to wait until all pending motions are completed before returning its COMMAND_CONFIRM_REGEX response.
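
Pulled together, a first-pass command set for a LinuxCNC setup might look something like the following. Every value here is a placeholder sketch: the "set mdi" prefix follows the linuxcncrsh convention discussed earlier, m1xx/m1yy stand for user-defined M-code files (LinuxCNC's user range is M100-M199), and the confirm regex would have to match whatever linuxcncrsh actually acknowledges with:

```
COMMAND_CONFIRM_REGEX        .*                 ; loosest possible; tighten later
GET_POSITION_COMMAND         set mdi m1xx       ; user M-code that reports position
SET_GLOBAL_OFFSETS_COMMAND   set mdi g92 {X:x%.4f} {Y:y%.4f} {Z:z%.4f}
MOVE_TO_COMPLETE_COMMAND     set mdi m1yy       ; user M-code that blocks until motion completes
```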

mark maker

unread,
Apr 20, 2023, 1:31:01 PM4/20/23
to ope...@googlegroups.com

Like I said before (quoting myself):

> Issues & Solutions will then go through all this and propose G-code snippets for you. Some are mostly automatic, others you must tell it. Issues & Solutions will query the type of the controller (M115) and adapt to known firmware dialects. For unknown firmwares, you can use the generic profile.

Connect Firmware

https://github.com/openpnp/openpnp/wiki/Issues-and-Solutions#connect-milestone

This will propose "standard" G-code and regexes, and as LinuxCNC seems to have a full G-code dialect, this is certainly a good start. If you really (?) need to prepend all of these with "set mdi", it is still easier to do that after the generic proposal has been made.

Side note: I'm still somewhat reluctant to believe that in the whole LinuxCNC universe, nobody made a genuine G-code server, where you can send commands without the proprietary "set mdi" stuff. With such a program as an intermediary, given it is Open Source, it should be easy to add new G- and M-codes that do stuff outside the built-in commands.

Note: M204 is likely not your only problem. I haven't seen ways to actually report stuff back, like M105 for (analog) sensors and M114 for position, the way other controllers do.

> Most of that I get, if you look at the docs it says not to use the old PICK and PLACE methods and to use the new method which was basically described as being alot more complicated.

You should just let Issues & Solutions guide you, using the above generic profile. I'm sure you'll find it easy. I made I&S to make things easy, but also to reduce the support load in this group here. If you continue second-guessing everything, I will no longer bother you with my unwanted help.

> M204 ...  but I'd rather not deal with it right away if it's possible to avoid that specific Mcode. Is the "old" method possible to use?

Yes. As long as you use the primitive ToolpathFeedRate method, any acceleration control is switched off:

https://github.com/openpnp/openpnp/wiki/GcodeAsyncDriver#motion-control-type

Like I explained earlier, this will make speed control mostly ineffective.


_Mark

justin White

unread,
Apr 20, 2023, 3:42:39 PM4/20/23
to ope...@googlegroups.com

Like I said before (quoting myself):

> Issues & Solutions will then go through all this and propose G-code snippets for you. Some are mostly automatic, others you must tell it. Issues & Solutions will query the type of the controller (M115) and adapt to known firmware dialects. For unknown firmwares, you can use the generic profile.

And like I said before (quoting myself)
 
When I had Marlin firmware on the Octopus I initially used, I went through the Issues & Solutions wizard; all I was getting was "not supported on this platform", and when I selected the GcodeAsyncDriver it just told me to use the normal GcodeDriver. OpenPnP could read the firmware on the Marlin and I could issue G0 moves, so I know things were connected.

Not trying to be a jerk, and I appreciate the help, but Issues & Solutions didn't work, like at all. It's not a big deal; I was trying to get most of these OpenPnP commands in order so I can pass them off to a friend for help. Probably best I wait until I get back in front of the machine to narrow down my issues. To be honest I haven't touched the OpenPnP install since last week, so maybe Issues & Solutions will do something for me. Right now I'm trying to get the breakout board routed so I can get that going.

Side note: I'm still somewhat reluctant to believe that in the whole LinuxCNC universe, nobody made a genuine G-code server, where you can send commands without the proprietary "set mdi" stuff. With such a program as an intermediary, given it is Open Source, it should be easy to add new G- and M-codes that do stuff outside the built-in commands.

The "set mdi" stuff is just the command protocol for linuxcncrsh. You have to understand that "gcode servers" are not a popular thing outside of 3D printer firmware. LinuxCNC's gcode server is purely a legacy carryover; nobody uses it, as it sort of defeats the purpose of EMC's (the precursor to LinuxCNC) intentions. Honestly, you'll be hard-pressed to find a proper CNC controller with a "proper gcode server" at all. 3D printers take gcode over the wire for the same reason very old CNC machines did: hardware limitations. Linuxcncrsh's protocol is exactly what I would expect for something that was going to grab commands over RS232, which is probably what it was designed for. If Open Source projects focused on things that 5 people were going to use, you'd probably be writing a LinuxCNC gcode driver for OpenPnP lol.
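As a sketch of how simple the linuxcncrsh wire format is: a client just sends newline-terminated text commands over a plain TCP socket (the `send_mdi` helper is hypothetical; the `set mdi` wording follows the linuxcncrsh protocol described above). The demo uses a local socket pair instead of a real server:

```python
import socket

def send_mdi(sock: socket.socket, gcode: str) -> None:
    # linuxcncrsh wraps MDI G-code in a plain-text "set mdi ..." command
    sock.sendall(f"set mdi {gcode}\n".encode())

# demo against a local socket pair instead of a live linuxcncrsh server
a, b = socket.socketpair()
send_mdi(a, "G0 X10 Y20")
print(b.recv(64).decode(), end="")  # → set mdi G0 X10 Y20
```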
The purpose of LinuxCNC's original form is explained in the NIST design docs:

And in fairness this is a workaround. I think it will work well, but the proper way to do this would be, at the very least, for OpenPnP to control LinuxCNC over its Python interface. The Python interface has more direct access. If you really wanted to get into it you would create custom realtime components to handle specific functions; they are written in C with RTAPI. This is an old video of a prototype machine I built; you can see it's not a milling machine and doesn't conform to any standard, yet it's all run through LinuxCNC. All control through that GUI is done through Python, and it runs several RT components to handle specific things that Python is too slow for. SOUND WARNING https://drive.google.com/file/d/1SxJM4sPfanNmmVkRNDxREewz4eeJFaSS/view?usp=share_link
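For reference, the kind of direct access meant here goes through the `linuxcnc` Python module that ships with LinuxCNC. The sketch below injects the command object so the logic can be exercised without a live machine; in real use you would pass `linuxcnc.command()`, and the `MODE_MDI = 3` constant mirrors `linuxcnc.MODE_MDI` (an assumption worth verifying against your install):

```python
def park_via_mdi(cmd, x: float, y: float) -> None:
    """Send a park move through LinuxCNC's Python interface.

    `cmd` is expected to behave like a linuxcnc.command() object; it is
    injected here so the sequencing can be tested without a machine.
    """
    MODE_MDI = 3                      # assumed value of linuxcnc.MODE_MDI
    cmd.mode(MODE_MDI)                # switch task to MDI so single commands are accepted
    cmd.wait_complete()
    cmd.mdi(f"G0 X{x:.4f} Y{y:.4f}")  # issue one G-code line
    cmd.wait_complete()               # block until the move finishes
```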

That I had a programmer handle; I obviously cannot write the code for this stuff myself. I'm not suggesting OpenPnP people spend any time messing with LinuxCNC; I'm just illustrating that what I'm trying to do here is a workaround for lack of programming skills.


Chris Campbell

Apr 20, 2023, 3:52:08 PM4/20/23
to ope...@googlegroups.com
Over the last few days I have been making a server for this purpose, based on Jaroslaw's post here and the requirements listed at:

It is loosely modeled on the existing linuxcncrsh utility, and adds M42, M114, M115, M400. I have not yet done M105 but it shouldn't be a problem, I'm just very sleepy for some reason after coding for 16 hours straight. M204 might need some input from the LinuxCNC folks, but I think it should also be possible (see https://www.forum.linuxcnc.org/38-general-linuxcnc-questions/48855-traj-set-acceleration-not-possible-via-nml).

For now it functions like linuxcncrsh, listening on a regular socket (telnet), checking if the input matches some specific keyword (eg. "pause", "abort", "m42", "M400" etc) and intercepting this with special behavior, otherwise it passes the input through as an MDI command to let LinuxCNC deal with it. In future I'm planning to check if the first character is "{" in which case it will be treated as JSON, and make a websocket server for convenience with web interfaces.
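The dispatch described could be sketched roughly like this (a hypothetical helper; the keyword set is taken from the description above):

```python
def classify(line: str):
    """Route one incoming line: intercept special words, else pass to MDI."""
    cmd = line.strip().lower()
    if cmd in ("pause", "abort"):
        return ("intercept", cmd)          # server-level control words
    if cmd.split(" ")[0] in ("m42", "m114", "m115", "m400"):
        return ("intercept", cmd)          # M-codes given special behavior
    if cmd.startswith("{"):
        return ("json", cmd)               # planned: "{" prefix treated as JSON
    return ("mdi", line.strip())           # everything else goes to LinuxCNC as MDI
```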

The code is a bit messy right now but working fairly well so far, hopefully I can tidy it up and put it on github soon. Anyway if Justin can wait a week or two this discussion might be mostly taken care of.



justin White

Apr 20, 2023, 4:02:44 PM4/20/23
to ope...@googlegroups.com
Well that's fantastic, and unexpected. I'm not in a rush, though the thread got derailed a bit I was just trying to gather information to pass to my helper. Hopefully that won't be necessary now.

Chris Campbell

Apr 28, 2023, 3:52:04 PM4/28/23
to ope...@googlegroups.com
Here are some movements commanded by OpenPNP, seems mostly ok so far:

That case was using 'ToolpathFeedrate' as the control type, which works well and allows LinuxCNC to blend between segments, but it does not pass any acceleration instructions. Seems like 'ConstantAcceleration' would be more appropriate, and that setting does command acceleration values, but unfortunately they are given between every single segment which currently interrupts blending. With a little more work I can make my server ignore acceleration commands where the acceleration value is the same as last time, so blending can be maintained.
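The deduplication idea (forward everything, but drop an acceleration command whose value matches the previous one, so blending survives) might look like this hypothetical sketch:

```python
def drop_repeated_accel(lines):
    """Forward G-code lines, skipping M204 commands that repeat the last value."""
    out, last_m204 = [], None
    for line in lines:
        word = line.strip().upper()
        if word.startswith("M204"):
            if word == last_m204:
                continue            # same acceleration as before: keep the blend intact
            last_m204 = word
        out.append(line)
    return out
```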

If I understand correctly, OpenPNP will always send a MOVE_TO_COMMAND for every individual segment? Maybe it already exists, but it would be great if there was a way to pass the overall "intent" of a movement as a whole, allowing the motion controller to deal with it in a more optimal way. For example, instead of issuing multiple G0 commands to raise the head, then move it laterally, then lower it, the 'overall intent' command would just give the target destination, acceleration, and safe height(s). Maybe backlash compensation requirements could also be given, allowing any extra segments to be blended as well.

btw if anyone can give me a raw gcode log of an actual pick and place job running that uses acceleration commands, that would be really helpful. So far I have been twiddling around with small snippets of hand written movements, but I'm wondering what problems might be uncovered by running an actual job.


bert shivaan

Apr 28, 2023, 6:34:13 PM4/28/23
to ope...@googlegroups.com
That is awesome work Chris! Are you able to visually home with it? I mean, can you home visually with OpenPnP and tell LinuxCNC it is homed?

Chris Campbell

Apr 29, 2023, 3:19:47 AM4/29/23
to ope...@googlegroups.com
Hi Bert
I haven't actually used OpenPNP yet but from reading this page:
I understand that visual homing means a target fiducial position will be given and fine-tuned by camera feedback. This location will then be stored and used in future jobs as a reference point for work coordinates, instead of relying on limit switches. As far as I can tell there is nowhere in this procedure that it needs to tell LinuxCNC anything about homing status, but perhaps I missed something. In any case, when I run visual homing with the fake/simulated camera it seems to be ok: clicking on the 'home' button will return to that fiducial location and then slightly tweak the head position a few times while fine-tuning. The only commands given to LinuxCNC are G28 and G1.




mark maker

Apr 29, 2023, 4:05:20 AM4/29/23
to ope...@googlegroups.com

> Here are some movements commanded by OpenPNP, seems mostly ok so far

Cool!

> Seems like 'ConstantAcceleration' would be more appropriate, and that setting does command acceleration values, but unfortunately they are given between every single segment which currently interrupts blending. With a little more work I can make my server ignore acceleration commands where the acceleration value is the same as last time, so blending can be maintained.

In this case you should be using EuclideanAxisLimits. It will still send them each time, but they should stay stable until the Speed % is really changed. 

https://github.com/openpnp/openpnp/wiki/GcodeAsyncDriver#motion-control-type

> If I understand correctly, OpenPNP will always send a MOVE_TO_COMMAND for every individual segment? Maybe it already exists, but it would be great if there was a way to pass the overall "intent" of a movement as a whole, allowing the motion controller to deal with it in a more optimal way. For example, instead of issuing multiple G0 commands to raise the head, then move it laterally, then lower it, the 'overall intent' command would just give the target destination, acceleration, and safe height(s)

Given LinuxCNC has the cool G64 command, this should already give you good results. See also here:

https://groups.google.com/g/openpnp/c/y9mnpG-YXOI/m/kLvqwFieAAAJ

And I'm open to help optimize this. Send G64 commands and Safe Z tailored to X/Y move distance, i.e. the curving should be higher on longer X/Y moves, and lower on shorter X/Y moves, and it should never start the curving while still under Safe Z.
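As a strawman for "G64 tailored to X/Y move distance", the blend tolerance could be computed per move, for instance like this (the scaling factor is purely hypothetical; the idea is just that longer lateral moves tolerate more corner rounding, bounded by the Safe Z headroom):

```python
def g64_for_move(xy_distance_mm: float, headroom_mm: float) -> str:
    # Longer lateral moves allow more corner rounding, but the curve must
    # never dip below Safe Z, so cap the tolerance at the available headroom.
    p = min(0.05 * xy_distance_mm, headroom_mm)
    return f"G64 P{p:.3f}"
```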

The following is for a 3rd order 7 segment controller and using a different concept. This is not blending, but letting moves be uncoordinated above Safe Z. I think you can see I spent some time thinking about and simulating this stuff. This is what the internal motion planner of OpenPnP can do. Still waiting for the controller that can do it for real:

Here, I discussed what is needed to do more or less the same with blending:

https://groups.google.com/g/openpnp/c/Zs6PBCyBI9o/m/S9vEz1TrAQAJ

Quoting myself:

I would do it as follows (numbered for easy referencing):

  1. We want to save time, so it is better to think the blending in terms of time, not distance.
  2. By overlapping the time to decelerate Z, and the time to accelerate X/Y,  we can (more or less) save that amount of time per corner (the other corner is just the same in reverse).
  3. If the X/Y displacement is large, then the overlap time is determined by the time it takes to decelerate Z.
  4. We want to drive the nozzle up as fast as possible; this means that ideally we do not want to decelerate before we hit the minimum Safe Z height (at whatever speed and acceleration we can achieve).
  5. If our head-room (Safe Z Zone) is large enough to fully decelerate Z, we can take the full Z deceleration time as the overlap time.  The upper Z of the arc is determined by the braking distance.
  6. If our head-room (Safe Z Zone) is not large enough to decelerate Z, we need to start decelerating earlier. The overlap time is then just the fraction of the deceleration time that happens above Safe Z. The upper Z of the arc is then determined by the headroom (Safe Z Zone).
  7. If the X/Y displacement is small, then the overlap time is no longer determined by Z deceleration, but by half the X/Y motion time.
  8. In that case we can take this half X/Y time and calculate the Z braking distance that is achieved in this time (in reverse, from standstill). This braking distance added to Safe Z gives us the upper Z of the arc.
  9. This basically means that at some point smaller X/Y moves will result in less and less "overshoot" into the Z headroom, the arc becomes lower and lower. It is logical: if we just have a tiny, tiny move in X/Y, then the fastest way to go to Safe Z and then back is still to just go to Safe Z and not higher.
  10. Finally, simply start the motion of X/Y earlier by this overlap time, and vector-add its relative displacement to the still decelerating Z motion.
  11. This blending in time gives us the shape of the arc. For large X/Y moves it will just round the corners. For small X/Y moves (7) it will create a true arc.
  12. With 2nd order motion control, the above rule set can directly be used to create the blending.
  13. With true 3rd order motion control, this is complicated by jerk control, i.e. the switch from accelerating Z to decelerating Z is not instantaneous. So computing (5) and especially (8) is probably best done using some sort of iteration/numerical solver. It does not have to be super accurate, so the solver can terminate early to avoid a computation bottleneck.
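Points 3–8 above condense into a small 2nd-order calculation. The sketch below follows the list (symbols and units are assumptions: mm, s; no jerk limiting, per point 12):

```python
import math

def overlap_time(v_z: float, a_z: float, headroom: float, t_xy: float) -> float:
    """Corner-blend overlap per points 3-8 (2nd order motion, no jerk limit).

    v_z: Z speed when crossing Safe Z, a_z: Z deceleration,
    headroom: height of the Safe Z Zone, t_xy: total X/Y move time.
    """
    d_brake = v_z * v_z / (2 * a_z)          # Z braking distance
    if d_brake <= headroom:
        t = v_z / a_z                        # (5) full Z deceleration fits above Safe Z
    else:
        t = math.sqrt(2 * headroom / a_z)    # (6) only the braking portion above Safe Z counts
    return min(t, t_xy / 2)                  # (7) small X/Y moves cap the overlap
```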

_Mark

mark maker

Apr 29, 2023, 4:08:55 AM4/29/23
to ope...@googlegroups.com

Visual homing is usually just handled as G92 offsets, and LinuxCNC has that:

http://linuxcnc.org/docs/html/gcode/g-code.html#gcode:g92
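Concretely, OpenPnP's post-vision-home step can apply the camera-verified location as such an offset; a hypothetical GcodeDriver template (the command name is OpenPnP's, the axis formats are assumptions):

```text
POST_VISION_HOME_COMMAND:
G92 {X:X%.4f} {Y:Y%.4f}   (declare the current, camera-verified position)
```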

_Mark

Chris Campbell

Apr 29, 2023, 6:24:01 AM4/29/23
to ope...@googlegroups.com
Hi Mark

Here are a few moves that mimic some parts of your example. I think 'rounded corners' is the best it can offer at the current time.

Another example, using same moves but with different accelerations:

One reason to prefer passing a movement as an overall intent is so that LinuxCNC knows which segments should form a contiguous blend. The on-demand communication (versus just reading a file) makes use of the MDI (Manual Data Input) feature, which is more commonly used to type in the occasional command by hand. Commands will be executed immediately, and blending can only occur if there are further segments in the queue when the current segment begins. When commands are entered en masse by pasting into telnet or via OpenPNP, in most cases all the commands can get into the queue before the first one has begun execution, and the full blend will succeed. Depending on the timing of events, though, there is about a 20% failure rate, and some segments will end up with squared-off corners instead of blending. I'm not aware of any way to make LinuxCNC defer execution until the queue is fully ready.

If each desired movement was passed as a set of parameters rather than individual segments, the full information for a move would be available in a single command and blends could not get skipped or mixed up. Handling these parameters in a subroutine would enable a programmable response that could include tailoring the safe-Z based on X/Y distance and perhaps even some attempt at improving on 'rounded corners'.
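Such a parameterized move could be handled by a LinuxCNC O-word subroutine, along these lines (a sketch: the subroutine name and parameter layout are hypothetical, the G64 tolerance is hard-coded, and the feed rate is assumed to be set by the caller):

```text
o<move_intent> sub
  ( #1 = target X   #2 = target Y   #3 = target Z   #4 = safe Z )
  G64 P0.5          ( blend tolerance, hard-coded for this sketch )
  G1 Z#4            ( lift to safe Z )
  G1 X#1 Y#2        ( lateral move )
  G1 Z#3            ( lower to target )
o<move_intent> endsub
```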
Incidentally it looks like this fella is using such subroutines, which gives me hope they will not be too slow...

Just some thoughts I had. For the time being, I think the standard behavior with G64 is pretty nice.


mark maker

Apr 29, 2023, 1:15:58 PM4/29/23
to ope...@googlegroups.com

> When commands are entered en masse by pasting into telnet or via OpenPNP, in most cases all the commands can get into the queue before the first one has begun execution, and the full blend will succeed. Depending on the timing of events, though, there is about a 20% failure rate, and some segments will end up with squared-off corners instead of blending. I'm not aware of any way to make LinuxCNC defer execution until the queue is fully ready.

I'm so glad you clearly understand the important points. 😁

This is a common problem. Some controllers have a grace period, where they wait for more commands to come, before they start planning and motion. It's not only about blending, but also about premature ramp deceleration.

For OpenPnP you can wait until you get an M400, as a marker that the motion sequence is complete. M400 should immediately start the planning and motion. The only exception here is manual jogging, where OpenPnP cannot know if more jog steps will come, and therefore leaves it to the controller to start planning/motion (no M400 sent).

Just to see how common the problem is, see this for Duet:
https://github.com/Duet3D/RepRapFirmware/pull/471

> If each desired movement was passed as a set of parameters rather than individual segments, the full information for a move would be available in a single command and blends could not get skipped or mixed up.

I believe that the M400 marker gives you plenty of opportunity to do this. You can record the motion sequences until the M400 arrives (with timeout). Then you have the intent perfectly. You can then make anything out of the recorded motion, recognize the OpenPnP moveToLocationAtSafeZ() pattern easily, and apply any blending and Z overshoot as needed.
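Recording until the M400 marker could be as simple as this hypothetical sketch (the timeout path corresponds to the manual-jogging case, where no M400 is sent):

```python
def record_sequence(lines):
    """Buffer incoming G-code until M400 marks the motion sequence complete."""
    seq = []
    for line in lines:
        if line.strip().upper().startswith("M400"):
            return seq, True      # complete intent captured: plan and execute now
        seq.append(line)
    return seq, False             # stream ended without M400 (e.g. jogging): timeout path
```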

I'm rather reluctant to make OpenPnP speak some special dialect instead. There is a very strong Open Source idealism behind all this: compatibility, standardization, interchangeability. I'm allergic to all kinds of lock-in. Making controllers speak mostly standardized G-code is the way to go, and your work clearly follows that path too, which is great! 😁

Note, I'm not counting G64 as proprietary. This type of motion blending is clearly as intended by the NIST RS274 standard (sections "2.1.2.16 Path Control Mode" and "3.5.14 Set Path Control Mode — G61, G61.1, and G64"). Therefore, for OpenPnP to smartly support it would be ideal. Other Open Source controllers can then also implement G64 if they want to keep up. 😎 That's the spirit I'm after!

> Just some thoughts I had. For the time being, I think the standard behavior with G64 is pretty nice.

Definitely already much better than anything before.

> Incidentally it looks like this fella is using such subroutines, which gives me hope they will not be too slow...

Yep, also watch his cool videos:

From theoretical...
https://www.youtube.com/watch?v=hb4kSznglo0

... to practical:
https://www.youtube.com/watch?v=LTfe2ljmRpU

_Mark

Chris Campbell

May 11, 2023, 3:06:13 AM5/11/23
to ope...@googlegroups.com
I uploaded my code here, hopefully the readme is sufficient.

Regarding the discussion about passing a higher-level 'intent' instead of individual move segments: it would certainly be possible to wait for an M400 before proceeding. But I'm not so certain that reading a sequence of g-codes is a reliable way to capture the intent. My server could only parse multiple lines of g-code and try to deduce what the intent was, in order to relay it to LinuxCNC as a single command. This would only be possible if the g-code adhered to very strict and recognizable sequences; for example a z-lift, xy-move, then z-lower could be detected as a single move intent. If the sequence contained any other segments (e.g. backlash compensation, waypoints) the intent would become unrecognizable, and future changes in how OpenPNP sends segments might require updating the server to handle the new method.

Overall, requiring OpenPNP to produce low level g-code only to have it parsed back into a high level intent, and then re-generated as g-code on the LinuxCNC side, would be a rather inefficient and clumsy strategy and not something I'd be involved in. As I understand it, the only reason OpenPNP has to shoulder the burden of generating such g-code in the first place, is because the traditional motion hardware is a microcontroller with limited memory that needs to be instructed in fine detail about every little thing. If everyone had been using LinuxCNC instead when OpenPNP was born, I think passing higher level intents could well have become the preferred method, perhaps with g-code as a fallback.

With a capable enough motion controller, there are also other low-level movement related concepts that ideally OpenPNP should not have to concern itself with. For example tool changing, backlash compensation. The strength of OpenPNP is the vision feedback and being the overall glue that holds everything together at a higher level. The more low-level work it can hand off to the motion controller and say "do this and tell me when you're finished", the less redundant information is transferred and the more each part of the system can optimize to their strengths. As microcontroller specs improve over time I think we might see more embedded firmwares that can handle the low level tasks without so much supervision.

Regarding the objection to using a 'special dialect', OpenPNP already allows us to set custom formats for commands, so I'm not sure where 'lock-in' would arise. It would however require a new type of command setting, perhaps something like MOVE_INTENT_COMMAND which would be similar to MOVE_TO_COMMAND but also exposes the safe-Z, and is issued for only the final destination instead of all segments. From looking at the source code I can see this would not be trivial to implement so of course I'm not requesting anything, just outlining the general idea.

Here's an experiment where I pretended that MOVE_TO_COMMAND was actually specifying the final destination of a three-segment 'lift, move, lower' intent. The LinuxCNC side is altering the height based on the lateral distance and generating the final g-code to use.
I have hard-coded the safe-Z and acceleration parameters, but you get the idea. The g-code is generated in a sub-routine of the type used by our friend whose videos we were admiring earlier. Although he was able to implement most of his system using those subroutines, I think he would most likely have used OpenPNP if the movement interface allowed reliable blending, as he indicates in the last post on this page: https://forum.linuxcnc.org/pnp/38430-replacing-openpnp-motion-controller#158379

Anyway, what I currently have in the server should be enough to allow a basic functional connection between OpenPNP and LinuxCNC, albeit with occasional blending fails. Although I do have plans to build a pick-and-place someday, it could be a year in the future yet so I will leave it there for now unless there are obvious bugs to be fixed.


mark maker

May 11, 2023, 5:06:14 AM5/11/23
to ope...@googlegroups.com

Hi Chris

interesting discussion.

The problem from an OpenPnP perspective is that - like in any good software - the overall problem is broken up into many sub-problems, then individually solved, and then synthesized back into the final composite solution.

One issue you actually already picked up is Backlash Compensation. It needs to be done. Could it be done on the controller instead? Sure. Can we rely on each controller to support it, and properly so? No. Would a controller that supports it know to do it right? Not sure. For instance, backlash compensation for X/Y must be done "in the air", before actually coming into contact with parts in feeders, or before pushing a part into solder paste, otherwise you can imagine what happens. This directly plays into motion blending, or transfer of intent, for instance. The controller likely says "hey, let's do that at the end of the move, so it can still be fluid", and that would be bad, obviously. So you would actually have to blend it into the down-going move, but assure it is done before actually arriving.

Because not all controllers support it, a solution must still be implemented inside OpenPnP. In order to support it, those moves must be de-composited. It would still be possible for it to be disabled and deferred to controllers that are known to do it right, but this makes the overall proposition of  keeping the "intent" intact more complex, you would need to defer decomposition too.

Add to that other concerns like Runout Compensation, Rotation Wrap-around, Rotation Mode, including nozzle Alignment, Contact Probing, including Tool-changer Z calibration, and whatnot. We also want to be open for innovation.

Transferring all that application knowledge and functionality to the controller, I would say, is bad design. Separation of Concerns.

So IMHO, sending the right G-code to do all that is still the right way to go. And I see no reason why this should not be possible, using well-timed G64, G1 sequences (or any textual command language you like), assuming LinuxCNC is clever enough to actually support parametrizing that stuff (including acceleration) on the fly. I see no reason why externalizing such things into scripts, as "our friend" (as you call him) did, is any different, conceptually, from externalizing them into OpenPnP.

Also note that OpenPnP supports using multiple controllers for the same machine, so LinuxCNC could just be one of them, perhaps just driving the X, Y, Z but not the C axes. It must still work in a quasi-coordinated way (this is clearly not hard realtime coordination but enough for PnP).

_Mark

justin White

May 11, 2023, 6:55:08 AM5/11/23
to ope...@googlegroups.com
So I have LinuxCNC and OpenPnP installed on my machine. I also have the firmware for my controller. Unfortunately I made one too many screw ups on my BoB PCB so it'll likely be a week or 2 before I get another revision back but I'll be trying this ASAP.

One issue you actually already picked up is Backlash Compensation. It needs to be done. Could it be done on the controller instead? Sure. Can we rely on each controller to support it, and properly so. No. Would a controller that supports it, know to do it right? Not sure. For instance, backlash compensation for X/Y must be done "in the air", before actually coming into contact with parts in feeders, or before pushing a part into solder paste, otherwise you can imagine what happens. This directly plays into motion blending, or transfer of intent, for instance. The controller likely says "hey, let's do that at the end of the move, so it can still be fluid", and that would be bad, obviously. So you would actually have to blend it into the down-going move, but assure it is done before actually arriving.
I'm not terribly familiar with all of the PnP nuance since I haven't used mine yet, but backlash comp is standard in LinuxCNC. The way you describe backlash comp, "blend it into the down-going move", is a different concept from what LinuxCNC uses. It also doesn't perform it at the end of the move, but before it begins. Backlash is wasted, or unaccounted-for, motion. LinuxCNC does not try to account for it; it just wrings it out. It is set up by entering the backlash measurement for that joint into the ini, then setting the stepgen's maxaccel to near 2x the joint maxaccel. If the last move was in the positive direction, then the backlash is on the negative side of the mechanism. So just prior to moving negative, LinuxCNC will snap it out. The idea is that the stepgen's max accel is set high enough that the backlash is absorbed before the commanded motion occurs, and nothing should be lost or further accounted for. Not sure if this is what OpenPnP would consider "right", but like any motion controller,
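For concreteness, the ini-based setup described might look like this (example values only; the `STEPGEN_MAXACCEL` name follows common stepconf-generated configs and is referenced from HAL, not read by LinuxCNC itself):

```text
[JOINT_0]
BACKLASH = 0.05          # measured backlash for this joint, in machine units
MAX_ACCELERATION = 500
# stepgen headroom so the backlash is "wrung out" before commanded motion:
STEPGEN_MAXACCEL = 950   # ~2x the joint MAX_ACCELERATION
```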

What kind of backlash do you guys actually see? I would expect belted axes to have pretty much no backlash which is why this is one of the few machines I built with all belts.

Add to that other concerns like Runout Compensation, Rotation Wrap-around, Rotation Mode, including nozzle Alignment, Contact Probing, including Tool-changer Z calibration, and whatnot. We also want to be open for innovation.

Transferring all that application knowledge and functionality to the controller, I would say, is bad design. Separation of Concerns.

Wrap and contact probing I don't think are a concern; rotary axes can be set up many ways and it will also respect shortest distance. Contact probing sounds like auto tool measurement, which my mill does through a script for the UI on every M6. The rest I understand the concept; I don't really know how OpenPnP executes them, so maybe? I think it'll be a lot of playing around; hopefully when I get some time in front of OpenPnP I'll have some intelligent feedback.

mark maker

May 11, 2023, 8:51:54 AM5/11/23
to ope...@googlegroups.com

You should perhaps read the links I provided. It would also help to test your machine with the Backlash Calibration, you can do that as soon as something moves. The fact that we need to support many different backlash compensation methods should make it clear(er) to you, why it is not so simple.

> So just prior to moving negative LinuxCNC will snap it out.

Yes, LinuxCNC also cares about compensating while moving, because most NC applications do the actual work while moving (milling, laser cutting, 3D printing). And doing so at a constant and regulated feed-rate, which is adjusted to the work being done, so the stuff will not go up in smoke or blobs of plastic (i.e. generally slow, but not too slow). For some applications, the preprocessor will even add little extra run up paths, so the nominal feed-rate is attained before the tool hits the work piece, starts lasering, starts extruding, etc. All this also makes it easier to predict the needed backlash compensation.

Pick and Place is quite different though. We don't care at all about what happens during motion. We care about the position at the end. We don't even care about the position up at Safe Z; we care about the position down on the feeder, camera focal plane, PCB. On the other hand, we do care about CPH; we simply want the maximum speed from A to B. This means that we have completely different situations regarding momentum, belt flex, friction, overshoot etc. depending on how far a move is. This in turn means that for many pragmatic machines, the Backlash Compensation must be explicit, either one of the OneSided methods or DirectionalSneakUp. Only few machines (if really tuned for high speed/acceleration) can use DirectionalCompensation. I have yet to see one.

https://github.com/openpnp/openpnp/wiki/Backlash-Compensation#backlash-compensation-methods

I'm not even sure DirectionalCompensation is a hallmark of the mechanically best/most expensive machines. Even if you have ball screws, for instance, they apparently have quite some backlash (and "stiction" too). Heavy machines (made from metal) or those with extreme servos supporting hefty deceleration may more likely have overshoot, which means inertia is then overcoming friction, which reverses backlash, but likely not consistently across speeds and distances. I could also imagine that aggressively tuned closed-loop systems are unpredictable; you never know which way they last nudge the motor. Maybe those with linear encoders can make it go away.


> "blend it into the downgoing move" is a different concept from what LinuxCNC uses.

Yeah, it was specifically my argument that LinuxCNC does not know about the peculiarities of picking and placing and therefore cannot and should not care about this. OpenPnP should. It is the only way I see we can mix advanced options like motion blending and advanced Backlash Compensation and top speeds. We just need a clever way of telling LinuxCNC how.

_Mark

Chris Campbell

May 11, 2023, 9:56:24 AM5/11/23
to ope...@googlegroups.com
Basic backlash compensation is not particularly difficult to achieve with a few more lines of code in the subroutine. Here is a quick test of what I believe would be called OneSidedOptimizedPositioning in OpenPNP terms, done 'in the air' and blending through all segments including the down-going move, and becoming vertical-only at 5mm height above the destination. On the OpenPNP side, backlash compensation can be left as "None" and I don't see why it would even need to know this is happening.

Yes, it would certainly be bad design to have features like runout compensation, rotation wrap-around, rotation mode, nozzle alignment, contact probing, and tool-changer Z calibration transferred to the motion controller. But I didn't suggest any of that. I said tool change and backlash compensation might be good candidates to hand-off to a controller. Only simple movement sequences with no feedback interaction can be trusted to the motion controller, any high-level concepts still require OpenPNP.

Like you, I too see no reason why it would not be possible to watch the g-code commands and try to extract the intent from them, I'm just saying it's a clunky workaround driven by legacy circumstances and not a very robust form of information transfer in general. When I order a pizza, I could stay on the phone and tell the delivery guy which way to turn at every street corner as he drives and that would work out fine, but it's more sensible for me to simply give him my address and wait for the doorbell to ring.

The use of scripts on the motion controller side is not a result of wanting to use them, it's just the only way to input commands into LinuxCNC from a remote client and reliably keep the blending behavior. I have to admit though, I'm glad these recent investigations led me to discover these subroutines and their potential.... maybe I do want to use them.... just a little bit :)

If anyone can provide me a g-code log of an actual pick-and-place job, I will take a look and see how feasible it would be for my server to watch the commands and try to identify the 3-segment "lift, xy-move, lower" sequences for optimization. The most naive implementation could be to just ignore every G1 until the final one before an M400. For purely horizontal moves, that would waste some time moving up to safe-z and back, but overall it might still be a net gain, I can't tell yet.
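That naive idea could be sketched in a few lines of Python. This is purely illustrative (the function name is hypothetical, and modal state, feedrates carried on earlier lines, and relative moves are all ignored): within each M400-delimited group, keep only the last G1 and pass everything else through.

```python
def collapse_moves(gcode_lines):
    """Naive sketch: within each group of commands ending at M400,
    keep only the final G1 (the ultimate target) and drop the
    intermediate lift/travel segments. Illustration only."""
    out, pending_g1 = [], None
    for line in gcode_lines:
        token = line.strip().upper()
        if token.startswith("G1"):
            pending_g1 = line          # a later G1 supersedes earlier ones
        elif token.startswith("M400"):
            if pending_g1 is not None:
                out.append(pending_g1)
                pending_g1 = None
            out.append(line)
        else:
            out.append(line)            # non-move commands pass through
    if pending_g1 is not None:          # trailing G1 with no M400 yet
        out.append(pending_g1)
    return out
```

So a "lift, xy-move, lower, M400" group collapses to a single move straight to the final target, which is exactly the time-wasting trade-off described above.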



mark maker

unread,
May 11, 2023, 11:07:55 AM5/11/23
to ope...@googlegroups.com

Nice, I see you understand perfectly what I meant. 😁

The "intent" reconstruction is only needed if (or as long as) it is impossible to teach OpenPnP to send the blending commands/parameters directly. Can you provide examples of these scripts? Is there anything "magic" in them that prohibits us from doing the same math and sending the same commands from OpenPnP? Is there a performance issue?

> Yes, it would certainly be bad design to have features like runout compensation, rotation wrap-around, rotation mode, nozzle alignment, contact probing, and tool-changer Z calibration transferred to the motion controller. But I didn't suggest any of that.

The problem is that all of these are interwoven with the moves we are talking about (or rather with individual move segments). All the rotation stuff must happen "in the air" only, for obvious reasons; the contact probing should seamlessly happen on the down leg (without stopping between positioning and probing); etc.

That's why I would like to keep the "brains" inside OpenPnP, rather than send "intent" with a gazillion parameters applying all those concerns to it. Among other things it would be a configuration and versioning nightmare to keep the scripts on the LinuxCNC side in sync with the OpenPnP version.

If we absolutely need certain scripts / canned cycles for performance reasons, is there a way, perhaps, to declare them via your client? So they could be added to the ENABLE_COMMAND, for instance, or even generated on demand.

_Mark

Chris Campbell

unread,
May 11, 2023, 7:41:20 PM5/11/23
to ope...@googlegroups.com
Here is the script I'm using. I had not shown this earlier because the "language" is quite painful to look at.
The semi-colon lines are comments, most likely you can follow the general idea from that without knowing anything about the code syntax, but the details are here if you're interested to know more: https://linuxcnc.org/docs/html/gcode/o-code.html

There is nothing magic about the calculations, the main advantage is that because everything within the subroutine is processed inside a single command, blending will always function correctly. In my opinion it's also a better "separation of concerns" in cases where the motion controller has enough capability to not require micro-managing.
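The attachment with the actual script isn't preserved in this thread. As a rough sketch of the idea being described (the o-word name, parameter layout, G64 tolerance, and the 5 mm approach height are illustrative assumptions, not Chris's actual code), a blended "lift, travel, lower" subroutine could look like:

```gcode
; Hypothetical sketch of a blended pick/place move as an o-code subroutine.
; #1 = target X, #2 = target Y, #3 = target Z, #4 = safe Z, #5 = feedrate
o<pnp_move> sub
  G64 P0.5                 ; path blending with 0.5 mm tolerance
  G1 Z#4 F#5               ; lift to safe height
  G1 X#1 Y#2               ; horizontal travel, corners blended
  G1 Z[#3 + 5.0]           ; waypoint 5 mm directly above the target...
  G1 Z#3                   ; ...so the final approach is purely vertical
o<pnp_move> endsub
```

Because the whole sequence is interpreted as one unit, the blending survives; the last two segments are collinear, so the path below the waypoint stays exactly vertical no matter how the upper corners are rounded.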

To be honest I don't know if there is any performance issue but I'm not expecting it to be significant on a 3GHz or so processor. In my videos I am using a Raspberry Pi 3 over remote desktop so the result you see there is often not so great. The Pi is spending 90% of one core running the 'axis' GUI and 40% of another core on the real-time subsystem, this is not the case on my real router with a proper computer.

The scripting syntax is quite ugly but works well enough and has basic conditional checks, looping, and can call other subroutine files or bash files. As part of learning how to use it I had already made the rotation spread across segments 2 and 3, and enforce pure vertical arrival like you're describing, I just hadn't demonstrated it so here's a look at that (in the GUI the 4th axis is represented by the tool rotating around the Y axis):

For tightly interwoven procedures requiring frequent interaction with OpenPNP, I was thinking the regular MOVE_TO_COMMAND would be used as normal. Sorry, perhaps I should have made that clearer from the beginning. Basically, any single-segment straight-line moves like registering fiducials, Z calibration, auto-focus, probing etc. would not involve blending, so the standard behavior is already optimal. The MOVE_INTENT_COMMAND (or whatever it might be called) would be an opt-in feature, and only used where multiple segments exist and can be blended without fear of the rounded corners causing problems. So OpenPNP would have the final say, at a higher level, on whether a multi-segment move can be fully handed off to the motion controller as an intent or not. It's probably not trivial to go through the myriad of movement cases and decide which ones this applies to though :)

I'm not sure why there would be a versioning nightmare, since OpenPNP allows us to define whatever g-code we want the motion controller to receive. As long as OpenPNP exposes the required values (like MOVE_TO_COMMAND already does) the g-code can be tailored to whatever the script needs, as seen in my earlier video. Naturally, anyone using LinuxCNC will need to make sure the g-code format they set up in OpenPNP matches with their script, but the same applies for TinyG, grbl or any other controller.

I think the next step would be for me to build a PnP and experiment with real jobs, and most likely conclude that 20-30% of failed blends doesn't even matter lol. It's also possible the blending fails might only be an issue on the Raspberry Pi, and not a 'real' computer. But like I said I might not be building a PnP for quite a while, mainly I was just spurred to make this server a bit earlier to maybe settle down the previous discussion in this thread. So in the meantime if anyone can give me a g-code log of an actual job, I could run that through my server and replay it for real on my router, and maybe uncover some problems to fix.


justin White

unread,
May 11, 2023, 9:17:02 PM5/11/23
to ope...@googlegroups.com
Mark, you mentioned in the other thread that you're in Switzerland. I used to work on these prototype machines made by a company called Graph-Tech AG. Every time I read one of your posts I'm imagining you working on the software in one of those things. That's not a dig, I was fond of those machines. They were just a bit.....Swiss lol



mark maker

unread,
May 12, 2023, 1:11:07 AM5/12/23
to ope...@googlegroups.com

> Every time I read one of your posts I'm imagining you working on the software in one of those things.

Wish I was. Nah, the software we're making is much more boring.

_Mark

mark maker

unread,
May 12, 2023, 2:00:40 AM5/12/23
to ope...@googlegroups.com

Hi Chris,

thanks!

Just to be sure: the only three commands actually "doing something" are the last three in the sub, right?

So you are not adjusting G64, but working with waypoints to ensure straight lines.

But what about moves where the blending is not wanted? Or where it is wanted on one knee but not the other? Can you command G64 in the middle of a move sequence and it will adapt without stopping in between? And btw. did you manage to control acceleration in mid-motion?

I'm talking about moves like feeder actuation moves (could also be a drag feeder for instance):

https://youtu.be/5QcJ2ziIJ14?t=248

Blinds Feeder cover opening:

https://youtu.be/dGde59Iv6eY?t=382

Nozzle tip changer moves:

https://youtu.be/9uFxV1-vnXw

I'm still convinced, OpenPnP should do the math and control waypoints and shape G64 from its side, as only it has the semantic knowledge of what is happening now, and in future versions. And frankly, it is much easier to code all that in Java, don't you think? 😁

Also I'd like for this to be machine universal, and available for all users, even if they are not "ngc programmers". There is existing UI to configure/capture the waypoints of these feeder/nozzle tip changer motion sequences, and new UI could be added for blending options, etc. If you want LinuxCNC to be a valid option for many OpenPnP users, you cannot assume they are able to hack their own canned cycles.

If it turns out we need to generate subs to make sure motion sequences are executed in one fluid go, I hope we can declare them on the fly, through your client, right?

 _Mark

Chris Campbell

unread,
May 12, 2023, 4:37:38 AM5/12/23
to ope...@googlegroups.com
Hi Mark

Yes, the lines with G1 are the only ones that trigger movement, there are four in total but one can potentially be skipped. Yes, I'm using waypoints to ensure the final touchdown is purely vertical, and keeping G64 in effect all the way through.

For moves where blending is not wanted at all, as I mentioned in my previous mail that would need to be done as normal with MOVE_TO_COMMAND (appended with M400 to enforce square corners). If blending is wanted on one knee and not the other, that could either be passed as a parameter to MOVE_INTENT_COMMAND, or just use MOVE_TO_COMMAND and accept the slowdown with no blending at all. From what I have seen of pick and place jobs, blending would be most beneficial for the large movements to and from the parts area to the PCB (rather than feeder actuation or tool changes etc), and I am assuming these large moves can be blended at both knees. So even if the script was only ever used and optimized for those movements I think it would still be worthwhile.

Yes, G64 and G61 can be commanded in the script without stopping. But setting acceleration will reset a blending sequence, so that the segment immediately after the acceleration change will not be blended. It seems that all corners in a single blend sequence must have the same acceleration.

Yes, coding in Java is nicer, but it doesn't solve the problem of failed/missed blends. I just realized I never demonstrated this, so here are a couple of examples from my earlier testing. The same list of g-codes is given, and the desired outcome is a single blended sequence as seen briefly at the beginning before I clear it away. The first attempt dumps all commands into LinuxCNC up-front, and the second attempt uses M400 occasionally to only feed a few lines at a time. In the first case all commands are in the queue at least by the end of the first segment completing, so we could reasonably expect every corner after the first one to be blended, and yet the probability of blending actually occurring is pretty much random. In contrast, a subroutine is processed as a cohesive unit.

Yes, I agree that requiring everyone using LinuxCNC to prepare a subroutine script just to get anything done, would not be user friendly. That's what I meant by "opt-in" where the intent-style command usage would need to be intentionally enabled. Out-of-the-box the default commands would be no different to what they are now, and 70-80% of blends would still work, no script required. If the randomness of blending results is undesirable, users could get more predictable behavior by completely disabling blending with a M400 in their MOVE_TO_COMMAND.

The subroutines are read in from files on the LinuxCNC system, so no, my server would not be able to create them on the fly. Well ok, I suppose it could buffer the incoming g-code and when it sees a M400, create a file and then direct LinuxCNC to call that as a subroutine. My first reaction is wow, that's a whole new level of clumsiness, but it might actually be quite effective and is worth investigating. Looks like Linux can create temporary files in RAM, could be just the thing! And now I'm feeling like all the other paragraphs I wrote above were a complete waste of time until this is checked.... ffs.... I will leave them anyway, so you can see the thought process.

I'm gonna have to spend time on other projects for a while but I will definitely come back to this, since I'm keen to try my hand at building a PnP machine eventually. The current version of my server should be functional enough to meet the needs that gave rise to this thread. fwiw I tried controlling my router from OpenPNP today, so I will leave you with some video of that.



Chris Campbell

unread,
May 15, 2023, 10:51:05 PM5/15/23
to ope...@googlegroups.com
I spent some more time trying the idea of making a subroutine file on the fly... seems to work fine. So you can forget everything I said earlier in this thread, sorry for wasting so much time.

Details here:

I'm not so sure this is any easier to set up for LinuxCNC users than simply copying a script file into place would have been, but it does at least guarantee proper blending without requiring anything that OpenPNP doesn't already do. I would still like to test this on a real job log sometime...


mark maker

unread,
May 16, 2023, 10:22:47 AM5/16/23
to ope...@googlegroups.com

Hi Chris

> The subroutines are read in from files on the LinuxCNC system, so no, my server would not be able to create them on the fly.

Are you sure? Have you tried? Isn't your server just another command source, internally handled the same way as files?

> Yes, coding in Java is nicer, but it doesn't solve the problem of failed/missed blends.

Are you sure the failed blending isn't simply a network latency issue, that could be solved by proper buffering?

Like I said before the M400 could be your buffer flush signal: collect everything in a string and only feed it to LinuxCNC once you get M400, or once you did not receive new commands for 100ms or so.

Every simple MCU controller that I worked with has a sort of grace period to avoid rushing into premature deceleration, so I would be astonished if LinuxCNC didn't.
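The buffering idea described above (collect lines, flush the batch when an M400 arrives or after ~100 ms of silence) could be sketched like this. The class name and the send callback are hypothetical stand-ins for whatever actually feeds commands to LinuxCNC:

```python
import time

class GcodeBuffer:
    """Collect incoming g-code lines and hand them over as one batch,
    either when an M400 flush signal arrives or after an idle timeout.
    `send` is a callable taking a list of lines (a stand-in for the
    real LinuxCNC feed mechanism)."""

    def __init__(self, send, idle_timeout=0.1):
        self.send = send
        self.idle_timeout = idle_timeout
        self.lines = []
        self.last_rx = time.monotonic()

    def feed(self, line):
        self.lines.append(line)
        self.last_rx = time.monotonic()
        if line.strip().upper().startswith("M400"):
            self.flush()                 # M400 acts as the flush signal

    def poll(self):
        # Call periodically; flushes if no new commands arrived recently.
        if self.lines and time.monotonic() - self.last_rx > self.idle_timeout:
            self.flush()

    def flush(self):
        if self.lines:
            self.send(self.lines)
            self.lines = []
```

Whether this actually rescues blending is the open question here, since (as Chris explains below) MDI commands are executed immediately regardless of how they were buffered on the client side.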

_Mark

mark maker

unread,
May 16, 2023, 10:53:44 AM5/16/23
to ope...@googlegroups.com

> I would still like to test this on a real job log sometime...

You can load the sample test job in OpenPnP and run it:

  1. Replace your machine.xml with this one:
    https://github.com/openpnp/openpnp/blob/test/src/test/resources/config/SampleJobTest/machine.xml
  2. It simulates a more realistic test machine, including simulated imperfections like nozzle run-out, non-squareness and camera vibrations. Unlike the default machine.xml it also uses real Z heights. It is connected against an internal G-code controller  (GcodeServer)
  3. Start OpenPnP. Change the driver to connect to LinuxCNC instead of the GcodeServer.
  4. Load the test job:
    https://github.com/openpnp/openpnp/wiki/Quick-Start#your-first-job
  5. Run it.
  6. It should now generate G-code in real time. It reacts to the simulated cameras, of course, rather than to anything LinuxCNC actually does, but because of the simulated imperfections it needs all the right compensations: part alignment corrections, camera settling to wait for the camera to stop shaking, etc. Unfortunately, there is no backlash simulation yet ;-) but you can configure backlash compensation anyway.
  7. You can configure the machine speed on the axes, to make it more realistic, e.g. like your planned machine.
  8. You can configure the imperfections on the Simulation Mode  tab.
  9. Alternatively, you can record a G-code script with the Log G-code option on the driver. However, IMHO this is not a real test, as it does not simulate the timing and the hand-shaking.

I did something similar, i.e., simulate a job, and at the same time drive a real controller/stepper, here (albeit for a different testing purpose of course):

https://youtu.be/cH0SF2D6FhM

_Mark

Jan

unread,
May 16, 2023, 5:16:44 PM5/16/23
to ope...@googlegroups.com
Hi Mark!

On 16.05.2023 16:53, mark maker wrote:
[...]
> 1. Replace your machine.xml with this one:
> https://github.com/openpnp/openpnp/blob/test/src/test/resources/config/SampleJobTest/machine.xml
> 2. It simulates a more realistic test machine, including simulated
> imperfections like nozzle run-out, non-squareness and camera
> vibrations. Unlike the default machine.xml it also uses real Z
> heights. It is connected against an internal G-code controller
> (GcodeServer)

That's very interesting, thank you! May I suggest to add that to the
developers Wiki page?

Jan

Chris Campbell

unread,
May 17, 2023, 2:53:59 AM5/17/23
to ope...@googlegroups.com
Thanks Mark.
When I tried with the simulated machine.xml, it seems like the
simulated GCodeDriver will always be in effect regardless of any other
settings made in the GUI. I made all the required settings the
same as I did before, but it never connects to my server and the log
shows lines like:

GcodeAsyncDriver DEBUG: simulated: port 38445 commandQueue.offer(M204
S187.71 G1 X6.2580 F600.00 ; move to target, 5000)

The g-code logging does work though, so I'll try using that for some
experiments.

Regarding your earlier questions, my server uses a feature called NML
(Neutral Message Language) to communicate with the underlying LinuxCNC
system over the network. Any interface that manipulates a LinuxCNC
machine will use this in some way or another, typically via Python
bindings but mine uses C++ directly. Messages like openfile, run,
pause, resume, abort, start jog, stop jog, real-time feedrate
adjustments etc are passed over a socket, which is typically running
on the same computer as the LinuxCNC core but could also be a remote
endpoint, or multiple remote endpoints. In any case it's not handled
the same way as files.

Among the messages that can be sent is one called MDI (Manual Data
Input). This allows individual g-code commands to be executed
independently of any program, which provides a convenient way to
perform ad-hoc operations or make adjustments. But by far the bulk of
commands will be executed by reading a complete program file up-front
and running through the file with full knowledge of all future
commands. Since the usage scenario for MDI messages is occasional
tweaks that the operator would be typing in by hand, there was never
any notion that these would be expected to be processed as a coherent
group. They will always be executed immediately, and no amount of
"proper buffering" by my server can prevent that.

So although the capability to provide individual commands from a
remote system does exist, it was not intended to be used as the main
source of instructions. LinuxCNC can read in programs many megabytes
in size, so the 'simple MCU controller' approach of drip-feeding
commands from some other system was probably never even conceived of.
Consequently, a 'grace period' is also a foreign concept.

Fortunately the 'o-code' subroutines allow a way to intentionally
group commands together, and have reliable blending. The catch is that
these must be files on disk. So the awkward workaround my server ended
up having to do is create a temporary subroutine file on disk, and
then issue an MDI command (eg. "o<tmp> call") to execute the
subroutine. My server will group commands inside beginsub/endsub
keywords. To ensure jogging via OpenPNP works in a timely manner (and
that clients disconnecting without giving the 'endsub' don't cause
problems) the subroutine will be finalized after some timeout even
when no further commands are given. It's possible that creating a file
as a regular job and running it might work too, but that has more
overhead involved so I suspect it would be slightly slower.
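The temp-file mechanism described above can be sketched in Python. The helper name and directory are hypothetical; in a real setup the directory would have to be on the path LinuxCNC searches for subroutines (the `[RS274NGC]SUBROUTINE_PATH` INI setting), and issuing the call would go through the standard `linuxcnc` Python bindings, shown here only as a comment:

```python
import os

def write_subroutine(lines, subname="tmp", dirpath="/tmp/linuxcnc_subs"):
    """Wrap buffered g-code lines in o-code sub/endsub keywords and
    write them as a subroutine file LinuxCNC can call via MDI.
    The default dirpath is an assumption; it must match the
    [RS274NGC]SUBROUTINE_PATH configured in the LinuxCNC INI file."""
    os.makedirs(dirpath, exist_ok=True)
    body = [f"o<{subname}> sub"]
    body += [f"  {line}" for line in lines]
    body += [f"o<{subname}> endsub", ""]
    path = os.path.join(dirpath, f"{subname}.ngc")
    with open(path, "w") as f:
        f.write("\n".join(body))
    return path

# Executing it would then be a single MDI message, e.g.:
#   import linuxcnc
#   c = linuxcnc.command()
#   c.mode(linuxcnc.MODE_MDI)
#   c.mdi("o<tmp> call")
```

Since the whole file is read as one unit, the interpreter has full look-ahead over the grouped commands, which is exactly what makes the blending reliable.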

This is probably more than you cared to know about the innards of
LinuxCNC, but it might help explain some of the reasoning that went on
in my experiments.
> --
> You received this message because you are subscribed to the Google Groups "OpenPnP" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to openpnp+u...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/94e583ba-22da-845b-ffb9-df9640df3bbd%40googlemail.com.

mark maker

unread,
May 17, 2023, 5:13:03 AM5/17/23
to ope...@googlegroups.com

> This is probably more than you cared to know about the innards of LinuxCNC, but it might help explain some of the reasoning that went on in my experiments.

No, on the contrary, it is the only way to be sure, in order to rule options out, thanks. 😁

> Among the messages that can be sent is one called MDI (Manual Data Input). This allows individual g-code commands to be executed
independently of any program, which provides a convenient way to perform ad-hoc operations or make adjustments.

Are these strictly one line only? No way to send some escaped line delimiters? Any chance for a PR to add that capability?

Even if the temp file approach turns out to be the only way, I'm not overly concerned about performance, on a Linux system. Sharing files is handled extremely efficiently (basically via virtual memory page mapping), and if M400 delimiting is used, the overhead is only incurred once per motion sequence. Compared to the usual physical machine delays, this is nothing. The only small concern would be a shoddy implementation on the LinuxCNC side, perhaps not properly freeing files, memory, etc. as we would be generating, using and dismissing (tens of) thousands of scripts.

> LinuxCNC can read in programs many megabytes in size, so the 'simple MCU controller' approach of drip-feeding commands from some other system was probably never even conceived of.

What about this "interactive session" mdi usage?

http://linuxcnc.org/docs/devel/html/man/man1/mdi.1.html

_Mark

Chris Campbell

unread,
May 17, 2023, 7:23:19 AM5/17/23
to ope...@googlegroups.com
I'm pretty sure MDI does not allow escaping/delimiters etc. Where
multiple commands are required, normal procedure would be to read them
from a file. The man page you linked to is for a "shell" type of
utility, that again only accepts one line at a time.
https://forum.linuxcnc.org/48-gladevcp/29354-multiple-mdi-commands-in-a-vcp-action-mdi-widget

Yes I think performance will be fine. The location for subroutine
files can easily be set to a tmpfs path. I'm overwriting the same file
name every time, so I assume it's not occupying much new memory over
time. This Raspberry Pi is kinda gutless but it seems to be running
things surprisingly well.

Here is a run with two boards of the simulated job where I pasted the
log into my server to replay it. I omitted set-acceleration commands
because they break blending, so acceleration is the same for all
movements.
https://youtu.be/F68xNnZgNAk

mark maker

unread,
May 17, 2023, 12:17:17 PM5/17/23
to ope...@googlegroups.com
> The man page you linked to is for a "shell" type of
> utility, that again only accepts one line at a time.

Doesn't sound like it, quoting:
interactive session
$mdi
MDI> m3 s1000
MDI> G0 X100
MDI> ^Z
$stopped
It seems to read from stdin.

The only reason I'm so "stubborn" here is that it would be "stupid", from a software development standpoint, to be able to read from files but not from other types of streams, like stdin, pipes, or sockets. It would not make sense, especially for something called "LinuxCNC", i.e., built with a UNIX mindset.



_Mark

justin White

unread,
May 17, 2023, 6:02:10 PM5/17/23
to ope...@googlegroups.com
The "interactive" example doesn't work. You can run the single-command example and get the MDI> prompt, but everything fails with "invalid syntax":
$ mdi
MDI> G0 X10
Traceback (most recent call last):
  File "/usr/bin/mdi", line 40, in <module>
    mdi = eval(input("MDI> "))
          ^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 1
    G0 X10
       ^^^
SyntaxError: invalid syntax

I doubt this ever worked. There are plenty of "stupid" things in LinuxCNC that seem to have been inherited from EMC and either don't quite work, or nobody knows how well they work because no one really uses them.
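For what it's worth, that traceback looks like a Python 2-to-3 porting bug: under Python 2, `input()` itself evaluated the typed text (so `eval(input(...))` "worked"), while under Python 3 `input()` returns a plain string, and `eval("G0 X10")` is a SyntaxError, exactly as shown above. A small illustration, with the likely fix sketched only in comments (`c.mdi()` being the standard `linuxcnc` Python binding for sending an MDI line):

```python
# Reproduce the failure mode: eval() cannot parse raw g-code text.
try:
    eval("G0 X10")          # Python 3: SyntaxError, matching the traceback
    raised = False
except SyntaxError:
    raised = True
assert raised

# The likely fix is to keep the typed line as a plain string and pass it
# on unchanged (a sketch, not the actual mdi.py patch):
#   cmd = input("MDI> ")
#   c.mdi(cmd)              # linuxcnc.command().mdi(...)
```

That would also explain why it works for Chris on one install and not on Justin's: it depends on which Python the wrapper script runs under.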

Mark, you could pretty easily set up a LinuxCNC sim to mess with things like this without getting too involved.


Chris Campbell

unread,
May 18, 2023, 5:37:39 AM5/18/23
to ope...@googlegroups.com
That "mdi" utility is a very simple wrapper that passes MDI commands to the LinuxCNC core in the same way as my server does, but via Python bindings. The "session" referred to in the man page is the shell session, not an ongoing group of gcode commands. The commands will still be sent over a socket, and LinuxCNC will still start execution immediately. If you could type fast enough some segments would be blended and some not, same as with my server. Any lines that are not a valid gcode command are ignored.
https://github.com/LinuxCNC/linuxcnc/blob/master/src/emc/usr_intf/axis/scripts/mdi.py

Yes it's true that everything is handled as a file descriptor, including sockets, but that doesn't mean they all behave the same. You cannot know whether a socket will ever provide more data, but with a file on disk you can tell up-front how much is available to look-ahead at. For sockets the typical strategy is to let the receiving side know to expect more data by sending some kind of header, to clearly express that intent.

btw that utility does actually work for me, at least on the RPi.

As a side note, I did come across one case where multiple gcode lines can be given as part of a larger motion, and will be processed as a group. This is marked as experimental:

But the ability to use subroutines kinda nullifies the need for such features. That nurbs feature can be done in a subroutine, and yesterday I was able to make a 3d cubic bezier subroutine with parameterized departure/arrival direction without too much trouble.
Calling subroutines in separate files is also more convenient than having to edit the main program. Right now I'm making a soldering bot for through-hole pins, where the actual repetitive soldering motion will be a subroutine that I can highly optimize as a separate file, and the main program can remain succinct and easy to set up for different boards, eg.:
   g0 x27 y30
   o<solder> call [1.7]
   g0 x27 y32.5
   o<solder> call [3.9] ; ground plane pin, longer dwell

Again, I seem to have waffled about a lot of things not really relevant to the main topic but might help to see the background behind MDI being a fairly primitive interface. On LinuxCNC gcode commands are expected to be read from some kind of disk file 99.9% of the time, and the socket connection is for non-gcode controls like start, pause, abort, jog etc. I think MDI would have been created merely as a convenience to not require a disk file for the occasional adjustment where manual jogging wasn't quite precise enough.




Mark

unread,
May 18, 2023, 7:42:24 AM5/18/23
to ope...@googlegroups.com

> On LinuxCNC gcode commands are expected to be read from some kind of disk file 99.9% of the time, and the socket connection is for non-gcode controls like start, pause, abort, jog etc. I think MDI would have been created merely as a convenience to not require a disk file for the occasional adjustment where manual jogging wasn't quite precise enough.

Agree. I have since browsed LinuxCNC source a bit and I guess it has evolved over many iterations/generations, has "sedimentary" layers upon layers, including, if I understand correctly, quite some Python stuff calling back and forth from/to C/C++ with some ugly looking string based bindings. Does not make it any simpler to understand, let alone modernize it. This seems to confirm what you say: LinuxCNC is designed to spool off prerecorded G-code files, where it doesn't matter if some cumbersome (and slow) preprocessing takes place. Guess you'd have to live with generated temp files.

Needless to say, this is not ideal. The same brain effort, invested instead into extending a modern MCU-based controller firmware, i.e., adding blending and combining it with 3rd-order motion control (or similar), looks ever more promising to me.

_Mark

justin White

unread,
May 18, 2023, 11:25:00 AM5/18/23
to ope...@googlegroups.com
btw that utility does actually work for me, at least on the RPi.
You're running a build compiled from git, right? I assume you're on 2.10 then. I just fired up a VM with 2.9 from the Bookworm apt repo that I use for some other work; it was definitely broken in that install.

Needless to say, this is not ideal. The same brain effort invested into extending/developing a modern MCU based controller firmware, i.e. to add blending, combine it with 3rd order motion control (or similar), looks ever more promising to me.
Well it's not the way LinuxCNC is intended to be used, and OpenPnP obviously wasn't intended to support this either. Just out of curiosity, what CNC controllers actually support the "send me G-code" idea, besides the 3D printer firmwares? AFAIK this is a relatively new trend that started with 3D printers, and that being the case, none of these extended g-codes would be a thing in a conventional CNC control. The fact that LinuxCNC supports it at all is pretty surprising.

Honestly, the work Chris has done is more than I expected to be necessary, and it seems he's done a hell of a job. I've just been passively following along for now, but it sounds like what's here is more than enough to get going. The limitations so far don't seem to be dealbreakers. The advanced motion stuff wasn't my actual interest personally. LinuxCNC is a different ecosystem that you probably wouldn't understand unless you actually used it for something it was intended to do. Ultimately, and I'm not a code person so grain of salt, but it seems the absolute end-all would be to extend LinuxCNC's MDI interface and maybe work some of halrmt's direct HAL control into the server.

mark maker

unread,
May 18, 2023, 12:50:20 PM5/18/23
to ope...@googlegroups.com

> Honestly the work Chris has done is more than I expected to be necessary and it seems he's done a hell of a job.

In case this was somehow unclear (was it?): in no way was it intended to denigrate Chris' work, or whatever. On the contrary, I was asking myself if his obvious talent was somewhat "wasted" on the wrong approach. 😇

> Just out of curiosity, what CNC controllers actually support the "send me Gcode" idea, besides the 3D printer firmwares?

I only know Open Source controllers. It could be argued that Grbl, the grandfather of them all, and TinyG are not per se 3D printing controllers; AFAIK they are used for NC routers first and foremost.

> AFAIK this is a relatively new trend that started with 3D printers

Actually, the "reactiveness" is not needed for 3D printing, at all.

But I guess it is a very important hallmark of any robotic use case that employs computer vision or other forms of "sensing", "reasoning" and "reacting" in ways beyond mere touch probing. It just opens up massive new possibilities. Think machine learning.

I'm not saying these use cases can't be done with LinuxCNC. But based on the facts available to me, after having browsed the code a bit, this would be highly proprietary, in many ways rigid and old style (files only?), unnecessarily complex, and still quite limited, for instance in not supporting third order motion control, or even fluid acceleration control. And those features would be almost impossible to add, given this system is clearly "grown organically" with a gazillion layers and module dependencies.

The "proprietary" in there is also very important to me. Open Source can only thrive if interfaces are more or less standardized, as G-code is. So stuff in every shape and form can be put to work together. So an enthusiast community can evolve an NC router into a 3D printer, into a laser cutter, into a PnP machine. "Entry cost" is also important. The simplest controller can be an Arduino and driver shield for a few bucks.

All this leads me to conclude that Chris should not spend too much time making this work in overly elaborate ways. There would be more worthwhile endeavors! 😎

_Mark

justin White

unread,
May 18, 2023, 10:26:46 PM5/18/23
to ope...@googlegroups.com
All this leads me to conclude that Chris should not spend too much time making this work in too elaborate ways. There would be more worthwhile endeavors! 😎
I get what you think is a "more worthwhile endeavor", but I have no idea what that has to do with what Chris has done here. The posts are here on the OpenPnP group for obvious reasons, but realistically it's something added to LinuxCNC, not OpenPnP. There was some discussion about this on the LinuxCNC forum, but PnP machines are a really small niche in LinuxCNC, so there isn't much momentum behind it there. I'm not sure anyone expected OpenPnP devs to support this in any way, so again I'm not sure how this is sucking oxygen away from anything else that OpenPnP would consider a better use of time. Chris has a router running LinuxCNC, and I don't want to speak for him, but I'll assume that's where his interest came from, since he's never otherwise used OpenPnP and doesn't have a PnP machine.

As for whether LinuxCNC is at all suitable for a PnP machine in general, well, we've all seen a pretty impressive example done solely by a single guy years ago. This is just getting the ball rolling; nobody even has a machine running this setup yet. Down the road maybe there will be some interest on the LinuxCNC side in making some of the extensions I mentioned above. As far as code quality goes, I suppose that's how 40-year-old open source projects go. Tormach obviously didn't think too terribly of it when they decided to run it on all of their machines.

The simplest controller can be an Arduino and driver shield for a few bucks
Lol yes, but do you trust an 8-bit micro to do all of the advanced motion control you're suggesting? Let's not forget, you need a half-decent PC to run OpenPnP; here LinuxCNC could easily be running on that same PC (which is my intention), so it's actually cheaper than an Arduino in that respect. LinuxCNC's electronics are not "motion controllers" in the same sense as the firmwares familiar around here; they're not there to think about motion or interpret G-code, they're there to do very specific things like generate steps and read encoders, because CPUs these days are terrible at those things. There's Arduino firmware for LinuxCNC that does this and wouldn't cost $1 more than what you've suggested here, but there's also a firmware targeting low-cost 32-bit MCUs called Remora. https://remora-docs.readthedocs.io/en/latest/

The nice thing about the approach in this thread is that, as Chris is doing, you can use an RPi to run LinuxCNC, and since this is a network server it does the obvious thing that OpenPnP supports. An RPi is far more suitable for running the hard bits of motion control than an MCU, but terrible at the discrete bits like generating steps. I'm not sure about Chris's implementation, but the original linuxcncrsh was also a replacement for the "display" interface, so the machine running LinuxCNC could be headless and not worry about drawing display bloat. It can then further offload the discrete hardware control to the MCU firmware mentioned above, or to an FPGA, the bane of LinuxCNC discussions, where there will probably never be a reasonable argument that an MCU is better.

I understand why you think better MCU firmware is a better endeavor, but I doubt you fully appreciate the possible benefits of this approach. Personally I'm not worried about ultimate performance, but imagine you bought a fleet of sturdy old commercial PnP machines and wanted to retrofit them with newer, non-proprietary software. This is exactly what the LinuxCNC world excels at for machine tools. The thing's got a bunch of old weird servos in it and crazy pneumatics, and it weighs over 1,000 lbs... I don't think anyone would have a problem wanting to use OpenPnP in these things; they might not want to stuff an Arduino in them, though.


mark maker

unread,
May 19, 2023, 3:39:48 AM5/19/23
to ope...@googlegroups.com

Thanks for the effort. I think, I understand all that more or less, and still come to a different conclusion, maybe because ultimately, I am talking about finding a nice lean and mean solution for PnP, not about finding new use cases for LinuxCNC.

But frankly, that Remora project (quote: "Remora was primarily developed to use LinuxCNC for 3D Printing") is like rectal dentistry. It can be done, I'm sure, and it is amazing that it can be done. But it is not reasonable to do so. The supported controllers are perfectly capable of 3D-Printing all by themselves, offline, while you can do something else on your computer, or switch it off. I for one have two 3D-printers, and I would not want to only use one at a time.

And yes, after browsing the source code, I do believe that everything LinuxCNC can do in its motion planner core can be done on these modern 32-bit controllers. We'd need a reasonable subset of (inverse) kinematics, plus a reasonable subset of the G-code interpreter. You would, of course, do away with all the Python back and forth, the intermediate file handling, and that nasty user-space-to-kernel interface, and the HAL would have to be much more focused. You have to realize that when LinuxCNC was implemented (with about these core capabilities), PCs' CPUs were weaker than today's MCUs (at least those with FPUs, although I'm not sure the first LinuxCNC PCs necessarily had FPUs). The lines blur even more if you talk about the Raspi.

Note that I still left out a whole lot of the LinuxCNC universe. If you absolutely insist on using an FPGA card for Servo closed loop driving, instead of buying a dedicated servo driver, then LinuxCNC it is.

The Arduino I only mentioned because it is actively used with OpenPnP now (Grbl). Certainly I would not build a new controller firmware with it. 😇

_Mark

justin White

unread,
May 19, 2023, 5:33:02 AM5/19/23
to ope...@googlegroups.com

But frankly, that Remora project (quote: "Remora was primarily developed to use LinuxCNC for 3D Printing") is like rectal dentistry. It can be done, I'm sure, and it is amazing that it can be done

Lol that's hilarious. You could be 100% right; like I said, I'm not a code guy, but I applaud the effort even if the implementation is sour. The idea is fine; I don't use it, so it could be a very hot mess.

On the other hand, as I was speaking to my friend about helping me with the G-code driver, I had to explain a bit about OpenPnP. He's something of a "C supremacist"; he said "eww, Java and HTML?" He's helped me with a few LinuxCNC things, and while he probably knows very little about machine control, he's never had a bad thing to say about the LinuxCNC codebase.

You're fairly critical of other coding efforts; maybe OpenPnP is perfect, no idea. I know as a user I picked up LinuxCNC extremely quickly because its HAL implementation isn't written for people like you, it's written for people like me. That's a powerful thing in and of itself. That code being slightly inefficient is probably the least of anyone's problems these days.

You have to realize, that when LinuxCNC was implemented (with about these core capabilities), PCs' CPUs were weaker than today's MCUs (at least those with FPUs, although I'm not sure the first LinuxCNC PCs necessarily had FPUs).
No, I'm aware of that, and I agree. The CPUs were far weaker in overall ability but much better at banging on pins. Software stepgens on a parallel port were a serious consideration, whereas now you have to offload them because you can't guarantee the CPU can do it on time. LinuxCNC has shifted focus there; nobody uses software stepgens or encoder counters anymore. I think you also have to realize that way back when CPUs were "slower than MCUs", the PCs generally were not reading encoders and generating steps; it was tachs, resolvers, and brushed DC servos. Machine control was still very analog when PCs sucked.

If you absolutely insist on using an FPGA card for Servo closed loop driving, instead of buying a dedicated servo driver, then LinuxCNC it is.

That's not at all the point. There's no firmware that closes the loop in software outside of LinuxCNC. I use servo drives over step/direction and let the drive close the loop. High-rate stepgens just allow a high degree of control without dealing with proprietary servo systems; the encoder feedback is best used just to monitor what's going on. The option for LinuxCNC to close the loop in software is certainly still viable, though, since it only has to worry about getting a position update from the FPGA at the thread period rather than trying to count every single pulse.

I think you kind of glossed over the question of what a high-end retrofit or new build looks like in the current OpenPnP world. I know it's not OpenPnP's job to worry about hardware directly, but if you consider all the things that go into it, it's not a simple thing to figure out, and that's not OpenPnP's fault; the ecosystem doesn't seem to exist. Whereas here the question is "how do you get LinuxCNC and OpenPnP talking a little better". Regardless of what you think of LinuxCNC, it can be and is used relatively simply to retrofit high- and low-end machines; there's no question there. Nothing discussed here is something LinuxCNC cannot do itself right now; the only questions are about "how do we tell it to do that".

Bert Lewis

unread,
May 19, 2023, 10:50:52 AM5/19/23
to ope...@googlegroups.com
I for one am very interested in this development. My use case would be to be able to pick up a point using a camera instead of touching off as we do now. Sometimes probing is not the best when the thickness is only .032” and the height could vary .01”; then the probe tip may be wrong.
Here is a pic of the type of thing I want to do. Of course I could just jog it there and manually use a camera for touch-off. But how cool would it be to have that automatic, as OpenPnP does so well?

[Attached image: image0.jpeg]

Bert Lewis

On May 19, 2023, at 5:33 AM, justin White <blaz...@gmail.com> wrote:



mark maker

unread,
May 19, 2023, 2:04:35 PM5/19/23
to ope...@googlegroups.com

It almost seems you deliberately try to misunderstand and misrepresent what I'm saying. 😭 I hope others following this read what I really said, and in context.

The retrofit argument I don't get. Reading this group for years, I remember people have retrofitted (old) PnP machines with the common MCU based controllers many times. Why should LinuxCNC be easier?

Remember: Unless when I said otherwise, I was and still am always talking about PnP. Conversely, I do see how LinuxCNC can be an ideal candidate for retrofitting a router etc. as most of its peripheral software and GUIs seem ready-made for that (I'm not a router guy, so this may be a false impression).

But make no mistake: I do welcome LinuxCNC as a valuable option for OpenPnP. I provided help and information to Chris over many iterations. Yes, I'm a tech guy, I want to know facts rather than opinions, so I asked hard questions too. Chris provided the answers quickly and very competently. We might have had different opinions about some things, but as far as I understood, this was all in the realm of constructive tech talk. Chris will have to say.

Yes, I do become extra critical when OpenPnP is supposed to somehow handle LinuxCNC differently than other controllers; when we are effectively asked to break up the existing motion model and partially transfer it to LinuxCNC, and in proprietary ways. I'm not completely ruling such solutions out, but it must provide a super-duper advantage, with no alternative way to achieve the same result, in order to justify breaking the current model. So I'm asking hard questions about the super-duper advantage, and about the complete ruling-out of alternative ways, and when I don't see both, I do speak my mind.

My current take is that Chris can make the connection to LinuxCNC work through G-code. To support stable blending (and likely also to avoid premature deceleration), he likely needs to script the recorded commands into files and then execute them when M400 is received, or after a small timeout (for manual jogging).

Then OpenPnP would be extended (by me) to support the G64 blending parametrization on the fly (corners with and without allowable blending), and to control how far the Z overshoot should be (effective Safe Z), based on X/Y move distance, and Z travel (run up) at the beginning and end. All that can be communicated through standard G-code. These cool options would be configurable in the OpenPnP GUI, and thus available to non-expert users.

The blending is a big speed winner for Pick & Place, so the LinuxCNC option would be duly noted, once the first videos are published. I do like this all the more, because it actually benefits the simpler DIY machines too, those would be able to run faster, due to much reduced vibrations.

Anything more elaborate, like transferring "intent" for larger move sequences, and by necessity then also transferring responsibility for all the different backlash compensation methods, runout compensation, etc. as listed earlier, thus breaking up OpenPnP's motion model, I don't see. I would have been more open to such options if LinuxCNC had turned out to be sooooo advanced in comparison that it would have made sense to practically "bow to it". Based on my current information, it does not. That's all I'm really saying.

_Mark

mark maker

unread,
May 19, 2023, 2:10:29 PM5/19/23
to ope...@googlegroups.com

> When I tried with the simulated machine.xml, it seems like the simulated GCodeDriver will always be in effect regardless of any other settings made in the GUI.

Sorry, yes, I completely forgot about how this works inside. Will fix it.

_Mark


On 5/17/23 08:53, Chris Campbell wrote:

mark maker

unread,
May 19, 2023, 2:54:34 PM5/19/23
to ope...@googlegroups.com

Hi Chris,

It is now possible to connect to a real controller, while still simulating everything else. Details here:

https://github.com/openpnp/openpnp/pull/1561

You need to switch off the checkbox:

Simulation Mode

Please download/upgrade the newest test version, allow some minutes to deploy.

https://openpnp.org/test-downloads/

Note, I wasn't actually able to test with a real external controller, so your giving this a quick test job run and reporting back would be very welcome, thanks!  💯 😁

_Mark

Chris Campbell

unread,
May 20, 2023, 4:01:38 AM5/20/23
to ope...@googlegroups.com
Thanks Mark.
https://youtu.be/cxrg2g4NGII

Since this ping-pong interaction actually tests the M400 properly, it
uncovered some issues where an unexpected endsub would be
inadvertently passed through to LinuxCNC, and the lack of 'ok' reply
for beginsub/endsub would cause OpenPNP to timeout waiting. After
fixing these issues it seems to be fine, although it took many hours
to get a clean run for the video. Even after setting "ideal machine"
and disabling all the simulated problems like runout, noise, homing
error etc I still had huge problems with constant errors from the
vision complaining it couldn't find tape fiducials or see the picked
part with the bottom cam.

After a looooong time I realised the problem was that the vision
detection will trigger immediately when the M400 reply arrives, but
the simulated camera frame might not yet have caught up. This was
caused by me lowering the axis feedrates to closely match the (slow!)
display of the RPi LinuxCNC, in an attempt to make the video look
better. The fix was to just make the OpenPNP side move faster, so that
the camera frames would always be ready for detection when the M400
arrived. So in the video the two sides are not synchronized very well,
but at least I didn't have to keep manually prodding it to get a job
finished. This would not happen in an actual installation where the
camera always sees reality.

After running some real job gcode with it, the main inconvenience I
see is that acceleration settings cannot be handled well. The reason
is that changing the acceleration will interrupt blending. This is
another consequence of LinuxCNC being created for a different world,
where the tool does not pick up and carry things, and the mass of an
endmill is minuscule compared to the mass of the machine itself. So
the acceleration for each axis would typically be defined just once
after building the machine, the only consideration being that the
motors can actually perform the
demanded acceleration without losing steps. Anyway, acceleration can
be set only once per sequence of contiguous blended segments.

To ensure that blending occurred in my video I set the acceleration
manually before the job, low enough that blended corners will be
visible, and the job does not issue any acceleration changes at all. I
suspect it would be ok to do this for real jobs, either by setting it
low enough to accommodate whatever the heaviest package requires and
taking a speed hit on the smaller parts, or by setting up a separate
job where smaller parts are free to accelerate at warp speed.

I could modify my server so that only one acceleration per 'batch'
would actually be applied, and the move would remain blended. But this
would then rely on OpenPNP supplying the first acceleration as one
that could be suitable across the whole movement, which I don't think
is how it works at all right now. It also starts to look like passing
an intent which we don't talk about. I'm kinda thinking this type of
workaround is not really necessary, although I have never actually
used a PnP to know how annoying it would be to not have accelerations
individually tailored for every move.


On Sat, May 20, 2023 at 6:54 AM mark maker <ma...@makr.zone> wrote:
>
> Hi Chris,
>
> It is now possible to connect to a real controller, while still simulating everything else. Details here:
>
> https://github.com/openpnp/openpnp/pull/1561
>
> You need to switch off the checkbox:
>
> To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/c8923bec-c59c-4f19-af17-8cd590c63706%40makr.zone.

mark maker

unread,
May 20, 2023, 6:06:34 AM5/20/23
to ope...@googlegroups.com

Great news, Chris, and thanks for the testing!

> After running some real job gcode with it, the main inconvenience I see is that acceleration settings cannot be handled well.

I think I could optimize the Motion Control Type EuclideanAxisLimits in OpenPnP, where the axis accelerations are constant as long as the desired speed factor does not change. Consequently, I could make sure the acceleration command is only sent once, when the speed factor is deliberately changed by code (or the user).

But I would need an M-code that sets acceleration limits for individual axes, something like this:

https://reprap.org/wiki/G-code#M201:_Set_max_acceleration
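The "only send it when it changes" idea is simple enough to sketch. This is a hypothetical helper, not actual OpenPnP code (which is Java), and the M201 word format just follows the RepRap convention linked above:

```python
# Hypothetical sketch: emit a per-axis acceleration command only when the
# value actually changes, so blending is not interrupted by redundant
# commands. The "M201" code and axis-letter word format follow the RepRap
# convention linked above; a LinuxCNC setup would use its own user M-code.

class AccelEmitter:
    def __init__(self):
        self._last = {}          # axis letter -> last commanded acceleration

    def command_for(self, axis, accel):
        """Return a G-code line if the acceleration changed, else None."""
        if self._last.get(axis) == accel:
            return None          # unchanged: send nothing, keep blending
        self._last[axis] = accel
        return "M201 %s%.0f" % (axis, accel)

emitter = AccelEmitter()
print(emitter.command_for("X", 4000))   # first time: a command is emitted
print(emitter.command_for("X", 4000))   # unchanged: None, nothing sent
print(emitter.command_for("X", 1500))   # speed factor changed: new command
```

The same cache would be cleared whenever the controller is reset or reconnected, so the first move always re-establishes the limits.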

Change of speed factor only happens at specific waypoints:

  1. In normal Pick&Place it is only changed when a particularly delicate or heavy part is picked (Speed % setting on part), and then reset once that part is placed. Both at still-stand, so no penalty there.
  2. In the nozzle tip changer motion sequence, as well as in PushPullFeeder and BlindsFeeder motion sequences, the speed factor may change on individual legs of the motion sequence, but these are so rare it won't matter and I guess we don't want blending there anyways.

https://github.com/openpnp/openpnp/wiki/GcodeAsyncDriver#motion-control-type

Regarding G64: I can parametrize G64 (P word) specifically for the OpenPnP moveToLocationAtSafeZ() move sequence, along with the optimal Z overshoot. These move sequences go from still-stand to still-stand (except in the case of contact probing, which I guess is difficult anyway). I'm not yet sure how the G64 P word must be set to ensure straight Z up until it reaches nominal Safe Z, and only then allow it to blend. The inverse consideration applies at the diving end. We don't want it to knock over parts that were already placed: think of a large electrolytic capacitor on the nozzle being inserted between two already placed ones. Also note that the optimal overshoot curve may be lopsided, if the Z raise/dive is asymmetric, or even twisted (S-shaped) on dual-nozzle shared-Z machines.

I'm also not entirely sure if LinuxCNC can blend more than two segments (I think I've seen some evidence in the source that it can only blend the deceleration/acceleration phases of two subsequent segments, but there could be some overarching optimization I missed). If it can't blend beyond two segments, we can work with extra waypoints at normal Safe Z, and just allow full Safe Z zone blending via the P word.
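As a rough illustration of the waypoint idea (the sequence shape and numbers are my assumptions, not current OpenPnP output; in LinuxCNC, G64 P sets the allowed path deviation and G61 restores exact-path mode):

```python
def safe_z_sequence(x1, y1, z_down, safe_z, blend_tol):
    """Sketch only: build a G-code line list for a move-at-Safe-Z sequence
    with explicit waypoints, so the Z rise and dive stay straight and only
    the corners at Safe Z may blend within the G64 P tolerance."""
    return [
        "G64 P%.3f" % blend_tol,       # allow blending within tolerance
        "G1 Z%.3f" % safe_z,           # straight rise to nominal Safe Z
        "G1 X%.3f Y%.3f" % (x1, y1),   # traverse at Safe Z (corners blend)
        "G1 Z%.3f" % z_down,           # straight dive at the target
        "G61",                         # exact path again for placement
    ]

for line in safe_z_sequence(120, 80, z_down=-10, safe_z=15, blend_tol=0.5):
    print(line)
```

Whether the corner blending actually stays out of the straight Z legs then depends on the tolerance chosen relative to the segment lengths, which is exactly the open question above.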

Btw. great write-up on your Github!


The way OpenPnP currently works, it is difficult for me to send beginsub. I cannot look ahead to know whether the upcoming sequence will be a jogging sequence (which should be driven as individual MDI commands) or a sub. I currently only know this after the fact.

Could you somehow make it so you decide at your end? Buffer commands until the next M400 is received, or until a timeout expires, whichever comes first; if you got M400, make a sub out of the buffer, else if it timed out, send the buffered lines as individual MDI commands.

If this is too hard, just tell me, then I need to dig into it, to know this ahead of time. But if so, should we then not use the rs274ngc o-codes to define the subs?

If I wanted to do the same simulation as you did, but without any physical driver/motor attached, how do I best do it?

If I understand this correctly:
I could just install the user space part on my Linux, right? This would likely give me the best graphical simulation speed.

Or is a VM better? If yes, which .iso should I use? And would a VM (hosted on a powerful PC) be fast enough to simulate graphically with true speeds (unlike the Raspi)?

And what about the dev files you mention? Are these already in the .iso? If I really need to compile LinuxCNC myself (as you seem to imply?), then I definitely want it inside a VM. It sounds complicated though, quoting the "greeting card" from the developer manual:

"That will probably fail! That doesn’t make you a bad person, it just means you should read this whole document to find out how
to fix your problems. Especially the section on Satisfying Build Dependencies." 😕

Thanks again, Chris!

_Mark

justin White

unread,
May 20, 2023, 9:47:16 AM5/20/23
to ope...@googlegroups.com
If I understand this correctly:
I could just install the user space part on my Linux, right? This would likely give me the best graphical simulation speed.

Or is a VM better? If yes, which .iso should I use? And would a VM (hosted on a powerful PC) be fast enough to simulate graphically with true speeds (unlike the Raspi)?
It's very easy to install LinuxCNC now; the docs are old, from when it used to have major dependency issues.

Your best bet, if you are running Debian Bookworm or maybe Ubuntu Kinetic, is to install it from apt, since it's in the repos:
$ apt install linuxcnc-uspace linuxcnc-uspace-dev

If you want to compile it, make sure you use the 2.9-branch git source.

You should not use the debs on the website, since 2.8.x requires Python 2 and most distros have dropped it. If you want to use the ISOs for a VM, that's fine, since it's a whole Debian system, but 2.8.x will be replaced soon, so it might not be a great idea. After installing LinuxCNC, launch it and choose "Axis sim". You will get an RT kernel with the Debian package, but you won't need it for a sim.

You received this message because you are subscribed to a topic in the Google Groups "OpenPnP" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/openpnp/IshRY1IM80w/unsubscribe.
To unsubscribe from this group and all its topics, send an email to openpnp+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openpnp/e5938744-5cae-9c9e-13d9-be5c3cee5e00%40makr.zone.

Chris Campbell

unread,
May 20, 2023, 9:48:39 AM5/20/23
to ope...@googlegroups.com

Hi Mark,
My server already does as you described. If a 'beginsub' has been given, it buffers commands until 'endsub' is received (or a timeout expires), then creates a subroutine file and calls it. From OpenPNP this is achieved by setting up the gcode formatting as in these screenshots. The reason I made up the beginsub/endsub keywords is to keep my server a bit more generic, rather than explicitly listening for M400.

[Screenshots: Selection_1189.png, Selection_1190.png]

Any further 'beginsub' will be ignored if a buffer is already in progress. Full details are outlined in my github readme near the bottom.
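For readers following along, the buffering just described could be sketched like this. The names and the o-word wrapper are illustrative, not Chris's actual server code; real details like file handling and how the sub gets called are in his readme:

```python
class GcodeBuffer:
    """Sketch of the beginsub/endsub buffering described above. Commands
    between the markers are collected into an O-word subroutine body;
    anything outside a buffer goes straight through as MDI."""

    def __init__(self):
        self.buffer = None                 # None = not currently buffering

    def handle(self, line):
        """Returns ('mdi', line), ('sub', text), or None (swallowed)."""
        if line == "beginsub":
            if self.buffer is None:        # further beginsubs are ignored
                self.buffer = []
            return None
        if line == "endsub":
            body = self.buffer or []
            self.buffer = None
            text = "o<seq> sub\n" + "\n".join(body) + "\no<seq> endsub"
            return ("sub", text)           # write to a file, then call it
        if self.buffer is not None:
            self.buffer.append(line)       # collected for the subroutine
            return None
        return ("mdi", line)               # immediate MDI (e.g. jogging)

buf = GcodeBuffer()
buf.handle("beginsub")
buf.handle("G1 X10")
kind, text = buf.handle("endsub")          # kind == "sub"
```

A timeout path (flushing the buffer as individual MDI lines if no endsub arrives) would bolt onto the same state machine.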

To set acceleration I'm using M171, which simply calls a bash script I made with the file name "M171" and passes up to two parameters. See "M Codes" in the LinuxCNC docs.
https://linuxcnc.org/docs/html/gcode/m-code.html#mcode:m100-m199
Currently I'm only using a single parameter and setting the same acceleration value for all axes. Since this functionality only allows two parameters, it would require separate commands to set all three axes separately. It's no problem to make M172, M173, etc. to handle each axis individually, or maybe use one parameter for XY and one for Z; I'm assuming the X and Y accelerations would not actually be different? E.g.:
   M171 {AccelerationXY:P%.0f} {AccelerationZ:Q%.0f}
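For context: a LinuxCNC user M-code is just an executable named after the code (here "M171") found on the USER_M_PATH, and the P and Q word values arrive as the first two command-line arguments. A sketch in Python rather than bash; how the values actually get applied is machine-specific, so that part is only a placeholder:

```python
#!/usr/bin/env python3
# Sketch of a LinuxCNC user M-code: an executable file named "M171" on the
# USER_M_PATH. LinuxCNC invokes it with the P and Q word values as the
# first two command-line arguments.
import sys

def parse_accels(argv):
    """Return (accel_xy, accel_z) from an M-code argv like ['M171', P, Q]."""
    if len(argv) < 3:
        raise SystemExit("M171 needs P (XY accel) and Q (Z accel)")
    return float(argv[1]), float(argv[2])

def apply_accels(accel_xy, accel_z):
    # Applying the values is machine-specific (e.g. halcmd setp on the
    # relevant pins of your config); this placeholder only reports them.
    print("XY accel -> %.0f, Z accel -> %.0f" % (accel_xy, accel_z))

if __name__ == "__main__" and len(sys.argv) >= 3:
    apply_accels(*parse_accels(sys.argv))
```

So `M171 P4000 Q1500` in the G-code stream would run this script with `4000` and `1500` as its arguments.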

Yes, LinuxCNC will not blend more than two segments, that's correct. The path will pass through the midpoint of each segment tangentially. This is a little more primitive than would be optimal in many cases, on the other hand it makes waypoints a predictable way to direct the path around obstacles. For example in this screenshot, all four paths start from x0y0. The three paths on the left are:
  g1 z20
  g1 x20
[Screenshot: Selection_1202.png]
There is no difference between accelerations 10 and 60 because the path is constrained to pass through the midpoint of the up-going segment. If there was more room (longer segments) the acceleration of 10 would follow a larger radius.
The path on the right is:
  g1 x20
  g1 z5
  g1 x40
Note here that the radius is much smaller than in the other case where acceleration was also 60, because the path is constrained to be vertical at the midpoint. Obeying this constraint affects velocity: the line on the right needed to slow down to about half the velocity to achieve that tighter radius within acceleration limits. Anyway, for segments that start and end a blend sequence, we can depend on the first (or last) half of those segments being straight. So for example if the safe-Z was 15mm, you could set the Z-rise to 30mm and not have to worry about what blending will do.
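The "about half the velocity" follows from the circular-arc blending discussed earlier: through an arc of radius r, a centripetal acceleration limit a caps the speed at v = sqrt(a * r), so a quarter of the radius costs half the speed. A quick check with illustrative numbers (not taken from the screenshot):

```python
import math

def blend_velocity(accel, radius):
    """Max speed through a circular blend arc: centripetal a = v^2 / r."""
    return math.sqrt(accel * radius)

v_wide  = blend_velocity(60.0, 4.0)   # the larger radius the planner can use
v_tight = blend_velocity(60.0, 1.0)   # radius constrained to a quarter
print(v_tight / v_wide)               # ratio = sqrt(1/4), i.e. half the speed
```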

If your Linux is a Debian variant, I think those uspace installs would work fine (unless you want to actually spin a motor smoothly), although I have not tried that myself. There is an RPi image floating around somewhere, which might also be a convenient way to quickly try it. Building on Debian is actually not too bad; the section about satisfying build dependencies just tells you how to generate a list of required packages, which you can then pass to "apt install". My main development computer runs Fedora, which is *not* easy to build LinuxCNC on, which is why I'm using an RPi for these tests (and also because I want to run my solderbot with an RPi). If your Linux is not a Debian variant and you don't have any RPi around, then I suppose a VM it would be.



justin White

unread,
May 20, 2023, 9:59:52 AM5/20/23
to ope...@googlegroups.com
Chris, are you using a software stepgen with the RPi HAL component? I'm curious how well that works; that's asking a lot of that Pi.


mark maker

unread,
May 20, 2023, 10:49:17 AM5/20/23
to ope...@googlegroups.com

> Best bet if you are running Debian Bookworm or maybe Ubuntu Kinetic is just install it from apt since it's in the repos
> $ apt install linuxcnc-uspace linuxcnc-uspace-dev.

Thanks Justin, will investigate (my Kubuntu LTS is too old, but maybe it is time to upgrade...).

_Mark

mark maker

unread,
May 20, 2023, 12:01:34 PM5/20/23
to ope...@googlegroups.com

> I'm assuming the X and Y accelerations would not actually be different?

They will often be quite different, as we want to accelerate as quickly as possible. In a Cartesian machine the whole portal is much heavier than just the head, so unless a design compensates with a much stronger Y motor, the maximum possible Y acceleration will be much lower than the X acceleration.

[Image: pyramid diagram]

https://makr.zone/thinking-machine/82/

So we really need individual control for best performance.

> See "M Codes" in the LinuxCNC docs.

This is really a strange limitation. In that case I suggest using

M171 P<axis/joint> Q<accel>

with the axes/joints numbered according to "Trivial Kinematics":

http://linuxcnc.org/docs/html/motion/kinematics.html#_trivial_kinematics

> Yes, LinuxCNC will not blend more than two segments, that's correct.

Good to know.

> So for example if the safe-Z was 15mm, you could set the Z-rise to 30mm and not have to worry about what blending will do.

That would not be optimal for short distances. You'd want part of the deceleration to take place in the straight leg; otherwise the excessive blending might actually lengthen the overall move. But perhaps we can still rely on the blending algo to do the right thing, because it simply cannot blend more, due to the short X/Y deceleration/acceleration phases. Or, like I said, we simply add waypoints at nominal Safe Z. The following (just hand-drawn) also shows how it could be asymmetric:


[Hand-drawn sketch: extra waypoints]

_Mark

justin White

unread,
May 20, 2023, 1:34:20 PM5/20/23
to ope...@googlegroups.com
If you want to spend an hour and a half understanding how LinuxCNC's trajectory planner makes its calculations, here is a presentation from when Tormach donated LinuxCNC's updated trajectory planner.

A fella has also been working on adding S-curve acceleration. I haven't been following that closely, just caught the last couple of pages, so I'm not sure how close it is to being pushed into master.


mark maker

unread,
May 20, 2023, 4:41:46 PM5/20/23
to ope...@googlegroups.com

Interesting video. If these methods are still the same ones used today, then different accelerations in X, Y, Z are likely not really useful. I thought it used parabolic blending, but it actually uses circular arcs (constant radius, constant max feed rate), which means accelerations will have to be uniform across axes (it likely takes the minimum of all involved axes).

Chapter "Future Work": Is it still limited to X, Y, Z? That would be bad, as we routinely need to rotate nozzles while moving. But I guess that was in the test job, and it still blended, right?

All important to know!

_Mark

Chris Campbell

unread,
May 20, 2023, 11:00:13 PM5/20/23
to ope...@googlegroups.com
My understanding was that the reason OpenPNP needs to set accelerations is that some parts require lower acceleration in order not to fall off the nozzle. Surely that's the same for X and Y? I don't see why OpenPNP should be responsible for (or even aware of) the maximum limits of the machine, which would be a permanent setting in the machine configuration (even for TinyG/grbl etc.), and not something G-code could violate anyway. Perhaps you are saying that if OpenPNP demands an acceleration/velocity and the machine limits it to a lower value, that would be a problem somehow? I thought the M400 wait would take care of any timing mismatches.

But anyway yes, a format like this would work fine:
M171 P<axis> Q<accel>
If I understand correctly we could then set a rule like this and OpenPNP would only give the acceleration values when they actually change, when EuclideanAxisLimits is used right?
Selection_1204.png
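For reference, a GcodeDriver rule along those lines might look like the sketch below. `M171` with `P`/`Q` words is just the format proposed above (not an existing command), and while the `{Variable:format}` placeholders follow OpenPnP's GcodeDriver convention, the exact variable names should be checked against the GcodeDriver docs:

```gcode
(hypothetical rule using the proposed M171 P<axis> Q<accel> format)
(axis number 0 chosen purely for illustration)
M171 P0 {Acceleration:Q%.2f}
(followed by the usual absolute move)
G1 {X:X%.4f} {Y:Y%.4f} {FeedRate:F%.0f}
```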
Yes, simply setting the Z-rise to twice the safe-Z would be inefficient for short movements. I was just saying that it would be an easy way to avoid collisions. To be more efficient it could be done like you say, with a waypoint and setting the extra height to an XY-distance-dependent value that allows a decent-size radius on the corner to avoid slowdown. Basically what I showed in my first subroutine demo a couple weeks ago.

Regarding the simultaneous blending of the A axis, yeah I also read somewhere that blending would not function when more than XYZ were involved. So I was expecting to see that in my experiments, but it looks like the 4th axis is moving just fine, at least on the GUI display. Not really sure what's happening there, maybe that info is old...?

I'm using software stepgen, going straight into a TMC2209 breakout/mount board. With a RPi 3B the speed you see in the video is about as fast as it can go without following errors. I'm actually thinking to try running my solderbot on a RPi Zero which is even slower, but the solderbot only needs very slow and small movements so it might just be ok. It would also be more size-appropriate since the entire solderbot fits on an A4 size sheet of paper. I have seen people using level shifters and opto-isolators between the Pi and the stepper driver so that's probably a safer method, but I'm just gonna try this and see how it goes. I will test with all 4 axes next to see if the 4th motor actually blends the way it shows on the GUI.
Webcam-20230521-14:13:41-807.jpeg

justin White

unread,
May 20, 2023, 11:34:24 PM5/20/23
to ope...@googlegroups.com
Not really sure what's happening there, maybe that info is old...?
That video is 8 years old; it was a presentation before the new TP was actually added to LCNC. There could have been changes before it even went into LCNC, let alone over the 3 major versions since. I'd just take it as a broad explanation.

I'm using software stepgen, going straight into a TMC2209 breakout/mount board. With a RPi 3B the speed you see in the video is about as fast as it can go without following errors.
The Trinamic chips have a step multiplier that should allow you to lower the step scale and still get reasonable microstepping. The missing steps are interpolated by the driver IC, so it'll have less actual resolution. I've never used it so I'm not sure of the practical accuracy, but I'd assume software stepping will only get worse with more stepgens. Do you plan on using the Pi by itself for real hardware?
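Back-of-the-envelope, the step frequency a stepgen must sustain is just axis speed times steps per millimeter, which shows why software stepping runs out of headroom quickly. The numbers below are hypothetical, not from any particular machine:

```python
# Rough step-rate estimate: why software stepgens hit their limits fast.
# All figures below are illustrative assumptions, not measured values.

def step_rate_hz(speed_mm_per_min: float, steps_per_mm: float) -> float:
    """Step frequency required to sustain a given axis speed."""
    return speed_mm_per_min / 60.0 * steps_per_mm

# 18000 mm/min at 80 steps/mm (e.g. modest microstepping on a belt axis)
rate = step_rate_hz(18000, 80)
print(f"{rate:.0f} Hz")  # 24000 Hz -- already at the top of the 5-25 kHz
                         # range quoted for LinuxCNC's software stepgen
```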

mark maker

unread,
May 21, 2023, 6:59:24 AM5/21/23
to ope...@googlegroups.com

> My understanding was that the reason OpenPNP needs to set accelerations is because some parts require lower acceleration in order not to fall off the nozzle. Surely that's the same for X and Y? I don't see why OpenPNP should be responsible for (or even aware of) the maximum limits of the machine, which would be a permanent setting in the machine configuration (even for TinyG/grbl etc), and not something g-code could violate anyway.

In an ideal world that would probably be true, but we are catering to real world machines, often on a low budget (numbers for easy reference):

  1. We have machines that flex, wobble and vibrate. Infinite jerk is for the math book; in the real world it translates to complex elastic responses, with resonances in the machines, often different per axis. For instance my Liteplacer resonates at a much lower frequency on Y than on X. And being a simple, affordable DIY extrusion-and-belt machine that (unnecessarily) happens to be very badly balanced, it resonates a lot!
  2. You kinda see it here (this video is about something completely different, but that scene shows how the head and nozzle shakes on 1mm Y moves, which kinda hit the resonance):
    https://youtu.be/5QcJ2ziIJ14?t=166
  3. Vibration/ringing after motion of course results in inaccuracies.
  4. In the camera we can detect it, hence we have camera settling.
    https://makr.zone/openpnp-advanced-camera-settling/431/
    https://youtu.be/Pxg6g3KI5_E?t=4
  5. But on the nozzle tip, or any other kind of "blind" actuation, we still have the same (or likely worse) shaking and ringing, resulting in inaccurate picking/placing etc.
  6. Therefore, we need to shape acceleration well below the point where the steppers would stall, if you know what I mean.
  7. In its simplest form, we just reduce acceleration to the practical maximum per axis.  I agree this could also be done on the controller side.
  8. But personally, I still like to make the distinction between "absolute maximum" (set on the controller to prevent motor stalling and such) and a practical "precision enabling maximum", set in OpenPnP.
  9. Plus you can then offer a nice centralized GUI, not some obscure .ini file whose location and workings you forget after a few months.
  10. Plus of course we can do better: we also have the ModeratedConstantAcceleration motion control type, where the ramps are calculated as 3rd order jerk controlled motion, and the resulting average acceleration then used. This results in smaller moves accelerating gentler, which helps a lot to reduce vibrations.
  11. We also have the Simulated3rdOrderControl motion control type, where again the ramps are calculated as 3rd order jerk controlled motion, and the motion segment then interpolated into small staircase steps with varying acceleration limits, to shape the jerk control curve approximately. See the diagnostics of such a move built-into OpenPnP (strong lines show 3rd order theory, faint lines show the interpolated move). These work on some 32bit controllers like Smoothie and Duet:
    Simulated Jerk Control
  12. This works wonders against vibrations (you already know that video):
    https://youtu.be/cH0SF2D6FhM
  13. In addition and combination with all that, some OpenPnP machines have multiple drivers/controllers, not only to get a larger number of axes (four-nozzle machines need at least 8 axes), but also because it can make sense for simpler wiring, for instance to place a small controller board on the head, both for nozzle motion and various IO.
  14. While not strictly necessary in most cases, it is then nice that OpenPnP can coordinate motion across these controllers, not in hard real-time, of course, but sufficiently to move in ways the user expects (diagonals instead of "hockey sticks"). Hence we control feed-rate, acceleration (and jerk) to make this happen across controllers. In some (admittedly not ideal) "DIY cases", this might avoid collisions when a move must insert into a forest of protruding feeders, or avoid a nozzle-tip changer, etc.
  15. Furthermore, if you want your speed factor to be meaningful (50% means a move takes twice as long, hence feedrate is 50%, acceleration 25%, jerk 12.5%), then you need to know the baseline feed-rate, acceleration and jerk limits too.
  16. Because we have this internal 3rd order motion planner, we can also accurately simulate a machine, before it is actually built, or before an upgrade is made. Theoretically, one could optimize the design of their machine by trying different rates and their effect on CPH, then plan the motors etc. accordingly (weight and cost vs. power trade-off).
  17. Finally, when we are talking about calculating the optimal Z overshoot and amount of blending that really saves time, we equally must know these rate limits too, as I already explained.
  18. I hope it becomes clear, why OpenPnP would want to know all it can about how exactly the motion is going to be executed on the machine, the time it takes, etc. 😉
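Item 15's scaling rule can be written down directly: if a move's duration is stretched by 1/s, feed-rate scales with s, acceleration with s², and jerk with s³. A minimal sketch of that arithmetic (not OpenPnP's actual code):

```python
# Speed-factor scaling: stretching a move's time by 1/s scales each
# time-derivative of position by another factor of s. Sketch only.

def scaled_limits(speed_factor: float, feed: float, accel: float, jerk: float):
    s = speed_factor
    return feed * s, accel * s**2, jerk * s**3

# At 50% speed: feed-rate 50%, acceleration 25%, jerk 12.5%
f, a, j = scaled_limits(0.5, 18000.0, 2000.0, 40000.0)
print(f, a, j)  # 9000.0 500.0 5000.0
```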

> If I understand correctly we could then set a rule like this and OpenPNP would only give the acceleration values when they actually change, when EuclideanAxisLimits is used right?

Correct.

> Regarding the simultaneous blending of the A axis, yeah...

I since confirmed that blending works across rotation axes (using LinuxCNC user mode). But there are strange asymmetries I don't understand.

g90 g21      (absolute distance mode, millimeters)
g64 p5 q0    (path blending with 5 mm tolerance)
g0 a0
g0 b0
g0 x0 y0 c0
g4 p0        (zero-second dwell: forces an exact stop, breaks blending)
g1 z8 F2000
g1 z10
g1 x30 c90   (linear XY move blended with C-axis rotation)
g1 z8
g1 z0
g4 p0
m2           (end of program)



> I have seen people using level shifters and opto-isolators between the Pi and the stepper driver so that's probably a safer method, but I'm just gonna try this and see how it goes.

The most important thing, I guess, is making sure the Raspi supply and all of its connectors are floating (no earthing to the mains), so the ground connector from the stepper driver (and its PSU) gives you the ground, regardless of it bouncing when the motor draws (or back-generates) large currents (relatively speaking). The keyword is Star Topology.

I'm no expert and found little info about this, so just carried together what I found, here:

https://makr.zone/grounding-the-machine/283/

_Mark

mark maker

unread,
May 21, 2023, 8:37:26 AM5/21/23
to ope...@googlegroups.com

> I'm using software stepgen

With quad 1.8GHz cores, and the realtime kernel, surely it can generate any step rate you'd reasonably want? Right?

Is this the way to go?
https://forum.linuxcnc.org/9-installing-linuxcnc/39779-rpi4-raspbian-64-bit-linuxcnc?start=150#253623

I hope this is outdated:

"The maximum step rate depends on the CPU and other factors, and is usually in the range of 5KHz to 25KHz."
http://linuxcnc.org/docs/html/man/man9/stepgen.9.html

Just for reference (don't bite off my head, Justin, this is just about tech facts):

A Duet 3 MB6HC 300MHz MCU controller (firmware version is outdated, don't know if still accurate):

3.2RC1
Step rate, 1 motor: 650 kHz
Step rate, 3 motors (linear), per motor:
Step rate, 3 motors (delta), per motor: 480 kHz
64-bit square root calculation time: 0.73 us
32-bit square root calculation time: 0.35 us
sin/cos calculation time: 0.94 us
CRC calculation time (1Kb): 41.16 us

https://forum.duet3d.com/topic/18694/duet-maximum-achievable-step-rates?_=1684667410378

Or on a 600MHz, $32 Teensy 4.1 with grblHAL:

https://www.grbl.org/single-post/how-fast-can-it-go

_Mark

justin White

unread,
May 21, 2023, 10:40:11 AM5/21/23
to ope...@googlegroups.com
With quad 1.8GHz cores, and the realtime kernel, surely it can generate any step rate you'd reasonably want? Right?
I don't want to get too deep into the math as it makes my head hurt, but Chris already mentioned that if he runs that one stepper any faster he gets a "following error". LinuxCNC's stepgens themselves are generally closed loop. That has nothing to do with the motor, just the step generator.

The following error basically means the commanded number of steps is starting to deviate from the reported number of generated steps. LinuxCNC runs RT tasks in "threads". A software stepgen will run on a fast, non-floating-point thread generally called the "base thread". The ferror allowance, acceleration, and velocity are all determined by how fast the base thread is running, and how fast the base thread can run is determined by how much jitter the PC has. Watching a YouTube video on a Pi while LinuxCNC is running will probably kill the base thread and cause RT violations, for example. A following error isn't a realtime violation, but if the base thread were running fast enough to avoid that following error, an RT violation probably would have occurred instead. This may, of course, just be things set incorrectly.

The RT kernel doesn't guarantee anything will happen on time, it just guarantees it will have the priority. LinuxCNC itself has the mechanisms to make sure something either happens or your machine is in violation.
Honestly, I don't use RPis for LinuxCNC much. I actually would have for this if it could handle the cameras, but this group said it was not a good idea. The difference is I would not run a software stepgen, and I would not run a base thread at all. I never do; I only use the "servo thread", which is the slower (typically 1 ms) floating-point thread. I use SPI- or Ethernet-based Mesa cards, which turn the discrete RT tasks into realtime commands. This is much of what I was saying a few posts ago.
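To make the two-thread split concrete, the classic software-stepgen HAL wiring puts the pulse generation on the fast integer base thread and the floating-point math on the servo thread. This is a from-memory sketch of the pattern in the stepgen man page, not a complete working config (periods are illustrative):

```hal
# Two threads: fast integer "base" thread, slow floating-point "servo" thread.
loadrt threads name1=base-thread period1=25000 name2=servo-thread period2=1000000
loadrt stepgen step_type=0              # one step/dir generator

addf stepgen.make-pulses      base-thread   # pin banging, every 25 us
addf stepgen.update-freq      servo-thread  # velocity/accel math, every 1 ms
addf stepgen.capture-position servo-thread  # feeds the following-error check
```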

 "The maximum step rate depends on the CPU and other factors, and is usually in the range of 5KHz to 25KHz."
http://linuxcnc.org/docs/html/man/man9/stepgen.9.html
No, that's probably correct, but again, that is the software stepgen; nobody really does that unless they are being cheap, to be quite honest, or just messing around. You would be 10x better off running a software stepgen on a PC old enough to have a native parallel port than on a brand-new Ryzen/Core whatever. They were just better at these things.

Just for reference (don't bite off my head, Justin, this is just about tech facts):

A Duet 3 MB6HC 300MHz MCU controller (firmware version is outdated, don't know if still accurate):

3.2RC1
Step rate, 1 motor: 650 kHz
Step rate, 3 motors (linear), per motor:
Step rate, 3 motors (delta), per motor: 480 kHz
64-bit square root calculation time: 0.73 us
32-bit square root calculation time: 0.35 us
sin/cos calculation time: 0.94 us
CRC calculation time (1Kb): 41.16 us

https://forum.duet3d.com/topic/18694/duet-maximum-achievable-step-rates?_=1684667410378

Lol, I'm not gonna bite your head off, but this is pretty much exactly what I said a few posts ago. PCs have gotten fast, but they are terrible at banging on pins. That's not LinuxCNC's fault, and it's not like LinuxCNC doesn't have an answer for it. You just should not expect good performance from software stepgens or encoder counters; a 300MHz MCU will blow the doors off them. On the other hand, take that 650kHz single-motor stepgen and compare it to the 6+ 10MHz stepgens possible with a Mesa FPGA card. See how increasing the number of stepgens starts killing the performance? That's not the case with an FPGA-based controller. Just think: if you are running your motion control on that same MCU, you are also killing its performance by increasing the number of pin-bangers.

Or on a 600MHz, $32 Teensy 4.1 with grblHAL:
You can't tell me anything about Teensys lol. At last count I still have about 10 Teensy 4.0/4.1s sitting in a drawer unopened.





20211210_192317.jpg
20220420_193615.jpg


20211210_193725.jpg


It's funny because, before I decided I wanted to use LinuxCNC for this PnP adventure, I wanted to make a Teensy-based controller I could use for this and for a 3D printer when I get around to it. I asked about Teensy support on the Klipper forums and they basically told me to F' off. Since I don't really know much about the 3D printer world, I figured I'd start with that BTT board and Marlin as a test, because Marlin somewhat supports Teensy 4.x. After all that I think Marlin sucks, so I didn't want to deal with it, and there isn't a lot I can do with firmware myself. Not sure if I knew about grblHAL at the time, but since I've never used it I couldn't say whether I'd want to bother.

The thing about it is there's nothing stopping anyone from making a Teensy firmware that would work with LinuxCNC. I've actually tried multiple times to get people interested in that; no one really bites. You can't beat an FPGA card with a Teensy for what LinuxCNC does, but nobody says you have to; something as fast as a Teensy would kick ass for 90% of small machines. A LinuxCNC firmware for a Teensy 4.1 could be pretty serious over SPI or Ethernet, modeled on Mesa's hm2 firmware. Someone actually said they were working on it a couple years ago, then disappeared without a trace: https://forum.linuxcnc.org/24-hal-components/39813-teensy-4-1-linuxcnc?start=0

justin White

unread,
Aug 10, 2023, 9:17:08 AM8/10/23
to OpenPnP
Chris,

Just getting started messing around with the Gcode server. Not sure if you'd rather I post here or github issues.....

Just using it in a telnet shell for now. The first issue I noticed is the homing. My ini is set up to home Z first, then home X and Y together, and that's how it works with the Axis GUI. I'm getting odd results with the server; I have to run the home command 3 times. Not sure if that's intentional, but IMO "home" should be EMC_JOINT_HOME -1. That is the "home all" command and it will follow the ini sequence. If it's necessary to home only one axis, it would obviously be EMC_JOINT_HOME <axis#>
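A minimal sketch of how a server-side "home" command could map to an EMC_JOINT_HOME joint argument, with -1 as home-all per the convention above. This is illustrative only, not the gcode server's actual code, and the axis-to-joint table assumes trivial kinematics:

```python
# Map a telnet-style "home" command to an EMC_JOINT_HOME joint argument.
# -1 means "home all joints, following the ini HOME_SEQUENCE".
# Sketch only -- names and mapping are assumptions, not the real server.

AXIS_TO_JOINT = {"X": 0, "Y": 1, "Z": 2, "A": 3}  # trivial kinematics assumed

def home_target(command: str) -> int:
    parts = command.strip().upper().split()
    if len(parts) == 1:          # bare "home" -> home all
        return -1
    return AXIS_TO_JOINT[parts[1]]

print(home_target("home"))    # -1
print(home_target("home Z"))  # 2
```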

That's about as far as I got so far other than just some G0's, probably have more to say after I get openpnp setup

Chris Campbell

unread,
Aug 10, 2023, 11:49:21 AM8/10/23
to ope...@googlegroups.com
Hi Justin

Thanks for trying it out. I'll add a parameter to the 'home' command so it can optionally home a specific axis or all axes according to the ini.
Yeah I think it would be better to discuss this in an issue on github.


justin White

unread,
Aug 13, 2023, 2:29:59 PM8/13/23
to OpenPnP
Anyone know why OpenPnP would be limited to a feedrate of 1000? I have my machine set up in mm, and in OpenPnP X and Y are set to 18000 mm/min. I think Z is the slowest @ 15000, yet running a test job with the speed set at 100%, it only ever outputs F1000.

Chris Campbell

unread,
Aug 13, 2023, 2:49:37 PM8/13/23
to ope...@googlegroups.com
Hi Justin

How are you determining that 1000 is the highest value commanded? Is that from looking at the gcode server output?
fwiw I'm getting values greater than 1000 with the settings shown in this screenshot. See the server output in the terminal, where some moves are at the 2000 I specified for feed rate.

Selection_1310.png

You can also check in OpenPNP's log what gcode it is sending (set the level to TRACE or DEBUG).

Selection_1312.png



justin White

unread,
Aug 13, 2023, 3:26:54 PM8/13/23
to OpenPnP

>>How are you determining that 1000 is the highest value commanded? Is that from looking at the gcode server output?

Well, for one it's extremely slow lol, and for two......
term.png
This is a sample job set @ 100% speed. I set this thing up and put it through its paces with LinuxCNC alone; MDI will do G0s and G1s at the full speed of 15000 to 18000. Probably a setting in OpenPnP, but it's not obvious.
pnp.png
And LinuxCNC is set up in mm/s

ini.png
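Worth spelling out the unit mismatch between the two configs: OpenPnP speaks mm/min while a LinuxCNC ini set up in mm/s uses mm/s, so 18000 mm/min is 300 mm/s, and the stuck F1000 is under 17 mm/s. A trivial arithmetic check:

```python
# Unit sanity check: OpenPnP feed rates (mm/min) vs a LinuxCNC ini in mm/s.

def mm_per_min_to_mm_per_s(v: float) -> float:
    return v / 60.0

print(mm_per_min_to_mm_per_s(18000))  # 300.0 mm/s -- the intended X/Y speed
print(mm_per_min_to_mm_per_s(1000))   # ~16.7 mm/s -- the stuck F1000 is slow indeed
```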

justin White

unread,
Aug 13, 2023, 3:38:47 PM8/13/23
to OpenPnP
BTW in that screenshot I'm not using beginsub/endsub. I just tried that as well, still stuck at f1000

I am just using toolpathfeedrate for now but I assume that's not a problem

Chris Campbell

unread,
Aug 13, 2023, 3:41:17 PM8/13/23
to ope...@googlegroups.com
Maybe this? Zero means no limit, I forget whether that was the default or I set it to zero.
Selection_1315.png
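Presumably that option clamps the requested rate along these lines, with 0 meaning no limit. This is a guess at the semantics from the setting's description, not OpenPnP's source:

```python
# Assumed semantics of the driver's "max feed rate" setting:
# 0 disables the limit, any positive value clamps the commanded rate.

def effective_feed(requested: float, driver_max: float) -> float:
    if driver_max <= 0:
        return requested
    return min(requested, driver_max)

print(effective_feed(18000, 1000))  # 1000 -- the F1000 ceiling Justin was seeing
print(effective_feed(18000, 0))     # 18000 -- limit disabled
```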


Chris Campbell

unread,
Aug 13, 2023, 3:46:26 PM8/13/23
to ope...@googlegroups.com
Just to confirm, your MOVE_TO_COMMAND is something like this?
Selection_1316.png

justin White

unread,
Aug 13, 2023, 4:04:12 PM8/13/23
to OpenPnP
That's it, didn't dawn on me that that was in driver settings. Not sure how it got set to 1000.

Toolpathfeedrate doesn't seem too bright; it pretty much always sends the job-speed percentage of the max feedrate regardless of how the axes are set up. Are you using "ModeratedConstantAcceleration"?

Chris Campbell

unread,
Aug 13, 2023, 4:25:15 PM8/13/23
to ope...@googlegroups.com
ModeratedConstantAcceleration was recommended by Mark somewhere, possibly earlier in this discussion thread, but I forget the details. I'm not actually using any of this for real yet, other than the occasional check on my router.

mark maker

unread,
Aug 14, 2023, 12:58:52 AM8/14/23
to ope...@googlegroups.com

>  That's it, didn't dawn on me that that was in driver settings. Not sure how it got set to 1000.

Use Issues & Solutions, it would tell you to get rid of it.

The 1000 setting is the default.

_Mark

justin White

unread,
Aug 14, 2023, 1:06:41 AM8/14/23
to OpenPnP
I'm not sure how many people use the Linux version, but like I said before, Issues & Solutions almost never suggests anything at all. It did make a suggestion when setting up the Z type, but absolutely nothing else.

I'm using a deb from the site compiled like a week ago

mark maker

unread,
Aug 14, 2023, 2:24:47 AM8/14/23
to ope...@googlegroups.com

> I'm not sure how many people use the Linux version

I'm personally using mostly Linux nowadays, and this surely is not OS dependent.

> issues and solutions almost never suggests anything at all.

Try the "Include Dismissed?" switch.

Some proposed settings are dependent on others. So if you dismissed one of the prerequisites (like setting a better "Control Type"), then it is not [yet] suggesting removing the driver feed-rate.

For instance:

Leads to ->

And that's just one example prerequisite (I don't know them all by heart).

_Mark

Reply all
Reply to author
Forward
0 new messages