
Shared Communications Bus - RS-422 or RS-485


Rick C

Nov 2, 2022, 1:29:02 AM
I have a test fixture that uses RS-232 to communicate with a PC. It actually uses the voltage levels of RS-232, even though this is from a USB cable on the PC, so it's only RS-232 for maybe four inches. lol

I'm redesigning the test fixtures to hold more units and fully automate a few features that presently require an operator. There will now be 8 UUTs on each test fixture and I expect to have 10 to 20 test fixtures in a card rack. That's 80 to 160 UUTs total. There will be an FPGA controlling each pair of UUTs, so 80 FPGAs in total that the PC needs to talk to.

Rather than working on a way to mux 80 RS-232 interfaces, I'm thinking it would be better to either daisy chain, or connect in parallel all these devices. The protocol is master-slave where the master sends a command and the slaves are idle until they reply. The four FPGAs on a test fixture board could be connected in parallel easily enough. But I don't think I want to run TTL level signals between so many boards.

I could do an RS-422 interface with a master to slave pair and a slave to master pair. The slaves do not speak until spoken to, so there will be no collisions.

RS-485 would allow all this to be over a single pair of wires. But the one big issue I see people complain about is getting PC software to not clobber the slaves, or I should say, to get the master to wait long enough that it's not clobbering its own start bit by overwriting the stop bit of the slave. I suppose someone, somewhere has dealt with this on the PC and has a solution that doesn't impact bus speed. I run the single test fixture version of this at about 100 kbps. I'm going to want as much speed as I can get for 80 FPGAs controlling 160 UUTs. Maybe I should give that some analysis, because this might not be true.

The tests are of two types: most of them are setting up a state and reading a signal. This can go pretty fast and doesn't take too many commands. Then there are the audio tests where the FPGA sends digital data to the UUT, which does its thing and returns digital data which is crunched by the FPGA. This takes some small number of seconds and presently the protocol is to poll the status until it is done. That's a lot of messages, but it's not necessarily a slow point. The same test can be started on every UUT in parallel, so the waiting is in parallel. So maybe the serial port won't need to be any faster.

Still, I want to use RS-422 or RS-485 to deal with ground noise since this will be spread over multiple boards that don't have terribly solid grounds, just the power cable really.

I'm thinking out loud here as much as anything. I intended to simply ask if anyone had experience with RS-485 that would be helpful. Running two wires rather than eight would be a help. I'll probably use a 10 pin connector just to be on the safe side, allowing the transceivers to be used either way.

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209

David Brown

Nov 2, 2022, 5:28:21 AM
On 02/11/2022 06:28, Rick C wrote:
> I have a test fixture that uses RS-232 to communicate with a PC. It
> actually uses the voltage levels of RS-232, even though this is from
> a USB cable on the PC, so it's only RS-232 for maybe four inches.
> lol
>
> I'm redesigning the test fixtures to hold more units and fully
> automate a few features that presently requires an operator. There
> will now be 8 UUTs on each test fixture and I expect to have 10 to 20
> test fixtures in a card rack. That's 80 to 160 UUTs total. There
> will be an FPGA controlling each pair of UUTs, so 80 FPGAs in total
> that the PC needs to talk to.
>
> Rather than working on a way to mux 80 RS-232 interfaces, I'm
> thinking it would be better to either daisy chain, or connect in
> parallel all these devices. The protocol is master-slave where the
> master sends a command and the slaves are idle until they reply. The
> four FPGAs on a test fixture board could be connected in parallel
> easily enough. But I don't think I want to run TTL level signals
> between so many boards.
>
> I could do an RS-422 interface with a master to slave pair and a
> slave to master pair. The slaves do not speak until spoken to, so
> there will be no collisions.
>

RS-422 is normally a point-to-point interface. It is one line in each
direction, but using balanced pairs instead of a TTL signal. You would
not normally connect multiple receivers or transmitters to an RS-422
bus, as the standard practice is that each transmitter is always driving
the pair it is attached to - there is no multi-drop.

> RS-485 would allow all this to be over a single pair of wires. But
> the one big issue I see people complain about is getting PC software
> to not clobber the slaves, or I should say, to get the master to wait
> long enough that it's not clobbering it's own start bit by
> overwriting the stop bit of the slave. I suppose someone, somewhere
> has dealt with this on the PC and has a solution that doesn't impact
> bus speed. I run the single test fixture version of this at about
> 100 kbps. I'm going to want as much speed as I can get for 80 FPGAs
> controlling 160 UUTs. Maybe I should give that some analysis,
> because this might not be true.

RS-485 is easy from a PC using appropriate USB devices. We make heavy
use of FTDI's chips and cables - they handle the drive enables for the
RS-485 automatically so that from the PC side you just read and write to
the serial port.

The reception of the last byte from a slave is not finished until the
stop bit has been properly received by the master - that means at least
half-way through the sending of the stop bit. Then there is a delay
before the data gets sent back to the host PC, a delay through the
kernel and drivers before it reaches the user program, time for the
program to handle that message, time for it to prepare the next message,
delays through the kernel and drivers before it gets to the USB bus,
latency in the USB device that receives the USB message and then starts
transmitting. There can be no collision unless all that delay is less
than half a bit time. And no matter how fast your computer is, you are
always going to need at least one full USB polling cycle for all this,
which for USB 2.0 is 0.125 ms (125 µs). That means that if you have a baud rate
of 16 kbaud or higher, there is no possibility of a collision.
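(To put rough numbers on that: at 16 kbaud one bit lasts 62.5 µs, so half a bit is about 31 µs, while the turnaround through the PC is at least 125 µs - roughly four times longer - so the slave's driver has released the bus well before the PC's next start bit can appear on the line.)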


>
> The tests are of two types, most of them are setting up a state and
> reading a signal. This can go pretty fast and doesn't take too many
> commands. Then there are the audio tests where the FPGA sends
> digital data to the UUT, which does it's thing and returns digital
> data which is crunched by the FPGA. This takes some small number of
> seconds and presently the protocol is to poll the status until it is
> done. That's a lot of messages, but it's not necessarily a slow
> point. The same test can be started on every UUT in parallel, so the
> waiting is in parallel. So maybe the serial port won't need to be
> any faster.
>
> Still, I want to use RS-422 or RS-485 to deal with ground noise since
> this will be spread over multiple boards that don't have terribly
> solid grounds, just the power cable really.
>
> I'm thinking out loud here as much as anything. I intended to simply
> ask if anyone had experience with RS-485 that would be helpful.
> Running two wires rather than eight would be a help. I'll probably
> use a 10 pin connector just to be on the safe side, allowing the
> transceivers to be used either way.
>

You need to do some calculations to see if you can get enough telegrams
and enough data through a single serial port. You'll be hard pushed to
find a USB serial port device of any kind that goes above 3 Mbaud, and
you need to be careful about your selection of RS-485 drivers for those
kinds of rates. You will also find that much of your bandwidth is taken
up with pauses between telegrams and reply latency, unless you make your
telegrams quite large.

When we have made testbenches that required serial communication to
multiple parallel devices, we typically put a USB hub in the testbench
and use multiple FTDI USB to serial cables. You only make one (or
possibly a few) of the testbenches - it's much cheaper to use
off-the-shelf parts than to spend time designing something more
advanced. You can buy a /lot/ of hubs and USB cables for the price of
the time to design, build and program a custom card for the job. It
also makes the system more scalable, as the communication to different
devices runs in parallel.

We have also done systems where there is a Raspberry Pi driving the hub
and multiple FTDI converters. The PC is connected to the Pi by Ethernet
(useful for galvanic isolation), and the Pi runs forwarders between the
serial ports and TCP/IP ports.

To be fair, I don't recall any testbenches we've made that needed more
than perhaps 8 serial ports. If I needed to handle 80 lines, I would
probably split things up - a Pi handling 8-10 lines from a local
program, communicating with a PC master program by Ethernet.

pozz

Nov 2, 2022, 6:54:10 AM
On 02/11/2022 10:28, David Brown wrote:
> On 02/11/2022 06:28, Rick C wrote:

> RS-485 is easy from a PC using appropriate USB devices.  We make heavy
> use of FTDI's chips and cables - they handle the drive enables for the
> RS-485 automatically so that from the PC side you just read and write to
> the serial port.
>
> The reception of the last byte from a slave is not finished until the
> stop bit has been properly received by the master - that means at least
> half-way through the sending of the stop bit.  Then there is a delay
> before the data gets sent back to the host PC, a delay through the
> kernel and drivers before it reaches the user program, time for the
> program to handle that message, time for it to prepare the next message,
> delays through the kernel and drivers before it gets to the USB bus,
> latency in the USB device that receives the USB message and then starts
> transmitting.  There can be no collision unless all that delay is less
> than half a bit time.  And no matter how fast your computer is, you are
> always going to need at least one full USB polling cycle for all this,
> which for USB 2.0 is 0.125 us.  That means that if you have a baud rate
> of 16 kbaud or higher, there is no possibility of a collision.

However this also depends on the speed of the slave, because it could be slow
to switch *its* direction from TX back to RX. If the master starts transmitting
after the stop bit from the slave, but *before* the slave has switched *its*
direction from TX to RX, the first bytes could be corrupted.

Unfortunately, not all UARTs in MCUs are able to drive the DE (Drive Enable)
signal automatically, so it sometimes happens that DE is a normal GPIO.
If you are lucky, you have a TXC (transmit complete) interrupt that
fires *after* the stop bit is transmitted, which is a safe time to change the DE signal.

In that case the interrupt latency is short, but other active interrupts
could occasionally delay the TXC interrupt for some time.
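To make that concrete, here is a minimal sketch of the GPIO-controlled DE case. All the names (de_pin_set, uart_write_byte, the two ISRs) are hypothetical, not from any particular vendor's HAL; the point is only that the driver is released from the transmit-complete interrupt, after the final stop bit has actually left the pin.

/* Sketch only - every identifier here is a placeholder. */
#include <stdint.h>
#include <stddef.h>

extern void de_pin_set(int level);       /* GPIO controlling RS-485 DE   */
extern void uart_write_byte(uint8_t b);  /* load UART transmit register  */

static const uint8_t *tx_buf;
static size_t tx_len, tx_pos;

void rs485_send(const uint8_t *buf, size_t len)
{
    tx_buf = buf;
    tx_len = len;
    tx_pos = 0;

    de_pin_set(1);                    /* enable driver before the start bit */
    uart_write_byte(tx_buf[tx_pos++]);
    /* ...and enable the "buffer empty" and TXC interrupts here */
}

/* "Transmit buffer empty" interrupt: keep the shift register fed. */
void uart_txe_isr(void)
{
    if (tx_pos < tx_len)
        uart_write_byte(tx_buf[tx_pos++]);
}

/* "Transmit complete" interrupt: fires after the last stop bit has
   left the pin, so this is the safe moment to release the bus.     */
void uart_txc_isr(void)
{
    if (tx_pos >= tx_len)
        de_pin_set(0);                /* driver off, bus falls back to its bias/idle state */
}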

David Brown

Nov 2, 2022, 9:27:16 AM
True, and that is an important point.

>
> Unfortunately, not all UARTs in MCUs are able to drive automatically the
> DE (Drive Enable) signal, so it sometimes happens that DE is a normal GPIO.
> If you are lucky, you have the TXC (transmit complete) interrupt that
> fires *after* stop bit is transmitted, a safe time to move DE signal.

I think the OP has FPGA's for the slave side of the equation, so there
should not be a delay in switching their drivers off after the last byte
is sent. Even if it is a microcontroller and has no hardware control
for the DE line, pretty much any half-decent microcontroller from this
century has a TXC interrupt and can react and turn off the driver within
a few microseconds. I am assuming he is not trying to do this project
using PIC's or 8051's !

>
> In this case interrupt delay is short, but you could have other active
> interrupts that occasionally could delay the TXC interrupt for some time.
>

If this kind of thing is a risk, then it's not hard to put a short delay
on the PC side between receiving a reply and sending out the next
telegram. But it's good that you brought it up, so that the OP can
decide if it /is/ a risk.

Rick C

Nov 2, 2022, 3:20:53 PM
That is simply not true. Data sheets for RS-422 devices often show multidrop applications and how to best terminate them.


> > RS-485 would allow all this to be over a single pair of wires. But
> > the one big issue I see people complain about is getting PC software
> > to not clobber the slaves, or I should say, to get the master to wait
> > long enough that it's not clobbering it's own start bit by
> > overwriting the stop bit of the slave. I suppose someone, somewhere
> > has dealt with this on the PC and has a solution that doesn't impact
> > bus speed. I run the single test fixture version of this at about
> > 100 kbps. I'm going to want as much speed as I can get for 80 FPGAs
> > controlling 160 UUTs. Maybe I should give that some analysis,
> > because this might not be true.
> RS-485 is easy from a PC using appropriate USB devices. We make heavy
> use of FTDI's chips and cables - they handle the drive enables for the
> RS-485 automatically so that from the PC side you just read and write to
> the serial port.

I've yet to be convinced of this. Admittedly, my last interaction with RS-485 was many years ago, but there were some four or five different devices being integrated with a PC, and no two of them handled the bus the same way. The PC in particular would turn the driver on and off in the middle of bits. No one put a bias on the bus, so it was indeterminate when no one was driving. It was a horrible failure.

Every time I've seen this discussed, the driver control has been an issue.


> The reception of the last byte from a slave is not finished until the
> stop bit has been properly received by the master - that means at least
> half-way through the sending of the stop bit.

That's not sufficient. Everyone's halfway is a bit different and start bit detection may not be enabled on some device when the next driver outputs a start bit, or the last driver may not be turned off when the next driver starts.


> Then there is a delay
> before the data gets sent back to the host PC, a delay through the
> kernel and drivers before it reaches the user program, time for the
> program to handle that message, time for it to prepare the next message,
> delays through the kernel and drivers before it gets to the USB bus,
> latency in the USB device that receives the USB message and then starts
> transmitting. There can be no collision unless all that delay is less
> than half a bit time. And no matter how fast your computer is, you are
> always going to need at least one full USB polling cycle for all this,
> which for USB 2.0 is 0.125 us. That means that if you have a baud rate
> of 16 kbaud or higher, there is no possibility of a collision.

If your numbers are accurate, that might be ok, but I'm looking for data rates closer to 1 Mbps. Admittedly, I have not done an analysis of what will actually be required, but 128 UUTs, or possibly 256, can do a lot of damage to a shared bus. At 1 Mbps, 128 UUTs result in an effective bit rate maximum of 7.8 kbps. With 256 UUTs, that's 3.9 kbps. No, I don't think this will work properly at much slower speeds than 1 Mbps. At 16 kbps, the effective rate to each UUT is just 62.5 bps, not kbps.
I assume by "telegrams", you mean the messages. They will be small by necessity. The protocol is interactive with a command message and a reply message. Read a register, write a register.


> When we have made testbenches that required serial communication to
> multiple parallel devices, we typically put a USB hub in the testbench
> and use multiple FDTI USB to serial cables. You only make one (or
> possibly a few) of the testbenches - it's much cheaper to use
> off-the-shelf parts than to spend time designing something more
> advanced. You can buy a /lot/ of hubs and USB cables for the price of
> the time to design, build and program a custom card for the job. It
> also makes the system more scalable, as the communication to different
> devices runs in parallel.

USB hubs are a last resort. I've found many issues with such devices, especially larger than 4 ports.


> We have also done systems where there is a Raspberry Pi driving the hub
> and multiple FTDI converters. The PC is connected to the Pi by Ethernet
> (useful for galvanic isolation), and the Pi runs forwarders between the
> serial ports and TCP/IP ports.

There is a possibility of using an rPi on an Ethernet cable to the PC with direct comms to each test fixture board, but that's more work than I'm interested in.


> To be fair, I don't recall any testbenches we've made that needed more
> than perhaps 8 serial ports. If I needed to handle 80 lines, I would
> probably split things up - a Pi handling 8-10 lines from a local
> program, communicating with a PC master program by Ethernet.

That's the advantage of the shared bus. No programming required, other than extending the protocol to move from "selecting" a device on the FPGA, to selecting the FPGA as well.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209

David Brown

Nov 2, 2022, 4:49:16 PM
RS-422 is not multidrop. Occasionally you will see multiple receivers
on a bus, but not multiple transmitters.

Of course the same driver chips can be used in different combinations of
wiring and drive enables. An RS-422 driver chip can be viewed as two
RS-485 driver chips - alternatively, a RS-485 driver can be viewed as an
RS-422 driver with the two differential pairs connected together.
Really, all you are talking about is a differential driver and a
differential receiver.

So yes, you can do multidrop using an RS-422 driver chip. But it is not
RS-422, which is a point-to-point serial bus standard.

>
>>> RS-485 would allow all this to be over a single pair of wires.
>>> But the one big issue I see people complain about is getting PC
>>> software to not clobber the slaves, or I should say, to get the
>>> master to wait long enough that it's not clobbering it's own
>>> start bit by overwriting the stop bit of the slave. I suppose
>>> someone, somewhere has dealt with this on the PC and has a
>>> solution that doesn't impact bus speed. I run the single test
>>> fixture version of this at about 100 kbps. I'm going to want as
>>> much speed as I can get for 80 FPGAs controlling 160 UUTs. Maybe
>>> I should give that some analysis, because this might not be
>>> true.
>> RS-485 is easy from a PC using appropriate USB devices. We make
>> heavy use of FTDI's chips and cables - they handle the drive
>> enables for the RS-485 automatically so that from the PC side you
>> just read and write to the serial port.
>
> I've yet to be convinced of this. Admittedly, my last interaction
> with RS-485 was many years ago, but there were some four or five
> different devices being integrated with a PC, and no two of them
> handled it the bus the same way. The PC in particular, would cut on
> and off the driver in the middle of bits. No one put a bias on the
> bus, so it was indeterminate when no one was driving. It was a
> horrible failure.
>
> Every time I've seen this discussed, the driver control has been an
> issue.

I can tell you it works perfectly with FTDI's RS-485 cables - every
time, every OS, regardless of the software. Some RS-485 drivers rely on
RTS for the drive enable - this was the standard for RS-232 to RS-485
converters from the old days of 9-pin and 25-pin serial ports on PC's.
With such drivers, it is certainly possible to get things wrong. With
the drive enable handled directly by the UART hardware on the USB chip,
it is /far/ harder to get it wrong.

I would expect there to be many alternatives to FTDI that work similarly
well, but that's the ones we generally use.

<https://ftdichip.com/product-category/products/cables/?series_products=55>

>
>> The reception of the last byte from a slave is not finished until
>> the stop bit has been properly received by the master - that means
>> at least half-way through the sending of the stop bit.
>
> That's not sufficient. Everyone's halfway is a bit different and
> start bit detection may not be enabled on some device when the next
> driver outputs a start bit, or the last driver may not be turned off
> when the next driver starts.
>

"At least half-way" means "at least 50% of the bit time". As long as
the start bit from the next message is not sent until at least 50% of a
bit time after the stop bit is detected, it will not conflict and all
listening devices will be ready to see the start bit. (Devices that
needed two stop bits haven't existed in the last 50 years.)

You asked specifically about bus turnaround at the host side - I assume
that is because on the slave devices, you have control of the drive
enables and bus turnaround happens with negligible latency.

>
>> Then there is a delay before the data gets sent back to the host
>> PC, a delay through the kernel and drivers before it reaches the
>> user program, time for the program to handle that message, time for
>> it to prepare the next message, delays through the kernel and
>> drivers before it gets to the USB bus, latency in the USB device
>> that receives the USB message and then starts transmitting. There
>> can be no collision unless all that delay is less than half a bit
>> time. And no matter how fast your computer is, you are always going
>> to need at least one full USB polling cycle for all this, which for
>> USB 2.0 is 0.125 us. That means that if you have a baud rate of 16
>> kbaud or higher, there is no possibility of a collision.
>
> If your numbers are accurate, that might be ok, but I'm looking for
> data rates closer to 1 Mbps.

USB serial ports generally use the 48 MHz base USB reference frequency
as their source clock to scale down by a baud rate divisor, and common
practice is 16 sub-bit clocks per line bit (so that you can have
multiple samples for noise immunity). Thus baud rates of integer
divisions of 3 MBaud are common. Certainly the FTDI chips handle 1, 2
and 3 MBaud. (I haven't had need of such speeds with RS-485, but have
happily used the common 3v3 TTL cables at 3 MBaud.)
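(To spell the arithmetic out: 48 MHz with 16 sub-bit clocks per bit gives 48 MHz / 16 = 3 MBaud as the ceiling, and integer divisors of that give 1.5 MBaud, 1 MBaud, 750 kBaud and so on.)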

> Admittedly, I have not done an analysis
> of what will actually be required, but 128 UUT, or possibly 256, can
> do a lot of damage to a shared bus. At 1 Mbps, 128 UUT results in an
> effective bit rate maximum of 7.8 kbps. With 256 UUTs, that's 3.9
> kbps. No, I don't think this will work properly at much slower
> speeds than 1 Mbps. At 16 kbps, the effective rate to each UUT is
> just 62.5 bps, not kbps.
>

As long as you are /above/ 16 kbaud, you should be fine (at the PC
side). At 1 Mbaud, you do not need to worry about the PC starting a new
telegram before the last received stop bit is completed.
Telegram, message, packet - whatever term you prefer. At faster baud
rates, the inevitable pauses between messages take proportionately more
of the total bandwidth. Longer messages will be more efficient. But
you'll have to do the sums yourself to see what rates you need, and
whether or not this will be an issue.

>
>> When we have made testbenches that required serial communication
>> to multiple parallel devices, we typically put a USB hub in the
>> testbench and use multiple FDTI USB to serial cables. You only make
>> one (or possibly a few) of the testbenches - it's much cheaper to
>> use off-the-shelf parts than to spend time designing something
>> more advanced. You can buy a /lot/ of hubs and USB cables for the
>> price of the time to design, build and program a custom card for
>> the job. It also makes the system more scalable, as the
>> communication to different devices runs in parallel.
>
> USB hubs are a last resort. I've found many issues with such
> devices, especially larger than 4 ports.
>

We find they work fine - I have very rarely seen any issues with
off-the-shelf hubs, regardless of the number of ports. (They are almost
all made with 1-to-4 hub chips, which is why hubs are often found in
sizes of 4 ports, 7 ports, or 10 ports.)

A key complication with multiple serial ports on hubs is if you are
using Windows, it can be a big pain to keep consistent numbering for the
serial ports. You may have to use driver-specific libraries (like
FTDI's DLL's) to check serial numbers and use that information. It's
far easier on Linux where you can make a udev configuration file that
gives aliases to your ports ordered by physical tree address.
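As an illustration, one rule of roughly this shape per hub port does the trick - the USB tree path and the alias below are placeholders, not from a real setup:

# /etc/udev/rules.d/99-testbench-serial.rules (example file name)
SUBSYSTEM=="tty", SUBSYSTEMS=="usb", KERNELS=="1-1.2:1.0", SYMLINK+="ttySerialPort_2_3"

That way the names stay stable no matter which order the adapters happen to enumerate in.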

>
>> We have also done systems where there is a Raspberry Pi driving the
>> hub and multiple FTDI converters. The PC is connected to the Pi by
>> Ethernet (useful for galvanic isolation), and the Pi runs
>> forwarders between the serial ports and TCP/IP ports.
>
> There is a possibility of using an rPi on an Ethernet cable to the PC
> with direct comms to each test fixture board, but that's more work
> that I'm interested in.
>

Or you could use one Pi for a set of boards - whatever is physically
convenient.

>
>> To be fair, I don't recall any testbenches we've made that needed
>> more than perhaps 8 serial ports. If I needed to handle 80 lines, I
>> would probably split things up - a Pi handling 8-10 lines from a
>> local program, communicating with a PC master program by Ethernet.
>
> That's the advantage of the shared bus. No programming required,
> other than extending the protocol to move from "selecting" a device
> on the FPGA, to selecting the FPGA as well.
>

If you are familiar with socat, the Pi doesn't necessarily need any
programming either. (In our case we wanted some extra monitoring and
logging, which was more than we could get from socat - so it was a
couple of hundred lines of Python in the end.)
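For reference, such a forwarder is essentially a one-liner - something along these lines, where the TCP port, device node and baud rate are just example values, and you run one instance per serial port:

socat TCP-LISTEN:5000,fork,reuseaddr /dev/ttyUSB0,b115200,raw,echo=0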

Paul Rubin

Nov 2, 2022, 6:01:02 PM
Rick C <gnuarm.del...@gmail.com> writes:
> That's 80 to 160 UUTs total. There will be an FPGA controlling each
> pair of UUTs, so 80 FPGAs in total that the PC needs to talk to.

Rather than make a huge shared bus, I wonder if you could move the test
controller from a PC to a small microprocessor board that controls two
FPGA's or whatever. Then use a bunch of separate boards of that type,
communicating with a PC using some method that doesn't have to be
especially fast. The microprocessor board could be something like a
Raspberry Pi Pico, which costs $4 and can run Mecrisp, if that is what
your software is written with. It is quite a powerful little board.

Rick C

Nov 2, 2022, 7:27:37 PM
Not sure of your point. Multi-drop is multiple receivers on a single transmitter. Multi-point is multiple drivers and receivers. Look at a few references. Even wikipedia says, "RS-422 provides for data transmission, using balanced, or differential, signaling, with unidirectional/non-reversible, terminated or non-terminated transmission lines, point to point, or multi-drop. In contrast to EIA-485, RS-422/V.11 does not allow multiple drivers but only multiple receivers."


> Of course the same driver chips can be used in different combinations of
> wiring and drive enables. An RS-422 driver chip can be viewed as two
> RS-485 driver chips - alternatively, a RS-485 driver can be viewed as an
> RS-422 driver with the two differential pairs connected together.
> Really, all you are talking about is a differential driver and a
> differential receiver.

Sure, but the point is, nothing in RS-422 precludes multiple receivers, and in fact, every reference I've found (not paying for the actual spec) shows multi-drop receivers.


> So yes, you can do multidrop using an RS-422 driver chip. But it is not
> RS-422, which is a point-to-point serial bus standard.

I don't believe that is correct. If you have a copy of the spec to share, I'd love to look at it. I might have one myself, but it would be a paper copy somewhere unknown. The diagrams showing multi-drop RS-422 are so ubiquitous, I expect they are from the standard itself.
So, what range of speeds have you used? It is actually the UART hardware that *does* get it very wrong by working off the transmitter empty signal, which changes in the middle of the bit. The control has to be specially designed to transition at the *end* of the stop bit of the transmitted character. Knowing when it is ok to enable the driver has the same problem. The data is "received" in the middle of the stop bit. So the enable has to be a half bit time later, at the end of the stop bit.

None of this matters to me really. I'm going to use more wires, and do the multi-drop from the PC to the slaves on one pair and use RS-422 to multi-point from the slaves to the PC. Since the slaves are controlled by the master, they will never collide. The master can't collide with itself, so I can ignore any issues with this. I will use the bias resistors to assure a valid idle state. I may need to select different devices than the ones I use in the product. I think there are differences in the input load and I want to be sure I can chain up to 32 units.


> I would expect there to be many alternatives to FTDI that work similarly
> well, but that's the ones we generally use.
>
> <https://ftdichip.com/product-category/products/cables/?series_products=55>
> >
> >> The reception of the last byte from a slave is not finished until
> >> the stop bit has been properly received by the master - that means
> >> at least half-way through the sending of the stop bit.
> >
> > That's not sufficient. Everyone's halfway is a bit different and
> > start bit detection may not be enabled on some device when the next
> > driver outputs a start bit, or the last driver may not be turned off
> > when the next driver starts.
> >
> "At least half-way" means "at least 50% of the bit time". As long as
> the start bit from the next message is not sent until at least 50% of a
> bit time after the stop bit is detected, it will not conflict and all
> listening devices will be ready to see the start bit. (Devices that
> needed two stop bits haven't existed in the last 50 years.)

You don't seem to understand that nothing is timed from the start of each bit. The timing is from the first detected low of the start bit. From there, all timing is done by an internal clock. Check the math, you don't get 50% of the stop bit, guaranteed. That's why they call it "asynchronous" serial.


> You asked specifically about bus turnaround at the host side - I assume
> that is because on the slave devices, you have control of the drive
> enables and bus turnaround happens with negligible latency.

I know the master has the most trouble with this. The slaves tend to not have a problem because they are operated by MCUs and can wait a bit time before replying, or even a character time. I suppose they don't have any magic on turning off the driver though, but early is the easy way and generally doesn't cause a problem. The master has trouble on both ends of its message, needing to be careful to not turn on the driver too soon and to not turn it off so late that it clobbers the reply.


> >> Then there is a delay before the data gets sent back to the host
> >> PC, a delay through the kernel and drivers before it reaches the
> >> user program, time for the program to handle that message, time for
> >> it to prepare the next message, delays through the kernel and
> >> drivers before it gets to the USB bus, latency in the USB device
> >> that receives the USB message and then starts transmitting. There
> >> can be no collision unless all that delay is less than half a bit
> >> time. And no matter how fast your computer is, you are always going
> >> to need at least one full USB polling cycle for all this, which for
> >> USB 2.0 is 0.125 us. That means that if you have a baud rate of 16
> >> kbaud or higher, there is no possibility of a collision.
> >
> > If your numbers are accurate, that might be ok, but I'm looking for
> > data rates closer to 1 Mbps.
> USB serial ports generally use the 48 MHz base USB reference frequency
> as their source clock to scale down by a baud rate divisor, and common
> practice is 16 sub-bit clocks per line bit (so that you can have
> multiple samples for noise immunity). Thus baud rates of integer
> divisions of 3 MBaud are common. Certainly the FTDI chips handle 1, 2
> and 3 MBaud. (I haven't had need of such speeds with RS-485, but have
> happily used the common 3v3 TTL cables at 3 MBaud.)

At some point you have to worry about the line waveforms. So too fast can cause problems when using *lots* of receivers.


> > Admittedly, I have not done an analysis
> > of what will actually be required, but 128 UUT, or possibly 256, can
> > do a lot of damage to a shared bus. At 1 Mbps, 128 UUT results in an
> > effective bit rate maximum of 7.8 kbps. With 256 UUTs, that's 3.9
> > kbps. No, I don't think this will work properly at much slower
> > speeds than 1 Mbps. At 16 kbps, the effective rate to each UUT is
> > just 62.5 bps, not kbps.
> >
> As long as you are /above/ 16 kbaud, you should be fine (at the PC
> side). At 1 Mbaud, you do not need to worry about the PC starting a new
> telegram before the last received stop bit is completed.

Not entirely. The master has to turn *off* the driver before the slave replies. At higher speeds that's a problem. But it all depends on how it is being done. This is why I'm going with two busses, one for master transmit and one for master input.
Not sure what delays you are talking about. Every message is either selecting a slave, or reading a register or writing a register. You are probably thinking like a code banger where you have to worry about software delays. The protocol at the interface to the UUT is serial, at 30 MHz, and the turnaround is so quick that I had to use a mux to select the first bit and shift the rest of the data from a shift register. All in all it is under a μs for a transfer, and the data can be sent to the PC in ASCII Hex format which even at 1 Mbps is much slower. I can't say what the delays on the PC are. It's never been of interest, but I can't imagine it's much.


> >> When we have made testbenches that required serial communication
> >> to multiple parallel devices, we typically put a USB hub in the
> >> testbench and use multiple FDTI USB to serial cables. You only make
> >> one (or possibly a few) of the testbenches - it's much cheaper to
> >> use off-the-shelf parts than to spend time designing something
> >> more advanced. You can buy a /lot/ of hubs and USB cables for the
> >> price of the time to design, build and program a custom card for
> >> the job. It also makes the system more scalable, as the
> >> communication to different devices runs in parallel.
> >
> > USB hubs are a last resort. I've found many issues with such
> > devices, especially larger than 4 ports.
> >
> We find they work fine - I have very rarely seen any issues with
> off-the-shelf hubs, regardless of the number of ports. (They are almost
> all made with 1-to-4 hub chips, which is why hubs are often found in
> sizes of 4 ports, 7 ports, or 10 ports.)

Exactly, and I find combining them like that has issues.


> A key complication with multiple serial ports on hubs is if you are
> using Windows, it can be a big pain to keep consistent numbering for the
> serial ports. You may have to use driver-specific libraries (like
> FTDI's DLL's) to check serial numbers and use that information. It's
> far easier on Linux where you can make a udev configuration file that
> gives aliases to your ports ordered by physical tree address.

Yet another reason to avoid such complications. The reality is there's no gain. The multi-drop is the right way to go here.


> >> We have also done systems where there is a Raspberry Pi driving the
> >> hub and multiple FTDI converters. The PC is connected to the Pi by
> >> Ethernet (useful for galvanic isolation), and the Pi runs
> >> forwarders between the serial ports and TCP/IP ports.
> >
> > There is a possibility of using an rPi on an Ethernet cable to the PC
> > with direct comms to each test fixture board, but that's more work
> > that I'm interested in.
> >
> Or you could use one Pi for a set of boards - whatever is physically
> convenient.

But it's yet another piece to keep working. Much easier to just use the multi-drop. I will keep that idea as a backup plan. But getting RS-422 on an rPi is a hassle. That would need to be a hat, or a shield or whatever they call daughter cards on rPis. Last time I checked, it was hard to find rPis. They are part of the unobtainium universe now, it seems.


> >> To be fair, I don't recall any testbenches we've made that needed
> >> more than perhaps 8 serial ports. If I needed to handle 80 lines, I
> >> would probably split things up - a Pi handling 8-10 lines from a
> >> local program, communicating with a PC master program by Ethernet.
> >
> > That's the advantage of the shared bus. No programming required,
> > other than extending the protocol to move from "selecting" a device
> > on the FPGA, to selecting the FPGA as well.
> >
> If you are familiar with socat, the Pi doesn't necessarily need any
> programming either. (In our case we wanted some extra monitoring and
> logging, which was more than we could get from socat - so it was a
> couple of hundred lines of Python in the end.)

A couple hundred lines I'd rather not write.

Thanks for the comments.

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209

Rick C

Nov 2, 2022, 7:31:36 PM
I'm lost. How is this any better? The data collection is running on the PC, so there still has to be communications. If I want to make changes to the test program, it's one place, not 32 places, or 128 places. All they would be doing is acting as a serial port concentrator. They went out 40 years ago!

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209

Paul Rubin

Nov 2, 2022, 8:37:03 PM
Rick C <gnuarm.del...@gmail.com> writes:
> I'm lost. How is this any better? The data collection is running on
> the PC, so there still has to be communications... All they would be
> doing is acting as a serial port concentrator. They went out 40 years
> ago!

Well, I always like doing stuff in software instead of hardware. Serial
port concentrators are still around and I posted a link to one in your
other thread. It seems like a reasonable approach too. Oddly, a quick
web search doesn't find any big cheap serial to ethernet ones, but maybe
you could use USB hubs and FTDI-like cables.

The idea of using an MCU is to move almost everything speed critical
away from the PC. The MCU only has to control the FPGA's and transfer
digested data upwards to the PC. By having some fanout you
could potentially control 1000s of UUT instead of 80 from a single PC,
without having to build anything special. It would just mean plugging
some off the shelf boxes together, and writing some code.

You're more comfortable with hardware than I am, so maybe that approach
has less attraction for you than it does for me.

Rick C

Nov 3, 2022, 12:58:08 AM
On Wednesday, November 2, 2022 at 8:37:03 PM UTC-4, Paul Rubin wrote:
> Rick C <gnuarm.del...@gmail.com> writes:
> > I'm lost. How is this any better? The data collection is running on
> > the PC, so there still has to be communications... All they would be
> > doing is acting as a serial port concentrator. They went out 40 years
> > ago!
> Well, I always like doing stuff in software instead of hardware.

What hardware can you replace with software??? You use *different* hardware and then have to add new software. That's not an improvement.


> Serial
> port concentrators are still around and I posted a link to one in your
> other thread. It seems like a reasonable approach too. Oddly, a quick
> web search doesn't find any big cheap serial to ethernet ones, but maybe
> you could use USB hubs and FTDI-like cables.

I'm getting tired of discussing this. Your added hardware solves no problems. Having 32 serial ports just makes the software more complex and saves nothing.


> The idea of using an MCU is to move almost everything speed critical
> away from the PC. The MCU only has control the FPGA's and transfer
> transfer digested data upwards to the PC. By having some fanout you
> could potentially control 1000s of UUT instead of 80 from a single PC,
> without having to build anything special. It would just mean plugging
> some off the shelf boxes together, and writing some code.
>
> You're more comfortable with hardware than I am, so maybe that approach
> has less attraction for you than it does for me.

What hardware??? It's a serial port either way. You want to add a middle man that accomplishes nothing. You talk about controlling 1000's of UUTs, but there is no need for that.

What I need at this point, is the mechanical details of how to design a board to fit into a Eurocard chassis and how to figure out all the bits that go with it.

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209

David Brown

Nov 3, 2022, 7:42:29 AM
OK. I have always associated "multidrop" with multiple receivers /and/
transmitters - I have never come across a need for multiple receivers on
a serial bus without them also needing to transmit (such as in your
case), or a distinction between "multi-drop" meaning multiple receivers
and "multi-point" meaning multiple transmitters.

The term "multi-drop" is more commonly taken to mean "multiple devices
connected directly to the same bus, transmitting and receiving". The
bus has no explicit direction on the electrical connections. Examples
include RS-485, CAN, co-ax Ethernet.

"Multi-point" is more general and can be any kind of network where there
are multiple nodes that can send and receive to all other nodes. That
would include a switched Ethernet network as well as the subclass of
"multi-drop" networks.


But whatever the terms, I think we agree on how RS-422 works.

>
>> Of course the same driver chips can be used in different combinations of
>> wiring and drive enables. An RS-422 driver chip can be viewed as two
>> RS-485 driver chips - alternatively, a RS-485 driver can be viewed as an
>> RS-422 driver with the two differential pairs connected together.
>> Really, all you are talking about is a differential driver and a
>> differential receiver.
>
> Sure, but the point is, nothing in RS-422 precludes multiple receivers, and in fact, every reference I've found (not paying for the actual spec) shows multi-drop receivers.
>

Yes, it seems that is entirely possible. The only use I have seen for
RS-422 is as a kind of long-range alternative to RS-232. And the only
use I have seen for multiple receivers is - like for RS-232 - for
monitoring and debugging communication.

Still, multiple receivers are not going to help you in your testbench
unless they can also transmit.
For RS-485, my usage has usually been quite slow (9600 baud is very
common). Other colleagues have used faster rates. But as I said, it is
the slow baud rates that are at higher risk.

However, without knowing exact implementation details of all UART
hardware, I think you are wrong. There are two "finished byte" signals
that are common in UART transmission hardware.

The first is "transmit buffer empty" which is set when a byte is
transferred from the buffer into the transmitter shift register - most
UARTs are at least double-buffered to improve flow. This signal comes a
whole character before the end of the transmission - it is useful for
the software, but not the hardware. If you have a transmitter that is
not double-buffered, this signal would likely come at the beginning of
the stop bit, or at the end of the stop bit (depending on how the state
machines were made).

The second is "transmission complete", which is set at the /end/ of the
stop bit sent out on the line. That's when you know everything has been
sent - software can move on, and hardware can turn off the driver.

I cannot imagine why anyone would design transmission hardware that had
a special signal or disabled a driver in the /middle/ of the stop bit.
That makes no sense, and would have no use in software or hardware.
That is definitely an imagined problem.

(For reference, the FTDI datasheets show that the TXDEN output is
activated one bit before the start bit - so that the start bit is a 1 to
0 transition, as required for UARTs - and deactivated at the end of the
stop bit.)


You are correct that reception is in the middle of the stop bit
(typically sub-slot 9 of 16). The first transmitter will be disabled at
the end of the stop bit, and the next transmitter must not enable its
driver until after that point - it must wait at least half a bit time
after reception before starting transmission. (It can wait longer
without trouble, which is why faster baud rates are less likely to
involve any complications here.)


>
> None of this matters to me really. I'm going to use more wires, and do the multi-drop from the PC to the slaves on one pair and use RS-422 to multi-point from the slaves to the PC. Since the slaves are controlled by the master, they will never collide. The master can't collide with itself, so I can ignore any issues with this. I will use the bias resistors to assure a valid idle state. I may need to select different devices than the ones I use in the product. I think there are differences in the input load and I want to be sure I can chain up to 32 units.
>

OK. I have no idea what such a hybrid bus should technically be called,
but I think it should work absolutely fine for the purpose and seems
like a solid solution. I would not foresee any issues with 32 nodes on
such a bus, especially if it is relatively short and you have
terminators at each end.

(You still have to consider the latencies and timings to see if you can
get enough messages through the system fast enough, but you won't see
bus collisions. Consider broadcasts or multicast messages without
replies as a way of avoiding latency.)

>
>> I would expect there to be many alternatives to FTDI that work similarly
>> well, but that's the ones we generally use.
>>
>> <https://ftdichip.com/product-category/products/cables/?series_products=55>
>>>
>>>> The reception of the last byte from a slave is not finished until
>>>> the stop bit has been properly received by the master - that means
>>>> at least half-way through the sending of the stop bit.
>>>
>>> That's not sufficient. Everyone's halfway is a bit different and
>>> start bit detection may not be enabled on some device when the next
>>> driver outputs a start bit, or the last driver may not be turned off
>>> when the next driver starts.
>>>
>> "At least half-way" means "at least 50% of the bit time". As long as
>> the start bit from the next message is not sent until at least 50% of a
>> bit time after the stop bit is detected, it will not conflict and all
>> listening devices will be ready to see the start bit. (Devices that
>> needed two stop bits haven't existed in the last 50 years.)
>
> You don't seem to understand that there is nothing timing from the start of the bit. The timing is from the first detected low of the start bit. From there, all timing is done by an internal clock. Check the math, you don't get 50% of the stop bit, guaranteed. That's why they call it "asynchronous" serial.
>

The beginning of the start bit is detected at the receiver by its
falling edge. It is /confirmed/ by samples in the middle (or the
falling edge gets rejected as noise), but all timing is done from that
start time - not from the middle of any bits.

It is called "asynchronous" because the transmitter and the receiver do
not have any pre-agreed or external synchronisation regarding when the
transmission is going to happen. But once it starts, they agree exactly
on /when/ it starts (assuming a short enough bus that rise times and
transmission line delays are negligible).

I must admit that I have been assuming that you have reasonable quality
clock references on each side of the communication, so that your baud
rates match. In theory you have a total of nearly 5% margin of error
for mismatched baud rates, line rise and fall delays, etc., and these
can add to the maximum time between the receiver recognising a stop bit
and the transmitter finishing sending the stop bit, giving between
almost 0 and almost 1 bit time (typically 2/16 to 14/16 bit times).

>
>> You asked specifically about bus turnaround at the host side - I assume
>> that is because on the slave devices, you have control of the drive
>> enables and bus turnaround happens with negligible latency.
>
> I know the master has the most trouble with this. The slaves tend to not have a problem because they are operated by MCUs and can wait a bit time before replying, or even a character time. I suppose they don't have any magic on turning off the driver though, but early is the easy way and generally doesn't cause a problem. The master has trouble on both ends of it's message, needing to be careful to not turn on the driver too soon and not turning it off too late to clobber the reply.
>

PC's are not good at accurate short delays, but have no problem at
making a delay of at least a given time. There is no excuse for a PC
program turning on the driver too soon - even if it were not handled
automatically by the hardware, adding a "sleep" call to get a minimum
delay is basic stuff. In the old days (I remember doing this stuff on
16-bit Windows) it was hard to get a reliable delay that was shorter
than about 20 ms, but even then it was possible. The bigger challenge
with "manual" driver enable control in PC software is being sure you
turn the driver off fast enough, before the other end replies.

However - and I know I am repeating myself - the answer is to get a
decent USB to RS-485 converter that does this correctly and
automatically in hardware.

As for delays before replying (or before sending a new message from the
master), we have only talked about them in regard to bus drivers. It is
standard practice to have an additional delay beyond the minimum, as it
gives a bit of extra leeway and makes debugging easier - you can see the
start and stop of the messages on an oscilloscope. Modbus RTU, for
example, specifies an inter-frame silence time of at least 3.5 characters.
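For concreteness, here is what such a minimum gap can look like in a PC-side master loop. This is only a sketch under my own assumptions - POSIX termios, a made-up device path and telegram format, and an FTDI-style converter that handles the driver enable in hardware, so the explicit pause is purely an inter-frame gap:

/* Sketch of a PC-side master loop with an inter-frame gap.
   Device path, baud rate, telegram format and gap length are
   illustrative placeholders only. */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <string.h>
#include <termios.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0)
        return 1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tio.c_cc[VMIN]  = 0;      /* read() returns whatever has arrived...    */
    tio.c_cc[VTIME] = 1;      /* ...after at most 0.1 s of line silence    */
    tcsetattr(fd, TCSANOW, &tio);

    const char cmd[] = "R 01 0040\r";    /* hypothetical "read register" telegram */
    char reply[64];

    for (int i = 0; i < 10; i++) {
        write(fd, cmd, strlen(cmd));
        ssize_t n = read(fd, reply, sizeof reply);
        (void)n;                         /* parse/check the reply here */

        /* Inter-frame gap: 2 ms is comfortably more than 3.5 character
           times at 115200 baud (about 0.3 ms).                          */
        struct timespec gap = { 0, 2 * 1000 * 1000 };
        nanosleep(&gap, NULL);
    }

    close(fd);
    return 0;
}

The same idea carries over to Windows with the equivalent serial and sleep calls.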


>
>>>> Then there is a delay before the data gets sent back to the host
>>>> PC, a delay through the kernel and drivers before it reaches the
>>>> user program, time for the program to handle that message, time for
>>>> it to prepare the next message, delays through the kernel and
>>>> drivers before it gets to the USB bus, latency in the USB device
>>>> that receives the USB message and then starts transmitting. There
>>>> can be no collision unless all that delay is less than half a bit
>>>> time. And no matter how fast your computer is, you are always going
>>>> to need at least one full USB polling cycle for all this, which for
>>>> USB 2.0 is 0.125 us. That means that if you have a baud rate of 16
>>>> kbaud or higher, there is no possibility of a collision.
>>>
>>> If your numbers are accurate, that might be ok, but I'm looking for
>>> data rates closer to 1 Mbps.
>> USB serial ports generally use the 48 MHz base USB reference frequency
>> as their source clock to scale down by a baud rate divisor, and common
>> practice is 16 sub-bit clocks per line bit (so that you can have
>> multiple samples for noise immunity). Thus baud rates of integer
>> divisions of 3 MBaud are common. Certainly the FTDI chips handle 1, 2
>> and 3 MBaud. (I haven't had need of such speeds with RS-485, but have
>> happily used the common 3v3 TTL cables at 3 MBaud.)
>
> At some point you have to worry with the line waveforms. So too fast can cause problems when using *lots* of receivers.
>

Yes. But I don't think you have a physically long bus, do you? 10
meters, maybe? 3 MBaud and 32 nodes should be fine.

>
>>> Admittedly, I have not done an analysis
>>> of what will actually be required, but 128 UUT, or possibly 256, can
>>> do a lot of damage to a shared bus. At 1 Mbps, 128 UUT results in an
>>> effective bit rate maximum of 7.8 kbps. With 256 UUTs, that's 3.9
>>> kbps. No, I don't think this will work properly at much slower
>>> speeds than 1 Mbps. At 16 kbps, the effective rate to each UUT is
>>> just 62.5 bps, not kbps.
>>>
>> As long as you are /above/ 16 kbaud, you should be fine (at the PC
>> side). At 1 Mbaud, you do not need to worry about the PC starting a new
>> telegram before the last received stop bit is completed.
>
> Not entirely. The master has to turn *off* the driver before the slave replies. At higher speeds that's a problem. But it all depends on how it is being done. This is why I'm going with two busses, one for master transmit and one for master input.
>

Unless you are masochistic or stuck in the last century, the driver
turnoff is done by the USB to RS485 driver, not by a PC program in software.

(I think for several reasons your hybrid bus is a better choice than a
single RS-485 bus - though I would still prefer to look at a
hierarchical setup myself.)
There are /always/ delays - in particular at the PC side. PC's are good
for high throughput, but bad for low latency. If they are not a
problem, then that's fine.

>
>>>> When we have made testbenches that required serial communication
>>>> to multiple parallel devices, we typically put a USB hub in the
>>>> testbench and use multiple FDTI USB to serial cables. You only make
>>>> one (or possibly a few) of the testbenches - it's much cheaper to
>>>> use off-the-shelf parts than to spend time designing something
>>>> more advanced. You can buy a /lot/ of hubs and USB cables for the
>>>> price of the time to design, build and program a custom card for
>>>> the job. It also makes the system more scalable, as the
>>>> communication to different devices runs in parallel.
>>>
>>> USB hubs are a last resort. I've found many issues with such
>>> devices, especially larger than 4 ports.
>>>
>> We find they work fine - I have very rarely seen any issues with
>> off-the-shelf hubs, regardless of the number of ports. (They are almost
>> all made with 1-to-4 hub chips, which is why hubs are often found in
>> sizes of 4 ports, 7 ports, or 10 ports.)
>
> Exactly, and I find combining them like that has issues.
>

Experiences vary, I guess.

>
>> A key complication with multiple serial ports on hubs is if you are
>> using Windows, it can be a big pain to keep consistent numbering for the
>> serial ports. You may have to use driver-specific libraries (like
>> FTDI's DLL's) to check serial numbers and use that information. It's
>> far easier on Linux where you can make a udev configuration file that
>> gives aliases to your ports ordered by physical tree address.
>
> Yet another reason to avoid such complications. The reality is there's no gain. The multi-drop is the right way to go here.
>

You see a complication where I see a simple configuration. And if you
need to use multiple serial ports on a single PC, Linux and a udev
configuration is a /huge/ gain. I currently have 7 serial ports in use
on my development PC at the moment, connected to debug ports (TTL UARTs)
on various boards. /dev/ttySerialPort_2_3 for hub 2 port 3 is vastly
superior to "COM74" on a Windows system. (I have no idea if you are
using Windows or Linux on your controlling PC here.)

>
>>>> We have also done systems where there is a Raspberry Pi driving the
>>>> hub and multiple FTDI converters. The PC is connected to the Pi by
>>>> Ethernet (useful for galvanic isolation), and the Pi runs
>>>> forwarders between the serial ports and TCP/IP ports.
>>>
>>> There is a possibility of using an rPi on an Ethernet cable to the PC
>>> with direct comms to each test fixture board, but that's more work
>>> that I'm interested in.
>>>
>> Or you could use one Pi for a set of boards - whatever is physically
>> convenient.
>
> But it's yet another piece to keep working. Much easier to just use the multi-drop. I will keep that idea as a backup plan. But getting RS-422 on an rPi is a hassle. That would need to be a hat, or a shield or whatever they call daughter cards on rPis. Last time I checked, it was hard to find rPis. They are part of the unobtainium universe now, it seems.
>

Of course availability of parts is of prime concern these days, and
projects are often done by buying what you can and then designing around
the devices you have found.

Pi's have USB - you do your RS-485, RS-422 or whatever on the Pi in
exactly the same way as you do it on the PC, using FTDI cables (or an
alternative supplier that you are comfortable with). Plug and play.

It is about modularisation and scalability. Now, I don't know your
product, your manufacturing and test systems, your preferences, or
anything other than the information you've written here. But if our
production department asked us to make a test bench for handling 80
devices in parallel, my immediate reaction would be to refuse. I'd
design a testbench to handle 8, or some number of that order. Then I'd
get them to make perhaps 12 of these test benches. That way, they have
something scalable and maintainable. If one testbench breaks, they are
at 90% production capacity instead of 0%. If they need to increase
capacity, they can make a few more benches. If they want to spread
testing between two facilities, it's easy. So for /me/, and /my/
company, splitting things up in a hierarchy with Pi's (or something
similar) has clear advantages. But you might have very different
priorities or organisations that give different dynamics and different
trade-offs.

>
>>>> To be fair, I don't recall any testbenches we've made that needed
>>>> more than perhaps 8 serial ports. If I needed to handle 80 lines, I
>>>> would probably split things up - a Pi handling 8-10 lines from a
>>>> local program, communicating with a PC master program by Ethernet.
>>>
>>> That's the advantage of the shared bus. No programming required,
>>> other than extending the protocol to move from "selecting" a device
>>> on the FPGA, to selecting the FPGA as well.
>>>
>> If you are familiar with socat, the Pi doesn't necessarily need any
>> programming either. (In our case we wanted some extra monitoring and
>> logging, which was more than we could get from socat - so it was a
>> couple of hundred lines of Python in the end.)
>
> A couple hundred lines I'd rather not write.
>
> Thanks for the comments.
>

Thanks for starting the threads here - it's nice to have a bit of real
discussion in this group that is often rather quiet.

pozz

unread,
Nov 3, 2022, 9:01:00 AM11/3/22
to
Il 03/11/2022 12:42, David Brown ha scritto:
> On 03/11/2022 00:27, Rick C wrote:
>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown wrote:
>>> On 02/11/2022 20:20, Rick C wrote:
>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown
>>>> wrote:
>>>>> On 02/11/2022 06:28, Rick C wrote:


> You are correct that reception is in the middle of the stop bit
> (typically sub-slot 9 of 16).  The first transmitter will be disabled at
> the end of the stop bit, and the next transmitter must not enable its
> driver until after that point - it must wait at least half a bit time
> after reception before starting transmission.  (It can wait longer
> without trouble, which is why faster baud rates are less likely to
> involve any complications here.)

Do you mean that the RX interrupt triggers in the middle of the stop bit
and not at the end? Interesting, but are you sure this is the case for
every UART implemented in MCUs?

I wouldn't be surprised if the implementation was different for
different manufacturers.


>> None of this matters to me really.  I'm going to use more wires, and
>> do the multi-drop from the PC to the slaves on one pair and use RS-422
>> to multi-point from the slaves to the PC.  Since the slaves are
>> controlled by the master, they will never collide.  The master can't
>> collide with itself, so I can ignore any issues with this.  I will use
>> the bias resistors to assure a valid idle state.  I may need to select
>> different devices than the ones I use in the product.  I think there
>> are differences in the input load and I want to be sure I can chain up
>> to 32 units.
>>
>
> OK.  I have no idea what such a hybrid bus should technically be called,
> but I think it should work absolutely fine for the purpose and seems
> like a solid solution.  I would not foresee any issues with 32 nodes on
> such a bus, especially if it is relatively short and you have
> terminators at each end.

In my experience, termination resistors at each end of the line can
introduce other troubles if they aren't strictly required (i.e. for
signal integrity on long lines at high baud rates).

The receiver input impedances of all the nodes on the bus are in parallel
with the two terminators. If you have many nodes, the equivalent
impedance on the bus is much smaller, and the divider formed with the
bias resistors could reduce the differential voltage between A and B at
idle to less than 200mV.

If you don't use true fail-safe transceivers, a false start bit could be
seen by these kinds of receivers.

David Brown

unread,
Nov 3, 2022, 11:26:15 AM11/3/22
to
On 03/11/2022 14:00, pozz wrote:
> Il 03/11/2022 12:42, David Brown ha scritto:
>> On 03/11/2022 00:27, Rick C wrote:
>>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown wrote:
>>>> On 02/11/2022 20:20, Rick C wrote:
>>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown
>>>>> wrote:
>>>>>> On 02/11/2022 06:28, Rick C wrote:
>
>
>> You are correct that reception is in the middle of the stop bit
>> (typically sub-slot 9 of 16).  The first transmitter will be disabled
>> at the end of the stop bit, and the next transmitter must not enable
>> its driver until after that point - it must wait at least half a bit
>> time after reception before starting transmission.  (It can wait
>> longer without trouble, which is why faster baud rates are less likely
>> to involve any complications here.)
>
> Do you mean that RX interrupt triggers in the middle of the stop bit and
> not at the end? Interesting, but are you sure this is the case for every
> UART implemented in MCUs?

Of course I'm not sure - there are a /lot/ of MCU manufacturers!

UART receivers usually work in the same way, however. They have a
sample clock running at 16 times the baud clock. The start bit is edge
triggered to give the start of the character frame. Then each bit is
sampled in the middle of its time slot - usually at subbit slots 7, 8,
and 9 with majority voting. So the stop bit is recognized by subbit
slot 9 of the tenth bit (assuming 8-bit, no parity) - the voltage on the
line after that is irrelevant. (Even when you have two stop bits,
receivers never check the second stop bit - it affects transmit timing
only.) What purpose would there be in waiting another 7 subbits before
triggering the interrupt, DMA, or whatever?
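
To make that concrete, here is a rough C model of such a receiver - not
taken from any particular chip, just the scheme described above (16x
oversampling, majority vote on sub-slots 7-9, 8N1 framing). sample() is
a stand-in for reading the RX line level once per sub-bit clock, not a
real API:

#include <stdint.h>
#include <stdbool.h>

#define OVERSAMPLE 16

extern int sample(void);  /* hypothetical: RX line level, once per sub-bit clock */

/* Receive one 8N1 character, LSB first.  Returns true on success. */
bool uart_rx_char(uint8_t *out)
{
    while (sample() != 0)              /* hunt for the start bit's falling edge */
        ;

    uint8_t byte = 0;

    for (int bit = 0; bit < 10; bit++) {          /* start, 8 data, stop */
        /* For the stop bit, only sample up to sub-slot 9: the frame is
           judged complete in the *middle* of the stop bit. */
        int last_sub = (bit == 9) ? 9 : OVERSAMPLE;
        int votes = 0;
        for (int sub = 1; sub <= last_sub; sub++) {
            int level = sample();
            if (sub >= 7 && sub <= 9)             /* majority vote on mid-bit samples */
                votes += level;
        }
        int value = (votes >= 2);

        if (bit == 0) {
            if (value != 0)
                return false;                     /* noise, not a real start bit */
        } else if (bit <= 8) {
            byte >>= 1;
            if (value)
                byte |= 0x80;
        } else {
            if (value == 0)
                return false;                     /* framing error */
            *out = byte;      /* "character received" flagged here, mid stop bit */
            return true;
        }
    }
    return false;                                 /* not reached */
}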

>
> I wouldn't be surprised if the implementation was different for
> different manufacturers.
>

I've seen a bit of variation, including 8 subbit clocks per baud clock,
wider sampling ranges, re-sync of the clock on edges, etc. And of
course you don't always get the details of the timings in datasheets
(and who bothers measuring them?) But the key principles are the same.

>
>>> None of this matters to me really.  I'm going to use more wires, and
>>> do the multi-drop from the PC to the slaves on one pair and use
>>> RS-422 to multi-point from the slaves to the PC.  Since the slaves
>>> are controlled by the master, they will never collide.  The master
>>> can't collide with itself, so I can ignore any issues with this.  I
>>> will use the bias resistors to assure a valid idle state.  I may need
>>> to select different devices than the ones I use in the product.  I
>>> think there are differences in the input load and I want to be sure I
>>> can chain up to 32 units.
>>>
>>
>> OK.  I have no idea what such a hybrid bus should technically be
>> called, but I think it should work absolutely fine for the purpose and
>> seems like a solid solution.  I would not foresee any issues with 32
>> nodes on such a bus, especially if it is relatively short and you have
>> terminators at each end.
>
> In my experience, termination resistors at each end of the line could
> introduce other troubles if they aren't strictly required (because of
> signal integrity on long lines at high baud rates).
>

RS-485 requires them - you want to hold the bus at a stable idle state
when nothing is driving it. You also want to have a bit of load so that
you have some current on the bus, and thereby greater noise immunity.

> The receiver input impedance of all the nodes on the bus are in parallel
> with the two terminators. If you have many nodes, the equivalent
> impedance on the bus is much small and the partition with bias resistors
> could reduce the differential voltage between A and B at idle to less
> than 200mV.
>
> If you don't use true fail-safe transceivers, a fault start bit could be
> seen by these kind of receivers.
>

Receiver load is very small on modern RS-485 drivers.

Rick C

unread,
Nov 3, 2022, 11:29:11 AM11/3/22
to
On Thursday, November 3, 2022 at 9:01:00 AM UTC-4, pozz wrote:
> Il 03/11/2022 12:42, David Brown ha scritto:
> > On 03/11/2022 00:27, Rick C wrote:
> >> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown wrote:
> >>> On 02/11/2022 20:20, Rick C wrote:
> >>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown
> >>>> wrote:
> >>>>> On 02/11/2022 06:28, Rick C wrote:
>
>
> > You are correct that reception is in the middle of the stop bit
> > (typically sub-slot 9 of 16). The first transmitter will be disabled at
> > the end of the stop bit, and the next transmitter must not enable its
> > driver until after that point - it must wait at least half a bit time
> > after reception before starting transmission. (It can wait longer
> > without trouble, which is why faster baud rates are less likely to
> > involve any complications here.)
> Do you mean that RX interrupt triggers in the middle of the stop bit and
> not at the end? Interesting, but are you sure this is the case for every
> UART implemented in MCUs?

No, I have not tested every MCU UART ever made. This was the case when I tried using RS-485 many years ago, and it was true of every UART chip available. I forget the number, but there was a particular part made by Western Digital that became the "standard". It blossomed into a family of devices with small improvements in each (typically adding a FIFO, or enlarging it). I never saw one of that family which changed this "feature". That's because the purpose of this signal was not to control a driver, but to signal the buffer condition to a CPU.

The UART timing is to first align to the middle of the start bit, then continue marking time at the bit centers. When it reaches the middle of the stop bit, the receiver or transmitter is done, and it flags that a received character is available or that the transmit register is empty. The stop bit is the default value of the line, so nothing further has to be done in the UART, other than the receiver entering start bit hunt mode.

If a modern UART has a separate signal that flags the end of the stop bit time, that would be great, but I have no reason to think this is available in every case, or any particular case, unless it is documented, *well* documented. Have you seen any parts that specifically indicate they flag the end of the stop bit of characters received or transmitted?

I recall the 8251 USART by Intel was full of bugs and this flag was no exception.


> I wouldn't be surprised if the implementation was different for
> different manufacturers.

Of course. The trouble is knowing which ones have what!


> >> None of this matters to me really. I'm going to use more wires, and
> >> do the multi-drop from the PC to the slaves on one pair and use RS-422
> >> to multi-point from the slaves to the PC. Since the slaves are
> >> controlled by the master, they will never collide. The master can't
> >> collide with itself, so I can ignore any issues with this. I will use
> >> the bias resistors to assure a valid idle state. I may need to select
> >> different devices than the ones I use in the product. I think there
> >> are differences in the input load and I want to be sure I can chain up
> >> to 32 units.
> >>
> >
> > OK. I have no idea what such a hybrid bus should technically be called,
> > but I think it should work absolutely fine for the purpose and seems
> > like a solid solution. I would not foresee any issues with 32 nodes on
> > such a bus, especially if it is relatively short and you have
> > terminators at each end.
> In my experience, termination resistors at each end of the line could
> introduce other troubles if they aren't strictly required (because of
> signal integrity on long lines at high baud rates).
>
> The receiver input impedance of all the nodes on the bus are in parallel
> with the two terminators. If you have many nodes, the equivalent
> impedance on the bus is much small and the partition with bias resistors
> could reduce the differential voltage between A and B at idle to less
> than 200mV.

That depends on the details of the receivers and drivers. I have control over all of that except the USB cable. To ensure I'm using appropriate RS-422 devices, I could use a TTL cable, and on the first test fixture board use a separate connector with TTL level signaling. This would then be buffered on that first board to RS-422 for the rest of the chain. I could do all this with RS-232, but it doesn't provide for tristate on the outputs. Not that I recall anyway.


> If you don't use true fail-safe transceivers, a fault start bit could be
> seen by these kind of receivers.

Sorry, I don't know what you mean by "fail-safe" transceivers. Are you talking about the driver or the receiver? Do you mean the internal bias some receivers have, so with zero volts across the input pair they are in a defined state? That can be done through external resistors as well.

I'm expecting to use shorting jumpers for setting various modes on these test fixtures. Do they make any shorting jumpers that are more than just two pins? I've never seen that. I suppose I could make one using a connector and wire.

I think my biggest problem is going to be the mechanical parts of the chassis. I need the boards to be very strong to withstand many insertion/removal cycles of the UUTs from these boards. We currently do burn-in using production Eurocard format units. They are very stiff, with a rail on the front. I just realized they probably get a lot of stiffening from the massive Eurocard connectors on the back and similar connectors on the front panel.

I was planning to use a front panel, but maybe I need to add board stiffeners as well. In the "old" days, when DIPs roamed the earth, these were available as a combination of power decoupling capacitor and board stiffener. They probably don't even make them anymore. I really need the board to be solid so it doesn't suffer damage from the strain of repeated use. Maybe I just need to bolt a hunk of metal to the card.

--

Rick C.

++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209

Paul Rubin

unread,
Nov 3, 2022, 12:57:41 PM11/3/22
to
Rick C <gnuarm.del...@gmail.com> writes:
> What hardware can you replace with software??? You use *different*
> hardware and then have to add new software. That's not an
> improvement.

The idea is to avoid BUILDING hardware. Plugging cables into a box is
not building. There is no soldering iron, oscilloscope, or anything
like that in the picture. If you already avoid that, then great.

Paul Rubin

unread,
Nov 3, 2022, 1:36:31 PM11/3/22
to
Rick C <gnuarm.del...@gmail.com> writes:
> I think my biggest problem is going to be the mechanical parts of the
> chassis. I need the boards to be very strong to withstand many
> insertion removal cycles of the UUTs from these boards.

I wonder if you can use some kind of low-force connector or cable on the
card, instead of plugging and unplugging the card from a chassis.

E.g. something like https://www.adafruit.com/product/5468 depending on
how many conductors are needed.

Rick C

unread,
Nov 3, 2022, 3:02:06 PM11/3/22
to
You seem to misunderstand. I AM building hardware. There's no choice in that matter. You want me to add other hardware and also software that adds nothing, improves nothing, and just creates more potential problems. This all seems to be because you can't understand the very, very simple nature of a master-slave shared bus and a polling protocol.

The bottom line is, you don't know much about the application, but think you can design it better. Why are you doing this?

The only way to "improve" this might be to use RS-232 instead of RS-422. But that loses the noise immunity of the differential signaling and would require adding analog switches to the slave transmit outputs. It's also not really specified for that sort of use, so I'd have to do more engineering to be sure it would work. So RS-422 is looking pretty good to me.

The connectors will be RJ-45 (8P8C). They are cheap and easy to plug/unplug. They mount securely to the board and can even be integrated into the front panel rather than hanging out the back. I still need to figure out how to distribute power. It will probably be plugs like the ones used on laptop supplies. The whole thing will most likely be powered from a laptop supply, so I'll need a board that splits the one PSU out to 16 power cables. Sounds like a job for perf board and a small plastic case. The supply on my laptop is 300W. That should do the trick easily.

--

Rick C.

--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 3, 2022, 3:32:42 PM11/3/22
to
How about something like this?

https://www.digikey.com/en/products/detail/amphenol-cs-commercial-products/RJHSE538B02/1979553

That's what I'm going to use. I'd run the power through it as well, but even with the low power I'll be using, 28 ga is a bit small. I'm just not a big fan of the typical barrel power connectors. I'd be happy if they had a detent so they won't fall out. The nylon shell connectors are a pain to remove.

--

Rick C.

--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209

Dave Nadler

unread,
Nov 3, 2022, 3:37:43 PM11/3/22
to
Hi Rick - I have an RS-485 system on my desk using an implementation of
the old Intel BitBus. Works fine for a handful of nodes, limited
distance, and very simple cabling - but only 62.5kbaud. Good solid
technology for 1994 when I designed it...

Why would you use RS-485 instead of CAN? A million chips out there
support CAN with no fuss, works at decent speeds over twisted pair, not
hard to use.

BTW, another option for interfacing to RS-485 from USB is XR21B1411
which is what I happen to have on my desk.

Hope that helps!
Best Regards, Dave

Paul Rubin

unread,
Nov 3, 2022, 3:53:32 PM11/3/22
to
Rick C <gnuarm.del...@gmail.com> writes:
> The bottom line is, you don't know much about the application, but
> think you can design it better. Why are you doing this?

You asked for suggestions and I gave some.

Paul Rubin

unread,
Nov 3, 2022, 4:08:28 PM11/3/22
to
That appears to be an RJ45 like ethernet cables use. The little locking
tabs break off all the time, and also the cable gets kinked up after
repeated flexing where it goes into the connector. Strain relief helps
but it happens anyway. You might buy some ready made ethernet cables
rather than putting those connectors on yourself. At least with cheap
crimpers, the ready made cables are often more reliable than DIY ones.

They do make those magnetic connectors with varying numbers of pins.

Here is a CAN cable, no idea if that is of interest, but it uses the
OBD connector found in cars: https://www.adafruit.com/product/4841

XLR or DIN style plugs/sockets might also be something to consider.

There is also this style, popular with the mechanical keyboard crowd:
https://www.pchcables.com/aviationplugs.html

Rick C

unread,
Nov 3, 2022, 4:32:40 PM11/3/22
to
I'm using RS-422 because I don't need to learn how to use a "chip". It's the same serial protocol I'm using now, but instead of RS-232 voltage levels, it's RS-422 differential. The "change" is really the fact that it's not just one slave. So the bus will be split into a master send bus and a slave reply bus. The master doesn't need to manage the tri-state output because it's the only talker. The slaves only talk when spoken to and the UART is in an FPGA, (no CPU), so it can manage the tri-state control to the driver chip very easily.

CAN bus might be the greatest thing since sliced bread, but I am going to be slammed with work and I don't want to do anything I don't absolutely have to.

A lot of people don't understand that this is nearly the same as what I'm using now and will only require a very minor modification to the message protocol, to allow the slaves to be selected/addressed. It would be hard to make it any simpler and this would all still have to be done even if adding the CAN bus. The slaves still need to be selected/addressed.

Thanks for the suggestions. The part I'm worried about now is the more mechanical bits. I am thinking of using the Eurocard size so I can use the rack hardware, but I know very little about the bits and bobs. There will be no backplane, just card guides and the front panels on the cards to hold them in place. I might put the cabling on the front panel to give it easy access, but then the front panel needs machining. I could simplify that by cutting out one large hole to expose all the LEDs and connectors. I want to make the design work as simple as possible, and mechanical drawings are not my forte.

--

Rick C.

-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 3, 2022, 4:33:38 PM11/3/22
to
Ok, thank you for your suggestions.

--

Rick C.

-++ Get 1,000 miles of free Supercharging
-++ Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 3, 2022, 4:40:07 PM11/3/22
to
On Thursday, November 3, 2022 at 4:08:28 PM UTC-4, Paul Rubin wrote:
> Rick C <gnuarm.del...@gmail.com> writes:
> > How about something like this?
> >
> > https://www.digikey.com/en/products/detail/amphenol-cs-commercial-products/RJHSE538B02/1979553
> That appears to be an RJ45 like ethernet cables use. The little locking
> tabs break off all the time, and also the cable gets kinked up after
> repeated flexing where it goes into the connector. Strain relief helps
> but it happens anyway. You might buy some ready made ethernet cables
> rather than putting those connectors on yourself. At least with cheap
> crimpers, the ready made cables are often more reliable than DIY ones.

The cable will be three inches long. I can make more.


> They do make those magnetic connectors with varying numbers of pins.
>
> Here is a CAN cable, no idea if that is of interest, but it uses the
> OBD connector found in cars: https://www.adafruit.com/product/4841

If you are talking about the big, black connector, it is bigger than the board. This will be a Eurocard rack with 4HP or 0.8 inch spacing. RJ-45 barely fits.


> XLR or DIN style plugs/sockets might also be something to consider.

DIN? You mean those things that are used on Eurocards with some 96 pins? What would mate with it? What's actually wrong with RJ-45?


> There is also this style, popular with the mechanical keyboard crowd:
> https://www.pchcables.com/aviationplugs.html

Way too much work. This is a jumper connector to go between boards that are 0.8 inches on centers. It simply doesn't require that much effort. They will be plugged and unplugged, on average, 1.1 times a day. I think RJ-11 will hack it. I'd rather have something that breaks and is very easy to replace, than something that breaks less often, but is much harder to repair or replace.

--

Rick C.

+-- Get 1,000 miles of free Supercharging
+-- Tesla referral code - https://ts.la/richard11209

Paul Rubin

unread,
Nov 3, 2022, 7:50:13 PM11/3/22
to
Rick C <gnuarm.del...@gmail.com> writes:
> DIN? You mean those things that are used on Eurocards with some 96
> pins?

No I meant the circular connectors like you see on old PC keyboards,
similar to the aviation style one that I linked.

> What would mate with it? What's actually wrong with RJ-45?

1) the plugs break and the cables get munged up, but as you say, you
can replace them when they do.

2) the sockets also break, maybe not as often, but replacing them
might be harder, depending

If both of those are ok with you, then maybe it's a good choice.

Rick C

unread,
Nov 4, 2022, 12:10:29 AM11/4/22
to
On Thursday, November 3, 2022 at 7:50:13 PM UTC-4, Paul Rubin wrote:
> Rick C <gnuarm.del...@gmail.com> writes:
> > DIN? You mean those things that are used on Eurocards with some 96
> > pins?
> No I meant the circular connectors like you see on old PC keyboards,
> similar to the aviation style one that I linked.
> > What would mate with it? What's actually wrong with RJ-45?
> 1) the plugs break and the cables get munged up, but as you can say you
> can replace them when they do.

I suppose the plugs can break, but I've never seen a broken RJ-45, other than the catch breaking. That's nearly always a result of pulling on a cable through a tangle rather than freeing it gently. On the other hand, I have seen broken DIN mouse connectors. The way they protrude, they get bumped and one or the other is damaged. I think the metal shell on the chassis-mounted connector is optional or something; that can make it more fragile. Anything can break, but I only need to be concerned with significant problems. I think RJ-45 will be quite adequate.


> 2) the sockets also break, maybe not as often, but replacing them
> might be harder, depending

I've never seen an RJ-45 plug broken. They are used widely in the telecom industry as RS-232 connectors for consoles.


> If both of those are ok with you, then maybe it a good choice.

Yeah, I'm fine with a cable I can make to any length I want in 5 minutes, with most of that spent finding where I put the parts and tool. Oh, and it costs less than $1.

If there were an easier way to make a DIN connector, I'd be ok with that. Anything crimp or solder pin is going to be a PITA. Heck, I'd be ok with a ribbon cable actually, but it would be larger than an RJ-45, since the smallest I'm likely to find is 10 positions. That's a half inch, plus the extra width of the female part. It's easy to bend those pins, and it's not easy to extract the things without the extraction levers, which make it even larger.

As long as I put it on the back of the card, that's not a big deal, but I'm thinking of putting the connectors on the front to make access easier when pulling a card out of the cage. If power is in the front, it's totally easy. Of course, using an actual backplane is even easier, but that's a lot more work to get all the specs to make that happen. I wish I had one of the card cages in front of me to look at and see how they are constructed.

I have a similar card with a front panel. That is pretty straightforward with 8.5 inches of clear space on the front panel. Not sure where to get these particular parts though. I wish the cards were a bit larger in each direction. I can get 8 UUTs on one 6U, size B card, but it will be tight with the other stuff (FPGA, buffers, power supplies).

We've always had trouble ejecting the daughter cards, as the two friction-fit, 20 pin connectors are tough to get apart. It's easy to damage the connectors removing them. I have some ideas, but nothing that's rock solid. It will be important for the test fixture card to be well supported when removing the daughter cards. Having some extra room around the UUTs would help.

The next standard size up from size B (233 x 160 mm) is 367 x 220 mm. That's a large card! Turns out it's not so much money if made at JLCPCB. 20 of them for $352! That's pretty amazing!

pozz

unread,
Nov 4, 2022, 3:45:34 AM11/4/22
to
There's no real purpose, but it's important to know exactly when the RX
interrupt is fired by the UART.

Usually the next transmitter starts transmitting after receiving the
last byte of the previous transmitter (for example, the slave starts
replying to the master after receiving the complete message from it).

Now think of the issue of a transmitter that takes a little time to turn
its transceiver around from TX to RX. Every other transmitter on the bus
should take this delay into account and avoid starting transmission too
soon.

So I usually implement a short delay before starting a new message
transmission. If the maximum expected TX-to-RX turnaround delay is 10us,
I might think a 10us delay is enough, but under your assumption it isn't.

If the RX interrupt fires in the middle of the stop bit, I should delay
the new transmission by 10us plus half a bit time. At 9600 baud, half a
bit time is 52us, which is much longer than 10us.

I know the next transmitter has to do some processing of the received
message and prepare and buffer the new message to transmit, so some
delay comes automatically, but in many cases I have small 8-bit PICs and
a full-featured Linux box on the same bus, and the Linux box can be very
quick to start the new transmission.
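
As a sketch of what I mean, in C and with hypothetical HAL hooks
(uart_write_blocking(), uart_wait_tx_shift_empty(), de_pin_set() and
delay_us() are just placeholders, not a real library):

#include <stdint.h>
#include <stddef.h>

extern void uart_write_blocking(const uint8_t *buf, size_t len);
extern void uart_wait_tx_shift_empty(void); /* last stop bit fully on the wire */
extern void de_pin_set(int enable);         /* RS-485 driver-enable line */
extern void delay_us(uint32_t us);

void rs485_send(const uint8_t *buf, size_t len,
                uint32_t baud, uint32_t peer_turnaround_us)
{
    /* Guard time: the previous talker's worst-case TX-to-RX turnaround,
       plus half a bit time in case its "RX complete" event fired in the
       middle of the stop bit rather than at its end. */
    uint32_t half_bit_us = (1000000u / baud) / 2u;
    delay_us(peer_turnaround_us + half_bit_us);

    de_pin_set(1);                  /* take the bus */
    uart_write_blocking(buf, len);
    uart_wait_tx_shift_empty();     /* don't cut the stop bit short */
    de_pin_set(0);                  /* release the bus */
}
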
But this is the goal of *bias* resistors, not termination resistors.


> You also want to have a bit of load so that
> you have some current on the bus, and thereby greater noise immunity.

Of course, but termination resistors are usually small (around 100 ohms)
because they should match the impedance of the cable. If you only want
to introduce "some current" on the bus, you could use resistors on the
order of 1k, but those aren't strictly *termination* resistors.


>> The receiver input impedance of all the nodes on the bus are in
>> parallel with the two terminators. If you have many nodes, the
>> equivalent impedance on the bus is much small and the partition with
>> bias resistors could reduce the differential voltage between A and B
>> at idle to less than 200mV.
>>
>> If you don't use true fail-safe transceivers, a fault start bit could
>> be seen by these kind of receivers.
>>
>
> Receiver load is very small on modern RS-485 drivers.

The ST3485 datasheet gives a receiver input load of around 24k. When you
connect 32 slaves, the equivalent resistance would be 750 ohms, which
should be enough to have "some current" on the bus. If you add
*termination* resistors on the order of 100R at both ends, you
drastically reduce the differential voltage between A and B in the idle
state.
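
A rough worked example of that loading, in C, with illustrative values
(5V supply, 560 ohm bias resistors, two 120 ohm terminators, 32
receivers of 24k each - none of these numbers come from the thread):

#include <stdio.h>

static double parallel(double a, double b) { return a * b / (a + b); }

int main(void)
{
    const double vcc   = 5.0;      /* supply feeding the bias network        */
    const double rbias = 560.0;    /* each bias resistor (pull-up, pull-down) */
    const double rterm = 120.0;    /* terminator at each end of the bus      */
    const double rin   = 24000.0;  /* receiver input resistance              */
    const int    nodes = 32;

    double rbus  = parallel(parallel(rterm, rterm), rin / nodes);
    double vdiff = vcc * rbus / (2.0 * rbias + rbus);

    printf("bus load %.1f ohm, idle differential %.0f mV\n",
           rbus, 1000.0 * vdiff);
    /* With these numbers: about 56 ohm and roughly 236 mV - not much
       margin over the 200 mV receiver threshold, which is the point. */
    return 0;
}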

David Brown

unread,
Nov 4, 2022, 5:49:42 AM11/4/22
to
I think it is extremely rare that this is important. I can't think of a
single occasion when I have thought it remotely relevant where in the
stop bit the interrupt comes.

> Usually the next transmitter starts transmitting after receiving the
> last byte of the previous transmitter (for example, the slave starts
> replying to the master after receiving the complete message from it).
>

No. Usually the next transmitter starts after receiving the last byte,
and /then a pause/. There will always be some handling time in
software, and may also include an explicit pause. Almost always you
will want to do at least a minimum of checking of the incoming data
before deciding on the next telegram to be sent out. But if you have
very fast handling in relation to the baud rate, you will want an
explicit pause too - protocols regularly specify a minimum pause (such
as 3.5 character times for Modbus RTU), and you definitely want it to be
at least one full character time to ensure no listener gets hopelessly
out of sync.
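
As a quick illustration of that sort of gap, a sketch in C (10 bits per
character here, to match the figures later in this thread; the Modbus
spec itself counts 11 bits per character and also fixes the gap at
1750 us above 19200 baud):

#include <stdio.h>

static double t35_us(double baud, double bits_per_char)
{
    return 3.5 * bits_per_char * 1000000.0 / baud;   /* 3.5 character times */
}

int main(void)
{
    printf("9600 baud:   %.0f us\n", t35_us(9600.0, 10.0));    /* ~3646 us */
    printf("115200 baud: %.0f us\n", t35_us(115200.0, 10.0));  /* ~304 us  */
    return 0;
}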

> Now I think of the issue related to a transmitter that delays a little
> to turn around the direction of its transceiver, from TX to RX. Every
> transmitter on the bus should take into account this delay and avoid
> starting transmission too soon.

They should, yes. The turnaround delay should be negligible in this day
and age - if not, your software design is screwed or you have picked the
wrong hardware. (Of course, you don't always get the choice of hardware
you want, and programmers are often left to find ways around hardware
design flaws.)

>
> So I usually implement a short delay before starting a new message
> transmission. If the maximum expected delay of moving the direction from
> TX to RX is 10us, I could think to use a 10us delay, but this is wrong
> in your assumption.
>

Implementing an explicit delay (or being confident that your telegram
handling code takes long enough) is a good idea.

> If the RX interrupt is at the middle of the stop bit, I should delay the
> new transmission of 10us + half of bit time. With 9600 this is 52us that
> is much higher than 10us.
>

I made no such assumptions about timings. The figures I gave were for
using a USB 2 based interface on a PC, where the USB polling timer is at
8 kHz, or 125 µs. That is half a bit time for 4 Kbaud. (I had doubled
the frequency instead of halving it and said the baud had to be above 16
kBaud - that shows it's good to do your own calculations and not trust
others blindly!). At 1 MBaud (the suggested rate), the absolute fastest
the PC could turn around the bus would be 12 character times - half a
stop bit is irrelevant.

If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs
between reception of the last bit and the start of transmission of the
next message, your code is wrong - by nearly two orders of magnitude.
It is that simple.

If we take Modbus RTU as an example, you should be waiting 3.5 * 10 /
9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned
about exactly where the receive interrupt comes in the last stop bit,
add another half bit time and you get 3.7 ms. The half bit time is
negligible.

> I know the next transmitter should make some processing of the previous
> received message, prepare and buffer the new message to transmit, so the
> delay is somewhat automatic, but in many cases I have small 8-bits PICs
> and full-futured Linux box on the same bus and the Linux could be very
> fast to start the new transmission.
>

So put in a delay. An /appropriate/ delay.

Yes - but see below. Bias resistors are part of the termination - it
just means that you have terminating resistors to 5V and 0V as well as
across the balanced pair.

>
>> You also want to have a bit of load so that you have some current on
>> the bus, and thereby greater noise immunity.
>
> Of course, but termination resistors are usually small (around 100 ohms)
> because they should match the impedance of the cable. If you want only
> to introduce "some current" on the bus, you could use resistors in the
> order of 1k, but this isn't strictly a *termination* resistor.
>

If you have a cable that is long enough (or speeds fast enough) that it
needs to be treated as a transmission line with controlled impedance,
then you do need impedance matched terminators to avoid reflections
causing trouble. Usually you don't.

A "terminating resistor" is just a "resistor at the terminator" - it
does not imply impedance matching, or any other specific purpose. You
pick a value (and network) appropriate for the task in hand - maybe you
impedance matching, maybe you'd rather have larger values to reduce
power consumption.


>
>>> The receiver input impedance of all the nodes on the bus are in
>>> parallel with the two terminators. If you have many nodes, the
>>> equivalent impedance on the bus is much small and the partition with
>>> bias resistors could reduce the differential voltage between A and B
>>> at idle to less than 200mV.
>>>
>>> If you don't use true fail-safe transceivers, a fault start bit could
>>> be seen by these kind of receivers.
>>>
>>
>> Receiver load is very small on modern RS-485 drivers.
>
> ST3485 says the input load of the receiver around 24k. When you connect
> 32 slaves, the equivalent resistor would be 750 ohms, that should be
> enough to have "some current" on the bus. If you add *termination*
> resistors in the order of 100R on both sides, you could reduce
> drastically the differential voltage between A and B at idle state.
>

If you are pushing the limits of a bus, in terms of load, distance,
speed, cable characteristics, etc., then you need to do such
calculations carefully and be precise in your specification of
components, cables, topology, connectors, etc. For many buses in
practice, they will work fine using whatever resistor you pull out your
box of random parts. For a testbench, you are going to go for something
between these extremes.

David Brown

unread,
Nov 4, 2022, 6:01:09 AM11/4/22
to
On 03/11/2022 21:08, Paul Rubin wrote:
> Rick C <gnuarm.del...@gmail.com> writes:
>> How about something like this?
>>
>> https://www.digikey.com/en/products/detail/amphenol-cs-commercial-products/RJHSE538B02/1979553
>
> That appears to be an RJ45 like ethernet cables use. The little locking
> tabs break off all the time, and also the cable gets kinked up after
> repeated flexing where it goes into the connector. Strain relief helps
> but it happens anyway. You might buy some ready made ethernet cables
> rather than putting those connectors on yourself. At least with cheap
> crimpers, the ready made cables are often more reliable than DIY ones.
>

On some testbenches we have made for cards that have RJ45 sockets, we
made posts with an RJ45 plug on the end and a small spring at the base.
The RJ45 connector had its tag removed, of course.
The DUT slid in on rails. For high-usage testbenches, you don't want
any flexible cables attached to the DUT - you want bed of nails and
spring-loaded connectors as much as possible.


David Brown

unread,
Nov 4, 2022, 6:13:37 AM11/4/22
to
On 04/11/2022 05:10, Rick C wrote:

> Yeah, I'm fine with a cable I can make to any length I want in 5 minutes, with most of that spent finding where I put the parts and tool. Oh, and costs less than $1.
>

A cable you can make in 5 minutes doesn't cost $1, unless you earn less
than a hamburger flipper and the parts are free. The cost of a poor
connection when making the cable could be huge in downtime of the
testbench. It should not be hard to get a bag of pre-made short
Ethernet cables for a couple of dollars per cable - it's probably
cheaper to buy an effectively unlimited supply than to buy a good
quality crimping tool.

pozz

unread,
Nov 4, 2022, 10:37:38 AM11/4/22
to
In theory, if all the nodes on the bus were able to change direction in
hardware (exactly at the end of the stop bit), you would not be forced to
introduce any delay before transmission.

Many times I'm the author of a custom protocol between some nodes on a
shared bus, so I'm not forced to follow any specification. When I didn't
introduce any delay before transmission, I sometimes ran into this issue.
In my experience, the bus is often heterogeneous enough to have a
fast-replying slave paired with a slow master.


>> Now I think of the issue related to a transmitter that delays a little
>> to turn around the direction of its transceiver, from TX to RX. Every
>> transmitter on the bus should take into account this delay and avoid
>> starting transmission too soon.
>
> They should, yes.  The turnaround delay should be negligible in this day
> and age - if not, your software design is screwed or you have picked the
> wrong hardware.  (Of course, you don't always get the choice of hardware
> you want, and programmers are often left to find ways around hardware
> design flaws.)

Negligible doesn't mean anything. If there's a poor 8-bit PIC (previous
transmitter) clocked at 8MHz that changes direction in the TXC interrupt
while other interrupts are active, and there's a Cortex-M4 clocked at
200MHz (next transmitter), you will encounter this issue.

This is more evident if, as you are saying, the Cortex-M4 is able to
start processing the message from the PIC at the midpoint of the last
stop bit, while the PIC disables its driver at the *end* of the stop bit
plus an additional delay caused by interrupt handling.

In these cases the half bit time is not negligible and must be added to
the transmission delay.



>> So I usually implement a short delay before starting a new message
>> transmission. If the maximum expected delay of moving the direction
>> from TX to RX is 10us, I could think to use a 10us delay, but this is
>> wrong in your assumption.
>>
>
> Implementing an explicit delay (or being confident that your telegram
> handling code takes long enough) is a good idea.
>
>> If the RX interrupt is at the middle of the stop bit, I should delay
>> the new transmission of 10us + half of bit time. With 9600 this is
>> 52us that is much higher than 10us.
>
> I made no such assumptions about timings.  The figures I gave were for
> using a USB 2 based interface on a PC, where the USB polling timer is at
> 8 kHz, or 125 µs.  That is half a bit time for 4 Kbaud.  (I had doubled
> the frequency instead of halving it and said the baud had to be above 16
> kBaud - that shows it's good to do your own calculations and not trust
> others blindly!).  At 1 MBaud (the suggested rate), the absolute fastest
> the PC could turn around the bus would be 12 character times - half a
> stop bit is irrelevant.
>
> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs
> between reception of the last bit and the start of transmission of the
> next message, your code is wrong - by nearly two orders of magnitude. It
> is that simple.

Not always. If you have only MCUs that are able to control direction in
hardware, you don't need any delay before transmission.


> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 /
> 9600 seconds at a minimum - 3.65 /milli/seconds.  If you are concerned
> about exactly where the receive interrupt comes in the last stop bit,
> add another half bit time and you get 3.7 ms.  The half bit time is
> negligible.

Oh yes, if you have already implemented a pause of 3.5 char times, it is ok.

Ok, I thought you were suggesting adding impedance-matching (low value)
resistors as terminators in any case.

Rick C

unread,
Nov 4, 2022, 11:40:32 AM11/4/22
to
You are making an assumption about the implementation. There is a processor in the USB cable that implements the UART. The driver enable control is most likely implemented there. It would be pointless, and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed.

The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable.


> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs
> between reception of the last bit and the start of transmission of the
> next message, your code is wrong - by nearly two orders of magnitude.
> It is that simple.
>
> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 /
> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned
> about exactly where the receive interrupt comes in the last stop bit,
> add another half bit time and you get 3.7 ms. The half bit time is
> negligible.

Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, and that can be met with zero delay between the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle.


> > I know the next transmitter should make some processing of the previous
> > received message, prepare and buffer the new message to transmit, so the
> > delay is somewhat automatic, but in many cases I have small 8-bits PICs
> > and full-futured Linux box on the same bus and the Linux could be very
> > fast to start the new transmission.
> >
> So put in a delay. An /appropriate/ delay.

You are thinking software, like most people do. The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself.
How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.

Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.

They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.

--

Rick C.

+-+ Get 1,000 miles of free Supercharging
+-+ Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 4, 2022, 11:52:22 AM11/4/22
to
You are not only right, but absolutely correct. Cablestogo has 6 inch cables for $2.99 each. I'd like to keep them a bit shorter, but that's probably not an issue. Under quantity, they even list "unlimited supply".

--

Rick C.

++- Get 1,000 miles of free Supercharging
++- Tesla referral code - https://ts.la/richard11209

David Brown

unread,
Nov 4, 2022, 12:36:51 PM11/4/22
to
Communication is about /reliably/ transferring data between devices.
Asynchronous serial communication is about doing that despite slight
differences in clock rates, differences in synchronisation, differences
in startup times, etc. If you don't have idle pauses, you have almost
zero chance of staying in sync across the nodes - and no chance at all
of recovery when that happens. /Every/ successful serial protocol has
pauses between frames - long enough pauses that the idle time could not
possibly be part of a normal full speed frame. That does not just apply
to UART protocols, or even just to asynchronous protocols. The pause
does not have to be as long as 3.5 characters, but you need a pause -
just as you need other error recovery handling.

>
> Many times I'm the author of a custom protocol because some nodes on a
> shared bus, so I'm not forced to follow any specifications. When I
> didn't introduce any delay in the transmission, I sometimes faced this
> issue. In my experience, the bus is heterogeneous enough to have a fast
> replying slave to a slow master.
>
>
>>> Now I think of the issue related to a transmitter that delays a
>>> little to turn around the direction of its transceiver, from TX to
>>> RX. Every transmitter on the bus should take into account this delay
>>> and avoid starting transmission too soon.
>>
>> They should, yes.  The turnaround delay should be negligible in this
>> day and age - if not, your software design is screwed or you have
>> picked the wrong hardware.  (Of course, you don't always get the
>> choice of hardware you want, and programmers are often left to find
>> ways around hardware design flaws.)
>
> Negligible doesn't mean anything.

Negligible means of no significance in comparison to the delays you have
anyway - either intentional delays in order to separate telegrams and
have a reliable communication, or unavoidable delays due to software
processing.

> If thre's a poor 8 bit PIC (previous
> transmitter) clocked at 8MHz that changes direction in TXC interrupt
> while other interrupts are active, and there's a Cortex-M4 clocked at
> 200MHz (next transmitter), you will encounter this issue.
>

No, you won't - not unless you are doing something silly in your timing
such as failing to use appropriate pauses or thinking that 10 µs
turnarounds are a good idea at 9600 baud. And I did specify picking
sensible hardware - 8-bit PICs were a terrible choice 20 years ago for
anything involving high speed, and they have not improved. (Again -
sometimes you don't have control of the hardware, and sometimes there
can be other overriding reasons for picking something. But if your
hardware is limited, you have to take that into account.)

> This is more evident if, as you are saying, the Cortex-M4 is able to
> start processing the message from the PIC at the midpoint of last stop
> bit, while the PIC disables its driver at the *end* of the stop bit plus
> an additional delay caused by interrupts handling.
>
> In this cases the half bit time is not negligible and must be added to
> the transmission delay.
>

Sorry, but I cannot see any situation where that would happen in a
well-designed communication system.

Oh, and it is actually essential that the receiver considers the
character finished half-way through the stop bit, and not at the end.
UART communication is intended to work despite small differences in the
baud rate - up to nearly 5% total error. By the time the receiver is
half way through the received stop bit, and has identified it is valid,
the sender could already have finished the stop bit, as its clock could
be almost 5% faster (50% of a bit time over the full 10 bits). The
receiver has to be in
the "watch for falling edge of start bit" state at this point, ready for
the transmitter to start its next frame.
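
A quick check of that figure, as a trivial C program: if the sender's
clock is faster by a fraction e, it finishes its 10-bit frame after
10*(1-e) of the receiver's bit times, and the receiver samples the stop
bit at 9.5 of its own bit times, so the limit is 10*(1-e) = 9.5:

#include <stdio.h>

int main(void)
{
    double e = 1.0 - 9.5 / 10.0;   /* maximum total baud rate mismatch */
    printf("max total clock mismatch: %.1f%%\n", e * 100.0);   /* 5.0% */
    return 0;
}
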
Yes, exactly.

Rick C

unread,
Nov 4, 2022, 1:11:07 PM11/4/22
to
The "idle" pauses you talk about are accommodated with the start and stop bits in the async protocol. Every character is sent with a start bit which starts the timing. The stop bit is the "fluff" time for the next character to align to the next start bit. There is no need for the bus to be idle in the sense of no data being sent. If an RS-485 or RS-422 bus is biased for undriven times, there is no need for the driver to be on through the full stop bit. Once the stop bit has driven high, it can be disabled, such as in the middle of the bit. The there is a half bit time for timing skew, which amounts to 5%, between any two devices on the bus.


> > Many times I'm the author of a custom protocol because some nodes on a
> > shared bus, so I'm not forced to follow any specifications. When I
> > didn't introduce any delay in the transmission, I sometimes faced this
> > issue. In my experience, the bus is heterogeneous enough to have a fast
> > replying slave to a slow master.
> >
> >
> >>> Now I think of the issue related to a transmitter that delays a
> >>> little to turn around the direction of its transceiver, from TX to
> >>> RX. Every transmitter on the bus should take into account this delay
> >>> and avoid starting transmission too soon.
> >>
> >> They should, yes. The turnaround delay should be negligible in this
> >> day and age - if not, your software design is screwed or you have
> >> picked the wrong hardware. (Of course, you don't always get the
> >> choice of hardware you want, and programmers are often left to find
> >> ways around hardware design flaws.)
> >
> > Negligible doesn't mean anything.
> Negligible means of no significance in comparison to the delays you have
> anyway - either intentional delays in order to separate telegrams and
> have a reliable communication, or unavoidable delays due to software
> processing.

The software on the PC is not managing the bus drivers. So software delays are not relevant to bus control timing.
Yes, why would it not be? This is why there's no need for additional delays or "gaps" in the protocol for an async interface.

--

Rick C.

+++ Get 1,000 miles of free Supercharging
+++ Tesla referral code - https://ts.la/richard11209

anti...@math.uni.wroc.pl

unread,
Nov 4, 2022, 6:53:34 PM11/4/22
to
Rick C <gnuarm.del...@gmail.com> wrote:
>
> How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.

It is pointless to add a terminator at the driver: there will be a
mismatch anyway and the resistor would just waste transmit power. A
mismatch at the driver does not cause trouble as long as the ends are
properly terminated. And when the driver is at the near end and there
are no other drivers, it is enough to put termination only at the
far end. So the FTDI cable seems to be doing exactly what is needed.
>
> Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
>
> They sell cables that have 5 m of cable, with a round trip of 30 ns or so.

Closer to 50 ns due to lower speed in cable.

> I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.

Termination is also there to kill _multiple_ reflections. In a low-loss
line you can have a bunch of reflections creating jitter. When the
jitter is more than 10% of a bit time, serial communication tends to
have a significant number of errors. At 9600, or at 100000 bits/s with
a short line, the bit time is long enough that jitter due to reflections
on an unterminated line does not matter. Also, multidrop RS-485 is far
from low loss - each extra drop weakens the signal, so reflections die
out faster than on a quality point-to-point line.
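
For the numbers mentioned earlier in the thread (5 m of cable at
1 Mbit/s), a quick check in C, assuming a typical 0.66 cable velocity
factor:

#include <stdio.h>

int main(void)
{
    const double c        = 3.0e8;   /* m/s                           */
    const double vf       = 0.66;    /* assumed cable velocity factor */
    const double length_m = 5.0;
    const double bit_ns   = 1000.0;  /* 1 Mbit/s                      */

    double round_trip_ns = 2.0 * length_m / (vf * c) * 1e9;

    printf("round trip %.0f ns, 10%% of bit time %.0f ns\n",
           round_trip_ns, 0.1 * bit_ns);
    /* ~50 ns of possible reflection jitter against a 100 ns budget, so
       an unterminated 5 m run at 1 Mbit/s should indeed be OK. */
    return 0;
}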

--
Waldek Hebisch

chris

unread,
Nov 4, 2022, 8:13:30 PM11/4/22
to
On 11/2/22 05:28, Rick C wrote:
> I have a test fixture that uses RS-232 to communicate with a PC. It actually uses the voltage levels of RS-232, even though this is from a USB cable on the PC, so it's only RS-232 for maybe four inches. lol
>
> I'm redesigning the test fixtures to hold more units and fully automate a few features that presently requires an operator. There will now be 8 UUTs on each test fixture and I expect to have 10 to 20 test fixtures in a card rack. That's 80 to 160 UUTs total. There will be an FPGA controlling each pair of UUTs, so 80 FPGAs in total that the PC needs to talk to.
>
> Rather than working on a way to mux 80 RS-232 interfaces, I'm thinking it would be better to either daisy chain, or connect in parallel all these devices. The protocol is master-slave where the master sends a command and the slaves are idle until they reply. The four FPGAs on a test fixture board could be connected in parallel easily enough. But I don't think I want to run TTL level signals between so many boards.
>
> I could do an RS-422 interface with a master to slave pair and a slave to master pair. The slaves do not speak until spoken to, so there will be no collisions.
>
> RS-485 would allow all this to be over a single pair of wires. But the one big issue I see people complain about is getting PC software to not clobber the slaves, or I should say, to get the master to wait long enough that it's not clobbering it's own start bit by overwriting the stop bit of the slave. I suppose someone, somewhere has dealt with this on the PC and has a solution that doesn't impact bus speed. I run the single test fixture version of this at about 100 kbps. I'm going to want as much speed as I can get for 80 FPGAs controlling 160 UUTs. Maybe I should give that some analysis, because this might not be true.
>
> The tests are of two types, most of them are setting up a state and reading a signal. This can go pretty fast and doesn't take too many commands. Then there are the audio tests where the FPGA sends digital data to the UUT, which does it's thing and returns digital data which is crunched by the FPGA. This takes some small number of seconds and presently the protocol is to poll the status until it is done. That's a lot of messages, but it's not necessarily a slow point. The same test can be started on every UUT in parallel, so the waiting is in parallel. So maybe the serial port won't need to be any faster.
>
> Still, I want to use RS-422 or RS-485 to deal with ground noise since this will be spread over multiple boards that don't have terribly solid grounds, just the power cable really.
>
> I'm thinking out loud here as much as anything. I intended to simply ask if anyone had experience with RS-485 that would be helpful. Running two wires rather than eight would be a help. I'll probably use a 10 pin connector just to be on the safe side, allowing the transceivers to be used either way.
>

I worked on a highway traffic sign project some years back that used
multidrop RS-423. The sign was driven from a roadside controller, with
a supervisory controller between that and the LED column controllers.
The supervisory controller was always the master, with the column
controllers as slaves. The master always initiated comms, with column
controllers talking only when addressed. A simple software state
machine and line turnaround let the selected column talk. Used
differential line transceivers at the tx and rx ends, which could be
tristated at the output. Interesting project, and with a 15 yr design
life, probably hundreds still working now. RS-423 multidrop works well,
though I don't remember what the max supported speeds are. Much cheaper
than a network, and you can use standard Cat5 etc. network cables and
PCB sockets to tie it all together...

Chris




Rick C

unread,
Nov 4, 2022, 9:07:33 PM11/4/22
to
On Friday, November 4, 2022 at 6:53:34 PM UTC-4, anti...@math.uni.wroc.pl wrote:
> Rick C <gnuarm.del...@gmail.com> wrote:
> >
> > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
> It is pointless to add terminator to driver, there will be mismatch
> anyway and resistor would just waste transmit power. Mismatch
> at driver does not case trouble as long as ends are properly
> terminated. And when driver is at the near end and there are no
> other drivers, then it is enough to put termination only at the
> far end. So FTDI cable seem to be doing exactly what is needed.

Yes, that's true for a single driver and multiple receivers. The point is that with multiple drivers, a terminator is needed at both ends of the cable. You have two ends to terminate, because drivers can be in the middle. You could not use FTDI RS-422 cables in the arrangement I am implementing. Every receiver would add a 120 ohm load to the line. Good thing I only need one!


> > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
> >
> > They sell cables that have 5 m of cable, with a round trip of 30 ns or so.
> Closer to 50 ns due to lower speed in cable.
> > I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
> Termination is also to kill _multiple_ reflections. In low loss line
> you can have bunch of reflection creating jitter. When jitter is
> more than 10% of bit time serial communication tends to have significant
> number of errors. At 9600 or at 100000 bits/s with short line bit
> time is long enough that jitter due to reflections in untermined
> line does not matter. Also multidrop RS-485 is far from low loss,
> each extra drop weakens signal, so reflections die faster than
> in quality point-to-point line.

How do RS-485 drops "weaken" the signal? The load of an RS-485 device is very slight. The same result will happen with multiple receivers on RS-422.

I expect to be running at least 1 Mbps, possibly as high as 3 Mbps.

One thing I'm a bit confused about is the wiring of the EIA/TIA 568B and 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are swapped at both ends, so the cables are electrically identical, other than which color is used for a given pair. The pairs themselves do differ in twist pitch, to help reduce crosstalk, but the twist rates are not specified in the spec, so I don't see how that could matter.

Why would the color be an issue, to the point of creating two different specs???

Obviously I'm missing something. I will need to check a cable before I design the boards, lol.

--

Rick C.

---- Get 1,000 miles of free Supercharging
---- Tesla referral code - https://ts.la/richard11209

Richard Damon

unread,
Nov 4, 2022, 10:47:12 PM11/4/22
to
RS-485 will require you to make a firm decision on protocol timing.
Either you require that ALL units can get off the line fast after a
message, so you don't need to add much wait time, or you allow any unit
to be slow to get off, in which case everyone has to wait a while before
talking.

Perhaps if you have a single master that is fast, the replying machines
can be slow, as long as the master knows that.

Multi-drop RS-422, with one pair going out from the master controller to
everyone and a shared pair to answer on, largely gets around this
problem, as the replying units just need to be fast enough getting off
the line so they are off before the controller sends enough of a message
that someone else might decide to start to reply. This sounds like what
you are talking about, and does work.

You can even do "Multi-Master" with this topology, if you give the
masters two drive chips, one to drive the master bus when they are the
master, and one to drive the response bus when they are selected as a
slave, and some protocol to pass mastering around and some recovery
method to handle the case where the master role gets lost.

One other thing to remember is that 422/485 really is designed to be a
single linear bus, without significant branches, with end of bus
termination. You can "cheat" on this if your speed is on the slow side.
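
On the master side, a command/response poll over a bus like this can be
a very simple loop. A minimal pyserial sketch (the port name, baud
rate, address byte, terminator and timeouts below are made-up
placeholders, not anyone's actual protocol):

import serial, time

PORT = "/dev/ttyUSB0"            # or "COMx" on Windows - placeholder
BAUD = 1_000_000
REPLY_TIMEOUT_S = 0.01           # give a slow slave time to turn around
INTER_MSG_GAP_S = 0.001          # idle gap between messages for resync

def poll(ser, addr, cmd):
    """Send one addressed command, wait for one reply line."""
    frame = bytes([addr]) + cmd + b"\n"
    ser.reset_input_buffer()          # drop any stale bytes
    ser.write(frame)
    ser.flush()
    reply = ser.read_until(b"\n")     # returns b"" on timeout
    time.sleep(INTER_MSG_GAP_S)       # enforce a gap between messages
    return reply

with serial.Serial(PORT, BAUD, timeout=REPLY_TIMEOUT_S) as ser:
    for addr in range(1, 81):         # e.g. 80 slave FPGAs
        if not poll(ser, addr, b"STATUS?"):
            print("slave", addr, "did not reply")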

Paul Rubin

unread,
Nov 4, 2022, 11:03:25 PM11/4/22
to
Rick C <gnuarm.del...@gmail.com> writes:
> Cablestogo has 6 inch cables for $2.99 each. I'd like to keep them
> a bit shorter, but that's probably not an issue.

I thought there was a minimum length for ethernet cables because they
have to have certain RF characteristics at 100 MHz or 1 GHz frequencies.
I didn't realize they even came as short as 6 inches. Either way
though, it shouldn't be an issue for your purposes.

anti...@math.uni.wroc.pl

unread,
Nov 4, 2022, 11:46:16 PM11/4/22
to
Rick C <gnuarm.del...@gmail.com> wrote:
> On Friday, November 4, 2022 at 6:53:34 PM UTC-4, anti...@math.uni.wroc.pl wrote:
> > Rick C <gnuarm.del...@gmail.com> wrote:
> > >
> > > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
> > It is pointless to add terminator to driver, there will be mismatch
> > anyway and resistor would just waste transmit power. Mismatch
> > at driver does not case trouble as long as ends are properly
> > terminated. And when driver is at the near end and there are no
> > other drivers, then it is enough to put termination only at the
> > far end. So FTDI cable seem to be doing exactly what is needed.
>
> Yes, that's true for a single driver and multiple receivers. The point is that with multiple drivers, a terminator is needed at both ends of the cable. You have two ends to terminate, because drivers can be in the middle.

With a 100 ohm line, a driver in the middle sees the two halves in
parallel, so effectively 50 ohms. Typical driver impedance is about
40 ohms, so while mismatched, the mismatch is not too bad. Also, with
multiple devices on the line there will be undesirable signals even if
you have termination at both ends.

In an unterminated line there will be some loss, so after each
reflection the reflected signal will be weaker, in rough approximation
multiplied by some number a < 1 (say 0.8). After n reflections the
signal will be multiplied by a^n and for large enough n will become
negligible. Termination at a given end with a 1% resistor means that
about 2% will be reflected (due to imperfection). This 2% is likely to
be negligible. If the transmitter is in the middle, there is still
reflection at the end opposite the termination and at the transmitter.
But the mismatch at the transmitter is not bad and the corresponding
parameter a is much smaller than in the unterminated case. So
termination at one end reduces the number of problematic reflections
probably about 2-4 times, which means that you can increase the
transfer rate by a similar factor. Of course, termination at both ends
is better, but in the multidrop case speed will be lower than in a
point-to-point link.
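
To put rough numbers on that (plain Python; the impedances are just the
typical figures mentioned in this thread):

# Voltage reflection coefficient at a discontinuity on a Z0 line.
def gamma(z_load, z0):
    return (z_load - z0) / (z_load + z0)

Z0 = 100.0

print(gamma(1e9, Z0))         # open (unterminated) end: ~ +1.0
print(gamma(120.0, Z0))       # 120 ohm terminator on a 100 ohm line: ~ +0.09
print(gamma(40.0, Z0 / 2.0))  # ~40 ohm driver mid-line (sees Z0/2): ~ -0.11

# Geometric decay: if each bounce keeps a fraction a of the signal,
# only a**n is left after n bounces.
a = 0.8
print(a ** 5)                 # ~ 0.33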

> You could not use FTDI RS-422 cables in the arrangement I am implementing. Every receiver would add a 120 ohm load to the line. Good thing I only need one!

Well, multiple receivers on RS-422 have limited usefulness (AFAIK your
use case is called 4-wire RS-485), so it is no wonder that FTDI does
not support it. Maybe they have something more expensive that does
what you want.

> > > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
> > >
> > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so.
> > Closer to 50 ns due to lower speed in cable.
> > > I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
> > Termination is also to kill _multiple_ reflections. In low loss line
> > you can have bunch of reflection creating jitter. When jitter is
> > more than 10% of bit time serial communication tends to have significant
> > number of errors. At 9600 or at 100000 bits/s with short line bit
> > time is long enough that jitter due to reflections in untermined
> > line does not matter. Also multidrop RS-485 is far from low loss,
> > each extra drop weakens signal, so reflections die faster than
> > in quality point-to-point line.
>
> How do RS-485 drops "weaken" the signal? The load of an RS-485 device is very slight. The same result will happen with multiple receivers on RS-422.

That is a general thing, not specific to RS-485. If an RS-485 receiver
puts a 24 kohm load on the line, that is about 0.4% of the line
impedance. When the signal passes the receiver there is a corresponding
power loss. There is also a second effect: the receiver creates a
discontinuity, so there is a reflection. And besides the resistive
part, the receiver impedance also has a reactive part, which means the
discontinuity and reflection are bigger than implied by the receiver
resistance. With a lower load the receiver's effect is smaller, but
there is still a fraction of a percent lost or reflected. A single loss
is "very slight", but they add up and increase the effective line loss:
with a single receiver reflecting/losing 0.5%, after 40 receivers 20%
of the signal is gone. This 20% effectively adds to the normal line
loss.

> I expect to be running at least 1 Mbps, possibly as high as 3 Mbps.

You probably should check if you can get such a rate with short
messages. I did a little experiment using a CH340 and a CP2104. That
was a bidirectional TTL-level serial connection using 15 cm wires. The
slave echoed each received character after mangling it a little (so I
knew that it really came from the slave and not from some echo in the
software stack). I had trouble running the CH340 above 460800 (that
could be a limit of the program that I used). But using 1-character
messages, 10000 round trips took about 7 s, with little influence from
the serial speed (almost the same result at 115200 and 230400). Also,
increasing the message to 5 bytes gave essentially the same number of
_messages_.

The CP2104 was better; here I could go up to 2000000. Using 5-byte
messages, 10000 round trips needed 2.5 s at anything up to 1500000; at
2000000 the time dropped to about 1.9 s. When I increased the message
to 10 bytes it was back to about 2.5 s.

I must admit that ATM I am not sure what this means. But this 2.5 s
looks significant: it means 4000 round trips per second, which is 8000
messages per second, which in turn is the number of USB cycles per
second. So it seems that a smallish message normally needs a full USB
cycle (125 us) to get through the USB bus. It seems that sometimes more
than one message can go through in a cycle (giving the smaller times
that I observed), but it is not clear if one can do significantly
better. And the CH340 shows that it may be much worse.

FTDI is claimed to be very good, so maybe it is better, but I would not
count on this without checking. Actually, I remember folks complaining
that they needed more than a millisecond to get a message through
USB-serial.
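
Checking is not much work; something along these lines (a pyserial
sketch; the port name and the assumption that the far end echoes each
message are placeholders, not a measurement of any particular adapter):

import serial, time

PORT, BAUD = "/dev/ttyUSB0", 1_000_000
N = 10_000
MSG = b"ping!"                    # 5-byte message, as in the test above

with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
    t0 = time.perf_counter()
    for _ in range(N):
        ser.write(MSG)
        if len(ser.read(len(MSG))) != len(MSG):
            raise RuntimeError("timeout / short read")
    dt = time.perf_counter() - t0

print(N, "round trips in", round(dt, 2), "s;",
      round(N / dt), "per second")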

OTOH, your description suggests that you should be able to do what you
want with much less message traffic, so maybe USB-serial speed is
enough for you.

> One thing I'm a bit confused about, is the wiring of the EIA/TIA 568B or 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are reversed on both ends, making the cables electrically identical, other than the colors used for a given pair. The only difference is, the different pairs have different twist pitch, to help reduce crosstalk. But the numbers are not specified in the spec, so I don't see how this could matter.
>
> Why would the color be an issue, to the point of creating two different specs???
>
> Obviously I'm missing something. I will need to check a cable before I design the boards, lol.

You may be missing the fact that most folks installing network cabling
do not know about transmission lines and the reasons for matching
pairs. And even for folks who understand the theory, it is easier to
check that the colors are in the positions prescribed by the norm than
to check pairs. So colors matter because, by following the colors,
folks can get a correct connection without too much thinking. Why two
specs? I think that is an artifact of history and of the way standards
bodies work. When half of the industry is doing it one way and the
other half is doing it a different but equally good way, the standards
body cannot say that one half is wrong; it must allow both ways.

--
Waldek Hebisch

Rick C

unread,
Nov 5, 2022, 5:09:46 AM11/5/22
to
On Friday, November 4, 2022 at 11:46:16 PM UTC-4, anti...@math.uni.wroc.pl wrote:
> Rick C <gnuarm.del...@gmail.com> wrote:
> > On Friday, November 4, 2022 at 6:53:34 PM UTC-4, anti...@math.uni.wroc.pl wrote:
> > > Rick C <gnuarm.del...@gmail.com> wrote:
> > > >
> > > > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
> > > It is pointless to add terminator to driver, there will be mismatch
> > > anyway and resistor would just waste transmit power. Mismatch
> > > at driver does not case trouble as long as ends are properly
> > > terminated. And when driver is at the near end and there are no
> > > other drivers, then it is enough to put termination only at the
> > > far end. So FTDI cable seem to be doing exactly what is needed.
> >
> > Yes, that's true for a single driver and multiple receivers. The point is that with multiple drivers, a terminator is needed at both ends of the cable. You have two ends to terminate, because drivers can be in the middle.
> With 100 Ohm line driver in the middle sees two parts in parallel, so
> effectively 50 Ohm. Typical driver impedance is about 40 Ohm, so
> while mismatched, mismath is not too bad. Also, with multiple
> devices on the line there will be undesirable signals even if you
> have termination at both ends.

I don't want to get into a big discussion on termination, but any time a driver is in the middle of the line, it will see two loads, one for each direction of the cable. The termination only impacts the behavior of the reflections. So every driver that is not at the end of the line will see the characteristic impedance divided by two. However, since the driver is not impedance matched to the line, that should not matter. But each end needs to be terminated, to prevent reflections from that end.

The disruptions from the driver/receiver connections of intermediate chips will be small, since they are high impedance and add minimal capacitance compared to the transmission line. These signals have rise and fall times of multiple ns, so even with no terminations, it is unlikely that reflections from the ends of the line would cause visible effects, much less those from the individual connections.


> In unterminated line there will be some loss, so after each reflection
> reflected signal will be weaker, in rough approximation multiplied
> by some number a < 1 (say 0.8). After n reflections signal will
> be multiplied by a^n and for large enough n will become negligible.
> Termination at given end with 1% resistor means that about 2% will
> be reflected (due to imperfection). This 2% is likely to be negligible.
> If transmitter is in the middle, there is still reflection at the
> end opposite to termination and at the transmitter. But mismatch
> at transmitter is not bad and the corresponding parameter a is
> much smaller than in unterminated case. So termination at one
> end reduces number of problematic reflections probably about 2-4
> times. Which means that you can increase transfer rate by
> similar factor. Of course, termintion at both ends is better,
> but in multidrop case speed will be lower than in point-to-point
> link.

Multidrop is a single driver and multiple receivers. Multipoint is multiple drivers and receivers. One line will be multidrop (from the PC) and the other multipoint (to the PC). The multidrop line will be single terminated, since the driver needs no termination; its impedance is well below the line impedance. The multipoint line has a termination in the FTDI device on the receiver. Another termination will be added to the far end of the run. This is mostly insurance. I would not expect trouble if I used no terminators. I could probably use a TTL-level serial cable and no RS-422 interface chips, but that's going a bit far, I think. Using RS-422 is enough insurance to make the system work reliably.


> > You could not use FTDI RS-422 cables in the arrangement I am implementing. Every receiver would add a 120 ohm load to the line. Good thing I only need one!
> Well, multiple receivers on RS-422 have limited usefulness (AFAIK your
> use case is called 4-wire RS-485), so no wonder that FTDI does not
> support it. Maybe they have something more expensive that is
> doing what you want.

??? Who said FTDI does not support multiple receivers? Oh, you mean their cables only. I'm not sure why you say this has limited usefulness. But whatever. That's not a thing worth mentioning really.

I'm not using FTDI anyplace other than the PC, so their device does exactly what I want. The only other differential cable is RS-485, which I don't want to use, as you have to pay more attention to the timing of the driver enables.


> > > > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
> > > >
> > > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so.
> > > Closer to 50 ns due to lower speed in cable.
> > > > I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
> > > Termination is also to kill _multiple_ reflections. In low loss line
> > > you can have bunch of reflection creating jitter. When jitter is
> > > more than 10% of bit time serial communication tends to have significant
> > > number of errors. At 9600 or at 100000 bits/s with short line bit
> > > time is long enough that jitter due to reflections in untermined
> > > line does not matter. Also multidrop RS-485 is far from low loss,
> > > each extra drop weakens signal, so reflections die faster than
> > > in quality point-to-point line.
> >
> > How do RS-485 drops "weaken" the signal? The load of an RS-485 device is very slight. The same result will happen with multiple receivers on RS-422.
> That is general thing, not specific to RS-485. If RS-485 receiver
> puts 24 kOhm load on line, that is about 0.4% of line impedance.
> When signal passes past receiver there is corresponding power loss.

If you are talking about the load resistance, that is trivial enough to be ignored for signal loss. The basic RS-422 devices are rated for 32 loads, and the numbers in the FTDI data sheet (54 ohms load) are with a pair of 120 ohm resistors and 32 loads.


> There is also second effect: receiver created discontinuity, so
> there is reflection. And beside resitive part receiver impedance
> has also reactive part which means that discontinuity and reflection
> is bigger than implied by receiver resistance. With lower load
> recevier effect is smaller, but still there is fraction of percent
> lost or reflected. Single loss is "very slight", but they add up
> and increase effective line loss: with single receiver reflecting/losing
> 0.5 after 40 receivers 20% of signal is gone. This 20% effectively
> adds to normal line loss.

The "reactive" part of the receiver/driver load is capacitive. That does not change with the load value. It's mostly from the packaging is my understanding, but they don't give a value in the part data sheet. I expect there's more capacitance in the 6 foot cable than the device. I don't know how you come up with the loss number.


> > I expect to be running at least 1 Mbps, possibly as high as 3 Mbps.
> You probably should check if you can get such rate with short messages.
> If did little experiment using CH340 and CP2104. That was bi-drectional
> TTL level serial connection using 15 cm wires. Slave echoed each
> received character after mangling it a little (so I knew that it
> really came from the slave and not from some echo in software stack).
> I had trouble running CH340 above 460800 (that could be limit of program
> that I used). But using 1 character messages 10000 round trips took
> about 7s, with small influence from serial speed (almost the same
> result at 115200 and 230400). Also increasing message to 5 bytes
> gave essentially the same number of _messages_.

I ran the numbers in one of my posts (here or in another group). My messages are around 10 char with the same echo or three more characters for a read reply. Assuming 8 kHz for the polling rate, an exchange would happen at 4 kHz. A total of 25 char gives 100 kchar/s or 800 kbps on USB or 1,000 kbps on the RS-422/RS-485 interface. So I would probably want to use something a bit faster than 1 Mbps. I think 4k messages per second will be plenty fast enough. With 128 UUT in the system that's 32 commands per second per UUT.
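
Spelling that arithmetic out (plain Python, using the numbers above):

# Back-of-envelope check of the message-rate figures.
chars_per_exchange = 25          # ~10 char command + ~13 char reply, rounded up
exchanges_per_s    = 4_000       # half the 8 kHz USB polling rate

chars_per_s = chars_per_exchange * exchanges_per_s
print(chars_per_s)               # ~100,000 chars/s
print(chars_per_s * 8 / 1e3)     # ~800 kbps of payload over USB
print(chars_per_s * 10 / 1e3)    # ~1,000 kbps on the wire (10 bits/char)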

I may want to streamline the protocol a bit to incorporate the slave selection in every command. This will be more characters per message, but more efficient overall with fewer messages. The process can be to send the same command to every UUT at the same time. Mostly this is just not an issue, until the audio tests. They take some noticeable time to execute, as they collect some amount of audio data. I might add a test for spurs, since some UUT failures clip the sinewaves due to DC bias faults, and harmonic distortion would be a way to check for this. I want the testing to diagnose as much as possible. This would add another slow test. So these should be done on all UUTs in parallel.
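
For example, an addressed frame could look something like this (purely illustrative - none of these field names, sizes, or the broadcast address are from the real protocol):

# Hypothetical addressed command frame: address, length, command,
# optional data, simple checksum.  Field layout is made up.
BROADCAST = 0xFF                 # every slave acts on it, none replies

def build_frame(addr: int, cmd: bytes, data: bytes = b"") -> bytes:
    body = bytes([addr, len(cmd) + len(data)]) + cmd + data
    return body + bytes([sum(body) & 0xFF])

start_all = build_frame(BROADCAST, b"AUD_START")   # start the slow audio test everywhere
poll_one  = build_frame(0x12, b"AUD_STAT?")        # then poll each slave for status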


> CP2104 was better, here I could go up to 2000000. Using 5 byte
> messages 10000 round trips needed 2.5s up to 1500000, at
> 2000000 time dropped to about 1.9. When I increased message
> to 10 bytes it was back about 2.5s.
>
> I must admit that ATM I am not sure what this means. But this 2.5s
> looks significant: this means 4000 round trips per second, which
> is 8000 messages, which in turn is number of USB cycles. So,
> it seems that normally smallish messages need USB cycle (125 uS)
> to get trough USB bus. It seems that sometimes more than one
> message may go trough in a cycle (giving smaller times that I
> observed), but it is not clear if one can do significantly better.
> And CH340 shows that it may be much worse.

I used to use CH340 cables with my test fixture, but they would stop working after some time, hours I think. I think the cable had to be unplugged to get it working again. Once I realized it was the CH340 cable/drivers, I got FTDI devices and never looked back. They are triple the price, but much, much cheaper in the long run.


> FTDI is claimed to be very good, so maybe it is better, but I would
> not count on this without checking. Actually, I remember folks
> complaining that they needed more than millisecond to get message
> trough USB-serial.

It's too early to be testing, but I will get to that. I suppose I could do loopback testing with the RS-232 cable I have now.


> OTOH, your description suggest that you should be able to do what
> you want with much smaller message traffic, so maybe USB-serial
> speed is enough for you.

If it doesn't run at the speed I'm thinking, it's not a big loss. There's no testing at all done with the current burn in chassis. The UUTs are tested one at a time. You can't get much slower than that. Even if it takes a minute to run a full test, that's on all 128 UUTs in parallel and it will be around 1000 times faster than what we have now! The slow part will be getting all the UUTs loaded on the test fixtures and getting the process started. Any bad UUTs will need to be pulled out and tested/debugged separately. Once they are pulled out, the testing runs until the next day when the units are labeled with a serial number and ready to ship!


> > One thing I'm a bit confused about, is the wiring of the EIA/TIA 568B or 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are reversed on both ends, making the cables electrically identical, other than the colors used for a given pair. The only difference is, the different pairs have different twist pitch, to help reduce crosstalk. But the numbers are not specified in the spec, so I don't see how this could matter.
> >
> > Why would the color be an issue, to the point of creating two different specs???
> >
> > Obviously I'm missing something. I will need to check a cable before I design the boards, lol.
> You may be missing fact that most folks installing network cabling
> do not know about transmission lines and reasons for matching pairs.
> And even for folks that understand theory, it is easier to check
> that colors are in position prescribed in the norm, than to check
> pairs. So, colors matter because using colors folks can get correct
> connetion without too much thinking.

The people using the cables don't see the colors. They just plug them in.


> Why two specs? I think
> that this is artifact of history and way that standard bodies work.
> When half of industry is using one way and other half is using
> different but equally good way standard body can not say that
> one half is wrong, they must allow both ways.

But it's not different, really. It's just colors that mean nothing to anyone actually using the cables. They just want to plug them in and make things work. The color of the insulator won't change that at all.

If there was something different about the wiring, then I'd say, I get it. But electrically they are identical.

It's also odd that the spec doesn't say how many turns per foot/meter are in the twisted pairs. But it is different for each pair, to give less crosstalk.

--

Rick C.

---+ Get 1,000 miles of free Supercharging
---+ Tesla referral code - https://ts.la/richard11209

David Brown

unread,
Nov 5, 2022, 6:58:24 AM11/5/22
to
There are two levels of framing here, and two types of pauses.

For UART communication, there is the "character frame" and the stop bit
acts as a pause between characters. This is to give a minimum time to
allow re-synchronisation of the clock timing at the receiver. It also
forms, along with the start bit, a guaranteed edge for this
re-synchronisation. More sophisticated serial protocols (CAN, Ethernet,
etc.) do not need this because they have other methods of guaranteeing
transitions and allowing the receiver to re-synchronise regularly - thus
they do not need framing or idling at the character or byte level.

But you always want framing and idling between message frames at a
higher level. You always have an idle period that is longer than any
valid character or part of a message.

For example, in CAN communication you have "bit stuffing" any time you
have 5 equal value bits in a row. This ensures that in the message, you
never have more than 5 bits without a transition, and you don't need a
fixed start or stop bit per byte in order to keep the receiver
synchronised. But at the end of the CAN frame there are at least 10
bits of recessive (1) value. Any receiver that has got out of
synchronisation, due to noise, startup timing, etc., will know it
cannot possibly be in the middle of a frame and will restart its
receiver.

In UART communication, this is handled at the protocol level rather than
the hardware (though some UART hardware may have "idle detect" signals
when more than 11 bits of high level are seen in a row). Some
UART-based protocols also use a "break" signal between frames - that is
a string of at least 11 bits of low level.

If you do not have such pauses, and a receiver is out of step, it has no
way to get into synchronisation again. Maybe you get lucky, but
basically all it is seeing is a stream of high and low bits with no
absolute indicator of position - and no way to tell what might be the
start bit of a new character (rather than a 1 bit then a 0 bit within a
character), never mind the start of a message.

Usually you get enough pauses naturally in the communication, with
delays between reception and reply. But if you don't have them, you
must add them. Otherwise your communication will be too fragile to use
in practice. You /need/ idle gaps to be able to resynchronise reliably
in the face of errors (and there is /always/ a risk of errors).
It will be in the right state at the right time, as long as it enters it
when the stop bit is identified (half-way through the stop bit) rather
than artificially waiting for the end of the bit time.

You need gaps in the character stream at a higher level, for error recovery.
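
On the receiving side, an inter-message gap is also what lets a
byte-stream reader decide where one message ends and the next begins.
Roughly like this (a pyserial sketch; the 20-bit-time gap is just the
"more than a character frame of idle" idea expressed in seconds - on a
PC the achievable resolution is far coarser because of USB latency, so
the real gap ends up much longer):

import serial

BAUD = 1_000_000
GAP_S = 20.0 / BAUD              # ~20 bit times of silence ends a message

def read_message(ser):
    msg = bytearray()
    while True:
        b = ser.read(1)          # returns b"" after GAP_S of silence
        if not b:
            if msg:
                return bytes(msg)    # gap after some data: frame complete
            continue                 # still idle, keep waiting for a frame
        msg += b

# ser = serial.Serial("/dev/ttyUSB0", BAUD, timeout=GAP_S)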



David Brown

unread,
Nov 5, 2022, 7:47:59 AM11/5/22
to
I'm making the assumption that you are using appropriate hardware. No
processor, just a USB device that has a "transmitter enable" signal on
its UART.

I'm getting the impression that you have never heard of such a UART
(either in a USB-to-UART device, or as a UART peripheral elsewhere), and
assume software has to be involved in enabling and disabling the
transmitter. Please believe me when I say such UARTs /do/ exist - and
the FTDI examples I keep giving are a case in point.

> The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable.
>

Yes, and it is a /solved/ issue if you pick the right hardware.

>
>> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs
>> between reception of the last bit and the start of transmission of the
>> next message, your code is wrong - by nearly two orders of magnitude.
>> It is that simple.
>>
>> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 /
>> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned
>> about exactly where the receive interrupt comes in the last stop bit,
>> add another half bit time and you get 3.7 ms. The half bit time is
>> negligible.
>
> Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which requires zero delay from the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle.
>

A single transmitter, while sending a multi-character message, does not
need any delay between sending the full stop bit and starting the next
start bit. That is obvious. And that is why a "transmission complete"
signal comes at the end of the stop bit on the transmitter side.
On the receiver side, the "byte received" signal comes in the /middle/
of the stop bit, as seen by the receiver, because that could be at the
/end/ of the stop bit as seen by the transmitter due to clock
differences. (It could also be at the /start/ of the stop bit as seen
by the transmitter.) The receiver has to prepare for the next incoming
start bit as soon as it identifies the stop bit.

But you want an extra delay of at least 11 bits (a character frame plus
a buffer for clock speed differences) between messages - whether they
are from the same transmitter or a different transmitter - to allow
resynchronisation if something has gone wrong.
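
For scale (plain Python):

# How long an 11-bit inter-message gap actually is at these rates.
for baud in (9_600, 100_000, 1_000_000, 3_000_000):
    print(baud, "baud:", round(11.0 / baud * 1e6, 1), "us")
# At 1-3 Mbps the gap is only a few microseconds - tiny next to the
# ~125 us USB polling interval, so it costs essentially nothing.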

I've explained in other posts why inter-message pauses are needed for
reliable UART communication protocols. They don't /need/ to be as long
as 35 bit times as Modbus specifies - 11 bit times is the minimum. If
you don't understand this by now, then we should drop this point.

>
>>> I know the next transmitter should make some processing of the previous
>>> received message, prepare and buffer the new message to transmit, so the
>>> delay is somewhat automatic, but in many cases I have small 8-bits PICs
>>> and full-futured Linux box on the same bus and the Linux could be very
>>> fast to start the new transmission.
>>>
>> So put in a delay. An /appropriate/ delay.
>
> You are thinking software, like most people do.

It doesn't matter whether things are software, hardware, or something in
between.

> The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself.
>

Yes, with the bus you have described, and the command/response protocol
you have described, there should be no problems with multiple
transmitters on the bus, and you have plenty of inter-message idle periods.

However, this Usenet thread has been mixing posts from different people,
and discussions of different kinds of buses and protocols - not just the
solution you picked (which, as I have said before, should work fine). I
think this mixing means that people are sometimes talking at cross-purposes.

>> If you are pushing the limits of a bus, in terms of load, distance,
>> speed, cable characteristics, etc., then you need to do such
>> calculations carefully and be precise in your specification of
>> components, cables, topology, connectors, etc. For many buses in
>> practice, they will work fine using whatever resistor you pull out your
>> box of random parts. For a testbench, you are going to go for something
>> between these extremes.
>
> How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
>

There is no point in having a terminator at a driver (unless you are
talking about very high speed signals with series resistors for slope
control). You will want to add a terminator at the far end of both
buses. This will give you a single terminator on the PC-to-slave bus,
which is fine as it is fixed direction, and two terminators on the
slave-to-PC bus, which is appropriate as it has no fixed direction.

(I agree that your piece of string is of a size that should work fine
without reflections being a concern.)


> Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
>
> They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
>

The speed of a signal in a copper cable is typically about 70% of the
speed of light, giving a minimum round-trip time closer to 45 ns than 30
ns. Not that it makes any difference here.



David Brown

unread,
Nov 5, 2022, 12:25:37 PM11/5/22
to
There may be issues with minimum total length for Ethernet, but I have
not heard of figures myself - usually maximum lengths are the issue.
It's common to have racks with the wiring coming into patch panels, and
then you need a short Ethernet cable to the switch. These cables should
ideally be short - both from a cable management viewpoint, and because
you always want to have as few impedance jumps as possible in the total
connection between switch and end device and you want the bumps to be as
close to the ends as possible.

30 cm patch cables are common, but I've also seen 10 cm cables. For the
very short ones, they need to be made of very flexible material -
standard cheap Ethernet cables aren't really flexible enough to be
convenient to plug in and out unless you have a little more length.


Rick C

unread,
Nov 5, 2022, 12:57:24 PM11/5/22
to
<<< snip >>>

> In UART communication, this is handled at the protocol level rather than
> the hardware (though some UART hardware may have "idle detect" signals
> when more than 11 bits of high level are seen in a row). Some
> UART-based protocols also use a "break" signal between frames - that is
> a string of at least 11 bits of low level.
>
> If you do not have such pauses, and a receiver is out of step,

You have failed to explain how a receiver would get "out of step". The receiver syncs to every character transmitted. If all characters are received, what else do you need? How does it get "out of step"?


> it has no
> way to get into synchronisation again. Maybe you get lucky, but
> basically all it is seeing is a stream of high and low bits with no
> absolute indicator of position - and no way to tell what might be the
> start bit of a new character (rather than a 1 bit then a 0 bit within a
> character), never mind the start of a message.

I have no idea what you are talking about. You have already explained above how every character is framed with a start and a stop bit. That gives a half bit time of clock misalignment to maintain sync. What would cause getting out of step?

With the protocol involved, the characters for commands are unique. So if a device sees noise on the line and does get out of sync with the framing characters, it would simply not respond when spoken to. That would inherently cause a delay. So all data after that would be received correctly.

The reason I'm using RS-422 instead of TTL is the huge improvement in noise tolerance. So if the noise rate is enough to cause any noticeable problems, there's a bad design in the cabling or some fundamental flaw in the design, and it needs to be corrected. Actually, that makes me realize I need to have a mode where the comms are exercised and bit errors counted.
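
That exercise mode could be as simple as pushing pseudo-random blocks through the link and counting mismatched bits (a pyserial sketch; the ECHO command and the echo behavior are assumptions for illustration, not part of the existing protocol):

import os, serial

PORT, BAUD = "/dev/ttyUSB0", 1_000_000
BLOCK, ROUNDS = 64, 10_000

bit_errors = sent_bits = 0
with serial.Serial(PORT, BAUD, timeout=0.1) as ser:
    for _ in range(ROUNDS):
        payload = os.urandom(BLOCK)
        ser.write(b"ECHO" + payload)            # assumed loopback command
        back = ser.read(BLOCK)
        sent_bits += 8 * BLOCK
        for a, b in zip(payload, back):
            bit_errors += bin(a ^ b).count("1")
        bit_errors += 8 * (BLOCK - len(back))   # missing bytes count as errors

print(bit_errors, "bit errors in", sent_bits, "bits")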


> Usually you get enough pauses naturally in the communication, with
> delays between reception and reply. But if you don't have them, you
> must add them. Otherwise your communication will be too fragile to use
> in practice. You /need/ idle gaps to be able to resynchronise reliably
> in the face of errors (and there is /always/ a risk of errors).

You haven't made your case. You've not explained how anything gets out of sync. What is your use case? But you finally mention "errors". Are you talking about bit errors in the comms? I've addressed that above. It is inherently handled in a command/response protocol, but since the problem of bit errors should be very, very infrequent, I'm not worried.


> >> Oh, and it is actually essential that the receiver considers the
> >> character finished half-way through the stop bit, and not at the end.

That depends entirely on what is being done with the information. Start bit detection should start as early as possible. Enabling the transmitter driver after the last received character should not happen until the entire character is received, to the end of the stop bit.

If the bus has fail-safe provisions, it's actually ok for the transmitter to disable the driver at the middle of the stop bit. The line will already be in the idle state and the passive fail-safe will maintain that. Less chance of bus contention if the next driver is enabled slightly before the end of the stop bit.


> >> UART communication is intended to work despite small differences in the
> >> baud rate - up to nearly 5% total error. By the time the receiver is
> >> half way through the received stop bit, and has identified it is valid,
> >> the sender could be finished the stop bit as its clock is almost 5%
> >> faster (50% bit time over the full 10 bits). The receiver has to be in
> >> the "watch for falling edge of start bit" state at this point, ready for
> >> the transmitter to start its next frame.
> >
> > Yes, why would it not be? This is why there's no need for additional delays or "gaps" in the protocol for an async interface.
> >
> It will be in the right state at the right time, as long as it enters it
> when the stop bit is identified (half-way through the stop bit) rather
> than artificially waiting for the end of the bit time.
>
> You need gaps in the character stream at a higher level, for error recovery.

If you have errors. I like systems without errors. Systems without errors are better in my opinion. I'm just sayin'. But it's handled anyway.

--

Rick C.

--+- Get 1,000 miles of free Supercharging
--+- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 5, 2022, 1:23:55 PM11/5/22
to
On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:
> On 04/11/2022 16:40, Rick C wrote:
> > On Friday, November 4, 2022 at 5:49:42 AM UTC-4, David Brown wrote:
> >> I made no such assumptions about timings. The figures I gave were for
> >> using a USB 2 based interface on a PC, where the USB polling timer is at
> >> 8 kHz, or 125 µs. That is half a bit time for 4 Kbaud. (I had doubled
> >> the frequency instead of halving it and said the baud had to be above 16
> >> kBaud - that shows it's good to do your own calculations and not trust
> >> others blindly!). At 1 MBaud (the suggested rate), the absolute fastest
> >> the PC could turn around the bus would be 12 character times - half a
> >> stop bit is irrelevant.
> >
> > You are making an assumption of implementation. There is a processor in the USB cable that is implementing the UART. The driver enable control is most likely is implemented there. It would be pointless and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed.
> >
> I'm making the assumption that you are using appropriate hardware. No
> processor, just a USB device that has a "transmitter enable" signal on
> its UART.

How can there not be a processor? I'm using a split bus, with the PC master driving all the slave receivers and all the slave transmitters sharing the PC receive bus.

Is the PC not a processor?

The slaves have no USB.


> I'm getting the impression that you have never heard of such a UART
> (either in a USB-to-UART device, or as a UART peripheral elsewhere), and
> assume software has to be involved in enabling and disabling the
> transmitter. Please believe me when I say such UARTs /do/ exist - and
> the FTDI examples I keep giving are a case in point.

You are not being clear. I don't know and don't care what is inside the FTDI device. That's just magic to me, or it's like something inside the black hole, unknowable. More importantly, there is no transmitter enable on the RS-422 driver in the FTDI device, because it's not tristateable.


> > The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable.
> >
> Yes, and it is a /solved/ issue if you pick the right hardware.
> >
> >> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs
> >> between reception of the last bit and the start of transmission of the
> >> next message, your code is wrong - by nearly two orders of magnitude.
> >> It is that simple.
> >>
> >> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 /
> >> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned
> >> about exactly where the receive interrupt comes in the last stop bit,
> >> add another half bit time and you get 3.7 ms. The half bit time is
> >> negligible.
> >
> > Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which requires zero delay from the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle.
> >
> A single transmitter, while sending a multi-character message, does not
> need any delay between sending the full stop bit and starting the next
> start bit. That is obvious. And that is why a "transmission complete"
> signal comes at the end of the start bit sent on the transmitter side.

??? Are you talking about the buffer management signals for the software?


> On the receiver side, the "byte received" signal comes in the /middle/
> of the stop bit, as seen by the receiver, because that could be at the
> /end/ of the stop bit as seen by the transmitter due to clock
> differences. (It could also be at the /start/ of the stop bit as seen
> by the transmitter.) The receiver has to prepare for the next incoming
> start bit as soon as it identifies the stop bit.

Again, this depends entirely on what this signal is used for. For entering the state of detecting the next start bit, yes, that is the perceived middle of the stop bit.


> But you want an extra delay of at least 11 bits (a character frame plus
> a buffer for clock speed differences) between messages - whether they
> are from the same transmitter or a different transmitter - to allow
> resynchronisation if something has gone wrong.

Again, you seem to not understand the use case. The split bus never has messages back to back on the same pair. It gets confusing because so many people have tried to talk up RS-485 using a single pair. In that case, everything is totally different. Slaves need to wait until the driver has stopped driving the bus, which means an additional bit time to account for timing errors. But RS-485 is not being used. Each bus is simplex, implementing a half-duplex protocol on the two buses.


> I've explained in other posts why inter-message pauses are needed for
> reliable UART communication protocols. They don't /need/ to be as long
> as 35 bit times as Modbus specifies - 11 bit times is the minimum. If
> you don't understand this by now, then we should drop this point.

You are assuming a need for error tolerance. But a munged message is the problem, not resyncing. A protocol to detect an error and retransmit is very messy. I've tried that before and it messes up the protocol badly.


> >> So put in a delay. An /appropriate/ delay.
> >
> > You are thinking software, like most people do.
> It doesn't matter whether things are software, hardware, or something in
> between.

Of course it does. Since the slaves are all logic, there is no need for delays, at all. The slave driver can be enabled at any time the message has been received and the reply is ready to go.


> > The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself.
> >
> Yes, with the bus you have described, and the command/response protocol
> you have described, there should be no problems with multiple
> transmitters on the bus, and you have plenty of inter-message idle periods.
>
> However, this Usenet thread has been mixing posts from different people,
> and discussions of different kinds of buses and protocols - not just the
> solution you picked (which, as I have said before, should work fine). I
> think this mixing means that people are sometimes talking at cross-purposes.

Yes, it gets confusing.


> >> If you are pushing the limits of a bus, in terms of load, distance,
> >> speed, cable characteristics, etc., then you need to do such
> >> calculations carefully and be precise in your specification of
> >> components, cables, topology, connectors, etc. For many buses in
> >> practice, they will work fine using whatever resistor you pull out your
> >> box of random parts. For a testbench, you are going to go for something
> >> between these extremes.
> >
> > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
> >
> There is no point in having a terminator at a driver (unless you are
> talking about very high speed signals with serial resistors for slope
> control). You will want to add a terminator at the far end of both
> buses. This will give you a single terminator on the PC-to-slave bus,
> which is fine as it is fixed direction, and two terminators on the
> slave-to-PC bus, which is appropriate as it has no fixed direction.

It does if you are using it in a shared bus with multiple drivers. The line should still be organized as linear with minimal stubs and a terminator on each end. This is not my plan, so maybe I should stop discussing it.


> (I agree that your piece of string is of a size that should work fine
> without reflections being a concern.)
> > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
> >
> > They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
> >
> The speed of a signal in a copper cable is typically about 70% of the
> speed of light, giving a minimum round-trip time closer to 45 ns than 30
> ns. Not that it makes any difference here.

The problem I have now is finding parts to use for this. These devices seem to be in a category that has been hit hard by the shortage. My product uses the SN65C1168EPW, which is very hard to find in quantity. My customer has mentioned 18,000 units next year. I may need to get with the factory and see if they can supply me directly.

--

Rick C.

--++ Get 1,000 miles of free Supercharging
--++ Tesla referral code - https://ts.la/richard11209

David Brown

unread,
Nov 5, 2022, 2:57:30 PM11/5/22
to
On 05/11/2022 18:23, Rick C wrote:
> On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:
>> On 04/11/2022 16:40, Rick C wrote:
>>> On Friday, November 4, 2022 at 5:49:42 AM UTC-4, David Brown wrote:
>>>> I made no such assumptions about timings. The figures I gave were for
>>>> using a USB 2 based interface on a PC, where the USB polling timer is at
>>>> 8 kHz, or 125 µs. That is half a bit time for 4 Kbaud. (I had doubled
>>>> the frequency instead of halving it and said the baud had to be above 16
>>>> kBaud - that shows it's good to do your own calculations and not trust
>>>> others blindly!). At 1 MBaud (the suggested rate), the absolute fastest
>>>> the PC could turn around the bus would be 12 character times - half a
>>>> stop bit is irrelevant.
>>>
>>> You are making an assumption of implementation. There is a processor in the USB cable that is implementing the UART. The driver enable control is most likely is implemented there. It would be pointless and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed.
>>>
>> I'm making the assumption that you are using appropriate hardware. No
>> processor, just a USB device that has a "transmitter enable" signal on
>> its UART.
>
> How can there not be a processor? I'm using a split bus, with the PC master driving all the slave receivers and all the slave transmitters sharing the PC receive bus.
>
> Is the PC not a processor?

Sure, the PC is a processor. It sends a command to the USB device,
saying "send these N bytes of data out on the UART ...".

The USB device is /not/ a processor - it is a converter between USB and
UART. And it is the USB device that controls the transmit enable signal
to the RS-485/RS-422 driver. There is no software on any processor
handling the transmit enable signal - the driver is enabled precisely
when the USB to UART device is sending data on the UART.

>
> The slaves have no USB.
>
>
>> I'm getting the impression that you have never heard of such a UART
>> (either in a USB-to-UART device, or as a UART peripheral elsewhere), and
>> assume software has to be involved in enabling and disabling the
>> transmitter. Please believe me when I say such UARTs /do/ exist - and
>> the FTDI examples I keep giving are a case in point.
>
> You are not being clear. I don't know and don't care what is inside the FTDI device. That's just magic to me, or it's like something inside the black hole, unknowable. More importantly, there is no transmitter enable on the RS-422 driver in the FTDI device, because it's not tristateable.
>

As I mentioned earlier, this thread is getting seriously mixed-up. The
transmit enable discussion started with /RS-485/ - long before you
decided to use a hybrid bus and a RS-422 cable. You were concerned
about how the PC controlled the transmitter enable for the RS-485
driver, and I have been trying to explain how this works when you use a
decent UART device. You only confuse yourself when you jump to
discussing RS-422 here, in this bit of the conversation.

The FTDI USB to UART chip (or chips - they have several) provides a
"transmitter enable" signal that is active with exactly the right timing
for RS-485. This is provided automatically, in hardware - no software
involved. If you connect one of these chips to an RS-485 driver, you
immediately have a "perfect" RS-485 interface with automatic direction
control. If you connect one of these chips to an RS-422 driver, you
don't need direction control as RS-422 has two fixed-direction pairs.
If you buy a pre-built cable from FTDI, it will have one of these driver
chips connected appropriately.
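
On the PC side this means the application never touches the direction control at all - it just writes the command and reads the reply. As a rough sketch only (assuming Python with pyserial; the port name, baud rate and command format here are invented, not taken from your protocol):

import serial

# Open the FTDI-based port. The TXDEN/driver-enable pin is handled inside
# the converter hardware, so nothing in this code deals with bus direction.
port = serial.Serial("COM5", baudrate=1000000, timeout=0.1)

def transact(command: bytes) -> bytes:
    """Send one command and wait for the addressed slave's reply line."""
    port.reset_input_buffer()       # discard any stale bytes
    port.write(command)             # driver is enabled only while bytes go out
    return port.read_until(b"\n")   # slaves only speak when spoken to

reply = transact(b"01 23 R\r\n")    # hypothetical register-read command
print(reply)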


>
>>> The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable.
>>>
>> Yes, and it is a /solved/ issue if you pick the right hardware.
>>>
>>>> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs
>>>> between reception of the last bit and the start of transmission of the
>>>> next message, your code is wrong - by nearly two orders of magnitude.
>>>> It is that simple.
>>>>
>>>> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 /
>>>> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned
>>>> about exactly where the receive interrupt comes in the last stop bit,
>>>> add another half bit time and you get 3.7 ms. The half bit time is
>>>> negligible.
>>>
>>> Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which requires zero delay from the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle.
>>>
>> A single transmitter, while sending a multi-character message, does not
>> need any delay between sending the full stop bit and starting the next
>> start bit. That is obvious. And that is why a "transmission complete"
>> signal comes at the end of the stop bit sent on the transmitter side.
>
> ??? Are you talking about the buffer management signals for the software?
>

No.

>
>> On the receiver side, the "byte received" signal comes in the /middle/
>> of the stop bit, as seen by the receiver, because that could be at the
>> /end/ of the stop bit as seen by the transmitter due to clock
>> differences. (It could also be at the /start/ of the stop bit as seen
>> by the transmitter.) The receiver has to prepare for the next incoming
>> start bit as soon as it identifies the stop bit.
>
> Again, this depends entirely on what this signal is used for. For entering the state of detecting the next start bit, yes, that is the perceived middle of the stop bit.
>

Yes.

>
>> But you want an extra delay of at least 11 bits (a character frame plus
>> a buffer for clock speed differences) between messages - whether they
>> are from the same transmitter or a different transmitter - to allow
>> resynchronisation if something has gone wrong.
>
> Again, you seem to not understand the use case.

Yes, I understand your new use case, as well as the original discussions
and the side discussions. I don't think /you/ understand that there had
been a change, because you seem to imagine everything in the thread is
in reference to your current solution.

> The split bus never has messages back to back on the same pair. It gets confusing because so many people have tried to talk up RS-485 using a single pair. In that case, everything is totally different. Slaves need to wait until the driver has stopped driving the bus, which means an additional bit time to account for timing errors. But RS-485 is not being used. Each bus is simplex, implementing a half-duplex protocol on the two buses.
>

I agree. I know how your solution works, and have said many times that
I think it sounds quite a good idea for the task in hand.

>
>> I've explained in other posts why inter-message pauses are needed for
>> reliable UART communication protocols. They don't /need/ to be as long
>> as 35 bit times as Modbus specifies - 11 bit times is the minimum. If
>> you don't understand this by now, then we should drop this point.
>
> You are assuming a need for error tolerance. But a munged message is the problem, not resyncing. A protocol to detect an error and retransmit is very messy. I've tried that before and it messes up the protocol badly.
>

All communications have failures. Accept that as a principle, and
understand how to deal with it. It's not hard to do - it is certainly
much easier than trying to imagine and eliminate any possible cause of
trouble.

>
>>>> So put in a delay. An /appropriate/ delay.
>>>
>>> You are thinking software, like most people do.
>> It doesn't matter whether things are software, hardware, or something in
>> between.
>
> Of course it does. Since the slaves are all logic, there is no need for delays, at all. The slave driver can be enabled at any time the message has been received and the reply is ready to go.
>

I'm sorry you don't understand, and I can't see how to explain it better
than to say timing and delays are fundamental to the communication, not
the implementation.

>
>>> The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself.
>>>
>> Yes, with the bus you have described, and the command/response protocol
>> you have described, there should be no problems with multiple
>> transmitters on the bus, and you have plenty of inter-message idle periods.
>>
>> However, this Usenet thread has been mixing posts from different people,
>> and discussions of different kinds of buses and protocols - not just the
>> solution you picked (which, as I have said before, should work fine). I
>> think this mixing means that people are sometimes talking at cross-purposes.
>
> Yes, it gets confusing.
>

There has, I think, been some interesting discussion despite the
confusion. I hope you have got something out of it too - and I am glad
that you have a bus solution that looks like it will work well for the
purpose.

>
>>>> If you are pushing the limits of a bus, in terms of load, distance,
>>>> speed, cable characteristics, etc., then you need to do such
>>>> calculations carefully and be precise in your specification of
>>>> components, cables, topology, connectors, etc. For many buses in
>>>> practice, they will work fine using whatever resistor you pull out your
>>>> box of random parts. For a testbench, you are going to go for something
>>>> between these extremes.
>>>
>>> How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
>>>
>> There is no point in having a terminator at a driver (unless you are
>> talking about very high speed signals with serial resistors for slope
>> control). You will want to add a terminator at the far end of both
>> buses. This will give you a single terminator on the PC-to-slave bus,
>> which is fine as it is fixed direction, and two terminators on the
>> slave-to-PC bus, which is appropriate as it has no fixed direction.
>
> It does if you are using it in a shared bus with multiple drivers. The line should still be organized as linear with minimal stubs and a terminator on each end. This is not my plan, so maybe I should stop discussing it.
>

Ideally, a bus should be (as you say) linear with minimal stubs and a
terminator at each end - /except/ if one end is always driven. There is
no point in having a terminator at a driver. Think about it in terms of
impedance - the driver is either driving a line high, or it is driving
it low. At any given time, one of the differential pair lines will have
almost 0 ohm resistance to 0V, and the other will have nearly 0 ohm
resistance to 5V. When the signal changes, these swap. Connecting a
100 ohm resistor across the lines at that point will make no difference
whatsoever. The terminator is completely useless - it's just a waste of
power. At the other end of the cable it's a different matter - there's
a cable full of resistance, capacitance and inductance between the
terminator and the near 0 ohm driver, so the terminator resistor /does/
make a difference.

In more sophisticated tristate drivers, you would switch off (disconnect) the
local terminator whenever the driver is enabled. This is done in some
multi-lane systems as it can significantly reduce power and make slope
control and pulse shaping easier. (It's not something you'd be likely
to see on RS-485 buses.)
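
If you want numbers for the terminator argument, the reflection coefficient gamma = (Zload - Z0)/(Zload + Z0) tells the whole story. A quick sanity check with round figures (a 120 ohm pair, a few-ohm driver output and a ~12 kohm receiver input are assumptions, not measurements):

# Reflection coefficients at each end of a 120 ohm differential pair.
Z0 = 120.0

def parallel(a, b):
    return a * b / (a + b)

def gamma(z_load):
    """Reflection coefficient seen at a point loaded with z_load."""
    return (z_load - Z0) / (z_load + Z0)

print(gamma(5.0))                 # driven end, no terminator:    ~ -0.92
print(gamma(parallel(5.0, Z0)))   # driven end, with terminator:  ~ -0.92 (no change)
print(gamma(12e3))                # far end, unterminated:        ~ +0.98
print(gamma(parallel(12e3, Z0)))  # far end, terminated:          ~ -0.005

The resistor changes essentially nothing at the driven end, and nearly everything at the far end - which is the whole point.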

>
>> (I agree that your piece of string is of a size that should work fine
>> without reflections being a concern.)
>>> Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
>>>
>>> They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
>>>
>> The speed of a signal in a copper cable is typically about 70% of the
>> speed of light, giving a minimum round-trip time closer to 45 ns than 30
>> ns. Not that it makes any difference here.
>
> The problem I have now is finding parts to use for this. These devices seem to be in a category that is hit hard by the shortage. My product uses the SN65C1168EPW, which is very hard to find in quantity. My customer has mentioned 18,000 units next year. I may need to get with the factory and see if they can supply me directly.
>

Unfortunately, sourcing components these days is a much harder problem
than designing the systems.



Rick C

unread,
Nov 5, 2022, 4:42:59 PM11/5/22
to
Actually, the FTDI device is a processor. I expect it has no hardware UART; rather, the entire thing is done in software. I recall there being code to download for various purposes, such as JTAG, but I forget the details. I'm pretty sure the TxEn is controlled by FTDI software.


> > The slaves have no USB.
> >
> >
> >> I'm getting the impression that you have never heard of such a UART
> >> (either in a USB-to-UART device, or as a UART peripheral elsewhere), and
> >> assume software has to be involved in enabling and disabling the
> >> transmitter. Please believe me when I say such UARTs /do/ exist - and
> >> the FTDI examples I keep giving are a case in point.
> >
> > You are not being clear. I don't know and don't care what is inside the FTDI device. That's just magic to me, or it's like something inside the black hole, unknowable. More importantly, there is no transmitter enable on the RS-422 driver in the FTDI device, because it's not tristateable.
> >
> As I mentioned earlier, this thread is getting seriously mixed-up. The
> transmit enable discussion started with /RS-485/ - long before you
> decided to use a hybrid bus and a RS-422 cable. You were concerned
> about how the PC controlled the transmitter enable for the RS-485
> driver, and I have been trying to explain how this works when you use a
> decent UART device. You only confuse yourself when you jump to
> discussing RS-422 here, in this bit of the conversation.

Ok, I'll stop talking about what I am doing.


> The FTDI USB to UART chip (or chips - they have several) provides a
> "transmitter enable" signal that is active with exactly the right timing
> for RS-485. This is provided automatically, in hardware - no software
> involved. If you connect one of these chips to an RS-485 driver, you
> immediately have a "perfect" RS-485 interface with automatic direction
> control. If you connect one of these chips to an RS-422 driver, you
> don't need direction control as RS-422 has two fixed-direction pairs.
> If you buy a pre-built cable from FTDI, it will have one of these driver
> chips connected appropriately.

Ok, thanks.
Ok, then the conversation has reached an end.


> >> I've explained in other posts why inter-message pauses are needed for
> >> reliable UART communication protocols. They don't /need/ to be as long
> >> as 35 bit times as Modbus specifies - 11 bit times is the minimum. If
> >> you don't understand this by now, then we should drop this point.
> >
> > You are assuming a need for error tolerance. But a munged message is the problem, not resyncing. A protocol to detect an error and retransmit is very messy. I've tried that before and it messes up the protocol badly.
> >
> All communications have failures. Accept that as a principle, and
> understand how to deal with it. It's not hard to do - it is certainly
> much easier than trying to imagine and eliminate any possible cause of
> trouble.

That's not a premise I have to deal with. I will also die. I'm not factoring that into the project either.

I don't need to eliminate "any possible cause of trouble". I only have to reach an effective level of reliability. As I've said, error handling protocols are complex and subject to failure. It's much more likely I will have more trouble with the error handling protocol than I will with bit errors on the bus. So I choose the most reliable solution, no error handling. So without an error handling protocol in the software, I don't need to do anything further to deal with errors.


> >>>> So put in a delay. An /appropriate/ delay.
> >>>
> >>> You are thinking software, like most people do.
> >> It doesn't matter whether things are software, hardware, or something in
> >> between.
> >
> > Of course it does. Since the slaves are all logic, there is no need for delays, at all. The slave driver can be enabled at any time the message has been received and the reply is ready to go.
> >
> I'm sorry you don't understand, and I can't see how to explain it better
> than to say timing and delays are fundamental to the communication, not
> the implementation.

I understand perfectly. I only need to meet the requirements of this project. Not the requirements of some ultra high reliability project. With the RS-422 interface, I expect I could run the entire system continuously, and would not find an error in my lifetime. That's good enough for me.
Indeed.

--

Rick C.

-+-- Get 1,000 miles of free Supercharging
-+-- Tesla referral code - https://ts.la/richard11209

David Brown

unread,
Nov 6, 2022, 5:55:22 AM11/6/22
to
On 05/11/2022 21:42, Rick C wrote:
> On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
>> On 05/11/2022 18:23, Rick C wrote:
>>> On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:

>> The USB device is /not/ a processor - it is a converter between USB and
>> UART. And it is the USB device that controls the transmit enable signal
>> to the RS-485/RS-422 driver. There is no software on any processor
>> handling the transmit enable signal - the driver is enabled precisely
>> when the USB to UART device is sending data on the UART.
>
> Actually, the FTDI device is a processor. I expect it actually has no UART, rather the entire thing is done in software. I recall there being code to download for various purposes, such as JTAG, but I forget the details. I'm pretty sure the TxEn is controlled by FTDI software.
>

No, I think you are mixing things up. FTDI make a fair number of
devices, including some that /are/ processors or contain processors.
(That would be their display controller devices and their USB host
controllers, amongst others.)

The code for using chips like the FT232H as a JTAG interface runs on the
host PC, not the FTDI chip - it is a DLL or .so file (or OpenOCD, or other
software). The chip has /hardware/ support for a few different serial
interfaces - SPI, I²C, JTAG and UART.

>> As I mentioned earlier, this thread is getting seriously mixed-up. The
>> transmit enable discussion started with /RS-485/ - long before you
>> decided to use a hybrid bus and a RS-422 cable. You were concerned
>> about how the PC controlled the transmitter enable for the RS-485
>> driver, and I have been trying to explain how this works when you use a
>> decent UART device. You only confuse yourself when you jump to
>> discussing RS-422 here, in this bit of the conversation.
>
> Ok, I'll stop talking about what I am doing.
>

We don't need to stop talking about it - we (everyone) just need to be a
bit clearer about the context. It's been fun to talk about, and its
great that you have a solution you are happy with, but it's a shame if
topic mixup leads to frustration.

>> All communications have failures. Accept that as a principle, and
>> understand how to deal with it. It's not hard to do - it is certainly
>> much easier than trying to imagine and eliminate any possible cause of
>> trouble.
>
> That's not a premise I have to deal with. I will also die. I'm not factoring that into the project either.
>
> I don't need to eliminate "any possible cause of trouble". I only have to reach an effective level of reliability. As I've said, error handling protocols are complex and subject to failure. It's much more likely I will have more trouble with the error handling protocol than I will with bit errors on the bus. So I choose the most reliable solution, no error handling. So without an error handling protocol in the software, I don't need to do anything further to deal with errors.
>

I agree that error handling procedures can be difficult - and very
often, they are poorly tested and have their own bugs (hardware or
software). Over-engineering can reduce overall reliability, rather than
increase it. (A few years back, we had a project that had to be updated
to SIL safety certification requirements. Most of the changes reduced
the overall safety and reliability in order to fulfil the documentation
and certification requirements.)

For serial protocols, ensuring a brief pause between telegrams is
extremely simple and makes recovery possible after many kinds of errors.
That's why it is found in virtually every serial protocol in wide use.
And like it or not, you have it already in your hybrid bus solution.
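
Enforcing that pause on the master side is trivial if you time-stamp the end of each exchange. A sketch, assuming Python with pyserial, using an 11-bit-time gap at 1 MBaud (both numbers are illustrative):

import time

BAUD = 1_000_000
GAP_S = 11.0 / BAUD          # at least one full character frame of idle time

_last_end = 0.0

def send_telegram(port, data: bytes) -> None:
    """Write one telegram, guaranteeing an idle gap since the previous one."""
    global _last_end
    wait = GAP_S - (time.monotonic() - _last_end)
    if wait > 0:
        time.sleep(wait)     # on a PC this rounds up to ~1 ms anyway - even better
    port.write(data)
    port.flush()             # wait until the OS has pushed the bytes out
    _last_end = time.monotonic()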



Rick C

unread,
Nov 6, 2022, 8:56:56 AM11/6/22
to
On Sunday, November 6, 2022 at 5:55:22 AM UTC-5, David Brown wrote:
> On 05/11/2022 21:42, Rick C wrote:
> > On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
> >> On 05/11/2022 18:23, Rick C wrote:
> >>> On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:
>
> >> The USB device is /not/ a processor - it is a converter between USB and
> >> UART. And it is the USB device that controls the transmit enable signal
> >> to the RS-485/RS-422 driver. There is no software on any processor
> >> handling the transmit enable signal - the driver is enabled precisely
> >> when the USB to UART device is sending data on the UART.
> >
> > Actually, the FTDI device is a processor. I expect it actually has no UART, rather the entire thing is done in software. I recall there being code to download for various purposes, such as JTAG, but I forget the details. I'm pretty sure the TxEn is controlled by FTDI software.
> >
> No, I think you are mixing things up. FTDI make a fair number of
> devices, including some that /are/ processors or contain processors.
> (That would their display controller devices, their USB host
> controllers, amongst others.)
>
> The code for using chips like the FT232H as a JTAG interface runs on the
> host PC, not FTDI chip - it is a DLL or so file (or OpenOCD, or other
> software). The chip has /hardware/ support for a few different serial
> interfaces - SPI, I²C, JTAG and UART.

They need code for the PC to run, but there is no reason to think they don't use a processor in the USB dongle.
There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.

--

Rick C.

-+-+ Get 1,000 miles of free Supercharging
-+-+ Tesla referral code - https://ts.la/richard11209

David Brown

unread,
Nov 6, 2022, 3:54:04 PM11/6/22
to
On 06/11/2022 14:56, Rick C wrote:
> On Sunday, November 6, 2022 at 5:55:22 AM UTC-5, David Brown wrote:
>> On 05/11/2022 21:42, Rick C wrote:
>>> On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
>>>> On 05/11/2022 18:23, Rick C wrote:
>>>>> On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:
>>
>>>> The USB device is /not/ a processor - it is a converter between USB and
>>>> UART. And it is the USB device that controls the transmit enable signal
>>>> to the RS-485/RS-422 driver. There is no software on any processor
>>>> handling the transmit enable signal - the driver is enabled precisely
>>>> when the USB to UART device is sending data on the UART.
>>>
>>> Actually, the FTDI device is a processor. I expect it actually has no UART, rather the entire thing is done in software. I recall there being code to download for various purposes, such as JTAG, but I forget the details. I'm pretty sure the TxEn is controlled by FTDI software.
>>>
>> No, I think you are mixing things up. FTDI make a fair number of
>> devices, including some that /are/ processors or contain processors.
>> (That would their display controller devices, their USB host
>> controllers, amongst others.)
>>
>> The code for using chips like the FT232H as a JTAG interface runs on the
>> host PC, not FTDI chip - it is a DLL or so file (or OpenOCD, or other
>> software). The chip has /hardware/ support for a few different serial
>> interfaces - SPI, I²C, JTAG and UART.
>
> They need code for the PC to run, but there is no reason to think they don't use a processor in the USB dongle.
>

There is no reason to think that they /do/ have a processor there. I
should imagine you would have no problem making the programmable logic
needed for controlling a UART/SPI/I²C/JTAG/GPIO port, and USB slave
devices are rarely made in software (even on the XMOS they prefer
hardware blocks for USB). Why would anyone use a /processor/ for some
simple digital hardware? I am not privy to the details of the FTDI
design beyond their published documents, but it seems pretty clear to me
that there is no processor in sight.
That is one way to handle possible errors.


Richard Damon

unread,
Nov 6, 2022, 6:34:59 PM11/6/22
to
On 11/6/22 8:56 AM, Rick C wrote:
> There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.

If the only way to handle a missed message is to abort the whole
software system, that seems to be a pretty bad system.

Note, if the master sends out a message, and waits for a response, with
a retry if the message is not replied to, that naturally puts a pause in
the communication bus for inter-message synchronization.

Based on your description, I can't imagine the master starting a message
for another slave until after the first one answers, or you will
interfere with the arbitration control of the reply bus.

In a dedicated link, after the link is established, it might be possible
that one side just starts streaming data continuously to the other side,
but most protocols will have some sort of at least occasional
handshaking back, so a loss of sync can stop the flow to re-establish
the synchronization. And such handshaking is needed if you need to
handle noise in packets.
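
To put the retry idea in concrete terms, the master-side loop is only a few lines, and the timeout itself provides the pause. A sketch (assuming Python with pyserial, and a port opened with a timeout; the names are made up):

def transact(port, command: bytes, retries: int = 3) -> bytes:
    """Send a command and wait for the reply; re-send on timeout."""
    for _ in range(retries):
        port.reset_input_buffer()
        port.write(command)
        reply = port.read_until(b"\n")   # returns a short read on timeout
        if reply.endswith(b"\n"):
            return reply
        # No complete reply: the timeout is itself an inter-message pause.
    raise TimeoutError("no reply after %d attempts: %r" % (retries, command))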

Paul Rubin

unread,
Nov 6, 2022, 6:37:30 PM11/6/22
to
Richard Damon <Ric...@Damon-Family.org> writes:
> And such handshaking is needed if you have need to handle noise in
> packets.

Once you acknowledge that noise and errors are even possible, some kind
of checksums or FEC seem appropriate in addition to a retry protocol.

Richard Damon

unread,
Nov 6, 2022, 7:19:00 PM11/6/22
to
Yes, the messages should have some form of checksum in them to identify
bad packets. That should be part of the message definition.
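
Even an 8-bit modulo-256 sum appended as two hex digits before the line ending costs almost nothing. A sketch of such framing, in Python (the layout is invented purely for illustration, not taken from the protocol under discussion):

def frame(payload: bytes) -> bytes:
    """Append a two-hex-digit modulo-256 checksum and the line ending."""
    return payload + b"%02X" % (sum(payload) & 0xFF) + b"\r\n"

def unframe(line: bytes) -> bytes:
    """Verify and strip the checksum; raise on mismatch."""
    body = line.rstrip(b"\r\n")
    payload, csum = body[:-2], body[-2:]
    if int(csum, 16) != sum(payload) & 0xFF:
        raise ValueError("checksum mismatch")
    return payload

print(frame(b"01 23 X"))    # -> b'01 23 X5E\r\n'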

Stef

unread,
Nov 7, 2022, 4:40:06 AM11/7/22
to
On 2022-11-05 Rick C wrote in comp.arch.embedded:
...
> One thing I'm a bit confused about, is the wiring of the EIA/TIA 568B or 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are reversed on both ends, making the cables electrically identical, other than the colors used for a given pair. The only difference is, the different pairs have different twist pitch, to help reduce crosstalk. But the numbers are not specified in the spec, so I don't see how this could matter.
>
> Why would the color be an issue, to the point of creating two different specs???
>
> Obviously I'm missing something. I will need to check a cable before I design the boards, lol.

Yes, the only difference is the colors. There is some historical
background; see also https://en.wikipedia.org/wiki/ANSI/TIA-568.

In the early days there sometimes was a need for crossover cables. 568A
on one end, 568B on the other end. IIRC, you needed one to connect 2
PC's together directly, without a hub. Hubs also had a special uplink
port.

These days all Ethernet PHYs are auto-detecting and there is no need
for special ports or cables anymore. So pick a standard you like or just
use what is available. Most cables I have in my drawer here seem to be
568B. Just standard cables, did not pay attention to the A/B when I
bought them. ;-)

--
Stef

The light at the end of the tunnel is the headlight of an approaching train.

Rick C

unread,
Nov 7, 2022, 4:58:05 AM11/7/22
to
I don't agree. These interfaces are not so simple when you consider the level of flexibility in implementing many different interfaces in one part. XMOS is nothing like this. A small processor running at high speed would easily implement any of these interfaces. The small processor can actually be a very small amount of chip area. Typical MCUs are dominated by the memory blocks. With a small memory an MCU could easily be smaller than dedicated logic. Even many of the I/O blocks, like UARTs, can be larger than an 8 bit CPU. A CPU takes advantage of the massive multiplexer in the memory, which is implemented in ways that use very little area. FPGAs use the multiplexers in tiny LUTs while an MCU uses the multiplexer in a single, much larger LUT, the program store.

--

Rick C.

-++- Get 1,000 miles of free Supercharging
-++- Tesla referral code - https://ts.la/richard11209

Stef

unread,
Nov 7, 2022, 5:00:09 AM11/7/22
to
On 2022-11-05 Rick C wrote in comp.arch.embedded:
> On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
>
>> In UART communication, this is handled at the protocol level rather than
>> the hardware (though some UART hardware may have "idle detect" signals
>> when more than 11 bits of high level are seen in a row). Some
>> UART-based protocols also use a "break" signal between frames - that is
>> a string of at least 11 bits of low level.
>>
>> If you do not have such pauses, and a receiver is out of step,
>
> You have failed to explain how a receiver would get "out of step". The receiver syncs to every character transmitted. If all characters are received, what else do you need? How does it get "out of step"?

I have seen this happen in long messages (few kB) with no pauses between
characters and transmitter and receiver set to 8,N,1. It seemed that the
receiver needed the complete stop bit and then immediately saw the low
of the next start bit. Detecting the edge when it was ready to see it,
not when it actually happened. When the receiver is slightly slower than
the transmitter, this caused the detection of the start bit (and
therefore the whole character) to shift a tiny bit. This added up over
the character stream until it eventually failed.

Lowering the baud rate did not solve the issue, but inserting pauses
after a number of chars did. What also solved it was setting the
transmitter to 2 stop bits and the receiver to one stop bit. This was a
one way stream and this may not be possible on a bi-directional stream.

I would expect a sensible UART implementation to allow for a slightly
shorter stop bit to compensate for issues like this. But apparently this
UART did not do so in the 1 stop bit setting. I have not tested if
setting both ends to 2 stop bits also solved the problem.


--
Stef

Westheimer's Discovery:
A couple of months in the laboratory can frequently save a
couple of hours in the library.

Rick C

unread,
Nov 7, 2022, 5:03:12 AM11/7/22
to
On Sunday, November 6, 2022 at 6:34:59 PM UTC-5, Richard Damon wrote:
> On 11/6/22 8:56 AM, Rick C wrote:
> > There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.
> If the only way to handle a missed message is to abort the whole
> software system, that seems to be a pretty bad system.

You would certainly think that if your error rate was more than once a hundred years. I expect to be long dead before an RS-422 bus only 10 feet long burps a bit error.


> Note, if the master sends out a message, and waits for a response, with
> a retry if the message is not replied to, that naturally puts a pause in
> the communication bus for inter-message synchronization.

The pause is already there by virtue of the protocol. Commands and replies are on different busses.


> Based on your description, I can't imagine the master starting a message
> for another slave until after the first one answers, or you will
> interfere with the arbitration control of the reply bus.

Exactly! Now you are starting to catch on.


> In a dedicated link, after the link is established, it might be possible
> that one side just starts streaming data continously to the other side,

Except that there is no data to stream. Maybe you haven't been around for the full conversation. The protocol is command/reply for reading and writing registers and selecting which unit the registers are being accessed. The "stream" is an 8 bit value.


> but most protocals will have some sort of at least occational
> handshaking back, so a loss of sync can stop the flow to re-establish
> the syncronization. And such handshaking is needed if you have need to
> handle noise in packets.

??? Every command has a reply. How is that not a handshake???

--

Rick C.

-+++ Get 1,000 miles of free Supercharging
-+++ Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 7, 2022, 5:07:17 AM11/7/22
to
Why? Does the processor checksum every value calculated and stored in memory? Not on my computer. This is not warranted because the data failure rate is very low. Same with an RS-422 bus in an electrically quiet environment. I could probably get away with TTL level signals, but I'd like to have the ESD protection these RS-422 chips give. That additional noise immunity means there is an extremely small chance of bit errors. If we have problems, the error handling can be added.

--

Rick C.

+--- Get 1,000 miles of free Supercharging
+--- Tesla referral code - https://ts.la/richard11209

Stef

unread,
Nov 7, 2022, 5:26:06 AM11/7/22
to
Why are you discussing this? Out of academic curiosity? Then please
continue. But what does it matter for your system implementation? There
is just a UART/SPI/I²C/JTAG/GPIO peripheral and your software won't care
how this peripheral is implemented, as long as it behaves as expected.

--
Stef

"Microwave oven? Whaddya mean, it's a microwave oven? I've been watching
Channel 4 on the thing for two weeks."

Rick C

unread,
Nov 7, 2022, 5:39:36 AM11/7/22
to
I care. Don't you?

I remember when I came to the realization of why an MCU was so cost effective compared to programmable or even dedicated logic. It's because the MCU program is an FSM, using the instructions stored in the memory. These instructions are essentially logic, connected through the CPU logic, creating a very low cost solution to a wide variety of problems, because of the very low cost of memory compared to dedicated or programmable logic.

Stef

unread,
Nov 7, 2022, 5:55:27 AM11/7/22
to
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Sunday, November 6, 2022 at 6:34:59 PM UTC-5, Richard Damon wrote:
>> On 11/6/22 8:56 AM, Rick C wrote:
>> > There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.
>> If the only way to handle a missed message is to abort the whole
>> software system, that seems to be a pretty bad system.
>
> You would certainly think that if your error rate was more than once a hundred years. I expect to be long dead before an RS-422 bus only 10 feet long burps a bit error.

I would not dare to implement a serial protocol without any form of
error checking, on any length of cable.

You mention ESD somewhere. This can be a serious disturbance that can
easily corrupt a few bits.
Reminds me of a product where we got Windows blue screens during ESD
testing on a device connected via an FTDI USB to serial adapter. Cable
length less than 6 feet.

>
>> Note, if the master sends out a message, and waits for a response, with
>> a retry if the message is not replied to, that naturally puts a pause in
>> the communication bus for inter-message synchronization.
>
> The pause is already there by virtue of the protocol. Commands and replies are on different busses.
>
>
>> Based on your description, I can't imagine the master starting a message
>> for another slave until after the first one answers, or you will
>> interfere with the arbitration control of the reply bus.
>
> Exactly! Now you are starting to catch on.

So you do wait for a reply, and a reply is only expected on a valid
message? What if there is no reply, do you retry? If so, you already have
implemented some basic error checking. For more robustness you could (I
would) add some kind of CRC.

In the following, I think Richard is just considering a situation where
this problem might occur. Not your situation because he has already
'caught on', as you mention. But I should probably not speak for
Richard ...

>> In a dedicated link, after the link is established, it might be possible
>> that one side just starts streaming data continously to the other side,
>
> Except that there is no data to stream. Maybe you haven't been around for the full conversation. The protocol is command/reply for reading and writing registers and selecting which unit the registers are being accessed. The "stream" is an 8 bit value.
>
>
>> but most protocals will have some sort of at least occational
>> handshaking back, so a loss of sync can stop the flow to re-establish
>> the syncronization. And such handshaking is needed if you have need to
>> handle noise in packets.
>
> ??? Every command has a reply. How is that not a handshake???
>


--
Stef

I don't care for the Sugar Smacks commercial. I don't like the idea of
a frog jumping on my Breakfast.
-- Lowell, Chicago Reader 10/15/82

Stef

unread,
Nov 7, 2022, 6:07:43 AM11/7/22
to
No, I don't. We do use FTDI chips in our designs to interface a serial
port to USB. And we also use ready made FTDI cables. We use these chips
and cables based on their specifications in datasheets and user guides
etc. I have never felt the need to investigate how the UART/USB
functionality was actually implemented inside the chip. What would I do
with this knowledge? In a design I must rely on the behaviour as
specified in the datasheet.


> I remember when I came to the realization of why an MCU was so cost effective compared to programmable or even dedicated logic. It's because the MCU program is a FSM, using the instructions stored in the memory. These instruction are essentially logic, which is connected through the CPU logic, creating a very low cost solution to a wide variety of problems, because of the very low cost of memory compared to dedicated or programmable logic.
>

This is what I would call 'academic interest', and that is perfectly
fine. And this knowledge might help you think differently about solving
a problem in your own design. But it will make no difference in how you
will imlement this chip (or cable) in your design.

--
Stef

So many men, so many opinions; every one his own way.
-- Publius Terentius Afer (Terence)

David Brown

unread,
Nov 7, 2022, 9:46:35 AM11/7/22
to
On 07/11/2022 11:00, Stef wrote:
> On 2022-11-05 Rick C wrote in comp.arch.embedded:
>> On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
>>
>>> In UART communication, this is handled at the protocol level rather than
>>> the hardware (though some UART hardware may have "idle detect" signals
>>> when more than 11 bits of high level are seen in a row). Some
>>> UART-based protocols also use a "break" signal between frames - that is
>>> a string of at least 11 bits of low level.
>>>
>>> If you do not have such pauses, and a receiver is out of step,
>>
>> You have failed to explain how a receiver would get "out of step". The receiver syncs to every character transmitted. If all characters are received, what else do you need? How does it get "out of step"?
>
> I have seen this happen in long messages (few kB) with no pauses between
> characters and transmitter and receiver set to 8,N,1. It seemed that the
> receiver needed the complete stop bit and then immediately saw the low
> of the next start bit. Detecting the edge when it was ready to see it,
> not when it actually happened. When the receiver is slightly slower than
> the transmitter, this caused the detection of the start bit (and
> therefor the whole character) to shift a tiny bit. This added up over
> the character stream until it eventually failed.
>
> Lowering the baud rate did not solve the issue, but inserting pauses
> after a number of chars did. What also solved it was setting the
> transmitter to 2 stop bits and the receiver to one stop bit. This was a
> one way stream and this may not be possible on a bi-directional stream.
>

An extra stop bit will help for this particular kind of error (and is a
good idea if you get such errors often, as it will improve your
percentage timing margins). An occasional pause of at least 11 bit
times will help for all sorts of possible errors.

Basically, it is a good idea to assume that sometimes things go wrong.
There can be noise, interference, cosmic rays, power glitches - even in
a system that has bug-free software, quality hardware, and no fallible
human anywhere, there's always a risk of faults. That is why most
serial protocols have CRC's or other checksums, and at least a basic "if
there is no reply, repeat the telegram" handler.

Rick C

unread,
Nov 7, 2022, 10:51:50 AM11/7/22
to
On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
>
> In more sophisticated tristate drivers, you would off (disconnect) the
> local terminator whenever the driver is enabled. This is done in some
> multi-lane systems as it can significantly reduce power and make slope
> control and pulse shaping easier. (It's not something you'd be likely
> to see on RS-485 buses.)

I'm not sure what bus arrangement you are referring to. The RS-485 bus is intended to be linear. The terminators are at the ends, to prevent reflections. There's no point in removing either of them no matter which driver is enabled. All drivers see two loads. A driver at an end of the bus sees the local terminator and the (far-terminated) bus. A driver along the bus sees two bus segments, which it drives in parallel. So everyone sees the same impedance, half the characteristic impedance of the bus.

--

Rick C.

+-+- Get 1,000 miles of free Supercharging
+-+- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 7, 2022, 11:05:18 AM11/7/22
to
If a UART receiver cannot properly receive a message like this, it is defective. The point of the start and stop bits is to provide the synchronization. The receiver simply needs to detect the stop bit state (by sampling where the receiver thinks the middle of the bit is) and then immediately start looking for the leading edge of the next start bit. The receiver will then be synchronized to the new character's bit timing and it will never slip. That gives up to ±5% combined timing error tolerance.

If the receiver waits until a later time, such as the expected end of the received stop bit, to start looking for a start bit leading edge, it will not be able to tolerate a timing error where the transmitter is faster than the receiver, making the timing tolerance unipolar, i.e. 5% rather than ±5%.

That's a receiver design flaw, or the transmitter is sending short stop bits, which you can easily see on the scope with a delayed trigger control.

You should be able to diagnose which end has the problem by connecting a different type of receiver to the stream. If a different receiver UART is able to receive the messages without fault, the problem is obviously the failing receiver.
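
For anyone who wants to check the ±5% figure, the arithmetic is short. With 8N1 the receiver samples bit n at (n + 0.5) of its own bit times after the start edge, so the last sample (the stop bit) lands at 9.5 receiver bit times and must still fall inside the transmitter's stop bit, i.e. between 9 and 10 transmitter bit times. A back-of-the-envelope sketch, ignoring edge-detection latency:

# 8N1: start + 8 data + stop = 10 bits. The stop bit is sampled 9.5 receiver
# bit times after the start edge and must land between 9 and 10 transmitter
# bit times after that edge.
lo = 9.0 / 9.5 - 1.0    # receiver bit period may be ~5.3% shorter (receiver fast)
hi = 10.0 / 9.5 - 1.0   # receiver bit period may be ~5.3% longer (receiver slow)
print("tolerance on bit period: %+.1f%% .. %+.1f%%" % (lo * 100, hi * 100))

If the receiver instead arms its start-edge detector only after its own full 10-bit frame has elapsed, the "receiver slow" half of that window is lost, which is the unipolar 5% case above.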

--

Rick C.

+-++ Get 1,000 miles of free Supercharging
+-++ Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 7, 2022, 11:18:18 AM11/7/22
to
On Monday, November 7, 2022 at 6:55:27 AM UTC-4, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> > On Sunday, November 6, 2022 at 6:34:59 PM UTC-5, Richard Damon wrote:
> >> On 11/6/22 8:56 AM, Rick C wrote:
> >> > There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.
> >> If the only way to handle a missed message is to abort the whole
> >> software system, that seems to be a pretty bad system.
> >
> > You would certainly think that if your error rate was more than once a hundred years. I expect to be long dead before an RS-422 bus only 10 feet long burps a bit error.
> I would not dare to implement a serial protocol without any form of
> error checking, on any length of cable.
>
> You mention ESD somewhere. This can be a serious disturbance that can
> easily corrupt a few bits.

Yes, I mentioned ESD somewhere. This is testing newly constructed circuit boards, so is used in an ESD controlled environment.


> Reminds me of a product where we got windows blue screens during ESD
> testing on a device connected via an FTDI USB to serial adapter. Cable
> length less than 6 feet.

I assume you mean some other device was being ESD tested? This is not being used in an ESD testing lab. Was the FTDI serial cable RS-232 by any chance? Being single ended, that is much less tolerant of noise.


> >> Note, if the master sends out a message, and waits for a response, with
> >> a retry if the message is not replied to, that naturally puts a pause in
> >> the communication bus for inter-message synchronization.
> >
> > The pause is already there by virtue of the protocol. Commands and replies are on different busses.
> >
> >
> >> Based on your description, I can't imagine the master starting a message
> >> for another slave until after the first one answers, or you will
> >> interfere with the arbitration control of the reply bus.
> >
> > Exactly! Now you are starting to catch on.
> So you do wait for a reply, and a reply is only expected on a valid
> message? What if there is no reply, do you retry? If so, you already have
> implemented some basic error checking. For more robustness you could (I
> would) add some kind of CRC.

There should not be any messages other than "valid" messages. I don't recall specifically what the slave does on messages with bit errors, but I'm pretty sure it simply doesn't know they have bit errors. The message has no checksum or other bit error control. The format has one character to indicate the "command" type. If that character is corrupted, the command is not used, unless it is changed to another valid character (3 of 256 chance).

Again, there's no reason to "detect" errors since I've implemented no error protocol. That is many times more complex than simply ignoring the errors, which works because errors don't happen often enough to have an impact on testing.

On the Apollo moon missions, they took no precautions against damage from micrometeoroids, because the effort required was not commensurate with the likelihood of the event.

--

Rick C.

++-- Get 1,000 miles of free Supercharging
++-- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 7, 2022, 11:25:53 AM11/7/22
to
On Monday, November 7, 2022 at 7:07:43 AM UTC-4, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> > On Monday, November 7, 2022 at 5:26:06 AM UTC-5, Stef wrote:
> >> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> >
> > I care. Don't you?
> No, I don't. We do use FTDI chips in our designs to interface a serial
> port to USB. And we also use ready made FTDI cables. We use these chips
> and cables based on their specifications in datasheets and user guides
> etc. I have never felt the need to invesitigate how the UART/USB
> functionality was actually implemented inside the chip. What would I do
> with this knowledge? In a design I must rely on the behaviour as
> specified in the datasheet.

It's hard to imagine an engineer with no curiosity.


> > I remember when I came to the realization of why an MCU was so cost effective compared to programmable or even dedicated logic. It's because the MCU program is a FSM, using the instructions stored in the memory. These instruction are essentially logic, which is connected through the CPU logic, creating a very low cost solution to a wide variety of problems, because of the very low cost of memory compared to dedicated or programmable logic.
> >
> This is what I would call 'academic interest', and that is perfectly
> fine. And this knowledge might help you think differently about solving
> a problem in your own design. But it will make no difference in how you
> will imlement this chip (or cable) in your design.

It is very much of practical interest to me, as I design FPGAs, and knowing that I can use fewer resources by constructing a peripheral as a CPU is important info. The FPGA design in the UUT was pushing the capacity of the chip it was in. I was on the cusp of changing to a CPU-centric design when it routed at 90% utilization. This time, I'm bumping the size of the FPGA significantly, about 3x. The Gowin FPGA devices are very cost effective. I'll be able to use both the hard logic and a soft CPU. LOL

--

Rick C.

++-+ Get 1,000 miles of free Supercharging
++-+ Tesla referral code - https://ts.la/richard11209

Stef

unread,
Nov 7, 2022, 11:57:27 AM11/7/22
to
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Monday, November 7, 2022 at 7:07:43 AM UTC-4, Stef wrote:
>> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> > On Monday, November 7, 2022 at 5:26:06 AM UTC-5, Stef wrote:
>> >> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> >
>> > I care. Don't you?
>> No, I don't. We do use FTDI chips in our designs to interface a serial
>> port to USB. And we also use ready made FTDI cables. We use these chips
>> and cables based on their specifications in datasheets and user guides
>> etc. I have never felt the need to invesitigate how the UART/USB
>> functionality was actually implemented inside the chip. What would I do
>> with this knowledge? In a design I must rely on the behaviour as
>> specified in the datasheet.
>
> It's hard to imagine an engineer with no curiosity.

Yes, that's hard. But imagining an engineer who does not care about the
internal structure of every single chip he uses is a lot easier (for
me). I tend to focus my curiosity on things that matter to me, don't
you?


--
Stef

One difference between a man and a machine is that a machine is quiet
when well oiled.

Stef

unread,
Nov 7, 2022, 12:20:33 PM11/7/22
to
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Monday, November 7, 2022 at 6:55:27 AM UTC-4, Stef wrote:
>> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> > On Sunday, November 6, 2022 at 6:34:59 PM UTC-5, Richard Damon wrote:
>> >> On 11/6/22 8:56 AM, Rick C wrote:
>> >> > There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.
>> >> If the only way to handle a missed message is to abort the whole
>> >> software system, that seems to be a pretty bad system.
>> >
>> > You would certainly think that if your error rate was more than once a hundred years. I expect to be long dead before an RS-422 bus only 10 feet long burps a bit error.
>> I would not dare to implement a serial protocol without any form of
>> error checking, on any length of cable.
>>
>> You mention ESD somewhere. This can be a serious disturbance that can
>> easily corrupt a few bits.
>
> Yes, I mentioned ESD somewhere. This is testing newly constructed circuit boards, so is used in an ESD controlled environment.
>

You wrote:
"I could probably get away with TTL level signals, but I'd like to have
the ESD protection these RS-422 chips give. That additional noise
immunity means there is an extremely small chance of bit errors. If we
have problems, the error handling can be added."

This led me to believe you were expecting actual ESD discharges that
could disturb your messages.

ESD protection is just that: protection against device damage.

I do not believe ESD protection does anything to improve noise immunity.
It just increases the ESD level at which the device will be damaged.

And if you have an ESD controlled environment, that is not actually
needed.


>> Reminds me of a product where we got windows blue screens during ESD
>> testing on a device connected via an FTDI USB to serial adapter. Cable
>> length less than 6 feet.
>
> I assume you mean some other device was being ESD tested? This is not being used in an ESD testing lab. Was the FTDI serial cable RS-232 by any chance? Being single ended, that is much less tolerant of noise.

No a device with an FTDI chip on it was tested. USB cable was <= 6 feet
and serial ports were only a few centimeters of TTL level PCB traces.
This was reproducible with an evaluation kit with only USB connected.

>
>> >> Note, if the master sends out a message, and waits for a response, with
>> >> a retry if the message is not replied to, that naturally puts a pause in
>> >> the communication bus for inter-message synchronization.
>> >
>> > The pause is already there by virtue of the protocol. Commands and replies are on different busses.
>> >
>> >
>> >> Based on your description, I can't imagine the master starting a message
>> >> for another slave until after the first one answers, or you will
>> >> interfere with the arbitration control of the reply bus.
>> >
>> > Exactly! Now you are starting to catch on.
>> So you do wait for a reply, and a reply is only expected on a valid
>> message? What if there is no reply, do you retry? If so, you already have
>> implemented some basic error checking. For more robustness you could (I
>> would) add some kind of CRC.
>
> There should not be any messages other than "valid" messages. I don't recall specifically what the slave does on messages with bit errors, but I'm pretty sure it simply doesn't know they have bit errors. The message has no checksum or other bit error control. The format has one character to indicate the "command" type. If that character is corrupted, the command is not used, unless it is changed to another valid character (3 of 256 chance).

Okay, the slaves are already implemented? Missed that.
So there is some very basic error detection: the command must be valid.
And if it is not and the slave does not reply, what does the master do?

> Again, there's no reason to "detect" errors since I've implemented no error protocol. That is many times more complex than simply ignoring the errors, which works because errors don't happen often enough to have an impact on testing.

A test rig that ignores errors. I don't know the requirements of this
test and how bad it would be to have an invalid pass/fail result.

> On the Apollo moon missions, they took no precautions against damage from micrometeoroids, because the effort required was not commensurate with the likelihood of the event.

I am not sure what they could have done, but adding effective shields
would probably have prohibitive weight consequences, if at all possible.
But if you can believe the movie Apollo 13, there is a real danger from
micrometeorites.


--
Stef

<KnaraKat> Bite me.
* TheOne gets some salt, then proceeds to nibble on KnaraKat a little
bit....

Rick C

unread,
Nov 7, 2022, 12:50:26 PM11/7/22
to
On Monday, November 7, 2022 at 12:57:27 PM UTC-4, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> > On Monday, November 7, 2022 at 7:07:43 AM UTC-4, Stef wrote:
> >> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> >> > On Monday, November 7, 2022 at 5:26:06 AM UTC-5, Stef wrote:
> >> >> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> >> >
> >> > I care. Don't you?
> >> No, I don't. We do use FTDI chips in our designs to interface a serial
> >> port to USB. And we also use ready made FTDI cables. We use these chips
> >> and cables based on their specifications in datasheets and user guides
> >> etc. I have never felt the need to invesitigate how the UART/USB
> >> functionality was actually implemented inside the chip. What would I do
> >> with this knowledge? In a design I must rely on the behaviour as
> >> specified in the datasheet.
> >
> > It's hard to imagine an engineer with no curiosity.
> Yes, that's hard. But imagining an engineer who does not care about the
> internal structure of every single chip he uses is a lot easier (for
> me). I tend to focus my curiiosity on things that matter to me, don't
> you?

By definition curiosity is, "an eager desire to know or learn about something". That's not limited to things I *need* to know about. In fact, I don't limit my curiosity at all. It's a desire, not an act.

The knowledge can be very useful, if it opens new ideas for how to use these devices. In fact, I found that the majority of FTDI cables are full speed, which is much more limiting than the few Hi-speed USB cables they make. The Hi-speed cables seem to handle a lot more protocols. So now I'm back to wondering if they are implemented in a CPU-based design.

--

Rick C.

+++- Get 1,000 miles of free Supercharging
+++- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 7, 2022, 1:26:46 PM11/7/22
to
Yes, you are right. My language there is poor. I should have said I prefer the noise immunity the RS-422 devices have compared to TTL devices *in addition to* the ESD immunity.


> And if you have an ESD controlled environment, that is not actually
> needed.

In theory, but I can't control how these will be used in the future. ESD immunity is something I want designed into any application that is connected by a cable.


> >> Reminds me of a product where we got windows blue screens during ESD
> >> testing on a device connected via an FTDI USB to serial adapter. Cable
> >> length less than 6 feet.
> >
> > I assume you mean some other device was being ESD tested? This is not being used in an ESD testing lab. Was the FTDI serial cable RS-232 by any chance? Being single ended, that is much less tolerant of noise.
> No a device with an FTDI chip on it was tested. USB cable was <= 6 feet
> and serial ports were only a few centimeters of TTL level PCB traces.
> This was reproducable with an evaluation kit with only USB connected.

So you were shooting high voltages into a device and were surprised the PC it was connected to crashed? I'm not following this at all. I'm pretty sure the FTDI cable is not rated to provide isolation. That has nothing to do with ESD protection. As you say, ESD protection is about damage, not operation.


> >> >> Note, if the master sends out a message, and waits for a response, with
> >> >> a retry if the message is not replied to, that naturally puts a pause in
> >> >> the communication bus for inter-message synchronization.
> >> >
> >> > The pause is already there by virtue of the protocol. Commands and replies are on different busses.
> >> >
> >> >
> >> >> Based on your description, I can't imagine the master starting a message
> >> >> for another slave until after the first one answers, or you will
> >> >> interfere with the arbitration control of the reply bus.
> >> >
> >> > Exactly! Now you are starting to catch on.
> >> So you do wait for a reply, and a reply is only expected on a valid
> >> message? What if there is no reply, do you retry? If so, you already have
> >> implemented some basic error checking. For more robustness you could (I
> >> would) add some kind of CRC.
> >
> > There should not be any messages other than "valid" messages. I don't recall specifically what the slave does on messages with bit errors, but I'm pretty sure it simply doesn't know they have bit errors. The message has no checksum or other bit error control. The format has one character to indicate the "command" type. If that character is corrupted, the command is not used, unless it is changed to another valid character (3 of 256 chance).
> Okay, the slaves are already implemented? Missed that.

A test fixture is in use, with software on the PC. There's no reason to change the protocol in the new test fixture and software unless there is a need, a new requirement.


> So there is some very basic error detection: the command must be valid.
> And if it is not and the slave does not reply, what does the master do?

The command being valid is based on a single character. The command is something like, "01 23 X<cr><lf>". I suppose the CR LF might also be required, but I don't recall. It might require one and ignore the other. The whole CR LF thing is such a PITA. The only character that is required for sure, is the "X", which at the moment can be one of three from the possible characters (don't recall if they are 8 bit or 7). I also don't recall if parity checking is used.
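
To make the shape of one exchange concrete, the PC-side master's command/reply cycle is roughly the following. This is only a sketch: pyserial is assumed, the port name, baud rate and the meaning of the two numeric fields are placeholders rather than the real values, and there is no retry, matching the protocol described above.

import serial  # pyserial

# Placeholder port name and rate -- the real link runs at roughly 100 kbps.
link = serial.Serial("COM5", 115200, timeout=1.0)

def command(field1, field2, code):
    # Build one command of the form "01 23 X<cr><lf>"; what the two numeric
    # fields mean is a placeholder here, not the real format.
    msg = "{:02d} {:02d} {}\r\n".format(field1, field2, code).encode("ascii")
    link.write(msg)
    # Slaves only speak when spoken to, so the next line back is the reply
    # (empty bytes on timeout; no retry, matching the protocol above).
    return link.readline()

reply = command(1, 23, "X")

The point is just that the master writes one line and blocks on one reply; nothing fancier is needed.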

I do know that I had a flaw in the initial setup that gave intermittent errors. I had the hardest time finding the problem because of my bias in where to look. I tried adding re-transmission, which helped, but it borked up the code pretty well. I guess my software skills are not so good. In the end, it was an Ariane problem where the UART in the FPGA was existing code that was reused. Thinking it was a previously validated module, it was not suspected... at all. Eventually I realized it did not include the input FF synchronization to resolve race conditions. That was left for the system designer to add, since there may be more than one device on the same input.

Since that was solved, we've tested thousands of UUTs with no interface bit errors. So I have no worries about this.


> > Again, there's no reason to "detect" errors since I've implemented no error protocol. That is many times more complex than simply ignoring the errors, which works because errors don't happen often enough to have an impact on testing.
> A test rig that ignores errors. I don't know the requirements of this
> test and how bad it would be to have an invalid pass/fail result.

Since the test will be run overnight, every few seconds, with all UUT errors logged, the chances of the same bit error happening the same way, causing the same miss of a UUT failure some thousands of times (about 7,000), is on the order of a proton decaying. Well, maybe a bit more likely.


> > On the Apollo moon missions, they took no precautions against damage from micrometeoroids, because the effort required was not commensurate with the likelihood of the event.
> I am not sure what they could have done, but adding effective shields
> would probably have prohibitive weight consequences, if at all possible.
> But if you can believe the movie Apollo 13, there is a real danger from
> micrometeorites.

Real, even if very small danger. That's the point. In this case, the impact is small, the likelihood is small, and the work to mitigate the problem is far more effort than justifiable, no matter how emotional people may get about "Errors! OMG, there may be ERRORS!"

Maybe I need a heavy duty cabinet to protect against the very real possibility of meteors?

https://abc7chicago.com/meteor-california-destroys-home-shower/12425011/

--

Rick C.

++++ Get 1,000 miles of free Supercharging
++++ Tesla referral code - https://ts.la/richard11209

Stef

unread,
Nov 7, 2022, 3:30:37 PM11/7/22
to
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Monday, November 7, 2022 at 12:57:27 PM UTC-4, Stef wrote:
>> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> > On Monday, November 7, 2022 at 7:07:43 AM UTC-4, Stef wrote:
>> >> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> >> > On Monday, November 7, 2022 at 5:26:06 AM UTC-5, Stef wrote:
>> >> >> On 2022-11-07 Rick C wrote in comp.arch.embedded:
>> >> >
>> >> > I care. Don't you?
>> >> No, I don't. We do use FTDI chips in our designs to interface a serial
>> >> port to USB. And we also use ready made FTDI cables. We use these chips
>> >> and cables based on their specifications in datasheets and user guides
>> >> etc. I have never felt the need to investigate how the UART/USB
>> >> functionality was actually implemented inside the chip. What would I do
>> >> with this knowledge? In a design I must rely on the behaviour as
>> >> specified in the datasheet.
>> >
>> > It's hard to imagine an engineer with no curiosity.
>> Yes, that's hard. But imagining an engineer who does not care about the
>> internal structure of every single chip he uses is a lot easier (for
>> >> me). I tend to focus my curiosity on things that matter to me, don't
>> you?
>
> By definition curiosity is, "an eager desire to know or learn about something". That's not limited to things I *need* to know about. In fact, I don't limit my curiosity at all. It's a desire, not an act.
>

Learn about something != learn about everything
Matter to me != *need* to know about

My not caring about the innards of a particular chip seems to let you
think I don't care about anything. But we are not discussing my
interests here, but your bus.


--
Stef

Old age is the most unexpected of things that can happen to a man.
-- Trotsky

Stef

unread,
Nov 7, 2022, 4:04:29 PM11/7/22
to
Yes, always protect accessible parts.

>
>> >> Reminds me of a product where we got windows blue screens during ESD
>> >> testing on a device connected via an FTDI USB to serial adapter. Cable
>> >> length less than 6 feet.
>> >
>> > I assume you mean some other device was being ESD tested? This is not being used in an ESD testing lab. Was the FTDI serial cable RS-232 by any chance? Being single ended, that is much less tolerant of noise.
>> No a device with an FTDI chip on it was tested. USB cable was <= 6 feet
>> and serial ports were only a few centimeters of TTL level PCB traces.
>> This was reproducable with an evaluation kit with only USB connected.
>
> So you were shooting high voltages into a device and were surprised the PC it was connected to crashed? I'm not following this at all. I'm pretty sure the FTDI cable is not rated to provide isolation. That has nothing to do with ESD protection. As you say, ESD protection is about damage, not operation.

Of course not into a device. But all over the enclosure, as is required
to pass EMC testing. These discharges cause current spikes that can
induce currents in parts of your circuits. Part of ESD testing also uses
coupling planes, where you fire on a metal plate 'near' the device. That
can also give a lot of noise. All these things may not cause device
damage like direct ESD discharges, but they can disturb the device
operation. Depending on the expected performance level, this may cause a
fail. For medical devices you usually cannot get away with worse than
"temporary loss of function and recovery without operator intervention".

ESD protection is indeed about damage prevention. But passing an ESD
test usually requires more than just preventing damage.

How would you rate a phone that resets every time you pick it up when
you have not properly discharged yourself from static electricity? It
may just reboot and work fine after that, but it would still be a crappy
phone.


>> >> >> Note, if the master sends out a message, and waits for a response, with
>> >> >> a retry if the message is not replied to, that naturally puts a pause in
>> >> >> the communication bus for inter-message synchronization.
>> >> >
>> >> > The pause is already there by virtue of the protocol. Commands and replies are on different busses.
>> >> >
>> >> >
>> >> >> Based on your description, I can't imagine the master starting a message
>> >> >> for another slave until after the first one answers, or you will
>> >> >> interfere with the arbitration control of the reply bus.
>> >> >
>> >> > Exactly! Now you are starting to catch on.
>> >> So you do wait for a reply, and a reply is only expected on a valid
>> >> message? What if there is no reply, do you retry? If so, you already have
>> >> implemented some basic error checking. For more robustness you could (I
>> >> would) add some kind of CRC.
>> >
>> > There should not be any messages other than "valid" messages. I don't recall specifically what the slave does on messages with bit errors, but I'm pretty sure it simply doesn't know they have bit errors. The message has no checksum or other bit error control. The format has one character to indicate the "command" type. If that character is corrupted, the command is not used, unless it is changed to another valid character (3 of 256 chance).
>> Okay, the slaves are already implemented? Missed that.
>
> A test fixture is in use, with software on the PC. There's no reason to change the protocol in the new test fixture and software unless there is a need, a new requirement.

Ah, existing stuff.

>> So there is some very basic error detection: the command must be valid.
>> And if it is not and the slave does not reply, what does the master do?
>
> The command being valid is based on a single character. The command is something like, "01 23 X<cr><lf>". I suppose the CR LF might also be required, but I don't recall. It might require one and ignore the other. The whole CR LF thing is such a PITA. The only character that is required for sure, is the "X", which at the moment can be one of three from the possible characters (don't recall if they are 8 bit or 7). I also don't recall if parity checking is used.

Okay, more restrictions on valid messages, yet more error detection
present already. ;-)

> I do know that I had a flaw in the initial setup that gave intermittent errors. I had the hardest time finding the problem because of my bias in where to look. I tried adding re-transmission, which helped, but it borked up the code pretty well. I guess my software skills are not so good. In the end, it was an Ariane problem where the UART in the FPGA was existing code that was reused. Thinking it was a previously validated module, it was not suspected... at all. Eventually I realized it did not include the input FF synchronization to resolve race conditions. That was left for the system designer to add, since there may be more than one device on the same input.
>
> Since that was solved, we've tested thousands of UUTs with no interface bit errors. So I have no worries about this.
>
>
>> > Again, there's no reason to "detect" errors since I've implemented no error protocol. That is many times more complex than simply ignoring the errors, which works because errors don't happen often enough to have an impact on testing.
>> A test rig that ignores errors. I don't know the requirements of this
>> test and how bad it would be to have an invalid pass/fail result.
>
> Since the test will be run overnight, every few seconds, with all UUT errors logged, the chances of the same bit error happening the same way, causing the same miss of a UUT failure some thousands of times (about 7,000), is on the order of a proton decaying. Well, maybe a bit more likely.
>

Another layer of error detection. ;-)

>
>> > On the Apollo moon missions, they took no precautions against damage from micrometeoroids, because the effort required was not commensurate with the likelihood of the event.
>> I am not sure what they could have done, but adding effective shields
>> would probably have prohibitive weight consequences, if at all possible.
>> But if you can believe the movie Apollo 13, there is a real danger from
>> micrometeorites.
>
> Real, even if very small danger. That's the point. In this case, the impact is small, the likelihood is small, and the work to mitigate the problem is far more effort than justifiable, no matter how emotional people may get about "Errors! OMG, there may be ERRORS!"
>
> Maybe I need a heavy duty cabinet to protect against the very real possibility of meteors?
>
> https://abc7chicago.com/meteor-california-destroys-home-shower/12425011/
>


--
Stef

The only winner in the War of 1812 was Tchaikovsky.
-- David Gerrold

Rick C

unread,
Nov 7, 2022, 5:27:41 PM11/7/22
to
Seems to me you wanted to talk about my interests when you said, "Why are you discussing this?" and then continued discussing that issue for some half dozen more posts.

--

Rick C.

----- Get 1,000 miles of free Supercharging
----- Tesla referral code - https://ts.la/richard1120

Rick C

unread,
Nov 7, 2022, 5:54:08 PM11/7/22
to
Lol! I probably would barely notice! Cell phones are among the most unreliable devices we use with any regularity. I recall a Dave Barry article that was talking about cell phones which he mocked by describing typical conversations as, "What? WHAT?" Now, it's more like, "Hello...? Hello...? <click>"


> >> >> >> Note, if the master sends out a message, and waits for a response, with
> >> >> >> a retry if the message is not replied to, that naturally puts a pause in
> >> >> >> the communication bus for inter-message synchronization.
> >> >> >
> >> >> > The pause is already there by virtue of the protocol. Commands and replies are on different busses.
> >> >> >
> >> >> >
> >> >> >> Based on your description, I can't imagine the master starting a message
> >> >> >> for another slave until after the first one answers, or you will
> >> >> >> interfere with the arbitration control of the reply bus.
> >> >> >
> >> >> > Exactly! Now you are starting to catch on.
> >> >> So you do wait for a reply, and a reply is only expected on a valid
> >> >> message? What if there is no reply, do you retry? If so, you already have
> >> >> implemented some basic error checking. For more robustness you could (I
> >> >> would) add some kind of CRC.
> >> >
> >> > There should not be any messages other than "valid" messages. I don't recall specifically what the slave does on messages with bit errors, but I'm pretty sure it simply doesn't know they have bit errors. The message has no checksum or other bit error control. The format has one character to indicate the "command" type. If that character is corrupted, the command is not used, unless it is changed to another valid character (3 of 256 chance).
> >> Okay, the slaves are already implemented? Missed that.
> >
> > A test fixture is in use, with software on the PC. There's no reason to change the protocol in the new test fixture and software unless there is a need, a new requirement.
> Ah, existing stuff.

Yes, the very first sentence of the very first post was, "I have a test fixture that uses RS-232 to communicate with a PC."


> >> So there is some very basic error detection: the command must be valid.
> >> And if it is not and the slave does not reply, what does the master do?
> >
> > The command being valid is based on a single character. The command is something like, "01 23 X<cr><lf>". I suppose the CR LF might also be required, but I don't recall. It might require one and ignore the other. The whole CR LF thing is such a PITA. The only character that is required for sure, is the "X", which at the moment can be one of three from the possible characters (don't recall if they are 8 bit or 7). I also don't recall if parity checking is used.
> Okay, more restrictions on valid messages, yet more error detection
> present already. ;-)

No real detection, since there's no awareness of the error. It's like saying your car's transmission has "error detection" because it can stop working when a gear tooth breaks off and jams the whole transmission, breaking more gears.


> > I do know that I had a flaw in the initial setup that gave intermittent errors. I had the hardest time finding the problem because of my bias in where to look. I tried adding re-transmission, which helped, but it borked up the code pretty well. I guess my software skills are not so good. In the end, it was an Ariane problem where the UART in the FPGA was existing code that was reused. Thinking it was a previously validated module, it was not suspected... at all. Eventually I realized it did not include the input FF synchronization to resolve race conditions. That was left for the system designer to add, since there may be more than one device on the same input.
> >
> > Since that was solved, we've tested thousands of UUTs with no interface bit errors. So I have no worries about this.
> >
> >
> >> > Again, there's no reason to "detect" errors since I've implemented no error protocol. That is many times more complex than simply ignoring the errors, which works because errors don't happen often enough to have an impact on testing.
> >> A test rig that ignores errors. I don't know the requirements of this
> >> test and how bad it would be to have an invalid pass/fail result.
> >
> > Since the test will be run overnight, every few seconds, with all UUT errors logged, the chances of the same bit error happening the same way, causing the same miss of a UUT failure some thousands of times (about 7,000), is on the order of a proton decaying. Well, maybe a bit more likely.
> >
> Another layer of error detection. ;-)

Errors in the UUT. If there is an error in the comms link, we likely would not even know about it. This will be an interesting test for both the UUTs and the comms link. I'm not certain how many messages it currently takes to implement any given test, but it should be possible to run the tests in parallel minimizing wait times for the PC software. I would estimate the total test time for a chassis to be between 10 and 60 seconds, so between 1,200 and 7,200 tests in the 20 hour soak time. As I learn more about the FTDI device, I am more pessimistic about the throughput. I could shove the details of tests into the FPGAs, so the commands are more like, run test 1 on channel number 2. That would cut the number of tests significantly, but require much more work in updating the FPGA software.
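
For what it's worth, the arithmetic behind those numbers is just two divisions (Python, 20 hour soak as stated above):

soak_seconds = 20 * 3600           # 20 hour overnight soak
slowest, fastest = 60, 10          # seconds per full chassis test pass
print(soak_seconds // slowest)     # 1200 passes at 60 s each
print(soak_seconds // fastest)     # 7200 passes at 10 s each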

I think I'll start with a direct transfer of the existing protocol.

--

Rick C.

----+ Get 1,000 miles of free Supercharging
----+ Tesla referral code - https://ts.la/richard11209

Stef

unread,
Nov 7, 2022, 6:07:50 PM11/7/22
to
On 2022-11-07 Rick C wrote in comp.arch.embedded:
> On Monday, November 7, 2022 at 4:30:37 PM UTC-4, Stef wrote:
...
>> My not caring about the innards of a particular chip seems to let you
>> think I don't care about anything. But we are not discussing my
>> interests here, but your bus.
>
> Seems to me you wanted to talk about my interests when you said, "Why are you discussing this?" and then continued discussing that issue for some half dozen more posts.


That was not my intention. It seemed to me that you cared about the
internal implementation of the FTDI chip in relation to your bus
problem. I just wanted to point out that is of no concern for your bus
operation. And then I just got dragged in. ;-)


--
Stef

He's the kind of guy, that, well, if you were ever in a jam he'd
be there... with two slices of bread and some chunky peanut butter.

Paul Rubin

unread,
Nov 7, 2022, 6:14:56 PM11/7/22
to
Rick C <gnuarm.del...@gmail.com> writes:
> I could shove the details of tests into the FPGAs, so the commands are
> more like, run test 1 on channel number 2. That would cut the number
> of tests significantly, but require much more work in updating the
> FPGA software.

Are we circling back to the idea of putting a microprocessor on the test
board? Ivan Sutherland famously called this a wheel of reincarnation:

http://www.cap-lore.com/Hardware/Wheel.html

Richard Damon

unread,
Nov 7, 2022, 7:02:19 PM11/7/22
to
YOU may consider it a design flaw, but I have seen too many serial ports
having this flaw in them to just totally ignore it.

Yes, the "robust" design will allow for a short stop bit, but you can't
count on all serial adaptors allowing for it.

Part of the problem is that (at least as far as I know) the Asynchronous
Serial Format isn't actually a "Published Standard", but just an
de-facto protocol that is simple enough that it mostly just works, but
still hides a few gotchas for corner cases.

Rick C

unread,
Nov 7, 2022, 7:50:40 PM11/7/22
to
On Monday, November 7, 2022 at 7:07:50 PM UTC-4, Stef wrote:
> On 2022-11-07 Rick C wrote in comp.arch.embedded:
> > On Monday, November 7, 2022 at 4:30:37 PM UTC-4, Stef wrote:
> ...
> >> My not caring about the innards of a particular chip seems to let you
> >> think I don't care about anything. But we are not discussing my
> >> interests here, but your bus.
> >
> > Seems to me you wanted to talk about my interests when you said, "Why are you discussing this?" and then continued discussing that issue for some half dozen more posts.
> That was not my intention. It seemed to me that you cared about the
> internal implementation of the FTDI chip in relation to your bus
> problem. I just wanted to point out that is of no concern for your bus
> operation. And then I just got dragged in. ;-)

I'm always curious about how things are implemented. I thought I had heard somewhere that the FTDI chip was a fast, but small processor. I design those for use in FPGA designs and they can be very effective. Often the code is very minimal.

--

Rick C.

---+- Get 1,000 miles of free Supercharging
---+- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 7, 2022, 7:57:58 PM11/7/22
to
On Monday, November 7, 2022 at 7:14:56 PM UTC-4, Paul Rubin wrote:
> Rick C <gnuarm.del...@gmail.com> writes:
> > I could shove the details of tests into the FPGAs, so the commands are
> > more like, run test 1 on channel number 2. That would cut the number
> > of tests significantly, but require much more work in updating the
> > FPGA software.

That should have been, "cut back the number of commands".


> Are we circling back to the idea putting a microprocessor on the test
> board? Ivan Sutherland famously called this a wheel of reincarnation:
>
> http://www.cap-lore.com/Hardware/Wheel.html

Zero need for a processor in the FPGA at this point. At least the need for a conventional processor. The commands are things like, assert pin X, read pin Y. A test of some basic functionality that could be debugged separately from other tests would be a few of these instructions. Very easy to do in an FPGA by using memory blocks and stepping through the commands. But I'm open to a processor. It would be one of my own design, however.
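
As a rough behavioral model of that idea (Python standing in for the HDL; the opcode names and the three-field layout are made up for illustration, not the real command set):

# One stored "test" is just a short program in a memory block; the FPGA
# sequencer steps an address counter through it.  Modeled as a list here.
TEST_1 = [
    ("assert", 5, 1),     # drive pin 5 high
    ("assert", 7, 0),     # drive pin 7 low
    ("read",   9, None),  # sample pin 9 and record the result
]

def run_test(program, pins):
    results = []
    for op, pin, value in program:   # in hardware: step a block-RAM address
        if op == "assert":
            pins[pin] = value
        elif op == "read":
            # in the real fixture this would sample a UUT output pin
            results.append((pin, pins.get(pin, 0)))
    return results

print(run_test(TEST_1, {}))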

--

Rick C.

---++ Get 1,000 miles of free Supercharging
---++ Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Nov 7, 2022, 8:15:16 PM11/7/22
to
That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first, detect the start bit which lands the state machine in the middle of said start bit which then times to the middle of all subsequent bits (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need to have more hardware to time to the end of the stop bit. This might be present, for other purposes, but it should not be used to control looking for the start bit. This is by definition of the async protocol, to use the stop bit time to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit, is defective by definition and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter clock.

I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of asynchronous format. This would result in a grosser offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.


> Yes, the "robust" design will allow for a short stop bit, but you can't
> count on all serial adaptors allowing for it.

There's always garbage designs. I'm surprised I never ran into one. I guess being crystal controlled, there was never enough error to add up to a bit.


> Part of the problem is that (at least as far as I know) the Asynchronous
> Serial Format isn't actually a "Published Standard", but just an
> de-facto protocol that is simple enough that it mostly just works, but
> still hides a few gotchas for corner cases.

True, but anyone designing chips should understand what they are designing. If they don't, you get garbage. I learned that lesson in a class in school where I screwed up a detail on a program I wrote, because I didn't understand the spec. I've always tried to ask questions since and even if they seem like stupid questions, I don't read the specs wrong.

--

Rick C.

--+-- Get 1,000 miles of free Supercharging
--+-- Tesla referral code - https://ts.la/richard11209

David Brown

unread,
Nov 8, 2022, 3:02:38 AM11/8/22
to
There's nothing wrong with curiosity. However, I have no doubt that you
heard wrong, or heard about different FTDI devices, or that your source
heard wrong. FTDI have been making these things for a couple of
decades, since the earliest days of USB. You can be sure they are
hardware peripherals, not software.

For /you/, and /your/ designs in FPGAs, adding a small processor can be
a good solution. The balance is different for ASICs and for dedicated
silicon, and it is different now than it was when FTDI made their MPSSE
block for use in their devices.

Really, we are not talking about a peripheral that is much more advanced
than common serial communication blocks. It multiplexes a UART, an SPI
and an I²C on the same pins. That's it. You don't bother with a
processor and software for that.

FTDI /do/ make devices using embedded processors, with a few different
types (I forget which - perhaps Tensilica cores). But those are other chips.

Richard Damon

unread,
Nov 8, 2022, 6:54:59 AM11/8/22
to
Depends on how you design it. IF you start a counter at the leading edge
of the start bit and then detect the counter at its middle value, then
the stop bit ends when the counter finally expires at the END of the
stop bit.

>
> I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of asynchronous format. This would result in a grosser offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.

IF you don't start the looking for the start bit until the time has
passed for the END of the stop bit, and the receiver is 0.1% slow, then
every bit you lose 0.1% of a bit, or 1% per character, so after 50
consecutive characters you are 1/2 a bit late, and getting errors.
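
Put as arithmetic (assuming 10-bit characters, i.e. start + 8 data + stop):

# If the receiver only resumes hunting for a start bit at the END of the
# stop bit, lateness accumulated during a character is never given back.
bits_per_char  = 10                            # start + 8 data + stop (assumed)
clock_error    = 0.001                         # receiver 0.1% slower than sender
drift_per_char = bits_per_char * clock_error   # 0.01 of a bit late per character
chars_to_fail  = 0.5 / drift_per_char          # half a bit late and sampling breaks
print(chars_to_fail)                           # 50 back-to-back characters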

>
>
>> Yes, the "robust" design will allow for a short stop bit, but you can't
>> count on all serial adaptors allowing for it.
>
> There's always garbage designs. I'm surprised I never ran into one. I guess being crystal controlled, there was never enough error to add up to a bit.

As I pointed out, 0.1% means 50 characters and 0.001% means 5000
characters; with a long enough string of characters you eventually hit
the problem.

If you only use short messages, you never have a problem.

>
>
>> Part of the problem is that (at least as far as I know) the Asynchronous
>> Serial Format isn't actually a "Published Standard", but just an
>> de-facto protocol that is simple enough that it mostly just works, but
>> still hides a few gotchas for corner cases.
>
> True, but anyone designing chips should understand what they are designing. If they don't, you get garbage. I learned that lesson in a class in school where I screwed up a detail on a program I wrote, because I didn't understand the spec. I've always tried to ask questions since and even if they seem like stupid questions, I don't read the specs wrong.
>

The problem is that if you describe the sampling as "Middle of bit",
then going to the end of the stop bit makes sense.

If you are adding functionality like RS-485 control that needs to know
when that end of bit is, and it is easy to forget that the receiver has
different needs than the transmitter.

Richard Damon

unread,
Nov 8, 2022, 6:58:30 AM11/8/22
to
The key is that if it is specified to have a quick disable-at-end-of-
transmission capability, then you can count on that, and not leave it
up to the speed of the program to turn off the transmitter.

Sometimes we hit a blurry line between what is really a general purpose
computer and what is a FSM doing an operation.

Ultimately, we need to look at the specifications of performance to
decide what we need to do.

Rick C

unread,
Nov 8, 2022, 8:50:27 AM11/8/22
to
On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
> On 11/7/22 8:15 PM, Rick C wrote:
> > On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
> >> YOU may consider it a design flaw, but I have seen too many serial ports
> >> having this flaw in them to just totally ignore it.
> >
> > That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first, detect the start bit which lands the state machine in the middle of said start bit which then times to the middle of all subsequent bits (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need to have more hardware to time to the end of the stop bit. This might be present, for other purposes, but it should not be used to control looking for the start bit. This is by definition of the async protocol, to use the stop bit time to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit, is defective by definition and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter clock.
> Depends on how you design it. IF you start a counter at the leading edge
> of the start bit and then detect the counter at its middle value, then
> the stop bit ends when the counter finally expires at the END of the
> stop bit.

There is still some extra logic to distinguish the condition. There is a bit timing counter, and a counter to track which bit you are in. Everything happening in the operation of the UART is happening at the middle of a bit. Then you need extra logic to distinguish the end of a bit.


> > I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of asynchronous format. This would result in a grosser offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
> IF you don't start the looking for the start bit until the time has
> passed for the END of the stop bit, and the receiver is 0.1% slow, then
> every bit you lose 0.1% of a bit, or 1% per character, so after 50
> consecutive characters you are 1/2 a bit late, and getting errors.

There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.

I recall the Intel USART was such a part, due to other technical flaws. So they finally came out with a new version that fixed the problems.


> >> Yes, the "robust" design will allow for a short stop bit, but you can't
> >> count on all serial adaptors allowing for it.
> >
> > There's always garbage designs. I'm surprised I never ran into one. I guess being crystal controlled, there was never enough error to add up to a bit.
> As I pointed out, 0.1% means 50 characters and 0.001% means 5000
> characters; with a long enough string of characters you eventually hit
> the problem.
>
> If you only use short messages, you never have a problem.

You mean if you have gaps with idle time.


> >> Part of the problem is that (at least as far as I know) the Asynchronous
> >> Serial Format isn't actually a "Published Standard", but just an
> >> de-facto protocol that is simple enough that it mostly just works, but
> >> still hides a few gotchas for corner cases.
> >
> > True, but anyone designing chips should understand what they are designing. If they don't, you get garbage. I learned that lesson in a class in school where I screwed up a detail on a program I wrote, because I didn't understand the spec. I've always tried to ask questions since and even if they seem like stupid questions, I don't read the specs wrong.
> >
> The problem is that if you describe the sampling as "Middle of bit",
> then going to the end of the stop bit makes sense.

Sorry, you are not clear. This doesn't make sense to me. What is "going to the end of the stop bit"?


> If you are adding functionality like RS-485 control that needs to know
> when that end of bit is, and it is easy to forget that the receiver has
> different needs than the transmitter.

???

--

Rick C.

--+-+ Get 1,000 miles of free Supercharging
--+-+ Tesla referral code - https://ts.la/richard11209

Clifford Heath

unread,
Nov 8, 2022, 6:45:14 PM11/8/22
to
On 9/11/22 00:50, Rick C wrote:
> On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
>> IF you don't start the looking for the start bit until the time has
>> passed for the END of the stop bit, and the receiver is 0.1% slow, then
>> every bit you lose 0.1% of a bit, or 1% per character, so after 50
>> consecutive characters you are 1/2 a bit late, and getting errors.
>
> There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.


Yeah, but you can still insist that the stop bit fills 99%, or 90% of
the required time, and not get that pathology.

This is a branch of the principle "be rigorous in what you produce,
permissive in what you accept". I've personally moved away from that
principle - I think being permissive too often just masks problems until
they re-occur downstream but cannot be diagnosed there. So I'm much more
willing to reject bad input (or to complain but still accept it) early on.

CH

Rick C

unread,
Nov 8, 2022, 7:46:05 PM11/8/22
to
On Tuesday, November 8, 2022 at 7:45:14 PM UTC-4, Clifford Heath wrote:
> On 9/11/22 00:50, Rick C wrote:
> > On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
> >> IF you don't start the looking for the start bit until the time has
> >> passed for the END of the stop bit, and the receiver is 0.1% slow, then
> >> every bit you lose 0.1% of a bit, or 1% per character, so after 50
> >> consecutive characters you are 1/2 a bit late, and getting errors.
> >
> > There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.
> Yeah, but you can still insist that the stop bit fills 99%, or 90% of
> the required time, and not get that pathology.

I'm not clear on what you are saying. The larger the clock difference, the earlier the receiver has to look for the start bit. It will work just fine with the start bit check being delayed until the end of the stop bit, as long as the timing clocks aren't offset in one direction. Looking for the start bit in the middle of the stop bit gives a total of 5% tolerance, pretty much taking mistiming out of the list of problems for async data transmission. Drop that to 0.05% (your 99% example) and you are in the realm of crystal timing error on the two systems, ±250 ppm.
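
The 5% figure is just the half-bit margin spread over one character (10-bit characters assumed here, and the start-edge sampling quantization is ignored for simplicity):

bits_per_char = 10                    # start + 8 data + stop (assumed)
margin_bits   = 0.5                   # hunting resumes at mid-stop-bit
print(margin_bits / bits_per_char)    # 0.05 -> about 5% combined clock mismatch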

--

Rick C.

--++- Get 1,000 miles of free Supercharging
--++- Tesla referral code - https://ts.la/richard11209

Clifford Heath

unread,
Nov 9, 2022, 6:32:25 AM11/9/22
to
Go back to the first words I quoted from Richard:

"
IF you don't start the looking for the start bit until the time has
passed for the END of the stop bit, and the receiver is 0.1% slow, then
every bit you lose 0.1% of a bit
"

But if you wait until 95% of the stop bit time, and allow a new start
bit to come early by 5%, then it doesn't matter if "the receiver is 0.1%
slow" and you don't lose sync; the 5% early doesn't mount up over "50
consecutive characters".

Same if you wait 99% and the new start bit is only 1% early.

So your "There you go! You have just proven..." was a bogus situation
proposed by Richard, that's trivially avoided, and basically all actual
UARTs will do that,

Rick C

unread,
Nov 9, 2022, 7:53:18 AM11/9/22
to
If you cherry pick your numbers, you can make anything work. Looking for a start bit at the middle of the stop bit gives you the ±5% tolerance of timing. If you delay when you start looking for a start bit, you reduce this tolerance. So, in that case, if you are happy to provide a ±0.1% tolerance clock under all conditions, then sure, you can look for the start bit later. In the real world, there are users who expect a UART to work the way it is supposed to work, and use less accurate timing references than a crystal. This UART won't work for them and that would become known to users in general. While a claim has been made that such UARTs exist, no one has provided information about one.

I would also point out that the above timing analysis is not actually worst case since it does not take into account the 1/16th or 1/8th bit jitter from the first character start bit detection. So the requirements on the timing reference are even tighter when using the sloppy timing for start bit checking.

Richard Damon

unread,
Nov 9, 2022, 11:46:03 PM11/9/22
to
On 11/8/22 8:50 AM, Rick C wrote:
> On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
>> On 11/7/22 8:15 PM, Rick C wrote:
>>> On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
>>>> YOU may consider it a design flaw, but I have seen too many serial ports
>>>> having this flaw in them to just totally ignore it.
>>>
>>> That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first, detect the start bit which lands the state machine in the middle of said start bit which then times to the middle of all subsequent bits (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need to have more hardware to time to the end of the stop bit. This might be present, for other purposes, but it should not be used to control looking for the start bit. This is by definition of the async protocol, to use the stop bit time to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit, is defective by definition and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter clock.
>> Depends on how you design it. IF you start a counter at the leading edge
>> of the start bit and then detect the counter at its middle value, then
>> the stop bit ends when the counter finally expires at the END of the
>> stop bit.
>
> There is still some extra logic to distinguish the condition. There is a bit timing counter, and a counter to track which bit you are in. Everything happening in the operation of the UART is happening at the middle of a bit. Then you need extra logic to distinguish the end of a bit.

Nope, the simplest logic is to have your 8x sub-bit counter start at 0
and count up from the leading edge of the start bit; on count values 3,
4, and 5 you sample the bit for noise detection, then roll over from 7
to 0 and count through the next bit. You stop the counter when it rolls
from 7 to 0 in the stop bit, i.e. once it has counted past the end of
the stop bit.
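
A behavioral model of that timing, in Python rather than gates (majority-of-three voting is my assumption for how the three noise-detection samples get combined; purely illustrative):

def sample_bit(rx_samples):
    # rx_samples: 8 oversampled line values for one bit period (counter 0..7).
    # Take the line at counts 3, 4 and 5 and majority-vote them.
    votes = rx_samples[3] + rx_samples[4] + rx_samples[5]
    return 1 if votes >= 2 else 0

def receive_char(oversampled, bits=8):
    # oversampled: line samples at 8x the bit rate, starting at the leading
    # edge of the start bit.  Returns (data_bits, framing_ok).
    start_ok = sample_bit(oversampled[0:8]) == 0          # start bit must be low
    data = [sample_bit(oversampled[8*(1+i):8*(2+i)]) for i in range(bits)]
    stop_ok = sample_bit(oversampled[8*(1+bits):8*(2+bits)]) == 1
    return data, (start_ok and stop_ok)

# example: start bit, data 0b01010011 sent LSB first, stop bit
wave = [0]*8 + sum(([b]*8 for b in [1, 1, 0, 0, 1, 0, 1, 0]), []) + [1]*8
print(receive_char(wave))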

>
>
>>> I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of asynchronous format. This would result in a grosser offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
>> IF you don't start the looking for the start bit until the time has
>> passed for the END of the stop bit, and the receiver is 0.1% slow, then
>> every bit you lose 0.1% of a bit, or 1% per character, so after 50
>> consecutive characters you are 1/2 a bit late, and getting errors.
>
> There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.

Except that we have bought many USB serial ports with just this flaw in
them.

So I guess the nobody actually exists.

Seem to be based on an FTDI chip, but maybe just a "look alike", where
they did bare minimum design work.

The key point is that very few applications actually do have very long
uninterrupted sequences of characters, and typical PCs will tend to
naturally add small spaces just because the OS isn't that great. Doesn't
require much to fix the issue.

Rick C

unread,
Nov 10, 2022, 12:42:11 AM11/10/22
to
On Thursday, November 10, 2022 at 12:46:03 AM UTC-4, Richard Damon wrote:
> On 11/8/22 8:50 AM, Rick C wrote:
> > On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
> >> On 11/7/22 8:15 PM, Rick C wrote:
> >>> On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
> >>>> YOU may consider it a design flaw, but I have seen too many serial ports
> >>>> having this flaw in them to just totally ignore it.
> >>>
> >>> That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first, detect the start bit which lands the state machine in the middle of said start bit which then times to the middle of all subsequent bits (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need to have more hardware to time to the end of the stop bit. This might be present, for other purposes, but it should not be used to control looking for the start bit. This is by definition of the async protocol, to use the stop bit time to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit, is defective by definition and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter clock.
> >> Depends on how you design it. IF you start a counter at the leading edge
> >> of the start bit and then detect the counter at its middle value, then
> >> the stop bit ends when the counter finally expires at the END of the
> >> stop bit.
> >
> > There is still some extra logic to distinguish the condition. There is a bit timing counter, and a counter to track which bit you are in. Everything happening in the operation of the UART is happening at the middle of a bit. Then you need extra logic to distinguish the end of a bit.
> Nope, the simplest logic is to have your 8x sub-bit counter start at 0
> and count up from the leading edge of the start bit; on count values 3,
> 4, and 5 you sample the bit for noise detection, then roll over from 7
> to 0 and count through the next bit. You stop the counter when it rolls
> from 7 to 0 in the stop bit, i.e. once it has counted past the end of
> the stop bit.

You've conveniently left out a significant amount of logic.

Detecting specific states of the sub-bit counter uses more logic than any other function. Most UARTs use 16 sub-samples and so have a 4 bit counter. Counters have a carry chain built in, so the carry out is a free zero count detector.

Counters are most efficient in terms of implementation when done as down counters, with various preloads. The counter is loaded with the half bit count while waiting for the leading edge of the start bit. The same zero detection (carry out) that triggers the next load is also the bit center mark. All loads during an active character will load a full bit count (different in the msb only). Every zero detect will mark a bit center. To get to the end of the final stop bit would require loading the counter with another half bit count, so extra logic. More than anything, why would anyone want to think about adding the extra half bit count when it's not part of any requirement?
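
Modeling just the counter scheme I'm describing (Python, 16x oversampling, purely illustrative -- not the actual FPGA code):

OVERSAMPLE = 16                     # 16 sub-bit clocks per bit (assumed)

def rx_tick_centres(total_bits=10):
    # Yield the sub-bit tick numbers, counted from the start-bit leading edge,
    # at which a down-counter UART samples bit centres.  The counter is
    # preloaded with half a bit to find the middle of the start bit, then with
    # a full bit for every following bit; each zero/borrow marks a bit centre.
    tick = 0
    load = OVERSAMPLE // 2          # half-bit preload while hunting the edge
    for _ in range(total_bits):
        tick += load                # count down 'load' ticks to the next zero
        yield tick                  # zero detect == centre of this bit
        load = OVERSAMPLE           # all later reloads are a full bit
    # The last yield lands at the MIDDLE of the stop bit, which is exactly
    # where start-bit hunting resumes -- no extra half-bit load needed.

print(list(rx_tick_centres()))      # 8, 24, 40, ... centres of start, data, stop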


> >>> I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of asynchronous format. This would result in a grosser offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
> >> IF you don't start the looking for the start bit until the time has
> >> passed for the END of the stop bit, and the receiver is 0.1% slow, then
> >> every bit you lose 0.1% of a bit, or 1% per character, so after 50
> >> consecutive characters you are 1/2 a bit late, and getting errors.
> >
> > There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.
> Except that we have bought many USB serial ports with just this flaw in
> them.

Oh, you mean the Chinese UARTs that most people won't touch because they are full of flaws! Got it.

I was talking about real UARTs that people use in real designs. I used to buy CH340 based USB cables for work. But we eventually figured out that they were unreliable and I only use FTDI cables now. The CH340 cables seemed to work, but would quit after an hour or two or three.


> So I guess the nobody actually exists.
>
> Seem to be based on an FTDI chip, but maybe just a "look alike", where
> they did bare minimum design work.

There are lots of clones. If you have an FTDI chip with this stop bit problem, I'd love to see it. I think FTDI would love to see it too.


> The key point is that very few applications actually do have very long
> uninterrupted sequences of characters, and typical PCs will tend to
> naturally add small spaces just becuase the OS isn't that great. Doesn't
> require much to fix the issue.

The key point is that a company like FTDI is not going to sell such crap. "Fixing" such issues is only possible if you have control over the system. Not everyone is designing a system from scratch. My brother's company makes a device that interfaces to a measurement device outputting data periodically. For who knows what reason, that company changed the product so it stopped outputting the headers. So a small box was made to add the headers every few lines. The UARTs in it just have to work correctly, since there's no option to modify any other piece of equipment. If they don't work correctly, they get pulled and they use other equipment, and the original maker gets a black eye. Enough black eyes and people don't buy that equipment anymore.

--

Rick C.

--+++ Get 1,000 miles of free Supercharging
--+++ Tesla referral code - https://ts.la/richard11209