OK. I have always associated "multi-drop" with multiple receivers /and/
transmitters - I have never come across a need for multiple receivers on
a serial bus without them also needing to transmit (such as in your
case), or a distinction between "multi-drop" meaning multiple receivers
and "multi-point" meaning multiple transmitters.
The term "multi-drop" is more commonly taken to mean "multiple devices
connected directly to the same bus, transmitting and receiving". The
bus has no explicit direction on the electrical connections. Examples
include RS-485, CAN, co-ax Ethernet.
"Multi-point" is more general and can be any kind of network where there
are multiple nodes that can send and receive to all other nodes. That
would include a switched Ethernet network as well as the subclass of
"multi-drop" networks.
But whatever the terms, I think we agree on how RS-422 works.
>
>> Of course the same driver chips can be used in different combinations of
>> wiring and drive enables. An RS-422 driver chip can be viewed as two
>> RS-485 driver chips - alternatively, a RS-485 driver can be viewed as an
>> RS-422 driver with the two differential pairs connected together.
>> Really, all you are talking about is a differential driver and a
>> differential receiver.
>
> Sure, but the point is, nothing in RS-422 precludes multiple receivers, and in fact, every reference I've found (not paying for the actual spec) shows multi-drop receivers.
>
Yes, it seems that is entirely possible. The only use I have seen for
RS-422 is as a kind of long-range alternative to RS-232. And the only
use I have seen for multiple receivers is - like for RS-232 - for
monitoring and debugging communication.
Still, multiple receivers are not going to help you in your testbench
unless they can also transmit.
For RS-485, my usage has usually been quite slow (9600 baud is very
common). Other colleagues have used faster rates. But as I said, it is
the slow baud rates that are at higher risk.
However, even without knowing the exact implementation details of all
UART hardware, I think you are wrong. There are two "finished byte" signals
that are common in UART transmission hardware.
The first is "transmit buffer empty" which is set when a byte is
transferred from the buffer into the transmitter shift register - most
UARTs are at least double-buffered to improve flow. This signal comes a
whole character before the end of the transmission - it is useful for
the software, but not the hardware. If you have a transmitter that is
not double-buffered, this signal would likely come at the beginning of
the stop bit, or at the end of the stop bit (depending on how the state
machines were made).
The second is "transmission complete", which is set at the /end/ of the
stop bit sent out on the line. That's when you know everything has been
sent - software can move on, and hardware can turn off the driver.
I cannot imagine why anyone would design transmission hardware that had
a special signal or disabled a driver in the /middle/ of the stop bit.
That makes no sense, and would have no use in software or hardware.
That is definitely an imagined problem.
(For reference, the FTDI datasheets show that the TXDEN output is
activated one bit before the start bit - so that the start bit is a 1 to
0 transition, as required for UARTs - and deactivated at the end of the
stop bit.)
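To make the timing concrete, here is a small sketch (Python, with function names of my own invention, assuming the common 8N1 format) of when "transmission complete" fires relative to the start edge:

```python
# Sketch of UART frame timing (my own names, assuming 8N1 - adjust
# the parameters for other formats).

def frame_bits(data_bits=8, parity=False, stop_bits=1):
    """Total bits on the wire per character: start + data (+ parity) + stop."""
    return 1 + data_bits + (1 if parity else 0) + stop_bits

def transmission_complete_time(baud, **fmt):
    """Seconds from the falling edge of the start bit to the *end* of the
    stop bit - the point where "transmission complete" is raised and a
    driver can safely be disabled."""
    return frame_bits(**fmt) / baud

# At 9600 baud, 8N1 (10 bits on the wire) -> about 1.04 ms per character.
print(round(transmission_complete_time(9600) * 1e3, 2))  # -> 1.04
```

"Transmit buffer empty", by contrast, fires a whole frame time earlier on a double-buffered UART.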
You are correct that reception is in the middle of the stop bit
(typically sub-slot 9 of 16). The first transmitter will be disabled at
the end of the stop bit, and the next transmitter must not enable its
driver until after that point - it must wait at least half a bit time
after reception before starting transmission. (It can wait longer
without trouble, which is why faster baud rates are less likely to
involve any complications here.)
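As a rough sketch of those numbers (Python, my own names; the 9-of-16 sample point is the typical value mentioned above):

```python
# Turnaround timing sketch (my own names).  The receiver samples the
# stop bit around its middle - sub-slot 9 of 16 here - while the
# transmitter keeps driving until the end of the stop bit.

def stop_bit_remaining(baud, sample_slot=9, slots=16):
    """Seconds the first driver is still on after the mid-bit sample."""
    return (slots - sample_slot) / slots / baud

def min_turnaround_wait(baud):
    """Wait at least half a bit time after reception before enabling
    the next driver, so the old one is guaranteed off."""
    return 0.5 / baud

print(stop_bit_remaining(1_000_000))   # -> 4.375e-07 (s, at 1 Mbaud)
print(min_turnaround_wait(1_000_000))  # -> 5e-07 (half a bit, 0.5 us)
```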
>
> None of this matters to me really. I'm going to use more wires, and do the multi-drop from the PC to the slaves on one pair and use RS-422 to multi-point from the slaves to the PC. Since the slaves are controlled by the master, they will never collide. The master can't collide with itself, so I can ignore any issues with this. I will use the bias resistors to assure a valid idle state. I may need to select different devices than the ones I use in the product. I think there are differences in the input load and I want to be sure I can chain up to 32 units.
>
OK. I have no idea what such a hybrid bus should technically be called,
but it should work absolutely fine for the purpose - it seems like a
solid solution. I would not foresee any issues with 32 nodes on
such a bus, especially if it is relatively short and you have
terminators at each end.
(You still have to consider the latencies and timings to see if you can
get enough messages through the system fast enough, but you won't see
bus collisions. Consider broadcasts or multicast messages without
replies as a way of avoiding latency.)
>
>> I would expect there to be many alternatives to FTDI that work similarly
>> well, but that's the ones we generally use.
>>
>> <
https://ftdichip.com/product-category/products/cables/?series_products=55>
>>>
>>>> The reception of the last byte from a slave is not finished until
>>>> the stop bit has been properly received by the master - that means
>>>> at least half-way through the sending of the stop bit.
>>>
>>> That's not sufficient. Everyone's halfway is a bit different and
>>> start bit detection may not be enabled on some device when the next
>>> driver outputs a start bit, or the last driver may not be turned off
>>> when the next driver starts.
>>>
>> "At least half-way" means "at least 50% of the bit time". As long as
>> the start bit from the next message is not sent until at least 50% of a
>> bit time after the stop bit is detected, it will not conflict and all
>> listening devices will be ready to see the start bit. (Devices that
>> needed two stop bits haven't existed in the last 50 years.)
>
> You don't seem to understand that there is nothing timing from the start of the bit. The timing is from the first detected low of the start bit. From there, all timing is done by an internal clock. Check the math, you don't get 50% of the stop bit, guaranteed. That's why they call it "asynchronous" serial.
>
The beginning of the start bit is detected at the receiver by its
falling edge. It is /confirmed/ by samples in the middle (or the
falling edge gets rejected as noise), but all timing is done from that
start time - not from the middle of any bits.
It is called "asynchronous" because the transmitter and the receiver do
not have any pre-agreed or external synchronisation regarding when the
transmission is going to happen. But once it starts, they agree exactly
on /when/ it starts (assuming a short enough bus that rise times and
transmission line delays are negligible).
I must admit that I have been assuming that you have reasonable quality
clock references on each side of the communication, so that your baud
rates match. In theory you have a total of nearly 5% error budget for
mismatched baud rates, line rise and fall delays, etc. These all affect
the time between the receiver recognising the stop bit and the
transmitter finishing sending it - anywhere from almost 0 to almost 1
bit time (typically 2/16 to 14/16 of a bit time).
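The roughly 5% figure can be derived like this (a sketch under the assumption of a 10-bit 8N1 frame with the last sample mid-stop-bit):

```python
# Rough derivation of the error budget (my own names, assuming an
# 8N1 frame: 10 bits, last sample in the middle of the stop bit).

def max_total_mismatch(total_bits=10, margin_bits=0.5):
    """Maximum combined timing error (clock mismatch at both ends plus
    edge distortion), as a fraction of the baud rate, for the last
    sample to still land inside the stop bit."""
    last_sample = total_bits - 0.5   # bit times from the start edge
    return margin_bits / last_sample

print(round(max_total_mismatch() * 100, 2))  # -> 5.26 (% total budget)
```

Split between the two ends, that is about 2.5% clock tolerance each, before accounting for line effects.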
>
>> You asked specifically about bus turnaround at the host side - I assume
>> that is because on the slave devices, you have control of the drive
>> enables and bus turnaround happens with negligible latency.
>
> I know the master has the most trouble with this. The slaves tend to not have a problem because they are operated by MCUs and can wait a bit time before replying, or even a character time. I suppose they don't have any magic on turning off the driver though, but early is the easy way and generally doesn't cause a problem. The master has trouble on both ends of it's message, needing to be careful to not turn on the driver too soon and not turning it off too late to clobber the reply.
>
PCs are not good at accurate short delays, but have no problem making a
delay of at least a given length. There is no excuse for a PC
program turning on the driver too soon - even if it were not handled
automatically by the hardware, adding a "sleep" call to get a minimum
delay is basic stuff. In the old days (I remember doing this stuff on
16-bit Windows) it was hard to get a reliable delay that was shorter
than about 20 ms, but even then it was possible. The bigger challenge
with "manual" driver enable control in PC software is being sure you
turn the driver off fast enough, before the other end replies.
However - and I know I am repeating myself - the answer is to get a
decent USB to RS-485 converter that does this correctly and
automatically in hardware.
As for delays before replying (or before sending a new message from the
master), we have only talked about them in regard to bus drivers. It is
standard practice to have an additional delay beyond the minimum, as it
gives a bit of extra leeway and makes debugging easier - you can see the
start and stop of the messages on an oscilloscope. Modbus RTU, for
example, specifies an inter-frame silence time of at least 3.5 character
times.
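For example, the 3.5 character times work out like this (a sketch; the fixed 1.75 ms above 19200 baud is what the Modbus serial line spec recommends):

```python
# Modbus RTU inter-frame silence (t3.5) as a worked example.  An RTU
# character is 11 bits on the wire (start + 8 data + parity + stop);
# above 19200 baud the spec recommends a fixed 1.75 ms.

def modbus_t35(baud, bits_per_char=11):
    """Minimum inter-frame silence in seconds."""
    if baud > 19200:
        return 1.75e-3
    return 3.5 * bits_per_char / baud

print(round(modbus_t35(9600) * 1e3, 2))    # -> 4.01 (ms at 9600 baud)
print(modbus_t35(115200))                  # -> 0.00175 (fixed above 19200)
```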
>
>>>> Then there is a delay before the data gets sent back to the host
>>>> PC, a delay through the kernel and drivers before it reaches the
>>>> user program, time for the program to handle that message, time for
>>>> it to prepare the next message, delays through the kernel and
>>>> drivers before it gets to the USB bus, latency in the USB device
>>>> that receives the USB message and then starts transmitting. There
>>>> can be no collision unless all that delay is less than half a bit
>>>> time. And no matter how fast your computer is, you are always going
>>>> to need at least one full USB polling cycle for all this, which for
>>>> USB 2.0 is 0.125 ms. That means that if you have a baud rate of 16
>>>> kbaud or higher, there is no possibility of a collision.
>>>
>>> If your numbers are accurate, that might be ok, but I'm looking for
>>> data rates closer to 1 Mbps.
>> USB serial ports generally use the 48 MHz base USB reference frequency
>> as their source clock to scale down by a baud rate divisor, and common
>> practice is 16 sub-bit clocks per line bit (so that you can have
>> multiple samples for noise immunity). Thus baud rates of integer
>> divisions of 3 MBaud are common. Certainly the FTDI chips handle 1, 2
>> and 3 MBaud. (I haven't had need of such speeds with RS-485, but have
>> happily used the common 3v3 TTL cables at 3 MBaud.)
>
> At some point you have to worry with the line waveforms. So too fast can cause problems when using *lots* of receivers.
>
Yes. But I don't think you have a physically long bus, do you? 10
meters, maybe? 3 MBaud and 32 nodes should be fine.
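A quick back-of-envelope check (the 5 ns/m propagation figure is my assumption - roughly 0.66c, typical for twisted-pair cable):

```python
# Rough check that a ~10 m bus is electrically "short" at 3 MBaud.
# The 5 ns/m propagation figure is an assumption (roughly 0.66c,
# typical for twisted-pair cable).

def propagation_delay(length_m, ns_per_m=5.0):
    """One-way propagation delay along the cable, in seconds."""
    return length_m * ns_per_m * 1e-9

def bit_time(baud):
    return 1.0 / baud

# 10 m -> 50 ns one way, against a ~333 ns bit at 3 MBaud:
print(round(propagation_delay(10) / bit_time(3_000_000), 2))  # -> 0.15
```

With terminators at each end, a delay that is a small fraction of a bit time like this leaves the sampling points comfortably clean.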
>
>>> Admittedly, I have not done an analysis
>>> of what will actually be required, but 128 UUT, or possibly 256, can
>>> do a lot of damage to a shared bus. At 1 Mbps, 128 UUT results in an
>>> effective bit rate maximum of 7.8 kbps. With 256 UUTs, that's 3.9
>>> kbps. No, I don't think this will work properly at much slower
>>> speeds than 1 Mbps. At 16 kbps, the effective rate to each UUT is
>>> just 62.5 bps, not kbps.
>>>
>> As long as you are /above/ 16 kbaud, you should be fine (at the PC
>> side). At 1 Mbaud, you do not need to worry about the PC starting a new
>> telegram before the last received stop bit is completed.
>
> Not entirely. The master has to turn *off* the driver before the slave replies. At higher speeds that's a problem. But it all depends on how it is being done. This is why I'm going with two busses, one for master transmit and one for master input.
>
Unless you are masochistic or stuck in the last century, the driver
turnoff is done in hardware by the USB to RS-485 converter, not by a PC
program in software.
(I think for several reasons your hybrid bus is a better choice than a
single RS-485 bus - though I would still prefer to look at a
hierarchical setup myself.)
There are /always/ delays - in particular at the PC side. PCs are good
at high throughput, but bad at low latency. If they are not a
problem, then that's fine.
>
>>>> When we have made testbenches that required serial communication
>>>> to multiple parallel devices, we typically put a USB hub in the
>>>> testbench and use multiple FDTI USB to serial cables. You only make
>>>> one (or possibly a few) of the testbenches - it's much cheaper to
>>>> use off-the-shelf parts than to spend time designing something
>>>> more advanced. You can buy a /lot/ of hubs and USB cables for the
>>>> price of the time to design, build and program a custom card for
>>>> the job. It also makes the system more scalable, as the
>>>> communication to different devices runs in parallel.
>>>
>>> USB hubs are a last resort. I've found many issues with such
>>> devices, especially larger than 4 ports.
>>>
>> We find they work fine - I have very rarely seen any issues with
>> off-the-shelf hubs, regardless of the number of ports. (They are almost
>> all made with 1-to-4 hub chips, which is why hubs are often found in
>> sizes of 4 ports, 7 ports, or 10 ports.)
>
> Exactly, and I find combining them like that has issues.
>
Experiences vary, I guess.
>
>> A key complication with multiple serial ports on hubs is if you are
>> using Windows, it can be a big pain to keep consistent numbering for the
>> serial ports. You may have to use driver-specific libraries (like
>> FTDI's DLL's) to check serial numbers and use that information. It's
>> far easier on Linux where you can make a udev configuration file that
>> gives aliases to your ports ordered by physical tree address.
>
> Yet another reason to avoid such complications. The reality is there's no gain. The multi-drop is the right way to go here.
>
You see a complication where I see a simple configuration. And if you
need to use multiple serial ports on a single PC, Linux and a udev
configuration is a /huge/ gain. I currently have 7 serial ports in use
on my development PC, connected to debug ports (TTL UARTs) on various
boards. /dev/ttySerialPort_2_3 for hub 2 port 3 is vastly
superior to "COM74" on a Windows system. (I have no idea if you are
using Windows or Linux on your controlling PC here.)
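For reference, a udev rule for this kind of aliasing looks roughly like the following - the rules file name and the ID_PATH value are made-up examples; you would substitute the path that udevadm reports for your own hub and port:

```
# /etc/udev/rules.d/99-serial-aliases.rules  (made-up file name)
# Alias a USB serial port by its physical position on the USB tree.
# The ID_PATH value below is an invented example - find the real one
# for your port with:  udevadm info /dev/ttyUSB0
SUBSYSTEM=="tty", ENV{ID_PATH}=="pci-0000:00:14.0-usb-0:2.3:1.0", SYMLINK+="ttySerialPort_2_3"
```

The alias survives replugging and reboots because it is tied to the physical USB tree position, not to enumeration order.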
>
>>>> We have also done systems where there is a Raspberry Pi driving the
>>>> hub and multiple FTDI converters. The PC is connected to the Pi by
>>>> Ethernet (useful for galvanic isolation), and the Pi runs
>>>> forwarders between the serial ports and TCP/IP ports.
>>>
>>> There is a possibility of using an rPi on an Ethernet cable to the PC
>>> with direct comms to each test fixture board, but that's more work
>>> that I'm interested in.
>>>
>> Or you could use one Pi for a set of boards - whatever is physically
>> convenient.
>
> But it's yet another piece to keep working. Much easier to just use the multi-drop. I will keep that idea as a backup plan. But getting RS-422 on an rPi is a hassle. That would need to be a hat, or a shield or whatever they call daughter cards on rPis. Last time I checked, it was hard to find rPis. They are part of the unobtainium universe now, it seems.
>
Of course availability of parts is of prime concern these days, and
projects are often done by buying what you can and then designing around
the devices you have found.
Pis have USB - you do your RS-485, RS-422 or whatever on the Pi in
exactly the same way as you do it on the PC, using FTDI cables (or an
alternative supplier you are comfortable with). Plug and play.
It is about modularisation and scalability. Now, I don't know your
product, your manufacturing and test systems, your preferences, or
anything other than the information you've written here. But if our
production department asked us to make a test bench for handling 80
devices in parallel, my immediate reaction would be to refuse. I'd
design a testbench to handle 8, or some number of that order. Then I'd
get them to make perhaps 12 of these test benches. That way, they have
something scalable and maintainable. If one testbench breaks, they are
at 90% production capacity instead of 0%. If they need to increase
capacity, they can make a few more benches. If they want to spread
testing between two facilities, it's easy. So for /me/ and /my/
company, splitting things up in a hierarchy with Pis (or something
similar) has clear advantages. But you might have very different
priorities or a different organisation, giving different dynamics and
trade-offs.
>
>>>> To be fair, I don't recall any testbenches we've made that needed
>>>> more than perhaps 8 serial ports. If I needed to handle 80 lines, I
>>>> would probably split things up - a Pi handling 8-10 lines from a
>>>> local program, communicating with a PC master program by Ethernet.
>>>
>>> That's the advantage of the shared bus. No programming required,
>>> other than extending the protocol to move from "selecting" a device
>>> on the FPGA, to selecting the FPGA as well.
>>>
>> If you are familiar with socat, the Pi doesn't necessarily need any
>> programming either. (In our case we wanted some extra monitoring and
>> logging, which was more than we could get from socat - so it was a
>> couple of hundred lines of Python in the end.)
>
> A couple hundred lines I'd rather not write.
>
> Thanks for the comments.
>
Thanks for starting the threads here - it's nice to have a bit of real
discussion in this group that is often rather quiet.