
Two-wires RS485 and the driver enable problem


pozzugno

unread,
Oct 13, 2014, 7:58:42 AM10/13/14
to
I have a multi-drop two-wires RS485 bus. One node is the master and all
the others are slaves. The master is the only node that is authorized
to initiate a transmission, addressing one slave. The addressed slave
usually answers to the master.

The bus is half-duplex, so every node normally keeps its driver disabled.
Only the node that is transmitting data on the bus enables its driver, and
it disables it again as soon as it can, just after the last byte. An
interrupt (transmit complete) usually triggers when the last byte has been
completely shifted out, so the driver can be disabled immediately.

Of course, other interrupts can be triggered. What happens when
interrupt X (whatever) triggers just before the "transmit complete"
interrupt? The result is the ISR X is called, postponing the execution
of "transmit complete" ISR. The RS485 driver will be disabled with a
certain amount of delay. In the worst case, the driver could be
disabled with a delay that is the sum of the duration of all ISRs that
could trigger.
[In this scenario, I think of ISRs that can't be interrupted by a higher
priority interrupt.]

If a node on the bus is very fast and starts transmitting (the master)
or answering (one slave) immediately after receiving the last byte,
while the previously transmitting node is still executing other ISRs,
the final result is a corrupted transmission.

What is the solution? I think the only solution is to define, at the
design time, a minimum interval between the receiving of the last byte
from one node and the transmission of the first byte. This interval
could be in the range of 100 microseconds and should be calibrated on
the sum of duration of *all* ISRs of *all* nodes on the bus. It isn't a
simple calculation.

Moreover, implementing a short "software" delay in the range of a few
microseconds isn't a simple task. An empty loop on a decreasing
volatile variable is one solution, but the resulting delay isn't easy
to calculate at design time, and it depends on the compiler, compiler
settings, clock frequency and so on. Should a hardware timer be used
just for this pause?

How do you solve this problem?

[I know there are some microcontrollers that automatically (at the
hw level) toggle an output pin when the last byte is totally shifted
out, but I'm not using one of them and they aren't so common.]
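One way to answer the closing question: a free-running hardware timer plus unsigned subtraction gives a short delay that doesn't depend on compiler settings. A minimal sketch (all names are illustrative; on an AVR8 the read would be of a timer register such as TCNT1, here passed in as a function so the fragment is self-contained):

```c
#include <stdint.h>

/* Elapsed timer counts since 'start', assuming a free-running 16-bit
   timer. Unsigned subtraction gives the correct result even across one
   timer wraparound, so no rollover interrupt is needed for short delays. */
static inline uint16_t timer_elapsed(uint16_t start, uint16_t now)
{
    return (uint16_t)(now - start);
}

/* Busy-wait until 'ticks' timer counts have passed since 'start'.
   'read_timer' stands in for the real timer-register read. */
static void wait_ticks(uint16_t start, uint16_t ticks,
                       uint16_t (*read_timer)(void))
{
    while (timer_elapsed(start, read_timer()) < ticks)
        ;  /* spin; on real hardware the timer advances by itself */
}
```

The delay in microseconds is then just ticks divided by the timer tick rate, independent of the compiler's code generation.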

Wouter van Ooijen

unread,
Oct 13, 2014, 8:29:29 AM10/13/14
to
pozzugno schreef op 13-Oct-14 1:58 PM:
The minimum delay that you calculated is just that: a minimum. Apart
from performance, there is no problem in waiting somewhat longer. IMO you
must come up with two figures: the minimum response time (determined by
the maximum driver turn-off delay) and the maximum response time
(determined by the time you can afford to lose with the bus idling
between request and response).

As for the delay: My experience is that I need delays all over the
place. One approach is to have a free-running 64 bit counter (which will
roll over long after you are dead) and wait for it to exceed start+delay.

Wouter
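The free-running-counter delay described above can be sketched as follows (a minimal illustration; the counter is mocked and advanced inside the loop so the fragment is self-contained, whereas on real hardware it ticks by itself and the loop body stays empty):

```c
#include <stdint.h>

/* Mock of the free-running 64-bit nanosecond counter; on real hardware
   this would be a hardware counter or a counter maintained by an ISR. */
static uint64_t counter_ns = 0;

static uint64_t now_ns(void)
{
    return counter_ns;
}

/* Wait until the counter exceeds start + delay. */
static void delay_ns(uint64_t ns)
{
    uint64_t deadline = now_ns() + ns;
    while (now_ns() < deadline)
        counter_ns += 50;  /* test harness only: simulate a 50 ns tick */
}
```

Because 2^64 ns is several centuries, the deadline arithmetic never has to worry about rollover in practice.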

pozzugno

unread,
Oct 13, 2014, 8:38:41 AM10/13/14
to
Il 13/10/2014 14:29, Wouter van Ooijen ha scritto:
> The minimum delay that you calculated is just that: a minimum. Apart
> from performance, there is no problem in waiting somewhat longer. IMO you
> must come up with two figures: the minimum response time (determined by
> the maximum driver turn-off delay) and the maximum response time
> (determined by the time you can afford to lose with the bus idling
> between request and response).

Ok, so you're confirming the solution is a delay.


> As for the delay: My experience is that I need delays all over the
> place. One approach is to have a free-running 64 bit counter (which will
> roll over long after you are dead) and wait for it to exceed start+delay.

I usually use this approach for longer delays (milliseconds or seconds),
so I can increment the counter in an ISR that triggers every millisecond.
I don't like to fire an interrupt every 100 us.

64 bits appears too wide to me. Every time I need to read it and compare
it with the time, I have to disable interrupts.

David Brown

unread,
Oct 13, 2014, 8:47:41 AM10/13/14
to
You don't have to use the sum of all ISRs on all nodes as your minimum
time. You have to use the largest delay that could occur on any one of
the nodes, which is certainly not the same thing.

If possible, you can make the "transmission complete" interrupt a high
priority interrupt - if your microcontroller supports some sort of
nested interrupt scheme, then this will greatly reduce the latency of
the transmission complete interrupt and therefore the bus disable.

But the key point is that interrupt functions, or other areas of code
during which interrupts are disabled, should be as short, fast and
deterministic as possible. Don't do any "work" during interrupt
functions - or if you do, re-enable global interrupts first. Then it
makes little difference if an interrupt function is running when the
"transmission complete" triggers because the function will be complete
in a few microseconds.


pozzugno

unread,
Oct 13, 2014, 8:56:22 AM10/13/14
to
Il 13/10/2014 14:47, David Brown ha scritto:
> You don't have to use the sum of all ISR's on all nodes as your minimum
> time. You have to use the largest value of delay that could occur on
> any one of the nodes, which is certainly not the same thing.

Yes, I wanted to write the same thing, but my English isn't very good.

I wanted to stress that I have to *add up all the ISR execution times*
that could trigger in a single slave, and take the maximum value across
slaves. One slave could use just a single interrupt, but another could
use 10 interrupts (ADC, serial ports, I2C, timers, ...)


> If possible, you can make the "transmission complete" interrupt a high
> priority interrupt - if your microcontroller supports some sort of
> nested interrupt scheme, then this will greatly reduce the latency of
> the transmission complete interrupt and therefore the bus disable.

I'm using AVR8 and SAMD20 from Atmel. In AVR8 there is a "hard-wired"
priority scheme that can't be changed, and the TXC (transmit complete)
interrupt priority is very low. Anyway, an ISR can never be interrupted,
even by a higher priority interrupt event (the priority is only used if
there is more than one pending interrupt request at the same time).


> But the key point is that interrupt functions, or other areas of code
> during which interrupts are disabled, should be as short, fast and
> deterministic as possible. Don't do any "work" during interrupt
> functions -

I know, I know.


> or if you do, re-enable global interrupts first. Then it
> makes little difference if an interrupt function is running when the
> "transmission complete" triggers because the function will be complete
> in a few microseconds.

Some people consider the practice of re-enabling interrupts inside an
ISR to be The Devil :-)

http://betterembsw.blogspot.it/2014/01/do-not-re-enable-interrupts-in-isr.html


Richard Damon

unread,
Oct 13, 2014, 9:11:24 AM10/13/14
to
Yes, the best solution is to make it happen in hardware, but I
understand it isn't always available.

As you commented, the key is to define the maximum period after
transmission is complete that a unit is allowed to drive the line, and
thus the minimum time another unit must wait to respond.

By defining a maximum time that a unit can drive the bus, you have now
introduced a "Hard Real Time" requirement to the system. That normally
means that you need to be careful what is done in interrupt routines, so
no interrupt can mask off other interrupts for an extended time.

A second question is how much data is being sent and how fast responses
need to be. It may be possible to make the minimum reply delay long
enough that you don't really need to worry about "small" delays in the
transmitter.

A third question is whether the protocol can be made "fail-safe", so
that if an occasional unlucky combination of delays garbles the
beginning of the response with a slow disable, the system can detect
this and cause a retransmission (this is often good for handling other
types of errors too), or otherwise tolerate an occasional missing
response. In this case you just need to make sure that the responding
unit takes long enough to build its response (maybe with an added delay
to make sure) for the transmitter to normally get off the bus. Unless
you have vastly differing speeds of units, this will often just happen
to be true, since the receiving unit (especially if set to verify the
"stop bit") must first parse the message before replying, which is
normally a longer task than the sending unit's "shut off the driver"
operation.

Wouter van Ooijen

unread,
Oct 13, 2014, 9:28:17 AM10/13/14
to
pozzugno schreef op 13-Oct-14 2:38 PM:
> Il 13/10/2014 14:29, Wouter van Ooijen ha scritto:
>> The minimum delay that you calculated is just that: a minimum. Apart
>> from performance, there no problem in waiting somewhat longer. IMO you
>> must come up with two figures: the minimum respond time (determined by
>> the maximum driver turn-off delay) and the maximum respond time
>> (determined by the time you can afford to loose with the bus idling
>> bewteen request and response).
>
> Ok, so you're confirming the solution is a delay.
>
>
>> As for the delay: My experience is that I need delays all over the
>> place. One approach is to have a free-running 64 bit counter (which will
>> roll over long after you are dead) and wait for it to exceed start+delay.
>
> I usually use this approach for longer delays (milliseconds or seconds),
> so I can increment the counter in an ISR that triggers every millisecond.
> I don't like to fire an interrupt every 100 us.

With a 64 bit counter that ticks at 1 ns you have a rollover after 585
years, so you can use it for all delays. If you don't have a hardware 64
bit counter you can use a 32 bit counter + rollover interrupt.

> 64 bits appears too wide to me. Every time I need to read it and compare
> it with the time, I have to disable interrupts.

If you don't want to disable interrupts:

h1 = read_high();
l1 = read_low();
h2 = read_high();
l2 = read_low();
if h1 == h2 return ( h1, l1 ) else return ( h2, l2 )

This makes the reasonable assumption that only one carry from low to
high can occur during the execution of this code fragment.

Wouter van Ooijen
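The double-read fragment above, rendered as self-contained C (the counter halves are mocked as plain volatiles; on real hardware 'ctr_low' would be a hardware counter and 'ctr_high' a rollover count maintained by an overflow ISR):

```c
#include <stdint.h>

/* Mocked counter halves, as if maintained by hardware plus a rollover ISR. */
static volatile uint32_t ctr_low, ctr_high;

/* Lock-free 64-bit read without disabling interrupts. */
static uint64_t read_counter64(void)
{
    uint32_t h1 = ctr_high;
    uint32_t l1 = ctr_low;
    uint32_t h2 = ctr_high;
    uint32_t l2 = ctr_low;

    /* If no carry happened between the two high reads, (h1,l1) is a
       consistent pair; otherwise (h2,l2) was read after the single carry. */
    if (h1 == h2)
        return ((uint64_t)h1 << 32) | l1;
    return ((uint64_t)h2 << 32) | l2;
}
```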

David Brown

unread,
Oct 13, 2014, 10:13:29 AM10/13/14
to
On 13/10/14 14:56, pozzugno wrote:
> Il 13/10/2014 14:47, David Brown ha scritto:
>> You don't have to use the sum of all ISR's on all nodes as your minimum
>> time. You have to use the largest value of delay that could occur on
>> any one of the nodes, which is certainly not the same thing.
>
> Yes, I wanted to write the same thing, but my English isn't very good.
>
> I wanted to stress I have to *add all the ISRs execution time* that
> could trigger in a single slave and take the maximum value.
> One slave could use just a single interrupt, but another could use 10
> interrupts (ADC, serial ports, I2C, timers, ...)

No, you only have to include the ISRs that could occur during that time
- not all the ISRs in your system. That means you either have to know
which ones might occur (for example, you may be confident that I2C
interrupts could not happen at that point, and therefore discount them),
or you may perhaps choose to disable some of them temporarily on the
transmission of the last character, and re-enable them within the
transmission complete ISR.

You also have to consider things from a point of view of the chance of
something going wrong, and the consequences of that error. In most
communication systems, you already have a method of checking telegram
correctness (CRC, checksum, etc.) and of re-trying in the case of error.
Suppose - through calculation or testing - you figure out that a
latency of more than 20 us occurs for one telegram out of 10000. Then
if your bus-turnaround delay is set to 20 us, one out of every 10000
replies could be lost or corrupted. If that is within the level of
acceptable error rates, a 20 us delay is good enough.

(Note that you will not damage your drivers by having two drivers
enabled at the same time - decent RS-485 drivers have protection against
that.)

Of course, it is highly application-dependent whether such a statistical
test is good enough - it is likely to be fine for a temperature
measurement system, but frowned upon for a car brake control.


>
>> If possible, you can make the "transmission complete" interrupt a high
>> priority interrupt - if your microcontroller supports some sort of
>> nested interrupt scheme, then this will greatly reduce the latency of
>> the transmission complete interrupt and therefore the bus disable.
>
> I'm using AVR8 and SAMD20 from Atmel. In AVR8 there is a "hard-wired"
> priority scheme that can't be changed, and the TXC (transmit complete)
> interrupt priority is very low. Anyway, an ISR can never be interrupted,
> even by a higher priority interrupt event (the priority is only used if
> there is more than one pending interrupt request at the same time).
>

The "priorities" here are of minor relevance, since they only affect
which interrupt vector is used on simultaneous interrupts. The point is
that you don't have a microcontroller that supports nested interrupts
(other than by simply re-enabling global interrupts).

>
>> But the key point is that interrupt functions, or other areas of code
>> during which interrupts are disabled, should be as short, fast and
>> deterministic as possible. Don't do any "work" during interrupt
>> functions -
>
> I know, I know.

Apply that principle seriously, and you should not have a problem here.

Also avoid using interrupts unnecessarily. It is often tempting to have
lots of interrupts on SPI, I2C, ADC, etc., when they are actually
unnecessary - and may be slower than a simple polling loop.

>
>
>> or if you do, re-enable global interrupts first. Then it
>> makes little difference if an interrupt function is running when the
>> "transmission complete" triggers because the function will be complete
>> in a few microseconds.
>
> Some people consider the practice of re-enabling interrupts inside an
> ISR to be The Devil :-)
>
> http://betterembsw.blogspot.it/2014/01/do-not-re-enable-interrupts-in-isr.html
>

People always have opinions and their own ideas of general rules. In
embedded development, general rules are often wrong - there are always
exceptions.

You do have to be careful about re-enabling interrupts inside an ISR,
however. A typical usage is for a long-running ISR on a timer
interrupt. First, acknowledge the timer interrupt. Then disable the
timer interrupt to avoid recursion, then re-enable global interrupts.
Reverse the process at the end of the timer ISR. But be even more
careful than usual about shared data and race conditions!
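The acknowledge/disable/re-enable sequence described above might look like this (a sketch only: the interrupt-control primitives are stubbed as flags so the fragment is self-contained, whereas on AVR8 they would be cli()/sei() plus the timer's own mask bit; all names are illustrative):

```c
/* Stand-ins for the global interrupt flag and the timer's mask bit. */
static int global_irq_enabled;
static int timer_irq_enabled;

static void timer_isr(void)
{
    /* hardware has already cleared global interrupts on entry */
    global_irq_enabled = 0;

    /* acknowledge of the timer flag omitted (hardware-specific) */

    /* disable our own interrupt source so we cannot recurse ... */
    timer_irq_enabled = 0;
    /* ... then re-enable global interrupts: short ISRs may now preempt us */
    global_irq_enabled = 1;

    /* ... long-running work goes here ... */

    /* reverse the process before returning */
    global_irq_enabled = 0;
    timer_irq_enabled = 1;
    /* the interrupt-return instruction restores global interrupts */
    global_irq_enabled = 1;
}
```

The shared-data caveat applies to everything touched in the "long-running work" region, since other ISRs can now run in the middle of it.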


Don Y

unread,
Oct 13, 2014, 10:21:38 AM10/13/14
to
On 10/13/2014 6:28 AM, Wouter van Ooijen wrote:
> If you don't want to disable interrupts:
>
> h1 = read_high();
> l1 = read_low();
> h2 = read_high();
> l2 = read_low();
> if h1 == h2 return ( h1, l1 ) else return ( h2, l2 )
>
> This makes the reasonable assumption that only one carry from low to high can
> occur during the execution of this code fragment.

You can skip the second read of the "low"; if the highs differ, you can assume
the low takes on the value *at* rollover (depends on down count vs. up count).

E.g., if counter increments (decimal):

X 90
X 91 <--- read high (=X)
X 92
X 93
X ...
X 99
Y 00
Y 01 <--- read high (=Y)

So, you can assume {Y,00} is a safe value for the counter regardless of whether
you saw {91,92,93...00,01} as the low observation.

[If the counter is built of two concatenated counters, then you need to
use the modulus of the lower one as the assumed "wrap value", obviously]

If both read high's yield X, then you can use the observed read low.

Reading the low byte a second time doesn't improve your result:

X 90
X 91 <--- read high (=X)
X 92
X 93 read low first time somewhere
X ... in this area
X 99
Y 00
Y 01 <--- read high (=Y)
Y 02
Y 03 perhaps ISR occurs here?
Y ...
Y 10
Y 11 <--- read low (second time)

Note that an "unknown" delay (an ISR) can occur between any two of these
reads. So, the two high reads can be very closely spaced in time (which
means the first low is tightly bracketed) -- but then the second low
could be deferred a long time with respect to the second high. As such,
using *it* in the final result adds more slop to the reading.

[This assumes that the "event" that you are interested in tagging has
*triggered* this action -- so, you want the earliest time associated
with it]
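The shortcut above, sketched in C for an up-counting 16-bit counter extended by an 8-bit overflow count (both halves mocked as volatiles; names are illustrative):

```c
#include <stdint.h>

/* Mocked halves: 'lo16' as the hardware counter, 'hi8' as the
   overflow count an ISR would maintain. */
static volatile uint16_t lo16;
static volatile uint8_t  hi8;

/* 24-bit read with no second low read: if the high part changed
   between the two high reads, the low part is taken to be 0 -- its
   value at rollover for an up-counter. */
static uint32_t read_counter24(void)
{
    uint8_t  h1 = hi8;
    uint16_t l  = lo16;
    uint8_t  h2 = hi8;

    if (h1 == h2)
        return ((uint32_t)h1 << 16) | l;
    /* rollover happened between the high reads: use {h2, 0} */
    return (uint32_t)h2 << 16;
}
```

For a down-counter, or a low counter whose modulus isn't a power of two, the assumed wrap value changes accordingly, as noted above.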

Wouter van Ooijen

unread,
Oct 13, 2014, 10:46:06 AM10/13/14
to
Don Y schreef op 13-Oct-14 4:21 PM:
> On 10/13/2014 6:28 AM, Wouter van Ooijen wrote:
>> If you don't want to disable interrupts:
>>
>> h1 = read_high();
>> l1 = read_low();
>> h2 = read_high();
>> l2 = read_low();
>> if h1 == h2 return ( h1, l1 ) else return ( h2, l2 )
>>
>> This makes the reasonable assumption that only one carry from low to
>> high can
>> occur during the execution of this code fragment.
>
> You can skip the second read of the "low"; if the highs differ, you can
> assume
> the low takes on the value *at* rollover (depends on down count vs. up
> count).

Correct. But that requires a little more knowledge of the hardware,
which you will need anyway if it is about up versus down counting.

But the main point was that you don't have to disable interrupts if you
don't want to. And IIRC, disabling interrupts doesn't help much.

Wouter

rickman

unread,
Oct 13, 2014, 10:53:21 AM10/13/14
to
If you can't use hardware to control the driver enable signal, then
perhaps you can use hardware to control your timing. Rather than
worrying about software reading timers and messing with your interrupt
priorities, etc., use the UART itself as a timer to delay the first
driver-enabled character being transmitted.

Before a device transmits on the bus, send a character to the UART
without enabling the RS-485 driver. Wait for the transmitter shift
register to be empty ("transmit complete" by your terminology). Then
enable the bus driver and start sending the packet data. This will
provide 1 character transmission time delay.

You still need to calculate the worst case delay needed. If it turns
out to be longer than 1 character transmission time then you may need to
pad with more than 1 dummy character.

--

Rick
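The dummy-character trick might be sketched like this (the UART and driver-enable primitives here are hypothetical stubs that record the call order so the sequencing can be checked; on real hardware they would touch the UART data register, the transmit-complete flag and the driver-enable pin):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stubs that log the call sequence: D/E = driver off/on,
   S = send byte, W = wait for transmit complete. */
static char call_log[128];
static void rec(const char *s)          { strcat(call_log, s); }
static void rs485_driver_enable(int on) { rec(on ? "E" : "D"); }
static void uart_send_byte(uint8_t b)   { (void)b; rec("S"); }
static void uart_wait_tx_complete(void) { rec("W"); }

static void send_packet(const uint8_t *buf, size_t len)
{
    rs485_driver_enable(0);
    uart_send_byte(0xFF);      /* dummy byte; driver off, never reaches the bus */
    uart_wait_tx_complete();   /* one full character time elapses here */
    rs485_driver_enable(1);
    for (size_t i = 0; i < len; i++)
        uart_send_byte(buf[i]);
    uart_wait_tx_complete();
    rs485_driver_enable(0);    /* drop the driver as soon as the last bit is out */
}
```

If the worst-case turnaround delay exceeds one character time, the single dummy byte becomes several.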

pozzugno

unread,
Oct 13, 2014, 10:59:50 AM10/13/14
to
Il 13/10/2014 15:28, Wouter van Ooijen ha scritto:
>> I usually use this approach for longer delays (milliseconds or seconds),
>> so I can increment the counter in an ISR that triggers every millisecond.
>> I don't like to fire an interrupt every 100 us.
>
> With a 64 bit counter that ticks at 1 ns you have a rollover after 585
> years, so you can use it for all delays. If you don't have a hardware 64
> bit counter you can use a 32 bit counter + rollover interrupt.

On 8-bit microcontrollers, it's difficult to have a 32 bit counter :-)


>> 64-bit appears to me too wide. Everytime I need to read it and compare
>> with the time, I have to disable interrupts.
>
> If you don't want to disable interrupts:
>
> h1 = read_high();
> l1 = read_low();
> h2 = read_high();
> l2 = read_low();
> if h1 == h2 return ( h1, l1 ) else return ( h2, l2 )

Again, you are supposing you have a 32-bit hardware counter available.

> This makes the reasonable assumption that only one carry from low to
> high can occur during the execution of this code fragment.

Even if I don't have 32-bit hardware counters, thank you for sharing
this approach.

pozzugno

unread,
Oct 13, 2014, 11:03:40 AM10/13/14
to
Il 13/10/2014 16:53, rickman ha scritto:
> Before a device transmits on the bus, send a character to the UART
> without enabling the RS-485 driver. Wait for the transmitter shift
> register to be empty ("transmit complete" by your terminology). Then
> enable the bus driver and start sending the packet data. This will
> provide 1 character transmission time delay.

Good suggestion to avoid using another hardware peripheral for
implementing the delay. Thank you.

Don Y

unread,
Oct 13, 2014, 11:02:59 AM10/13/14
to
On 10/13/2014 7:46 AM, Wouter van Ooijen wrote:
> Don Y schreef op 13-Oct-14 4:21 PM:

>> You can skip the second read of the "low"; if the highs differ, you can
>> assume
>> the low takes on the value *at* rollover (depends on down count vs. up
>> count).
>
> Correct. But that requires a little more knowledge of the hardware. Which you
> will need anyway if it is about up versus down counting.
>
> But the main point was that you don't have to disable interrupts if you don't
> want to. And IIRC, disabling interrupts doesn't help much.

Correct. Disabling interrupts *forces* an atomic operation -- when there
often isn't the need for one. E.g., in this case, you are trying to
emulate two *concurrent* reads (Hi,Lo). As long as the result that you
get "makes sense" given the approach you've taken for implementation,
it's (almost) as good as any other!

[E.g., (X,99) and (Y,00) are different from each other, but more correct
than, for example, (Y,17)]

Don Y

unread,
Oct 13, 2014, 11:06:23 AM10/13/14
to
No, what you are most concerned with is ensuring every ISR manages to terminate
before it can be reinvoked.

So, ISR1 can be interrupted by ISR3 which can be interrupted by ISR7
which can be interrupted by ISR2, etc. AS LONG AS ISR1 can't reassert
itself while ANY instance of ISR1 is still "active".

(Ditto for every ISR in the system)

Even this "rule" can be bent -- if you know the worst-case nesting of
ISRs on themselves (and ensure you have adequate stack to cover that
level of penetration).


Don Y

unread,
Oct 13, 2014, 11:08:23 AM10/13/14
to
On 10/13/2014 7:59 AM, pozzugno wrote:
> Il 13/10/2014 15:28, Wouter van Ooijen ha scritto:
>>> I usually use this approach for longer delays (milliseconds or seconds),
>>> so I can increment the counter in an ISR that triggers every millisecond.
>>> I don't like to fire an interrupt every 100 us.
>>
>> With a 64 bit counter that ticks at 1 ns you have a rollover after 585
>> years, so you can use it for all delays. If you don't have a hardware 64
>> bit counter you can use a 32 bit counter + rollover interrupt.
>
> On 8-bit microcontrollers, it's difficult to have a 32 bit counter :-)

All you need is a counter wide enough to span the maximum delay you want
to measure "comfortably". You need to design so that your "most sluggish"
activity happens often enough to be captured within one counter rollover
period (i.e., the counter can't roll over more than once between
observations).

Grant Edwards

unread,
Oct 13, 2014, 11:20:13 AM10/13/14
to
On 2014-10-13, pozzugno <pozz...@gmail.com> wrote:

> If a node on the bus is very fast and starts transmitting (the master)
> or answering (one slave) immediately after receiving the last byte,
> while the previously transmitting node is still executing other ISRs,
> the final result is a corrupted transmission.
>
> What is the solution? I think the only solution is to define, at the
> design time, a minimum interval between the receiving of the last byte
> from one node and the transmission of the first byte.

Or put some disposable preamble bytes on the front of messages. This
is pretty common when using half-duplex modems: the padding bytes give
the modems time to "turn around" and get carrier established, PLLs
locked, etc.

--
Grant Edwards grant.b.edwards Yow! Here I am in the
at POSTERIOR OLFACTORY LOBULE
gmail.com but I don't see CARL SAGAN
anywhere!!

Don Y

unread,
Oct 13, 2014, 11:17:25 AM10/13/14
to
On 10/13/2014 4:58 AM, pozzugno wrote:

[bus acquisition in the face of possible contention]

> What is the solution? I think the only solution is to define, at the design
> time, a minimum interval between the receiving of the last byte from one node
> and the transmission of the first byte. This interval could be in the range of
> 100 microseconds and should be calibrated on the sum of duration of *all* ISRs
> of *all* nodes on the bus. It isn't a simple calculation.
>
> How do you solve this problem?

First, you need to *know* that it is *you* that has been granted access to
the bus/resource. If this requires you to perform some analysis of the
ENTIRE MESSAGE (to verify that the "address field" is, in fact, intact!),
then you probably DON'T want to try to ride the coat-tails of the message,
directly.

As the front end of a message (including yours) is more likely to be corrupted
(by a collision on the bus -- someone jabbering too long or too soon), you
might consider designing a packet format that has one checksum on the
address field "early" in the packet (before the payload) and another that
handles the balance of the message.

This allows you to capture the address information and its closely
following checksum, verify that it is *you* being addressed, and prepare
for your acquisition of the bus before the message has completely arrived.

You can also arrange to access the bus in "timeslots" referenced to some
easily recognizable event (e.g., a beacon sent by the master). So,
you do all of your timing off of that single event (see "some special
point" in the beacon, start timer, wait to transmit until "your slot").

Note this also works if you just assume the "next slot" after the master's
message is the slot you should use (you just limit the length of the master's
message so it fits in that fixed delay). Similarly, the master can "know"
that your reply will never exceed a fixed duration so it won't issue another
beacon/request until that has expired.

Hopefully, this makes sense. Sorry, I'm off for a pro bono day so no time
here... :-/

Don Y

unread,
Oct 13, 2014, 11:19:41 AM10/13/14
to
On 10/13/2014 8:17 AM, Don Y wrote:

> As the front end of a message (including yours) is more likely to be corrupted
> (by a collision on the bus -- someone jabbering too long or too soon), you
> might consider designing a packet format that has one checksum on the
> address field "early" in the packet (before the payload) and another that
> handles the balance of the message.
>
> This allows you to capture the address information and its closely
> following checksum, verify that it is *you* being addressed, and prepare
> for your acquisition of the bus before the message has completely arrived.

... so that you don't erroneously acquire the bus only to discover that it
was intended for someone else (and you've just made things worse)

Wouter van Ooijen

unread,
Oct 13, 2014, 12:14:47 PM10/13/14
to
pozzugno schreef op 13-Oct-14 4:59 PM:

> Il 13/10/2014 15:28, Wouter van Ooijen ha scritto:
>>> I usually use this approach for longer delays (milliseconds or seconds),
>>> so I can increment the counter in an ISR that triggers every millisecond.
>>> I don't like to fire an interrupt every 100 us.
>>
>> With a 64 bit counter that ticks at 1 ns you have a rollover after 585
>> years, so you can use it for all delays. If you don't have a hardware 64
>> bit counter you can use a 32 bit counter + rollover interrupt.
>
> On 8-bit microcontrollers, it's difficult to have a 32 bit counter :-)

You can use the same technique, but with a 16-bit counter and some more
bytes updated by the overflow ISR. The 8-bitters I know have 16-bit
counters with a mechanism to do an atomic 16-bit read. The total number
of bits you need depends on the resolution and the required run time of
the application.

But at least on the 8-bitters I know (PIC 12 and 14 bit cores)
instruction timing is totally predictable, so it is easy to make a busy
delay subroutine that is accurate.

Wouter van Ooijen

Les Cargill

unread,
Oct 13, 2014, 1:35:45 PM10/13/14
to
MODBUS RTU specifies 3.5 character times of "settle time" after the
last byte is sent. That's on the order of 0.9 msec at 38,400 baud.

But that's a *minimum*.

> [I know there are some microcontrollers that automatically (at the
> hw-level) toggle an output pin when the last byte is totally shifted
> out, but I'm not using one of the them and they aren't so common.]


You have to know the worst-case timing. If you can't control the slave
nodes, then you'll get retransmissions due to collisions if the ISR
timing is sufficiently nondeterministic.

Since it's half-duplex, I'm wondering why you get an interrupt
other than TX-complete in that state.

You simply have to trade speed for determinism in this case. And there
will be collisions. If nothing else, add counters of bad CRC events
and no-response events and tune a delay.

485 isn't a good protocol these days. The kids get Ethernet on RasPi
class machines for class projects; it's not unreasonable to
use a real comms stack.

--
Les Cargill


upsid...@downunder.com

unread,
Oct 13, 2014, 3:02:33 PM10/13/14
to
On Mon, 13 Oct 2014 13:58:42 +0200, pozzugno <pozz...@gmail.com>
wrote:
You must be quite desperate if you intend to use 1x550-style chips on
RS-485 :-). That chip family is useless for any high-speed half-duplex
communication.

You can get an interrupt when you load the last character into the Tx
shift register, but you can't get an interrupt when the last bit of
the last character is actually shifted out of the Tx shift register.

In the real world, some high-priority code will have to poll for when
the last stop bit of your last byte has actually been sent onto the line.

Grant Edwards

unread,
Oct 13, 2014, 3:09:27 PM10/13/14
to
On 2014-10-13, upsid...@downunder.com <upsid...@downunder.com> wrote:

> You must be quite desperate, if you intend to use 1x550 style chips on
> RS-485 :-). That chip family is useless for any high speed half duplex
> communication.
>
> You can get an interrupt when you load the last character into the Tx
> shift register, but you can't get an interrupt, when the last bit of
> the last character is actually shifted out of the Tx shift register.
>
> In real word, some high priority code will have to poll, when the
> last bit of your transmission has actually sent the last stop bit of
> your last byte into the line.

And (in my experience) figuring out when that stop bit has been sent
can be problematic. Not all '550 "compatible" UARTs wait until the
end of the stop bit to set the "transmit shift register empty" status
bit. Some I've used set it as soon as the last data bit has been
sent, and if you turn off the driver at that point, you can lose the
stop bit and create a framing error.

--
Grant Edwards grant.b.edwards Yow! My polyvinyl cowboy
at wallet was made in Hong
gmail.com Kong by Montgomery Clift!

glen herrmannsfeldt

unread,
Oct 13, 2014, 4:55:57 PM10/13/14
to
pozzugno <pozz...@gmail.com> wrote:
> I have a multi-drop two-wires RS485 bus. One node is the master and all
> the others are slaves. The master is the only node that is authorized
> to initiate a transmission, addressing one slave. The addressed slave
> usually answers to the master.

> The bus is half-duplex, so every node disables the driver. Only THE
> node that transmits data on the bus enables the driver and disables it
> as soon as it can, just after the last byte. An interrupt (transmit
> complete) usually triggers when the last byte is totally shifted out, so
> the driver can be disabled immediately.

Ethernet has a minimum time between packets that is independent
of most other timing parameters. The idea is that it gives the
receiver enough time to get the data out of its buffer and get
ready to receive again.

-- glen

glen herrmannsfeldt

unread,
Oct 13, 2014, 4:58:24 PM10/13/14
to
Grant Edwards <inv...@invalid.invalid> wrote:

(snip, someone wrote)
>> In the real world, some high-priority code will have to poll for when
>> the last stop bit of your last byte has actually been sent onto the line.

> And (in my experience) figuring out when that stop bit has been sent
> can be problematic. Not all '550 "compatible" UARTs wait until the
> end of the stop bit to set the "transmit shift register empty" status
> bit. Some I've used set it as soon as the last data bit has been
> sent, and if you turn off the driver at that point, you can lose the
> stop bit and create a framing error.

For the usual asynchronous serial systems, the stop bit is the
same level as the inactive state of the line. As long as you don't
start the next character too soon, you are safe.

If you are using it for a multiple driver line, seems to me that
you are using it for something that it wasn't designed to do.

-- glen

Grant Edwards

unread,
Oct 13, 2014, 5:08:45 PM10/13/14
to
On 2014-10-13, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Grant Edwards <inv...@invalid.invalid> wrote:
>
> (snip, someone wrote)
>>> In the real world, some high-priority code will have to poll to find
>>> out when your transmission has actually sent the last stop bit of
>>> your last byte onto the line.
>
>> And (in my experience) figuring out when that stop bit has been sent
>> can be problematic. Not all '550 "compatible" UARTs wait until the
>> end of the stop bit to set the "transmit shift register empty" status
>> bit. Some I've used set it as soon as the last data bit has been
>> sent, and if you turn off the driver at that point, you can lose the
>> stop bit and create a framing error.
>
> For the usual asynchronous serial systems, the stop bit is the
> same level as the inactive state of the line.

That's true, but I don't know why it's relevant. We're talking about
knowing when to turn off RTS at the end of the last byte in the
message. If you turn it off immediately after the last data bit, and
the level of the last data bit is opposite from the required stop-bit
(idle) state, then you end up with problems unless the line is biased
strongly enough to return the line to its idle state in less than
about 1/8 of a bit time. In my experience, a lot of installations end
up with no bias resistors at all...

> As long as you don't start the next character too soon, you are safe.

There is no next character. We're talking about the last byte in a
message.

> If you are using it for a multiple driver line, seems to me that
> you are using it for something that it wasn't designed to do.

That's exactly what RS485 is designed to do, but for it to work
reliably, you have to leave RTS on during a good portion of the stop
bit so that the driver can actively force the line back to the idle
state.

--
Grant Edwards grant.b.edwards Yow! ... the MYSTERIANS are
at in here with my CORDUROY
gmail.com SOAP DISH!!

Tim Wescott

unread,
Oct 13, 2014, 5:30:47 PM10/13/14
to
So? You don't have to disable interrupts for very long. Yes, it adds to
the interrupt latency, but not by much if you're careful.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Tim Wescott

unread,
Oct 13, 2014, 5:37:04 PM10/13/14
to
On Mon, 13 Oct 2014 13:58:42 +0200, pozzugno wrote:

> I have a multi-drop two-wires RS485 bus. One node is the master and all
> the others are slaves. The master is the only node that is authorized
> to initiate a transmission, addressing one slave. The addressed slave
> usually answers to the master.
>
> The bus is half-duplex, so every node disables the driver. Only THE
> node that transmits data on the bus enables the driver and disables it
> as soon as it can, just after the last byte. An interrupt (transmit
> complete) usually triggers when the last byte is totally shifted out, so
> the driver can be disabled immediately.
>
> Of course, other interrupts can be triggered. What happens when
> interrupt X (whatever) triggers just before the "transmit complete"
> interrupt? The result is the ISR X is called, postponing the execution
> of "transmit complete" ISR. The RS485 driver will be disabled with a
> certain amount of delay. In the worst case, the driver could be
> disabled with a delay that is the sum of the duration of all ISRs that
> could trigger.
> [In this scenario, I think of ISRs that can't be interrupted by a higher
> priority interrupt.]

Unless your processor is very primitive, you should be able to make the
serial interrupt the highest priority. Or take David Brown's suggestion
and disable all but the serial interrupt when you start to transmit the
last byte.

> If a node on the bus is very fast and starts transmitting (the master)
> or answering (one slave) immediately after receiving the last byte, but
> when the previous transmitting node is executing other ISRs, the final
> result is a corrupted transmission.
>
> What is the solution? I think the only solution is to define, at the
> design time, a minimum interval between the receiving of the last byte
> from one node and the transmission of the first byte. This interval
> could be in the range of 100 microseconds and should be calibrated on
> the sum of duration of *all* ISRs of *all* nodes on the bus. It isn't a
> simple calculation.

I believe you're going about the last half of this backwards. Do not
calculate the worst-case interrupt latency -- specify it, and make it a
requirement on the slave boards. This should be easy enough to do if you
are in charge of all the software, and still quite doable if you're only
in charge of the communications software (assuming a functional group).

> Moreover, implementing a short "software" delay in the range of some
> microseconds isn't a simple task. An empty loop on a decreasing
> volatile variable is a solution, but the final delay isn't simple to
> calculate at the design time, and it could depend on the compiler,
> compiler settings, clock frequency and so on. Use an hw timer only for
> this pause?
>
> How do you solve this problem?

In a UART without a FIFO, an easy way to do this would be to send one or
more bytes with the transmitter disabled, then turn on the transmitter at
the appropriate time. Basically, use the UART as your timed event
generator.
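
Sizing that trick numerically: the number of dummy (driver-disabled) bytes to clock out for a target turnaround gap is just a ceiling division. A sketch, with a helper name invented here (and assuming the intermediate product fits in 32 bits):

```c
#include <stdint.h>

/* How many dummy bytes to clock out with the transmitter disabled so
 * that the real transmission starts at least gap_us after the trigger.
 * frame_bits is bits per character on the wire (10 for 8N1).  Assumes
 * gap_us * baud fits in 32 bits. */
static uint32_t pad_bytes(uint32_t gap_us, uint32_t baud, uint32_t frame_bits)
{
    uint32_t denom = frame_bits * 1000000UL;     /* char time scaled by baud */
    return (gap_us * baud + denom - 1) / denom;  /* ceil(gap / char_time) */
}
```

E.g. a 300 us gap at 115200 baud 8N1 (about 87 us per character) needs 4 pad bytes.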
>
> [I know there are some microcontrollers that automatically (at the
> hw-level) toggle an output pin when the last byte is totally shifted
> out, but I'm not using one of the them and they aren't so common.]

In my experience, unless you're using a really high baud rate and a slow
processor, or your ISRs are just plain incorrectly written, your
interrupt latency will be far lower than a bit interval.

lang...@fonz.dk

unread,
Oct 13, 2014, 5:40:03 PM10/13/14
to
I've never had much luck running without bias (or "failsafe" as they
call them); too many false start bits.

Unless the wires are very long, I'd think the recommended ~600R or so
pull-up/down would be fast enough even if you turn off the transmitter.

-Lasse

lang...@fonz.dk

unread,
Oct 13, 2014, 6:02:44 PM10/13/14
to
but Ethernet needs hubs or switches, which gets messy if you have a lot of nodes


-Lasse

Don Y

unread,
Oct 13, 2014, 8:36:52 PM10/13/14
to
In the past, I've had good luck with 10Base2 implementations.
But, you have to "fix" the cabling instead of relying on the
flakey T's, etc.

[Do they even make 10Base2 kit anymore?]

Les Cargill

unread,
Oct 13, 2014, 10:53:40 PM10/13/14
to
POR QUE??? :)

> [Do they even make 10Base2 kit anymore?]


I haven't seen any since the '90s.

--
Les Cargill

Don Y

unread,
Oct 13, 2014, 11:24:49 PM10/13/14
to
On 10/13/2014 7:53 PM, Les Cargill wrote:
> Don Y wrote:

>>> but Ethernet need hubs or switches, that gets messy if you have a lot
>>> of nodes
>>
>> In the past, I've had good luck with 10Base2 implementations.
>> But, you have to "fix" the cabling instead of relying on the
>> flakey T's, etc.
>
> POR QUE??? :)

Device sat in a hard-to-access location and had pretty extreme
environmental conditions (heat, vibration, etc.). The traditional
T's (or F's if you preferred that orientation) just weren't very good
at long term reliability. So, the physical connections were "adjusted"
to more appropriately address those needs.

>> [Do they even make 10Base2 kit anymore?]
>
> I haven't seen any since the '90s.

I'd assume you could still hack together a suitable PHY. (?)
Not sure how *economical* it would be, though...

pozz

unread,
Oct 14, 2014, 1:48:42 AM10/14/14
to
Indeed my approach is to use a volatile uint16_t ticks variable
incremented every 1ms in a timer ISR. Of course the variable overflows
"naturally" from 65535 to 0 in the ISR. Taking into account the
wrap-around, I can manage delays up to 65536/2 ms (about 32 seconds),
which is enough for many applications. When I need longer delays, I use
uint32_t.
Consider that my ticks variable is a "software counter", not a hardware
counter (the hardware counter is used to generate the 1ms timer
interrupts).

On 8-bitters, I read the ticks variable after disabling interrupts, just
to be sure the operation is atomic. Wouter's approach is new to me and
very interesting. I'll try to use it in the future.

I think it can be used to read two 8-bit registers or two 16-bit
registers (if the architecture allows reading a 16-bit hardware counter
atomically).

Wouter's initial approach doesn't take wrap-around into account, because
he uses a very wide 64-bit counter that will reasonably never reach its
maximum value during the lifetime of the gadget (or the developer's
life). For 16-bit or 32-bit counters and 7d/7d 24h/24h applications,
the wrap-around *must* be considered, reducing the maximum delay by
half. Anyway this isn't a big issue.

The only problem I see with using a hardware counter is that it is quite
difficult to get a nice counting frequency, such as 1ns, 1us or 1ms.
Most hardware timer/counter peripherals can be fed directly by the
main clock or through a prescaler. Usually prescaler values are 2, 4,
8, 256 or similar, which leads to an odd final frequency.
With a "software" ticks counter incremented in a timer ISR, it's simpler
to calibrate the hardware timer to trigger every 1ms or another nice
value.

Don Y

unread,
Oct 14, 2014, 2:28:14 AM10/14/14
to
On 10/13/2014 10:48 PM, pozz wrote:

[snip]

>>>> With a 64 bit counter that ticks at 1 ns you have a rollover after 585
>>>> years, so you can use it for all delays. If you don't have a hardware 64
>>>> bit counter you can use a 32 bit counter + rollover interrupt.
>>>
>>> On 8-bit microcontrollers, it's difficult to have a 32 bit counter :-)
>>
>> All you need is a wide enough counter to span the maximum delay you want
>> to measure "comfortably". You need to design such that your "most
>> sluggish"
>> activity happens often enough to be captured in one counter rollover period
>> (i.e., counter can't roll over more than once between observations)
>
> Indeed my approach is to use a volatile uint16_t ticks variable incremented
> every 1ms in a timer ISR. Of course the variable overflows "naturally" from
> 65535 to 0 in the ISR. Taking into account the wrap-around, I can manage
> delays up to 65536/2 ms (about 32 seconds), which is enough for many
> applications. When I need longer delays, I use uint32_t.
> Consider that my ticks variable is a "software counter", not a hardware counter
> (that is used to generate 1ms timer interrupts).

You should be able to get ~60 second delays.

If you *know* you will always look at a value "more often" than the wraparound
period, you can always *deduce* wraparound trivially:

unsigned now, then;

if (now < then)
    now += counter_modulus;

(effectively)
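
The same deduction falls out for free if you subtract in the counter's own modulus: with a uint16_t millisecond tick, the unsigned difference is correct across the 65535 -> 0 rollover. A sketch (the function name is invented here):

```c
#include <stdint.h>

/* Milliseconds elapsed between two samples of a free-running uint16_t
 * tick counter.  Correct across rollover as long as the real interval
 * is under 65536 ms, because uint16_t subtraction is modulo 2^16. */
static uint16_t elapsed_ms(uint16_t now, uint16_t then)
{
    return (uint16_t)(now - then);
}
```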

> On 8-bitters, I read the ticks variable after disabling interrupts, just to be
> sure the operation is atomic. Wouter's approach is new to me and very
> interesting. I'll try to use it in the future.

Anything you can do to AVOID disabling interrupts (or, to allow you to
re-enable them earlier) tends to be a win.

Ideally, you don't ever want to unilaterally disable (and, later, re-enable!)
interrupts. Instead, each time you explicitly disable interrupts you want to,
first, make note of whether or not they were enabled at the time (assuming
this isn't implied).

Then, later, when you choose to re-enable them, you actually want to RESTORE
them to the state that they were in when you decided they should be disabled.
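
The save/restore discipline can be shown with a simulated interrupt-enable flag (the names here are hypothetical; on a real AVR the flag lives in SREG and you'd save it, cli(), then write it back). The point is that a nested critical section restores the state it found, rather than blindly re-enabling:

```c
#include <stdint.h>

/* Simulated global-interrupt-enable flag, standing in for the AVR
 * SREG I-bit.  On real hardware enter_critical() would read SREG and
 * then execute cli(). */
static uint8_t irq_enabled = 1;

static uint8_t enter_critical(void)
{
    uint8_t was = irq_enabled;  /* remember the state we found */
    irq_enabled = 0;            /* "cli()" */
    return was;
}

static void exit_critical(uint8_t was)
{
    irq_enabled = was;          /* restore; don't unconditionally "sei()" */
}
```

If an inner section blindly did sei() on exit, the outer section would lose its protection; restoring the saved state avoids that.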

> I think It can be used to read two 8-bits registers or two 16-bits registers
> (if the architecture lets to read atomically a 16-bit hardware counter).


With careful consideration, you can read any width counter/timer (though
the granularity of your result will vary).

What you probably *don't* want to do is:

high1 = read_high()
low1 = read_low()
while (high1 != (high2 = read_high())) {
    high1 = high2
    low1 = read_low()
}

or similar.

[keeping in mind that IRQ's can come into this at any time -- including
REPEATEDLY!]

> Wouter's initial approach doesn't take wrap-around into account, because he
> uses a very wide 64-bit counter that will reasonably never reach its maximum
> value during the lifetime of the gadget (or the developer's life). For 16-bit
> or 32-bit counters and 7d/7d 24h/24h applications, the wrap-around *must* be
> considered, reducing the maximum delay by half. Anyway this isn't a big issue.

If you had a proper RTOS, you could ask the OS to schedule a task at some
specific interval after the "event of interest". It would then GUARANTEE
that at least N time units had elapsed (and not more than M).

> The only problem I see with using a hardware counter is that it is quite
> difficult to get a nice counting frequency, such as 1ns, 1us or 1ms. Most
> hardware timer/counter peripherals can be fed directly by the main clock or
> through a prescaler. Usually prescaler values are 2, 4, 8, 256 or similar,
> which leads to an odd final frequency.

Doesn't matter. Do the math ahead of time (e.g., at compile time) and figure
out what (value) you want to wait for.

> With a "software" ticks counter incremented in a timer ISR, it's simpler to
> calibrate the hardware timer to trigger every 1ms or another nice value.

Timer IRQ's (esp the jiffy) are a notorious source of problems.
Too *often*, too *much* is done, there. (e.g., reschedule())

It's harder -- but not discouragingly so -- to move stuff out of the jiffy.
But, once you do so, you tend to get a lot more robust/responsive system.

E.g., the "beacon" scheme I mentioned (elsewhere) allows you to pre-determine
what your actions will be... then, lay them in place when the "event of
interest" occurs in a very timely manner -- without doing any "work" in
IRQ's, etc. You've already sorted out what *will* be done and are now just
waiting for your "cue" to do so!

For example, if you know the beacon message will be N time units (based on
number of characters and bit rate), you can concentrate on detecting the
beacon -- and nothing more -- PROMPTLY. Then, arranging for your code to
run N+epsilon time units after that event (instead of trying to watch each byte
from that beacon message in the hope of finding the end of the message).

This sort of scheme can easily allow every node (in a modest cluster size)
to indicate that it needs attention (by allowing each node to respond to
a "polling broadcast" in their individual timeslots with an indication of
whether or not they "have something to say". (the master node then takes
note of each of these and, later, issues directed queries to those nodes that
"need attention")
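
The timeslot arithmetic behind that scheme is trivial; a sketch with made-up parameters (the slot length, node numbering, and helper names are assumptions, not anything from the thread):

```c
#include <stdint.h>

#define SLOT_LEN_US 2000UL  /* assumed per-node reply slot, in microseconds */

/* Start of node n's reply slot, measured from the end of the beacon.
 * Node 0 answers first, node 1 next, and so on. */
static uint32_t slot_start_us(uint32_t node)
{
    return node * SLOT_LEN_US;
}

/* Which node owns a given instant after the beacon. */
static uint32_t slot_of(uint32_t offset_us)
{
    return offset_us / SLOT_LEN_US;
}
```

Each node only needs a timer armed at the beacon, fired at slot_start_us(my_id); nobody has to watch the bus byte by byte.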

If it hasn't been said (and, if your environment can accommodate it), you
might want to look at a different signalling/comms technology that allows
for a true party-line (resolving contention in hardware).

pozzugno

unread,
Oct 14, 2014, 2:50:41 AM10/14/14
to
IMHO it's a risky approach. In order to let the "transmit complete"
(TXC) ISR run immediately, I would have to re-enable interrupts inside
all the other ISRs.

So ISR A could be interrupted by ISR B even if I'm not interested in
that, just because ISR A must be interruptible by the TXC ISR.

pozzugno

unread,
Oct 14, 2014, 3:20:16 AM10/14/14
to
Il 14/10/2014 08:28, Don Y ha scritto:
> On 10/13/2014 10:48 PM, pozz wrote:

>> Indeed my approach is to use a volatile uint16_t ticks variable
>> incremented
>> every 1ms in a timer ISR. Of course the variable overflow "naturally"
>> from
>> 65535 to 0 in the ISR. Taking into account the wrap-around, I can manage
>> delays up to 65536/2=30 seconds that is enough for many applications.
>> When I
>> need longer delays, I use uint32_t.
>> Consider that my ticks variable is a "software counter", not a
>> hardware counter
>> (that is used to generate 1ms timer interrupts).
>
> You should be able to get ~60 second delays.
>
> If you *know* you will always look at a value "more often" than the
> wraparound
> period, you can always *deduce* wraparound trivially:
>
> unsigned now, then;
>
> if (now < then)
> now += counter_modulus;
>
> (effectively)

I don't think I have got your point.

I usually use the following comparison to understand if a timer tmr has
expired.

((uint32_t)(ticks - tmr) <= UINT32_MAX / 2)

In this way I lose half of the total period, but it usually isn't a
big issue.
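
Wrapped in a function, that comparison can be exercised off-target, including across rollover (a sketch; the function name is invented here):

```c
#include <stdint.h>

/* True once the free-running tick counter has reached or passed the
 * deadline tmr.  Works across 32-bit rollover at the cost of halving
 * the usable range, exactly as described above. */
static int timer_expired(uint32_t ticks, uint32_t tmr)
{
    return (uint32_t)(ticks - tmr) <= UINT32_MAX / 2;
}
```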


>> On 8-bitters, I read the ticks variable after disabling interrupts,
>> just to be
>> sure the operation is atomic. The Wouter's approach is new for me and
>> very
>> interested. I'll try to use it in the future.
>
> Anything you can do to AVOID disabling interrupts (or, to allow you to
> re-enable them earlier) tends to be a win.

Oh yes, I know.


> Ideally, you don't ever want to unilaterally disable (and, later,
> re-enable!)
> interrupts. Instead, each time you explicitly disable interrupts you
> want to,
> first, make note of whether or not they were enabled at the time (assuming
> this isn't implied).
>
> Then, later, when you choose to re-enable them, you actually want to
> RESTORE
> them to the state that they were in when you decided they should be
> disabled.

In my applications, I disable interrupts only when managing timers, so
I'm sure interrupts are enabled when I try to access the 16-bit or
32-bit counter. Of course, I never use timers in ISRs.


>> The only problem I see with using hardware counter is that it is quite
>> impossible to have a nice counting frequency, such as 1ns, 1us or 1ms.
>> Mostly
>> hardware timer/counter peripherals can be feed directly by the main
>> clock or
>> after a prescaler. Usually prescaler values can be 2, 4, 8, 256 or
>> similar,
>> that brings to an odd final frequency.
>
> Doesn't matter. Do the math ahead of time (e.g., at compile time) and
> figure
> out what (value) you want to wait for.
>
>> With a "software" ticks counter incremented in a timer ISR, it's
>> simpler to
>> calibrate the hardware counter to trigger every 1ms or similar nice
>> values.
>
> Timer IRQ's (esp the jiffy) are a notorious source of problems.

What do you mean by "jiffy"? Are you calling my approach the "jiffy"?
I didn't understand.


> Too *often*, too *much* is done, there. (e.g., reschedule())
>
> It's harder -- but not discouragingly so -- to move stuff out of the jiffy.
> But, once you do so, you tend to get a lot more robust/responsive system.
>
> E.g., the "beacon" scheme I mentioned (elsewhere) allows you to
> pre-determine
> what your actions will be... then, lay them in place when the "event of
> interest" occurs in a very timely manner -- without doing any "work" in
> IRQ's, etc. You've already sorted out what *will* be done and are now just
> waiting for your "cue" to do so!
>
> For example, if you know the beacon message will be N time units (based on
> number of characters and bit rate), you can concentrate on detecting the
> beacon -- and nothing more -- PROMPTLY. Then, arranging for your code to
> run N+epsilon time units after that event (instead of trying to watch
> each byte
> from that beacon message in the hope of finding the end of the message).
>
> This sort of scheme can easily allow every node (in a modest cluster size)
> to indicate that it needs attention (by allowing each node to respond to
> a "polling broadcast" in their individual timeslots with an indication of
> whether or not they "have something to say". (the master node then takes
> note of each of these and, later, issues directed queries to those nodes
> that
> "need attention")

I'm sorry, I think I completely failed to understand what you have
written :-( I don't expect you to explain it all again in greater
detail, but do you have a link where I can study this "beacon" approach?


> If it hasn't been said (and, if your environment can accommodate it), you
> might want to look at a different signalling/comms technology that allows
> for a true party-line (resolving contention in hardware).

Any suggestions?


pozzugno

unread,
Oct 14, 2014, 3:28:41 AM10/14/14
to
Il 13/10/2014 17:17, Don Y ha scritto:
> On 10/13/2014 4:58 AM, pozzugno wrote:

> First, you need to *know* that it is *you* that has been granted access to
> the bus/resource. If this requires you to perform some analysis of the
> ENTIRE MESSAGE (to verify that the "address field" is, in fact, intact!),
> then you probably DON'T want to try to ride the coat-tails of the message,
> directly.
>
> As the front end of a message (including yours) is more likely to be
> corrupted
> (by a collision on the bus -- someone jabbering too long or too soon), you
> might consider designing a packet format that has one checksum on the
> address field "early" in the packet (before the payload) and another that
> handles the balance of the message.
>
> This allows you to capture the address information and it's closely
> following
> checksum, verify that it is *you* that are being addressed and prepare
> for your
> acquisition of the bus before the message has completely arrived.

I have just one checksum at the end of the message, but the address
field is at the beginning. Anyway I look at the address field as it
arrives to decide if it's a frame for me.

I know the address field could be corrupted, but IMHO it's not
important. If the address field is corrupted and appears to be for me,
but the master wanted to talk to another node, I store the message till
the end, but the checksum will be wrong, so the frame will be discarded.
If the address field is corrupted and it doesn't appear to be for me,
but the master really wanted to talk to me, I discard the message early.

IMHO, adding a new checksum at the beginning of the frame, only to
protect the address field, doesn't add more robustness to the final
performance.


> You can also arrange to access the bus in "timeslots" referenced to some
> easily recognizable event (e.g., a beacon sent by the master). So,
> you do all of your timing off of that single event (see "some special
> point" in the beacon, start timer, wait to transmit until "your slot").
>
> Note this also works if you just assume the "next slot" after the master's
> message is the slot you should use (you just limit the length of the
> master's
> message so it fits in that fixed delay). Similarly, the master can "know"
> that your reply will never exceed a fixed duration so it won't issue
> another
> beacon/request until that has expired.
>
> Hopefully, this makes sense. Sorry, I'm off for a pro bono day so no time
> here... :-/

I think I'll have to think more deeply about this beacon approach. Any
useful info on the Internet?

pozzugno

unread,
Oct 14, 2014, 3:37:26 AM10/14/14
to
Il 13/10/2014 23:37, Tim Wescott ha scritto:
> On Mon, 13 Oct 2014 13:58:42 +0200, pozzugno wrote:
>
>> I have a multi-drop two-wires RS485 bus. One node is the master and all
>> the others are slaves. The master is the only node that is authorized
>> to initiate a transmission, addressing one slave. The addressed slave
>> usually answers to the master.
>>
>> The bus is half-duplex, so every node disables the driver. Only THE
>> node that transmits data on the bus enables the driver and disables it
>> as soon as it can, just after the last byte. An interrupt (transmit
>> complete) usually triggers when the last byte is totally shifted out, so
>> the driver can be disabled immediately.
>>
>> Of course, other interrupts can be triggered. What happens when
>> interrupt X (whatever) triggers just before the "transmit complete"
>> interrupt? The result is the ISR X is called, postponing the execution
>> of "transmit complete" ISR. The RS485 driver will be disabled with a
>> certain amount of delay. In the worst case, the driver could be
>> disabled with a delay that is the sum of the duration of all ISRs that
>> could trigger.
>> [In this scenario, I think of ISRs that can't be interrupted by a higher
>> priority interrupt.]
>
> Unless your processor is very primitive, you should be able to make the
> serial interrupt the highest priority.

I'm using Atmel AVR8 controllers. I can't change interrupt
priorities (they are hard-wired in the device). Anyway, IMHO it's not a
matter of priority but of the lack of a *nested* interrupt controller:
an ISR can never be interrupted by a higher-priority interrupt (in this
case, transmit complete).

> Or take David Brown's suggestion
> and disable all but the serial interrupt when you start to transmit the
> last byte.

This is a good suggestion, even if it isn't simple. I would have to save
the status of all IRQs, disable them, and later reactivate the ones that
were originally active.

David Brown

unread,
Oct 14, 2014, 4:36:21 AM10/14/14
to
Note that "Wouter's algorithm" (giving him his two minutes of fame,
until someone points out that he didn't actually invent it...) is easily
extensible. On an 8-bit system with 64-bit counters, the "read_high"
should read the upper 56 bits, and the "read_low" reads the low 8 bits
(or use a 48-bit/16-bit split if you can do an atomic 16-bit read of the
counter hardware, which IIRC is possible on an AVR).

Another variation that might be easier if your counter is running
relatively slowly (say 10 kHz) is just:

a = read_counter();
while (true) {
    b = read_counter();
    if (a == b) return a;
    a = b;
}

(Yes, the run-time here is theoretically unbounded - but if your system
is so badly overloaded with ISR's that this loop runs more than a couple
of times, you've got big problems anyway.)

> I think it can be used to read two 8-bit registers or two 16-bit
> registers (if the architecture allows reading a 16-bit hardware
> counter atomically).
>
> Wouter's initial approach doesn't take wrap-around into account,
> because he uses a very wide 64-bit counter that will reasonably never
> reach its maximum value during the lifetime of the gadget (or the
> developer's life). For 16-bit or 32-bit counters and 7d/7d 24h/24h
> applications, the wrap-around *must* be considered, reducing the
> maximum delay by half. Anyway this isn't a big issue.

Wrap must be considered, but it is not necessarily a problem. Just make
sure you deal with differences in times rather than waiting for the
timer to pass a certain value.

rickman

unread,
Oct 14, 2014, 4:36:59 AM10/14/14
to
On 10/13/2014 11:17 AM, Don Y wrote:
>
> First, you need to *know* that it is *you* that has been granted access to
> the bus/resource. If this requires you to perform some analysis of the
> ENTIRE MESSAGE (to verify that the "address field" is, in fact, intact!),
> then you probably DON'T want to try to ride the coat-tails of the message,
> directly.

It's called a checksum and can be done after the entire message is
received and before a reply needs to be sent.


> As the front end of a message (including yours) is more likely to be
> corrupted
> (by a collision on the bus -- someone jabbering too long or too soon), you
> might consider designing a packet format that has one checksum on the
> address field "early" in the packet (before the payload) and another that
> handles the balance of the message.
>
> This allows you to capture the address information and it's closely
> following
> checksum, verify that it is *you* that are being addressed and prepare
> for your
> acquisition of the bus before the message has completely arrived.

As the OP has pointed out this is of no value whatsoever. No one is
trying to minimize dead time or maximize throughput. The question is
about turning off the driver at the end of a transmission.


> You can also arrange to access the bus in "timeslots" referenced to some
> easily recognizable event (e.g., a beacon sent by the master). So,
> you do all of your timing off of that single event (see "some special
> point" in the beacon, start timer, wait to transmit until "your slot").

This is not only not a solution to the problem, it is entirely
pointless. The protocol is that the master sends a message to a
peripheral device. When the peripheral device receives a message it
responds. When the master receives a response it can send the next
message.


> Note this also works if you just assume the "next slot" after the master's
> message is the slot you should use (you just limit the length of the
> master's
> message so it fits in that fixed delay). Similarly, the master can "know"
> that your reply will never exceed a fixed duration so it won't issue
> another
> beacon/request until that has expired.

Or the master can wait to send another message until the reply from the
peripheral is complete.


> Hopefully, this makes sense. Sorry, I'm off for a pro bono day so no time
> here... :-/

Actually this is poorly thought out. The problem is not with the
protocol. The problem is a basic hardware limitation which makes it
hard to control the driver enable at the time it is needed. Either you
have no understanding of the problem or you just couldn't be bothered to
actually respond about the problem at hand.

--

Rick

Don Y

unread,
Oct 14, 2014, 9:09:35 AM10/14/14
to
On 10/14/2014 12:28 AM, pozzugno wrote:
> Il 13/10/2014 17:17, Don Y ha scritto:
>> On 10/13/2014 4:58 AM, pozzugno wrote:
>
>> First, you need to *know* that it is *you* that has been granted access to
>> the bus/resource. If this requires you to perform some analysis of the
>> ENTIRE MESSAGE (to verify that the "address field" is, in fact, intact!),
>> then you probably DON'T want to try to ride the coat-tails of the message,
>> directly.
>>
>> As the front end of a message (including yours) is more likely to be
>> corrupted
>> (by a collision on the bus -- someone jabbering too long or too soon), you
>> might consider designing a packet format that has one checksum on the
>> address field "early" in the packet (before the payload) and another that
>> handles the balance of the message.
> >
> > This allows you to capture the address information and it's closely
> > following
> > checksum, verify that it is *you* that are being addressed and prepare
> > for your
> > acquisition of the bus before the message has completely arrived.
>
> I have just one checksum at the end of the message, but the address field is at
> the beginning. Anyway I look at the address field as it arrives to decide if
> it's a frame for me.

Correct.

> I know the address field could be corrupted, but IMHO it's not important. If
> the address field is corrupted and appears to be for me, but the master wanted
> to talk to another node, I store the message till the end, but the checksum
> will be wrong, so the frame will be discarded. If the address field is
> corrupted and it doesn't appear to be for me, but the master really wanted to
> talk to me, I discard the message early.

Also correct.

> IMHO, adding a new checksum at the beginning of the frame, only to protect the
> address field, doesn't add more robustness to the final performance.

You haven't indicated how big the payload is. Or, the effort required to
create the reply.

In your scheme, you MUST deliver a reply as soon as the message from the
master is complete (you are free to define "soon").

Verifying the address (as yours AND "intact") lets you know whether or not
you have to deal with the payload AT ALL.

>> You can also arrange to access the bus in "timeslots" referenced to some
>> easily recognizable event (e.g., a beacon sent by the master). So,
>> you do all of your timing off of that single event (see "some special
>> point" in the beacon, start timer, wait to transmit until "your slot").
>>
>> Note this also works if you just assume the "next slot" after the master's
>> message is the slot you should use (you just limit the length of the
>> master's
>> message so it fits in that fixed delay). Similarly, the master can "know"
>> that your reply will never exceed a fixed duration so it won't issue
>> another
>> beacon/request until that has expired.
>>
>> Hopefully, this makes sense. Sorry, I'm off for a pro bono day so no time
>> here... :-/
>
> I think I'll have to think more deeply about this beacon approach. Any useful
> info on the Internet?

<shrug> Probably. But, I've not gone looking for it. You might look at some
of the token passing algorithms (ARCNET?) to see what they have to say by
way of example.

The key advantage is that it lets you set up the time at which your reply will
be "required" instead of having to "watch carefully" for the end of the
immediately preceding transmission (from the master). This lets you deal with
bigger time intervals instead of bit-level timing. I.e., recognize the start
of the beacon -- because that is usually a more repeatable "event" -- (and
its content) and then IGNORE the serial line until you know it is your
timeslot to reply (you don't care what the data on the bus happens to be from
the other nodes -- why even receive it?).

[Unless you also want to use this scheme to allow the other nodes to be
bus masters and directly deliver their messages to other nodes, instead of
via the master. I.e., in time slot N, node N can send a message to ANY node
(which may be a REPLY to a message sent from that other node, earlier).
The master's significance is just to coordinate the timeslots. Note this
is far more involved than anything discussed so far]

Note that you also know when your timeslot *ends* and the next begins.
So, you can turn on the driver for the entire duration REGARDLESS of how
long your message is -- subject to the constraint that you know it must
fit in the time allotted. And, even before you are ready to transmit!

If you are sluggish getting onto the bus... <shrug> No problem. As
long as you hit *your* timeslot. I.e., you can be sluggish if you know your
reply is short enough to still get out in the time remaining.

In other words, you can trade processing time for response time. If you
have to digest a complex message from the master, just make sure your expected
reply is SHORT: ACK vs. NAK instead of detailed.

If you need to deliver a detailed reply, then modify your protocol so that
you can get some forewarning of its impending need. E.g., a message that
effectively says, "prepare the detailed reply for me" to which you
acknowledge (or NOT!) your understanding of the request AND your ability
to deliver it (which might require scheduling resources that aren't
normally available for that). Then, later, expect to get a message asking
for the actual details (that you have presumably "staged" in anticipation
of this).

[You can also modify the protocol so the master does this in three steps:
    "Prepare the details"   "ACK/NAK"
    "Are you ready yet?"    "ACK/NAK"
    "OK, give them to me!"]

It's just another way to get stuff out of ISR's and into lower priority
processing.

It also lets you modify the protocol to give everyone a slot (if you have a
small number of nodes) instead of having the master constantly prompting
everyone (which wastes bandwidth as well as increasing the number of instances
where the bus has to be handed off "per unit message exchange")

Finally, it allows the master to routinely distribute information that may be
pertinent to all nodes (e.g., current system time).

I have no idea what your actual communications needs are (you haven't indicated
an application, etc.). I offer it as an alternative approach to consider
instead of the more obvious "ping-pong" of command-response that you appear
to be pursuing. Only you can tell if it has merit in your case. If messages
pass between nodes (via the master?) infrequently, then this wastes a lot
of time on the bus -- because most timeslots will contain no information
(though the corresponding node can be REQUIRED to deliver an acknowledgement
if it is important for you to use that as a keepalive/verification that the
node is still "up").

Of course, if the master is cyclically *polling* each of these nodes, then even
MORE time is wasted (the time for the individual solicitations and the negative
acknowledgements).

I use a similar scheme with Ethernet to manage multiple (~120) co-operating
hosts and ensure all have "system data" that needs to be delivered periodically
(the beacon is a "periodic token") as well as ensuring that the required nodes
are "up".

[Of course, my nodes can all chat as necessary so it isn't used to
arbitrate access to the medium]

Don Y

Oct 14, 2014, 9:09:44 AM
>> You should be able to get ~60 second delays.
>>
>> If you *know* you will always look at a value "more often" than the
>> wraparound
>> period, you can always *deduce* wraparound trivially:
>>
>> unsigned now, then;
>>
>> if (now < then)
>>     now += counter_modulus;
>>
>> (effectively)
>
> I don't think I have got your point.

Assuming the counter counts UP, the only way you can see a SMALLER value
than the last one you saw was if the counter wrapped in the time between
those two observations. I.e., the "real" value of the counter NOW is
actually counter_modulus greater than the OBSERVED value.

If you want to know how much time has transpired, it is:
now + (counter_modulus - then)
i.e., the time from "then" to the counter wrapping PLUS the time from
the counter wrapping ("0") to "now".
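In C, that deduction might look like this (a minimal sketch; the 8-bit counter width and the function name are my choices, not anything fixed by the discussion):

```c
#include <stdint.h>

/* Elapsed ticks from 'then' to 'now' on a free-running up-counter,
   assuming at most one wrap occurred between the two observations.
   Here counter_modulus = 256 for an 8-bit counter. */
static uint16_t elapsed_ticks(uint8_t now, uint8_t then)
{
    const uint16_t counter_modulus = 256u;

    if (now >= then)
        return (uint16_t)(now - then);          /* no wrap occurred */

    /* Counter wrapped: time from 'then' up to the wrap, plus the time
       from the wrap ("0") to 'now'. */
    return (uint16_t)(now + (counter_modulus - then));
}
```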

> I usually use the following comparison to determine whether a timer tmr has
> expired.
>
> ((uint32_t)(ticks - tmr) <= UINT32_MAX / 2)
>
> In this way I lose half of the total period, but it isn't usually a big issue.

>>> With a "software" ticks counter incremented in a timer ISR, it's
>>> simpler to
>>> calibrate the hardware counter to trigger every 1ms or similar nice
>>> values.
>>
>> Timer IRQ's (esp the jiffy) are a notorious source of problems.
>
> What do you mean by "jiffy"? Are you calling my approach "jiffy"? I didn't
> understand.

The "periodic interrupt" common in most systems is the "jiffy".
<http://catb.org/~esr/jargon/html/J/jiffy.html>
<http://lwn.net/Articles/549593/>
There are probably better references (I'm pressed for time)

> I'm sorry, I think I completely failed to understand what you have written :-(
> I don't dare ask you to explain again in greater detail what you have written;
> do you have a link where I can study this "beacon" approach?

Ah, well... <grin> Ignore my previous reply!

>> If it hasn't been said (and, if your environment can accommodate it), you
>> might want to look at a different signalling/comms technology that allows
>> for a true party-line (resolving contention in hardware).
>
> Any suggestions?

CAN

Off to another pro-bono day. (Boy do I hate mornings!)

Dave Nadler

Oct 14, 2014, 12:58:18 PM
On Monday, October 13, 2014 9:02:33 PM UTC+2, upsid...@downunder.com wrote:
> ...you can't get an interrupt, when the last bit of
> the last character is actually shifted out of the Tx shift register.

Sure you can, just append N dummy TX bytes where "N" is the TX FIFO depth.

But check it on the scope!!

Hope that helps,
Best Regards, Dave

rickman

Oct 14, 2014, 2:20:24 PM
On 10/14/2014 12:58 PM, Dave Nadler wrote:
> On Monday, October 13, 2014 9:02:33 PM UTC+2, upsid...@downunder.com wrote:
>> ...you can't get an interrupt, when the last bit of
>> the last character is actually shifted out of the Tx shift register.
>
> Sure you can, just append N dummy TX bytes where "N" is the TX FIFO depth.
>
> But check it on the scope!!

That is a very dangerous way to control the driver enable. Most likely
the interrupt will fire just as the start bit of the dummy byte is sent,
which means the output will already be driving that start bit when the
driver is disabled.

I think you are worrying about the wrong end of the message.

--

Rick

rickman

Oct 14, 2014, 2:28:13 PM
On 10/14/2014 9:09 AM, Don Y wrote:
> On 10/14/2014 12:28 AM, pozzugno wrote:
>
>> IMHO, adding a new checksum at the beginning of the frame, only to
>> protect the
>> address field, doesn't add more robustness to the final performance.
>
> You haven't indicated how big the payload is. Or, the effort required to
> create the reply.
>
> In your scheme, you MUST deliver a reply as soon as the message from the
> master is complete (you are free to define "soon").

Really? "MUST"??? All the respondent is required to do is respond
before the master times out thinking the respondent is not replying.
This timeout should be set as required by system level requirements such
as throughput, etc.

Don't make this as complex as the things you design.


> Verifying the address (as yours AND "intact") lets you know whether or not
> you have to deal with the payload AT ALL.

Who cares?


>> I think I'll have to think more deeply about this beacon approach.
>> Any useful
>> info on the Internet?

The "beacon" is pointless, don't waste your time with it. It has no
advantage over a simple poll/response protocol and in fact is just the
same thing with more complication and no added benefit.

One point you should be aware of is that your start and end characters
can be compromised and the protocol has to deal with that. So consider
what happens when they are munged and not recognized. Make sure your
protocol is robust to those problems.

--

Rick

Arlet Ottens

Oct 14, 2014, 2:44:29 PM
On 10/13/2014 01:58 PM, pozzugno wrote:

> Of course, other interrupts can be triggered. What happens when
> interrupt X (whatever) triggers just before the "transmit complete"
> interrupt? The result is the ISR X is called, postponing the execution
> of "transmit complete" ISR. The RS485 driver will be disabled with a
> certain amount of delay. In the worst case, the driver could be
> disabled with a delay that is the sum of the duration of all ISRs that
> could trigger.

Try to reduce the time spent in other ISRs.

>
> If a node on the bus is very fast and starts transmitting (the master)
> or answering (one slave) immediately after receving the last byte, but
> when the previous transmitting node is executing other ISRs, the final
> result is a corrupted transmission.
>
> What is the solution? I think the only solution is to define, at the
> design time, a minimum interval between the receiving of the last byte
> from one node and the transmission of the first byte. This interval
> could be in the range of 100 microseconds and should be calibrated on
> the sum of duration of *all* ISRs of *all* nodes on the bus. It isn't a
> simple calculation.

Instead of calculating, you can also build the design, then measure how
long it takes and adjust the timeout value until the error rate is
sufficiently low. I'm assuming that an error rate > 0 is acceptable.

>
> Moreover, implementing a short "software" delay in the range of some
> microseconds isn't a simple task. An empty loop on a decreasing
> volatile variable is a solution, but the final delay isn't simple to
> calculate at the design time, and it could depend on the compiler,
> compiler settings, clock frequency and so on. Use an hw timer only for
> this pause?

Use a hardware timer, but it doesn't have to be dedicated to this purpose.
Often you can use match interrupts on a free-running timer; you just have
to adjust the match registers after each interrupt. I do this all the time.
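That trick might be sketched like this (TIMER_COUNT and TIMER_MATCH are stand-ins for the real peripheral registers of whatever MCU is in use; they are plain variables here, and the names are illustrative):

```c
#include <stdint.h>

/* Free-running-timer approach: instead of a busy-wait, arm a match
   interrupt a fixed delay from "now". */
#define DELAY_TICKS 100u            /* e.g. ~100 us at a 1 MHz timer clock */

static volatile uint16_t TIMER_COUNT;   /* free-running up-counter */
static volatile uint16_t TIMER_MATCH;   /* IRQ fires when COUNT == MATCH */

/* Wrap-safe next match value: the addition is done in the timer's own
   16-bit modular arithmetic. */
static uint16_t next_match(uint16_t current_count, uint16_t delay)
{
    return (uint16_t)(current_count + delay);
}

/* Called from the "last byte shifted out" interrupt: schedule the
   driver-disable instead of waiting for it in the ISR.  The match ISR
   (not shown) would drop the DE pin and disarm itself. */
static void schedule_driver_disable(void)
{
    TIMER_MATCH = next_match(TIMER_COUNT, DELAY_TICKS);
}
```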

Keep in mind that the receiver also needs to process the incoming
packet, verify checksums, and prepare a response. You can use the dead
time for that.

>
> How do you solve this problem?

Use a better microcontroller ?

Don Y

Oct 14, 2014, 5:34:37 PM
You're stuck with this problem because you are forcing processing and
timeliness constraints into the ISR's. An ISR should *only* do what it
ABSOLUTELY MUST. "In and out", lickety-split!

Your description SUGGESTS that you are implementing your comms system
as a state machine at the RxIRQ level, something like this (pseudocode):

GetCharacter:
    retrieve character from comms hardware
    note error flags associated with this reception
    if any error, do some sort of error recovery/logging
       (or, do state specific error recovery, as appropriate)
    ret

AwaitSoH:
    header = GetCharacter()
    if (header != Start_of_Header)
        diagnostic("SoH not received when anticipated")
        // leave RxIRQ as is; remain in the state awaiting SoH
    else
        set_RxIRQ(SoHReceived)
    return from interrupt

// the above assumes SoH doesn't occur in a message body.  If it
// does, then you revisit this state occasionally as you sync up to
// the data stream

SoHReceived:
    address = GetCharacter()
    if (address != MyAddress)
        diagnostic("message APPARENTLY not intended for me")
        set_RxIRQ(AwaitSoH)    // a simplification, for illustration
    else
        message_length = 0     // prepare for payload to follow
        InitializeChecksum(address)
        set_RxIRQ(AccumulateMessage)
    return from interrupt

AccumulateMessage:
    datum = GetCharacter()
    buffer[message_length++] = datum
    UpdateChecksum(datum)
    if (message_length > MESSAGE_LENGTH)
        set_RxIRQ(AwaitChecksum)
    return from interrupt

AwaitChecksum:
    checksum = GetCharacter()
    if (checksum != computedChecksum)
        diagnostic("message failed integrity check")
        error
    else
        parse message (unless you've been doing this incrementally)
        act on message
        prepare result
        wait until master has turned off its bus driver
        turn on your bus driver and transmitter
           (or, have the scheduler do so)
        schedule your reply for transmission
    set_RxIRQ(AwaitSoH)
    return from interrupt

[Note that you may, instead, have folded all of this into one static RxIRQ
by conditionally examining a "state variable" (byte counter?) and acting
accordingly:
    if (byte_count == 1)
        check if correct header
    else if (byte_count == 2)
        check if correct address
    else if (byte_count ...
I manipulate the interrupt vector instead as each little ISR "knows"
what the next ISR should be so why introduce a bunch of conditionals?]
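The folded-together variant might look like this in C (the frame layout, names, and the simple additive checksum are assumptions for illustration only):

```c
#include <stdint.h>

/* One static Rx ISR driven by a state variable.  Assumed frame layout:
   SOH, address, fixed-length payload, additive checksum. */
enum rx_state { RX_SOH, RX_ADDR, RX_PAYLOAD, RX_CSUM };

#define SOH        0x01
#define MY_ADDRESS 0x42
#define MSG_LEN    4

struct rx_ctx {
    enum rx_state state;
    uint8_t       buf[MSG_LEN];
    uint8_t       count;
    uint8_t       csum;
    uint8_t       complete;    /* set when a good frame for us arrived */
};

/* Feed one received byte; in the real system this body *is* the Rx ISR. */
static void rx_byte(struct rx_ctx *c, uint8_t d)
{
    switch (c->state) {
    case RX_SOH:
        if (d == SOH)
            c->state = RX_ADDR;
        break;
    case RX_ADDR:
        if (d == MY_ADDRESS) {
            c->count = 0;
            c->csum  = d;               /* seed checksum with address */
            c->state = RX_PAYLOAD;
        } else {
            c->state = RX_SOH;          /* not for us: resync on next SOH */
        }
        break;
    case RX_PAYLOAD:
        c->buf[c->count] = d;
        c->csum += d;
        if (++c->count == MSG_LEN)
            c->state = RX_CSUM;
        break;
    case RX_CSUM:
        c->complete = (d == c->csum);   /* hand off to task level if good */
        c->state = RX_SOH;
        break;
    }
}
```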

And, your TxIRQ (once the reply has been scheduled):

TxIRQ:
    SendCharacter(buffer[message_length++])
    if (message_length > MESSAGE_LENGTH)
        set_TxIRQ(ShutdownTx)
    return from interrupt

ShutdownTx:
    wait until last character cleared transmitter (NOT holding reg)!
    wait until line has stabilized (bus driver)
    turn off bus driver
    set_TxIRQ(none)
    return from interrupt

[Of course, message lengths can vary between Tx and Rx, etc]

So, all of your IRQ's are lightweight. EXCEPT the "AwaitChecksum"
state. There, you have to do a fair bit of processing ON THE HEELS OF
the final character in the master's transmission (you could add an
additional "trailer" but that's just more characters to receive and
process and doesn't fundamentally change the algorithm).

The delays ("wait until...") are all relatively short. Yet, not
necessarily "opcode timing" short. So, you sort of have to sit around
twiddling your thumbs until you think you've met the required delays
(you can't make any *observations* to tell you when the time is right...
when the master has turned off its bus driver, etc.)

Or, throw some hardware resource at the problem (small interval timing)

All of that waiting wants to be done OUTSIDE the ISR. Yet, you can't
afford for it to be unbounded -- because your master no doubt expects a
reply in *some* time period else a dropped message would hang all comms!
(and it probably uses a lack of reply to indicate a failure of your node)

I.e., there are LOWER and UPPER bounds on when you can start your reply.
Too soon and you collide with the tail end of the master's transmission;
too late and you risk the end of your reply running into the master's
*next* transmission.

[Or, you can add a timer to the master so that it doesn't start its next
transmission until it is *sure* you are finished transmitting]

Likewise, all of the *processing* that isn't time critical (or, SHOULDN'T
be!) wants to happen outside the ISR.

If, instead, you could note the time at which a "request" from the master
was sent and use that as a reference point from which to determine
when you would have a CHANCE to deliver a reply ("timeslot"), then
you can do all of this processing AND waiting outside of the IRQ.

[If you force that time to be JUST the duration of the master's message,
then you don't have any real leeway -- you have to act promptly! You're
stuck with your present dilemma.]

For example, assume you have a 1ms periodic interrupt (change to
suit your needs). Assume you are delivering data at 9600 baud
(change to suit your needs). Assume messages from the master are
M characters long.

[I've chosen numbers that make the math relatively easy so you don't
have to dig out a calculator. Changing the values just changes the
math. I.e., characters are arriving roughly at the same rate as your
periodic interrupt -- though they aren't guaranteed to be in
a particular phase relationship with it. (this is not a requirement
of this approach, just a coincidence for the numbers I have chosen)]

On a particular node, you notice a "SoH" received sometime between
periodic interrupt S-1 and S (because you modify your AwaitSoH
ISR to signal an event that you can then examine -- or, let it
capture the "periodic counter" *in* the ISR and post that time value
as the "event").

You KNOW the master's message will not be complete until at least
(S-1)+M but definitely before S+M -- because you have *designed* to
this goal!

Furthermore, you know that your timeslot is offset X ms from the
StartOfHeader in the master's message (i.e., time ~S). You KNOW
that you can't safely turn on your bus driver until S+X (because those
are the rules of the protocol) but you *do* know that the master
will have turned his bus driver off by then (because it is following
the same rules!)
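In C, that bookkeeping might be sketched as follows (the 32-bit tick width, the names, and the half-period caveat are my assumptions):

```c
#include <stdint.h>

/* Tick at which this node may enable its driver: the SoH event time S
   plus the node's fixed slot offset X.  Plain modular addition. */
static uint32_t slot_start(uint32_t soh_tick, uint32_t offset_ticks)
{
    return soh_tick + offset_ticks;      /* S + X, modulo 2^32 */
}

/* True once 'now' has reached or passed 'when'.  Wrap-safe as long as
   the interval involved is shorter than half the counter period. */
static int time_reached(uint32_t now, uint32_t when)
{
    return (int32_t)(now - when) >= 0;
}
```

A periodic task would simply test time_reached(ticks, slot_start(S, X)) and, when it fires, enable the driver and start the queued reply.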

So, you schedule a job that turns on your bus driver at S+X and
pushes *a* reply onto the bus.

*A* reply.

This need not be *the* reply to the message from the master sent
at time S! It may, instead, be a reply to the message sent by
the master at the *previous* "time S". Or, the one before that!

Instead of tail-gating the master's message and trying to reply
as soon as the last character in its message has cleared the medium
(or, some epsilon later), you decouple your replies from the master's
requests.

With the timeslot, you *know* you can send a reply EVEN IF THE MESSAGE
FROM THE MASTER IS NOT FOR YOU! I.e., you don't have to check the address,
decode the message, act on it AND compose your reply *now*. Likewise,
other nodes know that they can send THEIR replies even when the current
message is for *you* and not them!

You just support some number of outstanding messages to each node
(perhaps just "1" but bigger numbers are better) and tag them with
a (small) sequence number -- so the master can pair replies to
outstanding requests AND so you can see when the master has given
up on an "old" request that you perhaps forgot to acknowledge.

[E.g., you can conceivably reply to message 2 before replying to message
1 -- if that makes sense in your current execution environment. If
not, then Reply2 has to wait to be scheduled until Reply1 has been sent.
You are free to arrange those criteria as fits your processing capabilities.
You don't HAVE TO reply to the message *now*.]

Note that this can be scaled by supporting a smaller number of
timeslots than there are physical nodes in the system -- the
master can allow nodes (1 - Q) to use the Q timeslots following
*this* message (before it issues the NEXT message) and the (Q+1 - 2Q)
nodes to use the Q timeslots following the NEXT message.

The point is, each node knows AHEAD OF TIME when it can reply (when
it can turn on its bus driver) instead of having "very little notice"
and having to react promptly.

It also allows the nodes to know that communications are "fair"
and "deterministic". Any node knows how long it must wait before
it is *guaranteed* a chance to access the medium. (if there was
no guarantee, then nodes wouldn't have been able to PREDICT when
they should acquire the medium and place their messages on it)

You've moved the:
    parse message (unless you've been doing this incrementally)
    act on message
    prepare result
steps from the ISR into a lower priority task where you, presumably,
have more leeway in addressing those needs (than you would in an
ISR that wants to be short!). *All* of your ISRs are now short
(because they just empty the receiver or stuff the transmitter
and don't do any *decoding*, processing, etc.)

RxIRQ:
    datum = GetCharacter()
    FIFO <- (datum, timestamp)
    return from interrupt

Something else watches the FIFO to try to identify messages
within. When it does, it knows when the message began (because
of the timestamp on that datum) so it can figure out when a reply
*should* be scheduled -- even though the reply might not be the
reply for this message.

Instead of having to deliver a response *NOW*, your protocol moves
the time that you are granted to fabricate a response out of the
low level "driver". You could conceivably allow different timeouts
for different types of messages, etc.

And, because you know when to expect a message from the master
(even if it is not for you!) -- because the protocol has the master
sending a message followed by Q reply timeslots -- and when, relative
to that, to present your reply, the timing of the bus arbitration is
decoupled from the immediacy of a particular "Rx IRQ". You're not
trying to accurately time/delay "character times", "bit times" or smaller.
You, instead, rely on a timebase that you already have in place.
Making timing decisions in a more tolerant environment (including,
potentially, the enabling and disabling of the bus driver)

If you look at higher performance "packet protocols", you will tend
to see this same pattern repeated. I.e., you don't send an ethernet
message and expect the receiving interrupt on the destination node
to prepare and deliver a reply! Nor do you expect the sending node
to twiddle its thumbs awaiting that reply; nor the reply to message 1
before it sends message 2 (to you *or* some other node).

You just have to agree, ahead of time (i.e., protocol specification!)
how long you are willing to wait for old acknowledgements and how
many you are willing to let remain outstanding at any given time.
This defines the time you have to "handle" a particular message; NOT
some artificially tight constraint IN an ISR.

Then, "do the math" for your particular periodic interrupt rate
and data rates (and message lengths, node counts, etc.) so you can
ensure each node gets serviced in a timely fashion. The point isn't
to maximize throughput but, rather, to be able to provide *guarantees*
without overspecifying hardware or needlessly tightening constraints
on the software.

This turns a hard real-time approach (where missing a deadline means
you simply abandon the action that you were attempting) into a softer
one with more flexible deadlines (where missing a deadline doesn't
render the effort "wasted" but, rather, salvageable at some possibly
reduced "value", at a later time).

Otherwise, you "must" reply to this message now. And, if you can't,
what makes you think you will be able to the *next* time it is
sent to you (i.e., as a "retry")? Your implementation is "more brittle".
(what happens if some node needs an ISR for some other capability
and that ISR interferes with your TIMELY processing of "this" message?)

[I currently use a variation of this "over ethernet" (where it isn't
necessary) in anticipation of porting the design to a wireless
implementation (where folks can't all "talk at once" yet deterministic
delivery is required). I.e., "bus driver" in my case is "RF transmitter"]

*Think* about this sort of implementation... what it means/costs in your
hardware. Put ballpark numbers on how much time it allows slaves to
handle messages, etc. Decide what the impact on the devices (master?)
making those requests might be: e.g., if they BLOCK until they get a
request (and no other processing CAN occur on that node), then you
would want to favor more prompt responses.

OTOH, if you can do meaningful work while awaiting a reply, then the
impact of a delayed reply is minimized.

I reviewed a colleague's implementation of a product a few weeks back
that had several CAN nodes collaborating on the services provided.
Every message request he sent out, I asked:
"What happens if you NEVER get a reply to this?
What happens if the reply is delayed?
Is this node effectively *stuck* in that get_reply() for the duration?"
And, eventually:
"Ahhhh... so that's why you are doing all this work in your ISRs that
one would more safely do elsewhere -- less precious! Can't risk things
catching fire while you're waiting for a reply!"

David Brown

Oct 14, 2014, 5:35:33 PM
On 14/10/14 20:20, rickman wrote:
> On 10/14/2014 12:58 PM, Dave Nadler wrote:
>> On Monday, October 13, 2014 9:02:33 PM UTC+2, upsid...@downunder.com
>> wrote:
>>> ...you can't get an interrupt, when the last bit of
>>> the last character is actually shifted out of the Tx shift register.
>>
>> Sure you can, just append N dummy TX bytes where "N" is the TX FIFO
>> depth.
>>
>> But check it on the scope!!
>
> That is a very dangerous way to control the driver enable. Most likely
> the interrupt will happen when the start bit is sent which means the
> output will already be sending the start bit when the driver is disabled.
>

It can often work fine - at worst, the receiver sees a brief noise at
the end of the telegram itself, and that is easily ignored.

However, if you are worried that the "transmission complete" interrupt
might be delayed too long, then clearly the same thing will apply to the
final "transmit character" interrupt for your extra character. So it is
a useful trick if you don't have a "transmission complete" interrupt
signal, but not for the problem at hand here.

rickman

Oct 15, 2014, 2:00:20 AM
Is all of this based on the idea that the transmitter empty interrupt is
not valid in some way?

--

Rick

upsid...@downunder.com

Oct 15, 2014, 2:09:03 AM
On Mon, 13 Oct 2014 22:02:33 +0300, upsid...@downunder.com wrote:

>You must be quite desperate, if you intend to use 1x550 style chips on
>RS-485 :-). That chip family is useless for any high speed half duplex
>communication.
>
>You can get an interrupt when you load the last character into the Tx
>shift register, but you can't get an interrupt, when the last bit of
>the last character is actually shifted out of the Tx shift register.
>
>In real word, some high priority code will have to poll, when the last
>bit of your transmission has actually sent the last stop bit of your
>last byte into the line.

Another way of dealing with '550 style UARTs on RS-485 is to use a
driver chip that doesn't disable the receiver during your own
transmit. Thus the UART Rx pin will hear your transmission and the Rx
ISR can do "echo canceling" by monitoring your own transmission. As
soon as the Rx interrupt hears your complete final transmitted byte,
the Rx ISR can turn off the transmit enable (RTS) and change the Rx
mode from echo canceling to normal Rx mode.

This makes it possible to do all things in the ISR and you don't have
to do anything in normal code (with potentially bad latencies in some
OS).
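The echo-cancelling Rx ISR logic might be sketched like this (the structure and names are mine; a real driver would also treat a mismatched echo as a collision or line fault):

```c
#include <stdint.h>

/* Echo cancelling: the receiver stays enabled during our own
   transmission, so every byte we send comes back on Rx.  The Rx ISR
   counts the echoes; once the *last* echo has been heard, the final
   byte is on the wire and it is safe to drop the transmit enable. */
struct echo_ctx {
    const uint8_t *tx_buf;      /* the frame being transmitted */
    uint8_t        tx_len;
    uint8_t        echoed;      /* echoes heard so far */
};

/* Called from the Rx ISR for each received byte while transmitting.
   Returns 1 when the caller should now turn off the driver (RTS). */
static int echo_rx(struct echo_ctx *c, uint8_t rx)
{
    if (c->echoed < c->tx_len && rx == c->tx_buf[c->echoed]) {
        c->echoed++;
        if (c->echoed == c->tx_len)
            return 1;           /* last byte fully on the wire */
    }
    /* A mismatched echo would indicate a collision or line fault;
       a real driver would flag an error here. */
    return 0;
}
```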

David Brown

Oct 15, 2014, 3:47:32 AM
No, my point was that Dave's suggestion of sending an extra byte is not
"very dangerous" as you suggest, and can be a useful trick. It is not
needed here as the OP has a "transmission complete" interrupt which
triggers when the final send is complete. Many other microcontrollers
and UARTs don't have a suitable interrupt on the final character (or
have flaws with it, such as an interrupt that triggers at the start of
the stop bit - turning off the driver at that point can cause
hard-to-trace problems). For such micros, sending an extra byte can be
a good solution.

But as far as I can remember (it's a long time since I used an AVR), the
AVR's "transmission complete" interrupt works fine.


upsid...@downunder.com

Oct 15, 2014, 4:01:09 AM
Please tell us how to get a transmitter [shift register] empty
interrupt on the '550 style UARTs?

You can get an interrupt when the last byte is loaded into the transmit
SR, but you need to poll some status bits to know when the last byte has
been shifted out of the SR.
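That polling might look like this, using the standard 16550 Line Status Register bit assignments (the predicate is split out so it is testable; in a real driver its argument would come from reading the LSR, register offset 5):

```c
#include <stdint.h>

/* '550-style UARTs have no "shift register empty" interrupt, so after
   the last THRE interrupt you poll the LSR until TEMT reports the
   shift register has drained, then drop driver enable. */
#define LSR_THRE 0x20u   /* bit 5: Tx holding register empty */
#define LSR_TEMT 0x40u   /* bit 6: holding AND shift register empty */

static int tx_fully_drained(uint8_t lsr)
{
    /* TEMT implies THRE on a conforming part, but checking both costs
       nothing and guards against quirky clones. */
    return (lsr & (LSR_TEMT | LSR_THRE)) == (LSR_TEMT | LSR_THRE);
}
```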

There have been claims that dropping the SR empty status at the end of
the last bit but before the stop bit(s) might be a problem. However, with
standard "fail-safe" termination, the line is in the idle state during
the stop bit times anyway.

Turning the transmitter off a few bit times too late is usually not a
problem, except at extremely low line rates. A character-time delay can be
catastrophic if the other station responds rapidly.

In practice (I have seen several examples) there is the stupid
implementation in many devices, in which the designer relies on the
"last byte moved to the SR" interrupt and prematurely turns off the
transmitter, so the line floats to the idle "1" state. Since the UART
sends LSB bits first, the premature Tx disable will set the MSB bit(s)
to 1, so 0x0? is received as 0x8?, 0xC?, 0xE?, 0xF? or even as 0xFF.

Since many protocols put BCC/CRC into the last byte, the received and
calculated value differ by those high bits, which is a clear sign of
premature Tx disabling.

This problem becomes worse as the line speed is dropped. Some devices
can't be used below 9600 bit/s rates due to this problem.

Grant Edwards

Oct 15, 2014, 10:24:12 AM
We're talking about a 16550. There _is_no_ transmitter empty
interrupt. There is a transmit holding register empty interrupt, but
that happens _before_ transmission of the last byte has begun.

There is a transmit shift register empty status bit (no interrupt). In
my experience that status bit isn't reliable either and on some
implementations goes active before the final stop bit has been sent.

--
Grant Edwards   grant.b.edwards at gmail.com   Yow! Used staples are good with SOY SAUCE!

Grant Edwards

Oct 15, 2014, 10:26:22 AM
On 2014-10-15, upsid...@downunder.com <upsid...@downunder.com> wrote:

> An other way of dealing with '550 style UARTs on RS-485 is to use a
> driver chip that doesn't disable the receiver during your own
> transmit. Thus the UART Rx pin will hear your transmission and the Rx
> ISR can do "echo canceling" by monitoring your own transmission. As
> soon as the Rx interrupt hears your complete final transmitted byte,
> the Rx ISR can turn off the transmit enable (RTS) and change the Rx
> mode from echo canceling to normal Rx mode.
>
> This makes it possible to do all things in the ISR and you don't have
> to do anything in normal code (with potentially bad latencies in some
> OS).

That works, but you have to disable the RX FIFO so that you get
notified of RX bytes immediately.

--
Grant Edwards   grant.b.edwards at gmail.com   Yow! Everybody is going somewhere!! It's probably a garage sale or a disaster Movie!!

rickman

Oct 16, 2014, 1:17:30 AM
On 10/15/2014 10:24 AM, Grant Edwards wrote:
> On 2014-10-15, rickman <gnu...@gmail.com> wrote:
>> On 10/14/2014 5:35 PM, David Brown wrote:
>>
>>> It can often work fine - at worst, the receiver sees a brief noise at
>>> the end of the telegram itself, and that is easily ignored.
>>>
>>> However, if you are worried that the "transmission complete" interrupt
>>> might be delayed too long, then clearly the same thing will apply to the
>>> final "transmit character" interrupt for your extra character. So it is
>>> a useful trick if you don't have a "transmission complete" interrupt
>>> signal, but not for the problem at hand here.
>>
>> Is all of this based on the idea that the transmitter empty interrupt is
>> not valid in some way?
>
> We're talking about a 16550. There _is_no_ transmitter empty
> interrupt. There is a transmit holding register empty interrupt, but
> that happens _before_ transmission of the last byte has begun.

I don't know who "we" is, but the OP never said what UART he is using.
I had the impression it was a UART within an MCU from his initial post
where he refers to "some microcontrollers" toggling an output. Did he
say he is using a '550' type UART?

--

Rick

rickman

unread,
Oct 16, 2014, 1:30:32 AM10/16/14
to
I just don't know that putting the glitch on the bus is a good idea.
Minimizing the glitch depends on a fast response to the interrupt which
is most of what this thread has been discussing. A slow response puts a
larger glitch on the bus.

Personally I prefer to use hardware which is designed for the job and
will handle the driver enable properly.

--

Rick

pozzugno

unread,
Oct 16, 2014, 2:36:22 AM10/16/14
to
Yes, I'm using a UART peripheral integrated in microcontrollers, like
PICs from Microchip or AVRs from Atmel.

pozzugno

unread,
Oct 16, 2014, 3:01:07 AM10/16/14
to
On 13/10/2014 17:20, Grant Edwards wrote:
> On 2014-10-13, pozzugno <pozz...@gmail.com> wrote:
>
>> If a node on the bus is very fast and starts transmitting (the master)
>> or answering (one slave) immediately after receving the last byte, but
>> when the previous transmitting node is executing other ISRs, the final
>> result is a corrupted transmission.
>>
>> What is the solution? I think the only solution is to define, at the
>> design time, a minimum interval between the receiving of the last byte
>> from one node and the transmission of the first byte.
>
> Or put a some disposable preamble bytes on the front of messages. This
> is pretty common when using half-duplex modems: the padding bytes give
> the modems time to "turn-around" and get carrier established, PLLs
> locked, etc.

I was thinking about your suggestion, but it seems to me this approach
doesn't work well.

Others suggested sending some dummy bytes while keeping the RS485 driver
*disabled*. Only after those bytes have actually been shifted out is the
driver enabled. In other words, you use the UART hardware itself for
timing, without needing other peripherals (such as timers/counters).

Your suggestion seems better: send some sync bytes at the beginning,
with the RS485 driver *enabled*. My protocol can work with this
approach, because it is inspired by HDLC. Every frame starts and ends
with a sync byte. If the sync byte appears in the payload data, it is
escaped (as in the HDLC or SLIP protocols).
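As a minimal sketch of that kind of byte-stuffed framing: the flag and escape values below (0x7E/0x7D/0x20, as in SLIP/PPP-style encapsulations) are illustrative assumptions, not necessarily the protocol actually in use here.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical HDLC/SLIP-style framing as described above: every frame
 * starts and ends with SYNC, and any SYNC or ESC byte occurring inside
 * the payload is escaped so it can never be mistaken for a flag. */
#define SYNC    0x7E
#define ESC     0x7D
#define ESC_XOR 0x20   /* an escaped byte goes out as ESC, byte ^ ESC_XOR */

/* Encode payload into out (out must hold at least 2*len + 2 bytes for
 * the worst case).  Returns the number of bytes written. */
size_t frame_encode(const uint8_t *payload, size_t len, uint8_t *out)
{
    size_t n = 0;
    out[n++] = SYNC;                      /* opening flag */
    for (size_t i = 0; i < len; i++) {
        uint8_t b = payload[i];
        if (b == SYNC || b == ESC) {      /* stuff bytes that look like flags */
            out[n++] = ESC;
            out[n++] = b ^ ESC_XOR;
        } else {
            out[n++] = b;
        }
    }
    out[n++] = SYNC;                      /* closing flag */
    return n;
}
```

A receiver built this way treats back-to-back SYNC bytes as empty frames and drops them, which is exactly what makes a run of extra sync bytes harmless at the framing layer.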

The device could send N sync bytes without problems. The receiver will
see N empty frames and will silently discard them. This way it's even
simpler to introduce a delay before the answer.

But I think it doesn't work. Bytes are sent asynchronously and the
receiver must synchronize to the start bit of each byte. If the slave
sends 2 sync bytes at the front of each frame, without a delay, and the
master toggles direction in the middle of the two sync bytes, the master
will receive one or two wrong bytes or detect framing errors, depending
on the precise timing and transition pattern.

Moreover, if the payload is transmitted immediately after the sync
bytes, as really happens, the whole frame could be corrupted.

*Perhaps* this technique works well only if the preamble bytes are 0xFF,
because they appear on the wire as just a single start bit (all the
other bits, including the stop bit, are at the idle level).

---          ------------------          ------------------          -.........
   |        |     PREAMBLE     |        |     PREAMBLE     |        | START OF FRAME
   |        |       0xFF       |        |       0xFF       |        |
    --------                    --------                    --------
   ^   ^                       ^
   A   B                       C

If the master toggles direction at time A, it will receive the two
preamble bytes correctly and could discard them. The frame is received
correctly.
If it toggles direction at time C, it will receive only one preamble
byte correctly and could discard it. The frame is received correctly.

What happens if the master toggles direction at time B, in the middle of
a start bit? I don't know whether the UART detects a start bit on the
high-to-low edge or on the low level. In the first case, I think
there's no problem. In the second case, what happens?
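The "single start bit" claim above is easy to check by writing out the 8N1 bit pattern a UART puts on the wire. A small sketch (the function name is made up for illustration):

```c
#include <stdint.h>

/* The 10-bit level sequence an 8N1 UART puts on the wire for one byte,
 * one entry per bit time (0 = space/low, 1 = mark/idle level).
 * Start bit first, then the data bits LSB-first, then the stop bit. */
void uart_8n1_levels(uint8_t byte, uint8_t levels[10])
{
    levels[0] = 0;                        /* start bit: always low      */
    for (int i = 0; i < 8; i++)
        levels[1 + i] = (byte >> i) & 1;  /* data bits, LSB first       */
    levels[9] = 1;                        /* stop bit: idle level       */
}
```

For 0xFF the only low interval is the start bit itself; every data bit and the stop bit sit at the idle (mark) level, so each 0xFF preamble byte really does appear on the wire as a single one-bit-time low pulse.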

Tauno Voipio

unread,
Oct 16, 2014, 3:19:12 AM10/16/14
to
On 16.10.14 10:01, pozzugno wrote:

> *Perhaps* this technique works well only if the preamble bytes are 0xFF,
> because they appears on the wire just as a single start bit (all the
> other bits, including stop bits, are at the idle level).
>
> ---          ------------------          ------------------          -.........
>    |        |     PREAMBLE     |        |     PREAMBLE     |        | START OF FRAME
>    |        |       0xFF       |        |       0xFF       |        |
>     --------                    --------                    --------
>    ^   ^                       ^
>    A   B                       C


It does - I have been using it for years (20+) in several different
RS-485 links.

You also need a reliable way of recognizing frame boundaries, to
get the framing right. I'm using an encapsulation similar to
PPP (RFC 1662), which also makes it possible to exclude the
preamble bytes from valid frame data.

--

Tauno Voipio

rickman

unread,
Oct 16, 2014, 5:02:30 AM10/16/14
to
On 10/16/2014 3:01 AM, pozzugno wrote:
>
> I was thinking about your suggestions, but it seems to me it doesn't
> work well.
>
> Others suggested to send some dummy bytes, keeping the RS485 driver
> *disabled*. Only after those bytes are really shifted out, the driver
> could be enabled. In other words, you use the UART hardware also for
> timing, without the need to use other peripherals (such as
> timers/counters).
>
> Your suggestion seems better: send some sync bytes at the beginning,
> with the RS485 driver *enabled*. My protocol can be used with this
> approach, because it is inspired to HDLC. Every frame starts and ends
> with a sync byte. If the sync byte appears in the payload data, it is
> escaped (as in HDLC or SLIP protocols).

The block start and end bytes are important, but if they are not unique
you have to escape them in your data, which implies a message-handling
layer in your software. Or you can use a UART that recognizes the start
characters. If you use synchronous data transmission rather than async,
this is common.


> The device could send N sync bytes without problems. The receiver will
> see N empty frames and will discard them silently. In this way it's
> even more simple to introduce a delay before the answer.

How are N sync bytes equivalent to a bunch of empty frames? Is your sync
byte unique?

I'm not sure I see a problem. UARTs typically see the falling edge as
the start of the start bit, but check at least once in the middle of the
bit to see if it was just noise. Too short a pulse and it is discarded;
long enough and it will be seen as a start bit. Since the UART starts
looking for a new start bit in the middle of the stop bit
(approximately), such a start-bit timing discrepancy will not create a
problem, although this leaves no room for timing variation between the
two ends. As long as point B is in the first preamble char and not the
last, it should be OK. But if you can be sure the driver disable will be
at worst no later than C, why can't the slave just hold off sending
the start of frame for another char?

Your approach will work, but it is no better than sending chars without
enabling the slave's driver in the first place. In fact, if you know
when the master's driver will be off the bus, you can get your message
out one character more quickly by not sending the last preamble byte.

--

Rick

rickman

unread,
Oct 16, 2014, 5:05:51 AM10/16/14
to
Do you know whether your transmitter-empty interrupt works properly, or
is that what prompted this entire thread?

I was looking at some of the USB dongles for RS-485 a while back, and
some of them make it clear that the driver enable is controlled directly
by the transmitter-empty signal, so the last char gets out of the module
correctly.

--

Rick

pozzugno

unread,
Oct 16, 2014, 6:54:25 AM10/16/14
to
On 16/10/2014 11:05, rickman wrote:
> On 10/16/2014 2:36 AM, pozzugno wrote:

> Do you know if your transmitter empty interrupt works properly or is
> this what prompted the entire thread?

AVR micros have two transmitter interrupts: data register empty and
transmit complete. The transmit complete interrupt should disable the
RS485 driver, and it seems to work well.
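On an ATmega that arrangement might look like the sketch below. The register, bit, and vector names are the usual UART0 ones, but the DE pin choice (PD2) is an assumption, and the hardware registers are stubbed as plain variables here so the logic can be exercised off-target; on real hardware they would be the memory-mapped registers from <avr/io.h> and an ISR(USART0_TX_vect) handler.

```c
#include <stdint.h>

/* Host-testable sketch of the AVR "transmit complete" handling described
 * above.  UCSR0B/PORTD are stubbed; TXCIE0 is the Tx-complete interrupt
 * enable bit, and the RS-485 DE pin on PD2 is an assumption made for
 * this example, not taken from the original post. */
static volatile uint8_t UCSR0B;          /* stub: USART0 control register B */
static volatile uint8_t PORTD;           /* stub: port holding the DE pin   */
#define TXCIE0 6                         /* Tx-complete interrupt enable    */
#define DE_PIN 2                         /* RS-485 driver enable on PD2     */

/* Body of ISR(USART0_TX_vect): the TXC interrupt fires only once the
 * final stop bit has been shifted out, so at this point the driver can
 * be disabled without truncating the last byte. */
void usart0_tx_complete_isr(void)
{
    PORTD  &= (uint8_t)~(1u << DE_PIN);  /* driver off: release the bus   */
    UCSR0B &= (uint8_t)~(1u << TXCIE0);  /* one-shot: disarm the TXC irq  */
}
```

The remaining exposure is exactly the one from the start of the thread: if another ISR is running when TXC fires, the driver stays enabled for that ISR's duration, which is why a turnaround gap still matters.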


> I was looking at some of the USB dongles for RS-485 a while back and
> some of them make it clear that the driver is controlled directly by the
> transmitter empty and so the last char gets out of the module correctly.

No, AVRs (and many, many micros, smaller and bigger) can't directly
control the RS485 driver/direction pin.


Grant Edwards

unread,
Oct 16, 2014, 10:16:48 AM10/16/14
to
Sorry, I must have gotten this confused with a different thread.

--
Grant Edwards grant.b.edwards Yow! I'm ZIPPY the PINHEAD
at and I'm totally committed
gmail.com to the festive mode.

David Brown

unread,
Oct 16, 2014, 2:46:42 PM10/16/14
to
On 16/10/14 07:30, rickman wrote:

>
> Personally I prefer to use hardware which is designed for the job and
> will handle the driver enable properly.
>

You'll get no arguments on that point. The trick of sending an extra
byte is a way to make things work with non-ideal hardware - but better
hardware is always nicer.

Dave Nadler

unread,
Oct 16, 2014, 3:00:47 PM10/16/14
to
On Thursday, October 16, 2014 7:30:32 AM UTC+2, rickman wrote:
> Personally I prefer to use hardware which is designed for the job and
> will handle the driver enable properly.

Me too. And I prefer datasheets that match the hardware,
and so forth. Then, I wake up from that nice dream, and
deal with reality ;-)