> We are experiencing problems between two vendors equipment regarding 802.3
> compliance for dribble bits. One vendor states that 1-7 dribble bits are
> allowed following the FCS of a MAC frame. Another vendor claims that any
> dribble bits beyond the FCS render the frame invalid.
>
The first vendor is correct. The standard (both the original Ethernet spec
and the IEEE 802.3 standard) allows up to 7 "dribble bits" following the
"real" end of the frame, with no error being noted or recorded. (In Fast
Ethernet, data is transferred in 4-bit chunks, so you can have at most one
"dribble nibble", rather than 7 "dribble bits".)
"Dribble bits" are an artifact of the earliest Ethernet transceiver designs
(including my own!). The transceiver determined when the end of a frame
occurred by looking at the drop in absolute DC voltage on the coax cable.
(Remember that coaxial Ethernet uses a DC signal to sense carrier and
collisions.) Now, this DC voltage cannot drop instantaneously, since you
need a low pass filter to extract the DC, and any such filter has a delay
and time constant. Worse, the timers used to determine when the next bit
"might be there" were analog (with wide tolerances). So there was some
probability that, while the DC voltage at the output of the low-pass filter
was returning to zero, a noise signal could cause a zero crossing and look
like a "real" bit.
A similar phenomenon occurred at the controller end of the transceiver
cable; while there was no DC level used for carrier-sense on the AUI, there
was still a "droop" from the AUI coupling transformers. The receiver had to
determine end-of-frame by detecting a period without a zero crossing of not
less than 120 ns (100 ns plus 20 ns worst-case zero-crossing jitter), and
not more than 160 ns. Again, this was originally done using analog timers
(hence the fairly wide range). During this period the transformers were
drooping according to their natural time constant (T=L/R, where L is the
magnetizing inductance of the coupling transformer and R is the impedance
of the twisted pair. For a 30 uH transformer and a 78 ohm line (typical),
this is a 385 ns time constant.) In the 160 ns period where you are trying
to figure out if there is going to be another bit, the voltage level will
droop to e^(-t/T)*V, or about 66% of the original voltage. That is, while
looking for a possible additional bit (which would mean that the frame had
not really ended yet), the signal-to-noise ratio will degrade by 34%,
making you that much more susceptible to noise. Any noise that caused an
extraneous zero crossing during this period would create a "dribble bit".
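If you want to check those numbers yourself, here is the arithmetic from the paragraph above as a small C program (nothing here comes from a real transceiver design; it is just the time-constant formula with the values quoted above):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Values from the text: 30 uH magnetizing inductance, 78 ohm line,
     * 160 ns worst-case end-of-frame detection window. */
    const double L = 30e-6;          /* henries */
    const double R = 78.0;           /* ohms    */
    const double t = 160e-9;         /* seconds */

    const double tau = L / R;                 /* time constant, ~385 ns       */
    const double remaining = exp(-t / tau);   /* fraction of signal left, ~66% */

    printf("time constant: %.0f ns\n", tau * 1e9);
    printf("signal remaining after 160 ns: %.0f%%\n", remaining * 100.0);
    printf("signal lost (SNR degradation): %.0f%%\n", (1.0 - remaining) * 100.0);
    return 0;
}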
Rather than try to tighten up the timing specs to reduce this probability,
we simply allowed dribble bits to occur. You might get one, or even a few,
in a really noisy environment, but we figured you wouldn't get 8. So the
algorithm is: wait for the last bit to show up (as defined by the last zero
crossing with no more zero crossings within 160 ns, as seen at the AUI
receiver), align the received bits on byte boundaries (i.e., ignore any
extraneous bits that do not constitute a complete byte), check the FCS, and
if it is good, then ignore the "dribble bits" and pass just the
byte-aligned data to the client. There is no error recorded.
If the FCS does *not* check out, then pass one of two error codes: "FCS
error" if the number of received bits was on an integral byte boundary, or
"Alignment error" if you had any dribble bits. Note that it was not the
dribble bits that were in error--these were never checked by the FCS. The
dribble bits simply changed the reporting of the FCS error from FCS-error
to Alignment-error.
(And the reason for this is yet ANOTHER historical story...)
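Sketched in C, the receive-side disposition described above (including the error reporting) looks roughly like this. The type and function names are mine, invented purely for illustration; real controllers such as the LANCE do all of this in silicon:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustration only: hypothetical classification routine, not driver code. */
enum rx_status { RX_OK, RX_FCS_ERROR, RX_ALIGNMENT_ERROR };

/* Classify a received frame.  bits_received is the total number of bits
 * clocked in before the end-of-frame window expired; fcs_good is the result
 * of the CRC check over the byte-aligned data only (the dribble bits are
 * never covered by the FCS). */
static enum rx_status classify_rx(size_t bits_received, bool fcs_good)
{
    unsigned dribble = (unsigned)(bits_received % 8);  /* 0..7 leftover bits */

    if (fcs_good)
        return RX_OK;   /* dribble bits silently discarded; no error recorded */

    /* Bad FCS: dribble bits only change how the error is reported. */
    return dribble ? RX_ALIGNMENT_ERROR : RX_FCS_ERROR;
}

int main(void)
{
    /* 64-byte frame, good FCS, 3 dribble bits -> still a valid frame. */
    printf("%d\n", classify_rx(64 * 8 + 3, true));    /* RX_OK (0)              */
    /* Bad FCS on an exact byte boundary -> FCS error. */
    printf("%d\n", classify_rx(64 * 8, false));       /* RX_FCS_ERROR (1)       */
    /* Bad FCS with dribble bits -> alignment error. */
    printf("%d\n", classify_rx(64 * 8 + 3, false));   /* RX_ALIGNMENT_ERROR (2) */
    return 0;
}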
> The first vendor references an Advanced Micro Devices 802.3 design handbook
> which states that "the LANCE can handle up to 7 dribbling bits when a
> receive packet terminates", and that receive frames containing dribbling
> bits with no CRC error are valid frames. This vendor seems to append a
> single dribble bit to the end of every transmitted frame after the FCS.
>
The AMD handbook is precisely correct, especially as the LANCE was designed
jointly by AMD and my own design team at DEC in the early 1980s! It
properly handles dribble bits, per the standard.
> The second vendor references 802.3 (looks like section 3.4 about Invalid
> MAC Frames) claiming that dribble bits make a frame invalid. However there
> is no reference to dribble bits in this section. This vendor's product
> discards frames which have the dribble bit appended. Thus the problem.
>
The second vendor apparently does NOT conform to the standard. Section 3.4
does list the criteria for invalid frames (including failing to pass the
FCS check), but these checks are performed AFTER dribble bits are stripped
away.
> Questions:
> 1. What can cause dribble bits to appear?
See above.
> 2. Why would vendor #1 attach them to each frame?
There is no reason to "go out of one's way" to create dribble bits.
I suspect that they have a design "idiosyncrasy" (e.g., an inability to
stop the RX clock precisely, so they always clock in an extra bit), and
decided that it wasn't worth fixing the bug, since a conformant system will
ignore the extra bit. I would probably have agreed with them if I were
consulting to them. (Maybe I did--you don't say who the vendor is!)
> 3. What purpose do they serve?
> 4. Are they legitimate?
See above.
> 5. Who is correct?
>
From the information you have provided, it would seem Vendor 1 is correct.
--
Rich Seifert Networks and Communications Consulting
sei...@netcom.com 21885 Bear Creek Way
(408) 395-5700 Los Gatos, CA 95033
(408) 395-1966 FAX
"... specialists in Local Area Networks and Data Communications systems"
Look for: "Gigabit Ethernet: Technology and Applications for High-Speed LANs"
http://cseng.awl.com/bookdetail.qry?ISBN=0-201-18553-9&ptype=0