
Ethernet vs. Token Ring


Mike Kattan

Jan 30, 1992, 10:27:00 PM
I hate to ask such a primitive question to this newsgroup, but I didn't
see a FAQ.

What are the advantages of token ring over ethernet, and vice-versa?

If you were trying to simulate the performance of each, what would be
the key variables?

Thanks in advance for any help. If there's much response, I'll post a
summary.

___________________________________________________________________________
Mike Kattan | Internet: kat...@jetson.uh.edu
Decision and Information Sciences | Bitnet: kattan@uhupvm1 kattan@jetson
University of Houston | Phone: (713) 749-6789 Fax: 749-6765
===========================================================================

Sam Drake

Feb 2, 1992, 12:49:18 AM
I'm not a Token Ring architect, but I've fiddled with it a bit. Let's see:

o In general, Token Ring performance degrades more gracefully than Ethernet
under conditions of very heavy load.
o Token Ring is star-wired, with a central hub. This simplifies
worst-case network management, since any station can be removed
from the ring by yanking a cable in the central wiring closet.
Classic Ethernet is messier, though some of the newer Ethernet
implementations (those not involving tapping into a single snaking
cable) have the same advantage.
o Token Ring has good network management. For example, a central
administrator can force any station on the ring to leave the ring
under software control.
o Token Ring is available in a 16 Mbit/sec flavor, which can be
beneficial in some environments (versus Ethernet's 10 Mbit/sec)
o Token Ring supports longer packets than Ethernet, which again can
provide better performance. Token Ring packets can be (from memory)
at least 2K on the 4Mbit version and can be over 12K on the 16Mbit
version, versus ~1.5Kbytes for Ethernet.
o Ethernet's cheaper and more pervasive.

OK, that's enough. Let the flames ... er, GAMES ... begin!

Disclaimer: IBM might agree with this ... but let's not find out, OK?


Sam Drake / IBM Almaden Research Center
Internet: dr...@almaden.ibm.com BITNET: DRAKE at ALMADEN

Vernon Schryver

Feb 2, 1992, 3:00:25 AM
In article <14...@rufus.UUCP>, dr...@drake.almaden.ibm.com (Sam Drake) writes:
> ...

> o Token Ring is available in a 16 Mbit/sec flavor, which can be
> beneficial in some environments (versus Ethernet's 10 Mbit/sec)
> o Token Ring supports longer packets than Ethernet, which again can
> provide better performance. Token Ring packets can be (from memory)
> at least 2K on the 4Mbit version and can be over 12K on the 16Mbit
> version, versus ~1.5Kbytes for Ethernet.
> ...


Would you care to quote some numbers for TCP/IP? For either the slow or
fast versions of Token Ring? I intend this as a serious question. In
principle, you would expect far more than 10Mbit/sec TCP throughput for
16Mb token ring. What do you see? Say with an RS6000?

Some chip vendors are running around showing overheads claiming decidedly
unimpressive numbers for the IBM MAC level performance. Of course, they
claim their numbers are far better. Also, of course, chip vendors can't
spell TCP/IP.

For comparison, modern, low cost (<$10,000) UNIX workstations get more than
1000KBytes/sec through TCP/IP/ethernet as measured by ttcp. I don't know
that IBM UNIX boxes do that, but I assume they do.


Vernon Schryver, v...@sgi.com

Sam Drake

Feb 4, 1992, 2:49:36 AM
In article <scott.697157270@labtam> sc...@labtam.labtam.oz.au (Scott Colwell) writes:
>I would like to hear what data rate can be achieved from application to
>application over tcp-ip/802.5. I have heard from unsubstantiated sources
>that it is significantly less than the wire rate unlike ethernet.

As my other followup said, we've seen 1.8MBytes/sec; I'd say your
sources are wrong.

>>o Token Ring supports longer packets than Ethernet, which again can
>> provide better performance.

>Large packets are most definitely not an advantage for networks that
>are used for interactive traffic (which means nearly all lans that are
>in use today.)

No argument. Depending on your application, long packets may be
good for you. They are not universally good, just as short packets
are not universally good. ATM networks do indeed use teeny packets,
but they do that mostly due to telco requirements, not DP requirements.

>By the way how many packets per second do token ring users see ? If they
>were restricted to 1500 byte max packets over token ring what would be the
>throughput ?

Sorry, I really have no idea....

Scott Colwell

Feb 3, 1992, 5:47:50 PM
dr...@drake.almaden.ibm.com (Sam Drake) writes:
>o Token Ring is available in a 16 Mbit/sec flavor, which can be
> beneficial in some environments (versus Ethernet's 10 Mbit/sec)

I would like to hear what data rate can be achieved from application to
application over tcp-ip/802.5. I have heard from unsubstantiated sources
that it is significantly less than the wire rate, unlike ethernet.

>o Token Ring supports longer packets than Ethernet, which again can
> provide better performance. Token Ring packets can be (from memory)
> at least 2K on the 4Mbit version and can be over 12K on the 16Mbit
> version, versus ~1.5Kbytes for Ethernet.

Large packets are most definitely not an advantage for networks that
are used for interactive traffic (which means nearly all lans that are
in use today.) Interactive usage of lans is much more dependent on
the _latency_ of the network than on the throughput.

Take the example of token ring at 16Mbit/s: the largest packet is defined
as 17800 bytes in the IBM TR Arch Ref. This will take 8.9 ms on the wire,
as opposed to 1.2 ms for the largest ethernet packet. (Multiply both by
the number of stations that get to transmit before you do...) Compare
this with the figure of 50 ms quoted in comp.arch as the acceptable latency
for mouse tracking (from a paper by Jim Gettys?).

Large packets are also unnecessary, since the wastage due to headers,
trailers et cetera is a very small % at even modest packet sizes.
Ethernet has 26 bytes of header+trailer plus an interframe gap of 9.6us,
which gives an overhead of roughly 30us or about 2.5% for a 1500 byte packet.
Even a figure of 5% would be perfectly tolerable. Note that the cell size
for new high data rate standards like ATM is much smaller (53 bytes).

The only benefits that come from large packets are if you are using
a protocol that cannot allow multiple unacknowledged packets, or if
you are using a lan interface that cannot process many packets per second.

By the way how many packets per second do token ring users see ? If they
were restricted to 1500 byte max packets over token ring what would be the
throughput ?

--
Scott Colwell
Labtam Australia Pty. Ltd. net: sc...@labtam.labtam.oz.au
Melbourne, Australia phone: +61-3-587-1444

Sam Drake

Feb 5, 1992, 2:26:58 AM
In article <1992Feb4.2...@practic.com> bru...@practic.UUCP (Thomas Eric Brunner) writes:
>>o Token Ring is available in a 16 Mbit/sec flavor, which can be
>> beneficial in some environments (versus Ethernet's 10 Mbit/sec)
>
>Ekk! Sam, you didn't buy that one did you? Vernon is going to hammer you in
>follow-ups on the distinction between a station's ability to source and sink
>and the media's theoretical load capability, and someone is sure to hop up
>and down on the network and transport protocol behavior. I suggest that it
>would be "a nice thing" if a hot RIOS running later-than-Austin networking
>code showed up at the ANTC in Santa Clara, some numbers would be nice,
>especially now that the 16Mb card is available.

See my other recent posts, including the ones showing two RISC Systems
actually transmitting over 14Mb/s via 16Mb/s Token Ring. I've seen it,
with off-the-shelf AIX 3.1.5. I *won't* claim the scenario is typical,
but I doubt those who have seen Ethernet running at 95% would say their
scenario is "typical", either.

I stand by the statement; 16Mb/s vs 10Mb/s is clearly beneficial, in some
environments.

>A bit of history: once upon a time every person wearing a suit said that
>token-ring was deterministic, ethernet was not, that ethernet would
>fail under 30% load, and that it had scads of other problems. Then a
>data-challenged physicist discovered that ethernet was very deterministic
>and could be loaded to within 2% of its theoretical capacity with a single
>workstation (causing some embarrassment to the vendor w.r.t. the performance
>of its file server products), and, broadly speaking, was a well-behaved
>medium.

I never wear suits, I know others have loaded E-net to close to the
theoretical max, and I've seen T/R similarly loaded. If both media
can be loaded to the same percentage, best case, then the fact that 16 > 10
becomes a bit more real.

Hey, I *like* Ethernets. Some of my best friends are Ethernets. I'm
typing this on an Ethernet attached station. But there are things that
I want to do where Token Ring provides better performance. I won't
argue about which is better, or which is cheaper, or which is
faster ... some things are too subjective; others are too obvious;
others are too variable.

Sam Drake

Feb 4, 1992, 2:44:06 AM
In article <gid...@sgi.sgi.com> v...@rhyolite.wpd.sgi.com (Vernon Schryver) writes:
>Would you care to quote some numbers for TCP/IP? For either the slow or
>fast versions of Token Ring? I intend this as a serious question. In
>principle, you would expect far more than 10Mbit/sec TCP throughput for
>16Mb token ring. What do you see? Say with an RS6000?

Between two RS/6000s, using TCP sockets, our group here has measured as
high as a 1.8 MBytes/sec data transfer rate on a 16 Mbit/sec ring...1.8*8
= 14.4 Mbits. Using the Token Ring device driver directly from an application
to write MAC level frames directly on the ring I've measured 1.9 MBytes/sec.
Your mileage may vary; I certainly won't claim these are "typical" numbers.

Vernon Schryver

Feb 4, 1992, 3:34:53 AM
In article <14...@rufus.UUCP>, dr...@drake.almaden.ibm.com (Sam Drake) writes:
>
> Between two RS/6000, using TCP sockets, our group here has measured as
> high as 1.8 MBytes/sec data transfer rate on a 16 Mbit/sec ring...1.8*8
> = 14.4Mbits. Using the Token Ring device driver directly from an application
> to write MAC level frames directly on the ring I've measured 1.9 MBytes/sec.
> Your mileage may vary; I certainly won't claim these are "typical" numbers.


Thank you a lot for the numbers.

In appreciation, I won't ask the questions that immediately occur to me,
such as "what kind of TCP benchmark?", and "a real test between 2 machines
or one of those bogus summing of the work of 29 machines?".
Instead, I'll assume answers as good as those numbers.


Thanks.

Vernon Schryver, v...@sgi.com

Thomas Eric Brunner

Feb 4, 1992, 6:39:49 PM
In article <14...@rufus.UUCP> dr...@drake.almaden.ibm.com (Sam Drake) writes:
>I'm not a Token Ring architect, but I've fiddled with it a bit. Let's see:

>o In general, Token Ring performance degrades more gracefully than Ethernet
> under conditions of very heavy load.

My problem with statements which begin with the phrase "in general" is that
every counter-example to the thesis is an insignificant exception. I don't
have this problem when I use the phrase, however!

OK, the reference work on ethernet loading with ip sources and sinks is Van
Jacobson's. The hoary old stat is that a Sun 3/50 with an AMD Lance ethernet
chip can approach wire speed (16k frames/sec), _and_ that post-4.3bsd release
tcp implementations manage to equitably "share" bandwidth under load.

My all-time favorite "load" condition remains the arp storms on the Austin
and UCLA backbone wires.

>o Token Ring is star-wired, with a central hub. This simplifies
> worst-case network management, since any station can be removed
> from the ring by yanking a cable in the central wiring closet.
> Classic Ethernet is messier, though some of the newer Ethernet
> implementations (those not involving tapping into a single snaking
> cable) have the same advantage.

True; for our novice in search of truth we should point out that "classic"
means a bus topology, whether using thick or thin cable, and that "some of"
is probably not quite sufficiently suggestive of the volume of new cable
and infrastructure installation, which is unshielded twisted pair with a hub
architecture, theoretically "managed" by SNMP.

>o Token Ring has good network management. For example, a central
> administrator can force any station on the ring to leave the ring
> under software control.

The same is true for hub-based ethernets, with SNMP turning on or off a port
("station" in ring-esse), via software. It should be mentioned that IBM has
put a lot of effort into this product line (802.5 as-interpreted), and there
are some advantages.

>o Token Ring is available in a 16 Mbit/sec flavor, which can be
> beneficial in some environments (versus Ethernet's 10 Mbit/sec)

Ekk! Sam, you didn't buy that one, did you? Vernon is going to hammer you in
follow-ups on the distinction between a station's ability to source and sink
and the media's theoretical load capability, and someone is sure to hop up
and down on the network and transport protocol behavior. I suggest that it
would be "a nice thing" if a hot RIOS running later-than-Austin networking
code showed up at the ANTC in Santa Clara; some numbers would be nice,
especially now that the 16Mb card is available.

I'm willing to "contribute" my driver, though I suspect that an RT's raw
performance isn't going to leave the world gasping for air.

>o Token Ring supports longer packets than Ethernet, which again can
> provide better performance. Token Ring packets can be (from memory)
> at least 2K on the 4Mbit version and can be over 12K on the 16Mbit
> version, versus ~1.5Kbytes for Ethernet.

True, but our gentle reader ought to be given a pointer or two as to where
"bigger" might be better. Perhaps Mr. Kattan will be administering a nice,
uncomplicated wire populated by token-ring speaking devices, running some
application which primarily uses large frames, and which would approach or
pass either the interrupt saturation point of the hosts, or the transport
protocol "overhead" if the frames were fragmented. Say Brand X boxes using
Brand X's non-ip (but open of course) protocol suite, running the mongo
packet application, only, with no lossy long-haul links or messy routers
to less savory networks.

On the other hand, perhaps his wire will be used to provide distributed
services which were developed in an ethernet environment, and which are
fairly incurious about performance hacks involving mongo packets, except
for optimizations for FDDI frame size and overhead issues. There are several
file systems which fit this bill, as well as a few other widgets.

Further, Mr. Kattan's wire may actually be connected to something which
is not token-ring, and the non-token-ring net may generate a non-trivial
portion of the token-ring traffic (or the reverse); then "big" would come
to mean "fragmented to fit".

>o Ethernet's cheaper and more pervasive.

Bingo!

>OK, that's enough. Let the flames ... er, GAMES ... begin!

But Sam, you didn't even get to his second question:

>> If you were trying to simulate the performance of each, what would be
>> the key variables?

This is a rather broad question, and there is an entire literature of both
network simulation and LAN access methodology papers propping up scores of
dissertations. Before more blood is shed it would help to know what, if
anything, Mr. Kattan intends to run on top of either of these two link-level
protocols and media, or if he's about to write an access method simulation/
performance paper of his own.

My own third cent's worth is this: if you can get token-ring for free, take
it. If you have to pay real money, go with unshielded twisted pair and a
hub-based ethernet architecture. In terms of cable alone, type 1 cable is
a serious drag (pun intended), vastly more expensive, and the prices of
all the widgets are (handwaving) twice those for ethernet widgets. It is now
possible to run 802.5 over UTP, which removes some of the downside to the
ring choice, and it would be nice if handwaving could improve the price and
vendor-base part of the problem, and if some significant part of the ring
vendors would get a bit less connectionist.

A bit of history: once upon a time every person wearing a suit said that
token-ring was deterministic, ethernet was not, that ethernet would
fail under 30% load, and that it had scads of other problems. Then a
data-challenged physicist discovered that ethernet was very deterministic
and could be loaded to within 2% of its theoretical capacity with a single
workstation (causing some embarrassment to the vendor w.r.t. the performance
of its file server products), and, broadly speaking, was a well-behaved
medium.

>Disclaimer: IBM might agree with this ... but let's not find out, OK?

Ok, I won't call Armonk if you won't.

Disclaimer: humm, well I'm writing a 16Mb/s token-ring driver, have written
a few ethernet and 4Mb/s token-ring drivers, and have cabled the past four
InterOps. I worked for the same company as Vernon, and I do work for the same
company as Sam. I mumble for myself. Everything I write is gospel.

--
#include <std/disclaimer.h>
Eric Brunner 4bsd/RT Project
uucp: uunet!practic!brunner or bru...@practic.com
trying to understand multiprocessing is like having bees live inside your head.

Sam Drake

Feb 4, 1992, 8:58:17 PM
In article <gl2...@sgi.sgi.com> v...@rhyolite.wpd.sgi.com (Vernon Schryver) writes:
>Thank you a lot for the numbers.
>In appreciation, I won't ask the questions that immediately occur to me,
>such as "what kind of TCP benchmark?", and "a real test between 2 machines
>or one of those bogus summing of the work of 29 machines?".
>Instead, I'll assume answers as good as those numbers.

The program was a little C routine which was writing data into a TCP socket
as fast as it could, with another program on the other end reading from a
socket as fast as it could. That's one sending station, one receiving station,
one otherwise idle 16Mb/s Token Ring.

As I said, not typical.

Lon Stowell

Feb 3, 1992, 4:45:50 PM
In article <1992Jan31.0...@menudo.uh.edu> kat...@JANE.UH.EDU writes:
>I hate to ask such a primitive question to this newsgroup, but I didn't
>see a FAQ.
>
>What are the advantages of token ring over ethernet, and vice-versa?
>
Depends on who you ask. Token Ring was designed to be a
"reliable" physical medium and access method. Inherent in the
standards are all sorts of physical layer and media fault
determination AND isolation stuff.

Each Token Ring station has a "fink on thy neighbor" mentality.

Ethernet didn't use to have much of this....being essentially a
bus.

The advent of the smart hubs and 10BaseT have rendered much of
the differences moot....other than for bit twiddling wienies.

A Token Ring tends to be "stable" at extremely high percentages
of its "raw" bit rate compared to CSMA/CD techniques.....so in
a HIGHLY interactive environment with lots of stations, a 4 Mbps
Token Ring can compete nicely with a 10 Mbps Ethernet. Note
that this is in a HIGHLY interactive environment with many
stations trying to use the media in small bursts. And I would
bet that part of the so-called advantages are due to the
different protocol stacks that tend to run on T/R vs E'net.

For less interactive applications....say a small number of
stations doing FTP, the 10 Mbps Ethernet offers thruput which
the 4 Mbps Token Ring is physically unable to offer....600-700
Kbytes/second or more.

The 16 Mbps Token Ring is another matter. Drastic differences
in thruput occur due to different design decisions on the part
of vendors. Some swear by the "smart card" method...and some
even run the layers above the MAC (or LLC) layer on the T/R
silicon itself. Others swear AT the "smart card" approach and
do all this in the higher speed main cpu's if available.

If you have a nice fast bus, good I/O routines, and a fast
enough main cpu(s) and operating system kernel, the smart cards
can actually reduce thruput compared to a dumb card....

Thruput of 10 Mbps Ethernet is theoretically lower than 16 Mbps
T/R....but the vendors have had a lot more time for tuning of
hardware vs operating systems than with T/R.

Besides, REAL Lanner's run on FDDI..... >:-)

The Key variables to me would be the incidence of collision on a
multinode network compared to the token rotation time on the
ring. But I frankly doubt if either is as important as vendor
implementation and tuning....

Lon Stowell

Feb 5, 1992, 4:19:42 PM
In article <scott.697157270@labtam> sc...@labtam.labtam.oz.au (Scott Colwell) writes:

>I would like to hear what data rate can be achieved from application to
>application over tcp-ip/802.5. I have heard from unsubstantiated sources
>that it is significantly less than the wire rate unlike ethernet.
>

If it IS significantly less than the "wire" rate, then the fault
is in the implementation, not the protocol. Token Ring uses a
"smart" chipset.....MUCH smarter than a LANCE or other typical
Ethernet interface. A lot of vendors are still struggling with
how to get best performance AND maintainability of code with
this technology.

And frankly, more than a few vendors are making the wrong
decision IMHO....which slows down their thruput.

If you are talking about doing TCP/IP on Token Ring over the
typical 802.2 LLC layer, then WHERE you run the LLC stuff and
the next layer up has a drastic effect on thruput. TCP/IP on
top of formal LLC is the "norm"....as this allows other
protocols to run at the same time on the same adapter. This is
overhead that few CURRENT Ethernets are burdened with.

And some of the LLC's are much faster than others.....as far as
I know, the MADGE one is the fastest.....but they cheat.

I don't have thruput figures for T/R for TCP/IP.....but would
expect the added overhead of formal 802.2 (processing, not
bytes) to have an impact compared to the older TCP/IP directly
on the Ethernet MAC layer common today.

For "other" protocols....SNA being one, I have measured batch
mode thruput of 80% of wire rate.....which is getting pretty
close to the wire+protocol overhead.

In my personal opinion, I would NOT use Token Ring for TCP/IP
unless I had some other reason for doing so....like needing to
carry SNA traffic on the same network.....or a requirement for
an extremely reliable LAN (media)....at which Token Ring and
its "Fink on thy Neighbor" mentality still beats Ethernet.


TCP/IP has way too much overhead at the upper layers to take
advantage of Token Ring. TCP/IP was designed for a relatively
"unreliable" set of lower layers. Token Ring has a LOT more
overhead down at those lower layers than does Ethernet. T/R is
designed to provide a highly reliable link layer to upper layer
protocol stacks (SNA, NETBIOS, etc.) which were designed for that
type of link.

Running the overhead of TCP/IP on a reliable link layer such as
Token Ring DLC is a lot like hiring a bodyguard for the
Terminator.


>
>Large packets are most definitely not an advantage for networks that
>are used for interactive traffic (which means nearly all lans that are
>in use today.) Interactive usage of lans is much more dependent on
>the _latency_ of the network rather than the throughput.
>

If you want maximum batch throughput for a SMALL number of
stations, then large packet size helps....as long as the packet
is still small enough that the media's inherent error rate leaves
good odds of each packet arriving intact.

For a mixed LAN, where there are both small packet interactive
stations and large packet batch stations, the hardware enforced
transmit priority levels of Token Ring offer a means to get the
best compromise for both users....set the interactive stations
to a higher priority level than the batch.

Still, as you noted, once a batch station does grab a low
priority token and begins blasting away, it can hog the media for
a pretty good length of time.

You must admit that the use of the Token will ALLOW larger
packet sizes to have pretty decent odds of completing a
transmission. If you tried that on a collision type LAN like
ethernet you would have a disaster on your hands.

Token Ring has mechanisms which can help....there is the LLC
layer flow control and throttling available. For even finer
control you could use the built in management features and have
a central station control max packet size.....and with a bit of
programming you could actually drop the batch size dynamically
any time there is a high amount of interactive traffic on the
net. (Not that I have seen any implementations of this kind of
tuning yet......but the tools are there.....)


>Take the example of token ring at 16Mbit/s: the largest packet is defined
>as 17800 bytes in the IBM TR Arch Ref. This will take 8.9 ms on the wire,
>as opposed to 1.2 ms for the largest ethernet packet. (Multiply both by
>the number of stations that get to transmit before you do...) Compare
>this with the figure of 50 ms quoted in comp.arch as the acceptable latency
>for mouse tracking (from a paper by Jim Gettys?).
>

IBM has always encouraged the separation of batch and
interactive traffic onto distinct media. Perhaps they weren't
as backwards as the typical Unix type believes.

Mixing the two types of large packet batch and small packet
interactive has better odds of moderately satisfactory
performance on a token type LAN than it would on a citizen's
band type LAN. It is NOT a good idea on either LAN type at
those transmission speeds.


>Large packets are also unnecessary since the wastage due to headers,
>trailers et cetera is a very small % at even modest packet sizes.

Depends on whether you are batch or interactive. And as you note,
if you are doing files between two non-realtime operating systems,
the less often you harass the operating system the more the thruput.


>
>By the way how many packets per second do token ring users see ? If they
>were restricted to 1500 byte max packets over token ring what would be the
>throughput ?
>--

On a 4 Mbps Token Ring, with a 1000 byte packet, using "scream
mode" you can hit 3.84 Mbps thruput with a pair of AT clones and
16 bit T/R adapters. (Scream mode means that since these were
video files, they were sent in connectionless mode....only the
LLC and MAC level was doing any ack/nak....there was no higher
layer protocol involved...)

The test run was sending video files back and forth around the
LAN. These big packets were being sent at low priority. AT THE
SAME TIME, we were able to run a virtual terminal type
application at a higher token priority with decent response
time.



Lon Stowell

Feb 5, 1992, 4:44:37 PM
>In article <14...@rufus.UUCP> dr...@drake.almaden.ibm.com (Sam Drake) writes:
>>o Token Ring has good network management. For example, a central
>> administrator can force any station on the ring to leave the ring
>> under software control.
>
In article <1992Feb4.2...@practic.com> bru...@practic.UUCP (Thomas Eric Brunner) writes:

>The same is true for hub-based ethernets, with SNMP turning on or off a port
>("station" in ring-esse), via software. It should be mentioned that IBM has
>put a lot of effort into this product line (802.5 as-interpreted), and there
>are some advantages.
>

Although the Ethernet smart hubs are helping, there is still a
huge advantage to Token Ring in this area.

The T/R stations themselves constantly monitor the impedance and
integrity of their media. The stations remove themselves if
problems are noted by themselves OR if they note that another
station appears to be blaming them for a ring problem.

And there are now smart Token Ring hubs on the market which can
drop stations and monitor/report via SNMP (or other) management.

The biggest advantage I see is that T/R allows a station with
management privileges to configure the networking parameters of
all the other stations on the ring. This allows a degree of
dynamic control that Ethernet still lacks.

>
>A bit of history: once upon a time every person wearing a suit said that
>token-ring was deterministic, ethernet was not, that ethernet would
>fail under 30% load, and that it had scads of other problems. Then a
>data-challenged physicist discovered that ethernet was very deterministic
>and could be loaded to within 2% of its theoretical capacity with a single
>workstation (causing some embarrassment to the vendor w.r.t. the performance
>of its file server products), and, broadly speaking, was a well-behaved
>medium.
>

Whoopee! I can load Ethernet to within 2% of its capacity
using a single station. This sounds like a REAL USEFUL
application.....having a station talk to itself. Why not go all
the way and remove even this SINGLE station and see how fast it
will go? How ZEN!

You can load Token Ring to within a couple percent of its
capacity and actually be doing something USEFUL.....like having
two stations talking to each other. What a concept!

If you really want to crunch both media, put about a hundred
stations on each. Have 60 of these blasting files back and
forth in RELIABLE transfer mode. (FTP or whatever). At the
same time, have the other 40 stations running a virtual terminal
type set of sessions with each other. Now measure the aggregate
thruput of all 3 LANs.

Oh yes, do the Token Ring properly and use the priority schemes
and run your DLC entirely on the adapter with a lot of memory.

During your testing, I am going to simulate "workmen in the
building" by dropping computer room grade floor tiles on your
cables.....or kicking a few station jacks around a bit.

You predict the LAN which can move more gigabytes (and still
provide lowest interactive response time) by the end of the
day.....

Me, I'll try to get cabling which can support FDDI speeds.....


Lyle_...@transarc.com

Feb 5, 1992, 8:51:38 PM
lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
> In article <scott.697157270@labtam> sc...@labtam.labtam.oz.au (Scott Colwell) writes:
> >
> >Large packets is most definitely not an advantage for networks that
> >are used for interactive traffic (which means nearly all lans that are
> >in use today.) Interactive usage of lans is much more dependent on
> >the _latency_ of the network rather than the throughput.
> >
> You must admit that the use of the Token will ALLOW larger
> packet sizes to have pretty decent odds of completing a
> transmission. If you tried that on a collision type LAN like
> ethernet you would have a disaster on your hands.

Boy Lon, you sure got that wrong. I don't understand, 'cause your
postings are usually pretty reasonable, but you're way off base here.

On a CSMA LAN, such as Ethernet (which is CSMA/CD, to be precise), the
longer the packet, the lower the risk of collision. A collision can
only occur during the transmission of the first ( T * 10^7 ) bits at
10 Mbit/s, where T is the propagation delay from the transmitter to the
most distant station. Everything after that is gravy. The time to transmit
the smallest legal Ethernet frame is approximately equal to twice the
maximum end-to-end delay of an Ethernet (worst case). This guarantees
that any host will be able to detect a collision on its packet before
it ceases sending.

So using larger packets drastically reduces the likelihood of
collisions, at the cost of higher delays.

By the same token, shorter Ethernets will perform better, and placing
your most prolific senders close to each other on the wire (and in the
middle of the wire) will also improve the performance of an Ethernet.

Lyle Transarc 707 Grant Street
412 338 4474 The Gulf Tower Pittsburgh 15219

Rick Jones

Feb 4, 1992, 2:04:11 AM

>Would you care to quote some numbers for TCP/IP? For either the slow or
>fast versions of Token Ring? I intend this as a serious question. In
>principle, you would expect far more than 10Mbit/sec TCP throughput for
>16Mb token ring. What do you see? Say with an RS6000?
>
>Some chip vendors are running around showing overheads claiming decidely
>unimpressive numbers for the IBM MAC level performance. Of course, they
>claim their numbers are far better. Also, of course, chip vendors can't
>spell TCP/IP.
>

Oh boy - a chance to spout numbers !-) It ain't an RS/6000, but a
combination of the HP 720 and the EISA 802.5 adaptor (of Madge Origin)
will chug along quite merrily at 1880+KB/s where KB is 1024X1024
bytes... this is measured with an in-house program that looks
remarkably like ttcp on the wire ;-) Also, the TCP TPDU was 4K...

So, I would have to say that one can indeed see higher performance on
an 802.5 ring...what is everyone else seeing?

rick jones

Anand Krishnamurthy

Feb 6, 1992, 11:45:45 AM2/6/92
to
Hi folks,
I'm interested in analyzing the limits of multimedia traffic support
in FDDI. I'd appreciate it if somebody could provide references on performance
analysis of FDDI networks supporting multimedia traffic. Thanks in advance.
Anand

gary s anderson

Feb 6, 1992, 12:18:22 PM2/6/92
to
In article <343...@hpindda.cup.hp.com>, r...@hpindda.cup.hp.com (Rick Jones) writes:
|>
|> >Would you care to quote some numbers for TCP/IP? For either the slow or
|> >fast versions of Token Ring? I intend this as a serious question. In
|> >principle, you would expect far more than 10Mbit/sec TCP throughput for
|> >16Mb token ring. What do you see? Say with an RS6000?
|> >
|> >Some chip vendors are running around showing overheads claiming decidely
|> >unimpressive numbers for the IBM MAC level performance. Of course, they
|> >claim their numbers are far better. Also, of course, chip vendors can't
|> >spell TCP/IP.
|> >
|>
|> Oh boy - a chance to spout numbers !-) It ain't an RS/6000, but a
|> combination of the HP 720 and the EISA 802.5 adaptor (of Madge Origin)
|> will chugg along quite merrily at 1880+KB/s where KB is 1024X1024


I assume you mean KB = 1024. 1024 X 1024 is typically MB (megabyte).

|> bytes... this is measured with an in-house program that looks
|> remarkably like ttcp on the wire ;-) Also, the TCP TPDU was 4K...

TPDU is an OSI transport term, while MSS is negotiated by TCP.
The real question is: what size packet are you sending on the
wire? If it's 4096 bytes, I'm curious as to how many vendors really
support 4096-byte messages on the wire (i.e. is this a practical test)???

Rob Warnock

Feb 6, 1992, 1:51:08 PM2/6/92
to
lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
+---------------

| You must admit that the use of the Token will ALLOW larger
| packet sizes to have pretty decent odds of completing a
| transmission. If you tried that on a collision type LAN like
| ethernet you would have a disaster on your hands.
+---------------

I need not admit it, because you are incorrect. A collision can only occur
during the first 51.2 usec of an Ethernet transmission (providing that your
I/S guys haven't egregiously broken the rules in configuring your net). That
is one of the fundamental rules of Ethernet (and 802.3). Once that time has
passed, you are guaranteed that everyone on the net has heard the "carrier",
and that they are now deferring to the sender (rather than possibly being
about to collide). That is to say, the first 51.2 usec (well, plus the 9.6
usec minimum interpacket gap) is the sole "contention" period. Anybody who's
going to fight (contend) for access to the net will do so during that time.

If there are more than one, they will collide -- *within* that time! --
backoff, and retry with a random delay (chosen from a pool of numbers whose
values double for each consecutive failure to acquire the net). If there
is only one contender, he will "capture" the cable, and any late arrivals
will "defer" to him until carrier drop at the end of his packet. Then
*everyone* who is deferring immediately (well, 9.6u) jumps on the cable.
If there was 0 or 1 station deferring (including the sender, who may have
more to send), there will be no collision. In any event, the "media access
protocol" that decides who gets to send the next packet plays out
only in the first 51.2us after the 9.6us gap.

It is *not* permitted for a station to collide at any later time than this
once another station has managed to "capture the flag" without contention
for that long. [Until the end of the packet, when the game starts again.]
Such a thing is recorded by many chips as a "late collision", and is considered
by hardware support people to be a serious indication of malfunction (usually
something very close to dying).

Therefore, the larger the packet, the *more* efficient transmission becomes
on Ethernet. Look at anybody's *measured* data on Ethernets (such as the DEC
study), and you'll see graphs of performance versus packet length that rise
*sharply* from 64 (the minimum) to "a few hundred" bytes, and then level off
above 90% of the bit rate (asymptoting to 98+%) as the packets get larger.
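The backoff described above ("a pool of numbers whose values double for each consecutive failure") is the truncated binary exponential backoff of 802.3. A minimal sketch, with function names of my own choosing:

```python
import random

SLOT_TIME_US = 51.2  # 10 Mbit/s Ethernet slot time

def backoff_slots(collisions):
    """Slots to wait after the n-th consecutive collision: a uniform
    random draw from 0 .. 2**min(n, 10) - 1, per truncated binary
    exponential backoff (the pool doubles up to a cap of 1024 slots)."""
    return random.randrange(2 ** min(collisions, 10))

def backoff_delay_us(collisions):
    """The corresponding wait in microseconds."""
    return backoff_slots(collisions) * SLOT_TIME_US
```

In the standard, a station gives up and reports an error after 16 consecutive collisions on the same frame.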


-Rob

-----
Rob Warnock, MS-9U/510 rp...@sgi.com
Silicon Graphics, Inc. (415)335-1673
2011 N. Shoreline Blvd.
Mountain View, CA 94039-7311

Rob Warnock

Feb 6, 1992, 1:51:03 PM2/6/92
to
lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
+---------------
| Whoopee! I can load Ethernet to within 2% of it's capacity
| using a single station. This sounds like a REAL USEFUL
| application.....having a station talk to itself. Why not go all
| the way and remove even this SINGLE station and see how fast it
| will go? How ZEN!
|
| You can load Token Ring to within a couple percent of its
| capacity and actually be doing something USEFUL.....like having
| two stations talking to each other. What a concept!
+---------------

Are you just *trying* to be dense? What he (and everyone else) meant is
that typical well-designed Unix-based workstations today are capable of
transferring *user* data from one user process on one machine and reliably
delivering it via TCP/IP/Ethernet into the user process of another machine
at a *sustained* rate of *user* data of 90-95% of the physical bit clock
rate of the network medium (or 95% to 98% if you include the protocol
headers as "data", which we don't usually).

That's *two* machines, a producer of the data and a consumer of the data.

[Since TCP is a reliable protocol, there actually have to be packets going
in both directions, but the bandwidth taken up by the ACKs is typically
about 2%, or less.]

I have heard these numbers reported for several years now, on SGI, IBM,
HP, Cray, and Sun [with VanJ net code] machines. (Please excuse me if
I have left out a major player.)

Lyle_...@transarc.com

Feb 6, 1992, 4:17:09 PM2/6/92
to
Rob puts it better, with numbers and everything, but there's one
little nit.

rp...@rigden.wpd.sgi.com (Rob Warnock) writes:
> ... A collision can only occur


> during the first 51.2 usec of an Ethernet transmission (providing that your
> I/S guys haven't egregiously broken the rules in configuring your net). That
> is one of the fundamental rules of Ethernet (and 802.3). Once that time has
> passed, you are guaranteed that everyone on the net has heard the "carrier",
> and that they are now deferring to the sender (rather than possibly being
> about to collide). That is to say, the first 51.2 usec (well, plus the 9.6
> usec minumum interpacket gap) is the sole "contention" period. Anybody who's
> going to fight (contend) for access to the net will do so during that time.

Actually, that 51.2 usec is the UPPER BOUND on the guarantee (ie,
worst case), and is referring to a maximally sized network, with the
sender located exactly at one end of the network and another host on
the opposite end with a packet ready to send.

The expected case is somewhat smaller, and depends on the size of
your network and the location of the various stations, especially the
most prolific ones.

Most early analytic studies of Ethernet were based on worst-case
models instead of expected-case models, so they drastically
understated the capacity of an Ethernet.
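The expected case can be estimated from geometry: the real collision window between two stations is their round-trip propagation delay. A rough sketch of mine, assuming one-way propagation at about 0.65c on coax (roughly 5.1 ns/m) and ignoring the repeater and transceiver delays that the 51.2 usec worst-case figure also budgets for:

```python
NS_PER_METRE = 5.1  # assumed one-way propagation delay on coax (~0.65c)

def contention_window_us(separation_metres):
    """Round-trip cable delay between two stations, i.e. the window in
    which those two stations can actually collide with each other."""
    return 2 * separation_metres * NS_PER_METRE / 1000.0
```

Two stations 100 m apart can only collide with each other during about a 1 usec window, far below the 51.2 usec worst case -- which is why placing the busiest senders near each other helps.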

Vernon Schryver

Feb 6, 1992, 11:43:28 PM2/6/92
to
In article <39...@shamash.cdc.com>, g...@easyaspi.udev.cdc.com (gary s anderson) writes:
> ..

> The real question is: what size packet are you sending on the
> wire? If its 4096 bytes, I'm curious as to how many vendors really
> support 4096 byte messages on the wire (i.e. is this a pratical test)???


Do you mean, why would someone limit themselves to only 4KB packets?
All else equal, everything runs faster with bigger packets. You have less
overhead per byte, from context switches to DMA setup to protocol
munching.

In the unlikely case that you meant to imply that 4KB is too big, then
consider that 4KB and 8KB are nice numbers for common workstations. You
can avoid the byte copy to user space if you use a user data size equal
to your physical page size. At the relatively low data rates of token ring
compared to modern CPUs, that is not very important, but every little bit
helps.

If there is some concern that 4KB is a lot of data for an application
to marshal, then consider file transport protocols or the current
buzz, "multi-media."
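The per-byte-overhead argument is easy to quantify. A toy model of mine (header sizes assumed: ~40 bytes of TCP/IP headers plus a 14-byte MAC header per packet; real framing adds a preamble, FCS, and inter-packet gap on top):

```python
HEADER_BYTES = 40 + 14  # assumed TCP/IP + MAC header bytes per packet

def goodput_fraction(user_bytes):
    """Fraction of the bytes on the wire that are user data."""
    return user_bytes / (user_bytes + HEADER_BYTES)

for mss in (512, 1460, 4096, 8192):
    print(f"{mss:5d}: {goodput_fraction(mss):.3f}")
```

Going from 512-byte to 4KB packets cuts the header tax (and the per-packet context switches, DMA setups, and protocol processing) by roughly a factor of eight.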


Vernon Schryver, v...@sgi.com

Paul Koning

Feb 7, 1992, 4:18:03 PM2/7/92
to

In article <179...@pyramid.pyramid.com>, lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
|>In article <1992Jan31.0...@menudo.uh.edu> kat...@JANE.UH.EDU writes:
|>>I hate to ask such a primitive question to this newsgroup, but I didn't
|>>see a FAQ.
|>>
|>>What are the advantages of token ring over ethernet, and vice-versa?
|>>
|>...

|> A Token Ring tends to be "stable" at extremely high percentages
|> of it's "raw" bit rate compared to CSMA/CD techniques.....so in
|> a HIGHLY interactive environment with lots of stations, a 4 Mbps
|> Token Ring can compete nicely with a 10 Mbps Ethernet. Note
|> that this is in a HIGHLY interactive environment with many
|> stations trying to use the media in small bursts. And I would
|> bet that part of the so-called advantages are due to the
|> different protocol stacks that tend to run on T/R vs E'net.
|>
|> For less interactive applications....say a small number of
|> stations doing FTP, the 10 Mbs Ethernet offers thuput which
|> the 4 Mbps Token Ring is physically unable to offer....600-700
|> Kbytes/second or more.

You're perpetuating an old and well-debunked myth. In the examples you
mentioned, Ethernet will deliver throughput well in excess of 4 Mb/s.
If you're into tiny packets, you might in some cases top out as low as
6 or 7 Mb/s; more sensible situations get you 9 Mb/s or more.
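Those numbers fall out of the fixed per-frame overheads. A sketch for 10 Mbit/s Ethernet, counting the 8-byte preamble, 14-byte header, 4-byte FCS, and the 9.6 usec interframe gap (12 byte times) against each frame:

```python
FIXED_OVERHEAD_BYTES = 8 + 14 + 4 + 12  # preamble + header + FCS + gap

def throughput_mbps(payload_bytes, wire_mbps=10.0):
    """User-payload throughput with back-to-back frames of a given size,
    collisions ignored (a simple upper-bound model, not a measurement)."""
    return wire_mbps * payload_bytes / (payload_bytes + FIXED_OVERHEAD_BYTES)

print(f"{throughput_mbps(64):.1f} Mb/s with tiny frames")
print(f"{throughput_mbps(1500):.1f} Mb/s with full-size frames")
```

Even before collisions are considered, tiny frames cap out well above 4 Mb/s, and full-size frames deliver better than 9 Mb/s of payload.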

paul

Lon Stowell

Feb 7, 1992, 3:38:09 PM2/7/92
to
In article <go9...@sgi.sgi.com> rp...@rigden.wpd.sgi.com (Rob Warnock) writes:
>
>I need not admit it, because you are incorrect. A collision can only occur

>during the first 51.2 usec of an Ethernet transmission (providing that your
>I/S guys haven't egregiously broken the rules in configuring your net). That

Actually as I admitted to another poster, I meant to add the
issue of interactive and batch on the same wire.

However, I have a problem with "can only occur". "Should only
occur" I might agree with, but late collisions DO occur...it
all depends on what types of equipment you are mixing on the
cable....how you have the network bridged, routed, repeated,
etc., as well as where the stations are physically located on
the cable.

In the real world, rarely do you get to optimize your network
as one would for a benchmark. Planning is nice, but even that
may be more difficult than theory because stations may have to
be added to existing wire.... No, it is not a good idea, but
that doesn't mean it isn't done.

>
>It is *not* permitted for a station to collide at any later time than this
>once another station has managed to "capture the flag" without contention
>for that long. [Until the end of the packet, when the game starts again.]
>Such a thing it recored by many chips as a "late collision", and is considered
>by hardware support people to be a serious indication of malfunction (usually
>somethings very close to dying).
>

You can get late collisions on an Ethernet for other than
"serious hardware malfunction" reasons. It may be a signal of a
"less than optimal" plant installation....or it may be because
you are mixing V1, V2, and 802.3 stations on the net.....and are
doing so in "less than optimal" physical spacing.

>Therefore, the larger the packet, the *more* efficient transmission becomes
>on Ethernet. Look at anybody's *measured* data on Ethernets (such as the DEC
>study), and you'll see graphs of performance versus packet length that rise
>*sharply* from 64 (the minimum) to "a few hundred" bytes, and them roll off
>above 90% of the bit rate (asymptoting to 98+%) as the packets get larger.
>

I haven't seen that study.....but I make a great distinction
between a "two station net" and a practical network with
hundreds of stations on it--some with file xfers going and some
with highly interactive traffic. Token Ring has a physical
technique WHICH MAKES IT POSSIBLE to ensure that the interactive
stuff gets transmission priority over the batch stuff; Ethernet
does not. [ Note that T/R makes it POSSIBLE....that doesn't mean
vendors understand T/R enough to USE the technique.....]

The larger packets make for greater efficiency on ANY network,
but only if there are no "hits" of any type during the packet.
Note that a 'hit" can be due to other than a collision.....

Vernon Schryver

Feb 7, 1992, 10:36:41 PM2/7/92
to
In article <179...@pyramid.pyramid.com>, lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
> ....

> However, I have a problem with "can only occur". "Should only
> occur" I might agree with, but late collisions DO occur...it
> all depends on what types of equipment you are mixing on the
> cable....how you have the network bridged, routed, repeated,
> etc., as well as where the stations are physically located on
> the cable.

No: "Late collisions cannot occur."
Late collisions are just as fatal and indicate just as much wrong with an
ethernet as excessive clock jitter on a Token Ring. (Or name your own,
valid, serious problem. I don't really know what I'm talking about with
TR.)

> In the real world, rarely do you get to optimize your network
> as one would for a benchmark. Planning is nice, but even that
> may be more difficult than theory because stations may have to
> be added to existing wire.... No, it is not a good idea, but
> that doesn't mean it isn't done.

Preventing late collisions has nothing to do with "optimizing" your
network, unless you think keeping water in your car's radiator is an
"optimization." Late collisions don't just mess up benchmarks. They mess
up everything. If you get late collisions, you also get short packets
which are lost without a sign, causing higher-layer timeouts.

If your ethernet has late collisions, it is broken. Demand it be fixed or
fix it yourself. Remove the long drop cables on 10baseT transceivers.
Shorten the main wire to legal limits.

> You can get late collisions on an Ethernet for other than
> "serious hardware malfunction" reasons. It may be a signal of a
> "less than optimal" plant installation....or it may be because
> you are mixing V1, V2, and 802.3 stations on the net.....and are
> doing so in "less than optimal" physical spacing.

How can you get late collisions simply by bad physical spacing? Late
collisions mean that the ethernet is simply too big. Someone has
violated the rules.

Mixing V1, V2, and 802.3 stations on the net causes no problems.

Yes, mixing incompatible transceiver and station pairs is bad.
It's also a "serious" error.

> ...


> The larger packets make for greater efficiency on ANY network,
> but only if there are no "hits" of any type during the packet.
> Note that a 'hit" can be due to other than a collision.....

No, an incorrectly built or otherwise broken network is broken. "Hits" are
very rare on a correctly operating network. Consult the error counters on
any large, correctly installed ethernet.

The incredible resistance of ethernet to extra drop cables, extra taps,
thinnet "drop cables", and so on cannot be considered a defect of the
ethernet protocol. It is ridiculous to compare a broken or badly built
ethernet with a correctly installed token ring.

Is a token ring a fraction as resistant to "improvements"?


Vernon Schryver, v...@sgi.com

Marco S Hyman

Feb 8, 1992, 3:14:07 PM2/8/92
to
In article <179...@pyramid.pyramid.com>

lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
> In article <go9...@sgi.sgi.com>
> rp...@rigden.wpd.sgi.com (Rob Warnock) writes:
> >Therefore, the larger the packet, the *more* efficient transmission becomes
> >on Ethernet. Look at anybody's *measured* data on Ethernets (such as the
> >DEC study), and you'll see graphs of performance versus packet length
> >that rise *sharply* from 64 (the minimum) to "a few hundred" bytes, and
> >them roll off above 90% of the bit rate (asymptoting to 98+%) as the
> >packets get larger.
>
> I haven't seen that study.....but I make a great distinction
> between a "two station net" and a practical network with
> hundreds of stations on it--some with file xfers going and some
> with highly interactive traffic.

Better check out the study -- "Measured Capacity of an Ethernet: Myths and
Reality" by Boggs, Mogul, and Kent in the proceedings of SIGCOMM '88
(published as Computer Communications Review, Volume 18, Number 4, August '88).

The graph Rob mentions has number of hosts on the horizontal axis,
utilization in Mbits/s on the vertical axis, and traces for various packet
sizes, including some that violate the spec. Another interesting graph charts
transmission delay for various numbers of hosts and packet sizes. The tests
were not done using a two host ethernet.

// marc
--
ma...@dumbcat.sf.ca.us -- pacbell!dumbcat!marc

Thomas Eric Brunner

Feb 7, 1992, 8:20:19 PM2/7/92
to
In article <179...@pyramid.pyramid.com> lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
[in a followup to a posting I made, following up a posting Sam made, in reply
to a general query]

>
>>The same is true for hub-based ethernets, with SNMP turning on or off a port
>>("station" in ring-esse), via software. It should be mentioned that IBM has
>>put a lot of effort into this product line (802.5 as-interpreted), and there
>>are some advantages.
>>
> Although the Ethernet smart hubs are helping, there is still a
> huge advantage to Token Ring in this area.
>
> The T/R stations themselves constantly monitor the impedance and
> integrity of their media. The stations remove themselves if
> problems are noted by themselves OR if they note that another
> station appears to be blaming them for a ring problem.
>
> And there are now smart Token Ring hubs on the market which can
> drop stations and monitor/report via SNMP (or other) management.

These are all very good points, there is much more management "function"
built into 802.5 as-interpreted, but to what end? At the risk of touching
off a different theological debate, where ought management reside? In
media-specific managers, or in media-independent managers? The choice of
location is in my mind more of a marketing question than otherwise, and
since the "market" appears to have some preference for using multiple
media types within single "managed networks", the embedding of management
in the link-level media protocol rather than at the network addressable
level seems partially redundant. If this is too opaque, I claim that the
use of SNMP is probably more useful for a larger set of problems than
are the token-ring management features.

I should point out that it is only this month that I learned how SRT
bridges actually effect "load balancing" in meshed (multi-connect) rings,
which altered my understanding of how a back-to-back packet sequence which
is not guaranteed to be sequential by the source/sink transport protocol
might fail under intermittent circumstances, so I've probably got a few
remaining misconceptions to discover yet.

> The biggest advantage I see is that T/R allows a station with
> management privileges to configure the networking parameters of
> all the other stations on the ring. This allows a degree of
> dynamic control that Ethernet still lacks.
>

Would you do me the favor of expanding on this, there are probably other
issues in dynamic behavior (other than "load balancing" mentioned above)
which I'm ignorant of.

>>
>> [my 8-line history of the world, deleted]


>>
> Whoopee! I can load Ethernet to within 2% of it's capacity
> using a single station. This sounds like a REAL USEFUL
> application.....having a station talk to itself. Why not go all
> the way and remove even this SINGLE station and see how fast it
> will go? How ZEN!

I'm sorry, I should have been clearer. The test used two Sun boxes and
attempted to explore loading on an otherwise idle net. A similar test
on a working net using a mix of page-aligned (ND) and non-page-aligned
packets (of several sizes, TCP short and large, and the usual mix for NFS
over UDP), using similar hardware, with some transit traffic through a
router, reached similar results on loading. Had this been a "single station"
test it would not have been very interesting. I recall sitting on the edge
of my seat when Van summarized his work at the Stanford IETF, and writing
for the subsequent paper and code by the second test done at UCB. I can't
at the moment recall the author's name.

> If you really want to crunch both media, put about a hundred
> stations on each. Have 60 of these blasting files back and
> forth in RELIABLE transfer mode. (FTP or whatever). At the
> same time, have the other 40 stations running a virtual terminal
> type set of sessions with each other. Now measure the aggregate
> thruput of all 3 LAN's.

Well, to be honest the question of how to test and what to test has caused
some real head scratching within the benchmarking working group of the IETF,
here is an excerpt from the Atlanta meeting:

: The BMWG met on Tuesday, July 30th in Atlanta during the IETF meeting.
:
: The single topic of the discussions was to explore ways to more closely
: relate the design of tests for routers and bridges to the conditions
: found in the real world.
:
: We explored the issues of bi-directional traffic, mixed protocols and
: random address and came to the conclusion that it would be difficult, at
: the least, to simulate a real-world network but that most of the above
: issues should be included in the test design.
:
: It was concluded, in the absence of actual tests, that the choice of
: routing protocol probably did not make any performance difference to the
: routed protocol after the next-hop address had been learned and added to
: the routing cache. Tests should be performed to see if this is true.
:
: We agreed to hold a video conference in mid September to continue
: refining the actual procedures that should be used to do throughput
: tests.

Of course, the working group's principal focus is on characterizing tests
for bridges and routers, not media, but it is semi-relevant for media
performance when connected by such devices. All of my source/sink test code
is far from what I'd really like, either to measure rfc1009 (or rreq)
conformance, or performance.

> During your testing, I am going to simulate "workmen in the
> building" by dropping computer room grade floor tiles on your
> cables.....or kicking a few station jacks around a bit.

Well, I doubt this as neither I nor the ANTC is probably likely to allow
someone from Pyramid to be actually malicious in our respective shops, but
I do take your point, in fact there has been an entire species of postings
on the bogosity of vendors using slide-locks for transceiver cables, and
of users attaching/detaching thinnet drops from hosts. Personally I think
I've gotten some "workmen in the building" exposure building large networks
in short periods in convention halls where fork lifts, semi-tractor/trailers,
and booth vendor sales staff "pilot error" are commonplace.


> Me, I'll try to get cabling which can support FDDI speeds.....

Try Type 4 for copper cable, for fibre I suggest you contact RedHawk over
in Hayward or Fremont. They designed and terminated the 3-pair I used for
the '91 InterOp demo ring and spines. They are good people and give fair
prices, with real discounts for bulk purchases. Mention my name and InterOp,
they may discount further.

Thomas Eric Brunner

Feb 7, 1992, 7:14:46 PM2/7/92
to

Let's not forget the relative sophistication of the original poster's query.
Mr. Kattan wrote a simple query, and to my reading his lack of specifics
suggested some naivete, so the obvious apparent advantage(s) of one medium
over the other really ought to be carefully explained, e.g., packet size and
bandwidth, in some context.

In my handwaving capsule description of how I and some other people I know
came to "understand" Ethernet and the TCP ACK mechanism (or flow control in
general), I deliberately slighted a class of people: those who engaged in
marketing one technology over another by representing one as deterministic
and high performance and the competing technology as non-deterministic and
problematic and not really high performance. I referred to these persons as
"person(s) wearing a suit". It would have been cuter, but no less slighting,
if I'd used the term "sartorially challenged." At no time did I imagine that
anyone would infer that I was suggesting that anyone with a long history of
useful postings to several newsgroups could be intended -- after all, when
I teach I wear a suit and even a tie (the latter does tend to constrict the
flow of blood to my head however, with unfortunate side effects).

However, as recently as a year ago I was in Raleigh to teach on non-SNA
networking to a group somewhat involved in token-ring products, and we
(myself and my partner) did observe that our packet traces and observations
were "shared" by the technical attendees, and "rejected" by the rather
incredulous non-technical attendees. By the way, the Austin and UCLA wires
I mentioned were large, SRT-populated token-rings.

What would be useful is an attempt to characterize the protocol(s) and the
application(s) where there is a significant difference in end-to-end, or
distributed system, performance due to the differences in the link protocol
and physical media. As several people have pointed out, there are a _lot_
of variables, which is equivalent to the observation that MTU and bandwidth
are not sufficient as raw numbers on which to base one's selection, except
with some implicit caveats.

Rob Warnock

Feb 10, 1992, 1:45:12 AM2/10/92
to
lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
+---------------
| However, I have a problem with "can only occur". "Should only
| occur" I might agree with, but late collisions DO occur...
+---------------

Only with a mis-configured net or with broken equipment. Any occurrence of a
"late collision" should be taken as an indication that you have a serious
problem in your net that should be fixed at the earliest opportunity. Late
collisions are not *ever* a "normal" part of network operations.

+---------------


| >by hardware support people to be a serious indication of malfunction (usually
| >somethings very close to dying).
| >

| You can get late collisions on an Ethernet for other than
| "serious hardware malfunction" reasons. It may be a signal of a
| "less than optimal" plant installation....or it may be because
| you are mixing V1, V2, and 802.3 stations on the net.....and are
| doing so in "less than optimal" physical spacing.

+---------------

Sorry, you can mix V1, V2, 802.3, thick, thin, lots of stuff. If you don't
break the specs or have broken equipment, you should *not* get late collisions.

Late collisions are *always* an indication of *serious* trouble.

Lon Stowell

Feb 10, 1992, 4:57:56 PM2/10/92
to
In article <1992Feb8.0...@practic.com> bru...@practic.UUCP (Thomas Eric Brunner) writes:
>
>These are all very good points, there is much more management "function"
>built into 802.5 as-interpreted, but to what end? At the risk of touching
>off a different theological debate, where ought management reside? In

You will indeed touch off a theological debate with statements
like that.

In practice, I cannot see much advantage of one over the
other...unless I had some pretty extreme requirements for the
LAN....requirements which the LAN software likely couldn't meet
anyway.

Arguing about the layer at which management, network, or "other"
functions ought to reside is a waste of time unless you talk
about the entire protocol stack....as well as the networking
paradigm of the folks who developed the stack.

If you want to have fun sometimes, try to explain SNA or TCP/IP
to folks trained in and with experience ONLY in one of these two
camps. It is an interesting event....and a waste of time if
there are rigid religious fanatics of either type in the
audience...

>
>I should point out that it is only this month that I learned how SRT
>bridges actually effect "load balancing" in meshed (multi-connect) rings,
>which altered my understanding of how a back-to-back packet sequence which
>is not guaranteed to be sequential by the source/sink transport protocol
>might fail under intermittant circumstances, so I've probably got a few
>remaining misconceptions to discover yet.
>

A lot of this "understanding" is protocol stack specific.
TCP/IP and SNA just don't "see" a network in similar ways.
Although SNA has lately admitted that there just MIGHT be some
advantages to dynamic routing, dynamic resource discovery, etc.
these concepts were foreign to IBM's networking paradigm for
most of SNA's history. Just looking at the IBM T/R architecture
reference manual in the sections where they talk about the Path
Control interface to the T/R gives a pretty good insight to
their thinking....

>
>Would you do me the favor of expanding on this, there are probably other
>issues in dynamic behavior (other than "load balancing" mentioned above)
>which I'm ignorant of.
>

There are two ways you can control this in T/R. At the MAC
layer....where a master station can affect how other stations are
configured. At the LLC layer, frankly 802.3 with 802.2 should be
able to do the same things...

The token ring MAC has 8 priority levels (0 through 7). You can set
batch traffic to a lower level than interactive....not that you
NEED to do this....nor does a given pair have to be at the same
priority level to communicate. Note that I know of no "standard"
for doing this....

At the 802.2 llc layer, the layer itself has a dynamic window.
Of course you need to remember whether you are using connection
oriented communications or connectionless.
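The priority mechanism described above lives in the Access Control octet of each 802.5 token or frame: three priority bits, a token bit, a monitor bit, and three reservation bits (PPP T M RRR). A sketch of packing that octet (the helper name is mine, for illustration):

```python
def access_control(priority, token=0, monitor=0, reservation=0):
    """Pack an 802.5 Access Control octet, bit layout PPP T M RRR.
    priority and reservation are 0..7; token and monitor are single bits."""
    assert 0 <= priority <= 7 and 0 <= reservation <= 7
    return (priority << 5) | ((token & 1) << 4) | ((monitor & 1) << 3) | reservation
```

A station wanting to send at a given priority sets the reservation bits in a passing frame; when the current sender releases the token, it issues it at that reserved priority, so higher-priority (e.g. interactive) traffic gets the ring first.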

>> If you really want to crunch both media, put about a hundred
>> stations on each. Have 60 of these blasting files back and
>> forth in RELIABLE transfer mode. (FTP or whatever). At the
>> same time, have the other 40 stations running a virtual terminal
>> type set of sessions with each other. Now measure the aggregate
>> thruput of all 3 LAN's.
>
>Well, to be honest the question of how to test and what to test has caused
>some real head scratching within the benchmarking working group of the IETF,
>here is an excerpt from the Atlanta meeting:

BENCHMARKING, a science in need of an art.

>
>: The BMWG met on Tuesday, July 30th in Atlanta during the IETF meeting.
>:
>: The single topic of the discussions was to explore ways to more closely
>: relate the design of tests for routers and bridges to the conditions
>: found in the real world.

Crunched cables, overlength segments, attempts to save money on
wiring, new and poorly trained technicians, anarchistic growth,
incessant user demands, frequent network upgrades--hardware and
software, poor layout, poor power, budget crunches, and the
"vp effect" are all difficult to model.

Your group would do well to read the "Last Subscriber Loop
Survey" from the old pre-Green Bell folks. These were the real
technical and practical types from the Bell system. They
measured, poked, and prodded, and everything else on a study of
the REAL subscriber local loops....with actual data tests. They
then went back to the labs and attempted to create an environment
which could effectively model this real world. The results were
less than satisfactory...


>:
>: It was concluded, in the absence of actual tests, that the choice of
>: routing protocol probably did not make any performance difference to the
>: routed protocol after the next-hop address had been learned and added to
>: the routing cache. Tests should be performed to see if this is true.
>:

Routing cache? You use routing cache? Seriously, by even using
the term you have implied a routing protocol....

>
>Well, I doubt this as neither I nor the ANTC is probably likely to allow
>someone from Pyramid to be actually malicious in our respective shops, but
>I do take your point, in fact there has been an entire species of postings
>on the bogosity of vendors using slide-locks for transceiver cables, and
>of users attaching/detaching thinnet drops from hosts. Personally I think
>I've gotten some "workmen in the building" exposure building large networks
>in short periods in convention halls where fork lifts, semi-tractor/trailers,
>and booth vendor sales staff "pilot error" are commonplace.
>

Yes, and real commercial networks almost always seem to have
similar problems. People kicking wall plates, cable installers
with hangovers, other workmen in the wiring closets or under the
floors, welders, etc.

If you are talking about media, you really should note how the
protocol/media combination...or just the media, treats these
real world events. In my experience Ethernet tolerates
TRANSIENT events far better than Token Ring....but Token Ring is
better at telling you where the event took place.

Pay your money and take your choice..... but I see very few
vendors who even acknowledge that there ARE differences in this
area.

Lon Stowell

Feb 10, 1992, 4:00:24 PM2/10/92
to
In article <gq2...@sgi.sgi.com> v...@rhyolite.wpd.sgi.com (Vernon Schryver) writes:
>
>If your ethernet has late collisions, it is broken.

Broken means different things to different people....a bad cable,
card, etc. to a repairman. Incorrectly installed might be a
better term.

>Demand it be fixed or
>fix it yourself. Remove the long drop cables on 10baseT transceivers.
>Shorten the main wire to legal limits.
>

You left off the prerequisite steps....

o First, clean up your resume.

The next time you are in one of those big office buildings in
NYC or Chicago, start taking a network apart. You might even
point out that you are trying to make the net run better as a
goal, but in commercial installations, users are usually
severely disciplined if caught messing around with network gear.

As far as demanding, that would help.....but the network folks
may already be aware of the problem....but the budget for the
new router, cable, etc. just got slashed. Meanwhile, there is
work to be done and the network is how it gets done.

If I had to take a choice between a "less than optimal"
installation and NO net access, I'm afraid I would have to go
for the kludge and hope it gets cleaned up later (and there is
almost ALWAYS a "later" in big nets....).

>
>Mixing V1, V2, and 802.3 stations on the net causes no problems.
>
Depends on what you want to do with them. And it also depends
on whether you are talking about the transceiver, the AUI
interface, or the frame layer.

>Yes, mixing incompatible transceiver and station pairs is bad.
>It's also a "serious" error.

It is also not an unknown happening. If you have a big network
and are adding equipment in response to user growth rather than
some network purists "master plan" compromises do occur.

If possible it is always nice to segment the LAN into "traffic
oriented" workgroups. It also has been known to help if you
keep the V1/V2 frame level stations away from the 802.3 level
stations...with SOME vendor's implementations and protocol
stacks. These little quirks usually are learned from
experience...and between the knowledge and the budget there is
usually a bit of lag time...


>
>No, an incorrectly built or otherwise broken network is broken. "Hits" are
>very rare on a correctly operating network. Consult the error counters on
>any large, correctly installed ethernet.
>

It must be nice to have such a nice clean network. I've seen a
lot of big nets, but have never seen one without runs, hits, and
errors.

And I've NEVER seen a growing, dynamic network where sooner or
later problems weren't injected by the network folks or
outsiders who just happened to be under the same floors, in the
same walls, on the same branch circuits, etc.

>The incredible resistance of ethernet to extra drop cables, extra taps,
>thinnet "drop cables", and so on cannot be considered a defect of the
>ethernet protocol. It is ridiculous to compare a broken or badly built
>ethernet with a correctly installed token ring.
>

It is if the design of the MAC layer of the two is sufficiently
different in how they treat these types of problems. With a
Token Ring you will usually know EXACTLY where the fault domain
is pretty quickly....as long as it is related to overlength,
crimped, cut, damaged, out-of-spec, cabling, hubs, or stations.
If you add to a Ring and start causing problems, the Net Mgr
would have to be braindead not to know exactly what the problem
is in a hurry.

With traditional bus Ethernet, things just ain't this easy
unless you have excellent record keeping of error and traffic
statistics as well as excellent record keeping of your physical
layout as well.

Token Ring will give you a topological listing of all your
stations pretty quickly. If you have smart hubs, you can get
the same info for the hubs, drop cables, and know which station
is on which port....and where the trunks are.
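Just logging those upstream-neighbor reports is enough to rebuild the ring
order. A toy sketch (Python; station names and the function are mine,
purely for illustration):

```python
# Toy sketch (station names hypothetical): every station reports "my
# upstream neighbor is X"; logging those reports yields the ring order.

def ring_order(naun_reports, start):
    """naun_reports: station -> its reported upstream neighbor.
    Walk upstream from `start` until the walk wraps around."""
    order = [start]
    station = naun_reports[start]
    while station != start:
        order.append(station)
        station = naun_reports[station]
    return list(reversed(order))    # downstream (data-flow) order

reports = {"A": "D", "B": "A", "C": "B", "D": "C"}
print(ring_order(reports, "A"))     # ['B', 'C', 'D', 'A']
```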

The new 10BaseT smart hubs are doing a lot to add this type of
management to Ethernet....

>Is a token ring a fraction as resistant to "improvements"?
>

No, actually the reason that (IMHO) there are more "less than
optimal" Ethernets is that Ethernet will tolerate considerably
more abuse at the MAC layer than a Token Ring will. With
IBM's rigid approach to physical and data integrity at the
lower layers, either a Token Ring tends to run pretty error
free at the lowest layers or it does not run very well at
all....

I don't know if you've ever taken the challenge that IBM used
to put out.....try to break a token ring in such a manner that
it doesn't recover around the fault domain. You have to know
a bit of detailed design about token ring to do it. And now
with the Star Tek, Proteon, and even IBM's new smart MAU's, I
would have to bet that you would find it almost impossible if
it were MY net management software running the physical media.

I don't know how hard it is to break a 10BaseT network with
good net management software running the media, but with
traditional thicknet/thinnet it is so easy it isn't even a
challenge.

Lon Stowell

Feb 10, 1992, 5:00:15 PM2/10/92
to
In article <1992Feb8.2...@dumbcat.sf.ca.us> ma...@dumbcat.sf.ca.us (Marco S Hyman) writes:
>Better check out the study -- "Measured Capacity of an Ethernet: Myths and
>Reality" by Boggs, Mogul, and Kent in the proceedings of SIGCOMM '88
>(published as Computer Communications Review, Volume 18, Number 4, August '88).
>
>The graph Rob mentions has number of hosts on the horizontal axis,
>utilization in Mbits/s on the vertical axis, and traces for various packet
>sizes, including some that violate the spec. Another interesting graph charts
>transmission delay for various numbers of hosts and packet sizes. The tests
>were not done using a two host ethernet.
>
Nor was it done, in my opinion and experience in field support,
on a typical commercial network.

Lon Stowell

Feb 10, 1992, 4:14:53 PM2/10/92
to
In article <1992Feb8.0...@practic.com> bru...@practic.UUCP (Thomas Eric Brunner) writes:
>
>What would be useful is an attempt to characterize the protocol(s) and the
>application(s) where there is a significant difference in end-to-end, or
>distributed system performance, due to the differences in the link protocol
>and physical media. As several people have pointed out, there are a _lot_
>of variables, which is equivalent to the observation that MTU and bandwidth
>are not sufficient as raw numbers to base one's selection, except with some
>implicit caveats.
>
It would also be extremely useful to distinguish the real world
of real commercial users from the fairly artificial benchmark,
performance analysis, or research and development environments.

IMHO most of the differences between the two are of interest
mainly to the "artificial" environments or to the marketing
folks.

Although I tend to favor the Token Ring for EXTREMELY high
availability networks, for most commercial networks, I prefer
simple Ethernet. This preference became even more pronounced
with the advent of the smart hubs and 10BaseT.

The only reasons I would recommend T/R are:

o You need to run SNA on the LAN. Whether I would install
all Token Ring or mix the two would depend a lot on the
physical placement of the SNA vs non-SNA stations....and
the likely odds of needing SNA access for users in the
non-SNA physical locations in the future.

o You need extremely high availability. As of today this
would be smart MAU's and T/R. However as noted, 10BaseT
would probably work almost as well. I would go T/R more
likely if SNA were a requirement.

o Thruput? Schmoo-put. Given enough money for routers,
etc. the differences between Ethernet and Token Ring are to
me just not enough to justify one over the other simply
based on the "speedometer mentality".
Let the marketing types use this one....kinda like selling
a machine based on MIPS.

If you want thru-put, create a backbone, workcenter type
LAN. If you can put FDDI in the backbone, fine. If not,
wire your campus so you CAN put FDDI in when it becomes
available.

If you are running TCP/IP, Netware, whatever...and have no
present or future requirement for SNA, why not go with Ethernet?
The cards tend to be cheaper... and on a big network that can
buy all sorts of toys like the DA-30 or a snootful of Sniffers.

It will be interesting to see how Token Ring responds now that
the IBM/TI monopoly is broken......

Or has T/R missed its window of opportunity with FDDI and copper
DDI available?

Vernon Schryver

Feb 11, 1992, 1:17:50 AM2/11/92
to
In article <179...@pyramid.pyramid.com>, lsto...@pyrnova.pyramid.com (Lon Stowell) writes:

This is true. I remember being quite disappointed when I got a copy of the
paper after hearing its recommendation. The test setup could hardly be
called similar to anything in the real world.

The paper is very good for puncturing some stupid myths, myths whose
implausibility is apparently exceeded only by their popularity.


Vernon Schryver, v...@sgi.com

Vernon Schryver

Feb 11, 1992, 1:49:56 AM2/11/92
to
In article <179...@pyramid.pyramid.com>, lsto...@pyrnova.pyramid.com (Lon Stowell) writes:
> In article <gq2...@sgi.sgi.com> v...@rhyolite.wpd.sgi.com (Vernon Schryver) writes:
> >
> >If your ethernet has late collisions, it is broken.
>
> Broken means different things to different people....a bad cable,
> card, etc. to a repairman. Incorrectly installed might be a
> better term.

No, an error is an error. You don't let GM or Ford get away with
"improperly assembled". Nor your doctor or dentist. Demand the same of
your network installers.

> >Demand it be fixed or
> >fix it yourself. Remove the long drop cables on 10baseT transceivers.
> >Shorten the main wire to legal limits.
> >

> ...


> As far as demanding, that would help.....but the network folks
> may already be aware of the problem....but the budget for the
> new router, cable, etc. just got slashed. Meanwhile, there is

> work to be done and the network is how it gets done.....

This differs from my experience. The network folks may know something is
wrong, but often are so busy justifying new hardware and more head count,
that they have no time for details like fixing problems.

I'm sure you know the experience of the Big Customer System Manager who
just KNOWS about networks, who, for example, KNOWS that late collisions are
caused by implementation errors in your stations. (IRIS's are unlike most
machines, and complain about late collisions. If I had one share of stock
for every internal pitched battle where managers and others have demanded
those messages be removed, ....)

Too many network service groups are graded on how fast they grow and how
much HiTech talk they use to snow their bosses, instead of the quality of
their service.


> It is if the design of the MAC layer of the two is sufficiently
> different in how they treat these types of problems. With a
> Token Ring you will usually know EXACTLY where the fault domain
> is pretty quickly....as long as it is related to overlength,
> crimped, cut, damaged, out-of-spec, cabling, hubs, or stations.
> If you add to a Ring and start causing problems, the Net Mgr
> would have to be braindead not to know exactly what the problem
> is in a hurry.
>
> With traditional bus Ethernet, things just ain't this easy
> unless you have excellent record keeping of error and traffic
> statistics as well as excellent record keeping of your physical
> layout as well.
>
> Token Ring will give you a topological listing of all your
> stations pretty quickly. If you have smart hubs, you can get
> the same info for the hubs, drop cables, and know which station
> is on which port....and where the trunks are.

I don't know much about TR, but if it is at all similar to FDDI, I
disagree. It's true that FDDI usually makes the "fault domain" clear. If
you've destroyed the network, the location of the fault is obvious.

The difference is that it is very hard to completely trash an ethernet but
very easy to destroy an FDDI ring. Since the FDDI problem comes not from
the fiber or speed but from the MAC complexity and the ring topology, and
since the source-clock and zillions of TR MAC frames sound even messier
and shakier than FDDI, my guess is the same applies to TR.

In other words, the common failure of ethernets is lost packets (e.g. late
collisions) which are quite hard to chase down but do no more than reduce
throughput. The common failure of rings is complete uselessness, which is
usually fairly easy to diagnose.

> The new 10BaseT smart hubs are doing a lot to add this type of
> management to Ethernet....

I think the management in hubs is vastly overrated, and their ability to
break and trash an ethernet even more vastly unobserved. I daily use a big
ethernet, involving many big hubs each supporting SNMP. They're both
useless as diagnostic tools and the source of a large part, perhaps
most of the problems.

> >Is a token ring a fraction as resistent to "improvements"?
> >
> No, actually the reason that (IMHO) there are more "less than
> optimal" Ethernets is that Ethernet will tolerate considerably
> more abuse at the MAC layer than a Token Ring will. With
> IBM's rigid approach to physical and data integrity at the
> lower layers, either a Token Ring tends to run pretty error
> free at the lowest layers or it does not run very well at
> all....

We agree. I doubt we agree on whether a network which is more brittle is
ever more desirable, other things like upper layers being equal.

> ...

> I don't know how hard it is to break a 10BaseT network with
> good net management software running the media, but with
> traditional thicknet/thinnet it is so easy it isn't even a
> challenge.

The most important part of network management software does not come in the
box. It is the inclination of the operators to notice and fix problems.
The second most important part is even harder to find. It is the human
knowledge to recognize what is or is not a significant problem. I wouldn't
consider taking such a challenge if you were manning your software. It would be
an easy win for too many real customers.


vjs

Thomas Eric Brunner

Feb 11, 1992, 1:37:34 PM2/11/92
to

Lon,

I mentioned "what ought management do and where should it reside" because
there was considerable mention of token-ring management capabilities, which,
if not set in the context of the larger problem of managing multiple-media,
multiple-protocol sites, defaults to an implicit "buy token-ring as it is
manageable" endorsement, which is as limited a justification as using raw
frame size or rates.

As you suggested, I have "had fun sometimes", at IBM Raleigh, lecturing
on the mechanisms of failure in large meshed SRB rings (having to do with
an undocumented feature of one bit in the RIF and an ambiguity in rfc1042
dealing with ARP in 802.5 media) to a mixed audience from the network
products division. (Sorry all, my fingers typed "SRT" when my brain meant
"SRB" in an earlier post -- another circus in 802.1/802.5 land.)

I thank you for your reply to my query about dynamic behavior but I was
hoping for a little more detail. I'll take my usual drawing of a handful
of Cheerios (representing a mesh SRB ring) and see if after a bit of thought
I get anything new from your discussion.

On the "Last Subscriber Loop Survey", could you provide me with the pub
number? I hate wading through AT&T and BellCore catalogs, and it could
be interesting as you mention.

You cited the portion of the BMWG draft which discussed the interaction of
route caching and the routing protocol after the next-hop address had been
determined, writing:

> Routing cache? You use routing cache? Seriously, by even using
> the term you have implied a routing protocol....

I thought that it was clear that the BMWG are attempting to clarify the
issues (find out what is measurable, what measurements are significant),
for things called "routers", which usually run some dynamic adaptive
routing algorithm. Because there are limited means, the BMWG has focused
on ip routers, so I guess some ip routing protocol is implied. I hope that
this doesn't mean that our work is worthless.

In Marco Hyman's helpful follow-up with the citations for the Boggs paper,
(my copies of the earlier articles in this thread have "expired", so this
is by memory, I apologize in advance for any errors), I thought he was
expanding on Rob Warnok's reply to your posting, which was a follow-up
to my earlier posting. If memory serves, you read my posting as discussing
a single-station test, and were corrected in that reading by Rob, Marc and
myself, with mention of multi-station and production network performance
studies.

Whether it (the Boggs/Mogul/Kent study, or the CSRG paper whose cite I still
can't find -- the monitoring code on an Ultrix host was memorable in 1988!,
or Van's published papers) was a "single station" test, as you appear to
have originally read, or a "two station" test, as you have subsequently read,
was the point -- not whether the networks approximated "practical" nets
in the "real" world. Please have the grace not to shift your own point
without expressing appreciation for the people who have tried to put new
knowledge into your hands. I know from my own work and talking with
Scott Bradner, that one of your implicit points, that production networks
are difficult to characterize for benchmarking purposes, is known to two or
three of your colleagues. The same applies to knowing what are the "right"
(useful in some sense) questions to ask.

Could you also please get off the moral high ground? We (Rob, Vernon, Sam,
Paul, you and I) all work within the general area, perhaps lunch or dinner
would de-toxify some of the subtext(s).

Lon Stowell

Feb 12, 1992, 9:19:16 PM2/12/92
to
In article <1992Feb11....@practic.com> bru...@practic.UUCP (Thomas Eric Brunner) writes:
>I mentioned "what ought management do and where should it reside" because
>there was considerable mention of token-ring management capabilities, which
>if not set in context of the larger set of problems management of multiple
>media, multiple protocol sites, defaults to a implicit "buy token-ring as
>it is manageable" endorsement, which is as limited a justification as using
>raw frame size or rates.

I disagree there. A manageable media, protocol stack,
(employee?), is an important criterion. I hate to give Netview a
plug, but the LACK of a similar rigid net manager capability in
some of the other protocol stacks has, IMHO, hurt their ability
to function in large corporate networks.....where good network
troubleshooters are very scarce, expensive, and in demand.

>
>I thank you for your reply to my query about dynamic behavior but I was
>hoping for a little more detail. I'll take my usual drawing of a handfull
>of cherios (representing a mesh SRB ring) and see if after a bit of thought
>I get anything new from your discussion.
>

If you can get TI's manual for 802.5 and 802.2 it has most of the
items I mentioned in it.


>On the "Last Subscriber Loop Survey", could you provide me with the pub
>number? I hate wading through AT&T and BellCore catalogs, and it could
>be interesting as you mention.
>

It was a 41000 series publication....pre-Green. I would post
over on the telecom or modem BBS to see if anyone has a copy
they will lend you...as I have tried to get another copy from
AT&T and have been spectacularly unsuccessful.

The old Bell folks used to publish tech papers on transmission
quality, actual service levels, etc. It was one of them...and
that is its exact title.

>Because there are limited means, the BMWG has focused
>on ip routers, so I guess some ip routing protocol is implied. I hope that
>this doesn't mean that our work is worthless.
>

I don't know of any other protocol stack that uses a routing
cache at the network layer. SNA with its "anal-retentive"
approach to networking requires explicit prior configuration for
darned near everything. Even the 2.1 Nodes do this today. APPN
does dynamic routing and resource location, but at a different
layer.

That's what I meant, a "routing cache" implies a protocol stack
which doesn't use explicit configured routing to function.
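i.e. the cache only makes sense in front of a table that some dynamic
routing machinery keeps current. A toy sketch (Python; all names and the
interface are mine, purely for illustration):

```python
# Toy sketch (all names mine): a next-hop cache in front of a table
# kept current by some dynamic routing protocol -- the cache is only
# meaningful because the table behind it can change underneath it.

class Router:
    def __init__(self, routing_table):
        self.routing_table = routing_table   # maintained by the protocol
        self.cache = {}                      # per-destination next hops

    def next_hop(self, dest):
        if dest not in self.cache:           # miss: do the full lookup
            self.cache[dest] = self.routing_table[dest]
        return self.cache[dest]

    def route_change(self, dest, gateway):
        self.routing_table[dest] = gateway   # protocol update...
        self.cache.pop(dest, None)           # ...invalidates the cache
```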

Lon Stowell

Feb 12, 1992, 9:07:50 PM2/12/92
to
In article <gu7...@sgi.sgi.com> v...@rhyolite.wpd.sgi.com (Vernon Schryver) writes:
>
>No, an error is an error. You don't let GM or Ford get away with
>"improperly assembled". Nor your doctor or dentist. Demand the same of
>your network installers.
>
This is a good idea if you are the person who can withhold
payment from a network installer. It falls apart in the real
world if you are just another vendor (or user) who can
demonstrate that there are network problems.....but have no real
power to get them fixed other than pointing them out.

>
>This differs from my experience. The network folks may know something is
>wrong, but often are so busy justifying new hardware and more head count,
>that they have no time for details like fixing problems.
>

A bit harsh, but not at all untypical. Let's just say that
"there might be other priorities involved".
However this paragraph doesn't jibe with your earlier one
demanding instant fixes of network problems....


>I'm sure you know the experience of the Big Customer System Manager who
>just KNOWS about networks, who, for example, KNOWS that late collisions are
>caused by implementation errors in your stations. (IRIS's are unlike most
>machines, and complain about late collisions. If I had one share of stock
>for every internal pitched battle where managers and others have demanded
>those messages be removed, ....)
>
>Too many network service groups are graded on how fast they grow and how
>much HiTech talk they use to snow their bosses, instead of the quality of
>their service.
>

You are making my point for me. In a lab environment where
everything is under control of "us good guys", you would not
have late collisions, crunched wires, flattened coax,
semi-operative stations, mixed V1 and V2 transceivers. However
in real-world commercial installations, none of these are
at all atypical.


>
>I don't know much about TR, but if it is at all similar to FDDI, I

Since the thread was T/R vs Ethernet, let's stick to those two.
Token Ring has two types of errors, isolating and
non-isolating. The non-isolating errors are a bit trickier to
get rid of (if they occur).

The very term "isolating error" is the clue if you are not
familiar with T/R. Each station on the ring knows exactly who
the station upstream of it is. All the stations report changes
in this upstream neighbor's identity to the ring manager
address. Just by logging these frames, you get a topology
list of the stations on the ring.

What an isolating error means is that the station reporting the
error knows that the "error" was caused by conditions between
the transmitter of its upstream neighbor and its own receiver.
It is able to be absolutely certain of this because the
upstream neighbor would have reported the error if it detected
it...
Since T/R error reports always include the upstream neighbor's
address, as soon as the hit occurs you know what sections of
cabling, MAU's, etc. are involved. i.e. it isn't rocket
science to figure out where the fault domain lies.

Each station in T/R has this "fink on thy upstream neighbor"
function....and the data from each station physically flows
THROUGH the downstream station. Each station is required to
constantly check the data for physical level errors (crc,
framing, code violations, etc.). Once the first station
detects a physical level violation, it marks the physical frame
trailer so downstream stations will know that the fault domain
would not include themselves....(unless they are the upstream
neighbor...but most drivers aren't that smart).

So any time a station sees a physical violation, if the error
flag is not already set, it can be pretty sure that the fault
domain is immediately isolated.
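Put together with the topology list, that error flag makes locating the
fault domain little more than a table lookup. A toy sketch (Python; names
and the function are mine, purely for illustration):

```python
# Toy sketch (names mine): given the downstream ring order and an
# isolating error report naming the reporter and its upstream neighbor
# (NAUN), the fault domain is the NAUN's transmitter, the reporter's
# receiver, and the cabling/MAU ports between them.

def fault_domain(ring, reporter, naun):
    i, j = ring.index(naun), ring.index(reporter)
    if i <= j:
        return ring[i:j + 1]
    return ring[i:] + ring[:j + 1]   # segment wraps past the list "start"

stations = ["A", "B", "C", "D"]      # downstream (data-flow) order
print(fault_domain(stations, reporter="D", naun="C"))   # ['C', 'D']
```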

In addition, the stations constantly measure the physical
impedance of their drop cable. If it goes out of bounds, the
station will drop off the network....in such a manner that
the station is electrically bypassed...and all its drop cabling
is also removed from the network.

>
>The difference is that it is very hard to completely trash an ethernet but
>very easy to destroy an FDDI ring.

I have never had any problems whatever trashing an ethernet.
Sometimes it has even been on purpose. >:-)

I have even managed to trash a few while working on OTHER
technology stuff that just happened to be under the same floor.

The main reason I am a fan of 10BaseT is that it is quite
difficult, with smart hubs, to trash an entire network....but
it is trivially easy with the coax forms of Ethernet.

>
>In other words, the common failure of ethernets is lost packets (e.g. late
>collisions) which are quite hard to chase down but do no more than reduce
>throughput. The common failure of rings is complete uselessness, which is
>usually fairly easy to diagnose.

The Token Ring is fairly similar....it was designed to work
almost error free or not work at all. But it was designed to
remove offending stations AND their wiring from the ring
immediately. With the newer Star Tek, Proteon, and IBM hubs, you
can even switch in redundant paths, or remove suspected stations
for isolation of "non-isolating" errors.

IMHO all Ethernet physical errors must be considered
"non-isolating"...except for 10BaseT.


>
>I think the management in hubs is vastly overrated, and their ability to
>break and trash an ethernet even more vastly unobserved. I daily use a big
>ethernet, involving many big hubs each supporting SNMP. They're both
>useless as diagnostic tools and the source of a large part, perhaps
>most of the problems.
>

We obviously disagree here. The ability of a smart hub to break
a network is more a function of the software and wetware
operating the hub's management functions. The ability of a smart
hub to eliminate hundreds of hours of beepernet is hard to
appreciate.....but you DO allude to it in your paragraphs above
about the difficulty of isolating physical level faults on a
large ethernet.

>
>We agree. I doubt we agree on whether a network which is more brittle is
>ever more desirable, other things like upper layers being equal.
>

It makes a good comparison point for SNA vs TCP/IP though. SNA
was designed to run only on highly reliable link layers. It is
notoriously intolerant of errors at those lower layers. TCP/IP
will run quite nicely on link layers with runs, hits, and
errors. You can see the different thinking by looking at WHERE
in the layers certain functions exist when comparing the two
protocol stacks.

When you run TCP/IP on a bullet-proof link layer you have a lot
of unnecessary overhead which reduces thru-put ON THOSE LINKS
compared to what you could get with a more streamlined set of
upper layers.

When you run SNA on flawed link layers, you really won't like
the results very much. IBM's QLLC (SNA on X.25) is one of the
more egregious examples of this, IMHO.

Which is the "better" protocol stack to me is a silly argument.
I prefer the one that doesn't require me as a user to give a
hot red rat feces which protocol stack I am using.

>
>The most important part of network management software does not come in the
>box. It is the inclination of the operators to notice and fix problems.

I could point out that GOOD network management software would
have priority filters, some part of an expert system, and would
make it darned difficult to ignore real severe problems.....
I have heard klaxons, sirens, even rooster crows....and have
seen screens that light up like artwork. Unfortunately most
net mgt software is designed by techies knowledgeable in the
field....not by ergonomics experts.

With a decent LAN it could also take recovery action and restore
service in the event of a hard failure. This is trivial with
T/R...care to try it with Ethernet (10BaseT and smart hubs would
be cheating....)?

Some of these packages exist....but darned few.


>The second most important part is even harder to find. It is the human
>knowledge to recognize what is or is not a significant problem. I wouldn't
>consider taking such a challenge if you were manning your software. It would be
>an easy win for too many real customers.

This to me would be a fault on the part of the net management
software. It should be customizable so the customer can note
which are and which are NOT critical errors.....and which station
is the VP's terminal.....other than that, the NMS software should
assist the UNTRAINED human in noting errors.

Thomas Eric Brunner

Feb 13, 1992, 7:49:41 PM2/13/92
to
Vernon wrote about the failure modes of each and the relative ease of figuring
out the cause(s), for the most part I agree, but (why agree when you can post
a vigorous disagreement?), I take exception to the notion that ring failure
is usually fairly easy to diagnose, except for small rings with a very small
number of source route bridges. Perhaps it is because I have a real shallow
learning curve, but I've found it challenging to try and debug ring failures
on rings approaching LAN_MAX_BRIDGE (8) size. I'd probably also have problems
with rings having miles of FOT-like bridges between campus sites.

Here's the nub as far as I'm concerned: In large rings, incrementally adding
stations until the failure occurs does not guarantee that the last station
added is the culprit. There are lots of interesting interactions between the
MAC traffic, timing, the link-level routing, and the network-level activity;
it is just more complicated than ethernet.

Of course I agree with everything else Vernon writes, and add what I hope is
a humorous note on the wonderfulness of hubs.

A client had re-homed two NFS clients; topologically the before and after
pictures were identical, one hop from the server. My client and I were trying
to determine why a make took an order of magnitude longer after than before.
It turned out that the hub now in use had SQE on... we figured this one out
today (no, it wasn't us; the telco people owned that hub...).

Score One Point for the hub-scorners.

At InterOp we've used telnet, ping, walkie-talkies, and the occasional hammer;
this got made into a shirt. We were sort of reactionary: the previous year
some ditz at InterOp let all the SNMP management station vendors put their
cute, colorful heat emitters in our limited space-and-circulation NOC, and
then we had to fend off the journalists wanting to know which one we liked
best...

I think that SNMP is the right answer, but thus far I'm a believer, not a user
(well, that's not true either; I mean I use SNMP widgetry, I just don't reach
for it first when I notice a problem).

Vernon Schryver

unread,
Feb 15, 1992, 1:42:22 AM2/15/92
to
In article <1992Feb14.0...@practic.com>, bru...@practic.com (Thomas Eric Brunner) writes:
> ...

> Here's the nub as far as I'm concerned: In large rings, incrementally adding
> stations until the failure occurs does not guarantee that the last station
> added is the culprit....


Wow, what an understatement.

I guess you paid more attention to the "ring building" marathons of a
certain multi-vendor ring demo at a certain network show in the last couple
of years than I thought.


Vernon Schryver, v...@sgi.com

Thomas Eric Brunner

unread,
Feb 18, 1992, 2:59:37 PM2/18/92
to

Vernon may be referring to three of the past four InterOp shows, where I
appeared to have some (probably unhealthy) relationship with multi-vendor FDDI
"try-it" rings. Naturally I paid attention, I wanted to be sure that I knew
when to run and hide, if necessary. The same applied to the other thingies
anyone may claim to have seen me near.

Ring building at previous InterOp shows is similar to approaching the point
of failure of once-operational rings, so it isn't a completely vacuous
exercise, aside from its obvious intent as a "one of everything" ring.
