
autonomous / fast switching ???


bjo...@netcom.com

Jan 10, 1996
What is the difference between autonomous and fast switching?? The
cisco documentation is vague on this!

Derek Williams,


Tony Li

Jan 10, 1996

> What is the difference between autonomous and fast switching?? The
> cisco documentation is vague on this!

The _functional_ difference is that autonomous is much faster and does not
use the main CPU at all.

Tony

Colin Wu

Jan 11, 1996
On Wed, 10 Jan 1996 18:10:12 -0800, Tony Li wrote:

> Subject: Re: autonomous / fast switching ???
> To: bjo...@netcom.com
> cc: ci...@spot.Colorado.EDU

Under what situations would it not be possible to use autonomous switching, or
do such situations even exist?

---
Colin Wu
Network Analyst
Computing & Information Services
McMaster University
(905) 525-9140 ext 24050
"Words are like fingers pointing at the moon.
Once you see the moon, fingers are no longer needed."


Tony Li

Jan 11, 1996

>The _functional_ difference is that autonomous is much faster and does not
>use the main CPU at all.

> Does this correlate with the following terminology?
> If not please give more details.

No, not at all. We were talking about routing. The following is some
marketing misinformation. In particular the comments about latency and its
ties to performance are wholly incorrect.

Tony

- Forwarding technology. Vendors have been debating the virtues of
cut-through vs. store-and-forward since Ethernet switching
began. Cut-through switching provides high performance by minimizing
latency: the time it takes a packet to pass through a switch. However,
it does not check for packet errors, thereby potentially wasting
bandwidth. Store-and-forward switching filters packet errors, but
increases latency. That hurts performance but boosts network
reliability. Fortunately, there is an alternative to this performance
vs. reliability dilemma. Today's most flexible LAN switches offer a
hybrid mode called adaptive cut-through switching, which provides a
combination of high performance and robust reliability.

Adaptive cut-through switching provides automatic, real-time support of both
cut-through and store-and-forward modes on a per-port basis. On power-up,
the switch configures all ports to cut-through mode. That offers the low
latency needed for large networks with cascaded switches or real-time
traffic. During operation, if packet errors occur, the switch automatically
sets the port to store-and-forward mode to keep errors from propagating
across network segments. Once error statistics fall below a configurable
threshold, the port automatically returns to cut-through mode, all with no
required intervention by network management.

Tony Li

Jan 11, 1996

> The _functional_ difference is that autonomous is much faster and does not
> use the main CPU at all.

> Under what situations would it not be possible to use autonomous
> switching, or do such situations even exist?

Certain features cannot be autonomously switched. Even in this case, if
you configure autonomous switching, it will simply not take effect. The
primary case where you don't want to use autonomous switching is if you
have low bandwidth serial lines. In this case, process switching will give
you much more buffering.

Tony

Keith Fruge

Jan 11, 1996
At 06:10 PM 1/10/96 -0800, Tony Li wrote:
>
> What is the difference between autonomous and fast switching?? The
> cisco documentation is vague on this!
>
>The _functional_ difference is that autonomous is much faster and does not
>use the main CPU at all.
>
>Tony
>
>
>
>

Does this correlate with the following terminology?
If not please give more details.

- Forwarding technology. Vendors have been debating the virtues of
cut-through vs. store-and-forward since Ethernet switching began.
Cut-through switching provides high performance by minimizing latency:
the time it takes a packet to pass through a switch. However, it does not
check for packet errors, thereby potentially wasting bandwidth.
Store-and-forward switching filters packet errors, but increases latency.
That hurts performance but boosts network reliability. Fortunately, there
is an alternative to this performance vs. reliability dilemma. Today's
most flexible LAN switches offer a hybrid mode called adaptive
cut-through switching, which provides a combination of high performance
and robust reliability.

Adaptive cut-through switching provides automatic, real-time support of both
cut-through and store-and-forward modes on a per-port basis. On power-up,
the switch configures all ports to cut-through mode. That offers the low
latency needed for large networks with cascaded switches or real-time
traffic. During operation, if packet errors occur, the switch automatically
sets the port to store-and-forward mode to keep errors from propagating
across network segments. Once error statistics fall below a configurable
threshold, the port automatically returns to cut-through mode, all with no
required intervention by network management.
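
For what it's worth, the per-port behavior described above boils down to
something like the following minimal Python sketch. The class name, window
size, and error threshold are invented for illustration and are not taken
from any particular switch.

# Sketch of per-port adaptive cut-through: the port starts in cut-through
# mode and falls back to store-and-forward when the measured error rate
# crosses a configurable threshold, returning once it drops again.

CUT_THROUGH = "cut-through"
STORE_AND_FORWARD = "store-and-forward"

class AdaptivePort:
    def __init__(self, error_threshold=0.01, window=1000):
        self.mode = CUT_THROUGH                 # ports power up in cut-through
        self.error_threshold = error_threshold  # configurable error rate
        self.window = window                    # frames per measurement window
        self.frames = 0
        self.errors = 0

    def frame_seen(self, fcs_ok):
        """Record one forwarded frame and whether its checksum was good."""
        self.frames += 1
        if not fcs_ok:
            self.errors += 1
        if self.frames >= self.window:
            self._reevaluate()

    def _reevaluate(self):
        error_rate = self.errors / self.frames
        if error_rate > self.error_threshold:
            self.mode = STORE_AND_FORWARD       # stop propagating bad frames
        else:
            self.mode = CUT_THROUGH             # resume low-latency forwarding
        self.frames = 0
        self.errors = 0

# Example: a burst of bad frames flips the port to store-and-forward.
port = AdaptivePort()
for _ in range(1000):
    port.frame_seen(fcs_ok=False)
print(port.mode)    # store-and-forward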

======================================
keith...@ins.com
International Network Services
Voice Mail (214) 392-3545 EXT. 194
Pager (800) 796-7363 Pin#1026275

Geaux Cajuns!!!


Tony Li

Jan 14, 1996

> But if the switch is checking for errors while it is in 'cut-through'
> mode it must also be in some kind of 'store-and-forward' mode, hence
> inducing an overhead which it is trying to lose?

Gotta be careful about that word "overhead". Yes, it can verify the link
layer checksum as it switches the packet. Recall that cut-through can AT
BEST only decrease the latency (which is irrelevant if you're using a
decent sliding window protocol). Note that a cut-through switch which
receives a frame and discovers that the link layer checksum is incorrect is
in a bind: it's already started to transmit the frame out another
interface. Most switches in this case will go ahead and transmit a broken
frame, and pray for the end system to notice.

Tony

tonyb

Jan 14, 1996
Tony Li wrote:
>
>
> >The _functional_ difference is that autonomous is much faster and does not
> >use the main CPU at all.
>
> Does this correlate with the following terminology?
> If not please give more details.
>
>No, not at all. We were talking about routing. The following is some
>marketing misinformation. In particular the comments about latency and its
>ties to performance are wholly incorrect.
>
>Tony
>
> - Forwarding technology. Vendors have been debating the virtues of
> cut-through vs. store-and-forward since Ethernet switching
> began. Cut-through switching provides high performance by minimizing
> latency: the time it takes a packet to pass through a switch. However,
> it does not check for packet errors, thereby potentially wasting
> bandwidth. Store-and-forward switching filters packet errors, but
> increases latency. That hurts performance but boosts network
> reliability. Fortunately, there is an alternative to this performance
> vs. reliability dilemma. Today's most flexible LAN switches offer a
> hybrid mode called adaptive cut-through switching, which provides a
> combination of high performance and robust reliability.
>
> Adaptive cut-through switching provides automatic, real-time support of both
> cut-through and store-and-forward modes on a per-port basis. On power-up,
> the switch configures all ports to cut-through mode. That offers the low
> latency needed for large networks with cascaded switches or real-time
> traffic. During operation, if packet errors occur, the switch automatically
> sets the port to store-and-forward mode to keep errors from propagating
> across network segments. Once error statistics fall below a configurable
> threshold, the port automatically returns to cut-through mode, all with no
> required intervention by network management.
>

But if the switch is checking for errors while it is in 'cut-through'
mode it must also be in some kind of 'store-and-forward' mode, hence
inducing an overhead which it is trying to lose? Does anyone know how
this works in practice?

Thanks


Tony Barber Phone : +44(0)1223 250122
Network Support UnipalmPIPEX
**** Please send ALL support related email to sup...@pipex.net ****


Brett Frankenberger

Jan 15, 1996
In article <8216680...@news.Colorado.EDU>, tonyb <to...@pipex.net> wrote:
>>
>> Adaptive cut-through switching provides automatic, real-time support of both
>> cut-through and store-and-forward modes on a per-port basis. On power-up,
>> the switch configures all ports to cut-through mode. That offers the low
>> latency needed for large networks with cascaded switches or real-time
>> traffic. During operation, if packet errors occur, the switch automatically
>> sets the port to store-and-forward mode to keep errors from propagating
>> across network segments. Once error statistics fall below a configurable
>> threshold, the port automatically returns to cut-through mode, all with no
>> required intervention by network management.
>>
>
>But if the switch is checking for errors while it is in 'cut-through'
>mode it must also be in some kind of 'store-and-forward' mode, hence
>inducing an overhead which it is trying to lose? Does anyone know how
>this works in practice?

It *could* store the entire packet, as well as cut it through, but
that's not necessary. You can calculate the CRC/checksum in "real
time" as the data comes in, and then compare it to what it's supposed
to be.

Token-Ring adapters have been doing it (real time CRC verification) for
years -- they calculate, in real time, a CRC, as the bits fly by the
adapter (they only insert 3 bits or so of delay) ... then they compare
the calculated CRC to the real CRC as it goes by (in real time), and if
it doesn't match, they set an error bit in the last byte of the frame
(again, in real time).
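
A rough Python sketch of the same idea, with zlib.crc32 standing in for the
hardware CRC circuit (real Ethernet/Token-Ring FCS handling differs in bit
ordering and complementing). The point is just that the checksum is updated
as each chunk streams through, so nothing has to be buffered merely to
validate it; forward() is a placeholder for "transmit out the egress port".

import zlib

def forward(chunk):
    # Placeholder for transmitting the bytes out the egress port.
    pass

def stream_frame(chunks, expected_crc):
    """Forward chunks immediately while accumulating a running CRC.

    Returns True if the accumulated CRC matches the trailing checksum.
    By the time a mismatch is known, the data has already been sent on,
    which is exactly the cut-through bind Tony describes above.
    """
    running = 0
    for chunk in chunks:
        forward(chunk)                          # cut-through: send right away
        running = zlib.crc32(chunk, running)    # update CRC as bytes fly by
    return running == expected_crc

# Example
frame = b"some frame body"
good_crc = zlib.crc32(frame)
print(stream_frame([frame[:5], frame[5:]], good_crc))      # True
print(stream_frame([frame[:5], frame[5:]], good_crc ^ 1))  # False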
--

- Brett (bre...@netcom.com)

------------------------------------------------------------------------------
... Coming soon to a | Brett Frankenberger
.sig near you ... a Humorous Quote ... | bre...@netcom.com

Charles Hedrick

Jan 17, 1996
Keith Fruge <Keith...@ins.com> writes:

>> What is the difference between autonomous and fast switching?? The
>> cisco documentation is vague on this!

>Does this correlate with the following terminology?
>If not please give more details.

...


>- Forwarding technology. Vendors have been debating the virtues of
>cut-through vs.
>store-and-forward since Ethernet switching began. Cut-through switching

...

No. Routers all do store and forward (as far as I know). The basic
process is receive the packet into a buffer, look at the destination
address to figure out which interface it should go out, maybe copy the
packet between buffers, and send it. In the original Cisco routers,
All of this was done in a monolithic piece of code running on a 68000
processor. In more recent hardware, more and more of the computation
has been moving into special-purpose hardware, and there has been
increased sharing of buffers to reduce the number of times a packet
has to be copied. (Initially each packet had to be copied twice. In
a lot of the recent setups, it is never copied.) Through all of this,
the technology remains store and forward. However, there are at least
three benefits:

- reduced latency between the end of reception of the packet
and the beginning of transmitting it out the destination interface
- ability to handle higher packet and media rates without degrading
performance
- more consistent latencies

In fact, the original design was fast enough to provide negligible
latency for the media speeds and configurations commonly used. The more
sophisticated designs preserve this as the number of interfaces and
their speed goes up.
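
A toy Python sketch of that receive/lookup/transmit cycle, for concreteness.
The Packet and Interface objects and the exact-match table are invented for
illustration and ignore buffer sharing, queueing, and everything else a real
router does.

from collections import namedtuple

Packet = namedtuple("Packet", ["dest", "payload"])

class Interface:
    """Toy interface: receive pops from an input queue, transmit records."""
    def __init__(self, name):
        self.name = name
        self.rx_queue = []
        self.sent = []

    def receive(self):
        return self.rx_queue.pop(0)

    def transmit(self, packet):
        self.sent.append(packet)

def forward_packet(in_if, routing_table):
    packet = in_if.receive()                  # 1. whole packet buffered first
    out_if = routing_table.get(packet.dest)   # 2. destination lookup
    if out_if is not None:                    # 3. send out the chosen interface
        out_if.transmit(packet)               #    (no route: drop, ICMP omitted)

# Example
e0, e1 = Interface("Ethernet0"), Interface("Ethernet1")
table = {"10.0.1.5": e1}
e0.rx_queue.append(Packet("10.0.1.5", b"hello"))
forward_packet(e0, table)
print(e1.sent[0].dest)    # 10.0.1.5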

>Store-and-forward switching filters packet errors, but increases latency.
>That hurts
>performance but boosts network reliability. Fortunately, there is an
>alternative to this
>performance vs. reliability dilemma. Today's most flexible LAN switches
>offer a hybrid mode called adaptive cut-through switching, which provides a
>combination of high performance and robust reliability. ...

>During operation, if packet errors occur, the switch automatically
>sets the port to store-and-forward mode to keep errors from propagating
>across network segments. Once error statistics fall below a configurable

This passage reads like marketing material. It's not entirely clear
to me that there is a noticeable performance degradation from store and
forward. The Rutgers computer science dept. has a network based on
two Cisco 5000's (in different buildings, connected by two duplex Fast
Ether lines) and an AGS+ with 12 Ethernets and an FDDI going out to
the rest of the world. (Clearly we're going to have to upgrade the
router shortly.) Typical packets go through the 5000 twice (on
different VLAN's) and the AGS+ once. If the other building is
involved, they go through a 5000 three times. Using ping times and
perceived response, I haven't noticed any degradation when we
introduced the 5000's. Indeed under load there's an improvement,
since we were previously overloading the Ethernets.

Maybe I'm just old-fashioned, but I'm scared at the concept of trying
to debug network problems with a machine that is reconfiguring itself
while I'm trying to explore the problem. I'm also concerned about
errors that the machine doesn't realize are happening. One of the
functions of our 5000 is to break the network into fairly small parts
(a separate switch port for each server and for each 12-port hub going
to user machines). The hope is that this will prevent problems on one
portion of the net from affecting others, and allow us to identify
problems more easily. I feel safer with a box that is always store
and forward.

Of course cut-through would be impractical in our configuration
anyway. When we finish our server upgrades, most network traffic is
going to be between a server with a Fast Ethernet interface and a
client with a 10 Mbps Ethernet interface. As far as I know,
cut-through switching is not possible there.


Niels O. Brunsgaard (Manager Consulting Division)

Jan 20, 1996
Why is autonomous/silicon switching not on by default on high speed
interfaces?

>
> > The _functional_ difference is that autonomous is much faster and does not
> > use the main CPU at all.
>

Niels O. Brunsgaard (Manager Consulting Division)

Jan 21, 1996
Are you implying that autonomous/silicon switching does not always work?
How does one decide when to use it and when not to?

Niels

>
> Why is autonomous/silicon switching not on by default on high speed
> interfaces?
>

>Because the safety should be on by default. ;-)
>
>Tony
>


Tony Li

Jan 21, 1996

> Why is autonomous/silicon switching not on by default on high speed
> interfaces?
>
>Because the safety should be on by default. ;-)

> Are you implying that autonomous/silicon switching does not always work?

I'm not implying it, I'll say it right out: I've written bugs before, and
will again. ;-)

I, as one of the developers, would feel very uncomfortable with foisting
this switching code on the unsuspecting general public by default when,
in most cases, it's not really needed. The reliability that these code paths
have demonstrated is certainly not sufficient for the critical demands of
some of the customer base.

> How does one decide when to use it and when not to?

Probably the most critical issue is that of low-speed serial lines.
Frequently, these may need to be process switched for additional buffering.
Also, when using either autonomous or silicon switching, there are some
caching issues which you must contemplate. Autonomous switching is
inappropriate if your cache contains a very large number of destination
hosts. Silicon switching is appropriate unless your cache contains an
extremely large number of prefixes. The Internet routing table fits, but I
can certainly imagine larger....
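
To make the distinction concrete, here's a toy Python sketch contrasting the
two kinds of cache: an exact-match cache keyed by destination host versus a
prefix cache searched by longest match. The structures and names are invented
for illustration and are not meant to describe the actual cbus or SSE cache
formats.

import ipaddress

# Host cache: exact match, one entry per destination host ever seen.
host_cache = {"192.0.2.7": "Ethernet1"}

# Prefix cache: one entry per route, matched by longest prefix.
prefix_cache = {
    ipaddress.ip_network("192.0.2.0/24"): "Ethernet1",
    ipaddress.ip_network("0.0.0.0/0"): "Serial0",
}

def lookup_host(dest):
    """Misses for every host not yet cached; the cache grows per host."""
    return host_cache.get(dest)

def lookup_prefix(dest):
    """Longest-prefix match; the cache grows per route, not per host."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in prefix_cache if addr in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)
    return prefix_cache[best]

print(lookup_host("192.0.2.8"))       # None: this host isn't cached yet
print(lookup_prefix("192.0.2.8"))     # Ethernet1: covered by 192.0.2.0/24
print(lookup_prefix("198.51.100.1"))  # Serial0: falls to the default route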

Tony

Tony Li

Jan 21, 1996

> Why is autonomous/silicon switching not on by default on high speed
> interfaces?

Because the safety should be on by default. ;-)

Tony

Darren Turnbull

Jan 22, 1996

>I, as one of the developers, would feel very uncomfortable with foisting
>this switching code on the unsuspecting general public by default when in
>most cases, it's not really needed. The reliability that these code paths
>have demonstrated is certainly not sufficient for the critical demands of
>some of the customer base.
>


I thought Weighted Fair Queuing was on by default.
A different philosophy perhaps?

Darren

Sean Doran

Jan 27, 1996
In article <8222674...@news.Colorado.EDU> Tony Li <t...@cisco.com> writes:

tli>I'm not implying it, I'll say it right out: I've written bugs before, and
tli>will again. ;-)

Nah, I don't believe it.

Sean.
