
Comparing an old flow snapshot with some packet size data


Kent W. England

Aug 5, 1996, 3:00:00 AM
to big-in...@munnari.oz.au

Folks;

I did a little traffic comparison to see what I could glean from comparing
Sean Doran's flow stats posted last January with an unpublished analysis of
a snippet of FIX West data, collected by Kim Claffy at NLANR and analysed by
Jerry Scharf of the CIX.

Back in January, Sean Doran and Dorian Kim posted some cisco IP flow stats
to this list. I haven't seen any since, but my big-internet mail delivery
seems spotty so I may have missed some messages. I'd be interested in seeing
some more flow stats, if Sean or Dorian or anyone has been collecting more
data. Sean or Dorian, would you care to post some more flow stats?

Kim Claffy collected 15 minutes of traffic data from FIX West on 12 Feb 96
and Jerry Scharf analyzed the packet size distribution of that sample. I
used this data in a paper I recently finished on WAN protocol overhead.
Here's a portion of the packet size histogram from this data. Only packet
sizes that exceed 1% of the total traffic over this fifteen minute period
are listed, although Jerry's data contains counts of all the traffic that
Kim collected.

Packet size (bytes)    Percent of packets
        40                  30.55%
        41                   1.51%
        44                   3.04%
        72                   4.10%
       185                   2.72%
       296                   1.48%
       552                  22.29%
       576                   3.59%
      1500                   1.51%

Each of the other packet sizes accounts for less than 1% of the total, but
together they add up to about 29% of the traffic. There were almost no packets
larger than 1500 bytes, and that remaining 29% was scattered over the whole
range up to 1500 bytes. Jerry has a perl script that does a "what if"
calculation on
what the WAN protocol overhead would be if all this traffic were HDLC or FR
or ATM, but so far he hasn't published anything.
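
The kind of "what if" calculation I mean is roughly the following. This is a
Python sketch of my own, not Jerry's perl script; the per-packet framing
overheads are rough assumptions, and it only covers the >1% bins above, so the
numbers are purely illustrative:

    # Rough "what if" WAN overhead calculation over the packet size histogram.
    # A sketch, not Jerry's perl script; the per-packet framing overheads are
    # assumptions, and only the >1% bins are included.
    import math

    histogram = [(40, 30.55), (41, 1.51), (44, 3.04), (72, 4.10), (185, 2.72),
                 (296, 1.48), (552, 22.29), (576, 3.59), (1500, 1.51)]

    def hdlc_bytes(ip_len, framing=7):       # assume ~7 bytes of HDLC framing/packet
        return ip_len + framing

    def atm_aal5_bytes(ip_len, llc_snap=8):  # assume LLC/SNAP encapsulation over AAL5
        payload = ip_len + llc_snap + 8      # plus the 8-byte AAL5 trailer
        cells = math.ceil(payload / 48)      # pad up to whole 48-byte cell payloads
        return cells * 53                    # each cell carries a 5-byte header

    ip   = sum(pct * size for size, pct in histogram)
    hdlc = sum(pct * hdlc_bytes(size) for size, pct in histogram)
    atm  = sum(pct * atm_aal5_bytes(size) for size, pct in histogram)
    print("HDLC overhead: %.1f%%" % (100 * (hdlc - ip) / ip))
    print("ATM AAL5 overhead: %.1f%%" % (100 * (atm - ip) / ip))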

The most interesting thing to me is that the most common traffic is probably
file transfer (whether HTTP or FTP), since the 552 bytes corresponds to a
TCP payload of 512 bytes, the largest power of two smaller than the IP
default MTU of 576. About 30% of the packets carry a zero-byte TCP payload,
corresponding to all the connection setup and flow control traffic for all
those file transfers going on.
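
As a back-of-the-envelope check (assuming plain 20-byte IP and 20-byte TCP
headers, no options):

    # Back-of-the-envelope check: 20-byte IP + 20-byte TCP headers, no options.
    IP_HDR, TCP_HDR = 20, 20
    print(512 + TCP_HDR + IP_HDR)    # 552: the big spike, a 512-byte TCP payload
    print(0 + TCP_HDR + IP_HDR)      # 40: the other spike, zero-payload SYN/ACK/FIN packets
    print(576 - IP_HDR - TCP_HDR)    # 536: the most you could carry at the 576 default MTU;
                                     # 512 is the largest power of two that fits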

To recall what Sean originally posted in January:
-------------------begin-------
This is from a fairly small-traffic router (sl-kc-2.sprintlink.net),...

Sean.
- --
IP Flow Switching Cache, 29999 active, 2769 inactive, 58411388 added
1418487 lru, 22352334 timeout, 20923593 tcp fin, 2633568 invalidates
5253815 dns, 5799592 resent syn, 0 counter wrap
statistics cleared 141949 seconds ago

Protocol Total Flows Packets Bytes Packets Active(Sec) Idle(Sec)
-------- Flows /Sec /Flow /Pkt /Sec /Flow /Flow
TCP-Telnet 267034 1.8 233 75 439.3 182.6 36.5
TCP-FTP 1030837 7.2 10 78 76.6 22.6 43.7
TCP-FTPD 554967 3.9 164 345 641.3 52.7 15.7
TCP-WWW 32107858 226.2 15 247 3610.6 13.5 28.1
TCP-SMTP 3526231 24.8 13 159 323.1 10.2 23.6
TCP-X 9600 0.0 121 129 8.2 148.2 55.1
TCP-BGP 111096 0.7 14 77 11.5 229.2 61.1
TCP-other 5729172 40.3 70 220 2858.1 71.0 41.3
UDP-TFTP 2398 0.0 3 62 0.0 13.4 69.5
UDP-DNS 12875077 90.7 2 110 195.4 5.4 43.6
UDP-other 1489072 10.4 30 293 321.8 28.5 68.7
ICMP 665771 4.6 13 259 62.8 75.5 66.8
IGMP 5144 0.0 18 278 0.6 82.4 64.3
IPINIP 4450 0.0 933 377 29.2 166.7 61.0
IP-other 2693 0.0 11 136 0.2 80.8 65.7
Total: 58381400 411.3 20 227 8579.4 0.0 0.0
------------------------end--------

I would say that these two different sets of statistics are roughly consistent.
(Note that neither one represents a lot of data. The FIX West data covers only
15 minutes, and Sean's covers roughly 141949 seconds, about a day and a half.)

Note the small number of packets per flow for WWW and FTP in Sean's data,
from 10 to 15 for each flow. I don't understand the 78 bytes/pkt for FTP,
but the WWW bytes/pkt of 247 is roughly consistent with the packet
distribution of 30% at 40 bytes and 22% at 552. If I average 40 and 552 I
get 296, near to 247. It's rough, but sensible.

With all appropriate caveats about the limited sample size, the majority of
the TCP flows are WWW or FTP file transfers with a data payload of about 512
bytes (from the Claffy/Scharf data) and about 15 packets per flow
(from Sean's data). If I assume it takes 2 empty packets to open the
connection, 6 packets of data, 5 ACKs back, and 2 more empty packets to
close, then we have a file size of about 6*512 or 3100 bytes. [I could be
off on those counts, but not by much.]
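
Spelled out, with my packet-count guesses above (they could be off a little,
as I said):

    # File size guess from the packet-count assumptions above (guesses, not measurements).
    setup, data_pkts, acks, teardown = 2, 6, 5, 2
    payload = 512                                # bytes of TCP payload per data packet
    print(setup + data_pkts + acks + teardown)   # 15 packets, close to Sean's 10-15 per flow
    print(data_pkts * payload)                   # 3072 bytes, i.e. roughly a 3 KB transfer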

Therefore, the average or most common Web/FTP file size transferred is about
3000 bytes. Simon Spero's trace analysis of an HTTP page load (available at
the W3C web site) is remarkably similar.

All in all, these three data sources (Claffy/Scharf, Doran, Spero) seem
relatively consistent. An overwhelming amount of the flows in the Internet
seem to be small file transfers, the TCP payload for this traffic is mostly
<=512 bytes, when it could easily be <=1460 bytes. And slow start adds at
least one extra RTT to each transfer that might be avoided if the payloads
were 1460 instead of 512.
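
To make the slow start point concrete, here is a rough round-trip count for a
given transfer size and MSS. It's a sketch that assumes the congestion window
starts at one segment and doubles each RTT, and it ignores the handshake,
delayed ACKs and losses:

    # Rough slow start round-trip count; cwnd starts at one segment and doubles
    # each RTT. Ignores the SYN handshake, delayed ACKs and losses.
    import math

    def slow_start_rtts(nbytes, mss):
        segments = math.ceil(nbytes / mss)
        rtts, cwnd = 0, 1
        while segments > 0:
            segments -= cwnd
            cwnd *= 2
            rtts += 1
        return rtts

    print(slow_start_rtts(3000, 512))    # 3 RTTs for a ~3 KB transfer in 512-byte segments
    print(slow_start_rtts(3000, 1460))   # 2 RTTs with 1460-byte segments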

Would there be any improvement if hosts used path MTU discovery, or would it
add up to about the same thing? I'm not sure whether you can do path MTU
discovery at the same time you are starting a TCP session or whether, as is
more likely, it is a separate process and uses an RTT or more before
starting the TCP session.

Now, is there more data to bolster or refute these conclusions? I've done
what I can with what I've found, but there just isn't much data to go on
anymore. But I think it is pretty consistent with the view that a lot of the
traffic is WWW TCP sessions of a few kilobytes. Would you agree?

--Kent


(Please note that as far as I know neither Kim nor Jerry have published
anything from this data, so don't bug them for information or hold them
responsible in any way for what I did with it.)


Greg Minshall

Aug 5, 1996, 3:00:00 AM
to Kent W. England

Kent,

Answering your questions/observations will take some thinking (which i will
do), but pointing you at some earlier work is fairly easy. Try looking at

http://www.nlanr.net/NA/Learn/packetsizes.html

(You might also be interested in some analysis we've done locally; try looking
at http://www.ipsilon.com/aboutipsilon/staffpages/pn/papers/interop96.ps.)

Greg

John Hawkinson

Aug 6, 1996, 3:00:00 AM
to Kent W. England, big-in...@munnari.oz.au

> From: "Kent W. England" <k...@6sigmanets.com>
> Subject: Comparing an old flow snapshot with some packet size data

> Back in January, Sean Doran and Dorian Kim posted some cisco IP flow stats
> to this list. I haven't seen any since, but my big-internet mail delivery
> seems spotty so I may have missed some messages. I'd be interested in seeing
> some more flow stats, if Sean or Dorian or anyone has been collecting more
> data. Sean or Dorian, would you care to post some more flow stats?

Just to provide you with a lack of baselines for comparison (:-)), here
are the top packet sizes on one of our transit FDDI rings between
1200 and 1300 EDT today:

Size %Packets %Bytes
40 36.4837 4.5397
552 19.2812 33.1087
576 9.56957 17.1468
1500 4.84203 22.5937
44 4.00251 0.547841
41 2.48799 0.317323
52 0.573903 0.0928349
60 0.505717 0.0943905
48 0.484214 0.0723015
72 0.467757 0.104766
56 0.435227 0.0758182
42 0.400778 0.0523627
296 0.340765 0.313773
84 0.326612 0.0853456
45 0.305438 0.0427568
43 0.297758 0.0398292
588 0.297319 0.543838

Binning to histograms of 10 bytes

Size %Packets %Bytes
40-50 45.020109 5.693618
550-560 19.326785 33.187304
570-580 9.604513 17.209135
1500-1510 4.842249 22.594727
50-60 2.133512 0.362158
60-70 1.659997 0.326584
70-80 1.633336 0.374121
80-90 1.030689 0.269704
290-300 0.555307 0.510068
140-150 0.522942 0.234360
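
The binning itself is nothing fancy; roughly this (a Python sketch over
made-up per-size counts, not our actual post-processing):

    # Bin per-packet-size counts into 10-byte histogram buckets.
    # A sketch over made-up (size, packets) pairs, not our actual tooling.
    from collections import defaultdict

    raw = [(40, 364837), (44, 40025), (552, 192812), (576, 95695), (1500, 48420)]

    bins = defaultdict(int)
    for size, pkts in raw:
        bins[(size // 10) * 10] += pkts        # 40 -> the 40-50 bucket, 552 -> 550-560, ...

    total = sum(pkts for _, pkts in raw)
    for lo in sorted(bins, key=lambda b: -bins[b]):
        print("%d-%d  %.4f" % (lo, lo + 10, 100.0 * bins[lo] / total))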

> Would there be any improvement if hosts used path MTU discovery, or would it
> add up to about the same thing? I'm not sure whether you can do path MTU
> discovery at the same time you are starting a TCP session or whether, as is
> more likely, it is a separate process and uses an RTT or more before
> starting the TCP session.

There would be QUITE A LOT of improvement if everyone used Path MTU
Discovery. There would be quite a lot of improvement if everyone
changed the TCP default MSS on their unix boxes to 1460 instead of
576.

In the former case, most implementations assume that the interface
MTU minus the IP header is the maximum length, and will send that as the MSS
when they open a TCP connection. They will send any data up to that
size in a single packet with the DF bit set, and will only fragment
if they get back an indication that such is necessary. There are
few enough links in the Internet that don't support a 1500-byte
MTU that it's well worth the occasional extra RTT. Further, those hosts that
don't have 1500-byte MTUs tend to be behind slow links (i.e. dialup
links) where an extra RTT is probably not all that significant. This
is the standard way of implementing PMD, and it's how it works
in Solaris, for instance. There is no initial-RTT cost for setup in the
general (non-fragmented) case.
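
For what it's worth, on stacks that expose the knob per socket, turning this
on looks roughly like the following (a Linux-flavored Python sketch; the
constants and behaviour are OS-specific, and the destination is just a
placeholder):

    # Ask for path MTU discovery on a TCP socket -- a Linux-specific sketch.
    # "Do PMTU discovery" sets the DF bit, so routers send back "fragmentation
    # needed" instead of fragmenting, and the kernel caches the learned path MTU.
    # The numeric fallbacks are the values from <linux/in.h>.
    import socket

    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO  = getattr(socket, "IP_PMTUDISC_DO", 2)
    IP_MTU          = getattr(socket, "IP_MTU", 14)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect(("example.com", 80))                     # hypothetical destination
    print(s.getsockopt(socket.IPPROTO_IP, IP_MTU))     # kernel's current path MTU estimate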

If you don't have PMD and just up the max segment size, you do the
same thing except you don't set Don't Fragment on your packets. This
may actually be more efficient because it causes fragmentation
to happen at the places in the network where low-MTU links exist.
If you assume that those are few and far between, and are special
cases that should be willing to bear the cost of doing fragmentation
themselves, this is a good thing. It doesn't work so well if your
host is FDDI-connected, because many Internet links can't support
the FDDI MSS. But you can set your FDDI link to the Ethernet MSS
and still see a good improvement. Of course, this methodology doesn't
work for IPv6, but PMD is required there, anyhow.
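
And the just-up-the-MSS variant is a one-line per-socket change on most stacks
(again only a sketch; whether the kernel honours it, and what the system-wide
default is, are OS-dependent, and the destination is a placeholder):

    # Advertise a 1460-byte MSS instead of the old 512/536 default -- a sketch;
    # TCP_MAXSEG support and the system-wide default MSS are OS-dependent.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1460)   # Ethernet MTU minus 40
    s.connect(("example.com", 80))                              # hypothetical destination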

--jhawk
John Hawkinson

Dorian R. Kim

Aug 6, 1996, 3:00:00 AM
to Kent W. England, big-in...@munnari.oz.au

On Mon, 5 Aug 1996, Kent W. England wrote:

> Back in January, Sean Doran and Dorian Kim posted some cisco IP flow stats
> to this list. I haven't seen any since, but my big-internet mail delivery
> seems spotty so I may have missed some messages. I'd be interested in seeing
> some more flow stats, if Sean or Dorian or anyone has been collecting more
> data. Sean or Dorian, would you care to post some more flow stats?

I have some stats that were collected by OSU off our router. I would need to
get clearance from OSU to post that.

I'm waiting for installation of an ultrasparc with lots of disc before I can
go back to doing anything real with flow data. There is also a question as to
where I can do what I'm doing with various Cisco bits and have flow info at
the same time.

Nevil Brownlee from NZ and Mark Fullmer from OSU are also doing some analysis
with this data. I don't think they've gotten very far yet, however; I gather
that the things that pay their bills are taking up the majority of their
time. ;)

-dorian


Michael A. Patton

Aug 6, 1996, 3:00:00 AM
to big-in...@munnari.oz.au

> Date: Mon, 05 Aug 1996 17:08:36 -0700
> From: "Kent W. England" <k...@6sigmanets.com>

> ... I don't understand the 78 bytes/pkt for FTP, [in Sean's sample]

Note that this is on the FTP control port. The actual data is on
other ports and can't easily be recognized... I'm not sure how to
interpret that, as it's less than I would expect (the FTP control
connection has both IP and TCP overhead on every packet, and I
would expect more than half the packets to carry either an FTP command
or a response; this makes 78 seem too small to me on first thought).

> but the WWW bytes/pkt of 247 is roughly consistent with the packet
> distribution of 30% at 40 bytes and 22% at 552. If I average 40 and 552 I
> get 296, near to 247. It's rough, but sensible.

Or to do a pro-rated average (probably a little better, although still
only an estimate): (30*40+22*552)/(30+22) => 256 (even closer to 247
and an interesting number in its own right :-).
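
Or, going one step further with all the >1% bins from Kent's table (still
ignoring the ~29% of packets scattered over other sizes, so it's only rough):

    # Pro-rated mean packet size over the >1% bins from Kent's FIX West table.
    # The ~29% of packets scattered over other sizes are ignored, so this is rough.
    bins = [(40, 30.55), (41, 1.51), (44, 3.04), (72, 4.10), (185, 2.72),
            (296, 1.48), (552, 22.29), (576, 3.59), (1500, 1.51)]
    mean = sum(size * pct for size, pct in bins) / sum(pct for _, pct in bins)
    print(round(mean))    # about 273 bytes/packet, the same ballpark as Sean's 247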

-MAP

Kent W. England

Aug 6, 1996, 3:00:00 AM
to Greg Minshall, big-in...@munnari.oz.au

At 12:53 PM 8/6/96 -0700, Greg Minshall wrote:
>Kent,
>
>Having cogitated a bit...
>
My thanks.
>
>I think you are saying that if TCP sent 1460 bytes in the first [data] packet,
>then lots of transfers would only have one data packet, but that since it is
>using 512 bytes, transfers take 3 (say) data packets, so slow start kicks in
>and it takes 2 RTTs to transfer that data.

Yes, but to be fair this is what Simon Spero first said in his paper on HTTP.
< http://sunsite.unc.edu/mdma-release/http-prob.html >
>
>I think that is probably true, though if transfer sizes are, say, 3000 bytes
>(which you mentioned), then even with 1460, "slow start" imposes a 2.x RTT
>"penalty". On the other hand, that "penalty" is there, of course, to keep the
>net alive.


>
>> Would there be any improvement if hosts used path MTU discovery, or would it
>> add up to about the same thing? I'm not sure whether you can do path MTU
>> discovery at the same time you are starting a TCP session or whether, as is
>> more likely, it is a separate process and uses an RTT or more before
>> starting the TCP session.
>

>(I guess i'm not totally sure what "constituency" you represent in this, in
>the sense of i.e., PPP users at the end of 28.8 links, or network providers
>trying to figure out how to provision, or corporate intXXnet builders, etc.
>For example, when you say "improvement", improvement for *whom*?)
>

I was only thinking of "users", whoever they are (including me). But the GOP
convention is happening in my hometown next week and if anyone on this list
would be my constituency, I'll put myself down against ol' Bob as the
candidate!
A vote for me is a vote for path MTU! :-)

Seriously, if RTTs are the problem then this issue holds for 28.8 PPP users as
well as corporate users. In fact, this problem came home to me in spades when
I upgraded my residential access from 28.8 to 112.5 (hardware, you know) ISDN
and found little performance improvement on many pages.

>You can do path MTU "at the same time" you are starting a TCP session. *I*
>think it would help.
>
>Systems are also free [encouraged! -- see
>http://info.internet.isi.edu:80/in-notes/rfc/files/rfc1191.txt]
>to "remember" previous path MTU values to various
>destinations to try to "optimize" the path MTU start up time; so, it would be
>much better if each of the web pages downloads for images from a given site
>didn't have to individually "learn" the path MTU, but could "share" that
>knowledge from the first download. Unfortunately, this involves a bit of
>thinking and coding (neither of which i've done!), and so isn't totally
>straightforward.
>
>Greg
>
>
Of course this is also addressed by HTTPng. Can we hold our breath that long?

See < http://www.w3.org/pub/WWW/Protocols/HTTP-NG/Overview.html >

--Kent


Greg Minshall

Aug 6, 1996, 3:00:00 AM
to Kent W. England, big-in...@munnari.oz.au

Kent,

Having cogitated a bit...

> All in all, these three data sources (Claffy/Scharf, Doran, Spero) seem
> relatively consistent. An overwhelming amount of the flows in the Internet
> seem to be small file transfers, the TCP payload for this traffic is mostly
> <=512 bytes, when it could easily be <=1460 bytes. And slow start adds at
> least one extra RTT to each transfer that might be avoided if the payloads
> were 1460 instead of 512.

I think you are saying that if TCP sent 1460 bytes in the first [data] packet,
then lots of transfers would only have one data packet, but that since it is
using 512 bytes, transfers take 3 (say) data packets, so slow start kicks in
and it takes 2 RTTs to transfer that data.

I think that is probably true, though if transfer sizes are, say, 3000 bytes
(which you mentioned), then even with 1460, "slow start" imposes a 2.x RTT
"penalty". On the other hand, that "penalty" is there, of course, to keep the
net alive.

> Would there be any improvement if hosts used path MTU discovery, or would it
> add up to about the same thing? I'm not sure whether you can do path MTU
> discovery at the same time you are starting a TCP session or whether, as is
> more likely, it is a separate process and uses an RTT or more before
> starting the TCP session.

(I guess i'm not totally sure what "constituency" you represent in this, in
the sense of i.e., PPP users at the end of 28.8 links, or network providers
trying to figure out how to provision, or corporate intXXnet builders, etc.
For example, when you say "improvement", improvement for *whom*?)

You can do path MTU "at the same time" you are starting a TCP session. *I*
think it would help.

Systems are also free [encouraged! -- see
http://info.internet.isi.edu:80/in-notes/rfc/files/rfc1191.txt]
to "remember" previous path MTU values to various
destinations to try to "optimize" the path MTU start up time; so, it would be
much better if each of the web pages downloads for images from a given site
didn't have to individually "learn" the path MTU, but could "share" that
knowledge from the first download. Unfortunately, this involves a bit of
thinking and coding (neither of which i've done!), and so isn't totally
straightforward.

Greg

Paul Ferguson

Aug 6, 1996, 3:00:00 AM
to John Hawkinson, Kent W. England, big-in...@munnari.oz.au

Not that this isn't interesting data (it is), but it would be even
more valuable if there were a painless mechanism to derive
the arrival sequence of the various packet sizes in a timeline
relationship to the distributions we've seen thus far.

Food for thought.

- paul

>> Would there be any improvement if hosts used path MTU discovery, or would it
>> add up to about the same thing? I'm not sure whether you can do path MTU
>> discovery at the same time you are starting a TCP session or whether, as is
>> more likely, it is a separate process and uses an RTT or more before
>> starting the TCP session.
>

Kent W. England

Aug 8, 1996, 3:00:00 AM
to Karl Denninger, MCSNet, Andrew Partan, pfer...@cisco.com, jh...@bbnplanet.com, big-in...@munnari.oz.au

At 03:41 PM 8/8/96 -0500, Karl Denninger, MCSNet wrote:
>
>The trade-off IMHO has to do with the technology changes. Cram 4470 into
>53-byte cells for ATM, and you end up needing 85 of them for each segment!
>I suspect that some of the buffering problems we've seen with things like
>Netedge boxes can be traced to that kind of encapsulation change problem...
>
>Cut that MTU to 1500 and now you only need 29 cells for a segment. Much
>less likely to run into trouble.

My experience with the PacBell NAP has led me to conclude that ATM performance
is best optimized with very large packets. It's better, in my opinion, to
operate with large packets over a zero cell loss ATM network than to pay the
extra 2 or 3% that smaller frame sizes cost on the padding overhead. The
Stratacom switches have plenty of buffer space and large frame sizes are
more efficient if you have large ATM buffers, explicit flow control and
zero cell loss. (Now, technically, we aren't talking zero cell loss but
practically speaking as near to zero as one can get.) I'm sure that other
ATM switches with large buffers and flow control work as well, but those
that don't, well, they don't quite cut it.
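
As a quick check on the cell arithmetic (a sketch assuming AAL5 with an
8-byte trailer and no LLC/SNAP; the frame pads up into whole 48-byte cell
payloads, so the counts come out a bit higher than simply dividing by 53):

    # AAL5 cell count and wire overhead per frame -- a sketch assuming an 8-byte
    # AAL5 trailer, no LLC/SNAP, and padding up to whole 48-byte cell payloads.
    import math

    def aal5_cells(frame_bytes):
        return math.ceil((frame_bytes + 8) / 48)

    for mtu in (576, 1500, 4470, 9180):
        cells = aal5_cells(mtu)
        wire = cells * 53                    # each cell adds a 5-byte header
        print("%5d-byte frame: %3d cells, %4.1f%% overhead on the wire"
              % (mtu, cells, 100.0 * (wire - mtu) / mtu))

The fixed 5-byte-per-cell header tax is there regardless of frame size; the
part the frame size actually changes is the couple of per cent of padding I
mentioned above.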

So, since routers work better with larger frame sizes and so do ATM switches,
it seems safe to say that increasing the MTU is a good thing.
>
>The bigger problem is that some hardware out there can't hope to keep up at
>DS-3 rates and above with very small segment sizes. I don't know how much
>of a problem this is in the real high-end hardware, but I do know that it
>shows up instantly for those people trying to do the "cheap router in a PC
>box" solutions. Reports from the field are that a "100Mbps" network
>on which a PCI Pentium is sourcing or sinking traffic can not really expect
>to see more than about 50Mbps due to the packet processing overhead in
>this environment with a 1500 byte MTU -- but that number rises to nearly
>85Mbps with a 4470 MTU. It's not moving the data that is killing the
>throughput -- it's handling the encapsulation overhead.
>
The situation is a bit different if you can assume fixed size packets.
Right now, the only ones who can claim the prize for very high speed are
the cell switch makers. Perhaps by the end of the year, there will be a new
router that can compete with the best fixed length cell switch. If not, then
certainly some other folks will have a shot next year. But right now the
ATM switch makers have a clear path to OC-12 and beyond and the router folks
aren't there.

If we could just get the cell size up a bit to, say, 1500 bytes. :-)

--Kent


Greg Minshall

Aug 8, 1996, 3:00:00 AM
to Andrew Partan, Paul Ferguson, jh...@bbnplanet.com

Andrew,

> If we view the future where lots of hosts are connected via ethernet
> and fast ethernet & the like, then a MTU of 1500 would be 'correct'.
>
> If we think that the future will have lots of hosts connected via
> Fddi or similar, then a MTU of 4470 would be 'better'.

Personally, i can't imagine anything except ethernet, ethernet, and more
ethernet into the future. Circa 1970, Dijkstra (i believe) said something
like "I don't know what the major scientific programming language 20 years
from now will *look* like, but it will be *called* Fortran." He wasn't all
wrong. Similarly, i don't know what the data link ten years from now will
*look* like, but it will probably be *called* ethernet. (Note that gigabit
ethernet *looks* an awful lot like fibre channel, or so i am told.)

Greg Minshall

Jeremy Porter

Aug 8, 1996, 3:00:00 AM
to Andrew Partan, Paul Ferguson, jh...@bbnplanet.com

I remember arguing this about 6-9 months ago, except from an
exchange point's point of view. Based on current traffic
patterns I ended up recommending switched full duplex ethernet as
the most cost-effective and least complex protocol to run.
FDDI has a lot of unneeded overhead.

So unless you expect FDDI deployment to exceed X% of total
LANs, the share of traffic with FDDI-sized MTUs will be less than X%
(assuming relatively equal data volumes from ethernet and FDDI, which
may not be true due to larger pipes at FDDI sites). However,
some straightforward market research should be possible to determine
the relative sizes. And unless people start scrapping all that old
technology, we are going to have to design for lots of smaller packets
rather than fewer large-MTU packets.

I would prefer to have fast packet switches and routers rather than
having to fragment packets inside my network. But that depends
on the relative cost of packet fragmentation vs. carrying more packets.

Someone really should just define a larger MTU for 100BaseTX, i.e.
100BaseTX-BIG, with an MTU of 4470 say.

In message <1996080819...@home.partan.com>, Andrew Partan writes:
>This is all interesting stuff.
>
>One question that I have been trying to figure out is
> What size MTU should an ISP support on its backbone?


>
>If we view the future where lots of hosts are connected via ethernet
>and fast ethernet & the like, then a MTU of 1500 would be 'correct'.
>
>If we think that the future will have lots of hosts connected via
>Fddi or similar, then a MTU of 4470 would be 'better'.
>

>Any ideas?
> --a...@partan.com (Andrew Partan)

---
Jeremy Porter, Freeside Communications, Inc. je...@fc.net
PO BOX 80315 Austin, Tx 78708 | 1-800-968-8750 | 512-458-9816
http://www.fc.net

Kent W. England

Aug 8, 1996, 3:00:00 AM
to Andrew Partan, jh...@bbnplanet.com, big-in...@munnari.oz.au

At 03:58 PM 8/8/96 -0400, Andrew Partan wrote:
>This is all interesting stuff.
>
>One question that I have been trying to figure out is
> What size MTU should an ISP support on its backbone?
>

Is there any performance constraint, such as buffer memory, that would
prevent a 10k MTU from being supported?

After all, if the hosts don't support 1500, you'll still see
576- or 552-byte packets along with the 40-byte signals. Why not
go to 9180?

--Kent


Paul Ferguson

Aug 8, 1996, 3:00:00 AM
to Andrew Partan, jh...@bbnplanet.com, k...@6sigmanets.com, big-in...@munnari.oz.au

Well, one would think that the answer hinges on the life-expectancy
of FDDI, as opposed to higher-speed media (gigabit ethernet?)....

- paul

At 03:58 PM 8/8/96 -0400, Andrew Partan wrote:

>This is all interesting stuff.
>
>One question that I have been trying to figure out is
> What size MTU should an ISP support on its backbone?
>

Dorian R. Kim

Aug 8, 1996, 3:00:00 AM
to Kent W. England, big-in...@munnari.oz.au

Darren Kerr of Cisco pointed out that flows are not bi-directional,
i.e. a TCP session is two flows, and that the meaningful FTP data shows up
under FTPD, which stands for FTP data, so this throws off some of the
hypotheses discussed here.

Some more itsy bitsy data until I have real stuff to play with:

This is from a customer aggregation box.

dgd#sh ip ca flow
IP packet size distribution (3992M total packets):
1-32 64 96 128 160 192 224 256 288 320 352 384 416 448 480
.005 .489 .058 .016 .013 .008 .009 .012 .011 .015 .004 .005 .002 .002 .002

512 544 576 1024 1536 2048 2560 3072 3584 4096 4608
.005 .003 .142 .000 .114 .075 .000 .000 .000 .000 .000

IP Flow Switching Cache, 10539 active, 54997 inactive, 257164236 added
0 flows exported, 0 not exported, 0 export msgs sent
3 cur max hash, 257 worst max hash, 11801 valid buckets
0 flow alloc failures
statistics cleared 1423044 seconds ago

Protocol Total Flows Packets Bytes Packets Active(Sec) Idle(Sec)
-------- Flows /Sec /Flow /Pkt /Sec /Flow /Flow

TCP-Telnet 1566034 1.1 129 70 142.2 115.3 44.4
TCP-FTP 5836648 4.1 6 91 26.3 12.4 45.7
TCP-FTPD 3560889 2.5 86 464 216.5 49.2 45.7
TCP-WWW 139280025 97.8 11 319 1137.3 8.2 45.9
TCP-SMTP 22840124 16.0 10 160 166.6 9.8 45.9
TCP-X 58694 0.0 127 176 5.2 106.0 44.3
TCP-BGP 1339769 0.9 2 50 2.6 9.2 44.5
TCP-Frag 123389 0.0 9 306 0.7 17.6 45.3
TCP-other 20351999 14.3 70 354 1008.7 61.2 45.2
UDP-DNS 39827050 27.9 3 103 96.7 6.7 45.8
UDP-NTP 5321916 3.7 2 76 7.6 0.8 45.9
UDP-TFTP 106 0.0 4 94 0.0 22.2 44.2
UDP-Frag 1829 0.0 59 296 0.0 69.3 45.1
UDP-other 7809191 5.4 19 142 108.4 27.6 45.3
ICMP 9140968 6.4 3 154 24.5 7.7 45.8
IGMP 39621 0.0 37 422 1.0 44.5 44.9
IPINIP 44911 0.0 626 282 19.7 104.2 43.7
GRE 12545 0.0 2233 272 19.6 208.4 42.6
IP-other 587 0.0 34 525 0.0 18.2 46.2
Total: 257156295 180.7 16 302 2984.5 14.1 45.8


-dorian

Brian Carpenter CERN-CN

Aug 9, 1996, 3:00:00 AM
to Andrew Partan, pfer...@cisco.com, jh...@bbnplanet.com, k...@6sigmanets.com

Andrew,

> One question that I have been trying to figure out is
> What size MTU should an ISP support on its backbone?
>

"Therefore, the default IP MTU for use with ATM AAL5 shall be 9180
octets. All implementations compliant and conformant with this
specification shall support at least the default IP MTU value for use
over ATM AAL5." - RFC 1626

It seems "obvious" that the ISPs should if possible support at
least the largest default MTU likely to be found on user sites...

but...

(1) it is largely irrelevant until we get rid of http 1.0

(2) Van has good arguments why larger MTUs may be a clear loser
on multi-hop TCP paths

(3) since MTU discovery is still not by any means universal,
there is a strong risk of inducing fragmentation at Internet
exchanges. Can you imagine the impact of going through the
Ethernet part of MAE East with 4k or 9k packets?
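
Just for scale, here is the plain RFC 791 fragmentation arithmetic (assuming
20-byte headers and no options):

    # IP fragmentation arithmetic (RFC 791), assuming a 20-byte header, no options.
    import math

    def fragments(packet_bytes, link_mtu, ip_hdr=20):
        payload = packet_bytes - ip_hdr
        per_frag = ((link_mtu - ip_hdr) // 8) * 8    # fragment offsets are in 8-byte units
        return math.ceil(payload / per_frag)

    print(fragments(4470, 1500))    # a 4470-byte packet becomes 4 fragments on Ethernet
    print(fragments(9180, 1500))    # a 9180-byte packet becomes 7 fragments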

So I'd hazard a guess that it is not time to think of going
above 1500.

Brian Carpenter

Dorian R. Kim

Aug 10, 1996, 3:00:00 AM
to Andrew Partan, Brian Carpenter CERN-CN, big-in...@munnari.oz.au

On Sat, 10 Aug 1996, Andrew Partan wrote:

> The question is, if you were designing a backbone today, what would
> you use for your hub LANs? Fddi? Or 100baseT? 100baseT is probably
> going to be a lot cheaper (it looks like there is going to be a
> *lot* of it made), but its MTU is 1500.

Neither, I would think. Neither FDDI nor 100baseT is fast enough to be useful
in interconnecting backbone routers.

If it's a question of interconnecting customer aggregation boxes to backbone
routers, I'd rather go with FDDI rather than 100baseT as FDDI degrades more
gracefully under load. Given that full duplex FDDI is now a possibility, the
only disadvantage of FDDI is cost. (well... there is more overhead to, but..)

> Can you get by with this? Or do you really need to invest in LANs
> that do 4470?

I think that today is a particularly bad time to think about this issue in
operational terms, as point-to-point connection speeds have made all
existing and widely deployed multi-access technologies inadequate, and the
next-generation technologies that promise order-of-magnitude improvements are
not here yet.

Any solution arrived at (FDDI, switched FDDI, switched full duplex FDDI,
100baseT, switched 100baseT, or ATM) is a stop-gap measure at best.

> Any high performance internet folks out there? What would you do
> (or want us to do)?

I would think that the problem of simply going faster and the problem of
making massively aggregated flows flow through bigger pipes are not quite the
same thing.

Not that this is much of an answer. :)

-dorian

