
A connection-based Internet?


Noel Chiappa

Dec 18, 1996
to hui...@bellcore.com, m...@uu.net, big-in...@munnari.oz.au, fl...@research.ftp.com, j...@netstar.com

From: "Mike O'Dell" <m...@uu.net>

the fundamental problem is that the model of "ip destination-only"
forwarding is not powerful enough to build the networks required for
the current Internet, much less the future.

Much as I am very interested in the basic question under debate here, I think
this discussion is fundamentally pointless, if not "out of order", *in this
forum*.

Experience in the IETF has shown, over and over again, that we'll argue about
this for months, and when the dust settles, very few minds will have been
changed, and the situation will not have been resolved. In the meantime, much
time/energy of the WG-to-be will have been wasted.

So, I will say again: this is not an effort to formally get the IETF as a
whole to agree to switch to flows as a fundamental paradigm. It is a forum for
people who like a certain set of ideas to come up with specifications which
implement those ideas.

If you think the ideas are crazy/impossible, that's fine, just please sit
quietly and watch as other people waste their time, and let the rest of the
group get on with doing their thing.


nobody is arguing for "end to end VCs". that's just silly.

One of the things that continually annoys me no end is the apparent inability
of some people to grok that there is a lot (anything?) in the middle of the
spectrum between the extremes of pure-stateless-datagram (a la IPv4), and
old-style-virtual-circuits (a la X.25).

It seems like any time someone stands up and says "maybe we need to do
something other than pure datagram", there are always people who accuse you of
wanting to do VC's. The fact that the proposer is well aware of the problems
of pure VC's, and doesn't want to do a pure VC network, is usually completely
missed. They also don't usually seem to have bothered to take the time to
understand what it is you actually *are* proposing. Needless to say, after a
while it all, especially the latter bit, gets pretty infuriating.

Someone at the just-passed IETF described this as a "four-legs-good,
two-legs-bad" model of reality, and that's right on target. I find, over and
over again in the IETF (and it was very obvious with the past debate on
variable length addresses), that many minds are already made up, and no real
objective, open-minded, thoroughgoing, from-scratch analysis of the
engineering good and bad points of new approaches is made. Instead, they are
dismissed with an immediate, simplistic, and unstudied "two-legs-bad" kind of
reaction that's all too apparent, after you've been on the receiving end enough
times.

The good thing about reactions like that is that you soon figure out that
since there is little deep analysis behind them, you can just blow them off.
If you think reasoned debate is going to change them, think again - been there,
tried that.

The packet world is a victim of its own success. The people who would, in the
60's, have been *outside* saying "it's new, and therefore won't work" are now
*inside* saying "it's new, and therefore won't work".


what we are talking about is switching flows, where there is some
efficiency to be gained in establishing soft state in the forwarding
paths.

Be careful with the use of the term "soft state". As far as I can tell, it
means different things (in terms of operational considerations like who
establishes it, who maintains it, who removes it, what happens when it's
missing, etc) to just about everyone who uses the term.

In fact, it seems to have fallen victim to "four-legs-good" disease, in that
schemes that someone likes are inevitably described as being "soft state",
whereas schemes they don't like are always described as "hard state". (One's
own scheme is *always* "soft state", no matter how the definition of that has
to be twisted to fit.)

Given that the routing tables used in the current pure-stateless-datagram
model are the hardest of hard state, I wonder exactly how they fit into the
simplistic "hard-state-bad" view of reality that seems to be common among
those who adhere most tenaciously to the PSD model, but I digress.
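Since the term is so slippery, here is one concrete (and entirely hypothetical) reading of "soft state", sketched as a toy Python flow cache: entries are installed opportunistically, refreshed by the traffic itself, and expired unilaterally on timeout; a missing entry just falls back to ordinary hop-by-hop forwarding. The class and parameter names are mine, purely illustrative, and not taken from any proposal under discussion.

```python
import time

class SoftStateFlowCache:
    """Toy soft-state flow table: entries expire unless refreshed by traffic."""

    def __init__(self, timeout=30.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.flows = {}  # flow_id -> (next_hop, last_refreshed)

    def install(self, flow_id, next_hop):
        # Soft state: anyone may (re)install at any time; no setup handshake
        # needs to complete before packets can flow.
        self.flows[flow_id] = (next_hop, self.clock())

    def lookup(self, flow_id):
        entry = self.flows.get(flow_id)
        if entry is None:
            return None  # missing state -> fall back to hop-by-hop forwarding
        next_hop, last = entry
        if self.clock() - last > self.timeout:
            del self.flows[flow_id]  # expired: removed unilaterally, no signaling
            return None
        # Traffic itself refreshes the state, keeping active flows alive.
        self.flows[flow_id] = (next_hop, self.clock())
        return next_hop
```

The contrast with hard state: a hard-state entry persists until explicitly torn down, so a lost teardown message leaves the entry stranded, whereas here the worst case for lost state is a cache miss and a slower forwarding path.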


the ability to place bandwidth between the points where it is needed, as best
approximated by where you can actually get it, and then place the traffic
of interest on that path without going crazy screwing with IP metrics

Some of us might point out that the current fundamental routing architecture
of the 'Net, one inherited basically unchanged from the Baran work in the
early 60's, and one intended for far smaller networks with different
requirements, is really not the paradigm we ought to be working within - but
that's a different WG! :-)

in other networks ... the flows are much larger, aggregate objects which
have little to do with any particular IP prefix, other than it and a bunch
of others directly connect to the infrastructure off a particular superhub.
(this is where the mapping between randomly-assigned IP address and physical
proximity happens.)

Too bad addresses in packets don't exclusively reflect something that's
actually useful to the packet-forwarding fabric, like where you are, or
anything. Nah, that's too obvious.


link-state technology tries to duck the question by letting everyone
pretend they know everything. there's something deeply unscalable about
this notion of "just tell everyone everything".

Depending on exactly what you mean by "link-state", sorry to disappoint you,
but this statement has been untrue for at least 10 years. ("Four legs
good...") See:

Josh Seeger and Atul Khanna, "Reducing Routing Overhead
in a Growing DDN", MILCOM '86, IEEE, 1986.

for an early one, and of course the IETF's own OSPF also somehow seems to
work without global knowledge.

It's this kind of simplistic statement that leads less-knowledgeable people
down the path, and I hope people will cease and desist from making them?
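For the record, the scaling win from area hierarchy (OSPF areas, and hierarchical routing generally) can be shown with back-of-the-envelope arithmetic. The figures below are purely illustrative, not measurements of any deployed network:

```python
def flat_entries(n_routers: int) -> int:
    # Flat link-state: every router floods to, and stores an entry for,
    # every other router in the network.
    return n_routers

def hierarchical_entries(n_routers: int, n_areas: int) -> int:
    # Two-level hierarchy: full detail inside your own area,
    # plus one summary entry per other area.
    per_area = n_routers // n_areas
    return per_area + (n_areas - 1)

# 10,000 routers: flat vs. 100 areas of 100 routers each.
print(flat_entries(10_000))               # 10000
print(hierarchical_entries(10_000, 100))  # 199
```

Summarization trades per-router state and flooding scope for some loss of path optimality across area boundaries, which is precisely the sort of engineering trade-off that deserves analysis rather than a reflex.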


Noel

Noel Chiappa

Dec 18, 1996
to hui...@bellcore.com, j...@ginger.lcs.mit.edu, m...@uu.net, big-in...@munnari.oz.au, fl...@research.ftp.com, j...@netstar.com

From: j...@ginger.lcs.mit.edu (Noel Chiappa)

Much as I am very interested in the basic question under debate here, I
think this discussion is fundamentally pointless, if not "out of order",
*in this forum*

PS: Sorry I forgot to mention this on the original message, but if you feel a
need to reply to this, please do so only on the "tagswitch" mailing list. I
CC'd "flows" and "big-i" since I felt that the points I had to make might be
of interest to people there as well.

Noel

Greg Minshall

Dec 19, 1996
to Noel Chiappa, hui...@bellcore.com, m...@uu.net, big-in...@munnari.oz.au

Noel,

> I find, over and
> over again in the IETF (and it was very obvious with the past debate on
> variable length addresses), that many minds are already made up, and no real
> objective, open-minded, thoroughgoing, from-scratch analysis of the
> engineering good and bad points of new approaches is made. Instead, they are
> dismissed with an immediate, simplistic, and unstudied "two-legs-bad" kind of
> reaction that's all too apparent, after you've been on the receiving end enough
> times.

You have to be a bit careful here.

There are twin dangers in designing: one, as you point out, is "different is
bad"; the other is what i term "the tyranny of analytic thinking", which is to
say "if i can *prove* something thru some logical process to be true, that
means it is, in fact, true".

The problem with the latter is that there are many things that have been
"proven" over the years, but don't, in fact, work (or, in the math space,
'theorems' that have been 'proved' but aren't, in fact, true).

Thus, basing things on what we currently know is by no means a stupid thing to
do. On the other hand, trying something new, and getting experience with it,
is also a very good idea; it's just that you shouldn't expect people to
"salute" because of a closed form proof -- working code is much more
persuasive!

Greg

Tim Bass

Dec 19, 1996
to Greg Minshall, j...@ginger.lcs.mit.edu, hui...@bellcore.com, m...@uu.net


Noel laments:

> and NO real objective, open-minded, thoroughgoing, from-scratch analysis
> of the engineering good and bad points of new approaches is made.
> Instead, they are dismissed with an immediate, simplistic, .....

Bravo!!

Then, Greg responds:

> ... working code is much more persuasive!

Indeed, the Root of The Problem! EGP was 'working code'; however,
Eric himself stated EGP was NOT a valid routing protocol. Yet
BGP was built on EGP's 'working code'. Then! many pointed
out BGP was certainly flawed, concentrating on manually
configurable 'routing policies' and a core topology, basically
a 'modified-spanning-tree paradigm'; yet the working code was the
foundation for the problematic IP exterior routing protocol
of today. Let me remind everyone, the problem with scalability
&c was documented in RFCs in the early 1980s, some 15 years
ago, yet modifying 'working code' based on a back-o-the-
envelope, quick-and-dirty 'working code' paradigm, dominated
(and continues to dominate) the process.

The lesson learned might be thus:

-----------------------------------------------------------------
Working code is persuasive, but not necessarily the best development
track! and certainly not the best design.
-----------------------------------------------------------------

Noel and Greg are both correct. The IETF audaciously
prides itself on hacking 'working code' and avoiding the
painful but necessary step of analysis and broad peer
review on important and far reaching issues such as
global exterior routing paradigms.

On the other 'business reality' hand...

If 'someone' aggressively advocates a new routing paradigm
without broad peer review, and does it with a
quiet eye toward a patent or financial gain, we shall just
see the distasteful process repeated for a second time.

We all agree it is not difficult to design a new, truly
scalable, hierarchical protocol that works with meshed
ADs and does not provide tacit advantages of one provider
over another in the marketplace; however what is often
in the best interest of the marketplace is rarely in
the best interest of the established businesses with a
very real interest in growing, not diminishing, market share
and revenue.

The IETF has never been immune to commercial influences and
there is not a period in history where commerce did not
dominate the decision making process. Let us not be
deceived in fancying an Internet dominated by spiritual
sages and selfless actions without the motive of profit.


Best Regards,

Tim

--
mailto:ba...@silkroad.com voice (703) 222-4243
http://www.silkroad.com/ fax (703) 222-7320


Tim Bass

Dec 19, 1996
to Tony Li, mins...@ipsilon.com, j...@ginger.lcs.mit.edu, hui...@bellcore.com

> BGP was built on EGP's 'working code'.
>
> Amazing.

Sorry to cross swords with you in public, Monsieur BGP!

For what other excuse can there be for such a 'policy-based,
manually configurable' core-tree paradigm that is not scalable
(like EGP), exchanges no routing policies (like EGP), and looks
like EGP with more features and 'advances'?

BGP was certainly not a radical departure from EGP like so
many competing technologies and paradigms that for certain
would require a complete rewrite. It certainly was not
a new vision for a meshed AD topology providing zero
dependence on the core-tree superstructure.

However, if you claim that not one line of YOUR code was used
for BGP from EGP, Monsieur, that may well be. Still, based
on the very close similarity between the two protocols in
principle and paradigm, it stands to reason that SOMEONE
other than yourself thought of EGP when considering BGP.

One thing is certain, honorable Monsignor, we shall never agree in
principle on the BGP development track. Some shall consider
it 'a wonder of engineering', changing 'wings on airplanes
in mid-flight', and other romantic tales from the net.
Yet others, perfectly well educated and knowledgeable engineers
will disagree, and with acceptable reason.

I beg you, Monsieur, to allow me to take leave of this dialog.
I am a critic of BGP; I have searched the entire
IEEE publication database, printing, studying, and reading all
of the papers on the subject, and have read every paper printed
on hierarchical routing. Yet! I still remain a critic of BGP,
in spite of the romantic rhetoric.

This criticism, however, is certainly not a personal one!

My issues are with those who placed 'policy based routing'
to enforce directed AUPs above clustering and scalability.
This, as I currently see it, was the driving force that
has led ERP development down its thorny path.


Best Regards,

Tim


Tony Li

Dec 19, 1996
to ba...@linux.silkroad.com, mins...@ipsilon.com, j...@ginger.lcs.mit.edu, hui...@bellcore.com

BGP was built on EGPs 'working code'.

Amazing.

Tony

Valdis.K...@vt.edu

Dec 19, 1996
to Greg Minshall, Noel Chiappa, hui...@bellcore.com, m...@uu.net
On Thu, 19 Dec 1996 09:58:29 PST, Greg Minshall said:
> The problem with the latter is that there are many things that have been
> "proven" over the years, but don't, in fact, work (or, in the math space,
> 'theorems' that have been 'proved' but aren't, in fact, true).

Or even worse, are *truly* provably true, but only within a given domain. See
Euclid's parallel postulate and non-Euclidean geometry for an example.

Just because something was demonstrably true 10 years ago doesn't mean that
it will still be true once the next generation of <insert favorite
combo of hard/soft/live ware here> comes along, and the nature of things
changes.
--
Valdis Kletnieks
Computer Systems Engineer
Virginia Tech


Hiroshi Esaki

Dec 25, 1996
to j...@ginger.lcs.mit.edu, hui...@bellcore.com, m...@uu.net

greg> Thus, basing things on what we currently know is by no means
greg> a stupid thing to do.
greg> On the other hand, trying something new, and getting
greg> experience with it, is also a very good idea; it's
greg> just that you shouldn't expect people to "salute"
greg> because of a closed form proof -- working code is much more
greg> persuasive!

Running and working code gives us important feedback for the
next code, to fix the problems of the old code.
We believe that the experience gained through an actually operating
system is very important, and is also great feedback to the
mailing list.

Hiroshi Esaki (Toshiba Corp.)

Masataka Ohta

Dec 25, 1996
to mins...@ipsilon.com, j...@ginger.lcs.mit.edu, hui...@bellcore.com

In the good old days, when not so much effort was spent on the
implementation of Internet protocols, the running code principle
worked to prevent the development of unnecessary protocols and to
keep the developed protocols simple.

Today, with so much investment in the Internet, the running code
principle is hardly working any more.

People say "it's too late to change the spec", even before a protocol
becomes a PS.

Masataka Ohta
