
corba or sockets?


Fernando Rodríguez

Oct 30, 2000, 5:24:35 AM
Hi!

I have to make a Lisp process communicate with a C++ app, and I was
considering CORBA or sockets. Both are rather new to me, so which one
would you recommend, and why? O:-)

TIA


//-----------------------------------------------
// Fernando Rodriguez Romero
//
// frr at mindless dot com
//------------------------------------------------

Marc Battyani

Oct 30, 2000, 6:46:00 AM

"Fernando Rodríguez" <spa...@must.die> wrote in message
news:hriqvscgjd0pcql56...@4ax.com...

> Hi!
>
> I have to make a Lisp process communicate with a C++ app, and I was
> considering CORBA or sockets. Both are rather new to me, so which one
> would you recommend, and why? O:-)

I use sockets. It works well, it is quite a bit simpler to use than CORBA, and
the performance is very good over a 100 Mbit LAN. You just have to be able to
do both sides of the communication yourself. With CORBA you can interoperate
with already existing CORBA-enabled software. You also need a Lisp with CORBA
support.
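
A minimal Common Lisp sketch of this kind of socket setup, assuming the
usocket portability layer (an assumption; the native socket API of any
particular Lisp would do just as well). One line of text per request, one
line per reply -- the framing is an arbitrary choice for the example.

;; Server side: accept one connection and echo lines back.
(defun echo-server (port)
  (let* ((listener   (usocket:socket-listen "127.0.0.1" port :reuse-address t))
         (connection (usocket:socket-accept listener))
         (stream     (usocket:socket-stream connection)))
    (unwind-protect
         (loop for line = (read-line stream nil nil)
               while line
               do (write-line line stream)
                  (force-output stream))
      (usocket:socket-close connection)
      (usocket:socket-close listener))))

;; Client side: send one line, return the reply.
(defun echo-client (port message)
  (let* ((connection (usocket:socket-connect "127.0.0.1" port))
         (stream     (usocket:socket-stream connection)))
    (unwind-protect
         (progn
           (write-line message stream)
           (force-output stream)
           (read-line stream))
      (usocket:socket-close connection))))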

Marc Battyani

Philip Lijnzaad

Oct 30, 2000, 6:55:18 AM

Fernando> Hi!
Fernando> I have to make a Lisp process communicate with a C++ app, and I was
Fernando> considering CORBA or sockets. Both are rather new to me, so which one
Fernando> would you recommend, and why? O:-)

Unless the communication involves simple but big data (kind of like ftp,
say), use CORBA, by a very long shot.
Philip

--
Time passed, which, basically, is its job. -- Terry Pratchett (in: Equal Rites)
-----------------------------------------------------------------------------
Philip Lijnzaad, lijn...@ebi.ac.uk \ European Bioinformatics Institute,rm A2-24
+44 (0)1223 49 4639 / Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax) \ Cambridgeshire CB10 1SD, GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC 50 3D 1F 64 40 75 FB 53

Erik Naggum

Oct 30, 2000, 6:56:44 AM
* Fernando Rodríguez <spa...@must.die>

| I have to make a Lisp process communicate with a C++ app, and I was
| considering CORBA or sockets. Both are rather new to me, so
| which one would you recommend, and why? O:-)

CORBA is already badly designed, so if you are new to sockets and
protocol design (which you will get yourself into), the likelihood
that you, too, will design your protocol badly is so high that it is
probably better to go with CORBA.

If you are very dedicated, you will probably spend a few months just
getting up to speed with protocol design before you can do something
that is not a total drag, and upwards of a year to get it fully
right. Such braindamaged disasters as HTTP (any version) would not
have been possible if there were at least some reasonably accessible
material on how to do these things.

Security problems in protocols are so common and so hard to avoid
that you _really_ don't want to expose yourself to the process of
learning by doing.

#:Erik
--
Does anyone remember where I parked Air Force One?
-- George W. Bush

Eric Marsden

Oct 30, 2000, 7:18:52 AM
CORBA provides a lot more than inter-application communication. It
provides a design for developing distributed applications and useful
services such as naming, trading, event propagation. CORBA is
therefore large, and will require time to understand.


>>>>> "mb" == Marc Battyani <Marc.B...@fractalconcept.com> writes:

mb> With Corba you can inter-operate with already existing Corba
mb> enabled software. You also need a lisp with Corba.

CLORB (under development, but already useful, particularly for
client-side CORBA) works with CMUCL, CLISP, SBCL and ACL. Available
under GNU LGPL.

<URL:http://clorb.sourceforge.net/>

--
Eric Marsden <URL:http://www.laas.fr/~emarsden/>

Erik Naggum

Oct 30, 2000, 8:22:04 AM
* Philip Lijnzaad <lijn...@ebi.ac.uk>

| Unless the communication involves simple but big data (kind of like ftp,
| say), use CORBA, by a very long shot.

Really? I'd use CORBA for fairly simple stuff and roll my own if I
get above a fairly low complexity threshold, but I know I'm about
three orders of magnitude better at protocol design than the CORBA
team could ever dream of becoming. For instance, CORBA is a single-
threaded protocol, with lock-step properties where you wait for the
answer before you continue with the next transaction. This is of
course a consequence of serial computations and CORBA being a very,
very slow way of doing serial computations on disjoint processes and
processors.

With all this hoopla about multi-threading and other uses for real
and pseudo-parallelism, you might expect people to think through the
multi-threading implications of their protocols. The hardware folks
are gung-ho about _real_ parallelism, trying very hard to answer the
demand for faster on-chip communication, but then what do we get in
the software world? People who use all this new hardware power only
to wait for ages for incoming data before they turn around _real_
fast to send something that did not even need to wait for that value
and then proceed to wait for ages, again, only because the protocol
is designed by a committee of morons?

If you cannot deal with multiple objects "on the wire" and are not
using the bandwidth at full throttle even while you _are_ waiting
for something to come in, you are not programming in the dynamic,
networked world of the new millennium. (How's that for buzzwords?)

E.g., how many applications do you know which continue to work just
as well if you move one of the computers 3000 miles away? Do you
think CORBA handles this situation well? From what I have seen, the
application grinds to a virtual stand-still. Latencies higher than
a few milliseconds cause all sorts of interesting behavior in modern
software. It's pretty pathetic that such things are touted as
workable solutions.

Fernando Rodríguez

Oct 30, 2000, 8:33:08 AM
On 30 Oct 2000 11:56:44 +0000, Erik Naggum <er...@naggum.net> wrote:

>* Fernando Rodríguez <spa...@must.die>
>| I have to make a Lisp process communicate with a C++ app, and I was
>| considering CORBA or sockets. Both are rather new to me, so
>| which one would you recommend, and why? O:-)
>
> CORBA is already badly designed, so if you are new to sockets and
> protocol design (which you will get yourself into), the likelihood
> that you, too, will design your protocol badly is so high that it is
> probably better to go with CORBA.

What's so wrong about CORBA?

Marc Battyani

Oct 30, 2000, 8:38:55 AM
"Erik Naggum" <er...@naggum.net> wrote in message
news:31818958...@naggum.net...

> * Fernando Rodríguez <spa...@must.die>
> | I have to make a Lisp process communicate with a C++ app, and I was
> | considering CORBA or sockets. Both are rather new to me, so
> | which one would you recommend, and why? O:-)
>
> CORBA is already badly designed, so if you are new to sockets and
> protocol design (which you will get yourself into), the likelihood
> that you, too, will design your protocol badly is so high that it is
> probably better to go with CORBA.
>
> If you are very dedicated, you will probably spend a few months just
> getting up to speed with protocol design before you can do something
> that is not a total drag, and upwards of a year to get it fully
> right. Such braindamaged disasters as HTTP (any version) would not
> have been possible if there were at least some reasonably accessible
> material on how to do these things.

There are lots of text-based protocols (OK, it's a weak argument: they could
all be just crap). Why are you against simple, Lisp-reader-friendly,
text-based protocols when 1) the task is really easy and 2) you master both
sides of the communication?

I drive lots of measuring instruments from Lisp and they all have text-based
protocols.
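
A sketch of what such a reader-friendly text protocol can look like: each
message is one printed s-expression, written with PRIN1 and read back with
READ, with *READ-EVAL* bound to NIL so that incoming text cannot execute
code. The message shapes are invented for the example.

(defun send-message (stream message)
  ;; one readable s-expression per line, e.g. (:measure :channel 1)
  (let ((*print-readably* t))
    (prin1 message stream)
    (terpri stream)
    (force-output stream)))

(defun receive-message (stream)
  ;; binding *READ-EVAL* to NIL disables #. so the peer cannot run code here
  (let ((*read-eval* nil))
    (read stream nil :eof)))

;; With STREAM a socket stream to the instrument:
;;   (send-message stream '(:measure :channel 1))
;;   (receive-message stream)   =>  (:OK 3.3017)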

> Security problems in protocols are so common and so hard to avoid
> that you _really_ don't want to expose yourself to the process of
> learning by doing.

By security, do you mean cryptographic security or protocol security
(fail-safe behaviour, resync, etc.)?
If it's the former, then SSL can be used. Or do you know of security holes
in SSL?

Marc Battyani

Michael Livshin

Oct 30, 2000, 9:19:42 AM
Erik Naggum <er...@naggum.net> writes:

> Really? I'd use CORBA for fairly simple stuff and roll my own if I
> get above a fairly low complexity threshold, but I know I'm about
> three orders of magnitude better at protocol design than the CORBA
> team could ever dream of becoming. For instance, CORBA is a single-
> threaded protocol, with lock-step properties where you wait for the
> answer before you continue with the next transaction. This is of
> course a consequence of serial computations and CORBA being a very,
> very slow way of doing serial computations on disjoint processes and
> processors.

do you imply that it *was* possible to come up with a better wire
protocol for CORBA as it is, or that the whole concept of "remote
procedure call" (i.e. the CORBA semantics) is not an adequate
abstraction in situations involving non-trivial network latencies?

I would agree with the latter, I guess...

--
(only legal replies to this address are accepted)
newsgroup volume is a measure of discontent. -- Erik Naggum

Erik Naggum

Oct 30, 2000, 9:29:20 AM
* "Marc Battyani" <Marc.B...@fractalconcept.com>

| There are lots of text based protocols (ok it's a weak argument they
| could all be just crap). Why are you against simple lisp reader
| friendly text based protocols when 1) the task is really easy and 2)
| you master both sides of the communication?

Can you back up and at least explain _how_ you arrived at the
assumptions that underlie your questions?

Erik Naggum

Oct 30, 2000, 10:20:33 AM
* Michael Livshin <mliv...@yahoo.com>

| do you imply that it *was* possible to come up with a better wire
| protocol for CORBA as it is, or that the whole concept of "remote
| procedure call" (i.e. the CORBA semantics) is not an adequate
| abstraction in situations involving non-trivial network latencies?

Yes, that the remote procedure call model is fundamentally flawed.

It is, however, possible to do remote procedure calls intelligently,
but it requires a programming language that can deal with unfinished
computations and actually calculate with them for a while. This is
not terribly difficult stuff, but not trivial, either.
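
One way to picture "dealing with unfinished computations": represent each
remote call as a future that is only waited on when its value is actually
needed. A minimal sketch on top of the bordeaux-threads API (an assumption;
threading APIs differ between Lisps), with REMOTE-QUERY and DO-LOCAL-WORK as
hypothetical placeholders.

(defstruct future thread result)

(defun future-call (fn &rest args)
  ;; start (apply FN ARGS) in another thread and return immediately
  (let ((f (make-future)))
    (setf (future-thread f)
          (bt:make-thread (lambda () (setf (future-result f) (apply fn args)))))
    f))

(defun future-wait (f)
  ;; block only when the value is finally needed
  (bt:join-thread (future-thread f))
  (future-result f))

;; Issue two slow remote requests at once, keep computing, then collect:
;;   (let ((a (future-call #'remote-query :price-of "copper"))
;;         (b (future-call #'remote-query :price-of "tin")))
;;     (do-local-work)
;;     (list (future-wait a) (future-wait b)))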

Erik Naggum

Oct 30, 2000, 10:18:19 AM
* Fernando Rodríguez <spa...@must.die>

| What's so wrong about corba?

Almost everything. I have sketched how in a response to another
article. CORBA's only redeeming quality is that it is a standard, of
sorts, and facilitates communication with library-based "tools".

Real programmers want something that actually works efficiently, too.

Fernando Rodríguez

Oct 30, 2000, 10:20:45 AM
On Mon, 30 Oct 2000 10:24:35 GMT, Fernando Rodríguez <spa...@must.die> wrote:

BTW, what about ILU? O:-)


>Hi!
>
> I have to make a Lisp process communicate with a C++ app, and I was
>considering CORBA or sockets. Both are rather new to me, so which one
>would you recommend, and why? O:-)

//-----------------------------------------------

Eric Marsden

Oct 30, 2000, 10:34:26 AM
>>>>> "en" == Erik Naggum <er...@naggum.net> writes:

en> For instance, CORBA is a single- threaded protocol, with
en> lock-step properties where you wait for the answer before you
en> continue with the next transaction.

this is no longer true. While the standard CORBA communication model
is synchronous two-way (the client thread which sends the request is
blocked while waiting for the server to answer), several other types
of communication are possible:

* oneway invocations, where the client doesn't wait for a response

* deferred synchronous, where the client invokes a request and can
poll the server later to see whether a response is available (this
is available by using the dynamic invocation interface)

* over EventChannels specified by the CORBA Event Service, or the more
recent Notification Service. The Event Channel mediates
communication between consumers and suppliers, and allows push,
pull, push/pull and pull/push interactions.

* asynchronous method invocations provided by CORBA Messaging, either
by polling or with callbacks. [This has been specified fairly
recently, so isn't supported by all implementations.]

See <URL:http://www.iona.com/hyplan/vinoski/> for some excellent
papers on these subjects (from the C/C++ Users Journal :-).

Tim Bradshaw

Oct 30, 2000, 10:48:35 AM
Erik Naggum <er...@naggum.net> writes:

>
> Yes, that the remote procedure call model is fundamentally flawed.
>
> It is, however, possible to do remote procedure calls intelligently,
> but it requires a programming language that can deal with unfinished
> computations and actually calculate with them for a while. This is
> not terribly difficult stuff, but not trivial, either.
>

I think this is yet another one of those cases where most software
people (clearly not Erik) are living in the world of the PDP11.
Hardware people spend all their time dealing with these issues -- just
look at the design of a modern processor, which is dealing with exactly
this kind of thing. Memory access is essentially just like RPC. The
PDP11 approach is: issue a request, wait for a result, do something,
issue another one and so on. Modern machines have to deal with huge
memory latencies, and so do something much more complex and
interesting involving multiple outstanding requests, caching of
requests, and speculatively executing code for which all the
prerequisites are not yet known. And they can make this stuff work on
machines with multiple processors, all doing their own caching, and in
a context where `fixes' to the algorithms can often not be made
because they're cast in silicon.

And to cap it all, they manage to make the resulting design *look like
a PDP11* because they know that's all SW people can deal with!

So I suppose either SW people will have to work out that you can't
program network protocols as if they were memory access on a PDP11, or
some *really* cool HW people will manage to make RPC look like it *is*
memory access on a PDP11!

--tim

William Deakin

Oct 30, 2000, 11:25:48 AM
Tim wrote:
> So I suppose either SW people will have to work out that you can't
> program network protocols as if they were memory access on a PDP11, or
> some *really* cool HW people will manage to make RPC look like it *is*
> memory access on a PDP11!

Is this because most SW people are stupid and most HW people are clever?
Is this why SW people like java, C++ ... and not Common Lisp?

;)w

Fernando Rodríguez

Oct 30, 2000, 12:02:49 PM
On 30 Oct 2000 15:20:33 +0000, Erik Naggum <er...@naggum.net> wrote:


> It is, however, possible to do remote procedure calls intelligently,
> but it requires a programming language that can deal with unfinished
> computations and actually calculate with them for a while. This is
> not terribly difficult stuff, but not trivial, either.

Could you elaborate on this, with some example, please?

Jason Trenouth

Oct 30, 2000, 11:59:07 AM
On 30 Oct 2000 13:22:04 +0000, Erik Naggum <er...@naggum.net> wrote:

> * Philip Lijnzaad <lijn...@ebi.ac.uk>
> | Unless the communication involves simple but big data (kind of like ftp,
> | say), use CORBA, by a very long shot.
>
> Really? I'd use CORBA for fairly simple stuff and roll my own if I
> get above a fairly low complexity threshold, but I know I'm about
> three orders of magnitude better at protocol design than the CORBA
> team could ever dream of becoming. For instance, CORBA is a single-
> threaded protocol, with lock-step properties where you wait for the
> answer before you continue with the next transaction. This is of
> course a consequence of serial computations and CORBA being a very,
> very slow way of doing serial computations on disjoint processes and
> processors.
>

> ...

CORBA calls are synchronous by default. The call site effectively blocks
waiting for a reply even for operations returning void and containing no 'out'
or 'inout' arguments. Part of the reason for this is probably that system
exceptions may be thrown by the ORB, or by the Object Adaptor on the server
side (e.g. OBJECT_NOT_EXIST), even when no user exceptions have been declared.

Client code that isn't interested in waiting for potential failures can spawn a
thread in which to make the call or could declare the operation as 'oneway' in
IDL.

The CORBA community have recently been defining new asynchronous calling
mechanisms for themselves:

http://cgi.omg.org/cgi-bin/doc?orbos/98-05-05

__Jason

Marc Battyani

Oct 30, 2000, 1:23:14 PM

"Erik Naggum" <er...@naggum.net> wrote in message
news:31819049...@naggum.net...

> * "Marc Battyani" <Marc.B...@fractalconcept.com>
> | There are lots of text based protocols (ok it's a weak argument they
> | could all be just crap). Why are you against simple lisp reader
> | friendly text based protocols when 1) the task is really easy and 2)
> | you master both sides of the communication?
>
> Can you back up and at least explain _how_ you arrived at the
> assumptions that underlie your questions?

Maybe it's an English issue (on my part), but when you write:

> CORBA is already badly designed, so if you are new to sockets and
> protocol design (which you will get yourself into), the likelihood
> that you, too, will design your protocol badly is so high that it is
> probably better to go with CORBA.

My understanding of this is that "CORBA is bad" and "you probably won't do
better, so use it anyway".

This is why I ask whether you still think so when the protocol is a simple,
Lisp-reader-friendly, text-based protocol, 1) the task is really easy, and
2) you master both sides of the communication?

Marc Battyani

Paolo Amoroso

Oct 30, 2000, 2:22:49 PM
On 30 Oct 2000 11:56:44 +0000, Erik Naggum <er...@naggum.net> wrote:

> right. Such braindamaged disasters as HTTP (any version) would not
> have been possible if there were at least some reasonably accessible
> material on how to do these things.

Could you please mention a few examples of well designed protocols that are
worth studying? Thanks in advance.


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

Will Hartung

Oct 30, 2000, 3:04:05 PM

Marc Battyani wrote in message <8tkf8h$4ka$1...@reader1.fr.uu.net>...

>
>Maybe it's an English issue (on my part), but when you write:
>
>> CORBA is already badly designed, so if you are new to sockets and
>> protocol design (which you will get yourself into), the likelihood
>> that you, too, will design your protocol badly is so high that it is
>> probably better to go with CORBA.
>
>My understanding of this is that "CORBA is bad" and "you probably won't do
>better so use it anyway"
>
>This is why I ask whether you still think so when the protocol is a simple,
>Lisp-reader-friendly, text-based protocol, 1) the task is really easy, and
>2) you master both sides of the communication?


The issue that you are missing is that there is more to a protocol than
simply the data being transmitted. There are also the timing and sequencing
details. You can certainly have a very sophisticated and complicated
protocol based on either a Lisp text stream or a binary stream.

HTTP is, essentially, a text-based protocol, but that fact alone doesn't
make it good or bad, just readable to humans.

Regards,

Will Hartung
(will.h...@havasint.com)

Erik Naggum

Oct 30, 2000, 3:13:52 PM
* Fernando Rodríguez <spa...@must.die>

| Could you elaborate on this, with some example, please?

Why?

Consider the "normal" way people implement streams, with a single
buffer that is over-written with new data as the reading crashes
into the right buffer wall and then continues from the left end of
the buffer. This is not smart in a modern world.

Instead of crashing into a buffer wall, have several buffers. Ask
the operating system to fill them for you while you're more or less
busy processing the data already read, as in "asynchronously". The
operating system is doing a decent job of predicting your linear
progression through the file, anyway, so it sort of works to crash
into buffer walls and get first aid from the operating system and
get back on your feet, but it is far from optimal, and no matter
what you do with that system call, it at _least_ has to copy the
buffer contents from system memory to user memory, which is also
probably at a really bad alignment for hardware support for such
copying. Several page-aligned buffers can cause I/O to occur at the
operating system's discretion, copying is not needed if the
operating system can map the page directly into user space, and you
can scan through a large file in no time compared to these false
stops and starts as you crash into buffer walls.

Consider the sending of mail to someone with SMTP. As long as you
have well-defined state transitions that are (fairly) independent,
send a whole bunch of commands down the wire and match the responses
to the requests sent instead of waiting for each response by itself.
Consequence? _Dramatic_ speedup when sending to many recipients.
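
A sketch of that kind of pipelining against an SMTP-style server: write every
RCPT TO command first, flush once, then read the replies in order and match
them back to the recipients. STREAM is assumed to be an open socket stream;
real SMTP needs CRLF line endings and the PIPELINING extension, which are
glossed over here.

(defun read-reply (stream)
  ;; read one (possibly multi-line) reply and return its 3-digit code;
  ;; continuation lines look like "250-...", the final line like "250 ..."
  (loop for line = (read-line stream)
        while (and (>= (length line) 4) (char= (char line 3) #\-))
        finally (return (parse-integer line :end 3))))

(defun pipelined-rcpt (stream recipients)
  ;; send all the commands before reading any reply
  (dolist (rcpt recipients)
    (format stream "RCPT TO:<~A>~%" rcpt))
  (force-output stream)
  ;; then collect one reply per command, in the same order
  (loop for rcpt in recipients
        collect (cons rcpt (read-reply stream))))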

In normal human life, you don't get paralyzed and die if you don't get
an answer to a question, but (carefully) go on with assumptions
because you probably have a pretty good idea what kind of answer
you're going to get. Just how many code paths could you possibly
walk down depending on the answer? Do them all! You certainly have
the time while the bits trickle across the globe.

Erik Naggum

Oct 30, 2000, 3:17:42 PM
* Eric Marsden <emar...@mail.dotcom.fr>

| this is no longer true.

I appreciate the update.

I _still_ think CORBA sucks, though.

Erik Naggum

Oct 30, 2000, 3:51:10 PM
* William Deakin <w.de...@pindar.com>

| Is this because most SW people are stupid and most HW people are clever?
| Is this why SW people like java, C++ ... and not Common Lisp?

Is that a SW person or a HW person asking?

Erik Naggum

Oct 30, 2000, 3:31:36 PM
* "Marc Battyani" <Marc.B...@fractalconcept.com>

| This is why I ask you if you still think so if the protocol is a
| simple lisp reader friendly text based protocol when 1) the task is
| really easy and 2) you master both sides of the communication?

The likelihood that novices will do worse than CORBA is still very,
very high. Look at how people design their input languages from
simple files that are under total programmer control, and how often they
blow it so badly that applications crash and burn, a missed version update
causes serious bit rot, and changing your mind about something means
you lose a _lot_ of old information. Using Lisp for this does not
really help.

No networked task is really easy. If it were, you wouldn't be doing
it. If you're still doing it, how could that possibly impact any
decision on how to design protocols for tasks _worth_ doing?

I actually _favor_ Lisp-based protocols, but not because "it's text,
so it's easy" is even close to relevant (it isn't -- syntax is an
important issue to humans -- it is _not_ important to representing
the data), but because it means I have a stable and proven framework
to work from.

It is actually quite important to realize that just because it's
text doesn't make it any less fraught with all the dangers of other
network protocols. For one thing, network communication is subject
to failure modes that novices never consider. Even with the best
of checksum algorithms, you _may_ get rotten data once in a while.
Deadlocks may occur whether you use a hairy binary protocol or a
Lisp-based protocol. All these things are unlikely to happen if you
run on a local machine, and they don't happen if you work within the
same memory. (Unless you run without error correcting memory, of
course, but then again, if you do that, you have already proclaimed
to the world that you don't care, so nobody else should bother.)

Marc Battyani

Oct 30, 2000, 4:27:15 PM

"Will Hartung" <will.h...@havasint.com> wrote in message
news:8tkk7...@news1.newsguy.com...

Sure. I do agree with all this and this is why I think that for easy tasks
(T.B.D.) it's better to use sockets than CORBA (or COM for M$ fans). When I
don't write Lisp, I write VHDL where I implement lots of (fast!) protocols
at the bit/hardware level. Well, it seems that I should communicate in VHDL
or Lisp rather than English today...

Another communication channel I have used between applications is mapped
memory plus system events. This gives the fastest communication I could
achieve between two applications.

Marc Battyani

Marc Battyani

Oct 30, 2000, 4:45:18 PM

"William Deakin" <w.de...@pindar.com> wrote in message
news:39FDA10C...@pindar.com...

Now that HW is written in VHDL (i.e. SW), the frontier is becoming rather
fuzzy. Lisp has long been present in the electronics CAD software industry,
and the standard file format (EDIF) should look familiar here. See for
yourself; here are the first lines of an EDIF file...

(edif time_sim
  (edifVersion 2 0 0)
  (edifLevel 0)
  (keywordMap (keywordLevel 0))
  (status
    (written
      (timestamp 2000 5 24 12 0 29)
      (program "Xilinx NGD2EDIF" (version "C.18"))
      (comment "Command line: -w -v fndtn testus.nga time_sim.edn ")))
  (external SIMPRIMS
    (edifLevel 0)
    (technology (numberDefinition
      (scale 1 (E 1 -12) (unit TIME))))
    (cell x_and3
      (cellType GENERIC)
      (view view_1
        (viewType NETLIST)
        (interface
          (port IN0
            (direction INPUT))
          (port IN1
            (direction INPUT))
          (port IN2
            (direction INPUT))
          (port OUT
            (direction OUTPUT))
          ....

Looks like HW guys don't need XML, though I'm sure some brilliant SW guy will
explain some day that these files should be written in XML to be used by
C#.NET...
When that arrives, we can start having to reboot, several times per day,
our watches, radio receivers, phones, lifts, flashlights, ...

Marc Battyani

Marc Battyani

Oct 30, 2000, 5:15:19 PM

"Erik Naggum" <er...@naggum.net> wrote in message
news:31819266...@naggum.net...

OK, I fully agree with this. I missed the emphasis you placed on the
difficulty of designing a correct protocol, and I thought you were against
using Lisp-based protocols.

Marc Battyani


Erik Naggum

Oct 30, 2000, 5:41:40 PM
* Paolo Amoroso <amo...@mclink.it>

| Could you please mention a few examples of well designed protocols
| that are worth studying? Thanks in advance.

TCP and IPv4 are _very_ good protocols. (I don't know enough about
IPv6 to rate it.) The telecommunications protocols used on the
gigabit trunks and on the long-haul high-speed interconnects are
amazing feats of science. Of the really, _really_ large protocol
suites, I find ITU's Signalling System #7, the Digital Subscriber
Signalling System, and the Integrated Services Digital Network
protocols as well-designed as they are complex. The Telecom people
have probably done some of the best work there is in protocol
design. Just take a look at how they started out with very low
speeds, like 300 bps, but over time managed to squeeze 56kbps
through the feeble phone lines that were not upgraded, then got DSL
to work across that same old copper wire. Impresses me, anyway.

FTP and SMTP are well-designed and fairly small protocols, too.
(Small does not mean quick-and-dirty: If you spend less than 3
months implementing a server or client for either of them, however,
you're doing it wrong, as almost every implementation that is not
regarded as a huge monstrosity is.)

As an example of a very good way of separating abstractions and
representation, ASN.1 and its encoding rules are worth a study, such
as what is done with SNMP.

Finally, if you don't know X.25, you don't know packet exchange, but
it's _real_ old, now, and probably doesn't bring anything directly
applicable to people who think in streams implemented on top of
record-based protocols, except what _not_ to do with streams and
when it really does make sense to use records.

_Very_ few of the recent application-level Internet protocols are
worth the paper they aren't published on. The sheer lack of design
behind the protocol that is _actually_ at work in the MIME cruft is
astonishing, but trying to track that braindamaged piece of shit
from its inception to its current specification level through the
numerous RFCs will tell you what a few over-sized egos who had no
clue what they were doing can do to a conceptually _very_ simple
thing: Wrap various objects in a textual representation.

On the other hand, the Network Time Protocol was done by people who
really cared and also who managed to keep the dolts out of the way.

Jon S Anthony

Oct 30, 2000, 6:10:30 PM
Erik Naggum wrote:
>
> * Philip Lijnzaad <lijn...@ebi.ac.uk>
> | Unless the communication involves simple but big data (kind of like ftp,
> | say), use CORBA, by a very long shot.
>
> Really? I'd use CORBA for fairly simple stuff and roll my own if I
> get above a fairly low complexity threshold, but I know I'm about
> three orders of magnitude better at protocol design than the CORBA

The real question should be whether you believe your new protocol adds
real value to what it is you are producing. If you are in the
protocol business this is obvious. If you are in something else
entirely and only need to make use of protocols, it is highly unlikely
that the time spent on inventing yet another one will be of any real
value for your project/product.

CORBA is _not_ anything like the best distributed object protocol that
one could define - probably a legacy of the C++ mentality of most of
the original vendors involved. However, it _does_ work rather well
for many scenarios - most any that someone not in the protocol tool
business would care about.


> team could ever dream of becoming. For instance, CORBA is a single-
> threaded protocol, with lock-step properties where you wait for the
> answer before you continue with the next transaction. This is of
> course a consequence of serial computations and CORBA being a very,
> very slow way of doing serial computations on disjoint processes and
> processors.

This has not been true since at least CORBA 2.0. That's not to say
that you have "true" asynchronous communication (request and asynch
notification of completion) out of the box (certainly not in any of
the ORBs we've used). However, you certainly have oneway calls
(basically datagrams...) and can also make use of these, or more likely
synchronous "posts", to get this. We've done this here and the
results are actually _very_ good.

The nice thing about using CORBA to do this is that it at least
eliminates all the marshalling and unmarshalling of data items
(primitive and structured) in a platform independent way. Unless your
business is really line protocols or something similar, rolling your
own here with sockets is just a distracting waste of resources.


> the software world? People who use all this new hardware power
> only to wait for ages for incoming data before they turn around
> _real_ fast to send something that did not even need to wait for
> that value and then proceed to wait for ages, again,

Only if you don't make use of threads in making the calls in the first
place. If each call is a thread, the "waiting for the synchronous
reply" is irrelevant. Most ORBs are multithreaded and can easily
handle this.


> E.g, how many applications do you know which continue to work just
> as well if you move one of the computers 3000 miles away? Do you
> think CORBA handles this situation well? From what I have seen, the

Well we have done it at 2000 miles and the results were basically
"instantaneous" (well under a second). Using straight X was death.
Using HTTP was (of course) worse than death.

/Jon

--
Jon Anthony
Synquiry Technologies, Ltd. Belmont, MA 02478, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Erik Naggum

Oct 30, 2000, 7:56:07 PM
* Jon S Anthony <j...@synquiry.com>

| The real question should be whether you believe your new protocol adds
| real value to what it is you are producing. If you are in the
| protocol business this is obvious. If you are in something else
| entirely and only need to make use of protocols, it is highly unlikely
| that the time spent on inventing yet another one will be of any real
| value for your project/product.

I disagree with this assessment. It is the same argument that can
be made for something overly complex and badly implemented, like
SGML. It has also been touted as solving problems that people would
not be able to solve more cheaply by themselves. That was just plain
false. Similar problems crop up with other committee products that
have tried to solve overly broad and thus non-existing problems
instead of trying to solve real problems of real complexity.

How much effort does it take a good system designer to come up with
something better than XML for any _particular_ purpose? If he
grasps the good and useful qualities of SGML, a simplified syntax,
slightly general tools, etc., can be written in a fairly short amount
of time, and it costs less to develop, maintain, and use than a
full-fledged SGML system does, because, importantly, SGML's full
generality has surpassed the point where additional features bring
nothing but more costs than benefits to the equation. A particular
markup language (or something similar) is also easier to learn for
someone who has _not_ spent much too much time studying the original
-- like someone who is only moderately familiar with the original.

| CORBA is _not_ anything like the best distributed object protocol
| that one could define - probably a legacy of the C++ mentality of
| most of the original vendors involved. However, it _does_ work
| rather well for many scenarios - most any that someone not in the
| protocol tool business would care about.

Precisely, but having designed a few protocols that have been in
wide use for over a decade each, just how much effort do you think
it is to design new ones? The reason protocol design is hard is
that people are told not to do it, and thus do not investigate the
issues involved. This also leads to incredibly bad protocols in
use, because the users are told they are too hard to understand.

| This has not been true since at least CORBA 2.0.

I'm "happy" to hear that.

| The nice thing about using CORBA to do this is that it at least
| eliminates all the marshalling and unmarshalling of data items
| (primitive and structured) in a platform independent way.

You know, this is _not_ a hard problem. It is a hard problem in C++
and the like, but that is because those languages have lost every
trace of the information the compiler had when building the code.
That design mistake has cost the world _billions_ of dollars. If
you don't make that design mistake, you don't have to worry. I
know, this sounds "too good to be true", but the inability of any
small C++ program to read C++ source code and do useful things with
it is the reason for all these weird data-oriented languages.

| Unless your business is really line protocols or something similar,
| rolling your own here with sockets is just a distracting waste of
| resources.

Well, I have tried both approaches. Have you?

| Only if you don't make use of threads in making the calls in the
| first place. If each call is a thread, the "waiting for the
| synchronous reply" is irrelevant. Most ORBs are multithreaded and
| can easily handle this.

Right. Not my experience. Of course, everything gets better in the
mythical "future", so do stay with CORBA if you have other things to
do while waiting for the synchronous request for more features sees
a response from the CORBA people. (Sorry, a tad rambling here.)

| Well we have done it at 2000 miles and the results were basically
| "instantaneous" (well under a second). Using straight X was death.
| Using HTTP was (of course) worse than death.

How long do you think you would have spent designing a protocol
that would have been just as good?

Christopher Browne

Oct 30, 2000, 11:12:09 PM
In our last episode (30 Oct 2000 16:34:26 +0100),

the artist formerly known as Eric Marsden said:
>>>>>> "en" == Erik Naggum <er...@naggum.net> writes:
>
> en> For instance, CORBA is a single- threaded protocol, with
> en> lock-step properties where you wait for the answer before you
> en> continue with the next transaction.
>
>this is no longer true. While the standard CORBA communication model
>is synchronous two-way (the client thread which sends the request is
>blocked while waiting for the server to answer), several other types
>of communication are possible:
>
> * oneway invocations, where the client doesn't wait for a response

Deprecated, due to the availability of AMI...

> * deferred synchronous, where the client invokes a request and can
> poll the server later to see whether a response is available (this
> is available by using the dynamic invocation interface)

> * over EventChannels specified by the CORBA Event Service, or the more
> recent Notification Service. The Event Channel mediates
> communication between consumers and suppliers, and allows push,
> pull, push/pull and pull/push interactions.

Essentially, these two approaches amount to a _simulation_ of
asynchronous communications by using some sort of "proxy" to manage
the connection. They still use synchronous two-way communications; if
whatever server is sitting there ready to accept the requests can
queue them so they _appear_ async, then you get something that _looks_
async.

Reality doesn't change...

> * asynchronous method invocations provided by CORBA Messaging,
> either by polling or with callbacks. [This has been specified
> fairly recently, so isn't supported by all implementations.]

Are there any non-C++ implementations of AMI? All the documentation
I've seen on AMI has been quite tightly tied to C++...

>See <URL:http://www.iona.com/hyplan/vinoski/> for some excellent
>papers on these subjects (from the C/C++ Users Journal :-).

The thing that is irritating about CORBA is that once you start trying
to get into anything that deviates from "just plain vanilla
synchronous connections," it gets _real_ complex and _real_ strongly
tied to platform dependencies _real_ fast. They've got a standard for
everything, but only a few that are ubiquitously available.

It's a bit like comparing CL to Scheme; both involve standards; the
problem with Scheme, like CORBA, is that getting Full Functionality
means having to have huge quantities of extensions that don't work
with all the implementations. (Contrast with the fact that the SERIES
extension for CL can at least be _hoped_ to work with many CL
implementations...)
--
(concatenate 'string "cbbrowne" "@" "acm.org")
<http://www.hex.net/~cbbrowne/corba.html>
"Popularity is the hallmark of mediocrity." --Niles Crane, "Frasier"

William Deakin

Oct 31, 2000, 4:14:53 AM
Erik wrote:
> Is that a SW person or a HW person asking?

Clearly a SW person or else I wouldn't be talking about it :)

Peter Vaneynde

Oct 31, 2000, 5:12:45 AM
Erik Naggum <er...@naggum.net> writes:

> * Fernando Rodríguez <spa...@must.die>


> | I have to make a Lisp process communicate with a C++ app, and I was
> | considering CORBA or sockets. Both are rather new to me, so
> | which one would you recommend, and why? O:-)
>

> CORBA is already badly designed, so if you are new to sockets and

IMHO CORBA is badly designed, and the problems this produces are only
amplified by the legions of C++ "programmers" who want to design
systems using only the knowledge of "CORBA in 21 days".

Network programming isn't easy, and using CORBA in a slightly
different way than the designers imagined will get you into a world of
pain. But it all appears so easy in the examples that people are lured
into a false sense of security...

I think CORBA should be seen as "just" an RPC call wrapped around ASN.1, and
people should use its 'OO'-ness as sparingly as possible. Just have a
Server object replying to simple (non-object) commands with simple
(structure) replies. Stick to this and you'll be safe. Ignore the
C++/Java people who cry that you don't know OOP and want to make everything
an object. Remember that an RPC is inherently slow, so limit the number
of calls needed to do a typical operation. Also (unless you use
advanced stuff, not implemented in the Lisp servers AFAIK) objects in
CORBA are horribly costly: they take about 1K of space on the wire and,
most deadly of all, *they never, ever die*. An object in CORBA, once
created and given out, *has* to remain alive for the duration of the
Lisp process.
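
A sketch of that coarse-grained style: a single entry point, plain commands
in, plain structures out, so a typical operation costs one remote call. The
command names and fields are invented for the example.

(defun handle-command (command &rest args)
  ;; each command is self-contained and answers with a plain property list
  (ecase command
    (:ping
     (list :status :ok))
    (:lookup-customer
     (destructuring-bind (&key id) args
       ;; one round trip returns everything a client typically needs,
       ;; instead of an object reference to be picked apart call by call
       (list :id id :name "unknown" :orders '())))))

;; (handle-command :lookup-customer :id 42)
;; => (:ID 42 :NAME "unknown" :ORDERS NIL)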


> If you are very dedicated, you will probably spend a few months just
> getting up to speed with protocol design before you can do something
> that is not a total drag, and upwards of a year to get it fully
> right.

Right. Be prepared to junk the first few versions of your code. (or
just give up and leave that project)

> Security problems in protocols are so common and so hard to avoid
> that you _really_ don't want to expose yourself to the process of
> learning by doing.

In my experience people don't care, and I don't expect this to change
soon :-(.

Groetjes, Peter

--
LANT nv/sa, Research Park Haasrode, Interleuvenlaan 15H, B-3001 Leuven
mailto:Peter.V...@lant.be Phone: ++32 16 405140
http://www.lant.be/ Fax: ++32 16 404961

Eric Marsden

Oct 31, 2000, 5:44:39 AM
>>>>> "cb" == Christopher Browne <cbbr...@news.hex.net> writes:

ecm> asynchronous method invocations provided by CORBA Messaging,
ecm> either by polling or with callbacks. [This has been specified
ecm> fairly recently, so isn't supported by all implementations.]

cb> Are there any non-C++ implementations of AMI? All the
cb> documentation I've seen on AMI has been quite tightly tied to
cb> C++...

not that I know of. Probably future releases of Orbix2000 for Java
will include AMI.

Eric Marsden

Oct 31, 2000, 7:51:51 AM
>>>>> "pve" == Peter Vaneynde <Peter.V...@lant.be> writes:

pve> Remember that a RPC is inherently slow, so limit the amount of
pve> calls needed to do a typical operation

a CORBA method invocation does not necessarily involve network
communication: if the client and server are collocated (reside in the
same address space), the ORB can use a simple method call.


pve> objects in CORBA are horribly costly: they take about 1K of
pve> space on the wire and, most deadly of all, *they never, ever die*.
pve> An object in CORBA, once created and given out *has* to remain
pve> alive for the duration of the lisp process.

I haven't used the commercial CL ORB implementations, so I don't know
what they are capable of. However, I don't see anything in the CORBA
specifications which requires an object to stay alive indefinitely.

You publish the availability of a service through an object reference.
On the host+port specified by that object reference, the object
adapter is responsible for dispatching incoming invocations to the
appropriate servant (a programming language entity which implements
the service). In CL, a servant is probably represented by an instance
of a CLOS class.

The object adapter maintains a mapping between object identifiers (the
server-specific component of an object reference) and servants. When
publishing a service, the only thing which has to remain alive
server-side is its object identifier (a key in a hashtable, say). You
aren't obliged immediately to incarnate a servant: the object adapter
can do so on-demand. Once a servant has serviced a request the object
adapter can decide to kill it (called "etherealization"), to be
restarted when a new request comes in. You can also have one servant
which services requests for multiple objects.
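
A toy picture of that bookkeeping, not any particular ORB's API: an adapter
maps object identifiers to servants, incarnates a servant only when a request
actually arrives, and can etherealize it again without unpublishing the
identifier. Here a servant is simply a one-argument function.

(defstruct adapter
  (servants  (make-hash-table :test #'equal))   ; object-id -> live servant
  (factories (make-hash-table :test #'equal)))  ; object-id -> servant maker

(defun publish (adapter object-id factory)
  ;; register OBJECT-ID; FACTORY builds a servant on demand
  (setf (gethash object-id (adapter-factories adapter)) factory))

(defun dispatch (adapter object-id request)
  ;; incarnate the servant if necessary, then hand it the request
  (let ((servant (or (gethash object-id (adapter-servants adapter))
                     (setf (gethash object-id (adapter-servants adapter))
                           (funcall (or (gethash object-id (adapter-factories adapter))
                                        (error "Unknown object: ~S" object-id)))))))
    (funcall servant request)))

(defun etherealize (adapter object-id)
  ;; drop the live servant; the object identifier itself stays published
  (remhash object-id (adapter-servants adapter)))

;; (let ((poa (make-adapter)))
;;   (publish poa "thermometer-1"
;;            (lambda () (lambda (request) (list :reply request :value 21.5))))
;;   (dispatch poa "thermometer-1" '(:read)))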

Peter Vaneynde

Oct 31, 2000, 10:27:59 AM
Eric Marsden <emar...@mail.dotcom.fr> writes:

> a CORBA method invocation does not necessarily involve network
> communication: if the client and server are collocated (reside in the
> same address space), the ORB can use a simple method call.

AFAIK no common lisp ORB can collocate.

> pve> objects in CORBA are horribly costly: they take about 1K of
> pve> space on the wire and, most deadly of all, *they never, ever die*.
> pve> An object in CORBA, once created and given out *has* to remain
> pve> alive for the duration of the lisp process.
>
> I haven't used the commercial CL ORB implementations, so I don't know
> what they are capable of. However, I don't see anything in the CORBA
> specifications which requires an object to stay alive indefinitely.

In at least one CL ORB you can force the deletion of the object (this
was added on request), and further operations on the object result in
an exception. This was only added later and I don't know how well it
works, nor whether it is in the base product now (it doesn't seem to be). The
position of the designers was that after publication something had to
be there to reply to RPCs. Note that the published CORBA mapping
claims that Lisp objects do not need a destroy method (sorry, I don't
know the exact name) as Lisp doesn't need it...

> The object adapter maintains a mapping between object identifiers (the
> server-specific component of an object reference) and servants. When
> publishing a service, the only thing which has to remain alive
> server-side is its object identifier (a key in a hashtable, say). You
> aren't obliged immediately to incarnate a servant: the object adapter
> can do so on-demand. Once a servant has serviced a request the object

The POA right?

> adapter can decide to kill it (called "etherealization"), to be
> restarted when a new request comes in. You can also have one servant
> which services requests for multiple objects.

Not implemented AFAIK. IMHO if you have a need for a large number of
objects in your design it's likely you're doing something wrong.

See:
http://www.franz.com/support/documentation/6.0/orblink/doc/orblink-idl.htm
http://www.franz.com/support/documentation/6.0/orblink/doc/standards.htm

Wade Humeniuk

Oct 31, 2000, 11:31:05 AM

> TCP and IPv4 are _very_ good protocols. (I don't know enough about
> IPv6 to rate it.) The telecommunications protocols used on the
> gigabit trunks and on the long-haul high-speed interconnects are
> amazing feats of science. Of the really, _really_ large protocol

They are fine protocols, but only as transport and network layer
protocols. The whole conceptual framework of protocols I think is best
described by the OSI reference model. TCP and IPv4 only live within the
context of the data link and physical layer protocols that carry them.
All the issues of reliability, delivery and such need to be looked at as
protocol stacks.

For those who want to learn about protocols, break open the OSI spec on
the reference model. Anybody remember the ISO spec number? I don't have
it handy anymore.

When I think about what makes good protocols, I think they should have state
models that are totally defined. Modern protocols seem to be lacking
proper definitions of their state machines and are more ambiguous in
their specification. X.25 is very well defined, with state tables
unambiguously placed in the specification. Also, I have found that a good
feature of a protocol is that it does not try to do too much. This
means that its finite state machine does not exceed 15 states with about
15 events. Do not ask me to justify these arbitrary numbers. I have
just found from experience that if the state machine exceeds these
limits there is probably more than one state machine that can be layered
into a protocol stack.
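
A sketch of that state-table style: every (state, event) pair is listed
explicitly, and anything not in the table is an error rather than undefined
behaviour. The states and events are invented for the example.

(defparameter *transitions*
  ;; (state event -> next-state)
  '((closed      connect    -> wait-accept)
    (wait-accept accepted   -> established)
    (wait-accept rejected   -> closed)
    (established data       -> established)
    (established disconnect -> closed)))

(defun next-state (state event)
  ;; undefined (state, event) pairs signal an error instead of being guessed at
  (let ((entry (find-if (lambda (row)
                          (and (eq (first row) state)
                               (eq (second row) event)))
                        *transitions*)))
    (if entry
        (fourth entry)
        (error "Protocol error: no transition for ~S in state ~S." event state))))

;; (next-state 'closed 'connect) => WAIT-ACCEPT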

> protocols as well-designed as they are complex. The Telecom people
> have probably done some of the best work there is in protocol

The major problem with the Telecom protocols I have found has been their
definitions of their protocol data units, which place a premium on
reducing the number of bits in network traffic. This makes the PDUs
hard to parse. A great example of this, if you can get the spec, is
IS-634, the control protocol between Cellular Base Stations and
Management systems. Whoever dreamed that up has never programmed a
computer before. Also, the telecom protocols (at least the cellular
ones) tend to define the protocol state machines in SDL, which
allows protocol designers to leave gaps in what happens when error
conditions occur.

>
> As an example of a very good way of separating abstractions and
> representation, ASN.1 and its encoding rules are worth a study, such
> as what is done with SNMP.

ASN.1, what can I say, another language to learn. Liking Lisp, I think
a Lisp external syntax should be used for a platform-independent
encoding of PDUs. It takes more bytes, but it transports well (and I can
see it clearly on a data scope).

I am still divided on the issue of creating protocols between
cooperating machines/processes versus going for a distributed-OS view where
everything is one large memory space. I thought that is what CORBA is
offering: one large memory space. There is room for both of them; a
distributed OS at some lower level must implement protocols. Does a
programmer have to see that?

Wade

Philip Lijnzaad

Oct 31, 2000, 11:36:54 AM

pve> objects in CORBA are horribly costly: they take about 1K of
pve> space on the wire

You're talking about object references, which can actually be much smaller
than 1K (IOR strings don't tell you that much); it depends a bit on the ORB
vendor. The minimum it must contain (for IIOP) is protocol name + version,
hostname, port number, adaptor name and object identifier (the last two are
mangled into the object key, in a vendor-specific fashion). These components
would be necessary for any object reference that is supposed to work across a
network. I believe I read that an estimate of the lower bound on the data in
an object reference is below 100 bytes (sorry, no 'references').

pve> and, most deadly of all, *they never, ever die*.


pve> An object in CORBA, once created and given out *has* to remain
pve> alive

Same is true for telephone numbers and URLs ... Somehow, the real world isn't
quite aware of this :-)

pve> for the duration of the lisp process.

Peter> In at least one cl-ORB you can force the deletion of the object (this
Peter> was added on request)

This sounds very suspicious to me. If an object should be deletable by a
client, then its interface should bloody well have a delete() or destroy()
method or some such (and many do).


Peter> and further operations on the object result in
Peter> an exception.

exactly.

Peter> Note that the published CORBA mapping claims that Lisp objects do not
Peter> need the destroy method (sorry don't know the exact name)

'the destroy method' ? There is no such thing. It depends on the IDL, which
has nothing to do with languages.

Peter> as Lisp
Peter> doesn't need it...

??? If an IDL has interface Car { ...; destroy(); }

then this has nothing to do with Lisp garbage collection: it's simply a
method to have clients delete a particular Car object.

>> The object adapter maintains a mapping between object identifiers (the
>> server-specific component of an object reference) and servants. When
>> publishing a service, the only thing which has to remain alive
>> server-side is its object identifier (a key in a hashtable, say). You
>> aren't obliged immediately to incarnate a servant: the object adapter
>> can do so on-demand. Once a servant has serviced a request the object

Peter> The POA right?

(yes)

>> adapter can decide to kill it (called "etherealization"), to be
>> restarted when a new request comes in. You can also have one servant
>> which services requests for multiple objects.

Peter> Not implemented AFAIK.

This so-called DefaultServant approach is part of the POA, and therefore must
be implemented in _all_ ORBs that are CORBA 2.1 compliant (we're now at version
2.3a, btw). If not, they are not compliant. Incidentally, the DefaultServant
can be simulated by using a ServantLocator approach, where the preinvoke()
method always returns the same servant. It prolly is slightly more costly
though.

Peter> IMHO if you have a need for a large number of objects in your design
Peter> it's likely you're doing something wrong.

Why? It turns out that using objects in our field (bioinformatics) is very
natural and does make things easier. Since we're talking biggish databases
(1.5 million biological sequences, with many more sub-objects) here, we
inevitably have large numbers of objects. This has nothing to do with the
design: using 1.5 million little pieces of text (which is actually where we
come from) is not an option.

When you're talking about large numbers of objects (all of them being equally
important), you _must_ have persistency behind them. Given this, CORBA
makes it easy to -- completely transparently to clients -- instantiate
some 'working set' of all those objects on demand (which, BTW, no other
distributed (object?) technology comes close to providing).

Philip
--
Time passed, which, basically, is its job. -- Terry Pratchett (in: Equal Rites)
-----------------------------------------------------------------------------
Philip Lijnzaad, lijn...@ebi.ac.uk \ European Bioinformatics Institute,rm A2-24
+44 (0)1223 49 4639 / Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax) \ Cambridgeshire CB10 1SD, GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC 50 3D 1F 64 40 75 FB 53

Lieven Marchand

Oct 30, 2000, 5:00:51 PM
Erik Naggum <er...@naggum.net> writes:

> * Eric Marsden <emar...@mail.dotcom.fr>
> | this is no longer true.
>
> I appreciate the update.
>
> I _still_ think CORBA sucks, though.

If you want to see protocol design gone berserk, try ASN.1. They
haven't solved any of the real problems but they added heaps of
complexity in the process. Unfortunately, it's caught on in some part
of the IETF world, so I'm considering writing some tools in CL to make
it livable.

--
Lieven Marchand <m...@bewoner.dma.be>
Lambda calculus - Call us a mad club

Rainer Joswig

Oct 31, 2000, 12:17:59 PM
In article <m3n1fmh...@localhost.localdomain>, Lieven Marchand
<m...@bewoner.dma.be> wrote:

> Erik Naggum <er...@naggum.net> writes:
>
> > * Eric Marsden <emar...@mail.dotcom.fr>
> > | this is no longer true.
> >
> > I appreciate the update.
> >
> > I _still_ think CORBA sucks, though.
>
> If you want to see protocol design gone berserk, try ASN.1. They
> haven't solved any of the real problems but they added heaps of
> complexity in the process. Unfortunately, it's caught on in some part
> of the IETF world, so I'm considering writing some tools in CL to make
> it livable.

Have you looked at http://www.switch.ch/misc/leinen/snmp/sysman.html ?

--
Rainer Joswig, Hamburg, Germany
Email: mailto:jos...@corporate-world.lisp.de
Web: http://corporate-world.lisp.de/

Jason Trenouth

Oct 31, 2000, 11:53:12 AM
On Tue, 31 Oct 2000 15:27:59 GMT, Peter Vaneynde <Peter.V...@lant.be>
wrote:

> Eric Marsden <emar...@mail.dotcom.fr> writes:
>
> > a CORBA method invocation does not necessarily involve network
> > communication: if the client and server are collocated (reside in the
> > same address space), the ORB can use a simple method call.
>
> AFAIK no common lisp ORB can collocate.

Xanalys HCL ORB does co-location optimization for Common Lisp clients and
servers within the same LispWorks process.

__Jason

Erik Naggum

Oct 31, 2000, 1:50:58 PM
* Lieven Marchand

| If you want to see protocol design gone berserk, try ASN.1.

ASN.1 is not a protocol design. It's a data description language.
The full name is Abstract Syntax Notation #1. It does no more than
define structures that are named and identified by mutual agreement.
To use ASN.1, you have to choose some encoding rules, and there are
unfortunately too many of them, but the Basic and Distinguished
Encoding Rules are actually OK. (C programmers hate them, because
they make life so much harder in an untyped language like theirs,
but if you know what kinds of objects you get, as in dynamic types,
you can easily deal with these things.)
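
The flavour of the encoding rules being described: every value goes on the
wire as tag, length, contents. A sketch for small non-negative INTEGERs
(tag #x02, short-form length); real BER also covers negative values,
long-form lengths, constructed types, and so on.

(defun ber-encode-integer (n)
  ;; return a list of octets: tag, length, then big-endian content octets
  (assert (and (integerp n) (>= n 0)))
  (let ((content (if (zerop n)
                     (list 0)
                     (loop for x = n then (ash x -8)
                           while (> x 0)
                           collect (logand x #xFF) into bytes
                           finally (return (nreverse bytes))))))
    ;; a leading zero octet keeps the sign bit clear for positive values
    (when (>= (first content) #x80)
      (push 0 content))
    (append (list #x02 (length content)) content)))

;; (ber-encode-integer 5)   => (2 1 5)
;; (ber-encode-integer 200) => (2 2 0 200)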

| They haven't solved any of the real problems but they added heaps of
| complexity in the process.

I can't agree with your assessment. Would you like to explain what
you consider the real problems they have not solved and what the
heaps of complexity added have been?

| Unfortunately, it's caught on in some part of the IETF world, so I'm
| considering writing some tools in CL to make it livable.

I'm surprised that you haven't, already. ASN.1 is about typed data.
It does not live well in a "typed variable" approach to programming.

Erik Naggum

unread,
Oct 31, 2000, 1:36:55 PM10/31/00
to
* Wade Humeniuk <hume...@cadvision.com>

| They are fine protocols, but only as transport and network layer
| protocols.

What does this mean? It makes absolutely no sense as it stands.

| The whole conceptual framework of protocols I think is best
| described by the OSI reference model.

Not at all. Only some frameworks are describable in ISORMOSI,
specifically those that actually used it while being described.

Please note that the full name is the _ISO_ Reference Model for Open
Systems Interconnection. It's their model. It is _not_ a model
that explains or can describe all Open Systems Interconnection.

See The Elements of Networking Style: And Other Essays &
Animadversions on the Art of Intercomputer Networking by
M. A. Padlipsky for a full coverage of this. ISBN 0-595-08879-1.



| TCP and IPv4 only live within the context of the data link and
| physical layer protocols that carry them.

This is false.

| All the issues of reliability, delivery and such need to be looked
| at as protocol stacks.

If you use the ISORMOSI on the Internet, you will end up with very,
very weird attitudes towards a very fine technology, sort of like
the people who believe that now that we have 8-bit byte-addressable
machines, the concept "byte" is conflatable with "octet", and then
their heads explode when other concepts of "byte" present themselves
to them.

| For those that want to learn about protocols break open the OSI spec
| on the reference model. Anybody remember the ISO spec number? I
| don't have it handy anymore.

I would recommend against ISORMOSI until you understand whence it
came, that is: the history of protocol design preceding it, both
within the CCITT/ITU and the ISO camps and without either. There
are very different cultures at work, here. That's a contributory
reason that TCP/IP won. "If ISORMOSI really was so great, how come
nobody talks their protocols?" actually applies.

| The major problem with the Telecom protocols I have found has been
| their definitions of their protocol data units which place a premium
| on reducing the number of bits in network traffic.

You _really_ want this, but perhaps you need to work with these guys
to realize that 1% more protocol overhead means 1% more money to buy
the same bandwidth, or slower deployment due to higher costs, etc.

| This makes the PDUs hard to parse.

That does not follow _at_all_.

| A great example of this, if you can get the spec is IS-634, the
| control protocol between Cellular Base Stations and Management
| systems. Whoever dreamed that up has never programmed a computer
| before.

I'm making an unsubstantiated guess here, much like you do, and
would counter that he probably does protocol implementation in
dedicated chips. So would you if you dealt with these things.

| Also the telecom protocols (at least the cellular ones) are tending
| to define the protocol state machines in SDL which allows protocol
| designers to leave gaps of what happens when error conditions occur.

I have worked a little with SDL, though that was years ago; most of
what I have worked with in the Telecom world has been described with
SDL (Z.100) or "variations" like Estelle.

| ASN.1, what can I say, another language to learn. Liking Lisp, I
| think a Lisp external syntax should be used for a platform
| independent encoding of PDUs. It takes more bytes but it transports
| well (and I can see it clearly on a data scope).

Notice that ASN.1's encoding rules are actually parsable with the
same ease as (most) Lisp objects (that is, excluding symbols, which
are a pain insofar as they overlap syntactically with numbers).

I see a very clean mapping between ASN.1 and Lisp, qua languages to
describe and encode data.
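
To make that mapping concrete, here is a sketch (the ASN.1 module and
all the Lisp names are invented for the example):

  ;; ASN.1:                            the same thing as Lisp data:
  ;;
  ;;   Call ::= SEQUENCE {             (call :calling-party "5551234"
  ;;     callingParty NumericString,         :called-party  "5556789"
  ;;     calledParty  NumericString,         :duration      42)
  ;;     duration     INTEGER }

  (defstruct call
    calling-party                       ; NumericString
    called-party                        ; NumericString
    duration)                           ; INTEGER

  (defun call->external (c)
    "A printable form that READ can take back in without a hand-written parser."
    (list 'call
          :calling-party (call-calling-party c)
          :called-party  (call-called-party c)
          :duration      (call-duration c)))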

| I am still divided on the issue of creating protocols between
| cooperative machines/processes or going for a distributed OS view
| where everything is one large memory space. I thought that is what
| CORBA is offering, one large memory space. There is room for both
| of them, a distributed OS at some lower level must implement
| protocols. Does a programmer have to see that?

If you do the global memory thing, at least figure out the need for
cache coherence and propagation protocols. Nobody I talked to about
CORBA has had the faintest clue what that would entail for CORBA,
so I gave up talking to these guys. (This was not in a Lisp
context, BTW.)

About half of what I have done over the years has had to do with
ensuring that lots of computers very far apart agree to what the
world looks like at exactly the same time. "One large memory space"
sort of covers it, since the idea is that through intelligent
protocols and very careful programming, lots of computers actually
get the same values without having to ask anyone _else_ for them.
There are several interesting products that offer some of this
functionality for databases, messages servers, etc, these days.

I'm not sure a programmer needs to see everything, but being able to
ensure that you have the information you need in time requires work,
and I fail to see how anyone but programmers should do that. Like
TCP and IP do not live quietly down in the basement, but actually
extend their services and _presence_ all the way up to the
"application layer" (and you _really_ want this, by the way), there
is no way you can avoid knowing about "the underworld" relative to
your actual interests. If you think you can avoid it, you are only
saying that you are willing to accept some set of default behaviors,
but if you don't know what they are, you are also likely to be wrong.

The belief in cleanly separated layers is almost a religion. It has
no basis in reality or experience, but there is nothing that can
make someone who believes in it stop believing in it. There's
always an answer for why it's a good idea despite all the hairy
stuff they have to deal with. Just like the whole ISORMOSI thing.
And almost like the belief in structured programming and object
orientation where the only time anyone can be trusted to use them
well is when they know when _not_ to use them.

Make that a more general assertion: If you think you have an answer
for everything, the answer is probably not right for anything. If
you have an answer for some things, the answer is probably right for
those things.

Fernando Rodríguez

unread,
Oct 31, 2000, 3:51:31 PM10/31/00
to
On Tue, 31 Oct 2000 16:53:12 +0000, Jason Trenouth <ja...@harlequin.com>
wrote:


>> > a CORBA method invocation does not necessarily involve network
>> > communication: if the client and server are collocated (reside in the
>> > same address space), the ORB can use a simple method call.
>>
>> AFAIK no common lisp ORB can collocate.
>
>Xanalys HCL ORB does co-location optimization for Common Lisp clients and
>servers within the same LispWorks process.

Sorry, but what's the need to use something like corba within a single
process? =:-O

TIA O:-)

//-----------------------------------------------
// Fernando Rodriguez Romero
//
// frr at mindless dot com
//------------------------------------------------

Erik Naggum

unread,
Oct 31, 2000, 4:43:13 PM10/31/00
to
* Fernando Rodríguez

| Sorry, but what's the need to use something like corba within a single
| process? =:-O

Well, besides the hammer, nail/thumb analogy, you might decide not
to "know" where your objects are. This could lead to all _kinds_ of
interesting performance pessimizations, just one of which is CORBA.

Marius Vollmer

unread,
Oct 31, 2000, 4:22:03 PM10/31/00
to
Erik Naggum <er...@naggum.net> writes:

> The Telecom people have probably done some of the best work there
> is in protocol design. Just take a look at how they started out
> with very low speeds, like 300 bps, but over time managed to
> squeeze 56kbps through the feeble phone lines that were not
> upgraded, then got DSL to work across that same old copper wire.

I don't know how 56kbps or ISDN etc really work, but I guess they are
more about very high modulation schemes, good channel estimations and
equalizations, error correcting codes and data compression than they
are about clever protocols.

And the old copper wires aren't probably the problem, anyway. They
should not be much worse than your ordinary 10Base-T twisted pair
ethernet wire. The problems are probably more with the filters along
the way that have been optimized for base-band speech signals.

I'm very impressed by these techniques as well, but I don't think you
can learn from them how to design a better CORBA. They are just
worlds apart.

Jon S Anthony

unread,
Oct 31, 2000, 7:09:42 PM10/31/00
to
Erik Naggum wrote:
>
> * Jon S Anthony <j...@synquiry.com>
> | The real question should be whether you believe your new protocol adds
> | real value to what it is you are producing. If you are in the
> | protocol business this is obvious. If you are in something else
> | entirely and only need to make use of protocols, it is highly unlikely
> | that the time spent on inventing yet another one will be of any real
> | value for your project/product.
>
> I disagree with this assessment. It is the same argument that can
> be made for something overly complex and badly implemented, like
> SGML. It has also been touted as solving problems that people would
> not be able to solve cheaper by themselves. That was just plain

Yes, this is the obvious counter argument, but it really doesn't
address the issue. Those involved in things like defining and
implementing SGML (or <fill in the blank>) _should_ be concerned about
"doing it right" because the result of that work _is_ their primary
value added (it _is_ their product/project).

Everyone going off and reinventing this stuff on their own, _even if
they know they can do a better job of it_, when it is not the primary
work they are involved in, and when the available offerings will not
negatively impact this primary work is just counterproductive.

This in no way suggests that it is not appropriate or sensible or even
"a good thing" for someone to focus on defining and producing better
such offerings and making them generally available.


> How much effort does it take a good system designer to come up with
> something better than XML for any _particular_ purpose? If he
> grasps the good and useful qualities of SGML, a simplified syntax,
> slightly general tools, etc, can be written in a fairly short amount
> of time, and it costs less to develop, maintain, and use than a
> full-fledged SGML system does,...

This assumes that the tradeoff analysis for use is clearly negatively
disposed towards the available offering. For XML and SGML this is
likely true for most of the suggested uses as most of the suggested
uses are simply inappropriate. Perhaps you would include CORBA as
well, but I would not.


> Precisely, but having designed a few protocols that have been in
> wide use for over a decade each, just how much effort do you think
> it is to design new ones? The reason protocol design is hard is

If it doesn't provide you any advantage, the amount of effort it would
take is irrelevant. Again, if you're interested in getting into the
protocol business, that's a different story.


> | The nice thing about using CORBA to do this is that it at least
> | eliminates all the marshalling and unmarshalling of data items
> | (primitive and structured) in a platform independent way.
>
> You know, this is _not_ a hard problem. It is a hard problem in C++
> and the like, but that is because those languages have lost every

Whether it is hard or not is irrelevant. It still needs to be done
and it still needs to be maintained and it still needs all this on
various platforms. And for what? Additionally, you then need to make
it available for use in things like C++, Java, Perl, etc., at least
this is true for the sort of positioning we need. With CORBA all of
this is already done, and it just works. Let the armies of drones
hacking at C++, Java and the rest spend their time doing this.


> | Unless your business is really line protocols or something similar,
> | rolling your own here with sockets is just a distracting waste of
> | resources.
>
> Well, I have tried both approaches. Have you?

Yes.


> | Only if you don't make use of threads in making the calls in the
> | first place. If each call is a thread, the "waiting for the
> | synchronous reply" is irrelevant. Most ORBs are multithreaded and
> | can easily handle this.
>
> Right. Not my experience. Of course, everything gets better in the
> mythical "future", so do stay with CORBA if you have other things to

Just to be clear, the multithreaded ORBs are available now and have
been for years.


> | Well we have done it at 2000 miles and the results were basically
> | "instantaneous" (well under a second). Using straight X was death.
> | Using HTTP was (of course) worse than death.
>
> How long time do you think you would have spent designing a protocol
> that would have been just as good?

Designing is only a fraction of the effort. You then need to
implement it across several platforms and make it available for use in
an effectively open ended set of languages (at the least C++ and
Java). Given what we are doing, I would guess it would have taken
well over a person year or more of resources. And for what? To get
something just as good? Probably not even as good if you factor in
such things as the IDL compilers, exceptions, and the like.

Now, an order of magnitude better (depending on how that gain is
distribted), that would be a different story, but the effort would go
up dramatically and you are then basically in the distributed object
protocol _business_ (or at least you _should_ be).

Bruce Hoult

unread,
Oct 31, 2000, 8:11:45 PM10/31/00
to
In article <87itq8l...@zagadka.ping.de>, Marius Vollmer
<m...@zagadka.ping.de> wrote:

> And the old copper wires aren't probably the problem, anyway. They
> should not be much worse than your ordinary 10Base-T twisted pair
> ethernet wire. The problems are probably more with the filters along
> the way that have been optimized for base-band speech signals.

Well there is the minor detail that 10baseT is only specced to go 100m
while ADSL has to go for several km.

Humble old AppleTalk has been quietly driving up to 2 km of phone wire
(using the Farallon "PhoneNet" adaptors) for nearly 15 years now at
230,400 bps. That's the same speed as a lot of people's budget-price
ADSL runs at, achieved using a handful of simple passive components
(transformer, resistor, capacitor) plugged into a serial port. Now of
course ADSL can do better than that -- more like 2 Mbps - 6 Mbps -- if
the telco wants you to (or you pay them enough), but it *has* had a few
more years of development and those Nokia M10's aren't cheap at all.

-- Bruce

Christopher Browne

unread,
Oct 31, 2000, 8:28:58 PM10/31/00
to
In our last episode (Tue, 31 Oct 2000 20:51:31 GMT),
the artist formerly known as Fernando Rodríguez said:
>On Tue, 31 Oct 2000 16:53:12 +0000, Jason Trenouth <ja...@harlequin.com>
>wrote:
>>> > a CORBA method invocation does not necessarily involve network
>>> > communication: if the client and server are collocated (reside in the
>>> > same address space), the ORB can use a simple method call.
>>>
>>> AFAIK no common lisp ORB can collocate.
>>
>>Xanalys HCL ORB does co-location optimization for Common Lisp clients and
>>servers within the same LispWorks process.
>
>Sorry, but what's the need to use something like corba within a
>single process? =:-O

Um. Because that provides the highest conceivable speed of access
between client and server?

I might have an application that is distributable via CORBA so that
parts can run on separate hosts _if need be_; efficient colocation
means that this results in minimal degradation of performance if the
components run on the same host.
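
What the co-location optimization amounts to, as a sketch (the names
are invented; this is the shape of the idea, not any particular ORB's
internals):

  (defclass object-reference ()
    ((servant :initarg :servant :initform nil :reader collocated-servant)
     (ior     :initarg :ior     :initform nil :reader ior))) ; for the remote case

  (defun send-giop-request (ref operation args)
    ;; stand-in for marshalling the request and shipping it over IIOP
    (format t "~&marshal ~S ~S for ~A~%" operation args (ior ref)))

  (defmethod invoke ((ref object-reference) operation &rest args)
    (let ((servant (collocated-servant ref)))  ; non-NIL only in-process
      (if servant
          (apply operation servant args)       ; plain, direct call
          (send-giop-request ref operation args))))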
--
(concatenate 'string "cbbrowne" "@" "ntlug.org")
<http://www.ntlug.org/~cbbrowne/corba.html>
C is almost a real language. (see assembler) Even the name sounds like
it's gone through an optimizing compiler. Get rid of all of those
stupid brackets and we'll talk. (see LISP)

Christopher Browne

unread,
Oct 31, 2000, 8:29:10 PM10/31/00
to
In our last episode (31 Oct 2000 18:50:58 +0000),
the artist formerly known as Erik Naggum said:
>* Lieven Marchand
>| If you want to see protocol design gone berserk, try ASN.1.

> ASN.1 is not a protocol design. It's a data description language.
> The full name is Abstract Syntax Notation #1. It does no more than
> define structures that are named and identified by mutual
> agreement. To use ASN.1, you have to choose some encoding rules,
> and there are unfortunately too many of them, but the Basic and
> Distinguished Encoding Rules are actually OK. (C programmers hate
> them, because they make life so much harder in an untyped language
> like theirs, but if you know what kinds of objects you get, as in
> dynamic types, you can easily deal with these things.)

Ah... And therein lies why I've been "a little suspicious" of CORBA
IDL; the thing that it _should_ be closest to is a "protocol
definition language," but the lack of any indication of how the
methods relate to each other makes my spider-sense tingle.

For instance, the anticipated use of the following should be clear:
interface file_reader {
    fileref open (in filename fn);
    okcode  read (in fileref f, in quantity q, out filebuf fb);
    void    close (in fileref f);
};

One would use this like:

(let ((fr (file_reader::open "/home/cbbrowne/whatever"))
      (ok_code t))
  (when fr
    (loop while ok_code
          do (setf ok_code (file_reader::read fr 240 filebuffer)))
    (file_reader::close fr)))

The thing is, it's an _obvious_ thing that there should be some form
of state machine indicating the manner in which the methods should be
called; OBVIOUSLY we start with an "open," follow that with some
"read" calls, and then end off with a "close."

That relationship, which should, for many protocols, be expressed as a
state machine, doesn't enter in at all.
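
A sketch of the missing piece, just to show how little it takes to
write down: the legal call sequence as an explicit state machine
(names invented for the example):

  (defparameter *file-reader-protocol*
    ;; state     operation   next-state
    '((:closed   open        :open)
      (:open     read        :open)
      (:open     close       :closed)))

  (defun next-state (state operation &optional (fsm *file-reader-protocol*))
    "Return the successor state, or NIL if OPERATION is illegal in STATE."
    (third (find-if (lambda (edge)
                      (and (eq (first edge) state)
                           (eq (second edge) operation)))
                    fsm)))

  ;; (next-state :closed 'open) => :OPEN
  ;; (next-state :closed 'read) => NIL  -- reading before opening is an error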

>| They haven't solved any of the real problems but they added heaps of
>| complexity in the process.
>
> I can't agree with your assessment. Would you like to explain what
> you consider the real problems they have not solved and what the
> heaps of complexity added have been?

Here's one commentary that gripes over it...
<http://www.alvestrand.no/x400/debate/asn1.html>

The really flameworthy bit:

Carl M. Ellison <c...@tis.com>:

"ASN.1 might be an interesting exercise for people who believe
LISP is the only real language or who really like to play with
abstract syntax or who like to write specs while ignoring
implementations (ie., write standards). It's *really* easy to
write structure declarations in ASN.1 -- as long as you don't try
to implement from them.

However, as one who wants computer programs to be written
efficiently and legibly (with small, easy to handle names,
allowing complete statements to be expressed in a small space)
and to have these programs communicate between machines with
different byte ordering -- and almost nothing else -- I find
ASN.1 is the *wrong* tool."

So apparently according to Ellison, Lisp must be intended for computer
programs that are to be written inefficiently, illegibly, with long,
ridiculous-to-process names, blah, blah, blah...

>| Unfortunately, it's caught on in some part of the IETF world, so I'm
>| considering writing some tools in CL to make it livable.
>
> I'm surprised that you haven't, already. ASN.1 is about typed data.
> It does not live well in a "typed variable" approach to programming.

Some Lisp-relevant links...
<http://asn1.elibel.tm.fr/fr/outils/emacs/manuel-utilisation.htm>
<http://www.switch.ch/misc/leinen/snmp/lisp/>

The latter bit includes some ASN.1-processing code...


--
(concatenate 'string "cbbrowne" "@" "ntlug.org")

<http://www.ntlug.org/~cbbrowne/lisp.html>

Christopher Browne

unread,
Oct 31, 2000, 8:29:15 PM10/31/00
to
In our last episode (31 Oct 2000 18:36:55 +0000),
the artist formerly known as Erik Naggum said:
>* Wade Humeniuk <hume...@cadvision.com>
>| They are fine protocols, but only as transport and network layer
>| protocols.
>
> What does this mean? It makes absolutely no sense as it stands.

The point is that they're not directly "application" layer protocols.

A good comparison seems to me to be that they generally represent the
"assembly language" of the networking world.

TCP/IP does indicate that it includes "application" layers that
provide _some_ of the higher level stuff; you have made some
desparaging remarks about recent RFCs that would go along with the
notion of separating "application layers" from the remainder of it...

--
(concatenate 'string "cbbrowne" "@" "acm.org")
<http://www.ntlug.org/~cbbrowne/>
Who wants to remember that escape-x-alt-control-left shift-b puts you
into super-edit-debug-compile mode? (Discussion in comp.os.linux.misc
on the intuitiveness of commands, especially Emacs.)

Erik Naggum

unread,
Oct 31, 2000, 8:51:50 PM10/31/00
to
* Marius Vollmer <m...@zagadka.ping.de>

| I don't know how 56kbps or ISDN etc really work, but I guess they are
| more about very high modulation schemes, good channel estimations and
| equalizations, error correcting codes and data compression than they
| are about clever protocols.

I didn't say "clever". I answered a question on which protocols I
thought were good and worth a study. The issues you raise are taken
care of at different layers of the protocol. The interesting aspect
for protocol designers is the _signalling_system_. That's why I
explicitly spelled out the names of SS#7 and DSS#1.

Rather than list the results of your guesswork, how about their
foundation?

| And the old copper wires aren't probably the problem, anyway. They
| should not be much worse than your ordinary 10Base-T twisted pair
| ethernet wire. The problems are probably more with the filters
| along the way that have been optimized for base-band speech signals.

This is completely false and just more random guesswork. Quit it
and talk to people who know this stuff. It's been _years_ since I
worked with this stuff, but any competent telecom engineer will be
able to tell you about the electrical qualities of those copper
wires and why that is _completely_ irrelevant to ISDN signalling.

| I'm very impressed by these techniques as well, but I don't think
| you can learn from them how to design a better CORBA. They are just
| worlds apart.

I'd like to know why you don't think so, apart from being "worlds
apart", which is just too funny when referring to the telecom world.

Incidentally, most of the interesting things appeared to be "worlds
apart" at one time or another. I believe good high-level language
programmers need to understand machine architectures more than good
low-level language programmers do, for the _reason_ that they are
worlds apart. If you don't understand what's going on on the wire,
the likelihood that you will not understand how to make a high-level
protocol work _with_ the intermediate layers is very high.

Erik Naggum

unread,
Oct 31, 2000, 9:32:55 PM10/31/00
to
* Jon S Anthony <j...@synquiry.com>
| Again, if you're interested in getting into the protocol business,
| that's a different story.

This seems to be your key argument, that there is a primary business
and lots of ancillary concerns for which it is better to use the
results of somebody else's primary business than dabble in it. I do
not agree with this for several reasons. First, if you discover
that you need better control over some ancillary concern, you may
have to make it the primary business of some person or group in your
company. Second, if you find that you cannot afford to do it on
your own, but need something different from the available offerings,
you may cause somebody else to spawn a similar new primary business,
such as a consortium. Third, you may discover that as you go about
your business, you gravitate towards certain concerns that are very
different from what you set out to do, and your primary business may
change to a previously ancillary concern, not the least because the
only way to improve your previous primary business to do something
else entirely. All of these have happened to me, and I claim that
if you're making any effort to be good at what you do, you will not
be able to tell beforehand what you will do best in.

| Designing is only a fraction of the effort.

I see that it is somehow important to you to exaggerate the costs of
"rolling your own", but I'd like to know why. It may be necessary
to defend the choice of using CORBA, but I have _already_ stated in
plain text and simple terms that if you can't do it better yourself,
by all means, stick with what somebody else did even if that is not
particularly good, so I hoped that we would have that condition
behind us, but you keep carping on this cost of not doing it better.
I fail to see the relevance in context.

I'm effectively arguing that out-doing CORBA is not that hard, but
having said

CORBA is already badly designed, so if you are new to sockets and
protocol design (which you will get yourself into), the likelihood
that you, too, will design your protocol badly is so high that it
is probably better to go with CORBA.

the rather obvious (to me, anyway) ramification is that you should
be good enough at what you do to out-do CORBA.

| You then need to implement it across several platforms and make it
| available for use in an effectively open ended set of languages (at
| the least C++ and Java). Given what we are doing, I would guess it
| would have taken well over a person year or more of resources. And
| for what? To get something just as good? Probably not even as good
| if you factor in such things as the IDL compilers, exceptions, and
| the like.

I started out overhauling a system that spent 6 seconds from end
system to end system at best, with more than 120 seconds worst case.
It was the third generation of a system I built in 1989 that then
guaranteed 2 seconds from end system to end system. It was simply
so incompetently done that it had to be rewritten. I got it down to
the old standards in the summer of 1998. To move beyond that into
the 500 ms guaranteed end system to end system transmission times,
including more and more clients on the randomly performing Internet
instead of dedicated lines with known characteristics, much higher
bandwidth and even higher transmission needs, I had months and
months of hard work cut out for me.

This stuff is not for sale to random clients as a packaged product,
and it won't be, either. It is not in my employer's interest to
sell the server side of my protocol, because that has become one of
the main reasons we're ahead of the pack. The protocol is intended
to be open to the public and a tremendous amount of work has been
put into ensuring that a client can be written in a short time, like
a week for a reasonably competent programmer regardless of language,
while the server has taken almost 18 months.

| Now, an order of magnitude better (depending on how that gain is
| distributed), that would be a different story, but the effort would
| go up dramatically and you are then basically in the distributed
| object protocol _business_ (or at least you _should_ be).

I think you have a fairly naive view of the separation between the
primary business and the ancillary concerns of an endeavor. Our
_primary_ business is delivering financial news to investors and
brokers. The protocol design became _my_ primary business when I
found that we were destined to waste a lot of time if we stuck with
off-the-shelf products, and I'm paid exceedingly well to develop,
maintain, and promulgate this protocol. This came to be because I
have managers who saw the value of my work and listened to my
concerns and honored my request to be free to work on this for as
long as I wanted. Now I can honestly say that whatever I take home
from this project is miniscule compared to what it brings in. This
was not something that could have been realized if anyone had had
the naive "primary business" view of what we intended to be good at.
Nowhere in our business plans would you find mention of what I do
for this company, because it isn't what we tell people about, and we
don't make any money from my work, we make the money _with_ my work.

Rob Warnock

unread,
Oct 31, 2000, 11:03:16 PM10/31/00
to
Erik Naggum <er...@naggum.net> wrote:
+---------------
| * Wade Humeniuk <hume...@cadvision.com>

| | The whole conceptual framework of protocols I think is best
| | described by the OSI reference model.
...

| Please note that the full name is the _ISO_ Reference Model for Open
| Systems Interconnection. It's their model. It is _not_ a model
| that explains or can describe all Open Systems Interconnection.
|
| See The Elements of Networking Style: And Other Essays &
| Animadversions on the Art of Intercomputer Networking by
| M. A. Padlipsky for a full coverage of this. ISBN 0-595-08879-1.
+---------------

Damn! You beat me to it!! ;-} ;-}

Padlipsky's book is classic ammunition for refuting ignorant/naive
ISORMOSI advocates (or as he says, ISORM, pronounced "eye sore...M").

+---------------


| | TCP and IPv4 only live within the context of the data link and
| | physical layer protocols that carry them.
|
| This is false.

+---------------

Indeed. As Padlipsky says:

If you know what you're doing, three levels is
enough; if you don't, even seventeen won't help.

Granted, the ARPANET (now IETF) Reference Model *did* eventually grow
in practice from three to four levels, since it was recognized that it
was useful to have a common "Protocols/Services" layer separate from
Applications. So now the Internet protocol architecture's four levels
are (per "Figure 1" of RFC 791):

Applications
Protocols/Services
Internetwork Protocol (incl. ICMP)
Local Network

ISO needlessly complicated things by breaking up Applications into
Applications & Presentation (as if presentation could even be separated
from the application context -- hah!), breaking Protocols into Session
and Transport, and breaking Local Net into Data Link & Physical. But the
802.3 & FDDI Standards Committees didn't stop *there* -- they further
subdivided the Physical into MAC, PHY, and PMD (stealing some of Data
Link for the MAC layer), and now we have even *more* sub-layers, such
as "AUI" and "MII".

+---------------


| | All the issues of reliability, delivery and such need to be looked
| | at as protocol stacks.
|
| If you use the ISORMOSI on the Internet, you will end up with very,
| very weird attitudes towards a very fine technology...
+---------------

As Dave Clark's classic paper "Modularity & Efficiency" addresses.
Or as Padlipsky puts it:

Layering makes a good servant but a bad master.

+---------------


| | For those that want to learn about protocols break open the OSI spec
| | on the reference model.
|

| I would recommend against ISORMOSI until you understand whence it
| came, that is: the history of protocol design preceding it, both
| within the CCITT/ITU and the ISO camps and without either.

+---------------

Well, as I heard the story, there were seven strong personalities who
each wanted their own sub-committee, so -- guess what? -- they ended up
with seven layers! (Surprise, surprise.)

+---------------


| TCP and IP do not live quietly down in the basement, but actually
| extend their services and _presence_ all the way up to the
| "application layer" (and you _really_ want this, by the way), there
| is no way you can avoid knowing about "the underworld" relative to
| your actual interests.

+---------------

In the very early days, some of you may recall, the protocol implementations
*did* live in user mode, but that later became unfashionable for "security"
and (mistaken) "efficiency" reasons. However, this is now getting a revival
with today's higher-speed network links. See, for example:

<URL:http://www.cl.cam.ac.uk/~iap10/gige.ps>
"Arsenic: A User-Accessible Gigabit Ethernet Interface"
Ian Pratt (University of Cambridge Computer Laboratory).

"Arsenic" is a way of moving the IP/UDP/TCP functions back up into
user-mode, *without* sacrificing inter-user security, so that applications
can access the data directly, where the network interface dropped it,
with no copying.

We do something very similar with our MPI-over-"O/S Bypass"-over-STP-
over-GSN. User programs on one system can exchange MPI messages with
user programs on other systems at a very high rate *without* doing
system calls or causing any interrupts!! Yes, applications (safely,
in a protected way) talking *directly* to the hardware!! To paraphrase
Dogbert, "I wave my paw, Bah! You layer demons, be gone!"

MPI-over-Myrinet or -Giganet does something similar as well.

[Yes, some people (such as SGI) implemented (and shipped) "zero-copy DMA"
protocol stacks before, but they usually depended on the network MTU being
a multiple of the host operating system's page size... which is becoming a
problem on systems which have shifted to larger page sizes for efficiency.]

We're going to *have* to embrace this, guys! 10-gigabit/sec Ethernet is right
around the corner (hardware demos are already being shown), and we're going
to *need* some clever "layer breaking" to provide full 10GbE performance.

+---------------


| The belief in cleanly separated layers is almost a religion. It has
| no basis in reality or experience, but there is nothing that can
| make someone who believes in it stop believing in it.

+---------------

Again, people like Dave Clark & Mike Padlipsky & even Van Jacobson (his
papers on the "Witless" interface, as well as TCP "header prediction")
blew this religion away years ago, but its adherents never got the message.
(They never do seem to. *sigh*)


-Rob

-----
Rob Warnock, 31-2-510 rp...@sgi.com
Network Engineering http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043

Wade Humeniuk

unread,
Oct 31, 2000, 11:28:01 PM10/31/00
to
Erik Naggum wrote:

>
> About half of what I have done over the years has had to do with
> ensuring that lots of computers very far apart agree to what the
> world looks like at exactly the same time. "One large memory space"
> sort of covers it, since the idea is that through intelligent
> protocols and very careful programming, lots of computers actually
> get the same values without having to ask anyone _else_ for them.
> There are several interesting products that offer some of this
> functionality for databases, messages servers, etc, these days.

The only one I ever read about is Amoeba. A research distributed OS.
Any others?

> The belief in cleanly separated layers is almost a religion. It has
> no basis in reality or experience, but there is nothing that can
> make someone who believes in it stop believing in it. There's
> always an answer for why it's a good idea despite all the hairy
> stuff they have to deal with. Just like the whole ISORMOSI thing.
> And almost like the belief in structured programming and object
> orientation where the only time anyone can be trusted to use them
> well is when they know when _not_ to use them.
>

ISORMOSI (first time I've seen this acronym!) for me still has a place.
The example I use is that when I design some software the written design
in a way has to lie. I have to omit details to have clarity, so I can
discuss it with other software developers (and have some aesthetic
appeal). I will usually try to design a protocol which follows a
layered model, because I have to apply some technique to get where I am
going. If I find problems with the layering I can try to resolve them
when I get there, but I have to start somewhere (with the _faith_ that I
will get there). One of the weird parts I find about life is that I
seem to need flawed (religious? dogmatic?) views of the world to
approach the truth. I guess it's called learning. Is there a way out of
that morass?

Just some more words, a description of something can never be the thing.

As for the triumph of TCP/IP over ISORMOSI. I think it was things like:

Unix
FTP being simpler than FTAM
SNMP being simpler than CMIP
HTTP
the Session and Presentation Layers in OSI
TCP/IP was mostly American
random blind chance
and should I say "Simpler is Better"

that killed ISORMOSI (may it rest in layers).

BTW, isn't ISORMOSI still kicking in Europe?

Wade

Espen Vestre

unread,
Nov 1, 2000, 4:01:09 AM11/1/00
to
Wade Humeniuk <hume...@cadvision.com> writes:

> BTW, isn't ISORMOSI still kicking in Europe?

Well, if you count death cramps (sp?) as 'kicking'...
--
(espen)

Raymond Wiker

unread,
Nov 1, 2000, 4:12:27 AM11/1/00
to
Espen Vestre <espen@*do-not-spam-me*.vestre.net> writes:

Maybe he meant "getting a kicking"...

--
Raymond Wiker
Raymon...@fast.no

Tim Bradshaw

unread,
Nov 1, 2000, 5:06:38 AM11/1/00
to
Wade Humeniuk <hume...@cadvision.com> writes:

>
> The major problem with the Telecom protocols I have found has been their
> definitions of their protocol data units which place a premium on
> reducing the number of bits in network traffic. This makes the PDUs
> hard to parse. A great example of this, if you can get the spec is
> IS-634, the control protocol between Cellular Base Stations and
> Management systems. Whoever dreamed that up has never programmed a
> computer before. Also the telecom protocols (at least the cellular
> ones) are tending to define the protocol state machines in SDL which
> allows protocol designers to leave gaps of what happens when error
> conditions occur.
>

It seems to me that telecom protocols, especially wireless ones, may
well have been designed by people who were acutely aware that
bandwidth is scarce and expensive, and compute cycles and memory,
even in a mobile phone, are cheap and plentiful.

--tim

Jason Trenouth

unread,
Nov 1, 2000, 5:15:12 AM11/1/00
to
On Wed, 01 Nov 2000 01:28:58 GMT, cbbr...@news.hex.net (Christopher Browne)
wrote:

> In our last episode (Tue, 31 Oct 2000 20:51:31 GMT),
> the artist formerly known as Fernando Rodríguez said:
> >On Tue, 31 Oct 2000 16:53:12 +0000, Jason Trenouth <ja...@harlequin.com>
> >wrote:
> >>> > a CORBA method invocation does not necessarily involve network
> >>> > communication: if the client and server are collocated (reside in the
> >>> > same address space), the ORB can use a simple method call.
> >>>
> >>> AFAIK no common lisp ORB can collocate.
> >>
> >>Xanalys HCL ORB does co-location optimization for Common Lisp clients and
> >>servers within the same LispWorks process.
> >
> >Sorry, but what's the need to use something like corba within a
> >single process? =:-O
>
> Um. Because that provides the highest conceivable speed of access
> between client and server?
>
> I might have an application that is distributable via CORBA so that
> parts can run on separate hosts _if need be_; efficient colocation
> means that this results in minimal degradation of performance if the
> components run on the same host.

Indeed. Some folks are so gloomy and pessimistic by nature that they assume
co-location optimization means you lose when running remotely instead of
thinking that you win when running locally. :-j

__Jason

Philip Lijnzaad

unread,
Nov 1, 2000, 5:30:13 AM11/1/00
to

Erik> you might decide not to "know" where your objects are. This could
Erik> lead to all _kinds_ of interesting performance pessimizations, just
Erik> one of which is CORBA.

Are you suggesting that there is a performance bottleneck in that you have to
know where (host,port or so) they are in order to use CORBA objects? This is,
of course, not true. If this thing is a concern, you would typically look up,
by name (or other) the 'where' of the real objects using some service
(Naming/Trading or so). Moreover, the resolving of a CORBA object reference
can result in a LocationForward by its server. This all happens transparently
to clients, since I believe even CORBA 1.0.

Lastly, a CORBA object reference can actually contain multiple locations
(and/or, incidentally, multiple protocols), offering fault tolerance options
(the standard for this is I believe being finalized; I haven't followed it).

Some of the criticism that I see raised against CORBA derives from the fact
that people don't appreciate the kinds of things that CORBA solves. When you
look closely at it, no other technology (cgi, RMI, SOAP, DCOM) comes close to
offering its functionality. Just to be explicit about this:

- language independence
- platform/vendor independence
- location independence
- network protocol independence
- separation of 'distributed objects' from their implementations
- security model
- location forwarding
- on-demand launching of servers
- fault-tolerance capabilities
- async messaging
- footprint
- performance

and there's prolly a few things I overlooked.

Christopher Browne

unread,
Nov 1, 2000, 9:47:44 AM11/1/00
to
Centuries ago, Nostradamus foresaw a time when Philip Lijnzaad would say:

>Erik> you might decide not to "know" where your objects are. This could
>Erik> lead to all _kinds_ of interesting performance pessimizations, just
>Erik> one of which is CORBA.
>
>Are you suggesting that there is a performance bottleneck in that you have to
>know where (host,port or so) they are in order to use CORBA objects? This is,
>of course, not true. If this thing is a concern, you would typically look up,
>by name (or other) the 'where' of the real objects using some service
>(Naming/Trading or so). Moreover, the resolving of a CORBA object reference
>can result in a LocationForward by its server. This all happens transparently
>to clients, since I believe even CORBA 1.0.
>
>Lastly, a CORBA object reference can actually contain multiple locations
>(and/or, incidentally, multiple protocols), offering fault tolerance options
>(the standard for this is I believe being finalized; I haven't followed it).

"May contain" and "standard still being finalized" and "not nearly
ubiquitously-implemented" adds up to vapourware...

>Some of the criticism that I see raised against CORBA derives from the fact
>that people don't appreciate the kinds of things that CORBA solves. When you
>look closely at it, no other technology (cgi, RMI, SOAP, DCOM) comes close to
>offering its functionality. Just to be explicit about this:
>
> - language independence
> - platform/vendor independence
> - location independence
> - network protocol independence
> - separation of 'distributed objects' from their implementations

All of which are well and good, and probably pretty widely implemented.

> - security model

... Where the last book I looked at on the topic basically did a
"marketing overview" 250 pages long that was certainly not suggestive of
it being reasonable to use it...

> - location forwarding
> - on-demand launching of servers

... Which isn't part of the standard, which means that everyone sets up
a different registry for the launchers...

> - fault-tolerance capabilities

... Probably inherent to any competent implementation of a
distributed system...

> - async messaging

... AMI isn't yet ubiquitous; I'm not sure it is actually implemented
yet ...

> - footprint
> - performance

Pretty much any system of any kind has a "footprint" and has some
performance characteristics.

>and there's prolly a few things I overlooked.

But many of these are only characteristics of a few specific CORBA
implementations; some may not yet be true of _any_ implementation, and
still others fall outside what CORBA speaks to, so it's not fair to call
all of them actual characteristics "of CORBA."
--
aa...@freenet.carleton.ca - <http://www.hex.net/~cbbrowne/corba.html>
History of Epistemology in One Lesson
"First Hume said "We can't really know anything", but nobody believed
him, including Hume. Then Popper said "Hume was right, but here's what
you can do instead...". Bartley then debugged Popper's code."
-- Mark Miller

Erik Naggum

unread,
Nov 1, 2000, 8:47:01 AM11/1/00
to
* Erik Naggum
| ... you might decide not to "know" where your objects are. This
| could lead to all _kinds_ of interesting performance pessimizations,
| just one of which is CORBA.

I thought this was an obvious tongue-in-cheek comment.

* Philip Lijnzaad <lijn...@ebi.ac.uk>


| Are you suggesting that there is a performance bottleneck in that
| you have to know where (host,port or so) they are in order to use
| CORBA objects?

No, I am "suggesting" that if you make a decision not to know the
location of your objects, you open up for a number of ways to waste
time and resources, relative to knowing where they are, obviously.
You are most probably not arguing that there is _no_ performance
cost in using COBRA compared to a direct in-memory object reference,
so I am a little uncertain what you are actually thinking of and
responding to.

| Some of the criticism that I see raised against CORBA derives from the
| fact that people don't appreciate the kinds of things that CORBA
| solves.

That could also be true, but in my case, I do not care for the kinds
of things COBRA does to accomplish what it does solve. I don't have
anything against what CORBA tries to solve, which is all very smart
and very good and all that.

| When you look closely at it, no other technology (cgi, RMI, SOAP,
| DCOM) comes close to offering its functionality.

This looks like marketing to me, with fairly automatic responses,
and I have to reopen my ears to continue to read what you're saying.

| Just to be explicit about this:
|
| - language independence
| - platform/vendor independence
| - location independence
| - network protocol independence
| - separation of 'distributed objects' from their implementations
| - security model
| - location forwarding
| - on-demand launching of servers
| - fault-tolerance capabilities
| - async messaging
| - footprint
| - performance
|
| and there's prolly a few things I overlooked.

Thanks for the list of features. Do you see a list of misfeatures
or at least problems that would impel someone _not_ to use CORBA if
they wanted some of these features?

Erik Naggum

unread,
Nov 1, 2000, 9:47:36 AM11/1/00
to
* Wade Humeniuk <hume...@cadvision.com>

| ISORMOSI (first time I've seen this acronym!) for me still has a
| place.

I have used it to describe a protocol that was very messy and that
definition was the basis for a clean reimplementation, so I won't
knock the model too hard, but it doesn't work for the Internet, and
one must be acutely aware of when to break that model. Sometimes
you need a lift in that 7-story high-rise.

| I will usually try to design a protocol which follows a layered
| model, because I have to apply some technique to get where I am
| going.

Protocol layering is as hard as doing class inheritance right. The
interesting thing is that you can start with the lowest layer and
have a slot called "payload" that a subclass would interpret with
sub-slots, or you could start with the highest layer and add slots
in subclasses as you move down the stack. As long as the needs of a
class are dictated from the outside, this is pretty easy. It gets
real difficult to find the One True Layering when you move your own
functionality around as you experience changing needs.
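
The first of those two shapes, as a sketch (invented classes, not a
real protocol stack):

  (defclass frame ()
    ((payload :initarg :payload :accessor payload)))  ; raw octets, uninterpreted here

  (defclass ip-packet (frame)
    ((source      :initarg :source      :accessor source)
     (destination :initarg :destination :accessor destination)))

  (defgeneric interpret-payload (frame)
    (:documentation "Parse PAYLOAD one layer further up the stack."))

  (defmethod interpret-payload ((p ip-packet))
    ;; here PAYLOAD would be handed to a TCP or UDP parser chosen by
    ;; the protocol field; elided in this sketch
    (payload p))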

| One of the weird parts I find about life is that I seem to
| need flawed (religious? dogmatic?) views of the world to approach
| the truth. I guess it's called learning. Is there a way out of that
| morass?

Apart from dying? I don't think so. :)

| As for the triumph of TCP/IP over ISORMOSI. I think it was things like:
|
| Unix
| FTP being simpler than FTAM
| SNMP being simpler than CMIP
| HTTP
| the Session and Presentation Layers in OSI
| TCP/IP was mostly American
| random blind chance
| and should I say "Simpler is Better"
|
| that killed ISORMOSI (may it rest in layers).
|
| BTW, isn't ISORMOSI still kicking in Europe?

Nah, even the OSI Profiles (like GOSIP) have moved to TCP/IP, but
there are still some large commercial X.400 software vendors and
service providers. Some believe that EDI needs X.400 to work.

I used to study X.400. I made a guess that I would spend 10 years
writing a fully compliant mail system based on X.400. Then I wrote
a fully compliant SMTP-based mail system in three weeks, added MIME
stuff experimentally (while I was still contributing to that work)
and figured that if you had to spend more than a man-year on a mail
system, you'd need to make it a highly successful mass product, and
that was very unlikely to happen to anything as long as decent and
simple mail systems were available essentially for free. I don't
think I was too far from how people with real money, resources, and
vested interests were thinking. Today, we have the Evil Behemoth
doing about 10% of what X.400 offered, and they still don't comply
with the necessary RFCs, so chances are they can't even _read_ the
X.400 specification.

The WWW idea hit the world with unprecedented force. It's a crying
shame that HTTP and HTML had such staggeringly idiotic designs, and
still do. If I were Tim Berners-Lee, I'd blame someone else for it,
so I guess he couldn't find anyone who would accept that. But I
digress.

Philip Lijnzaad

unread,
Nov 1, 2000, 10:46:38 AM11/1/00
to

Erik> * Erik Naggum
Erik> | ... you might decide not to "know" where your objects are. This
Erik> | could lead to all _kinds_ of interesting performance pessimizations,
Erik> | just one of which is CORBA.

Erik> I thought this was an obvious tongue-in-cheek comment.

not to me, I'm afraid. It looked like you "suggested" that one of the
problems with CORBA is performance if/when you don't exactly know where your
objects are.

Erik> | - language independence
Erik> | - platform/vendor independence
Erik> | - location independence
Erik> | - network protocol independence
Erik> | - separation of 'distributed objects' from their implementations
Erik> | - security model
Erik> | - location forwarding
Erik> | - on-demand launching of servers
Erik> | - fault-tolerance capabilities
Erik> | - async messaging
Erik> | - footprint
Erik> | - performance
Erik> |
Erik> | and there's prolly a few things I overlooked.

Erik> Thanks for the list of features. Do you see a list of misfeatures
Erik> or at least problems that would impel someone _not_ to use CORBA if
Erik> they wanted some of these features?

Good question. There are two main problems: (perceived) complexity and
learning curve of the thing (relative to cgi and rmi and SOAP), and more
importantly: lack of widely available (different) implementations of all the
useful extra standards such as security, messaging, etc. Lesser problems
are the overhead (use sockets if you need real high-speed), and I think
CORBA is prolly not yet mature enough in the transaction monitor area. So,
for high speed transaction processing, CORBA may not be the right
technology.

Lastly, the standardization process is a bit long-winded, but that
doesn't say much about the technology.

John M. Adams

unread,
Nov 1, 2000, 10:37:46 AM11/1/00
to
Fernando Rodríguez <spa...@must.die> writes:

> Hi!
>
> I have to comunicate a lisp process with a c++ app and I was
> considering corba or using sockets. Both are rather new to me, so wich one
> would you recommend and why? O:-)

If the communication required is pretty simple, use sockets for two
reasons.

1) Corba has a large learning curve, complicates builds, is harder to
debug in complex scenarios, and makes you a lot more vulnerable to
vendor bugs. It can make sense if you are using it as a framework for
many interoperable applications which need to interact in diverse
ways.

2) If you don't understand sockets, you are going to have a hard time
reasoning about the operational behavior of your system. Doing at
least a prototype using plain sockets will give you a sound basis for
understanding the salient qualities of a particular corba
implementation.
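
For the record, the plain-sockets route really is small. A minimal
client sketch, assuming the usocket portability layer (with a
particular Lisp you would use its own socket package instead; the
host, port, and request line are made up):

  ;; load usocket first, e.g. (ql:quickload "usocket") under Quicklisp
  (defun ask-server (host port request)
    "Send one line to HOST:PORT and return the server's one-line reply."
    (usocket:with-client-socket (socket stream host port)
      (write-line request stream)
      (force-output stream)
      (read-line stream)))

  ;; (ask-server "localhost" 7000 "GET-QUOTE OSLO:NHY")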

--
John M. Adams

Philip Lijnzaad

unread,
Nov 1, 2000, 11:01:14 AM11/1/00
to

>> Some of the criticism that I see raised against CORBA derives from the fact
>> that people don't appreciate the kinds of things that CORBA solves. When you
>> look closely at it, no other technology (cgi, RMI, SOAP, DCOM) comes close to
>> offering its functionality. Just to be explicit about this:
>>
>> - language independence
>> - platform/vendor independence
>> - location independence
>> - network protocol independence
>> - separation of 'distributed objects' from their implementations

Christopher> All of which are well and good, and probably pretty widely
Christopher> implemented.

uh ... not really. Network protocol independence doesn't seem to be so, but
then again it doesn't seem to matter. The vendor independence is usually but
not always OK.

>> - security model

Christopher> ... Where the last book I looked at on the topic basically did a
Christopher> "marketing overview" 250 pages long that was certainly not
Christopher> suggestive of it being reasonable to use it...

No, I know, and it's a big pity really.

>> - on-demand launching of servers

Christopher> ... Which isn't part of the standard, which means that everyone
Christopher> sets up a different registry for the launchers...

you're right; on-demand launching should have been listed under the location
forwarding: you can have auto-launching thanks to this feature, but the
solutions are (probably necessarily) implementation-specific.

>> - fault-tolerance capabilities

Christopher> ... Probably inherent to any competent implementation of a
Christopher> distributed system...

cgi? SOAP? DCOM? rmi? They can be made fault-tolerant by hand; the point is
that (OK, OK, once the fault tolerance thing is finalized and implemented ...),
this thing will be available out of the box.

>> - async messaging

Christopher> ... AMI isn't yet ubiquitous; I'm not sure it is actually
Christopher> implemented yet ...

I thought Iona has, or has announced it; not sure.

>> - footprint
>> - performance

Christopher> Pretty much any system of any kind has a "footprint" and has some
Christopher> performance characteristics.

:-) Again, this is meant relative to other technologies. ORBExpress claims an
ORB runtime of 150kByte. I think this compares favourably with the average
XML parser package or httpd process. Same thing for speed: XML or text/plain
is just slower to parse than a binary datastream of which you know the layout.

>> and there's prolly a few things I overlooked.

Christopher> But many of these are only characteristics of a few specific
Christopher> CORBA implementations; some may not yet be true of _any_
Christopher> implementation, and still others fall outside what CORBA speaks
Christopher> to, so it's not fair to call all of them actual characteristics
Christopher> "of CORBA."

yes, I agree, I'm sorry if I appeared to be claiming that.

Lieven Marchand

unread,
Oct 31, 2000, 4:13:18 PM10/31/00
to
Rainer Joswig <jos...@corporate-world.lisp.de> writes:

> In article <m3n1fmh...@localhost.localdomain>, Lieven Marchand
> <m...@bewoner.dma.be> wrote:
>
> > Erik Naggum <er...@naggum.net> writes:
> >
> > > * Eric Marsden <emar...@mail.dotcom.fr>
> > > | this is no longer true.
> > >
> > > I appreciate the update.
> > >
> > > I _still_ think CORBA sucks, though.
> >
> > If you want to see protocol design gone berserk, try ASN.1. They
> > haven't solved any of the real problems but they added heaps of
> > complexity in the process. Unfortunately, it's caught on in some part
> > of the IETF world, so I'm considering writing some tools in CL to make
> > it livable.
>

> Have you looked at http://www.switch.ch/misc/leinen/snmp/sysman.html ?

Yes, I'm aware of it. It only does the low end BER
encoding/decoding. I was thinking of a real ASN.1 compiler, going from
PDU to functions to encode and decode. There are some fairly thorny
issues there.

--
Lieven Marchand <m...@bewoner.dma.be>
Lambda calculus - Call us a mad club

Erik Naggum

unread,
Nov 1, 2000, 12:25:36 PM11/1/00
to
* Philip Lijnzaad <lijn...@ebi.ac.uk>

| XML or text/plain is just slower to parse than a binary datastream
| of which you know the layout.

This claim is not at all supported by the evidence. A binary data
stream that claims to be general, and is not just a dump of bytes
from memory, is just as expensive to parse as text/plain and
can quickly become _slower_ if you do it wrong and need a lot of
overhead to overcome the inherent problems of binary representation.
The arguments for binary datastreams are space and bandwidth, _not_
time to process.
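
And the text side of that trade-off costs almost nothing in Lisp; a
sketch using only the standard printer and reader (the PDU shown is
made up):

  (defun marshal (pdu)
    (with-standard-io-syntax
      (prin1-to-string pdu)))

  (defun unmarshal (string)
    (with-standard-io-syntax
      (let ((*read-eval* nil))          ; never evaluate incoming data
        (values (read-from-string string)))))

  ;; (unmarshal (marshal '(quote-update "OSLO:NHY" 162.5 (2000 10 30))))
  ;;   => (QUOTE-UPDATE "OSLO:NHY" 162.5 (2000 10 30))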

Eric Marsden

unread,
Nov 1, 2000, 12:35:05 PM11/1/00
to
>>>>> "cb" == Christopher Browne <cbbr...@salesman.brownes.org> writes:
>>>>> "pl" == Philip Lijnzaad <lijn...@ebi.ac.uk> writes:

pl> Lastly, a CORBA object reference can actually contain multiple
pl> locations (and/or, incidentally, multiple protocols), offering
pl> fault tolerance options (the standard for this is I believe
pl> being finalized; I haven't followed it).

cb> "May contain" and "standard still being finalized" and "not
cb> nearly ubiquitously-implemented" adds up to vapourware...

the CORBA fault tolerance specification was published in January.

<URL:http://cgi.omg.org/cgi-bin/doc?ptc/00-04-04>


cb> ... AMI isn't yet ubiquitous; I'm not sure it is actually
cb> implemented yet ...

AMI is implemented by TAO and Orbix2000 for C++, at least.

--
Eric Marsden <URL:http://www.laas.fr/~emarsden/>

Wade Humeniuk

unread,
Nov 1, 2000, 2:51:35 PM11/1/00
to
Tim Bradshaw wrote:

>
> It seems to me that telecom protocols, especially wireless ones, may
> well have been designed by people who were acutely aware that
> bandwidth is scarce and expensive, and compute cycles and memory,
> even in a mobile phone, are cheap and plentiful.
>
> --tim

Yes they are, but that argument is suspiciously like the one made when
hardware resources like memory and disk space were scarce. It has taken
me a long time to get over the size of Lisp executables because of that
conditioning. Save space, save cycles and optimize, optimize,
optimize. I am going to predict that communications bandwidth will
become less and less of an issue as time goes by and eventually we will
get tired of "transmitting the sum of all of man's computer data every
few seconds". Maybe its not such a good thing to get into bad habits
now.

My feeling is that the real key to reducing network traffic is to make the
protocols "higher level" and thus say more with less.

I was at a seminar about ten years ago put on by a member of Sun
Microsystems' Corba team. He discussed bandwidth considerations but
brushed them off as "orthogonal problems" (the same for network
management, et al.).

As to the question of Corba or sockets, go with what attracts you; once
you have implemented the solution you will actually know how to
do it. :) :)

Wade

Tim Bradshaw

unread,
Nov 1, 2000, 5:42:33 PM11/1/00
to
* Wade Humeniuk wrote:


> Yes they are, but that argument is suspiciously like the one made when
> hardware resources like memory and disk space were scarce. It has taken
> me a long time to get over the size of Lisp executables because of that
> conditioning. Save space, save cycles and optimize, optimize,
> optimize. I am going to predict that communications bandwidth will
> become less and less of an issue as time goes by and eventually we will
> get tired of "transmitting the sum of all of man's computer data every
> few seconds". Maybe its not such a good thing to get into bad habits
> now.

It may be `suspiciously like it' but I don't think it's the same. At
least where I live (in the UK) it's reasonably apparent that a huge
amount of telecoms is going to end up going over wireless links[1].
There is just a finite amount of radio bandwidth, and nothing is ever
going to increase it. You can effectively increase it by reducing
cell sizes but that's not a cure-all since base-stations are expensive
and a nuisance.

And bandwidth is *expensive* -- the European telcos are in the process
of spending over $100 billion on licenses for more bandwidth alone,
with the total expenditure on licenses and hardware estimated at more
than $300 billion over the next few years. That's really a lot of
money.

And it's also not really the case that memory is cheap for instance,
although the PDP11 mindset I referred to elsewhere would lead you to
believe it is. It's cheap *if you don't want to access it*: if you do
want to access it you soon discover that you have a small amount of
memory you can actually get at quickly, and a much larger amount which
is depressingly far away from the CPU. If you have a decent machine,
it may be able to deliver stuff in the quantities the CPU needs, but
the latency is still severe. And this situation will not ever get
better. And lo and behold there are seriously hairy algorithms to
deal with this problem, implemented mostly in hardware.

Similar things go for network bandwidth -- wire-based bandwidth may be
plentiful, but latencies are long and getting longer (light travels
0.3 m in one cycle for a 1GHz chip).
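
To put rough numbers on it (the memory latency below is just an
illustrative figure, not a measurement):

  ;; Back-of-the-envelope only; both constants are assumptions.
  (let* ((c 3e8)                 ; speed of light, m/s
         (clock-hz 1e9)          ; 1 GHz clock
         (dram-latency 60e-9))   ; assume ~60 ns to main memory
    (list :metres-per-cycle (/ c clock-hz)                       ; => 0.3
          :cycles-per-memory-access (* dram-latency clock-hz)))  ; => 60.0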

--tim


Footnotes:
[1] I'm not a particularly advanced user, and I already hardly use
my landline since mobiles are cheaper and more convenient for
me. Indeed I'm typing this over a mobile link from my hotel
room: although mobile data connections are definitely not
serious yet, it's sufficiently cheaper than hotel phone bills
that I do it anyway.

Boris Schaefer

unread,
Nov 1, 2000, 8:33:03 PM11/1/00
to
Tim Bradshaw <t...@tfeb.org> writes:

| I think this is yet another one of those cases where most software
| people (clearly not Erik) are living in the world of the PDP11.
| Hardware people spend all their time dealing with these issues --
| just look at the design of a modern processor, which is dealing
| exactly this kind of thing.

Well, for some time now, I have put off studying some processor
designs. Since I have never before studied a processor design, I would
be glad if you could recommend a processor that's modern and that
someone with no (at least not very much) previous experience in this
area can understand. I'd also be glad for some literature
recommendations in this area.

Thanks in advance,
Boris

--
bo...@uncommon-sense.net - <http://www.uncommon-sense.net/>

Many aligators will be slain,
but the swamp will remain.

Erik Naggum

unread,
Nov 1, 2000, 9:18:15 PM11/1/00
to
* Boris Schaefer <bo...@uncommon-sense.net>

| Well, for some time now, I have put off studying some processor
| designs. Since I have never before studied a processor design, I would
| be glad if you could recommend a processor that's modern and that
| someone with no (at least not very much) previous experience in this
| area can understand. I'd also be glad for some literature
| recommendations in this area.

Just about nothing beats David A. Patterson and John L. Hennessy's
seminal works "Computer Architecture (a quantitative approach)" and
"Computer Organization and Design (the hardware/software interface)".

Read them before you pick up, say, the processor reference manuals
for the Intel Pentium III. (Despite the braindamaged instruction
set and register model, the internals are amazingly brilliant.)

Bruce Hoult

unread,
Nov 1, 2000, 9:56:43 PM11/1/00
to
In article <87d7gf1...@qiwi.uncommon-sense.net>, Boris Schaefer
<bo...@uncommon-sense.net> wrote:

> Tim Bradshaw <t...@tfeb.org> writes:
>
> | I think this is yet another one of those cases where most software
> | people (clearly not Erik) are living in the world of the PDP11.
> | Hardware people spend all their time dealing with these issues --
> | just look at the design of a modern processor, which is dealing
> | exactly this kind of thing.
>
> Well, for some time now, I have put off studying some processor
> designs. Since I have never before studied a processor design, I would
> be glad if you could recommend a processor that's modern and that
> someone with no (at least not very much) previous experience in this
> area can understand. I'd also be glad for some literature
> recommendations in this area.

That's easy, study any modern RISC design, such as PowerPC or ARM or
Alpha or MIPS (I'd stay away from SPARC).

For books, it's really easy to decide on a recommendation:

Computer Organization and Design:
the hardware/software interface

<http://www.amazon.com/exec/obidos/ASIN/1558604286>

It covers everything you need to know, all focussed around the MIPS ISA.
It looks at gates and hardware design, instruction sets,
assembly-language programming. It presents a series of
increasingly-sophisticated possible implementations of the MIPS
architecture, with the first ones being simple enough that you could
actually go and build them yourself using TTL chips (or just model them).

You can't beat it and these guys *know* what they're talking about.

Hennessy, btw, just got a big promotion at Stanford...

-- Bruce

Rob Warnock

unread,
Nov 2, 2000, 4:42:48 AM11/2/00
to
Erik Naggum <er...@naggum.net> wrote:
+---------------
| The WWW idea hit the world with unprecedented force. It's a crying
| shame that HTTP and HTML had such staggeringly idiotic designs...
+---------------

The main two issues I had with HTTP/1.0 (both "fixed" now, except
that the fixes came too late to be ubiquitous in clients, servers,
and especially, proxies), were that the protocol did not provide
the *entire* URL to the server, specifically the server's full
domain name (fixed, somewhat hackily, with the HTTP/1.1 "Host:"
header), and the separate-TCP-connection-per-object ugliness
(fixed with HTTP/1.1's "Connection: Keep-Alive").

Besides those two, what other serious issues do you have with HTTP?

Tim Bradshaw

unread,
Nov 2, 2000, 5:33:58 AM11/2/00
to
Tim Bradshaw <t...@cley.com> writes:

[Stuff]

Something else I thought of last night is that the people who design
mobile phone protocols are designing them for the hardware they want
to sell, not some hypothetical future hardware where the costs have
changed. And bandwidth is *not* plentiful at present. Only fools and
software people (have I specified more than one group of people
here?) think it's reasonable to try and sell a product which will
only work well using hardware no-one has yet.

(Actually, I *have* specified more than one group: software people are
obviously not fools because they *succeed* in selling these products.)

--tim

Tim Bradshaw

unread,
Nov 2, 2000, 5:43:24 AM11/2/00
to
Boris Schaefer <bo...@uncommon-sense.net> writes:

> Well, for some time now, I have put off studying some processor
> designs. Since I have never before studied a processor design, I would
> be glad if you could recommend a processor that's modern and that
> someone with no (at least not very much) previous experience in this
> area can understand. I'd also be glad for some literature
> recommendations in this area.
>

I agree with Erik's recommendation of the books by Hennessy &
Patterson: these are classics. If you can get the *first* edition of
`Computer Architecture: a quantitative approach' it is, in my opinion,
worth reading as well as the 2nd, as it has a lot of stuff on things
like the IBM 360 and the Vax which were omitted (for reasons of space
I think) from the 2nd. These machines may seem like ancient history,
but it's hard to overestimate how important they are (the 360 as an
example of getting things mostly right, and the Vax as an example of
getting things mostly wrong): learning about them is like learning
about classical dynamics for a physicist.

(In fact, if anyone has a 1st ed they don't want in the UK, mail me,
as I borrowed mine and the person who has it won't sell it to me...)

--tim

William Deakin

unread,
Nov 2, 2000, 7:09:09 AM11/2/00
to
Tim wrote:
> (Actually, I *have* specified more than one group: software people are
> obviously not fools because they *succeed* in selling these products.)
Just because you're not a fool doesn't stop you from being stupid...

:) will

Tim Bradshaw

unread,
Nov 2, 2000, 7:16:37 AM11/2/00
to
William Deakin <w.de...@pindar.com> writes:

> Just because you're not a fool doesn't stop you from being stupid...

It doesn't. But I think it's actually quite smart to be able to sell
as much non-working software as gets sold...

--tim (who has just installed windows 2000, and been absolutely
astonished that it seems to be better than either 98 or NT4 was to
install, although it definitely does require HW I haven't got yet. A
shame, as I'd been sharpening my angle-grinder specially.)

Wade Humeniuk

unread,
Nov 2, 2000, 11:09:40 AM11/2/00
to


Yes, I agree they do that; they are not fools, but who is?

I will digress for a moment...

When I mentioned the spec IS-634 as an example of something difficult to
implement, it was because I actually had to do it. This protocol is a
supervisory protocol, not the actual digital data stream that carries
the encoded voice. The voice encoding protocols have to be efficient, and
thus TDMA can carry 3 or more digital voice channels in every analog
voice channel. This is without a doubt necessary. However, the same
approach is taken with supervisory protocols to set up calls to the PSTN
(Public Switched Telephone Network) and to do hand-offs and such. IS-634
travels over wires and is not the protocol between the cell phone and
the Base Station. IS-634 was designed to hide that the call setup,
hand-off and other administrative stuff is actually AMPS, TDMA or CDMA.
It was designed to be a layer to hide all of that. Of course, trying to be
all things to all people, it became big, and then they threw in this
difficult definition for the PDUs. I am not sure anyone actually
understood the spec, and there certainly was a lot of disagreement when
we asked for clarification on the protocol's state behavior from an MS
vendor. I am not positive, but the spec was about 500 pages (maybe more;
anyway, one BIG binder). At the time all I can remember is being peeved
about complexity that I thought was unnecessary, and finding it
demoralizing.

Now to link this into Lisp...

One of the reasons I have decided that I will use Lisp is that my
perception is that there is less complexity, even though intellectually
I know this cannot be true. By sticking with Lisp techniques, say in
designing a protocol with Lisp external syntax, I feel that I can
understand it better because it is expressed better. Many times I find
myself translating a protocol or a computer algorithm into Lisp to
analyze its operation thoroughly. Deep done I think by doing this I can
actually get to understand what the designer was _intending_. Then I
can actually implement it because I know what needs to be accomplished.
Most of the time the thickness of the spec hides some very simple
points.
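
For instance, a PDU in Lisp external syntax needs nothing beyond READ
and PRIN1; the message and field names below are invented, purely for
illustration:

  (defun write-pdu (pdu stream)
    (with-standard-io-syntax
      (prin1 pdu stream)
      (terpri stream)))

  (defun read-pdu (stream)
    (with-standard-io-syntax
      (let ((*read-eval* nil))    ; never evaluate data off the wire
        (read stream))))

  ;; e.g. a hypothetical call-setup message:
  ;; (:call-setup :call-id 42 :caller "5551234" :callee "5559876"
  ;;  :bearer :speech)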

Wade

Jon S Anthony

unread,
Nov 2, 2000, 5:42:39 PM11/2/00
to
Erik Naggum wrote:
>
> * Jon S Anthony <j...@synquiry.com>
> | Again, if you're interested in getting into the protocol business,
> | that's a different story.
>
> This seems to be your key argument, that there is a primary business
> and lots of ancillary concerns for which it is better to use the
> results of somebody else's primary business than dabble in it.

This is incorrect. You left out the crucial bit concerning the
tradeoff of effectiveness of the potential offering for what you need.
As I stated:

Everyone going off and reinventing this stuff on their own, _even
if they know they can do a better job of it_, when it is not the
primary work they are involved in, AND WHEN THE AVAILABLE
OFFERINGS WILL NOT NEGATIVELY IMPACT THIS PRIMARY WORK is just
counter productive. (emphasis added this time...)

You either didn't understand this part before or for some reason
actually believe that even with this piece considered the point made
still makes little to no sense.


> | Designing is only a fraction of the effort.
>
> I see that it is somehow important to you to exaggerate the costs of
> "rolling your own", but I'd like to know why.

Incorrect - see above.


> I have _already_ stated in plain text and simple terms that if you
> can't do it better yourself, by all means, stick with what
> somebody else did even if that is not particularly good,

Yes, but this is _irrelevant_ as it does not address the _actual_ point!


> I'm effectively arguing that out-doing CORBA is not that hard, but

And I'm arguing that this makes no difference in most cases, i.e.,
those as specified above.


> I started out overhauling a system that spent 6 seconds from end
> system to end system at best, with more than 120 seconds worst case.
> It was the third generation of a system I built in 1989 that then
> guaranteed 2 seconds from end system to end system. It was simply
> so incompetently done that it had to be rewritten. I got it down to
> the old standards in the summer of 1998. To move beyond that into
> the 500 ms guaranteed end system to end system transmission times,
> including more and more clients on the randomly performing Internet
> instead of dedicated lines with known characteristics, much higher
> bandwidth and even higher transmission needs, I had months and
> months of hard work cut out for me.

Sounds perfectly believable, and this actually is anecdotal evidence
in support of the point I am making (to do a significantly "better"
job is definitely non-trivial and thus had better make a significant
difference for what you need).


> This stuff is not for sale to random clients as a packaged product,
> and it won't be, either. It is not in my employer's interest to
> sell the server side of my protocol, because that has become one of
> the main reasons we're ahead of the pack. The protocol is intended

Fine, but I fail to see the relevance of this as it clearly indicates
that _for your particular case(s)_ you believe (or know) that the
effort put into building this ancillary piece was a positive tradeoff
("one of the main reasons we're ahead of the pack").

> I think you have a fairly naive view of the separation between the
> primary business and the ancillary concerns of an endeavor. Our

Well, obviously I disagree. What is more, our investors would
disagree and that is clearly important for us (and to me - more or
less). I believe my view of analyzing the tradeoffs of "build vs buy"
in this area and making choices based on firm technical requirements
and value added is exactly the correct way to proceed in such cases.


> long as I wanted. Now I can honestly say that whatever I take
> home from this project is miniscule compared to what it brings in.
> This was not something that could have been realized if anyone had
> had the naive "primary business" view of what we intended to be
> good at. Nowhere in our business plans would you find mention of
> what I do for this company, because it isn't what we tell people
> about, and we don't make any money from my work, we make the money
> _with_ my work.

Perhaps a major difference here is that your business is not product
centered/oriented, whereas ours is.


/Jon

--
Jon Anthony
Synquiry Technologies, Ltd. Belmont, MA 02478, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Jon S Anthony

unread,
Nov 2, 2000, 6:08:14 PM11/2/00
to
John Adams wrote:
>
> 1) Corba has a large learning curve, complicates builds, is harder to
> debug in complex scenarios,

This is just plain nonsense. In practice (just using it to actually do
something) it is about as simple as it gets. In CL it is amazingly
simple. You can easily put systems together in an afternoon - the
hardest part is implementing the methods - something you will need to
do no matter what you use.


> and makes you alot more vulnerable to vendor bugs.

In contrast to your own? And what about all the other vendor bugs you
are vulnerable to? Perhaps you just roll your own CL implementation
also.


> It can make sense if you are using it as a framework for many
> interoperable applications which need to interact in diverse ways.

Yes, it can also work just fine for simple things, but for such
really simple stuff it may well not be cost-effective. It would
depend - for now only commercial versions for CL provide a reasonably
full implementation. I don't know how complete or capable CLORB
currently is. For things like Java, C++, Perl, etc. there are loads
of free ones which work very well.


> Doing at least a prototype using plain sockets will give you a sound
> basis for understanding the salient qualities of a particular corba
> implementation.

Yes, but if all you want to do is get something done, why do you care
about the implementation details in the first place? How is this
really any different from thinking one should "prototype a (lisp,
java, ...) compiler" in order to gain a sound understanding of an
implementation before using it? Does this really make sense for the
general case???

Tim Bradshaw

unread,
Nov 2, 2000, 5:40:56 PM11/2/00
to
* Wade Humeniuk wrote:

> IS-634 travels over wires and is not the protocol between the cell
> phone and the Base Station.

I apologise, I misread your original message, and assumed it was an air
protocol and so was perhaps unduly flippant.

I still think that the issues I mentioned are fairly valid in general,
but obviously this protocol could be just badly designed and you are
in a much better position to know that than I.

--tim

Wade Humeniuk

unread,
Nov 2, 2000, 7:46:08 PM11/2/00
to
Thanks, but no apology was needed.

Wade

Erik Naggum

unread,
Nov 2, 2000, 9:17:46 PM11/2/00
to
* Jon S Anthony <j...@synquiry.com>
| You either didn't understand this part before or for some reason
| actually believe that even with this piece considered the point made
| still makes little to no sense.

I considered it a fairly unintelligent point to make. It is a given
that we do not set out to waste resources. If you have to argue
against people who do, that is certainly not my problem, but I find
it oddly amusing to watch people who make such assumptions about
others, as the only place this acquired stupidity can grow is where
there is utter disregard for technical decisions.

| I believe my view of analyzing the tradeoffs of "build vs buy" in
| this area and makeing choices based on firm technical requirements
| and value added is exactly the correct way to proceed in such cases.

Of course. The interesting question is, however, what would make
you change your mind, not what makes you convinced you are right.
For all I know or care, you could be psychologically impelled to
defend your choice and work hard to rationalize it after the fact.

| Perhaps a major difference here is that your business is not product
| centered/oriented, whereas ours is.

Another fairly unintelligent point. The question is which products,
if this makes an interesting distinction, which I don't think it does.

I think the explanation for our differences in view (and the reason
you are not really listening) is that you seem to be venture capital
funded, while we're making and spending our own money and can afford
research that does not contribute to the first quarter bottom line.
This is one of the reasons I'm working for a very solid company and
have rejected golden-edged job offers from venture capital-funded
pies in the sky. I don't need bonuses or stock options to enjoy my
work here, and I do see the hoping for that big cash hand-out as a
huge misfeature where it is offered the employees. I do not play
the lottery or bet on horses, either. We have a product that sells
well, has significant growth potential, and I'm free to do whatever
I think can contribute to that growth, including going away for a
year to do weird stuff that they trust me implicitly to be good for
the company, work on and with Common Lisp, etc. Consequently, I
find your attitude both condescending and ignorant at the same time.

We live in a time when information technology is seen as magic by
people who have only figured out that there is gold somewhere, but
not how to find it or encourage anybody to find it. I have extreme
distaste for venture capital in such an ignorant climate. If you
are indeed venture capital-funded and have chosen CORBA because you
think you'll avoid some expenditures that would scare your investors
and leave you stranded, I'd have to put your analysis in the "play
it safe" category, rather than "play it intelligently, long range",
and I am no longer impressed with your choice of CORBA and what you
claim it does for you.

Simply put: I have come to doubt that you have the resources to make
intelligent choices, but have to make choices based on insufficient
data and cannot afford to go wrong. Instead of being able to afford
to go wrong with your own money, a venture capitalist keeps a lot of
people on a taut leash and while he can afford to go wrong quite
often to make it big on a few, each of his "puppies" have to show
early signs of success or be wasted. This isn't _investment_ in my
view, it's playing the lottery with other people's jobs and brains.
Lotteries don't do it for me. I find no thrill in accidents.

Jon S Anthony

unread,
Nov 3, 2000, 3:00:00 AM11/3/00
to
Erik Naggum wrote:
>
> * Jon S Anthony <j...@synquiry.com>
> | You either didn't understand this part before or for some reason
> | actually believe that even with this piece considered the point made
> | still makes little to no sense.
>
> I considered it a fairly unintelligent point to make. It is a given

OK, but why?


> that we do not set out to waste resources. If you have to argue
> against people who do, that is certainly not my problem, but I find

I have in no way suggested this, so I don't see the relevance.
Certainly it has nothing to do with the given point, so it is unclear
why you bring it up. This _looks_ for all the world like you are
saying that "if you chose Corba, then _apriori_ you could not have
done so as part of any analysis of the tecnical context and thus must
be victimized by all these dysfunctional things". That kind of
position is simply irrational.


> it oddly amusing to watch people who make such assumptions about
> others, as the only place this acquired stupidity can grow is
> where there is utter disregard for technical decisions.

I haven't made any such assumptions, or is this just a general comment
about such situations?


> | I believe my view of analyzing the tradeoffs of "build vs buy" in
> | this area and makeing choices based on firm technical requirements
> | and value added is exactly the correct way to proceed in such cases.
>
> Of course. The interesting question is, however, what would make
> you change your mind, not what makes you convinced you are right.

An analysis which showed that there would be any noticeable benefit
to us for the effort expended and the incurred distraction from the
many things that we do need to achieve. What else?


> for all I know or care, you could be psychologically impelled to
> defend your choice and work hard to rationalize it after the fact.

No. Of course one could always claim that this is merely a belief on
my part, but that's not a particularly productive position.


> | Perhaps a major difference here is that your business is not product
> | centered/oriented, whereas ours is.
>
> Another fairly unintelligent point. The question is which products,
> if this makes an interesting distinction, which I don't think it does.

Incorrect. There is a very clear distinction between a business model
which centers on producing product for sale and one which focuses on
work for hire or services. The comment, really a question on second
look, stemmed from my impression that yours centered on the latter
while ours does center on the former. So the idea of "which products"
is a category error. This is not a big deal as both models are
perfectly legitimate, but they can produce different perspectives on
the issues we were discussing.


> I think the explanation for our differences in view (and the reason
> you are not really listening) is that you seem to be venture capital
> funded, while we're making and spending our own money and can afford
> research that does not contribute to the first quarter bottom line.

Incorrect - as we also make and spend our own money. No, the
difference is where we believe it is most important to spend that
money. You seem to be saying that it should be spent on ancillary
items even when those items will not produce any true positive gain in
the value of the resulting product.


> This is one of the reasons I'm working for a very solid company and
> have rejected golden-edged job offers from venture capital-funded

Well, we have been around for 5 years now, and have produced all of
our offerings _before_ we got investors (which was only just
recently). And it was during that period that we did the analysis and
chose Corba. It is also the period where we chose Common Lisp. So
your "analysis" is simply dead wrong.


> the company, work on and with Common Lisp, etc. Consequently, I
> find your attitude both condescending and ignorant at the same time.

Fine, but from where I sit, this stems from some barrier on your part
to get beyond some preconceptions of your own.


> distaste for venture capital in such an ignorant climate. If you
> are indeed venture capital-funded and have chosen CORBA because you
> think you'll avoid some expenditures tha would scare your investors

Is this at all clearer for you now? The scenario you paint is just
plain in the weeds.


> Simply put: I have come to doubt that you have the resources to
> make intelligent choices, but have to make choices based on
> insufficient data and cannot afford to go wrong.

Is it all clearer for you now that this is just plain wrong???

Boris Schaefer

unread,
Nov 2, 2000, 8:15:51 PM11/2/00
to
Tim Bradshaw <t...@tfeb.org> writes:

| I agree with Erik's recommendation of the books by Hennessy &
| Patterson: these are classics.

Thank you, and thanks to Bruce and Erik as well. I picked up
_Computer Architecture: A Quantitative Approach_ today (for about a
year now a local bookstore has had one copy that no one bought and which I
almost bought several times already just because it looked interesting,
without knowing it's a classic (they also have a copy of the German
translation of SICP for more than a year now that, as well, no one's
buying)).

| (In fact, if anyone has a 1st ed they don't want in the UK, mail me,
| as I borrowed mine and the person who has it won't sell it to me...)

If you don't mind ordering it used and from the US, you should check
out www.abebooks.com, I took a look today and they have it for about
$20 or $30, IIRC.

Boris

Decaffeinated coffee? Just Say No.

Christopher Browne

unread,
Nov 3, 2000, 9:07:52 PM11/3/00
to
In our last episode (03 Nov 2000 02:17:46 +0000),
the artist formerly known as Erik Naggum said:
> This is one of the reasons I'm working for a very solid company and
> have rejected golden-edged job offers from venture capital-funded
> pies in the sky. I don't need bonuses or stock options to enjoy my
> work here, and I do see the hoping for that big cash hand-out as a
> huge misfeature where it is offered the employees.

Just one complaint: I suspect you misspelled the word "vulture" here.
:-)
--
(concatenate 'string "aa454" "@" "freenet.carleton.ca")
<http://www.hex.net/~cbbrowne/>
REALITY is a crutch for people who can't face ITS.

David Bakhash

unread,
Nov 4, 2000, 3:00:00 AM11/4/00
to
Erik Naggum <er...@naggum.net> writes:

> * Paolo Amoroso <amo...@mclink.it>
> | Could you please mention a few examples of well designed protocols
> | that are worth studying? Thanks in advance.
>
> The Telecom people have probably done some of the best work there
> is in protocol design. Just take a look at how they started out
> with very low speeds, like 300 bps, but over time managed to
> squeeze 56kbps through the feeble phone lines that were not
> upgraded, then got DSL to work across that same old copper wire.
> Impresses me, anyway.

I'm not sure I follow this, and I'd like to understand more.

So they started out with 300 bps, and eventually got it up to 56K
bps. Almost 200% increase. Okay. I agree that's impressive.

But what does this have to do with the original protocol? It seems to
have more to do with the channel capacity, and maybe even how fast the
electronic interfaces were at different points along the chain.

Is the impressive part that, despite the massive increase in data
rate, that the same protocol is used, without breaking down with
timing issues, etc.?

Also, is it fair to say that they did a good job on an absolute scale?
For example, I think it's kinda hard to mess things up for a 300 bps
channel, depending on how noisy it is, but assuming a reasonable SNR.
Do they deserve to be commended just because they made it up to 56K
bps? TCP/IP handles datarates of 10M bps, about the same factor over
56K bps as 56K bps is to 300 bps.

Judging protocols is hard business. Protocols are often designed to
promise a certain amount of data integrity (usually very high). If
one person designed a protocol that optimized the hell out of the
particular situation to achieve unbeatable performance, then should
his protocol be deemed better or worse than one which wasn't as
efficient, but when the channel specifications change (e.g. got
faster), didn't break down?

I think a lot of it has to do with what the optimization metric is.
It's probably not that much simpler to judge a protocol than to judge
a person's overall intelligence. Or maybe it's just me. There are
certainly a lot of criteria.

dave

Erik Naggum

unread,
Nov 4, 2000, 3:00:00 AM11/4/00
to
* Jon S Anthony <j...@synquiry.com>
| That kind of position is simply irrational.

I'd like to know what I have done to you to force you into that
corner where all you can do is respond as if your worst-case
scenarios for why people hold the opinions they do _must_ be true,
without any room for doubt or alternatives. If I have not done
anything to you to force you into such a position, could you please
get yourself out of that self-defensive corner and try to deal with
people as if they aren't dangerous to you? If you choose to spend
your time defending yourself, I will take no part in it.

What would happen if this discussion should prove that CORBA was a
bad move for you? What would happen if this discussion should prove
that CORBA would have been a good choice for me? Nothing, in either
case, unless we decided _not_ to act on the new knowledge. _That_
would be illoyal to your employers. Until then, acting out of fear
that you might be proven wrong is what _is_ irrational.

What _is_ clear to me now is that you have zero respect for other
people and differing opinions regardless of how they arrived at
them, but I attribute this to your need to engage in self-defense,
which must be external to the discussion, so I'd just like you to be
able to _listen_ to what people are telling you before you respond
to them. Until you have listened, there's no point in reading what
you say in response, because I don't know what you respond to.

Erik Naggum

unread,
Nov 4, 2000, 3:00:00 AM11/4/00
to
* David Bakhash <ca...@alum.mit.edu>

| So they started out with 300 bps, and eventually got it up to 56K
| bps. Almost 200% increase. Okay. I agree that's impressive.

Huh? (/ (* 100 (- 56000 300)) 300) => 18566% increase in my book.

I fail to understand why giving increases in percent makes sense
when the factor of increase is greater than 2 (100% increase), but I
notice people are talking about something that went from 1 to 10 as
1000% increase, when it is clearly 900%, but they also call it a
"ten-fold increase", which is correct. Confusing multiplicative and
additive increases is much too easy when computing with percentages.
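
A quick sketch of the distinction:

  (defun increase-factor (old new)
    (/ new old))

  (defun percent-increase (old new)
    (* 100 (/ (- new old) old)))

  ;; (increase-factor 300 56000)  => 560/3    ; a factor of about 187
  ;; (percent-increase 300 56000) => 55700/3  ; about 18566.7%
  ;; (percent-increase 1 10)      => 900      ; ten-fold = 900% increase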

| Is the impressive part that, despite the massive increase in data
| rate, that the same protocol is used, without breaking down with
| timing issues, etc.?

The impressive part is their ability to make technology out of
science. I am one of those few people who are impressed by science
and technology as such. Most people seem to be more impressed by
sports achievements and Harry Potter (either the books or the sales).

| Also, is it fair to say that they did a good job on an absolute
| scale?

Yes.

| For example, I think it's kinda hard to mess things up for a 300 bps
| channel, depending on how noisy it is, but assuming a reasonable SNR.

Yes, it _is_ kinda hard, but it _was_ pretty good when they did it.

| Do they deserve to be commended just because they made it up to 56K
| bps?

They have "made it" it up to 10Mbps.

| TCP/IP handles datarates of 10M bps, about the same factor over 56K
| bps as 56K bps is to 300 bps.

TCP is a transport protocol. IP is a network protocol. Modems do
not transport or network, they simply move bits across a wire.
Incidentally, TCP/IP works pretty well on gigabit networks, too,
although the amount of data on a wire with a high bandwidth*latency
product is _staggering_, which causes the window size in TCP to be a
major problem if the connection is even slightly lossy. Therefore,
gigabit TCP introduces the same kinds of redundancy that the Telecom
people added to their T3/E3 data link protocols (154Mbps) and above,
and which have been used in comparatively low-bandwidth satellite
communications for ages, namely a pretty good time distance between
the data and its error recovery signalling that makes it possible to
recover from outage stretches of up to 10 ms and sometimes more.
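
To get a feel for the scale of the problem (the link speeds and
round-trip times below are assumed figures, for illustration only):

  (defun bandwidth-delay-product (bits-per-second rtt-seconds)
    "Bytes that must be in flight, unacknowledged, to keep the pipe full."
    (/ (* bits-per-second rtt-seconds) 8))

  ;; (bandwidth-delay-product 1e9 50e-3)  => 6250000.0 ; ~6 MB at 1 Gbps, 50 ms RTT
  ;; (bandwidth-delay-product 56e3 50e-3) => 350.0     ; a modem barely notices
  ;; The unscaled TCP window tops out at 65535 bytes, hence the
  ;; window-scaling option of RFC 1323 on links like the first one.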

| Judging protocols is hard business.

Yes, almost as hard as reading the specifications.

| Protocols are often designed to promise a certain amount of data
| integrity (usually very high). If one person designed a protocol
| that optimized the hell out of the particular situation to acheive
| unbeatable performance, then should his protocol be deemed better or
| worse than one which wasn't as efficient, but when the channel
| specifications change (e.g. got faster), didn't break down?

The funny thing about the real world is that you actually have to
deploy _something_. If you deploy, say, copper wire, with certain
electrical characteristics all through your service area and there
is solid international agreement on those characteristics, you know
that you have deployed copper wire with those characteristics and
you can tell your customers that they can buy telephones, modems,
faxes, whatever, according to those specifications. We do not live
in a world of magic (much TV and Harry Potter to the contrary), so
those characteristics are not going to change until somebody goes
out there and deploys some new cable at the cost of hundreds of
billions of dollars. This is why it is very good engineering and
very intelligent use of the science and technologies available to
manage DSL on the same kind of wire that used to carry 300 bps only
20 years earlier. I come from a family of engineers on both sides,
and despite my university education, I _still_ think highly of the
art of engineering, which is precisely that of managing to use the
pre-existing physical conditions ever better and more accurately.
Squeezing 56kbps out of a twisted copper wire laid 50 years ago in
some cases indicates forethought and good engineering 50 years ago
and good engineering today. I take even more genuine pleasure in
dealing with both the people and their work when such competence and
caring is evident, than I do disdain and disgust when dealing with
people who display incompetence and carelessness, such as on USENET.

| I think a lot of it has to do with what the optimization metric is.
| It's probably not that much simpler to judge a protocol than to
| judge a person's overall intelligence. Or maybe it's just me.
| There are certainly a lot of criteria.

How well you deal with the real world is a good criterion in both
assessment processes if you ask me. Getting lost in wishes for a
world that is easier to live in warns _me_ of low intelligence.
Judging what is done in the physical world according to what would
have been great in a dream world is also a pretty good sign that
someone is not firing on all plugs. But I, too, work in software
because the real world of physics and materials science is too hard
for me to excel in. That's part of why I'm easily impressed by the
people who manage to keep the electricity grid of a whole city up
and running while conditioning separately synced mega-volt feeds.
On the other hand, it is probably because I try to understand the
destructive forces of nature that I find this impressive. Those who
think nature is kind just because it is predictable have a hard time
getting impressed with those who harness it and actually predict it.

Nils Goesche

unread,
Nov 4, 2000, 3:00:00 AM11/4/00
to
David Bakhash <ca...@alum.mit.edu> writes:

> Erik Naggum <er...@naggum.net> writes:
>
> > * Paolo Amoroso <amo...@mclink.it>
> > | Could you please mention a few examples of well designed protocols
> > | that are worth studying? Thanks in advance.
> >
> > The Telecom people have probably done some of the best work there
> > is in protocol design. Just take a look at how they started out
> > with very low speeds, like 300 bps, but over time managed to
> > squeeze 56kbps through the feeble phone lines that were not
> > upgraded, then got DSL to work across that same old copper wire.
> > Impresses me, anyway.
>
> I'm not sure I follow this, and I'd like to understand more.
>

> So they started out with 300 bps, and eventually got it up to 56K
> bps. Almost 200% increase. Okay. I agree that's impressive.

Actually, it's almost an 18567% increase.

> But what does this have to do with the original protocol? It seems to
> have more to do with the channel capacity, and maybe even how fast the
> electronic interfaces were at different points along the chain.
>

> Is the impressive part that, despite the massive increase in data
> rate, that the same protocol is used, without breaking down with
> timing issues, etc.?
>

> Also, is it fair to say that they did a good job on an absolute scale?
> For example, I think it's kinda hard to mess things up for a 300 bps
> channel, depending on how noisy it is, but assuming a reasonable SNR.
> Do they deserve to be commended just because they made it up to 56K
> bps? TCP/IP handles datarates of 10M bps, about the same factor over
> 56K bps as 56K bps is to 300 bps.

TCP/IP doesn't care about the data rate, and it can be much higher
than 10M bps (ever heard of 100 MBit ethernet?). Broadband ISDN (ATM)
is even faster.

> Judging protocols is hard business. Protocols are often designed to
> promise a certain amount of data integrity (usually very high). If
> one person designed a protocol that optimized the hell out of the
> particular situation to achieve unbeatable performance, then should
> his protocol be deemed better or worse than one which wasn't as
> efficient, but when the channel specifications change (e.g. got
> faster), didn't break down?
>

> I think a lot of it has to do with what the optimization metric is.
> It's probably not that much simpler to judge a protocol than to judge
> a person's overall intelligence. Or maybe it's just me. There are
> certainly a lot of criteria.

I can't imagine anyone who wouldn't be impressed by the massive
standards for the ISDN signalling protocols (I mean DSS1.
Unfortunately, I don't know SS7). It is a bit annoying when you spend
much time with these and then some SW guy comes around and seriously
believes that his tiny little text protocols over TCP like HTTP are
much more difficult than your ``trivial bit stuffing algorithms''.
(Actually, at least for the DSS1 layer 3, Lisp would be a great help.
Maybe sometime in the future :-).
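
To be fair, the bit stuffing part really is small. A sketch of the
HDLC-style rule (insert a 0 after five consecutive 1 bits so the
01111110 flag can never occur inside a frame; illustrative only, not
lifted from any particular stack):

  (defun bit-stuff (bits)
    "BITS is a list of 0s and 1s; returns the stuffed list."
    (let ((ones 0) (out '()))
      (dolist (b bits (nreverse out))
        (push b out)
        (if (= b 1)
            (when (= (incf ones) 5)
              (push 0 out)
              (setf ones 0))
            (setf ones 0)))))

  ;; (bit-stuff '(0 1 1 1 1 1 1 0)) => (0 1 1 1 1 1 0 1 0)

The hard part of DSS1 is the layer 3 state machinery, not this.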
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

Tim Bradshaw

unread,
Nov 4, 2000, 3:00:00 AM11/4/00
to
* David Bakhash wrote:

> So they started out with 300 bps, and eventually got it up to 56K
> bps. Almost 200% increase. Okay. I agree that's impressive.

that's almost a *factor* of 200, not almost 200%. And they are now
running DSL at considerably higher rates than this over the same old
copper.

--tim

Rob Warnock

unread,
Nov 5, 2000, 3:00:00 AM11/5/00
to
Nils Goesche <nils.g...@anylinx.de> wrote:
+---------------
| David Bakhash <ca...@alum.mit.edu> writes:
| > TCP/IP handles datarates of 10M bps, about the same factor over
| > 56K bps as 56K bps is to 300 bps.
|
| TCP/IP doesn't care about the data rate, and it can be much higher
| than 10M bps (ever heard of 100 MBit ethernet?).
+---------------

Or gigabit Ethernet (1000BASE-SX or 1000BASE-T)?? Our systems do
over 650 Mbit/sec of TCP over GbE, user-mode to user-mode, with a
single TCP connection. [Over 800 Mbit/sec, if you allow "jumbo frames",
that is, 9000 byte MTUs, but that's non-standard.]

And 10 GbE is just about to pop out of the gate (already starting to
be demonstrated by a few vendors). But AFAIK there aren't any full-speed
host NIC interfaces for 10GbE yet, so for now the fastest TCP I know of
is still ours, at ~2.0 Gbit/sec (on a 6.4 Gbit/sec GSN link). [If we use
STP instead of TCP we can get ~6 Gbit/sec, user to user, single socket,
but we're talking about TCP, so I won't mention that further...]

+---------------
| Broadband ISDN (ATM) is even faster.
+---------------

Well, sure, all the way up to OC-192c/ATM, but there aren't very many
host NICs at that speed. On the other hand, OC-12c/ATM NICs are fairly
common, and you can get ~500 Mbit/sec of TCP out of a well-tuned OC-12c/ATM
implementation, but I'm afraid GbE has already passed it by.

And at the higher speeds, people are dropping the ATM encapsulation and
just doing "packets over SONET" (POS). In fact, OC-192c/POS is even one
of the PHY/PMD options for the proposed 10GbE standard...

Jon S Anthony

unread,
Nov 6, 2000, 3:00:00 AM11/6/00
to
> Erik Naggum wrote:
>
> * Jon S Anthony <j...@synquiry.com>
> | That kind of position is simply irrational.
>
> I'd like to know what I have done to you to force you into that
> ...

I'm unclear on how you can get this out of what I wrote:

I have in no way suggested this, so I don't see the relevance.
Certainly it has nothing to do with the given point, so it is
unclear why you bring it up. This _looks_ for all the world like
you are saying that "if you chose Corba, then _a priori_ you could
not have done so as part of any analysis of the technical context
and thus must be victimized by all these dysfunctional things".

That kind of position is simply irrational.

As you can see I clearly stated that this _looks_ (with an implied "to
me", which I admit may not have been clear) like you are claiming that
any such choice is wrong a priori. That doesn't say or in any way
imply that you were or are _actually_ saying this. What I was and am
asking for was some clarity of your position on this with some sort of
_rationale_ for it.


> aren't dangerous to you? If you choose to spend your time defending
> yourself, I will take no part in it.

I'm not defending _anything_, much less myself, or my choices. I'm
simply stating matters of fact concerning _what I actually did to make
the choice_. That's it.


> What would happen if this discussion should prove that CORBA was a
> bad move for you?

I would seriously consider how to go about changing to the proposed
alternative. I have already stated so in not so many words. Why do you
have such a difficult time believing this?


> What would happen if this discussion should prove that CORBA would
> have been a good choice for me?

I don't know and I don't care.


> decided _not_ to act on the new knowledge. _That_ would be disloyal
> to your employers. Until then, acting out of fear that you might be
> proven wrong is what _is_ irrational.

Criminey - _I_ am basically my employer; this is still true even with
our new investment. And since I have already stated (at least twice
now) that strong evidence showing my choice to be incorrect would be
taken seriously, your comment here is _clearly_ irrelevant and devoid
of content.


> What _is_ clear to me now is that you have zero respect for other
> people and differing opinions regardless of how they arrived at them,

Incorrect. I am more than willing to believe, and have in no way
indicated otherwise, that you made exactly the correct choice based on
sound technical analysis for your situation. The only thing I have
also said is that I did the same, even though we both arrived at
different positions. This is _not_ a big deal.


> must be external to the discussion, so I'd just like you to be able to
> _listen_ to what people are telling you before you respond to them.

I think this is sound advice and clearly goes both ways.


> response, because I don't know what you respond to.

I respond to clearly stated positions that take into account all sides
of an argument. That there is evidence of understanding of the
various positions and at least a modicum of respect for them. The
positions may even be mutually exclusive but that does not invalidate
any of them outright or a priori. The positions should have some
decent rationale (which can be as simple as anecdotal evidence) which
is clearly stated.

I don't think that's asking for too much - do you?

Erik Naggum

unread,
Nov 6, 2000, 9:24:18 PM11/6/00
to
* Jon S Anthony <j...@synquiry.com>
| I'm unclear on how you can get this out of what I wrote:

Of course you are unclear on that, as you _still_ defend yourself.
What makes you think I got it out of what you _quote_ that you wrote?
Why quote it? To what end is that a rational course of action? It
is obviously rational in a self-defense position, but I have a hard
time imagining any other rational explanations for this.

| I'm not defending _anything_, much less myself, or my choices.

Jon, please. You're doing it right there. Just observe yourself.

| I would seriously consider how to go about changing to the proposed
| alternative. I have already stated so in not so many words. Why do you
| have such a difficult time believing this?

Because you reject everything I say before you understand it. I'm
sure you think you've been where I've been so you don't need to
listen to what I say because you think you already know what I'm
saying, but that is why I have taken this to a meta-discussion. You
need to stop thinking you know your opponent's position and you very
much need to stop defending your own position and especially yourself
-- otherwise you have no chance of _ever_ listening. If you don't
believe that this is evident in your behavior, at least _LISTEN_ to
the fact that someone reads you that way. Wonder why, at least,
don't just reject it immediately because you don't understand it,
which is exactly what you do with the technical questions, as well.

| > What would happen if this discussion should prove that CORBA would
| > have been a good choice for me?
|
| I don't know and I don't care.

You really do have a hard time in general, don't you, not just with
this discussion? I was trying to strike a balance by asking that
question both ways. You seem to want this balance when you need to
strike back, but when the balance is there, you reject it. This is
_not_ a good sign of mental health, Jon. Stop defending yourself.
You are _not_ under attack, OK?

| > What _is_ clear to me now is that you have zero respect for other
| > people and differing opinions regardless of how they arrived at them,
|
| Incorrect.

It is _incorrect_ that that is clear to me? *boggle* How the fuck
do you _know_ what is clear to me or not? You _disagree_, Jon. If
you might begin to understand that your _disagreement_ in conclusion
is something very different from terming somebody else's conclusions
from all the available evidence to _them_ as "incorrect", you might
have something to say worth listening to, but as long as you mix up
_correctness_ with people's opinions and conclusions, you're just
too stupid to waste any time on. I'm trying very hard to make you
understand that you do something here that tells me that what you
have come to conclude is probably not trustworthy at all, and you
_should_ take that as a sign that you need to do something different
to increase that trustworthiness, not work very hard to destroy any
remains of it.

| I think this is sound advice and clearly goes both ways.

Your need to make it apply back to me is a very good indicator of a
personality in need of defending itself, _especially_ when you
reject when others hand you a "goes both ways" just a few paragraphs
back. Ask just about any shrink or psychologist about this if you
have trouble understanding it. I'm not going to elaborate, as you
don't really _listen_ to what I tell you, no matter what I say.
People who hold up a mirror and say "clearly goes both ways" have
shut down the prerequisite brain activity to listen long ago.

| I respond to clearly stated positions that take into account all
| sides of an argument.

Then you are clearly delusional, too, and your conclusions must be
treated in accordance with such an assessment of your ability to
observe yourself in action. This doesn't make CORBA bad, it only
means that your decision to use it is completely worthless as any
form of testimony to its usefulness or the soundness of your choice.

| I don't think that's asking for too much - do you?

I think if you didn't demand it of others, but simply followed it, it
would be perfectly OK. As far as I know, I already comply, and you
are not giving me any indication of _where_ I'm not, either, just
this weak implication that I'm not, which just doesn't hold water.

I'm through with you, Jon. Let me know when you are willing to
listen to what people who disagree with you are telling you. I
don't trust people who have made up their mind so hard it must be
cracked before it can re-examine its path to its conclusions.

David Bakhash

unread,
Nov 7, 2000, 12:13:01 AM11/7/00
to
David Bakhash <ca...@alum.mit.edu> writes:

> So they started out with 300 bps, and eventually got it up to 56K
> bps. Almost 200% increase. Okay. I agree that's impressive.

Oops. I meant almost 200x.

My main point was that it's hard to weigh performance against
insensitivity to changes in externalities.

I think that the best design is one which is fast enough to do the job
now, but personally I favor insensitivity to changes in other factors
more than a couple of percentage points on performance.

I feel the same way about general purpose programs. Programs which
are massively optimized for what they do, but not easily extended are
a nightmare to work with.

It's just important to understand that if someone's protocol breaks
down, it's not a symptom of poor design, if the specifications that
the designer was given ignored that in favor of squeezing every bit
out of the current external environment. It's more a sign of poor
specifications than design.

dave

Jon S Anthony

unread,
Nov 7, 2000, 3:00:00 AM11/7/00
to
Erik Naggum wrote:
>
> * Jon S Anthony <j...@synquiry.com>
> | I'm unclear on how you can get this out of what I wrote:
>
> What makes you think I got it out of what you _quote_ that you wrote?

Because you quoted part of it.


> | I'm not defending _anything_, much less myself, or my choices.
>
> Jon, please. You're doing it right there. Just observe yourself.

Hmmm, if this is what you call defending, then that is worth
understanding. I wouldn't call it that but perhaps that's irrelevant
in attempting to communicate here.


> | I would seriously consider how to go about changing to the proposed
> | alternative. I have already stated so in not so many words. Why do
> | have such a difficult time believing this?
>
> Because you reject everything I say before you understand it. I'm
> sure you think you've been where I've been so you don't need to

No, that is simply not the case. All I reject is any claim that it
was not or is not possible to make this choice for sound technical
reasons. I've stated that this is what I _believed_ you said, but
wanted some sort of verification or clarification on this.


> listen to what I say because you think you already know what I'm

No I _don't_ already think I know what you are saying. For crying out
loud, I've actually said this several times. I stated what I
_thought_ you were saying and said it was only that and requested
clarification of whether I was right or wrong. All I get is that I
don't understand. Well, duh.


> saying, but that is why I have taken this to a meta-discussion. You
> need to stop thinking you know your opponent's position and you very

I don't know it or claim to know it. I make this statement again.


> believe that this is evident in your behavior, at least _LISTEN_ to
> the fact that someone reads you that way. Wonder why, at least,

OK, I'm listening. Seriously, what about what I actually _say_ (not
your inferences about what you believe I'm saying) is screwing this
all up?


> | I don't know and I don't care.
>
> You really do have a hard time in general, don't you, not just with
> this discussion? I was trying to strike a balance by asking that

OK, I misread you on this one.


> Stop defending yourself.
> You are _not_ under attack, OK?

I already understand this, but am willing now to understand that you
believe that I am defending myself based on your understanding of what
I've said (criminey, I'm not sure I understand that one :)


> | > What _is_ clear to me now is that you have zero respect for other
> | > people and differing opinions regardless of how they arrived at them,
> |
> | Incorrect.
>
> It is _incorrect_ that that is clear to me? *boggle* How the fuck

No, not that that is clear to you, but that the proposition embodied
in it is false.


> You _disagree_, Jon. If

Correct, and I admit this could/should have been clearer.


> have something to say worth listening to, but as long as you mix up
> _correctness_ with people's opinions and conclusions, you're just

I don't disagree with this.


> | I think this is sound advice and clearly goes both ways.
>
> Your need to make it apply back to me is a very good indicator of a

I do disagree with this.


> | I respond to clearly stated positions that take into account all
> | sides of an argument.
>
> Then you are clearly delusional, too, and your conclusions must be

Clearly I disagree with the claimed proposition - not that you believe
it.


> | I don't think that's asking for too much - do you?
>
> I think if you didn't demand if others, but simply followed it, it

I _am_ following it. I understand you don't believe this. The fact
that you don't get it from me is, in my opinion, saying more about you
than it is about me. Also, I don't _demand_ it at all, I just stated
that this is what I typically try to respond to.


> would be perfectly OK. As far as I know, I already comply, and you
> are not giving me any indication of _where_ I'm not, either, just

I've stated it a few times, obviously I am not able to communicate
this clearly to you.


> I'm through with you, Jon. Let me know when you are willing to
> listen to what people who disagree with you are telling you.

I've been willing from the start. From my position, I keep getting
these sorts of indications from you that make me believe that _you_
are unwilling to listen.


> I don't trust people who have made up their mind so hard it must
> be cracked before it can re-examine its path to its conclusions.

Neither do I. Apparently we have done such a botched up job of trying
to communicate to one another that each of us thinks the other's
credibility is shot. I'm sorry about that. I really am.
