
What's new in Dylan? Is it even OO?


Jason Robbins

Mar 8, 1993, 9:19:50 AM
This is an honest question. I read the Dylan(tm) book and I am left
very unimpressed.
The third paragraph of the Introduction says:
"To those familiar with the Lisp family of languages, Dylan looks like
Scheme augmented with CLOS. ... The real target audience of Dylan is
application developers now using languages such as C, C++, and Pascal
who are ready to move up to Object Oriented Dynamic Languages"

reaction #1: Yawn.

reaction #2: Get a clue. Nobody is going to willingly move from the
frying pan (C syntax) into the fire (entering parse trees directly,
a.k.a. lisp syntax without macros).

reaction #3: Well at least it is OO and very dynamic. That is a step up.

reaction #4: (after finishing the book) Hang on. That was not OO at
all. I don't even think that it meets the definition of OOness.
Objects don't have their own behavior, they are just records that are
the victims of generic functions. It can't have encapsulation since
there is nothing to mark which generic functions have access to
potential private slots and which g.f.'s do not. It does have modules
and polymorphism. But it misses the big OO picture by a mile: it seems
that programmers are always working on functions, thinking about
functions, and the run time state of the program is mainly thought of
as a trace back of functions calling other functions. (While a _real_
OO language is an organization of objects, programmers worry about
objects, and the run time state of the program is most clearly thought
of as a network of objects pointing to other objects.) I guess it is
still a step up from C++, barely, but why isn't Apple promoting a
really interesting, OO, and fast language? (Self comes to mind.)

reaction #4a: The notion that Dylan is "objects all the way down" is
made meaningless by the "Seal" construct. Seal looks like a way to
turn an existing OO class into a non-OO functional representation,
which would be nice if the compiler did it for you when you were ready
to ship your product, but when 16 out of 40 classes described in the
book are already sealed it seems like hype.

Flames welcome, if they are interesting. (One reply I got to my last
message told me that I should try hard to keep my functions under 15
lines, then I would like Lisp syntax better. My current project in CL
is about 12000 lines; 1000 functions seems unmanageable. In fact, I
think that (in _practice_) the weakness of its syntax makes Lisp
functions less first class than C functions!)

-jason

Steve Knight

Mar 10, 1993, 11:44:23 AM
Jason Robbins writes:
> reaction #2: Get a clue. Nobody is going to willingly move from the
> frying pan (C syntax) into the fire (entering parse trees directly,
> a.k.a. lisp syntax without macros).

This message was meant to be a personal impression but steps out of
bounds with "Nobody is going to ...". Plenty of people will -- after all,
exchanging one of the most poorly designed syntaxes for a syntax that is
easier to learn, more flexible, and more mature is a step forward for
a lot of C programmers, such as myself.

I'm no admirer of S-expression syntax, as readers of this group will
probably recall, but I would be very comfortable arguing its merits
versus C's syntax. My belief is that this is yet another instance of
a call for pluralism and diversity -- multiple syntaxes rather than just one.


> reaction #4: (after finishing the book) Hang on. That was not OO at
> all. I don't even think that it meets the definition of OOness.

A lot of people have this reaction when meeting multi-method style
OO languages for the first time. There's no easy way to soften the
response, unfortunately, because they are wrong. One can rephrase the
objection as "that's not the way Smalltalk works", or whatever, and then
we can agree. But, unpleasant though it might be, it really is OO.

Those of us who have programmed with CLOS style multi-methods find
their generality invaluable. They completely encompass the limited range
of object-oriented techniques available in mono-method systems and go
beyond it in useful and intuitive ways.

The key difference is that method definitions and class definitions are
not so intimately related. And why should they be? If you've
programmed in Smalltalk, then you'll probably be familiar with that
teeth-grinding moment when you discover that closely associated methods
need to be thinly spread all over the class hierarchy.
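A rough sketch of the idea in Python (every name below is invented for illustration; this is not the CLOS or Dylan API): a multi-method is just an entry in one table keyed by the classes of all the arguments, so closely associated methods can be defined side by side instead of being spread over a class hierarchy.

```python
# Hypothetical multi-method sketch: one registry for all methods of an
# operation, keyed by the classes of every argument.
methods = {}

def defmethod(name, *types):
    """Register a method of `name` specialised on `types` (exact match
    only; no inheritance walk, to keep the sketch short)."""
    def register(fn):
        methods[(name, types)] = fn
        return fn
    return register

def call(name, *args):
    # Dispatch on the classes of ALL arguments, not just the first.
    fn = methods.get((name, tuple(type(a) for a in args)))
    if fn is None:
        raise TypeError("no applicable method for %s" % name)
    return fn(*args)

class Circle: pass
class Square: pass

# Both cases of one operation live together, rather than being split
# between the Circle class and the Square class:
@defmethod("overlap?", Circle, Square)
def _(c, s): return "circle-square rule"

@defmethod("overlap?", Square, Circle)
def _(s, c): return "square-circle rule"
```

Dispatch then picks the method from the classes of both arguments: `call("overlap?", Circle(), Square())` selects the circle-square rule.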


> It does have modules
> and polymorphism. But it misses the big OO picture by a mile, it seems
> that programmers are always working on functions, thinking about
> functions, and the run time state of the program is mainly thought of
> as a trace back of functions calling other functions.

My answer to this is simply "try it out". When I program using my own
multi-method based system (ObjectClass), I move seamlessly between the
function-oriented and object-oriented views. It is truly a significant
improvement over mono-method systems such as Flavors, Smalltalk, C++, etc.


> I guess it is
> still a step up from C++, barely, but why isn't Apple promoting a
> really interesting, OO, and fast language (Self comes to mind).

Comparing Dylan to C++ is preposterous. Dylan will have many major advantages
over C++:
1. multi-methods
2. automatic storage management (GC)
3. a consistent system of object types (no need for base types)
4. higher-order programming seamlessly integrated with full lexical binding
5. a better syntax

As for the concern over performance, I guess there's a valid point lurking.
It is true that single-inheritance, mono-method systems can be implemented
very efficiently. That's why in ObjectClass, I arranged that when the
programmer doesn't exploit these extra facilities, it generates more
efficient methods. And these techniques are undoubtedly exploited by
Dylan implementors.

So in reply to "why not self", the answer comes "because it's more powerful
and equally efficient when the extra power isn't needed."


> reaction #4a: The notion that Dylan is "objects all the way down" is
> made meaningless by the "Seal" construct. Seal looks like a way to
> turn an existing OO class into a non-OO functional representation,
> which would be nice if the compiler did it for you when you were ready
> to ship your product, but when 16 out of 40 classes described in the
> book are already sealed it seems like hype.

Rather than directly answer this point (which I partially agree with) I'd
like to add a little note on why I had to introduce sealing into my
own multi-method system (ObjectClass).

One of the key reasons for introducing sealing to a system is integration
with old style languages that don't support multiple inheritance. When
you support multiple inheritance, you have significant problems keeping
all the slots of an object in a fixed order. However, when you pass such
objects across to C (for example) the order must be determined.

The solution to the problem is to seal the class. You can add more
methods but you can't inherit from it. Thus you can arrange for some of
the methods to be implemented as routines that assume a fixed order.

This might seem an over-the-top solution -- but there is such a lot of code out there
that you need to interface to, you can't afford the time to write new
indirections. Sealing is a good answer.


> In fact, I
> think that (in _practice_) the weakness of its syntax makes Lisp
> functions less first class than C functions!)

Maybe this was meant as a joke -- in which case forgive me for pointing
out the obvious. "First-class" is not a formal term. It refers to the
intuitive concept that procedures can be employed in the same
contexts as data -- you can pass them as parameters, compare them for
equality, compose them, and so on. All languages fall short of
the ideal but Lisp has one of the very best claims. C by contrast has a
more limited claim -- because although you can pass them as parameters, put them
in variables, apply variables, put and get them from data structures, you
can't create new functions. Furthermore, you can't even have nested functions,
so the range of functions you can conveniently create is, by comparison,
extremely limited.

BTW, I don't know who suggested that Lisp functions should be limited to 12
lines or less. But I admire their sense of romance.

Steve

Jason Robbins

Mar 10, 1993, 5:47:18 AM
In article <1374...@otter.hpl.hp.com>

s...@otter.hpl.hp.com (Steve Knight) writes:
Jason Robbins writes:
> reaction #2: Get a clue. Nobody is going to willingly move from the
> frying pan (C syntax) into the fire (entering parse trees directly,
> a.k.a. lisp syntax without macros).

This message was meant to be a personal impression but steps out of
bounds with "Nobody is going to ...". Plenty of people will -- after all

Sorry. But I just had to get in that "lisp as parse tree" part.

> reaction #4: (after finishing the book) Hang on. That was not OO at
> all. I don't even think that it meets the definition of OOness.

.... One can rephrase the


objection as "that's not the way Smalltalk works", or whatever, and then
we can agree. But, unpleasant though it might be, it really is OO.

Those of us who have programmed with CLOS style multi-methods find
their generality invaluable. They completely encompass the limited range
of object-oriented techniques available in mono-method systems and go
beyond it in useful and intuitive ways.

Assembly language encompasses the limited range of OO techniques also.
Generality is nice, but you can only take it so far since most
languages are Turing complete anyway. It is also the _limitations_ of a
system that define its style. The generic function idea is that "you
can do anything"; frankly, I don't want to do anything, and I don't
want the people writing code that I have to read doing anything. I
want them to write only code that has good software engineering
features. OOness gives you some important SE features because of its
limitations. Let's consider encapsulation:

I think of encapsulation as being able to know ALL the ways the state
of an object can change by looking at some bounded amount of source
code. In CLOS and Dylan the number of lines of source you must study
in order to see when objects might be modified is _not_ syntactically
bounded. This is the symbolic equivalent of a dangling pointer writing
over your data in C.

It seems to be a limitation of the Lisp syntax that makes it
impractical to have large syntactic constructs such as modules and
classes with ALL the methods inside: it is just too much of a pain to
deal with unmarked parens that far apart. (Of course, C++ has the same
problem, but at least {}'s mean big groups and ()'s mean small groups.)

You argued above that CLOS is a superset of all of the normal features
of OO. But CLOS doesn't have any super way of encapsulation (other
than putting things inside of a defun or let). Not only can users not
use large syntactic groupings, but (I think) the CLOS designers could not
have given them that option even if they wanted to. Encapsulation is a
semantic feature, but with no practical syntax to back it up its
semantics can never be relied upon.

> It does have modules
> and polymorphism. But it misses the big OO picture by a mile, it seems
> that programmers are always working on functions, thinking about
> functions, and the run time state of the program is mainly thought of
> as a trace back of functions calling other functions.

My answer to this is simply "try it out". When I program using my own

Can someone argue for generic functions on philosophical grounds?


> reaction #4a: The notion the Dylan is "objects all the way down" is
> made meaningless by the "Seal" construct. Seal looks like a way to

One of the key reasons for introducing sealing to a system is integration

with old style languages that don't support multiple inheritance. When

... Sealing is a good answer.

Right, sealing is useful, but it makes Dylan not "objects all the way
down". Not that anyone really wants "objects all the way down"
anyway; what they want is the ability to deal with objects and
non-objects consistently, and not to have to use typecase statements.

> I think that (in _practice_) the weakness of its syntax makes Lisp
> functions less first class than C functions!

Maybe this was meant as a joke ... you can pass them as parameters,


compare them for equality, compose them, and so on.

It is in the ability to compose that Lisp syntax is lacking. Compose
too much and you get lost in a sea of parens. That is why functional
decomposition is so important in Lisp: survival.

BTW, I don't know who suggested that Lisp functions should be limited to 12
lines or less. But I admire their sense of romance.

The honeymoon is over... Lisp is still a major pain when you use large
functions. The new syntax for Dylan really should have named ending
delimiters ("end if", "end my-function-name", ...); they reduce the
amount of time spent balancing delimiters.

-jason

William M. York

Mar 10, 1993, 4:28:03 PM

Can someone argue for generic functions on philosophical grounds?

The main idea is that in a functional programming language like Lisp,
it makes a lot of sense to express the object-oriented component of
the language in functional terms. Perhaps this sounds like a
tautology to you, but I would then ask you how you would go about
adding object-oriented capabilities to a "base" Lisp language? (No
fair saying that you wouldn't use Lisp in the first place!)

Some of us remember the old Symbolics Flavors system, which used a
"message-passing" style. This meant that once you had become
comfortable with Lisp you had to learn a whole new set of concepts in
order to cross the threshold into OOP. This was a bad thing (ask any
of the former Symbolics course instructors).

BTW, I don't know who suggested that Lisp functions should be limited to 12
lines or less. But I admire their sense of romance.

The honeymoon is over... Lisp is still a major pain when you use large
functions. The new syntax for Dylan really should have named ending
delimiters ("end if", "end my-function-name",...) they reuduce the
amount of time spent balanceing delimiters.

Lisp has such a construct, syntactically represented by the semicolon
character:

(defun fact (n)
  (if (< n 2)
      1
      (* n (fact (1- n))))) ; end fact

What's that you say? You'll only use such a construct if it is a
mandatory part of the syntax of the function? Oh...

Because each syntactic construct in Lisp is its own sub-form, it is
quite easy (admittedly with minimal editor support) to unambiguously
find the end. It is precisely because more "linear" languages don't
have this built-in structure that you need "end if" to figure out what
this "end" is ending.
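The point about each construct being its own sub-form can be made concrete: finding the end of an S-expression is a mechanical depth count. A toy Python sketch (invented purely for illustration; it ignores strings and comments):

```python
def end_of_form(text, start):
    """Return the index one past the S-expression that begins at
    `start`, by counting parenthesis depth."""
    depth = 0
    for i in range(start, len(text)):
        if text[i] == "(":
            depth += 1
        elif text[i] == ")":
            depth -= 1
            if depth == 0:
                # The opening paren at `start` has just been closed.
                return i + 1
    raise ValueError("unbalanced form")

src = "(if (< n 2) 1 (* n (fact (1- n))))"
print(src[:end_of_form(src, 0)])
```

No "end if" keyword is needed to know which form the final paren closes; the nesting alone determines it.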

Sandy Wise

Mar 10, 1993, 12:38:53 PM
I think of encapsulation as being able to know ALL the ways the state
of an object can change by looking at some bounded amount of source
code. In CLOS and Dylan the number of lines of source you must study
in order to see when objects might be modified is _not_ syntactically
bounded. This is the symbolic equivalent of a dangling pointer writing
over your data in C.

This can be resolved with environment support -- tools such as class
browsers still work for examining the state of a system. The query
"what operations apply to this object" is changed from being an editor
task (via inspection) to an explicit query operation...

Can someone argue for generic functions on philosophical grounds?

The strict binding of methods to objects as was done in Smalltalk and
C++ forces the selection of operations to be based on a single type.
Unfortunately, not all operations are unary. Addition is a simple
example...

If I have a class integer, and I want to add a new class real, I not
only have to define real::"+"(int) I also have to redefine int to
include int::"+"(real)! Increasing the arity of the functions is left
as an exercise for the reader :-)
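For illustration, here is the same trap in a single-dispatch Python sketch (the `Int` and `Real` classes are invented stand-ins): because each class owns its `add` method, introducing `Real` forces `Int` to be reopened as well.

```python
class Int:
    def __init__(self, v): self.v = v
    def add(self, other):
        if isinstance(other, Int):
            return Int(self.v + other.v)
        if isinstance(other, Real):
            # This branch had to be ADDED to Int when Real appeared:
            # the new class forces edits to the old one.
            return Real(self.v + other.v)
        raise TypeError("Int cannot add %r" % other)

class Real:
    def __init__(self, v): self.v = v
    def add(self, other):
        # Real must handle both Real and Int from day one.
        return Real(self.v + other.v)
```

With multi-methods, the `(Int, Real)` and `(Real, Int)` cases would instead be two new method definitions, with no edits to existing classes.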
/s
--
Alexander Erskine Wise /\/\/\/\/\/\/\/\/\/\/\/\ Software Development Laboratory
/\/\/\/\/\/\/\/\/\/\/\/\/\/\ WI...@CS.UMASS.EDU /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\ This situation calls for large amounts of unadulterated CHOCOLATE! /\/\/\

Eric Anderson

Mar 10, 1993, 9:27:06 PM
In article <SANDY.93M...@beeker.cs.umass.edu> wi...@cs.umass.edu writes:
>In article <JROBBINS.93...@kingston.cs.ucla.edu> jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:
>The strict binding of methods to objects as was done in Smalltalk and
>C++ forces the selection of operations to be based on a single type.
>Unfortunately, not all operations are unary. Addition is a simple
>example...
>
>If I have a class integer, and I want to add a new class real, I not
>only have to define real::"+"(int) I also have to redefine int to
>include int::"+"(real)! Increasing the arity of the functions is left
>as an exercise for the reader :-)
Interestingly enough, it is precisely this example which has led to a huge
"wart" in the language Haskell. They introduced type classes and have
a very large, relatively complicated description of all the different
numeric classes. Someone recently looked in the standard Haskell prelude
(library) and found that besides the numeric classes, there weren't any
uses of the type classes. This discussion is currently going on in
comp.lang.functional for those who are interested.
--
Does anyone have an example which doesn't involve numbers where you want
to do this?
-Eric
*********************************************************
"Overhead, without any fuss, the stars were going out."
-The Nine Billion Names of God
"Yes, you're very smart. Shut up."
-In "The Princess Bride"
*********************************************************

Brian Harvey

Mar 11, 1993, 10:58:31 AM
wi...@cs.umass.edu writes:
>If I have a class integer, and I want to add a new class real, I not
>only have to define real::"+"(int) I also have to redefine int to
>include int::"+"(real)! Increasing the arity of the functions is left
>as an exercise for the reader :-)

Yes, but with generic functions you have to define (+ <int> <real>) and
(+ <real> <int>) separately, which is just as bad. As the number of
types increases, you get N^2 methods to define.

What you *really* want to be able to do is only define the N methods that
apply when the two arguments are of the same type, and then independently
define methods for raising a single number from, e.g., int to real, and
have the language be clever enough to try raising if it can't find an
appropriate method. But once you decide on that approach, generic
functions are no better (in this example) than message-passing.
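A Python sketch of this raising approach (the tower list and helper names are invented for illustration): define addition only for same-kind operands, plus an explicit coercion ladder, and let the dispatcher raise the lower argument before adding.

```python
# Ordered numeric tower: each kind can be raised to any later one.
tower = ["int", "float", "complex"]

def kind(x):
    return type(x).__name__

def raise_to(x, k):
    """Coerce x to the given kind of the tower."""
    return {"int": int, "float": float, "complex": complex}[k](x)

def add(a, b):
    # Find the higher of the two kinds and raise both arguments to it,
    # so the actual addition only ever sees same-kind operands.
    top = max(kind(a), kind(b), key=tower.index)
    return raise_to(a, top) + raise_to(b, top)
```

Only N coercions and N same-kind methods are needed, rather than the N^2 mixed-type methods discussed above.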

The generic function model may work better in cases that aren't best
handled by raising, but rather in which there really are only a few
well-defined combinations of argument types.

Chris Dollin

Mar 11, 1993, 3:01:52 AM
In article ...jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:

It seems to be a limitation of the Lisp syntax that makes it
impractical to have large syntactic constructs such as modules and
classes with ALL the methods inside: it is just too much of a pain to
deal with unmarked parens that far apart. (Of course, C++ has the same
problem, but at least {}'s mean big groups and ()'s mean small groups.)

Packages (in CLOS).
Modules (in Dylan).

They don't need to be bound by parentheses (at least, packages don't).


[What's that? They're not a proper nested S-expression syntax? What
a pity.]
--

Regards, "I know three kinds: hot, cool, and 'what time does the tune start?'".
Kers [The Beiderbeck Connection]

Steve Knight

Mar 11, 1993, 11:03:57 AM
A few, brief responses on Jason's points:

> Those of us who have programmed with CLOS style multi-methods find
> their generality invaluable. They completely encompass the limited range
> of object-oriented techniques available in mono-method systems and go
> beyond it in useful and intuitive ways.

> Assembly language encompasses the limited range of OO techniques also.

As it wasn't clear the first time, I mean that CLOS style multi-methods
encompass mono-methods through a trivial textual substitution. I am not
talking about Turing equivalence or uninteresting relationships of principle.


> I think of encapsulation as being able to know ALL the ways the state
> of an object can change by looking at some bounded amount of source
> code. In CLOS and Dylan the number of lines of source you must study
> in order to see when objects might be modified is _not_ syntactically
> bounded. This is the symbolic equivalent of a dangling pointer writing
> over your data in C.

I would claim there are 3 misunderstandings here. Firstly,
encapsulation (as in C++) does not limit the amount of source code that
needs to be studied to see how the state changes. It is no more than
a convenient name-hiding mechanism. Secondly, it is not useful to
state you have to study an unlimited amount of source code in Dylan and
CLOS -- because exactly the same applies to other languages. Finally,
the comparison with dangling pointers is misleading. A dangling pointer is
a semantic error. The failure to provide this particular name-hiding
mechanism is, at worst, an inconvenience.


> > It does have modules
> > and polymorphism. But it misses the big OO picture by a mile, it seems
> > that programmers are always working on functions, thinking about
> > functions, and the run time state of the program is mainly thought of
> > as a trace back of functions calling other functions.

> My answer to this is simply "try it out". When I program using my own

> Can someone argue for generic functions on philosophical grounds?

Yes -- of course one can. But it is basically a weak approach. The multi-
method approach frees the programmer from specialising on a principal
argument (the "receiver") by allowing you to specialise on any selection
of arguments. But it does not prevent you from doing so. It also gives
methods the status of procedures -- making them first-class, avoiding
introducing superfluous syntax, and so on. These are two of the most
significant issues.

If these "philosophical" considerations don't influence someone, I am
not greatly surprised. Such advantages seem very theoretical until
you have gained familiarity with them.

Occasionally, there is some confusion over the "philosophy" of OOP. Some
claim that OO is ontologically related to "the world". Such a view is
insupportable, as has been argued in punishing detail by Chris Dollin
elsewhere.


> It is in the ability to compose that Lisp syntax is lacking. Compose
> too much and you get lost in a sea of parens. That is why functional
> decomposition is so important in Lisp: survival.

This makes no sense to me as it stands. I agree that Lisp syntax makes
reading and writing code more difficult -- but its composability is
excellent. In contrast, consider the occasional situation in C where you
would like to alter an expression to a statement. Because of the difference
in syntactic roles between expressions and statements, the code may have
to be significantly, and pointlessly, rewritten. There are no analogous
problems in Dylan or in any Lisp.

Steve

Andrew LM Shalit

Mar 11, 1993, 12:22:37 PM
In article <JROBBINS.9...@kingston.cs.ucla.edu>,

jrob...@kingston.cs.ucla.edu (Jason Robbins) wrote:
>
> reaction #4a: The notion that Dylan is "objects all the way down" is
> made meaningless by the "Seal" construct. Seal looks like a way to
> turn an existing OO class into a non-OO functional representation,
> which would be nice if the compiler did it for you when you were ready
> to ship your product, but when 16 out of 40 classes described in the
> book are already sealed it seems like hype.

Response #1

When the Dylan manual says that it is "objects all the way down", it
does not mean, "specializable all the way down", nor does it mean,
"redefinable all the way down". It means what it says, "objects all
the way down".

A fuller definition of this phrase would be: every piece of data in
the language is a first-class object. The language provides a consistent
abstraction barrier between the programmer and the raw bits of the
underlying machine. The programmer is not exposed to bits and pointers,
but to objects. Even error conditions do not break this barrier, so
all debugging is in terms of objects.

Now, some people don't want this barrier. They want to be able to play
with the bits, treat things as raw machine pointers, etc. They believe
that this gives them more control over their program, and allows them
to make their programs more efficient. I'm not one of those people.
I'm someone who is happy to let the computer handle some of the
details, so I can concentrate on the higher-level functionality of my
program.

Response #2

Sealing provides two capabilities:

1) It allows programs to be compiled more efficiently. This is really
just a bow to the reality of the day.

2) It allows programmers to declare constant properties of their program,
and ensure that any additions made later will not violate those constant
properties. I believe that this capability is important in delivering
secure, reliable software.


-Andrew Shalit
Apple Computer

Jason Robbins

Mar 11, 1993, 5:15:18 AM

Can someone argue for generic functions on philosophical grounds?

The main idea is that in a functional programming language like Lisp,
it makes a lot of sense to express the object-oriented component of
the language in functional terms. Perhaps this sounds like a
tautology to you, but I would then ask you how you would go about
adding object-oriented capabilities to a "base" Lisp language? (No
fair saying that you wouldn't use Lisp in the first place!)

Some of us remember the old Symbolics Flavors system, which used a
"message-passing" style. This meant that once you had become
comfortable with Lisp you had to learn a whole new set of concepts in
order to cross the threshold into OOP. This was a bad thing (ask any
of the former Symbolics course instructors).

It seems that you ("you" referring to the CLOS community) missed the OO
revolution. You are not supposed to think of objects as being
equivalent to groups of functions; that is what the compiler is
supposed to do. OO programmers are supposed to be able to think of
their programs as communicating objects (a view that has some _real_
advantages, see my next posting). Moving to CLOS without learning to
think of objects as objects is like moving from C to C++ without
rewriting any of your (unOO) C programs. It's like trying to do OO
programming in Ada (sorry for that low blow).

To belabor the point, I could make an argument similar to yours:
The idea of functional programming makes more sense to programmers (of
the 1960's) if they think of each function in terms of blocks of
instructions which can execute a gosub to other such blocks of
instructions. There is the additional limitation that those blocks of
instructions should only return one result and not modify global
variables, mostly that is a workable limitation, but when it is not
you can still modify globals if you want.

BTW, I don't know who suggested that Lisp functions should be
limited to 12 lines or less. But I admire their sense of romance.

The honeymoon is over... Lisp is still a major pain when you use large
functions. The new syntax for Dylan really should have named ending
delimiters ("end if", "end my-function-name", ...); they reduce the
amount of time spent balancing delimiters.

Lisp has such a construct, syntactically represented by the semicolon
character:
(defun fact (n)
  (if (< n 2)
      1
      (* n (fact (1- n))))) ; end fact

Good point, in fact I do use that.
-jason

Daniel LaLiberte

Mar 11, 1993, 1:23:09 PM
Jason Robbins writes:
Can someone argue for generic functions on philosophical grounds?

From: yo...@lorddarcy.parc.xerox.com (William M. York)

The main idea is that in a functional programming language like Lisp,
it makes a lot of sense to express the object-oriented component of
the language in functional terms.

The functional syntax (really prefix notation) is independent of
whether one argument or several are used to determine the
method. So blending with the prefix notation is not the reason to use
multi-methods and generic functions. There are better, semantic
reasons to do so, which others have presented.

Because each syntactic construct in Lisp is its own sub-form, it is
quite easy (admittedly with minimal editor support) to unambiguously
find the end. It is precisely because more "linear" languages don't
have this built-in structure that you need "end if" to figure out what
this "end" is ending.

Unique ending brackets help readers *and* compilers figure out the
structure and inform you when there is a mismatch. Comments don't
help the reader unless you maintain them correctly, and the compiler
will not help you do so.

Nevertheless, I don't really like unique ending brackets because I
prefer editor support for structural feedback. Still, I get messed up
every so often.

Dan LaLiberte
lib...@cs.uiuc.edu
(Join the League for Programming Freedom: l...@uunet.uu.net)

William M. York

Mar 11, 1993, 5:47:41 AM
In article <1nnnj7$o...@agate.berkeley.edu> b...@anarres.CS.Berkeley.EDU (Brian Harvey) writes:

The generic function model may work better in cases that aren't best
handled by raising, but rather in which there really are only a few
well-defined combinations of argument types.

Yes, CLIM uses multi-methods in cases like this:

(defgeneric decode-ink (medium ink) ...)

Where MEDIUM can be one of CLX-MEDIUM, MACINTOSH-MEDIUM, etc. and INK
can be one of COLOR, PATTERN, etc.

So, by filling in the 2D matrix with methods, you implement support
for each kind of ink on each kind of display device. ("medium" is a
generic term for "object that supports the output protocol")
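In sketch form (Python used for illustration; the table entries are invented stand-ins for real decoders, not CLIM code), the 2D matrix idea looks like:

```python
# A 2D dispatch table in the spirit of the CLIM decode-ink example:
# one independent entry per (medium, ink) combination.
decoders = {
    ("clx", "color"):       lambda ink: "X11 pixel for %s" % ink,
    ("clx", "pattern"):     lambda ink: "X11 stipple for %s" % ink,
    ("macintosh", "color"): lambda ink: "QuickDraw color for %s" % ink,
}

def decode_ink(medium, ink_kind, ink):
    # Select the method from BOTH the display medium and the ink kind.
    try:
        return decoders[(medium, ink_kind)](ink)
    except KeyError:
        raise TypeError("no method for %s/%s" % (medium, ink_kind))
```

Adding support for a new medium means adding one row of entries; no existing class or entry needs to be touched.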

Jason Robbins

Mar 11, 1993, 6:02:48 AM
In article <1374...@otter.hpl.hp.com> s...@otter.hpl.hp.com (Steve Knight) writes:

A few, brief responses on Jason's points:
> Those of us who have programmed with CLOS style multi-methods find
> their generality invaluable. They completely encompass the limited range
> of object-oriented techniques available in mono-method systems and go
> beyond it in useful and intuitive ways.

> Assembly language encompasses the limited range of OO techniques also.

As it wasn't clear the first time, I mean that CLOS style multi-methods
encompass mono-methods through a trivial textual substitution. I am not
talking about Turing equivalence or uninteresting relationships of
principle.

You were talking about equivalence and so was I. My point was that too
much generality washes out the advantages of OOP. You deleted that part.

> I think of encapsulation as being able to know ALL the ways the state
> of an object can change by looking at some bounded amount of source
> code. In CLOS and Dylan the number of lines of source you must study
> in order to see when objects might be modified is _not_ syntactically
> bounded. This is the symbolic equivalent of a dangling pointer writing
> over your data in C.

I would claim there are 3 misunderstandings here. Firstly,
encapsulation (as in C++) does not limit the amount of source code that
needs to be studied to see how the state changes. It is no more than
a convenient name-hiding mechanism. Secondly, it is not useful to
state you have to study an unlimited amount of source code in Dylan and
CLOS -- because exactly the same applies to other languages. Finally,
the comparison with dangling pointers is misleading. A dangling pointer is
a semantic error. The failure to provide this particular name-hiding
mechanism is, at worst, an inconvenience.

#1) Name hiding does limit the amount of source you have to look at;
it is a _really_big_ convenience. Unfortunately, CLOS doesn't do it very well.

#2) No. Imagine packages in Ada or modules in Modula-2. The private
parts of those suckers are _private_: no source anywhere outside of
the begin ... end module pair can mess up those private parts. :.
(therefore) If they get messed up you know where to look.

The problem with CLOS is the "no other source" part of the above. In
CLOS another piece of source in another file can say (in-package
'your-private-package) and start messing around. Sure it's bad style,
but Dylan is meant to be used by people who were using C++ (so let's
not talk about bad style). Actually, I know that kind of problem can
come up even in well-managed projects that get large under time
pressure; it happened on my current CLOS project.
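The "no source outside the begin ... end pair" property can be imitated even in languages without enforced modules. A minimal Python sketch (the counter example is invented for illustration), where the private state is a closure variable reachable only through the functions defined beside it:

```python
# Illustrative sketch (not CLOS or Dylan): the "private part" below is a
# closure variable, so the ONLY source that can touch `count` is the code
# textually inside make_counter -- if it gets messed up, you know where
# to look.

def make_counter():
    count = 0                      # the private slot

    def increment():
        nonlocal count
        count += 1
        return count

    def value():
        return count

    return increment, value
```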

#3) You're right. Dangling pointers are worse. I just wanted to give
readers that sinking feeling that people get when they have to debug a
program and they don't know where to start looking for the bug.


> Can someone argue for generic functions on philosophical grounds?

Yes -- of course one can. But it is basically a weak approach. The multi-
method approach frees the programmer from specialising on a principal
argument (the "receiver") by allowing you to specialise on any selection
of arguments. But it does not prevent you from doing so.

OK, if you only specialize on the first argument then you can be
more OO. I have heard that GF's and multiple inheritance are used only
very infrequently (by application programmers) in languages which
support them, because single inheritance and single dispatching are
more natural and understandable. The multi-method aspect of GF's is
not nearly so offensive as the fact that they are _functions_.

It also gives methods the status of procedures -- making them
first-class, avoiding introducing superfluous syntax, and so on.
These are two of the most significant issues.

Sounds like you are trying to be backwards compatible with something.
Why not move forward and start really thinking in terms of objects? If
you stay grounded in the structured programming revolution and only
admit objects so long as they fit nicely into your old mental model,
then you will end up with a new Ada: procedural decomposition with
abstract data types pretending to be objects. Of course Ada is better
at that than CLOS.

Paraphrased from the Booch book (although it has been widely known):
Objects are a better center for the design of useful programs because
they are usually the most stable part of the requirements: users will
always want more and different functions, but the objects they need
tend not to change. Adding/changing functions often means drastic changes
to other functions; that is why so many functionally decomposed
systems tend to get patched as time passes.

My single biggest problem with GF's is that they promote functional
decomposition. (kinda like promoting tooth decay).

Occasionally, there is some confusion over the "philosophy" of OOP. Some
claim that OO is ontologically related to "the world". Such a view is

insupportable...

You can generalize OOP to things other than simulation, but the
farther you go the less the advantage. Objects are not fundamentally
different from datatypes and functions as the computer sees them. But
objects are more understandable to humans than functions are,
precisely because humans have real-world experience with real-world
objects. Objects are like little people that talk to each other and
each have their own duties and properties and goals. Surely everyone
can understand that better than a programming model based on
functional evaluation; I have known real-life functions since
childhood, but I have known a lot more objects.

> It is in the ability to compose that Lisp syntax is lacking. Compose
> too much and you get lost in a sea of parens. That is why functional
> decomposition is so important in Lisp, survival.

This makes no sense to me as it stands. I agree that Lisp syntax makes
reading and writing code more difficult -- but its composability is
excellent. In contrast, consider the occasional situation in C where you
would like to alter an expression to a statement.

You're right, I misphrased. Composing is easy in Lisp, but you can't
do it too much or you get into trouble. A 100-line function in C would
be OK to work with; a 100-line function in Lisp would be a disaster.
Everyone says that the right way to program in Lisp is to write short
functions, and everyone likes short functions in any language, but I
would rather have a language that could handle short or long functions.

-jason

Jason Robbins

unread,
Mar 11, 1993, 7:29:45 AM3/11/93
to
In article <alms-110...@alms.cambridge.apple.com> al...@cambridge.apple.com (Andrew LM Shalit) writes:

In article <JROBBINS.93...@kingston.cs.ucla.edu>,


jrob...@kingston.cs.ucla.edu (Jason Robbins) wrote:
>
>
> I think of encapsulation as being able to know ALL the ways the state
> of an object can change by looking at some bounded amount of source
> code. In CLOS and Dylan the number of lines of source you must study
> in order to see when objects might be modified is _not_ syntactically
> bounded. This is the symbolic equivalent of a dangling pointer writing
> over your data in C.

Dylan certainly supports encapsulation. It just doesn't require that
encapsulation boundaries be identical to class boundaries. Instead, it
uses a module system to create encapsulation boundaries. Recent Smalltalk
systems and derivatives (Component Software, and I believe QKS Smalltalk)
recognize the need for this additional module layer, on top of classes.

I agree that modules are definitely a good thing, and an oversight in
most current OOPL's.

My claim was not that "classes==modules which is bad", but rather that
CLOS doesn't have a good way of doing modules/packages. I think that
the unmarked delimiters of CLOS/Lisp/Dylan are part of the reason why
the packages of CLOS are done wrong.

Will Dylan modules have a syntax that _textually_encloses_ the
encapsulated classes, or will it be like the packages of
CLOS? You can tell what I would vote for.

-jason

Matt Wright

unread,
Mar 11, 1993, 3:20:14 PM3/11/93
to
s...@otter.hpl.hp.com (Steve Knight) responds to the question "Can someone
argue for generic functions on philosophical grounds?":

>Yes -- of course one can. But it is basically a weak approach. The multi-
>method approach frees the programmer from specialising on a principal
>argument (the "receiver") by allowing you to specialise on any selection
>of arguments. But it does not prevent you from doing so. It also gives
>methods the status of procedures -- making them first-class, avoiding
>introducing superfluous syntax, and so on. These are two of the most
>significant issues.

Maybe we message-passing fuddy-duddies would be happy if there were a piece
of syntactic sugar that let us write things like

(define-class gizmo
  (method (foo x y) ...)
  (method (bar a b) ...)
  ...)

as a shorthand for

(define-method (foo gizmo x y) ...)

(define-method (bar gizmo a b) ...)


and have

(ask obj method arg1 arg2)

stand for

(method obj arg1 arg2)


I know the details of this are lame (like no adding methods to a class after
you define it), but it's just off the top of my head. (And no, I'm not
proposing to add it to the core language! It would just be a macro.)

The point is that to me, OOP is about describing the behaviors of classes,
and it helps me keep things straight if I say "OK, now I'm going to code the
GIZMO class", rather than having to constantly think about the gizmo
instance as just another argument to the method. I know that both styles
say the same thing, but I just feel more comfortable thinking about it the
non-generic function way.

-Matt

Jason Robbins

unread,
Mar 11, 1993, 7:51:08 AM3/11/93
to
In article <alms-110...@alms.cambridge.apple.com> al...@cambridge.apple.com (Andrew LM Shalit) writes:
jrob...@kingston.cs.ucla.edu (Jason Robbins) wrote:
> languages are Turing complete anyway. It is also the _limitations_ of a
> system that define its style.

Your last sentence is certainly true. However, the comparison of
generic functions to assembly language misses the mark.

OK, another low blow, I admit it. I know that the a-word is a heavily
loaded term, especially with symbolic-type people. I didn't really
mean to say that Lisp == asm, I just wanted to say that GF's overgeneralize.

> Can someone argue for generic functions on philosophical grounds?

Message-passing supports you in factoring your program across a
single dimension (a class). Factoring over this single dimension can
be expressed very clearly. If additional dimensions come into play,
though, you have to create new mechanisms by hand.

Multi-methods support you in factoring your program across multiple
dimensions. This doesn't mean that your program becomes unstructured.
It just means that the structure can encompass more facets of
the problem. This factoring process can make the interactions between
the dimensions very clear.
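The "create new mechanisms by hand" cost is concrete: with mono-methods, dispatching on a second argument means bouncing through a second round of method calls, the classic double-dispatch workaround. A minimal Python sketch (the shape classes are invented for this example):

```python
# Illustrative Python sketch of the hand-built mechanism a mono-method
# language needs where a multi-method would do: the first dispatch picks
# the receiver's class, then the receiver hands the SECOND dispatch to
# the other argument (double dispatch).

class Circle:
    def intersect(self, other):
        # First dispatch chose Circle; let `other` do the second dispatch.
        return other._intersect_circle(self)
    def _intersect_circle(self, circle):
        return "circle/circle"
    def _intersect_square(self, square):
        return "square/circle"

class Square:
    def intersect(self, other):
        return other._intersect_square(self)
    def _intersect_circle(self, circle):
        return "circle/square"
    def _intersect_square(self, square):
        return "square/square"
```

With multi-methods each of the four cases would simply be its own method on the pair of argument classes.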

You're right, GF's are more flexible than single methods, and
single-method languages sometimes require work-arounds that would not
be needed if multi-methods were available.

BUT how often does that really happen?

AND is it worth watering down the OO paradigm to the point that Dylan
programmers will forever be working on writing _functions_, calling
functions, debugging functions, thinking in terms of functions, and
designing functions?

If Dylan is really going to be "OO for the rest" then maybe you guys
at Apple East should try to figure out a message-passing paradigm that
can be extended to multi-methods _without_ moving us from the OO
revolution back to the structured programming revolution. Such a setup
might even be more efficient than GF's, who knows.

This is honestly meant as constructive criticism:
The Dylan language design lacks innovation.

-jason

Jason Robbins

unread,
Mar 11, 1993, 8:30:46 AM3/11/93
to
In article <alms-110...@alms.cambridge.apple.com> al...@cambridge.apple.com (Andrew LM Shalit) writes:

In article <JROBBINS.9...@kingston.cs.ucla.edu>,
jrob...@kingston.cs.ucla.edu (Jason Robbins) wrote:
>
> reaction #4a: The notion the Dylan is "objects all the way down" is
> made meaningless by the "Seal" construct. Seal looks like a way to
> turn an existing OO class into a non-OO functional representation,
> which would be nice if the compiler did it for you when you were ready
> to ship your product, but when 16 out of 40 classes described in the
> book are already sealed it seems like hype.

When the Dylan manual says that it is "objects all the way down", it
does not mean, "specializable all the way down", nor does it mean,
"redefinable all the way down". It means what it says, "objects all
the way down".

(Dylan != Raw bits argument deleted)

Response #2

Sealing provides two capabilities:

1) It allows programs to be compiled more efficiently. This is really
just a bow to the reality of the day.

I like that; I would like it even more if the compiler figured it out
by itself. It could seal a class every now and then while the screen
saver is running. If you later try to modify that class it would
"unseal" it by reloading the source if by no other means. When you are
ready to ship it would seal everything.

2) It allows programmers to declare constant properties of their program,
and ensure that any additions made later will not violate those constant
properties. I believe that this capability is important in delivering
secure, reliable software.
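One such constant property can be mimicked outside Dylan. A rough Python analogue (NOT Dylan's actual sealing construct; the class name is invented) of a class that is open for instantiation but closed against later additions by subclassing:

```python
# Rough Python analogue of one sealed "constant property": the class may
# be instantiated and used freely, but any later attempt to subclass it
# is rejected at the point where the subclass is defined.

class SealedPoint:
    def __init_subclass__(cls, **kwargs):
        raise TypeError("SealedPoint is sealed; subclassing is not allowed")

    def __init__(self, x, y):
        self.x, self.y = x, y
```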

I like that also. But that is what modules do. I would change "later"
to "elsewhere".

-jason

Carl L. Gay

unread,
Mar 11, 1993, 6:06:14 PM3/11/93
to

#1) Name hiding does limit the amount of source you have to look at;
it is a _really_big_ convenience. Unfortunately, CLOS doesn't do it very well.

#3) You're right. Dangling pointers are worse. I just wanted to give
readers that sinking feeling that people get when they have to debug a
program and they don't know where to start looking for the bug.

Not that this is an answer to your argument in the general case, but
I'd like to point out that this problem can be largely solved by the
programming environment. I rarely got that sinking feeling on a
Symbolics machine because I could easily view all the source code that
modified a given data structure with Edit Callers <an-identifier>.

It also gives methods the status of procedures -- making them
first-class, avoiding introducing superfluous syntax, and so on.
These are two of the most significant issues.

Sounds like you are trying to be backwards compatible with something.
Why not move forward and start really thinking in terms of objects? If
you stay grounded in the structured programming revolution and only
admit objects so long as they fit nicely into your old mental model,
then you will end up with a new Ada: procedural decomposition with
abstract data types pretending to be objects. Of course Ada is better
at that than CLOS.

Paraphrased from the Booch book (although it has been widely known):
Objects are a better center for the design of useful programs because
they are usually the most stable part of the requirements: users will
always want more and different functions, but the objects they need
tend not to change. Adding/changing functions often means drastic changes
to other functions; that is why so many functionally decomposed
systems tend to get patched as time passes.

Using generic functions instead of message passing needn't change the
way you *design* your program. You can still center the design around
the objects to be modelled.

I can see how using (foo blah) instead of (send blah :foo) could
obscure the fact that blah is a class instance, and therefore may make
the *implementation* of your program more confusing to understand.

Jason Robbins

unread,
Mar 11, 1993, 11:54:06 AM3/11/93
to

#1) Name hiding does limit the amount of source you have to look at;
it is a _really_big_ convenience. Unfortunately, CLOS doesn't do it very well.

#3) You're right. Dangling pointers are worse. I just wanted to give
readers that sinking feeling that people get when they have to debug a
program and they don't know where to start looking for the bug.

Not that this is an answer to your argument in the general case, but
I'd like to point out that this problem can be largely solved by the
programming environment. I rarely got that sinking feeling on a
Symbolics machine because I could easily view all the source code that
modified a given data structure with Edit Callers <an-identifier>.


Programming environments are great; everyone should have one.
BUT even programmers who have them will probably need some way to
communicate with those that don't: through ftp, email, printouts, on
the phone, etc. The language needs to have some communicable form,
and that might as well be the normal one.


It also gives methods the status of procedures -- making them
first-class, avoiding introducing superfluous syntax, and so on.
These are two of the most significant issues.

Sounds like you are trying to be backwards compatible with something.
Why not move forward and start really thinking in terms of objects? If
you stay grounded in the structured programming revolution and only
admit objects so long as they fit nicely into your old mental model,
then you will end up with a new Ada: procedural decomposition with
abstract data types pretending to be objects. Of course Ada is better
at that than CLOS.

Paraphrased from the Booch book (although it has been widely known):
Objects are a better center for the design of useful programs because
they are usually the most stable part of the requirements: users will
always want more and different functions, but the objects they need
tend not to change. Adding/changing functions often means drastic changes
to other functions; that is why so many functionally decomposed
systems tend to get patched as time passes.

Using generic functions instead of message passing needn't change the
way you *design* your program. You can still center the design around
the objects to be modelled.

Centering your design on one aspect and coding it around another
sounds like you are doing the work of a compiler. Wouldn't it be
better if you could think in terms of objects and code in terms of
objects?

I can see how using (foo blah) instead of (send blah :foo) could
obscure the fact that blah is a class instance, and therefore may make
the *implementation* of your program more confusing to understand.

The reason that the implementation is hard to understand is that it
is in conflict with the design. Definitely confusing, and confusion
means beaucoup bugs.

-jason

Peter Norvig - Sun Microsystems Labs BOS

unread,
Mar 12, 1993, 3:54:43 AM3/12/93
to

Brian Harvey writes:
>Yes, but with generic functions you have to define (+ <int> <real>) and
>(+ <real> <int>) separately, which is just as bad. As the number of
>types increases, you get N^2 methods to define.

>What you *really* want to be able to do is only define the N methods that
>apply when the two arguments are of the same type, and then independently
>define methods for raising a single number from, e.g., int to real, and
>have the language be clever enough to try raising if it can't find an
>appropriate method. But once you decide on that approach, generic
>functions are no better (in this example) than message-passing.

Generic functions allow you to do what Brian wants quite easily:

(define-method binary-add ((x <t>) (y <t>))
  (let ((<c> (least-common-superclass x y)))
    (if (eq? <c> <t>)
        (error "Can't coerce ~s and ~s to an addable class." x y)
        (binary-add (as <c> x) (as <c> y)))))

Now just add a definition of least-common-superclass, and you can
still write individual methods for <int> <int> or <real> <real>
or <3D-symbolic-tensor> <3D-symbolic-tensor> or whatever.
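For readers who don't follow the Dylan syntax, the same fallback trick can be rendered in Python (illustrative only, not Norvig's actual code; the coercion "ladder" below is invented for the sketch, where Dylan's `as` and a real `least-common-superclass` would be richer):

```python
# Coercion fallback: when argument types differ, coerce both to a common
# class and retry the addition at that class.

from fractions import Fraction

_LADDER = [int, Fraction, float]   # each class coerces to the ones after it

def least_common_class(x, y):
    """Stand-in for least-common-superclass: the wider of the two types."""
    return _LADDER[max(_LADDER.index(type(x)), _LADDER.index(type(y)))]

def binary_add(x, y):
    if type(x) is type(y):
        return x + y                      # the "same type" method
    c = least_common_class(x, y)
    return binary_add(c(x), c(y))         # coerce both, then retry
```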

________________________________________________________________________
Peter Norvig Email: Peter....@East.Sun.COM
Sun Microsystems Laboratories Tel: (508) 442-0508
Two Elizabeth Drive Fax: (508) 250-5067
Chelmsford MA 01824-4195 USA (Sun Internal mail stop: UCHL03-207)
________________________________________________________________________

Mike Haynie

unread,
Mar 12, 1993, 9:43:09 AM3/12/93
to

[...] Lots of good discussion deleted...

OK, if you only specialize on the first argument then you can be
more OO. I have heard that GF's and multiple inheritance are used only
very infrequently (by application programmers) in languages which
support them, because single inheritance and single dispatching are
more natural and understandable. The multi-method aspect of GF's is
not nearly so offensive as the fact that they are _functions_.

By way of counter example, I and about a dozen others are working on a
large (~17MB) application using CL and CLOS. Of the hundred or so
classes in use, only a dozen or so have a single ancestor class, and
most of those inherit from our application equivalent of
<standard-class>. The remainder are interesting combinations of two or
more other classes. In addition, about half of the functional entities
are generic functions. The remainder are helper functions and macros.
All of the key interfaces are based on generic functions.

The application domain (Engineering design) is such that only an OO
approach is viable, and it is quite clear that this application could
not be written with reasonable effort without the support afforded by
multiple inheritance and generic functions.

Of course, we break designs down by what things *do*, rather than what
they *are*; we find better re-use and more flexible design this way,
G. Booch notwithstanding. By that, I mean that we choose classes
based on the operations to be supported, and the classes are kept
small. The data structures (i.e. the models) are organized based on what
they *are*. This dichotomy is (I think) only possible because of
multiple inheritance.

[...]


A 100 line function in C would be ok to work with, a 100 line function in Lisp would be a disaster.

*I* would not be happy with a 100 line function in C ;-)

-jason

--

____/|
Michael Haynie \ o.O| ACK!
m...@wisdom.attmail.com =(_)= THPHTH!
U

Andrew LM Shalit

unread,
Mar 12, 1993, 12:09:41 PM3/12/93
to
In article <JROBBINS.93...@kingston.cs.ucla.edu>,

In the majority of cases, the compiler will do this automatically.
However, because classes and functions are sometimes used as first-class
objects (as arguments to functions), there are limits to this automatic
analysis.

Defining sealing does two things here: it pins down the semantics of
the change (for both the automatic and the manual case), and it allows
the manual declaration when the sealing cannot be inferred.

> 2) It allows programmers to declare constant properties of their program,
> and ensure that any additions made later will not violate those constant
> properties. I believe that this capability is important in delivering
> secure, reliable software.
>
> I like that also. But that is what modules do. I would change "later"
> to "elsewhere".

There may be times when you want to export an object, but only for
a limited purpose. For example, you may want to export a class for
instantiation, but not for subclassing. Sealing allows this. I don't
see how modules do.

-Andrew Shalit
Apple Computer

Jason Robbins

unread,
Mar 12, 1993, 5:27:44 AM3/12/93
to
In article <alms-120...@alms.cambridge.apple.com> al...@cambridge.apple.com (Andrew LM Shalit) writes:
alms:
2) [sealing] allows programmers to declare constant properties of

their program, and ensure that any additions made later will not
violate those constant properties. I believe that this capability is
important in delivering secure, reliable software.
jrobbins:

I like that also. But that is what modules do. I would change "later"
to "elsewhere".
alms:

There may be times when you want to export an object, but only for
a limited purpose. For example, you may want to export a class for
instantiation, but not for subclassing. Sealing allows this. I
don't see how modules do.

jrobbins:
I can't see why you would want to do that, but it would limit reuse.
Maybe you would if you wanted to sell a library and charge extra for
the ability to subclass... But you are right, I don't see how modules
would do that, unless the module had a function which made an instance.

Having modules with stray functions inside them brings us back to
structured programming. Why not replace modules with nested classes:
instances of the higher-level class would serve to limit access to the
inner classes; it could also have its own state (for example, permission
bits), and multiple instances of the outer class could have different
permissions (thus you could upgrade your subclassable/non-subclassable
store-bought library with a password). If Dylan is dynamic then why
not in that way, "objects all the way up". It would, of course, mean
that inner classes which might be exported and subclassed could not be
sealed. But I suspect that outer classes would be used more often as
chunky objects: an outer instance would make inner instances when it
is made. That would allow an encapsulated place for the code which
sets up nets of highly collaborative objects; in most current
languages that code is spread through the collaborative objects themselves.

Nested classes aside, will Dylan allow nested modules, or just
disjoint ones?

-jason

Aaron Sloman

unread,
Mar 12, 1993, 4:40:48 PM3/12/93
to
s...@otter.hpl.hp.com (Steve Knight) writes:

> Date: 10 Mar 93 16:44:23 GMT
> Organization: Hewlett-Packard Laboratories, Bristol, UK.
>
> Jason Robbins writes:
> > ...


> > reaction #4: (after finishing the book) Hang on. That was not OO at
> > all. I don't even think that it meets the definition of OOness.

I have talked to many people about OOness and am amazed

a. At the variety of different views as to what is essential
to object orientation
b. At the strength of emotion with which these views are
asserted, defended, etc.

I've even met people who think "object oriented" means "based on or
using mouse-manipulated objects on the screen". (It's a bit like
defining "healthy" in terms of the qualities that make a climate
healthy, rather than what it is for a person to be in good health.)

Then there are those who think the essence of OOness is message
sending, because they have never noticed that message sending is
(usually, in non-distributed systems) just syntactic sugar for
function calls that dispatch on the type of the first argument.
(Which is why such people can't appreciate that multi-methods are a
powerful generalisation.)
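Aaron's claim can be made concrete: a message send is, operationally, a function call that dispatches on the type of its first argument. A small Python sketch (the shape classes are invented for illustration) using the standard library's `functools.singledispatch`:

```python
# A "message send" obj.area() without the message syntax: a plain
# function call that dispatches on the class of its first argument.

from functools import singledispatch

class Circle:
    def __init__(self, r):
        self.r = r

class Square:
    def __init__(self, s):
        self.s = s

@singledispatch
def area(shape):
    raise TypeError("no area method for %s" % type(shape).__name__)

@area.register
def _(shape: Circle):
    return 3.14159 * shape.r ** 2

@area.register
def _(shape: Square):
    return shape.s ** 2
```

Multi-methods generalise exactly this: dispatch on the types of several arguments rather than only the first.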

And of course there are those who previously used Fortran or Basic,
and when they are introduced to a language with a variety of data
types, including user-definable datatypes, which happens to be an OO
language, they conclude that having datastructures of various types
is what makes an OO language, not having the experience to grasp the
difference between e.g. C or Pascal and an OO language.

Finally there are those who go on about encapsulation, and hiding
data from everyone outside, and similar misplaced metaphors.

I wonder how many other views there are of the essence of OO-ness?

If instead of asking what the definition is, or what the key idea
is, or what your favourite god claims it is, you ask what makes OOness
in its various forms most important for software engineering, I
think the answer is clearly the inheritance mechanisms which allow
software to be designed in a highly modular and re-usable form,
without requiring any code to be repeated just because the same
algorithm is used in a variety of contexts, or portions of data-type
declarations to be repeated because different data classes have
common sub-structures.

(Of course, the combination of multiple inheritance (classes
inheriting data and methods from two or more superclasses) and
multi-methods (methods that dispatch on the types of two or more of
their arguments) leaves open lots of scope for semantic confusion
and ambiguity, e.g. regarding whether to switch on the type
information available from the classes mentioned in method
definitions, or to switch on the types of the actual arguments
presented at run time (which could be instances of sub-classes of
the classes mentioned in the procedure definition), and language
designers have a hard problem coming up with good general solutions.
Answer: provide both. Does Dylan?)

But I've met people who claim to be fans of OOness who haven't even
heard of inheritance, including one who had been using C++ for
several months!

[sfk]


> A lot of people have this reaction when meeting multi-method style
> OO languages for the first time. There's no easy way to soften the
> response, unfortunately, because they are wrong. One can rephrase the
> objection as "that's not the way Smalltalk works", or whatever, and then
> we can agree. But, unpleasant though it might be, it really is OO.

Clearly in view of the multiplicity of interpretations of "object
oriented" it is time to drop the concept and start introducing new
precisely defined technical terms that correspond to the different
current interpretations. I'd use "inheritance oriented" for the
combination of ideas that I regard as most important for software
engineering.

> Those of us who have programmed with CLOS style multi-methods find
> their generality invaluable. They completely encompass the limited range
> of object-oriented techniques available in mono-method systems and go
> beyond it in useful and intuitive ways.

Having used your Objectclass system (an extension to Pop-11), and
introduced students to OOP through it, I confirm this.

> The key difference is that method definitions and class definitions are
> not so intimately related. And why should they be? If you've
> programmed in Smalltalk, then you'll probably be familiar with that
> teeth-grinding moment when you discover that closely associated methods
> need to be thinly spread all over the class hierarchy.

E.g. a method for linking an object of type A with an object of type
B on the screen. Should that be associated with Class A or Class B?
Should the "marry" method be associated with class man, or class
woman?

(There are lots more examples, including examples involving more
than two classes)

Aaron
---
--
Aaron Sloman,
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sl...@cs.bham.ac.uk OR A.Sl...@bham.ac.uk
Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281

Jason Robbins

unread,
Mar 12, 1993, 9:23:38 AM3/12/93
to
In article <MARKF.93M...@scoupe.harlqn.co.uk> ma...@harlqn.co.uk (Mark Friedman) writes:

Jason> My point was that too much generality washes out the
Jason> advantages of OOP. You deleted that part.

Jason> In CLOS another piece of source in another file can say
Jason> (in-package 'your-private-package) and start messing around.

Jason> My single biggest problem with GF's is that they promote
Jason> functional decomposition. (kinda like promoting tooth decay).

Jason> But objects are more understandable to humans than functions
Jason> are, precisely because they have real world experience with
Jason> real world objects.

I think that we are dealing with a more fundamental difference in
attitude than OO vs. functional. I think Jason objects to the
historical lispers attitude of open systems and multiple paradigms.
The purpose of CLOS (and packages and first class procedures and catch
and throw and assigments and the LOOP macro and ...) is to support
certain paradigms, not to enforce them. This is different than many
peoples' attitudes about what programming languages and environments
should do.

I would like whatever language I use to support encapsulation.

I don't think that you can get a property like encapsulation by adding
on more features or throwing in more ideas, you have to take something out.

CLOS is halfway towards "anything goes". You can have objects, but you
can also have functions that break their encapsulation, so they are
not really objects. You can have packages, but you can also get around
their modularity, so they don't make good packages. The main paradigm
is functional, but the presence of objects implies side-effects, so it
is not all that functional. CL and CLOS are really a mess, and making
Dylan also a mess will not attract very many traditional programmers.

I think that attitude severely limits its appeal for use on real projects.

-jason

William M. York

unread,
Mar 12, 1993, 9:50:07 AM3/12/93
to

Some of us remember the old Symbolics Flavors system, which used a
"message-passing" style. This meant that once you had become
comfortable with Lisp you had to learn a whole new set of concepts in
order to cross the threshold into OOP. This was a bad thing (ask any
of the former Symbolics course instructors).

It seems that you ("you" referring to the CLOS community) missed the OO
revolution. You are not supposed to think of objects as being
equivalent to groups of functions; that is what the compiler is
supposed to do. OO programmers are supposed to be able to think of
their programs as communicating objects (a view that has some _real_
advantages, see my next posting). Moving to CLOS without learning to
think of objects as objects is like moving from C to C++ without
rewriting any of your (unOO) C programs. It's like trying to do OO
programming in Ada (sorry for that low blow).

The point that I was trying to make above is that the Lisp OOP
community STARTED with the message-based paradigm ("Why, I was sending
messages when you were only knee-high to a grasshopper"), found it
wanting (due to the semantic limitations and syntactic extra baggage
discussed previously). We then took a step FORWARD to the generic
function paradigm. Some of the rest of the OOP community is now
wrestling with these same issues, in the same way that the GUI
community is tripping over the kinds of window system issues that the
Interlisp and MIT window systems wrestled with a decade ago.

One last attempt: one of the things that the generic function paradigm
does is to encourage you to think in terms of protocols, rather than
the objects that implement them.

That is, you first sit down and write

(defgeneric stream-p ...)
(defgeneric stream-read-char ...)
(defgeneric stream-write-char ...)
(defgeneric stream-force-output ...)

and so on. Then you can choose to make some of your objects "protocol
participants" by mixing in classes that support the protocol, or
writing methods for the protocol functions on your class.

The whole :WHICH-OPERATIONS mechanism was a band-aid to cover up for
the fact that the protocols were ill-defined, and you could never tell
which object handled which set of messages. The goal in my
object-oriented systems is to now be clear about the "roles"
undertaken by the objects. That is, an object that returns T to
STREAM-P is guaranteed to support the entire stream protocol,
regardless of whether it is a file stream object or a window (or a
debugging/scripting tool).
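
For readers without a Lisp machine handy, the protocol-first style described above can be sketched in Python (the names here are illustrative stand-ins, not part of any real stream library; Python's `singledispatch` plays the role of a single-argument generic function):

```python
from functools import singledispatch

# The protocol comes first: generic functions with default behavior.
@singledispatch
def stream_p(obj):
    return False                     # by default, nothing is a stream

@singledispatch
def stream_read_char(obj):
    raise TypeError("not a stream: %r" % (obj,))

# A class becomes a "protocol participant" by supplying methods for
# the protocol functions -- the class itself comes second.
class StringStream:
    def __init__(self, text):
        self.text, self.pos = text, 0

@stream_p.register(StringStream)
def _(obj):
    return True

@stream_read_char.register(StringStream)
def _(obj):
    ch = obj.text[obj.pos]
    obj.pos += 1
    return ch

s = StringStream("ab")
print(stream_p(s), stream_read_char(s), stream_read_char(s))
```

Anything that answers true to `stream_p` is expected to support the whole protocol, which is exactly the guarantee described above.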

William M. York

Mar 12, 1993, 9:55:49 AM

You're right, GF's are more flexible than single-methods, and
single-method languages sometimes require work-arounds that would not
be needed if multi-methods were available.

BUT how often does that really happen?

Quite often, once you have the capability. I remember being in your
position, wondering whether multi-methods would be useful. I even
remember wondering whether the switch from "good old" message-based
Flavors to generic-function-based New Flavors was a good thing. Now,
4 and 7 years later (respectively) I think that the answer is "yes".

AND is it worth watering down the OO paradigm to the point that Dylan
programmers will forever be working on writing _functions_, calling
functions, debugging functions, thinking in terms of functions, and
designing functions?

This is a perspective issue. I think of multi-methods as an
enhancement, not a watering-down of OOP.

This is honestly meant as constructive criticism:
The Dylan language design lacks innovation.

Again, from my perspective Dylan is trying to take all the things that
we have learned so far and package them effectively.

Jason Robbins

Mar 12, 1993, 9:49:06 AM
In article <C3spK...@cs.bham.ac.uk> a...@cs.bham.ac.uk (Aaron Sloman) writes:

> Jason Robbins writes:
> > reaction #4: (after finishing the book) Hang on. That was not OO at
> > all. I don't even think that it meets the definition of OOness.

I have talked to many people about OOness and am amazed
a. At the variety of different views as to what is essential
to object orientation
b. At the strength of emotion with which these views are
asserted, defended, etc.

One very common definition is OO = encapsulation, inheritance,
polymorphism, dynamic binding; I think we can all (almost) agree on
that. I would add that OO is most useful when programmers and
users can _think_ about the system as a simulation of something. That
something might be some physical phenomena (like paper flow through a
company) or some abstract phenomena (like searching a hash table).
When the design of the software system is almost isomorphic with the
'real' system then understandability is greatest and the design is most
obvious. Of course, OOP is also a good way to program just because of
the four things listed above, although "made with objects" is a better
description of software systems which are not simulating anything.


Then there are those who think the essence of OOness is message
sending, because they have never noticed that message sending is
(usually, in non-distributed systems) just syntactic sugar for
function calls that dispatch on the type of the first argument.

You are doing the compiler's job.
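
The "syntactic sugar" point is easy to make concrete; a sketch in Python, with hypothetical names:

```python
class Window:
    def draw(self):                    # message-passing style: w.draw()
        return "drawing a window"

# The generic-function reading of the same call: an ordinary function
# that dispatches on the type of its first argument.
def draw(obj):
    return type(obj).draw(obj)         # this is all obj.draw() does

w = Window()
print(w.draw() == draw(w))             # the two notations agree
```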

Finally there are those who go on about encapsulation, and hiding
data from everyone outside, and similar misplaced metaphors.

Hardly a misplaced requirement. Imagine a large program with little
encapsulation or modularity: your code would be misplaced, and before
long the customer's check would get 'misplaced'.

I think the answer is clearly the inheritance mechanisms which allow
software to be designed in a highly modular and re-usable form,
without requiring any code to be repeated just because the same
algorithm is used in a variety of contexts, or portions of data-type
declarations to be repeated because different data classes have
common sub-structures.

Inheritance is good, but it by itself does not make software "highly
modular and re-usable". Modularity does that. And designing for
re-usability involves many extra-lingual considerations.

(Of course, the combination of multiple inheritance (classes
inheriting data and methods from two or more superclasses) and
multi-methods (methods that dispatch on the types of two or more of
their arguments) leaves open lots of scope for semantic confusion
and ambiguity, e.g. regarding whether to switch on the type
information available from the classes mentioned in method
definitions, or to switch on the types of the actual arguments
presented at run time (which could be instances of sub-classes of
the classes mentioned in the procedure definition) and language
designers have a hard problem coming up with good general solutions:
answer: provide both. Does Dylan?)

Confusing: yes. I would rather see Dylan get simpler than add the
equivalent of C++'s virtual/non-virtual functions.

> The key difference is that method definitions and class definitions are
> not so intimately related. And why should they be? If you've
> programmed in Smalltalk, then you'll probably be familiar with that
> teeth-grinding moment when you discover that closely associated methods
> need to be thinly spread all over the class hierarchy.

E.g. a method for linking an object of type A with an object of type
B on the screen. Should that be associated with Class A or Class B?
Should the "marry" method be associated with class man, or class
woman?

In your particular example you could have aPerson marry: aPerson and
use inheritance...

You are starting to convince me some. I already know that
multi-methods can get you out of some awkward moments in Smalltalk.

My main problems with them is that (1) they break down encapsulation,
and (2) they make the programmer design around verbs instead of nouns.
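
To make the object of the complaint concrete: a multi-method is simply a function whose implementation is selected by the runtime types of more than one argument. A minimal sketch in Python (the dispatch table and names are illustrative; real Dylan/CLOS dispatch also handles subclassing and method combination):

```python
# A toy multi-method: dispatch on the types of BOTH arguments.
_link_methods = {}

def define_link(type_a, type_b, fn):
    _link_methods[(type_a, type_b)] = fn

def link(a, b):
    fn = _link_methods.get((type(a), type(b)))
    if fn is None:
        raise TypeError("no applicable link method")
    return fn(a, b)

class Box: pass
class Arrow: pass

# Neither Box nor Arrow "owns" these methods -- precisely the
# grouping question under discussion.
define_link(Box, Arrow, lambda a, b: "attach arrow tail to box")
define_link(Arrow, Box, lambda a, b: "attach arrow head to box")

print(link(Box(), Arrow()))
```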

Can anyone propose a way to have joint receivers without switching
mental models back to functions? Don't say GF's.

-jason

Brian Harvey

Mar 12, 1993, 9:22:21 PM

Well, as luck would have it, it just so happens that today I gave my
standard lecture on OOP to my data structures class, and it's all about
these very issues, so I thought I'd share it with you and see how many
flames it will attract.

Here goes:

What Is Object Oriented Programming?

As you know, any time you get six computer scientists together and ask
them any question whatsoever, you always get at least half a dozen
different answers -- unless the question is "What do you think about
object-oriented programming," in which case they all say "It's the
greatest thing ever invented!"

What can this mean?

Actually, what it means is that they're not all talking about the same
thing. Each has taken his or her favorite idea and called it "OOP."
So this lecture can't really live up to its title, because there really
is no such thing as OOP. Instead, this will be a lecture about the
various types of computer scientist, and what each type means by OOP.

There are four types; I'll describe them from low to high on the
evolutionary scale.

1. The Paranoid.

Some computer scientists think that the most important thing in all of
computer science is to ensure that you can't possibly ever make a mistake
in writing a computer program, no matter how stupid or careless you are.
To this end, they judge their tools not by what the tool *allows* you to
do, but by what it *prevents* you from doing. The more prevention, the
better they like it. Computer scientists of this class tend to like
Pascal, and/or to have several alphabetically consecutive letters in
their names.

A type-1 computer scientist, asked what's so great about C++, will
say something like this: In C, you can create structs, which are new
data types containing fields. What's OOPy in C++ is that you can
declare certain fields of a struct to be *private*, which means that
only a few specially privileged procedures, associated with that
object type, are allowed to make reference to those fields. This is
an OOPy feature because it doesn't allow the C++ program to do anything
that the corresponding C program couldn't have done; instead it *restricts*
what the C++ program can do. Isn't that great?


2 and 3. Failed Religions.

Categories 2 and 3 are computer scientists who have established a religion
in which some programming technique or style is deemed to be immoral. The
result is a more elegant form of programming. However, the result is also
a form of programming that really doesn't cut the mustard -- you just can't
get practical work done in it. So, instead of admitting that they were
wrong, these computer scientists reinvent the forbidden feature under some
other name, and call it OOP. Details follow:


2. The Religion of Strong Typing.

One failed religion in computer science is the one that says that every
variable must have a "type" associated with it, and can never take on
any value outside of that type. This restriction is alleged to prevent
bugs by preventing the use of a variable for a purpose other than the
one originally intended.

So, for example, members of this religion don't like programs such as

(define (square x)
  (* x x))

because this program is too sloppy about the type of the variable x.
Instead they want you to write this:

int square_of_int(int x) {
    return x * x;
}

float square_of_float(float x) {
    return x * x;
}

This religion elevates an accident of history, concerning the ways in which
numbers are represented inside computers, to a principle.

But of course it's intolerable to have to define many different procedures
that all do the same thing, just because the types are different. OOP to
the rescue! A type-2 computer scientist will explain that C++ is OOPy
because it provides *overloading*, a feature that allows the two procedures
above to be given the *same name*. C++ figures out automatically which one
to use by checking the type of the actual argument. (You still have to
define the procedure twice; the amazing feature is that you can use the
same name for both.)

[Anecdote: A colleague of mine once explained at a faculty meeting that
we should teach our introductory CS course using C++ instead of Scheme,
because in C++, supposing you want to be able to have linked lists of
integers, and you also want to be able to have linked lists of floating
point numbers, you only have to write the implementation of linked lists
once!]


3. The Religion of Referential Transparency.

Some computer scientists believe that you should be able to read a program
and understand the thing to which a particular use of a particular symbol
refers, just from the text of the program, without thinking about the
context in which it's used. For example, these computer scientists reject
the use of dynamic scope, because the variable associated with a given
symbol under dynamic scope may depend on which procedure *invoked* the
procedure in which it occurs, something that can't be known until the
program is running. Under lexical scope, by contrast, the variable
associated with a given symbol in a given context depends only on the
location of that context within the text of the program -- whether, that
is, the procedure in which it occurs is *defined* within some other
procedure.

The trouble is that sometimes the meaning of a symbol *should* depend
on the context in which it's used. So these computer scientists have
had to fall back on the idea of *inheritance*, whereby the meaning of
a symbol used within a method of one class may depend on definitions
in some other class from which this one inherits. A dynamic change in
the methods or instance variables of that other class modifies the
meaning of symbols used in the child class.

(This aspect of inheritance is really more relevant to a dynamic OOP
language like CLOS or Dylan than to one like C++, in which everything
is defined ahead of time. Type-3 computer scientists are often Lisp fans.)


4. OOP as Metaphor.

The fourth category of computer scientists focuses not on technical
capabilities but on ways of thinking. People in this category know
that every language is technically equivalent to every other language;
one is better suited than another to some task because it allows the
*programmer* to understand the task more easily, not because it allows
the *computer* to solve it more easily.

Certain problems are most naturally understood as an interaction among
independent actors. The paradigmatic example is simulation. You want
to decide whether to add a lane to the Bay Bridge, or turn an existing
lane into a carpool lane, so you write a computer program to simulate
the traffic. This program's most natural form is one in which each
car (that is, each driver, really) carries out actions in parallel with
every other one. Each driver is affected by the actions of other
nearby drivers, but each acts separately and each has a separate
intelligence. There is no one taskmaster overhead moving all of the
cars.

Another example of a naturally object-oriented task is a window system.
The user wants to be able to manipulate any window, e.g., to move or
resize it. The most natural thing is for the user to address a request
directly to the window, not to some overall operating system that
manages all the windows. (It doesn't matter that behind the scenes
there is indeed, probably, one operating system; what matters is the
metaphor through which the user uses the system.)

What feature does a type-4 computer scientist single out as the one
that makes C++ OOPy? None! To a type-4 computer scientist, there is
nothing OOPy about C++, because it doesn't use a message-passing
metaphor. You don't ask objects to do things; your one program
calls procedures with objects as (passive) arguments. The only
OOP language is Smalltalk!

Chris Dollin

Mar 12, 1993, 3:44:13 AM
In article ... s...@otter.hpl.hp.com (Steve Knight) writes:

Occasionally, there is some confusion over the "philosophy" of OOP. Some
claim that OO is ontologically related to "the world". Such a view is
insupportable, as has been argued in punishing detail by Chris Dollin
elsewhere.

Why, thank you, Steve.

.... In contrast, consider the occasional situation in C where you
would like to alter an expression to a statement. Because of the difference
in syntactic roles between expressions and statements, the code may have
to be significantly, and pointlessly, rewritten. There are no analogous
problems in Dylan or in any Lisp.

I rather think you mean it the other way about: converting a statement
into an expression. To turn an expression into a statement, just add
water. Sorry. Just a semicolon.

Chris Dollin

Mar 12, 1993, 4:16:05 AM
In article ... jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:

AND is it worth watering down the OO paradigm to the point that Dylan
programmers will forever be working on writing _functions_, calling
functions, debugging functions, thinking in terms of functions, and
designing functions?

(a) It's only ``watering down'' the OO paradigm if generic functions
actually *do* violate the OO spirit. They don't. They might not support
it in quite the same way as class-obsessed languages do, but OO isn't
about making classes the primary programming structure; it's about
a model of computation where the state-changes to objects are mediated
only by a small set of functions that ``belong'' to the object. Whether
``belonging'' is done by making classes scopes, or by non-class module
structures, is a side-issue. IMAO.
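
The claim that ``belonging'' needn't mean class scope can be sketched; here a closure plays the role of a non-class module structure (names hypothetical):

```python
# Encapsulation without a class: only the two functions returned here
# can ever touch `balance` -- "belonging" is done by scope, not class.
def make_account(balance):
    def deposit(amount):
        nonlocal balance
        balance += amount
        return balance
    def current():
        return balance
    return deposit, current

deposit, current = make_account(100)
deposit(25)
print(current())   # 125
```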

(b) A True OO Programmer would presumably be working on _methods_,
invoking methods, debugging methods, thinking in terms of methods, and
designing methods. s/method/function/g. A Generic Function Programmer
is just as capable of working with class lattices, inheritance, and
all that gubbins as a True OO Programmer.

So what's the difference you're trying to isolate?

This is honestly meant as constructive criticism:
The Dylan language design lacks innovation.

Surely a language designed for widespread use for OO software engineering
is exactly the place to *avoid* innovation. Innovative things are new,
and untried, and have unexpected consequences, and need changing as a
result. Splendid for standards, or at least the standards business.

Chris Dollin

Mar 12, 1993, 4:03:30 AM
In article ... jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:

You're right, I misphrased. Composing is easy in Lisp, but you can't
do it too much or you get into trouble. A 100-line function in C would
be OK to work with; a 100-line function in Lisp would be a disaster.

You have evidence for this, perhaps?

[The little local evidence I have suggests that 100-line Lisp functions
are both rare and no less comprehensible than 100-line C functions.]

Dipankar Gupta

Mar 12, 1993, 8:18:02 AM
>>>>> "JR" == Jason Robbins <jrob...@kingston.cs.ucla.edu> writes:

JR> If Dylan is really going to be "OO for the rest" then maybe you guys
JR> at apple east should try to figure out a message passing paradigm that
JR> can be extended to multi-methods _without_ moving us from the OO
JR> revolution back to the structured programming revolution. Such a setup
JR> might even be more efficient than GF's, who knows.

JR> This is honestly meant as constructive criticism:
JR> The Dylan language design lacks innovation.

Is this related to your ``generics are worse than message dispatch''
argument? In any case, I find noise about XYZ revolution
disconcerting: it's the hallmark of proselytizers, not language
designers. I'm reminded of Tony Hoare's remark:

``The language designer should be familiar with many alternative
features designed by others, and should have excellent judgement in
choosing the best or rejecting any that are mutually inconsistent...
One thing he should not do is to include untried ideas of his own. His
task is consolidation, not innovation.''

[Paraphrased from ``Modula-3 Language Definition''
Cardelli, L. et al. ACM SIGPLAN Notices 27(8)]

-- Dipankar

Aaron Sloman

Mar 13, 1993, 9:48:08 AM
I think some of the recent discussion of the nature of OOness
exposes prejudices about how things OUGHT to be done that may be
based on exposure to too limited a variety of problems and
formalisms.

In what follows I'll support some of what Jason writes and then go
on to disagree regarding the point that lies behind this
interchange.

jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:

> Date: 11 Mar 93 16:54:06 GMT
> Organization: The University of California at Los Angeles


>
> In article <CGAY.93Ma...@majestix.cs.uoregon.edu> cg...@majestix.cs.uoregon.edu (Carl L. Gay) writes:
>
> In article <JROBBINS.93...@kingston.cs.ucla.edu> jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:
>
> #1) name hiding does limit the amount of source you have to look at;
> it is a _really_big_ convenience. Unfortunately CLOS doesn't do it very well.
>
> #3) You're right. Dangling pointers are worse. I just wanted to give
> readers that sinking feeling that people get when they have to debug a
> program and they don't know where to start looking for the bug.
>
> Not that this is an answer to your argument in the general case, but
> I'd like to point out that this problem can be largely solved by the
> programming environment. I rarely got that sinking feeling on a
> Symbolics machine because I could easily view all the source code that
> modified a given data structure with Edit Callers <an-identifier>.
>
> Programming environments are great, everyone should have one.
> BUT even programmers who have them will probably need some way to
> communicate with those that don't: through ftp, email, printouts, on
> the phone, etc. The language needs to have some communicable form,
> and that might as well be the normal one.

[......]

Yes, this is a very important point. In particular there will be
books, journal articles, manuals and technical reports, giving
printed examples of programs for teaching and other purposes.

Many of the defenders of lisp syntax who rightly point out that a
powerful editor can remove some of the objections to the syntax,
forget that not everyone has such lisp-friendly editors, and more
importantly forget that for many people exposure to the language
includes books, printed articles etc.

On the printed page the presence of clearly distinguished closing
brackets, and, perhaps even more importantly, intermediate markers
"elseif" ... "then" ... and the like, can make a considerable
difference to the speed and accuracy with which a human brain can
take in the structure.

I believe the Lisp Pointers magazine produced by and for lisp users
often included code examples on paper. (I have not actually seen it
recently, so I don't know if it still exists, or if it still
includes examples.)

Moreover, I know several programmers who (unlike me!) prefer to do a
lot of their work away from the terminal using pencil and paper,
including poring over listings of their own and others' code e.g.
when trying to think about improvements, generalisations, or
extensions (as opposed to de-bugging where the machine can often
help).

For this reason I regard all the comments about how editors solve
the problem as missing the main point, given that the intention is
that Dylan should be a language capable of being used for production
quality software development in a wide variety of applications, and
not just for teaching, exploring ideas, or building tools for one's
own use.

(Many academics have no idea what the requirements for production
quality software development by a large team are, so their opinions
on these matters are severely biased towards considering only their
own limited needs.)

As regards the previously discussed problem of "grouping" code when
multi-methods are used, I think the answer is that there just is no
single *right* structure, as different ways of organising
information are needed for different purposes (even when it's the
same information.)

Forcing people to use only mono-methods (or message sending) is just
a way of forcing them to use an arbitrary and sometimes
unsatisfactory solution.

All this is really just a special case of the general problem of how
to classify or organise information: in general the world does
not decompose uniquely into a tree structured taxonomy suitable for
description on a 2-D surface with with nested brackets (or a one
dimensional file of characters, if you prefer). So there is no ONE
way of structuring the information that is best for all purposes.

The difficulty of finding a general purpose classification for books
in a library is a well known example of this problem. The same thing
can happen when you try to analyse a factory that's to be simulated,
or a business for which management software is to be developed.
Different taxonomies are needed from different viewpoints.

This may well mean that code for object oriented systems may have to
have cross references, as is common in other domains where reality
is not tree structured and yet printed documents have to describe
that reality.

The style I encourage in my students is to try above all to make the
*ontology* clear by having a good layout for class definitions, with
comments and cross references to related information (essential
anyway if you use multiple inheritance).

Mono-methods can, of course, be associated with the relevant class
definitions as part of their specification, as can descriptions of
the properties (unary predicates) of various classes.

As soon as descriptive binary, ternary, etc. relations enter into
the ontology (e.g. "x causes y", "x overlaps y" "x adjoins y", "x is
part of y", "x owns y", "x provides y for z", etc) then there is NO
uniquely correct way of associating them with the types of their
arguments.

(I am talking primarily about computed relationships, not the static
relationships typically represented by making one object the value
of a slot in another object.)

So one solution is not to group the list of relations with the
argument types, but to put them into separate lists of relations of
various sorts, e.g. predicates concerned with geometrical
relationships could be grouped together, as could predicates
concerned with functional relationships, predicates concerned with
contractual relationships, predicates concerned with family
relationships, etc. etc.

Similarly with *actions* involving two or more classes of objects.

Then the class definitions for object types will have to have
cross-references to the relationships and actions that they can be
involved in.

This is the "real world" version of the problem that other
contributors have discussed only in relation to code structure.
(Compare Roget's Thesaurus: a heroic, but unsuccessful attempt to
produce a unique general purpose hierarchical classification. The
index is a way of getting round some of the inadequacies: many
index entries group items in very different categories.)

(Booch several times comes close to making these points in his book
Object Oriented Design, but, unless I have missed something, never
quite gets there.)

As regards code structure: multi-methods, instead of arbitrarily
being associated with particular object classes, may be grouped into
different categories where the relevant class is not a class of
things, but a class of relationships (methods that compute
predicates), or a class of actions (methods that change objects and
relationships).

E.g. if there are methods concerned with linking graphical objects
of various types (to choose a simple and common example), then it
can be useful to group all those "link" methods together. Similarly
all the methods for detecting overlap might be grouped together.
Similarly methods for transmitting data from one object to another.

Each of the relevant object classes will then need notes (e.g. in
the form of comments) referring to the various collections of
methods (such as the "link" collection and the "overlap" collection)
involving those classes.

Of course, a good hypertext-based software environment can replace
such notes with the methods themselves for presentation purposes.
But a printed version will require either cross references or
paper-wasting repetition. Such is life.

I can't see any reason except prejudice for saying that it is ALWAYS
better to group the code according to the data specifications, than
to group the code according to commonalities in high level
functionality. Sometimes one may be better, sometimes the other,
depending on the purposes. (More needs to be said about how
different purposes determine different preferred organisation.)

Similarly there is no reason to say that the specification of the
ontology of an application domain must ALWAYS group information
about relevant relationships and actions with the classes of objects
to which they apply: that forces arbitrary and misleading
structuring in some cases.

I suspect that the prejudice I am opposing may be held mainly by
people used to languages that don't easily support high levels of
procedural abstraction -- e.g. allowing creation of procedures
formed by specialising higher level procedures, as can be done with
lexical closures, or partial application in Pop-11. I call those
*super-routines* as contrasted with *sub-routines*.

In such a language, it is often the case that several different
methods can be created by instantiation of a single super-routine.
Grouping those methods together then helps to clarify their deep
commonality, and may simplify generalisation, debugging, etc.

Moreover, it may be useful sometimes to review and compare all the
methods for creating a link (or performing some other action-type
involving two classes of objects), e.g. if a procedure invokes
"link" and you wish to check that all the variants of link do
precisely what you expect in all cases (since otherwise you may need
to define a new method set instead.) Reviewing all the link methods
may be much harder if they are scattered all over the place in
association with the object class definitions.

(The answer that there should be a single high level specification
to which the variants of the method type all conform misses the
point that the *formal specification* may itself be buggy in
relation to a particular *informal requirement* in ways that can
sometimes be grasped only by looking closely at the implementations
conforming to the specification. Requirements can be very hard or
impossible to formalise. Sometimes it is only during testing that
the failure to meet an implicit requirement becomes clear, and then
discovering why this happens may require looking at how the explicit
specification fails to determine implementation in some relevant
way. But I am now treading on the toes of those who see formal
methods as a panacea.....)

Aaron

Matt Wright

Mar 13, 1993, 12:55:19 PM
ke...@hplb.hpl.hp.com (Chris Dollin) writes:
> OO isn't about making classes the primary programming structure; it's about
> a model of computation where the state-changes to objects are mediated only
> by a small set of functions that ``belong'' to the object.

Aha! Type-one.

Regards to Brian Harvey...

-Matt

Matt Wright

Mar 13, 1993, 1:11:40 PM
a...@cs.bham.ac.uk (Aaron Sloman) writes:
>Many of the defenders of lisp syntax who rightly point out that a
>powerful editor can remove some of the objections to the syntax,
>forget that not everyone has such lisp-friendly editors, and more
>importantly forget that for many people exposure to the language
>includes books, printed articles etc.

The only "powerful editor" features I've used for Lisp programs are a
pretty-printer and a parenthesis matcher. The only time I use the
parenthesis matcher is to move quickly around the program, or,
occasionally, to check if the pretty-printing is wrong.

I once explained pretty-printing to a Lisp-hater, who had expressed all the
same problems people in this group have been mentioning---"I can't find the
end of this statement", "it's impossible to count the parentheses to see
what goes with what." Once he realized that the indentation levels of a
correctly printed Lisp program tell you everything you need to know about
its structure, he was much happier. When my students have trouble with Lisp
code, one of the first things I usually do is have them make their editor
reformat it.

My point is this: opponents of Lisp syntax argue on the grounds that
matching parentheses are difficult across multiple-line programs, but most
of the Lispians I know usually *don't* match parentheses by hand; we just
look at (and trust!) the indentation.

Pretty printing is *not* an obscure hyper wizardly editor feature; students
could easily write one in an introductory programming course.
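
A sketch of how little machinery this takes (it ignores parentheses inside strings and comments, which a real pretty-printer must handle):

```python
def indent_levels(code):
    """Paren depth at the start of each line -- essentially the only
    information a Lisp pretty-printer needs to lay out code."""
    depths, depth = [], 0
    for line in code.splitlines():
        depths.append(depth)
        depth += line.count("(") - line.count(")")
    return depths

print(indent_levels("(define (square x)\n  (* x x))"))   # [0, 1]
```

Correctly indented output is just each line shifted right in proportion to its depth, which is why readers can trust the indentation instead of counting parens.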

>Moreover, I know several programmers who (unlike me!) prefer to do a
>lot of their work away from the terminal using pencil and paper,
>including poring over listings of their own and others' code e.g.
>when trying to think about improvements, generalisations, or
>extensions (as opposed to de-bugging where the machine can often
>help).
>
>For this reason I regard all the comments about how editors solve
>the problem as missing the main point...

When I write code by hand, I sometimes get the parens wrong, but I *always*
get the indentation right. When I pore over someone's listing, *their*
editor already got it formatted correctly, so I mostly don't have to think
about it.

-Matt

tes...@newton.apple.com

Mar 13, 1993, 5:29:58 PM
At 9:16 AM 3/12/93 +0000, Chris Dollin wrote:
>In article ... jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:
>
> AND is it worth watering down the OO paradigm to the point that Dylan
> programmers will forever be working on writing _functions_, calling
> functions, debugging functions, thinking in terms of functions, and
> designing functions.
>
>(a) It's only ``watering down'' the OO paradigm if generic functions
>actually *do* violate the OO spirit. They don't. They might not support
>it in quite the same way as class-obsessed languages do, but OO isn't
>about making classes the primary programming structure; it's about
>a model of computation where the state-changes to objects are mediated
>only by a small set of functions that ``belong'' to the object. Whether
>``belonging'' is done by making classes scopes, or by non-class module
>structures, is a side-issue. IMAO.
>
>(b) A True OO Programmer would presumably be working on _methods_,
>invoking methods, debugging methods, thinking in terms of methods, and
>designing methods. s/method/function/g. A Generic Function Programmer
>is just as capable of working with class lattices, inheritance, and
>all that gubbins as a True OO Programmer.

At 11:10 PM 3/12/93 +0000, Brian Harvey wrote:
>I agree with Aaron that it's confusing to call this otherwise good
>idea "object-oriented." To me, that word is the name of a metaphor,
>in which there are a whole bunch of small independent *smart* entities
>inside the computer, all acting on their own; with the generic function
>approach, the metaphor still has to be that *the* computer carries out
>*the* program that invokes all these functions. I've been on the
>message-passing side of this debate, but I might be won over if the
>generic function people picked another name!

I agree with Brian and I mostly disagree with Chris.

I consider myself "A True OO Programmer". When I use Smalltalk, I don't
call methods, I send messages. When I use Dylan, I call functions that I
know begin with implicit typecase statements.

An "object" is an instantiable entity that makes its own decisions about
whether and how to respond to messages from outside itself. That is, an
object encapsulates behavior, not just state.

Encapsulation in modules was an idea that predated Smalltalk. Modules are
not objects because they cannot be instantiated.

As for inheritance, Smalltalk-72 did not even have it. Inspired by
Simula-67, we gradually added inheritance to Smalltalk between 1974 and
1976 (although we experimented with ways of faking it before that).

Classes are not essential to OOP either, but that is a less controversial
assertion. Having to decide between a class and prototype approach in the
early '70s, Alan Kay and Dan Ingalls chose the class model to start with
because there was already plenty of risk in the project as it was. Some
work on prototype-based alternatives was conducted in the Learning Research
Group during the '70s, but nothing as thorough as what was later seen in
SELF.

Loop-structuring constructs are a good thing to have in a block-structured
language, but scoping is the defining property. Similarly, inheritance is
a good thing to have in an object-oriented language, but behavioral
encapsulation is the defining property.

Dylan is thus not object-oriented in this most fundamental sense. However,
I have come to accept calling it an object-oriented language, for two
reasons:

(1) Because classes and inheritance are usually found in languages that
have objects, and because classes and inheritance deliver a significant
portion (half?) of the SW engineering benefits of OOP, people have seen fit
to retain the moniker "object-oriented" even when objects are degraded in
their ability to control the messages to which they respond (or the
functions which they specialize, as you prefer).

(2) One of the main benefits of behavioral encapsulation is that objects
can't directly change (or even access) state within other objects; they
must make a procedural request by sending a message or calling a function.
In Dylan, slots are accessed by getter and setter functions that the class
CAN choose NOT to export.

Thus, most OOP benefits are provided by Dylan, as well as some benefits
that purer OOLs can't offer, such as multimethods.

I think the principal weaknesses of Dylan from the OOP point of view are:

(1) The programmer of one module can force an object in another
programmer's module to change its response to a "message" by defining a new
method on that object's class. Note, however, that such a method cannot
force state changes to the object except through exported functions. Note
also that Smalltalk-80 in practice did not restrict programmers from
changing any methods they liked. In both Dylan and Smalltalk, we rely on
programming environments to encourage or enforce encapsulation and to
expose permitted violations.

(2) In Dylan, it is assumed that all objects are in one address space. In
the pure messaging model, an object can send a message transparently
(without special syntax) to an object in another address space (e.g.,
across the network). Note, however, that although this capability is
latent in pure OOP systems, the complexity of efficient and consistent
implementation means that it is not commonly seen in practice. And some
(not I) may argue that such transparency is misleading and could lead to
mysterious performance problems.

Treatises have been written on the definition of OOP and it is generally
concluded that it is best not to try to be too precise and limiting in the
usage of natural language terms. I agree. And I am glad that, being a
language from the LISP tradition, Dylan, unlike C++, offers automatic
memory management and applicative programming. However, sometimes I become
concerned that software engineers exposed only to CLOS (or C++) will focus
too much on the benefits of classes and inheritance and not learn enough
about the benefits of encapsulation, of processes as objects, and of other
important features of Smalltalk.

In case anyone thinks I am a Smalltalk bigot because I worked on the
language, implementations, applications, and publications for 7-8 years,
you should know that I was a LISP programmer, implementer, and language
designer for many years before that, and that I am a current user and
advocate of Dylan.

Larry
>************************************************************
>Larry Tesler, V.P. PIE Engineering, Apple Computer, Inc.
>USMail: 20525 Mariani Ave., MS: 301-2A, Cupertino, CA, 95014
>Internet: tes...@newton.apple.com
>Phone: (408) 974-2219, Fax: (408) 974-1794
>************************************************************

Barton Christopher Massey

Mar 13, 1993, 5:33:02 PM
In article <1nrggt$p...@agate.berkeley.edu>
b...@anarres.CS.Berkeley.EDU (Brian Harvey) writes:
> Well, as luck would have it, it just so happens that today I gave my
> standard lecture on OOP to my data structures class, and it's all about
> these very issues, so I thought I'd share it with you and see how many
> flames it will attract.

OK, count me in! I don't mean to be insulting when I say that
I think you should understand computer science a little better
before you try to teach it -- you are (unwittingly, I'm sure)
doing a grave disservice to your students with your misinformed
approach. Please excuse my little parody of your argumentation,
but it was too easy, and I found I couldn't resist.

-----

> 1. The Paranoid.

"I'm much smarter than all those `consecutive letter' people
such as Edgar Dykstra and D.E. Knuth. *They* find programming
so hard that they need the help of the compiler to catch their
obvious, common mistakes. Having private data or functions in
a module is pointless because I'm too smart to ever
accidentally violate an internal invariant of my module, and
programmers with whom I'm working would never do so either
accidentally or intentionally. C++ is a stupid language
because it tries to make objects have some of the properties of
modules, which are bad."

> 2 and 3. Failed Religions.

"If you adopt a style of programming which is restricted in any
way, you can't write long rambling programs which dump core
frequently for no apparent reason: i.e., you can't get
practical work done. Evil, sneaky computer scientists are
trying to make me stop getting practical work done by calling
their restricted styles of programming `object oriented' and thus
fooling me into adopting them unawares, but I'm too smart for
'em."

> 2. The Religion of Strong Typing.

"From my vast grasp of the field of computer science, I'm sure
that all strong type systems exclude all forms of polymorphism
and overloading, and polymorphism and overloading are good, so
all strong type systems must be bad. (I've never heard of ML
or CLU, so please don't bother me about them.) It seems
ludicrous to me that in C++ you have to write different code to
handle data with different properties -- the compiler should
figure out what I mean by such constructs as `square("hello")',
or else the code shouldn't break until runtime. (Anecdote: At
least one of my colleagues acted stupid in a faculty meeting,
which explains why so much of the world is interested in C++.)"

> 3. The Religion of Referential Transparency.

(I'm sorry, but I can't start better than Mr. Harvey did on this
one:)

> Some computer scientists believe that you should be able to read a program
> and understand the thing to which a particular use of a particular symbol
> refers, just from the text of the program, without thinking about the
> context in which it's used.

"Those fools! Don't they understand that you can't understand
pieces of a 2 million line program in isolation? You have to
understand it as an *organic whole*! Inheritance is the last
recourse of incompetent computer scientists who want to allow
some context-dependency in their programs without giving up
their precious referential transparency. BTW, if you were so
confused you thought inheritance had something to do with types,
or so thoroughly baffled you thought it had something to do
with overloading, see 2 above. (I mentioned that C++ is stupid:
CLOS and Dylan are stupid too. Guess what: I'm getting ready
to tell you that Smalltalk is the one true OO language. Boy
will that be a surprise!)."

> 4. OOP as Metaphor.

"`Every language is technically equivalent to every other
language,' i.e., all languages are the same. The important
thing is that the enlightened programmer thinks in an
appropriate language for solving the problem -- thus business
programmers, for example, should write COBOL-style programs in
any language they use. OO thinking is good for simulation, and
for window systems. For example, users should quit thinking
about operating systems and window managers, which are concrete
entities, and talk directly to the windows, which are
abstractions."

"C++ is not object-oriented because (in spite of the fact that a
lot of people are deluded about this) there's no
message-passing mechanism in C++! I know that it *looks* like
`foo->bar(bletch)' asks the object denoted by `foo' to perform
the action `bar' with argument `bletch', but that's just
because you've been confused by evil scientists. And besides,
`bletch' is `passive', unlike Smalltalk, where it sings and
dances. Oh, I promised you a surprise, and now it's here:
`The only OOP language is Smalltalk!' Surprised? (I've never
heard of ACTOR, Simula, or Self, so please don't bother me
about them.)"

-----

The sad thing is that I really dislike C++ a lot, but somehow
ended up in the position of defending it: It seems clear to me
that, whatever its faults, it is in fact a language with OK OOP
support.

I have some fairly serious problems with Dylan, but it's
articles like this one that make me think I'd be wasting my
time trying to discuss them in this forum. Any chance we could
get a *moderated* group about the language?

Bart Massey
ba...@cs.uoregon.edu

Aaron Sloman

Mar 13, 1993, 6:15:28 PM
b...@anarres.CS.Berkeley.EDU (Brian Harvey) writes:

> Date: 12 Mar 93 23:10:43 GMT
> Organization: University of California, Berkeley
>
> a...@cs.bham.ac.uk (Aaron Sloman) writes:
> >I first met the problem in the 1970s when working on a computer
> >vision project ...
[.....]
>
> This is an interesting example, but it raises a question for me because
> of the issue of order of arguments. That is, I can envision having
> to define
>
> combine(horiz, horiz, vert) ==> F
> combine(horiz, vert, horiz) ==> F
> combine(vert, horiz, horiz) ==> F
>
> (That's the letter 'F', not nil! :-)
>
> Anyway, I'd rather not have to define methods for all three possible
> argument orders (and it'd be even more if two didn't happen to be of
> the same type).

In my project referred to previously we dealt with this by adopting
canonical orderings. So we did not have to deal with all possible
orderings.

I agree that it's a nuisance having to remember the orderings
(though the burden can be reduced with help from the editor, of
course). But this is not a problem that's specific to OOP. E.g. the
environment I currently work in has the following procedure:

issubstring(smallstring, integer, bigstring)

which reports on whether the first argument is a substring of the
third argument starting at or after the location specified by the
second argument. It's a nuisance having to remember that the integer
comes second not first or third. (Fortunately the editor finds the
specification in a fraction of a second if required.) But note that
the two string arguments have different *roles* rather than
different *types*.

One solution is to use key-words to indicate the roles of arguments.
This is common in operating system command languages (e.g. Unix
shell, VMS DCL).
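That keyword solution can be sketched like this (Python, purely for illustration; this keyword-only issubstring is a hypothetical variant of the procedure above):

```python
# Hypothetical keyword-only version of the issubstring procedure:
# callers must name the roles, so argument order no longer matters.

def issubstring(*, small, start, big):
    """True if `small` occurs in `big` at or after index `start`."""
    return big.find(small, start) >= 0
```

A call like `issubstring(big="hello", small="ell", start=0)` reads the same in any argument order, which removes exactly the burden of remembering that the integer comes second.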

But I don't think the problem of having to provide either different
versions of procedures to deal with different orderings of the
arguments, or keywords, has anything specific to do with OOP or
multi-methods.

Having to remember an arbitrary ordering is no worse than having to
remember that one object type has been arbitrarily selected as the
one to whose instances the message has to be sent!!

> .... Do the generic-method people have any ideas about
> moving even further in their direction by allowing a method to be
> specified by a condition on its arguments as a group (e.g., one
> vertical and two horizontals) rather than by conditions on each
> argument separately?

This would be useful only where there is no distinction in *role* for
arguments of the same *type*. Alas that's not generally the case, as
shown by the issubstring example.

Another example is a predicate method to test whether one object
completely covers another, where the objects could be of different
types. Either you have to use argument position or you have to use
keywords, to indicate which is meant to be the coverer and which the
covered.

Of course if you are asking "Does either cover the other, and if so
please return the coverer", then it would be nice not to have to
order the arguments or label them by keywords. Compare the typically
overloaded arithmetic "max" procedure which copes (in Pop-11 and
Common Lisp) with integers and floats, bigintegers, double floats
and ratios, e.g. (in Pop-11)

max(3, 4.5) =>
** 4.5
max(6.3, 4.5) =>
** 6.3
max(6.3, 8) =>
** 8
max(6, 5) =>
** 6
max(2**110, 2.5e30) =>
** 1298074214633706907132624082305024
max(14/6, 2) =>
** 7_/3
etc.

So max has to cope with all possible orderings of two instances of
five different numeric types. I believe it is implemented by a table
that hashes the combinations of types and jumps to a specialised
procedure!

How would the message sending approach handle this variety?

The only reasonable alternative to using ordering or keywords or
multiple method definitions I can think of (which works in languages
that are not excessively strict regarding variable typing) is to
define a single pseudo method that takes all N arguments, rearranges
them into some canonical ordering (e.g. circles before polygons, and
triangles before squares, squares before pentagons, etc.) and then
invokes the method appropriate to the combination of types in that
order. This has the disadvantage that you then have to update that
procedure when you add a new multi-method for a new combination of
types.

Alternatively it should in principle be possible to extend an OOP
language to make it possible to declare a type of multi-method for
which the hashing pseudo method is to be defined automatically by
the system and automatically extended whenever a new specific method
is added to deal with a new set of argument types. I've never heard
of an OOP system that provides that.
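Such an automatically extended multi-method table might be sketched like this (a Python sketch under stated assumptions: `defmethod`, `call`, and the Horiz/Vert classes are all invented names, echoing the "F" example earlier in the thread):

```python
# Sketch of an automatically extended multimethod: each definition adds
# an entry to a table keyed on the tuple of argument types, and the
# dispatcher looks the combination up at call time. No hand-written
# hashing pseudo-method needs updating when a new method is added.

_table = {}

def defmethod(name, *types):
    """Register the decorated function for this combination of types."""
    def register(fn):
        _table[(name,) + types] = fn
        return fn
    return register

def call(name, *args):
    fn = _table.get((name,) + tuple(type(a) for a in args))
    if fn is None:
        raise TypeError("no method %s for %r" % (name, args))
    return fn(*args)

class Horiz: pass
class Vert: pass

@defmethod("combine", Horiz, Horiz, Vert)
def _f1(a, b, c):
    return "F"

@defmethod("combine", Horiz, Vert, Horiz)
def _f2(a, b, c):
    return "F"
```

Adding a method for a new combination of types is just another `defmethod`; the dispatcher itself never changes, which is the automatic extension being asked for.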

> I agree with Aaron that it's confusing to call this otherwise good
> idea "object-oriented."

My point was that the phrase "object-oriented" means so many
different things to different people that it should be dropped
completely!!

> ....To me, that word is the name of a metaphor,
> in which there are a whole bunch of small independent *smart* entities
> inside the computer, all acting on their own;

I think Carl Hewitt at MIT coined a good name for that in the early
1970s: it's the "actor" model of computation. Why should we rename
it as "object oriented programming" when there's already a perfectly
good name?

> ..with the generic function
> approach, the metaphor still has to be that *the* computer carries out
> *the* program that invokes all these functions.

This is only a small part of the model. Your summary of the generic
function approach omits the extremely important role of object type
hierarchies and inheritance between object classes. That's what
makes it object oriented (in one sense of the phrase). "Generic"
means: applicable to several types of objects. So types of objects
are important to the concept. Who or what does the work is a
secondary issue.

> ..I've been on the
> message-passing side of this debate, but I might be won over if the
> generic function people picked another name!

Both sides should pick another name. You can use "actor-oriented",
the other side "inheritance-oriented"!!!

Aaron
---

Ralph Johnson

Mar 13, 1993, 6:13:05 PM
jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:

>Can someone argue for generic functions on philosophical grounds?

I'm primarily a Smalltalk programmer, but I have also used C++ a lot.

It is pretty common to have functions of a system that are not the
responsibility of a single object. For example, displaying an object
on a graphics display depends not only on the object you are displaying,
but on whether the output device is a Mac window, an X window, or a Postscript
printer. Code generation depends not only on the parse tree being
compiled, but on the target system. Whether one type is a subtype of
another depends not only on the first type, but on both of them.

In languages like Smalltalk and C++, situations like this are handled
by double dispatching. DD is not the most elegant technique in the
world, but it works. Its biggest problem is that adding a new class
to the hierarchy requires modifying the interface to all of the other
classes. It might not require adding a lot of code; often only a
couple of methods need to be added high up in the hierarchy and
inherited by most of the classes, but every single class has to be
given a definition for what happens when they interact with the new
class. In a language like C++, where changing an interface requires
recompiling everything that uses the interface, this can be a serious
problem. It is less of a problem in Smalltalk. Nevertheless, even
though I have never done any serious programming in a generic-function
based language, I can see that it could be useful.
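A minimal sketch of the double-dispatch technique described above (Python; the shape and device classes are invented examples, not from any real system):

```python
# Double dispatch: the first call dispatches on the shape, which then
# calls back into the device, so the second dispatch picks the device.

class Circle:
    def display_on(self, device):
        return device.display_circle(self)

class Square:
    def display_on(self, device):
        return device.display_square(self)

class MacWindow:
    def display_circle(self, circle): return "circle on Mac window"
    def display_square(self, square): return "square on Mac window"

class XWindow:
    def display_circle(self, circle): return "circle on X window"
    def display_square(self, square): return "square on X window"
```

Adding a new device is one new class, but adding a new shape forces a new `display_*` method into every device class, which is the interface-change problem Ralph describes.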

Another advantage of basing a system on generic functions is that it
should make it simpler to "enrich" existing classes. Lots of time
you want to add some operations to existing classes. One solution is
to make a subclass, but this doesn't work when the object is being
passed in as a parameter. Smalltalker are used to just adding methods
to system classes, even though this poses well-known dangers. However,
C++ programmers are basically forbidden from doing this, leading to
all sorts of crude work-arounds. In a generic-function based language,
it is easy to add a new function and specialize in on imported classes.
Assuming some sort of module system, the new function can be private
to the client module, instead of having to be global as in Smalltalk.
This should cut way down on unintended side effects.
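For a concrete taste of this, Python's functools.singledispatch (a real library feature, used here as a stand-in for a generic-function language) allows exactly this kind of enrichment: a new operation, local to the client module, specialized on a class it does not own. The Point class below is a made-up stand-in for an imported class:

```python
from functools import singledispatch

class Point:                       # pretend this is imported from a library
    def __init__(self, x, y):
        self.x, self.y = x, y

@singledispatch
def describe(obj):                 # new generic function, local to this module
    return "some object"

@describe.register
def _(p: Point):                   # specialization on the "imported" class
    return "point at (%d, %d)" % (p.x, p.y)
```

The library class is never edited and never subclassed; the new behavior lives entirely in the client module.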

I just don't believe the claims that generic functions have less
encapsulation than message passing languages. The real problem is
that the generic function languages invented so far do not have static
type checking. It should be possible to define inside and outside
interfaces to a class, and then functions that import the inside
interface are automatically "inside the capsule".

I don't know why the people who use generic functions don't talk about
these issues. Perhaps they are too close to the trees to see the forest.

-Ralph Johnson

Jason Robbins

Mar 13, 1993, 9:57:07 AM
york:

Some of us remember the old Symbolics Flavors system, which used a
"message-passing" style. This meant that once you had become
comfortable with Lisp you had to learn a whole new set of concepts in
order to cross the threshold into OOP. This was a bad thing (ask any
of the former Symbolics course instructors).

jrobbins:


It seems that you ("you" referring to the CLOS community) missed the OO
revolution. You are not supposed to think of objects as being
equivalent to groups of functions; that is what the compiler is
supposed to do. OO programmers are supposed to be able to think of
their programs as communicating objects (a view that has some _real_
advantages, see my next posting). Moving to CLOS without learning to
think of objects as objects is like moving from C to C++ without
rewriting any of your (unOO) C programs. It's like trying to do OO
programming in Ada (sorry for that low blow).

york:


The point that I was trying to make above is that the Lisp OOP
community STARTED with the message-based paradigm ("Why, I was sending
messages when you were only knee-high to a grasshopper"), found it
wanting (due to the semantic limitations and syntactic extra baggage
discussed previously). We then took a step FORWARD to the generic
function paradigm. ... (lisp invented windows argument deleted)

One last attempt: one of the things that the generic function paradigm
does is to encourage you to think in terms of protocols, rather than
the objects that implement them.

jrobbins:
I understood your point, I just don't agree with it: thinking
primarily of procedures is the core of functional decomposition. OOP
decomposes problems based on the objects in them, try it some time.

Please consider the following as (humorous) satire:
I would like to _act_ out my feelings about the Lisp community:
Lisp: "Lisp is so much better, but no one will listen, they called me mad,
but I'll show them! I can do anything that they can do, better,
better, better than they can do it."

Distant voice: "It's a new way of organizing programs that can be more
understandable, maintainable, and reusable. Even if you don't go for
it all the way it still can give you some nice software engineering
properties. And people tend to like it better..."

Lisp: "Organize MY code your way?! I'd sooner die than waste away in
your object oriented _prison_! I can do all that better than you
anyway. I'll drown you in a sea of _freedom_ so that you can never
proliferate your 'people like _slavery_ better' propaganda again. You
know why I'm right? Because I've _always_ been right! My old way
_was_ your new way, you just never saw it. But now I'll make it clear,
and I'll make it even better. I'll make it meta, meta, meta..."

Curtain closes to an empty house.

I considered the ivory tower feature of the Lisp community, but I
thought that the mad scientist character is more fun.

-jason

Jason Robbins

Mar 13, 1993, 11:11:21 AM
In article <KERS.93Ma...@cdollin.hpl.hp.com> ke...@hplb.hpl.hp.com (Chris Dollin) writes:
In article ... jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:
AND is it worth watering down the OO paradigm to the point that Dylan
programmers will forever be working on writing _functions_, calling
functions, debugging functions, thinking in terms of functions, and
designing functions.
(a) It's only ``watering down'' the OO paradigm if generic functions
actually *do* violate the OO spirit. They don't. They might not support
it in quite the same way as class-obsessed languages do, but OO isn't
about making classes the primary programming structure; it's about
a model of computation where the state-changes to objects are mediated
only by a small set of functions that ``belong'' to the object. Whether
``belonging'' is done by making classes scopes, or by non-class module
structures, is a side-issue. IMAO.

No, that is encapsulation. That is very important, but OOP is more
than that alone. Another part of OOness is that objects are the
central focus of design. It's nice not to be forced to cast everything
in terms of objects, but it is hard to cast anything in terms of
objects when your only view of them is via functions.

(b) A True OO Programmer would presumably be working on _methods_,
invoking methods, debugging methods, thinking in terms of methods, and
designing methods. s/method/function/g. A Generic Function Programmer
is just as capable of working with class lattices, inheritance, and
all that gubbins as a True OO Programmer.

He/She could do it, but it is stretching. Wouldn't it be better to use a
language which is mainly focused on objects and allows stray functions in
exceptional cases?

So what's the difference you're trying to isolate?

Focus on functions in CLOS vs. focus on objects in Smalltalk. Really I
don't want to get stuck on any one language; playing devil's advocate
is too fun, and too productive, for me to give it up.

This is honestly meant as constructive criticism:
The Dylan language design lacks innovation.

Surely a language designed for widespread use for OO software engineering
is exactly the place to *avoid* innovation. Innovative things are new,
and untried, and have unexpected consequences, and need changing as a
result. Splendid for standards, or at least the standards business.

CLOS has been around for several years, not many people liked it
enough to switch. If Dylan is gonna go anywhere it better do
something different (besides being faster). It is somewhat better
than CLOS because it is faster and somewhat better than C++ because it
is more dynamic, but it will take more than that for it to withstand
competition from languages like Component Workshop (C++-like syntax +
Smalltalk-like semantics = fast fully OO Dylan-killer).

-jason

Jason Robbins

Mar 13, 1993, 11:26:17 AM
In article <1993Mar13.1...@pasteur.Berkeley.EDU> ma...@volga.Berkeley.EDU (Matt Wright) writes:
a...@cs.bham.ac.uk (Aaron Sloman) writes:
>Many of the defenders of lisp syntax who rightly point out that a
>powerful editor can remove some of the objections to the syntax,
>forget that not everyone has such lisp-friendly editors, and more
>importantly forget that for many people exposure to the language
>includes books, printed articles etc.

... Once he realized that the indentation levels of a
correctly printed Lisp program tell you everything you need to know about
its structure, he was much happier. When my students have trouble with Lisp
code, one of the first things I usually do is have them make their editor
reformat it.

My point is this: opponents of Lisp syntax argue on the grounds that
matching parentheses are difficult across multiple-line programs, but most
of the Lispians I know usually *don't* match parentheses by hand; we just
look at (and trust!) the indentation.

So you mean I could have an indentation based syntax if I use lemacs
font-lock-mode to color parens with the background color.
That would be an improvement.

Indentation alone is not enough when the source is printed in
multiple columns or on multiple pages.

But wouldn't it be better to have meaningful indentation AND
meaningful (but short... but noticeable) keywords?

C is bad in lots of ways, but the combination of short keywords in the
right places with a variety of delimiters (parens for expressions,
braces for blocks) has helped make it the most popular modern
language. In a popularity contest for the rest of us, s-expressions
would come in nearly last place (along with FORTH and APL). They are
all good in their own right, but they will never really catch on.


-jason

Jason Robbins

Mar 13, 1993, 10:52:17 AM
In article <1nrggt$p...@agate.berkeley.edu> b...@anarres.CS.Berkeley.EDU (Brian Harvey) writes:

Well, as luck would have it, it just so happens that today I gave my
standard lecture on OOP to my data structures class, and it's all about
these very issues, so I thought I'd share it with you and see how many
flames it will attract.

1. The Paranoid.

... they judge their tools not by what the tool *allows* you to
do, but by what it *prevents* you from doing. The more prevention, the
better they like it.

... What's OOPy in C++ is that you can
declare certain fields of a struct to be *private*, which means that
only a few specially privileged procedures, associated with that
object type, are allowed to make reference to those fields. This is
an OOPy feature because it doesn't allow the C++ program to do anything
that the corresponding C program couldn't have done; instead it *restricts*
what the C++ program can do. Isn't that great?

I admit to using slight misrepresentation to prove a point in that
"_acting_ out my feelings on the Lisp community" bit. But this goes way
too far. I have sat in on your class before, and I found you mildly
entertaining. If you had unloaded this snide load of crap on the class
I would have been embarrassed for you. I have to tell you seriously
that you are doing your students a disservice.

First a short story:
back when I was a little kid, I had to go to Sunday school. They told
me that GOD was all powerful. I asked if that power included the
ability to make objects too heavy for Him to lift, rules too strict for
Him to break, work too good for Him to outdo, etc. (Now I am an
atheist; I find it more flexible, and I respect flexibility.)

Now on to your misrepresentation:
No one thinks that limitations are "great, and the more the better".
The advantages of limitations are that they help you know what to do,
they help you communicate with others if they are shared, and they
help to make some things safer (if the limitations are set up right
then they tend to make common _real_ _mistakes_ more evident because
they violate those limitations). I like the idea of private variables,
not because they are _secret_, but because they have a better chance
of being safe (errors due to naming conflicts are fewer, last minute
hacks stand out and are therefore more likely to be eliminated). I also
like private variables because they focus attention on the interface (nobody
bothers to read the source of the private functions).

Consider the social contract.

Lisp does allow you to have a well structured program, but it
_doesn't_ _support_ you in doing it, which is sure to cause your program
to become less structured as time goes on. The prevention part is that
even if you have a program with fairly private parts the syntax will
never show it, which makes it hard to keep it clear.

A personal attack:
Really, who is the paranoid person? The one who likes to have the
option of making some things fairly safe, or the one who is so insecure
about his freedom that he sees threats from every corner?


2 and 3. Failed Religions.

The term "failed" implies just the opposite how the OO revolution is
going. "Failed" would be a better term for the spread of "Lisp" which
has been a fringe element since the 50's (not even a good cult like FORTH).


2. The Religion of Strong Typing.

...


3. The Religion of Referential Transparency.

I think you should be harping on functional programming, the anti-OOP.

[dynamic binding is not allowed ... ]


The trouble is that sometimes the meaning of a symbol *should* depend
on the context in which it's used. So these computer scientists have
had to fall back on the idea of *inheritance*, whereby the meaning of

you are stretching. Inheritance is a good mechanism for reuse. Your
favored method (dynamic binding) never really "cut the mustard."


4. OOP as Metaphor.

Oh I get it, only the first three types are slanders. The fourth type
is you. Brilliant argument technique. The only problem is that type-1
is unfair.

The fourth category of computer scientists focuses not on technical
capabilities but on ways of thinking. People in this category know
that every language is technically equivalent to every other language;
one is better suited than another to some task because it allows the
*programmer* to understand the task more easily, not because it allows
the *computer* to solve it more easily.

[ pretty good examples deleted ]

What feature does a type-4 computer scientist single out as the one
that makes C++ OOPy? None! To a type-4 computer scientist, there is
nothing OOPy about C++, because it doesn't use a message-passing
metaphor. You don't ask objects to do things; your one program
calls procedures with objects as (passive) arguments. The only
OOP language is Smalltalk!

If you say that last line with a straight face then I forgive everything.
BTW you could replace C++ with CLOS in that last paragraph. Both are
semi-OO hybrids (the h-word).
-jason

Brian Harvey

unread,
Mar 13, 1993, 8:52:58 PM3/13/93
to
a...@cs.bham.ac.uk (Aaron Sloman) writes:
>This would be useful only where there is no distinction in *role* for
>arguments of the same *type*. Alas that's not generally the case, as
>shown by the issubstring example.

If the arguments have different roles, then they can't appear in
arbitrary orders, and the N^2 problem I was worrying about doesn't
arise.

Hmm, well, that isn't exactly true either; I suppose in the subtraction
function, the two roles are different, but you still have to worry about
<real minus int> and about <int minus real> separately.

Still, the substring example doesn't bother me. I'm not worried about
remembering the order; I'm worried about N^2 special cases of promoting
arguments.

Jason Robbins

unread,
Mar 13, 1993, 12:44:29 PM3/13/93
to
In article <C3uoH...@cs.uiuc.edu> joh...@cs.uiuc.edu (Ralph Johnson) writes:
jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:

>Can someone argue for generic functions on philosophical grounds?

It is pretty common to have functions of a system that are not the
responsibility of a single object. [ standard arguments for GF's deleted ]

... In a generic-function based language,
it is easy to add a new function and specialize it on imported classes.
Assuming some sort of module system, the new function can be private
to the client module, instead of having to be global as in Smalltalk.
This should cut way down on unintended side effects.

So... it would be a violation of the encapsulation of some other
object, but each such violation would be local to a given module. That
is definitely better than global violations. But I would still rather
have a modular Smalltalk than GF's.

... It should be possible to define inside and outside
interfaces to a class, and then functions that import the inside
interface are automatically "inside the capsule".

THAT IS A GOOD IDEA: a really protected part (the inside), and a
pseudo-protected part (the outside). You could make your own
convenience methods that look like part of the class interface, but
really they are not; they just go through the real interface. Of
course, anything really convenient should be sent to the author of that
class so that he/she can add it to the real interface instead of you
hacking it on here and there. Now that is the kind of innovation that
I would like to see in the Dylan design.
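The inside/outside split being praised here can be sketched minimally in Python (invented names; Python only enforces the boundary by convention, which is part of what the thread is arguing about):

```python
# Illustrative names only.  The "inside" interface is the methods that
# touch the slot; the "outside" convenience function is written purely
# in terms of the inside interface, so it never enters the capsule.

class Account:
    def __init__(self):
        self._balance = 0          # the slot: inside only

    # --- inside interface ---
    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

# --- outside convenience: looks like part of the class interface,
# but only goes through the real interface, as described above.
def deposit_all(account, amounts):
    for a in amounts:
        account.deposit(a)         # no direct slot access

acct = Account()
deposit_all(acct, [10, 20, 5])
assert acct.balance() == 35
```

A reader auditing how `_balance` can change still only has to read the class body, no matter how many outside conveniences accumulate.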

Now all Dylan needs is a new syntax with marked delimiters and a
message passing paradigm that is as powerful as GF's without being so
FO (function oriented, or any other F______ O_______ words you like).

-jason

Brian Harvey

unread,
Mar 14, 1993, 12:55:50 AM3/14/93
to
ba...@cs.uoregon.edu (Barton Christopher Massey) writes (parodying me):

>"From my vast grasp of the field of computer science, I'm sure
>that all strong type systems exclude all forms of polymorphism
>and overloading, and polymorphism and overloading are good, so
>all strong type systems must be bad. (I've never heard of ML
>or CLU, so please don't bother me about them.)

I have heard of ML and CLU, actually. I think that they're serious
attempts to create a language that's both strongly typed and usable.
They're not my cup of tea, but I wouldn't make fun of them.

But there really are people out there who think that OOP -- not
polymorphism, but OOP -- is saving the world because it overcomes
the strong typing problem. I still think that's ludicrous.

The ML and CLU people haven't engaged in the sort of hoopla that OOP
people have. I'm trying to get at whether there is anything behind
all the excitement, and I don't think overloading can be it. I don't
think information hiding can be it. Those are okay ideas, but they're
not revolutionary ideas.

What was revolutionary in Smalltalk was that you could build something
like a GUI in no time, by throwing together pieces such as "radio buttons"
and "browsers" and so on. Things like information hiding are indeed
among the technical details that made it possible, but to emphasize
information hiding as the key to the OOP revolution seems to me to be
seriously missing the point. And strong typing doesn't even feel to me
like a "technical detail that made it possible"; it's entirely orthogonal
to what I think of as OOP.

I think what started me thinking this way was the time I team-taught the
SICP course with a colleague at Berkeley (not the one I mentioned before).
One day he decided that the treatment of OOP in SICP wasn't good enough,
so he was going to take a lecture to explain to the kiddies what OOP
"really" means. He spent about 45 minutes of the 50-minute period saying
that if Scheme were an OOP language it would be full of declarations, so
you'd have to say (int foo) and stuff like that, and you'd have to give
types to all the formal parameters, and so on. In the last 5 minutes he
barely mentioned inheritance. Not a word about message passing. I was
very upset!

Technically Sweet

unread,
Mar 14, 1993, 2:01:36 AM3/14/93
to
The essence of OO was illuminated for me by the first
two chapters of "Women, Fire, and Dangerous Things"
by Dr. George Lakoff. It's not about computers.

Cognitive Linguistics is about how we categorize
things into groups of things. OO is merely
an application of this to software. This book
showed me that OO is good, but not good enough,
partly because it reflects some but not enough
understanding about categorization.

On another note: one major problem is that the
inside is not the outside. Type composition and
code reuse are mostly different issues, but OO
systems tend to mush the two together.

--

Lance Norskog
thi...@netcom.com
Data is not information is not knowledge is not wisdom.

Michial Gunter

unread,
Mar 13, 1993, 7:11:47 PM3/13/93
to

In article <C3u14...@cs.bham.ac.uk> a...@cs.bham.ac.uk (Aaron Sloman) writes:

On the printed page the presence of clearly distinguished closing
brackets, and, perhaps even more importantly, intermediate markers
"elseif" ... "then" ... and the like, can make a considerable
difference to the speed and accuracy with which a human brain can
take in the structure.

ma...@volga.Berkeley.EDU (Matt Wright) writes:

My point is this: opponents of Lisp syntax argue on the grounds that
matching parentheses are difficult across multiple-line programs, but most
of the Lispians I know usually *don't* match parentheses by hand; we just
look at (and trust!) the indentation.


I think we all agree that people read _indentation_. Parens,
semicolons, brackets, and (in some contexts) keywords are noise.
(In some cases, worse than noise --- they determine the semantics of
statements. One then has to keep the syntax that the computer reads
and the syntax that people read consistent (which is not difficult
with the proper tools).)

The syntax of our computer languages should reflect this.


mike gunter


P.S.
For an example of a language with an exceptionally clean, clear, and
uncluttered syntax, get the Haskell tutorial from
"/nebula.cs.yale.edu:/pub/haskell/tutorial"
or "/ftp.dcs.glasgow.ac.uk:" (Some path.)

I was overwhelmed by how much more beautiful Haskell is
compared with all other languages I had seen.

Chris Dollin

unread,
Mar 15, 1993, 4:39:27 AM3/15/93
to
In article ... Barton Christopher Massey writes:

... a compelling demonstration that parodies of parodies (or
caricatures of caricatures) Don't Work.

[Usually. I'm no Shakespeare, either.]

Chris Dollin

unread,
Mar 15, 1993, 4:24:09 AM3/15/93
to
In article ... ma...@volga.Berkeley.EDU (Matt Wright) writes:

ke...@hplb.hpl.hp.com (Chris Dollin) writes:
> OO isn't about making classes the primary programming structure; it's
> about a model of computation where the state-changes to objects are
> mediated only by a small set of functions that ``belong'' to the object.

Aha! Type-one.

Regards to Brian Harvey...

Type 4, actually. (Is there anyone who'd admit to being anything else?)
I'm pretty casual about what ``belong'' means (which is why I quoted it),
not so casual about what ``small'' means, and in any case I wanted to
capture the flavour rather than be definitive, precise, and accurate.

(In fact, since ``object-oriented'' is a metaphorical term, I don't think
it *can* be defined in a tight and satisfactory manner. I'm more
interested in ways to make software organisation more effective than
in debates about what OO ``means'', except in my capacity as
rabble-rouser, skeptic, and self-anointed Crusader for the Truth.)

Chris Dollin

unread,
Mar 15, 1993, 4:31:14 AM3/15/93
to
In article ... jrob...@lynn.cs.ucla.edu (Jason Robbins) writes:
[Responding to me]

It's nice not to be forced to cast everything
in terms of objects, but it is hard to cast anything in terms of
objects when your only view of them is via functions.

So how come it's any easier when your only view is via methods?
[I said]

So what's the difference you're trying to isolate?

Focus on function in CLOS vs. focus on objects in Smalltalk.

Systems have both objects and functionality. Focussing on one to the
exclusion of the other is clearly absurd (and neither CLOS nor
Smalltalk makes that mistake). Treating one as an icky little
sideshow does not make for good systems, either.

Chris Dollin

unread,
Mar 15, 1993, 4:43:58 AM3/15/93
to
In article ... jrob...@lynn.cs.ucla.edu (Jason Robbins) writes:

C is bad in lots of ways, but the combination of short keywords in the
right places with a variety of delimiters (parens for expressions,
braces for blocks) has helped make it the most popular modern
language.

Hmm. An interesting argument. I'd have thought it more that it was a
small, readily ported language given support by it being both supplied
with Unix and essential to it, back in the days when universities had
more-or-less free access to Unix and its source.

Whether its syntax helped or hindered this is a question I have
absolutely no idea about how to answer.

Chris Dollin

unread,
Mar 15, 1993, 4:04:23 AM3/15/93
to
In article ... jrob...@lynn.cs.ucla.edu (Jason Robbins) writes:

[He's replying to <mumble>, so I am not the referent of ``you''.]

Now on to your misrepresentation:
No one thinks that limitations are "great, and the more the better".

Heavens, Jason, he was making a *parody* to illustrate his point. And
it wasn't that far off being the truth, either. One of the powers of
a notation *is* the things it won't let you do, and it *is* nice if
it prohibits things you *don't* want to do.

I like the idea of private variables,
not because they are _secret_, but because they have a better chance
of being safe (errors due to naming conflicts are fewer, last minute
hacks stand out and therefore more likely to be eliminated).

And they gain these effects because they are secret.

I also
lile private variables because they focus on the interface (nobody
bothers to read the source of the private functions).

``Nobody bothers ...''? Could you just check the value of pi a moment?

Consider the social contract.

Lisp does allow you to have a well-structured program, but it
_doesn't_ _support_ you in doing it, which is sure to cause your program
to become less structured as time goes on. The prevention part is that
even if you have a program with fairly private parts, the syntax will
never show it, which makes the structure hard to keep clear.

Packages. They're not perfect, but they *are* there.

A personal attack:
Really who is the paranoid person? The one who likes to have the
option of making somethings fairly safe, or the one who is so insecure
about his freedom that he sees threats from every corner?

Think ``parody''. Think ``poetic licence''. Think ``People don't have to
take things literally to gain understanding''.

``Cartoon'' or ``caricature'' is probably a better term than parody.

2 and 3. Failed Religions.

The term "failed" implies just the opposite how the OO revolution is
going.

No, he didn't mean that OO had failed, but that the (following) religions
had failed.

"Failed" would be a better term for the spread of "Lisp" which

has been a fringe element since the 50's (not even a good cult like FORTH).

Sure, sure. Like the dinosaurs were a failure, having only been around
longer than humanity. Lisp isn't even dead.

4. OOP as Metaphor.

Oh I get it, only the first three types are slanders. The fourth type
is you. Brilliant argument technique. The only problem is that type-1
is unfair.

It was obvious from the first title (``The Paranoid''), wasn't it?

What feature does a type-4 computer scientist single out as the one
that makes C++ OOPy? None! To a type-4 computer scientist, there is
nothing OOPy about C++, because it doesn't use a message-passing
metaphor. You don't ask objects to do things; your one program
calls procedures with objects as (passive) arguments. The only
OOP language is Smalltalk!

If you say that last line with a straight face then I forgive everything.

Fortunately, that last line is as straight as all the rest.

BTW you could replace C++ with CLOS in that last paragraph. Both are
semi-OO hybrids (the h-word).

If we wish to indulge in irrelevant word-play, other h-words include
happy, humorous, harmonious, handsome, and Hatfield-and-the-North (oops,
wrong newsgroup).

Chris Dollin

unread,
Mar 15, 1993, 4:12:41 AM3/15/93
to
In article ... a...@cs.bham.ac.uk (Aaron Sloman) writes:

In a previous message I gave an example that typically occurs in
object-oriented graphics: A method to link an object of class A to
an object of class B. Such a method doesn't naturally belong with
class A or with class B.

Wouldn't one solution be to make a new class of Link objects? Or
does this degenerate into one big table?

I'm interested in whether the problem is abstract (ie, appears at a
level where relationships are first-class and objects don't contain
pointers to other objects) or programming-linguistic (ie, links would
be represented as pointers and questions of efficiency raise their
Handsome Heads).

Aaron Sloman

unread,
Mar 15, 1993, 9:38:05 AM3/15/93
to
ke...@hplb.hpl.hp.com (Chris Dollin) writes:

> Date: 15 Mar 93 09:12:41 GMT
> Organization: Hewlett-Packard Laboratories, Bristol, UK.


>
> In article ... a...@cs.bham.ac.uk (Aaron Sloman) writes:
>
> In a previous message I gave an example that typically occurs in
> object-oriented graphics: A method to link an object of class A to
> an object of class B. Such a method doesn't naturally belong with
> class A or with class B.
>
> Wouldn't one solution be to make a new class of Link objects?

But you would presumably then need methods for creating Link
objects, and those methods would take two or more objects of
different types, i.e. multi-methods.

Or do you want to say that the link creator would not be a method,
just an ordinary function, possibly with a list of conditions for
handling different types of things (e.g. storing them in the link
object in a canonical order to reduce the variety of cases to be
considered by the methods that operate on links)?
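Sloman's parenthetical, an ordinary link creator that stores its endpoints in canonical order, might look like this rough Python sketch (Node, Port, and the ranking are invented purely for illustration):

```python
# Sketch: canonicalize link endpoints at creation time, so code that
# operates on Link objects sees fewer type combinations.

class Node: pass
class Port: pass

CANONICAL_RANK = {Node: 0, Port: 1}    # assumed fixed ordering of endpoint types

class Link:
    def __init__(self, a, b):
        if CANONICAL_RANK[type(a)] > CANONICAL_RANK[type(b)]:
            a, b = b, a                # lower-ranked endpoint goes first
        self.ends = (a, b)

# Both argument orders collapse to one case for consumers of links.
l1 = Link(Node(), Port())
l2 = Link(Port(), Node())
assert [type(e) for e in l1.ends] == [Node, Port]
assert [type(e) for e in l2.ends] == [Node, Port]
```

This is the "ordinary function with conditions" option: dispatch happens once, at construction, rather than in every method that later touches a link.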

> .....Or
> does this degenerate into one big table?

That's presumably an implementation detail?

> I'm interested in whether the problem is abstract (ie, appears at a
> level where relationships are first-class ...

Yes, that's what I've tried previously to suggest in discussing the
problems of structuring an ontology without arbitrarily associating
multi-argument relations (or multi-argument processes) with the
types of particular arguments.

Within an intelligent agent, reasoning about the relationships that
are currently in use may require the relationships to be "first
class" in some sense, i.e. not MERELY implicit in the structuring of
slots and their values.

> ....and objects don't contain
> pointers to other objects)

I didn't understand the negation here. Did you mean something like
the end of my previous sentence
"i.e. not MERELY implicit in the structuring of
slots and their values."

> ....or programming-linguistic (ie, links would
> be represented as pointers and questions of efficiency raise their
> Handsome Heads).

I presume this is intended to have the meaning you get if you remove
the "not" from my sentence?

Obviously for some programs these implicit relations represented as
pointers are sufficient. For programs that (e.g.) *reason* about
which relationship to use in a difficult situation something
explicit is required.

But even if we ignore the program's requirements at run time,
multi-methods are useful for the organisation of certain large
programs in a form that makes their structure clear to human
developers, maintainers, documenters, managers, as I tried to
suggest in a previous comment.

If your problem is really small, then it probably doesn't matter
much what language you use or what design methodology you use. Even
Basic (in its recent forms) can be fine for small programs, and
lots of mathematicians seem to be very happy using it!

Aaron

Jeff Dalton

unread,
Mar 15, 1993, 11:29:41 AM3/15/93
to
> In article ... jrob...@kingston.cs.ucla.edu (Jason Robbins) writes:
>
> You're right, I misphrased. Composing is easy in Lisp, but you can't
> do it too much or you get into trouble. A 100 line function in C would
> be ok to work with, a 100 line function in Lisp would be a disaster.
>
> You have evidence for this, perhaps?
>
> [The little local evidence I have suggests that 100-line Lisp functions
> are both rare and no less comprehensible than 100-line C functions.]

Let's face it: it matters what's in the 100 lines!

A 100 line function with a simple enough structure can be very easy
to understand. For instance, maybe it's one big CASE expression.
Lisp has lots of ways to structure code w/in procedures, at least as
many ways as C. There are also a number of things that Lisp can
express more compactly (w/o sacrificing readability -- we're not
talking APL here).

On the other hand, there certainly are ways to structure small
Lisp procedures that don't work so well for larger ones. It may
be that this kind of structure arises less often in C.

-- jd

Jeff Dalton

unread,
Mar 15, 1993, 11:58:24 AM3/15/93
to
> To: info-...@com.apple.cambridge.ministry
> Newsgroups: comp.lang.dylan
> From: Jason Robbins <jrob...@edu.ucla.cs.kingston>
> Subject: Re: Is CLOS/Dylan even OO?
> In-Reply-To: ma...@harlqn.co.uk's message of Fri, 12 Mar 1993 15:20:57 GMT
>
> I would like whatever language I use to support encapsulation.

Lisp supports encapsulation by means of lexical scoping and
procedures. Some Lisps (eg, EuLisp) also provide modules that
(unlike CL packages) can't be circumvented.

However, there's a general problem in languages in which classes
are 1st-class objects, namely that if you have an object of class
C, you can typically get C even if C was not exported from its
module of definition.
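The lexical-scoping-and-procedures point can be shown with the classic closure idiom; a sketch transcribed into Python (the Scheme original would use lambda and set!):

```python
# Encapsulation with no class at all: the counter's state lives only in
# the enclosing lexical scope, so callers can reach it solely through
# the procedures handed back.

def make_counter():
    count = 0                      # captured variable, not an object slot

    def increment():
        nonlocal count
        count += 1
        return count

    def value():
        return count

    return increment, value        # the only doors into the capsule

inc, val = make_counter()
inc()
inc()
assert val() == 2                  # state changed only via increment
```

There is no handle by which outside code can read or write `count` directly, which is the sense in which scoping alone already gives Lisp-family languages encapsulation.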

> I don't think that you can get a property like encapsulation by adding
> on more features or throwing in more ideas, you have to take something out.

Actually, you can get it by adding a feature. You can get it by
adding a feature that seals things up so that you can't get inside
them any more. Whether this is a good way to do it is another
question.

> CLOS is halfway towards "anything goes". You can have objects, but you
> can also have functions that break their encapsulation, so they are
> not really objects.

Of course they're really objects. Encapsulation is not an essential
characteristic.

> You can have packages, but you can also get around
> their modularity, so they don't make good packages.

This depends on what you want. Debuggers typically let you get around
modularity. That's not necessarily bad. It's not so much that you
_can_ get around packages as that it's fairly easy and common to do
so (IMHO). They therefore require more discipline on the part of the
programmer than many people want to rely on.

> The main paradigm is functional, but the presence of objects implies
> side-effects, so it is not all that functional.

Lisp has always had side effects. Like "print", for instance.
It's not something that was brought in by "objects". Lisp has never
been a pure functional language. The idea that you can make this
a defect simply by saying "the main paradigm is functional" seems
rather bizarre to me. Most languages aren't pure instantiations
of paradigms, so why is it suddenly a huge defect when it comes
to Lisp?

Anyway, none of this is a good reason for saying CLOS is not OO.

-- jd

Michael J. Grier

unread,
Mar 15, 1993, 9:22:40 PM3/15/93
to

In article <C3uoH...@cs.uiuc.edu>, joh...@cs.uiuc.edu (Ralph Johnson) writes:

[interesting reading deleted]

|>I just don't believe the claims that generic functions have less
|>encapsulation than message passing languages. The real problem is
|>that the generic function languages invented so far do not have static
|>type checking. It should be possible to define inside and outside
|>interfaces to a class, and then functions that import the inside
|>interface are automatically "inside the capsule".
|>
|>I don't know why the people who use generic functions don't talk about
|>these issues. Perhaps they are too close to the trees to see the forest.
|>
|>-Ralph Johnson
|>

You state that you don't believe the claim that generic functions have
less encapsulation than message passing languages.

I guess you could be right, but I believe that some definitions need
to be clarified for an objective conclusion to be reached.

The definition which I believe needs to be clarified is that of
"encapsulation". We all know what it means intuitively, but I'd like to
put some of the discussion here into the perspective of writing software
for a living ("software engineers" rather than the "computer scientists"
which have received all the attention up to this point in this discussion.)

The reality is that a large number of software engineers in
the industry don't have a CS background and would have allergic reactions if
you started to discuss many of the issues which have brought us to this point
in the discussion on Dylan. (If you want to discuss this problem, there was
a good discussion on this going on in comp.software-eng.)

My working non-precise definition is that encapsulation is the property
which ensures that a maintenance engineer only needs to decipher a bounded
and hopefully small amount of code to determine how a particular behaviour
occurred. OK, that's somewhat vague, but then we turn that into my working
more-precise definition, which is that the maintenance engineer has to be able
to figure out what source module is most likely the culprit in solving a
bug or in adding an enhancement. I use a source module as the working unit
of encapsulation because, again due to the reality of the world we're
living in, most code today lives in ASCII or ISO Latin-1 text files.

[Short digression on this definition: I realize that browsers are possible
solutions to this problem, but much like the discussion on how different
editors can help people find the matching closing parens in S-expressions or
braces in C language source files, unless these tools are part of the Dylan
language definition, they really can't be used in a discussion on the
properties of the language, because they aren't.]

Ok, now, down to the meat of the issue. Given that our (working) unit of
encapsulation should be a source text, if the methods invoked by a
particular generic function call cannot be determined by inspecting a single
source file, then it is possible to break encapsulation. (I won't bother
constructing an example demonstrating the ability of encapsulation to be
broken in this environment; if you can't do it yourself you clearly
don't understand the issues.) [meta comment: my group is using a
single-dispatching smalltalk-like system, and dealing with polymorphism and
inheritance is enough of a problem when trying to solve bugs. After reading
the Dylan manual I started considering ways to add generic functions, but
after helping a few people through "tough" bugs because of the "simple"
polymorphic dispatcher, I decided yet again that the generic function is
the enemy of modularity and maintainability.]

(Another note upon re-reading is that crossing an encapsulation boundary
is assumed to be a well-defined operation. I.e. you could argue that my
aforementioned description is an argument for monolithic system design.
Instead it's an argument for well-defined areas of control with firewalls
between the modules. Read Meyer's Object-Oriented Software Construction
for a discussion on firewalling in OO software design via his concept of
contractual responsibility between clients and service providers.)

C++ also permits the breaking of encapsulation in this same way -- there's
nothing in the language which precludes the definitions of two different
member functions for a class from being in two separate source modules.
However, at least all the possible operations in a context *can* be found by
reading the definitions for a class and the classes from which it inherits.
By contrast, without a browser -- which as mentioned before is *not* a
language feature -- generic functions *require* scanning every source module
for a possibly eligible instance function (probably wrong terminology).

Eiffel is probably the only language I've personally seen which can
utilize the advantages of OO in the real world of software engineering. I'm
not sure if Modula-3 permits the implementation of a class to be distributed
among several source modules, but it comes in at a close second for OO
languages which take software engineering seriously.

Back to the real issue, which is Dylan and its OO-ness.

Given the fact that there isn't a rigorous definition of OO, there isn't
any way to objectively rank a language's "OO-ness". Most of us would agree
that Dylan possesses most of the properties expected in an OO environment;
I believe that the argument is largely that Dylan would not be an effective
language to use in a software engineering environment.

Being an "industry person" rather than an academian, I would give Dylan a
0% chance of being accepted as a software engineering language. It's a neat
language which accomplishes its goals of basically giving us CLOS which can
be more efficient at run-time, but I wouldn't wish it on an engineering team.

p.s.

Along the same lines, people arguing for multiple syntaxes
have clearly never done maintenance on a software system. I wouldn't wish
such a thing on an engineer who's trying to track down a regression.
------------------------------------------------------------------------
I'm saying this, not Digital. Don't hold them responsible for it!

Michael J. Grier Digital Equipment Corporation
(508) 496-8417 m...@ktbush.ogo.dec.com
Stow, Mass, USA Mailstop OGO1-1/E16

Andrew LM Shalit

unread,
Mar 15, 1993, 9:16:02 PM3/15/93
to
In article <JROBBINS.93...@kingston.cs.ucla.edu>,
jrob...@kingston.cs.ucla.edu (Jason Robbins) wrote:
>
> In article <alms-110...@alms.cambridge.apple.com>
> al...@cambridge.apple.com (Andrew LM Shalit) writes:
>
> In article <JROBBINS.93...@kingston.cs.ucla.edu>,
> jrob...@kingston.cs.ucla.edu (Jason Robbins) wrote:
> >
> >
> > I think of encapsulation as being able to know ALL the ways the state
> > of an object can change by looking at some bounded amount of source
> > code. In CLOS and Dylan the number of lines of source you must study
> > in order to see when objects might be modified is _not_ syntactically
> > bounded. This is the symbolic equivalent of a dangling pointer writing
> > over your data in C.
>
> Dylan certainly supports encapsulation. It just doesn't require that
> encapsulation boundaries be identical to class boundaries. Instead, it
> uses a module system to create encapsulation boundaries. Recent Smalltalk
> systems and derivatives (Component Software, and I believe QKS Smalltalk)
> recognize the need for this additional module layer, on top of classes.
>
> I agree that modules are definitely a good thing, and an oversight in
> most current OOPL's.
>
> My claim was not that "classes==modules which is bad", but rather that
> CLOS doesn't have a good way of doing modules/packages. I think that
> the unmarked delimiters of CLOS/Lisp/Dylan are part of the reason why
> the packages of CLOS are done wrong.
>
> Will Dylan modules have a syntax that _textually_encloses_ the
> encapsulated classes, or will it be like the packages of
> CLOS? You can tell what I would vote for.
>
> -jason

It sounds like you want your programming language to be built around
text files. We prefer a database model, where it isn't possible to have
a module syntax textually enclose the code. A UI worth anything, however,
would make module ownership very visible.

We do feel that it is important to provide a file-based interchange
protocol. We haven't settled on the details of that yet.

As far as Common Lisp packages, I make no apologies for them. They
have many, many problems which we hope not to repeat.

-Andrew Shalit
Apple Computer

Craig Chambers

unread,
Mar 16, 1993, 12:23:19 AM3/16/93
to
In article <1993Mar16.0...@nntpd.lkg.dec.com>, m...@ktbush.ogo.dec.com (Michael J. Grier) writes:
|>
|> In article <C3uoH...@cs.uiuc.edu>, joh...@cs.uiuc.edu (Ralph Johnson) writes:
|> |>I just don't believe the claims that generic functions have less
|> |>encapsulation than message passing languages. The real problem is
|> |>that the generic function languages invented so far do not have static
|> |>type checking. It should be possible to define inside and outside
|> |>interfaces to a class, and then functions that import the inside
|> |>interface are automatically "inside the capsule".
|> |>
|> |>I don't know why the people who use generic functions don't talk about
|> |>these issues. Perhaps they are too close to the trees to see the forest.
|> |>
|> |>-Ralph Johnson
|> |>
...

|> My working non-precise definition is that encapsulation is the property
|> which ensures that a maintenance engineer only needs to decipher a bounded
|> and hopefully small amount of code to determine how a particular behaviour
|> occurred. OK, that's somewhat vague, but then we turn that into my working
|> more-precise definition, which is that the maintenance engineer has to be able
|> to figure out what source module is most likely the culprit in solving a
|> bug or in adding an enhancement. I use a source module as the working unit
|> of encapsulation because, again due to the reality of the world we're
|> living in, most code today lives in ASCII or ISO Latin-1 text files.
...

|> Michael J. Grier Digital Equipment Corporation
|> (508) 496-8417 m...@ktbush.ogo.dec.com
|> Stow, Mass, USA Mailstop OGO1-1/E16
|>


I've been working on a language named Cecil that includes
multi-methods, encapsulation, and static type checking. I think it
might address some of your concerns. A paper on the dynamically-typed
core of Cecil, focusing on the programming methodology/encapsulation
aspect of multi-methods in Cecil, was published in ECOOP'92. You can
also get the paper via anonymous ftp from
cs.washington.edu:pub/chambers/cecil-oo-mm.ps.Z. A paper on Cecil's
static type system is in preparation.

-- Craig Chambers

Chris Dollin

unread,
Mar 16, 1993, 3:17:16 AM3/16/93
to
In article ... a...@cs.bham.ac.uk (Aaron Sloman) writes:

ke...@hplb.hpl.hp.com (Chris Dollin) writes:
> Wouldn't one solution be to make a new class of Link objects?

But you would presumably then need methods for creating Link
objects, and those methods would take two or more objects of
different types, i.e. multi-methods.

Well, I wouldn't call something a multi-method just because it took
two or more arguments of different types; it would need to *dispatch*
on two or more types.

Or do you want to say that the link creator would not be a method,
just an ordinary function, possibly with a list of conditions for
handling different types of things (e.g. storing them in the link
object in a canonical order to reduce the variety of cases to be
considered by the methods that operate on links)?

I was thinking that when a Link object was created, you'd supply the
ends of the link and they would go ... wherever the Link object found
them most useful. The interesting object then is the Collection of
Links, which you can interrogate to see if object A is connected to
object B, and if so, by which Link.

> .....Or
> does this degenerate into one big table?

That's presumably an implementation detail?

No; it's possible to have the notion creep in at early development
stages. The question is, does the Link-object approach *force* a
Big Table early -- which would be a point against it.

> I'm interested in whether the problem is abstract (ie, appears at a
> level where relationships are first-class ...

> ....and objects don't contain
> pointers to other objects)

I didn't understand the negation here. Did you mean something like
the end of my previous sentence
"i.e. not MERELY implicit in the structuring of
slots and their values."

I mean the approach where, early on in analysis, you don't think of
objects as having references to other objects at all, but instead of
being involved in *relationships* with other objects. Object-valued
attributes are, in this view, an implementation technique for a certain
kind of many-one relationship.

> ....or programming-linguistic (ie, links would
> be represented as pointers and questions of efficiency raise their
> Handsome Heads).

I presume this is intended to have the meaning you get if you remove
the "not" from my sentence?

Err ... I'm not sure. I think we may be talking across each other here,
hence my responses above. I can rephrase my question as, how does the
linking problem look from a viewpoint where relationships are first-
class and objects cannot have object-valued slots?

Obviously for some programs these implicit relations represented as
pointers are sufficient. For programs that (e.g.) *reason* about
which relationship to use in a difficult situation something
explicit is required.

Yes.

But even if we ignore the program's requirements at run time,
multi-methods are useful for the organisation of certain large
programs in a form that makes their structure clear to human
developers, maintainers, documenters, managers, as I tried to
suggest in a previous comment.

I'm actually all in favour of multi-methods. I just want to make
arguments in their favour (and against, for that matter) clear, so
that we-and-the-world can make well-informed judgements. As for
whether multi-methods are OO or not, all I can say is, they seem
far too useful and elegant to leave out, unless one is being
obsessively monotheistic in one's computational religion.

Harley Davis

unread,
Mar 16, 1993, 4:56:04 AM3/16/93
to
From: jrob...@lynn.cs.ucla.edu (Jason Robbins)
Newsgroups: comp.lang.dylan
Date: 14 Mar 93 00:11:21 GMT

CLOS has been around for several years; not many people liked it
enough to switch. If Dylan is gonna go anywhere it better do
something different (besides being faster). It is somewhat better
than CLOS because it is faster and somewhat better than C++ because it
is more dynamic, but it will take more than that for it to withstand
competition from languages like Component Workshop (C++-like syntax +
Smalltalk-like semantics = fast fully OO Dylan-killer).

Surely it should be painfully obvious that technical excellence has
little if anything to do with success in the computing world. If you
are correct, why hasn't Objective C displaced C++?

-- Harley Davis

------------------------------------------------------------------------------
nom: Harley Davis ILOG S.A.
net: da...@ilog.fr 2 Avenue Gallie'ni, BP 85
tel: (33 1) 46 63 66 66 94253 Gentilly Cedex, France

Harley Davis

unread,
Mar 16, 1993, 5:42:43 AM3/16/93
to
Newsgroups: comp.lang.dylan
Sender: use...@nntpd.lkg.dec.com (USENET News System)
Reply-To: m...@ktbush.ogo.dec.com (Michael J. Grier)
Date: Tue, 16 Mar 1993 02:22:40 GMT

Being an "industry person" rather than an academic, I would
give Dylan a 0% chance of being accepted as a software engineering
language. It's a neat language which accomplishes its goals of
basically giving us CLOS which can be more efficient at run-time,
but I wouldn't wish it on an engineering team.

Michael J. Grier Digital Equipment Corporation


(508) 496-8417 m...@ktbush.ogo.dec.com
Stow, Mass, USA Mailstop OGO1-1/E16

Based on the Dylan book, what makes you think that a good
implementation of Dylan would be more efficient at runtime than a good
implementation of CLOS?

s...@sef-pmax.slisp.cs.cmu.edu

unread,
Mar 16, 1993, 11:06:15 AM3/16/93
to

From: m...@ktbush.ogo.dec.com (Michael J. Grier)

[Short digression on this definition: I realize that browsers are possible
solutions to this problem, but much like the discussion on how different
editors can help people find the matching closing parens in S-expressions or
braces in C language source files, unless these tools are part of the Dylan
language definition, they really can't be used in a discussion on the
properties of the language, because they aren't.]

I disagree with this comment. While it's true that a browsing and
program-development environment is not part of the Dylan spec (many people,
after all, are determined to reject any language for which the manual, when
printed on dead tree-stuff, weighs more than a pound), I think it is
legitimate to think about what will be the typical program-development
environment in the next few years and to optimize Dylan's design for that
kind of environment. Throwing out an idea because it may lead to some
awkwardness when poring over paper listings makes about as much sense as
requiring all Dylan statements to be no more than 80 characters so that
they will fit on a single punched card.

In any object-oriented language with inheritance (even single inheritance,
but the problem is worse with multiple inheritance), there is the problem
of having to look through a chain of a dozen methods in order to assemble a
complete picture of how a certain object responds to a certain message.
Some of the inherited behavior may come from different modules or files.
In such a situation you NEED a first-rate browser. Once you have one, it's
a relatively small step to keep track of multi-methods and to find ALL the
routines in a system that twiddle the innards of some object.

Encapsulation isn't a matter of morality, but rather a convenience for
programmers and program maintainers. It should be easy to set up
conceptual boundaries that are not ROUTINELY penetrated, but in a
reflective language we need to be able to write tools that deliberately
penetrate the firewalls. As long as all such penetrations are noticed,
recorded, and easily traced in a modern programming environment, I see no
problem. Yes, we should make sure that there is still a decent printed
form for Dylan, so that people can read programs at the beach without
getting their Powerbook full of sand (or stolen, more likely), but if
choices must be made, I think it would be better to favor the more common
and more important case of people working within a modern, active
environment. (Of course, this assumes that we believe someone will provide
such an environment for Dylan in readily available form. I know of several
groups that plan to do this.)

The assumption of a proper program editing environment also makes the
question of choosing a single syntax a non-issue. I suppose there needs to
be a designated "primary" syntax for people who still want to write Dylan
textbooks and papers on bits of dead tree-stuff, and in my opinion that
should be a syntax-rich (or parenthesis-poor) form, but the idea of
learning a language like Dylan (or Lisp or Smalltalk) away from a computer
running the language in some nice environment seems increasingly bizarre to
me.

-- Scott

===========================================================================
Scott E. Fahlman Internet: se...@cs.cmu.edu
Senior Research Scientist Phone: 412 268-2575
School of Computer Science Fax: 412 681-5739
Carnegie Mellon University Latitude: 40:26:33 N
5000 Forbes Avenue Longitude: 79:56:48 W
Pittsburgh, PA 15213
===========================================================================

Robin Popplestone

unread,
Mar 16, 1993, 1:57:22 PM3/16/93
to
Jeff Dalton:

> Actually, you can get it by adding a feature. You can get it by
> adding a feature that seals things up so that you can't get inside
> them any more. Whether this is a good way to do it is another
> question.

The feature exists. It is called a *closure*. It is implicit in the lambda
calculus; built into the design of CPL by Strachey (c.1965; I don't know
the extent of implementation); built into the implementation of POP-2 by
myself and Burstall (c.1968), and used as the encapsulation mechanism in
the Multipop time-sharing system (thus a disc file was presented to the
user as an opaque partial application, roughly

read_from_buffer(%track,sector%)

); designed into Scheme from the beginning; and incorporated ultimately
into Common Lisp.

Many other mechanisms (e.g. sections in POP, packages in LISP) are
efficiency bodges to allow for the fact that closures are perceived as
a pricey, general purpose mechanism. This view may not in fact be
justified - e.g. closure-by-partial-application, POP-style, treats free
variables as extra parameters, which in the read-only case will simply be
register accesses.
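Popplestone's disc-file example can be sketched in Python, where the closure captures the track and sector as free variables and the user receives only an opaque callable (the names and the returned tuple are invented for this sketch):

```python
def read_from_buffer(track, sector):
    # Stand-in for the real device read; returns a tuple so the
    # behaviour is visible in this sketch.
    return ("data", track, sector)

def make_file(track, sector):
    # The free variables `track` and `sector` are captured by the
    # closure; the user gets only an opaque zero-argument callable
    # and cannot reach the coordinates from outside.  This is the
    # closure-as-encapsulation mechanism described above.
    def read():
        return read_from_buffer(track, sector)
    return read
```

Here the "file" really is nothing but a partial application: all state lives in the captured variables, with no record type or module boundary needed.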

I think that a genuine need for other mechanisms arises from static typing,
as found in the ML structure mechanism.


Robin.

Jeff Dalton

unread,
Mar 16, 1993, 2:21:14 PM3/16/93
to
> From: "Michael J. Grier" <m...@com.dec.ogo.ktbush>

> In article <C3uoH...@cs.uiuc.edu>, joh...@cs.uiuc.edu (Ralph Johnson) writes:
>
> [interesting reading deleted]
>
> |>I just don't believe the claims that generic functions have less
> |>encapsulation than message passing languages.

> You state that you don't believe the claim that generic functions have
> less encapsulation than message passing languages.

A number of potential confusions can be avoided by being a bit
pedantic about this claim. We shouldn't be comparing generic
_functions_ with message-passing _languages_. Moreover, we
should distinguish between generic functions and multi-methods.

If we're asking whether Dylan or Common Lisp are _purely_ OO
languages, then I think the answer will be "no", even though a
case might be made to the contrary. Nonetheless both of them
have an OO subset.

Even that seems to be a somewhat controversial claim, but I hope
we can all accept that the presence of non-OO features doesn't
prevent there being an OO subset. So, if multi-methods turn out
to be non-OO, we can just regard them as outside the subset.

Multi-methods certainly move you further from the basic idea behind
OO programming, namely that objects are active entities in their own
right and each object gets to determine how it responds to messages.
Classes also represent a departure from this, but a less significant
one. Multi-methods are not associated with a single object or class
of objects. The way you think about programs also changes, with more
emphasis on protocols and less on classes. But if we can accommodate
classes as OO, I think we can accommodate this as well.

With classes, methods are no longer associated with individual
objects, they're associated with classes. But we can still think
of them as properties of objects -- it's just that we give
explicit recognition to a particular structure of common properties.

Multi-methods can also be brought back to objects. Instead of
a single object determining its own behavior, we have a group of
objects agreeing on their behavior in an action that involves
them all. (I'm agreeing with Bob Kerns here.)
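As a rough illustration of what dispatching on the classes of *all* arguments means, here is a toy multi-method registry in Python (hand-rolled and hypothetical; real CLOS/Dylan dispatch also orders applicable methods by specificity, which this sketch omits):

```python
# A toy multi-method: the applicable method is chosen by the classes
# of both arguments, not just the first.
_methods = {}

def defmethod(*types):
    """Register the decorated function for this tuple of argument types."""
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    """Find and call a method applicable to the classes of both arguments."""
    for (ta, tb), fn in _methods.items():
        if isinstance(a, ta) and isinstance(b, tb):
            return fn(a, b)
    raise TypeError("no applicable method for collide")

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Ship)
def _(asteroid, ship):
    return "ship destroyed"

@defmethod(Asteroid, Asteroid)
def _(a, b):
    return "rocks bounce off each other"
```

Neither `Asteroid` nor `Ship` "owns" the collision behaviour; in Jeff's terms, the two objects jointly determine it, which is exactly what makes the method hard to file under a single class.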

> Ok, now, down to the meat of the issue. Given that our (working) unit
> of encapsulation should be a source text, if the methods invoked given
> a particular generic function call cannot be determined by inspecting
> a single source file, then it is possible to break encapsulation.

In effect, this is just a way to say the behavior isn't associated
with a single object or class. Of course, that's inevitable when
an operation sooner or later involves several objects, but we lose
the ability to separate the part of the behavior that is so associated.
I think this is a real problem. If you want to find all the method
definitions that are relevant to a particular object or class, it's
not clear where you have to look. There's a danger that you'll end
up thinking in terms of functions, organizing your method definitions
by function instead of by class, and so on. The people who've been
saying this sort of thing have a point, no question about it.

However, given that objects/classes are going to have to cooperate
in many cases, can it really be wrong to provide explicit support
for a particular form of cooperation? It seems to me that it is
not wrong, in itself, but that this support is incomplete without
tools for managing it. Without tools that recognize this form
of cooperation and make it easy to find the relevant methods, and
so forth, it may be that multi-methods _are_ too difficult to
manage.

> [Short digression on this definition: I realize that browsers are possible
> solutions to this problem, but much like the discussion on how different
> editors can help people find the matching closing parens in S-expressions or
> braces in C language source files, unless these tools are part of the Dylan
> language definition, they really can't be used in a discussion on the
> properties of the language, because they aren't.]

I agree that merely uttering the magic word "browser" is not enough.
What we need is for the advocates of multi-methods to set out effective
ways to manage them in the sorts of programming environments most of
us actually have to use. But the lack of such advice would not make
multi-methods non-OO.

---
This leaves generic functions. Some people will claim that a
function-call syntax is incompatible with being OO. That is,
they claim it makes all the difference that you write

(<operation> <object> ...)

instead of

(<object> <operation> ...)

or

(send <object> <operation> ...)

I don't think it makes that much difference, but if someone disagrees
I don't know what I can say to get them to change their minds.
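For what it's worth, in a language like Python the two styles name the same underlying operation, which suggests the difference really is surface syntax (an illustrative analogy, not something from the thread):

```python
# obj.method() is just sugar for type(obj).method(obj): the
# message-send and the function-call views coincide.
class Point:
    def draw(self):
        return "drawing a point"

p = Point()
message_send = p.draw()        # (send <object> <operation>) style
function_call = Point.draw(p)  # (<operation> <object>) style
```

Both calls run the same code on the same object; only the spelling differs.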

-- jeff

Michael J. Grier

unread,
Mar 16, 1993, 5:43:18 PM3/16/93
to

In article <C3zoqH...@cs.cmu.edu>, s...@sef-pmax.slisp.cs.cmu.edu writes:
|> From: m...@ktbush.ogo.dec.com (Michael J. Grier)
|>
|> [Short digression on this definition: I realize that browsers are possible
|> solutions to this problem, but much like the discussion on how different
|> editors can help people find the matching closing parens in S-expressions or
|> braces in C language source files, unless these tools are part of the Dylan
|> language definition, they really can't be used in a discussion on the
|> properties of the language, because they aren't.]
|>
|>I disagree with this comment. While it's true that a browsing and
|>program-development environmnet is not part of the Dylan spec (many people,
|>after all, are determined to reject any language for which the manual, when
|>printed on dead tree-stuff, weighs more than a pound), I think it is
|>legitimate to think about what will be the typical program-development
|>environment in the next few years and to optimize Dylan's design for that
|>kind of environment. Throwing out an idea because it may lead to some
|>awkwardness when poring over paper listings makes about as much sense as
|>requiring all Dylan statements to be no more than 80 characters so that
|>they will fit on a single punched card.

Clearly I brought up the issue because I realize the benefit in using
browsers and more modern tools to develop software systems. However, the
reality is that the world's software engineering largely takes a
least-common-denominator approach.

I guess that Dylan supporters have to ask themselves what is the purpose of
introducing Yet Another Language into the arena. (I used to be bright eyed
and bushy tailed about designing The Ultimate Language. I've gotten a little
more realistic as years have advanced.)

Is Dylan supposed to be a language to support software engineering, or
the existence proofs which computational scientists like to develop?

The latter is *clearly* not needed, as from what I've seen, Dylan offers
little over Common LISP or Scheme, whichever your religion happens to be,
plus an object/generic-function mechanism. Sealing seems to be the primary
feature which Dylan has over CLOS, and it's a run-time/efficiency measure.

So, the alternative is that you/we want to build software products using
the language. That's great! I think that there *is* a real place for an
OO/dynamically-typed language like Dylan. [Actually, I can't personally figure
out why anyone trying to build a software system which doesn't include bugs
would want a dynamically typed language, but that's religion which isn't
very useful to this discussion. I just wanted to not contradict a flame-fest
I took part in a couple of years ago in comp.object...]

Just try to consider how these discussions can be applied to an engineer
who's been called up at 2:00 in the morning because a *Very Important* customer
has a critical issue and they're (the engineer) dialed in on a VT-100 or
equivalent terminal over a 2400 baud modem. (I'm lucky, I snagged one of the
9,600 baud modems and I have a Mac from work as well as a self-purchased PC,
but I wouldn't consider that the status quo. Be careful also not to fool
yourself into thinking that the average engineer who reads or posts to USENET
is in any way the average software engineer.)

Thinking about this issue last night, I want to look into a new slant on
language design and research: language definition oriented towards code
maintainability. I live maintenance every single day, it cuts deeply into my
efforts to build ourselves out of whatever accidental holes we created in
the scramble to design and build systems before deadlines and funding cuts.

I had been considering lack of browsing capability to be an assumption.
Perhaps this should be questioned, which then leads me into the last part of
my statement: if the language design doesn't include browsing tools, we have
to interpolate what programming environment(s) will be available without
browsing tools. Maybe the language design *should* include browsing tools.

But if we're going to consider including browsing tools in the language
definition, which up to this point has been a textual-syntax based
specification, why not make the total leap and map the semantics of Dylan
into a truly visual language? That seems to be more along the
lines of intent of the Newton (which is how I personally first heard about
Dylan.)

To summarize: if the language design does not explicitly include a browser
or other mechanism to address these encapsulation issues, then we have a
language with which we can build very large and complex systems, the behaviour
of which will be very difficult to predict (at some point, the apparent
behaviour of a generic function becomes, for all human intents and purposes,
non-deterministic) and extremely difficult to maintain.

If the language design *does* want to address these issues, I'd like to
suggest that the language be centered around a more encompassing development/
CASE environment. Smalltalk did this, and I'm not personally sure if the
smalltalk user environment limited its acceptance into the mainstream of
computer programming languages, or helped get what was an unusual approach
to problem solving wedged into places where people who saw the value of the
OO approach and the graphical environment could latch onto it.

Down to brass tacks, if every Dylan installation had some way of saying
"let me view all the source code for implementations of the <insert-generic-
function-name-here> generic function", I would still be concerned, because
modules have more value as chunks of information for a person to learn than
just as a unit to use with a text editor, but it would be clear that the
needs of the software maintainer were acknowledged in trying to introduce
this language as a mainstream software engineering language.

(apologies for rambling)

|>
|>In any object-oriented language with inheritance (even single inheritance,
|>but the problem is worse with multiple inheritance), there is the problem
|>of having to look through a chain of a dozen methods in order to assemble a
|>complete picture of how a certain object responds to a certain message.
|>Some of the inherited behavior may come from different modules or files.
|>In such a situation you NEED a first-rate browser. Once you have one, it's
|>a relatively small step to keep track of multi-methods and to find ALL the
|>routines in a system that twiddle the innards of some object.

Absolutely. Are you telling me that all Dylan implementations will include
such a browser? Then why isn't it part of the language definition?

|>
|>Encapsulation isn't a matter of morality, but rather a convenience for

|>programmers and program maintainers. (...)

Who are many, many times more important than the computer. Remember that
the choice of a computer programming language isn't so much about making
the solution you are attempting clear to the computer as about making it
clear to other programmers. Does that make it a religion? Well, it's a
religion that's costing billions of dollars and prompting the DoD to plop
down large amounts of cash to try to solve it.


|> It should be easy to set up
|>conceptual boundaries that are not ROUTINELY penetrated, but in a
|>reflective language we need to be able to write tools that deliberately
|>penetrate the firewalls. As long as all such penetrations are noticed,
|>recorded, and easily traced in a modern programming environment, I see no
|>problem. Yes, we should make sure that there is still a decent printed
|>form for Dylan, so that people can read programs at the beach without
|>getting their Powerbook full of sand (or stolen, more likely), but if
|>choices must be made, I think it would be better to favor the more common
|>and more important case of people working within a modern, active
|>environment. (Of coruse, this assumes that we believe someone will provide
|>such an environment for Dylan in readily available form. I know of several
|>groups that plan to do this.)

In a previous posting, someone categorized computer scientists as being
in any of a number of categories. Any computer scientist who is thinking
of being a software engineer had better consider themselves as being in
the "paranoid" (type 1) category. Meaning that given the choice between
the flexibility of being able to "selectively" break firewalled module
boundaries and not being able to except through well-defined interfaces,
I'd take the strong firewalls any day. Implicit interfaces are the mortal
enemy of software maintenance.

|>
|>The assumption of a proper program editing environment also makes the
|>question of choosing a single syntax a non-issue. I suppose there needs to
|>be a desingated "primary" syntax for people who still want to write Dylan
|>textbooks and papers on bits of dead tree-stuff, and in my opinion that
|>should be a syntax-rich (or parenthesis-poor) form, but the idea of
|>learning a language like Dylan (or Lisp or Smalltalk) away from a computer
|>running the language in some nice environment seems increasingly bizarre to
|>me.

If it's bizarre, make it explicitly part of the language and don't kid
yourselves that you're going to try to show the masses of COBOL and C coders
out there how much better Dylan would make their lives. It won't and you'll
end up as yet another Trellis or Smalltalk.

The adoption of languages is very much a marketing issue. When you're
introducing a new product of any sort, you have a fairly basic decision to
make, are you going to try to compete in some field, or are you going to try
to define a new field in which you can be the leader.

If Dylan is to compete with C, Pascal and COBOL, these issues of source
file maintenance are *CRITICAL*. It's a large market, and there's a good
argument that a language like Dylan could possibly acquire a chunk of it.
C++ is starting to do just that, and I personally wouldn't mind a bit if its
advancement was stopped. ;-)

It sounds like the Dylan supporters want to define a new market. Like I
said before, that's absolutely great. Toss the S-expression syntax, define
the language's semantics abstractly, with a graphical representation and
mechanism to maintain it, and go for it! There's a distinct lack of
visual languages which are in any way important in the software engineering
field. Visual BASIC and now Visual C++ are the evolution of the old-style
language world evolving, but given that there is no Dylan base today, instead
of scaring folks off with the attempt to mix "modularity", generic functions
and LISP-style syntax, instead define a new way of doing things.

|>
|>-- Scott
|>

------------------------------------------------------------------------
I'm saying this, not Digital. Don't hold them responsible for it!

Michael J. Grier Digital Equipment Corporation

s...@sef-pmax.slisp.cs.cmu.edu

unread,
Mar 17, 1993, 3:22:36 AM3/17/93
to

Is Dylan supposed to be a language to support software engineering, or
the existence proofs which computational scientists like to develop?

I certainly don't speak for Apple, who ultimately get to decide what the
goals of Dylan(tm) are. But here are some ideas about what I think the Dylan
goals should be:

A lot of good ideas about dynamic programming languages and environments
have been developed over the last decade or two. These are your "existence
proofs". I think that the time has come to harvest a large, coherent set
of these good ideas and to try to cast them into a form that will show
people how these ideas, when used together (and without some of the
mistakes of, say, Common Lisp) can be of real, practical value in producing
real, practical programs. Enormous value, when compared to C, et al.

Such a demonstration will require both a programming language and a
first-rate program-development environment if it is to succeed. I believe
that Dylan is an excellent candidate for the language part of the system;
there are a number of contenders for the environment part. I think it
makes perfect sense to say that Dylan is a language optimized for use
within SOME powerful program development environment, without trying to
cram the specification for that environment into the Dylan language spec.
I don't understand why you have such a problem with that idea.

If Dylan is optimized for use within a powerful environment and if such
environments for Dylan never arrive, Dylan will die. I'm not worried about
that. In any case, I see no point in producing Yet Another Language for
programmers working with pencil and paper -- there are plenty of those
languages, and probably not much more leverage can be gained within that
set of assumptions.

Lisp and Smalltalk are rich-environment languages. They are barely
readable and nearly impossible to write without a good environment, but
they are extremely powerful given the proper environment. Dylan should
continue in that tradition. It is arguable that this
environment-dependency is what prevented Lisp and Smalltalk from becoming
mainstream languages. They were ahead of their time: rich-environment
languages in a pencil-and-paper (and later a 640K PC) world. A lot of
people hate Lisp just because they were forced to use it without a proper
environment and never recovered. Well, powerful machines are now
ubiquitous, and it is time to start using that power for software
engineering, instead of always worrying about the least common denominator.
(This, by the way, is a good strategy for Apple, since they sell the sort
of powerful, portable, easy-to-administer machines that such an environment
will depend on.)

Just try to consider how these discussions can be applied to an engineer
who's been called up at 2:00 in the morning because a *Very Important* customer
has a critical issue and they're (the engineer) dialed in on a VT-100 or
equivalent terminal over a 2400 baud modem.

Screw him. This company deserves to go out of business. If they won't buy
Powerbooks or X-terminals and fast modems for their key people, who might
get called at home, then they should stick with primitive languages
designed to work well in their cheap non-environment. Until someone with
better tools blows them away...

Maybe the VT100 is what "mainstream" means today, but the object of this
exercise should be to raise that "mainstream" standard to the next level.
You don't do that by assuming the most primitive conditions and then
crippling the langauge to fit.

Thinking about this issue last night, I want to look into a new slant on
language design and research: language definition oriented towards code
maintainability. I live maintenance every single day, it cuts deeply into my
efforts to build ourselves out of whatever accidental holes we created in
the scramble to design and build systems before deadlines and funding cuts.

Yes, maintenance is a very important part of the picture -- probably the
most important part. Both language and environment must play a role here,
but the environment plays the larger role.

But if we're going to consider including browsing tools in the language
definition, which up to this point has been a textual-syntax based
specification, why not make the total leap and take the semantics of Dylan
mapping them into a truly visual language? That seems to be more along the
lines of intent of the Newton (which is how I personally first heard about
Dylan.)

Every visual programming environment I have seen is worthless for
non-trivial programming. Maybe I just haven't seen the good ones. But it
is not obvious to me that visual programming is a step forward. Of course,
in a multi-mode environment, various flavors of visual programming could be
among the modes supported.

Jeff Dalton

Mar 17, 1993, 1:54:17 PM
to
> From: "Michael J. Grier" <m...@com.dec.ogo.ktbush>
> Subject: Re: What's new in Dylan? Is it even OO?
> Message-Id: <1993Mar16.2...@nntpd.lkg.dec.com>

> [Actually, I can't personally figure
> out why anyone trying to build a software system which doesn't include
> bugs would want a dynamically typed language, ...]

> Just try to consider how these discussions can be applied to an engineer
> who's been called up at 2:00

Of course, one reason an engineer would be called at 2 might be
that there's a bug. This can happen even if you're "trying to build
a software system which doesn't include bugs", and dealing with bugs
is one of the cases where dynamic typing can help (because the actual
objects contain type information).
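(A hypothetical sketch of what that looks like in practice, in Dylan's prefix syntax; the variable name is made up:)

```lisp
;; Every Dylan object carries its class at run time, so a live
;; debugging session can interrogate the offending value directly:
(object-class mystery-value)        ; which class is this, really?
(instance? mystery-value <number>)  ; is it at least a number?
```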

> Absolutely. Are you telling me that all Dylan implementations will include
> such a browser? Then why isn't it part of the language definition?

Why does everything have to be in one volume? Under one title?
Just what's the problem here?

Meredith Lesly

Mar 21, 1993, 3:39:27 PM
to
In article <CGAY.93Ma...@majestix.cs.uoregon.edu> cg...@cs.uoregon.edu writes:
>Using generic functions instead of message passing needn't change the
>way you *design* your program. You can still center the design around
>the objects to be modelled.

Not necessarily. For example, I would really like to be able to define
a method (message?) called "Draw" which would draw any drawable thing. In
Object Lisp, arguments only had to coincide within a subtree of the hierarchy.
In CLOS (and Dylan, and others), I have to use the same set of arguments
(barring keyword variations) for every object, even if they are unrelated.
One of the first features I appreciated when moving to OO was not having
to have millions of methods called "draw-foo" "draw-bar" "draw-quux".
Generic functions send me right back to millions of names.
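(To make the complaint concrete, here is a rough sketch in Dylan's prefix syntax, with made-up class names:)

```lisp
;; One generic function fixes one argument list for all its methods:
(define-generic-function draw (thing window))
(define-method draw ((s <sprite>) (w <window>))
  #f)  ; stand-in body
;; A drawable that needs an extra argument, say a scale factor,
;; can't be another "draw" method; it needs a second name:
(define-generic-function draw-scaled (thing window scale))
```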

Meredith Lesly

Mar 21, 1993, 4:35:13 PM
to
In article <20301.93...@subnode.aiai.ed.ac.uk> je...@aiai.edinburgh.ac.uk (Jeff Dalton) writes:
>[Lots of stuff omitted]

>---
>This leaves generic functions. Some people will claim that a
>function-call syntax is incompatible with being OO. That is,
>they claim it makes all the difference that you write
>
> (<operation> <object> ...)
>
>instead of
>
> (<object> <operation> ...)
>
>or
>
> (send <object> <operation> ...)
>
>I don't think it makes that much difference, but if someone disagrees
>I don't know what I can say to get then to change their mind.

Well, yes, you're right in this particular case. But what about the
numerous methods (generic functions, whatever) that don't take the
dispatching argument as the first one. To pick an example at random,
the generic function for any? in Dylan is

(define-generic-function any? (procedure collection #rest more-collections))

I'm pretty sure that the author of this does not consider "procedure" the
"receiver" of the message "any?". If there weren't a #rest argument, one
could well argue that "collection" was the receiver (as is the case in
several other Dylan methods), but with the #rest there is no single
receiver. The ST-80 equivalent is <collection> detect: aBlock (omitting
the #rest arguments).

Personally, I think of any? as "Does this collection contain any members
that cause procedure to return true?" Similarly, I think of member? as
"Does this collection contain this object?" My mental receiver is collection.
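(Side by side, with a hypothetical list:)

```lisp
;; Dylan: the collection is just the second argument of a call...
(any? even? '(1 2 3))
;; ...where Smalltalk-80 writes the collection first, as the receiver:
;;   aCollection detect: [:x | x even]
```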

Bob Kerns

Mar 21, 1993, 9:34:03 PM
to
Date: Sun, 21 Mar 1993 20:39:27 GMT
From: gy...@world.std.com (Meredith Lesly)

Not necessarily. For example, I would really like to be able to define
a method (message?) called "Draw" which would draw any drawable thing. In
Object Lisp, arguments only had to coincide within a subtree of the hierarchy.
In CLOS (and Dylan, and others), I have to use the same set of arguments
(barring keyword variations) for every object, even if they are unrelated.
One of the first features I appreciated when moving to OO was not having
to have millions of methods called "draw-foo" "draw-bar" "draw-quux".
Generic functions send me right back to millions of names.

This isn't inherent in generic functions. I agree that overloading
is helpful. I'd particularly like to do:

(defmethod draw ((obj foo))
  (draw obj (foo-window obj)))

C++ supports this at compile time only, because it doesn't support
runtime dispatch on the number of arguments (because C doesn't, to its great loss).

But it's not as bad as you make out, either. You don't need
draw-foo, draw-bar, draw-quux, etc. You're only going to need
a handful at most, and probably not more than two or three different
draw generics. If you really work hard to make things inconsistent,
you might have some objects which don't have a built-in position,
and so have to have that supplied, while others do, and similar
kinds of protocol variation. But I think the shared contract will
predominate, and you'll only need a very limited number of generics.
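(The "two or three generics" shape might look something like this, in Dylan's prefix syntax with hypothetical names:)

```lisp
;; The shared contract most drawables honor:
(define-generic-function draw (thing window))
;; One more generic for objects with no built-in position:
(define-generic-function draw-at (thing window position))
```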
