
is CLOS really OO?


Kevin Haddock

May 4, 1998

Hi, I'm somewhat new to lisp but upon looking into CLOS it would appear
that the primary way to set slot values is with setf. Now I know that
setf is a generic function (which means it could do different things
with different objects given as arguments), but isn't it fundamentally
wrong for one object to be tinkering around with another's private data?

I have tried creating specific methods for accessing the private data of
objects, but that does not seem very elegant either. You have to match
the number of arguments whether you are storing or retrieving, or you
have to do specific tests to see if the second argument (the one to
store) is there. Is there something I am missing here or is that just
the nature of the beast? What happens, say, if I create a class and all
throughout my code I set one of its slots with setf, then later in the
development I decide that that slot will not be an actual slot, but
derived from several other slots and is now merely a method. Do I
have to go all throughout my program and change the setf's into method
calls?

Thanks in advance for your feedback,


-Kevin
khad...@ecst.csuchico.edu

Lyman S. Taylor

May 4, 1998

In article <354DEDA4...@ecst.csuchico.edu>,
Kevin Haddock <khad...@ecst.csuchico.edu> wrote:
....

>wrong for one object to be tinkering around with another's private data?

For namespace privacy, use packages. CLOS doesn't manage
namespace control. The fact that "privacy" isn't syntactically
co-located with the class declaration doesn't render CLOS non-OO, IMHO.
It may not be as convenient a notation... but that's a syntax argument.


If you have objects and methods you don't want the "user" of your
object library to use, place them in a package and only EXPORT those
methods and class names that the user is supposed to use.
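
A minimal sketch of that approach (the package, class, and accessor names
here are invented for illustration):

(defpackage "SHIPYARD"
  (:use "COMMON-LISP")
  (:export "SHIP" "MAKE-SHIP" "SHIP-CAPTAIN"))

(in-package "SHIPYARD")

(defclass ship ()
  ((captain :accessor ship-captain :initarg :captain)
   ;; internal bookkeeping, deliberately not exported
   (serial-number :accessor ship-serial-number :initform 0)))

(defun make-ship (&rest initargs)
  (apply #'make-instance 'ship initargs))

Client code in another package sees only the exported names:

(shipyard:ship-captain (shipyard:make-ship :captain 'nemo)) ==> NEMO

(shipyard::ship-serial-number ...) still works, of course, but the double
colon makes the trespass visible at the call site.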

[ Can the "user" of your library sneak in and subvert your package
"privacy"? Sure. However, most other OO languages have "back doors"
too. Perhaps with higher level of difficultly to implement...but
the truely dedicated can subvert them none the less. ]


>I have tried creating specific methods for accessing the private data of
>objects, but that does not seem very elegant either. You have to match
>the number of arguments wether you are storing or retreiving, or you
>have to do specific tests to see if the second argument (the one to
>store) is there.

Separate slot getter/setter methods... that's basically what you have to
do in Smalltalk, and I don't think there are very many who doubt it is OO.

(shape-height aShape ) ==> value

(shape-height-setter aShape new-value ) ==> new-value

That shouldn't collapse into one method with an optional arg.


>the nature of the beast? What happens, say, if I create a class and all
>throughout my code I set one of it's slots with setf, then later in the
>development I decide that that slot will not be an actual slot, but
>derived from several other slots and now just merely a method. Do I
>have to go all throughout my program and change the setf's into method
>calls?

I don't think so. I think you could just define a DEFSETF that
invokes whatever you come up with for a setter method... that's
all DEFCLASS (or one of its minions) does for you automagically anyway
(if I remember correctly).
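
For instance, a rough sketch reusing the SHAPE-HEIGHT / SHAPE-HEIGHT-SETTER
names from the example above:

(defsetf shape-height (shape) (new-value)
  `(shape-height-setter ,shape ,new-value))

Existing call sites then keep working unchanged:

(setf (shape-height aShape) 10)   ; expands into (shape-height-setter aShape 10)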

In this particular case, I think you may have an OO design problem: what
does setting a "virtual" slot mean? If its value is derived from
several other slots, how does a new slot value propagate back
into its numerous sources? You would need a unique mapping from
a single value back to a set of values.

f( a, b, c ) ==> virtual value

inverse-f( virtual-value ) ==> ( a, b, c )

Perhaps the original slot should have been "read only" in the first
place. Then you wouldn't have setf scattered about... and switching
to a virtual implementation would be totally transparent.
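
Something along these lines (a sketch with made-up slots). First the slot
is real but exposed only through a reader:

(defclass rectangle ()
  ((width  :initarg :width  :accessor rectangle-width)
   (height :initarg :height :accessor rectangle-height)
   (area   :initarg :area   :reader rectangle-area)))

Later, AREA becomes a derived value, and callers of RECTANGLE-AREA never
notice, since they only ever read it:

(defclass rectangle ()
  ((width  :initarg :width  :accessor rectangle-width)
   (height :initarg :height :accessor rectangle-height)))

(defmethod rectangle-area ((r rectangle))
  (* (rectangle-width r) (rectangle-height r)))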

--

Lyman S. Taylor "Twinkie Cream; food of the Gods"
(ly...@cc.gatech.edu) Jarod, "The Pretender"


Kent M Pitman

May 4, 1998

Kevin Haddock <khad...@ecst.csuchico.edu> writes:

I think your subject line implies some things you don't realize it
does. The term OO has existed a long time and its original meaning
has been co-opted by more modern systems that, while they supply
some interesting functionality (like "encapsulation") do not supply
other features classically associated with OO (like "dynamic typing
for all objects"). The question of "What is OO?" is at this point
inevitably a political one and one that all concerned would do best
to avoid. You're better off to pick the specific features of an OO
system that matter to you and ask about those individually rather
than blundering in thinking it's meaningful to ask whether CLOS is
"real" OO or not. For further remarks on this topic, see my
"Parenthetically Speaking" articles "What's in a Name?" at
http://world.std.com/~pitman/PS/Name.html
and "Lambda: The Ultimate Political Party" at
http://world.std.com/~pitman/PS/Lambda.html

> I know that setf is a generic function (which means it could do
> different things with different objects given as arguments

No, SETF is a macro. It makes its decision about what to do based
on static information. One of the things it might do is to call a
generic function if no static information is known about how to do
the assignment. But in the case of SETF of SLOT-VALUE, that's really
not what happens.
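
For instance (roughly -- the exact expansions are implementation-dependent,
and SHIP-CAPTAIN here stands for any CLOS slot accessor):

(macroexpand-1 '(setf (car x) 1))
==> something like (RPLACA X 1) -- decided statically, no generic dispatch

(macroexpand-1 '(setf (ship-captain x) 'nemo))
==> something like (FUNCALL #'(SETF SHIP-CAPTAIN) 'NEMO X) -- SETF merely
    arranges the call; the generic function (SETF SHIP-CAPTAIN) is what
    dispatches on its arguments at run time.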

> ) but isn't it fundamentally
> wrong for one object to be tinkering around with another's private data?

As discussed recently in another thread (search for "private" in this
group within the last month at
http://www.dejanews.com/home_ps.shtml
to find the thread), Common Lisp objects really have no private data.
That fact may be lamentable to some (and not to others) but trying to
begin from a model of privacy and trying to find it in CL won't lead you
anywhere.

> I have tried creating specific methods for accessing the private data of
> objects, but that does not seem very elegant either.
> You have to match
> the number of arguments whether you are storing or retrieving, or you
> have to do specific tests to see if the second argument (the one to
> store) is there. Is there something I am missing here or is that just
> the nature of the beast?

That's what :reader and :accessor are for in slot descriptions, for
example: to help you write these tedious little methods.

Rainer Joswig

May 4, 1998

Kevin Haddock wrote in message <354DEDA4...@ecst.csuchico.edu>...

Ask five OO experts for their definition of OO-ness. You will get at least
six answers. Most people know only about a message-passing version.

>Hi, I'm somewhat new to lisp but upon looking into CLOS it would appear
>that the primary way to set slot values is with setf.

There are two ways.

First, you can set the slot with (setf slot-value):

(defclass ship ()
  (pos-x
   pos-y
   captain))

(setf titanic (make-instance 'ship))

(setf (slot-value titanic 'pos-x) 10
      (slot-value titanic 'pos-y) 20)


CL-USER 15 > (describe titanic)

#<SHIP 235A4C04> is a SHIP
POS-X 10
POS-Y 20
CAPTAIN #<unbound slot>

Second, you can use a slot-writer method:

(defclass ship ()
  (pos-x
   pos-y
   (captain :accessor ship-captain)))

CL-USER 16 > (setf nautilus (make-instance 'ship))
#<SHIP 203A7AC4>

CL-USER 17 > (setf (ship-captain nautilus) 'nemo)
NEMO

> Now I know that
>setf is a generic function (which means it could do different things
>with different objects given as arguments),

It is not setf that is the generic function, but the writer method that
you invoke via setf.

> but isn't it fundamentally
>wrong for one object to be tinkering around with another's private data?


Keep in mind that generic functions are not attached to classes as they
are in languages like C++, Eiffel, or Smalltalk.

Methods are not part of a class definition. They stand alone and are
defined outside of classes; there are no "inside" methods. This takes
some time to understand when coming from a different model
(message passing between objects, where functionality
is part of the object). CLOS deals with classes and objects.
Generic functions are functions (input parameters and return values)
whose selected implementation depends on the arguments provided.
CLOS also computes an effective method depending
on one or more arguments; most other OO languages
dispatch on only one argument (the message receiver).

>What happens, say, if I create a class and all
>throughout my code I set one of it's slots with setf, then later in the
>development I decide that that slot will not be an actual slot, but
>derived from several other slots and now just merely a method. Do I
>have to go all throughout my program and change the setf's into method
>calls?

The idea of setf is that you don't have to remember how to actually
change a certain data structure (the content of an array at an index,
the value for a key in a hash table, the slot of an object, whatever).
This is all hidden behind the setf mechanism. Knowing the
reader is enough.
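
For example, given an array V, a hash table H, and the ship NAUTILUS from
above, the very same idiom covers all three; only the reader differs:

(setf (aref v 0) 10)                  ; an array element
(setf (gethash 'nemo h) t)            ; a hash-table entry
(setf (ship-captain nautilus) 'nemo)  ; a CLOS slot, via its accessor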

You can also write setf methods for writer/accessor functions yourself:

Here SHIP-CAPTAIN is an existing accessor (see the class definition above).

(defmethod (setf ship-captain) :after (new-value (the-ship ship))
  (print "Just changed a slot."))

CL-USER 18 > (setf (ship-captain nautilus) 'hornblower)

"Just changed a slot."
HORNBLOWER

Writing a primary setf method:

We define a setf method for SHIP-POS. The new value will
be a list of the x and y positions. We then set the
pos-x and pos-y slots.

(defmethod (setf ship-pos) (new-position (the-ship ship))
  (when (listp new-position)
    (setf (slot-value the-ship 'pos-x) (first new-position)
          (slot-value the-ship 'pos-y) (second new-position)))
  new-position)

CL-USER 19 > (setf (ship-pos titanic) (list 50 60))
(50 60)

CL-USER 20 > (describe titanic)

#<SHIP 23C41A44> is a SHIP
POS-X 50
POS-Y 60
CAPTAIN #<unbound slot>


Greetings,

Rainer Joswig


Barry Margolin

May 4, 1998

In article <6il6rv$2...@desire.lavielle.com>,
Rainer Joswig <jos...@lavielle.com> wrote:
>Ask five OO experts for their definition of OO-ness. You will get at least
>six answers. Most people know only about a message-passing version.

Sometime around 1991 or so, I think there was an article in SIGPLAN Notices
or LISP Pointers that evaluated CLOS with respect to a well-known set of
criteria for OO language features (I think they were from an OO bigwig like
Booch). Except for the information hiding features (which the article
admitted could be done somewhat with packages), it scored reasonably well.

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Cambridge, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.

Erik Naggum

May 5, 1998

* Kevin Haddock

| Hi, I'm somewhat new to lisp but upon looking into CLOS it would appear
| that the primary way to set slot values is with setf.

not necessarily. let's consider the two typical cases:

(defclass class-with-setter-getter ()
  ((some-slot :reader get-some-slot
              :writer set-some-slot
              :initform 42)))

(defvar *setter-getter-instance* (make-instance 'class-with-setter-getter))

(get-some-slot *setter-getter-instance*) => 42
(set-some-slot 41 *setter-getter-instance*) => 41
(get-some-slot *setter-getter-instance*) => 41

(defclass class-with-accessor ()
  ((some-slot :accessor some-slot
              :initform 42)))

(defvar *accessor-instance* (make-instance 'class-with-accessor))

(some-slot *accessor-instance*) => 42
(setf (some-slot *accessor-instance*) 41) => 41
(some-slot *accessor-instance*) => 41

BTW: note the order of arguments in SET-SOME-SLOT.

| Now I know that setf is a generic function

no, it isn't. SETF is a macro.

| ... isn't it fundamentally wrong for one object to be tinkering around
| with another's private data?

well, objects don't do any tinkering to begin with, they just are, so
let's presume you meant that some _code_ somewhere "tinkers" with some
object's "private" data.

however, if you think it is "wrong" to do something, then you just don't
do that. if you want the compiler to smack you in the face if you do it
anyway, I'd say you have an attitude problem that needs readjustment
before you resume writing code.

some purportedly object-oriented languages are designed specifically for
people who love to be smacked in the face when they lie to the compiler.
bondage and discipline go hand in hand with very strict rules of privacy
and all that weird deviant stuff. CLOS isn't like that.

CLOS communicates the _intent_ of hiding differently, but if you look for
the ways to express yourself in CLOS, it would be very hard indeed _not_ to
see that CLOS code is just as concerned with proper etiquette and access
rights as the anal-retentive languages. if a slot has no :WRITER option,
that's a fairly good sign you should not be writing to this slot. ditto
for :READER or :ACCESSOR. in essence, if you use the function SLOT-VALUE
in your code, you either have the right to do so, or you're trespassing.
_how_ to determine whether you have that right or are trespassing is a
question of policy that you find paralleled in communities everywhere:
whether to use armed guards with Rottweilers to whom you have to declare
people to be your friends, whether to fence yourself in but just lock the
gates, whether to have open access and everybody just keeps an eye on it
all, or whether to let giant open areas be accessible to the public as
long as they behave well. obviously, by using words like "tinkering
about with others' private data", you imply a strong sense of privacy and
an attitude towards heavy policing and ID cards for the children of your
friends and such. you might actually feel _unsafe_ in a CLOS world where
people are just expected to show good taste, unless you feel you can
trust them.  it's a matter of what kind of programmers you live amongst, I
guess. if you live among thieves and bums who steal and rob you, by all
means go for the compiler who smacks them in the face. if you live among
nice people whom you would _want_ to worry about a red-hot plate they can
see through your kitchen window, or to look inside your house through a
broken window before reporting it as a crime (serious bug), you wouldn't want
the kind of armor-plating that you might want in the 'hood. that doesn't
mean the _need_ for privacy is any different. it's just that in C++ and
the like, you don't trust _anybody_, and in CLOS you basically trust
everybody. the practical result is that thieves and bums use C++ and
nice people use CLOS. :)

| What happens, say, if I create a class and all throughout my code I set
| one of it's slots with setf, then later in the development I decide that
| that slot will not be an actual slot, but derived from several other
| slots and now just merely a method.

I infer from this that you would use SLOT-VALUE. the :READER and :WRITER
and :ACCESSOR options in DEFCLASS automatically create methods for you,
so you don't have to use SLOT-VALUE (although those methods will effectively
do just that). there is no difference between such a method and any
other method.

| Do I have to go all throughout my program and change the setf's into
| method calls?

presuming you don't use SLOT-VALUE directly: no, you define a SETF method
for the accessor method that does the right thing, and recompile the
callers.

#:Erik
--
Support organized crime: use Microsoft products!

Aaron Gross

May 5, 1998

Erik Naggum <cle...@naggum.no> writes:

I think you're giving C++ a bum rap here. The class definitions are
conventionally put in .h files which the user has access to. Thus a
user can easily change private data to public. Stroustrup emphasized
in _The C++ Programming Language_ that the "private" keyword is meant to protect
against *unintentional* trespassing, and that it doesn't do anything
to stop a programmer from knowingly modifying private data. So the
correct analogy would be a fence around the house with a gate that was
closed, but not locked.

Regarding your claim that CLOS's policy of trust is OK, it seems the
most obvious and convincing argument for a new CLOS programmer would
be empirical: "In my experience developing and maintaining x amount of
CLOS code, I've never seen a problem caused by this policy."

Erik Naggum

May 5, 1998

* Aaron Gross

| I think you're giving C++ a bum rap here. The class definitions are
| conventionally put in .h files which the user has access to. Thus a user
| can easily change private data to public.

well, yeah, at the cost of a system recompile, and you have to be _very_
careful not to change the order of the members as C++ is so ill-designed
as to require memory layout to follow specification order. technically,
you can use the access specifier in front of every individual member, but
people tend to sort the members based on access rights. in particular, I
have never seen code in any textbook on C++ where the access specifier is
repeated or applied specifically to an individual member -- they are
always used only once, as sort of "headers", and the message is clear
from other sources, as well: don't _do_ that. so I'd say that a whole
culture is speaking against your "just change it" policy.

| Stroustrup emphasized in _The C++ Programming Language_ that the "private" keyword is
| meant to protect against *unintentional* trespassing, and that it doesn't
| do anything to stop a programmer from knowingly modifying private data.

I don't think this can be true. I would, however, have believed it if
access violations were specified to be warnings, not the errors they are.

I have helped salvage a project where people "knew" the layout of C++
classes and were afraid to change the .h files because that would mean
many a lost day due to recompiling, so "knowingly modify private data"
through the wonders of pointer arithmetic and casting is such a horribly
bad "solution" to the problem that I would consider asking the police to
put a detail on programmers who do it -- they're likely to break the law
through blatant disregard in real life, too.

| So the correct analogy would be a fence around the house with a gate that
| was closed, but not locked.

really? this sounds more like you're on Bjarne's defense team than an
honest argument. how do you open the gate and get at something inside?
your analogy strongly implies that there is a mechanism to open gates,
but what you actually have to do is go away elsewhere and declare the
object to be outside your gate, then rebuild the world so it is.  when your
gate doesn't have an "open" method, who cares whether it's locked or not
as long as you have to tear the whole fence down to pass through it?

| Regarding your claim that CLOS's policy of trust is OK, it seems the most
| obvious and convincing argument for a new CLOS programmer would be
| empirical: "In my experience developing and maintaining x amount of CLOS
| code, I've never seen a problem caused by this policy."

this is a good point. I must have designed and implemented 20 times as
many classes in CLOS as I have in C++, but I have always had some sort of
problems with the access rights in C++ as uses and designs change over
time, and I have _never_ had any desire to control access in CLOS. now,
I don't claim to be typical.  for more than a year after first being exposed to
C++ in a six-month project, I was actively considering another career
because C++ was becoming the required language and I couldn't stomach the
monumental waste of human resources that this language requires of its
users and projects. the problem is not just one of pragmatics, however.

C++ is philosophically and cognitively unsound as it forces a violation
of all known epistemological processes on the programmer. as a language,
it requires you to specify in great detail what you do not know in order
to obtain the experience necessary to learn it. C++ has taken premature
optimization to the level of divine edict since it _cannot_ be vague in
the way the state of the system necessarily is. (I'm not talking about
totally vague, but about the kind of vague details that programmers are
supposed to figure out even after a good design has been drawn up.) in
other words, a C++ programmer is _required_ by language design to express
certainty where there _cannot_ be any. a C++ programmer who cares about
correctness is a contradiction in terms: correctness is a function of
acquired certainty, not random guesswork. I'm discounting the ability to
experiment with something through educated guesses, because the number of
issues that must be experimented with in this fashion is gargantuan in
any non-trivial piece of code. C++ is a language strongly optimized for
liars and people who go by guesswork and ignorance. I cannot live with
this. I especially cannot live with it when I cannot even ask for help
in maintaining the uncertainty I have. e.g., if I do not have enough
foreknowledge to know _exactly_ which type I need for a variable (and
there is no type hierarchy, so I cannot be vague), I must _guess_, which
is bad enough, but I have to _repeat_ the guess all over! I cannot say
"this thing here has the _same_ type as that thing over there, and I'd
rather you go look there because I just want to put down my guesses once"
-- there is no `typeof' operator and it certainly cannot be used in those
anal-retentive declarations to afford a limited form of type propagation.

I actually think C++ is ideal only for programmers without any ethics.
you must lie, you are encouraged to declare your private stuff and keep
the cards very closely to your breast, but if you need access, you just
go ahead and change other people's class definitions. the "improved"
strong typing is designed to cause errors if you make simple mistakes,
forcing you to use a lot more of your brain on remembering silly stuff,
but then you get to templates and all hell breaks loose because suddenly
you have _no_control_at_all_ over where things fly. when I wrote that
C++ and CLOS is a question of what kind of people you wish to live with
in your community, I was not _just_ making an analogy. one can only
imagine what having to lie to your compiler _every_ day and to make
unwarranted guesses that your colleagues will depend on the same way they
depend on your hard-earned knowledge _does_ to a human brain -- I know I
couldn't handle even the prospect of it.

and, no, the solution is _not_ to design everything on paper first, then
implement, although this is the way that C++ would actually have worked
out well -- if it were _possible_ to design like that in our world. all
experience has taught us that solving a complex problem uncovers hidden
assumptions and ever more knowledge, trade-offs that we didn't anticipate
but which can make the difference between meeting a deadline and going
into research mode for a year, etc. if C++ were used only to solve
_known_ problems with _known_ solutions, it would be a perfectly OK
language. smart people would design classes "once and for all" and sell
them like nuts and bolts, companies could pre-fabricate parts that fit
together because of industry consensus on standards and specifications,
realizing that age-old "dream" of some software engineers.  I believe this is a
completely insane view of programming, but C++ cannot have any other --
it is _designed_ to be an "ex post facto language" -- most bad languages
are, because the desire to make things solid and predictable, especially
if they can never be, is so deeply ingrained in people they have to work
hard with the concept of dynamism. so people are now arguing in terms of
"patterns", _new_ code that conforms to "general design principles", and
which consequently doesn't fit with the other parts unless specially
designed to. more and more, people see that solving _any_ problem is a
question of acquiring knowledge and having respect for the cognitive and
epistemological processes involved in the human brain: use what you know,
learn what you can, and if you have to guess, be _aware_ that you do so
you can go back and update your guesses when you (hopefully) learn more.

do I give C++ a bum rap? I may well do, but I hope I can illuminate what
choice of language does to the human cognitive process: people who come
from the C++ camp are positively obsessed with protecting their data --
even to the point of being scared when they can't. are they paranoid? I
think they have _become_ paranoid because C++ makes you go nuts if you don't
-- it's a natural, psychological defense mechanism against criminally bad
language design: the flip side of having to feign certainty about your
guesses is that you lose confidence in your real certainty, too. when
anything you decided in the past can be overturned without warning just
because some other random guesswork somewhere turns out to be false, you
learn to be defensive. C++ _needs_ encapsulation and all this privacy
stuff because the whole language is _systematically_ and _fundamentally_
sloppy, and thus attracts programmers (and managers) who don't care one
way or the other about it. but, hey, the very _existence_ of Microsoft
would have been _impossible_ without systematic and pervasive sloppiness,
carelessness and outright lies, so it's kinda hard to argue against those
vices with people who _only_ want a quick buck (which is a vice on its
own, if you ask me).

perhaps I care too much, but I found that I needed to get out of the C++
world _permanently_, and I only turn back to look and study to know mine
enemy. in order to think about difficult and interesting problems, I had
to find languages (and write tools) that allowed me _not_ to think about
all the idiotic problems that C++ forces on everybody, _swamping_ their
cognitive processes and creativity with meaningless minutiae that they
have to lie about to boot. I have succeeded because I knew _exactly_
what I needed to get away from, but that knowledge didn't come for free:
for a few years I was sustaining myself mostly as a writer instead of a
programmer simply because I couldn't do anything worthwhile until I had
figured out why C++ is so rotten, and just _how_ (initially unfathomably)
rotten it really is: C++ turns otherwise good people into paranoid,
insecure prostitutes, and it comes from creating such horrible living
environments for themselves and compounded by trying to explain to
themselves that "it's OK, really". (of course, if you don't think about
such things and never need to explain anything to yourself, if you are of
the kind who "just follow orders", C++ is probably OK for you, which
leads to another label for C++: assembly-line object-oriented language.)

end of rant. happy Lisp hacking!

David Hanley

May 5, 1998

Erik Naggum <cle...@naggum.no> wrote:
> well, objects don't do any tinkering to begin with, they just are, so
> let's presume you meant that some _code_ somewhere "tinkers" with some
> object's "private" data.

> however, if you think it is "wrong" to do something, then you just don't
> do that. if you want the compiler to smack you in the face if you do it
> anyway, I'd say you have an attitude problem that needs readjustment
> before you resume writing code.

I agree with you as far as writing small programs for oneself goes,
but there's more to it than that. A lot of programming is done either
in large groups, or by one programmer after another on the same project.
In these cases, it's nice to be able to say "don't mess with this, you'll
hurt yourself." In our project, we pass out objects that others, if they
were to mess with them, would make our module behave badly. So we don't
give them write access, we preserve our elegant design, and everyone's
happy.

> some purportedly object-oriented languages are designed specifically for
> people who love to be smacked in the face when they lie to the compiler.

You could also look at it as "they smack those who break the rules
established by the objects (invariants) so those who use them correctly
and those who have to debug them don't suffer." In a group project you
might be happy for such discipline.

> question of policy that you find paralleled in communities everywhere:
> whether to use armed guards with Rotweilers to whom you have to declare
> people to be your friends, whether to fence yourself in but just lock the
> gates, whether to have open access and everybody just keeps an eye on it
> all, or whether to let giant open areas be accessible to the public as
> long as they behave well.

You could look at it that way. I prefer the road analogy. By
driving on the correct side of the road, stopping at stoplights, and
driving correctly, we all get to where we're going faster, because we
don't have to drive 10MPH and stop at every intersection to avoid an
accident. Yes, if the roads were empty, I could get there faster by blowing
through every light and driving on the wrong side of the road, but that's not reality.

dave

Francis Leboutte

May 5, 1998

Barry Margolin <bar...@bbnplanet.com> wrote:

>In article <6il6rv$2...@desire.lavielle.com>,
>Rainer Joswig <jos...@lavielle.com> wrote:
>>Ask five OO experts for their definition of OO-ness. You will get at least
>>six answers. Most people know only about a message-passing version.
>
>Sometime around 1991 or so, I think there was an article in SIGPLAN Notices
>or LISP Pointers that evaluated CLOS with respect to a well-known set of
>criteria for OO language features (I think they were from an OO bigwig like
>Booch). Except for the information hiding features (which the article
>admitted could be done somewhat with packages), it scored reasonably well.

Some people at Gartner Group surely will agree with you. According to one
of their research notes (Dec. 6, 1993), CLOS is the only language that
meets all must-have and should-have criteria for object-oriented languages.
--
Francis Leboutte
f.leb...@skynet.be lebo...@acm.org http://users.skynet.be/algo
Marre du courrier non sollicité (spam)? Visitez http://www.cauce.org

Barry Margolin

May 5, 1998

In article <31033186...@naggum.no>, Erik Naggum <cle...@naggum.no> wrote:
> well, objects don't do any tinkering to begin with, they just are, so
> let's presume you meant that some _code_ somewhere "tinkers" with some
> object's "private" data.

Of course, this is one of the primary reasons why many people don't think
CLOS is OO. Most of Object-Oriented programming is based on the idea of
active objects, which tinker with their own innards and interact with other
objects solely by sending messages, so that those objects can then tinker
with their own innards and send messages, and so on.

CLOS doesn't implement this OO paradigm. We chose to focus on the aspects
of OO programming related to class hierarchies, and CLOS's features in that
area are virtually unsurpassed. Method combination is one of the most
powerful features of CLOS, yet it's practically unique in the OO universe
(although I can't think of any reason why the built-in method combinations
couldn't easily be included in most other OO languages, even C++, rather
than always requiring overriding methods to call the superclass method
explicitly).

Perhaps we should have stopped referring to CLOS as Object-Oriented when we
switched from message passing and active objects to generic functions and
multimethods. A better term for CLOS might be Class-Oriented or
Class-Based.

Vassil Nikolov

May 5, 1998

On Mon, 4 May 1998 22:15:43 -0700,
Aaron Gross <aa...@morannon.bfr.co.il> wrote in comp.lang.lisp:

>I think you're giving C++ a bum rap here. The class definitions are
>conventionally put in .h files which the user has access to. Thus a
>user can easily change private data to public. (...)

Even if you only have access to the compiled code?
And if you just make public what was private, can you
be sure that nothing will be broken, with C++'s
complicated rules regarding private/public and inheritance?

* * *

I think that the really good thing about Common Lisp (including
CLOS) is that it gives you _full_choice_; you can:

(A) leave everything accessible (the door is open), or
(B) set things up so that there are just warnings on access, or
(C) set things up so that access is denied (the door is locked).

Yes, (B) and (C) require some effort (see note below), but
then nothing is free. While C++ offers you (A) and (C),
I don't think it offers you (B).

As to trespassing and protection against it, I don't
care about other programmers trespassing. If they
use my code, and trespass, it's their responsibility.
What I want is the _option_ to be notified (not necessarily
smacked in the face, to borrow Erik Naggum's phrase)
if I trespass myself, which option I would turn on
when I am absent-minded or whatever.

Note: to show that (B) and (C) are achievable (if that is
not already evident), consider, as just one possible
approach, defining a before method for SLOT-VALUE which
signals a warning (for (B)) or an error (for (C)) if
a slot is accessed without invoking the appropriate
accessor. The accessor can notify the SLOT-VALUE
method by setting a variable, which may be shared by
the two in a lexical closure (thus inaccessible to
anybody else).


Best regards,
Vassil.



Rainer Joswig

May 5, 1998

In article <1998050516474...@nym.alias.net>, "Vassil Nikolov"
<vnik...@math.acad.bg> wrote:

> Note: to show that (B) and (C) are achievable (if that is
> not already evident), consider, as just one possible
> approach, defining a before method for SLOT-VALUE which
> signals a warning (for (B)) or an error (for (C)) if
> a slot is accessed without invoking the appropriate
> accessor. The accessor can notify the SLOT-VALUE
> method by setting a variable, which may be shared by
> the two in a lexical closure (thus inaccessible to
> anybody else).

How do you do that? Isn't SLOT-VALUE an ordinary function
(not a generic function)?

--
http://www.lavielle.com/~joswig/


Mike McDonald

May 5, 1998

In article <6in7tp$erh$1...@eve.enteract.com>,
David Hanley <ma...@enteract.com> writes:
> Erik Naggum <cle...@naggum.no> wrote:
>> well, objects don't do any tinkering to begin with, they just are, so
>> let's presume you meant that some _code_ somewhere "tinkers" with some
>> object's "private" data.
>
>> however, if you think it is "wrong" to do something, then you just don't
>> do that. if you want the compiler to smack you in the face if you do it
>> anyway, I'd say you have an attitude problem that needs readjustment
>> before you resume writing code.
>
> I agree with you as far as writing small programs for oneself goes,
> but there's more to it than that. A lot of programming is done either
> in large groups, or by one programmer after another on the same project.
> In these cases, it's nice to be able to say "don't mess with this, you'll
> hurt yourself." In our project, we pass out object that others, if they
> were to mess with, would make our module behave badly. So we don't
> give them write access, we preserve our elegant design, and everyone's
> happy.

The basic assumption with "protection" is that the
user is too stupid to be trusted with not breaking
the object. If this assumption is true (and there's
plenty of evidence to support that in general) then
the solution is not "protection"; it's taking the
keyboards away from the idiots and letting your skilled
employees loose. Don't cripple the skilled individuals
in the name of hordes of idiots.

Mike McDonald
mik...@mikemac.com


Mike McDonald

May 5, 1998

In article <31033568...@naggum.no>,
Erik Naggum <cle...@naggum.no> writes:
> * Aaron Gross
>| I think you're giving C++ a bum rap here. The class definitions are
>| conventionally put in .h files which the user has access to. Thus a user
>| can easily change private data to public.
>
> well, yeah, at the cost of a system recompile, and you have to be _very_
> careful not to change the order of the members as C++ is so ill-designed
> as to require memory layout to follow specification order.

In my experience using C++, if you didn't do a
complete recompile every couple of hours, you were
asking for trouble. Trying to figure out what the
heck is going wrong when some class definition got
changed but some code that uses that class didn't get
recompiled is a nightmare. I also think you should
recompile your C++ code with different compilers as
often as you can. That helps you avoid introducing
weird compiler dependencies.


>| So the correct analogy would be a fence around the house with a gate that
>| was closed, but not locked.
>
> really? this sounds more like you're on Bjarne's defense team than an
> honest argument. how do you open the gate and get at something inside?
> your analogy strongly implies that there is a mechanism to open gates,
> but what you actually have to do is go away elsewhere and declare the
> object to outside your gate, then rebuild the world so it is. when your
> gate doesn't have an "open" method, who cares whether it's locked or not
> as long as you have to tear the whole fence down to pass through it?

And why are your "privates" hanging out for the
whole world to see anyway? (Philosophical question, I
know the technical reason.)

>| Regarding your claim that CLOS's policy of trust is OK, it seems the most
>| obvious and convincing argument for a new CLOS programmer would be
>| empirical: "In my experience developing and maintaining x amount of CLOS
>| code, I've never seen a problem caused by this policy."
>
> this is a good point. I must have designed and implemented 20 times as
> many classes in CLOS as I have in C++, but I have always had some sort of
> problems with the access rights in C++ as uses and designs change over
> time, and I have _never_ had any desire to control access in CLOS. now,
> I don't claim to be typical. for more than a year after first exposed to
> C++ in a six-month project, I was actively considering another career
> because C++ was becoming the required language and I couldn't stomach the
> monumental waste of human resources that this language requires of its
> users and projects. the problem is not just one of pragmatics, however.

From your writing, I think you probably are an
atypical programmer, definitely an atypical C++
programmer. For one thing, you care! In my
experience, most programmers don't care. It's just a
job to them. It sounded better than flipping burgers
but that's all. I too have been giving serious
consideration to finding another career. Dealing with
a world designed for idiots has me worn down.

> and, no, the solution is _not_ to design everything on paper first, then
> implement, although this is the way that C++ would actually have worked
> out well -- if it were _possible_ to design like that in our world. all
> experience has taught us that solving a complex problem uncovers hidden
> assumptions and ever more knowledge, trade-offs that we didn't anticipate
> but which can make the difference between meeting a deadline and going
> into research mode for a year, etc. if C++ were used only to solve
> _known_ problems with _known_ solutions, it would be a perfectly OK
> language.

Oh how I wish you could talk to my boss! He's
supposed to be a relatively enlightened guy. He got
his degree from MIT and all. (He didn't know that
Lisp had arrays though! How can I explain something
like CLOS to a guy like this?) In order to keep things
under control, our company (and group) have a
development process. We write a spec, have marketing
approve it. Then we write the design doc and have
marketing approve that. Then we spend 8 weeks
implementing it and have marketing approve it. Then it
goes to our beta customers for 6 months so we can
kludge it up to do what they really wanted in the
first place! Oh and we have our choice of three
languages to use for the implementation, C++, Perl,
or TCL. I'm currently stuck on the "design" portion
of the spec. I'm totally bummed since I know it's BS.
The whole project would take a couple of weeks in
CLOS and then we could get right to finding out what
the customers really wanted. But no, gotta follow the
process!


> perhaps I care too much, but I found that I needed to get out of the C++
> world _permanently_, and I only turn back to look and study to know mine
> enemy.

Caring is definitely a no-no in this "profession".

>
> end of rant. happy Lisp hacking!
>
> #:Erik

If only it were so! Back to kludging C++/TCL for
me.

Mike McDonald
mik...@mikemac.com


Dan Higdon

May 5, 1998

Mike McDonald wrote in message ...

> The basic assumption with "protection" is that the
>user is too stupid to be trusted with not breaking
>the object. If this assumption is true (and there's
>plenty of evidence to support that in general) then
>the solution is not "protection"; it's taking the
>keyboards away from the idiots and letting your skilled
>employees loose. Don't cripple the skilled individuals
>in the name of hordes of idiots.


Of course, you could look at it this way: protection
allows you to declare what you do and don't want to
be able to do with a design. This spec allows the
compiler to catch your errors (logic errors in either
the original design or your code) much like a strongly
typed language catches value errors. Of course, I
don't expect Lisp programmers to really dig this sort
of thing, given the general hostility to strong type
checking one usually sees around here. :-)

As far as "hordes of idiots", I suppose you've never
made a semantic typo? (Accidentally used the wrong
variable/function.) I know I've had a number of bugs
pointed out to me by my compiler (better under SML
and such than C++, to be sure). Most humans make
mistakes, and it's occasionally nice to have the compiler
point them out to you.

Still, I have to agree with you that programmers who
don't care about their jobs shouldn't be allowed to
drag down those of us who do care, and want to
use appropriate tools for the tasks at hand.

----------------------------------------
hd...@charybdis.com
"Throwing fire at the sun"

Gavin E. Gleason

May 5, 1998

> Of course, you could look at it this way: protection
> allows you to declare what you do and don't want to
> be able to do with a design. This spec allows the

I don't think I've been able to bend my mind around this
problem. Why is it that :: is a problem but *dyn-var*
isn't? I mean with a group of people who are not complete
morons it seems like you could get your co-workers to ask
you why you didn't export a symbol before they went and
used it.

Scott L. Burson

May 5, 1998

Mike McDonald wrote:
> Oh how I wish you could talk to my boss! He's
> supposed to be a relatively enlightened guy. He got
> his degree from MIT and all. (He didn't know that
> Lisp had arrays though! How can I explain something
> like CLOS to a guy like this?)

Harrumph. *When* did he get his degree from MIT? If it was after 1978
or so, he has no excuse for his ignorance.

> In order to keep things
> under control, our company (and group) have a
> development process. We write a spec, have marketing
> approve it. Then we write the design doc and have
> marketing approve that. Then we spend 8 weeks
> implementing it and have marketing approve it.

Oh boy, the waterfall model!

> Then it
> goes to our beta customers for 6 months so we can
> kludge it up to do what they really wanted in the
> first place! Oh and we have our choice of three
> languages to use for the implementation, C++, Perl,
> or TCL.

Oh dear, you *are* in Hell.

See if you can talk your boss into letting you use Java. This shouldn't
be too hard -- it's very trendy. And it will be a much better language
to work in than any of the above.

> I'm currently stuck on the "design" portion
> of the spec. I'm totally bummed since I know it's BS.
> The whole project would take a couple of weeks in
> CLOS and then we could get right to finding out what
> the customers really wanted.

I suggest you at least try to explain to your boss that you should be
doing in-depth interviews with the customers *before* writing the
design. This shouldn't be too hard to justify on the basis that it will
likely shorten the beta period.

Even the waterfall model needs some water (information) coming in at the
top.

-- Scott (MIT 1980)

* * * * *

To use the email address, remove all occurrences of the letter "q".

Sunil Mishra

May 5, 1998

Yes, it is. The AMOP has slot-value call slot-value-using-class, which is a
generic function.

But even if it were generic, I still don't see how the above proposal would
work. The function slot-value has to operate for any number of classes (and
subclasses) of any given class. If you want a private slot in the sense of
C++, then the slot should not be shared with children of the class. So, if
a child redefines the slot to have an accessor, then the new accessor will
no longer share the lexical closure, and I believe in this situation you
will lose.

Sunil

Kevin Haddock

May 5, 1998

Erik Naggum wrote:
>
> * Kevin Haddock
*snip*

(my comment about tinkering with private data)

> however, if you think it is "wrong" to do something, then you just don't
> do that. if you want the compiler to smack you in the face if you do it
> anyway, I'd say you have an attitude problem that needs readjustment
> before you resume writing code.

Well, I'm not saying that it should be impossible. I mean sometimes one
might *have* to do it for performance reasons. But perhaps it should be
somewhat difficult (e.g. prepending an underscore or something)

(Erik's comments about compiler rigor)

> that doesn't
> mean the _need_ for privacy is any different. it's just that in C++ and
> the like, you don't trust _anybody_, and in CLOS you basically trust
> everybody. the practical result is that thieves and bums use C++ and
> nice people use CLOS. :)

Yes, I can see that. And I greatly appreciate all the responses I got
to my question!

(And BTW, I pretty much hail from the Forth community. C++ makes me
retch)

-Kevin

Espen Vestre

May 6, 1998

Barry Margolin <bar...@bbnplanet.com> writes:

> CLOS doesn't implement this OO paradigm. We chose to focus on the aspects
> of OO programming related to class hierarchies, and CLOS's features in that
> area are virtually unsurpassed.

Surpassed only by those CLOS implementations which implement the
MOP :-)

--

(espen vestre)

Espen Vestre

May 6, 1998

smi...@piedmont.cc.gatech.edu (Sunil Mishra) writes:

> Yes, it is. The AMOP has slot-value call slot-value-using-class, which is a
> generic function.
>
> But even if it were generic, I still don't see how the above proposal would
> work. The function slot-value has to operate for any number of classes (and
> subclasses) of any given class.

So you need to use the MOP and implement a meta-class, along these
lines:

(defclass my-meta (standard-class)
  ())

(defmethod slot-value-using-class :before ((class my-meta) instance slot-name)
  (when (hidden-in-class class slot-name)
    (error "Attempt to access hidden slot")))

(I guess the most elegant way to implement "hidden-in-class" would be
to subclass the slot-definition metaclass....?)
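
One possible shape of HIDDEN-IN-CLASS along those lines (a sketch only; the
MOP symbols such as STANDARD-EFFECTIVE-SLOT-DEFINITION, CLASS-SLOTS and
SLOT-DEFINITION-NAME live in an implementation-dependent package, and you
would still need DIRECT-SLOT-DEFINITION-CLASS / EFFECTIVE-SLOT-DEFINITION-CLASS
methods on MY-META so that slots actually get this slot-definition class):

(defclass hidden-slot-definition (standard-effective-slot-definition)
  ((hidden :initarg :hidden :initform nil :reader slot-hidden-p)))

(defun hidden-in-class (class slot-name)
  (let ((slotd (find slot-name (class-slots class)
                     :key #'slot-definition-name)))
    (and slotd
         (typep slotd 'hidden-slot-definition)
         (slot-hidden-p slotd))))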
--

(espen)

Tim Bradshaw

May 6, 1998

* Vassil Nikolov wrote:

> Note: to show that (B) and (C) are achievable (if that is
> not already evident), consider, as just one possible
> approach, defining a before method for SLOT-VALUE which
> signals a warning (for (B)) or an error (for (C)) if
> a slot is accessed without invoking the appropriate
> accessor. The accessor can notify the SLOT-VALUE
> method by setting a variable, which may be shared by
> the two in a lexical closure (thus inaccessible to
> anybody else).

I think this is a bad approach actually, because any implementation
worth its salt is going to optimise SLOT-VALUE into the ground, so
defining methods on it (or on SLOT-VALUE-USING-CLASS) is kind of a bad
thing to do.

Although I can't remember the details, it did seem to me that AMOP
really requires SLOT-VALUE to call SLOT-VALUE-USING-CLASS, which
strikes me as a ludicrous limitation on implementations which one
shouldn't assume they actually adhere to.

--tim

Kjetil Valstadsve

May 6, 1998

Espen Vestre <e...@nextel.no> writes:

> So you need to use the MOP and implement a meta-class, along these
> lines:
>
> (defclass my-meta (standard-class)
>   ())
>
> (defmethod slot-value-using-class :before ((class my-meta) instance slot-name)
>   (when (hidden-in-class class slot-name)
>     (error "Attempt to access hidden slot")))
>
> (I guess the most elegant way to implement "hidden-in-class" would be
> to subclass the slot-definition metaclass....?)

Sounds like a MOP thing, definitely. How would you put it to use?
I.e., what's the most practical way to make certain slots of a my-meta
class hidden?

I've studied the MOP spec here now, and found that I can override
DIRECT-SLOT-DEFINITION-CLASS (which returns the slot-definition class
for a given slot) for my-meta classes and look for keywords in the
&rest'ed arguments (representing, it seems, keywords in the
slot-definition form within the defclass form). Then, I can specify
that extra keyword (for instance, :hidden) in the defclass form and
get away with it, something like this: (I use the keyword :my-slot,
below, not :hidden)

(defclass my-meta (standard-class) ())

(defclass my-slot-definition (standard-direct-slot-definition) ())

(defmethod clos:direct-slot-definition-class ((class my-meta) &rest initargs)
  (print "yikes!")
  (if (member :my-slot initargs)
      (progn
        (print "hubba!")
        (class-of (make-instance 'my-slot-definition))) ;; <- *blush*
      ;; I momentarily forgot how to specify classes
      ;; directly, and mr. Naggum was asleep at the time.
      (call-next-method)))

Usage:

[4c] USER(26): (defclass my-class ()
((my-slot :my-slot))
(:metaclass my-meta))
"yikes!"
"hubba!"
Error: Recursive loop in finalize-inheritance
#<STANDARD-CLASS MY-SLOT-DEFINITION>

... oops, with some hitches, admittedly, but would this be a good way
to go about it, given some time to iron out the glitches? I'm a CLOS
newbie, so be gentle.

--
We are looking for people who sincerely want to rule the universe. Have
you got what it takes to be an evil galactic tyrant? Please tick one [X]
[ ] I am slightly naughty [ ] I am actually quite evil
[ ] I am very naughty indeed [ ] I am so evil that I am going to quote
your entire post after my own gibberish

Joao Cachopo

May 6, 1998

>>>>> "Vassil" == Vassil Nikolov <vnik...@math.acad.bg> writes:

Vassil> Note: to show that (B) and (C) are achievable (if that is not
Vassil> already evident), consider, as just one possible approach,
Vassil> defining a before method for SLOT-VALUE which signals a
Vassil> warning (for (B)) or an error (for (C)) if a slot is accessed
Vassil> without invoking the appropriate accessor. The accessor can
Vassil> notify the SLOT-VALUE method by setting a variable, which may
Vassil> be shared by the two in a lexical closure (thus inaccessible
Vassil> to anybody else).

Others have pointed out the problems with these approaches.

However, I feel there is something I do not fully understand, and that
is related to a message I sent a few days ago but that nobody
followed up on.

It seems that some people are deeply worried about private slots and
the way that they are accessed in CLOS. But I can't see what the
use is for private slots that nobody can manipulate, since in the CLOS
OO paradigm there are no "class-owned methods". Therefore, I assume
that the problem is in the use of the slot-value function (or
with-slots).

Indeed, I think that writing code relying on slot names (through the
use of slot-value or with-slots) is not a "good thing", and that
accessors should be used instead.

This leads me to my question: why use slot names at all?

I know that the use of slot names allows you to do things like:

1. Inherit several (distinct) initargs for the same slot.
2. Inherit several (distinct) accessors for the same slot.
3. Constrain further the type of an inherited slot.
4. Specify an initform for an inherited slot.
5. [ Possibly many other things... ]

But, is it worth it?
I mean, are these things really useful?
I ask this because I do not have much experience with CLOS programming
in the large...

As I see it (maybe naively), those who do not like access to a slot
unless through its accessors (which I agree is the right way to do it,
because you can add methods to the corresponding generic functions),
should not use slot names at all.

Slot names are not optional in slot definitions, of course, but using
gensyms you get pretty much the same effect, right?

Either

(defclass a ()
  ((#.(gensym) :accessor property1)))

or

(defclass a ()
  ((#:property1 :accessor property1)))

Should do.

Am I right?

--
Joao Cachopo * Homepage: http://www.gia.ist.utl.pt/~jcachopo

*** Murphy was an optimist ***

Sunil Mishra

May 6, 1998

In article <w63eend...@gromit.nextel.no> Espen Vestre <e...@nextel.no> writes:

So you need to use the MOP and implement a meta-class, along these
lines:

(defclass my-meta (standard-class)
  ())

(defmethod slot-value-using-class :before ((class my-meta) instance slot-name)
  (when (hidden-in-class class slot-name)
    (error "Attempt to access hidden slot")))

(I guess the most elegant way to implement "hidden-in-class" would be
to subclass the slot-definition metaclass....?)

--

(espen)

If you are going to use the MOP to implement slot access control, I believe
AMOP has a good example of that too. And I think they do subclass the slot
definition class, as you have suggested.

Sunil

Antonio Leitao

May 6, 1998

>>>>> "Joao" == Joao Cachopo <jcac...@gia.ist.utl.pt> writes:

[...]

Joao> It seems that some people are deeply worried about private
Joao> slots and the way that they are accessed in CLOS. But I
Joao> can't see what's the use for private slots that nobody can
Joao> manipulate, since in CLOS OO-paradigm there are no
Joao> "class-owned methods". Therefore, I assume that the problem
Joao> is in the use of the slot-value function (or with-slots).

Joao> Indeed, I think that writing code relying on slot names
Joao> (through the use of slot-value or with-slots) is not a "good
Joao> thing", and that accessors should be used instead.

That is definitely true, but for a large number of people that made
the transition from other Object-Oriented languages to CLOS,
multi-dispatch methods are something that they use rarely. Mostly, they
will map the idea of methods defined in a class into a CLOS method
specialized on just one argument. Then, they will want to access the
"instance variables" of the class as if the method was part of the
class. Then, they will discover that they must precede the method body
by a with-slots form specifying all the slots they need, meanwhile
complaining about such a strange obligation. Finally, they will
understand that there is nothing special about a with-slots form (or
slot-value) and that everybody can use it from any place, so they
complain that there is no privacy in CLOS. All of this results, of
course, from the belief that a method in CLOS is like a method in
other languages, and should have privileged rights. I bet that as
soon as they start using multi-dispatch methods all these
misconceptions disappear.

Now, I would like to add a related topic, namely the use of structures
(a la defstruct). When we use a defstruct, we get automatically
defined accessors and we simply use those accessors to manipulate the
structure. We know that an "instance" of a defstruct is just a vector,
so we can access the slots using aref. What surprises me is that no
one complains that there is no privacy in a defstruct. Besides, when
we think that we can use (single) inheritance in structures, then I
don't see much difference between accessing the "instance" of a
defstruct using aref or accessing the "instance" of a defclass using
slot-value.
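
A small illustration of the point (note that, portably, this requires the
:TYPE VECTOR option; a default DEFSTRUCT instance need not be a vector):

(defstruct (pos (:type vector)) x y)

(let ((p (make-pos :x 1 :y 2)))
  (list (pos-x p)      ; through the generated accessor
        (aref p 0)))   ; the same slot, reaching under the abstraction
==> (1 1)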

As a result, I would suggest that slot-value is to defclass the
same as aref is to defstruct. Of course, the fact that slot-value uses
a symbol while aref uses a number makes a big difference, and it
entitles people to think that slot-value is more legitimate than
aref. But from the point of view of a system that must evolve as time
goes by, using slot-value is almost as dangerous as using aref: we may
later decide that a value that was kept in a slot has to be computed
instead, and by then it is too late, because all the code uses
slot-value rather than the proper accessor.
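
A small illustration of that danger (names invented): in version 1 the
area is a stored slot, in version 2 it becomes a computed value. Code
written against the accessor survives the change; code written against
(slot-value c 'area) breaks.

;; Version 1: AREA is stored in a slot.
(defclass circle ()
  ((radius :initarg :radius :accessor radius)
   (area   :initarg :area   :accessor area)))

;; Version 2: the AREA slot is gone; the accessor becomes an ordinary method.
(defclass circle ()
  ((radius :initarg :radius :accessor radius)))

(defmethod area ((c circle))
  (* pi (expt (radius c) 2)))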

Joao> This leads me to my question: why use slot names at all?

Joao> I know that the use of slot names allow you to do things
Joao> like:

Joao> 1. Inherit several (distinct) initargs for the same slot.
Joao> 2. Inherit several (distinct) accessors for the same slot.
Joao> 3. Constrain further the type of an inherited slot.
Joao> 4. Specify an initform for an inherited slot. 5. [ Possibly
Joao> many other things... ]

Joao> But, is it worth it? I mean, are these things really
Joao> useful? I ask this because I do not have much experience
Joao> with CLOS programming in the large...

In my opinion, they are very useful. I would paraphrase your
question in a different way: why use slot-value at all?

Why not just resort exclusively to accessors, maybe allowing for some
kind of optimization where they are inlined as slot-value? Then you
would have an accessor that behaved like a slot-value without the
overhead of going through a generic function. I don't remember whether
this is already possible in CLOS. Maybe it is.

The idea is to convince the programmer that using slot-value will not
give him anything that a proper optimization couldn't give, just as
happens with defstruct: we don't use aref because we know it can't
be faster than the accessor when compiled with proper optimization.
And for people who like to look at slots as "instance variables",
we already have the powerful with-accessors. The only problem is
that, contrary to what happens with defstruct, there is no automatic
creation of accessors: by default, none are created. We should
change this.
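
For instance, with-accessors already gives the "instance variable" feel
(a small sketch reusing the invented CIRCLE class from above):

(defmethod describe-circle ((c circle))
  (with-accessors ((r radius) (a area)) c
    (format t "radius ~a, area ~a~%" r a)))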

Joao> As I see it (maybe naively), those who do not like access to
Joao> a slot unless through its accessors (which I agree is the
Joao> right way to do it, because you can add methods to the
Joao> corresponding generic functions), should not use slot names
Joao> at all.

Even if you don't want to add methods to the corresponding generic
function, you shouldn't use slot-value. You might want to change that
slot and then you must check all the code. A function is much more
"stable" than a slot.

Joao> Slot names are not optional in slot definitions, of course,
Joao> but using gensyms you get pretty much the same effect, isn't
Joao> it?

Joao> Either

Joao> (defclass a () ((#.(gensym) :accessor property1)))

Joao> or

Joao> (defclass a () ((#:property1 :accessor property1)))

Joao> Should do.

Joao> Am I right?

I think so, but it looks ugly. Why don't we just use automatically
defined accessors, a la defstruct, with some automatic optimizations
included?

António Menezes Leitão.


Vassil Nikolov

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

On Tue, 5 May 1998 14:07:08 -0700,
Sunil Mishra <smi...@piedmont.cc.gatech.edu> wrote in comp.lang.lisp:


> In article <1998050516474...@nym.alias.net>, "Vassil Nikolov"
> <vnik...@math.acad.bg> wrote:
>

> > Note: to show that (B) and (C) are achievable (if that is

> > not already evident), consider, as just one possible
> > approach, defining a before method for SLOT-VALUE which
> > signals a warning (for (B)) or an error (for (C)) if
> > a slot is accessed without invoking the appropriate
> > accessor. The accessor can notify the SLOT-VALUE
> > method by setting a variable, which may be shared by
> > the two in a lexical closure (thus inaccessible to
> > anybody else).
(...)
>But (...) I still don't see how the above proposal would


>work. The function slot-value has to operate for any number of classes (and

>subclasses) of any given class. If you want a private slot in the sense of

^^^^^^^^^^^^^^^^^^^^^^^^^^ ... see note 1


>C++, then the slot should not be shared with children of the class. So, if
>a child redefines the slot to have an accessor, then the new accessor will

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ see note 2


>no longer share the lexical closure, and I believe in this situation you
>will lose.

With any particular approach there are conditions under which
one loses. Perhaps I should have made it explicit that I wasn't
offering a general solution, just an approach which may or may
not be useful in the particular situation that the programmer
has at hand.

(note 1) Besides, I don't want a private slot in the sense of C++.
I just want a means whereby I could have CLOS notify me if an accessor
was bypassed and SLOT-VALUE was used directly, which may or may
not be with good reason. (Hence `notify,' not `block.')

(note 2) If the definition of a subclass redefines accessors for a slot,
it is the responsibility of the maker of that definition
whether or not to extend the scope of the notification mentioned
above. Certainly I don't want to impose undue restrictions on
those that want to subclass my classes; they are supposed to
know what they are doing!

Finally, there are other possibilities: CLOS doesn't give you
_the_ direction to follow, but a number of them (yes, that
makes life difficult: decisions must be taken...). Espen
Vestre already mentioned one could use MOP (that's what it
is for, after all). One could also `privatise' slots by
using uninterned symbols for the slot names (I think that
Graham gives such an example, but I know it from hearsay,
haven't seen the book myself), if one really must have it.
And anyway, slots not being private isn't usually the
real problem.

Best regards,
Vassil.


Barry Margolin

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

In article <1998050516474...@nym.alias.net>,
Vassil Nikolov <vnik...@math.acad.bg> wrote:
>On Mon, 4 May 1998 22:15:43 -0700,
>Aaron Gross <aa...@morannon.bfr.co.il> wrote in comp.lang.lisp:
>
>>I think you're giving C++ a bum rap here. The class definitions are
>>conventionally put in .h files which the user has access to. Thus a
>>user can easily change private data to public. (...)
>
>Even if you only have access to the compiled code?
>And if you just make public what was private, can you
>be sure that nothing will be broken, with C++'s
>complicated rules regarding private/public and inheritance?

Actually, C++ got that right. Name resolution takes place independently of
public/private designation. After it resolves the name, it then determines
whether the visibility rules allow the reference. Although this tends to
confuse newbies (and probably even some oldies), it means that making a
member public cannot change the behavior of existing code.

Barry Margolin

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

In article <354F6DA6...@zeta-sqoft.com>,

Scott L. Burson <Gy...@zeta-sqoft.com> wrote:
>See if you can talk your boss into letting you use Java. This shouldn't
>be too hard -- it's very trendy. And it will be a much better language
>to work in than any of the above.

I was speaking to someone last night who's looking to hire someone to
rewrite a Java application in C++, because Java isn't up to snuff yet.

David Hanley

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

Barry Margolin <bar...@bbnplanet.com> wrote:
> In article <354F6DA6...@zeta-sqoft.com>,
> Scott L. Burson <Gy...@zeta-sqoft.com> wrote:
> >See if you can talk your boss into letting you use Java. This shouldn't
> >be too hard -- it's very trendy. And it will be a much better language
> >to work in than any of the above.

> I was speaking to someone last night who's looking to hire someone to
> rewrite a Java application in C++, because Java isn't up to snuff yet.

Maybe they should look into translating it to smalltalk?
I would think that would be less work. For one thing, the java code might
depend a lot on the GC and dynamism.

dave

Scott L. Burson

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

Barry Margolin wrote:
> In article <354F6DA6...@zeta-sqoft.com>,
> Scott L. Burson <Gy...@zeta-sqoft.com> wrote:
> >See if you can talk your boss into letting you use Java. This shouldn't
> >be too hard -- it's very trendy. And it will be a much better language
> >to work in than any of the above.
>
> I was speaking to someone last night who's looking to hire someone to
> rewrite a Java application in C++, because Java isn't up to snuff yet.

Yeah, Mike responded to me privately with similar concerns.

I clearly need to take another look at the Java environment situation --
it's been about 10 months since I did so. I had assumed that we'd be
seeing pretty good environments by now, but I guess we're not. I know
it's a lot of work to build a good environment, but it doesn't seem like
anyone in the Unix world is investing all that much in it, even Sun.

-- Scott

Rainer Joswig

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

In article <wog1inw...@antartida.gia.ist.utl.pt>, Antonio Leitao
<a...@antartida.gia.ist.utl.pt> wrote:

> Now, I would like to add a related topic, namely the use of structures
> (a la defstruct). When we use a defstruct, we get automatically
> defined accessors and we simply use those accessors to manipulate the
> structure. We know that an "instance" of a defstruct is just a vector,
> so we can access the slots using aref.

This does work only in your particular implementation of CL
and is not part of ANSI CL.


Welcome to Macintosh Common Lisp Version 4.2!
? (defstruct foo bar)
FOO
? (make-foo :bar 'baz)
#S(FOO :BAR BAZ)
? (aref * 0)
> Error: value #S(FOO :BAR BAZ) is not of the expected type ARRAY.
> While executing: CCL::%AREF1
> Type Command-. to abort.
See the Restarts... menu item for further choices.

--
http://www.lavielle.com/~joswig/


Marco Antoniotti

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

I think there are advantages in both models of "fencing".

What I think makes things a little murky in C++ (CL is murky as well
for totally different reasons) is that the two concepts of "package"
(plug in your favorite synonym) and class (plug in your favorite
synonym) are collapsed. (I am not familiar enough with C++
namespaces, nor do I know how much they are implemented in various
compilers).

Java does it "more" right, by (re)introducing the notion of package.
Ada95 also has some interesting features.

Cheers

--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - (0)6 - 68 80 79 23, fax. +39 - (0)6 - 68 80 79 26
http://www.parades.rm.cnr.it

Espen Vestre

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

Marco Antoniotti <mar...@galvani.parades.rm.cnr.it> writes:

> for totally different reasons) is that the two concepts of "package"
> (plug in your favorite synonym) and class (plug in your favorite
> synonym) are collapsed.

In Perl, OO is in fact a hack built upon the package system...

--

(espen)

Erik Naggum

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

* Antonio Leitao

| Now, I would like to add a related topic, namely the use of structures (a
| la defstruct). When we use a defstruct, we get automatically defined
| accessors and we simply use those accessors to manipulate the structure.
| We know that an "instance" of a defstruct is just a vector, so we can
| access the slots using aref.

* Rainer Joswig


| This does work only in your particular implementation of CL
| and is not part of ANSI CL.

well... you can tell DEFSTRUCT you want a vector representation.
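
for instance (a quick sketch using the standard (:type vector) option):

(defstruct (point (:type vector)) x y)

(make-point :x 1 :y 2)              ; => #(1 2)
(aref (make-point :x 1 :y 2) 0)     ; => 1, same as (point-x ...)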

#:Erik
--
Support organized crime: use Microsoft products!

Tim Bradshaw

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

* Antonio Leitao wrote:

> Why not just resort exclusively to accessors, maybe allowing for some
> kind of optimization where they are inlined as slot-value? Then you
> would have an accessor that behaved like a slot-value without the
> overhead of going through a generic function. I don't remember whether
> this is not already possible in CLOS. Maybe it is.

At least some implementations optimise the automatically-defined
accessors into something like SLOT-VALUE with a constant slot name,
and then optimise that into something very quick indeed (close to
DEFSTRUCT performance). You may lose if you define auxiliary methods
on the accessors.

It would be interesting to know which implementations optimise this
kind of thing, and how much.

> The idea is to convince the programmer that using slot-value will not
> give him anything that a proper optimization couldn't give. Just like it
> happens with defstruct. We don't use aref because we know that it can't
> be faster than the accessor when it is compiled with proper optimization.
> And for people who like to look at slots as "instance variables", then
> we already have the powerful with-accessors. The only problem is
> that, contrary to what happens with defstruct, there is no automatic
> creation of accessors. By default, no accessors are created. We should
> change this.

It would be easy to write a wrapper around DEFCLASS which looked quite
like DEFSTRUCT and defined the accessors for you (and obfuscated the
slot names if you wanted).
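
Something along these lines might do (a rough sketch; DEFINE-RECORD and
its behaviour are invented for the example):

(defmacro define-record (name superclasses &rest slot-names)
  `(defclass ,name ,superclasses
     ,(mapcar (lambda (slot)
                `(,(make-symbol (symbol-name slot))   ; obfuscated slot name
                  :initarg ,(intern (symbol-name slot) :keyword)
                  :accessor ,slot))
              slot-names)))

;; (define-record point () x y)
;; (x (make-instance 'point :x 1 :y 2)) => 1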

--tim

Bob Hutchison

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

Barry Margolin <bar...@bbnplanet.com> wrote:

>In article <354F6DA6...@zeta-sqoft.com>,
>Scott L. Burson <Gy...@zeta-sqoft.com> wrote:
>>See if you can talk your boss into letting you use Java. This shouldn't
>>be too hard -- it's very trendy. And it will be a much better language
>>to work in than any of the above.
>
>I was speaking to someone last night who's looking to hire someone to
>rewrite a Java application in C++, because Java isn't up to snuff yet.

Any particular problem with Java? More to the point, what's the problem
with Java that C++ is supposed to fix?
---
Bob Hutchison, hu...@RedRock.com, (416) 760-0565
RedRock, Toronto, Canada

Vassil Nikolov

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

On Wed, 6 May 1998 12:50:05 -0700,
Barry Margolin <bar...@bbnplanet.com> wrote in comp.lang.lisp:

>In article <1998050516474...@nym.alias.net>,
>Vassil Nikolov <vnik...@math.acad.bg> wrote:
>>On Mon, 4 May 1998 22:15:43 -0700,
>>Aaron Gross <aa...@morannon.bfr.co.il> wrote in comp.lang.lisp:
>>

>>>(...) a user can easily change private data to public. (...)
>>
(...)


>>And if you just make public what was private, can you
>>be sure that nothing will be broken, with C++'s
>>complicated rules regarding private/public and inheritance?
>
>Actually, C++ got that right. Name resolution takes place independently of
>public/private designation. After it resolves the name, it then determines
>whether the visibility rules allow the reference. Although this tends to
>confuse newbies (and probably even some oldies), it means that making a
>member public cannot change the behavior of existing code.

Thank you for correcting me and for the clarification.
Since you know what you are talking about, I take my
question back. (There are still some things I'd like
to know about C++ in this respect, so I guess I might
RTFM... one day.)

When I think about it, I probably got mixed up with Java's
rules about what gets inherited. I am so lucky I can
afford (at least so far) to have only an academic interest
in these languages.

Best regards, Vassil.


Vassil Nikolov

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

On Wed, 6 May 1998 03:25:07 -0700,
Tim Bradshaw <t...@aiai.ed.ac.uk> wrote in comp.lang.lisp:

(...)


>Although I can't remember the details, it did seem to me that AMOP
>really requires SLOT-VALUE to call SLOT-VALUE-USING-CLASS, which
>strikes me as a ludicrous limitation on implementations which one
>shouldn't assume they actually adhere to.

Here's the speed vs. flexibility trade-off again. X needs the
former, Y needs the latter. Usually one can't have both,
but the good news is that rarely one wants both equally strongly.

By the way, both CLtL2 and the standard (using CLHS as the source)
_recommend_ that the implementation uses SLOT-VALUE-USING-CLASS,
although this is not a requirement for conformance to the standard.

Would it be nice to be able to declare that SLOT-VALUE doesn't
use SLOT-VALUE-USING-CLASS on a particular occasion?
In other words, to give the programmer choice whether to insist
on flexibility or on speed. (Such a declaration would follow
the spirit of the DYNAMIC-EXTENT declaration, I guess.)

Best regards, Vassil.


Lyman S. Taylor

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

In article <3557cdff....@enews.newsguy.com>,
Bob Hutchison <hu...@RedRock.com> wrote:
>Barry Margolin <bar...@bbnplanet.com> wrote:
...

>>rewrite a Java application in C++, because Java isn't up to snuff yet.
>
>Any particular problem with Java? More to the point, what's the problem
>with Java that C++ is supposed to fix?

It is likely not Java, the "Language", that isn't up to snuff. It is the
implementation(s) thereof and the high state of flux of the libraries.

Does the language C++ do things better than Java? IMHO, for most things,
no. Do the implementations of C++ do many things better than Java? In many
circumstances, yes. Especially if we're outside the venue of web
browser applets.

For instance, Sun's "super duper" Java optimizer isn't ready for prime
time yet. Most C++ implementations' optimizers have many more man-years of
work in them.

--

Lyman S. Taylor "Twinkie Cream; food of the Gods"
(ly...@cc.gatech.edu) Jarod, "The Pretender"


Georg Bauer

unread,
May 8, 1998, 3:00:00 AM5/8/98
to

In article <6in7tp$erh$1...@eve.enteract.com>, David Hanley
<ma...@enteract.com> wrote:

> In a group project you
>might be happy for such discipline.

Actually I would be much happier in a group project if everybody
_understood_ what to touch and what not. It's not that it isn't there -
you can always write a comment into the source. It is awkward to build
restrictions into the compiler just because some other people might not be
able to understand what it is all about. I don't like the idea of
transporting "behaviour" in the source - it should be in people's minds
already.

The better way would be to throw out the people tinkering with parts of
the objects that they shouldn't tinker with and hire some people instead
that have a clue.

BTW: people that would tinker with foreign objects in an open environment
will do things like that in a closed environment, too. Ok, they won't play
with your private parts (pun intended), but they will always find
"creative" ways to abuse your modules. So it would be better not to have
to work with those people in the first place.

An anal-retentive (I like this one, Erik :-) ) language doesn't help there.
It just hinders those that have a clue about what they are doing. _Any_
language can be abused by bad programmers. So a language should
concentrate on helping the clueful programmer and not worry about bashing
clueless ones.

bye, Georg

--
#!/usr/bin/perl -0777pi
s/TS:.*?\0/$_=$&;y,a-z, ,;s, $,true,gm;s, 512,2048,;$_/es

Asle Olufsen

unread,
May 8, 1998, 3:00:00 AM5/8/98
to

Barry Margolin <bar...@bbnplanet.com> writes:

>
> Of course, this is one of the primary reasons why many people don't think
> CLOS is OO. Most of Object-Oriented programming is based on the idea of
> active objects, which tinker with their own innards and interact with other
> objects solely by sending messages, so that those objects can then tinker
> with their own innards and send messages, and so on.


>
> CLOS doesn't implement this OO paradigm.


I once anticipated this kind of objection so I created something like
the following macro in order to argue that the difference is only one
of perspective.

(defmacro define-class (name superclasses &rest body)
  (let ((mode :none)
        (slots nil)
        (slot-names nil)
        (functions nil)
        main
        (main-done-p nil))
    (dolist (item body)
      (if (keywordp item)
          (setq mode item)
          (case mode
            (:var (push (list item item) slot-names)
                  (push (list item
                              :initarg (intern (symbol-name item) :keyword)
                              :accessor item)
                        slots))
            (:function (push item functions))
            (:main (if main-done-p
                       (error "Double main with : ~a" item)
                       (setq main item
                             main-done-p t)))
            (:none (error "No mode given for ~a" item))
            (t (error "Unknown mode: ~a" mode)))))
    `(progn
       (defclass ,name ,superclasses ,slots)
       ,@(mapcar #'(lambda (f)
                     `(defmethod ,(car f) ((this ,name) ,@(cadr f))
                        (with-accessors ,slot-names this
                          ,@(cddr f))))
                 functions)
       ,(when main-done-p
          `(defmethod initialize-instance :after ((this ,name) &rest args)
             (with-accessors ,slot-names this
               ,main))))))


Then you could say

(define-class car ()
  :var
  reg-nr
  owner
  model

  :function
  (change-owner (new-owner)
    (setq owner new-owner))

  (display (extra)
    (format t "~a: ~a~%" reg-nr extra))

  :main
  (display this "I'm alive"))


(defvar *herbie* (make-instance 'car :reg-nr "AB12345"))

(change-owner *herbie* "Me")


where the macro expands into:

(DEFCLASS CAR NIL
  ((MODEL :INITARG :MODEL :ACCESSOR MODEL)
   (OWNER :INITARG :OWNER :ACCESSOR OWNER)
   (REG-NR :INITARG :REG-NR :ACCESSOR REG-NR)))

(DEFMETHOD DISPLAY ((THIS CAR) EXTRA)
  (WITH-ACCESSORS ((MODEL MODEL) (OWNER OWNER) (REG-NR REG-NR)) THIS
    (FORMAT T "~a: ~a~%" REG-NR EXTRA)))

(DEFMETHOD CHANGE-OWNER ((THIS CAR) NEW-OWNER)
  (WITH-ACCESSORS ((MODEL MODEL) (OWNER OWNER) (REG-NR REG-NR)) THIS
    (SETQ OWNER NEW-OWNER)))

(DEFMETHOD INITIALIZE-INSTANCE :AFTER ((THIS CAR) &REST ARGS)
  (WITH-ACCESSORS ((MODEL MODEL) (OWNER OWNER) (REG-NR REG-NR)) THIS
    (DISPLAY THIS "I'm alive")))


The audience didn't understand what the hell I was talking about.

Oluf

Georg Bauer

unread,
May 9, 1998, 3:00:00 AM5/9/98
to

In article <w6ogxa9...@gromit.nextel.no>, Espen Vestre <e...@nextel.no> wrote:

>In Perl, OO is in fact a hack built upon the package system...

It _uses_ the package system, that is correct. Or it would be more correct
to say it connects a namespace and a reference to create an object. Due to
the nature of perl those namespaces don't need to be full packages - you
can even build them at runtime. The nice thing: you can touch any aspect
of the implementation and the semantics of the OO-system. Actually not an
MOP, but something with the same intention.

Of course, all of perl is a hack. A quite lovely hack (for some very
obscure definition of "lovely"). _I_ love it. Ok, I have an obscure love
for ugly languages :-)

(actually Perl is one of the few usable scripting languages - it has
closures, some (bad) garbage collection, higher order functions and
several other things normally only found in Lisp - so I think I am
forgiven for using Perl)

Bruce Tobin

unread,
May 16, 1998, 3:00:00 AM5/16/98
to

Barry Margolin wrote:

> In article <354F6DA6...@zeta-sqoft.com>,
> Scott L. Burson <Gy...@zeta-sqoft.com> wrote:
> >See if you can talk your boss into letting you use Java. This shouldn't
> >be too hard -- it's very trendy. And it will be a much better language
> >to work in than any of the above.
>
> I was speaking to someone last night who's looking to hire someone to

> rewrite a Java application in C++, because Java isn't up to snuff yet.
>

What kind of app? In what respects is Java not up to snuff? There are
certainly classes of applications for which Java isn't suitable (and knowing
the kinds of things you work on I wouldn't be surprised if your example falls
into one of them), but it's working very well for us. I'd rather be using Lisp
or Smalltalk, of course, but a lot of customers would balk at the latter, and
unfortunately all of them would balk at the former. If performance is the
issue, there are a lot of ways to speed up a Java app without rewriting it in
something else. A properly written Java app shouldn't be less than half as
fast as the same app written in C++, even if it's very computationally
intensive.


Patrick Wray

unread,
May 19, 1998, 3:00:00 AM5/19/98
to

> The basic assumption with "protection" is that the
>user is too stupid to be trusted with not breaking
>the object. If this assumption is true (and there's
>plenty of evidence to support that in general) then
>the solution is not "protection", it taking the
>keyboards away from the idiots and let your skilled
>employees lose. Don't cripple the skilled individuals
>in the name of hordes of idiots.
>
> Mike McDonald


The bad assumption here is that "protected" equates to "inaccessible".
What's your problem with allowing private data to be accessed and modified
through public interfaces that prevent misuse of the private data? Assuming
the skilled individuals have designed the class in the first place....


Mike McDonald

unread,
May 19, 1998, 3:00:00 AM5/19/98
to

In article <6jrtc6$ve6$1...@reader1.reader.news.ozemail.net>,

The problem is that even skilled individuals cannot foresee all circumstances
in which the object may be used. If I'm a skilled individual who's using an
instance of the class you design and I have decided that I need to bypass your
access interface, why should I not be allowed to do that?

Mike McDonald
mik...@mikemac.com


Kent M Pitman

unread,
May 19, 1998, 3:00:00 AM5/19/98
to

mik...@mikemac.com (Mike McDonald) writes:

> The problem is that even skilled individuals cannot foresee all circumstances
> in which the object may be used. If I'm a skilled individual who's using an
> instance of the class you design and I have decided that I need to bypass your
> access interface, why should I not be allowed to do that?

"Because I said so," said the program's author.

In the test of wills between author and user, someone must be defined
to be right. Generally, it is the author's intellectual property and
the author, not the user, has final say. That's just the law in all
countries observing the Berne convention and/or other laws where the
software developer licenses uses under contract. Of course, you as an
author might give up those rights; but you have no right to ask other
authors to. Software will succeed or fail on the policies that the
author dictates. You may wish you could go further as a user, but
frankly I think that may hurt society as a whole in service of your
private desire.

Suppose I, as a software author, wanted to make the claim that no one
ever needed inside a certain interface because I could not conceive of
a need. Suppose you could silently violate the interface without
telling me. You may have such a need but it doesn't serve me for you
to be able to get there quietly. You should have to ask me for an
interface, so that I can judge whether you have a good reason. I
might decide you have a good reason and give you a new interface--with
documentation and with appropriate alterations to my belief system
about how people are using my product. I might instead decide your
reason is bad, and suggest a workaround. You might decide my response
wins and use it. You might decide my response loses and buy someone
else'e product the next time. But ultimately, it's the author's
decision. Of course, the author might decide you can have free reign;
but then, why would such an author have made something private in the
first place?

Mike McDonald

unread,
May 19, 1998, 3:00:00 AM5/19/98
to

In article <sfwk97i...@world.std.com>,

Kent M Pitman <pit...@world.std.com> writes:
> mik...@mikemac.com (Mike McDonald) writes:
>
>> The problem is that even skilled individuals cannot foresee all circumstances
>> in which the object may be used. If I'm a skilled individual who's using an
>> instance of the class you design and I have decided that I need to bypass your
>> access interface, why should I not be allowed to do that?
>
> "Because I said so," said the program's author.
>
> In the test of wills between author and user, someone must be defined
> to be right. Generally, it is the author's intellectual property and
> the author, not the user, has final say. That's just the law in all
> countries observing the Berne convention and/or other laws where the
> software developer licenses uses under contract. Of course, you as an
> author might give up those rights; but you have no right to ask other
> authors to. Software will succeed or fail on the policies that the
> author dictates. You may wish you could go further as a user, but
> frankly I think that may hurt society as a whole in service of your
> private desire.

Uh, we're talking about the public/private/protected feature of some OO
programming languages and whether or not they improve/hinder code
development. As such, I don't see how the Berne Convention is relevant. Can
you enlighten me as to how it is relevant?

Mike McDonald
mik...@mikemac.com


David Hanley

unread,
May 19, 1998, 3:00:00 AM5/19/98
to

Mike McDonald <mik...@mikemac.com> wrote:

> The problem is that even skilled individuals cannot foresee all circumstances
> in which the object may be used. If I'm a skilled individual who's using an
> instance of the class you design and I have decided that I need to bypass your
> access interface, why should I not be allowed to do that?

Because, presumably, the author of the class has studied the
issues of the class very carefully, and spent considerable time architecting
it. If you knew more about geometries than the author of the geometries
class, why are you using his class?

That's part of the whole point of OOP and modular programming,
for that matter. Specialists can architect their own modules/objects,
and, due to their specialization, hopefully do a good job of it.

As to foreseeing all possible uses: no, but a little flexible
programming goes a long way. And if you use the class in a way that
was not originally intended, by meddling with private data, it may
blow up on you, or, worse, may make someone else's code entirely blow
up. This is why data hiding is useful on big projects.

dave

Barry Margolin

unread,
May 19, 1998, 3:00:00 AM5/19/98
to

In article <lzj81.1366$DM1.1...@news.teleport.com>,

Mike McDonald <mik...@mikemac.com> wrote:
> The problem is that even skilled individuals cannot foresee all circumstances
>in which the object may be used. If I'm a skilled individual who's using an
>instance of the class you design and I have decided that I need to bypass your
>access interface, why should I not be allowed to do that?

Object oriented programming is intended to support independent evolution of
components. A class's exported interface defines an invariant protocol
that the class guarantees to support; the implementation is a black box.
If the entire implementation is exposed for potential use, it can't be
changed for fear of impacting unrelated components.

The Bondage&Discipline view of programming specifies that the language
should enforce this independence -- if your program compiles and links
successfully, it's guaranteed that changes to the implementation of classes
you use should not impact it, as long as the class still implements the
advertised protocol. The expectation is that there are a small number of
smart class designers, and a large number of dumb class users, and the
language should prevent people in the latter group from shooting themselves
in the foot.

The alternative is to believe that programmers know what they're doing, and
can be trusted to limit themselves to documented interfaces whenever
feasible. If they bypass the access interface, they take responsibility
for dealing with the consequences if the internal implementation changes.

In many environments, the B&D approach probably addresses a real need.
There simply aren't enough expert programmers to go around, so many places
need to employ pools of dumb programmers, and tie their hands to make sure
they don't leave time bombs. And even smart programmers can't really be
trusted -- I'll bet lots of the software with Y2K problems was written by
experts.

Another way to view it is that software development managers would like to
believe that software components can be treated like hardware components.
If you build a device out of IC's, you can only interface to it through its
pins, and no one complains that this restricts their creativity or that
they don't trust the IC designer to provide all the interfaces they need.
The advantage of this approach is that you can switch IC vendors easily, as
long as they implement the same specification. If this works well for
hardware components, why not for software as well? (That was a rhetorical
question.)

Mike McDonald

unread,
May 19, 1998, 3:00:00 AM5/19/98
to

In article <PIm81.82$_y1.13...@cam-news-reader1.bbnplanet.com>,

Barry Margolin <bar...@bbnplanet.com> writes:
> In article <lzj81.1366$DM1.1...@news.teleport.com>,
> Mike McDonald <mik...@mikemac.com> wrote:
>> The problem is that even skilled individuals cannot foresee all circumstances
>>in which the object may be used. If I'm a skilled individual who's using an
>>instance of the class you design and I have decided that I need to bypass your
>>access interface, why should I not be allowed to do that?
>

>

> The alternative is to believe that programmers know what they're doing, and
> can be trusted to limit themselves to documented interfaces whenever
> feasible. If they bypass the access interface, they take responsibility
> for dealing with the consequences if the internal implementation changes.

This is the approach I prefer, to assume people are competent until they
prove otherwise. Then do something about the incompetent ones, whether it be
remedial training, mentoring, or showing them the door.

> In many environments, the B&D approach probably addresses a real need.
> There simply aren't enough expert programmers to go around, so many places
> need to employ pools of dumb programmers, and tie their hands to make sure
> they don't leave time bombs.

I guess this is the main point with which I disagree. I don't believe there
is a shortage of competent programmers. I also believe there is an abundance
of incompetent programmers and an overwhelming majority of software "managers"
who can't tell the difference. I contend that most "shops" would be more
productive if they could remove the incompetent programmers and allow their
remaining competent ones to flourish.

> And even smart programmers can't really be
> trusted -- I'll bet lots of the software with Y2K problems was written by
> experts.

I think this is a different problem. All engineering is done under a set of
assumptions and constraints. The fault is to blindly continue using the
product after, or right up till, those assumptions are no longer true. The
constraint that Y2K programs fell under was that disk space was expensive and
extremely limited.

Mike McDonald
mik...@mikemac.com


Patrick Wray

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

Mike McDonald wrote in message ...


>In article <6jrtc6$ve6$1...@reader1.reader.news.ozemail.net>,
> "Patrick Wray" <pw...@ozemail.com.au> writes:
>>
>>> The basic assumption with "protection" is that the
>>>user is too stupid to be trusted with not breaking
>>>the object. If this assumption is true (and there's
>>>plenty of evidence to support that in general) then
>>>the solution is not "protection", it taking the
>>>keyboards away from the idiots and let your skilled
>>>employees lose. Don't cripple the skilled individuals
>>>in the name of hordes of idiots.
>>>
>>> Mike McDonald
>>
>>
>> The bad assumption here is that "protected" equates to "inaccessible".
>> What's your problem with allowing private data to be accessed and modified
>> through public interfaces that prevent misuse of the private data? Assuming
>> the skilled individuals have designed the class in the first place....
>

> The problem is that even skilled individuals cannot foresee all circumstances
>in which the object may be used. If I'm a skilled individual who's using an
>instance of the class you design and I have decided that I need to bypass your
>access interface, why should I not be allowed to do that?
>

> Mike McDonald
> mik...@mikemac.com


There's nothing to stop you inheriting from the class and writing your own
access interface.
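
In CLOS terms, a rough sketch of what I mean (names invented):

(defclass widget ()
  ((serial :initform 42)))          ; the author exported no accessor

(defclass my-widget (widget)
  ((serial :accessor serial)))      ; same slot, my own accessor

;; (serial (make-instance 'my-widget)) => 42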


Harvey J. Stein

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

Kent M Pitman <pit...@world.std.com> writes:

> Suppose I, as a software author, wanted to make the claim that no one
> ever needed inside a certain interface because I could not conceive of
> a need. Suppose you could silently violate the interface without
> telling me. You may have such a need but it doesn't serve me for you
> to be able to get there quietly. You should have to ask me for an
> interface, so that I can judge whether you have a good reason. I
> might decide you have a good reason and give you a new interface--with
> documentation and with appropriate alterations to my belief system
> about how people are using my product. I might instead decide your
> reason is bad, and suggest a workaround. You might decide my response
> wins and use it. You might decide my response loses and buy someone
> else'e product the next time. But ultimately, it's the author's
> decision. Of course, the author might decide you can have free reign;
> but then, why would such an author have made something private in the
> first place?

Since you want to treat programmers as authors, let's flip the
equation and apply the above to a book. To help make the book closer
to software, let's call it a textbook.

I read a textbook. The textbook has a mistake. It seems like you're
saying I can't just notice the error & use my corrected version, that
I have to either report the mistake to the author or buy a new
textbook.

Alternatively, we can suppose software's a machine.

I buy a car. The car runs poorly under certain conditions that the
manufacturer and/or designer didn't foresee or chose not to fix.
You're saying that I can't just modify my car. I have to either
notify the manufacturer and argue for a fix, or buy a different car.

Both of these are absurd positions. I don't understand why you think
it's different for software.

More generally, all the author, designer, manufacturer, ... can do is
to sell something of their design, authorship, ... to the consumer.
Once the consumer owns it there is no obligation on the consumer,
legal or otherwise, to report problems. And there's no obligation on
the consumer, legal or otherwise, to use the item unmodified. The
consumer can't claim it's the original item, but he can do with it as
he pleases.

Why should a programmer be different? Why should a programmer be able
to sell a program while withholding the ability to modify it?

--
Harvey J. Stein
BFM Financial Research
hjs...@bfr.co.il

Erik Naggum

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

* Mike McDonald

| The constraint that Y2K programs fell under was that disk space was
| expensive and extremely limited.

this is historically false. none of these "space-saving" arguments are
actually supported by evidence from the time they were supposedly made.
these arguments are only made up now that we face the problem, but they
are incredibly convenient arguments: "it was somebody else's faulty
assumptions a long time ago", but the danger in believing these false
reasons is that we won't learn anything and won't change our ways.

our whole culture has historically only been concerned with centuries
very close to their beginnings and ends ("fin-de-siècle" is even a
well-established psychological effect on people near a century's end, and
it's even worse now, with journalists crying the end of the world because
of computer glitches -- but they would cry the end of the world, anyway,
if the Y2K problem didn't exist, just look at newspapers and books
from 1898), because people don't plan very far ahead and they can always
figure out which century was meant for anything that happens in their
lifetime, anyway. Y2K is a people and a culture problem, not at all a
computer problem. the solution is a people and culture solution, too:
DO NOT ABBREVIATE INFORMATION!

if you look at 16th, 17th, 18th, 19th and 20th century art, you'll find a
predominance of artists who signed and dated their works with only two
digits. I have just as hard a time believing they did this to save space
as I have believing 20th-century programmers or system designers did.
centuries do not matter to people who live less than 100 years, unless
they stop and think carefully about it.

and _if_ they had wanted to save memory, they would have stored a date in
_much_ less space than they would require for 6 characters or even 6 BCD
digits. think about it: with millions if not billions of records, if you
could use a date encoding that spanned 179 years in 16 bits (or 716 years
in 18 bits) simply by counting the number of days since some "epoch", you
would save yourself millions if not billions of bytes or half-words, at
the cost of a table lookup and some arithmetic when reading and printing
dates, but also with the gain of simplifying a lot of other calculations
with dates. you can bet hard cash that _somebody_ would have thought of
this if "saving space" was as important to them as people would have it
these days in order to "explain" this problem away in a way that does not
affect them.
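
to make the arithmetic concrete, a quick sketch (counting days from
1900-01-01; 16 bits last until 2079):

(defun days-since-1900 (year month day)
  (floor (- (encode-universal-time 0 0 0 day month year 0)
            (encode-universal-time 0 0 0 1 1 1900 0))
         86400))

;; (days-since-1900 1998 5 20) => 35933, which fits easily in 16 bits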

however, _every_ person who writes years with two digits is part of the
cultural problem that resulted in the Y2K mess. the only _permanent_
solution is that people start writing years with all four digits. the
other solutions require moderate amounts of intelligence in all software
that reads dates from humans, and we know what _requiring_ intelligence
from today's crop of unskilled software workers will result in: massive
system failure. when a computerized packaging plant's quality control
discards freshly canned tunafish that were date-stamped to be fresh for
two years because it believes the cans are 98 years old, it's not _just_
a case of braindamage by design, it's the culture of abbreviated
information that is at fault. fix that, and your Y2K problems go away.
then never, _ever_ store dates or timestamps as anything but a
non-lossy textual form like ISO 8601 (which includes the timezone, which
people also remove in the belief that it can easily be reconstructed,
which it cannot) or as a linearly increasing number of units of time from
some epoch like Common Lisp's Universal Time (and with a timezone to make
the information complete).
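
a sketch of such a non-lossy textual form (fixed at UTC here for
brevity; a real version would of course carry the actual zone):

(defun iso-8601 (universal-time)
  (multiple-value-bind (sec min hour day month year)
      (decode-universal-time universal-time 0)
    (format nil "~4,'0d-~2,'0d-~2,'0dT~2,'0d:~2,'0d:~2,'0dZ"
            year month day hour min sec)))

;; (iso-8601 (encode-universal-time 0 0 0 20 5 1998 0))
;; => "1998-05-20T00:00:00Z"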

with all the "pattern" talk going around these days, I'm surprised that I
cannot find any "patterns" for dealing with dates and times in the right
ways, only a massive number of botched designs.

the really sad thing about this is that people will now store dates in
their databases with 8-digit numbers, requiring 32 bits of storage if they
store a parsed integer value, or 64 bits if they store characters, when
16 bits would _still_ be enough for 80 more years if they had really
_wanted_ to save space and started counting days from 1900, or 179 years
from whenever they started counting -- surely a flight control system
wouldn't live for 179 years. so the "save space" argument curiously
requires _really_ stupid people who didn't realize what savings they
could make. if they had been _smart_ when they wanted to save space, we
wouldn't have had any problems with dates for yet another lifetime.

I think the Y2K problem is just a very convenient outlet for the public
_need_ for fin-de-siècle hysteria. with many managers behaving as if
there won't _be_ any year 2000 so they won't have to deal with this, I
don't think we need to look for old reasons why people don't "get it" --
we need to look for reasons why they are _still_ solving it the wrong way
-- and that's the cultural problem that won't go away.

also, I'd be curious to see how many historical archives suddenly lose
all their material from this century when we roll over into the next
because "5/20/98" will be parsed as 2098-05-20 and no data exists for
that date¹. as far as I have seen, people are mostly concerned with Y2K
as it relates to treating "now" correctly. they have indeed learned
nothing about the root cause of this tragic cultural problem.

#:Erik
-------
¹ (encode-universal-time 0 0 0 20 5 98) will produce the right date in
Common Lisp until 2048, because it treats years in the range 0-99 as
being within 50 years of "now".
--
"Where do you want to go to jail today?"
-- U.S. Department of Justice Windows 98 slogan

Barry Margolin

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <m2ogwtb...@blinky.bfr.co.il>,

Harvey J. Stein <hjs...@bfr.co.il> wrote:
>Why should a programmer be different? Why should a programmer be able
>to sell a program while witholding the ability to modify it?

Distinguishing between private and exported interfaces in a program has
nothing to do with whether someone can modify the program. If you want to
allow someone to modify a program, you provide the source code. They can
then change private interfaces into public interfaces to their heart's
content.

But if you have a language that allows users to access internal data of a
class, and users use this feature to bypass the public interfaces, they're
essentially voiding the warranty on the software. The class was designed
to be used in a certain way, and the designer is essentially claiming that
if you follow the rules it should continue to work, even though he might
change the internals over time. The class designer doesn't want to be held
responsible for what happens to your program if he releases a new version
of the library that has different internal representations (e.g. he changes
from polar to rectangular coordinates).

Barry Margolin

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <31046379...@naggum.no>, Erik Naggum <c...@naggum.no> wrote:
>* Mike McDonald
>| The constraint that Y2K programs fell under was that disk space was
>| expensive and extremely limited.
>
> this is historically false.

Thank you, Erik, for an enormous diatribe that has absolutely nothing to do
with this newsgroup, and for not even bothering to change the Subject
appropriately. :(

If anyone feels the need to reply to him in the group, *please* put Y2K or
something like it in the Subject line (like this message), so that those of
us who want to read the OO discussion can easily skip over the irrelevant
Y2K tangent. I didn't really mean to start a discussion of Y2K issues, I
was just using it as an easy example of the fact that many excellent
programmers and software designers make enormous mistakes, so the B&D
constraints may have some justification.

I happen to think I can live without them. I also think I'm a good enough
driver that speed limits shouldn't apply to me. I think I read in an AAA
publication that according to a survey, most drivers think they're above
average in driving skill; by definition, many of us are wrong (please,
don't start a tangent on the difference between mean, median, and mode).

Harvey J. Stein

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

Barry Margolin <bar...@bbnplanet.com> writes:

> In article <m2ogwtb...@blinky.bfr.co.il>,
> Harvey J. Stein <hjs...@bfr.co.il> wrote:
> >Why should a programmer be different? Why should a programmer be able
> >to sell a program while witholding the ability to modify it?
>
> Distinguishing between private and exported interfaces in a program has
> nothing to do with whether someone can modify the program. If you want to
> allow someone to modify a program, you provide the source code. They can
> then change private interfaces into public interfaces to their heart's
> content.
>
> But if you have a language that allows users to access internal data of a
> class, and users use this feature to bypass the public interfaces, they're
> essentially voiding the warranty on the software. The class was designed
> to be used in a certain way, and the designer is essentially claiming that
> if you follow the rules it should continue to work, even though he might
> change the internals over time. The class designer doesn't want to be held
> responsible for what happens to your program if he releases a new version
> of the library that has different internal representations (e.g. he changes
> from polar to rectangular coordinates).

I absolutely agree. There's no responsibility on the author if the
user modifies the internals or in any way doesn't stick to the
documented interface. Actually, with current shrink-wrap licenses,
there's no responsibility on the author no matter what the user does,
but that's another issue :).

My comment was intended to be rhetorical and to be viewed in contrast
to Kent M Pitman's comment <pit...@world.std.com> that it's
essentially wrong for the user to violate an interface and that he
should have to gain permission from the developer or go elsewhere:

Suppose you could silently violate the interface without telling
me. You may have such a need but it doesn't serve me for you to be
able to get there quietly. You should have to ask me for an
interface, so that I can judge whether you have a good reason.

The last sentence is the point in question. Additionally:

I might decide you have a good reason and give you a new
interface--with documentation and with appropriate alterations to
my belief system about how people are using my product. I might
instead decide your reason is bad, and suggest a workaround. You
might decide my response wins and use it. You might decide my
response loses and buy someone else'e product the next time. But
ultimately, it's the author's decision.

Again, the last sentence is the point of disagreement.

Finally, we have the comment:

Of course, you as an author might give up those rights; but you
have no right to ask other authors to. Software will succeed or
fail on the policies that the author dictates. You may wish you
could go further as a user, but frankly I think that may hurt
society as a whole in service of your private desire.

Correct me if I'm wrong, but it seems to me that Kent Pitman is
advocating the position that the user is required, both morally and by
law, to only use the published interface. I'm advocating that the
user be free to do whatever he damn well pleases with the software he
purchased, but at his own risk.

Aaron Gross

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

Erik Naggum <c...@naggum.no> writes:

> however, _every_ person who writes years with two digits is part
> of the cultural problem that resulted in the Y2K mess. the only
> _permanent_ solution is that people start writing years with all
> four digits. the other solutions require moderate amounts of
> intelligence in all software that reads dates from humans, and we
> know what _requiring_ intelligence from today's crop of unskilled
> software workers will result in: massive system failure.

Whatever happened to letting computers do the work? I don't think it's
too much to ask that a check dated 20.5.98 be stored in the computer
with the date 1998-05-20. If computer programmers were all
super-competent, would you still insist that we *always* write dates
with four-digit years? If not, then you're insisting that people
change their behavior to accommodate broken software. What if there
were no computers? Would you still insist that we not abbreviate
years? Any person looking at a check dated 20.5.98 within the next
several decades will know that the "98" means "1998". What's the
problem with this?

Erik Naggum

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

* Aaron Gross

| What if there were no computers? Would you still insist that we not
| abbreviate years?

yes, of course. that is precisely my point.

let me turn your question around and ask what is the problem with writing
a year in full? why do you _have_ to abbreviate it?

#:Erik

Tim Bradshaw

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

* David Hanley wrote:

> Mike McDonald <mik...@mikemac.com> wrote:
>> The problem is that even skilled individuals cannot foresee all circumstances
>> in which the object may be used. If I'm a skilled individual who's using an
>> instance of the class you design and I have decided that I need to bypass your
>> access interface, why should I not be allowed to do that?

> Because, presumably, the author of the class has studied the

> issues of the class very carefully, and spent considerable time architecting
> it. If you knew more about geometries then the author of the geometries
> class, why are you usin his class?

Well, I'm using the class because I have finite time to spend on the
problem and although I understand it rather well I don't really want
to have to reinvent the whole thing from the ground up (and deal with
making my new classes look enough like the old ones that everyone
else's code will just carry on working). So I have a peek into the
existing stuff, and do some hacking with undocumented internals, and
things work. Of course, they'll break in the next release of the
classes, but I'm willing to take that risk (and I document carefully
what I've done so I can work out how to do an equivalent thing next
time if that turns out to be necessary).

In many cases the alternative is either to sit there and shout at the
vendor till they fix it (n months down the line, if at all), or
reimplement the whole class hierarchy from 0. So I'd at least like
the *option* of taking my life in my hands in these cases.

This sort of thing is particularly bad in Java (and C++ perhaps)
where, as far as I can see, you can't even add *methods*
to an existing class without the whole source code to it, even if
those methods just use the documented interface to the class... Uck.
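
For contrast, in Lisp one can add behaviour to somebody else's class
from the outside, using only its documented interface (a sketch;
BRIEF-DESCRIPTION is an invented name):

(defmethod brief-description ((p pathname))
  (format nil "file ~a of type ~a" (pathname-name p) (pathname-type p)))

;; (brief-description #p"notes.txt") => "file notes of type txt"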

--tim


Vassil Nikolov

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

On Tue, 19 May 1998 22:09:00 GMT,
Mike McDonald <mik...@mikemac.com> wrote:

(...)

>constraint that Y2K programs fell under was that disk space was expensive and
>extremely limited.

I ask to be enlightened in what way did expensive and extremely
limited disk space force programmers into the assumption
that YY shall be converted into YYYY by always adding 1900?

Storing the last two digits would not cause a problem in
a large number of cases if only these two digits are taken
in context, as it were. And implementing something like
the Common Lisp rule for year<100 doesn't take up so much
space in the code.

Best regards,
Vassil.


Mike McDonald

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <Tmw81.7$152.1...@cam-news-reader1.bbnplanet.com>,

Barry Margolin <bar...@bbnplanet.com> writes:
> In article <31046379...@naggum.no>, Erik Naggum <c...@naggum.no> wrote:
>>* Mike McDonald
>>| The constraint that Y2K programs fell under was that disk space was
>>| expensive and extremely limited.
>>
>> this is historically false.
>
> Thank you, Erik, for an enormous diatribe that has absolutely nothing to do
> with this newsgroup, and for not even bothering to change the Subject
> appropriately. :(
>
> If anyone feels the need to reply to him in the group, *please* put Y2K or
> something like it in the Subject line (like this message), so that those of
> us who want to read the OO discussion can easily skip over the irrelevant
> Y2K tangent. I didn't really mean to start a discussion of Y2K issues, I
> was just using it as an easy example of the fact that many excellent
> programmers and software designers make enormous mistakes, so the B&D
> constraints may have some justification.

I didn't mean for it to become a general discussion of Y2K problems and
their cultural roots. I was just trying to point out that even competent
people can make what later seem like mistakes given the constraints and
cultural biases at the time. B&D would not have helped avoid this "problem" at
all.

> I happen to think I can live without them. I also think I'm a good enough
> driver that speed limits shouldn't apply to me. I think I read in a AAA
> publication that according to a survey, most drivers think they're above
> average in driving skill; by definition, many of us are wrong (please,
> don't start a tangent on the difference between mean, median, and mode).

And we all think we're smarter than average, more moral than average, ...
But so what? Some actually are!

Mike McDonald
mik...@mikemac.com

Mike McDonald

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <310465044...@naggum.no>,

Erik Naggum <c...@naggum.no> writes:
> * Aaron Gross
>| What if there were no computers? Would you still insist that we not
>| abbreviate years?
>
> yes, of course. that is precisely my point.
>
> let me turn your question around and ask what is the problem with writing
> a year in full? why do you _have_ to abbreviate it?
>
> #:Erik

Because we're human, which means we're lazy. And since everybody seems to
get it right, it's also unnecessary.

Mike McDonald
mik...@mikemac.com


Mike McDonald

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <6jtm7e$gpj$1...@reader1.reader.news.ozemail.net>,
"Patrick Wray" <pw...@ozemail.com.au> writes:

> There's nothing to stop you inheriting from the class and writing your own
> access interface.

Will you show me how to do that in C++? How do I subclass a class and get
access to its private data?

Mike McDonald
mik...@mikemac.com


Mike McDonald

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <199805201307...@nym.alias.net>,

"Vassil Nikolov" <vnik...@math.acad.bg> writes:
> On Tue, 19 May 1998 22:09:00 GMT,
> Mike McDonald <mik...@mikemac.com> wrote:
>
> (...)
>
>>constraint that Y2K programs fell under was that disk space was expensive and
>>extremely limited.
>
> I ask to be enlightened in what way did expensive and extremely
> limited disk space force programmers into the assumption
> that YY shall be converted into YYYY by always adding 1900?
>
> Storing the last two digits would not cause a problem in
> a large number of cases if only these two digits are taken
> in context, as it were. And implementing something like
> the Common Lisp rule for year<100 doesn't take up so much
> space in the code.
>
> Best regards,
> Vassil.
>

Not a code space problem, a data size problem. The thought (or excuse, if
you want to call it that) was that storing the century portion would double
the size of the year field. Multiply that by billions of records and it adds
up to real space real quickly.

Mike McDonald
mik...@mikemac.com


Barry Margolin

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <lybtst1a...@cruella.bfr.co.il>,

Aaron Gross <aaron+...@bfr.co.il> wrote:
>Whatever happened to letting computers do the work? I don't think it's
>too much to ask that a check dated 20.5.98 be stored in the computer
>with the date 1998-05-20.

That's generally what computers do. But the problem is that such
heuristics can be wrong. How does the computer know that the check isn't
100 years old? How do *you* know?

Which is why some people who have broken the 100-year-old mark have started
receiving ads for baby products -- someone entered a birthday like 5/20/96,
and a computer assumed they meant 1996, when they actually meant 1896. Or
when some products with a 3-year shelf life were entered into a computer,
the computer sent out an order to have them all thrown out: the expiration
date entered was xx/xx/01, which was interpreted as xx/xx/1901.

Although dates stored internally with only two digits is certainly part of
the problem, the other half of the problem is what Erik referred to:
because we allow users to be lazy and enter 2-digit years, every program
that deals with time has to be examined to see if its interpretations are
appropriate when the century barrier is crossed. It's not a simple
problem, since the heuristics are highly context dependent. For activities
like entering checks, food expiration dates, and credit card expiration
dates, you can generally assume that the date nearest to the current time
is intended. When entering birth dates, you know that they have to be in
the past, although you still have to be careful about years that could be a
little over a century ago.
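
To make that concrete, here is a hypothetical sketch in Lisp of the two
heuristics just described (the function and keyword names are invented for
illustration). Note that even the birth-date rule gets the
hundred-plus-year-old's "96" wrong, which is exactly the difficulty:

(defun expand-year (yy current-year &key (context :nearest))
  ;; Expand two-digit YY against a four-digit CURRENT-YEAR.
  ;; :NEAREST picks the century that puts the date closest to now
  ;; (checks, expiration dates); :PAST picks the most recent year
  ;; that is not in the future (birth dates).
  (let* ((century (* 100 (floor current-year 100)))
         (candidate (+ century yy)))
    (ecase context
      (:nearest
       (first (sort (list (- candidate 100) candidate (+ candidate 100))
                    #'< :key (lambda (y) (abs (- y current-year))))))
      (:past
       (if (> candidate current-year) (- candidate 100) candidate)))))

;; (expand-year 1 1998)                  => 2001  (the 3-year shelf life)
;; (expand-year 96 1998 :context :past)  => 1996  (wrong for the centenarian)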

Bradford W. Miller

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <310465044...@naggum.no>, Erik Naggum <c...@naggum.no> wrote:

> * Aaron Gross
> | What if there were no computers? Would you still insist that we not
> | abbreviate years?
>
> yes, of course. that is precisely my point.
>
> let me turn your question around and ask what is the problem with writing
> a year in full? why do you _have_ to abbreviate it?

Why stop there? Let's also include the AD, except (darn!) that's an
abbreviation...

Don't forget to never use acronyms, contractions, anything less than
complete personal identifiers or references that cannot possibly be
confused even to an
alien intelligence...

These damn natural languages, so imprecise. OK, henceforth all time
references to be chronons since the big bang, all spatial references to be
in ylorts (in 6 space) from the essential muddle, (per convention 73672a'/4
subsecant c in the galactic code, tick
2.2785928310983928375982739857392837479583112837xe9287357)

>
> #:Erik


> --
> "Where do you want to go to jail today?"
> -- U.S. Department of Justice Windows 98 slogan

--
Bradford W. Miller
miller@ehm-cee-cee-dot-cee-oh-ehm

Mike McDonald

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <31046379...@naggum.no>,

Erik Naggum <c...@naggum.no> writes:
> * Mike McDonald
>| The constraint that Y2K programs fell under was that disk space was
>| expensive and extremely limited.
>
> this is historically false. none of these "space-saving" arguments are
> actually supported by evidence from the time they were supposedly made.

Well, since I contributed to the problem years ago, I can only attest to why
I did it. One consideration was space. A second consideration was the language
and database being used. They made it easy and fast to abbreviate the year as
YY and the users liked only two digits. The third consideration was that it
was only supposed to last for six months. (This was in the spring of '84.)


> if you look at 16th, 17th, 18th, 19th and 20th century art, you'll find a
> predominance of artists who signed and dated their works with only two
> digits. I have just as hard a time believing they did this to save space
> as I have believing 20th's century programmers or system designers did.
> centuries do not matter to people who live less than a 100 years, unless
> they stop and think carefully about it.

I agree that cultural bias has a lot to do with the Y2K problem. No argument
there.

> and _if_ they had wanted to save memory, they would have stored a date in
> _much_ less space than they would require for 6 characters or even 6 BCD
> digits. think about it: with millions if not billions of records, if you
> could use a date encoding that spanned 179 years in 16 bits (or 716 years
> in 18 bits) simply by counting the number of days since some "epoch", you
> would save yourself millions if not billions of bytes or half-words, at
> the cost of a table lookup and some arithmetic when reading and printing
> dates, but also with the gain of simplifying a lot of other calculations
> with dates. you can bet hard cash that _somebody_ would have thought of
> this if "saving space" was as important to them as people would have it
> these days in order to "explain" this problem away in a way that does not
> affect them.
>
> however, _every_ person who writes years with two digits is part of the
> cultural problem that resulted in the Y2K mess. the only _permanent_
> solution is that people start writing years with all four digits.

If there's one thing I've learned from my years of programming, it's that
you can't change users. They're going to keep doing what they do. You can
either learn to live with it or become a monk. You'll never, ever get people
to stop abbreviating things, especially dates.

> the
> other solutions require moderate amounts of intelligence in all software
> that reads dates from humans, and we know what _requiring_ intelligence
> from today's crop of unskilled software workers will result in: massive
> system failure.

A lot of systems don't require much at all in order to handle abbreviated
dates properly. They deal only with input dates that are close to now. It's
relatively easy to get that right. Systems that deal with historical dates
have a much tougher time. They'd probably have to force the users to verify
the century portions. And the users are going to bitch about it. (That's the
second thing I've learned: users are always going to bitch.)

> when a computerized packaging plant's quality control
> discards freshly canned tunafish that were date-stamped to be fresh for
> two years because it believes the cans are 98 years old, it's not _just_
> a case of braindamage by design, it's the culture of abbreviated
> information that is at fault. fix that, and your Y2K problems go away.
> then never, _ever_ store dates or timestamps as anything but with a
> non-lossy textual form like ISO 8601 (which includes the timezone, which
> people also remove in the belief that it can easily be reconstructed,
> which it cannot) or as a linearly increasing number of units of time from
> some epoch like Common Lisp's Universal Time (and with a timezone to make
> the information complete).

Agreed, propagating abbreviated data leads to disaster. Some time, some
place.

> with all the "pattern" talk going around these days, I'm surprised that I
> cannot find any "patterns" for dealing with dates and times in the right
> ways, only a massive number of botched designs.

Now I want to address your "just store the date as a binary count from
some epoch" proposal. Given that you've seen so many people botch the patterns
for dates, do you really think they'd be able to get the conversion from/to a
binary stored date right?

> the really sad thing about this is that people will now store dates in
> their databases with 8-digit numbers, requiring 32 bits of storage if they
> store a parsed integer value, or 64 bits if they store characters, when
> 16 bits would _still_ be enough for 80 more years if they had really
> _wanted_ to save space and started counting days from 1900, or 179 years
> from whenever they started counting -- surely a flight control system
> wouldn't live for 179 years.

All you've done now is to change the Y2K problem into a Y2037 problem.
(Isn't that the year that Unix has its problem, since time was stored as a
32-bit count of seconds since the beginning of 1970? I know, lots of Unixes
have changed time_t to be 64 bits. Lots of others haven't.)

> so the "save space" argument curiously
> requires _really_ stupid people who didn't realize what savings they
> could make. if they had been _smart_ when they wanted to save space, we
> wouldn't have had any problems with dates for yet another lifetime.
>
> I think the Y2K problem is just a very convenient outlet for the public
> _need_ for fin-de-siècle hysteria.

People always find something to worry about. It used to be the
commies/capitalist, now it's Y2K. In 2001, we'll find something else to get
hysterical over. (I'm not saying that the Y2K problem isn't serious.)

Mike McDonald
mik...@mikemac.com


David Hanley

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

Mike McDonald <mik...@mikemac.com> wrote:
> I didn't mean for it to become a general discussion of Y2K problems and
> their cultural roots. I was just trying to point out that even competent
> people can make what latter seem like mistakes given the constraints and
> cultural biases at the time. B&D would not have helped avoid this "problem" at
> all.

I must disagree. B&D by the person managing dates may have
enforced four-digit dates.

dave

David J. Fiander

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

Erik Naggum <c...@naggum.no> writes:
> this is historically false. none of these "space-saving" arguments are
> actually supported by evidence from the time they were supposedly made.
> these arguments are only made up now that we face the problem, but they
> are incredibly convenient arguments: "it was somebody else's faulty
> assumptions a long time ago", but the danger in believing these false
> reasons is that we won't learn anything and won't change our ways.

I do, however, like the suggestion that it wasn't core storage
space that was the concern but card space. Any trick in the book
to squeeze the whole record onto a single 80-column image was a
good thing.

> and _if_ they had wanted to save memory, they would have stored a date in
> _much_ less space than they would require for 6 characters or even 6 BCD
> digits. think about it: with millions if not billions of records, if you
> could use a date encoding that spanned 179 years in 16 bits (or 716 years

The only problem with this theory is that the binary arithmetic
instructions were an extra cost option; decimal arithmetic was
standard. That and the fact that the 360 provided a "sprintf"
instruction, which only handled decimal-coded numbers.

Don't forget that Alan Turing worked very hard to convince people
that binary was the way to go, but most early computers (up to
some time in the late '50s or early '60s) were decimal at heart.

- David, who is mentally quoting Santayana

Mike McDonald

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <6jvekh$s8c$1...@eve.enteract.com>,

Not if he picked two digit ones, it wouldn't. B&D doesn't force the
implementer to make the "right" choices, it just makes it hard for his "users"
to get around his decision, good or bad.

Mike McDonald
mik...@mikemac.com


Mike McDonald

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

In article <u3ee4wwp3.fsf_-_@davidf_nt.mks.com>,
dav...@mks.com (David J. Fiander) writes:

> Erik Naggum <c...@naggum.no> writes:
>> this is historically false. none of these "space-saving" arguments are
>> actually supported by evidence from the time they were supposedly made.
>> these arguments are only made up now that we face the problem, but they
>> are incredibly convenient arguments: "it was somebody else's faulty
>> assumptions a long time ago", but the danger in believing these false
>> reasons is that we won't learn anything and won't change our ways.
>
> I do, however, like the suggestion that it wasn't core storage
> space that was the concern but card space. Any trick in the book
> to squeeze the whole record onto a single 80-column image was a
> good thing.

In lots of cases, it was hard disk sectors that were the important size.
Going over a sector size would often incur a full sector penalty. If you could
fit your record in one 128-byte sector, great! If you end up with 129 bytes of
data, it costs you two sectors.

Mike McDonald
mik...@mikemac.com


Nate Holloway

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

As mentioned about a month ago in this group, the checking for access to
data members is done only at compile time in C++: the run-time model is just
a struct offset. Therefore you can change the prototype for the
original class, making the private data members you need to access
protected instead. (You could also make them public, but if you don't
need to, don't.)

This introduces a second hiding problem: hiding your changes to the
headers, so people don't jump down your throat or mistakenly think that
they are the "real" access categories.

So Erik Naggum would be right that C++ encourages liars and thieves ;-)

p.s. this technique will not work if your only access is to precompiled
headers -- I do not know if anybody is in this situation, but it seems
possible if current trends in "hiding" continue. FWIW

--
f l u i d e s i g n
www.fluidesign.com

Nate Holloway, Technical director
natedogg at fluidesign dot com

David Hanley

unread,
May 20, 1998, 3:00:00 AM5/20/98
to

Mike McDonald <mik...@mikemac.com> wrote:
> In article <6jvekh$s8c$1...@eve.enteract.com>,
> David Hanley <ma...@enteract.com> writes:
> > Mike McDonald <mik...@mikemac.com> wrote:
> >> I didn't mean for it to become a general discussion of Y2K problems and
> >> their cultural roots. I was just trying to point out that even competent
> >> people can make what later seem like mistakes given the constraints and
> >> cultural biases at the time. B&D would not have helped avoid this "problem" at
> >> all.
> >
> > I must disagree. B&D by the person managing dates may have
> > enforced four-digit dates.
> >
> > dave

> Not if he picked two digit ones, it wouldn't.

Right. But see what I said in my above posting? "four-digit
dates"

> B&D doesn't force the
> implementer to make the "right" choices, it just makes it hard for his "users"
> to get around his decision, good or bad.

It also preserves flexibility. Let's say the class designer did
pick two-digit dates. Mr/Ms designer later realizes the Y2K problem is
nigh, and revisits the classes. By updating the class code, the
Y2K problem is circumvented via one of the usual means.

But, let's suppose Joe User has gone and noticed the Y2K problem
all by his lonesome. He tampered with private data and "fixed" the problem
in his own code. However, his fixes are incompatible with the class
designer's fixes. Joe can't use the new classes. Worse yet, Joe is
expected to merge his code soon with the rest of the development team's
code, and they all used the proffered Date code.

Modifying what was intended to be private is sometimes necessary,
but it is almost never the best way, nor even a good idea. Ideally it
should be possible, but in such a manner as to make a "big thumb" stick
up, so others will be aware of what is going on.

dave

> Mike McDonald
> mik...@mikemac.com


Patrick Wray

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

Mike McDonald wrote in message ...

>In article <6jtm7e$gpj$1...@reader1.reader.news.ozemail.net>,
> "Patrick Wray" <pw...@ozemail.com.au> writes:
>
>> There's nothing to stop you inheriting from the class and writing your own
>> access interface.
>
> Will you show me how to do that in C++? How do I subclass a class and get
>access to its private data?
>
> Mike McDonald


No, I see where you're coming from now. Hopefully the designer of the class
will have used the "protected" access specifier rather than "private", but
you're at the mercy of their decisions.

I should also mention that I am a total stranger to CLOS, but after having a
look at some basic Lisp, I can hardly believe I've ignored it for so long.
Can anyone tell me: does Lisp still enjoy its former status as THE language
for AI, or has it been usurped by a newcomer?

Patrick.


Dr Richard A. O'Keefe

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

Aaron Gross wrote:
> Whatever happened to letting computers do the work? I don't think it's
> too much to ask that a check dated 20.5.98 be stored in the computer
> with the date 1998-05-20. If computer programmers were all
> super-competent, would you still insist that we *always* write dates
> with four-digit years?

Having lived in New Zealand, Scotland, California, and Australia,
I have come to completely distrust *any* all-numeric dates other
than ISO 8601 ones. When writing dates by hand, I always write
dd-MON-yyyy with the month as three letters. I _know_ this is May.
To find out what month number it is, I have to count. Even when a
printed form has boxes for two digits, I always write the month as
three letters because that's the only way I can be sure I'll get it
right. There are probably data entry operators cursing me in much
of the English-speaking world. If computer programmers were all
super-competent (and being one, I know they aren't) I would _still_
have switched to 4-May-2006 format for dates because a super-
competent programmer *from another culture* (can you say 'cheap
off-shore coding'? I knew you could) is pretty much guaranteed to
misunderstand what _I_ mean by 4/5/6. You have to assume
super-competent programmers from the same culture as their
super-competent users.

To a large extent the US has been insulated from some of these
concerns because it has a large internal market which is pretty
culturally uniform. (Yes, I know USAns can tell each other
apart. Thing is, _computers_ in the USA don't have to worry
whether the user is a Hoosier or a Texan.) Erik Naggum's insistence
on unambiguous dates may be easier to appreciate when you realise
that he comes from a small country which _doesn't_ follow US
cultural conventions, and probably sees dates in at least three
different confusable formats every day.

The only reasonable thing for a computer to do when given a
possibly ambiguous date (or any other possibly ambiguous
information) is to show the user its preferred interpretation
and ask for confirmation. Presumably super-competent programmers
already know this.

The problem with 2-digit dates is that information that _wasn't_
ambiguous at the time it was entered will _become_ ambiguous,
and inferring the disambiguating context won't always be easy.
The Y2K problem is not the only situation where this happens;
consider for example medical records where what was believed to
be a single disease at the time the entry was added is later
resolved into two. That's an even harder case when it will
probably _never_ be possible to disambiguate the old codes; one
reason why some of the medical codes in use are hierarchical.

Aaron Gross

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

I got an e-mail message saying that this subject is inappropriate to
comp.lang.lisp, and I think that's reasonable. Therefore I am
responding by e-mail to (some of) those who followed-up my post. I
won't post any more on this topic to this newsgroup.

Just a word in my defense though: He started it! I like reading Erik
Naggum's rants, and I only decided to respond to his off-topic
four-digit year rant after I'd read it in at least two or three posts
here.

Scott L. Burson

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

Erik Naggum wrote:
>
> * Mike McDonald
> | The constraint that Y2K programs fell under was that disk space was
> | expensive and extremely limited.
>
> and _if_ they had wanted to save memory, they would have stored a date in
> _much_ less space than they would require for 6 characters or even 6 BCD
> digits. think about it: with millions if not billions of records, if you
> could use a date encoding that spanned 179 years in 16 bits (or 716 years
> in 18 bits) simply by counting the number of days since some "epoch", you
> would save yourself millions if not billions of bytes or half-words, at
> the cost of a table lookup and some arithmetic when reading and printing
> dates, but also with the gain of simplifying a lot of other calculations
> with dates. you can bet hard cash that _somebody_ would have thought of
> this if "saving space" was as important to them as people would have it
> these days in order to "explain" this problem away in a way that does not
> affect them.

But CPU cycles and main memory were as precious as disk space. Why do
you think they stored so much numeric data in BCD? It's because they
printed numbers more frequently than they did arithmetic on them, and it
was not worth it for them to pay the price of binary-to-decimal
conversion in order to print them out (whether that price was primarily
time, or involved a space cost for lookup tables of some sort).

Remember that these people were processing large volumes of data on
machines some of which would have made a Z-80 look powerful.

It is conceivable that some clever designer could have come up with a
date representation that would have been cheap enough to store and
convert, though I don't know what that would have been. Supposing it
existed, though, the only way to get it universally used (or nearly)
would have been to make it a primitive type in COBOL. And if it wasn't
used universally, it would hardly have been used at all, because people
need to be able to exchange data.

I'm not saying that more advance thought about the problem might not
have been beneficial. But I don't think it's fair to portray the
decision programmers faced then as trivial, and the choice they made as
completely idiotic.

-- Scott

* * * * *

To use the email address, remove all occurrences of the letter "q".

Erik Naggum

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

* Scott L. Burson

| Why do you think they stored so much numeric data in BCD? It's because
| they printed numbers more frequently than they did arithmetic on them,
| and it was not worth it for them to pay the price of binary-to-decimal
| conversion in order to print them out (whether that price was primarily
| time, or involved a space cost for lookup tables of some sort).

um, no, they stored numeric data in BCD because the earlier computers
were designed to use decimal arithmetic, and when they moved to binary,
they _still_ had BCD arithmetic hardware. binary didn't have much to
offer these people. however, BCD vanished. now, why was _that_?

| Remember that these people were processing large volumes of data on
| machines some of which would have made a Z-80 look powerful.

yet there was support for BCD arithmetic in the Zilog Z80, too.

| Supposing it existed, though, the only way to get it universally used (or
| nearly) would have been to make it a primitive type in COBOL. And if it
| wasn't used universally, it would hardly have been used at all, because
| people need to be able to exchange data.

precisely, and the lack of a sane DATE type in COBOL is supposedly to
blame for 95% of the Y2K problem. go figure...

| I'm not saying that more advance thought about the problem might not have
| been beneficial. But I don't think it's fair to portray the decision
| programmers faced then as trivial, and the choice they made as completely
| idiotic.

nonono, I don't think the decision they made was idiotic. I think they
did not make a decision -- it was nowhere near anybody's conscious mind
to regard century boundaries in the middle of the century -- everybody in
question were born in this century at the time, and most of them would die
in this century, so who _could_ have cared? I think it is foolish to
impute decisions to them 30-40-50 years after the fact, the anachronistic
speculations are just that. Aaron Gross points out (in mail) that none
of these statements about the purported decisions are actually backed up
by actual quotes from people who supposedly made them. that's enough to
disregard them, IMNSHO.

#:Erik

Kai Grossjohann

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

>>>>> mik...@mikemac.com (Mike McDonald) writes:

> Will you show me how to do that in C++? How do I subclass a class
> and get access to its private data?

,-----
| #define private public
| #include <foo.h>
`-----

Shudder.
kai
--
A large number of young women don't trust men with beards.
(BFBS Radio)

Bob Hutchison

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

Erik Naggum <c...@naggum.no> wrote:

>* Aaron Gross
>| What if there were no computers? Would you still insist that we not
>| abbreviate years?
>
> yes, of course. that is precisely my point.
>
> let me turn your question around and ask what is the problem with writing
> a year in full? why do you _have_ to abbreviate it?
>

>#:Erik

For the same reason that you don't capitalise the first word of your
sentences? :-)

What I find amazing is the continued use of this particular abbreviation
well past the time the problems with it were well known. One of my very
first lectures in CS in 1976 used the y2k problem as a cautionary tale.
Apparently there had been a number of conferences concerning this
problem before 1976. I remember the lecturer saying that the conclusion
of one such conference was that if every graduate of every university
(in the US? the world?) had a CS degree and they all worked on the y2k
problem exclusively until 2000 that there would still be software
remaining uncorrected. The situation now is a bit different than the
projections made before 1976 (e.g. PCs exist) but the seriousness of the
problem was understood. And still people continue using two digits --
I'll bet there are a good number of projects underway right now that do
this.

BTW, I've also seen a (signed) byte used to represent the year since
1900. Software using this technique has until 2028.

Cheers,
Bob
---
Bob Hutchison, hu...@RedRock.com, (416) 878-3454
RedRock, Toronto, Canada

Scott L. Burson

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

William Paul Vrotney wrote:
>
> Lisp was the language of choice for AI and may still be in the future.

For the AI project I'm working on, I am using Lisp. In this case I am
not bound by anything but technical considerations and my own
preference; on the one hand no one is pressuring me to use C++, but on
the other, if I felt that ML, Prolog, Mercury, Haskell, or any of the
other interesting languages that are around were superior for my
purposes, I would be using that instead of Lisp. While those languages
are all powerful in their way, none of them has the flexibility of Lisp.

(I am probably going to have to switch to Scheme for this project, by
the way, because I think I will need firstclass continuations. If there
were a Common Lisp to which firstclass continuations had been added, I
would happily use that, but I know of none.)

Scott L. Burson

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

Erik Naggum wrote:
>
> * Scott L. Burson
> | Why do you think they stored so much numeric data in BCD? It's because
> | they printed numbers more frequently than they did arithmetic on them,
> | and it was not worth it for them to pay the price of binary-to-decimal
> | conversion in order to print them out (whether that price was primarily
> | time, or involved a space cost for lookup tables of some sort).
>
> um, no, they stored numeric data in BCD because the earlier computers
> were designed to use decimal arithmetic, and when they moved to binary,
> they _still_ had BCD arithmetic hardware. binary didn't have much to
> offer these people. however, BCD vanished. now, why was _that_?

Actually, BCD hasn't vanished at all. There are still billions if not
trillions of lines of Cobol code that use it, with more being written
every day. (I'm in the US -- that's "billion" as in 1E9, "trillion" as
in 1E12.)

I think the reason I suggest is one of the reasons it has survived so
long -- the other being simple inertia.

> I don't think the decision they made was idiotic. I think they
> did not make a decision -- it was nowhere near anybody's conscious mind
> to regard century boundaries in the middle of the century -- everybody in
> question were born in this century at the time, an most of them would die
> in this century, so who _could_ have cared?

Point taken, but I am suggesting that even if a few of them did stop to
think about it, under the constraints they were under (which, as others
have pointed out, include card widths, sector sizes, etc.), they
might very well have decided to go with 2-digit dates anyway.

Nate Holloway

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

I needed to change the subject ;-)

Mike McDonald wrote:
> Now I want to address you're "just store the date as a binary count from
> some epoch" proposal. Given that you've seen so many people botch the patterns
> for dates, do you really think they'd be able to get the conversion from/to a
> binary stored date right?

I think part of this you've already answered. Handling abbreviated year
input in a general way (as some of those patterns are trying to do, I'm
sure) requires a lot of knowledge about context. But translating a
day-count from some epoch does not. It's essentially a copy and paste
business, although the table-lookup technique is going to be harder to
get right than the naive algebra.
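
For comparison, the naive algebra can even be delegated to the built-in
universal-time functions. A minimal, purely illustrative sketch (not taken
from any of the systems under discussion):

(defconstant +epoch+ (encode-universal-time 0 0 0 1 1 1900 0))
(defconstant +seconds-per-day+ (* 24 60 60))

(defun date->day-number (day month year)
  ;; Days since 1900-01-01 GMT; fits an unsigned 16-bit field until 2079.
  (floor (- (encode-universal-time 0 0 0 day month year 0) +epoch+)
         +seconds-per-day+))

(defun day-number->date (n)
  ;; Return (values day month year) for a day count N since 1900-01-01.
  (multiple-value-bind (sec min hour day month year)
      (decode-universal-time (+ +epoch+ (* n +seconds-per-day+)) 0)
    (declare (ignore sec min hour))
    (values day month year)))

;; (date->day-number 20 5 1998)  => 35933, comfortably inside 16 bits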

> Naggum wrote:
> > 16 bits would _still_ be enough for 80 more years if they had really
> > _wanted_ to save space and started counting days from 1900, or 179 years
> > from whenever they started counting -- surely a flight control system
> > wouldn't live for 179 years.

This is not the best example; one might imagine that a flight-control
system might have to manage units of time shorter than a day. =p

> All you've done now is to change the Y2K problem into a Y2037 problem.

Look again; if you count the way I'm counting, it gets you to 2079 if
you start at 1900.

> (Isn't that the year that Unix has its problem, since time was stored as a
> 32bit count of seconds since the beginning of 1970.

Yes, with a signed long in seconds from epoch 1970 the limits are
1902 < y < 2038. A stickler might say that's only 31 bits worth of
seconds since 1970.

This actually raises an interesting point, and one that might possibly
be twisted to sort of be on topic for this ng. Everybody knows about the
popular amnesia in this industry, of how better operating systems have
been supplanted by less elegant ones, and ideas keep getting reinvented
that existed in real systems when LBJ was president.

Is the Lisp machine Y2K-capable? I know that when you boot one the FEP
asks for a date and time, but I must admit ignorance about the FEP
internals so I do not know whether it would work or not.

ITS? I know that there is an operational ITS site in Norway; I don't
suppose Erik has heard anything about this?

Multics?

Georg Bauer

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

In article <35647B75...@zeta-sqoft.com>, "Scott L. Burson"
<Gy...@zeta-sqoft.com> wrote:

>(I am probably going to have to switch to Scheme for this project, by
>the way, because I think I will need firstclass continuations. If there
>were a Common Lisp to which firstclass continuations had been added, I
>would happily use that, but I know of none.)

You are aware of Paul Graham's "On Lisp"? Pages 266ff show an
implementation of continuations in Common Lisp. The book is worth reading
for other things, too - a very instructive collection of techniques,
samples and interesting stuff.

bye, Georg

--
#!/usr/bin/perl -0777pi
s/TS:.*?\0/$_=$&;y,a-z, ,;s, $,true,gm;s, 512,2048,;$_/es

Scott L. Burson

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

Georg Bauer wrote:
>
> In article <35647B75...@zeta-sqoft.com>, "Scott L. Burson"
> <Gy...@zeta-sqoft.com> wrote:
>
> You are aware of Paul Grahams "On Lisp"? Pages 266ff show an
> implementation of continuations in Common Lisp. The book is worth reading
> for other things, too - a very instructive collection of techniques,
> samples and interesting stuff.

No, I haven't seen that. Can you summarize his technique for me
briefly? I don't know any way to do it short of writing one's own
interpreter, with a Rabbit-style compiler to go with it (an option I am
also seriously considering, BTW). I've studied implementation
strategies for firstclass continuations at some length, so I don't need
background information, just the essence of how he's going about it.

Scott L. Burson

unread,
May 21, 1998, 3:00:00 AM5/21/98
to

David B. Lamkins wrote:
>
> See http://www.eecs.harvard.edu/onlisp/ . There's a link to the code.
> The code itself is just eight forms beginning with (setq *cont*
> #'identity) , then there's an example.

Okay, I see what he's doing.

This is adequate for explanation purposes but not for actual use,
because the continuations never return. So in CL, the stack will just
continue to grow unboundedly during the computation. This wouldn't
happen in Scheme because all the invocations of the continuation occur
in tail position (at least, that seems to be his intention).

Christopher B. Browne

unread,
May 22, 1998, 3:00:00 AM5/22/98
to

On 21 May 1998 09:55:45 +0000, Erik Naggum <c...@naggum.no> posted:

>* Scott L. Burson
>| Why do you think they stored so much numeric data in BCD? It's because
>| they printed numbers more frequently than they did arithmetic on them,
>| and it was not worth it for them to pay the price of binary-to-decimal
>| conversion in order to print them out (whether that price was primarily
>| time, or involved a space cost for lookup tables of some sort).
>
> um, no, they stored numeric data in BCD because the earlier computers
> were designed to use decimal arithmetic, and when they moved to binary,
> they _still_ had BCD arithmetic hardware. binary didn't have much to
> offer these people. however, BCD vanished. now, why was _that_?

I'd suggest that it disappeared for three reasons:

a) With cheapening disk/memory, there was little incentive not to store
"decimal" information as plain ASCII text. This allows one to view files and
read them straight off...

b) Microprocessors didn't get heavily utilized for COBOL; C and Pascal were
used instead, and both encouraged the use of ASCII.

c) Floating point got popular instead, as popularized by spreadsheets, the
"canonical" microcomputer application.

>| Remember that these people were processing large volumes of data on
>| machines some of which would have made a Z-80 look powerful.
>
> yet there was support for BCD arithmetic in the Zilog Z80, too.

The Zilog was one of the first CPUs to try to go after the "general
computing" market in the realm of microcomputers. They probably presumed
that BCD would be important. Spreadsheets *didn't* use BCD, but rather FP,
which resulted in the declining popularity of BCD.

Note that the infamous "Pentium FDIV" bug is the sort of thing that shouldn't
hit a BCD "register;" there are *many* bugs that FP math encourages that BCD
does not. (And no doubt we "really" should all be using some combination of
BIGNUMs, rationals, or perhaps even continued fractions, all of which have
entertainingly different properties that make them react to inaccuracy in
different ways...)

>| Supposing it existed, though, the only way to get it universally used (or
>| nearly) would have been to make it a primitive type in COBOL. And if it
>| wasn't used universally, it would hardly have been used at all, because
>| people need to be able to exchange data.
>
> precisely, and the lack of a sane DATE type in COBOL is supposedly to
> blame for 95% of the Y2K problem. go figure...

Floating point arithmetic has an equal lack of "sanity" if you're deploying
financial applications; COBOL's use of BCD was not a "stupid" move...

>| I'm not saying that more advance thought about the problem might not have
>| been beneficial. But I don't think it's fair to portray the decision
>| programmers faced then as trivial, and the choice they made as completely
>| idiotic.
>

> nonono, I don't think the decision they made was idiotic. I think they
> did not make a decision -- it was nowhere near anybody's conscious mind
> to regard century boundaries in the middle of the century -- everybody in
> question were born in this century at the time, and most of them would die
> in this century, so who _could_ have cared? I think it is foolish to
> impute decisions to them 30-40-50 years after the fact, the anachronistic
> speculations are just that. Aaron Gross points out (in mail) that none
> of these statements about the purported decisions are actually backed up
> by actual quotes from people who supposedly made them. that's enough to
> disregard them, IMNSHO.

If the problem won't bite anyone until you've been retired for 30 years,
then it's not really your problem, is it? :-)

Many of the systems were designed at the very *beginning* of "modern data
processing." They obviously hadn't been bitten by century problems yet; that
was, indeed 30 or more years off. It's not remarkable that the early
applications had some "blemishes" from a more modern perspective.

I suspect, though, that it's not quite so obvious that they were all being
perfectly "competent." Insurance companies most definitely had at least one
century problem to deal with in the 1960s, as they doubtless had many
thousands of customers with birthdates in the 1800's, and furthermore look
forward far enough in their actuarial analyses to thus hit both century
marks. Apparently developers didn't learn enough from their lessons...

--
Those who do not understand Unix are condemned to reinvent it, poorly.
-- Henry Spencer <http://www.hex.net/~cbbrowne/lsf.html>
cbbr...@hex.net - "What have you contributed to Linux today?..."

David B. Lamkins

unread,
May 22, 1998, 3:00:00 AM5/22/98
to

In article <3564DB8F...@zeta-sqoft.com>, "Scott L. Burson"
<Gy...@zeta-sqoft.com> wrote:

>Georg Bauer wrote:
>>
>> In article <35647B75...@zeta-sqoft.com>, "Scott L. Burson"
>> <Gy...@zeta-sqoft.com> wrote:
>>
>> You are aware of Paul Grahams "On Lisp"? Pages 266ff show an
>> implementation of continuations in Common Lisp. The book is worth reading
>> for other things, too - a very instructive collection of techniques,
>> samples and interesting stuff.
>
>No, I haven't seen that. Can you summarize his technique for me
>briefly?

See http://www.eecs.harvard.edu/onlisp/ . There's a link to the code.

The code itself is just eight forms beginning with (setq *cont*
#'identity) , then there's an example.

You'll probably want to at least look at the book to help you grok what
Graham is doing...

--
David B. Lamkins <http://www.teleport.com/~dlamkins/>

Georg Bauer

unread,
May 22, 1998, 3:00:00 AM5/22/98
to

In article <3564DB8F...@zeta-sqoft.com>, "Scott L. Burson"
<Gy...@zeta-sqoft.com> wrote:

>No, I haven't seen that. Can you summarize his technique for me
>briefly?

Since he wrote only some short macros, I will quote them:

(setq *cont* #'identity)

(defmacro =lambda (parms &body body)
  `#'(lambda (*cont* ,@parms) ,@body))

(defmacro =defun (name parms &body body)
  (let ((f (intern (concatenate 'string "=" (symbol-name name)))))
    `(progn
       (defmacro ,name ,parms
         `(,',f *cont* ,,@parms))
       (defun ,f (*cont* ,@parms) ,@body))))

(defmacro =bind (parms expr &body body)
  `(let ((*cont* #'(lambda ,parms ,@body))) ,expr))

(defmacro =values (&rest retvals)
  `(funcall *cont* ,@retvals))

(defmacro =funcall (fn &rest args)
  `(funcall ,fn *cont* ,@args))

(defmacro =apply (fn &rest args)
  `(apply ,fn *cont* ,@args))

Sources (c) Paul Graham, Typoes (c) Georg Bauer
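
To give a feel for how they hang together, a small usage example in the
spirit of the book's own (the function names here are made up):

(=defun message ()
  (=values 'hello 'there))

(=defun baz ()
  (=bind (m n) (message)     ; the body of =BIND becomes the continuation
    (=values (list m n))))

;; At top level *cont* is #'identity, so the value simply comes back:
;; (baz) => (HELLO THERE)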

But you should most definitely get the book itself. Actually everybody
programming in Common Lisp should.

BTW: depending on the goal you have with this, there might be easier ways
to do it - maybe you should tell a bit more about what you want to create.
As was correctly pointed out to me by Christopher Oliver, if you want
continuations to do backtracking, you might be far better off using
Screamer.

Scott L. Burson

unread,
May 22, 1998, 3:00:00 AM5/22/98
to

Georg Bauer wrote:
> As was pointed out to me correctly by Christopher Oliver, if you want
> continuations to do backtracking, you might be far easier away with using
> Screamer.

What I'm doing is related to that, but not the same.

Thanks for the suggestions, though.

Barry Margolin

unread,
May 22, 1998, 3:00:00 AM5/22/98
to

In article <3564B...@illegal.domain>, Nate Holloway <th...@good.one> wrote:
>ITS? I know that there is an operated ITS site in Norway, I don't
>suppose Erik has heard anything about this?

The ITS-LOVERS list came alive earlier this week when someone posted a
message asking about Y2K conformance.

>Multics?

Sigh....

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Cambridge, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.

Rob Warnock

unread,
May 23, 1998, 3:00:00 AM5/23/98
to

Bob Hutchison <hu...@RedRock.com> wrote:
+---------------

| What I find amazing is the continued use of this particular abbreviation
| well past the time the problems with it were well known...
+---------------

Yeah, tell me about it! I just got my California driver's license renewed,
and guess what? The new expiration date is... (*drum roll*)... "05-23-03".
How's *that* for ambiguity??!? ;-}


-Rob

p.s. IMHO, it *should* have read "2003-05-23". But no.

-----
Rob Warnock, 7L-551 rp...@sgi.com http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
2011 N. Shoreline Blvd. FAX: 650-933-4392
Mountain View, CA 94043 PP-ASEL-IA

Barry Margolin

unread,
May 23, 1998, 3:00:00 AM5/23/98
to

In article <6k5k74$pp...@fido.engr.sgi.com>,

Rob Warnock <rp...@rigden.engr.sgi.com> wrote:
>Yeah, tell me about it! I just got my California driver's license renewed,
>and guess what? The new expiration date is... (*drum roll*)... "05-23-03".
>How's *that* for ambiguity??!? ;-}

I don't find it ambiguous at all. I know that in the US, dates are almost
always mm/dd/year or mm-dd-year (where year may be either 2 or 4 digits).
Occasionally you see yyyy-mm-dd (especially on computers, where it makes it
easy to use a lexicographic sort to order by date), but since this format
almost always uses 4-digit years, it would not have entered my mind that it
might be yy-mm-dd if you hadn't brought it up in the context of a Y2K
discussion. Perhaps you wouldn't have noticed it on your license if we
hadn't been carrying on this conversation this week (but maybe you're
involved in Y2K work, so you're already sensitized to it).

Martin Rodgers

unread,
May 23, 1998, 3:00:00 AM5/23/98
to

In article <6k5k74$pp...@fido.engr.sgi.com>, rp...@rigden.engr.sgi.com
says...

> Yeah, tell me about it! I just got my California driver's license renewed,
> and guess what? The new expiration date is... (*drum roll*)... "05-23-03".
> How's *that* for ambiguity??!? ;-}

You can find loads of problems like this in comp.risks (Digests for the
RISKS list). Not all data format problems are Y2K related, but you'll not
be too surprised that they seem to be becoming more popular. If "popular"
is the right word...

There are a number of newsgroups for discussing Y2K problems, and a fair
number of pages on the web. Just in case you wanted to know more. ;)
--
Please note: my email address is munged; You can never browse enough
"Oh knackers!" - Mark Radcliffe

Christopher Stacy

unread,
May 29, 1998, 3:00:00 AM5/29/98
to

Fiander> Any trick in the book to squeeze the whole record onto a
Fiander> single 80-column image was a good thing.

Besides two-digit years, I can think of another common space-saving hack.
Signed numeric fields rendered as alphanumeric codes (EBCDIC or ASCII)
have the last digit interpreted specially. The digit is actually a
character that indicates the sign of the entire field.
The table is like 'A' = +1 ... 'I' = +9, 'J' = -1, etc.
This is called an "overpunch".

Today in the latter half of 1998, this is still how telephone companies
exchange data (ie., billing and subscription) with each other.

The record sizes are like 175 rather than 80, nowadays, though.
I'll bet that number 175 in turn comes from something having to
do with the size of magnetic tape physical records.

The Y2K discussion here reminded me of Lisp code that I wrote that
processes such data, in case you were tempted to think that this
was truly off-topic!
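
For the curious, a minimal sketch of decoding such a field -- not the code
alluded to above; the letter codes follow the table given, while the '{'
and '}' codes for +0 and -0 are an assumption about the fuller convention:

(defun decode-overpunch (field)
  ;; FIELD is a digit string whose last character carries both the final
  ;; digit and the sign of the whole number.
  (let* ((n (length field))
         (c (char-upcase (char field (1- n))))
         (lead (if (> n 1) (parse-integer field :end (1- n)) 0)))
    (multiple-value-bind (digit sign)
        (cond ((char<= #\A c #\I) (values (1+ (- (char-code c) (char-code #\A))) +1))
              ((char<= #\J c #\R) (values (1+ (- (char-code c) (char-code #\J))) -1))
              ((char= c #\{) (values 0 +1))   ; assumed code for +0
              ((char= c #\}) (values 0 -1))   ; assumed code for -0
              (t (error "~A is not an overpunch character" c)))
      (* sign (+ (* 10 lead) digit)))))

;; (decode-overpunch "12345C") => 123453
;; (decode-overpunch "0012J")  => -121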

Christopher Stacy

unread,
May 29, 1998, 3:00:00 AM5/29/98
to

Why is everyone talking about "they" who made these decisions?
I can't be the only one around here who was programming back in the early 1970s.
I am sure there are people much older (I was just a kid at the time).

Speculation is not necessary. It's really true that conscious decisions
were made, and the reason was to save space on punched cards, magnetic
tape files, and disk files. At least, that's what I was taught back in
those days, before people were thinking about Y2K. Typical programs
processed data on multiple tape drives (eg. most inputs and outputs were tape).

I don't think disk space became fantastically cheaper really until the mid-1980s.

Mike McDonald

unread,
May 29, 1998, 3:00:00 AM5/29/98
to

In article <ubtshy...@pilgrim.com>,

Christopher Stacy <cst...@pilgrim.com> writes:
> Why is everyone talking about "they" who made these decisions?
> I can't be the only one around here who was programming back in the early 1970s.
> I am sure there are people much older (I was just a kid at the time).

Well, I've already confessed my sins. :-)

> Speculation is not necessary. It's really true that conscious decisions
> were made, and the reason was to save space on punched cards, magnetic
> tape files, and disk files. At least, that's what I was taught back in
> those days, before people were thinking about Y2K. Typical programs
> processed data on multiple tape drives (eg. most inputs and outputs were tape).
>
> I don't think disk space became fantastically cheaper really until the mid-1980s.

"Back in those days" wasn't that long ago. I use Quicken to keep track of my
expenses. Since it's the only Windoze program I use, I was considering writing
my own program under Linux. But I wanted to import all of my existing data. So
I was deciphering the format of the Quicken files. Quicken 2 had two-digit
years in it! That wasn't written until after, what? 1990? I have no idea if
the newer versions have that fixed or not. I'd hope so! I guess I'll find out
in 19 months! (Oh, just to barely stay on topic for the group, I was
thinking of writing my version in Lisp, but the lack of a widget set deterred
me.)

Mike McDonald
mik...@mikemac.com

