
In praise of Java.


Christian Lynbech

Nov 30, 2001, 12:29:06 PM

Sometimes people get the impression that we lisp programmers are a
stubborn and arrogant bunch of zealots, unable to ever see the
advantages of other approaches.

To counter this image, I would like to draw attention to an often
overlooked fact about Java, one that in my view deserves a lot more good
publicity, and goes to prove that Java is just bad.

We know from Sun marketing that the number of Java programmers keeps
going up, and we can also observe that other programming language
communities are declining quite as rapidly. As observed recently
elsewhere[1], this must be related to the fact that Java attracts new
people to programming.

In other words:

The rise of Java helps turn inferior programmers into average programmers.

As the number of Java programmers goes up, the capabilities of the
average programmer will approach that of the Java programmers.

This is of course a good thing, from a social and solidaristic point of
view. It would have been selfish if lisp programmers were to claim the
benefits of belonging to the programming community exclusively for
themselves.

[1] Darn, article <a6789134.01113...@posting.google.com>
kind of makes the same observation. I am trying to hide the fact
that I have been late in bringing the insight out in public by
posting to a new thread.


------------------------+-----------------------------------------------------
Christian Lynbech | Ericsson Telebit, Skanderborgvej 232, DK-8260 Viby J
Phone: +45 8938 5244 | email: christia...@ted.ericsson.dk
Fax: +45 8938 5101 | web: www.ericsson.com
------------------------+-----------------------------------------------------
Hit the philistines three times over the head with the Elisp reference manual.
- pet...@hal.com (Michael A. Petonic)

Martin Cracauer

Nov 30, 2001, 4:46:46 PM
Christian Lynbech <christia...@ted.ericsson.dk> writes:

>As the number of Java programmers goes up, the capabilities of the
>average programmer will approach that of the Java programmers.

And even for those who don't improve their programming, of course I
prefer that they write in Java rather than in C or C++. If only Java
also had integer overflow detection...
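That wish was later granted in part: Java 8 (2014, long after this post) added `Math.addExact` and related methods, which throw `ArithmeticException` on overflow instead of wrapping silently. A minimal sketch, purely illustrative (the class name is invented):

```java
// Sketch only: Java 8 added opt-in overflow detection via
// Math.addExact/subtractExact/multiplyExact, which throw
// ArithmeticException instead of wrapping silently like plain +.
public class OverflowDemo {
    public static void main(String[] args) {
        int wrapped = Integer.MAX_VALUE + 1;               // plain + wraps silently
        System.out.println(wrapped == Integer.MIN_VALUE);  // prints: true

        boolean detected = false;
        try {
            Math.addExact(Integer.MAX_VALUE, 1);           // throws on overflow
        } catch (ArithmeticException e) {
            detected = true;
        }
        System.out.println(detected);                      // prints: true
    }
}
```

Plain arithmetic still wraps, so detection remains opt-in rather than the default Cracauer was asking for.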

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <crac...@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/

Andrzej Lewandowski

Nov 30, 2001, 6:43:20 PM
On Fri, 30 Nov 2001 18:29:06 +0100, Christian Lynbech
<christia...@ted.ericsson.dk> wrote:

>
>Sometimes people get the impression that we lisp programmers are a
>stubborn and arrogant bunch of zealots, unable to ever see the
>advantages of other approaches.
>

This is exactly my impression after watching some discussions here
and participating in others.

A.L.

Erik Naggum

Nov 30, 2001, 8:05:56 PM
* Christian Lynbech

> Sometimes people get the impression that we lisp programmers are a
> stubborn and arrogant bunch of zealots, unable to ever see the
> advantages of other approaches.

* Andrzej Lewandowski


| This is exactly my impression after watching some discussions here
| and participating in other.

If your general outlook on things is that once you have found the Right
Thing, you can stop looking, you will also see others in the same light,
and you will not change your mind when you get more information, either.

If your general outlook on things is that once you have found something
to be Wrong, it is important to avoid repeating it, you will tire quickly
of people who repeat old errors in new wrappings.

If you are fairly inexperienced and do not know much of what has already
proved to be bad ideas, _and_ you are of the "One Right Thing" type, it
will take an inordinate amount of intellectual effort to realize that you
are watching the consequences of the "Avoid Repeating the Wrong Things"
type unfold themselves if you offer up a "new" idea that is anything but.

Overall, Lisp programmers are very experienced, know a dozen languages
well enough to think in them, have been through their rise and fall and
their accumulation of crud and their fragmentation into more communities
because they went wrong at a crucial step by adopting some really stupid
idea too early because they wanted to be "competitive" and then were
unable to back out of it, and so do not consider "new" ideas that were
shown to be wrong years ago to be particularly fascinating reading.

We do not know yet whether Common Lisp has made that kind of mistake, but
some people are forever obsessing about little details that they want to
re-open and raise awareness about that really should be left alone and
just be relegated to the same unquestioned "this is how we do it" status
that _successful_ elements of a culture have. Yet some people keep
thinking that there is value in discussing things that have been
defeated hundreds of times before, and refuse to consult the literature
because they cannot fathom that languages with longer histories than the
Internet (i.e., WWW, in their view, since the Internet really started
pre-1970) even exist.

In my view, Common Lisp made very few mistakes that should bar future
progress from occurring within the language, but it takes serious effort
for some people to come to terms with the choices that have been made and
especially that they are _not_ fundamental, which they would have been in
most other languages. One of the things that are so great about Common
Lisp is that serious changes may be made which can be implemented in some
inefficient way in the existing language with existing compilers and then
inspire compiler or other system changes if they seem worthwhile. In
particular, the lack of "reserved words" and "operators" with special
syntactic status makes it possible to design the next Common Lisp within
Common Lisp just by changing the package name. Since optimizing the hell
out of everything is no longer necessary on modern computers, serious
experimentation can be conducted non-intrusively. Many people, even some
who claim to be Lisp programmers, have not really understood this and go
the way of much lesser languages and create their own compilers and all
the development system and everything from scratch. Since it is so easy
to create something new, the ability to work on something privately until
it gets to a _realistic_ level is so good that it is important to keep
the stupid and bad ideas from spreading too far when they are published.
In other languages, the sheer effort of building something is a very good
control mechanism that keeps bad ideas from spreading.

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.

Kenny Tilton

Nov 30, 2001, 10:52:48 PM

Christian Lynbech wrote:
>
> Sometimes people get the impression that we lisp programmers are a
> stubborn and arrogant bunch of zealots, unable to ever see the
> advantages of other approaches.

There are other approaches?

>
> We know from Sun marketing that the number of Java programmers keeps
> going up, and we can also observe that other programming language

> communities are declining quite as rapidly. ...this must be related to the fact that Java attracts new
> people to programming.

Quibble Alert: Hang on, if X and Y are changing "quite as" rapidly along
opposite vectors, their sum is constant. How do you get to (+ X Y) is
increasing? I would say Ys are migrating to X and the path of (+ X Y) is
more about overall demand for programming services, which probably is
independent of the tools its craftsmen like to employ.

>
> In other words:
>
> The rise of Java helps turn inferior programmers into average programmers.
>

Agreed. Air bags and seat belts make lousy drivers statistically safer.
But new languages are supposed to do that. That's the whole idea of
everything from 6502 Assembler to Prolog to Arc. So to give them credit
for, say, bringing GC to The Great Unwashed is faint praise--they were
supposed to do /something/ when they made a new language. And the GC
sucks.

> As the number of Java programmers goes up, the capabilities of the
> average programmer will approach that of the Java programmers.

yeah, but they'll be using /Java/. No macros, crappy performance, no MI.

This is like saying Windows is good because fewer people are using a
commandline, when in fact they could be using a Mac.

Now if Java had more parentheses....

<g>

Kenny
CliniSys

Christian Lynbech

Dec 1, 2001, 7:15:03 AM
>>>>> "Kenny" == Kenny Tilton <kti...@nyc.rr.com> writes:

Kenny> if X and Y are changing "quite as" rapidly along opposite
Kenny> vectors, their sum is constant. how do you get to (+ X Y) is
Kenny> increasing?

Assume a population of discrete elements, divided into disjoint sets
(of size) X and Y. If X increases by a significant amount x, either Y
must decrease by a similar amount or the population must increase.

There are obviously some assumptions here. Sun marketing has not been
very explicit on the nature of the inflow of Java programmers. They
have only stated that X is going up, not that the Y's are going
down.

A programmer may of course have *skills* belonging both to X and Y,
but even if equally skilled in either language, a particular amount of
*production* must necessarily fall in a particular category. It would
be close to dishonest if Sun marketing was only counting an inflow of
skill and not production.

Anyway, I was also assuming that we would have heard about it if C++ was
dying.

>>
>> In other words:
>>
>> The rise of Java helps turn inferior programmers into average programmers.
>>

Kenny> Agreed. Air bags and seat belts make lousy drivers statistically safer.

Just to make sure to hammer my point in: I was not claiming Java to
make inferior programmers less inferior, only that the sheer weight of
their numbers would change the average to approach that of their
inferiority.

PS

I should also admit that I wasn't totally serious in my first post :-)

Erik Naggum

Dec 1, 2001, 9:29:50 AM
* Christian Lynbech

| There are obviously some assumptions here. Sun marketing has not been
| very explicit on the nature of the inflow of Java programmers. They have
| only stated that X is going up, not that the Y's are going down.

There appear to me to be proportionally more female Java programmers
than in many other languages. The Lisp family also appears to attract more
female programmers than the C crowd ever has been able to. Perhaps Java
shares with Common Lisp a notion of higher usefulness without having to
deal with a lot of nerdy-macho "braggable stuff".

I may be biased, but I tend to find a much lower tendency among female
programmers to be dishonest about their skills, and thus do not say they
know C++ when they are smart enough to realize that that would be a lie
for all but perhaps 5 people on this planet.

Look at the people on USENET who are completely unafraid to demonstrate
their massive ignorance of any topic, who defend their idiotic remarks
rather than try to learn from criticism or counter-evidence or counter-
information. They are almost all male. How often do you find males who
think "I gotta know this stuff better before I post anything"? They,
like many females, do not post at all, just _listen_. If you give the
average young male programmer an impossible task, he will lie that he has
completed it and hope he has before he is discovered. That never happens
with female programmers, who seem to start off with the attitude of
mature and experienced male programmers. Catch a female programmer in
cheating and lying, and she is embarrassed and wants to fix it and make
amends. Catch a male programmer in cheating and lying, and he fights you
to salvage his pride, and definitely does not want to fix it or make
amends unless you threaten him. C++ is a language that encourages such
cheating and lying because you _have_ to tell your compiler something you
cannot possibly know, yet, such as the type of all your variables and
functions, and then you make up a reality that fits the lies. This does
not appear to appeal to many female programmers. Java is a _lot_ better
than C++ in this regard in that the amount of required cheating and lying
is dramatically reduced by the more flexible type system and the
existence of so much more stuff that dictates the types to use. (Common
Lisp is even better, of course, but it just not "there" when people look
for something to learn if they think they like programming computers.)

If my observations are more than just my probably biased impressions, I
say: leave the nerdy-macho languages to the young cave men, and let us
find ways to tell the rest of the potential programmers about programming
languages for thinking, honest people. In my view, Java is not horrible.
It can be quite good. I have seen fewer idiots use Java than use C++,
which seems like a veritable magnet on the far left side of the Gauss
curve, and the teaching material for Java is so much better than that for
C++. Perhaps Java can actually be a real programming language, and then
we can tell them about the benefits of a transition to Common Lisp, a
process which we know just blows the mind of those C++ cave men.

Besides, I would not mind seeing less of the "early man" attitude of
"first my pride, then the truth" which seems to come with the average
male programmer. Or perhaps it is just that only the really good female
programmers "make it" and that the desire to brag to hide one's ignorance
is _necessary_ in this cave-man-dominated culture. If so, perhaps the
only benefit of Java is that it reduces the need to brag so much because
the language is actually possible to master. Getting people to stop
bragging about their non-existing skills may in fact be a necessary step
to teach them Common Lisp, which appears to require more thought and less
"action" and has very little nerdy-macho appeal in obscure details with
which to impress other competitive young cave men.

Duane Rettig

Dec 1, 2001, 12:15:48 PM
Kenny Tilton <kti...@nyc.rr.com> writes:

> Christian Lynbech wrote:
> >
> > We know from Sun marketing that the number of Java programmers keeps
> > going up, and we can also observe that other programming language
> > communities are declining quite as rapidly. ...this must be related
> > to the fact that Java attracts new people to programming.
>
> Quibble Alert: Hang on, if X and Y are changing "quite as" rapidly along
> opposite vectors, their sum is constant. how do you get to (+ X Y) is
> increasing? I would say Ys are migrating to X and the path of (+ X Y) is
> more about overall demand for programming services, which probably is
> independent of the tools its craftsmen like to employ.

Quibble Quibble:

Christian's whole premise was built on the phrase "We know from
Sun marketing", which set the whole tone of the message (and which
he has later confirmed) as a joke.

In reality, the reason why Sun marketing might honestly think such
a foolish thing is the natural tendency to write off or ignore as
unimportant people who are not in the Java camp, just as we might
tend to write off or ignore people who are not in the Lisp camp.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Kenny Tilton

Dec 1, 2001, 1:18:39 PM

Duane Rettig wrote:
>
> Christian's whole premise was built on the phrase "We know from
> Sun marketing", which set the whole tone of the message (and which
> he has later confirmed) as a joke.

Oh, sorry. Did not pick that up. Kind of like "MicroSoft's popularity
with developers..."?

kenny
clinisys

Thomas Stegen CES2000

Dec 1, 2001, 8:23:53 PM
"Erik Naggum" <er...@naggum.net> wrote in message
news:32162057...@naggum.net...

[snip]


> C++ is a language that encourages such
> cheating and lying because you _have_ to tell your compiler something you
> cannot possibly know, yet, such as the type of all your variables and
> functions, and then you make up a reality that fits the lies.

Are you proposing to start writing the code before you know where
the program is headed? One should know all these things before
one starts writing code. Of course this is not easy, and almost
invariably will adjustments need to be made. But I don't believe
this is not the case in Lisp as well.

[snip]
--
Thomas.

"Mathematics is the language of nature."


Erik Naggum

Dec 1, 2001, 9:14:09 PM
* Erik Naggum

> C++ is a language that encourages such cheating and lying because you
> _have_ to tell your compiler something you cannot possibly know, yet,
> such as the type of all your variables and functions, and then you make
> up a reality that fits the lies.

* Thomas Stegen CES2000


| Are you proposing to start writing the code before you know where
| the program is headed?

No. How did you arrive at the rather peculiar position that I might?

Hint: Just because you equate certain things does not mean they are even
remotely connected, much less the same.

| One should know all these things before one starts writing code.

Wrong. How did you arrive at this rather peculiar value judgment?

Hint 1: You may know you need a number, but whether it is a floating
point number or an integer might not be known early. Hint 2: If you
think you need an integer, you probably do not know its range, yet.
Hint 3: If you need an integer without easy hardware representation,
getting a type with sufficient range might require special libraries that
impact a lot more than your specific algorithm, such as its interface to
its callers, which will also have to use this library.
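Erik's hints can be made concrete with a small Java sketch (class and method names are invented for illustration): committing to `int` too early produces silent overflow, and outgrowing the hardware range is not a local fix but a signature change that ripples out to every caller.

```java
import java.math.BigInteger;

// Illustration of the point above: in Java (as in C++) the numeric type
// is part of every signature, so outgrowing the hardware range forces
// an interface change on all callers.
public class RangeCommitment {
    // First guess: int. Silently overflows for n >= 13.
    static int factInt(int n) {
        int r = 1;
        for (int i = 2; i <= n; i++) r *= i;
        return r;
    }

    // Outgrowing int means a new return type -- every caller must change.
    static BigInteger factBig(int n) {
        BigInteger r = BigInteger.ONE;
        for (int i = 2; i <= n; i++) r = r.multiply(BigInteger.valueOf(i));
        return r;
    }

    public static void main(String[] args) {
        System.out.println(factInt(12));  // 479001600 (still fits in int)
        System.out.println(factInt(13));  // 1932053504 (wrapped; 13! is 6227020800)
        System.out.println(factBig(13));  // 6227020800
    }
}
```

In Common Lisp the same function simply returns a bignum when the fixnum range is exceeded, and no caller's declaration has to change.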

| Of course this is not easy, and almost invariably will adjustments need
| to be made.

You are making my point, now. If you did not make statements about types
prematurely, you could _narrow_ the type specification down as you went.
In Common Lisp, we have the universal supertype t atop the _hierarchy_ of
types, and many levels in the hierarchy, and we do not need to specify
the lowest possible type in the hierarchy right away. Until classes came
along in the C world, all of C's types were disjoint because they reflect
hardware "types". This is simply bad programming language design.

| But I don't believe this is not the case in Lisp as well.

You can believe whatever you want for all I care, as you have obviously
made up your mind, already, but please do not tell me what I believe. I
make an effort to be precise and accurate in what I write. You could
make the effort of paying attention in return.

Where did you come from? Is there a C++ conference next door that sends
out missionaries to defend their obviously broken language from critique?

Coby Beck

Dec 1, 2001, 9:43:16 PM

"Thomas Stegen CES2000" <tho_s...@hotmail.com> wrote in message
news:3c098358$1...@nntphost.cis.strath.ac.uk...

> "Erik Naggum" <er...@naggum.net> wrote in message
> news:32162057...@naggum.net...
>
> [snip]
> > C++ is a language that encourages such
> > cheating and lying because you _have_ to tell your compiler something you
> > cannot possibly know, yet, such as the type of all your variables and
> > functions, and then you make up a reality that fits the lies.
>
> Are you proposing to start writing the code before you know where
> the program is headed? One should know all these things before
> one starts writing code.

This is only one approach and is not always the best approach. Really.

Many problems are best solved by starting *before* you can even clearly define
them. This may seem a bit paradoxical but I have worked (and am working) on
software projects where the goal was only understood in a very general way and
actually starting to write code was the best way to refine and understand the
problem well enough to be confident in "implementation details" like class
hierarchies and the organization of top level functionality.

This is where lisp's flexibility is invaluable. (and it's very fun! ;-)
--
Coby
(remove #\space "coby . beck @ opentechgroup . com")


Kenny Tilton

Dec 1, 2001, 9:55:58 PM

Thomas Stegen CES2000 wrote:
>
> "Erik Naggum" <er...@naggum.net> wrote in message
> news:32162057...@naggum.net...
>
> [snip]
> > C++ is a language that encourages such
> > cheating and lying because you _have_ to tell your compiler something you
> > cannot possibly know, yet, such as the type of all your variables and
> > functions, and then you make up a reality that fits the lies.
>
> Are you proposing to start writing the code before you know where
> the program is headed? One should know all these things before
> one starts writing code.

The beauty is, those two sentences are heresy to me, yet I respect that
they convey exactly how many feel about software development and that
those many are good at what they do, so we should just give up on all these
language wars, to each their own. But FWIW....

Suppose I want to write a program to simulate pocket billiards. What I
should know beforehand is how pocket billiards works in the real world,
and how to express real world stuff in algorithms. Whether I want to use
single or double precision for real-world values is just my chosen
language's way of harassing me and making me miserable, it is not
something I should be thinking about, it is something I have to deal
with when backed into a corner.

If I sit down /beforehand/ to think all that through, (a) I get bored to
tears and give up programming, and (b) it gets me nowhere because until I
engage the enemy I really do not know what is going to come up. Quoth
Kent: "Concrete comes with details."

OTOH, I might well subscribe to the sentiments quoted above if I had to
wait 3 to 90 minutes for one edit-run cycle. :)


kenny
clinisys

Dmitri Ivanov

Dec 2, 2001, 6:03:43 AM

Erik Naggum <er...@naggum.net> wrote in message
news:32162057...@naggum.net...
> ... Perhaps Java can actually be a real programming language,

> and then we can tell them about the benefits of aa transition to
> Common Lisp

Moreover, Java's features seem to come closer and closer to Common
Lisp's. Lately, I've read James Gosling's interview where he said he was
thinking of a class renaming mechanism for Java. Reinventing whatever CL
has had for ages is considered a fantastic breakthrough once again :-)
---
Sincerely,
Dmitri Ivanov
www.aha.ru/~divanov


Thomas Stegen CES2000

Dec 2, 2001, 7:35:56 AM
"Erik Naggum" <er...@naggum.net> wrote in message
news:32162480...@naggum.net...
> * Erik Naggum

[snip]

> | But I don't believe this is not the case in Lisp as well.

That '.' should have been a '?'. I don't know much about Lisp yet, so I
should perhaps have chosen my wording more carefully.

>
> You can believe whatever you want for all I care, as you have obviously
> made up your mind, already, but please do not tell me what I believe. I
> make an effort to be precise and accurate in what I write. You could
> make the effort of paying attention in return.
>
> Where did you come from? Is there a C++ conference next door that sends
> out missionaries to defend their obviously broken language from critique?

Why do you sound so angry all the time? You were using strong words,
and so did I. I was not at all trying to insult you by patronizing you or
by making use of sarcasm.

Erik Naggum

Dec 2, 2001, 8:17:20 AM
* Thomas Stegen CES2000

| Why do you sound so angry all the time?

Because you care too much about such things and even exaggerate wildly,
both of which only further annoy me when you fail to answer the arguments
and focus on why "angry" you think I sound.

Kalle Olavi Niemitalo

Dec 2, 2001, 9:19:47 AM
"Dmitri Ivanov" <div...@aha.ru> writes:

> Lately, I've read James Gosling's interview where he said he was
> thinking of a class renaming mechanism for Java. Reinventing whatever CL
> has had for ages is considered a fantastic breakthrough once again :-)

What is a class renaming mechanism, and how is it present in CL?

A quick web search turned up some Eiffel references, where class
renaming is apparently a way to use similarly named but
independently developed classes in the same program. I guess
packages avoid that problem in CL. But Java has packages too,
so I am puzzled.

Kent M Pitman

Dec 2, 2001, 1:30:19 PM
Kalle Olavi Niemitalo <k...@iki.fi> writes:

> "Dmitri Ivanov" <div...@aha.ru> writes:
>
> > Lately, I've read James Gosling's interview where he said he was
> > thinking of a class renaming mechanism for Java. Reinventing whatever CL
> > has had for ages is considered a fantastic breakthrough once again :-)

If memory serves, Gosling was a lisper long ago, before Java, so I'm
not even sure reinvention would be a correct term. He's surely aware
of us. But we should be happy for any features of CL that make their
way into Java--each is one less thing to teach new people about.



> What is a class renaming mechanism, and how is it present in CL?
>
> A quick web search turned up some Eiffel references, where class
> renaming is apparently a way to use similarly named but
> independently developed classes in the same program. I guess
> packages avoid that problem in CL. But Java has packages too,
> so I am puzzled.

CL classes use names as a superficial veneer but unless you go to special
work to make your system depend on the name, it uses only class objects
and so the classes can be renamed.

This is, incidentally, an argument for using

(defmethod foo? ((x t)) nil)
(defmethod foo? ((x foo)) t)
... (foo? x) ...

rather than

...(typep x 'foo)...

regardless of whether the compiler will well-optimize TYPEP or not, as
we discussed earlier. The usage of defmethod doesn't turn into something
that looks for (eq (class-name (class-of x)) 'foo) nor for
(eq (class-of x) (find-class 'foo)) but rather to something more akin to
(eq (class-of x) (load-time-value (find-class 'foo))) so that renaming
the class after this method is loaded won't perturb anything... other than
loading later patches that have to get foothold by name, of course, but then
for that you could rename things back temporarily, if it came to that.

I don't know if this is what is meant by class renaming, but I merely wanted
to observe that this can be done. That is, that CLOS is itself
object-centric, not name-centric.

Marco Antoniotti

Dec 2, 2001, 1:52:47 PM

Kalle Olavi Niemitalo <k...@iki.fi> writes:

Cfr.
change-class,
update-instance-for-different-class,
update-instance-for-redefined-class, and
reinitialize-instance.

Cheers


--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.

Kalle Olavi Niemitalo

Dec 2, 2001, 2:36:16 PM
Kent M Pitman <pit...@world.std.com> writes:

> CL classes use names as a superficial veneer but unless you go to special
> work to make your system depend on the name, it uses only class objects
> and so the classes can be renamed.

Ah. Would it go like this?

(defclass old-name () ()) ;or something more complex

(setf (find-class 'new-name) (find-class 'old-name))
(setf (find-class 'old-name) nil)
;; or an equivalent SHIFTF

The second SETF failed in Debian CMUCL 3.0.5. (I'll report that.)
So I did this instead:

(defclass dummy () ())
(setf (find-class 'old-name) (find-class 'dummy))

The class originally called OLD-NAME has now been renamed to
NEW-NAME, and no other names refer to it. However, (class-name
(find-class 'new-name)) keeps returning OLD-NAME. Can that be
updated too?

Dmitri Ivanov

Dec 3, 2001, 2:14:15 AM

Kalle Olavi Niemitalo <k...@iki.fi> wrote in message
news:87g06tjt...@Astalo.y2000.kon.iki.fi...
> [...snip...]

> (defclass old-name () ()) ;or something more complex
>
> (setf (find-class 'new-name) (find-class 'old-name))
> (setf (find-class 'old-name) nil)
> ;; or an equivalent SHIFTF
>
> The second SETF failed in Debian CMUCL 3.0.5. (I'll report that.)
> So I did this instead:

Simply try
(setf (find-class 'new-name) 'old-name)

Kalle Olavi Niemitalo

Dec 3, 2001, 2:56:30 AM
"Dmitri Ivanov" <div...@aha.ru> writes:

> Simply try
> (setf (find-class 'new-name) 'old-name)

Does that conform to the spec? The syntax is:

# find-class symbol &optional errorp environment => class
# (setf (find-class symbol &optional errorp environment) new-class)

CLHS does not say what values are allowed as new-class.
I _guess_ the intention was to have the same values for
new-class as for class, which is a class object or nil.

Will Deakin

Dec 3, 2001, 9:24:39 AM
Kenny Tilton wrote:

[...elided stuff about java...]

> And the GC sucks.

I realise this is an arse thing to ask, but can you give a
justification for this? That is, are there any papers comparing java
with lisp, say?

(The reason for asking is that I know a little about lisp and
c/c++ gc, and have found some java gc material, but I really didn't
understand what the java stuff is about. In contrast, lisp gc is well
documented -- with stuff out there for the interested -- and c and c++
gc is also quite well referenced and honest -- admitting there's quite
a lot of stuff you can't gc. The java stuff, by contrast, is a bit too
sweet and insubstantial -- much like candyfloss.)

:)w

Dmitri Ivanov

Dec 3, 2001, 12:20:59 PM

Kalle Olavi Niemitalo <k...@iki.fi> wrote in message
news:izn667obu...@stekt34.oulu.fi...

Sorry. Of course, the above should be:
(setf (class-name (find-class 'old-name)) 'new-name)

Kalle Olavi Niemitalo

Dec 2, 2001, 5:19:15 PM
Kent M Pitman <pit...@world.std.com> writes:

> ...(typep x 'foo)...

A class object is valid as a type specifier, so you could use
TYPEP and still allow renaming. :-)

> The usage of defmethod doesn't turn into something
> that looks for (eq (class-name (class-of x)) 'foo) nor for
> (eq (class-of x) (find-class 'foo)) but rather to something more akin to
> (eq (class-of x) (load-time-value (find-class 'foo)))

I'm not very familiar with LOAD-TIME-VALUE. The spec says that
"the order of evaluation with respect to the evaluation of top
level forms in the file is implementation-dependent." Does that
mean (load-time-value (find-class 'foo)) is unreliable if the
(defclass foo ...) is in the same file?

Thomas F. Burdick

Dec 3, 2001, 4:30:57 PM
Will Deakin <aniso...@hotmail.com> writes:

> Kenny Tilton wrote:
>
> [...elided stuff about java...]
>
> > And the GC sucks.
>
> I realise this is an arse thing to ask, but, can you give an
> justification for this? that is, are there any papers comparing java
> with lisp, say?

Ah, thank you for following up here, because I'd meant to reply to
Kenny's article. Java GC certainly sucked at first. However, in true
Worse-is-Better form, they started out with a crappy GC (which, for
people coming from non-GC languages was probably a godsend) and
worried about making the GC good later. Well, later came, and AFAICT,
there are good GCs for Java now. Although, come to think of it, I can
only remember reading papers about Java GCs on Sun hardware, so it's
possible that Java GC still sucks on Intel hardware, I don't know.

One problem with the Java approach of starting with a bad GC and
developing good ones later, though, is that it let Java users develop a
poisonous culture of writing finalizers that must be run in a timely
manner. This worked at first, but introduce generational GC, and
suddenly you've got unreleased resources in older generations that
aren't being collected. Active Java users would probably have a
better idea about how much of a problem this still is, but any code
base that does this pretty much requires a poor GC.

Oh, and to answer your question, I don't know of any, but if you look
for papers on Java GCs, you'll often find references to papers on Lisp
GCs. So, there's comparison of a sort.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Kenny Tilton

Dec 3, 2001, 9:40:15 PM

"Thomas F. Burdick" wrote:
>
> Will Deakin <aniso...@hotmail.com> writes:
>
> > Kenny Tilton wrote:
> >
> > [...elided stuff about java...]
> >
> > > And the GC sucks.
> >
> > I realise this is an arse thing to ask, but, can you give an
> > justification for this? that is, are there any papers comparing java
> > with lisp, say?
>
> Ah, thank you for following up here, because I'd meant to reply to
> Kenny's article. Java GC certainly sucked at first.

Right, and I think it important in this language war to ignore
improvements and forever characterize a language according to the
attributes it held in the first few weeks after the initial release.
Lisp is slow, right? Because it's interpreted! Uses too much memory,
too.

:)

kenny
clinisys

glauber

Dec 4, 2001, 11:14:34 AM
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) wrote in message news:<xcvsnas...@apocalypse.OCF.Berkeley.EDU>...

> Will Deakin <aniso...@hotmail.com> writes:
>
> > Kenny Tilton wrote:
> >
> > [...elided stuff about java...]
> >
> > > And the GC sucks.
> >
> > I realise this is an arse thing to ask, but, can you give an
> > justification for this? that is, are there any papers comparing java
> > with lisp, say?
>
> Ah, thank you for following up here, because I'd meant to reply to
> Kenny's article. Java GC certainly sucked at first. However, in true
> Worse-is-Better form, they started out with a crappy GC (which, for
> people coming from non-GC languages was probably a godsend) and

Yes indeed. Many people had their first exposure to GC then. That was
a good thing.

[...]

> One problem with the Java approach of starting with a bad GC and
> developing good ones later, though, is that it let Java users develop a
> poisonous culture of writing finalizers that must be run in a timely
> manner. This worked at first, but introduce generational GC, and

[...]

Finalizers are one of my pet peeves with Java; more precisely, the
fact that Java doesn't have destructors. People coming from a C++
tradition look for the destructor, don't find it, and use the
finalizer. The only good use i can think of for a finalizer is to
release memory that has been malloc-ed outside of the garbage
collector (perhaps as part of a native interface method). Stay away
from finalizers -- they're evil.

Now, not having a destructor, we're forever having to code stuff like:

try { open database connection, do stuff }
finally { close database connection }

when this should have been handled transparently in the destructor.
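
The idiom glauber describes can be sketched as a small, self-contained Java program. This is only an illustration of the try/finally pattern; `DbConnection` here is a hypothetical stand-in class, not a real JDBC type:

```java
// A minimal stand-in for a closable resource; "DbConnection" is a
// hypothetical class, not part of any real database API.
class DbConnection {
    boolean open = true;
    int query() { return 42; }       // pretend to do some work
    void close() { open = false; }   // the cleanup that must always run
}

public class TryFinallyDemo {
    static boolean closedAfterUse;

    static int useDatabase() {
        DbConnection conn = new DbConnection();  // "open database connection"
        try {
            return conn.query();                 // "do stuff"
        } finally {
            conn.close();                        // runs even if query() throws
            closedAfterUse = !conn.open;
        }
    }

    public static void main(String[] args) {
        System.out.println(useDatabase());       // prints 42
        System.out.println(closedAfterUse);      // prints true
    }
}
```

The finally block runs on every exit path, which is exactly why forgetting it (as discussed below in the thread) leaks the resource until some finalizer happens to run.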


g

Will Deakin

Dec 4, 2001, 11:37:54 AM
glauber wrote:

> Now, not having a destructor, we're forever having to code stuff like:
>
> try { open database connection, do stuff }
> finally { close database connection }
>
> when this should have been handled transparently in the destructor.

Working as a (sometime) Oracle DBA I would say that this has done
more than peeve me: due to naivety on the part of some developers,
this caused the death by cursor starvation of a live Oracle
database running on a meaty E4500...

:|w


Andreas Bogk

Dec 4, 2001, 12:42:12 PM
thegl...@my-deja.com (glauber) writes:

> Now, not having a destructor, we're forever having to code stuff like:
>
> try { open database connection, do stuff }
> finally { close database connection }
>
> when this should have been handled transparently in the destructor.

What's wrong with that? Of course, other than that the usual Lisp
idiom of having a WITH-OPEN-DATABASE is more convenient?

Andreas

--
"In my eyes it is never a crime to steal knowledge. It is a good
theft. The pirate of knowledge is a good pirate."
(Michel Serres)

Ray Blaak

Dec 4, 2001, 1:02:33 PM
thegl...@my-deja.com (glauber) writes:
> Finalizers are one of my pet peeves with Java; more precisely, the
> fact that Java doesn't have destructors. People coming from a C++
> tradition look for the destructor, don't find it, and use the
> finalizer. The only good use i can think of for a finalizer is to
> release memory that has been malloc-ed outside of the garbage
> collector (perhaps as part of a native interface method). Stay away
> from finalizers -- they're evil.
>
> Now, not having a destructor, we're forever having to code stuff like:
>
> try { open database connection, do stuff }
> finally { close database connection }
>
> when this should have been handled transparently in the destructor.

The problem is that destructors and GC in Java don't mix very well. Every
object is in fact a pointer to the object data. How does the system know when
the destructor should be called? The object being referenced might very well
be returned out of scope, in which case it should *not* be destructed.

If Java had some sort of stack-based "struct" type, such a thing could have
scope-based destructors. But such things would not fit with the rest of Java,
by-value/by-reference/identity issues would arise, etc.

I want scope-based destructors too, but what I would do is to add to Java a
sort of compromise: have a scope-based construct that does not complicate
Java's type system but yet allows the programmer to easily express scope-based
management (essentially a form of Lisp-style with-xxx macros):

Connection c = new Connection(...);
with (c)
{ ...do stuff...
}

Such a construct would be equivalent to:

Connection c = new Connection(...);
try
{ c.initializeResource();
...do stuff...
}
finally
{ c.finalizeResource(success);
}

with the added requirement that c implement some sort of ControlledResource
interface that mandates the initializeResource() and finalizeResource()
methods.

Then the decision of when to "destruct" is up to the programmer (this is the
compromise), but things are easy to express, making it straightforward to
repeat a "managed scope" while minimizing the possibility of introducing an
error in the resource management logic.
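
Blaak's `with` construct is not real Java, but the expansion he gives can be written today. The following sketch uses his hypothetical names (`ControlledResource`, `initializeResource`, `finalizeResource`) and a modern lambda for the body; the logging is only there to make the call sequence observable:

```java
// Sketch of the expansion Blaak describes. ControlledResource and its
// two methods are his hypothetical proposal, not a standard Java API.
interface ControlledResource {
    void initializeResource();
    void finalizeResource(boolean success);
}

class Connection implements ControlledResource {
    StringBuilder log = new StringBuilder();
    public void initializeResource() { log.append("init;"); }
    public void finalizeResource(boolean success) {
        log.append(success ? "fin-ok;" : "fin-fail;");
    }
}

public class WithDemo {
    // What "with (c) { ...body... }" would expand to, per the proposal.
    static String managedScope(Connection c, Runnable body) {
        boolean success = false;
        c.initializeResource();
        try {
            body.run();
            success = true;
        } finally {
            c.finalizeResource(success);   // runs whether body threw or not
        }
        return c.log.toString();
    }

    public static void main(String[] args) {
        System.out.println(managedScope(new Connection(), () -> {}));
        // prints "init;fin-ok;"
    }
}
```

Whether the body completed normally is threaded through to `finalizeResource`, matching the `success` argument in Blaak's expansion.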

Followups to comp.lang.java.machine where this kind of thing can be better
discussed [and is a counterpoint to Edward Diener's current proposal for a
resource type].

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@telus.net The Rhythm has my soul.

Hannah Schroeter

Dec 4, 2001, 2:14:36 PM
Hello!

In article <32162057...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:
>[...]

> I may be biased, but I tend to find a much lower tendency among female
> programmers to be dishonest about their skills, and thus do not say they
> know C++ when they are smart enough to realize that that would be a lie
> for all but perhaps 5 people on this planet.

Now, I'd say I know quite a lot of C++. I do it nearly every work day,
even though I'm not too glad about it (there are those Erlang/... days
:-) ).

Yes, I *know* that I don't know all of C++. Who does? Just read those
Guru of the Week thingies or similar in comp.lang.c++.moderated.
Read some strange template-based code. etc.

Still, I know enough of C++ to program useful things. It's more effective
than pure C, I guess. Perhaps if I had already invested the same
effort into e.g. Lisp, I'd be even more productive. However, alas,
technological merits aren't the only things that count in industry.

>[...]

> amends unless you threaten him. C++ is a language that encourages such
> cheating and lying because you _have_ to tell your compiler something you
> cannot possibly know, yet, such as the type of all your variables and
> functions, and then you make up a reality that fits the lies.

That particular point doesn't match my experience. Usually my assertions
about types (which are checked according to the type system) are true,
at least *now*. Maybe they'll become wrong due to evolution of the
system I'm (co)developing. Then there may be excessive changes to
the type declarations etc., of course, but still, usually things are
true *now*. That is, unless you excessively cheat your way around the
type system using casts or things like that.

> This does
> not appear to appeal to many female programmers. Java is a _lot_ better
> than C++ in this regard in that the amount of required cheating and lying
> is dramatically reduced by the more flexible type system and the
> existence of so much more stuff that dictates the types to use. (Common
> Lisp is even better, of course, but it just not "there" when people look
> for something to learn if they think they like programming computers.)

Now, frankly, C++'s type system is usually more expressive. You can map
almost everything save for GC-based issues from Java into C++, but not
necessarily vice versa.

In C++ you say std::map<KeyType, ValueType>. In Java you just use some
map type and have to cast values you retrieve dynamically to the type
you expect. You have to slightly cheat around the type system in Java
in this case. (There are Java extensions that fix that deficiency,
and AFAIK the solution of generic classes is in fact cleaner than C++'s
templates.)

>[...]

So far. However, I don't really like using such simple pigeonholes as
"male" and "female". All too often, more or less slight correlations
are used to build up gender stereotyping, gender prejudices and
self-fulfilling prophecies. An example for the latter:
"Women are better at home. Let's educate our daughter in cooking
and not in those strange outside-world related things". The consequence
is clear: The daughter will have a disadvantage in outside-world
things and an advantage in cooking knowledge. So the prejudice
has come true in this case, has been reproduced. (It's only strange
to see that many professional cooks are men...) If done in great numbers,
women, statistically, really *are* better at cooking, but with no
inherent basis for the trait or the correlation at all.

Kind regards,

Hannah.

Thomas F. Burdick

Dec 4, 2001, 4:30:04 PM
thegl...@my-deja.com (glauber) writes:

> Finalizers are one of my pet peeves with Java; more precisely, the
> fact that Java doesn't have destructors. People coming from a C++
> tradition look for the destructor, don't find it, and use the
> finalizer. The only good use i can think of for a finalizer is to
> release memory that has been malloc-ed outside of the garbage
> collector (perhaps as part of a native interface method). Stay away
> from finalizers -- they're evil.

That, and as a sort of safety net: check to see if whatever resource
that should have been freed, was, and, if not, free it and log a
warning.
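
That safety-net pattern can be sketched as follows. The class name is illustrative, and, as this thread stresses, finalizer timing is up to the GC, so the finalizer is a backstop for a forgotten `close()`, never a substitute for it:

```java
// Safety-net finalizer sketch: if the owner forgot to release the
// resource, release it in the finalizer and record a warning.
// Finalization timing is unpredictable, so this only limits the damage.
public class GuardedResource {
    private boolean released;
    static String lastWarning = "";

    public void close() { released = true; }   // the correct, explicit path

    @Override
    protected void finalize() {
        if (!released) {
            lastWarning = "resource leaked; releasing in finalizer";
            close();
        }
    }

    public boolean isReleased() { return released; }

    public static void main(String[] args) {
        GuardedResource r = new GuardedResource();
        r.close();                             // explicit release, as intended
        System.out.println(r.isReleased());    // prints true
    }
}
```

(Modern Java deprecates `finalize()` entirely, which is roughly the conclusion this thread reaches two decades early.)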

> Now, not having a destructor, we're forever having to code stuff like:
>
> try { open database connection, do stuff }
> finally { close database connection }
>
> when this should have been handled transparently in the destructor.

It should have? Java objects (like Lisp objects) have indefinite
extent. I'm not sure how you think a destructor would be any
different than a finalizer. If Java had a macro system, you could
make WITH-... style macros to make that idiom nicer, but I don't see
the problem with it. The object has indefinite extent, and the GC
handles the memory management. DB handles aren't memory, so you're
going to have to manage them yourself. It's just too bad (for the
people tracking down bugs caused by people who didn't write the
try{}finally{} code) that Java doesn't have a convenient way to make
WITH-... macros, or even CALL-WITH-... functions (why do people use
this language instead of Smalltalk, again?).

Erik Naggum

Dec 4, 2001, 4:35:33 PM
* Erik Naggum

> amends unless you threaten him. C++ is a language that encourages such
> cheating and lying because you _have_ to tell your compiler something you
> cannot possibly know, yet, such as the type of all your variables and
> functions, and then you make up a reality that fits the lies.

* Hannah Schroeter


| That particular point doesn't match my experience. Usually my assertions
| about types (which are checked according to the type system) are true, at
| least *now*. Maybe they'll become wrong due to evolution of the system
| I'm (co)developing. Then there may be excessive changes to the type
| declarations etc., of course, but still, usually things are true *now*.
| Except you excessively cheat yourself around the type systems using casts
| or things like that.

But things that are true _now_ which turn out to be false a short time
later are generally regarded as unpredictable. The problem is that you
have to make those changes without any changes in external requirements,
they are required as your knowledge of only your _own_ code increases. I
consider this to be a case of being forced to say that something is true
when you are clearly not certain that it is, and then when you cannot
make up a credible world around it so it holds true, you change your
testimony, as it were.

| Now, frankly, C++'s type system is usually more expressive. You can map
| almost everything save for GC-based issues from Java into C++, but not
| necessarily vice versa.

This is actually an argument that C++ is a lower-level language than
Java. The most expressive type system is that of machine language, or
assembler. We sacrifice expressibility on one level to attain it at another.

| However, I don't really like using such simple drawers like "male" and
| "female".

People actually do come in these easily labeled packages. The difference
is whether one confuses the descriptive and the prescriptive statements.
Induction is a tricky business, prediction even more so. I really tried
to avoid both and just report my observations so far. See my signature
for a hopefully succinct summary of all the possible arguments why I
think you misdirected your comments on stereotyping.

Kaz Kylheku

Dec 4, 2001, 5:09:52 PM

Or perhaps it should be handled by a language feature that the programmer
can define himself:

(with-open-database-connection (conn) (do-stuff conn))

Destructors are not necessarily the answer. Using the destruction of an
object to clean up its state is really a semantic hack; in C++ you force
the destruction of the object in order to make it close the database
connection that it manages as a side effect.

If you have a way to build language *features* that have built-in unwind
trapping and cleanup, you don't need to rely on your class system to
provide the only halfway convenient substrate for doing it.

Kaz Kylheku

Dec 4, 2001, 5:10:38 PM
In article <892f97d1.01120...@posting.google.com>, glauber wrote:
>Finalizers are one of my pet peeves with Java; more precisely, the
>fact that Java doesn't have destructors. People coming from a C++
>tradition look for the destructor, don't find it, and use the
>finalizer. The only good use i can think of for a finalizer is to
>release memory that has been malloc-ed outside of the garbage
>collector (perhaps as part of a native interface method). Stay away
>from finalizers -- they're evil.
>
>Now, not having a destructor, we're forever having to code stuff like:
>
>try { open database connection, do stuff }
>finally { close database connection }
>
>when this should have been handled transparently in the destructor.

Or perhaps it should be handled by a language feature that the programmer

Espen Vestre

Dec 5, 2001, 5:04:38 AM
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> worried about making the GC good later. Well, later came, and AFAICT,
> there are good GCs for Java now.

I'm actually a bit scared that given the amount of money spent on
the development of Java run-time systems, there might come the day
when java GC is significantly better than lisp GC - and even worse:
That the new GC methods are (C) some company that won't give away
their stuff to the lisp vendors.
--
(espen)

Tim Bradshaw

Dec 5, 2001, 9:15:24 AM
k...@ashi.footprints.net (Kaz Kylheku) wrote in message news:<QQbP7.21889$nm3.9...@news1.rdc1.bc.home.com>...

> Or perhaps it should be handled by a language feature that the programmer
> can define himself:
>
> (with-open-database-connection (conn) (do-stuff conn))

This is a really interesting connection that I've noticed before. The
desire for `destructors' in a language *actually* turns out to be due
to the lack of sufficient linguistic flexibility, specifically the
lack of a proper macro system.

Actually, I think that the only interesting property of any language
is a sufficiently hairy macro system. Who needs to actually compute
anything when you can expand macros. C++ seems to be realising this
with templates. They haven't quite got there yet, but fairly soon
they'll work out that you don't need to ship the system at all, you
just ship some *really complicated* templates, and the compiler.
Soon, to boot windows you'll just say `gcc windows-xp.cpp > /dev/mem'.

--tim

Tim Bradshaw

Dec 5, 2001, 10:45:24 AM
Espen Vestre <espen@*do-not-spam-me*.vestre.net> wrote in message news:<w6n10yq...@wallace.ws.nextra.no>...

> I'm actually a bit scared that given the amount of money used on
> the development of java run time systems, there might come the day
> when java GC is significantly better than lisp GC - and even worse:
> That the new GC methods are (C) some company that won't give away
> their stuff to the lisp vendors.

This is kind of a sarcastic response, so take it with a pinch of
salt...

Have you measured GC overhead recently? Will Deakin & I did some
tests (he did most of them) for a fairly consy program on a number of
machines & CL systems. I think the worst was ~6% runtime, the best
was 2 or 3%. None of these were tuned at all, I'd guess that for a
highly-tunable system like ACL (which I think was at the low end
already), you could likely shave another couple of percent off,
leaving you with maybe 1-2%. It's in the noise.

The reason this is slightly tongue in cheek is that one place where
there probably are big wins is GCs for multithreaded systems on
multiple-processor machines. A GC that takes 5% and serialises the
program will start to hurt at 20 processors, which is not very large
(though not exactly a desktop either). And Sun have a big interest in
a multithreaded system (Java) and multiple-processor machines, so
there may be interesting developments there. However I'm not sure if
that problem is not already mostly-solved in academia.

--tim

glauber

Dec 5, 2001, 12:20:31 PM
k...@ashi.footprints.net (Kaz Kylheku) wrote in message news:<yRbP7.21890$nm3.9...@news1.rdc1.bc.home.com>...

> In article <892f97d1.01120...@posting.google.com>, glauber wrote:
[...]

> >Now, not having a destructor, we're forever having to code stuff like:
> >
> >try { open database connection, do stuff }
> >finally { close database connection }
> >
> >when this should have been handled transparently in the destructor.
>
> Or perhaps it should be handled by a language feature that the programmer
> can define himself:
>
> (with-open-database-connection (conn) (do-stuff conn))


Yes. C++ attempted to eliminate macros, but it provides features that
are powerful enough to replace most of the uses we made of macros in C.
Java eliminates macros but gives very little to replace them. Lisp
macros are the best.

The problem of the

try { do something with the database }
finally { release resources }

is that it's verbose, and people forget to do it. Many Java
programmers never use the finally clause of the try block (I know...).
Specifically with databases, sometimes it's not clear what you need to
release (statements? result sets? all of the above?). I've worked with
a buggy driver (IBM's DB2 level 4 driver) that locks up when you try
to release a result set. :-( This leads to bugs that are hard to
reproduce, because depending on your application's memory usage, the
finalizer may indeed be called and release the resource for you (or
not, depending on how the driver's developer coded it).

In Java, the only way to extend the language is by subclassing.
Sometimes this forces people into a better design, but sometimes the
mechanism is just not powerful enough.

g

Thomas F. Burdick

Dec 5, 2001, 4:36:25 PM
tfb+g...@tfeb.org (Tim Bradshaw) writes:

> The reason this is slightly tongue in cheek is that one place where
> there probably are big wins is GCs for multithreaded systems on
> multiple-processor machines. A GC that takes 5% and serialises the
> program will start to hurt at 20 processors, which is not very large
> (though not exactly a desktop either). And Sun have a big interest in
> a multithreaded system (Java) and multiple-processor machines, so
> there may be interesting developments there. However I'm not sure if
> that problem is not already mostly-solved in academia.

In fact, the last time I looked through GC papers, this was the most
interesting area, and the one where Java seemed to be dominating.
Java's probably not quite as good a target for GC as Smalltalk or
Lisp[*], but they seem to have gotten the single-processing stuff
down, I think about as well as it's going to get for anyone. Where
they're going to murder Lisp is on multiprocessing. We don't even
*have* properly multiprocessing Lisps, do we? Like the kind that can
make use of more than 4 processors? That's where the cool GC research
in Java is going, AFAICT.

[*] I'd suspected that dynamically-typed languages are better for GC
because statically-typed ones encourage the programmer to do some
manual memory management by holding on to some stuff that's probably
garbage, and generally forcing the programmer to think more about when
they get things and let them go. This might just be an artifact of
the code I've seen from others (I don't think I could make that
judgment on my own code very well), but I think I read a paper that
confirmed my suspicions. I say "I think" because I can't find it
(moved 3 times in the last year), and without double-checking, I
might've been just hearing what I wanted to hear :-)

Andreas Bogk

Dec 5, 2001, 10:45:03 PM
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> down, I think about as well as it's going to get for anyone. Where
> they're going to murder Lisp in on multiprocessing. We don't even
> *have* properly multiprocessing Lisps, do we? Like the kind that can
> make use of more than 4 processors? That's where the cool GC research
> in Java is going, AFAICT.

I don't see a reason why garbage collection techniques developed for
Java shouldn't be applicable to Lisp.

Espen Vestre

Dec 6, 2001, 5:40:00 AM
tfb+g...@tfeb.org (Tim Bradshaw) writes:

> Have you measured GC overhead recently?

I'm actually not at all interested in the _percentage_ overhead; I
would even accept 50% overhead if it could be done without losing
responsiveness, with the application at the same time being able to
perform "almost hard real time".

I'm working with an application that runs for _months_, produces
_tons_ of garbage and at the same time caches almost 1GB of data.
I've tuned it to work amazingly well with LispWorks (*much* better
than an earlier ACL version, btw), but I still can't get around the
fact that I sometimes need to do memory defragmentation and sweeps of
Generation 2. Each of these operations, which I usually run 1 or 2
times an hour in peak load hours, will last for about 10 - 30
seconds. For my specific application, a 30-second "blackout" once an
hour is fully acceptable, but for other applications it's not, so I
see a _very_ real need for an almost-hard-real-time Lisp GC
implementation like the one presented by Takeuchi at the JLUGM 2000.
--
(espen)

Will Deakin

Dec 6, 2001, 5:56:53 AM
Espen Vestre wrote:
> with the application at the same time being able to perform
> "almost hard real time".
Hmmm. Isn't this called "soft real time?"

> but I still can't get around the fact that I sometimes need to
> do memory defragmentation and sweeps of Generation 2. Each of
> these operations, which I usually run 1 or 2 times an hour in
> peak load hours, will last for about 10 - 30 seconds. For my
> specific application, a 30 seconds "blackout" once an hour is
> fully acceptable, but for other applications it's not,

Speaking from a position of complete ignorance -- often an
advantage -- your description indicates that you are using a
form of generational gc. Would it not be more appropriate to use
a non-generational gc?

:)w

Thomas F. Burdick

Dec 6, 2001, 6:43:55 AM
Andreas Bogk <and...@andreas.org> writes:

> t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > down, I think about as well as it's going to get for anyone. Where
> > they're going to murder Lisp in on multiprocessing. We don't even
> > *have* properly multiprocessing Lisps, do we? Like the kind that can
> > make use of more than 4 processors? That's where the cool GC research
> > in Java is going, AFAICT.
>
> I don't see a reason why garbage collection techniques developed for
> Java shouldn't be applicable to Lisp.

Well, there are differences, but a lot of the research *could* be
useful to us, too. But it would take lots of money to implement them,
and we'd need to have the kind of multiprocessing Lisp systems they go
with, which takes a *lot* more money. So, yeah, the only thing
between us and them is a whole lot of money :-P

Andreas Bogk

Dec 6, 2001, 9:07:33 AM
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> > I don't see a reason why garbage collection techniques developed for
> > Java shouldn't be applicable to Lisp.
> Well, there are differences, but a lot of the research *could* be
> useful to us, too.

The differences are mainly in the area of finalization. For
multiprocessing, issues are very much the same.

> But it would take lots of money to implement them,

Well, be glad you don't have to pay for the research in that area;
that's even more expensive. And open-source development doesn't
involve so much money, but rather finding somebody who will dedicate
his spare time to the issue. Or his company time, for that matter,
which tends to happen.

For the issue of multiprocessing GC, there's code available in the
Boehm-Demers-Weiser conservative GC. It supports parallel marking in
multiple threads. This GC is used by Java, Scheme, ML and Dylan
implementations.

> and we'd need to have the kind of multiprocessing Lisp systems they go
> with, which takes a *lot* more money.

Well, it suffices if, say, the CMUCL maintainers have access to a
multiprocessing system. Money helps to get things done fast, but it
can be substituted for with connections to the right people.

> So, yeah, the only thing
> between us and them is a whole lot of money :-P

So let *them* spend it on research, lean back, and collect the
fruits. :)

Kalle Olavi Niemitalo

Dec 6, 2001, 3:15:43 PM
"Dmitri Ivanov" <div...@aha.ru> writes:

> (setf (class-name (find-class 'old-name)) 'new-name)

Oh. Thanks.

(CLHS has find-class and (setf find-class) on the same page but
class-name and (setf class-name) on separate pages, apparently
because the latter two are generic functions.)

Michael Schuerig

Dec 7, 2001, 4:19:52 PM
Tim Bradshaw wrote:

> k...@ashi.footprints.net (Kaz Kylheku) wrote in message
> news:<QQbP7.21889$nm3.9...@news1.rdc1.bc.home.com>...
>> Or perhaps it should be handled by a language feature that the
>> programmer can define himself:
>>
>> (with-open-database-connection (conn) (do-stuff conn))
>
> This is a really interesting connection that I've noticed before. The
> desire for `destructors' in a language *actually* turns out to be due
> to the lack of sufficient linguistic flexibility, specifically the
> lack of a proper macro system.

Jonathan Bachrach and Keith Playford are working on an interesting
macro system for Java

http://www.ai.mit.edu/~jrb/Projects/java-macros.htm

Michael

--
Michael Schuerig GPG Fingerprint
mailto:schu...@acm.org DA28 7DEB 5856 3365 BED9
http://www.schuerig.de/michael/ 8365 0A30 545A 82D2 05D7

Kaz Kylheku

Dec 7, 2001, 4:57:45 PM
In article <9urbpn$hga$07$1...@news.t-online.com>, Michael Schuerig wrote:
>Tim Bradshaw wrote:
>
>> k...@ashi.footprints.net (Kaz Kylheku) wrote in message
>> news:<QQbP7.21889$nm3.9...@news1.rdc1.bc.home.com>...
>>> Or perhaps it should be handled by a language feature that the
>>> programmer can define himself:
>>>
>>> (with-open-database-connection (conn) (do-stuff conn))
>>
>> This is a really interesting connection that I've noticed before. The
>> desire for `destructors' in a language *actually* turns out to be due
>> to the lack of sufficient linguistic flexibility, specifically the
>> lack of a proper macro system.
>
>Jonathan Bachrach and Keith Playford are working on an interesting
>macro system for Java
>
> http://www.ai.mit.edu/~jrb/Projects/java-macros.htm

It's not really ``for'' Java; it's a new language that is translated into
Java source code. The Java language isn't opened up to structural
transformations, but is used as a back end.

If everyone brings his pet Java preprocessor to a project, you end up
with code that is going through half a dozen text filters to produce
executable code. What if, for instance, you want to use AspectJ
and the Java Syntactic Expander at the same time?

A similar thing was done for C++ as part of Edward Willink's
Ph. D. thesis. ``Flexible Object Generator'' (FOG).
http://www.computing.surrey.ac.uk/research/dsrg/fog/

The creator of Java, James Gosling, wrote a structural macro preprocessor
for C called ACE, over ten years ago. Its purpose was to perform code
transformations, primarily optimizations that the
compiler isn't trusted to be able to do.
http://java.sun.com/people/jag/ace/ace.html

To be done right, these things have to be done *in* the language not as
a text processor *in front* of the language.

Michael Schuerig

Dec 7, 2001, 8:47:28 PM
Kaz Kylheku wrote:

> In article <9urbpn$hga$07$1...@news.t-online.com>, Michael Schuerig
> wrote:
>>Tim Bradshaw wrote:
>>
>>> k...@ashi.footprints.net (Kaz Kylheku) wrote in message
>>> news:<QQbP7.21889$nm3.9...@news1.rdc1.bc.home.com>...
>>>> Or perhaps it should be handled by a language feature that the
>>>> programmer can define himself:
>>>>
>>>> (with-open-database-connection (conn) (do-stuff conn))
>>>
>>> This is a really interesting connection that I've noticed before.
>>> The desire for `destructors' in a language *actually* turns out to
>>> be due to the lack of sufficient linguistic flexibility,
>>> specifically the lack of a proper macro system.
>>
>>Jonathan Bachrach and Keith Playford are working on an interesting
>>macro system for Java
>>
>> http://www.ai.mit.edu/~jrb/Projects/java-macros.htm
>
> It's not really ``for'' Java; it's a new language that is translated
> into Java source code. The Java language isn't opened up to structural
> transformations, but is used as a back end.

I'm not sure I understand the difference in this case. What would it
mean for a macro system to be *in* Java instead of in front of it?
Doesn't Java's syntax make it close to impossible (better: undesirable) to
have a macro system in the language?

> If everyone brings his pet Java preprocessor to a project, you end up
> with code that is going through half a dozen text filters to produce
> executable code. What if, for instance, you want to use AspectJ
> and the Java Syntactic Expander at the same time?

I wouldn't want to. If I wanted to play around, I wouldn't use either
as I wouldn't even use Java. On a serious project I'd consider it too
dangerous to use extensions that significantly deviate from "standard"
Java. Also, I consider it unlikely that any extension such as JSE or
AspectJ will ever make it into Java proper.

cbbr...@acm.org

Dec 7, 2001, 9:05:33 PM
Michael Schuerig <schu...@acm.org> writes:
> Kaz Kylheku wrote:
>
> > In article <9urbpn$hga$07$1...@news.t-online.com>, Michael Schuerig
> > wrote:
> >>Tim Bradshaw wrote:
> >>
> >>> k...@ashi.footprints.net (Kaz Kylheku) wrote in message
> >>> news:<QQbP7.21889$nm3.9...@news1.rdc1.bc.home.com>...
> >>>> Or perhaps it should be handled by a language feature that the
> >>>> programmer can define himself:
> >>>>
> >>>> (with-open-database-connection (conn) (do-stuff conn))
> >>>
> >>> This is a really interesting connection that I've noticed before.
> >>> The desire for `destructors' in a language *actually* turns out to
> >>> be due to the lack of sufficient linguistic flexibility,
> >>> specifically the lack of a proper macro system.
> >>
> >>Jonathan Bachrach and Keith Playford are working on an interesting
> >>macro system for Java
> >>
> >> http://www.ai.mit.edu/~jrb/Projects/java-macros.htm

> > It's not really ``for'' Java; it's a new language that is
> > translated into Java source code. The Java language isn't opened
> > up to structural transformations, but is used as a back end.

> I'm not sure I understand the difference in this case. What would it
> mean for a macro system to be *in* Java instead of in front of it?
> Doesn't Java's syntax make it close to impossible (or rather,
> undesirable) to have a macro system in the language?

I suppose the system in question might be compared to using M4 as a
macro preprocessor.

It's not obvious what it would mean to have a macro system "in" Java;
which is effectively why there are quite a few mutually incompatible
schemes.

> > If everyone brings his pet Java preprocessor to a project, you end
> > up with code that is going through half a dozen text filters to
> > produce executable code. What if, for instance, you want to use
> > AspectJ and the Java Syntactic Expander at the same time?

> I wouldn't want to. If I wanted to play around, I wouldn't use
> either as I wouldn't even use Java. On a serious project I'd
> consider it too dangerous to use extensions that significantly
> deviate from "standard" Java. Also, I consider it unlikely that any
> extension such as JSE or AspectJ will ever make it into Java proper.

Well, if you generate some code using AspectJ, what you get, in the
end, _is_ "standard Java" code; it's just that it starts off in a
non-standard form.

I'd tend to think that stuff like AspectJ would be highly valuable to
"use in front," as AOP addresses some _commonly-found_ classes of
problems particularly related to resource locking. (In a sense, it
provides a vague sort of equivalent to WITH-MY-LOCK, or WITH-DATABASE,
or WITH-DBCURSOR.) It beats hand-coding it everywhere...
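For comparison, here is a minimal sketch of the try/finally idiom that a
WITH-style macro (or an aspect) would otherwise have to expand to by hand
at every call site; the Connection class below is a stand-in for
illustration, not a real JDBC connection:

```java
// Hand-coded Java equivalent of a Lisp WITH-OPEN-DATABASE-CONNECTION
// form: acquire, run the body, and release in finally so cleanup runs
// even when the body throws. Connection here is a hypothetical stand-in.
public class WithConnectionSketch {
    static class Connection {
        boolean open = true;
        void doStuff() { System.out.println("doing stuff"); }
        void close()   { open = false; System.out.println("closed"); }
    }

    static Connection openConnection() { return new Connection(); }

    public static void main(String[] args) {
        Connection conn = openConnection();
        try {
            conn.doStuff();   // the "body" a macro would splice in
        } finally {
            conn.close();     // always runs, exception or not
        }
    }
}
```

The point of the thread is that this boilerplate has to be repeated at every
use site in Java, whereas a macro captures the pattern once.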
--
(reverse (concatenate 'string "ac.ocitapmys@" "enworbbc"))
http://www.ntlug.org/~cbbrowne/internet.html
"Even in the area of anticompetitive conduct, Microsoft is mainly an
imitator." -- Ralph Nader (1998/11/11)

Tim Bradshaw

Dec 11, 2001, 7:04:21 AM
Espen Vestre <espen@*do-not-spam-me*.vestre.net> wrote in message news:<w6adwwo...@wallace.ws.nextra.no>...

>
> I'm working with an application that runs for _months_, produces
> _tons_ of garbage and at the same time caches almost 1GB of data.
> I've tuned it to work amazingly well with LispWorks (*much* better
> than an earlier ACL version, btw), but I still can't get around the
> fact that I sometimes need to do memory defragmentation and sweeps of
> Generation 2. Each of these operations, which I usually run 1 or 2
> times an hour in peak load hours, will last for about 10 - 30
> seconds. For my specific application, a 30-second "blackout" once an
> hour is fully acceptable, but for other applications it's not, so I
> see a _very_ real need for an almost-hard-real-time lisp GC
> implementation like the one presented by Takeuchi at the JLUGM 2000.

Oh, OK, I was responding rather to an implied `lisp is slow because of
GC' type argument which people (probably not you) tend to make, or
perhaps don't make any more.

Real-time GC is definitely a good thing, and you probably are right
that the Java people will get commercial ones first because they have
huge money. They'll also (and probably relatedly) get systems that
run well on big commercial multiprocessor machines first, because Sun
make buckets of money off them and want to sell them as huge Java
boxes. This is another place that Lisp will get to five years too
late (although in this case probably the implementors know this and
it's just down to lack of money rather than stupidity like missing the
web bandwagon...)

--tim

Espen Vestre

Dec 11, 2001, 7:47:41 AM
tfb+g...@tfeb.org (Tim Bradshaw) writes:

> huge money. They'll also (and probably relatedly) get systems that
> run well on big commercial multiprocessor machines first, because Sun

...but systems that run on big fat Sun boxes tend to consist of more
than one process, so many of them can take advantage of multiple processors
anyway.

--
(espen)

cbbr...@acm.org

Dec 11, 2001, 9:16:04 AM
Espen Vestre <espen@*do-not-spam-me*.vestre.net> writes:
> tfb+g...@tfeb.org (Tim Bradshaw) writes:
> > huge money. They'll also (and probably relatedly) get systems
> > that run well on big commercial multiprocessor machines first,
> > because Sun

> ...but systems that run on big fat Sun boxes tend to consist of
> more than one process, so many of them can take advantage of
> multiple processors anyway.

An interesting URL to look at in this regard is an essay on the "C10K
Problem."
<http://www.kegel.com/c10k.html>

Its thesis is that one of the Interesting Problems is how to allow
web servers to support on the order of 10,000 simultaneous clients
(hence "C10K").

Falling out of this are the notable issues:
-> Each client will have some I/O requirements;
-> Each client needs to get serviced in a timely manner;

Lots of asynchrony is associated with this, and making it actually
_work_ is liable to require some clever ways of approaching threading,
asynchronous I/O, and such.
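One such approach, to make the idea concrete, is multiplexing many
connections onto one thread with a readiness-notification API. A minimal
self-contained sketch in Java NIO (the loopback client exists only so the
example runs on its own; a real server would loop forever and handle
partial reads):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

// One thread servicing many sockets via a Selector, instead of one
// thread or process per client. Sketch only, not a production server.
public class MultiplexSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Loopback client so the sketch is self-contained.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client =
            SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        client.write(ByteBuffer.wrap("ping".getBytes()));

        boolean done = false;
        while (!done) {
            selector.select();                      // block until some socket is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {           // new connection: just register it
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {      // data arrived; no thread was parked on it
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ((SocketChannel) key.channel()).read(buf);
                    System.out.println("served: "
                        + new String(buf.array(), 0, buf.position()));
                    done = true;
                }
            }
            selector.selectedKeys().clear();
        }
        client.close(); server.close(); selector.close();
    }
}
```

The scheduler never has to juggle 10,000 threads; the kernel reports which
descriptors are ready and one loop services them.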


--
(reverse (concatenate 'string "ac.ocitapmys@" "enworbbc"))

http://www.ntlug.org/~cbbrowne/languages.html
"Problem solving under linux has never been the circus that it is
under AIX." -- Pete Ehlke in comp.unix.aix

Bulent Murtezaoglu

Dec 11, 2001, 9:38:48 AM
>>>>> "cb" == cbbrowne <cbbr...@acm.org> writes:
[...]
cb> An interesting URL to look at in this regard is an essay on
cb> the "C10K Problem." <http://www.kegel.com/c10k.html>

I'll second that recommendation. Kegel does a good job of keeping that
page up to date.
[...]
cb> Lots of asynchonicity is associated with this, and making it
cb> actually _work_ is liable to require some clever ways of
cb> approaching threading, asynchronous I/O, and such.

Yes, I'd especially recommend this link to younger programmers, who
tend to think exclusively of spinning up threads or forking processes
when they need to deal with multiple clients. There are good examples
there, with benchmark data, of OS interfaces for multiplexing in ways
other than just relying on the scheduler.

cheers,

BM

Tim Bradshaw

Dec 11, 2001, 1:16:16 PM
Espen Vestre <espen@*do-not-spam-me*.vestre.net> wrote in message news:<w6y9k96...@wallace.ws.nextra.no>...

> ...but systems that run on big fat Sun boxes tend to consist of more
> than one process, so many of them can take advantage of multiple processors
> anyway.

Yes, this is true. But one reason for the big fat, expensive, box
rather than the cheap farm of linux machines is that there are crucial
synchronisation issues which the big box makes not too horrible, but
which are seriously awful on anything without fancy interconnect
and/or cache-coherency. I'd like Lisp to be applicable to those parts
of the problem too, rather than having to rely on oracle or some other
horror from the last millennium.

--tim

Carl Shapiro

Dec 12, 2001, 9:58:46 AM
tfb+g...@tfeb.org (Tim Bradshaw) writes:

> Yes, this is true. But one reason for the big fat, expensive, box
> rather than the cheap farm of linux machines is that there are crucial
> synchronisation issues which the big box makes not too horrible, but
> which are seriously awful on anything without fancy interconnect
> and/or cache-coherency.

Could you perhaps elaborate on what sort of synchronization issues you
think are made less horrible on the "big box" class of machine? From
my perspective, such hardware is now overwhelmingly employing some
flavor of NUMA, where multiprocessing is an awful lot more complex
than on your garden variety shared memory multiprocessor.

Will Deakin

Dec 12, 2001, 10:22:29 AM
Carl Shapiro wrote:

> Could you perhaps elaborate on what sort of synchronization issues you
> think are made less horrible on the "big box" class of machine?

Big boxes have lots and lots of wires and caches that connect the
processors, disks and memory together. For example, Sun sparc
chips have large L1 and L2 caches and fast(er) memory that help
sort out some of the bottleneck issues associated with multiple
processors.

> From my perspective, such hardware is now overwhelmingly employing
> some flavor of NUMA, where multiprocessing is an awful lot more

> complex than on your garden variety shared memory multiprocessor.

This depends on where you sit. Yes, the hardware is more complex,
but what the OS and the programmer see is much simpler. Rather
like a duck, a lot of paddling goes on to give the appearance of
serene, unruffled motion.

:)w

Tim Bradshaw

Dec 12, 2001, 2:04:39 PM
Carl Shapiro <csha...@panix.com> wrote in message news:<ouyy9k8...@panix3.panix.com>...

> Could you perhaps elaborate on what sort of synchronization issues you
> think are made less horrible on the "big box" class of machine? From
> my perspective, such hardware is now overwhelmingly employing some
> flavor of NUMA, where multiprocessing is an awful lot more complex
> than on your garden variety shared memory multiprocessor.

I think that the issue is the difference between best and worst cases.
The best case for virtually anything is some kind of embarrassingly
parallel problem where you basically need no communication at all
between the threads of control. COWs are great for those, witness
things like rendering farms and SETI.

The worst case (again for virtually anything) is something which is
communications-dominated, when those communications can't be pipelined
and so are seriously latency-sensitive. These kinds of problems work
really well on single-CPU machines because the comms cost is really
low there. But for a COW they're terrible, since even if you have the
interconnect bandwidth (which you generally don't), the latency really
hurts you. Non-pipelinable communications issues are what I mean by
synchronisation problems. If they're pipelinable they aren't really
synchronisation problems, since you can always carry on doing stuff
while the data makes its way down a fat but high-latency pipe to
wherever it's going. Things like locks on databases are really good,
really important examples.

Big box machines tend to spend an awful lot of time making the best
case reasonable and the worst case not too bad. Anyone can make the
best case arbitrarily good (especially now memory is so cheap that you
can likely afford enough in each box) but making the worst case not be
terrible requires a lot of really expensive hardware in the machine.
I think the observation that drives big box machines is that the worst
case happens quite often in commercially important workloads (oracle).
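To put rough numbers on the latency argument, here is a back-of-the-envelope
sketch; the round-trip figures are illustrative order-of-magnitude
assumptions, not measurements of any real machine:

```java
import java.util.Locale;

// Illustrative arithmetic only: serialized lock handoffs can't be
// pipelined, so each one pays a full round trip before the next begins.
// The latency figures below are assumptions for the sake of the example.
public class SyncCostSketch {
    static double seconds(long handoffs, double roundTripMicros) {
        return handoffs * roundTripMicros / 1_000_000.0;
    }

    public static void main(String[] args) {
        long handoffs   = 1_000_000;  // serialized synchronisation events
        double bigBoxUs = 0.5;        // assumed cache-coherent interconnect round trip
        double cowUs    = 100.0;      // assumed commodity-network round trip
        System.out.printf(Locale.ROOT, "big box: %.1f s, cluster: %.1f s%n",
                          seconds(handoffs, bigBoxUs), seconds(handoffs, cowUs));
        // prints: big box: 0.5 s, cluster: 100.0 s
    }
}
```

Extra bandwidth doesn't change the cluster number at all, which is the sense
in which the worst case is latency-dominated rather than bandwidth-dominated.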

--tim
