
CMU CL vs. CLISP?


Joseph O'Rourke

Jul 20, 1999
I am preparing to install a free Lisp for teaching Intro. to AI.
Are there any reasons to choose between CMU Common Lisp and
CLISP? I think both will run on my primary platform (SGI Irix 6.5),
and both run on other platforms. I cannot tell easily from the
documentation I've studied if one is more stable, thorough,
efficient, easier to interface to editors, etc., than the other.
I would appreciate advice from those with experience. Thanks!

Johan Kullstam

Jul 20, 1999

i would recommend CLISP. CLISP is a fairly complete common-lisp with
small memory footprint and it is easy to use (especially with built-in
gnu readline). CLISP will run on many operating systems including
linux and windows. (not that i'm a big fan of windows, but it *is*
common. this way students can use it easily at home.)

CMUCL is good, but it's a bit industrial strength. the compiler is
wordy and complains a lot about type inferences and such. this is, to
be sure, useful since CMUCL can produce fast number-crunching code,
but may be a bit overwhelming to the neophyte. CMUCL exists for a few
popular flavors of unix and afaik does not do windows.

both CLISP and CMUCL can be run from within EMACS. using the lisp source
editor and inferior lisp modes of EMACS makes my life easier.

--
J o h a n K u l l s t a m
[kull...@ne.mediaone.net]
Don't Fear the Penguin!

Pierpaolo Bernardi

Jul 21, 1999
Johan Kullstam (kull...@ne.mediaone.net) wrote:

: CMUCL is good, but it's a bit industrial strength.

What do you mean by `industrial strength'?

I would say the opposite is true. Clisp is being used for real
`industrial strength' projects. I don't think this is the case with
CMUCL (but I may be wrong).

P.

Mark Carroll

Jul 21, 1999
In article <932571082.483197@fire-int>,
Pierpaolo Bernardi <bern...@cli.di.unipi.it> wrote:
(snip)

>I would say the opposite is true. Clisp is being used for real
>`industrial strength' projects. I don't think this is the case with
>CMUCL (but I may be wrong).

I know of at least one commercial company that uses CMU CL for some
of its development. (XML based web database stuff, I believe...)

-- Mark

Gareth McCaughan

Jul 21, 1999
Johan Kullstam wrote:

> i would recommend CLISP. CLISP is a fairly complete common-lisp with
> small memory footprint and it is easy to use (especially with built-in
> gnu readline). CLISP will run on many operating systems including
> linux and windows. (not that i'm a big fan of windows, but it *is*
> common. this way students can use it easily at home.)
>
> CMUCL is good, but it's a bit industrial strength. the compiler is
> wordy and complains a lot about type inferences and such. this is, to
> be sure, useful since CMUCL can produce fast number-crunching code,
> but may be a bit overwhelming to the neophyte. CMUCL exists for a few
> popular flavors of unix and afaik does not do windows.

On the other hand, CMUCL does have a native-code compiler, and
CLISP doesn't, so if performance matters CMUCL may win. (For some
purposes CLISP will likely be *faster* than CMUCL, though; I'm
told its bignums are especially good.)

The version of CLISP I have (which is admittedly a bit old) also
doesn't grok inline functions. This, plus the fact that "built-in"
operations tend to be much faster than build-it-yourself ones,
does slightly discourage one from constructing abstractions;
that's a shame.

None of this means I don't like CLISP, by the way. It's a great
piece of software, especially in view of its ability to live in
small machines.

--
Gareth McCaughan Gareth.M...@pobox.com
sig under construction

Pierre R. Mai

Jul 22, 1999
Mark Carroll <ma...@chiark.greenend.org.uk> writes:

I know of at least one commercial company that uses CMU CL for its
factory-floor simulation software suite... ;)

While CLISP is a nice implementation, it has serious problems when you
use it for large to huge data-sets, IMHO.

There is also another annoying little problem with CLISP: while I
generally have little trouble keeping stuff portable across most
other CL implementations, CLISP often disagrees with all the other
implementations on some things[1].

Things may have changed since the last time I tried to port some
things to CLISP though, so YMMV...

Regs, Pierre.

Footnotes:
[1] This doesn't necessarily mean that CLISP is wrong, or non-conforming.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Bernhard Pfahringer

Jul 22, 1999
In article <87ogh55...@orion.dent.isdn.cs.tu-berlin.de>,
Pierre R. Mai <pm...@acm.org> wrote:
>
>There is also another annoying little problem with CLISP: While I
>generally have little problems keeping stuff portable across most
>other CL implementations, CLISP often disagrees with all other
>implementations on some things[1].
>

Are you aware of the "ANSI" flag of CLISP:

-a ANSI CL compliant: Comply with the ANSI CL specification
even on those issues where ANSI CL is broken. This option is
provided for maximum portability of Lisp programs. It is not
useful for actual everyday work.

I've only recently discovered that flag; it can be helpful at times
(should RTFM more often :-)

Bernhard
--
--------------------------------------------------------------------------
Bernhard Pfahringer
Austrian Research Institute for http://www.ai.univie.ac.at/~bernhard/
Artificial Intelligence bern...@ai.univie.ac.at

William Deakin

Jul 22, 1999
As an aside, I am running clisp but would like to get hold of a copy of
CMU CL and have heard the debian CMU CL highly recommended. I am an out
of hours slackware-y and have not been able to track down the debian
package. This has been exacerbated by the search engine at debian.org
not working :-( Could anybody help me in my quest?

:-) Will


Friedrich Dominicus

Jul 22, 1999


You'd better ask this question on some debian mailing-list. I guess you
have to provide some infrastructure to get the *deb files up and
running; at least you have to have the dpkg kit.

Regards
Friedrich

Pierre R. Mai

Jul 22, 1999
Friedrich Dominicus <Friedrich...@inka.de> writes:

If I were him, I'd just download the deb package and run alien on it
to get a slackware package. To find the current CMU CL package (which
probably needs GLIBC 2.1, though), go to the packages section from the
Debian home page (link is somewhere on the left), then go to the
unstable packages section (second link from top). There you'll find
CMU CL packages in the Devel section. On the info page, press the
download page button, and select the location nearest to you... Run
alien over the DEB package and enjoy.

If you only have GLIBC 2.0, you might want to get the old stable
version of CMU CL (2.4.9) instead, which is in the Devel section of
the stable packages.

There are also a couple of other cmucl-related packages you might want
to get (most start with cmucl-).

If you don't have access to alien, you might get by by unpacking the
deb archive yourself (debs are ar archives, which contain two
tarballs: one with the control information, and one with the files, to
be unpacked into the root directory).
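Roughly, the two-tarball layout described above can be sketched with a throwaway archive. All filenames here are invented for illustration, and a real Debian package also carries a small `debian-binary' member alongside the two tarballs:

```shell
set -e

# Build a toy archive with the same two-tarball layout as a .deb.
mkdir -p pkgroot/usr/bin
echo 'fake lisp binary' > pkgroot/usr/bin/lisp
tar czf data.tar.gz -C pkgroot .          # the files payload
printf 'Package: cmucl-demo\n' > control
tar czf control.tar.gz control            # the control information
ar rc cmucl-demo.deb control.tar.gz data.tar.gz
rm control.tar.gz data.tar.gz             # pretend we only have the .deb

# Unpacking by hand: list the members, extract, untar the payload.
ar t cmucl-demo.deb                       # control.tar.gz, data.tar.gz
ar x cmucl-demo.deb
mkdir -p unpacked
tar xzf data.tar.gz -C unpacked           # a real package untars into /
ls unpacked/usr/bin
```

The same `ar x` plus `tar xzf` steps apply to a real cmucl .deb, just with / as the target directory instead of a scratch one.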

On Debian, getting CMU CL is just as simple as typing

apt-get install cmucl

on your command line ;)

Regs, Pierre.

Pierre R. Mai

Jul 22, 1999
bern...@hummel.ai.univie.ac.at (Bernhard Pfahringer) writes:

> Are you aware of the "ANSI" flag of CLISP:
>
> -a ANSI CL compliant: Comply with the ANSI CL specification
> even on those issues where ANSI CL is broken. This option is
> provided for maximum portability of Lisp programs. It is not
> useful for actual everyday work.
>
> I've only recently discovered that flag, it can be helpful at times
> (should RTFM more often :-)

Interesting. This seems to be a "recent" addition (well, I haven't
kept up to date with CLISP too closely in recent times...). Maybe I'll
try to revisit CLISP for some things...

Pierpaolo Bernardi

Jul 22, 1999
Gareth McCaughan (Gareth.M...@pobox.com) wrote:
: Johan Kullstam wrote:

: > CMUCL is good, but it's a bit industrial strength. the compiler is
: > wordy and complains a lot about type inferences and such. this is, to
: > be sure, useful since CMUCL can produce fast number-crunching code,
: > but may be a bit overwhelming to the neophyte. CMUCL exists for a few
: > popular flavors of unix and afaik does not do windows.

: On the other hand, CMUCL does have a native-code compiler, and
: CLISP doesn't, so if performance matters CMUCL may win.

In my understanding, `industrial strength' means correct and supported.

: (For some
: purposes CLISP will likely be *faster* than CMUCL, though; I'm
: told its bignums are especially good.)

try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
and Allegro.

: The version of CLISP I have (which is admittedly a bit old) also
: doesn't grok inline functions.

Allegro doesn't grok them either.

In Clisp, functions are inlined by the file compiler, but not by
COMPILE (as far as I can remember, Clisp has always worked in this
way).

: This, plus the fact that "built-in"
: operations tend to be much faster than build-it-yourself ones,
: does slightly discourage one from constructing abstractions;
: that's a shame.

I don't understand this. You are complaining that built-in functions
are too fast?

Should be easy to fix. Just insert a delay in the interpreter loop
whenever a built-in function is called. You may even make this delay
so big as to make build-it-yourself functions more convenient, thus
encouraging constructing abstractions.

P.

Tim Bradshaw

Jul 22, 1999
* Pierpaolo Bernardi wrote:

> try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
> and Allegro.

Bignum performance is not always on the critical path for Lisp
applications.


> : This, plus the fact that "built-in"
> : operations tend to be much faster than build-it-yourself ones,
> : does slightly discourage one from constructing abstractions;
> : that's a shame.

> I don't understand this. You are complaining that built-in functions
> are too fast?

No, he's complaining that the byte compiler is too *slow*, so code you
write is always much slower than anything built in. So you are
encouraged to use the builtin types & functions rather than write your
own. Which is what he wrote.

> Should be easy to fix. Just insert a delay in the interpreter loop
> whenever a built-in function is called. You may even make this delay
> so big as to make build-it-yourself functions more convenient, thus
> encouraging constructing abstractions.

Ho ho.

--tim

Gareth McCaughan

Jul 22, 1999
Pierpaolo Bernardi wrote:

> Gareth McCaughan (Gareth.M...@pobox.com) wrote:
>: Johan Kullstam wrote:
>
>:> CMUCL is good, but it's a bit industrial strength. the compiler is
>:> wordy and complains a lot about type inferences and such. this is, to
>:> be sure, useful since CMUCL can produce fast number-crunching code,
>:> but may be a bit overwhelming to the neophyte. CMUCL exists for a few
>:> popular flavors of unix and afaik does not do windows.
>
>: On the other hand, CMUCL does have a native-code compiler, and
>: CLISP doesn't, so if performance matters CMUCL may win.
>
> In my understanding, `industrial strength' means correct and supported.

Is this meant to be a reply to what I wrote, or to what Johan wrote?
(I wasn't the one who said that CMU CL was "a bit industrial strength".)

>: (For some
>: purposes CLISP will likely be *faster* than CMUCL, though; I'm
>: told its bignums are especially good.)
>

> try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
> and Allegro.

I presume this would simply confirm what I already said.

>: The version of CLISP I have (which is admittedly a bit old) also
>: doesn't grok inline functions.
>
> Allegro doesn't grok them either.
>
> In Clisp, functions are inlined by the file compiler, but not by
> COMPILE (as far as I can remember, Clisp has always worked in this
> way).

That's interesting; I hadn't realised it was so. Thanks for the
information.

>: This, plus the fact that "built-in"
>: operations tend to be much faster than build-it-yourself ones,
>: does slightly discourage one from constructing abstractions;
>: that's a shame.
>
> I don't understand this. You are complaining that built-in functions
> are too fast?

No, I'm complaining that user code is too slow.

> Should be easy to fix. Just insert a delay in the interpreter loop
> whenever a built-in function is called. You may even make this delay
> so big as to make build-it-yourself functions more convenient, thus
> encouraging constructing abstractions.

A brilliant idea. I'll do it at once.

Pierpaolo Bernardi

Jul 23, 1999
Gareth McCaughan (Gareth.M...@pobox.com) wrote:

: Pierpaolo Bernardi wrote:
: > Gareth McCaughan (Gareth.M...@pobox.com) wrote:
: >: Johan Kullstam wrote:

: > In my understanding, `industrial strength' means correct and supported.

: Is this meant to be a reply to what I wrote, or to what Johan wrote?
: (I wasn't the one who said that CMU CL was "a bit industrial strength".)

Sorry. I mixed two replies which should have been better separated.

: > try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
: > and Allegro.

: I presume this would simply confirm what I already said.

Yes. But maybe more than you thought.

: > I don't understand this. You are complaining that built-in functions
: > are too fast?

: No, I'm complaining that user code is too slow.

I know of no bytecoded lisp faster than clisp.

P.

Damond Walker

Jul 23, 1999

Pierpaolo Bernardi wrote in message <932748524.839709@fire-int>...

>: > try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
>: > and Allegro.


Quick question... Is (random (expt 10 500)) supposed to be a slow
operation? Runs almost instantly with clisp on my little machine (dual
pentium-133 -- does clisp support SMP systems?). I ran this straight at the
prompt. Is there some kind of generally accepted benchmark for clisp,
cmucl, etc.?

Damond

Lars Marius Garshol

Jul 23, 1999

* Damond Walker

|
| Quick question... Is (random (expt 10 500)) supposed to be a slow
| operation?

I don't think so. It runs immediately on CMUCL, CLISP and Allegro,
although Allegro chokes with some floating-point-related complaint.

--Lars M.

Pierre R. Mai

Jul 23, 1999
bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> : No, I'm complaining that user code is too slow.
>
> I know of no bytecoded lisp faster than clisp.

This does not contradict his complaint.

While bytecodes buy you a couple of things (like smaller image size,
smaller compiler complexity and often greater portability), you take
a performance hit against native code. Those are the trade-offs.
This doesn't make CLISP a bad CL implementation. No need to defend it
so avidly. Neither is CLISP always better than CMU CL, nor is CMU CL
always better than CLISP. There are certain things that CMU CL does
better, certain things that CLISP does better, and yet other
things that neither of them is really good at. So you have to live
with the trade-offs and choose the implementation that best satisfies
your requirements. That's life.

Christopher R. Barry

Jul 23, 1999
bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> I know of no bytecoded lisp faster than clisp.

I know of no Common Lisp slower than CLISP.

The reason people are complaining that user functions in CLISP are
a lot slower than predefined functions is that nearly all of the
CLISP predefined functions are written in C, while in a "real" Lisp
the Lisp is written in Lisp and the compiler is smart enough to
efficiently compile its own Lisp and user Lisp.

As for the bignum performance of CLISP; yes, it's amazing. Try taking
the factorial of 100000 sometime. CLISP will return the 500k result in
under 13 minutes on a 200MHz MMX Pentium and the heap will never grow
above a few MB. With CMU or Allegro you'll need gigs of swap to do
this and it would be way slower. If you look at Haible's page you'll
see he's written a fair amount of C/C++ numerical code and he's authored
many mathematical papers, so he has particular skill in the area. (The
CLISP bignum routines are all C.)

CLISP's floats are about 10-40x slower than CMUCL's, which in many
cases are 3-4x faster than the next best commercial offering. But who
cares? I'd trade it for Allegro's CLOS performance any day.

Christopher

Robert Monfera

Jul 23, 1999
Gareth McCaughan wrote:
...

> > Should be easy to fix. Just insert a delay in the interpreter loop
> > whenever a built-in function is called. You may even make this delay
> > so big as to make build-it-yourself functions more convenient, thus
> > encouraging constructing abstractions.
>
> A brilliant idea. I'll do it at once.
...

Be careful though - you should make built-in primitive functions slower
only if you do not use them in user-constructed primitives - otherwise
those would be slower too.

Maybe one could introduce a declaration to somehow distinguish between
built-in primitives and primitives built by the programmer.

Robert

Christopher R. Barry

Jul 24, 1999
Tim Bradshaw <t...@tfeb.org> writes:

> * Christopher R Barry wrote:
>> As for the bignum performance of CLISP; yes, it's amazing. Try taking
>> the factorial of 100000 sometime. CLISP will return the 500k result in
>> under 13 minutes on a 200MHz MMX Pentium and the heap will never grow
>> above a few MB. With CMU or Allegro you'll need gigs of swap to do
>> this and it would be way slower.

> I don't have figures for Allegro, but CMUCL 18a on a 333MHz sparc took
> 13 minutes and some seconds. Given 64Mb between GCs it never got above
> 67,843,456 bytes dynamic space in use, and never above 691,920 bytes
> retained after GC. So it would probably have run with a dynamic space
> of 2Mb or so but it would have GCd a lot more often and probably
> runtime would have been GC dominated.

> This is a bit slower and a bit bigger but definitely neither gigs of
> swap nor way slower.

My box here is a 64MB 200MHz MMX Pentium. Last I tried it with CMU CL I
had 128MB swap and it exhausted all my swap and all the windows in X
started dying and it took me several minutes to get an xterm up so I
could kill -9 the damn thing. I've got 512MB swap now and I'll try it
after I sleep and post the result. [It's 5am in California, one of those
Friday nights....]

Christopher


Tim Bradshaw

Jul 24, 1999

Does CMUCL have some fancy GC on x86 now? That sounds like exactly
the sort of syndrome you might get from a generational collector
where intermediate results are being tenured bogusly.

--tim

Christopher R. Barry

Jul 24, 1999
Tim Bradshaw <t...@tfeb.org> writes:

Yes, x86 has gengc. I tried it again and it ran for about 15 minutes
without swapping out and then it filled all 512MB of my VM and I had to
kill it.

Christopher

Erik Naggum

Jul 25, 1999
* bern...@cli.di.unipi.it (Pierpaolo Bernardi)
| Allegro doesn't grok them [inline functions] either.

Allegro CL inlines system functions, but not user-defined functions.
various measures can be used to obtain the speed effect without the code
bloat effect.

| I don't understand this. You are complaining that built-in functions
| are too fast?

it's a very valid concern with CLISP because it means that any attempt to
make use of the powerful abstractions that Common Lisp offers will cost
you a tremendous lot in performance. the code that people write in CLISP
looks like Lisp Assembler -- they go to great lengths to use built-in
functions for speed.

| Should be easy to fix. Just insert a delay in the interpreter loop
| whenever a built-in function is called. You may even make this delay
| so big as to make build-it-yourself functions more convenient, thus
| encouraging constructing abstractions.

I take it that you mean that encouraging abstraction is bad. if so,
I concede that CLISP offers you the best choice, bar none.

#:Erik
--
suppose we blasted all politicians into space.
would the SETI project find even one of them?

R. Toy

Jul 25, 1999
Christopher R. Barry wrote:
>
> CLISP's floats are about 10-40x slower than CMUCL's, which in many
> cases are 3-4x faster than the next best commercial offering. But who
> cares?

I do. So do several of the key developers of CMUCL. Plus, it's the
only Lisp I know that doesn't GC to death when working with complex
numbers. This is a major win for me.

Ray

Tim Bradshaw

Jul 26, 1999

OK, well this looks like the difference between clisp and cmucl has
nothing really to do with bignum performance but more to do with this
particular problem being a screw case for CMUCL's more sophisticated
GC. I suspect if you can turn off the generational stuff or tweak the
parameters so it isn't tenuring a large number of intermediate
results, then it will perform within a small factor of clisp, as it
does on sparc.

--tim

Pierpaolo Bernardi

Jul 26, 1999
Erik Naggum (er...@naggum.no) wrote:
: * bern...@cli.di.unipi.it (Pierpaolo Bernardi)

: | Allegro doesn't grok them [inline functions] either.

: Allegro CL inlines system functions, but not user-defined functions.
: various measures can be used to obtain the speed effect without the code
: bloat effect.

I know this. I thought that the original comment said that the
compiler didn't obey inline declarations. Maybe I have misread.

: | I don't understand this. You are complaining that built-in functions
: | are too fast?

: it's very valid concern with CLISP because it means that any attempt to
: make use of the powerful abstractions that Common Lisp offers will cost
: you a tremendous lot in performance.

If a programmer writes bad code, that is the programmer's problem. He
should not blame the lisp implementation he's using.

: the code that people write in CLISP
: looks like Lisp Assembler -- they go to great lengths to use built-in
: functions for speed.

I have never noticed this. And it certainly is not true for code that I
write. Can you point me to any publicly available example?

What code does exist, outside of the Clisp implementation, that is
optimized for Clisp? I don't know of any.

And, IMO, if this turns out to be the case, a likely explanation could
be that a beginner is more likely to be using Clisp than a native code
compiler, for the obvious price reasons.

: | Should be easy to fix. Just insert a delay in the interpreter loop
: | whenever a built-in function is called. You may even make this delay
: | so big as to make build-it-yourself functions more convenient, thus
: | encouraging constructing abstractions.

: I take it that you mean that encouraging abstraction is bad.

You are wrong. I don't mean this, and I can't see how you can conclude
that I mean this from what I have written.

P.

Christopher B. Browne

Jul 26, 1999
On 26 Jul 1999 01:55:23 GMT, Pierpaolo Bernardi <bern...@cli.di.unipi.it> posted:

>Erik Naggum (er...@naggum.no) wrote:
>: | Should be easy to fix. Just insert a delay in the interpreter loop
>: | whenever a built-in function is called. You may even make this delay
>: | so big as to make build-it-yourself functions more convenient, thus
>: | encouraging constructing abstractions.
>
>: I take it that you mean that encouraging abstraction is bad.
>
>You are wrong. I don't mean this, and I can't see how you can conclude
>that I mean this from what I have written.

It's reasonable to expect that if "native operators" work more efficiently
than "generated ones," this will encourage developers to prefer using
"native" ones.

That being said, there are two confounding effects:
a) Constructing your own operators using macros provides a direct
translation of "generated operators" into "native" ones, which mitigates
the problem.

b) Constructing your own "language" supplies a "cost of comprehension."
Is it preferable for a new developer to:
1: Learn "regular, colloquial" Lisp, or
2: Learn your variations on Lisp, namely the language that is
"Lisp Plus Some K001 operators we made up."

I'd tend to think it preferable to go to "tried and true" traditional
Lisp, as that is used by a much wider community.
--
"If you were plowing a field, which would you rather use? Two strong oxen
or 1024 chickens?"
-- Seymour Cray
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>

Erik Naggum

Jul 26, 1999
* bern...@cli.di.unipi.it (Pierpaolo Bernardi)

| I know this. I thought that the original comment said that the compiler
| didn't obey inline declarations. Maybe I have misread.

that may be what he meant, but he said "doesn't grok inline functions".
since it is easy to misunderstand this (watch what people have taken
pretty clear statements to mean in here recently) to mean that Allegro CL
doesn't inline system functions, either, I thought it was worth pointing
out. as a side note, NOTINLINE declarations are of course honored.

| If a programmer writes bad code, that is the programmer's problem. He should
| not blame the lisp implementation he's using.

sigh. the exact same argument can be used about programming languages.
it seems you go out of your way to refuse to understand the issue in
favor of defending CLISP, so I give up, but will just make a mental note
that CLISP _still_ needs defending by people who refuse to listen to
criticism, like it has in the past.

| You are wrong. I don't mean this, and I can't see how you can conclude
| that I mean this from what I have written.

it's a pretty obvious conclusion from your silly refusal to understand
the criticism and crack jokes about a serious concern.

Raymond Toy

Jul 26, 1999
>>>>> "Christopher" == Christopher R Barry <cba...@2xtreme.net> writes:

Christopher> Yes, x86 has gengc. I tried it again and it ran for about 15 minutes
Christopher> without swapping out and then it filled all 512MB of my VM and I had to
Christopher> kill it.

I think, but I'm not sure, that the problem is in printing out the
number, not in computing it.

There is at least one known test case where the gencgc on x86 leaks
memory. The sparc port with its simpler GC doesn't leak memory in
this case.

Ray

Pierre R. Mai

Jul 26, 1999
bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

[rest snipped]

> And, IMO, if this turns out to be the case, a likely explanation could
> be that a beginner is more likely to be using Clisp than a native code
> compiler, for the obvious price reasons.

What _price_ reasons are there that keep a _beginner_ from using
either CMU CL or one of the free versions of Allegro CL or Harlequin's
LispWorks?

CLISP might have been ported more widely than most other
implementations, thus being more available, but I don't see a price
reason for this...

Simon Leinen

Jul 26, 1999
[on using 10000! as a Lisp bignum benchmark]

>>>>> "rt" == Raymond Toy <t...@rtp.ericsson.se> writes:
> I think, but I'm not sure, that the problem is trying to print out the
> number, not in computing it.

Right, printing the result is generally more expensive than computing
it. Allegro 5.0/Linux on a 266 MHz Pentium:

NOC(8): (declaim (optimize (speed 3) (space 1) (debug 0)))
T
NOC(9): (defun fac (x) (labels ((fac1 (x y) (declare (type unsigned-byte x y)) (if (= x 0) y (fac1 (1- x) (* x y))))) (fac1 x 1)))
FAC
NOC(10): (compile 'fac)
FAC
NIL
NIL
NOC(11): (progn (time (setq result (fac 10000))) (values))
; cpu time (non-gc) 10,250 msec user, 40 msec system
; cpu time (gc) 1,510 msec user, 40 msec system
; cpu time (total) 11,760 msec user, 80 msec system
; real time 12,683 msec
; space allocation:
; 1 cons cell, 0 symbols, 78,713,384 other bytes, 0 static bytes
NOC(12): (progn (time (print result)) (values))

[35660 digits omitted]
; cpu time (non-gc) 14,090 msec user, 40 msec system
; cpu time (gc) 0 msec user, 0 msec system
; cpu time (total) 14,090 msec user, 40 msec system
; real time 14,630 msec
; space allocation:
; 9 cons cells, 0 symbols, 15,008 other bytes, 1646208 static bytes
NOC(13):

Regards,
--
Simon.

Bruno Haible

Jul 26, 1999
Christopher R. Barry <cba...@2xtreme.net> wrote:
>
>> I know of no bytecoded lisp faster than clisp.
>
> I know of no Common Lisp slower than CLISP.

Then try Poplog (http://www.elwood.com/alu/table/systems.htm#poplog).
Last time I tried it, it ran about the same speed as CLISP.

Aside from that, CLISP is more portable than other implementations. You
will see what that's worth when you buy a new machine with an IA-64 CPU.
How long, do you think, will it take for CMUCL's, Allegro CL's, or
LispWorks's compiler to be modified to generate code for that CPU?
For CLISP, you'll have to modify a few #defines in the include file - or
it could even be completely autoconfiguring by then - and you compile it.

> The reason why people are complaining that user functions in CLISP are
> a lot slower than predefined functions is because nearly all of the
> CLISP predefined functions are written in C

Yeah, I understand that. You optimize 700 functions by hand for them, and
then they complain about it. It's a pity.

> while in a "real" Lisp the Lisp is written in Lisp and the compiler is
> smart enough to efficiently compile its own Lisp and user Lisp.

... and a "real" Lisp carries its own operating system, and its own windowing
system. And is expensive. And doesn't run on stock hardware...

Can you please put aside these prejudices about "real" Lisps which you
borrowed from past decades?

CLISP is different.

* It runs fine in an xterm, and is therefore accessible to non-Emacs users.

* Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
times as high as a shell's startup time. You can therefore use it as a
script interpreter (with structures and CLOS), or as a CGI interpreter.

* It supports Unicode, not just as an add-on, but right from the start: The
`character' type is Unicode (16 bit). CLISP is therefore the instrument of
choice for manipulating HTML or XML text.

* Its CLX implementation uses libX11 and is therefore up-to-date with all
recent X11 developments.

Yes, CLISP is different.

Bruno http://clisp.cons.org/


Bruno Haible

Jul 26, 1999
Erik Naggum <er...@naggum.no> wrote:
>
> it's very valid concern with CLISP because it means that any attempt to
> make use of the powerful abstractions that Common Lisp offers will cost
> you a tremendous lot in performance. the code that people write in CLISP
> looks like Lisp Assembler -- they go to great lengths to use built-in
> functions for speed.

Maybe. On the other hand, I've seen that people write code which avoids
Common Lisp built-in data types, or even simulates these data types:

- Garnet uses its own kind of self-made hash tables. Are the vendors'
hash tables too slow, or do they have an unusable hash function?
CLISP at least has fast hash tables, and gets the hash function right.

- Gilbert Baumann, when writing a universal lexer/parser, stopped using
bit-vectors, because in some implementation, compiling to native code,
bit-vectors were unbearably slow.
CLISP at least has fast bit-vectors.

Providing a native code compiler with all bells and whistles is respectable,
but it is not an excuse for badly implementing Common Lisp's datatypes.

Bruno http://clisp.cons.org/


Gareth McCaughan

Jul 26, 1999
Pierpaolo Bernardi wrote:

[#\Erik:]
>: it's a very valid concern with CLISP because it means that any attempt to


>: make use of the powerful abstractions that Common Lisp offers will cost
>: you a tremendous lot in performance.
>

> If a programmer writes bad code, is that programmer's problem. He
> should not blame the lisp implementation he's using.

Really?

Let's take a more extreme example. Suppose you have a Lisp compiler
that screws up whenever you try to do simple CLOS things: it gives
wrong answers, or goes into an infinite loop, or something. If you
are (for whatever reason) using this implementation, and you avoid
using the features that produce these terrible results, does that
make you a bad programmer? Is it your problem rather than that of
the implementation?

Now, suppose that instead of actually going into an *infinite* loop,
the system just behaves really appallingly slowly when using those
features: say, a factor of 10^6 slower than it "ought" to be. If you
avoid using features that lead to a catastrophic slowdown, is that
bad practice? Should you be blamed for writing bad programs, not the
implementation for making it harder to write good ones?

What if it's a factor of 10^5? 10^4? 10^3? 10^2? At this point we're
right in the CLISP ball-park, I think.

Is it bad practice for a programmer to write his code so that it
doesn't go unbearably slowly on his system?

>: the code that people write in CLISP


>: looks like Lisp Assembler -- they go to great lengths to use built-in
>: functions for speed.
>

> I have never noticed this. And it certainly is not true for code that I
> write. Can you point me to any publically available example?
>
> What code does exist, outside of the Clisp implementation, that is
> optimized for Clisp? I don't know of any.

I have written code that's sort-of optimised for CLISP. More
precisely, I've done whatever I had to to get performance good
enough for my purposes on CLISP. The resulting code doesn't look
like "Lisp Assembler", but that may just indicate that I don't
know much about optimising code for CLISP or that I care about
things other than performance too.

Gareth McCaughan

Jul 27, 1999
Bruno Haible wrote:

[all snipped, in fact]

Why is it that when someone says "CLISP is great, but it's
rather slow for many things" people jump up and say "That's
unfair! CLISP is great!" ?

CLISP is a lovely system. It's just a pity it does many things
so slowly. (I am aware that many of its benefits are consequences
of the same decisions that lead also to its slowness.)

Christopher Browne

Jul 27, 1999
On 26 Jul 1999 21:51:31 GMT, Bruno Haible <hai...@clisp.cons.org> wrote:
>Christopher R. Barry <cba...@2xtreme.net> wrote:
>>
>>> I know of no bytecoded lisp faster than clisp.
>>
>> I know of no Common Lisp slower than CLISP.
>
>Then try Poplog (http://www.elwood.com/alu/table/systems.htm#poplog).
>Last time I tried it, it ran about the same speed as CLISP.
>
>Aside from that, CLISP is more portable than other implementations. You
>will see what that's worth when you buy a new machine with an IA-64 CPU.
>How long, do you think, will it take for CMUCL's, Allegro CL's, or
>LispWorks's compiler to be modified to generate code for that CPU?
>For CLISP, you'll have to modify a few #defines in the include file - or
>it could even be completely autoconfiguring by then -, and you compile it.

It seems to me that CLISP subscribes somewhat to Richard Gabriel's
"Worse is Better" edict.

<http://www.ai.mit.edu/docs/articles/good-news/good-news.html>

CLISP may not be as good as the "other guys" in terms of performance,
but if it can be readily ported to new architectures as a side-effect
of people porting C compilers over, then CLISP may "win the race."

One thing I'm not clear on: Erik Naggum "blasts" CLISP pretty heavily
for nonconformance with X3J13. I'm not sure to what extent this
represents:
a) His biases,
b) His perception of *past* noncompliant states of CLISP,
c) Continuing noncompliance of CLISP with X3J13.

Free software has some tendency not to be *really* compliant with
standards. After all, adding functionality is a whole lot more fun
than:
a) Writing up regression tests,
b) Fixing bugs relating to nonstandard behaviour,
c) Rerunning regression tests to make sure new
functionality/optimization didn't break anything.

>> The reason why people are complaining that user functions in CLISP are
>> a lot slower than predefined functions is because nearly all of the
>> CLISP predefined functions are written in C
>
>Yeah, I understand that. You optimize 700 functions by hand for them, and
>then they complain about it. It's a pity.

It's something of a pity that the optimization took place in C.

It would be *really* slick if CLISP were largely "self-hosting," with
the main body of code written in Lisp, producing C code that was
optimized by a Lisp-based optimizer.

That would permit extending the implementation with "real fast"
operators where needed.

>> while in a "real" Lisp the Lisp is written in Lisp and the compiler is
>> smart enough to efficiently compile its own Lisp and user Lisp.
>
> ... and a "real" Lisp carries its own operating system, and its own
> windowing system. And is expensive. And doesn't run on stock
> hardware...
>
>Can you please put aside these prejudices about "real" Lisps which you
>borrowed from past decades?

Symbolics and Lisp Machines went out of business as a result of some
of the side-effects of those "prejudices," as well as because "Worse
Is Better."

>CLISP is different.
>
>* It runs fine in an xterm, and is therefore accessible to non-Emacs users.
>
>* Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
> times as high as a shell's startup time. You can therefore use it as a
> script interpreter (with structures and CLOS), or as a CGI interpreter.
>
>* It supports Unicode, not just as an add-on, but right from the start: The
> `character' type is Unicode (16 bit). CLISP is therefore the instrument of
> choice for manipulating HTML or XML text.
>
>* Its CLX implementation uses libX11 and is therefore up-to-date with all
> recent X11 developments.
>
>Yes, CLISP is different.

If CLISP can take advantage of those aspects of "Worse is Better" that
it can, without damaging performance *too* much, it can do well.

I'm not sure how easy/hard it is to extend it with further functions;
if a tuning process shows that there are a couple of functions that
should be turned into inline C so as to immensely improve performance,
and there is some support for automagically generating that C, then it
can be kept Fast Enough.

--
"Bawden is misinformed. Common Lisp has no philosophy. We are held
together only by a shared disgust for all the alternatives."
-- Scott Fahlman, explaining why Common Lisp is the way it is....
cbbr...@hex.net- <http://www.ntlug.org/~cbbrowne/langlisp.html>

Pierpaolo Bernardi

Jul 27, 1999
Gareth McCaughan (Gareth.M...@pobox.com) wrote:

: Why is it that when someone says "CLISP is great, but it's


: rather slow for many things" people jump up and say "That's
: unfair! CLISP is great!" ?

This has not been the case this time.

: CLISP is a lovely system. It's just a pity it does many things


: so slowly. (I am aware that many of its benefits are consequences
: of the same decisions that lead also to its slowness.)

But you have not complained that Clisp is slow! You have complained
that some parts of Clisp are too fast.

How can it be that you don't see the strangeness of this statement of
yours?

P.


Pierpaolo Bernardi

Jul 27, 1999
Christopher B. Browne (cbbr...@news.brownes.org) wrote:
: On 26 Jul 1999 01:55:23 GMT, Pierpaolo Bernardi <bern...@cli.di.unipi.it> posted:

: It's reasonable to expect that if "native operators" work more efficiently


: than "generated ones," this will encourage developers to prefer using
: "native" ones.

Maybe it is reasonable for a perl hacker.

A professional programmer should be concerned about the correctness and
maintainability of his code.

If, to be correct, his code needs to run faster than Clisp can
provide, he should not be using Clisp.

: That being said, there are two confounding effects:

The rest of your post is right, but not relevant.

P.

Pierpaolo Bernardi

Jul 27, 1999
Erik Naggum (er...@naggum.no) wrote:
: * bern...@cli.di.unipi.it (Pierpaolo Bernardi)

: it seems you go out of your way to refuse to understand the issue in


: favor of defending CLISP, so I give up, but will just make a mental note
: that CLISP _still_ needs defending by people who refuse to listen to
: criticism, like it has in the past.

I just reacted to a puzzling affirmation and corrected a bit of false
information.

: | You are wrong. I don't mean this, and I can't see how you can conclude


: | that I mean this from what I have written.

: it's a pretty obvious conclusion from your silly refusal to understand
: the criticism and crack jokes about a serious concern.

I promise I will never joke again about Clisp's builtins being too fast.

P.

Pierpaolo Bernardi

Jul 27, 1999
Tim Bradshaw (t...@tfeb.org) wrote:
: * Pierpaolo Bernardi wrote:

: > try comparing the speed of (random (expt 10 500)) on Clisp, CMUCL
: > and Allegro.

: Bignum performance is not always on the critical path for Lisp
: applications.

And indeed I'm not concerned principally with bignum speed. I am more
concerned about the apparent lack of care that some implementors put
into implementing such basic functions as RANDOM. Please try my example
on ACL.

P.
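The comparison above can be reproduced with a few lines typed at each implementation's prompt; this is only a sketch, and the absolute numbers depend entirely on the machine and Lisp version:

```lisp
;; Time RANDOM with a 500-digit bignum bound, as in the example
;; above.  The result is meaningful only relative to the same loop
;; run in another implementation on the same machine.
(defparameter *bound* (expt 10 500))

(time (dotimes (i 1000)
        (random *bound*)))
```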

Pierpaolo Bernardi

Jul 27, 1999
Gareth McCaughan (Gareth.M...@pobox.com) wrote:
: Pierpaolo Bernardi wrote:
...
: > If a programmer writes bad code, is that programmer's problem. He

: > should not blame the lisp implementation he's using.

: Really?

Yes.

: Let's take a more extreme example. Suppose you have a Lisp compiler


: that screws up whenever you try to do simple CLOS things: it gives
: wrong answers, or goes into an infinite loop, or something.

You cannot conflate bugs with slowness.

: If you


: are (for whatever reason) using this implementation, and you avoid
: using the features that produce these terrible results, does that
: make you a bad programmer? Is it your problem rather than that of
: the implementation?

Definitely.

: Now, suppose that instead of actually going into an *infinite* loop,


: the system just behaves really appallingly slowly when using those
: features: say, a factor of 10^6 slower than it "ought" to be. If you
: avoid using features that lead to a catastrophic slowdown, is that
: bad practice? Should you be blamed for writing bad programs, not the
: implementation for making it harder to write good ones?

: What if it's a factor of 10^5? 10^4? 10^3? 10^2? At this point we're
: right in the CLISP ball-park, I think.

: Is it bad practice for a programmer to write his code so that it
: doesn't go unbearably slowly on his system?

Yes.

: >: the code that people write in CLISP


: >: looks like Lisp Assembler -- they go to great lengths to use built-in
: >: functions for speed.
: >
: > I have never noticed this. And it certainly is not true for code that I
: > write. Can you point me to any publically available example?
: >
: > What code does exist, outside of the Clisp implementation, that is
: > optimized for Clisp? I don't know of any.

: I have written code that's sort-of optimised for CLISP. More
: precisely, I've done whatever I had to to get performance good
: enough for my purposes on CLISP.

That's interesting. Why have you not used one of the compilers with
better performance?

P.

Tim Bradshaw

Jul 27, 1999

> But you have not complained that Clisp is slow! You have complained
> that some part of Clisp are too fast.

> How could be that you don't see the strangeness of this statement of
> yours?

Oh come *on*, do not be so bloody misleading. It was completely and
entirely clear that what he was complaining about was the slowness of
user-written code. Only someone with a very poor grasp of English, or
very stupid, or deliberately trying to start an argument would not see
that.

If you really do not understand, take it from me, he was *not*
complaining about things being too fast.

--tim


Erik Naggum

Jul 27, 1999
* hai...@clisp.cons.org (Bruno Haible)

| How long, do you think, will it take for CMUCL's, Allegro CL's, or
| LispWorks's compiler to be modified to generate code for that CPU?

what an odd way to put it. of course real compilers aren't "modified to
generate code" for new processors. ports are generally prepared some
time before a new processor becomes available, if there is demand for it,
and finalized when the vendor can get their hands on a machine. that is
usually some time before the general market can purchase the computers,
since vendors tend to believe that their markets will increase if there
are good development tools available for them when they hit the streets.
this all leads to the obvious conclusion that if there is evidence of
demand for Allegro CL, say, for IA-64, there will be an Allegro CL
available for IA-64 before any random user can compile CLISP for it,
provided he purchases a sufficiently good C compiler first. or do you
think GCC will be available for IA-64 as the first compiler that does
really good code? last time I looked, the processor manufacturers again
prefer to do their own compilers, since much interesting work has taken
place in compiler technology that GCC/EGCS just hasn't caught up with,
and most of these new processors are so hard to optimize that the work
necessary to port GCC exceeds the work necessary to roll their own, not
to mention the usefulness of producing machine code directly instead of
going through the assembler.

I'd say your argument backfired on you.

| CLISP is different.
|
| * It runs fine in an xterm, and is therefore accessible to non-Emacs users.

huh? which other Common Lisp doesn't?

| * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
| times as high as a shell's startup time. You can therefore use it as a
| script interpreter (with structures and CLOS), or as a CGI interpreter.

the startup time for Allegro CL on my system is 0.06 seconds. the
startup time for bash on my system is 0.02 seconds. wow, you beat my
factor 3 with a factor 2.5. I'm _so_ impressed.

| * It supports Unicode, not just as an add-on, but right from the start: The
| `character' type is Unicode (16 bit). CLISP is therefore the instrument of
| choice for manipulating HTML or XML text.

that's odd. Allegro CL also has 16-bit characters if you ask for it, and
it has had that for a good number of years. yes, it's doing Unicode
under Windows. I'm currently working on Unicode support for the Unix
international edition.

| * Its CLX implementation uses libX11 and is therefore up-to-date with all
| recent X11 developments.

I don't use CLX, but this sounds like a good thing for those who do.

| Yes, CLISP is different.

I'd say it's a little less different than you think. if you want to
attack others with stuff like "Can you please put aside these prejudices
about "real" Lisps which you borrowed from past decades?" you might want
to update your view of the "real" Lisps out there. you're not fighting
against Lisp machines, anymore.

Fernando Mato Mira

Jul 27, 1999
Bruno Haible wrote:

> * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
> times as high as a shell's startup time. You can therefore use it as a
> script interpreter (with structures and CLOS), or as a CGI interpreter.

Now _THIS_ is news. It means one can forget about Scheme for scripts.
I didn't know that.
[Hm. But what about fast _compiled_ scripts with fast startup?]

Thanks, Bruno.

Tim Bradshaw

Jul 27, 1999
bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> And indeed I'm not concerned principally with bignum speed. I am more
> concerned about the apparent lack of care that some implementors put
> into implementing such basic functions as RANDOM. Please try my example
> on ACL.
>

Have you submitted a bug report to Franz, if it is buggy, or are you
just flaming on newsgroups in the hope that they'll somehow hear you?

--tim

Tim Bradshaw

Jul 27, 1999
Gareth McCaughan <Gareth.M...@pobox.com> writes:

> Bruno Haible wrote:
>
> [all snipped, in fact]
>

> Why is it that when someone says "CLISP is great, but it's
> rather slow for many things" people jump up and say "That's
> unfair! CLISP is great!" ?
>

> CLISP is a lovely system. It's just a pity it does many things
> so slowly. (I am aware that many of its benefits are consequences
> of the same decisions that lead also to its slowness.)
>

I think I should say that I agree with this, since I've posted a
couple of nasty articles today on the non-clisp side of this debate.

I think clisp is a really good system. I think it has some defects,
its slightly odd performance characteristics being one, but I think
the other Lisps have defects too.

--tim

hai...@clisp.cons.org

Jul 27, 1999
Christopher Browne <cbbr...@hex.net> wrote:
>
> It seems to me that CLISP subscribes somewhat to Richard Gabriel's
> "Worse is Better" edict.

Not that much. My interpretation of "Worse is Better" is that it applies to
the quality of the API presented to the programmer. This is definitely not
the way it is done in CLISP.

However, "Less is Better" is true for CLISP, to some extent. CLISP does not
implement features which would be hard to maintain and are not essential in
some way. Simply because we acknowledge that development resources for CLISP
are scarce. Examples of this attitude are:
- No built-in editor, because every user has his/her own preferred editor
anyway.
- No support for `update-instance-for-redefined-class' because this would
cause performance penalties in the rest of CLOS, and it's not used anyway.
- No `defsystem', since there is no standardized spec for it, and since
`make' is fine for compiling Lisp programs.
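The `make`-driven build Bruno mentions ultimately bottoms out in calls like the following; the file names here are hypothetical examples, not real CLISP sources:

```lisp
;; Compile and load a list of source files in dependency order.
;; COMPILE-FILE returns the truename of the compiled output file,
;; which LOAD then picks up.  File names are made up for
;; illustration.
(dolist (name '("package" "utils" "main"))
  (load (compile-file (concatenate 'string name ".lisp"))))
```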

> It's something of a pity that the optimization took place in C.
>
> It would be *really* slick if CLISP were largely "self-hosting," with
> the main body of code written in Lisp, producing C code that was
> optimized by a Lisp-based optimizer.

> ...


> I'm not sure how easy/hard it is to extend it with further functions;
> if a tuning process shows that there are a couple of functions that
> should be turned into inline C so as to immensely improve performance,
> and there is some support for automagically generating that C, then it
> can be kept Fast Enough.

What you describe here is the architecture of GCL. Do you use GCL? Do you
help the GCL maintainers, extend the GCL compiler, and so on? If not,
why not?

Bruno Haible http://clisp.cons.org/


Rob Warnock

Jul 27, 1999
Fernando Mato Mira <mato...@iname.com> wrote:
+---------------

| Bruno Haible wrote:
| > * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
| > times as high as a shell's startup time. You can therefore use it as a
| > script interpreter (with structures and CLOS), or as a CGI interpreter.
|
| Now _THIS_ is news. It means one can forget about Scheme for scripts
+---------------

Why do you say *that*? SIOD Scheme is similarly fast-starting (roughly
2.5 times Bourne Shell to do "hello world")...


-Rob

-----
Rob Warnock, 8L-855 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. FAX: 650-933-0511
Mountain View, CA 94043 PP-ASEL-IA

Rainer Joswig

Jul 27, 1999
In article <7nk5k6$sqj$1...@news.u-bordeaux.fr>, hai...@clisp.cons.org wrote:

> are scarce. Examples of this attitude are:

> - No support for `update-instance-for-redefined-class' because this would


> cause performance penalties in the rest of CLOS, and it's not used anyway.

Sigh.

Nick Levine

Jul 27, 1999
> - Garnet uses its own kind of self-made hash tables. Are the vendors'
> hash tables too slow, or do they have an unusable hash function?

I assume you are talking about Garnet's KR? That also served as a home-baked
substitute for CLOS. Its initial justification was that when Garnet was
first written, CL was in its infancy and most implementations did not at the
time have CLOS.

I always found KR to be unwieldy, undebuggable (except with an almighty
struggle), and slower than CLOS would have been. [Just my opinion though.]

The advantages of using the implementor's CLOS, hash-tables, etc are that
they will be more reliable, supported, faster, and comprehensible to some
stranger who has to fix your code on your behalf five years from now.

- n

Fernando Mato Mira

Jul 27, 1999
Rob Warnock wrote:

> Fernando Mato Mira <mato...@iname.com> wrote:
> +---------------
> | Bruno Haible wrote:
> | > * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
> | > times as high as a shell's startup time. You can therefore use it as a
> | > script interpreter (with structures and CLOS), or as a CGI interpreter.
> |
> | Now _THIS_ is news. It means one can forget about Scheme for scripts
> +---------------
>
> Why do you say *that*? SIOD Scheme is similarly fast-starting (roughly
> 2.5 times Bourne Shell to do "hello world")...

Because it takes a lot of time and energy to get the Scheme people to understand
that not everybody can live in a perfect world, I am starting to get tired, and
switching between CL and Scheme is too expensive (time+money).

If you could get implementations of CL for any combination of features you'd like
(speed, small footprint, JVM-compatible, C++ interfacing, continuations..) would
you care about Scheme?


Rob Warnock

Jul 27, 1999
Fernando Mato Mira <mato...@iname.com> wrote:
+---------------
| Rob Warnock wrote:
| > | Now _THIS_ is news. It means one can forget about Scheme for scripts
| > +---------------
| > Why do you say *that*? SIOD Scheme is similarly fast-starting (roughly
| > 2.5 times Bourne Shell to do "hello world")...
|
| Because it takes a lot of time and energy to get the Scheme people to
| understand that not everybody can live in a perfect world, I am starting to
| get tired, and switching between CL and Scheme is too expensive (time+money).
|
| If you could get implementations of CL for any combination of features
| you'd like (speed, small footprint, JVM-compatible, C++ interfacing,
| continuations..) would you care about Scheme?
+---------------

I honestly don't know. It's certainly a question that keeps coming up
for me, too. I'm currently *much* more fluent in Scheme than in CL,
yet whenever I bump up against "practicalities" when hacking Scheme,
I find myself turning to the CLHS for inspiration. I personally prefer
the "look" of Scheme programs (mainly cuz it's a Lisp1), but could live
with CL if I had to.

It's a question that I don't think I'll answer any time soon, but I
also know it won't go away...


-Rob

p.s. Like most everyone who's dived into Scheme at any depth, I have my
own toy implementation bubbling on the back burner. I've been tempted to
call it "Common Scheme" (a Lisp1 subset of CL plus tail-call-opt. & call/cc),
but figured that would just get me flamed from *both* sides... ;-} ;-}

Bruno Haible

Jul 27, 1999
To my question:

>| How long, do you think, will it take for CMUCL's, Allegro CL's, or
>| LispWorks's compiler to be modified to generate code for that CPU?

Erik Naggum <er...@naggum.no> replied:


> what an odd way to put it. of course real compilers aren't "modified to
> generate code" for new processors. ports are generally prepared some
> time before a new processor becomes availble, if there is demand for it,
> and finalized when the vendor can get their hands on a machine. that is
> usually some time before the general market can purchase the computers,
> since vendors tend to believe that their markets will increase if there
> are good development tools available for them when they hit the streets.
> this all leads to the obvious conclusion that if there is evidence of
> demand for Allegro CL, say, for IA-64, there will be an Allegro CL
> available for IA-64 before any random user can compile CLISP for it,

Hardware vendors do believe in the importance of development tools. C compiler
writers and Linux porters sometimes get a brand-new machine for free. Other
categories of application vendors, probably Lisp compiler writers as well,
typically have to rent such pre-release hardware. Maybe the Allegro CL
demand for IA-64 will be sufficient for its vendor to pay for such hardware.
But I was also talking about CMUCL - how should they get at such a machine?

> before any random user can compile CLISP for it,
> provided he purchases a sufficiently good C compiler first.

Any developer needs to buy the C compiler, be it bundled with the OS or
sold separately.

My point is: IA-64 needs heavy changes to the code generator, because the
instructions must be grouped into bundles of three 41-bit instructions. The testing alone
of such a modified compiler will take longer than the entire porting of CLISP.

> or do you think GCC will be available for IA-64 as the first compiler
> that does really good code?

This question is really off-topic because CLISP can be compiled with any
ANSI C or C++ compiler, therefore the availability of gcc does not matter.

But anyway, I don't mind discussing this. I don't think GCC will be the
first compiler for IA-64, but it won't come very late either.

> last time I looked, the processor manufacturers again
> prefer to do their own compilers

At least IBM is an active contributor to GCC. And Intel has contributed a
lot to the gcc derivative called `pgcc', but that hasn't been completely
integrated into GCC (AFAIK).

> since much interesting work has taken place in compiler technology
> that GCC/EGCS just hasn't caught up with,

I don't know about which technology you are talking. Recently, Be Inc.
has dropped MetroWerks as C compiler for the i586 version of BeOS and
replaced it with GCC.

> I'd say your argument backfired on you.

I don't think so. How long did it take for CMUCL to be ported to i386?
Paul Werkowski did heroic efforts for one long year.


Bruno http://clisp.cons.org/


Fernando Mato Mira

Jul 27, 1999
Rob Warnock wrote:

> p.s. Like most everyone who's dived into Scheme at any depth, I have my
> own toy implementation bubbling on the back burner. I've been tempted to
> call it "Common Scheme" (a Lisp1 subset of CL plus tail-call-opt. & call/cc),
> but figured that would just get me flamed from *both* sides... ;-} ;-}

You too! Hey, I'm driving up to the Area after SIGGRAPH. Care to visit some VCs?
:-I

Erik Naggum

Jul 27, 1999
* Bruno Haible

| But I was also talking about CMUCL - how should they get at such a machine?

your point is certainly valid for CMUCL, but your extrapolation to other
native Common Lisp implementations leaves something to be desired in its
applicability.

Joerg-Cyril Hoehle

Jul 27, 1999
Hi Rainer,

jos...@lavielle.com (Rainer Joswig) writes:
[Bruno Haible of CLISP fame wrote:]


> > - No support for `update-instance-for-redefined-class' because this would
> > cause performance penalties in the rest of CLOS, and it's not used anyway.

> Sigh.

My reply may be a little off-topic.

See how Schemers "discussed" (fought) the cost of R5RS'
DYNAMIC-WIND. There is a real cost to some operations. Every
implementation makes design decisions. Some implementations may
decide to drop a costly feature. Others go and design another
language (Dylan) with possibilities for sealing, etc.

If you were a Smalltalker, would you say 'Sigh' to an implementation
that wouldn't provide 'become' whose use is strictly discouraged in
every style guide? The design choices underlying the implementation
may make this operation intractable or at least too expensive.

Did you know that state of the art commercial Smalltalk
implementations don't recompile methods of subclasses when a class is
redefined, so that the images may crash or cause weird behaviour
because these old methods reference slots at wrong (obsolete) offsets?

Sigh? Really?

Regards,
Jorg Hohle
Telekom Research Center -- SW-Reliability

Tim Bradshaw

Jul 27, 1999
hai...@clisp.cons.org (Bruno Haible) writes:


> My point is: IA-64 needs heavy changes to the code generator, because the
> instructions must be grouped into bundles of three 41-bit instructions. The testing alone
> of such a modified compiler will take longer than the entire porting of CLISP.

This is not a loaded question, I actually want to know this!

When clisp is rebuilt for ia64 will it be a 64-bit system, in the
sense of haivng bigger address space and I guess bigger fixums &c, or
will it still be 32-bit? Is there a 64-bit clisp now (on sparc or
other currently-64-bit machines, for which compilation technology &
hardware already exists)?

Thanks

--tim

Joerg-Cyril Hoehle

Jul 27, 1999
Hello,

I'm trying to start a discussion about abstraction, optimization,
macros and also OO.

I've observed a shift about the former bad opinion about macros in the
last years. Maybe Peter Norvig started with PAIP, and Paul Graham's
On Lisp made the point extremely clear.


cbbr...@news.brownes.org (Christopher B. Browne) writes:
> It's reasonable to expect that if "native operators" work more efficiently
> than "generated ones," this will encourage developers to prefer using
> "native" ones.
>

> That being said, there are two confounding effects:

> a) Constructing your own operators using macros provides a direct
> translation of "generated operators" into "native" ones, which mitigates
> the problem.

That's old style. All newer Lisp books advocate against macros for
that purpose and heavily recommend DECLARE INLINE. Yet long-experienced
Lispers still conclude "macros are the only way to optimize portably".
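The two styles can be contrasted in a small sketch (SQUARE-M and SQUARE-F are made-up names for illustration):

```lisp
;; "Old style": a macro used purely for speed.  Note that it
;; evaluates its argument twice -- one reason the newer books
;; advise against macros for this purpose.
(defmacro square-m (x)
  `(* ,x ,x))

;; Recommended style: an ordinary function declaimed INLINE.  It
;; keeps function semantics (it can be FUNCALLed, traced, passed
;; to MAPCAR) while still allowing the compiler to open-code
;; calls where it honors the declaration.
(declaim (inline square-f))
(defun square-f (x)
  (* x x))
```

Whether the INLINE declamation actually buys anything is, of course, exactly the compiler-quality question under discussion.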

We are really discussing compiler qualities at this point, and IMHO
this is independent of other choices (bytecode or native code, CMUCL
or CLISP or ACL), etc.

> b) Constructing your own "language" supplies a "cost of comprehension."
> Is it preferable for a new developer to:
What do you expect from new developers in any language? "Here's the C
language and here are gadzillions of various libraries."

> 1: Learn "regular, colloquial" Lisp, or
> 2: Learn your variations on Lisp, namely the language that is
> "Lisp Plus Some K001 operators we made up."
That's again old yet long-standing advice against macros.

Newer books have advocated the use of domain specific languages (DSL)
and related concepts that fit nicely into macros, like all kind of
declarative stuff (your a). I think this puts the power of macros right.

> I'd tend to think it preferable to go to "tried and true" traditional
> Lisp, as that is used by a much wider community.
It depends. For example, I refrained from implementing a COUNTDOWN
macro as the reverse of DOTIMES.
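Such a COUNTDOWN might look like the following; this is a hypothetical sketch of the operator being refrained from, not a recommendation:

```lisp
;; Hypothetical COUNTDOWN: iterate VAR from COUNT-1 down to 0,
;; mirroring DOTIMES.  Made up for illustration; not a standard
;; operator.  COUNT is evaluated once, in the DO init form.
(defmacro countdown ((var count &optional result) &body body)
  `(do ((,var (1- ,count) (1- ,var)))
       ((minusp ,var) ,result)
     ,@body))

;; (countdown (i 3) (princ i)) prints 210 and returns NIL.
```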

Do you want simple, readable code (for highly reliable systems)? Do
you want high-performance code? Do you want code fast to write? It
all depends.

Do you trust more concepts expressed w.r.t. a given domain,
compiled into Lisp code (compiled into whatever) using techniques of
partial evaluation, compilation et al, that you must then trust and
verify as well, or do you trust more code written in basic Lisp? Your
job's requirements will probably bias your answer.

Some safety requirements argue against the use of preprocessors.
Macros may be perceived as very similar in effect.


On the other hand, I'm sometimes missing OO features within CL
primitives. That's IMHO what kills a real decision between "abstract"
operations and CL primitives. My favourite here is some bag or set
type, with different operations allowed whenever I violate abstraction
and take advantage of the underlying BIT-VECTOR, LIST or HASH-TABLE
structure. But maybe I'm just asking for good type analysis (not
necessarily a static type system): I don't want to rewrite the code
using the sets when I change the representation, yet it should be as
fast as the primitive operations that it maps onto (suffer no extra
FUNCALL that just does CAR, etc).

Chuck Fry

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
In article <qkpemhuce...@tzd.dont.telekom.spam.de.me>,

Joerg-Cyril Hoehle <hoehle...@tzd.dont.telekom.spam.de.me> wrote:
>cbbr...@news.brownes.org (Christopher B. Browne) writes:
>> It's reasonable to expect that if "native operators" work more efficiently
>> than "generated ones," this will encourage developers to prefer using
>> "native" ones.
>>
>> That being said, there are two confounding effects:
>> a) Constructing your own operators using macros provides a direct
>> translation of "generated operators" into "native" ones, which mitigates
>> the problem.
>
>That's old style. All newer Lisp books advocate against macros for
>that purpose and heavily recommend DECLARE INLINE. Yet long-
>experienced Lispers still conclude "macros are the only way to optimize
>portably".

That's because the INLINE declaration in user code is ignored by at
least one very popular commercial CL implementation. So one winds up
having to use macrology instead.

>Do you want simple, readable code (for highly reliable systems)? Do
>you want high-performance code? Do you want code fast to write? It
>all depends.

I want it *all*! I don't see that these goals have to conflict.
Granted, tuning code for performance takes time, but it's
straightforward to write a prototype using appropriate abstractions,
then later tune the abstractions according to observed performance.

-- Chuck
--
Chuck Fry -- Jack of all trades, master of none
chu...@chucko.com (text only please) chuc...@home.com (MIME enabled)
Lisp bigot, mountain biker, car nut, sometime guitarist and photographer
The addresses above are real. All spammers will be reported to their ISPs.

Pierre R. Mai

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> And indeed I'm not concerned principally with bignum speed. I am more
> concerned about the apparent lack of care that some implementors put
> in implementing such basic functions as RANDOM. Please try my example
> on ACL.

Beware! Benchmarking RANDOM is a non-trivial task, and timing
(random (expt 10 500)) is just brain-damaged, unless you can convince
me that this happens to be on the critical path of any
non-brain-damaged program (which I would find hard to believe). And
BTW random is far from a "basic" function. The amount of theory
contained in RANDOM is so dense that reading up on it can take you a
couple of months. And using RANDOM isn't simple for that very
reason. Random numbers are even more evil in the hands of an unskilled
user than floating point numbers.

If you want to benchmark RANDOM, first understand what you are
benchmarking. Sadly, most implementations of any language deem
it enough to provide a simplistic (and sometimes even seriously
flawed) RNG, and not even documenting the exact algorithm and
parameters used, thereby forcing any serious user to implement
his own RNG anyway. Comparing the performance of a flawed or
severely restricted RNG to that of a high-quality one is not in
any way meaningful (although there exist many high-quality RNGs
out there which can be quite competitive to your usual crappy
RNG).[1]

Then use realistic examples. Where would the range argument to the
RNG be calculated afresh every time? I can't think of any reasonable
case.

And finally you have to discern the different cases. Whilst ACL's
random implementation can be a bit slow in the general case, the
performance can be sped up considerably, in certain special cases
(like when the range is a single float, or probably a fixnum). If you
contact your vendor he will be quite willing to give you the necessary
advice needed for tuning, as usual.

But even given a very simplistic benchmarking approach, what you could
have found out about random performance in ACL, CMU CL and CLISP would
have been the following:

*********************************************************************
BEWARE: THIS IS NOT MEANT AS A SERIOUS BENCHMARK!
*********************************************************************

The given test-code is not really realistic, and the RNGs in question
have wildly varying characteristics. Furthermore performance could be
heavily influenced by the addition or deletion of declarations, and/or
optimization settings, and/or the use of more modern implementation
versions, and/or the use of different architectures, and/or even
different chips of the same architecture (the AMD-K6-2/350 I used at
home for these "benchmarks" has a none too good FP unit, so
performance on serious chips with useful FP performance will be
better, and might skew the results, depending on the implementation
strategies chosen by the RNGs in question). OS influence seems
unlikely, but can't be ruled out either. OS in question was Linux
2.2.10.

No rigorous attempt has been made to optimize the test for any
implementation, although an attempt has been made to provide a
slightly "de-optimized" version for ACL, to show the special-casing of
1.0s0. Since the constructs provided are very direct, the advantages
of CMU CL's type-inference mechanisms are not really utilized, thereby
putting "less intelligent" compilers at less of a disadvantage...

The run-times are really too short in most cases to provide reliable
measures, but I was too lazy to invest any more time into this silly
"benchmark". The number of digits provided in measurements is a joke,
and should not be taken to indicate any kind of accuracy or certainty.

The RNGs in the versions of CLISP and ACL tested were (according to
documentation and/or source):

CLISP:
# Random number generator after [Knuth: The Art of Computer Programming, Vol. II,
# Seminumerical Algorithms, 3.3.4., Table 1, Line 30], after C. Haynes:
# X is a 64-bit number. Iteration X := (a*X+c) mod m
# with m=2^64, a=6364136223846793005, c=1.
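Expressed as Lisp, the quoted recurrence is simply the following (a
sketch of the algorithm, not CLISP's actual implementation, which
works on raw machine words):

```lisp
;; The 64-bit linear congruential step described above, in plain Lisp.
;; Constants are the ones quoted from Knuth's Table 1, line 30.
(defconstant +lcg-m+ (expt 2 64))
(defconstant +lcg-a+ 6364136223846793005)
(defconstant +lcg-c+ 1)

(defun lcg-next (x)
  "Return the successor of state X under X := (a*X + c) mod 2^64."
  (mod (+ (* +lcg-a+ x) +lcg-c+) +lcg-m+))
```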

ACL:
If number is a fixnum or the single-float 1.0, the algorithm used
to generate the result is Algorithm A on page 27 of Art of Computer
Programming, volume 2, 2nd edition, by Donald Knuth (published by
Addison-Wesley). If number is any other value, a
linear-congruential generator using 48 bit integers for the seed
and multiplier is used. Because 48 bit integers are bignums, random
with an argument other than a fixnum or the single-float 1.0 is
very inefficient and not recommended.

CMU CL uses the below mentioned MT-19937 RNG.

The test-hugenum test produces an error on ACL 5.0, indicating that
#.(expt 10 500) can't be coerced to a double-float. Therefore no
figures for test-hugenum are available on ACL. Maybe I missed
something, but IMHO this should work.

Final Word: I really mean it! Do not use these figures as any kind
of indication of realistic RNG performance! If you care about RNG
performance, you should probably care more about RNG quality! If you
still care, and are prepared to do some serious work on benchmarking
them, good luck, and bring along much time (and a PhD in a related
field of mathematics can't do any harm, either). This silly little
demonstration is solely meant to show how wildly differing results
you can get even under very simplistic conditions, thereby
invalidating any benchmarking approach that tries to give you single
figures or value judgements.

*********************************************************************

Source:

(declaim (optimize (speed 3)))

(defun test-dfloat (n)
(dotimes (i n)
(random 1.0d0)))

(defun test-sfloat-var (n)
(let ((range 1.0s0))
(dotimes (i n)
(random range))))

(defun test-sfloat (n)
(dotimes (i n)
(random 1.0s0)))

(defun test-bignum (n)
(dotimes (i n)
(random #.(expt 2 100))))

(defun test-hugenum (n)
(dotimes (i n)
(random #.(expt 10 500))))


*********************************************************************

Results of (time (test-hugenum 100000)):

Implementation Real-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1 4182 21600000 110
CMU CL CVS 2.4.9 90270 1495314816 38120
ACL TE Linux 5.0 - - -

Results of (time (test-bignum 100000)):

Implementation Real-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1 730 2387272 30
CMU CL CVS 2.4.9 3260 30312608 820
ACL TE Linux 5.0 5653 67001000 480

Results of (time (test-sfloat 1000000)):

Implementation Run-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1 3896 0 0
CMU CL CVS 2.4.9 220 0 0
ACL TE Linux 5.0 922 32 0

Results of (time (test-sfloat-var 1000000)):

Implementation Run-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1 3895 0 0
CMU CL CVS 2.4.9 220 0 0
ACL TE Linux 5.0 2785 16000032 80

Results of (time (test-dfloat 1000000)):

Implementation Run-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------
CLISP 1997-12-06-1 6481 52000000 610
CMU CL CVS 2.4.9 290 0 0
ACL TE Linux 5.0 27729 399999984 2310

Regs, Pierre.

Footnotes:
[1] This is one of many reasons I like CMU CL so much: The RNG of
CMU CL is currently a Mersenne-Twister Generator (MT-19937) with a
period of 2^19937-1 and 623-dimensional equidistribution. The
algorithm has been published together with the usual test results in
the ACM Transactions on Modelling and Computer Simulation (TOMACS),
Issue 1/1998, pp. 3-30, by Makoto Matsumoto and Takuji Nishimura.
This is actually where I came across the RNG, and when I decided to
implement it in CL, I found out that it had already been implemented
for CMU CL (together with a considerably bummed implementation for
x86). Together with the necessary references to the paper. I was
severely impressed (thanks to Raymond Toy and Douglas T. Crosher who
seem responsible for this ;). The performance of this is very nice in
non-bignum cases.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Pierre R. Mai

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> You cannot conflate bugs with slowness.

There are such things as performance bugs, IMHO. Take for example CMU
CL (lest I be accused of being unfair to CLISP again). The generalized
sequence mapping operations have a performance bug: Since they simply
use elt to step through all sequences, they exhibit O(n^2) behaviour on
lists. Although the result of any program won't change because of
this, and nothing in ANSI CL demands that the mapping operation be
performed in O(n), I'd still consider this a bug, for two reasons:

a) The unwary user will be quite surprised when mapping operations
suddenly start operating in O(n^2) time, and

b) once he is aware of this behaviour, he will most likely start
working around it, which will decrease the quality of his code.

Now this problem isn't as severe as it sounds at first, since O(n^2)
on small lists is still not a biggie, and long lists are often a
sign that you are using the wrong data structure anyway. But it
still is a severe defect in CMU CL, and I'd like it fixed some time,
the sooner the better. Sadly I haven't had the time to polish up my
implementation of mapping to a reasonable standard yet, and thus the
problem still persists...
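A sketch of the difference (hypothetical helpers, not CMU CL's real
MAP code):

```lisp
;; Quadratic on lists: every ELT call walks the list from its head.
(defun naive-map (fn seq)
  (let ((result '()))
    (dotimes (i (length seq) (nreverse result))
      (push (funcall fn (elt seq i)) result))))

;; Linear: dispatch on the representation and CDR-step down lists.
(defun better-map (fn seq)
  (if (listp seq)
      (mapcar fn seq)        ; single pass down the list
      (map 'list fn seq)))   ; arrays index in O(1), so no problem there
```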

But denying that this is a problem of the implementation, and shifting
the blame onto the user seems like the easy (and wrong) way out to me.

> : If you
> : are (for whatever reason) using this implementation, and you avoid
> : using the features that produce these terrible results, does that
> : make you a bad programmer? Is it your problem rather than that of
> : the implementation?
>
> Definitely.

It is definitely the problem of the implementation, if the user has
brought his performance problems to the attention of the implementor,
and the implementor has through lack of action "forced" the user to
use work-arounds. Now if the inaction has good reasons (like
differences in implementation goals, or lack of resources/time, or
whatever), then this is not really the fault of the implementor
personally, but this doesn't shift the blame to the user, unless the
user clings to the implementation he is using without proper reasons.

> : Is it bad practice for a programmer to write his code so that it
> : doesn't go unbearably slowly on his system?
>
> Yes.

Simplistic answers don't provide any insight into the situation.

Regs, Pierre

Pierre R. Mai

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Fernando Mato Mira <mato...@iname.com> writes:

> > * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
> > times as high as a shell's startup time. You can therefore use it as a
> > script interpreter (with structures and CLOS), or as a CGI interpreter.
>

> Now _THIS_ is news. It means one can forget about Scheme for scripts

> I didn't know that.
> [Hm. But what about fast _compiled_ scripts with fast startup?]

As Erik has already pointed out, most implementations have a fast start
up time nowadays, if they are in the OS caches (like bash usually is).
Even CMU CL starts up and exits in around 0.5s on my machine nowadays,
and CMU CL is quite slow on start-up. And if you use a resident image,
you can easily do scripting via its socket interfaces, in even less
start-up time. ACL starts up and exits in 0.075s, CLISP in 0.060s
(both without suppressing banner output).

It seems to me that scripting in CL isn't a question of start-up
times, but more of a nice scripting library.
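For reference, a minimal CLISP script might look like this (the
interpreter path is an assumption; EXT:*ARGS* is CLISP-specific):

```lisp
#!/usr/local/bin/clisp
;; Everything after the #! line is ordinary Lisp; CLISP reads the
;; whole file as a script.  EXT:*ARGS* holds the command-line args.
(format t "Hello from CLISP, args = ~S~%" ext:*args*)
```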

Regs, Pierre.

Bruno Haible

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Tim Bradshaw <t...@tfeb.org> asked:

>
> When clisp is rebuilt for ia64 will it be a 64-bit system, in the
> sense of haivng bigger address space and I guess bigger fixums &c, or
> will it still be 32-bit?

You will be able to compile clisp (like other software) as either a 32-bit
application or a 64-bit application. The code space needed for either will
probably be the same, but the data (the memory images) will likely be 60% larger
in 64-bit mode.

Details at http://developer.intel.com/design/IA64/devinfo.htm

> Is there a 64-bit clisp now (on sparc or other currently-64-bit machines,
> for which compilation technology & hardware already exists)?

clisp has run in 64-bit mode on DEC/Compaq Alpha since 1993, and on Mips since 1997.

Bruno http://clisp.cons.org/


Vassili Bykov

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
hoehle...@tzd.dont.telekom.spam.de.me (Joerg-Cyril Hoehle) writes:
>
> If you were a Smalltalker, would you say 'Sigh' to an implementation
> that wouldn't provide 'become' whose use is strictly discouraged in
> every style guide? The design choices underlying the implementation
> may make this operation intractable or at least too expensive.

Your point being...? You can design around the absence of #become: so
it is not strictly necessary--it is there as an artifact of the times
when it was dirt cheap to implement in the only existing
implementation--yet so far every implementation has been providing it,
though with somewhat varying semantics.

> Did you know that state of the art commercial Smalltalk
> implementations don't recompile children methods when a class is
> redefined, so that the images may crash or cause weird behaviour
> because these old methods reference slots at wrong (obsolete) offsets?

What implementations are you talking about? This is plain wrong for
*every* commercial one in existence. Those worth their salt both
recompile the methods and mutate existing instances to match the new
instance structure--though not as gracefully as
UPDATE-INSTANCE-FOR-REDEFINED-CLASS would allow.

--Vassili

Raymond Toy

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
>>>>> "Pierre" == Pierre R Mai <pm...@acm.org> writes:

Pierre> Results of (time (test-hugenum 100000)):

Pierre> Implementation Real-Time(ms) Consing(Bytes) GC-Time(ms)
Pierre> -------------------------------------------------------------------
Pierre> CLISP 1997-12-06-1 4182 21600000 110
Pierre> CMU CL CVS 2.4.9 90270 1495314816 38120
Pierre> ACL TE Linux 5.0 - - -

Pierre> Results of (time (test-bignum 100000)):

Pierre> Implementation Real-Time(ms) Consing(Bytes) GC-Time(ms)
Pierre> -------------------------------------------------------------------
Pierre> CLISP 1997-12-06-1 730 2387272 30
Pierre> CMU CL CVS 2.4.9 3260 30312608 820
Pierre> ACL TE Linux 5.0 5653 67001000 480


I think I know the reason for the relatively slow results for CMUCL.
The generator in this case creates the bignum by essentially
overlapping a bunch of 32-bit random integers by 3 bits. The intent
is to enhance the randomness of the least significant bits. However,
the MT-19937 generator is supposed to have good randomness for the
entire 32 bits. If we truly concatenate the 32-bit numbers together,
we get results like this (on an Ultrasparc II, 300 MHz):

Results of (time (test-hugenum 100000)):

Implementation Real-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------

CLISP 1999-05-15 12324 21600000 205
CMU CL 18b+ 4940 90899784 840

Results of (time (test-bignum 100000)):

Implementation Real-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------

CLISP 1999-05-15 1626 2400000 46
CMU CL 18b+ 1510 15404520 160

To calibrate the results, unmodified CMUCL gives a time of 2160 ms for
the bignum test, so this 300MHz Ultra 30 is about twice the speed of
your K6-2/350.

To confuse matters more, the time for test-sfloat is 410 ms compared
to your 220 ms. So your floating point isn't so shabby. The
difference perhaps is due to the fact that the x86 port uses an
assembly version for the mt-19937 generator and the sparc uses Lisp.
And, the x86 version of CLISP appears to be much faster than the sparc
version.

Also 1.0s0 is a short float, which is not a single-float. This
doesn't matter for CMUCL or ACL, but it does for CLISP which does
have true short floats:

Results of (time (test-sfloat 1000000)):

Implementation Run-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------

CLISP 1999-05-15 4559 0 0
CMU CL 18b+ 410 48 0

Results of (time (test-ffloat 1000000)):

Implementation Run-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------

CLISP 1999-05-15 6674 48000000 796


Isn't benchmarking fun? :-) Making sense of the results is a lot of
fun too! :-)

Ray

Pierre R. Mai

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
hoehle...@tzd.dont.telekom.spam.de.me (Joerg-Cyril Hoehle) writes:

> Hi Rainer,
>
> jos...@lavielle.com (Rainer Joswig) writes:
> [Bruno Haible of CLISP fame wrote:]
> > > - No support for `update-instance-for-redefined-class' because this would
> > > cause performance penalties in the rest of CLOS, and it's not used anyway.
>
> > Sigh.
>
> My reply may be a little of topic.
>
> See how Schemers "discussed" (fought) the cost of R5RS'
> DYNAMIC-UNWIND. There is a real cost to some operations. Every
> implementation makes design decisions. Some implementations may
> decide to drop a costly feature. Others go and design another
> language (Dylan) which possibilities for sealing etc.

[ Smalltalk stuff snipped ]

> Sigh? Really?

I don't know what Rainer's sigh was trying to convey. I on the other
hand would not sigh at the fact that CLISP doesn't provide
update-instance-for-redefined-class. That is an implementation
decission (which brings CLISP out of line with the standard, but
probably might still qualify it as a subset), and if the user
community of CLISP prefers better CLOS performance (relative to CLISP
with u-i-f-r-c) to the presence of this update protocol, who am I to
quarrel with that, since I'm not an active user of CLISP, and my
priorities are different from those of the existing CLISP community it
seems.

I would however sigh at the second part of Bruno's sentence, if I
should take "... and it's not used anyway" as an assertion on his
part, that u-i-f-r-c isn't used in the CL community as a whole. That
would make assumptions about the usage patterns of a whole number of
people, many with different interests and priorities than Bruno's.
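For readers who haven't used the protocol, a rough example of what
u-i-f-r-c buys you (class and slot names invented for the example):

```lisp
(defclass point () ((x :initarg :x) (y :initarg :y)))
(defvar *p* (make-instance 'point :x 3 :y 4))

;; Redefining POINT to store a radius instead of coordinates:
(defclass point () ((rho :initform 0)))

(defmethod update-instance-for-redefined-class :after
    ((instance point) added discarded plist &rest initargs)
  (declare (ignore added discarded initargs))
  ;; PLIST maps the discarded slot names to their old values.
  (setf (slot-value instance 'rho)
        (sqrt (+ (expt (getf plist 'x) 2)
                 (expt (getf plist 'y) 2)))))

;; The next slot access on *P* triggers the update; RHO becomes 5.0.
```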

Pierre R. Mai

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Raymond Toy <t...@rtp.ericsson.se> writes:

[ BTW: Sorry for the tabs in the original post, I forgot to strip them
prior to posting... ]

> I think I know the reason for the relatively slow results for CMUCL.
> The generator in this case creates the bignum by essentially
> overlapping a bunch of 32-bit random integers by 3 bits. The intent
> is to enhance the randomness of the least significant bits. However,
> the MT-19937 generator is supposed to have good randomness for the
> entire 32 bits. If we truly concatenate the 32-bit numbers together,
> we get results like this (on a Ultrasparc II, 300 MHz):

Lovely! Another set of inconsistent results, demonstrating the
silliness of simplistic benchmarking even further! ;)

> To confuse matters more, the time for test-sfloat is 410 ms compared
> to your 220 ms. So your floating point isn't so shabby. The
> difference perhaps is due to the fact that the x86 port uses an
> assembly version for the mt-19937 generator and the sparc uses Lisp.

Yes, the assembly version of the mt-19937 on x86 will probably drown
out most other factors. And even while generating FPs, most work is
still done in the state update operation of mt-rand19937, which uses
integer operations only, which means that FP performance is not a
major factor in this. So I still think that a 300MHz Ultrasparc II
gives better FP performance than a 350MHz AMD K6-2.

> And, the x86 version of CLISP appears to be much faster than the sparc
> version.

See below for a more direct comparison. Since my CLISP seems to cons
only half of your CLISP, it seems to me we are using versions with
different representations. This might be because I used the "small"
version of CLISP, which uses a 24+8 bit representation (IIRC), and you
used the "wide" version which uses 64 bits (again IIRC). Or it might
be because of other differences in representation between x86 Linux
and UltraSparcII versions of CLISP.

> Also 1.0s0 is a short float, which is not a single-float. This
> doesn't matter for CMUCL or ACL, but it does for CLISP which does
> have true short floats:

Oops, yes, thanks for spotting this. Never benchmark when not 100%
concentrated. So here is a short comparison between short and single
float performance of CLISP (both with (time (test-* 1000000))):

Implementation Real-Time(ms) Consing(Bytes) GC-Time(ms)
-------------------------------------------------------------------

CLISP short-float 4027 0 0
CLISP single-float 5097 24000000 360

> Results of (time (test-sfloat 1000000)):
>
> Implementation Run-Time(ms) Consing(Bytes) GC-Time(ms)
> -------------------------------------------------------------------

> CLISP 1999-05-15 4559 0 0
> CMU CL 18b+ 410 48 0
>

> Results of (time (test-ffloat 1000000)):


>
> Implementation Run-Time(ms) Consing(Bytes) GC-Time(ms)
> -------------------------------------------------------------------

> CLISP 1999-05-15 6674 48000000 796

> Isn't benchmarking fun? :-) Making sense of the results is a lot of
> fun too! :-)

Yes, silly-benchmarking is kind of addictive, like micro-optimizing.
And it's like standards and statistics: So many answers to choose
from.

Regs, Pierre.

Pierpaolo Bernardi

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Tim Bradshaw (t...@tfeb.org) wrote:
: bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

: > And indeed I'm not concerned principally with bignum speed. I am more
: > concerned about the apparent lack of care that some implementors put
: > in implementing such basic functions as RANDOM. Please try my example
: > on ACL.

: Have you submitted a bug report to Franz, if it is buggy, or are you
: just flaming on newsgroups in the hope that they'll somehow hear you?

*I* am flaming? What about you?


Reporting bugs to Franz has not worked for me in the past.

Surely I hope that they fix this. If reporting bugs in ACL in this
newsgroup makes Franz fix them, I'll report here any new bug that I find.

P.

Gareth McCaughan

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Pierpaolo Bernardi wrote:

>: Let's take a more extreme example. Suppose you have a Lisp compiler
>: that screws up whenever you try to do simple CLOS things: it gives
>: wrong answers, or goes into an infinite loop, or something.

>
> You cannot conflate bugs with slowness.

Why not? If a Lisp system took a million cycles to do every operation
then it would be just as unusable as if it returned 999 for (CAR NIL)
every now and then.

>: If you
>: are (for whatever reason) using this implementation, and you avoid
>: using the features that produce these terrible results, does that
>: make you a bad programmer? Is it your problem rather than that of
>: the implementation?
>
> Definitely.

Why?

>: Is it bad practice for a programmer to write his code so that it
>: doesn't go unbearably slowly on his system?
>
> Yes.

Why?

(If you mean "because his code is for others to run too", let me
amend my question to "... on the systems on which it will be running?".)

>: I have written code that's sort-of optimised for CLISP. More
>: precisely, I've done whatever I had to to get performance good
>: enough for my purposes on CLISP.
>
> That's interesting. For what reason you have not used one of the
> compilers with better performance?

The machine I run CLISP on doesn't have any other Common Lisps
that can run on it. I also have an x86 unix box; I run CMU CL
on that. (Though if I wanted to do bignum-intensive stuff, or
replace bash with Lisp, or use arbitrary-precision reals, I would
use CLISP on that machine too.)

--
Gareth McCaughan Gareth.M...@pobox.com
sig under construction

Pierpaolo Bernardi

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Pierre R. Mai (pm...@acm.org) wrote:
: bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

: > And indeed I'm not concerned principally with bignum speed. I am more
: > concerned about the apparent lack of care that some implementors put
: > in implementing such basic functions as RANDOM. Please try my example
: > on ACL.

: Beware! Benchmarking RANDOM is a non-trivial task, and timing
: (random (expt 10 500)) is just brain-damaged, unless you can convince
: me that this happens to be on the critical path of any
: non-brain-damaged program (which I would find hard to believe).

I have not found this bug by trying functions at random with random
arguments. It happened in a real program, and it was in the critical
path. Even if you find it hard to believe.


: If you want to benchmark RANDOM, first understand what you are
: benchmarking.

I, at least, understand that if (random (expt 10 500)) signals a
condition instead of returning a number, something must be broken.

: Sadly, most implementations of any language deem
: it enough to provide a simplistic (and sometimes even seriously
: flawed) RNG, and not even documenting the exact algorithm and
: parameters used, thereby forcing any serious user to implement
: his own RNG anyway. Comparing the performance of a flawed or
: severely restricted RNG to that of a high-quality one is not in
: any way meaningful (although there exist many high-quality RNGs
: out there which can be quite competitive to your usual crappy
: RNG).[1]

: Then use realistic examples. Where would the range argument to the
: RNG be calculated afresh every time? I can't think of any reasonable
: case.

A reasonable case occurs when doing primality tests of big integers.
But anyway this is irrelevant.
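Indeed, a Fermat-style probable-prime test draws a witness below the
candidate on every trial, so the range argument is freshly computed
each call (a sketch; a real test would use Miller-Rabin, and
Carmichael numbers can fool this one):

```lisp
(defun expt-mod (base power modulus)
  "BASE^POWER mod MODULUS by binary exponentiation."
  (do ((result 1)
       (b (mod base modulus))
       (p power (ash p -1)))
      ((zerop p) result)
    (when (oddp p)
      (setf result (mod (* result b) modulus)))
    (setf b (mod (* b b) modulus))))

(defun probably-prime-p (n &optional (trials 20))
  "Crude Fermat test for odd integers N > 3."
  (dotimes (i trials t)
    (let ((a (+ 2 (random (- n 3)))))   ; fresh range argument each call
      (unless (= 1 (expt-mod a (1- n) n))
        (return nil)))))
```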

: And finally you have to discern the different cases. Whilst ACL's
: random implementation can be a bit slow in the general case,

It's not a question of speed.

P.

Gareth McCaughan

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Pierpaolo Bernardi wrote:

>: Why is it that when someone says "CLISP is great, but it's
>: rather slow for many things" people jump up and say "That's
>: unfair! CLISP is great!" ?
>

> This has not been the case this time.

It's how it looks to me.

>: CLISP is a lovely system. It's just a pity it does many things
>: so slowly. (I am aware that many of its benefits are consequences
>: of the same decisions that lead also to its slowness.)
>

> But you have not complained that Clisp is slow! You have complained
> that some part of Clisp are too fast.

If you really think that was my complaint, I can only conclude that
either I am a much worse communicator than I thought or your English
comprehension isn't very good.

Duane Rettig

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to

I have been following this thread with interest. Rest assured I consider
a failure of (random (expt 10 500)) to produce results to be a bug in
Allegro CL, and I have even been considering the Mersenne Twister as a
potential algorithm for a fast, highly random generator for Allegro CL.
However, I must answer this post, since it calls into question the
integrity of Franz's support.

bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> Tim Bradshaw (t...@tfeb.org) wrote:
> : bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:
>
> : > And indeed I`m not concerned principally with bignum speed. I am more
> : > concerned about the apparent lack of care that some implementors put
> : > in implementing such basic functions as RANDOM. Please try my example
> : > on ACL.
>

> : Have you submitted a bug report to Franz, if it is buggy, or are you
> : just flaming on newsgroups in the hope that they'll somehow hear you?
>
> *I* am flaming? What about you?
>
>
> Reporting bugs to Franz has not worked for me in the past.

Though we encourage people who use our unsupported products to report
problems, bugs, and anomalies to us, we never give any promise of
support for these unsupported products. We appreciated the three reports
that you sent in to us in early 1997, and at least one of them resulted in
a fix to the linux product.

> Surely I hope that they fix this. If reporting bugs in ACL in this
> newsgroup makes Franz fix them, I'll report here any new bug that I find.

There are two things that you need in order to guarantee that bugs get fixed
in Allegro CL:

1. An avenue to let us know that the bug exists. We read a few newsgroups
and sometimes grab bugs or potential bugs from them, and try to work them
into the fabric of the lisp. However, we still prefer that you at least
send a report to bu...@franz.com, so that we are sure not to miss anything.
Also, the distribution for such discussion may be a little too wide for
this newsgroup. I am not afraid to admit that our product is not perfect
(yet :-), but I would not like to see those lispers who are not users of
Allegro CL be bothered unnecessarily by complaints about our product.

2. Priority. We attach highest priority to ensuring the success of our
supported customers. We are also obviously interested in continuing
to improve our product in general, though if this general goal takes time
and resources away from helping our customers to be successful, then it
gets lower priority.

Once a supported customer has reported a problem, and we have determined
that it is a bug, we run through a standard process of determining what
the priority is of their bug. This includes the obvious: we ask them how
important it is to fix it right away. It also includes the possibilities
that either the customer will work around the problem, (sometimes we supply
the workaround) or that we will supply a patch. Again, we usually ask our
customers what their preference is. And finally, we log a "bug" report, a
database item that gets assigned priority for looking at or fixing in a
future release. Non-bugs, but features that are highly desirable, are
logged as "rfe" reports (requests for enhancement). We tend to look at
all of these reports often (we get a list of the top priority bugs and rfes
daily) and try to work them into our schedules according to their priorities.


Finally, regarding your earlier response:

> : > I am more
> : > concerned about the apparent lack of care that some implementors put
> : > in implementing such basic functions as RANDOM.

Make no mistake about it; we care.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Christopher Browne

Jul 28, 1999
On Tue, 27 Jul 1999 11:12:33 +0200, Fernando Mato Mira
<mato...@iname.com> wrote:
>Bruno Haible wrote:
>
>> * Its startup time (on Linux - except Linux/Sparc - or Solaris) is only 2.5
>> times as high as a shell's startup time. You can therefore use it as a
>> script interpreter (with structures and CLOS), or as a CGI interpreter.
>
>Now _THIS_ is news. It means one can forget about Scheme for scripts.
>I didn't know that.
>[Hm. But what about fast _compiled_ scripts with fast startup?]

I finally got CMU-CL installed on my Debian box (it's been a "not
flawless" install, generally...) and don't see a terribly perceptible
difference between the startup time for CMU-CL and that for CLISP.
--
If you stand in the middle of a library and shout "Aaaaaaaaargh" at the
top of your voice, everyone just stares at you. If you do the same thing
on an aeroplane, why does everyone join in?
cbbr...@hex.net- <http://www.ntlug.org/~cbbrowne/langlisp.html>

Erik Naggum

Jul 28, 1999
* bern...@cli.di.unipi.it (Pierpaolo Bernardi)

| Reporting bugs to Franz has not worked for me in the past.

how do you define "work" for reporting a bug?

| Surely I hope that they fix this. If reporting bugs in ACL in this
| newsgroup makes Franz fix them, I'll report here any new bug that I find.

I wish you wouldn't. by reporting a bug to Franz Inc, you will learn
whether it has already been reported and what the status is, you report a
bug in the interest of having it fixed in your product, i.e., you have a
reason the bug impacts you that is not mere frustration, and you let
Franz Inc take part in your problems. all of this is constructive. by
reporting it here, you would likely report bugs that have been fixed or
have a known workaround, it is unlikely that the bug makes a business
difference to you since USENET is not used to divulge business sensitive
information, meaning that the bug would be a "disassociated bug" that it
doesn't make any sense to provide workarounds for, you would most likely
report the bug in a similarly hostile way to what you have done so far,
which can only hurt Franz Inc for no reason at all, and finally, you get
to decide what is a bug or not, and Franz Inc would have a hard time
defending the expenditure of time and effort countering your misguided
views of what constitutes bugs. all of this is destructive. you concede
that it is, too, the way you formulate the above.

your ardent defense of the CLISP implementation no matter what the
criticism and your destructiveness towards other players in the Common
Lisp market indicate that you are not driven by principle or by a desire
to see good Common Lisp implementations, but by something else that
ignores problems in one implementation and exaggerates problems in
another. a "something else" that fits is "not being of a particularly
rational mind". you may wish to alter your behavior so at least to give
a different impression.

Fernando Mato Mira

Jul 28, 1999
Raymond Toy wrote:

> I think I know the reason for the relatively slow results for CMUCL.
> The generator in this case creates the bignum by essentially
> overlapping a bunch of 32-bit random integers by 3 bits. The intent
> is to enhance the randomness of the least significant bits. However,
> the MT-19937 generator is supposed to have good randomness for the
> entire 32 bits. If we truly concatenate the 32-bit numbers together,

What is "good"? Unless you are comparing the same distribution, or can say that
the one that `is more random' is faster, it's like comparing apples and oranges...

http://random.mat.sbg.ac.at/

Pierre R. Mai

Jul 28, 1999
Fernando Mato Mira <mato...@iname.com> writes:

What exactly is your point here? Raymond was making observations
about the implementation of bignum random number generation from the
32-bit RNs that are generated by the MT-19937 "primitive" generator in
CMU CL. His observation was that the technique used was sub-optimal,
since it tried to counter a problem in the usual simplistic RNGs (the
problem of "little randomness" in the least significant bits), a
problem that is not known to be present in MT-19937.
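The plain-concatenation approach is easy to sketch in a few lines of
portable Common Lisp. NEXT-WORD below merely stands in for the
implementation's primitive 32-bit generator (MT-19937 in CMU CL's case);
for illustration it is defined on top of ordinary RANDOM, so take this
as a sketch of the technique, not as anyone's actual code:

```lisp
;; NEXT-WORD stands in for the primitive generator (e.g. MT-19937).
;; Here it is just CL's own RANDOM, purely for illustration.
(defun next-word ()
  (random (expt 2 32)))

(defun random-bignum (bits)
  "Return a random integer in [0, 2^BITS), built by concatenating
full 32-bit words with no overlapping of bits."
  (let ((result 0))
    (dotimes (i (ceiling bits 32))
      (setf result (logior (ash result 32) (next-word))))
    ;; Trim the excess high bits so the result fits in BITS bits.
    (ldb (byte bits 0) result)))
```

An actual RANDOM must of course still reduce such an integer to the
caller's limit (10^500 < 2^1661, for instance), e.g. by rejection.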

If you really wanted to know the effects of the absence (_or presence_)
of this technique on the quality of the generated bignums, you'd have
to go through the usual theoretical and statistical tests. Since I
assume that these tests haven't been run on the bignum generator as it
stands, you neither gain nor lose anything here. But given that the
least significant bits of MT-19937 have been examined rigorously, and
have exhibited equally good results on the usual tests as the other
bits, it seems theoretically sound to change the algorithm as proposed.

For further information see the original article on MT-19937, published
by Matsumoto and Nishimura in ACM TOMACS 1/1999, p. 3-30. MT-19937 has
also been recommended for RNG in a number of introductory papers on RNG
for simulation use.

Regs, Pierre.

Fernando Mato Mira

Jul 28, 1999
"Pierre R. Mai" wrote:

None of this is obvious from the above. But the main point is that a slower
RNG might work in a case where a faster one doesn't, and the idea of having 1
canonical function called `RANDOM' is pretty dangerous in the hands of the
noninitiated, as evidenced by the issue that triggered this discussion.


Raymond Toy

Jul 28, 1999
>>>>> "Pierre" == Pierre R Mai <pm...@acm.org> writes:

Pierre> stands, you neither gain nor lose anything here. But given
Pierre> that the least significant bits of MT-19937 have been examined
Pierre> rigorously, and have exhibited equally good results on the
Pierre> usual tests as the other bits, it seems theoretically

I looked briefly at the paper. They give results for the groups of
bits starting from the most significant. The results say that the
most significant bit is extremely random and all 32-bits are also very
random. However, they don't include any results for the least
significant bits, but do mention in passing that the least 6 bits are
2000-some-equidistributed, which means it's very random, I think.

Ray

Raymond Toy

Jul 28, 1999
>>>>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:

Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
Fernando> noninitiated as evidenced by the issue that triggered this discussion.

Why is one canonical RANDOM bad? No one seems to complain there's
just one canonical function "COS".

(cos (expt 2d0 120)) returns 0d0 or 1d0 on many Lisps. According to
one of Kahan's papers, the result should be -0.9258790228548379d0.
(CMUCL sparc but not x86 returns this answer because the libc
implementation does this.)

It seems to me that this is a quality of implementation issue. If the
implementation has a good well-tested RANDOM function, then that will
satisfy just about everyone. For the few where it won't, they'll have
to roll their own. The same can be said for COS too, though.

Ray

Fernando Mato Mira

Jul 28, 1999
Raymond Toy wrote:

> >>>>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:
>
> Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
> Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
> Fernando> noninitiated as evidenced by the issue that triggered this discussion.
>
> Why is one canonical RANDOM bad? No one seems to complain there's
> just one canonical function "COS".

But doesn't an optimal approximation to COS at a given precision `exist'? Can there be an
optimal `RANDOM'?

Raymond Toy

Jul 28, 1999
>>>>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:

Fernando> But doesn't an optimal approximation to COS at a given
Fernando> precision `exist'? Can there be an optimal `RANDOM'?

The existence of an optimal approximation doesn't mean the
implementation actually does this or is even willing to do this. And
optimal needs to be defined, so there can be an "optimal" RANDOM, for
an appropriately chosen definition of optimal.

Ray


Tim Bradshaw

Jul 28, 1999
* Bruno Haible wrote:

> You will be able to compile clisp (like other software) as either a 32-bit
> application or a 64-bit application. The code space needed for either will
> probably be the same, but the data (the memory images) will likely be 60% larger
> in 64-bit mode.

Sorry, this wasn't the question I meant to ask (bad wording on my
part).

What I meant was, when clisp is built on a 64-bit machine (with
suitable C-level support &c) will things like fixnums and so forth be
wider than they are on 32-bit machines?

Thanks

--tim

Stig Hemmer

Jul 28, 1999
Raymond Toy <t...@rtp.ericsson.se> writes:
> The existence of an optimal approximation doesn't mean the
> implementation actually does this or is even willing to do this. And
> optimal needs to be defined, so there can be an "optimal" RANDOM, for
> an appropriately chosen definition of optimal.

Well, the problem is that different people's definitions of "optimal"
won't match up. Won't even be compatible.

E.g. One person will only be satisfied by hardware-generated true
random bits. Another person prizes execution speed above all.

These two people will never agree on which is the optimal RANDOM.

With COS, on the other hand, people are much more likely to agree on
what an optimal implementation should do. If not totally, then at
least enough to be satisfied by the same implementation.

Recommended reading: the chapter on random numbers in Donald E. Knuth's
book "The Art of Computer Programming".

Stig Hemmer,
Jack of a Few Trades.

Thomas A. Russ

Jul 28, 1999
hai...@clisp.cons.org writes:

> - No support for `update-instance-for-redefined-class' because this would
> cause performance penalties in the rest of CLOS, and it's not used anyway.

I would take issue with this. It is one reason that the Loom software
will not run on CLISP. We use UPDATE-INSTANCE-FOR-REDEFINED-CLASS in
our system.

--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

Fernando Mato Mira

Jul 28, 1999
Raymond Toy wrote:

> >>>>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:
>
> Fernando> But doesn't an optimal approximation to COS at a given
> Fernando> precision `exist'? Can there be an optimal `RANDOM'?
>

> The existence of an optimal approximation doesn't mean the
> implementation actually does this or is even willing to do this. And
> optimal needs to be defined, so there can be an "optimal" RANDOM, for
> an appropriately chosen definition of optimal.

But a clueless user or even a naive non-expert as myself with his fair
share of numerical analysis courses during his college days might not be
too unreasonable to expect COS to be `precise first, fast second' (I
actually never thought that an implementation might go the other way
before).
A couple of times I've done `man random' just to check the arguments and
skipped all the discussion about the method used, except for any line
that might point to a better one (which I would usually adopt just like
that). Those were for pretty simple uses, but the same might have
happened if I had been implementing some NN learning using Monte Carlo
simulation. And maybe the particular test sets would have passed, so you
label it as `OK'. It was not until this year, when I was looking for a
random number generator that I could use for network initialization in a
CSMA/CD fashion in an embedded controller for some cheap appliances,
that I saw the problems.
[Maybe I read something about the importance of RNGs for Monte Carlo
before, but I don't remember.]
It seems pretty clear that you can go through a curriculum which is 50%
math and 50% CS, where you get banged in the head enough about
approximation errors, but not about distribution problems (maybe because
the basic statistics, operations research, and numerical analysis courses
each have their own focus).

It's also possible to overlook the issue when changing, or even worse,
upgrading, compilers. Having the user define RANDOM on his own in terms
of some RANDOM-WHATEVER seems more sound to me: he cannot just assume it
will keep working the same on a different compiler, he is forced to
think if it's not there, and he can be sure it won't get pulled out from
under his feet in the next release.

I meant `optimal' in the sense that it works for all programs that manage
to run with some appropriately chosen pseudorandom function.

But maybe everybody will have real hardware-based RANDOM in the near
future, and then the story will reverse..


Raymond Toy

Jul 28, 1999
>>>>> "Stig" == Stig Hemmer <st...@pvv.ntnu.no> writes:

Stig> Raymond Toy <t...@rtp.ericsson.se> writes:
>> The existence of an optimal approximation doesn't mean the
>> implementation actually does this or is even willing to do this. And
>> optimal needs to be defined, so there can be an "optimal" RANDOM, for
>> an appropriately chosen definition of optimal.

Stig> Well, the problem is that different peoples definitions of "optimal"
Stig> won't match up. Won't even be compatible.

Stig> E.g. One person will only be satisfied by hardware-generated true
Stig> random bits. Another person prices execution speed above all.

Stig> These two people will never agree on which is the optimal RANDOM.

My reply was rather flippant, and I should have included a smiley.

:-)

Stig> With COS, on the other hand, people are much more likely to agree on
Stig> what an optimal implementation should do. If not totally, then at
Stig> least enough to be satisfied by the same implementation.

This is circular. You allow COS to work for some definition of
optimal for most people, but not RANDOM. Makes no sense to me.
Granted, there are probably many more definitions of "optimal" for
RANDOM than for COS.

Stig> Recommended reading: The chapter on random numbers in Donald E. Knuths
Stig> book "The Art of Computer Programming"

Done so. Several times.

Ray

Raymond Toy

Jul 28, 1999
>>>>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:

Fernando> too unreasonable to expect COS to be `precise first,
Fernando> fast second' (I actually never thought that an
Fernando> implementation might go the other way before).

I think most implementations of COS are much better now than before,
but sometimes "easy" was preferred over "precise" or "fast". If you
read some of Kahan's articles you see that "fast" is probably more
important than anything else, even for simple things like a*b + c
which might use a multiply-accumulate instruction instead of a
multiply and then a separate add.

Fernando> I meant `optimal' in the sense that it works for all
Fernando> programs that manage to run with some appropriately
Fernando> chosen pseudorandom function.

In that case, I suspect even COS would fail your test.

Fernando> But maybe everybody will have real hardware-based RANDOM
Fernando> in the near future, and then the story will reverse..

I thought you could already buy such things. Some hardware with
radioactive source and detector that attaches to a serial port or
printer port.

Ray

Robert Monfera

Jul 28, 1999
"Thomas A. Russ" wrote:
>
> hai...@clisp.cons.org writes:
>
> > - No support for `update-instance-for-redefined-class' because this would
> > cause performance penalties in the rest of CLOS, and it's not used anyway.
>
> I would take issue with this. It is one reason that the Loom software
> will not run on CLISP. We use UPDATE-INSTANCE-FOR-REDEFINED-CLASS in
> our system.

Although I respect CLISP and Corman Lisp a lot, CLOS seems to suffer
most from the lack of compliance in these implementations (probably
because Corman CLOS+MOP is based on Closette, I don't know about
CLISP).

In what way would performance penalties occur if the dynamic features of
CLOS were implemented (maybe as an -ansi option)? Isn't it a trade-off
between the performance penalty of supporting the functionality and not
having the functionality at all?
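For reference, this is roughly the kind of code that depends on the
feature; the sketch below follows the spirit of the polar-coordinates
example in the ANSI spec, with hypothetical class and slot names:

```lisp
;; Instances are updated lazily after a DEFCLASS redefinition; this
;; method migrates the old slot values when an instance is next touched.
(defclass point () ((x :initarg :x) (y :initarg :y)))
(defvar *p* (make-instance 'point :x 3 :y 4))

;; Redefine the class: X and Y are discarded, RHO is added.
(defclass point () ((rho :initform 0)))

(defmethod update-instance-for-redefined-class :after
    ((instance point) added discarded plist &rest initargs)
  (declare (ignore added discarded initargs))
  ;; PLIST carries the values of the discarded slots.
  (setf (slot-value instance 'rho)
        (sqrt (+ (expt (getf plist 'x) 2)
                 (expt (getf plist 'y) 2)))))

(slot-value *p* 'rho)   ; => 5.0 -- *p* is migrated on first access
```

Without this hook, existing instances silently lose their old state on
class redefinition, which is exactly what long-running Loom-style
systems cannot afford.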

Maybe there is a Closette-based CLOS compatibility package out there?

Robert

Pierpaolo Bernardi

Jul 28, 1999
Duane Rettig (du...@franz.com) wrote:
: bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:
: > Tim Bradshaw (t...@tfeb.org) wrote:
: > : bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

: > Reporting bugs to Franz has not worked for me in the past.

: Though we encourage people who use our unsupported products to report
: problems, bugs, and anomalies to us, we never give any promise of
: support for these unsupported products.

Of course.

: > Surely I hope that they fix this. If reporting bugs in ACL in this
: > newsgroup makes Franz fix them, I'll report here any new bug that I find.

: There are two things that you need in order to guarantee that bugs get fixed
: in Allegro CL:

: 1. An avenue to let us know that the bug exists. We read a few newsgroups
: and sometimes grab bugs or potential bugs from them, and try to work them
: in to the fabric of the lisp. However, we still prefer that you at least
: send a report to bu...@franz.com, so that we are sure not to miss anything.
: Also, the distribution for such discussion may be a little too wide for
: this newsgroup.

Three bug reports have not even produced an acknowledgement of having
received the mails. What you say in this usenet article is the first
sign I have that you have actually received these mails. Do you find it
reasonable for people to keep sending bug reports to what appears to be
a black hole?

: I am not afraid to admit that our product is not perfect
: (yet :-), but I would not like to see lispers who are not users of
: Allegro CL bothered unnecessarily by complaints about our product.

I think that lispers, whether or not users of Allegro, are very
interested in discussing flaws, defects and strong points of the
available lisp implementations.

Best regards,
Pierpaolo Bernardi

William Tanksley

Jul 28, 1999
On 28 Jul 1999 10:17:31 -0400, Raymond Toy wrote:
>>>>>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:

> Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
> Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
> Fernando> noninitiated as evidenced by the issue that triggered this discussion.

>Why is one canonical RANDOM bad? No one seems to complain there's
>just one canonical function "COS".

Because there _is_ only one COS function. There are a large number of
possible RANDOM functions, and almost all of them are very bad, and most
of the remaining ones are bad for most purposes.

>(cos (expt 2d0 120)) returns 0d0 or 1d0 on many Lisps. According to
>one of Kahan's papers, the result should be -0.9258790228548379d0.
>(CMUCL sparc but not x86 returns this answer because the libc
>implementation does this.)

Then the COS function on those Lisps is buggy for that value. No problem.

>It seems to me that this is a quality of implementation issue. If the
>implementation has a good well-tested RANDOM function, then that will
>satisfy just about everyone. For the few where it won't, they'll have
>to roll there own. The same can be said for COS too, though.

Needs for RANDOM differ according to use. Many games and tests need
repeatability, so an RNG with seed extraction is best. General crypto
needs a huge period and a vast number of seeds, so seed extraction is not
so nice. OTP crypto can't have repeatability or seed extraction (of
course, it can't be done in software, and there's no proven way to do it
any other way either).

That's only two fields, without even considering performance requirements.

>Ray

--
-William "Billy" Tanksley

Pierpaolo Bernardi

Jul 28, 1999
Erik Naggum (er...@naggum.no) wrote:
: * bern...@cli.di.unipi.it (Pierpaolo Bernardi)

: | Reporting bugs to Franz has not worked for me in the past.

: how do you define "work" for reporting a bug?

At the very least, it should have some detectable effect. Like, say, a
confirmation of having received the report.

: | Surely I hope that they fix this. If reporting bugs in ACL in this
: | newsgroup makes Franz fix them, I'll report here any new bug that I find.

: I wish you wouldn't.

I could not care less what you wish.

: by reporting a bug to Franz Inc, you will learn
: whether it has already been reported and what the status is,

Done it. Have not learned anything of what you describe.

: you report a
: bug in the interest of having it fixed in your product, i.e., you have a
: reason the bug impacts you that is not mere frustration, and you let
: Franz Inc take part in your problems. all of this is constructive.

Since I'm not a customer of Franz Inc, I didn't report the bugs
expecting that they would fix them for me. Naively, I thought that they
could be interested in bug reports about their product, whether or not
the reporting person is a paying customer.

: by
: reporting it here, you would likely report bugs that have been fixed or
: have a known workaround, it is unlikely that the bug makes a business
: difference to you since USENET is not used to divulge business sensitive
: information,

If I were using ACL for business, I would have bought a licence.
Your writing does not make any sense (no news here).

[... usual naggum drivels, elided]

P.

Johan Kullstam

Jul 28, 1999
Raymond Toy <t...@rtp.ericsson.se> writes:

> >>>>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:
>
> Fernando> RNG might work in a case where a faster one doesn't, and the idea of having 1
> Fernando> canonical function called `RANDOM' is pretty dangerous in the hands of the
> Fernando> noninitiated as evidenced by the issue that triggered this discussion.
>
> Why is one canonical RANDOM bad? No one seems to complain there's
> just one canonical function "COS".

there is only one COS function.

in contrast there are a plethora of random number generators (RNGs).

all software RNGs are algorithmically driven pseudo-random number
generators. the values only *look random*. and they only look random
or not depending on how you look at them!

different people need different things from their RNG. in my modem
simulations i do at work, the program typically spends 10-25% of its
time in the RNG itself. since i fire off long simulations (lasting
hours to weeks), i've gotten a quick generator with enough randomness
and bummed the stuffing out of it. (it's written in C++ (no flames
please) and based on an algorithm by knuth. numerical recipes calls
it ran3. if anyone wants it i am happy to share.)

one thing CL could do better would be to offer *multiple* (no less
than 3) random number generators *with explicit and thorough
documentation and repeatability across architecture and vendor*.
undocumented and unknown RNGs are *worthless* imho. anyone who has
done much work with RNGs knows (through pain, suffering and much
gnashing of teeth) one size does not fit all. that way you could be
assured of portability and consistency along with freedom to choose a
suitable RNG for your application.

in RNGs many things are important
0) repeatability (same seed => same results)
1) independence of samples
2) seed complexity (scalar versus complex structure such as an array)
3) speed

note that
0) and 1) are obviously contradictory,
and that
RNGs are 100% predictable, yet they must still appear random and
independent in the application

everything is a trade-off. what appears to be independent in one
application may not be in a another statistical test. speed may or
may not be crucial.
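a sketch of what such an explicit, documented generator interface could
look like in CL. the algorithm here is a throwaway placeholder (a bare
69069 LCG), *not* a recommendation -- the point is the interface and
property 0:

```lisp
;; each generator is an explicit object seeded by the caller, so the
;; same seed always reproduces the same stream (property 0 above).
(defstruct (rng (:constructor make-rng (seed &aux (state seed))))
  seed state)

(defun rng-next (rng)
  "return the next 32-bit output and advance RNG's state."
  (setf (rng-state rng)
        (ldb (byte 32 0) (+ (* 69069 (rng-state rng)) 1))))

;; repeatability: two generators with the same seed agree exactly.
(let ((a (make-rng 12345))
      (b (make-rng 12345)))
  (equal (loop repeat 5 collect (rng-next a))
         (loop repeat 5 collect (rng-next b))))
;; => T
```

several named constructors (make-mt19937, make-lcg, ...) behind one such
interface would give the documented, portable choice argued for above.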

> (cos (expt 2d0 120)) returns 0d0 or 1d0 on many Lisps. According to
> one of Kahan's papers, the result should be -0.9258790228548379d0.
> (CMUCL sparc but not x86 returns this answer because the libc
> implementation does this.)
>

> It seems to me that this is a quality of implementation issue. If the
> implementation has a good well-tested RANDOM function, then that will
> satisfy just about everyone.

no, it will satisfy only the naive.

> For the few where it won't, they'll have to roll there own.

is this the scheme answer?

> The same can be said for COS too, though.

not really.

--
J o h a n K u l l s t a m
[kull...@ne.mediaone.net]
Don't Fear the Penguin!

Duane Rettig

Jul 28, 1999
bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> Duane Rettig (du...@franz.com) wrote:
> : bern...@cli.di.unipi.it (Pierpaolo Bernardi) writes:

> : > Surely I hope that they fix this. If reporting bugs in ACL in this
> : > newsgroup makes Franz fix them, I'll report here any new bug that I find.
>

> : There are two things that you need in order to guarantee that bugs get fixed
> : in Allegro CL:
>
> : 1. An avenue to let us know that the bug exists. We read a few newsgroups
> : and sometimes grab bugs or potential bugs from them, and try to work them
> : in to the fabric of the lisp. However, we still prefer that you at least
> : send a report to bu...@franz.com, so that we are sure not to miss anything.
> : Also, the distribution for such discussion may be a little too wide for
> : this newsgroup.
>
> Three bug reports have not even produced an acknowledgement of having
> received the mails. What you say in this usenet article is the first
> sign I have that you have effectively received these mails. Do you
> find reasonable for people to keep sending bug reports to what appears
> to be a black hole?

No, not at all. This has troubled me since I first looked into it
yesterday, and although I won't make any policy statements here,
I can say that we are actively discussing solutions to removing
this "black hole" perception.

> : I am not afraid to admit that our product is not perfect
> : (yet :-), but I would not like to see lispers who are not users of
> : Allegro CL bothered unnecessarily by complaints about our product.
>
> I think that lispers, whether or not users of Allegro, are very
> interested in discussing flaws, defects and strong points of the
> available lisp implementations.

Well, I'll leave it to users on the net to respond to this question;
as a vendor, I honestly don't know. However, the problem you are
describing is only present for non-supported users - we are relatively
new to the concept of supporting non-support :-) On the other hand,
a recent survey of our _supported_ customers showed a high satisfaction
rate for our technical support for that group. For such matters, the
direct, personal touch is much better suited than a usenet discussion.

Pierre R. Mai

Jul 29, 1999
Fernando Mato Mira <mato...@iname.com> writes:

> But a clueless user or even a naive non-expert as myself with his fair
> share of numerical analysis courses during his college days might not be
> too unreasonable to expect COS to be `precise first, fast second' (I
> actually never thought that an implementation might go the other way
> before).

You can't trust your expectations. If you _really_ care about the
details of _any_ numerical function/operator, you have to go out and
check it, in each and every implementation and each and every
version. Nothing else will do!

That being said, most people don't have to care about it in that much
detail, if they take a few precautionary steps.

That is still no reason not to include a fast, high-quality RNG as the
default, instead of the usual run-of-the-mill stuff. Especially the
usual LCGs you find in many implementations of many languages are not a
good choice for use as the standard RNG, since unwary users are much
more likely to introduce RNG artifacts into their programs with an LCG
than with most modern RNGs. (Keywords here are the lattice structure
of LCGs, and the quite small period, where empirical evidence suggests
that sqrt(P) or fewer samples should be used. With a period of ~2^32,
many RNGs will only reliably give you around 65000 samples. That's
not much nowadays.)
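The low-bit defect is easy to demonstrate with a toy power-of-two-modulus
LCG (the multiplier/increment pair below is the well-known one from
Numerical Recipes); watch the least significant bit of successive
outputs:

```lisp
;; A classic 32-bit LCG: x <- (a*x + c) mod 2^32.
;; With odd A and odd C, the least significant bit merely alternates.
(defun make-lcg (seed)
  (let ((x seed))
    (lambda ()
      (setf x (ldb (byte 32 0) (+ (* 1664525 x) 1013904223))))))

(let ((gen (make-lcg 42)))
  (loop repeat 10 collect (logand (funcall gen) 1)))
;; => (1 0 1 0 1 0 1 0 1 0)
```

More generally, bit k of such a generator has period at most 2^(k+1),
which is exactly the kind of artifact an unwary user never tests for.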

> It's also possible to overlook the issue when changing, or even worse,
> upgrading, compilers. Having the user define RANDOM on his own to be
> RANDOM-WHATEVER, so that he can assume that it should continue working
> the same on a different compiler,
> he is forced to think if it's not there, or that he can be sure it won't
> get pulled under his feet in the next release seems more sound to me.

If we take this reasoning to the end, this would imply that RANDOM
should not be part of the standard, or the standard would have to
specify a particular algorithm. Both are impractical. You can't
prescribe an algorithm, since this hampers progress, and can give you
big trouble should an important defect be found in the algorithm you
specified. So this would only leave the option of excluding RANDOM
from the language. But this would only encourage J. Random Loser to
implement an RNG himself, which usually leads to pretty disastrous
results. So to protect J. Random Loser from the worst effects of his
own ignorance, I'd claim that we should convince language implementors
to use high-quality RNGs for the default RANDOM. High-quality RNGs
which are known to have few defects and pitfalls.

Let me make this clear again: RANDOM is in the language to protect
and help J. Random Loser, and not for the serious user of RNGs.

> I meant `optimal' in the sense that it works for all programs that manage
> to run with some appropriately chosen pseudorandom function.

Modern RNGs like MT-19937 (or the many others that exist today) are very
versatile. They work for nearly all non-specialized applications. You
still have to have a number of other RNGs at hand to test for artifacts,
but for most normal uses, you can get by with one RNG nowadays.

> But maybe everybody will have real hardware-based RANDOM in the near
> future, and then the story will reverse..

A hardware-based RANDOM is even less optimal than a modern RNG. Most
software applications of RNGs require repeatability, which means you
must be able to regenerate the stream of "random" numbers. With real
random numbers, you'd have to record the stream, which can get pretty
costly when the numbers get big (which they do pretty quickly in many
fields).
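In Common Lisp terms, repeatability falls out of RANDOM-STATE objects:
copy a state, and replaying from the copy regenerates exactly the same
stream, so you record the (small) state rather than the numbers
themselves. A minimal sketch:

  (defun same-stream-p ()
    ;; Two copies of one state must yield identical sample streams.
    (let* ((master (make-random-state t))    ; fresh, randomly seeded
           (s1 (make-random-state master))   ; copy #1
           (s2 (make-random-state master)))  ; copy #2
      (equal (loop repeat 10 collect (random 1000 s1))
             (loop repeat 10 collect (random 1000 s2)))))

  ;; (same-stream-p) => T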

The (software) field which would benefit the most from real random
numbers would probably be cryptography and security. Their uses of
random numbers differ pretty much from most other uses of random
numbers.

Pierre R. Mai

Jul 29, 1999
Raymond Toy <t...@rtp.ericsson.se> writes:

Although he only gives the exact numbers for the least 6 bits in his
paper, tests have also been done (as is usual) on shorter runs of
least-significant bits, without obvious problems. I can't quite
remember in which paper I read a more detailed analysis.

Matsumoto himself recommends simply concatenating successive 32-bits
samples to get longer sequences. Hellekallek's team also did tests on
MT-19937 (their usual pLab reports, but also other work IIRC), and
reported favourably on MT-19937's performance, though I'd have to dig
up their report to get the details on this.
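The concatenation recipe is a one-liner in Lisp. Here RNG32 stands for
any function of no arguments that returns a fresh 32-bit sample (a
placeholder name for illustration, not part of any actual MT-19937 API):

  (defun random-u64 (rng32)
    ;; Glue two successive 32-bit draws into one 64-bit sample,
    ;; high word first (CL evaluates arguments left to right).
    (logior (ash (funcall rng32) 32)
            (funcall rng32)))

  ;; e.g. (random-u64 (lambda () (random (expt 2 32))))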

Although MT-19937 is fairly new (~2 years), it is based on TT800,
which is a couple of years older, and which has been tested very
favourably before.

Pierre R. Mai

Jul 29, 1999
Fernando Mato Mira <mato...@iname.com> writes:

> Nothing of this is obvious from the above. But the main point is

??? Do you claim I have insights into Raymond's mind that you do not
possess? Or what do you claim? I simply restated what was already
contained in Raymond's and my earlier posts. No magic PSI-factor
involved.

> that a slower RNG might work in a case where a faster one doesn't,
> and the idea of having 1 canonical function called `RANDOM' is
> pretty dangerous in the hands of the noninitiated as evidenced by
> the issue that triggered this discussion.

I think I've made it absolutely clear in my original post, that comparing
RNGs is non-trivial. The whole point of this was to show the absurdity of
simple benchmarks. W.r.t. the non-initiated and RANDOM, see my other
post. And w.r.t. "the main point": Speed and quality of RNGs aren't
that highly correlated, as can be witnessed by the fact that MT-19937 is a
very high speed RNG, yet still is one of the most successful RNGs when it
comes to theoretical and statistical tests[1]. So what is your main point
worth? It doesn't really apply to Raymond's post, since his modification
was not to the RNG itself, and in any case is backed by theory. And if
you want to claim that CLISP or ACL's RNGs are slower because they are
"better", I'd like to see evidence for this. Have you actually looked up
the test results for the RNG algorithms in question?

Regs, Pierre.


Footnotes:
[1] And MT-19937 is not the only fast, high-quality RNG.

Pierre R. Mai

Jul 29, 1999
Stig Hemmer <st...@pvv.ntnu.no> writes:

> Recommended reading: The chapter on random numbers in Donald E. Knuth's
> book "The Art of Computer Programming"

While this is a classic on RNGs, I would recommend reading some of
the newer overview papers and reports on RNGs after TAOCP. There has
been much development in the world of RNGs since TAOCP, and anyone
seriously using RNGs should be aware of that. Sadly, there exists a
huge gap between theory and practice in this area.

Regs, Pierre.

Pierre R. Mai

Jul 29, 1999
Raymond Toy <t...@rtp.ericsson.se> writes:

> I thought you could already buy such things. Some hardware with
> radioactive source and detector that attaches to a serial port or
> printer port.

I did once hook up a scintillation counter to a computer, and used this
to generate random numbers for fun. It was part of a bigger project, and
the RNG part was only to get acquainted with the device.

Christopher Browne

Jul 29, 1999
On Wed, 28 Jul 1999 17:28:08 +0200, Fernando Mato Mira
<mato...@iname.com> wrote:
>Raymond Toy wrote:
>> "Fernando" == Fernando Mato Mira <mato...@iname.com> writes:
>>> RNG might work in a case where a faster one doesn't, and the idea
>>> of having 1 canonical function called `RANDOM' is pretty dangerous
>>> in the hands of the noninitiated as evidenced by the issue that
>>> triggered this discussion.
>>
>> Why is one canonical RANDOM bad? No one seems to complain there's
>> just one canonical function "COS".
>
>But doesn't an optimal approximation to COS at a given precision
>`exist'? Can there be an optimal `RANDOM'?

The "optimal" approximation to the cosine function is the value that
is the closest to the actual value for the particular argument.

If a value is incorrect, that is readily evaluated.

The same is not true for random number generators; different
applications that use RNGs may value different properties.

The "traditional" RNGs use linear congruential generator functions.

(defvar seed 12455)
(defconstant modulus 32767)
(defconstant multiplier 1103515245)
(defconstant adder 12345)
(defun randvalue ()
  (setf seed (mod (+ (* seed multiplier) adder) modulus)))

(No claims made here of how wonderful these parameters are!)

This is dead simple to implement; unfortunately it's not Terribly
Random, and some applications may expose problems with this.
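One symptom is easy to demonstrate: with a modulus this small, the
generator cannot visit more than MODULUS distinct states before the
sequence repeats. A quick sketch (a hypothetical helper, reusing the
parameters above) that measures the actual cycle length:

  (defun lcg-cycle-length (seed &optional (multiplier 1103515245)
                                          (adder 12345)
                                          (modulus 32767))
    ;; Walk the sequence until some state recurs; the pigeonhole
    ;; principle bounds the search at MODULUS steps.
    (let ((seen (make-hash-table))
          (x seed))
      (loop for i from 0 to modulus
            do (let ((prev (gethash x seen)))
                 (when prev (return (- i prev)))
                 (setf (gethash x seen) i
                       x (mod (+ (* x multiplier) adder) modulus))))))

Whatever it returns for a given seed, it can never exceed 32767 here.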

More recently, RNG schemes based on cryptographic functions have been
espoused for some purposes.

Other RNG schemes, which only provide fairly small values (e.g. 32
bits) yet have periods between repetitions of values that are *vastly*
larger than 2^32, have become fairly popular.

[What I would really like to see, coded in some Lisp variant, is the
algorithm that Knuth presents in the Stanford GraphBase. It's fairly
32-bit-oriented, which is somewhat unfortunate...]
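Not gb_flip itself, but here is a hedged sketch of the same subtractive
lagged-Fibonacci family (lags 55/24, subtraction mod 2^31). The table
initialization below is a stand-in LCG, not Knuth's careful warm-up
procedure, so treat this as an illustration of the recurrence rather
than a port:

  (defconstant +gb-mod+ (expt 2 31))

  (defstruct (sub-rng (:constructor %make-sub-rng))
    (table (make-array 55 :element-type '(unsigned-byte 32)
                          :initial-element 0))
    (index 0))

  (defun make-sub-rng (seed)
    ;; Stand-in table fill; gb_flip's real warm-up is more elaborate.
    (let ((rng (%make-sub-rng)))
      (loop with x = (mod seed +gb-mod+)
            for i below 55
            do (setf x (mod (+ (* x 1103515245) 12345) +gb-mod+)
                     (aref (sub-rng-table rng) i) x))
      rng))

  (defun sub-rng-next (rng)
    ;; X[n] = (X[n-55] - X[n-24]) mod 2^31, over a circular buffer:
    ;; TABLE[INDEX] holds the oldest value X[n-55], and X[n-24] sits
    ;; 31 slots ahead of it.
    (let* ((tab (sub-rng-table rng))
           (i (sub-rng-index rng))
           (v (mod (- (aref tab i) (aref tab (mod (+ i 31) 55)))
                   +gb-mod+)))
      (setf (aref tab i) v
            (sub-rng-index rng) (mod (1+ i) 55))
      v))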
--
Rules of the Evil Overlord #19. "The hero is not entitled to a last
kiss, a last cigarette, or any other form of last request."
cbbr...@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>
