
compiling cmulisp


Klaus Schilling

Oct 5, 1998

I downloaded the cmulisp 18.a source, but there's no instruction in it how to
compile it. What's the trick?

Klaus Schilling

Mike McDonald

Oct 5, 1998
In article <87ogrr1...@ivm.de>,

Don't! Compiling CMUCL is a pain in the duffus! Get the precompiled version
instead. (You need a working version to build a working version anyway!) If
you are really a glutton for punishment, check out Martin Cracauer's web page
for an outline of how to do it. (http://www.cons.org/cracauer/lisp.html)

Mike McDonald
mik...@mikemac.com


James A. Crippen

Oct 8, 1998

Waall... I dunno. I needed the CMUCL source to fiddle with, and being
up in Alaska I decided that that was all I was going to download. (Even
on a light pipe I still get an avg range of 5kBps to 20kBps...) So I
compiled the sucker. It did take a while (~1.5 hour) on a good 200
i586. But there are plenty of other things you can do on that very same
computer that only take an infinitesimal amount of cpu cycles, like read
the CMUCL docs. That's always fun.

Of course, configuring it was somewhat interesting, but fairly painless.

cheers,
james

David Steuber The Interloper

Oct 9, 1998
On Thu, 08 Oct 1998 03:28:22 -0800, "James A. Crippen"
<crip...@saturn.math.uaa.alaska.edu> claimed or asked:

% Waall... I dunno. I needed the CMUCL source to fiddle with, and being
% up in Alaska I decided that that was all I was going to download. (Even
% on a light pipe I still get an avg range of 5kBps to 20kBps...) So I
% compiled the sucker. It did take a while (~1.5 hour) on a good 200
% i586. But there are plenty of other things you can do on that very same
% computer that only take an infinitesimal amount of cpu cycles, like read
% the CMUCL docs. That's always fun.
%
% Of course, configuring it was somewhat interesting, but fairly painless.

Perhaps you can let me know what I'm in for. I got the 18b source. I
also tried to get GARNET, but couldn't get on to the ftp server.

--
David Steuber (ver 1.31.1b)
http://www.david-steuber.com
To reply by e-mail, replace trashcan with david.

When the long night comes, return to the end of the beginning.
--- Kosh (???? - 2261 AD) Babylon-5

"Where do you want to go tomorrow?" --- KDE tool tip

Johann Hibschman

Oct 9, 1998
tras...@david-steuber.com (David Steuber "The Interloper") writes:
>
> Perhaps you can let me know what I'm in for. I got the 18b source. I
> also tried to get GARNET, but couldn't get on to the ftp server.

From what I've seen go across the cmucl-imp mailing list, you should
probably just get the binary release. Wimp out, just this once. ;-)
Also, join the mailing lists. I'm just a quiet observer there, but it
gives me a feel for what's going on.

--Johann

Klaus Schilling

Oct 9, 1998
Johann Hibschman <joh...@physics.berkeley.edu> writes:
>
> From what I've seen go across the cmucl-imp mailing list, you should
> probably just get the binary release. Wimp out, just this once. ;-)
> Also, join the mailing lists. I'm just a quiet observer there, but it
> gives me a feel for what's going on.

So cmulisp becomes no option for me.

Klaus Schilling

Pierre Mai

Oct 9, 1998
Klaus Schilling <Klaus.S...@home.ivm.de> writes:

Could you explain this statement any further? What are your
requirements that are unfulfilled by the simple fact that recompiling
CMU CL from sources is at times a somewhat difficult process?

Regs, Pierre,

who is somewhat confused...

--
Pierre Mai <pm...@acm.org> http://home.pages.de/~trillian/
"Such is life." -- Fiona in "Four Weddings and a Funeral" (UK/1994)

Raymond Toy

Oct 9, 1998
>>>>> "Klaus" == Klaus Schilling <Klaus.S...@home.ivm.de> writes:

Klaus> Johann Hibschman <joh...@physics.berkeley.edu> writes:
>>
>> From what I've seen go across the cmucl-imp mailing list, you should
>> probably just get the binary release. Wimp out, just this once. ;-)
>> Also, join the mailing lists. I'm just a quiet observer there, but it
>> gives me a feel for what's going on.

Klaus> So cmulisp becomes no option for me.

Why? No one said it was truly impossible to compile because it's
obviously not since someone is doing it. It's just probably MUCH
harder than most people expect.

If you want or need to compile CMUCL yourself, you should really join
the cmucl-imp mailing list. People there will help as much as they
can.

Ray


David Steuber The Interloper

Oct 10, 1998
On 09 Oct 1998 00:30:12 -0700, Johann Hibschman
<joh...@physics.berkeley.edu> claimed or asked:

% From what I've seen go across the cmucl-imp mailing list, you should
% probably just get the binary release. Wimp out, just this once. ;-)
% Also, join the mailing lists. I'm just a quiet observer there, but it
% gives me a feel for what's going on.

Where do I send mail to join the list? What is the format?

I moved the 18b tar file to a drive accessible to Linux. I also found
what I hope is the source for GARNET. I also have GARNET
documentation, I think. I don't know if the CMUCL tar file has
documentation or not.

The releases on the cmu site were all for non-Intel machines. Does
this mean that x86 native code compilation won't be supported? Where
would I find an actual binary release? A couple people have mailed me
links to www.cons.org. I haven't had a chance to look at it yet.

Linux gave me the option of compiling my kernel to be optimized for my
CPU. Can I do that with CMUCL? It's not a requirement, but it would
be nice.

David Steuber The Interloper

Oct 10, 1998
On 09 Oct 1998 00:30:12 -0700, Johann Hibschman
<joh...@physics.berkeley.edu> claimed or asked:

% From what I've seen go across the cmucl-imp mailing list, you should
% probably just get the binary release. Wimp out, just this once. ;-)
% Also, join the mailing lists. I'm just a quiet observer there, but it
% gives me a feel for what's going on.

The only binary release I've seen for the x86 so far was a FreeBSD
build.

I've unpacked the tar balls. In the lisp directory is a bunch of
Configure files (including one for Linux) and .c files. It looks like
it is possible to do a virgin build, although I haven't tried to yet.

I also now have Garnet. I still need to get cmucl documentation.

My current tree looks like this (abridged version):

/opt/cmucl/cmucl-src/*
/opt/cmucl/garnet-src/*

I put the HyperSpec and Garnet source in my /usr/doc tree.

Really what's left to do is find out how to do a virgin build. Pierre
Mai kindly mailed me the addresses of the mailing lists and archives
for cmucl. I hope to find a nice recipe for doing a virgin build in
there. I am used to something that looks like this:

Configure
make
make test
make install
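For contrast, the one step CMUCL adds before any of that is a check that a working host Lisp already exists. A minimal shell sketch of that precondition; the variable name and the commented build steps are illustrative, not the actual 18b scripts:

```shell
#!/bin/sh
# Sketch only: CMUCL can't follow the Configure/make recipe above,
# because the compiler for its Lisp sources is a running Lisp image.
# HOST_LISP and the commented steps are illustrative names.
HOST_LISP="${HOST_LISP:-lisp}"
if command -v "$HOST_LISP" >/dev/null 2>&1; then
    HAVE_HOST=yes
    # The Lisp side would be compiled by the host image, e.g.:
    #   "$HOST_LISP" -load compile-the-world.lisp   # (hypothetical name)
    # Only the small C runtime in lisp/ builds the conventional way.
else
    HAVE_HOST=no   # no cold start: fetch a binary release first
fi
echo "working host lisp found: $HAVE_HOST"
```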

Once I've done that, and found a tar ball of the docs and installed
them, I just need to set up XEmacs to be my development environment.
I need to do my .emacs file and get any .el files that would be useful
for cmucl.

Martin Cracauer

Oct 10, 1998
tras...@david-steuber.com (David Steuber "The Interloper") writes:

>On 09 Oct 1998 00:30:12 -0700, Johann Hibschman
><joh...@physics.berkeley.edu> claimed or asked:

>% From what I've seen go across the cmucl-imp mailing list, you should
>% probably just get the binary release. Wimp out, just this once. ;-)
>% Also, join the mailing lists. I'm just a quiet observer there, but it
>% gives me a feel for what's going on.

>Where do I send mail to join the list? What is the format?

Please see http://www.cons.org/cmucl/

>I moved the 18b tar file to a drive accessible to Linux. I also found
>what I hope is the source for GARNET. I also have GARNET
>documentation, I think. I don't know if the CMUCL tar file has
>documentation or not.

There are dozens of tar files on the CMUCL site. As long as you don't
tell us which one you have, we can't tell you whether it has docs in it.

>The releases on the cmu site were all for non-Intel machines. Does
>this mean that x86 native code compilation won't be supported? Where
>would I find an actual binary release? A couple people have mailed me
>links to www.cons.org. I haven't had a chance to look at it yet.

Maybe you should have delayed your post to usenet until you got an
opportunity to look it up?

>Linux gave me the option of compiling my kernel to be optimized for my
>CPU. Can I do that with CMUCL? It's not a requirement, but it would
>be nice.

Unless you seriously enjoy reading generated assembly code that fills
each CPU's pipeline best for its own beauty, what do you care if the
code CMUCL generates is fast on all CPUs instead of just one? Maybe
one of Intel's CPUs is your personal enemy :-)?

Martin

David Steuber The Interloper

Oct 11, 1998
On 10 Oct 1998 21:45:04 GMT, crac...@not.mailable (Martin Cracauer)
claimed or asked:


% tras...@david-steuber.com (David Steuber "The Interloper") writes:
%
% >On 09 Oct 1998 00:30:12 -0700, Johann Hibschman
% ><joh...@physics.berkeley.edu> claimed or asked:
%
% >I moved the 18b tar file to a drive accessible to Linux. I also found
% >what I hope is the source for GARNET. I also have GARNET
% >documentation, I think. I don't know if the CMUCL tar file has
% >documentation or not.

The file name is: cmucl-18b_source.tgz
The file size is: 3,490,672 bytes

There are no docs.

% There are dozens of tar files on the CMUCL site. As long as you don't
% tell us which one you have, we can't tell you whether it has docs in it.
%
% >The releases on the cmu site were all for non-Intel machines. Does
% >this mean that x86 native code compilation won't be supported? Where
% >would I find an actual binary release? A couple people have mailed me
% >links to www.cons.org. I haven't had a chance to look at it yet.
%
% Maybe you should have delayed your post to usenet until you got an
% opportunity to look it up?

RTFM? Me? When I can just ask? :-)

Well, I went there and couldn't find what I wanted straight away. It
seems that all the Linux versions are under the experimental tree
instead of release. It seems there is a prejudice against Linux.

Then there are the choices. There is a Linux tree under experimental.
Bunch of stuff there. Do I want all of it or some of it? What about
this file in experimental:

cmucl-x86_linux_longfloat.tgz 6,022,585 bytes

I haven't unpacked it yet. I don't have Internet working in Linux, so
I have to keep booting to NT :-(

% >Linux gave me the option of compiling my kernel to be optimized for my
% >CPU. Can I do that with CMUCL? It's not a requirement, but it would
% >be nice.
%
% Unless you seriously enjoy reading generated assembly code that fills
% each CPU's pipeline best for its own beauty, what do you care if the
% code CMUCL generates is fast on all CPUs instead of just one? Maybe
% one of Intel's CPUs is your personal enemy :-)?

Maybe I want the CMUCL environment itself to run as fast as possible
on my hardware? For distribution code, it doesn't matter so much.
Especially if I just distribute source under the "here, you build it"
license.

Of course, dual pipeline assembly IS a thing of beauty :-)

So I have this chicken, see? And it hatched from this egg, see? But
the egg wasn't laid by a chicken. It was cross laid by a turkey.

So cmucl requires cmucl to build cmucl. What was cmucl built on
originally?

--
David Steuber (ver 1.31.2a)
http://www.david-steuber.com
To reply by e-mail, replace trashcan with david.

R. Toy

Oct 11, 1998
David Steuber The Interloper wrote:
>
> The only binary release I've seen for the x86 so far was a FreeBSD
> build.

Look a little deeper. You'll find Linux versions in

http://www2.cons.org:8000/ftp-area/cmucl/

>
> I've unpacked the tar balls. In the lisp directory is a bunch of
> Configure files (including one for Linux) and .c files. It looks like
> it is possible to do a virgin build, although I haven't tried to yet.

Without a working version, you can't build CMUCL.

>
> I also now have Garnet. I still need to get cmucl documentation.

One of the tar files should contain the user's guide. If not, you can
find an HTML version at http://www.mindspring.com/~rtoy. This is
mentioned on the CMUCL home page.

> Really what's left to do is find out how to do a virgin build. Pierre
> Mai kindly mailed me the addresses of the mailing lists and archives
> for cmucl. I hope to find a nice recipe for doing a virgin build in
> there. I am used to something that looks like this:

As mentioned above, you MUST have a working version of CMUCL to build a
version of CMUCL. It's not easy. Unless you absolutely, positively
must build a version yourself and you are a glutton for punishment, do
yourself a favor and just get the binaries.

>
> Once I've done that, and found a tar ball of the docs and installed
> them, I just need to set up XEmacs to be my development environment.
> I need to do my .emacs file and get any .el files that would be useful
> for cmucl.
>

I'd recommend using the ILISP package that comes with XEmacs.
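A minimal sketch of such a setup, assuming ILISP is installed alongside XEmacs; the binary path is illustrative and depends on where CMUCL was unpacked:

```elisp
;; Sketch for ~/.emacs (the path is illustrative)
(require 'ilisp)
(setq cmulisp-program "/usr/local/bin/lisp")  ; wherever the CMUCL binary lives
;; M-x cmulisp then starts an inferior CMUCL in a *cmulisp* buffer.
```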

Ray
--
---------------------------------------------------------------------------
----> Raymond Toy rt...@mindspring.com
http://www.mindspring.com/~rtoy

R. Toy

Oct 11, 1998
David Steuber The Interloper wrote:
>
> Well, I went there and couldn't find what I wanted straight away. It
> seems that all the Linux versions are under the experimental tree
> instead of release. It seems there is a prejudice against Linux.

You didn't look hard enough. They're there in
http://www2.cons.org:8000/ftp-area/cmucl/binaries.

Unfortunately, there are no Linux binaries for the real 18b sources.
You can find them for earlier and later versions, but I can't build a
Linux libc5 version for 18b from my current experimental sources.

> Then there are the choices. There is a Linux tree under experimental.
> Bunch of stuff there. Do I want all of it or some of it? What about
> this file in experimental:
>
> cmucl-x86_linux_longfloat.tgz 6,022,585 bytes

The major difference between the experimental version and
the non-experimental version is the support for long-floats. The tarball
above is ancient and I should upload a new version.

> Maybe I want the CMUCL environment itself to run as fast as possible
> on my hardware? For distribution code, it doesn't matter so much.

Right now you'll have to build your own version with the optimizations
you want. I don't think the current sources really optimize for any
particular processor. There are a few optimizations for a Pentium FPU
instead of a 486 FPU. There may be a few others. I don't really know.

>
> So cmucl requires cmucl to build cmucl. What was cmucl built on
> originally?

I think the first versions of CMUCL were done on an RS/6000 machine.
The early versions could be built from scratch but they migrated so that
eventually you had to have a working version to build a new version.
Cross-compilers were used to produce ports for the other architectures.
The x86 versions were cross-compiled from the Alpha port.

Martin Cracauer

Oct 11, 1998
tras...@david-steuber.com (David Steuber "The Interloper") writes:

>There are no docs.

See
http://cvs2.cons.org:8000/ftp-area/cmucl/src/docs/



>Well, I went there and couldn't find what I wanted straight away. It
>seems that all the Linux versions are under the experimental tree
>instead of release. It seems there is a prejudice against Linux.

Oh yes, our primary goal is to damage Linux. CMUCL binaries are built
by the people who spent most time with CMUCL and most people spending
a lot of time on CMUCL prefer FreeBSD over Linux. The maintainer of
the Linux version is busy with other things, too. The glibc change
didn't help, either.

>Then there are the choices. There is a Linux tree under experimental.
>Bunch of stuff there. Do I want all of it or some of it? What about
>this file in experimental:

>cmucl-x86_linux_longfloat.tgz 6,022,585 bytes

This beast is as advertised "experimental". You will not be able to
rebuild normal sources based on this.

>I haven't unpacked it yet. I don't have Internet working in Linux, so
>I have to keep booting to NT :-(

Well, as long as your own domain runs, it can't be that hard ;-)

>% >Linux gave me the option of compiling my kernel to be optimized for my
>% >CPU. Can I do that with CMUCL? It's not a requirement, but it would
>% >be nice.
>%
>% Unless you seriously enjoy reading generated assembly code that fills
>% each CPU's pipeline best for its own beauty, what do you care if the
>% code CMUCL generates is fast on all CPUs instead of just one? Maybe
>% one of Intel's CPUs is your personal enemy :-)?

>Maybe I want the CMUCL environment itself to run as fast as possible
>on my hardware?

We all want the code to run as fast as possible. It's just a matter of
where to invest the time improving the compiler. There's currently
not much point in microtuning things that differ even among the
various Intel CPUs.

Or in other words: As a user, you are expected to have your code run
as fast as possible, but how that is achieved is an entirely different
matter. There is already a :pentium feature, but I don't think it does
anything for now except helping some floating-point hacks.

[...]

>So cmucl requires cmucl to build cmucl.

It isn't strictly true that you need a running CMUCL to build. You can
load the CMUCL compiler (python) into another Lisp implementation
(which can be built from C sources like CLISP does) and then build the
files you need besides the C startup in CMUCL's lisp/ directory. I
don't think that has been done within the last 12 years or so. Even
the x86 port a few years ago was crosscompiled by CMUCL on RISC
workstations instead of running the compiler on other x86 platforms.

>What was cmucl built on originally?

By cross-compiling from MacLISP a compiler that wasn't very capable,
but one capable enough to generate more capable compilers on the new
platform.

You can't look at the process by today's measurements. When CMUCL
started around 1980 (named "Spice Lisp"), it was much simpler. The
Python compiler that causes much of the complexity of current rebuilds
is the second implementation from 1985, the 1980 thing was a bytecode
system. Also, the runtime was hand-coded assembler and didn't need the
alien symbol lookup hack we now have. The folks back then were real
bit-fiddlers and had a deep understanding of the system. They could
easily generate whatever exotic file they needed by hand, or run the
compiler under a different implementation of Lisp.

Today we tangle with several subsystems that add rebuilding complexity
(e.g. PCL, several random number generators), a full-blown aggressively
optimizing compiler that has to implement a very complex language
standard (more data types natively supported by the compiler, changing
struct/class implementations), and we have to interface to the full set
of system calls Unix provides. Usually, without having source for the
OS implementation...

I'm a newcomer, so none of this is official, although it's collected
from mails of folks who should know.

Martin

R. Toy

Oct 11, 1998
Martin Cracauer wrote:
> Oh yes, our primary goal is to damage Linux. CMUCL binaries are built
> by the people who spent most time with CMUCL and most people spending
> a lot of time on CMUCL prefer FreeBSD over Linux. The maintainer of

Hey, I resent that! :-) All of the changes I've added to CMUCL (not
insignificant, but nothing compared to what Paul and Doug and others
have done) were all done using either the Solaris port or the Linux
port. I don't maintain the Linux port because Peter does a better job
of it.

> Or in other words: As a user, you are expected to have your code run
> as fast as possible, but how that is achived is an entirely different
> matter. There is already a :pentium feature but I don't think it does
> anything for now except helping some floating point hacks.

The last time I looked, I think the :pentium feature enabled the
extended-range log(1+x) instruction and a fast copy via FPU registers.

> It isn't strictly true that you need a running CMUCL to build. You can
> load the CMUCL compiler (python) into another Lisp implementation
> (which can be built from C sources like CLISP does) and then build the
> files you need besides the C startup in CMUCL's lisp/ directory. I
> don't think that has been done within the last 12 years or so. Even

Do you think it's still possible to do so? What other lisp was used
back then?

> You can't look at the process by today's measurements. When CMUCL
> started around 1980 (named "Spice Lisp"), it was much simpler. The
> Python compiler that causes much of the complexity of current rebuilds
> is the second implementation from 1985, the 1980 thing was a bytecode
> system. Also, the runtime was hand-coded assembler and didn't need the
> alien symbol lookup hack we now have. The folks back then were real

I understand that the first few versions of the Python compiler took
over 14 hours to build themselves back then. And I get frustrated when
rebuilding CMUCL takes an hour on my PC. :-) Especially when I
introduced some stupid mistake. :-)

Are there any documents on the history of CMUCL? I think it would make
interesting reading.

R. Toy

Oct 11, 1998
David Steuber The Interloper wrote:
>
> Thanks, Ray.

No problem. I expect full payment from you by releasing your 3D
animation system free. :-)

> What's left for me to do now is figure out whether I need to unpack
> the libc5 tarball or the non-libc5 tarball.

The libc5 version should work everywhere. If you have glibc, you should
probably get that version.

> cmucl-dtc_src.tgz

Not necessary unless you want to rebuild the experimental version from
sources.

> lf-elemfun_tar.gz

Not needed. This contains a "portable" implementation of some special
functions that are reasonably accurate for any float type.

In addition, you may want the Garnet sources from the same place. As
someone pointed out on the CMUCL lists, Garnet is about the same size as
CLUE, but does a zillion times more.

Ray

David Steuber The Interloper

Oct 12, 1998
Thanks, Ray.

I ended up pulling down all the files under
experimental/Linux/old-fashioned-tar (or whatever). That should take
a toll on my ISDN bill.

It's easy enough to get too busy to keep up a web site, but I think
the navigation could have been made somewhat simpler.

What's left for me to do now is figure out whether I need to unpack
the libc5 tarball or the non-libc5 tarball.

This is all the stuff I grabbed so far:

/home/david/cmucl-arch:
total 38111
drwxr-xr-x   2 david  users     1024 Oct 11 00:26 .
drwxr-xr-x  11 david  users     1024 Oct 11 00:25 ..
-rw-r--r--   1 david  users     6093 Oct 11 00:26 CompileCL.howto
-rw-r--r--   1 david  users  2073410 Oct 11 00:26 HyperSpec-4-0_tar.gz
-rw-r--r--   1 david  users      400 Oct 11 00:26 build-core.lisp
-rw-r--r--   1 david  users      404 Oct 11 00:26 build-full-core.lisp
-rw-r--r--   1 david  users      345 Oct 11 00:26 build-int_h.lisp
-rw-r--r--   1 david  users      313 Oct 11 00:26 build-some-subsystems.lisp
-rw-r--r--   1 david  users      232 Oct 11 00:26 build-the-subsystems.lisp
-rw-r--r--   1 david  users  1281450 Oct 11 00:26 clio-19970309-4.tgz
-rw-r--r--   1 david  users   660015 Oct 11 00:26 clue-19970309-4.tgz
-rw-r--r--   1 david  users 11983832 Oct 11 00:26 cmucl-2_4_5-2-libc5.tgz
-rw-r--r--   1 david  users 11967645 Oct 11 00:26 cmucl-2_4_5-2.tgz
-rw-r--r--   1 david  users     4607 Oct 11 00:26 cmucl-build.html
-rw-r--r--   1 david  users   436325 Oct 11 00:26 cmucl-clm-2_4_5-2.tgz
-rw-r--r--   1 david  users  1465828 Oct 11 00:26 cmucl-clx-2_4_5-2.tgz
-rw-r--r--   1 david  users   101070 Oct 11 00:26 cmucl-defsystem-2_4_5-2.tgz
-rw-r--r--   1 david  users   135021 Oct 11 00:26 cmucl-dtc_src.tgz
-rw-r--r--   1 david  users  1517431 Oct 11 00:26 cmucl-hemlock-2_4_5-2.tgz
-rw-r--r--   1 david  users  6022585 Oct 11 00:26 cmucl-x86_linux_longfloat.tgz
-rw-r--r--   1 david  users     6701 Oct 11 00:26 emacs-lisp-hacks.htm
-rw-r--r--   1 david  users    47419 Oct 11 00:26 lf-elemfun_tar.gz
-rw-r--r--   1 david  users    77359 Oct 11 00:26 lisp.html
-rw-r--r--   1 david  users      851 Oct 11 00:26 makedirs.htm
-rw-r--r--   1 david  users    11033 Oct 11 00:26 mp-test.lisp
-rw-r--r--   1 david  users     2739 Oct 11 00:26 old-tar_gz-format.html
-rw-r--r--   1 david  users   684645 Oct 11 00:26 pictures-19970309-4.tgz
-rw-r--r--   1 david  users     1423 Oct 11 00:26 readme.htm
-rw-r--r--   1 david  users   351590 Oct 11 00:26 series-19980604-2.tgz
-rw-r--r--   1 david  users      262 Oct 11 00:26 setenv.lisp


I've just got to figure out which packages I want. I bet I don't end
up doing my own build after all. I never recompiled gcc, just my
kernel.

Peter Van Eynde

Oct 12, 1998
On Sun, 11 Oct 1998 22:50:59 -0400, R. Toy <rt...@mindspring.com> wrote:
>> You can't look at the process by today's measurements. When CMUCL
>> started around 1980 (named "Spice Lisp"), it was much simpler. The
>> Python compiler that causes much of the complexity of current rebuilds
>> is the second implementation from 1985, the 1980 thing was a bytecode
>> system. Also, the runtime was hand-coded assembler and didn't need the
>> alien symbol lookup hack we now have. The folks back then were real
>
>I understand that the first few versions of the Python compiler took
>over 14 hours to build itself back then. And I get frustrated when
>rebuilding CMUCL takes an hour on my PC. :-) Especially when I
>introduced some stupid mistake. :-)

I _really_ hate it when I have a bug that only shows itself in the
_third_ generation system after the change was made. And compiling
takes almost 2 hours on my old machine :-(.

Six hours of recompilation to check out a single change means using
binary search for the error... And sometimes I need to catch up
with a month's worth of changes :-(.

>Are there any documents on the history of CMUCL? I think it would make
>interesting reading.

Hmm. I came across a reference to Spice Lisp in Gabriel's book
_Patterns of Software_. IIRC they needed a lisp for the IBM RT and
wanted to use Spice Lisp as a basis. I mostly remember that he was
complaining about the bad quality of the system :-(.

Also, Rob MacLachlan has told some interesting stories on the mailing list.

Groetjes, Peter

--
It's logic Jim, but not as we know it. pvan...@debian.org, pvan...@inthan.be
Look in keyservers for PGP key.

David Steuber The Interloper

Oct 13, 1998
On 11 Oct 1998 22:18:46 GMT, crac...@not.mailable (Martin Cracauer)
claimed or asked:

% Oh yes, our primary goal is to damage Linux. CMUCL binaries are built
% by the people who spent most time with CMUCL and most people spending
% a lot of time on CMUCL prefer FreeBSD over Linux. The maintainer of
% the Linux version is busy with other things, too. The glibc change
% didn't help, either.

I'm new to both Linux and CMUCL. In fact, I installed Linux so that I
could get the best free Lisp available. The general consensus seemed
to be that was CMUCL. So I don't really know what issues are involved
with creating and maintaining the various ports.

Hopefully, I will become proficient enough with Lisp that I can
maintain my own code base if I have to, or contribute to the
maintenance effort. Like the maintainer of the Linux version, I am
rather busy too. This is now the only news group I take the time to
monitor.

% >I haven't unpacked it yet. I don't have Internet working in Linux, so
% >I have to keep booting to NT :-(
%
% Well, as long as your own domain runs, it can't be that hard ;-)

My domain is being hosted on another machine in another town. What I
do on my home machine doesn't affect it. Although, I would like to
have my domain hosted on a Linux box at some point. At the moment, I
have fallen behind on keeping certain pages up to date. I also have a
bunch more pictures to add. One particularly nice picture doesn't
have a link to it. I've been spending all my computer time at home
getting Linux tuned the way I like and trying to set up a good Lisp
development environment.

% >Maybe I want the CMUCL environment itself to run as fast as possible
% >on my hardware?
%
% We all want the code to run as fast as possible. It's just a matter of
% where to invest the time improving the compiler. There's currently
% not much point in microtuning things that differ even among the
% various Intel CPUs.

Someone else also pointed out that building the system for performance
reasons would be a futile effort. It is clear I have a lot to learn
about it before I try that anyway. There may be other reasons for
rebuilding, but I'm going to stop thinking about all that for now. My
key goal for the near term is to get set up with a fully functional
system that works in XEmacs so that I can learn the Lisp.

% >So cmucl requires cmucl to build cmucl.
%
% It isn't strictly true that you need a running CMUCL to build. You can
% load the CMUCL compiler (python) into another Lisp implementation
% (which can be built from C sources like CLISP does) and then build the
% files you need besides the C startup in CMUCL's lisp/ directory. I
% don't think that has been done within the last 12 years or so. Even
% the x86 port a few years ago was crosscompiled by CMUCL on RISC
% workstations instead of running the compiler on other x86 platforms.

I don't think I'll go dredging up ancient voodoo. I would like to
play with just the one environment to cut down on the number of
variables I have to deal with.

% Today we tangle with several subsystems that add rebuilding complexity
% (i.e. PCL, several random number generators), a full-blown agressivly
% optimizing compiler that has to implement a very complex language
% standard (more data types nativly supported by the compiler, changing
% struct/class implementations) and we have to interface to the full set
% of system calls Unix provides. Usually, without having source for the
% OS implementation...

Well, Linux ships with the source, so you have no problems there ;-)

I wonder if it isn't possible to write a Lisp program that manages the
Lisp development process.

% I'm a newscomer, so all this isn't official, although collected from
% mails of folks who should know.

Really? I thought you were running cons.org.

As I become used to working with cmucl, I will have a vested interest
in its continued support. The only way I can really ensure that is by
knowing how the whole system works. It seems rather complex, so it
will take me a while. I also will probably only program during
weekends because I have to work for food. Such is life.

David Steuber The Interloper

Oct 13, 1998
On Sun, 11 Oct 1998 22:56:52 -0400, "R. Toy" <rt...@mindspring.com>
claimed or asked:

% No problem. I expect full payment from you by releasing your 3D
% animation system free. :-)

Well, that sucker is still in my head. If it survives my excruciating
laziness, I'll definitely let people know.

Martin Cracauer

Oct 13, 1998
"R. Toy" <rt...@mindspring.com> writes:

>> It isn't strictly true that you need a running CMUCL to build. You can
>> load the CMUCL compiler (python) into another Lisp implementation
>> (which can be built from C sources like CLISP does) and then build the
>> files you need besides the C startup in CMUCL's lisp/ directory. I
>> don't think that has been done within the last 12 years or so. Even

>Do you think it's still possible to do so?

No idea. Using CLISP was discussed when the x86 port started, but
Paul, as the one who actually did it, used cross-compilation from
another platform instead.

>What other lisp was used back then?

No idea. Rob once said people did so, but I know nothing more than
"somebody sometime".

>> You can't look at the process by today's measurements. When CMUCL
>> started around 1980 (named "Spice Lisp"), it was much simpler. The
>> Python compiler that causes much of the complexity of current rebuilds
>> is the second implementation from 1985, the 1980 thing was a bytecode
>> system. Also, the runtime was hand-coded assembler and didn't need the
>> alien symbol lookup hack we now have. The folks back then were real

>I understand that the first few versions of the Python compiler took over
>14 hours to build itself back then. And I get frustrated when rebuilding
>CMUCL takes an hour on my PC. :-) Especially when I introduced some
>stupid mistake. :-)

>Are there any documents on the history of CMUCL? I think it would make
>interesting reading.

Rob once sent his summary to cmucl-imp, you may remember the
thread. He pointed to a paper:

I believe that Scott Fahlman published a paper somewhere about the
bootstrapping path of Spice Lisp. See also:
@inproceedings(wholey,
  author = "Skef Wholey and Scott E. Fahlman",
  title = "The Design of an Instruction Set for Common Lisp",
  booktitle = "ACM Conference on Lisp and Functional Programming",
  year = 1984,
  pages = "150--158")

Martin Cracauer

Oct 13, 1998

tras...@david-steuber.com (David Steuber "The Interloper") writes:

>On 11 Oct 1998 22:18:46 GMT, crac...@not.mailable (Martin Cracauer)
>claimed or asked:

>% >Maybe I want the CMUCL environment itself to run as fast as possible
>% >on my hardware?
>%
>% We all want the code to run as fast as possible. It's just a matter of
>% time where to invest the time improving the compiler. There's
>% currently not much point in microtuning things that are even different
>% on various Intel CPUs.

>Someone else also pointed out that building the system for performance
>reasons would be a futile effort. It is clear I have a lot to learn
>about it before I try that anyway. There may be other reasons for
>rebuilding, but I'm going to stop thinking about all that for now. My
>key goal for the near term is to get set up with a fully functional
>system that works in XEmacs so that I can learn the Lisp.

Don't misunderstand me. I'm a performance (and memory usage even more)
hog, and many Lisp programmers would think I'm ill for my strong
belief in preallocated buffers and such C-isms. It's just that the
possible compiler option to optimize to a Pentium Pro instead of a
Pentium II would be rather irrelevant compared to what an improved PCL
method lookup would do for most applications.

>% >So cmucl requires cmucl to build cmucl.
>%
>% It isn't strictly true that you need a running CMUCL to build. You can
>% load the CMUCL compiler (python) into another Lisp implementation
>% (which can be built from C sources like CLISP does) and then build the
>% files you need besides the C startup in CMUCL's lisp/ directory. I
>% don't think that has been done within the last 12 years or so. Even
>% the x86 port a few years ago was crosscompiled by CMUCL on RISC
>% workstations instead of running the compiler on other x86 platforms.

>I don't think I'll go dredging up ancient voodoo. I would like to
>play with just the one environment to cut down on the number of
>variables I have to deal with.

Look at it this way: once you're through it, you've mastered it, once and for all :-)

>% Today we tangle with several subsystems that add rebuilding complexity
>% (i.e. PCL, several random number generators), a full-blown aggressively
>% optimizing compiler that has to implement a very complex language
>% standard (more data types natively supported by the compiler, changing
>% struct/class implementations) and we have to interface to the full set
>% of system calls Unix provides. Usually, without having source for the
>% OS implementation...

>Well, Linux ships with the source, so you have no problems there ;-)

>I wonder if it isn't possible to write a Lisp program that manages the
>Lisp development process.

Sure, we have tons of these ;-)

The problem is bootstrapping: compiling a source tree that isn't the
one your running binary was built from, or compiling with a slightly
changed OS underneath. Each change to the sources that may cause
bootstrapping trouble is different and needs a different way of
compiling.

In larger projects like FreeBSD, such bootstrapping of newer features
is always combined with committing the necessary bootstrap stuff
(with recognition of whether it is needed in a given situation) to the
build tools. But doing so is a lot easier in FreeBSD (because you
don't mess with the running compiler process), and the number of people
rebuilding CMUCL on a regular basis is very small, so in practice a
quick notice on the mailing lists is all that's needed.

I wonder if always crosscompiling on the local platform would improve
the situation. It may just as likely lead to additional problems.

>% I'm a newcomer, so all this isn't official, although collected from
>% mails of folks who should know.

>Really? I thought you were running cons.org.

That's true; nonetheless I haven't been involved with CMUCL before
1992, and I'm more of the web/postmaster and integration kind of person
around here than a great CMUCL hacker.

>As I become used to working with cmucl, I will have a vested interest
>in its continued support. The only way I can really ensure that is by
>knowing how the whole system works. It seems rather complex, so it
>will take me a while. I also will probably only program during
>weekends because I have to work for food. Such is life.

Well, if you make your way through CMUCL, maybe you could write down
your findings and improve the internals.tex document? That would be a
nice way to ensure we always find people supporting CMUCL.

Martin

Bill Newman

Oct 13, 1998

Martin Cracauer (crac...@not.mailable) wrote:
[about weird and wonderful CMUCL bootstrapping problems]
: I wonder if always crosscompiling on the local platform would improve
: the situation. May lead to additional problems as likely.

I've wondered about this myself. My impression is that the difficulty
of self-compiling CMUCL has been a significant irritant for some time,
and that cross-compiling is worse: only wizards seem to have done it,
and only in times of great need. There are fairly fundamental reasons
why self-compiling leads to weird bootstrapping problems, but I don't
understand why cross-compiling has to be hard. Is it just lack of
maintenance? If so, why is the cross-compilability of the sources
given a relatively low priority? (significantly lower than self-compiling,
as far as I can see)

To my way of thinking, cross-compilability is fundamentally more
important than self-compilability. If someone gave me the choice
between a build process which could run with any near-ANSI Lisp
(including CMUCL itself) and produce an executable, and one which
allowed a running CMUCL to rebuild itself incrementally in place, I'd
choose the first with no hesitation. It seems to me that the CMUCL
community has chosen the second instead, and I don't understand why,
especially since (unlike self-compiling) cross-compiling seems to
have the potential to become absolutely reliable and routine.

Maybe this belongs on the cmucl-imp mailing list, especially if the
answer is complicated. I'm leaving it here for now out of inertia and
because it seems possible that the answer is related to general,
deeply-Lisp-y, not-specific-to-CMUCL arguments. (Perhaps something
along the lines of "If you think cross-compiling is more important
than the ability of the compiler to rebuild itself on the fly, then
you don't understand the importance of the dynamic features of
Lisp.":-)

Bill

Raymond Toy

Oct 13, 1998

>>>>> "Bill" == Bill Newman <wne...@netcom.com> writes:

Bill> I've wondered about this myself. My impression is that the difficulty
Bill> of self-compiling CMUCL has been a significant irritant for some time,
Bill> and that cross-compiling is worse: only wizards seem to have done it,
Bill> and only in times of great need. There are fairly fundamental reasons
Bill> why self-compiling leads to weird bootstrapping problems, but I don't
Bill> understand why cross-compiling has to be hard. Is it just lack of
Bill> maintenance? If so, why is the cross-compilability of the sources
Bill> given a relatively low priority? (significantly lower than self-compiling,
Bill> as far as I can see)

I can't answer for the wizards, but cross-compiling does currently
work, it's not really harder than self-compiling, and there are
scripts now to cross-compile from many architectures (mostly x86 to
something else and vice versa). Getting access to other architectures
is probably also difficult these days. And who wants to build on some
other slow machine when a PII-333 recompiles CMUCL in less than 15
minutes? (A 300MHz Ultrasparc takes about 30 minutes.)

At one point long ago, I did try compiling CMUCL's compiler using
CLISP. There were so many things assumed by CMUCL that didn't exist
in CLISP that I gave up after an hour or so. Fortunately, Peter Van
Eynde shortly thereafter created a Linux version. Many thanks to him!

Ray

David Steuber The Interloper

Oct 14, 1998

On 13 Oct 1998 13:28:11 GMT, crac...@not.mailable (Martin Cracauer)
claimed or asked:

% Well, if you make your way through CMUCL, maybe you could write down
% your findings and improve the internals.tex document? That would be a
% nice way to ensure we always find people supporting CMUCL.

How sacrilegious would it be to have a Perl script that blasts through
all the lisp files, listing them and documenting them? I understand
Lisp has a mechanism for documenting a function. Something like:

(defun func (arg)
  "This is a function"
  (blah (blah (blah arg))))

Of course it would make more sense to write such a program in Lisp for
the exercise of learning Lisp.
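For what it's worth, the core of such a tool is only a few lines of Common Lisp, since the standard `documentation` function already exposes doc strings. A minimal sketch (`dump-docs` is an invented name, not an existing CMUCL utility):

```lisp
;; Minimal sketch: print the function doc string of every external
;; symbol in a package.  DUMP-DOCS is an invented name.
(defun dump-docs (package-name &optional (stream *standard-output*))
  (do-external-symbols (sym (find-package package-name))
    (let ((doc (and (fboundp sym)
                    (documentation sym 'function))))
      (when doc
        (format stream "~A~%  ~A~%~%" sym doc)))))

;; e.g. (dump-docs "EXTENSIONS")
```

Loading the files first and then walking a package's external symbols sidesteps any need to parse Lisp source with regular expressions.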

David Steuber The Interloper

Oct 14, 1998

On Tue, 13 Oct 1998 20:37:13 GMT, wne...@netcom.com (Bill Newman)
claimed or asked:

% Maybe this belongs on the cmucl-imp mailing list, especially if the
% answer is complicated. I'm leaving it here for now out of inertia and
% because it seems possible that the answer is related to general,
% deeply-Lisp-y, not-specific-to-CMUCL arguments. (Perhaps something
% along the lines of "If you think cross-compiling is more important
% than the ability of the compiler to rebuild itself on the fly, then
% you don't understand the importance of the dynamic features of
% Lisp.":-)

I haven't done it myself. But something you can do with the gcc
source code is rebuild gcc. It shouldn't be a big deal.

The same thing applies to Lisp, in my mind. If the compiler is 100%
Lisp, then it should be able to compile itself. It doesn't matter
what the target image is. If the compiler can be configured to
produce code for three different CPUs (x, y, z), then it should be
able to build itself for all three.

If the compile function is simply compile, and the source file for
the compiler is compiler.lisp, then it should be possible to do the
following:

(compile "compiler.lisp" "image-x" x)
(compile "compiler.lisp" "image-y" y)
(compile "compiler.lisp" "image-z" z)

All on the same machine. The little bit of runtime code that is
written in C / assembler should also be portable. Although, if the
compiler can be written entirely in Lisp, why not the runtime as well?
Why should there be any need at all for non-Lisp code? The compiler
should be able to generate the necessary runtime to load and run the
image for the systems it knows about.

Now I admit that I am completely ignorant of the way things are done,
but it seems like a possible approach may be something like this:

A precompiler takes the Lisp code and generates an intermediate byte
code.

The final compiler takes the byte code and converts it to code native
to the target platform. The final compiler is responsible for adding
the runtime support.

With the compilers all implemented in Lisp, it should be possible to
target any platform from the same source base as final compilers
become available. The intermediate bytecode can be for some imaginary
lisp machine (not unlike the JVM for Java). If it is well defined, it
won't change often. Then the biggest changes will be in the final
compilers as they evolve to keep up with their target platform. The
Lisp code can be evolved as well. The bootstrapping problem is
solved by simply creating a final compiler that knows how to take the
intermediate byte code and translate it into the native code for the
target platform. In the meantime, the byte code can be run by an
interpreter in those instances where you don't want to compile all the
way, but you want more performance than you get with lisp text.
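The two-stage scheme described above can be sketched in a few lines; `precompile` and `native-compile` are invented names for the two stages, not functions that exist in CMUCL or any other implementation:

```lisp
;; Hypothetical sketch of the proposed two-stage build.
;; PRECOMPILE and NATIVE-COMPILE are invented names.
(defun build-image (source target)
  ;; Stage 1: portable, target-independent byte code.
  (let ((bytecode (precompile source)))
    ;; Stage 2: a per-platform back end turns the byte code into
    ;; native code and attaches the runtime support.
    (native-compile bytecode target)))

;; (build-image "compiler.lisp" :x)  ; one source base, many targets
```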

I'm not a CS student or graduate. So it is not obvious to me why
there should be anything wrong with the method outlined above. I am
aware that the method is incomplete. I think it is necessary for a
Lisp to be able to use 'alien' functions and data types, like C
functions and types. It would also be extremely useful to be able to
do any portion of CORBA. That includes having an IDL compiler that
produces Lisp code. Also, a compiled image will want to have the
ability to dynamically bind with another one in the same process space
(like a dll or so) and then act like a single unified image. I don't
think this is unlike loading lisp files into an environment.

I think a Lisp system that can do all the above would be very popular
for lispers. I would certainly go for it, unless I turn out not to
enjoy Lisp.

So how far away is CMUCL from this? How hard would it be to move to
this model? Would the CMUCL community want to move to this model, or
should CMUCL be branched into a new implementation that does this?

David B. Lamkins

Oct 14, 1998

In article <3623f2ca...@news.newsguy.com>, tras...@david-steuber.com
(David Steuber "The Interloper") wrote:

>On 13 Oct 1998 13:28:11 GMT, crac...@not.mailable (Martin Cracauer)
>claimed or asked:
>
>% Well, if you make your way through CMUCL, maybe you could write down
>% your findings and improve the internals.tex document? That would be a
>% nice way to ensure we always find people supporting CMUCL.
>
>How sacrilegious would it be to have a Perl script that blasts through
>all the lisp files, listing them and documenting them? I understand
>Lisp has a mechanism for documenting a function. Something like:
>
>(defun func (arg)
>  "This is a function"
>  (blah (blah (blah arg))))
>
>Of course it would make more sense to write such a program in Lisp for
>the exercise of learning Lisp.

How sacrilegious? Using Perl to extract documentation from a Lisp program?!
I hope you enjoy extremely hot climates <g>...


Actually, there is a Lisp program in the CMU AI repository that grovels over
Lisp source and produces a nicely-formatted file of documentation. Take a
look at
<http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/lisp/code/tools/user_man/0.html>.
The blurb says:

"The Automatic User Manual Creation system is a portable system for
automatically generating user's guides from the source definitions and their
documentation strings. It uses several heuristics for formatting the
documentation segments nicely. It can produce text (ASCII), Scribe, and
LaTeX output. If Waters' XP pretty printer is available, it uses that
instead to format the argument lists."


(There's another documentation extraction tool called LispDocu that seems to
have disappeared from the net. I liked that because it could generate HTML.
Does anyone have a valid URL?)


While documentation extraction tools are nice if you want a hardcopy record
of a delivered system, you're (usually, IMO) better served during
development by arranging to have your editor show the doc string of the
symbol under the cursor (this is usually referred to as doing a meta-dot,
based upon the canonical binding for that function in Emacs-like editors.)


---
David B. Lamkins <http://www.teleport.com/~dlamkins/>

Peter Van Eynde

Oct 14, 1998

On 13 Oct 1998 18:53:45 -0400, Raymond Toy <t...@rtp.ericsson.se> wrote:
>I can't answer for the wizards, but cross-compiling does currently
>work, it's not really harder than self-compiling, and there are
>scripts now to cross-compile from many architectures (mostly x86 to
>something else and vice versa). Getting access to other architectures
>is probably also difficult these days. And who wants to build on some
>other slow machine when a PII-333 recompiles CMUCL in less than 15
>minutes? (A 300MHz Ultrasparc takes about 30 minutes.)

Actually I need a newer machine I fear (my old and not so dependable
pentium 90 takes 2 hours to recompile :-(), any idea how fast the
AMD processors are at recompiling CMUCL? Any other comments?
(You can email me, I'll summarize)

>At one point long ago, I did try compiling CMUCL's compiler using
>CLISP. There were so many things assumed by CMUCL that didn't exist
>in CLISP that I gave up after an hour or so.

I was planning to see if I can use ACL5 to do the recompile, but I just got
mad from all the package problems :-(. In the end I assumed that I would have
to rename all packages used by CMUCL, and this was a bit too brutal...

Raymond Toy

Oct 14, 1998

>>>>> "David" == David Steuber "The Interloper" <tras...@david-steuber.com> writes:

David> On Tue, 13 Oct 1998 20:37:13 GMT, wne...@netcom.com (Bill Newman)
David> claimed or asked:

David> % Maybe this belongs on the cmucl-imp mailing list, especially if the
David> % answer is complicated. I'm leaving it here for now out of inertia and
David> % because it seems possible that the answer is related to general,
David> % deeply-Lisp-y, not-specific-to-CMUCL arguments. (Perhaps something
David> % along the lines of "If you think cross-compiling is more important
David> % than the ability of the compiler to rebuild itself on the fly, then
David> % you don't understand the importance of the dynamic features of
David> % Lisp.":-)

David> I haven't done it myself. But something you can do with the gcc
David> source code is rebuild gcc. It shouldn't be a big deal.

David> The same thing applies to Lisp, in my mind. If the compiler is 100%

This is totally different from rebuilding gcc with gcc. When this
happens, the compilation of gcc does not affect the running gcc in any
way.

However, with CMUCL, recompiling the compiler *changes* the compiler
that's doing the compiling. For example, there's a variable in CMUCL
that essentially is an enum for all of the recognized types. If you
add a new type, such as (signed-byte 8), this needs to be placed in
the enum. However, when you compile this up, it changes that variable
in the compiler that's compiling the code, and the current compiler is
totally confused by that change because it's now wrong.
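A miniature of the hazard Ray describes, with invented names (CMUCL's real type machinery is far more involved):

```lisp
;; Invented miniature of the bootstrap hazard; not CMUCL's actual code.
(defvar *type-codes* '(cons symbol fixnum))   ; the "enum" in the running image

(defun type-code (type)
  (position type *type-codes*))               ; FIXNUM => 2 today

;; Suppose recompiled sources insert a new type into the table:
;;   (cons symbol (signed-byte 8) fixnum)
;; Now FIXNUM => 3 in the image doing the compiling, but every bit of
;; already-compiled compiler code still assumes 2, so the compiler that
;; is running becomes inconsistent with the code it is emitting.
```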


David> A precompiler takes the Lisp code and generates an intermediate byte
David> code.

David> The final compiler takes the byte code and converts it to code native
David> to the target platform. The final compiler is responsible for adding
David> the runtime support.

CMUCL has a byte-code compiler. I think it's architecture-neutral,
except for endianness. However, I don't think the byte-code is
enough; some of the code needs to be compiled to native code. Things
like

(defun car (l)
  (car l))

needs the compiler to put in the actual code for the interior car.

[interesting idea of compiling byte-code to native code]

David> I'm not a CS student or graduate. So it is not obvious to me why
David> there should be anything wrong with the method outlined above. I am
David> aware that the method is incomplete. I think it is necessary for a
David> Lisp to be able to use 'alien' functions and data types, like C
David> functions and types. It would also be extremely useful to be able to
David> do any portion of CORBA. That includes having an IDL compiler that
David> produces Lisp code. Also, a compiled image will want to have the
David> ability to dynamically bind with another one in the same process space
David> (like a dll or so) and then act like a single unified image. I don't
David> think this is unlike loading lisp files into an environment.

CMUCL has good native support for alien functions and types. It
supports dynamic linking on some architectures.

David> So how far away is CMUCL from this? How hard would it be to move to
David> this model? Would the CMUCL community want to move to this model, or
David> should CMUCL be branched into a new implementation that does this?

This is a nice idea, but it sounds like a huge, Huge, HUGE task. Why
don't you ask on the cmucl-imp mailing list?

Ray

Mike McDonald

Oct 14, 1998

In article <3624f48f...@news.newsguy.com>,
tras...@david-steuber.com (David Steuber "The Interloper") writes:
> On Tue, 13 Oct 1998 20:37:13 GMT, wne...@netcom.com (Bill Newman)
> claimed or asked:

>
> % Maybe this belongs on the cmucl-imp mailing list, especially if the
> % answer is complicated. I'm leaving it here for now out of inertia and
> % because it seems possible that the answer is related to general,
> % deeply-Lisp-y, not-specific-to-CMUCL arguments. (Perhaps something
> % along the lines of "If you think cross-compiling is more important
> % than the ability of the compiler to rebuild itself on the fly, then
> % you don't understand the importance of the dynamic features of
> % Lisp.":-)

>
> I haven't done it myself. But something you can do with the gcc
> source code is rebuild gcc. It shouldn't be a big deal.

gcc doesn't have the same type of runtime environment as lisp does.
Compiling one file under gcc doesn't affect compiling following files like it
can in lisp. As an example, let's look at a recent case that's near and dear
to the CMUCL team's heart, lisp-streams. CMUCL has a structure called
lisp-stream that's defined as a defstruct. It represents all streams inside of
CMUCL. For one of my projects, I need CLOS-based Grey streams to be
implemented. This requires changing the basic stream structure. For efficiency
reasons, the accessors and type predicates for lisp-stream are compiled
inline. So the runtime of CMUCL wants the old structure around (so the
compiler can read the files) while we need the new one loaded for building the
new image (so it'll have the new definition). We've got a bit of a conflict
here. This requires a very careful setup of the environment in order to do the
cross-compile. (Even if it's to the same architecture. You don't want to run
any of the new code. Remember, we're "redefining" every lisp function in this
process!)
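The lisp-stream conflict can be shrunk to a toy case; the names below are invented, and the real structure has many more slots:

```lisp
;; Toy version of the conflict; names invented, not CMUCL's real code.
(declaim (inline toy-stream-in-buffer))
(defstruct toy-stream
  (in-buffer nil))
;; Every caller compiled against this definition has the slot's offset
;; wired in by the inline accessor.  Now the sources change the layout:
;;   (defstruct toy-stream (clos-class nil) (in-buffer nil))
;; The running compiler's own compiled code still uses the old offsets,
;; so nothing freshly compiled against the new layout may be *run*
;; until a whole new image has been built.
```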

Now, if gcc had to load and use the .o files as it compiled them, it too
would have these same kinds of headaches. But that's one of the fundamental
differences between CL and C.


Mike McDonald
mik...@mikemac.com

Duane Rettig

Oct 14, 1998

tras...@david-steuber.com (David Steuber "The Interloper") writes:

> The same thing applies to Lisp, in my mind. If the compiler is 100%
> Lisp, then it should be able to compile itself. It doesn't matter
> what the target image is. If the compiler can be configured to
> produce code for three different CPUs (x, y, z), then it should be
> able to build itself for all three.

There are two distinct requirements for cross-compiling: the ability
to compile to different machine architectures, and the ability to
compile toward different operating systems on the same architecture.
For the first part, a compiler back-end is required for each
target architecture. Since this back-end is of non-trivial size,
we do not include any but the "native" one on each machine in Allegro CL.
However, the capability exists to load multiple back-ends for cross-
compiling, and is sometimes used. See below for a discussion on the
different-operating-system portion.

> If the compile function is simply compile, and the source file for
> the compiler is compiler.lisp, then it should be possible to do the
> following:
>
> (compile "compiler.lisp" "image-x" x)
> (compile "compiler.lisp" "image-y" y)
> (compile "compiler.lisp" "image-z" z)

Yes, with different back-ends for each architecture.

> All on the same machine. The little bit of runtime code that is
> written in C / assembler should also be portable. Although, if the
> compiler can be written entirely in Lisp, why not the runtime as well?

We have actually taken this philosophy in Allegro CL. Most of the
runtime system is written in a pseudo-lisp (not CL, but one that uses
the same compiler, extended) and compiles to the assembler source code
that is appropriate for the target architecture and operating system.
This is a departure from the older style of using "LAP" code, which is
a lisp assembler protocol that may still be used in various runtime
system implementations.

An example of such "low-level-lisp" code is:

(def-runtime-q make-complex (real imag)
  (let ((compl (q-allocate-heap-other #md-complex-type-code #md-complex-size)))
    (setf (ls md-complex real compl) real)
    (setf (ls md-complex imag compl) imag)
    compl))

where q-allocate-heap-other is a macro, and ls is a low-level structure
accessor. Note that for this "low-level" compilation, we've added a
"sharpsign-m" reader macro to replace the various machine-dependent
values with their actual values (part of the machine dependent back-end
mentioned above).

> Why should there be any need at all for non Lisp code? The compiler
> should be able to generate the necessary runtime to load and run the
> image for the systems it knows about.

It is true that anything (even down to the system-call level) can be
reproduced in lisp. However, we stop short of implementing _all_ of
the runtime system in lisp, because of the need to track changing
system structures that are best represented in .h files in C. For these
we have a small C component. For example, the Unix fstat() call defines
a stat structure that is different for every operating system, but which
is described in <sys/stat.h>. It makes no sense to reinvent an fstat
call (for each operating system) and then to risk having to re-reinvent
it every time a new version of the operating system comes out.

> Now I admit that I am completely ignorant of the way things are done,
> but it seems like a possible approach may be something like this:
>
> A precompiler takes the Lisp code and generates an intermediate byte
> code.
>
> The final compiler takes the byte code and converts it to code native
> to the target platform. The final compiler is responsible for adding
> the runtime support.

I submit that byte-code is not necessary. (Although most compilers of
every language have some sort of intermediate representation, I am
assuming here because of your later comments that by "byte code" you
mean some sort of well-defined pseudo-machine representation that can
also be executed by a byte-code interpreter). Byte-code is definitely
the most portable intermediate representation, and many systems use
this technique; emacs is the most notable example, and I believe that
many Smalltalk implementations ship their compiled files as byte-coded.

But although it is very portable, it is not portably fast, even if the
byte codes are recompiled to native machine code - as an example, consider
the byte-machine for some byte-code set: is it stack-oriented, or
frame-oriented? If it is stack-oriented, then it will native-compile
very efficiently to a stack-oriented machine, such as the x86, but not
efficiently at all to RISC architectures, which have no stack-oriented
instructions. On the other hand, if the byte-machine is not
stack-oriented, then native-compilation will be efficient on a RISC
machine, to the detriment of the x86.

So why byte-compile at all? A valid argument is that byte-code is
smaller than native-compiled code. However, there is a negative
impact on the size of the lisp when byte-compilation is added (unless
it is part of the mainstream compilation process itself, at a sacrifice
of speed). Also, if most of the code in a lisp can be relegated to
shared storage and reused by multiple processes, then it doesn't
figure into the per-process space usage hit of the lisp.

> With the compilers all implemented in Lisp, it should be possible to
> target any platform from the same source base as final compilers
> become available. The intermediate bytecode can be for some imaginary
> lisp machine (not unlike the JVM for Java). If it is well defined, it
> won't change often. Then the biggest changes will be in the final
> compilers as they evolve to keep up with their target platform. The
> Lisp code can be evolved as well. The boot strapping problem is
> solved by simply creating a final compiler that knows how to take the
> intermediate byte code and translate it into the native code for the
> target platform. In the mean time, the byte code can be run by an
> interpreter in those instances where you don't want to compile all the
> way, but you want more performance than you get with lisp text.

But why not have the "intermediate" code be lisp code itself? I
think that lisp source code is the best representation of the intentions
of the programmer, and thus should be preserved as long as possible.

> I'm not a CS student or graduate. So it is not obvious to me why
> there should be anything wrong with the method outlined above. I am
> aware that the method is incomplete. I think it is necessary for a
> Lisp to be able to use 'alien' functions and data types, like C
> functions and types. It would also be extremely useful to be able to
> do any portion of CORBA. That includes having an IDL compiler that
> produces Lisp code. Also, a compiled image will want to have the
> ability to dynamically bind with another one in the same process space
> (like a dll or so) and then act like a single unified image. I don't
> think this is unlike loading lisp files into an environment.
>

> I think a Lisp system that can do all the above would be very popular
> for lispers. I would certainly go for it, unless I turn out not to
> enjoy Lisp.

Other than the issue of intermediate byte-codes, I believe that the major
commercial Lisp vendors do this. Check out www.franz.com for our ORBlink
product. If Harlequin and/or Digitool have a similar product, it will
probably be on their web page.

> So how far away is CMUCL from this? How hard would it be to move to
> this model? Would the CMUCL community want to move to this model, or
> should CMUCL be branched into a new implementation that does this?

I haven't looked at CMUCL for a while, so I can't comment on this.

Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Mike McDonald

Oct 14, 1998

In article <4ng1cr1...@rtp.ericsson.se>,
Raymond Toy <t...@rtp.ericsson.se> writes:

> However, with CMUCL, recompiling the compiler *changes* the compiler
> that's doing the compiling. For example, there's a variable in CMUCL
> that essentially is an enum for all of the recognized types. If you
> add a new type, such as (signed-byte 8), this needs to be placed in
> the enum. However, when you compile this up, it changes that variable
> in the compiler that's compiling the code, and the current compiler is
> totally confused by that change because it's now wrong.

I never understood why this is the case. It seems one should be able to add
the new types to the enum without causing the old types to get new numbers. If
redefining the old types gives the same number, then the compiler shouldn't
give a hoot. So, what am I missing?

Mike McDonald
mik...@mikemac.com

David Steuber The Interloper

Oct 15, 1998
On 14 Oct 1998 17:42:08 GMT, mik...@engr.sgi.com (Mike McDonald)
claimed or asked:

% Now, if gcc had to load and use the .o files as it compiled them, it too
% would have these same kinds of headaches. But that's one of the fundamental
% differences between CL and C.

This seems to be a serious design flaw in the compiler to me. The
compiler output should be going into a separate image file, not the
one it's using. While the model you are talking about may be useful
for many things (I assume it is) there really needs to be a switch
that allows a simple cross-compile.

I've heard of self modifying code, but this takes the cake!

David Steuber The Interloper

Oct 15, 1998
On 14 Oct 1998 08:08:19 -0400, Raymond Toy <t...@rtp.ericsson.se>
claimed or asked:

% David> I haven't done it myself. But something you can do with the gcc
% David> source code is rebuild gcc. It shouldn't be a big deal.
%
% David> The same thing applies to Lisp, in my mind. If the compiler is 100%
%
% This is totally different from rebuilding gcc with gcc. When this
% happens, the compilation of gcc does not affect the running gcc in any
% way.
%
% However, with CMUCL, recompiling the compiler *changes* the compiler
% that's doing the compiling. For example, there's a variable in CMUCL
% that essentially is an enum for all of the recognized types. If you
% add a new type, such as (signed-byte 8), this needs to be placed in
% the enum. However, when you compile this up, it changes that variable
% in the compiler that's compiling the code, and the current compiler is
% totally confused by that change because it's now wrong.

I don't understand this part. Why isn't the compiler generating a
separate image instead of changing its own? Is the compiler limited
to just sucking in lisp files into its own environment? The idea
should be to build a separate image as the result of a compile that is
rebuilding the compiler. If you are rebuilding all of cmucl, then the
compiled result should be in a new image that hasn't been loaded. You
would then go into the new image to see that everything is kosher.

Mike McDonald

Oct 15, 1998
In article <36263ebb....@news.newsguy.com>,

tras...@david-steuber.com (David Steuber "The Interloper") writes:
> On 14 Oct 1998 17:42:08 GMT, mik...@engr.sgi.com (Mike McDonald)
> claimed or asked:
>
> % Now, if gcc had to load and use the .o files as it compiled them, it too
> % would have these same kinds of headaches. But that's one of the fundamental
> % differences between CL and C.
>
> This seems to be a serious design flaw in the compiler to me. The
> compiler output should be going into a separate image file, not the
> one it's using. While the model you are talking about may be useful
> for many things (I assume it is) there really needs to be a switch
> that allows a simple cross-compile.

No, recompiling the system requires that some parts of the new system be
loaded so as to effect the new image. In my example, the new definition of
lisp-stream has to be loaded so the subsequent files know about the new type.
In Ray's example, the new primitive type codes need to be loaded into the
environment so that the compiled files reflect these new codes. The trick is
to have both the old and new environments in the running image at the same
time without messing things up.
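The hazard of loading new definitions into the very image that is doing the compiling can be seen with an ordinary structure redefinition. This is a toy stand-in, not CMUCL's actual lisp-stream; the names are made up for illustration:

```lisp
;; A toy stand-in for a system-level structure.
(defstruct toy-stream
  (buffer nil))

(defun peek-buffer (s)
  ;; This accessor may be compiled against the current slot layout.
  (toy-stream-buffer s))

;; If a rebuild now loads a changed definition into the SAME image:
;;
;;   (defstruct toy-stream (mode :input) (buffer nil))
;;
;; then objects created under the old layout, and code compiled
;; against it, no longer agree with the new definition -- the same
;; old-environment-versus-new-environment clash described above.
;; (What actually happens on structure redefinition is
;; implementation-dependent in Common Lisp.)
```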

> I've heard of self modifying code, but this takes the cake!

Not even close! At Harris years ago, we modified a lispm-based VLSI CAD
system to support the creation of self-modifying microcode. Now that's weird!
(And it will give you a royal headache if you try to do it by hand!)

Mike McDonald
mik...@mikemac.com

David B. Lamkins

Oct 15, 1998
In article <36263ebb....@news.newsguy.com> ,

tras...@david-steuber.com (David Steuber "The Interloper") wrote:

>On 14 Oct 1998 17:42:08 GMT, mik...@engr.sgi.com (Mike McDonald)
>claimed or asked:
>
>% Now, if gcc had to load and use the .o files as it compiled them, it too
>% would have these same kinds of headaches. But that's one of the fundamental
>% differences between CL and C.
>
>This seems to be a serious design flaw in the compiler to me. The
>compiler output should be going into a separate image file, not the
>one it's using. While the model you are talking about may be useful
>for many things (I assume it is) there really needs to be a switch
>that allows a simple cross-compile.
>

>I've heard of self modifying code, but this takes the cake!

I strongly recommend "Lisp in Small Pieces", Queinnec, 1996, Cambridge
University Press, ISBN 0-521-56247-3, if you want to learn about Lisp
compilation.

Raymond Toy

Oct 15, 1998
>>>>> "Mike" == Mike McDonald <mik...@engr.sgi.com> writes:

Mike> In article <4ng1cr1...@rtp.ericsson.se>,
Mike> Raymond Toy <t...@rtp.ericsson.se> writes:

>> However, with CMUCL, recompiling the compiler *changes* the compiler
>> that's doing the compiling. For example, there's a variable in CMUCL
>> that essentially is an enum for all of the recognized types. If you
>> add a new type, such as (signed-byte 8), this needs to be placed in
>> the enum. However, when you compile this up, it changes that variable
>> in the compiler that's compiling the code, and the current compiler is
>> totally confused by that change because it's now wrong.

Mike> I never understood why this is the case. It seems one should
Mike> be able to add the new types to the enum without causing the
Mike> old types to get new numbers. If redefining the old types
Mike> gives the same number, then the compiler shouldn't give a
Mike> hoot. So, what am I missing?

Nothing. You are right. In the first version of signed-array, I
stuck the new codes at the end. However, Doug pointed out that this
slows down the type-checking stuff because it couldn't merge the tests
into simple range tests. Tests for simple-array could be "type codes
between x and y" instead of "type code = x1 or type code = x2 or ...".

Putting the new codes in the "right" place is what caused the most
headache.
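The payoff of putting the codes in the "right" place can be sketched like this. The codes and names below are made up for illustration; they are not CMUCL's actual type codes:

```lisp
;; Hypothetical type codes: when related types get contiguous codes,
;; a check for "any simple array" compiles to a single range test.
(defconstant +simple-string-code+ 20)
(defconstant +simple-bit-vector-code+ 21)
(defconstant +simple-vector-code+ 22)

;; Contiguous codes: one pair of comparisons suffices.
(defun simple-array-code-p (code)
  (<= +simple-string-code+ code +simple-vector-code+))

;; If a new array type's code were instead appended at the end of the
;; enum (say 57), the test degenerates into an OR of separate checks:
(defun simple-array-code-p/scattered (code)
  (or (<= 20 code 22)
      (= code 57)))
```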

Ray

Bill Newman

Oct 15, 1998
Mike McDonald (mik...@engr.sgi.com) wrote:
: In article <36263ebb....@news.newsguy.com>,
: tras...@david-steuber.com (David Steuber "The Interloper") writes:
: > On 14 Oct 1998 17:42:08 GMT, mik...@engr.sgi.com (Mike McDonald)

: > claimed or asked:
: >
: > % Now, if gcc had to load and use the .o files as it compiled them, it too
: > % would have these same kinds of headaches. But that's one of the fundamental
: > % differences between CL and C.
: >
: > This seems to be a serious design flaw in the compiler to me. The
: > compiler output should be going into a separate image file, not the
: > one it's using. While the model you are talking about may be useful
: > for many things (I assume it is) there really needs to be a switch
: > that allows a simple cross-compile.

: No, recompiling the system requires that some parts of the new system be
: loaded so as to effect the new image. In my example, the new definition of
: lisp-stream has to be loaded so the subsequent files know about the new type.
: In Ray's example, the new primitive type codes need to be loaded into the
: environment so that the compiled files reflect these new codes. The trick is
: to have both the old and new environments in the running image at the same
: time without messing things up.

I've fiddled a little bit with redefining things in CMUCL's :CL
package, and it's certainly convenient to be able to compile and load
a modified source file and have the system modify itself on the
fly. (Not knowing the technical term for it, this is what I referred
to as "self-compilation" in my earlier post. I still don't know the
right term for it, and I now think of it as "compiling itself
introspectively.") But this approach to compiling the compiler seems
guaranteed to cause nasty problems for major changes, as in the
example given earlier of redefining stream classes.

Would it be possible to use tricks with packages and nicknames to
suppress this introspective behavior when the system is being rebuilt
from scratch? E.g. prefacing all the source files with
(IN-PACKAGE :CL-UNDER-CONSTRUCTION)
and then after all compilation was done, running a function
(CL-UNDER-CONSTRUCTION::NEW-WORLD)
which nuked the old :CL package, renamed the old
:CL-UNDER-CONSTRUCTION package to :CL, somehow did any necessary
purification tricks to cause the old :CL code to be GC'ed, then saved
itself? As I sketch out the things that would need to be done, I can
see that it'd be easy to get backed into a corner trying to make this
work, but I don't see offhand that it would be impossible. And it
seems to me that although it would be messy to set it up initially, it
would pay off for ever and ever afterwards by not having to figure out
fiddly little special-purpose hacks to work around bootstrap problems
when redefining system behavior or when cross-compiling under some
other compiler. (Special-purpose hacks might still be needed when
changes in system behavior affected the code in NEW-WORLD, but
hopefully NEW-WORLD would be much smaller and simpler than the system
as a whole, so it'd be affected less often and it'd be easier to
modify safely.)

(Then after the system was rebuilt from scratch, the default
introspective compilation behavior could be restored by making
:CL-UNDER-CONSTRUCTION a nickname for :CL.)

Bill Newman

Raymond Toy

Oct 15, 1998
>>>>> "Bill" == Bill Newman <wne...@netcom.com> writes:

Bill> Would it be possible to use tricks with packages and nicknames to
Bill> suppress this introspective behavior when the system is being rebuilt
Bill> from scratch? E.g. prefacing all the source files with
Bill> (IN-PACKAGE :CL-UNDER-CONSTRUCTION)
Bill> and then after all compilation was done, running a function
Bill> (CL-UNDER-CONSTRUCTION::NEW-WORLD)
Bill> which nuked the old :CL package, renamed the old
Bill> :CL-UNDER-CONSTRUCTION package to :CL, somehow did any necessary
Bill> purification tricks to cause the old :CL code to be GC'ed, then saved
Bill> itself? As I sketch out the things that would need to be done, I can

I believe this is what the cross-compilation scripts do. However, for
some reason it seems that this is not always enough. I wasn't able
to cross-compile an x86-to-x86 version that added the long-float
support. I eventually got the FreeBSD version and built a Linux
version from that.

Ray

Martin Cracauer

Oct 15, 1998
Raymond Toy <t...@rtp.ericsson.se> writes:

It's the other way round: the old backend is renamed, and the native
compiler is advised to use the one in the nonstandard place when it
builds the new compiler, which is then loaded into the standard
compiler's place.

(rename-package "X86" "OLD-X86")
(setf (c:backend-name c:*native-backend*) "OLD-X86")
; compile the new compiler with :bootstrap in *features*.
; load the new compiler.
; do some internal compiler settings I don't understand :-/
(setf c:*backend* c:*target-backend*)
; [the following is like a standard CMUCL rebuild]
; compile the world with the new compiler, but don't load it.
; compile the compiler once again with itself, but don't load it.
; build a kernel.core.
; use kernel.core to load the world and compiler compiled files we
; just compiled and save it.
; Note that the copy of the old backend is still in the new image,
; it should probably be deleted before shipping...

Note that the "C" package looks like the main compiler package, but the
stuff that breaks when loading newer sources is in the
machine-dependent backend, so you can still use the unrenamed "C"
package to control the compiler.

It is my firm belief that Douglas built the first long-float binary
either using a hexdump editor or by running a random number generator
long enough. Ever wondered why CMUCL is so strong in RNGs? :-)

Happy Lisping
Martin

David Steuber The Interloper

Oct 16, 1998
Ok, let's see if I have this straight. You use the following
(abbreviated) procedure to build a new lisp environment:

1) lift your left foot about 20 centimeters
2) without putting down your left foot, lift your right foot about 20
centimeters.
3) without putting down either foot, straighten out your knees.
4) do all this while standing in the middle of the room.
5) the hard part is getting back down.