
LISP for Windows


Justin Dyer

Dec 7, 1998
Can anyone point me towards a LISP compiler/interpreter for Windows?
Preferably freeware/shareware. Please reply to e-mail as I do not have
time to regularly check this newsgroup.

Thanx in advance.

- Justin Dyer


rusty craine

Dec 7, 1998

Justin Dyer wrote in message <366C724A...@silver-wolf-tech.com>...
geeez Justin, too bad you're so busy; one of the best time savers there is in
lisp programming (unless maybe you are a pro already) is reading this news
group. Reading a few postings can often save hours and hours of booking it
trying to figure out what UNWIND-PROTECT does (for example), when you would
use it and why you never used it in the past. Lots of examples like
this.....t'is time well spent.
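
For instance, UNWIND-PROTECT guarantees its cleanup forms run no matter
how control leaves the protected form. A minimal sketch (the function
name is made up):

(defun read-first-line (path)
  (let ((stream (open path)))
    (unwind-protect
        (read-line stream)   ; protected form: may signal an error
      (close stream))))      ; cleanup: runs on normal exit or on error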

rusty

Guilhem de WAILLY

Dec 8, 1998
Justin Dyer wrote:

> Can anyone point me towards a LISP compiler/interpreter for Windows?
> Preferably freeware/shareware. Please reply to e-mail as I do not have
> time to regularly check this newsgroup.
>
> Thanx in advance.
>
> - Justin Dyer

You can try the free version of OpenScheme (Scheme is an efficient Lisp
dialect) that comes with an interpreter and a compiler (that produces
ANSI C) for Linux and Windows.

It has some extensions such as regular expressions, a CLOS-based object
system, timers, a profiler, and a console text interface with simple
OO widgets. OpenScheme will be extended with a full OO GUI in the next
release.

Sincerely.


--
-----------------------------------------------------------
Erian Concept | Tel : (33) 04 93 44 18 06
Guilhem de Wailly | Mobil: (33) 06 82 18 39 63
155 bd de la Madeleine | Fax : (33) 04 93 14 36 75
06000 - Nice - FRANCE | mailto:g...@linux-kheops.com
http://www.linux-kheops.com/erian
-----------------------------------------------------------


David Steuber The Interloper

Dec 9, 1998
On Mon, 7 Dec 1998 21:42:19 -0600, "rusty craine" <rccr...@flash.net>
claimed or asked:

% geeez Justin, too bad you're so busy; one of the best time savers there is in
% lisp programming (unless maybe you are a pro already) is reading this news
% group. Reading a few postings can often save hours and hours of booking it
% trying to figure out what UNWIND-PROTECT does (for example), when you would
% use it and why you never used it in the past. Lots of examples like
% this.....t'is time well spent.

It's also a shame he couldn't just go to dejanews and search on the
key phrase, "I'm so lazy that I will just go ahead and post a message
asking what people ask about five times a week."

--
David Steuber (ver 1.31.3a)
http://www.david-steuber.com
To reply by e-mail, replace trashcan with david.

May the source be with you...

Gareth McCaughan

Dec 9, 1998
Guilhem de WAILLY wrote:

> You can try the free version of OpenScheme (Scheme is an efficient Lisp
> dialect) that comes with an interpreter and a compiler (that produces
> ANSI C) for Linux and Windows.

I like Scheme a lot, but "Scheme is an efficient Lisp dialect"
seems to me like a really bogus thing to say. Firstly, it's
not at all clear that any "dialect" can be efficient as such
(barring really stupid languages). Secondly, the only other
important dialect of Lisp (as a general purpose language --
I'm not thinking here of elisp and AutoLisp) is Common Lisp,
and today's best CL compilers produce (unless I'm badly mistaken)
considerably more efficient code than today's best Scheme
compilers; precisely because CL was designed with the intention
that it should be possible to make a CL system produce really
efficient code, whereas Scheme was not designed with any such
intention.

One possible exception is Siskind's "Stalin", which allegedly
compiles Scheme very well indeed; but my understanding is that
(1) it's not entirely stable and (2) it's very slow when
compiling large programs (because its efficiency is achieved
by doing some impressive whole-program analysis).

--
Gareth McCaughan Dept. of Pure Mathematics & Mathematical Statistics,
gj...@dpmms.cam.ac.uk Cambridge University, England.

Guilhem de WAILLY

Dec 9, 1998
Gareth McCaughan wrote:

> Guilhem de WAILLY wrote:
>
> > You can try the free version of OpenScheme (Scheme is an efficient Lisp
> > dialect) that comes with an interpreter and a compiler (that produces
> > ANSI C) for Linux and Windows.
>
> I like Scheme a lot, but "Scheme is an efficient Lisp dialect"
> seems to me like a really bogus thing to say. Firstly, it's

Of course, here, efficient means "easy to program", "efficient to make a
working program", "efficient to maintain", "efficient to learn" :)

> not at all clear that any "dialect" can be efficient as such
> (barring really stupid languages). Secondly, the only other
> important dialect of Lisp (as a general purpose language --
> I'm not thinking here of elisp and AutoLisp) is Common Lisp,
> and today's best CL compilers produce (unless I'm badly mistaken)
> considerably more efficient code than today's best Scheme
> compilers; precisely because CL was designed with the intention
> that it should be possible to make a CL system produce really
> efficient code, whereas Scheme was not designed with any such
> intention.
>
> One possible exception is Siskind's "Stalin", which allegedly
> compiles Scheme very well indeed; but my understanding is that
> (1) it's not entirely stable and (2) it's very slow when
> compiling large programs (because its efficiency is achieved
> by doing some impressive whole-program analysis).
>
> --
> Gareth McCaughan Dept. of Pure Mathematics & Mathematical Statistics,
> gj...@dpmms.cam.ac.uk Cambridge University, England.

--

Erik Naggum

Dec 9, 1998
* Guilhem de WAILLY

| You can try the free version of OpenScheme (Scheme is an efficient Lisp
| dialect) that comes with an interpreter and a compiler (that produces
| ANSI C) for Linux and Windows.

* Gareth McCaughan <gj...@dpmms.cam.ac.uk>


| I like Scheme a lot, but "Scheme is an efficient Lisp dialect" seems to
| me like a really bogus thing to say.

it's always interesting to see how something really bogus could be true,
perhaps especially if it obviously isn't. so consider "Scheme is an
efficient Lisp dialect". since the author of that statement, like most
other Scheme adherents, is behind yet another Scheme implementation, I
guess he's very happy that his implementation of Scheme is efficient.
and since he, like most other Scheme adherents, has a chip on his
shoulder the size of the Common Lisp specification against Common Lisp,
it's only fair to assume that he meant that it would take him hundreds of
years to implement Common Lisp and that he, as an implementer, rather
than a user, prefers really tiny languages.

I must say that I have found Kent Pitman's argument that small languages
require big programs, whereas large languages enable small programs, to
be very compelling.

#:Erik
--
don't call people who don't understand statistics idiots. take their money.

Alan Gunderson

Dec 9, 1998
Justin Dyer wrote:
>
> Can anyone point me towards a LISP compiler/interpreter for Windows?
> Preferably freeware/shareware. Please reply to e-mail as I do not have
> time to regularly check this newsgroup.
>
> Thanx in advance.
>
> - Justin Dyer
Justin:

The following was published on this list on Nov 2, 1998:

Subject: new Common Lisp for Windows 95/98/NT on the web
Date: 2 Nov 1998 08:57:24 GMT
From: Roger Corman <ro...@corman.net>
Organization: Corman Tools
Newsgroups: comp.lang.lisp

Dear lisp users,

On Lisp's 40th birthday, I am pleased to announce a new Common Lisp
compiler and development environment for Windows 95/98/NT. This is a
full-featured system with very fast performance (competitive with the
fastest Windows Common Lisps) and full support for using Win32 to build
applications. It compiles to machine code (no interpreter is included),
supports OS-level threads (minimal support now, more under development),
save-image, save-application, etc.

The compiler is free, and I am distributing the source code along
with it (in Common Lisp). A free console app may be used to interact
with the compiler (a listener) but the included IDE application provides
a much richer lisp-enhanced editor and development environment.
I plan to ask a modest license fee for the IDE, although details are
not finished. The included evaluation copy of the IDE is fully
functional for 30 days, after which it will add a nag dialog (and
I hope to have licensing details finalized by then).

I know many of you in this newsgroup will be both interested and
possibly skeptical of this system, and I encourage you to please
download a copy and try it out. I have included some documentation
(in the form of a Word 97 format manual) but have not had time to
document it extensively. Any of you who get back to me with helpful
feedback I will be happy to repay you with a free registration code
for the IDE.

I am the developer of PowerLisp for the Macintosh, which has
been useful for many years for a large number of people.
This new product, called Corman Lisp, is far superior, however, and has
been written from the ground up for the Win32 environment. I have
been working on it for over 2 years, and am eager to get some
feedback. I believe it will fill an important niche and make Common
Lisp a more viable development platform.

You may download the latest version at the following site:
http://corman.net/lisp

The download is about 2.5 megs, and is a self-extracting installer
program, which installs the compiler, IDE, console (which you don't
need if you use the IDE) and source code.

Thanks,

Roger Corman

--
Alan Gunderson
Spring Street Software
Eagle River, Alaska
e-mail: gunde...@acm.org

Gareth McCaughan

Dec 10, 1998
Guilhem de WAILLY wrote:

>>> You can try the free version of OpenScheme (Scheme is an efficient Lisp
>>> dialect) that comes with an interpreter and a compiler (that produces
>>> ANSI C) for Linux and Windows.
>>
>> I like Scheme a lot, but "Scheme is an efficient Lisp dialect"
>> seems to me like a really bogus thing to say. Firstly, it's
>
> Of course, here, efficient means "easy to program", "efficient to make a
> working program", "efficient to maintain", "efficient to learn" :)

I'm still not convinced. It certainly takes longer to learn
everything about CL than it does to learn everything about
Scheme, but that's not relevant. One could define a "Scheme-like"
subset of CL, and I reckon learning that wouldn't take
much longer than learning Scheme does. And then, if you want
to do something that's inconvenient in the Scheme-like subset,
you can learn (incrementally) more about CL. Whereas, if you
want to do something that's inconvenient in Scheme, you need
to start learning some non-standard set of extensions.

And there are a few places where Scheme manages to be simpler
in theory at the cost of being harder in practice. The usual
example is a good one: Continuations are a really clever idea,
and make for a powerful language with few primitives; but they
aren't often what you actually want when writing a program.
And Scheme itself doesn't provide any of the useful things
that could be based on continuations (throw/catch being an
obvious example), so you have to write them yourself. And since
continuations are confusing to many people, this is a pain.
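
For contrast, the equivalent "useful thing" is built into CL; a minimal
sketch (the names are only examples):

(defun first-negative (xs)
  (catch 'found                            ; establish a named exit point
    (dolist (x xs)
      (when (minusp x) (throw 'found x)))  ; non-local exit to the CATCH
    nil))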

Another standard example. In Scheme, you can do iteration with
DO, MAP, FOR-EACH, or tail recursion. In Common Lisp, you have
a whole family of MAPxxx functions; an equivalent of FOR-EACH;
DO and DO*; the hairy but very powerful LOOP; the ability to
write horrible things with GO; special iterators for things like
hash tables and packages; and (sort of) tail recursion. (And
probably a few other things I've forgotten.) This is certainly
harder to learn, but it's much easier to program with once you've
learned it.
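
For instance, here is one trivial sum written three ways; a minimal
sketch (the names are only examples):

(defun sum-do (xs)
  (do ((rest xs (cdr rest))
       (acc 0 (+ acc (car rest))))
      ((null rest) acc)))

(defun sum-loop (xs)
  (loop for x in xs sum x))

(defun sum-map (xs)
  (let ((acc 0))
    (mapc (lambda (x) (incf acc x)) xs)
    acc))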

All these things can, of course, be added by good libraries of
macros and procedures. But once you do that, what you have is
no longer just Scheme; it's Scheme plus a bunch of libraries.
You lose the small size that makes Scheme elegant and easy to
learn; you probably also lose the consistency that comes from
the very conservative attitude of the RnRS authors. And what you
have maybe no longer works in all Scheme implementations.

This all sounds very negative, which is a shame because I think
Scheme is *for some purposes* a great language. But I really
can't think of any useful sense in which it's more "efficient"
than Common Lisp in practice.

Now, Scheme is certainly basically a much more elegant design
than CL. I bet it could be used as the *basis* for a more
"efficient" Lisp dialect, for almost any sensible meaning
of "efficient". But it isn't one yet, and I don't see much
sign of its becoming one very soon.

Erik Naggum

Dec 11, 1998
* Gareth McCaughan <gj...@dpmms.cam.ac.uk>

| Now, Scheme is certainly basically a much more elegant design than CL.

there really isn't much to designing small things elegantly. small is
almost inherently beautiful. the challenge is making things scale well.
e.g., I'd argue that an ant is more elegantly built than an elephant.
now, scale the ant up so it can handle as big a load as the elephant.
surprisingly to some, its support structure collapses and it cannot
handle any load at all.

Ian Wild

Dec 11, 1998
Gareth McCaughan wrote:

> Scheme is *for some purposes* a great language. But I really
> can't think of any useful sense in which it's more "efficient"
> than Common Lisp in practice.

I do a lot of work at the Unix command line, piping stuff from
here to there through this and that, with the odd rerouting
through A and B. Writing my filters in Scheme is a ton easier
than perl, awk, bash, or whatever. I couldn't /possibly/ use
CL here - the load time alone would kill me.

Hannu Koivisto

Dec 11, 1998
Ian Wild <i...@cfmu.eurocontrol.be> writes:

| through A and B. Writing my filters in Scheme is a ton easier
| than perl, awk, bash, or whatever. I couldn't /possibly/ use
| CL here - the load time alone would kill me.

Have you tried CLISP? FWIW, I was bored two weeks ago or so and
decided to compare load times (just load and immediate "(exit)"
or "(quit)") of different Scheme and Common Lisp
implementations. I can't remember all the details, but after
trying a few Schemes (at least MzScheme, Guile, scsh, Bigloo,
Gambit) and CLs (CLISP, ECL (not sure), ACL5, CMUCL), CLISP
seemed to be the fastest of all. IIRC, I finally found one
implementation that was marginally faster: sci, the Scheme
interpreter written in Scheme and compiled with Scheme-to-C.
Anyway, the difference between those two was barely noticeable
and timing on a server machine is not too accurate, so one can't
really draw any conclusions from that. The machine I used was
P5/120 running Debian GNU/Linux (libc6).

//Hannu

Gareth McCaughan

Dec 11, 1998
Ian Wild wrote:

[I said:]


>> Scheme is *for some purposes* a great language. But I really
>> can't think of any useful sense in which it's more "efficient"
>> than Common Lisp in practice.
>
> I do a lot of work at the Unix command line, piping stuff from
> here to there through this and that, with the odd rerouting

> through A and B. Writing my filters in Scheme is a ton easier
> than perl, awk, bash, or whatever. I couldn't /possibly/ use
> CL here - the load time alone would kill me.

Good call. I agree that there are applications (like scripting,
embedding, use in a shell) for which Scheme is "more efficient"
than CL. That doesn't justify stating that it's "more efficient"
simpliciter, though. I should have said something like "But I
really can't think of any sense in which it's accurate to describe
it as more `efficient' than Common Lisp without further detailed
qualification".

Guilhem de WAILLY

Dec 11, 1998
Erik Naggum wrote:

> * Guilhem de WAILLY


> | You can try the free version of OpenScheme (Scheme is an efficient Lisp
> | dialect) that comes with an interpreter and a compiler (that produces
> | ANSI C) for Linux and Windows.
>

> * Gareth McCaughan <gj...@dpmms.cam.ac.uk>


> | I like Scheme a lot, but "Scheme is an efficient Lisp dialect" seems to
> | me like a really bogus thing to say.
>

> it's always interesting to see how something really bogus could be true,
> perhaps especially if it obviously isn't. so consider "Scheme is an
> efficient Lisp dialect". since the author of that statement, like most
> other Scheme adherents, is behind yet another Scheme implementation, I
> guess he's very happy that his implementation of Scheme is efficient.

Yes, this is true: at Erian Concept, we are very happy with OpenScheme!

> and since he, like most other Scheme adherents, have a chip on their
> shoulders the size of the Common Lisp specification against Common Lisp,
> it's only fair to assume that he meant that it would take him hundreds of
> years to implement Common Lisp and that he, as an implementer, rather
> than a user, prefers really tiny languages.

Really, we do not think that a language is powerful because it has
hundreds of specification pages. See the lambda-calculus theory:
three lines of grammar can describe everything that modern languages
can describe.
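
Written out, the grammar meant here is essentially just:

e ::= x                ; a variable
    | (lambda (x) e)   ; an abstraction
    | (e e)            ; an application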

We think that a language is powerful when it can be specified very
briefly, and when, with this specification, it can be adapted to most
problems. Scheme has shown itself efficient for writing processor
simulators, language compilers, object-oriented systems, graphical
applications, and so on.

OpenScheme contains 90,000 lines of code, mainly in Scheme itself.
Writing new special forms to extend the language is a piece of cake
that we reserve for the Scheme users (they like cakes). OpenScheme
includes a compiler/interpreter/debugger, regular expressions, timers,
a console interface, an object-oriented system, and a profiler. The
next release will include a primitive graphic interface, an OO GUI,
threads, a PostgreSQL interface, and a native database engine. If you
want to consider these library interfaces as specification, you can.
But in fact, Scheme is entirely specified in the RxRS reports in less
than 10 pages!

Sincerely,

Guilhem de Wailly

> I must say that I have found Kent Pitman's argument that small languages
> require big programs, whereas large languages enable small programs, to
> be very compelling.
>

> #:Erik
> --
> don't call people who don't understand statistics idiots. take their money.

--

Steve Gonedes

Dec 11, 1998

Guilhem de WAILLY <g...@linux-kheops.com> writes:

< OpenScheme contains 90,000 lines of code, mainly in Scheme itself.
< Writing new special forms to extend the language is a piece of cake
< that we reserve for the Scheme users (they like cakes). OpenScheme
< includes a compiler/interpreter/debugger, regular expressions,
< timers, a console interface, an object-oriented system, and a
< profiler. The next release

Does it have a native code compiler? Just curious, as I only know of a
few schemes which go this route. Most try to re-write the code into
some other language, which is nice, but doesn't really add value to
the scheme system. If anything, it just tries to show how flexible
`simpler' languages can be, but fails to reinforce one of the most
powerful aspects of the language. That is, the system seems less
extensible (even with an object code loaded).

How did you implement the regexps? What do you find to be most
difficult/enjoyable about the regexps?


David B. Lamkins

Dec 12, 1998
In article <36727826...@news.advancenet.net>, jam...@volition-inc.com
(James Hague) wrote:

[snip]

>Yet most compilers for Lisp-like languages spit out C code. This is a
>poor solution, because it puts a large and unreliable system
>underneath a simpler and safer one. It is also annoying to force the
>user to purchase a C compiler--for those people who aren't able to
>work with a UNIX variant--just to use a free Lisp system.

Of course this is only true for appropriate values of "most", or maybe of
"Lisp-like languages".

Here are the Common Lisp compilers I've used that go directly to machine
code; there are many others I won't name because I know them only by
reputation:

Macintosh Common Lisp
CMU Common Lisp
Allegro Common Lisp (PC and Unix)
Harlequin Lispworks (Windows and Unix)
Lucid/Liquid Common Lisp
PowerLisp (Roger Corman's shareware Mac compiler)

In fact, the only compiler I've used that took the compile-to-C approach is:

Kyoto Common Lisp (irrelevant today because of CMUCL)

There are a few variations of KCL (AKCL, GCL, others?) and some academic
projects (CLiCC, ECL) that also took the compile-to-C approach. The main
thing they all seemed to have in common is that they were written by one or
two people and had intentions toward portability -- both seem to weight
design decisions in favor of compiling to a portable low-level language
rather than direct to machine code.

BTW, Eclipse Common Lisp compiles to C as a _feature_, touted by the vendor
because it allows seamless integration of Lisp in C code (without FFI).

One current almost-Common Lisp implementation (CLISP) compiles to byte-code
rather than to machine code or C. One other (XLISP) is -- I think -- a pure
interpreter.

I'm less familiar with Scheme systems, but _of the ones I've noticed_ only
Gambit-C and Scheme->C use the compile-to-C approach. The rest seem to be
native or byte-code compilers, or (rarely) a pure interpreter.

[snip]

---
David B. Lamkins <http://www.teleport.com/~dlamkins/>

Erik Naggum

Dec 12, 1998
* Ian Wild <i...@cfmu.eurocontrol.be>

| I do a lot of work at the Unix command line, piping stuff from here to
| there through this and that, with the odd rerouting through A and B.
| Writing my filters in Scheme is a ton easier than perl, awk, bash, or
| whatever. I couldn't /possibly/ use CL here - the load time alone would
| kill me.

on my dual 400MHz Pentium II machine with 512M RAM, Emacs and Allegro
Common Lisp start in 0.04 real-time seconds, mostly because Emacs has to
talk the X protocol with the slower 50MHz SPARCstation 2 that is still my
X server, while bash starts in 0.001 seconds. I tend to start Emacs and
Allegro CL once every week or so, while there have been more than 8000
invocations of bash since my system last booted 5 days ago. speaking of
which, it took several _minutes_ to boot after a prolonged power failure
(I love the cold of winter -- NOT!), with all the disks it has to check
and all the other silly things it has to do, but I still don't count the
boot time when I measure the running time of my programs. isn't that odd?

oh, by the way, I have tried scsh. it took 0.7 seconds to load while in
the disk cache on this system. I think I'll fault Scheme for that today.

I hope this was almost as silly as your conjecture, but illuminating.

#:Erik
--
man who cooks while hacking eats food that has died twice.

Erik Naggum

Dec 12, 1998
* Guilhem de WAILLY <g...@linux-kheops.com>

| Really, we do not think that a language is powerful because it has
| hundreds of specification pages. See the lambda-calculus theory: three
| lines of grammar can describe everything that modern languages can describe.

there was this group of men who had grown up together and went to school
together and worked together for their whole life, and now they had been
placed in a home for the elderly together. they knew each other so well
they had decided to number their stories instead of just retelling them
over and over, which had gotten to be quite tedious. so they were
sitting in the living room of this place and telling stories. "342!",
said one, and all of them laughed. "916!", said another, and they
snickered and looked at one of them who didn't smile at all. "426!", he
retorted, and they all laughed out aloud. another elderly gentleman
enters the living room and he overhears one of them say "720!". he
bursts out laughing, much to the surprise of the other guys. "what are
_you_ laughing at?", one asked, almost scornfully. "it wasn't _that_
funny!" "oh, I'm sorry," said the intruder humbly, "it's just that I
hadn't heard that one before."

seriously, I have three great sources on Lambda Calculus, all by H. P.
Barendregt. one is his seminal work, The Lambda Calculus, Its Syntax and
Semantics¹, a 622-page book. another is his chapter on Lambda Calculi
with Types in the amazingly compact Handbook of Logic in Computer
Science², a 193-page condensed exposition. (it ends with a sentence that
had me laughing real hard when I first encountered it: "Glancing over the
next few pages, the attentive reader that has worked through the proofs
in this subsection may experience a free association of the whirling
details." the following pages (296-298) are absolutely hilarious.) the
third is a 12-page section in his article on Functional Languages and the
Lambda Calculus in the truly fantastic work Handbook of Theoretical
Computer Science³. I'd argue that these decrease in complexity in the
order I listed them, but now only three lines, huh?

I'm reminded of one guy who seriously believes that all of Ayn Rand's
works can be expressed as "A is A", too. you all know that's Aristotle.

#:Erik
-------
¹ ISBN 0-444-87508-5
² ISBN 0-19-853761-1 for volume 2
³ ISBN 0-262-22040-7 for the two-volume set

Steve Gonedes

Dec 12, 1998

jam...@volition-inc.com (James Hague) writes:

< I'd also argue that there are performance benefits of directly
< outputting native code.

How would you do this? I've seen a couple of libraries that
dynamically compile code and do something with it so that it can be
run without restarting - but the code was _way_ out of my league of
understanding.

< Lisp is not C, after all, and so there are often better ways of
< generating code rather than having to reformulate Lisp into C.

When I hear of a scheme outputting C code, I am almost reminded of a
virus (except maybe without the command line options). That is, if you
compiled the scheme compiler and wrote something with it, you'd have to
recompile most of the scheme compiler to get the program to work,
wouldn't you? Would a shared system library make it easier?

Maybe some people feel more comfortable using a C compiler (which is
probably reasonable; I've heard good things about the Intel and MS
compiler/assembler duo). I think compiling to C is a rather difficult
task though (scheme and C don't seem to share many commonalities).

Hope you don't mind all the questions; do enjoy hearing about these
things though, thanks...


David Cooper

Dec 12, 1998
David B. Lamkins wrote:
>
> Of course this is only true for appropriate values of "most", or maybe of
^^
> "Lisp-like languages".
>

Well, this really depends on what your meaning of the word ``is'' is.

#8>|

Pierre Mai

Dec 12, 1998
Erik Naggum <er...@naggum.no> writes:

> on my dual 400MHz Pentium II machine with 512M RAM, Emacs and Allegro

What, you've got one of these, too? They seem to be getting mighty
popular ;)

> Common Lisp start in 0.04 real-time seconds, mostly because Emacs has to
> talk the X protocol with the slower 50MHz SPARCstation 2 that is still my
> X server, while bash starts in 0.001 seconds. I tend to start Emacs and

Just for those with lesser machines, here are some timings from an AMD
K6-2 350 box with only 128MB SDRAM (a machine you can get for less
than $1000 nowadays!) running Linux 2.1.131:

Implementation         fresh (s)   in-cache (s)
-----------------------------------------------------
ACL 5.0                  2.224        0.117
CMU CL CVS 2.4.7         3.593        0.081
CLISP 97-12-06           0.572        0.103
Guile 1.2                0.894        0.550
Elk 3.0                  0.182        0.018
Bash 2.01                 ---         0.012
Bash 2.01 on AMD5x86      ---         0.064
-----------------------------------------------------
Notes:
- All times are real-time in seconds obtained via the internal
  time command of bash 2.01.
- The last entry is for Bash 2.01 on an old AMD 5x86-133 box,
  which has ca. P75-P90 general non-floating-point performance.
  In mid-1996 this system achieved the performance of most
  entry-level Unix workstations of the time under Linux 2.0,
  so any scripting engine with a reload time in this range
  should be fast enough.


> I hope this was almost as silly as your conjecture, but illuminating.

Just some more silliness, with the slight hope that the given
data points might enable some people to reconsider the trade-offs
involved in scripting-language selection (I also consider it
interesting that, of the tested implementations, guile had the
largest restart time by far, although guile is being promoted as
the mother of all scripting languages, whereas most of the other
implementations were never intended for scripting use).

Regs, Pierre, who'd like this newsgroup to return to more fruitful
topics than OS-wars and the fight on OSS...

PS: Here is the log-file for the timing sessions:

ACL 5.0 (not in cache and in cache):

bash-2.01$ time /opt/acl5/lisp -e '(excl:exit)'
Loading /opt/acl5/libacl503.so.
Mapping /opt/acl5/lisp.dxl...done.
Mapping /opt/acl5/acl503.epll.
Allegro CL Trial Edition 5.0 [Linux/X86] (8/29/98 10:57)
Copyright (C) 1985-1998, Franz Inc., Berkeley, CA, USA. All Rights Reserved.
; Exiting Lisp

real 0m2.224s
user 0m0.040s
sys 0m0.040s
bash-2.01$ time /opt/acl5/lisp -e '(excl:exit)'
Loading /opt/acl5/libacl503.so.
Mapping /opt/acl5/lisp.dxl...done.
Mapping /opt/acl5/acl503.epll.
Allegro CL Trial Edition 5.0 [Linux/X86] (8/29/98 10:57)
Copyright (C) 1985-1998, Franz Inc., Berkeley, CA, USA. All Rights Reserved.
; Exiting Lisp

real 0m0.117s
user 0m0.070s
sys 0m0.000s

CMU CL CVS-Version 2.4.7 (fresh and in cache):

bash-2.01$ time lisp -eval '(quit)'
;;; *** Don't forget to edit /var/lib/cmucl/site-init.lisp! ***

real 0m3.593s
user 0m0.010s
sys 0m0.060s
bash-2.01$ time lisp -eval '(quit)'
;;; *** Don't forget to edit /var/lib/cmucl/site-init.lisp! ***

real 0m0.081s
user 0m0.060s
sys 0m0.010s

CLISP 1997-12-06 (fresh and in cache):

bash-2.01$ time clisp -x '(quit)'
i i i i i i i ooooo o ooooooo ooooo ooooo
I I I I I I I 8 8 8 8 8 o 8 8
I I I I I I I 8 8 8 8 8 8
I I I I I I I 8 8 8 ooooo 8oooo
I \ `+' / I 8 8 8 8 8
\ `-+-' / 8 o 8 8 o 8 8
`-__|__-' ooooo 8oooooo ooo8ooo ooooo 8
|
------+------ Copyright (c) Bruno Haible, Michael Stoll 1992, 1993
Copyright (c) Bruno Haible, Marcus Daniels 1994-1997

Bye.

real 0m0.572s
user 0m0.010s
sys 0m0.030s
bash-2.01$ time clisp -x '(quit)'
i i i i i i i ooooo o ooooooo ooooo ooooo
I I I I I I I 8 8 8 8 8 o 8 8
I I I I I I I 8 8 8 8 8 8
I I I I I I I 8 8 8 ooooo 8oooo
I \ `+' / I 8 8 8 8 8
\ `-+-' / 8 o 8 8 o 8 8
`-__|__-' ooooo 8oooooo ooo8ooo ooooo 8
|
------+------ Copyright (c) Bruno Haible, Michael Stoll 1992, 1993
Copyright (c) Bruno Haible, Marcus Daniels 1994-1997

Bye.

real 0m0.103s
user 0m0.010s
sys 0m0.010s


Guile 1.2 (fresh and in cache):

bash-2.01$ time guile -c ''

real 0m0.894s
user 0m0.540s
sys 0m0.010s
bash-2.01$ time guile -c ''

real 0m0.550s
user 0m0.540s
sys 0m0.010s

Elk 3.0 (fresh and in cache):

bash-2.01$ time echo "(exit)" | scheme
>
real 0m0.182s
user 0m0.000s
sys 0m0.010s
bash-2.01$ time echo "(exit)" | scheme
>
real 0m0.018s
user 0m0.010s
sys 0m0.000s

Bash 2.01 (in cache):

bash-2.01$ time bash -c ''

real 0m0.012s
user 0m0.010s
sys 0m0.000s

The same on an old AMD5x86-133 (ca. P90 integer performance, in cache):

$ time bash -c ''

real 0m0.064s
user 0m0.040s
sys 0m0.020s


--
Pierre Mai <pm...@acm.org> http://home.pages.de/~trillian/
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Kellomäki Pertti

Dec 14, 1998
jam...@volition-inc.com (James Hague) writes:
> I'd also argue that there are performance benefits of directly
> outputting native code. Lisp is not C, after all, and so there are

> often better ways of generating code rather than having to reformulate
> Lisp into C.

Lisp is not assembler, either, so some radical reformulation is going
to take place anyway. It is true that you can always get at least the
same performance by emitting native code directly as going via C would
give you (simply emit the same code as the C compiler would). However,
you then need to reimplement all the work that has gone into an
optimizing C compiler for each architecture you want your system to
run on.

I am aware that C is not quite low level enough for convenient
implementation of some features (e.g. Scheme continuations) so that
you sometimes need to program around it. With respect to development
effort, emitting native code wins, but there is a large area in which
harnessing the work of others gives a better yield. Where the cutoff
point is is anyone's guess.
--
Pertti Kellomäki, Tampere Univ. of Technology, Software Systems Lab

Kellomäki Pertti

Dec 14, 1998
Kellomäki Pertti <p...@kuovi.cs.tut.fi> writes:
> With respect to
> development effort, emitting native code wins(*), but there is a large
> area in which harnessing the work of others gives a better yield.

(*) asymptotically, that is

Rob Warnock

Dec 14, 1998
Kellomäki Pertti <p...@kuovi.cs.tut.fi> wrote:
+---------------
| ... (simply emit the same code as the C compiler would). However,

| you then need to reimplement all the work that has gone into an
| optimizing C compiler for each architecture you want your system to run on.
+---------------

Not only that, but it is quite common [at least for some architectures
I'm familiar with (*cough*) (*cough*)] for the native C compiler and/or
assembler to contain bunches of "workarounds" for CPU chip bugs. So a
Lisp compiler that emits machine code directly needs to implement all
the same workarounds... (*ugh!*) for every architecture you want it
to run on... (*ugh!*) To me, that's another real advantage of using C
as the target.


-Rob

-----
Rob Warnock, 8L-855 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
2011 N. Shoreline Blvd. FAX: 650-964-0811
Mountain View, CA 94043 PP-ASEL-IA

Duane Rettig

Dec 14, 1998
rp...@rigden.engr.sgi.com (Rob Warnock) writes:

> Kellom{ki Pertti <p...@kuovi.cs.tut.fi> wrote:
> +---------------
> | ... (simply emit the same code as the C compiler would). However,
> | you then need to reimplement all the work that has gone into an
> | optimizing C compiler for each architecture you want your system to run on.
> +---------------
>
> Not only that, but it is quite common [at least for some architectures
> I'm familiar with (*cough*) (*cough*)] for the native C compiler and/or
> assembler to contain bunches of "workarounds" for CPU chip bugs.

Ah, yes, the infamous R4000 Rev 2.2 jump-at-end-of-page bug. In that
particular case, the linker was also involved in the "fix", and thus
would not have helped native lisp compilers, in which code vectors
(pieces of executable code) are first-class lisp objects and must be
manipulable in data space by the garbage-collector instead of by
a linker. For that matter, I don't even know of any
other lisps that worked on those buggy chips (did emacs even work on
them?) and certainly any compile-to-C lisps would have either had to
take extraordinary measures to work around this problem, or sacrifice
the ability to define functions dynamically.

The nature of the bug was that if a jump instruction was on the last
word of the page (meaning that the delay instruction would be on
another page), and if the delay instruction's page was not yet
paged in, the page would not be properly faulted in and the execution
of that jump instruction would likely fail with a SEGV.

SGI's "fix" was to arrange for the linker to guarantee that none of
these jump instructions would fall on the last word of a page,
either by rearranging code or by inserting well-placed nop instructions.
Since the linker was not involved in lisp code linking, this guarantee
could not be made. Other workarounds were possible, but most resulted in
code bloat and were unacceptable for a language that was always
blasted for being "too big".

The real solution would have been to purge all of the affected chips
from the customer base, but SGI (understandably) did not want to do
such a swapout when most of its customer base was fixed up with their
linker workaround. So we worked out a compromise, where we (Franz)
identified our affected customers, and SGI swapped out those chips.
It was a good compromise and a win for all concerned.

> So a
> Lisp compiler that emits machine code directly needs to implement all
> the same workarounds... (*ugh!*) for every architecture you want it
> to run on... (*ugh!*) To me, that's another real advantage of using C
> as the target.

A native-lisp compiler sometimes (as in this case) must use different
workarounds than C, because lisp has a broader set of requirements than
C does. When a lisp implementor tries to shoehorn lisp requirements into
a C implementation, the implementor is always faced with a strong set of
design decisions, some of which might lead either to a scaling down of
the lisp capabilities of the result, or to usage of non-portable
features of the particular C implementation.

Perhaps, if people are interested, I can put together a list of
design tradeoffs between native-compilation and compile-to-C
implementation. It would, of course, be biased toward the
native-compilation side, because that is how we've gone, but
at least it might provide fodder for a good technical discussion.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Mark K. Gardner

Dec 14, 1998
On 14 Dec 1998 08:53:51 -0800, Duane Rettig <du...@franz.com> wrote:
>Perhaps, if people are interested, I can put together a list of
>design tradeoffs between native-compilation and compile-to-C
>implementation. It would, of course, be biased toward the
>native-compilation side, because that is how we've gone, but
>at least it might provide fodder for a good technical discussion.

I'll bite. Please post your list of trade-offs.

Mark

--
Mark K. Gardner (mkga...@cs.uiuc.edu)
University of Illinois at Urbana-Champaign
Real-Time Systems Laboratory
--

Duane Rettig

Dec 14, 1998
mkga...@cs.uiuc.edu (Mark K. Gardner) writes:

> On 14 Dec 1998 08:53:51 -0800, Duane Rettig <du...@franz.com> wrote:
> >Perhaps, if people are interested, I can put together a list of
> >design tradeoffs between native-compilation and compile-to-C
> >implementation. It would, of course, be biased toward the
> >native-compilation side, because that is how we've gone, but
> >at least it might provide fodder for a good technical discussion.
>
> I'll bite. Please post your list of trade-offs.

I've had several such bites via email. Give me a few days to put
a good list together.

Stephen J Bevan

Dec 15, 1998
Duane Rettig <du...@franz.com> writes:
> Ah, yes, the infamous R4000 Rev 2.2 jump-at-end-of-page bug. In that
> particular case, the linker was also involved in the "fix", and thus
> would not have helped native lisp compilers, in which code vectors
> (pieces of executable code) are first-class lisp objects and must be
> manipulable in data space by the garbage-collector instead of by
> a linker. For that matter, I don't even know of any
> other lisps that worked on those buggy chips ...

I don't know about Lisp systems, but I do know that MLWorks would/will
run on them -- not having used anything MIPS-based before, the first
I'd heard of the bug was from reading the description of the
workaround in the MLWorks source.

Matthias Buelow

Dec 17, 1998
In article <4g1ai8...@beta.franz.com>
Duane Rettig <du...@franz.com> writes:

>> I'll bite. Please post your list of trade-offs.
>
>I've had several such bites via email. Give me a few days to put
>a good list together.

Hmm, putting that list on the www (or on ftp) somewhere and posting
the URL to it in here would be greatly appreciated, I think. Or if
it's not too long, perhaps you could post it directly?

--
Matthias K. Buelow * Boycott Micro$oft, see http://www.vcnet.com/bms/ *

Duane Rettig

Dec 18, 1998

Some light reading for the weekend, anyone? :-)


A few days ago, I offered to post a list of reasons we at Franz
had for compiling to native code, instead of using C as a sort of
pseudo-assembler. I received a huge number of responses on this
list and by email, and so I am posting this. Please consider this
a first pass; I have no problem refining it, especially in light of
the fact that I don't know all of the various implementations which
might be represented by this list. Also, since as an implementor
I have landed solidly in the "directly to machine-language" camp,
this first report will of course be biased in that direction.

I sent a preliminary copy of this document to Howard Stearns, who
is the developer and owner of Eclipse, a lisp-to-c compiler. He
had some very good comments, some of which I have incorporated
into the document. Howard also had a proposal for an introduction,
which was in reality more of an excellent rebuttal to my submission
(not in the sense that there is disagreement with what I said, but
in the sense that one can look at things from different points of
view). I did not incorporate that introduction here, but I hope
Howard will post it as part of the discussion afterward.

Feedback is welcome. Post to comp.lang.lisp, so that the discussion
can be inclusive.

===

It has been said that C is very close to assembly language. I disagree.
This assumption has often led people to erroneously assume that
there is little difference between a lisp that compiles to C and one
that compiles directly to machine code. In fact, there are quite a
few things that can be done with assembler/machine-code that are hard
or impossible to do in C.

On the other hand, the two major reasons to compile lisp to C code
instead of to machine level are:
1. C is much more portable than machine architectures, and thus
porting to new architectures is much easier when C is the
intermediate target.
2. If care is taken to retain readability in the generated C
output, it can then be used as source code itself, and can
directly interface to other C libraries, instead of needing
a foreign-function interface.

I have put together a list of differences between lisp-to-C and
lisp-to-machine-level styles, as well as intermediate possibilities.
In this list, I'll use [] to indicate Franz's Allegro CL solution (a
direct-to-machine-level solution), {} to indicate a portable C solution,
and // to indicate a hacked-C-output solution such as the old FranzLisp
(not Allegro CL). FranzLisp used a hybrid solution, since some of its
lisp code was written in lisp and some was written in lispy-C. The lisp
code went directly to assembler, but the C code was given a -S option
to generate assembler code, and then sent through an awk script to
replace some variables with registers.

1. The fact that C does not allow registers to be exposed forces some
hacks in order to get some much needed values into dedicated registers,
such as nil, the current function object, and a lisp global values
table.

[We control the compiler, which defines any special register
requirements]

{A portable C implementation might use functions or macros to grab
such often-used values}

/FranzLisp used both methods, using C variables when in C and
registers while in compiled lisp code/

2. The calling/returning of args/values only supports C semantics.

a. No way to know the number of arguments passed.

[We simply allocate a dedicated register for args passed and values
returned]

{Portable C might pass an extra argument always for the count, or
a marker at the end of the argument list.}

/FranzLisp did not use the C calling sequence, but hacked two values at
assembler-munge time, lbot and np, to registers, and used them to always
pass lisp arguments on the bindstack. The number of args passed was the
difference between lbot and np./

b. No direct &optionals processing.

[we generate efficient code for optionals processing using the count
register]

{similar code is generated, but usually not as efficient since the count
might not be in a register}

/FranzLisp (see the discussion at (a)/

c. No &rest processing. Many man-years have been dedicated to the
solution of the varargs problem, with var-success. Some architectures
pass the first few arguments in registers _unless_ varargs is needed,
in which case they are passed on the stack. The closest equivalent
to &rest processing is the printf genre, where the number-of-args is
implied (but not enforced) by the format control string.

[ Args are always passed in registers, but &rest processing forces a
copy to stack. The lisp calling sequence always guarantees that these
stack locations are contiguous, unlike some varargs implementations]

{For truly portable code, varargs must be used. On some architectures,
this does not assume that args are stacked contiguously, and so may be
inefficient}

/FranzLisp used the lisp bindstack for arguments, which made &rest args
easy enough/

d. No allowance for &rest args with dynamic extent. In general,
dynamic-extent consing is a hard thing to do portably even at the
machine-level. The reason for this is because although some of the
less RISC-like architectures provide separate frame and stack
pointers, most architectures want basically one major stack-allocation
operation, at the beginning, if stack allocation is to take place
at all. So extra stack space for stack-consing is a little harder
to get. The two notable cases of where it is done anyway are:
i. When the compiler can determine the size of all of the objects
and preallocate them; this is easily done with automatic
variables in C, as well as directly using local static space
in lisp-to-machine-level.
ii. When the size of the extra stack is dynamic, but can be
determined before the stack space is allocated, such as is the
case when the &rest argument wants to be allocated.
[We do this directly via the argument count, either when the
time comes to allocate extra stack, or else before the stack
is allocated, if the architecture requires a single stack grab]
{C must rely on the availability of alloca() to do this, or
must drop into special assembler code.}
/FranzLisp did not stack-allocate/
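
For illustration, a minimal sketch of the lisp-side request for case ii
(the function name is only an example):

(defun sum-args (&rest xs)
  (declare (dynamic-extent xs))  ; the &rest list may be stack-allocated
  (reduce #'+ xs))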

e. No &keys processing. C has no such concept and no equivalents.

[we use a fast primitive call to a keyword-processing primitive, which
fills in a keyword table with relevant info.]

{Don't know how Portable C implementations do this. There are probably
many options, including FranzLisp's way}

/ FranzLisp used higher-level lisp constructs and lexpr argument passing
to implement keyword processing. The compiler knew nothing of keywords,
and the lispy-C functions didn't accept them./
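
For reference, a minimal sketch of the lisp-side feature being
implemented (the names are only examples):

(defun make-window (&key (width 640) (height 480) title)
  (list width height title))

;; (make-window :title "listener" :width 800) => (800 480 "listener")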

f. No multiple-values to return. C uses lvalues to return more than
one value to the caller.

[Multiple values are usually fairly localized. We pass the first N
values in the same registers that are used as arguments (except sparc,
where N is 1) and values over N in a vector that is available to the
receiving multiple-value-{setq,bind,call} ]

{Probably several ways to do this in portable C, but none very efficient.

Howard Stearns adds the following example:
(multiple-value-call #'bar x y 99 (values-producing-function))
as being problematic for determining where to save multiple values.
}

/FranzLisp was before the days of RISC, and so it hacked the stack to
push values before returning./
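
For reference, a minimal sketch of the lisp-side feature (the names are
only examples):

(defun div-mod (a b)
  (values (floor a b) (mod a b)))   ; returns two values

;; (multiple-value-bind (q r) (div-mod 17 5)
;;   (list q r))                    ; => (3 2)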

3. The calling of a function is static. This static nature of the C
function call is becoming more and more gray, because of the introduction
of PIC (position-independent-code) into C (lisp is usually PIC, itself,
and current Allegro CL generated code is always PIC; those lisps that
are not PIC must retain relocation bits or strategies, and relocate each
time the code is moved). However, the basic unit of the "call whatever
my current function code is now" in C is (*(void(*)())func)() (talk
about your "Lots of Irritating Stupid Parentheses" :-) and in segmented
architectures such as the HP this is very inefficient (it usually
involves external branches to stub functions both for calling and
returning).

[We always call through a symbol->function->code access sequence, and
for funcall, a function->code sequence. This may appear to be
inefficient, but is no worse than C calling within shared libraries,
and is necessary for dynamism, and allows very nice things to be
done using a trampoline, including the fact that the trampoline
is _always_ in the instruction cache.]

{The only way I know of is to always call indirect through the
((void*)func)() construct, or to not allow dynamic redefinition,
or to hack the linker, certainly not a portable implementation.

Howard Stearns adds:
"If, though declarations or block-compilation, a user specifies that a
function being called will not be redefined at run time, then all
techniques can generate pretty efficient code. (VERY efficient if the
compiler inlines the call away altogether!) Otherwise, some sort of
indirection can be used in all techniques."
}

/FranzLisp always called through a "translink" table, one per compiled
file, which had a set of symbol/start pairs. The start slot was
normally initialized to a lookup function, but lookups could be
pre-arranged on the table, so that the start field pointed directly
to the function./
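
A minimal sketch of the dynamism such an indirection buys (the names are
only examples):

(defun version () 1)
(defun ask () (version))   ; the call goes through the symbol VERSION
;; (ask) => 1
(defun version () 2)       ; redefine at run time
;; (ask) => 2, with no recompilation of ASK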

4. Special variables must be bound/unbound efficiently. More and
more this dynamic-binding technique tends to be discouraged, but
when it must be used, even the most tightly-inline-assembled code
is not fast enough; dynamic-binding is slow. I don't know how
well C compilers can compile a variable binding, but at the very
least some provision must be made for bindstack overflow. (I am
of course assuming a shallow-binding approach, here, because it
is what most current lisps use; there is something to be said for
deep-binding (even though accesses are slower), especially in the
face of the rising use of multiprocessing).

[We used to do this binding inline, but actually found both speed
and space optimizations to be had in calling out to a few
hand-assembled primitive functions.]
{C ???}
/FranzLisp did no binding in C code; it was done in inlined code
by the lisp compiler/
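
For reference, a minimal sketch of the lisp-side feature (the names are
only examples):

(defvar *depth* 0)           ; proclaimed special

(defun report () (print *depth*))

(defun demo ()
  (let ((*depth* 1))         ; dynamic binding: must be undone on exit
    (report))                ; prints 1
  (report))                  ; prints 0 again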

5. Closure variables must be properly referenced. This includes the
fact that closed-over environments might only be identifiable by
address (identity, as opposed to name), and variable access in such
environments should be as efficient as possible.

[We implement a closed-over environment as a specialized object, and
it has its environment information and its function template. The
closure is itself funcallable, and goes through a short piece of code
(3 to 5 instructions) to establish its environment and to call its
template function. Variable access through the environment is usually
a couple of instructions, with sometimes one indirection.]
{C: Howard Stearns says:
The main way in which the implementation of closures is affected by
machine model is in how the environment is established within the
function. In a lisp-like stack machine, binding access can be much
like argument access. In both register-machines and c-machines,
special code must be used to setup, access and debug bindings.
My paper, below, briefly covers how Eclipse does this."
}
/FranzLisp did not have closures per se. It had something like
closures which captured certain environments, but did not work
like CL closures./
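
For reference, a minimal sketch of the lisp-side feature (the names are
only examples):

(defun make-counter ()
  (let ((count 0))              ; the closed-over environment
    (lambda () (incf count))))  ; a funcallable closure object

;; (let ((c (make-counter))) (funcall c) (funcall c)) => 2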

6. There are no constructs in C for various possible hardware
operations.

{This class of deficiency is one which can't be solved
in portable C, because the whole point of these operations is for
efficiency. A few C compilers allow for asm statements, but that is
the exception, not the rule, and so using asm is non-portable,
even on similar architectures.

Regarding items a and b, below, Howard Stearns says:
"C code can be generated that abstracts certain operations into a
macro. On machines which offer support for assembly or
overflow-generated interrupts, the macro can be conditionally
compiled to make use of the hardware."
}

/FranzLisp did not take advantage of very many of these optimizations./

a. Addition/subtraction overflow detection: When two fixnums are
added together in safe code, the result may be a bignum. To
detect this, instruction sequences must be used to detect such
overflow. No C construct allows for this.

b. On sparc, specifically, the tagged add instruction allows a
check that the two operands have the two low order bits set to 0.
On architectures where a fixnum has a two-bit tag of 0 (or, more
likely, where the tag is in the three LSbits and fixnums take
up both tags 0 and 4), this instruction allows for an automatic
detection of non-fixnum operands in an add instruction (as the
open-coded portion of a #'+ operation). C provides no semantics
for such tagged-add usage. Presumably, portable C will efficiently
compile the equivalent mask/test/add operation [e.g.
((x|y & ~3) : lisp_add(x,y) ? (x+y)) ], although this does not
also handle the overflow situation.

c. On architectures which trap on misaligned references (this always
excludes RS/6000 for hardware reasons and almost always excludes
Intel architectures for operating system reasons), and assuming a
three-lower-bit tagging scheme, the tags can be arranged in such a
way that a car/cdr access of an object that is not a list will
cause a trap that will be recognized by a trap handler. For
example, Allegro defines conses with tag bits 001, and nil has
tag 101. The car function in pseudo-assembler is simply
"ld Rx,-1(Ry)". If the access is not aligned, the object in Ry
was not a cons or nil.

{Portable C note: C can cause alignment traps under some conditions.
However, it is hard to predict what assembler code is produced, so
that the trap handler can give a nice message which includes the
bad object; for example:

USER(1): (car 'a)
Error: Attempt to take the car of A which is not listp.
...

}

d. Continuing from errors is hard if the environment cannot be
restored. In the case of an unbound symbol, the simplistic
approach (and the one which portable C must take) is to call
out to an error handler, which can then return with the corrected
value once the restart is taken. However, this approach can
lead to tremendous code bloat, since every safe symbol access
must make a function call to an error handler. Consider instead
this session:

USER(1): (defun foo () (declare (optimize (speed 2) (safety 0))) *x*)
FOO
USER(2): (compile 'foo)
; While compiling FOO:
Warning: Free reference to undeclared variable *X* assumed special.
FOO
T
T
USER(3): (disassemble 'foo)
;; disassembly of #<Function FOO>
;; formals:
;; constant vector:
0: *X*

;; code start: #x204d43c4:
0: e3 02 jcxz 4
2: cd 61 int $97 ; trap-argerr
4: d0 7f a3 sarb [edi-93],$1 ; C_INTERRUPT
7: 74 02 jz 11
9: cd 64 int $100 ; trap-signal-hit
11: 8b 4e 32 movl ecx,[esi+50] ; *X*
14: 8b 41 0d movl eax,[ecx+13]
17: 3b 47 bf cmpl eax,[edi-65] ; UNBOUND
20: 75 02 jnz 24
22: cd 63 int $99 ; trap-unbound
24: f8 clc
25: 8b 75 fc movl esi,[ebp-4]
28: c3 ret
29: 90 nop
USER(4): (foo)
Error: Attempt to take the value of the unbound variable `*X*'.
[condition type: UNBOUND-VARIABLE]

Restart actions (select using :continue):
0: Try evaluating *X* again.
1: Set the symbol-value of *X* and use its value.
2: Use a value without setting *X*.
3: Return to Top Level (an "abort" restart)
[1] USER(5): :cont 1
enter an expression which will evaluate to a new value: 'hi
HI
USER(6): (foo)
HI
USER(7):

Note that in the disassembled output, the interrupt instruction at
byte 22 is bypassed if *x* is bound, and its value is in the return
value register (eax, in this case). But when *x* is unbound, the
interrupt occurs, and the trap handler parses the code prior to the
interrupt both to figure out what symbol is being referenced, and
to figure out what destination register is wanted for the value.
When the continuation at index 1 is selected, and a value is
supplied, the variable is set, and then the interrupt handler
arranges for the appropriate register (eax) to receive the value
when the interrupt returns to instruction byte 24.

Needless to say, there is simply no way to do this in C.
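
For contrast, a minimal sketch of the portable-C approach from item d,
in which every safe special-variable read must test and call out; all
names here are hypothetical:

    typedef unsigned long lispval;

    struct symbol { lispval value; /* ... other slots ... */ };

    extern const lispval UNBOUND_MARKER;  /* runtime's unbound token */
    extern lispval unbound_variable_error(struct symbol *s);

    static lispval symbol_value(struct symbol *s)
    {
        lispval v = s->value;
        if (v == UNBOUND_MARKER)
            /* The handler may signal, or return a corrected value
               once a restart is taken; either way, this test and
               call are emitted for every safe symbol access. */
            v = unbound_variable_error(s);
        return v;
    }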

In addition to the above list, I have heard several times the
suggestion that lisp could use C as a backend for optimizations.
I tend to disagree with this, because of the nature of C optimizations.
There are, of course, issues about the front-end optimizations such
as loop unrolling, strength reduction, and common-subexpression
elimination that must be considered on a per language-need basis,
and there are also the back-end optimizations like branch-tensioning,
instruction scheduling, and other peephole items that must be done.
But it seems to me that the largest source of C compiler work (and bugs
that I've seen) has to do with aliasing proofs: given two pointers
in C, how does one prove that the objects pointed to are or aren't the same?
Since you can think of lisp as a language with either _all_ pointers
or _no_ pointers, and since object equality is defined by the semantics
of the language, pointer aliasing is less often a problem for lisp.
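
A standard illustration of the problem: in the sketch below, the
compiler cannot cache *a across the store through b unless it can
prove the two pointers never refer to the same object:

    int f(int *a, int *b)
    {
        *a = 1;
        *b = 2;
        /* If a and b may alias, *a must be reloaded here: the result
           is 1 when they don't alias, but 2 when a == b. */
        return *a;
    }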

To be fair, there are areas where algorithms for strength reduction and
other optimizations could be handled by the lower-level assembler/compiler,
and we are behind on some of these optimizations (each of the
native-compiler lisps has its strengths and its weaknesses; such
numeric analysis, for example, is definitely one of CMUCL's strengths).


References (thanks to Howard Stearns for supplying this):

A classic virtual register machine for running Lisp is the subject of
one chapter of:
Abelson, Harold, Gerald Jay Sussman and Julie Sussman.
Structure and Interpretation of Computer Programs. MIT Press,
1985. 542 pages. ISBN 0-262-01077-1 Second Edition, 1996.

A brief discussion of many of these issues and how they affect
performance is the first chapter of:
Gabriel, Richard P. Performance and Evaluation of Lisp Systems.
MIT Press, 1986. 285 pages. ISBN 0-262-07093-6

Compile-to-C issues are discussed in this paper AND in its references:
Stearns, Howard. Lisp/C Integration in Eclipse.
40th Anniversary of Lisp Conference, 1998.
http://www.elwood.com/eclipse/papers/lugm98/lisp-c.html

More detailed discussion of different compilation techniques for Lisp
can be found throughout:
Queinnec, Christian. LiSP in Small Pieces.
Cambridge University Press, Cambridge, U.K., 1996. 514 pages.

Immanuel Litzroth

unread,
Dec 19, 1998, 3:00:00 AM12/19/98
to
You might also want to check out the papers by Simon Peyton Jones
about C--, a portable assembly language. They seem to address similar
issues.
http://research.microsoft.com/Users/simonpj/c--.html
There's also a paper on Marc Feeley's website
http://www.iro.umontreal.ca/~feeley/
that describes some benchmarks; the general gist (as I recall) is
that compiling to C is 2-3 times slower than compiling to native code.
Thanks for sharing your thoughts, because using C as a target language
seems to cause more problems than it solves, and these issues have been
on my mind too lately.

> 2. If care is taken to retain readability in the generated C
> output, it can then be used as source code itself, and can
> directly interface to other C libraries, instead of needing
> a foreign-function interface.

I have some experience using Bigloo and Gambit (C-compiling Scheme
implementations) and their foreign function mechanisms. I can
assure you that doing anything non-trivial requires you to know a lot
about C and about the particular implementation.
Immanuel

Paolo Amoroso

unread,
Dec 19, 1998, 3:00:00 AM12/19/98
to
On 18 Dec 1998 19:10:21 -0800, Duane Rettig <du...@franz.com> wrote:

> 6. There are no constructs in C for various possible hardware
> operations.

Can Lisp compilers or runtime systems take advantage of the new breed of
multimedia extensions--e.g. MMX for Pentiums--to the instruction sets of
popular CPUs? Instructions for moving data quickly, for example, might be
useful for garbage collection.


Paolo
--
Paolo Amoroso <amo...@mclink.it>

Duane Rettig

unread,
Dec 19, 1998, 3:00:00 AM12/19/98
to
amo...@mclink.it (Paolo Amoroso) writes:

> On 18 Dec 1998 19:10:21 -0800, Duane Rettig <du...@franz.com> wrote:
>

> > 6. There are no constructs in C for various possible hardware
> > operations.
>

> Can Lisp compilers or runtime systems take advantage of the new breed of
> multimedia extensions--e.g. MMX for Pentiums--to the instruction sets of
> popular CPUs? Instructions for moving data quickly, for example, might be
> useful for garbage collection.

Yes.

The general rule is that any feature of the hardware that is apparent
at the architecture level (and can be executed in user mode, i.e. is not
a privileged instruction) can be generated in a lisp-to-machine-level
compiler.

Allegro CL has three programming interfaces to the hardware, two input
and one output. The two input interfaces start with LAP (Lisp
Assembler Program) code, a selectively machine-specific pseudo-assembler
code that looks lispy and is easy to manipulate, especially at the
peephole pass. The three interfaces are

1. An in-core assembler, to get from LAP to the bit-patterns of the
code vector.
2. A .s file generator, to go from LAP to assembler source (to be
assembled and linked-in to the runtime system).
3. The disassembler (to take bit patterns and turn them into textual
representation similar to assembler source).

To add new architectural features, the new instructions must be added
to these three interfaces. It is usually not hard, but the challenge
is how to work them into the code generators to get the desired LAP
output.

The problems I see with MMX are:
1. It displaces the floating-point hardware; you effectively cannot
use MMX and floating point in the same function in a practical manner
because the switch between MMX and float modes is too disruptive.
2. Not all X86 chips on which our software runs support MMX; thus,
emulation is necessary on those chips (resulting in significant
slowdown).
3. Radically different architectural concepts (i.e. which have no
equivalents in other hardware) tend to get less priority in my mind,
unless the payback is _significant_. Note, however, that HP has added
parallel-subword saturation-arithmetic (good for pixel calculations)
in their architectures, so the MMX concept might be good for future
consideration, though _not_ for data movement.

Please note that the above reasons do not kill the possibility of
working with MMX; they just lower its priority in the mindshare I
give it. I constantly review these mindshare priorities, and change
them when necessary. My colleagues do the same.

Howard R. Stearns

unread,
Dec 23, 1998, 3:00:00 AM12/23/98
to
Here is the promised introduction:


First, understand that there are compiler issues that are at a higher
level than "machine" code generation, which sometimes have more of an
effect on both performance and debugging than anything else. These
have nothing to do with whether C, assembler or machine code is
generated. For example, if a user indicates that function definitions
are known and unchanging (either through declarations or some sort of
block compilation semantics), a compiler might inline code or at least
use a special calling sequence ("inlining" away optional or keyword
argument parsing).

Second, there are different kinds of machines one can target as the
"runtime engine." If you are generating code for specific
register-machine hardware, you can take advantage of that
hardware. (This is what most of this listing will discuss.) One could
also target various higher level virtual machines: a lisp-like
stack-machine, a "C machine", or a byte-engine (such as is done for
Java and CLISP).

If you want to target physical hardware, you have to generate at least
some machine code. Portable C just isn't low-level enough. Either C
source or direct machine code generation can be used when targeting
any of the different kinds of virtual machines. For example, "C
machine" means the function call/return discipline used by standard C
compilers on a given platform. To integrate with other code, many
language compilers allow code to be generated for this "C machine".
This model is missing many important basic Lisp mechanisms, which can
affect performance and Lisp debugging, but is the only way to get
complete integration with arbitrary non-Lisp application components,
including non-Lisp debugging. The same is true for Lisp-in-Java or
similar target machines.

"C machine" code can be produced by either generating C source and
compiling it, or by generating the machine code directly. When C
source code is used for this purpose (or Java code for a Java
machine,) this is sometimes called idiomatic or tight integration.
However, C source can also be generated simply as a means of producing
code for other virtual machines (Lisp engines or byte engines). This
is sometimes called "C-as-assembly-language", but keep in mind that
this does not refer to hardware register-machine assembly, but rather
assembly language for a higher level machine. This latter approach
produces integration only with the virtual machine being targeted.

Finally, there are linking and run-time issues that are also somewhat
independent of the previous issues. Regardless of how the code is
generated, platform-specific techniques can be used to dynamically
load new code. Run-time representations may or may not be independent
of how the code accessing the objects is produced. For example, a
garbage collector is much easier to implement if a Lisp-like virtual
stack machine is used. On the other hand, compiling to idiomatic C
for a "C-machine" alone doesn't guarantee that Lisp functions and data
behave exactly like C functions and data -- the garbage collector
behavior also affects this.

Barry Margolin

unread,
Dec 23, 1998, 3:00:00 AM12/23/98
to
In article <4zp8kn7x...@beta.franz.com>,

Duane Rettig <du...@franz.com> wrote:
>2. The calling/returning of args/values only supports C semantics.

Most of the sub-issues in this section assumed that the Lisp calling
sequence would translate somewhat directly into a C call, e.g.

(f a b c)

would compile into something like:

f(a, b, c);

or perhaps with an extra argument to handle potential &optional issues:

f(3, a, b, c);

Don't many of the Lisp-to-C translators simply forego using the C calling
sequence in this way? They have a separate, global (or per-thread) array
that they use as the argument PDL, or perhaps all functions are called as:

f(arg_count, arg_vector);

The similarity between this and the way C's main() function is invoked is
not coincidental -- it's a simple way to implement varargs semantics
without the hassle of C's varargs mechanism, applicable whenever all the
elements of the vector are of the same type (all strings for main(), and
all of whatever type you use to represent general Lisp objects in a
Lisp-to-C translator).
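
A minimal sketch of such a convention (lispval, NIL, and cons() are
stand-ins for whatever the hypothetical runtime supplies):

    typedef unsigned long lispval;

    extern lispval NIL;
    extern lispval cons(lispval car, lispval cdr);

    /* Every Lisp function compiles to this one C signature, in the
       same spirit as main(argc, argv). */
    lispval lisp_list(int arg_count, lispval *arg_vector)
    {
        lispval result = NIL;
        while (arg_count-- > 0)
            result = cons(arg_vector[arg_count], result);
        return result;
    }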

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Don't bother cc'ing followups to me.

Duane Rettig

unread,
Dec 23, 1998, 3:00:00 AM12/23/98
to
Barry Margolin <bar...@bbnplanet.com> writes:

> In article <4zp8kn7x...@beta.franz.com>,
> Duane Rettig <du...@franz.com> wrote:

> >2. The calling/returning of args/values only supports C semantics.
>

> Most of the sub-issues in this section assumed that the Lisp calling
> sequence would translate somewhat directly into a C call, e.g.
>
> (f a b c)
>
> would compile into something like:
>
> f(a, b, c);
>
> or perhaps with an extra argument to handle potential &optional issues:
>
> f(3, a, b, c);

It may appear that I have made that assumption, especially from some of
the straw examples I gave, but that was not intended; indeed, the FranzLisp
example does precisely what you describe below, using registers stolen from
C before the assembler phase, which point into the bindstack for such
arguments.

In reality, both lisp and C calls are higher level constructs than most
machines provide for; the "gcd" (greatest common denominator) for most
machines is the call instruction
(aka call, jsr, jmpl, bl, etc.) which only provides you with control of
the execution locus (i.e. a place to jump, saving the place to return).
The passing of arguments is entirely up to the programmer. (The sparc
can pass the first 6 args in registers automatically, but even this is
not mandatory; I saw a lisp from Japan which didn't use register windows,
and on deeply recursive functions claimed to be 10X faster than lisps and
Cs running similar benchmarks.)

On the other hand, most architecture/operating-system sets have documents
which define a "calling standard", which lay out for various languages
how the calls are to be performed at the assembler level, and even what
registers are caller-saves vs callee-saves, etc. It behooves any
language implementor that wants his language to talk to other languages
to at least pay attention to this underlying standard. Usually these
standards are called "C calling standards" because other languages such
as fortran, pascal, and lisp have extra needs, and the written standard
is only meant to describe interoperability with C.

So in summary, I assume only that the standard calling conventions are
adhered to, which can _mostly_ be described by C syntax.

> Don't many of the Lisp-to-C translators simply forego using the C calling
> sequence in this way?

FranzLisp did it essentially this way, but from reading Howard Stearns'
paper on Eclipse, they do not do it this way; instead they pass an
"end-of-arguments" flag with regular C arguments. But their goals are
higher-level than just using C as an assembler; they want their generated
code to be able to talk to C programs as directly as possible.

I don't know how any other lisp-to-c translators do this; perhaps their
implementors will speak out about it.

> They have a separate, global (or per-thread) array
> that they use as the argument PDL, or perhaps all functions are called as:
>
> f(arg_count, arg_vector);
>
> The similarity between this and the way C's main() function is invoked is
> not coincidental -- it's a simple way to implement varargs semantics
> without the hassle of C's varargs mechanism, applicable whenever all the
> elements of the vector are of the same type (all strings for main(), and
> all of whatever type you use to represent general Lisp objects in a
> Lisp-to-C translator).

This avoids the hassle of varargs, but new issues must be solved; usually
having to do with allocation (what happens when your argument stack
overflows) or deallocation (what happens when a throw occurs).
Obviously they are solvable, but working with FranzLisp we found that
multiple stacks always require extra overhead to maintain consistency.

Howard R. Stearns

unread,
Dec 28, 1998, 3:00:00 AM12/28/98
to
In my other message on this thread, I wrote about how defining different
target machine models might be more useful in understanding some issues
than simply describing which source languages are used to implement
those target machines. (I know that Duane and Barmar understand this
stuff, I'm just trying to clarify for the rest of us.) Using this
approach...

C-machines define a mechanism for varargs, but not for specifying the
argument count. If your goal is to interoperate with C code using C
idioms, then you want your compiler to "compile to a c-machine" and use
varargs for Lisp's variable arguments. This still leaves open the issue
of how one knows when the arguments are terminated: Eclipse uses an
End-Of-Arguments marker as a compiler-supplied extra last argument,
ECoLisp uses an int argument count as a compiler-supplied extra first
argument. (Eclipse uses EOA because it's less prone to breakage by
hand-editing C code.) This also leaves open the issue of whether or not
functions with a fixed number of arguments use this mechanism. (In
Eclipse, all functions use varargs, so that users writing calling code
(and interpreters!) don't need to know how the function was defined.)
I think many of Duane's comments for compiling-to-c are really meant to
apply to c-machines, as opposed to using C to implement a Lisp-like
stack machine. For example, I believe that CLiCC and KCL/AKCL/GCL use C
only as "portable assembly language", where "assembly language" here
does not mean hardware assembly language but rather an assembly language
for a Lisp-like stack-oriented machine, which is itself implemented in
C. Such a machine does not use the C calling convention for user code,
but only for the assembly code itself. User functions operate by having
the caller build a stack frame of arguments. This stack frame is
implemented as an array, and the C-code "assembly language" provides
"instructions" (C macros or functions) for pushing things onto this
stack, etc. In C terms, each Lisp function is then
called/invoked/jumped-to, by C-calling a function that takes only "meta
arguments" -- arguments to the abstract machine such as the location of
the stack and/or number of arguments. The actual arguments for the Lisp
function are on the abstract machine stack, not the hardware/C-machine
stack/registers.

Duane Rettig wrote:
>
> Barry Margolin <bar...@bbnplanet.com> writes:
>
> > In article <4zp8kn7x...@beta.franz.com>,
> > Duane Rettig <du...@franz.com> wrote:

> > >2. The calling/returning of args/values only supports C semantics.
> >
