
(lisp vs c++ performance) Rick Graham, Master of Hackology in C++


RGR...@loyalistc.on.ca

Jul 22, 1994, 9:14:19 PM


(major portion deleted)

> I've worked in lisp for 15 years, but I think that when I
> start my next project I'll give serious consideration to
> starting in c++ rather than lisp.

|Bryan M. Kramer, Ph.D. 416-978-7569, fax 416-978-1455
|Department of Computer Science, University of Toronto
|6 King's College Road, Room 265A
|Toronto, Ontario, Canada M5S 1A4

I program in C++ and I'm building a LISP interpreter into my product
to give it flexibility. I think LISP is a wonderful method for
adding a robust malleability to the type of software that benefits
from extensibility. AutoLISP is a prime example of this. AutoCAD
started out life as a generic CAD program. The AutoLISP extension
put it on top of the heap. But could you imagine AutoCAD written in
LISP? No way. The only reason I could think of for writing a
program in a language that takes 20 megs of space when another
language will do it in 2 is for the sake of using the language.

I will regret having said this here, but I cannot think of a major
process that I would wish to write in LISP. I don't think that's
where its strength is. LISP is a tool best used to intelligently
link other tools. The space it takes up cannot be justified by cheap
memory. If someone gave me a house with 200 rooms, how would I keep
it clean?

Rick

Rick Graham, the Binary Workshop
rgr...@loyalistc.on.ca Data:(613)476-4898 Fax:(613)476-1516

Lawrence G. Mayka

Jul 23, 1994, 6:05:11 PM

> I will regret having said this here, but I cannot think of a major
> process that I would wish to write in LISP. I don't think that's
> where its strength is. LISP is a tool best used to intelligently
> link other tools. The space it takes up cannot be justified by cheap
> memory. If someone gave me a house with 200 rooms, how would I keep
> it clean?

Your question begs for the obvious answer, following your own analogy:
the House of Lisp cleans itself automatically, and indeed offers the
option of removing any rooms that you've decided, after settling in,
that you don't want.

Seriously, real-life Common Lisp applications typically require
image-trimming (e.g., via a treeshaker) in order to be competitive in
space with similar applications written in more parsimonious
languages. We simply must include the image-trimming effort in the
total productivity equation. I still think we come out way ahead.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

Kirk Rader

Jul 25, 1994, 11:49:36 AM
In article <LGM.94Ju...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

[...]

>
>Seriously, real-life Common Lisp applications typically require
>image-trimming (e.g., via a treeshaker) in order to be competitive in
>space with similar applications written in more parsimonious
>languages. We simply must include the image-trimming effort in the
>total productivity equation. I still think we come out way ahead.
>--
> Lawrence G. Mayka
> AT&T Bell Laboratories
> l...@ieain.att.com
>
>Standard disclaimer.


I agree that using lisp you come out way ahead in productivity and
without paying too large a cost in executable size and performance if:

1. You are sufficiently aware of the performance and memory
implications of common operations and idioms to know how to design for
a sufficient degree of efficiency up front without having to spend too
much time finding and fixing "performance leaks" after the fact. This
will be highly dependent on the particular implementation in use,
since in my experience most (commercial or otherwise) lisp
implementations have their own idiosyncratic patterns of which
operations cons excessively and which do not, and which functions are
coded in an optimally efficient manner and which are better avoided in
favor of home-grown lisp or foreign code.

2. You are working in a problem domain which is well-suited to lisp
in the first place. Some problem domains are best addressed using the
features of a lisp-like language because they actually make good use
of lisp's semantic bells-and-whistles. Note that the more "lispish"
features one uses, the less tree-shaking is likely to actually find
substantial amounts of unused code to eliminate; but since it is
already conceded in this case that those features "pay their way" in
the application's executable, this is not an issue.

The considerations cited in 1, together with the quality of modern
C/C++ integrated development and debugging environments, are the
reason I feel that most claims of lisp's "productivity enhancing"
features are overblown, if not simply false. There are valid reasons
for using lisp for certain kinds of applications. There are also
valid reasons for avoiding it in others. When starting a new project,
it is a good idea to spend at least some time considering the
trade-offs involved before making a choice as basic as what language
to use in implementing it. As someone whose job it has been to make
piles of performance-critical lisp-machine code run on general-purpose
machines in commercial Common Lisp implementations, I have had this
confirmed through bitter experience. The more that tree-shaking,
declarations, application-specific foreign or simplified versions of
standard functions, etc. are required for acceptable performance and
actually succeed in achieving it, the more evidence it is that lisp
was not really the best choice for that particular application (even
though there are applications for which it is, in fact, the best
choice due to considerations as in 2, above) and the more likely it is
that lisp will make one less, rather than more, productive.

Several people have suggested in this thread that one code one's
mainline application modules in C or C++, and use a small, easily
extensible version of lisp as a "shell" or "macro" language to glue
the pieces together and provide the end-user programmability features
which are one of lisp's greatest assets. This seems to me to be an
ideal compromise for applications where lisp's performance and
resource requirements are unacceptable to the main application but
where it is still desired to retain at least some of lisp's superior
features.
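
(As a sketch of how small such an extension language can be, here is a
toy evaluator in Common Lisp - purely illustrative, not any product's
actual macro language - handling only numbers, variables, and prefix
arithmetic:)

;; Toy extension-language evaluator (illustrative only).
;; Example: (ext-eval '(+ 1 (* 2 3)) '()) => 7
(defun ext-eval (form env)
  (cond ((numberp form) form)
        ((symbolp form) (cdr (assoc form env)))
        ((consp form)
         (let ((args (mapcar (lambda (f) (ext-eval f env)) (cdr form))))
           (case (car form)
             (+ (apply #'+ args))
             (- (apply #'- args))
             (* (apply #'* args)))))))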

------------------------------------------------------------
Kirk Rader ki...@triple-i.com

Ken Anderson

Jul 25, 1994, 12:48:55 PM
In article <CtI6M...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:

...

> I agree that using lisp you come out way ahead in productivity and
> without paying too large a cost in executable size and performance if:
>
> 1. You are sufficiently aware of the performance and memory
> implications of common operations and idioms to know how to design for
> a sufficient degree of efficiency up front without having to spend too
> much time finding and fixing "performance leaks" after the fact. This
> will be highly dependent on the particular implementation in use,
> since in my experience most (commercial or otherwise) lisp
> implementations have their own idiosyncratic patterns of which
> operations cons excessively and which do not, and which functions are
> coded in an optimally efficient manner and which are better avoided in
> favor of home-grown lisp or foreign code.

I sympathize with this. I just replaced three calls to FIND with hand
written loops. However, my other uses of find seem fine in terms of the
amount of time they take. I find Lisp to be a little like a shell (just
type stuff and things start happening), a little like Mathematica (you can
express a fairly complicated program easily and let it take care of the
details), and a little like C (when you need performance you need to be
precise). Performance tuning is an expert activity in any language. So, i
keep thinking there should be an expert system out there to help us.
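
(For concreteness, the kind of rewrite meant above, as a sketch - the
vector type and test are invented, not the actual code:)

;; Generic search: FIND must handle any sequence and any test.
(defun lookup-generic (item v)
  (find item v :test #'eql))

;; Hand-written replacement that a compiler can open-code when the
;; argument type is known (the declaration is illustrative).
(defun lookup-by-hand (item v)
  (declare (type simple-vector v))
  (loop for x across v
        when (eql x item) return x))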

It sounds like you've had a lot of experience. Can you tell us what you
found Lisp not to be appropriate for, and why?

Thanks,
k
--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA

25381-geller

Jul 28, 1994, 10:18:42 AM
In article <CtI6M...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:

[...]

Having just finished my first major development effort in C++, after
years of C programming and lots of Lisp, I have to take exception to
some of the assertions Mr. Rader is making here, both stated and
implied:

- My experience using SoftBench, ObjectCenter, and GNU emacs, compared
to MCL, is that I figure I'm at least twice as productive using MCL --
and I'm much more familiar with C than Lisp. Once a project grows to
any 'reasonable' size, tools like ObjectCenter become unmanageable,
and slow as molasses.
 The other issue here is that it takes us over 5 hours, on an
otherwise empty Sparc 10, to rebuild our application. Even if only a
single source file is changed, a rebuild will often take over 3 hours,
since (for reasons that we don't really understand) many of our
templates will wind up getting rebuilt. This makes it very difficult
to fix (and unit test) more than a couple of small bugs in a single
day!

- The implication that C++ optimization is machine independent is just
plain wrong. Even across different compilers on the same machine, or
different libraries, there can often be significant variations in the
relative performance of operations such as heap allocation and
deallocation, function calls vs. inlined functions, single-
vs. double-precision arithmetic, switch/case vs. if/else, stdio
vs. streams vs. read/write, virtual vs. normal functions, etc. While
you can certainly argue that optimization is less necessary for C or
C++, I'm not sure that this is true in many commercial applications,
which usually spend most of their user CPU on string copying and
string scanning, rather than number crunching.

- One other major advantage of Lisp over C++ is the relative maturity
of the languages. We developed our application in C++ on SunOS 4.1.3
initially, using ObjectCenter's C++ compiler. When we moved to HP-UX
9.0, still with OC C++, we found numerous bugs related to 'edge'
conditions -- basically, all related to the fact that the order of
static object construction and destruction is completely undefined by
the ARM, and thus THE SAME COMPILER generated different orders on two
different systems.
And yes, it's true, we shouldn't have coded anything with any
implicit assumptions about the order of static object construction or
destruction, but what we've been reduced to is a combination of hacks
and backing away from objects in favor of 'built-in' types, whose
static construction is always done first-thing (i.e., const char *
instead of const String). To me, this is a bug in the language
definition, and a very serious one.

> Several people have suggested in this thread that one code one's
> mainline application modules in C or C++, and use a small, easily
> extensible version of lisp as a "shell" or "macro" language to glue
> the pieces together and provide the end-user programmability features
> which are one of lisp's greatest assets. This seems to me to be an
> ideal compromise for applications where lisp's performance and
> resource requirements are unacceptable to the main application but
> where it is still desired to retain at least some of lisp's superior
> features.

I'd argue for the reverse of this -- build one's mainline in Lisp, and
build the performance-critical pieces in C or C++. The less
statically-compiled, statically-linked stuff in the delivered
application, the better -- quicker edit/compile/build/test cycles,
easier patching in the field, and easier for your users to change or
extend what you've given them.

> ------------------------------------------------------------
> Kirk Rader ki...@triple-i.com

--
-----------------------------------------------------------------------------
Alan Geller phone: (908)699-8285
Bell Communications Research fax: (908)336-2953
444 Hoes Lane e-mail: a...@cc.bellcore.com
RRC 5G-110
Piscataway, NJ 08855-1342

Kirk Rader

Jul 29, 1994, 12:05:06 PM
In article <KANDERSO.94...@wheaton.bbn.com> kand...@wheaton.bbn.com (Ken Anderson) writes:

[...]

>
>It sounds like you've had a lot of experience. Can you tell us what you
>found Lisp not to be appropriate for, and why?
>
>Thanks,
>k
>--
>Ken Anderson
>Internet: kand...@bbn.com
>BBN ST Work Phone: 617-873-3160
>10 Moulton St. Home Phone: 617-643-0157
>Mail Stop 6/4a FAX: 617-873-2794
>Cambridge MA 02138
>USA


I have found that applications which require "real-time" performance,
either in the traditional sense used by embedded-systems programmers
or in the related sense implied by the requirements of a highly
interactive application like the paint program on which I currently
work, are difficult if not impossible to achieve in lisp. We have
achieved it (to the extent we have, yet) primarily through the most
drastic kinds of optimizations, discussion of which was the basis of
this thread. The reason for this is that lisp's freedom to
"involuntarily" allocate memory, combined with the consequent need to
invoke the garbage collector at what amount to random times, results
in a system that does not satisfy the basic requirement of a real-time
system - that you can predict that any given operation will not take
longer than some known length of time, and that the time it takes will
be short enough to keep up with the asynchronous events with which the
system must interact. Beyond this specific issue, any time you must
consider memory utilization or computational horse-power constraints,
you must carefully decide whether the "one-size fits all" philosophy
of lisp is really a good match for your application. A garbage
collector is an excellent and ideally efficient model for
memory-management if your application requires (by its nature, not as
a side-effect of its implementation language) a large number of small
"anonymous" dynamic memory allocations and it can afford the small or
large "hiccups" that result from it. If, as is actually more common,
your application needs to make few if any allocations "off the heap",
in C jargon, or can easily control the circumstances in which they
occur, e.g. using the C++ constructor/destructor model, then the
garbage-collector's overhead is pure loss. Whether it is fatally so
is a consequence of considerations like that cited above.
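
(The standard evasion, for what it is worth, is to allocate everything
before entering the time-critical section, so that the collector has no
reason to run - a sketch, with an invented buffer size and workload:)

;; Pre-allocate outside the time-critical loop (sizes hypothetical).
(defvar *scratch* (make-array 1024 :element-type 'single-float))

(defun process-samples (samples)
  (declare (type (simple-array single-float (*)) samples))
  ;; The inner loop reuses *scratch* and allocates nothing, so it
  ;; cannot trigger a collection at an unpredictable moment.
  (loop for i below (length samples)
        do (setf (aref *scratch* (mod i 1024))
                 (* 2.0 (aref samples i)))))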

Memory-management is only one area in which lisp is optimized for a
particular kind of problem. Common Lisp's model of function-calling,
including lexical closures and method combination, requires an
implementation to typically expend a great deal more effort just
dispatching to and returning from a subroutine than in a language like
C or C++. If you really need, or at least can make good use of, that
kind of power there is nothing comparable in more "conventional"
languages. As with memory-management, however, I have found that
C++'s model of overloaded functions is more than sufficient in most
instances (no pun intended), and it is actually rather rare that I
want or need to use CLOS's more elaborate object-oriented features.
If you really need to use a true mix-in style of programming, C++
can't cut it - but how often do you need it to an extent that
justifies the overhead that is imposed on _every_ call of _every_
function by a typical real-world implementation in order to support
it? Despite all that, my experience and intuition suggest that while
only a minority of applications really benefit from lisp's more
sophisticated features, only a minority really suffer an unacceptable
performance penalty due to them.
kernel of truth there is in the "cheap RAM and fast processors"
argument.
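
(For reference, the kind of mix-in meant here, as a minimal CLOS sketch
with invented class names:)

;; Minimal mix-in sketch (names invented).
(defclass persistent-mixin () ())
(defclass logged-mixin () ())
(defclass account (logged-mixin persistent-mixin) ())

(defgeneric save (object))
(defmethod save ((object persistent-mixin))
  (format t "~&writing ~S to disk~%" object))
(defmethod save :before ((object logged-mixin))
  (format t "~&logging save of ~S~%" object))

;; (save (make-instance 'account)) runs the :before method and then the
;; primary method via standard method combination - the mix-in behavior
;; C++ does not give you for free.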

But these kinds of performance issues represent only one aspect of the
problems to be faced when using a non-mainstream (whether it is
deservedly so, or not) programming language. In the application
domain in which I work, computer graphics, there are any number of
platform-vendor-supplied and third-party libraries and tools that we
either can not use at all or can only use in a less than satisfactory
way because they assume C/C++ as the application language and a
conventional suite of software-engineering support tools which
specifically does not include a lisp environment. I have worked in
enough other fields besides graphics to know that the same is true in
many other application domains, as well. To quote myself from a
private email message generated as a result of this same thread, I
personally have come to the conclusion that using lisp on a
"C-machine" like a PC or Unix workstation should be regarded as being
as anomalous (but not therefore necessarily inappropriate) as using
anything _but_ lisp would be on a lisp-machine.

------------------------------------------------------------
Kirk Rader ki...@triple-i.com

Jeffrey Mark Siskind

Jul 29, 1994, 6:25:26 PM
In this whole discussion of the relative merits and performance of Lisp vs. C
people are confusing merits of implementations with merits of languages and
performance limits of implementations vs. inherent performance limits (or lack
thereof) of languages.

It is an unfortunate collective design decision of the Lisp community to focus
on creeping featurism rather than on an implementation with competitive
performance. So much so to the point of people trying to justify inadequate
performance as a good thing with arguments like

- people's time is more valuable than computer time
- or the 80/20 rule (don't worry about performance for 80%, only for the 20%
critical section)

And people try to justify the coding tricks needed to get good performance

- adding in declarations
- tricks to avoid boxing floats

as good Lisp style.
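
(For readers who have not seen these tricks, a representative sketch -
the function itself is invented:)

;; Declaration-laden "fast" Lisp of the kind being criticized.
(defun dot (x y)
  (declare (type (simple-array double-float (*)) x y)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length x) sum)
      (incf sum (* (aref x i) (aref y i))))))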

There is no reason why a compiler couldn't do global analysis to determine the
possible types of all expressions and compile code as efficient as C even
without any declarations. It is possible to do this without losing safety. It is
even possible to do it in an incremental fashion within a classic
read-eval-print loop based development environment. There is also no reason
why a compiler couldn't safely and efficiently use unboxed representations for
expressions known to be floats, or even double floats or complex numbers. There
is also no reason why a compiler couldn't do global analysis to determine the
lifetime of allocated objects and use static techniques to reclaim storage
instead of garbage collection. But the community is full of people who try to
justify garbage collection as performing better than C's static allocation.
Such justification discourages implementors from exploring alternatives to
garbage collection or alternatives to be used along side with garbage
collection for special cases that the compiler can determine. There is also no
reason why a compiler couldn't determine at compile time the method to be
dispatched by a given generic function call. One can have a uniform language
that doesn't make a distinction between compile-time and run-time dispatch and
have a compiler that safely and efficiently makes those distinctions. One can
also have a development environment that informs the programmer of the
efficiency ramifications of the code, i.e. telling the programmer where
run-time GC, run-time dispatching, boxed numbers and the like were used.
While such things exist to some extent today, they are grossly unusable. I
envision such information provided through an editor back annotated into the
source code with interactive ways for a programmer to converse with a compiler
to add additional declarations only when necessary and to have the compiler
remember such discussions with the programmer.

Now all of this is not very difficult to do. Many of the ideas have been
floating around theoretically in the programming language community for years.
(Like abstract interpretation of compile-time reference counts to eliminate
garbage collection.) Some, like static type inference, have been implemented in
widely used programming languages like ML.

But rather than working on such a compiler, the Lisp community has three
classes of people:
- those who are building yet another byte-code interpreter to add to the
dozens already available
- those adding creeping featurism to the language (like MOP, DEFSYSTEM,
CLIM, ...)
- those adding kludge upon kludge on top of existing compilers rather than
building one from the ground up on solid up-to-date principles.

Things don't have to be the way they are.
--

Jeff (home page http://www.cdf.toronto.edu/DCS/Personal/Siskind.html)

Kirk Rader

Jul 30, 1994, 3:57:55 PM


a...@cc.bellcore.com (Alan Geller) writes:

[...]

>Having just finished my first major development effort in C++, after
>years of C programming and lots of Lisp, I have to take exception to
>some of the assertions Mr. Rader is making here, both stated and
>implied:
>
>- My experience using SoftBench, ObjectCenter, and GNU emacs, compared
>to MCL, is that I figure I'm at least twice as productive using MCL --
>and I'm much more familiar with C than Lisp. Once a project grows to
>any 'reasonable' size, tools like ObjectCenter become unmanageable,
>and slow as molasses.
> The other issue here is that it takes us over 5 hours, on an
>otherwise empty Sparc 10, to rebuild our application. Even if only a
>single source file is changed, a rebuild will often take over 3 hours,
>since (for reasons that we don't really understand) many of our
>templates will wind up getting rebuilt. This makes it very difficult
>to fix (and unit test) more than a couple of small bugs in a single
>day!


This sounds like a case either of a novice C++ programmer's
predictable problems with using templates effectively or a broken
implementation. Either way, my experience has been that Common Lisp
implementations are more susceptible to both kinds of problems than
C++ implementations, primarily because C++ attempts so much less than
CL. I have listed any number of specific "horror stories" encountered
when trying to develop commercial-quality software using third-party
commercial CL development platforms in previous postings and private
email that they generated, but I will repeat them if requested.


>
>- The implication that C++ optimization is machine independent is just
>plain wrong. Even across different compilers on the same machine, or
>different libraries, there can often be significant variations in the
>relative performance of operations such as heap allocation and
>deallocation, function calls vs. inlined functions, single-
>vs. double-precision arithmetic, switch/case vs. if/else, stdio
>vs. streams vs. read/write, virtual vs. normal functions, etc. While
>you can certainly argue that optimization is less necessary for C or
>C++, I'm not sure that this is true in many commercial applications,
>which usually spend most of their user CPU on string copying and
>string scanning, rather than number crunching.


I never suggested that C++ optimization was "machine independent" (or
implementation independent, either.) I do state emphatically that one
must typically spend less time optimizing C++ code than lisp code
since 1) the typical C++ compiler does a better job of optimizing on
its own than the typical lisp compiler and 2) the much smaller
run-time infrastructure assumed by C++ just doesn't give nearly as
much room for implementations and naive programmers using them to do
things in spectacularly suboptimal ways. Since, to paraphrase
Heinlein, it is never safe to underestimate the power of human
stupidity :-), that doesn't mean that either the C++ vendor or the
programmer can't still manage to make it so that, for example,
changing one small line in one function causes a 3-hour rebuild. In
general, though, my experience has been that this sort of problem is
more common in implementations of lisp-like languages than C-like ones
where the size of the feature space and the myth that it is easy to go
back after the fact and just tweak the slow parts of an application
conspire to encourage sloppiness on the part of implementors and
programmers alike.

The trade-off to using this minimalist approach to language and
run-time features is, of course, that if you find you really need a
feature that is built in to a more elaborate language like CL, then
you must either find an existing library implementation or write one
yourself. One factor frequently left out of the debate on this issue is
recognition of the fact that due to the greater general acceptance
(deserved or not) of C and C++, there is a greater availability of
both commercial and shareware libraries for everything from very
application-level activities like serial communications to more
system-level activities like memory-management. When you add all of
the features that are easily obtainable in this way to those built in
to C++, it is not really as poor and benighted an environment as many
regular contributors to this newsgroup would have one believe.


>
>- One other major advantage of Lisp over C++ is the relative maturity
>of the languages. We developed our application in C++ on SunOS 4.1.3
>initially, using ObjectCenter's C++ compiler. When we moved to HP-UX
>9.0, still with OC C++, we found numerous bugs related to 'edge'
>conditions -- basically, all related to the fact that the order of
>static object construction and destruction is completely undefined by
>the ARM, and thus THE SAME COMPILER generated different orders on two
>different systems.
> And yes, it's true, we shouldn't have coded anything with any
>implicit assumptions about the order of static object construction or
>destruction, but what we've been reduced to is a combination of hacks
>and backing away from objects in favor of 'built-in' types, whose
>static construction is always done first-thing (i.e., const char *
>instead of const String). To me, this is a bug in the language
>definition, and a very serious one.


While I can sympathize with the frustration that could be caused from
discovering this as the cause of a seemingly mysterious bug, possibly
after some considerable effort chasing blind alleys, I cannot really
take seriously the assertion that this is such a major bug in the
language definition as you make. First of all, it does state quite
explicitly in the ARM that once the default initialization of all
static objects to zero is complete, "No further order is imposed on
the initialization of objects from different translation units." (pg.
19, near the bottom) It then goes on to give references to the
sections which define the ordering rules on which you can depend for
local static objects. So your observation that you shouldn't have
relied on any particular order is quite correct. There are also,
interspersed throughout the annotation sections of the ARM and
throughout the C++ literature in general, discussions of why that
particular class of features - deferring to the implementation exactly
what steps are taken and in what order when launching the application
- exists and how to avoid the problems it can create.

There can be any amount of debate about the merits of any particular
feature of any particular language. Consider the recent long thread
in this newsgroup on NIL vs '() vs boolean false in various dialects
of lisp. I believe the assertion that there are more such problems
with the C++ specification than with the CL specification is simply
false, again primarily due to the fact that it is so much smaller.

What I believe you actually are objecting to is the fundamentally
different set of principles guiding the initial design and subsequent
evolution of the two families of languages. Lisp began as an academic
exercise in how to apply a certain technique from formal linguistics -
the lambda calculus - to the design of a programming language as a way
of exploring the possibilities inherent in the use of higher-order
functions. As such, theoretical purity and semantic "correctness" were
more important than practical considerations like performance and
conservation of machine resources. C, on the other hand, was
originally conceived as a special-purpose "middle-level" programming
language designed for the express purpose of writing a portable
operating system - namely Unix. Retaining assembly language's
affinity for highly optimal hardware-oriented "hackery" was considered
more important than theoretical rigor or syntactic elegance. Decades
later, the two families of languages have evolved considerably but
they each still retain an essentially dissimilar "flavor" that is the
result of their very different origins, and each retains features that
make it a better choice of an implementation language for certain
kinds of applications.


>


> Several people have suggested in this thread that one code one's
> mainline application modules in C or C++, and use a small, easily
> extensible version of lisp as a "shell" or "macro" language to glue
> the pieces together and provide the end-user programmability features
> which are one of lisp's greatest assets. This seems to me to be an
> ideal compromise for applications where lisp's performance and
> resource requirements are unacceptable to the main application but
> where it is still desired to retain at least some of lisp's superior
> features.
>
>I'd argue for the reverse of this -- build one's mainline in Lisp, and
>build the performance-critical pieces in C or C++. The less
>statically-compiled, statically-linked stuff in the delivered
>application, the better -- quicker edit/compile/build/test cycles,
>easier patching in the field, and easier for your users to change or
>extend what you've given them.


Again, only assuming that your application can afford the performance
penalties that these features entail - in which case it would not be
in the class of applications to which I explicitly referred. In our
own case, using exactly the strategy you and so many others advocate
has failed spectacularly to achieve size and performance behavior in
our products comparable to that of our competitors, which are
universally written in some combination of C and C++.


>
> ------------------------------------------------------------
> Kirk Rader ki...@triple-i.com
>
>--
>-----------------------------------------------------------------------------
> Alan Geller phone: (908)699-8285
> Bell Communications Research fax: (908)336-2953
> 444 Hoes Lane e-mail: a...@cc.bellcore.com
> RRC 5G-110
> Piscataway, NJ 08855-1342


------------------------------------------------------------
Kirk Rader ki...@triple-i.com

Henry G. Baker

Aug 1, 1994, 12:34:42 AM
In article <QOBI.94Ju...@qobi.ai> Qo...@CS.Toronto.EDU writes:
>There is no reason why a compiler couldn't do global analysis to determine the
>possible types of all expressions and compile code as efficient as C even
>without any declarations. It is possible to do this without losing safety.

This isn't quite true. There are very good theoretical reasons why
global analysis in Lisp is much less satisfactory than in other
languages. But these stem from the _power_ of Lisp, so one wouldn't
want to give it up. The consensus seems to be that on-the-fly
incremental compilation is probably the best near-term answer, where
the compilation of (a version of) a function is delayed until we get
some idea of what types of arguments it should expect. The SELF
language/implementation is a good example of this style of
compilation.

>There
>is also no reason why a compiler couldn't do global analysis to determine the
>lifetime of allocated objects and use static techniques to reclaim storage
>instead of garbage collection.

Ditto. It is highly likely that the global analysis for the purpose of storage
allocation is at least as hard as, if not harder than, the problem of global
analysis for type determination. Even ML type inference is deterministic
exponential time complete, and ML is much less polymorphic than Lisp.

>But the community is full of people who try to
>justify garbage collection as performing better than C's static allocation.
>Such justification discourages implementors from exploring alternatives to
>garbage collection or alternatives to be used along side with garbage
>collection for special cases that the compiler can determine.

In many cases GC _does_ perform better than C's static allocation.
This advantage is over and above the important service of avoiding
dangling references. Most C/C++ programmers are quite amazed by this --
as if finding out about sex for the first time after being kept ignorant
about it by their parents...

>There is also no
>reason why a compiler couldn't determine at compile time the method to be
>dispatched by a given generic function call.

There are very good theoretical and practical reasons why this is not
to be. See the discussion above. Also, if one is intent on composing
old compiled code with new compiled code, then there have to be some
facts about the new compiled code that the old compiled code wasn't
privy to, and therefore can't take advantage of.

>Now all of this is not very difficult to do. Many of the ideas have been
>floating around theoretically in the programming language community for years.
>(Like abstract interpretation of compile-time reference counts to eliminate
>garbage collection.) Some, like static type inference, have been implemented in
>widely used programming languages like ML.

It's easier in ML, but still not easy.

------

One of the things people forget about Lisp's large footprint is that
it typically includes the compiler, the entire subroutine library, an
editor, and a good fraction of the help files. The US DoD budget can be
made to look smaller, too, by separating out the Army, the Air Force, the
Navy, the NSA, ......


Kris Karas

Aug 1, 1994, 10:26:18 AM
In article <CtpM0...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
>I have found that applications which require "real-time" performance,
>either in the traditional sense used by embedded-systems programmers
>or in the related sense implied by the requirements of a highly
>interactive application like the paint program on which I currently
>work, are difficult if not impossible to achieve in lisp.

Oh, rubbish. This is true in an apples to oranges comparison;
specifically, if you allow one language to use machine-specific
libraries and code but not the other, it's hard to have a fair
comparison. I can't think of many C programs that don't begin with
#include <sys/xxxx.h> where xxxx is some un*x-specific hack. Device
control. Asynchronous interrupts. Process scheduling. Low-level file
system mangling. These tasks are difficult (e.g. slow) if not
impossible to deal with when the language you're using doesn't have
any standard set of libraries you can call upon. C programs can
practically be made machine-specific too, in the sense that you can
code your programs to generate just a line or two of assembler for
each C subexpression. (Heck, I remember when C was referred to as a
high-level assembler.) "a += b" easily converts to a single
instruction in many implementations.

So, fighting apples to apples now, and using some machine-specific
code, let's see how "slow" this lisp program is compared to its C
counterpart. The task: copy from one wired array to another (wired
means that the storage management system has been told to not swap the
array to disk [virtual memory]). To make the program slightly smaller
for the sake of net.bandwidth, a lot of the setup code will be
simplified (no multi-dimensional arrays, etc), we'll assume the arrays
are integer multiples of 4 words long and that they're the same size.

;; Point block registers 1 and 2 at the physical addresses of the two
;; wired arrays, then copy four words per loop iteration using the
;; Ivory's block-read/block-write instructions.
(defun copy-a-to-b (a b)
  (sys:with-block-registers (1 2)
    (setf (sys:%block-register 1)
          (sys:%set-tag (locf (aref a 0)) sys:dtp-physical)
          (sys:%block-register 2)
          (sys:%set-tag (locf (aref b 0)) sys:dtp-physical))
    (loop repeat (ash (length a) -2)   ; array length / 4
          do (let ((a (sys:%block-read 1 :prefetch t))
                   (b (sys:%block-read 1 :prefetch t))
                   (c (sys:%block-read 1 :prefetch nil))
                   (d (sys:%block-read 1 :prefetch nil)))
               (sys:%block-write 2 a)
               (sys:%block-write 2 b)
               (sys:%block-write 2 c)
               (sys:%block-write 2 d)))))

So, we've used lisp and not assembler. But tell me, how big is this
program? How long does it take to execute? Does it cons? Answer:
It's 16 machine instructions long, 9 of which are within the loop.
For a 4096 byte array (1024 32-bit words) it takes 128 microseconds to
execute. You can embed it within an interrupt handler, if so desired,
as it does not allocate any static memory; it uses five locations on
the data stack.

Embedded-application lisp code is easy to write; it's very fast if you
allow yourself some knowledge of the machine upon which it is being
compiled, and is often available if you allow yourself to use
platform-specific code. The above was, of course, the code for a
Symbolics Ivory processor, something of which I've dealt with a lot:
I wrote the embedded SCSI driver for the NXP1000, a large and complex
driver that has its own mini lisp compiler for the microcoded NCR
SCSI hardware.
--
Kris Karas <k...@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
(setq *disclaimer* "I barely speak for myself, much less anybody else."
*conformist-numbers* '((AMA-CCS 274) (DoD 1236))
*bikes* '((RF900RR-94) (NT650-89 :RaceP T) (CB750F-79 :SellP T)))

Kirk Rader

Aug 1, 1994, 11:14:42 AM
In article <QOBI.94Ju...@qobi.ai> Qo...@CS.Toronto.EDU (Jeffrey Mark Siskind) writes:

>In this whole discussion of the relative merits and performance of Lisp vs. C
>people are confusing merits of implementations with merits of languages and
>performance limits of implementations vs. inherent performance limits (or lack
>thereof) of languages.
>
>It is an unfortunate collective design decision of the Lisp community to focus
>on creeping featurism rather than on an implementation with competitive
>performance. So much so to the point of people trying to justify inadequate
>performance as a good thing with arguments like
>
>- people's time is more valuable than computer time
>- or the 80/20 rule (don't worry about performance for 80%, only for the 20%
> critical section)
>
>And people try to justify the coding tricks needed to get good performance
>
>- adding in declarations
>- tricks to avoid boxing floats
>
>as good Lisp style.


These are all apt criticisms of lisp culture, IMHO. One could make
similar criticisms of some of the rationalizations one hears from the
C/C++ community of the requisite hacks needed to avoid some of the
pitfalls into which a programmer can fall using that kind of language.
I take your more general point to be that trying to justify as virtues
those sins one is forced to commit by a particular implementation of a
particular language is counter-productive. I couldn't agree more.


If you said "no reason in principle" that languages and compilers
couldn't be designed in the way you describe, I would entirely agree.
(Modulo any reservations I might feel about the theoretical
possibility of doing the 100% complete control-flow analysis necessary
to achieve some of the optimizations to which you refer automatically
at compile time, not to mention the necessity to frequently go back
and revisit previously made decisions as incremental changes are made
to the system. If people think using C++ templates too easily causes
compile-time performance problems...!) The fact is, however, that
programmers must make choices today using today's tools. We can't
just stop all development and wait for some hypothetical future
language to come along. In the mean time, the best we can do is
choose the existing language that is best suited to the particular
task and make whatever tradeoffs that choice entails.


>Now all of this is not very difficult to do. Many of the ideas have been
>floating around theoretically in the programming language community for years.
>(Like abstract interpretation of compile-time reference counts to eliminate
>garbage collection.) Some, like static type inference, have been implemented in
>widely used programming languages like ML.
>


"Not very difficult to do"...? As you said yourself, these are issues
of ongoing language research.

And shouldn't there have been a smiley beside the description of ML as
being "widely used"? :-)


>But rather than working on such a compiler, the Lisp community has three
>classes of people:
>- those who are building yet another byte-code interpreter to add to the
>dozens already available
>- those adding creeping featurism to the language (like MOP, DEFSYSTEM,
>CLIM, ...)
>- those adding kludge upon kludge on top of existing compilers rather than
>building one from the ground up on solid up-to-date principles.
>
>Things don't have to be the way they are.
>--
>
> Jeff (home page http://www.cdf.toronto.edu/DCS/Personal/Siskind.html)


Not only don't things have to be the way they are, but I don't expect
them to stay that way forever. As existing languages continue to
evolve and new languages are developed, I would expect that the
boundaries of the sets of which applications are best implemented
using what languages to shift and possibly begin to overlap more than
they do today. While I understand and share to some extent the
frustration which you express, I don't think the lisp community is
quite as moribund as you describe and some future dialect of lisp
incorporating some at least of the features you propose will probably
be among those from which a programmer will be able to choose.

------------------------------------------------------------
Kirk Rader ki...@triple-i.com

Martin Rodgers

Jul 31, 1994, 12:35:44 PM
In article <QOBI.94Ju...@qobi.ai>

Qo...@CS.Toronto.EDU "Jeffrey Mark Siskind" writes:

> In this whole discussion of the relative merits and performance of Lisp vs. C
> people are confusing merits of implementations with merits of languages and
> performance limits of implementations vs. inherent performance limits (or lack
> thereof) of languages.

This is a point that I frequently find myself wanting to make, esp. when
I see a comment about "Lisp", as if all dialects are the same, and as if
all implementations of a dialect are the same. If they were, vendors
would have a hard time competing with each other!



> It is an unfortunate collective design decision of the Lisp community to focus
> on creeping featurism rather than on an implementation with competitive
> performance. So much so to the point of people trying to justify inadequate
> performance as a good thing with arguments like
>
> - people's time is more valuable than computer time
> - or the 80/20 rule (don't worry about performance for 80%, only for the 20%
> critical section)

Agreed. I _do_ worry about performance, as I have a pretty slow machine,
even by non-workstation standards. It's now too small (not enough RAM)
and too slow (only a 20 MHz 386) to run VC++. It _can_ still run a few
commercial Lisps, like Allegro CL/PC. I find it very odd that programmers
using VC++ believe that their environment and compiler are so superior to
_any_ Lisp, and yet they demonstrate how little they know about Lisp by
referring to myths that might only apply to Lisps from 15+ years ago.



> And people try to justify the coding tricks needed to get good performance
>
> - adding in declarations
> - tricks to avoid boxing floats

I'm happy to use type declarations. Did you mean some other kind?
While I have a choice, I'd like declarations to be available, but
not required. A popular language like C++ requires them in order to
just compile, never mind compile _well_. (That's usually done with
the compilers I know by using command line flags. (-: )

> as good Lisp style.

And so little C code I see seems to have _any_ style. :-)



> There is no reason why a compiler couldn't do global analysis to determine the
> possible types of all expressions and compile code as efficient as C even
> without any declarations. It is possible to do this without losing safety. It is
> even possible to do it in an incremental fashion within a classic
> read-eval-print loop based development environment. There is also no reason

I'd be happy if a "production" compiler were available for the final code,
so that it could do some extra work, such as global analysis, if that can
produce better code. Would most function calls need to use late binding,
for example? If a function won't be redefined at runtime - which I can
be certain of in all my programs - then are there any Lisp compilers
that can exploit that? Should they need declarations?
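
(One standard answer, sketched with invented function names: Common
Lisp lets you promise this per function with an INLINE proclamation,
trading away runtime redefinability:)

(declaim (inline scale))
(defun scale (x) (* 2 x))

(defun use-scale (v)
  ;; Direct calls to SCALE here may be open-coded; redefining SCALE
  ;; later will not affect already-compiled callers.
  (loop for x across v collect (scale x)))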

> Things don't have to be the way they are.

I also feel this way. My hope is that Dylan systems may offer better
compilers and environments to programmers, and make a distinction
between development by the programmer and runtime for the user.
I also hope that Dylan won't be the only Lisp-like language to do
this.

--
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind

Alan Gunderson

Aug 1, 1994, 6:40:31 PM
In article <QOBI.94Ju...@qobi.ai>
Qo...@CS.Toronto.EDU "Jeffrey Mark Siskind" writes:

> There is no reason why a compiler couldn't do global analysis to determine the
> possible types of all expressions and compile code as efficient as C even
> without any declarations. It is possible to do this without losing safety. It is
> even possible to do it in an incremental fashion within a classic
> read-eval-print loop based development environment.

The CLiCC (Common Lisp to C Compiler) by Goerigk, Hoffmann, and Knutzen
at Christian-Albrechts-University of Kiel does global type analysis
as it generates a C program equivalent of a Lisp program.
With only a few days of modifications to make 13,000 lines of Lisp
code compliant with the Common Lisp subset supported by CLiCC, the
system successfully transformed the code into an 860,000 byte executable.
The CLiCC static libraries rather than the shared libraries were used
in the linking. The generated C code was 15,200 lines of code.

Thus, with a system like CLiCC, the Lisp development environment can
be used to develop a program, which can then be converted to C for
deployment. In this way the "best" of each language's capabilities can
be exploited.

> But rather than working on such a compiler, the Lisp community has three
> classes of people:
> - those who are building yet another byte-code interpreter to add to the
> dozens already available
> - those adding creeping featurism to the language (like MOP, DEFSYSTEM,
> CLIM, ...)
> - those adding kludge upon kludge on top of existing compilers rather than
> building one from the ground up on solid up-to-date principles.

CLiCC was built from the ground up on solid up-to-date principles. The
CLiCC documentation contains a nice discussion of compiler research and
techniques that influenced the design of CLiCC. These include work on
the Scheme to C compiler done by DEC CRL and compiler techniques used
in functional languages such as ML.

Thus, there are people in the Lisp community working on some great
post-development tools to support Lisp programmers. Tools such as
CLiCC should encourage the community to internalize some of Siskind's
points and to develop and extend such tools further.

--- AlanG

John B. Plevyak

Aug 1, 1994, 8:06:48 PM
Henry G. Baker (hba...@netcom.com) wrote:

: This isn't quite true. There are very good theoretical reasons why
: global analysis in Lisp is much less satisfactory than in other
: languages. But these stem from the _power_ of Lisp, so one wouldn't
: want to give it up. The consensus seems to be that on-the-fly
: incremental compilation is probably the best near-term answer, where
: the compilation of (a version of) a function is delayed until we get
: some idea of what types of arguments it should expect. The SELF
: language/implementation is a good example of this style of
: compilation.

On the contrary, I would say that there are no reasons why global
analysis need be less "satisfactory" for Lisp. It is true that
the language contains features that are difficult to analyze, but
I would be satisfied with less from an analysis of a program which
uses them. In most cases (excluding things like eval) static
analysis and compile time optimizations (and possibly profiling
feedback) can do a better job than on-the-fly incremental compilation.
This is because the compiler can globally restructure control flow and
change data representations.

In particular, if the global analysis builds interprocedural control
and data flow graphs, it can clone subtrees and specialize these with
respect to the data they operate on. In addition, the global data
flow information can be used to specialize the physical layout of
structures/objects and then to replicate and specialize all code
which operates on them.

My paper describing such an analysis and applications will appear
in OOPSLA:

Precise Concrete Type Inference for Object-Oriented Languages
http://www-csag.cs.uiuc.edu/papers/ti-oopsla94.ps

This analysis and the optimizations have been implemented for a
language with some of the difficult bits from Lisp: untyped,
first class selectors (functions), continuations, and
messages (essentially apply).

: >There is also no
: >reason why a compiler couldn't determine at compile time the method to be
: >dispatched by a given generic function call.

: There are very good theoretical and practical reasons why this is not
: to be. See the discussion above. Also, if one is intent on composing
: old compiled code with new compiled code, then there have to be some
: facts about the new compiled code that the old compiled code wasn't
: privy to, and therefore can't take advantage of.

Again, the above analysis computes a safe approximation which is very
accurate. Combined with cloning of subtrees to specialize for
classes containing instance variables of different types, we have been
able to statically bind the vast majority of call sites (>99% in many cases).

Such analyses are expensive, and require the entire program on which to
operate, but they are not that much more expensive than g++ -O2 :)

Incremental on-the-fly compilation is the best bet for incremental
development, debugging and fast turn around, but when you are ready
to build the final version, good global analysis can enable many
optimizations.

--
John Plevyak (ple...@uiuc.edu) 2233 Digital Computer Lab, (217) 244-7116
Concurrent Systems Architecture Group
University of Illinois at Urbana-Champaign
1304 West Springfield
Urbana, IL 61801
<A HREF="http://www-csag.cs.uiuc.edu">CSAG Home Page</A>

Kris Karas

Aug 2, 1994, 9:32:18 AM
Henry G. Baker writes:
>In article <QOBI.94Ju...@qobi.ai> Qo...@CS.Toronto.EDU writes:
>>But the community is full of people who try to justify garbage
>>collection as performing better than C's static allocation.
>In many cases GC _does_ perform better than C's static allocation.
>This advantage is over and above the important service of avoiding
>dangling references. Most C/C++ programmers are quite amazed by this --
>as if finding out about sex for the first time after being kept ignorant
>about it by their parents...

Heh heh. Good point. :-)

Full garbage collection takes a long time, but it has zero overhead
when memory is allocated. Thus lots of little creates don't suffer
any memory-subsystem-invocation penalties. C's memory allocator
doesn't take a lunch break every so often as Lisp's GC does, but it
spends lots of time making small, incremental reorganizations whenever
a small chunk of memory is allocated or deallocated; add to that the
function calling overhead, and the total time spent in C's allocator
can be greater.

And for those lisps that support ephemeral ("incremental" for lack of
a better lay term) garbage collection, frequently-manipulated pieces
of data get placed adjacent to one another in the machine's memory
space, greatly increasing the hit rate in the disk cache (and thus
reducing page thrashing) in the virtual memory system.
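
(A crude way to see the allocation-cost half of this for yourself - a
sketch, with an invented iteration count:)

;; Many small, short-lived allocations: the case where pointer-bump
;; allocation plus an ephemeral GC does well.
(defun churn ()
  (time (loop repeat 1000000
              collect (cons 1 2) into cells
              finally (return (length cells)))))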

Martin Rodgers

Aug 2, 1994, 9:11:33 AM
In article <Ctv3o...@triple-i.com> ki...@triple-i.com "Kirk Rader" writes:

> And shouldn't there have been a smiley beside the description of ML as
> being "widely used"? :-)

The statement would certainly amuse (or bemuse) most C/C++
programmers. :-) I can only guess at what a vendor might say.

Kirk Rader

Aug 2, 1994, 11:21:35 AM
In article <31j0ma$g...@hsdndev.harvard.edu> k...@enterprise.bih.harvard.edu (Kris Karas) writes:

[...]

>
>Oh, rubbish. This is true in an apples to oranges comparison;
>specifically, if you allow one language to use machine-specific
>libraries and code but not the other, it's hard to have a fair
>comparison. I can't think of many C programs that don't begin with
>#include <sys/xxxx.h> where xxxx is some un*x-specific hack. Device
>control. Asynchronous interrupts. Process scheduling. Low-level file
>system mangling. These tasks are difficult (e.g. slow) if not
>impossible to deal with when the language you're using doesn't have
>any standard set of libraries you can call upon. C programs can
>practically be made machine-specific too, in the sense that you can
>code your programs to generate just a line or two of assembler for
>each C subexpression. (Heck, I remember when C was referred to as a
>high-level assembler.) "a += b" easily converts to a single
>instruction in many implementations.
>
>So, fighting apples to apples now, and using some machine-specific
>code, let's see how "slow" this lisp program is compared to its C
>counterpart. The task: copy from one wired array to another (wired
>means that the storage management system has been told to not swap the
>array to disk [virtual memory]). To make the program slightly smaller
>for the sake of net.bandwidth, a lot of the setup code will be
>simplified (no multi-dimensional arrays, etc), we'll assume the arrays
>are integer multiples of 4 words long and that they're the same size.
>

[...]

>
>Embedded-application lisp code is easy to write; it's very fast if you
>allow yourself some knowledge of the machine upon which it is being
>compiled, and is often available if you allow yourself to use
>platform-specific code. The above was, of course, the code for a
>Symbolics Ivory processor, something of which I've dealt with a lot:
>I wrote the embedded SCSI driver for the NXP1000, a large and complex
>driver that has its own mini lisp compiler for the microcoded NCR
>SCSI hardware.
>--
>Kris Karas <k...@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
>(setq *disclaimer* "I barely speak for myself, much less anybody else."
> *conformist-numbers* '((AMA-CCS 274) (DoD 1236))
> *bikes* '((RF900RR-94) (NT650-89 :RaceP T) (CB750F-79 :SellP T)))


"Rubbish" yourself! Comparing the use of highly
implementation-specific lisp in a device driver for a machine which
was _designed_ to use lisp _as_ its assembly language to writing
application-level code on general-purpose hardware is your idea of an
"apples to apples" comparison? Do you really suggest implementing,
for example, a Unix device driver in lisp? Did you really not
understand my point that a language with a garbage-collector based
memory-management scheme with no (hardware assisted or otherwise)
real-time programming support is, by definition, not terribly useful
for applications which require continuous real-time response?

It seems to me that you actually make my point for me. C was designed
expressly for system-level hacking or anything else requiring intimate
interaction with the hardware. That is why C compilers typically do
compile to the degree of tightness to which you refer above. While it
may be possible, with sufficient effort and expertise, to achieve
nearly the same efficiency in a particular lisp implementation on a
particular platform using a sufficient degree of implementation- and
platform-specific hackery, it is ludicrous to suggest that the typical
application programmer will achieve that degree of efficiency using
lisp for the typical application in the typical implementation on the
typical platform. Or if they do manage, eventually, to create such
optimal code it will only be after having expended more rather than
less effort to do so than would have been expended using C - precisely
because of the existence of all of C's hardware-level libraries and
optimizations to which you refer. Proposing lisp-machine device
drivers as counter-examples of "embedded programming in lisp" is,
putting it mildly, missing the point.

One more time: If you want or need lisp's higher-level features for a
particular application and can afford the performance trade-offs they
entail, lisp is an ideal choice. If you don't really need lisp's
power for a particular application but still find lisp's performance
acceptable, then there is no real reason not to use it in that case,
either. Since in the case of an Ivory based machine the performance
trade-offs almost _always_ favor using lisp, the premise is vacuously
true. For most applications running on non-lisp-machines there _is_ a
real performance trade-off to using lisp, and so programmers should
weigh the costs and benefits of using a variety of different
implementation languages before choosing one.

------------------------------------------------------------

Kirk Rader

Kelly Murray

unread,
Aug 4, 1994, 6:45:36 PM8/4/94
to
In article <MARCOXA.94...@mosaic.nyu.edu>, mar...@mosaic.nyu.edu (Marco Antoniotti) writes:
|> In article <QOBI.94Ju...@qobi.ai> qo...@qobi.ai (Jeffrey Mark Siskind) writes:
|>
|> ...lots of stuff deleted about compilers.
|>
|> I agree that a Common Lisp compiler built from scratch and based upon
|> such principles would be desirable. But, given the current
|> circumstances, I am very happy to stick with CMUCL.
|>
|> ...

|>
|> But rather than working on such a compiler, the Lisp community has three
|> classes of people:
|> ...

|>
|> - those adding creeping featurism to the language (like MOP, DEFSYSTEM,
|> CLIM, ...)
|>
|> I cannot agree on this statement. Common Lisp LOST the lead in the GUI
|> field because the user community (and, above all, the vendors) did not
|> agree on CLUE/CLIO when it first became available. CLIM, which is not
|> available (that is: I cannot run it on CMUCL or GCL, hence it is not
|> available) could be a good thing.

I guess I'll bore people again, too, though perhaps I'll say something
different this time.

I agree with Siskind that Lisp could do much better to compete with C
by actually working on the areas that C people care about, rather
than just telling C people they care about the wrong things.

But I think the first mistake of Lisp users and vendors was (and is)
to compete against C at all. Lisp is a great high-level language,
suitable for doing high-level programming. It allows one to implement stuff
quickly and not worry about low-level details.

Who needs what Lisp offers??? Not people who implement file systems,
device drivers, graphics libraries, or other "low-level" code.
The right target is BUSINESS PEOPLE. These people use COBOL,
and now relational databases, SQL, 4GLs, FORMS, and god-knows-what-else
high-level, expensive, proprietary package, which lets them
quickly implement an application the Boss needed yesterday,
like how many hula hoops were sold in California last month, etc.

A Lisp-based system could serve this market very well.
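
For concreteness, a purely hypothetical sketch of the kind of query
such a system might make trivial -- REPORT-SALES and the record
layout are invented here for illustration, not taken from any real
product:

;; Sum the AMOUNT fields of the rows of TABLE that match PRODUCT,
;; STATE, and MONTH.  TABLE is a list of records of the form
;; (PRODUCT STATE MONTH AMOUNT).  All names are hypothetical.
(defun report-sales (table product state month)
  (let ((total 0))
    (dolist (row table total)
      (destructuring-bind (p s m amount) row
        (when (and (eq p product) (eq s state) (eq m month))
          (incf total amount))))))

;; (report-sales *sales* :hula-hoop :california :july)

The point isn't the ten lines of code; it's that the Boss's next
question is a one-line change rather than a new 4GL report.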

- Kelly Murray (k...@prl.ufl.edu) <a href="http://www.prl.ufl.edu">
-University of Florida Parallel Research Lab </a> 96-node KSR1, 64-node nCUBE


Flier

unread,
Aug 4, 1994, 2:14:51 PM8/4/94
to
Kris Karas (k...@enterprise.bih.harvard.edu) wrote:

: In article <CtpM0...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
: >I have found that applications which require "real-time" performance,
: >[...] are almost if not impossible to achieve in lisp.

: Oh, rubbish.

: So, fighting apples to apples now, and using some machine-specific

And why, pray tell, would I wish to write this nearly indecipherable
mess of Lisp code instead of 16 lines of perfectly readable assembler?
This does seem like the wrong tool for a simple task.

Greg

Lawrence G. Mayka

unread,
Aug 5, 1994, 5:56:29 PM8/5/94
to
In article <ASG.94Au...@xyzzy.gamekeeper.bellcore.com> a...@xyzzy.gamekeeper.bellcore.com (25381-geller) writes:

Not that CL is perfect, either, of course. The lack of finalization (a
destructor method) is somewhat annoying, although using macros like
with-open-file helps a bit. The existence of set-value can be a
temptation to do truly evil things.

I think most commercial CL implementations now have a finalization
capability, but I agree that we need a de facto standard interface to
it.
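
The with-open-file idiom generalizes to any resource via
UNWIND-PROTECT, which covers a good deal of what a destructor does.
A minimal sketch -- ACQUIRE-LOCK and RELEASE-LOCK are hypothetical
names, not part of any standard:

;; WITH-LOCK binds VAR to an acquired lock and guarantees the
;; release runs even on a non-local exit (throw, error, etc.).
(defmacro with-lock ((var lock) &body body)
  `(let ((,var (acquire-lock ,lock)))
     (unwind-protect
         (progn ,@body)
       (release-lock ,var))))

What it cannot do, of course, is run when the garbage collector
finally discards an object -- which is exactly the hole that
finalization fills.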

Paul F. Snively

unread,
Aug 5, 1994, 1:47:31 AM8/5/94
to
In article <Cu0w0...@rci.ripco.com>
gr...@ripco.com (Flier) writes:

> Kris Karas (k...@enterprise.bih.harvard.edu) wrote:
> : In article <CtpM0...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
> : >I have found that applications which require "real-time" performance,
> : >[...] are almost if not impossible to achieve in lisp.

[Response demonstrating a very fast machine-specific array fill
deleted]

> : So, we've used lisp and not assembler. But tell me, how big is this
> : program? How long does it take to execute? Does it cons? Answer:
> : It's 16 machine instructions long, 9 of which are within the loop.
> : For a 4096 byte array (1024 32-bit words) it takes 128 microseconds to
> : execute. You can embed it within an interrupt handler, if so desired,
> : as it does not allocate any static memory; it uses five locations on
> : the data stack.
>
> And why, pray tell, would I wish to write this nearly indecipherable
> mess of Lisp code instead of 16 lines of perfectly readable assembler?
> This does seem like the wrong tool for a simple task.

It's an existence proof that the original assertion--that writing
`real-time' code in Lisp is nearly or truly impossible--is a bald-faced
falsehood. If you are willing to use Lisp in the fashion that you
_must_ use C (that is, get down and dirty with the hardware, use (and
declare) types that are machine-word-size-and-byte-order specific,
etc.) then there's nothing to prevent you from writing `real-time'
code.

Now, you may find the case of writing pieces of device-driver code for
a Lisp Machine a contrived example. Since I happen to find that
argument fairly compelling myself, let me just point out that there is
a commercial real-time expert system shell, called G2 if memory serves
me correctly, written in Common Lisp and running on stock hardware.
It's being used, among other things, to control the Biosphere 2
environment.

> Greg


-----------------------------------------------------------------------
Paul F. Snively "Just because you're paranoid, it doesn't mean
ch...@shell.portal.com that there's no one out to get you."

25381-geller

unread,
Aug 5, 1994, 11:05:05 AM8/5/94
to
In article <CtrrG...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:



[...]

Well, I'd certainly agree that I was, at least, a novice C++
programmer, but I've been programming in C for about 13 years now, and
I've been doing OOP development in a variety of languages (Smalltalk,
CLOS, C, and PASCAL) for over 5 years. I didn't have nearly this level
of difficulty with my first significant Lisp project.

We also have a few very experienced C++ developers on our staff, so
it's not just me.

Here again, I'd disagree. For one thing, the wide availability of
tools in C and C++ means that one is constantly running into someone's
class library, theoretically production-proven code, that uses (say) a
yacc/lex-based compiler to parse a simple data structure. Recoding
this parser by hand provided a 4-fold speed-up in a speed-critical
part of our application; of course, it also forced us to completely
redevelop the entire class, since we didn't have source (the speed-up
required direct access to private member variables and adding a new
class variable, and you can't do that without source, even in a
derived class).

Or alternatively, we use three different vendor class libraries that
use three different string representations (char *, USL String, and a
private vstr class). The time (both programmer and CPU) involved in
converting between these representations is not trivial. If C++ had
more run-time infrastructure, then this wouldn't be an issue.

Or again, part of why it takes us over 5 hours to rebuild our
application is that it's big -- over 8 meg (stripped). If a base class
gets changed, every child class has to get recompiled -- and every
other file that includes the header. If we were using a dynamic
language, rebuilding the application would be a non-event.

The trade-off to using this minimalist approach to language and
run-time features is, of course, that if you find you really need a
feature that is built in to a more elaborate language like CL, then
you must either find an existing library implementation or write one
yourself. One factor frequently left out of the debate on this issue is
recognition of the fact that due to the greater general acceptance
(deserved or not) of C and C++, there is a greater availability of
both commercial and shareware libraries for everything from very
application-level activities like serial communications to more
system-level activities like memory-management. When you add all of
the features that are easily obtainable in this way to those built in
to C++, it is not really as poor and benighted an environment as many
regular contributors to this newsgroup would have one believe.

One of the biggest problems with C and C++ is the availability of
large numbers of incompatible, non-portable commercial and shareware
libraries that almost do everything you'd want, but not quite, and not
quite how you want, and optimized for a usage pattern different than
yours. If C++ allowed more flexible class modification, without
requiring source access, or if the subclassing facility were a bit
nicer, or if there were a MOP, then things would be much easier. But
what is the likelihood of this happening?

If the ARM specified that the result of adding two integers was
implementation-dependent, would that make it OK? If a language
requires major hacks and kludging in order to provide deterministic
start-up and shut-down behavior, then that language has a serious bug
in its specification.

This is not the only place where C++'s relative immaturity is
evident. The ARM more or less throws up its hands over the treatment
of temporary values. There is no ability to extend
existing classes by adding new methods without subclassing, and
subclassing introduces major new hackery:

class String; // defined in, say, USL standard components

class MyString : public String {
public:
    // various constructors and whatnot removed, to
    // protect the innocent

    MyString& rtrim();
    MyString& pad(int length, char padChar = ' ');
};

This looks easy, right? Except that you can't do the following:

MyString a("hello "), b("world");
MyString s = a + b;

unless you've written a MyString(String&) constructor, and even then
you're going to wind up copying the String in order to create a
MyString. Now, copying Strings isn't that big a deal (reference
counts!), but for some classes this can become quite expensive. (Yes,
it is possible to write a MyString::operator+(), etc., but this is a
hack, and messy, and bug-prone, and a maintenance nightmare).
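
For contrast, a minimal CLOS sketch of the same extension. New
operations are added to the existing class with DEFMETHOD -- no
subclass, no source access, and no conversion problem, because no
new type is introduced (RTRIM here is our own generic function, not
part of any vendor library):

;; Extend someone else's class without subclassing it.
(defgeneric rtrim (thing)
  (:documentation "Return THING with trailing spaces removed."))

(defmethod rtrim ((s string))
  (string-right-trim " " s))

(defmethod rtrim ((p pathname))   ; and it works on other classes too
  (rtrim (namestring p)))

;; (rtrim "hello world   ")  =>  "hello world"

Since the result of concatenating two strings is still a string, the
(a + b) problem above simply never arises.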

Not that CL is perfect, either, of course. The lack of finalization (a
destructor method) is somewhat annoying, although using macros like
with-open-file helps a bit. The existence of set-value can be a
temptation to do truly evil things. The existence of eight different,
slightly incompatible ways to do any single operation is a real
problem when learning (reminiscent of X windows), although I've found
that I actually program in a CL subset of, say, maybe 35-45% of the
language.

There can be any amount of debate about the merits of any particular
feature of any particular language. Consider the recent long thread
in this newsgroup on NIL vs '() vs boolean false in various dialects
of lisp. I believe the assertion that there are more such problems
with the C++ specification than with the CL specification is simply
false, again primarily due to the fact that the C++ specification is
so much smaller.
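
(For anyone who missed that thread, the point of contention fits in
two lines of standard CL:

(eq nil '())       ; =>  T   NIL and the empty list are one object
(if '() 'yes 'no)  ; =>  NO  and that object is boolean false

In Scheme, by contrast, #f is distinct from the empty list, and '()
counts as true in a conditional.)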

I disagree strongly. Again, Lisp or CL or Scheme are not perfect, but
since they've been used for a much longer time than C++
(specifically), and since far more people have had input into the
structure and semantics of the language than for C/C++, the problem
issues that remain are generally minor, peripheral issues (or
religious debates between different dialects), where for C to an
extent, and for C++ very strongly, there are many users of the
language who feel that it has serious if not fatal flaws in its
fundamental design and conception.

What I believe you actually are objecting to is the fundamentally
different set of principles guiding the initial design and subsequent
evolution of the two families of languages. Lisp began as an academic
exercise in how to apply a certain technique from formal logic -
the lambda calculus - to the design of a programming language as a way
of exploring the possibilities inherent in the use of higher-order
functions. As such, theoretical purity and semantic "correctness" were
more important than practical considerations like performance and
conservation of machine resources. C, on the other hand, was
originally conceived as a special-purpose "middle-level" programming
language designed for the express purpose of writing a portable
operating system - namely Unix. Retaining assembly language's
affinity for highly optimal hardware-oriented "hackery" was considered
more important than theoretical rigor or syntactic elegance. Decades
later, the two families of languages have evolved considerably but
they each still retain an essentially dissimilar "flavor" that is the
result of their very different origins, and each retains features that
make it a better choice of an implementation language for certain
kinds of applications.

I don't think John McCarthy was really concerned with lambda calculus,
at least not from the articles I've read. On the other hand, I was
certainly not there -- I was born the same year as Lisp, 1958 -- so
it's not unlikely that I'm way off base on this.

C, at least, has to me an intellectual coherence that makes it a
useful and usable language. I've written hundreds of thousands of
lines of C code, and while it has its warts -- and while I'd rather
write CL or Scheme -- it's OK. C++, on the other hand, seems to me to
be a hack pasted on top of a kludge, with some spaghetti thrown on
top. It wants to be all things to all people, without admitting to
any flaws -- it's an OOPL, it's procedural, it's dynamic, it's static,
it's a mess.


>
> Several people have suggested in this thread that one code one's
> mainline application modules in C or C++, and use a small, easily
> extensible version of lisp as a "shell" or "macro" language to glue
> the pieces together and provide the end-user programmability features
> which are one of lisp's greatest assets. This seems to me to be an
> ideal compromise for applications where lisp's performance and
> resource requirements are unacceptable to the main application but
> where it is still desired to retain at least some of lisp's superior
> features.
>
>I'd argue for the reverse of this -- build one's mainline in Lisp, and
>build the performance-critical pieces in C or C++. The less
>statically-compiled, statically-linked stuff in the delivered
>application, the better -- quicker edit/compile/build/test cycles,
>easier patching in the field, and easier for your users to change or
>extend what you've given them.

Again, only assuming that your application can afford the performance
penalties that these features entail - in which case it would not be
in the class of applications to which I explicitly referred. In our
own case, using exactly the strategy you and so many others advocate
has failed spectacularly in achieving comparable size and performance
behavior in our products to those of our competitors, which are
universally written in some combination of C and C++.

Why are there more performance penalties in one approach than in the
other? Build the mainline in Scheme, if you don't want the run-time
footprint of CL. Or in Dylan, or EuLisp, or whatever. I guess the
bottom line for me, though, whatever language your main line happens
to be written in, is do as much as you can dynamically, in Lisp or
Scheme or Python or Smalltalk or Self or SML or whatever.

Kirk Rader

unread,
Aug 8, 1994, 11:35:03 AM8/8/94
to


The barefaced falsehood is the assertion that it is easy or even
possible to achieve real-time behavior in any of the popular
commercial Common Lisp implementations available on stock hardware,
especially while retaining the features of lisp that are usually
presented as its advantages - abstraction, ease of use, ease of
debuggability, etc. I don't find the example of a lisp-machine device
driver being written in lisp "contrived", I merely consider it
irrelevant to the issues being discussed in the thread at the point at
which it was introduced. By definition, the architecture of a
lisp-machine is going to favor using lisp for most aspects of both
application and system-level code. It should be equally obvious that
both the hardware and the software architecture of systems designed
primarily to run Unix or other popular OS's themselves written in C
using the standard C libraries will generally favor using C-like
languages, other things being equal. Where other things are _not_
equal, such as applications which require the greater expressive power
of lisp more than they need optimal performance, lisp is a good
choice. But suggesting that lisp is just as reasonable a choice as
assembler or C for implementing things like device-drivers on typical
hardware / software configurations is simply ludicrous.

Note also that Unix itself is not particularly well-suited to
real-time applications. Adding the overhead of supporting the typical
lisp implementation's run-time system (especially its
memory-management mechanisms) to the problems already inherent in Unix
for the type of application under discussion only exacerbates the
problems.

Kirk Rader

Thomas M. Breuel

unread,
Aug 8, 1994, 2:11:39 PM8/8/94
to
In article <Cu0w0...@rci.ripco.com> gr...@ripco.com (Flier) writes:

Looks like assembler to me. Worse yet, it's completely unportable and
requires intimate familiarity with the particular system used as well
as details of the hardware and software environment that most users
just don't care about.

The need to write code like this in Lisp in order to achieve good
performance is precisely the reason why people prefer languages like C
for many tasks: writing equivalent, efficient C code is
straightforward and needs to rely only on portable primitives.

Thomas.

Marco Antoniotti

unread,
Aug 8, 1994, 6:39:45 PM8/8/94
to
In article <TMB.94Au...@arolla.idiap.ch> t...@arolla.idiap.ch (Thomas M. Breuel) writes:

Never seen such a thing as a "portable" assembler. Try to run MIPS code on
your 486 :)

and
requires intimate familiarity with the particular system used as well
as details of the hardware and software environment that most users
just don't care about.

The need to write code like this in Lisp in order to achieve good
performance is precisely the reason why people prefer languages like C
for many tasks: writing equivalent, efficient C code is
straightforward and needs to rely only on portable primitives.

Like the stream class of C++? It took the major C++ vendors three
releases before getting out something that would resemble the AT&T
stream class and run the examples in Lippman's book.

Moreover, the intermediate language used by the GNU GCC compiler (RTL)
looks a lot like...guess what?

Cheers
--
Marco Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab | room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU | e-mail: mar...@cs.nyu.edu

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
Bertholdt Brecht

Mike Haertel

unread,
Aug 9, 1994, 3:41:10 AM8/9/94
to
In article <MARCOXA.94...@mosaic.nyu.edu> mar...@mosaic.nyu.edu (Marco Antoniotti) writes:
> From: t...@arolla.idiap.ch (Thomas M. Breuel)
> The need to write code like this in Lisp in order to achieve good
> performance is precisely the reason why people prefer languages like C
> for many tasks: writing equivalent, efficient C code is
> straightforward and needs to rely only on portable primitives.
>
>Like the stream class of C++? It took the major C++ vendors three
>releases before getting out something that it would resemble the AT&T
>stream class, and that would run the examples on Lippman's book.

There is a big difference between C++ and C. C is not a semantic nightmare.

Anyway, you seem to have completely missed Thomas' point, that
the original poster's example of "efficient Lisp" wasn't Lisp at all.
I sure can't find it anywhere in CLtL.

I believe it's possible to make Lisp efficient, but dressing up
assembly language with lots of irritating silly parentheses is
*not* the way to go.

>Moreover, the intermediate language used by the GNU GCC compiler (RTL)
>looks a lot like...guess what?

GCC's RTL syntax may have lots of parentheses, but that's its only
connection with Lisp. So it's not clear to me why you made that remark.
It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
therefore using assembler dressed up in parentheses must be a
reasonable way to write efficient Lisp code." Bzzt! Time for a
reality check.
--
Mike Haertel <mi...@ichips.intel.com>
Not speaking for Intel.

Paul F. Snively

unread,
Aug 9, 1994, 10:13:48 AM8/9/94
to
In article <MIKE.94A...@pdx399.intel.com>
mi...@ichips.intel.com (Mike Haertel) writes:

> In article <MARCOXA.94...@mosaic.nyu.edu> mar...@mosaic.nyu.edu (Marco Antoniotti) writes:
> Anyway, you seem to have completely missed Thomas' point, that
> the original poster's example of "efficient Lisp" wasn't Lisp at all.
> I sure can't find it anywhere in CLtL.

The original poster's point seemed to me to have been that any
real-world Lisp implementation will allow you to do what is generally
necessary in order to talk directly to the hardware, which is what many
people are concerned about when they talk about `real-time'
constraints. It was also to point out that it's not necessarily any
harder to use registers and stack-frames to avoid garbage-collection in
cases where that's important.

To me, it's not a valid argument to say `C is great because it's
essentially a quasi-portable assembler that lets you efficiently bang
on the hardware,' and then yell `foul' when someone else says `Lisp is
great because it's a high-level language that will let you, if/when you
want, efficiently bang on the hardware.' I would say that the fact
that C _forces_ you to bang on the hardware, while Lisp merely allows
you to if you're willing to bypass all the abstraction away from the
hardware, is a compelling case in Lisp's favor _for most
non-device-driver one-megabyte-or-more application tasks_.

> I believe it's possible to make Lisp efficient, but dressing up
> assembly language with lots of irritating silly parentheses is
> *not* the way to go.

So far, no one has commented on G2, the hard real-time real-Lisp expert
system development tool. I may have to dig up old AI Expert/PC AI
magazine articles about it. From what I understand, in a nutshell,
they make heavy use of `resources' (that is, they maintain their own
free-lists of various types in order to avoid garbage collection) and
they use lots of type declarations in their code--which languages like
C again _force_ you to do anyway. So by using C-like memory-management
and type declarations, they win, and still get to use Lisp's other
great features.
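
For those who haven't seen the technique, a minimal sketch of such a
resource -- the names are invented here, and real implementations
(G2's among them) are considerably more elaborate:

;; A `resource' is just a free list of reusable objects, so the
;; inner loop never conses.  Assumes all buffers share one size.
(defvar *buffer-pool* '())

(defun allocate-buffer (size)
  "Pop a buffer from the pool, or make one if the pool is empty."
  (or (pop *buffer-pool*)
      (make-array size :element-type '(unsigned-byte 8))))

(defun free-buffer (buffer)
  "Return BUFFER to the pool for reuse."
  (push buffer *buffer-pool*)
  nil)

Pair ALLOCATE-BUFFER and FREE-BUFFER inside UNWIND-PROTECT, or
allocate once outside the time-critical loop, and the inner loop
never triggers the collector.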

> >Moreover, the intermediate language used by the GNU GCC compiler (RTL)
> >looks a lot like...guess what?
>
> GCC's RTL syntax may have lots of parentheses, but that's its only
> connection with Lisp.

I'm afraid that that's simply untrue. The connection with Lisp is on a
deep mathematical/logical level. Remember, RMS (Richard Stallman) is
one of the old MIT Lisp hackers. One of his significant claims to fame
is that when the Lisp Machines that MIT created went commercial in the
form of Symbolics, Inc. and Lisp Machines, Inc. being spun off by
various denizens of the MIT AI Lab, RMS would take every new release of
Genera that Symbolics put out, reverse engineer it, and reimplement the
results, which he would then provide to LMI. He sincerely believes
that technology should be freely available to anyone who wants it.

But I digress. RTL is related to Lisp inasmuch as they both derive
directly from the Lambda Calculus. As a theoretical point, it's been
understood for some time that if you're willing to express everything
about a program in terms of the Lambda Calculus, some wonderful
optimization opportunities arise. One reason that this remained a
theoretical consideration for so long is the difficulty inherent in
representing side-effects in the Lambda Calculus, and popular languages
such as C are notorious for their reliance upon side-effects. C adds
insult to injury by allowing aliases--that is, indirect side-effecting
through pointers.

Nevertheless, GCC manages to compile C to RTL, a Lambda-Calculus
derivative, at which point it can do many of the optimizations that one
would expect for a language based on the Lambda Calculus. It then
translates the RTL to assembly language for the target processor, and
hands the results off to the system's assembler.

Note that _all optimizations that GCC does are done to the RTL, not the
assembly language_. And GCC is still considered one of the best
optimizing C compilers around--generally better than the C compiler
that many vendors include with their system! The wiser vendors have
taken to including the latest GCC with their system--Sun Microsystems
and NeXT come to mind.

> So it's not clear to me why you made that remark.
> It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
> therefore using assembler dressed up in parentheses must be a
> reasonable way to write efficient Lisp code." Bzzt! Time for a
> reality check.

I believe that the point was that it's possible for a lisp-like
language (RTL) to generate efficient code, and that GCC is an existence
proof.

> --
> Mike Haertel <mi...@ichips.intel.com>
> Not speaking for Intel.

Jeff Dalton

unread,
Aug 9, 1994, 3:20:53 PM8/9/94
to
In article <MIKE.94A...@pdx399.intel.com> mi...@ichips.intel.com (Mike Haertel) writes:

>Anyway, you seem to have completely missed Thomas' point, that
>the original poster's example of "efficient Lisp" wasn't Lisp at all.
>I sure can't find it anywhere in CLtL.

Since when is Lisp the same as what's in CLtL?

>I believe it's possible to make Lisp efficient, but dressing up
>assembly language with lots of irritating silly parentheses is
>*not* the way to go.

I agree.

>>Moreover, the intermediate language used by the GNU GCC compiler (RTL)
>>looks a lot like...guess what?
>
>GCC's RTL syntax may have lots of parentheses, but that's its only
>connection with Lisp.

But what is Lisp here? What's in CLtL?

Jeff Dalton

unread,
Aug 9, 1994, 3:30:31 PM8/9/94
to
In article <TFB.94Au...@sorley.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:

>* Kirk Rader wrote:
>> By definition, the architecture of a
>> lisp-machine is going to favor using lisp for most aspects of both
>> application and system-level code. It should be equally obvious that
>> both the hardware and the software architecture of systems designed
>> primarily to run Unix or other popular OS's themselves written in C
>> using the standard C libraries will generally favor using C-like
>> languages, other things being equal.
>
>This turns out not to be true as far as I can tell. I'm not a
>compiler-design (Lisp or otherwise) expert, but Lisp compilers can do
>very well on RISC machines, and many of the old `lisp'
>architectures turned out to be not so good after all, or rather, they
>mesh well with the way people wrote lisp systems in the 70s, but they
>don't write them like that any more, they write them better.

I would agree that Lisp can do reasonably well on RISC machines,
but the point of Lisp machines was not just to make Lisp fast
but also to make it fast and safe at the same time and fast
without needing lots of declarations.

Recent Lisp implementations (especially CMU CL) have gone a fair
way towards making it easy to have safe, efficient code on RISC
machines, but it may always require a somewhat different way of
thinking. (Not a bad way, IMHO, but different from LM thinking
nonetheless.)

But what is this about "the way people wrote lisp systems in the 70s"?
What sort of thing do you have in mind? Lisps written in assembler
that could run in 16K 36-bit words? (Presumably not.)

Rob MacLachlan

unread,
Aug 9, 1994, 12:08:43 PM8/9/94
to
>From: ch...@shell.portal.com (Paul F. Snively)

>
>RTL is related to Lisp inasmuch as they both derive directly from the Lambda
>Calculus.

So far as I know, this is not true. I do know that the early papers about the
use of "Register Transfer Language" in portable back-ends make no mention of
any such semantic tie-in, and the actual implementation was also quite un-lispy
(based on string processing of a fairly conventional assembly format.)

I'm not familiar with GCC internals, but Stallman's decision to use the term
RTL doesn't suggest a radical new semantic foundation.

There has been some discussion (such as in Guy Steele's Lambda papers) of
using lambda and continuation passing style as the ultimate target independent
intermediate format (even for non-Lisp languages), but so far as I know CPS
has only been used in basically non-optimizing Scheme and ML compilers.

>I believe that the point was that it's possible for a lisp-like
>language (RTL) to generate efficient code, and that GCC is an existence
>proof.

If you consider Lambda to be what makes a language "Lisp-like", then I would
agree that efficient code could easily be generated for C-with-Lambda (or
C-with-lambda-and-parens) especially if upward closures were illegal.
Performance-wise, the big problems with Lisp variants such as Common Lisp and
Scheme are that:
-- Basic operations such as arithmetic are semantically far removed
from the hardware,
-- Dynamic memory allocation is done by the runtime system, and is thus
not under effective programmer control, and
-- A run-time binding model tends to be used for variable references and
function calls.

Inefficiency is a natural consequence of programming using high-level
operations, though this inefficiency can be overcome with a fair degree of
success by compiler technology and iterative tuning.
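
As a concrete instance of that tuning loop, here is the same function
before and after declarations -- a sketch, not a benchmark, with the
bounds chosen small enough that every intermediate result stays a
fixnum:

;; Untuned: + and * are generic, and may cons bignums.
(defun sum-squares (n)
  (let ((sum 0))
    (dotimes (i n sum)
      (incf sum (* i i)))))

;; Tuned: with these declarations a compiler like CMU CL can
;; open-code the arithmetic.
(defun sum-squares-tuned (n)
  (declare (type (integer 0 1000) n)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0))
    (declare (fixnum sum))
    (dotimes (i n sum)
      (declare (fixnum i))
      (incf sum (the fixnum (* i i))))))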

I don't hold out much hope for the idea of overcoming the inefficiency of
high-level operations by the explicit use of scads of low-level operations,
especially when the operations are implementation dependent. It just gives up
too many of the reasons for wanting to use a high-level language.

I believe that the primary key to adequate performance in dynamic languages
like Lisp is adequate tuning tools and educational materials. Getting good
performance in dynamic languages currently requires far too much knowledge
about low-level implementation techniques. Wizards have too long claimed that
"Lisp is just as efficient as C" --- although Lisp may be highly efficient in
the hands of the wizards, the vast majority of programmers who attempt Lisp
system development don't come anywhere near that level of efficiency.

Rob

Tim Bradshaw

unread,
Aug 9, 1994, 12:28:55 PM8/9/94
to
* Kirk Rader wrote:
> By definition, the architecture of a
> lisp-machine is going to favor using lisp for most aspects of both
> application and system-level code. It should be equally obvious that
> both the hardware and the software architecture of systems designed
> primarily to run Unix or other popular OS's themselves written in C
> using the standard C libraries will generally favor using C-like
> languages, other things being equal.

This turns out not to be true as far as I can tell. I'm not a
compiler-design (Lisp or otherwise) expert, but Lisp compilers can do
very well on RISC machines, and many of the old `lisp'
architectures turned out to be not so good after all, or rather, they
mesh well with the way people wrote lisp systems in the 70s, but they
don't write them like that any more, they write them better.

--tim

Jeff Dalton

unread,
Aug 9, 1994, 3:08:57 PM8/9/94
to

>| (defun copy-a-to-b (a b)
>| (sys:with-block-registers (1 2)
>| (setf (sys:%block-register 1) (sys:%set-tag (locf (aref a 0))
>| sys:dtp-physical)
>| (sys:%block-register 2) (sys:%set-tag (locf (aref b 0))
>| sys:dtp-physical))
>| (loop repeat (ash (length a) -2) do
>| (let ((a (sys:%block-read 1 :prefetch t))
>| (b (sys:%block-read 1 :prefetch t))
>| (c (sys:%block-read 1 :prefetch nil))
>| (d (sys:%block-read 1 :prefetch nil)))
>| (sys:%block-write 2 a)
>| (sys:%block-write 2 b)
>| (sys:%block-write 2 c)
>| (sys:%block-write 2 d)))))
>|
>| So, we've used lisp and not assembler.
>
>Looks like assembler to me.

Really? You must be used to pretty fancy macro assemblers (with
loop macros, etc).

>The need to write code like this in Lisp in order to achieve good
>performance

But it's not necessary to write code like that in order to
achieve _good_ performance in Lisp.

Jeff Dalton

unread,
Aug 9, 1994, 3:18:27 PM8/9/94
to
In article <Cu83A...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:

>>> : So, we've used lisp and not assembler. [...]

>>> And why, pray tell, would I wish to write this nearly indecipherable
>>> mess of Lisp code instead of 16 lines of perfectly readable assembler?
>>> This does seem like the wrong tool for a simple task.

>>It's an existence proof that the original assertion--that writing
>>`real-time' code in Lisp is nearly or truly impossible--is a bald-faced

>>falsehood. [...]

>The barefaced falsehood is the assertion that it is easy or even
>possible to achieve real-time behavior in any of the popular
>commercial Common Lisp implementations available on stock hardware,
>especially while retaining the features of lisp that are usually
>presented as its advantages - abstraction, ease of use, ease of
>debuggability, etc.

Let's be clear about this. Lisp is not the same as "the popular
commercial Common Lisp implementations available on stock hardware".

> But suggesting that lisp is just as reasonable a choice as
>assembler or C for implementing things like device-drivers on typical
>hardware / software configurations is simply ludicrous.

You could say "suggesting that any of the popular commercial
Common Lisp implementations available on stock hardware is just
as reasonable a choice ..."

I suspect no one would disagree with _that_.

>Note also that Unix itself is not particularly well-suited to
>real-time applications. Adding the overhead of supporting the typical
>lisp implementation's run-time system (especially its
>memory-management mechanisms) to the problems already inherent in Unix
>for the type of application under discussion only exacerbates the
>problems.

Now it's "the typical Lisp implementation" that's said to be losing.

The various terms are not interchangeable.

Anthony Berglas

unread,
Aug 9, 1994, 8:06:09 PM8/9/94
to
Clearly one should not require low level tricks to write fast code.
Below are two examples in C (not C++, reportedly slightly slower) and
Lisp. The first calculates prime numbers using a crude algorithm; CMUCL
beats Sun C and gcc (it kills Prolog and I suspect most other
"interpretive" languages). The second is a neural net test, in which
CMUCL just beats Sun C, and is just beaten by gcc (optimized, of
course).

Note that neither of these applications is a traditional Lisp one ---
there is not a List in sight.

The syntax for declarations is *Awful*, but they only need to be added
to the 10% of code that takes 90% of the time. In fact, most of the
declarations I have used make little difference. C programmers might prefer

(Fixnum ((I 10)) ...) or (Let+ ((I Fixnum 10)) ...)
to
(Let ((I 10)) (Declare (Fixnum I)) ...)

Personally I prefer Let+, allowing an optional third argument, but it is
easy to write your own macro. (Try doing that in C++!)
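
For instance, a minimal sketch of such a Let+ -- each binding is
(NAME TYPE &optional VALUE); this is invented here, not any standard
macro:

;; LET+ expands into an ordinary LET plus the matching declarations;
;; an omitted VALUE defaults to NIL.
(defmacro let+ (bindings &body body)
  `(let ,(mapcar (lambda (b)
                   (destructuring-bind (name type &optional value) b
                     (declare (ignore type))
                     (list name value)))
                 bindings)
     (declare ,@(mapcar (lambda (b)
                          (list 'type (second b) (first b)))
                        bindings))
     ,@body))

;; (let+ ((i fixnum 10)) (* i i))
;; ==  (let ((i 10)) (declare (type fixnum i)) (* i i))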

The advantage of Lisp over other interpretive languages is that it can
and IS compiled efficiently. However, Microsoft has dictated that some
programs are to be written in Visual Basic, and others in C++,
impedance mismatch being good for the soul, so who are we to argue?
IMHO the lisp community has only itself to blame for not providing a
standard option of a conventional syntax --- syntax is always more
important than semantics.

Anyway here's the code.

-------- Primes ---------

#include <stdio.h>
#include <stdlib.h> /* for atoi() */
#include <assert.h>

int prime (n)
int n;
/*"Returns t if n is prime, crudely. second result first divisor."*/
{ int divor;
for (divor=2;; divor++)
{ /*printf("divor %d n/divor %d n%%divor %d ", divor, n/divor, n%divor);*/
if (n / divor < divor) return 1;
if (n % divor == 0) return 0;
} }

main(argc, argv)
int argc;
char **argv;
{ int n, sum=0, p, i;
assert(argc == 2);
n = atoi(argv[1]); printf("n %d, ", n);
for(i=1; i<=10; i++) { sum =0;
for (p=0; p<n; p++)
{ /*printf("\nprime(%d): %d ", p, prime(p));*/
if (prime(p))
sum += p * p;
}}
printf("Sum %d\n", sum);
}

(declaim (optimize (safety 0) (speed 3)))

(declaim (start-block primes))
(declaim (inline prime))
(deftype unum () '(unsigned-byte 29))

(defun prime (nn)
"Returns t if n is prime, crudely. second result first divisor."
(declare (type unum nn))
(do ((divor 2 (+ divor 1)))
(())
(declare (type unum divor) (inline Floor))
(multiple-value-bind (quot rem) (floor nn divor)
(declare (type unum quot rem))
(when (< quot divor) (return t)) ; divor > sqrt
(when (= rem 0) (return (values nil divor))) )))

(defun primes (n)
"Returns sum of square of primes < n, basic algorithm."
(declare (type unum n))
(let ((sum 0))
; (declare (integer sum))
(dotimes (i 10)
; (print sum)
(setf sum 0)
(do ((p 0 (the t (1+ p))))
((>= p n))
(declare (type unum p))
(when (prime p)
(incf sum (* p p)) )))
sum))

(declaim (end-block))

% crude prime tester.
prime(N):- test(N, 2), !, fail.
prime(N).
test(N, M):- N mod M =:= 0.
test(N, M):- J is M + 1, J * J =< N, test(N, J).

primes(P, S):- prime(P), Q is P - 1, Q > 0, primes(Q, T), S is T + P * P.
primes(P, S):- Q is P - 1, Q > 0, primes(Q, S).
primes(P, 0).

------------ Neural Net ---------

#include <stdio.h>
#include <stdlib.h> /* for random() */
#include <math.h>

#define size 5

float w[size][size];
float a[size];
float sum;

main()
{
int epoch, i,j ;

for (i=0; i< size; i++)
for (j=0; j< size; j++)
w[i][j] = 0.0;

for (epoch=0; epoch < 10000; epoch++){

for (i=0; i< size; i++)
a[i] = (float) (random()%32000)/(float) 32000 * 0.1;

for (i=0; i< size; i++)
for (j=0; j< size; j++)
w[i][j] += a[i] * a[j];

for (i=0; i< size; i++){
sum = 0.0;
for (j=0; j< size; j++)
sum += a[i] * w[i][j];
a[i] = 1.0/(1.0 - exp(sum));
};


}
}

;;; simon.lisp -- test of neural simulations
;;;
;;; Simon Dennis & Anthony Berglas

;;; Library stuff -- Example of simple language extensions.
;;; *NOT* NEEDED FOR EFFICIENCY, JUST CONVENIENT

(defmacro doftimes ((var max &rest result) &body body)
"Like DoTimes but var declared Fixnum."
`(DoTimes (,Var ,Max ,@result)
(Declare (Fixnum ,Var))
,@Body))
;; Note that this macro could expand code for fixed loops.

(Eval-When (eval load compile)
;; [a b c] -> (Aref a b c)
(defun AREF-READER (Stream Char)
(declare (ignore char))
(Cons 'AREF (Read-Delimited-List #\] Stream T)) )
(set-macro-character #\[ #'aref-reader Nil)
(set-macro-character #\] (get-macro-character #\)) Nil) )


;;; The program.

(declaim (optimize (safety 0) (speed 3)))


(defconstant size 5)


(defvar *seed* *random-state*)
(defun main()

;; initialize the weight matrix
(let ((w (make-array '(5 5) :element-type 'SHORT-FLOAT :initial-element 0s0))
(a (make-array 5 :element-type 'SHORT-FLOAT)) )
(setf *random-state* (make-random-state *seed*))
(doftimes (epoch 10000)

;; make new activation vector
(doftimes (i size)
(setf [a i] (random 0.1)))

;; update the weights
(doftimes (i size)
(doftimes (j size)
(setf [w i j] (+ [w i j] (* [a i] [a j]))) ))

;; update the activations
(doftimes (i size)
(let ((sum 0s0))
(declare (short-float sum) (inline exp))
(doftimes (j size)
(incf sum (the short-float (* [a i] [w i j] ))) )
(setf [a i] (/ 1 (- 1 (exp sum)))) )))
w))

--
Anthony Berglas
Rm 312a, Computer Science, Uni of Qld, 4072, Australia.
Uni Ph +61 7 365 4184, Home 391 7727, Fax 365 1999

David B. Kuznick

unread,
Aug 10, 1994, 3:26:37 PM8/10/94
to
In article <31sjpj$l...@news1.svc.portal.com>, you write:

|> Now, you may find the case of writing pieces of device-driver code for
|> a Lisp Machine a contrived example. Since I happen to find that
|> argument fairly compelling myself, let me just point out that there is
|> a commercial real-time expert system shell, called G2 if memory serves
|> me correctly, written in Common Lisp and running on stock hardware.
|> It's being used, among other things, to control the Biosphere 2
|> environment.

Well, G2 isn't REALLY written in Common Lisp per se, but in a Lisp where they
have very good control over the memory management (e.g. garbage-free floats).
They took Common Lisp, threw out what they didn't need, and rewrote what they did
(with some help from Lucid, the rumours say...)

Definitely, an impressive system, and shining proof that Lisp is indeed useful
in real-world applications.

--
David Kuznick - da...@ci.com (preferred) or dkuz...@world.std.com
When the world brings you down so take your time ___
Look round and see the most in time is where you're meant to be {~._.~}
For you are light inside your dreams ( Y )
For you will find that it's something that touches me. ()~L~()
- Endless Dream - YES (_)-(_)

Marco Antoniotti

unread,
Aug 9, 1994, 12:37:33 PM8/9/94
to
In article <3282ut$p...@news1.svc.portal.com> ch...@shell.portal.com (Paul F. Snively) writes:


In article <MIKE.94A...@pdx399.intel.com>
mi...@ichips.intel.com (Mike Haertel) writes:

...

So far, no one has commented on G2, the hard real-time real-Lisp expert
system development tool. I may have to dig up old AI Expert/PC AI
magazine articles about it. From what I understand, in a nutshell,
they make heavy use of `resources' (that is, they maintain their own
free-lists of various types in order to avoid garbage collection) and
they use lots of type declarations in their code--which languages like
C again _force_ you to do anyway. So by using C-like memory-management
and type declarations, they win, and still get to use Lisp's other
great features.

If I remember correctly, G2 basically goes to great lengths to provide
GC-free arithmetic and carefully uses resources to avoid "consing".

As usual the definition of "Real Time" must be taken into account. NYU
is a notorious Ada stronghold and there are many stories going around
about it. One of the juiciest is that of an Ada system
failing to meet its real-time specs - i.e. the tasks were missing the
deadlines. Well, it turned out that no matter how good the compilation
was (and Ada can potentially be optimized in a better way than C) the
system still missed the deadlines. Of course the problem was the
scheduling policy. Hardly a matter of "efficiency of the language".

BTW. There is a portable implementation of Common Lisp Resources in
the Lisp Repository maintained by the never thanked enough Mark
Kantrowitz.

...

> So it's not clear to me why you made that remark.
> It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
> therefore using assembler dressed up in parentheses must be a
> reasonable way to write efficient Lisp code." Bzzt! Time for a
> reality check.

I believe that the point was that it's possible for a lisp-like
language (RTL) to generate efficient code, and that GCC is an existence
proof.

Pretty much it. Thanks

Happy Lisping

Stephen J Bevan

unread,
Aug 10, 1994, 9:53:35 AM8/10/94
to
In article <3295lh$i...@uqcspe.cs.uq.oz.au> ber...@cs.uq.oz.au (Anthony Berglas) writes:
Clearly one should not require low level tricks to write fast code.
Below are two examples in C (not C++, reportedly slightly slower) and
Lisp. The first calculates prime numbers using a crude algorithm; CMUCL
beats Sun C and gcc (it kills Prolog and I suspect most other
"interpretive" languages). ...

The way I read the parenthetical remark it implies that Prolog is an
"interpretive" language. Is that what you really meant?

Mike Haertel

unread,
Aug 10, 1994, 5:08:23 PM8/10/94
to
In article <3282ut$p...@news1.svc.portal.com> ch...@shell.portal.com (Paul F. Snively) writes:
>>[Mike Haertel wrote this, Snively incorrectly attributed it.]

>> GCC's RTL syntax may have lots of parentheses, but that's its only
>> connection with Lisp.
>
>I'm afraid that that's simply untrue. The connection with Lisp is on a
>deep mathematical/logical level. Remember, RMS (Richard Stallman) is
>one of the old MIT Lisp hackers. One of his significant claims to fame
>is that when the Lisp Machines that MIT created went commercial in the
>form of Symbolics, Inc. and Lisp Machines, Inc. being spun off by
>various denizens of the MIT AI Lab, RMS would take every new release of
>Genera that Symbolics put out, reverse engineer it, and reimplement the
>results, which he would then provide to LMI. He sincerely believes
>that technology should be freely available to anyone who wants it.

Ok, you're saying "RMS is an old time Lisp hacker, therefore GCC's RTL
*must* have some deep connection with Lisp." A stupid thing to say.

The anecdote about his anti-Symbolics crusade is cute, true, and irrelevant
to the matter at hand.

>But I digress. RTL is related to Lisp inasmuch as they both derive
>directly from the Lambda Calculus.

RTL has nothing whatever to do with the Lambda calculus. Every
single RTL statement includes a side effect. RTL makes no pretense
whatever of referential transparency.

>As a theoretical point, it's been
>understood for some time that if you're willing to express everything
>about a program in terms of the Lambda Calculus, some wonderful
>optimization opportunities arise.

This is called continuation passing style. Using CPS as your
intermediate representation has some benefits. However, other
representations, notably static single assignment, wherein each
temporary is assigned exactly once, have the same benefits.
Anyway, as far as I know CPS was invented by Steele for his
scheme compiler "Rabbit" around 1975. It's been used in a
handful of compilers; New Jersey ML is a notable recent example,
and Appel wrote an oft-cited book about it.
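
For the curious, the transformation itself fits in a few lines -- a
sketch in Lisp, with hypothetical ADD-CPS/MUL-CPS/SQRT-CPS helpers;
Rabbit and SML of NJ obviously do far more:

;; In CPS every operation takes an explicit continuation K, so all
;; control flow and every intermediate value become visible.
(defun add-cps (a b k) (funcall k (+ a b)))
(defun mul-cps (a b k) (funcall k (* a b)))
(defun sqrt-cps (a k) (funcall k (sqrt a)))

;; Direct style:
(defun hypot (x y)
  (sqrt (+ (* x x) (* y y))))

;; The same function, converted to CPS:
(defun hypot-cps (x y k)
  (mul-cps x x
           (lambda (x2)
             (mul-cps y y
                      (lambda (y2)
                        (add-cps x2 y2
                                 (lambda (sum)
                                   (sqrt-cps sum k))))))))

;; (hypot-cps 3 4 #'print)  prints 5.0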

But GCC doesn't use CPS. In fact, as far as I know no compiler for
any conventional imperative language uses CPS. There might be an
interesting research project in that.

>[...]

>Nevertheless, GCC manages to compile C to RTL, a Lambda-Calculus
>derivative, at which point it can do many of the optimizations that one
>would expect for a language based on the Lambda Calculus. It then
>translates the RTL to assembly language for the target processor, and
>hands the results off to the system's assembler.

RTL is not a lambda-calculus derivative. Various "register transfer
languages" have been around for a long time. GCC's is based on the
one used in Davidson & Fraser's "Portable Optimizer", done at Arizona
around 1980. Davidson & Fraser's RTL was in turn based on something,
I forget the name, invented at CMU.

>Note that _all optimizations that GCC does are done to the RTL, not the
>assembly language_. And GCC is still considered one of the best
>optimizing C compilers around--generally better than the C compiler
>that many vendors include with their system! The wiser vendors have
>taken to including the latest GCC with their system--Sun Microsystems
>and NeXT come to mind.

The purpose of RTL is to provide a semantic representation of machine
instruction effects. For example, the 68020 instruction "addl a0@, d0"
maps (approximately) to the RTL instruction

(set (reg 0) (add (reg 0)
                  (mem (reg 8))))

Therefore, almost exactly the opposite of what you said is true.
GCC optimizes directly on the target machine's assembly language.
It does not have an abstract intermediate language that it first
goes through.

>I believe that the point was that it's possible for a lisp-like
>language (RTL) to generate efficient code, and that GCC is an existence
>proof.

In case it's not yet blatantly clear to you, RTL is not a lisp-like
language:

1. It is statically typed.
2. It is heavily side-effect oriented.
3. It has no structured data types.
4. It has no implicit memory allocation.

GCC's RTL's *only* connection with Lisp is that RMS chose a
parenthesis-heavy syntax for it. The syntax is entirely superficial.

Mike Haertel

unread,
Aug 10, 1994, 5:22:20 PM8/10/94
to
In article <CuA8E...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <MIKE.94A...@pdx399.intel.com> mi...@ichips.intel.com (Mike Haertel) writes:
>>Anyway, you seem to have completely missed Thomas' point, that
>>the original poster's example of "efficient Lisp" wasn't Lisp at all.
>>I sure can't find it anywhere in CLtL.
>
>Since when is Lisp the same as what's in CLtL?

Since I was trying to make a point, I didn't feel like going into the
never-ending philosophical discussion of exactly what characteristics
of a language make it "Lisp". CLtL was a convenient scapegoat.

The point I was trying to make was that the original poster's
example was just as vendor-specific as assembly language.

>But what is Lisp here? What's in CLtL?

I refuse to get sucked into this discussion. This newsgroup has
seen it too many times before. Are you bored, or what?

p.s. "Lisp" includes Scheme.
--
Mike Haertel <mi...@ichips.intel.com>

Lawrence G. Mayka

unread,
Aug 10, 1994, 1:47:15 PM8/10/94
to
In article <MARCOXA.94...@mosaic.nyu.edu> mar...@mosaic.nyu.edu (Marco Antoniotti) writes:

BTW. There is a portable implementation of Common Lisp Resources in
the Lisp Repository maintained by the never thanked enough Mark
Kantrowitz.

CLIM also includes one (possibly the same one, I don't know) in the
CLIM-SYS package.

Tim Bradshaw

unread,
Aug 11, 1994, 11:49:04 AM8/11/94
to
* Jeff Dalton wrote:
> I would agree that Lisp can do reasonably well on RISC machines,
> but the point of Lisp machines was not just to make Lisp fast
> but also to make it fast and safe at the same time and fast
> without needing lots of declarations.

> Recent Lisp implementations (especially CMU CL) have gone a fair
> way towards making it easy to have safe, efficient code on RISC
> machines, but it may always require a somewhat different way of
> thinking. (Not a bad way, IMHO, but different from LM thinking
> nonetheless.)

Could any of the lisp machines do fast floating point, without
declarations? I know maclisp was rumoured to be able to (on
stock hardware even!) but did it use declarations?

I'd be interested in knowing how fast modern stock-hardware lisps do
per `MIPS' cf the special-architecture things. Of course this is
probably a seriously hard comparison to do meaningfully for all sorts
of reasons.

> But what is this about "the way people wrote lisp systems in the 70s"?
> What sort of thing do you have in mind? Lisps written in assembler
> that could run in 16K 36-bit words? (Presumably not.)

Well, the MIT lispms and I think the Xerox dmachines are basically
`70s technology' to my mind; that's what I meant.

--tim

Kirk Rader

unread,
Aug 11, 1994, 10:36:06 AM8/11/94
to
In article <CuA8u...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <TFB.94Au...@sorley.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>>* Kirk Rader wrote:
>>> By definition, the architecture of a
>>> lisp-machine is going to favor using lisp for most aspects of both
>>> application and system-level code. It should be equally obvious that
>>> both the hardware and the software architecture of systems designed
>>> primarily to run Unix or other popular OS's themselves written in C
>>> using the standard C libraries will generally favor using C-like
>>> languages, other things being equal.
>>
>>This turns out not to be true as far as I can tell. I'm not a
>>compiler-design (Lisp or otherwise) expert, but Lisp compilers can do
>>very well on RISC machines, and many of the old `lisp'
>>architectures turned out to be not so good after all, or rather, they
>>mesh well with the way people wrote lisp systems in the 70s, but they
>>don't write them like that any more, they write them better.


I agree that if you focus on line-by-line treatment of individual
translation units, lisp compilers can be made to be quite efficient.
How common it is for real-world implementations to attain that degree
of efficiency is another matter. In any case, this misses the point I
was making that you must look at the hardware and software
architecture of the system as a whole. RISC CPU's are only the
starting point in the design of a modern (non-lisp-machine)
work-station. The system as a whole - its I/O, memory-management,
etc. hardware and software substrates - was designed and
optimized with Unix in mind. Any software system such as Common Lisp
which has its own memory-management, I/O, etc. models that are
sufficiently different from that of Unix to prevent the implementor or
application programmer from simply calling the standard libraries in
the same way that a C program would is by definition not only
re-inventing the wheel from the point of view of the work-station's
designers but also risks incurring (and in typical implementations
does incur) serious performance problems. Every brand of work-station
with which I am familiar comes with performance metering tools which
can be used to easily verify the kinds of ill-effects to which I
refer. As a concrete example, it is an enlightening experience to
watch the output of gr_osview on an SGI while a complex lisp
application is running using one of the popular commercial Common Lisp
implementations. One can easily see where the lisp implementation's
I/O model, memory-management model, lightweight process model, and so
on cause really awful behavior of Irix' built-in I/O,
memory-management, and scheduling mechanisms.

All of the above applies equally to desktop PC's, of course, except
that the OS for which the system was optimized is different.


>
>I would agree that Lisp can do reasonably well on RISC machines,
>but the point of Lisp machines was not just to make Lisp fast
>but also to make it fast and safe at the same time and fast
>without needing lots of declarations.


And to build the kind of "holistically lisp friendly" environment that
would make the kind of misbehavior to which I refer above impossible.
A lisp machine has no other memory-management mechanism than lisp's.
Ditto for its scheduler. Ditto for its I/O substrate. There is no
possibility of conflict or misoptimization.


>
>Recent Lisp implementations (especially CMU CL) have gone a fair
>way towards making it easy to have safe, efficient code on RISC
>machines, but it may always require a somewhat different way of
>thinking. (Not a bad way, IMHO, but different from LM thinking
>nonetheless.)


I would disagree with the description that writing efficient
applications for typical RISC-based workstations in lisp is "easy",
for all of the reasons I refer to above, regardless of the number of
machine instructions to which a simple expression may compile, or
whatever other micro-level measure of compiler efficiency you wish to
use. The ultimate performance of the application as a whole will be
determined by much more than just the tightness of the code emitted by
the compiler.


[...]


Kirk Rader

Kirk Rader

unread,
Aug 11, 1994, 10:53:11 AM8/11/94
to
In article <CuA8A...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <Cu83A...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
>

[...]

>
>Let's be clear about this. Lisp is not the same as "the popular
>commercial Common Lisp implementations available on stock hardware".
>
>> But suggesting that lisp is just as reasonable a choice as
>>assembler or C for implementing things like device-drivers on typical
>>hardware / software configurations is simply ludicrous.
>
>You could say "suggesting that any of the popular commercial
>Common Lisp implementations available on stock hardware is just
>as reasonable a choice ..."
>
>I suspect no one would disagree with _that_.
>
>>Note also that Unix itself is not particularly well-suited to
>>real-time applications. Adding the overhead of supporting the typical
>>lisp implementation's run-time system (especially its
>>memory-management mechanisms) to the problems already inherent in Unix
>>for the type of application under discussion only exacerbates the
>>problems.
>
>Now it's "the typical Lisp implementation" that's said to be losing.
>
>The various terms are not interchangeable.
>


From the point of view of this thread the terms are interchangeable to
this extent: "Lisp" (no "typical" or "commercial Common Lisp"
qualifiers) connotes not just any language based on the
lambda-calculus or any one that has generic features to support
higher-order functions and a functional-programming paradigm. "Lisp"
ordinarily refers to a member of a specific family of languages that
all have certain features in common to which this thread has been
referring, such as automatic dynamic memory-allocation with a
garbage-collector based deallocation scheme. The set of all "lisps"
obviously has fuzzy boundaries, but not more so than many other terms
about which it is possible to have meaningful discussions. If you
want to say "I consider XYZ a dialect of lisp, and it avoids the
problems to which I refer in the following ways...." that would be
perfectly valid (if true). But that does not alter the fact that the
majority of implementations of what most people would consider lisp
dialects do in fact suffer the kinds of performance problems which are
the subject of this thread.

Steven Rezsutek

unread,
Aug 11, 1994, 3:55:17 PM8/11/94
to
k...@enterprise.bih.harvard.edu (Kris Karas) writes:

Portability problems for Lisp could be solved in a similar fashion.
Emulate enough of the system-specific functions of a popular
environment in other environments so that any lisp program could
depend upon those functions. Add support for fast, low level I/O to
disk, keyboard, video, and so forth; process manipulation,
synchronization, and scheduler control; asynchronous
events/interrupts including being a network server; standardized calls
to editors, print managers, and so forth and so on.

Cool. A LispM-ulator. ;-)


Steve
--
---
Steven Rezsutek Steven....@gsfc.nasa.gov
Nyma / NASA GSFC
Code 735.2 Vox: +1 301 286 0897
Greenbelt, MD 20771

Thomas M. Breuel

unread,
Aug 11, 1994, 10:28:24 PM8/11/94
to
In article <32dtcr$l...@hsdndev.harvard.edu> k...@enterprise.bih.harvard.edu (Kris Karas) writes:
|The problem with portability of C code was solved for non-Unix
|platforms by making the compiler have knowledge of Unix platform
|layout, wrapping this environment around the compiling program while
|it compiles; if the program asks for a "/usr/include/time.h" the
|compiler will provide it, even if there isn't a directory called
|"usr/include" on the actual machine.

|
|Portability problems for Lisp could be solved in a similar fashion.

Sure, a lot of C portability is based on de-facto standards, rather
than on codified standards. But those de-facto standards exist. And
where they don't, there are at least easy, de-facto conventions for
interfacing to system libraries.

|Emulate enough of the system-specific functions of a popular
|environment in other environments so that any lisp program could
|depend upon those functions. Add support for fast, low level I/O to
|disk, keyboard, video, and so forth; process manipulation,
|synchronization, and scheduler control; asynchronous
|events/interrupts including being a network server; standardized calls
|to editors, print managers, and so forth and so on.

I would be happy if I could write efficient numerical functions in
CommonLisp that I could move to different CommonLisp implementations
on the same machine without having to redo all the declarations, and
if I could write foreign-function interface code that would work under
different CommonLisp implementations on the same machine.
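
To make the declarations half of that wish concrete, here is a minimal
sketch of the kind of declaration-laden numerical code in question
(the function name and array types are just for illustration). The
declarations themselves are portable Common Lisp; what is not portable
is how much unboxed speed a given implementation extracts from them,
or which extra vendor-specific declarations it wants on top:

  (defun dot-product (x y n)
    ;; Portable declarations; whether they buy unboxed float
    ;; arithmetic is up to the implementation.
    (declare (type (simple-array double-float (*)) x y)
             (type fixnum n)
             (optimize (speed 3) (safety 0)))
    (let ((sum 0.0d0))
      (declare (type double-float sum))
      (dotimes (i n)
        (incf sum (* (aref x i) (aref y i))))
      sum))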

Most of the other stuff you mentioned is operating system specific and
not language specific. You wouldn't get portability for that in C or
any other language, nor do I see why you would expect to.

Of course, with the amazing shrinking number of CommonLisp vendors,
we may soon have all the de-facto standards we want...

Thomas.

Henry G. Baker

unread,
Aug 11, 1994, 1:14:00 PM8/11/94
to
In article <MIKE.94Au...@pdx399.intel.com> mi...@ichips.intel.com (Mike Haertel) writes:
>>As a theoretical point, it's been
>>understood for some time that if you're willing to express everything
>>about a program in terms of the Lambda Calculus, some wonderful
>>optimization opportunities arise.
>
>This is called continuation passing style. Using CPS as your
>intermediate representation has some benefits. However, other
>representations, notably static single assignment, wherein each
>temporary is assigned exactly once, have the same benefits.
>Anyway, as far as I know CPS was invented by Steele for his
^^^^^^^^^^^^^^^^^^^^^^^^^^

>scheme compiler "Rabbit" around 1975. It's been used in a
>handful of compilers; New Jersey ML is a notable recent example,
>and Appel wrote an oft-cited book about it.

WRONG! Continuation-passing style was apparently known & used by
lambda calculus mathematicians (Curry?, Church?) before computers.
(Unfortunately, I don't have a reference.) CPS was popularized by
Michael Fischer in a 1972 paper that was recently reprinted in Lisp &
Symbolic Computation. A great deal of Carl Hewitt's 1972-74 Actor
stuff is CPS++. Steele's Rabbit compiler showed how CPS could be used
in a compiler to simplify and generalize many important optimizations.
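
For readers who haven't seen the style, a minimal sketch (function
names are made up): the CPS version threads an explicit continuation
K - "the rest of the computation" - through every call, and that
explicit representation is what the compilers mentioned above exploit:

  ;; Direct style:
  (defun fact (n)
    (if (zerop n)
        1
        (* n (fact (- n 1)))))

  ;; Continuation-passing style: K receives the result.
  (defun fact-cps (n k)
    (if (zerop n)
        (funcall k 1)
        (fact-cps (- n 1)
                  (lambda (r) (funcall k (* n r))))))

  ;; (fact-cps 5 #'identity) => 120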

Jeff Dalton

unread,
Aug 11, 1994, 2:05:19 PM8/11/94
to
In article <TFB.94Au...@sorley.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>* Jeff Dalton wrote:
>> I would agree that Lisp can do reasonably well on RISC machines,
>> but the point of Lisp machines was not just to make Lisp fast
>> but also to make it fast and safe at the same time and fast
>> without needing lots of declarations.
>
>> Recent Lisp implementations (especially CMU CL) have gone a fair
>> way towards making it easy to have safe, efficient code on RISC
>> machines, but it may always require a somewhat different way of
>> thinking. (Not a bad way, IMHO, but different from LM thinking
>> nonetheless.)
>
>Could any of the lisp machines do fast floating point, without
>declarations?

I don't know how fast their floating point was but, so far as I
know, declarations made no difference.

> I know maclisp was rumoured to be able to (on
>stock hardware even!) but did it use declarations?

Yes.

>> But what is this about "the way people wrote lisp systems in the 70s"?
>> What sort of thing do you have in mind? Lisps written in assembler
>> that could run in 16K 36-bit words? (Presumably not.)
>
>Well the MIT lispms and I think the Xerox dmachines are basically
>`70s technology' to my mind, that's what I meant.

Well, when you talk about "the way people wrote Lisp systems in
the 70s" that sounds like you're talking about pretty much everyone
and about most of the 70s. There were plenty of non-LM Lisps in
the 70s, and Lisp machines didn't really take off until the early
80s when lots of people (mistakenly) thought specialized hardware
was the way to go.

-- jeff

Jeff Dalton

unread,
Aug 11, 1994, 2:22:08 PM8/11/94
to
In article <3289mb$b...@cantaloupe.srv.cs.cmu.edu> r...@cs.cmu.edu (Rob MacLachlan) writes:
>
>If you consider Lambda to be what makes a language "Lisp-like", then I would
>agree that efficient code could easily be generated for C-with-Lambda (or
>C-with-lambda-and-parens) especially if upward closures were illegal.
>Performance-wise, the big problems with Lisp variants such as Common Lisp and
>Scheme are that:
> -- Basic operations such as arithmetic are semantically far removed
> from the hardware,
> -- Dynamic memory allocation is done by the runtime system, and is thus
> not under effective programmer control, and
> -- A run-time binding model tends to be used for variable references and
> function calls.

I know you know what you're talking about, but this doesn't make all
that much sense to me.

I can see the problem with arithmetic, though it's not that hard to
get arithmetic that's fairly close to the hardware.
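
A sketch of what I mean - with the usual caveat that how well it
compiles varies by implementation - is just ordinary declared fixnum
arithmetic:

  (defun sum-below (n)
    (declare (fixnum n) (optimize (speed 3) (safety 0)))
    (let ((s 0))
      (declare (fixnum s))
      (dotimes (i n s)
        ;; THE promises the sum also fits in a fixnum; with SAFETY 0,
        ;; overflow becomes the programmer's problem.
        (setq s (the fixnum (+ s i))))))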

But I don't see why "programmer control" over allocation has to
be more efficient. In many cases, it won't be, and it requires
a fair amount of skill to get efficient storage management when
using malloc directly would be too slow. It's also far more
error-prone.

And what is this "run-time binding" and why is it worse than,
say, shared libraries? _Most_ variables will be ordinary lexical
variables and correspond directly to stack locations just as in
C. Using ordinary 70s-style technology, global variable and
function names can be looked up once to get the required address.
Thereafter, the cost is one level of indirection on each reference
or call. Moreover, it's easy to do better in some cases. For
instance, KCL uses direct calls to functions in the same file.

>I believe that the primary key to adequate performance in dynamic languages
>like Lisp is adequate tuning tools and educational materials. Getting good
>performance in dynamic languages currently requires far too much knowledge
>about low-level implementation techniques. Wizards have too long claimed that
>"Lisp is just as efficient as C" --- although Lisp may be highly efficient in
>the hands of the wizards, the vast majority of programmers who attempt Lisp
>system development don't come anywhere near that level of efficiency.

I agree that performance tools and educational materials are
important, but I don't think it's all that hard to get reasonably
efficient Lisp.

-- jeff

Kris Karas

unread,
Aug 11, 1994, 3:15:39 PM8/11/94
to
>| (loop repeat (ash (length a) -2) do
>| (let ((a (sys:%block-read 1 :prefetch t))
>| (b (sys:%block-read 1 :prefetch t))..........

>|
>| So, we've used lisp and not assembler.
>
>Looks like assembler to me. Worse yet, it's completely unportable and
>requires intimate familiarity with the particular system used

Fair enough. C looks like assembler to me.

What many C programmers fail to notice, however, is that what they are
programming has little to do with C alone by itself, and very much to
do with "the C syntax" layered on top of a vast library particular to
one specific system. To wit: most of the C environments I use assume
that the computer they're running on supports a file system with
subdirectories, that there just happens to be a top-level directory
called "usr", that there just happens to be a subdirectory under that
called "include", and so on ad nauseam.

Let's have fun. Take a large system snatched from net.sources or
something, copy it over to a non-Unix platform which has a C compiler
and a library for the functions described in K&R, and compile the thing.
Does it run? Better yet, does it even compile? I have a DeSmet C
compiler on my PC that implements all of K&R, and I can find few
programs indeed that will actually compile and run successfully.
In short, C is *not* portable. Most programs written in it depend
heavily upon knowledge of the platform (Unix) upon which it runs.
This is no different than the lisp/assembler program above depending
upon knowledge of its particular platform.

The problem with portability of C code was solved for non-Unix
platforms by making the compiler have knowledge of Unix platform
layout, wrapping this environment around the compiling program while
it compiles; if the program asks for a "/usr/include/time.h" the
compiler will provide it, even if there isn't a directory called
"usr/include" on the actual machine.

Portability problems for Lisp could be solved in a similar fashion.

Emulate enough of the system-specific functions of a popular
environment in other environments so that any lisp program could
depend upon those functions. Add support for fast, low level I/O to
disk, keyboard, video, and so forth; process manipulation,
synchronization, and scheduler control; asynchronous
events/interrupts including being a network server; standardized calls
to editors, print managers, and so forth and so on.

--
Kris Karas <k...@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
(setq *disclaimer* "I barely speak for myself, much less anybody else."
*conformist-numbers* '((AMA-CCS 274) (DoD 1236))
*bikes* '((RF900RR-94) (NT650-89 :RaceP T) (TSM-U3 :Freebie-P T)))

Bill Gooch on SWIM project x7151

unread,
Aug 11, 1994, 6:09:52 PM8/11/94
to

In article <TFB.94Au...@sorley.cogsci.ed.ac.uk>, t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
|> Could any of the lisp machines do fast floating point, without
|> declarations?

The following comments pertain to Symbolics machines:

For single precision, yes, because it doesn't require boxing.
Symbolics sold some decent floating point accelerator boards.

Double precision is a horse of a different color, because of boxing.
Accelerators didn't help noticeably because the boxing overhead was
heavily dominant over computation time anyway. We were able to get
very substantial improvements in double precision with a single
precision FPA by keeping all the numbers unboxed and using subprimitives
that operate on unboxed args and return unboxed results. This is of
course very specialized and hard-to-maintain code, but it did give us
good results performance-wise. I don't remember the exact benchmark
results, but suffice it to say that they were quite competitive with
using C on alternative stock hardware with floating point acceleration.

Symbolics later came out with fast DP floating point acceleration, but
they somehow missed the point because they never put the hooks into
their compiler so that one could just use declarations to get decent
performance by avoiding boxing. This meant that their FPA hardware
was essentially useless to anyone wanting to do DP who didn't want to
get into the kind of coding we did (for which, I should add, we needed
help and some compiler-macro code from David Plummer at Symbolics).
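
For comparison, the stock-hardware analogue of keeping the numbers
unboxed is exactly that declaration-driven style (a sketch only - the
subprimitive code we wrote was entirely machine-specific). Given
declarations like these, a good compiler can keep every double-float
in the loop unboxed, because results go straight into a specialized
array rather than out through a boxed return value:

  (defun scale-into (src dst factor n)
    (declare (type (simple-array double-float (*)) src dst)
             (type double-float factor)
             (type fixnum n)
             (optimize (speed 3) (safety 0)))
    (dotimes (i n dst)
      (setf (aref dst i) (* factor (aref src i)))))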

John R. Bane

unread,
Aug 12, 1994, 9:47:33 AM8/12/94
to
In article <STEVE.94A...@baloo.gsfc.nasa.gov> st...@baloo.gsfc.nasa.gov (Steven Rezsutek) writes:
>k...@enterprise.bih.harvard.edu (Kris Karas) writes:
>
> Portability problems for Lisp could be solved in a similar fashion.
> Emulate enough of the system-specific functions of a popular
> environment in other environments so that any lisp program could
> depend upon those functions.....
>
>Cool. A LispM-ulator. ;-)
>
It's been done, several times, and commercially to boot. Medley from Venue
runs D-machine images on top of a C emulator that essentially fakes a
D-machine environment on top of Unix or MS-DOS. You can build an image on
a Dorado, dump it and restart it on a Unix box without a hiccup. You can't
go the other way, because the emulator doesn't do a D-machine perfectly
(doesn't maintain the D-machine page tables, for one thing).

Hasn't Symbolics done an emulator-based port to the Alpha?
--
Internet: ba...@tove.cs.umd.edu
UUCP:...uunet!mimsy!bane
Voice: 301-552-4860

Jeff Dalton

unread,
Aug 12, 1994, 2:48:20 PM8/12/94
to
In article <MIKE.94Au...@pdx399.intel.com> mi...@ichips.intel.com (Mike Haertel) writes:
>In article <CuA8E...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <MIKE.94A...@pdx399.intel.com> mi...@ichips.intel.com (Mike Haertel) writes:
>>>Anyway, you seem to have completely missed Thomas' point, that
>>>the original poster's example of "efficient Lisp" wasn't Lisp at all.
>>>I sure can't find it anywhere in CLtL.
>>
>>Since when is Lisp the same as what's in CLtL?
>
>Since I was trying to make a point, I didn't feel like going into the
>never-ending philosophical discussion of exactly what characteristics
>of a language make it "Lisp". CLtL was a convenient scapegoat.

You don't have to go into any philosophical discussion to say
"Common Lisp" if you mean Common Lisp or to say "not portable
Lisp" or "vendor specific" or whatever if _that's_ what you mean.

>The point I was trying to make was that the original poster's
>example was just as vendor-specific as assembly language.

And he was making a point about Lisp, not about standardized,
portable Lisp or whatever it is you're talking about.

>>But what is Lisp here? What's in CLtL?
>
>I refuse to get sucked into this discussion. This newsgroup has
>seen it too many times before. Are you bored, or what?

I'm not trying to suck you into any discussion. I'm just tired of
people saying "Lisp" when they mean Common Lisp (or whatever they
mean -- it's usually unclear).

Actually, I'm more than tired of it. People are forming false
conclusions about all Lisps because of what they've seen of Common
Lisp. This is a serious problem for anyone who wants to work in Lisp.
Claims about "Lisp" that are true only of some subset are part of
the problem.

-- jd

William G. Dubuque

unread,
Aug 14, 1994, 2:30:46 PM8/14/94
to
In article <940810182...@pharlap.ci.com> da...@pharlap.CI.COM (David B. Kuznick) writes:

From: da...@pharlap.CI.COM (David B. Kuznick)
Date: 10 Aug 1994 14:26:37 -0500

In article <31sjpj$l...@news1.svc.portal.com>, you write:

|> Now, you may find the case of writing pieces of device-driver code for
|> a Lisp Machine a contrived example. Since I happen to find that
|> argument fairly compelling myself, let me just point out that there is
|> a commercial real-time expert system shell, called G2 if memory serves
|> me correctly, written in Common Lisp and running on stock hardware.
|> It's being used, among other things, to control the Biosphere 2
|> environment.

Well, G2 isn't REALLY written in Common Lisp per se, but in a Lisp
where they have very good control over the memory mangement (i.e.
garbage-free floats, etc). They took Common Lisp, threw out what
they didn't need, and rewrote what they did (with some help from
Lucid, the rumours say...)

Definitely, an impressive system, and shining proof that Lisp is
indeed useful in real-world applications.

G2 _is_ really written in Common Lisp. However, of course, a
carefully chosen subset of Common Lisp is used in order to maintain
precise control over the code that is generated by Chestnut's Lisp
-> C translator. Even with such measures, G2 is not a true _hard_
real-time system as has been stated elsewhere. These issues have
been discussed in this newsgroup previously.

Lucid has been out of the picture for quite some time.

Jeff Dalton

unread,
Aug 15, 1994, 12:09:13 PM8/15/94
to
In article <CuDKK...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
>In article <CuA8u...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <TFB.94Au...@sorley.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>>>* Kirk Rader wrote:
>>>> By definition, the architecture of a
>>>> lisp-machine is going to favor using lisp for most aspects of both
>>>> application and system-level code. It should be equally obvious that
>>>> both the hardware and the software architecture of systems designed
>>>> primarily to run Unix or other popular OS's themselves written in C
>>>> using the standard C libraries will generally favor using C-like
>>>> languages, other things being equal.

> Every brand of work-station


>with which I am familiar comes with performance metering tools which
>can be used to easily verify the kinds of ill-effects to which I
>refer. As a concrete example, it is an enlightening experience to
>watch the output of gr_osview on an SGI while a complex lisp
>application is running using one of the popular commercial Common Lisp
>implementations. One can easily see where the lisp implementation's
>I/O model, memory-management model, lightweight process model, and so
>on cause really awful behavior of Irix' built-in I/O,
>memory-management, and scheduling mechanisms.

I find this rather strange. What is the mismatch in I/O models?
And why would Lisp's lightweight processes be a problem? Is the
OS not expecting to give timer interrupts? Memory management I
can almost see, but what exactly is going wrong? Berkeley Unix
tried to take Franz Lisp into account. Have things moved backwards
since then?

>>I would agree that Lisp can do reasonably well on RISC machines,
>>but the point of Lisp machines was not just to make Lisp fast
>>but also to make it fast and safe at the same time and fast
>>without needing lots of declarations.
>
>And to build the kind of "holistically lisp friendly" environment that
>would make the kind of misbehavior to which I refer above impossible.
>A lisp machine has no other memory-management mechanism than lisp's.
>Ditto for its scheduler. Ditto for its I/O substrate. There is no
>possibility of conflict or misoptimization.

Sure there is. It might be fine for Lisp A but not for Lisp B.
Besides, I/O and scheduling and much of memory management is
the OS, not the hardware. The OS on ordinary non-Lisp machines
could change to work better with Lisp.

>>Recent Lisp implementations (especially CMU CL) have gone a fair
>>way towards making it easy to have safe, efficient code on RISC
>>machines, but it may always require a somewhat different way of
>>thinking. (Not a bad way, IMHO, but different from LM thinking
>>nonetheless.)
>
>I would disagree with the description that writing efficient
>applications for typical RISC-based workstations in lisp is "easy",

I didn't say it was easy. But it's true that I don't think it's
as hard as some other people seem to. Moreover, I blame particular
Lisp implementations rather than "Lisp".

>for all of the reasons I refer to above, regardless of the number of
>machine instructions to which a simple expression may compile, or
>whatever other micro-level measure of compiler efficiency you wish to
>use. The ultimate performance of the application as a whole will be
>determined by much more than just the tightness of the code emitted by
>the compiler.

Well, I (at least) have never made any claim about the number of
machine instructions or indeed any "micro-level measure of compiler
efficiency". So why are you saying this in response to me?

-- jeff

Jeff Dalton

unread,
Aug 15, 1994, 12:33:17 PM8/15/94
to
In article <CuDLC...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
>In article <CuA8A...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>Let's be clear about this. Lisp is not the same as "the popular
>>commercial Common Lisp implementations available on stock hardware".

My point here is that if a claim says "Lisp", the supporting
evidence should be more general than popular commercial CLs.

>>>Note also that Unix itself is not particularly well-suited to
>>>real-time applications. Adding the overhead of supporting the typical

>>>lisp implementation's run-time system [...]

>>Now it's "the typical Lisp implementation" that's said to be losing.
>>
>>The various terms are not interchangeable.

>From the point of view of this thread the terms are interchangeable to
>this extent: "Lisp" (no "typical" or "commercial Common Lisp"
>qualifiers) connotes not just any language based on the
>lambda-calculus or any one that has generic features to support
>higher-order functions and a functional-programming paradigm.

A straw man, since no one has said otherwise.

>"Lisp" ordinarily refers to a member of a specific family of languages that
>all have certain features in common to which this thread has been
>referring, such as automatic dynamic memory-allocation with a
>garbage-collector based deallocation scheme.

Do you count reference counting as GC? (It used to be considered
an alternative to GC, but a few years ago it looked like the
distinction hadn't been maintained, at least not in many people's
minds). There are Lisps that use reference counting, and there
are Lisps that don't collect at all. But this is really a matter
of the implementations, not the languages.

Besides, there are a number of cases where Lisp's alloc + GC will be
faster than "manual" alloc and dealloc.

> The set of all "lisps"
>obviously has fuzzy boundaries, but not more so than many other terms
>about which it is possible to have meaningful discussions.

But I am not attempting to exploit any "fuzzy boundaries".

> If you
>want to say "I consider XYZ a dialect of lisp, and it avoids the
>problems to which I refer in the following ways...." that would be
>perfectly valid (if true).

We got into this because someone posted some example real-time
code. So far the complaints about the example have been that
it's messy [true] and that it's implementation-specific [so what?].
Sure, someone came right out and said it's "not Lisp", but their
only argument to that effect was that it couldn't be found anywhere
in CLtL.

Now, if someone wants to argue that (say) a language with GC is
necessarily slower than one that works like C, let them do so.

> But that does not alter the fact that the
>majority of implementations of what most people would consider lisp
>dialects do in fact suffer the kinds of performance problems which are
>the subject of this thread.

Sure, and if you went around saying "the majority of implementations"
you'd receive no complaints from me.

-- jd

Tim Bradshaw

unread,
Aug 15, 1994, 1:00:56 PM8/15/94
to
* Kirk Rader wrote:
> The system as a whole - its I/O, memory-management,
> etc. hardware and software substrates - was designed and
> optimized with Unix in mind. Any software system such as Common Lisp
> which has its own memory-management, I/O, etc. models that are
> sufficiently different from that of Unix to prevent the implementor or
> application programmer from simply calling the standard libraries in
> the same way that a C program would by definition not only
> re-invents the wheel, from the point of view of the work-station's
> designers, but also risks incurring (and in typical implementations
> does incur) serious performance problems.

[Talking about Common Lisp here]

CL's I/O model is basically buffered streams. Sounds pretty much
identical to that of C to me. A lot of CL implementations seem to
have rather poor I/O but that's because the I/O systems aren't well
written.

Since C doesn't have any built in memory management, I fail to see how
CL's can be different from it. There are or have been problems with
Unix's VM and typical Lisp programs' poor locality behaviour. However
these problems are in fact just as bad for other programs (for instance
X servers exhibit many of the same characteristics, and thrash VM
systems the same way as Lisp on resource-starved machines).

> Every brand of work-station
> with which I am familiar comes with performance metering tools which
> can be used to easily verify the kinds of ill-effects to which I
> refer. As a concrete example, it is an enlightening experience to
> watch the output of gr_osview on an SGI while a complex lisp
> application is running using one of the popular commercial Common Lisp
> implementations. One can easily see where the lisp implementation's
> I/O model, memory-management model, lightweight process model, and so
> on cause really awful behavior of Irix' built-in I/O,
> memory-management, and scheduling mechanisms.

Can you give details? I have spent some time watching large CL (CMUCL)
programs on Suns, and other than VM problems (and CMUCL's garbage
collection is not exactly `state of the art') I find they do fine.
And they weren't even very well written. Lightweight processes are
not part of CL BTW.

> And to build the kind of "holistically lisp friendly" environment that
> would make the kind of misbehavior to which I refer above impossible.
> A lisp machine has no other memory-management mechanism than lisp's.
> Ditto for its scheduler. Ditto for its I/O substrate. There is no
> possibility of conflict or misoptimization.

Sure we'd all like the integrated environment back. But we can't have
it. And it didn't make that kind of misbehaviour impossible, believe
me!

> I would disagree with the description that writing efficient
> applications for typical RISC-based workstations in lisp is "easy",
> for all of the reasons I refer to above, regardless of the number of
> machine instructions to which a simple expression may compile, or
> whatever other micro-level measure of compiler efficiency you wish to
> use. The ultimate performance of the application as a whole will be
> determined by much more than just the tightness of the code emitted by
> the compiler.

Again, can you give some examples here? I fully agree that
crappily-written programs don't perform well, and you can write badly
in Lisp. But well-written ones do, and you don't need to resort to
arcana to write well.

--tim

Martin Rodgers

unread,
Aug 15, 1994, 7:28:58 AM8/15/94
to
In article <32dtcr$l...@hsdndev.harvard.edu>
k...@enterprise.bih.harvard.edu "Kris Karas" writes:

> Fair enough. C looks like assember to me.

When I've used C to write the kind of code I like to write in Lisp,
it looks just like assembly code, but with indenting. Most of the
control structures are nearly invisible, too.

> What many C programmers fail to notice, however, is that what they are
> programming has little to do with C alone by itself, and very much to
> do with "the C syntax" layered on top of a vast library particular to
> one specific system. To wit: most of the C environments I use assume
> that the computer they're running on supports a file system with
> subdirectories, that there just happens to be a top-level directory
> called "usr", that there just happens to be a subdirectory under that
> called "include", and so on ad nauseam.

Bingo! I have neither usr nor include in my "root" dir. Not that I
consider that important, but so many other people do. I'm not even
using X-Windows.



> This is no different than the lisp/assembler program above depending
> upon knowledge of its particular platform.

I may be agreeing with that, also.



> The problem with portability of C code was solved for non-Unix
> platforms by making the compiler have knowledge of Unix platform
> layout, wrapping this environment around the compiling program while
> it compiles; if the program asks for a "/usr/include/time.h" the
> compiler will provide it, even if there isn't a directory called
> "usr/include" on the actual machine.

I'll have to try that! It's news to me. No manual I've read has
said anything about that, even when the compiler had some minimal
Unix support, or was also available for Unix.

Now, maybe all I need in order to compile and run Flex is a util to
unpack a TAZ archive that can also rename filenames with multiple dots?

The solution that most people have is to add POSIX support to all
the non-Unix systems. Will that be enough? I dunno.

--
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind

Thomas M. Breuel

unread,
Aug 15, 1994, 4:21:40 PM8/15/94
to
In article <CuDKK...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
|One can easily see where the lisp implementation's
|I/O model, memory-management model, lightweight process model, and so
|on cause really awful behavior of Irix' built-in I/O,
|memory-management, and scheduling mechanisms.

The use a CommonLisp implementation makes of an operating environment
is not much different from that of many other heavy-duty applications.

In fact, I have been rather unimpressed with Irix' (both 4.* and 5.*)
ability to cope with heavy-duty applications on hardware that would
have been adequate for the task but didn't have much room to spare.

So, if Irix can't cope with Lisp, that's a problem with Irix. And
that problem will not just bite you with Lisp, but also with many
other kinds of applications.

Thomas.

Thomas M. Breuel

unread,
Aug 15, 1994, 4:49:47 PM8/15/94
to
In article <TFB.94Au...@oliphant.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
|Sounds pretty much
|identical to that of C to me. A lot of CL implementations seem to
|have rather poor I/O but that's because the I/O systems aren't well
|written.

CL's I/O system is also lacking some rather important primitives,
like block read/block write.
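
(The emerging ANSI standard does add READ-SEQUENCE and WRITE-SEQUENCE,
which supply roughly this. A sketch, assuming an implementation that
already supports them and streams opened with a byte element type:

  (defun copy-stream (in out &optional (block-size 4096))
    (let ((buffer (make-array block-size
                              :element-type '(unsigned-byte 8))))
      (loop for n = (read-sequence buffer in)
            while (plusp n)
            do (write-sequence buffer out :end n))))

How efficiently an implementation turns that into real block I/O is,
of course, another question.)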

|However these
|problems are in fact just as bad for other programs (for instance X
|servers exhibit many of the same characteristics, and thrash VM
|systems the same way as Lisp on resource-starved machines.

Sure, but CommonLisp systems will resource-starve a machine much more
quickly than an equivalent C program. The reason is that CL lacks
important primitives for expressing some fundamental kinds of data
abstractions in a space-efficient way (think about how much space your
typical "struct { int x; double y; char z;};" takes as a CommonLisp
DEFSTRUCT). Also, rampant consing by the standard libraries is a real
problem in CL. And, most systems still don't have efficient floating
point arguments/return values for separately compiled functions.
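
To make the comparison concrete - a sketch, with the slot types
purely illustrative: the C struct packs into a couple of dozen bytes
at most, while the obvious DEFSTRUCT rendering costs a header plus a
full word per slot, with the double-float typically boxed as a
separate heap object behind the Y slot. Whether an implementation
uses the :TYPE options to pack or unbox slots is its own business,
and most of the current ones don't:

  (defstruct foo
    (x 0     :type fixnum)        ; immediate, but still a full word
    (y 0.0d0 :type double-float)  ; one word pointing at a boxed float
    (z #\a   :type character))    ; immediate, again a full word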

Yes, you are right, neither VM behavior of CommonLisp nor garbage
collection are the problem. However, VM thrashing and excessive
garbage collections are frequently observed symptoms of problems with
the CommonLisp language.

Thomas.

PS:

|CL's I/O model is basically buffered streams.

CL has _typed_ buffered streams. Whatever the CL I/O design was
supposed to achieve, it fails to achieve in practice. Those stream
types at best cause lots of headaches.

|Since C doesn't have any built in memory management, I fail to see how
|CL's can be different from it.

malloc/free meets all the criteria of being "built-in memory
management": it implements manual memory management, it cannot be
implemented portably in user code, and it is standardized. So, C does
have built-in memory management. (malloc/free also is relatively
inefficient...).

Jeffrey Mark Siskind

unread,
Aug 15, 1994, 10:52:02 PM8/15/94
to
In article <TMB.94Au...@arolla.idiap.ch> t...@arolla.idiap.ch (Thomas M. Breuel) writes:

The reason is that CL lacks
important primitives for expressing some fundamental kinds of data
abstractions in a space-efficient way (think about how much space your
typical "struct { int x; double y; char z;};" takes as a CommonLisp
DEFSTRUCT).

What primitives does CL lack? The Scheme compiler that I am writing provides
a DEFINE-STRUCTURE that is essentially a subset of the CL DEFSTRUCT. And it
can produce *exactly* the code "struct { int x; double y; char z;};" for
(DEFINE-STRUCTURE FOO X Y Z) when type inference determines that the X slot
will only hold exact integers, the Y slot only inexact reals, and the Z slot
characters. Note that it does this without any declarations at all. I presume
that the same thing can be done for CL.
--

Jeff (home page http://www.cdf.toronto.edu/DCS/Personal/Siskind.html)

Bob Hutchison

unread,
Aug 16, 1994, 10:42:16 AM8/16/94
to
In <CuDKK...@triple-i.com>, ki...@triple-i.com (Kirk Rader) writes:
>refer. As a concrete example, it is an enlightening experience to
>watch the output of gr_osview on an SGI while a complex lisp
>application is running using one of the popular commercial Common Lisp
>implementations. One can easily see where the lisp implementation's
>I/O model, memory-management model, lightweight process model, and so
>on cause really awful behavior of Irix' built-in I/O,
>memory-management, and scheduling mechanisms.

I missed this bit the first time through...

gr_osview can be quite misleading on SGI platforms. Having worked
at a company that produces a high-end graphics design system
for SGIs, and being involved in performance tuning (at least to the
extent of being covered by the hair that had just been yanked from
its owner's head), I can tell a few stories about that. If you want
to use a tool like that on an SGI, use osview (I think that's what it's called).

As for SGI and lisp, I have had three or four implementations of scheme
running and noticed nothing extraordinary about their behaviour. Elk has
since been embedded into the product for further experiments, and it
seems to do its thing without undue disturbance of the rest of the
application. This experience does nothing more than, perhaps, re-emphasise
that implementation matters.

--
Bob Hutchison, hu...@RedRock.com
RedRock, 135 Evans Avenue, Toronto, Ontario, Canada M6S 3V9
(416) 760-0565

Kirk Rader

unread,
Aug 16, 1994, 10:49:00 AM8/16/94
to
In article <CuL3J...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <CuDKK...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
>>In article <CuA8u...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>In article <TFB.94Au...@sorley.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:


[...]


>
>I find this rather strange. What is the mismatch in I/O models?
>And why would Lisp's lightweight processes be a problem? Is the
>OS not expecting to give timer interrupts? Memory management I
>can almost see, but what exactly is going wrong? Berkeley Unix
>tried to take Franz Lisp into account. Have things moved backwards
>since then?


As one concrete example of the kind of thing to which I referred, SGI's
filesystem substrate maintains its recently-deallocated file cache
buffers in a "semi-free" state on the theory that they will quickly be
reallocated for the same purpose. Watching the output of gr_osview as
an I/O-intensive lisp application executes, you can easily see conflict
between the different buffering and memory-management policies being
used by the OS's filesystem and virtual-memory management mechanisms
and by the I/O and memory-management mechanisms of at least two
commercial Common Lisp implementations and one shareware Scheme
implementation with which I am familiar.


[...]


>Sure there is. It might be fine for Lisp A but not for Lisp B.
>Besides, I/O and scheduling and much of memory management is
>the OS, not the hardware. The OS on ordinary non-Lisp machines
>could change to work better with Lisp.


All these arguments about "lisp A vs lisp B" are red-herrings. On a
given lisp-machine you could only be described as perverse for using
any other dialect than one for which the machine was designed. I
understand your real argument to be that it is in principle possible
to design lisp-friendly OS's and Unix- (or other particular OS-)
friendly lisps. That is undeniably true, but market forces have yet
to produce viable examples of such implementations, so far as I can
tell.


[...]


>
>I didn't say it was easy. But it's true that I don't think it's
>as hard as some other people seem too. Moreover, I blame particular
>Lisp implementations rather than "Lisp".


"Lisp", as opposed to particular lisp implementations, can only be
understood to refer to the concept of any member of a particular
family of programming languages. The common usage of "lisp" implies a
language with certain features that make it difficult to implement in
a way that won't conflict with a Unix workstation's or PC's model of
the universe. The richer and more featureful the lisp dialect, the
more difficult it has proven to be.

[...]


>
>Well, I (at least) have never made any claim about the number of
>machine instructions or indeed any "micro-level measure of compiler
>efficiency". So why are you saying this in response to me?
>
>-- jeff
>


Because your earlier contribution to this thread both quoted and added
various statements about implementation details and compiler
efficiency that simply ignored the more general issues, and so
addressed only one small aspect of the whole problem. My central
point all along has been that the choice of which
language to use for a particular project is a complex one due to the
many trade-offs entailed no matter which language is used. In the
absence of any contextual information about the nature of the
platform, the particular language implementations, and the
requirements of the application both of the statements "Lisp is better
than C" and "C is better than lisp" are either meaningless or false,
depending on one's semantic theory. The same is true for any other
pair of programming languages which might be placed in opposition or
proposed as candidates for "god's own programming language."


Kirk Rader

unread,
Aug 16, 1994, 11:29:34 AM8/16/94
to
In article <CuL4n...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:


[...]


>
>My point here is that if a claim says "Lisp", the supporting
>evidence should be more general that popular commercial CLs.


I have also used various commercial and shareware non-Common Lisp
dialects on both Unix workstations and PC-class machines with similar
results. My point is not about any particular implementation of any
particular dialect, but about language-intrinsic features of lisp as a
class of programming language. I did draw some particular examples
from a particular project implemented using a particular commercial
Common Lisp, but I also very explicitly stated that these were
intended to illustrate points about "lisp", as the term is commonly
understood, in general.


[...]


>
>A straw man, since no one has said otherwise.


Your continuing focus on "lisp" as opposed to "particular lisp
implementations" shows that you are saying otherwise.


[...]


>Do you count reference counting as GC? (It used to be considered
>an alternative to GC, but a few years ago it looked like the
>distinction hadn't been maintained, at least not in many people's
>minds). There are Lisps that use reference counting, and there
>are Lisps that don't collect at all. But this is really a matter
>of the implementations, not the languages.


I am indifferent as to how you choose to categorize different
memory-management strategies, other than the basic difference between
languages which automatically allocate and deallocate memory and those
which only do so under explicit programmer control. Some form of
automatic memory allocation and recovery strategy is central to most
people's idea of what a lisp dialect entails. If you choose to
include in the set of "lisps" some language which has a malloc() /
free() style of memory management, then that dialect would, of course,
be much less prone to memory-management conflicts with widely-used
OS's.


>
>Besides, there are a number of cases where Lisp's alloc + GC will be
>faster than "manual" alloc and dealloc.


And such cases are among those for which I have explicitly advocated
using lisp earlier in this and related threads.


[...]


>
>Now, if someone wants to argue that (say) a language with GC is
>necessarily slower than one that works like C, let them do so.


[...]


Not "necessarily slower", but in my experience it is more common in
real-world programming projects to encounter cases where C's
memory-management philosophy results in higher throughput and fewer
problems for interactivity and real-time response than one which
relies on GC.


Kirk Rader

Kirk Rader

unread,
Aug 16, 1994, 11:51:06 AM8/16/94
to

[...]

>


>[Talking about Common Lisp here]
>
>CL's I/O model is basically buffered streams. Sounds pretty much
>identical to that of C to me. A lot of CL implementations seem to
>have rather poor I/O but that's because the I/O systems aren't well
>written.

C's "minimalist" philosophy makes it much easier for the
implementation to provide alternative library entry points [open() vs
fopen(), etc.] and greater opportunity for the programmer to
circumvent whatever problems there are with a given library
implementation. It is silly to suggest that C and Common Lisp are
really on a par when it comes to the amount of semantic "baggage" they
carry for I/O or anything else. If that were true, what advantage
would lisp _ever_ have?


>
>Since C doesn't have any built in memory management, I fail to see how
>CL's can be different from it. There are or have been problems with
>Unix's VM and typical Lisp programs' poor locality behaviour. However
>these problems are in fact just as bad for other programs (for instance
>X servers exhibit many of the same characteristics, and thrash VM
>systems the same way as Lisp on resource-starved machines).


C's memory-management philosophy is to rely on the standard-library's
interface to the OS. I do not see how this could be more different
from lisp's reliance on a built-in memory-management scheme. And what
possible relevance could it have that other software systems also
exhibit similar performance problems to those of lisp?


[...]


>
>Can you give details? I have spent some time watching large CL (CMUCL)
>programs on Suns, and other than VM problems (and CMUCL's garbage
>collection is not exactly `state of the art') I find they do fine.
>And they weren't even very well written.

I specifically referred to using gr_osview on SGI's. In particular,
it is easy to observe conflicts between lisp's memory management and
I/O mechanisms and Irix's filesystem and memory-management mechanisms.

> Lightweight processes are
>not part of CL BTW.


But they are part of almost every "serious" implementation of every
lisp dialect with which I am familiar, and I was not talking just about
some particular implementation of some particular dialect.


[...]


>
>Sure we'd all like the integrated environment back. But we can't have
>it. And it didn't make that kind of misbehaviour impossible, believe
>me!


[...]

What I was saying was impossible was conflict between the design of
the lisp implementation and the OS, because on a lisp machine they
were essentially the same thing. It is always possible to create
programs in any language on any platform that exhibit performance
problems.


[...]


>
>Again, can you give some examples here? I fully agree that
>crappily-written programs don't perform well, and you can write badly
>in Lisp. But well-written ones do, and you don't need to resort to
>arcana to write well.
>
>--tim


The question isn't whether it is possible to write programs in any
particular language that perform well, but rather whether, for any
given programming task, the features of the language make it easier
or harder to achieve acceptable performance. The semantics of a
language which includes GC-style memory management, lexical closures,
so-called "weak types", etc. has consciously chosen expressive power
over highest-possible performance. In many cases that is an
appropriate choice, but in many cases it isn't.
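
The trade shows up even in trivial code - a sketch, with a made-up
name: the closure below is concise and expressive, but every call of
MAKE-COUNTER heap-allocates an environment that only the collector
can reclaim, allocation the programmer never wrote and cannot
schedule:

  (defun make-counter ()
    (let ((n 0))
      ;; The returned closure captures N, forcing N's binding
      ;; onto the heap.
      #'(lambda () (incf n))))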


Kirk Rader

Kirk Rader

unread,
Aug 16, 1994, 12:01:05 PM8/16/94
to

[...]

>
>The use a CommonLisp implementation makes of an operating environment
>is not much different from that of many other heavy-duty applications.
>
>In fact, I have been rather unimpressed with Irix' (both 4.* and 5.*)
>ability to cope with heavy-duty applications on hardware that would
>have been adequate for the task but didn't have much room to spare.
>
>So, if Irix can't cope with Lisp, that's a problem with Irix. And
>that problem will not just bite you with Lisp, but also with many
>other kinds of applications.
>
> Thomas.


As someone who makes his living creating software for SGI's I cannot
afford to use any tool that does not run well under Irix, whether or
not Irix is particularly well designed. The fact is that there is a
whole SGI software industry, and if lisp evangelists would like to see
more cases of lisp being used there, they would have a greater
likelihood of success by suggesting that the lisp vendors do a better
job of accommodating the platform than that the platform vendor
accommodate lisp. It's a simple matter of economics.


Kirk Rader

Bill Gooch on SWIM project x7151

unread,
Aug 16, 1994, 6:22:13 PM8/16/94
to

In article <CuMuH...@triple-i.com>, ki...@triple-i.com (Kirk Rader) writes:
|> ....

|> All these arguments about "lisp A vs lisp B" are red-herrings. On a
|> given lisp-machine you could only be described as perverse for using
|> any other dialect than one for which the machine was designed....

Oh, I dunno... on Symbolics machines, compatibility with other
dialects is an issue that received significant attention. They
explicitly support development of strict Common Lisp (without
the Symbolics extensions) so that the result can be used on a
range of CL platforms. I've also worked with code that used a
pretty broad set of #+<dialect> and #-<dialect> macros for
multi-platform support purposes. And then there's CLOE, for
developing PC-targeted CL applications based on a well-chosen
subset of the Symbolics extensions.

Even more "interesting" was the infamous ILCP - the InterLisp
Compatibility Package - which supported a reasonable portion of
InterLisp on top of Zetalisp (in pre-CL days). I used this to
port Emycin to Zetalisp, and as undeniably perverse as this was
(in more ways than one), it worked pretty darn well. It was used
by a customer in the oil industry to develop and field two
expert systems applications within one year, start to finish.
BTW, these applications were fielded for production use on Vax
hardware running a reimplementation of the Emycin inference engine
in FORTRAN, developed using (you guessed it) Symbolics' FORTRAN.

Speaking of FORTRAN, there was also Pascal, Ada, Prolog and C
support... but that's another story.

I don't know whether any of this has much bearing on your
discussion. Oh well.

Martin Rodgers

unread,
Aug 16, 1994, 8:19:12 AM8/16/94
to
In article <CuL3J...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk "Jeff Dalton" writes:

> Sure there is. It might be fine for Lisp A but not for Lisp B.
> Besides, I/O and scheduling and much of memory management is
> the OS, not the hardware. The OS on ordinary non-Lisp machines
> could change to work better with Lisp.

I'd guess that a change to the OS that supports Lisp might also
support an app written in another language, like Smalltalk, Dylan,
or Prolog. It might also give better support to
apps written in C, although that might depend on the memory management
used. I know of a C/C++ compiler that is said to have a malloc
implementation that works badly with the paging system for the
platform it is intended to work with, and it's a very popular C/C++
compiler. That's why I don't think it's a purely language or OS
specific issue.

William Paul Vrotney

unread,
Aug 17, 1994, 5:36:43 AM8/17/94
to
In article <CuMuH...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:

> ...


> problem. My central point all along has been that the choice of which
> language to use for a particular project is a complex one due to the
> many trade-offs entailed no matter which language is used. In the
> absence of any contextual information about the nature of the
> platform, the particular language implementations, and the
> requirements of the application both of the statements "Lisp is better
> than C" and "C is better than lisp" are either meaningless or false,
> depending on one's semantic theory. The same is true for any other
> pair of programming languages which might be placed in opposition or

> ...

Instead of all this complex analysis, which I'm not sure is going anywhere,
let's try some simple stuff for a change. Let's try this thought experiment. IF
there were a Lisp compiler that compiled as efficiently as C (or even close
to it) and IF your boss said that you can program in either Lisp or C. What
would your choice be? Case closed (one way or the other). I hope.

There are so many more interesting aspects of Lisp that this news group can
be used for.

--
Bill Vrotney - vro...@netcom.com

Kris Karas

unread,
Aug 17, 1994, 10:59:24 AM8/17/94
to
John R. Bane writes:

>Steven Rezsutek writes:
>>k...@enterprise.bih.harvard.edu (Kris Karas) writes:
>> Portability problems for Lisp could be solved in a similar fashion.
>> Emulate enough of the system-specific functions...

>>Cool. A LispM-ulator. ;-)
>It's been done, several times, and commercially to boot.

I think people get stuck assuming that if you're going to emulate a
lisp machine, you should emulate the whole thing (all 50 megabytes of
its image). Symbolics went this route with the Alpha; make an
emulator capable of running world loads saved by an XL/NXP series
platform. It would be better to emulate just those portions that the
user is likely to call, and leave the rest native; there's no need to
make window drivers, disk interfaces, and so on be written in lisp.

Most commercial vendors (Lucid, Franz, etc.) have useful
platform-specific extensions, some of which mimic the lisp-machine
equivalent. If they could come to some consensus, a platform standard
could be devised to go along with CLtL, allowing programmers to have
the full control over the environments of their programs as they do
now with C. I'm not saying anything new here; it's been said before.
Still, I'm amazed at the polarization of lisp development systems into
either (A) super-bloated (lisp machines with 500 megabytes of virtual
memory swapping space) or (B) super-anemic (limited memory, stripped
down, having modern internals, but with a 1970s user interface).

Kirk Rader

unread,
Aug 17, 1994, 11:50:55 AM8/17/94
to
In article <32qj88$7...@relay.tor.hookup.net> hu...@RedRock.com (Bob Hutchison) writes:

[...]

>
>I missed this bit the first time through...
>
>gr_osview can be quite missleading on SGI platforms. Having worked
>at a company that produces a high-end graphics design system
>for SGIs, and being involved in performance tuning (at least to the
>extent of being covered by the hair that had just been yanked from
>its owner's head), I can tell a few stories about that. If you want
>to use a tool like that on an SGI, use osview (I think its called).


For the sake of brevity, and because gr_osview is easiest to use for
(pun intended) gross measurements, I only referred explicitly to it.
In fact, I have seen the results of running any number of SGI's GUI
and command-line performance metering tools, including spending a
lengthy session with a couple of SGI's engineers using some of their
most arcane tools with the benefit of experts' eyes in trying to
analyse the poor performance of a particualr lisp-based application.
It was obvious that the worst performance problems were being caused
by conflicts between the lisp implementation's and Irix' memory
management and scheduling mechanisms.


>
>As for SGI and lisp, I have had three or four implementations of scheme
>running and noticed nothing extraordinary about their behaviour. Elk has
>since been embedded into the product for further experiments, and it
>seems to do its thing without undue disturbance of the rest of the
>application. This experience does nothing more than, perhaps, re-emphasise
>that implementation matters.


In what language is the application written in which Elk was embedded?
Embedding a relatively small implementation of a dialect like Scheme,
especially an implementation like Elk that is more-or-less designed
with the idea of being used in that way, in a large-scale application
the bulk of which is written (presumably) in a C-like language is very
different from trying to implement a large, complex application
entirely in a commercial Common Lisp, using lisp to implement all of
the bells-and-whistles like multi-threading that a large-scale
application is likely to require. I have previously made the point
that a reasonable compromise for achieving high performance for an
application that cannot afford to be entirely written in lisp while
still retaining some, at least, of lisp's desirable features would be
to embed a small lisp environment in an application whose mainline
code is implemented in a more "conventional" language.


[...]


Kirk Rader

Kirk Rader

unread,
Aug 17, 1994, 11:56:22 AM8/17/94
to
In article <32re6l$i...@pulitzer.eng.sematech.org> bill....@sematech.org (Bill Gooch) writes:
>

[...]

>
>Oh, I dunno... on Symbolics machines, compatibility with other
>dialects is an issue that received significant attention. They
>explicitly support development of strict Common Lisp (without
>the Symbolics extensions) so that the result can be used on a
>range of CL platforms. I've also worked with code that used a
>pretty broad set of #+<dialect> and #-<dialect> macros for
>multi-platform support purposes. And then there's CLOE, for
>developing PC-targeted CL applications based on a well-chosen
>subset of the Symbolics extensions.
>
>Even more "interesting" was the infamous ILCP - the InterLisp
>Compatibility Package - which supported a reasonable portion of
>InterLisp on top of Zetalisp (in pre-CL days). I used this to
>port Emycin to Zetalisp, and as undeniably perverse as this was
>(in more ways than one), it worked pretty darn well. Was used
>by a customer in the oil industry to develop and field two
>expert systems applications within one year, start to finish.
>BTW, these applications were fielded for production use on Vax
>hardware running a reimplementation of the Emycin inference engine
>in FORTRAN, developed using (you guessed it) Symbolics' FORTRAN.
>
>Speaking of FORTRAN, there was also Pascal, Ada, Prolog and C
>support... but that's another story.
>
>I don't know whether any of this has much bearing on your
>discussion. Oh well.


All of this is perfectly true. My point was not that it is impossible
for any given machine to support multiple dialects or even multiple
languages well, just that it is a strange thing to insist on using a
dialect or language that isn't supported well, or is otherwise not a
good match for the application at hand.

Kirk Rader

unread,
Aug 17, 1994, 12:36:00 PM8/17/94
to
In article <vrotneyC...@netcom.com> vro...@netcom.com (William Paul Vrotney) writes:

[...]

>
>Instead of all this complex analysis, which I'm not sure is going anywhere,
>let's try some simple stuff for a change. Let's try this mind experiment: IF
>there were a Lisp compiler that compiled as efficiently as C (or even close
>to it) and IF your boss said that you can program in either Lisp or C, what
>would your choice be? Case closed (one way or the other). I hope.
>
>There are so many more interesting aspects of Lisp that this news group can
>be used for.
>
>--
>Bill Vrotney - vro...@netcom.com


I agree that this is far from the most interesting topic that could be
discussed in this newsgroup. I also see from the above that you have
missed my point entirely. Oh well.

Kirk Rader

John R. Bane

unread,
Aug 17, 1994, 2:33:44 PM8/17/94
to
In article <32t8kc$4...@hsdndev.harvard.edu> k...@enterprise.bih.harvard.edu (Kris Karas) writes:
>>Steven Rezsutek writes:
>>>k...@enterprise.bih.harvard.edu (Kris Karas) writes:
>>> Portability problems for Lisp could be solved in a similar fashion.
>>> Emulate enough of the system-specific functions...
>
>I think people get stuck assuming that if you're going to emulate a
>lisp machine, you should emulate the whole thing...
>... It would be better to emulate just those portions that the
>user is likely to call, and leave the rest native; there's no need to
>make window drivers, disk interfaces, and so on be written in lisp.

Yes and no. The problem with the full-blown Lisp environments is that
since all the levels of functionality of the internal subsystems are
right there in the image with you, it's too easy to go behind the
published interfaces when you need speed or non-published information.
When these systems were among the most advanced around, programming for
portability and retargetability was a non-issue.

The result can be systems that don't have clean, easily-abstracted
modules for things like the window system. I know of one Lisp-machine
emulator whose interface to the window system is at the bitblt level (!).

Thomas M. Breuel

unread,
Aug 17, 1994, 4:30:16 PM8/17/94
to

Sorry, I mixed two arguments into one.

Yes, for putting scalars into structures, you can generate reasonably
efficient code, but few CL compilers actually do.

However, when you nest those structures, put them into arrays, or pass
them as arguments, you have extra pointer and heap overhead.
Furthermore, the compiler cannot statically decide to optimize the
pointer overhead away, since that is sometimes wrong (i.e., less
efficient) and can also change semantics.
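
For concreteness, a sketch (not in the original posting) of the layout
difference being described:

    ;; Each element of this array is a pointer to a separately
    ;; heap-allocated POINT with its own object header, rather than
    ;; an inline pair of doubles as "struct { double x, y; } a[100];"
    ;; would give in C.
    (defstruct point
      (x 0d0 :type double-float)
      (y 0d0 :type double-float))

    (defvar *points* (make-array 100))
    (dotimes (i 100)
      (setf (aref *points* i) (make-point)))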

It is incredibly useful to be able to specify reference vs. value
semantics, and CL lacks the primitives for doing that.

Thomas.

Anthony Berglas

unread,
Aug 17, 1994, 7:44:04 PM8/17/94
to
I am rather surprised at this thread. It is not necessary to use
dirty tricks to make Common Lisp efficient. I am reposting a neural
net benchmark that makes that clear, and would be interested in results
on other machines. There are a few small gaps in Common Lisp that
could be addressed (for example, it is not possible to put an array in a
structure, just a pointer to one), but they are relatively minor [Clos
is another matter]. Also, any language that allows weak typing can
benefit enormously from simple hardware support, such as Sun's (but not
Mips's, say) 30-bit integer arithmetic, but not all Lisp expressions need
be weakly typed.
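
For concreteness, a sketch (not from the original post) of the
array-in-a-structure gap mentioned above: the W slot below holds a
pointer to a separately allocated array, not 25 inline short-floats.

    (defstruct net
      (w (make-array '(5 5) :element-type 'short-float
                     :initial-element 0s0)
         :type (simple-array short-float (5 5))))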

Anyway, here is the repost:


Clearly one should not require low level tricks to write fast code.
Below are two examples in C (not C++, reportedly slightly slower) and
Lisp. The first calculates prime numbers using a crude algorithm; CMUCL
beats Sun C and gcc (it kills Prolog and, I suspect, most other
languages whose constructs don't match the hardware very closely ---
[most Prolog systems cannot be run interpretively but must be compiled,
often to a byte code]). The second is a neural net test, in which
CMUCL just beats Sun C, and is just beaten by gcc (optimized, of
course).

Note that neither of these applications is a traditional Lisp one ---
there is not a list in sight.

The syntax for declarations is *Awful*, but they only need to be added
to the 10% of code that takes 90% of the time. In fact, most of the
declarations I have used make little difference. C programmers might prefer

(Fixnum ((I 10)) ...) or (Let+ ((I Fixnum 10)) ...)
to
(Let ((I 10)) (Declare (Fixnum I)) ...)

Personally I prefer Let+, allowing an optional third argument, but it
is easy to write your own macro. (Try doing that in C++!)
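
A minimal sketch of such a Let+ (one plausible reading of the form
shown above, not necessarily Berglas's own macro): each binding is
(var type init), with the init optional, and the type becomes a
declaration in the expansion.

    (defmacro let+ (bindings &body body)
      `(let ,(loop for (var nil init) in bindings
                   collect (list var init))
         ,@(loop for (var type) in bindings
                 collect `(declare (type ,type ,var)))
         ,@body))

    ;; (let+ ((i fixnum 10)) (* i i))
    ;; ==> (let ((i 10)) (declare (type fixnum i)) (* i i))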

The advantage of Lisp over other interpretive languages is that it can
be, and IS, compiled efficiently. However, Microsoft has dictated that
some programs are to be written in Visual Basic, and others in C++,
impedance mismatch being good for the soul, so who are we to argue?
IMHO the lisp community has only itself to blame for not providing a
standard option of a conventional syntax --- syntax is always more
important than semantics.

Anyway here's the code.

-------- Primes ---------

#include <stdio.h>
#include <assert.h>

int prime (n)
int n;
/*"Returns t if n is prime, crudely. second result first divisor."*/
{ int divor;
for (divor=2;; divor++)
{ /*printf("divor %d n/divor %d n%%divor %d ", divor, n/divor, n%divor);*/
if (n / divor < divor) return 1;
if (n % divor == 0) return 0;
} }

main(argc, argv)
int argc;
char **argv;
{ int n, sum=0, p, i;
assert(argc == 2);
n = atoi(argv[1]); printf("n %d, ", n);
for(i=1; i<=10; i++) { sum =0;
for (p=0; p<n; p++)
{ /*printf("\nprime(%d): %d ", p, prime(p));*/
if (prime(p))
sum += p * p;
}}
printf("Sum %d\n", sum);
}

(declaim (optimize (safety 0) (speed 3)))

(declaim (start-block primes))
(declaim (inline prime))
(deftype unum () '(unsigned-byte 29))

(defun prime (nn)
  "Returns t if n is prime, crudely. second result first divisor."
  (declare (type unum nn))
  (do ((divor 2 (+ divor 1)))
      (())
    (declare (type unum divor) (inline Floor))
    (multiple-value-bind (quot rem) (floor nn divor)
      (declare (type unum quot rem))
      (when (< quot divor) (return t)) ; divor > sqrt
      (when (= rem 0) (return (values nil divor))))))

(defun primes (n)
  "Returns sum of square of primes < n, basic algorithm."
  (declare (type unum n))
  (let ((sum 0))
    ; (declare (integer sum))
    (dotimes (i 10)
      ; (print sum)
      (setf sum 0)
      (do ((p 0 (the t (1+ p))))
          ((>= p n))
        (declare (type unum p))
        (when (prime p)
          (incf sum (* p p)))))
    sum))

(declaim (end-block))

% crude prime tester.
prime(N):- test(N, 2), !, fail.
prime(N).
test(N, M):- M * M =< N, N mod M =:= 0.
test(N, M):- J is M + 1, J * J =< N, test(N, J).

primes(P, S):- prime(P), Q is P - 1, Q > 0, primes(Q, T), S is T + P * P.
primes(P, S):- Q is P - 1, Q > 0, primes(Q, S).
primes(P, 0).

------------ Neural Net ---------

#include <stdio.h>
#include <stdlib.h>   /* for random() */
#include <math.h>

#define size 5

float w[size][size];
float a[size];
float sum;

main()
{
int epoch, i,j ;

for (i=0; i< size; i++)
for (j=0; j< size; j++)
w[i][j] = 0.0;

for (epoch=0; epoch < 10000; epoch++){

for (i=0; i< size; i++)
a[i] = (float) (random()%32000)/(float) 32000 * 0.1;

for (i=0; i< size; i++)
for (j=0; j< size; j++)
w[i][j] += a[i] * a[j];

for (i=0; i< size; i++){
sum = 0.0;
for (j=0; j< size; j++)
sum += a[i] * w[i][j];
a[i] = 1.0/(1.0 - exp(sum));
};


}
}

;;; simon.lisp -- test of neural simulations
;;;
;;; Simon Dennis & Anthony Berglas

;;; Library stuff -- Example of simple language extensions.
;;; *NOT* NEEDED FOR EFFICIENCY, JUST CONVENIENT

(defmacro doftimes ((var max &rest result) &body body)
  "Like DoTimes but var declared Fixnum."
  `(DoTimes (,Var ,Max ,@result)
     (Declare (Fixnum ,Var))
     ,@Body))
;; Note that this macro could expand code for fixed loops.

(Eval-When (eval load compile)
  ;; [a b c] -> (Aref a b c)
  (defun AREF-READER (Stream Char)
    (declare (ignore char))
    (Cons 'AREF (Read-Delimited-List #\] Stream T)))
  (set-macro-character #\[ #'aref-reader Nil)
  (set-macro-character #\] (get-macro-character #\)) Nil))


;;; The program.

(declaim (optimize (safety 0) (speed 3)))


(defconstant size 5)


(defvar *seed* *random-state*)
(defun main ()

  ;; initialize the weight matrix
  (let ((w (make-array '(5 5) :element-type 'SHORT-FLOAT :initial-element 0s0))
        (a (make-array 5 :element-type 'SHORT-FLOAT)))
    (setf *random-state* (make-random-state *seed*))
    (doftimes (epoch 10000)

      ;; make new activation vector
      (doftimes (i size)
        (setf [a i] (random 0.1)))

      ;; update the weights
      (doftimes (i size)
        (doftimes (j size)
          (setf [w i j] (+ [w i j] (* [a i] [a j])))))

      ;; update the activations
      (doftimes (i size)
        (let ((sum 0s0))
          (declare (short-float sum) (inline exp))
          (doftimes (j size)
            (incf sum (the short-float (* [a i] [w i j]))))
          (setf [a i] (/ 1 (- 1 (exp sum)))))))
    w))

--
Anthony Berglas
Rm 312a, Computer Science, Uni of Qld, 4072, Australia.
Uni Ph +61 7 365 4184, Home 391 7727, Fax 365 1999

Kurt Bond

unread,
Aug 17, 1994, 10:54:54 PM8/17/94
to
In article <MIKE.94Au...@pdx399.intel.com>,
Mike Haertel <mi...@ichips.intel.com> wrote:
>But GCC doesn't use CPS. In fact, as far as I know no compiler for
>any conventional imperative language uses CPS. There might be an
>interesting research project in that.

Richard A. Kelsey's dissertation ("Compilation by Program Transformation",
YALEU/CSD/RR #702, May, 1989) discusses compilation using CPS as an
intermediate language and performing source-to-source transformations. He
implemented compilers for Pascal and BASIC to code for the MC68020. Speed
of the resulting code for some simple programs was comparable to the Apollo
Pascal compiler.

(I must say that I found this dissertation very interesting.)
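
For readers unfamiliar with the technique: in CPS every intermediate
result is handed to an explicit continuation instead of being
returned. A toy rendering in Lisp (an illustration, not Kelsey's):

    ;; direct style: (+ (* a b) c)
    (defun *k (a b k) (funcall k (* a b)))
    (defun +k (a b k) (funcall k (+ a b)))

    (defun madd-cps (a b c k)
      (*k a b #'(lambda (ab) (+k ab c k))))

    ;; (madd-cps 2 3 4 #'identity) => 10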

--
T. Kurt Bond, t...@wvlink.mpl.com, Kurt...@launchpad.unc.edu
--
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Launchpad is an experimental internet BBS. The views of its users do not
necessarily represent those of UNC-Chapel Hill, OIT, or the SysOps.
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

Jeffrey Mark Siskind

unread,
Aug 18, 1994, 12:23:06 AM8/18/94
to
In article <TMB.94Au...@arolla.idiap.ch> t...@arolla.idiap.ch (Thomas M. Breuel) writes:

   Yes, for putting scalars into structures, you can generate reasonably
   efficient code, but few CL compilers actually do.

Yes, but you were not talking about limitations of current CL compilers. You
were claiming that the semantics of CL was inherently deficient.

   However, when you nest those structures, put them into arrays, or pass
   them as arguments, you have extra pointer and heap overhead.
   Furthermore, the compiler cannot statically decide to optimize the
   pointer overhead away, since that is sometimes wrong (i.e., less
   efficient) and can also change semantics.

Here is an idea: Take the semantics of structures to be defined to be
indirect (as it is in current CL and Scheme). Then make structures immediate
as an optimization. There are only two ways you could possibly tell the
difference between an indirect structure and an immediate structure:

a. EQ
b. Mutating a slot. If the structure was implemented as an immediate structure
then mutating one copy would unsoundly fail to mutate other copies.

If a compiler could prove that a given structure type never was mutated and
never appeared as an argument to EQ it would be licensed to implement that
structure as an immediate structure. It might choose not to do so for large
structures since the cost of passing around large structures might outweigh
the cost of allocation, reclamation, and indirection. But it could use
automatically generated profile data to help determine the tradeoff. Or it
could ask the user, remember the responses between compilations, and allow the
user to subsequently change the decision on a per-structure basis. Having
this decision be orthogonal to the source code makes it easier to change this
decision without having to update the source code in a consistent fashion.

   It is incredibly useful to be able to specify reference vs. value
   semantics, and CL lacks the primitives for doing that.

How is value semantics anything more than a subset of reference semantics?
Modulo efficiency, the only thing I can see is mutation of an immediate
structure. That can be accomplished in reference semantics by making a new
copy of the structure with one slot modified and the others copied over. I.e.
in Scheme:

(define (set-car pair obj) (cons obj (cdr pair)))
(define (set-cdr pair obj) (cons (car pair) obj))

These can be defined as above and DEFSTRUCT can be modified to provide
analogous nonmutating SET functions along with the standard mutating ones.
This is all still within reference semantics. The compiler can simply
recognize when it optimizes a structure to be immediate and further eliminate
the copying involved with the nonmutating SET functions.
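
A sketch of what such a nonmutating setter could look like for a CL
DEFSTRUCT (the names here are made up for illustration):

    (defstruct point x y)

    ;; Nonmutating counterpart of (setf (point-x p) v): returns a
    ;; fresh POINT and leaves the argument untouched. Still
    ;; reference semantics; a compiler that proves POINT immediate
    ;; and unshared could elide the copy.
    (defun point-with-x (p new-x)
      (make-point :x new-x :y (point-y p)))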

Now all of this handles passing immediate structures as arguments, returning
them as results, assigning them to variables, and storing them in structure
or vector slots. A further optimization can be performed. If a compiler can
prove that there can be at most only one pointer to a structure then even if
the structure is mutated it can be implemented as an immediate structure. (If
there can be only one pointer to a structure then it would not be possible for
it to appear as both arguments to EQ.) Standard `linearity' analysis can be
used to make this determination. Thus if one created a vector of structures,
where the only pointers to those structures existed in that vector, even if
the structure slots were mutated, the compiler could still optimize this as a
vector of immediate structures. Similarly for structures slots containing
structures.

Now immediate vectors are a bit harder, since the length of the vector would
need to be determined at compile time if one wanted to have a structure or
vector of immediate vectors. But again standard `manifestness' analysis can be
used to determine the simple and most common instances of when this is the
case and allow this optimization to take place. Of course one would need a
nonmutating VECTOR-SET operation as a counterpart to VECTOR-SET! along the
lines of the above. But as before, this can be defined purely within and on
top of the current Scheme and CL language spec.
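
Such a VECTOR-SET is definable in a few lines (a sketch; the name is
the one proposed above, not a standard operation):

    ;; Nonmutating counterpart of VECTOR-SET!: returns a fresh copy
    ;; of VEC with element INDEX replaced.
    (defun vector-set (vec index new-element)
      (let ((copy (copy-seq vec)))
        (setf (aref copy index) new-element)
        copy))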

So as I see it the semantics of CL and Scheme are perfectly adequate in this
regard. It is just a matter of optimization.

ch...@labs-n.bbn.com

unread,
Aug 18, 1994, 12:15:08 PM8/18/94
to
In article <MARCOXA.94...@mosaic.nyu.edu> mar...@mosaic.nyu.edu (Marco Antoniotti) writes:

--> Never seen a thing as a "portable" assembler. Try to run MIPS code on
--> your 486 :)

I thought 'C' is/was the 'portable assembler'?


ch...@labs-n.bbn.com

unread,
Aug 18, 1994, 12:57:10 PM8/18/94
to
In article <TFB.94Au...@sorley.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
--> * Jeff Dalton wrote:
--> > I would agree that Lisp can do reasonably well on RISC machines,
--> > but the point of Lisp machines was not just to make Lisp fast
--> > but also to make it fast and safe at the same time and fast
--> > without needing lots of declarations.
-->
--> > Recent Lisp implementations (especially CMU CL) have gone a fair
--> > way towards making it easy to have safe, efficient code on RISC
--> > machines, but it may always require a somewhat different way of
--> > thinking. (Not a bad way, IMHO, but different from LM thinking
--> > nonetheless.)
-->
--> Could any of the lisp machines do fast floating point, without
--> declarations? I know maclisp was rumoured to be able to (on
--> stock hardware even!) but did it use declarations?

(dotimes (i 10000)
  (* 123.4 i))

each multiply seems to take 1.2 microseconds on my 25 MHz 32-bit
Explorer 2.

(dotimes (i 10000)
  (* 1.234d2 i))

appears to take almost the exact same amount of time.

(dotimes (i 10000)
  (/ 1.234d2 i)) ; now we're dividing!

also appears to take the same amount of time.

seems reasonably fast to me, given that it's *only* 25 MHz. clearly
faster devices like a Sparc or Alpha chip would do better still, although
unless you spend a lot of time doing the math, it'd be hard to see it in
your apps.

the declarations don't seem to make much difference, but
this is a poor test case.

the LispChip has built-in floating-pt. the Exp 1 did not, and was
slow about heavy floating-pt stuff. you wanted to be careful with your
code not to waste time computing any floating-pt value more than once.

-- clint

ch...@labs-n.bbn.com

unread,
Aug 18, 1994, 1:14:26 PM8/18/94
to
In article <TMB.94Au...@arolla.idiap.ch> t...@arolla.idiap.ch (Thomas M. Breuel) writes:
--> (think about how much space your
--> typical "struct { int x; double y; char z;};" takes as a CommonLisp
--> DEFSTRUCT)

my guess is that it would be 128 bits. 4 words, expecting word-alignment
behavior (well, that's machine-dependent stuff, I was thinking of a
32-bit machine).

sounds like you're suggesting that it comes out otherwise...

-- clint

Erik Naggum

unread,
Aug 18, 1994, 2:49:03 PM8/18/94
to
[ch...@labs-n.bbn.com]

| (dotimes (i 10000)
| (* 123.4 i)
| )
|
| each multiply seems to take 1.2 microseconds on my 25 MHz 32-bit
| Explorer 2.

this is really _outstanding_. on a SPARC 10, the following C program takes
3 seconds to run when given an argument, and an imperceptibly small amount
of time with none, i.e., 0.3 microseconds per operation.

main (int argc, char **argv)
{
    if (argc == 2) {
        int i;
        double d;
        for (i = 0; i < 10000000; i++)
            d = 123.4 * i;
    }
    return 0;
}

of the three Common LISP implementations I have running here, interpreted
performance for your LISP expression is:

GCL 0.6 seconds
CLISP 2.2 seconds
CMUCL 4.1 seconds

are you sure you meant _microseconds_, not milliseconds?

</Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.

ch...@labs-n.bbn.com

unread,
Aug 18, 1994, 3:36:55 PM8/18/94
to
In article <CuMwD...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
--> In article <CuL4n...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
-->
--> Not "necessarily slower", but in my experience it is more common in
--> real-world programming projects to encounter cases where C's
--> memory-management philosophy results in higher throughput and fewer
--> problems for interactivity and real-time response than one which
--> relies on GC.

isn't this a result comparable to that of declaring types? if I took a
casually written lisp program and converted it to C, I'd have to declare
lots more variable types than I did in lisp. I'd also have to use lots
of malloc/free calls, which I didn't in lisp. let us say that the
programs get tested with the exact same test suite... I'd expect the C
version to not exhibit the GC-related slowdowns that might be apparent
in the lisp version, because that behavior would be distributed
differently, even though the total would likely be exactly the same
amount of RAM allocated and GC'd.

my currently used Lisp dialects do the typical thing--generational GC,
which waits until a threshold is reached before GC'ing gen 0. a C
program GCs every time you let go of something (free it), and so the
time distribution is different. the lisp side's batched GC is the most
painful thing preventing "real-time" behavior--it occurs more-or-less
randomly, and has unpredictable duration.

some years ago, when i was porting Macsyma to the Explorer, I fiddled
with cons-areas, wondering if I could do a better job of controlling GC
occurrences myself. the idea was that I would allocate an area, work in
it, copy a result out of it and immediately free the area, avoiding
having to GC that area, cutting the cost of a GC when it finally did
occur. this was before generational GC had become available. Macsyma
is/was horrible about generating garbage.

-- clint

William Paul Vrotney

unread,
Aug 19, 1994, 5:57:43 AM8/19/94
to

What is your point, in one sentence?

ch...@labs-n.bbn.com

unread,
Aug 19, 1994, 5:10:48 PM8/19/94
to
In article <199408...@naggum.no> Erik Naggum <er...@naggum.no> writes:
--> [ch...@labs-n.bbn.com]
-->
--> | (dotimes (i 10000)
--> | (* 123.4 i)
--> | )
--> |
--> | each multiply seems to take 1.2 microseconds on my 25 MHz 32-bit
--> | Explorer 2.
-->
--> this is really _outstanding_. on a SPARC 10, the following C program takes
--> 3 seconds to run when given an argument, and an imperceptibly small amount
--> of time with none, i.e., 0.3 microseconds per operation.

unfortunately, it's totally bogus. Marty Hall, a person CLEARLY more
conversant with lisp compilers than I am, pointed out that the compiler
would have eliminated the multiply, since the result wasn't being used.
all I measured was 10000 loops of nothing.

the equivalent function, which didn't eliminate it:

(defun foo () ; 0.197
  (let ((x 3.217))
    (declare (optimize (speed 3) (safety 0)))
    (dotimes (i 10000)
      (declare (integer i))
      (setq x (* x (float i))))
    x))

10k times, single-float, best optimization. 0.197 seconds for the whole
thing. conses about 35 words, or some such--that's a dynamic
behavior, for reasons unknown to me. that's 20 microseconds per multiply.
still good, but not great. double-float time is 25 microseconds. divide
is actually *faster*, but only by about 1%.

disassembling this shows that it's only got 13 instructions, excluding
the FEF overhead. I don't know how many registers the LispChip has, but
it appears to be only one (!?) from looking at the assembly code. I
don't know the design, but the assembly code has several push and pop
instructions in it, which suggests that there's an on-chip
stack-register, and that there are two working registers for holding
numbers. the actual loop has three of these instructions in it, and my
(fading) knowledge of microprocessor design suggests that if they were
all registers they wouldn't be necessary here. or would you just have
different ones? been too long...

--> main (int argc, char ** argv)
--> {
--> if (argc == 2) {
--> int i;
--> double d;
--> for (i = 0; i < 10000000; i++)
--> d = 123.4 * i;
--> }
--> return 0;
--> }
-->
--> of the three Common LISP implementations I have running here, interpreted
--> performance for your LISP expression is:
-->
--> GCL 0.6 seconds
--> CLISP 2.2 seconds
--> CMUCL 4.1 seconds

those are decent numbers. interpreted runtime on the E2-25 is ~9.8
seconds for ten thousand cycles, single-float.

--> are you sure you meant _microseconds_, not milliseconds?

absolutely. the original number I gave was 0.012 seconds, but it was a
bogus number. the real one is 0.2 seconds.

Allegro 4.2 tells me it's 33 milliseconds for the compiled version,
i.e., about 3 microseconds per multiply. and the disassembly is
bigger, apparently. looks like 104 words, and 28 instructions, 8 of
them for overhead.

-- clint

J W Dalton

unread,
Aug 19, 1994, 12:52:50 PM8/19/94
to
t...@arolla.idiap.ch (Thomas M. Breuel) writes:

>In article <TFB.94Au...@oliphant.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>|Sounds pretty much
>|identical to that of C to me. A lot of CL implementations seem to
>|have rather poor I/O but that's because the I/O systems aren't well
>|written.

>CL's I/O system is also lacking some rather important primitives,
>like block read/block write.

1. You say "like". What other examples are there?

2. What exactly is the problem with lacking user-visible block
read and write? The underlying I/O system will still use block r/w.
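
For what it's worth, the kind of user-visible block read at issue is
roughly what X3J13 has adopted as READ-SEQUENCE in the draft ANSI
standard. A sketch of reading a whole binary file in one block
operation (the name SLURP-BYTES is made up for illustration):

    (defun slurp-bytes (pathname)
      ;; one block read into a specialized byte vector
      (with-open-file (s pathname :element-type '(unsigned-byte 8))
        (let ((buffer (make-array (file-length s)
                                  :element-type '(unsigned-byte 8))))
          (read-sequence buffer s)
          buffer)))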

>|However these
>|problems are in fact just as bad for other programs (for instance X
>|servers exhibit many of the same characteristics, and thrash VM
>|systems the same way as Lisp on resource-starved machines.

>Sure, but CommonLisp systems will resource-starve a machine much more
>quickly than an equivalent C program.

That depends, in part, on what you count as equivalent.

> The reason is that CL lacks
>important primitives for expressing some fundamental kinds of data
>abstractions in a space-efficient way (think about how much space your
>typical "struct { int x; double y; char z;};" takes as a CommonLisp
>DEFSTRUCT).

How much? Do you have some actual numbers?

(There are cases where Lisps typically represent things more
efficiently than C -- consider e.g. a naive implementation of cons in
C using malloc.)

> Also, rampant consing by the standard libraries is a real
>problem in CL.

What standard libraries are those? Which Common Lisps?

Do you know that a number of Lisps never cons unless required to
do so by the user's program?

> And, most systems still don't have efficient floating
>point arguments/return values for separately compiled functions.

Is this an inherent problem or just a matter of implementation
traditions?

>Yes, you are right, neither VM behavior of CommonLisp nor garbage
>collection are the problem. However, VM thrashing and excessive
>garbage collections are frequently observed symptoms of problems with
>the CommonLisp language.

I wonder what people are doing so that this is so. I don't usually
have much problem with excessive GC and when I do it's often easy to
fix.

>|CL's I/O model is basically buffered streams.

>CL has _typed_ buffered streams. Whatever the CL I/O design was
>supposed to achieve, it fails to achieve in practice. Those stream
>types at best cause lots of headaches.

Do you have any examples?

The problem with this kind of account is that it assumes everyone
knows what you're talking about. It lines up with what lots of
people who dislike Lisp already believe, but doesn't say just how
bad the problem actually is. People therefore assume it's as
bad as they always thought it was, or worse, since they may not
have considered all your examples (what exactly is wrong with
typed streams, for instance?).

-- jeff

J W Dalton

unread,
Aug 19, 1994, 1:26:31 PM8/19/94
to
ki...@triple-i.com (Kirk Rader) writes:

>In article <CuL3J...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <CuDKK...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
>>>In article <CuA8u...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>>In article <TFB.94Au...@sorley.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:

>[...]
>>
>>I find this rather strange. What is the mismatch in I/O models?
>>And why would Lisp's lightweight processes be a problem? Is the
>>OS not expecting to give timer interrupts? Memory management I
>>can almost see, but what exactly is going wrong? Berkeley Unix
>>tried to take Franz Lisp into account. Have things moved backwards
>>since then?

>As one concrete example of the kind of thing to which I referred, SGI's
>filesystem substrate maintains its recently-deallocated file cache
>buffers in a "semi-free" state on the theory that they will quickly be
>reallocated for the same purpose. Watching the output of gr_osview as
>an I/O intensive lisp application executes you can easily see conflict
>between the different buffering and memory-management policies being
>used by the OS's filesystem and virtual-memory management mechanisms
>and by the I/O and memory-management mechanisms of at least two
>commercial Common Lisp implementations and one shareware Scheme
>implementation with which I am familiar.

What exactly is the problem? I don't have an SGI system, so I
can't watch the output of gr_osview and easily see what's happening.
Is there any reason to suppose it's an inherent Lisp problem rather
than a poorly tuned implementation?

>[...]

>>Sure there is. It might be fine for Lisp A but not for Lisp B.
>>Besides, I/O and scheduling and much of memory management is
>>the OS, not the hardware. The OS on ordinary non-Lisp machines
>>could change to work better with Lisp.

>All these arguments about "lisp A vs lisp B" are red herrings.

Not when someone says a Lisp machine is bound to be fine for Lisp.

>On a given lisp-machine you could only be described as perverse for using
>any other dialect than one for which the machine was designed.

Why is that any more perverse than using gcc rather than the compiler
from the hardware manufacturer? Who says a Lisp machine has to be for
only one Lisp and an ordinary machine can be for any C and indeed a
whole range of languages? Even existing Lisp machines weren't
that restricted!

>I understand your real argument to be that it is in principle possible
>to design lisp-friendly OS's and Unix- (or other particular OS-)
>friendly lisps. That is undeniably true, but market forces have yet
>to produce viable examples of such implementations, so far as I can
>tell.

Strictly speaking, market forces don't produce implementations;
surely they select among them instead. Moreover, they don't have to
respect technical merit or performance or anything else.

Now, it was fairly easy on VAXes to find cases where Franz Lisp was
faster than C. Moreover, for some time after Common Lisps appeared,
Franz fit better with C and Unix than they did. Some CLs have passed
Franz in some ways, but then development of Franz stopped years ago.
Now we're starting to see Lisps that compile to C and pass arguments
on the C stack (which KCL did to some extent in the mid 80s), that are
packaged as shared libraries, that can be linked into C programs on a
more or less equal basis, etc. There are also roles for Lisp that
don't involve competing with C but complementing C instead. What we
now think of as a typical Lisp didn't have to be typical and may
not be typical in the future. Whether market forces will take any
notice is a different question.

>>I didn't say it was easy. But it's true that I don't think it's
>>as hard as some other people seem to. Moreover, I blame particular
>>Lisp implementations rather than "Lisp".

>"Lisp", as opposed to particular lisp implementations, can only be
>understood to refer to the concept of any member of a particular
>family of programming languages.

Sure, but it's not confined to already existing members.

> The common usage of "lisp" implies a
>language with certain features that make it difficult to implement in
>a way that won't conflict with a Unix workstation's or PC's model of
>the universe.

Well, so you say. (Actually, there's not a very good fit between
C and some PC operating systems.) The standard machine for Lisp
used to be the PDP-10. I suspect that Lisp machines have distorted
perceptions of how well Lisp fits with "mainstream" machines.
There are often actual problems with linkers, memory management,
etc, but I don't think there's a fundamental mismatch with the
hardware or even, in most cases, with the software.

> The richer and more featureful the lisp dialect, the
>more difficult it has proven to be.

Common Lisp didn't have to be implemented the way it typically
was.

>>Well, I (at least) have never made any claim about the number of
>>machine instructions or indeed any "micro-level measure of compiler
>>efficiency". So why are you saying this in response to me?

>Because your earlier contribution to this thread came in the context
>of, and both quoted and added, various statements about implementation
>details and compiler efficiency that simply ignored the more general
>issues that cause them to address only one small aspect of the whole
>problem.

I still don't know what from my messages you have in mind.

> My central point all along has been that the choice of which
>language to use for a particular project is a complex one due to the
>many trade-offs entailed no matter which language is used.

But I agree with that.

> In the
>absence of any contextual information about the nature of the
>platform, the particular language implementations, and the
>requirements of the application both of the statements "Lisp is better
>than C" and "C is better than lisp" are either meaningless or false,
>depending on one's semantic theory.

Sure, but I don't say either of those things.

-- jd

J W Dalton

unread,
Aug 19, 1994, 1:49:01 PM8/19/94
to
ki...@triple-i.com (Kirk Rader) writes:

>In article <CuL4n...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:

>>My point here is that if a claim says "Lisp", the supporting
>>evidence should be more general that popular commercial CLs.

>I have also used various commercial and shareware non-Common Lisp
>dialects on both Unix workstations and PC-class machines with similar
>results. My point is not about any particular implementation of any
>particular dialect, but about language-intrinsic features of lisp as a
>class of programming language.

Yes, but I disagree with that point, to a fair extent, and even
more with the way it has been argued. That you have observed cases
that you've understood in a certain way doesn't show there's a
language-intrinsic problem. You may well be right about typical
implementations or most applications or something like that.

Of course, there will be some applications for which Lisp works
less well than C. But so far as I can tell, there's no language-
intrinsic feature that prevents there being cases where Lisp works
as well as or better than C. Of course, maybe there's some lesser
consequence we should consider.

> I did draw some particular examples
>from a particular project implemented using a particular commercial
>Common Lisp, but I also very explicitly stated that these were
>intended to illustrate points about "lisp", as the term is commonly
>understood, in general.

Sure, but illustration and demonstration are two different things.
Illustrations would be useful if they let one see how the general
claim was correct. So far, yours have not done so, at least not
for me.

>>A straw man, since no one has said otherwise.

>Your continuing focus on "lisp" as opposed to "particular lisp
>implementations" shows that you are saying otherwise.

Bull. I have never thought or claimed that Lisp is (you said
"connotes") "any language based on the lambda-calculus or any
one that has generic features to support higher-order functions
and a functional programming paradigm". That is, I agree that
"Lisp connotes not not just" any such language.

It's possible, I suppose, that someone else disagrees, thus
making it not a straw man, but it's not me.

>>Do you count reference counting as GC?

>I am indifferent as to how you choose to categorize different
>memory-management strategies, other than the basic difference between
>languages which automatically allocate and deallocate memory and those
>which only do so under explicit programmer control.

So you do count reference counting in your claim.

>Some form of
>automatic memory allocation and recovery strategy is central to most
>people's idea of what a lisp dialect entails.

It's also pretty essential to proper implementation of strings
in Basic, but people don't go around saying Basic is inherently
unsuited to workstations and PCs.

> If you choose to
>include in the set of "lisps" some language which has a malloc() /
>free() style of memory management, then that dialect would, of course,
>be much less prone to memory-management conflicts with widely-used
>OS's.

Well, I certainly include implementations that use reference counting
and even ones that don't reclaim at all (see e.g. JonL White's paper
in the 1980 Lisp conference), since I'm not planning to define "Lisp"
so as to do violence to past usage. Now if you want to show that
automatic reclamation inherently causes a mismatch with worstations
and PCs, or whatever your acctual claim is, please do so. I would
like to understand what the problem is.

>>Besides, there are a number of cases where Lisp's alloc + GC will be
>>faster than "manual" alloc and dealloc.

>And such cases are among those for which I have explicitly advocated
>using lisp earlier in this and related threads.

So what _is_ supposed to be the problem with Lisp, then?

>>Now, if someone wants to argue that (say) a language with GC is
>>necessarily slower than one that works like C, let them do so.

>Not "necessarily slower", but in my experience it is more common in


>real-world programming projects to encounter cases were C's
>memory-management philosophy results in higher throughput and fewer
>problems for interactivity and real-time response than one which
>relies on GC.

Since this is probably not clear, let me say it explicitly.
If someone came along in comp.lang.lisp and said

in my experience it is more common in real-world programming
projects to encounter cases where C's memory-management philosophy
results in higher throughput and fewer problems for interactivity and
real-time response than one which relies on GC.

I would not disagree with them.

-- jd

J W Dalton

unread,
Aug 19, 1994, 2:10:50 PM8/19/94
to
ki...@triple-i.com (Kirk Rader) writes:

>In article <TFB.94Au...@oliphant.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:

>[...]

>>
>>[Talking about Common Lisp here]
>>
>>CL's I/O model is basically buffered streams. Sounds pretty much
>>identical to that of C to me. A lot of CL implementations seem to
>>have rather poor I/O but that's because the I/O systems aren't well
>>written.

>C's "minimalist" philosophy makes it much easier for the
>implementation to provide alternative library entry points [open() vs
>fopen(), etc.] and greater opportunity for the programmer to
>circumvent whatever problems there are with a given library
>implementation.

I've found that Lisp typically gives more opportunity to do that.

Anyway, in Franz Lisp, "ports" are almost directly FILE *s.
Nothing prevents there being fd-based operations as well.
(Indeed, there's an fdopen.)

>It is silly to suggest that C and Common Lisp are
>really on a par when it comes to the amount semantic "baggage" they
>carry for I/O or anything else. If that were true, what advantage
>would lisp _ever_ have?

That Lisp has something extra doesn't mean it must always have
excess baggage.

>C's memory-management philosophy is to rely on the standard-library's
>interface to the OS. I do not see how this could be more different
>from lisp's reliance on a built-in memory-management scheme.

What do you mean? Lisp uses the same OS operations C does.

>And what possible relevence could it have that other software systems also
>exhibit similar performance problems to those of lisp?

It suggests that the problems people observe with Lisp may not be
due to Lisp, since they can (evidently) have other causes.

>>Can you give details? I have spent some time watching large CL (CMUCL)
>>programs on Suns, and other than VM problems (and CMUCL's garbage
>>collection is not exactly `state of the art') I find they do fine.
>>And they weren't even very well written.

>I specifically referred to using gr_osview on SGI's. In particular,
>it is easy to observe conflicts between lisp's memory management and
>I/O mechanisms and Irix's filesystem and memory-management mechanisms.

Can you say something more about this for those of us who can't
use gr_osview on SGIs?

>> Lightweight processes are
>>not part of CL BTW.

>But they are part of almost every "serious" implementation of every
>lisp dialect with which I am familiar, and I was not talking just about
>some particular implementation of some particular dialect.

But they're not part of every serious implementation of every
Lisp dialect, unless you make it true by definition of "serious".

And what is the problem for Lisp lw processes anyway?

>The question isn't whether it is possible to write programs in any
>particular language that perform well, but rather for any given
>programming task do the features of the language make it easier or
>harder to achieve acceptable performance? The semantics of a language
>which includes GC style memory management, lexical closures, so-called
>"weak types", etc. has consciously chosen expressive power in favor of
>highest-possible performance. In many cases that is an appropriate
>choice, but in many cases it isn't.

The semantics of Lisp say objects (normally) have indefinite extent.
Implementations typically use GC as a way to reuse storage. Whether
this has to make it harder for programmers to obtain acceptable
performance is not clear. It depends on the implementation and
the application.

Inclusion of lexical closures in a language does not slow down
cases that don't use them. So how is this part of a consistent
choice of expressive power over highest possible performance?
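
For instance (a two-line illustration, not from the original
exchange), only the second of these ever allocates a closure
environment; the first compiles exactly as it would in a
closure-free language:

    (defun add1 (x) (+ x 1))                      ; no closure
    (defun make-adder (n) #'(lambda (x) (+ x n))) ; captures N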

-- jd

J W Dalton

unread,
Aug 19, 1994, 2:13:02 PM8/19/94
to
ki...@triple-i.com (Kirk Rader) writes:

>>So, if Irix can't cope with Lisp, that's a problem with Irix. And
>>that problem will not just bite you with Lisp, but also with many
>>other kinds of applications.

>As someone who makes his living creating software for SGI's I cannot
>afford to use any tool that does not run well under Irix, whether or
>not Irix is particularly well designed. The fact is that there is a
>whole SGI software industry, and if lisp evangelists would like to see
>more cases of lisp being used there, they would have a greater
>likelihood of success by suggesting that the lisp vendors do a better
>job of accomodating the platform than that the platform vendor
>accomodate lisp. It's a simple matter of economics.

That I agree with. But do you accept that Lisp vendors could
do a better job, opr are there inherent properties of Lisp that
prevent them from doing so (or from doing it to a sufficient extent)?

J W Dalton

unread,
Aug 19, 1994, 2:16:08 PM8/19/94
to
vro...@netcom.com (William Paul Vrotney) writes:

>In article <CuMuH...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:

>> ...
>> problem. My central point all along has been that the choice of which
>> language to use for a particular project is a complex one due to the

>> many trade-offs entailed no matter which language is used. [...]

>Instead of all this complex analysis, which I'm not sure is going anywhere,
>let's try some simple stuff for a change. Let's try this mind experiment: IF
>there were a Lisp compiler that compiled as efficiently as C (or even close
>to it) and IF your boss said that you can program in either Lisp or C, what
>would your choice be? Case closed (one way or the other). I hope.

>There are so many more interesting aspects of Lisp that this news group can
>be used for.

I agree. I wish people would stop using it to attack Lisp.
Many of the same criticisms of Lisp could be made in a more
constructive fashion and could include something to let us see
exactly what and how bad the problems are.


J W Dalton

unread,
Aug 19, 1994, 2:19:31 PM8/19/94
to
t...@arolla.idiap.ch (Thomas M. Breuel) writes:

>In article <QOBI.94Au...@qobi.ai> qo...@qobi.ai (Jeffrey Mark Siskind) writes:
>|In article <TMB.94Au...@arolla.idiap.ch> t...@arolla.idiap.ch (Thomas M. Breuel) writes:
>|
>| The reason is that CL lacks
>| important primitives for expressing some fundamental kinds of data
>| abstractions in a space-efficient way (think about how much space your
>| typical "struct { int x; double y; char z;};" takes as a CommonLisp
>| DEFSTRUCT).
>|
>|What primitives does CL lack? The Scheme compiler that I am writing provides
>|a DEFINE-STRUCTURE that is essentially a subset of the CL DEFSTRUCT. And it
>|can produce *exactly* the code "struct { int x; double y; char z;};" for
>|(DEFINE-STRUCTURE FOO X Y Z) when type inference determines that the X slot
>|will only hold exact integers, the Y slot only inexact reals, and the Z slot
>|characters. Note that it does this without any declarations at all. I presume
>|that the same thing can be done for CL.

>Sorry, I mixed two arguments into one.

>Yes, for putting scalars into structures, you can generate reasonably
>efficient code, but few CL compilers actually do.

>However, when you nest those structures, put them into arrays, or pass
>them as arguments, you have extra pointer and heap overhead.

Which is what, exactly?

>Furthermore, the compiler cannot statically decide to optimize the
>pointer overhead away, since that is sometimes wrong (i.e., less
>efficient) and can also change semantics.

>It is incredibly useful to be able to specify reference vs. value
>semantics, and CL lacks the primitives for doing that.

In many cases, to an extent, ...

Thomas M. Breuel

unread,
Aug 20, 1994, 2:52:23 PM8/20/94
to
In article <330d8n$s...@info-server.bbn.com> ch...@labs-n.bbn.com writes:
|isn't this a result comparable to that of declaring types? if I took a
|casually written lisp program and converted it to C, I'd have to declare
|lots more variables types than i did in lisp.

The problem is that even if you add the same number of (or more)
declarations to your Lisp program, on many Lisp implementations it
would still not run as fast as the corresponding C program. And
there is no standard way of figuring out why.

For example, in CMU CL, at some point, FLOOR was much slower than
TRUNCATE because FLOOR didn't get inlined. In Lucid, adding redundant
declarations could slow down your program. Trying to track down the
sources of such problems is just as hard as trying to track down a
compiler bug or a pointer bug in C (actually, harder than a pointer
bug if you have Purify for C...).

Another problem is that if you want to get memory utilization in
CommonLisp that is as efficient as in C, you often have to change your
program logic radically, in ways that are much less abstract than in
C. For example, arrays of structures end up having one pointer and
one structure header overhead for each element ("struct { int x,y; }
foo[N];"). Structures having multiple "small" objects do not get
packed by most CommonLisp implementations ("struct { char x,y,z; }
foo[N];"). Structures of fixed-size arrays have a pointer plus an
array header overhead for each element ("struct { float
vect1[3],vect2[3]; } foo[N];").

The only way to program around those limitations is to convert
everything to FORTRAN-style code, where you don't use structures and
rely on monotyped arrays of characters, bytes, and floating point
numbers and use array indexes as pointers. If you do program
FORTRAN-style, it is usually relatively easy to get good performance
in CommonLisp (there are still some gotchas to watch out for), but
most people choose Lisp in order to have convenient and powerful means
of expressing their algorithms at their disposal, not to be squeezed
into FORTRAN-style programming.
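
A sketch of the FORTRAN-style workaround being described (illustrative
names, not from the original post): parallel monotyped arrays, with
indexes standing in for pointers.

    ;; instead of an array of (X . Y) structures:
    (defvar *xs* (make-array 1000 :element-type 'double-float
                                  :initial-element 0d0))
    (defvar *ys* (make-array 1000 :element-type 'double-float
                                  :initial-element 0d0))

    (defun nudge-x (i dx)
      ;; unboxed access by index, no per-element pointer or header
      (declare (fixnum i) (double-float dx))
      (incf (aref *xs* i) dx))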

Thomas.

Kirk Rader

unread,
Aug 21, 1994, 7:54:40 AM8/21/94
to
In article <vrotneyC...@netcom.com> vro...@netcom.com (William Paul Vrotney) writes:
>In article <Cuou4...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
>
>>
>> In article <vrotneyC...@netcom.com> vro...@netcom.com (William Paul Vrotney) writes:
>>

[...]

>
>What is your point, in one sentence?
>
>--
>Bill Vrotney - vro...@netcom.com


That different kinds of languages are best suited to different kinds
of tasks, so saying that "language X is better than language Y" is
false or meaningless without specifying better for _what_.

Your previous posting amounted to saying "lisp is better because,
other things being equal, wouldn't you rather use it?" I am
paraphrasing, but that is what I understood you to be saying. I have
been explicitly talking all along about cases for which other things
are _not_ equal. If you really know that the cost of implementing
lisp's more powerful features relative to a C-like language will not
in fact result in unacceptable performance for a given application,
then it is true either that lisp is better (if you also intend to use
those features) or at least, as you stated, the choice of language is
simply a matter of taste, or organizational requirements, or whatever.
In my experience, however, there is a significant class of
applications for which the cost of implementing lisp's features has
proven to be too high, such that C-like languages are in fact better
in those cases just as the existence of lisp's more powerful features
make it a better choice in others.

Kirk Rader

Kirk Rader

unread,
Aug 21, 1994, 9:42:56 AM8/21/94
to
In article <CusMt...@festival.ed.ac.uk> je...@festival.ed.ac.uk (J W Dalton) writes:
>ki...@triple-i.com (Kirk Rader) writes:
>
>>In article <CuL4n...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>

[...]

>
>Yes, but I disagree with that point, to a fair extent, and even
>more with the way it has been argued. That you have observed cases
>that you've understood in a certain way doesn't show there's a
>language-intrinsic problem. You may well be right about typical
>implementations or most applications or something like that.

It is hardly fair to excerpt only those quotes from a long thread
which refer to specific examples and then complain that the argument
is based solely on particular implementation details. This thread has
been going on for weeks now (hopefully it won't for weeks more!) and
most of the concrete examples to which you refer were offered as
"existence proofs" to people who claimed that arguments based solely
on general principles were unconvincing. I do not blame someone who
has never encountered these kinds of unacceptable performance problems
(presumably because they have been working in a problem domain to
which lisp is well-suited) for requiring concrete examples. I
consider it false to say that in this thread I have relied solely on
such specific examples.

>
>Of course, there will be some applications for which Lisp works
>less well than C. But so far as I can tell, there's no language-
>intrinsic feature that prevents there being cases where Lisp works
>as well as or better than C. Of course, maybe there's some lesser
>consequence we should consider.

I have repeatedly agreed that there are applications for which
lisp is better suited than C.

[...]

>
>Sure, but illustration and demonstration are two different things.
>Illustrations would be useful if they let one see how the general
>claim was correct. So far, yours have not done so, at least not
>for me.

But you seem to have ignored the many other quotes in many messages in
this same thread that did refer to the kind of general principles that
you claim are lacking in my argument. Rather than repeat a large
number of them here, let me ask the following question. Since lisp's
semantics are unarguably richer and more powerful than C's, how do you
expect any implementation to obtain them for free? It seems
elementary to me that if a language is intrinsically more powerful, it
will be intrinsically more complex and have more overhead in its
implementation. For many applications, the greater expressive power
of lisp more than pays for itself. For many others, it doesn't. It
is an open question for any given application into which class it
falls.

[...]

>
>Bull. I have never thought or claimed that Lisp is (you said
>"connotes") "any language based on the lambda-calculus or any
>one that has generic features to support higher-order functions
>and a functional programming paradigm". That is, I agree that
>"Lisp connotes not not just" any such language.

Bull yourself. You have several times tried to argue that lisp's
performance "problems" (the quotes around "problems" are to emphasize
that I consider them real problems only for some applications, since
you seem to be missing that point fairly consistently) are not real,
based on how it could run on hypothetical platforms or how it could
evolve so as to avoid the features that have been performance
stumbling blocks. I have been explicitly referring to existing
dialects running on current platforms, and used the word "connotes"
deliberately so as to emphasize the particular usage of the word
"lisp" I was referring to.

>>>Do you count reference counting as GC?
>
>>I am indifferent as to how you choose to categorize different
>>memory-management strategies, other than the basic difference between
>>languages which automatically allocate and deallocate memory and those
>>which only do so under explicit programmer control.
>
>So you do count reference counting in your claim.

As I said in the quote which I left in, above, I do not count
reference counting per se either in or out of my "claim". There are,
for example, a number of standard idioms used in C and C++ that use
reference counting, sometimes to good effect and sometimes not, but
which still never allocate anything "behind the programmer's back", as
it were. I do not count this as a GC-based approach, even though
reference counting can be used to implement a GC. The critical
difference is that in the non-GC use of reference counting, even when
a "deallocation" is deferred due to a positive count, the block of
memory by definition is still not "garbage".

[...]

>
>It's also pretty essential to proper implementation of strings
>in Basic, but people don't go around saying Basic is inherently
>unsuited to workstations and PCs.

"People" may not, but I certainly would say that the class of
applications for which Basic is well-suited is probably smaller than
the classes of applications for which either lisp or C is well-suited.
I also am not among those "people" who say that "lisp is inherently
unsuited to workstations and PCs." I have only ever claimed that
there exist applications for which lisp is not particularly
well-suited, just as there are applications for which C or any other
particular language is not well-suited.

[...]

>
>Well, I certainly include implementations that use reference counting
>and even ones that don't reclaim at all (see e.g. JonL White's paper
>in the 1980 Lisp conference), since I'm not planning to define "Lisp"
>so as to do violence to past usage. Now if you want to show that
>automatic reclamation inherently causes a mismatch with workstations
>and PCs, or whatever your actual claim is, please do so. I would
>like to understand what the problem is.

This particular problem is that for some applications GC is not the
optimum memory-management strategy, even though there are applications
for which it is. My rule of thumb is that if an application needs to
make many small allocations, GC is likely to be the ideal memory
management strategy. If an application needs to make only a few but
large allocations, malloc / free is likely to be more efficient for
the application as a whole.

[...]

>
>So what _is_ supposed to be the problem with Lisp, then?

Again, "the problem with Lisp" is only a problem for those
applications for which lisp's semantics are not a good fit. Of
course, there are also applications for which lisp's semantics are a
better fit than, say, C's, so in those cases "the problem" would be
with C. What started all this was my taking exception with claims
that lisp was always or almost always as good or better a choice than
C or C++, and that all claims of it being "too big" or "too slow" for
any particular application were ill-founded.

[...]

>
>Since this is probably not clear, let me say it explicitly.
>If someone came along in comp.lang.lisp and said
>
> in my experience it is more common in real-world programming
> projects to encounter cases where C's memory-management philosophy
> results in higher throughput and fewer problems for interactivity and
> real-time response than one which relies on GC.
>
>I would not disagree with them.
>
>-- jd

Such sagacity! :-)

Kirk Rader


Kirk Rader

unread,
Aug 21, 1994, 10:23:54 AM8/21/94
to
In article <Cusnx...@festival.ed.ac.uk> je...@festival.ed.ac.uk (J W Dalton) writes:

[...]

>
>That I agree with. But do you accept that Lisp vendors could
>do a better job, or are there inherent properties of Lisp that
>prevent them from doing so (or from doing it to a sufficient extent)?
>


Well, both. I know that some vendors, at least, are working very hard
to correct many of the particular kinds of performance issues I have
raised. But I just don't see how one can think that a language which
is inherently more powerful than another has not made some performance
trade-offs in order to achieve that extra power. So I expect that it
will always be the case that there will be some applications that are
best done in a lower-level language. I don't anticipate a time when
some "one size fits all" language will have been developed that is
suitable for every type of application.

Kirk Rader

Kirk Rader

unread,
Aug 21, 1994, 10:13:42 AM8/21/94
to
In article <Cusnu...@festival.ed.ac.uk> je...@festival.ed.ac.uk (J W Dalton) writes:
>ki...@triple-i.com (Kirk Rader) writes:
>
>>In article <TFB.94Au...@oliphant.cogsci.ed.ac.uk> t...@cogsci.ed.ac.uk (Tim Bradshaw) writes:
>

[...]

>


>I've found that Lisp typically gives more opportunity to do that.

Well then, I can only say our experiences have been different.

>
>Anyway, in Franz Lisp, "ports" are almost directly FILE *s.
>Nothing prevents there being fd-based operations as well.
>(Indeed, there's an fdopen.)

Nothing prevents it, perhaps, but nothing particularly encourages it
either.  The fact that particular implementations may provide
particular hooks into the underlying OS just seems to me to confirm
that one must, from time to time, bypass lisp's higher-level
functionality.  QED
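
To make "bypassing" concrete, the sort of thing I have in mind looks
roughly like this.  Note that the foreign-function declaration below
is hypothetical pseudocode -- every implementation spells its FFI
differently -- and SLURP-FD is a name made up for the sketch:

;; Hypothetical FFI declaration wrapping read(2) directly:
(def-foreign-function (unix-read "read")
  ((fd :int) (buf :pointer) (count :int))
  :returning :int)

(defun slurp-fd (fd buffer)
  ;; Hand the OS a preallocated buffer, skipping the stream layer
  ;; entirely, exactly as C code holding the fd would:
  (unix-read fd buffer (length buffer)))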

[...]

>
>That Lisp has something extra doesn't mean it must always have
>excess baggage.

How do you get the "something extra" for free?

[...]

>
>What do you mean?  Lisp uses the same OS operations C does.

And it imposes a significantly more complex additional layer of
functionality on top of it. If this additional complexity pays for
itself, as it does in many cases, well and good. But in many other
cases, the additional complexity doesn't pay for itself. In those
cases I would consider it "excess baggage" (as opposed to "useful or
necessary baggage").

[...]

>
>It suggests that the problems people observe with Lisp may not be
>due to Lisp, since they can (evidently) have other causes.

Or, as I believe is actually the case, that other systems sometimes
suffer symptoms from the same or similar causes as lisp does.

>

[...]

>
>Can you say something more about this for those of us who can't
>use gr_osview on SGIs?

Irix has a fairly complex buffer-management scheme of its own,
implemented at the lowest level of the filesystem and VM substrates.
Because of the multiple layers of functionality referred to above,
the I/O mechanisms of the particular lisp implementation I was using
in this example caused unnecessary cache and page thrashing: more
memory was allocated per I/O operation than was necessary, or than is
typical in applications that use the standard library calls directly.
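
The allocation pattern is easy to sketch with standard stream
operations (READ-SEQUENCE here is the ANSI operator; treat the sketch
as illustrating the pattern, not that implementation's actual
internals):

;; Consing a fresh buffer per operation -- the layering problem:
(defun read-chunk-fresh (stream n)
  (let ((buf (make-array n :element-type '(unsigned-byte 8))))
    (read-sequence buf stream)
    buf))

;; Reusing one buffer -- roughly what stdio-based C code does:
(defvar *io-buffer*
  (make-array 8192 :element-type '(unsigned-byte 8)))

(defun read-chunk-reused (stream)
  (read-sequence *io-buffer* stream))   ; returns the count read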

[...]

>
>But they're not part of every serious implementation of every
>Lisp dialect, unless you make it true by definition of "serious".

If you want to exclude that particular example from your consideration
on such grounds, by all means do so. For the particular application
on which I work and which I was using as an example, we could not use
any implementation of any dialect that did not include such a
facility, since it is a requirement of the design of our suite of
integrated applications.

>
>And what is the problem for Lisp lw processes anyway?

The same as with lisp memory-management. The additional layer of
OS-like functionality on top of the real OS.

[...]

>
>The semantics of Lisp say objects (normally) have indefinite extent.
>Implementations typically use GC as a way to reuse storage. Whether
>this has to make it harder for programmers to obtain acceptable
>performance is not clear. It depends on the implementation and
>the application.
>
>Inclusion of lexical closures in a language does not slow down
>cases that don't use them. So how is this part of a consistent
>choice of expressive power over highest possible performance?
>
>-- jd

Because one can only avoid the overhead inherent in using lexical
closures by choosing not to use them (without even raising the issue
of whether they get used in the run-time system, outside of the
application programmer's control).  If one must exercise greater
awareness of performance issues in order to specifically avoid using
exactly those features of lisp that make it more powerful, in order
to achieve acceptable performance for a particular application, then
how could it be considered anything other than a handicap?  For such
an application I would expect the programmer to be more productive,
and the application to require less debugging and performance tuning,
if it had been developed in C to start with.  For other applications,
where the nature of the application is such that lisp's more powerful
features are an asset rather than a liability, the advantage would
clearly go to lisp.
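
A small example of what "choosing not to use them" amounts to
(portable Common Lisp; whether the compiler actually honors the
declaration is implementation-dependent):

;; This closure's environment must normally live on the heap,
;; because the closure can escape its creator:
(defun make-counter ()
  (let ((count 0))
    #'(lambda () (incf count))))

;; When a closure provably can't escape, a DYNAMIC-EXTENT declaration
;; permits the compiler to stack-allocate its environment instead:
(defun sum-vector (v)
  (let ((total 0))
    (flet ((add (x) (incf total x)))
      (declare (dynamic-extent #'add))
      (map nil #'add v))
    total))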

Kirk Rader

Kirk Rader

unread,
Aug 21, 1994, 10:37:14 AM8/21/94
to

Since you included a quote from me in this, I assume you consider me
to be one of those who has been attacking lisp. I do not feel that to
be the case. I also think that it is a little hypocritical to
criticize the same line of argument for being both "too dependent on
illustrative examples" and "not including anything to see what the
problems are" (my paraphrase) as you have in this set of related
messages.

My repeatedly emphasized point is not deep, mysterious, hard to grasp,
or even requiring much in the way of complex justification or
analysis. A higher level language like lisp will have made
performance trade-offs making it a better choice of language for some
applications and a worse choice for others relative to a lower-level
language like C. All of the lengthy thread which has ensued is, I
believe, the result of people who feel so threatened by any suggestion
that there can be any application for which lisp is not the best
choice of implementation language that they have felt it necessary to
vehemently attack what were mild observations of (to me, at least)
obvious truths.

Kirk Rader

William G. Dubuque

unread,
Aug 21, 1994, 2:10:19 PM8/21/94
to
From: ch...@labs-n.bbn.com
Date: 18 Aug 1994 19:36:55 GMT
...

Some years ago, when I was porting Macsyma to the Explorer, I fiddled
with cons-areas, wondering if I could do a better job of controlling GC
occurrences myself.  The idea was that I would allocate an area, work in
it, copy a result out of it, and immediately free the area, avoiding
having to GC that area and cutting the cost of a GC when it finally did
occur.  This was before generational GC had become available.  Macsyma
is/was horrible about generating garbage.

The problem of "intermediate expression swell" is innate in many
computer algebra calculations and is not necessarily specific to
Macsyma. In fact Macsyma was one of the driving forces behind Moon's
original ephemeral GC implementation for MIT Lispms, as well as many
other features of MacLisp/Lispm-lisp that made their way into CLtL2.
Of course once can optimize using temporary consing areas, but if you
go too far you are almost doing the same thing as you would in
C with malloc/free.
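
The shape of that experiment, roughly (the Lisp Machine operator
names here are from memory and some are invented for the sketch, so
read this as pseudocode):

;; Make a scratch area, cons the garbage-heavy work into it, copy
;; the one surviving result out, then discard the whole area rather
;; than ever GCing it:
(defvar *scratch-area* (make-area ':name 'temporary-results))

(defun compute-and-copy (expr)
  (let ((default-cons-area *scratch-area*)) ; special-variable binding
    (let ((result (simplify expr)))         ; SIMPLIFY is hypothetical
      (prog1 (copy-out-of-area result)      ; hypothetical copier
        (reset-temporary-area *scratch-area*)))))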

Paul Fuqua

unread,
Aug 21, 1994, 9:04:46 PM8/21/94
to
Date: 19 Aug 1994 21:10:48 GMT
From: ch...@labs-n.bbn.com

Disassembling this shows that it's only got 13 instructions, excluding
the FEF overhead.  I don't know how many registers the LispChip has, but
it appears to be only one (!?) from looking at the assembly code.

At the macroinstruction level, it's a stack machine, with some shortcuts
and warts in various places. Most instructions leave their result(s) on
the stack, and args and locals are kept there (the top 1K is buffered
on-chip). At the microcode level, it's mostly register-to-register with
32 pseudo-two-ported and 992 scratch registers (and at times in the past
we had experimental microcodes that implemented stack-accumulator and
register-window architectures).

As for the multiply speed, my memory and the 1987 chip spec suggest that
the only hardware assist was support for a two-bit-per-step Booth
algorithm for 32-bit integer multiply. Plus, floats are consed; header
word plus 32 or 64 bits of data (but consed in a special area that's
GC'd quickly and frequently).
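
For contrast, the usual stock-hardware way to keep floats out of the
heap is type declarations; a sketch in portable Common Lisp, assuming
a compiler that open-codes declared single-float arithmetic:

(defun dot (a b)
  ;; With these declarations a good compiler keeps the intermediate
  ;; products and the running sum unboxed, so nothing is consed:
  (declare (type (simple-array single-float (*)) a b)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0.0f0))
    (declare (type single-float sum))
    (dotimes (i (length a) sum)
      (incf sum (* (aref a i) (aref b i))))))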

Yeah, there are still some lispm people out here. I'm typing this on a
Sun with the diamond keys mapped to Control and Control to Rubout (but I
read mail on the Explorer).

Paul Fuqua
Texas Instruments, Dallas, Texas p...@hc.ti.com

Jeff Dalton

unread,
Aug 22, 1994, 11:38:12 AM8/22/94
to
In article <32tp07$b...@disuns2.epfl.ch> mato...@di.epfl.ch (Fernando Mato Mira) writes:
>In article <Cuos0...@triple-i.com>, ki...@triple-i.com (Kirk Rader) writes:
>
>> entirely in a commercial Common Lisp, using lisp to implement all of
> ^^^^^^^^^^^^^^^^^^^^^^^
>> the bells-and-whistles like multi-threading that a large-scale
> ^^^^^^^^^^^^^^^
>> application is likely to require.
>
>That's what I hate the most about current implementations.
>Take advantage of the OS. Do not rewrite it.
>No wonder if performance and COMPATIBILITY stink..

What exactly is the problem with providing (multiple) threads for Lisp
programs? If the OS-/C-library-provided lw process mechanism isn't
sufficient, what are we supposed to do? Give up?
