(major portion deleted)
I've worked in lisp for 15 years, but I think that when I
start my next project I'll give serious consideration to
starting in c++ rather than lisp.
|Bryan M. Kramer, Ph.D. 416-978-7569, fax 416-978-1455
|Department of Computer Science, University of Toronto
|6 King's College Road, Room 265A
|Toronto, Ontario, Canada M5S 1A4
I program in C++ and I'm writing a LISP interpreter into my product
to give it flexibility. I think LISP is a wonderful method for
adding a robust malleability to the type of software that benefits
from extensibility. AutoLISP is a prime example of this. AutoCAD
started out life as a generic CAD program. The AutoLISP extension
put it on top of the heap. But could you imagine AutoCAD written in
LISP? No way. The only reason I could think of for writing a
program in a language that takes 20 megs of space when another
language will do it in 2 is for the sake of using the language.
I will regret having said this here, but I cannot think of a major
process that I would wish to write in LISP. I don't think that's
where its strength is. LISP is a tool best used to intelligently
link other tools. The space it takes up cannot be justified by cheap
memory. If someone gave me a house with 200 rooms, how would I keep
it clean?
Rick
Rick Graham, the Binary Workshop
rgr...@loyalistc.on.ca Data:(613)476-4898 Fax:(613)476-1516
> I will regret having said this here, but I cannot think of a major
> process that I would wish to write in LISP. I don't think that's
> where its strength is. LISP is a tool best used to intelligently
> link other tools. The space it takes up cannot be justified by cheap
> memory. If someone gave me a house with 200 rooms, how would I keep
> it clean?
Your question begs for the obvious answer, following your own analogy:
the House of Lisp cleans itself automatically, and indeed offers the
option of removing any rooms that you've decided, after settling in,
that you don't want.
Seriously, real-life Common Lisp applications typically require
image-trimming (e.g., via a treeshaker) in order to be competitive in
space with similar applications written in more parsimonious
languages. We simply must include the image-trimming effort in the
total productivity equation. I still think we come out way ahead.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com
Standard disclaimer.
[...]
>
>Seriously, real-life Common Lisp applications typically require
>image-trimming (e.g., via a treeshaker) in order to be competitive in
>space with similar applications written in more parsimonious
>languages. We simply must include the image-trimming effort in the
>total productivity equation. I still think we come out way ahead.
>--
> Lawrence G. Mayka
> AT&T Bell Laboratories
> l...@ieain.att.com
>
>Standard disclaimer.
I agree that using lisp you come out way ahead in productivity and
without paying too large a cost in executable size and performance if:
1. You are sufficiently aware of the performance and memory
implications of common operations and idioms to know how to design for
a sufficient degree of efficiency up front without having to spend too
much time finding and fixing "performance leaks" after the fact. This
will be highly dependent on the particular implementation in use,
since in my experience most (commercial or otherwise) lisp
implementations have their own idiosyncratic patterns of which
operations cons excessively and which do not, and which functions are
coded in an optimally efficient manner and which are better avoided in
favor of home-grown lisp or foreign code.
2. You are working in a problem domain which is well-suited to lisp
in the first place. Some problem domains are best addressed using the
features of a lisp-like language because they actually make good use
of lisp's semantic bells-and-whistles. Note that the more "lispish"
features one uses, the less tree-shaking is likely to actually find
substantial amounts of unused code to eliminate but, since it is
already conceded in this case that those features "pay their way" in
the application's executable, this is not an issue.
The considerations cited in 1, together with the quality of modern
C/C++ integrated development and debugging environments, are the
reason I feel that most claims of lisp's "productivity enhancing"
features are overblown, if not simply false. There are valid reasons
for using lisp for certain kinds of applications. There are also
valid reasons for avoiding it in others. When starting a new project,
it is a good idea to spend at least some time considering the
trade-offs involved before making a choice as basic as what language
to use in implementing it. As someone whose job it has been to make
piles of performance-critical lisp-machine code run on general-purpose
machines in commercial Common Lisp implementations, I have had this
confirmed through bitter experience. The more that tree-shaking,
declarations, application-specific foreign or simplified versions of
standard functions, etc. are required for acceptable performance and
actually succeed in achieving it, the more evidence it is that lisp
was not really the best choice for that particular application (even
though there are applications for which it is, in fact, the best
choice due to considerations as in 2, above) and the more likely it is
that lisp will make one less, rather than more, productive.
Several people have suggested in this thread that one code one's
mainline application modules in C or C++, and use a small, easily
extensible version of lisp as a "shell" or "macro" language to glue
the pieces together and provide the end-user programmability features
which are one of lisp's greatest assets. This seems to me to be an
ideal compromise for applications where lisp's performance and
resource requirements are unacceptable to the main application but
where it is still desired to retain at least some of lisp's superior
features.
------------------------------------------------------------
Kirk Rader ki...@triple-i.com
...
> I agree that using lisp you come out way ahead in productivity and
> without paying too large a cost in executable size and performance if:
> 1. You are sufficiently aware of the performance and memory
> implications of common operations and idioms to know how to design for
> a sufficient degree of efficiency up front without having to spend too
> much time finding and fixing "performance leaks" after the fact. This
> will be highly dependent on the particular implementation in use,
> since in my experience most (commercial or otherwise) lisp
> implementations have their own idiosyncratic patterns of which
> operations cons excessively and which do not, and which functions are
> coded in an optimally efficient manner and which are better avoided in
> favor of home-grown lisp or foreign code.
I sympathize with this. I just replaced three calls to FIND with
hand-written loops. However, my other uses of FIND seem fine in terms
of the amount of time they take. I find Lisp to be a little like a
shell (just type stuff and things start happening), a little like
Mathematica (you can express a fairly complicated program easily and
let it take care of the details), and a little like C (when you need
performance you need to be precise). Performance tuning is an expert
activity in any language. So, I keep thinking there should be an
expert system out there to help us.
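To make the kind of rewrite concrete (a generic sketch, not the actual
code in question), compare the library call with a hand-specialized
loop:

(defun find-it (item list)
  ;; Library version: concise, but on some implementations every call
  ;; pays for keyword-argument parsing and generic sequence dispatch.
  (find item list :test #'eql))

(defun find-it-fast (item list)
  ;; Hand-written version specialized to lists and EQL - the kind of
  ;; replacement described above.
  (dolist (x list nil)
    (when (eql x item)
      (return x))))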
It sounds like you've had a lot of experience. Can you tell us what you
found Lisp not to be appropriate for, and why?
Thanks,
k
--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA
[...]
Having just finished my first major development effort in C++, after
years of C programming and lots of Lisp, I have to take exception to
some of the assertions Mr. Rader is making here, both stated and
implied:
- My experience using SoftBench, ObjectCenter, and GNU emacs, compared
to MCL, is that I figure I'm at least twice as productive using MCL --
and I'm much more familiar with C than Lisp. Once a project grows to
any 'reasonable' size, tools like ObjectCenter become unmanageable,
and slow as molasses.
The other issue here is that it takes us over 5 hours, on an
otherwise empty Sparc 10, to rebuild our application. Even if only a
single source file is changed, a rebuild will often take over 3 hours,
since (for reasons that we don't really understand) many of our
templates will wind up getting rebuilt. This makes it very difficult
to fix (and unit test) more than a couple of small bugs in a single
day!
- The implication that C++ optimization is machine independent is just
plain wrong. Even across different compilers on the same machine, or
different libraries, there can often be significant variations in the
relative performance of operations such as heap allocation and
deallocation, function calls vs. inlined functions, single-
vs. double-precision arithmetic, switch/case vs. if/else, stdio
vs. streams vs. read/write, virtual vs. normal functions, etc. While
you can certainly argue that optimization is less necessary for C or
C++, I'm not sure that this is true in many commercial applications,
which usually spend most of their user CPU on string copying and
string scanning, rather than number crunching.
- One other major advantage of Lisp over C++ is the relative maturity
of the languages. We developed our application in C++ on SunOS 4.1.3
initially, using ObjectCenter's C++ compiler. When we moved to HP-UX
9.0, still with OC C++, we found numerous bugs related to 'edge'
conditions -- basically, all related to the fact that the order of
static object construction and destruction is completely undefined by
the ARM, and thus THE SAME COMPILER generated different orders on two
different systems.
And yes, it's true, we shouldn't have coded anything with any
implicit assumptions about the order of static object construction or
destruction, but what we've been reduced to is a combination of hacks
and backing away from objects in favor of 'built-in' types, whose
static construction is always done first-thing (i.e., const char *
instead of const String). To me, this is a bug in the language
definition, and a very serious one.
> Several people have suggested in this thread that one code one's
> mainline application modules in C or C++, and use a small, easily
> extensible version of lisp as a "shell" or "macro" language to glue
> the pieces together and provide the end-user programmability features
> which are one of lisp's greatest assets. This seems to me to be an
> ideal compromise for applications where lisp's performance and
> resource requirements are unacceptable to the main application but
> where it is still desired to retain at least some of lisp's superior
> features.
I'd argue for the reverse of this -- build one's mainline in Lisp, and
build the performance-critical pieces in C or C++. The less
statically-compiled, statically-linked stuff in the delivered
application, the better -- quicker edit/compile/build/test cycles,
easier patching in the field, and easier for your users to change or
extend what you've given them.
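To make the suggested split concrete, here is a sketch of a Lisp
mainline calling a performance-critical C routine. In 1994 every
vendor had its own foreign-function interface; the portable CFFI
library is used here purely for illustration, and BLUR_REGION and the
IMAGE-* accessors are hypothetical:

;; Declare the C routine to Lisp (CFFI syntax; vendor FFIs differ).
(cffi:defcfun ("blur_region" blur-region) :void
  (pixels :pointer)   ; raw image buffer allocated on the C side
  (width  :int)
  (height :int))

(defun blur-image (image)
  ;; The mainline stays in Lisp; only the inner loop lives in C.
  (blur-region (image-buffer image)
               (image-width image)
               (image-height image)))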
------------------------------------------------------------
Kirk Rader ki...@triple-i.com
--
-----------------------------------------------------------------------------
Alan Geller phone: (908)699-8285
Bell Communications Research fax: (908)336-2953
444 Hoes Lane e-mail: a...@cc.bellcore.com
RRC 5G-110
Piscataway, NJ 08855-1342
[...]
>
>It sounds like you've had a lot of experience. Can you tell us what you
>found Lisp not to be appropriate for, and why?
>
>Thanks,
>k
>--
>Ken Anderson
>Internet: kand...@bbn.com
>BBN ST Work Phone: 617-873-3160
>10 Moulton St. Home Phone: 617-643-0157
>Mail Stop 6/4a FAX: 617-873-2794
>Cambridge MA 02138
>USA
I have found that applications which require "real-time" performance,
either in the traditional sense used by embedded-systems programmers
or in the related sense implied by the requirements of a highly
interactive application like the paint program on which I currently
work, are almost, if not actually, impossible to achieve in lisp. We have
achieved it (to the extent we have, yet) primarily through the most
drastic kinds of optimizations, discussion of which was the basis of
this thread. The reason is that lisp's freedom to "involuntarily"
allocate memory, combined with the consequent need to invoke the
garbage collector at what amount to random times, produces chaotic
behavior: the system cannot satisfy the basic requirement of a
real-time system - that you can predict that any given operation will
not take longer than some known length of time, and that the time it
takes will be short enough to keep up with the asynchronous events
with which the
system must interact. Beyond this specific issue, any time you must
consider memory utilization or computational horse-power constraints,
you must carefully decide whether the "one-size fits all" philosophy
of lisp is really a good match for your application. A garbage
collector is an excellent and ideally efficient model for
memory-management if your application requires (by its nature, not as
a side-effect of its implementation language) a large number of small
"anonymous" dynamic memory allocations and it can afford the small or
large "hiccups" that result from it. If, as is actually more common,
your application needs to make few if any allocations "off the heap",
in C jargon, or can easily control the circumstances in which they
occur, e.g. using the C++ constructor/destructor model, then the
garbage-collector's overhead is pure loss. Whether it is fatally so
is a consequence of considerations like those cited above.
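One common workaround on the Lisp side is to preallocate and
explicitly recycle storage so that the collector has nothing to do in
the critical section - a sketch only, with all names hypothetical:

(defstruct node x y)

(defvar *node-pool* '())   ; explicitly recycled nodes

(defun alloc-node ()
  ;; Reuse a recycled node if one is available; cons only when the
  ;; pool is empty.
  (or (pop *node-pool*) (make-node)))

(defun free-node (n)
  ;; Return N to the pool instead of leaving it for the collector.
  ;; (PUSH itself conses one list cell; a vector-backed pool would
  ;; avoid even that.)
  (push n *node-pool*))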
Memory-management is only one area in which lisp is optimized for a
particular kind of problem. Common Lisp's model of function-calling,
including lexical closures and method combination, requires an
implementation to typically expend a great deal more effort just
dispatching to and returning from a subroutine than in a language like
C or C++. If you really need, or at least can make good use of, that
kind of power there is nothing comparable in more "conventional"
languages. As with memory-management, however, I have found that
C++'s model of overloaded functions is more than sufficient in most
instances (no pun intended), and it is actually rather rare that I
want or need to use CLOS's more elaborate object-oriented features.
If you really need to use a true mix-in style of programming, C++
can't cut it - but how often do you need it to an extent that
justifies the overhead that is imposed on _every_ call of _every_
function by a typical real-world implementation in order to support
it? Despite all that, my experience and intuition suggest that while
only a minority of applications really benefit from lisp's more
sophisticated features, only a minority really suffer an unacceptable
performance penalty due to them. The latter is primarily due to the
kernel of truth there is in the "cheap RAM and fast processors"
argument.
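For readers who haven't used it, "mix-in style" means composing small
orthogonal classes through multiple inheritance and method
combination. A toy sketch of what CLOS gives you here:

(defclass window () ())
(defclass bordered-mixin () ())
(defclass shaded-mixin () ())

(defgeneric draw (w))
(defmethod draw ((w window))
  (format t "~&drawing window~%"))

;; :AFTER methods from each mixin are combined automatically; each
;; mixin contributes behavior without the primary method knowing.
(defmethod draw :after ((w bordered-mixin))
  (format t "~&...with a border~%"))
(defmethod draw :after ((w shaded-mixin))
  (format t "~&...with shading~%"))

(defclass fancy-window (bordered-mixin shaded-mixin window) ())
;; (draw (make-instance 'fancy-window)) runs all three methods.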
But these kinds of performance issues represent only one aspect of the
problems to be faced when using a non-mainstream (whether it is
deservedly so, or not) programming language. In the application
domain in which I work, computer graphics, there are any number of
platform-vendor-supplied and third-party libraries and tools that we
either can not use at all or can only use in a less than satisfactory
way because they assume C/C++ as the application language and a
conventional suite of software-engineering support tools which
specifically does not include a lisp environment. I have worked in
enough other fields besides graphics to know that the same is true in
many other application domains, as well. To quote myself from a
private email message generated as a result of this same thread, I
personally have come to the conclusion that using lisp on a
"C-machine" like a PC or Unix workstation should be regarded as being
as anomalous (but not therefore necessarily inappropriate) as using
anything _but_ lisp would be on a lisp-machine.
------------------------------------------------------------
Kirk Rader ki...@triple-i.com
It is an unfortunate collective design decision of the Lisp community to focus
on creeping featurism rather than on an implementation with competitive
performance - so much so that people try to justify inadequate
performance as a good thing with arguments like
- people's time is more valuable than computer time
- or the 80/20 rule (don't worry about performance for 80%, only for the 20%
critical section).
And people try to justify the coding tricks needed to get good performance
- adding in declarations
- tricks to avoid boxing floats
as good Lisp style.
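For readers who haven't seen these tricks, this is the sort of code
being defended as "good Lisp style" (a generic sketch; the exact
effect of the declarations varies by implementation):

(defun dot-product (x y)
  ;; Without the declarations, many compilers box (heap-allocate) each
  ;; intermediate double-float, consing on every iteration.
  (declare (type (simple-array double-float (*)) x y)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0.0d0))
    (declare (type double-float sum))
    (dotimes (i (length x) sum)
      (incf sum (* (aref x i) (aref y i))))))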
There is no reason why a compiler couldn't do global analysis to determine the
possible types of all expressions and compile code as efficient as C even
without any declarations. It is possible to do this without losing safety. It is
even possible to do it in an incremental fashion within a classic
read-eval-print loop based development environment. There is also no reason
why a compiler couldn't safely and efficiently use unboxed representations for
expressions known to be floats, or even double floats or complex numbers. There
is also no reason why a compiler couldn't do global analysis to determine the
lifetime of allocated objects and use static techniques to reclaim storage
instead of garbage collection. But the community is full of people who try to
justify garbage collection as performing better than C's static allocation.
Such justification discourages implementors from exploring alternatives to
garbage collection or alternatives to be used along side with garbage
collection for special cases that the compiler can determine. There is also no
reason why a compiler couldn't determine at compile time the method to be
dispatched by a given generic function call. One can have a uniform language
that doesn't make a distinction between compile-time and run-time dispatch and
have a compiler that safely and efficiently makes those distinctions. One can
also have a development environment that informs the programmer of the
efficiency ramifications of the code, i.e. telling the programmer where
run-time GC, run-time dispatching, boxed numbers and the like were used.
While such things exist to some extent today, they are grossly unusable. I
envision such information provided through an editor back annotated into the
source code with interactive ways for a programmer to converse with a compiler
to add additional declarations only when necessary and to have the compiler
remember such discussions with the programmer.
Now all of this is not very difficult to do. Many of the ideas have been
floating around theoretically in the programming language community for years.
(Like abstract interpretation of compile-time reference counts to eliminate
garbage collection.) Some, like static type inference, have been implemented in
widely used programming languages like ML.
But rather than working on such a compiler, the Lisp community has three
classes of people:
- those who are building yet another byte-code interpreter to add to the
dozens already available
- those adding creeping featurism to the language (like MOP, DEFSYSTEM,
CLIM, ...)
- those adding kludge upon kludge on top of existing compilers rather than
building one from the ground up on solid up-to-date principles.
Things don't have to be the way they are.
--
Jeff (home page http://www.cdf.toronto.edu/DCS/Personal/Siskind.html)
[...]
>Having just finished my first major development effort in C++, after
>years of C programming and lots of Lisp, I have to take exception to
>some of the assertions Mr. Rader is making here, both stated and
>implied:
>
>- My experience using SoftBench, ObjectCenter, and GNU emacs, compared
>to MCL, is that I figure I'm at least twice as productive using MCL --
>and I'm much more familiar with C than Lisp. Once a project grows to
>any 'reasonable' size, tools like ObjectCenter become unmanageable,
>and slow as molasses.
> The other issue here is that it takes us over 5 hours, on an
>otherwise empty Sparc 10, to rebuild our application. Even if only a
>single source file is changed, a rebuild will often take over 3 hours,
>since (for reasons that we don't really understand) many of our
>templates will wind up getting rebuilt. This makes it very difficult
>to fix (and unit test) more than a couple of small bugs in a single
>day!
This sounds like a case either of a novice C++ programmer's
predictable problems with using templates effectively or a broken
implementation. Either way, my experience has been that Common Lisp
implementations are more susceptible to both kinds of problems than
C++ implementations, primarily because C++ attempts so much less than
CL. I have listed any number of specific "horror stories" encountered
when trying to develop commercial-quality software using third-party
commercial CL development platforms in previous postings and private
email that they generated, but I will repeat them if requested.
>
>- The implication that C++ optimization is machine independent is just
>plain wrong. Even across different compilers on the same machine, or
>different libraries, there can often be significant variations in the
>relative performance of operations such as heap allocation and
>deallocation, function calls vs. inlined functions, single-
>vs. double-precision arithmetic, switch/case vs. if/else, stdio
>vs. streams vs. read/write, virtual vs. normal functions, etc. While
>you can certainly argue that optimization is less necessary for C or
>C++, I'm not sure that this is true in many commercial applications,
>which usually spend most of their user CPU on string copying and
>string scanning, rather than number crunching.
I never suggested that C++ optimization was "machine independent" (or
implementation independent, either.) I do state emphatically that one
must typically spend less time optimizing C++ code than lisp code
since 1) the typical C++ compiler does a better job of optimizing on
its own than the typical lisp compiler and 2) the much smaller
run-time infrastructure assumed by C++ just doesn't give nearly as
much room for implementations and naive programmers using them to do
things in spectacularly unoptimal ways. Since, to paraphrase
Heinlein, it is never safe to underestimate the power of human
stupidity :-), that doesn't mean that either the C++ vendor or the
programmer can't still manage to make it so that, for example,
changing one small line in one function causes a 3-hour rebuild. In
general, though, my experience has been that this sort of problem is
more common in implementations of lisp-like languages than C-like ones
where the size of the feature space and the myth that it is easy to go
back after the fact and just tweak the slow parts of an application
conspire to encourage sloppiness on the part of implementors and
programmers alike.
The trade-off to using this minimalist approach to language and
run-time features is, of course, that if you find you really need a
feature that is built in to a more elaborate language like CL, then
you must either find an existing library implementation or write one
yourself. One factor frequently left out of the debate on this issue is
recognition of the fact that due to the greater general acceptance
(deserved or not) of C and C++, there is a greater availability of
both commercial and shareware libraries for everything from very
application-level activities like serial communications to more
system-level activities like memory-management. When you add all of
the features that are easily obtainable in this way to those built in
to C++, it is not really as poor and benighted an environment as many
regular contributors to this newsgroup would have one believe.
>
>- One other major advantage of Lisp over C++ is the relative maturity
>of the languages. We developed our application in C++ on SunOS 4.1.3
>initially, using ObjectCenter's C++ compiler. When we moved to HP-UX
>9.0, still with OC C++, we found numerous bugs related to 'edge'
>conditions -- basically, all related to the fact that the order of
>static object construction and destruction is completely undefined by
>the ARM, and thus THE SAME COMPILER generated different orders on two
>different systems.
> And yes, it's true, we shouldn't have coded anything with any
>implicit assumptions about the order of static object construction or
>destruction, but what we've been reduced to is a combination of hacks
>and backing away from objects in favor of 'built-in' types, whose
>static construction is always done first-thing (i.e., const char *
>instead of const String). To me, this is a bug in the language
>definition, and a very serious one.
While I can sympathize with the frustration that could be caused by
discovering this as the cause of a seemingly mysterious bug, possibly
after some considerable effort chasing blind alleys, I cannot really
take seriously the assertion that this is as major a bug in the
language definition as you make it out to be. First of all, it does
state quite
explicitly in the ARM that once the default initialization of all
static objects to zero is complete, "No further order is imposed on
the initialization of objects from different translation units." (pg.
19, near the bottom) It then goes on to give references to the
sections which define the ordering rules on which you can depend for
local static objects. So your observation that you shouldn't have
relied on any particular order is quite correct. There is also
interspersed throughout the annotation sections of the ARM and
throughout the C++ literature in general discussions of why that
particular class of features - deferring to the implementation exactly
what steps are taken and in what order when launching the application
- exists and how to avoid the problems it can create.
There can be any amount of debate about the merits of any particular
feature of any particular language. Consider the recent long thread
in this newsgroup on NIL vs '() vs boolean false in various dialects
of lisp. I believe the assertion that there are more such problems
with the C++ specification than with the CL specification is simply
false, again primarily due to the fact that it is so much smaller.
What I believe you actually are objecting to is the fundamentally
different set of principles guiding the initial design and subsequent
evolution of the two families of languages. Lisp began as an academic
exercise in how to apply a certain technique from formal linguistics -
the lambda calculus - to the design of a programming language as a way
of exploring the possibilities inherent in the use of higher-order
functions. As such, theoretical purity and semantic "correctness" were
more important than practical considerations like performance and
conservation of machine resources. C, on the other hand, was
originally conceived as a special-purpose "middle-level" programming
language designed for the express purpose of writing a portable
operating system - namely Unix. Retaining assembly language's
affinity for highly optimal hardware-oriented "hackery" was considered
more important than theoretical rigor or syntactic elegance. Decades
later, the two families of languages have evolved considerably but
they each still retain an essentially dissimilar "flavor" that is the
result of their very different origins, and each retains features that
make it a better choice of an implementation language for certain
kinds of applications.
>
> Several people have suggested in this thread that one code one's
> mainline application modules in C or C++, and use a small, easily
> extensible version of lisp as a "shell" or "macro" language to glue
> the pieces together and provide the end-user programmability features
> which are one of lisp's greatest assets. This seems to me to be an
> ideal compromise for applications where lisp's performance and
> resource requirements are unacceptable to the main application but
> where it is still desired to retain at least some of lisp's superior
> features.
>
>I'd argue for the reverse of this -- build one's mainline in Lisp, and
>build the performance-critical pieces in C or C++. The less
>statically-compiled, statically-linked stuff in the delivered
>application, the better -- quicker edit/compile/build/test cycles,
>easier patching in the field, and easier for your users to change or
>extend what you've given them.
Again, only assuming that your application can afford the performance
penalties that these features entail - in which case it would not be
in the class of applications to which I explicitly referred. In our
own case, using exactly the strategy you and so many others advocate
has failed spectacularly in achieving comparable size and performance
behavior in our products to those of our competitors, which are
universally written in some combination of C and C++.
>
> ------------------------------------------------------------
> Kirk Rader ki...@triple-i.com
>
>--
>-----------------------------------------------------------------------------
> Alan Geller phone: (908)699-8285
> Bell Communications Research fax: (908)336-2953
> 444 Hoes Lane e-mail: a...@cc.bellcore.com
> RRC 5G-110
> Piscataway, NJ 08855-1342
------------------------------------------------------------
Kirk Rader ki...@triple-i.com
This isn't quite true. There are very good theoretical reasons why
global analysis in Lisp is much less satisfactory than in other
languages. But these stem from the _power_ of Lisp, so one wouldn't
want to give it up. The consensus seems to be that on-the-fly
incremental compilation is probably the best near-term answer, where
the compilation of (a version of) a function is delayed until we get
some idea of what types of arguments it should expect. The SELF
language/implementation is a good example of this style of
compilation.
>There
>is also no reason why a compiler couldn't do global analysis to determine the
>lifetime of allocated objects and use static techniques to reclaim storage
>instead of garbage collection.
Ditto. It is highly likely that the global analysis for the purpose of storage
allocation is at least as hard as, if not harder than, the problem of global
analysis for type determination. Even ML type inference is deterministic
exponential time complete, and ML is much less polymorphic than Lisp.
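A one-line illustration of why Lisp's flexibility resists static
typing - self-application, which ML's unification-based inference
rejects outright:

(defun self-apply (g)
  ;; Perfectly legal Lisp; untypable in ML, where G would need a type
  ;; that is simultaneously a function and its own argument.
  (funcall g g))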
>But the community is full of people who try to
>justify garbage collection as performing better than C's static allocation.
>Such justification discourages implementors from exploring alternatives to
>garbage collection or alternatives to be used along side with garbage
>collection for special cases that the compiler can determine.
In many cases GC _does_ perform better than C's static allocation.
This advantage is over and above the important service of avoiding
dangling references. Most C/C++ programmers are quite amazed by this --
as if finding out about sex for the first time after being kept ignorant
about it by their parents...
>There is also no
>reason why a compiler couldn't determine at compile time the method to be
>dispatched by a given generic function call.
There are very good theoretical and practical reasons why this is not
to be. See the discussion above. Also, if one is intent on composing
old compiled code with new compiled code, then there have to be some
facts about the new compiled code that the old compiled code wasn't
privy to, and therefore can't take advantage of.
>Now all of this is not very difficult to do. Many of the ideas have been
>floating around theoretically in the programming language community for years.
>(Like abstract interpretation of compile-time reference counts to eliminate
>garbage collection.) Some, like static type inference, have been implemented in
>widely used programming languages like ML.
It's easier in ML, but still not easy.
------
One of the things people forget about Lisp's large footprint is that
it typically includes the compiler, the entire subroutine library, an
editor, and a good fraction of the help files. The US DoD budget can be
made to look smaller, too, by separating out the Army, the Air Force, the
Navy, the NSA, ......
Oh, rubbish. This is true in an apples to oranges comparison;
specifically, if you allow one language to use machine-specific
libraries and code but not the other, it's hard to have a fair
comparison. I can't think of many C programs that don't begin with
#include <sys/xxxx.h> where xxxx is some un*x-specific hack. Device
control. Asynchronous interrupts. Process scheduling. Low-level file
system mangling. These tasks are difficult (e.g. slow) if not
impossible to deal with when the language you're using doesn't have
any standard set of libraries you can call upon. C programs can
practically be made machine-specific too, in the sense that you can
code your programs to generate just a line or two of assembler for
each C subexpression. (Heck, I remember when C was referred to as a
high-level assembler.) "a += b" easily converts to a single
instruction in many implementations.
So, fighting apples to apples now, and using some machine-specific
code, let's see how "slow" this lisp program is compared to its C
counterpart. The task: copy from one wired array to another (wired
means that the storage management system has been told to not swap the
array to disk [virtual memory]). To make the program slightly smaller
for the sake of net.bandwidth, a lot of the setup code will be
simplified (no multi-dimensional arrays, etc), we'll assume the arrays
are integer multiples of 4 words long and that they're the same size.
(defun copy-a-to-b (a b)
  (sys:with-block-registers (1 2)
    ;; Point the two block registers at the physical addresses of the
    ;; first elements of A and B.
    (setf (sys:%block-register 1) (sys:%set-tag (locf (aref a 0))
                                                sys:dtp-physical)
          (sys:%block-register 2) (sys:%set-tag (locf (aref b 0))
                                                sys:dtp-physical))
    ;; Copy four words per iteration; the length is assumed to be a
    ;; multiple of 4.
    (loop repeat (ash (length a) -2) do
      (let ((a (sys:%block-read 1 :prefetch t))
            (b (sys:%block-read 1 :prefetch t))
            (c (sys:%block-read 1 :prefetch nil))
            (d (sys:%block-read 1 :prefetch nil)))
        (sys:%block-write 2 a)
        (sys:%block-write 2 b)
        (sys:%block-write 2 c)
        (sys:%block-write 2 d)))))
So, we've used lisp and not assembler. But tell me, how big is this
program? How long does it take to execute? Does it cons? Answer:
It's 16 machine instructions long, 9 of which are within the loop.
For a 4096 byte array (1024 32-bit words) it takes 128 microseconds to
execute. You can embed it within an interrupt handler, if so desired,
as it does not allocate any static memory; it uses five locations on
the data stack.
Embedded-application lisp code is easy to write, it's very fast if you
allow yourself some knowledge of the machine upon which it is being
compiled, and is often available if you allow yourself to use
platform-specific code. The above was, of course, the code for a
Symbolics Ivory processor, something of which I've dealt with a lot:
I wrote the embedded SCSI driver for the NXP1000, a large and complex
driver that has its own mini lisp compiler for the microcoded NCR
SCSI hardware.
--
Kris Karas <k...@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
(setq *disclaimer* "I barely speak for myself, much less anybody else."
*conformist-numbers* '((AMA-CCS 274) (DoD 1236))
*bikes* '((RF900RR-94) (NT650-89 :RaceP T) (CB750F-79 :SellP T)))
These are all apt criticisms of lisp culture, IMHO. One could make
similar criticisms of some of the rationalizations one hears from the
C/C++ community of the requisite hacks needed to avoid some of the
pitfalls into which a programmer can fall using that kind of language.
I take your more general point to be that trying to justify as virtues
those sins one is forced to commit by a particular implementation of a
particular language is counter-productive. I couldn't agree more.
If you said "no reason in principle" that languages and compilers
couldn't be designed in the way you describe, I would entirely agree.
(Modulo any reservations I might feel about the theoretical
possibility of doing the 100% complete control-flow analysis necessary
to achieve some of the optimizations to which you refer automatically
at compile time, not to mention the necessity to frequently go back
and revisit previously made decisions as incremental changes are made
to the system. If people think using C++ templates too easily causes
compile-time performance problems...!) The fact is, however, that
programmers must make choices today using today's tools. We can't
just stop all development and wait for some hypothetical future
language to come along. In the mean time, the best we can do is
choose the existing language that is best suited to the particular
task and make whatever tradeoffs that choice entails.
>Now all of this is not very difficult to do. Many of the ideas have been
>floating around theoretically in the programming language community for years.
>(Like abstract interpretation of compile-time reference counts to eliminate
>garbage collection.) Some, like static type inference, have been implemented in
>widely used programming languages like ML.
>
"Not very difficult to do"...? As you said yourself, these are issues
of ongoing language research.
And shouldn't there have been a smiley beside the description of ML as
being "widely used"? :-)
>But rather than working on such a compiler, the Lisp community has three
>classes of people:
>- those who are building yet another byte-code interpreter to add to the
>dozens already available
>- those adding creaping featurism to the language (like MOP, DEFSYSTEM,
>CLIM, ...)
>- those adding kludge upon kludge on top of existing compilers rather than
>building one from the ground up on solid up-to-date principles.
>
>Things don't have to be the way they are.
>--
>
> Jeff (home page http://www.cdf.toronto.edu/DCS/Personal/Siskind.html)
Not only don't things have to be the way they are, but I don't expect
them to stay that way forever. As existing languages continue to
evolve and new languages are developed, I would expect the boundaries
of which applications are best implemented in which language to shift,
and possibly to overlap more than they do today. While I understand
and share to some extent the frustration which you express, I don't
think the lisp community is quite as moribund as you describe, and
some future dialect of lisp incorporating at least some of the
features you propose will probably be among those from which a
programmer will be able to choose.
------------------------------------------------------------
Kirk Rader ki...@triple-i.com
> In this whole discussion of the relative merits and performance of Lisp vs. C
> people are confusing merits of implementations with merits of languages and
> performance limits of implementations vs. inherent performance limits (or lack
> thereof) of languages.
This is a point that I frequently find myself wanting to make, esp. when
I see a comment about "Lisp", as if all dialects are the same, and as if
all implementations of a dialect are the same. If they were, vendors
would have a hard time competing with each other!
> It is an unfortunate collective design decision of the Lisp community to focus
> on creeping featurism rather than on an implementation with competitive
> performance - so much so that people try to justify inadequate
> performance as a good thing with arguments like
>
> - people's time is more valuable than computer time
> - or the 80/20 rule (don't worry about performance for 80%, only for the 20%
> critical section).
Agreed. I _do_ worry about performance, as I have a pretty slow machine,
even by non-workstation standards. It's now too small (not enough RAM)
and too slow (only a 20 MHz 386) to run VC++. It _can_ still run a few
commercial Lisps, like Allegro CL/PC. I find it very odd that programmers
using VC++ believe that their environment and compiler is so superior to
_any_ Lisp, and yet they demonstrate how little they know about Lisp by
referring to myths that might only apply to Lisps from 15+ years ago.
> And people try to justify the coding tricks needed to get good performance
>
> - adding in declarations
> - tricks to avoid boxing floats
I'm happy to use type declarations. Did you mean some other kind?
While I have a choice, I'd like declarations to be available, but
not required. A popular language like C++ requires them in order to
just compile, never mind compile _well_. (That's usually done with
the compilers I know by using command line flags. (-: )
> as good Lisp style.
And so little C code I see seems to have _any_ style. :-)
> There is no reason why a compiler couldn't do global analysis to determine the
> possible types of all expressions and compile code as efficient as C even
> without any declarations. It is possible to do this without losing safety. It is
> even possible to do it in an incremental fashion within a classic
> read-eval-print loop based development environment. There is also no reason
I'd be happy if a "production" compiler were available for the final code,
so that it could do some extra work, such as global analysis, if that can
produce better code. Would most function calls need to use late binding,
for example? If you know that a function won't be redefined at runtime -
something I can be certain of in all my programs - then are there any
Lisp compilers that can exploit that? Should they need declarations?
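Common Lisp does offer one portable way to make such a promise for
individual functions, though nothing whole-program - a minimal
example:

;; Declaiming SQUARE inline tells the compiler it may assume the
;; definition seen at compile time, so callers can open-code the call
;; instead of going through the function cell at run time.
(declaim (inline square))
(defun square (x) (* x x))

(defun sum-of-squares (a b)
  (+ (square a) (square b)))   ; SQUARE may be inlined here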
> Things don't have to be the way they are.
I also feel this way. My hope is that Dylan systems may offer better
compilers and environments to programmers, and make a distinction
between development by the programmer and runtime for the user.
I also hope that Dylan won't be the only Lisp-like language to do
this.
--
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind
> There is no reason why a compiler couldn't do global analysis to determine the
> possible types of all expressions and compile code as efficient as C even
> without any declarations. It is possible to do this without losing safety. It is
> even possible to do it in an incremental fashion within a classic
> read-eval-print loop based development environment.
The CLiCC (Common Lisp to C Compiler) by Goerigk, Hoffmann, and Knutzen
at Christian-Albrechts-University of Kiel does global-analysis type
determination as it generates a C program equivalent to a Lisp program.
With only a few days of modifications to make 13,000 lines of Lisp
code compliant with the Common Lisp subset supported by CLiCC, the
system successfully transformed the code into an 860,000 byte executable.
The CLiCC static libraries rather than the shared libraries were used
in the linking. The generated C code was 15,200 lines of code.
Thus, with a system like CLiCC, the Lisp development environment can
be used to develop a program and then it can be converted to C for
deployment. Thus, the "best" of each languages capabilites can be
exploited.
> But rather than working on such a compiler, the Lisp community has three
> classes of people:
> - those who are building yet another byte-code interpreter to add to the
> dozens already available
> - those adding creaping featurism to the language (like MOP, DEFSYSTEM,
> CLIM, ...)
> - those adding kludge upon kludge on top of existing compilers rather than
> building one from the ground up on solid up-to-date principles.
CLiCC was built from the ground up on solid up-to-date principles. The
CLiCC documentation contains a nice discussion of compiler research and
techniques that influenced the design of CLiCC. These include work on
the Scheme to C compiler done by DEC CRL and compiler techniques used
in functional languages such as ML.
Thus, there are people in the Lisp community working on some great
post-development tools to support Lisp programmers. Tools such as
CLiCC should encourage the Lisp community to internalize some of
Siskind's points and to develop and extend them further.
--- AlanG
: This isn't quite true. There are very good theoretical reasons why
: global analysis in Lisp is much less satisfactory than in other
: languages. But these stem from the _power_ of Lisp, so one wouldn't
: want to give it up. The consensus seems to be that on-the-fly
: incremental compilation is probably the best near-term answer, where
: the compilation of (a version of) a function is delayed until we get
: some idea of what types of arguments it should expect. The SELF
: language/implementation is a good example of this style of
: compilation.
On the contrary, I would say that there are no reasons why global
analysis need be less "satisfactory" for Lisp. It is true that
the language contains features that are difficult to analyze, but
I would be satisfied with less from an analysis of a program which
uses them. In most cases (excluding things like eval) static
analysis and compile time optimizations (and possibly profiling
feedback) can do a better job than on-the-fly incremental compilation.
This is because the compiler can globally restructure control flow and
change data representations.
In particular, if the global analysis builds interprocedural control
and data flow graphs, it can clone subtrees and specialize these with
respect to the data they operate on. In addition, the global data
flow information can be used to specialize the physical layout of
structures/objects and then to replicate and specialize all code
which operates on them.
My paper describing such an analysis and applications will appear
in OOPSLA:
Precise Concrete Type Inference for Object-Oriented Languages
http://www-csag.cs.uiuc.edu/papers/ti-oopsla94.ps
This analysis and the optimizations have been implemented for a
language with some of the difficult bits from Lisp: untyped,
first class selectors (functions), continuations, and
messages (essentially apply).
: >There is also no
: >reason why a compiler couldn't determine at compile time the method to be
: >dispatched by a given generic function call.
: There are very good theoretical and practical reasons why this is not
: to be. See the discussion above. Also, if one is intent on composing
: old compiled code with new compiled code, then there have to be some
: facts about the new compiled code that the old compiled code wasn't
: privy to, and therefore can't take advantage of.
Again, the above analysis computes a safe approximation which is very
accurate. Combined with cloning of subtrees to specialize for
classes containing instance variables of different types, we have been
able to statically bind the vast majority of call sites (>99% in many cases).
Such analyses are expensive, and require the entire program on which to
operate, but they are not that much more expensive than g++ -O2 :)
Incremental on-the-fly compilation is the best bet for incremental
development, debugging and fast turn around, but when you are ready
to build the final version, good global analysis can enable many
optimizations.
--
John Plevyak (ple...@uiuc.edu) 2233 Digital Computer Lab, (217) 244-7116
Concurrent Systems Architecture Group
University of Illinois at Urbana-Champaign
1304 West Springfield
Urbana, IL 61801
<A HREF="http://www-csag.cs.uiuc.edu">CSAG Home Page</A>
Heh heh. Good point. :-)
Full garbage collection takes a long time, but it has zero overhead
when memory is allocated. Thus lots of little creates don't suffer
any memory-subsystem-invocation penalties. C's memory allocator
doesn't take a lunchbreak every so often as Lisp's GC does, but it
spends lots of time making small, incremental reorganizations whenever
a small chunk of memory is allocated or deallocated; add to that the
function calling overhead, and the total time spent in C's allocator
can be greater.
And for those lisps that support ephemeral ("incremental" for lack of
a better lay term) garbage collection, frequently-manipulated pieces
of data get placed adjacent to one another in the machine's memory
space, greatly increasing the hit rate on the disk cache (and thus
reducing page thrashing) on the virtual memory system.
> And shouldn't there have been a smiley beside the description of ML as
> being "widely used"? :-)
The statement would certainly amuse (or bemuse) most C/C++
programmers. :-) I can only guess at what a vendor might say.
[...]
>
>Oh, rubbish. This is true in an apples to oranges comparison;
>specifically, if you allow one language to use machine-specific
>libraries and code but not the other, it's hard to have a fair
>comparison. I can't think of many C programs that don't begin with
>#include <sys/xxxx.h> where xxxx is some un*x-specific hack. Device
>control. Asynchronous interrupts. Process scheduling. Low-level file
>system mangling. These tasks are difficult (e.g. slow) if not
>impossible to deal with when the language you're using doesn't have
>any standard set of libraries you can call upon. C programs can
>practically be made machine-specific too, in the sense that you can
>code your programs to generate just a line or two of assembler for
>each C subexpression. (Heck, I remember when C was referred to as a
>high-level assembler.) "a += b" easily converts to a single
>instruction in many implementations.
>
>So, fighting apples to apples now, and using some machine-specific
>code, let's see how "slow" this lisp program is compared to its C
>counterpart. The task: copy from one wired array to another (wired
>means that the storage management system has been told to not swap the
>array to disk [virtual memory]). To make the program slightly smaller
>for the sake of net.bandwidth, a lot of the setup code will be
>simplified (no multi-dimensional arrays, etc), we'll assume the arrays
>are integer multiples of 4 words long and that they're the same size.
>
[...]
>
>Embedded-application lisp code is easy to write, it's very fast if you
>allow yourself some knowledge of the machine upon which it is being
>compiled, and is often available if you allow yourself to use
>platform-specific code. The above was, of course, the code for a
>Symbolics Ivory processor, something of which I've dealt with a lot:
>I wrote the embedded SCSI driver for the NXP1000, a large and complex
>driver that has its own mini lisp compiler for the microcoded NCR
>SCSI hardware.
>--
>Kris Karas <k...@enterprise.bih.harvard.edu> for fun, or @aviion-b... for work.
>(setq *disclaimer* "I barely speak for myself, much less anybody else."
> *conformist-numbers* '((AMA-CCS 274) (DoD 1236))
> *bikes* '((RF900RR-94) (NT650-89 :RaceP T) (CB750F-79 :SellP T)))
"Rubbish" yourself! Comparing the use of highly
implementation-specific lisp in a device driver for a machine which
was _designed_ to use lisp _as_ its assembly language to writing
application-level code on general-purpose hardware is your idea of an
"apples to apples" comparison? Do you really suggest implementing,
for example, a Unix device driver in lisp? Did you really not
understand my point that a language with a garbage-collector based
memory-management scheme with no (hardware assisted or otherwise)
real-time programming support is, by definition, not terribly useful
for applications which require continuous real-time response?
It seems to me that you actually make my point for me. C was designed
expressly for system-level hacking or anything else requiring intimate
interaction with the hardware. That is why C compilers typically do
compile to the degree of tightness to which you refer above. While it
may be possible, with sufficient effort and expertise, to achieve
nearly the same efficiency in a particular lisp implementation on a
particular platform using a sufficient degree of implementation- and
platform-specific hackery, it is ludicrous to suggest that the typical
application programmer will achieve that degree of efficiency using
lisp for the typical application in the typical implementation on the
typical platform. Or if they do manage, eventually, to create such
optimal code it will only be after having expended more rather than
less effort to do so than would have been expended using C - precisely
because of the existence of all of C's hardware-level libraries and
optimizations to which you refer. Proposing as counter-examples of
"embedded programming in lisp" lisp-machine device-drivers is, putting
it mildly, missing the point.
One more time: If you want or need lisp's higher-level features for a
particular application and can afford the performance trade-offs they
entail, lisp is an ideal choice. If you don't really need lisp's
power for a particular application but still find lisp's performance
acceptable, then there is no real reason not to use it in that case,
either. Since in the case of an Ivory based machine the performance
trade-offs almost _always_ favor using lisp, the premise is vacuously
true. For most applications running on non-lisp-machines there _is_ a
real performance trade-off to using lisp, and so programmers should
weigh the costs and benefits of using a variety of different
implementation languages before choosing one.
------------------------------------------------------------
Kirk Rader
I guess I'll bore people again, too, though perhaps I'll say something
different this time.
I agree with Siskind that Lisp could do much better to compete with C
by actually working on the areas that C people care about, rather
than just telling C people they care about the wrong things.
But I think the first mistake of Lisp users and vendors was (and is)
to compete against C at all. Lisp is a great high-level language,
suitable for doing high-level programming. It allows one to implement stuff
quickly and not worry about low-level details.
Who needs what Lisp offers??? Not people who implement file systems,
device drivers, graphics libraries, or other "low-level" code.
The right target is BUSINESS PEOPLE. These people use COBOL,
and now relational databases, SQL, 4GLs, FORMS, and god-knows-what-else
high-level, expensive, proprietary packages, which let them
quickly implement an application the Boss needed yesterday,
like how many hula hoops were sold in California last month, etc.
A Lisp-based system could serve this market very well.
- Kelly Murray (k...@prl.ufl.edu) <a href="http://www.prl.ufl.edu">
-University of Florida Parallel Research Lab </a> 96-node KSR1, 64-node nCUBE
: Oh, rubbish.
: So, fighting apples to apples now, and using some machine-specific
And why, pray tell, would I wish to write this nearly indecipherable
mess of Lisp code instead of 16 lines of perfectly readable assembler?
This does seem like the wrong tool for a simple task.
Greg
Not that CL is perfect, either, of course. The lack of finalization (a
destructor method) is somewhat annoying, although using macros like
with-open-file helps a bit. The existence of set-value can be a
I think most commercial CL implementations now have a finalization
capability, but I agree that we need a de facto standard interface to
it.
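Portably, the with- macro idiom mentioned above is the standard
substitute for a destructor; rolled by hand it looks like this
(ACQUIRE-HANDLE and RELEASE-HANDLE are hypothetical):

(defmacro with-handle ((var &rest acquire-args) &body body)
  ;; Run BODY with VAR bound to a freshly acquired handle, releasing
  ;; it on any exit, normal or not - the closest CL analogue of a C++
  ;; destructor.
  `(let ((,var (acquire-handle ,@acquire-args)))
     (unwind-protect
          (progn ,@body)
       (release-handle ,var))))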
> Kris Karas (k...@enterprise.bih.harvard.edu) wrote:
> : In article <CtpM0...@triple-i.com> ki...@triple-i.com (Kirk Rader) writes:
> : >I have found that applications which require "real-time" performance,
> : >[...] are almost if not impossible to achieve in lisp.
[Response demonstrating a very fast machine-specific array fill
deleted]
> : So, we've used lisp and not assembler. But tell me, how big is this
> : program? How long does it take to execute? Does it cons? Answer:
> : It's 16 machine instructions long, 9 of which are within the loop.
> : For a 4096 byte array (1024 32-bit words) it takes 128 microseconds to
> : execute. You can embed it within an interrupt handler, if so desired,
> : as it does not allocate any static memory; it uses five locations on
> : the data stack.
>
> And why, pray tell, would I wish to write this nearly indecipherable
> mess of Lisp code instead of 16 lines of perfectly readable assembler?
> This does seem like the wrong tool for a simple task.
It's an existence proof that the original assertion--that writing
`real-time' code in Lisp is nearly or truly impossible--is a bald-faced
falsehood. If you are willing to use Lisp in the fashion that you
_must_ use C (that is, get down and dirty with the hardware, use (and
declare) types that are machine-word-size-and-byte-order specific,
etc.) then there's nothing to prevent you from writing `real-time'
code.
Now, you may find the case of writing pieces of device-driver code for
a Lisp Machine a contrived example. Since I happen to find that
argument fairly compelling myself, let me just point out that there is
a commercial real-time expert system shell, called G2 if memory serves
me correctly, written in Common Lisp and running on stock hardware.
It's being used, among other things, to control the Biosphere 2
environment.
> Greg
-----------------------------------------------------------------------
Paul F. Snively "Just because you're paranoid, it doesn't mean
ch...@shell.portal.com that there's no one out to get you."
Newsgroups: comp.lang.lisp
From: ki...@triple-i.com (Kirk Rader)
Organization: Information International Inc., Culver City, CA
References: <LGM.94Ju...@polaris.ih.att.com> <CtI6M...@triple-i.com> <ASG.94Ju...@xyzzy.gamekeeper.bellcore.com>
Date: Sat, 30 Jul 1994 19:57:55 GMT
[...]
Well, I'd certainly agree that I was, at least, a novice C++
programmer, but I've been programming in C for about 13 years now, and
I've been doing OOP development in a variety of languages (Smalltalk,
CLOS, C, and PASCAL) for over 5 years. I didn't have nearly this level
of difficulty with my first significant Lisp project.
We also have a few very experienced C++ developers on our staff, so
it's not just me.
Here again, I'd disagree. For one thing, the wide availability of
tools in C and C++ means that one is constantly running into someone's
class library, theoretically production-proven code, that uses (say) a
yacc/lex-based compiler to parse a simple data structure. Recoding
this parser by hand provided a 4-fold speed-up in a speed-critical
part of our application; of course, it also forced us to completely
redevelop the entire class, since we didn't have source (the speed-up
required direct access to private member variables and adding a new
class variable, and you can't do that without source, even in a
derived class).
Or alternatively, we use three different vendor class libraries that
use three different string representations (char *, USL String, and a
private vstr class). The time (both programmer and CPU) involved in
converting between these representations is not trivial. If C++ had
more run-time infrastructure, then this wouldn't be an issue.
Or again, part of why it takes us over 5 hours to rebuild our
application is that it's big -- over 8 meg (stripped). If a base class
gets changed, every child class has to get recompiled -- and every
other file that includes the header. If we were using a dynamic
language, rebuilding the application would be a non-event.
The trade-off to using this minimalist approach to language and
run-time features is, of course, that if you find you really need a
feature that is built in to a more elaborate language like CL, then
you must either find an existing library implementation or write one
yourself. One factor frequently left out of the debate on this issue is
recognition of the fact that due to the greater general acceptance
(deserved or not) of C and C++, there is a greater availability of
both commercial and shareware libraries for everything from very
application-level activities like serial communications to more
system-level activities like memory-management. When you add all of
the features that are easily obtainable in this way to those built in
to C++, it is not really as poor and benighted an environment as many
regular contributors to this newsgroup would have one believe.
One of the biggest problems with C and C++ is the availability of
large numbers of incompatible, non-portable commercial and shareware
libraries that almost do everything you'd want, but not quite, and not
quite how you want, and optimized for a usage pattern different from
yours. If C++ allowed more flexible class modification, without
requiring source access, or if the subclassing facility were a bit
nicer, or if there were a MOP, then things would be much easier. But
what is the likelihood of this happening?
If the ARM specified that the result of adding two integers was
implementation-dependent, would that make it OK? If a language
requires major hacks and kludging in order to provide deterministic
start-up and shut-down behavior, then that language has a serious bug
in its specification.
This is not the only place where C++'s relative immaturity is
evident. The ARM more or less throws up its hands over the
treatment of temporary values. There is no ability to extend
existing classes by adding new methods without subclassing, and
subclassing introduces major new hackery:
class String; // defined in, say, USL standard components
class MyString : public String {
public:
// various constructors and whatnot removed, to
// protect the innocent
MyString& rtrim();
MyString& pad(int length, char padChar = ' ');
};
This looks easy, right? Except that you can't do the following:
MyString a("hello "), b("world");
MyString s = a + b;
unless you've written a MyString(String&) constructor, and even then
you're going to wind up copying the String in order to create a
MyString. Now, copying Strings isn't that big a deal (reference
counts!), but for some classes this can become quite expensive. (Yes,
it is possible to write a MyString::operator+(), etc., but this is a
hack, and messy, and bug-prone, and a maintenance nightmare).
Not that CL is perfect, either, of course. The lack of finalization (a
destructor method) is somewhat annoying, although using macros like
with-open-file helps a bit. The existence of set-value can be a
temptation to do truly evil things. The existence of eight different,
slightly incompatible ways to do any single operation is a real
problem when learning (reminiscent of X windows), although I've found
that I actually program in a CL subset of maybe 35-45% of the
language.
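For what it's worth, the with-open-file trick generalizes: any macro
that wraps unwind-protect gives you deterministic cleanup without true
finalization. A minimal sketch, assuming a hypothetical resource with
make-widget / release-widget functions (my names, not part of CL):
;;; Sketch only: WITH-WIDGET is modeled on WITH-OPEN-FILE.
;;; MAKE-WIDGET and RELEASE-WIDGET are hypothetical.
(defmacro with-widget ((var &rest make-args) &body body)
  "Bind VAR to a fresh widget around BODY, releasing it on any exit."
  `(let ((,var (make-widget ,@make-args)))
     (unwind-protect
         (progn ,@body)
       (release-widget ,var))))
Like with-open-file, this guarantees the cleanup runs even on a throw
or an error; what it cannot do is clean up an object that simply
becomes garbage, which is where real finalization would come in.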
There can be any amount of debate about the merits of any particular
feature of any particular language. Consider the recent long thread
in this newsgroup on NIL vs '() vs boolean false in various dialects
of lisp. I believe the assertion that there are more such problems
with the C++ specification than with the CL specification is simply
false, again primarily due to the fact that it is so much smaller.
I disagree strongly. Again, Lisp or CL or Scheme are not perfect, but
since they've been used for a much longer time than C++
(specifically), and since far more people have had input into the
structure and semantics of the language than for C/C++, the problem
issues that remain are generally minor, peripheral issues (or
religious debates between different dialects), where for C to an
extent, and for C++ very strongly, there are many users of the
language who feel that it has serious if not fatal flaws in its
fundamental design and conception.
What I believe you actually are objecting to is the fundamentally
different set of principles guiding the initial design and subsequent
evolution of the two families of languages. Lisp began as an academic
exercise in how to apply a certain technique from formal linguistics -
the lambda calculus - to the design of a programming language as a way
of exploring the possibilities inherent in the use of higher-order
functions. As such, theoretical purity and semantic "correctness" were
more important than practical considerations like performance and
conservation of machine resources. C, on the other hand, was
originally conceived as a special-purpose "middle-level" programming
language designed for the express purpose of writing a portable
operating system - namely Unix. Retaining assembly language's
affinity for highly optimal hardware-oriented "hackery" was considered
more important than theoretical rigor or syntactic elegance. Decades
later, the two families of languages have evolved considerably but
they each still retain an essentially dissimilar "flavor" that is the
result of their very different origins, and each retains features that
make it a better choice of an implementation language for certain
kinds of applications.
I don't think John McCarthy was really concerned with lambda calculus,
at least not from the articles I've read. On the other hand, I was
certainly not there -- I was born the same year as Lisp, 1958 -- so
it's not unlikely that I'm way off base on this.
C, at least, has to me an intellectual coherence that makes it a
useful and usable language. I've written hundreds of thousands of
lines of C code, and while it has its warts -- and while I'd rather
write CL or Scheme -- it's OK. C++, on the other hand, seems to me to
be a hack pasted on top of a kludge, with some spaghetti thrown on
top. It wants to be all things to all people, without admitting to
any flaws -- it's an OOPL, it's procedural, it's dynamic, it's static,
it's a mess.
>
> Several people have suggested in this thread that one code one's
> mainline application modules in C or C++, and use a small, easily
> extensible version of lisp as a "shell" or "macro" language to glue
> the pieces together and provide the end-user programmability features
> which are one of lisp's greatest assets. This seems to me to be an
> ideal compromise for applications where lisp's performance and
> resource requirements are unacceptable to the main application but
> where it is still desired to retain at least some of lisp's superior
> features.
>
>I'd argue for the reverse of this -- build one's mainline in Lisp, and
>build the performance-critical pieces in C or C++. The less
>statically-compiled, statically-linked stuff in the delivered
>application, the better -- quicker edit/compile/build/test cycles,
>easier patching in the field, and easier for your users to change or
>extend what you've given them.
Again, only assuming that your application can afford the performance
penalties that these features entail - in which case it would not be
in the class of applications to which I explicitly referred. In our
own case, using exactly the strategy you and so many others advocate
has failed spectacularly to achieve size and performance in our
products comparable to those of our competitors, which are
universally written in some combination of C and C++.
Why are there more performance penalties in one approach than in the
other? Build the mainline in Scheme, if you don't want the run-time
footprint of CL. Or in Dylan, or EuLisp, or whatever. I guess the
bottom line for me, though, whatever language your main line happens
to be written in, is do as much as you can dynamically, in Lisp or
Scheme or Python or Smalltalk or Self or SML or whatever.
The barefaced falsehood is the assertion that it is easy or even
possible to achieve real-time behavior in any of the popular
commercial Common Lisp implementations available on stock hardware,
especially while retaining the features of lisp that are usually
presented as its advantages - abstraction, ease of use, ease of
debuggability, etc. I don't find the example of a lisp-machine device
driver being written in lisp "contrived", I merely consider it
irrelevant to the issues being discussed in the thread at the point at
which it was introduced. By definition, the architecture of a
lisp-machine is going to favor using lisp for most aspects of both
application and system-level code. It should be equally obvious that
both the hardware and the software architecture of systems designed
primarily to run Unix or other popular OS's themselves written in C
using the standard C libraries will generally favor using C-like
languages, other things being equal. Where other things are _not_
equal, such as applications which require the greater expressive power
of lisp more than they need optimal performance, lisp is a good
choice. But suggesting that lisp is just as reasonable a choice as
assembler or C for implementing things like device-drivers on typical
hardware / software configurations is simply ludicrous.
Note also that Unix itself is not particularly well-suited to
real-time applications. Adding the overhead of supporting the typical
lisp implementation's run-time system (especially its
memory-management mechanisms) to the problems already inherent in Unix
for the type of application under discussion only exacerbates the
problems.
Kirk Rader
Looks like assembler to me. Worse yet, it's completely unportable and
requires intimate familiarity with the particular system used as well
as details of the hardware and software environment that most users
just don't care about.
The need to write code like this in Lisp in order to achieve good
performance is precisely the reason why people prefer languages like C
for many tasks: writing equivalent, efficient C code is
straightforward and needs to rely only on portable primitives.
Thomas.
I've never seen such a thing as a "portable" assembler. Try to run
MIPS code on your 486 :)
> and requires intimate familiarity with the particular system used as
> well as details of the hardware and software environment that most
> users just don't care about.
> The need to write code like this in Lisp in order to achieve good
> performance is precisely the reason why people prefer languages like C
> for many tasks: writing equivalent, efficient C code is
> straightforward and needs to rely only on portable primitives.
Like the stream class of C++? It took the major C++ vendors three
releases to get out something that resembled the AT&T stream class
and would run the examples in Lippman's book.
Moreover, the intermediate language used by the GNU GCC compiler (RTL)
looks a lot like...guess what?
Cheers
--
Marco Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab | room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU | e-mail: mar...@cs.nyu.edu
...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
Bertholdt Brecht
There is a big difference between C++ and C. C is not a semantic nightmare.
Anyway, you seem to have completely missed Thomas' point, that
the original poster's example of "efficient Lisp" wasn't Lisp at all.
I sure can't find it anywhere in CLtL.
I believe it's possible to make Lisp efficient, but dressing up
assembly language with lots of irritating silly parentheses is
*not* the way to go.
>Moreover, the intermediate language used by the GNU GCC compiler (RTL)
>looks a lot like...guess what?
GCC's RTL syntax may have lots of parentheses, but that's its only
connection with Lisp. So it's not clear to me why you made that remark.
It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
therefore using assembler dressed up in parentheses must be a
reasonable way to write efficient Lisp code." Bzzt! Time for a
reality check.
--
Mike Haertel <mi...@ichips.intel.com>
Not speaking for Intel.
> In article <MARCOXA.94...@mosaic.nyu.edu> mar...@mosaic.nyu.edu (Marco Antoniotti) writes:
> Anyway, you seem to have completely missed Thomas' point, that
> the original poster's example of "efficient Lisp" wasn't Lisp at all.
> I sure can't find it anywhere in CLtL.
The original poster's point seemed to me to have been that any
real-world Lisp implementation will allow you to do what is generally
necessary in order to talk directly to the hardware, which is what many
people are concerned about when they talk about `real-time'
constraints. It was also to point out that it's not necessarily any
harder to use registers and stack-frames to avoid garbage-collection in
cases where that's important.
To me, it's not a valid argument to say `C is great because it's
essentially a quasi-portable assembler that lets you efficiently bang
on the hardware,' and then yell `foul' when someone else says `Lisp is
great because it's a high-level language that will let you, if/when you
want, efficiently bang on the hardware.' I would say that the fact
that C _forces_ you to bang on the hardware, while Lisp merely allows
you to if you're willing to bypass all the abstraction away from the
hardware, is a compelling case in Lisp's favor _for most
non-device-driver one-megabyte-or-more application tasks_.
> I believe it's possible to make Lisp efficient, but dressing up
> assembly language with lots of irritating silly parentheses is
> *not* the way to go.
So far, no one has commented on G2, the hard real-time real-Lisp expert
system development tool. I may have to dig up old AI Expert/PC AI
magazine articles about it. From what I understand, in a nutshell,
they make heavy use of `resources' (that is, they maintain their own
free-lists of various types in order to avoid garbage collection) and
they use lots of type declarations in their code--which languages like
C again _force_ you to do anyway. So by using C-like memory-management
and type declarations, they win, and still get to use Lisp's other
great features.
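The resource technique itself fits in a few lines of CL. What follows
is only an illustration of the free-list idea -- the names are mine,
and G2's actual machinery is surely more elaborate:
;;; Sketch of a `resource': recycle scratch vectors through a
;;; free-list instead of consing fresh ones (and waking the GC).
(defvar *scratch-pool* '())
(defun allocate-scratch ()
  "Pop a vector off the free-list, or make one if the list is empty."
  (or (pop *scratch-pool*)
      (make-array 1024 :element-type '(unsigned-byte 8))))
(defun free-scratch (vector)
  "Return VECTOR to the free-list for reuse."
  (push vector *scratch-pool*))
Wrap the allocate/free pair in an unwind-protect-style with- macro and
the bookkeeping disappears from the call sites entirely.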
> >Moreover, the intermediate language used by the GNU GCC compiler (RTL)
> >looks a lot like...guess what?
>
> GCC's RTL syntax may have lots of parentheses, but that's its only
> connection with Lisp.
I'm afraid that that's simply untrue. The connection with Lisp is on a
deep mathematical/logical level. Remember, RMS (Richard Stallman) is
one of the old MIT Lisp hackers. One of his significant claims to fame
is that when the Lisp Machines that MIT created went commercial in the
form of Symbolics, Inc. and Lisp Machines, Inc. being spun off by
various denizens of the MIT AI Lab, RMS would take every new release of
Genera that Symbolics put out, reverse engineer it, and reimplement the
results, which he would then provide to LMI. He sincerely believes
that technology should be freely available to anyone who wants it.
But I digress. RTL is related to Lisp inasmuch as they both derive
directly from the Lambda Calculus. As a theoretical point, it's been
understood for some time that if you're willing to express everything
about a program in terms of the Lambda Calculus, some wonderful
optimization opportunities arise. One reason that this remained a
theoretical consideration for so long is the difficulty inherent in
representing side-effects in the Lambda Calculus, and popular languages
such as C are notorious for their reliance upon side-effects. C adds
insult to injury by allowing aliases--that is, indirect side-effecting
through pointers.
Nevertheless, GCC manages to compile C to RTL, a Lambda-Calculus
derivative, at which point it can do many of the optimizations that one
would expect for a language based on the Lambda Calculus. It then
translates the RTL to assembly language for the target processor, and
hands the results off to the system's assembler.
Note that _all optimizations that GCC does are done to the RTL, not the
assembly language_. And GCC is still considered one of the best
optimizing C compilers around--generally better than the C compiler
that many vendors include with their system! The wiser vendors have
taken to including the latest GCC with their system--Sun Microsystems
and NeXT come to mind.
> So it's not clear to me why you made that remark.
> It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
> therefore using assembler dressed up in parentheses must be a
> reasonable way to write efficient Lisp code." Bzzt! Time for a
> reality check.
I believe that the point was that it's possible for a lisp-like
language (RTL) to generate efficient code, and that GCC is an existence
proof.
> --
> Mike Haertel <mi...@ichips.intel.com>
> Not speaking for Intel.
>Anyway, you seem to have completely missed Thomas' point, that
>the original poster's example of "efficient Lisp" wasn't Lisp at all.
>I sure can't find it anywhere in CLtL.
Since when is Lisp the same as what's in CLtL?
>I believe it's possible to make Lisp efficient, but dressing up
>assembly language with lots of irritating silly parentheses is
>*not* the way to go.
I agree.
>>Moreover, the intermediate language used by the GNU GCC compiler (RTL)
>>looks a lot like...guess what?
>
>GCC's RTL syntax may have lots of parentheses, but that's its only
>connection with Lisp.
But what is Lisp here? What's in CLtL?
I would agree that Lisp can do reasonably well on RISC machines,
but the point of Lisp machines was not just to make Lisp fast
but also to make it fast and safe at the same time and fast
without needing lots of declarations.
Recent Lisp implementations (especially CMU CL) have gone a fair
way towards making it easy to have safe, efficient code on RISC
machines, but it may always require a somewhat different way of
thinking. (Not a bad way, IMHO, but different from LM thinking
nonetheless.)
But what is this about "the way people wrote lisp systems in the 70s"?
What sort of thing do you have in mind? Lisps written in assembler
that could run in 16K 36-bit words? (Presumably not.)
So far as I know, this is not true. I do know that the early papers about the
use of "Register Transfer Language" in portable back-ends make no mention of
any such semantic tie-in, and the actual implementation was also quite
un-lispy (based on string processing of a fairly conventional assembly
format).
I'm not familiar with GCC internals, but Stallman's decision to use the term
RTL doesn't suggest a radical new semantic foundation.
There has been some discussion (such as in Guy Steele's Lambda papers) of
using lambda and continuation passing style as the ultimate target independent
intermediate format (even for non-Lisp languages), but so far as I know CPS
has only been used in basically non-optimizing Scheme and ML compilers.
>I believe that the point was that it's possible for a lisp-like
>language (RTL) to generate efficient code, and that GCC is an existence
>proof.
If you consider Lambda to be what makes a language "Lisp-like", then I would
agree that efficient code could easily be generated for C-with-Lambda (or
C-with-lambda-and-parens) especially if upward closures were illegal.
Performance-wise, the big problems with Lisp variants such as Common
Lisp and Scheme are that:
-- Basic operations such as arithmetic are semantically far removed
from the hardware,
-- Dynamic memory allocation is done by the runtime system, and is thus
not under effective programmer control, and
-- A run-time binding model tends to be used for variable references and
function calls.
Inefficiency is a natural consequence of programming using high-level
operations, though this inefficiency can be overcome with a fair degree of
success by compiler technology and iterative tuning.
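A two-line illustration of the arithmetic point -- a sketch only, since
the exact code a compiler emits varies by implementation:
;;; Generic + must dispatch at run time and cope with bignum overflow:
(defun generic-add (x y) (+ x y))
;;; Declared fixnum + can compile down to a machine ADD (sketch;
;;; SAFETY 0 also removes the type checks, for better or worse):
(defun fixnum-add (x y)
  (declare (fixnum x y) (optimize (speed 3) (safety 0)))
  (the fixnum (+ x y)))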
I don't hold out much hope for the idea of overcoming the inefficiency of
high-level operations by the explicit use of scads of low-level operations,
especially when the operations are implementation dependent. It just gives up
too many of the reasons for wanting to use a high-level language.
I believe that the primary key to adequate performance in dynamic languages
like Lisp is adequate tuning tools and educational materials. Getting good
performance in dynamic languages currently requires far too much knowledge
about low-level implementation techniques. Wizards have too long claimed that
"Lisp is just as efficient as C" --- although Lisp may be highly efficient in
the hands of the wizards, the vast majority of programmers who attempt Lisp
system development don't come anywhere near that level of efficiency.
Rob
This turns out not to be true as far as I can tell. I'm not a
compiler-design (Lisp or otherwise) expert, but Lisp compilers can do
very well on RISC machines, and many of the old `lisp'
architectures turned out to be not so good after all, or rather, they
mesh well with the way people wrote lisp systems in the 70s, but they
don't write them like that any more, they write them better.
--tim
>| (defun copy-a-to-b (a b)
>| (sys:with-block-registers (1 2)
>| (setf (sys:%block-register 1) (sys:%set-tag (locf (aref a 0))
>| sys:dtp-physical)
>| (sys:%block-register 2) (sys:%set-tag (locf (aref b 0))
>| sys:dtp-physical))
>| (loop repeat (ash (length a) -2) do
>| (let ((a (sys:%block-read 1 :prefetch t))
>| (b (sys:%block-read 1 :prefetch t))
>| (c (sys:%block-read 1 :prefetch nil))
>| (d (sys:%block-read 1 :prefetch nil)))
>| (sys:%block-write 2 a)
>| (sys:%block-write 2 b)
>| (sys:%block-write 2 c)
>| (sys:%block-write 2 d)))))
>|
>| So, we've used lisp and not assembler.
>
>Looks like assembler to me.
Really? You must be used to pretty fancy macro assemblers (with
loop macros, etc).
>The need to write code like this in Lisp in order to achieve good
>performance
But it's not necessary to write code like that in order to
achieve _good_ performance in Lisp.
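For instance, here is roughly what a portable version of that copy
loop looks like -- a sketch only, assuming an implementation (CMU CL,
say) that open-codes access to specialized arrays:
;;; Portable CL, no system-specific subprimitives: with these
;;; declarations a good compiler emits a tight, non-consing loop.
(defun copy-a-to-b (a b)
  (declare (type (simple-array (unsigned-byte 32) (*)) a b)
           (optimize (speed 3) (safety 0)))
  (dotimes (i (length a) b)
    (setf (aref b i) (aref a i))))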
>>> : So, we've used lisp and not assembler. [...]
>>> And why, pray tell, would I wish to write this nearly indecipherable
>>> mess of Lisp code instead of 16 lines of perfectly readable assembler?
>>> This does seem like the wrong tool for a simple task.
>>It's an existence proof that the original assertion--that writing
>>`real-time' code in Lisp is nearly or truly impossible--is a bald-faced
>>falsehood. [...]
>The barefaced falsehood is the assertion that it is easy or even
>possible to achieve real-time behavior in any of the popular
>commercial Common Lisp implementations available on stock hardware,
>especially while retaining the features of lisp that are usually
>presented as its advantages - abstraction, ease of use, ease of
>debuggability, etc.
Let's be clear about this. Lisp is not the same as "the popular
commercial Common Lisp implementations available on stock hardware".
> But suggesting that lisp is just as reasonable a choice as
>assembler or C for implementing things like device-drivers on typical
>hardware / software configurations is simply ludicrous.
You could say "suggesting that any of the popular commercial
Common Lisp implementations available on stock hardware is just
as reasonable a choice ..."
I suspect no one would disagree with _that_.
>Note also that Unix itself is not particularly well-suited to
>real-time applications. Adding the overhead of supporting the typical
>lisp implementation's run-time system (especially its
>memory-management mechanisms) to the problems already inherent in Unix
>for the type of application under discussion only exacerbates the
>problems.
Now it's "the typical Lisp implementation" that's said to be losing.
The various terms are not interchangeable.
Note that neither of these applications is a traditional Lisp one ---
there is not a list in sight.
The syntax for declarations is *Awful*, but they only need to be added
to the 10% of code that takes 90% of the time. In fact, most of the
declarations I have used make little difference. C programmers might prefer
(Fixnum ((I 10)) ...) or (Let+ ((I Fixnum 10)) ...)
to
(Let ((I 10)) (Declare (Fixnum I)) ...)
Personally I prefer Let+ with an optional third element in each
binding, but it is easy to write your own macro (try doing that in
C++!); a sketch follows.
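Here is one possible Let+ along those lines -- a sketch, not any
standard macro; each binding is either (var init) or (var type init):
(defmacro let+ (bindings &body body)
  "LET whose bindings may carry a type: (VAR INIT) or (VAR TYPE INIT)."
  (let ((vars '()) (decls '()))
    (dolist (b bindings)
      (if (= (length b) 3)
          (destructuring-bind (var type init) b
            (push (list var init) vars)
            (push (list 'type type var) decls))
          (push b vars)))
    `(let ,(nreverse vars)
       ,@(when decls (list (cons 'declare (nreverse decls))))
       ,@body)))
;;; (Let+ ((I Fixnum 10)) ...) expands into the Declare form shown above.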
The advantage of Lisp over other interpretive languages is that it can
be, and IS, compiled efficiently. However, Microsoft has dictated that
some programs are to be written in Visual Basic, and others in C++,
impedance mismatch being good for the soul, so who are we to argue?
IMHO the lisp community has only itself to blame for not providing a
standard option of a conventional syntax --- syntax is always more
important than semantics.
Anyway here's the code.
-------- Primes ---------
#include <stdio.h>
#include <assert.h>
int prime (n)
int n;
/*"Returns t if n is prime, crudely. second result first divisor."*/
{ int divor;
for (divor=2;; divor++)
{ /*printf("divor %d n/divor %d n%%divor %d ", divor, n/divor, n%divor);*/
if (n / divor < divor) return 1;
if (n % divor == 0) return 0;
} }
main(argc, argv)
int argc;
char **argv;
{ int n, sum=0, p, i;
assert(argc == 2);
n = atoi(argv[1]); printf("n %d, ", n);
for(i=1; i<=10; i++) { sum =0;
for (p=0; p<n; p++)
{ /*printf("\nprime(%d): %d ", p, prime(p));*/
if (prime(p))
sum += p * p;
}}
printf("Sum %d\n", sum);
}
(declaim (optimize (safety 0) (speed 3)))
(declaim (start-block primes))
(declaim (inline prime))
(deftype unum () '(unsigned-byte 29))
(defun prime (nn)
"Returns t if n is prime, crudely. second result first divisor."
(declare (type unum nn))
(do ((divor 2 (+ divor 1)))
(())
(declare (type unum divor) (inline Floor))
(multiple-value-bind (quot rem) (floor nn divor)
(declare (type unum quot rem))
(when (< quot divor) (return t)) ; divor > sqrt
(when (= rem 0) (return (values nil divor))) )))
(defun primes (n)
"Returns sum of square of primes < n, basic algorithm."
(declare (type unum n))
(let ((sum 0))
; (declare (integer sum))
(dotimes (i 10)
; (print sum)
(setf sum 0)
(do ((p 0 (the t (1+ p))))
((>= p n))
(declare (type unum p))
(when (prime p)
(incf sum (* p p)) )))
sum))
(declaim (end-block))
% crude prime tester.
prime(N):- test(N, 2), !, fail.
prime(N).
test(N, M):- N mod M =:= 0.
test(N, M):- J is M + 1, J * J =< N, test(N, J).
primes(P, S):- prime(P), Q is P - 1, Q > 0, primes(Q, T), S is T + P * P.
primes(P, S):- Q is P - 1, Q > 0, primes(Q, S).
primes(P, 0).
------------ Neural Net ---------
#include <stdio.h>
#include <stdlib.h>   /* for random() */
#include <math.h>
#define size 5
float w[size][size];
float a[size];
float sum;
main()
{
int epoch, i,j ;
for (i=0; i< size; i++)
for (j=0; j< size; j++)
w[i][j] = 0.0;
for (epoch=0; epoch < 10000; epoch++){
for (i=0; i< size; i++)
a[i] = (float) (random()%32000)/(float) 32000 * 0.1;
for (i=0; i< size; i++)
for (j=0; j< size; j++)
w[i][j] += a[i] * a[j];
for (i=0; i< size; i++){
sum = 0.0;
for (j=0; j< size; j++)
sum += a[i] * w[i][j];
a[i] = 1.0/(1.0 - exp(sum));
};
}
}
;;; simon.lisp -- test of neural simulations
;;;
;;; Simon Dennis & Anthony Berglas
;;; Library stuff -- Example of simple language extensions.
;;; *NOT* NEEDED FOR EFFICIENCY, JUST CONVENIENT
(defmacro doftimes ((var max &rest result) &body body)
"Like DoTimes but var declared Fixnum."
`(DoTimes (,Var ,Max ,@result)
(Declare (Fixnum ,Var))
,@Body))
;; Note that this macro could expand code for fixed loops.
(Eval-When (eval load compile)
;; [a b c] -> (Aref a b c)
(defun AREF-READER (Stream Char)
(declare (ignore char))
(Cons 'AREF (Read-Delimited-List #\] Stream T)) )
(set-macro-character #\[ #'aref-reader Nil)
(set-macro-character #\] (get-macro-character #\)) Nil) )
;;; The program.
(declaim (optimize (safety 0) (speed 3)))
(defconstant size 5)
(defvar *seed* *random-state*)
(defun main()
;; initialize the weight matrix
(let ((w (make-array (list size size) :element-type 'SHORT-FLOAT :initial-element 0s0))
(a (make-array size :element-type 'SHORT-FLOAT)) )
(setf *random-state* (make-random-state *seed*))
(doftimes (epoch 10000)
;; make new activation vector
(doftimes (i size)
(setf [a i] (random 0.1)))
;; update the weights
(doftimes (i size)
(doftimes (j size)
(setf [w i j] (+ [w i j] (* [a i] [a j]))) ))
;; update the activations
(doftimes (i size)
(let ((sum 0s0))
(declare (short-float sum) (inline exp))
(doftimes (j size)
(incf sum (the short-float (* [a i] [w i j] ))) )
(setf [a i] (/ 1 (- 1 (exp sum)))) )))
w))
--
Anthony Berglas
Rm 312a, Computer Science, Uni of Qld, 4072, Australia.
Uni Ph +61 7 365 4184, Home 391 7727, Fax 365 1999
|> Now, you may find the case of writing pieces of device-driver code for
|> a Lisp Machine a contrived example. Since I happen to find that
|> argument fairly compelling myself, let me just point out that there is
|> a commercial real-time expert system shell, called G2 if memory serves
|> me correctly, written in Common Lisp and running on stock hardware.
|> It's being used, among other things, to control the Biosphere 2
|> environment.
Well, G2 isn't REALLY written in Common Lisp per se, but in a Lisp where they
have very good control over the memory management (e.g. garbage-free floats).
They took Common Lisp, threw out what they didn't need, and rewrote what they did
(with some help from Lucid, the rumours say...)
Definitely, an impressive system, and shining proof that Lisp is indeed useful
in real-world applications.
--
David Kuznick - da...@ci.com (preferred) or dkuz...@world.std.com
When the world brings you down so take your time ___
Look round and see the most in time is where you're meant to be {~._.~}
For you are light inside your dreams ( Y )
For you will find that it's something that touches me. ()~L~()
- Endless Dream - YES (_)-(_)
From: ch...@shell.portal.com (Paul F. Snively)
Newsgroups: comp.lang.lisp
Date: 9 Aug 1994 14:13:48 GMT
Organization: tumbolia.com
References: <KANDERSO.94...@wheaton.bbn.com>
<CtpM0...@triple-i.com> <31j0ma$g...@hsdndev.harvard.edu>
<Cu0w0...@rci.ripco.com> <TMB.94Au...@arolla.idiap.ch>
<MARCOXA.94...@mosaic.nyu.edu>
<MIKE.94A...@pdx399.intel.com>
In article <MIKE.94A...@pdx399.intel.com>
mi...@ichips.intel.com (Mike Haertel) writes:
...
> So far, no one has commented on G2, the hard real-time real-Lisp expert
> system development tool. I may have to dig up old AI Expert/PC AI
> magazine articles about it. From what I understand, in a nutshell,
> they make heavy use of `resources' (that is, they maintain their own
> free-lists of various types in order to avoid garbage collection) and
> they use lots of type declarations in their code--which languages like
> C again _force_ you to do anyway. So by using C-like memory-management
> and type declarations, they win, and still get to use Lisp's other
> great features.
If I remember correctly, G2 basically goes to great lengths to provide
GC-free arithmetic and carefully uses resources to avoid "consing".
As usual the definition of "Real Time" must be taken into account. NYU
is a notorious Ada stronghold and there are many stories going around
about it. One of the most succulent ones is that of an Ada system
failing to meet its real-time specs - i.e. the tasks were missing their
deadlines. Well, it turned out that no matter how good the compilation
was (and Ada can potentially be optimized better than C), the
system still missed the deadlines. Of course the problem was the
scheduling policy. Hardly a matter of "efficiency of the language".
BTW, there is a portable implementation of Common Lisp Resources in
the Lisp Repository maintained by the never-thanked-enough Mark
Kantrowitz.
...
> So it's not clear to me why you made that remark.
> It sounds to me like you're saying "GCC's RTL looks a lot like Lisp,
> therefore using assembler dressed up in parentheses must be a
> reasonable way to write efficient Lisp code." Bzzt! Time for a
> reality check.
> I believe that the point was that it's possible for a lisp-like
> language (RTL) to generate efficient code, and that GCC is an existence
> proof.
Pretty much it. Thanks
Happy Lisping