Injecting some sense into the C++ vs. Lisp debate

Googleplexation is NOT an option!

Jul 14, 2000, 3:00:00 AM
With regard to the question of which language produces a faster
sequence of machine instructions from a given sequence of
human-readable code: if you disregard the _quality_ of the compiler
implementation, it all comes down to what information is available to
the compiler (and hence the optimizer) and what information can be
inferred.

C++ does provide more information to the compiler than Lisp, but at the
cost of clarity, often requiring substantially more cryptic code. Have
you ever seen examples of expression templates and template
metaprogramming? These produce highly optimized code, often resulting
in performance that is equivalent or better than Fortran performance on
the same problems, but are VERY difficult to read and even harder to
debug. The only thing now holding back C++ numerical performance (as
compared to Fortran) is pointer aliasing, but this is being addressed
with the restrict keyword.
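For readers who have not met template metaprogramming, here is a minimal, hedged sketch of the flavor of code being described (the `Dot` struct is invented for illustration, not taken from any particular library): the compiler expands the recursion at compile time, producing straight-line code with no loop test.

```cpp
#include <cstddef>

// Compile-time loop unrolling via template recursion: Dot<N>::eval
// expands into N multiply-adds with no runtime loop test.  This is
// the kind of code that is fast but "VERY difficult to read".
template <std::size_t N>
struct Dot {
    static double eval(const double* a, const double* b) {
        return a[N - 1] * b[N - 1] + Dot<N - 1>::eval(a, b);
    }
};

template <>
struct Dot<0> {  // base case terminates the recursion
    static double eval(const double*, const double*) { return 0.0; }
};
```

With optimization on, `Dot<4>::eval(a, b)` compiles down to four multiply-adds — the sort of thing that lets such code approach Fortran on numerical kernels.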

Other things to consider are quality of library implementation. How
often do we see so called benchmark comparisons that aren't really
comparing languages but are, in fact, comparing the quality of library
implementations?

With regards to *performance*, GC is only an issue when factors
involving memory locality (both spacial and temporal) come into play.
Naive C++ programs often have very poor memory performance
characteristics (along with lots of memory bugs too) and can benefit
greatly from plugin garbage collectors. Well written C++ programs,
where the programmer knows about memory management issues, exhibit
excellent memory performance.

Some of the other major problems that crop up when comparing languages
are:
1. Disparity of experience: Comparing code written by a good programmer
with code written by a poor programmer in another language.
2. Selective examples: Chosen examples that guarantee one language
looks better than another. It's hard to find language-neutral examples.
3. Confusing language clarity with code performance. You may be more
productive in one language than another, and be able to express code
more clearly, but that is a whole other issue.
4. Human nature.

- Peter.


Sent via Deja.com http://www.deja.com/
Before you buy.

David Hanley

Jul 14, 2000, 3:00:00 AM

"Googleplexation is NOT an option!" wrote:

> With regards to the question about which language produces a faster
> sequence of machine instructions from a given sequence of human
> readable code then, if you disregard the _quality_ of the compiler
> implementation, it all comes down to what information is available to
> the compiler (and hence the optimizer) and what information can be
> inferred.
>
> C++ does provide more information to the compiler than Lisp,

<snip>

I don't know if I'd say this is always true. Abstractions like mapcar
give more info to the compiler than a FOR loop. And LOOP's
COLLECT and other clauses are also more abstract, and
potentially can be threaded.

SISAL was a functional language that promise(s/d) more speed
than Fortran for parallelizable code.

You are right that templates can make very fast unrolled code,
but you can do the same with macros, and potentially more easily.
In Lisp I can make a loop macro that unrolls into Duff's device.
Hard to do in C/C++.

I suspect a sufficiently smart lisp compiler can do as well as any
other language.

dave


Jonathan Jean-Paul BAILLEUL

Jul 14, 2000, 3:00:00 AM
David Hanley wrote:
>
> "Googleplexation is NOT an option!" wrote:
>
> > With regards to the question about which language produces a faster
> > sequence of machine instructions from a given sequence of human
> > readable code then, if you disregard the _quality_ of the compiler
> > implementation, it all comes down to what information is available to
> > the compiler (and hence the optimizer) and what information can be
> > inferred.
> >
> > C++ does provide more information to the compiler than Lisp,
>
> <snip>
>
> I don't know if i'd say this is always true. Abstractions like mapcar
> are giving more info to the compiler than a FOR loop. And LOOP's
> COLLECT and other clauses are also more abstract, and
> potentially can be threaded.
>
> SISAL was a functional language that promise(s/d) more speed
> than fortran for parallelizable code.
>
> You are right that templates can make very fast unrolled code,
> but you can do the same with macros, and potentially easier.

Excuse me, could you clarify that point, please?

> In LISP I can make a loop macro that unrolls into a duff device.
> Hard to do in C/C++.

We managed to "unroll" iterations of a nested 'for' form in order to save
computation time (tests and increments at each step).
Is that what you're thinking of?

>
> I suspect a sufficiently smart lisp compiler can do as well as any
> other language.

Do you know of any very efficient compilers?

>
> dave

--
----------------------------------------------
Jonathan BAILLEUL (bail...@emi.u-bordeaux.fr)
Maitrise Informatique, Universite Bordeaux I
Currently attending Berkeley Summer Session Courses CS61a/CS3

Kee

Jul 15, 2000, 3:00:00 AM
may I suggest writing in assembly if you are into performance?

I kinda like lisp; it lets me think in a lispy way, especially when it
throws NIL at me.

--
Kee Nam.
kn...@home.com

--- Knowledge is Power -- Francis Bacon
"Jonathan Jean-Paul BAILLEUL" <cs61...@cory.eecs.berkeley.edu> wrote in
message news:396F8CAC...@cory.eecs.berkeley.edu...

Reini Urban

Jul 16, 2000, 3:00:00 AM
David Hanley wrote:
>I suspect a sufficiently smart lisp compiler can do as well as any
>other language.

I want to find more references on the "sufficiently smart compiler" myth
history. Is this still a topic of research and interest today?

The term itself seems to be lisp/scheme related only on a websearch
(mainly in CLHS writeup issues), but didn't C++/Java folks use it also?

The jargon file doesn't explicitly mention it.
Only found "pessimizing compiler".
--
When C++ is your hammer, everything looks like a thumb. Steven M. Haflich

Christopher J. Vogt

Jul 16, 2000, 3:00:00 AM
Reini Urban wrote:
>
> David Hanley wrote:
> >I suspect a sufficiently smart lisp compiler can do as well as any
> >other language.
>
> I want to find more references on the "sufficiently smart compiler" myth
> history. Is this still a topic of research and interest today?

I'm not sure what you mean by referring to this as a "myth".

> The term itself seems to be lisp/scheme related only on a websearch
> (mainly in CLHS writeup issues), but didn't C++/Java folks use it also?

I remember hearing (and no doubt using) this expression around Symbolics in the
mid to late '80s, and the CLHS writeup you mention was written by Moon in '88,
so that is consistent. I have no recollection of ever reading this expression
in a published paper.

Martin Cracauer

Jul 17, 2000, 3:00:00 AM
jam...@volition-inc.com (James Hague) writes:

>On Fri, 14 Jul 2000 12:51:49 -0600, David Hanley <d...@ncgr.org> wrote:

>>I suspect a sufficiently smart lisp compiler can do as well as any
>>other language.

>While that may be true, the trend in compiler writing has been toward
>simplicity and reliability and away from aggressive optimization (in
>the 1970s, optimization was seen as more important).

That's my impression, too.

The free software C/C++ folks loudly complained about gcc's lack of
support for complex CPU instruction scheduling; then some examples of
such compilers appeared (pgcc, Intel's gcc tuning, etc.) and they
damaged code quality, correctness and portability so badly that people
are back to the slow progress of "the" gcc. As discussed before in
this thread, typical C++ programs by non-top programmers are usually
hopelessly slow for reasons outside the compiler's control anyway.

For the CMUCL compiler, most of the progress made over the last few
years has been support for a wider range of programmer-controlled
choices, i.e. more data types you can declare explicitly and get native
instructions for. I have not seen compiler features moving towards that
"sufficiently smart compiler" that analyses your code enough to draw
new useful conclusions (i.e. what Stalin does for Scheme).

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <crac...@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/

Martin Cracauer

Jul 17, 2000, 3:00:00 AM
Googleplexation is NOT an option! <deja...@my-deja.com> writes:

>With regards to the question about which language produces a faster
>sequence of machine instructions from a given sequence of human
>readable code then, if you disregard the _quality_ of the compiler
>implementation, it all comes down to what information is available to
>the compiler (and hence the optimizer) and what information can be
>inferred.

>C++ does provide more information to the compiler than Lisp,

No. For one thing, most C++ libraries are opaque to the compiler: when
the compiler sees some code that uses a library and the implementation
of this library is not known at compile time, the compiler must not use
any special knowledge about the implementation of that library. It has
to emit the code as-is.

A Common Lisp compiler might know what - say - this lisp data type
thing is about and might generate code that shortcuts what the
programmer (or the macro expansion) wrote down with knowledge how this
data type is being implemented.

>but at the
>cost of clarity, often requiring substantially more cryptic code. Have
>you ever seen examples of expression templates and template
>metaprogramming?

I've seen code that computes the sine of floats, unrolls iterations
at compile time and similar stuff. The implementation looks horrible,
and uses features that are optional in the C++ standard and erroneously
implemented in existing compilers.

IMHO, loop unrolling metaprograms are only useful when the
macro-expansion can say "OK, with more than X statements in each
expanded instance, we prefer to emit runtime loop code". Such a thing
is not possible in C++ template compile-time computing.

>With regards to *performance*, GC is only an issue when factors
>involving memory locality (both spacial and temporal) come into play.

The problem here is that various tradeoffs are being made. The
low-interruption GCs often impose a higher overall overhead. Since
the choice of GC must often be supported by the layout of data
structures in memory, it influences the overall compiler and runtime
design and isn't really switchable (at least, being switchable doesn't
bring the full advantages of the switched-to scheme).

The C++ tradeoff is far from optimal either since in practice the lack
of GC leads to coding style that is slow as well.

Also, a copying GC is clearly in a good position to *improve* locality
of reference for data. The heap mess left by long use of malloc() with
frequent switching between large-chunk and small-chunk allocations is
considerable. E.g. get some ASCII data, do a lot of string
mallocing, build some bigger rectangular data from it, free the
strings, build some more (meta-)data, then parse more.

>Naive C++ programs often have very poor memory performance
>characteristics (along with lots of memory bugs too) and can benefit
>greatly from plugin garbage collectors.

Since the C++ GC is not allowed to move objects, a large part of the
potential speed benefit is missed, see above.

>Well written C++ programs,
>where the programmer knows about memory management issues, exhibit
>excellent memory performance.

Yes.

Another difference is that in C derivates (or any other language
without GC, for that matter) the reuse of already-allocated buffers
provided by the caller is an often-used paradigm in library design.

The CL programmer going towards constant-space computing often needs
to reimplement things that are already in the CL library
(i.e. strings).

Again, bad programmers will be lost when merely hoping, instead of
knowing, that the preallocated buffer will be large enough. Even more
so in C, where you don't get overflow checking even when not compiling
for speed. In a way, you could argue that constant-space computing
along these lines must be forbidden in C because of its lack of
checking. And the safe way of returning newly allocated memory from
called functions also isn't optimal in a GC-less language...

>Some of the other major problems that crop up when comparing languages
>are:
>1. Disparity of experience: Comparing code written by a good programmer
>with code written by a poor programmer in another language.

I don't have one example of a comparable program written in a C
derivate and in CL, although I have been looking for 10 years now.

The problem here is that a second implementation of a problem by the
same person will always be more compact. He/she will know which
generality, planned for at the beginning of the first implementation,
turned out not to be used by the end of the first run. Or the second
implementation is a more general program.

Raymond Toy

Jul 17, 2000, 3:00:00 AM
>>>>> "Martin" == Martin Cracauer <crac...@counter.bik-gmbh.de> writes:

Martin> For the CMUCL compiler, most progress done over the last years is
Martin> support for a wider range of programmer-controlled choices, i.e. more
Martin> data types you declare explicitly and get native instructions. I have
Martin> not seen compiler features towards that "sufficiently smart compiler"
Martin> that analyses your code neough to draw new useful conplusions

Maybe because no one really knows the compiler anymore? Adding the
new data types was relatively easy (even when a difficult bootstrap is
needed) because the changes are straightforward and don't usually
produce major bugs preventing you from building CMUCL at all.

Ray

Martin Cracauer

Jul 17, 2000, 3:00:00 AM
Raymond Toy <t...@rtp.ericsson.se> writes:

OK, the conclusions I draw from CMUCL might not be true for all
compilers. Still, in practice it seems that the existing source code
analysis that CMUCL does is already ahead of most other compilers
(lisp or not), and that's not a good sign for a timely arrival of the
"sufficiently smart compiler".

David Hanley

Jul 17, 2000, 3:00:00 AM

Jonathan Jean-Paul BAILLEUL wrote:

> David Hanley wrote:


> >
> > "Googleplexation is NOT an option!" wrote:
> >
> > > With regards to the question about which language produces a faster
> > > sequence of machine instructions from a given sequence of human
> > > readable code then, if you disregard the _quality_ of the compiler
> > > implementation, it all comes down to what information is available to
> > > the compiler (and hence the optimizer) and what information can be
> > > inferred.
> > >
> > > C++ does provide more information to the compiler than Lisp,
> >

> > <snip>
> >
> > I don't know if i'd say this is always true. Abstractions like mapcar
> > are giving more info to the compiler than a FOR loop. And LOOP's
> > COLLECT and other clauses are also more abstract, and
> > potentially can be threaded.
> >
> > SISAL was a functional language that promise(s/d) more speed
> > than fortran for parallelizable code.
> >
> > You are right that templates can make very fast unrolled code,
> > but you can do the same with macros, and potentially easier.
>
> Excuse me, could you precise that point, please?

Well, for example, in C++ I once wrote my own "sort"
function when I really needed speed in some code.
The reason my own 'sort' was faster was that it was a
template, and each type I used it with formed its own
special sort function at compile time. This function was
really fast because its comparison function was inlined,
eliminating the overhead of many virtual function calls.
You can do this in Lisp, though:

(defmacro make-sorter (name type func)
  `(defun ,name (vector)
     (declare (type (simple-array ,type) vector))
     ...... (if (,func (aref vector first) (aref vector second))) ....
     ))

So this function will, if the comparison function is declared inline,
make no function calls except recursive ones to itself. Even
the array accesses should be highly optimized.
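For comparison, a hedged sketch of the kind of templated C++ sort David describes (`sort_by` and `int_less` are made-up names, and the algorithm here is a plain insertion sort): because the comparison is a template parameter, each instantiation inlines it, avoiding an indirect call per comparison.

```cpp
#include <cstddef>

// The comparison function is a non-type template parameter, so the
// compiler can inline it into each instantiation of the sort --
// no function-pointer or virtual-call overhead per comparison.
template <typename T, bool (*Less)(const T&, const T&)>
void sort_by(T* v, std::size_t n)
{
    for (std::size_t i = 1; i < n; ++i) {   // simple insertion sort
        T key = v[i];
        std::size_t j = i;
        while (j > 0 && Less(key, v[j - 1])) {
            v[j] = v[j - 1];
            --j;
        }
        v[j] = key;
    }
}

inline bool int_less(const int& a, const int& b) { return a < b; }
```

`sort_by<int, int_less>(arr, n)` then stamps out an int-specialized sort, much as the make-sorter macro does in Lisp.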

> > In LISP I can make a loop macro that unrolls into a duff device.
> > Hard to do in C/C++.
>
> We went to "unroll" iterations of a nested 'for' form in order to spare
> computation time (tests and increments at each step) with success.
> Is that what you're thinking about?

Yes. Duff's device looks like this in C (assuming i > 0):

switch( i & 3 )
{
    do {
case 0: foo(i); --i;
case 3: foo(i); --i;
case 2: foo(i); --i;
case 1: foo(i); --i;
    } while( i > 0 );
}

So you only need to test the loop condition once every four iterations.
This can be a huge advantage in simple loops. Writing such a macro in
Lisp is easy.
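A compilable rendition of the device, for the curious (`step` stands in for `foo`, and a pass counter is added so the loop handles any n > 0):

```cpp
#include <cstddef>

static int calls = 0;                  // counts how many times step() ran
static void step() { ++calls; }

// Duff's device: the do/while and the switch interleave, so the loop
// condition is tested only once per four calls to step().
void run(std::size_t n)
{
    if (n == 0) return;
    std::size_t passes = (n + 3) / 4;  // times through the unrolled body
    switch (n % 4) {
    case 0: do { step();
    case 3:      step();
    case 2:      step();
    case 1:      step();
            } while (--passes > 0);
    }
}
```

The switch jumps into the middle of the unrolled loop body to handle the n % 4 leftover iterations on the first pass.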


> >
> > I suspect a sufficiently smart lisp compiler can do as well as any
> > other language.
>

> Do you know about very efficient compilers?

CMU-CL does a very good job, and outran the best the C compiler
could do in some tests on the SUN I was testing on.

dave


David Hanley

Jul 17, 2000, 3:00:00 AM

James Hague wrote:

> On Fri, 14 Jul 2000 12:51:49 -0600, David Hanley <d...@ncgr.org> wrote:
>

> >I suspect a sufficiently smart lisp compiler can do as well as any
> >other language.
>

> While that may be true, the trend in compiler writing has been toward
> simplicity and reliability and away from aggressive optimization (in
> the 1970s, optimization was seen as more important).

That seems to be true for *lisp* compilers, aside from CMU-CL.
The C compilers I use include very aggressive optimizations.
The java community is working very hard to make java code
run acceptably fast.

I think there is still a prevailing attitude that functional languages
are used for research and education.

dave


David Hanley

Jul 17, 2000, 3:00:00 AM

Reini Urban wrote:

> David Hanley wrote:
> >I suspect a sufficiently smart lisp compiler can do as well as any
> >other language.
>

> I want to find more references on the "sufficiently smart compiler" myth
> history. Is this still a topic of research and interest today?

Look at CMU-CL. It's highly competitive with a good C compiler, at
least on a SUN box.

dave


David Thornley

Jul 17, 2000, 3:00:00 AM
In article <8knn4v$q21$1...@nnrp1.deja.com>,

Googleplexation is NOT an option! <deja...@my-deja.com> wrote:
>With regards to the question about which language produces a faster
>sequence of machine instructions from a given sequence of human
>readable code

There's also the question of how important this is.

>then, if you disregard the _quality_ of the compiler
>implementation, it all comes down to what information is available to
>the compiler (and hence the optimizer) and what information can be
>inferred.
>

Yup.

>C++ does provide more information to the compiler than Lisp, but at the
>cost of clarity, often requiring substantially more cryptic code. Have
>you ever seen examples of expression templates and template
>metaprogramming?

I'm not completely sure what you're talking about here (I may have
done some of this under different names), but I really doubt that there's
anything that can't be done in CL with DEFMACRO.

>These produce highly optimized code, often resulting
>in performance that is equivalent or better than Fortran performance on
>the same problems, but are VERY difficult to read and even harder to
>debug.

What a template is good at doing, in C++, is providing some sort of
generic code without the overhead of function calls. You write the
code template, and the compiler stamps out the code with it, substituting
types as desired. Templates in general do nothing to improve performance.
They do reduce work and enhance readability.
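A tiny illustration of that stamping-out (the `clamp3` name is invented here): one template, and the compiler generates a separate copy per type used, substituting T each time.

```cpp
// The compiler generates a separate clamp3<int>, clamp3<double>, etc.
// from this one template -- genericity without per-call overhead.
template <typename T>
T clamp3(T lo, T x, T hi)
{
    return x < lo ? lo : (hi < x ? hi : x);
}
```

Calling `clamp3(0, 5, 3)` and `clamp3(0.0, -1.0, 3.0)` instantiates two distinct functions, each working directly on its own type.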

>Other things to consider are quality of library implementation. How
>often do we see so called benchmark comparisons that aren't really
>comparing languages but are, in fact, comparing the quality of library
>implementations?
>

Plenty of times.

>With regards to *performance*, GC is only an issue when factors
>involving memory locality (both spacial and temporal) come into play.

>Naive C++ programs often have very poor memory performance
>characteristics (along with lots of memory bugs too) and can benefit
>greatly from plugin garbage collectors. Well written C++ programs,
>where the programmer knows about memory management issues, exhibit
>excellent memory performance.
>

It's hard to write a "well-written" C++ program in this sense, and it
gets harder as the program gets larger. Once you've got several people
working on it, with people leaving for other companies and being replaced,
this gets very difficult indeed.

Further, this is just as true of Common Lisp. A good CL programmer
who wants to avoid garbage collection can do the same sorts of things
that the C++ programmer can.

>Some of the other major problems that crop up when comparing languages
>are:
>1. Disparity of experience: Comparing code written by a good programmer
>with code written by a poor programmer in another language.

Or comparing features. One popular argument in language or computer
system flame wars: "My system does X, and I don't know of a good way
for your system to do X, so mine's superior." (This isn't always
phrased this way, of course.)

>2. Selective examples: Chosen examples that guarantee one language
>looks better than another. Its hard to find language-neutral examples.

Um, what would a language-neutral example be between, say, Common
Lisp and C++?

>3. Confusing language clarity with code performance. You may be more
>productive in one language than another, and be able to express code
>more clearly, but that is a whole other issue.

And by far the more important. It is not important for most computer
programs to be optimizable by the compiler. If performance really
mattered, Microsoft would be dead. Windows 95 only became useful on
computers at least ten times as fast as necessary to run similar
systems (W95 offers nothing more than a combination of what the Mac OS,
which ran satisfactorily on the equivalent of a 286, and Linux can give
you).

What matters is getting the product out the door, where it can establish
market share and get you some revenue. Productivity is really, really,
important.

>4. Human nature.
>
Yup.

--
David H. Thornley | If you want my opinion, ask.
da...@thornley.net | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-

Robert Monfera

Jul 18, 2000, 3:00:00 AM
David Hanley wrote:

> Well, for example, in C++ i once wrote my own "sort"
> function when I really needed speed in some code.
> The reason my own 'sort' was faster was that it was a
> template, and each type i used it with formed it's own
> special sort function at compile time. This function was
> really fast because it's comparison function was inlined,
> eliminating the overhead of many virtual function calls.
> You can do this in lisp, though:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


>
> (defmacro make-sorter( name type func )
> (`defun ,name( vector )
> (declare (type (simple-array ,type) vector)
> ...... ( if (,func (aref vector first) (aref vector second))) ....
> )

You can do better than C and compile your highly specialized and
type-declared sort at *run-time*. For example, the user specifies
sorting by gross revenue, defined as (* unit-price quantity (1+
sales-tax-rate)). You can have this multiplication compiled into the
sorting algorithm directly. A perfect combination of dynamism (the
user has a lot of freedom) and native compilation.

Robert

Pierre R. Mai

Jul 18, 2000, 3:00:00 AM
Jonathan Jean-Paul BAILLEUL <cs61...@cory.eecs.berkeley.edu> writes:

> > You are right that templates can make very fast unrolled code,
> > but you can do the same with macros, and potentially easier.
>
> Excuse me, could you precise that point, please?

C++ templates are instantiated at compile time to produce C++ code,
which is then compiled. CL macros (not to be confused with the
abomination of preprocessor string-based macros in C/C++) are in
effect CL functions that are called at compile time to produce CL code
which is then compiled. Although the C++ people wanted a much more
restricted version of CL macros (which can in effect do anything that
can be done in CL at compile-time), they finally managed to produce
something that was

a) still Turing-complete,
b) had a horribly unreadable syntax even by C++ standards (including
the hacky reuse of < and >, which clashes with the presence of >>
as a lexical token)
c) was quite complex to implement correctly, as witnessed by the time
most C++ compiler vendors needed to do it,

yet was still of much less practical utility than CL macros.

In fact C++ templates are a very serious example of how having two
separate languages in a language is not a good thing at all. If your
programming language is any good for serious programming, it should be
good enough to be its own source-transformation language.[1]

CL is one of the very few languages that got that aspect right, by
providing access to the base language at read-time, compile-time,
load-time and run-time.

> > In LISP I can make a loop macro that unrolls into a duff device.
> > Hard to do in C/C++.

In the face of optimization, CL's compiler-macros (especially when
used in conjunction with implementation-dependent environment access
functions) allow unobtrusive behind-the-scenes application-specific
optimizations. See e.g. postings by Lieven Marchand, Joe Marshall and
others under the topic of 'compiler-macro example shortage' in March
of this year. To give another example, I've used compiler-macros to
efficiently implement the generation of various distributions of
random numbers, which allows mixing and matching RNGs and distribution
generators (by passing the RNG function into the DG), without boxing
of floats if the function passed in is a compile-time constant.

> > I suspect a sufficiently smart lisp compiler can do as well as any
> > other language.
>

> Do you know about very efficient compilers?

IMHO there isn't one very efficient compiler for all of CL, but there
are many compilers that are especially efficient for certain areas of
CL, while not being the best in other areas, etc. This is a dynamic
property influenced by the needs of the constituencies of the
respective compiler implementors/vendors.

The same is true when comparing the efficiency of CL vs. e.g. C
compilers: Just because e.g. multi-dimensional array access isn't as
micro-optimized in most CL implementations as in some C
implementations doesn't mean that it is intrinsically hard to optimize
the CL case as well or better than the C case, it's probably just a
measure of the performance needs of both constituencies. Given a
good vendor those are usually not critical problems.

Regs, Pierre.

Footnotes:
[1] Note that using your programming language as its own macro
language does _not_ imply that you have to keep the IDE around at
run-time. You can very well separate compilation environment and
run-time environment, even in CL, though most CL implementations
opt to do it otherwise. While having a fully fledged
implementation of your language available at compile-time adds
some complexity to the compiler, it's as nothing compared to the
complexity added by C++ templates...

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Bruce Hoult

Jul 19, 2000, 3:00:00 AM
In article <87em4ra...@orion.bln.pmsf.de>, pm...@acm.org (Pierre R.
Mai) wrote:

> In fact C++ templates are a very serious example of how having two
> separate languages in a language is not a good thing at all. If your
> programming language is any good for serious programming, it should be
> good enough to be it's own source-transformation language.[1]

I agree that it's not a good thing to have two separate and complete
programing languages in one language, but I haven't yet seen a good
example of why you explicitly need the complete language at compile-time
in the first place, as opposed to having simple syntax-rearranging
macros followed by standard optimiser techniques such as partial
evaluation which may *implicitly* run code at compile time (or might
not).

I'm thinking here of the model used by Dylan.


On another newsgroup recently (http://deja.com/=dnc/article/638847080),
the following was offered as an example of what you couldn't do without
procedural macros:

(defmacro with-collect ((&rest collectors) &body forms)
  "Evaluate forms, collecting objects into lists.
Within the FORMS, you can use local macros listed among collectors,
they are returned as multiple values.
E.g., (with-collect (c1 c2) (dotimes (i 10) (if (oddp i) (c1 i) (c2 i))))
==> (1 3 5 7 9); (0 2 4 6 8) [2 values]"
  (let ((ret (mapcar (lambda (cc) (gensym (format nil "~s-RET-" cc)))
                     collectors)))
    `(let (,@ret)
       (macrolet ,(mapcar (lambda (co re) `(,co (form) `(push ,form ,',re)))
                          collectors ret)
         ,@forms
         (values ,@(mapcar (lambda (re) `(nreverse ,re)) ret))))))


My reply in Dylan was:

define macro with-collect
{ with-collect (?:name, ?more-names:*) ?:body end }
=> {let vals = #();
local method ?name(item) vals := pair(item, vals) end;
let (#rest results) = with-collect (?more-names) ?body end;
apply(values, reverse!(vals), results);}
{ with-collect () ?:body end } => { ?body; values() }
end;


-- Bruce

Erik Naggum

Jul 19, 2000, 3:00:00 AM
* Googleplexation is NOT an option! <deja...@my-deja.com>

| Some of the other major problems that crop up when comparing languages
| are:
| 1. Disparity of experience: Comparing code written by a good
| programmer with code written by a poor programmer in another
| language.
| 2. Selective examples: Chosen examples that guarantee one language
| looks better than another. Its hard to find language-neutral
| examples.
| 3. Confusing language clarity with code performance. You may be
| more productive in one language than another, and be able to express
| code more clearly, but that is a whole other issue.
| 4. Human nature.

These are all very fine ad hominem arguments against language
comparisons as such, but do you have anything that could be used to
determine whether these fine arguments do not apply to a particular
language comparison? That is, in your view, is a comparison of
languages possible or do you _always_ interpret a comparison of
languages as suffering from these problems?

#:Erik
--
If this is not what you expected, please alter your expectations.

Googleplexation is NOT an option!

Jul 19, 2000, 3:00:00 AM
In article <31730183...@naggum.net>,

Erik Naggum <er...@naggum.net> wrote:
>
> These are all very fine ad hominem arguments against language
> comparisons as such, but do you have anything that could be used to
> determine whether these fine arguments do not apply to a particular
> language comparison? That is, in your view, is a comparison of
> languages possible or do you _always_ interpret a comparison of
> languages as suffering from these problems?

There isn't a simple yes/no answer to this. I would say that they
could apply to a greater or lesser extent depending on the languages
being compared. I think language comparisons will always be fuzzy and
coloured by human preferences, perceptions and differences and that we
should acknowledge this rather than wasting too much time on pointless
arguments about whose preferred language is "better" in whatever
sense. But of course this will never happen. Everyone enjoys a good
argument, and the temptation to fuel the flames of a usenet
discussion is often irresistible, as you have so consistently
demonstrated, Erik :)

// Peter.

>
> #:Erik

Erik Naggum

Jul 21, 2000
* Googleplexation is NOT an option! <deja...@my-deja.com>
| Everyone enjoys a good argument and the temptation to fuel the the
| flames of a usenet discussion is often irresistable, as you have so
| consistently demonostrated Erik :)

Every master loves to see one of his students surpass him, yet is
unhappy about the direction the student takes when he no longer
feels he needs the master.

Reini Urban

Jul 25, 2000
Christopher J. Vogt wrote:
>> I want to find more references on the "sufficiently smart compiler" myth
>> history.

>I'm not sure what you mean by refering to this as a "myth".

People have been talking for ages about a non-existent but much-praised
technology. Given that it should be possible, I have yet to see the
implementation in all its aspects.

I know that dynamic objects object to that. Nevertheless this argument
is thrown in again every time Lisp is compared to C++.

CLOS and CL are different things.
CL without CLOS might have such a "sufficiently smart compiler"
(as other, stricter functional languages already do),
and for CLOS there might be a sufficiently smart compiler with regard
to the dynamic aspects.

But the compiler cannot be the general Swiss-army knife in such a
discussion: either it is possible and exists, or it is not possible and
does not exist. Lisp is far beyond the proof-of-concept kindergarten.
--
C++ ARM: "C programmers think memory management is too important to be
left to the computer. Lisp programmers think memory management is too
important to be left to the user."

Chris Page

Aug 4, 2000
in article 87em4ra...@orion.bln.pmsf.de, Pierre R. Mai at pm...@acm.org
wrote on 2000.07.18 07:58:

> In fact C++ templates are a very serious example of how having two separate
> languages in a language is not a good thing at all. If your programming
> language is any good for serious programming, it should be good enough to be
> its own source-transformation language.[1]

I think Gödel might have something to say about that. I agree with you to a
point, but clearly some operations require a meta language. Perhaps eval is
"meta-enough" for most purposes?
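Pierre Mai's point is easiest to see with a concrete sketch. The example below uses Python rather than Lisp, purely as an illustration (the `AddToMul` transformer and its toy rewrite rule are invented for this post, not anyone's proposal here): the standard `ast` module lets a program act as its own source-transformation language, a pale shadow of what Lisp macros do natively.

```python
import ast

# Source text we want to transform before running it.
source = "result = 2 + 3"

tree = ast.parse(source)

# A toy "macro": rewrite every addition into a multiplication.
class AddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform nested expressions first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = AddToMul().visit(tree)
ast.fix_missing_locations(tree)

# Compile and run the rewritten program in a fresh namespace.
namespace = {}
exec(compile(tree, "<transformed>", "exec"), namespace)
print(namespace["result"])  # the transformed program computes 2 * 3, i.e. 6
```

The same language writes the program, rewrites it, and runs it, which is the "one language, not two" property the C++ template sublanguage lacks.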

I believe Dylan provides an example of how macros can be easier to read and
write than they would be if they had to be written in the base language. I'm
sure someone might argue that this shows that Dylan's base language isn't
general enough to handle macros, but I find it much easier to read macros
when the language is sufficiently different to make it clear which part of
the macro is the macro and which part is the language on which it operates.

--
Chris Page
Mac OS Guy
Palm, Inc.

let mailto = concatenate( "Chris Page <page", "@", "best.com>");

Chris Page

Aug 4, 2000
in article jqaittg...@msdw.com, Russell McManus at
russell...@msdw.com wrote on 2000.08.04 14:51:

> Chris Page <pa...@best.NOSPAM.com> writes:
>
>> I agree with you to a point, but clearly some operations require a
>> meta language. Perhaps eval is "meta-enough" for most purposes?
>

> Can you give an example where a meta language is required? I've never
> come across such a need when writing Lisp macros, myself.

No, I can't. I'm only somewhat familiar with Lisp. I'm just saying that Kurt
Gödel wrote an interesting paper on the subject which leads me to believe
that, taken to the extreme, Lisp is not capable of performing all operations
on itself. The question, then, is whether or not eval is good enough for
most purposes. I suspect the answer may be yes, but it would be interesting
to see if there are any problems that cannot be solved with eval.
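Whether eval is "meta-enough" can at least be illustrated. Here is a hedged Python sketch (the `make_adder` helper is hypothetical, invented just for this demonstration): runtime evaluation lets a program build and run new code in its own language, which covers a great deal of everyday metaprogramming even though it cannot, of course, dodge the halting problem.

```python
def make_adder(n):
    # Generate source text for a new function at runtime, then evaluate
    # it -- the program manipulating and running its own language.
    src = "lambda x: x + {}".format(n)
    return eval(src)

add5 = make_adder(5)
print(add5(10))  # 15
```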

Tim Bradshaw

Aug 5, 2000
* Chris Page wrote:
> No, I can't. I'm only somewhat familiar with Lisp. I'm just saying that Kurt
> Gödel wrote an interesting paper on the subject which leads me to believe
> that, taken to the extreme, Lisp is not capable of performing all operations
> on itself.

Fortunately it's capable of performing as large a set of operations as
any other known language.

--tim

Russell Wallace

Aug 7, 2000
Chris Page wrote:
> I think Gödel might have something to say about that. I agree with you to a
> point, but clearly some operations require a meta language. Perhaps eval is
> "meta-enough" for most purposes?

Godel's theorem only shows there are things a formal system cannot prove
about itself. But we already know how that applies to programming
languages - as Turing pointed out, once one is computationally complete,
you can't prove whether a program in it will ever halt.

If you have a computationally complete macro system, it's in principle
no easier or harder to prove things about macros than if they were
written in any other such language.
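Turing's diagonal argument can itself be sketched as a program. In the Python sketch below (all names are illustrative), any candidate `halts` oracle is defeated by a program built to do the opposite of the oracle's own prediction:

```python
def trouble(halts):
    # Build the "diagonal" program for a candidate halting oracle:
    # halts(f) is supposed to return True iff calling f() terminates.
    def me():
        if halts(me):
            while True:   # oracle said we halt, so loop forever
                pass
        return "halted"   # oracle said we loop, so halt immediately
    return me

def check_oracle(oracle):
    # Show the oracle is wrong about its own diagonal program.
    me = trouble(oracle)
    if oracle(me):
        # Oracle predicts me() halts, but me() would then loop forever.
        return "wrong: me() would loop forever"
    # Oracle predicts me() loops, but me() halts immediately.
    return "wrong: me() halts with " + repr(me())

# Whatever an oracle answers about its diagonal program, it is wrong:
print(check_oracle(lambda f: True))   # wrong: me() would loop forever
print(check_oracle(lambda f: False))  # wrong: me() halts with 'halted'
```

The two constant oracles shown are just the simplest victims; the construction contradicts any oracle passed in, which is the point.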

> I believe Dylan provides an example of how macros can be easier to read and
> write than they would be if they had to be written in the base language. I'm
> sure someone might argue that this shows that Dylan's base language isn't
> general enough to handle macros, but I find it much easier to read macros
> when the language is sufficiently different to make it clear which part of
> the macro is the macro and which part is the language on which it operates.

Now, I do too, but that's a different matter :)

--
"To summarize the summary of the summary: people are a problem."
Russell Wallace
mailto:rwal...@esatclear.ie
http://www.esatclear.ie/~rwallace

Christopher R. Barry

Aug 7, 2000
Russell Wallace <rwal...@esatclear.ie> writes:

> Chris Page wrote:
> > I think Gödel might have something to say about that. I agree with you to a
> > point, but clearly some operations require a meta language. Perhaps eval is
> > "meta-enough" for most purposes?
>
> Godel's theorem only shows there are things a formal system cannot prove
> about itself.

I could bite, but I only have time to gnaw. (I of course assume by
"Godel's theorem" you mean the results of his 1931 paper: On Formally
Undecidable Propositions of Principia Mathematica and Related
Systems.) [Also, in the text that follows I shall put in quotes ("")
that which you are supposed to "suitably interpret".]

Godel's theorem does not apply to all "formal systems". It applies
only to those that entail a formalized arithmetic. (Such as Peano
Arithmetic.) And it is rather specific about just what can't be proven
within such systems, for it shows that the "tools" of an axiomatic
system entailing arithmetic are never sufficient to prove that the
system is free from contradiction. For example, you can't prove that
it is impossible to correctly derive from within "mathematics" a
sentence such as "1=0" or any other contradiction. To prove such a
system free from contradiction, you must use a "stronger" system. But
how do we know that this stronger system we used to prove the
consistency of our weaker one is itself free from contradiction? We
must use a yet stronger system to prove this stronger system
consistent, and an even stronger system to prove this yet stronger one
consistent, and so it goes on.... In the end, we can never have an
"absolute consistency proof" for an axiomatic "mathematics", for there
are these fundamental limitations inherent in the very nature and
structure of the axiomatic method.

Anyways, it seems a fair number of people that post to USENET think[@]
Godel's theorem has a lot to do with Turing machines and what-not, and
that Godel's theorem and the halting problem are the reason why many
features of the Sufficiently Smart Compiler are theoretically
impossible to implement. But their arguments, if existent, are seldom
tenable. [The halting problem DOES prove that one feature the SSC will
never have is to check whether the program it compiles halts or not.
No denying _that_!]

If one wants to prove that the SSC or Lisp can't have a specific
feature, one should generally look more towards the Theory of
NP-completeness[*] than the Theory of Incompleteness. [Practically
speaking.]


Christopher

[*] But while we're being formal, we have to mention that the Theory
of NP-completeness also has a major issue: the truth or falsity of
P=NP is still unproven, and until it is proven it is possible that it
may in fact be neither true nor false, but undecidable! (À la the
Continuum Hypothesis.)

[@] They are perhaps influenced by Chaitin's work on his web page, a
lot of which is speculation.... [There's nothing wrong with
speculation. In fact, speculation can be a very good thing. Just don't
mistake it for hard facts, which must be demonstrably true.]

Chris Page

Aug 7, 2000
I asked:

> I think Gödel might have something to say about that. I agree with you to a
> point, but clearly some operations require a meta language. Perhaps eval is
> "meta-enough" for most purposes?

The message to which I was responding used the magic word "all", which I
believe when interpreted as absolutely as possible cannot be true, given the
Incompleteness theorem. I was trying to be somewhat tongue-in-cheek by using
an off-hand comment. This seems to have sent people astray from my question.

Multiple people wrote messages (one in private email) about how Gödel
doesn't apply to programming languages. I guess I'll take that as a yes:
eval provides Lisp with the ability to manipulate itself to a great extent,
possibly far enough to do just about anything short of solving the halting
problem.

--
Chris Page
Mac OS Guy
Palm, Inc.

let mail-to = concatenate( "Chris Page <page", "@", "best.com>");

Chris Page

Aug 7, 2000
in article 87bsz5m...@2xtreme.net, Christopher R. Barry at
cba...@2xtreme.net wrote on 2000.08.07 03:54:

> Anyways, it seems a fair number of people that post to USENET think[@]
> Godel's theorem has a lot to do with Turing machines and what-not, and
> that Godel's theorem and the halting problem are the reason why many
> features of the Sufficiently Smart Compiler are theoretically
> impossible to implement. But their arguments, if existent, are seldom
> tenable. [The halting problem DOES prove that one feature the SSC will
> never have is to check whether the program it compiles halts or not.
> No denying _that_!]

Personally, I'm a great believer in the SSC. Computers ought to do as much
of my work as possible so that I can move on to Bigger and Better Things and
stop worrying about the details. Just because there are absolute limits to
computability, it doesn't mean that quite a lot of interesting and useful
computations aren't possible.

nco...@bridgetrix.com

Aug 7, 2000

It's fun to read the theoretical arguments for Lisp vs C++. As a Lisp
programmer trying to write a short C++ program, I have some more concrete
reasons:

Lisp has more useful functions. I want to find a file on the hard
drive to open. In MCL, DIRECTORY does fine, with suitable wildcards. In
C, the standard library has some sort of FIND function, but it may only
search within a particular directory. To find a file in the hard drive,
you have to write a tree search.
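Neil's point generalizes: in any language with a recursive-walk primitive, the tree search C forces you to hand-write collapses to one call. A Python illustration (this is not MCL's DIRECTORY; `find_file` is a made-up helper for this sketch):

```python
import os
import tempfile

def find_file(root, name):
    # os.walk does the recursive tree search the C standard library lacks.
    for dirpath, _dirnames, filenames in os.walk(root):
        if name in filenames:
            return os.path.join(dirpath, name)
    return None

# Demonstrate on a throwaway directory tree.
root = tempfile.mkdtemp()
deep = os.path.join(root, "a", "b")
os.makedirs(deep)
open(os.path.join(deep, "target.txt"), "w").close()

print(find_file(root, "target.txt"))  # .../a/b/target.txt
```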

Lisp has a much more helpful newsgroup. I recently posted a problem
and received the correct answer the next morning. When I posted to the
C/C++ newsgroups about how to use FIND, the reply was that this wasn't a suitable
subject for the group. It's not part of C/C++ proper. The various
Windows newsgroups weren't much help either.

Neil

Neil Cohen
Bridge Trix

Raffael Cavallaro

Aug 8, 2000
In article <39722a10.353014738@news>, rur...@sbox.tu-graz.ac.at wrote:

>I want to find more references on the "sufficiently smart compiler" myth
>history. Is this still a topic of research and interest today?
>

>The term itself seems to be lisp/scheme related only on a websearch
>(mainly in CLHS writeup issues), but didn't C++/Java folks use it also?
>

>The jargon file doesn't explicitly mention it.
>Only found "pessimizing compiler".

You might look under DWIM (Do What I Mean), e.g.

<http://www.science.uva.nl/~mes/jargon/d/dwim.html>

Ralph

--

Raffael Cavallaro, Ph.D.
raf...@mediaone.net

Xah

Aug 8, 2000
Raffael Cavallaro <raf...@mediaone.net> (08 Aug 2000 04:10:10 GMT) wrote:

> You might look under DWIM (Do What I Mean). eg.
>
> < http://www.science.uva.nl/~mes/jargon/d/dwim.html>
>
> Ralph
>
> --
>
> Raffael Cavallaro, Ph.D.
> raf...@mediaone.net


dear Raffle Cavalier P. H. D.,

DWIM stands for Dim Wit I Am, and pronounced Dim Wit. It is a fashionable
locution of the Perl republic. Trumped by priests like Tom Christiansen.

Like:

Dimwit: What's so good about your language?
Tom C: we provide the Dim Wit feature.
Dimwit: <drooling> cooool.

Xah
x...@xahlee.org
http://xahlee.org/PageTwo_dir/more.html


Raffael Cavallaro

Aug 8, 2000

>dear Raffle Cavalier P. H. D.,
>
>DWIM stands for Dim Wit I Am, and pronounced Dim Wit. It is a fashionable
>locution of the Perl republic. Trumped by priests like Tom Christiansen.
>
>Like:
>
>Dimwit: What's so good about your language?
>Tom C: we provide the Dim Wit feature.
>Dimwit: <drooling> cooool.

Xah,
If you actually bothered to read the link I gave, you would see that:

1. The term DWIM predates the existence of Perl by many years, so Perl
is *not* its original context.

2. That the link contains (in part) the following text:

"DWIM is often suggested in jest as a desired feature for a complex
program; it is also occasionally described as the single instruction the
ideal computer would have. Back when proofs of program correctness were
in vogue, there were also jokes about `DWIMC' (Do What I Mean,
Correctly)."

Which makes it relevant to Reini Urban's query about the mythical
"sufficiently smart compiler."

Continue your rambling, poorly written, invented spelling posts. They
are sometimes amusing (as in "we're laughing *at* you, not *with* you,
Xah"). Just don't expect anyone to take you seriously, that's all.

Cor Gest jr

Aug 9, 2000
Raffael Cavallaro <raf...@mediaone.net> writes:


> 2. That the link contains (in part) the following text:
>
> "DWIM is often suggested in jest as a desired feature for a complex
> program; it is also occasionally described as the single instruction the
> ideal computer would have. Back when proofs of program correctness were
> in vogue, there were also jokes about `DWIMC' (Do What I Mean,
> Correctly)."

Ouch, always thought it stood for:
Dumb-struck Wimp's Incantation Might Compile.....

cor


--
/*#include<rumor.h> Everything is relative.........even that */
/* If GNU/LINUX has no solution, you'v got the wrong problem */
/* Never install Slackware.........You might learn to use IT */
/* pa3...@amsat.org ICQ:1612519 http://clsnet.dynip.nl */

Torkel Franzen

Aug 9, 2000
cba...@2xtreme.net (Christopher R. Barry) writes:


> To prove such a
> system free from contradiction, you must use a "stronger" system.

Not stronger, just different.

> But
> how do we know that this stronger system we used to prove the
> consistency of our weaker one is itself free from contradiction? We
> must use a yet stronger system to prove this stronger system
> consistent, and an even stronger system to prove this yet stronger one
> consistent, and so it goes on.... In the end, we can never have an
> "absolute consistency proof" for an axiomatic "mathematics", for there
> are these fundamental limitations inherent in the very nature and
> structure of the axiomatic method.

Well, consistency proofs are as "absolute" as any other mathematical
proofs.

Espen Vestre

Aug 9, 2000
Raffael Cavallaro <raf...@mediaone.net> writes:

> 1. The term DWIM predates the existence of Perl by many years, so Perl
> is *not* it's original context.

Xerox Interlisp came with a reasonably good DWIM module: If you wrote
e.g. (labmda ...) it would guess that you meant lambda and give you
a few seconds to stop it from executing the corrected command.
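The Interlisp behaviour Espen describes amounts to fuzzy-matching a misspelled symbol against the symbols the system knows. A minimal, purely hypothetical sketch in Python using the standard library's `difflib` (the symbol list and the `dwim` function are invented for illustration, not Interlisp's actual algorithm):

```python
import difflib

KNOWN_SYMBOLS = ["lambda", "defun", "let", "cond", "setq"]

def dwim(symbol):
    # Return the closest known symbol, DWIM-style, or the input unchanged
    # when nothing is similar enough.
    matches = difflib.get_close_matches(symbol, KNOWN_SYMBOLS, n=1, cutoff=0.6)
    return matches[0] if matches else symbol

print(dwim("labmda"))  # lambda
print(dwim("cnod"))    # cond
```

A real DWIM module would, as Espen notes, also pause to let the user veto the correction before executing it.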
--
(espen)

Xah

Aug 10, 2000
Raffel Cavalier Phenomenal Dimwit,

please acquire reading comprehension.

Xah
x...@xahlee.org
http://xahlee.org/PageTwo_dir/more.html

