Writing a Compiler: Lisp or Scheme or Haskell?

iwrit...@gmail.com

May 16, 2009, 10:51:59 PM
Neither the source nor the target language is a very low-level language
(such as assembler or machine code). The target language is a
not-so-low-level language such as SQL or even C.

How do Lisp and Scheme stack up against Haskell for this sort of
thing?

thanks in advance.

Robert

Tamas K Papp

May 16, 2009, 11:16:54 PM

The question doesn't make much sense. Use the language you know. All
of these languages are appropriate for the job.

If you know all three, you can answer the question for yourself.

Tamas

Andrew Bailey

May 17, 2009, 4:19:39 PM

Common Lisp would have the most useful standard library functions out
of the three. I would choose Lisp if it were me, but I might be biased,
because I don't like Haskell very much anymore.

daniel....@excite.com

May 18, 2009, 2:58:04 PM
On May 16, 10:51 pm, iwriteco...@gmail.com wrote:


What is the target execution environment like? Could Lisp or Scheme or
Haskell be part of the execution environment? Could Lisp or Scheme or
Haskell be an intermediate language, then compiled by an already
existing compiler?

If you plan to hand code a recursive descent parser, that's pretty
clean in any language.

I suggest thinking about how you intend to represent the information
stored in your symbol tables and dictionaries to get a feel for which
language you'd feel most comfortable in.
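
For instance (a generic sketch; the names are mine, not from any
particular project), in CL a symbol table can start out as just a hash
table keyed on symbols:

(defvar *symbols* (make-hash-table :test #'eq))

(defun declare-var (name type)
  (setf (gethash name *symbols*) (list :kind :variable :type type)))

(defun lookup-var (name)
  (or (gethash name *symbols*)
      (error "Undeclared variable ~a" name)))

If the equivalent bookkeeping feels just as natural to you in Scheme or
Haskell, that's a good sign for those languages too.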

Also optimization: peephole techniques might be more pleasant when
considering target code as list data... Then, how do you like to
write such list-editing code?

Paul Tarvydas

May 19, 2009, 4:28:25 PM
iwrit...@gmail.com wrote:

> Neither the source nor the target language is a very low-level language
> (such as assembler or machine code). The target language is a
> not-so-low-level language such as SQL or even C.
>
> How do Lisp and Scheme stack up against Haskell for this sort of
> thing?
>

It seems to me that CL (Common Lisp) gives you more flexibility in
choosing and combining paradigms, letting you turn the dial across a
wide spectrum of languages - from quick hack to seriously complicated.

If you want to hack a small language together, you can use macros.
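
For instance - a deliberately tiny, made-up DSL, only to show the shape
of the approach - a three-operator arithmetic language can be compiled
into ordinary Lisp by a single macro, entirely at macro-expansion time:

(defmacro calc (expr)
  "Compile a tiny prefix calculator language into Lisp."
  (labels ((comp (e)
             (cond ((or (numberp e) (symbolp e)) e)
                   (t (destructuring-bind (op a b) e
                        (ecase op
                          (add `(+ ,(comp a) ,(comp b)))
                          (sub `(- ,(comp a) ,(comp b)))
                          (mul `(* ,(comp a) ,(comp b)))))))))
    (comp expr)))

(calc (add 1 (mul 2 3))) expands to (+ 1 (* 2 3)) and evaluates to 7.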

For a slightly larger hack language, where you get to define the
syntax, you can save yourself time by using CL's built in reader as
your scanner.
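
Concretely, the reader-as-scanner trick can be as small as this (a
sketch, assuming the surface syntax is made of s-expressions):

(defun read-program (stream)
  "Read every top-level form from STREAM into a list of parse trees."
  (loop for form = (read stream nil :eof)
        until (eq form :eof)
        collect form))

(with-input-from-string (s "(def f (x) (add x 1))")
  (read-program s))
=> ((DEF F (X) (ADD X 1)))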

Tree and term rewriting, etc., are obviously well-supported in CL (list
operations). Debugging output for intermediate trees is already
supported (PRINT, PPRINT, INSPECT, etc.).

For more complicated languages, I've built my own parsing tools (e.g.
an S/SL (syntax/semantic language) parser, another source-to-source
white-space preserving parser, and a parser based on PEG packrat
ideas) in short amounts of time (often just one weekend).

When I built a compiler with a non-traditional syntax, e.g. syntax
composed of diagrams (see www.visualframeworksinc.com), I used a
prolog-in-lisp to make parsing easier. I parsed the diagrams by
writing inference rules, e.g. 4 lines closed make a box, a box in this
particular context means so-and-so, a non-closed line is composed of a
number of line segments, a line joining two boxes means so-and-so,
etc. Then, once parsed, I could drop out of the "prolog" paradigm and
use more traditional compiler techniques (or, stay in prolog mode to
perform further semantic analysis).

And, as already mentioned, banging a peepholer together in CL is
easy. I use one to clean up generated code to be more human-
readable. In that project, I "emit" lines of code internally as
little lists onto a global list. Then clean up the global list with a
peepholer. Then print it out as final text.
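
A rough sketch of that arrangement (the names *CODE*, EMIT and the
single push/pop rule are mine, purely for illustration):

(defvar *code* '())   ; emitted lines, most recent first

(defun emit (&rest line)
  (push line *code*))

(defun peephole (code)
  "Drop adjacent PUSH/POP pairs on the same operand - together a no-op."
  (cond ((null (rest code)) code)
        ((and (eq (caar code) 'push)
              (eq (car (second code)) 'pop)
              (equal (cdar code) (cdr (second code))))
         (peephole (cddr code)))
        (t (cons (first code) (peephole (rest code))))))

After code generation, (peephole (reverse *code*)) yields the cleaned
instruction list, ready to be printed as text.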

In situations where you are outputting to textual target languages,
e.g. C (which, btw, turns out to be a really bad assembler language),
you are faced with issues like arranging for declaration-before-use in
the output code, which tends to drive you towards using multiple
output streams or multiple code generation passes. Lists and string
streams (with-output-to-string) can be convenient in such situations.
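
The two-stream trick might look like this (a sketch under my own naming,
not a drop-in implementation):

(defun emit-c-function (name gen-body)
  "GEN-BODY writes statements to one string stream and the declarations
they need to another; gluing the two in order guarantees
declaration-before-use in the emitted C."
  (let ((decls (make-string-output-stream))
        (body  (make-string-output-stream)))
    (funcall gen-body decls body)
    (format nil "void ~a(void) {~%~a~a}~%"
            name
            (get-output-stream-string decls)
            (get-output-stream-string body))))

(emit-c-function "f"
  (lambda (decls body)
    (format decls "  int tmp1;~%")
    (format body  "  tmp1 = 42;~%")))

returns the text of a C function with the declaration ahead of its use.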

My vote goes to CL.

pt

Jon Harrop

May 22, 2009, 4:29:20 PM

Badly. Metaprogramming benefits enormously from pattern matching. Lisp
offers nothing in that regard (and the Greenspun alternatives to modern
functional language implementations suck in comparison, e.g. fare-match).
Scheme has something, but nothing like as much as Haskell or ML.

You might also consider tools like Camlp4, ocamllex, ocamlyacc, menhir,
dypgen etc. for OCaml.

Lisp and Scheme do offer first-class symbols and EVAL but both are of little
practical value. Symbols are trivial to implement in Haskell or ML. EVAL
was long since made obsolete by VMs.

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u

Raffael Cavallaro

May 22, 2009, 5:37:19 PM
On 2009-05-22 16:29:20 -0400, the wet one <to...@toadhall.com> said:

> iwrit...@gmail.com wrote:
>> Neither the source nor the target language is a very low-level language
>> (such as assembler or machine code). The target language is a
>> not-so-low-level language such as SQL or even C.
>>
>> How do Lisp and Scheme stack up against Haskell for this sort of
>> thing?
>
> Badly. Metaprogramming benefits enormously from pattern matching.

Only if your one hammer is pattern matching.

> Lisp
> offers nothing in that regard (and the Greenspun alternatives to modern
> functional language implementations suck in comparison, e.g. fare-match).
> Scheme has something but nothing like as much as Haskell or ML.

cl-unification gives you match-case - pattern match all you like.

>
> You might also consider tools like Camlp4, ocamllex, ocamlyacc, menhir,
> dypgen etc. for OCaml.

But he was asking about Haskell, not the Irish ship of the desert. Hint
to the OP: the reason people like to use Lisp for metaprogramming is that
the metaprogramming language is the same as the programming language.

>
> Lisp and Scheme do offer first-class symbols and EVAL but both are of little
> practical value. Symbols are trivial to implement in Haskell or ML. EVAL
> was long since made obsolete by VMs.

Lisp and scheme offer macros, which do compile-time expansion; no need
for run-time eval.


--
Raffael Cavallaro, Ph.D.

Kaz Kylheku

May 22, 2009, 6:11:48 PM
On 2009-05-22, Jon Harrop <j...@ffconsultancy.com> wrote:
> Lisp and Scheme do offer first-class symbols and EVAL but both are of little
> practical value. Symbols are trivial to implement in Haskell or ML.

An example would go far here.

> EVAL was long since made obsolete by VMs.

EVAL converts a piece of source code into the execution that it represents.

The specification of EVAL does not rule out compiling; it is completely
abstract. A conforming Lisp implementation can define EVAL roughly like this,

(defun eval (form) (funcall (compile nil `(lambda () ,form))))

and so EVAL may be considered to be a convenience function to avoid this
more long-winded compiling setup. Some essentially define it this way. Corman
Lisp has no evaluator. Its eval spins up a function made up of x86 machine code
and jumps to it.

A VM (or real machine) handles executable forms, which must come from
somewhere. What the VM does is conceptually similar to eval, in that it
takes code and executes it, but it works with a translated representation of
the code that isn't directly written by programmers.

How do VMs obsolete translators that produce code for them?

Tamas K Papp

May 22, 2009, 6:20:28 PM
On Fri, 22 May 2009 17:37:19 -0400, Raffael Cavallaro wrote:

> On 2009-05-22 16:29:20 -0400, the wet one <to...@toadhall.com> said:
>

>> [usual toad bs snipped]


>
> But he was asking about haskell, not the irish ship of the desert. Hint
> to OP, the reason people like to use lisp for metaprogramming is that
> the metaprogramming language is the same as the programming language.

Hush, don't tell him things like that. His head might explode. Or
implode. I always forget which is the one that happens with a
vacuum.

Tamas

Jon Harrop

May 22, 2009, 6:46:16 PM
Raffael Cavallaro wrote:
> On 2009-05-22 16:29:20 -0400, the wet one <to...@toadhall.com> said:
>> iwrit...@gmail.com wrote:
>>> Neither the source nor the target language is a very low-level language
>>> (such as assembler or machine code). The target language is a
>>> not-so-low-level language such as SQL or even C.
>>>
>>> How do Lisp and Scheme stack up against Haskell for this sort of
>>> thing?
>>
>> Badly. Metaprogramming benefits enormously from pattern matching.
>
> Only if your one hammer is pattern matching.

A quick survey of metaprograms written with and without pattern matching
makes it clear how useful the technique is in this context.

>> You might also consider tools like Camlp4, ocamllex, ocamlyacc, menhir,
>> dypgen etc. for OCaml.
>
> But he was asking about haskell, not the irish ship of the desert. Hint
> to OP, the reason people like to use lisp for metaprogramming is that
> the metaprogramming language is the same as the programming language.

Most people do not like to use Lisp for metaprogramming because homogeneous
metaprogramming incurs all of the deficiencies of the host language (e.g.
your generated code is slow in the case of Lisp). Heterogeneous
metaprogramming is not only preferable but comparably simple if you use a
more appropriate language than Lisp.

>> Lisp and Scheme do offer first-class symbols and EVAL but both are of
>> little practical value. Symbols are trivial to implement in Haskell or
>> ML. EVAL was long since made obsolete by VMs.
>
> Lisp and scheme offer macros, which do compile-time expansion; no need
> for run-time eval.

A compiler's source program is generally only available at run-time so
compile time macro expansion is useless.

Jon Harrop

May 22, 2009, 6:50:52 PM

Or maybe just maybe you'll stop "contributing" here until you've completed
your degree and earned some credibility.

Raffael Cavallaro

May 22, 2009, 9:32:47 PM
On 2009-05-22 18:46:16 -0400, splashy <e...@eyeofnewt.com> said:

> A compiler's source program is generally only available at run-time so
> compile time macro expansion is useless.

So compilers can't use macros in their source code because the code
they will ultimately compile is not yet available when the compiler
itself is compiled? Really? That's what you're going with? Your willful
misinterpretation is another charming, but ultimately void, rhetorical
device.
--
Raffael Cavallaro, Ph.D.

Pillsy

May 22, 2009, 9:57:50 PM
On May 22, 6:50 pm, some jackass wrote:

> Or maybe just maybe you'll stop "contributing" here until you've completed
> your degree and earned some credibility.

Yeah, that worked just smashingly for you.

Sheesh,
Pillsy

Jon Harrop

May 23, 2009, 7:39:34 AM
Raffael Cavallaro wrote:
> On 2009-05-22 18:46:16 -0400, splashy <e...@eyeofnewt.com> said:
>> A compiler's source program is generally only available at run-time so
>> compile time macro expansion is useless.
>
> So compilers can't use macros in their source code because the code...

A strawman argument.

neptundancer

May 23, 2009, 8:11:21 AM
On May 23, 12:46 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Raffael Cavallaro wrote:
> > On 2009-05-22 16:29:20 -0400, the wet one <t...@toadhall.com> said:

Once again, you absolutely don't know what you are writing about.

Frank GOENNINGER

May 23, 2009, 8:49:04 AM
Jon Harrop <j...@ffconsultancy.com> writes:

> Most people do not like to use Lisp for metaprogramming because homogeneous
> metaprogramming incurs all of the deficiencies of the host language (e.g.
> your generated code is slow in the case of Lisp). Heterogeneous
> metaprogramming is not only preferable but comparably simple if you use a
> more appropriate language than Lisp.

Wrong.

If you don't understand this answer then please apply it to the
meta-level of your statement.

You don't know what the meta-level of your statement is?

> A compiler's source program is generally only available at run-time so
> compile time macro expansion is useless.

That's part of the answer to the question above.

Jon Harrop

May 23, 2009, 9:06:37 AM
Frank GOENNINGER wrote:
> Jon Harrop <j...@ffconsultancy.com> writes:
>> Most people do not like to use Lisp for metaprogramming because
>> homogeneous metaprogramming incurs all of the deficiencies of the host
>> language (e.g. your generated code is slow in the case of Lisp).
>> Heterogeneous metaprogramming is not only preferable but comparably
>> simple if you use a more appropriate language than Lisp.
>
> Wrong.

No.

Jon Harrop

May 23, 2009, 9:07:25 AM
neptundancer wrote:
> Once again, you absolutely don't know what you are writing about.

Then why are you unable to refute it?

gugamilare

May 23, 2009, 9:11:12 AM
On 23 May, 10:07, Jon Harrop <j...@ffconsultancy.com> wrote:
> neptundancer wrote:
> > Once again, you absolutely don't know what you are writing about.
>
> Then why are you unable to refute it?

Oh, yes, and your argument here:

On 23 May, 10:06, Jon Harrop <j...@ffconsultancy.com> wrote:
> Frank GOENNINGER wrote:
> > Jon Harrop <j...@ffconsultancy.com> writes:

> >> Most people do not like to use Lisp for metaprogramming because
> >> homogeneous metaprogramming incurs all of the deficiencies of the host
> >> language (e.g. your generated code is slow in the case of Lisp).
> >> Heterogeneous metaprogramming is not only preferable but comparably
> >> simple if you use a more appropriate language than Lisp.
>

> > Wrong.
>
> No.


>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u

... was much more convincing.

Jon Harrop

May 23, 2009, 9:55:23 AM
gugamilare wrote:
> Oh, yes, and your argument here:
>
On 23 May, 10:06, Jon Harrop <j...@ffconsultancy.com> wrote:
> > Frank GOENNINGER wrote:
> > > Wrong.
> > No.

>
> ... was much more convincing.

Frank's objection was entirely non-specific. What do you need convincing of?
That Lisp is unpopular? That heterogeneous metaprogramming is easy?

anonymous...@gmail.com

May 23, 2009, 10:01:12 AM
On May 22, 6:50 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> Or maybe just maybe you'll stop "contributing" here until you've completed
> your degree and earned some credibility.

said the shill.

gugamilare

May 23, 2009, 10:25:58 AM
On 23 May, 10:55, Jon Harrop <j...@ffconsultancy.com> wrote:
> gugamilare wrote:
> > Oh, yes, and your argument here:
>
> > On 23 May, 10:06, Jon Harrop <j...@ffconsultancy.com> wrote:
> > > Frank GOENNINGER wrote:
> > > > Wrong.
> > > No.
>
> > ... was much more convincing.
>
> Frank's objection was entirely non-specific. What do you need convincing of?
> That Lisp is unpopular? That heterogeneous metaprogramming is easy?

There is nothing you need to convince me or anyone here of. Do you
really think that we are only a bunch of amateurs that don't know how
to program? And clearly you are not a Lisp programmer, so you don't
have enough knowledge to say what Lisp is good or bad for. That is
what Frank was referring to as the "meta-level" of your statement - so,
his objection wasn't non-specific.

You won't say anything that hasn't been said here over and over among
zillions of flame wars. I don't know why people are so particularly
angry about Lisp (frustration, perhaps?), but, unfortunately, they
are; you are not the only one.

So, I am going to repeat this again: stop with whatever you are doing.
This will only end badly.

Pillsy

May 23, 2009, 10:59:48 AM
On May 23, 9:07 am, some sad loser wrote:

> neptundancer wrote:

> > Once again, you absolutely don't know what you are writing about.

> Then why are you unable to refute it?

Refuting you is about as useful as refuting an infestation of pubic
lice.

FOAD.
Pillsy

gugamilare

May 23, 2009, 11:53:15 AM

This was not a clever thing to say to someone who is already
criticizing Lisp. This is just another motive for him to start a flame
war.

Nicolas Neuss

May 23, 2009, 12:52:05 PM
gugamilare <gugam...@gmail.com> writes:

> This was not a clever thing to say to someone who is already criticizing
> Lisp. This is just another motive for him to start a flame war.

Nonsense. Please do us a favour and google for the previous appearances of
Jon Harrop in comp.lang.lisp. Probably the largest motive for JH's posts
is CL newbies taking him seriously and replying to him, quoting his spammy
consultancy email or web address.

Nicolas

Jon Harrop

May 23, 2009, 4:44:17 PM
gugamilare wrote:
> Do you really think that we are only a bunch of amateurs that don't know
> how to program?

When people here say things like "OCaml forces you to write interpreters
instead of compilers", do you really have to ask?

> And clearly you are not a Lisp programmer, so you don't have enough
> knowledge to say what Lisp is good or bad for.

In other words, you want to ignore anyone who chooses not to use Lisp
because they don't know what Lisp is bad for.

> You won't say anything that wasn't said here over and over among
> zillions of flame wars. I don't know why people are particularly so
> angry about Lisp (frustration, perhaps?), but, unfortunately, they
> are, you are not the only one.

I suggest you look up the flamewar I had with Thomas Fischbacher here. He
was a strong Lisp advocate until we debated it. Now he is a major OCaml
user.

> So, I am going to repeat this again: stop with whatever you are doing.
> This will only end badly.

There is nothing "bad" about highlighting misinformation.

Christopher C. Stacy

May 24, 2009, 4:09:41 AM
Jon Harrop <j...@ffconsultancy.com> writes:
> Most people do not like to use Lisp for metaprogramming because homogeneous
> metaprogramming incurs all of the deficiencies of the host language (e.g.
> your generated code is slow in the case of Lisp).

Lisp is slow?

Nicolas Neuss

May 24, 2009, 4:25:26 AM
Jon Harrop <j...@spamming.com> writes:

> I suggest you look up the flamewar I had with Thomas Fischbacher here. He
> was a strong Lisp advocate until we debated it. Now he is a major OCaml
> user.

So you might have won at least one convert - congratulations! In this
case, a report of Thomas Fischbacher on his experiences of both Lisp and
OCaml would really be of interest, at least to me. (ISTR he did not do
much Common Lisp, but came from Scheme; however, I might be wrong on that
one.)

Nicolas

Jon Harrop

May 24, 2009, 8:03:53 AM

Yes.

John Thingstad

May 24, 2009, 4:59:47 PM
On Sun, 24 May 2009 10:09:41 +0200, Christopher C. Stacy
<cst...@news.dtpq.com> wrote:

About twice as fast as Java. Though that depends on what you are doing
and how you program it. It is easy to make things 10 or even 100 times
slower than they could be if you don't know what you are doing, and it
is not always obvious why. Lisp I/O can be on the slow side. (Not
setting *print-pretty* to nil can cut the file write speed in half, for
instance, on some implementations.) The lisp efficiency model is
somewhat difficult to understand and its declaration system a bit,
erm, clunky. Nevertheless it is a powerful language in the hands of
someone who knows how to use it. (Unlike Jon Harrop aka Dr. frog)

---------------------
John Thingstad

Jon Harrop

May 25, 2009, 7:49:48 AM
John Thingstad wrote:
> On Sun, 24 May 2009 10:09:41 +0200, Christopher C. Stacy
> <cst...@news.dtpq.com> wrote:
>> Jon Harrop <j...@ffconsultancy.com> writes:
>>> Most people do not like to use Lisp for metaprogramming because
>>> homogeneous
>>> metaprogramming incurs all of the deficiencies of the host language
>>> (e.g.
>>> your generated code is slow in the case of Lisp).
>>
>> Lisp is slow?
>
> About twice as fast as Java...

Dream on.

Adlai

May 25, 2009, 8:28:23 AM
On May 25, 2:49 pm, Jon Harrop <j...@ffconsultancy.com> wrote:

> John Thingstad wrote:
> > About twice as fast as Java...
>
> Dream on.

http://www.norvig.com/java-lisp.html
http://www.norvig.com/python-lisp.html (here, scroll down to the
colorful chart)

It seems that Lisp ranges from being 10000% to 13% the speed of C++,
with a median of 60% as fast, while Java ranges from being 111% to
4.8% as fast as C++, with a median of 21% as fast.

On that chart, Lisp beats Java in every category but one, which is
also the category with the smallest margin.


- Adlai

anonymous...@gmail.com

May 25, 2009, 10:06:18 AM
On May 25, 8:28 am, Adlai <munchk...@gmail.com> wrote:
> On May 25, 2:49 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>
> > John Thingstad wrote:
> > > About twice as fast as Java...
>
> > Dream on.
>
> http://www.norvig.com/java-lisp.html
> http://www.norvig.com/python-lisp.html (here, scroll down to the
> colorful chart)
>
> It seems that Lisp ranges from being 10000% to 13% the speed of C++,
> with a median of 60% as fast, while Java ranges from being 111% to
> 4.8% as fast as C++, with a median of 21% as fast.
>
> On that chart, Lisp beats Java in every category but one, which is
> also the category with the smallest margin.
>
>  -  Adlai

I think that what is of more concern is that a grown man with a
(supposed) PhD wouldn't be able to come up with a more clever ploy for
trolling.

I don't believe he's made a constructive (even on topic) post to this
newsgroup yet.

Vend

May 25, 2009, 10:58:49 AM
On 17 May, 04:51, iwriteco...@gmail.com wrote:
> Neither the source nor the target language is a very low-level language
> (such as assembler or machine code). The target language is a
> not-so-low-level language such as SQL or even C.

You can't target SQL, since it isn't Turing complete.

> How do Lisp and Scheme stack up against Haskell for this sort of
> thing?

If you want simplicity, Scheme is probably the best choice, at least
as long as you exclude continuations. Implementing Scheme continuations
in any target language other than assembler will impose some
performance penalty and, depending on your compiler structure, might
be difficult.
If you don't require full standard compliance, you might choose not to
implement continuations, or to implement some restricted form of them,
as many Scheme implementations do.

If you want to write a compiler that produces efficient programs, you
might choose Haskell. A naive compiler wouldn't produce fast code, but
the language has room for lots of optimizations which would be
difficult or impossible to implement in Scheme or Common Lisp.

I would avoid Common Lisp; I don't know it very well, but it seems to
me quite complicated and non-uniform.

Vend

May 25, 2009, 11:01:13 AM
On 22 May, 23:37, Raffael Cavallaro
<raffaelcavall...@pas.espam.s.il.vous.plait.mac.com> wrote:

> Lisp and scheme offer macros, which do compile-time expansion; no need
> for run-time eval.

I don't think that Haskell would benefit much from Lisp-like macros,
since it already has lazy evaluation. Macros would be at most a
performance hack.

Tamas K Papp

May 25, 2009, 11:01:48 AM
On Mon, 25 May 2009 07:58:49 -0700, Vend wrote:

> On 17 May, 04:51, iwriteco...@gmail.com wrote:
>> Neither the source nor the target language is a very low-level language
>> (such as assembler or machine code). The target language is a
>> not-so-low-level language such as SQL or even C.
>
> You can't target SQL, since it isn't Turing complete.
>
>> How do Lisp and Scheme stack up against Haskell for this sort of thing?
>
> If you want simplicity, Scheme is probably the best choice, at least as
> long as you exclude continuations. Implementing Scheme continuations in any

> [...]

I was under the impression that he was looking for a language for
implementing his compiler, not asking which language would be the best
as a source language.

Tamas

Jon Harrop

May 25, 2009, 6:13:12 PM

Macros are still useful as a way to add new syntax.

Jon Harrop

May 25, 2009, 7:12:38 PM

The results you have cited are all many years out of date. They actually
pertain to Doug Bagley's original Computer Language Shootout from the 1990s
and, in particular, make no use of multicores whatsoever.

Today's shootout results indicate that Java is 1.1-8x faster than Lisp
(SBCL), i.e. Java is faster than Lisp in every single test:

http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=java&lang2=sbcl&box=1

On the k-nucleotide benchmark, Lisp is thrashed by Erlang, C# Mono, PHP,
Fortran, Scala, Lua, Clean, Java, Pascal, Haskell, ATS and C++.

John Thingstad

May 25, 2009, 9:02:30 PM

http://liskell.org/

---------------------
John Thingstad

Vend

May 25, 2009, 9:49:19 PM

So?

Vend

May 25, 2009, 9:50:09 PM
On 26 May, 00:13, Jon Harrop <j...@ffconsultancy.com> wrote:
> Vend wrote:
> > On 22 May, 23:37, Raffael Cavallaro
> > <raffaelcavall...@pas.espam.s.il.vous.plait.mac.com> wrote:
> >> Lisp and scheme offer macros, which do compile-time expansion; no need
> >> for run-time eval.
>
> > I don't think that Haskell would benefit much from Lisp-like macros,
> > since it already has lazy evaluation. Macros would be at most a
> > performance hack.
>
> Macros are still useful as a way to add new syntax.

Can't you do that with lazy functions?

Nicolas Neuss

May 26, 2009, 3:32:47 AM
Jon Harrop <j...@spammershome.com> writes:

> The results you have cited are all many years out of date. They actually
> pertain to Doug Bagley's original Computer Language Shootout from the
> 1990s

I don't think this is true. At least, I would be very much surprised if a
highly competent person like Peter Norvig would base his tests on something
as bad as the "Shootout" (BTW, it's called the "Benchmark Game" today, as I
learned recently in comp.lang.scheme[*]).

> and, in particular, make no use of multicores whatsoever.
>
> Today's shootout results indicate that Java is 1.1-8x faster than Lisp
> (SBCL), i.e. Java is faster than Lisp in every single test:
>
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=java&lang2=sbcl&box=1

I think the real difference is the competence of Peter Norvig compared with
the incompetence of the people running the Shootout or the incompetence and
malignity of a certain spammer usually favoring his own ray-tracer
benchmark.

Nicolas

[*] http://groups.google.com/group/comp.lang.scheme/browse_thread/thread/56dd8b3505ed6aca

Thomas F. Burdick

May 26, 2009, 4:11:52 AM

Not to the same extent, and you see more of the plumbing. But between
lazy evaluation and the complex syntax, you can get by; and the same
complex syntax that makes leveraging lazy evaluation and monads
convenient would make structural macros a real pain in the butt.

Marco Antoniotti

May 26, 2009, 4:15:22 AM
On May 26, 3:50 am, Vend <ven...@virgilio.it> wrote:

Not necessarily. That depends on the nature of the source-to-source
transformation that you want to implement (macros do essentially
that). If you have the AST of your language/program at hand, then you
can manipulate it. Laziness gives a limited form of "controlled
non-evaluation" (my term); it does not give you AST manipulation, which
is what you get with Lisp, or with any other AST-manipulating system,
like some Python libraries and the OCaml preprocessor, just to name a
few.

Cheers
--
Marco
ELS 2009 www.european-lisp-symposium.org

Thomas F. Burdick

May 26, 2009, 4:22:53 AM
On May 17, 4:51 am, iwriteco...@gmail.com wrote:
> Neither the source nor the target language is a very low-level language
> (such as assembler or machine code). The target language is a
> not-so-low-level language such as SQL or even C.
>
> How do Lisp and Scheme stack up against Haskell for this sort of
> thing?
>
> thanks in advance.
>
> Robert

This seems to be the case of a hit-and-run troll (or at a minimum a
*very* absent OP), but in any case, to add my 0,02 € ...

Parsing is not a particularly interesting problem (IMO), and for LALR
parsers, there are perfectly good generators in all three languages.
Perfectly good parser generators for more complex grammars, too, but
I'm less familiar with them.

Where things are interesting is in the manipulation of the program
once you have it parsed. This is one of the things that statically
typed functional languages are good at (if the problem looks academic,
they're probably a good fit), where the static type system doesn't
feel like a hindrance. No dynamic variables in Haskell, though, and
they're super convenient for compilers.
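
For readers who haven't used the idiom: a special variable can hold the
current compilation environment and be rebound around each nested scope,
instead of being threaded as an extra argument through every function in
the compiler. A hypothetical fragment (the names are mine):

(defvar *env* '())   ; alist of (name . info) for the code being compiled

(defun lookup (name)
  (or (cdr (assoc name *env*))
      (error "Unbound: ~a" name)))

(defun call-with-scope (bindings thunk)
  "Run THUNK with BINDINGS visible; *ENV* unwinds automatically."
  (let ((*env* (append bindings *env*)))
    (funcall thunk)))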

The main thing I'd advise is: CL, Scheme or Haskell, *use a reactive
framework*. In Haskell that means Yampa. In CL that means Cells (or
something like it). In Scheme, FrTime. These things make compilers
much more pleasant.

Vend

May 26, 2009, 6:03:47 AM

I was thinking of Scheme-like hygienic macros.
It seems to me that they are actually like lazy procedures, with the
difference that they are expanded before run-time, so they are perhaps
even less expressive (if macro expansion terminates, then substituting
all macros with lazy procedures doesn't change the program semantics,
while there are cases where substituting lazy procedures with macros
causes macro expansion not to terminate).

CL-like macros that directly generate source code that can access
bindings defined outside it can't always be replaced with lazy
procedures, but I don't know whether accessing outer bindings is a
good idea.
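
To make the comparison concrete (an illustrative CL sketch): a macro can
delay evaluation the way a lazy call would, e.g. a short-circuit
operator that a strict function cannot express, kept hygienic by hand
with GENSYM:

(defmacro my-or (a b)
  "Evaluate B only if A is false - in effect, a lazy second argument."
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (if ,tmp ,tmp ,b))))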

Jon Harrop

May 26, 2009, 8:12:37 AM
Nicolas Neuss wrote:
> Jon Harrop <j...@spammershome.com> writes:
>> The results you have cited are all many years out of date. They actually
>> pertain to Doug Bagley's original Computer Language Shootout from the
>> 1990s
>
> I don't think this is true. At least, I would be very much surprised if a
> highly competent person like Peter Norvig would base his tests on
> something that bad as the "Shootout"...

Might I suggest that you actually read the article under discussion:

"Relative speeds of 5 languages on 10 benchmarks from The Great Computer
Language Shootout."

>> and, in particular, make no use of multicores whatsoever.
>>
>> Today's shootout results indicate that Java is 1.1-8x faster than Lisp
>> (SBCL), i.e. Java is faster than Lisp in every single test:
>>
>> http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=java&lang2=sbcl&box=1
>
> I think the real difference is the competence of Peter Norvig compared
> with the incompetence of the people running the Shootout or the
> incompetence and malignity of a certain spammer usually favoring his own
> ray-tracer benchmark.

I really respect your opinion on that.

Marco Antoniotti

May 26, 2009, 9:42:09 AM

I think that even hygienic macros cannot be fully superseded by lazy
calls (I may be wrong). I find that laziness does buy you something
when building what amounts to syntactic layers; yet even with hygienic
macros, you do have the equivalent of the AST at hand, which is not
necessarily the case with lazy functions whose goal is to compute
"something else" rather than a source-to-source transformation.

> CL-like macros that directly generate source code that can access
> bindings defined outside it can't be always replaced with lazy
> procedures, but I don't know whether accessing outer bindings is a
> good idea.

Maybe and maybe not.

Nicolas Neuss

May 26, 2009, 10:37:15 AM
Jon Harrop writes:

> Nicolas Neuss wrote:
> [...]


>> I don't think this is true. At least, I would be very much surprised if a
>> highly competent person like Peter Norvig would base his tests on
>> something as bad as the "Shootout"...
>
> Might I suggest that you actually read the article under discussion:
>
> "Relative speeds of 5 languages on 10 benchmarks from The Great Computer
> Language Shootout."

OK, I stand corrected. My reservations wrt the Shootout remain, but
indeed, Norvig apparently used that data.

Nicolas

Vend

May 26, 2009, 12:34:18 PM

Do you have an example?

>  I find that laziness does buy you something
> when building what amounts to syntactic layers; yet even with hygienic
> macros, you do have the equivalent of the AST at hand, which is not
> necessarily the case with lazy functions whose goal is to compute
> "something else" than what it is a source-to-source transformation.

A lazy procedure is essentially a piece of code that gets pasted
inside the caller procedure at runtime; the only difference is that it
doesn't see the parent bindings.
Since this expansion can be recursive, it's possible to generate (the
equivalent of) arbitrary code.

In fact, that is how Scheme syntax-rules works, with the difference
that it attempts to expand everything before runtime rather than on-
demand.

Isaac Gouy

May 26, 2009, 1:20:29 PM
On May 26, 12:32 am, Nicolas Neuss <lastn...@math.uni-karlsruhe.de>
wrote:
-snip-

> I think the real difference is the competence of Peter Norvig compared with
> the incompetence of the people running the Shootout or the incompetence and
> malignity of a certain spammer usually favoring his own ray-tracer
> benchmark.
>
> Nicolas
>
[*] http://groups.google.com/group/comp.lang.scheme/browse_thread/thread/...


In that comp.lang.scheme discussion you seem quite proudly to proclaim
your ignorance of the benchmarks game - ignorance achieved simply by
not looking at it for the last 3 or 4 years.

Given that context, why should anyone care what you think about the
benchmarks game?

neptundancer

May 26, 2009, 3:03:09 PM
On May 26, 2:12 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> Nicolas Neuss wrote:
> > Jon Harrop <j...@spammershome.com> writes:
> >> The results you have cited are all many years out of date. They actually
> >> pertain to Doug Bagley's original Computer Language Shootout from the
> >> 1990s
>
> > I don't think this is true.  At least, I would be very much surprised if a
> > highly competent person like Peter Norvig would base his tests on
> > something as bad as the "Shootout"...
>
> Might I suggest that you actually read the article under discussion:
>
>   "Relative speeds of 5 languages on 10 benchmarks from The Great Computer
> Language Shootout."
>
> >> and, in particular, make no use of multicores whatsoever.
>
> >> Today's shootout results indicate that Java is 1.1-8x faster than Lisp
> >> (SBCL), i.e. Java is faster than Lisp in every single test:
>
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=java&lang2=sbcl&box=1

>
>
>
> > I think the real difference is the competence of Peter Norvig compared
> > with the incompetence of the people running the Shootout or the
> > incompetence and malignity of a certain spammer usually favoring his own
> > ray-tracer benchmark.
>
> I really respect your opinion on that.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u

Too bad your respect isn't worth a dime.

Tamas K Papp

May 26, 2009, 6:12:33 PM

Nicolas has developed Femlisp, and did quite a bit of numerical work in
CL. That certainly makes me care about his opinion, which I also
consider infinitely more relevant for practical purposes than some
game of benchmarks.

Tamas

anonymous...@gmail.com

May 27, 2009, 12:00:35 AM
On May 26, 8:12 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Nicolas Neuss wrote:
> > Jon Harrop <j...@spammershome.com> writes:
> >> The results you have cited are all many years out of date. They actually
> >> pertain to Doug Bagley's original Computer Language Shootout from the
> >> 1990s
>
> > I don't think this is true.  At least, I would be very much surprised if a
> > highly competent person like Peter Norvig would base his tests on
> > something as bad as the "Shootout"...
>

I think the issue here really is that we are comparing Java, which has
had lots of money thrown at it, to SBCL, which has had little if any.
I don't see how this would definitively prove that Common Lisp is
'slow,' rather than that SBCL is slower than Java, which I believe
should be met with a response of 'duh' (it had to happen sooner or
later).

> Might I suggest that you actually read the article under discussion:
>
>   "Relative speeds of 5 languages on 10 benchmarks from The Great Computer
> Language Shootout."
>
> >> and, in particular, make no use of multicores whatsoever.
>

Erlang's the best! horaaay!

---

I will note that I was perusing the web and realized that there are so
many lisp-this-or-thats that it really is silly to come in here and
claim 'lisp is dead'. If anything, lisp is having a renaissance of
sorts. (Even ignoring how lispy stuff like Perl and Python and Ruby
is becoming.)

Ironlisp/scheme, Nulisp, Lisp flavored erlang, liskell, clojure,
newlisp, about a half dozen open source implementations of CL, about 4
commercial implementations of CL, about a dozen different scheme
implementations, the embedded lisps, dylan just got open sourced (or
will be soon, I believe)...

Someone asked a similar question about this two years ago:
http://coding.derkeiler.com/Archive/Lisp/comp.lang.lisp/2007-06/msg00372.html

--

I'm going to have to go ahead and object to a few things that Dr.
Harrop has said:

1.) Macros and Eval are useless and superseded by other things.

They're not. Macros are for pattern making. CL can be thought of as a
virtual machine with a lot 'already in the box'. Pattern matching is
interesting but it is actually fairly easy to do pattern matching on
lisp code (not that you really need to, as you are more likely to be
programming bottom up than top down for a lisp-based compiler).

Eval is useful for a lot of things, mostly under-the-hood.
It can also be used for building definitions with macros in a more
granular way (like in a compiler).
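
For example (a made-up fragment, only to illustrate the shape of the
technique): a compiler can construct a definition as data and install it
with EVAL at run time:

(defun install-accessor (struct slot)
  "Define a STRUCT-SLOT plist accessor at run time."
  (eval `(defun ,(intern (format nil "~a-~a" struct slot)) (x)
           (getf x ',slot))))

(install-accessor 'point 'x)   ; defines POINT-X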

2.) Parenthesis things:
I rarely look at parentheses, I look at the shape of the code.
Parentheses are something my editor and compiler use. Program for more
than a month in lisp with an indenting editor and you will do the
same.

3.) Speed thing: this probably isn't a valid concern. I don't think
that lisp with proper declarations is as mind numbingly slow as a lot
of people would have you believe. my regular code comes out around 2/3
of C speeds. I think other people who are better than me can get it to
go faster. Performance has a lot to do with how much you know about
how computers work and what the compiler is going to try to do.

Qi (which is done in common lisp) might be an example of a compiler in
lisp that could be categorized as quite fast (Then again Mark Tarver is
pretty clever).

You can actually think of CL as either a static or dynamically (typed)
language. That people tend to think of it more often in terms of
dynamic typing I think is more of a historical relic. (And possibly
has to do with the complexity* of the type system).

This is perhaps a downfall of the CL spec or the various
implementations (depends how you want to slice it). A lot of the time
the downfall in this area is either due to poor memory management or
insufficient type knowledge.

Although I suppose that as Dr. JH suggests you could take a
'heterogeneous' approach and write a compiler that targets a certain
virtual machine or even generates assembly language (SBCL does
something like this to implement itself in native code). Macros can be
just as useful in this situation.
--

Anyway, I hope the OP doesn't let himself be misled by JH's (sort of)
factually correct but misleading posts. I believe he was born under a
bridge.

(I find it interesting that Dr. Harrop makes reference to Mr. Papp's
lack of a degree, yet he himself has degrees not in computer science
but physics and chemistry...)

--
* complexity as in multitude of possibilities

Marco Antoniotti

May 27, 2009, 2:35:46 AM

No. That is why I said that I may be wrong.

>
> >  I find that lazyness does buy you something
> > when building what amount to syntactic layers; yet even with hygienic
> > macros, you do have the equivalent of the AST at hand, which is not
> > necessarily the case with lazy functions whose goal is to compute
> > "something else" than what it is a source-to-source transformation.
>
> A lazy procedure is essentially a piece of code that gets pasted
> inside the caller procedure at runtime; the only difference is that it
> doesn't see the parent bindings.
> Since this expansion can be recursive, it's possible to generate (the
> equivalent of) arbitrary code.
>
> In fact, that is how Scheme syntax-rules works, with the difference
> that it attempts to expand everything before runtime rather than on-
> demand.

Yes. And that is the key difference. Plus, with lazy functions you
do not necessarily have the AST at hand. IMHO, that would make
complex source-to-source transformations at a minimum quite
cumbersome. You may argue that such source-to-source transformations
are not desirable, but that is a different argument.

Cheers
--
Marco

Jon Harrop

May 27, 2009, 8:32:33 AM
Isaac Gouy wrote:
> In that comp.lang.scheme discussion you seem quite proudly to proclaim
> your ignorance of the benchmarks game - ignorance achieved simply by
> not looking at it for the last 3 or 4 years.
>
> Given that context, why should anyone care what you think about the
> benchmarks game?

Your "shootout" or "benchmarks game" or whatever you are calling it these
days is still based entirely upon the same flawed methodology and even uses
the same flawed benchmarks. These flaws have been painstakingly described
to you by many people including myself.

So the fact that Nicolas had not looked at it for several years makes no
difference: his old conclusion that your shootout is worse than useless
still holds because you still have not fixed it.

Jon Harrop

May 27, 2009, 10:16:34 AM
anonymous...@gmail.com wrote:
> On May 26, 8:12 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>> Nicolas Neuss wrote:
>> > Jon Harrop <j...@spammershome.com> writes:
>> >> The results you have cited are all many years out of date. They
>> >> actually pertain to Doug Bagley's original Computer Language Shootout
>> >> from the 1990s
>>
>> > I don't think this is true.  At least, I would be very much surprised
>> > if a highly competent person like Peter Norvig would base his tests on
>> > something that bad as the "Shootout"...
>
> I think the issue here really is that we are comparing Java which has
> had lots of money thrown at it, to SBCL, which has had little if any.

Yes.

> I don't see how this would definitively prove that Common Lisp is
> 'slow,'

The absence of decent implementations obviously affects the practical
utility of a language. I appreciate the "sufficiently smart compiler"
hypothesis but it is of no practical use.

> I will note that I was perusing the web and realized that there are so
> many lisp- this or that, that it really is silly to come in here and
> claim 'lisp is dead'. If anything lisp is having a renaissance of
> sorts.

That's interesting. I was under the impression that the only significant use
of Lisp in recent years was airline software done by ITA, but their best
people left.

> (Even ignoring how lispy stuff like Perl and Python and Ruby
> are becoming).

That is a stretch, IMHO.

> Ironlisp/scheme, Nulisp, Lisp flavored erlang, liskell, clojure,
> newlisp, about a half dozen open source implementations of CL, about 4
> commercial implementations of CL, about a dozen different scheme
> implementations, the embedded lisps, dylan just got open sourced (or
> will be soon, I believe)...

There are certainly lots of implementations of Lisp variants, but the only
significant one is, IMHO, Mathematica, and it is very domain-specific. None
of the others have garnered a significant user base. Indeed, their market
share of language implementations on popcon is falling.

> 1.) Macros and Eval are useless and superseded by other things.
>
> They're not. Macros are for pattern making.

Strawman. I actually said "Macros are still useful as a way to add new
syntax.". I use macros extensively in Mathematica and OCaml and I see no
reason to sacrifice decent syntax.

> CL can be thought of as a
> virtual machine with a lot 'already in the box'. Pattern matching is
> interesting but it is actually fairly easy to do pattern matching on
> lisp code (not that you really need to, as you are more likely to be
> programming bottom up than top down for a lisp-based compiler).

Nonsense. Look at the Lisp code in Maxima to see how debilitating this
problem is.

> Eval is useful for a lot of things, mostly under-the-hood.
> It can also be used for building definitions with macros in a more
> granular way (like in a compiler).

Use a real VM and benefit from industrial strength implementations (like a
decent GC).

> 2.) Parenthesis things:
> I rarely look at parentheses, I look at the shape of the code.
> Parentheses are something my editor and compiler use. Program for more
> than a month in lisp with an indenting editor and you will do the
> same.

Then you're sprawling your code across many lines in order to get the
indentation that you depend upon to make it comprehensible, squandering
screen real estate. Either way, it is a bad idea. To see macros done
properly, look at Mathematica.

> 3.) Speed thing: this probably isn't a valid concern. I don't think
> that lisp with proper declarations is as mind numbingly slow as a lot
> of people would have you believe. my regular code comes out around 2/3
> of C speeds. I think other people who are better than me can get it to
> go faster. Performance has a lot to do with how much you know about
> how computers work and what the compiler is going to try to do.
>
> Qi (which is done in common lisp) might be an example of a compiler in
> lisp that could be categorized as quite fast (Then again Mark Tarver is
> pretty clever).

Look at Mark Tarver's own results:

http://www.lambdassociates.org/Studies/study10.htm

Qi 1.8x slower than unoptimized OCaml. Optimized OCaml is 4x faster again.

> You can actually think of CL as either a static or dynamically (typed)
> language. That people tend to think of it more often in terms of
> dynamic typing I think is more of a historical relic. (And possibly
> has to do with the complexity* of the type system).

Another stretch. Lisp is, at best, an awful static language.

> (I find it interesting that Dr. Harrop makes reference to Mr. Papp's
> lack of a degree,

Actually I believe he does have a degree in economics from a university in
Hungary and has been working towards a PhD for many years.

> yet he himself has degrees not in computer science but physics and
> chemistry...)

Actually my first degree did include computer science.

gugamilare

May 27, 2009, 11:09:28 AM
On 27 May, 11:16, Jon Harrop <j...@ffconsultancy.com> wrote:

> anonymous.c.lis...@gmail.com wrote:
> > 2.) Parenthesis things:
> >   I rarely look at parentheses, I look at the shape of the code.
> > Parentheses are something my editor and compiler use. Program for more
> > than a month in lisp with an indenting editor and you will do the
> > same.
>
> Then you're sprawling your code across many lines in order to get the
> indentation that you depend upon to make it comprehensible, squandering
> screen real estate.

Not at all. Sprawling code across many lines is what C does to make
its code readable, not Lisp. After a short while using Lisp, you start
to see this:

(defun fact (n)
  (if (<= n 1)
      1
      (* n (fact (1- n)))))

Like this:

defun fact (n)
  if (<= n 1)
      1
      * n (fact (1- n))

Parentheses are something you just ignore when you read the code; you
see the indentation of the code, which is done automatically by any
decent editor. And writing parentheses makes indentation automatic,
which is much easier than manually indenting code.

Now, I see that OCaml has macros. But many things still drive me away.
For instance:

EXTEND
  expr: LEVEL "expr1"
    [[ "repeat"; e1 = expr; "until"; e2 = expr ->
         <:expr< do { $e1$; while not $e2$ do { $e1$; } } >> ]];
END;;
END;;

WTH is all this $ { } >> [[ ]] -> ; for? No need for that. Lisp's
parentheses are way cleaner and more readable than that. Not to mention
OCaml is very verbose (instead of using Lisp's parentheses it uses the
English words begin and end to mark scope).

Richard Fateman

May 27, 2009, 11:09:28 AM
Jon Harrop wrote:
....

>
> That's interesting. I was under the impression that the only significant use
> of Lisp in recent years was airline software done by ITA but their best
> people left.

You are free to define "significant" any way you want, but people use
lisp, as you can determine by a google search.
See
http://www.franz.com/success/
also.

Are you saying that Haskell has more "significant" use?

.. snip..

>> CL can be thought of as a
>> virtual machine with a lot 'already in the box'. Pattern matching is
>> interesting but it is actually fairly easy to do pattern matching on
>> lisp code (not that you really need to, as you are more likely to be
>> programming bottom up than top down for a lisp-based compiler).
>
> Nonsense. Look at the Lisp code in Maxima to see how debilitating this
> problem is.

"this problem" =? pattern matching? How is this debilitating? Most
Maxima code predates CLOS, so it does not use the explicit syntax of
method definition, but is that the pattern matching you mean?

There are in fact two pattern matchers in Maxima (Schatchen and
Defmatch) but I doubt that you mean matching of algebraic expressions.

There are of course lisp compilers already written in lisp, but there are
also parsers for fortran and the maxima user language, also in maxima;
once the parsers produce lisp code, that code can be compiled too.

.....

>> Parenthesis are something my editor and compiler use. Program for more
>> than a month in lisp with an indenting editor and you will do the
>> same.
>
> Then you're sprawling your code across many lines in order to get the
> indentation that you depend upon to make it comprehensible, squandering
> screen real estate.

You are probably more familiar with the idiomatic indentation of C code,
which does, in my view, squander lines with isolated single closing
brackets.

Since most programs in lisp are written in relatively small functional
pieces, in lines that are generally fairly short, with opening brackets
that are not very long, sprawling is not a problem in reasonably
idiomatic code.


> Either way, it is a bad idea. To see macros done
> properly, look at Mathematica.

Huh? I know the Mathematica language pretty well. I wrote an
open-source Mathematica (version 3.0) compatible parser (the only one I
am aware of). Written in, as it happens, lisp. But Macros??
Do you mean the pattern-matching rule-driven convention that substitutes
for function definition in Mathematica? This is cute but hardly a
convincing idea for a successful system-building language. See how much
of Mathematica is, even today, being written in a dialect of C.

This is the first time I have seen anyone praise the Mathematica
language, other than persons in the sphere of influence of Stephen
Wolfram, or physicists notably ignorant of programming languages.

RJF

Isaac Gouy

May 27, 2009, 11:36:45 AM


You don't seem to have bothered reading what he wrote.

If it's worse than useless then why doesn't that invalidate your use
of those same measurements (two days ago) to bash Lisp performance?


http://groups.google.com/group/comp.lang.lisp/msg/d08abb4a2ba43779


Isaac Gouy

May 27, 2009, 12:25:51 PM


That should make you care about his opinion on things Lisp in which he
is an expert.

On other topics his opinion may be uninformed and plain wrong, even
about things we might expect a high-school student to get right simply
by bothering to read the information:

http://groups.google.com/group/comp.lang.lisp/msg/a43b15db2c9f4f18

I am curious about the attitude Nicolas Neuss seems to express (which
I may have misread) because I think those outside the Lisp bubble
might well take away a rather negative impression.

No one but Lispers made a fuss about the changes 3 or 4 years ago (let
alone continue to make a fuss).

Is it so hard to write a competent program in Lisp that 100-line
programs are a major investment of time? Are those trivial programs so
difficult that they need a Lisp implementor to write them? Is the Lisp
community now so small that there are no workaday programmers capable
of doing a good job on trivial programs - just newbies and language
implementors?

Tamas K Papp
