
Writing a Compiler: Lisp or Scheme or Haskell?


iwrit...@gmail.com

May 16, 2009, 10:51:59 PM
Neither the source nor the target language is a very low-level
language (such as assembler or machine code). The target language is a
not-so-low-level language such as SQL or even C.

How do Lisp and Scheme stack up against Haskell for this sort of
thing?

thanks in advance.

Robert

Tamas K Papp

May 16, 2009, 11:16:54 PM

The question doesn't make much sense. Use the language you know. All
of these languages are appropriate for the job.

If you know all three, you can answer the question for yourself.

Tamas

Andrew Bailey

May 17, 2009, 4:19:39 PM

Common Lisp would have the most useful standard library functions of
the three. I would choose Lisp if it were me, but I might be biased,
because I don't like Haskell very much anymore.

daniel....@excite.com

May 18, 2009, 2:58:04 PM
On May 16, 10:51 pm, iwriteco...@gmail.com wrote:


What is the target execution environment like? Could Lisp or Scheme or
Haskell be part of the execution environment? Could Lisp or Scheme or
Haskell be an intermediate language, then compiled by an already
existing compiler?

If you plan to hand-code a recursive-descent parser, that's pretty
clean in any language.
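For instance, a hand-coded recursive-descent parser for a toy grammar
might look like this in CL (a minimal sketch; the grammar and all
names are invented for illustration):

```lisp
;; Toy grammar (invented): expr := term {"+" term}, term := NUMBER.
;; Tokens are consumed from a special variable holding a list.
(defvar *tokens* '())

(defun next-token () (first *tokens*))
(defun consume ()    (pop *tokens*))

(defun parse-term ()
  (let ((tok (consume)))
    (assert (numberp tok) () "expected a number, got ~S" tok)
    tok))

(defun parse-expr ()
  ;; Build a left-associative tree of + applications.
  (let ((left (parse-term)))
    (loop while (eq (next-token) '+)
          do (consume)
             (setf left (list '+ left (parse-term))))
    left))

(defun parse (tokens)
  (let ((*tokens* tokens))
    (parse-expr)))

;; (parse '(1 + 2 + 3)) => (+ (+ 1 2) 3)
```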

I suggest thinking about how you intend to represent the information
stored in your symbol tables and dictionaries, to get a feel for which
language you'd feel most comfortable in.

Also optimization: peephole techniques might be more pleasant when the
target code is represented as list data... Then, how do you like to
write such list-editing code?

Paul Tarvydas

May 19, 2009, 4:28:25 PM
iwrit...@gmail.com wrote:

> Neither the source nor target language is a very low level language
> ( such as assembler or machine code ) The target language is a not-so-
> low-level language such as SQL or even C.
>
> How do Lisp and Scheme stack up against Haskell for this sort of
> thing?
>

It seems to me that CL (Common Lisp) allows you more flexibility in
choosing / combining paradigms and to turn the dial to address a wide
spectrum of languages - from hack to seriously complicated.

If you want to hack a small language together, you can use macros.

For a slightly larger hack language, where you get to define the
syntax, you can save yourself time by using CL's built-in reader as
your scanner.
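A minimal sketch of the reader-as-scanner idea (the toy source
language and the C-ish output are invented for illustration): READ
turns source text into nested lists, so no hand-written lexer is
needed:

```lisp
;; Translate a tiny s-expression language to C-like infix text.
(defun compile-expr (form)
  (cond ((numberp form) (format nil "~A" form))
        ((symbolp form) (string-downcase (symbol-name form)))
        ((eq (first form) '+)
         (format nil "(~A + ~A)"
                 (compile-expr (second form))
                 (compile-expr (third form))))
        (t (error "unknown form: ~S" form))))

(defun compile-string (source)
  ;; READ-FROM-STRING does the scanning and parsing in one step.
  (compile-expr (read-from-string source)))

;; (compile-string "(+ x 2)") => "(x + 2)"
```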

Trees, term rewriting, etc. are obviously well supported in CL (list
operations). Debugging output for intermediate trees is already
supported (PRINT, PPRINT, INSPECT, etc.).

For more complicated languages, I've built my own parsing tools (e.g.
an S/SL (syntax/semantic language) parser, another source-to-source
white-space preserving parser, and a parser based on PEG packrat
ideas) in short amounts of time (often just one weekend).

When I built a compiler with a non-traditional syntax, e.g. syntax
composed of diagrams (see www.visualframeworksinc.com), I used a
prolog-in-lisp to make parsing easier. I parsed the diagrams by
writing inference rules, e.g. 4 lines closed make a box, a box in this
particular context means so-and-so, a non-closed line is composed of a
number of line segments, a line joining two boxes means so-and-so,
etc. Then, once parsed, I could drop out of the "prolog" paradigm and
use more traditional compiler techniques (or, stay in prolog mode to
perform further semantic analysis).

And, as already mentioned, banging a peepholer together in CL is
easy. I use one to clean up generated code to make it more human-
readable. In that project, I "emit" lines of code internally as
little lists onto a global list, then clean up the global list with
the peepholer, then print it out as final text.
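A minimal sketch of that emit-then-peephole arrangement (the
instruction names and the single rewrite rule are invented for
illustration):

```lisp
;; Code is "emitted" as little lists; a peephole pass rewrites it.
(defvar *emitted* '())

(defun emit (instr)
  (push instr *emitted*))

(defun peephole (code)
  ;; Invented rule: a PUSH immediately followed by a bare POP
  ;; cancels out; everything else passes through unchanged.
  (cond ((null code) '())
        ((and (consp (first code))
              (eq (first (first code)) 'push)
              (equal (second code) '(pop)))
         (peephole (cddr code)))
        (t (cons (first code) (peephole (rest code))))))

;; (peephole '((load a) (push 1) (pop) (store b)))
;; => ((LOAD A) (STORE B))
```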

In situations where you are outputting to textual target languages,
e.g. C (which, btw, turns out to be a really bad assembler language),
you are faced with issues like arranging for declaration-before-use in
the output code, which tends to drive you towards using multiple
output streams or multiple code generation passes. Lists and string
streams (with-output-to-string) can be convenient in such situations.
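A sketch of the multiple-output-streams idea (the code-generation
logic here is invented): declarations and statements go to separate
string streams, and the declarations are printed first regardless of
the order in which things were emitted:

```lisp
(defun generate-c (vars)
  (let ((decls (make-string-output-stream))
        (body  (make-string-output-stream)))
    (dolist (v vars)
      ;; Each "use" also records the declaration it needs.
      (format decls "int ~A;~%" v)
      (format body  "~A = 0;~%" v))
    ;; Declaration-before-use, assembled at the end.
    (concatenate 'string
                 (get-output-stream-string decls)
                 (get-output-stream-string body))))

;; (generate-c '("x" "y")) emits both declarations, then both
;; assignments, one per line.
```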

My vote goes to CL.

pt

Jon Harrop

May 22, 2009, 4:29:20 PM

Badly. Metaprogramming benefits enormously from pattern matching. Lisp
offers nothing in that regard (and the Greenspun alternatives to modern
functional language implementations suck in comparison, e.g. fare-match).
Scheme has something, but nothing like as much as Haskell or ML.

You might also consider tools like Camlp4, ocamllex, ocamlyacc, menhir,
dypgen etc. for OCaml.

Lisp and Scheme do offer first-class symbols and EVAL but both are of little
practical value. Symbols are trivial to implement in Haskell or ML. EVAL
was long since made obsolete by VMs.

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u

Raffael Cavallaro

May 22, 2009, 5:37:19 PM
On 2009-05-22 16:29:20 -0400, the wet one <to...@toadhall.com> said:

> iwrit...@gmail.com wrote:
>> Neither the source nor target language is a very low level language
>> ( such as assembler or machine code ) The target language is a not-so-
>> low-level language such as SQL or even C.
>>
>> How do Lisp and Scheme stack up against Haskell for this sort of
>> thing?
>
> Badly. Metaprogramming benefits enormously from pattern matching.

Only if your one hammer is pattern matching.

> Lisp
> offers nothing in that regard (and the Greenspun alternatives to modern
> function language implementations suck in comparison, e.g. fare-match).
> Scheme has something but nothing like as much as Haskell or ML.

cl-unification gives you match-case - pattern match all you like.

>
> You might also consider tools like Camlp4, ocamllex, ocamlyacc, menhir,
> dypgen etc. for OCaml.

But he was asking about Haskell, not the Irish ship of the desert.
Hint to the OP: the reason people like to use Lisp for metaprogramming
is that the metaprogramming language is the same as the programming
language.

>
> Lisp and Scheme do offer first-class symbols and EVAL but both are of little
> practical value. Symbols are trivial to implement in Haskell or ML. EVAL
> was long since made obsolete by VMs.

Lisp and Scheme offer macros, which do compile-time expansion; no need
for run-time eval.
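A minimal sketch of that point (SQUARE is an invented example): the
macro's expansion happens before run time, and MACROEXPAND-1 shows the
code the compiler actually sees; no EVAL is involved:

```lisp
(defmacro square (x)
  ;; GENSYM keeps the argument from being evaluated twice.
  (let ((g (gensym)))
    `(let ((,g ,x)) (* ,g ,g))))

;; (macroexpand-1 '(square (f y)))  ; shows the generated LET form
;; (square 3) => 9
```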


--
Raffael Cavallaro, Ph.D.

Kaz Kylheku

May 22, 2009, 6:11:48 PM
On 2009-05-22, Jon Harrop <j...@ffconsultancy.com> wrote:
> Lisp and Scheme do offer first-class symbols and EVAL but both are of little
> practical value. Symbols are trivial to implement in Haskell or ML.

An example would go far here.

> EVAL was long since made obsolete by VMs.

EVAL converts a piece of source code into the execution that it represents.

The specification of EVAL does not rule out compiling; it is completely
abstract. A conforming Lisp implementation can define EVAL roughly like this,

(defun eval (form) (funcall (compile nil `(lambda () ,form))))

and so EVAL may be considered a convenience function that avoids this
more long-winded compiling setup. Some implementations essentially
define it this way: Corman Lisp has no evaluator; its EVAL spins up a
function made of x86 machine code and jumps to it.

A VM (or real machine) handles executable forms, which must come from
somewhere. What the VM does is conceptually similar to eval, in that it
takes code and executes it, but it works with a translated representation of
the code that isn't directly written by programmers.

How do VMs obsolete the translators that produce code for them?

Tamas K Papp

May 22, 2009, 6:20:28 PM
On Fri, 22 May 2009 17:37:19 -0400, Raffael Cavallaro wrote:

> On 2009-05-22 16:29:20 -0400, the wet one <to...@toadhall.com> said:
>

>> [usual toad bs snipped]


>
> But he was asking about haskell, not the irish ship of the desert. Hint
> to OP, the reason people like to use lisp for metaprogramming is that
> the metaprogramming language is the same as the programming language.

Hush, don't tell him things like that. His head might explode. Or
implode. I always forget which one happens with a vacuum.

Tamas

Jon Harrop

May 22, 2009, 6:46:16 PM
Raffael Cavallaro wrote:
> On 2009-05-22 16:29:20 -0400, the wet one <to...@toadhall.com> said:
>> iwrit...@gmail.com wrote:
>>> Neither the source nor target language is a very low level language
>>> ( such as assembler or machine code ) The target language is a not-so-
>>> low-level language such as SQL or even C.
>>>
>>> How do Lisp and Scheme stack up against Haskell for this sort of
>>> thing?
>>
>> Badly. Metaprogramming benefits enormously from pattern matching.
>
> Only if your one hammer is pattern matching.

A quick survey of metaprograms written with and without pattern matching
makes it clear how useful the technique is in this context.

>> You might also consider tools like Camlp4, ocamllex, ocamlyacc, menhir,
>> dypgen etc. for OCaml.
>
> But he was asking about haskell, not the irish ship of the desert. Hint
> to OP, the reason people like to use lisp for metaprogramming is that
> the metaprogramming language is the same as the programming language.

Most people do not like to use Lisp for metaprogramming because homogeneous
metaprogramming incurs all of the deficiencies of the host language (e.g.
your generated code is slow in the case of Lisp). Heterogeneous
metaprogramming is not only preferable but comparably simple if you use a
more appropriate language than Lisp.

>> Lisp and Scheme do offer first-class symbols and EVAL but both are of
>> little practical value. Symbols are trivial to implement in Haskell or
>> ML. EVAL was long since made obsolete by VMs.
>
> Lisp and scheme offer macros, which do compile-time expansion; no need
> for run-time eval.

A compiler's source program is generally only available at run-time so
compile time macro expansion is useless.

Jon Harrop

May 22, 2009, 6:50:52 PM

Or maybe just maybe you'll stop "contributing" here until you've completed
your degree and earned some credibility.

Raffael Cavallaro

May 22, 2009, 9:32:47 PM
On 2009-05-22 18:46:16 -0400, splashy <e...@eyeofnewt.com> said:

> A compiler's source program is generally only available at run-time so
> compile time macro expansion is useless.

So compilers can't use macros in their source code because the code
they will ultimately compile is not yet available when the compiler
itself is compiled? Really? That's what you're going with? Your willful
misinterpretation is another charming, but ultimately void, rhetorical
device.
--
Raffael Cavallaro, Ph.D.

Pillsy

May 22, 2009, 9:57:50 PM
On May 22, 6:50 pm, some jackass wrote:

> Or maybe just maybe you'll stop "contributing" here until you've completed
> your degree and earned some credibility.

Yeah, that worked just smashingly for you.

Sheesh,
Pillsy

Jon Harrop

May 23, 2009, 7:39:34 AM
Raffael Cavallaro wrote:
> On 2009-05-22 18:46:16 -0400, splashy <e...@eyeofnewt.com> said:
>> A compiler's source program is generally only available at run-time so
>> compile time macro expansion is useless.
>
> So compilers can't use macros in their source code because the code...

A strawman argument.

neptundancer

May 23, 2009, 8:11:21 AM
On May 23, 12:46 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Raffael Cavallaro wrote:
> > On 2009-05-22 16:29:20 -0400, the wet one <t...@toadhall.com> said:

Once again, you absolutely don't know what you are writing about.

Frank GOENNINGER

May 23, 2009, 8:49:04 AM
Jon Harrop <j...@ffconsultancy.com> writes:

> Most people do not like to use Lisp for metaprogramming because homogeneous
> metaprogramming incurs all of the deficiencies of the host language (e.g.
> your generated code is slow in the case of Lisp). Heterogeneous
> metaprogramming is not only preferable but comparably simple if you use a
> more appropriate language than Lisp.

Wrong.

If you don't understand this answer then please apply it to the
meta-level of your statement.

You don't know what the meta-level of your statement is?

> A compiler's source program is generally only available at run-time so
> compile time macro expansion is useless.

That's part of the answer to the question above.

Jon Harrop

May 23, 2009, 9:06:37 AM
Frank GOENNINGER wrote:
> Jon Harrop <j...@ffconsultancy.com> writes:
>> Most people do not like to use Lisp for metaprogramming because
>> homogeneous metaprogramming incurs all of the deficiencies of the host
>> language (e.g. your generated code is slow in the case of Lisp).
>> Heterogeneous metaprogramming is not only preferable but comparably
>> simple if you use a more appropriate language than Lisp.
>
> Wrong.

No.

Jon Harrop

May 23, 2009, 9:07:25 AM
neptundancer wrote:
> Once again, you absolutely don't know what you are writing about.

Then why are you unable to refute it?

gugamilare

May 23, 2009, 9:11:12 AM
On 23 May, 10:07, Jon Harrop <j...@ffconsultancy.com> wrote:
> neptundancer wrote:
> > Once again, you absolutely don't know what you are writing about.
>
> Then why are you unable to refute it?

Oh, yes, and your argument here:

On 23 May, 10:06, Jon Harrop <j...@ffconsultancy.com> wrote:
> Frank GOENNINGER wrote:
> > Jon Harrop <j...@ffconsultancy.com> writes:

> >> Most people do not like to use Lisp for metaprogramming because
> >> homogeneous metaprogramming incurs all of the deficiencies of the host
> >> language (e.g. your generated code is slow in the case of Lisp).
> >> Heterogeneous metaprogramming is not only preferable but comparably
> >> simple if you use a more appropriate language than Lisp.
>

> > Wrong.
>
> No.


>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd.http://www.ffconsultancy.com/?u

... was much more convincing.

Jon Harrop

May 23, 2009, 9:55:23 AM
gugamilare wrote:
> Oh, yes, and your argument here:
>
> On 23 May, 10:06, Jon Harrop <j...@ffconsultancy.com> wrote:
> > Frank GOENNINGER wrote:
> > > Wrong.
> > No.

>
> ... was much more convincing.

Frank's objection was entirely non-specific. What do you need convincing
of? That Lisp is unpopular? That heterogeneous metaprogramming is easy?

anonymous...@gmail.com

May 23, 2009, 10:01:12 AM
On May 22, 6:50 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> Or maybe just maybe you'll stop "contributing" here until you've completed
> your degree and earned some credibility.

said the shill.

gugamilare

May 23, 2009, 10:25:58 AM
On 23 May, 10:55, Jon Harrop <j...@ffconsultancy.com> wrote:
> gugamilare wrote:
> > Oh, yes, and your argument here:
>
> > On 23 May, 10:06, Jon Harrop <j...@ffconsultancy.com> wrote:
> > > Frank GOENNINGER wrote:
> > > > Wrong.
> > > No.
>
> > ... was much more convincing.
>
> Frank's objection was entirely non-specific. What do you need convincing of?
> That Lisp is unpopular? That heterogeneous metaprogramming is easy?

There is nothing you need to convince me or anyone here of. Do you
really think that we are only a bunch of amateurs who don't know how
to program? And clearly you are not a Lisp programmer, so you don't
have enough knowledge to say what Lisp is good or bad for. That is
what Frank was referring to as the "meta-level" of your statement, so
his objection wasn't non-specific.

You won't say anything that wasn't said here over and over among
zillions of flame wars. I don't know why people are so particularly
angry about Lisp (frustration, perhaps?), but, unfortunately, they
are, and you are not the only one.

So, I am going to repeat this again: stop whatever you are doing.
This will only end badly.

Pillsy

May 23, 2009, 10:59:48 AM
On May 23, 9:07 am, some sad loser wrote:

> neptundancer wrote:

> > Once again, you absolutely don't know what you are writing about.

> Then why are you unable to refute it?

Refuting you is about as useful as refuting an infestation of pubic
lice.

FOAD.
Pillsy

gugamilare

May 23, 2009, 11:53:15 AM

This was not a clever thing to say to someone who is already
criticizing Lisp. This is just another motive for him to start a flame
war.

Nicolas Neuss

May 23, 2009, 12:52:05 PM
gugamilare <gugam...@gmail.com> writes:

> This was not a clever thing to say to someone who is already criticizing
> Lisp. This is just another motive for him to start a flame war.

Nonsense. Please do us a favour and google the previous appearances of
Jon Harrop in comp.lang.lisp. Probably the largest motive for JH's
posts is when CL newbies take him seriously and reply to him,
including his spamming consultancy email or web address.

Nicolas

Jon Harrop

May 23, 2009, 4:44:17 PM
gugamilare wrote:
> Do you really think that we are only a bunch of amateurs that don't know
> how to program?

When people here say things like "OCaml forces you to write interpreters
instead of compilers", do you really have to ask?

> And clearly you are not a Lisp programmer, so you don't have enough
> knowledge to say what Lisp is good or bad for.

In other words, you want to ignore anyone who chooses not to use Lisp
because they don't know what Lisp is bad for.

> You won't say anything that wasn't said here over and over among
> zillions of flame wars. I don't know why people are particularly so
> angry about Lisp (frustration, perhaps?), but, unfortunately, they
> are, you are not the only one.

I suggest you look up the flamewar I had with Thomas Fischbacher here. He
was a strong Lisp advocate until we debated it. Now he is a major OCaml
user.

> So, I am going to repeat this again: stop with whatever you are doing.
> This will only end badly.

There is nothing "bad" about highlighting misinformation.

Christopher C. Stacy

May 24, 2009, 4:09:41 AM
Jon Harrop <j...@ffconsultancy.com> writes:
> Most people do not like to use Lisp for metaprogramming because homogeneous
> metaprogramming incurs all of the deficiencies of the host language (e.g.
> your generated code is slow in the case of Lisp).

Lisp is slow?

Nicolas Neuss

May 24, 2009, 4:25:26 AM
Jon Harrop <j...@spamming.com> writes:

> I suggest you look up the flamewar I had with Thomas Fischbacher here. He
> was a strong Lisp advocate until we debated it. Now he is a major OCaml
> user.

So you might have won at least one convert - congratulations! In this
case, a report of Thomas Fischbacher on his experiences of both Lisp and
OCaml would really be of interest, at least to me. (ISTR he did not do
much Common Lisp, but came from Scheme; however, I might be wrong on that
one.)

Nicolas

Jon Harrop

May 24, 2009, 8:03:53 AM

Yes.

John Thingstad

May 24, 2009, 4:59:47 PM
On Sun, 24 May 2009 10:09:41 +0200, Christopher C. Stacy
<cst...@news.dtpq.com> wrote:

About twice as fast as Java. Though that depends on what you are doing
and how you program it. It is easy to make things 10 or even 100 times
slower than they could be if you don't know what you are doing, and it
is not always obvious why. Lisp I/O can be on the slow side. (Not
setting *print-pretty* to nil can cut file write speed in half, for
instance, on some implementations.) The Lisp efficiency model is
somewhat difficult to understand, and its declaration system is a bit,
erm, clunky. Nevertheless it is a powerful language in the hands of
someone who knows how to use it. (Unlike Jon Harrop, aka Dr. Frog.)

---------------------
John Thingstad

Jon Harrop

May 25, 2009, 7:49:48 AM
John Thingstad wrote:
> On Sun, 24 May 2009 10:09:41 +0200, Christopher C. Stacy
> <cst...@news.dtpq.com> wrote:
>> Jon Harrop <j...@ffconsultancy.com> writes:
>>> Most people do not like to use Lisp for metaprogramming because
>>> homogeneous
>>> metaprogramming incurs all of the deficiencies of the host language
>>> (e.g.
>>> your generated code is slow in the case of Lisp).
>>
>> Lisp is slow?
>
> About twice as fast as Java...

Dream on.

Adlai

May 25, 2009, 8:28:23 AM
On May 25, 2:49 pm, Jon Harrop <j...@ffconsultancy.com> wrote:

> John Thingstad wrote:
> > About twice as fast as Java...
>
> Dream on.

http://www.norvig.com/java-lisp.html
http://www.norvig.com/python-lisp.html (here, scroll down to the
colorful chart)

It seems that Lisp ranges from 10000% to 13% of the speed of C++, with
a median of 60% as fast, while Java ranges from 111% to 4.8% as fast
as C++, with a median of 21% as fast.

On that chart, Lisp beats Java in every category but one, which is
also the category with the smallest margin.


- Adlai

anonymous...@gmail.com

May 25, 2009, 10:06:18 AM
On May 25, 8:28 am, Adlai <munchk...@gmail.com> wrote:
> On May 25, 2:49 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>
> > John Thingstad wrote:
> > > About twice as fast as Java...
>
> > Dream on.
>
> http://www.norvig.com/java-lisp.html
> http://www.norvig.com/python-lisp.html (here, scroll down to the
> colorful chart)
>
> It seems that Lisp ranges from being 10000% to 13% the speed of C++,
> with a median of 60% as fast, while Java ranges from being 111% to
> 4.8% as fast as C++, with a median of 21% as fast.
>
> On that chart, Lisp beats Java in every category but one, which is
> also the category with the smallest margin.
>
>  -  Adlai

I think that what is of more concern is that a grown man with a
(supposed) PhD wouldn't be able to come up with a more clever ploy for
trolling.

I don't believe he's made a constructive (or even on-topic) post to
this newsgroup yet.

Vend

May 25, 2009, 10:58:49 AM
On 17 May, 04:51, iwriteco...@gmail.com wrote:
> Neither the source nor target language is a very low level language
> ( such as assembler or machine code ) The target language is a not-so-
> low-level language such as SQL or even C.

You can't target SQL, since it isn't Turing complete.

> How do Lisp and Scheme stack up against Haskell for this sort of
> thing?

If you want simplicity, Scheme is probably the best choice, at least
as long as you exclude continuations. Implementing Scheme
continuations in any target language other than assembler will impose
some performance penalty and, depending on your compiler structure,
might be difficult.
If you don't require full standard compliance, you might want not to
implement continuations, or to implement some restricted form of them,
like many Scheme implementations do.

If you want to write a compiler that produces efficient programs, you
might choose Haskell. A naive compiler wouldn't produce fast code, but
the language has room for lots of optimizations which would be
difficult or impossible to implement in Scheme or Common Lisp.

I would avoid Common Lisp; I don't know it very well, but it seems to
me quite complicated and non-uniform.

Vend

May 25, 2009, 11:01:13 AM
On 22 May, 23:37, Raffael Cavallaro
<raffaelcavall...@pas.espam.s.il.vous.plait.mac.com> wrote:

> Lisp and scheme offer macros, which do compile-time expansion; no need
> for run-time eval.

I don't think that Haskell would benefit much from Lisp-like macros,
since it already has lazy evaluation. Macros would be at most a
performance hack.

Tamas K Papp

May 25, 2009, 11:01:48 AM
On Mon, 25 May 2009 07:58:49 -0700, Vend wrote:

> On 17 May, 04:51, iwriteco...@gmail.com wrote:
>> Neither the source nor target language is a very low level language (
>> such as assembler or machine code ) The target language is a not-so-
>> low-level language such as SQL or even C.
>
> You can't target SQL since isn't Turing complete.
>
>> How do Lisp and Scheme stack up against Haskell for this sort of thing?
>
> If you want simplicity, Scheme is probably the best choice, at least as
> long you exclude continuations. Implementing Scheme continuations on any

> [...]

I was under the impression that he was looking for a language for
implementing his compiler, not asking which language would be the best
as a source language.

Tamas

Jon Harrop

May 25, 2009, 6:13:12 PM

Macros are still useful as a way to add new syntax.

Jon Harrop

May 25, 2009, 7:12:38 PM

The results you have cited are all many years out of date. They actually
pertain to Doug Bagley's original Computer Language Shootout from the 1990s
and, in particular, make no use of multicores whatsoever.

Today's shootout results indicate that Java is 1.1-8x faster than Lisp
(SBCL), i.e. Java is faster than Lisp in every single test:

http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=java&lang2=sbcl&box=1

On the k-nucleotide benchmark, Lisp is thrashed by Erlang, C# Mono,
PHP, Fortran, Scala, Lua, Clean, Java, Pascal, Haskell, ATS and C++.

John Thingstad

May 25, 2009, 9:02:30 PM

http://liskell.org/

---------------------
John Thingstad

Vend

May 25, 2009, 9:49:19 PM

So?

Vend

May 25, 2009, 9:50:09 PM
On 26 May, 00:13, Jon Harrop <j...@ffconsultancy.com> wrote:
> Vend wrote:
> > On 22 May, 23:37, Raffael Cavallaro
> > <raffaelcavall...@pas.espam.s.il.vous.plait.mac.com> wrote:
> >> Lisp and scheme offer macros, which do compile-time expansion; no need
> >> for run-time eval.
>
> > I don't think that Haskell would benefit much from Lisp-like macros,
> > since it already has lazy evaluation. Macros would be at most a
> > performance hack.
>
> Macros are still useful as a way to add new syntax.

Can't you do that with lazy functions?

Nicolas Neuss

May 26, 2009, 3:32:47 AM
Jon Harrop <j...@spammershome.com> writes:

> The results you have cited are all many years out of date. They actually
> pertain to Doug Bagley's original Computer Language Shootout from the
> 1990s

I don't think this is true. At least, I would be very much surprised
if a highly competent person like Peter Norvig would base his tests on
something as bad as the "Shootout" (BTW, it's called the "Benchmark
Game" today, as I learned recently in comp.lang.scheme[*]).

> and, in particular, make no use of multicores whatsoever.
>
> Today's shootout results indicate that Java is 1.1-8x faster than Lisp
> (SBCL), i.e. Java is faster than Lisp in every single test:
>
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=java&lang2=sbcl&box=1

I think the real difference is the competence of Peter Norvig compared
with the incompetence of the people running the Shootout, or the
incompetence and malignity of a certain spammer usually favoring his
own ray-tracer benchmark.

Nicolas

[*] http://groups.google.com/group/comp.lang.scheme/browse_thread/thread/56dd8b3505ed6aca

Thomas F. Burdick

May 26, 2009, 4:11:52 AM

Not to the same extent, and you see more of the plumbing. But between
lazy evaluation and the complex syntax, you can get by; and the same
complex syntax that makes leveraging lazy evaluation and monads
convenient would make structural macros a real pain in the butt.

Marco Antoniotti

May 26, 2009, 4:15:22 AM
On May 26, 3:50 am, Vend <ven...@virgilio.it> wrote:

Not necessarily. That depends on the nature of the source-to-source
transformation that you want to implement (macros do essentially
that). If you have the AST of your language/program at hand, then you
can manipulate it. Laziness gives a limited form of "controlled non-
evaluation" (my term); it does not give you AST manipulation, which is
what you get with Lisp, or with any other AST-manipulating system like
some Python libraries and the OCaml preprocessor, just to name a
few.

Cheers
--
Marco
ELS 2009 www.european-lisp-symposium.org

Thomas F. Burdick

May 26, 2009, 4:22:53 AM
On May 17, 4:51 am, iwriteco...@gmail.com wrote:
> Neither the source nor target language is a very low level language
> ( such as assembler or machine code ) The target language is a not-so-
> low-level language such as SQL or even C.
>
> How do Lisp and Scheme stack up against Haskell for this sort of
> thing?
>
> thanks in advance.
>
> Robert

This seems to be the case of a hit-and-run troll (or at a minimum a
*very* absent OP), but in any case, to add my 0,02 € ...

Parsing is not a particularly interesting problem (IMO), and for LALR
parsers there are perfectly good generators in all three languages.
There are perfectly good parser generators for more complex grammars,
too, but I'm less familiar with them.

Where things get interesting is in the manipulation of the program
once you have it parsed. This is one of the things that statically
typed functional languages are good at (if the problem looks academic,
they're probably a good fit), where the static type system doesn't
feel like a hindrance. No dynamic variables in Haskell, though, and
they're super convenient for compilers.

The main thing I'd advise is: CL, Scheme or Haskell, *use a reactive
framework*. In Haskell that means Yampa. In CL that means Cells (or
something like it). In Scheme, FrTime. These things make compilers
much more pleasant.

Vend

May 26, 2009, 6:03:47 AM

I was thinking of Scheme-like hygienic macros.
It seems to me that they are actually like lazy procedures, with the
difference that they are expanded before run time, so they are perhaps
even less expressive (if macro expansion terminates, then substituting
all macros with lazy procedures doesn't change the program's
semantics, while there are cases where substituting lazy procedures
with macros causes macro expansion not to terminate).

CL-like macros, which directly generate source code that can access
bindings defined outside it, can't always be replaced with lazy
procedures, but I don't know whether accessing outer bindings is a
good idea.
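A minimal CL sketch of that comparison (invented names): MY-IF as a
macro expands before run time, while the function version delays
evaluation with explicit thunks at run time; both avoid evaluating the
untaken branch:

```lisp
(defmacro my-if (test then else)
  ;; Expansion happens at compile time; THEN and ELSE are spliced
  ;; in unevaluated.
  `(cond (,test ,then) (t ,else)))

(defun my-if-fn (test then-thunk else-thunk)
  ;; The "lazy procedure" view: callers pass thunks explicitly.
  (cond (test (funcall then-thunk))
        (t    (funcall else-thunk))))

;; (my-if t 1 (error "never"))                            => 1
;; (my-if-fn t (lambda () 1) (lambda () (error "never"))) => 1
```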

Jon Harrop

May 26, 2009, 8:12:37 AM
Nicolas Neuss wrote:
> Jon Harrop <j...@spammershome.com> writes:
>> The results you have cited are all many years out of date. They actually
>> pertain to Doug Bagley's original Computer Language Shootout from the
>> 1990s
>
> I don't think this is true. At least, I would be very much surprised if a
> highly competent person like Peter Norvig would base his tests on
> something that bad as the "Shootout"...

Might I suggest that you actually read the article under discussion:

"Relative speeds of 5 languages on 10 benchmarks from The Great Computer
Language Shootout."

>> and, in particular, make no use of multicores whatsoever.
>>
>> Today's shootout results indicate that Java is 1.1-8x faster than Lisp
>> (SBCL), i.e. Java is faster than Lisp in every single test:
>>
>> http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=java&lang2=sbcl&box=1
>
> I think the real difference is the competence of Peter Norvig compared
> with the incompetence of the people running the Shootout or the
> incompetence and malignity of a certain spammer usually favorizing his own
> ray-tracer benchmark.

I really respect your opinion on that.

Marco Antoniotti

May 26, 2009, 9:42:09 AM

I think that even hygienic macros cannot be fully superseded by lazy
calls (I may be wrong). I find that laziness does buy you something
when building what amounts to syntactic layers; yet even with hygienic
macros, you have the equivalent of the AST at hand, which is not
necessarily the case with lazy functions, whose goal is to compute
"something else" than a source-to-source transformation.

> CL-like macros that directly generate source code that can access
> bindings defined outside it can't be always replaced with lazy
> procedures, but I don't know whether accessing outer bindings is a
> good idea.

Maybe and maybe not.

Nicolas Neuss

May 26, 2009, 10:37:15 AM
Jon Harrop writes:

> Nicolas Neuss wrote:
> [...]


>> I don't think this is true. At least, I would be very much surprised if a
>> highly competent person like Peter Norvig would base his tests on
>> something that bad as the "Shootout"...
>
> Might I suggest that you actually read the article under discussion:
>
> "Relative speeds of 5 languages on 10 benchmarks from The Great Computer
> Language Shootout."

OK, I stand corrected. My reservations wrt the Shootout remain, but
indeed, Norvig apparently used that data.

Nicolas

Vend

May 26, 2009, 12:34:18 PM

Do you have an example?

>  I find that lazyness does buy you something
> when building what amount to syntactic layers; yet even with hygienic
> macros, you do have the equivalent of the AST at hand, which is not
> necessarily the case with lazy functions whose goal is to compute
> "something else" than what it is a source-to-source transformation.

A lazy procedure is essentially a piece of code that gets pasted
inside the caller procedure at runtime; the only difference is that it
doesn't see the parent bindings.
Since this expansion can be recursive, it's possible to generate (the
equivalent of) arbitrary code.

In fact, that is how Scheme syntax-rules works, with the difference
that it attempts to expand everything before runtime rather than on-
demand.
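
As a concrete illustration of that recursive expansion, here is a
minimal sketch of a recursive macro (shown with Common Lisp's defmacro
rather than syntax-rules; the name MY-AND is purely for illustration):

```lisp
;; A recursive macro: each expansion step may emit another use of the
;; macro, so the expander keeps rewriting until no MY-AND forms remain.
;; This is the recursive expansion that lets macros generate
;; (the equivalent of) arbitrary code before runtime.
(defmacro my-and (&rest forms)
  (cond ((null forms) t)                     ; (my-and)   => T
        ((null (rest forms)) (first forms))  ; (my-and x) => x
        (t `(if ,(first forms)
                (my-and ,@(rest forms))      ; recursive use, expanded again
                nil))))
```

(my-and a b c) expands step by step into nested IFs that evaluate b
and c only when the earlier forms are true, which is the same
short-circuit behaviour a lazy procedure would give.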

Isaac Gouy

May 26, 2009, 1:20:29 PM
On May 26, 12:32 am, Nicolas Neuss <lastn...@math.uni-karlsruhe.de>
wrote:
-snip-

> I think the real difference is the competence of Peter Norvig compared with
> the incompetence of the people running the Shootout or the incompetence and
> malignity of a certain spammer usually favorizing his own ray-tracer
> benchmark.
>
> Nicolas
>
> [*]http://groups.google.com/group/comp.lang.scheme/browse_thread/thread/...


In that comp.lang.scheme discussion you seem quite proudly to proclaim
your ignorance of the benchmarks game - ignorance achieved simply by
not looking at it for the last 3 or 4 years.

Given that context, why should anyone care what you think about the
benchmarks game?

neptundancer

May 26, 2009, 3:03:09 PM
On May 26, 2:12 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> Nicolas Neuss wrote:
> > Jon Harrop <j...@spammershome.com> writes:
> >> The results you have cited are all many years out of date. They actually
> >> pertain to Doug Bagley's original Computer Language Shootout from the
> >> 1990s
>
> > I don't think this is true.  At least, I would be very much surprised if a
> > highly competent person like Peter Norvig would base his tests on
> > something that bad as the "Shootout"...
>
> Might I suggest that you actually read the article under discussion:
>
>   "Relative speeds of 5 languages on 10 benchmarks from The Great Computer
> Language Shootout."
>
> >> and, in particular, make no use of multicores whatsoever.
>
> >> Today's shootout results indicate that Java is 1.1-8x faster than Lisp
> >> (SBCL), i.e. Java is faster than Lisp in every single test:
>
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=ja...

>
>
>
> > I think the real difference is the competence of Peter Norvig compared
> > with the incompetence of the people running the Shootout or the
> > incompetence and malignity of a certain spammer usually favorizing his own
> > ray-tracer benchmark.
>
> I really respect your opinion on that.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u

Too bad your respect isn't worth a dime.

Tamas K Papp

May 26, 2009, 6:12:33 PM

Nicolas has developed Femlisp, and did quite a bit of numerical work in
CL. That certainly makes me care about his opinion, which I also
consider infinitely more relevant for practical purposes than some
game of benchmarks.

Tamas

anonymous...@gmail.com

May 27, 2009, 12:00:35 AM
On May 26, 8:12 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Nicolas Neuss wrote:
> > Jon Harrop <j...@spammershome.com> writes:
> >> The results you have cited are all many years out of date. They actually
> >> pertain to Doug Bagley's original Computer Language Shootout from the
> >> 1990s
>
> > I don't think this is true.  At least, I would be very much surprised if a
> > highly competent person like Peter Norvig would base his tests on
> > something that bad as the "Shootout"...
>

I think the issue here really is that we are comparing Java, which has
had lots of money thrown at it, to SBCL, which has had little if any.
I don't see how this would definitively prove that Common Lisp is
'slow,' rather that SBCL is slower than Java, which I believe should
be met with a response of 'duh' (it had to happen sooner or later).

> Might I suggest that you actually read the article under discussion:
>
>   "Relative speeds of 5 languages on 10 benchmarks from The Great Computer
> Language Shootout."
>
> >> and, in particular, make no use of multicores whatsoever.
>

Erlang's the best! horaaay!

---

I will note that I was perusing the web and realized that there are so
many lisp- this or that, that it really is silly to come in here and
claim 'lisp is dead'. If anything lisp is having a renaissance of
sorts. (Even ignoring how lispy stuff like Perl and Python and Ruby
are becoming).

Ironlisp/scheme, Nulisp, Lisp flavored erlang, liskell, clojure,
newlisp, about a half dozen open source implementations of CL, about 4
commercial implementations of CL, about a dozen different scheme
implementations, the embedded lisps, dylan just got open sourced (or
will be soon, I believe)...

Someone asked a similar question about this two years ago:
http://coding.derkeiler.com/Archive/Lisp/comp.lang.lisp/2007-06/msg00372.html

--

I'm going to have to go ahead and object to a few things that Dr.
Harrop has said:

1.) Macros and Eval are useless and superseded by other things.

They're not. Macros are for pattern making. CL can be thought of as a
virtual machine with a lot 'already in the box'. Pattern matching is
interesting but it is actually fairly easy to do pattern matching on
lisp code (not that you really need to, as you are more likely to be
programming bottom up than top down for a lisp-based compiler).

Eval is useful for a lot of things, mostly under-the-hood.
It can also be used for building definitions with macros in a more
granular way (like in a compiler).
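
A minimal sketch of that use of EVAL (the function name MAKE-GETTER
and the plist representation are my own, for illustration only):

```lisp
;; Build a DEFUN form with backquote, then install it with EVAL, the
;; way a compiler might emit specialized accessors one definition at
;; a time.
(defun make-getter (slot)
  (eval `(defun ,(intern (format nil "GET-~A" slot)) (plist)
           (getf plist ,(intern (string slot) :keyword)))))

;; (make-getter 'color) defines GET-COLOR, so that
;; (get-color '(:color "red")) returns "red".
```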

2.) Parenthesis things:
I rarely look at parenthesis, I look at the shape of the code.
Parenthesis are something my editor and compiler use. Program for more
than a month in lisp with an indenting editor and you will do the
same.

3.) Speed thing: this probably isn't a valid concern. I don't think
that lisp with proper declarations is as mind-numbingly slow as a lot
of people would have you believe. My regular code comes out around 2/3
of C speeds. I think other people who are better than me can get it to
go faster. Performance has a lot to do with how much you know about
how computers work and what the compiler is going to try to do.

Qi (which is done in Common Lisp) might be an example of a compiler in
lisp that could be categorized as quite fast (then again Mark Tarver is
pretty clever).

You can actually think of CL as either a statically or dynamically typed
language. That people tend to think of it more often in terms of
dynamic typing I think is more of a historical relic. (And possibly
has to do with the complexity* of the type system).

This is perhaps a downfall of the CL spec or the various
implementations (depends how you want to slice it). A lot of the time
the downfall in this area is either due to poor memory management or
insufficient type knowledge.

Although I suppose that as Dr. JH suggests you could take a
'heterogeneous' approach and write a compiler that targets a certain
virtual machine or even generates assembly language (SBCL does
something like this to implement itself in native code). Macros can be
just as useful in this situation.
--

Anyway, I hope the OP doesn't let himself be misled by JH's (sort of)
factually correct but misleading posts. I believe he was born under a
bridge.

(I find it interesting that Dr. Harrop makes reference to Mr. Papp's
lack of a degree, yet he himself has degrees not in computer science
but in physics and chemistry...)

--
* complexity as in multitude of possibilities

Marco Antoniotti

May 27, 2009, 2:35:46 AM

No. That is why I said that I may be wrong.

>
> >  I find that lazyness does buy you something
> > when building what amount to syntactic layers; yet even with hygienic
> > macros, you do have the equivalent of the AST at hand, which is not
> > necessarily the case with lazy functions whose goal is to compute
> > "something else" than what it is a source-to-source transformation.
>
> A lazy procedure is essentially a piece of code that gets pasted
> inside the caller procedure at runtime, the only difference is that it
> doesn't see the parent bindings.
> Since this expansion can be recursive, it's possible to generate (the
> equivalent of) arbitrary code.
>
> In fact, that is how Scheme syntax-rules works, with the difference
> that it attempts to expand everything before runtime rather than on-
> demand.

Yes. And that is the key difference. Plus, with lazy functions you
do not necessarily have the AST at hand. IMHO, that would make
complex source-to-source transformations at a minimum quite
cumbersome. You may argue that such source-to-source transformations
are not desirable, but that is a different argument.

Cheers
--
Marco

Jon Harrop

May 27, 2009, 8:32:33 AM
Isaac Gouy wrote:
> In that comp.lang.scheme discussion you seem quite proudly to proclaim
> your ignorance of the benchmarks game - ignorance achieved simply by
> not looking at it for the last 3 or 4 years.
>
> Given that context, why should anyone care what you think about the
> benchmarks game?

Your "shootout" or "benchmarks game" or whatever you are calling it these
days is still based entirely upon the same flawed methodology and even uses
the same flawed benchmarks. These flaws have been painstakingly described
to you by many people including myself.

So the fact that Nicolas had not looked at it for several years makes no
difference: his old conclusion that your shootout is worse than useless
still holds because you still have not fixed it.

Jon Harrop

May 27, 2009, 10:16:34 AM
anonymous...@gmail.com wrote:
> On May 26, 8:12 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>> Nicolas Neuss wrote:
>> > Jon Harrop <j...@spammershome.com> writes:
>> >> The results you have cited are all many years out of date. They
>> >> actually pertain to Doug Bagley's original Computer Language Shootout
>> >> from the 1990s
>>
>> > I don't think this is true.  At least, I would be very much surprised
>> > if a highly competent person like Peter Norvig would base his tests on
>> > something that bad as the "Shootout"...
>
> I think the issue here really is that we are comparing Java which has
> had lots of money thrown at it, to SBCL, which has had little if any.

Yes.

> I don't see how this would declaratively prove that common lisp is
> 'slow,'

The absence of decent implementations obviously affects the practical
utility of a language. I appreciate the "sufficiently smart compiler"
hypothesis but it is of no practical use.

> I will note that I was perusing the web and realized that there are so
> many lisp- this or that, that it really is silly to come in here and
> claim 'lisp is dead'. If anything lisp is having a renaissance of
> sorts.

That's interesting. I was under the impression that the only significant use
of Lisp in recent years was airline software done by ITA but their best
people left.

> (Even ignoring how lispy stuff like Perl and Python and Ruby
> are becoming).

That is a stretch, IMHO.

> Ironlisp/scheme, Nulisp, Lisp flavored erlang, liskell, clojure,
> newlisp, about a half dozen open source implementations of CL, about 4
> commercial implementations of CL, about a dozen different scheme
> implementations, the embedded lisps, dylan just got open sourced (or
> will be soon, I believe)...

There are certainly lots of implementations of Lisp variants but the only
significant one is, IMHO, Mathematica and it is very domain specific. None
of the others have garnered a significant user base. Indeed, their market
share of language implementations on popcon is falling.

> 1.) Macros and Eval are useless and super-ceded by other things.
>
> They're not. Macros are for pattern making.

Strawman. I actually said "Macros are still useful as a way to add new
syntax.". I use macros extensively in Mathematica and OCaml and I see no
reason to sacrifice decent syntax.

> CL can be thought of as a
> virtual machine with a lot 'already in the box'. Pattern matching is
> interesting but it is actually fairly easy to do pattern matching on
> lisp code (not that you really need to, as you are more likely to be
> programming bottom up than top down for a lisp-based compiler).

Nonsense. Look at the Lisp code in Maxima to see how debilitating this
problem is.

> Eval is useful for a lot of things, mostly under-the-hood.
> It can also be used for building definitions with macros in a more
> granular way (like in a compiler).

Use a real VM and benefit from industrial strength implementations (like a
decent GC).

> 2.) Parenthesis things:
> I rarely look at parenthesis, I look at the shape of the code.
> Parenthesis are something my editor and compiler use. Program for more
> than a month in lisp with an indenting editor and you will do the
> same.

Then you're sprawling your code across many lines in order to get the
indentation that you depend upon to make it comprehensible, squandering
screen real estate. Either way, it is a bad idea. To see macros done
properly, look at Mathematica.

> 3.) Speed thing: this probably isn't a valid concern. I don't think
> that lisp with proper declarations is as mind numbingly slow as a lot
> of people would have you believe. my regular code comes out around 2/3
> of C speeds. I think other people who are better than me can get it to
> go faster. Performance has a lot to do with how much you know about
> how computers work and what the compiler is going to try to do.
>
> Qi (which is done in common lisp) might be an example of a compiler in
> lisp that could be categorized as quite fast (Then again Mark Taver is
> pretty clever).

Look at Mark Tarver's own results:

http://www.lambdassociates.org/Studies/study10.htm

Qi 1.8x slower than unoptimized OCaml. Optimized OCaml is 4x faster again.

> You can actually think of CL as either a static or dynamically (typed)
> language. That people tend to think of it more often in terms of
> dynamic typing I think is more of a historical relic. (And possibly
> has to do with the complexity* of the type system).

Another stretch. Lisp is, at best, an awful static language.

> (I find it interesting that Dr. Harrop makes refrence to Mr. Papp's
> lack of a degree,

Actually I believe he does have a degree in economics from a university in
Hungary and has been working towards a PhD for many years.

> yet he himself has degrees not in computer science but physics and
> chemistry...)

Actually my first degree did include computer science.

gugamilare

May 27, 2009, 11:09:28 AM
On 27 maio, 11:16, Jon Harrop <j...@ffconsultancy.com> wrote:

> anonymous.c.lis...@gmail.com wrote:
> > 2.) Parenthesis things:
> >   I rarely look at parenthesis, I look at the shape of the code.
> > Parenthesis are something my editor and compiler use. Program for more
> > than a month in lisp with an indenting editor and you will do the
> > same.
>
> Then you're sprawling your code across many lines in order to get the
> indentation that you depend upon to make it comprehensible, squandering
> screen real estate.

Not at all. Sprawling code across many lines is what C does to make
its code readable, not Lisp. After a short time using Lisp, you start
to see this
(defun fact (n)
  (if (<= n 1)
      1
      (* n (fact (1- n)))))

Like this:

defun fact (n)
  if (<= n 1)
     1
     * n (fact (1- n))

Parentheses are something you just ignore when you read the code; you
see the indentation of the code, which is done automatically by any
decent editor. And writing parentheses makes indentation automatic,
which is much easier than manually indenting code.

Now, I see that OCaml has macros. But many things still drive me away.
For instance:

EXTEND
  expr: LEVEL "expr1"
    [[ "repeat"; e1 = expr; "until"; e2 = expr ->
          <:expr< do { $e1$; while not $e2$ do { $e1$; } } >> ]];
END;;

WTH is all this $ { } >> [[ ]] -> ; for? No need for that. Lisp's
parentheses are way cleaner and more readable than that. Not to mention
OCaml is very verbose (instead of using Lisp's parentheses it uses the
English words begin and end to mark scope).

Richard Fateman

May 27, 2009, 11:09:28 AM
Jon Harrop wrote:
....

>
> That's interesting. I was under the impression that the only significant use
> of Lisp in recent years was airline software done by ITA but their best
> people left.

You are free to define "significant" any way you want, but people use
lisp, as you can determine by a google search.
See
http://www.franz.com/success/
also.

Are you saying that Haskell has more "significant" use?

.. snip..

>> CL can be thought of as a
>> virtual machine with a lot 'already in the box'. Pattern matching is
>> interesting but it is actually fairly easy to do pattern matching on
>> lisp code (not that you really need to, as you are more likely to be
>> programming bottom up than top down for a lisp-based compiler).
>
> Nonsense. Look at the Lisp code in Maxima to see how debilitating this
> problem is.

"this problem" =? pattern matching? How is this debilitating? Most
Maxima code predates CLOS, so it does not use the explicit syntax of
method definition, but is that the pattern matching you mean?

There are in fact two pattern matchers in Maxima (Schatchen and
Defmatch) but I doubt that you mean matching of algebraic expressions.

There are of course lisp compilers already written in lisp, but there are
also parsers for fortran and the maxima user language, also in maxima;
once the parsers produce lisp code, that code can be compiled too.

.....

>> Parenthesis are something my editor and compiler use. Program for more
>> than a month in lisp with an indenting editor and you will do the
>> same.
>
> Then you're sprawling your code across many lines in order to get the
> indentation that you depend upon to make it comprehensible, squandering
> screen real estate.

You are probably more familiar with the idiomatic indentation of C code
which does, in my view, squander lines with isolated single closing
brackets.

Since most programs in lisp are written in relatively small functional
pieces, in lines that are generally fairly short, with opening brackets
that are not very long, sprawling is not a problem in reasonably
idiomatic code.


> Either way, it is a bad idea. To see macros done
> properly, look at Mathematica.

Huh? I know the Mathematica language pretty well. I wrote an
open-source Mathematica (version 3.0) compatible parser (the only one I
am aware of). Written in, as it happens, lisp. But Macros??
Do you mean the pattern-matching rule-driven convention that substitutes
for function definition in Mathematica? This is cute but hardly a
convincing idea for a successful system-building language. See how much
of Mathematica is, even today, being written in a dialect of C.

This is the first time I have seen anyone praise the Mathematica
language, other than persons in the sphere of influence of Stephen
Wolfram, or physicists notably ignorant of programming languages.

RJF

Isaac Gouy

May 27, 2009, 11:36:45 AM


You don't seem to have bothered reading what he wrote.

If it's worse than useless then why doesn't that invalidate your use
of those same measurements (two days ago) to bash Lisp performance?


http://groups.google.com/group/comp.lang.lisp/msg/d08abb4a2ba43779


Isaac Gouy

May 27, 2009, 12:25:51 PM


That should make you care about his opinion on things Lisp in which he
is an expert.

On other topics his opinion may be uninformed and plain wrong, even
about things we might expect a high-school student to get right simply
by bothering to read the information:

http://groups.google.com/group/comp.lang.lisp/msg/a43b15db2c9f4f18

I am curious about the attitude Nicolas Neuss seems to express (which
I may have misread) because I think those outside the Lisp bubble
might well take away a rather negative impression.

No one but Lispers made a fuss about the changes 3 or 4 years ago (let
alone continue to make a fuss).

Is it so hard to write a competent program in Lisp that 100 line
programs are a major investment of time? Are those trivial programs so
difficult that they need a Lisp implementor to write them? Is the Lisp
community now so small that there are no workaday programmers capable
of doing a good job on trivial programs - just newbies and language
implementors?

Tamas K Papp

May 27, 2009, 12:46:23 PM
On Wed, 27 May 2009 08:09:28 -0700, Richard Fateman wrote:

> Jon Harrop wrote:
>> Either way, it is a bad idea. To see macros done properly, look at
>> Mathematica.
>

> This is the first time I have seen anyone praise the Mathematica
> language, other than persons in the sphere of influence of Stephen
> Wolfram, or physicists notably ignorant of programming languages.

Or Xah Lee. JH is in good company :-P

Tamas

Jon Harrop

May 27, 2009, 1:08:42 PM
Richard Fateman wrote:
> Jon Harrop wrote:
>> That's interesting. I was under the impression that the only significant
>> use of Lisp in recent years was airline software done by ITA but their
>> best people left.
>
> You are free to define "significant" any way you want, but people use
> lisp, as you can determine by a google search.
> See
> http://www.franz.com/success/
> also.

Their "recent" stories are mostly a decade out of date and do not reflect
current use. Their traffic fell 35% in the last 3 months according to Alexa:

http://www.alexa.com/siteinfo/franz.com

Hardly compelling.

> Are you saying that Haskell has more "significant" use?

No.

>>> CL can be thought of as a
>>> virtual machine with a lot 'already in the box'. Pattern matching is
>>> interesting but it is actually fairly easy to do pattern matching on
>>> lisp code (not that you really need to, as you are more likely to be
>>> programming bottom up than top down for a lisp-based compiler).
>>
>> Nonsense. Look at the Lisp code in Maxima to see how debilitating this
>> problem is.
>
> "this problem" =? pattern matching? How is this debilitating? Most
> Maxima code predates CLOS, so it does not use the explicit syntax of
> method definition, but is that the pattern matching you mean?

Pattern matching over algebraic datatypes.

> There is of course lisp compilers already written in lisp, but there are
> also parsers for fortran and the maxima user language, also in maxima;
> once the parsers produce lisp code, that code can be compiled too.

Sure, that is the same as the next language.

>> Either way, it is a bad idea. To see macros done
>> properly, look at Mathematica.
>
> Huh? I know the Mathematica language pretty well. I wrote an
> open-source Mathematica (version 3.0) compatible parser (the only one I
> am aware of). Written in, as it happens, lisp. But Macros??

Look at the syntax extensions, e.g. the one for Feynman diagrams.
Mathematica's macro system allows you to add typeset syntax.

> Do you mean the pattern-matching rule-driven convention that substitutes
> for function definition in Mathematica? This is cute but hardly a
> convincing idea for a successful system-building language. See how much
> of Mathematica is, even today, being written in a dialect of C.

Not a lot, actually. Most of their new code is developed in Mathematica
itself. Even their Eclipse plug-in is written in a Mathematica dialect
interpreted by Java code.

> This is the first time I have seen anyone praise the Mathematica
> language, other than persons in the sphere of influence of Stephen
> Wolfram, or physicists notably ignorant of programming languages.

Mathematica is an excellent example of what a highly-evolved Lisp dialect
can accomplish in a specific domain. For general purpose programming, Lisps
are almost entirely useless, even in theoretical terms and regarding their
practical shortcomings (e.g. no decent implementations, even among very
expensive commercial ones).

Jon Harrop

May 27, 2009, 2:26:45 PM
gugamilare wrote:
> On 27 maio, 11:16, Jon Harrop <j...@ffconsultancy.com> wrote:
>> anonymous.c.lis...@gmail.com wrote:
>> > 2.) Parenthesis things:
>> > I rarely look at parenthesis, I look at the shape of the code.
>> > Parenthesis are something my editor and compiler use. Program for more
>> > than a month in lisp with an indenting editor and you will do the
>> > same.
>>
>> Then you're sprawling your code across many lines in order to get the
>> indentation that you depend upon to make it comprehensible, squandering
>> screen real estate.
>
> Not at all. Sprawling code across many lines is what C does to make
> its code readable, not Lisp. After a small time using Lisp, you start
> to see this
>
> (defun fact (n)
> (if (<= n 1)
> 1
> (* n (fact (1- n)))))

Exactly. Compare with:

let rec fact n = if n <= 1 then 1 else n * fact(n - 1)

or:

let rec fact = function 0 | 1 -> 1 | n -> n * fact(n - 1)

No need to split across multiple lines because it is entirely readable on
one line.

> Parenthesis is something you just ignore when you read the code, you
> see the indentation of the code, which is done automatically by any
> decent editor.

According to the many parentheses that you laid out manually.

> And writing parenthesis makes indentation automatic,
> which is much easier than manually indenting code.

Autoindenting is great but it does not require superfluous parentheses.

> Now, I see that OCaml has macros. But many things still drive me away.
> For instance:
>
> EXTEND
> expr: LEVEL "expr1"
> [[ "repeat"; e1 = expr; "until"; e2 = expr ->
> <:expr< do { $e1$; while not $e2$ do { $e1$; } }
>>> ]];
> END;;
>
> WTH is all this $ { } >> [[ ]] -> ; for ? No need for that.

$ is antiquotation equivalent to , in Lisp.

The { } are apparently part of the grammar that is being defined and have
nothing to do with camlp4 macros.

The [[ ]] define the grammar rules order by precedence which is something
that Lisp macros do not support.

> Lisp's parenthesis are way more clean and readable than that.

Apples and oranges. Lisp macros do not handle extensible grammars with
operator precedence and associativity.

> Not to mention OCaml is very verbose...

Nonsense.

Nicolas Neuss

May 27, 2009, 2:58:52 PM
Isaac Gouy <igo...@yahoo.com> writes:

> [...] Is the Lisp community now so small

Yes.

> that there are no workaday programmers capable of doing a good job on
> trivial programs - just newbies and language implementors?

No. But unfortunately, it is sufficiently small that IMO neither newbies
nor capable programmers should waste time on a badly designed and moderated
benchmark game.

Nicolas

Cor

May 27, 2009, 2:38:04 PM
Some entity, AKA Isaac Gouy <igo...@yahoo.com>,
wrote this mindboggling stuff:
(selectively-snipped-or-not-p)


> Is it so hard to write a competent program in Lisp that 100 line
> programs are a major investment of time? Are those trivial programs so
> difficult that they need a Lisp implementor to write them? Is the Lisp
> community now so small that there are no workaday programmers capable
> of doing a good job on trivial programs - just newbies and language
> implementors?

Nope, lisp is so simple that it only takes a genius to grok it.

Cor
--
It is your right to restrict my rights in your house
doing that in my house will get you shot
I never threaten but merely mention the consequences of your choice

Marco Antoniotti

May 27, 2009, 4:18:19 PM
On May 27, 5:09 pm, gugamilare <gugamil...@gmail.com> wrote:
> On 27 maio, 11:16, Jon Harrop <j...@ffconsultancy.com> wrote:
>
> > anonymous.c.lis...@gmail.com wrote:
> > > 2.) Parenthesis things:
> > >   I rarely look at parenthesis, I look at the shape of the code.
> > > Parenthesis are something my editor and compiler use. Program for more
> > > than a month in lisp with an indenting editor and you will do the
> > > same.
>
> > Then you're sprawling your code across many lines in order to get the
> > indentation that you depend upon to make it comprehensible, squandering
> > screen real estate.
>
> Not at all. Sprawling code across many lines is what C does to make
> its code readable, not Lisp. After a small time using Lisp, you start
> to see this
>
> (defun fact (n)
>   (if (<= n 1)
>       1
>       (* n (fact (1- n)))))
>
> Like this:
>
> defun fact (n)
>   if (<= n 1)
>      1
>      * n (fact (1- n))

Or this

(def pattern function fact 0 -> 1 n -> (* n (fact (1- n))))

> Parenthesis is something you just ignore when you read the code, you
> see the indentation of the code, which is done automatically by any
> decent editor. And writing parenthesis makes indentation automatic,
> which is much easier than manually indenting code.
>
> Now, I see that OCaml has macros.

Not exactly. OCaml has a very good macro preprocessor.

> But many things still drive me away.
> For instance:
>
>        EXTEND
>          expr: LEVEL "expr1"
>            [[ "repeat"; e1 = expr; "until"; e2 = expr ->
>                  <:expr< do { $e1$; while not $e2$ do { $e1$; } }>> ]];
>
>        END;;
>
> WTH is all this $ { } >> [[ ]] -> ; for ? No need for that. Lisp's
> parenthesis are way more clean and readable than that. Not to mention
> OCaml is very verbose (instead of using Lisp's parenthesis it uses
> English words begin and end to mark scope).

Since OCaml syntax is more complex, it needs to be represented as a
full-blown AST via what amounts to a full-blown grammar. Hence its
perceived complexity. CL (and Scheme) macros are much more integrated
into the core language and thus more "usable" (for a given definition of
"usable").

Cheers
--
Marco


Raymond Toy

May 27, 2009, 4:19:50 PM
>>>>> "Jon" == Jon Harrop <j...@ffconsultancy.com> writes:

Jon> anonymous...@gmail.com wrote:
>> CL can be thought of as a
>> virtual machine with a lot 'already in the box'. Pattern matching is
>> interesting but it is actually fairly easy to do pattern matching on
>> lisp code (not that you really need to, as you are more likely to be
>> programming bottom up than top down for a lisp-based compiler).

Jon> Nonsense. Look at the Lisp code in Maxima to see how debilitating this
Jon> problem is.

A large part of the Maxima code was written over 30 years ago. Go
find code in your favorite language from 30 years ago and tell me if
it's well written or not.

Ray

du...@franz.com

May 27, 2009, 4:22:31 PM
On May 27, 10:08 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> Richard Fateman wrote:
> > Jon Harrop wrote:
> >> That's interesting. I was under the impression that the only significant
> >> use of Lisp in recent years was airline software done by ITA but their
> >> best people left.
>
> > You are free to define "significant" any way you want, but people use
> > lisp, as you can determine by a google search.
> > See
> >http://www.franz.com/success/
> > also.
>
> Their "recent" stories are mostly a decade out of date and do not reflect
> current. Their traffic fell 35% in the last 3 months according to Alexa:
>
>  http://www.alexa.com/siteinfo/franz.com
>
> Hardly compelling.

Leave it to you, Jon, to turn to only the very best tools for your
statistical analysis (NOT! :-).

If you look at http://en.wikipedia.org/wiki/Alexa_Internet you'll find
that Alexa's techniques are controversial. Also, note that on March
31, 2009, they went through a complete redesign and new metrics, which
by itself could account for any perceived anomalies in statistics
gathering for any of the past 6 months.

Also, we have recently redesigned our website to be more efficient
(and we have had many visitors comment on this fact, that it is easier
to use and takes fewer clicks to get to where they want to go), which
could also account for fewer clicks on our site; it is simply more
efficient to navigate our site.

All in all, our own internal statistics have suggested an _increase_
in the number of visitors, first time and/or not, rather than a
decrease.

So that leaves the only critique of substance: the age of our success
stories. As you doubtless know from your vast experience in dealing
with large companies as customers :-) these large companies tend to
each have a massive legal process that must be navigated in order to
make such success stories public (or indeed in order to say anything
at all publicly about their company). It is often the case that since
we deal with programmers and engineers, they don't want to be bothered
by this process, and so in some cases it takes a lot of convincing in
order to get customers to give us such testimonials.

Due to the once-anticipated and now implemented website redesign, we
have left the older success-stories list alone and have kept
internally a much more recent list of success stories that we pass on
to customers and potential customers who ask for it. I will not
publish that list here, as that would constitute spam. But I have
been given the OK by my company to say that anyone here who wants to
see this list can ask for it from sa...@franz.com. We also intend, as
part of the ongoing effort to update our website, to update the older
success stories list with this one in some fashion; it is one of our
next tasks in the list of web redesign tasks.

Duane

Kaz Kylheku

May 27, 2009, 4:49:15 PM

Really? Pretend for a second that we are some Java-spewing monkeys who have
never seen a language that didn't look like C or Pascal.

What do the above mean?

Our first suspicion will be that, quite clearly, the identifier being defined
must be ``rec''.

The Lisp is not unreadable if it's all on one line:

(defun fact (n) (if (<= n 1) 1 (* n (fact (1- n)))))

You can make syntactic sense of this even if you've never studied Lisp;
you just have to understand that parentheses indicate
subordination.

Can you split the caml notation across multiple lines such that every single
symbol token is on its own line, and yet such that the indentation still obeys
some canonical rules?

(defun
fact
(n)
(if
(<=
n
1)
1
(*
n
(fact
(1-
n)))))

Unusual, but still readable. I can tell you why everything is placed
where it is.

In Lisp, you have broad choice in the tradeoff between horizontal and vertical
space. Regardless of what tradeoff you choose, you can format the code in
consistent ways which will be completely unsurprising to another Lisp
programmer (and yourself six months later).

I have never written in a computer notation which did not irk me to some
extent, in some situations. Wrestling with text in an editor is a bother.
It's secondary to the ideas we are grappling with, but at the same time
necessary.

Working with Lisp code has irked me the least so far, because of the uniform
syntax. I can articulate exactly why it irks me the least: because when I run
into some constraint, like wanting to fit a block of code into approximately so
many lines and columns, all of the possible ways (candidate solutions to the
problem) are obvious, all of them obey canonical code formatting rules, and all
of them are easily changed from one to the other.

You can chop up the large expression in any way you want, and get the pieces
to line up in a readable way.

>> Parenthesis is something you just ignore when you read the code, you
>> see the indentation of the code, which is done automatically by any
>> decent editor.
>
> According to the many parenthesis that you laid out manually.
>
>> And writing parenthesis makes indentation automatic,
>> which is much easier than manually indenting code.
>
> Autoindenting is great but it does not require superfluous parentheses.

It does require parentheses if you are to be able to chop up the syntax between
any two constituents, and yet have it all indent in a sensible way which
obviously agrees with the abstract syntax tree structure.

>> Lisp's parenthesis are way more clean and readable than that.
>
> Apples and oranges. Lisp macros do not handle extensible grammars with
> operator precedence and associativity.

How will your text editor handle this precedence and associativity? For every
set of camlp4 macros you write, you will have to write an editor mode.
(Or make one which parses the camlp4).

What if ten ocaml programmers get together on a project, and each use their pet
macro library?

Code written using Lisp macros is just Lisp, and is handled by your editor
the same way as code written without those macros.

anonymous...@gmail.com

May 27, 2009, 6:20:02 PM
On May 27, 10:16 am, Jon Harrop <j...@ffconsultancy.com> wrote:

> anonymous.c.lis...@gmail.com wrote:
> > On May 26, 8:12 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> >> Nicolas Neuss wrote:
> >> > Jon Harrop <j...@spammershome.com> writes:
> >> >> The results you have cited are all many years out of date. They
> >> >> actually pertain to Doug Bagley's original Computer Language Shootout
> >> >> from the 1990s
>
> >> > I don't think this is true.  At least, I would be very much surprised
> >> > if a highly competent person like Peter Norvig would base his tests on
> >> > something that bad as the "Shootout"...
>
> > I think the issue here really is that we are comparing Java which has
> > had lots of money thrown at it, to SBCL, which has had little if any.
>
> Yes.
>
> > I don't see how this would declaratively prove that common lisp is
> > 'slow,'
>
> The absence of decent implementations obviously affects the practical
> utility of a language. I appreciate the "sufficiently smart compiler"
> hypothesis but it is of no practical use.
>

A sufficiently stupid compiler would probably generate faster code.

> > I will note that I was perusing the web and realized that there are so
> > many lisp- this or that, that it really is silly to come in here and
> > claim 'lisp is dead'. If anything lisp is having a renaissance of
> > sorts.
>
> That's interesting. I was under the impression that the only significant use
> of Lisp in recent years was airline software done by ITA but their best
> people left.
>

Your impression might be wrong.

> > (Even ignoring how lispy stuff like Perl and Python and Ruby
> > are becoming).
>
> That is a stretch, IMHO.
>

That's why I'm ignoring it :-)

> > Ironlisp/scheme, Nulisp, Lisp flavored erlang, liskell, clojure,
> > newlisp, about a half dozen open source implementations of CL, about 4
> > commercial implementations of CL, about a dozen different scheme
> > implementations, the embedded lisps, dylan just got open sourced (or
> > will be soon, I believe)...
>
> There are certainly lots of implementations of Lisp variants but the only
> significant one is, IMHO, Mathematica and it is very domain specific. None
> of the others have garnered a significant user base. Indeed, their market
> share of language implementations on popcon is falling.
>
> > 1.) Macros and Eval are useless and superseded by other things.
>
> > They're not. Macros are for pattern making.
>
> Strawman. I actually said "Macros are still useful as a way to add new
> syntax.". I use macros extensively in Mathematica and OCaml and I see no
> reason to sacrifice decent syntax.
>

Perhaps I misunderstood then.
Decent syntax is entirely subjective.

> > CL can be thought of as a
> > virtual machine with a lot 'already in the box'. Pattern matching is
> > interesting but it is actually fairly easy to do pattern matching on
> > lisp code (not that you really need to, as you are more likely to be
> > programming bottom up than top down for a lisp-based compiler).
>
> Nonsense. Look at the Lisp code in Maxima to see how debilitating this
> problem is.
>

How old is Maxima?

> > Eval is useful for a lot of things, mostly under-the-hood.
> > It can also be used for building definitions with macros in a more
> > granular way (like in a compiler).
>
> Use a real VM and benefit from industrial strength implementations (like a
> decent GC).
>

'Decent' again.
Specify what is inherently wrong with CL garbage collection.

> > 2.) Parenthesis things:
> >   I rarely look at parenthesis, I look at the shape of the code.
> > Parenthesis are something my editor and compiler use. Program for more
> > than a month in lisp with an indenting editor and you will do the
> > same.
>
> Then you're sprawling your code across many lines in order to get the
> indentation that you depend upon to make it comprehensible, squandering
> screen real estate. Either way, it is a bad idea. To see macros done
> properly, look at Mathematica.
>

More opinion.

> > 3.) Speed thing: this probably isn't a valid concern. I don't think
> > that lisp with proper declarations is as mind numbingly slow as a lot
> > of people would have you believe. my regular code comes out around 2/3
> > of C speeds. I think other people who are better than me can get it to
> > go faster. Performance has a lot to do with how much you know about
> > how computers work and what the compiler is going to try to do.
>
> > Qi (which is done in common lisp) might be an example of a compiler in
> > lisp that could be categorized as quite fast (Then again Mark Tarver is
> > pretty clever).
>
> Look at Mark Tarver's own results:
>
>  http://www.lambdassociates.org/Studies/study10.htm
>
> Qi 1.8x slower than unoptimized OCaml. Optimized OCaml is 4x faster again.
>

I thought your point was that it would be unusably slow?
Not that OCaml is faster than Lisp.

> > You can actually think of CL as either a static or dynamically (typed)
> > language. That people tend to think of it more often in terms of
> > dynamic typing I think is more of a historical relic. (And possibly
> > has to do with the complexity* of the type system).
>

> Another stretch. Lisp is, at best, an awful static language.
>

Not a stretch; Common Lisp is statically typed.

Jon Harrop

May 27, 2009, 6:53:20 PM

They are also used for substantially different applications. In Lisp, macros
are often used to implement inlining, unboxing or other optimizations based
upon simple term rewriting. In OCaml, their use is restricted to syntax
extensions and parsing new grammars.
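By way of illustration, a minimal sketch of the term-rewriting style of optimization macro described here (`dot3` is an invented name, not from the thread or any library):

```lisp
;; A toy optimization macro: a fixed-size dot product is rewritten
;; into straight-line arithmetic at macroexpansion time, so no loop
;; or generic dispatch survives into the compiled code.
(defmacro dot3 (a b)
  (let ((ga (gensym)) (gb (gensym)))
    `(let ((,ga ,a) (,gb ,b))
       (+ (* (aref ,ga 0) (aref ,gb 0))
          (* (aref ,ga 1) (aref ,gb 1))
          (* (aref ,ga 2) (aref ,gb 2))))))

;; (dot3 (vector 1 2 3) (vector 4 5 6)) => 32
```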

Jon Harrop

May 27, 2009, 7:19:43 PM
Kaz Kylheku wrote:
> On 2009-05-27, Jon Harrop <j...@ffconsultancy.com> wrote:
>> Exactly. Compare with:
>>
>> let rec fact n = if n <= 1 then 1 else n * fact(n - 1)
>>
>> or:
>>
>> let rec fact = function 0 | 1 -> 1 | n -> n * fact(n - 1)
>>
>> No need to split across multiple lines because it is entirely readable on
>> one line.
>
> Really? Pretend for a second that we are some Java-spewing monkeys who have
> never seen a language that didn't look like C or Pascal.
>
> What do the above mean?
>
> Our first suspicion will be that, quite clearly, the identifier being
> defined must be ``rec''.

No. Java programmers are used to seeing qualifiers before the identifier
being bound, e.g. public static.

> The Lisp is not unreadable if it's all on one line:
>
> (defun fact (n) (if (<= n 1) 1 (* n (fact (1- n)))))

With five sets of superfluous parentheses to balance? No.

> You can make syntactic sense of this even if you've never studied Lisp;

No way. Most programmers could not even decipher "(<= n 1)". Indeed, just
look at the overwhelming majority of programmers who already expressed very
strong opinions about the awfulness of Lisp syntax.

> you just have to understand that parentheses indicate
> subordination.

And that they appear for no reason, as in "(n)", and that operators are
placed unconventionally.

>>> And writing parenthesis makes indentation automatic,
>>> which is much easier than manually indenting code.
>>
>> Autoindenting is great but it does not require superfluous parentheses.
>

> It does require parentheses...

No, it doesn't.

>>> Lisp's parenthesis are way more clean and readable than that.
>>
>> Apples and oranges. Lisp macros do not handle extensible grammars with
>> operator precedence and associativity.
>
> How will your text editor handle this precedence and associativity? For
> every set of camlp4 macros you write, you will have to write an editor
> mode. (Or make one which parses the camlp4).

Not true. For example, I wrote a macro that handles sequences written in the
form [:...:] to represent purely functional arrays and my editor handled it
just fine. There are many other macros, such as syntax for regular
expressions in pattern matches, that are widely used and require no changes
in the editor.

If you wanted to use a radically different grammar that would break your
text editor then the obvious solution is to use an editor that uses camlp4
and your own grammar itself.

> Code written using Lisp macros is just Lisp, and is handled by your editor
> the same way as code written without those macros.

Also not true. Reader macros are an obvious counter example.

Macros in OCaml and Lisp are no different in these respects. Both allow you
to add syntax extensions without breaking the editor's interpretation of
the grammar. Both provide complete flexibility so you can completely alter
the grammar including even lexical tokens. Choosing to do so has the same
disadvantages in terms of maintenance in both cases.

Jon Harrop

May 27, 2009, 7:28:40 PM
du...@franz.com wrote:
> On May 27, 10:08 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>> Hardly compelling.
>
> Leave it to you, Jon, to turn to only the very best tools for your
> statistical analysis (NOT! :-).
>
> If you look at http://en.wikipedia.org
> ...

Did you just criticize the reliability of statistics I quoted and then you
quoted WIKIPEDIA?! :-)

> All in all, our own internal statistics have suggested an _increase_
> in the number of visitors, first time and/or not, rather than a
> decrease.

Ok.

> So that leaves the only critique of substance: the age of our success
> stories. As you doubtless know from your vast experience in dealing
> with large companies as customers :-) these large companies tend to
> each have a massive legal process that must be navigated in order to
> make such success stories public (or indeed in order to say anything
> at all publicly about their company). It is often the case that since
> we deal with programmers and engineers, they don't want to be bothered
> by this process, and so in some cases it takes a lot of convincing in
> order to get customers to give us such testimonials.

Actually I am very surprised by that. My experience is that people in all
companies, large and small, make it easy to pass on details and freely give
testimonials. Indeed, all of our products list testimonials and many come
from people in large companies.

> Due to the once-anticipated and now implemented website redesign, we
> have left the older success-stories list alone and have kept
> internally a much more recent list of success stories that we pass on
> to customers and potential customers who ask for it. I will not
> publish that list here, as that would constitute spam. But I have
> been given the OK by my company to say that anyone here who wants to
> see this list can ask for it from sa...@franz.com. We also intend, as
> part of the ongoing effort to update our website, to update the older
> success stories list with this one in some fashion; it is one of our
> next tasks in the list of web redesign tasks.

I just wrote to them and asked for it...

anonymous...@gmail.com

May 27, 2009, 8:54:30 PM
On May 27, 6:53 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> Marco Antoniotti wrote:
> > On May 27, 5:09 pm, gugamilare <gugamil...@gmail.com> wrote:
> >> WTH is all this $ { } >> [[ ]] -> ; for ? No need for that. Lisp's
> >> parenthesis are way more clean and readable than that. Not to mention
> >> OCaml is very verbose (instead of using Lisp's parenthesis it uses
> >> English words begin and end to mark scope).
>
> > Since OCaml syntax is more complex it needs to be represented as a
> > full blown AST via what amounts to a full blown grammar.  Hence its
> > perceived complexity.  CL (and Scheme) macros are much more integrated
> > in the core language and thus more "usable" (for a given definition of
> > "usable").
>
> In Lisp, macros
> are often used to implement inlining, unboxing or other optimizations based
> upon simple term rewriting.

Declarations are for these things.
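A minimal sketch of the declarations being referred to, in standard Common Lisp (the function name is invented):

```lisp
;; TYPE and OPTIMIZE declarations tell the compiler enough to
;; open-code unboxed double-float arithmetic in the loop.
(defun sum-doubles (xs)
  (declare (type (simple-array double-float (*)) xs)
           (optimize (speed 3) (safety 1)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length xs) acc)
      (incf acc (aref xs i)))))
```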

anonymous...@gmail.com

May 27, 2009, 9:13:02 PM
On May 27, 7:19 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> Kaz Kylheku wrote:
> > On 2009-05-27, Jon Harrop <j...@ffconsultancy.com> wrote:
> >> Exactly. Compare with:
>
> >>   let rec fact n = if n <= 1 then 1 else n * fact(n - 1)
>
> >> or:
>
> >>   let rec fact = function 0 | 1 -> 1 | n -> n * fact(n - 1)
>
> >> No need to split across multiple lines because it is entirely readable on
> >> one line.
>
> > Really? Pretend for a second that we are some Java-spewing monkeys who have
> > never seen a language that didn't look like C or Pascal.
>
> > What do the above mean?
>
> > Our first suspicion will be that, quite clearly, the identifier being
> > defined must be ``rec''.
>
> No. Java programmers are used to seeing qualifiers before the identifier
> being bound, e.g. public static.
>
> > The Lisp is not unreadable if it's all on one line:
>
> >   (defun fact (n) (if (<= n 1) 1 (* n (fact (1- n)))))
>
> With five sets of superfluous parentheses to balance? No.
>

The parens are after the definition; how do they affect readability?

> > You can make syntactic sense of this even if you've never studied Lisp;
>
> No way. Most programmers could not even decipher "(<= n 1)". Indeed, just

Do you seriously believe that?
Your views on other programmers are more cynical than mine!

> look at the overwhelming majority of programmers who already expressed very
> strong opinions about the awfulness of Lisp syntax.
>

I see a lot of /other/ problems with lisp, the syntax isn't one of
them.

> > you just have to understand that parentheses indicate
> > subordination.
>
> And that they appear for no reason, as in "(n)", and that operators are
> placed unconventionally.
>

I'm sorry, operators?
Do you mean like math operations?
Order of operations confuses me, I hate remembering the rules.
I /like/ that the parenthesis make the precedence obvious.
Maybe this is because I'm not really a math/science person.

> [snip.. arguments about auto-indenting are silly.]


> Not true. For example, I wrote a macro that handles sequences written in the
> form [:...:] to represent purely functional arrays and my editor handled it
> just fine. There are many other macros, such as syntax for regular
> expressions in pattern matches, that are widely used and require no changes
> in the editor.
>

> If you wanted to use a radically different grammar that would break your
> text editor then the obvious solution is to use an editor that uses camlp4
> and your own grammar itself.
>

This last sentence didn't parse for me.

> > Code written using Lisp macros is just Lisp, and is handled by your editor
> > the same way as code written without those macros.
>
> Also not true. Reader macros are an obvious counter example.
>

That's true, reader macros do give CL some nice syntactic
extensibility.

I don't see people use them all that often, however, which probably
speaks to the point about not needing to extend the syntax (just the
semantics, sometimes).

Honestly I think extending the language syntax is probably more
confusing than writing a macro to extend the semantics. At least with
a regular macro I can name the darned thing.
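For reference, the classic bracket-list example of a reader macro, in standard Common Lisp (a sketch; it mutates the current readtable, which real code would copy first):

```lisp
;; Make #[...] read as a LIST construction form.
(set-dispatch-macro-character
 #\# #\[
 (lambda (stream subchar arg)
   (declare (ignore subchar arg))
   `(list ,@(read-delimited-list #\] stream t))))

;; ] must be a terminating macro character so READ stops at it.
(set-macro-character #\] (get-macro-character #\)))

;; Now #[1 2 (+ 1 2)] reads as (list 1 2 (+ 1 2)).
```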

> Macros in OCaml and Lisp are no different in these respects. Both allow you
> to add syntax extensions without breaking the editor's interpretation of
> the grammar. Both provide complete flexibility so you can completely alter
> the grammar including even lexical tokens. Choosing to do so has the same
> disadvantages in terms of maintenance in both cases.

Neat, now if only you replaced that ugly OCaml syntax with something
that has more parens.
:-)

Jon Harrop

May 27, 2009, 9:36:57 PM
Jon Harrop wrote:

> du...@franz.com wrote:
>> Due to the once-anticipated and now implemented website redesign, we
>> have left the older success-stories list alone and have kept
>> internally a much more recent list of success stories that we pass on
>> to customers and potential customers who ask for it. I will not
>> publish that list here, as that would constitute spam. But I have
>> been given the OK by my company to say that anyone here who wants to
>> see this list can ask for it from sa...@franz.com. We also intend, as
>> part of the ongoing effort to update our website, to update the older
>> success stories list with this one in some fashion; it is one of our
>> next tasks in the list of web redesign tasks.
>
> I just wrote to them and asked for it...

Your sales guy, Craig Norvell, just replied stating that he knows nothing of
the "much more recent list of success stories" that you referred to.

Jon Harrop

May 27, 2009, 9:44:42 PM

Macros are a last resort when you cannot get the compiler to act
appropriately on your declarations. See Juho Snellman's macro here, for
example:

http://www.ffconsultancy.com/languages/ray_tracer/code/5/ray.lisp

(defmacro def ((name params &body body)
               (mname &rest mparams)
               (wname &rest wparams))
  `(progn
     (declaim (inline ,name ,wname))
     (defun ,name ,params
       (declare (type double-float ,@params))
       ,@body)
     (defmacro ,mname ,(mapcar #'car mparams)
       ,(loop with inner = (list name)
              with body = ``,',inner
              with all-names = nil
              for (form count) in (reverse mparams)
              for names = (loop repeat count collect (gensym))
              do (setf all-names (append all-names names))
                 (setf body ``(multiple-value-bind ,',(reverse names)
                                  ,,form ,,body))
              finally (setf (cdr inner) (reverse all-names))
                      (return body)))
     (defun ,wname ,(mapcar #'car wparams)
       (,mname ,@(mapcar #'cadr wparams)))))

Jon Harrop

May 27, 2009, 9:52:01 PM
anonymous...@gmail.com wrote:
> On May 27, 7:19 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>> > (defun fact (n) (if (<= n 1) 1 (* n (fact (1- n)))))
>>
>> With five sets of superfluous parentheses to balance? No.
>
> The parens are after the definition; how do they affect readability?

They clutter the line by adding far more tokens than necessary.

>> > You can make syntactic sense of this even if you've never studied Lisp;
>>
>> No way. Most programmers could not even decipher "(<= n 1)". Indeed, just
>
> Do you seriously believe that?

Absolutely. Virtually all programmers would assume that (f x) is a
subexpression "f x" and not a function application.

>> look at the overwhelming majority of programmers who already expressed
>> very strong opinions about the awfulness of Lisp syntax.
>
> I see a lot of /other/ problems with lisp, the syntax isn't one of
> them.

I'm sure you've noticed people complaining about the syntax.

>> > you just have to understand that parentheses indicate
>> > subordination.
>>
>> And that they appear for no reason, as in "(n)", and that operators are
>> placed unconventionally.
>
> I'm sorry, operators?
> Do you mean like math operations?

Yes.

> Order of operations confuses me, I hate remembering the rules.

I'm sure you can handle + - * / and probably also comparisons.

> I /like/ that the parenthesis make the precedence obvious.

But they don't. You still have to contend with , ` ' # and the grammar
itself is still unconventional (e.g. prefix).

> Maybe this is because I'm not really a math/science person.

I do think that makes a big difference. I found Mathematica daunting when I
started learning it because you have @ // and so on. However, I did learn
it quite quickly.

>> If you wanted to use a radically different grammar that would break your
>> text editor then the obvious solution is to use an editor that uses
>> camlp4 and your own grammar itself.
>>
> This last sentence didn't parse for me.

I mean: if you want to keep your extensible grammar and editor in sync then
just build upon a single shared grammar using camlp4.

>> Macros in OCaml and Lisp are no different in these respects. Both allow
>> you to add syntax extensions without breaking the editor's interpretation
>> of the grammar. Both provide complete flexibility so you can completely
>> alter the grammar including even lexical tokens. Choosing to do so has
>> the same disadvantages in terms of maintenance in both cases.
>
> Neat, now if only you replaced that ugly OCaml syntax with something
> that has more parens.
> :-)

You can write your code entirely in the OCaml AST if you like. :-)

Kenneth Tilton

May 27, 2009, 10:08:08 PM
Jon Harrop wrote:
> du...@franz.com wrote:
>> On May 27, 10:08 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>>> Hardly compelling.
>> Leave it to you, Jon, to turn to only the very best tools for your
>> statistical analysis (NOT! :-).
>>
>> If you look at http://en.wikipedia.org
>> ...
>
> Did you just criticize the reliability of statistics I quoted and then you
> quoted WIKIPEDIA?! :-)

Why not? Wikipedia is where earnest, intellectually honest people spend
countless hours correcting idiocies thrown out by self-aggrandizing
jerks as fast the jerks can post their idiocies...whoa, I just had the
craziest deja vu ....

Miles Bader

May 27, 2009, 10:15:20 PM
anonymous...@gmail.com writes:
> I think the issue here really is that we are comparing Java which has
> had lots of money thrown at it, to SBCL, which has had little if any.

To be fair, sbcl is basically cmucl, and cmucl has received its share of
funding over the (many, many) years -- though probably nowhere near what
most serious java implementations have received.

-Miles

--
"... The revolution will be no re-run brothers; The revolution will be live."

Kenneth Tilton

May 28, 2009, 12:05:13 AM
Miles Bader wrote:
> anonymous...@gmail.com writes:
>> I think the issue here really is that we are comparing Java which has
>> had lots of money thrown at it, to SBCL, which has had little if any.
>
> To be fair, sbcl is basically cmucl, and cmucl has received its share of
> funding over the (many, many) years -- though probably nowhere near what
> most serious java implementations have received.

So you are saying the assertion that there is a huge disparity in
resources allocated is unfair, aside from the fact that there has been
nowhere near the resources allocated between the two? Just checking.

kt

Kenneth Tilton

May 28, 2009, 12:07:33 AM

Not true. I watched him type it. He said "go take a flying frog". Gotta
love the Craigster.

hth, kt

Miles Bader

May 28, 2009, 1:04:17 AM
Kenneth Tilton <kent...@gmail.com> writes:
>>> I think the issue here really is that we are comparing Java which has
>>> had lots of money thrown at it, to SBCL, which has had little if any.
>>
>> To be fair, sbcl is basically cmucl, and cmucl has received its share of
>> funding over the (many, many) years -- though probably nowhere near what
>> most serious java implementations have received.
>
> So you are saying the assertion that there is a huge disparity in
> resources allocated is unfair

No, just clarifying the "if any".

-Miles

--
/\ /\
(^.^)
(")")
*This is the cute kitty virus, please copy this into your sig so it can spread.

anonymous...@gmail.com

May 28, 2009, 1:52:22 AM
On May 27, 9:52 pm, Jon Harrop <j...@ffconsultancy.com> wrote:

> anonymous.c.lis...@gmail.com wrote:
> > On May 27, 7:19 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> >> > (defun fact (n) (if (<= n 1) 1 (* n (fact (1- n)))))
>
> >> With five sets of superfluous parentheses to balance? No.
>
> > The parens are after the definition; how do they affect readability?
>
> They clutter the line by adding far more tokens than necessary.
>

Would it be better with a hyper-paren?
like so:
(defun fact (n) (if (<= n 1) 1 (* n (fact (1- n]

I know one of the old lisps had that.
It doesn't make much difference to me, honestly...
The extra few parens kind of blend in to one thing that says 'end of
form' to me.

> >> > You can make syntactic sense of this even if you've never studied Lisp;
>
> >> No way. Most programmers could not even decipher "(<= n 1)". Indeed, just
>
> > Do you seriously believe that?
>
> Absolutely. Virtually all programmers would assume that (f x) is a
> subexpression "f x" and not a function application.
>

/Programmers/.
You assume:
1.) Pre-knowledge of something that looks like C.
2.) Absence of context in a document.

There is nothing inherently intuitive about having a number of
different braces which mean different things. I certainly don't use
such things in my English prose. (Interestingly enough, spoken and
written languages are largely symbolic.)

> >> look at the overwhelming majority of programmers who already expressed
> >> very strong opinions about the awfulness of Lisp syntax.
>
> > I see a lot of /other/ problems with lisp, the syntax isn't one of
> > them.
>
> I'm sure you've noticed people complaining about the syntax.
>

I would guess that the overwhelming majority of programmers don't have
an opinion on lisp's syntax, being that they haven't been exposed to
it.

It is not clear whether negative reactions are due to the syntax
itself or conditioning.
(and the frustration that results from being well conditioned for one
thing and then presented with something else).

> >> > you just have to understand that parentheses indicate
> >> > subordination.
>
> >> And that they appear for no reason, as in "(n)", and that operators are
> >> placed unconventionally.
>
> > I'm sorry, operators?
> > Do you mean like math operations?
>
> Yes.
>

good, we're on the same page.

> > Order of operations confuses me, I hate remembering the rules.
>
> I'm sure you can handle + - * / and probably also comparisons.
>

I can, but it doesn't mean that I /like/ it.
What I like much more is having a concrete representation of
precedence in my code;
and then making up for the few extra parentheses with n-ary operations
(including comparisons).
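The n-ary comparisons mentioned here are standard Common Lisp; a trivial sketch (the function name is invented):

```lisp
(defun strictly-between-p (lo x hi)
  ;; One n-ary comparison replaces the infix "lo < x && x < hi",
  ;; with no precedence rules to remember.
  (< lo x hi))
```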

> > I /like/ that the parenthesis make the precedence obvious.
>
> But they don't. You still have to contend with , ` ' # and the grammar
> itself is still unconventional (e.g. prefix).
>

They do for mathematical operations, which was our context.

I am not sure where you are going with , ` ' and #, as they don't
really have an effect on precedence (or semantics in general, as they
expand into the same regular lisp code).

In fact, I'm quite positive I could happily (if not as conveniently)
write lisp programs without those read macros.

In terms of other operations (you may call them functions), they are
'conventionally' prefix in other languages as well.
(foo x y)
vs.
foo(x,y)

Those are both prefix.
Although I suppose you would argue that the second is more 'obviously'
prefix.

> > Maybe this is because I'm not really a math/science person.
>
> I do think that makes a big difference. I found Mathematica daunting when I
> started learning it because you have @ // and so on. However, I did learn
> it quite quickly.
>
> >> If you wanted to use a radically different grammar that would break your
> >> text editor then the obvious solution is to use an editor that uses
> >> camlp4 and your own grammar itself.
>
> > This last sentence didn't parse for me.
>
> I mean: if you want to keep your extensible grammar and editor in sync then
> just build upon a single shared grammar using camlp4.
>

Ah, well that makes a lot of sense.

> >> Macros in OCaml and Lisp are no different in these respects. Both allow
> >> you to add syntax extensions without breaking the editor's interpretation
> >> of the grammar. Both provide complete flexibility so you can completely
> >> alter the grammar including even lexical tokens. Choosing to do so has
> >> the same disadvantages in terms of maintenance in both cases.
>
> > Neat, now only if you replaced that ugly Ocaml syntax with something
> > that has more parens.
> >  :-)
>
> You can write your code entirely in the OCaml AST if you like. :-)
>

http://pauillac.inria.fr/~ddr/camlp5/doc/htmlc/scheme.html

Or I can write it in scheme!

Now just to implement first class symbols, a tokenizer and defmacro.

Tobias C. Rittweiler

unread,
May 28, 2009, 2:09:32 AM5/28/09
to
du...@franz.com writes:

> Also, we have recently redesigned our website to be more efficient
> (and we have had many visitors comment on this fact, that it is easier
> to use and takes fewer clicks to get to where they want to go), which
> could also account for fewer clicks on our site; it is simply more
> efficient to navigate our site.

Yes, it's definitely prettier; a long-overdue update. One quirk, though:
I never manage to find a link to the documentation of ACL. I just happen
to know it's www.franz.com/doc/.

-T.

Giorgos Keramidas

unread,
May 27, 2009, 9:10:08 PM5/27/09
to
On Wed, 27 May 2009 13:22:31 -0700 (PDT), du...@franz.com wrote:
> Also, we have recently redesigned our website to be more efficient
> (and we have had many visitors comment on this fact, that it is easier
> to use and takes fewer clicks to get to where they want to go), which
> could also account for fewer clicks on our site; it is simply more
> efficient to navigate our site.

The new web site looks fantastic, indeed :-)

du...@franz.com

unread,
May 28, 2009, 4:21:13 AM5/28/09
to

You can stop your lying. I know exactly what he said, and it wasn't
even close to that.

Duane


du...@franz.com

unread,
May 28, 2009, 5:09:45 AM5/28/09
to
On May 27, 11:09 pm, "Tobias C. Rittweiler" <t...@freebits.de.invalid>
wrote:

Actually, that's not where it is. If you want documentation for
Allegro CL, which is on the Enterprise Development Tools side, it's
still under the support section, where it has always been located.

Duane

du...@franz.com

unread,
May 28, 2009, 5:10:43 AM5/28/09
to
On May 27, 6:10 pm, Giorgos Keramidas <keram...@ceid.upatras.gr>
wrote:

Thanks; I'll pass that along.

Duane

Jon Harrop

unread,
May 28, 2009, 8:39:07 AM5/28/09
to

Here is his entire response quoted verbatim:

> Jon,
>
> Thank you for your interest. As you may have noted recently we have
> updated our website and are in the process of bringing some of our other
> content up to date.
>
> We have a preliminary site up http://www.franz.com/agraph/success/
>
> This is for AllegroGraph and is not linked within our site since it is a
> work in progress.
>
> I'm not sure what you mean by "verifiable". Generally the transactions
> with our customers are confidential (stated in the PO terms) but we
> always ask for public statements so we can promote use of Lisp. Our
> legal advice has been that we can list a customer's name but anything
> more constitutes a breach of confidentiality. Many companies don't even
> want their name mentioned.
>
> Is there something in particular we can help you with in promoting Lisp
> and/or our other products in your work?
>
> Regards,
> Craig Norvell
> Franz Inc.
> 2201 Broadway, Suite 715
> Oakland, CA 94612
> Office +1 (510) 452-2000 x165
> Fax +1 (510) 452-0182
> Skype - cnorvell

Can you provide the "much more recent list of success stories" or not?

On a related note, is Franz self-sustaining from its own profit or is it
relying upon external funding?

Jon Harrop

unread,
May 28, 2009, 8:44:19 AM5/28/09
to
anonymous...@gmail.com wrote:
> On May 27, 9:52 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>> anonymous.c.lis...@gmail.com wrote:
>> > On May 27, 7:19 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
>> >> > (defun fact (n) (if (<= n 1) 1 (* n (fact (1- n)))))
>>
>> >> With five sets of superfluous parentheses to balance? No.
>>
>> > The parens are after the definition, how do they effect readability?
>>
>> They clutter the line by adding far more tokens than necessary.
>
> Would it be better with a hyper-paren?
> like so:
> (defun fact (n) (if (<= n 1) 1 (* n (fact (1- n]

Slightly better but you still have many open parens.

>> >> > You can make syntactic sense of this even if you've never studied
>> >> > Lisp;
>>
>> >> No way. Most programmers could not even decipher "(<= n 1)". Indeed,
>> >> just
>>
>> > Do you seriously believe that?
>>
>> Absolutely. Virtually all programmers would assume that (f x) is a
>> subexpression "f x" and not a function application.
>
> /Programmers/.
> You assume:
> 1.) Pre-knowledge of something that looks like C.
> 2.) Absence of context in a document.
>
> There is nothing apparently intuitive about having a number of
> different braces which mean different things. I certainly don't use
such things in my English prose. (Interestingly enough, spoken and
> written languages are largely symbolic).

I was not talking about intuition, of course.

>> > I /like/ that the parenthesis make the precedence obvious.
>>
>> But they don't. You still have to contend with , ` ' # and the grammar
>> itself is still unconventional (e.g. prefix).
>
> They do for mathematical operations, which was our context.
>
> I am not sure where you are going with , ` ' and #, as they don't
> really have an effect on precedence (or semantics in general, as they
> expand into the same regular lisp code).

How else could you understand:

* '(()#())

(NIL #())

You need to know the parsing rules for #.

>> >> Macros in OCaml and Lisp are no different in these respects. Both
>> >> allow you to add syntax extensions without breaking the editor's
>> >> interpretation of the grammar. Both provide complete flexibility so
>> >> you can completely alter the grammar including even lexical tokens.
>> >> Choosing to do so has the same disadvantages in terms of maintenance
>> >> in both cases.
>>
>> > Neat, now only if you replaced that ugly Ocaml syntax with something
>> > that has more parens.
>> > :-)
>>
>> You can write your code entirely in the OCaml AST if you like. :-)
>
> http://pauillac.inria.fr/~ddr/camlp5/doc/htmlc/scheme.html
>
> Or I can write it in scheme!
>
> Now just to implement first class symbols, a tokenizer and defmacro.

Ugh. :-)

Kenneth Tilton

unread,
May 28, 2009, 9:01:04 AM5/28/09
to

Why should old code suck? I wrote some gorgeous COBOL twenty-eight years
ago. Will it suck in two years?

puzzled,kt

Larry Coleman

unread,
May 28, 2009, 9:05:46 AM5/28/09
to
On May 27, 10:08 pm, Kenneth Tilton <kentil...@gmail.com> wrote:

>
> Why not? Wikipedia is where earnest, intellectually honest people spend
> countless hours correcting idiocies thrown out by self-aggrandizing
> jerks as fast the jerks can post their idiocies...whoa, I just had the
> craziest deja vu ....

This should go on your quote list.

Isaac Gouy

unread,
May 28, 2009, 12:03:34 PM5/28/09
to
On May 27, 11:58 am, Nicolas Neuss <lastn...@math.uni-karlsruhe.de>
wrote:
> Isaac Gouy <igo...@yahoo.com> writes:
> > [...] Is the Lisp community now so small
>
> Yes.


That's a shame.


> > that there are no workaday programmers capable of doing a good job on
> > trivial programs - just newbies and language implementors?
>
> No. But unfortunately, it is sufficiently small that IMO neither newbies
> nor capable programmers should waste time on a badly designed and moderated
> benchmark game.
>
> Nicolas


That's just shameful.

You're welcome to whatever opinion you like about how others should
spend their time.

You've done nothing to show that your vague complaints have substance.

You seem unable to do better than limp name-calling.

Jon Harrop

unread,
May 28, 2009, 12:23:24 PM5/28/09
to
Isaac Gouy wrote:
> On May 27, 11:58 am, Nicolas Neuss <lastn...@math.uni-karlsruhe.de>
> wrote:
>> No. But unfortunately, it is sufficiently small that IMO neither newbies
>> nor capable programmers should waste time on a badly designed and
>> moderated benchmark game.
>
> That's just shameful.
>
> You're welcome to whatever opinion you like about how others should
> spend their time.
>
> You've done nothing to show that your vague complaints have substance.
>
> You seem unable to do better than limp name-calling.

Nicolas' complaints are entirely justified. Your shootout is shameful.

Nicolas Neuss

unread,
May 28, 2009, 12:31:28 PM5/28/09
to
Isaac Gouy <igo...@yahoo.com> writes:

> On May 27, 11:58�am, Nicolas Neuss <lastn...@math.uni-karlsruhe.de>

>> Isaac Gouy <igo...@yahoo.com> writes:
>> > [...] Is the Lisp community now so small
>>
>> Yes.
>
> That's a shame.

I wholeheartedly agree, and you could help us very much by telling this
also to the many Java and C++ programmers in your surroundings.

>[...]


> You're welcome to whatever opinion you like about how others should
> spend their time.

Thank you very much.

Nicolas

Kaz Kylheku

unread,
May 28, 2009, 1:01:09 PM5/28/09
to
On 2009-05-27, Nicolas Neuss <last...@math.uni-karlsruhe.de> wrote:
> Isaac Gouy <igo...@yahoo.com> writes:
>
>> [...] Is the Lisp community now so small
>
> Yes.
>
>> that there are no workaday programmers capable of doing a good job on
>> trivial programs - just newbies and language implementors?
>
> No. But unfortunately, it is sufficiently small that IMO neither newbies
> nor capable programmers should waste time on a badly designed and moderated
> benchmark game.

What is so badly designed about it? I've looked at some of these programs
and come to the conclusion that the benchmark is fair.

For instance if we look at the knucleotide, where SBCL has its ass kicked
by the likes of Lua, several things are obvious.

Firstly, everyone is using a crappy algorithm to solve the problem. This must
be a consequence of the shootout rules. Now maybe that is not realistic of the
real world, but a contest must have rules so that some things are held equal in
order that we may compare other things.

In the knucleotide benchmark, the task is to analyze a long gene by looking
for nucleotide subsequences. The program must calculate the frequencies
of the individual nucleotides, and all 16 possible digraphs, and count
the occurrences of various longer subsequences.

Now this could obviously be done by a state machine, custom tailored
to the set of subsequences, making a single pass over the data.

But the way these programs are doing it is building associative maps of
subsequences, and then querying the maps.

So this is a benchmark of the programming language's string manipulation
capabilities: extracting substrings and associative mapping where the
keys are strings made up of the letters ACGT.
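The associative-map approach described above fits in a few lines. This is an illustrative Python sketch of what the benchmark programs do (not the actual shootout code of any entry): build a map from each length-k substring to its count in one pass, then query it.

```python
from collections import defaultdict

def count_kmers(seq, k):
    # Count every length-k subsequence of seq in a single pass,
    # using a plain dict as the associative map. This is the
    # string-hashing-heavy strategy the shootout entries share.
    counts = defaultdict(int)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return counts

freqs = count_kmers("GGTATTTTAATT", 2)
print(freqs["TT"])  # 4
```

Every iteration allocates a substring and hashes it, so the cost is dominated by exactly the string-manipulation machinery the benchmark ends up measuring.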

The Lua program is straightforward code. It doesn't concern itself with
details like what kind of hash table to construct. It just uses the awk-like
subscripting syntax. Yet it beats the SBCL code which has been tweaked with
ugly declarations all over the place, and a custom sxhash and equality function
for gene sequences.

There is clearly something wrong there.

On the other hand, I don't see anything on the Lua website that even breathes a
hint at the possibility that Lua character strings might support international
text via wide characters. I suspect they are 8 bit characters. The
lua-users.org wiki confirms this:

http://lua-users.org/wiki/LuaUnicode

``A Lua string is an arbitrary sequence of values which have at least 8 bits
(octets); they map directly into the char type of the C compiler. ''

So that right there cuts memory consumption and bandwidth in half. A string
that occupies two cache lines in the Lisp program using 16 bit strings might
fit into one cache line under Lua, and so is hashed about twice as fast, when
memory bandwidth is the bottleneck! The programs iterate repeatedly over a
large array of data, which is too large for the L1 cache, and which takes up
twice the space in the Lisp program.

But faster strings are not a substitute when you need international strings.

Also, is it a big deal if the default string hashing function doesn't work so
well over strings of 16 bit characters which have only two bits of entropy per
character?

Coming up with hashing functions that give a good distribution for a wide
range of inputs, /and/ which are fast, is not easy.
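One way around the poor-entropy problem above, sketched here in Python for illustration (this is a common trick, not taken from any shootout entry): since each of A, C, G, T carries only two bits of information, a k-mer can be packed into a machine integer, sidestepping string hashing entirely.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}  # 2 bits per nucleotide

def pack(kmer):
    # Encode an ACGT string as an integer, 2 bits per base, so
    # the hash key is a small integer instead of a string and
    # every bit of the key is meaningful.
    n = 0
    for c in kmer:
        n = (n << 2) | CODE[c]
    return n

print(pack("ACGT"))  # 0b00011011 == 27
```

With integer keys, even a weak hash function distributes well, and the key fits in a register rather than spanning cache lines.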

Should the SBCL guys drop everything and fix the hashing function so that
it improves the knucleotide benchmark? What if something runs slower
because of it?

C++ trounced everything on this benchmark. So let's all use C++! Currently,
the result of the C implementation of the benchmark is that it doesn't build
due to compile errors. Clearly, C is difficult to use and unreliable, so let's
not use that.

I think that some people are reading way too much into these benchmarks,
using them as an excuse for disingenuous trolling.

Nicolas Neuss

unread,
May 28, 2009, 1:46:23 PM5/28/09
to
Kaz Kylheku <kkyl...@gmail.com> writes:

> On 2009-05-27, Nicolas Neuss <last...@math.uni-karlsruhe.de> wrote:
>> No. But unfortunately, it is sufficiently small that IMO neither newbies
>> nor capable programmers should waste time on a badly designed and moderated
>> benchmark game.
>
> What is so badly designed about it? I've looked at some of these programs
> and come to the conclusion that the benchmark is fair.

Finally! A word from someone competent in favor of the shootout!

However, I am a little astonished that you ask about the bad design when
you observe what you report below. And concerning the "unfairness": maybe
you should contribute and see for yourself.

> For instance if we look at the knucleotide, where SBCL has its ass kicked
> by the likes of Lua, several things are obvious.
>
> Firstly, everyone is using a crappy algorithm to solve the problem.

Yes - why not do benchmarking with good algorithms? I would be much more
motivated to work on that.

> This must be consequence of the shootout rules.

Yes, and it is a sign of what I call "bad taste" or "bad design" of the
people running the "game".

> [...]


> So this is a benchmark of the programming language's string manipulation
> capabilities: extracting substrings and associative mapping where where the
> keys are strings made up of the letters ACGT.
>
> The Lua program is straightforward code. It doesn't concern itself with
> details like what kind of hash table to construct. It just uses the
> awk-like subscripting syntax. Yet it beats the SBCL code which has been
> tweaked with ugly declarations all over the place, and a custom sxhash
> and equality function for gene sequences.
>
> There is clearly something wrong there.

It probably means that the SBCL program is at point 2 or 3 of Juho's
lifecycle description in
http://groups.google.com/group/comp.lang.lisp/msg/5489247d2f56a848

[Or at some intermediate step when someone who thought that he knew how to
optimize CL code inserted a lot of useless declarations.]

>[...]


> So that right there cuts memory consumption and bandwidth in half. A string
> that occupies two cache lines in the Lisp program using 16 bit strings might
> fit into one cache line under Lua, and so is hashed about twice as fast, when
> memory bandwidth is the bottleneck! The programs iterate repeatedly over a
> large array of data, which is too large for the L1 cache, and which takes up
> twice the space in the Lisp program.

Yes, this might be a possible explanation. And I think that this sort of
"random" effect happens less often when the task is useful and uses an
optimal algorithm.

>[...]


>
> I think that some people are reading way too much into these benchmarks,
> using them as an excuse for disingenuous trolling.

Precisely. And IMO this is the main problem of the "benchmark game" which
renders it worse than useless.

Nicolas

--
There are lies, damned lies, benchmarks, and benchmark games.

cnorvell

unread,
May 28, 2009, 1:50:09 PM5/28/09
to
On May 28, 5:39 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> du...@franz.com wrote:
> > On May 27, 6:36 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> >> Your sales guy, Craig Norvell, just replied stating that he knows nothing
> >> of the "much more recent list of success stories" that you referred to.
>
> > You can stop your lying. I know exactly what he said, and it wasn't even
> > close to that.
>
> Here is his entire response quoted verbatim:
>
>
>
> > Jon,
>
> > Thank you for your interest.  As you may have noted recently we have
> > updated our website and are in the process of bringing some of our other
> > content up to date.
>
> > We have a preliminary site up http://www.franz.com/agraph/success/

>
> > This is for AllegroGraph and is not linked within our site since it is a
> > work in progress.
>
> > I'm not sure what you mean by "verifiable".  Generally the transactions
> > with our customers are confidential (stated in the PO terms) but we
> > always ask for public statements so we can promote use of Lisp.   Our
> > legal advice has been that we can list a customer's name but anything
> > more constitutes a breach of confidentiality.  Many companies don't even
> > want their name mentioned.
>
> > Is there something in particular we can help you with in promoting Lisp
> > and/or our other products in your work?
>
> > Regards,
> > Craig Norvell
> > Franz Inc.
> > 2201 Broadway, Suite 715
> > Oakland, CA 94612
> > Office +1 (510) 452-2000 x165
> > Fax +1 (510) 452-0182
> > Skype - cnorvell
>
> Can you provide the "much more recent list of success stories" or not?
>
> On a related note, is Franz self-sustaining from its own profit or is it
> relying upon external funding?
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?u

Jon,

It doesn't appear there is any positive outcome in providing you with
this document. It does exist and we provide this to Allegro users
that need some extra support within their organization to show that
Lisp does have traction in the market.

We are in the process of updating the success stories section of our
website and hope to show some improvement in this area over the next
several weeks. Hopefully you will find these changes/updates
satisfactory. If not, we always welcome constructive criticism.

If anyone reading this thread legitimately needs some additional
support within their organization to promote the use of Lisp then
please contact me directly and we can discuss how Franz can help.

Re: Funding. Franz is self funded.

Regards,
Craig
