Comparison: Beta - Lisp


Bruno Haible

Sep 8, 1994, 9:15:58 AM
Since a new language called BETA is gaining attention, which has some
features Lisp is proud of, I attempt a comparison between Beta and Lisp.
Since there is currently only one implementation of Beta, the Mjolner
system, I add a rough evaluation of it.


Comparison Lisp <--> Beta

A. The BETA language

1. Has an incredible symmetry between data and executable program code.
Much better than Lisp. Syntax is such that execution goes strictly
from left to right, except for loops.

2. Lexically scoped, but a module system (the "fragment" system) lets you
put parts of a program into separate files - effectively splitting
interface and implementation.

3. Multiple values, but symmetrically for both input (function arguments)
and output (function return values). A program can be viewed as moving
data around between black boxes.

4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
all correspond to "pattern"s.

5. The data/code symmetry automatically provides for different kinds of
inheritance:

     outer level   inner level
     -----------   -----------
     code          code          local function definitions
     data          code          local variables
     code          data          ??
     data          data          defstruct :include

6. "Part objects" cover the issues
- local functions with lexical scope,
- defstruct :include,
- defclass subclassing, but only single inheritance.

7. The ":<" token makes up
- generic classes (not present in Lisp, you would have to use
a DEFCLASS in a non-null lexical environment),
- generic functions,
- &key parameters for functions.

8. The "::<" token specifies
- subclassing,
- methods for generic functions,
- keyword arguments for functions.

9. CALL-NEXT-METHOD goes the other way around: the most general method
determines the general behaviour, the most specific method only
plays the role of a fixup. (A CLOS sketch of the Lisp side of this
contrast appears right after this list.)

10. Compile-time typing where possible, run-time typing where necessary.
For example, you can put functions into plain lists or into lists
of functions. The latter saves some run-time checks. Of course,
general lists and lists of functions share the same definitions for
insert, delete etc.

11. Concurrency, i.e. deterministic parallelism. (Would make
WITH-PACKAGE-ITERATOR and WITH-HASH-TABLE-ITERATOR obsolete in CL.)

12. No multiple inheritance.
No generic functions with more than one dispatching argument.
The macro system is rather limited and violates lexical scoping, and is
therefore error-prone to use.
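
To make point 9 concrete on the Lisp side (the class names below are
invented, purely for illustration): in CLOS the most specific method is
in control and calls the more general one explicitly; in Beta the general
pattern's body runs first and calls INNER, which is dynamically bound to
the specific extension.

   (defclass account () ())
   (defclass audited-account (account) ())

   (defmethod withdraw ((a account) amount)
     (format t "withdrawing ~D~%" amount))

   (defmethod withdraw ((a audited-account) amount)
     ;; the specific method decides; the general one runs only when asked
     (format t "audit log entry~%")
     (call-next-method))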


B. The Mjolner BETA implementation

1. It is as easy to generate machine code for Beta programs as for Lisp.

2. Generates standalone executables. Hello-world is just 100 KB on Linux.

3. Automatic garbage collection (generational, scavenging).

4. Speed:
   consing up 1000 closures into a list (see the sketch after this list):
       Mjolner   0.049 sec
       CLISP     0.064 sec
       GCL       0.168 sec
   some integer array hacking:
       C           4.2 sec
       Mjolner     111 sec
       GCL         288 sec
       CLISP       415 sec
   (All timings on a 486/33.)

5. Libraries for all kinds of container classes, streams, processes,
exceptions, X11, Motif.
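
The closure-consing code is not shown in this posting; a plausible
Common Lisp shape for such a micro-benchmark (illustrative only, not
the code that was actually timed) is:

   (defun cons-closures (n)
     (let ((acc '()))
       (dotimes (i n acc)
         (push (let ((i i)) (lambda () i)) acc))))

   ;; (time (cons-closures 1000))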


C. The grains of salt:

1. The syntax is orthogonal, but you have to get used to it.

2. Unhygienic macros ("super patterns").

3. No nested comments.

4. Need a dictionary for the words "pattern", "part object", "virtual pattern".


For more information about BETA, look into the comp.lang.beta newsgroup.

Disclaimer: I have no relation with Mjolner Informatics, except that they gave
me a copy of their Mjolner compiler for free.

Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

rodrigo vanegas

Sep 8, 1994, 11:25:14 AM

> 4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
> all correspond to "pattern"s.

So what are these "patterns" anyway? It sounds as if they are very
close to, if not identical to, Lisp closures. After all, can't each of the
above Lisp constructs be implemented as sugar for closures?


rodrigo vanegas
r...@cs.brown.edu

Petri Virkkula

Sep 8, 1994, 12:25:35 PM
On 8 Sep 1994 13:15:58 GMT, hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) said:

Bruno> Since a new language called BETA is gaining attention, which
Bruno> has some features Lisp is proud of, I attempt a comparison
Bruno> between Beta and Lisp.

I understand that this group is dedicated to BETA, but the
subject says "Comparison: Beta - Lisp". I didn't find much
comparison; I found mostly a list of features that are common to
both languages. It would be nice if somebody posted a real
comparison, as just about the only thing I know about Beta is that it
has "pattern"s.

Petri
--
--------------------------------------------------------------------------
Petri Virkkula | Internet: Petri.V...@hut.fi
JMT 11 E 125 | X.400 : /G=Petri/S=Virkkula/O=hut/ADMD=fumail/C=fi/
02150 Espoo | Voice : +358 0 455 1277
FINLAND |
--------------------------------------------------------------------------

Mark C. Chu-Carroll

Sep 8, 1994, 5:02:55 PM

You're basically pretty much correct. A pattern is essentially the
same thing as a closure, the primary difference being that a pattern
is static. Essentially, a pattern is a uniform code construct which
will be instantiated into a closure at runtime.

To make a jump backwards to the initial comparison idea, the key
conceptual difference between Beta and Lisp is that Beta is very
static: patterns are static entities which are compiled into
programs. At runtime, a pattern is instantiated into an "object"
which, depending on how it's used, may be a value or a procedure, or
any combination thereof (just like a closure, because it IS a
closure).

The important differences come about because of three things:

<1> Static typing - in Beta, the types of variables are always declared
statically. Programming in Beta is very static, which makes it very
different from the dynamic nature of Lisp programming.

<2> Single inheritance- the Beta object model uses only single inheritance.
The designers decided that rather than try to work out a system for
resolving the confusion caused by MI (namespace collisions, repeated
inheritance, etc.), it was better to do without it. I don't necessarily
agree with them, but it did result in keeping Beta simple.

<3> Virtual patterns - the object model is rather different. Instead of
the CLOS model where a child method overrides a parent method, Beta
uses the Simula model, where the child extends the parent method. The
parent implementation of a method contains calls to inner, which are
dynamically bound to the correct extension for the actual child type.

Going further, since everything (functions, procedures, methods, values)
is just a pattern, the virtual pattern idea can be applied to *any*
method at all. This results in a *very* interesting program model,
where you can write any procedure to be dynamically extensible. This can
actually be used to write lambda expressions! It's quite elegant...
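
For CLOS readers, one hedged way to mimic the inner mechanism described
under <3> is to let the general method own the control flow and call a
separate generic function that plays the role of inner (all names below
are invented):

   (defgeneric display (shape))
   (defgeneric inner-part (shape))        ; stands in for Beta's inner

   (defclass shape () ())
   (defclass circle (shape) ())

   (defmethod display ((s shape))
     (format t "begin~%")
     (inner-part s)                       ; bound to the child's extension
     (format t "end~%"))

   (defmethod inner-part ((s shape))      ; default: nothing to add
     (values))

   (defmethod inner-part ((s circle))
     (format t "  drawing a circle~%"))

   ;; (display (make-instance 'circle)) prints: begin / drawing a circle / end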

<MC>
--
|| Mark Craig Chu-Carroll: <MC> || "I'm not dumb,
|| CIS Grad, U of Delaware || I just have a command of thoroughly
|| PGP key available by finger || useless information."
|| car...@cis.udel.edu || -Calvin

Ole Villumsen

Sep 8, 1994, 11:45:19 AM
rodrigo vanegas <r...@cs.brown.edu> writes:

I've posted a response to comp.lang.beta.
I can also mail it to anyone interested.
Ole
ovill...@daimi.aau.dk
Ole Villumsen, Computer Science Department, Aarhus University, Denmark

Mark Friedman

Sep 8, 1994, 8:27:34 PM
In article <34nu5v$o...@louie.udel.edu> car...@hercules.cis.udel.edu
(Mark C. Chu-Carroll) writes:

> You're basically pretty much correct. A pattern is essentially the
> same thing as a closure, the primary difference being that a pattern
> is static. Essentially, a pattern is a uniform code construct which
> will be instantiated into a closure at runtime.

OK, so let's equate a pattern with a lambda expression which is
instantiated into a closure at runtime.

> To make a jump backwards to the initial comparison idea, the key
> conceptual difference between Beta and Lisp is that Beta is very
> static: patterns are static entities which are compiled into
> programs.

And lambda expressions are static entities which are compiled into
functions.

> The important differences come about because of three things:
>
> <1> Static typing - in Beta, the types of variables are always declared
> statically. Programs in Beta is very static, which makes it very
> different from the dynamic nature of lisp programming.

Agreed, although it wouldn't take much of a stretch to imagine a lisp
which required type declarations. Admittedly, most lisp users would
hate that.

> <2> Single inheritance- the Beta object model uses only single inheritance.
> The designers decided that rather than try to work out a system for
> resolving the confusion caused by MI (namespace collisions, repeated
> inheritance, etc.), it was better to do without it. I don't necessarily
> agree with them, but it did result in keeping Beta simple.

But it looked to me like the object system that Beta uses is one based
on closures. That sort of object system has been used in Lisp and has
the same sort of single inheritance. For that matter, I assume
that one could build a CLOS-style multiple-inheritance object system
in Beta which did not use closures in such a direct way.
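
By a closure-based object system I mean something like the following
sketch - the names are invented, and inheritance is just delegation to a
parent closure:

   (defun make-point (x y)
     (lambda (msg &rest args)
       (case msg
         (:x x)
         (:y y)
         (:move (setq x (+ x (first args))
                      y (+ y (second args))))
         (t (error "unknown message ~S" msg)))))

   (defun make-colored-point (x y color)
     (let ((parent (make-point x y)))
       (lambda (msg &rest args)
         (case msg
           (:color color)
           (t (apply parent msg args))))))

   ;; (funcall (make-colored-point 1 2 'red) :x) => 1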

> <3> Virtual patterns - the object model is rather different. Instead of
> the CLOS model where a child method overrides a parent method, Beta
> uses the Simula model, where the child extends the parent method. The
> parent implementation of a method contains calls to inner, which are
> dynamically bound to the correct extension for the actual child type.

See my last comment.

> Going further, since everything (functions, procedures, methods, values)
> is just a pattern, the virtual pattern idea can be applied to *any*
> method at all. This results in a *very* interesting program model,
> where you can write any procedure to be dynamically extensible. This can
> actually be used to write lambda expressions! It's quite elegant...

Could you give an example here which doesn't look like a closure? I
guess I'm looking for something which would not have a straightforward
translation into lisp. If there's something new here I'd really like
to know.

-Mark
--
Mark Friedman
NASA-Ames Research Center
MS 269-2
Moffett Field, CA 94035-1000

vmail: (415) 604-0573
email: bo...@ptolemy.arc.nasa.gov

Lenny Gray

Sep 9, 1994, 2:38:50 AM
Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:

: ...
: some integer array hacking


: C 4.2 sec
: Mjolner 111 sec
: GCL 288 sec
: CLISP 415 sec
: (All timings on a 486/33.)

: ...

Are these numbers right? I've seriously used GCL and CLISP myself and
had some arguments with a "true believer Lisper" who thought "Lisp _does_
compete reasonably with C for numeric stuff", but I never bothered to do
the timing tests, and always assumed it wasn't this bad. Is it, really?

Also, I was interested in Beta until one minute ago, because of this.
Are there intrinsic reasons for this that will prevent it from ever
improving?

- Lenny Gray -

Jacob Seligmann

Sep 9, 1994, 7:03:38 AM
Lenny Gray (lenn...@netcom.com) wrote:

> Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
>
> : ...
> : some integer array hacking
> : C 4.2 sec
> : Mjolner 111 sec
> : GCL 288 sec
> : CLISP 415 sec
> : (All timings on a 486/33.)
> : ...

Ouch!

> Are these numbers right? I've seriously used GCL and CLISP myself and
> had some arguments with a "true believer Lisper" who thought "Lisp _does_
> compete reasonably with C for numeric stuff", but I never bothered to do
> the timing tests, and always assumed it wasn't this bad. Is it, really?

I don't know what Bruno's "some integer array hacking" program does,
but the performance ratios shown do not reflect my personal experience.
For example, here's a very simple "some integer array hacking" program
I wrote for the occasion:

/* siah.c - some integer array hacking in C */
void main(void)
{
    int a[10000], i, j;
    for (j=0; j<1000; j++)
        for (i=0; i<10000; i++)
            a[i] = i*4;
}

(* siah.bet - some integer array hacking in BETA *)
ORIGIN '~beta/basiclib/v1.4/betaenv'
--program:descriptor--
(#
   a: [10000]@integer;
do
   (for j:1000 repeat
      (for i:a.range repeat i*4 -> a[i] for);
   for);
#)

And here are the run-times (486/66; BETA compiler v5.0(1); gcc v2.5.8):

    BETA       4.41
    gcc        3.10
    gcc -O6    1.24

In this case, the ratio between BETA and unoptimized C was 1.4, while
the ratio between BETA and optimized C was 3.6.

Bear in mind that the BETA compiler does not currently perform nearly
as much optimization as gcc does. Also, the code generated by the BETA
compiler contained an index check (which could have been removed, had
the optimizer been smarter) at each assignment, while the C code
obviously did not.

> Also, I was interested in Beta until one minute ago, because of this.

Hopefully, your interest has been reawakened.

/Jacob Seligmann
------------------------------------------------------------------------
Mjolner Informatics ApS Phone: (+45) 86 20 20 00 ext. 2754
Science Park Aarhus Direct: (+45) 86 20 20 11 - 2754
Gustav Wieds Vej 10 Fax: (+45) 86 20 12 22
DK-8000 Aarhus C, Denmark Email: jac...@mjolner.dk
------------------------------------------------------------------------
BETA is better
------------------------------------------------------------------------

Bruno Haible

Sep 9, 1994, 11:50:06 AM
> So what are these "patterns" anyway? It sounds as if they are very
> close if not identical to lisp closures. After all, can't each of the
> above lisp stuff can be implemented as sugar for closures.

From the point of view of a Lisp programmer, a pattern consists of

* a specification of variables (call them "variables" or "closure variables"
or "slots"), and

* a piece of code which is executed after the storage for the variables has
been allocated (call it "initialization method" or simply "program").

But that's only one of many aspects of patterns...


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

rodrigo vanegas

Sep 9, 1994, 1:25:10 PM

>> So what are these "patterns" anyway? It sounds as if they are very
>> close if not identical to lisp closures. After all, can't each of the
>> above lisp stuff can be implemented as sugar for closures.

> From the point of view of a Lisp programmer, a pattern consists of
>
> * a specification of variables (call them "variables" or "closure variables"
> or "slots"), and
>
> * a piece of code which is executed after the storage for the variables has
> been allocated (call it "initialization method" or simply "program").

Ok, so consider the following:

(lambda (x y) (print "whatever...") (funcall x y))

This lambda abstraction, which evaluates to a closure, has "a
specification of variables", X and Y, and "a piece of code which is
executed after the storage for the variables has been allocated", the
PRINT followed by the FUNCALL.

I don't see any difference yet...

> But that's only one of many aspects of patterns...

Why is it that every explanation of patterns I've come across so far
always includes a "more to come" disclaimer at the end?! I'm
beginning to wonder if these so-called "patterns" can be defined at
all!


rodrigo "tired of playing detective" vanegas
r...@cs.brown.edu

Bruno Haible

Sep 9, 1994, 12:37:49 PM
Lenny Gray <lenn...@netcom.com> wrote:
>Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
>
>: ...
>: some integer array hacking
>: C 4.2 sec
>: Mjolner 111 sec
>: GCL 288 sec
>: CLISP 415 sec
>: (All timings on a 486/33.)
>: ...
>
> Are these numbers right? I've seriously used GCL and CLISP myself and
> had some arguments with a "true believer Lisper" who thought "Lisp _does_
> compete reasonably with C for numeric stuff", but I never bothered to do
> the timing tests, and always assumed it wasn't this bad. Is it, really?

Of course these numbers are right. Here is another set of figures, for
the same integer array hacking benchmark:
    C                                7.7 sec
    Lucid CL 3.0 production mode     47 sec
    Lucid CL 3.0 development mode   226 sec
    CLISP                           512 sec
(All timings on a Sun Sparcstation IPC, this time.)

Lisp compilers produce good code, but they can't compete with good C
compilers in this case.

Here's the code, if you want to convince yourself:

(defun fannkuch (&optional (n (progn
                                (format *query-io* "n = ?")
                                (parse-integer (read-line *query-io*))
                              ) ) )
  (unless (and (> n 0) (<= n 100)) (return-from fannkuch))
  (let ((n n))
    (declare (fixnum n))
    (let ((perm (make-array n :element-type 'fixnum))
          (perm1 (make-array n :element-type 'fixnum))
          (zaehl (make-array n :element-type 'fixnum))
          (permmax (make-array n :element-type 'fixnum))
          (bishmax -1))
      (declare (type (simple-array fixnum (*)) perm perm1 zaehl permmax))
      (declare (fixnum bishmax))
      (dotimes (i n) (setf (svref perm1 i) i))
      (prog ((\t n))
        (declare (fixnum \t))
       Kreuz
        (when (= \t 1) (go standardroutine))
        (setf (svref zaehl (- \t 1)) \t)
        (decf \t)
        (go Kreuz)
       Dollar
        (when (= \t n) (go fertig))
        (let ((perm0 (svref perm1 0)))
          (dotimes (i \t) (setf (svref perm1 i) (svref perm1 (+ i 1))))
          (setf (svref perm1 \t) perm0)
        )
        (when (plusp (decf (svref zaehl \t))) (go Kreuz))
        (incf \t)
        (go Dollar)
       standardroutine
        (dotimes (i n) (setf (svref perm i) (svref perm1 i)))
        (let ((Spiegelungsanzahl 0) (k 0))
          (declare (fixnum Spiegelungsanzahl k))
          (loop
            (when (= (setq k (svref perm 0)) 0) (return))
            (let ((k2 (ceiling k 2)))
              (declare (fixnum k2))
              (dotimes (i k2) (rotatef (svref perm i) (svref perm (- k i))))
            )
            (incf Spiegelungsanzahl)
          )
          (when (> Spiegelungsanzahl bishmax)
            (setq bishmax Spiegelungsanzahl)
            (dotimes (i n) (setf (svref permmax i) (svref perm1 i)))
        ) )
        (go Dollar)
       fertig
      )
      (format t "The maximum was ~D.~% at " bishmax)
      (format t "(")
      (dotimes (i n)
        (when (> i 0) (format t " "))
        (format t "~D" (+ (svref permmax i) 1))
      )
      (format t ")")
      (terpri)
      (values)
) ) )


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Bruno Haible

Sep 9, 1994, 11:57:21 AM
> <1> Static typing - in Beta, the types of variables are always declared
> statically.

With the important extension that it is not only possible to declare that
    x will be of type A,
but also that
    x will belong to some fixed subtype of A.

This makes it possible to have generic container classes.
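
The closest cheap approximation in Common Lisp that I can think of pushes
the check to run time; a purely illustrative sketch (names invented):

   (defstruct (typed-list (:constructor make-typed-list (element-type)))
     element-type
     (items '()))

   (defun typed-insert (tl x)
     (assert (typep x (typed-list-element-type tl)))
     (push x (typed-list-items tl))
     tl)

   ;; (typed-insert (make-typed-list 'function) #'car)   - ok
   ;; (typed-insert (make-typed-list 'function) 42)      - signals an error

In Beta the element subtype is fixed when the container pattern is
specialized, which is what saves the run-time checks mentioned under
point 10 of the original comparison.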

> Programs in Beta is very static, which makes it very
> different from the dynamic nature of lisp programming.

I don't agree here. This depends on your programming style. You can
perfectly well embed the lambda calculus in Beta.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Mark C. Chu-Carroll

Sep 9, 1994, 1:44:28 PM
In article <BOBO.94S...@avogadro.arc.nasa.gov> bo...@ptolemy.arc.nasa.gov writes:
]In article <34nu5v$o...@louie.udel.edu> car...@hercules.cis.udel.edu

](Mark C. Chu-Carroll) writes:
]
] You're basically pretty much correct. A pattern is essentially the
] same thing as a closure, the primary difference being that a pattern
] is static. Essentially, a pattern is a uniform code construct which
] will be instantiated into a closure at runtime.
]
]OK, so let's equate a pattern with a lambda expression which is
]instantiated into a closure at runtime.

Yep, that's pretty much it.

] The important differences come about because of three things:


]
] <1] Static typing - in Beta, the types of variables are always declared
] statically. Programs in Beta is very static, which makes it very
] different from the dynamic nature of lisp programming.
]
]Agreed, although it wouldn't take much of a stretch to imagine a lisp
]which required type declarations. Admittedly, most lisp users would
]hate that.
]
] <2] Single inheritance- the Beta object model uses only single inheritance.
] The designers decided that rather than try to work out a system for
] resolving the confusion caused by MI (namespace collisions, repeated
] inheritance, etc.), it was better to do without it. I don't necessarily
] agree with them, but it did result in keeping Beta simple.
]
]But it looked to me like the object system that Beta uses is one based
]on closures. That sort of object system has been used in lisp and has
]the same sort of single inheritance. For the same matter, I assume
]that one could build a CLOS type multiple inheritance object system
]in Beta which did not use closures in such a direct way.

Again, you're pretty much right. It's *very* similar to closure based
object systems. It reminds me quite a lot of a closure based object
system I implemented for Scheme as an undergrad.

] <3] Virtual patterns - the object model is rather different. Instead of


] the CLOS model where a child method overrides a parent method, Beta
] uses the Simula model, where the child extends the parent method. The
] parent implementation of a method contains calls to inner, which are
] dynamically bound to the correct extension for the actual child type.
]
]See my last comment.
]

] Going further, since everything (functions, procedures, methods, values)
] is just a pattern, the virtual pattern idea can be applied to *any*
] method at all. This results in a *very* interesting program model,
] where you can write any procedure to be dynamically extensible. This can
] actually be used to write lambda expressions! It's quite elegant...
]
]Could you give an example here which doesn't look like a closure. I
]guess I'm looking for something which would not have a straightforward
]translation into lisp. If there's something new here I'd really like
]to know.

No, I can't give an example which doesn't look like a closure, because
as I've said, the comparison to closures is pretty damned exact. The
differences that I've pointed out are primarily differences in
programming style that wind up in common programs.

Programming styles in Beta do end up being somewhat different than
styles in Lisp, primarily because of the factors I've mentioned
above. The Beta idioms could be used to implement the programming
styles used by most commonlisp people; and the commonlisp idioms could
be used to implement the beta style. It's not a semantic difference,
just a stylistic one.

<MC>

--
|| Mark Craig Chu-Carroll: <MC> || "A libertarian is just a republican
|| CIS Grad, U of Delaware || who takes drugs"
|| PGP Key Available, by finger || - Bob Black
|| car...@cis.udel.edu ||

William D. Gooch

Sep 9, 1994, 1:36:50 PM
On 9 Sep 1994, Bruno Haible wrote:

> Lenny Gray <lenn...@netcom.com> wrote:
> >
> > Are these numbers right? I've seriously used GCL and CLISP myself and
>

> Of course these numbers are right....

"Right" or wrong, they are meaningless and should be ignored in the
absence of additional information. See below.

> Here's the code, if you want to convince yourself:

OK, I have convinced myself not to pay any attention to your results.
See below.

So where and how is the timing measurement taken? This is crucial; you
cannot expect anyone to glean useful information from a timing test
unless you can demonstrate that it was taken in a reasonable fashion,
which means limiting the test to the things you are claiming to measure
by it. Unless you were careful to exclude other processes while the test
was running, you may have also measured things completely extraneous to the
code you intended to measure. I'm not saying that you don't know these
things, but you need to be explicit about them when you publish a
so-called benchmark.

Also, the above code includes all sorts of stuff that has nothing to do
with integer array hacking, such as parsing, formatting (which involves
printed output and most likely conses), array allocation, and so on. This
is a *terrible* benchmark - it's quite possible that the timings you
published even include time waiting for a window to become exposed for
output, as well as the time taken by the user to type input.

Jeff Dalton

Sep 10, 1994, 11:00:56 AM
In article <34nbif$a...@belfort.daimi.aau.dk> ol...@daimi.aau.dk (Ole Villumsen) writes:
>rodrigo vanegas <r...@cs.brown.edu> writes:
>
>>In article <34n2qe$d...@nz12.rz.uni-karlsruhe.de>, hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
>
>>> 4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
>>> all correspond to "pattern"s.
>
>>So what are these "patterns" anyway? It sounds as if they are very
>>close if not identical to lisp closures. After all, can't each of the
>>above lisp stuff can be implemented as sugar for closures.
>
>I've posted a response to comp.lang.beta.
>I can also mail it to anyone interested.

This annoys me big time. Why not post the reply to the newsgroups
that contained the original article?

Jeff Dalton

Sep 10, 1994, 11:11:33 AM

>A. The BETA language
>
>1. Has an incredible symmetry between data and executable program code.
> Much better than Lisp.

This is useless without some details. All I know is that you think
Beta has an "incredible symmetry".

Most of the rest of the message has the same problem.

>2. Lexically scoped, but a module system ("fragment" system) permits
> to put parts of a program into separate files - effectively splitting
> interface and implementation.

Virtually every Lisp in the universe lets you put parts of the
program in separate files. So what exactly is the issue here?

>7. The ":<" token makes up
> - generic classes (not present in Lisp, you would have to use
> a DEFCLASS in a non-null lexical environment),

So in fact what you mean is *Common* Lisp, not Lisp.

>9. CALL-NEXT-METHOD goes the other way around: the most general method
> determines the general behaviour, the most specific method only
> plays the role of a fixup.

Again more is needed.

>11. Concurrency, i.e. deterministic parallelism. (Would make
> WITH-PACKAGE-ITERATOR and WITH-HASH-TABLE-ITERATOR obsolete in CL.)

I think you are confused about the nature and role of these
iterators. But perhaps not. It's impossible to tell from
the little you say.

>4. Speed:
> consing up 1000 closures into a list
> Mjolner 0.049 sec
> CLISP 0.064 sec
> GCL 0.168 sec
> some integer array hacking
> C 4.2 sec
> Mjolner 111 sec
> GCL 288 sec
> CLISP 415 sec
> (All timings on a 486/33.)

It's necessary to see the code. What declarations did you use
in Common Lisp? Also, try a faster Common Lisp.

Jeff Dalton

Sep 10, 1994, 11:21:32 AM
In article <Pine.A32.3.90.940909...@swim5.eng.sematech.org> "William D. Gooch" <goo...@swim5.eng.sematech.org> writes:
>On 9 Sep 1994, Bruno Haible wrote:
>
>> Lenny Gray <lenn...@netcom.com> wrote:
>> >
>> > Are these numbers right? I've seriously used GCL and CLISP myself and
>>
>> Of course these numbers are right....
>
>"Right" or wrong, they are meaningless and should be ignored in the
>absence of additional information. See below.
>
>> Here's the code, if you want to convince yourself:

>> (dotimes (i n) (setf (svref perm1 i) i))

BTW, you can't just use dotimes even if n is declared to be a fixnum.
I think this is explained in the comp.lang.lisp FAQ. If not, it was
explained in the list of "pitfalls" I posted a while back (which perhaps
I should update).

-- jeff

Bernhard Pfahringer

Sep 9, 1994, 2:59:24 PM
In article <34q30t$n...@nz12.rz.uni-karlsruhe.de>,

Bruno Haible <hai...@ma2s2.mathematik.uni-karlsruhe.de> wrote:
>
>Of course these numbers are right. Here is another set of figures, for
>the same integer array hacking benchmark:
> C 7.7 sec
> Lucid CL 3.0 production mode 47 sec
> Lucid CL 3.0 development mode 226 sec
> CLISP 512 sec
>(All timings on a Sun Sparcstation IPC, this time.)
>
>Lisp compilers produce good code, but they can't compete with good C
>compilers in this case.

May not be the case: I've timed your function using both CMUCL 17c and
Lucid CL 4.0.0; CMUCL is 3 times faster than Lucid, so:
CMUCL (estimate!) 15 sec

which is just a factor of 2 off of C. And here is a possible explanation
of the remaining factor: I *guess* svref still checks whether the index is
within the bounds of the vector, whereas there is probably no such check
in the binary produced by the C compiler.

regards, Bernhard
--------------------------------------------------------------------------
Bernhard Pfahringer
Austrian Research Institute for
Artificial Intelligence bern...@ai.univie.ac.at
Schottengasse 3 Fax: (+43 1) 532-0652
A-1010 Vienna, Austria Phone: (+43 1) 533-6112

Scott McLoughlin

Sep 9, 1994, 4:19:54 PM
bern...@ai.univie.ac.at (Bernhard Pfahringer) writes:

> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0, CMUCL is 3 times faster than Lucid, so:
> CMUCL (estimate!) 15 sec
>
> which is just a factor of 2 off of C. And here is a possible explanation
> of that remaining factor: I *guess* svref still checks, if the index is
> within the bounds of the vector, whereas there is probably no such check
> in the binary produced by the C compiler.
>
> regards, Bernhard
> --------------------------------------------------------------------------
> Bernhard Pfahringer
> Austrian Research Institute for
> Artificial Intelligence bern...@ai.univie.ac.at
> Schottengasse 3 Fax: (+43 1) 532-0652
> A-1010 Vienna, Austria Phone: (+43 1) 533-6112
>

Howdy,
SVREF bounds-checks an index with (SAFETY 0)? Weird. Of all
things to optimize, you'd think that "field dereferencing" functions,
e.g., SVREF, SCHAR, <STRUCT>-<SLOT> references, and their builtin
counterparts like SYMBOL-NAME, would not type-check and would not
bounds-check.
I'd think that it would be pretty easy to group these
functions where the compiler can see 'em and generate a single
machine instruction on most processors.
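
For concreteness, the kind of code being discussed looks like this
(the function name is made up; whether a given compiler really drops the
check is best verified with DISASSEMBLE):

   (defun sum-simple-vector (v)
     (declare (type simple-vector v)
              (optimize (speed 3) (safety 0) (debug 0)))
     (let ((s 0))
       (declare (fixnum s))
       (dotimes (i (length v) s)
         (declare (fixnum i))
         (setq s (the fixnum (+ s (the fixnum (svref v i))))))))

   ;; (disassemble #'sum-simple-vector) shows whether an index check remains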

=============================================
Scott McLoughlin
Conscious Computing
=============================================

Lawrence G. Mayka

Sep 9, 1994, 7:25:42 PM

> Of course these numbers are right. Here is another set of figures, for
> the same integer array hacking benchmark:
>     C                                7.7 sec
>     Lucid CL 3.0 production mode     47 sec
>     Lucid CL 3.0 development mode   226 sec
>     CLISP                           512 sec
> (All timings on a Sun Sparcstation IPC, this time.)

I added some type declarations (and removed some), in order to make
sure that no functions (at least, according to the disassembler) are
being called in the inner parts. I've attached the resulting source
code. (Note again that I am not asserting that =all= those type
declarations are necessary, only that I didn't want to take chances.)
The benchmark result on LispWorks 3.2, on an old Sparc LX, for

(fannkuch-fast 10)

is

57.3 sec

You didn't say what argument you passed to the function, by the way;
10 was as high as I was willing to go!

-----------------------------
(in-package :cl-user)

(defun fannkuch-fast (&optional (n (progn
                                     (format *query-io* "n = ?")
                                     (parse-integer (read-line *query-io*))
                                   ) ) )
  (declare (optimize (safety 0) (speed 3) (space 0) (debug 0)))
  (unless (and (> n 0) (<= n 100)) (return-from fannkuch-fast))
  (let ((n n))
    (declare (fixnum n))
    (let ((perm (make-array n :initial-element 0))
          (perm1 (make-array n :initial-element 0))
          (zaehl (make-array n :initial-element 0))
          (permmax (make-array n :initial-element 0))
          (bishmax -1))
      (declare (type simple-vector perm perm1 zaehl permmax))
      (declare (fixnum bishmax))
      (dotimes (i n)
        (declare (fixnum i))
        (setf (svref perm1 i) i))
      (prog ((\t n))
        (declare (fixnum \t))
       Kreuz
        (when (= \t 1) (go standardroutine))
        (setf (svref zaehl (the fixnum (1- \t))) \t)
        (setf \t (the fixnum (1- \t)))
        (go Kreuz)
       Dollar
        (when (= \t n) (go fertig))
        (let ((perm0 (svref perm1 0)))
          (dotimes (i \t)
            (declare (fixnum i))
            (setf (svref perm1 i) (svref perm1 (the fixnum (1+ i)))))
          (setf (svref perm1 \t) perm0)
        )
        (when (> (the fixnum
                   (setf (the fixnum (svref zaehl \t))
                         (the fixnum (1- (the fixnum (svref zaehl \t))))))
                 0)
          (go Kreuz))
        (setf \t (the fixnum (1+ \t)))
        (go Dollar)
       standardroutine
        (dotimes (i n)
          (declare (fixnum i))
          (setf (svref perm i) (svref perm1 i)))
        (let ((Spiegelungsanzahl 0) (k 0))
          (declare (fixnum Spiegelungsanzahl k))
          (loop
            (when (= (the fixnum (setq k (svref perm 0))) 0) (return))
            (let ((k2 (ash (the fixnum (1+ k)) -1)))
              (declare (fixnum k2))
              (dotimes (i k2)
                (declare (fixnum i))
                (rotatef (svref perm i) (svref perm (the fixnum (- k i)))))
            )
            (setf Spiegelungsanzahl (the fixnum (1+ Spiegelungsanzahl)))
          )
          (when (> Spiegelungsanzahl bishmax)
            (setq bishmax Spiegelungsanzahl)
            (dotimes (i n)
              (declare (fixnum i))
              (setf (svref permmax i) (svref perm1 i)))
        ) )
        (go Dollar)
       fertig
      )
      (format t "The maximum was ~D.~% at " bishmax)
      (format t "(")
      (dotimes (i n)
        (declare (fixnum i))
        (when (> i 0) (format t " "))
        (format t "~D" (the fixnum (1+ (the fixnum (svref permmax i)))))
      )
      (format t ")")
      (terpri)
      (values)
) ) )

--------------------------------
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

Bruno Haible

Sep 10, 1994, 5:02:25 PM
Lenny Gray <lenn...@netcom.com> wrote:
>
> Also, I was interested in Beta until one minute ago, because of this.
> Are there intrinsic reasons for this that will prevent it from ever
> improving?

I don't think so. If their compiler did more optimizations, the Mjolner
timings certainly would be much closer to the C timings.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Bruno Haible

Sep 10, 1994, 5:29:59 PM
Bernhard Pfahringer <bern...@ai.univie.ac.at> wrote:
>> Lisp compilers produce good code, but they can't compete with good C
>> compilers in this case.
>
> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0, CMUCL is 3 times faster than Lucid, so:
> CMUCL (estimate!) 15 sec
>
> which is just a factor of 2 off of C.

This agrees with some figures measured by Simon Leinen:

Sun 4/670MP, 40MHz SuperSPARC   acc -fast        1.5 sec user
Sun 4/670MP, 40MHz SuperSPARC   CMU CL 17e      3.08 sec user
Sun 4/670MP, 40MHz SuperSPARC   LispWorks 3.2  15.58 sec user
Sun 4/670MP, 40MHz SuperSPARC   Allegro 4.1    33.87 sec user

Indeed CMUCL is only a factor of 2 off of C. I am happy to eat my words
about Lisp compilers' code.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Scott Schwartz

Sep 10, 1994, 6:05:17 PM
hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
> Lenny Gray <lenn...@netcom.com> wrote:
> > Also, I was interested in Beta until one minute ago, because of this.
>
> If their compiler did more optimizations, the Mjolner
> timings certainly would be much closer to the C timings.

Contrast with the strategy of the Sather group: their compiler started
out by doing better than C++ on some microbenchmarks. That's what it
takes to win supporters in real life.

Bruno Haible

Sep 10, 1994, 5:13:52 PM
William D. Gooch <goo...@swim5.eng.sematech.org> wrote:
>
> So where and how is the timing measurement taken? This is crucial

(compile-file "fannkuch")
(load "fannkuch.fasl")
(time (fannkuch 9))

> Also, the above code includes all sorts of stuff that has nothing to do
> with integer array hacking, such as parsing, formatting (which involves
> printed output and most likely conses), array allocation, and so on.

All these are isolated at the beginning and at the end, and executed
only once, so their time will be negligible.

> This is a *terrible* benchmark - it's quite possible that the timings you
> published even include time waiting for a window to become exposed for
> output, as well as the time taken by the user to type input.

Sure, if your "time" command doesn't distinguish elapsed real time
and the time consumed by a single process in user mode, then it is
unusable for benchmarks.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Lawrence G. Mayka

Sep 11, 1994, 12:12:30 PM

I forgot some additional type declarations needed to avoid checking
the GC write barrier when vector elements are modified. My new
result for

(time (fannkuch-fast 9))

on LispWorks 3.2 is

An early-1993 Sparc LX 3.81 sec
An early-1993, low-end Sparc 10 2.97 sec

I have attached the again-modified version of the benchmark.

-------------------------------
(in-package :cl-user)

(defun fannkuch-fast (&optional (n (progn
                                     (format *query-io* "n = ?")
                                     (parse-integer (read-line *query-io*)))))
  (declare (optimize (safety 0) (speed 3) (space 0) (debug 0))
           (fixnum n))
  (unless (and (> n 0) (<= n 100))
    (return-from fannkuch-fast))
  (let ((perm (make-array n :initial-element 0))
        (perm1 (make-array n :initial-element 0))
        (zaehl (make-array n :initial-element 0))
        (permmax (make-array n :initial-element 0))
        (bishmax -1))
    (declare (type simple-vector perm perm1 zaehl permmax)
             (dynamic-extent perm perm1 zaehl permmax)
             (fixnum bishmax))
    (dotimes (i n)
      (declare (fixnum i))
      (setf (svref perm1 i) i))
    (prog ((\t n))
      (declare (fixnum \t))
     Kreuz
      (when (= \t 1)
        (go standardroutine))
      (setf (svref zaehl (the fixnum (1- \t))) \t)
      (setf \t (the fixnum (1- \t)))
      (go Kreuz)
     Dollar
      (when (= \t n)
        (go fertig))
      (let ((perm0 (svref perm1 0)))
        (declare (fixnum perm0))
        (dotimes (i \t)
          (declare (fixnum i))
          (setf (svref perm1 i) (the fixnum (svref perm1 (the fixnum (1+ i))))))
        (setf (svref perm1 \t) perm0))
      (when (> (the fixnum (setf (svref zaehl \t)
                                 (the fixnum (1- (the fixnum (svref zaehl \t))))))
               0)
        (go Kreuz))
      (setf \t (the fixnum (1+ \t)))
      (go Dollar)
     standardroutine
      (dotimes (i n)
        (declare (fixnum i))
        (setf (svref perm i) (the fixnum (svref perm1 i))))
      (let ((Spiegelungsanzahl 0)
            (k 0))
        (declare (fixnum Spiegelungsanzahl k))
        (loop
          (when (= (the fixnum (setq k (svref perm 0))) 0)
            (return))
          (let ((k2 (ash (the fixnum (1+ k)) -1)))
            (declare (fixnum k2))
            (dotimes (i k2)
              (declare (fixnum i))
              (rotatef (the fixnum (svref perm i))
                       (the fixnum (svref perm (the fixnum (- k i)))))))
          (setf Spiegelungsanzahl (the fixnum (1+ Spiegelungsanzahl))))
        (when (> Spiegelungsanzahl bishmax)
          (setq bishmax Spiegelungsanzahl)
          (dotimes (i n)
            (declare (fixnum i))
            (setf (svref permmax i) (the fixnum (svref perm1 i))))))
      (go Dollar)
     fertig)
    (format t "The maximum was ~D.~% at " bishmax)
    (format t "(")
    (dotimes (i n)
      (declare (fixnum i))
      (when (> i 0)
        (format t " "))
      (format t "~D" (the fixnum (1+ (the fixnum (svref permmax i))))))
    (format t ")")
    (terpri)
    (values)))

-----------------------------------------

Ken Anderson

Sep 11, 1994, 9:56:11 AM
In article <LGM.94Se...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

Bernhard Pfahringer <bern...@ai.univie.ac.at> wrote:
>> Lisp compilers produce good code, but they can't compete with good C
>> compilers in this case.
>
> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0, CMUCL is 3 times faster than Lucid, so:

Be careful when making such claims. How carefully did you study your
benchmarks to find out why there was such a difference? Generally, CMU and
Lucid are quite comparable.

> CMUCL (estimate!) 15 sec
> > which is just a factor of 2 off of C.

> This agrees with some figures measured by Simon Leinen:
>
>     Sun 4/670MP, 40MHz SuperSPARC   acc -fast        1.5 sec user
>     Sun 4/670MP, 40MHz SuperSPARC   CMU CL 17e      3.08 sec user
>     Sun 4/670MP, 40MHz SuperSPARC   LispWorks 3.2  15.58 sec user
>     Sun 4/670MP, 40MHz SuperSPARC   Allegro 4.1    33.87 sec user

Since two different Lisps are off by factors of 5 and 10 from CMUCL, the
benchmark probably needs a little work. For example, I found (ceiling k 2)
accounted for 60% of the time of the original version of (fannkuch 9) in
Allegro. The Allegro fannkuch-fast times are pretty close to those below
(though the hardware isn't the same).

> Indeed CMUCL is only a factor of 2 off of C. I am happy to eat my words
> about Lisp compiler's code.

This is a good example of how easy it is for Lisp to look much worse than
it is.

> I forgot some additional type declarations needed to avoid checking
> the GC write barrier when vector elements are modified. My new
> result for
>
>     (time (fannkuch-fast 9))
>
> on LispWorks 3.2 is
>
>     An early-1993 Sparc LX           3.81 sec
>     An early-1993, low-end Sparc 10  2.97 sec
>
> I have attached the again-modified version of the benchmark.

This is a great improvement! The original benchmark had at least a dozen
problems with it. Could you or someone publish the C (and Beta) code so we
could make a completely fair and correct benchmark out of this?

Good work.
--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA

Simon Leinen

Sep 12, 1994, 4:39:44 AM
> Since two different Lisp's are off by a factor of 5 and 10 from
> CMUCL, the benchmark probably needs a little work.

One might also conclude that the two other Lisps need a little work.

> For example, i found (ceiling k 2) accounted for 60% of the time
> of the original version of (fannkuch 9) in Allegro.

...which would confirm the alternative conclusion (at least CEILING
seems to need a little work in Allegro).

[In defense of Franz, when I once sent them a complaint about the
performance of some specific part of Lisp functionality that was
implemented sub-optimally, they answered with a very effective patch.]

Maybe the problem is that it is simply very hard to implement all of
Common Lisp in a maximally efficient manner. Fortunately most Lisps
have good profilers now, so that you can at least identify the
problematic areas in your programs without too much effort.
--
Simon.

Fernando Mato Mira

Sep 12, 1994, 12:02:08 PM
In article <SIMON.94S...@liasg5.epfl.ch>, si...@lia.di.epfl.ch (Simon Leinen) writes:

> [In defense of Franz, when I once sent them a complaint about the
> performance of some specific part of Lisp functionality that was
> implemented sub-optimally, they answered with a very effective patch.]

Same here.

--
F.D. Mato Mira
Computer Graphics Lab mato...@epfl.ch
EPFL FAX: +41 (21) 693-5328


Paul Krause x7816

Sep 12, 1994, 6:45:51 PM
In article <34q30t$n...@nz12.rz.uni-karlsruhe.de> hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
>
> Here's the code, if you want to convince yourself:
>
> (defun fannkuch (&optional (n (progn
> (format *query-io* "n = ?")
> (parse-integer (read-line *query-io*))
> ) ) )

etc.

That's not Lisp! That's C with parens!

--
Paul F. Krause Systems Research and Applications Corporation
(703) 558-7816 Intelligent Information Systems
FAX 558-4723 2000 15th Street North, Arlington, VA 22201
Internet: kra...@sra.com-----Uucp: uunet!uupsi5!sraopus!verdi!krausep

Jacob Seligmann

Sep 13, 1994, 2:13:27 PM
Jacob Seligmann (jac...@daimi.aau.dk) wrote:

> Lenny Gray (lenn...@netcom.com) wrote:
>
> > Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
> >
> > > some integer array hacking
> > > C 4.2 sec
> > > Mjolner 111 sec
> > > GCL 288 sec
> > > CLISP 415 sec
>

> > Are these numbers right? I've seriously used GCL and CLISP myself and
> > had some arguments with a "true believer Lisper" who thought "Lisp _does_
> > compete reasonably with C for numeric stuff", but I never bothered to do
> > the timing tests, and always assumed it wasn't this bad. Is it, really?
>

> I don't know what Bruno's "some integer array hacking" program does [...]

Mr. Bruno Haible was kind enough to send me the C and LISP programs
used. Unfortunately, he had deleted the BETA program, so I had to do
the port from scratch. All three programs are listed at the end of this
post. The benchmark is the program run with n=9.

[As you can see below, the C implementation uses a lot of tricks I did
not care to use or was unable to use in the BETA port: in-lining (#define's),
hard-wired variable liveness information (loads of blocks with local
variables, making the code very hard to read), code generation hints
(register ...), array pointers (*d++ = *s++), unstructured gotos, etc.]

Here are my results (486/66; BETA compiler v5.0(2); gcc v2.5.8):

    gcc -O6 -fomit-frame-pointer     2.08
    BETA -no-range-check            11.00

[Actually, the compiler currently does not have a -no-range-check option.
The BETA figure was obtained by removing all boundl-instructions from
the code by hand, and reassembling; this is the only way I could
achieve a fair comparison.]

As you can see, the ratio between state-of-the-art optimized C and BETA
was 5.3, not 26.4 as above.

> > Also, I was interested in Beta until one minute ago, because of this.

> > Are there intrinsic reasons for this that will prevent it from ever
> > improving?

I looked at the generated code, and it seems that the increase in
execution time can be attributed to two factors:

(1) The BETA code allocates its variables on the heap, while the C
code uses registers almost exclusively.

(2) The BETA code performs a regular lookup calculation at each array
access, while the C code simply increases a dedicated register
during the linear sweep.

With a little bit of effort, there is absolutely no reason why the BETA
compiler should not be able to perform the variable liveness analysis
itself (although currently it does not, and therefore pays the heavy
price of using heap space instead of registers). Also, the linear array
sweeps are simple enough that the compiler could recognize them and
avoid the index calculations (again, it currently does not, and
therefore pays the price at each lookup).

"Some integer array hacking"-type programs are *exactly* what highly
sophisticated C compilers excel at, but there are no intrinsic reasons
why the BETA compiler should not be able to produce comparable code.
We're constantly working on it...

Sincerely,

/Jacob Seligmann
------------------------------------------------------------------------
Mjolner Informatics ApS Phone: (+45) 86 20 20 00 ext. 2754
Science Park Aarhus Direct: (+45) 86 20 20 11 - 2754
Gustav Wieds Vej 10 Fax: (+45) 86 20 12 22
DK-8000 Aarhus C, Denmark Email: jac...@mjolner.dk
------------------------------------------------------------------------
BETA is better
------------------------------------------------------------------------

============================== fannkuch.c ==============================

/* Programm zur Simulation des Pfannkuchenspiels */
/* Bruno Haible 10.06.1990 */

#include <stdio.h>

#define PermLength 100
#define PermCopy(Source,Dest,n) \
{register int h = n; register int *s = Source; register int *d = Dest; \
while (h) {*d++ = *s++; h--;}; \
}

void main()
{ int n;
int Perm[PermLength];
int Perm1[PermLength];
int Zaehl[PermLength];
int PermMax[PermLength];
int BishMax; /* bisheriges Maximum aller Spiegelungsanzahlen */
printf("n = ?");
scanf("%d",&n); if (!((n>0)&&(n<=PermLength))) goto Ende;

BishMax=-1;
/* Erzeugung aller Permutationen */
/* Erzeuge die Permutationen nach dem Algorithmus:
PERM1[0..n-1] := (0,...,n-1]
t:=n
# if t=1 then standardroutine, goto $
Z_hl[t-1]:=t
t:=t-1, goto #
$ if t<n then goto &, if t=n then fertig.
& rotiere PERM1[0..t], dec Z_hl[t], if >0 then goto #
t:=t+1, goto $
*/
{ register int i;
for (i=0; i<n; i++) { Perm1[i]=i; };
};
{ register int t;
t=n;
Kreuz: if (t==1) goto standardroutine;
Zaehl[t-1]=t;
t=t-1; goto Kreuz; /* rekursiver Aufruf */
Dollar: /* R_cksprung aus dem rekursiven Aufruf */
if (t==n) goto Fertig;
/* Rotieren: Perm1[0] <- Perm1[1] <- ... <- Perm1[n-1] <- Perm1[0] */
{ register int Perm0; register int i;
Perm0=Perm1[0];
for (i=0; i<t; i++) {Perm1[i]=Perm1[i+1];};
Perm1[t]=Perm0;
};
if (--Zaehl[t]) goto Kreuz;
t=t+1; goto Dollar;

standardroutine:
PermCopy(Perm1,Perm,n); /* Perm := Perm1 */
{ int Spiegelungsanzahl;
Spiegelungsanzahl=0;
{ unsigned int k;
while (!((k=Perm[0]) == 0))
{/* Spiegle Perm[0..k] */
unsigned int k2=(k+1)/2;
register int *up = &Perm[0]; register int *down = &Perm[k];
{ register int i;
i=k2; while (i) {int h; h=*up; *up++=*down; *down--=h; i--;};
}
Spiegelungsanzahl++;
};
};
if (Spiegelungsanzahl>BishMax)
{BishMax=Spiegelungsanzahl; PermCopy(Perm1,PermMax,n);};
}
goto Dollar;
}
Fertig: printf("Das Maximum betrug %d.\n bei ",BishMax);
{register unsigned int i;
printf("(");
for (i=0; i<n; i++)
{if (i>0) printf(" ");
printf("%d",PermMax[i]+1);
};
printf(")");
};
printf("\n");
Ende: ;
}

============================= fannkuch.lsp =============================

(defun fannkuch (&optional (n (progn
                                (format *query-io* "n = ?")
                                (parse-integer (read-line *query-io*))
                              ) ) )
  (unless (and (> n 0) (<= n 100)) (return-from fannkuch))
  (let ((n n))
    (declare (fixnum n))
    (let ((perm (make-array n :element-type 'fixnum))
          (perm1 (make-array n :element-type 'fixnum))
          (zaehl (make-array n :element-type 'fixnum))
          (permmax (make-array n :element-type 'fixnum))
          (bishmax -1))
      (declare (type (simple-array fixnum (*)) perm perm1 zaehl permmax))
      (declare (fixnum bishmax))
      (dotimes (i n) (setf (svref perm1 i) i))
      (prog ((\t n))
        (declare (fixnum \t))
       Kreuz
        (when (= \t 1) (go standardroutine))
        (setf (svref zaehl (- \t 1)) \t)
        (decf \t)
        (go Kreuz)
       Dollar
        (when (= \t n) (go fertig))
        (let ((perm0 (svref perm1 0)))
          (dotimes (i \t) (setf (svref perm1 i) (svref perm1 (+ i 1))))
          (setf (svref perm1 \t) perm0)
        )
        (when (plusp (decf (svref zaehl \t))) (go Kreuz))
        (incf \t)
        (go Dollar)
       standardroutine
        (dotimes (i n) (setf (svref perm i) (svref perm1 i)))
        (let ((Spiegelungsanzahl 0) (k 0))
          (declare (fixnum Spiegelungsanzahl k))
          (loop
            (when (= (setq k (svref perm 0)) 0) (return))
            (let ((k2 (ceiling k 2)))
              (declare (fixnum k2))
              (dotimes (i k2) (rotatef (svref perm i) (svref perm (- k i))))
            )
            (incf Spiegelungsanzahl)
          )
          (when (> Spiegelungsanzahl bishmax)
            (setq bishmax Spiegelungsanzahl)
            (dotimes (i n) (setf (svref permmax i) (svref perm1 i)))
        ) )
        (go Dollar)
       fertig
      )
      (format t "Das Maximum betrug ~D.~% bei " bishmax)
      (format t "(")
      (dotimes (i n)
        (when (> i 0) (format t " "))
        (format t "~D" (+ (svref permmax i) 1))
      )
      (format t ")")
      (terpri)
      (values)
) ) )

============================= fannkuch.bet =============================

ORIGIN '~beta/basiclib/v1.4/betaenv'
--program:descriptor--
(#

PermLength: (# exit 100 #);
Perm, Perm1, PermMax, Zaehl: [PermLength]@integer;
h, i, k, n, t, up, down, BishMax, Spiegelungsanzahl: @integer;
do
'n = ?' -> putText;
getInt -> n;
(if (n < 1) or (n > PermLength) then stop if);

-1 -> BishMax;
(for i:n repeat i-1 -> Perm1[i] for);
n -> t;

again:
(#
do (for i:t repeat i -> Zaehl[i] for); 1 -> t;
(for i:n repeat Perm1[i] -> Perm[i] for);
0 -> Spiegelungsanzahl;

while1:
(#
do (if Perm[1]->k // 0 then leave while1 if);
1 -> up; k+1 -> down; down/2 -> i;
while2:
(#
do (if i // 0 then leave while2 if);
Perm[up] -> h; Perm[down] -> Perm[up]; h -> Perm[down];
up+1 -> up; down-1 -> down; i-1 -> i;
restart while2;
#);
Spiegelungsanzahl+1 -> Spiegelungsanzahl;
restart while1;
#);

(if Spiegelungsanzahl > BishMax then
Spiegelungsanzahl -> BishMax;
(for i:n repeat Perm1[i] -> PermMax[i] for)
if);

while3:
(#
do (if t // n then leave while3 if);
Perm1[1] -> h;
(for i:t repeat Perm1[i+1] -> Perm1[i] for);
h -> Perm1[t+1];
(if (Zaehl[t+1]-1 -> Zaehl[t+1]) <> 0 then restart again if);
t+1 -> t;
restart while3;
#);
#);

'Das Maximum betrug ' -> putText;
BishMax -> putInt;
'.\n bei (' -> putText;
(for i:n repeat
(if i > 1 then ' ' -> put if);
PermMax[i]+1 -> putInt;
for);
')' -> putLine;
#)

========================================================================

Jeff Dalton

Sep 13, 1994, 3:13:08 PM

There was a Prolog a while back that did better than C for some
numeric stuff. Did this win supporters? Nooooooooooo...

Jeff Dalton

Sep 13, 1994, 3:29:23 PM
In article <SIMON.94S...@liasg5.epfl.ch> si...@lia.di.epfl.ch (Simon Leinen) writes:
> > Since two different Lisp's are off by a factor of 5 and 10 from
> > CMUCL, the benchmark probably needs a little work.
>
>One might also conclude that the two other Lisps need a little work.

One might conclude all kinds of things. But would one be right?

As has been pointed out a number of times, it can be difficult to
get the best results from several different Common Lisps. Sometimes
declarations that help a lot in one make things worse in another,
and so on. I find it easier to get the best numerical results from
CMU CL than from Lucid or Allegro. This may mean that *something*
about Lucid and Allegro needs a little work, or it may just mean
that I and the CMU CL folk think alike when it comes to this.

Another factor is that the person running the benchmarks may not
know all the Lisps equally well, just as someone running C benchmarks
may not know that "-O6" is the optimization setting to use for
compiler X.

>Maybe the problem is that it is simply very hard to implement all of
>Common Lisp in a maximally efficient manner.

And the ways to get the maximally efficient stuff vary.

BTW, it took a while for C to get as efficient as it now usually(?)
is. And where would many people be without gcc? Unfortunately,
the Lisp equivalent of gcc hasn't yet come along.

-- jeff

Scott Schwartz

Sep 13, 1994, 6:33:51 PM
je...@aiai.ed.ac.uk (Jeff Dalton) writes:
There was a Prolog a while back that did better than C for some
numeric stuff. Did this win wupporters? Nooooooooooo...

It's a necessary but not sufficient condition. Sather and C are both in
the algol vein; crossover is easy. Prolog is utterly different than C;
performance is almost the least of one's worries.

Erik Naggum

Sep 13, 1994, 8:02:28 PM
[Jacob Seligmann]

| [As you can see below, the C implementation uses a lot of tricks I did
| not care or was unable to use in the BETA port: in-lining (#define's),
| hard-wired variable liveliness information (loads of blocks with local
| variables, making the code very hard to read), code generation hints
| (register ...), array pointers (*d++ = *s++), unstructured gotos, etc.]

this is not benchmarking execution speeds of C or LISP or anything. this
is benchmarking a programmer's willingness to write assembly language and
to see how well his assembly language coding style fits various languages.
I'll bet gcc -O6 does a better job on reasonably-written C than it does on
this spaghetti code. I mean, writing your own version of memcpy because
you think memcpy won't be fast enough is rather pathetic when there are
CPU's out that have block move instruction idioms and good compilers that
open-code them when calls to memcpy are requested.

to paraphrase Richard Stallman from a response to a bug report for Emacs
with a horribly convoluted fix:

Unfortunately, the code you wrote looks like the intermediate stage of
scheme compilation, and it's too hard for me to read. I can't even
figure out what [it] does.

and what's this about a semicolon _after_ the closing brace in blocks?
such things worry me when I read other people's C code, because it tells me
that they don't really program in C, but translate from something else into
C. that's the feeling I got from the LISP code, too. is this really a
good way to compare languages? I think not.

are there any good ways to compare _languages_? I think programmer time
spent and number of compilation runs required to solve a particular problem
is a better indication. then, later, we can benchmark the compilers.

I don't use LISP because of its outstanding speed, anyway. I use it
because _I_ am faster in LISP than in anything else I know well enough to
compare to. two days of my time translates to the cost of a CPU upgrade to
my clients. so what if it runs five times slower and takes 100 instead of
20 milliseconds? with the upgraded CPU it'll only take 3 times more time,
and if I need to come back to enhance the functionality, they can buy an
even _faster_ CPU. we're not talking about more than constant factors in
difference between these benchmarks here, right?

BETA and LISP probably have a lot to tell us about optimal program design
in their respective mind-sets and a comparison of their relative strengths
and idiomatic expressiveness would be instructive. onwards, folks!

#<Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.

Jacob Seligmann

Sep 14, 1994, 4:35:24 AM
Thus spake Erik Naggum <er...@naggum.no>:

> > [Description of C spaghetti code deleted]


>
> this is not benchmarking execution speeds of C or LISP or anything. this
> is benchmarking a programmer's willingness to write assembly language and
> to see how well his assembly language coding style fits various languages.

I couldn't agree more! I was simply answering a post which could be
misinterpreted as saying that BETA is inherently 25 times slower than C.
This did not reflect my personal impression, so I retrieved the C
benchmark program from the original poster, did a simple rewrite in
BETA, gained a factor of 5, and posted this result along with the programs
used for you all to verify.

That is, I did not write the spaghetti code in the first place, nor
do I feel this is the way to program in BETA, or C, or LISP, or
whatever.

> are there any good ways to compare _languages_? I think programmer time
> spent and number of compilation runs required to solve a particular problem
> is a better indication. then, later, we can benchmark the compilers.

Again, I agree, as long as your programs are not orders of magnitude
slower than what is achievable, and as long as there are no inherent
barriers in the language design to ever achieving better performance.

> BETA and LISP probably have a lot to tell us about optimal program design
> in their respective mind-sets and a comparison of their relative strengths
> and idiomatic expressiveness would be instructive. onwards, folks!

Actually I was quite surprised to see a BETA-LISP comparison thread in
the first place. Sure, BETA strongly supports a functional programming
paradigm, but it is still an object-oriented language at heart - it
never ceases to amaze me just how versatile and expressive the BETA
pattern concept is! Keep the comparisons coming.

Cheers,

Jeff Dalton

unread,
Sep 14, 1994, 12:29:14 PM9/14/94
to
In article <LGM.94Se...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

>I have attached the again-modified version of the benchmark.

> (dotimes (i n)
>   (declare (fixnum i))
>   (setf (svref perm1 i) i))

I should perhaps point out again that that is not always enough
to get a fully fixnum loop. I normally use macros like these:

;;; Fixnum iterations

(defmacro fix-dotimes ((var count &optional (result nil))
                       &body body)
  (let ((count-var (gensym)))
    `(let ((,count-var ,count))
       (declare (fixnum ,count-var))
       (do ((,var 0 (fix+ ,var 1)))
           ((fix>= ,var ,count-var)
            ,result)
         (declare (fixnum ,var))
         ,@body))))

(defmacro do-vector-indices ((var vec &optional (result nil))
                             &body body)
  `(fix-dotimes (,var (length (the vector ,vec))
                 ,@(if result (list result)))
     ,@body))

fix+, fix>=, etc. are defined like this:

(defmacro fix+ (i j)
  `(the fixnum (+ (the fixnum ,i) (the fixnum ,j))))

(defmacro fix>= (i j)
  `(>= (the fixnum ,i) (the fixnum ,j)))

The root problem seems to be that some Lisps worry that 1 + i may
not be a fixnum even though i is. I don't think you can always
win even by saying (dotimes (i (the fixnum n)) ...).

It's very easy to see what will happen in KCL, BTW. Define
the fns and call disassemble. The C code it produces is sufficiently
readable, because the parts that correspond to what you'd write in C
look like it.
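
[An illustrative sketch, not from the original post: how the macros above
might be exercised on the benchmark-style loop. The function name and the
PERM1 argument are made up for illustration; as described above, calling
disassemble on the definition in KCL displays the C the compiler generates.]

;; Illustrative only.  DO-VECTOR-INDICES expands into FIX-DOTIMES, so the
;; index arithmetic stays declared fixnum throughout.
(defun fill-perm (perm1)
  (declare (simple-vector perm1))
  (do-vector-indices (i perm1 perm1)
    (setf (svref perm1 i) i)))

(disassemble 'fill-perm)   ; in KCL this prints the generated C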

-- jeff

Jeff Dalton

unread,
Sep 15, 1994, 12:14:46 PM9/15/94
to
In article <SCHWARTZ.94...@roke.cse.psu.edu> schw...@roke.cse.psu.edu (Scott Schwartz) writes:
>je...@aiai.ed.ac.uk (Jeff Dalton) writes:
> There was a Prolog a while back that did better than C for some
> numeric stuff. Did this win supporters? Nooooooooooo...
>
>It's a necessary but not sufficient condition.

I'd say it's neither.

Christian Lynbech

unread,
Sep 15, 1994, 4:36:04 PM9/15/94
to
>>>>> "Jeff" == Jeff Dalton <je...@aiai.ed.ac.uk> writes:

Jeff> In article <SCHWARTZ.94...@roke.cse.psu.edu>
Jeff> schw...@roke.cse.psu.edu (Scott Schwartz) writes:
>> je...@aiai.ed.ac.uk (Jeff Dalton) writes: There was a Prolog a while
>> back that did better than C for some numeric stuff. Did this win
>> supporters? Nooooooooooo...
>>
>> It's a necessary but not sufficient condition.

Jeff> I'd say it's neither.

Are we about to open the `C vs. Lisp' thread again, with BETA on the
side?

(for those puzzled: this has been a raging debate for the last couple
of months in comp.lang.lisp)


------------------------------------------------------------------------------
Christian Lynbech | Hit the philistines three times over the
office: R0.33 (phone: 3217) | head with the Elisp reference manual.
email: lyn...@daimi.aau.dk | - pet...@hal.com (Michael A. Petonic)
------------------------------------------------------------------------------

Matthew McDonald

unread,
Sep 16, 1994, 1:30:30 AM9/16/94
to

I know this is about beta rather than lisp, but what Jacob is
saying about beta sounds a lot like what many people have been saying
about lisp.

jac...@daimi.aau.dk (Jacob Seligmann) writes:
[...]


Here's my results (486/66; BETA compiler v5.0(2); gcc v2.5.8):

gcc -O6 -fomit-frame-pointer 2.08
BETA -no-range-check 11.00

[Actually, the compiler currently does not have a -no-range-check option.
The BETA figure was obtained by removing all boundl-instructions from
the code by hand, and reassembling; this is the only way I could
achieve a fair comparison.]

As you can see, the ratio between state-of-the-art optimized C and BETA
was 5.3, not 26.4 as above.

[...]


With a little bit of effort, there is absolutely no reason why the BETA
compiler should not be able to perform the variable liveness analysis
itself (although currently it does not, and therefore pays the heavy
price of using heap space instead of registers). Also, the linear array
sweeps are simple enough that the compiler could recognize them and
avoid the index calculations (again, it currently does not, and
therefore pays the price at each lookup).

"Some integer array hacking"-type programs are *exactly* what highly
sophisticated C compilers excel at, but there are no intrinsic reasons
why the BETA compiler should not be able to produce comparable code.
We're constantly working on it...

What Jacob's saying is
(a) Typical code written in c performs more than 5 times
better than code in his favourite language using available
implementations, and
(b) there's no reason why his favourite language couldn't be
implemented so it was competitive.

What lisp (and beta) advocates seem to often ignore is that quality of
code generation really matters to most people.

Telling people that a factor of 5 difference in run-time doesn't
really matter doesn't encourage them to use your language. Neither
does telling them that *in principle* or *some day in the future*,
your language could be competitive.

Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
PA-RISC, and RS6k, few people are going to use a language. Not many
people are going to bother explaining that performance matters to
them, they're just going to ignore you when you try to tell them
otherwise.

Which is a pity, because competitive compilers for sane languages like
beta and lisp are obviously feasible. Paul Wilson was proposing a
compiler for scheme+objects that would compete with C, CMU CL was
great (although it now seems to be largely unsupported) and the ETH
Oberon compilers are also wonderful (although the systems they're in
don't co-operate with the rest of the universe.)

At least Jacob's actually working on improving the beta
implementation. As far as I can tell, the usual lisp advocate response
to performance complaints is to either:
(a) deny there's a problem,
(b) say one day there won't be a problem, or
(c) suggest you write code that looks like FORTRAN and
manually weigh expression trees and other insanity.

Perhaps Gwydion or Paul Wilson's scheme+objects compiler will save the
world.

--
Matthew McDonald ma...@cs.uwa.edu.au
Nim's longest recorded utterance was the sixteen-sign declarative
pronouncement, "Give orange me give eat orange me eat orange give me
eat orange give me you."

Scott McLoughlin

unread,
Sep 15, 1994, 9:25:11 PM9/15/94
to
lyn...@xenon.daimi.aau.dk (Christian Lynbech) writes:

> Jeff> I'd say it's neither.
>
> Are we about to open the `C vs. Lisp' thread again, with BETA on the
> side?
>
> (for those puzzled: this has been a raging debate for the last couple
> of months in comp.lang.lisp)

Howdy,
Sure, let's open it up again ;-) But no, really -- skip all this
talk of "realtime" this and that and concerns about competing with
Fortran on floating point. I'm still _VERY_ curious (concerned?)
about why Lisp isn't more popular in "the trenches". Go look at
Borland C++ compiler's output in large model (typical Windows app) -
not too slick. Now go run a typical Windows app (or Unix workstation
app for that matter). It pages like all get out when you open a
window or switch to another task. Now go install a Windows
_personal utility app_ -- 40 meg disk footprint or more. We're
not talking big DB servers, just a nice word processor or
spreadsheet.
So why don't folks use Lisp to write this stuff? Blazing
speed, space, etc. ain't that critical. What gives?

Naresh Sharma

unread,
Sep 16, 1994, 5:04:42 AM9/16/94
to
Matthew McDonald (ma...@cs.uwa.edu.au) wrote:

: What Jacob's saying is

: (a) Typical code written in c performs more than 5 times

^^^^^^^^^^^^^^^^^^^^^^^^^
A small correction: It should be the-state-of-the-art spaghetti code in c

: better than code in his favourite language using available
: implementations, and

Naresh
--
_______________________________________________________________________________
Naresh Sharma [N.Sh...@LR.TUDelft.NL] Herenpad 28 __|__
Faculty of Aerospace Engineering 2628 AG Delft \_______(_)_______/
T U Delft Optimists designed the aeroplane, ! ! !
Ph(Work) (+31)15-783992 pessimists designed the parachute!
Ph(Home) (+31)15-569636 Plan:Design Airplanes on Linux the best OS on Earth!
------------------------------PGP-KEY-AVAILABLE--------------------------------

Stefan Monnier

unread,
Sep 16, 1994, 10:15:25 AM9/16/94
to
In article <MAFM.94Se...@wambenger.cs.uwa.edu.au>,

Matthew McDonald <ma...@cs.uwa.edu.au> wrote:
> Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
> PA-RISC, and RS6k, few people are going to use a language. Not many
> people are going to bother explaining that performance matters to
> them, they're just going to ignore you when you try to tell them
> otherwise.

So true!
The best example is probably Visual Basic, right?


Stefan

William D. Gooch

unread,
Sep 16, 1994, 10:03:35 AM9/16/94
to
On 16 Sep 1994, Matthew McDonald wrote:

> (a) Typical code written in c performs more than 5 times
> better than code in his favourite language using available
> implementations, and

This is exactly the sort of unsupportable generalization I was afraid
would result from the net publication of this so-called "benchmark."
(Not a reference to Jacob's post, but to the original.) Differences in
performance do not boil down to single numbers, nor any kind of simple
comparison for that matter.

>.... Telling people that a factor of 5 difference in run-time doesn't
> really matter doesn't encourage them to use your language.

Most of the time, a factor of five is negligible. In critical parts of
the code, however, a factor of 1.05 may be important.

> .... As far as I can tell, the usual lisp advocate response
> to performance complaints is to either:
> (a) deny there's a problem,
> (b) say one day there won't be a problem, or
> (c) suggest you write code that looks like FORTRAN and
> manually weigh expression trees and other insanity.

This is a gross oversimplification. As a "lisp advocate," I consistently
maintain that if and when there is a performance problem, it can be dealt
with in the same way one deals with performance problems in any language.
There is not a generalized performance problem inherent in lisp.

Lawrence G. Mayka

unread,
Sep 16, 1994, 6:50:43 PM9/16/94
to
In article <Cw4oG...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:

In article <LGM.94Se...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

>I have attached the again-modified version of the benchmark.

> (dotimes (i n)
>   (declare (fixnum i))
>   (setf (svref perm1 i) i))

I should perhaps point out again that that is not always enough
to get a fully fixnum loop. I normally use macros like these:

Yes, I added sufficient declarations for LispWorks, but not
necessarily enough for other CL implementations.

The root problem seems to be that some Lisps worry that 1 + i may
not be a fixnum even though i is. I don't think you can always
win even by saying (dotimes (i (the fixnum n)) ...).

Yes. Even if i is declared to be FIXNUM, its value might be
MOST-POSITIVE-FIXNUM or MOST-NEGATIVE-FIXNUM, in which cases (1+ i) or
(1- i) will overflow into a bignum. Theoretically, a type declaration
of

(declare (type (integer -30000 30000) i))

should suffice to ensure that (1+ i) and (1- i) generate fixnum
arithmetic; but implementations may vary, of course.
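
[An illustrative sketch, not from the original post: one way such a bounded
declaration might be applied to the benchmark loop. The function name and
the bound of 30000 are made up; any bound comfortably inside fixnum range
would do.]

;; With both N and I bounded, a compiler can prove that (1+ I) never leaves
;; the declared range and so can emit fixnum arithmetic throughout.
(defun init-perm (perm1 n)
  (declare (simple-vector perm1)
           (type (integer 0 30000) n))
  (dotimes (i n perm1)
    (declare (type (integer 0 30000) i))
    (setf (svref perm1 i) i)))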

Scott McLoughlin

unread,
Sep 16, 1994, 6:37:15 PM9/16/94
to
schu...@ricotta.cs.wisc.edu (Lee Schumacher) writes:

> Of course from the faculty's point of view, they don't want to teach
> lisp (that's manual labor, to their eyes), they want to teach AI. The
> end result is that lisp gets shorted in school, so when the graduates
> of this program get out into the real world lisp is never seriously
> considered for any job at hand - it's too esoteric, too academic, too
> damn *hard*...
>
> sadly,
> Lee ...
>

Howdy,
Ok - to sum up as best I can your reply (thanks): Lisp is
not used "in the trenches" (not particularly time/space intensive apps -
about 95% of the code hacked out there) because Lisp is not taught well
to CS types in college. It is therefore "too hard" a language to use.
I'm not sure that I agree.

1. Scheme seems to be an increasingly popular intro CS language. ML
also seems popular (maybe this is irrelevant, but I think of ML as
"same family" as Lisp/Scheme).

2. C and especially C++ are _very_ hard languages to learn. I've lost
count of the times I've explained pointers/arrays to budding C/C++
programmers. Whatever one thinks of coding style and/or what a compiler
_should_ do, all the C code I've looked at or hacked is full of
while (*p++) if (*p == MG_COOKIE) foobar((CAST_TYPE)p + offset);
etc. Understanding this type of code (bouncing back and forth between
application level and machine level semantics) is _REQUIRED_ of C/C++
programmers in the trenches.

3. 4GL's, various BASIC dialects, Turbo Pascal and the now ubiquitous
giant Windows API are, in my experience, _NOT_ taught in colleges (not
widely anyway). Programmers in the trenches learn this stuff from
_BOOKS_ and/or from training sessions and/or "on the job". Visual Basic
is a pretty giant beast (not your old Apple BASIC) with all kinds of
weird special cases and options. Variable scoping is downright weird
(Form level variables), etc. But this is tossed at folks with a 2
year community college degree. Most folks program with one eye on
the manual or online help.

4. Given (2) and (3) above, there are _lots_ of nice Lisp books.
Furthermore, Lisp and Scheme are much cleaner regarding heap objects
(vectors, strings, cons cells). Why aren't folks hacking Lisp with
one eye on Wilensky or Winston/Horn or Simply Scheme????

Here is an initial conjecture:

1. Cost of commercial implementations. Plain and simple. Look at
PowerBuilder and Gupta SQL Windows. Obtaining market share requires
a low cost implementation. (Personal Eiffel, also - $49) Free is
generally no good "in the trenches".

2. Nasty file/io. Folks want/need to write files of structured
data types and read them back in without thinking too hard about
it, e.g. Pascal's FILE OF RECTYPE or C's fwrite(&s, sizeof(RECTYP), 1, f).
Lots of simple file/io in the trenches.

3. Nasty looping constructs. Hard to hack DO looking at the
manual. LABELS and LETREC are elegant, but complicated. DOTIMES and
DOLIST are a step in the right direction. There should have been
a CL standard WHILE/UNTIL for "bootstrapping" nincompoops (sp?) --
see the sketch after this list for how little that would have taken.
LOOP is newer and not so widely used and/or written about. Otherwise
LOOP is fine. Of _course_ Lisp has superior looping/iteration
constructs, but it takes newcomers a few months to realize this.

4. Doesn't fit in well with popular version control products. Even
small, unsophisticated shops will use PVCS or a similar product.
Edit/Compile/Link/Debug works well with these products. This is
not a killer though -- only "sophisticated" trench shops rely
on VC.

5. Money (BCD) and Date types. Lots of trenches programming is about
dates and dollars. VB has a gadzillion date and money formatting
operations/options. Of course, C and Pascal don't have these, so these
are not necessarily "killers". C/Pascal "trenches" programming tends to
be full of rounding errors, etc. because of the absence of these types.
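
[Referring back to point 3 -- an illustrative sketch, not from the original
post, of the WHILE wished for above; it is a two-line macro in portable
Common Lisp:]

;; WHILE: keep evaluating BODY as long as TEST is true.
(defmacro while (test &body body)
  `(do ()
       ((not ,test))
     ,@body))

;; Example use:
;;   (let ((i 0))
;;     (while (< i 10)
;;       (print i)
;;       (incf i)))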

Anyway, these are probably "sufficient" conditions to keep Lisp from
being "popular" in every 1-4 dude(tte) project that goes on across
the U.S. in suburban green wall-to-wall carpet corporate and govt.
offices. It's too bad. These folks could _really use_ a Lisp.
More of these projects might succeed/survive.

Jacob Seligmann

unread,
Sep 16, 1994, 7:01:19 AM9/16/94
to
Matthew McDonald (ma...@cs.uwa.edu.au) wrote:

> What Jacob's saying is
> (a) Typical code written in c performs more than 5 times
> better than code in his favourite language using available
> implementations, and
> (b) there's no reason why his favourite language couldn't be
> implemented so it was competitive.

Again, I was merely replying to an earlier post which was
misinterpreted as saying that BETA was *inherently* 25 times or more
slower than C. I did so by using the original C program containing
tight loops with lots of pointer arithmetic to write an equivalent BETA
program which was "only" 5 times slower (thereby trying to show that
the factor of 25 was much too pessimistic), and finally explained the
difference in the code produced (thereby trying to show that the
slowdown was not a product of the language design, only its current
implementation).

> What lisp (and beta) advocates seem to often ignore is that quality of
> code generation really matters to most people.
>
> Telling people that a factor of 5 difference in run-time doesn't
> really matter doesn't encourage them to use your language. Neither
> does telling them that *in principle* or *some day in the future*,
> your language could be competitive.

We at Mjolner Informatics take the quality of the produced code
*extremely* seriously. There is no reason why we should not be able to
produce code comparable to C for the imperative portions of the
language, and we're constantly working on it.

Meanwhile, it is our practical experience that for real applications
(not Dhrystone-type benchmarks) the quality of the code currently
produced is more than acceptable. Also, BETA provides an easy interface
to C; if 5% of your code is time-critical, you can therefore still
write it in your favorite procedural, optimized language, and use
BETA's modelling features and extensive libraries for the remaining 95%.

Cheers,

Peter da Silva

unread,
Sep 16, 1994, 11:59:32 PM9/16/94
to
In article <35c99t$l...@info.epfl.ch>,

Stefan Monnier <mon...@di.epfl.ch> wrote:
>In article <MAFM.94Se...@wambenger.cs.uwa.edu.au>,
>Matthew McDonald <ma...@cs.uwa.edu.au> wrote:
>> Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
>> PA-RISC, and RS6k, few people are going to use a language. [...]

>The best example is probably Visual Basic, right ?

Well, unless you're Microsoft.

Where does a 500 pound gorilla sleep?
--
Har du kramat din varg idag? (Have you hugged your wolf today?)

Lee Schumacher

unread,
Sep 16, 1994, 3:07:23 PM9/16/94
to
In article <os2Psc...@sytex.com> sm...@sytex.com (Scott McLoughlin) writes:

>Howdy,
> Sure let's open it up again ;-) But no really -- skip all this
>talk of "realtime" this and that and concerns about competing with
>Fortran on floating point. I'm still _VERY_ curious (concerned?)
>about why Lisp isn't more popular in "the trenches". Go look at
>Borland C++ compiler's output in large model (typical Windows app) -
>not too slick. Now go run a typical Windows app (or Unix workstation
>app for that matter). It pages like all get out when you open a
>windows or switch to another task. Now go install a Windows
>_personal utility app_ -- 40 meg disk footprint or more. We're
>not talking big DB servers, just a nice word processor or
>spreadsheet.
> So why don't folks use Lisp to write this stuff? Blazing
>speed,space,etc. aint that critical. What gives?
>
>=============================================
>Scott McLoughlin
>Conscious Computing
>=============================================

Well, I've been occupying the training ground for the trenches for the
past couple o' weeks and I now have a very good idea why those who
graduate to the trenches don't use lisp - poor education. Here at the
UW comp sci dept ones introduction to lisp consists of a couple of
lectures in the AI course + one chapter in the text. Assignment 1
follows immediately: write 3 simple functions in lisp (set union,
intersection, difference), and then write a substantial program to
drive the "Agent World" simulator. Now, there's nothing wrong with
agent world (at first glance, anyway), but the students have little
experience in using an interpreted system, or any grasp of the
underlying lisp paradigms, so naturally they're frustrated. This
frustration is taken out on lisp - they don't understand it, therefore
it must be hard.

John Doner

unread,
Sep 16, 1994, 8:15:37 PM9/16/94
to
In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
>I'm still _VERY_ curious (concerned?)
>about why Lisp isn't more popular in "the trenches".
...

> So why don't folks use Lisp to write this stuff? Blazing
>speed,space,etc. aint that critical. What gives?

I posed a similar question to the comp.lang.dylan newsgroup a couple of
months ago. I received many interesting replies which I haven't yet
fully digested. Reasons having to do with availability, popularity,
vendors' support, what they teach in school, etc., are common, but
really beg the question because they don't explain why C got there in
the first place. None of them can explain why C won out over Pascal,
for example. My latest theory is that the answer lies in cognitive
effects arising from the conception and structure of the language.
People make up mental models of how things work, and interpret the
programs they write in terms of those models. For experienced
programmers, compiler writers perhaps, these models are complete and
accurate, closely corresponding to the objective reality. Novice
programmers have poor models that are incomplete, poorly related to the
actual computing machines, and perhaps even inconsistent. The
intellectual effort required to develop a good model for Lisp or Ada is
much greater than that required to develop one for C. There are more
abstractions involved. Thus, C is more easily comprehended by
inexperienced programmers.

I invite criticism of this theory.

John Doner

Marty Hall

unread,
Sep 17, 1994, 11:27:01 PM9/17/94
to
In article <sooRsc...@sytex.com> sm...@sytex.com (Scott McLoughlin) writes:
>schu...@ricotta.cs.wisc.edu (Lee Schumacher) writes:
>
>> Of course from the faculty's point of view, they don't want to teach
>> lisp (that's manual labor, to their eyes), they want to teach AI. The
>> end result is that lisp gets shorted in school, so when the graduates
>> of this program get out into the real world lisp is never seriously
>> considered for any job at hand - it's too esoteric, too academic, too
>> damn *hard*...
[...]

> Ok - to sum up as best I can your reply (thanks): Lisp is
>not used "in the trenches" (not particularly time/space intensive apps -
>about 95% of the code hacked out there) because Lisp is not taught well
>to CS types in college. It is therefore "too hard" a language to use.
> I'm not sure that I agree.

I don't think this is the only reason, but I do agree that one problem
is Lisp education. My experience (doing AI and working with AI/Lisp
programmers in industry for 8 years and teaching AI and Lisp
programming for 6 years) has been that Lisp is frequently taught only
to illustrate AI concepts. In fact, I am guilty of that myself in
my Intro AI course to part-time MS students I teach. I just want to
give enough for them to try out things they've been introduced
to. There really isn't much time to point out many issues, so I have
to save that for my AI Programming course. This problem is all the
worse for the faculty without backgrounds in "serious" Lisp
programming and who don't have the desire I do to promote Lisp.

I've seen very, very, very few people who had a Lisp course that even
talked about efficiency issues, was careful to discuss the costs of
using linked lists, talked about GC issues and boxed data types,
and so forth. People aren't [usually] taught Lisp; they're taught AI
with a little Lisp thrown in along the way.

>1. Scheme seems to be an increasingly popular intro CS language. [...]

I hope you are correct wrt the "increasingly" part. It is certainly
true at some enlightened institutions. But my experience with people
doing Lisp work in industry has brought me in contact with very, very,
few people with this type of introduction. Other people's experiences
may vary, of course.
- Marty
(proclaim '(inline skates))

Arun Welch

unread,
Sep 17, 1994, 8:57:07 PM9/17/94
to
In article <sooRsc...@sytex.com> sm...@sytex.com (Scott McLoughlin) writes:

Why aren't folks hacking Lisp with
one eye on Wilensky or Winston/Horn or Simply Scheme????

Here is an initial conjecture:

1. Cost of commercial implementations. Plain and simple. Look at
PowerBuilder and Gupta SQL Windows. Obtaining market share requires
a low cost implementation.

It's interesting that you bring up PowerBuilder, as it raises some
issues important to the Lisp community. There really isn't a low-cost
implementation; the Desktop version is severely crippled for
developing code. So, we've got a product that is:
a) expensive, around $4K for a reasonably set-up system
b) slow. PB is an order of magnitude slower than Medley on the same
hardware, and Medley was never considered a speed demon in the Lisp world.
c) buggy. I get at least a GPF a day, frequently more.
d) poorly documented. Order of instructions can have interesting
effects, none of which are documented. There is 1 book on PB on the
market, compared to a couple dozen on Lisp.
e) support is *expensive*. For $5000 you get to call an 800 number
(where I'll admit they're generally helpful), but no updates. I
hereby publicly apologize to Xerox, Venue, Franz, and Lucid for commenting
to their sales reps that their support costs were ludicrous. No
site licenses either.
f) proprietary.
g) training is expensive, only available from PowerSoft or their
authorised representatives.
h) non-portable, though this will change in version 4.0.
i) single inheritance (actually, I could probably go to z comparing
the features that CLOS has over the attempt at OOP in PB)
j) closed. No access to the object system.
k) poor looping constructs, and no recursion. As languages go,
PowerScript is *very* primitive.

I could go on, but you get the idea. On the other hand, PowerBuilder
is *incredibly* successful. Any Lisp vendor (heck *any* language
vendor) would kill for that kind of growth rate. Why is it so popular?
I don't really know, but I guess it's because it takes an incredibly
grueling task (grovelling in a database) and makes it easier. The
difference between doing DB stuff by hand and using PowerBuilder is so
incredibly large that people are willing to put up with a lot of
grief. Also, they don't really have much competition. In every single
study I've seen between PB and the other 2 or 3 products that do
similar things PB comes out on top. Compare this to how many different
Lisp vendors, all of whom have products that stand out in some way
over their competition.

It's a weird world we live in.

...arun
--
---------------------------------------------------------------------------
Arun Welch 2455 Northstar Rd
Network Engineer Columbus, OH 43221
OARnet we...@oar.net

Lawrence G. Mayka

unread,
Sep 18, 1994, 4:46:03 PM9/18/94
to
In article <MAFM.94Se...@wambenger.cs.uwa.edu.au> ma...@cs.uwa.edu.au (Matthew McDonald) writes:

I know this is about beta rather than lisp, but what Jacob is
saying about beta sounds a lot like what many people have been saying
about lisp.

...
What Jacob's saying is
(a) Typical code written in c performs more than 5 times
better than code in his favourite language using available
implementations, and

For Common Lisp (LispWorks and CMU, at least), the factor was 2, not
5. Moreover, I certainly disagree that this small numerical-analysis
benchmark is a typical C program. It's certainly not typical of, say,
telecommunications switching systems. Nevertheless, I agree that the
potential difference in efficiency of generated code between CL and
(the best) C compilers remains an obstacle to broadening CL's market.

Telling people that a factor of 5 difference in run-time doesn't
really matter doesn't encourage them to use your language. Neither
does telling them that *in principle* or *some day in the future*,
your language could be competitive.

I agree that "in principle" is a weak argument; "can be fixed by the
vendor within n months/years" is a somewhat better argument (if
true!).

Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
PA-RISC, and RS6k, few people are going to use a language. Not many

Common Lisp has all these.

At least Jacob's actually working on improving the beta
implementation. As far as I can tell, the usual lisp advocate response
to performance complaints is to either:
(a) deny there's a problem,

I don't deny the existence of a problem.

(b) say one day there won't be a problem, or

I think a concerted effort by vendors could substantially solve the
problem--i.e., make the best Common Lisp compilers generate code
substantially competitive with the code generated by the best C
compilers--within a reasonably short time period (perhaps 1-2 years).
Presumably, this will occur only if/when CL users raise its priority
above that of other new features and quality improvements.

(c) suggest you write code that looks like FORTRAN and
manually weigh expression trees and other insanity.

Ideally, code should "look like" the abstract solution to the problem
at hand. If the problem is numerical analysis, then shuffling vector
elements is precisely the =correct= abstraction of the
solution--mathematicians have no higher-level abstraction than that.
If you want to say that the ideal compiler would deduce all datatypes
so that explicit type declarations would be unnecessary, I agree; but
I don't see how you can hold this lack of type inferencing against
Common Lisp when C does no such thing either.

Bob Hutchison

unread,
Sep 19, 1994, 11:44:40 AM9/19/94
to
In <35dcf9$j...@news.aero.org>, do...@aero.org (John Doner) writes:
>In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
>>I'm still _VERY_ curious (concerned?)
>>about why Lisp isn't more popular in "the trenches".
>....

>> So why don't folks use Lisp to write this stuff? Blazing
>>speed,space,etc. aint that critical. What gives?
>
>I posed a similar question to the comp.lang.dylan newsgroup a couple of
>months ago. I received many interesting replies which I haven't yet
>fully digested. Reasons having to do with availability, popularity,
>vendors' support, what they teach in school, etc., are common, but
>really beg the question because they don't explain why C got there in
>the first place. None of them can explain why C won out over Pascal,
>for example.

There is a perspective on this that I suspect you would have had to have
worked through the late '70s and early '80s on microcomputers to see.
Micros were not fast then and memory was expensive. There were a
few languages 'fighting' it out for system and application development
at the time: Basic (the leader), C, and Pascal. At the time programmers
were asking why Basic was being used so widely. There was incredible
pressure to develop software using Basic. Basic handled memory for you,
automatically, and for many situations not well enough. Basic was also
slow. If you needed something faster you went to C or Pascal. At the
time C was available for free (thanks to UNIX and, I think, Dr. Dobb's); Pascal
was not (it was expensive). Pascal was also usually p-code based and
ran in proprietary environments. Pascal also fell down badly when you
needed control of your memory. In other words, the available Pascal
*implementations* were not different enough from Basic in a few key
aspects. C was used. (Let me point out one other belief that will perhaps
further illustrate the world at that time: it was the official position of
a very large multi-national company that compilers were unreliable and
produced poor code, and that this was *inherent* in the technology).

Briefly, C filled a gaping hole in microcomputer languages. So C was used
on micros and was widely used in universities due to UNIX.

(The current arguments for lisp/smalltalk/etc are an interesting
reversal of the situation I've described as having existed way back
then).

> My latest theory is that the answer lies in cognitive
>effects arising from the conception and structure of the language.
>People make up mental models of how things work, and interpret the
>programs they write in terms of those models. For experienced
>programmers, compiler writers perhaps, these models are complete and
>accurate, closely corresponding to the objective reality. Novice
>programmers have poor models that are incomplete, poorly related to the
>actual computing machines, and perhaps even inconsistent. The
>intellectual effort required to develop a good model for Lisp or Ada is
>much greater than that required to develop one for C. There are more
>abstractions involved. Thus, C is more easily comprehended by
>inexperienced programmers.

Interesting theory here, but I don't think I agree. In my experience, novice
programmers are so caught up in the details that they cannot make
useful abstractions at all. This is the kind of thing you would expect
of anyone learning something (e.g. many sports, especially team sports).
I think that you are right that C is easier for them to comprehend, but
I don't think it is because it has easier abstractions. I think it is because
they can use the computer hardware itself as C's abstraction, that is,
use a concrete thing as an abstraction -- what an illusion :-) I guess
that I think you are basically right that the novice has a difficult time
forming a useful understanding of how the language works, but I think
this is difficult for any language. I think that in C's case the novice can
cheat.

My real disagreement with your theory is that I don't think it is sufficient
to explain why C is actually used. Why does an experienced programmer
use it? In my case it is because of the history I described above. C has
also been able to keep up with changes; it is still a useful tool and so
it does not force me to look to something else. These are the reasons
for many of my contemporaries too (the ones that didn't live in a university
for ten years anyway :-). I've been a senior member of a team, the
manager of a team, then manager of a bunch of teams, a technical
director of a whole bunch of teams. Does the fact that I use C/C++
have something to do with my teams using it? I don't take my influence,
in itself, as being overly convincing to anyone, but when so many of my
contemporaries are the same?

How do you get an experienced programmer to consider something other
than C? I've found it tremendously effective to (persistently) encourage
programmers to try something serious in smalltalk. So far I've had a
100% success rate in getting them to realise the limitations of C/C++.
Smalltalk isn't magic, but it has a really nice bunch of tools that go
with it. The lisp environments would work just as well I think, but until
recently they were too expensive. (These guys all bought smalltalk
with their own money -- I can be very persistent :-).

Once you get the more senior guys thinking about alternatives you then
have to ask why they don't do something about it. This is a really
interesting situation. Ultimately it seems to come down to the body of
existing code and a bit of nagging uncertainty. The nagging uncertainty
is the more significant of the two in most cases, it seems.

In short, I don't think there are any technical reasons why C/C++ is the
most commonly used programming language, nor do I think that C/C++
as a language differs sufficiently from other languages to make a
difference. I think the reasons are (in order) historical, fear/doubt
(i.e. risk), and existing bodies of working code.

Even shorter, C/C++ is being used for the same reasons COBOL is
still used. (an awfully long post to come to that conclusion, sorry :-)

--
Bob Hutchison, hu...@RedRock.com
RedRock, 135 Evans Avenue, Toronto, Ontario, Canada M6S 3V9
(416) 760-0565

Mark S. Riggle

unread,
Sep 19, 1994, 2:38:25 PM9/19/94
to

In article <35dcf9$j...@news.aero.org>, do...@aero.org (John Doner) writes:
|> The
|> intellectual effort required to develop a good model for Lisp or Ada is
|> much greater than that required to develop one for C. There are more
|> abstractions involved.
|>
|> I invite criticism of this theory.
|>


It's the SLSM theory again (my favorite). That is 'Simple Languages
for Simple Minds'.

Although you can argue that C is not a simple language since it
is so hard to get things right.

--
=========================================================
Mark Riggle | "Give me LAMBDA or
sas...@unx.sas.com | give me death"
SAS Institute Inc., |
SAS Campus Drive, Cary, NC, 27513 |
(919) 677-8000 |


Scott Schwartz

unread,
Sep 19, 1994, 4:11:37 PM9/19/94
to
sas...@zinfande.unx.sas.com (Mark S. Riggle) writes:
It's the SLSM theory again (my favorite). That is 'Simple Languages
for Simple Minds'.

That's not it at all. People who like C like it because It Gets The Job
Done. For practically any task, you can hand someone a C compiler, and
they can feel assured that it will efficiently do what they want. All
the weirdness and complexity of the language just feeds the user's
feeling of controlling a real power tool.

To give a second example, Larry Wall spent lots of time advocating Perl
in the unix newsgroups. His sales pitch was always the same: perl is
easier and faster. Easier means shorter, more obviously correct
programs. Faster means "runs faster". That's why perl is so popular
today.

Adrian L. Flanagan

unread,
Sep 19, 1994, 6:13:25 PM9/19/94