
Comparison: Beta - Lisp


Bruno Haible

Sep 8, 1994, 9:15:58 AM
Since a new language called BETA, which has some of the features Lisp is
proud of, is gaining attention, I attempt a comparison between Beta and
Lisp. Since there is currently only one implementation of Beta, the
Mjolner system, I also add a rough evaluation of it.


Comparison Lisp <--> Beta

A. The BETA language

1. Has an incredible symmetry between data and executable program code.
Much better than Lisp. Syntax is such that execution goes strictly
from left to right, except for loops.

2. Lexically scoped, but a module system ("fragment" system) makes it
possible to put parts of a program into separate files - effectively
splitting interface and implementation.

3. Multiple values, but symmetrically for both input (function arguments)
and output (function return values). A program can be viewed as moving
data around between black boxes.

4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
all correspond to "pattern"s.

5. Symmetry data/code automatically provides for different kinds of
inheritance:

   outer level   inner level
   -----------   -----------
   code          code          local function definitions
   data          code          local variables
   code          data          ??
   data          data          defstruct :include

6. "Part objects" cover the issues
- local functions with lexical scope,
- defstruct :include,
- defclass subclassing, but only single inheritance.

7. The ":<" token provides
- generic classes (not present in Lisp, you would have to use
a DEFCLASS in a non-null lexical environment),
- generic functions,
- &key parameters for functions.

8. The "::<" token specifies
- subclassing,
- methods for generic functions,
- keyword arguments for functions.

9. CALL-NEXT-METHOD goes the other way around: the most general method
determines the general behaviour, the most specific method only
plays the role of a fixup.

10. Compile-time typing where possible, run-time typing where necessary.
For example, you can put functions into plain lists or into lists
of functions. The latter saves some run-time checks. Of course,
general lists and lists of functions share the same definitions for
insert, delete etc.

11. Concurrency, i.e. deterministic parallelism. (Would make
WITH-PACKAGE-ITERATOR and WITH-HASH-TABLE-ITERATOR obsolete in CL.)

12. No multiple inheritance.
No generic functions with more than one dispatching argument.
The macro system is quite limited and violates lexical scoping, and is
therefore error-prone to use.
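The Lisp half of point 3 can be sketched in standard Common Lisp: multiple values cover the output direction, and the ordinary lambda list covers the input direction. (The function name DIV-MOD below is invented for illustration.)

```lisp
;; Multiple return values in standard CL; DIV-MOD is a made-up name.
(defun div-mod (a b)
  ;; FLOOR already returns both quotient and remainder as multiple
  ;; values; we simply pass them on.
  (floor a b))

(multiple-value-bind (q r) (div-mod 17 5)
  (list q r))   ; => (3 2)
```

What Beta adds, per point 3, is that the input side is treated just as symmetrically as the output side.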


B. The Mjolner BETA implementation

1. It is as easy to generate machine code for Beta programs as for Lisp.

2. Generates standalone executables. Hello-world is just 100 KB on Linux.

3. Automatic garbage collection (generational, scavenging).

4. Speed:
   consing up 1000 closures into a list
      Mjolner     0.049 sec
      CLISP       0.064 sec
      GCL         0.168 sec
   some integer array hacking
      C             4.2 sec
      Mjolner     111   sec
      GCL         288   sec
      CLISP       415   sec
   (All timings on a 486/33.)

5. Libraries for all kinds of container classes, streams, processes,
exceptions, X11, Motif.


C. The grains of salt:

1. The syntax is orthogonal, but you have to get used to it.

2. Unhygienic macros ("super patterns").

3. No nested comments.

4. Need a dictionary for the words "pattern", "part object", "virtual pattern".


For more information about BETA, look into the comp.lang.beta newsgroup.

Disclaimer: I have no relation with Mjolner Informatics, except that they gave
me a copy of their Mjolner compiler for free.

Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

rodrigo vanegas

Sep 8, 1994, 11:25:14 AM

> 4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
> all correspond to "pattern"s.

So what are these "patterns" anyway? It sounds as if they are very
close if not identical to Lisp closures. After all, can't each of the
above Lisp constructs be implemented as sugar for closures?


rodrigo vanegas
r...@cs.brown.edu

Petri Virkkula

Sep 8, 1994, 12:25:35 PM
On 8 Sep 1994 13:15:58 GMT, hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) said:

Bruno> Since a new language called BETA is gaining attention, which
Bruno> has some features Lisp is proud of, I attempt a comparison
Bruno> between Beta and Lisp.

I understand that this group is dedicated to BETA, but the
subject says "Comparison: Beta - Lisp". I didn't find much
comparison; I found mostly a list of features that are common to
both languages. It would be nice if somebody posted a real
comparison, as just about the only thing I know about Beta is
that it has "pattern"s.

Petri
--
--------------------------------------------------------------------------
Petri Virkkula | Internet: Petri.V...@hut.fi
JMT 11 E 125 | X.400 : /G=Petri/S=Virkkula/O=hut/ADMD=fumail/C=fi/
02150 Espoo | Voice : +358 0 455 1277
FINLAND |
--------------------------------------------------------------------------

Mark C. Chu-Carroll

Sep 8, 1994, 5:02:55 PM

You're basically pretty much correct. A pattern is essentially the
same thing as a closure, the primary difference being that a pattern
is static. Essentially, a pattern is a uniform code construct which
will be instantiated into a closure at runtime.

To make a jump backwards to the initial comparison idea, the key
conceptual difference between Beta and Lisp is that Beta is very
static: patterns are static entities which are compiled into
programs. At runtime, a pattern is instantiated into an "object"
which, depending on how it's used, may be a value or a procedure, or
any combination thereof (just like a closure, because it IS a
closure).

The important differences come about because of three things:

<1> Static typing - in Beta, the types of variables are always declared
statically. Programs in Beta are very static, which makes Beta
programming very different from the dynamic nature of Lisp programming.

<2> Single inheritance- the Beta object model uses only single inheritance.
The designers decided that rather than try to work out a system for
resolving the confusion caused by MI (namespace collisions, repeated
inheritance, etc.), it was better to do without it. I don't necessarily
agree with them, but it did result in keeping Beta simple.

<3> Virtual patterns - the object model is rather different. Instead of
the CLOS model where a child method overrides a parent method, Beta
uses the Simula model, where the child extends the parent method. The
parent implementation of a method contains calls to inner, which are
dynamically bound to the correct extension for the actual child type.

Going further, since everything (functions, procedures, methods, values)
is just a pattern, the virtual pattern idea can be applied to *any*
method at all. This results in a *very* interesting program model,
where you can write any procedure to be dynamically extensible. This can
actually be used to write lambda expressions! It's quite elegant...
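For readers who want the inner mechanism in Lisp terms, here is a rough CLOS sketch. The class and function names are invented, and this only approximates Beta's INNER by routing through an explicit auxiliary generic function:

```lisp
;; Sketch only: emulating Beta/Simula INNER in CLOS.  The general
;; method on the superclass determines the behaviour and calls the
;; auxiliary generic function where Beta would say INNER; a subclass
;; supplies its extension by specializing that auxiliary function.
(defclass account () ((balance :initform 0 :accessor balance)))
(defclass logged-account (account) ())

(defgeneric deposit-inner (account amount)
  (:method ((a account) amount)        ; default extension: do nothing
    (declare (ignore a amount))))

(defgeneric deposit (account amount))

(defmethod deposit ((a account) amount)
  (incf (balance a) amount)            ; the general behaviour ...
  (deposit-inner a amount)             ; ... then the child's extension
  (balance a))

(defmethod deposit-inner ((a logged-account) amount)
  (format t "deposited ~D~%" amount))  ; the "fixup" part
```

In real Beta no auxiliary function is needed; the extension is written directly inside the subpattern's do-part.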

<MC>
--
|| Mark Craig Chu-Carroll: <MC> || "I'm not dumb,
|| CIS Grad, U of Delaware || I just have a command of thoroughly
|| PGP key available by finger || useless information."
|| car...@cis.udel.edu || -Calvin

Ole Villumsen

Sep 8, 1994, 11:45:19 AM
rodrigo vanegas <r...@cs.brown.edu> writes:

I've posted a response to comp.lang.beta.
I can also mail it to anyone interested.
Ole
ovill...@daimi.aau.dk
Ole Villumsen, Computer Science Department, Aarhus University, Denmark

Mark Friedman

Sep 8, 1994, 8:27:34 PM
In article <34nu5v$o...@louie.udel.edu> car...@hercules.cis.udel.edu
(Mark C. Chu-Carroll) writes:

> You're basically pretty much correct. A pattern is essentially the
> same thing as a closure, the primary difference being that a pattern
> is static. Essentially, a pattern is a uniform code construct which
> will be instantiated into a closure at runtime.

OK, so let's equate a pattern with a lambda expression which is
instantiated into a closure at runtime.

> To make a jump backwards to the initial comparison idea, the key
> conceptual difference between Beta and Lisp is that Beta is very
> static: patterns are static entities which are compiled into
> programs.

And lambda expressions are static entities which are compiled into
functions.

> The important differences come about because of three things:
>
> <1> Static typing - in Beta, the types of variables are always declared
> statically. Programs in Beta is very static, which makes it very
> different from the dynamic nature of lisp programming.

Agreed, although it wouldn't take much of a stretch to imagine a lisp
which required type declarations. Admittedly, most lisp users would
hate that.

> <2> Single inheritance- the Beta object model uses only single inheritance.
> The designers decided that rather than try to work out a system for
> resolving the confusion caused by MI (namespace collisions, repeated
> inheritance, etc.), it was better to do without it. I don't necessarily
> agree with them, but it did result in keeping Beta simple.

But it looked to me like the object system that Beta uses is one based
on closures. That sort of object system has been used in Lisp and has
the same sort of single inheritance. For that matter, I assume
that one could build a CLOS-style multiple-inheritance object system
in Beta which did not use closures in such a direct way.

> <3> Virtual patterns - the object model is rather different. Instead of
> the CLOS model where a child method overrides a parent method, Beta
> uses the Simula model, where the child extends the parent method. The
> parent implementation of a method contains calls to inner, which are
> dynamically bound to the correct extension for the actual child type.

See my last comment.

> Going further, since everything (functions, procedures, methods, values)
> is just a pattern, the virtual pattern idea can be applied to *any*
> method at all. This results in a *very* interesting program model,
> where you can write any procedure to be dynamically extensible. This can
> actually be used to write lambda expressions! It's quite elegant...

Could you give an example here which doesn't look like a closure? I
guess I'm looking for something which would not have a straightforward
translation into lisp. If there's something new here I'd really like
to know.

-Mark
--
Mark Friedman
NASA-Ames Research Center
MS 269-2
Moffett Field, CA 94035-1000

vmail: (415) 604-0573
email: bo...@ptolemy.arc.nasa.gov

Lenny Gray

Sep 9, 1994, 2:38:50 AM
Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:

: ...
: some integer array hacking
: C 4.2 sec
: Mjolner 111 sec
: GCL 288 sec
: CLISP 415 sec
: (All timings on a 486/33.)

: ...

Are these numbers right? I've seriously used GCL and CLISP myself and
had some arguments with a "true believer Lisper" who thought "Lisp _does_
compete reasonably with C for numeric stuff", but I never bothered to do
the timing tests, and always assumed it wasn't this bad. Is it, really?

Also, I was interested in Beta until one minute ago, because of this.
Are there intrinsic reasons for this that will prevent it from ever
improving?

- Lenny Gray -

Jacob Seligmann

Sep 9, 1994, 7:03:38 AM
Lenny Gray (lenn...@netcom.com) wrote:

> Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
>
> : ...
> : some integer array hacking
> : C 4.2 sec
> : Mjolner 111 sec
> : GCL 288 sec
> : CLISP 415 sec
> : (All timings on a 486/33.)
> : ...

Ouch!

> Are these numbers right? I've seriously used GCL and CLISP myself and
> had some arguments with a "true believer Lisper" who thought "Lisp _does_
> compete reasonably with C for numeric stuff", but I never bothered to do
> the timing tests, and always assumed it wasn't this bad. Is it, really?

I don't know what Bruno's "some integer array hacking" program does,
but the performance ratios shown do not reflect my personal experience.
For example, here's a very simple "some integer array hacking" program
I wrote for the occasion:

/* siah.c - some integer array hacking in C */
void main(void)
{
    int a[10000], i, j;
    for (j = 0; j < 1000; j++)
        for (i = 0; i < 10000; i++)
            a[i] = i*4;
}

(* siah.bet - some integer array hacking in BETA *)
ORIGIN '~beta/basiclib/v1.4/betaenv'
--program:descriptor--
(#
   a: [10000]@integer;
do
   (for j:1000 repeat
      (for i:a.range repeat i*4 -> a[i] for);
   for);
#)

And here are the run-times (486/66; BETA compiler v5.0(1); gcc v2.5.8):

   BETA      4.41
   gcc       3.10
   gcc -O6   1.24

In this case, the ratio between BETA and unoptimized C was 1.4, while
the ratio between BETA and optimized C was 3.6.

Bear in mind that the BETA compiler does not currently perform nearly
as much optimization as gcc does. Also, the code generated by the BETA
compiler contained an index check (which could have been removed, had
the optimizer been smarter) at each assignment, while the C code
obviously did not.

> Also, I was interested in Beta until one minute ago, because of this.

Hopefully, your interest has been reawakened.

/Jacob Seligmann
------------------------------------------------------------------------
Mjolner Informatics ApS Phone: (+45) 86 20 20 00 ext. 2754
Science Park Aarhus Direct: (+45) 86 20 20 11 - 2754
Gustav Wieds Vej 10 Fax: (+45) 86 20 12 22
DK-8000 Aarhus C, Denmark Email: jac...@mjolner.dk
------------------------------------------------------------------------
BETA is better
------------------------------------------------------------------------

Bruno Haible

Sep 9, 1994, 11:50:06 AM
> So what are these "patterns" anyway? It sounds as if they are very
> close if not identical to lisp closures. After all, can't each of the
> above lisp stuff can be implemented as sugar for closures.

From the point of view of a Lisp programmer, a pattern consists of

* a specification of variables (call them "variables" or "closure variables"
or "slots"), and

* a piece of code which is executed after the storage for the variables has
been allocated (call it "initialization method" or simply "program").

But that's only one of many aspects of patterns...


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

rodrigo vanegas

Sep 9, 1994, 1:25:10 PM

>> So what are these "patterns" anyway? It sounds as if they are very
>> close if not identical to lisp closures. After all, can't each of the
>> above lisp stuff can be implemented as sugar for closures.

> From the point of view of a Lisp programmer, a pattern consists of
>
> * a specification of variables (call them "variables" or "closure variables"
> or "slots"), and
>
> * a piece of code which is executed after the storage for the variables has
> been allocated (call it "initialization method" or simply "program").

Ok, so consider the following:

(lambda (x y) (print "whatever...") (funcall x y))

This lambda abstraction, which evaluates to a closure, has "a
specification of variables", X and Y, and "a piece of code which is
executed after the storage for the variables has been allocated", the
PRINT followed by the FUNCALL.

I don't see any difference yet...

> But that's only one of many aspects of patterns...

Why is it that every explanation of patterns I've come across so far
always includes a "more to come" disclaimer at the end?! I'm
beginning to wonder whether these so-called "patterns" can be defined at
all!


rodrigo "tired of playing detective" vanegas
r...@cs.brown.edu

Bruno Haible

Sep 9, 1994, 12:37:49 PM
Lenny Gray <lenn...@netcom.com> wrote:
>Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
>
>: ...
>: some integer array hacking
>: C 4.2 sec
>: Mjolner 111 sec
>: GCL 288 sec
>: CLISP 415 sec
>: (All timings on a 486/33.)
>: ...
>
> Are these numbers right? I've seriously used GCL and CLISP myself and
> had some arguments with a "true believer Lisper" who thought "Lisp _does_
> compete reasonably with C for numeric stuff", but I never bothered to do
> the timing tests, and always assumed it wasn't this bad. Is it, really?

Of course these numbers are right. Here is another set of figures, for
the same integer array hacking benchmark:
   C                               7.7 sec
   Lucid CL 3.0 production mode     47 sec
   Lucid CL 3.0 development mode   226 sec
   CLISP                           512 sec
(All timings on a Sun Sparcstation IPC, this time.)

Lisp compilers produce good code, but they can't compete with good C
compilers in this case.

Here's the code, if you want to convince yourself:

(defun fannkuch (&optional (n (progn
                                (format *query-io* "n = ?")
                                (parse-integer (read-line *query-io*)))))
  (unless (and (> n 0) (<= n 100)) (return-from fannkuch))
  (let ((n n))
    (declare (fixnum n))
    (let ((perm (make-array n :element-type 'fixnum))
          (perm1 (make-array n :element-type 'fixnum))
          (zaehl (make-array n :element-type 'fixnum))
          (permmax (make-array n :element-type 'fixnum))
          (bishmax -1))
      (declare (type (simple-array fixnum (*)) perm perm1 zaehl permmax))
      (declare (fixnum bishmax))
      (dotimes (i n) (setf (svref perm1 i) i))
      (prog ((\t n))
        (declare (fixnum \t))
       Kreuz
        (when (= \t 1) (go standardroutine))
        (setf (svref zaehl (- \t 1)) \t)
        (decf \t)
        (go Kreuz)
       Dollar
        (when (= \t n) (go fertig))
        (let ((perm0 (svref perm1 0)))
          (dotimes (i \t) (setf (svref perm1 i) (svref perm1 (+ i 1))))
          (setf (svref perm1 \t) perm0))
        (when (plusp (decf (svref zaehl \t))) (go Kreuz))
        (incf \t)
        (go Dollar)
       standardroutine
        (dotimes (i n) (setf (svref perm i) (svref perm1 i)))
        (let ((Spiegelungsanzahl 0) (k 0))
          (declare (fixnum Spiegelungsanzahl k))
          (loop
            (when (= (setq k (svref perm 0)) 0) (return))
            (let ((k2 (ceiling k 2)))
              (declare (fixnum k2))
              (dotimes (i k2) (rotatef (svref perm i) (svref perm (- k i)))))
            (incf Spiegelungsanzahl))
          (when (> Spiegelungsanzahl bishmax)
            (setq bishmax Spiegelungsanzahl)
            (dotimes (i n) (setf (svref permmax i) (svref perm1 i)))))
        (go Dollar)
       fertig
        )
      (format t "The maximum was ~D.~% at " bishmax)
      (format t "(")
      (dotimes (i n)
        (when (> i 0) (format t " "))
        (format t "~D" (+ (svref permmax i) 1)))
      (format t ")")
      (terpri)
      (values))))


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Bruno Haible

Sep 9, 1994, 11:57:21 AM
> <1> Static typing - in Beta, the types of variables are always declared
> statically.

With the important extension that it is not only possible to declare
x will be of type A,
but also
x will belong to some fixed subtype of type A.

This makes it possible to have generic container classes.

> Programs in Beta is very static, which makes it very
> different from the dynamic nature of lisp programming.

I don't agree here. This depends on your programming style. You can
perfectly well embed the lambda calculus in Beta.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Mark C. Chu-Carroll

Sep 9, 1994, 1:44:28 PM
In article <BOBO.94S...@avogadro.arc.nasa.gov> bo...@ptolemy.arc.nasa.gov writes:
]In article <34nu5v$o...@louie.udel.edu> car...@hercules.cis.udel.edu
](Mark C. Chu-Carroll) writes:
]
] You're basically pretty much correct. A pattern is essentially the
] same thing as a closure, the primary difference being that a pattern
] is static. Essentially, a pattern is a uniform code construct which
] will be instantiated into a closure at runtime.
]
]OK, so let's equate a pattern with a lambda expression which is
]instantiated into a closure at runtime.

Yep, that's pretty much it.

] The important differences come about because of three things:


]
] <1] Static typing - in Beta, the types of variables are always declared
] statically. Programs in Beta is very static, which makes it very
] different from the dynamic nature of lisp programming.
]
]Agreed, although it wouldn't take much of a stretch to imagine a lisp
]which required type declarations. Admittedly, most lisp users would
]hate that.
]
] <2] Single inheritance- the Beta object model uses only single inheritance.
] The designers decided that rather than try to work out a system for
] resolving the confusion caused by MI (namespace collisions, repeated
] inheritance, etc.), it was better to do without it. I don't necessarily
] agree with them, but it did result in keeping Beta simple.
]
]But it looked to me like the object system that Beta uses is one based
]on closures. That sort of object system has been used in lisp and has
]the same sort of single inheritance. For the same matter, I assume
]that one could build a CLOS type multiple inheritance object system
]in Beta which did not use closures in such a direct way.

Again, you're pretty much right. It's *very* similar to closure based
object systems. It reminds me quite a lot of a closure based object
system I implemented for Scheme as an undergrad.

] <3] Virtual patterns - the object model is rather different. Instead of
] the CLOS model where a child method overrides a parent method, Beta
] uses the Simula model, where the child extends the parent method. The
] parent implementation of a method contains calls to inner, which are
] dynamically bound to the correct extension for the actual child type.
]
]See my last comment.
]

] Going further, since everything (functions, procedures, methods, values)
] is just a pattern, the virtual pattern idea can be applied to *any*
] method at all. This results in a *very* interesting program model,
] where you can write any procedure to be dynamically extensible. This can
] actually be used to write lambda expressions! It's quite elegant...
]
]Could you give an example here which doesn't look like a closure. I
]guess I'm looking for something which would not have a straightforward
]translation into lisp. If there's something new here I'd really like
]to know.

No, I can't give an example which doesn't look like a closure, because
as I've said, the comparison to closures is pretty damned exact. The
differences that I've pointed out are primarily differences in
programming style that wind up in common programs.

Programming styles in Beta do end up being somewhat different than
styles in Lisp, primarily because of the factors I've mentioned
above. The Beta idioms could be used to implement the programming
styles used by most commonlisp people; and the commonlisp idioms could
be used to implement the beta style. It's not a semantic difference,
just a stylistic one.

<MC>

--
|| Mark Craig Chu-Carroll: <MC> || "A libertarian is just a republican
|| CIS Grad, U of Delaware || who takes drugs"
|| PGP Key Available, by finger || - Bob Black
|| car...@cis.udel.edu ||

William D. Gooch

Sep 9, 1994, 1:36:50 PM
On 9 Sep 1994, Bruno Haible wrote:

> Lenny Gray <lenn...@netcom.com> wrote:
> >
> > Are these numbers right? I've seriously used GCL and CLISP myself and
>

> Of course these numbers are right....

"Right" or wrong, they are meaningless and should be ignored in the
absence of additional information. See below.

> Here's the code, if you want to convince yourself:

OK, I have convinced myself not to pay any attention to your results.
See below.

So where and how is the timing measurement taken? This is crucial; you
cannot expect anyone to glean useful information from a timing test
unless you can demonstrate that it was taken in a reasonable fashion,
which means limiting the test to the things you are claiming to measure
by it. Unless you were careful to exclude other processes while the test
was running, you may have also measured things completely extraneous to the
code you intended to measure. I'm not saying that you don't know these
things, but you need to be explicit about them when you publish a
so-called benchmark.

Also, the above code includes all sorts of stuff that has nothing to do
with integer array hacking, such as parsing, formatting (which involves
printed output and most likely conses), array allocation, and so on. This
is a *terrible* benchmark - it's quite possible that the timings you
published even include time waiting for a window to become exposed for
output, as well as the time taken by the user to type input.

Jeff Dalton

Sep 10, 1994, 11:00:56 AM
In article <34nbif$a...@belfort.daimi.aau.dk> ol...@daimi.aau.dk (Ole Villumsen) writes:
>rodrigo vanegas <r...@cs.brown.edu> writes:
>
>>In article <34n2qe$d...@nz12.rz.uni-karlsruhe.de>, hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
>
>>> 4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
>>> all correspond to "pattern"s.
>
>>So what are these "patterns" anyway? It sounds as if they are very
>>close if not identical to lisp closures. After all, can't each of the
>>above lisp stuff can be implemented as sugar for closures.
>
>I've posted a response to comp.lang.beta.
>I can also mail it to anyone interested.

This annoys me big time. Why not post the reply to the newsgroups
that contained the original article?

Jeff Dalton

Sep 10, 1994, 11:11:33 AM

>A. The BETA language
>
>1. Has an incredible symmetry between data and executable program code.
> Much better than Lisp.

This is useless without some details. All I know is that you think
Beta has an "incredible symmetry".

Most of the rest of the message has the same problem.

>2. Lexically scoped, but a module system ("fragment" system) permits
> to put parts of a program into separate files - effectively splitting
> interface and implementation.

Virtually every Lisp in the universe lets you put parts of the
program in separate files. So what exactly is the issue here?

>7. The ":<" token makes up
> - generic classes (not present in Lisp, you would have to use
> a DEFCLASS in a non-null lexical environment),

So in fact what you mean is *Common* Lisp, not Lisp.

>9. CALL-NEXT-METHOD goes the other way around: the most general method
> determines the general behaviour, the most specific method only
> plays the role of a fixup.

Again more is needed.

>11. Concurrency, i.e. deterministic parallelism. (Would make
> WITH-PACKAGE-ITERATOR and WITH-HASH-TABLE-ITERATOR obsolete in CL.)

I think you are confused about the nature and role of these
iterators. But perhaps not. It's impossible to tell from
the little you say.

>4. Speed:
> consing up 1000 closures into a list
> Mjolner 0.049 sec
> CLISP 0.064 sec
> GCL 0.168 sec
> some integer array hacking
> C 4.2 sec
> Mjolner 111 sec
> GCL 288 sec
> CLISP 415 sec
> (All timings on a 486/33.)

It's necessary to see the code. What declarations did you use
in Common Lisp? Also, try a faster Common Lisp.

Jeff Dalton

Sep 10, 1994, 11:21:32 AM
In article <Pine.A32.3.90.940909...@swim5.eng.sematech.org> "William D. Gooch" <goo...@swim5.eng.sematech.org> writes:
>On 9 Sep 1994, Bruno Haible wrote:
>
>> Lenny Gray <lenn...@netcom.com> wrote:
>> >
>> > Are these numbers right? I've seriously used GCL and CLISP myself and
>>
>> Of course these numbers are right....
>
>"Right" or wrong, they are meaningless and should be ignored in the
>absence of additional information. See below.
>
>> Here's the code, if you want to convince yourself:

>> (dotimes (i n) (setf (svref perm1 i) i))

BTW, you can't just use DOTIMES like that even if n is declared to be a
fixnum. I think this is explained in the comp.lang.lisp FAQ. If not, it
was explained in the list of "pitfalls" I posted a while back, which
perhaps I should update?
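A sketch of what the pitfall may be (Jeff does not spell it out, so this reading is an assumption): declaring N a fixnum does not by itself tell every compiler anything about the index variable, so the index needs its own declaration, and that declaration must also cover the index's final value N at loop exit:

```lisp
;; Assumed reading of the DOTIMES pitfall - not Jeff's own code.
(defun init-perm (perm1 n)
  (declare (type simple-vector perm1) (fixnum n))
  (dotimes (i n perm1)
    ;; I must be declared separately; note that I reaches N when the
    ;; loop terminates, so the declaration must be valid for N too.
    (declare (fixnum i))
    (setf (svref perm1 i) i)))
```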

-- jeff

Bernhard Pfahringer

Sep 9, 1994, 2:59:24 PM
In article <34q30t$n...@nz12.rz.uni-karlsruhe.de>,

Bruno Haible <hai...@ma2s2.mathematik.uni-karlsruhe.de> wrote:
>
>Of course these numbers are right. Here is another set of figures, for
>the same integer array hacking benchmark:
> C 7.7 sec
> Lucid CL 3.0 production mode 47 sec
> Lucid CL 3.0 development mode 226 sec
> CLISP 512 sec
>(All timings on a Sun Sparcstation IPC, this time.)
>
>Lisp compilers produce good code, but they can't compete with good C
>compilers in this case.

This may not be the case: I've timed your function using both CMU CL 17c
and Lucid CL 4.0.0; CMU CL is 3 times faster than Lucid, so:
   CMUCL (estimate!) 15 sec
which is just a factor of 2 off from C. And here is a possible explanation
of that remaining factor: I *guess* SVREF still checks whether the index is
within the bounds of the vector, whereas there is probably no such check
in the binary produced by the C compiler.
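For what it's worth, the standard way to ask a Common Lisp compiler to drop such checks is an OPTIMIZE declaration; whether the bounds check on SVREF actually disappears is implementation-dependent. (The function below is an invented example, not Bruno's benchmark.)

```lisp
;; Invented example: requesting that safety checks be compiled away.
(defun fill-squares (v n)
  (declare (optimize (speed 3) (safety 0))
           (type simple-vector v) (fixnum n))
  (dotimes (i n v)
    (declare (fixnum i))
    (setf (svref v i) (* i i))))
```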

regards, Bernhard
--------------------------------------------------------------------------
Bernhard Pfahringer
Austrian Research Institute for
Artificial Intelligence bern...@ai.univie.ac.at
Schottengasse 3 Fax: (+43 1) 532-0652
A-1010 Vienna, Austria Phone: (+43 1) 533-6112

Scott McLoughlin

Sep 9, 1994, 4:19:54 PM
bern...@ai.univie.ac.at (Bernhard Pfahringer) writes:

> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0, CMUCL is 3 times faster than Lucid, so:
> CMUCL (estimate!) 15 sec
>
> which is just a factor of 2 off of C. And here is a possible explanation
> of that remaining factor: I *guess* svref still checks, if the index is
> within the bounds of the vector, whereas there is probably no such check
> in the binary produced by the C compiler.
>
> regards, Bernhard
> --------------------------------------------------------------------------
> Bernhard Pfahringer
> Austrian Research Institute for
> Artificial Intelligence bern...@ai.univie.ac.at
> Schottengasse 3 Fax: (+43 1) 532-0652
> A-1010 Vienna, Austria Phone: (+43 1) 533-6112
>

Howdy,
SVREF bounds-checks an index with (SAFETY 0)? Weird. Of all
things to optimize, you'd think that "field dereferencing" functions,
e.g., SVREF, SCHAR, <STRUCT>-<SLOT> references, and their builtin
counterparts like SYMBOL-NAME, would not type-check and would not
bounds-check.
I'd think that it would be pretty easy to group these
functions where the compiler can see 'em and generate a single
machine instruction on most processors.

=============================================
Scott McLoughlin
Conscious Computing
=============================================

Lawrence G. Mayka

Sep 9, 1994, 7:25:42 PM

> Of course these numbers are right. Here is another set of figures, for
> the same integer array hacking benchmark:
>    C                               7.7 sec
>    Lucid CL 3.0 production mode     47 sec
>    Lucid CL 3.0 development mode   226 sec
>    CLISP                           512 sec
> (All timings on a Sun Sparcstation IPC, this time.)

I added some type declarations (and removed some), in order to make
sure that no functions (at least, according to the disassembler) are
being called in the inner parts. I've attached the resulting source
code. (Note again that I am not asserting that =all= those type
declarations are necessary, only that I didn't want to take chances.)
The benchmark result on LispWorks 3.2, on an old Sparc LX, for

(fannkuch-fast 10)

is

57.3 sec

You didn't say what argument you passed to the function, by the way;
10 was as high as I was willing to go!

-----------------------------
(in-package :cl-user)

(defun fannkuch-fast (&optional (n (progn
                                     (format *query-io* "n = ?")
                                     (parse-integer (read-line *query-io*)))))
  (declare (optimize (safety 0) (speed 3) (space 0) (debug 0)))
  (unless (and (> n 0) (<= n 100)) (return-from fannkuch-fast))
  (let ((n n))
    (declare (fixnum n))
    (let ((perm (make-array n :initial-element 0))
          (perm1 (make-array n :initial-element 0))
          (zaehl (make-array n :initial-element 0))
          (permmax (make-array n :initial-element 0))
          (bishmax -1))
      (declare (type simple-vector perm perm1 zaehl permmax))
      (declare (fixnum bishmax))
      (dotimes (i n)
        (declare (fixnum i))
        (setf (svref perm1 i) i))
      (prog ((\t n))
        (declare (fixnum \t))
       Kreuz
        (when (= \t 1) (go standardroutine))
        (setf (svref zaehl (the fixnum (1- \t))) \t)
        (setf \t (the fixnum (1- \t)))
        (go Kreuz)
       Dollar
        (when (= \t n) (go fertig))
        (let ((perm0 (svref perm1 0)))
          (dotimes (i \t)
            (declare (fixnum i))
            (setf (svref perm1 i) (svref perm1 (the fixnum (1+ i)))))
          (setf (svref perm1 \t) perm0))
        (when (> (the fixnum
                   (setf (the fixnum (svref zaehl \t))
                         (the fixnum (1- (the fixnum (svref zaehl \t))))))
                 0)
          (go Kreuz))
        (setf \t (the fixnum (1+ \t)))
        (go Dollar)
       standardroutine
        (dotimes (i n)
          (declare (fixnum i))
          (setf (svref perm i) (svref perm1 i)))
        (let ((Spiegelungsanzahl 0) (k 0))
          (declare (fixnum Spiegelungsanzahl k))
          (loop
            (when (= (the fixnum (setq k (svref perm 0))) 0) (return))
            (let ((k2 (ash (the fixnum (1+ k)) -1)))
              (declare (fixnum k2))
              (dotimes (i k2)
                (declare (fixnum i))
                (rotatef (svref perm i) (svref perm (the fixnum (- k i))))))
            (setf Spiegelungsanzahl (the fixnum (1+ Spiegelungsanzahl))))
          (when (> Spiegelungsanzahl bishmax)
            (setq bishmax Spiegelungsanzahl)
            (dotimes (i n)
              (declare (fixnum i))
              (setf (svref permmax i) (svref perm1 i)))))
        (go Dollar)
       fertig
        )
      (format t "The maximum was ~D.~% at " bishmax)
      (format t "(")
      (dotimes (i n)
        (declare (fixnum i))
        (when (> i 0) (format t " "))
        (format t "~D" (the fixnum (1+ (the fixnum (svref permmax i))))))
      (format t ")")
      (terpri)
      (values))))

--------------------------------
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

Bruno Haible

Sep 10, 1994, 5:02:25 PM
to
Lenny Gray <lenn...@netcom.com> wrote:
>
> Also, I was interested in Beta until one minute ago, because of this.
> Are there intrinsic reasons for this that will prevent it from ever
> improving?

I don't think so. If their compiler did more optimizations, the Mjolner
timings certainly would be much closer to the C timings.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Bruno Haible

Sep 10, 1994, 5:29:59 PM
to
Bernhard Pfahringer <bern...@ai.univie.ac.at> wrote:
>> Lisp compilers produce good code, but they can't compete with good C
>> compilers in this case.
>
> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0, CMUCL is 3 times faster than Lucid, so:
> CMUCL (estimate!) 15 sec
>
> which is just a factor of 2 off of C.

This agrees with some figures measured by Simon Leinen:

Sun 4/670MP, 40MHz SuperSPARC acc -fast 1.5 sec user
Sun 4/670MP, 40MHz SuperSPARC CMU CL 17e 3.08 sec user
Sun 4/670MP, 40MHz SuperSPARC LispWorks 3.2 15.58 sec user
Sun 4/670MP, 40MHz SuperSPARC Allegro 4.1 33.87 sec user

Indeed CMUCL is only a factor of 2 off of C. I am happy to eat my words
about Lisp compilers' code.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Scott Schwartz

Sep 10, 1994, 6:05:17 PM
to
hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
Lenny Gray <lenn...@netcom.com> wrote:
> Also, I was interested in Beta until one minute ago, because of this.

If their compiler did more optimizations, the Mjolner
timings certainly would be much closer to the C timings.

Contrast with the strategy of the Sather group: their compiler started
out by doing better than C++ on some microbenchmarks. That's what it
takes to win supporters in real life.

Bruno Haible

Sep 10, 1994, 5:13:52 PM
to
William D. Gooch <goo...@swim5.eng.sematech.org> wrote:
>
> So where and how is the timing measurement taken? This is crucial

(compile-file "fannkuch")
(load "fannkuch.fasl")
(time (fannkuch 9))

> Also, the above code includes all sorts of stuff that has nothing to do
> with integer array hacking, such as parsing, formatting (which involves
> printed output and most likely conses), array allocation, and so on.

All these are isolated at the beginning and at the end, and executed
only once, so their time will be negligible.

> This is a *terrible* benchmark - it's quite possible that the timings you
> published even include time waiting for a window to become exposed for
> output, as well as the time taken by the user to type input.

Sure, if your "time" command doesn't distinguish elapsed real time
and the time consumed by a single process in user mode, then it is
unusable for benchmarks.


Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de

Lawrence G. Mayka

Sep 11, 1994, 12:12:30 PM
to

I forgot some additional type declarations needed to avoid checking
the GC write barrier when vector elements are modified. My new
result for

(time (fannkuch-fast 9))

on LispWorks 3.2 is

An early-1993 Sparc LX 3.81 sec
An early-1993, low-end Sparc 10 2.97 sec

I have attached the again-modified version of the benchmark.

-------------------------------
(in-package :cl-user)

(defun fannkuch-fast (&optional (n (progn
                                     (format *query-io* "n = ?")
                                     (parse-integer (read-line *query-io*)))))
  (declare (optimize (safety 0) (speed 3) (space 0) (debug 0))
           (fixnum n))
  (unless (and (> n 0) (<= n 100))
    (return-from fannkuch-fast))
  (let ((perm (make-array n :initial-element 0))
        (perm1 (make-array n :initial-element 0))
        (zaehl (make-array n :initial-element 0))
        (permmax (make-array n :initial-element 0))
        (bishmax -1))
    (declare (type simple-vector perm perm1 zaehl permmax)
             (dynamic-extent perm perm1 zaehl permmax)
             (fixnum bishmax))
    (dotimes (i n)
      (declare (fixnum i))
      (setf (svref perm1 i) i))
    (prog ((\t n))
      (declare (fixnum \t))
     Kreuz
      (when (= \t 1)
        (go standardroutine))
      (setf (svref zaehl (the fixnum (1- \t))) \t)
      (setf \t (the fixnum (1- \t)))
      (go Kreuz)
     Dollar
      (when (= \t n)
        (go fertig))
      (let ((perm0 (svref perm1 0)))
        (declare (fixnum perm0))
        (dotimes (i \t)
          (declare (fixnum i))
          (setf (svref perm1 i) (the fixnum (svref perm1 (the fixnum (1+ i))))))
        (setf (svref perm1 \t) perm0))
      (when (> (the fixnum (setf (svref zaehl \t)
                                 (the fixnum (1- (the fixnum (svref zaehl \t))))))
               0)
        (go Kreuz))
      (setf \t (the fixnum (1+ \t)))
      (go Dollar)
     standardroutine
      (dotimes (i n)
        (declare (fixnum i))
        (setf (svref perm i) (the fixnum (svref perm1 i))))
      (let ((Spiegelungsanzahl 0)
            (k 0))
        (declare (fixnum Spiegelungsanzahl k))
        (loop
          (when (= (the fixnum (setq k (svref perm 0))) 0)
            (return))
          (let ((k2 (ash (the fixnum (1+ k)) -1)))
            (declare (fixnum k2))
            (dotimes (i k2)
              (declare (fixnum i))
              (rotatef (the fixnum (svref perm i))
                       (the fixnum (svref perm (the fixnum (- k i)))))))
          (setf Spiegelungsanzahl (the fixnum (1+ Spiegelungsanzahl))))
        (when (> Spiegelungsanzahl bishmax)
          (setq bishmax Spiegelungsanzahl)
          (dotimes (i n)
            (declare (fixnum i))
            (setf (svref permmax i) (the fixnum (svref perm1 i))))))
      (go Dollar)
     fertig)
    (format t "The maximum was ~D.~% at " bishmax)
    (format t "(")
    (dotimes (i n)
      (declare (fixnum i))
      (when (> i 0)
        (format t " "))
      (format t "~D" (the fixnum (1+ (the fixnum (svref permmax i))))))
    (format t ")")
    (terpri)
    (values)))

-----------------------------------------

Ken Anderson

Sep 11, 1994, 9:56:11 AM
to
In article <LGM.94Se...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

Bernhard Pfahringer <bern...@ai.univie.ac.at> wrote:
>> Lisp compilers produce good code, but they can't compete with good C
>> compilers in this case.
>
> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0, CMUCL is 3 times faster than Lucid, so:

Be careful when making such claims. How carefully did you study your
benchmarks to find out why there was such a difference? Generally, CMU and
Lucid are quite comparable.

> CMUCL (estimate!) 15 sec
> > which is just a factor of 2 off of C.

This agrees with some figures measured by Simon Leinen:

Sun 4/670MP, 40MHz SuperSPARC acc -fast 1.5 sec user
Sun 4/670MP, 40MHz SuperSPARC CMU CL 17e 3.08 sec user
Sun 4/670MP, 40MHz SuperSPARC LispWorks 3.2 15.58 sec user
Sun 4/670MP, 40MHz SuperSPARC Allegro 4.1 33.87 sec user

Since two different Lisps are off by factors of 5 and 10 from CMUCL, the
benchmark probably needs a little work. For example, I found that
(ceiling k 2) accounted for 60% of the time of the original version of
(fannkuch 9) in Allegro. The Allegro fannkuch-fast times are pretty close
to those below (though the hardware isn't the same).

Indeed CMUCL is only a factor of 2 off of C. I am happy to eat my words
about Lisp compilers' code.

This is a good example of how easy it is for Lisp to look much worse than
it is.

I forgot some additional type declarations needed to avoid checking
the GC write barrier when vector elements are modified. My new
result for

(time (fannkuch-fast 9))

on LispWorks 3.2 is

An early-1993 Sparc LX 3.81 sec
An early-1993, low-end Sparc 10 2.97 sec

I have attached the again-modified version of the benchmark.

This is a great improvement! The original benchmark had at least a dozen
problems with it. Could you or someone publish the C (and Beta) code so we
could make a completely fair and correct benchmark out of this?

Good work.
--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA

Simon Leinen

Sep 12, 1994, 4:39:44 AM
to
> Since two different Lisps are off by factors of 5 and 10 from
> CMUCL, the benchmark probably needs a little work.

One might also conclude that the two other Lisps need a little work.

> For example, i found (ceiling k 2) accounted for 60% of the time
> of the original version of (fannkuch 9) in Allegro.

...which would confirm the alternative conclusion (at least CEILING
seems to need a little work in Allegro).

[In defense of Franz, when I once sent them a complaint about the
performance of some specific part of Lisp functionality that was
implemented sub-optimally, they answered with a very effective patch.]

Maybe the problem is that it is simply very hard to implement all of
Common Lisp in a maximally efficient manner. Fortunately most Lisps
have good profilers now, so that you can at least identify the
problematic areas in your programs without too much effort.
--
Simon.

Fernando Mato Mira

Sep 12, 1994, 12:02:08 PM
to
In article <SIMON.94S...@liasg5.epfl.ch>, si...@lia.di.epfl.ch (Simon Leinen) writes:

> [In defense of Franz, when I once sent them a complaint about the
> performance of some specific part of Lisp functionality that was
> implemented sub-optimally, they answered with a very effective patch.]

Same here.

--
F.D. Mato Mira
Computer Graphics Lab mato...@epfl.ch
EPFL FAX: +41 (21) 693-5328


Paul Krause x7816

Sep 12, 1994, 6:45:51 PM
to
In article <34q30t$n...@nz12.rz.uni-karlsruhe.de> hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
>
> Here's the code, if you want to convince yourself:
>
> (defun fannkuch (&optional (n (progn
> (format *query-io* "n = ?")
> (parse-integer (read-line *query-io*))
> ) ) )

etc.

That's not Lisp! That's C with parens!

--
Paul F. Krause Systems Research and Applications Corporation
(703) 558-7816 Intelligent Information Systems
FAX 558-4723 2000 15th Street North, Arlington, VA 22201
Internet: kra...@sra.com     Uucp: uunet!uupsi5!sraopus!verdi!krausep

Jacob Seligmann

Sep 13, 1994, 2:13:27 PM
to
Jacob Seligmann (jac...@daimi.aau.dk) wrote:

> Lenny Gray (lenn...@netcom.com) wrote:
>
> > Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
> >
> > > some integer array hacking
> > > C 4.2 sec
> > > Mjolner 111 sec
> > > GCL 288 sec
> > > CLISP 415 sec
>

> > Are these numbers right? I've seriously used GCL and CLISP myself and
> > had some arguments with a "true believer Lisper" who thought "Lisp _does_
> > compete reasonably with C for numeric stuff", but I never bothered to do
> > the timing tests, and always assumed it wasn't this bad. Is it, really?
>

> I don't know what Bruno's "some integer array hacking" program does [...]

Mr. Bruno Haible was kind enough to send me the C and LISP programs
used. Unfortunately, he had deleted the BETA program, so I had to do
the port from scratch. All three programs are listed at the end of this
post. The benchmark is the program run with n=9.

[As you can see below, the C implementation uses a lot of tricks I did
not care or was unable to use in the BETA port: in-lining (#define's),
hard-wired variable liveliness information (loads of blocks with local
variables, making the code very hard to read), code generation hints
(register ...), array pointers (*d++ = *s++), unstructured gotos, etc.]

Here are my results (486/66; BETA compiler v5.0(2); gcc v2.5.8):

gcc -O6 -fomit-frame-pointer 2.08
BETA -no-range-check 11.00

[Actually, the compiler does not currently have a -no-range-check option.
The BETA figure was obtained by removing all boundl instructions from
the generated code by hand and reassembling; this was the only way I
could achieve a fair comparison.]

As you can see, the ratio between state-of-the-art optimized C and BETA
was 5.3, not 26.4 as above.

> > Also, I was interested in Beta until one minute ago, because of this.

> > Are there intrinsic reasons for this that will prevent it from ever
> > improving?

I looked at the generated code, and it seems that the increase in
execution time can be attributed to two factors:

(1) The BETA code allocates its variables on the heap, while the C
code uses registers almost exclusively.

(2) The BETA code performs a regular lookup calculation at each array
access, while the C code simply increases a dedicated register
during the linear sweep.

With a little bit of effort, there is absolutely no reason why the BETA
compiler should not be able to perform the variable liveness analysis
itself (although currently it does not, and therefore pays the heavy
price of using heap space instead of registers). Also, the linear array
sweeps are simple enough that the compiler could recognize them and
avoid the index calculations (again, it currently does not, and
therefore pays the price at each lookup).

"Some integer array hacking"-type programs are *exactly* what highly
sophisticated C compilers excel at, but there are no intrinsic reasons
why the BETA compiler should not be able to produce comparable code.
We're constantly working on it...

Sincerely,

/Jacob Seligmann
------------------------------------------------------------------------
Mjolner Informatics ApS Phone: (+45) 86 20 20 00 ext. 2754
Science Park Aarhus Direct: (+45) 86 20 20 11 - 2754
Gustav Wieds Vej 10 Fax: (+45) 86 20 12 22
DK-8000 Aarhus C, Denmark Email: jac...@mjolner.dk
------------------------------------------------------------------------
BETA is better
------------------------------------------------------------------------

============================== fannkuch.c ==============================

/* Program simulating the pancake-flipping game */
/* Bruno Haible 10.06.1990 */

#include <stdio.h>

#define PermLength 100
#define PermCopy(Source,Dest,n) \
{register int h = n; register int *s = Source; register int *d = Dest; \
while (h) {*d++ = *s++; h--;}; \
}

int main(void)
{ int n;
int Perm[PermLength];
int Perm1[PermLength];
int Zaehl[PermLength];
int PermMax[PermLength];
int BishMax; /* maximum number of reversals seen so far */
printf("n = ?");
scanf("%d",&n); if (!((n>0)&&(n<=PermLength))) goto Ende;

BishMax=-1;
/* Generate all permutations */
/* Generate the permutations by the following algorithm:
PERM1[0..n-1] := (0,...,n-1)
t:=n
# if t=1 then standardroutine, goto $
Zaehl[t-1]:=t
t:=t-1, goto #
$ if t<n then goto &, if t=n then done.
& rotate PERM1[0..t], dec Zaehl[t], if >0 then goto #
t:=t+1, goto $
*/
{ register int i;
for (i=0; i<n; i++) { Perm1[i]=i; };
};
{ register int t;
t=n;
Kreuz: if (t==1) goto standardroutine;
Zaehl[t-1]=t;
t=t-1; goto Kreuz; /* recursive call */
Dollar: /* return from the recursive call */
if (t==n) goto Fertig;
/* Rotate: Perm1[0] <- Perm1[1] <- ... <- Perm1[n-1] <- Perm1[0] */
{ register int Perm0; register int i;
Perm0=Perm1[0];
for (i=0; i<t; i++) {Perm1[i]=Perm1[i+1];};
Perm1[t]=Perm0;
};
if (--Zaehl[t]) goto Kreuz;
t=t+1; goto Dollar;

standardroutine:
PermCopy(Perm1,Perm,n); /* Perm := Perm1 */
{ int Spiegelungsanzahl;
Spiegelungsanzahl=0;
{ unsigned int k;
while (!((k=Perm[0]) == 0))
{/* reverse Perm[0..k] */
unsigned int k2=(k+1)/2;
register int *up = &Perm[0]; register int *down = &Perm[k];
{ register int i;
i=k2; while (i) {int h; h=*up; *up++=*down; *down--=h; i--;};
}
Spiegelungsanzahl++;
};
};
if (Spiegelungsanzahl>BishMax)
{BishMax=Spiegelungsanzahl; PermCopy(Perm1,PermMax,n);};
}
goto Dollar;
}
Fertig: printf("Das Maximum betrug %d.\n bei ",BishMax);
{register unsigned int i;
printf("(");
for (i=0; i<n; i++)
{if (i>0) printf(" ");
printf("%d",PermMax[i]+1);
};
printf(")");
};
printf("\n");
Ende: ;
}

============================= fannkuch.lsp =============================

(defun fannkuch (&optional (n (progn
                                (format *query-io* "n = ?")
                                (parse-integer (read-line *query-io*)))))
  (unless (and (> n 0) (<= n 100)) (return-from fannkuch))
  (let ((n n))
    (declare (fixnum n))
    (let ((perm (make-array n :element-type 'fixnum))
          (perm1 (make-array n :element-type 'fixnum))
          (zaehl (make-array n :element-type 'fixnum))
          (permmax (make-array n :element-type 'fixnum))
          (bishmax -1))
      ;; These are specialized fixnum vectors, so they are accessed with
      ;; AREF; SVREF is only valid on simple-vectors.
      (declare (type (simple-array fixnum (*)) perm perm1 zaehl permmax))
      (declare (fixnum bishmax))
      (dotimes (i n) (setf (aref perm1 i) i))
      (prog ((\t n))
        (declare (fixnum \t))
       Kreuz
        (when (= \t 1) (go standardroutine))
        (setf (aref zaehl (- \t 1)) \t)
        (decf \t)
        (go Kreuz)
       Dollar
        (when (= \t n) (go fertig))
        (let ((perm0 (aref perm1 0)))
          (dotimes (i \t) (setf (aref perm1 i) (aref perm1 (+ i 1))))
          (setf (aref perm1 \t) perm0))
        (when (plusp (decf (aref zaehl \t))) (go Kreuz))
        (incf \t)
        (go Dollar)
       standardroutine
        (dotimes (i n) (setf (aref perm i) (aref perm1 i)))
        (let ((Spiegelungsanzahl 0) (k 0))
          (declare (fixnum Spiegelungsanzahl k))
          (loop
            (when (= (setq k (aref perm 0)) 0) (return))
            (let ((k2 (ceiling k 2)))
              (declare (fixnum k2))
              (dotimes (i k2) (rotatef (aref perm i) (aref perm (- k i)))))
            (incf Spiegelungsanzahl))
          (when (> Spiegelungsanzahl bishmax)
            (setq bishmax Spiegelungsanzahl)
            (dotimes (i n) (setf (aref permmax i) (aref perm1 i)))))
        (go Dollar)
       fertig)
      (format t "Das Maximum betrug ~D.~% bei " bishmax)
      (format t "(")
      (dotimes (i n)
        (when (> i 0) (format t " "))
        (format t "~D" (+ (aref permmax i) 1)))
      (format t ")")
      (terpri)
      (values))))

============================= fannkuch.bet =============================

ORIGIN '~beta/basiclib/v1.4/betaenv'
--program:descriptor--
(#

PermLength: (# exit 100 #);
Perm, Perm1, PermMax, Zaehl: [PermLength]@integer;
h, i, k, n, t, up, down, BishMax, Spiegelungsanzahl: @integer;
do
'n = ?' -> putText;
getInt -> n;
(if (n < 1) or (n > PermLength) then stop if);

-1 -> BishMax;
(for i:n repeat i-1 -> Perm1[i] for);
n -> t;

again:
(#
do (for i:t repeat i -> Zaehl[i] for); 1 -> t;
(for i:n repeat Perm1[i] -> Perm[i] for);
0 -> Spiegelungsanzahl;

while1:
(#
do (if Perm[1]->k // 0 then leave while1 if);
1 -> up; k+1 -> down; down/2 -> i;
while2:
(#
do (if i // 0 then leave while2 if);
Perm[up] -> h; Perm[down] -> Perm[up]; h -> Perm[down];
up+1 -> up; down-1 -> down; i-1 -> i;
restart while2;
#);
Spiegelungsanzahl+1 -> Spiegelungsanzahl;
restart while1;
#);

(if Spiegelungsanzahl > BishMax then
Spiegelungsanzahl -> BishMax;
(for i:n repeat Perm1[i] -> PermMax[i] for)
if);

while3:
(#
do (if t // n then leave while3 if);
Perm1[1] -> h;
(for i:t repeat Perm1[i+1] -> Perm1[i] for);
h -> Perm1[t+1];
(if (Zaehl[t+1]-1 -> Zaehl[t+1]) <> 0 then restart again if);
t+1 -> t;
restart while3;
#);
#);

'Das Maximum betrug ' -> putText;
BishMax -> putInt;
'.\n bei (' -> putText;
(for i:n repeat
(if i > 1 then ' ' -> put if);
PermMax[i]+1 -> putInt;
for);
')' -> putLine;
#)

========================================================================

Jeff Dalton

Sep 13, 1994, 3:13:08 PM
to

There was a Prolog a while back that did better than C for some
numeric stuff. Did this win supporters? Nooooooooooo...

Jeff Dalton

Sep 13, 1994, 3:29:23 PM
to
In article <SIMON.94S...@liasg5.epfl.ch> si...@lia.di.epfl.ch (Simon Leinen) writes:
> > Since two different Lisp's are off by a factor of 5 and 10 from
> > CMUCL, the benchmark probably needs a little work.
>
>One might also conclude that the two other Lisps need a little work.

One might conclude all kinds of things. But would one be right?

As has been pointed out a number of times, it can be difficult to
get the best results from several different Common Lisps. Sometimes
declarations that help a lot in one make things worse in another,
and so on. I find it easier to get the best numerical results from
CMU CL than from Lucid or Allegro. This may mean that *something*
about Lucid and Allegro needs a little work, or it may just mean
that I and the CMU CL folk think alike when it comes to this.

Another factor is that the person running the benchmarks may not
know all the Lisps equally well, just as someone running C benchmarks
may not know that "-O6" is the optimization setting to use for
compiler X.

>Maybe the problem is that it is simply very hard to implement all of
>Common Lisp in a maximally efficient manner.

And the ways to get the maximally efficient stuff vary.

BTW, it took a while for C to get as efficient as it now usually(?)
is. And where would many people be without gcc? Unfortunately,
the Lisp equivalent of gcc hasn't yet come along.

-- jeff

Scott Schwartz

Sep 13, 1994, 6:33:51 PM
to
je...@aiai.ed.ac.uk (Jeff Dalton) writes:
There was a Prolog a while back that did better than C for some
numeric stuff. Did this win supporters? Nooooooooooo...

It's a necessary but not sufficient condition. Sather and C are both in
the algol vein; crossover is easy. Prolog is utterly different from C;
performance is almost the least of one's worries.

Erik Naggum

Sep 13, 1994, 8:02:28 PM
to
[Jacob Seligmann]

| [As you can see below, the C implementation uses a lot of tricks I did
| not care or was unable to use in the BETA port: in-lining (#define's),
| hard-wired variable liveliness information (loads of blocks with local
| variables, making the code very hard to read), code generation hints
| (register ...), array pointers (*d++ = *s++), unstructured gotos, etc.]

this is not benchmarking execution speeds of C or LISP or anything. this
is benchmarking a programmer's willingness to write assembly language and
to see how well his assembly language coding style fits various languages.
I'll bet gcc -O6 does a better job on reasonably-written C than it does on
this spaghetti code. I mean, writing your own version of memcpy because
you think memcpy won't be fast enough is rather pathetic when there are
CPUs out that have block move instruction idioms and good compilers that
open-code them when calls to memcpy are requested.

to paraphrase Richard Stallman from a response to a bug report for Emacs
with a horribly convoluted fix:

Unfortunately, the code you wrote looks like the intermediate stage of
scheme compilation, and it's too hard for me to read. I can't even
figure out what [it] does.

and what's this about a semicolon _after_ the closing brace in blocks?
such things worry me when I read other people's C code, because it tells me
that they don't really program in C, but translate from something else into
C. that's the feeling I got from the LISP code, too. is this really a
good way to compare languages? I think not.

are there any good ways to compare _languages_? I think programmer time
spent and number of compilation runs required to solve a particular problem
is a better indication. then, later, we can benchmark the compilers.

I don't use LISP because of its outstanding speed, anyway. I use it
because _I_ am faster in LISP than in anything else I know well enough to
compare to. two days of my time translates to the cost of a CPU upgrade to
my clients. so what if it runs five times slower and takes 100 instead of
20 milliseconds? with the upgraded CPU it'll only take 3 times more time,
and if I need to come back to enhance the functionality, they can buy an
even _faster_ CPU. we're not talking about more than constant factors in
difference between these benchmarks here, right?

BETA and LISP probably have a lot to tell us about optimal program design
in their respective mind-sets and a comparison of their relative strengths
and idiomatic expressiveness would be instructive. onwards, folks!

#<Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.

Jacob Seligmann

Sep 14, 1994, 4:35:24 AM
to
Thus spake Erik Naggum <er...@naggum.no>:

> > [Description of C spaghetti code deleted]
> this is not benchmarking execution speeds of C or LISP or anything. this
> is benchmarking a programmer's willingness to write assembly language and
> to see how well his assembly language coding style fits various languages.

I couldn't agree more! I was simply replying to a post which could be
misinterpreted as saying that BETA is inherently 25 times slower than C.
This did not reflect my personal impression, so I retrieved the C
benchmark program from the original poster, did a simple rewrite in
BETA, gained a factor 5, and posted this result along with the programs
used for you all to verify.

That is, I did not write the spaghetti code in the first place, neither
do I feel this is the way to program in BETA, or C, or LISP, or
whatever.

> are there any good ways to compare _languages_? I think programmer time
> spent and number of compilation runs required to solve a particular problem
> is a better indication. then, later, we can benchmark the compilers.

Again, I agree, as long as your programs are not orders of magnitude
slower than what is achievable, and as long as there are no inherent
barriers in the language design to ever achieving better performance.

> BETA and LISP probably have a lot to tell us about optimal program design
> in their respective mind-sets and a comparison of their relative strengths
> and idiomatic expressiveness would be instructive. onwards, folks!

Actually I was quite surprised to see a BETA-LISP comparison thread in
the first place. Sure, BETA strongly supports a functional programming
paradigm, but it is still an object-oriented language at heart - it
never ceases to amaze me just how versatile and expressive the BETA
pattern concept is! Keep the comparisons coming.

Cheers,

Jeff Dalton

Sep 14, 1994, 12:29:14 PM
to
In article <LGM.94Se...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

>I have attached the again-modified version of the benchmark.

> (dotimes (i n)
>   (declare (fixnum i))
>   (setf (svref perm1 i) i))

I should perhaps point out again that that is not always enough
to get a fully fixnum loop. I normally use macros like these:

;;; Fixnum iterations

(defmacro fix-dotimes ((var count &optional (result nil))
&body body)
(let ((count-var (gensym)))
`(let ((,count-var ,count))
(declare (fixnum ,count-var))
(do ((,var 0 (fix+ ,var 1)))
((fix>= ,var ,count-var)
,result)
(declare (fixnum ,var))
,@body))))

(defmacro do-vector-indices ((var vec &optional (result nil))
&body body)
`(fix-dotimes (,var (length (the vector ,vec))
,@(if result (list result)))
,@body))

fix+, fix>=, etc are defined like this:

(defmacro fix+ (i j)
`(the fixnum (+ (the fixnum ,i) (the fixnum ,j))))

(defmacro fix>= (i j)
`(>= (the fixnum ,i) (the fixnum ,j)))

The root problem seems to be that some Lisps worry that 1 + i may
not be a fixnum even though i is. I don't think you can always
win even by saying (dotimes (i (the fixnum n)) ...).

It's very easy to see what will happen in KCL, BTW. Define
the fns and call disassemble. The C code is sufficiently readable,
because the stuff that's what you'd do in C looks like it is.

-- jeff

Jeff Dalton

Sep 15, 1994, 12:14:46 PM
to
In article <SCHWARTZ.94...@roke.cse.psu.edu> schw...@roke.cse.psu.edu (Scott Schwartz) writes:
>je...@aiai.ed.ac.uk (Jeff Dalton) writes:
> There was a Prolog a while back that did better than C for some
> numeric stuff. Did this win supporters? Nooooooooooo...
>
>It's a necessary but not sufficient condition.

I'd say it's neither.

Christian Lynbech

Sep 15, 1994, 4:36:04 PM
to
>>>>> "Jeff" == Jeff Dalton <je...@aiai.ed.ac.uk> writes:

Jeff> In article <SCHWARTZ.94...@roke.cse.psu.edu>


Jeff> schw...@roke.cse.psu.edu (Scott Schwartz) writes:
>> je...@aiai.ed.ac.uk (Jeff Dalton) writes: There was a Prolog a while
>> back that did better than C for some numeric stuff. Did this win
>> wupporters? Nooooooooooo...
>>
>> It's a necessary but not sufficient condition.

Jeff> I'd say it's neither.

Are we about to open the `C vs. Lisp' thread again, with BETA on the
side?

(for those puzzled: this has been a raging debate for the last couple
of months in comp.lang.lisp)


------------------------------------------------------------------------------
Christian Lynbech | Hit the philistines three times over the
office: R0.33 (phone: 3217) | head with the Elisp reference manual.
email: lyn...@daimi.aau.dk | - pet...@hal.com (Michael A. Petonic)
------------------------------------------------------------------------------

Matthew McDonald

Sep 16, 1994, 1:30:30 AM
to

I know this is about beta rather than lisp, but what Jacob is
saying about beta sounds a lot like what many people have been saying
about lisp.

jac...@daimi.aau.dk (Jacob Seligmann) writes:
[...]


Here's my results (486/66; BETA compiler v5.0(2); gcc v2.5.8):

gcc -O6 -fomit-frame-pointer 2.08
BETA -no-range-check 11.00

[Actually, the compiler does not currently have a -no-range-check option.
The BETA figure was obtained by removing all boundl-instructions from
the code by hand, and reassembling; this is the only way I could
achieve a fair comparison.]

As you can see, the ratio between state-of-the-art optimized C and BETA
was 5.3, not 26.4 as above.

[...]


With a little bit of effort, there is absolutely no reason why the BETA
compiler should not be able to perform the variable liveness analysis
itself (although currently it does not, and therefore pays the heavy
price of using heap space instead of registers). Also, the linear array
sweeps are simple enough that the compiler could recognize them and
avoid the index calculations (again, it currently does not, and
therefore pays the price at each lookup).

"Some integer array hacking"-type programs are *exactly* what highly
sophisticated C compilers excel at, but there are no intrinsic reasons
why the BETA compiler should not be able to produce comparable code.
We're constantly working on it...

What Jacob's saying is
(a) Typical code written in C performs more than 5 times
better than code in his favourite language using available
implementations, and
(b) there's no reason why his favourite language couldn't be
implemented so it was competitive.

What lisp (and beta) advocates seem to often ignore is that quality of
code generation really matters to most people.

Telling people that a factor of 5 difference in run-time doesn't
really matter doesn't encourage them to use your language. Neither
does telling them that *in principle* or *some day in the future*,
your language could be competitive.

Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
PA-RISC, and RS6k, few people are going to use a language. Not many
people are going to bother explaining that performance matters to
them, they're just going to ignore you when you try to tell them
otherwise.

Which is a pity, because competitive compilers for sane languages like
beta and lisp are obviously feasible. Paul Wilson was proposing a
compiler for scheme+objects that would compete with C; CMU CL was
great (although it now seems to be largely unsupported); and the ETH
Oberon compilers are also wonderful (although the systems they're in
don't co-operate with the rest of the universe).

At least Jacob's actually working on improving the beta
implementation. As far as I can tell, the usual lisp advocate response
to performance complaints is to either:
(a) deny there's a problem,
(b) say one day there won't be a problem, or
(c) suggest you write code that looks like FORTRAN and
manually balance expression trees and other insanity.

Perhaps Gwydion or Paul Wilson's scheme+objects compiler will save the
world.

--
Matthew McDonald ma...@cs.uwa.edu.au
Nim's longest recorded utterance was the sixteen-sign declarative
pronouncement, "Give orange me give eat orange me eat orange give me
eat orange give me you."

Scott McLoughlin

Sep 15, 1994, 9:25:11 PM
to
lyn...@xenon.daimi.aau.dk (Christian Lynbech) writes:

> Jeff> I'd say it's neither.
>
> Are we about to open the `C vs. Lisp' thread again, with BETA on the
> side?
>
> (for those puzzled: this has been a raging debate for the last couple
> of months in comp.lang.lisp)

Howdy,
Sure let's open it up again ;-) But no really -- skip all this
talk of "realtime" this and that and concerns about competing with
Fortran on floating point. I'm still _VERY_ curious (concerned?)
about why Lisp isn't more popular in "the trenches". Go look at
Borland C++ compiler's output in large model (typical Windows app) -
not too slick. Now go run a typical Windows app (or Unix workstation
app for that matter). It pages like all get out when you open a
window or switch to another task. Now go install a Windows
_personal utility app_ -- 40 meg disk footprint or more. We're
not talking big DB servers, just a nice word processor or
spreadsheet.
So why don't folks use Lisp to write this stuff? Blazing
speed,space,etc. aint that critical. What gives?

Naresh Sharma

Sep 16, 1994, 5:04:42 AM
to
Matthew McDonald (ma...@cs.uwa.edu.au) wrote:

: What Jacob's saying is

: (a) Typical code written in c performs more than 5 times

^^^^^^^^^^^^^^^^^^^^^^^^^
A small correction: It should be the-state-of-the-art spaghetti code in c

: better than code in his favourite language using available
: implementations, and

Naresh
--
_______________________________________________________________________________
Naresh Sharma [N.Sh...@LR.TUDelft.NL] Herenpad 28 __|__
Faculty of Aerospace Engineering 2628 AG Delft \_______(_)_______/
T U Delft Optimists designed the aeroplane, ! ! !
Ph(Work) (+31)15-783992 pessimists designed the parachute!
Ph(Home) (+31)15-569636 Plan:Design Airplanes on Linux the best OS on Earth!
------------------------------PGP-KEY-AVAILABLE--------------------------------

Stefan Monnier

Sep 16, 1994, 10:15:25 AM
to
In article <MAFM.94Se...@wambenger.cs.uwa.edu.au>,

Matthew McDonald <ma...@cs.uwa.edu.au> wrote:
> Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
> PA-RISC, and RS6k, few people are going to use a language. Not many
> people are going to bother explaining that performance matters to
> them, they're just going to ignore you when you try to tell them
> otherwise.

So true !
The best example is probably Visual Basic, right ?


Stefan

William D. Gooch

Sep 16, 1994, 10:03:35 AM
to
On 16 Sep 1994, Matthew McDonald wrote:

> (a) Typical code written in c performs more than 5 times
> better than code in his favourite language using available
> implementations, and

This is exactly the sort of unsupportable generalization I was afraid
would result from the net publication of this so-called "benchmark."
(Not a reference to Jacob's post, but to the original.) Differences in
performance do not boil down to single numbers, nor any kind of simple
comparison for that matter.

> .... Telling people that a factor of 5 difference in run-time doesn't
> really matter doesn't encourage them to use your language.

Most of the time, a factor of five is negligible. In critical parts of
the code, however, a factor of 1.05 may be important.

> .... As far as I can tell, the usual lisp advocate response
> to performance complaints is to either:
> (a) deny there's a problem,
> (b) say one day there won't be a problem, or
> (c) suggest you write code that looks like FORTRAN and
> manually weigh expression trees and other insanity.

This is gross oversimplification. As a "lisp advocate," I consistently
maintain that if and when there is a performance problem, it can be dealt
with in the same way one deals with performance problems in any language.
There is not a generalized performance problem inherent in lisp.

Lawrence G. Mayka

Sep 16, 1994, 6:50:43 PM
to
In article <Cw4oG...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:

In article <LGM.94Se...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

>I have attached the again-modified version of the benchmark.

> (dotimes (i n)
> (declare (fixnum i))
> (setf (svref perm1 i) i))

> I should perhaps point out again that that is not always enough
> to get a fully fixnum loop. I normally use macros like these:

Yes, I added sufficient declarations for LispWorks, but not
necessarily enough for other CL implementations.

> The root problem seems to be that some Lisps worry that 1 + i may
> not be a fixnum even though i is. I don't think you can always
> win even by saying (dotimes (i (the fixnum n)) ...).

Yes. Even if i is declared to be FIXNUM, its value might be
MOST-POSITIVE-FIXNUM or MOST-NEGATIVE-FIXNUM, in which cases (1+ i) or
(1- i) will overflow into a bignum. Theoretically, a type declaration
of

(declare (type (integer -30000 30000) i))

should suffice to ensure that (1+ i) and (1- i) generate fixnum
arithmetic; but implementations may vary, of course.

Scott McLoughlin

Sep 16, 1994, 6:37:15 PM
to
schu...@ricotta.cs.wisc.edu (Lee Schumacher) writes:

> Of course from the faculties point of view, they don't want to teach
> lisp (thats manual labor, to their eyes), they want to teach AI. The
> end result is that lisp gets shorted in school, so when the graduates
> of this program get out into the real world lisp is never seriously
> considered for any job at hand - its too esoteric, too academic, too
> damn *hard*...
>
> sadly,
> Lee ...
>

Howdy,
Ok - to sum up your reply as best I can (thanks): Lisp is
not used "in the trenches" (not particularly time/space intensive apps -
about 95% of the code hacked out there) because Lisp is not taught well
to CS types in college. It is therefore "too hard" a language to use.
I'm not sure that I agree.

1. Scheme seems to be an increasingly popular intro CS language. ML
also seems popular (maybe this is irrelevant, but I think of ML as
"same family" as Lisp/Scheme).

2. C and especially C++ are _very_ hard languages to learn. I've lost
count of the times I've explained pointers/arrays to budding C/C++
programmers. Whatever one thinks of coding style and/or what a compiler
_should_ do, all the C code I've looked at or hacked is full of
while (*p++) if (*p == MG_COOKIE) foobar((CAST_TYPE)(p + offset))
etc. Understanding this type of code (bouncing back and forth between
application level and machine level semantics) is _REQUIRED_ of C/C++
programmers in the trenches.

3. 4GL's, various BASIC dialects, Turbo Pascal and the now ubiquitous
giant Windows API are, in my experience, _NOT_ taught in colleges (not
widely anyway). Programmers in the trenches learn this stuff from
_BOOKS_ and/or from training sessions and/or "on the job". Visual Basic
is a pretty giant beast (not your old Apple BASIC) with all kinds of
weird special cases and options. Variable scoping is downright weird
(Form level variables), etc. But this is tossed at folks with a 2
year community college degree. Most folks program with one eye on
the manual or online help.

4. Given (2) and (3) above, there are _lots_ of nice Lisp books.
Furthermore, Lisp and Scheme are much cleaner regarding heap objects
(vectors, strings, cons cells). Why aren't folks hacking Lisp with
one eye on Wilensky or Winston/Horn or Simply Scheme????

Here is an initial conjecture:

1. Cost of commercial implementations. Plain and simple. Look at
PowerBuilder and Gupta SQL Windows. Obtaining market share requires
a low cost implementation. (Personal Eiffel, also - $49) Free is
generally no good "in the trenches".

2. Nasty file/io. Folks want/need to write files of structured
data types and read them back in without thinking too hard about
it, e.g. Pascal's FILE of RECTYPE or C's fwrite(&s, sizeof(RECTYPE), 1, f).
Lots of simple file/io in the trenches.

3. Nasty looping constructs. Hard to hack DO looking at the
manual. LABELS and LETREC are elegant, but complicated. DOTIMES and
DOLIST are a step in the right direction. There should have been
a CL standard WHILE/UNTIL for "bootstrapping" nincompoops (sp?).
LOOP is newer and not so widely used and/or written about. Otherwise
LOOP is fine. Of _course_ Lisp has superior looping/iteration
constructs, but it takes newcomers a few months to realize this.

4. Doesn't fit in well with popular version control products. Even
small, unsophisticated shops will use PVCS or a similar product.
Edit/Compile/Link/Debug works well with these products. This is
not a killer though -- only "sophisticated" trench shops rely
on VC.

5. Money (BCD) and Date types. Lots of trenches programming is about
dates and dollars. VB has a gadzillion date and money formatting
operations/options. Of course, C and Pascal don't have these, so these
are not necessarily "killers". C/Pascal "trenches" programming tends to
be full of rounding errors, etc. because of the absence of these types.

Anyway, these are probably "sufficient" conditions to keep Lisp from
being "popular" in every 1-4 dude(tte) project that goes on across
the U.S. in suburban green wall-to-wall carpet corporate and govt.
offices. It's too bad. These folks could _really use_ a Lisp.
More of these projects might succeed/survive.

Jacob Seligmann

Sep 16, 1994, 7:01:19 AM
to
Matthew McDonald (ma...@cs.uwa.edu.au) wrote:

> What Jacob's saying is
> (a) Typical code written in c performs more than 5 times
> better than code in his favourite language using available
> implementations, and
> (b) there's no reason why his favourite language couldn't be
> implemented so it was competitive.

Again, I was merely replying to an earlier post which was
misinterpreted as saying that BETA was *inherently* 25 times or more
slower than C. I did so by using the original C program containing
tight loops with lots of pointer arithmetic to write an equivalent BETA
program which was "only" 5 times slower (thereby trying to show that
the factor of 25 was much too pessimistic), and finally explained the
difference in the code produced (thereby trying to show that the
slowdown was not a product of the language design, only its current
implementation).

> What lisp (and beta) advocates seem to often ignore is that quality of
> code generation really matters to most people.
>
> Telling people that a factor of 5 difference in run-time doesn't
> really matter doesn't encourage them to use your language. Neither
> does telling them that *in principle* or *some day in the future*,
> your language could be competitive.

We at Mjolner Informatics take the quality of the produced code
*extremely* seriously. There is no reason why we should not be able to
produce code comparable to C for the imperative portions of the
language, and we're constantly working on it.

Meanwhile, it is our practical experience that for real applications
(not dhrystone-type benchmarks) the quality of the code currently
produced is more than acceptable. Also, BETA provides an easy interface
to C; if 5% of your code is time-critical, you can therefore still
write it in your favorite procedural, optimized language, and use
BETA's modelling features and extensive libraries for the remaining 95%.

Cheers,

Peter da Silva

Sep 16, 1994, 11:59:32 PM
to
In article <35c99t$l...@info.epfl.ch>,

Stefan Monnier <mon...@di.epfl.ch> wrote:
>In article <MAFM.94Se...@wambenger.cs.uwa.edu.au>,
>Matthew McDonald <ma...@cs.uwa.edu.au> wrote:
>> Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
>> PA-RISC, and RS6k, few people are going to use a language. [...]

>The best example is probably Visual Basic, right ?

Well, unless you're Microsoft.

Where does a 500 pound gorilla sleep?
--
Har du kramat din varg idag? (Have you hugged your wolf today?)

Lee Schumacher

Sep 16, 1994, 3:07:23 PM
to
In article <os2Psc...@sytex.com> sm...@sytex.com (Scott McLoughlin) writes:

>Howdy,
> Sure let's open it up again ;-) But no really -- skip all this
>talk of "realtime" this and that and concerns about competing with
>Fortran on floating point. I'm still _VERY_ curious (concerned?)
>about why Lisp isn't more popular in "the trenches". Go look at
>Borland C++ compiler's output in large model (typical Windows app) -
>not too slick. Now go run a typical Windows app (or Unix workstation
>app for that matter). It pages like all get out when you open a
>windows or switch to another task. Now go install a Windows
>_personal utility app_ -- 40 meg disk footprint or more. We're
>not talking big DB servers, just a nice word processor or
>spreadsheet.
> So why don't folks use Lisp to write this stuff? Blazing
>speed,space,etc. aint that critical. What gives?
>
>=============================================
>Scott McLoughlin
>Conscious Computing
>=============================================

Well, I've been occupying the training ground for the trenches for the
past couple o' weeks and I now have a very good idea why those who
graduate to the trenches don't use lisp - poor education. Here at the
UW comp sci dept one's introduction to lisp consists of a couple of
lectures in the AI course + one chapter in the text. Assignment 1
follows immediately: write 3 simple functions in lisp (set union,
intersection, difference), and then write a substantial program to
drive the "Agent World" simulator. Now, there's nothing wrong with
agent world (at first glance, anyway), but the students have little
experience in using an interpreted system, or any grasp of the
underlying lisp paradigms, so naturally they're frustrated. This
frustration is taken out on lisp - they don't understand it, therefore
it must be hard.

John Doner

Sep 16, 1994, 8:15:37 PM
to
In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
>I'm still _VERY_ curious (concerned?)
>about why Lisp isn't more popular in "the trenches".
...

> So why don't folks use Lisp to write this stuff? Blazing
>speed,space,etc. aint that critical. What gives?

I posed a similar question to the comp.lang.dylan newsgroup a couple of
months ago. I received many interesting replies which I haven't yet
fully digested. Reasons having to do with availability, popularity,
vendors' support, what they teach in school, etc., are common, but
really beg the question because they don't explain why C got there in
the first place. None of them can explain why C won out over Pascal,
for example. My latest theory is that the answer lies in cognitive
effects arising from the conception and structure of the language.
People make up mental models of how things work, and interpret the
programs they write in terms of those models. For experienced
programmers, compiler writers perhaps, these models are complete and
accurate, closely corresponding to the objective reality. Novice
programmers have poor models that are incomplete, poorly related to the
actual computing machines, and perhaps even inconsistent. The
intellectual effort required to develop a good model for Lisp or Ada is
much greater than that required to develop one for C. There are more
abstractions involved. Thus, C is more easily comprehended by
inexperienced programmers.

I invite criticism of this theory.

John Doner

Marty Hall

Sep 17, 1994, 11:27:01 PM
to
In article <sooRsc...@sytex.com> sm...@sytex.com (Scott McLoughlin) writes:
>schu...@ricotta.cs.wisc.edu (Lee Schumacher) writes:
>
>> Of course from the faculties point of view, they don't want to teach
>> lisp (thats manual labor, to their eyes), they want to teach AI. The
>> end result is that lisp gets shorted in school, so when the graduates
>> of this program get out into the real world lisp is never seriously
>> considered for any job at hand - its too esoteric, too academic, too
>> damn *hard*...
[...]

> Ok - to sum up your reply as best I can (thanks): Lisp is
>not used "in the trenches" (not particularly time/space intensive apps -
>about 95% of the code hacked out there) because Lisp is not taught well
>to CS types in college. It is therefore "too hard" a language to use.
> I'm not sure that I agree.

I don't think this is the only reason, but I do agree that one problem
is Lisp education. My experience (doing AI and working with AI/Lisp
programmers in industry for 8 years and teaching AI and Lisp
programming for 6 years) has been that Lisp is frequently taught only
to illustrate AI concepts. In fact, I am guilty of that myself in
the Intro AI course I teach to part-time MS students. I just want to
give enough for them to try out things they've been introduced
to. There really isn't much time to point out many issues, so I have
to save that for my AI Programming course. This problem is all the
worse for the faculty without backgrounds in "serious" Lisp
programming and who don't have the desire I do to promote Lisp.

I've seen very, very, very few people who had a Lisp course that even
talked about efficiency issues, was careful to discuss the costs of
using linked lists, talked about GC issues and boxed data types,
and so forth. People aren't [usually] taught Lisp; they're taught AI
with a little Lisp thrown in along the way.

>1. Scheme seems to be an increasingly popular intro CS language. [...]

I hope you are correct wrt the "increasingly" part. It is certainly
true at some enlightened institutions. But my experience with people
doing Lisp work in industry has brought me in contact with very, very,
few people with this type of introduction. Other people's experiences
may vary, of course.
- Marty
(proclaim '(inline skates))

Arun Welch

Sep 17, 1994, 8:57:07 PM
to
In article <sooRsc...@sytex.com> sm...@sytex.com (Scott McLoughlin) writes:

> Why aren't folks hacking Lisp with
> one eye on Wilensky or Winston/Horn or Simply Scheme????
>
> Here is an initial conjecture:
>
> 1. Cost of commercial implementations. Plain and simple. Look at
> PowerBuilder and Gupta SQL Windows. Obtaining market share requires
> a low cost implementation.

It's interesting that you bring up PowerBuilder, as it raises some
issues important to the Lisp community. There really isn't a low-cost
implementation; the Desktop version is severely crippled for
developing code. So, we've got a product that is:
a) expensive, around $4K for a reasonably equipped system
b) slow. PB is an order of magnitude slower than Medley on the same
hardware, and Medley was never considered a speed demon in the Lisp world.
c) buggy. I get at least a GPF a day, frequently more.
d) poorly documented. Order of instructions can have interesting
effects, none of which are documented. There is 1 book on PB on the
market, compared to a couple dozen on Lisp.
e) support is *expensive*. For $5000 you get to call an 800 number
(where I'll admit they're generally helpful), but no updates. I
hereby publicly apologize to Xerox, Venue, Franz, and Lucid for commenting
to their sales reps that their support costs were ludicrous. No
site licenses either.
f) proprietary.
g) training is expensive, only available from PowerSoft or their
authorised representatives.
h) non-portable, though this will change in version 4.0.
i) single inheritance (actually, I could probably go to z comparing
the features that CLOS has over the attempt at OOP in PB)
j) closed. No access to the object system.
k) poor looping constructs, and no recursion. As languages go,
PowerScript is *very* primitive.

I could go on, but you get the idea. On the other hand, PowerBuilder
is *incredibly* successful. Any Lisp vendor (heck *any* language
vendor) would kill for that kind of growth rate. Why is it so popular?
I don't really know, but I guess it's because it takes an incredibly
grueling task (grovelling in a database) and makes it easier. The
difference between doing DB stuff by hand and using PowerBuilder is so
incredibly large that people are willing to put up with a lot of
grief. Also, they don't really have much competition. In every single
study I've seen between PB and the other 2 or 3 products that do
similar things PB comes out on top. Compare this to how many different
Lisp vendors, all of whom have products that stand out in some way
over their competition.

It's a weird world we live in.

...arun
--
---------------------------------------------------------------------------
Arun Welch 2455 Northstar Rd
Network Engineer Columbus, OH 43221
OARnet we...@oar.net

Lawrence G. Mayka

Sep 18, 1994, 4:46:03 PM
to
In article <MAFM.94Se...@wambenger.cs.uwa.edu.au> ma...@cs.uwa.edu.au (Matthew McDonald) writes:

> I know this is about beta rather than lisp, but what Jacob is
> saying about beta sounds a lot like what many people have been saying
> about lisp.
>
> ...
> What Jacob's saying is
> (a) Typical code written in c performs more than 5 times
> better than code in his favourite language using available
> implementations, and

For Common Lisp (LispWorks and CMU, at least), the factor was 2, not
5. Moreover, I certainly disagree that this small numerical-analysis
benchmark is a typical C program. It's certainly not typical of, say,
telecommunications switching systems. Nevertheless, I agree that the
potential difference in efficiency of generated code between CL and
(the best) C compilers remains an obstacle to broadening CL's market.

> Telling people that a factor of 5 difference in run-time doesn't
> really matter doesn't encourage them to use your language. Neither
> does telling them that *in principle* or *some day in the future*,
> your language could be competitive.

I agree that "in principle" is a weak argument; "can be fixed by the
vendor within n months/years" is a somewhat better argument (if
true!).

> Unless you have competitive ports to x86, Sparc, Alpha, DEC & SG Mips,
> PA-RISC, and RS6k, few people are going to use a language. Not many

Common Lisp has all these.

> At least Jacob's actually working on improving the beta
> implementation. As far as I can tell, the usual lisp advocate response
> to performance complaints is to either:
> (a) deny there's a problem,

I don't deny the existence of a problem.

> (b) say one day there won't be a problem, or

I think a concerted effort by vendors could substantially solve the
problem--i.e., make the best Common Lisp compilers generate code
substantially competitive with the code generated by the best C
compilers--within a reasonably short time period (perhaps 1-2 years).
Presumably, this will occur only if/when CL users raise its priority
above that of other new features and quality improvements.

> (c) suggest you write code that looks like FORTRAN and
> manually weigh expression trees and other insanity.

Ideally, code should "look like" the abstract solution to the problem
at hand. If the problem is numerical analysis, then shuffling vector
elements is precisely the =correct= abstraction of the
solution--mathematicians have no higher-level abstraction than that.
If you want to say that the ideal compiler would deduce all datatypes
so that explicit type declarations would be unnecessary, I agree; but
I don't see how you can hold this lack of type inferencing against
Common Lisp when C does no such thing either.

Bob Hutchison

Sep 19, 1994, 11:44:40 AM
to
In <35dcf9$j...@news.aero.org>, do...@aero.org (John Doner) writes:
>In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
>>I'm still _VERY_ curious (concerned?)
>>about why Lisp isn't more popular in "the trenches".
>....

>> So why don't folks use Lisp to write this stuff? Blazing
>>speed,space,etc. aint that critical. What gives?
>
>I posed a similar question to the comp.lang.dylan newsgroup a couple of
>months ago. I received many interesting replies which I haven't yet
>fully digested. Reasons having to do with availability, popularity,
>vendors' support, what they teach in school, etc., are common, but
>really beg the question because they don't explain why C got there in
>the first place. None of them can explain why C won out over Pascal,
>for example.

There is a perspective on this that I suspect you would have had to have
worked through the late '70s and early '80s on micro computers to see.
Micros were not fast then and memory was expensive. There were a
few languages 'fighting' it out for system and application development
at the time: Basic (the leader), C, and Pascal. At the time programmers
were asking why Basic was being used so widely. There was incredible
pressure to develop software using Basic. Basic handled memory for you,
automatically, though for many situations not well enough. Basic was also
slow. If you needed something faster you went to C or Pascal. At the
time C was available for free (thanks to UNIX and, I think, Dr. Dobb's), Pascal
was not (it was expensive). Pascal was also usually pcode based and
ran in proprietary environments. Pascal also fell down badly when you
needed control of your memory. In other words, the available Pascal
*implementations* were not different enough from Basic in a few key
aspects. C was used. (Let me point out one other belief that will perhaps
further illustrate the world at that time, it was the official position of
a very large multi-national company that compilers were unreliable and
produced poor code, and that this was *inherent* in the technology).

Briefly, C filled a gaping hole in micro computer languages. So C was used
on micros and was widely used in universities due to UNIX.

(The current arguments for lisp/smalltalk/etc are an interesting
reversal of the situation I've described as having existed way back
then).

> My latest theory is that the answer lies in cognitive
>effects arising from the conception and structure of the language.
>People make up mental models of how things work, and interpret the
>programs they write in terms of those models. For experienced
>programmers, compiler writers perhaps, these models are complete and
>accurate, closely corresponding to the objective reality. Novice
>programmers have poor models that are incomplete, poorly related to the
>actual computing machines, and perhaps even inconsistent. The
>intellectual effort required to develop a good model for Lisp or Ada is
>much greater than that required to develop one for C. There are more
>abstractions involved. Thus, C is more easily comprehended by
>inexperienced programmers.

Interesting theory here, but I don't think I agree. In my experience, novice
programmers are so caught up in the details that they cannot make
useful abstractions at all. This is the kind of thing you would expect
of anyone learning something (e.g. many sports, especially team sports).
I think that you are right that C is easier for them to comprehend, but
I don't think it is because it has easier abstractions. I think it is because
they can use the computer hardware itself as C's abstraction, that is,
use a concrete thing as an abstraction -- what an illusion :-) I guess
that I think you are basically right that the novice has a difficult time
forming a useful understanding of how the language works, but I think
this is difficult for any language. I think that in C's case the novice can
cheat.

My real disagreement with your theory is that I don't think it is sufficient
to explain why C is actually used. Why does an experienced programmer
use it? In my case it is because of the history I described above. C has
also been able to keep up with changes, it is still a useful tool and so
it does not force me to look to something else. These are the reasons
for many of my contemporaries too (the ones that didn't live in a university
for ten years anyway :-). I've been a senior member of a team, the
manager of a team, then manager of a bunch of teams, a technical
director of a whole bunch of teams. Does the fact that I use C/C++
have something to do with my teams using it? I don't take my influence,
in itself, as being overly convincing to anyone, but when so many of my
contemporaries are the same?

How do you get an experienced programmer to consider something other
than C? I've found it tremendously effective to (persistently) encourage
programmers to try something serious in smalltalk. So far I've had a
100% success rate in getting them to realise the limitations of C/C++.
Smalltalk isn't magic, but it has a really nice bunch of tools that go
with it. The lisp environments would work just as well I think, but until
recently they were too expensive. (These guys all bought smalltalk
with their own money -- I can be very persistent :-).

Once you get the more senior guys thinking about alternatives you then
have to ask why they don't do something about it. This is a really
interesting situation. Ultimately it seems to come down to the body of
existing code and a bit of nagging uncertainty. The nagging uncertainty
is the more significant of the two in most cases, it seems.

In short, I don't think there are any technical reasons why C/C++ is the
most commonly used programming language, nor do I think that C/C++
as a language differs sufficiently from other languages to make a
difference. I think the reasons are (in order) historical, fear/doubt
(i.e. risk), and existing bodies of working code.

Even shorter, C/C++ is being used for the same reasons COBOL is
still used. (an awfully long post to come to that conclusion, sorry :-)

--
Bob Hutchison, hu...@RedRock.com
RedRock, 135 Evans Avenue, Toronto, Ontario, Canada M6S 3V9
(416) 760-0565

Mark S. Riggle

Sep 19, 1994, 2:38:25 PM
to

In article <35dcf9$j...@news.aero.org>, do...@aero.org (John Doner) writes:
|> The
|> intellectual effort required to develop a good model for Lisp or Ada is
|> much greater than that required to develop one for C. There are more
|> abstractions involved.
|>
|> I invite criticism of this theory.
|>


It's the SLSM theory again (my favorite). That is 'Simple Languages
for Simple Minds'.

Although you can argue that C is not a simple language since it
is so hard to get things right.

--
=========================================================
Mark Riggle | "Give me LAMBDA or
sas...@unx.sas.com | give me death"
SAS Institute Inc., |
SAS Campus Drive, Cary, NC, 27513 |
(919) 677-8000 |


Scott Schwartz

Sep 19, 1994, 4:11:37 PM
to
sas...@zinfande.unx.sas.com (Mark S. Riggle) writes:
> It's the SLSM theory again (my favorite). That is 'Simple Languages
> for Simple Minds'.

That's not it at all. People who like C like it because It Gets The Job
Done. For practically any task, you can hand someone a C compiler, and
they can feel assured that it will efficiently do what they want. All
the weirdness and complexity of the language just feeds the user's
feeling of controlling a real power tool.

To give a second example, Larry Wall spent lots of time advocating Perl
in the unix newsgroups. His sales pitch was always the same: perl is
easier and faster. Easier means shorter, more obviously correct
programs. Faster means "runs faster". That's why perl is so popular
today.

Adrian L. Flanagan

Sep 19, 1994, 6:13:25 PM
to
do...@aero.org (John Doner) writes:

>In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
>>I'm still _VERY_ curious (concerned?)
>>about why Lisp isn't more popular in "the trenches".
>...
>> So why don't folks use Lisp to write this stuff? Blazing
>>speed,space,etc. aint that critical. What gives?

[long abstract theory deleted]

>I invite criticism of this theory.

>John Doner

I must strenuously disagree with the original poster. "Blazing
speed,space,etc." are that critical. Particularly in the PC DOS
world with its 640K restriction, program size and efficiency of
compiled code made a tremendous market difference in acceptance of
early commercial programs. Programmers writing in C had a large
advantage over programmers using the early Lisp systems, and lesser
but still significant advantages over Pascal programmers (although
some commercial apps were written in Turbo Pascal). Casual users
may have been better off using Lisp, but they wanted to use what the
"big boys" were using, and the vendors of support tools followed the
pro developers.

The (relative) failure of Lisp has everything to do with Lisp
vendors' failure to understand (even now) the needs of their
marketplace. Call it Ivory Tower Syndrome.
--
A. Lloyd Flanagan a.k.a. "Wild Card"
Think: What you do when you can't thwim. -- Dexter's Disturbed Dictionary

Scott McLoughlin

unread,
Sep 19, 1994, 10:57:05 PM9/19/94
to
csc...@cabell.vcu.edu (Adrian L. Flanagan) writes:

>
> I must strenuously disagree with the original poster. "Blazing
> speed,space,etc." are that critical. Particularly in the PC DOS
> world with its 640K restriction, program size and efficiency of
> compiled code made a tremendous market difference in acceptance of
> early commercial programs. Programmers writing in C had a large
> advantage over programmers using the early Lisp systems, and lesser
> but still significant advantages over Pascal programmers (although
> some commercial apps were written in Turbo Pascal). Casual users
> may have been better off using Lisp, but they wanted to use what the
> "big boys" were using, and the vendors of support tools followed the
> pro developers.
>
> The (relative) failure of Lisp has everything to do with Lisp
> vendors' failure to understand (even now) the needs of their
> marketplace. Call it Ivory Tower Syndrome.
> --
> A. Lloyd Flanagan a.k.a. "Wild Card"
> Think: What you do when you can't thwim. -- Dexter's Disturbed Dictionary

Howdy,
1. Right. DOS+640K definitely gives advantage to C/Pascal/Asm.
Esp with TSR's and other weirdnesses. My question referred to the folks,
though, who are using VB and VC and BCW4.0 _now_ - big environments and
big images. I like the general historical picture, though.
2. When I say "no blazing speed", I mean -- "Not hard realtime".
I'm assuming only a 2X to 3X speed advantage of native C over compiled
Lisp. Anybody ever benchmark Allegro vs. Borland C++ on, say, TAK?
3. "Ivory Tower Syndrome" -- YES! Now I think we're getting to
the heart of the matter ;-)

Jeff Dalton

unread,
Sep 20, 1994, 10:34:28 AM9/20/94
to
In article <35bttv$3...@belfort.daimi.aau.dk> jac...@daimi.aau.dk (Jacob Seligmann) writes:
>Matthew McDonald (ma...@cs.uwa.edu.au) wrote:
>
>> What Jacob's saying is
>> (a) Typical code written in c performs more than 5 times
>> better than code in his favourite language using available
>> implementations, and
>> (b) there's no reason why his favourite language couldn't be
>> implemented so it was competitive.
>
>Again, I was merely answering to an earlier post which was
>misinterpreted as saying that BETA was *inherently* 25 times or more
>slower than C. I did so by using the original C program containing
>tight loops with lots of pointer arithmetic to write an equivalent BETA
>program which was "only" 5 times slower (thereby trying to show that
the factor of 25 was much too pessimistic), and finally explained the
>difference in the code produced (thereby trying to show that the
>slowdown was not a product of the language design, only its current
>implementation).

Which is exactly right. It's necessary to know why the code is
slower and whether it can be fixed (and how hard it would be to
fix it) before you can reach conclusions about the _language_.

>> What lisp (and beta) advocates seem to often ignore is that quality of
>> code generation really matters to most people.

Who's done that? I haven't noticed it. But then I'm not on the
lookout for such things.

>> Telling people that a factor of 5 difference in run-time doesn't
>> really matter doesn't encourage them to use your language. Neither
>> does telling them that *in principle* or *some day in the future*,
>> your language could be competitive.

So? Let's settle first what's true. What people decide to do with
that information is then up to them.

[More reasonable stuff from Jacob Seligmann omitted.]

-- jeff

Jeff Dalton

unread,
Sep 20, 1994, 11:55:38 AM9/20/94
to
In article <MAFM.94Se...@wambenger.cs.uwa.edu.au> ma...@cs.uwa.edu.au (Matthew McDonald) writes:
>
> I know this is about beta rather than lisp, but what Jacob is
>saying about beta sounds a lot like what many people have been saying
>about lisp.

Many people? Like who, for instance?

I hope we don't add a misleading "Lisp advocate" stereotype
to the already misleading "Lisp" stereotype.

>Which is a pity, because competitive compilers for sane languages like
>beta and lisp are obviously feasible. Paul Wilson was proposing a
>compiler for scheme+objects that would compete with C, CMU CL was
>great (although it now seems to be largely unsupported) and the ETH
>Oberon compilers are also wonderful (although the systems they're in
>don't co-operate with the rest of the universe.)

So you're actually on the same side as the Lisp advocates (modulo
your misleading characterization of them).

>At least Jacob's actually working on improving the beta
>implementation.

That's a rather unfair complaint. I have to work, and I'm
not employed these days to improve Lisp implementation. A
number of other Lisp "advocates" are in a similar position.

>As far as I can tell, the usual lisp advocate response
>to performance complaints is to either:
> (a) deny there's a problem,

Who has done that? There are a number of obvious problems, in
addition to any non-obvious ones. For instance, Lucid CL is too
large for me to use it for my work if I try to run it on the machine
on my desk.

> (b) say one day there won't be a problem, or

Who says that? Since no one knows what will happen in the
future, how can anyone say there won't be a problem one day?

> (c) suggest you write code that looks like FORTRAN and
> manually weigh expression trees and other insanity.

Who said anything about writing code that looks like FORTRAN?

The only thing close to that I can recall was in the "data bloat"
thread where it was pointed out that you could pack data by using
parallel arrays (rather than structs) as in FORTRAN. The code
still wouldn't have to look like FORTRAN. Nor does
declaration-filled Common Lisp look like FORTRAN.

(I have seen Lisp code that looked like FORTRAN, BTW. Imagine
lots of PROGs and GOs. But not for many years.)

-- jeff

Jeff Dalton

unread,
Sep 20, 1994, 12:28:59 PM9/20/94
to
In article <35dcf9$j...@news.aero.org> do...@aero.org (John Doner) writes:
>In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
>>I'm still _VERY_ curious (concerned?)
>>about why Lisp isn't more popular in "the trenches".
>...
>> So why don't folks use Lisp to write this stuff? Blazing
>>speed,space,etc. aint that critical. What gives?
>
>I posed a similar question to the comp.lang.dylan newsgroup a couple of
>months ago. I received many interesting replies which I haven't yet
>fully digested. Reasons having to do with availability, popularity,
>vendors' support, what they teach in school, etc., are common, but
>really beg the question because they don't explain why C got there in
>the first place. None of them can explain why C won out over Pascal,
>for example.

I'd suggest:

1. Unix.
2. Pascal is too restrictive.
3. Positive feedback effects.

> My latest theory is that the answer lies in cognitive
>effects arising from the conception and structure of the language.
>People make up mental models of how things work, and interpret the
>programs they write in terms of those models. [...] The
>intellectual effort required to develop a good model for Lisp or Ada is
>much greater than that required to develop one for C. There are more
>abstractions involved. Thus, C is more easily comprehended by
>inexperienced programmers.
>
>I invite criticism of this theory.

Ok. What you say makes sense for Ada and maybe for Common Lisp.
But I don't think Lisp is very difficult, when properly taught.
Moreover, C is a rather tricky and complex language. I found it
harder to learn C than Pascal (and indeed harder than a number
of other languages). I think C is just about at the edge of
what's reasonably learnable with C++ going too far. (This is
not to say people can't learn to program well in C or C++,
but there are a lot more "tricks of the trade" to pick up, it
seems to me. For instance, I've had to write "*(char **)" as
a cast. A correct model of C is rather hard to develop, it
seems to me. But perhaps it's easier to get by with incorrect
models?)

I also think it's fairly easy to explain Lisp to C programmers.
You can explain how Lisp data structures could be represented in
C (a pointer to a union of various structs and other things),
and say something about the syntax and storage management.
This gives a model of Lisp in terms of a small subset of C.

-- jeff

Jeff Dalton

unread,
Sep 20, 1994, 2:13:22 PM9/20/94
to
In article <1994Sep19....@cabell.vcu.edu> csc...@cabell.vcu.edu (Adrian L. Flanagan) writes:
>do...@aero.org (John Doner) writes:
>
>>In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
>>>I'm still _VERY_ curious (concerned?)
>>>about why Lisp isn't more popular in "the trenches".
>>...
>>> So why don't folks use Lisp to write this stuff? Blazing
>>>speed,space,etc. aint that critical. What gives?
>
>[long abstract theory deleted]
>
>>I invite criticism of this theory.
>
>>John Doner
>
>I must strenuously disagree with the original poster. "Blazing
>speed,space,etc." are that critical.

Then why are so many things so large and slow? Sure, there are
some cases where speed, space, etc are critical, but there must
be many others where they aren't. I think you are right to an
extent, but it can't be the whole story.

>Particularly in the PC DOS
>world with its 640K restriction, program size and efficiency of
>compiled code made a tremendous market difference in acceptance of
>early commercial programs. Programmers writing in C had a large

>advantage over programmers using the early Lisp systems, [...]

>The (relative) failure of Lisp has everything to do with Lisp
>vendors' failure to understand (even now) the needs of their
>marketplace. Call it Ivory Tower Syndrome.

Did anyone really think Lisp would occupy the place C now has?
If so, they sure went about it in a bizarre way!

Most Common Lisp vendors at least did not seem to see the PC DOS world
as their market. (Or, again, if they did, they approached it in a very
strange way.) There is a market that commercial Common Lisps served
fairly well. It was more restricted than it could have been, even if
we look only at reasonably powerful "workstations". Perhaps the vendors
didn't realize how much people would still want to do Unix stuff
rather than just live in the Lisp World. I don't know.

A strange thing is that it sometimes looks like only success in the
PC market counts at all. The PC market is a rather odd place.
The OS technology would have been laughed out of town in the 70s.
And yet people put up with 640K restrictions, no virtual memory,
no proper multi-tasking, etc, for ages. I think it's reasonable
that someone might not have predicted that things would develop
the way they did in the PC market, much less that the PC market
would start to dominate other markets (at least so far as
perception of success is concerned).

Moreover, I think it's surprising that _any_ language has so
dominant a position.

-- jeff

Patrick D. Logan

unread,
Sep 20, 1994, 12:04:41 PM9/20/94
to
In article <CwFsG...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>I'd suggest:

>1. Unix.
>2. Pascal is too restrictive.
>3. Positive feedback effects.

My guess: (nothing else to talk about so,...)

I think there is peer pressure to understand and use C. Everyone else does, so
it would be bad not to do so as well.

Lisp has a reputation of being different and not widely accepted for various
reasons. Therefore it is acceptable not to understand and use Lisp.



Scott McLoughlin

unread,
Sep 21, 1994, 3:04:46 AM9/21/94
to
si...@rheged.dircon.co.uk (Simon Brooke) writes:

> Out there in the real world, there are n (where n > 20, probably)
> people sitting in front of a Windows box for every one person sitting
> in front of a real computer. So if you write a program in Visual
> BASIC, you'll be able to sell it. You won't be able to maintain it, of
> course, but that's the customer's problem.
>
> If you write a program in a real language for a real computer, *unless
> you can port it to Windows (or windows NT, or Windows 95, or whatever
> other dreck Bill Gates decides to unload on the uneducated next)* you
> are not going to sell it. I know. I set up a company in 1988 to
> develop knowledge engineering tools. I said to myself 'the PC isn't
> powerful enough to do what I want to do, so I'll develop tools for
> real computers'. There were other mistakes I made, but I think it was
> that one that cost me the company...

Howdy,
Exactly. I suspect that N is much > 20 if you don't count
bank machines ;-) Now, how do we get the "real languages" in use
on not so real Windows boxes? What exactly _is_ the "political
economy" of commercial dynamic language implementations?
We've got a $49 "Personal Eiffel". Is anyone out there
working on a "Personal Lisp" or "Personal Scheme" or "Personal
Dylan" or "Personal ML"???? If not -- Why Not? Were all the
venture capitalists "burned" in the 80's hyped-up AI binge? Do
the implementors have a comfy niche with Govt contracts and
research grants, so why bother? Are these languages so hard
to implement (well) that the economics simply won't bear a
low cost implementation (with CL it's imaginable!) to price
conscious consumers? Is the marketing channel blocked by
MS and Borland and Watcom and a few others, so product just
can't get out the door?
Anyway, with FrameMaker selling cheap on PC's and
Interleaf coming to a Windows box near you and Intergraph
running on NT and ... All the $$ workstation software is
(1) coming to PC's and then fairly quickly thereafter
(2) dropping in price. What about languages? If not,
why not?

ps. Sorry to hear about your company.

Scott McLoughlin

unread,
Sep 21, 1994, 11:51:29 PM9/21/94
to
ST...@MACC.WISC.EDU (STEB) writes:

> Harlequin doesn't charge for the software, but does charge a "nominal" fee for
> the media, documentation, shipping and handling. Nominal in this case = $50,
> which, given the common lisp industry's history, is pretty nominal.
>
> FL requirements:
> *educational use only
> *386/486 PC (486 preferred)
> *4mb RAM (8mb preferred)
>
> This might be quite interesting and as soon as I can spare $50 I intend to
> order a copy and check it out. Is there anyone out there who is/has use/d it?
> Impressions?

Howdy,
Ok. Extra cool. Right footprint. Right machine. Right OS.
(Look, I wish I had a SparcStation and my clients had SparcStations -
but they don't. Policy analysts in Washington typically have PC's,
'kay?).
Problem: I'm not in school. Policy analysts at World Bank and
AID are not in school. They use PC's so they can take them
to foreign countries and give presentations and tell people what to
do (solicited advice, of course).
Can _I_ get a copy for, say, $100? Can I distribute programs
to clients for, say, $250? Anyone out there from Harlequin? Better
than $_NONE_.
It's just a _language_ after all -- like C++ or Pascal or
BASIC. It's a _nicer_ language IMHO, but it's not _diamonds_ - it has
no intrinsic value (except aesthetic). It's a tool that we programmers
use to create value for users.
So, what's the poop on marketing the little gem? Little ad
in the back of Dr. Dobb's and 1-800 number forthcoming?

Jeff Dalton

unread,
Sep 22, 1994, 9:02:59 AM9/22/94
to
In article <35kbl8$8...@relay.tor.hookup.net> hu...@RedRock.com (Bob Hutchison) writes:
>> My latest theory is that the answer lies in cognitive
>>effects arising from the conception and structure of the language.
>>People make up mental models of how things work, and interpret the
>>programs they write in terms of those models. [...] The
>>intellectual effort required to develop a good model for Lisp or Ada is
>>much greater than that required to develop one for C. There are more
>>abstractions involved. Thus, C is more easily comprehended by
>>inexperienced programmers.
>
>Interesting theory here, but I don't think I agree. In my experience, novice
>programmers are so caught up in the details that they cannot make
>useful abstractions at all. This is the kind of thing you would expect
>of anyone learning something (e.g. many sports, especially team sports).
>I think that you are right that C is easier for them to comprehend, but
>I don't think it is because it has easier abstractions. I think it is because
>they can use the computer hardware itself as C's abstraction, that is,
>use a concrete thing as an abstraction -- what an illusion :-) I guess
>that I think you are basically right that the novice has a difficult time
>forming a useful understanding of how the language works, but I think
>this is difficult for any language. I think that in C's case the novice can
>cheat.

I thought your (Bob Hutchison's) article excellent, and I don't want
to give the opposite impression by disagreeing with part of it. Also,
I like the idea of using a concrete thing as an abstraction, though I
suspect that many people have a somewhat abstract model of the hardware.

However:

(a) Novices don't necessarily know all that much about the hardware;
(b) Novices (e.g. children even back in the days before they grew up
with video games) have found it fairly easy to learn languages
that aren't so close to the hardware (e.g. LOGO, Basic).
(c) There are reasonably simple hardware-based models that work
for Lisp.

This makes me question whether C wins because novices can use
the hardware as a "cheat".

It's important to bear in mind that some Lisps -- e.g. Common Lisp,
InterLisp -- are large and full of rather complicated stuff while
other Lisps are very simple. They're smaller and simpler than C; and
Lisp implementations tend to be interactive, which makes it easier to
try things out. It's also easy to set up Lisp to use the "just run
it" Basic approach.

Nonetheless, I think that in practice Lisp *is* often hard to learn.
I'm not sure I can say whether it's easier or harder than C. It
would depend, for one thing, on how much of C and how well it must
be understood, and on how much of which Lisp.

Anyway, in my view the following factors are responsible for much
of the difficulty:

(1) The fully parenthesized prefix syntax.
(2) Peculiar, unfamiliar names such as car, cdr, cond, and lambda.
(3) Hard topics such as recursion that tend to be mixed in with
learning Lisp.
(4) Confusing presentations of eval, quote, and "evaluates its
arguments" that make the question of what gets evaluated
seem much harder than it is. (The syntax also contributes
to this, because it's so uniform.)
(5) Teaching that has a mathematical flavour and emphasises the
functional side of Lisp. This is great for some students but
makes Lisp much harder for others. E.g. box-and-arrow diagrams
are tied to the discussion of mutation, and hence aren't
available when people are first trying to figure out what lists
are. (A number of odd models can result from this.)

Some of these are already questions of how Lisp is taught. Others,
such as the fully parenthesized syntax, require more care in
presentation than they often receive. It will also be interesting
to see how much difference it makes to change the syntax (as in
Dylan).

-- jeff

Cyber Surfer

unread,
Sep 21, 1994, 12:24:32 PM9/21/94
to
In article <CwFxA...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk "Jeff Dalton" writes:

> >I must strenously disagree with the original poster. "Blazing
> >speed,space,etc." are that critical.
>
> Then why are so many things so large and slow? Sure, there are
> some cases where speed, space, etc are critical, but there must
> be many others where they aren't. I think you are right to an
> extent, but it can't be the whole story.

The problem is that real Lisp systems don't compete well with
C/C++ systems. It doesn't matter that they _could_, it only
matters that they don't do it well enough. I blame it on byte
counting, but that doesn't help much.

> A strange thing is that it sometimes looks like only success in the
> PC market counts at all. The PC market is a rather odd place.

I agree. (Oh no, not again! (-; ) It's the market that gets the
highest publicity. I have a friend who regularly slags off the
Mac, which is a machine he doesn't use. It may not be #1 on the
list of popular machines, but it's _there_. He doesn't see it
that way, of course. He has a nasty habit of attending those
multimedia events that Microsoft like to organise, esp when
Bill Gates makes an appearance.

Just blame the media and strong marketing. :-)

> The OS technology would have been laughed out of town in the 70s.
> And yet people put up with 640K restrictions, no virtual memory,
> no proper multi-tasking, etc, for ages. I think it's reasonable
> that someone might not have predicted that things would develop
> the way they did in the PC market, much less that the PC market
> would start to dominate other markets (at least so far as
> perception of success is concerned).

Someone has a wonderful quote in their sigfile:

"640K outta be enough for anyone" -- Bill Gates.

No. You can never have enough memory. It's obvious why. At
the very least, they'll want to sell you software with more
features, and features eat memory. Microsoft have been very
good to people who make RAM chips...

> Moreover, I think it's surprising that _any_ language has so
> dominant a position.

Agreed. "There can only be one." Once it is there, why should it
change? Think about when it changed, and why. For micros, it was
when we switched from 8 bit machines to 16 bit. I used a C compiler
on an 8bit machine, but then, I was odd like that. I didn't hear
about C being used to write best selling apps until much later.

Martin Rodgers
--
Future generations are relying on us
It's a world we've made - Incubus
We're living on a knife edge, looking for the ground -- Hawkwind

Cyber Surfer

unread,
Sep 21, 1994, 12:07:43 PM9/21/94
to
In article <1994Sep19....@cabell.vcu.edu> csc...@cabell.vcu.edu "Adrian L. Flanagan" writes:

> The (relative) failure of Lisp has everything to do with Lisp
> vendors' failure to understand (even now) the needs of their
> marketplace. Call it Ivory Tower Syndrome.

I agree about the need for speed and a small "footprint". At one
time, all the reviews of C compilers that I found would measure
the number of bytes in a "Hello, World" program. This was to see
how much library overhead there was, which means very little when
we see object code sizes of more than a megabyte. The object code for
many leading apps these days tends to be at least this big.

The IDDE, the graphical front end for the C++ compiler I use, is
about 1 MB, and that's not counting the compiler and linker etc,
which are in their own DLL files. My machine can barely run the
IDDE, as it only has 8 MB of RAM. It does not run well, and yet
this is nothing compared to VC++.

Someone once said that "Lisp programmers know the value of every
thing, and the cost of nothing". This may be a gross generalisation,
but there may be some truth in it. I might say that C programmers
know the cost of everything, and the value of nothing, and that
would also be a gross generalisation. There also might be some
truth in it. Think of those byte counting compiler reviews. Now
imagine if Lisp systems were reviewed with the same attention to
object code size.

I know, there's a different culture. In Lisp, you might not want
stand alone programs. You might simply call a function, instead
of launching an application. It might even look the same! That
was how Smalltalk-80 was intended to be used, but today, a modern
Smalltalk works a little differently. Smalltalk/V is a good example
of what I mean, and yet it is still judged by the standards set
by C programmers, who are still counting bytes.

Also note how the user interface ideas, like the desktop metaphor,
were taken from Smalltalk, but the idea of all code being part of
the system and not making a special distinction between system
code and app code wasn't adopted by Apple, Microsoft, etc.

If you underestimate that cultural difference, you may have to
pay for it. I hope you don't, but I can't find as many jobs
offered for Smalltalk/Lisp/Prolog as there are for C/C++/VB.
I could just be looking in the wrong places, but I would still
advise any vendor to look closely at what makes their product
different from, let's say, Microsoft's.

Perhaps Apple have thought about this. The design of the Dylan
language suggests to me that they have. I hope that it succeeds,
as I'm told that Smalltalk programmers are paid more than C/C++
programmers, and Dylan might well do better than Smalltalk as
a pure object oriented language. ;-)

Cyber Surfer

unread,
Sep 21, 1994, 10:01:06 AM9/21/94
to
In article <CwFqw...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk "Jeff Dalton" writes:

> I hope we don't add a misleading "Lisp advocate" stereotype
> to the already misleading "Lisp" stereotype.

Is that a reference to the thread about a newsgroup for Lisp advocacy?
I'm not sure, but this looks like you've misunderstood the issue,
which was about the need, or not, for a comp.lang.lisp.advocacy
newsgroup, and not the nature and/or worth of Lisp advocacy, which
is another matter. My point was merely that at the moment, there's
no choice, except to add such advocacy threads to a killfile, which
is not ideal.

I hope I've misunderstood your comment, in which case I apologise.
However, I'm not aware of any stereotype being the problem. Perhaps
you're referring to some other thread, perhaps in another newsgroup,
but I don't know. So I'll just add that _my_ thread (the one I started)
was about the location of such advocacy threads, not about their
worth. I'd like to have a choice about what I read, which is a purely
personal thing.

It just happens that we see these threads in comp.lang.lisp, and I'm
not aware of a Lisp newsgroup that might cover the same subjects but
without the advocacy threads. If you can suggest one, then I'll happily
leave comp.lang.lisp and go and read it.

Meanwhile, I've been enjoying this particular thread, as I felt it
had some interesting things to say about implementation and design
of languages. I hadn't thought of it in the same way as "C vs Lisp"
threads. Should I?

As I said above, if I've misunderstood your comment, then I'm sorry.
I wish you well with your Lisp advocacy, whether I read it or not. If
I've ever suggested or implied that you _shouldn't_ advocate Lisp,
then I'm sorry for that, too. It could only have been because of
a misunderstanding. I've seen some extreme advocacy of Pascal
vs C, and it was ugly. That rather colours my feelings! I didn't
mean to imply that you were included in that group of advocates,
as I've always found your posts worth reading.

The comp.lang.visual newsgroup is in need of another kind of advocacy,
simply because so many users of VB and VC++ have yet to discover the
wider world of "visual" programming that Microsoft have yet to support.
That goes way beyond "X vs Y" debates, and into the power of marketing
over the power (or lack of it) of other media, such as UseNet. You
can read the comp.lang.visual FAQ for the details, if you want to
know more about that problem. It'll explain why some people believe
that c.l.v should become a moderated newsgroup.

I'm glad that Lisp hasn't reached that point, and I hope that it
never will. After all, the name Lisp still refers to Lisp, and not
a Microsoft product that looks nothing like Lisp. Ouch.

Well, ok. That's my c.l.v advocacy over with for today. ;-)

> (I have seen Lisp code that looked like FORTRAN, BTW. Imagine
> lots of PROGs and GOs. But not for many years.)

Yeah, me too. I used to write Basic code like that. Then I switched
to Forth, and never used an explicit GOTO again. Now I just write
compilers and macros that can do it for me. ;-)

Mike Fischbein

unread,
Sep 22, 1994, 8:59:18 AM9/22/94
to
John Doner (do...@aero.org) wrote:

: In article <os2Psc...@sytex.com>, Scott McLoughlin <sm...@sytex.com> wrote:
: >I'm still _VERY_ curious (concerned?)
: >about why Lisp isn't more popular in "the trenches".
: ...
: > So why don't folks use Lisp to write this stuff? Blazing
: >speed,space,etc. aint that critical. What gives?

: None of them can explain why C won out over Pascal,
: for example.

Pascal has many limitations imposed in an attempt to mandate "good"
coding practices on beginners; C removes those limitations. Many
Pascal compilers remove some of them, but use of those extensions
renders code non-portable.

: My latest theory is that the answer lies in cognitive
: effects arising from the conception and structure of the language.
: People make up mental models of how things work, and interpret the
: programs they write in terms of those models. For experienced
: programmers, compiler writers perhaps, these models are complete and
: accurate, closely corresponding to the objective reality. Novice
: programmers have poor models that are incomplete, poorly related to the
: actual computing machines, and perhaps even inconsistent.

This is an excellent basis for the discussion.

: The intellectual effort required to develop a good model for Lisp or Ada is
: much greater than that required to develop one for C.

This sentence I must disagree with completely. Lisp can be
conceptualized more easily than most computer languages, certainly the
ones under discussion. One might leave out significant chunks of the
language, but that is frequently what novice programmers do -- in whatever
language they work in.

: There are more abstractions involved. Thus, C is more easily comprehended by
: inexperienced programmers.

Which renders this conclusion invalid. There are not more abstractions
involved; but the abstractions involved in Lisp are *different* than
those involved in C, and *different* than those most novice programmers
have been involved with.

C's conceptual machine model is similar to that of the most common
beginner/teaching languages, BASIC and Pascal, without many of the
limitations of those languages. C's conceptual machine model is also
similar to most common CPUs. This makes it easier for novice
programmers, who've been working in BASIC and studying the 8086
instruction set, to map their conceptualizations to C than to Lisp.
Similar handwaving for Pascal and 68000.

A programmer who can work comfortably with C, BASIC, Pascal, Fortran,
et al, has really learned one conceptual machine with different (and
varying amounts of) syntactic sugar. This makes it easy to shift from
one Algol-like language to another; having learned BASIC or Pascal, the
novice programmer finds C to be an extension of already known
concepts. Lisp presents a different way of thinking about the problem
that does not fit comfortably with what the novice already "knows"
about programming.

Lisp, Forth, Smalltalk, awk, and APL (and other non-Algol-like
languages) all have different conceptual machines. All require
significant shifts in the way the programmer thinks about solving the
problem at hand (compared to the Algol-like family). Gross conceptual
shifts are much more difficult for most people than the relatively
minor syntactical shifts required for staying in a single language
family. This applies to Lisp also; it is easier for a hypothetical
programmer who knows only KCL, say, to learn Scheme than it would be
for that same individual to learn C. But beginning programmers
generally work with simple languages usually designed for beginners,
such as BASIC, Pascal, Shell and Rexx; and these languages are almost
all part of the Algol family.

mike


David Gadbois

unread,
Sep 22, 1994, 10:11:10 PM9/22/94
to
Simon Brooke <si...@rheged.dircon.co.uk> wrote:
>Cambridge LisP has the beautiful construct
>
> (loop
> forms
> (until <condition> value)
> forms
> (while <condition> value)
> forms)
>
>[...] it might be worth spending a quiet winter's
>evening reinventing it

It's a no-brainer:

(defmacro cambridge-loop (&body forms)
  (let ((block (make-symbol "BLOCK"))
        (start (make-symbol "START")))
    `(block ,block
       (macrolet ((until (condition &optional value)
                    `(when ,condition (return-from ,',block ,value)))
                  (while (condition &optional value)
                    `(unless ,condition (return-from ,',block ,value))))
         (tagbody
           ,start
           (progn ,@forms)
           (go ,start))))))
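
For example, a hypothetical usage sketch (the variable name is
illustrative): printing 0, 1, and 2 and then returning 3.

```lisp
;; Count upward; the UNTIL clause exits the loop with a value.
(let ((i 0))
  (cambridge-loop
    (until (>= i 3) i)   ; leave the loop, returning I, once I reaches 3
    (print i)
    (incf i)))
```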

--David "Will write macros for food" Gadbois


Bob Hutchison

Sep 23, 1994, 11:15:06 AM9/23/94
to

Thanks, and I don't mind being disagreed with, I am quite used to it :-)

>
>However:
>
> (a) Novices don't necessarily know all that much about the hardware;
> (b) Novices (e.g. children even back in the days before they grew up
> with video games) have found it fairly easy to learn languages
> that aren't so close to the hardware (e.g. LOGO, Basic).
> (c) There are reasonably simple hardware-based models that work
> for Lisp.

I don't know that it matters that their model of the hardware is abstract.
The model they have seems actually quite good at predicting what the
hardware will do (CPU here, not IO devices so much). If they can then
translate that predictive model into a 'C' model, then they stand a
chance of predicting what a C program will do.

Children seem to be an exception to everything to do with learning
(do you have kids?). They seem to be better at learning. If we think
of adults learning, we see that they do it a bit differently than
children. Adults try to relate new things to things they already know
or understand. Children don't have that luxury if they are young,
and don't seem to need that technique so much. If this is a valid
understanding of how many adults learn, then being having the aid
of a simple hardware model translated into C, that can help them
predict what a program will do, might be found useful.

Use of a hardware model to aid in learning a programming language
would apply to most languages. I don't doubt that there is a suitable
hardware model to explain lisp, but I don't think it is the same one.
Unfortunately the one available to a C programmer is the one taught,
at least where I went to school.

The other difficulty with languages like the lisps and other high level
languages is that they provide a fair bit of support for the
development of 'software'. I wonder what a hardware model of
a continuation in scheme or ml would look like, or a non-deterministic
program written using them? What is the hardware model for an
abstract data type for that matter? What is the hardware model
for a CL macro? (this macro idea is one that seems to be something
that a C programmer has an awful time comprehending, possibly
it is just an 'I don't believe you' problem rather than a 'what would
I do with it?' problem)

>
>This makes me question whether C wins because novices can use
>the hardware as a "cheat".
>
>It's important to bear in mind that some Lisps -- e.g. Common Lisp,
>InterLisp -- are large and full of rather complicated stuff while
>other Lisps are very simple. They're smaller and simpler than C; and
>Lisp implementations tend to be interactive, which makes it easier to
>try things out. It's also easy to set up Lisp to use the "just run
>it" Basic approach.

Scheme is relatively new to me, I assume that it is one of the simpler
lisps you refer to. While it is a nice simple clean language that I find
rather appealing, it supports a programming style that, in my opinion,
is fundamentally a software oriented style, not a hardware one.

Is there a simple model of a scheme 'machine' that would allow
someone to predict behaviour of the software? I would have thought
that scheme is its own best model. Wasn't that kind of the point of
scheme?

My first reaction to the 'just run it' approach to lisp was a bit negative.
But when you think about it 'just running' lisp is probably not much
different than the kind of C programming we get. It also holds the
promise that as the programmer gains experience the other aspects
of lisp become available.

>
>Nonetheless, I think that in practice Lisp *is* often hard to learn.
>I'm not sure I can say whether it's easier or harder than C. It
>would depend, for one thing, on how much of C and how well it must
>be understood, and on how much of which Lisp.

I can tell you that my problem with CL was finding a subset of it
that I could do something with. CLtL was not much use for that.
It wasn't until I came across Paul Graham's "On Lisp: ..." that things
'switched on' with CL. Scheme was much easier.

>
>Anyway, in my view the following factors are responsible for much
>of the difficulty:
>
> (1) The fully parenthesized prefix syntax.
> (2) Peculiar, unfamiliar names such as car, cdr, cond, and lambda.
> (3) Hard topics such as recursion that tend to be mixed in with
> learning Lisp.
> (4) Confusing presentations of eval, quote, and "evaluates its
> arguments" that make the question of what gets evaluated
> seem much harder than it is. (The syntax also contributes
> to this, because it's so uniform.)
> (5) Teaching that has a mathematical flavour and emphasises the
> functional side of Lisp. This is great for some students but
> makes Lisp much harder for others. E.g. box-and-arrow diagrams
> are tied to the discussion of mutation, and hence aren't
> available when people are first trying to figure out what lists
> are. (A number of odd models can result from this.)
>

Most of these points are illustrations of what I mean by support
for 'software'. These are software ideas not hardware. Though, I
should mention that in my case car, cdr, cond were no problem
at all (even in 1976) -- they were unique names for things so if
anything they reduced confusion. Lambda still seems to imply
more than it does, something deep that isn't really there, and so
is a bit distracting :-)

>Some of these are already questions of how Lisp is taught. Others,
>such as the fully parenthesized syntax, require more care in
>presentation than they often receive. It will also be interesting
>to see how much difference it makes to change the syntax (as in
>Dylan).
>

This will be interesting. Though what about ML? It has been around
for a while now, what is the experience with that?

I think that this discussion is interesting, and possibly even useful,
but the real issue with lisp is social and political.

Bailin Jeremy

Sep 23, 1994, 3:39:11 PM9/23/94
to
I've noticed that the old paradigm of learning spoken languages
(English, French, Japanese etc) seems to be equally valid for programming
languages, ironically enough. Basically, learn as many different ones as
possible while you're still young. <grin> People have difficulty moving from
one type of language paradigm (say C) to a very different one (say Lisp),
whereas both can be learned quite well early on.
On the other hand... Every now and then I look back at my old Basic code...
and shudder. :-)=


_________________________________
/ "I took a drink of holy water \ Jeremy Bailin
/ It tasted like the pipes were rusty \
| I listened to the words of wise men | SVP111 BAILIN.92B
| It sounded like their words were dusty" | g4bo...@cdf.utoronto.ca
\ - Cause & Effect /
\___________________________________/ MCMXCIV

Geoffrey P. Clements

Sep 23, 1994, 7:58:11 AM9/23/94
to
Hi,

Pardon me for jumping in the middle of this discussion. The question I
have is: what is Lisp used for? Is there anyone out there using Lisp to
develop commercial applications? The only application I've ever seen
written in Lisp "in the real world" was a FORTRAN to Ada translator. You
couldn't buy it. You gave the company your FORTRAN code and they gave you
back Ada code.

I've heard all the reasons why Lisp is such a great language, but no one
seems to be using it for anything but research projects. (I think. Correct
me if I'm wrong.)

I've played with a few small Lisps. Power Lisp and xLisp for the
Macintosh. I don't see what use they are over Metrowerks CodeWarrior C++
for developing useful applications. (Read: commercially saleable.)

geoff

Cyber Surfer

Sep 24, 1994, 11:00:49 AM9/24/94
to
In article <CwJ88...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk "Jeff Dalton" writes:

> Some of these are already questions of how Lisp is taught. Others,
> such as the fully parenthesized syntax, require more care in
> presentation than they often receive. It will also be interesting
> to see how much difference it makes to change the syntax (as in
> Dylan).

This is the reason why I feel that Dylan will do better than Lisp.
It uses infix, for example. That helps programmers who are used to
infix languages, although I had no trouble, but then, I moved from Basic
to assembly to Forth to C to Lisp. Today, I might use any of them.

I realise that because of my interest in compilers that I'm probably
not a typical programmer! Maybe that explains it...

Cyber Surfer

Sep 24, 1994, 11:29:12 AM9/24/94
to
In article <CwJuu...@rheged.dircon.co.uk>
si...@rheged.dircon.co.uk "Simon Brooke" writes:

> Cambridge LisP has the beautiful construct
>
> (loop
> forms
> (until <condition> value)
> forms
> (while <condition> value)
> forms)

Is this the Cambridge Lisp that was once available for the ST and
Amiga machines? I used that a bit, a few years ago. It was the first
time I'd seen compiled Lisp code, and I was impressed by it. It made
using GEM in Lisp look very easy, very fast, and yet still as friendly
as you'd expect for a Lisp.

I was only sad that it didn't use GEM itself, as the character mode
interface was rather basic compared to what was possible in a GEM
environment.


Omar Foda

Sep 25, 1994, 2:28:34 AM9/25/94
to
mfis...@panix.com (Mike Fischbein) writes:

>But beginning programmers generally work with simple languages usually
> designed for beginners, such as BASIC, Pascal, Shell and Rexx; and
> these languages are almost all part of the Algol family.

I agree with what you say--in general--but must disagree with the
above statement. Logo is relatively easy to use for novices, and
it is in the lisp family.

Strangely enough, my kids find it easier to learn certain aspects of
Common Lisp than the corresponding aspects of Logo. In particular,
they find "quote" easier in the former.

When I think of it, I find that Lisp is easier than C and Pascal.
Being interpreted is one very big plus for a novice.

Omar.


Thomas M. Breuel

Sep 25, 1994, 5:40:00 AM9/25/94
to
In article <CwMH3...@rheged.dircon.co.uk> si...@rheged.dircon.co.uk (Simon Brooke) writes:
|This is history. No-one is writing new commercial applications for a
|640Kb memory model any more. We all know why LisP wasn't successful in
|the past: the question is why isn't [insert high level language of
|your choice] successful now?

Because our problems have grown along with our machines. C still lets
me take better advantage of the 64Mbytes that I have than CommonLisp.

On the other hand, for less demanding problems, high level languages
like Basic and Perl have become successful.

Thomas.

Lawrence G. Mayka

Sep 25, 1994, 12:23:57 PM9/25/94
to
In article <gclements-230...@155.50.21.58> gcle...@keps.com (Geoffrey P. Clements) writes:

> I've heard all the reasons why Lisp is such a great language, but no one
> seems to be using it for anything but research projects. (I think. Correct
> me if I'm wrong.)

Come to next year's Lisp Users and Vendors Conference. The one this
year had plenty of developers delivering real-life applications in
Lisp.

You also might ask Harlequin Inc. for a copy of their press release
describing the use of their real-time CLOS in an announced AT&T
switching system.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

John W.F. McClain

Sep 26, 1994, 8:01:46 AM9/26/94
to
In article <gclements-230...@155.50.21.58> gcle...@keps.com (Geoffrey P. Clements) writes:

Well here is a list of some "real" application written in Lisp that I
compiled a year or two ago. If anyone finds any errors please send
them to me.

Note Loral has no idea I even have this list....


"Major" Lisp Applications

"Mass Market Applications"
----------------------------------------------------------
GNU EMACS Text Editor & mini-lispm
[C at core]

MACSYMA Math system
Reduce Math system
Alkahest Math system

XLISP-STAT a Lisp based statistical programming environment

Interleaf Page Layout/documentation system
[C at core]

BELIEF A program for manipulating graphical belief
models

Many AutoCAD applications CAD
BOSS a link level communications CAD simulation tool
BONeS Designer a system-level network CAD simulation tool
PlanNet a simple network simulation tool
Cadence's Design Framework II CAD
and some sub-applications
ICAD knowledge based engineering support system


Itasca OODBMS Distributed Object Database Management System
Software Refinery CASE
NoteCards Hypertext system
Symbolics' S-products (3D modelling, animation & rendering)

AI applications
----------------------------------------------------------
Boyer-Moore Theorem Prover
OPS-5 (has been rewritten in C) Rule based programming lang.
ART AI/Expert-System dev. tools
KEE AI/Expert-System dev. tools
KnowledgeCraft AI/Expert-System dev. tools
G2 AI/Expert-System dev. tools
Eurisko "large AI system"
MYCIN "large AI system"
CYC "large AI system"
Concept Modeler knowledge based engineering support system

Natural language parsers
Accounting Applications

Company/Group Specific Applications & one shots
----------------------------------------------------------
An Authorizer Assistant for a major credit card co.
RAM (Rear Area Movement) for the Army
FAITH (F16 Air Intercept Training Heuristics)
Application to interpret proposals for use of Hubble Space Telescope
First Fortran 77 compiler
R1 VAXen configuration program
XCON VAXen configuration program
Systems at AT&T, Lockheed, Ford Aerospace

Lisp Machine Environments
----------------------------------------------------------
Genera
LMI

????
----------------------------------------------------------
The Graphics Tool
PASTA
CPAS Timewarp system
METAL Machine translation program

--

I am solely responsible for contents of this message. It does not
necessarily reflect the opinions of Loral or its customers.

John W.F. McClain Loral Advanced Distributed Simulation
jmcc...@camb-lads.loral.com 50 Moulton Street
work (617) 441-2062 Cambridge, MA 02139

DANIEL CORKILL

Sep 26, 1994, 9:12:55 AM9/26/94
to
>Pardon me for jumping in the middle of this discussion. The question I
>have is: what is Lisp used for? Is there anyone out there using Lisp to
>develop comercial applications?

Blackboard Technology Group's GBB generic blackboard builder is
written in CL/CLOS. The GBB product has been on the market for
5 years now, and some "real-world" applications of GBB include:

-- Canadian Space Agency's RADARSAT earth-imaging satellite
operational mission planner (being built for an estimated
$7mil using GBB vs an original projection of $20mil without)

-- U.S. Army's FRAN (Force Restructuring Analysis Network) and
new ATLAS logistics planning system


-- U.S. Air Force (Rome Laboratory) intelligent multichip module
analysis system

-- Design analysis systems at Ford

By the way, there are no commercial blackboard frameworks
available in C/C++....

Jeff Dalton

Sep 27, 1994, 12:41:05 PM9/27/94
to
In article <CwJuu...@rheged.dircon.co.uk> si...@rheged.dircon.co.uk (Simon Brooke) writes:

>Cambridge LisP has the beautiful construct
>
> (loop
> forms
> (until <condition> value)
> forms
> (while <condition> value)
> forms)

In Common Lisp, continue to use loop but replace

(until <condition> value)
by (when <condition> (return value))

and

(while <condition> value)
by (unless <condition> (return value))
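
For instance (a sketch along the same lines; the variable name is
illustrative), a loop that prints 0, 1, and 2 and then returns 3:

```lisp
;; Plain LOOP with WHEN/RETURN standing in for the
;; Cambridge Lisp (until <condition> value) clause.
(let ((i 0))
  (loop
    (when (>= i 3) (return i))  ; the UNTIL translation
    (print i)
    (incf i)))
```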

"Named let" is also rather elegant. I use:

;;; LABEL defines a "named LET" that can be called as a local function.
;;;
;;; (LABEL fname ((var init) ...) form ...)
;;; ==> (LABELS ((fname (var ...) form ...))
;;; (fname init ...))

(defmacro label (name init-specs &body forms)
  `(labels ((,name ,(mapcar #'car init-specs)
              ,@forms))
     (,name ,@(mapcar #'cadr init-specs))))

E.g. to print 0 .. 9:

(label repeat ((i 0))
  (when (< i 10)
    (print i)
    (repeat (1+ i))))

-- jeff

Jeff Dalton

Sep 27, 1994, 1:14:31 PM9/27/94
to
In article <uPkXsc...@sytex.com> sm...@sytex.com (Scott McLoughlin) writes:

>csc...@cabell.vcu.edu (Adrian L. Flanagan) writes:
>>
>> I must strenuously disagree with the original poster. "Blazing
>> speed,space,etc." are that critical. Particularly in the PC DOS
>> world with its 640K restriction, program size and efficiency of
>> compiled code made a tremendous market difference in acceptance of
>> early commercial programs. Programmers writing in C had a large
>> advantage over programmers using the early Lisp systems [...]

If speed is so crucial, why are so many people willing to use Macs
with the cache off (3,4, 10 times slower)?

There's a lot of evidence that people are often willing to
sacrifice speed for other things. (Not always, of course.)

In any case, what early Lisp systems were competing with C?

>> The (relative) failure of Lisp has everything to do with Lisp
>> vendors' failure to understand (even now) the needs of their
>> marketplace. Call it Ivory Tower Syndrome.

> 1. Right. DOS+640K definitely gives advantage to C/Pascal/Asm.
>Esp with TSR's and other weirdnesses. My question referred to the folks,
>though, who are using VB and VC and BCW4.0 _now_ - big environments and
>big images. I like the general historical picture, though.

I can't really say, because it's lacking all detail.

> 2. When I say "no blazing speed", I mean -- "Not hard realtime".
>I'm assuming only a 2X to 3X speed advantage of native C over compiled
>Lisp. Anybody ever benchmark Allegro vs. Borland C++ on, say, TAK?

KCL is as fast as C for TAK (on Sun3s, which reflects the last time I
tried it). But TAK is a very limited benchmark.

> 3. "Ivory Tower Syndrome" -- YES! Now I think we're getting to
>the heart of the matter ;-)

How so?

I could understand someone saying Lisp vendors went after market A
(AI researchers, perhaps?) when they should have gone after market B
(commercial DOS users?). But the two of you seem to be saying they
misunderstood the market they went after. Perhaps you think they
went for B and blew it when what they really did was go for A.

-- jd

Jeff Dalton

Sep 27, 1994, 2:09:28 PM9/27/94
to
In article <CwJ82...@csfb1.fir.fbc.com> mfis...@panix.com (Mike Fischbein) writes:
>
>: There are more
>: abstractions involved. Thus, C is more easily comprehended by
>: inexperienced programmers.
>
>Which renders this conclusion invalid. There are not more abstractions
>involved; but the abstractions involved in Lisp are *different* than
>those involved in C, and *different* than those most novice programmers
>have been involved with.
>
>C's conceptual machine model is similar to that of the most common
>beginner/teaching languages, BASIC and Pascal, without many of the
>limitations of those languages.

I'd be interested in hearing what people think this conceptual
machine model is. I suspect that Lisp could be explained in
terms of that model or one fairly similar to it.

I also wonder how people learning C understand certain things.
For instance, if a variable has type int, there are a couple
of ways to think of this:

* It ensures that you can assign only ints to it.

* It says how to interpret the bits at the address that corresponds
to the variable (including how many of them are part of the var's
value).

BTW, there's a clash between Basic and C when it comes to strings.
Strings are much easier to deal with in a good Basic (to my mind, at
least), but the required machine-level model is fairly complex.
I don't think lists have to be harder to understand than that,
and I suspect they're easier.

> C's conceptual machine model is also
>similar to most common CPUs. This makes it easier for novice
>programmers, who've been working in BASIC and studying the 8086
>instruction set, to map their conceptualizations to C than to Lisp.
>Similar handwaving for Pascal and 68000.

Do novice programmers normally study the hardware instruction set?
That sounds like a rather hacker-like set of novices to me!

>A programmer who can work comfortably with C, BASIC, Pascal, Fortran,
>et al, has really learned one conceptual machine with different (and
>varying amounts of) syntactic sugar. This makes it easy to shift from
>one Algol-like language to another; having learned BASIC or Pascal, the
>novice programmer finds C to be an extension of already known
>concepts.

Or a restriction (e.g. compared to Basic strings).

But I agree with the general point.

> Lisp presents a different way of thinking about the problem
>that does not fit comfortably with what the novice already "knows"
>about programming.

But why is that? I'm not sure it's true.

-- jeff

Jeff Dalton

Sep 27, 1994, 2:58:53 PM9/27/94
to
In article <35urdq$3...@relay.tor.hookup.net> hu...@RedRock.com (Bob Hutchison) writes:
>In <CwJ88...@cogsci.ed.ac.uk>, je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <35kbl8$8...@relay.tor.hookup.net> hu...@RedRock.com (Bob Hutchison) writes:

[...]

>>>I think that [...] C is easier for them to comprehend, but I don't think
>>> it is because it has easier abstractions. I think it is because
>>>they can use the computer hardware itself as C's abstraction, that is,
>>>use a concrete thing as an abstraction -- what an illusion :-) I guess
>>>that I think you are basically right that the novice has a difficult time
>>>forming a useful understanding of how the language works, but I think
>>>this is difficult for any language. I think that in C's case the novice
>>>can cheat.

>>I like the idea of using a concrete thing as an abstraction, though I
>>suspect that many people have a somewhat abstract model of the hardware.

>>However:
>>
>> (a) Novices don't necessarily know all that much about the hardware;
>> (b) Novices (e.g. children even back in the days before they grew up
>> with video games) have found it fairly easy to learn languages
>> that aren't so close to the hardware (e.g. LOGO, Basic).
>> (c) There are reasonably simple hardware-based models that work
>> for Lisp.
>
>I don't know that it matters that their model of the hardware is abstract.

I don't either; I was just mentioning it.

>The model they have seems actually quite good at predicting what the
>hardware will do (CPU here, not IO devices so much). If they can then
>translate that predictive model into a 'C' model, then they stand a
>chance of predicting what a C program will do.

I'm not sure what you have in mind here. It sounds like you're
thinking of people who start by learning to program in assembler,
so that they have to predict what the CPU will do.

>Children seem to be an exception to everything to do with learning
>(do you have kids?). They seem to be better at learning.

But do you think LOGO is hard for old folks to learn? (How old
are people these days when they're learning C?)

>Use of a hardware model to aid in learning a programming language
>would apply to most languages. I don't doubt that there is a suitable
>hardware model to explain lisp, but I don't think it is the same one.
>Unfortunately the one available to a C programmer is the one taught,
>at least where I went to school.

Can you say something more about this? I learned how to program
before I knew anything about how the hardware worked.

>The other difficulty with languages like the lisps and other high level
>languages, is that they provide a fair bit of support for the
>development of 'software'. I wonder what a hardware model of
>a continuation in scheme or ml would look like, or a non-deterministic
>program written using them?

What is the hardware model of a coroutine or a thread? There are
such models, of course, and there are similar ones for continuations.

But I'd like to separate the question of learning Lisp from that
of learning all of Lisp. After all, it may be that some parts are
hard to learn while others are not. A conclusion along those lines
would be (if it does turn out that way) more precise and useful.

>What is the hardware model for an
>abstract data type for that matter?

Lisp doesn't have those. (Some Lisps might, of course.)

> What is the hardware model
>for a CL macro? (this macro idea is one that seems to be something
>that a C programmer has an awful time comprehending, possibly
>it is just an 'I don't believe you' problem rather than a 'what would
>I do with it?' problem)

What is the hardware model for a C macro? In any case, you have to
explain it (sooner or later) as textual substitution. A CL macro is
(a) taking some source code represented as Lisp data, (b) calling
a function that returns some different source code -- the expansion,
(c) processing the expansion instead of the original code. In
some ways this is simpler than C, because the expansion mechanism
is Common Lisp rather than something different.
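
As a minimal sketch of (a)-(c) (the WHILE-DO macro here is
hypothetical, not a standard operator):

```lisp
;; The macro function receives its call as list data and returns
;; different source code -- the expansion -- which is processed
;; in place of the original form.
(defmacro while-do (test &body body)
  `(do ()               ; no loop variables
       ((not ,test))    ; stop when TEST turns false
     ,@body))

;; (while-do (< i 10) (print i) (incf i))
;; expands into (do () ((not (< i 10))) (print i) (incf i))
```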

I think it's much easier BTW to understand a Lisp interpreter
(for a suitable simple Lisp) than a C compiler (even for a smallish
subset of C). So in some ways it's easier to explain how Lisp
works.

>>This makes me question whether C wins because novices can use
>>the hardware as a "cheat".
>>
>>It's important to bear in mind that some Lisps -- e.g. Common Lisp,
>>InterLisp -- are large and full of rather complicated stuff while
>>other Lisps are very simple. They're smaller and simpler than C; and
>>Lisp implementations tend to be interactive, which makes it easier to
>>try things out. It's also easy to set up Lisp to use the "just run
>>it" Basic approach.
>
>Scheme is relatively new to me, I assume that it is one of the simpler
>lisps you refer to. While it is a nice simple clean language that I find
>rather appealing, it supports a programming style that, in my opinion,
>is fundamentally a software oriented style, not a hardware one.

Scheme *is* fairly simple, especially compared to Common Lisp.
But there are simpler Lisps, e.g. without call/cc.

I'm intrigued by the phrase "a software oriented style". Could
you say something more about it? (I assume it's related to your
point about macros.)

>Is there a simple model of a scheme 'machine' that would allow
>someone to predict behaviour of the software? I would have thought
>that scheme is its own best model. Wasn't that kind of the point of
>scheme?
>
>My first reaction to the 'just run it' approach to lisp was a bit negative.
>But when you think about it 'just running' lisp is probably not much
>different than the kind of C programming we get. It also holds the
>promise that as the programmer gains experience the other aspects
>of lisp become available.

Well, I think an interactive Lisp is better when learning the
language. But the Basic model is fairly simple. You have the
program source, you type "run", and it runs. C follows a more
complex compile-and-link model.

>>Nonetheless, I think that in practice Lisp *is* often hard to learn.
>>I'm not sure I can say whether it's easier or harder than C. It
>>would depend, for one thing, on how much of C and how well it must
>>be understood, and on how much of which Lisp.
>
>I can tell you that my problem with CL was finding a subset of it
>that I could do something with.

I think that's a very serious problem with CL. It's difficult
to extract a nice subset, especially if you're new to Lisp.
Textbooks ought to help there, but many of them cover too
much of the language (IMHO).

> CLtL was not much use for that.
>It wasn't until I came across Paul Graham's "On Lisp: ..." that things
>'switched on' with CL. Scheme was much easier.

I think it's often very hard to find things in CLtL. Looking up
a function name in the index and turning to the appropriate page is
often not enough. For instance it may talk about something that
"satisfies the test". That phrase is explained somewhere else
(at the start of the chapter?). So CLtL has some problems as
a reference manual, and it has other problems as an introduction.

>>Anyway, in my view the following factors are responsible for much
>>of the difficulty:
>>
>> (1) The fully parenthesized prefix syntax.
>> (2) Peculiar, unfamiliar names such as car, cdr, cond, and lambda.
>> (3) Hard topics such as recursion that tend to be mixed in with
>> learning Lisp.
>> (4) Confusing presentations of eval, quote, and "evaluates its
>> arguments" that make the question of what gets evaluated
>> seem much harder than it is. (The syntax also contributes
>> to this, because it's so uniform.)
>> (5) Teaching that has a mathematical flavour and emphasises the
>> functional side of Lisp. This is great for some students but
>> makes Lisp much harder for others. E.g. box-and-arrow diagrams
>> are tied to the discussion of mutation, and hence aren't
>> available when people are first trying to figure out what lists
>> are. (A number of odd models can result from this.)
>
>Most of these points are illustrations of what I mean by support
>for 'software'. These are software ideas not hardware.

But 2-5 are presentation problems, not language problems.

Perhaps we could discuss briefly how C is taught? I learned C
from a book, as I did for most other languages I know, so I don't
know how it's normally done. (Though I've taught Lisp and Basic.)

I'm tempted at this point to try to write a "Lisp for C Programmers".

>>Some of these are already questions of how Lisp is taught. Others,
>>such as the fully parenthesized syntax, require more care in
>>presentation than they often receive. It will also be interesting
>>to see how much difference it makes to change the syntax (as in
>>Dylan).
>>
>
>This will be interesting. Though what about ML? It has been around
>for a while now, what is the experience with that?

I don't think there are enough implementations. SML of NJ,
though excellent, is rather large. I'm not very well-informed
on what's available, though.

>I think that this discussion is interesting, and possibly even useful,
>but the real issue with lisp is social and political.

I agree.

I don't really expect Lisp to become super-popular, if Lisp means
Common Lisp, or even Scheme. I think the syntax will always work
too well against it, even though many people find it more readable
rather than less (I am one). But Lisp can have a different syntax,
and there are Lisp-like languages (whether we consider them in
the Lisp family, strictly speaking, or not) such as Pop and Logo
and Dylan that illustrate this (as does the never fully realized
Lisp 2 and various alternative syntaxes constructed over the years).

So I think there's still hope for Lisp-like languages.

-- jeff

Markku Laukkanen

Sep 27, 1994, 8:54:32 AM9/27/94
to
>The root problem seems to be that some Lisps worry that 1 + i may
>not be a fixnum even though i is. I don't think you can always
>win even by saying (dotimes (i (the fixnum n)) ...).
>

try to declare
(declare (ftype (function (fixnum) fixnum) 1+))
or
(declare (ftype (function (fixnum fixnum) fixnum) +))

Any (decent) CL compiler should be smart enough to handle those.
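
A minimal sketch of the kind of loop this aims at (whether the compiler
actually open-codes the arithmetic is implementation-dependent, and SUM
can leave fixnum range for large N):

```lisp
;; Fixnum declarations plus THE give the compiler enough type
;; information to use fixnum arithmetic in the loop body.
(defun sum-below (n)
  (declare (fixnum n))
  (let ((sum 0))
    (declare (fixnum sum))
    (dotimes (i n sum)
      (declare (fixnum i))
      (setq sum (the fixnum (+ sum i))))))
```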

PKY

Curt Eggemeyer

Sep 29, 1994, 10:39:54 AM9/29/94
to

>>> 3. "Ivory Tower Syndrome" -- YES! Now I think we're getting to
>>>the heart of the matter ;-)

...[stuff removed]...
>
>Or perhaps they went after market C - people building large scale
>applications; only to discover that most such people were not much
>interested in A) learning symbolic programming techniques and B) spending
>$$$$$$ on hardware that wasn't suited to running or developing other
>kinds of applications.

Of course now we are beginning to see strains in the C/C++ world of
maintenance/programmer support required for "BIG" applications. I think
the latest Scientific American or Discovery had an interesting article
concerning this.

I am a strong believer that representation is 80% of the effort, and LISP/
Smalltalk made a good start to providing the infrastructure for building it.
C++ burdens the coder with too much machine implementation details. Dylan
looks promising. Who knows what the future holds?

Keith M. Corbett

Sep 29, 1994, 9:39:32 AM9/29/94
to

Or perhaps they went after market C - people building large scale
applications; only to discover that most such people were not much
interested in A) learning symbolic programming techniques and B) spending
$$$$$$ on hardware that weren't suited to running or developing other
kinds of applications.

I'm not merely speculating here; I was manager of software support at LMI,
one of the Lisp vendors that died in the 80s. The label "Ivory Tower
Syndrome" might describe some of the researchers at LMI, but management
was keenly aware of market forces. We were not trying to compete with
PCs, and we didn't expect to survive by catering to AI researchers.
Like many other vendors of specialized HW and SW we were gambling on the
chance that what we were building would catch on in the larger market.

You win some, you lose some...

-kmc

Jeff Dalton

Sep 29, 1994, 3:21:03 PM9/29/94
to
In article <780156...@wildcard.demon.co.uk> cyber_...@wildcard.demon.co.uk writes:
>In article <CwFqw...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk "Jeff Dalton" writes:
>
>> I hope we don't add a misleading "Lisp advocate" stereotype
>> to the already misleading "Lisp" stereotype.
>
>Is that a reference to the thread about a newsgroup for Lisp advocacy?
>I'm not sure, but this looks like you've misunderstood the issue,
>which was about the need, or not, for a comp.lang.lisp.advocacy
>newsgroup, and not the nature and/or worth of Lisp advocacy, which
>is another matter. My point was merely that at the moment, there's
>no choice, except to add such advocacy threads to a killfile, which
>is not ideal.

I thought my article was reasonably clear in context. Maybe not.
Anyway, I was responding to an article that said such things as "the
usual lisp advocate response".

>I hope I've misunderstood your comment, in which case I apologise.
>However, I'm not aware of any stereotype being the problem. Perhaps
>you're refering to some other thread, perhaps in another newsgroup,
>but I don't know.

Why not assume I'm talking about something in *this* thread,
namely: Comparison: Beta - Lisp?

-- jeff

Erik Naggum

Sep 30, 1994, 4:55:35 AM9/30/94
to
[Bob Hutchison]

| What I mean when I say "hardware" is something with a very low level of
| abstraction (maybe?) Perhaps high degree of concreteness? What
| happens at this level is very well defined and is easily translated to
| common knowledge. For example, most people understand what adding two
| integers together means and what to expect as a result. Writing a
| program in C largely involves manipulating things at this level. Of
| course C supports "aggregations" (I need words :-) of these
| manipulations in the form of things like data structures and functions,
| but it doesn't go too much beyond that.

this gave me an idea, since I don't think hardware is easy. what happens
in the real hardware is incredibly complex, and at the >100MHz speeds we're
enjoying now, it is not a piece of cake to deal with, even in the friendly
ALU, which isn't anymore. now, we have been able to formulate a relatively
simple model of how the hardware works, of what the hardware processes take
as input and give as output, even though real life is nowhere near this. I
believe it is this model that C has managed to exploit, and LISP has not.

one could even argue that the C model is employed at the hardware level,
and thus makes hardware understandable in terms of software concepts.
(somewhat akin to the interesting flow of ideas between models of the brain
and models of hardware.)

for me, programming in MACRO-10 on the DEC 10 some 10 years ago provided an
interesting insight into some of the things in Common LISP today. such as
the constants used in the `boole' function. small things, but so familiar
that they made me laugh with recognition. LISP is hardware, too!

I don't know about any LISP Machines, but I wonder: was the hardware model
on those machines equally susceptible to the C model, or were they "tuned"
to the LISP model?

| I think we will always have languages for programming hardware and
| languages for programming software.

well said, but maybe what we have now is hardware that supports languages
for programming hardware, to the _exclusion_ of hardware that supports
languages for programming software?

#<Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.

David Gadbois

Sep 30, 1994, 5:08:24 PM9/30/94
to
Erik Naggum <er...@naggum.no> wrote:
>one could even argue that the C model is employed at the hardware level,
>and thus makes hardware understandable in terms of software concepts.

Except that the C hardware model is one of a 16-bit, byte-addressed,
low memory latency, non-pipelined, single-issue uniprocessor. They
don't make many machines like that these days, and the model does not
help much in understanding 64-bit, word-addressed, high memory latency,
superpipelined, superscalar multiprocessors.

--David Gadbois

Bob Hutchison

Sep 30, 1994, 1:41:26 AM9/30/94
to
In <Cwsy2...@cogsci.ed.ac.uk>, je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <35urdq$3...@relay.tor.hookup.net> hu...@RedRock.com (Bob Hutchison) writes:
>>In <CwJ88...@cogsci.ed.ac.uk>, je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>In article <35kbl8$8...@relay.tor.hookup.net> hu...@RedRock.com (Bob Hutchison) writes:
>
>[...]
>
>>>>I think that [...] C is easier for them to comprehend, but I don't think
>>>> it is because it has easier abstractions. I think it is because
>>>>they can use the computer hardware itself as C's abstraction, that is,
>>>>use a concrete thing as an abstraction -- what an illusion :-) I guess
>>>>that I think you are basically right that the novice has a difficult time
>>>>forming a useful understanding of how the language works, but I think
>>>>this is difficult for any language. I think that in C's case the novice
>>>>can cheat.
>
>
>>The model they have seems actually quite good at predicting what the
>>hardware will do (CPU here, not IO devices so much). If they can then
>>translate that predictive model into a 'C' model, then they stand a
>>chance of predicting what a C program will do.
>
>I'm not sure what you have in mind here. It sounds like you're
>thinking of people who start by learning to program in assembler,
>so that they have to predict what the CPU will do.

OK, I agree I am not being clear :-) Any attempt to correct this may
lead to disaster, but here goes...

What I mean when I say 'hardware' is something with a very low level
of abstraction (maybe?) Perhaps high degree of concreteness? What
happens at this level is very well defined and is easily translated to
common knowledge. For example, most people understand what adding
two integers together means and what to expect as a result. Writing
a program in C largely involves manipulating things at this level. Of
course C supports 'aggregations' (I need words :-) of these manipulations
in the form of things like data structures and functions, but it doesn't
go too much beyond that.

What I meant when I said that lisp and other high level languages provide
support for 'software oriented' programs was that lisp provides support for
constructing 'aggregations' of these 'aggregations'. It isn't perfect, but...
For example, scheme allows you to manipulate functions as you would
any other data (type?), it has continuations. Some languages support
concurrency in the language (Erlang, BETA) rather than as a library.

Furthermore, languages like lisp and others are designed expecting
programs to be hierarchies of this kind, and so support it with
strange things like call-with-current-continuation.

Macros in CL vs C also demonstrate this difference. In C you have
straight text substitution. In CL your macro can kick in a function that
supplies the replacement text. The result is the same, as you say, but
the capabilities of the two are quite different. As is often said, you can
change the syntax of CL with macros to almost anything, not so for C.
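To illustrate the difference (my own sketch, not from the posts above): a CL macro runs arbitrary Lisp code at expansion time, where cpp can only substitute text:

```lisp
;; cpp:  #define SQUARE(x) ((x) * (x))  -- pure text substitution,
;; so SQUARE(i++) expands to ((i++) * (i++)) and increments i twice.
;; A CL macro computes its expansion with ordinary Lisp code, so it
;; can introduce a temporary and evaluate its argument exactly once:
(defmacro square (form)
  (let ((tmp (gensym)))
    `(let ((,tmp ,form))
       (* ,tmp ,tmp))))

;; (square (incf i)) expands to code that evaluates (incf i) once.
```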

Is this clearer? or worse? It is late at night now, I hope it doesn't show :-)

(Two asides: 1) translation to common knowledge is something that I
talked about earlier; people like to learn new things by relating them
to things they already know about. 2) understanding what happens
when two integers are added, even at this low level, isn't quite what
we might expect: we are really adding modulo some number and
hoping we don't have problems -- we cannot rely on this prior
knowledge/understanding even here -- surprise, surprise)

>
>>Children seem to be an exception to everything to do with learning
>>(do you have kids?). They seem to be better at learning.
>
>But do you think LOGO is hard for old folks to learn? (How old
>are people these days when they're learning C?)

I looked at LOGO in 1980 or so and have not since. I took it to be
a bit of a toy, though I knew this wasn't true. I had attended a 'debate'
between some of the originators of LOGO, smalltalk (Goldberg) and
APL (Iverson). I cannot remember the LOGO guy's name, he was
from MIT... how embarrassing! Both Iverson and Goldberg took LOGO
quite seriously -- I couldn't quite get past turtle graphics.

>
>>Use of a hardware model to aid in learning a programming language
>>would apply to most languages. I don't doubt that there is a suitable
>>hardware model to explain lisp, but I don't think it is the same one.
>>Unfortunately the one available to a C programmer is the one taught,
>>at least where I went to school.
>
>Can you say something more about this? I learned how to program
>before I knew anything about how the hardware worked.

The university I attended (still) presents basic CPU architecture
very early (brief overview in first year, details first thing second year).

I think you probably had a pretty good idea of some simple concepts
from hardware. The idea of a data store, arithmetic, updating values,
precise sequential operation (probably this is the first hard thing to
really learn, it is understood early, but...). Stuff like that. You had
used a calculator I would imagine. Even using paper to do scratch
calculations on would provide some (small) useful model to the
learner. (If you had a good grounding in doing calculations on paper,
would it ever occur to you that you could group some of those written
lines of calculations, and perform calculations on them in turn? Not
likely, and I think this is similar to a C programmer's difficulty in
thinking of manipulating his programs -- you can't do it in C, so you
don't think about it.)

>
>>The other difficulty with languages like the lisps and other high level
>>languages, is that they provide a fair bit of support for the
>>development of 'software'. I wonder what a hardware model of
>>a continuation in scheme or ml would look like, or a non-deterministic
>>program written using them?
>
>What is the hardware model of a coroutine or a thread? There are
>such models, of course, and there are similar ones for continuations.

Sure, but a novice doesn't know those models. Have you seen many
novice programmers that can handle even coroutines much less
a thread? Actually, I think my personal model of a thread is an
abstract one, not concrete -- I know how it is implemented because
I need to, but that knowledge isn't used when designing software.

I don't want to get too carried away with providing 'hardware' models
for programmers. I think the more abstract models are better suited
for the job. However, I think it is useful to think of these differing
models when wondering why very good C programmers cannot
immediately jump into lisp and other high level languages. And,
perhaps, there is enough to this that it can help these programmers
make that transition.

>
>I think it's much easier BTW to understand a Lisp interpreter
>(for a suitable simple Lisp) than a C compiler (even for a smallish
>subset of C). So in some ways it's easier to explain how Lisp
>works.

Really? My very first experience with C was to port it to a machine
I had to work with. It was the only compiler that I knew of at the
time that was even remotely portable. I learned C later. That
C compiler was awfully simple. I expect that ANSI C requires
somewhat more sophisticated techniques.

>>Scheme is relatively new to me, I assume that it is one of the simpler
>>lisps you refer to. While it is a nice simple clean language that I find
>>rather appealing, it supports a programming style that, in my opinion,
>>is fundamentally a software oriented style, not a hardware one.
>

>I'm intrigued by the phrase "a software oriented style". Could
>you say something more about it? (I assume it's related to your
>point about macros.)

I hope I've mostly answered this. There are other examples, scheme
and CL and others, have a number system that is quite independent
of the underlying hardware. This is both a good thing and a bad thing.
There are any number of programmers intensely put off by this lack
of 'control' over how their numbers will be represented. I have to
admit that I get a bit annoyed sometimes when I say "(/ 3 7)" and
get the same thing back rather than 0.4285... :-)
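For example, a sketch of the standard CL behaviour being described here:

```lisp
;; CL's numeric tower keeps exact rationals unless you ask for floats:
(/ 3 7)          ; => 3/7, an exact rational, not 0
(float (/ 3 7))  ; => a float near 0.4285714
(/ 3.0 7)        ; float contagion: one float argument gives a float result
```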

>>
>>My first reaction to the 'just run it' approach to lisp was a bit negative.
>>But when you think about it 'just running' lisp is probably not much
>>different than the kind of C programming we get. It also holds the
>>promise that as the programmer gains experience the other aspects
>>of lisp become available.
>
>Well, I think an interactive Lisp is better when learning the
>language. But the Basic model is fairly simple. You have the
>program source, you type "run", and it runs. C follows a more
>complex compile-and-link model.

Is that what you meant? I thought you were talking about giving
a programmer lisp and not worrying at first whether you get a lisp
program or a C-like program.

Interactive environments like lisp and smalltalk are clearly winners over
the compile/link/debug cycle of C (or the compile/link/reboot cycle in
Windows -- yes, it is even more dramatic in windows). However, some
of these C/C++ compilers on PCs are getting awfully fast.

>>>Anyway, in my view the following factors are responsible for much
>>>of the difficulty:
>>>
>>> (1) The fully parenthesized prefix syntax.
>>> (2) Peculiar, unfamiliar names such as car, cdr, cond, and lambda.
>>> (3) Hard topics such as recursion that tend to be mixed in with
>>> learning Lisp.
>>> (4) Confusing presentations of eval, quote, and "evaluates its
>>> arguments" that make the question of what gets evaluated
>>> seem much harder than it is. (The syntax also contributes
>>> to this, because it's so uniform.)
>>> (5) Teaching that has a mathematical flavour and emphasises the
>>> functional side of Lisp. This is great for some students but
>>> makes Lisp much harder for others. E.g. box-and-arrow diagrams
>>> are tied to the discussion of mutation, and hence aren't
>>> available when people are first trying to figure out what lists
>>> are. (A number of odd models can result from this.)
>>
>>Most of these points are illustrations of what I mean by support
>>for 'software'. These are software ideas not hardware.
>
>But 2-5 are presentation problems, not language problems.

Yes, but parenthesized prefix syntax, lambda, recursion, eval, quote,
and 'evaluates its arguments' (or not) are in support of manipulating
software by programs.

>
>I'm tempted at this point to try to write a "Lisp for C Programmers".

Personally, I think this is something you might want to consider seriously.
If not you then someone else.

>
>>I think that this discussion is interesting, and possibly even useful,
>>the real issue with lisp is social and political.
>
>I agree.
>
>I don't really expect Lisp to become super-popular, if Lisp means
>Common Lisp, or even Scheme. I think the syntax will always work
>too well against it, even though many people find it more readable
>rather than less (I am one). But Lisp can have a different syntax,
>and there are Lisp-like languages (whether we consider them in
>the Lisp family, strictly speaking, or not) such as Pop and Logo
>and Dylan that illustrate this (as does the never fully realized
>Lisp 2 and various alt syntaxes constructed over the years).
>
>So I think there's still hope for Lisp-like languages.
>

Yes, and there are smalltalk, Erlang, BETA, and Clean, which share
something with lisp.

I think we will always have languages for programming hardware
and languages for programming software.

--

Cyber Surfer

Sep 29, 1994, 3:12:09 AM9/29/94
to
In article <Cwsvr...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk "Jeff Dalton" writes:

> >C's conceptual machine model is similar to that of the most common
> >beginner/teaching languages, BASIC and Pascal, without many of the
> >limitations of those languages.
>
> I'd be interested in hearing what people think this conceptual
> machine model is. I suspect that Lisp could be explained in
> terms of that model or one fairly similar to it.

That's exactly how I think of it. Perhaps that's coz I've spent too
long programming in C/Basic/Forth/etc, but I don't have a problem
with that. I recall that I adapted to "Lisp" very quickly, just as
I seem to with any language that's new to me.



> I also wonder how people learning C understand certain things.
> For instance, if a variable has type int, there are a couple
> of ways to think of this:
>
> * It ensures that you can assign only ints to it.

Um, doesn't the compiler ensure that the result of an expression will
be coerced to an int, if that's possible? If it's not possible, then
the compiler complains. I don't know what CL is supposed to do, as
that's never been clear to me, even after reading CLtL1/2. I assume
that it's handled at runtime by the interpreter, but I've no idea
what the compiler is supposed to do about it.

> * It says how to interpret the bits at the address that corresponds
> to the variable (including how many of them are part of the var's
> value).

I'm also unsure about this. If I were to declare that a variable
is of a certain type, like (string 20), what would the variable's
bytes look like, on a CPU with 8 bit bytes and byte addressed
memory? That's a matter for the implementation, but the semantics
are unclear to me. What if I were to assign a literal string of
21 chars, or the value of an expression that may sometimes produce
a string with more than 20 chars?

I know, Steele's book isn't enough. I should read the ANSI CL doc.
I would, but it's too big for me, so I don't have a copy of it.
Hopefully, it'll someday be available on a CD-ROM, along with
any other useful Lisp documents (like CLtL).



> BTW, there's a clash between Basic and C when it comes to strings.
> Strings are much easier to deal with in a good Basic (to my mind, at
> least), but the required machine-level model is fairly complex.

Arrays are similar. In interpreted Basics in the early 80s, I could
redimension an array at runtime. In a compiled Basic for the same
dialect, the array would have to be statically dimensioned. There's
no really good reason for this, but that's how compiler writers
chose to do it.

> I don't think lists have to be harder to understand than that,
> and I suspect they're easier.

No harder than arrays, and probably a lot easier, but that might
depend on how the language is taught. One of the several books
I have on Prolog is incredibly bad at explaining unification, but
I know it can be done better, because of the other books I have.



> Do novice programmers normally study the hardware instruction set?
> That sounds like a rather hacker-like set of novices to me!

When I began programming, Basic was all I knew, and I thought
of the machine as running Basic directly! It was a pleasant
shock to discover that there was another level below that.
After that, _everything_ became a lot easier!



> > Lisp presents a different way of thinking about the problem
> >that does not fit comfortably with what the novice already "knows"
> >about programming.
>
> But why is that? I'm not sure it's true.

My experience tells me that what a novice "knows" can be very
surprising! Z80 processors do _not_ directly execute Basic, nor
do they know how to do I/O. That's what a ROM is there for, and
in the case of my first machine, that ROM was 12K. I neither knew
that, nor cared at that point.

I now "know" differently. I now "know" how little I really "know",
and how much I "assume". This reminds me of a James Burke TV series
years ago, which opened up my mind to the idea that my viewpoint
could be shaped by what I know (and what I "know"), and that it
might be very different from what someone else "knows" about the
world. In the case of this discussion, instead of a model of the
world, it applies to the conceptual model of a computer and in
particular, a computer language. For example, "Lisp".

The above quotes are used whenever I'm referring to something that
is defined by what we think we know about the world, about computers,
and about Lisp. In the case of Lisp itself, it might also mean the
dialect that we think of as Lisp. It could be Common Lisp, or it
could be Scheme, or some other Lisp. It could even be Dylan.

In the same way, I should really refer to "Basic", as that is
even more variable - there seem to be as many dialects as there
are machines, and probably even more than that. I certainly don't
mean ANSI Basic, as I've never seen that. Nobody I know uses it.

--
http://cyber.sfgate.com/examiner/people/surfer.html

Cyber Surfer

Sep 29, 1994, 4:28:38 AM9/29/94
to
> But do you think LOGO is hard for old folks to learn? (How old
> are people these days when they're learning C?)

I only learned Logo after nearly 10 years of programming, and
yet I had no trouble with it. Of course, I was familiar with
Lisp, which might have helped.

> I think it's much easier BTW to understand a Lisp interpreter
> (for a suitable simple Lisp) than a C compiler (even for a smallish
> subset of C). So in some ways it's easier to explain how Lisp
> works.

Agreed. This is one of the strengths of Forth. The source code
for a threaded Forth can be very easy to understand, while a
native code Forth will be much more complex, perhaps because it
won't be using interpretation so much.

What's the simplest interpreter you can write in Lisp, by the
way? What's the simplest compiler you can write in Lisp? Now
try the same things in a threaded Forth. A batch compiled
language suffers from a large handicap, because the code you
write can't make use of the existing code that implements the
language itself. Some people call this "leverage".

> Well, I think an interactive Lisp is better when learning the
> language. But the Basic model is fairly simple. You have the
> program source, you type "run", and it runs. C follows a more
> complex compile-and-link model.

I agree that an interactive language is better, but I only have
my own experience with an interactive Basic to judge this by.
I'd say that the batch compile approach used by C is just a more
complex way of doing the same thing as typing "run" in Basic.
It only looks different, but that's an implementation issue.

PJ Brown's book Writing Interactive Compilers and Interpreters
explains how an interactive language like Basic (or Lisp) can
also use a compiler, and make it transparent. I believe there
are Lisp and Smalltalk implementations that use such techniques,
altho I know that some Lisps require the compiler to be explicitly
invoked - a bit like a C compiler, in fact:

(edit) ; edit
(compile-file "foo.lsp") ; compile
(load "foo") ; link
(foo) ; run

There's no need for it to be this explicit, and I bet that there
will be Dylan implementations that do this better. I've yet to
see a Lisp system do it as neatly as Smalltalk/V, but I've no
doubt that it can be done, and probably has been done. The typical
Smalltalk browser window is an example of how simple the editor
and compiler can be, while the transcript window makes the "run"
stage of the edit/compile/run cycle simple _and_ flexible.

I'm still waiting for an interactive incremental C++ compiler. :-)
I don't call waiting two minutes for a "compile & link" interactive
or incremental. Actor and a number of Lisps can "cycle" in under two
seconds on the same machine. That's why I'm interested in Dylan,
as it seems to promise _both_ runtime speed and an interactive
incremental compiler in the one language. Unless Lisp systems can
offer me the same features, I'll probably switch to Dylan.

> I think that's a very serious problem with CL. It's difficult
> to extract a nice subset, especially if you're new to Lisp.
> Textbooks ought to help there, but many of them cover too
> much of the language (IMHO).

I not only found a usable subset, but one that I could also
implement. It's written in about 10,000 lines of C and uses a subset
of about the same size as XLISP, but consisting of different
features. For example, I've yet to add arrays or structures,
but I have multiple values and full SETF semantics.

> I think it's often very hard to find things in CLtL. Looking up
> a function name in the index and turning to the appropriate page is
> often not enough. For instance it may talk about something that
> "satisfies the test". That phrase is explained somewhere else
> (at the start of the chapter?). So CLtL has some problems as
> a reference manual, and it has other problems as an introduction.

As far as I'm concerned, it _is_ a reference manual. It's certainly
not a tutorial. I'm amazed that anyone could confuse it with one.
Perhaps my idea of tutorials and manuals is very different from
other people's?

> But 2-5 are presentation problems, not language problems.

Agreed. Just look at Dylan! Even if that's not "Lisp", it's a
good example of how to do it _right_. In my humble opinion,
of course. I'd rather use Dylan than most Lisps I've used,
despite the lack of macros. I love Lisp macros, but I'd rather
be programming, esp if there's a chance I might be paid to do
it. Right now, I'd have more chance of writing in Smalltalk
for a living, but I feel that Dylan is a more realistic language
than Smalltalk. That may be because I like the idea of sharing
code at the class level. It could certainly kill all the criticism
about footprints.



> Perhaps we could discuss briefly how C is taught? I learned C
> from a book, as I did for most other languages I know, so I don't
> know how it's normally done. (Though I've taught Lisp and Basic.)

I learned Basic on my own, at a keyboard. I only had a manual!
It took me about a day. (I'm still learning to _programme_,
which is significantly different.) I "learned" C from a book
(K&R), but it was the experience of programming in C that taught
me most of what I "know" about it. I'm still "learning"...

> I'm tempted at this point to try to write a "Lisp for C Programmers".

That would be an excellent idea! Good luck, if you do it. If you
want a co-author, I'm available. :-) Of course, you might be better
off without me, as I have some odd ways of working. So my friends
tell me, anyway! (But they all use C...or Basic.)

--
http://cyber.sfgate.com/examiner/people/surfer.html

Cyber Surfer

Sep 20, 1994, 11:46:10 AM9/20/94
to
In article <35kbl8$8...@relay.tor.hookup.net>
hu...@RedRock.com "Bob Hutchison" writes:

> pressure to develop software using Basic. Basic handled memory for you,
> automatically, and for many situations not well enough. Basic was also
> slow. If you needed something faster you went to C or Pascal. At the

Most Basics (well, the ones I've seen) were badly implemented, and
yet still given names like "Fast Basic". It seems that implementors
of Basic rarely read any books on compiler theory, not even PJ Brown's
Writing Interactive Compilers and Interpreters, which is a book that
might clear up a _lot_ of confusion in the minds of programmers. If
only a few more of them would read it. <sigh>

A small number of programmers chose Forth, instead of Basic. Forth
has a much higher standard of compiler writing, but Forth is an odd
case. So is QuickBasic, which also uses threaded code. It also uses
a few techniques written about, or at least hinted at, in Brown's
book. It's still supplied by Microsoft with DOS, I think, but I don't
remember ever seeing any documentation for it. These days, Visual
Basic will have taken over, and I'm not so sure about the quality
of VB's implementation. That's another issue.

> time C was available for free (thanks UNIX and I think Dr. Dobbs), pascal
> was not (it was expensive). Pascal was also usually pcode based and
> ran in proprietary environments. Pascal also fell down badly when you
> needed control of your memory. In other words, the available Pascal
> *implementations* were not different enough from Basic in a few key
> aspects. C was used. (Let me point out one other belief that will perhaps
> further illustrate the world at that time, it was the official position of
> a very large multi-national company that compilers were unreliable and
> produced poor code, and that this was *inherent* in the technology).

I used the P-system on a Z80 machine running at 1.76 MHz, and it
ran _very_ slowly. The code from a C compiler based on Small C
ran much faster. So, I can agree with you about that. However, the
compiler only just ran on that machine, and only by generating
assembly code.

My 2nd machine was a 68K running at 8 MHz. It also ran the P-system,
but at a decent speed. I wasn't aware of it being much faster than
the code from a C compiler for the same machine. That's subjective,
and I never did do a proper comparison. In any case, it was too late,
as most people at that time were beginning to use micros made by
IBM, or clones of those machines. I've yet to use a Pascal compiler
for that platform.

I'm merely adding to your comments with my own experience. My own
belief is that ignorance of compiler theory is a big factor, both
with programmers and with the writers of many early compilers for
micros. This is one of those historical "accidents" that's hard to
correct. It's worth trying, tho. I often refer programmers to PJ
Brown's book, which is still being published. It's worth reading
even if you're not interested in compilers, simply for the Twelve
Deadly Sins of Programming. It's a book for all programmers.

> My real disagreement with your theory is that I don't think it is sufficient
> to explain why C is actually used. Why does an experienced programmer
> use it? In my case it is because of the history I described above. C has

One reason could be that that's the job offered to them. I see ads
for C/C++ jobs all the time, and it's well known that not everyone
(ahem, a manager, for example) knows as much about the job as the
programmer who actually _does_ it.

I hear a lot about this from my friends in network support, except
that in their case, it applies to network software. I'm told that
when they ask why some people do things the way they do, the answer
is just that they've "always done it that way". When people like
that are in charge of networks with 1000+ machines, I start to worry.

Why should I worry less about managers of programmers? I hope
someone can disagree with me about this, as it may just be personal
experience, either mine or my friends. Please tell me if it is!

> How do you get an experienced programmer to consider something other
> than C? I've found it tremendously effective to (persistently) encourage
> programmers to try something serious in smalltalk. So far I've had a
> 100% success rate in getting them to realise the limitations of C/C++.

I've succeeded in converting a friend from C to Actor, which is a
Smalltalk-like language. This was before C++ became so popular.
The problem is, the friend isn't a professional programmer anymore.
He's moved to network support, where he writes batch files that
people claim are too complicated for them to understand, until
he asks them if they tried _reading_ the file. He despairs.

> Smalltalk isn't magic, but it has a really nice bunch of tools that go
> with it. The lisp environments would work just as well I think, but until
> recently they were too expensive. (These guys all bought smalltalk
> with their own money -- I can be very persistent :-).

You might like to look at Dylan when the first system ships. I'm
hoping that it'll have a better chance than Smalltalk, which is
doing pretty good in some places, and getting fabulous reviews.
I won't suggest that it'll beat C++, but who knows? We can hope.

> Even shorter, C/C++ is being used for the same reasons COBOL is
> still used. (an awfully long post to come to that conclusion, sorry :-)

So C/C++ will be around for a few decades after we all "know" that
it's dead? ;-) Don't worry about the length, as I think you made a
lot of good points, and they add support to your conclusion.

Martin Rodgers

Cyber Surfer

Sep 30, 1994, 5:36:40 AM
In article <CwwoF...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk "Jeff Dalton" writes:

> I thought my article was reasonably clear in context. Maybe not.
> Anyway, I was responding to an article that said such things as "the
> usual lisp advocate response".

I'm still not sure what that means. Was it a reference to the advocacy
thread or not? It's a question that asks for a yes or no.

> Why not assume I'm talking about something in *this* thread,
> namely: Comparison: Beta - Lisp?

I am assuming that, but it doesn't make the "advocacy" reference
unambiguous. Perhaps that's coz I see so many ways of interpreting
people's comments. I don't see _this_ thread as one about advocacy.
It appears to me to be about Lisp and Beta, but I don't recall seeing
anyone trying to "advocate" the use of one language or another.

That could be coz I don't see all comparison threads as advocacy
threads. It's possible that I missed that aspect, or I just didn't
consider it strong enough in this thread.

--
http://cyber.sfgate.com/examiner/people/surfer.html

Bob Hutchison

Oct 3, 1994, 10:35:23 AM
In <19940930T0...@naggum.no>, Erik Naggum <er...@naggum.no> writes:
>[Bob Hutchison]
>
>| What I mean when I say "hardware" is something with a very low level of
>| abstraction (maybe?) Perhaps high degree of concreteness? What
>| happens at this level is very well defined and is easily translated to
>| common knowledge. For example, most people understand what adding two
>| integers together means and what to expect as a result. Writing a
>| program in C largely involves manipulating things at this level. Of
>| course C supports "aggregations" (I need words :-) of these
>| manipulations in the form of things like data structures and functions,
>| but it doesn't go too much beyond that.
>
>this gave me an idea, since I don't think hardware is easy. what happens
>in the real hardware is incredibly complex, and at the >100MHz speeds we're
>enjoying now, it is not a piece of cake to deal with, even in the friendly
>ALU, which isn't anymore. now, we have been able to formulate a relatively
>simple model of how the hardware works, of what the hardware processes take
>as input and give as output, even though real life is nowhere near this. I
>believe it is this model that C has managed to exploit, and LISP has not.

This is more or less what I was trying to say, thanks for the help :-) I don't
suggest that a novice has any kind of accurate model of hardware, but I do
think they have a notion of a simple machine that corresponds very well
to the techniques supported by C.

>
>one could even argue that the C model is employed at the hardware level,
>and thus makes hardware understandable in terms of software concepts.

I think that this is part of what I was trying to get at. C allows the programmer
to easily predict the behaviour of a program because of this similarity.

>
>for me, programming in MACRO-10 on the DEC 10 some 10 years ago provided an
>interesting insight into some of the things in Common LISP today. such as
>the constants used in the `boole' function. small things, but so familiar
>that they made me laugh with recognition. LISP is hardware, too!

Sure, but lisp addresses a lot more than what C addresses.

>I don't know about any LISP Machines, but I wonder: was the hardware model
>on those machines equally susceptible to the C model, or were they "tuned"
>to the LISP model?

I don't know, but I'd bet they tried to hide the hardware and present a good
lisp implementation. In another thread someone posted some code written
to exploit the hardware of a lisp machine, it didn't look like the kind of
thing people would want to get involved with too often.

>
>| I think we will always have languages for programming hardware and
>| languages for programming software.
>
>well said, but maybe what we have now is hardware that supports languages
>for programming hardware, to the _exclusion_ of hardware that supports
>languages for programming software?

I think that hardware is designed to support already common environments,
like UNIX and DOS/Windows. From the perspective of the hardware
vendor there isn't much choice. Of course this introduces a kind of hurdle
different kinds of technology must jump to make maximum use of the
current hardware. However, these 'alternative' technologies have a
different area of competence that can be used to their advantage. They
can be used to develop software less expensively. OK, OK... now we've
come full circle and are in danger of becoming repetitive :-) How do
we get programmers to look at the alternatives?

Bob Hutchison

Oct 3, 1994, 11:04:15 AM
In <780075...@wildcard.demon.co.uk>, cyber_...@wildcard.demon.co.uk (Cyber Surfer) writes:
>In article <35kbl8$8...@relay.tor.hookup.net>
> hu...@RedRock.com "Bob Hutchison" writes:
>
>Most Basics (well, the ones I've seen) were badly implemented, and
>yet still given names like "Fast Basic".
[ ... ]

> PJ Brown's
>Writing Interactive Compilers and Interpreters

Personally, I think that the quality-of-implementation issue was at the
time less important than the marketing ('Written in BASIC!!') and
productivity issues.

When was Brown's book written?

>A small number of programmers chose Forth, instead of Basic. Forth
>has a much higher standard of compiler writing, but Forth is an odd
>case.

Forth is an interesting language, but by the time Forth became an option
for me, C had already won. The first papers I remember about threaded
interpreters were around '83-'84, though certainly the technique was not
new then (I had used it myself in a UI definition language I wrote). These
papers are what brought Forth into my consciousness.

>I used the P-system on a Z80 machine running at 1.76 Mhz, and it
>ran _very_ slowly. The code from a C compiler based on Small C
>ran much faster. So, I can agree with you about that. However, the
>compiler only just ran on that machine, and only by generating
>assembly code.

There were a bunch of what we called 'peep-hole' optimisers available
or written for specific ports of the language. I know I had a couple
I would run in succession to optimise the code. They made a big
difference. (They just looked for local patterns that were known to be
generated by the compiler and that could be sped up.)

I cannot remember which version of C it was that I ported, but
Small C is familiar... This compiler was in fact quite portable for
the time. Step 1, change the assembler output from the host machines
to the target's; Step 2, recompile the compiler on the host machine
using the new assembler output; Step 3, take the assembler to the
target machine and assemble it, and... ported (there were a few iterations
of course :-)

>> My real disagreement with your theory is that I don't think it is sufficient
>> to explain why C is actually used. Why does an experienced programmer
>> use it? In my case it is because of the history I described above. C has
>
>One reason could be that that's the job offered to them.

The experienced programmers become the ones who place the ads for new
programmers.

>> Smalltalk isn't magic, but it has a really nice bunch of tools that go
>> with it. The lisp environments would work just as well I think, but until
>> recently they were too expensive. (These guys all bought smalltalk
>> with their own money -- I can be very persistent :-).
>
>You might like to look at Dylan when the first system ships.

I've been following Dylan pretty closely for what seems like years
now. We'll see. I think it will have to provide some pretty fancy
tools and be used to implement some pretty significant applications
before it gets taken seriously by the software development
community (this has to include developers, managers, marketing,
sales, press, educators).

>> Even shorter, C/C++ is being used for the same reasons COBOL is
>> still used. (an awfully long post to come to that conclusion, sorry :-)
>
>So C/C++ will be around for a few decades after we all "know" that
>it's dead? ;-)

How can you think otherwise? :-)
