Comparison Lisp <--> Beta
A. The BETA language
1. Has an incredible symmetry between data and executable program code.
Much better than Lisp. Syntax is such that execution goes strictly
from left to right, except for loops.
2. Lexically scoped, but a module system (the "fragment" system) permits
   putting parts of a program into separate files - effectively splitting
   interface and implementation.
3. Multiple values, but symmetrically for both input (function arguments)
and output (function return values). A program can be viewed as moving
data around between black boxes.
4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
   all correspond to "pattern"s (see the closure sketch after this list).
5. Symmetry data/code provides automatically for different kinds of
inheritance:
outer level inner level
----------- -----------
code code local function definitions
data code local variables
code data ??
data data defstruct :include
6. "Part objects" cover the issues
- local functions with lexical scope,
- defstruct :include,
- defclass subclassing, but only single inheritance.
7. The ":<" token makes up
- generic classes (not present in Lisp, you would have to use
a DEFCLASS in a non-null lexical environment),
- generic functions,
- &key parameters for functions.
8. The "::<" token specifies
- subclassing,
- methods for generic functions,
- keyword arguments for functions.
9. CALL-NEXT-METHOD goes the other way around: the most general method
determines the general behaviour, the most specific method only
plays the role of a fixup.
10. Compile-time typing where possible, run-time typing where necessary.
For example, you can put functions into plain lists or into lists
of functions. The latter saves some run-time checks. Of course,
general lists and lists of functions share the same definitions for
insert, delete etc.
11. Concurrency, i.e. deterministic parallelism. (Would make
WITH-PACKAGE-ITERATOR and WITH-HASH-TABLE-ITERATOR obsolete in CL.)
12. No multiple inheritance.
No generic functions with more than one dispatching argument.
    The macro system is quite limited; it violates lexical scoping and is
    therefore error-prone to use.
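To make point 4 concrete in Lisp terms, here is a rough Common Lisp
sketch (only an approximation, not Beta code): each of these forms
allocates some variables and then runs some code, which is exactly the
uniform shape that a single pattern captures.

;; Only an approximation: the common "allocate slots, then run code"
;; shape that one Beta pattern construct covers uniformly.
(progn (print 1) (print 2))          ; no slots, only code
(let ((x 1) (y 2)) (+ x y))          ; slots x,y plus code
(defstruct point x y)                ; slots x,y, (almost) no code
(let ((count 0))                     ; slots plus code, retained: a closure
  (lambda () (incf count)))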
B. The Mjolner BETA implementation
1. It is as easy to generate machine code for Beta programs as for Lisp.
2. Generates standalone executables. Hello-world is just 100 KB on Linux.
3. Automatic garbage collection (generational, scavenging).
4. Speed:
consing up 1000 closures into a list
Mjolner 0.049 sec
CLISP 0.064 sec
GCL 0.168 sec
some integer array hacking
C 4.2 sec
Mjolner 111 sec
GCL 288 sec
CLISP 415 sec
(All timings on a 486/33.)
5. Libraries for all kinds of container classes, streams, processes,
exceptions, X11, Motif.
C. The grains of salt:
1. The syntax is orthogonal, but you have to get used to it.
2. Unhygienic macros ("super patterns").
3. No nested comments.
4. Need a dictionary for the words "pattern", "part object", "virtual pattern".
For more information about BETA, look into the comp.lang.beta newsgroup.
Disclaimer: I have no relation with Mjolner Informatics, except that they gave
me a copy of their Mjolner compiler for free.
Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de
> 4. PROGN, LET, Lisp closures, list objects, defstruct objects, CLOS instances
> all correspond to "pattern"s.
So what are these "patterns" anyway? It sounds as if they are very
close to, if not identical with, lisp closures. After all, can't each of
the above lisp constructs be implemented as sugar for closures?
rodrigo vanegas
r...@cs.brown.edu
Bruno> Since a new language called BETA is gaining attention, which
Bruno> has some features Lisp is proud of, I attempt a comparison
Bruno> between Beta and Lisp.
I understand that this group is dedicated to BETA, but the
subject says "Comparison: Beta - Lisp". I didn't find much
comparison; I found mostly a list of features that are common to
both languages. It would be nice if somebody posted a real
comparison, as just about the only thing I know about Beta is that it
has "pattern"s.
Petri
--
--------------------------------------------------------------------------
Petri Virkkula | Internet: Petri.V...@hut.fi
JMT 11 E 125 | X.400 : /G=Petri/S=Virkkula/O=hut/ADMD=fumail/C=fi/
02150 Espoo | Voice : +358 0 455 1277
FINLAND |
--------------------------------------------------------------------------
You're basically pretty much correct. A pattern is essentially the
same thing as a closure, the primary difference being that a pattern
is static. Essentially, a pattern is a uniform code construct which
will be instantiated into a closure at runtime.
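In Lisp terms, a rough sketch (an analogy only, not Beta code): the
lambda expression is the static "pattern", and each evaluation
instantiates it into a fresh closure.

;; The LAMBDA expression is the static "pattern"; every call to
;; MAKE-POINT instantiates it into a fresh runtime "object" (a closure).
(defun make-point (x y)
  (lambda (op)
    (ecase op (:x x) (:y y))))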
To make a jump backwards to the initial comparison idea, the key
conceptual difference between Beta and Lisp is that Beta is very
static: patterns are static entities which are compiled into
programs. At runtime, a pattern is instantiated into an "object"
which, depending on how it's used, may be a value or a procedure, or
any combination thereof (just like a closure, because it IS a
closure).
The important differences come about because of three things:
<1> Static typing - in Beta, the types of variables are always declared
    statically. Programs in Beta are very static, which makes the language
    very different from the dynamic nature of lisp programming.
<2> Single inheritance- the Beta object model uses only single inheritance.
The designers decided that rather than try to work out a system for
resolving the confusion caused by MI (namespace collisions, repeated
inheritance, etc.), it was better to do without it. I don't necessarily
agree with them, but it did result in keeping Beta simple.
<3> Virtual patterns - the object model is rather different. Instead of
the CLOS model where a child method overrides a parent method, Beta
uses the Simula model, where the child extends the parent method. The
parent implementation of a method contains calls to inner, which are
dynamically bound to the correct extension for the actual child type.
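For Lisp readers, here is a rough CLOS emulation of the inner idea (the
names draw and inner-part are invented for this sketch): the general
method owns the control flow, and the child supplies an extension - the
reverse of CALL-NEXT-METHOD.

;; Hypothetical sketch: emulating Beta's "inner" in CLOS.
(defclass shape () ())
(defclass circle (shape) ())

(defgeneric draw (s))
(defgeneric inner-part (s))     ; stands in for Beta's "inner"

(defmethod draw ((s shape))     ; the *general* method sets the frame
  (format t "set up~%")
  (inner-part s)                ; dynamically bound to the child's extension
  (format t "clean up~%"))

(defmethod inner-part ((s shape)) nil)
(defmethod inner-part ((s circle))
  (format t "draw a circle~%"))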
Going further, since everything (functions, procedures, methods, values)
is just a pattern, the virtual pattern idea can be applied to *any*
method at all. This results in a *very* interesting program model,
where you can write any procedure to be dynamically extensible. This can
actually be used to write lambda expressions! It's quite elegant...
<MC>
--
|| Mark Craig Chu-Carroll: <MC> || "I'm not dumb,
|| CIS Grad, U of Delaware || I just have a command of thoroughly
|| PGP key available by finger || useless information."
|| car...@cis.udel.edu || -Calvin
I've posted a response to comp.lang.beta.
I can also mail it to anyone interested.
Ole
ovill...@daimi.aau.dk
Ole Villumsen, Computer Science Department, Aarhus University, Denmark
You're basically pretty much correct. A pattern is essentially the
same thing as a closure, the primary difference being that a pattern
is static. Essentially, a pattern is a uniform code construct which
will be instantiated into a closure at runtime.
OK, so let's equate a pattern with a lambda expression which is
instantiated into a closure at runtime.
To make a jump backwards to the initial comparison idea, the key
conceptual difference between Beta and Lisp is that Beta is very
static: patterns are static entities which are compiled into
programs.
And lambda expressions are static entities which are compiled into
functions.
The important differences come about because of three things:
<1> Static typing - in Beta, the types of variables are always declared
    statically. Programs in Beta are very static, which makes the language
    very different from the dynamic nature of lisp programming.
Agreed, although it wouldn't take much of a stretch to imagine a lisp
which required type declarations. Admittedly, most lisp users would
hate that.
<2> Single inheritance- the Beta object model uses only single inheritance.
The designers decided that rather than try to work out a system for
resolving the confusion caused by MI (namespace collisions, repeated
inheritance, etc.), it was better to do without it. I don't necessarily
agree with them, but it did result in keeping Beta simple.
But it looked to me like the object system that Beta uses is one based
on closures. That sort of object system has been used in lisp and has
the same sort of single inheritance. For that matter, I assume
that one could build a CLOS-style multiple inheritance object system
in Beta which did not use closures in such a direct way.
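For concreteness, the kind of closure-based object system I mean looks
something like this sketch (invented names; single inheritance comes
from delegating to one parent):

;; A message-passing "object" built from a closure; "inheritance" is
;; delegation of unknown messages to a single parent closure.
(defun make-counter (&optional parent)
  (let ((count 0))
    (lambda (msg)
      (case msg
        (:inc (incf count))
        (:value count)
        (t (if parent
               (funcall parent msg)
               (error "unknown message ~S" msg)))))))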
<3> Virtual patterns - the object model is rather different. Instead of
the CLOS model where a child method overrides a parent method, Beta
uses the Simula model, where the child extends the parent method. The
parent implementation of a method contains calls to inner, which are
dynamically bound to the correct extension for the actual child type.
See my last comment.
Going further, since everything (functions, procedures, methods, values)
is just a pattern, the virtual pattern idea can be applied to *any*
method at all. This results in a *very* interesting program model,
where you can write any procedure to be dynamically extensible. This can
actually be used to write lambda expressions! It's quite elegant...
Could you give an example here which doesn't look like a closure? I
guess I'm looking for something which would not have a straightforward
translation into lisp. If there's something new here I'd really like
to know.
-Mark
--
Mark Friedman
NASA-Ames Research Center
MS 269-2
Moffett Field, CA 94035-1000
vmail: (415) 604-0573
email: bo...@ptolemy.arc.nasa.gov
: ...
: some integer array hacking
: C 4.2 sec
: Mjolner 111 sec
: GCL 288 sec
: CLISP 415 sec
: (All timings on a 486/33.)
: ...
Are these numbers right? I've seriously used GCL and CLISP myself and
had some arguments with a "true believer Lisper" who thought "Lisp _does_
compete reasonably with C for numeric stuff", but I never bothered to do
the timing tests, and always assumed it wasn't this bad. Is it, really?
Also, I was interested in Beta until one minute ago, because of this.
Are there intrinsic reasons for this that will prevent it from ever
improving?
- Lenny Gray -
> Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
>
> : ...
> : some integer array hacking
> : C 4.2 sec
> : Mjolner 111 sec
> : GCL 288 sec
> : CLISP 415 sec
> : (All timings on a 486/33.)
> : ...
Ouch!
> Are these numbers right? I've seriously used GCL and CLISP myself and
> had some arguments with a "true believer Lisper" who thought "Lisp _does_
> compete reasonably with C for numeric stuff", but I never bothered to do
> the timing tests, and always assumed it wasn't this bad. Is it, really?
I don't know what Bruno's "some integer array hacking" program does,
but the performance ratios shown do not reflect my personal experience.
For example, here's a very simple "some integer array hacking" program
I wrote for the occasion:
/* siah.c - some integer array hacking in C */
int main(void)
{
  int a[10000], i, j;
  for (j=0; j<1000; j++)
    for (i=0; i<10000; i++)
      a[i] = i*4;
  return 0;
}
(* siah.bet - some integer array hacking in BETA *)
ORIGIN '~beta/basiclib/v1.4/betaenv'
--program:descriptor--
(#
a: [10000]@integer;
do
(for j:1000 repeat
(for i:a.range repeat i*4 -> a[i] for);
for);
#)
And here are the run-times (486/66; BETA compiler v5.0(1); gcc v2.5.8):
BETA 4.41
gcc 3.10
gcc -O6 1.24
In this case, the ratio between BETA and unoptimized C was 1.4, while
the ratio between BETA and optimized C was 3.6.
Bear in mind that the BETA compiler does not currently perform nearly
as much optimization as gcc does. Also, the code generated by the BETA
compiler contained an index check (which could have been removed, had
the optimizer been smarter) at each assignment, while the C code
obviously did not.
> Also, I was interested in Beta until one minute ago, because of this.
Hopefully, your interest has been reawakened.
/Jacob Seligmann
------------------------------------------------------------------------
Mjolner Informatics ApS Phone: (+45) 86 20 20 00 ext. 2754
Science Park Aarhus Direct: (+45) 86 20 20 11 - 2754
Gustav Wieds Vej 10 Fax: (+45) 86 20 12 22
DK-8000 Aarhus C, Denmark Email: jac...@mjolner.dk
------------------------------------------------------------------------
BETA is better
------------------------------------------------------------------------
From the point of view of a Lisp programmer, a pattern consists of
* a specification of variables (call them "variables" or "closure variables"
or "slots"), and
* a piece of code which is executed after the storage for the variables has
been allocated (call it "initialization method" or simply "program").
But that's only one of many aspects of patterns...
Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de
>> So what are these "patterns" anyway? It sounds as if they are very
>> close to, if not identical with, lisp closures. After all, can't each of
>> the above lisp constructs be implemented as sugar for closures?
> From the point of view of a Lisp programmer, a pattern consists of
>
> * a specification of variables (call them "variables" or "closure variables"
> or "slots"), and
>
> * a piece of code which is executed after the storage for the variables has
> been allocated (call it "initialization method" or simply "program").
Ok, so consider the following:
(lambda (x y) (print "whatever...") (funcall x y))
This lambda abstraction, which evaluates to a closure, has "a
specification of variables", X and Y, and "a piece of code which is
executed after the storage for the variables has been allocated", the
PRINT followed by the FUNCALL.
I don't see any difference yet...
> But that's only one of many aspects of patterns...
Why is it that every explanation of patterns I've come across so far
always includes a "more to come" disclaimer at the end?! I'm
beginning to wonder if these so-called "patterns" can be defined at
all!
rodrigo "tired of playing detective" vanegas
r...@cs.brown.edu
Of course these numbers are right. Here is another set of figures, for
the same integer array hacking benchmark:
C 7.7 sec
Lucid CL 3.0 production mode 47 sec
Lucid CL 3.0 development mode 226 sec
CLISP 512 sec
(All timings on a Sun Sparcstation IPC, this time.)
Lisp compilers produce good code, but they can't compete with good C
compilers in this case.
Here's the code, if you want to convince yourself:
(defun fannkuch (&optional (n (progn
(format *query-io* "n = ?")
(parse-integer (read-line *query-io*))
) ) )
(unless (and (> n 0) (<= n 100)) (return-from fannkuch))
(let ((n n))
(declare (fixnum n))
(let ((perm (make-array n :element-type 'fixnum))
(perm1 (make-array n :element-type 'fixnum))
(zaehl (make-array n :element-type 'fixnum))
(permmax (make-array n :element-type 'fixnum))
(bishmax -1))
(declare (type (simple-array fixnum (*)) perm perm1 zaehl permmax))
(declare (fixnum bishmax))
(dotimes (i n) (setf (svref perm1 i) i))
(prog ((\t n))
(declare (fixnum \t))
Kreuz
(when (= \t 1) (go standardroutine))
(setf (svref zaehl (- \t 1)) \t)
(decf \t)
(go Kreuz)
Dollar
(when (= \t n) (go fertig))
(let ((perm0 (svref perm1 0)))
(dotimes (i \t) (setf (svref perm1 i) (svref perm1 (+ i 1))))
(setf (svref perm1 \t) perm0)
)
(when (plusp (decf (svref zaehl \t))) (go Kreuz))
(incf \t)
(go Dollar)
standardroutine
(dotimes (i n) (setf (svref perm i) (svref perm1 i)))
(let ((Spiegelungsanzahl 0) (k 0))
(declare (fixnum Spiegelungsanzahl k))
(loop
(when (= (setq k (svref perm 0)) 0) (return))
(let ((k2 (ceiling k 2)))
(declare (fixnum k2))
(dotimes (i k2) (rotatef (svref perm i) (svref perm (- k i))))
)
(incf Spiegelungsanzahl)
)
(when (> Spiegelungsanzahl bishmax)
(setq bishmax Spiegelungsanzahl)
(dotimes (i n) (setf (svref permmax i) (svref perm1 i)))
) )
(go Dollar)
fertig
)
(format t "The maximum was ~D.~% at " bishmax)
(format t "(")
(dotimes (i n)
(when (> i 0) (format t " "))
(format t "~D" (+ (svref permmax i) 1))
)
(format t ")")
(terpri)
(values)
) ) )
Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de
With the important extension that it is not only possible to declare
x will be of type A,
but also
x will belong to some fixed subtype of type A.
This makes it possible to have generic container classes.
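In Common Lisp you can only approximate this with a run-time check; a
hypothetical sketch of a container whose element type is fixed per
instance:

;; Run-time approximation of "x is some fixed subtype of A"; Beta
;; checks this statically, here the check happens on every insert.
(defstruct (typed-list (:constructor make-typed-list (element-type)))
  element-type
  (items '()))

(defun typed-insert (x tl)
  (assert (typep x (typed-list-element-type tl)))
  (push x (typed-list-items tl))
  tl)

;; e.g. (typed-insert 3 (make-typed-list 'integer))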
> Programs in Beta are very static, which makes the language very
> different from the dynamic nature of lisp programming.
I don't agree here. This depends on your programming style. You can
perfectly well embed the lambda calculus in Beta.
Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de
Yep, that's pretty much it.
] The important differences come about because of three things:
]
] <1] Static typing - in Beta, the types of variables are always declared
]     statically. Programs in Beta are very static, which makes the language
]     very different from the dynamic nature of lisp programming.
]
]Agreed, although it wouldn't take much of a stretch to imagine a lisp
]which required type declarations. Admittedly, most lisp users would
]hate that.
]
] <2] Single inheritance- the Beta object model uses only single inheritance.
] The designers decided that rather than try to work out a system for
] resolving the confusion caused by MI (namespace collisions, repeated
] inheritance, etc.), it was better to do without it. I don't necessarily
] agree with them, but it did result in keeping Beta simple.
]
]But it looked to me like the object system that Beta uses is one based
]on closures. That sort of object system has been used in lisp and has
]the same sort of single inheritance. For the same matter, I assume
]that one could build a CLOS type multiple inheritance object system
]in Beta which did not use closures in such a direct way.
Again, you're pretty much right. It's *very* similar to closure-based
object systems. It reminds me quite a lot of a closure-based object
system I implemented for Scheme as an undergrad.
] <3] Virtual patterns - the object model is rather different. Instead of
] the CLOS model where a child method overrides a parent method, Beta
] uses the Simula model, where the child extends the parent method. The
] parent implementation of a method contains calls to inner, which are
] dynamically bound to the correct extension for the actual child type.
]
]See my last comment.
]
] Going further, since everything (functions, procedures, methods, values)
] is just a pattern, the virtual pattern idea can be applied to *any*
] method at all. This results in a *very* interesting program model,
] where you can write any procedure to be dynamically extensible. This can
] actually be used to write lambda expressions! It's quite elegant...
]
]Could you give an example here which doesn't look like a closure. I
]guess I'm looking for something which would not have a straightforward
]translation into lisp. If there's something new here I'd really like
]to know.
No, I can't give an example which doesn't look like a closure, because,
as I've said, the comparison to closures is pretty damned exact. The
differences that I've pointed out are primarily differences in
programming style that show up in typical programs.
Programming styles in Beta do end up being somewhat different from
styles in Lisp, primarily because of the factors I've mentioned
above. The Beta idioms could be used to implement the programming
styles used by most Common Lisp people, and the Common Lisp idioms could
be used to implement the Beta style. It's not a semantic difference,
just a stylistic one.
<MC>
--
|| Mark Craig Chu-Carroll: <MC> || "A libertarian is just a republican
|| CIS Grad, U of Delaware || who takes drugs"
|| PGP Key Available, by finger || - Bob Black
|| car...@cis.udel.edu ||
> Lenny Gray <lenn...@netcom.com> wrote:
> >
> > Are these numbers right? I've seriously used GCL and CLISP myself and
>
> Of course these numbers are right....
"Right" or wrong, they are meaningless and should be ignored in the
absence of additional information. See below.
> Here's the code, if you want to convince yourself:
OK, I have convinced myself not to pay any attention to your results.
See below.
So where and how is the timing measurement taken? This is crucial; you
cannot expect anyone to glean useful information from a timing test
unless you can demonstrate that it was taken in a reasonable fashion,
which means limiting the test to the things you are claiming to measure
by it. Unless you were careful to exclude other processes while the test
was running, you may have also measured things completely extraneous to the
code you intended to measure. I'm not saying that you don't know these
things, but you need to be explicit about them when you publish a
so-called benchmark.
Also, the above code includes all sorts of stuff that has nothing to do
with integer array hacking, such as parsing, formatting (which involves
printed output and most likely conses), array allocation, and so on. This
is a *terrible* benchmark - it's quite possible that the timings you
published even include time waiting for a window to become exposed for
output, as well as the time taken by the user to type input.
This annoys me big time. Why not post the reply to the newsgroups
that contained the original article?
>A. The BETA language
>
>1. Has an incredible symmetry between data and executable program code.
> Much better than Lisp.
This is useless without some details. All I know is that you think
Beta has an "incredible symmetry".
Most of the rest of the message has the same problem.
>2. Lexically scoped, but a module system (the "fragment" system) permits
>   putting parts of a program into separate files - effectively splitting
>   interface and implementation.
Virtually every Lisp in the universe lets you put parts of the
program in separate files. So what exactly is the issue here?
>7. The ":<" token makes up
> - generic classes (not present in Lisp, you would have to use
> a DEFCLASS in a non-null lexical environment),
So in fact what you mean is *Common* Lisp, not Lisp.
>9. CALL-NEXT-METHOD goes the other way around: the most general method
> determines the general behaviour, the most specific method only
> plays the role of a fixup.
Again more is needed.
>11. Concurrency, i.e. deterministic parallelism. (Would make
> WITH-PACKAGE-ITERATOR and WITH-HASH-TABLE-ITERATOR obsolete in CL.)
I think you are confused about the nature and role of these
iterators. But perhaps not. It's impossible to tell from
the little you say.
>4. Speed:
> consing up 1000 closures into a list
> Mjolner 0.049 sec
> CLISP 0.064 sec
> GCL 0.168 sec
> some integer array hacking
> C 4.2 sec
> Mjolner 111 sec
> GCL 288 sec
> CLISP 415 sec
> (All timings on a 486/33.)
It's necessary to see the code. What declarations did you use
in Common Lisp? Also, try a faster Common Lisp.
>> (dotimes (i n) (setf (svref perm1 i) i))
BTW, you can't just use dotimes even if n is declared to be a fixnum.
I think this is explained in the comp.lang.lisp FAQ. If not, it was
explained in the list of "pitfalls" I posted a while back, which
perhaps I should update.
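The usual workaround looks something like this (a sketch of mine, not
the FAQ's own wording):

;; Some compilers won't assume (1+ i) stays a fixnum even when N is
;; declared fixnum; declaring I and wrapping the step in THE helps.
(defun init-perm (perm1 n)
  (declare (type simple-vector perm1) (fixnum n))
  (do ((i 0 (the fixnum (1+ i))))
      ((>= i n) perm1)
    (declare (fixnum i))
    (setf (svref perm1 i) i)))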
-- jeff
May not be the case: I've timed your function using both CMUCL 17c and
Lucid CL 4.0.0; CMUCL is 3 times faster than Lucid, so:
CMUCL (estimate!) 15 sec
which is just a factor of 2 off of C. And here is a possible explanation
of that remaining factor: I *guess* svref still checks whether the index is
within the bounds of the vector, whereas there is probably no such check
in the binary produced by the C compiler.
regards, Bernhard
--------------------------------------------------------------------------
Bernhard Pfahringer
Austrian Research Institute for
Artificial Intelligence bern...@ai.univie.ac.at
Schottengasse 3 Fax: (+43 1) 532-0652
A-1010 Vienna, Austria Phone: (+43 1) 533-6112
> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0; CMUCL is 3 times faster than Lucid, so:
> CMUCL (estimate!) 15 sec
>
> which is just a factor of 2 off of C. And here is a possible explanation
> of that remaining factor: I *guess* svref still checks whether the index is
> within the bounds of the vector, whereas there is probably no such check
> in the binary produced by the C compiler.
>
> regards, Bernhard
> --------------------------------------------------------------------------
> Bernhard Pfahringer
> Austrian Research Institute for
> Artificial Intelligence bern...@ai.univie.ac.at
> Schottengasse 3 Fax: (+43 1) 532-0652
> A-1010 Vienna, Austria Phone: (+43 1) 533-6112
>
Howdy,
SVREF bounds-checks an index with (SAFETY 0)? Weird. Of all
things to optimize, you'd think that "field dereferencing" functions,
e.g., SVREF, SCHAR, <STRUCT>-<SLOT> references, and their builtin
counterparts like SYMBOL-NAME, would not type-check and would not
bounds-check.
I'd think that it would be pretty easy to group these
functions where the compiler can see 'em and generate a single
machine instruction on most processors.
=============================================
Scott McLoughlin
Conscious Computing
=============================================
Of course these numbers are right. Here is another set of figures, for
the same integer array hacking benchmark:
C 7.7 sec
Lucid CL 3.0 production mode 47 sec
Lucid CL 3.0 development mode 226 sec
CLISP 512 sec
(All timings on a Sun Sparcstation IPC, this time.)
I added some type declarations (and removed some), in order to make
sure that no functions (at least, according to the disassembler) are
being called in the inner parts. I've attached the resulting source
code. (Note again that I am not asserting that =all= those type
declarations are necessary, only that I didn't want to take chances.)
The benchmark result on LispWorks 3.2, on an old Sparc LX, for
(fannkuch-fast 10)
is
57.3 sec
You didn't say what argument you passed to the function, by the way;
10 was as high as I was willing to go!
-----------------------------
(in-package :cl-user)
(defun fannkuch-fast (&optional (n (progn
(format *query-io* "n = ?")
(parse-integer (read-line *query-io*))
) ) )
(declare (optimize (safety 0) (speed 3) (space 0) (debug 0)))
(unless (and (> n 0) (<= n 100)) (return-from fannkuch-fast))
(let ((n n))
(declare (fixnum n))
(let ((perm (make-array n :initial-element 0))
(perm1 (make-array n :initial-element 0))
(zaehl (make-array n :initial-element 0))
(permmax (make-array n :initial-element 0))
(bishmax -1))
(declare (type simple-vector perm perm1 zaehl permmax))
(declare (fixnum bishmax))
(dotimes (i n)
(declare (fixnum i))
(setf (svref perm1 i) i))
(prog ((\t n))
(declare (fixnum \t))
Kreuz
(when (= \t 1) (go standardroutine))
(setf (svref zaehl (the fixnum (1- \t))) \t)
(setf \t (the fixnum (1- \t)))
(go Kreuz)
Dollar
(when (= \t n) (go fertig))
(let ((perm0 (svref perm1 0)))
(dotimes (i \t)
(declare (fixnum i))
(setf (svref perm1 i) (svref perm1 (the fixnum (1+ i)))))
(setf (svref perm1 \t) perm0)
)
(when (> (the fixnum
(setf (the fixnum (svref zaehl \t))
(the fixnum (1- (the fixnum (svref zaehl \t))))))
0)
(go Kreuz))
(setf \t (the fixnum (1+ \t)))
(go Dollar)
standardroutine
(dotimes (i n)
(declare (fixnum i))
(setf (svref perm i) (svref perm1 i)))
(let ((Spiegelungsanzahl 0) (k 0))
(declare (fixnum Spiegelungsanzahl k))
(loop
(when (= (the fixnum (setq k (svref perm 0))) 0) (return))
(let ((k2 (ash (the fixnum (1+ k)) -1)))
(declare (fixnum k2))
(dotimes (i k2)
(declare (fixnum i))
(rotatef (svref perm i) (svref perm (the fixnum (- k i)))))
)
(setf Spiegelungsanzahl (the fixnum (1+ Spiegelungsanzahl)))
)
(when (> Spiegelungsanzahl bishmax)
(setq bishmax Spiegelungsanzahl)
(dotimes (i n)
(declare (fixnum i))
(setf (svref permmax i) (svref perm1 i)))
) )
(go Dollar)
fertig
)
(format t "The maximum was ~D.~% at " bishmax)
(format t "(")
(dotimes (i n)
(declare (fixnum i))
(when (> i 0) (format t " "))
(format t "~D" (the fixnum (1+ (the fixnum (svref permmax i)))))
)
(format t ")")
(terpri)
(values)
) ) )
--------------------------------
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com
Standard disclaimer.
I don't think so. If their compiler did more optimizations, the Mjolner
timings certainly would be much closer to the C timings.
Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de
This agrees with some figures measured by Simon Leinen:
Sun 4/670MP, 40MHz SuperSPARC acc -fast 1.5 sec user
Sun 4/670MP, 40MHz SuperSPARC CMU CL 17e 3.08 sec user
Sun 4/670MP, 40MHz SuperSPARC LispWorks 3.2 15.58 sec user
Sun 4/670MP, 40MHz SuperSPARC Allegro 4.1 33.87 sec user
Indeed CMUCL is only a factor of 2 off of C. I am happy to eat my words
about Lisp compilers' code.
Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de
If their compiler did more optimizations, the Mjolner
timings certainly would be much closer to the C timings.
Contrast with the strategy of the Sather group: their compiler started
out by doing better than C++ on some microbenchmarks. That's what it
takes to win supporters in real life.
(compile-file "fannkuch")
(load "fannkuch.fasl")
(time (fannkuch 9))
> Also, the above code includes all sorts of stuff that has nothing to do
> with integer array hacking, such as parsing, formatting (which involves
> printed output and most likely conses), array allocation, and so on.
All these are isolated at the beginning and at the end, and executed
only once, so their time will be negligible.
> This is a *terrible* benchmark - it's quite possible that the timings you
> published even include time waiting for a window to become exposed for
> output, as well as the time taken by the user to type input.
Sure, if your "time" command doesn't distinguish elapsed real time
and the time consumed by a single process in user mode, then it is
unusable for benchmarks.
Bruno Haible
hai...@ma2s2.mathematik.uni-karlsruhe.de
I forgot some additional type declarations needed to avoid checking
the GC write barrier when vector elements are modified. My new
result for
(time (fannkuch-fast 9))
on LispWorks 3.2 is
An early-1993 Sparc LX 3.81 sec
An early-1993, low-end Sparc 10 2.97 sec
I have attached the again-modified version of the benchmark.
-------------------------------
(in-package :cl-user)
(defun fannkuch-fast (&optional (n (progn
(format *query-io* "n = ?")
(parse-integer (read-line *query-io*)))))
(declare (optimize (safety 0) (speed 3) (space 0) (debug 0))
(fixnum n))
(unless (and (> n 0) (<= n 100))
(return-from fannkuch-fast))
(let ((perm (make-array n :initial-element 0))
(perm1 (make-array n :initial-element 0))
(zaehl (make-array n :initial-element 0))
(permmax (make-array n :initial-element 0))
(bishmax -1))
(declare (type simple-vector perm perm1 zaehl permmax)
(dynamic-extent perm perm1 zaehl permmax)
(fixnum bishmax))
(dotimes (i n)
(declare (fixnum i))
(setf (svref perm1 i) i))
(prog ((\t n))
(declare (fixnum \t))
Kreuz
(when (= \t 1)
(go standardroutine))
(setf (svref zaehl (the fixnum (1- \t))) \t)
(setf \t (the fixnum (1- \t)))
(go Kreuz)
Dollar
(when (= \t n)
(go fertig))
(let ((perm0 (svref perm1 0)))
(declare (fixnum perm0))
(dotimes (i \t)
(declare (fixnum i))
(setf (svref perm1 i) (the fixnum (svref perm1 (the fixnum (1+ i))))))
(setf (svref perm1 \t) perm0))
(when (> (the fixnum (setf (svref zaehl \t)
(the fixnum (1- (the fixnum (svref zaehl \t))))))
0)
(go Kreuz))
(setf \t (the fixnum (1+ \t)))
(go Dollar)
standardroutine
(dotimes (i n)
(declare (fixnum i))
(setf (svref perm i) (the fixnum (svref perm1 i))))
(let ((Spiegelungsanzahl 0)
(k 0))
(declare (fixnum Spiegelungsanzahl k))
(loop
(when (= (the fixnum (setq k (svref perm 0))) 0)
(return))
(let ((k2 (ash (the fixnum (1+ k)) -1)))
(declare (fixnum k2))
(dotimes (i k2)
(declare (fixnum i))
(rotatef (the fixnum (svref perm i))
(the fixnum (svref perm (the fixnum (- k i)))))))
(setf Spiegelungsanzahl (the fixnum (1+ Spiegelungsanzahl))))
(when (> Spiegelungsanzahl bishmax)
(setq bishmax Spiegelungsanzahl)
(dotimes (i n)
(declare (fixnum i))
(setf (svref permmax i) (the fixnum (svref perm1 i))))))
(go Dollar)
fertig)
(format t "The maximum was ~D.~% at " bishmax)
(format t "(")
(dotimes (i n)
(declare (fixnum i))
(when (> i 0)
(format t " "))
(format t "~D" (the fixnum (1+ (the fixnum (svref permmax i))))))
(format t ")")
(terpri)
(values)))
-----------------------------------------
In article <34t8gn$1...@nz12.rz.uni-karlsruhe.de> hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
Bernhard Pfahringer <bern...@ai.univie.ac.at> wrote:
>> Lisp compilers produce good code, but they can't compete with good C
>> compilers in this case.
>
> May not be the case: I've timed your function using both CMUCL 17c and
> Lucid CL 4.0.0, CMUCL is 3 times faster than Lucid, so:
Be careful when making such claims. How carefully did you study your
benchmarks to find out why there was such a difference? Generally, CMU and
Lucid are quite comparable.
> CMUCL (estimate!) 15 sec
> > which is just a factor of 2 off of C.
This agrees with some figures measured by Simon Leinen:
Sun 4/670MP, 40MHz SuperSPARC acc -fast 1.5 sec user
Sun 4/670MP, 40MHz SuperSPARC CMU CL 17e 3.08 sec user
Sun 4/670MP, 40MHz SuperSPARC LispWorks 3.2 15.58 sec user
Sun 4/670MP, 40MHz SuperSPARC Allegro 4.1 33.87 sec user
Since two different Lisps are off by factors of 5 and 10 from CMUCL, the
benchmark probably needs a little work. For example, I found (ceiling k 2)
accounted for 60% of the time of the original version of (fannkuch 9) in
Allegro. The Allegro fannkuch-fast times are pretty close to those below
(though the hardware isn't the same).
Indeed CMUCL is only a factor of 2 off of C. I am happy to eat my words
about Lisp compilers' code.
This is a good example of how easy it is for Lisp to look much worse than
it is.
I forgot some additional type declarations needed to avoid checking
the GC write barrier when vector elements are modified. My new
result for
(time (fannkuch-fast 9))
on LispWorks 3.2 is
An early-1993 Sparc LX 3.81 sec
An early-1993, low-end Sparc 10 2.97 sec
I have attached the again-modified version of the benchmark.
This is a great improvement! The original benchmark had at least a dozen
problems with it. Could you or someone publish the C (and Beta) code so we
could make a completely fair and correct benchmark out of this?
Good work.
--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA
One might also conclude that the two other Lisps need a little work.
> For example, I found (ceiling k 2) accounted for 60% of the time
> of the original version of (fannkuch 9) in Allegro.
...which would confirm the alternative conclusion (at least CEILING
seems to need a little work in Allegro).
[In defense of Franz, when I once sent them a complaint about the
performance of some specific part of Lisp functionality that was
implemented sub-optimally, they answered with a very effective patch.]
Maybe the problem is that it is simply very hard to implement all of
Common Lisp in a maximally efficient manner. Fortunately most Lisps
have good profilers now, so that you can at least identify the
problematic areas in your programs without too much effort.
--
Simon.
Same here.
--
F.D. Mato Mira
Computer Graphics Lab mato...@epfl.ch
EPFL FAX: +41 (21) 693-5328
etc.
That's not Lisp! That's C with parens!
--
Paul F. Krause Systems Research and Applications Corporation
(703) 558-7816 Intelligent Information Systems
FAX 558-4723 2000 15th Street North, Arlington, VA 22201
Internet: kra...@sra.com-----Uucp: uunet!uupsi5!sraopus!verdi!krausep
> Lenny Gray (lenn...@netcom.com) wrote:
>
> > Bruno Haible (hai...@ma2s2.mathematik.uni-karlsruhe.de) wrote:
> >
> > > some integer array hacking
> > > C 4.2 sec
> > > Mjolner 111 sec
> > > GCL 288 sec
> > > CLISP 415 sec
>
> > Are these numbers right? I've seriously used GCL and CLISP myself and
> > had some arguments with a "true believer Lisper" who thought "Lisp _does_
> > compete reasonably with C for numeric stuff", but I never bothered to do
> > the timing tests, and always assumed it wasn't this bad. Is it, really?
>
> I don't know what Bruno's "some integer array hacking" program does [...]
Mr. Bruno Haible was kind enough to send me the C and LISP programs
used. Unfortunately, he had deleted the BETA program, so I had to do
the port from scratch. All three programs are listed at the end of this
post. The benchmark is the program run with n=9.
[As you can see below, the C implementation uses a lot of tricks I did
not care to use, or was unable to use, in the BETA port: in-lining
(#define's), hard-wired variable liveness information (loads of blocks
with local variables, making the code very hard to read), code generation
hints (register ...), array pointers (*d++ = *s++), unstructured gotos, etc.]
Here are my results (486/66; BETA compiler v5.0(2); gcc v2.5.8):
gcc -O6 -fomit-frame-pointer 2.08
BETA -no-range-check 11.00
[Actually, the compiler does not currently have a -no-range-check option.
The BETA figure was obtained by removing all boundl instructions from
the code by hand and reassembling; this is the only way I could
achieve a fair comparison.]
As you can see, the ratio between state-of-the-art optimized C and BETA
was 5.3, not 26.4 as above.
> > Also, I was interested in Beta until one minute ago, because of this.
> > Are there intrinsic reasons for this that will prevent it from ever
> > improving?
I looked at the generated code, and it seems that the increase in
execution time can be attributed to two factors:
(1) The BETA code allocates its variables on the heap, while the C
code uses registers almost exclusively.
(2) The BETA code performs a regular lookup calculation at each array
access, while the C code simply increases a dedicated register
during the linear sweep.
With a little bit of effort, there is absolutely no reason why the BETA
compiler should not be able to perform the variable liveness analysis
itself (although currently it does not, and therefore pays the heavy
price of using heap space instead of registers). Also, the linear array
sweeps are simple enough that the compiler could recognize them and
avoid the index calculations (again, it currently does not, and
therefore pays the price at each lookup).
"Some integer array hacking"-type programs are *exactly* what highly
sophisticated C compilers excel at, but there are no intrinsic reasons
why the BETA compiler should not be able to produce comparable code.
We're constantly working on it...
Sincerely,
/Jacob Seligmann
------------------------------------------------------------------------
Mjolner Informatics ApS Phone: (+45) 86 20 20 00 ext. 2754
Science Park Aarhus Direct: (+45) 86 20 20 11 - 2754
Gustav Wieds Vej 10 Fax: (+45) 86 20 12 22
DK-8000 Aarhus C, Denmark Email: jac...@mjolner.dk
------------------------------------------------------------------------
BETA is better
------------------------------------------------------------------------
============================== fannkuch.c ==============================
/* Program simulating the pancake game ("Pfannkuchenspiel") */
/* Bruno Haible 10.06.1990 */
#include <stdio.h>
#define PermLength 100
#define PermCopy(Source,Dest,n) \
{register int h = n; register int *s = Source; register int *d = Dest; \
while (h) {*d++ = *s++; h--;}; \
}
int main(void)
{ int n;
int Perm[PermLength];
int Perm1[PermLength];
int Zaehl[PermLength];
int PermMax[PermLength];
int BishMax; /* maximum so far over all flip counts ("Spiegelungsanzahlen") */
printf("n = ?");
scanf("%d",&n); if (!((n>0)&&(n<=PermLength))) goto Ende;
BishMax=-1;
/* Generation of all permutations */
/* Generate the permutations using this algorithm:
   PERM1[0..n-1] := (0,...,n-1)
   t:=n
   # if t=1 then standardroutine, goto $
   Zaehl[t-1]:=t
   t:=t-1, goto #
   $ if t<n then goto &, if t=n then Fertig.
   & rotate PERM1[0..t], dec Zaehl[t], if >0 then goto #
   t:=t+1, goto $
*/
{ register int i;
for (i=0; i<n; i++) { Perm1[i]=i; };
};
{ register int t;
t=n;
Kreuz: if (t==1) goto standardroutine;
Zaehl[t-1]=t;
t=t-1; goto Kreuz; /* recursive call */
Dollar: /* return from the recursive call */
if (t==n) goto Fertig;
/* Rotate: Perm1[0] <- Perm1[1] <- ... <- Perm1[n-1] <- Perm1[0] */
{ register int Perm0; register int i;
Perm0=Perm1[0];
for (i=0; i<t; i++) {Perm1[i]=Perm1[i+1];};
Perm1[t]=Perm0;
};
if (--Zaehl[t]) goto Kreuz;
t=t+1; goto Dollar;
standardroutine:
PermCopy(Perm1,Perm,n); /* Perm := Perm1 */
{ int Spiegelungsanzahl;
Spiegelungsanzahl=0;
{ unsigned int k;
while (!((k=Perm[0]) == 0))
{/* Reverse Perm[0..k] */
unsigned int k2=(k+1)/2;
register int *up = &Perm[0]; register int *down = &Perm[k];
{ register int i;
i=k2; while (i) {int h; h=*up; *up++=*down; *down--=h; i--;};
}
Spiegelungsanzahl++;
};
};
if (Spiegelungsanzahl>BishMax)
{BishMax=Spiegelungsanzahl; PermCopy(Perm1,PermMax,n);};
}
goto Dollar;
}
Fertig: printf("Das Maximum betrug %d.\n bei ",BishMax);
{register unsigned int i;
printf("(");
for (i=0; i<n; i++)
{if (i>0) printf(" ");
printf("%d",PermMax[i]+1);
};
printf(")");
};
printf("\n");
Ende: return 0;
}
============================= fannkuch.lsp =============================
(defun fannkuch (&optional (n (progn
(format *query-io* "n = ?")
(parse-integer (read-line *query-io*))
) ) )
(unless (and (> n 0) (<= n 100)) (return-from fannkuch))
(let ((n n))
(declare (fixnum n))
(let ((perm (make-array n :element-type 'fixnum))
(perm1 (make-array n :element-type 'fixnum))
(zaehl (make-array n :element-type 'fixnum))
(permmax (make-array n :element-type 'fixnum))
(bishmax -1))
(declare (type (simple-array fixnum (*)) perm perm1 zaehl permmax))
(declare (fixnum bishmax))
(dotimes (i n) (setf (svref perm1 i) i))
(prog ((\t n))
(declare (fixnum \t))
Kreuz
(when (= \t 1) (go standardroutine))
(setf (svref zaehl (- \t 1)) \t)
(decf \t)
(go Kreuz)
Dollar
(when (= \t n) (go fertig))
(let ((perm0 (svref perm1 0)))
(dotimes (i \t) (setf (svref perm1 i) (svref perm1 (+ i 1))))
(setf (svref perm1 \t) perm0)
)
(when (plusp (decf (svref zaehl \t))) (go Kreuz))
(incf \t)
(go Dollar)
standardroutine
(dotimes (i n) (setf (svref perm i) (svref perm1 i)))
(let ((Spiegelungsanzahl 0) (k 0))
(declare (fixnum Spiegelungsanzahl k))
(loop
(when (= (setq k (svref perm 0)) 0) (return))
(let ((k2 (ceiling k 2)))
(declare (fixnum k2))
(dotimes (i k2) (rotatef (svref perm i) (svref perm (- k i))))
)
(incf Spiegelungsanzahl)
)
(when (> Spiegelungsanzahl bishmax)
(setq bishmax Spiegelungsanzahl)
(dotimes (i n) (setf (svref permmax i) (svref perm1 i)))
) )
(go Dollar)
fertig
)
(format t "Das Maximum betrug ~D.~% bei " bishmax)
(format t "(")
(dotimes (i n)
(when (> i 0) (format t " "))
(format t "~D" (+ (svref permmax i) 1))
)
(format t ")")
(terpri)
(values)
) ) )
============================= fannkuch.bet =============================
ORIGIN '~beta/basiclib/v1.4/betaenv'
--program:descriptor--
(#
PermLength: (# exit 100 #);
Perm, Perm1, PermMax, Zaehl: [PermLength]@integer;
h, i, k, n, t, up, down, BishMax, Spiegelungsanzahl: @integer;
do
'n = ?' -> putText;
getInt -> n;
(if (n < 1) or (n > PermLength) then stop if);
-1 -> BishMax;
(for i:n repeat i-1 -> Perm1[i] for);
n -> t;
again:
(#
do (for i:t repeat i -> Zaehl[i] for); 1 -> t;
(for i:n repeat Perm1[i] -> Perm[i] for);
0 -> Spiegelungsanzahl;
while1:
(#
do (if Perm[1]->k // 0 then leave while1 if);
1 -> up; k+1 -> down; down/2 -> i;
while2:
(#
do (if i // 0 then leave while2 if);
Perm[up] -> h; Perm[down] -> Perm[up]; h -> Perm[down];
up+1 -> up; down-1 -> down; i-1 -> i;
restart while2;
#);
Spiegelungsanzahl+1 -> Spiegelungsanzahl;
restart while1;
#);
(if Spiegelungsanzahl > BishMax then
Spiegelungsanzahl -> BishMax;
(for i:n repeat Perm1[i] -> PermMax[i] for)
if);
while3:
(#
do (if t // n then leave while3 if);
Perm1[1] -> h;
(for i:t repeat Perm1[i+1] -> Perm1[i] for);
h -> Perm1[t+1];
(if (Zaehl[t+1]-1 -> Zaehl[t+1]) <> 0 then restart again if);
t+1 -> t;
restart while3;
#);
#);
'The maximum was ' -> putText;
BishMax -> putInt;
'.\n at (' -> putText;
(for i:n repeat
(if i > 1 then ' ' -> put if);
PermMax[i]+1 -> putInt;
for);
')' -> putLine;
#)
========================================================================
There was a Prolog a while back that did better than C for some
numeric stuff. Did this win supporters? Nooooooooooo...
One might conclude all kinds of things. But would one be right?
As has been pointed out a number of times, it can be difficult to
get the best results from several different Common Lisps. Sometimes
declarations that help a lot in one make things worse in another,
and so on. I find it easier to get the best numerical results from
CMU CL than from Lucid or Allegro. This may mean that *something*
about Lucid and Allegro needs a little work, or it may just mean
that I and the CMU CL folk think alike when it comes to this.
Another factor is that the person running the benchmarks may not
know all the Lisps equally well, just as someone running C benchmarks
may not know that "-O6" is the optimization setting to use for
compiler X.
>Maybe the problem is that it is simply very hard to implement all of
>Common Lisp in a maximally efficient manner.
And the ways to get the maximally efficient stuff vary.
BTW, it took a while for C to get as efficient as it now usually(?)
is. And where would many people be without gcc? Unfortunately,
the Lisp equivalent of gcc hasn't yet come along.
-- jeff
It's a necessary but not sufficient condition. Sather and C are both in
the algol vein; crossover is easy. Prolog is utterly different from C;
performance is almost the least of one's worries.
| [As you can see below, the C implementation uses a lot of tricks I did
| not care to use, or was unable to use, in the BETA port: in-lining
| (#define's), hard-wired variable liveness information (loads of blocks
| with local variables, making the code very hard to read), code generation
| hints (register ...), array pointers (*d++ = *s++), unstructured gotos, etc.]
this is not benchmarking execution speeds of C or LISP or anything. this
is benchmarking a programmer's willingness to write assembly language and
to see how well his assembly language coding style fits various languages.
I'll bet gcc -O6 does a better job on reasonably-written C than it does on
this spaghetti code. I mean, writing your own version of memcpy because
you think memcpy won't be fast enough is rather pathetic when there are
CPUs out there that have block move instruction idioms and good compilers that
open-code them when calls to memcpy are requested.
to paraphrase Richard Stallman from a response to a bug report for Emacs
with a horribly convoluted fix:
Unfortunately, the code you wrote looks like the intermediate stage of
scheme compilation, and it's too hard for me to read. I can't even
figure out what [it] does.
and what's this about a semicolon _after_ the closing brace in blocks?
such things worry me when I read other people's C code, because it tells me
that they don't really program in C, but translate from something else into
C. that's the feeling I got from the LISP code, too. is this really a
good way to compare languages? I think not.
are there any good ways to compare _languages_? I think programmer time
spent and number of compilation runs required to solve a particular problem
is a better indication. then, later, we can benchmark the compilers.
I don't use LISP because of its outstanding speed, anyway. I use it
because _I_ am faster in LISP than in anything else I know well enough to
compare to. two days of my time translates to the cost of a CPU upgrade to
my clients. so what if it runs five times slower and takes 100 instead of
20 milliseconds? with the upgraded CPU it'll only take 3 times more time,
and if I need to come back to enhance the functionality, they can buy an
even _faster_ CPU. we're not talking about more than constant factors in
difference between these benchmarks here, right?
BETA and LISP probably have a lot to tell us about optimal program design
in their respective mind-sets and a comparison of their relative strengths
and idiomatic expressiveness would be instructive. onwards, folks!
#<Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.
> > [Description of C spaghetti code deleted]
>
> this is not benchmarking execution speeds of C or LISP or anything. this
> is benchmarking a programmer's willingness to write assembly language and
> to see how well his assembly language coding style fits various languages.
I couldn't agree more! I was simply replying to a post which could be
misinterpreted as saying that BETA is inherently 25 times slower than C.
This did not reflect my personal impression, so I retrieved the C
benchmark program from the original poster, did a simple rewrite in
BETA, gained a factor of 5, and posted this result along with the programs
used, for you all to verify.
That is, I did not write the spaghetti code in the first place, nor
do I feel this is the way to program in BETA, or C, or LISP, or
whatever.
> are there any good ways to compare _languages_? I think programmer time
> spent and number of compilation runs required to solve a particular problem
> is a better indication. then, later, we can benchmark the compilers.
Again, I agree, as long as your programs are not orders of magnitude
slower than what is achievable, and as long as there are no inherent
barriers in the language design to ever achieving better performance.
> BETA and LISP probably have a lot to tell us about optimal program design
> in their respective mind-sets and a comparison of their relative strengths
> and idiomatic expressiveness would be instructive. onwards, folks!
Actually I was quite surprised to see a BETA-LISP comparison thread in
the first place. Sure, BETA strongly supports a functional programming
paradigm, but it is still an object-oriented language at heart - it
never ceases to amaze me just how versatile and expressive the BETA
pattern concept is! Keep the comparisons coming.
Cheers,
>I have attached the again-modified version of the benchmark.
> (dotimes (i n)
> (declare (fixnum i))
> (setf (svref perm1 i) i))
I should perhaps point out again that that is not always enough
to get a fully fixnum loop. I normally use macros like these:
;;; Fixnum iterations
(defmacro fix-dotimes ((var count &optional (result nil))
&body body)
(let ((count-var (gensym)))
`(let ((,count-var ,count))
(declare (fixnum ,count-var))
(do ((,var 0 (fix+ ,var 1)))
((fix>= ,var ,count-var)
,result)
(declare (fixnum ,var))
,@body))))
(defmacro do-vector-indices ((var vec &optional (result nil))
&body body)
`(fix-dotimes (,var (length (the vector ,vec))
,@(if result (list result)))
,@body))
fix+, fix>=, etc are defined like this:
(defmacro fix+ (i j)
`(the fixnum (+ (the fixnum ,i) (the fixnum ,j))))
(defmacro fix>= (i j)
`(>= (the fixnum ,i) (the fixnum ,j)))
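A usage example (mine; it assumes the sum fits in a fixnum):

;; Sums 0..n-1 with all index arithmetic declared fixnum.
(defun sum-below (n)
  (declare (fixnum n))
  (let ((s 0))
    (declare (fixnum s))
    (fix-dotimes (i n s)
      (setq s (fix+ s i)))))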
The root problem seems to be that some Lisps worry that 1 + i may
not be a fixnum even though i is. I don't think you can always
win even by saying (dotimes (i (the fixnum n)) ...).
It's very easy to see what will happen in KCL, BTW. Define
the fns and call disassemble. (KCL compiles via C.) The generated C code
is sufficiently readable, because the parts that correspond to what
you'd do in C look like it.
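Something like this (DISASSEMBLE is standard CL, though its output
varies by implementation):

(defun probe (v i)
  (declare (optimize (speed 3) (safety 0))
           (type simple-vector v) (fixnum i))
  (svref v i))
(disassemble #'probe)   ; look for a remaining bounds check in the output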
-- jeff
I'd say it's neither.
Jeff> In article <SCHWARTZ.94...@roke.cse.psu.edu>
Jeff> schw...@roke.cse.psu.edu (Scott Schwartz) writes:
>> je...@aiai.ed.ac.uk (Jeff Dalton) writes: There was a Prolog a while
>> back that did better than C for some numeric stuff. Did this win
>> supporters? Nooooooooooo...
>>
>> It's a necessary but not sufficient condition.
Jeff> I'd say it's neither.
Are we about to open the `C vs. Lisp' thread again, with BETA on the
side?
(for those puzzled: this has been a raging debate for the last couple
of months in comp.lang.lisp)
------------------------------------------------------------------------------
Christian Lynbech | Hit the philistines three times over the
office: R0.33 (phone: 3217) | head with the Elisp reference manual.
email: lyn...@daimi.aau.dk | - pet...@hal.com (Michael A. Petonic)
------------------------------------------------------------------------------
jac...@daimi.aau.dk (Jacob Seligmann) writes:
[...]
Here are my results (486/66; BETA compiler v5.0(2); gcc v2.5.8):
gcc -O6 -fomit-frame-pointer 2.08
BETA -no-range-check 11.00
[Actually, the compiler does not currently have a -no-range-check option.
The BETA figure was obtained by removing all boundl instructions from
the code by hand and reassembling; this is the only way I could
achieve a fair comparison.]
As you can see, the ratio between state-of-the-art optimized C and BETA
was 5.3, not 26.4 as above.
[...]
With a little bit of effort, there is absolutely no reason why the BETA
compiler should not be able to perform the variable liveness analysis
itself (although currently it does not, and therefore pays the heavy
price of using heap space instead of registers). Also, the linear array
sweeps are simple enough that the compiler could recognize them and
avoid the index calculations (again, it currently does not, and
therefore pays the price at each lookup).
"Some integer array hacking"-type programs are *exactly* what highly
sophisticated C compilers excel at, but there are no intrinsic reasons
why the BETA compiler should not be able to produce comparable code.
We're constantly working on it...
What Jacob's saying is
 (a) Typical code written in C performs more than 5 times
     better than code in his favourite language using available
     implementations, and
 (b) there's no reason why his favourite language couldn't be
     implemented so it was competitive.
What lisp (and beta) advocates seem to often ignore is that quality of
code generation really matters to most people.
Telling people that a factor of 5 difference in run-time doesn't
really matter doesn't encourage them to use your language. Neither
does telling them that *in principle* or *some day in the future*,
your language could be competitive.
Unless you have competitive ports to x86, Sparc, Alpha, DEC & SGI MIPS,
PA-RISC, and RS6k, few people are going to use a language. Not many
people are going to bother explaining that performance matters to
them, they're just going to ignore you when you try to tell them
otherwise.
Which is a pity, because competitive compilers for sane languages like
beta and lisp are obviously feasible. Paul Wilson was proposing a
compiler for scheme+objects that would compete with C, CMU CL was
great (although it now seems to be largely unsupported) and the ETH
Oberon compilers are also wonderful (although the systems they're in
don't co-operate with the rest of the universe.)
At least Jacob's actually working on improving the beta
implementation. As far as I can tell, the usual lisp advocate response
to performance complaints is to either:
(a) deny there's a problem,
(b) say one day there won't be a problem, or
(c) suggest you write code that looks like FORTRAN and
manually weigh expression trees and other insanity.
Perhaps Gwydion or Paul Wilson's scheme+objects compiler will save the
world.
--
Matthew McDonald ma...@cs.uwa.edu.au
Nim's longest recorded utterance was the sixteen-sign declarative
pronouncement, "Give orange me give eat orange me eat orange give me
eat orange give me you."
> Jeff> I'd say it's neither.
>
> Are we about to open the `C vs. Lisp' thread again, with BETA on the
> side?
>
> (for those puzzled: this has been a raging debate for the last couple
> of months in comp.lang.lisp)
>
>
>
Howdy,
Sure, let's open it up again ;-) But no, really -- skip all this
talk of "realtime" this and that and concerns about competing with
Fortran on floating point. I'm still _VERY_ curious (concerned?)
about why Lisp isn't more popular in "the trenches". Go look at
Borland C++ compiler's output in large model (typical Windows app) -
not too slick. Now go run a typical Windows app (or Unix workstation
app for that matter). It pages like all get out when you open a
windows or switch to another task. Now go install a Windows
_personal utility app_ -- 40 meg disk footprint or more. We're
not talking big DB servers, just a nice word processor or
spreadsheet.
So why don't folks use Lisp to write this stuff? Blazing
speed, space, etc. ain't that critical. What gives?
: What Jacob's saying is
: (a) Typical code written in C performs more than 5 times
^^^^^^^^^^^^^^^^^^^^^^^^^
A small correction: it should be state-of-the-art spaghetti code in C
: better than code in his favourite language using available
: implementations, and
Naresh
--
_______________________________________________________________________________
Naresh Sharma [N.Sh...@LR.TUDelft.NL] Herenpad 28 __|__
Faculty of Aerospace Engineering 2628 AG Delft \_______(_)_______/
T U Delft Optimists designed the aeroplane, ! ! !
Ph(Work) (+31)15-783992 pessimists designed the parachute!
Ph(Home) (+31)15-569636 Plan:Design Airplanes on Linux the best OS on Earth!
------------------------------PGP-KEY-AVAILABLE--------------------------------
So true!
The best example is probably Visual Basic, right?
Stefan
> (a) Typical code written in C performs more than 5 times
> better than code in his favourite language using available
> implementations, and
This is exactly the sort of unsupportable generalization I was afraid
would result from the net publication of this so-called "benchmark."
(Not a reference to Jacob's post, but to the original.) Differences in
performance do not boil down to single numbers, nor any kind of simple
comparison for that matter.
>.... Telling people that a factor of 5 difference in run-time doesn't
> really matter doesn't encourage them to use your language.
Most of the time, a factor of five is negligible. In critical parts of
the code however, a factor of 1.05 may be important.
> .... As far as I can tell, the usual lisp advocate response
> to performance complaints is to either:
> (a) deny there's a problem,
> (b) say one day there won't be a problem, or
> (c) suggest you write code that looks like FORTRAN and
> manually weigh expression trees and other insanity.
This is a gross oversimplification. As a "lisp advocate," I consistently
maintain that if and when there is a performance problem, it can be dealt
with in the same way one deals with performance problems in any language.
There is not a generalized performance problem inherent in lisp.