
Loop Syntax


Jeffrey P. Sandys

Apr 7, 2000
After reading many arguments about the loop macro I have
finally started using loops instead of do or recursion.

Should loop keywords have the colon prefix keyword syntax?

(loop
for i to 5
collect i)
;; --or--
(loop
:for i :to 5
:collect i)

With the keyword syntax, it is easier to distinguish the
loop key words, especially when some loop keywords have the
same name as lisp functions. Also emacs formats long loop
clauses better with the keywords (as shown above).

But _Common_Lisp,_the_Language_ and the Hyperspec show
loop examples without the colon prefix keyword syntax.

Thanks,
Jeff Sandys

Barry Margolin

Apr 7, 2000
In article <38EE1F8B...@asme.org>,
Jeffrey P. Sandys <san...@asme.org> wrote:
>After reading many arguments about the loop macro I have
>finally started using loops instead of do or recursion.
>
>Should loop keywords have the colon prefix keyword syntax?

LOOP doesn't care what package the keywords are in, it just looks at their
names. Traditionally, LOOP keywords have not been written with the colon
prefix (naturally, since LOOP was originally implemented in Maclisp, which
didn't have packages).
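
A quick illustration, just a sketch: any conforming implementation should
treat all of these identically, since the clause words are compared by
SYMBOL-NAME alone:

(loop for i upto 3 collect i)         ; => (0 1 2 3)
(loop :for i :upto 3 :collect i)      ; => (0 1 2 3)
(loop #:for i #:upto 3 #:collect i)   ; => (0 1 2 3), even uninterned symbols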

>With the keyword syntax, it is easier to distinguish the
>loop key words, especially when some loop keywords have the
>same name as lisp functions. Also emacs formats long loop
>clauses better with the keywords (as shown above).

If you find that you prefer that style, go ahead. It shouldn't bother any
valid implementation.

--
Barry Margolin, bar...@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Erik Naggum

Apr 7, 2000
* "Jeffrey P. Sandys" <san...@asme.org>

| Should loop keywords have the colon prefix keyword syntax?

a very good question. only the symbol-names of the loop keywords are used
by the macro function, so you're free to use keyword symbols. it might
look odd to some people, but I find serious merit in using a distinct
syntax for this as long as we can't use a more Lispy syntax to begin with.

#:Erik

Tom Breton

Apr 8, 2000
"Jeffrey P. Sandys" <san...@asme.org> writes:

> After reading many arguments about the loop macro I have
> finally started using loops instead of do or recursion.
>

> Should loop keywords have the colon prefix keyword syntax?
>

> (loop
> for i to 5
> collect i)
> ;; --or--
> (loop
> :for i :to 5
> :collect i)

It's misleading IMO. It suggests that you can say

(loop
:to 5
:collect i
:for i)

which you can't, and that you can't say

(loop
:for i :to 5
:for j :to 5
:collect (list i j))

which you can.


--
Tom Breton, http://world.std.com/~tob
Not using "gh" since 1997. http://world.std.com/~tob/ugh-free.html
Rethink some Lisp features, http://world.std.com/~tob/rethink-lisp/index.html

Rahul Jain

Apr 9, 2000
In article <m3d7o0f...@world.std.com> posted on Saturday, April 8, 2000
6:41 PM, Tom Breton <t...@world.std.com> wrote:
> It's misleading IMO. It suggests that you can say
>
> (loop
> :to 5
> :collect i
> :for i)
>
> which you can't, and that you can't say
>
> (loop
> :for i :to 5
> :for j :to 5
> :collect (list i j))
>
> which you can.

Note that someone with a reasonable amount of lisp experience would
immediately notice that the order of the symbols within a loop DOES
matter. Perhaps the following is a better way of expressing the loop:

(loop
:for i to 5
:for j to 5
:collect (list i j))

but in either case, the reader should know that a loop is not a normal
function, and needs to be read and interpreted in the appropriate context.

--
-> -\-=-=-=-=-=-=-=-=-=-/^\-=-=-=<*><*>=-=-=-/^\-=-=-=-=-=-=-=-=-=-/- <-
-> -/-=-=-=-=-=-=-=-=-=/ { Rahul -<>- Jain } \=-=-=-=-=-=-=-=-=-\- <-
-> -\- "I never could get the hang of Thursdays." - HHGTTG by DNA -/- <-
-> -/- http://photino.sid.rice.edu/ -=- mailto:rahul...@usa.net -\- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
Version 11.423.999.210020101.23.50110101.042
(c)1996-2000, All rights reserved. Disclaimer available upon request.


Marc Battyani

Apr 9, 2000

Rahul Jain <ra...@rice.edu> wrote in message
news:8cqo0e$s7u$1...@joe.rice.edu...

> (loop
> :for i to 5
> :for j to 5
> :collect (list i j))

why use a j variable which is the same as the i variable?

(loop :for i to 5
      :collect (list i i))

will have the same result.
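
For what it's worth, a quick check (the exact values are the point here,
and any implementation should agree, since parallel :for clauses step
together):

(loop :for i :to 3 :for j :to 3 :collect (list i j))
;; => ((0 0) (1 1) (2 2) (3 3))
(loop :for i :to 3 :collect (list i i))
;; => ((0 0) (1 1) (2 2) (3 3))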

Marc Battyani

Rahul Jain

Apr 9, 2000
In article
<2E7966180703631B.D2B3567E...@lp.airnews.net>
posted on Sunday, April 9, 2000 4:01 PM, "Marc Battyani" <Marc_B...@csi.com> wrote:
>
> why using a j variable which is the same as the i variable ?

Because it was an example, not something I would really code. :)
My real point was that you can't interpret code without context. You have to
understand that certain constructs have certain syntax/semantics, and
interpret the code as such.

Tom Breton

Apr 9, 2000
Rahul Jain <ra...@rice.edu> writes:

> In article <m3d7o0f...@world.std.com> posted on Saturday, April 8, 2000
> 6:41 PM, Tom Breton <t...@world.std.com> wrote:
> > It's misleading IMO.

> [big snip]
> [] the reader should know that a loop is not a normal
> function, and needs to be read and interpreted in the appropriate context.

Which implies (again) that loop syntax is bad, and that using
colonized symbols in loop as if they were keywords is misleading.

Rahul Jain

Apr 9, 2000
In article <m3ln2mr...@world.std.com> posted on Sunday, April
9, 2000 5:49 PM, Tom Breton <t...@world.std.com> wrote:
> Rahul Jain <ra...@rice.edu> writes:
>
>> In article <m3d7o0f...@world.std.com> posted on Saturday, April 8, 2000
>> 6:41 PM, Tom Breton <t...@world.std.com> wrote:
>> > It's misleading IMO.
>> [big snip]
>> [] the reader should know that a loop is not a normal
>> function, and needs to be read and interpreted in the appropriate context.
>
> Which implies that (again loop syntax is bad, and that) using
> colonized symbols in loop as if they were keywords is misleading.
>

If you don't like it, don't use it. There are many other ways to
accomplish iteration in lisp. Some people like to use loop in some
situations.

Rahul Jain

Apr 9, 2000
In article <38F11D9F...@san.rr.com> posted on Sunday, April
9, 2000 7:15 PM, Courageous <jkra...@san.rr.com> wrote:
> I confess that, all warnings to the contrary, I still occasionally
> make use of goto in C/C++. I find it a very succinct way of bailing
> out of complex code bodies to the end of a function where clean
> up occurs.
>

You're not alone, take a peek at the linux kernel source some time, it's
got plenty of gotos, and for clarity reasons more than speed reasons.

When misused, any construct is bad. An experienced programmer should be
able to tell what the right and wrong uses of a construct are. goto is
especially horrible because it's "easy" to use, and then when there's a
bug, it's impossible to track down if there's a whole slew of gotos.
However, when used judiciously and for good reason, a goto can be worth
a huge number of nested ifs.

Courageous

Apr 10, 2000

> If you don't like it, don't use it. There are many other ways to
> accomplish iteration in lisp. Some people like to use loop in some
> situations.

I confess that, all warnings to the contrary, I still occasionally
make use of goto in C/C++. I find it a very succinct way of bailing
out of complex code bodies to the end of a function where clean
up occurs.

It just goes to show that religious reverence or revilement of
a form seldom suits a person well. Languages are tools. Not altars.


C/

Courageous

Apr 10, 2000

> > I confess that, all warnings to the contrary, I still occasionally
> > make use of goto...

> You're not alone, take a peek at the linux kernel source some time, it's
> got plenty of gotos, and for clarity reasons more than speed reasons.

Yeah. As a younger programmer, I avoided goto because I was
indoctrinated with "Thou Shalt Not". Then one day in the
midst of writing a horrifically complex nested bailout, the
light dawned. Clear, succinct, and manageable. I used that
form ever after.

When learning Java, I was irritated by the lack of goto for
exactly this reason, until I learned how finally clauses
really work. It was quite a revelation when I discovered
that finally intercepts both throw *and* return. Quite nice.
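
For the Lisp side of that observation: the rough Common Lisp analogue is
UNWIND-PROTECT, whose cleanup forms run whether the body returns normally
or unwinds. A minimal sketch (the function name is made up for the example):

(defun first-line (path)
  (let ((stream (open path)))
    (unwind-protect
         (read-line stream)
      (close stream))))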


C/

David Hanley

Apr 10, 2000

As an aside:

I was curious about the performance of loop vs mapc in some code
I was writing, so I did the following (in HCL personal edition) :

(declaim (optimize (speed 3)(safety 0)))

(defparameter arr (make-array 3))

(defun foo (n1 n2 n3)
  (setf (svref arr 0) n1
        (svref arr 1) n2
        (svref arr 2) n3))

(defparameter L1 '( 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15))
(defparameter L2 '( 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20))
(defparameter L3 '(11 12 13 14 15 16 17 18 19 20 21 22 23 24 25))

;; functions to test mapping/recursion vs loop
(defun mapc-funcall-test ()
  (mapc #'foo l1 l2 l3))

(defun loop-funcall-test ()
  (loop for n1 in L1
        for n2 in L2
        for n3 in L3 do (foo n1 n2 n3)))

(defun length-recur (list)
  (labels ((length-recur2 (l len)
             (if (not l) len
                 (length-recur2 (cdr l) (1+ len)))))
    (length-recur2 list 0)))

(defun length-loop (list)
  (let ((len 0))
    (loop for n in list do (incf len))
    len))

;; the benchmarking code

(defun bench-map-funcall-test ()
  (dotimes (i 100000) (mapc-funcall-test)))

(defun bench-loop-funcall-test ()
  (dotimes (i 100000) (loop-funcall-test)))

(defun bench-recur-length ()
  (dotimes (i 200000)
    (length-recur l1)
    (length-recur l2)
    (length-recur l3)))

(defun bench-loop-length ()
  (dotimes (i 200000)
    (length-loop l1)
    (length-loop l2)
    (length-loop l3)))

;; after compiling and loading....

CL-USER 37 > (time (bench-map-funcall-test))

2.7 seconds used.
Standard Allocation 192 bytes.
Fixlen Allocation 3200248 bytes.
NIL

CL-USER 38 > (time (bench-loop-funcall-test))

0.4 seconds used.
Standard Allocation 192 bytes.
Fixlen Allocation 248 bytes.
NIL

CL-USER 39 > (time (bench-recur-length))

1.4 seconds used.
Standard Allocation 192 bytes.
Fixlen Allocation 248 bytes.
NIL

CL-USER 40 > (time (bench-loop-length))

0.8 seconds used.
Standard Allocation 192 bytes.
Fixlen Allocation 248 bytes.
NIL

Disassembling the code indicates that the mapc is not being inlined.
I am not sure if the extra allocation comes from calling (apply..) in
mapc or if it is shortcutting by using mapcar. Having seen this, I
will now use loop judiciously ( in this lisp ) in spots where I think
speed is critical. In particular, since the (maps) don't seem to get
inlined, it seems like loop is a good idea when the function you
are calling is inlineable as well.

dave


Joe Marshall

Apr 10, 2000
David Hanley <d...@ncgr.org> writes:

> As an aside:
>
> I was curious about the performance of loop vs mapc in some code
> I was writing, so I did the following (in HCL personal edition) :
>

> Disassembling the code indicates that the mapc is not being inlined.
> I am not sure if the extra allocation comes from calling (apply..) in
> mapc or if it is shortcutting by using mapcar. Having seen this, I
> will now use loop judiciously ( in this lisp ) in spots where I think
> speed is critical. In particular, since the (maps) don't seem to get
> inlined, it seems like loop is a good idea when the function you
> are calling is inlineable as well.

Ugh! In ACL 5.0.1, the mapc and the loop produce virtually the same
code. The performance difference is negligible. It's one thing to
recode your algorithms to ones that are faster, but bumming code at
this level should be done only as a last resort.

Rather than recoding specific mapcs as loops, you might be better off
writing your own compiler macro for mapc that converts them into
loops. You will then gain performance everywhere you use mapc, you
won't have to benchmark all the mapcs to find the ones worth
converting, and if you move to a lisp where mapc is equivalent (or
faster than) loop, you can just comment out your version.
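
Something along these lines, say. This is only a sketch: it handles the
single-list case and declines the rest, and portably you would hang it on
a MAPC shadowed into your own package, since defining compiler macros for
COMMON-LISP symbols is not strictly conforming.

(define-compiler-macro mapc (&whole form function list &rest more-lists)
  (if more-lists
      form                          ; decline; leave multi-list calls alone
      (let ((fn (gensym "FN")) (lst (gensym "LIST")))
        `(let ((,fn ,function) (,lst ,list))
           (loop for x in ,lst do (funcall ,fn x))
           ,lst))))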

~jrm


Paul Dietz

Apr 10, 2000
In article <u66tpv...@alum.mit.edu>,
Joe Marshall <jmar...@alum.mit.edu> wrote:

> Ugh! In ACL 5.0.1, the mapc and the loop produce virtually the same
> code. The performance difference is negligible. It's one thing to
> recode your algorithms to ones that are faster, but bumming code at
> this level should be done only as a last resort.

The difference is indeed small. MAPC (or DOLIST) is slightly
faster than (LOOP FOR ... IN ... DO ...) on ACL 5.0.1. This is
because LOOP must (by the definition in the standard) use ENDP
for the end of list test rather than checking for NIL.
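
To spell out the distinction (a sketch only; real LOOP expansions are
implementation-specific): ENDP type-checks its argument and signals an
error if the tail turns out not to be a list, whereas a bare NULL test is
just a comparison against NIL, which is why it can be a hair cheaper.

(defun sum-with-endp (list)
  (let ((sum 0) (tail list))
    (loop until (endp tail)
          do (incf sum (car tail))
             (setf tail (cdr tail)))
    sum))

(defun sum-with-null (list)
  (let ((sum 0) (tail list))
    (loop until (null tail)
          do (incf sum (car tail))
             (setf tail (cdr tail)))
    sum))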

As you say, look elsewhere first for performance enhancements.

I bet DOLIST is fast in both lisps.

Paul

David Hanley

Apr 10, 2000

Joe Marshall wrote:

> David Hanley <d...@ncgr.org> writes:
>
> > As an aside:
> >
> > I was curious about the performance of loop vs mapc in some code
> > I was writing, so I did the following (in HCL personal edition) :
> >
> > Disassembling the code indicates that the mapc is not being inlined.
> > I am not sure if the extra allocation comes from calling (apply..) in
> > mapc or if it is shortcutting by using mapcar. Having seen this, I
> > will now use loop judiciously ( in this lisp ) in spots where I think
> > speed is critical. In particular, since the (maps) don't seem to get
> > inlined, it seems like loop is a good idea when the functon you
> > are calling is inlineable as well.
>

> Ugh! In ACL 5.0.1, the mapc and the loop produce virtually the same
> code. The performance difference is negligible. It's one thing to
> recode your algorithms to ones that are faster, but bumming code at
> this level should be done only as a last resort.

Says you. If it's an issue of easily making a critical function run 10
times faster, and I can do it in thirty seconds, and it doesn't impact
readability, I'm quite happy to do it. I tend to think users appreciate it.


> Rather than recoding specific mapcs as loops, you might be better off
> writing your own compiler macro for mapc that converts them into
> loops.

Yeah, I thought about this. In fact, you can do better than loop
in this case if you decide to stick to lists as the input mechanism,
since loop uses endp.

dave


David Hanley

Apr 10, 2000

Paul Dietz wrote:

>
> As you say, look elsewhere first for performance enhancements.

Thanks; I'm glad you know where the bottlenecks are in my code so
much better than I do. :)

> I bet DOLIST is fast in both lisps.

Might be, but it doesn't iterate across multiple lists simultaneously.

dave


Joe Marshall

Apr 10, 2000
David Hanley <d...@ncgr.org> writes:

> Joe Marshall wrote:
> > Ugh! In ACL 5.0.1, the mapc and the loop produce virtually the same
> > code. The performance difference is negligible. It's one thing to
> > recode your algorithms to ones that are faster, but bumming code at
> > this level should be done only as a last resort.
>
> Says you. If it's an issue of easily making a critical function run 10
> times faster, and I can do it in thirty seconds, and it doesn't impact
> readability, I'm quite happy to do it. I tend to think users appreciate it.

Yes, says me. (while trying to forget that I've re-arranged the order
of arguments to critical procedures just to give hints to the compiler
as to which ones belong in registers.)

But of course, it all depends on the situation, etc. Since you are
using HCL personal edition, I assume that 1) your development team is
just you, and 2) funds are limited, so upgrading to a better compiler
isn't an option.

But in a commercial environment, where you have many developers, and
the risk of introducing bugs during the rewrite is higher (if you
accidentally have a typo in a small project, it'll be found and fixed,
if you accidentally have a typo in a commercial product, people with
money get pissed and stop giving it to you), the payoff of writing a
compiler macro or upgrading the compiler is much higher.

> > Rather than recoding specific mapcs as loops, you might be better off
> > writing your own compiler macro for mapc that converts them into
> > loops.
>
> Yeah, I thought about this. In fact, you can do better than loop
> in this case if you decide to stick to lists as the input mechanism,
> since loop uses endp.

Go for it. Writing compiler-macros really impresses the chicks.

--
~jrm

Tom Breton

Apr 10, 2000
Joe Marshall <jmar...@alum.mit.edu> writes:
>
> Ugh! In ACL 5.0.1, the mapc and the loop produce virtually the same
> code. The performance difference is negligible. It's one thing to
> recode your algorithms to ones that are faster, but bumming code at
> this level should be done only as a last resort.

> Rather than recoding specific mapcs as loops, you might be better off
> writing your own compiler macro for mapc that converts them into
> loops. You will then gain performance everywhere you use mapc, you
> won't have to benchmark all the mapcs to find the ones worth
> converting, and if you move to a lisp where mapc is equivalent (or
> faster than) loop, you can just comment out your version.

Yes! Exactly! Finally someone on this ng understands optimization!

Well, I don't seriously mean to tar everyone in cll, but I had the
darndest time trying to get that point across to Naggum and his
friends a few weeks back and I got called some pretty strange things
for trying.

David Hanley

Apr 10, 2000

Joe Marshall wrote:

> David Hanley <d...@ncgr.org> writes:
>
> > Joe Marshall wrote:

> > > Ugh! In ACL 5.0.1, the mapc and the loop produce virtually the same
> > > code. The performance difference is negligible. It's one thing to
> > > recode your algorithms to ones that are faster, but bumming code at
> > > this level should be done only as a last resort.
> >

> > Says you. If it's an issue of easily making a critical function run 10
> > times faster, and I can do it in thirty seconds, and it doesn't impact
> > readability, I'm quite happy to do it. I tend to think users appreciate it.
>
> Yes, says me. (while trying to forget that I've re-arranged the order
> of arguments to critical procedures just to give hints to the compiler
> as to which ones belong in registers.)
>
> But of course, it all depends on the situation, etc. Since you are
> using HCL personal edition, I assume that

<snip> I used the personal edition because I was at a location where
that's all I have (salaried job). I do have the full version at home.

> But in a commercial environment, where you have many developers, and
> the risk of introducing bugs during the rewrite is higher (if you
> accidentally have a typo in a small project, it'll be found and fixed,
> if you accidentally have a typo in a commercial product, people with
> money get pissed and stop giving it to you), the payoff of writing a
> compiler macro or upgrading the compiler is much higher.

I don't agree with this logic one bit. The risk of introducing
bugs in a large project IS higher. That's why introducing a
global macro to rewrite mapc's into loops is a pretty bad idea.
A programmer whose code I break with my nifty new macro
may come and kick my ass, and rightly so. Instead, if I can
do one trivial change to some part of my code and quadruple
the speed of a bottleneck, I might as well do it.

> > Yeah, I thought about this. In fact, you can do better than loop
> > in this case if you decide to stick to lists as the input mechanism,
> > since loop uses endp.
>
> Go for it. Writing compiler-macros really impresses the chicks.

It's not that hard:

(defmacro fast-mapc (function &rest lists)
  (let ((t1 (gensym "fast-mapc-start"))
        (t2 (gensym "fast-mapc-end"))
        (temp-lists (loop for i from 1 to (length lists)
                          collect (gensym "fast-mapc-var"))))
    `(let ,(loop for nv in temp-lists for ov in lists collect (list nv ov))
       (tagbody
        ,t1 (when (or ,@(mapcar #'(lambda (x) `(null ,x)) temp-lists))
              (go ,t2))
            (funcall ,function ,@(mapcar #'(lambda (x) `(car ,x)) temp-lists))
            ,@(mapcar #'(lambda (x) `(setq ,x (cdr ,x))) temp-lists)
            (go ,t1)
        ,t2))))

My "chick" seems to be more impressed by other things, frankly. :)

Yes, I did use both mapcar and loop just to be cheeky. :)

dave


David Hanley

Apr 10, 2000

Tom Breton wrote:

> Yes! Exactly! Finally someone on this ng understands optimization!

well, "finally...someone" means that it can't be you, you must realize. :)

Glad to know we have a little coven here that knows more than some
of the real pros that have posted in the past. While you're enlightening
Henry Baker, Peter Norvig, et al, about optimization, could you shoot
a little enlightenment my way?

dave


Joe Marshall

Apr 10, 2000
David Hanley <d...@ncgr.org> writes:

> > But in a commercial environment, where you have many developers, and
> > the risk of introducing bugs during the rewrite is higher (if you
> > accidentally have a typo in a small project, it'll be found and fixed,
> > if you accidentally have a typo in a commercial product, people with
> > money get pissed and stop giving it to you), the payoff of writing a
> > compiler macro or upgrading the compiler is much higher.
>
> I don't agree with this logic one bit. The risk of introducing
> bugs in a large project IS higher.. That's why introducing a
> global macro to rewrite mapc's into loops is a pretty bad idea.

The probability of introducing a bug is more correlated with the
number of lines of code changed than it is with the number of
callers. Suppose I wrote the macro and got it wrong. Then whether
there are 1000 or 1,000,000 uses of mapc, there is only one `bug', one
locus of code to repair.

It stands to reason that rewriting a section of code can introduce
bugs. Each time you edit a mapc to rewrite it into a loop, you have a
small, but finite, chance of introducing a bug. (I'm *constantly*
mistyping variable names and such, but maybe I'm just a klutz). If
you rewrite 10 mapc's that's that many more times you might introduce
an error.

> It's not that hard:
>
> (defmacro fast-mapc (function &rest lists)
>   (let ((t1 (gensym "fast-mapc-start"))
>         (t2 (gensym "fast-mapc-end"))
>         (temp-lists (loop for i from 1 to (length lists)
>                           collect (gensym "fast-mapc-var"))))
>     `(let ,(loop for nv in temp-lists for ov in lists collect (list nv ov))
>        (tagbody
>         ,t1 (when (or ,@(mapcar #'(lambda (x) `(null ,x)) temp-lists))
>               (go ,t2))
>             (funcall ,function ,@(mapcar #'(lambda (x) `(car ,x)) temp-lists))
>             ,@(mapcar #'(lambda (x) `(setq ,x (cdr ,x))) temp-lists)
>             (go ,t1)
>         ,t2))))
>
> My "chick" seems to be more impressed by other things, frankly. :)

Damn. That's been my experience, too. I'm beginning to think that
hacking lisp is not the `babe magnet' I was led to believe.

--
~jrm

Erik Naggum

Apr 10, 2000
* Tom Breton <t...@world.std.com>
| Well, I don't seriously mean to tar everyone in cll ...

sure, but that's OK, Tom, you're only tarring yourself, anyway.

being wrong sometimes has the _unexplainable_ effect that other people
_continue_ to argue against you. isn't that just too _weird_, Tom?

#:Erik

Christopher Browne

Apr 11, 2000
Centuries ago, Nostradamus foresaw a time when Courageous would say:

>> If you don't like it, don't use it. There are many other ways to
>> accomplish iteration in lisp. Some people like to use loop in some
>> situations.
>
>I confess that, all warnings to the contrary, I still occasionally
>make use of goto in C/C++. I find it a very succinct way of bailing
>out of complex code bodies to the end of a function where clean
>up occurs.
>
>It just goes to show that religious reverence or revilement of
>a form seldom suits a person well. Languages are tools. Not altars.

This amounts to the same thing as Michael Jackson's Rules of Software
Optimization.

Rule #1. Don't Optimize.

Rule #2. (Only for Experts) Don't Optimize _Yet._

It's a good move to require that "clueless newbies" who haven't
figured out the horrors of completely unstructured code not use the
Ultimate Tool For Destructuring.

On the other hand, if someone is writing a macro to generate a new
control structure, it is _perfectly acceptable_ to use GOTO as one of
the underlying instructions.
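
A toy instance of that second point, sketched in CL (the macro name and
the gensym prefixes are arbitrary):

;; a WHILE construct whose GOs are hidden inside the abstraction
(defmacro while (test &body body)
  (let ((top (gensym "TOP"))
        (end (gensym "END")))
    `(tagbody
      ,top (unless ,test (go ,end))
           ,@body
           (go ,top)
      ,end)))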

And on occasion, a carefully placed "GOTO" is downright useful in
making program flow CLEARER. That's only true if there's ONE of them;
code that is liberally peppered with GOTOs is likely ugly
spaghetti...
--
Pound for pound, the amoeba is the most vicious animal on earth.
cbbr...@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>

Tom Breton

Apr 11, 2000

Let me tell you a little story about optimization.

Years ago I was playing with the Angband source code. I was looking
at the level-generation code and I thaut to myself "Why on earth is it
done that way? Look, this algorithm here is mostly redundant, most of
the stuff it writes will just get overwritten. And this here can be
speeded up. And so forth. Why, nobody has thaut about speed here at
all!"

Then I read Ben Harrison (the Angband maintainer)'s comment. He pointed
out that level generation only rarely occurs. It's the opposite of a hot
spot, it's a cold spot near absolute zero. He also pointed out that
it needed to be done properly - a mistake there can ruin a whole
level, often a whole game. The code needed to be clean and careful
far more than it needed to be fast.

And I realized with a shock that he was absolutely rite and I was way
off base. All the optimizations that had jumped so eagerly to mind
would have amounted to at best maybe a second per hour of play time,
about 0.03%. Any single bug, ever, even once, would have cost more
time than all the time "saved", totalled over everyone. The
optimizations that looked so tempting wouldn't have been improvements,
they would have been *anti*improvements.

It's a lesson that stuck with me - and one that I learned cheaply,
thanks to Ben. Optimizations are *not* a goodie to be grabbed at as
quickly as possible. Hold off on the optimizations until you're sure
they're needed. Instead focus on cleanness, correctness,
expressiveness. Hopefully someone will learn it from this post as
cheaply as I learned it from Ben.

Courageous

Apr 12, 2000

> It's a lesson that stuck with me - and one that I learned cheaply,
> thanks to Ben. Optimizations are *not* a goodie to be grabbed at as
> quickly as possible. Hold off on the optimizations until you're sure
> they're needed. Instead focus on cleanness, correctness,
> expressiveness. Hopefully someone will learn it from this post as
> cheaply as I learned it from Ben.

Quite right. Generally, the best code to optimize is the code
which is taking the most amount of CPU time. Anything else is
wasted effort, IMO.

C/

David Hanley

Apr 12, 2000

Courageous wrote:

Gee, I'm glad I can come on comp.lang.lisp and be informed of freshman
year comp sci class stuff. :)

dave


Raymond Toy

Apr 12, 2000
>>>>> "David" == David Hanley <d...@ncgr.org> writes:

David> Courageous wrote:

>> > It's a lesson that stuck with me - and one that I learned cheaply,
>> > thanks to Ben. Optimizations are *not* a goodie to be grabbed at as
>> > quickly as possible. Hold off on the optimizations until you're sure
>> > they're needed. Instead focus on cleanness, correctness,
>> > expressiveness. Hopefully someone will learn it from this post as
>> > cheaply as I learned it from Ben.
>>
>> Quite right. Generally, the best code to optimize is the code
>> which is taking the most amount of CPU time. Anything else is
>> wasted effort, IMO.

David> Gee, I'm glad I can come on comp.lang.lisp and be informed of freshman
David> year comp sci class stuff. :)

But I bet the number of people who know this but don't always follow
this would be larger than you would hope.

It's very enticing as you write a simple function to say, "Hey, if I
did this, it would be much faster". I think this happens a lot as you
write code because you are only writing one function at a time and
until you have a large part of the whole thing in place, you can't
profile to tell where the hot spots are.

Ray


David Hanley

Apr 12, 2000

Raymond Toy wrote:

> >>>>> "David" == David Hanley <d...@ncgr.org> writes:
>
> David> Gee, I'm glad I can come on comp.lang.lisp and be informed of freshman
> David> year comp sci class stuff. :)
>
> But I bet the number of people who know this but don't always follow
> this would be larger than you would hope.

This is true. But if they already know, but don't follow it correctly,
I doubt repetition on here is going to help very much.


> It's very enticing as you write a simple function to say, "Hey, if I
> did this, it would be much faster". I think this happens a lot as you
> write code because you are only writing one function at a time and
> until you have a large part of the whole thing in place, you can't
> profile to tell where the hot spots are.

I still disagree with this. If you have a modicum of design realized
you frequently know which one or two functions will be the
bottleneck. This is very important, because it might affect some
of your up-front data structure design. For example, if you need
to have a lot of objects, and fetch some of them by name, or
some other field, you probably need a hash table, or a tree.
Now, you may be able to isolate what this data structure is,
so it can be easily changed, BUT you might as well do it
that way in the first place, instead of using the potentially simpler
list data structure, assuming it isn't going to make your
initial coding radically harder. Depending on your specific case,
it might be better to just do it right the first time, rather than
to code an unacceptable version for "now."
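
A trivial sketch of that kind of up-front choice (the names are made up
for the example): the alist version is the "simpler list data structure"
with linear-time lookups, while the hash-table version stays flat as the
number of objects grows.

(defparameter *objects-alist* '())
(defparameter *objects-table* (make-hash-table :test #'equal))

(defun find-object-slow (name)
  (cdr (assoc name *objects-alist* :test #'string=)))

(defun find-object-fast (name)
  (gethash name *objects-table*))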

I place these "don't optimize" platitudes in the same category
as "no gotos" and "no more than 10 line functions..." rules of
thumb. Generally good ideas in most cases, but good programmers
know when to ignore them to do "the right thing."

dave


Joe Marshall

Apr 12, 2000
David Hanley <d...@ncgr.org> writes:

>
> Gee, I'm glad I can come on comp.lang.lisp and be informed of freshman
> year comp sci class stuff. :)

It gets better! You can be informed of stuff that isn't even true!

Tom Breton

Apr 12, 2000
> >>>>> "David" == David Hanley <d...@ncgr.org> writes:
>
> David> Courageous wrote:
>
> >> > It's a lesson that stuck with me - and one that I learned cheaply,
> >> > thanks to Ben. Optimizations are *not* a goodie to be grabbed at as
> >> > quickly as possible. Hold off on the optimizations until you're sure
> >> > they're needed. Instead focus on cleanness, correctness,
> >> > expressiveness. Hopefully someone will learn it from this post as
> >> > cheaply as I learned it from Ben.
> >>
> >> Quite right. Generally, the best code to optimize is the code
> >> which is taking the most amount of CPU time. Anything else is
> >> wasted effort, IMO.
>
> David> Gee, I'm glad I can come on comp.lang.lisp and be informed of freshman
> David> year comp sci class stuff. :)

Hey, believe me, I was just as surprised and chagrined to realize Comp
Sci 101 was needed here. Let me emphasize, only by *some*; I'm not
tarring everyone in cll.

Gareth Rees

Apr 13, 2000
Courageous wrote:
> Quite right. Generally, the best code to optimize is the code which is
> taking the most amount of CPU time. Anything else is wasted effort.

That's only half the story. If you use only a strategy of (1) write the
code; (2) profile it; (3) optimize the bottlenecks; then you are missing
a lot of possibilities. Rather than merely optimizing the bottlenecks
there may be ways to design the algorithm or data structures or the
entire architecture that completely eliminate particular bottlenecks.
This should involve thinking about efficiency at an early stage -- but
efficiency in general, not just speed of execution of parts of the code.

ObLisp: For example, in languages like C, compilation is a bottleneck in
the edit-compile-debug cycle. One response to this problem would be to
profile the compiler and start optimizing it. But a better response
would be to design the development environment so that interpretation or
incremental compilation is possible.

See also: http://www.naggum.no/erik/optimization.html

--
Gareth Rees

Friedrich Dominicus

Apr 13, 2000
>>>>> "GR" == Gareth Rees <garet...@pobox.com> writes:


GR> ObLisp: For example, in languages like C, compilation is a bottleneck in
GR> the edit-compile-debug cycle. One response to this problem would be to
GR> profile the compiler and start optimizing it. But a better response
GR> would be to design the development environment so that interpretation or
GR> incremental compilation is possible.

There are C-Interpreters available. So you may check them out; look for Eic
and Cint.

Regards
Friedrich

Marco Antoniotti

Apr 13, 2000

Gareth Rees <garet...@pobox.com> writes:

> Courageous wrote:
> > Quite right. Generally, the best code to optimize is the code which is
> > taking the most amount of CPU time. Anything else is wasted effort.
>
> That's only half the story. If you use only a strategy of (1) write the
> code; (2) profile it; (3) optimize the bottlenecks; then you are missing
> a lot of possibilities. Rather than merely optimizing the bottlenecks
> there may be ways to design the algorithm or data structures or the
> entire architecture that completely eliminate particular bottlenecks.
> This should involve thinking about efficiency at an early stage -- but
> efficiency in general, not just speed of execution of parts of the
> code.

I'd agree, provided that you qualify the "thinking about efficiency at
an early stage" sentence. As long as you don't mean "milling the bit"
("limare il bit" for the italian crowd :) ) but rather "using algorithms
with better proven behavior", I'd agree completely.

Unfortunately, many times people (myself included) tend to fall into
the "milling the bit" mode of operation, thus improving the constant
factors in the latest and greatest implementation of bubble-sort. Not
a very useful thing to do.

Cheers

--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa

Johan Kullstam

Apr 13, 2000
David Hanley <d...@ncgr.org> writes:

> Courageous wrote:
>
> > > It's a lesson that stuck with me - and one that I learned cheaply,
> > > thanks to Ben. Optimizations are *not* a goodie to be grabbed at as
> > > quickly as possible. Hold off on the optimizations until you're sure
> > > they're needed. Instead focus on cleanness, correctness,
> > > expressiveness. Hopefully someone will learn it from this post as
> > > cheaply as I learned it from Ben.
> >

> > Quite right. Generally, the best code to optimize is the code
> > which is taking the most amount of CPU time. Anything else is
> > wasted effort, IMO.
>
> Gee, I'm glad I can come on comp.lang.lisp and be informed of freshman
> year comp sci class stuff. :)

optimization is hacker crack.

despite *knowing* it can be bad for me, i need repeat reminders.
sometimes it helps to state the blindingly obvious (because it blinds
you and you can't see it).

--
johan kullstam l72t00052


Tom Breton

Apr 13, 2000
Gareth Rees <garet...@pobox.com> writes:

> Courageous wrote:
> > Quite right. Generally, the best code to optimize is the code which is
> > taking the most amount of CPU time. Anything else is wasted effort.
>
> That's only half the story. If you use only a strategy of (1) write the
> code; (2) profile it; (3) optimize the bottlenecks; then you are missing
> a lot of possibilities. Rather than merely optimizing the bottlenecks
> there may be ways to design the algorithm or data structures or the
> entire architecture that completely eliminate particular bottlenecks.

Yes.

> This should involve thinking about efficiency at an early stage -- but
> efficiency in general, not just speed of execution of parts of the code.

This is less clear. I presume you hope to avoid writing code twice
when you change to a faster algorithm. But OTOH before you have
working code, you can only guess at what is and isn't fast enuff.
Making premature decisions can also lead to wasted effort.

So do think ahead, but mostly in terms of making the code very clean
so that you can easily change to a faster+hairier algorithm if you
need to.

> ObLisp: For example, in languages like C, compilation is a bottleneck in
> the edit-compile-debug cycle. One response to this problem would be to
> profile the compiler and start optimizing it. But a better response
> would be to design the development environment so that interpretation or
> incremental compilation is possible.

This is true. Having come from years of C, Lisp's typical development
cycle is a huge improvement.

> See also: http://www.naggum.no/erik/optimization.html

Sorry, Erik is absolutely not someone you want to cite on optimization.

Erik Naggum

Apr 13, 2000
* Tom Breton <t...@world.std.com>

| Sorry, Erik is absolutely not someone you want to cite on optimization.

I'm fascinated that you optimized your journey from ignorance to opinion
by removing the normally required task of _reading_ what you opine on.
this reflects on most of your other opinions on optimization, at least
with me, because I optimize away the incredibly unrewarding task of
trying to get past your personal hangups with people you don't agree
with, which you might learn a little bit about if you read what I have
written on _agreement_ as part of the process of argumentation, but it is
by now fairly obvious that you also optimize away what other people call
"learning experience".

sigh. just get over it, Tom. I don't like being so important in some
dreadful morons' lives as I am in yours and this "Courageous" fellow.

#:Erik

Pierre R. Mai

Apr 13, 2000
Tom Breton <t...@world.std.com> writes:

> This is less clear. I presume you hope to avoid writing code twice
> when you change to a faster algorithm. But OTOH before you have
> working code, you can only guess at what is and isn't fast enuff.

People with some experience have been known to make fairly reasonable
guesses in many situations. Sometimes doing a little experimental
coding and experimentation will yield a very useful base for something
called an "educated guess".

Of course people with experience have been known to make fairly stupid
guesses as well ;)

> Making premature decisions can also lead to wasted effort.

Performance is also very often a function of architectural issues.
Since you can't just rip out and replace architectures, even in CL,
getting basic facts right about the performance characteristics, costs
and usage profiles involved will be necessary to select an
architecture that has the potential of fullfilling mandated
performance requirements within time, budget and without leading to
contorted unmaintainable coding around architectural limitations.

> So do think ahead, but mostly in terms of making the code very clean
> so that you can easily change to a faster+hairier algorithm if you
> need to.

You seem to presume that all optimizations will lead to hairier code,
or involve hairier algorithms. Quite often this is not the case, and
better performance can be had with equally clear (or even clearer)
algorithms and code. You might still incur costs for reduced
generality, though.

> > See also: http://www.naggum.no/erik/optimization.html
>
> Sorry, Erik is absolutely not someone you want to cite on optimization.

Get over your hangup. This is getting silly.

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Courageous

Apr 13, 2000

> sigh. just get over it, Tom. I don't like being so important in some
> dreadful morons' lives as I am in yours and this "Courageous" fellow.

Given the long chains of invective, cursing, put downs, and
outright temper tantrums I've been subjected to on both
usenet and in long personal email messages -- and the apologetic
email I've received from the group explaining that you have
an established tendency to behave the way you do (were you
aware that people thought of you that way?) ... *well*.

Ahem.

C/

thi

Apr 13, 2000
Erik Naggum <er...@naggum.no> writes:

> there's something inside you that makes me important to you. find it!

downcase the world!

thi

Jon S Anthony

Apr 13, 2000
Erik Naggum wrote:
<...>
> --
> statistically, only 1 in 250 people understood anything in this message.

That many? I'm impressed. Really.

/Jon

--
Jon Anthony
Synquiry Technologies, Ltd. Belmont, MA 02478, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Erik Naggum

Apr 14, 2000
* Erik Naggum

| sigh. just get over it, Tom. I don't like being so important in some
| dreadful morons' lives as I am in yours and this "Courageous" fellow.

* Courageous <jkra...@san.rr.com>


| Given the long chains of invective, cursing, put downs, and outright
| temper tantrums I've been subjected to on both usenet and in long personal
| email messages -- and the apologetic email I've received from the group
| explaining that you have an established tendency to behave the way you do
| (were you aware that people thought of you that way?) ... *well*.

you missed the relevant part, of course. _you_ make me important in
_your_ lives. you don't have to, yet you complain bitterly when you do,
and you refuse to understand that it's your very own choice to do so.

when you have reached the point in your life when _you_ become important
to yourself the way I am to you now, you will thank me, in your own way.
until then, you will hate me, or something more or less indistinguishable
from hate. _that_ is the established tendency. of course, you won't
believe this now, but that's _why_ I'm important to you: you _need_ to be
preoccupied with me and you _need_ to continue to post so much insane
drivel about me from inside your _own_ psyche it will make you very, very
embarrassed when you realize what you've done, instead of the simple and
easy thing you _could_ do, but which _you_ only tell other people as a
piece of advice you'll never follow yourself: look inside yourself.


there's something inside you that makes me important to you. find it!

understandably, however, you are right now at the point where the only
thing that matters to you is to keep enlightenment from occurring to you,
and the more you can preoccupy yourself with _me_, the less you will be
enlightened. once you understand, fundamentally, emotionally, and fully
that spending any energy at all on other _people_ is a matter of choice
that reflects concern, personal bonding, and appreciation for something
you often do not understand when it happens, you will be prepared to let
go of _people_ and focus on _ideas_, which will enable you to concentrate
on those few people that _really_ matter to you, as you will know why.

however, I don't want anything to do with _you_, yet this very rejection
is what drives you like a madman to keep sending me mail that is also
rejected, keep replying to articles, keep _preoccupying_ yourself with
me. I want you morons to shut up and _listen_, not to _me_, but to _any_
source of enlightenment you can find that will cause you to grow up and
relieve me of the burden of having to reject your pathetic attempts to
ingratiate yourself with me. it's not _about_ me, you morons, it's about
what I trigger _inside_ of you, whatever the hell it is -- I wish I knew,
because then maybe I could get a little _control_ over you sycophants,
like perhaps giving me all your money, but I'm joking, of course.

and if you found this nauseating enough to read that you stop stalking me
with your sniveling little "please love me!" messages, so much the better!
I shall appreciate it highly when you reach enlightenment, as interaction
between the enlightened is always such a pleasure, but until then, you're
nothing but an ignorant, annoying pest attracted to the nearest source of
light. seek elsewhere, you moths-for-brains, but above all: seek!

#:Erik

Courageous

Apr 14, 2000

> you missed the relevant part, of course. _you_ make me important in
> _your_ lives.

Seeing another full page ego-fest of a dissertation, I would
wager that it's the other way around by a long shot. If you're
like this in real life, you need therapy.


C/

Erik Naggum

Apr 14, 2000
* Courageous <jkra...@san.rr.com>

| Seeing another full page ego-fest of a dissertation, I would
| wager that it's the other way around by a long shot. If you're
| like this in real life, you need therapy.

ego-fest? geez. is there _nothing_ that can help you get out of your
sick preoccupation with _me_? but who's surprised? you didn't
understand _anything_ in that message, as was predicted therein.

#:Erik

Courageous

Apr 15, 2000

>> you often do not understand when it happens, you will be prepared to let
>> go of _people_ and focus on _ideas_...

> you didn't understand _anything_ in that message

Oh I understood what you *want* well enough, but life's
simply not like that. When you behave the way you have,
certain social consequences are part of the natural order
of things. Contemplate it.

C/

Erik Naggum

Apr 15, 2000
* Courageous <jkra...@san.rr.com>

| Oh I understood what you *want* well enough, but life's simply not like
| that. When you behave the way you have, certain social consequences are
| part of the natural order of things. Contemplate it.

so you _do_ realize that! but of course, it applies only to other people
and you're exempt from the rules that apply to other people, aren't you?

focus on _yourself_. care about _yourself_. let other people focus on
themselves and care about themselves. get off the "getting personal"
track and focus on _your_ needs, discuss matters that matter to _you_,
and learn what you came to learn, if you _did_ come to learn anything
instead of merely posturing as a moron with an agenda. by continuing to
take this so _personally_ you display a personal insecurity that is
disgusting to behold and which is of no value to anyone to know about in
the first place. "contemplate it."

#:Erik

David Hanley

Apr 17, 2000

Tom Breton wrote:

> Gareth Rees <garet...@pobox.com> writes:
>
> > Courageous wrote:
> > > Quite right. Generally, the best code to optimize is the code which is
> > > taking the most amount of CPU time. Anything else is wasted effort.
> >
> > That's only half the story. If you use only a strategy of (1) write the
> > code; (2) profile it; (3) optimize the bottlenecks; then you are missing
> > a lot of possibilities. Rather than merely optimizing the bottlenecks
> > there may be ways to design the algorithm or data structures or the
> > entire architecture that completely eliminate particular bottlenecks.
>
> Yes.
>
> > This should involve thinking about efficiency at an early stage -- but
> > efficiency in general, not just speed of execution of parts of the code.
>

> This is less clear. I presume you hope to avoid writing code twice
> when you change to a faster algorithm. But OTOH before you have
> working code, you can only guess at what is and isn't fast enuff.

I dunno. I find that I usually know ahead of time what the bottlenecks
will be. For example, in the GIF outputting java code I referred to earlier,
I knew full well that translating a java image to a GIF was going to be
the bottleneck; hence, before I got too far, I wrote code to pass a java
image to C code which spit out GIF files. This not only greatly reduced
redundant effort, but brought potential bugs and issues to the forefront
early in the development cycle, when it was possible to modify fundamental
design to accommodate them. If I had waited until I had a working application
to profile, this would not have been nearly so easy.

> > See also: http://www.naggum.no/erik/optimization.html

>
> Sorry, Erik is absolutely not someone you want to cite on optimization.

Umm. He can be caustic, and I don't agree with him on some
issues, but I don't think you even read that page. It seems closer to
your position than mine.

dave

