
Devin Currie

Nov 20, 1999

Hello,

Can someone post up the code that will test the efficiency of
the function? Thanks.

Regards,

(format t "~A" 'Devin)


Robert Monfera

Nov 20, 1999

Devin Currie wrote:
>
> Hello,
>
> Can someone post up the code that will test the efficiency of
> the function? Thanks.

Yes.

(time (format t "~A" 'Devin))
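
More generally, TIME wraps any form and reports timing (and, in most
implementations, consing) statistics; the output format varies by
implementation. A made-up example with a deliberately expensive
function:

(defun slow-fib (n)
  ;; naive doubly recursive Fibonacci -- exponential time, so it makes
  ;; a convenient timing workload (hypothetical example)
  (if (< n 2)
      n
      (+ (slow-fib (- n 1)) (slow-fib (- n 2)))))

(time (slow-fib 25))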

Robert

Devin Currie

Nov 22, 1999

Thanks. :-)

_Now_, I wonder if anybody has compiled a list of functions that affect
the efficiency of code? I.e., recursive vs. iterative, SETF creating
lots of cons cells, LABELS vs. DEFUN, APPLY/APPEND, etc.

IOW, which functions cause garbage collection and which don't?

I am hoping someone has such a list, because it would take me a while
to compile one myself after enough experience and testing.

Thanks.

R. Matthew Emerson

Nov 23, 1999

Devin Currie <dscu...@direct.ca> writes:

> _Now_, I wonder if anybody has compiled a list of functions that affect
> the efficiency of code? I.e., recursive vs. iterative, SETF creating
> lots of cons cells, LABELS vs. DEFUN, APPLY/APPEND, etc.
>
> IOW, which functions cause garbage collection and which don't?
>
> I am hoping someone has such a list, because it would take me a while
> to compile one myself after enough experience and testing.

First of all, I'd say don't worry much about efficiency issues. Just
write code that works. Profile and optimize as needed.

Of course, it's useful to be generally aware of efficiency issues (so
you don't do egregiously wasteful things like using append to collect a
series of values into a list rather than push/nreverse or the collect
clause of the loop macro), and maybe that is what you're trying to
understand.
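
For illustration, the approaches side by side (a sketch with made-up
function names):

(defun collect-append (n)
  ;; wasteful: APPEND recopies the whole accumulated list on every
  ;; iteration, so collecting n elements is O(n^2) in time and consing
  (let ((result '()))
    (dotimes (i n result)
      (setf result (append result (list i))))))

(defun collect-push (n)
  ;; cheap: PUSH conses once per element, and NREVERSE fixes the order
  ;; destructively at the end, so the whole thing is O(n)
  (let ((result '()))
    (dotimes (i n)
      (push i result))
    (nreverse result)))

(defun collect-loop (n)
  ;; the idiomatic equivalent using LOOP's collect clause
  (loop for i below n collect i))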

If that's the case, you might find chapters 9 and 10 of Norvig's
Paradigms of AI Programming: Case Studies in Common Lisp to be
enlightening. Chapter 10 in particular (which is about low-level
efficiency issues) would probably be interesting to you.

See also chapter 13 in Graham's ANSI Common Lisp.

-matt

Paolo Amoroso

Nov 23, 1999

On Mon, 22 Nov 1999 19:52:13 -0700, Devin Currie <dscu...@direct.ca>
wrote:

> _Now_, I wonder if anybody has compiled a list of functions that affect
> the efficiency of code? I.e., recursive vs. iterative, SETF creating
> lots of cons cells, LABELS vs. DEFUN, APPLY/APPEND, etc.

Check question [1-3] "How can I improve my Lisp programming style and
coding efficiency?" of the comp.lang.lisp FAQ. Someone else in this thread
has already mentioned the book "Paradigms of Artificial Intelligence
Programming" by Norvig.


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

Erik Naggum

Nov 23, 1999

* Devin Currie <dscu...@direct.ca>

| I am hoping someone has such a list, because it would take me a while
| to compile one myself after enough experience and testing.

that is a reasonable thing to hope for, but unfortunately, there is no
substitute for experience and testing. the experience you get varies
from version to version of each implementation and may be completely
useless from implementation to implementation. the best you can hope for
is to understand the major factor for the time and space consumption of a
function. e.g., the function = (with n > 2 arguments) will return false
as soon as two neighboring arguments are unequal, which means the running
time is proportional to n, while the function /= (with n > 2 arguments)
will return false as soon as one argument is equal to any other argument,
which means the running time of each test is proportional to n, and of
the whole function proportional to n*(n/2), or n², for brevity.
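
for concreteness, a portable sketch of the two strategies (hypothetical
code, not what any implementation actually does):

(defun naive-= (&rest numbers)
  ;; compares neighboring arguments only: at most n-1 tests, hence O(n)
  (loop for (x y) on numbers
        while y
        always (= x y)))

(defun naive-/= (&rest numbers)
  ;; every argument must differ from every other argument:
  ;; n*(n-1)/2 tests in the worst case, hence O(n^2)
  (loop for (x . rest) on numbers
        always (loop for y in rest
                     never (= x y))))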

however, just how much time is involved can vary tremendously. suppose
the numbers are all fixnums and declared as such. you would probably
expect the complexity of = to be such that the compiler can inline the
tests completely and produce very efficient machine code. the complexity
of /=, however, is such that you would face geometric expansion of the
generated machine code were it done inline, so it would be a function
call away. that function would most probably be more generic (not in the
CLOS sense) and not be able to benefit from the declaration of the
fixnums, so it would do type tests or more generic number comparisons.
so even though = and /= would differ by n/2 in theoretical execution
time, you could very easily wind up with a large constant factor on top
of that difference that for most practical purposes (reasonably small
values of n) would dwarf n and even n².

obviously, the thresholds and conditions for when such factors play a
bigger role than the theoretical performance values simply cannot be
communicated exhaustively, and most of us have only sketchy knowledge of
the vast space of interacting conditions that we face daily, anyway.
that's why profiling the execution-time behavior of the code on real-life
data under real-life conditions is such a big win. understanding the
results of profiling is a challenge in itself.

the only good news here, relative to your hopes, is that you don't need a
general list, which would overwhelm you if you got it, anyway, you need
experience with whatever you're actually trying to do. with intelligent
probes into the solution space, you will also be able to vary the way you
do these things radically and achieve very different optimization results
-- and Common Lisp is the best ever language to let you fiddle with the
many varying algorithms that could solve a problem, not just fiddle with
a single implementation because it's so hard to write that it's too hard
to discard it.

so the interesting question becomes: how much time and space does it take
to write efficient Common Lisp programs? despite numerous efforts to
profile the _programmers_, interpreting the results that have come back
is far from trivial. e.g., most programmers spend a lot of CPU time in a
function of their brain that returns true for the input question "is this
really supposed to work?" despite mounting evidence to the contrary, yet
there is evidence that programmers who handle the often non-maskable
interrupt "there's _got_ to be a better way than this!" are more able
to return false from the above function and thus waste less
time on preconceived notions of what constitutes speed and correct code.
unfortunately, there's no substitute for experience with this, either.

#:Erik

Gareth McCaughan

Nov 23, 1999

Erik Naggum wrote:

> that is a reasonable thing to hope for, but unfortunately, there is no
> substitute for experience and testing. the experience you get varies
> from version to version of each implementation and may be completely
> useless from implementation to implementation. the best you can hope for
> is to understand the major factor for the time and space consumption of a
> function. e.g., the function = (with n > 2 arguments) will return false
> as soon as two neighboring arguments are unequal, which means the running
> time is proportional to n, while the function /= (with n > 2 arguments)
> will return false as soon as one argument is equal to any other argument,
> which means the running time of each test is proportional to n, and of
> the whole function proportional to n*(n/2), or n², for brevity.
>
> however, just how much time is involved can vary tremendously. suppose
> the numbers are all fixnums and declared as such. you would probably
> expect the complexity of = to be such that the compiler can inline the
> tests completely and produce very efficient machine code. the complexity
> of /=, however, is such that you would face geometric expansion of the
> generated machine code were it done inline,

Not geometric. Just quadratic. That's still painful, though.

> so it would be a function
> call away. that function would most probably be more generic (not in the
> CLOS sense) and not be able to benefit from the declaration of the
> fixnums, so it would do type tests or more generic number comparisons.
> so even though = and /= would differ by n/2 in theoretical execution
> time, you could very easily wind up with a large constant factor on top
> of that difference that for most practical purposes (reasonably small
> values of n) would dwarf n and even n².

Just for information: doing

(declaim (optimize (speed 3)))
(defun f (a b c d e)
  (declare (fixnum a b c d e))
  (/= a b c d e))

in CMU CL produces code that does lots of individual comparisons.
It does the same thing even with 10 fixnum arguments. I presume
it would do the same even with 100.

In ACL, even with only three args it's a function call, and as
you say the code doesn't change at all if you take out the fixnum
declarations.

It would be possible, if anyone cared about /= with large numbers
of arguments, to do it more efficiently-in-the-worst-case by
sorting the numbers first. Unsurprisingly, ACL doesn't appear
to do this (judging from timing figures).
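
Something along these lines, say (only a sketch, and it only handles
real arguments, since sort with #'< can't order complex numbers):

(defun sorted-/= (&rest numbers)
  ;; sort a copy, then one linear scan finds any duplicate among
  ;; neighbors: O(n log n) overall instead of O(n^2) pairwise tests
  (loop for (x y) on (sort (copy-list numbers) #'<)
        while y
        never (= x y)))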

CMU CL's code is much faster for n=10 and n=30 than ACL's on my
machine -- more than 10 times faster for n=10. On the other hand,
it takes a little while to compile the n=30 code. For n=100,
compilation was *painful*; I gave up and killed it when its
allocated memory size was about 56MB. :-)

Does anyone actually ever call /= with large numbers of values?

--
Gareth McCaughan Gareth.M...@pobox.com
sig under construction
