
setf'able?


Dobes Vandermeer

Nov 2, 1998

How do you write functions that are "setf'able"?

i.e.

(setf (myfunc myparams) 24) => Sets a special value "pointed to" in some
way by (myfunc myparams).

All info appreciated!

Thanks
Dobes

Barry Margolin

Nov 2, 1998
In article <363D7A39...@mindless.com>,
Dobes Vandermeer <do...@mindless.com> wrote:
>How do you write functions that are "setf'able"?
>
>i.e.
>
>(setf (myfunc myparams) 24) => Sets a special value "pointed to" in some
>way by (myfunc myparams).

Define a function like:

(defun (setf myfunc) (new-value param1 param2 ...)
  ...)

or define a SETF expander using DEFSETF or DEFINE-SETF-EXPANDER. See the
section of CLtL2 on Generalized Variables for full details (at the time the
book was written DEFINE-SETF-EXPANDER was known as DEFINE-SETF-METHOD).
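
For instance, a minimal sketch (MYFUNC and the backing hash table
*MY-TABLE* are invented for illustration):

(defvar *my-table* (make-hash-table))

(defun myfunc (key)
  (gethash key *my-table*))

(defun (setf myfunc) (new-value key)
  (setf (gethash key *my-table*) new-value))

;; After (setf (myfunc 'x) 24), (myfunc 'x) => 24.
;; With DEFSETF (short form) you'd instead name a setter function:
;; (defsetf myfunc set-myfunc)  ; SET-MYFUNC is assumed to exist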

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Don't bother cc'ing followups to me.

Kent M Pitman

Nov 2, 1998
Barry Margolin <bar...@bbnplanet.com> writes:

> In article <363D7A39...@mindless.com>,
> Dobes Vandermeer <do...@mindless.com> wrote:
> >How do you write functions that are "setf'able"?
> >
> >i.e.
> >
> >(setf (myfunc myparams) 24) => Sets a special value "pointed to" in some
> >way by (myfunc myparams).
>
> Define a function like:
>
> (defun (setf myfunc) (new-value param1 param2 ...)
> ...)
>
> or define a SETF expander using DEFSETF or DEFINE-SETF-EXPANDER. See the
> section of CLtL2 on Generalized Variables for full details (at the time the
> book was written DEFINE-SETF-EXPANDER was known as DEFINE-SETF-METHOD).

Barry, I'm sure you know this, but since you didn't mention it, here's
info for other new people who don't know...

You CAN, of course, use CLTL2, but the same info is in the Common Lisp
HyperSpec(TM) at http://www.harlequin.com/education/books/HyperSpec/FrontMatter
The nice thing about CLHS as a reference is that it's free, it's
webbed, it's lightweight and takes up no desk footprint, and it has the
up-to-date names. You can even carry it around if you download it to your
laptop. Download info is at
http://www.harlequin.com/education/books/HyperSpec/

For info on setf expanders, see the section on "Generalized Reference"
and "places" in Section 5.1 of CLHS.

You could find that section also by going to the glossary under "setf xxx"
(for various values of xxx--there are a number of compound terms in the glossary).

You could find the setf expander stuff in the symbol index under
DEFINE-SETF-EXPANDER.

Don't forget to see also DEFINE-MODIFY-MACRO, which handles an interesting
set of cases.
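
For instance, a sketch (MULTIPLYF is an invented name):

(define-modify-macro multiplyf (factor) *
  "Multiply the value in PLACE by FACTOR and store it back.")

;; (let ((x 7)) (multiplyf x 3) x) => 21
;; This is the same mechanism by which INCF could be defined in terms of +.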


Barry Margolin

Nov 2, 1998
In article <sfw67cy...@world.std.com>,
Kent M Pitman <pit...@world.std.com> wrote:
>You CAN, of course, use CLTL2, but the same info is in the Common Lisp
>HyperSpec(TM) at http://www.harlequin.com/education/books/HyperSpec/FrontMatter
>The nice thing about CLHS as a reference is that it's free, it's
>webbed, it's lightweight and takes up no desk footprint, and it has the
>up-to-date names. You can even carry it around if you download it to your
>laptop. Download info is at
> http://www.harlequin.com/education/books/HyperSpec/

I don't normally refer people to the CLHS because it's much more
referential than instructive. CLtL2 isn't really the best tutorial,
either, but it's better than the CLHS.

BTW, CLtL2 is available online as well:

http://www.cs.cmu.edu/Web/Groups/AI/html/cltl/cltl2.html

Erik Naggum

Nov 2, 1998
* Dobes Vandermeer <do...@mindless.com>

| How do you write functions that are "setf'able"?
|
| i.e.
|
| (setf (myfunc myparams) 24) => Sets a special value "pointed to" in some
| way by (myfunc myparams).

others have answered with the code you need to write, but I'd like to
change your view of setting and getting values, if I may. a variable (or
a generalized place, in general) is something that yields a value that
does not change from reference to reference (or call to call). "setting"
the value of a variable (or a generalized place) is an "instruction" that
it return that value until you "set" it again, as in "remember this!".
if you think of it this way, there are a number of interesting and otherwise
"read-only" places that you might want to have return a different value.

like, _setting_ the transatlantic round-trip time of IP datagrams -- your
system could choose a different ISP, upgrade your links, move your office
location, etc... :)

#:Erik
--
The Microsoft Dating Program -- where do you want to crash tonight?

rusty craine

Nov 2, 1998

Dobes Vandermeer wrote in message <363D7A39...@mindless.com>...

>
>How do you write functions that are "setf'able"?
>
Stig Hemmer in "RE: looked for a tutor in the area....alas 0" in this news
group
used setf in a three coins problem solution to show me how to use lisp in a
more lisp-like manner. If you have access to _Common LISPcraft_ by Robert
Wilensky [ISBN 0-393-95544-3], there are several examples using setf in
different situations. One of the examples is to set (setf) properties of
various objects, kinda interesting. The above book is a non-expert's
common lisp book.
quoted from wilensky>>>

(get 'chair3 'color)
-> NIL
(setf (get 'chair3 'color) 'blue)
-> BLUE
(get 'chair3 'color)
->BLUE
(setf (get 'chair3 'owner) 'john)
-> JOHN

and on and on, etc...<<< but maybe you're past this example.

I use it constantly trying to understand the code snippets presented here.
Between the snippets here, the book and some advice from the news
group...I've learned a lot.

rusty

Dobes Vandermeer

Nov 3, 1998
Barry Margolin wrote:
>
> In article <sfw67cy...@world.std.com>,
> Kent M Pitman <pit...@world.std.com> wrote:
> >You CAN, of course, use CLTL2, but the same info is in the Common Lisp
> >HyperSpec(TM) at http://www.harlequin.com/education/books/HyperSpec/FrontMatter
> >The nice thing about CLHS as a reference is that it's free, it's
> >webbed, it's lightweight and takes up no desk footprint, and it has the
> >up-to-date names. You can even carry it around if you download it to your
> >laptop. Download info is at
> > http://www.harlequin.com/education/books/HyperSpec/
>
> I don't normally refer people to the CLHS because it's much more
> referential than instructive. CLtL2 isn't really the best tutorial,
> either, but it's better than the CLHS.
>
> BTW, CLtL2 is available online as well:
>
> http://www.cs.cmu.edu/Web/Groups/AI/html/cltl/cltl2.html

I'll be honest... I've been to both places, and both are completely
hopeless attempts to describe the language.

Read the entry for SETF (in the HyperSpec):

setf changes the value of place to be newvalue.

(setf place newvalue) expands into an update form that stores the result
of evaluating newvalue into the location referred to by place. Some place
forms involve uses of accessors that take optional arguments. Whether
those optional arguments are permitted by setf, or what their use is, is
up to the setf expander function and is not under the control of setf.
The documentation for any function that accepts &optional, &rest, or &key
arguments and that claims to be usable with setf must specify how those
arguments are treated.

Strong start, I'll grant.. but things only get worse from there. Is it
really necessary for it to expand into an "update form" in only the
second sentence? This documentation was written for, and by, people who
already know what these functions do, and are looking for the very
minute details. CLTL2 is slightly better, going to a fair length to
describe setf, etc. Neither of them takes the perspective, however, of
someone who is looking for a function to set one thing to another, or
anything like that.

Nevertheless, it's pretty frustrating learning LISP at first, with such
poor material, but once you get into the swing of things you can figure
it out with the help of a book, instructor, or newsgroup.

CU
Dobes

Erik Naggum

Nov 3, 1998
* Dobes Vandermeer <do...@mindless.com>

| I'll be honest... I've been to both places, and both are completely
| hopeless attempts to describe the language.

I'll be honest, too. I have never found anything _but_ specifications
and "hard" technical literature to be worth reading when I wanted to know
something. textbooks are wrong, inaccurate, confused, incomplete, or
gloss over exactly the thing I need to understand, and the authors
usually have a misguided idea of what constitutes a "pedagogical order
and presentation" so as not to offend the meager intelligence of the mass
audience their publisher said would be the only ones to buy the book and
pay for their work.

of course, specifications come in several flavors, too. Common Lisp's
specification is one of the best there is. I'll submit that the worst 10%
of specifications are better than the best 90% of textbooks, tutorials,
guides, etc. never have I seen a textbook better than the specification,
even for a sorry excuse of a specification like ISO 8879 (SGML).

if you can't read specifications, _that's_ the problem that needs to be
fixed. with any luck, there are still people who can _write_ them,
despite the increasing disdain for precision in the Western civilization,
but I wouldn't be surprised if nothing is left of those skills a hundred
years from now.



| Read at SETF (In the HyperSpec):

no wonder you don't like it -- you haven't bothered to read it, either.
you can't expect to jump into the middle of a 1000-page document and get
much useful stuff out of a single page without any appreciation of its
context. see 5.1 Generalized Reference for that context. however, I
strongly recommend that you read the entire specification, but _skip_ the
dictionaries on first reading. also, do use the glossary.

| Strong start, I'll grant..

well, you weren't supposed to _start_ there. you don't need the SETF
dictionary entry at all to learn how to use it. your mistake, only.

| Neither of them takes the perspective, however, of someone who is looking
| for a function to set one thing to another, or anything like that.

again, see 5.1 Generalized Reference.

| Nevertheless, it's pretty frustrating learning LISP at first, with such
| poor material, but once you get into the swing of things you can figure
| it out with the help of a book, instructor, or newsgroup.

sigh. for me, it's more than just "pretty frustrating" to have to listen
to whining losers who can't read technical literature to learn something
highly technical in nature, and who blame the literature, not themselves.
whatever _are_ people taught in schools these days, when they come out of
them unable to deal with specifications, documentation, and manuals in
general? do schools only turn out yet more consumer generations?

everything worth doing in life is hard. everything that is simple was
once hard, but somebody worked _very_ hard to make it simple. simplicity
is an ideal, a goal; to be worked for, not taken for granted. when
something is simple for _you_, remember that somebody else decided that
it shouldn't be as hard as it had been for him, and went ahead and _did_
something about it. quit whining, start working. the books weren't
written without effort, a good instructor has spent years to become what
he is, and those who answer your fuzzy and vague questions to undo some
deep-rooted misunderstanding in a newsgroup actually have to work to do
it -- it's not as if books, instructors, or newsgroup answers grow on
trees, and they do not in particular fall into your lap if you're too
lazy even to get up and reach for it.

just because newsgroups are friendly to people who do their own homework
doesn't mean they are going to stay that way if you don't do yours.

Tim Bradshaw

Nov 3, 1998
* Dobes Vandermeer wrote:
[CLHS and CLtL2 on the net]

> Nevertheless, it's pretty frustrating learning LISP at first, with such
> poor material, but once you get into the swing of things you can figure
> it out with the help of a book, instructor, or newsgroup.

But these are both reference works really. I wouldn't recommend
either for learners. Something like Paul Graham's `ANSI Common Lisp'
book is *much* better. Of course you have to pay for it, but it's not
exactly a vast investment compared to the time you're likely to spend
learning a new language.

--tim


Sunil Mishra

Nov 3, 1998
"rusty craine" <rccr...@flash.net> writes:

> different situations. One of the examples is to set (setf) properties of
> various objects, kinda interesting. the above book is a non-expert's
> common lisp book.
> quoted from wilensky>>>

[examples deleted]

Wilensky's book is a fine book, but is, alas, sadly out of date. Modern lisp
programming practices generally deprecate the use of property lists. The
biggest problem with them is that they are persistent, attached permanently
to the symbol. Setting them to nil (as in (setf (get 'chair3 'color) nil))
is quite dangerous, since there is always the possibility of the system
itself storing stuff in there. So, if you ever decide to rename or throw
away chair3, you will have to go through every place which says anything
about chair3, and update it. Painful, to say the least.

A better thing to do is to create a hash table to track properties. (Keep
in mind that I often forget the order in which the arguments to gethash
must be presented, so you may have to reverse the order if this does not
work.) If the properties you want to track are known in advance, and every
object can be expected to have most of them (that is, if all your objects
are chairs, they will have similar properties) then using a structure to
store individual properties would be the way to go. Otherwise, try
association lists, like below.

(setq table (make-hash-table))
(gethash 'chair3 table) => nil
(setf (gethash 'chair3 table)
      (acons 'color 'blue (gethash 'chair3 table))) => ((COLOR . BLUE))
(assoc 'color (gethash 'chair3 table)) => (COLOR . BLUE)

etc.
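
If the properties are fixed in advance, the structure route would look
something like this sketch (the CHAIR structure and its slots are invented
for illustration):

(defstruct chair
  (color nil)
  (owner nil))

(let ((c (make-chair :color 'blue)))
  (setf (chair-owner c) 'john)   ; DEFSTRUCT accessors are setf-able
  (list (chair-color c) (chair-owner c)))
=> (BLUE JOHN)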

> I use it constantly trying to understand the code snippets presented here.
> Between the snippets here, the book and some advice from the news
> group...I've learned a lot.

For what it's worth, ANSI Common Lisp will serve you better for modern lisp
environments.

Sunil

David Steuber The Interloper

Nov 4, 1998
On Tue, 03 Nov 1998 03:57:03 GMT, Dobes Vandermeer
<do...@mindless.com> claimed or asked:

% Nevertheless, it's pretty frustrating learning LISP at first, with such
% poor material, but once you get into the swing of things you can figure
% it out with the help of a book, instructor, or newsgroup.

I've been finding "ANSI Common Lisp" to be pretty steep. Terms seem
to be brought into use without definition. On the other hand, I don't
want to try to find a "Lisp in 21 days" book either. I figure if I
hack at the language and pester people in this group, I will
eventually grok the Lisp.

My impression so far is that Common Lisp is more powerful than my
imagination. I think for a large application, like the one I am
designing, it is better than the bit banging one does in a language
like C++ or, dare I say it, Java.

It took time for me to master C++. I am not a master of Java or Perl,
even though I can do little things in them. I expect Lisp to take
time as well. It is the nature of language. I'm sure I would have
been far better off if I played with Lisp as a young child when my
brain hadn't hardened yet.

--
David Steuber (ver 1.31.2a)
http://www.david-steuber.com
To reply by e-mail, replace trashcan with david.

"Ignore reality there's nothing you can do about it..."
-- Natalie Imbruglia "Don't you think?"

Kent M Pitman

Nov 4, 1998
tras...@david-steuber.com (David Steuber "The Interloper") writes:

> I've been finding "ANSI Common Lisp" to be pretty steep. Terms seem
> to be brought into use without definition.

Try combining this with the HyperSpec glossary. In hardcopy, the glossary
is over 60 pages. In webbed form, it's an intricately interconnected weave
of the terminology that professional lisp programmers use continuously.
It may help to fill the gaps.

> On the other hand, I don't
> want to try to find a "Lisp in 21 days" book either. I figure if I
> hack at the language and pester people in this group, I will
> eventually grok the Lisp.

Hmm, so maybe there is a market opportunity for me to still squeeze in
a Lisp book. I suddenly have some time on my hands (I'll post about
that separately). Then again, first on my list is to finish writing my
scifi novel...

Steve Gonedes

Nov 4, 1998

tras...@david-steuber.com (David Steuber "The Interloper") writes:

< On Tue, 03 Nov 1998 03:57:03 GMT, Dobes Vandermeer
< <do...@mindless.com> claimed or asked:
<
< % Nevertheless, it's pretty frustrating learning LISP at first, with such
< % poor material, but once you get into the swing of things you can figure
< % it out with the help of a book, instructor, or newsgroup.
<
< I've been finding "ANSI Common Lisp" to be pretty steep. Terms seem
< to be brought into use without definition. On the other hand, I don't
< want to try to find a "Lisp in 21 days" book either. I figure if I
< hack at the language and pester people in this group, I will
< eventually grok the Lisp.

I thought Ansi Common Lisp (the book) had too little structure for me;
it seemed to jump around too much. I really liked Lisp 3rd edition, as
it had more structure to it. The book is broken up into three
sections; the first and second sections serve as an intro to Common
Lisp.

I remember section one had a chapter about mapping primitives (such as
mapcar, find-if, remove-if), printing functions, predicates and
conditions (like `if', cond, `or', numberp, symbolp), a small
chapter on file-interface (with-open-file), and wraps up the section
with tips for debugging common-lisp programs (step, trace, break).

The second section talks about arrays, structures (via defstruct),
encapsulation, a chapter on using macros (defmacro), and section 2
also has a very small intro to the common lisp object system (clos).

The third section gets a little difficult, but the authors keep it
exciting by reusing some of the demonstrated tools to solve more
complex problems. For example, the book discusses (step by step) how
to construct a pattern matcher which eventually turns into a unifier.
Rather than end the book, the authors show how unification can be used
to solve problems by forward- and backward-chaining - which is
certainly not "21 day book" material (imho). I thought that the
authors did a fantastic job of presenting complex material in small
enough chunks so that you don't get overwhelmed by indigestion (sorry
for the bad humor - I'm really tired).

The third section talks about a small backward-chaining/prolog like
thing and some other topics I found too difficult. I just re-read Ansi
Common Lisp, and came back to them a tad bit later.

Since you have experience with other programming languages, most of the
discussed topics should come with ease (I imagine that you're familiar
with arrays, objects, strings, etc.).

Some people don't seem to like Lisp 3rd because the last section
discusses fairly complex material, but the majority of the book is
really good - not many books discuss arrays and hashtables (pretty
common things in CL). Oh well... Have fun - and remember to use better
variable names than Graham does (he uses very short, sometimes one
letter identifiers for most variables - hard to follow sometimes).

rusty craine

Nov 4, 1998

Kent M Pitman wrote in message ...

>Hmm, so maybe there is a market opportunity for me to still squeeze in
>a Lisp book. I suddenly have some time on my hands (I'll post about
>that separately). Then again, first on my list is to finish writing my
>scifi novel...

I'll take 10 copies of the lisp book. I offload some of the responses in
this newsgroup to AskSam (a free-form searchable database) to use for
reference. When I query the database for help, I'm always glad to see your
name in the document list of matches. In the four or five months I've been
reading this newsgroup (not knowing anyone's credentials to start with) I
quickly began to trust your judgment and expertise. A few days back one of
the respondents commented that they would pay to read your postings...count
me in with that respondent.

Do consider this: at the local book store there are 8 racks of SF and zero
lisp books. I guess SF pays better.

dang ya gotta do what ya gotta do.
Rusty


Sunil Mishra

Nov 4, 1998
Steve Gonedes <jgon...@worldnet.att.net> writes:

> I thought Ansi Common Lisp (the book) had too little structure for me;
> it seemed to jump around too much. I really liked Lisp 3rd edition, as
> it had more structure to it. The book is broken up into three
> sections, the first and second sections serve as an into. to Common
> Lisp.

I have to admit I haven't seen the book... Learned lisp before it appeared,
and have picked up things since from various gurus, on and off this ng.

> I remember section one had a chapter about mapping primitives (such as
> mapcar, find-if, remove-if), printing functions, predicates and
> conditions (like `if', cond, `or', numberp, symbolp), a small
> chapter on file-interface (with-open-file), and wraps up the section
> with tips for debugging common-lisp programs (step, trace, break).

It's precisely these mapping primitives you have to use with care. It is
much too tempting to write code that does something like:

(remove nil (mapcar #'foo some-list))

In doing this, you would be allocating and deallocating a lot of stuff,
which adds up in the long run. It may also require the construction of an
anonymous function, if you decide to use #'(lambda ...) rather than #'foo.

A generally better thing to do when faced with such a situation is to think
about how you can avoid getting nil's into your list in the first place,
which means either writing a recursive function from scratch, or resorting
to something like (loop ...), which many purists tend to dislike.
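
For instance, a quick sketch (FOO as in the example above) that collects
only the non-NIL results in a single pass:

(loop for x in some-list
      for y = (foo x)
      when y collect y)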

> The second section talks about arrays, structures (via defstruct),
> encapsulation, a chapter on using macros (defmacro), and section 2
> also has a very small intro to the common lisp object system (clos).

Pay attention to this! Arrays, by my reasoning (which means others are free
to correct me if I am completely off base :-) ), are far more space
efficient and easier to recycle if you have a large number of things you
want to maintain in a straight list. Compare the box-and-pointer cons cell
representation to the structure of an array, and you will probably come to
the same conclusion. Structures are generally managed through vectors, and
automagically generate accessors and constructors. Great for abstraction. I
have also decided that CLOS is a must for anyone handling a complex data
management problem. Check out Norvig's Paradigms in AI Programming, and the
book code for Russell & Norvig's Artificial Intelligence: A Modern Approach
to get a better sense for good programming style. For the latter, they had
a huge advantage in dealing with relatively well understood issues. The
former is great for learning how you can use Lisp to solve rather complex
problems fairly efficiently. Of course, a working knowledge of AI issues
makes the books *a lot* easier to deal with.

Alas, other than AI books, I haven't come across that many books that deal
with Lisp coding style. This is one wish I would *love* to see fulfilled.

Sunil

Steve Gonedes

Nov 4, 1998

Sunil Mishra <smi...@hustle.cc.gatech.edu> writes:

< It's precisely these mapping primitives you have to use with care. It is
< much too tempting to write code that does something like:
<
< (remove nil (mapcar #'foo some-list))

I don't think unwise use of memory has anything to do with lisp. This
seems to be more of a general programming issue.

A quick question on mapcar... Does mapcar always allocate a fresh
list? It seems like it would, and various experiments I have done seem
to confirm this. I don't believe that the CL spec specifically states
that "mapcar must make a new list". The reason that I ask is because I
have, on numerous occasions, come to see the function mappend; which
is typically written as,

(defun mappend (fn &rest lists)
  (reduce #'append (apply #'mapcar fn lists)))

If mapcar creates a fresh list, why not use mapcan? Just curious (now
that we are discussing mapcar).
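
For what it's worth, the MAPCAN version would be something like this
sketch:

(defun mappend (fn &rest lists)
  (apply #'mapcan fn lists))

MAPCAN NCONCs the result lists together destructively, which is safe
exactly when FN returns fresh lists.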

< In doing this, you would be allocating and deallocating a lot of stuff,
< which adds up in the long run. It may also require the construction of an
< anonymous function, if you decide to use #'(lambda ...) rather than #'foo.
<
< A generally better thing to do when faced with such a situation is to think
< about how you can avoid getting nil's into your list in the first place,
< which means either writing a recursive function from scratch, or resorting
< to something like (loop ...), which many purists tend to dislike.

I agree with you; but this really has nothing to do with lisp
programming. The functions mapcar and such are very fast and easy to
use, and there is nothing wrong with consing up a list!

< Pay attention to this! Arrays, by my reasoning (which means others
< are free to correct me if I am completely off base :-) ), are far
< more space efficient and easier to recycle if you have a large
< number of things you want to maintain in a straight list.

Arrays are cumbersome and wasteful for storing things which need to be
accessed sequentially; also, I believe that arrays can be slower to
initially allocate (dynamically) than a cons cell or two. On the other
hand, arrays usually take up less space, but (cons 1 2) is probably
equal to #(1 2). I think the best thing about arrays are the quick
access to non-sequential information - they make things like d-trees
possible.

< Compare the box-and-pointer cons cell representation to the
< structure of an array, and you will probably come to the same
< conclusion.

Consider doing this with a couple thousand arrays,

(loop for tail on some-large-array doing (something-silly tail))

Copying arrays is no more wasteful than consing up lists. I understand
what you're saying, but declaring mapping functions and lists in
general to be evil could possibly limit the total number of solutions.

[ I know that copying arrays is fairly fast and inexpensive, and that
there are ways to avoid the aforementioned example - hence no copying is
better than any copying - just trying to make a point. ]

< Structures are generally managed through vectors, and
< automagically generate accessors and constructors. Great for
< abstraction. I have also decided that CLOS is a must for anyone
< handling a complex data management problem. Check out Norvig's
< Paradigms in AI Programming, and the book code for Russell &
< Norvig's Artificial Intelligence: A Modern Approach to get a better
< sense for good programming style. For the latter, they had a huge
< advantage in dealing with relatively well understood issues. The
< former is great for learning how you can use Lisp to solve rather
< complex problems fairly efficiently. Of course, a working knowledge
< of AI issues makes the books *a lot* easier to deal with.

In PAIP there is mention of the pattern matcher used in Lisp 3rd
edition; the difference is that in Lisp 3rd edition, there is step by
step analysis of what is going on. There is also a nice pattern
matcher in Ansi Common Lisp which was interesting as it used
multiple-values. Unfortunately these pattern matchers fail as unifiers
as they were unsound (I think).

I would also argue that Norvig's PAIP is not completely ideal in terms
of efficiency. The first half of the book was pretty good in this
respect, but later chapters seem to shift the focus more towards
methods of "rapid prototyping". For instance, the scheme interpreter
was _*much*_ more efficient using an array with a fill-pointer - but
this really made the peep-hole optimizer a pain in the ass.

In the words of Kent Pitman, "It's a tradeoff".


Lyman S. Taylor

Nov 4, 1998
In article <m27lxbn...@KludgeUnix.com>,
Steve Gonedes <jgon...@worldnet.att.net> wrote:
>
>Sunil Mishra <smi...@hustle.cc.gatech.edu> writes:
>
>< It's precisely these mapping primitives you have to use with care. It is
>< much too tempting to write code that does something like:
><
>< (remove nil (mapcar #'foo some-list))
>
>I don't think unwise use of memory has anything to do with lisp. This
>seems to be more of a general programming issue.

Depends upon what you are doing. If you have some code with
some "real time" constraints then you would likely want to avoid
giving the garbage collector a work out.

>
>A quick question on mapcar... Does mapcar always allocate a fresh
>list? ...
> It seems like it would, and various experiments I have done seem
>to confirm this. I don't believe that the CL spec specifically states
>that "mapcar must make a new list".

In general operations that are destructive are outlined as such
in the standards. Otherwise operations tend to fall within the
functional programming paradigm. If the list arg doesn't change
then the result implicitly consists of a new list.


>The reason that I ask is because I
>have, on numerous occasions, come to see the function mappend; which
>is typically written as,
>
>(defun mappend (fn &rest lists)
>  (reduce #'append (apply #'mapcar fn lists)))
>
>If mapcar creates a fresh list, why not use mapcan? Just curious (now
>that we are discussing mapcar).

Err, why the REDUCE instead of an APPLY of append? REDUCE will
reinvoke APPEND with each intermediary result, which is promptly
"thrown away". Presumably APPEND can avoid all of these intermediate
throwaways by working the &rest list right to left.

MAPCAN creates a new list also. It just doesn't manage to
leave so much intermediate garbage for the collector.
In this context, where you are trying to concatenate together
a bunch of "locally created" (not accessed
anywhere else) lists into a larger result, MAPCAN seems a better
choice.


>< In doing this, you would be allocating and deallocating a lot of stuff,
>< which adds up in the long run. It may also require the construction of an
>< anonymous function, if you decide to use #'(lambda ...) rather than #'foo.

Unless this lambda is a closure, invoking a lambda shouldn't be any more
expensive (with a reasonable amount of optimization) than any function
call.


>< which means either writing a recursive function from scratch, or resorting
>< to something like (loop ...), which many purists tend to dislike.

...


>I agree with you; but this really has nothing to do with lisp
>programming. The functions mapcar and such are very fast and easy to
>use, and there is nothing wrong with consing up a list!

As long as you know you are consing up a list and are using
the functions for ease of expressiveness, leaving
performance tweaking for another time.


>< Pay attention to this! Arrays, by my reasoning (which means others
>< are free to correct me if I am completely off base :-) ), are far
>< more space efficient and easier to recycle if you have

manually recycle, right? :-) If grouped into "working sets"
I'm not so sure that a set of cons cells is not as easy to
recycle as an array.

>
>Arrays are cumbersome and wasteful for storing things which need to be
>accessed sequentially,

Eh? Sequentially walking through an array is going to be much
faster than sequentially walking through a list (barring tricks
like cdr coding, which the programmer has no control over).

Now if you dynamically need to adjust the number of elements in a
collection or mix and match contents, then the list is the
far more flexible dynamic structure.

--

Lyman S. Taylor "The Borg -- party poopers of the Galaxy. "
(ly...@cc.gatech.edu) EMH Doctor Star Trek Voyager.

Sunil Mishra

Nov 4, 1998
Steve Gonedes <jgon...@worldnet.att.net> writes:

> I don't think unwise use of memory has anything to do with lisp. This
> seems to be more of a general programming issue.

It's an issue of understanding a little bit about how the language works,
where inefficiencies accumulate and how you manage them. Yes, in the grand
scheme of things it is a general programming issue. But such details can
quickly become invisible in lisp. Therefore my comment.

The example I gave was fairly simple. Imagine nesting multiple functions,
each of which consed up a new list, just to have that list recycled. I was
garbage collecting quite frequently, and waiting many minutes for the
program to finish. Just sharing the worst of my coding habits :-)

> I agree with you; but this really has nothing to do with lisp
> programming. The functions mapcar and such are very fast and easy to
> use, and there is nothing wrong with consing up a list!

I didn't mean to imply there is anything wrong with consing up a list. I
meant to point out that consing up a list, and immediately throwing it away
to perform a simple operation that could have been done in the process of
consing up the list, is the wrong thing to do. Loop helps at times. At
times it's better to write a clean recursive function from scratch.

> < Pay attention to this! Arrays, by my reasoning (which means others
> < are free to correct me if I am completely off base :-) ), are far
> < more space efficient and easier to recycle if you have a large
> < number of things you want to maintain in a straight list.
>
> Arrays are cumbersome and wasteful for storing things which need to be
> accessed sequentially; also, I believe that arrays can be slower to
> initially allocate (dynamically) than a cons cell or two. On the other
> hand, arrays usually take up less space, but (cons 1 2) is probably
> equal to #(1 2). I think the best thing about arrays are the quick
> access to non-sequential information - they make things like d-trees
> possible.

I'm talking about large quantities of data here. I would never want to
carry around a list of a hundred items, though a list of a dozen or so is
convenient for temporary processing. If you know you have lots of things
you want to collect *and want to always keep together*, then an array might
turn out to be far more economical. I would not call this a rule by any
means. It might be better to use a hash table at times. Depends on the
application. Resizable arrays with fill pointers can work well as a large
stack. You can also create offset arrays, so subarrays are also easy to
handle, in some ways easier than sublists, if you want to keep copying to a
minimum.
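
For instance, a sketch of the fill-pointer-as-stack idea:

(let ((stack (make-array 100 :fill-pointer 0 :adjustable t)))
  (vector-push-extend 'a stack)
  (vector-push-extend 'b stack)
  (vector-pop stack))   ; => B, leaving A on the stack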

> < Compare the box-and-pointer cons cell representation to the
> < structure of an array, and you will probably come to the same
> < conclusion.
>
> Consider doing this with a couple thousand arrays,
>
> (loop for tail on some-large-array doing (something-silly tail))

Well, you can't loop on an array, as far as I remember.

You would have to do something like

(loop for index from 0 below (length array)
      for offset-array = (make-some-offset-array array index)
      do (something-silly offset-array))

Not too shabby, I'd say, if constructing a fresh array is necessary. If
not, delimiting indices are more convenient yet.

> Copying arrays is no more wasteful than consing up lists. I understand
> what you're saying, but declaring mapping functions and lists in
> general to be evil could possibly limit the total number of solutions.

I'm not saying they are evil. I'm simply pointing out that carefully using
the data structures lisp provides can lead to cleaner programs.

> < Structures are generally managed through vectors, and
> < automagically generate accessors and constructors. Great for
> < abstraction. I have also decided that CLOS is a must for anyone
> < handling a complex data management problem. Check out Norvig's
> < Paradigms in AI Programming, and the book code for Russell &
> < Norvig's Artificial Intelligence: A Modern Approach to get a better
> < sense for good programming style. For the latter, they had a huge
> < advantage in dealing with relatively well understood issues. The
> < former is great for learning how you can use Lisp to solve rather
> < complex problems fairly efficiently. Of course, a working knowledge
> < of AI issues makes the books *a lot* easier to deal with.
>
> In PAIP there is mention of the pattern matcher used in Lisp 3rd
> edition; the difference is that in Lisp 3rd edition, there is step by
> step analysis of what is going on. There is also a nice pattern
> matcher in Ansi Common Lisp which was interesting as it used
> multiple-values. Unfortunately these pattern matchers fail as unifiers
> as they were unsound (I think).

:-) I read a report at some point. I can't remember who it was, but someone
had gone through unifiers presented in about five different AI texts, and
found that only one of them was correctly implemented. I have not studied
the example you are referring to, so I can't comment.

> I would also argue that Norvig's PAIP is not completely ideal in terms
> of efficiency. The first half of the book was pretty good in this
> respect, but later chapters seem to shift the focus more towards
> methods of "rapid prototyping". For instance, the scheme interpreter
> was _*much*_ more efficient using an array with a fill-pointer - but
> this really made the peep-hole optimizer a pain in the ass.
>
> In the words of Kent Pitman, "It's a tradeoff".

Agreed :-) But nevertheless a good book for learning advanced materials.

Sunil

David Steuber The Interloper

Nov 5, 1998
On 4 Nov 1998 09:03:35 GMT, Steve Gonedes <jgon...@worldnet.att.net>
claimed or asked:

% Some people don't seem to like Lisp 3rd because the last section
% discusses fairly complex material, but the majority of the book is
% really good - not many books discuss arrays and hashtables (pretty
% common things in CL). Oh well... Have fun - and remember to use better
% variable names than Graham does (he uses very short, sometimes one
% letter identifiers for most variables - hard to follow sometimes).

Your recommendation is appreciated, but I've really blown my budget on
books for a while.

Deciding the best way to organize a programming text is probably an
extremely difficult task. I've always felt that "The C Programming
Language" by Kernigan and Ritchie was the best language book ever
written. I still feel that way. However, I bet there are people who
would disagree with me.

I'm not saying there aren't other good books. My experience is that
it is easiest to understand the things that are most similar to the
things you already understand. Prior knowledge is a function of
experience. No two people can have the same experience, so there are
subtle differences in the way people think and deal with new
information.

The one thing that is true of every language is that it has to be read
and written in to learn. So with Lisp, I will start with doing little
things. Iteration is an important concept in C. So I will play with
different iteration constructs so that I can do the equivalent of C's
for, while, and do while. Then there are other constructs like do
until which seems to be the do in Lisp.

Another thing I do in C is break out of loops or continue a loop under
some condition (break and continue).

Procedural programming is another model used in C. In that respect,
there are some similarities with Lisp. For example, functions may be
called just for their return value. They may be called for a side
effect. This has good correspondence in Lisp. Lisp even has function
pointers for passing as parameters to a function.

Both C and Lisp support lexical scoping. Lisp also supports dynamic
scoping. That feature can be emulated in C, but it is not natural to
program that way. I expect I will have a little bit of difficulty
finding when dynamic scoping is the better way.
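
A sketch of dynamic scoping via special variables (*LEVEL*, REPORT, and
DEMO are invented for illustration):

(defvar *level* 0)   ; DEFVAR proclaims *LEVEL* special

(defun report () (print *level*))

(defun demo ()
  (report)             ; prints 0
  (let ((*level* 1))   ; rebinding is visible to everything DEMO calls
    (report))          ; prints 1
  (report))            ; prints 0 again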

I also play with Perl. It has foreach as an iteration primitive.
Lisp has dolist. I don't know if there is a general for-each in lisp
that covers any arbitrary data structure. That can be implemented in
C++, but it is something that requires containers to support
functionality to help it along. It would be nice to have a for-each
macro that will do strings and arrays as well as lists. If it doesn't
exist, then it would be a worthwhile exercise to write one.

I sense that I am drifting around here. Perhaps it would have helped
to have a point. Oh yes. Other languages. I've had them. I've
learned some of the basic data structures and algorithms. Learning to
use them in Lisp is just another step.

I can't wait to be able to take advantage of the constructs that make
Lisp Lisp. It really is something of an adventure.

David Steuber The Interloper

Nov 5, 1998
On 04 Nov 1998 13:01:13 -0500, Sunil Mishra
<smi...@hustle.cc.gatech.edu> claimed or asked:

% Alas, other than AI books, I haven't come across that many books that deal
% with Lisp coding style. This is one wish I would *love* to see fulfilled.

I'm not really sure what is going on in AI circles these days. My
application for Lisp certainly is not in that lofty atmosphere. The
language supports constructs that are quite useful for mundane
programming tasks. I wouldn't consider Emacs and its derivatives to
have anything to do with AI. However, they are quite flexible and
customizable. That is what I am going for, and Lisp seems to be the
best language to do that.

Lisp understands itself out of the box. C++ does not. Java does not.
Perl does, but it is just a scripting language for quick and dirty
work. The fact that Lisp understands itself means I don't have to
create my own scripting language or extension language. It is already
there. Language design is harder than application design. I would
rather do the easier part.

Pierre Mai

Nov 5, 1998
tras...@david-steuber.com (David Steuber "The Interloper") writes:

> Another thing I do in C is break out of loops or continue a loop under
> some condition (break and continue).

There are two reasons that break & continue are used so often in C:

a) Because C has a "broken" case (=switch) statement, and so break has
to be used to end case-clauses.
b) Because wrapping loop-bodies in ifs is tedious in C, whereas using
when/unless in lisp is quite easy and uses much less screen
real-estate.

Depending on the context, there are better and more idiomatic
solutions in Lisp (a short sketch follows the list):

- Just using normal when/unless/if
- case & cond
- Use smaller loop bodies by abstracting out new functions and macros
- loop + when/unless/if-clauses (NOT the normal when/unless/if
operators! See loop in the CLHS)
- block + return/return-from
- For state-machines using tagbody and go can be quite nice
- ...
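
For instance, a quick sketch (INTERESTING-P, STOP-P, and PROCESS are
invented placeholders):

;; "continue" is usually just WHEN around the rest of the body:
(dolist (x list)
  (when (interesting-p x)
    (process x)))

;; "break" is RETURN, which exits the enclosing DOLIST:
(dolist (x list)
  (when (stop-p x)
    (return x))
  (process x))

;; or LOOP's own clauses:
(loop for x in list
      until (stop-p x)
      when (interesting-p x)
        do (process x))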

> I also play with Perl. It has for each as an iteration primitive.
> Lisp has do-list. I don't know if there is a general for-each in lisp
> that covers any arbitrary data structure. That can be implemented in
> C++, but it is something that requires containers to support
> functionality to help it along. It would be nice to have a for-each
> macro that will do strings and arrays as well as lists. If it doesn't
> exist, then it would be a worthwhile exercise to write one.

loop does support looping over vectors, hash-tables and packages as
well as over lists. But it uses different keywords for each type, so
you will have to change the iteration line in your loop when you
change the type...
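
A quick sketch of the differing keywords (TABLE is assumed to be bound
to a hash table):

(loop for x in '(1 2 3) sum x)        ; lists use IN
(loop for x across #(1 2 3) sum x)    ; vectors use ACROSS
(loop for k being the hash-keys of table
      using (hash-value v)
      collect (cons k v))             ; hash tables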

Then there is map, which supports mapping over general sequences. But
beware: Some implementations have very bad map implementations,
amongst them CMU CL, which take O(n^2) for mapping over lists[1], and
generally cons a lot. ACL5.0 is much better in this respect.

> I can't wait to be able to take advantage of the constructs that make
> Lisp Lisp. It really is something of an adventure.

How true!

Regs, Pierre.

Footnotes:
[1] One could argue that long lists are stupid/slow anyway, and so
this isn't really earth-shattering news.

BTW: Here is a simplistic implementation of for-each and the more
general for-each* using map. Doing something more efficient is left
as an exercise to the reader (using loop is an option, or going
directly to the bare metal).
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
. SPOILER
.
.
.
.
.
.
.
.
.
.
.
.
.
.

;;; You could also call the for-each doseq/dosequence, which is more lispy.
;;; For for-each* I don't know a very nice name. Maybe seq-let ;)

(defmacro for-each ((var sequence) &body body)
  "FOR-EACH (Var Sequence) Declaration* Form*
Loops over body with the symbol `var' bound to each element of the sequence.
Returns nil."
  `(map nil (function (lambda (,var) ,@body)) ,sequence))

(defmacro for-each* (bindings &body body)
  "FOR-EACH* ({(Var Sequence)}+) Declaration* Form*
Loops over body with the symbols `var' bound to each element of the
corresponding sequences. The variables are bound in parallel.
Returns nil."
  (loop for (var seq) in bindings
        collect var into var-list
        collect seq into seq-list
        finally (return
                  `(map nil (function (lambda (,@var-list) ,@body))
                        ,@seq-list))))

#|
;;; Examples:

* (for-each (x (list 'a 'b 'c)) (print x))
A
B
C
NIL
* (for-each* ((x (list 1 2 3)) (y (vector 3 2 1))) (print (+ x y)))
4
4
4
NIL

;;; If you tell your editor that for-each* is like let, and for-each like
;;; dolist, you will get nice indentation. For Emacs put this in your
;;; lisp-mode-hook:

(put 'for-each 'lisp-indent-function 1)
(put 'for-each* 'lisp-indent-function 1)

(defun demo (x list)
  (for-each (x list)
    (princ x)
    (terpri))
  (for-each* ((x list)
              (y (reverse list)))
    (princ x)
    (princ y)
    (terpri))
  (princ x)
  (terpri))
|#

--
Pierre Mai <pm...@acm.org> http://home.pages.de/~trillian/
"Such is life." -- Fiona in "Four Weddings and a Funeral" (UK/1994)

Barry Margolin

Nov 5, 1998
In article <71qm5u$h...@pravda.cc.gatech.edu>,
Lyman S. Taylor <ly...@cc.gatech.edu> wrote:
> Err, why the REDUCE instead of an APPLY of append? REDUCE will
> reinvoke APPEND with each intermediary result, which is promptly
> "thrown away". Presumably APPEND can avoid all of these intermediate
> throwaways by working the &rest list right to left.

One reason why REDUCE is often used in favor of APPLY is that REDUCE
doesn't have a limit on the number of entries in the list, but APPLY may be
limited to CALL-ARGUMENTS-LIMIT total arguments in the underlying call.

However, in the case of APPEND, the performance impact of REDUCE can be
severe. A single call to APPEND should only need to traverse and copy each
argument once, so it should be O(n). REDUCE of APPEND, on the other hand,
will end up being O(n^2) in both space and time (although an ephemeral GC
should reduce the impact of the space use). Conceivably, an SSC
(Sufficiently Smart Compiler) could recognize REDUCE of APPEND and optimize
all the intermediate stuff away, but this level of optimization is more
than I generally expect and I don't program for it (the same could be said
of the LOOP calling NTH in another thread).
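
To make the asymmetry concrete, a sketch with four lists A, B, C, and D:

;; (reduce #'append (list a b c d)) builds, left to right,
;;   (append (append (append a b) c) d)
;; so A's cells are copied three times, B's twice, and C's once, while
;; (apply #'append (list a b c d)) copies each list at most once.
;; REDUCE with :FROM-END T gets the one-copy behavior back, since the
;; right fold (append a (append b (append c d))) shares each tail.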

lrphill

Nov 5, 1998
Tim Bradshaw wrote:

>Something like Paul Graham's `ANSI Common Lisp' book is
>*much* better.

Want to "second the emotion" re Graham's book and also recommend his
other book "On Lisp: Advanced Techniques for Common Lisp" as being full
of good strategy and tactics.

LRP

lrphill

Nov 5, 1998
David Steuber wrote:

>I don't want to try to find a "Lisp in 21 days" book
>either. I figure if I hack at the language and pester
>people in this group, I will eventually grok the Lisp.

I remember that back in the early 80's (?) Douglas Hofstadter wrote a
series of articles for Scientific American called "Metamagical Themas."
Embedded in it was a sub-series (like 6-8 articles) about Lisp. It was a
pretty good gentle introduction (recursion, Towers of Hanoi, etc.). But
I don't know where to get them. I even wrote to Hofstadter, who informed
me that no, they weren't anywhere on line. I don't even have paper
copies of them. Anybody have them?

LRP

Andrew Shalit

Nov 5, 1998
lrphill <lrp...@sandia.gov> writes:

> I remember that back in the early 80's (?) Douglas Hofstadter wrote a
> series of articles for Scientific American called "Metamagical Themas."
> Embedded in it was a sub-series (like 6-8 articles) about Lisp. It was a
> pretty good gentle introduction (recursion, Towers of Hanoi, etc.). But
> I don't know where to get them. I even wrote to Hofstadter, who informed
> me that no, they weren't anywhere on line. I don't even have paper
> copies of them. Anybody have them?

I'd check your local library, or local university library. I know
that Cabot Library at Harvard, for example, has a fairly complete
set of Scientific American back issues, complete with year-to-year
indexes.

Richard Hoskins

Nov 5, 1998
lrphill <lrp...@sandia.gov> writes:

> I remember that back in the early 80's (?) Douglas Hofstadter wrote
> a series of articles for Scientific American called "Metamagical
> Themas." Embedded in it was a sub-series (like 6-8 articles) about
> Lisp. It was a pretty good gentle introduction (recursion, Towers of
> Hanoi, etc.). But I don't know where to get them. I even wrote to
> Hofstadter, who informed me that no, they weren't anywhere on
> line. I don't even have paper copies of them. Anybody have them?

Hofstadter's Metamagical Themas columns were reprinted in 1989, and
are currently available in a Basic Books edition. There are three
articles, IIRC, about Lisp. They won't serve as a good tutorial for
Common Lisp, but are an interesting read.

_Metamagical Themas : Questing for the Essence of Mind and Pattern_
Douglas Hofstadter
Basic Books; ISBN: 0465045669
--
party naked

Erik Naggum

Nov 6, 1998
* lrphill <lrp...@sandia.gov>

| I remember that back in the early 80's (?) Douglas Hofstadter wrote a
| series of articles for Scientific American called "Metamagical Themas."
| Embedded in it was a sub-series (like 6-8 articles) about Lisp. It was a
| pretty good gentle introduction (recursion, Towers of Hanoi, etc.). But
| I don't know where to get them. I even wrote to Hofstadter, who informed
| me that no, they weren't anywhere on line. I don't even have paper
| copies of them. Anybody have them?

upon question, my bookshelves returned with Douglas R. Hofstadter:
"Metamagical Themas, Questing for the Essence of Mind and Pattern, an
Interlocked Collection of Literary, Scientific and Artistic Studies".
it's a Penguin paperback, acquisition date 1987-06-01, copyright Basic
Books, Inc, 1985. the ISBN is probably useless, but is 0-14-008534-3.

books.com and amazon.com both report it in stock as a trade paperback
March 1996 _reissue_ at USD 20 (retail USD 25). ISBN 0-465-04566-9.

it contains three articles on Lisp: Atoms and Lists; Lists and Recursion;
Recursion and Generality; encompassing 59 pages out of 880 total.

Pierre Mai

Nov 6, 1998
lrphill <lrp...@sandia.gov> writes:

> I remember that back in the early 80's (?) Douglas Hofstadter wrote a
> series of articles for Scientific American called "Metamagical Themas."
> Embedded in it was a sub-series (like 6-8 articles) about Lisp. It was a
> pretty good gentle introduction (recursion, Towers of Hanoi, etc.). But
> I don't know where to get them. I even wrote to Hofstadter, who informed
> me that no, they weren't anywhere on line. I don't even have paper
> copies of them. Anybody have them?

Douglas Hofstadter re-released his whole "Metamagical Themas" series
(and then some) in book form. In the UK it is[1] available from Penguin
as a paperback. Have a look on Amazon.com under Hofstadter, Douglas
R.: ``Metamagical Themas: Questing for the Essence of Mind and
Pattern''.

Regs, Pierre.

Footnotes:
[1] At least it was some four years ago. Haven't checked recently
for geographical reasons... ;)

Tim Bradshaw

Nov 6, 1998
* lrphill wrote:
> I remember that back in the early 80's (?) Douglas Hofstadter wrote a
> series of articles for Scientific American called "Metamagical Themas."
> Embedded in it was a sub-series (like 6-8 articles) about Lisp. It was a
> pretty good gentle introduction (recursion, Towers of Hanoi, etc.). But
> I don't know where to get them. I even wrote to Hofstadter, who informed
> me that no, they weren't anywhere on line. I don't even have paper
> copies of them. Anybody have them?

There was a book with them in, and the other articles he wrote. I
think it was called `Metamagical Themas' too, I probably have it
somewhere and could find ISBN &c.

--tim

Rob Warnock

Nov 7, 1998
Barry Margolin <bar...@bbnplanet.com> wrote:
+---------------
| One reason why REDUCE is often used in favor of APPLY is that REDUCE
| doesn't have a limit on the number of entries...
|
| However, in the case of APPEND, the performance impact of REDUCE can be
| severe. A single call to APPEND should only need to traverse and copy each
| argument once, so it should be O(n). REDUCE of APPEND, on the other hand,
| will end up being O(n^2) in both space and time...
+---------------

Sounds like there's a need for a REDUCE that takes another (keyword?) arg
to say how many things to pass at once to the function you're REDUCE'ing
the list on, sort of like the "xargs" program in Unix, where:

% find {path}... {predicates}... -print | xargs foo

will never try to call a single invocation of "foo" with more args
[in either number or total character count] than the system limit, even
though there may be many more than that generated by the "find".

With the equivalent refinement to REDUCE, you could pass large (but not
*too* large) "gulps" to APPEND at once. Maybe something like this:

(reduce #'append a-really-big-list &max-args CALL-ARGUMENTS-LIMIT)

Or if you knew something about the way a specific compiler generated code,
e.g., that there was a limit on the number of args that could be passed
in registers, you might set "&max-args" lower:

(reduce #'+ a-really-big-list-of-nums &max-args MAX-REGISTER-ARGS)

[The default for &max-args should of course be 2, for backwards compat.]


-Rob

-----
Rob Warnock, 8L-855 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
2011 N. Shoreline Blvd. FAX: 650-964-0811
Mountain View, CA 94043 PP-ASEL-IA

Steve Gonedes

Nov 8, 1998

rp...@rigden.engr.sgi.com (Rob Warnock) writes:

< Barry Margolin <bar...@bbnplanet.com> wrote:
< +---------------
< | One reason why REDUCE is often used in favor of APPLY is that REDUCE
< | doesn't have a limit on the number of entries...
< | However, in the case of APPEND, the performance impact of REDUCE can be
< | severe. A single call to APPEND should only need to traverse and copy each
< | argument once, so it should be O(n). REDUCE of APPEND, on the other hand,
< | will end up being O(n^2) in both space and time...
< +---------------
<
< Sounds like there's a need for a REDUCE that takes another (keyword?) arg
< to say how many things to pass at once to the function you're REDUCE'ing
< the list on, sort of like the "xargs" program in Unix, where:
<
< % find {path}... {predicates}... -print | xargs foo

(defun mappend (fn &rest lists)
  (reduce #'append (apply #'mapcar fn lists) :FROM-END T))
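
(With :FROM-END T the fold becomes (append l1 (append l2 ... ln)), so each
intermediate result is shared rather than re-copied: every input list is
copied at most once, the same O(n) behavior as a single APPEND call.)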


Dobes Vandermeer

Nov 8, 1998
Erik Naggum wrote:
>
> * Dobes Vandermeer <do...@mindless.com>
> | I'll be honest... I've been to both places, and both are completely
> | hopeless attempts to describe the language.
>
> | Nevertheless, it's pretty frustrating learning LISP at first, with such
> | poor material, but once you get into the swing of things you can figure
> | it out with the help of a book, instructor, or newsgroup.
>

> sigh. for me, it's more than just "pretty frustrating" to have to listen
> to whining losers who can't read technical literature to learn something
> highly tehcnical in nature, and who blame the literature, not themselves.

It's even more frustrating to listen to whining losers attempt to flame
others for having a difference of opinion. I am sorry if my
individuality offends you.. but please take your crap to alt.flames

CU
Dobes

Bill Coderre

unread,
Nov 10, 1998, 3:00:00 AM11/10/98
to
Compare and contrast the following two statements:

1) It is a poor craftsman who blames the tools
2) It is a poor teacher who blames the students


(Personally, I blame the Internet because no matter how often you post a
list of good books, people will have missed it and request it again.)

Three GREAT Lisp books:
"On Lisp" by Graham (small and really damn good. Not as small and good as
Kernighan and Ritchie, but definitely leaning in that direction.)

"Paradigms of AI Programming" by Norvig (big and beefy with lots of examples)

"Concrete Abstractions" by Max Hailperin et al. (This last one is really
about Scheme, not Lisp, and my friend wrote it, but it's a cool book, so
you should check it out. You can pre-order it at www.amazon.com now, and
also read a description by the author. It should be relatively inexpensive
when it does ship.)

bc

Thomas A. Russ

unread,
Nov 12, 1998, 3:00:00 AM11/12/98
to

Steve Gonedes <jgon...@worldnet.att.net> writes:
>
> Arrays are cumbersome and wasteful for storing things which need to be
> accessed sequentially; also, I believe that arrays can be slower to
> initially allocate (dynamically) than a cons cell or two. On the other
> hand, arrays usually take up less space, but (cons 1 2) is probably
> equal in size to #(1 2).

In most lisp implementations, the list '(1 2) will take up less space
than the array #(1 2), because arrays have some overhead for storing
type information (which usually takes up an additional machine word on
everything except lisp machines) and information about the number of
dimensions and their limits. This will often be 3 to 5 machine words
for one-dimensional arrays.

Conses, on the other hand, typically have no overhead beyond the storage
space for the pointers to their values. [Type codes on stock hardware
are embedded in the pointers to conses or handled in some other way that
doesn't require an additional word.] True lists need one additional
word to hold the (hidden) pointer to NIL in the last cons.

Therefore you would often get the following space requirements:

(1 . 2)   2 words
(1 2)     3 words
#(1 2)    5 to 7 words   (2 words + overhead of 3 to 5)

The upshot of this is that in most lisp implementations you will have
the following space requirements:

List      N + 1 words
Vector    N + C words where C is the array overhead.
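
If you want to check what a given implementation actually does, TIME's
allocation report (where the implementation provides one) gives a rough
empirical answer. A sketch:

(defun compare-allocation (n)
  ;; Rough empirical check of the numbers above: many Lisps print bytes
  ;; consed as part of TIME's report.  Exact figures (and whether the
  ;; compiler elides the dead allocation) are implementation-dependent.
  (time (make-list n :initial-element 0))
  (time (make-array n :initial-element 0))
  (values))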


--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

Duane Rettig

unread,
Nov 12, 1998, 3:00:00 AM11/12/98
to
t...@sevak.isi.edu (Thomas A. Russ) writes:

> Steve Gonedes <jgon...@worldnet.att.net> writes:
> >
> > Arrays are cumbersome and wasteful for storing things which need to be
> > accessed sequentially; also, I believe that arrays can be slower to
> > initially allocate (dynamically) than a cons cell or two. On the other
> > hand, arrays usually take up less space, but (cons 1 2) is probably
> > equal in size to #(1 2).

This is true in Allegro CL.

> In most lisp implementations, the list '(1 2) will take up less space
> than the array #(1 2), because arrays have some overhead for storing
> type information (which usually takes up an additional machine word on
> everything except lisp machines) and information about the number of
> dimensions and their limits. This will often be 3 to 5 machine words
> for one-dimensional arrays.

This may be true for lisp implementations on lisp machines, but it is not
true for Allegro CL. Other stock-hardware vendors will have to comment
on their own implementations. In Allegro CL, a simple-array has one
word of overhead.

> Conses on the other hand, typically have no overhead beyond the storage
> space for the pointers to their values. [Type codes on stock hardware
> are embedded in the pointers to conses or handled in some other way that
> doesn't require an additional word].

This is true in Allegro CL; conses are two words each.

> True lists have the need for one
> additional word to hold the (hidden) pointer to NIL in the last cons.

This would be true for lisp machines where cdr-coding is possible
(practical), but I don't know of any stock-hardware lisp
implementations which implement cdr-coding (I would of course be
very interested in hearing of any that do, along with the experiences
of profiling such an implementation). In Allegro CL, lists cost two
words per element.

> Therefore you would often get the following space requirements:
>
> (1 . 2) 2 words

Yes

> (1 2) 3 words

4 words in Allegro CL

> #(1 2) 5 to 7 words (2 words + overhead of 3 to 5)

4 words in Allegro CL (3 words rounded up to the nearest 2-word
boundary, for tagging purposes).

> The upshot of this is that in most lisp implementations you will have
> the following space requirements:
>
> List N + 1 words

N * 2 in Allegro CL

> Vector N + C words where C is the array overhead.

N + 1 [+ 1] in Allegro CL

>
> --
> Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Dave Pearson

unread,
Nov 13, 1998, 3:00:00 AM11/13/98
to
On 12 Nov 1998 09:40:24 -0800, Thomas A. Russ <t...@sevak.isi.edu> wrote:

[Pardon me for butting in; as a lurker and Lisp dabbler, this one caught
my eye.]

> Therefore you would often get the following space requirements:
>
> (1 . 2) 2 words
> (1 2) 3 words

Given that you'd said 2 words for a cons, shouldn't the above be:

(1 . 2) 2 words
(1 2) 4 words

Isn't the list (1 2) two conses? As in:

(1 . (2 . NIL))

Or am I missing something?

--
Take a look in Hagbard's World: | w3ng - The WWW Norton Guide reader.
http://www.acemake.com/hagbard/ | eg - Norton Guide reader for Linux.
http://www.hagbard.demon.co.uk/ | weg - Norton Guide reader for Windows.
Free software, including........| dgscan - DGROUP scanner for Clipper.


Tim Bradshaw

unread,
Nov 13, 1998, 3:00:00 AM11/13/98
to
* Thomas A Russ wrote:
> The upshot of this is that in most lisp implementations you will have
> the following space requirements:

> List N + 1 words

2N words. Each cons cell has a car & cdr pointer, unless it's cdr
coded (do any implementations cdr code now?). And lists often have
rotten locality, which can cost you a lot (copying GCs should make
this better over time; I don't know if they really do in practice).

> Vector N + C words where C is the array overhead.

right.

--tim

Thomas A. Russ

unread,
Nov 13, 1998, 3:00:00 AM11/13/98
to
davep...@hagbard.demon.co.uk (Dave Pearson) writes:

>
> On 12 Nov 1998 09:40:24 -0800, Thomas A. Russ <t...@sevak.isi.edu> wrote:
> > Therefore you would often get the following space requirements:
> >
> > (1 . 2) 2 words
> > (1 2) 3 words
>
> Given that you'd said 2 words for a cons, shouldn't the above be:
>
> (1 . 2) 2 words
> (1 2) 4 words
>
> Isn't the list (1 2) two conses? As in:
>
> (1 . (2 . NIL))
>
> Or am I missing something?

No, you are correct.

I answered too quickly and miscounted the conses. Oops.

Excursion: Although I wasn't thinking of it when I made my initial
response, there was a technique called cdr-coding that allowed more
space-efficient storage of lists by laying them out more like arrays.
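
A toy sketch of the idea (the cells are modeled with conses purely for
readability; real cdr-coding packs a 2-bit code into the word itself,
and no stock-hardware CL works this way):

(defconstant +cdr-next+ 0)  ; the cdr is the cell at the next index
(defconstant +cdr-nil+ 1)   ; the cdr is NIL, i.e. end of list

(defun pack-list (elements)
  ;; Lay an N-element list out like an array: N coded cells instead of
  ;; N two-word conses chained by pointers.
  (let* ((n (length elements))
         (cells (make-array n)))
    (loop for e in elements
          for i from 0
          do (setf (aref cells i)
                   (cons (if (= i (1- n)) +cdr-nil+ +cdr-next+) e)))
    cells))

(defun packed-cdr (cells i)
  ;; Return the index of the next cell, or NIL at the end of the list.
  ;; Note that CDR must inspect the cell's code before it knows whether
  ;; a next cell exists -- the run-time cost of the space savings.
  (unless (eql (car (aref cells i)) +cdr-nil+)
    (1+ i)))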

j...@rebol.com

unread,
Nov 13, 1998, 3:00:00 AM11/13/98
to
t...@sevak.isi.edu (Thomas A. Russ) writes:

> Excursion: Although I wasn't thinking of it when I made my initial
> response, there was a technique called cdr-coding that allowed a more
> space efficient storage of lists by laying them out more like arrays.


Cdr coding trades off address space (it requires 2 bits per pointer
regardless of whether the pointer is a cons), and performance (calls
to CDR must call CAR first to see if the CDR exists), for improved
space efficiency (CDR-coded lists can be up to half the size).

We did an analysis at LMI in 1985 and decided that the tradeoff wasn't
worth it anymore. Modern systems have a lot more memory, and modern lisps
support vectors, arrays, and defstructs. If I remember correctly, a
typical Lisp machine band had only about 20% conses and less than half
of them could take advantage of CDR codes.

CDR coding does have the one feature that you can easily represent the
stack as a CDR coded list. This was used to advantage on the Lisp
Machine for handling &rest arguments and a couple of other quirky hacks.

~jrm
