
Short question: the letter n


srs

Apr 26, 2000
A lot of the destructive operations in lisp start with the letter "n".
(e.g. nreverse nsubstitute). What does this letter stand for?


Sent via Deja.com http://www.deja.com/
Before you buy.

Michael Hudson

Apr 26, 2000
srs <sr...@my-deja.com> writes:

> A lot of the destructive operations in lisp start with the letter "n".
> (e.g. nreverse nsubstitute). What does this letter stand for?

"non-consing". I'm sure this is in ctlt2 somewhere, but I can't find
it quickly.

Cheers,
M.

--
34. The string is a stark data structure and everywhere it is
passed there is much duplication of process. It is a perfect
vehicle for hiding information.
-- Alan Perlis, http://www.cs.yale.edu/~perlis-alan/quotes.html

Erik Naggum

Apr 26, 2000
* srs <sr...@my-deja.com>
| A lot of the destructive operations in lisp start with the letter "n".
| (e.g. nreverse nsubstitute). What does this letter stand for?

non-consing. "consing" here is the basic allocation operation.

don't think of these operations as "destructive". think of them as
returning a value just like their consing counterparts, except that
you know that you won't use the argument you gave them again. then
don't ever use the argument you gave them, no matter how much some
people tell you that you _could_.

true "destructive" operations include rplaca and rplacd.

#:Erik

Johan Kullstam

Apr 26, 2000
Erik Naggum <er...@naggum.no> writes:

i think the lisp nomenclature is a bit weak in distinguishing the two cases.

in the former (represented by nreverse), you allow wanton destruction
of certain arguments in order to (hopefully) buy speed.
nreverse does not *have* to trash its arg but it could. therefore,
you cannot rely on the value of something passed to nreverse.
nreverse has a certain non-deterministic freedom to do whatever it
wants with the arg passed to it.

in the latter (represented by rplaca), you want a particular
action applied to the argument. this action modifies the argument and
is therefore "destructive", but it is also "constructive" in that you
have a required result upon the argument. the side-effect is the
whole point of rplaca.

in the former you don't care what happens to the arg, in the latter
you care very much. you don't know what nreverse will do to its arg,
you *do* know what rplaca will do.
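
to illustrate (a throwaway sketch -- the second return value below
is implementation-dependent):

  (let ((x (list 1 2 3)))
    (rplaca x 0)                ; guaranteed effect on X
    x)
  => (0 2 3)

  (let ((y (list 1 2 3)))
    (values (nreverse y) y))
  => (3 2 1), (1)              ; typical, but the second value is
                               ; nothing you may rely on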

most of the n- forms are in the former camp.

both are commonly called "destructive" in the literature since they
(might) modify some of their arguments.

i wish there were two words to describe the distinction since in my
mind they really are two different things.

--
johan kullstam l72t00052

David Bakhash

Apr 26, 2000
Johan Kullstam <kull...@ne.mediaone.net> writes:

> i wish there were two words to describe the distinction since in my
> mind they really are two different things.

I'll give my $0.02 here.

the only thing I consider when dealing with destructive functions
(destructive in the broader sense of the word) is whether or not I
know just how it will affect the data. In other words, you _know_ how
rplaca and rplacd affect a cons, but it's undefined how, say, nreverse
will do so. Anyway, I never use rplac[ad] directly, but rather the
(setf c[ad]r) forms.

As you probably know already, CL really doesn't have a single
convention for destructive functions (like appending a `!'). But it's
easy enough to deal with the issue.

I've seen a set of macros that lets you always get to the destructive
versions of functions for CL by calling (! <function_name>). So, for
example, you wouldn't have to remember that the destructive version of
reverse is nreverse, and that the destructive version of remove is
delete; instead, you'd call them like this:

(! reverse) == nreverse
(! remove) == delete

so you can do something like:

(setq list-of-numbers ((! remove) 3 list-of-numbers))

etc.
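
A minimal sketch of how such a macro might be written (the name and
the table are illustrative, not from any actual library; and since
the car of a form must be a symbol in standard CL, the sketch calls
through FUNCALL):

(defparameter *destructive-counterparts*
  '((reverse . nreverse)
    (remove . delete)
    (append . nconc)
    (substitute . nsubstitute)))

(defmacro ! (name)
  "Expand to the destructive counterpart of NAME."
  (let ((entry (assoc name *destructive-counterparts*)))
    (unless entry
      (error "No destructive counterpart known for ~S" name))
    `(function ,(cdr entry))))

(setq list-of-numbers (funcall (! remove) 3 list-of-numbers))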

But who cares. It's all just minor prettification of the language.

dave

Scott L. Burson

Apr 26, 2000
Erik Naggum wrote:
>
> * srs <sr...@my-deja.com>
> | A lot of the destructive operations in lisp start with the letter "n".
> | (e.g. nreverse nsubstitute). What does this letter stand for?
>
> non-consing. "consing" here is the basic allocation operation.

Hmm. My understanding has always been that it was intended to suggest
operations that ran in linear time ("O(n)").

This doesn't really make sense, because REVERSE, and (I think, off the cuff)
most of the other consing versions of these operations also run in linear time
-- it's just a much longer linear time because of the cost of first creating and
later GC-ing the new conses.

Nevertheless, that is what I recall learning on the knee of Bernie Greenberg or
Gerry Sussman or someone like that.

Whom could we ask to get the real scoop? Guy Steele, perhaps? John McCarthy?

-- Scott

Erik Naggum

Apr 26, 2000
* "Scott L. Burson" <SBu...@my-dejanews.com>

| My understanding has always been that it was intended to suggest
| operations that ran in linear time ("O(n)").

time to scratch your understanding...

| This doesn't really make sense, because REVERSE, and (I think, off
| the cuff) most of the other consing versions of these operations
| also run in linear time -- it's just a much longer linear time
| because of the cost of first creating and later GC-ing the new
| conses.

you're quite mistaken on this, too. the non-consing versions do not
(necessarily) run faster for a number of really hairy reasons. they
don't cons. that's all. GC'ing dead objects doesn't take time in a
whole bunch of GC implementations -- e.g., it takes time to keep
objects _alive_ from generation to generation in a generational GC.

| Whom could we ask to get the real scoop? Guy Steele, perhaps? John
| McCarthy?

just trust me. :)

#:Erik

Tim Bradshaw

Apr 26, 2000
* Scott L Burson wrote:

> This doesn't really make sense, because REVERSE, and (I think, off the cuff)
> most of the other consing versions of these operations also run in linear time
> -- it's just a much longer linear time because of the cost of first creating and
> later GC-ing the new conses.

No, it may well not be significantly longer linear time. Picking up
dead objects is essentially free for a copying GC, so the GC runtime
depends on the amount of live data only. Allocating objects still
takes time, but again for a system with a copying GC this can be tiny
since it's basically incrementing a register and initializing a couple
of words of memory.

If you try the experiment you can find really small differences. In
Allegro for instance the difference for a lot of reverses of a longish
list seems to be less than a factor of 1.4 with no effort spent on
thinking about GC (I think most of the time difference is going in the
overhead of simply starting the GC lots of times, so making it run
less often would probably reduce the difference to almost nothing).
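
The sort of experiment I mean, roughly (an illustrative sketch;
absolute numbers will vary a lot between implementations and GC
settings):

  (defun compare-reverses (n times)
    (let ((l (make-list n :initial-element 0)))
      (time (dotimes (i times) (setq l (reverse l))))     ; consing
      (time (dotimes (i times) (setq l (nreverse l))))))  ; non-consing

  (compare-reverses 10000 1000)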

--tim

Joe Marshall

Apr 26, 2000
Tim Bradshaw <t...@cley.com> writes:

> * Scott L Burson wrote:
>
> > This doesn't really make sense, because REVERSE, and (I think, off the cuff)
> > most of the other consing versions of these operations also run in linear time
> > -- it's just a much longer linear time because of the cost of first creating and
> > later GC-ing the new conses.
>
> No, it may well not be significantly longer linear time. Picking up
> dead objects is essentially free for a copying GC, so the GC runtime
> depends on the amount of live data only. Allocating objects still
> takes time, but again for a system with a copying GC this can be tiny
> since it's basically incrementing a register and initializing a couple
> of words of memory.

And if you are using a generational GC, modifying an existing cons
cell will probably take more time (either in the page marking, or
later during the scavenge of this same cell) than initializing a
freshly consed cell.

--
~jrm

Barry Margolin

Apr 26, 2000
In article <ey33do8...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:
>No, it may well not be significantly longer linear time. Picking up
>dead objects is essentially free for a copying GC, so the GC runtime
>depends on the amount of live data only. Allocating objects still
>takes time, but again for a system with a copying GC this can be tiny
>since it's basically incrementing a register and initializing a couple
>of words of memory.

However, allocating more objects is also likely to cause more frequent
GC's. It doesn't matter that the individual GC's take the same amount of
time, it's still more total time.

In my experience, many uses of things like NREVERSE are idiomatic, so it
becomes a simple, almost automatic optimization. For instance, in the
traditional idiom for accumulating into a list, where you build the list
with (push item result) during the loop and finally return (nreverse
result), it's quite obvious that the old list will become dead immediately,
so it's a simple matter to reuse it.

While this could be considered premature optimization, the idioms arose
because they're really easy to apply. I do them in my sleep, much like
eliminating common sub-expressions by using LET (not only does this
optimize the generated code, but I also find it makes the program easier to
read, by shrinking long expressions and giving names to things). The
N-functions can also help human readers understand the code -- it's like a
comment warning them that the structure being reused is no longer needed in
its old form.
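
For concreteness, the traditional idiom spelled out (a throwaway
example):

  (defun collect-evens (list)
    (let ((result '()))
      (dolist (x list (nreverse result))   ; the old spine is reused
        (when (evenp x)
          (push x result)))))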

--
Barry Margolin, bar...@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Scott L. Burson

Apr 26, 2000
Erik Naggum wrote:
>
> time to scratch your understanding...
>
> you're quite mistaken on this, too. ... for a number of really
> hairy reasons...

You have neither right nor call to address me in such a condescending manner,
and I do not appreciate it. My post was entirely polite and respectful.

I've seen you get in donnybrooks with any number of other people. Since I never
saw how they started, I reserved judgment. Now I see.

-- Scott

Scott L. Burson

Apr 26, 2000
Joe Marshall wrote:
>
> [I]f you are using a generational GC, modifying an existing cons
> cell will probably take more time (either in the page marking, or
> later during the scavenge of this same cell) than initializing a
> freshly consed cell.

But in the common case in which the cell being modified is still in the youngest
generation, all the write barrier has to do is to figure that out -- which
usually takes just an address comparison or two. It doesn't have to do anything
else like page marking or adding an element to the root set, as the case may be.

Of course, the more ephemeral objects one conses, the shorter the time that
not-so-ephemeral objects remain in the youngest generation. So there's another
hidden cost of consing ephemeral objects: once the other objects get promoted,
they're more expensive to modify.

Look, I'm not trying to tell people that they should be anal about avoiding
ephemeral consing, unless they're stuck with a non-generational collector, as
some still are. I don't worry about it much in ACL, which I use a lot. But the
cost isn't quite zero. I grant that I overstated the case when I said that the
consing versions were "much" more expensive than the "N" versions -- again,
unless one is stuck with a non-generational collector.

-- Scott

Tim Bradshaw

Apr 27, 2000
* Barry Margolin wrote:

> However, allocating more objects is also likely to cause more frequent
> GC's. It doesn't matter that the individual GC's take the same amount of
> time, it's still more total time.

This is true of course, I was just trying to point out that the
difference needn't be huge.

> In my experience, many uses of things like NREVERSE are idiomatic, so it
> becomes a simple, almost automatic optimization. For instance, in the
> traditional idiom for accumulating into a list, where you build the list
> with (push item result) during the loop and finally return (nreverse
> result), it's quite obvious that the old list will become dead immediately,
> so it's a simple matter to reuse it.

I use NREVERSE like this all the time, and I think the idiom is
important even if the performance difference might be small.

--tim


Espen Vestre

Apr 27, 2000
"Scott L. Burson" <SBu...@my-dejanews.com> writes:

> Erik Naggum wrote:
> >
> > time to scratch your understanding...
> >
> > you're quite mistaken on this, too. ... for a number of really
> > hairy reasons...
>
> You have neither right nor call to address me in such a condescending manner,
> and I do not appreciate it. My post was entirely polite and respectful.

Hmm? Do you really think Erik's answer was rude? This is internet news
(and not the morning paper), after all.

My compatriot is more knowledgeable in English than I am, so he may
have meant to be ruder than I took his posting to be, but I'll leave
it to him to answer that.

Just don't start one of these unnecessary everlasting threads again, OK?
--
(espen)

Espen Vestre

Apr 27, 2000
Barry Margolin <bar...@genuity.net> writes:

> However, allocating more objects is also likely to cause more frequent
> GC's. It doesn't matter that the individual GC's take the same amount of
> time, it's still more total time.

Good point. More frequent GC's may also have other unwanted side effects,
like aging *other* objects too fast. My latest application gobbles so much
memory that it has to be run with global GC turned off. Since I have a
lot of objects that live for a few seconds (frequently up to half a minute)
before they can be GC'd, GC frequency is crucial to the behaviour
of the application.
--
(espen)

Erik Naggum

Apr 27, 2000
* "Scott L. Burson" <SBu...@my-dejanews.com>
| You have neither right nor call to address me in such a
| condescending manner, and I do not appreciate it. My post was
| entirely polite and respectful.

sorry, but if you found that _condescending_, I had no part in it.

| I've seen you get in donnybrooks with any number of other people.
| Since I never saw how they started, I reserved judgment. Now I see.

yes, you do. this started when you dramatically overreacted and
didn't "get" a light-hearted and slightly sarcastic response. I
hope you are willing to judge your own reaction, as well, because
the need and willingness to judge people is part of the problem.

in real life, I would have responded "geez, dude, lighten _up_".

has it occurred to you (or anyone else) that _I_ don't appreciate
you judge-happy uptight freaks? why do _you_ have the right to bug
and annoy me with your unnatural uptightness under cover of being
polite and respectful? answer: you don't.

if you are so polite and respectful, now is the time to prove it.

#:Erik

Rob Warnock

Apr 27, 2000
Erik Naggum <er...@naggum.no> wrote:
+---------------
| * "Scott L. Burson" <SBu...@my-dejanews.com>
| | This doesn't really make sense, because REVERSE, and (I think, off
| | the cuff) most of the other consing versions of these operations
| | also run in linear time -- it's just a much longer linear time
| | because of the cost of first creating and later GC-ing the new conses.
|
| you're quite mistaken on this, too. the non-consing versions do not
| (necessarily) run faster for a number of really hairy reasons. they
| don't cons. that's all. GC'ing dead objects doesn't take time in a
| whole bunch of GC implementations -- e.g., it takes time to keep
| objects _alive_ from generation to generation in a generational GC.
+---------------

Not only that, but doing an NREVERSE of "old" data with a generational GC
can actually be *slower* than REVERSE, since in the NREVERSE case every
"old" cons that is altered must be [well, may need to be] recorded in the
"remembered set" for that generation (so that the next minor collection
will work correctly).

Fortunately, most write barriers do check for the "new" data being from
the same generation as the "old" data, and if so, suppress the recording
overhead. But the point stands: With generational GCs, "destructive/in-place"
operations *can* be more expensive than the corresponding "fresh-allocating"
versions of those same operations.


-Rob

-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043

Robert Monfera

Apr 27, 2000

Tim Bradshaw wrote:

> Picking up
> dead objects is essentially free for a copying GC, so the GC runtime
> depends on the amount of live data only.

Erik wrote:

> GC'ing dead objects doesn't take time in a whole bunch of GC
> implementations -- e.g., it takes time to keep
> objects _alive_ from generation to generation in a generational GC.

Speaking of one GC pass, this is true, but the more memory allocated and
garbage generated, the more frequently the GC has to run to free memory,
thus increasing the overall GC runtime.

(In _this_ case, it becomes significant only if allocation is at a high
level relative to the amount of physical memory, at which point using a
vector becomes preferable.)
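
For example, accumulating into an adjustable vector instead of a list
(a sketch only):

  (defun collect-into-vector (items)
    (let ((result (make-array 0 :adjustable t :fill-pointer 0)))
      (dolist (x items)
        (vector-push-extend x result))
      result))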

Robert

Joe Marshall

Apr 28, 2000
Robert Monfera <mon...@fisec.com> writes:

> Tim Bradshaw wrote:
>
> > Picking up
> > dead objects is essentially free for a copying GC, so the GC runtime
> > depends on the amount of live data only.
>
> Erik wrote:
>
> > GC'ing dead objects doesn't take time in a whole bunch of GC
> > implementations -- e.g., it takes time to keep
> > objects _alive_ from generation to generation in a generational GC.
>
> Speaking of one GC pass, this is true, but the more memory allocated and
> garbage generated, the more frequently the GC has to run to free memory,
> thus increasing the overall GC runtime.

Presuming, of course, that the ratio of `live' to `dead' objects
remains the same. If you are allocating furiously, but discarding
everything virtually immediately, yes, you do GC more frequently, but
the amount of work done by the GC may still be negligible.

~jrm

F. Xavier Noria

Apr 28, 2000
On 27 Apr 2000 11:37:01 GMT, rp...@rigden.engr.sgi.com (Rob Warnock) wrote:

: Not only that, but doing an NREVERSE of "old" data with a generational GC
: can actually be *slower* than REVERSE, since in the NREVERSE case every
: "old" cons that is altered must be [well, may need to be] recorded in the
: "remembered set" for that generation (so that the next minor collection
: will work correctly).

So, when you need to [n]reverse a list, what do you consider in order
to choose which version to use?

-- Xavier

Rob Warnock

Apr 29, 2000
F. Xavier Noria <f...@retemail.es> wrote:
+---------------
| rp...@rigden.engr.sgi.com (Rob Warnock) wrote:
| : Not only that, but doing an NREVERSE of "old" data with a generational GC
| : can actually be *slower* than REVERSE...
|
| So, when you need to [n]reverse a list, what do you consider in order
| to choose which version to use?
+---------------

Most of the time, I go ahead and use NREVERSE if I've just consed up the
list (e.g., collected some results in a loop or tree walk), since that's
the common idiom and since "new" conses probably haven't been promoted to
older generations yet anyway, and use REVERSE everywhere else.

Don Geddis

May 3, 2000
On Wed, 26 Apr 2000 23:43:56 GMT, Barry Margolin <bar...@genuity.net> wrote:
> For instance, in the traditional idiom for accumulating into a list, where
> you build the list with (push item result) during the loop and finally return
> (nreverse result), it's quite obvious that the old list will become dead
> immediately, so it's a simple matter to reuse it.

In this idiomatic case, isn't it both more efficient [ O(n) instead of O(2n) ]
and conceptually cleaner [no need to mention "result" at all] to do
  (loop ...
        collect item)
instead?

-- Don
_______________________________________________________________________________
Don Geddis http://shop.goto.com ged...@goto.com

Barry Margolin

May 3, 2000
In article <slrn8h17mn...@jedi.tesserae.com>,
Don Geddis <Ged...@Goto.Com> wrote:
>On Wed, 26 Apr 2000 23:43:56 GMT, Barry Margolin <bar...@genuity.net> wrote:
>> For instance, in the traditional idiom for accumulating into a list, where
>> you build the list with (push item result) during the loop and finally return
>> (nreverse result), it's quite obvious that the old list will become dead
>> immediately, so it's a simple matter to reuse it.
>
>In this idiomatic case, isn't it both more efficient [ O(n) instead of O(2n) ]
>and conceptually cleaner [no need to mention "result" at all] to do
> (loop
> collect item )
>instead?

Of course. Hence my use of the word "traditional" -- I was referring to
the idiom that was used before LOOP became common.

Also, that idiom may still be used when LOOP isn't appropriate for the
application. You might have some code that has complex control structure
and is *also* accumulating a result, so (push item result) will appear in
various places, and (nreverse result) will appear at the end. This would
actually be an appropriate place to use the COLLECTING macro from the
Series facility, but now I go back to my reference to "traditional".
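
A hypothetical sketch of that shape -- complex control flow with PUSH
in the middle and NREVERSE at the end:

  (defun tree-leaves (tree)
    (let ((result '()))
      (labels ((walk (node)
                 (cond ((consp node)
                        (walk (car node))
                        (walk (cdr node)))
                       (node (push node result)))))
        (walk tree))
      (nreverse result)))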
