Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.

hacker challenge - traverse a list

106 views

Richard Harter

Feb 29, 2008, 12:56:24

Here is a little challenge - print the contents of a binary tree
in order only using O(1) additional memory. In other words you
can't use recursion or simulate it with a stack. As far as I
know you can't do this without some kind of hack. Also AFAIK
there is no way to do this that should ever be used in real code.
Rules? We don't need no steenking rules! Use whatever language
you like, though I suppose this is a natural for C based
languages. Tag bit solutions (i.e. mark a link as visited when
we go down it and unmark it when we come back up) don't count
because (a) everyone thinks of it, and (b) it (debatably) uses
O(log n) memory. Platform dependencies are okay - this is a hack,
after all.


Richard Harter, c...@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Save the Earth now!!
It's the only planet with chocolate.

Ben Pfaff

Feb 29, 2008, 13:04:26
c...@tiac.net (Richard Harter) writes:

> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory.

Do you want to limit runtime to O(n) or forbid modifying the
binary tree? I can think of a simple solution in the latter
category, and I have the sketch of a solution in the former
category.

Is it a binary search tree or an arbitrary binary tree?
--
Ben Pfaff
http://benpfaff.org

Julienne Walker

Feb 29, 2008, 13:12:58
On Feb 29, 12:56 pm, c...@tiac.net (Richard Harter) wrote:
> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory. In other words you
> can't use recursion or simulate it with a stack. As far as I
> know you can't do this without some kind of hack. Also AFAIK
> there is no way to do this that should ever be used in real code.
> Rules? We don't need no steenking rules! Use whatever language
> you like, though I suppose this is a natural for C based
> languages. Tag bit solutions (i.e. mark a link as visited when
> we go down it and unmark it when we come back up) don't count
> because (a) everyone thinks of it, and (b) it (debatably) uses
> O(log n) memory. Platform dependencies are okay - this a hack,
> after all.

I notice you didn't mention that the structure of the tree can't
change, so why not rotate away the left branches as you traverse? Even
better, once you're done you can make a second pass that balances the
tree. ;-)

Andrey Tarasevich

Feb 29, 2008, 14:29:29
Richard Harter wrote:
> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory.

What exactly is a "binary tree" in this case?

In a tree that has child->parent links in addition to parent->child
links the solution is trivial. No hacks necessary.

In a tree that only has parent->child links the problem has no
solution with O(1) memory (the way you "defined" it in your "debatably"
part).

--
Best regards,
Andrey Tarasevich

Andrey Tarasevich

Feb 29, 2008, 14:32:05
Julienne Walker wrote:
> ...

> I notice you didn't mention that the structure of the tree can't
> change, so why not rotate away the left branches as you traverse?
> ...

He explicitly stated that this approach will "debatably" use O(log n)
(or even O(n)) memory, which is a violation of the O(1) requirement.

Nelu

Feb 29, 2008, 15:25:55
On Fri, 29 Feb 2008 17:56:24 +0000, Richard Harter wrote:

> Here is a little challenge - print the contents of a binary tree in
> order only using O(1) additional memory. In other words you can't use
> recursion or simulate it with a stack. As far as I know you can't do
> this without some kind of hack. Also AFAIK there is no way to do this
> that should ever be used in real code. Rules? We don't need no
> steenking rules! Use whatever language you like, though I suppose this
> is a natural for C based languages. Tag bit solutions (i.e. mark a link
> as visited when we go down it and unmark it when we come back up) don't
> count because (a) everyone thinks of it, and (b) it (debatably) uses
> O(log n) memory. Platform dependencies are okay - this a hack, after
> all.
>

What does in order mean? Inorder?


If you just want to print the content of every node no matter what the
order of the keys is, assign an index (number) to each node (even not
existing nodes) similar to what you would have in a heap structure.
You can use a loop to print each node by its index starting with 0 (the
root). To get to the node with a certain index you can construct the
path step by step by dividing the index (-1 if zero based) by 2; the
quotient identifies the parent and the remainder specifies how to reach
the current node from the parent node (0 means left, 1 means right).
This is expensive because you have to repeat some calculations to avoid
using a queue. What you probably need in this case is a cursor (pointer
to walk the tree), variables for the current node and target node indexes
and a flag that tells you if there were any nodes printed at this depth.
If none was printed you stop the algorithm as you just ran out of
nodes :).

The indexes are assigned starting with the root and from left to right at
each depth
0
1 2
3 4 5 6
and so on...


I assume this fits the O(1) description. (This is something I just made
up so it's not tested, I just assume it works.) Each depth starts with a
node of index 2^depth - 1 (again, even if the node doesn't really exist).

--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)

Richard Harter

Feb 29, 2008, 15:26:26

What I was thinking of was a binary tree with left and right
children links and no parent links. Obviously it is trivial if
there is a parent link.

My take is that you can modify the tree during the traversal
provided that it is in the original condition when you are done.
The challenge isn't explicit so I suppose that revising the tree
counts.

O(n) is more elegant (!!?) and is what I had in mind, but a more
expensive method without modifying the tree is a fair solution.

Richard Harter

Feb 29, 2008, 15:34:25
On Fri, 29 Feb 2008 11:29:29 -0800, Andrey Tarasevich
<andreyta...@hotmail.com> wrote:

>Richard Harter wrote:
>> Here is a little challenge - print the contents of a binary tree
>> in order only using O(1) additional memory.
>
>What exactly is a "binary tree" in this case?

A tree with left and right child links only, no parent link;
traverse in the order specified by left and right.

>
>In a tree that has child->parent links in addition to parent->child
>links the solution is trivial. No hacks necessary.

True.


>
>In a tree that only has parent->child links the problem has no
>solution with O(1) memory (the way you "defined" it in your "debatably"
>part).

Not so. It depends upon what further restrictions you place.

Ben Pfaff

Feb 29, 2008, 16:13:55
c...@tiac.net (Richard Harter) writes:

> On Fri, 29 Feb 2008 10:04:26 -0800, Ben Pfaff
> <b...@cs.stanford.edu> wrote:
>
>>c...@tiac.net (Richard Harter) writes:
>>
>>> Here is a little challenge - print the contents of a binary tree
>>> in order only using O(1) additional memory.
>>
>>Do you want to limit runtime to O(n) or forbid modifying the
>>binary tree? I can think of a simple solution in the latter
>>category, and I have the sketch of a solution in the former
>>category.
>>
>>Is it a binary search tree or an arbitrary binary tree?
>
> What I was thinking of was a binary tree with left and right
> children links and no parent links. Obviously it is trivial if
> there is a parent link.
>
> My take is that you can modify the tree during the traversal
> provided that it is in the original condition when you are done.

One way to do it would be to use the BST rebalancing algorithm
described, among other places, at:
http://adtinfo.org/libavl.html/Balancing-a-BST.html
The first step of the algorithm converts the tree into a doubly
linked list. At this point, you can traverse it in inorder. The
second step balances the tree. The algorithm runs in O(1) space
and O(n) time.

This approach will not restore the original shape of the tree,
but the new shape is just as good or better than the original
shape for the purpose of typical BST operations. Perhaps it is
good for partial credit.
--
"GNU does not eliminate all the world's problems,
only some of them."
--Richard Stallman

Richard Harter

Feb 29, 2008, 16:24:43

Nice. IIANM the execution cost is O(n*h) where n is the number
of nodes and h is the maximum height - average case O(n*log(n))
and worst case O(n*n). Can this be improved to worst case
O(n*log(n))?

Richard Harter

Feb 29, 2008, 16:29:21
On Fri, 29 Feb 2008 13:13:55 -0800, Ben Pfaff
<b...@cs.stanford.edu> wrote:

Works for me - I thought that's what you had in mind. If nobody
comes up with anything suitably dirty I'll post what I had in
mind.

Julienne Walker

Feb 29, 2008, 18:03:45
On Feb 29, 2:32 pm, Andrey Tarasevich <andreytarasev...@hotmail.com>
wrote:

> Julienne Walker wrote:
> > ...
> > I notice you didn't mention that the structure of the tree can't
> > change, so why not rotate away the left branches as you traverse?
>
> > ...
>
> Hew explicitly stated that this approach will "debatably" use O(log n)
> (or even O(n)) memory, which is a violation of the O(1) requirement.

That's not the same approach. Rotating away all of the left branches
has an O(1) storage cost and takes O(N) time.

Andrey Tarasevich

Feb 29, 2008, 18:08:33

No, what I meant is that the original message seems to imply that
modifying O(log n) (or O(n)) existing memory locations in the tree is
already supposed to qualify as "using" O(log n) (or O(n)) memory, if I
understood it correctly.

Of course, it is debatable, as the original poster noted, but still it
would be great to know where exactly he draws the line.

Richard Harter

Feb 29, 2008, 18:20:51

If I understand you correctly, that's what Ben is proposing. My
original thought was that the structure could change during the
process as long as it was restored at the end. However the
challenge didn't say that.

Richard Harter

Feb 29, 2008, 18:32:39

Let me try to clarify. Modifying memory locations in the tree is
okay. What is not okay is altering specific bits in links to
record information - a typical hack is to set the sign bit or a
low order bit. If the addressing scheme is such that a certain
bit is always 0 or always 1 this trick works. The reason it is
debatable is that one can argue about whether the hack is using
extra space or not. To make it simple, assume that all bits of
an address can potentially be set.

Is this clearer?

Nelu

Feb 29, 2008, 20:18:20
On Fri, 29 Feb 2008 21:24:43 +0000, Richard Harter wrote:

> On 29 Feb 2008 20:25:55 GMT, Nelu <spam...@gmail.com> wrote:
>
>>On Fri, 29 Feb 2008 17:56:24 +0000, Richard Harter wrote:
>>
>>> Here is a little challenge - print the contents of a binary tree in
>>> order only using O(1) additional memory.

<snip>


>>I assume this fits the O(1) description. (This is something I just made
>>up so it's not tested, I just assume it works). Each depth start with a
>>node of index 2^depth-1 (again, even if the node doesn't really exist).
>
> Nice. IIANM the execution cost is O(n*h) where n is the number of nodes
> and h is the maximum height - average case O(n*log(n)) and worst case
> O(n*n). Can this be improved to worst case O(n*log(n))?
>

Not sure. I'd say not at a first glance. I may take another look at a
later time. I'll have to see if it's possible to get rid of some
calculations related to finding the path to the node to make it faster
but that's not going to make it better by an order of magnitude.
This is probably the price for O(1) and not changing the structure :).

Gene

Feb 29, 2008, 21:30:14
On Feb 29, 12:56 pm, c...@tiac.net (Richard Harter) wrote:
> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory. In other words you
> can't use recursion or simulate it with a stack. As far as I
> know you can't do this without some kind of hack. Also AFAIK
> there is no way to do this that should ever be used in real code.
> Rules? We don't need no steenking rules! Use whatever language
> you like, though I suppose this is a natural for C based
> languages. Tag bit solutions (i.e. mark a link as visited when
> we go down it and unmark it when we come back up) don't count
> because (a) everyone thinks of it, and (b) it (debatably) uses
> O(log n) memory. Platform dependencies are okay - this a hack,
> after all.

I think this was done 25 years ago:

G. Lindstrom. Scanning list structures without stacks or tag bits.
Information Processing Letters, 2(2):47-51, June 1973.

Even if you use the classical Deutsch-Schorr-Waite algorithm, which
needs 2 tag bits per node, you can steal them from the low order bits
of the child pointers by insisting that nodes are allocated on 2-byte
boundaries, which most modern systems already do...

Gene

Feb 29, 2008, 21:31:38
On Feb 29, 9:30 pm, Gene <gene.ress...@gmail.com> wrote:
>
> I think this was done 25 years ago:
>
> G. Lindstrom. Scanning list structures without stacks or tag bits.
> Information Processing Letters, 2(2):47-51, June 1973.
>
> Even if you use the classical Deutsch-Schorr-Waite algorithm, which
> needs 2 tag bits per node, you can steal them from the low order bits
> of the child pointers by insisting that nodes are allocated on 2-byte
> boundaries, which most modern systems already do...

Yikes it's been a long week...35 years ago.

Paul Hsieh

Feb 29, 2008, 23:32:00
On Feb 29, 9:56 am, c...@tiac.net (Richard Harter) wrote:
> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory. In other words you
> can't use recursion or simulate it with a stack. As far as I
> know you can't do this without some kind of hack. Also AFAIK
> there is no way to do this that should ever be used in real code.
> Rules? We don't need no steenking rules! Use whatever language
> you like, though I suppose this is a natural for C based
> languages. Tag bit solutions (i.e. mark a link as visited when
> we go down it and unmark it when we come back up) don't count
> because (a) everyone thinks of it, and (b) it (debatably) uses
> O(log n) memory. Platform dependencies are okay - this a hack,
> after all.

I assume it's an ordinary binary tree, and not a B+ tree (i.e., values
strewn throughout the tree, not just rammed down onto the leaves).

I would try to implement:

node ** findmin (node **);
void deleteEl (node **);
void print (node **);

for (;;) {
    v = findmin (&treetop);
    print (v);
    deleteEl (v);
}

Where findmin() and deleteEl() are O(log(n)) time, and O(1) storage.
The reason I am double indirecting instead of just single indirecting
(i.e., node ** instead of node *) is that I am always referring to the
*link* to the element rather than the element. I.e., I am always
looking up the tree by one level, but just at the parent *link*, not
the parent *node* (in this way the treetop has a parent link, by
taking the address of the variable which holds the treetop).

findmin is trivial, so the trick is to implement deleteEl()
non-recursively. So let's see:

void deleteEl (node ** v) {
    for (;;) {
        if ((*v)->left == NULL && (*v)->right == NULL) { /* Terminal node? */
            free (*v);
            *v = NULL;
            return;
        }
        if ((*v)->left) { /* Swap with the right-most one on the left side, and recurse? */
            node ** p = &((*v)->left);
            while ((*p)->right) p = &((*p)->right);
            (*v)->value = (*p)->value;
            v = p;
        } else { /* Swap with the left-most one on the right side, and recurse */
            node ** p = &((*v)->right);
            while ((*p)->left) p = &((*p)->left);
            (*v)->value = (*p)->value;
            v = p;
        }
    }
}

So the point is to realize that the recursion here is really tail
recursion, and I just implement it directly as a for loop. So this is
basically a full solution (if you give me the findmin function), and
there is nothing hacky about it. Do I fail because I didn't hack my
way through this?

BTW, What the hell happened to this group?!?! There is not a lot of
substance in the other posts around here.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Richard Harter

Mar 1, 2008, 00:18:46

Interesting. I'd never seen the DSW algorithm before, though the
idea of pointer reversal is rather obvious. Thanks very much for
the reference.

As it chances I thought of a different way to pull off the trick,
though I didn't take into account cycle detection.

Richard Heathfield

Mar 1, 2008, 00:29:52
Paul Hsieh said:

<snip>



> BTW, What the hell happened to this group?!?!

Mr Nilges (aka "spinoza") happened to this group.

> There is not a lot of substance in the other posts around here.

Right. <sigh>

Feel free to increase the average content quality.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999

Remo D.

Mar 1, 2008, 02:42:14
Paul Hsieh wrote:

>
> BTW, What the hell happened to this group?!?! There is not a lot of
> substance in the other posts around here.
>

I can't resist commenting on this.

It's a fact that some posts have the clear intent of provoking flaming
reactions and/or contain personal attacks, but more than the original
posters of those messages I blame those who can't resist answering
them, starting a snowball reaction.

I think that, however hard it may be ( http://xkcd.com/386/ ), this
place (and clc) would be a better place if clearly off-topic messages
could be left unanswered.

My 2c.

CBFalconer

Mar 1, 2008, 02:11:08
Gene wrote:
>
... snip ...

>
> I think this was done 25 years ago:
>
> G. Lindstrom. Scanning list structures without stacks or tag bits.
> Information Processing Letters, 2(2):47-51, June 1973.
>
> Even if you use the classical Deutsch-Schorr-Waite algorithm, which
> needs 2 tag bits per node, you can steal them from the low order bits
> of the child pointers by insisting that nodes are allocated on 2-byte
> boundaries, which most modern systems already do...

While this will often work, it is not portable. You know nothing
about the role played by the bits in pointers.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.

--
Posted via a free Usenet account from http://www.teranews.com

moi

Mar 1, 2008, 07:11:15

Well this smells like the XOR hack ...

HTH,
AvK

Richard Harter

Mar 1, 2008, 11:58:53
On Sat, 01 Mar 2008 13:11:15 +0100, moi
<ro...@invalid.address.org> wrote:

>On Fri, 29 Feb 2008 20:26:26 +0000, Richard Harter wrote:
>
>> On Fri, 29 Feb 2008 10:04:26 -0800, Ben Pfaff <b...@cs.stanford.edu>
>> wrote:
>>
>>>c...@tiac.net (Richard Harter) writes:
>>>
>>>> Here is a little challenge - print the contents of a binary tree in
>>>> order only using O(1) additional memory.
>>>
>>>Do you want to limit runtime to O(n) or forbid modifying the binary
>>>tree? I can think of a simple solution in the latter category, and I
>>>have the sketch of a solution in the former category.
>>>
>>>Is it a binary search tree or an arbitrary binary tree?
>>
>> What I was thinking of was a binary tree with left and right children
>> links and no parent links. Obviously it is trivial if there is a parent
>> link.
>>
>> My take is that you can modify the tree during the traversal provided
>> that it is in the original condition when you are done. The challenge
>> isn't explicit so I suppose that revising the tree counts.
>>
>> O(n) is more elegant (!!?) and is what I had in mind, but a more
>> expensive method without modifying the tree is a fair solution.

>Well this smells like the XOR hack ...


Here's the general idea. Suppose you have just arrived at a
node. At this point you know where you came from. What you are
going to do next is visit your children one after another. When
you get done visiting them you want to go back to where you came
from. So what you want to do is to somehow nondestructively
store the information about where you came from (a link to your
parent) in the node.

Fortunately the link holding the address of the child you are
going to visit can be overwritten - we will know what it was when
we return from the visit because we know where we just came from.
So, we can stick the parent's address there while we are off
visiting and restore the link when we get back. There is a catch,
though: when we get back, how do we know whether we just came from
visiting the left child or the right child? We need somehow to
save one additional bit of information. The obvious way to do
this is to use a tag bit to designate which child we visited.
This idea is the basis for the DSW algorithm; whether it is a
hack or not is, I suppose, a matter of perspective. It turns out
that you can do a clever trick with moving pointers around that
implicitly captures that bit of information; that's the Lindstrom
variation that Gene mentioned. I thought there was an XOR-based
hack to store the bit but I can't get it to work.

Paul Hsieh

Mar 1, 2008, 16:42:46
On Feb 29, 8:32 pm, Paul Hsieh <websn...@gmail.com> wrote:
> On Feb 29, 9:56 am, c...@tiac.net (Richard Harter) wrote:
> > Here is a little challenge - print the contents of a binary tree
> > in order only using O(1) additional memory. [...]

>
> I would try to implement:
>
> node ** findmin (node **);
> void deleteEl (node **);
> void print (node **);
>
> for (;;) {
> v = findmin (&treetop);
> print(v)
> deleteEl (v);
> }

Well, the problem with this, of course, is that I destroy the tree in
the process. But, of course, this was not mentioned in the original
statement of the problem. Upon reflection, I have found a way to
mangle and print the binary tree, then demangle it back to its
original state. Though technically, in most cases, it's not important
or relevant to get it back to its *exact* state, but rather just
*some* binary-tree-like state (which is a *WAY* easier problem, of
course), I have found one that recovers the tree exactly:

node * xtmp;
node ** tmp = &xtmp;
node * tmptop = treetop;

/* Flatten, and output the tree */
for (;;) {
    v = findmin (&treetop);  /* return with parent link to min node. */
    print (v);
    *tmp = *v;
    tmp = &((*v)->right);
    *v = NULL;  /* Delink the node from its parent; don't free it. */
}

/* Rebuild the tree */
tmp = &xtmp;
treetop = *tmp;
for (tmp = &((*tmp)->right); *tmp;) {
    node ** x = &treetop;
    while ((*tmp)->left != *x && (*x)->right != NULL) {
        x = &((*x)->right);
    }
    *x = *tmp;
    tmp = &((*tmp)->right);
}
treetop = tmptop;

The basic idea is to convert the right pointer into a directly ordered
linked-list pointer in the first pass while printing the elements out,
while leaving the left pointer alone. Then we reinsert the nodes *in
order* into a binary tree. In order to know *where* to insert, we use
the left link to determine whether or not each node should be re-
inserted *above* the greatest previous node (whenever it is non-NULL)
or *below* the greatest previous node. This reinsertion only affects
the parent right links, as the left links are already correct.

No hacks, no controversy, complete solution. At this point I am only
violating a potential const property for the tree, but I am pretty
sure, that can't be solved in O(1) space with or without hacks
(excluding finding O(n) space somewhere because of some platform-
specific thing).

Besides solving the problem, this demonstrates a general technique
for linked-list and tree-like algorithms: operate on "previous
links" rather than direct pointers to nodes. It's *almost* like always
having the previous node for every node around, except it also has the
benefit of working even on the first/top-most node. I don't think this
technique is generally taught in schools or textbooks, but I think
it's well worth taking notice of.

mike3

Mar 1, 2008, 19:25:20
On Feb 29, 10:56 am, c...@tiac.net (Richard Harter) wrote:
> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory. In other words you
> can't use recursion or simulate it with a stack. As far as I
> know you can't do this without some kind of hack. Also AFAIK
> there is no way to do this that should ever be used in real code.

What would happen if you tried using it in "real" code?

dj3v...@csclub.uwaterloo.ca.invalid

Mar 1, 2008, 19:52:43
In article <cb11b705-d1b1-4f04...@d21g2000prf.googlegroups.com>,

The maintenance programmer who inherited it from you would come and
break your legs.


dave

--
Dave Vandervies dj3vande at eskimo dot com

Oil _is_ solar power. It's just sunshine from a long time ago.
--Matt Roberds in the scary devil monastery

Randy Howard

Mar 1, 2008, 20:29:24
On Sat, 1 Mar 2008 18:52:43 -0600, dj3v...@csclub.uwaterloo.ca.invalid
wrote
(in article <fqctor$m2l$1...@rumours.uwaterloo.ca>):

> In article
> <cb11b705-d1b1-4f04...@d21g2000prf.googlegroups.com>,
> mike3 <mike...@yahoo.com> wrote:
>> On Feb 29, 10:56 am, c...@tiac.net (Richard Harter) wrote:
>
>>> As far as I
>>> know you can't do this without some kind of hack. Also AFAIK
>>> there is no way to do this that should ever be used in real code.
>>
>> What would happen if you tried using it in "real" code?
>
> The maintenance programmer who inherited it from you would come and
> break your legs.

:-)

Maybe if this actually happened regularly, code quality would be much
better overall.


--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Gene

Mar 1, 2008, 21:39:01
On Mar 1, 2:11 am, CBFalconer <cbfalco...@yahoo.com> wrote:
> Gene wrote:
>
> ... snip ...
>
> > I think this was done 25 years ago:
>
> > G. Lindstrom. Scanning list structures without stacks or tag bits.
> > Information Processing Letters, 2(2):47-51, June 1973.
>
> > Even if you use the classical Deutsch-Schorr-Waite algorithm, which
> > needs 2 tag bits per node, you can steal them from the low order bits
> > of the child pointers by insisting that nodes are allocated on 2-byte
> > boundaries, which most modern systems already do...
>
> While this will often work, it is not portable.  You know nothing
> about the work done by bits in pointers.

Of course. On the other hand, there are many language runtimes that
make the same non-portable assumption by embedding type information in
the low order bits of pointers. DSW is most frequently used for
garbage collection, which entails machine-dependent assumptions
anyway.


Ben Pfaff

Mar 2, 2008, 00:17:35
Randy Howard <randy...@FOOverizonBAR.net> writes:

> On Sat, 1 Mar 2008 18:52:43 -0600, dj3v...@csclub.uwaterloo.ca.invalid
> wrote
> (in article <fqctor$m2l$1...@rumours.uwaterloo.ca>):
>
>> In article
>> <cb11b705-d1b1-4f04...@d21g2000prf.googlegroups.com>,
>> mike3 <mike...@yahoo.com> wrote:
>>> On Feb 29, 10:56 am, c...@tiac.net (Richard Harter) wrote:
>>
>>>> As far as I
>>>> know you can't do this without some kind of hack. Also AFAIK
>>>> there is no way to do this that should ever be used in real code.
>>>
>>> What would happen if you tried using it in "real" code?
>>
>> The maintenance programmer who inherited it from you would come and
>> break your legs.
>

> Maybe if this actually happened regularly, code quality would be much
> better overall.

But I wonder how many programmers would have broken legs.
--
Ben Pfaff
http://benpfaff.org

dj3v...@csclub.uwaterloo.ca.invalid

Mar 2, 2008, 00:44:34
In article <87ejati...@blp.benpfaff.org>,

In the steady state, probably not many.
In the transition, well, I'd rather not think about it.


dave

--
Dave Vandervies dj3vande at eskimo dot com

Ah, good. I like being safely disregarded. Gives me more time to work
on the Earth-Shattering Kaboom.
--Dan Birchall in the scary devil monastery

Paul Hsieh

Mar 2, 2008, 01:15:38
On Mar 1, 9:17 pm, Ben Pfaff <b...@cs.stanford.edu> wrote:
> Randy Howard <randyhow...@FOOverizonBAR.net> writes:
> > On Sat, 1 Mar 2008 18:52:43 -0600, dj3va...@csclub.uwaterloo.ca.invalid
> > wrote
> > (in article <fqctor$m2...@rumours.uwaterloo.ca>):
> >> In article
> >> <cb11b705-d1b1-4f04-861f-8331245c7...@d21g2000prf.googlegroups.com>,

> >> mike3  <mike4...@yahoo.com> wrote:
> >>> On Feb 29, 10:56 am, c...@tiac.net (Richard Harter) wrote:
> >>>> As far as I
> >>>> know you can't do this without some kind of hack.  Also AFAIK
> >>>> there is no way to do this that should ever be used in real code.
>
> >>> What would happen if you tried using it in "real" code?
>
> >> The maintenance programmer who inherited it from you would come and
> >> break your legs.
>
> > Maybe if this actually happened regularly, code quality would be much
> > better overall.
>
> But I wonder how many programmers would have broken legs.

Indeed.

If I read his implication correctly, Richard Harter has in mind some
horrible hacky way of solving this problem. Some people want to ram
bits into pointers (*sigh* not every machine is RISC ... in fact these
days precious few of them are). You posted a solution which changes
the tree structure. I've posted two solutions: one which destroys the
tree altogether (not a violation of the original problem statement),
and another which solves the problem completely without issue except
for a potential const violation (i.e., my solution would not work if
the links were, say, web page links that I don't have write access
to). It's possible some bright guy may come along and show us all how
to do this with a *const* binary tree in O(1) space and no hacks (I
highly doubt this, but I recall a time when I had to eat crow in
USENET over the existence of an O(n) median finding algorithm, so I
won't raise the stakes too high ;) ).

So the question is -- who gets to wield the bat, and set the standard
for when people's legs get whacked? I think my legs would probably be
ok for a while, but I wouldn't envy the position of some other people.

moi

Mar 2, 2008, 07:00:15
On Sat, 01 Mar 2008 16:58:53 +0000, Richard Harter wrote:

> On Sat, 01 Mar 2008 13:11:15 +0100, moi <ro...@invalid.address.org>
> wrote:
>
>>On Fri, 29 Feb 2008 20:26:26 +0000, Richard Harter wrote:
>>
>>> On Fri, 29 Feb 2008 10:04:26 -0800, Ben Pfaff <b...@cs.stanford.edu>
>>> wrote:
>>>
>>>>c...@tiac.net (Richard Harter) writes:
>>>>
>>>>> Here is a little challenge - print the contents of a binary tree in
>>>>> order only using O(1) additional memory.
>>>>

>>> My take is that you can modify the tree during the traversal provided


>>> that it is in the original condition when you are done. The challenge
>>> isn't explicit so I suppose that revising the tree counts.
>>>
>>> O(n) is more elegant (!!?) and is what I had in mind, but a more
>>> expensive method without modifying the tree is a fair solution.

Loosely speaking, this is about constructing an iterator for the
treewalk: given a certain node (or key), find the one that should be
visited next.


>>Well this smells like the XOR hack ...

I am still not sure if the XOR-hack works for this particular problem.

>
> Here's the general idea. Suppose you have just arrived at a node. At
> this point you know where you came from. What you are going to do next
> is visit your children one after another. When you get done visiting
> them you want to go back to where you came from. So what you want to do
> is to somehow nondestructively store the information about where you
> came from (a link to your parent) in the node.
>
> Fortunately the link holding the address of the child you are going to
> visit can be overwritten - we will know what it was when we return from
> the visit because we know where we just came from. So, we can stick the
> parent's address there while we are off visiting and restore the link
> when we get back. There is a catch though; when we get back how do we
> know whether we just came from visiting the left child or the right
> child? We need somehow to save one additional bit of information. The
> obvious way to do this is to use a tag bit to designate which child we
> visited. This idea is the basis for the DSW algorithm; whether it is a
> hack or not is, I suppose, a matter of perspective. It turns out that
> you can do a clever trick with moving pointers around that implicitly
> captures that bit of information; that's the Lindscott variation that
> Gene mentioned. I thought there was an xor based hack to store the bit
> but I can't get it to work.
>

I have tried this kind of construction yesterday. It seemed to work
reasonably, except for the root node, which has no parent and makes it
impossible for its children to find out whether they are on the left or
right side of the root.

Another idea I have is to extend the "dancing links" principle to three
pointers: every node needs three pointers (parent, left, right), but has
room for only two. Rotating the pointer values over the two locations
plus a temporary could do the trick, but again one needs to remember the
"state" (the number of times a node has undergone rotation), which needs
1.5 (;-) bits per node. Plus: there is again a null-pointer and
termination problem. (Storing the two bits in the lower bits of the
pointers is obvious, but forbidden by the rules of the game.)

Yet another bit (maybe 1.5?) of information is present in the nodes if
the "comparison" function is known by the tree walker. That could be
exploited to determine whether a node has been permuted or not
(non-unique keys will probably ruin this trick). Null vs. root's parent
is again a problem.


Back to the old drawing board,
AvK


moi

unread,
Mar 2, 2008, 07:14:28
to
On Sun, 02 Mar 2008 13:00:15 +0100, moi wrote:

> On Sat, 01 Mar 2008 16:58:53 +0000, Richard Harter wrote:
>
>> On Sat, 01 Mar 2008 13:11:15 +0100, moi <ro...@invalid.address.org>
>> wrote:
>>
>>>On Fri, 29 Feb 2008 20:26:26 +0000, Richard Harter wrote:
>>>
>>>> On Fri, 29 Feb 2008 10:04:26 -0800, Ben Pfaff <b...@cs.stanford.edu>
>>>> wrote:
>>>>
>>>>>c...@tiac.net (Richard Harter) writes:
>>>>>
>>>>>> Here is a little challenge - print the contents of a binary tree in
>>>>>> order only using O(1) additional memory.
>>>>>

> Yet another bit (maybe 1.5?) of information is present in the nodes if
> the "comparison" function is known by the tree walker. That could be
> exploited to determine whether a node has been permuted or not
> (non-unique keys will probably ruin this trick). Null vs. root's parent
> is again a problem.
>

BTW if the comparison function is known, O(n log n) can be done, IMHO.
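A sketch of that idea in Python (the Node class and function names are
illustrative, not from the thread): repeatedly search from the root for
the smallest key greater than the last key emitted. It is read-only,
uses O(1) extra space, and costs O(n * height) time, which is the
O(n log n) above on a balanced tree.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder_readonly(root):
    """Binary search tree with unique keys: emit keys in order without
    touching any link. Each step re-searches from the root for the
    smallest key greater than the last one emitted."""
    out = []
    last = None
    while True:
        succ = None
        node = root
        while node is not None:
            if last is None or node.key > last:
                succ = node          # candidate successor; try smaller
                node = node.left
            else:
                node = node.right    # too small; go right
        if succ is None:
            return out               # nothing greater than last: done
        out.append(succ.key)
        last = succ.key
```

Note that only a constant number of pointers and one key are live at any
time, and the tree is never modified.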

AvK

Ben Bacarisse

unread,
Mar 2, 2008, 09:40:06
to
Paul Hsieh <webs...@gmail.com> writes:

It depends on exactly how you count the O(1) space, but it is possible
to do this by numbering the nodes. One way is to number the 2^d nodes
at depth d (starting with d = 0) from 2^d up to 2^(d+1) - 1. You can
then write a find_node(tree, n) function that finds the node numbered
n using no space other than that number (and O(1) temporary copies of
it).

An in-order traversal is then just a loop to print nodes from 1
upwards. Of course, you need to know when to stop, but that is just a
matter of noticing that there were no nodes at the last printed depth.

It can be argued that this uses O(log2(n)) space for the number of the
node (and for any copies of this number that are needed) but in many
analyses of algorithms, the size of integers is taken to be O(1).
With that view, this method uses O(1) space and can work with an
unmodifiable tree. In practice, any tree that can be manipulated will
be able to have its nodes numbered in a constant size integer
variable.
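As a concrete sketch (Python; the Node class is illustrative): with this
numbering the parent and child relations become pure index arithmetic
(children of i are 2i and 2i+1, parent is i/2), so one way to drive an
in-order walk is to carry a single node number, re-descending from the
root for each access -- which is exactly where the O(log n) factor per
step comes from.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def find_node(root, n):
    # Heap numbering: root is 1, children of i are 2i and 2i+1.
    # The bits of n below the leading 1 spell the root-to-node path.
    node = root
    for bit in bin(n)[3:]:                    # skip the '0b1' prefix
        node = node.right if bit == '1' else node.left
        if node is None:
            return None
    return node

def inorder_keys(root):
    """In-order traversal driven entirely by one integer index."""
    if root is None:
        return
    i = 1
    while find_node(root, 2 * i) is not None:  # leftmost node
        i *= 2
    while True:
        yield find_node(root, i).key
        if find_node(root, 2 * i + 1) is not None:
            i = 2 * i + 1                      # into the right subtree,
            while find_node(root, 2 * i) is not None:
                i *= 2                         # then all the way left
        else:
            while i % 2 == 1:                  # climb past right-child edges
                i //= 2
            i //= 2                            # step up from a left child
            if i == 0:
                return                         # climbed past the root: done
```

Going back up costs nothing but integer division; only existence tests
and visits pay the O(log n) descent.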

[I have not been able to follow this thread as I would like because my
news feed is playing up, but I think someone has already posted this
(or a similar) solution. Also, I apologise in advance if I seem to
ignore some postings.]

--
Ben.

Richard Harter

unread,
Mar 2, 2008, 11:33:11
to
On Sun, 02 Mar 2008 14:40:06 +0000, Ben Bacarisse
<ben.u...@bsb.me.uk> wrote:

>Paul Hsieh <webs...@gmail.com> writes:
>
[snip]

It's been done. The objection is that it's not O(n) in time; the
cost of find_node is O(log n).

Richard Harter

unread,
Mar 2, 2008, 11:47:23
to
On Sat, 1 Mar 2008 13:42:46 -0800 (PST), Paul Hsieh
<webs...@gmail.com> wrote:

>On Feb 29, 8:32 pm, Paul Hsieh <websn...@gmail.com> wrote:
>> On Feb 29, 9:56 am, c...@tiac.net (Richard Harter) wrote:
>> > Here is a little challenge - print the contents of a binary tree
>> > in order only using O(1) additional memory. [...]

[snip solution]

>
>No hacks, no controversy, complete solution. At this point I am only
>violating a potential const property for the tree, but I am pretty
>sure, that can't be solved in O(1) space with or without hacks
>(excluding finding O(n) space somewhere because of some platform-
>specific thing).

Very pretty. I think you've rediscovered the Lindscott version
of DSW which is a lot more than I did. I will check it all out
and make sure.


>
>Besides solving the problem, this demonstrates a general technique
>about linked list and tree-like algorithms. That is that you should
>operate on "previous links" rather than direct pointers to nodes. It's
>*almost* like always having the previous node for every node around,
>except it also has the benefit of working even on the first/top-most
>node. I don't think this technique is generally taught in schools or
>textbooks, but I think it's well worth taking notice of.

That's a very good point.

Ben Bacarisse

unread,
Mar 2, 2008, 12:59:34
to
c...@tiac.net (Richard Harter) writes:

> On Sun, 02 Mar 2008 14:40:06 +0000, Ben Bacarisse
> <ben.u...@bsb.me.uk> wrote:
>
>>It depends on exactly how you count the O(1) space, but it is possible
>>to do this by numbering the nodes. One way is to number the 2^d nodes
>>at depth d (starting with d = 0) from 2^d up to 2^(d+1) - 1. You can
>>then write a find_node(tree, n) function that finds the node numbered
>>n using no space other than that number (and O(1) temporary copies of
>>it).
>>
>>An in-order traversal is then just a loop to print nodes from 1
>>upwards. Of course, you need to know when to stop, but that is just a
>>matter of noticing that there were no nodes at the last printed depth.
>>
>>It can be argued that this uses O(log2(n)) space for the number of the
>>node (and for any copies of this number that are needed) but in many
>>analyses of algorithms, the size of integers is taken to be O(1).
>>With that view, this method uses O(1) space and can work with an
>>unmodifiable tree. In practice, any tree that can be manipulated will
>>be able to have its nodes numbered in a constant size integer
>>variable.
>>

<snip>


> It's been done. The objection is that it's not O(n) in time; the
> cost of find_node is O(log n).

I must have missed that part of the spec, sorry. I don't think I've
seen all posts in the thread.

--
Ben.

Richard Harter

unread,
Mar 2, 2008, 13:22:39
to
On Sun, 02 Mar 2008 17:59:34 +0000, Ben Bacarisse
<ben.u...@bsb.me.uk> wrote:

It's not in the spec (which was deliberately vague). So far
people have come up with either O(n) time and muck with the links
or O(n*log n) time and don't touch the links.

spinoza1111

unread,
Mar 2, 2008, 22:40:02
to
On Mar 1, 1:56 am, c...@tiac.net (Richard Harter) wrote:
> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory.  In other words you
> can't use recursion or simulate it with a stack.  As far as I
> know you can't do this without some kind of hack.  Also AFAIK
> there is no way to do this that should ever be used in real code.
> Rules?  We don't need no steenking rules!  Use whatever language
> you like, though I suppose this is a natural for C based
> languages.  Tag bit solutions (i.e. mark a link as visited when
> we go down it and unmark it when we come back up) don't count
> because (a) everyone thinks of it, and (b) it (debatably) uses
> O(log n) memory.  Platform dependencies are okay - this a hack,
> after all.

Thanks for submitting this very interesting challenge!

I am solving the problem (slowly as my schedule permits) without
hacking. I won't look at other solutions until I am done or give up.
This essay response describes my approach. Use of this approach or
comments are welcome.

In a nutshell: "don't hack: SIMULATE a stack".

The untested C sharp code that follows the essay is a stack simulator
which uses a single number to represent a stack with a fixed bound. It
is best viewed in a monospaced font.

It uses an old Fortran trick for representing arrays and stacks I used
in the very limited memory of IBM 1401 Fortran running on a minimal 8K
configuration, and learned in an old Cambridge University Press book
called, if memory serves, "Non-Numerical Programming in Fortran".
Decimal numbers are "squozed" into a large binary number by shifting
them by powers of ten.

It's not a hack and that's the beauty of the thing: using an object-
oriented language means that you can encapsulate your dirty laundry in
a class!

Right now, though, I am stuck on how to write inorder traversal with
my nice new stack. It is easy if you can use recursion, but we cannot.

I claim that using a stack simulated by a single integer gives
"order(1) prime" memory complexity and not O(1) because I view the
direct application of the big O notation to code as an error. Scoffers
may scoff and be damned, but given optimizing OO compilation, there is
no limit to inlining, and for this reason the essential cost of the
stack is (1) the cost of the integer that holds the stack and (2) the
code associated with the stack, which doesn't count.

I am looking forward to reading other replies because in the post tree
I see good people including Julienne and few of the thugs. Hey Randy
where is your solution, big man? But first I will spend some time
trying to do inorder traversal with a stack but without recursion.

Here is the untested code of the stack class, best viewed in a
monospaced font such as Courier New and implemented, in C Sharp, as a
struct.

private struct TYPintegerStack
{
    uint INTstack;
    byte BYTdecimalWidth;
    ushort SHRshift;

    public TYPintegerStack(byte bytWidth) // constructor
    {
        INTstack = 0;
        BYTdecimalWidth = bytWidth;
        SHRshift = (ushort)Math.Pow(10, bytWidth);
    }

    public bool push(ushort shrValue)
    {
        // Values are stored as value + 1 so that an empty stack is an
        // unambiguous 0; the largest pushable value is SHRshift - 2.
        if (shrValue >= SHRshift - 1)
        {
            Console.WriteLine
                ("Value for push " +
                 shrValue.ToString() + " " +
                 "exceeds decimal width " +
                 BYTdecimalWidth.ToString());
            return false;
        }
        long lngTest = (long)INTstack * SHRshift + shrValue + 1;
        if (lngTest > uint.MaxValue)
        {
            Console.WriteLine("Stack overflow");
            return false;
        }
        INTstack = (uint)lngTest; // commit the push
        return true;
    }

    public ushort pop()
    {
        if (INTstack == 0)
        {
            Console.WriteLine("Stack underflow");
            return 0;
        }
        ushort shrReturn =
            (ushort)(INTstack % SHRshift - 1);
        INTstack /= SHRshift;
        return shrReturn;
    }

    public bool Empty
    {
        get { return INTstack == 0; }
    }
}

>
> Richard Harter, c...@tiac.net http://home.tiac.net/~cri, http://www.varinoma.com

spinoza1111

unread,
Mar 2, 2008, 23:06:37
to
> > It's the only planet with chocolate.

The solution doesn't, I believe, even need a simulated stack.

Subsequent to the above post I found the following algorithm in
Wikipedia for "threaded tree traversal inorder" with neither recursion
nor a stack, but so far I haven't got its transliteration to C Sharp
to work and will try to do so. It is time for me to look at the rest
of the posts in this very interesting challenge, but I will continue
to work on this solution as well.

//while hasleftchild(node) do
//    node = node.left
//do
//    visit(node)
//    if (hasrightchild(node)) then
//        node = node.right
//        while hasleftchild(node) do
//            node = node.left
//    else
//        node = node.right
//while node ≠ null
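The pseudocode above assumes the threads already exist in the tree. For
comparison, Morris traversal builds and removes those threads on the
fly, so it needs neither recursion nor a stack; it temporarily rewrites
right links but restores every one of them before finishing. A sketch
in Python (the Node class is illustrative):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def morris_inorder(root):
    """In-order traversal in O(1) extra space: thread the tree on the
    way down (predecessor.right -> current) and unthread it on the way
    back, so the links end up exactly as they started."""
    out = []
    cur = root
    while cur is not None:
        if cur.left is None:
            out.append(cur.key)
            cur = cur.right
        else:
            pred = cur.left                  # find in-order predecessor
            while pred.right is not None and pred.right is not cur:
                pred = pred.right
            if pred.right is None:
                pred.right = cur             # install thread, go left
                cur = cur.left
            else:
                pred.right = None            # remove thread, visit, go right
                out.append(cur.key)
                cur = cur.right
    return out
```

This matches the "modify the tree during the traversal provided it is
in the original condition when you are done" reading of the challenge,
though it does violate a const tree.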

spinoza1111

unread,
Mar 2, 2008, 23:13:21
to
On Mar 1, 3:32 am, Andrey Tarasevich <andreytarasev...@hotmail.com>
wrote:
> Julienne Walker wrote:
> > ...
> > I notice you didn't mention that the structure of the tree can't
> > change, so why not rotate away the left branches as you traverse?
>
>  > ...
>
> He explicitly stated that this approach will "debatably" use O(log n)
> (or even O(n)) memory, which is a violation of the O(1) requirement.

For small instances any stack and any array can be represented in a
long int by storing shifted powers of 2, 10, or n; in an object-
oriented programming language this representation can be encapsulated.
This gives you, I claim, "order 1 prime" memory complexity (where I
refuse to use the unprimed terminology for actual code, for reasons
stated).

The encapsulation can expand the array when it reaches the limit by
means of a switch structure that selects one of a constant number of
small integer-represented arrays. While the overall size of the array
or stack remains bounded, I believe that in the misuse of "O(1)", this
is still "O(1)" or "O(K)".
>
> --
> Best regards,
> Andrey Tarasevich

spinoza1111

unread,
Mar 2, 2008, 23:15:46
to
On Mar 1, 1:56 am, c...@tiac.net (Richard Harter) wrote:
> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory.  In other words you
> can't use recursion or simulate it with a stack.  As far as I
> know you can't do this without some kind of hack.  Also AFAIK
> there is no way to do this that should ever be used in real code.
> Rules?  We don't need no steenking rules!  Use whatever language
> you like, though I suppose this is a natural for C based
> languages.  Tag bit solutions (i.e. mark a link as visited when
> we go down it and unmark it when we come back up) don't count
> because (a) everyone thinks of it, and (b) it (debatably) uses
> O(log n) memory.  Platform dependencies are okay - this a hack,
> after all.
>
> Richard Harter, c...@tiac.net http://home.tiac.net/~cri, http://www.varinoma.com
> Save the Earth now!!
> It's the only planet with chocolate.

Not sure what you mean, now, by O(1) and wish you would not use the
terminology for the reasons I have given elsethread. Does it mean that
I cannot use ANY extra scalar variables, including my encapsulated
stack, simulated as it is in a long integer? If not I be fucked.

spinoza1111

unread,
Mar 2, 2008, 23:19:58
to
On Mar 1, 1:29 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> Paul Hsieh said:
>
> <snip>
>
> > BTW, What the hell happened to this group?!?!
>
> Mr Nilges (aka "spinoza") happened to this group.

Get out of this thread: you are off topic, offensive and a vandal. It
is for discussion of Richard Harter's problem, not for discussion of
personalities. If you have nothing to contribute you do not belong
here.

spinoza1111

unread,
Mar 2, 2008, 23:25:47
to
On Mar 1, 3:42 pm, "Remo D." <rdentato> wrote:
> Paul Hsieh ha scritto:
>
>
>
> > BTW, What the hell happened to this group?!?!  There is not a lot of
> > substance in the other posts around here.
>
> I can't resist to comment on this.
>
> It's a fact that some posts have the clear intent to provoke flaming
> reactions and/or contain personal attacks but more than the original
> posters of those messages I blame those who can't resist answering to
> them starting a snowball reaction.

I believe people have, in a network subject to corporate and
government surveillance, the right to go off topic when they are
attacked as I have been attacked, especially if they then, as I do,
return the conversation to programming by posting challenges such as
my spark notes test and code including my sort lab.

Otherwise at any time, corporate and government surveillance can
destroy reputations with no chance of redress.

The blame lies CLEARLY with certain individuals here, especially
Richard Heathfield, Randy Howard, and "Willem" who enter threads, as
Richard did on March 1, to name and disrespect a person.

Responders are at a maximum guilty of one thing, and one thing only:
going offtopic to defend their reputations, or, in the rare case when
someone defends another (rare because of a culture of selfishness and
of cowardice), another's good name.

Whereas initiators, including Heathfield here, are guilty of that
infraction AND of causing pain and harm to others deliberately.


>
> I think that, for how hard it could be (http://xkcd.com/386/), this

spinoza1111

unread,
Mar 2, 2008, 23:27:21
to
On Mar 2, 9:29 am, Randy Howard <randyhow...@FOOverizonBAR.net> wrote:
> On Sat, 1 Mar 2008 18:52:43 -0600, dj3va...@csclub.uwaterloo.ca.invalid
> wrote
> (in article <fqctor$m2...@rumours.uwaterloo.ca>):
>
> > In article
> > <cb11b705-d1b1-4f04-861f-8331245c7...@d21g2000prf.googlegroups.com>,
> > mike3  <mike4...@yahoo.com> wrote:
> >> On Feb 29, 10:56 am, c...@tiac.net (Richard Harter) wrote:
>
> >>> As far as I
> >>> know you can't do this without some kind of hack.  Also AFAIK
> >>> there is no way to do this that should ever be used in real code.
>
> >> What would happen if you tried using it in "real" code?
>
> > The maintenance programmer who inherited it from you would come and
> > break your legs.
>
> :-)
>
> Maybe if this actually happened regularly, code quality would be much
> better overall.

With NOTHING to contribute, because he's a coward who's afraid of
being wrong, Randy Howard is attracted to the smell of blood.

Richard Heathfield

unread,
Mar 3, 2008, 00:25:40
to
spinoza1111 said:

<snip>



> Not sure what you mean, now, by O(1)

He means that you can use as much additional memory as you like, but it
must be a fixed amount, regardless of the number of nodes in the tree.

> and wish you would not use the terminology for the reasons I
> have given elsethread.

If you don't use the same terminology that everyone else uses, that's going
to make your life more difficult than is entirely necessary.

> Does it mean that I cannot use ANY extra scalar variables, including
> my encapsulated stack, simulated as it is in a long integer?

No, but if your stack grows in a way that depends on the number of nodes in
the tree, that would not be acceptable within the terms of the original
challenge. (And no, one stack push per tree depth isn't acceptable either,
because although it isn't O(N) it is still O(log N).)

The obvious solution (adding a thread to the tree) is wrong because it
requires one extra pointer per node, which is O(N) extra memory.

Richard Heathfield

unread,
Mar 3, 2008, 00:31:08
to
spinoza1111 said:

> On Mar 1, 1:29 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> Paul Hsieh said:
>>
>> <snip>
>>
>> > BTW, What the hell happened to this group?!?!
>>
>> Mr Nilges (aka "spinoza") happened to this group.
>
> Get out of this thread:

Oh, hush now. The guy asked a question, and he got an answer. If you don't
want to be blamed for disrupting the group, don't disrupt the group.

> you are off topic, offensive and a vandal.

Insults from you are nothing new, and bear no relation to reality. Yawn.

> It is for discussion of Richard Harter's problem, not
> for discussion of personalities.

I was answering the question "what happened to this group", which is
perfectly topical here.

> If you have nothing to contribute you do not belong here.

I contributed the answer to his question. I have also contributed an answer
to your question. Your own contributions to this subject that I have read
so far consist of an incorrect answer, an untested answer, and a question
that betrays fundamental ignorance of computer science. If that is the
measure of your contribution, then according to your own arguments you
don't belong here.

Robert Maas, see http://tinyurl.com/uh3t

unread,
Mar 3, 2008, 01:24:18
to
> From: c...@tiac.net (Richard Harter)

> Here is a little challenge - print the contents of a binary tree
> in order only using O(1) additional memory. In other words you
> can't use recursion or simulate it with a stack.

I assume there are no up links, otherwise the algorithm is trivial.

Is the algorithm required to use only O(1) additional memory
regardless of the size of the tree, totally unbounded in size, thus
running only on a hypothetical computer with unbounded address
space hence unbounded number of bits in a pointer? If not, i.e. if
there's an absolute maximum size on memory, then that maximum size
is O(1), with a very very large constant, and again the algorithm
is trivial.

If address space is unbounded, then all algorithms that emulate a
stack by using bit positions within an integer or other bit vector
are invalid because for sufficiently large address space that
particular constant-size bit-vector will overflow.

Likewise if address space is unbounded, hence size of pointers is
unbounded, then a trick based on XOR is invalid because the XOR of
a short pointer and a longer pointer will be a long pointer which
can't be temporarily stored overwriting the short pointer. (If you
allocate a long pointer in each such case, then extra storage is
O(N) or even worse in the worst case, violating the requirement.)

The only general algorithm I've seen is one that destroys the tree
in the process of printing it out, either by rotating to remove all
non-leaf left branches, or outright deleting each node as it's
printed.

> Also AFAIK there is no way to do this that should ever be used in
> real code.

That sounds like a clue that destroying the tree as its nodes are
printed is completely legal. Note that if rotations are done
immediately before descending into left branches, then we can print
a special mark each time we do such a rotation, thereby providing
an audit trail whereby the printout can be used to rebuild the
original tree later, at least I think so.

Example where only leaves have data:
tree = ((((((foo . bar) . (42 . 27)) . 1) . 2) . 3) . 4)
Left branch *not* leaf, rotate:
tree = (((((foo . bar) . (42 . 27)) . 1) . 2) . (3 . 4))
Left branch *not* leaf, rotate:
tree = ((((foo . bar) . (42 . 27)) . 1) . (2 . (3 . 4)))
Left branch *not* leaf, rotate:
tree = (((foo . bar) . (42 . 27)) . (1 . (2 . (3 . 4))))
Left branch *not* leaf, rotate:
tree = ((foo . bar) . ((42 . 27) . (1 . (2 . (3 . 4)))))
Left branch *not* leaf, rotate:
tree = (foo . (bar . ((42 . 27) . (1 . (2 . (3 . 4))))))
Left branch *is* leaf, output left leaf, keep only right sub-tree:
tree = (bar . ((42 . 27) . (1 . (2 . (3 . 4)))))
Left branch *is* leaf, output left leaf, keep only right sub-tree:
tree = ((42 . 27) . (1 . (2 . (3 . 4))))
Left branch *not* leaf, rotate:
tree = (42 . (27 . (1 . (2 . (3 . 4)))))
Left branch *is* leaf, output left leaf, keep only right sub-tree:
tree = (27 . (1 . (2 . (3 . 4))))
Left branch *is* leaf, output left leaf, keep only right sub-tree:
tree = (1 . (2 . (3 . 4)))
Left branch *is* leaf, output left leaf, keep only right sub-tree:
tree = (2 . (3 . 4))
Left branch *is* leaf, output left leaf, keep only right sub-tree:
tree = (3 . 4)
Left branch *is* leaf, output left leaf, keep only right sub-tree:
tree = 4
Current tree is leaf, output it, then output end-of-traversal mark.

Output was thus: R R R R R foo bar R 42 27 1 2 3 4 .
To reconstruct the tree, read the tokens preceding the . in reverse
sequence and for each token do the opposite transformation:
(setq tree (cons token tree)) instead of (setq tree (cdr tree))
rotate left
(setq tree (cons (cons (car tree) (cadr tree)) (cddr tree)))
instead of rotate right
(setq tree (cons (caar tree) (cons (cdar tree) (cdr tree))))
except the very first (in reverse order) token is just used to init the tree.

One little caveat: Reversing the output to produce the input for
reconstruction requires O(n) extra storage, because the number of
R's might be almost as large as the total number of nodes in the
tree. But if your OS allows you to read a file backwards, that's
not a problem. We *assume* you have enough space on your output
device to store the entire list of output tokens, otherwise what
was the point of the algorithm? Ah, what if the output device has
only enough space for the leaf data, not for all those R's also?
Then the algorithm is permanently destructive, c'est la vie.

If both leaves and forking nodes have data, the above algorithm is
only a little bit more complicated. If only forking nodes have
data, there really aren't any leaves, just forking nodes with both
pointers null; the above algorithm is then just a little bit different.

> Platform dependencies are okay - this a hack, after all.

Well just about every platform I ever heard of has a maximum
address space determined by the size of the various memory-address
registers in the CPU, which reduces to the case where the maximum
possible address space is a constant O(1). The trivial algorithm is
to buy a second machine of maximum possible size and use it to
store just the up pointers. The communication protocol to link the
two computers together to exchange info is of course O(1) also.

OK, here's a possible almost-non-destructive algorithm: Every time
you go down a left branch, you have at hand three pointers, the
parent node, the current node, and the down-left node. You no
longer need the pointer from the current node to the down-left node, so
you temporarily overwrite that with an up pointer. When you come back up
from a left branch, you again have three local pointers and you
restore the left pointer. When you go down a right branch, you
don't need to do anything special. I think I saw an allusion to
this trick elsewhere in the thread, but I haven't studied it to
make sure it really works. How do you tell whether you are coming
up from a left or right branch, without using O(n) or O(log n) tag
bits depending on whether each node has a tag bit available or you
have a stack of tag bits? If the down-right pointer of the parent
node in fact points to the node you just came up from, you were
down a right branch, else you were down a left branch. I still need
to think if this works in all cases.

Now if it's a binary SEARCH tree, with unique keys, it's all much
easier, as somebody mentioned elsewhere in the thread.

spinoza1111

unread,
Mar 3, 2008, 02:23:35
to
On Mar 3, 1:25 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 said:
>
> <snip>
>
> > Not sure what you mean, now, by O(1)
>
> He means that you can use as much additional memory as you like, but it
> must be a fixed amount, regardless of the number of nodes in the tree.

You don't belong in this discussion since you entered to bully people
and not contribute a solution, but this redeems you by a small value
close to zero. I was aware that this was probably true, but I wanted
to make sure. Thank you.

OK: any O(1) solution can use a stack of arbitrary bounds by mapping
that stack to a long integer and packing its values, perhaps adding
one to integer values to disambiguate them from zero.

Unfortunately, even misused computational complexity theory does not
address the practical issue of the difference between code that is
bounded by some N and code that uses memory allocation, which
demonstrates the weakness of using complexity theory without
considerable change in real programming. In practice, "real" solutions
will "build larger and larger arrays as the tree gets larger", but the
illusion of non "O(1)" complexity disappears IF the real program has
allocated a fixed array!

Because it's a bad idea to use computational complexity theory to talk
about code, and because nobody ever got around to it, you could pass
Harter's test by pointing out that a program with a large array
declared using a constant was O(1) where the array is "one" memory
element.

This isn't a problem with Harter's excellent adventure. It is a
problem with the sloppy use of big O.


>
> > and wish you would not use the terminology for the reasons I
> > have given elsethread.
>
> If you don't use the same terminology that everyone else uses, that's going
> to make your life more difficult than is entirely necessary.

My life in this is difficult


>
> > Does it mean that I cannot use ANY extra scalar variables, including
> > my encapsulated stack, simulated as it is in a long integer?
>
> No, but if your stack grows in a way that depends on the number of nodes in
> the tree, that would not be acceptable within the terms of the original
> challenge. (And no, one stack push per tree depth isn't acceptable either,
> because although it isn't O(N) it is still O(log N).)

OK, given this insightful comment (keep behaving yourself), using a
bounded stack in an integer is "cheating", one that in fact exposes
why using order bazonga is such imprecise language.

An integer simulating a stack, packed to an arbitrary amount using
more sophisticated algorithms, can contain larger and larger stacks
with a fixed bound nonetheless. "Order n" notation simply fails to
show us whether or not this is a different solution from allocating a
fixed array.

O(1) memory complexity as applied to real code is a total confusion. A
SCIENTIFIC definition would for example tell you how wide each memory
cell is, but if you use O(1) sloppily in reference to code, you can
allocate bytes, short integers and long integers, and you can even
represent arrays and stacks in one long integer or a finite known
number of long integers, and have this O(1) complexity.

This is, in my view, the result of the fact that since I left CS grad
school, computer science has been delivered as an incoherent mixture
of the infantile disorder C and real science.


>
> The obvious solution (adding a thread to the tree) is wrong because it
> requires one extra pointer per node, which is O(N) extra memory.

Correct.

spinoza1111

unread,
Mar 3, 2008, 02:34:39
to
On Mar 3, 1:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 said:
>
> > On Mar 1, 1:29 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> Paul Hsieh said:
>
> >> <snip>
>
> >> > BTW, What the hell happened to this group?!?!
>
> >> Mr Nilges (aka "spinoza") happened to this group.
>
> > Get out of this thread:
>
> Oh, hush now. The guy asked a question, and he got an answer. If you don't
> want to be blamed for disrupting the group, don't disrupt the group.

DON'T introduce my name in the context of abuse if you care for this
ng. Every time you start something, here by dragging my name into the
discussion at a point in time when I had not joined the discussion,
you are responsible for (1) causing harm to another that is actionable
in law and (2) the ensuing thread vandalism, including my replies,
because I'll be God damned if your behavior will silence me as is your
purpose.


>
> > you are off topic, offensive and a vandal.
>
> Insults from you are nothing new, and bear no relation to reality. Yawn.
>
> > It is for discussion of Richard Harter's problem, not
> > for discussion of personalities.
>
> I was answering the question "what happened to this group", which is
> perfectly topical here.

No it is not. This thread is about Harter's challenge.


>
> > If you have nothing to contribute you do not belong here.
>
> I contributed the answer to his question. I have also contributed an answer
> to your question. Your own contributions to this subject that I have read
> so far consist of an incorrect answer, an untested answer, and a question
> that betrays fundamental ignorance of computer science. If that is the
> measure of your contribution, then according to your own arguments you
> don't belong here.

I have said it before and I shall say it again. I have "forgotten"
more CS than you will ever know, since you can only express what you
have learned in terms of the infantile disorder C and you are
incapable of coherent English. As regards computational complexity
theory, I was cautioned by the professor and the textbook that it does
NOT apply to real code without considerable work.

The ignorance, Mr Heathfield, is yours.

Is an array "one" memory element, or many? I agree that the key
question is a proportional relationship of memory use to size of tree
and there is a practical side to the computational memory complexity.

But, Harter's question belongs to programming, and not to computer
science, where programming is writing code (using as needed computer
science without screwing the latter up). The CS-theoretical question
would be whether in essence no algorithm exists that does not create a
new data structure (or pad tree nodes in the example you gave) that
grows as the tree grows unbounded. The programming question was
whether you could "hack" a solution.

The do-ability of simulating a stack means that as a programmer, and
not a computer scientist, you can do the deed, and if you use OO it's
not even a hack...even though any specific edition of the code will
limit the tree (but not grow in memory use), in the same way most
hacks will probably allocate bounded extra arrays when failing to meet
the terms of the challenge.

Richard Heathfield

ongelezen,
3 mrt 2008, 03:11:3203-03-2008
aan
spinoza1111 said:

> On Mar 3, 1:25 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 said:
>>
>> <snip>
>>
>> > Not sure what you mean, now, by O(1)
>>
>> He means that you can use as much additional memory as you like, but it
>> must be a fixed amount, regardless of the number of nodes in the tree.
>
> You don't belong in this discussion

You don't get to make that decision.

> but this [statement of the blindingly obvious] redeems


> you by a small value close to zero. I was aware that
> this was probably true,

No doubt you will forgive me for not believing you, but I don't believe
you.

> OK: any O(1) solution can use a stack of arbitrary bounds by mapping
> that stack to a long integer and packing its values, perhaps adding
> one to integer values to disambiguate them from zero.

If by "long integer" you mean a fixed-size integer value (e.g. 32, 64, or
whatever fixed number of bits), then no, you can't fit arbitrarily many
stack nodes in it. And if by "long integer" you mean an expandable
bignum-style integer, you have not demonstrated why the maximum size of
the bignum does not depend on N or some derivative thereof. That doesn't
mean you're wrong, but it doesn't mean you're right either.

<snip>

Richard Heathfield

ongelezen,
3 mrt 2008, 03:20:3203-03-2008
aan
spinoza1111 said:

> On Mar 3, 1:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 said:
>>
>> > On Mar 1, 1:29 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> Paul Hsieh said:
>>
>> >> <snip>
>>
>> >> > BTW, What the hell happened to this group?!?!
>>
>> >> Mr Nilges (aka "spinoza") happened to this group.
>>
>> > Get out of this thread:
>>
>> Oh, hush now. The guy asked a question, and he got an answer. If you
>> don't want to be blamed for disrupting the group, don't disrupt the
>> group.
>
> DON'T introduce my name in the context of abuse if you care for this
> ng.

If you don't want your name associated with the abuse you ladle down this
newsgroup's throat, stop abusing the group.

> you are responsible for (1) causing harm to another that is actionable
> in law

You haven't sued me yet. I take this to mean that you yourself are not
convinced that you're right.

> and (2) the ensuing thread vandalism, including my replies,
> because I'll be God damned if your behavior will silence me as is your
> purpose.

Oh, nonsense. I'm a free speech advocate. If you want to advertise your
idiocy to the entire world, who am I to stop you?

>> I was answering the question "what happened to this group", which is
>> perfectly topical here.
>
> No it is not.

Er, yes, it is. It's a meta-discussion about the group. By longstanding
Usenet tradition, such discussions belong in the group that is being
discussed.

> This thread is about Harter's challenge.

Yes, but this subthread isn't. Invest in a newsreader (there are some
really good *free* ones available), and learn all about the marvels of
threaded discussions.

> I have said it before and I shall say it again.

Undoubtedly. That doesn't make it true.

> I have "forgotten" more CS than you will ever know,

In support of your claim, I must admit that there seems to be little
evidence of your having remembered any CS at all.

> Is an array "one" memory element, or many?

Think bits. If the number of bits of extra storage you need is proportional
to the number of nodes in the tree or to some expression involving that
number, then your solution is invalid.

Or to put it more monosyllabically: if you need more bits for more N, you
got it wrong.

<snip>

spinoza1111

ongelezen,
3 mrt 2008, 05:23:0603-03-2008
aan
On Mar 3, 4:20 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111said:
>
>
>
>
>
> > On Mar 3, 1:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>spinoza1111said:
>
> >> > On Mar 1, 1:29 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> Paul Hsieh said:
>
> >> >> <snip>
>
> >> >> > BTW, What the hell happened to this group?!?!
>
> >> >> Mr Nilges (aka "spinoza") happened to this group.
>
> >> > Get out of this thread:
>
> >> Oh, hush now. The guy asked a question, and he got an answer. If you
> >> don't want to be blamed for disrupting the group, don't disrupt the
> >> group.
>
> > DON'T introduce my name in the context of abuse if you care for this
> > ng.
>
> If you don't want your name associated with the abuse you ladle down this
> newsgroup's throat, stop abusing the group.

No, the abuse in all cases starts with you. In this case, you injected
my name into a group when you knew that I would respond to the
offensive use. You haven't proved, nor are you qualified to prove,
that my level of knowledge or style damages the group as such.
Instead, you in all cases since 2000 have taken personal exception to
my style and approach, and without being able to establish that it
causes damage, you initiate campaigns to which I respond as is my
right, and as a result, threads are cluttered with garbage.

I have frequently selected silence as a response. I hardly ever
replied, for example, between 2004 and this year. Every time I have re-
entered you have restarted the campaign.

Were you both a well-qualified computer scientist and programmer, you
might be able to prove that my contributions damage the group. But
your case has always been very weak, based on "making mistakes" which
as I have shown we all make from time to time.

>
> > you are responsible for (1) causing harm to another that is actionable
> > in law
>
> You haven't sued me yet. I take this to mean that you yourself are not
> convinced that you're right.

The mills of the gods grind slow but fine.

>
> > and (2) the ensuing thread vandalism, including my replies,
> > because I'll be God damned if your behavior will silence me as is your
> > purpose.
>
> Oh, nonsense. I'm a free speech advocate. If you want to advertise your
> idiocy to the entire world, who am I to stop you?

No, you are not. You are a free speech libertarian, which is something
quite different.

A free speech advocate encourages continual expression of different
viewpoints within boundaries set by the common sense difference
between speech intended as communication, and what Justice
Holmes of the USA exemplified by "shouting fire in a crowded theater"
when there is no fire, and "fighting words".

A free speech libertarian doesn't acknowledge Justice Holmes'
distinction and because libertarians are generally uneducated members
of the lower middle class, doesn't know it either. His "freedom of
speech" includes his right to "shout fire in a crowded theater" and to
act-in-speech, to accomplish goals outside of the goal of that sort of
speech which should be free, that is, getting to a group consensus on
the truth.

In your case, that goal is the effective destruction of any poster who
challenges your sense of dominance, or any computer author whose books
outsell yours, such as Schildt.

This is libel and "shouting fire in a crowded theater" because you
don't even understand the distinction between speech protected by the
US First Amendment and the unwritten British constitution, and
behavior that merely uses speech to accomplish a goal of disrupting a
crowded theater that isn't on fire for shits and giggles, libelously
destroying the reputation of Schildt, or here, injecting my patronymic
in a newsgroup to which I had not posted in order to cause "yet
another fire in a crowded theater": that is, a newsgroup in which an
interesting technical challenge is vandalized by your repetitious and
manic posting.

I acknowledge that my exercise of my free speech involves a wrong to
the group in the second order. However, I have seen good people
destroyed in office politics when deprived of that which makes us
human, our ability or right to calmly respond, and to return from the
space of wrong and harm to a common search for truth.

I'd warn you, again, that court officers above the level of bailiff
are able to see the distinction I have made.

Richard Heathfield

ongelezen,
3 mrt 2008, 05:35:1903-03-2008
aan
spinoza1111 said:
> On Mar 3, 4:20 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111said:
<snip>

>> If you don't want your name associated with the abuse you ladle down
>> this newsgroup's throat, stop abusing the group.
>
> No, the abuse in all cases starts with you. In this case, you injected
> my name into a group

No, you introduced your name into this group by posting here in the first
place.

<usual verbiage snipped>

Clive D. W. Feather

ongelezen,
3 mrt 2008, 06:04:1803-03-2008
aan
In article
<9fbb7222-deea-4717...@s8g2000prg.googlegroups.com>,
spinoza1111 <spino...@yahoo.com> writes

>The untested C sharp code that follows the essay is a stack simulator
>which uses a single number to represent a stack with a fixed bound. It
>is best viewed in a monospaced font.
>
>It uses an old Fortran trick for representing arrays and stacks

It's a trick I was using in TI (something even more primitive than
assembler).

However, it's not any use in this case - it can only store a fixed
number of values, the number of digits in "uint" divided by bytWidth.
Since the problem allows for a tree of unlimited depth, you will
overflow the stack.

In other words, hiding the stack doesn't stop it being a stack, which we
already know doesn't work.

>Here is the untested code of the stack class, best viewed in a
>monospaced font such as Courier New and implemented, in C Sharp, as a
>struct.

Noting that it is untested:

> public bool push(ushort shrValue)
> {
> if (shrValue >= Math.Pow(10, BYTdecimalWidth))
> {
> Console.WriteLine
> ("Value for push " +
> shrValue.ToString() + " " +
> "exceeds decimal width " +
> BYTdecimalWidth.ToString());
> return false;
> }
> long lngTest = INTstack * SHRshift + shrValue + 1;
> if (lngTest > Math.Pow(2, 31) - 1)
> {
> Console.WriteLine("Stack overflow");
> return false;
> }
> return true;
> }

This function never changes INTstack. Therefore it isn't actually
storing the value in the stack!

Why are you limiting the stored value to 2^31-1? And why are you calling
Math.Pow every time rather than just writing the constant 0x7FFFFFFF?
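
For what it's worth, here is a minimal C sketch (names, widths and the
value+1 encoding are my own assumptions, not the quoted C#) of what a
working push/pop pair for this packed representation looks like. Note
that, unlike the quoted code, push actually updates the packed word,
and the overflow test needs no repeated power computation:

```c
#include <stdint.h>

/* "Stack packed into one integer": each entry is one decimal digit
   wide (shift = 10); values 0..8 are stored as value+1 so that an
   empty stack (packed == 0) is unambiguous. Nine entries are
   guaranteed to fit in a uint32_t, which is exactly why the trick
   cannot give an unbounded stack. */
typedef struct { uint32_t packed; } intstack;

static int push(intstack *s, uint32_t v)
{
    if (v > 8)
        return 0;                              /* value too wide for one digit */
    if (s->packed > (UINT32_MAX - (v + 1)) / 10)
        return 0;                              /* stack overflow */
    s->packed = s->packed * 10 + (v + 1);
    return 1;
}

static int pop(intstack *s, uint32_t *v)
{
    if (s->packed == 0)
        return 0;                              /* stack empty */
    *v = s->packed % 10 - 1;
    s->packed /= 10;
    return 1;
}
```
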

--
Clive D.W. Feather | Home: <cl...@davros.org>
Tel: +44 20 8495 6138 (work) | Web: <http://www.davros.org>
Fax: +44 870 051 9937 | Work: <cl...@demon.net>
Please reply to the Reply-To address, which is: <cl...@davros.org>

Clive D. W. Feather

ongelezen,
3 mrt 2008, 06:11:1003-03-2008
aan
In article
<3d782936-6a2d-4016...@i7g2000prf.googlegroups.com>,
spinoza1111 <spino...@yahoo.com> writes

>OK: any O(1) solution can use a stack of arbitrary bounds by mapping
>that stack to a long integer and packing its values, perhaps adding
>one to integer values to disambiguate them from zero.

No it can't. If the maximum value you are going to push is M, and the
maximum value that can be stored in a long integer is L, then you can
only push P values where P = floor (log L / log M) (the base used for
the logarithms doesn't matter).

>> (And no, one stack push per tree depth isn't acceptable either,
>> because although it isn't O(N) it is still O(log N).)
>
>OK, given this insightful comment (keep behaving yourself), using a
>bounded stack in an integer is "cheating", one that in fact exposes
>why using order bazonga is such imprecise language.

No, it isn't cheating. It simply doesn't work because it can only push a
fixed number of values before overflowing.

>An integer simulating a stack, packed to an arbitrary amount using
>more sophisticated algorithms,

that don't exist. You can't pack "to an arbitrary amount" - Shannon
won't let you.

>O(1) memory complexity as applied to real code is a total confusion. A
>SCIENTIFIC definition would for example tell you how wide each memory
>cell is, but if you use O(1) sloppily in reference to code, you can
>allocate bytes, short integers and long integers, and you can even
>represent arrays and stacks in one long integer or a finite known
>number of long integers, and have this O(1) complexity.

No, everyone else here realizes that, so long as memory cells are each a
fixed and small number of bits (whether 8, 39, or 128), then O notation
makes perfect sense and gives useful answers. The binary tree will
contain O(N) cells and, if balanced, have depth O(log N); at the extreme
unbalanced case, it could also have depth O(N).

Nelu

ongelezen,
3 mrt 2008, 12:00:4103-03-2008
aan
On Sat, 01 Mar 2008 01:18:20 +0000, Nelu wrote:

> On Fri, 29 Feb 2008 21:24:43 +0000, Richard Harter wrote:
>
>> On 29 Feb 2008 20:25:55 GMT, Nelu <spam...@gmail.com> wrote:
>>
>>>On Fri, 29 Feb 2008 17:56:24 +0000, Richard Harter wrote:
>>>
<snip>
>>
>> Nice. IIANM the execution cost is O(n*h) where n is the number of
>> nodes and h is the maximum height - average case O(n*log(n)) and worst
>> case O(n*n). Can this be improved to worst case O(n*log(n))?
>>
>>
> Not sure. I'd say not at a first glance. I may take another look at a
> later time. I'll have to see if it's possible to get rid of some
> calculations related to finding the path to the node to make it faster
> but that's not going to make it better by an order of magnitude. This is
> probably the price for O(1) and not changing the structure :).

I doubt that it can get you less than O(n^2), unfortunately (unless it's
guaranteed to be a balanced tree).

I also think this is a general solution. It doesn't have to be a binary
tree as long as the maximum number of children for a node is known.


--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)

Paul Hsieh

ongelezen,
3 mrt 2008, 17:58:0603-03-2008
aan
On Mar 1, 10:15 pm, Paul Hsieh <websn...@gmail.com> wrote:
> [...] I've posted two solutions:

Third time's the charm!

> [...] one which destroys the


> tree altogether (not a violation of the original problem statement),
> and another which solves the problem completely without issue except
> for a potential const violation (i.e., my solution would not work if
> the links were, say, web page links that I don't have write access
> to).  Its possible some bright guy may come along and show us all how
> to do this with a *const* binary tree in O(1) space and no hacks (I

> highly doubt this, but I recall a time when I had to eat crow in
> USENET over the existence of an O(n) median finding algorithm, so I
> won't raise the stakes too high ;) ).

Fortunately, the one to make me eat crow this time is *myself*. :)

Indeed, there is a non-destructive, const preserving O(1) space, and
O(n*ln n) time algorithm for doing this. Drumroll please.

The principle is simple. When traversing a sorted binary tree in
search of a particular value, one can use one single temp pointer and
a boolean to get you sufficient information to deterministically find
the exact next higher value as well. The binary value will encode
whether or not that next value exists either below and to the right of
the found value or else its exactly the value that is directly above
it. The temp pointer will point to the node in the right pointer of
the found value or the node above respectively in these two cases, and
will be NULL if there is not next (the found value corresponded to the
highest found value in the whole tree.)

Skipping over the first case for the moment, let us describe the
general algorithm. The algorithm will guide its search according to
the tuple (b, t) where b is the boolean described above, and t is the
temp pointer described above. If b corresponds to the above node,
then the search will look directly for t using a typical binary tree
search (this requires no more than O(1) space and O(ln n) steps,
obviously). If b corresponds to the "somewhere below and to the right"
node, then it will look for t, and once it finds it, then try to
repetitively traverse in the leftward direction until there are no
more leftward branches.

Ok, so the trick is, how is (b, t) generated for successive
iterations? During each traversal step noted above, we are always
walking a direct path to t, or else beyond t but in a leftward
direction. At each step, if we take a leftward step then we update b'
to be "upwards" and t' to be the node we are stepping away from, and
if we take a rightward step then *if* we see yet another rightward
link from the step we are going to, then update b' to be "down and to
the right" and t' to be the node from that second rightward link,
otherwise leave b' and t' alone. Then when you finally reach the
intended value for this iteration, print it out, update (b, t) with
the values (b', t'), loop back to the top of the tree and do the next
iteration. You keep doing this until t is NULL.

The first iteration is just running down the left side of the tree
then setting b to "upwards" and t to the second to last node pointing
leftward. If there is no left node, then set t to the right node of
the top and b to "down and to the right", then print the top. If there
are no left or right nodes then print the top and stop. If there is no
top then just return.

So the general idea is not to search for the *minimum* entry on each
pass, but rather to search for a particular node (or else the left
most child of that node) on each pass. *While* this traversal is
being done, it is also calculating where the "next node" would be
(either deterministically, or just the direction of it) if the
traversal path were to terminate at each step. This is trivial if you
are going leftward, and not too difficult if you are going rightward
(you know it must either be at the end, or else must follow from some
further rightward traversal).

I would have written up the C code for this, but I was too excited, I
just had to post it. At this point I demand a gold star. :)

> So the question is -- who gets to wield the bat, and set the standard
> for when people's legs get whacked?  I think my legs would probably be
> ok for a while, but I wouldn't envy the position of some other people.

So even more curiously, if *I* were the one holding the bat, should I
whack my own legs twice for my two sub-optimal solutions before
hitting upon this one?

P.S.: Good problem Richard. Thanks for the brain exercise!

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

moi

ongelezen,
3 mrt 2008, 19:34:0803-03-2008
aan
On Mon, 03 Mar 2008 14:58:06 -0800, Paul Hsieh wrote:

> On Mar 1, 10:15 pm, Paul Hsieh <websn...@gmail.com> wrote:
>> [...] I've posted two solutions:
>
> Third time's the charm!
>
> >> [...] one which destroys the
>> tree altogether (not a violation of the original problem statement),
>> and another which solves the problem completely without issue except
>> for a potential const violation (i.e., my solution would not work if
>> the links were, say, web page links that I don't have write access to).
>>  Its possible some bright guy may come along and show us all how to do
>> this with a *const* binary tree in O(1) space and no hacks (I highly
>> doubt this, but I recall a time when I had to eat crow in USENET over
>> the existence of an O(n) median finding algorithm, so I won't raise the
>> stakes too high ;) ).
>
> Fortunately, the one to make me eat crow this time is *myself*. :)

Well, I tried to resurrect your contributed code, but I failed to do
so ;-[

> Indeed, there is a non-destructive, const preserving O(1) space, and
> O(n*ln n) time algorithm for doing this. Drumroll please.

Yes, that is the Deutsch-Schorr-Waite algorithm, posted by Gene on
Saturday, 18:30 PST.

The Lindstrom reference (also posted by Gene) solves the "need one bit
per node" problem by using the (numerical) ordering of the left and right
pointers as a mark bit. While eating the tree, it uses the left-pointers
as a stack (linked list, actually) to backtrack once the subtree below
the "cursor" becomes exhausted. Excellent hack!


>
> P.S.: Good problem Richard. Thanks for the brain exercise!

Could not agree more.

AvK

spinoza1111

ongelezen,
4 mrt 2008, 00:53:3004-03-2008
aan
On Mar 3, 7:11 pm, "Clive D. W. Feather" <cl...@on-the-
train.demon.co.uk> wrote:
> In article
> <3d782936-6a2d-4016-85f9-e475d98c6...@i7g2000prf.googlegroups.com>,spinoza1111<spinoza1...@yahoo.com> writes

>
> >OK: any O(1) solution can use a stack of arbitrary bounds by mapping
> >that stack to a long integer and packing its values, perhaps adding
> >one to integer values to disambiguate them from zero.
>
> No it can't. If the maximum value you are going to push is M, and the
> maximum value that can be stored in a long integer is L, then you can
> only push P values where P = floor (log L / log M) (the base used for
> the logarithms doesn't matter).

Arbitrary in the sense of fixed. My poor choice of words. I realized
it is small: that's why I said that you can increase this fixed bound
by using a fixed (constant) number of stack objects, each bounded in
the way you say, to create a truly arbitrary, but limited and bounded,
stack for each new edition of the code.

It's obvious to all of us that a stack represented as powers of N in
an integer is bounded. You don't have to belabor the point with
pretentious mathematics.

What's interesting about the pretentious mathematics (cf. Concrete
Mathematics by Knuth) is that like the use of the infantile disorder C
to teach computer "science", it fails to teach useful lessons.

Yes, you can only "push P values where P = floor(log L/log M)", and
P, although arbitrary, is constant as well. You can push P * K if you
reengineer my stack-as-integer class to use switch() to select from K
different "stacks", each one long integer.

But any time a student hands in a solution where the stack is
represented as an array with fixed bounds, his solution is O(1) in one
sense, and not in another, because you've misused big O. For the same
reason you think that strlen(s) must run slower than K-1 (which it may
or may not in practice), you don't know the size of memory cell.

Programmers learn big O but not the more interesting lesson that x-y
is one less than the size of an integer range because the use of an
infantile disorder to teach a science ruins the science.

>
> >> (And no, one stack push per tree depth isn't acceptable either,
> >> because although it isn't O(N) it is still O(log N).)
>
> >OK, given this insightful comment (keep behaving yourself), using a
> >bounded stack in an integer is "cheating", one that in fact exposes
> >why using order bazonga is such imprecise language.
>
> No, it isn't cheating. It simply doesn't work because it can only push a
> fixed number of values before overflowing.

This is also true about most C solutions because C lacks garbage
collection and memory management, and most programmers are too lazy to
implement it specially for a given solution using C's primitive
"malloc" function.

Any sample solution will probably just allocate a fixed size array to
hold the stack. This demonstrates the incoherence of using big O in
reference to programs, since why is a long pointer not a violation of
O(1) but a fixed array such a violation?

The only solution that truly violates Harter's condition is an
unbounded one that uses a linked list to represent the stack. Any
given edition of a program that packs the stack into K integers where
K is known at compile time, OR that uses an array of K entries to hold
stack entries, will fail for a tree of sufficient depth.

But so will a "true violator", one that uses a linked list, when the
physical machine's storage runs out!

What's interesting is that many people here said a while back that you
cannot simulate a Turing machine but here might not see this point. A
computer has a finite number of states but an abstract automaton does
not, and abstract algorithms run only in a Platonic heaven where
storage is truly unlimited.

Which means that while you can transfer O notation to computers, you
have to know what you are doing, and not be always engaged in
campaigns of personal destruction that blind the soul.

I used floating point numbers on the Texas Instruments TI-79
programmable calculator to simulate stacks on the way to compiling and
interpreting the old Mouse language in 1979. Of course, the size of
program was bounded by the size of memory (1K). But this is still true
today.

Are you aware that true chaos mathematicians deny that computers can
even simulate chaos and that they feel it's a dangerous illusion to
think that they might be of much political use as climatological
models for this reason? Applying O(n) notation directly to programs is
a mistake, pure and simple. It can "prove" anything.


>
> >An integer simulating a stack, packed to an arbitrary amount using
> >more sophisticated algorithms,
>
> that don't exist. You can't pack "to an arbitrary amount" - Shannon
> won't let you.

You deliberately misread "arbitrary", perhaps my poor choice of word
but also your usual savagery because you're here to target
personalities as a politician, and not a programmer.

>
> >O(1) memory complexity as applied to real code is a total confusion. A
> >SCIENTIFIC definition would for example tell you how wide each memory
> >cell is, but if you use O(1) sloppily in reference to code, you can
> >allocate bytes, short integers and long integers, and you can even
> >represent arrays and stacks in one long integer or a finite known
> >number of long integers, and have this O(1) complexity.
>
> No, everyone else here realizes that, so long as memory cells are each a
> fixed and small number of bits (whether 8, 39, or 128), then O notation
> makes perfect sense and gives useful answers. The binary tree will
> contain O(N) cells and, if balanced, have depth O(log N); at the extreme
> unbalanced case, it could also have depth O(N).

Your, as I have said, pretentious use of a notation for talking about
abstract algorithms conceals the fact that you cannot apply "O(1)" to
code without having defined in code terms what constitutes a memory
cell, and what constitutes something that grows.

As in the case of comparing strlen(s) to K-1, and as in the case of
your harassment of Schildt, you pretend that abstractions are
real...even the abstraction "undefined".

As was the case of old, an anti-intellectual Platonism creates a
Scholasticism in which because of an initial intellectual dishonesty,
a field declines (as logic declined before Frege) into one-upmanship
and human sacrifice, as in the destruction of Peter Abelard.

Ben Bacarisse

ongelezen,
4 mrt 2008, 06:10:3904-03-2008
aan
spinoza1111 <spino...@yahoo.com> writes:

> On Mar 3, 7:11 pm, "Clive D. W. Feather" <cl...@on-the-
> train.demon.co.uk> wrote:
>> In article
>> <3d782936-6a2d-4016-85f9-e475d98c6...@i7g2000prf.googlegroups.com>,spinoza1111<spinoza1...@yahoo.com> writes
>>
>> >OK: any O(1) solution can use a stack of arbitrary bounds by mapping
>> >that stack to a long integer and packing its values, perhaps adding
>> >one to integer values to disambiguate them from zero.
>>
>> No it can't. If the maximum value you are going to push is M, and the
>> maximum value that can be stored in a long integer is L, then you can
>> only push P values where P = floor (log L / log M) (the base used for
>> the logarithms doesn't matter).
>
> Arbitrary in the sense of fixed. My poor choice of words. I realized
> it is small: that's why I said that you can increase this fixed bound
> by using a fixed (constant) number of stack objects, each bounded in
> the way you say, to create a truly arbitrary, but limited and bounded,
> stack for each new edition of the code.
>
> It's obvious to all of us that a stack represented as powers of N in
> an integer is bounded. You don't have to belabor the point with
> pretentious mathematics.

If an algorithm has some internal bound that means that it only works
on a finite set of inputs (trees in this case) then it can have no
"big O" complexity bounds at all.

If one extends the program to "solve" all instances of the problem by
either reporting a solution or some "out of space" message then, in
effect, such a program only solves a related problem and it can always
do so in constant time and space, so all discussion of asymptotic
bounds is pointless.

> Any sample solution will probably just allocate a fixed size array to
> hold the stack. This demonstrates the incoherence of using big O in
> reference to programs, since why is a long pointer not a violation of
> O(1) but a fixed array such a violation?

The discussion needs to be about algorithms not programs. In
addition, all big O notation needs to have an agreed notion of what is
being measured (in particular, what are the "units"). If
arithmetically unbounded integers are the units of space, your stack
is O(1) and can handle any depth. You realise that is unreasonable,
but putting a bound on the stack size, probably means that you can't
solve the problem in a way that allows the algorithm to be analysed.

I say, "probably" because you don't present a solution, only a stack.
Since we know from the literature that O(1) space and O(n) time is
possible -- given enough hacks -- your bounded stack might form part of
such a solution when you finish it, although I can't see how at the
moment.

<other stuff snipped>

--
Ben.

spinoza1111

ongelezen,
4 mrt 2008, 08:50:1804-03-2008
aan
On Mar 4, 7:10 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:

> spinoza1111<spinoza1...@yahoo.com> writes:
> > On Mar 3, 7:11 pm, "Clive D. W. Feather" <cl...@on-the-
> > train.demon.co.uk> wrote:
> >> In article
> >> <3d782936-6a2d-4016-85f9-e475d98c6...@i7g2000prf.googlegroups.com>,spinoza1111<spinoza1...@yahoo.com> writes
>
> >> >OK: any O(1) solution can use a stack of arbitrary bounds by mapping
> >> >that stack to a long integer and packing its values, perhaps adding
> >> >one to integer values to disambiguate them from zero.
>
> >> No it can't. If the maximum value you are going to push is M, and the
> >> maximum value that can be stored in a long integer is L, then you can
> >> only push P values where P = floor (log L / log M) (the base used for
> >> the logarithms doesn't matter).
>
> > Arbitrary in the sense of fixed. My poor choice of words. I realized
> > it is small: that's why I said that you can increase this fixed bound
> > by using a fixed (constant) number of stack objects, each bounded in
> > the way you say, to create a truly arbitrary, but limited and bounded,
> > stack for each new edition of the code.
>
> > It's obvious to all of us that a stack represented as powers of N in
> > an integer is bounded. You don't have to belabor the point with
> > pretentious mathematics.
>
> If an algorithm has some internal bound that means that it only works
> on a finite set of inputs (trees in this case) then it can have no
> "big O" complexity bounds at all.

By this reasoning, Ben, then no actual (physically executable)
programs have complexity bounds. That indeed was my point: that for
somewhat the same reason John von Neumann said "anyone who says he
obtained a random number from a computer is in a state of sin",
computational complexity is about algorithms, which don't run at all
on computers.

Our crap runneth on computers, not pure or *reinen* abstract
algorithms.

Even if a program implements the stack in a linked list, sooner or
later (same as a solution that represents the stack in a packed
integer) it works ONLY for a finite set of inputs because its storage
is finite.

>
> If one extends the program to "solve" all instances of the problem by
> either reporting a solution or some "out of space" message then, in
> effect, such a program only solves a related problem and it can always
> do so in constant time and space, so all discussion of asymptotic
> bounds is pointless.

But this as I have said is true of all programs without the caveat
that our talk of computational complexity is NOT ABOUT THE PROGRAM but
about the algorithm in the program.

Clearly, a program that implements the stack in a linked list, a
program that implements it in a fixed-but-let-us-say-big array, and
even a program that packs the stack into 1 or K integers, have the
same computational complexity as algorithms, where the "algorithm"
isn't the code, it's the human (indeed social) intention of the
programmer as obvious, or not, from the code.

It is a meaning and an intention, which is why the behavior of some
posters here is indeed reprehensible: they delight in ignoring
meaning, and focusing like reptiles on a truth that is said to be
resident in dead things.


>
> > Any sample solution will probably just allocate a fixed size array to
> > hold the stack. This demonstrates the incoherence of using big O in
> > reference to programs, since why is a long pointer not a violation of
> > O(1) but a fixed array such a violation?
>
> The discussion needs to be about algorithms not programs.  In

Yes, it does.

> addition, all big O notation needs to have an agreed notion of what is
> being measured (in particular, what are the "units").  If

I couldn't agree more.

> arithmetically unbounded integers are the units of space, your stack
> is O(1) and can handle any depth.  You realise that is unreasonable,
> but putting a bound on the stack size, probably means that you can't
> solve the problem in a way that allows the algorithm to be analysed.

I can build hardware with "unlimited" integer lengths for the same
reason Prince Hamlet could be bound in a nutshell and count himself
king of infinite space. Indeed such hardware existed. The IBM 1401
midsize-mainframe, announced in 1959, had "unlimited" integers that
were bounded in essentially C string fashion: an extra bit, a "word
mark" was placed at the end, and calculating 100 factorial EXACTLY was
a few lines of assembler code.

For the same reason, then, a C program that represents the stack as an
"unlimited" linked list (limited, as were 1401 integers, by available
memory) or one that allocates a large-but-fixed array is from the
standpoint of what we are doing here, equivalent.

What we are doing is Vulgar Computational Complexity (VCC), where this
is the application, whether reflective or just plain stupid, of a
theory of pure algorithms to actual programs.

>
> I say, "probably" because you don't present a solution, only a stack.
> Since we know from the literature that O(1) space and O(n) time is
> possible -- given enough hacks -- your bounded stack might form part of
> such a solution when you finish it, although I can't see how at the
> moment.

Well, at the present time I have more important things to do than
return with you now to the thrilling days of yesteryear in which to
fit FORTRAN code into small memories I would treat integers as arrays
and stacks, but I will try to get around to it.

I think I've made my point. Computational complexity is outside
Plato's cave, but like Aristotle we have to realize that it is present
physically only in imperfect shadows, our programs. This is why it is
a characteristically Platonist error to think of it as real, and able
to exist independent of Creation.

The Platonist pretense is that it does, and it is a favorite pastime
of totalitarians to make mountains out of molehills over the supposed
difference, in all programs no less, between strlen(s) for small
(which is to say reasonable) s and K-1. They also like calling people
liars.

Usually in my experience, and that of Derrida's, the Platonist, while
pretending to be a disinterested scholar is usually a specialist in
politics and man management trying to persuade us that he has seen
truth butt naked, and has been mentioned in Hansard.

The vendetta, for example against Schildt, is for him more delightful
than actually accomplishing any thing.

Hi Clive.

>
> <other stuff snipped>
>
> --
> Ben.

Ben Bacarisse
4 Mar 2008, 09:39:04
spinoza1111 <spino...@yahoo.com> writes:

Of course. But note that I was talking about algorithms, not programs.
I was giving you the benefit of the doubt -- I was assuming you were
suggesting an algorithm for the problem, not just a bounded program.
Your algorithm, in abstract, is either bounded (and thus does not solve
the problem in any general algorithmic sense) or it does not use O(1)
extra storage -- you choose. It depends on how you define your
integers.

This argument -- about the bounds in physical programs -- is old and
pointless. We can get round it by talking about algorithms in this
case. I suggest that is what we all do. We can talk about programs,
provided we think of these as abstract illustrations of algorithms --
unbounded by the real constraints we know they would have if we
compiled and ran them. If we don't do this, we can't have these sorts
of debate at all.

> That indeed was my point: that for
> somewhat the same reason John von Neumann said "anyone who says he
> obtained a random number from a computer is in a state of sin",
> computational complexity is about algorithms, which don't run at all
> on computers.
>
> Our crap runneth on computers, not pure or *reinen* abstract
> algorithms.
>
> Even if a program implements the stack in a linked list, sooner or
> later (same as a solution that represents the stack in a packed
> integer) it work ONLY for a finite set of inputs because its storage
> is finite.

Thus we should discuss the merits and demerits of algorithms, not
programs on our limited computers. We may illustrate these with code,
but we should talk about the algorithms.

<snip>


>> The discussion needs to be about algorithms not programs.  In
>
> Yes, it does.

Note this.

>> addition, all big O notation needs to have an agreed notion of what is
>> being measured (in particular, what are the "units").  If
>
> I couldn't agree more.

and this.

>> arithmetically unbounded integers are the units of space, your stack
>> is O(1) and can handle any depth.  You realise that is unreasonable,
>> but putting a bound on the stack size, probably means that you can't
>> solve the problem in a way that allows the algorithm to be analysed.
>
> I can build hardware with "unlimited" integer lengths for the same
> reason Prince Hamlet could be bound in a nutshell and count himself
> king of infinite space. Indeed such hardware existed. The IBM 1401
> midsize-mainframe, announced in 1959, had "unlimited" integers that
> were bounded in essentially C string fashion: an extra bit, a "word
> mark" was placed at the end, and calculating 100 factorial EXACTLY was
> a few lines of assembler code.

So why are you talking about all this? I was trying to get you to
talk about your algorithm and its serious flaws. You agreed, but keep
talking about machines. If you don't want to talk about algorithms,
don't post in threads where the only meaningful contribution is an
algorithmic one.

If the thread asks about O(1) space (as this one does) it really does
not help to say, "do it how you like -- big O means nothing on finite
hardware". We know from experience that there are real advantages,
even on our poor finite hardware to using certain algorithms.

<snip>

> I think I've made my point. Computational complexity is outside
> Plato's cave, but like Aristotle we have to realize that it is present
> physically only in imperfect shadows, our programs.

Not as far as I can see. You answered a post about the complexity
bounds that might be possible for some problem with a part-solution.
How can your point be now that such talk is meaningless? If this is
your point of view, please don't post in threads about algorithms and
their asymptotic bounds. Or, at least, limit it to one post saying
that you don't care about these things because real programs are
finite. You seem confused about whether you will talk about
algorithms or not.

--
Ben.

spinoza1111
4 Mar 2008, 11:41:38

You need give me no such benefit. You don't judge me, I don't judge
you.

"All scientists are candidates for posts" (Adorno) - on the job. But
here, we're talking about a problem, not looking for a frigging job.

> suggesting an algorithm for the problem, not just a bounded program.
> Your algorithm, in abstract, is either bounded (and thus does not solve
> the problem in any general algorithmic sense) or it does not use O(1)
> extra storage -- you choose.  It depends on how you define your
> integers.

OK, you claim my hack is an algorithm. You do me too much credit.

But we CAN abstract "representing a stack in code when you can use
either only one or a finite number K of integers" as an algorithm, and
your apparent conclusion, that if in the Platonic world we have
unlimited precision then stacks are O(1), is true.

But it's also nonsense. It's like assuming we have cold fusion and
then proving we have it. O(1) memory complexity assumes cells of
finite precision.

So my solution provides no gain in terms of the Vulgar Misapplication
of Complexity Theory that seems to be on tap here, over a malloc'd
stack. Both solutions are O'(1) where the prime operator indicates
"Vulgar Misapplication".


>
> This argument -- about the bounds in physical programs -- is old and
> pointless.  We can get round it by talking about algorithms in this
> case.  I suggest that is what we all do.  We can talk about programs,
> provided we think of these as abstract illustrations of algorithms --

There is no need to do so. Algorithms represent themselves since they
are nothing more than their appearance...their explanation...their
perspicuous explanation.

Programs, far from being "abstract" illustrations, are CONCRETE
attempts in the real world (including social facts) to realize
algorithms. In many cases, especially in MIS, they are a Platonist
attempt at bedazzlement so that people are reconciled to miserable
lives: Saul Bellow's fictional Tommy Wilhelm, in Seize the Day,
ruminates even in the 1950s why it is companies must print these bills
on their computers, and men must lie awake wondering how to pay them.
Of course, the Platonic precision of those bills often concealed
software errors.

Consider a "straight line" drawn in Microsoft Paint. It isn't what
Euclid meant because it necessarily has dimensions of one pixel,
whereas a Euclidean line has no thickness. If one would teach a child
about straight lines, one would do better to NOT use Paint, and
instead, take him sailing on a calm day and point to the horizon: the
horizon on a day of infinite visibility is Euclidean since it's the
dimensionless point at which sea turns into sky.

In fact, because of the limitations of Euclid's tools, it was obvious
to his adepts that he was talking about idealizations that do not
exist in concrete instances except, in St Paul's words, "through a
glass darkly".

This point was made by Dijkstra: the very precision of computation,
which is always a false precision (whether because of the bit size of
floating point numbers or the limits of virtual storage), fills men
with illusions in which they confuse a concretion with an abstraction.

It's when those illusions change to the rage of mobs, pursuing The
Chosen One, that "darkness falls".

I think this is why Leonard Smith, in "Chaos: A Very Short
Introduction" makes the point: what we see in "computer models of
chaos" is never chaos. A simple example in my experience occurred in
1995 when two clowns from a somewhat fraudulent "research institute"
funded by oil companies claimed in the Wall Street Journal that their
"climate model" disproved "global warming" because it showed a cooling
trend since 1981.

Upon investigation, these academic hosebags had created an Excel
spreadsheet and removed average temperatures until the data set was
sufficiently small for the trend line to reverse direction for each
removal, and a "cooling trend" appeared. I added the numbers for 1980
to my copy of their spreadsheet and the trend line flipped towards
warming.

[Their tirade, on the editorial page of the WSJ, contained some
interesting language about the unscientific girliemen who in 1995 were
making unfounded claims that human activity was causing climate
change.]

[Later that year, 600 people died in the 1995 Chicago heat disaster
including a lonely man on my floor of my building. His body was
discovered after the smell emerged, and his flat was filled with
flesh, flies and maggots: his body had exploded.

[As should any illusion that technology is value-neutral.]

This reminds me of others' (and not your) arguments about
computational complexity such as the silly claim that strlen(s) is bad
style and K-1 is not...not from the standpoint of readability, where
it's good style in both cases not to confuse the program reader in
extempore C meant as pseudocode, but from that of a foolish confusion
of algorithms and programs.

> unbounded by the real constraints we know they would have if we
> compiled and ran them.  If we don't do this, we can't have these sorts
> of debate at all.

This may not be a useful way to spend our time. We'll all get
different numbers...perhaps on the same machine.

>
> > That indeed was my point: that for
> > somewhat the same reason John von Neumann said "anyone who says he
> > obtained a random number from a computer is in a state of sin",
> > computational complexity is about algorithms, which don't run at all
> > on computers.
>
> > Our crap runneth on computers, not pure or *reinen* abstract
> > algorithms.
>
> > Even if a program implements the stack in a linked list, sooner or
> > later (same as a solution that represents the stack in a packed
> > integer) it work ONLY for a finite set of inputs because its storage
> > is finite.
>
> Thus we should discuss the merits and demerits of algorithms, not
> programs on our limited computers.  We may illustrate these with code,
> but we should talk about the algorithms.

Well, the problem here is that we have to know, every man jack of us,
how to write mathematical proofs, an occupation which leaves even less
time than programming for snipping at each other like little baby
girls, or, like me, responding wearily like a man to constant
harassment.

The last time I wrote mathematical proofs, for shits and for giggles,
was when I proved the doability of recursion in Quine's system of "Set
Theory and its Logic" and some of the theorems concerning abstract
automata in Hopcroft and Ullman's "Formal Languages and their Relation
to Automata". This was in 1970. Before that was in high school
geometry. Oh, yeah, I did some proofs in my class on automata theory
in 1978 in grad school.

But hey, I'm game.


>
> <snip>
>
> >> The discussion needs to be about algorithms not programs.  In
>
> > Yes, it does.
>
> Note this.
>
> >> addition, all big O notation needs to have an agreed notion of what is
> >> being measured (in particular, what are the "units").  If
>
> > I couldn't agree more.
>
> and this.
>
> >> arithmetically unbounded integers are the units of space, your stack
> >> is O(1) and can handle any depth.  You realise that is unreasonable,
> >> but putting a bound on the stack size, probably means that you can't
> >> solve the problem in a way that allows the algorithm to be analysed.
>
> > I can build hardware with "unlimited" integer lengths for the same
> > reason Prince Hamlet could be bound in a nutshell and count himself
> > king of infinite space. Indeed such hardware existed. The IBM 1401
> > midsize-mainframe, announced in 1959, had "unlimited" integers that
> > were bounded in essentially C string fashion: an extra bit, a "word
> > mark" was placed at the end, and calculating 100 factorial EXACTLY was
> > a few lines of assembler code.
>
> So why are you talking about all this?  I was trying to get you to
> talk about your algorithm and its serious flaws.  You agreed, but keep

It's not my algorithm, it is a part of the common treasury of
humanity. It's a vanishingly trivial microtruth that "you can simulate
stacks on a platform without the ability to malloc or to allocate
arrays such as the Texas Instruments TI-79 calculator", but it's still
true: since even a linked list stack is limited by available RAM, both
solutions are equivalent *sub specie aeternitatis*. Here, you turn
towards the language of the thugs, in which "all scientists" are
reminded, in Adorno's words, that they are technical serfs and wage
slaves, "candidates for posts".

This is a very efficient way to stop thought in its tracks, but
useless in getting at the truth, and Adorno happened to be on his way
to showing that the white collar cannot do the task to which they are
set: the detailed administration of society under the sign of truth
(from where we get simple justice in all cases).

When nobody can be excluded by virtue of the technical structure of
usenet, it is an irony that there has always been, on usenet, a moral
panic about people who don't belong, which is constituted by
psychological transfer in isolated men who have been taught, on the
job, to fear and hate the secret and feminine contour of their
weakness.

The petty bourgeois programmer will always, in Adorno's model, do a
bad job that makes him unhappy. The pressure to short-circuit is
always there, not for technical reasons, but because if no compromises
are made, the system will contradict the intention of its funders.

> talking about machines.  If you don't want to talk about algorithms,
> don't post in threads where the only meaningful contribution is an
> algorithmic one.

Now, that's bullshit. I haven't seen mathematical proofs here,
although some writers approach that level. Harter asked for code, and
most of the responses speculate about code.

I announced my trivial "discovery" as a coding point which undercuts
any claim that a C program which represents the stack as a linked list
is not "O(1)" in the misuse of the term. It can be rephrased to say
that from the abstract standpoint, the machine running the code must
be a Turing machine for there to be any difference between the stack-
as-integer hack and a linked-list stack.

On a hypothetical (and nonexistent) Turing machine that ACTUALLY RUNS
C CODE in the physical world, the linked list stack would not be O(1).
But in any real computer, it is "O(1)" in the sloppy and misleading
sense that formula is used here.

[I have gone on record as saying you can simulate a TM. But this isn't
the same thing.]

Note that as soon as your language turns minatory...as soon as you
recall to yourself that you are talking to The Chosen One, the
Carthaginian (who must be destroyed)...your science falls apart. Ponder
that.

You don't give me the benefit of the doubt: I don't accept your right
to harm or benefit me. Instead, you now need to show me where my
reasoning, to the conclusion that all solutions are O'(1) memory
complex, is false.

>
> If the thread asks about O(1) space (as this one does) it really does
> not help to say, "do it how you like -- big O means nothing on finite
> hardware".  We know from experience that there are real advantages,
> even on our poor finite hardware to using certain algorithms.

What are these advantages?

Sure, it's good to know how to avoid having to build a REAL linked-
list stack while performing a tree operation, especially in modern
embedded applications. If we can rotate and avoid a stack, cool.
That's a productive result! Thanks Julienne! I'm not worthy!

But this isn't a MATHEMATICAL result, nor even an engineering result.
It's a programming result.

It has no more dignity *sub specie aeternitatis*, as a result, than my
1974 realization that despite the very limited storage available on an
8K 1401, running compiled Fortran code interpretively such that the 8K
was divided between the interpreter and the code, I could do a
reasonable version of John Conway's "Game of Life" with a one-
dimensional small array of integers by representing cells as multiples
of powers of ten.

It has no more dignity, as a result, than my 1979 realization, after
reading the Byte magazine article about "The Mouse Programming
Language" that although with two kids I couldn't afford an Apple II, I
could nonetheless write a compiler by representing the runtime stack
in a floating point number.

Writing C code, and timing it, is simply not a good way to do
complexity theory! It's like the Turing Machine simulator I wrote for
shits and giggles on the TI-79 and which amused my prof. It is
NONCREDIT work.


>
> <snip>
>
> > I think I've made my point. Computational complexity is outside
> > Plato's cave, but like Aristotle we have to realize that it is present
> > physically only in imperfect shadows, our programs.
>
> Not as far as I can see.  You answered a post about the complexity
> bounds that might be possible for some problem with a part-solution.
> How can your point be now that such talk is meaningless?  If this is
> your point of view, please don't post in threads about algorithms and
> their asymptotic bounds.  Or, at least, limit it to one post saying
> that you don't care about these things because real programs are
> finite.  You seem confused about whether you will talk about
> algorithms or not.

The confusion is in the things I talk about. Like T. S. Eliot's
Apeneck Sweeney, I gotta use words when I talk to you, and I am
talking to people who believe that they can get abstract results by
writing C code. As in the case of Clive's disorganized and almost
incoherent rant against Schildt, which reads like it was written in
one draft, as a laundry list and charge sheet, we are in a twittering
world.

This is the world of people with little real programming experience
(building largeish systems as a single developer, and NOT
circumventing the hard parts by declaring victory prematurely while
leaving them for some "low level coder", and NOT trashing their
coworkers using girlyboy gossip). Nor are these people skilled enough
as writers to write a coherent mathematical proof. Reasonable
arguments have been produced, especially by Julienne, but I haven't
seen any proofs.

This world creates a confusion which then is easily blamed on The
Chosen One.

At this point, you seem, against the better angels of
your nature, to have joined the cybernetic mob who, as Malcolm has
pointed out, don't care about the truth, only "who is to be master" by
a Survivor like process of group voting, not for the master directly,
but by exclusion. As such, you have very little to tell me...no
insight at all. I'd stick to programming, not writing.


If the red slayer think he slays,
Or if the slain think he is slain,
They know not well the subtle ways
I keep, and pass, and turn again.

Far or forgot to me is near;
Shadow and sunlight are the same;
The vanished gods to me appear;
And one to me are shame and fame.

They reckon ill who leave me out;
When me they fly, I am the wings;
I am the doubter and the doubt;
And I the hymn the Brahmin sings.

The strong gods pine for my abode,
And pine in vain the sacred Seven;
But thou, meek lover of the good!
Find me, and turn thy back on heaven.

Ralph Waldo Emerson, Brahma

spinoza1111
4 Mar 2008, 11:54:03
On Mar 3, 7:04 pm, "Clive D. W. Feather" <cl...@on-the-
train.demon.co.uk> wrote:
> In article
> <9fbb7222-deea-4717-9a5a-36e498aae...@s8g2000prg.googlegroups.com>,spinoza1111<spinoza1...@yahoo.com> writes

>
> >The untested C sharp code that follows the essay is a stack simulator
> >which uses a single number to represent a stack with a fixed bound. It
> >is best viewed in a monospaced font.
>
> >It uses an old Fortran trick for representing arrays and stacks
>
> It's a trick I was using in TI (something even more primitive than
> assembler).
>
> However, it's not any use in this case - it can only store a fixed
> number of values, the number of digits in "uint" divided by bytWidth.
> Since the problem allows for a tree of unlimited depth, you will
> overflow the stack.

(Sigh) as will ANY implementation on a real computer sooner or later.
Might as well get it over with! Flunk now and avoid the June rush!


>
> In other words, hiding the stack doesn't stop it being a stack, which we
> already know doesn't work.
>
> >Here is the untested code of the stack class, best viewed in a
> >monospaced font such as Courier New and implemented, in C Sharp, as a
> >struct.
>
> Noting that it is untested:

Thank you.


>
>
>
>
>
> >            public bool push(ushort shrValue)
> >            {
> >                if (shrValue >= Math.Pow(10, BYTdecimalWidth))
> >                {
> >                    Console.WriteLine
> >                            ("Value for push " +
> >                             shrValue.ToString() + " " +
> >                             "exceeds decimal width " +
> >                             BYTdecimalWidth.ToString());
> >                    return false;
> >                }
> >                long lngTest = INTstack * SHRshift + shrValue + 1;
> >                if (lngTest > Math.Pow(2, 31) - 1)
> >                {
> >                    Console.WriteLine("Stack overflow");
> >                    return false;
> >                }
> >                return true;
> >            }
>
> This function never changes INTstack. Therefore it isn't actually
> storing the value in the stack!

Wow! A missing line of code! Alert the media! Post to Amazon! Make a
Web site! Harass and hound my family! Threaten me with anal rape and
death!

(Sigh)

My bad, of course, I "forgot" in this extempore whiteboard pseudo code
to assign lngTest back into INTstack.


>
> Why are you limiting the stored value to 2^31-1? And why are you calling
> Math.Pow every time rather than just writing the constant 0x7FFFFFFF?

To take the piss out of ye, laddie.

2^31-1 is the maximum value of a signed int. I use Math.Pow because I
know how to write pseudo-code, and I don't arrogantly assume, as did
Kernighan, that my reader has been mentally crippled by the infantile
disorder, C.

Any questions, laddie?

>
> --
> Clive D.W. Feather                       | Home: <cl...@davros.org>
> Tel: +44 20 8495 6138 (work)             | Web:  <http://www.davros.org>
> Fax: +44 870 051 9937                    | Work: <cl...@demon.net>

> Please reply to the Reply-To address, which is:  <cl...@davros.org>

Ben Bacarisse
4 Mar 2008, 13:08:18
spinoza1111 <spino...@yahoo.com> writes:

Sorry, I have no time for this sort of game. If you choose to offer
an algorithm that addresses the program, I'll be glad to look at it but I
don't want to discuss your vague ideas about big O notation.

I'll learn, one day.

--
Ben.

spinoza1111
4 Mar 2008, 13:56:19

The vagueness is in your very understanding and in your prose...look
at your error: "an algorithm that addresses the program" is more than
a typographical error, it was made because people in this newsgroup
are confused about the distinction.

"Platonism" has a precise meaning. Just because it sounds vague
doesn't mean it is vague, just as Saturn has an iron core. The REAL
imprecision here is a twittering false precision in which people
assert with a straight face that they "write O(1) code".

For the last time: code isn't "order" anything. Code isn't "order"
jack shit. Code imperfectly and as a social construct in a human life-
world expresses the algorithm as a prediction that the computer will
form the algorithm's execution trace for all usable sets of data. The
algorithm is "order" something.

Get it? A human life-world in which we don't ascribe the most
convenient motivations and meaning intentions to people to make
ourselves come out on top in a bad rerun of Survivor or The
Apprentice.

You are play-acting a gatekeeper to this discussion, but there is no
gate and this is why you make elementary blunders such as confusing an
algorithm and a program. I know perfectly well you meant it the other
way, and the ONLY reason I point it out is to show that the confusion
is a group disorder.

There's a certain slant of light,
On winter afternoons
That oppresses, like the weight
Of cathedral tunes.

Heavenly hurt it gives us;
We can find no scar,
But internal difference
Where the meanings, are.

None may teach it anything,
'T is the seal, despair, --
An imperial affliction
Sent us of the air.

When it comes, the landscape listens,
Shadows hold their breath;
When it goes, 't is like the distance
On the look of death.

"We can find no scar". The actual algorithm with let us say an
unbounded stack, a truly unbounded stack, has no scar. Instead, it has
a linked list which will sooner or later run out of memory, or a fixed
large array and a test for its end.

The meaning of the program is the "internal difference" where poet
Emily Dickinson finds meaning. It is, in fact, the very failure of the
code to be the Turing Machine, with truly unlimited and unbounded
behavior.

The problem here is that thugs think they can discount intentions.
Using strlen(s) in a for loop created a "difference" between ideal
"efficiency" and the intent to say, not to a dead thing (the computer)
but to another person, don't forget, the end of the string is the end
of the for. The distance between the repeated evaluation of for, or
math.Pow in two sets of code not meant for execution, and copying
those values into temporary variables means that the code writer in
fact likes to waste "valuable (ha!) computer time" in favor of human
understanding.

Sure, I'll recalculate Math.Pow until the cows come home. Big deal. It
keeps my computer out of trouble.

Programmers give lip service to "readability". The problem is that as
"candidates for posts" they are like Rosencrantz and Guildenstern at
saying a lot of things to keep a job so there is no real commitment to
"readability" nor even consensus on what it is. Harter's code is
readable...although he uses one character identifiers in the old
memory management, because his good intentions shine forth in the fact
that his active intellect in 1990 had him comment every line in an
assembler style, which really just works.

Whereas the thugs are all too eager to ascribe bad motivations,
including the bad motivation of stupidity, to anything that questions
their thug wisdom.

Really understanding (listening) is what Italian mathematician Enrico
Bombieri did for Nash when he contacted my supervisor to ask, who is
this guy? Most mathematicians are deluged with crap on the order of
Clive Feather's nonsense about Schildt, and many of them assume it's
all crap. But as we perceive a slant of light, Bombieri detected
SOMETHING in Nash's email and wasn't a thug, like Richard, who
(wrongly) assumes that all problems are solved, and that only in-crowd
members could provide him with new insight on Riemann.

Harter asked for a hack. You're playing games, because I gave him a
stack that wasn't a hack, because I don't hack. It was finite in size,
could be increased by a switch statement to any size, and is order,
prime, one. Now quit pestering me.


>
> I'll learn, one day.

I don't think you will learn anything useful until you thoroughly
understand the difference between an algorithm and a program in such a
way you wouldn't ever reverse the words.

Paul Hsieh

4 Mar 2008, 16:56:28
On Mar 3, 4:34 pm, moi <r...@invalid.address.org> wrote:
> On Mon, 03 Mar 2008 14:58:06 -0800, Paul Hsieh wrote:
> > On Mar 1, 10:15 pm, Paul Hsieh <websn...@gmail.com> wrote:
> > Fortunately, the one to make me eat crow this time is *myself*.  :)
>
> Well, I tried to resurrect and implement your contributed code, but I
> failed to do so ;-[

Yeah I just did it all in my head with a few examples on paper and
didn't check it by actually trying it. So the code is quite likely at
least slightly incorrect. But that's why I went to some trouble to
try to explain what it did so that people could fix the code to make
it work with little trouble.

> > Indeed, there is a non-destructive, const preserving O(1) space, and
> > O(n*ln n) time algorithm for doing this.  Drumroll please.
>
> Yes, that is the Deutsch-Schorr-Waite algorithm, posted by Gene on
> Saturday, 18:30PST.

I looked up this link, but it appears to destroy the tree as it goes.

> > The principle is simple.  When traversing a sorted binary tree [...]


>
> The Lindstrom - reference (also posted by Gene) solves the "need one bit
> per node" problem by using the (numerical) ordering of the left and right
> pointers as a mark bit. While eating the tree, it uses the left-pointers
> as a stack (linked list, actually) to backtrack once the subtree below
> the "cursor" becomes exhausted. Excellent hack!

Well ok, that's pretty devious as a hack. But again, it modifies the
tree. To be clear, my last solution does not modify the tree in any
way, it uses O(1) additional space, and performs the whole operation
in O(n*ln(n)) which is the minimum possible recursive time anyways.
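In sketch form, the non-destructive repeated-successor traversal might look like this (a sketch only: the struct, the names, and the array output are mine, purely for illustration, and it assumes distinct keys):

```c
#include <stddef.h>

struct node { int key; struct node *left, *right; };

/* In-order walk of a BST with O(1) extra space and no modification of
   the tree: repeatedly search from the root for the smallest key
   strictly greater than the last one emitted.  Each output costs one
   root-to-leaf search, so the whole walk is O(n*h): O(n log n) when
   the tree is balanced.  Results go into an array purely so the
   sketch is easy to check. */
size_t inorder_o1(const struct node *root, int *out, size_t cap)
{
    size_t n = 0;
    int have_last = 0, last = 0;

    for (;;) {
        const struct node *p = root, *best = NULL;
        while (p) {
            /* candidate successor: remember it, look for a smaller one */
            if (!have_last || p->key > last) { best = p; p = p->left; }
            else p = p->right;
        }
        if (!best || n == cap) break;   /* nothing greater than last: done */
        out[n++] = last = best->key;
        have_last = 1;
    }
    return n;
}
```

Note that this only works because the tree is *sorted*; for an arbitrary binary tree there is no key to search back down for.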

So my questions are: 1) Is my third idea in error? 2) Am I the first
to come up with this (and should I publish)? or 3) Has my idea been
discovered earlier by someone else not yet mentioned here?

Should I publish this? I guess at the very least I should put this up
on my website.

spinoza1111

5 Mar 2008, 01:02:45
On Mar 3, 6:35 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111said:
>
> > On Mar 3, 4:20 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111said:
> <snip>
> >> If you don't want your name associated with the abuse you ladle down
> >> this newsgroup's throat, stop abusing the group.
>
> > No, the abuse in all cases starts with you. In this case, you injected
> > my name into a group
>
> No, you introduced your name into this group by posting here in the first
> place.

Stop playing with words. You do so exclusively to start trouble and
spread confusion, as well as to cover your own errors and exaggerate
others' failings in a newsgroup where there are several people (Harter,
Julienne, Hsieh and Bacarisse among them) far more qualified than you,
and whose contributions you obscure by your insistence on pursuing and
harassing people as well as injecting their name into discussions in
order to activate their replies. By "group" it was obvious that I
meant "the group of people in this newsgroup responding to Harter".
You inserted my name BEFORE I posted my contribution re the simulation
of a stack in an integer.

Richard Heathfield

5 Mar 2008, 01:43:15
spinoza1111 said:

> On Mar 3, 6:35 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111said:
>>
>> > On Mar 3, 4:20 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> spinoza1111said:
>> <snip>
>> >> If you don't want your name associated with the abuse you ladle down
>> >> this newsgroup's throat, stop abusing the group.
>>
>> > No, the abuse in all cases starts with you. In this case, you injected
>> > my name into a group
>>
>> No, you introduced your name into this group by posting here in the
>> first place.
>
> Stop playing with words.

What are you talking about? It was you, not me, who introduced your name
into comp.programming. I believe you did this some time in 2000. Are you
claiming that isn't true? The archives suggest that it /is/ true.

> You do so exclusively to start trouble and
> spread confusion,

If the facts confuse you, that's your problem, not mine.

> as well as to cover your own errors and exagerrate
> others failings in a newsgroup where there are several people (Harter,
> Julienne, Hsieh and Bacarisse among them) far more qualified than you,

I am happy to concede that there are plenty of people in this newsgroup who
know far more about programming than I do, and I read their articles on
programming with interest and enthusiasm. And then, of course, there's
you.

> and whose contributions you obscure by your insistence on pursuing and
> harassing people as well as injecting their name into discussions in
> order to activate their replies. By "group" it was obvious that I
> meant "the group of people in this newsgroup responding to Harter".

Oh, I see. Well, that may have been obvious to you, but it wasn't obvious
to me. Your meaning was ambiguous. When I wrote something a few weeks ago
that was unintentionally ambiguous, you claimed it meant I couldn't write.
To remain consistent, you must now claim that /you/ can't write. But don't
worry if you can't bring yourself to do so. It is already widely known.

> You inserted my name BEFORE I posted my contribution re the simulation
> of a stack in an integer.

Well, yes, I gave a correct answer to a topical question about the
newsgroup. Someone asked what had happened to this newsgroup, and I
answered that you had happened to it.

I find it strange that you complain about my use of your name, given that
you use mine and those of others here with such carefree abandon. But then
you always were blind to your own hypocrisy.

Gene

5 Mar 2008, 01:41:47

DSW changes the tree while it's working but, by the time it finishes,
restores all to its original condition. It works on arbitrary trees
rather than BSTs, which is important for garbage collectors, where it's
often used. Google Scheme 9 from Empty Space for an application. It's
fun to study this program.

This isn't right: "O(n*ln(n)) ... is the minimum possible recursive
time anyways." The usual stack algorithm visits each node 3 times,
each requiring constant time, so it's O(n). If you are assuming the
tree is a BST (sorted) _and_ balanced, then O(n log n) is not
remarkable because you could do n searches in that time, each time
looking for the next larger element. If the tree isn't balanced, then
maybe you have something good. I'm sorry I can't grok your
explanation. Is there working code?

moi

5 Mar 2008, 06:27:40
On Tue, 04 Mar 2008 13:56:28 -0800, Paul Hsieh wrote:

> On Mar 3, 4:34 pm, moi <r...@invalid.address.org> wrote:
>> On Mon, 03 Mar 2008 14:58:06 -0800, Paul Hsieh wrote:
>> > On Mar 1, 10:15 pm, Paul Hsieh <websn...@gmail.com> wrote:
>> > Fortunately, the one to make me eat crow this time is *myself*.  :)
>>
>> Well, I tried to resurrect and implement your contributed code, but
>> I failed to do so ;-[
>
> Yeah I just did it all in my head with a few examples on paper and
> didn't check it by actually trying it. So the code is quite likely at
> least slightly incorrect. But that's why I went to some trouble to try
> to explain what it did so that people could fix the code to make it work
> with little trouble.

Well, I could not. Especially, the 'v' variable was undeclared, and it
appeared in contexts where it could either be a 'node *' or a 'node **'.
At other places, there were some extra or missing asterisks, too :-[


>> > Indeed, there is a non-destructive, const preserving O(1) space, and
>> > O(n*ln n) time algorithm for doing this.  Drumroll please.
>>
>> Yes, that is the Deutsch-Schorr-Waite algorithm, posted by Gene on
>> Saturday, 18:30PST.
>
> I looked up this link, but it appears to destroy the tree as it goes.

If I understood it correctly it needs two tag+mark bits per node, which
makes it O(N) in space-requirements.

>
>> > The principle is simple.  When traversing a sorted binary tree [...]

Note the 'sorted' above. There is a difference in information when the
treewalker knows about the ordering. Knowing it will allow the treewalker
to estimate the point where the next 'value' should be, either from a
given subtree, or -if needed- from the root down. This would probably
result in an O(N log N) time complexity.

Without knowledge about the ordering, the treewalker can only take the
deepest path to the left, and needs to remember all the places where he
took a (left) turn. Storing this backtracking information in the nodes is
obvious, but guaranteeing that the original contents are restored
afterwards is not so trivial :-]
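For what it's worth, one published way of doing exactly that — storing the backtrack links in the nodes themselves while guaranteeing the original pointers are restored — is the Morris threaded traversal. A sketch (my own struct and array output, purely for illustration):

```c
#include <stddef.h>

struct mnode { int key; struct mnode *left, *right; };

/* Morris (threaded) in-order traversal: before descending into a left
   subtree, thread its rightmost node's null right pointer back to the
   current node; when that thread is met again, unlink it and visit.
   O(1) extra space; the tree is temporarily modified but is fully
   restored once the walk completes. */
size_t morris_inorder(struct mnode *p, int *out, size_t cap)
{
    size_t n = 0;
    while (p) {
        if (!p->left) {
            if (n < cap) out[n++] = p->key;   /* visit */
            p = p->right;
        } else {
            struct mnode *pred = p->left;     /* find in-order predecessor */
            while (pred->right && pred->right != p)
                pred = pred->right;
            if (!pred->right) {               /* make thread, go left */
                pred->right = p;
                p = p->left;
            } else {                          /* thread met: unlink, visit */
                pred->right = NULL;
                if (n < cap) out[n++] = p->key;
                p = p->right;
            }
        }
    }
    return n;
}
```

Like the Lindstrom trick, it is O(n) time and O(1) space, but the tree is writable during the walk, so it cannot live in ROM either.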


>> The Lindstrom - reference (also posted by Gene) solves the "need one
>> bit per node" problem by using the (numerical) ordering of the left and
>> right pointers as a mark bit. While eating the tree, it uses the
>> left-pointers as a stack (linked list, actually) to backtrack once the
>> subtree below the "cursor" becomes exhausted. Excellent hack!
>
> Well ok, that's pretty devious as a hack. But again, it modifies the
> tree. To be clear, my last solution does not modify the tree in any

It modifies the tree (the tree is not read-only; it cannot be in ROM),
but after the *full* treewalk it should be fully restored. (It is used
in garlic-collectors (pun intended) and to dump trees (to logfiles)
under out-of-resources conditions, if I understood correctly.)



> way, it uses O(1) additional space, and performs the whole operation in
> O(n*ln(n)) which is the minimum possible recursive time anyways.

The minimal time for a recursive algorithm seems O(N) to me:
void visit(struct node *p)
{
    if (!p) return;
    visit(p->left);
    print(p->payload);
    visit(p->right);
}

> So my questions are, 1) Is my third idea in error? 2) Am I the first to
> come up with this (and should I publish?) or 3) Has my idea been
> discovered earlier by someone else who has not yet been mentioned
> before?
>
> Should I publish this? I guess at the very least I should put this up
> on my website.

Once it works. Yes, please do so.

AvK

Ben Bacarisse

5 Mar 2008, 06:37:03
Gene <gene.r...@gmail.com> writes:
<snip>

> This isn't right: "O(n*ln(n)) ... is the minimum possible recursive
> time anyways." The usual stack algorithm visits each node 3 times,
> each requiring constant time, so it's O(n). If you are assuming the
> tree is a BST (sorted) _and_ balanced, then O(n log n) is not
> remarkable because you could do n searches in that time, each time
> looking for the next larger element.

I don't think sorted is needed for O(n*ln(n)) "visits". Take, as a
base-line, the numbering scheme I talked about:

find_node(tree, i) takes ln(n) visits so I think you get the traversal
in O(n*ln(n)) visits if the tree is optimally balanced. If the tree
is unbalanced, the situation is more complex and the analysis is
beyond me, but I'll try... later.

> If the tree isn't balanced, then
> maybe you have something good. I'm sorry I can't grok your
> explanation. Is there working code?

Or pseudo-code? I found the text hard. I prefer code, even if it is
made-up!

--
Ben.

spinoza1111

5 Mar 2008, 07:38:38
On Mar 5, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111said:
>
>
>
>
>
> > On Mar 3, 6:35 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111said:
>
> >> > On Mar 3, 4:20 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> spinoza1111said:
> >> <snip>
> >> >> If you don't want your name associated with the abuse you ladle down
> >> >> this newsgroup's throat, stop abusing the group.
>
> >> > No, the abuse in all cases starts with you. In this case, you injected
> >> > my name into a group
>
> >> No, you introduced your name into this group by posting here in the
> >> first place.
>
> > Stop playing with words.
>
> What are you talking about? It was you, not me, who introduced your name
> into comp.programming. I believe you did this some time in 2000. Are you
> claiming that isn't true? The archives suggest that it /is/ true.

You are being deliberately deceptive. You knew that by "group" I meant
"this group of people responding to Harter's challenge", and you
entered this group, this thread, this discussion BEFORE I did to
inject my name in it. For this reason, you are completely responsible
for its disruption because people have a right to respond to
harassment such as yours.


>
> > You do so exclusively to start trouble and
> > spread confusion,
>
> If the facts confuse you, that's your problem, not mine.

Yeah, I'm real confused, mate, as to what you are doing in this
discussion. You haven't contributed anything to Harter's question as
to whether it's possible to print a tree inorder without using an
unbounded data structure and thereby making the solution what he calls
O(1) memory, and what I call O'(1) memory. You entered on a campaign
of personal destruction.

>
> > as well as to cover your own errors and exagerrate
> > others failings in a newsgroup where there are several people (Harter,
> > Julienne, Hsieh and Bacarisse among them) far more qualified than you,
>
> I am happy to concede that there are plenty of people in this newsgroup who
> know far more about programming than I do, and I read their articles on
> programming with interest and enthusiasm. And then, of course, there's
> you.

I have read entirely too much of this mealy-mouthed boilerplate from
you. You are an incompetent and are here under the illusion that you
are a thought leader.

>
> > and whose contributions you obscure by your insistence on pursuing and
> > harassing people as well as injecting their name into discussions in
> > order to activate their replies. By "group" it was obvious that I
> > meant "the group of people in this newsgroup responding to Harter".
>
> Oh, I see. Well, that may have been obvious to you, but it wasn't obvious
> to me. Your meaning was ambiguous. When I wrote something a few weeks ago
> that was unintentionally ambiguous, you claimed it meant I couldn't write.
> To remain consistent, you must now claim that /you/ can't write. But don't
> worry if you can't bring yourself to do so. It is already widely known.

No, you cannot read a word used normally rather than in its usenet
sense, because you have no life outside this group and do not read
books.

>
> > You inserted my name BEFORE I posted my contribution re the simulation
> > of a stack in an integer.
>
> Well, yes, I gave a correct answer to a topical question about the
> newsgroup. Someone asked what had happened to this newsgroup, and I
> answered that you had happened to it.

There is no consensus around this viewpoint. In fact, I have empowered
several technical discussions with technical challenges, which brought
out the abilities of truly competent programmers like Ben Bacarisse et
aliter. Because you did not do well, confusing && and || on one
challenge and thoughtlessly posting a solution undefined for overlap
in the other, you are now making smartass and wrong answers to
inappropriate questions.

>
> I find it strange that you complain about my use of your name, given that
> you use mine and those of others here with such carefree abandon. But then
> you always were blind to your own hypocrisy.

The difference is simple, and it was remarked in meatspace on Lamma
Island. One of my online enemies met me at an island party and
thereafter remarked that to his surprise I seemed an O.K. bloke. The
web master made a similar observation at the same time.

This is because I do not start trouble. This thread was free of our
discussions. You couldn't stand this so you used my name, not even
having the decency to observe online courtesy and use my posting name.

I don't mention your name when discussing whether a stack is O'(1) in
code insofar as it is implemented in a fixed memory array or always-
bounded memory. I would if you had anything useful to say on this
issue, but you do not.

But because you didn't have anything to say, you had to say something,
so, you restarted the campaign of personal harassment and gossip in
this discussion, this thread, this group.

And I responded in a world in which people are being increasingly
socialised by corporations not to do so. But my written responses are
no more brisk than my first manager's responses to his own boss in
1971. He was being forced to use an out of date machine to satisfy
requests while IBM lied to the university, and he expressed himself
with a pushback and a truth-to-power that was common back then in
technical circles, and the truth of which put Neil Armstrong on the
moon.

It was only later that people were counseled not to do this which is
why it shocks people when I respond to abuse like a man.

Get on topic or get out, Heathfield. A person's patronymic is FAR
more distant from a programming topic than society and politics,
especially when that person is being dragged through the mud.

>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> Google users: <http://www.cpax.org.uk/prg/writings/googly.php>

> "Usenet is a strange place" - dmr 29 July 1999

Richard Heathfield

5 Mar 2008, 07:57:56
spinoza1111 said:

<snip>

> Because you did not do well, confusing && and ||

By making that claim, you demonstrate either your malice or your
incompetence. Hanlon's Razor applies.

<snip>

spinoza1111

5 Mar 2008, 07:58:46

Interesting solution: but it depends on the fact that you navigate by
finding t, a "temporary pointer", using a binary tree search. The
problem is that to find t, you have to match it by taking the address
of each searched node.

It's news to me, as a layperson with respect to computational
complexity (who completed the class at uni a long time ago and has
become a layperson), that computational complexity so airily allows
one to find a magical extra fact about data elements, to wit, their
address. Don't you match t with the address in the search, or am I
mistaken?

Most programming languages other than a certain infantile disorder
(sorry, but that's C) do not allow you to have the address of a data
item.

OK, suppose you replace t as an address by an array subscript in a
programming language where trees are in arrays. The problem here is
that there is no magic that allows you to find the subscript when you
are at an array entry in some situation where you have forgotten it:
in non-C, if you pass the entry to a procedure, that procedure does
not know the subscript.

OK, so I put array subscripts in the tree when not coding in C.

Oops. I am no longer order prime n memory-wise (I use "prime" because
order notation is misused in reference to code).

Paul, you're a smart guy and I may be just a troublemaker. But I am
honestly at a loss to understand this Strange Brew of C and abstract
computational complexity theory in which scalars know their address.

whereAt = &topOfTree;
while (true)
{
if (&whereAt == t) break;
... (tree binary search me too lazy to write)
}

If your searchback for t takes the address of whereAt, you are doing
"C computational complexity", not "computational complexity".

Ben Bacarisse

5 Mar 2008, 11:33:20
spinoza1111 <spino...@yahoo.com> writes:

> On Mar 4, 6:58 am, Paul Hsieh <websn...@gmail.com> wrote:

<snip>


>>  If b corresponds to the above node,
>> then the search will look directly for t using a typical binary tree
>> search (this requires no more than O(1) space and O(ln n) steps,
>> obviously.)

<snip>


> Interesting solution: but it depends on the fact that you navigate by
> finding t, a "temporary pointer", using a binary tree search. The
> problem is that to find t, you have to match it by taking the address
> of each searched node.
>
> It's news to me, as a layperson with respect to computational
> complexity (who completed the class at uni a long time ago and has
> become a layperson), that computational complexity so airily allows
> one to find a magical extra fact about data elements, to wit, their
> address. Don't you match t with the address in the search, or am I
> mistaken?
>
> Most programming languages other than a certain infantile disorder
> (sorry, but that's C) do not allow you to have the address of a data
> item.
>
> OK, suppose you replace t as an address by an array subscript in a
> programming language where trees are in arrays. The problem here is
> that there is no magic that allows you to find the subscript when you
> are at an array entry in some situation where you have forgotten it:
> in non-C, if you pass the entry to a procedure, that procedure does
> not know the subscript.

If you are using array subscripts, that is how the node should be
identified. All your operations will take a node identified by a
subscript. If you forget that subscript, you will be in trouble, but
the solution is simple: don't forget it -- pass the node to the
procedure by passing the index.

It is possible to break any algorithm by using the wrong data
structure or representation, but in an array representation of a tree,
nodes should be identified by subscripts. In a notation that uses
pointers, the nodes should be identified by pointer.

BTW, I don't yet understand Paul's suggestion in enough detail to
comment on it, but finding a node (or a node's parent as I currently
understand the suggestion) either by subscript or by pointer is not
part of the trouble.

--
Ben.

moi

5 Mar 2008, 12:36:29
On Fri, 29 Feb 2008 18:30:14 -0800, Gene wrote:

> On Feb 29, 12:56 pm, c...@tiac.net (Richard Harter) wrote:
>> Here is a little challenge - print the contents of a binary tree in
>> order only using O(1) additional memory. In other words you can't use
>> recursion or simulate it with a stack. As far as I know you can't do
>> this without some kind of hack. Also AFAIK there is no way to do this
>> that should ever be used in real code. Rules? We don't need no
>> steenking rules! Use whatever language you like, though I suppose this
>> is a natural for C based languages. Tag bit solutions (i.e. mark a
>> link as visited when we go down it and unmark it when we come back up)
>> don't count because (a) everyone thinks of it, and (b) it (debatably)
>> uses O(log n) memory. Platform dependencies are okay - this a hack,
>> after all.
>
> I think this was done 25 years ago:
>
> G. Lindstrom. Scanning list structures without stacks or tag bits.
> Information Processing
> Letters, 2(2):47-51, June 1973.
>
> Even if you use the classical Deutsch-Schorr-Waite algorithm, which
> needs 2 tag bits per node, you can steal them from the low order bits of
> the child pointers by insisting that nodes are allocated on 2-byte
> boundaries, which most modern systems already do...
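The low-bit stealing Gene mentions can be sketched like this (a hack that relies on allocation alignment and so is platform-dependent, in the spirit of Harter's challenge; the names are mine):

```c
#include <stdint.h>

/* Steal a tag bit from the low-order bit of an aligned pointer.  This
   assumes all node pointers are at least 2-byte aligned (true for
   malloc on common platforms), so bit 0 is always zero and is free to
   use as a DSW-style mark bit.  The pointer must be untagged with
   tag_clear() before it is dereferenced. */
static void *tag_set(void *p)   { return (void *)((uintptr_t)p | 1u); }
static void *tag_clear(void *p) { return (void *)((uintptr_t)p & ~(uintptr_t)1); }
static int   tag_test(void *p)  { return (int)((uintptr_t)p & 1u); }
```

With two child pointers per node this yields the two tag bits DSW needs without any extra storage.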

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

struct node {
    struct node *left;
    struct node *right;
    char *payload;
};

struct node *root = NULL;

void node_add(struct node **hnd, char *data);
void tree_print(int indent, struct node *p);
void walk_hs(struct node *r);
struct node *nibble_left(struct node **h);

int main(void)
{
    struct node *p;

    /* create some data */
    node_add(&root, "_E_");
    node_add(&root, "_I_");
    node_add(&root, "_B_");
    node_add(&root, "_A_");
    node_add(&root, "_C_");
    node_add(&root, "_D_");
    node_add(&root, "_H_");
    node_add(&root, "_F_");
    node_add(&root, "_G_");

    printf("\nThe tree:\n");
    tree_print(0, root);

    printf("\n1:");
    walk_hs(root);

    printf("\n2:");
    walk_hs(root);

    printf("\n3:");
    while ((p = nibble_left(&root)) != NULL) {
        printf("%s ", p->payload);
        free(p);
    }
    printf("\nDone");
    printf("\n");

    return 0;
}

void node_add(struct node **hnd, char *data)
{
    while (*hnd) {
        if (strcmp((*hnd)->payload, data) >= 0) hnd = &(*hnd)->left;
        else hnd = &(*hnd)->right;
    }
    *hnd = malloc(sizeof **hnd);
    (*hnd)->payload = data;
    (*hnd)->left = NULL;
    (*hnd)->right = NULL;
}

struct node *nibble_left(struct node **h)
{
    struct node *t;

    while ((t = *h) != NULL) {
        if (t->left) h = &t->left;
        else {
            *h = t->right; /* t->right = NULL; */
            return t;
        }
    }
    return t;
}

void tree_print(int indent, struct node *p)
{
    int ii;

    if (!p) { printf("NuLl\n"); return; }

    printf("%s\n", p->payload);

    if (p->left) {
        for (ii = 0; ii < indent; ii++) putchar(' ');
        printf("L="); tree_print(indent + 2, p->left);
    }

    if (p->right) {
        for (ii = 0; ii < indent; ii++) putchar(' ');
        printf("R="); tree_print(indent + 2, p->right);
    }
}

#define VISIT(p) printf("%s ", \
    ((p) && (p)->payload) ? (p)->payload : "Lull")
#define SHOW_IT(m) fprintf(stderr, "\n%s [c=%s p=%s] {l=%p:%s r=%p:%s} %s", \
    (m), \
    (cur) ? cur->payload : "Null", \
    (par) ? par->payload : "Null", \
    (void *)cur->left, cur->left ? cur->left->payload : "Null", \
    (void *)cur->right, cur->right ? cur->right->payload : "Null", \
    cur->left < cur->right ? "<<<<" : ">>>>")
#undef SHOW_IT
#define SHOW_IT(m) /* debugging disabled */
#define IS_LEAF(p) ((p) && (p)->left == NULL && (p)->right == NULL && (p) != Lroot)
#define SWAP(a,b) do { \
    char t[sizeof(a)]; \
    memcpy(t, &(a), sizeof t); \
    (a) = (b); \
    memcpy(&(b), t, sizeof t); \
} while (0)

/*
 * Treewalk without the need for auxiliary storage.
 * Adapted from D.S. Hirschberg and S.S. Seiden
 * http://www.ics.uci.edu/~dan/pubs/trav.ps
 * The numbering of the states/labels is from the text.
 *
 * The real trick (the "Lindstrom hack") is to use the difference
 * between the *numerical* values of the left and right pointers to
 * store an extra bit of information. (Marked *Magic* in the source.)
 * This of course only works for non-empty nodes (since NULL==NULL),
 * which causes a lot of duplicated paths (which depend on IS_LEAF())
 * in the algorithm.
 * Strictly, the tree is not left unaltered: some of the pointers may
 * have been swapped the first time. The *contents*, however, are
 * unchanged. The authors call it 'semantically identical'.
 *
 * The algorithm needs a special sentinel pointer value (Lroot) to
 * represent the (non-existing) parent of the root node.
 * This value needs to be different from any pointer value, and from NULL.
 * In my implementation, the address of a local variable is used
 * for this purpose.
 *
 * [An important detail (that kept me busy for a day :-[) is that the
 * IS_LEAF() macro should yield FALSE for the Lroot sentinel.
 * Yes, it was in the text.]
 */
void walk_hs(struct node *rt)
{
    struct node *tmp, *cur, *par, dummy = {0, 0, "Lroot"};
#define Lroot (&dummy)

    /* step_1: initialise */
    if (!rt) goto step_9;
    cur = rt;
    par = Lroot;
step_2: /* Order children */
    SHOW_IT("2:Init(C)");
    if (cur->left && cur->right
        && !IS_LEAF(cur->left) && !IS_LEAF(cur->right)
        && cur->left > cur->right) { /**Magic**/
        SHOW_IT("2:Swap(I)");
        SWAP(cur->left, cur->right);
        SWAP(cur->left[0], cur->right[0]);
    }
    if (!cur->left) goto step_4;
    if (IS_LEAF(cur->left)) {
        SHOW_IT("2:Visit(L)");
        VISIT(cur->left);
        goto step_4;
    }
step_3: /* Traverse left */
    SHOW_IT("3:Push(L)");
    tmp = cur->left;
    cur->left = par;
    par = cur;
    cur = tmp;
    goto step_2;
step_4: /* (Finished left subtree) inorder visit */
    SHOW_IT("4:Done(L)");
    VISIT(cur);
    if (!cur->right) goto step_6;
    if (IS_LEAF(cur->right)) {
        SHOW_IT("4:Visit(R)");
        VISIT(cur->right);
        goto step_6;
    }
    /* step_5: Traverse right */
    if (!cur->left || IS_LEAF(cur->left)) {
        SHOW_IT("5a:Push(R)");
        tmp = cur->right;
        cur->right = par;
        par = cur;
        cur = tmp;
        goto step_2;
    }
    /* else */
    SHOW_IT("5b:Swap");
    SWAP(cur->left, cur->right);
    goto step_3;
step_6: /* (Finished right subtree) postorder visit */
    SHOW_IT("6:Done(R)");
    if (par == Lroot) goto step_9;
    if (!par->left || IS_LEAF(par->left)) {
        /* step_7: There had been no left subtree */
        SHOW_IT("7:Pop(R)");
        tmp = cur;
        cur = par;
        par = cur->right;
        cur->right = tmp;
        goto step_6;
    }
    /* else */
    /* step_8: There had been a left subtree */
    SHOW_IT("8:Pop(L)");
    tmp = cur;
    cur = par;
    par = cur->left;
    cur->left = tmp;
    if (!cur->right || IS_LEAF(cur->right)
        || cur->left < cur->right) { /**Magic**/
        goto step_4;
    } else {
        SHOW_IT("8:Swap");
        SWAP(cur->left, cur->right);
        goto step_6;
    }
step_9: /* Done! */
    return;
}

/* Eof
* HTH,
* AvK
*/

spinoza1111

6 Mar 2008, 00:22:01
On Mar 6, 12:33 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:

Perhaps I was confused, then. Yes, if t is a subscript it can be
matched against the subscript of each array entry when the tree is
stored in an array. My concern was that C gives something "extra" in
the & operator which I thought Paul was using. I guess this is not so
and I withdraw the objection.


>
> It is possible to break any algorithm by using the wrong data
> structure or representation, but in an array representation of a tree,
> nodes should be identified by subscripts.  In a notation that uses
> pointers, the nodes should be identified by pointer.
>
> BTW, I don't yet understand Paul's suggestion in enough detail to
> comment on it, but finding a node (or a node's parent as I currently
> understand the suggestion) either by subscript or by pointer is not
> part of the trouble.

I still maintain that the trouble is trying to do abstract complexity
theory in C.

>
> --
> Ben.

spinoza1111

6 Mar 2008, 02:43:33
On Mar 5, 8:57 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111said:
>
> <snip>
>
> > Because you did not do well, confusing && and ||
>
> By making that claim, you demonstrate either your malice or your
> incompetence. Hanlon's Razor applies.

It doesn't demonstrate incompetence, merely (minimal)
competence...knowing the difference.

Nor does it demonstrate malice, where this is the intention to start,
as you start, or continue, as you continue, attacks on personalities
offtopic. In one sense my point is that "we all make errors, and
because of this, we need not to personalize errors in our writing
here": not a malicious point. In another my point is "if you
maliciously initiate attacks on personalities, as you have done
repeatedly here, or participate in them, as you have with Schildt, you
must expect replies worded as attacks, whether as *tu quoque* meant in
anger not malice, or as exemplary of why you should zip the fuck up
for a fucking change".

Your writing and reading is seriously deficient. All malice is a form
of anger, but not all anger, a form of malice. Malice (for example as
used in the language of the law of libel) starts something from
nothing, as did Iago in the old play: "Hell and night must bring this
monstrous birth to the world's light."

Sure, your behavior makes me angry: because it is malicious. Your
latest stunt was to inject this discussion with my name before I had
contributed.

Get out of this discussion and this newsgroup.

moi

15 Mar 2008, 11:14:39
On Wed, 05 Mar 2008 18:36:29 +0100, moi wrote:

Solved ?

AvK

Paul Hsieh

27 Mar 2008, 04:52:59
On Mar 4, 11:41 pm, Gene <gene.ress...@gmail.com> wrote:
> On Mar 4, 4:56 pm, Paul Hsieh <websn...@gmail.com> wrote:
> > Well ok, that's pretty devious as a hack. But again, it modifies the
> > tree. To be clear, my last solution does not modify the tree in any
> > way, it uses O(1) additional space, and performs the whole operation
> > in O(n*ln(n)) which is the minimum possible recursive time anyways.
>
> This isn't right: "O(n*ln(n)) ... is the minimum possible recursive
> time anyways." The usual stack algorithm visits each node 3 times,
> each requiring constant time, so it's O(n). If you are assuming the
> tree is a BST (sorted) _and_ balanced, then O(n log n) is not
> remarkable because you could do n searches in that time, each time
> looking for the next larger element. If the tree isn't balanced, then
> maybe you have something good.

Right. As moi points out, binary tree traversal should really be O(n).

> [...] I'm sorry I can't grok your
> explanation. Is there working code?

There is now:

http://www.pobox.com/~qed/treeO1.c

moi

unread,
27 Mar 2008, 14:26:03
to
On Thu, 27 Mar 2008 01:52:59 -0700, Paul Hsieh wrote:

That looks solid.
My guess is that it is about O(N log(log(N))) ???

If I understand it correctly, it just works by dropping an
anchor every time it takes a left turn. Once the left subtree
is exhausted, it visits the anchor, lifts it and visits the right
subtree. If it needs to crawl up but has no anchor, it performs a
re-search in order to reposition the anchor (always on the path of
the previous anchor, IIUC).


But: your solution does need to *know the ordering function* that was
used to construct the tree. (the Larsson-variant I posted three weeks ago
does not. But it messes with the pointer values...)

AvK

Paul Hsieh

unread,
27 Mar 2008, 17:20:15
to
On Mar 27, 11:26 am, moi <r...@invalid.address.org> wrote:
> On Thu, 27 Mar 2008 01:52:59 -0700, Paul Hsieh wrote:
> > There is now:
>
> >    http://www.pobox.com/~qed/treeO1.c
>
> That looks solid.
> My guess is that it is about O*N log(log(n)) ???

It's O(n*log(n)) when the tree is balanced; otherwise it's O(n**2) worst
case.

> If I understand it correctly, it just works by dropping an
> anchor every time it takes a left turn. Once the left subtree
> is exhausted, it visits the anchor, lifts it and visits the right
> subtree. If it needs to crawl up but has no anchor, it performs a re-
> search in order to reposition the anchor (always on the path of the
> previous anchor, IIUC).

Uhh ... close. The anchor is ignored if the final target node has a
right subtree; the new anchor becomes the leftmost grandchild of the
head of that right subtree (which is represented symbolically as just
the head of the right subtree and a boolean flag that tells the
traversal to go leftward maximally). If you follow the paths from root
to end for a standard iteration and look carefully at the last
direction travelled, you will see that the next node must be upward if
it was left, and rightward then maximally leftward if it was right. To
correctly move upward, you need to have a lagging pointer.

> But: your solution does need to *know the ordering function* that was
> used to construct the tree. (the Larsson-variant I posted three weeks ago
> does not. But it messes with the pointer values...)

Oh yeah. I thought that could be assumed as given. Without a
way of knowing the structure implicitly, you need to store hints for
traversing it, since we have no stack and no way to *know* how the
structure was constructed. How do we motivate this scenario?

moi

unread,
27 Mar 2008, 19:38:56
to
On Thu, 27 Mar 2008 14:20:15 -0700, Paul Hsieh wrote:

> On Mar 27, 11:26 am, moi <r...@invalid.address.org> wrote:
>> On Thu, 27 Mar 2008 01:52:59 -0700, Paul Hsieh wrote:
>> > There is now:
>>
>> >    http://www.pobox.com/~qed/treeO1.c
>>
>> That looks solid.
>> My guess is that it is about O(N log(log(N))) ???
>
> It's O(n*log(n)) when the tree is balanced; otherwise it's O(n**2)
> worst case.


Oops. My guess (based on a balanced tree) assumed that
1) given a valid "anchor", the next node could be found in one hop;
2) given a lifted anchor, an O(log N) search would be needed.

But it was just a layman's wild guess.


> Uhh ... close. The anchor is ignored if the final target node has a
> right subtree; the new anchor becomes the leftmost grandchild of the
> head of that right subtree (which is represented symbolically as just
> the head of the right subtree and a boolean flag that tells the
> traversal to go leftward maximally). If you follow the paths from root
> to end for a standard iteration and look carefully at the last
> direction travelled, you will see that the next node must be upward if
> it was left, and rightward then maximally leftward if it was right.
> To correctly move upward, you need to have a lagging pointer.

In fact, you'll need a linked list (or stack) of anchors to avoid the
search. My first attempt to store this additional pointer by rotating it
through the left and right offspring pointers failed miserably...

>
>> But: your solution does need to *know the ordering function* that was
>> used to construct the tree. (the Larsson-variant I posted three weeks
>> ago does not. But it messes with the pointer values...)
>
> Oh yeah. I thought that that could be assumed as given. Without a way
> of knowing the structure implicitly, you need to store hints for
> traversing it, since we have no stack and no way to *know* how the
> structure was constructed. How do we motivate this scenario?

Well, sorry, I did not know the Larsson hack before. It was an
eye-opener for me. I still think it swings. And I think it very much
resembles the dancing-links trick (which also does a
dance-around-the-clock kind of shuffle).
The actual lesson I learned from it is that you only need to store one
bit of information per node (not two bits, as I thought).
But we can afford only zero bits per node...

Back to the old drawing board...

AvK
