# reversing list

### M. NG

Nov 16, 1998
I remember Lisp has a "reverse" function that
flips a list from head to tail... I can't find
an equivalent in Tcl... is there one in Tcl?

What would be the most efficient way to reverse a
Tcl list??

Thanks.

### Michael and Clairone Delaney

Nov 17, 1998
I think there's an "lsort -decreasing" option that will sort from
largest to smallest.

M. NG <m...@jps.net> wrote in article <36510B6E...@jps.net>...

### Christopher Nelson

Nov 17, 1998
M. NG wrote:
>
> I remember Lisp has a "reverse" function that
> flips a list from head to tail... I can't find
> an equivalent in Tcl... is there one in Tcl?

No, there's no native reverse command.

> What would be the most efficient way to reverse a
> Tcl list??

I remember this being discussed but couldn't find it on dejanews.
Hmm...

```tcl
set reverse {}
foreach e $list {
    set reverse [concat $e $reverse]
}
```

is fairly good.

Chris
--
Rens-se-LEER is a county. RENS-se-ler is a city. R-P-I is a school!

### Alexandre Ferrieux

Nov 17, 1998
Christopher Nelson wrote:
>
> set reverse {}
> foreach e $list {
>     set reverse [concat $e $reverse]
> }
>
> is fairly good.

Algorithm alley...

For long lists, you might prefer:

```tcl
set reverse {}
set n [expr {[llength $list]-1}]
for {set i $n} {$i>0} {incr i -1} {
    lappend reverse [lindex $list $i]
}
```

I've tested it on the list of integers from 1 to 10,000.
It takes 200ms on my Ultra-1-170, while the first one above takes
1200ms.

FWIW...

-Alex

### Robin Becker

Nov 17, 1998
In article <365192...@cnet.francetelecom.fr>, Alexandre Ferrieux
<alexandre...@cnet.francetelecom.fr> writes

>Christopher Nelson wrote:
>>
>> set reverse {}
>> foreach e $list {
>>     set reverse [concat $e $reverse]
>> }
>>
>> is fairly good.
>
>Algorithm alley...
>
>For long lists, you might prefer:
>
> set reverse {}
> set n [expr {[llength $list]-1}]
> for {set i $n} {$i>0} {incr i -1} {
>     lappend reverse [lindex $list $i]
> }
>
>I've tested it on the list of integers from 1 to 10,000.
>It takes 200ms on my Ultra-1-170, while the first one above takes
>1200ms.
>
>FWIW...
>
>-Alex
it's not clear that [lindex $list $i] is a constant-time operation.
In general this isn't so. C/asm code for list reversal can be written
as a linear-time operation, i.e. cost proportional to the length of the
list. It's difficult to see why this should be true for Tcl; my tests
suggest that Alex's code is nearly linear until I get very large lists,
n > 1e6, when I suspect that memory is a problem.

A more obviously linear algorithm would require some notion of a pointer,
or an lprepend built-in.
--
Robin Becker

### M. NG

Nov 17, 1998, to Robin Becker
Thanks Robin and others!

I wonder if there is a reason why Tcl does not have
an lprepend, or a built-in that
removes the last element of a list (without having
to get the length of the list first...)

### Costas Menico

Nov 18, 1998
"M. NG" <m...@jps.net> wrote:

>I remember Lisp has a "reverse" function that
>flips a list from head to tail... I can't find
>an equivalent in Tcl... is there one in Tcl?

You can also do it using the classic recursion:

```tcl
proc reverse {xlist} {
    set char [lindex $xlist 0]
    if {[llength $xlist]>0} {
        return "[reverse [lrange $xlist 1 end]] $char"
    }
}
```

Costas

### Alexandre Ferrieux

Nov 18, 1998
Robin Becker wrote:
>
> it's not clear that [lindex $list $i] is a constant time operation.

At least it was the simplified image I had in mind, because I was told
so. Never checked, though :)

> ...

> It's difficult to see why this should be true for Tcl

Why shouldn't it? Aren't Tcl's lists (lazily reallocated when growing)
vectors indeed? The time of linked lists was back in the pre-7.x days,
wasn't it?

> my tests
> suggest that Alex's code is nearly linear until I get very large lists
> n>1e6 when I suspect that memory is a problem.

Yes. That's what is usually called "linear"...
(BTW: are you sure that it's not the "lappend" which slows down first in
this case ?)

-Alex

### Alexandre Ferrieux

Nov 18, 1998
M. NG wrote:
>
> Thanks Robin and others!

Our pleasure. BTW, there was a typo in my code, thanks to Paul Duffin
for pointing it out: the test in the for... loop should read "$i>=0".

> I wonder is there a reason why Tcl does not have
> a lprepend,

I guess, because the current implementation aims at maximal efficiency
for [lindex], which implies vectors instead of linked lists, which in
turn means that lprepend is necessarily *slow* (need to 'scroll' the
whole vector)...

> or a built in thing that
> removes the last element of a list (without having
> to get the length of a list...)

I've not checked, but is [lrange $l 0 [expr {[llength $l]-2}]] so bad?

Okay, I understand why Lispers can be frustrated by that.
If the aim is to implement a *stack*, the code above looks like your
best bet today !

-Alex

### Klaus Riehm

Nov 18, 1998
To remove the last element of a list

    lreplace $l end end

springs to mind without a need to get the length of the list explicitly.
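For what it's worth, the tail tricks in this thread compose into a tiny stack. A hedged sketch: the proc names push and pop are invented here, not Tcl built-ins.

```tcl
# Hypothetical stack helpers built on a Tcl list (not core commands).
proc push {stackVar elem} {
    upvar 1 $stackVar stack
    lappend stack $elem                  ;# amortized O(1) at the tail
}
proc pop {stackVar} {
    upvar 1 $stackVar stack
    set top [lindex $stack end]          ;# peek the last element
    set stack [lreplace $stack end end]  ;# drop it, no [llength] needed
    return $top
}
```

So `push s a; push s b; pop s` yields b and leaves the stack holding a.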

Klaus

--
with kind regards,
Klaus Riehm, Siemens AG, ICP CS XS COM DCE, 81730 Muenchen
Phone: +49/(0)89/636-43066   Fax: +49/(0)89/636-47655
mailto:Klaus...@mch.sni.de

### Alexandre Ferrieux

Nov 18, 1998
Klaus Riehm wrote:
>
> To remove the last element of a list
>
>     lreplace $l end end
>
> springs to mind without a need to get the length of the list explicitly.

Wow! I had completely forgotten [lreplace], thanks!
BTW, this also gives a solution to the [lprepend] issue:
[lreplace $list 0 -1 $elem]

-Alex
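For the record, [linsert] gives the same prepend effect directly, by inserting before index 0. A minimal sketch of the [lprepend] idea discussed above; the proc name is the thread's hypothetical, not a core command:

```tcl
# Hypothetical lprepend (not a built-in), written with [linsert].
proc lprepend {listVar elem} {
    upvar 1 $listVar l
    set l [linsert $l 0 $elem]   ;# O(n): the whole vector shifts by one
}
```

Usage: `set l {b c}; lprepend l a` leaves `a b c` in l.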

### Robin Becker

Nov 18, 1998
In article <365285...@cnet.francetelecom.fr>, Alexandre Ferrieux
<alexandre...@cnet.francetelecom.fr> writes
so lists are implemented as vectors. I must look at the generic dir more
often. This must be the most depressing thing I've heard for ages;
doesn't this make all the assumptions about lists wrong, e.g. lindex is
O(1) but linsert is O(n), etc.? I begin to understand why Tcl has mostly
value semantics for lists. There's no advantage in doing it any other way
if vectors have to be copied around.
--
Robin Becker

### Alexandre Ferrieux

Nov 18, 1998
Robin Becker wrote:
>
> so lists are implemented as vectors. I must look at the generic dir more
> often.

:^)

> This must be the most depressing thing I've heard for ages;
> doesn't this make all the assumptions about lists wrong eg lindex is
> O(1) but linsert is O(n) etc.

Think of the {array,list} pair: they are complementary. If you need
something quickly mutable, aim for arrays.
If you want a more read-only thing, aim for lists. I agree this doesn't
cover the whole landscape, but it comes close: how many happy [O(1)
lindex] guys for a sad [O(n) linsert] one? (Remember also that lappend
is O(1).)

Now let's investigate: is there room for a new, more pointer-oriented
(linked-list) type?
Currently, pointer semantics is obtained by name indirection (arrays for
arbitrary strings, lists for compact and bounded integers). This makes
sense in the scripting framework, in most situations (indirection
overhead on the same scale as other, language-level ones). But can you
argue otherwise?

> I begin to understand why Tcl has mostly
> value semantics for lists. There's no advantage doing it any other way
> if vectors have to be copied around.

Yup.

-Alex

### Robin Becker

Nov 18, 1998
In article <3652AA...@cnet.francetelecom.fr>, Alexandre Ferrieux
<alexandre...@cnet.francetelecom.fr> writes
The vector list implementation makes lappend cheapest anyhow.
--
Robin Becker

### Robin Becker

Nov 18, 1998
In article <3652DA...@cnet.francetelecom.fr>, Alexandre Ferrieux
<alexandre...@cnet.francetelecom.fr> writes
for every algorithm there's an optimal data representation. As lappend
is worst-case O(n), Alex's reversal alg. is still OK compared to the
pointer version. On the other hand, any algorithm involving repeated
modifications to lists is difficult; I'm not sure what the O() value is
for the Tcl_Hash stuff, but collisions make it >O(1). This probably means
B-trees etc. in Tcl are a no-no? :)
--
Robin Becker

### Heribert Dahms

Nov 19, 1998
In <36523f1f...@news.mindspring.com> costas...@mindspring.com writes:

: You can also do it using the classic recursion:
:
: proc reverse {xlist} {
:     set char [lindex $xlist 0]
:     if {[llength $xlist]>0} {
:         return "[reverse [lrange $xlist 1 end]] $char"
:     }
: }

But isn't this the classic string/list problem? What about (untested):

    return [concat [lrange $xlist 1 end] [list $char]]

Bye, Heribert (da...@ifk20.mach.uni-karlsruhe.de)

### Donal K. Fellows

Nov 19, 1998
In article <72vlpl\$6t1\$1...@news.rz.uni-karlsruhe.de>,

Heribert Dahms <DA...@ifk20.mach.uni-karlsruhe.de> wrote:
> But isn't this the classic string/list problem? What about (untested):
>     return [concat [lrange $xlist 1 end] [list $char]]

Unfortunately, the last time I looked at [concat] it lost list
representations, since it worked at the level of strings anyway (which
is very useful in other areas where it is used, like command
construction).

The efficient form is:

```tcl
proc reverse {xlist} {
    if {[llength $xlist] > 0} {
        linsert [reverse [lrange $xlist 1 end]] end [lindex $xlist 0]
    }
}
```

But recursive algorithms over lists are not a good idea in Tcl due to
the recursion limiting, so you'd be better off with a loop-based
algorithm instead. Given that [llength] and [lindex] are O(1), a
suitable [for] will do the business...

```tcl
proc reverse {xlist} {
    set rlist $xlist
    for {
        set j 0
        set i [expr {[llength $xlist]-1}]
    } {$i>=0} {
        incr i -1
        incr j
    } {
        set rlist [lreplace $rlist $j $j [lindex $xlist $i]]
    }
    set rlist
}
```

This even has the advantage of keeping the number of memory
allocations _strictly_ bounded, or at least it would if there was a
way of getting the contents of the rlist variable without
unnecessarily bumping up the reference count. I'm not sure if the
compiler is sufficiently sophisticated yet.

Donal.
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
Department of Computer Science, University of Manchester, U.K. +44-161-275-6137
--
If you staple a penguin to the head of a gnu, the result won't catch herring...

### Donal K. Fellows

Nov 19, 1998
In article <ebTWhAAo...@jessikat.demon.co.uk>,

Robin Becker <ro...@jessikat.demon.co.uk> wrote:
> for every algorithm there's an optimal data representation. As lappend
> is worstcase O(n) Alex's reversal alg. is still ok compared to the
> pointer version. On the other hand any algorithm involving repeated
> modifications to lists is difficult; I'm not sure what the O() value is
> for the Tcl_Hash stuff, but collisions make it >O(1) this probably means
> B-trees etc in tcl is a no no ? :)

Well, keys are hashed to linked lists, so lookup and key-insertion are
O(m + n**k) where m is the length of the key, n is the number of items
in the array, and k is between 0 and 1, and depends on the loading
factors of the hashtable and how "good" the hash generator is. Worst
case (and hence the _strict_ O() number) is O(m+n) where all the keys
hash to the same bucket. Best case is (virtually) constant access
O(m) where all the keys hash to different buckets and the table is not
overloaded.
Analysing hashtable performance is difficult, but the stuff in
tclHash.c is pretty good (given that it is a perfect, extensible hash,
that is. You can do better if you are willing to ignore clashes...)

### Christopher Nelson

Nov 19, 1998
Donal K. Fellows wrote:
> ... recursive algorithms over lists are not a good idea in Tcl due to

> the recursion limiting, so you'd be better off with a loop-based
> algorithm instead. Given that [llength] and [lindex] are O(1), a
> suitable [for] will do the business...
>
> proc reverse {xlist} {
>     set rlist $xlist
>     for {
>         set j 0
>         set i [expr {[llength $xlist]-1}]
>     } {$i>=0} {
>         incr i -1
>         incr j
>     } {
>         set rlist [lreplace $rlist $j $j [lindex $xlist $i]]
>     }
>     set rlist
> }
>
> This even has the advantage of keeping the number of memory
> allocations _strictly_ bounded, or at least it would if there was a
> way of getting the contents of the rlist variable without
> unnecessarily bumping up the reference count. I'm not sure if the
> compiler is sufficiently sophisticated yet.

On the subject of a "sophisticated" compiler, are there any
optimizations in Tcl? A (probably poor) example off the top of my head:

```tcl
foreach x $y {
    set a $x + [expr 3.14 * 2]
    set b $x + [expr 3.14 * 2]
}
```

vs.

```tcl
set t [expr 3.14 * 2]
foreach x $y {
    set a $x + $t
    set b $x + $t
}
```

An optimizer would turn the first into the second and maybe even treat
it as:

```tcl
set t [expr 3.14 * 2]
foreach x $y {
    set b [set a [expr $x + $t]]
}
```

In other words, handle loop invariants, unrolling loops, etc. Honestly,
my language theory background is in compilers, not interpreters, so I'm
not even sure such things are possible in an interpreter but surely SOME
optimizations are possible. Are any attempted?

### Alexandre Ferrieux

Nov 19, 1998
Donal K. Fellows wrote:
>
> set rlist [lreplace $rlist $j $j [lindex $xlist $i]]
>
> This even has the advantage of keeping the number of memory
> allocations _strictly_ bounded,

Sorry if I'm being dense: can you explain why this is better than
[lappend]? Naively, I'd say that this [lreplace] method is much more
costly in both time and memory...

-Alex

### Robin Becker

Nov 19, 1998
In article <365438...@cnet.francetelecom.fr>, Alexandre Ferrieux
<alexandre...@cnet.francetelecom.fr> writes
yes, lreplace is O(n), so Donal's alg ends up as O(n^2)
--
Robin Becker

### Ted E. Dunning

Nov 19, 1998

I very often find myself wishing that lindex would accept more than
one index. The syntax that would be (slightly) handier would be to
accept either a number or a list of numbers, but the internal code
would probably be a bit cleaner if the implementation accepted a
variable number of arguments, the last n-1 of which were indices.
Together with a function to quickly generate a list of integers,
reverse should reduce to the following:

```tcl
proc lrevers {list} {
    lindex $list [integers [expr [llength $list]-1] 0]
}
```

If there is sufficient sentiment in favor of extending lindex this
way, I would be happy to generate patches and submit them to
scriptics.
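The [integers] helper assumed above could be scripted today. A rough sketch; the name and behaviour are only what the post implies (a list of integers counting from one bound to the other, in either direction):

```tcl
# Sketch of the [integers] generator assumed in lrevers above:
# returns the integers from $from to $to inclusive, up or down.
proc integers {from to} {
    set out {}
    set step [expr {$from <= $to ? 1 : -1}]
    for {set i $from} {$i != $to + $step} {incr i $step} {
        lappend out $i
    }
    return $out
}
```

For example, `integers 5 0` gives `5 4 3 2 1 0`, the index order a reversal needs.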

--
"It is a very sad thing that nowadays there is so little useless
information."
Oscar Wilde

### Alexandre Ferrieux

Nov 20, 1998
Ted E. Dunning wrote:
>
> I very often find myself wishing that lindex would accept more than
> one index. The syntax that would be (slightly) handier would be to
> accept either a number or a list of numbers, but the internal code
> would probably be a bit cleaner if the implementation accepted a
> variable number of arguments, the last n-1 of which were indices.
> Together with a function to quickly generate a list of integers,
> reverse should reduce to the following:
>
> proc lrevers {list} {
>     lindex $list [integers [expr [llength $list]-1] 0]
> }
>
> If there is sufficient sentiment in favor of extending lindex this
> way, I would be happy to generate patches and submit them to
> scriptics.

Instead of extending the existing (and inlined, optimized...) [lindex],
I'd prefer a new [lpermut]... Also, I'd advocate the
one-single-list-argument approach: the well-known [coords] vararg syntax
for polyline canvas items is so annoying (need to use "eval")!

Another thing: does this really motivate a new primitive? Agreed,
implementing it in C rather than with a [foreach-lappend-lindex] loop
would save some CPU, but do you need this so often? Is it time-critical?
Are you also aware that in the initial context (lreverse), this
approach is a waste of time and memory (building the [integers] list,
while a simple incremented counter would do...)? Don't you think that a
better candidate for a primitive would be [lreverse] itself?

-Alex

### Donal K. Fellows

Nov 20, 1998
In article <36541D...@pinebush.com>,

Christopher Nelson <ch...@pinebush.com> wrote:
> On the subject of a "sophisticated" compiler, are there any
> optimizations in Tcl? A (probably poor) example off the top of my head:
>
> foreach x $y {
>     set a $x + [expr 3.14 * 2]

YM: set a [expr {$x + (3.14 * 2)}]

>     set b $x + [expr 3.14 * 2]
> }
>
> vs.
>
> set t [expr 3.14 * 2]
> foreach x $y {
>     set a $x + $t

YM: set a [expr {$x + $t}]

>     set b $x + $t
> }
>
> An optimizer would turn the first into the second and maybe even treat
> it as:
>
> set t [expr 3.14 * 2]
> foreach x $y {
>     set b [set a [expr $x + $t]]
> }
>
> In other words, handle loop invariants, unrolling loops, etc. Honestly,
> my language theory background is in compilers, not interpreters, so I'm
> not even sure such things are possible in an interpreter but surely SOME
> optimizations are possible. Are any attempted?

Ignore the difference between compilation and interpreting for now -
it is a red herring - and concentrate on all the compiler side (you
just have a machine-independent object-code format.)

Now, it would be possible to compile that code down into something
like the result of:

    foreach x $y {set b [set a [expr {$x + K}]]}

Where K is an inlined literal constant which happens to be equal to
3.14*2 (and this is in fact better than creating an extra variable due
to there being fewer side effects...)

An important thing to remember in doing such stuff is that you're
working with a non-register-based execution model (using one based on
a very large amount of memory and a stack instead) which changes the
sorts of optimisation you can perform. All the register-colouring
optimisations in the standard references, for example, are useless
here. There simply is no call for them.

I don't know what optimisations are attempted in the compiler yet
though. I've not had time to look...

### Donal K. Fellows

Nov 20, 1998
In article <+wTurEAs...@jessikat.demon.co.uk>,
Robin Becker <ro...@jessikat.demon.co.uk> wrote:
> Alexandre Ferrieux <alexandre...@cnet.francetelecom.fr> writes

>> Sorry if I'm being dense: can you explain why this is better than
>> [lappend] ? Naively, I'd say that this [lreplace] method is much more
>> costly in both time and memory...
>
> yes lreplace is O(n) so Donal's alg ends up as O(n^2)

But if you are replacing a single element with a single element (as I
was doing in my algorithm) in a list with only a single reference to
it (as I suspect I _wasn't_ doing due to compiler limitations) then
the [lreplace] could be O(1) without any memory allocation or
deallocation, which would make the overall process O(n).

The thing is that the key time-taking ops (in most code) are malloc()
and free() - reduce the calls to those and you will substantially
improve your execution speed - so my code was written to avoid that as
much as possible.

### Alexandre Ferrieux

Nov 20, 1998
Donal K. Fellows wrote:
>
> > Alexandre Ferrieux writes

> >>
> >> Sorry if I'm being dense: can you explain why this is better than
> >> [lappend] ? Naively, I'd say that this [lreplace] method is much more
> >> costly in both time and memory...

> ...

> The thing is that the key time-taking ops (in most code) are malloc()
> and free() - reduce the calls to those and you will substantially
> improve your execution speed - so my code was written to avoid that as
> much as possible.
>

Okay, but how often does [lappend] reallocate? My instinctive reaction
would be to say "there's a chunk size of XXX Kbytes". So for a sufficiently
large XXX, the malloc overhead (over your one single huge one) could become
negligible. Moreover, for a growing block like that, after a few
iterations it becomes the 'top bubble' (i.e. the block with the highest
address on the heap), and realloc becomes only a matter of mapping new
pages...

-Alex
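That chunking guess is easy to probe from script. A rough sketch; the chunk size itself is not observable this way, only the amortized cost per element, and the numbers will vary by platform:

```tcl
# Rough probe of [lappend]'s amortized cost: if every append
# reallocated, cost per element would grow with n; with chunked or
# doubling growth it stays roughly flat across list sizes.
foreach n {1000 10000 100000} {
    set l {}
    set usec [lindex [time {
        for {set i 0} {$i < $n} {incr i} {lappend l $i}
    }] 0]
    puts [format "n=%-7d %.3f usec per lappend" $n [expr {double($usec)/$n}]]
}
```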

### Christopher Nelson

Nov 20, 1998
Alexandre Ferrieux wrote:
> Ted E. Dunning wrote:
> > I very often find myself wishing that lindex would accept more than
> > one index. The syntax that would be (slightly) handier would be to
> > accept either a number or a list of numbers, but the internal code
> > would probably be a bit cleaner if the implementation accepted a
> > variable number of arguments, the last n-1 of which were indices.
> > Together with a function to quickly generate a list of integers,
> > reverse should reduce to the following:
> >
> > proc lrevers {list} {
> >     lindex $list [integers [expr [llength $list]-1] 0]
> > }
> >
> > If there is sufficient sentiment in favor of extending lindex this
> > way, I would be happy to generate patches and submit them to
> > scriptics.

I've written:

```tcl
proc lselect { listIn indices } {
    set listOut {}
    foreach i $indices {
        lappend listOut [lindex $listIn $i]
    }
    return $listOut
}
```

but I'd see *some* benefit to improving speed by putting it in the
core. Also, it might be nice to be able to specify ranges (e.g., 1-3 to
get the second, third, and fourth elements).
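That range idea can already be sketched at script level. A hedged sketch: the "lo-hi" index syntax is this example's own invention, not a core feature, and the proc just extends the lselect above:

```tcl
# Hypothetical range-aware lselect: indices may be plain numbers or
# "lo-hi" ranges (the range syntax is invented here, not core Tcl).
proc lselect {listIn indices} {
    set listOut {}
    foreach i $indices {
        if {[regexp {^([0-9]+)-([0-9]+)$} $i -> lo hi]} {
            for {set j $lo} {$j <= $hi} {incr j} {
                lappend listOut [lindex $listIn $j]
            }
        } else {
            lappend listOut [lindex $listIn $i]
        }
    }
    return $listOut
}
```

So `lselect {a b c d e} {0 1-3}` gives `a b c d`.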

>
> Instead of extending the existing (and inlined, optimized...) [lindex],
> I'd prefer a new [lpermut]...

I'd call it [lselect] but that's a fine point.

> Also, I'd advocate the
> one-single-list-argument approach: the well-known [coords] vararg syntax
> for polyline canvas items is so annoying (need to use "eval")!

I suppose this is a religious issue.

> Another thing: does this really motivate a new primitive? Agreed,
> implementing it in C rather than with a [foreach-lappend-lindex] loop
> would save some CPU, but do you need this so often? Is it time-critical?
> Are you also aware that in the initial context (lreverse), this
> approach is a waste of time and memory (building the [integers] list,
> while a simple incremented counter would do...)? Don't you think that a
> better candidate for a primitive would be [lreverse] itself?

In general, I'd like to look at list-processing languages like LISP and
make sure all of their paradigms and primitives were in the Tcl core.

### Robin Becker

Nov 20, 1998
In article <733oru\$q39\$1...@m1.cs.man.ac.uk>, Donal K. Fellows
<fell...@cs.man.ac.uk> writes

>> yes lreplace is O(n) so Donal's alg ends up as O(n^2)
>But if you are replacing a single element with a single element (as I
>was doing in my algorithm) in a list with only a single reference to
>it (as I suspect I _wasn't_ doing due to compiler limitations) then
>the [lreplace] could be O(1) without any memory allocation or
>deallocation, which would make the overall process O(n).
>
lreplace duplicates (O(n)) the input list when it is shared. This
intelligence is in the lreplace object command itself.

A good compiler would optimise the real reuse case, when no extra memory
is required, and reuse the existing data structure.

A peephole (one-statement) bytecode optimiser could recognise that in
set L [lreplace $L ...] the $L is unshared, but it isn't done yet
(debugging reveals this).

>The thing is that the key time-taking ops (in most code) are malloc()
>and free() - reduce the calls to those and you will substantially
>improve your execution speed - so my code was written to avoid that as
>much as possible.

Unfortunately until the byte compiler gets clever each iteration of the
lreplace must have a malloc and a free for the list data structure.
>
>Donal.

--
Robin Becker

### Viktor Dukhovni

Nov 20, 1998
fell...@cs.man.ac.uk (Donal K. Fellows) writes:

>I don't know what optimisations are attempted in the compiler yet
>though. I've not had time to look...

None. The byte code compiler speeds the execution of individual commands
by optimizing out repeated parsing of words, repeated command lookup,
variable lookup, conversion to/from strings, etc. It does not optimize the
byte code accross multiple commands.

--
Viktor.

### Przemek Klosowski

Nov 20, 1998
Alexandre Ferrieux <alexandre...@cnet.francetelecom.fr> writes:

> I'd prefer a new [lpermut]... Also, I'd advocate the

> one-single-list-argument approach: the well-known [coords] vararg syntax
> for polyline canvas items is so annoying (need to use "eval")!

I always wondered about that. Which form is better indeed:

    myproc a b c        or        myproc {a b c}

The first case uses of course the magic 'args' argument in the
procedure definition.

Let's call them style A and B.

If the actual arguments are generated by some other procedure, we do
'myproc [make-args]', but for style A, we could also do 'eval myproc
[make-args]'; the difference would be that in the procedure, we would
get \$args equal to '{a b c}' and 'a b c' respectively (a single item
being a list of three elements, and a list of three elements). In
this sense, style B is simpler because it doesn't allow alternative
styles for arguments.

Now, my conceptual difficulty begins when we want to have a
doubly-vararg situation; for instance, we want a procedure that
calculates vector lengths (vectors are of arbitrary length). In simple
form, style A would be:

```tcl
proc veclen args {
    set result 0
    foreach a $args {incr result $a}
    return $result
}
```

and style B would be:

```tcl
proc veclen {arg} {
    set result 0
    foreach a $arg {incr result $a}
    return $result
}
```

and we must say 'veclen 1 2 3' and 'veclen {1 2 3}', respectively.
I note that the behavior of 'args' is to implicitly wrap the arguments with
curly braces.

When we want to extend this to return the length for an arbitrary
collection of vectors (each of arbitrary length), it becomes a little muddled:

```tcl
proc veclen args {
    set result {}
    foreach v $args {
        set sum 0
        foreach a $v {incr sum $a}
        lappend result $sum
    }
    return $result
}
```

so we say 'veclen 1 2 3' meaning three vectors of length one, or
'veclen {1 2 3} {3 5 6}' to mean two vectors of length 3.
We can do things like 'veclen {1 2 3} [make-args]', where make-args
returns a list.

Style B would require 'veclen {{1 2 3} {3 5 6}}', I suppose, and would
make it more difficult to mix explicit arguments and arguments generated
by calls.

In the end, it seems that style A is better, even though it sometimes
requires evals. I confess that I can't think clearly about all the little
issues that come into play here; I remember that in practice I ended up
making mistakes of the type '1 2 3' vs '{1 2 3}'. What do more
knowledgeable people think?
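The A/B trade-off above can be seen with a trivial pair of procs. A sketch; countA and countB are invented names for illustration:

```tcl
# Invented example procs to contrast the two calling styles.
proc countA args {llength $args}   ;# style A: varargs
proc countB {lst} {llength $lst}   ;# style B: one list argument

set generated {a b c}
countA a b c            ;# → 3
countB {a b c}          ;# → 3
countB $generated       ;# style B composes with generated lists directly
eval countA $generated  ;# style A needs eval to splice the list open
```

Without the eval, `countA $generated` sees a single argument (a list of three elements), which is exactly the '1 2 3 vs {1 2 3}' mistake described above.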

--
przemek klosowski <prz...@nist.gov> (301) 975-6249
NIST Center for Neutron Research (bldg. 235), E111
National Institute of Standards and Technology
Gaithersburg, MD 20899, USA

### Marc Thirion

Nov 21, 1998
In article <733oru\$q39\$1...@m1.cs.man.ac.uk>,

fell...@cs.man.ac.uk (Donal K. Fellows) writes:
> In article <+wTurEAs...@jessikat.demon.co.uk>,
> Robin Becker <ro...@jessikat.demon.co.uk> wrote:
>> yes lreplace is O(n) so Donal's alg ends up as O(n^2)
>
> But if you are replacing a single element with a single element (as I
> was doing in my algorithm) in a list with only a single reference to
> it (as I suspect I _wasn't_ doing due to compiler limitations) then
> the [lreplace] could be O(1) without any memory allocation or
> deallocation, which would make the overall process O(n).

The compiler would have to recognize the idiom:

    set list [lreplace $list $i $i elem]

Otherwise, the script:

    set list2 [lreplace $list1 $i $i elem]

would modify list1 (assuming list1 is the only reference to $list1).

You can avoid the malloc/free overhead with the first idiom by writing:

```tcl
# like "set list", but removes the reference list -> $list1
proc unref {varName} {
    upvar 1 $varName var
    set v $var
    unset var
    set v
}

set list [lreplace [unref list] 10 10 e]
```

On my P90 running Linux, the following script:

```tcl
for {set i 0} {$i < 1000} {incr i} {
    lappend list1 $i
    lappend list2 $i
}

puts "$tcl_patchLevel"

puts [time {
    set list1 [lreplace [unref list1] 10 10 e]
} 10000]
puts [time {
    set list2 [lreplace $list2 10 10 e]
} 10000]

puts [string compare $list1 $list2]
```

outputs:

    8.0p2
    274 microseconds per iteration
    1046 microseconds per iteration
    0

(the first timing stays constant when the list grows; the second one
increases).

--
Marc Thirion | Toulouse, France
Un Travail pour Chacun : http://www.pratique.fr/~tberrone/utc/utc.htm

### lvi...@cas.org

Nov 22, 1998

In a recent intro-to-Tcl class I tried out several algorithms for reversing
lists - here's the script I used as well as the results I got:

\$ /home/lwv26/Classwork/lreverse.tcl
204 microseconds per iteration
156 microseconds per iteration
249 microseconds per iteration
112 microseconds per iteration

$ cat /home/lwv26/Classwork/lreverse.tcl

```tcl
#! /usr/tcl80/sun4/bin/tclsh

proc lreverse1 { olist } {
    for { set i [llength $olist] ; incr i -1 } { $i >= 0 } { incr i -1 } {
        lappend nlist [lindex $olist $i]
    }
    return $nlist
}

proc lreverse2 { olist } {
    set nlist {}
    for { set i 0 } { $i < [llength $olist] } { incr i } {
        set nlist "[lindex $olist $i] $nlist"
    }
    return $nlist
}

proc lreverse3 { olist } {
    set nlist {}
    set llen [llength $olist]
    for { set i $llen } { $i > 0 } {} {
        incr i -1
        lappend nlist [lindex $olist $i]
    }
    return $nlist
}

proc lreverse4 { olist } {
    set nlist {}
    foreach elem $olist {
        set nlist [linsert $nlist 0 $elem]
    }
    return $nlist
}

puts [time {lreverse1 {a b c d e f}} 10000]
puts [time {lreverse2 {a b c d e f}} 10000]
puts [time {lreverse3 {a b c d e f}} 10000]
puts [time {lreverse4 {a b c d e f}} 10000]
# puts [lreverse1 {a b c d e f}]
# puts [lreverse2 {a b c d e f}]
# puts [lreverse3 {a b c d e f}]
# puts [lreverse4 {a b c d e f}]

# set colors [list red green blue]
# puts [lreverse2 $colors]
```

--
<URL:mailto:lvi...@cas.org> Quote: Saving the world before bedtime.
<*> O- <URL:http://www.purl.org/NET/lvirden/>
Unless explicitly stated to the contrary, nothing in this posting
should be construed as representing my employer's opinions.

### Jean-Claude Wippler

Nov 22, 1998
Larry,

> \$ /home/lwv26/Classwork/lreverse.tcl
> 204 microseconds per iteration
> 156 microseconds per iteration
> 249 microseconds per iteration
> 112 microseconds per iteration

Interesting - what processor is this?

My PII/400 blazes through your script on Win98 with:

C>tclsh80 virden.tcl
154 microseconds per iteration
71 microseconds per iteration
127 microseconds per iteration
55 microseconds per iteration
C>

My P5/166 on Linux RH 5.1 delivers an appalling:

\$ tclsh8.0 virden.tcl
2016 microseconds per iteration
1745 microseconds per iteration
2330 microseconds per iteration
1358 microseconds per iteration
\$

For comparison, a trial on a 486/DX66 with Linux RH 5.0 gives this:

\$ tclsh8.0 virden.tcl
2376 microseconds per iteration
1789 microseconds per iteration
2647 microseconds per iteration
1519 microseconds per iteration
\$

It looks like the P5/166 machine is *very* badly configured.

Does anyone know of a reasonably consistent benchmark for the Tcl core?
Its results might be useful to detect such sub-optimal installations...
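A minimal sketch of what such a harness could look like, built on the core [time] command; the case list here is illustrative only, not a standard suite:

```tcl
# Illustrative micro-benchmark harness: times a few core operations
# and prints usec/iteration, so installations can be compared.
set iterations 1000
set cases {
    {set x 0}
    {lindex {a b c d e f} 3}
    {llength {a b c d e f}}
}
foreach body $cases {
    set usec [lindex [time $body $iterations] 0]
    puts [format "%-28s %s usec/iter" $body $usec]
}
```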

-- Jean-Claude

### Andreas Kupries

Nov 22, 1998

Jean-Claude Wippler <j...@equi4.com> writes:

> Larry,

> > \$ /home/lwv26/Classwork/lreverse.tcl
> > 204 microseconds per iteration
> > 156 microseconds per iteration
> > 249 microseconds per iteration
> > 112 microseconds per iteration

> Interesting - what processor is this?

He's using an UltraSparc, I think; I don't know the exact model.

> My PII/400 blazes through your script on Win98 with:

> C>tclsh80 virden.tcl
> 154 microseconds per iteration
> 71 microseconds per iteration
> 127 microseconds per iteration
> 55 microseconds per iteration

> My P5/166 on Linux RH 5.1 delivers an appalling:

> $ tclsh8.0 virden.tcl
> 2016 microseconds per iteration
> 1745 microseconds per iteration
> 2330 microseconds per iteration
> 1358 microseconds per iteration

I have a SuSE Linux 4.0.? box, P5/133 (64M); the results for tcl
8.0p2+ are
580 microseconds per iteration
462 microseconds per iteration
611 microseconds per iteration
331 microseconds per iteration

and tcl 7.6 delivers
984 microseconds per iteration
1264 microseconds per iteration
1140 microseconds per iteration
442 microseconds per iteration

> For comparison, a trial on a 486/DX66 with Linux RH 5.0 gives this:

> $ tclsh8.0 virden.tcl
> 2376 microseconds per iteration
> 1789 microseconds per iteration
> 2647 microseconds per iteration
> 1519 microseconds per iteration

> It looks like the P5/166 machine is *very* badly configured.

Seems so, yes.

> Does anyone know of a reasonably consistent benchmark for the Tcl core?
> Its results might be useful to detect such sub-optimal installations...

No, but I would like to have such a beast too. Another area of
application would be comparisons of the different tcl versions.

Yesterday a friend of mine called: he had stress-tested the listbox
with 20,000 entries and got rather different results for tcl 7.6,
8.0b1 and 8.0.3, with 8.0.3 much behind 8.0b1. I tried to write up the
same test and was unable to confirm the behaviour on my machine (maybe
it is too fast :-), so I'll call him tomorrow to get the exact script.

A standardized speed test-suite would have helped much.

--
Sincerely,
Andreas Kupries <a.ku...@westend.com>
<http://www.westend.com/~kupries/>
-------------------------------------------------------------------------------

### Robin Becker

Nov 22, 1998, 3:00:00 AM11/22/98
to
In article <7392pk$h36$1...@srv38s4u.cas.org>, lvi...@cas.org writes

>
>In a recent intro to tcl class I tried out several algorithms for reversing
>strings - here's the script i used as well as the results I got:
>
>$ /home/lwv26/Classwork/lreverse.tcl
>204 microseconds per iteration
>156 microseconds per iteration
>249 microseconds per iteration
>112 microseconds per iteration
>
>
I note that lreverse4, at least, is wrong; I think it should be:
proc lreverse4 { olist } {
    set nlist {}
    foreach elem $olist {
        set nlist [linsert $nlist 0 $elem]
    }
    return $nlist
}
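
For reference, the corrected loop can be checked standalone; linsert
with index 0 prepends, so each element lands in front of the ones
processed before it:

```tcl
# Prepend each element of the input list to an initially empty result.
set nlist {}
foreach elem {a b c d e f} {
    set nlist [linsert $nlist 0 $elem]
}
puts $nlist
# prints: f e d c b a
```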

--
Robin Becker

### Alexandre Ferrieux

Nov 23, 1998, 3:00:00 AM11/23/98
to
lvi...@cas.org wrote:
>
> proc lreverse2 { olist } {
>     set nlist {}
>     for { set i 0 } { $i < [llength $olist] } { incr i } {
>         set nlist "[lindex $olist $i] $nlist"
>     }
>     return $nlist
> }
>
> ...

>
> proc lreverse4 { olist } {
>     set nlist {}
>     foreach elem $olist {
>         set newlist [linsert $nlist 0 $elem]
>     }
>     return $nlist
> }

Why this and not the obvious blend:

proc lreverse24 { olist } {
    set nlist {}
    foreach elem $olist {
        set nlist "$elem $nlist"
        # okay, purists would prefer "[list $elem] $nlist"
    }
    return $nlist
}
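
The purists' caveat is not academic: with plain string concatenation,
an element that itself contains spaces gets spliced into several
elements of the result. A list-safe sketch of the blend (the proc name
here is mine, not from the thread):

```tcl
# Quote each element with [list] before prepending, so nested
# structure survives the string concatenation.
proc lreverse24safe { olist } {
    set nlist {}
    foreach elem $olist {
        set nlist "[list $elem] $nlist"
    }
    return $nlist
}

# The nested element {2 3} stays a single element of the result:
puts [llength [lreverse24safe {1 {2 3} 4}]]
# prints: 3
```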

-Alex

### lvi...@cas.org

Nov 24, 1998, 3:00:00 AM11/24/98
to

According to Jean-Claude Wippler <j...@equi4.com>:
:Larry,
:
:> $ /home/lwv26/Classwork/lreverse.tcl

:> 204 microseconds per iteration
:> 156 microseconds per iteration
:> 249 microseconds per iteration
:> 112 microseconds per iteration
:
:Interesting - what processor is this?

UltraSPARC 5, Solaris 2.6

:Does anyone know of a reasonably consistent benchmark for the Tcl core?

:Its results might be useful to detect such sub-optimal installations...

I'd be tempted to say to look at the times of a "make test" <grin>.

### Andreas Kupries

Nov 24, 1998, 3:00:00 AM11/24/98
to

Andreas Kupries <a.ku...@westend.com> writes:
> Jean-Claude Wippler <j...@equi4.com> writes:
> > Larry,

Some more numbers to look at. These are the execution times for the
four lreverse variants posted by Larry, as reported by a friend, for
various versions of tcl. His machine is a 486/66 running Interactive
Unix 3.0, compiling with gcc 2.7.2. The times reported below are
always the best of 3 runs.

_______________________________________________________
         7.6   8.0b2     8.0   8.0p2   8.0.3   8.0.4
_______________________________________________________
  1     2707    1039    1124    1097    1254    1135
  2     3306     871     923     870    1044     954
  3     3092    1220    1310    1287    1329    1335
  4     1516     724     727     739     789     750
_______________________________________________________

The interesting fact is that 8.0.3 is definitely behind 8.0p2, even
slower than 8.0. The new 8.0.4 is again a little bit faster.

The ranking of the algorithms differs only between 7.6 and the 8.x
groups. There is no difference between the different 8.x'es. The
winner in both groups is always number 4, which is consistent with the
previously reported times.

best...worst
8.x: 4 2 1 3
7.6: 4 1 3 2
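
For anyone wanting to repeat such measurements, a minimal harness in
the spirit of Larry's script (only the winning linsert variant shown;
the list size and iteration count here are arbitrary choices of mine):

```tcl
# The variant that won in both groups: prepend via linsert.
proc lreverse4 { olist } {
    set nlist {}
    foreach elem $olist {
        set nlist [linsert $nlist 0 $elem]
    }
    return $nlist
}

# Build a 1000-element list and report the average time over 100 calls.
set l {}
for {set i 0} {$i < 1000} {incr i} { lappend l $i }
puts [time {lreverse4 $l} 100]
```

The `time` command prints an averaged "NNN microseconds per iteration"
string, which is what all the figures quoted in this thread are.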