
Impressive results using K combinator


Helmut Giese
Mar 10, 2003, 1:57:36 PM
Hello out there,
collecting the results of my posting "Enlightenment sought on use of K
combinator", I changed my test script (see below). The results are
impressive:
- Not only can you see the quadratic (?) increase in processing time
  using the "normal" [lreplace], but
- you see that with the K combinator you get *constant* time,
  irrespective of the length of the list.
(The trick: [set l {}] empties the variable before [lreplace] runs, so
the list object reaching [lreplace] is unshared and can be modified in
place instead of being copied.)
Wow.
Helmut Giese

[test script]
# K combinator: returns its first argument, ignoring the second
proc K {x y} {set x}

# plain version: the variable still references the list, so
# [lreplace] sees a shared object and must copy it
proc lreplNorm {lst} {
    upvar $lst l
    set l [lreplace $l end end]
}

# K trick: [set l {}] drops the variable's reference before
# [lreplace] runs, so the list is unshared and modified in place
proc lreplK {lst} {
    upvar $lst l
    set l [lreplace [K $l [set l {}]] end end]
}

# same idea, releasing the reference with [unset] instead
proc lreplKunset {lst} {
    upvar $lst l
    set l [lreplace [K $l [unset l]] end end]
}

# create some lists (this time use strings)
puts "Creating lists"
set str "A string to blow up the time for copying large lists"
for {set i 0} {$i < 100} {incr i} {lappend lst100 $str}
for {set i 0} {$i < 10000} {incr i} {lappend lst10K $str}
for {set i 0} {$i < 1000000} {incr i} {lappend lst1M $str}

puts ""
puts "list has 100 elements"
puts "Normal lreplace:\t [time {lreplNorm lst100 ; lappend lst100
$str} 1000]"
puts "lreplace with K:\t [time {lreplK lst100 ; lappend lst100 $str}
1000]"
puts "lreplace with K & unset: [time {lreplKunset lst100 ; lappend
lst100 $str} 1000]"

puts ""
puts "list has 10 K elements"
puts "Normal lreplace:\t [time {lreplNorm lst10K ; lappend lst10K
$str} 1000]"
puts "lreplace with K:\t [time {lreplK lst10K ; lappend lst10K $str}
1000]"
puts "lreplace with K & unset: [time {lreplKunset lst10K ; lappend
lst10K $str} 1000]"

puts ""
puts "list has 1 M elements"
# experience shows we must drastically reduce the number of calls
# for the plain [lreplace]; the constant-time K variants stay at 1000
puts "Normal lreplace:\t [time {lreplNorm lst1M ; lappend lst1M $str} 100]"
puts "lreplace with K:\t [time {lreplK lst1M ; lappend lst1M $str} 1000]"
puts "lreplace with K & unset: [time {lreplKunset lst1M ; lappend lst1M $str} 1000]"

# unset the lists so the script can be sourced repeatedly
unset lst100
unset lst10K
unset lst1M

se...@fishpool.com
Mar 12, 2003, 10:56:06 AM
Helmut Giese <hgi...@ratiosoft.com> wrote:
> - Not only can you see the quadratic (?) increase in processing time
>   using the "normal" [lreplace], but
> - you see that with the K combinator you get *constant* time,
>   irrespective of the length of the list.
> Wow.

Search "new lreplace kristoffer lawson" on google groups to see a solution
to that which works for any use of lreplace and doesn't require the
K combinator.

Maybe it's just academic and it's just a matter of ego or whatever,
but I strangely would like somekind of recognition for that model.
Especially as some ppl have now been considering a similar system for
strings.

Ah well, I still haven't packaged a nice branch for that or written
a TIP (as it was never really resolved if the access tradeoff was worth it).

-- Lazy bugger :(

--
/ http://www.fishpool.com/~setok/

Helmut Giese
Mar 12, 2003, 2:15:41 PM
On Wed, 12 Mar 2003 15:56:06 GMT, se...@fishpool.com wrote:

>Helmut Giese <hgi...@ratiosoft.com> wrote:
>> - Not only can you see the quadratic (?) increase in processing time
>>   using the "normal" [lreplace], but
>> - you see that with the K combinator you get *constant* time,
>>   irrespective of the length of the list.
>> Wow.
>
>Search "new lreplace kristoffer lawson" on google groups to see a solution
>to that which works for any use of lreplace and doesn't require the
>K combinator.
>
>Maybe it's just academic and it's just a matter of ego or whatever,
>but I strangely would like somekind of recognition for that model.
>Especially as some ppl have now been considering a similar system for
>strings.

It does look convincing, and if you get the speed improvement built in
(that is, even without using K) - all the better. What happened to it?
The thread dates from April 2001.

As an aside: I hope my Borland patches don't take that long to advance
to the next step.
Best regards
Helmut (pondering the passage of time) Giese

se...@fishpool.com
Mar 18, 2003, 10:35:55 AM
Helmut Giese <hgi...@ratiosoft.com> wrote:
> It does look convincing, and if you get the speed improvement built in
> (that is, even without using K) - all the better. What happened to it?
> The thread dates from April 2001.

Basically it ended in the discussion of whether the 10% hit in access
times is worth the large gains in lreplace and linsert times, and it
sort of faded out.

--
/ http://www.fishpool.com/~setok/

Andreas Leitgeb
Mar 19, 2003, 8:35:29 AM

I think just offering a variant of lreplace, lrange, and linsert with
semantics like lappend could solve the problems without the new
issues raised by this somewhat fundamental change in list handling.
This would still have the problem that existing code would have
to be modified to take advantage of it.
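
A minimal pure-Tcl sketch of what such a variant could look like (the
name is hypothetical, not from this thread; it leans on the K trick
from earlier in the thread and needs Tcl 8.5+ for {*}):

proc K {x y} {set x}

# lreplace with lappend-like semantics: operates on a variable
# and releases the variable's reference via [set l {}] so the
# list object is unshared and can be modified without a copy
proc lreplaceVar {varName first last args} {
    upvar 1 $varName l
    set l [lreplace [K $l [set l {}]] $first $last {*}$args]
}

# usage:
#   set l {a b c d}
#   lreplaceVar l 1 2 X    ;# l is now {a X d}

The same wrapper pattern would work for linsert and lrange.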

Your approach would be more interesting if these indirections were
of limited lifetime: any of the following actions would have to
force a cleanup to a state where the list value is immediately
available (the view indirection is resolved):
- The deletion of the last non-view object reference to the original
  value should trigger cleanup of all attached views.
  For simplicity, the maximum number of views of a list might be
  limited to one.
- Storing a reference to the "view" in a variable.
  (Very reasonable, so as not to keep the whole garbage around for
  eternity ;-), but not necessary.)
- Accessing the "view" with non-view-aware list API functions.
  (Necessary for compatibility!)
For the classical case, namely
    set l [listop1 [listop2 [... $l ...] ...] ...]
this would still boost performance, because the value originally
stored in $l would not get copied: the view resolution starts only
after the reference (in variable l) to the original value is freed.

--
Nothing is worse than knowing how the calamity unfolds
and having to watch in utter powerlessness - it makes me thoroughly sick...
-- (musical "Elisabeth"; "Die Schatten werden laenger")

Donald Arseneau
Mar 19, 2003, 2:45:49 PM
Andreas Leitgeb <Andreas...@siemens.at> writes:

> I think just offering a variant of lreplace, lrange, and linsert
> with semantics like lappend could solve the problems

And an optimizing byte-coder could handle most cases without
any external language changes!

Fully tracing the use of variables would be too much overhead
for the byte-coder, I expect, unlike a one-shot optimizing
compiler; but it could still be worth it to detect a few
specific cases like

set foo [lrange $foo ...]

I don't suppose there is yet any infrastructure for looking across
a [ ] barrier.
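
At the script level, the rewrite such an optimizer would perform
amounts to the K trick from the start of the thread (illustration
only, not an actual byte-coder change):

proc K {x y} {set x}
set foo {a b c d}
# what the programmer writes:
#   set foo [lrange $foo 1 end]
# what the optimizer would effectively execute, releasing the
# variable's reference so [lrange] can reuse the object in place:
set foo [lrange [K $foo [set foo {}]] 1 end]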


Donald Arseneau as...@triumf.ca

Kevin Kenny
Mar 19, 2003, 5:58:07 PM
Donald Arseneau wrote:
> And an optimizing byte-coder could handle most cases without
> any external language changes!
>
> Fully tracing the use of variables would be too much overhead
> for the byte-coder, I expect, unlike a one-shot optimizing
> compiler; but it could still be worth it to detect a few
> specific cases like
>
> set foo [lrange $foo ...]
>
> I don't suppose there is yet any infrastructure for looking across
> a [ ] barrier.

Uh, that's easy, compared with the problem of ascertaining that
there can be no read traces on variable 'foo'. In the presence of
traces, you have to worry about trace side effects, and the
'obvious' optimization is wrong.
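
A small demonstration of the problem (my example, not from the post):
a read trace gives every $foo access a visible side effect, so the
byte-coder cannot silently turn the copy into an in-place edit.

proc readSpy {name1 name2 op} {
    puts "read trace fired on $name1"
}
set foo {a b c}
trace add variable foo read readSpy
# the $foo read below fires the trace; an optimizer rewriting
# this into an in-place modification would have to prove no such
# trace exists, which it cannot do at compile time
set foo [lreplace $foo end end]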

This trouble is one reason that [lset] was a new command, rather
than an extra optimization on
set foo [lreplace $foo $n $n ...]
(The multi-index form of [lset] is much harder to do in Tcl,
so that was a more important reason.)
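
For anyone who hasn't met [lset], a quick illustration of the
multi-index form next to its pre-[lset] equivalent (my example):

set m {{1 2 3} {4 5 6}}
lset m 1 2 X     ;# m is now {{1 2 3} {4 5 X}}
# the same update without lset takes three steps and copies:
set row [lindex $m 1]
set row [lreplace $row 2 2 X]
set m [lreplace $m 1 1 $row]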

I'm not saying that it's insurmountable, but so far the
TclExecuteByteCode gang hasn't come up with a solution that's
worth anything. (Also, there are still longer poles in the tent.)
--
73 de ke9tv/2, Kevin

se...@fishpool.com
Mar 20, 2003, 10:19:50 AM
Andreas Leitgeb <Andreas...@siemens.at> wrote:
>
> Your approach would be more interesting if these indirections were
> of limited lifetime: any of the following actions would have to
> force a cleanup to a state where the list value is immediately
> available (the view indirection is resolved):
> - The deletion of the last non-view object reference to the original
>   value should trigger cleanup of all attached views.
>   For simplicity, the maximum number of views of a list might be
>   limited to one.
> - Storing a reference to the "view" in a variable.
>   (Very reasonable, so as not to keep the whole garbage around for
>   eternity ;-), but not necessary.)
> - Accessing the "view" with non-view-aware list API functions.
>   (Necessary for compatibility!)

I'm not sure if I understood you correctly, but my approach does have
a limited lifetime, of a sort. The indirection level is limited to one,
and there are various situations which cause the "interpretation"
to be flattened down to the actual list. For example, a second
[lreplace] on the list will cause the interpretation to be dropped
(if shared, we need to copy).
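
A toy pure-Tcl model of that flattening, just to make it concrete
(entirely hypothetical; the real proposal lives at the C level inside
the list object; needs Tcl 8.5+):

# a "view" is a deferred lreplace: {base first last replacements}
proc viewReplace {lst first last args} {
    list $lst $first $last $args
}

# flattening resolves the view into a plain list; the copy the
# view postponed happens here, exactly once
proc viewFlatten {view} {
    lassign $view base first last repl
    lreplace $base $first $last {*}$repl
}

# a second list operation would force the flatten:
set v [viewReplace {a b c d} 1 2 X]
set flat [viewFlatten $v]   ;# {a X d}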

--
/ http://www.fishpool.com/~setok/

se...@fishpool.com
Mar 20, 2003, 10:21:23 AM
Donald Arseneau <as...@triumf.ca> wrote:
> Andreas Leitgeb <Andreas...@siemens.at> writes:
>
>> I think just offering a variant of lreplace, lrange, and linsert
>> with semantics like lappend could solve the problems
>
> And an optimizing byte-coder could handle most cases without
> any external language changes!

IIRC someone said that could get really hairy. Generally I try to avoid
touching the byte-coder myself ;-)

--
/ http://www.fishpool.com/~setok/
