On 22.01.15 at 18:24, Andreas Leitgeb wrote:
> I've only now seen this reply. It looks good.
> I didn't think of the order-preserving feature in dicts.
>
> Reducing size seems a bit more complicated with dicts,
> although still straightforward. I suppose, if I was
> dealing with longer MRUs (like 100 items) then yours
> would surely beat mine by lengths (mostly for avoiding
> the lsearch), but for 4 items the difference probably(*)
> won't be significant. (*): actual measurements will follow.
I agree, but I think it may also be limited by the bytecode
interpreter more than by the actual work on the CPU. See the timings
below.
> After posting my question, I needed an immediate
> solution to start with, and came up with this:
Premature optimization? You still need to feed in the data from
somewhere. It will only be significantly faster if you have a compiled
command that takes a whole list of items at once, instead of one by
one. How many items do you have?
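A compiled (C-level) batch command isn't at hand here, but even at the
script level a variant that takes a whole list per call saves one
command dispatch per item. A minimal sketch in the spirit of my alias
(pushmruList is a made-up name, not part of either solution):

```tcl
# Hypothetical batch push: deduplicate and append a whole list of
# items in one call, instead of one pushmru invocation per item.
set mru {}
proc pushmruList {items} {
    global mru
    foreach v $items {
        dict unset mru $v   ;# drop any older occurrence of the item
        dict set mru $v {}  ;# (re-)append it as most recent
    }
}
```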
> set C "c3 c2 c1 c0"
> proc addMRU {it} {
> global C; set idx [lsearch -exact $C $it]
> if {$idx < 0} { set idx 0 };# if not there, then remove oldest instead
> set C [lreplace $C[set C ""] $idx $idx]
> lappend C $it
> }
> proc getMRU {idx} {
> global C; lindex $C end-$idx
> }
>
> I start with an MRU of four dummy elements to avoid the special
> cases for when it isn't yet filled, and then just make sure that
> size stays the same. This may not be suitable for a general MRU-
> implementation, but works for my task.
>
>>> Actually, some timing with global etc. shows my fastest version is this:
>>> interp alias {} pushmru {} apply {{v} {global mru; dict unset mru $v;
>>> dict set mru $v {}}}
>
> But you're cheating here, as you left out size-reduction. ;-)
>
Yes, I left it out deliberately. The code was benchmarked by pushing
1,000,000 distinct (i.e. increasing) items as well as 1,000,000
identical ones. It turned out that it is not necessary to reduce the
size. At the end, for retrieval, you can just do
lrange [dict keys $mru] end-3 end
and create a new dict from these keys.
This is my timing code:
=============================== >8 ==========
# Andreas' method
set C "c3 c2 c1 c0"
proc addMRU {it} {
    global C
    # -exact rather than -integer: the dummy seed elements (c3 etc.)
    # are not integers, and -integer would error on them.
    set idx [lsearch -exact $C $it]
    if {$idx < 0} { set idx 0 } ;# if not there, then remove oldest instead
    set C [lreplace $C[set C ""] $idx $idx]
    lappend C $it
}
proc getMRU {idx} {
    global C; lindex $C end-$idx
}
proc getAndreas {} {
    global C
    set C [lrange $C end-3 end]
}
# Christian's method
set mru {}
interp alias {} pushmru {} apply {{v} {global mru; dict unset mru $v;
dict set mru $v {}}}
proc getChristian {} {
    global mru
    set rv [lrange [dict keys $mru] end-3 end]
    set mru {}
    foreach v $rv { dict set mru $v {} }
    return $rv
}
proc andreas N {
    for {set i 0} {$i<$N} {incr i} {
        addMRU $i
    }
}
proc christian N {
    for {set i 0} {$i<$N} {incr i} {
        pushmru $i
    }
}
proc empty N {
    for {set i 0} {$i<$N} {incr i} {
    }
}
# timing code
# first exec them to become compiled code
foreach x {1 2 3 4} {
    addMRU $x
    pushmru $x
}
# insert the same item 1,000,000 times
puts "Andreas [time {addMRU 2} 1000000]"
puts "Christian [time {pushmru 2} 1000000]"
puts [getAndreas]
puts [getChristian]
# add 1,000,000 distinct items
puts "Andreas [time {andreas 1000000}]"
puts "Christian [time {christian 1000000}]"
puts [getAndreas]
puts [getChristian]
=============================================
It seems that they are pretty much equivalent in speed; mine is only
marginally faster on my machine:
(Tests) 49 % source mrutime.tcl
Andreas 1.322123284 microseconds per iteration
Christian 1.267695078 microseconds per iteration
1 3 4 2
1 3 4 2
Andreas 1496512 microseconds per iteration
Christian 1465378 microseconds per iteration
999996 999997 999998 999999
999996 999997 999998 999999
For reference, the overhead of the bare loop alone:
(Tests) 49 % empty 5
(Tests) 50 % time {empty 1000000}
55946 microseconds per iteration
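That empty-loop overhead can be subtracted to estimate the cost of the
MRU update itself. A quick sketch (overheadCorrected is just an
illustrative helper name, not part of the benchmark):

```tcl
# Estimate the per-item cost of the loop body alone by subtracting the
# measured empty-loop time from the total loop time (both in
# microseconds), dividing by the number of iterations.
proc overheadCorrected {totalUs emptyUs n} {
    expr {($totalUs - $emptyUs) / double($n)}
}
# With the numbers above for the list-based version:
# (1496512 - 55946) / 1000000, i.e. about 1.44 microseconds per insert.
```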
Christian