
Deleting the first element of a list


JB

Oct 2, 2002, 6:36:10 PM
Is using del to delete the first element (that is, the 0th
element) of a list faster than x = x[1:]?

--
J.... B....



Erik Max Francis

Oct 2, 2002, 6:53:36 PM
JB wrote:

> Is using del to delete the first element (that is, the 0th
> element) of a list faster than x = x[1:]?

For the long answer, try it out yourself. Write two programs that do
these operations many, many times, and then time them. Make sure that
you use lists that are representative of whatever workload you're aiming
for. To test the worst case, use outrageously long lists.

Slicing creates new lists, so invariably the slicing mechanism is going
to be more expensive, since a new object has to be created each time.
Testing by taking 1000-element lists and trimming the first element off
until the list is empty indicates that the slicing method takes about
three times longer.
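
For instance, here is a minimal sketch of that kind of measurement
(assuming the standard timeit module is available; exact numbers will
of course vary by machine):

import timeit

setup = "x = range(1000)"

# trim the first element 999 times, starting from a fresh 1000-element list
t_del   = timeit.Timer("del x[0]",  setup).timeit(number=999)
t_slice = timeit.Timer("x = x[1:]", setup).timeit(number=999)

print "del x[0]  : %.6f secs" % t_del
print "x = x[1:] : %.6f secs" % t_slice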

The short answer, though, is not to optimize prematurely. That's the
First Rule of Optimization: Don't bother optimizing until you have
quantitative information about where the bottlenecks truly are. This
concept is especially powerful in a high-level language like Python; the
very choice of using Python means that you're willing to trade raw speed
for flexibility. If you're overconcerned with squeezing every last bit
of CPU power right from the start, then Python (or any other high-level
language) was probably not a good choice of language.

I'm overstating it, though. Certainly premature optimization is often
wasted effort, but given tiny, equivalent, and readable ways of doing
the same very simple thing, there's really no reason not to prefer the
one that is faster. This is one of those cases; use del L[0] rather
than L = L[1:].

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE
/ \ Punctuality is the virtue of the bored.
\__/ Evelyn Waugh
Erik Max Francis' bookmarks / http://www.alcyone.com/max/links/
A highly categorized list of Web links.

Mark McEahern

Oct 2, 2002, 6:34:38 PM
> Is using del to delete the first element (that is, the 0th
> element) of a list faster than x = x[1:]?

n = 100
l = range(n)
del l[0]

// m

Chad Netzer

Oct 2, 2002, 6:49:32 PM
On Wednesday 02 October 2002 15:36, JB wrote:
> Is using del to delete the first element (that is, the 0th
> element) of a list faster than x = x[1:]?

Looks like it (I timed both short and long lists, with the loops unrolled):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

import time

short_list = [0,1,2,3,4,5] * 2
long_list = short_list * 10000

n = 10

lists = [ short_list, long_list ]
for l in lists:

    avg = 0    # reset the accumulator for the del timings
    for i in xrange(n):
        tmp_list = l[:]
        start = time.time()

        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]
        del tmp_list[ 0 ]

        end = time.time()
        avg = avg + (end - start)
    avg = avg / 10 / n    # 10 unrolled deletes per pass, n passes
    print "avg (del):\t %.13f secs" % (avg,)

    avg = 0    # reset the accumulator for the slicing timings
    for i in xrange(n):
        tmp_list = l[:]
        start = time.time()

        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]
        tmp_list = tmp_list[ 1: ]

        end = time.time()
        avg = avg + (end - start)
    avg = avg / 10 / n    # 10 unrolled slices per pass, n passes
    print "avg (slicing):\t %.13f secs" % (avg,)
    print
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

avg (del): 0.0000028991699 secs
avg (slicing): 0.0000033883095 secs

avg (del): 0.0032141237535 secs
avg (slicing): 0.0115537309935 secs

--

Chad Netzer
cne...@mail.arc.nasa.gov

Peter Hansen

Oct 2, 2002, 7:25:59 PM
Erik Max Francis wrote:
> JB wrote:
>
>
>>Is using del to delete the first element (that is, the 0th
>>element) of a list faster than x = x[1:]?
>
> The short answer, though, is not to optimize prematurely. That's the
> First Rule of Optimization:

Actually, that's the first rule of *programming*. :-)

> Don't bother optimizing until you have
> quantitative information about where the bottlenecks truly are. This
> concept is especially powerful in a high-level language like Python; the
> very choice of using Python means that you're willing to trade raw speed
> for flexibility. If you're overconcerned with squeezing every last bit
> of CPU power right from the start, then Python (or any other high-level
> language) was probably not a good choice of languages.

Well stated, although a note about changing algorithms generally
providing the most significant speedups might be in order. Even
with Python, picking a better algorithm can produce huge gains.
(This does not, of course, apply to a small idiomatic choice such
as this thread was about.)

-Peter

Greg Ewing

Oct 2, 2002, 8:48:30 PM
JB wrote:

> Is using del to delete the first element (that is, the 0th
> element) of a list faster than x = x[1:]?


Probably, because del x[0] modifies the list in-place,
whereas x = x[1:] makes a copy of the whole list
except for the first element.
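
A quick way to see that in-place-vs-copy difference is to watch the
object's identity (just an illustrative sketch):

x = range(5)

before = id(x)
del x[0]                  # the same list object, shrunk in place
print id(x) == before     # True

before = id(x)
x = x[1:]                 # a brand-new list is built and x rebound to it
print id(x) == before     # False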

I say "probably" because intuitions about this sort
of thing are sometimes wrong in Python. The only
way to find out for sure is by experiment.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Delaney, Timothy

Oct 2, 2002, 9:12:02 PM
> From: Greg Ewing [mailto:see_repl...@something.invalid]

>
> I say "probably" because intuitions about this sort
> of thing are sometimes wrong in Python. The only
> way to find out for sure is by experiment.

One reason why there may not be any perceptible difference is that del x[0]
will need to move all the rest of the data in the current implementation,
i.e. you have:

del x[0]:
    Move x[1:] to x[:-1]

x = x[1:]:
    Create new list
    Copy x[1:] into new list

So basically, do whichever feels more natural to you.

The real advantage of

x = x[1:]

is that it won't blow up if x is empty. Of course, you may also *want* it
to blow up in that case.
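
A tiny sketch of that difference in behaviour on an empty list:

x = []

y = x[1:]       # slicing an empty list quietly gives another empty list
print y         # []

try:
    del x[0]    # deleting from an empty list blows up
except IndexError:
    print "del x[0] raised IndexError"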

Tim Delaney

Bruce Dawson

Oct 3, 2002, 12:26:58 AM
Peter Hansen wrote:

> Erik Max Francis wrote:
>
>> JB wrote:

...


>> quantitative information about where the bottlenecks truly are. This
>> concept is especially powerful in a high-level language like Python; the
>> very choice of using Python means that you're willing to trade raw speed
>> for flexibility. If you're overconcerned with squeezing every last bit
>> of CPU power right from the start, then Python (or any other high-level
>> language) was probably not a good choice of languages.
>
>
> Well stated, although a note about changing algorithms generally
> providing the most significant speedups might be in order. Even
> with Python, picking a better algorithm can produce huge gains.
> (This does not, of course, apply to a small idiomatic choice such
> as this thread was about.)


Highly applicable to this thread. Deleting the first item of a Python
list or C++ vector is an O(n) operation - proportional to the number of
items in the list. If you do that for every item in the list it is
O(n^2) - proportional to the square.
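
To put a rough number on it: removing the front element of an n-item
list shifts the remaining items down, so emptying the list from the
front costs about

    (n-1) + (n-2) + ... + 1 = n*(n-1)/2

element moves, i.e. on the order of n^2/2, versus only n cheap
operations if you always delete from the end instead.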

The thing about O(n^2), and any complexity higher than that, is that
there are many values of N that are small enough for N items to easily
fit into memory, but large enough to make the program run for days and
days and days and days.

O(n^2) is bad, unless you know that n will always be small.

So, if performance is a problem, the OP should consider reworking the
code. For instance, if the original code was:

while myList:
    # Do Something with the first element
    del myList[0]

Then it should be much better to go:

myList.reverse()
while myList:
    # Do Something with the last element
    del myList[len(myList)-1]

This changes it from O(n^2) to O(n), which could make it run thousands
of times faster.

Premature optimization is the root of all evil, but O(n^2) is pretty
darned evil too!

Of course, the OP's code may not be so easily reworked as my simple example.

Peter Hansen

Oct 3, 2002, 1:40:24 AM
Bruce Dawson wrote:
> Then it should be much better to go:
>
> myList.reverse()
> while myList:
>     # Do Something with the last element
>     del myList[len(myList)-1]
>
> This changes it from O(n^2) to O(n), which could make it run thousands
> of times faster.

You're making assumptions, possibly good ones, about the implementation
of the list object in Python. It's possible, though obviously unlikely,
that the list is reallocated and copied every time it shrinks by even
one element. (I'm just pointing this out for discussion purposes...
no doubt it doesn't work that way.) That would leave it at O(n^2).

Unless you profiled this already under a particular implementation of
Python and are sure it really is a question of O(n) vs. O(n^2)...

Likewise, even with the original question, it's possible that for
del list[0] vs. list = list[1:] the overhead of allocating a new
list and deallocating the old one is negligible compared to the
cost of copying n-1 references around in memory. In that case there
would be only minor performance differences between them for
suitable values of n.

-Peter

Disclaimer: the above may contain bizarre misspellings. Apparently
late-night dyslexia led me to write "dellocating" and "deglegible",
and I couldn't figure out the correct spellings for a while... there
may still be others lurking. Yikes...

Alex Martelli

Oct 3, 2002, 2:23:16 AM
Peter Hansen wrote:

> Bruce Dawson wrote:
>> Then it should be much better to go:
>>
>> myList.reverse()
>> while myList:
>>     # Do Something with the last element
>>     del myList[len(myList)-1]

Incidentally, "del myList[-1]" is a more readable way to express this.

>> This changes it from O(n^2) to O(n), which could make it run thousands
>> of times faster.
>
> You're making assumptions, possibly good ones, about the implementation
> of the list object in Python. It's possible, though obviously unlikely,
> that the list is reallocated and copied every time it shrinks by even
> one element. (I'm just pointing this out for discussion purposes...
> no doubt it doesn't work that way.) That would leave it at O(n^2).

Python does not specify the O()-behavior of its built-in types' operations
(the way C++'s standard library does), but I think I have the next best
thing -- Tim Peters' review of my forthcoming Nutshell book's chapter on
optimization, which does devote some time to documenting that behavior.
For x>0, "del somelist[x]" is O(len(somelist)-x) -- you can count on
that as well as you can count on most things in this sublunar orb.

IOW, I don't think these "discussion purposes" serve any concrete
purpose in this context.

> Unless you profiled this already under a particular implementation of
> Python and are sure it really is a question of O(n) vs. O(n^2)...

"Only a madman can be absolutely sure", but that applies to just
about anything. That doesn't and shouldn't stop us from acting
on the basis of "reasonable certainty".


> Likewise, even with the original question, it's possible that for
> del list[0] vs. list = list[1:] the overhead of allocating a new
> list and deallocating the old one is negligible compared to the
> cost of copying n-1 references around in memory. In that case there
> would be only minor performance differences between them for
> suitable values of n.

Yes, allocation costs cannot be guaranteed -- THAT is a point worth
making. Similarly, no guarantees can be offered regarding such
things as caching effects, VM thrashing, and other by-products of
the existence of memory hierarchies (maybe that's why Cray boxes
tended to avoid such hierarchies...).

"del alist[0]" must copy N-1 references; "alist = alist[1]" must
perform the same copies plus one allocation and one freeing. So,
the former's performance "can't" (with reasonable certainty) be
worse than the latter's, but you can't predict how much better --
surely not across platforms. Still, if you must choose between
the two ways of expressing functionality that is equivalent to
your application, the former is clearly better.

As between "del alist[0]" and "del alist[-1]", the case is even
sharper: the latter's O(1) [at worst in an amortized sense,
though I don't think you need this proviso in any current or
planned Python implementation], the former's O(N) [ditto].

Of course, if any user-coded __del__ is triggered by any of
these (if it is by one, it should be by all), then all bets are
off -- there are no limits to the amount of computation that
such a __del__ could perform, and thus to its ill effects on
a program's overall performance.


Alex

Duncan Booth

Oct 3, 2002, 4:36:30 AM
Bruce Dawson <bruce_delet...@cygnus-software.com> wrote in
news:3D9BC6F0...@cygnus-software.com:

> Then it should be much better to go:
>
> myList.reverse()
> while myList:
>     # Do Something with the last element
>     del myList[len(myList)-1]
>
> This changes it from O(n^2) to O(n), which could make it run thousands
> of times faster.

Ignoring any questions of speed, it may make the code somewhat clearer to
rewrite this as:

myList.reverse()
while myList:
    element = myList.pop()
    # Do Something with element

Alternatively, in many cases you can just avoid the removal of individual
items. For example if the list contains items to process, and during the
processing it is possible that more items are added:

while myList:
    work, myList = myList, []
    for item in work:
        process(item)
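
As a made-up usage example of that last pattern (the process() body here
is invented purely for illustration; it appends new work while items are
being handled, and the fresh items get picked up on the next pass):

myList = [4, 3, 5]

def process(item):
    # invented work: split anything bigger than 1 into two halves
    if item > 1:
        myList.append(item // 2)
        myList.append(item - item // 2)
    else:
        print "leaf:", item

while myList:
    work, myList = myList, []
    for item in work:
        process(item)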

--
Duncan Booth dun...@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?

Peter Hansen

Oct 3, 2002, 8:08:06 AM
Alex Martelli wrote:
> Peter Hansen wrote:
>>Bruce Dawson wrote:
>>
>>>This changes it from O(n^2) to O(n), which could make it run thousands
>>>of times faster.
>>
>>You're making assumptions, possibly good ones, about the implementation
>>of the list object in Python. It's possible, though obviously unlikely,
>>that the list is reallocated and copied every time it shrinks by even
>>one element. (I'm just pointing this out for discussion purposes...
>>no doubt it doesn't work that way.) That would leave it at O(n^2).
>
> Python does not specify the O()-behavior of its built-in type's operations
> (the way C++'s standard library does), but I think I have the next best
> thing -- Tim Peters' review of my forthcoming Nutshell book's chapter on
> optimization, which does devote some time to documenting that behavior.
> For x>0, "del somelist[x]" is O(len(somelist)-x) -- you can count on
> that as well as you can count on most things in this sublunar orb.
>
> IOW, I don't think these "discussion purposes" serve any concrete
> purpose in this context.

Sure they did... they brought out your nice response. That is,
after all, what "discussion purposes" means. ;-)

> "del alist[0]" must copy N-1 references; "alist = alist[1]" must
> perform the same copies plus one allocation and one freeing. So,
> the former's performance "can't" (with reasonable certainty) be
> worse than the latter's, but you can't predict how much better --
> surely not across platforms. Still, if you must choose between
> the two ways of expressing functionality that is equivalent to
> your application, the former is clearly better.

Taking note again of Timothy Delaney's point that in fact the
functionality is *not* equivalent: del alist[0] will throw
an exception if the list is empty, while slicing will not.

I think this all just reinforces the claim that these choices
should generally be made on the basis of readability rather than
performance.

-Peter

Alex Martelli

Oct 3, 2002, 8:42:38 AM
Peter Hansen wrote:
...

>> "del alist[0]" must copy N-1 references; "alist = alist[1]" must
>> perform the same copies plus one allocation and one freeing. So,
>> the former's performance "can't" (with reasonable certainty) be
>> worse than the latter's, but you can't predict how much better --
>> surely not across platforms. Still, if you must choose between
>> the two ways of expressing functionality that is equivalent to
>> your application, the former is clearly better.
>
> Taking note again of Timothy Delaney's point that in fact the
> functionality is *not* equivalent: del alist[0] will throw
> an exception if the list is empty, while slicing will not.

That's why I qualified my last sentence: when the functionality
is NOT equivalent to your application, then you must let that
lead your choice. Besides empty lists, there's also the case
of multiple references to the same list: if names a and b refer
to the same list object, del a[0] affects that one object, while
rebinding a = a[1:] doesn't. When such issues matter, you must
choose the expression that does whatever is right for your app.
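
A small sketch of that aliasing point (illustrative only):

a = [1, 2, 3]
b = a            # a and b name the same list object

del a[0]         # mutates the shared object
print b          # [2, 3] -- b sees the change

a = [1, 2, 3]
b = a

a = a[1:]        # rebinds a to a brand-new list
print a          # [2, 3]
print b          # [1, 2, 3] -- the object b refers to is untouched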


> I think this all just reinforces the claim that these choices
> should generally be made on the basis of readability rather than
> performance.

Choices between idioms with similar big-O characteristics should
generally be made on the basis of readability. However, when two
choices have different O(f(N)) characteristics, then [unless you can
put strict bounds on N, or already have bottlenecks elsewhere with
O(f(N)) behavior bad enough to make the given choice immaterial] it's
different. I surely don't mind paying an extra 10% of runtime, and
may not mind 100%, but the runtime overhead in such big-O cases is
unbounded, and grows with N.

For example, some people appear to find "Cramer's rule" a "readable"
way to solve systems of linear equations (maybe that's because it's
the one I learned in high school, whatever). It doesn't matter,
though, unless you KNOW the number N of equations and unknowns is
definitely tiny: Cramer's rule's performance scales up *SO* badly
(O(N!), I think I recall) that you just CAN'T use it unless you
KNOW that N is no worse than 3 or 4. The price of "readability" in
such cases might be to make your program totally unusable in some
common cases. And MY order of priority is always Vitruvius'
"firmitas, utilitas, venustas", NOT Alberti's venustas-first one --
solidity, usefulness, and delight, in *THIS* order (Alberti's actual
_practice_, I claim, followed Vitruvius' order which I approve, not
the one he _wrote_ about...:-).

You might claim this is an issue of choice of algorithm and not of
idiom, but to me that doesn't make much difference. Some people
may find the following idiom more readable (they certainly DO,
because it keeps popping up...):

bigstring = ''
while somecondition():
    bigstring += onemoretinypiece()

but it doesn't matter -- it's still the wrong idiom to use in
Python, as its big-O behavior is a disaster. You can take your
pick between other idioms with roughly O(N) behavior (you can use
a cStringIO instance, an array, etc, etc), though my favorite is:

templist = []
while somecondition():
    templist.append(onemoretinypiece())
bigstring = ''.join(templist)
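
For completeness, a sketch of the cStringIO variant mentioned above
(the sample pieces are invented here just to make it self-contained):

import cStringIO

tiny_pieces = ["spam ", "and ", "eggs "] * 1000   # invented sample data

buf = cStringIO.StringIO()
for piece in tiny_pieces:
    buf.write(piece)            # appending to the buffer is cheap
bigstring = buf.getvalue()      # build the final string once, at the end

print len(bigstring)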


Choice between idioms with similar O() behavior is quite properly
driven by 'venustas' -- you don't really mind paying for that
with a slowdown of, say, 20%, except in rather rare cases -- but
that doesn't generalize -- *exactly* as it doesn't for choices
of data structures and algorithms. If you can train yourself to
find more readable the version that performs better, so much
the better, of course -- and human ability for rationalization
being almost unbounded, you generally CAN, if you wish:-).


Alex

Paul Rubin

Oct 3, 2002, 9:09:13 AM
to al...@aleax.it
Alex Martelli <al...@aleax.it> writes:
> You might claim this is an issue of choice of algorithm and not of
> idiom, but to me that doesn't make much difference. Some people
> may find the following idiom more readable (they certainly DO,
> because it keeps popping up...):
>
> bigstring = ''
> while somecondition():
>     bigstring += onemoretinypiece()
>
> but it doesn't matter -- it's still the wrong idiom to use in
> Python, as its big-O behavior is a disaster.

Don't you think that's kind of a deficiency in Python? Python could
fix that by having mutable string objects, either under or over the covers.

Alex Martelli

Oct 3, 2002, 10:19:05 AM
Paul Rubin wrote:
...

>> bigstring = ''
>> while somecondition():
>>     bigstring += onemoretinypiece()
>>
>> but it doesn't matter -- it's still the wrong idiom to use in
>> Python, as its big-O behavior is a disaster.
>
> Don't you think that's kind of a deficiency in Python? Python could
> fix that by having mutable string objects, either under or over the
> covers.

It appears to me that immutable strings achieve both simplicity and
efficiency for an _extremely_ common and important set of cases.
Making all strings mutable would play havoc with Python's powerful
use of dictionaries for instance attributes &c. Introducing a new
"mutablestr" type would basically offer yet one more way to perform
the same kind of tasks. Specialcasing a += b, when a has only one
outstanding reference, to allocate excess space, would potentially
waste much memory if nothing further were then appended to a.

All in all, I suspect all solutions would be worse than the "disease"
they're supposed to cure, which is just one of a zillion uses of
strings, not the single most common one, and one which newbies soon
learn good alternative idioms for. There is probably no way to do
strings that's just perfect for ALL uses -- at least not if you
include "newbies can't be tempted to use a very inefficient idiom"
among the criteria for perfection -- and I love the set of design
choices that Python's strings are based on.


Alex

Chad Netzer

Oct 3, 2002, 5:30:07 PM
On Wednesday 02 October 2002 21:26, Bruce Dawson wrote:
>
> Highly applicable to this thread. Deleting the first item of a Python
> list or C++ vector is an O(n) operation - proportional to the number
> of items in the list. If you do that for every item in the list it is
> O(n^2) - proportional to the square.

Just for fun, I wrote this up:

import time

N = 17
M = 5
for n in xrange(4,N):
    m = 5
    avg = 0
    for j in xrange( M ):
        l = range( 2**n )
        start = time.time()
        for i in xrange( 2**(n-3) ):
            del l[ 0 ]
            del l[ 0 ]
            del l[ 0 ]
            del l[ 0 ]
            del l[ 0 ]
            del l[ 0 ]
            del l[ 0 ]
            del l[ 0 ]

        end = time.time()
        avg = avg + (end - start)

    avg = avg / m
    print "avg time to delete list of length %10d: %.13f secs" % (2**n, avg)

Here are the results on my machine:
avg time to delete list of length 16: 0.0000482082367 secs
avg time to delete list of length 32: 0.0000820159912 secs
avg time to delete list of length 64: 0.0001710176468 secs
avg time to delete list of length 128: 0.0003101825714 secs
avg time to delete list of length 256: 0.0006976127625 secs
avg time to delete list of length 512: 0.0017696142197 secs
avg time to delete list of length 1024: 0.0053425788879 secs
avg time to delete list of length 2048: 0.0166998147964 secs
avg time to delete list of length 4096: 0.0555817842484 secs
avg time to delete list of length 8192: 0.2105991840363 secs
avg time to delete list of length 16384: 0.9571284055710 secs
avg time to delete list of length 32768: 3.1392956018448 secs
avg time to delete list of length 65536: 25.6988666057587 secs

The last line I think shows some effects of cache thrashing (but
not swapping; I have 1 Gig of RAM). It also illustrates that each doubling
of the problem size increases the running time by a factor of 4 or more
(i.e. ~ O( N^2 ) growth), at least when the lists start to get long
(i.e. > 512 items or so).

And if I change the 'del l[ 0 ]' to 'del l[ -1 ]':
avg time to delete list of length 16: 0.0000484228134 secs
avg time to delete list of length 32: 0.0000805854797 secs
avg time to delete list of length 64: 0.0001428127289 secs
avg time to delete list of length 128: 0.0002708435059 secs
avg time to delete list of length 256: 0.0005234003067 secs
avg time to delete list of length 512: 0.0010224103928 secs
avg time to delete list of length 1024: 0.0020072221756 secs
avg time to delete list of length 2048: 0.0040839672089 secs
avg time to delete list of length 4096: 0.0080970048904 secs
avg time to delete list of length 8192: 0.0164772510529 secs
avg time to delete list of length 16384: 0.0334905624390 secs
avg time to delete list of length 32768: 0.0808611869812 secs
avg time to delete list of length 65536: 0.1617539882660 secs

Here a doubling of list length leads more or less to a doubling of
running time (ie. O( N )).

So, no real surprises, but illustrative for those who have never
compared O( N^2 ) to O( N ) algorithms before (hmmm, maybe that should
be Theta( N ) or even o( N )?). I didn't dare do a longer list for
'del l[ 0 ]', whereas 'del l[ -1 ]' scales easily to much longer lists.

--

Chad Netzer
cne...@mail.arc.nasa.gov

Peter Hansen

Oct 3, 2002, 6:48:29 PM
Chad Netzer wrote:
> On Wednesday 02 October 2002 21:26, Bruce Dawson wrote:
>
>>Highly applicable to this thread. Deleting the first item of a Python
>>list or C++ vector is an O(n) operation - proportional to the number
>>of items in the list. If you do that for every item in the list it is
>>O(n^2) - proportional to the square.
>
> Here are the results on my machine:
...

> avg time to delete list of length 4096: 0.0555817842484 secs
...

>
> And if I change the 'del l[ 0 ]' to 'del l[ -1 ]':
...

> avg time to delete list of length 4096: 0.0080970048904 secs
...

> Here a doubling of list length leads more or less to a doubling of
> running time (ie. O( N )).
>
> So, no real surprises, but illustrative for those who have never
> compared O( N^2 ) to O( N ) algorithms before (hmmm, maybe that should
> be phi( N ) or even o( N )? ) I didn't dare do a longer list for 'del
> l[ 0 ]', whereas 'del l[ -1 ]' scales easily to much longer lists.

Nice analysis. I find it equally illustrative, for those who
always jump to optimization too quickly, to note that for any
value of N up to at least 4096 in your example, the difference
could well be completely below the threshold of noticeability for
anything involving, for example, human response times.

Depending entirely on the particular case in which it is going to be
used, it is quite possible that spending more than ten seconds on the
whole issue ends up being wasted time. Maybe the list will never
have more than 100 items, or maybe it has 16000 but it runs once a
day and nobody will be sitting waiting for it.

All the emphasis on algorithmic performance improvements is interesting,
but practically none of this should ever be an issue until somebody
is actually *bothered* by the performance once the system is running,
at least in testing. (Well, that might be overstating it a little:
in some environments making changes that late in the game is dangerous,
but I guess I'm coming from an agile environment and optimization just
doesn't seem to be an issue these days, even with Python.)

-Peter
