Just for Fun: Python Permutation Jumbles


Stuart D. Gathman

Oct 23, 2002, 11:20:28 AM
I am trying to see how efficient I can make a permute function in pure
python (without resorting to a C extension). Here is my best effort so
far:

def permute(l):
    n = len(l)
    if n <= 2:
        yield l
        if n <= 1: return
        l[0],l[1] = l[1],l[0]
        yield l
        return
    for i in xrange(0,n):
        l[0],l[i] = l[i],l[0]
        first = [l[0]]
        for j in permute(l[1:]):
            yield first + j


To test this, we can solve newspaper jumbles:

from __future__ import generators
import sys

def permute...

words = [ s[:-1].lower() for s in open('/usr/share/dict/words','r').readlines()]
dict = dict(zip(words,words))
del words

for w in sys.argv[1:]:
wl = w.split('-')
res = []
for i in permute(list(''.join(wl))):
pos = 0
ans = []
for j in wl:
word = ''.join(i[pos:pos+len(j)])
pos += len(j)
if word in dict:
ans.append(word)
if len(ans) == len(wl):
ans = ' '.join(ans)
if ans not in res:
res.append(ans)
for ans in res:
print ans

The output of "time python2.2 jumble.py faee-frct":
face fret
fact free
fact reef
free fact
fret face
fret cafe
reef fact
cafe fret
2.29s real 2.23s user 0.05s system

on my 1.2 GHz Celeron.

Unfortunately, a larger jumble such as "fruits-labor" takes over 15
minutes. Can python gurus speed up permutations in pure python?

--
Stuart D. Gathman <stu...@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flamis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.

Bengt Richter

Oct 23, 2002, 9:37:09 PM
On Wed, 23 Oct 2002 15:20:28 GMT, "Stuart D. Gathman" <stu...@bmsi.com> wrote:

>I am trying to see how efficient I can make a permute function in pure
>python (without resorting to a C extension). Here is my best effort so
>far:
>
>def permute(l):
>    n = len(l)
>    if n <= 2:
>        yield l
>        if n <= 1: return
>        l[0],l[1] = l[1],l[0]
>        yield l
>        return
>    for i in xrange(0,n):
>        l[0],l[i] = l[i],l[0]
>        first = [l[0]]
>        for j in permute(l[1:]):
>            yield first + j
>
>
>To test this, we can solve newspaper jumbles:
>

Solving jumbles is a different problem from making an efficient permute.
Was your goal to solve jumbles? If so, I think I had
an idea that should go faster, once you get a special dictionary built ;-)
(Which could be pickled in binary and loaded fairly quickly, I think, but
if you use this interactively, you only pay the first time ;-)

I didn't have your word list immediately available, but I did have the moby
scrabble-word list, so I used that. I didn't know what the rules were re
the word pairs, so I put a '*' between sets whose cartesian product would
be available pairs. I think it could still be sped up if you want to
be obsessive about it. This was just to show the benefit of an alternative
algorithm.

BTW, my (aging) system is a 300 MHz P2 with 320 MB RAM running NT4, and the latest Python 2.2:

Python 2.2.2 (#37, Oct 14 2002, 17:02:34) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import jumble
>>> jumble.test('faee-frct')
Building chars2words dict ...
... took 10.095 seconds
Searching for jumbles ...
... took 0.032 seconds.

* affecter
cete * raff
eff * caret carte cater crate react recta trace
effect * ar
eft fet * facer farce
er re * affect
erect terce * aff
fee * craft
fer ref * facet
fere free reef * fact
fret reft tref * cafe face
refect * fa
teff * acre care race

Now that the dictionary is built, it goes pretty quickly:
(note that ttest skips the dict build that plain test did).

>>> jumble.ttest('fruits-labor')
Searching for jumbles ...
... took 0.298 seconds.

brios * artful
broil * frusta
bruit * floras safrol
bruits * flora
brulot * fairs fiars
brulots * fair fiar
burst * foliar
first frits rifts * labour
flit lift * arbours
flits lifts * arbour
florist * bura
floruit * bars bras
floruits * bar bra
flout * briars
flouts * briar
flu * arborist
fob * ruralist
forb * rituals
forbs * ritual
fort * burials
forts frost * burial
fours * tribal
frit rift * labours suboral
fruit * labors
fruits * bolar labor lobar
fur * orbitals strobila
furs surf * orbital
orb rob * fistular
orts rots sort tors * fibular
robust turbos * filar flair frail
torr * fibulas
troilus * barf
turbo * flairs frails
turf * bailors
turfs * bailor

--< jumble.py >------------------
# jumble.py
# make special dict
chars2words=None

def makedict(): # so you can control when this is done
    global chars2words
    words = file(r'C:\Info\Linguistics\moby\mwords\113809of.fic','r').read().split()
    chars2words = {}
    for w in words:
        ws = list(w); ws.sort(); ws = ''.join(ws)
        chars2words.setdefault(ws,[]).append(w)

def jumble(jumble): # 'word1-word2'
    jchars = list(jumble)
    jchars.remove('-')
    jchars.sort()
    jchars = ''.join(jchars).lower()
    nc = len(jchars)
    bits = [2**n for n in range(nc)]
    bits.reverse()
    used = {}; out = []
    for mask in range(2**nc): # exclude single word solutions?
        w1 = []; w2 = []
        for i in range(nc):
            if bits[i] & mask:
                w1.append(jchars[i])
            else:
                w2.append(jchars[i])
        w1 = ''.join(w1); w2 = ''.join(w2)
        if w1 in used and w2 in used: continue
        used[w1] = 1; used[w2] = 1
        # if len(w1)!=len(w2): continue
        if (not w1 or w1 in chars2words) and (not w2 or w2 in chars2words):
            out.append('%s * %s' % (
                ' '.join(chars2words.get(w1,[])), ' '.join(chars2words.get(w2,[]))
                ))
    out.sort()
    return '\n'.join(out)

def test(jwords):
    import time
    start = time.clock()
    print 'Building chars2words dict ...'
    makedict()
    print '... took %6.3f seconds\nSearching for jumbles ...' % (time.clock()-start)
    start = time.clock()
    result = jumble(jwords)
    print '... took %6.3f seconds.\n' % (time.clock()-start)
    print result

def ttest(jwords):
    import time
    print 'Searching for jumbles ...'
    start = time.clock()
    result = jumble(jwords)
    print '... took %6.3f seconds.\n' % (time.clock()-start)
    print result

if __name__ == '__main__':
    import sys
    try:
        test(sys.argv[1])
    except:
        print 'Usage: jumble.py word1-word2'
---------------------------------

Regards,
Bengt Richter

Alex Martelli

Oct 24, 2002, 1:32:37 PM
Stuart D. Gathman wrote:

> I am trying to see how efficient I can make a permute function in pure
> python (without resorting to a C extension). Here is my best effort so

I see others have already addressed the issue that "a permute function"
need not be the fastest way to solve the example application you posed
(jumbles), but let's forget that, and just aim to get a fast generator
of permutations in pure Python.

I think a good first step is to examine your solution:

def permute(l):
    n = len(l)
    if n <= 2:
        yield l
        if n <= 1: return
        l[0],l[1] = l[1],l[0]
        yield l
        return
    for i in xrange(0,n):
        l[0],l[i] = l[i],l[0]
        first = [l[0]]
        for j in permute(l[1:]):
            yield first + j

and compare it to a slightly simpler one:

def perm(l):
    n = len(l)
    if n < 2:
        yield l
        return
    for i in range(n):
        first = [l[i]]
        for p in perm(l[:i]+l[i+1:]):
            yield first + p

with a systematic time-measurement of each:

import time
for func in permute, perm:
    print '%s:' % func.func_name
    for N in range(5,10):
        start = time.clock()
        for p in func(range(N)): pass
        stend = time.clock()
        print ' %d %.2f' % (N, stend-start)

Here, we get (on my machine):

permute:
5 0.00
6 0.01
7 0.10
8 0.78
9 7.58
perm:
5 0.00
6 0.02
7 0.15
8 1.27
9 12.17

So, WHY is your solution faster than the simpler one? Clearly,
upon examination, because your solution special-cases the (recursively
frequent) case of permuting a list of length two -- it "loop-unrolls"
at least one recursive call out of each such leaf. So, let's try
unrolling one more (and minimizing the packing/unpacking overhead):

def perm(l):
    n = len(l)
    if n < 3:
        yield l
        if n < 2: return
        yield [l[1], l[0]]
        return
    elif n == 3:
        yield l
        yield [l[0], l[2], l[1]]
        yield [l[1], l[0], l[2]]
        yield [l[1], l[2], l[0]]
        yield [l[2], l[0], l[1]]
        yield [l[2], l[1], l[0]]
        return
    for i in range(n):
        first = [l[i]]
        for p in perm(l[:i]+l[i+1:]):
            yield first + p

Now, we get:

permute:
5 0.00
6 0.01
7 0.09
8 0.78
9 7.66
perm:
5 0.00
6 0.01
7 0.06
8 0.56
9 5.74

So, clearly, this unrolling does do some good. Unfortunately,
writing out by hand all of these yield statements is quite
tedious. Hmmm, a repetitive, tedious task -- why not have
a computer program do it for us...? E.g.:

def makepermn(N):
    header = 'def perm(L):\n yield L\n'
    body = []
    if N >= 2:
        R = range(N)
        permindx = permute(R)
        permindx.next()
        for p in permindx:
            pieces = [None]*N
            for i in R:
                pieces[i] = 'L[%d]' % p[i]
            body.append(' yield [%s]' % ','.join(pieces))
    body.append('')
    func = header + '\n'.join(body)
    print func # just for double-checking...!
    d = {}
    exec func in d
    return d['perm']

This should return a function object able to permute a list
of a fixed length N. Let's check...:

p3 = makepermn(3)
for p in p3('bla'):
    print ''.join(p)

[alex@lancelot Python-2.2.2]$ python2.2 -O peg.py
def perm(L):
 yield L
 yield [L[0],L[2],L[1]]
 yield [L[1],L[0],L[2]]
 yield [L[1],L[2],L[0]]
 yield [L[2],L[0],L[1]]
 yield [L[2],L[1],L[0]]

bla
bal
lba
lab
abl
alb

OK, seems to work. Of course, we'll do some more tests, just
to make sure, but, let's take those for granted here. Now,
we can loop-unroll to any depth (removing the print statement
from makepermn, of course!):

memz = {}
for i in range(6):
    memz[i] = makepermn(i)

def perm(l):
    n = len(l)
    func = memz.get(n)
    if func:
        for p in func(l):
            yield p
        return
    for i in range(n):
        first = [l[i]]
        for p in perm(l[:i]+l[i+1:]):
            yield first + p

Now, we have:

[alex@lancelot Python-2.2.2]$ python2.2 -O peg.py
permute:
5 0.00
6 0.01
7 0.08
8 0.77
9 7.72
perm:
5 0.00
6 0.00
7 0.04
8 0.35
9 3.90

This IS a bit unfair, of course, because we're not counting
the cost of the initial priming of memz. Also, this tells us
nothing about the best depth at which we need to unroll --
that will depend on how many permutations we will then want
to generate, and on lists of which lengths. Since the time
will always grow exponentially with the list's length,
it IS a bit tiresome to check, but let's give it a try for
such a hypothetical simple case -- all we want is one permute
cycle each for lists of length 5 to 9 inclusive, one list of
each length, and we also want to count all overheads. A
little OO wrapping will of course help here:

class Permer:
    def __init__(self, N):
        self.func_name = 'Permer(%s)' % N
        self.range = range(N)
    def __call__(self, L):
        if self.range:
            memz.clear()
            for i in self.range:
                memz[i] = makepermn(i)
            self.range = 0
        return perm(L)

funcs = [permute]
for i in range(2, 9):
    funcs.append(Permer(i))
for func in funcs:

with the rest of the timing loop unchanged. Now, we see:

permute:
5 0.01
6 0.01
7 0.09
8 0.76
9 7.64
Permer(2):
5 0.01
6 0.02
7 0.18
8 1.46
9 13.77
Permer(3):
5 0.00
6 0.01
7 0.10
8 0.92
9 8.86
Permer(4):
5 0.00
6 0.01
7 0.07
8 0.59
9 6.08
Permer(5):
5 0.01
6 0.01
7 0.05
8 0.45
9 4.78
Permer(6):
5 0.03
6 0.00
7 0.04
8 0.37
9 3.95
Permer(7):
5 0.30
6 0.01
7 0.02
8 0.28
9 3.27
Permer(8):
5 2.28
6 0.00
7 0.02
8 0.22
9 2.63

Thus, we notice that, even considering the initial overhead fully (and
then some...), and even when all we need to permute is one list of each
possible length, pre-generating and compiling the code does still help
quite a bit. Of course, there's a time-space tradeoff here -- Permer(N)
for high N does generate big functions (growing exponentially with N).


Still, I think the general idea is worth keeping in mind: particularly
when you need to optimize highly recursive functions, consider what you
can memoize -- and when nothing obvious presents itself, consider
memoizing _functions_; further, "code generation" to prepare suitable
functions on the fly (ONCE each...) can be a worthwhile optimization,
and tends to go well with the idea of sticking functions into a dict...
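
For reference, the bare pattern Alex describes, reduced to a deliberately
trivial payload (a sketch, not from the original post; make_scaler and its
cache dict are invented names):

_made = {}

def make_scaler(n):
    # build (and cache) a function specialized for this n
    func = _made.get(n)
    if func is None:
        src = 'def scale(x):\n    return x * %d\n' % n
        d = {}
        exec src in d          # compile the specialized function once
        func = _made[n] = d['scale']
    return func

print make_scaler(3)(14)       # -> 42, compiled on first use
print make_scaler(3)(5)        # -> 15, fetched straight from the dict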


Alex

Brad Clements

Oct 24, 2002, 1:50:56 PM
Is this technique in the Python Cookbook?

Very interesting idea.. Thanks Alex!

--
Novell DeveloperNet Sysop #5

"Alex Martelli" <al...@aleax.it> wrote in message news:V8Wt9.29100>

> Still, I think the general idea is worth keeping in mind: particularly
> when you need to optimize highly recursive functions, consider what you
> can memoize -- and when nothing obvious presents itself, consider
> memoizing _functions_; further, "code generation" to prepare suitable
> functions on the fly (ONCE each...) can be a worthwhile optimization,
> and tends to go well with the idea of sticking functions into a dict...
>
>
> Alex
>



Bengt Richter

Oct 24, 2002, 1:53:49 PM
On 24 Oct 2002 01:37:09 GMT, bo...@oz.net (Bengt Richter) wrote:

>On Wed, 23 Oct 2002 15:20:28 GMT, "Stuart D. Gathman" <stu...@bmsi.com> wrote:
>

[...]


>>The output of "time python2.2 jumble.py faee-frct":
>>face fret
>>fact free
>>fact reef
>>free fact
>>fret face
>>fret cafe
>>reef fact
>>cafe fret
>> 2.29s real 2.23s user 0.05s system
>>
>>on my 1.2 Ghz Celeron.
>>
>>Unfortunately, a larger jumble such as "fruits-labor" takes over 15
>>minutes. Can python gurus speed up permutations in pure python?

^^^^^^^ YOW!


>>
>
>I didn't have your word list immediately available, but I did have the moby
>scrabble-word list, so I used that.

Ok, I got the unix words list (from my slackware CD - it wasn't installed for
some reason), which is quite a bit shorter than the scrabble list ( ~39k vs ~114k words)
-- so the dictionary build goes faster, but the search stays virtually the same
as with the other word list:

>>> import jumble
>>> jumble.test('faee-frct')
Building chars2words dict ...
... took 3.182 seconds
Searching for jumbles ...
... took 0.031 seconds.

fee * craft
free reef * fact
fret * cafe face
re * affect

>>> jumble.ttest('fruits-labor')
Searching for jumbles ...
... took 0.291 seconds.

forts frost * burial
fours * tribal
fruit * labors
fruits * labor
furs surf * orbital
robust * flair frail

I think you could extend the logic to search for N-word jumbles with
time proportional to N**nc instead of 2**nc. You could do
N = s.count('-') + 1 and adjust automatically according to the input
string.
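
For reference, a sketch of what that might look like (not Bengt's code;
nway_splits is an invented helper): treat a counter as an nc-digit base-N
number, each digit assigning one letter to one of the N words. Since
jchars is sorted, each group comes out sorted and can be looked up in
chars2words exactly as jumble() does:

def nway_splits(jchars, N):
    # jchars: sorted list of letters; yields one N-way split per code
    nc = len(jchars)
    for code in range(N**nc):
        c = code
        groups = [[] for k in range(N)]
        for i in range(nc):
            c, d = divmod(c, N)   # peel off one base-N digit per letter
            groups[d].append(jchars[i])
        yield [''.join(g) for g in groups]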

Regards,
Bengt Richter

Tim Lavoie

Oct 24, 2002, 2:40:22 PM
>>>>> "Alex" == Alex Martelli <al...@aleax.it> writes:

[ much cool stuff snipped ]

Alex> Still, I think the general idea is worth keeping in mind:
Alex> particularly

Alex> when you need to optimize highly recursive functions,
Alex> consider what you can memoize -- and when nothing obvious
Alex> presents itself, consider memoizing _functions_; further,
Alex> "code generation" to prepare suitable functions on the fly
Alex> (ONCE each...) can be a worthwhile optimization, and tends
Alex> to go well with the idea of sticking functions into a
Alex> dict...

That's neat! I hadn't tried that sort of thing with Python yet, but
have enjoyed some tinkering like that with PostScript and Lisp. Those
are probably a bit less fussy for this sort of thing though, since
indentation isn't needed.

Cheers,
Tim

--
I have an existential map. It has "You are here" written all over it.
-- Steven Wright

Alex Martelli

Oct 24, 2002, 5:30:25 PM
Brad Clements wrote:

> Is this technique in the Python Cookbook?

Not the printed version, no (alas -- we put a lot of things in it,
but couldn't put EVERYthing...!-). Dunno if anybody ever posted it
to the online version -- haven't been going *systematically* over
it for at least six months, oh the shame...


> Very interesting idea.. Thanks Alex!

You're welcome! As somebody else commented, it's pretty routine
in, say, Lisp or Postscript. It's weirdly under-utilized in most
other classes of languages, though.


Alex

Alex Martelli

Oct 24, 2002, 5:36:12 PM
Tim Lavoie wrote:
...

> Alex> "code generation" to prepare suitable functions on the fly
> Alex> (ONCE each...) can be a worthwhile optimization, and tends
> Alex> to go well with the idea of sticking functions into a
> Alex> dict...
>
> That's neat! I hadn't tried that sort of thing with Python yet, but
> have enjoyed some tinkering like that with PostScript and Lisp. Those
> are probably a bit less fussy for this sort of thing though, since
> indentation isn't needed.

Haven't done it in Postscript, but I've had no substantial difference
in the difficulty of doing it in Python vs, say, Scheme -- basically
treat an indent as you would an open-paren and a dedent as a closed
one. Besides, when what you're generating is loop-unrolling, there
may be no indents nor dedents involved, just like in the example I
just posted -- you loop-unroll to *eliminate* some compound statements
(loops), after all;-).
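
For reference, a toy sketch of that indent-as-paren idea (not from the
original post; emit and absval are invented names): treat each generated
line as carrying a depth delta, +1 opening a block and -1 closing one,
just like open and close parens:

def emit(lines):
    # each item is (delta, text); +1 opens a block after the line,
    # -1 closes one -- the Lisp-paren view of Python indentation
    out, depth = [], 0
    for delta, text in lines:
        out.append('    '*depth + text)
        depth += delta
    return '\n'.join(out)

src = emit([(+1, 'def absval(x):'),
            (+1, 'if x < 0:'),
            (-1, 'x = -x'),
            ( 0, 'return x')])
d = {}
exec src in d
print d['absval'](-7)   # -> 7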


Alex

Stuart D. Gathman

Oct 24, 2002, 9:11:31 PM
On Thu, 24 Oct 2002 13:32:37 -0400, Alex Martelli wrote:

> So, WHY is your solution faster than the simpler one? Clearly, upon
> examination, because your solution special-cases the (recursively
> frequent) case of permuting a list of length two -- it "loop-unrolls" at
> least one recursive call out of each such leaf. So, let's try unrolling
> one more (and minimizing the packing/unpacking overhead):

Yes, in fact I had already unrolled one more, after posting.

> This should return a function object able to permute a list of a fixed
> length N. Let's check...

Interesting approach. I was getting lost in working out how to visit all
permutations by swapping two elements at a time. However, working on it
jogged my memory. I was reading about such a permutation algorithm in
C/C++ Journal. I'll see if I can find it and get back.
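
For reference, one classic algorithm that visits every permutation by
swapping exactly two elements per step is Heap's algorithm; here is a
sketch in the thread's in-place generator style (an illustration,
possibly not the algorithm from the article Stuart recalls):

def heap_permute(alist, n=None):
    # permutes alist in place; each yield exposes the current state
    if n is None: n = len(alist)
    if n <= 1:
        yield alist
        return
    for i in range(n - 1):
        for p in heap_permute(alist, n - 1):
            yield p
        if n % 2 == 0:                       # n even: swap i-th with last
            alist[i], alist[n-1] = alist[n-1], alist[i]
        else:                                # n odd: swap first with last
            alist[0], alist[n-1] = alist[n-1], alist[0]
    for p in heap_permute(alist, n - 1):
        yield p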

Tim Lavoie

Oct 25, 2002, 1:08:12 AM
>>>>> "Alex" == Alex Martelli <al...@aleax.it> writes:


Alex> Haven't done it in Postscript, but I've had no substantial
Alex> difference in the difficulty of doing it in Python vs, say,
Alex> Scheme -- basically treat an indent as you would an
Alex> open-paren and a dedent as a closed one. Besides, when what
Alex> you're generating is loop-unrolling, there may be no indents
Alex> nor dedents involved, just like in the example I just posted
Alex> -- you loop-unroll to *eliminate* some compound statements

Alex> (loops), after all;-).

Good point, it is logically the same thing after all. Just don't ask
me to generate APL... :-)

Bengt Richter

Oct 25, 2002, 5:07:00 AM

My first reaction was to the slicing and dicing involved in the
recursive call and the "first + p". It seemed like unnecessary overhead.

Indeed, by just doing the permutation in place in the caller's list,
and using the yields as no more than synchronous notifications to the
caller, the time was cut by more than 50% compared to the original version.

I haven't pursued additional improvements on top of that, but I'm
sure they're possible.

Function name    time(5)  time(6)  time(7)  time(8)  time(9)
-------------    -------  -------  -------  -------  -------
permute_orig       0.008    0.046    0.353    3.148   30.868
pip                0.004    0.023    0.171    1.495   13.893
perm_simpler       0.012    0.078    0.586    5.078   47.975
perm_unrolln3      0.004    0.027    0.226    2.098   21.666
Permer(2)          0.017    0.100    0.740    6.256   59.397
Permer(3)          0.011    0.055    0.424    3.727   36.267
Permer(4)          0.011    0.031    0.256    2.300   23.395
Permer(5)          0.028    0.019    0.174    1.681   17.981
Permer(6)          0.160    0.014    0.133    1.362   14.973
Permer(7)          1.137    0.009    0.095    1.085   11.931
Permer(8)          9.520    0.009    0.061    0.931    9.964

(Your system is ~10 times faster ;-/ )
I hope I didn't glitch something in all the cutting and pasting, but
it seems ok. 'pip' is for permute in place. (I assume the caller doesn't
need copies of the permuted list, but just would make use of the temporary
state to access the referenced objects in desired permuted order, and then
call for the next permutation):

----
from __future__ import generators

def pip(alist, tx=0, te=None): # permute tail at alist[tx:te] in place
    if not tx: te = len(alist) # init start and stop/end of tail
    if tx >= te: return
    if te-tx == 2:
        yield None
        tmp = alist[tx+1]
        alist[tx+1] = alist[tx]
        alist[tx] = tmp
        yield None
        tmp = alist[tx+1]
        alist[tx+1] = alist[tx]
        alist[tx] = tmp # restore original
    else:
        tmp = alist[tx]
        for i in range(tx, te):
            alist[tx] = alist[i]
            alist[i] = tmp
            for dummy in pip(alist, tx+1, te):
                yield None
            alist[i] = alist[tx]
            alist[tx] = tmp
----
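
Typical use (an illustration, not from the original post): iterate the
notifications and read the caller's list each time, since pip permutes
it in place:

L = range(4)
for dummy in pip(L):
    print L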

Well, I just had another idea for more speed, but no time now...


>with a systematic time-measurement of each:

[... much interesting stuff ...]


>
>Thus, we notice that, even considering the initial overhead fully (and
>then some...), and even when all we need to permute is one list of each
>possible length, pre-generating and compiling the code does still help
>quite a bit. Of course, there's a time-space tradeoff here -- Permer(N)
>for high N does generate big functions (growing exponentially with N).
>
>
>Still, I think the general idea is worth keeping in mind: particularly
>when you need to optimize highly recursive functions, consider what you
>can memoize -- and when nothing obvious presents itself, consider
>memoizing _functions_; further, "code generation" to prepare suitable
>functions on the fly (ONCE each...) can be a worthwhile optimization,
>and tends to go well with the idea of sticking functions into a dict...
>

Regards,
Bengt Richter

Duncan Booth

Oct 25, 2002, 8:54:14 AM
Alex Martelli <al...@aleax.it> wrote in
news:V8Wt9.29100$aL4.8...@news1.tin.it:
> Thus, we notice that, even considering the initial overhead fully (and
> then some...), and even when all we need to permute is one list of each
> possible length, pre-generating and compiling the code does still help
> quite a bit. Of course, there's a time-space tradeoff here -- Permer(N)
> for high N does generate big functions (growing exponentially with N).

What a complicated way to do things (IMHO).

Like Bengt, my first thought was that all that list construction looked
like overkill, so I wrote this:

def myperm(l):
    llen = len(l)
    if llen <= 1:
        yield l
    else:
        aRange = range(llen)
        v = l.pop()
        for p in myperm(l):
            for j in aRange:
                l.insert(j, v)
                yield l
                del l[j]
        l.append(v)

Timings on my machine:

permute:
 5 0.00
 6 0.01
 7 0.07
 8 0.64
 9 6.38
myperm:
 5 0.00
 6 0.00
 7 0.02
 8 0.18
 9 1.48
Permer(8):
 5 1.74
 6 0.00
 7 0.02
 8 0.19
 9 2.29

Next I threw in some micro-optimisations to get a slightly more complex
function:

# Try to make it faster by little code tweaks
def myperm2(l):
    pop, insert, append = l.pop, l.insert, l.append

    def realperm():
        ll = l
        llen = len(ll)
        if llen <= 1:
            yield ll
            return
        aRange = range(llen)
        v = pop()
        for p in realperm():
            for j in aRange:
                insert(j, v)
                yield ll
                del ll[j]
        append(v)

    return realperm()

myperm2:
5 0.00
6 0.00
7 0.02
8 0.16
9 1.27

However, anticipating the complaints that might arise
because I return the original list each time instead of creating a new
copy, I also produced this:

def myperm3(l):
    for p in myperm2(l): yield p[:]

myperm3:
5 0.00
6 0.00
7 0.03
8 0.24
9 1.95

This is slower than myperm, but still mostly faster than Permer(8).

Bengt Richter

Oct 25, 2002, 2:55:10 PM

Very nice ;-)

You should add another decimal to the timings ;-)

I wonder what extra benefit you may be getting from possibly operating
within the bounds of a minimum list space allocation with a short list.
(E.g., you could compare generating the first 100k permutations of
a 9-length vs. a 1000-length list).
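
A rough harness for that comparison might look like this (a sketch,
assuming Duncan's myperm is in scope; the recursion limit is raised
because myperm recurses once per element):

import sys, time
sys.setrecursionlimit(5000)

def time_first(gen, limit=100000):
    # time how long the first `limit` permutations take
    start = time.clock()
    n = 0
    for p in gen:
        n += 1
        if n >= limit:
            break
    return time.clock() - start

print time_first(myperm(range(9)))
print time_first(myperm(range(1000)))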

This makes me think that Python ought to have a standard tool that
would make a chart of timings for fundamental expressions and ops
such as list ops, attribute access, single-char vs multichar binding
access, slot access, function calls, tuple un/packing, etc., etc.

Also the effect of looking up user-object special methods vs being able
to bypass such lookups quickly in the case of system builtin primitives
and objects.

Just a nice simple chart, also with the option of normalizing wrt pystones.
It would also be a quick check for the language implementers to see the
effects of changes they're making.

Maybe also a tool to hook into the interpreter to get a profile of how
much use each of the above-mentioned gets in a program's execution.

Regards,
Bengt Richter

Alex Martelli

Oct 26, 2002, 7:00:13 AM
Duncan Booth wrote:
...

> Like Bengt, my first thought was that all that list construction looked
> like overkill, so I wrote this:
>
> def myperm(l):
>     llen = len(l)
>     if llen <= 1:
>         yield l
>     else:
>         aRange = range(llen)
>         v = l.pop()
>         for p in myperm(l):
>             for j in aRange:
>                 l.insert(j, v)
>                 yield l
>                 del l[j]
>         l.append(v)

Quite interesting! I find it peculiar that all the insert
and del should still be cheaper than building new lists, but
they are, particularly in Python 2.2 -- comparing with a slight
variation that doesn't modify L in-place, i.e.:

def myperm2(L):
    llen = len(L)
    if llen <= 1:
        yield L
    else:
        aRange = range(llen)
        v = L[-1:]
        for p in myperm2(L[:-1]):
            for j in aRange:
                yield p[:j]+v+p[j:]

I get, with 2.2:

[alex@lancelot Python-2.2.2]$ python -O peg.py
myperm:
5 0.00
6 0.01
7 0.02
8 0.19
9 1.56
myperm2:
5 0.00
6 0.01
7 0.04
8 0.32
9 2.91

although with 2.3 the difference is less (2.3's allocation
having been tuned), it's still in favor of the altering version:

[alex@lancelot Python-2.2.2]$ python2.3 -O peg.py
myperm:
 5 0.00
 6 0.00
 7 0.02
 8 0.18
 9 1.46
myperm2:
 5 0.00
 6 0.00
 7 0.03
 8 0.21
 9 1.94

Some small attempts at memoization and the like haven't sped
up myperm2 measurably. So, if you can't amortize the generation
of the functions-dictionary, and perhaps even if you can, the
in-place approach of 'myperm' (or the slight optimization you
have already posted for it) seems definitely faster. Curiously,
the timings of myperm2 appear to be very similar to those for the
already-memoized perm (like Permer(8) without paying for the
priming time, i.e., the cost when you're able to amortize the
priming of the memo over many permutations' computations) --
this is starting to look quite close to the sheer cost of
generating (e.g.) 9! (362880) new lists of length 9. Indeed,
a non-recursive 'nonperm' function that just computes
factorial(len(L)) and yields that many copies of L made by
list comprehension:

def nonperm(L):
    la = len(L)
    fac = 1
    for i in range(2,la+1): fac *= i
    for x in xrange(fac):
        yield [z for z in L]

is substantially SLOWER (when len(L) is 9) than myperm2...:

nonperm:
5 0.00
6 0.01
7 0.03
8 0.30
9 2.87

Alex

Anton Vredegoor

Oct 26, 2002, 9:19:13 AM
On Sat, 26 Oct 2002 11:00:13 GMT, Alex Martelli <al...@aleax.it>
wrote:

<Duncan's algo>

>Quite interesting! I find it peculiar that all the insert
>and del should still be cheaper than building new lists, but
>they are, particularly in Python 2.2 -- comparing with a slight
>variation that doesn't modify L in-place, i.e.:
>
>def myperm2(L):
>    llen = len(L)
>    if llen <= 1:
>        yield L
>    else:
>        aRange = range(llen)
>        v = L[-1:]
>        for p in myperm2(L[:-1]):
>            for j in aRange:
>                yield p[:j]+v+p[j:]

The first permutation is the reverse of the last permutation, the
second permutation is the reverse of the butlast permutation, and so
on. Permhalf is a "one char fix" of myperm2!

Regards,
Anton.

from __future__ import generators

def permhalf(L):
    llen = len(L)
    if llen <= 2:
        yield L
    else:
        aRange = range(llen)
        v = L[-1:]
        for p in permhalf(L[:-1]):
            for j in aRange:
                yield p[:j]+v+p[j:]

def myperm3(L):
    ph = permhalf(L)
    for h in ph:
        p = h[:]
        yield p
        p.reverse()
        yield p

def test():
    s = "abcd"
    for p in myperm3(list(s)):
        print "".join(p)

if __name__=='__main__':
    test()

Alex Martelli

Oct 26, 2002, 11:41:42 AM
Anton Vredegoor wrote:
...

> The first permutation is the reverse of the last permutation, the
> second permutation is the reverse of the butlast permutation, and so
> on. Permhalf is a "one char fix" of myperm2!

*blink* -- yes... nonobvious but it works -- slicing the time to
permute range(10) [need to start using more substantial examples
here:-)] from 19.63 to 12.22 on my box. However, the same trick
also works for the modifying-in-place "myperm", slicing ITS time
for permuting range(10) from 14.10 down to 10.13 (I'm only using
Python 2.3 now for all of these measurements).


And, applying Anton's trick to Duncan's best micro-optimization
of his in-place-modification approach:

def mypermy(l):
    pop, insert, append = l.pop, l.insert, l.append

    def halfperm():
        ll = l
        llen = len(ll)
        if llen <= 2:
            yield ll
            return
        aRange = range(llen)
        v = pop()
        for p in halfperm():
            for j in aRange:
                insert(j, v)
                yield ll
                del ll[j]
        append(v)

    for h in halfperm():
        yield h
        h.reverse()
        yield h
        h.reverse()

ITS time to permute range(10) is reduced from 12.13 to 9.59.
Yes, we do need two .reverse calls -- halfperm depends
on the list not being modified under it -- the alternative
way to write the final loop:

for h in halfperm():
    p = h[:]
    yield p
    p.reverse()
    yield p

takes 11.03 to permute range(10), once again showing (here,
to me at least, less surprisingly) that modifying in-place
and undoing the modification is faster than making a new
list (a copy) and modifying the copy.


Alex

Anton Vredegoor

Oct 26, 2002, 1:50:16 PM
On Sat, 26 Oct 2002 15:41:42 GMT, Alex Martelli <al...@aleax.it>
wrote:

>And, applying Anton's trick to Duncan's best micro-optimization
>of his in-place-modification approach:
>
>def mypermy(l):
>    pop, insert, append = l.pop, l.insert, l.append
     reverse = l.reverse
>
>    def halfperm():
>        ll = l
>        llen = len(ll)
>        if llen <= 2:
>            yield ll
>            return
>        aRange = range(llen)
>        v = pop()
>        for p in halfperm():
>            for j in aRange:
>                insert(j, v)
>                yield ll
>                del ll[j]
>        append(v)
>
>    for h in halfperm():
>        yield h
         reverse()
>        yield h
         reverse()
>
>ITS time to permute range(10) is reduced from 12.13 to 9.59

With micro-optimized reverse I guess on your machine permute range(10)
will take just under 8 seconds.

Regards,
Anton.

Jane Austine

Oct 26, 2002, 11:53:39 PM
Duncan Booth <dun...@rcp.co.uk> wrote in message news:<Xns92B28E662F5...@rcp.co.uk>...

> Alex Martelli <al...@aleax.it> wrote in
> news:V8Wt9.29100$aL4.8...@news1.tin.it:
> > Thus, we notice that, even considering the initial overhead fully (and
> > then some...), and even when all we need to permute is one list of each
> > possible length, pre-generating and compiling the code does still help
> > quite a bit. Of course, there's a time-space tradeoff here -- Permer(N)
> > for high N does generate big functions (growing exponentially with N).
>
> What a complicated way to do things (IMHO).
>
> Like Bengt, my first thought was that all that list construction looked
> like overkill, so I wrote this:
>
> def myperm(l):
>     llen = len(l)
>     if llen <= 1:
>         yield l
>     else:
>         aRange = range(llen)
>         v = l.pop()
>         for p in myperm(l):
>             for j in aRange:
>                 l.insert(j, v)
>                 yield l
>                 del l[j]
>         l.append(v)

Why this difference?

>>> for e in myperm(range(3)):
	print e,

[2, 1, 0] [1, 2, 0] [1, 0, 2] [2, 0, 1] [0, 2, 1] [0, 1, 2]
>>> print list(myperm(range(3)))
[[0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2]]

What's the semantics of "list"?

Changjune Kim

Oct 27, 2002, 1:56:15 AM

"Jane Austine" <janeau...@hotmail.com> wrote in message
news:ba1e306f.02102...@posting.google.com...

[e for e in myperm(range(3))] will do the same as list().

This happens because "yield l" returns, as with other objects in Python, a
reference to the list l. Therefore, the result of list() will consist of
multiple references to the list l in its last state.

To get the intended result, you have to copy every list you get returned
from each yield.
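
For instance (a one-line illustration of that fix):

print [e[:] for e in myperm(range(3))]   # snapshot each list as it is yielded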

Anton Vredegoor

Oct 27, 2002, 4:44:22 AM
On 26 Oct 2002 20:53:39 -0700, janeau...@hotmail.com (Jane
Austine) wrote:

>Why this difference?
>
>>>> for e in myperm(range(3)):
> print e,
>
>[2, 1, 0] [1, 2, 0] [1, 0, 2] [2, 0, 1] [0, 2, 1] [0, 1, 2]
>>>> print list(myperm(range(3)))
>[[0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
>
>What's the semantics of "list"?

Because the topic is "Just for fun ..." some people (maybe only me!)
possibly got a bit carried away into the premature optimization trap.
The faster versions modify a list "in place" instead of returning a
copy, need more lines of code, and sometimes return the list of
permutations in a non-intuitive order.

Alex's version of Duncan's idea is currently the most concise and
robust solution, I think. It works ok with "list", see below.

By the way, there are ways of generating permutations by index number
that do not have to go through the list of permutations in order to
return a specific permutation. This can be fast too, and it's more
practical to do it this way if the number of permutations is too big
to generate them all. Randomly choosing a bridge deal is an example
of such a situation, although then a permutation with "multiple
occurrences" is necessary. There's also a solution for this specific
problem using set partitions, but let's not bore the readers.

Have fun!

Anton.

from __future__ import generators

def permute(L):
    llen = len(L)
    if llen <= 1:
        yield L
    else:
        aRange = range(llen)
        v = L[-1:]
        for p in permute(L[:-1]):
            for j in aRange:
                yield p[:j]+v+p[j:]

def test():
    L = range(3)
    for p in permute(L):
        print p,
    print
    print list(permute(L))

if __name__=='__main__':
    test()

Alex Martelli

Oct 31, 2002, 2:45:12 PM
Anton Vredegoor wrote:
...

> By the way, there are ways of generating permutations by index number
> that do not have to go through the list of permutations in order to
> return a specific permutation. This can be fast too, and it's more

One simple (not necessarily fastest) way:

def perm_given_index(alist, apermindex):
    for i in range(len(alist)-1):
        apermindex, j = divmod(apermindex, len(alist)-i)
        alist[i], alist[i+j] = alist[i+j], alist[i]
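
For instance (an illustration, not from the original post), indices 0
through n!-1 each select a distinct permutation:

for k in range(6):
    L = list('abc')
    perm_given_index(L, k)
    print k, ''.join(L)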

> practical to do it this way if the number of permutations is too big
> to generate them all. Randomly choosing a bridge deal is an example
> of such a situation, although then a permutation with "multiple
> occurrences" is necessary. There's also a solution for this specific

I'm not sure what a permutation with "multiple occurrences" would be,
nor, in particular, of why such a thing would be necessary for
"randomly choosing a bridge deal" (a subject dear and near to my
heart). If you have a hypothetical pseudo-random number generator
uniform in the range 1 to 52 factorial, perm_given_index would let
you rearrange a bridge deck (passed in as argument alist) according
to the random number (passed in as argument apermindex).

Of course, there are fewer significantly-different bridge deals than
there are permutations of a 52-cards deck -- a bridge player does not
care, when the SET of spades he or she receives among his or her 13
cards for the deal is e.g. Ace and Two, whether the Ace came before
or after the Two, or what cards in other suits, if any, came in
between. The number of significantly-different bridge deals is thus
C(52,13) times C(39,13) times C(26,13), 53644737765488792839237440000
vs 80658175170943878571660636856403766975289505440883277824000000000000
which is the number of permutations of the deck (52 factorial).
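
Those figures are easy to double-check (a snippet, not from the
original post):

def C(n, k):
    # binomial coefficient via exact long arithmetic
    num = den = 1L
    for i in range(k):
        num *= n - i
        den *= i + 1
    return num // den

print C(52,13) * C(39,13) * C(26,13)   # 53644737765488792839237440000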

But the _handiest_ way to deal a random deal, assuming for the sake
of argument that you have a uniform real pseudorandom number generator
u() [therefore you can just as easily generate U(N), uniform integer
pseudorandom generators for any integer N] is still to deal the cards
one by one just about as above:

def random_deal(alist, U):
    for i in range(len(alist)-1):
        j = U(len(alist)-i)
        alist[i], alist[i+j] = alist[i+j], alist[i]
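
For instance (an illustration, not from the original post), the standard
library's randrange can serve directly as U, since random.randrange(N)
returns a uniform integer in 0..N-1:

import random
deck = range(52)
random_deal(deck, random.randrange)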

Yes, there are faster ways (mostly because generating 51 pseudorandom
numbers tends to be costly), but it's hard to beat this simplicity
(always an important issue). Generally the processing you want to do
after dealing the deal swamps the effort of generating the deal itself,
anyway (when the post-processing you want to do is _trivially_ simple,
it may be preferable to compute whatever statistics you're after on
the universe of ALL possible deals, than to generate a random sample
and do some counting on the sample -- but that's another issue).

> problem using set partitions, but let's not bore the readers.
>
> However it's almost certainly not so concise and elegant as using
> generators to permute small ranges.

Not sure what's "it" in this context, but if "it" is generating
a permutation from a permutation-index, it seems concise and elegant
enough to me in the version above...


Alex

Alex Martelli

Oct 31, 2002, 3:04:39 PM
Jane Austine wrote:
...
>> def myperm(l):
...
>> yield l

> Why this difference?
>
>>>> for e in myperm(range(3)):
> print e,
>
> [2, 1, 0] [1, 2, 0] [1, 0, 2] [2, 0, 1] [0, 2, 1] [0, 1, 2]
>>>> print list(myperm(range(3)))
> [[0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2]]
>
> What's the semantics of "list"?

myperm is yielding the SAME object each and every time, and
mutating it between one yield and the next one. The semantics
of result=list(myperm(X)) are the same as:
result = []
for x in myperm(X): result.append(x)
so we're appending the SAME list object 6 times (in your example
where X has length 3) and therefore ending up with a list
with six identical items.

Basically, this not-uncommon buglet is tied to the deep notions of
"object identity" and Python's reference semantics. Each time
you refer to an object, in Python, you have just that, a _reference_
to the object, not an implicit copy. The object you refer to may
mutate, but you're still holding a reference to the object.

Imagine that I'm modeling T-shirts at a geeks-haute-couture catwalk.
First I come out with a green T-shirt, you like it, and say "I'll
buy the T-shirt Alex is wearing". Then I come out again with a
blue T-shirt, you like it too, and again you say "I'll buy the T-shirt
Alex is wearing". I come out a third time, with a black T-shirt,
and you repeat the same sentence a third time. If each time just a
"reference" was recorded, the black T-shirt is all you'll get, because
THAT is "the T-shirt Alex is wearing" at the end of the show when your
order is fulfilled. If you wanted a green T-shirt, AND a blue one,
AND a black one too, you should each time have said "I'll buy A COPY OF
the T-shirt Alex is wearing RIGHT NOW" -- explicitly taking "a snapshot
of the situation" before the referent of the reference "T-shirt Alex
is wearing" mutates.

Similarly:

import copy
result = []
for x in myperm(X): result.append(copy.copy(x))

The call to function copy of module copy is the most explicit and
clearest way to ask for A COPY OF whatever object x references right
now, rather than a reference to the object itself. There are other
ways, marginally faster, more cryptic, and depending on specific
knowledge about the type of the object x is referring to, e.g.:

result = []
for x in myperm(X): result.append(x[:])

(the "all-list slice" idiom), or:

result = []
for x in myperm(X): result.append(list(x))

(a bit more readable, using the 'list' constructor).

More concise ways to express any of these are given by list-comprehension
syntax, but the semantics is the same as for the loops, i.e.:

[ x for x in myperm(X) ]

will leave you with a list of 6 identical items, exactly as list(myperm(X))
did, because it takes no "snapshot", asks for no COPY; while any of:

[ copy.copy(x) for x in myperm(X) ]
[ x[:] for x in myperm(X) ]
[ list(x) for x in myperm(X) ]

will take 6 snapshots, one per item, and therefore give you the result
you expect.

Of course, SOME (but NOT all!) of the performance advantage obtained
by permuting in-place rather than building new lists each time goes
away when you need to take snapshots of each permutation. NOT all,
because you're only going to copy (snapshot) the permutation of the
overall list: the permutations recursively used within the list to
compute the overall permutation are still going to benefit.


Alex


Anton Vredegoor

Oct 31, 2002, 4:27:53 PM
On Thu, 31 Oct 2002 19:45:12 GMT, Alex Martelli <al...@aleax.it>
wrote:

>Anton Vredegoor wrote:


> ...
>> By the way, there are ways af generating permutations by index number
>> that do not have to go through the list of permutations in order to
>> return a specific permutation. This can be fast too, and it's more
>
>One simple (not necessarily fastest) way:
>
>def perm_given_index(alist, apermindex):
> for i in range(len(alist)-1):
> apermindex, j = divmod(apermindex, len(alist)-i)
> alist[i], alist[i+j] = alist[i+j], alist[i]

Yes, very nice. Maybe even better than using generators, although
generators seem to be faster for small ranges.

>> practical to do it this way if the number of permutations is too big
>> to generate them all. Randomly choosing a bridge dealing is an example
>> of such a situation, although then a permutation with "multiple
>> occurences" is neccessary. There's also a solution for this specific
>
>I'm not sure what a permutation with "multiple occurrences" would be,
>nor, in particular, of why such a thing would be necessary for
>"randomly choosing a bridge deal" (a subject dear and near to my
>heart). If you have a hypothetical pseudo-random number generator
>uniform in the range 1 to 52 factorial, perm_given_index would let
>you rearrange a bridge deck (passed in as argument alist) according
>to the random number (passed in as argument apermindex).

To get my mind off some troubling issues concerning unemployment I
was programming permutations again today. I have a script at

http://home.hccnet.nl/a.vredegoor/multiperm

but it's way too complicated and I am always trying to reduce it to
more manageable proportions.

>
>Of course, there are fewer significantly-different bridge deals than
>there are permutations of a 52-cards deck -- a bridge player does not
>care, when the SET of spades he or she receives among his or her 13
>cards for the deal is e.g. Ace and Two, whether the Ace came before
>or after the Two, or what cards in other suits, if any, came in
>between. The number of significantly-different bridge deals is thus
>C(52,13) times C(39,13) times C(26,13), 53644737765488792839237440000
>vs 80658175170943878571660636856403766975289505440883277824000000000000
>which is the number of permutations of the deck (52 factorial).

No, not 52! :-) but something less, as in the beginning of the
paragraph.

>But the _handiest_ way to deal a random deal, assuming for the sake
>of argument that you have a uniform real pseudorandom number generator
>u() [therefore you can just as easily generate U(N), uniform integer
>pseudorandom generators for any integer N] is still to deal the cards
>one by one just about as above:
>
>def random_deal(alist, U):
> for i in range(len(alist)-1):
> j = U(len(alist)-i)
> alist[i], alist[i+j] = alist[i+j], alist[i]

I have a hardware TRNG that is perfectly able to generate long
integers in the desired range, so it could be used to deal the cards at
once. This would be satisfying for even the most subtle bridge players,
I think. Since it's integrated on a not-so-expensive motherboard, it
could be interesting for bridge players to buy such a board and deal this way.

But it's not portable -- I mean the hardware; the Python driver for the
hardware could be portable in principle (Windows only for now):

http://home.hccnet.nl/a.vredegoor/truerand

>Yes, there are faster ways (mostly because generating 51 pseudorandom
>numbers tends to be costly), but it's hard to beat this simplicity
>(always an important issue). Generally the processing you want to do
>after dealing the deal swamps the effort of generating the deal itself,
>anyway (when the post-processing you want to do is _trivially_ simple,
>it may be preferable to compute whatever statistics you're after on
>the universe of ALL possible deals, than to generate a random sample
>and do some counting on the sample -- but that's another issue).

The script I mentioned above can also *rank* a given permutation so it
is possible this way to find out which (indexed number) card deal was
used for a specific game.
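
For reference, a sketch of the inverse of Alex's perm_given_index (not
Anton's script; it recovers the index from a permuted copy of a known
base list, assuming distinct elements):

def perm_rank(base, permuted):
    work = base[:]
    index, factor = 0, 1
    for i in range(len(work)-1):
        j = work.index(permuted[i], i) - i   # which swap was taken at step i
        index += j * factor
        factor *= len(work) - i
        work[i], work[i+j] = work[i+j], work[i]
    return index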

>> problem using set partitions but lets not bore the readers.
>>
>> However it's almost certainly not so concise and elegant as using
>> generators to permute small ranges.
>
>Not sure what's "it" in this context, but if "it" is generating
>a permutation from a permutation-index, it seems concise and elegant
>enough to me in the version above...
>

I am not sure either, but I am including a script below that permutes the
"hands" instead of the cards. I also have somewhere (not ready just now
to wrap my head around this again) a script that generates a partition
of the set of cards into four subsets and then does a permutation of the
hands to deal the cards. After all, there are 24 ways of assigning the
hands after partitioning.

It's just for fun, but see below for something not as elegant as I
would like it to be ... Not tested thoroughly; see the (longer) script
at my homepage for a more stable version.

Regards,
Anton.

#multiperm.py by Anton Vredegoor an...@vredegoor.doge.nl 31-10-2002
"""
Mperm returns all different permutations of a list
by index. The list can contain the same element more
than once.
"""

from operator import mul, add

def fac(n):
    #factorial of n
    return reduce(mul,range(2,n+1),1L)

def freqs(L):
    #dictionary of frequencies of the elements of list L
    res = {}
    for x in L:
        res[x] = res.get(x,0)+1
    return res

def nperm(f):
    #tuple of (number of elements, number of possible permutations)
    #for this dictionary of frequencies
    fv = f.values()
    n,np = 1,1
    if fv:
        n = reduce(add,fv)
        up = fac(n)
        down = reduce(mul,[fac(x) for x in fv])
        np = up/down
    return n,np

def findfirst(index,L):
    #tuple of
    #(number of permutations used, first element of the permutation)
    f = freqs(L)
    n,np = nperm(f)
    cum = 0
    k = f.keys()
    k.sort() # for sorted output
    for e in k:
        contr = (np*f[e])/n
        cum += contr
        if cum > index:
            return cum-contr,e

def mperm(index,L):
    #indexed *different* permutations of the list L
    T = L[:]
    res = []
    remain = index
    while T:
        done, e = findfirst(remain,T)
        remain -= done
        res.append(e)
        T.remove(e)
    return res

def test():
    #tests mperm
    L = range(3)
    #n == len(L), np == number of possible different permutations
    n,np = nperm(freqs(L))
    for i in range(np):
        print '%2i %s' % (i,mperm(i,L))
    print
    L = list("aabc")
    #n == len(L), np == number of possible different permutations
    n,np = nperm(freqs(L))
    for i in range(np):
        print '%2i %s' % (i,"".join(mperm(i,L)))
    print
    #deal some bridge cards
    L = list("0123" *13)
    #generate some (not) very random number ...
    r = 2L**95
    hands = [[],[],[],[]]
    i = 0
    for c in mperm(r,L):
        #map this permutation to the four hands
        hands[int(c)].append(i)
        i += 1
    for hand in hands:
        print hand
    print

if __name__=='__main__':
    test()
