I am looking for a method to "shuffle" the lines of a large file.
I have a corpus of sorted and "uniqed" English sentences that has been
produced with (1):
(1) sort corpus | uniq > corpus.uniq
corpus.uniq is 80 GB in size. The fact that every sentence appears only
once in corpus.uniq plays an important role in the processes
I use my corpus for. Yet the alphabetical order is an
unwanted side effect of (1): very often, I do not want (or rather, I
do not have the computational capacity) to apply a program to all of
corpus.uniq. And any contiguous series of lines from corpus.uniq is
obviously a very lopsided sample of English sentences.
So, it would be very useful to do one of the following things:
- produce corpus.uniq in such a way that it is not sorted in any way
- shuffle corpus.uniq > corpus.uniq.shuffled
Unfortunately, none of the machines that I may use has 80G RAM.
So, using a dictionary will not help.
Any ideas?
Joerg Schuster
There was a thread a while ago about choosing random lines from a file without reading the whole
file into memory. Would that help? Instead of shuffling the file, shuffle the users. I can't find
the thread though...
Kent
### START ###
import random
f = open('corpus.uniq')
buffer = []
max_buffered_lines = 1000
for line in f:
    if len(buffer) < max_buffered_lines:
        buffer.append(line)
    else:
        buffer.shuffle()
        for line in buffer:
            print line
random.shuffle(buffer)
for line in buffer:
    print line
f.close()
### END ###
> Hello,
> Any ideas?
> Joerg Schuster
The following program should do the trick (filenames are hardcoded, look at
top of file):
### shuffle.py
import random
import shelve
# Open external files needed for data storage.
lines = open("test.dat","r")
lineindex = shelve.open("test.idx")
newlines = open("test.new.dat","w")
# Create an index of all lines of the file in an external flat file DB.
# This means that nothing actually remains in memory, but in an extremely
# efficient (g)dbm flatfile DB.
def makeIdx():
    i = 0L
    lastpos = 0L
    curpos = None
    while lines.readline():
        # This is after the (\r)\n, which will be stripped() and rewritten
        # by writeNewLines().
        curpos = long(lines.tell())
        lineindex[hex(i)[2:-1]] = "%s:%s" % (hex(lastpos)[2:-1],
                                             hex(curpos - lastpos)[2:-1])
        lastpos = curpos
        i += 1
    return i
maxidx = makeIdx()
# To shuffle the file, just shuffle the index. Problem being: there is no
# random number generator which even remotely has the possibility of yielding
# all possible permutations. Thus, for simplicity: just exchange every element
# in order 1..end with a random element from the rest of the file. This is
# certainly no perfect shuffle, and in case the shuffling is too bad, just
# rerun shuffleIdx() a couple of times.
def shuffleIdx():
    oldi = 0L
    # Use a while loop, as xrange doesn't work with longs.
    while oldi < maxidx:
        oi = hex(oldi)[2:-1]
        while True:
            ni = hex(long(random.randrange(maxidx)))[2:-1]
            if ni <> oi:
                break
        lineindex[oi], lineindex[ni] = lineindex[ni], lineindex[oi]
        oldi += 1
shuffleIdx()
# Write out the shuffled file. Do this by just walking the index 0..end.
def writeNewLines():
    i = 0L
    # Use a while loop, as xrange doesn't work with longs.
    while i < maxidx:
        # Extract line offset and line length from the index file.
        lidx, llen = [long(x, 16) for x in lineindex[hex(i)[2:-1]].split(":")]
        lines.seek(lidx)
        line = lines.read(llen).strip()
        newlines.write(line + "\n")
        i += 1
writeNewLines()
### End shuffle.py
I don't know how fast this program will run, but at least, it does as
told... ;)
--
--- Heiko.
>Hello,
>Any ideas?
Instead of shuffling the file itself maybe you could index it (with dbm for
instance) and select random lines by using random indexes whenever you need a
sample.
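A rough sketch of that dbm idea (the filenames and the sample size of 10
are invented; dbm keys and values must be strings, hence the str()/int()
juggling):

import anydbm
import random

# Build the index once: key is the line number, value is the byte offset.
db = anydbm.open('corpus.offsets', 'n')
pos = 0
n = 0
for line in open('corpus.uniq', 'rb'):
    db[str(n)] = str(pos)
    pos += len(line)
    n += 1
db['count'] = str(n)
db.close()

# Later: draw random lines without reading the rest of the file.
db = anydbm.open('corpus.offsets', 'r')
corpus = open('corpus.uniq', 'rb')
for i in xrange(10):
    choice = random.randrange(int(db['count']))
    corpus.seek(int(db[str(choice)]))
    print corpus.readline(),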
Eddie
>     else:
>         buffer.shuffle()
>         for line in buffer:
>             print line
should be
    else:
        random.shuffle(buffer)
        for line in buffer:
            print line
of course
> f.close()
> ### END ###
> Hello,
> Any ideas?
> Joerg Schuster
So, just replace:
import shelve
by
import anydbm
and
lineindex = shelve.open("test.idx")
by
lineindex = anydbm.open("test.idx","c")
Keep the rest as is.
--
--- Heiko.
> I am looking for a method to "shuffle" the lines of a large file.
Off the top of my head: decorate, randomize, undecorate.
Prepend a suitable large random number or hash to each
line and then use sort. You could prepend new line numbers
instead but even storing the randomised indexes might use
too much memory.
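A minimal sketch of the decorate/sort/undecorate idea (the filenames are
made up; the external sort(1) still does the heavy lifting, since it copes
with files larger than RAM):

import random

# Decorate: prepend a random numeric key to every line.
out = open('corpus.decorated', 'w')
for line in open('corpus.uniq'):
    out.write('%010d\t%s' % (random.randrange(10 ** 10), line))
out.close()

# Randomize: sort on the key outside Python, e.g.
#   sort -n corpus.decorated > corpus.sorted

# Undecorate: strip the key again.
out = open('corpus.uniq.shuffled', 'w')
for line in open('corpus.sorted'):
    out.write(line.split('\t', 1)[1])
out.close()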
If you only need 1000 sentences, use only one sentence from each block,
if you need 1M, then use them all.
[I hope I understood your problem]
-- george
Why don't you index the file? I would store the byte-offsets of the
beginning of each line into an index file. Then you can generate a
random number from 1 to Whatever, go get that index from the index file,
then open your text file, seek to that position in the file, read one
line, and close the file. Using this process you can then extract a
somewhat random set of lines from your 'corpus' text file.
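A rough sketch of that indexing idea, with invented filenames and a
fixed-size binary index file so that entry n can itself be found by seeking:

import random
import struct

RECORD = struct.calcsize('q')            # 8 bytes per stored offset

# Build the index once: one packed byte offset per line of the corpus.
idx = open('corpus.idx', 'wb')
pos = 0
for line in open('corpus.uniq', 'rb'):
    idx.write(struct.pack('q', pos))
    pos += len(line)
idx.close()

# Pick one random line: random index entry -> seek -> read one line.
idx = open('corpus.idx', 'rb')
idx.seek(0, 2)                           # jump to the end to count entries
entries = idx.tell() // RECORD
n = random.randrange(entries)
idx.seek(n * RECORD)
offset = struct.unpack('q', idx.read(RECORD))[0]
corpus = open('corpus.uniq', 'rb')
corpus.seek(offset)
print corpus.readline(),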
You probably should consider making a database of the file, keep the raw
text file for sure, but create a converted copy in bsddb or pytables format.
Warren
>Hello,
>
>I am looking for a method to "shuffle" the lines of a large file.
[snip]
>So, it would be very useful to do one of the following things:
>
>- produce corpus.uniq in a such a way that it is not sorted in any way
>- shuffle corpus.uniq > corpus.uniq.shuffled
>
>Unfortunately, none of the machines that I may use has 80G RAM.
>So, using a dictionary will not help.
To implement your 'shuffle' command in Python, you can use the following
algorithm, with a couple of assumptions:
ASSUMPTION
----------
The total line count in your big file is less than sys.maxint.
The algorithm as given works for systems where eol is a single '\n'.
ALGORITHM
---------
Create a temporary filelist.FileList fl (see attached file) whose items are
struct.calcsize("q") bytes each (struct.pack and the 'q' format string are your
friends), to hold the offset of each line start in big_file. fl[0] would be 0,
fl[1] would be the length of the first line including its '\n', and so on.
Read the big_file once, appending each offset to fl (if you need help with
this, let me know).
random.shuffle(fl) # this is tested with the filelist.FileList as given
for offset_as_str in fl:
    offset = struct.unpack("q", offset_as_str)[0]
    big_file.seek(offset)
    sys.stdout.write(big_file.readline())
That's it. Redirect output to your preferred file. No promises for speed
though :)
--
TZOTZIOY, I speak England very best.
"Be strict when sending and tolerant when receiving." (from RFC1958)
I really should keep that in mind when talking with people, actually...
Jörg
+1 QOTW
STeVe
> I am looking for a method to "shuffle" the lines of a large file.
If speed and space are not a concern, I would be tempted to presume that
this can be organised without too much difficulty. However, achieving
speed on a big file, while keeping equiprobability of all possible
permutations, might be considerably more difficult.
I vaguely remember having read something along these lines (not
shuffling as you mean it, but still, reorganising a lengthy file) in
Knuth's "Art of Computer Programming", in one of the exercises within
the chapter on Sorting methods (volume 3). That's long ago, but if I
remember well, Knuth did not consider this as an easy exercise.
--
François Pinard http://pinard.progiciels-bpi.ca
> Replying to oneself is bad, [...]
Not necessarily. :-)
Very good solution!
Sort is truly excellent at very large datasets. If you give it a file
bigger than memory then it divides it up into temporary files of
memory size, sorts each one, then merges all the temporary files back
together.
You tune the memory sort uses for in-memory sorts with --buffer-size.
It's pretty good at auto-tuning, though.
You may want to set --temporary-directory also to save filling up your
/tmp.
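If you drive the sort from Python, the same flags can be passed through
subprocess (the paths, the buffer size, and the filenames below are just
guesses for illustration):

import subprocess

# Sort with a 2 GB in-memory buffer and temporary files on a roomier disk.
subprocess.call(['sort', '--buffer-size=2G',
                 '--temporary-directory=/scratch/tmp',
                 '-o', 'corpus.sorted', 'corpus.decorated'])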
In a previous job I did a lot of stuff with usenet news and was
forever blowing up the server with scripts which used too much memory.
sort was always the solution!
--
Nick Craig-Wood <ni...@craig-wood.com> -- http://www.craig-wood.com/nick
If this is what's wanted, then perhaps some variation on this cookbook
recipe might do the trick:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/59865
--
Cheers,
Simon B,
si...@brunningonline.net,
http://www.brunningonline.net/simon/blog/
I couldn't resist. ;-)
import random
def randomLines(filename, lines=1):
    selected_lines = list(None for line_no in xrange(lines))

    for line_index, line in enumerate(open(filename)):
        for selected_line_index in xrange(lines):
            if random.uniform(0, line_index) < 1:
                selected_lines[selected_line_index] = line

    return selected_lines
This has the advantage that every line had the same chance of being
picked regardless of its length. There is the chance that it'll pick
the same line more than once, though.
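For instance, a call against the original poster's file might look like this
(the filename and the sample size of 5 are just placeholders):

# Hypothetical call: five random sentences from the corpus; the returned
# lines keep their trailing newlines, hence the trailing comma on print.
for line in randomLines('corpus.uniq', 5):
    print line,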
Problem being: if the file the OP is talking about really is 80GB in size, and
you consider a sentence to have 80 bytes on average (it's likely to have less
than that), that makes 10^9 sentences in the file. Now, multiply that by
the memory overhead of storing a list of 10^9 None(s), and reconsider
whether that algorithm really works for the posted conditions. I don't think
that any machine I have access to even has nearly enough memory just to store
this list... ;)
--
--- Heiko.
Ah, but that's the clever bit; it *doesn't* store the whole list -
only the selected lines.
Just a short note on this line. If lines is really large, it's much faster to use
from itertools import repeat
selected_lines = list(repeat(None, lines))
which only repeats None without having to create huge numbers of integer
objects as xrange does.
BTW, list comprehension is usually faster than list(iterator), so
[None for no in xrange(lines)]
ends up somewhere between the two.
Proof (in 2.4):
# python -m timeit 'from itertools import repeat
a = [ None for i in range(10000) ]'
100 loops, best of 3: 3.68 msec per loop
# python -m timeit 'from itertools import repeat
a = [ None for i in xrange(10000) ]'
100 loops, best of 3: 3.49 msec per loop
# python -m timeit 'from itertools import repeat
a = list(repeat(None, 10000))'
1000 loops, best of 3: 308 usec per loop
There. Factor 10. That's what I call optimization...
Stefan
The simplest approach is even faster:
C:\>python -m timeit -s "from itertools import repeat" "[None for i in
range(10000)]"
100 loops, best of 3: 2.53 msec per loop
C:\>python -m timeit -s "from itertools import repeat" "[None for i in
xrange(10000)]"
100 loops, best of 3: 2.45 msec per loop
C:\>python -m timeit -s "from itertools import repeat"
"list(repeat(None, 10000))"
1000 loops, best of 3: 245 usec per loop
C:\>python -m timeit -s "from itertools import repeat" "[None] * 10000"
10000 loops, best of 3: 160 usec per loop
;-)
BTW, I've posted a more general version here:
http://www.brunningonline.net/simon/blog/archives/001784.html
But that means that it'll only read several lines from the file, never do a
shuffle of the whole file content... When you'd want to shuffle the file
content, you'd have to set lines=1 and throw away repeating lines in
subsequent runs, or you'd have to set lines higher, and deal with the
resulting lines too in some way (throw away repeating ones... :-). Doesn't
matter how, you'd have to store which lines you've already read
(selected_lines). And in any case you'd need a line cache of 10^9 entries for
this amount of data...
That's just what I wanted to say...
--
--- Heiko.
Err, thing is, it *does* pick a random selection from the whole file,
without holding the whole file in memory. (It does hold all the
selected items in memory - I don't see any way to avoid that.) Why not
try it and see?
> When you'd want to shuffle the file
> content, you'd have to set lines=1 and throw away repeating lines in
> subsequent runs, or you'd have to set lines higher, and deal with the
> resulting lines too in some way (throw away repeating ones... :-).
Eliminating duplicates is left as an exercise for the reader. ;-)
> Doesn't
> matter how, you'd have to store which lines you've already read
> (selected_lines). And in any case you'd need a line cache of 10^9 entries for
> this amount of data...
Nope, you don't.
> I couldn't resist. ;-)
Me neither...
> import random
>
> def randomLines(filename, lines=1):
>     selected_lines = list(None for line_no in xrange(lines))
>
>     for line_index, line in enumerate(open(filename)):
>         for selected_line_index in xrange(lines):
>             if random.uniform(0, line_index) < 1:
>                 selected_lines[selected_line_index] = line
>
>     return selected_lines
>
> This has the advantage that every line had the same chance of being
> picked regardless of its length. There is the chance that it'll pick
> the same line more than once, though.
The idea is simple: have a chain of "pickers" that pass all items that are
not picked on to the next picker. The current implementation, however, may
hit the recursion limit for large samples.
import random

class PickOne(object):
    """Pick a random item from an iterable and store it in the 'picked'
    attribute.

    As an iterator, yield all items that are not picked.
    Helper for the pick() function.
    """
    def __init__(self, iterable):
        self.iterable = iterable

    def __iter__(self):
        it = enumerate(self.iterable)
        index, self.picked = it.next()
        for index, item in it:
            if random.uniform(0, index) < 1:
                yield self.picked
                self.picked = item
            else:
                yield item

def pick(iterable, N, accept_less=False):
    """Pick N random items from an iterable.
    """
    # Set up the PickOne(PickOne( ... PickOne(iterable) ... )) chain
    keepers = []
    for i in xrange(N):
        iterable = PickOne(iterable)
        keepers.append(iterable)
    # We are interested in the side effects, i. e. the PickOne instances
    # picking random items from the initial iterable
    for item in iterable:
        pass
    # The following could be
    # return [k.picked for k in keepers]
    # but we want to handle the case len(iterable) < N, too
    result = []
    for keeper in keepers:
        try:
            picked = keeper.picked
        except AttributeError:
            if accept_less:
                return result
            raise ValueError("sequence contains less items than shall be "
                             "picked")
        result.append(picked)
    return result

def demo():
    """Pick random lines from a set of files.
    """
    import optparse
    from itertools import chain, izip, repeat, count
    parser = optparse.OptionParser()
    parser.usage = ("%prog [nl] file1 file2 ... fileN\n\n"
                    "Choose random lines from files given on the command line.")
    parser.add_option("-n", "--count", type="int", default=10,
                      help="number of lines that shall be picked (default=10)")
    parser.add_option("-l", "--less", action="store_true",
                      help="accept less picked lines than specified, if there "
                           "are less total lines in the file(s)")
    options, args = parser.parse_args()
    index_name_line = chain(*[izip(count(), repeat(fn), file(fn))
                              for fn in args])
    try:
        picked = pick(index_name_line, options.count, options.less)
    except ValueError:
        raise SystemExit("Cannot pick more lines than found in the file(s)")
    print "".join([("%3d <%s> %s" % ifl) for ifl in picked])

if __name__ == "__main__":
    demo()
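A hypothetical one-off use of pick() on the original corpus file (the
filename and the sample size of 3 are assumptions, just for illustration):

# Stream the file once; only the three picked lines are kept in memory.
# Each line keeps its trailing newline, hence the trailing comma on print.
for line in pick(open('corpus.uniq'), 3):
    print line,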
Paul