
Slowdown while creating a big list and iterating over it


marc magrans de abril

Jan 30, 2010, 3:36:59 PM
Dear colleagues,

I was writing a small program to classify log files for a cluster of
PCs; I just wanted to simplify a quite repetitive task in order to
find errors and the like.

My first naive implementation was something like:
patterns = []
while logs:
    pattern = logs[0]
    new_logs = [l for l in logs if dist(pattern, l) > THRESHOLD]
    entry = (len(logs) - len(new_logs), pattern)
    patterns.append(entry)
    logs = new_logs

where dist(...) is the Levenshtein distance (i.e. edit distance) and
logs holds about 1.5M log lines (a 700 MB file). I thought that Python
would be an easy choice, although not a really fast one.
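The dist function isn't shown in the thread; a minimal pure-Python
sketch of the usual dynamic-programming Levenshtein distance (assuming
plain strings and an unnormalized integer result) would be something
like:

def dist(a, b):
    # Classic dynamic-programming edit distance, O(len(a) * len(b)).
    # The cost is quadratic in the string lengths, which is why a
    # longer pattern makes every single comparison slower.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]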

I was not surprised when the first iteration of the while loop took
~10 min. I thought "not bad, let's see how long it takes". However,
it seemed that the second iteration never finished.

I got a big surprise when I replaced the list comprehension with an
explicit loop and a print:
new_logs = []
for count, l in enumerate(logs):
    print count
    if dist(pattern, l) > THRESHOLD:
        new_logs.append(l)

The displayed counter was running ~10 times slower on the second
iteration of the while loop.

I am a little lost. Does anyone know the reason for this behavior? How
should I write a program that deals with large data sets in Python?

Thanks a lot!
marc magrans de abril

Alf P. Steinbach

Jan 30, 2010, 5:08:56 PM
* marc magrans de abril:

It's on line 42 of your program. :-) That is, it's in the dist function.
Evidently it doesn't like a more complex 'pattern'.


> How should I write a program that deals with large data sets in Python?

As in any other language. Try to avoid repeating the same computations. Try to
make the data fit the computational task.
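One concrete instance of that advice (a sketch, not from the thread,
and assuming the unnormalized integer distance discussed above): the
edit distance between two strings is at least the difference of their
lengths, so comparisons whose outcome is already decided by the
lengths alone can be skipped entirely:

# Keep l without computing dist() when the length gap alone already
# guarantees dist(pattern, l) > THRESHOLD, because the edit distance
# is at least abs(len(pattern) - len(l)).
new_logs = [l for l in logs
            if abs(len(pattern) - len(l)) > THRESHOLD
            or dist(pattern, l) > THRESHOLD]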


Cheers & hth.,

- Alf

MRAB

Jan 30, 2010, 6:11:54 PM
Find out which pattern is being used on the second iteration and then
try it on the first iteration. Is it just as slow? If so, then it's down
to the length/complexity of that pattern - a much longer/more complex
pattern might take much longer when computing the distance.
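One rough way to check (a sketch; pattern1, pattern2 and sample are
hypothetical names for the first-iteration pattern, the
second-iteration pattern and a slice of the logs):

import time

def time_dist(pattern, sample_logs):
    # Time dist(pattern, l) over the same fixed sample of logs,
    # so the two patterns can be compared directly.
    start = time.time()
    for l in sample_logs:
        dist(pattern, l)
    return time.time() - start

sample = logs[:1000]
print time_dist(pattern1, sample)   # first-iteration pattern
print time_dist(pattern2, sample)   # second-iteration pattern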

>> How should I write a program that deals with large data sets in Python?
>
> As in any other language. Try to avoid repeating the same computations.
> Try to make the data fit the computational task.
>

True. Basically, you're computing the distance between every pair of
logs!


marc magrans de abril

Jan 31, 2010, 9:53:15 AM
> Find out which pattern is being used on the second iteration and
> then try it on the first iteration. Is it just as slow?

You were right: the second pattern was 1891 bytes, but the first was
just 142. :P Since the cost of the edit distance grows with the
product of the string lengths, that difference alone accounts for the
~10x slowdown.

I will need to put more thought than I expected into the "small script".

marc magrans de abril

Jan 31, 2010, 3:08:19 PM
Hi!

...I have found a good enough solution, although it only works if the
number of patterns (clusters) is not very big:
def classify(f):
    THRESHOLD = 0.1

    patterns = {}
    for l in f:
        found = False
        for p, c in patterns.items():
            if dist(l, p) < THRESHOLD:
                found = True
                patterns[p] = c + 1

        if not found:
            patterns[l] = 1

    return patterns

This algorithm is O(n*np*m^2), where n is the number of logs, np the
number of patterns, and m the log length (i.e. m^2 is the cost of one
distance computation). That's way better than O(n^2*m^2), and I can
run it for some hours to get the results back.

I wonder if there is a single-threaded/single-process clustering
algorithm that runs in O(n)?
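For reference, a near-linear alternative exists when the variable
parts of the lines are mostly numbers: normalize each line to a
template and count templates with a dict. This is a sketch of a
different technique than the edit-distance clustering above, and
classify_fast is a hypothetical name:

import re

def classify_fast(f):
    # Group logs by a normalized template instead of by edit distance.
    # Each line is processed once and dict lookup is O(1) on average,
    # so the whole pass is roughly O(n * m).
    patterns = {}
    for l in f:
        template = re.sub(r'\d+', '#', l)   # mask the numeric parts
        patterns[template] = patterns.get(template, 0) + 1
    return patterns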

Cheers,
marc

MRAB

Jan 31, 2010, 5:42:52 PM
Your original code used the first entry in the remaining logs for each
pattern, but your new code stores the patterns in a dict, which is
unordered, so you might get different results.

But that doesn't matter, because your new code increments the count when
it has found a match, and then continues looking, so it might match and
increment more than once.

Finally, your original code treated it as a match if distance <=
threshold but your new code treats it as a match if distance <
threshold.

patterns = []
for x in logs:
    for index, (pat, count) in enumerate(patterns):
        if dist(pat, x) <= THRESHOLD:
            patterns[index] = pat, count + 1
            break
    else:
        # Didn't break out of the loop, therefore no match.
        patterns.append((x, 1))
