
concurrent file reading/writing using python


Abhishek Pratap

Mar 26, 2012, 6:56:29 PM
to pytho...@python.org
Hi Guys

I am fwding this question from the python tutor list in the hope of
reaching more people experienced in concurrent disk access in python.

I am trying to see if there are ways in which I can read a big file
concurrently on a multi-core server, process the data, and write the
output to a single file as the data is processed.

For example, if I have a 50 GB file, I would like to read it in parallel
with 10 processes/threads, each working on a 5 GB chunk, perform the
same data-parallel computation on each chunk, and collate the output
into a single file.


I would appreciate your feedback. I did find some threads about this on
Stack Overflow, but it was not clear to me what would be a good way to
go about implementing this.

Thanks!
-Abhi

---------- Forwarded message ----------
From: Steven D'Aprano <st...@pearwood.info>
Date: Mon, Mar 26, 2012 at 3:21 PM
Subject: Re: [Tutor] concurrent file reading using python
To: tu...@python.org


Abhishek Pratap wrote:
>
> Hi Guys
>
>
> I want to utilize the power of cores on my server and read big files
> (> 50Gb) simultaneously by seeking to N locations.


Yes, you have many cores on the server. But how many hard drives is
each file on? If all the files are on one disk, then you will *kill*
performance dead by forcing the drive to seek backwards and forwards:

seek to 12345678
read a block
seek to 9947500
read a block
seek to 5891124
read a block
seek back to 12345678 + 1 block
read another block
seek back to 9947500 + 1 block
read another block
...

The drive will spend most of its time seeking instead of reading.

Even if you have multiple hard drives in a RAID array, performance
will depend strongly on the details of how it is configured (RAID1,
RAID0, software RAID, hardware RAID, etc.) and how smart the
controller is.

Chances are, though, that the controller won't be smart enough.
Particularly if you have hardware RAID, which in my experience tends
to be more expensive and less useful than software RAID (at least for
Linux).

And what are you planning on doing with the files once you have read
them? I don't know how much memory your server has got, but I'd be
very surprised if you can fit the entire > 50 GB file in RAM at once.
So you're going to read the files and merge the output... by writing
them to the disk. Now you have the drive trying to read *and* write
simultaneously.

TL; DR:

Tasks which are limited by disk IO are not made faster by using a
faster CPU, since the bottleneck is disk access, not CPU speed.

Back in the Ancient Days when tape was the only storage medium, there
were a lot of programs optimised for slow IO. Unfortunately this is
pretty much a lost art -- although disk access is thousands or tens of
thousands of times slower than memory access, it is so much faster
than tape that people don't seem to care much about optimising disk
access.



> What I want to know is the best way to read a file concurrently. I
> have read about file-handle.seek(), os.lseek() but am not sure if that's
> the way to go. Any use cases would be of help.


Optimising concurrent disk access is a specialist field. You may be
better off asking for help on the main Python list, comp.lang.python
or pytho...@python.org, and hope somebody has some experience with
this. But chances are very high that you will need to search the web
for forums dedicated to concurrent disk access, and translate from
whatever language(s) they are using to Python.
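
For what it's worth, the mechanics of reading from a given offset are
simple enough. A minimal (untested) sketch that reads one chunk of a
file starting at a byte offset:

    def read_chunk(path, offset, size):
        # Open in binary mode so offsets are plain byte positions.
        with open(path, 'rb') as f:
            f.seek(offset)       # jump to the start of this chunk
            return f.read(size)  # read up to `size` bytes

The hard part is not the seek() call; it is getting the disk to keep
up when several of these run at once.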


--
Steven

_______________________________________________
Tutor maillist  -  Tu...@python.org
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor
Message has been deleted

Steve Howell

Mar 26, 2012, 9:44:30 PM
to
On Mar 26, 3:56 pm, Abhishek Pratap <abhishek....@gmail.com> wrote:
> Hi Guys
>
> I am fwding this question from the python tutor list in the hope of
> reaching more people experienced in concurrent disk access in python.
>
> I am trying to see if there are ways in which I can read a big file
> concurrently on a multi-core server, process the data, and write the
> output to a single file as the data is processed.
>
> For example, if I have a 50 GB file, I would like to read it in parallel
> with 10 processes/threads, each working on a 5 GB chunk, perform the
> same data-parallel computation on each chunk, and collate the output
> into a single file.
>
> I would appreciate your feedback. I did find some threads about this on
> Stack Overflow, but it was not clear to me what would be a good way to
> go about implementing this.
>

Have you written a single-core solution to your problem? If so, can
you post the code here?

If CPU isn't your primary bottleneck, then you need to be careful not
to overly complicate your solution by getting multiple cores
involved. All the coordination might make your program slower and
more buggy.

If CPU is the primary bottleneck, then you might want to consider an
approach where you have a single thread that reads records from the
file, 10 at a time, dispatches the calculations to different threads,
and then writes the results back to disk.

My approach would be something like this:

1) Take a small sample of your dataset so that you can process it
within 10 seconds or so using a simple, single-core program.
2) Figure out whether you're CPU bound. A simple way to do this is
to comment out the actual computation or replace it with a trivial
stub. If you're CPU bound, the program will run much faster. If
you're IO-bound, the program won't run much faster (since all the work
is actually just reading from disk).
3) Figure out how to read 10 records at a time and farm the records
out to worker threads (see the sketch further down). Hopefully, your
program will take significantly less time. At this point, don't obsess
over collating data. It might not be 10 times as fast, but it should
be enough faster to be worth your while.
4) If the threaded approach shows promise, make sure that you can
still generate correct output with that approach (in other words,
figure out synchronization and collating).

At the end of that experiment, you should have a better feel on where
to go next.
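
To make step 3 concrete, here is roughly the shape I have in mind
(an untested sketch; compute() is just a stand-in for whatever your
real per-record work is, and I'd reach for multiprocessing rather than
threads so the work actually lands on separate cores):

    from multiprocessing import Pool

    def compute(record):
        # stand-in for the real CPU-bound work on one record
        return len(record)

    def main():
        pool = Pool(processes=10)
        infile = open('input.txt')
        outfile = open('output.txt', 'w')
        try:
            # One process reads sequentially; the workers compute.
            # imap() preserves input order, so collating stays simple.
            for result in pool.imap(compute, infile, chunksize=10):
                outfile.write('%d\n' % result)
        finally:
            pool.close()
            pool.join()
            infile.close()
            outfile.close()

    if __name__ == '__main__':
        main()

The single reader keeps the disk access sequential, which is usually
what you want, and the pool only pays off if compute() is genuinely
CPU-heavy.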

What is the nature of your computation? Maybe it would be easier to
tune the algorithm than to figure out the multi-core optimization.




Abhishek Pratap

Mar 27, 2012, 2:08:08 AM
to Steve Howell, pytho...@python.org
Thanks for the advice, Dennis.

@Steve: I haven't actually written the code. I was thinking more on
the generic side and wanted to check whether what I had in mind made
sense, and I now realize it can depend on the I/O. For starters I was
just thinking about counting lines in a file without doing any
computation, so this would be strictly I/O bound.

I guess what I needed to ask was whether we can improve on the existing
disk I/O performance by reading different portions of the file using
threads or processes. I am kind of pointing towards a MapReduce-style
task on a file in a shared file system such as GPFS (from IBM). I
realize this may be better suited to HDFS, but I wanted to know whether
people have implemented something similar on a normal Linux-based NFS
setup.
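
Roughly, the shape of what I'm imagining for the line-count experiment
is below (untested, and I know the chunk boundaries would need more
care for anything beyond counting newlines):

    import os
    from multiprocessing import Pool

    def count_newlines(args):
        path, offset, size = args
        count = 0
        with open(path, 'rb') as f:
            f.seek(offset)                     # each worker reads its own slice
            remaining = size
            while remaining > 0:
                block = f.read(min(remaining, 1024 * 1024))
                if not block:                  # hit end of file
                    break
                count += block.count(b'\n')
                remaining -= len(block)
        return count

    def parallel_line_count(path, nworkers=10):
        total = os.path.getsize(path)
        size = total // nworkers + 1           # bytes per worker
        jobs = [(path, i * size, size) for i in range(nworkers)]
        pool = Pool(nworkers)
        try:
            return sum(pool.map(count_newlines, jobs))
        finally:
            pool.close()
            pool.join()

Whether the concurrent seeks actually beat a single sequential reader
on GPFS or NFS is exactly what I'd like to find out.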

-Abhi
> --
> http://mail.python.org/mailman/listinfo/python-list
Message has been deleted