
multiple processes, private working directories


Tim Arnold
Sep 24, 2008, 9:27:24 PM

I have a bunch of processes to run and each one needs its own working
directory. I'd also like to know when all of the processes are
finished.

(1) First thought was threads, until I saw that os.chdir was
process-global.
(2) Next thought was fork, but I don't know how to signal when each
child is finished.
(3) Current thought is to break the process from a method into an
external script and call the script in separate threads. This is the
only way I can see to give each process a separate dir (the external
process fixes that), and I can find out when each process is finished
(the thread fixes that).

Am I missing something? Is there a better way? I hate to rewrite this
method as a script since I've got a lot of object metadata that I'll
have to regenerate with each call of the script.

thanks for any suggestions,
--Tim Arnold

r0g
Sep 24, 2008, 9:52:38 PM

(1) + avoid os.chdir and maintain hard paths to all files/folders? or
(2) + sockets? or
(2) + polling your systems task list?

Cameron Simpson
Sep 24, 2008, 10:03:13 PM
to Tim Arnold, pytho...@python.org

On 24Sep2008 18:27, Tim Arnold <a_j...@bellsouth.net> wrote:
| I have a bunch of processes to run and each one needs its own working
| directory. I'd also like to know when all of the processes are
| finished.
|
| (1) First thought was threads, until I saw that os.chdir was
| process-global.

Yep. But do you really need separate working directories, as opposed
to having each thread's state include a notional working directory and
constructing file paths within it?
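
A minimal sketch of that idea (the worker function, directory names
and file name here are made up for illustration): each thread carries
its working directory as ordinary data and builds paths with
os.path.join, so nothing ever calls os.chdir.

    import os
    import threading

    def worker(workdir):
        # the "working directory" is just per-thread data, never a global chdir
        if not os.path.isdir(workdir):
            os.makedirs(workdir)
        out = open(os.path.join(workdir, 'output.txt'), 'w')
        out.write('done\n')
        out.close()

    threads = [threading.Thread(target=worker, args=('job%d' % i,))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()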

| (2) Next thought was fork, but I don't know how to signal when each
| child is finished.

Open a pipe (os.pipe()). Have a parent process to track state.

Fork each child.

In each child: close the read end of the pipe. Do stuff. When finished,
close the write end of the pipe.

In the parent, after forking all children: close the write end of the
pipe. Read from the read end. When all the children have finished
they will have closed all the write ends and you will see EOF
on the read end of the pipe.

For extra credit you can have the children write some sort of
success/failure byte to the pipe before closing. Counting and examining
these bytes in the parent can tell you about individual failures if you
care.
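
A rough sketch of that scheme in Python (the child_task function is
hypothetical and error handling is omitted):

    import os

    def child_task(task):
        pass    # hypothetical: the real per-child work goes here

    def run_children(tasks):
        read_fd, write_fd = os.pipe()
        for task in tasks:
            pid = os.fork()
            if pid == 0:                     # in the child
                os.close(read_fd)            # child keeps only the write end
                child_task(task)
                os.write(write_fd, '+')      # optional success byte
                os.close(write_fd)
                os._exit(0)
        os.close(write_fd)                   # parent keeps only the read end
        status = ''
        while True:
            data = os.read(read_fd, 1024)
            if not data:                     # EOF: every child has closed its write end
                break
            status += data
        os.close(read_fd)
        for task in tasks:
            os.waitpid(-1, 0)                # reap the children
        return status                        # one byte per child that reported success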

| (3) Current thought is to break the process from a method into an
| external script; call the script in separate threads. This is the only
| way I can see to give each process a separate dir (external process
| fixes that), and I can find out when each process is finished (thread
| fixes that).

Yeah, that'll work:

for child in 1 2 3 4 5 6 ...
do
    mkdir "work-$child"
    ( cd "work-$child"; run-child ) &
done
wait

| Am I missing something? Is there a better way? I hate to rewrite this
| method as a script since I've got a lot of object metadata that I'll
| have to regenerate with each call of the script.

See the pipe scheme in point (2) above. Doubtless there are other
methods, but pipes are a cheap shared resource with the right behaviour.
I'd prefer method (1) myself, assuming you have control of the working
file paths.

Cheers,
--
Cameron Simpson <c...@zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

If everyone is thinking alike, then someone isn't thinking. - Patton

Michael Palmer
Sep 24, 2008, 10:07:38 PM

1. Does the work in the different directories really have to be done
concurrently? You say you'd like to know when each thread/process was
finished, suggesting that they are not server processes but rather
accomplish some limited task.

2. If the answer to 1. is yes: All that os.chdir gives you is an
implicit global variable. Is that convenience really worth a
multi-process architecture? Would it not be easier to just work with
explicit path names instead? You could store the path of the
per-thread working directory in an instance of threading.local - for
example:

>>> import threading
>>> t = threading.local()
>>>
>>> class Worker(threading.Thread):
...     def __init__(self, path):
...         threading.Thread.__init__(self)
...         self.path = path
...     def run(self):
...         t.path = self.path   # set inside the worker thread, so it is thread-local
...

the thread-specific value of t.path would then be available to all
classes and functions running within that thread.

Karthik Gurusamy
Sep 24, 2008, 11:14:43 PM

Use subprocess; it supports a cwd argument to provide the given
directory as the child's working directory.

Help on class Popen in module subprocess:

class Popen(__builtin__.object)
 |  Methods defined here:
 |
 |  __del__(self)
 |
 |  __init__(self, args, bufsize=0, executable=None, stdin=None,
 |      stdout=None, stderr=None, preexec_fn=None, close_fds=False,
 |      shell=False, cwd=None, env=None, universal_newlines=False,
 |      startupinfo=None, creationflags=0)
 |      Create new Popen instance.

You want to provide the cwd argument above.
Then once you have launched all your n processes, run thru' a loop
waiting for each one to finish.

# cmds is a list of dicts describing each process to run and what its
# cwd should be

import subprocess

runs = []
for c in cmds:
    run = subprocess.Popen(c['cmd'], cwd=c['cwd'])   # plus whatever other args you need
    runs.append(run)

# Now wait for all the processes to finish
for run in runs:
    run.wait()

Note that if any of the processes generate a lot of stdout/stderr and
you capture it with subprocess.PIPE, you can get a deadlock in the
above loop once the OS pipe buffers fill up. Then you may want to go
for threads, or use run.poll and do the reading of the output from
your child processes yourself.
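
If you don't actually need the output in memory, one simple way around
that deadlock is to point each child's stdout/stderr at its own log
file, so the pipe buffers never fill up. A sketch along those lines
(the log file name is made up):

    import os
    import subprocess

    runs = []
    for c in cmds:                           # cmds as in the loop above
        log = open(os.path.join(c['cwd'], 'run.log'), 'w')
        run = subprocess.Popen(c['cmd'], cwd=c['cwd'],
                               stdout=log, stderr=subprocess.STDOUT)
        runs.append((run, log))

    for run, log in runs:
        run.wait()
        log.close()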

Karthik

Carl Banks
Sep 24, 2008, 11:17:57 PM
On Sep 24, 9:27 pm, Tim Arnold <a_j...@bellsouth.net> wrote:
> (2) Next thought was fork, but I don't know how to signal when each
> child is finished.

Consider the multiprocessing module, which is available in Python 2.6.
It began its life as a third-party module that acts like the threading
module but uses processes, and I think you can still run it as a
third-party module in 2.5.
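
For what it's worth, a minimal sketch of that approach (build_chapter
and the directory names are hypothetical); since every worker is a
separate process, calling os.chdir inside it does not disturb the
others:

    import os
    import multiprocessing

    def build_chapter(workdir):
        os.chdir(workdir)       # safe here: affects only this worker process
        # ... do the real work for this chapter ...

    if __name__ == '__main__':
        dirs = ['chap1', 'chap2', 'chap3']
        pool = multiprocessing.Pool()       # defaults to one worker per CPU
        pool.map(build_chapter, dirs)       # blocks until every chapter is done
        pool.close()
        pool.join()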


Carl Banks

Tim Arnold
Sep 25, 2008, 8:16:18 AM
"Tim Arnold" <a_j...@bellsouth.net> wrote in message
news:57cdd3f1-cde8-45f5...@l43g2000hsh.googlegroups.com...

>I have a bunch of processes to run and each one needs its own working
> directory. I'd also like to know when all of the processes are
> finished.

Thanks for the ideas everyone--I now have some new tools in the toolbox.
The task is to use pdflatex to compile a bunch of (>100) chapters and
know when the book is complete (i.e. the book pdf is done and the
separate chapter pdfs are finished). I have to wait for that before I
start some postprocessing and reporting chores.

My original scheme was to use a class to manage the builds with threads,
calling pdflatex within each thread. Since pdflatex really does need to be
in the directory with the source, I had a problem.

I'm reading now about python's multiprocessing capability, but I think
I can use Karthik's suggestion to call pdflatex in a subprocess with
the cwd set. That seems like the simple solution at this point, but
I'm going to give Cameron's pipes suggestion a go as well.
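
Something along these lines, say (the directory layout and file name
are assumptions):

    import subprocess

    # assume one directory per chapter, each containing chapter.tex
    chapter_dirs = ['ch01', 'ch02', 'ch03']
    procs = [subprocess.Popen(['pdflatex', 'chapter.tex'], cwd=d)
             for d in chapter_dirs]
    for p in procs:
        p.wait()    # once these all return, postprocessing can start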

In any case, it's clear I need to rethink the problem. Thanks to everyone
for helping me get past my brain-lock.

--Tim Arnold


Michael Palmer
Sep 26, 2008, 3:54:47 PM

I still don't see why this should be done concurrently - do you have
more than 100 processors available? I also happen to be writing a book
in LaTeX these days. I have one master document and pull in all
chapters using \include, and pdflatex is only ever run on the master
document. For a quick preview of the chapter I'm currently working on,
I just use \includeonly - it compiles in no time at all.

How do you manage to get consistent page numbers and cross-referencing
if you process all chapters separately, and even in _parallel_? That
just doesn't look right to me.
