Marc-André,
Thank you for your most informative answer. I will read up on Scoop
and start playing with it as time allows. I looked briefly at
multiprocessing.Manager, but thought I would ask the list before jumping
into that...
Also, DEAP has been working wonderfully for us over the last 8 months.
Soon I should write a paper on what we are doing and why.
This whole exercise was brought about by automating a model's
calibration -- which can take tens of thousands of trials, with each
trial creeping up on 1 to 2 hours. As is, DEAP has allowed me to fully
utilize a 24-core system, but I really need to bump it up by a factor
of at least 4, if not 20. There are also serious performance hits
where each trial reads in up to 100 GB of input data. There are a
number of things which can be done to make that more efficient, but at
the moment I am looking at getting it to run on the available
clusters.
As a note, I broke the processing up so that each task returns about
20 parameters back to the originating program, so once the task is
spawned (whether on a separate core or across a network) the
communication is typically 100- to 250-byte blocks at the beginning
and end. So, the message passing part is quite efficient.
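For anyone following along, a minimal sketch of that pattern using only
the standard-library multiprocessing.Pool (the run_trial function and
its return values are hypothetical stand-ins, not the actual model
code):

```python
import multiprocessing


def run_trial(trial_id):
    # Hypothetical stand-in for one calibration trial: the heavy work
    # (reading input data, running the model for hours) happens inside
    # the worker process; only a small tuple of ~20 result parameters
    # ever crosses the process boundary.
    results = tuple(float(trial_id + k) for k in range(20))
    return trial_id, results


if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        # Each message back to the parent is ~20 floats -- a few
        # hundred bytes once pickled, which is negligible next to the
        # per-trial compute time.
        for trial_id, params in pool.imap_unordered(run_trial, range(8)):
            print(trial_id, len(params))
```

The same shape carries over to SCOOP, which exposes a futures.map that
DEAP can be pointed at in place of pool.map for multi-node runs.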
Thanks again,
EBo --
On Sun, 26 Aug 2012 23:11:58 -0700 (PDT), Marc-André Gardner wrote:
> Hi,
>
> Multiprocessing does not seem to be the right tool for your usage.
> Although there is some kind of remote control in the multiprocessing
> module, the introduction of its doc clearly states that "the
> multiprocessing
> <http://docs.python.org/library/multiprocessing.html#module-multiprocessing>
> module