>
> Is mpi4py capable of doing this?
>
Yes, mpi4py can do this, and if your child scripts need to communicate
with each other, then MPI could make your life easier.
> Are there any advantages to using it
> vs. (for instance) a tool such as RabbitMQ?
>
I do not have much experience with other networking libraries like
RabbitMQ. Take into account that MPI is a "scientific" library, in the
sense that it was designed with scientific applications in mind, and it
tries to hide many of the complexities of networking. In highly dynamic
applications, possibly requiring fault tolerance, MPI may not be the
appropriate choice. However, if your computing resources are something
like a cluster, possibly with special high-performance network
hardware, and you need to communicate huge array data back and forth,
then MPI is a good choice.
>
> Any pointers, code samples, ideas how to do this are greatly
> appreciated.
>
You could start by taking a look at the tiny examples in demo/, and
perhaps demo/spawning (however, you could have trouble getting Open MPI
working with that specific example). Depending on your actual
application needs, you could use simpler alternatives. Do your scripts
really need to be separate applications?
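If they do, a minimal dynamic-spawning sketch along the lines of
demo/spawning could look like this (the child script name child.py and
the maxprocs value are made up for illustration):

    # parent.py -- spawn 8 children and broadcast an array to them
    import sys
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_SELF.Spawn(sys.executable, args=['child.py'],
                               maxprocs=8)
    data = np.arange(10, dtype='d')
    comm.Bcast(data, root=MPI.ROOT)  # parent side of the broadcast
    comm.Disconnect()

    # child.py -- connect back to the parent and receive the array
    import numpy as np
    from mpi4py import MPI

    comm = MPI.Comm.Get_parent()
    data = np.empty(10, dtype='d')
    comm.Bcast(data, root=0)  # root=0 refers to the parent here
    comm.Disconnect()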
PS: If MPI/mpi4py does not fit your needs, take a look at these:
http://www.zeromq.org/ and http://www.zeromq.org/bindings:python
--
Lisandro Dalcin
CIMEC (INTEC/CONICET-UNL)
> Definitely.
> My scripts must be separate, because this is how I parallelize my work
> across nodes in the cluster (I do not want to use python's
> multiprocessing).
>
Please note that MPI (and thus mpi4py) is quite different from
Python's multiprocessing.
> My idea is to have a main script (call it *parent*) that defines &
> keeps track of a big, multi-dimensional numpy array.
>
> Then, I will have 8 other *children* scripts (running at the same
> time) that will:
>
> (1) Have the ability to 'read' the parent's multi-dimensional numpy
> array (either directly, or alternatively, by the parent sending the
> numpy array to them as a message).
>
> (2) Do some processing, and send messages (which are new numpy arrays)
> to the parent. These messages will be consumed by the parent, which
> will update its own big, multi-dimensional numpy array.
>
I'm still not sure you actually need this parent/child distinction.
You could launch an MPI-parallel run with, let's say, 8 processes:
process 0 keeps track of the big array but also contributes to the
computation, and the other 7 processes send/recv to/from process 0.
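To make that concrete, here is a minimal sketch of such a run (the
array shapes and the message tag are arbitrary choices for
illustration):

    # run with: mpiexec -n 8 python script.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    n = 4  # per-process chunk length, arbitrary for this sketch

    if rank == 0:
        big = np.zeros((size, n))    # the big array lives on process 0
        big[0] = np.arange(n) * 2.0  # process 0 computes its own chunk too
        chunk = np.empty(n)
        for src in range(1, size):   # collect one chunk per worker
            comm.Recv(chunk, source=src, tag=77)
            big[src] = chunk
        print(big)
    else:
        chunk = np.arange(n) * 2.0 * rank  # some local computation
        comm.Send(chunk, dest=0, tag=77)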
> All the messages transferred are basically numpy arrays, and my
> application is definitely considered to be scientific.
>
> Do you think that mpi4py might be suitable for this? Are there any
> suggestions / examples that you can point me to?
>
Please take a look at "demo/mandelbrot/mandelbrot.py". It is not
exactly the same as your app, and it uses collective communication.
The idea is that each process computes a block of rows in a 2d grid,
and then the results are communicated back (using collective
communication) to process 0.
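The communication pattern of that demo boils down to something like
this (a simplified sketch, not the demo code itself):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    rows, cols = 8, 8              # grid size, assumed divisible by size
    myrows = rows // size

    # each process computes its own block of rows
    block = np.full((myrows, cols), rank, dtype='d')

    # a single collective call gathers all blocks on process 0
    grid = np.empty((rows, cols), dtype='d') if rank == 0 else None
    comm.Gather(block, grid, root=0)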
Shalom,
Perhaps our project, Global Arrays (GA), may be a good fit for you. GA implements a shared-memory interface for distributed memory arrays and implements its own one-sided communication library while interoperating with MPI. As of release 5.0 we have Python bindings.
However, if you're willing to work with the development trunk, we have a NumPy work-alike which allows you to distribute NumPy arrays if you "import ga.gain as numpy". GA relies on the MPI process manager (mpiexec) and GA and MPI can interoperate, for example the gain module uses mpi4py and GA in tandem for some of the reductions such as numpy.sum(). We haven't yet implemented all of NumPy's 400+ functions and ndarray methods, but we have at least done all of the ufuncs and many array creation methods (so perhaps 25% of core numpy functionality). I encourage you to check it out.
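In other words, the intended usage under the trunk is something like
this (a sketch; which functions are available depends on the trunk
version):

    # run under mpiexec; GAiN distributes the arrays across processes
    import ga.gain as numpy  # drop-in replacement for "import numpy"

    a = numpy.ones((1000, 1000))  # a distributed array, not a local one
    s = numpy.sum(a)              # reduction using GA + mpi4py in tandem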
Webpage:
http://www.emsl.pnl.gov/docs/global/
SciPy'11 talk: http://conference.scipy.org/scipy2011/slides/daily_GA_Toolkit.pdf
Source: https://svn.pnl.gov/svn/hpctools/trunk/ga
Username/password = anonymous/anonymous
--
Jeff Daily
Scientist, Data Intensive Scientific Computing Group
Pacific Northwest National Laboratory
jeff....@pnnl.gov
www.pnnl.gov
Yes, that's possible. Note, however, that this is not under mpi4py's
control, but a mechanism of the underlying MPI implementation. Usually,
you write a text file (the "machinefile") listing the hostnames and the
number of processes you want to run on each host (usually set to the
number of physical processors/sockets/cores). Then you run "mpiexec
-machinefile hostlist.txt -n 8 python script.py".
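For example, with Open MPI the machinefile could look like this (the
hostnames are made up, and the exact syntax varies between MPI
implementations; e.g. MPICH expects lines like "node01:4"):

    # hostlist.txt -- one line per host, with the number of slots to use
    node01 slots=4
    node02 slots=4

and then: mpiexec -machinefile hostlist.txt -n 8 python script.py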
>
> (2) Do I have to install openMPI on all nodes in the cluster? or one
> node will be sufficient?
>
Yes, MPI (and mpi4py) has to be available on all compute nodes (note
that this is the usual requirement for MPI applications, not only
mpi4py, at least if MPI is built with shared libraries). This can be
achieved by installing the software on local disks in all compute
nodes, by using NFS exports, or by using distributed filesystems like
GFS. Your mileage may vary.
BTW, in many cluster setups, the user's home directory (or some scratch
directory) is visible to all compute nodes. Then you can just copy the
mpi4py tree (the directory tree after installation in site-packages,
for example) to some appropriate directory visible from the compute
nodes, add that entry to sys.path, and you are done.
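That is, something along these lines at the top of your script (the
path here is just a made-up example):

    import sys
    # shared directory visible from all compute nodes (hypothetical path)
    sys.path.insert(0, '/home/user/shared/site-packages')
    from mpi4py import MPI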
> (3) Is the communication channel isolated? meaning, suppose that few
> seconds after starting my first 8 processes, I start another 8
> processes. Is there a chance that the communication between both
> groups of processes will be mixed up?
>
It is completely isolated. Even at the application level, if you
duplicate the MPI.COMM_WORLD communicator (by invoking the Dup()
method) or create new communication domains using the Comm.Create() or
Comm.Split() methods to form sub-groups of interconnected processes,
you get new, private communication contexts that are guaranteed not to
conflict with those of other communicators.
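A quick sketch of both approaches:

    from mpi4py import MPI

    # a private duplicate of COMM_WORLD: same process group,
    # but a fresh communication context
    comm = MPI.COMM_WORLD.Dup()

    # or split the processes into sub-groups, here by even/odd rank
    color = MPI.COMM_WORLD.Get_rank() % 2
    subcomm = MPI.COMM_WORLD.Split(color=color, key=0)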
> (4) Must all the processes send / receive the information? Can I
> choose only some processes (say, process1 and process4) to receive the
> information that is being sent, while the other processes will simply
> be "senders" of info?
>
There are basically two different communication mechanisms:
1) point-to-point: one process sends, the other receives.
2) collective: all processes (in a communicator) participate in the call.
So, in short, yes. If you need process 0 to send a message to 1 and 4
(note: process 0 has to send 2 separate messages, even if the data is
the same), while processes 2, 3, 5, etc. just send back to 0 (and 0
receives all of them), that's perfectly possible.
However, you should really take a look at and understand collective
communication (it is a rather simple subject, actually). Many times it
makes your coding easier and your application faster. For example,
suppose you have to send the exact same message from process 0 to all
others; you have two options: a) send() from 0 to the others in a loop,
while the other procs recv() from 0, or b) use bcast() from 0 to all
others. Option b) is easier to code and more performant, as MPI
implements it using a tree pattern, achieving O(log(numprocs)) time.
Another similar example: suppose you want to distribute different
pieces of data from 0 to the others; again, you can send in a loop from
0 while the others receive... or you can use scatter() to achieve the
same with a single call.
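Both patterns in a short sketch (the buffer sizes are arbitrary):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # a) same message from 0 to all others: a single bcast
    data = np.arange(10, dtype='d') if rank == 0 else np.empty(10, dtype='d')
    comm.Bcast(data, root=0)

    # b) different pieces from 0 to all others: a single scatter
    n = 4
    sendbuf = np.arange(size * n, dtype='d') if rank == 0 else None
    piece = np.empty(n, dtype='d')
    comm.Scatter(sendbuf, piece, root=0)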
Take a look at the minimal tutorial at
http://mpi4py.scipy.org/docs/usrman/tutorial.html and at the more
complete tutorial here:
http://www.bu.edu/pasi/files/2011/01/Lisandro-Dalcin-mpi4py.pdf (slide
sources and code samples here:
https://bitbucket.org/dalcinl/pasi-2011-mpi4py)
PS: As you are a beginner, you'll likely notice the mpi4py docs are far
from good, sorry! If you have quick questions, feel free to contact me
by chat (Gmail, or my GTalk link at mpi4py.googlecode.com).
> -----Original Message-----
> From: mpi...@googlegroups.com On Behalf Of Shalom Rav
> Behalf Of Shalom Rav
> Sent: Thursday, July 21, 2011 6:33 PM
> To: mpi4py
> Subject: [mpi4py] Re: Sending & Receiving ONE numpy array?
>
> Jeff,
>
> Thank you for your response. Your project is definitely interesting.
> Can you kindly provide some more details:
>
> (1) Suppose that I define a big numpy 2d array at the main process (call it
> *process0*). Will all the other 7 processes have direct read/write access to
> this array? meaning, I will NOT have to send / receive any messages?
Array creation is collective -- all processes participate. Once the array is created, any process can copy portions of the array into its local memory (ga.get()) or write data to a portion of the array (ga.put()). Advanced users can ga.access() memory local to a process without the copy overhead.
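A rough sketch of that pattern with the GA Python bindings (the call
signatures and index conventions here are from memory and may differ in
detail):

    import numpy as np
    import ga

    # collective creation: every process participates
    g_a = ga.create(ga.C_DBL, (1000, 1000))

    if ga.nodeid() == 0:
        buf = np.ones((10, 10))
        ga.put(g_a, buf, lo=(0, 0), hi=(10, 10))  # write one patch
    ga.sync()
    patch = ga.get(g_a, lo=(0, 0), hi=(10, 10))   # any process reads a copy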
> (2) Is the access to the big numpy 2d array atomic? Are there any locks
> involved while processes are accessing this big numpy 2d array?
No, access to the arrays is not atomic in general. However, GA does provide functions for locking and unlocking portions of the arrays. In most cases, though, each process will work on disjoint patches of the arrays. Lastly, GA does provide a few atomic operations: 1) ga.acc(), an atomic accumulate, and 2) ga.read_inc(), an atomic counter.
> (3) Where do the 2d numpy array reside -- does it reside only in the memory
> of the *main* process, or is it evenly split between all processes (meaning, a
> chunk of it will reside in process0, another chunk in process1, another chunk
> in process2 etc...)?
By default, array memory is evenly split across all processes. However, users are free to create "irregular" arrays where the memory is not evenly split. Further, users can create "restricted arrays", specifying a subset of processes on which to allocate the array; all processes can still get() and put() to the array regardless of whether any of its data is allocated locally.
> (4) Can I use cython to write functions that will process the big 2d array FAST?
I have not personally attempted to do this, but I don't see why not. GA and GAiN make available the local array memory as an ndarray, so I can see one writing fast functions for processing these arrays using cython.
> (5) Must I have openMPI installed on all the nodes in the cluster in order to
> use Global Arrays?
Yes. If not Open MPI, then at least some flavor of MPI, installed on all nodes or on a disk mounted and accessible from all nodes.
> (6) Are there any code samples that you can share that would achieve what
> I'm trying to do (or something similar)?
Unfortunately, no...