[mpi4py] Allgatherv in mpi4py


Battalgazi YILDIRIM

May 21, 2010, 4:43:11 PM
to mpi4py
Hi,

I have been using collective communication over an intercommunicator. I am
now trying Allgatherv in my code, and I get the following error:

  File "Comm.pyx", line 478, in mpi4py.MPI.Comm.Allgatherv (src/mpi4py.MPI.c:51479)                               
  File "message.pxi", line 442, in mpi4py.MPI._p_msg_cco.for_allgather (src/mpi4py.MPI.c:16412)                   
  File "message.pxi", line 330, in mpi4py.MPI._p_msg_cco.for_cco_recv (src/mpi4py.MPI.c:15443)                    
  File "message.pxi", line 195, in mpi4py.MPI.message_vector (src/mpi4py.MPI.c:14334)                             
TypeError: 'mpi4py.MPI.Datatype' object is not iterable 


So I went back to the example Lisandro provided for me, only modifying the arguments (adding 0 as DISPLS) and changing the
function from Allgather to Allgatherv:

program main
  use mpi
  implicit none
  integer :: parent, rank, val, dummy, ierr
  call MPI_Init(ierr)
  call MPI_Comm_get_parent(parent, ierr)
  call MPI_Comm_rank(parent, rank, ierr)
  val = rank + 1
  call MPI_Allgatherv(val,   1, MPI_INTEGER, &
                      dummy, 0, 0, MPI_INTEGER, &
                      parent, ierr)
  call MPI_Comm_disconnect(parent, ierr)
  call MPI_Finalize(ierr)
end program main

And python.py has been changed only to call Allgatherv (I looked at the API for Allgatherv, which is the same as Allgather's):

from mpi4py import MPI
from array import array
import os

progr = os.path.abspath('a.out')
child = MPI.COMM_WORLD.Spawn(progr,[], 8)
n = child.remote_size
a = array('i', [0]) * n
child.Allgatherv([None,MPI.INT],[a,MPI.INT])
child.Disconnect()
print a

yildirim@memosa:~/python_intercomm$ mpif90 fortran.f90
yildirim@memosa:~/python_intercomm$ python python.py
Traceback (most recent call last):
  File "python.py", line 9, in <module>
    child.Allgatherv([None,MPI.INT],[a,MPI.INT])
  File "Comm.pyx", line 478, in mpi4py.MPI.Comm.Allgatherv (src/mpi4py.MPI.c:51479)
  File "message.pxi", line 442, in mpi4py.MPI._p_msg_cco.for_allgather (src/mpi4py.MPI.c:16412)
  File "message.pxi", line 330, in mpi4py.MPI._p_msg_cco.for_cco_recv (src/mpi4py.MPI.c:15443)
  File "message.pxi", line 195, in mpi4py.MPI.message_vector (src/mpi4py.MPI.c:14334)
TypeError: 'mpi4py.MPI.Datatype' object is not iterable


What am I doing wrong? If you can help me out, I will be thankful.

Thanks

--
B. Gazi YILDIRIM


Lisandro Dalcin

May 21, 2010, 5:06:25 PM
to mpi...@googlegroups.com
On 21 May 2010 17:43, Battalgazi YILDIRIM <yildi...@gmail.com> wrote:
>
> And python.py has been changed only to call Allgatherv (I looked at the API
> for Allgatherv, which is the same as Allgather's)
>

There lies the problem. Despite the APIs being the same, you
have to change the way you pass the sendbuf/recvbuf arguments; now you
have to pass something like

sendbuf = [array, (count, displs), MPI.INT]  # note: a 3-list/tuple; the second item has to be a 2-list/tuple

or a 4-list/tuple:

sendbuf = [array, count, displs, MPI.INT]


In general, "count" and "displ" should be sequences
(list/tuples/arrays) with integer items, and
len(count)=len(displ)=size/remote_size (for intra/inter
communication), array usually is len(array)==sum(count) (but it could
also be the case of len(array)>sum(count)).
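For instance, on a plain intra-communicator the 3-list/tuple form looks
like this (just a sketch with made-up sizes, assuming NumPy; rank r
contributes r+1 integers):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = np.full(rank + 1, rank, dtype='i')      # rank r contributes r+1 integers
counts = [r + 1 for r in range(size)]             # len(counts) == size
displs = [sum(counts[:r]) for r in range(size)]   # cumulative offsets
recvbuf = np.empty(sum(counts), dtype='i')        # len(recvbuf) == sum(counts)

comm.Allgatherv([sendbuf, MPI.INT],                    # send side: count inferred
                [recvbuf, (counts, displs), MPI.INT])  # recv side: 3-list/tuple form

The 4-list/tuple form [recvbuf, counts, displs, MPI.INT] would be equivalent.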


I understand all this is non-obvious and undocumented, and I hope you
understand that I do not have time to write documentation and no one
has offered help with this. Please take a look at the MPI standard, an
MPI book, or an MPI tutorial if you have never used Allgatherv() (or
the other vector-variant collectives).


> from mpi4py import MPI
> from array import array
> import os
>
> progr = os.path.abspath('a.out')
> child = MPI.COMM_WORLD.Spawn(progr,[], 8)
> n = child.remote_size
> a = array('i', [0]) * n
> child.Allgatherv([None,MPI.INT],[a,MPI.INT])
>
> what am I doing wrong? if you can help me out, I will be thankful,
>

There, you have to change to:

child.Allgatherv([None, (0,0), MPI.INT], [a, (count, displ), MPI.INT])
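
For this particular spawn example, where each of the n children sends a
single integer, the concrete values would be something like (a sketch,
not tested here):

count = [1] * n          # one integer per child process
displ = list(range(n))   # place them contiguously in `a`
child.Allgatherv([None, (0, 0), MPI.INT],
                 [a, (count, displ), MPI.INT])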

However, note that usually one first calls Allgather() to get "count",
then accumulates those values to get "displ", allocates an array with
sum(count) entries, and only after all that is ready to make a
meaningful Allgatherv(). I bet this is your use case, as it is likely
that your Python side does not know how many items each Fortran
process will send.
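
Something along these lines on the Python/parent side (again just a
sketch; it assumes the Fortran children make matching Allgather and
Allgatherv calls, each contributing some number of integers of its own
choosing):

from array import array

n = child.remote_size

# 1) learn how many integers each child will contribute
counts = array('i', [0]) * n
child.Allgather([None, MPI.INT], [counts, MPI.INT])

# 2) accumulate the counts to get the displacements
displs = [0] * n
for i in range(1, n):
    displs[i] = displs[i - 1] + counts[i - 1]

# 3) allocate the receive buffer and do the vector gather
a = array('i', [0]) * sum(counts)
child.Allgatherv([None, (0, 0), MPI.INT],
                 [a, (list(counts), displs), MPI.INT])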

Am I being clear enough? Take a look at demo/mandelbrot/; that
example is similar to what you are trying to do.

--
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169

Battalgazi YILDIRIM

May 21, 2010, 6:48:40 PM
to mpi...@googlegroups.com
Hi Lisandro,

I have used Allgatherv a lot in C++; I was just confused
when I did not see a displs option in the API (I thought
mpi4py was magic :)) )

but your help is great.

I think the best documentation is more examples for mpi4py,
since MPI itself is well documented everywhere.

If you look at the matplotlib library, they provide a
gallery of examples (maybe thousands). We might do similar things.

I really appreciate your help. Speaking for myself, I think that I (and
many mpi4py users) owe you a favor.

For example, I can create examples showing how to connect two independent
codes through MPI (a Fortran code and a Python code via a SWIG wrapper).
I am sure there are many unusual examples that would help all existing
and future mpi4py users.

What do you think of it?

Thank you very much.
--
B. Gazi YILDIRIM

Lisandro Dalcin

May 21, 2010, 8:38:52 PM
to mpi4py
On 21 May 2010 19:48, Battalgazi YILDIRIM <yildi...@gmail.com> wrote:
> Hi Lisandro,
>
> I have used Allgatherv a lot in C++; I was just confused
> when I did not see a displs option in the API (I thought
> mpi4py was magic :)) )
>

Well, the idea is that you pass sendbuf/recvbuf arguments, and they are
lists/tuples containing the full description of the communication. I
like this because it has some symmetry with Python pickle-based
communication, and the API can be made simpler to use for common
cases. For example, if you call Send(array), mpi4py can, in some cases
(a numpy.ndarray object, a builtin array.array object, byte strings, any
PEP 3118 buffer-like object), automatically figure out the count and
datatype to use. So in these cases you can just call Send(array) and never
bother about passing count and datatype.
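
For example (a toy sketch, assuming NumPy and at least two processes):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    data = np.arange(10, dtype='i')
    comm.Send(data, dest=1, tag=0)   # count and datatype inferred from the buffer
    # fully explicit equivalent:
    # comm.Send([data, 10, MPI.INT], dest=1, tag=0)
elif comm.Get_rank() == 1:
    data = np.empty(10, dtype='i')
    comm.Recv(data, source=0, tag=0)  # likewise inferred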

>
> but your help is great.
>
> I think the best documentation is more examples for mpi4py, since MPI
> itself is well documented everywhere.
>
> If you look at the matplotlib library, they provide a gallery of examples
> (maybe thousands). We might do similar things.
>

Yes, but writing good examples for others to learn from takes some careful
thinking, and time!

> I really appreciate your help. Speaking for myself, I think that I (and
> many mpi4py users) owe you a favor.
>
> For example, I can create examples showing how to connect two independent
> codes through MPI (a Fortran code and a Python code via a SWIG wrapper).
> I am sure there are many unusual examples that would help all existing
> and future mpi4py users.
>
> What do you think of it?

Any contribution will be very much appreciated.