[mpi4py] dynamic processing question

Battalgazi YILDIRIM

May 19, 2010, 2:28:32 PM
to mpi4py
Hello Lisandro,

I am using two different parallel codes written in Python and Fortran. I coupled them
via dynamic process management (actually, Lisandro helped me out with this).

We have one "main.py" script, which is the parent process. The parent process then
spawns two children (one Fortran, one Python).
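
For reference, a minimal sketch of what our parent "main.py" does (the executable
names and process counts here are just placeholders, not the real ones, and the
setup of the sibling intercommunicator is omitted):

# Hypothetical parent script: spawn the Fortran (MPM) and Python (FVM) children.
# Each Spawn() returns an intercommunicator between this parent and that child
# group; the sibling intercommunicator used below (FVM_COMM_MPM / MPM_COMM_FVM)
# is set up separately and is not shown here.
from mpi4py import MPI

mpm_comm = MPI.COMM_SELF.Spawn('./mpm_solver', args=[], maxprocs=8)           # Fortran child
fvm_comm = MPI.COMM_SELF.Spawn('python', args=['fvm_solver.py'], maxprocs=4)  # Python child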

We have an intercommunicator between the Fortran and Python scripts, and I have been
using send-recv, Bcast, and Gather successfully so far.

However, I need to use "Allgather" communication, which seems a little bit problematic.

On the Fortran side, I call

call  MPI_ALLGATHER( nlocfaces, 1, MPI_INTEGER, dummy, 1,    &
                              MPI_DATATYPE_NULL,  MPM_COMM_FVM, iFVMerr)

MPM_COMM_FVM is the intercommunicator that lets the two siblings communicate (the Fortran code is MPM, the Python code is FVM).

On the Python side, I do

self.FVM_COMM_MPM.Allgather([dummy,MPI.INT],[self.nfaces,MPI.INT])

(Note: even though FVM_COMM_MPM and MPM_COMM_FVM have different names, they are the same intercommunicator.)

The problem is this: the MPI reference I am reading says that if you want data movement in only one direction,
you have to specify sendcount = 0 for the communication in the reverse direction.

I looked at the Allgather API:
    Allgather(self, sendbuf, recvbuf)
 
I want to do something like this on the Python side (I don't want data coming from the Python side):

self.FVM_COMM_MPM.Allgather([dummy,MPI.INT], sendcount=0, [self.nfaces,MPI.INT])


Could you help me out with this?

Thanks,


B. Gazi YILDIRIM


Lisandro Dalcin

May 19, 2010, 2:53:37 PM
to mpi...@googlegroups.com
If you do not want to send/recv data from the Python side, you have two
easy ways: one is to pass None as the buffer object, the other is to pass
a dummy buffer-like object with an explicit count=0, like this:

In [1]: from mpi4py import MPI

# Option (A)

In [2]: MPI.COMM_WORLD.Allgather([None,MPI.INT],[None,MPI.INT])


# Option (B)

In [3]: from array import array

In [4]: dummy1 = array('i',[0]) # dummy array

In [5]: dummy2 = array('i',[0]) # dummy array

In [6]: MPI.COMM_WORLD.Allgather([dummy1,0,MPI.INT],[dummy2,0,MPI.INT])


However, note that if you do not send data from Python, then you have
to use recvcount=0 on the Fortran side. Additionally, do not use
MPI_DATATYPE_NULL; use MPI_INTEGER with recvcount=0.

Does this help?

Note: passing None should work in recent (1.2.x) mpi4py versions; I
can't remember about older releases.




--
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169

Battalgazi YILDIRIM

May 19, 2010, 3:16:01 PM
to mpi...@googlegroups.com
Hi Lisandro,

Thanks for the correction; None worked in my version.

It now looks like this:

In Fortran,
     call  MPI_ALLGATHER( nlocfaces, 1, MPI_INTEGER, dummy, 0,    &
                              MPI_INTEGER,  MPM_COMM_FVM, iFVMerr)

In Python,
       self.nfaces  = zeros( self.FVM_COMM_MPM.Get_remote_size(),int32 )
       self.FVM_COMM_MPM.Allgather([None,MPI.INT],[self.nfaces,MPI.INT])

I am printing nlocfaces on the Fortran side and nfaces on the Python side:

  MPM side nlocfaces =          672
  MPM side nlocfaces =          544
  MPM side nlocfaces =          544
  MPM side nlocfaces =          672
  MPM side nlocfaces =          544
  MPM side nlocfaces =          544
  MPM side nlocfaces =          544
  MPM side nlocfaces =          544

  nfaces =======================  [0 0 0 0 0 0 0 0]   (8 processes on Fortran side)


But I am not getting the correct values. Again, FVM_COMM_MPM (or MPM_COMM_FVM) is an intercommunicator.

Thanks,



--
B. Gazi YILDIRIM

Lisandro Dalcin

May 19, 2010, 3:50:17 PM
to mpi...@googlegroups.com
I'm not sure what's going on on your side. See the code below; it runs
just fine for me (using MPICH2 on my desktop box). Perhaps it is a bug in
the underlying MPI (which MPI are you using?). Could you make the Python
side send and the Fortran side receive, just to be sure this is not a
nasty bug related to the underlying MPI?


[dalcinl@trantor intercomm]$ cat python.py
from mpi4py import MPI
from array import array
import os

progr = os.path.abspath('a.out')
child = MPI.COMM_WORLD.Spawn(progr,[], 8)
n = child.remote_size
a = array('i', [0]) * n
child.Allgather([None,MPI.INT],[a,MPI.INT])
child.Disconnect()
print a

[dalcinl@trantor intercomm]$ cat fortran.f90
program main
use mpi
implicit none
integer :: parent, rank, val, dummy, ierr
call MPI_Init(ierr)
call MPI_Comm_get_parent(parent, ierr)
call MPI_Comm_rank(parent, rank, ierr)
val = rank + 1
call MPI_Allgather(val, 1, MPI_INTEGER, &
dummy, 0, MPI_INTEGER, &
parent, ierr)
call MPI_Comm_disconnect(parent, ierr)
call MPI_Finalize(ierr)
end program main


[dalcinl@trantor intercomm]$ mpif90 fortran.f90
[dalcinl@trantor intercomm]$ python python.py
array('i', [1, 2, 3, 4, 5, 6, 7, 8])
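
For completeness, here is a rough sketch of the reverse-direction test suggested
above (not part of the original reply): the Python side sends one value per parent
process and receives nothing, and the Fortran side would swap its counts
accordingly. The value is a placeholder, and passing None for the receive buffer
relies on the None handling described earlier.

from mpi4py import MPI
from array import array
import os

progr = os.path.abspath('a.out')
child = MPI.COMM_WORLD.Spawn(progr, [], 8)
val = array('i', [42])                             # one value from the (single) parent
child.Allgather([val, MPI.INT], [None, MPI.INT])   # parent receives nothing back
child.Disconnect()
# On the Fortran side the roles would be swapped: sendcount=0, and a receive
# buffer of one MPI_INTEGER per remote (parent) process, i.e. recvcount=1 here.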

Battalgazi YILDIRIM

May 19, 2010, 5:08:10 PM
to mpi...@googlegroups.com
Hi Lisandro,

Thank you very much for the stand-alone example.

I tried it on my computer and got the following results:

yildirim@memosa:~/python_intercomm$ more python.py

from mpi4py import MPI
from array import array
import os

progr = os.path.abspath('a.out')
child = MPI.COMM_WORLD.Spawn(progr,[], 8)
n = child.remote_size
a = array('i', [0]) * n
child.Allgather([None,MPI.INT],[a,MPI.INT])
child.Disconnect()
print a

yildirim@memosa:~/python_intercomm$ more fortran.f90

program main
 use mpi
 implicit none
 integer :: parent, rank, val, dummy, ierr
 call MPI_Init(ierr)
 call MPI_Comm_get_parent(parent, ierr)
 call MPI_Comm_rank(parent, rank, ierr)
 val = rank + 1
 call MPI_Allgather(val,   1, MPI_INTEGER, &
                    dummy, 0, MPI_INTEGER, &
                    parent, ierr)
 call MPI_Comm_disconnect(parent, ierr)
 call MPI_Finalize(ierr)
end program main

yildirim@memosa:~/python_intercomm$ mpif90 fortran.f90

yildirim@memosa:~/python_intercomm$ python python.py
array('i', [0, 0, 0, 0, 0, 0, 0, 0])


In my original script, I have many send/recv calls from both sides working
correctly, but this is the first time I am trying Allgather and it is not working.

I use Open MPI. I have tried this on Ubuntu Linux and Red Hat Linux (both
using Open MPI), and I got an array of zeros.

I will try a different platform with MPICH2 or MVAPICH2.
--
B. Gazi YILDIRIM

Battalgazi YILDIRIM

May 19, 2010, 5:19:00 PM
to mpi...@googlegroups.com
Hi Lisandro,

I went to one of our clusters, compiled mpi4py with MPICH2, and compiled and
ran the example there; I got the correct results:


array('i', [1, 2, 3, 4, 5, 6, 7, 8])
-bash-3.2$


Thanks,

Lisandro Dalcin

May 19, 2010, 5:19:18 PM
to mpi...@googlegroups.com
On 19 May 2010 18:08, Battalgazi YILDIRIM <yildi...@gmail.com> wrote:
> Hi Lisandro,
>
Damn it, this is clearly a bug in Open MPI. I've just built mpi4py
against Open MPI and got the same zeros you get.

Lisandro Dalcin

May 19, 2010, 5:25:43 PM
to mpi...@googlegroups.com
On 19 May 2010 18:19, Battalgazi YILDIRIM <yildi...@gmail.com> wrote:
> Hi Lisandro,
>
> I went to one of our clusters, compiled mpi4py with MPICH2, and compiled and
> ran the example there; I got the correct results:
>
> array('i', [1, 2, 3, 4, 5, 6, 7, 8])
> -bash-3.2$
>

OK, confirmed. This seems to be a bug in Open MPI (I've tested against the
latest 1.4.2 release). Will you have time to report it to the Open MPI
devs (and please CC me)? Just include a link to this thread; the Python
code should be obvious enough for the Open MPI folks to acknowledge the bug.