Manual Init Requires Explicit Finalize?


fre...@witherden.org

Dec 30, 2012, 3:11:55 PM
to mpi...@googlegroups.com
Hi all,

In my application I have the need to control when MPI_Init is called.  However, it appears as if this also requires an explicit finalize:

freddie@fluorine ~/Programming $ cat test.py
import mpi4py.rc
mpi4py.rc.initialize = False

from mpi4py import MPI

MPI.Init()

print 'I am %d' % MPI.COMM_WORLD.rank

freddie@fluorine ~/Programming $ mpirun -n 2 python test.py
I am 1
I am 0
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 12882 on
node fluorine exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

Is this a bug in mpi4py?  I cannot find anything implying that rc.initialize and rc.finalize are mutually exclusive (i.e. that disabling automatic initialization also disables automatic finalization), and since rc.finalize defaults to 1 I was expecting everything to just work.
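
For what it is worth, an explicit finalize does make the example exit cleanly; a minimal sketch of that workaround (same test.py as above, with MPI.Finalize registered via atexit):

import atexit

import mpi4py.rc
mpi4py.rc.initialize = False

from mpi4py import MPI

MPI.Init()

# mpi4py did not initialize MPI for us, so make sure it gets finalized at exit
atexit.register(lambda: None if MPI.Is_finalized() else MPI.Finalize())

print 'I am %d' % MPI.COMM_WORLD.rank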

Regards, Freddie.

Lisandro Dalcin

Dec 30, 2012, 5:50:53 PM
to mpi4py
On 30 December 2012 17:11, <fre...@witherden.org> wrote:
> Hi all,
>
> In my application I have the need to control when MPI_Init is called.
> However, it appears as if this also requires an explicit finalize:
>
>
> Is this a bug in mpi4py? I can not find anything implying that
> rc.initialize and rc.finalize are mutually exclusive and since rc.finalize
> defaults to 1 I was expecting everything to just work.
>

Well, I implemented this long ago, and at the time I assumed that if
you were handling initialization yourself, you would also need to
handle finalization: https://code.google.com/p/mpi4py/source/browse/src/MPI/atimport.pxi#127

What we could do to be backward-compatible with the old behaviour is
the following: if initialize=False, then by default we also have
finalize=False, but if the user explicitly sets finalize=True, mpi4py
automatically calls MPI_Finalize() at exit. @freddie, does this sound
acceptable to you?
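
In code, the proposed behaviour would look roughly like this from the
user's side (a sketch of the proposal, not what the current release does):

import mpi4py.rc
mpi4py.rc.initialize = False   # the application calls MPI_Init itself
mpi4py.rc.finalize = True      # explicitly opt back in to automatic MPI_Finalize

from mpi4py import MPI

MPI.Init()
# ... application code ...
# no explicit MPI.Finalize() needed: mpi4py would register it at exit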

What do other people on this list think about this?



--
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
3000 Santa Fe, Argentina
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169

fre...@witherden.org

Dec 31, 2012, 6:53:53 AM
to mpi...@googlegroups.com
On Sunday, 30 December 2012 22:50:53 UTC, Lisandro Dalcin wrote:
> Well, I implemented this long ago, and at the time I assumed that if
> you were handling initialization yourself, you would also need to
> handle finalization: https://code.google.com/p/mpi4py/source/browse/src/MPI/atimport.pxi#127
>
> What we could do to be backward-compatible with the old behaviour is
> the following: if initialize=False, then by default we also have
> finalize=False, but if the user explicitly sets finalize=True, mpi4py
> automatically calls MPI_Finalize() at exit. @freddie, does this sound
> acceptable to you?
>
> What do other people on this list think about this?

That seems like a good idea.

Broadly speaking, there are some very good reasons for wanting to defer initialization, but few for wanting to also defer finalization.  (One example is handling CUDA-aware MPI implementations, which allow Send/Recv to take pointers to CUDA device allocations.  The caveat is that these require a CUDA context to be available at the time MPI_Init is called, hence the need to control initialization explicitly.)
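
A rough sketch of that ordering, assuming PyCUDA is used to create the context (per-rank device selection is elided):

import mpi4py.rc
mpi4py.rc.initialize = False   # defer MPI_Init until a CUDA context exists

from mpi4py import MPI

# Create a CUDA context first; pycuda.autoinit creates one on the default
# device at import time.  A real application would pick the device per rank.
import pycuda.autoinit

# Only now initialize the (CUDA-aware) MPI implementation
MPI.Init()

print 'I am %d' % MPI.COMM_WORLD.rank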

Regards, Freddie.
 

Lisandro Dalcin

Jan 4, 2013, 7:44:28 PM
to mpi4py
Freddie, could you please open a new issue in the tracker about your request?

Lisandro Dalcin

Jan 15, 2013, 10:23:37 AM
to mpi4py
On Fri, Jan 4, 2013 at 9:44 PM, Lisandro Dalcin <dal...@gmail.com> wrote:
>
> Freddie, could you please open a new issue in the tracker about your request?
>

http://code.google.com/p/mpi4py/source/detail?r=a6a46e0b924b5ff531fa899615db46a447a99df6