Using mpi4py with a wrapped Fortran library that calls MPI


Amit

Jun 2, 2013, 5:58:31 AM
to mpi...@googlegroups.com
Hello,

I am trying to wrap (using f2py) a Fortran library that calls MPI. The library expects the main program (the Fortran code that uses the library) to initialize MPI by calling mpi_init().
The main Fortran program doesn't pass any objects (e.g. the communicator) to the library, so I can't use the example in the documentation.
I tried to imitate this behavior with mpi4py by including the following line in my main Python file (the one that calls the wrapped library):

from mpi4py import MPI

From what I understand, this should initialize the MPI library. But when I run my script I get the following error:

Attempting to use an MPI routine before initializing MPI
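
For reference, here is roughly what my script does (a minimal sketch; pardirect is my f2py-wrapped module, and run() is just a placeholder for the actual library entry point):

from mpi4py import MPI        # importing mpi4py should call MPI_Init()

import pardirect              # the f2py-wrapped Fortran library

print(MPI.Is_initialized())   # prints True, so mpi4py's side of MPI is up
pardirect.run()               # placeholder entry point; the error appears here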

Thanks in advance,
Amit

Aron Ahmadia

Jun 2, 2013, 11:24:22 AM
to mpi...@googlegroups.com
> Attempting to use an MPI routine before initializing MPI

This is a sign that your mpi4py MPI library is not compiled/linked against MPI in the same way as your Fortran code is.

It is essential that you use the exact same compilers/linkers/flags for both codes, or something is likely to go awry.

One easy way to check how things are linked is by using the "ldd" tool for dynamically linked libraries (.so).  Both mpi4py and your Fortran extension code wrapped by f2py should be .so files, and both of them should be pointing to the same MPI library.
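
For example (a sketch; replace the paths with wherever the two modules actually live):

$ ldd /path/to/site-packages/mpi4py/MPI.so | grep libmpi
$ ldd /path/to/your_f2py_extension.so | grep libmpi

Both commands should point at the same libmpi.so.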

Let us know how it goes.

Cheers,
Aron



Amit

Jun 2, 2013, 4:41:55 PM
to mpi...@googlegroups.com, ar...@ahmadia.net
Thanks for your response.
Below is the output of the ldd command on the .so files:

First, the mpi4py extension (MPI.so):
tamnun [/u/amitibo/code/mpi4py] 313 > ldd ~/.local/lib/python2.7/site-packages/mpi4py/MPI.so
        linux-vdso.so.1 =>  (0x00007ffff7ffe000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007ffff7ab3000)
        libpython2.7.so.1.0 => /usr/local/epd2/lib/libpython2.7.so.1.0 (0x00007ffff7704000)
        libmpi_dbg.so.4 => /usr/local/intel/impi/4.0.3.008/intel64/lib/libmpi_dbg.so.4 (0x00007ffff7084000)
        libmpigf.so.4 => /usr/local/intel/impi/4.0.3.008/intel64/lib/libmpigf.so.4 (0x00007ffff6f55000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ffff6d38000)
        librt.so.1 => /lib64/librt.so.1 (0x00007ffff6b30000)
        libc.so.6 => /lib64/libc.so.6 (0x00007ffff679d000)
        /lib64/ld-linux-x86-64.so.2 (0x0000003e14600000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00007ffff659a000)
        libm.so.6 => /lib64/libm.so.6 (0x00007ffff6316000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007ffff60ff000)

And my wrapped extension (pardirect.so):
tamnun [/u/amitibo/code/pydirect] 317 > ldd DIRECT/parallel/pardirect.so
        linux-vdso.so.1 =>  (0x00007ffff7ffe000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007ffff3820000)
        libmpi.so.4 => /usr/local/intel/impi/4.0.3.008/intel64/lib/libmpi.so.4 (0x00007ffff3356000)
        libmpigf.so.4 => /usr/local/intel/impi/4.0.3.008/intel64/lib/libmpigf.so.4 (0x00007ffff3228000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ffff300a000)
        librt.so.1 => /lib64/librt.so.1 (0x00007ffff2e02000)
        libpython2.7.so.1.0 => /usr/local/epd2/lib/libpython2.7.so.1.0 (0x00007ffff2a53000)
        libg2c.so.0 => /usr/lib64/libg2c.so.0 (0x00007ffff2831000)
        libm.so.6 => /lib64/libm.so.6 (0x00007ffff25ad000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007ffff2397000)
        libc.so.6 => /lib64/libc.so.6 (0x00007ffff2004000)
        /lib64/ld-linux-x86-64.so.2 (0x0000003e14600000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00007ffff1e01000)

I went over mpi4py's setup.py to understand which flags it uses, but it is quite complicated to follow. Is there a simpler way to get the same compile flags that mpi4py uses?

Thanks,
Amit

Aron Ahmadia

Jun 2, 2013, 7:43:11 PM
to mpi...@googlegroups.com, ar...@ahmadia.net
I am on a phone right now, but glancing at your ldd output, it looks like mpi4py is linking against a debug MPI library (libmpi_dbg.so.4) while your extension isn't. This may be the problem; it is hard to say for sure. Lisandro might have some other ideas. You can control compile/link settings by modifying the configuration file at the top level of the mpi4py source tree.
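
For example, here is a sketch (untested; the section name and paths are illustrative) of an entry you could add to the top-level mpi.cfg to point the build at a specific MPI installation:

[intelmpi]
mpi_dir = /usr/local/intel/impi/4.0.3.008
mpicc   = %(mpi_dir)s/intel64/bin/mpicc
mpicxx  = %(mpi_dir)s/intel64/bin/mpicxx

and then rebuild with:

$ python setup.py build --mpi=intelmpi

Regarding the flags question: if your mpi4py is recent enough, it can report the configuration it was built with:

$ python -c "import mpi4py; print(mpi4py.get_config())"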

Cheers,
Aron

Lisandro Dalcin

Jun 3, 2013, 7:27:09 AM
to mpi4py
On 3 June 2013 02:43, Aron Ahmadia <ar...@ahmadia.net> wrote:
> I am on a phone now but glancing at your ldd output it looks like mpi4py is
> linking against a debug library and your extension isn't.

Yes, that is the issue. I'm not sure why the compiler is picking this
library, though. Perhaps it is related to Python's distutils passing
the -g flag to the linker.


--
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
3000 Santa Fe, Argentina
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169

Rodrigo Paz

Jun 3, 2013, 6:16:28 PM
to mpi...@googlegroups.com

On Monday, June 3, 2013 4:27:09 AM UTC-7, Lisandro Dalcin wrote:
> Yes, that is the issue. I'm not sure why the compiler is picking this
> library, though. Perhaps it is related to Python's distutils passing
> the -g flag to the linker.


Hi all,
It seems that with new versions of Intel's mpicc and mpicxx (at least with Intel compiler versions 12 and 13), the default COPTFLAGS and CXXOPTFLAGS put the build in debug mode.
So, redefining COPTFLAGS and CXXOPTFLAGS to -O3 (and friends) makes the linker use the correct optimized libmpi.so:

[rpaz@shang2 petsc-dev]$ ldd arch-linux-intel-intelmpi/lib/libpetsc.so
...
    libmpi.so.4 => /opt/intel/impi/4.1.0/lib64/libmpi.so.4 (0x00007fe57f343000)
    libmpigf.so.4 => /opt/intel/impi/4.1.0/lib64/libmpigf.so.4 (0x00007fe57f112000)
...

This is from compiling petsc-dev with COPTFLAGS=-O3 and CXXOPTFLAGS=-O3.
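
i.e., configuring along these lines (a sketch; the other configure options are omitted):

$ ./configure COPTFLAGS=-O3 CXXOPTFLAGS=-O3 ...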

--
Rodrigo


Rodrigo Paz

Jun 3, 2013, 7:15:40 PM
to mpi...@googlegroups.com


Earlier, in my previous reply, I was building petsc-dev, so I pointed to C{XX}OPTFLAGS; but
for mpi4py builds, try redefining these flags instead: CPPFLAGS, CFLAGS, CXXFLAGS.
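
For example (a sketch; distutils should pick these environment variables up at build time):

$ env CFLAGS=-O3 CXXFLAGS=-O3 python setup.py build

With that, the resulting MPI.so links against the optimized library: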

[rpaz@shang2 mpi4py-dev]$ ldd /usr/local/lib64/python2.7/site-packages/mpi4py/MPI.so
...
    libmpi.so.4 => /opt/intel/impi/4.1.0/lib64/libmpi.so.4 (0x00007f02e2a1d000)
    libmpigf.so.4 => /opt/intel/impi/4.1.0/lib64/libmpigf.so.4 (0x00007f02e27ed000)
...


--
Rodrigo

 