warning: dereferencing type-punned pointer will break strict-aliasing rules


Brian Murrell

Jun 24, 2020, 5:09:54 PM
to mpi4py
I'm trying to build mpi4py 3.0.1 on OpenSUSE Leap 15 and getting the following errors:

src/mpi4py.MPI.c: In function ‘__pyx_f_6mpi4py_3MPI_getOptions’:
src/mpi4py.MPI.c:5722:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
src/mpi4py.MPI.c:5732:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
src/mpi4py.MPI.c:5762:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
src/mpi4py.MPI.c:5772:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
src/mpi4py.MPI.c: In function ‘__pyx_f_6mpi4py_3MPI_PyMPI_probe’:
src/mpi4py.MPI.c:51028:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
In file included from src/MPI.c:4:0:
src/mpi4py.MPI.c: In function ‘__pyx_pf_6mpi4py_3MPI_8Datatype_13is_predefined___get__’:
src/mpi4py.MPI.c:72984:5: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
In file included from src/MPI.c:4:0:
src/mpi4py.MPI.c: In function ‘__pyx_pf_6mpi4py_3MPI_7Request_8Wait’:
src/mpi4py.MPI.c:80224:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
src/mpi4py.MPI.c: In function ‘__pyx_pf_6mpi4py_3MPI_7Request_20Waitall’:
src/mpi4py.MPI.c:81616:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
In file included from src/MPI.c:4:0:
src/mpi4py.MPI.c: In function ‘__pyx_pf_6mpi4py_3MPI_4Comm_50Probe’:
src/mpi4py.MPI.c:101362:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_True);
^~~~~~~~~~~~
src/mpi4py.MPI.c: In function ‘__pyx_pf_6mpi4py_3MPI_9Intracomm_2Create_cart’:
src/mpi4py.MPI.c:116806:5: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_False);
^~~~~~~~~~~~
src/mpi4py.MPI.c: In function ‘__pyx_pf_6mpi4py_3MPI_9Intracomm_12Cart_map’:
src/mpi4py.MPI.c:118381:5: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
__Pyx_INCREF(Py_False);
^~~~~~~~~~~~
In file included from src/MPI.c:4:0:
src/mpi4py.MPI.c: In function ‘__Pyx_PyBool_FromLong’:
src/mpi4py.MPI.c:178260:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False);
^~~~~~
src/mpi4py.MPI.c:178260:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]


gcc (SUSE Linux) 7.5.0
Python 2.7.14
Cython 0.27.3

The build does finish, but tests then fail with:

testIMProbe (test_p2p_buf_matched.TestP2PMatchedSelf) ... 
python2:3086646 terminated with signal 11 at PC=7f961630adf9 SP=7ffe23fd8770.  Backtrace:
/usr/lib64/mpi/gcc/mpich/lib64/libmpi.so.0(+0x4d7df9)[0x7f961630adf9]
/usr/lib64/mpi/gcc/mpich/lib64/libmpi.so.0(+0x4a7f4d)[0x7f96162daf4d]
/usr/lib64/mpi/gcc/mpich/lib64/libmpi.so.0(PMPI_Improbe+0x255)[0x7f96161a0335]
/home/brian/daos/rpm/mpi4py/_topdir/BUILDROOT/mpi4py-3.0.1-5.x86_64/usr/lib64/python2.7/site-packages/mpich/mpi4py/MPI.so(+0x8271b)[0x7f961722271b]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x502)[0x7f9619940c82]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0xc2f)[0x7f96199413af]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(+0xbed87)[0x7f96198f1d87]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x37c0)[0x7f9619943f40]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(+0xbebee)[0x7f96198f1bee]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(+0xb15be)[0x7f96198e45be]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(+0x15df6b)[0x7f9619990f6b]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5c9d)[0x7f961994641d]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(+0xbed87)[0x7f96198f1d87]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x37c0)[0x7f9619943f40]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(+0xbebee)[0x7f96198f1bee]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(+0xb15be)[0x7f96198e45be]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(+0x15df6b)[0x7f9619990f6b]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5c9d)[0x7f961994641d]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(+0xbed87)[0x7f96198f1d87]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x37c0)[0x7f9619943f40]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(+0xbebee)[0x7f96198f1bee]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(+0xb15be)[0x7f96198e45be]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(+0x15df6b)[0x7f9619990f6b]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f96198dbae3]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5c9d)[0x7f961994641d]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0xc2f)[0x7f96199413af]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5d9e)[0x7f961994651e]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5d9e)[0x7f961994651e]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x2d4)[0x7f961993fa84]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x19)[0x7f96199a4189]
/usr/lib64/libpython2.7.so.1.0(+0x1777af)[0x7f96199aa7af]
/usr/lib64/libpython2.7.so.1.0(PyRun_FileExFlags+0x82)[0x7f96199aa752]
/usr/lib64/libpython2.7.so.1.0(PyRun_SimpleFileExFlags+0x19e)[0x7f96199aa64e]
/usr/lib64/libpython2.7.so.1.0(Py_Main+0x669)[0x7f96199b05a9]
/lib64/libc.so.6(__libc_start_main+0xea)[0x7f961927bf8a]
python2(_start+0x2a)[0x563a1665b7fa]

Maybe the two are unrelated.  Any ideas what's causing either of these?

Lisandro Dalcin

Jun 24, 2020, 5:26:46 PM
to mpi...@googlegroups.com
On Thu, 25 Jun 2020 at 00:09, Brian Murrell <brianj...@gmail.com> wrote:
I'm trying to build mpi4py 3.0.1 on OpenSUSE Leap 15 and getting the following errors:

1.- Any reason you are not using the latest release, mpi4py 3.0.3? Try upgrading to 3.0.3, or at least 3.0.2. Note that 3.0.1 has a bug related to not handling writable vs. read-only buffers the right way. Trust me, 3.0.3 is the one you want.

2.- I think these warnings can be safely ignored. However, make sure the flag `-fno-strict-aliasing` is being used (should be added automatically by distutils). I think these warnings are unrelated to the test failures.

3.- What's your MPICH version? The error seems to happen down the line, in MPI land. If the error persists with mpi4py 3.0.3, then you should install debuginfo packages and run things under valgrind or a debugger. 
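
For the record, a rough sketch of the kind of invocation meant here (just an example; it assumes the failing test is run via runtests.py from the source tree, and note that valgrind on a Python process will report some unrelated noise):

$ mpiexec -n 1 valgrind --track-origins=yes python2 test/runtests.py -v -i test_p2p_buf_matched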
 


--
Lisandro Dalcin
============
Research Scientist
Extreme Computing Research Center (ECRC)
King Abdullah University of Science and Technology (KAUST)
http://ecrc.kaust.edu.sa/

Brian Murrell

Jun 25, 2020, 9:22:38 AM
to mpi4py
On Wednesday, June 24, 2020 at 5:26:46 PM UTC-4 Lisandro Dalcin wrote:
1.- Any reason you are not using the latest release mpi4py 3.0.3?
 
3.0.3 has the same problems.

 
2.- I think these warnings can be safely ignored. However, make sure the flag `-fno-strict-aliasing` is being used (should be added automatically by distutils).

It seems to be.  For example:

/usr/lib64/mpi/gcc/mpich/bin/mpicc -pthread -fno-strict-aliasing -fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -g -DNDEBUG -fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -g -DOPENSSL_LOAD_CONF -fwrapv -fPIC -c src/lib-pmpi/mpe.c -o build/temp.linux-x86_64-2.7/src/lib-pmpi/mpe.o
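
As a quick cross-check (just a sketch, assuming the build uses the same system Python 2.7), the flags distutils injects can also be printed directly:

$ python2 -c "from distutils import sysconfig; print(sysconfig.get_config_var('CFLAGS'))"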

I think these warnings are unrelated to the test failures.

OK.  It's really the test failures I am concerned about.

3.- What's your MPICH version?

I'm using their master at b0c0ec7a3 because I need code that is not in an official release yet. It's not entirely ideal, I understand, and it could very well be something in their master that is causing the problem, hence my next question...

The error seems to happen down the line, in MPI land. If the error persists with mpi4py 3.0.3, then you should install debuginfo packages and run things under valgrind or a debugger. 
 
Any hints on how I can run just the failing test with, say, gdb to get a usable stacktrace?

Cheers,
b.

Lisandro Dalcin

Jun 25, 2020, 9:57:39 AM
to mpi...@googlegroups.com
On Thu, 25 Jun 2020 at 16:22, Brian Murrell <brianj...@gmail.com> wrote:
On Wednesday, June 24, 2020 at 5:26:46 PM UTC-4 Lisandro Dalcin wrote:

3.- What's your MPICH version?

I'm using their master at  b0c0ec7a3 due to needing code not in an official release yet.  It's not entirely ideal, I understand, and could very well be something in their master that is causing the problem, hence my next question...


Oh, I see. Well, it could very well be a bug in mpich/master. The failing test is rather trivial.

The error seems to happen down the line, in MPI land. If the error persists with mpi4py 3.0.3, then you should install debuginfo packages and run things under valgrind or a debugger. 
 
Any hints on how I can run just the failing test with, say, gdb to get a usable stacktrace?


Either this way

$ python test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe

or this way (the standard distutils way):

$ python test/test_p2p_buf_matched.py TestP2PMatchedSelf.testIMProbe


Brian Murrell

Jun 25, 2020, 10:20:27 AM
to mpi4py
On Thursday, June 25, 2020 at 9:57:39 AM UTC-4 dal...@gmail.com wrote:
Oh, I see. Well, it could very well be a bug in mpich/master. The failing test is rather trivial.

Indeed.
 
Either this way

$ python test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe

So this is interesting. This hits the nail right on the head of my root issue:

$ python2 test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe
[mpi...@host.example.com] match_arg (utils/args/args.c:160): unrecognized argument pmi_args
[mpi...@host.example.com] HYDU_parse_array (utils/args/args.c:175): argument matching returned error
[mpi...@host.example.com] parse_args (ui/mpich/utils.c:1523): error parsing input array
[mpi...@host.example.com] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1575): unable to parse user arguments
[mpi...@host.example.com] main (ui/mpich/mpiexec.c:128): error parsing parameters

or this way (the standard distutils way):

$ python test/test_p2p_buf_matched.py TestP2PMatchedSelf.testIMProbe

Produces the same result as above.

Previously, I was taking a hint on running the tests from the Fedora RPM packaging spec and doing:

$ PYTHONPATH=/mpi4py-3.0.3/build/lib.linux-x86_64-2.7/mpi4py/ mpiexec -n 1 python2 test/runtests.py -v --no-builddir -e spawn

So why does it work in the above case, but not when trying to run it as you suggest, without mpiexec?

I can run the one single test with:

$ PYTHONPATH=/mpi4py-3.0.3/build/lib.linux-x86_64-2.7/mpi4py/ mpiexec -n 1 python2 test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe --no-builddir -e spawn

and get the segfault and stack trace, but I suspect having mpiexec as the launcher in there is going to complicate getting it to run with gdb.

Cheers,
b.

Lisandro Dalcin

Jun 25, 2020, 11:02:36 AM
to mpi...@googlegroups.com
On Thu, 25 Jun 2020 at 17:20, Brian Murrell <brianj...@gmail.com> wrote:
On Thursday, June 25, 2020 at 9:57:39 AM UTC-4 dal...@gmail.com wrote:
Oh, I see. Well, it could very well be a bug in mpich/master. The failing test is rather trivial.

Indeed.
 
Either this way

$ python test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe

So this is interesting. This hits the nail right on the head of my root issue:

$ python2 test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe
[mpi...@host.example.com] match_arg (utils/args/args.c:160): unrecognized argument pmi_args
[mpi...@host.example.com] HYDU_parse_array (utils/args/args.c:175): argument matching returned error
[mpi...@host.example.com] parse_args (ui/mpich/utils.c:1523): error parsing input array
[mpi...@host.example.com] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1575): unable to parse user arguments
[mpi...@host.example.com] main (ui/mpich/mpiexec.c:128): error parsing parameters

or this way (the standard distutils way):

$ python test/test_p2p_buf_matched.py TestP2PMatchedSelf.testIMProbe

Produces the same result as above.

Previously, I was taking a hint on running the tests from the Fedora RPM packaging spec and doing:

$ PYTHONPATH=/mpi4py-3.0.3/build/lib.linux-x86_64-2.7/mpi4py/ mpiexec -n 1 python2 test/runtests.py -v --no-builddir -e spawn

So why does it work in the above case, but not when trying to run it as you suggest, without mpiexec?

I have no idea! Maybe a change or bug in mpich/master? It certainly works on my workstation and laptop with the latest MPICH release, and it has always worked.


I can run the one single test with:

$ PYTHONPATH=/mpi4py-3.0.3/build/lib.linux-x86_64-2.7/mpi4py/ mpiexec -n 1 python2 test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe --no-builddir -e spawn

and get the segfault and stack trace, but I suspect having mpiexec as the launcher in there is going to complicate getting it to run with gdb.


You can probably do `mpiexec -n 1 gdb ...`
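
For example, something along these lines (only a sketch, reusing your PYTHONPATH and test options from above; gdb's `--args` passes everything after it to the program being debugged):

$ PYTHONPATH=/mpi4py-3.0.3/build/lib.linux-x86_64-2.7/mpi4py/ mpiexec -n 1 gdb --args python2 test/runtests.py -v -i test_p2p_buf_matched TestP2PMatchedSelf.testIMProbe --no-builddir -e spawn

Then `run` at the gdb prompt, and `bt` after the SIGSEGV, should give a usable backtrace once the debuginfo packages are installed.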

Suggestion: Why don't you just try to put that failing piece of code in a separate script, and remove unneeded stuff until you have a minimal reproducer? Then you can email the MPICH folks; Rob Latham may help you, he "speaks" mpi4py.

Look, here you have it: try this code in its own script and try to find the first failure.

from mpi4py import MPI

def assertEqual(a, b):
    assert a == b

comm = MPI.COMM_WORLD.Dup()
m = comm.improbe()
assertEqual(m, None)
m = comm.improbe(MPI.ANY_SOURCE)
assertEqual(m, None)
m = comm.improbe(MPI.ANY_SOURCE, MPI.ANY_TAG)
assertEqual(m, None)
status = MPI.Status()
m = comm.improbe(MPI.ANY_SOURCE, MPI.ANY_TAG, status)
assertEqual(m, None)
assertEqual(status.source, MPI.ANY_SOURCE)
assertEqual(status.tag,    MPI.ANY_TAG)
assertEqual(status.error,  MPI.SUCCESS)
m = MPI.Message.iprobe(comm)
assertEqual(m, None)
s = comm.isend(None, comm.rank, 0)
r = comm.mprobe(comm.rank, 0).irecv()
MPI.Request.waitall([s,r])


Brian J. Murrell

Jun 25, 2020, 1:26:41 PM
to mpi...@googlegroups.com
On Thu, 2020-06-25 at 18:02 +0300, Lisandro Dalcin wrote:
>
> I have no idea! Maybe a change or bug in mpich/master?

You might think, except this all works (same version of mpich and
mpi4py) on CentOS 7.

> It certainly works on my workstation and laptop with the latest
> MPICH release, and it has always worked.

Agreed.

> You can probably do `mpiexec -n 1 gdb ...`

Yeah, I did actually try that and it seems to work. Now it's just a
matter of getting a reasonable set of -debuginfo packages on a Leap 15
machine.

> Suggestion: Why don't you just try to put that failing piece of code
> in a separate script, and remove unneeded stuff until you have a
> minimal reproducer? Then you can email the MPICH folks; Rob Latham
> may help you, he "speaks" mpi4py.
>
> Look, here you have it: try this code in its own script and try to
> find the first failure.
>
> from mpi4py import MPI

From previous experiments, just this import is enough, in fact:

$ PYTHONPATH=/mpi4py-3.0.3/build/lib.linux-x86_64-2.7/mpi4py/ python2 -c "from mpi4py import MPI"
[mpi...@host.example.com] match_arg (utils/args/args.c:160): unrecognized argument pmi_args
[mpi...@host.example.com] HYDU_parse_array (utils/args/args.c:175): argument matching returned error
[mpi...@host.example.com] parse_args (ui/mpich/utils.c:1523): error parsing input array
[mpi...@host.example.com] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1575): unable to parse user arguments
[mpi...@host.example.com] main (ui/mpich/mpiexec.c:128): error parsing parameters

Where it's now sitting waiting for a socket connection (from strace):

[pid 3567162] accept(13,

but before that another thread did:

[pid 3567164] execve("/usr/lib64/mpi/gcc/mpich/bin/mpiexec", ["mpiexec", "-pmi_args", "37343", "default_interface", "default_key", "3567164"], 0x7ffc443ef3d8 /* 87 vars */) = 0

which is where the errors about unrecognized arguments are coming from.
But I cannot find anywhere in mpi4py where an mpiexec with those
arguments is being called. Those arguments do exist in
/usr/lib64/mpi/gcc/mpich/lib64/libmpi.so.0 though, so it looks like
mpich itself is running that execve.

Can "from mpi4py import MPI" be decomposed down into a small enough
fragment of C code to reproduce this so that mpi4py is right out of the
picture for an mpich bug report?

Cheers,
b.


Lisandro Dalcin

Jun 25, 2020, 4:17:49 PM
to mpi...@googlegroups.com
On Thu, 25 Jun 2020 at 20:26, Brian J. Murrell <brianj...@gmail.com> wrote:

Can "from mpi4py import MPI" be decomposed down into a small enough
fragment of C code to reproduce this so that mpi4py is right out of the
picture for an mpich bug report? 

No, it is not that easy. But "from mpi4py import MPI" triggers a call to MPI_Init(). Maybe you can, instead of using mpi4py, use ctypes to dlopen libmpi.so and call MPI_Init()? Here you have some code; put it in a script and try to run it. You may need to adjust LD_LIBRARY_PATH for ctypes to find the library. Just to be sure, instead of "libmpi.so", use the full path "/usr/lib64/mpi/gcc/mpich/lib64/libmpi.so.0". If this code doesn't work, then something funky is going on at MPI_Init(), and there you have your minimal reproducer, one that can be run with any Python install out there and without external dependencies.

import ctypes

# Load the MPI library with RTLD_GLOBAL so its symbols are visible process-wide
libmpi = ctypes.CDLL("libmpi.so", ctypes.RTLD_GLOBAL)

MPI_Init = libmpi.MPI_Init
MPI_Init.restype = ctypes.c_int
MPI_Init.argtypes = [ctypes.c_voidp]*2

MPI_Finalize = libmpi.MPI_Finalize
MPI_Finalize.restype = ctypes.c_int
MPI_Finalize.argtypes = []

# Initialize and immediately finalize MPI; any funkiness at MPI_Init()
# should show up here with mpi4py completely out of the picture
MPI_Init(None, None)
MPI_Finalize()
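
For instance (the script name here is just a placeholder), run it both directly and under mpiexec and compare:

$ LD_LIBRARY_PATH=/usr/lib64/mpi/gcc/mpich/lib64 python2 mpi_init_test.py
$ LD_LIBRARY_PATH=/usr/lib64/mpi/gcc/mpich/lib64 mpiexec -n 1 python2 mpi_init_test.py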


By the way, did you try running some trivial C MPI program, like demo/helloworld.c in mpi4py's sources, both with and without mpiexec? Does that work either way?
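
A minimal sketch of that check, assuming the MPICH compiler wrapper at the path from your build log:

$ /usr/lib64/mpi/gcc/mpich/bin/mpicc demo/helloworld.c -o helloworld
$ ./helloworld
$ mpiexec -n 1 ./helloworld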

