We have been using mpi4py without problems for many years on our old cluster, which has run successive versions of Debian stable; it is currently on "stretch", i.e. oldstable.
Recently we installed a new cluster running the current Debian stable, "buster", and have been experiencing a strange problem for which we have not been able to find a solution. Perhaps someone here has an idea?
Running the command
mpiexec -n 31 python3 -c "import numpy as np; from mpi4py import MPI"
fails with a segmentation fault (see the attached log of output to stderr).
Strangely, the problem does not occur with -n 30 or fewer processes; it starts at -n 31, on both the frontend (32 cores) and the compute nodes (64 cores). It also does not occur when numpy is not imported, or when numpy is imported after mpi4py.
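Since the crash depends on importing numpy before mpi4py, one thing worth checking is whether numpy's BLAS spawns a thread pool per rank at import time (my speculation, not something established above). Here is a small, self-contained sketch that counts the process's OS threads before and after importing numpy; the helper is my own addition and the /proc-based counting is Linux-specific:

```python
import os


def os_thread_count():
    """Count this process's kernel threads (Linux-specific, via /proc)."""
    try:
        return len(os.listdir("/proc/self/task"))
    except (FileNotFoundError, NotADirectoryError):
        # Fallback for non-Linux systems: only sees Python-level threads.
        import threading
        return threading.active_count()


t0 = os_thread_count()
try:
    import numpy as np  # the import that precedes mpi4py in the failing case
except ImportError:
    np = None  # numpy not installed here; counts will simply match
t1 = os_thread_count()

print(f"threads before numpy: {t0}, after: {t1}")
```

If the "after" count jumps, each of the 31+ ranks would be starting its own BLAS thread pool before MPI_Init runs, which could plausibly matter at higher process counts.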
The above happens with mpi4py 2.0.0 as included in Debian buster. We also tried the latest mpi4py; the problem persists.
I searched on this list for possible solutions. One thing I tried was
mpiexec -n 31 python3 -c "import numpy as np; import mpi4py; mpi4py.rc.threads = False; from mpi4py import MPI"
but this does not help.
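Another diagnostic we have not yet tried, in case someone thinks it is relevant (this is an assumption on my part, not an established fix): pinning the BLAS/OpenMP thread pools to a single thread before launching, to rule out an interaction between numpy's threading and MPI initialization:

```shell
# Speculative diagnostic: force single-threaded BLAS before MPI starts.
export OMP_NUM_THREADS=1        # honored by OpenMP-based BLAS builds
export OPENBLAS_NUM_THREADS=1   # OpenBLAS-specific knob
mpiexec -n 31 python3 -c "import numpy as np; from mpi4py import MPI"
```

If the segfault disappears with these set, that would point at the per-rank thread pools rather than mpi4py itself.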