Running an MPI program with MVAPICH


George Zaki

May 18, 2018, 9:48:15 AM

to singularity

Hi singularity team,


I would like to run an MPI program in a Singularity container. The program is compiled with MVAPICH2 2.2 using GCC 5.4.


I can see that my cluster has a version of MVAPICH2 2.2 compiled with GCC 5.3.


When I run:


mpiexec -n 1 singularity exec /path/to/sing/image ./mpi-pi.o

the call does not return.

Does the GCC version have to be exactly the same? I tried to switch the compiler in this image:


BootStrap: docker
From: nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04


However, when GCC 5.3 is used, MVAPICH does not build correctly.


If that's the problem, is there a preferred method of switching the GCC version in this Singularity container?


Thanks,
George

Kandes, Martin

May 18, 2018, 1:30:11 PM

to singu...@lbl.gov
Hi George,

I run with different GCC compiler versions inside and outside my MPI containers, so I would be surprised if that is the issue here. I'm not sure I have a good recommendation for where to start debugging your problem, but I might start by double-checking that the MPI versions match inside and outside the container; e.g., see [1].

Marty

[1]

[mkandes@comet-ln3 ~]$ srun --partition=debug --pty --nodes=1 --ntasks-per-node=24 -t 00:30:00 --wait=0 --export=ALL /bin/bash
srun: job 16364303 queued and waiting for resources
srun: job 16364303 has been allocated resources
[mkandes@comet-14-06 ~]$ cd /scratch/mkandes/16364303/
[mkandes@comet-14-06 16364303]$ cp /oasis/scratch/comet/mkandes/temp_project/singularity/images/ubuntu-mvapich2.img ./
[mkandes@comet-14-06 16364303]$ module purge
[mkandes@comet-14-06 16364303]$ module load gnu
[mkandes@comet-14-06 16364303]$ module load mvapich2_ib
[mkandes@comet-14-06 16364303]$ module list
Currently Loaded Modulefiles:
  1) gnu/4.9.2         2) mvapich2_ib/2.1
[mkandes@comet-14-06 16364303]$ module load singularity
[mkandes@comet-14-06 16364303]$ gcc --version
gcc (GCC) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

[mkandes@comet-14-06 16364303]$ mpirun --version
HYDRA build details:
    Version:                                 3.1.4
    Release Date:                            Thu Apr  2 17:15:15 EDT 2015
    CC:                              gcc  -fPIC -O3 
    CXX:                             g++  -fPIC -O3 
    F77:                             gfortran -fPIC -O3 
    F90:                             gfortran -fPIC -O3 
    Configure options:                       '--disable-option-checking' '--prefix=/opt/mvapich2/gnu/ib' '--enable-shared' '--enable-sharedlibs=gcc' '--with-hwloc' '--enable-f77' '--enable-fc' '--enable-hybrid' '--with-ib-include=/usr/include/infiniband' '--with-ib-libpath=/usr/lib64' '--enable-fast=O3' '--with-limic2=/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/../..//cache/build-limic' '--with-slurm=/usr/lib64/slurm' '--with-file-system=lustre' 'CC=gcc' 'CFLAGS=-fPIC -O3 -O3' 'CXX=g++' 'CXXFLAGS=-fPIC -O3 -O3' 'FC=gfortran' 'FCFLAGS=-fPIC -O3 -O3' 'F77=gfortran' 'FFLAGS=-L/usr/lib64 -L/lib -L/lib -fPIC -O3 -O3' '--cache-file=/dev/null' '--srcdir=.' 'LDFLAGS=-L/usr/lib64 -L/lib -L/lib -L/lib -Wl,-rpath,/lib -L/lib -Wl,-rpath,/lib -L/usr/lib64 -L/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/../..//cache/build-limic/lib -L/lib -L/lib' 'LIBS=-libmad -lrdmacm -libumad -libverbs -ldl -lrt -llimic2 -lm -lpthread ' 'CPPFLAGS=-I/usr/include/infiniband -I/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/mvapich2-2.1/src/mpl/include -I/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/mvapich2-2.1/src/mpl/include -I/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/mvapich2-2.1/src/openpa/src -I/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/mvapich2-2.1/src/openpa/src -D_REENTRANT -I/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/mvapich2-2.1/src/mpi/romio/include -I/include -I/include -I/usr/include/infiniband -I/state/partition1/git/mpi-roll/BUILD/sdsc-mvapich2_gnu_ib-2.1/../..//cache/build-limic/include -I/include -I/include'
    Process Manager:                         pmi
    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
    Topology libraries available:            hwloc
    Resource management kernels available:   user slurm ll lsf sge pbs cobalt
    Checkpointing libraries available:      
    Demux engines available:                 poll select
[mkandes@comet-14-06 16364303]$ singularity shell ubuntu-mvapich2.img
Singularity: Invoking an interactive shell within container...

Singularity ubuntu-mvapich2.img:/scratch/mkandes/16364303> gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Singularity ubuntu-mvapich2.img:/scratch/mkandes/16364303> mpirun --version
HYDRA build details:
    Version:                                 3.1.4
    Release Date:                            Thu Apr  2 17:15:15 EDT 2015
    CC:                              gcc   
    CXX:                             g++   
    F77:                             gfortran  
    F90:                             gfortran  
    Configure options:                       '--disable-option-checking' '--prefix=/opt/mvapich2' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'CFLAGS= -DNDEBUG -DNVALGRIND -O2' 'LDFLAGS=-L/lib -L/lib -L/lib -Wl,-rpath,/lib -L/lib -Wl,-rpath,/lib -L/lib -L/lib' 'LIBS=-libmad -lrdmacm -libumad -libverbs -ldl -lrt -lm -lpthread ' 'CPPFLAGS= -I/tmp/mvapich2-2.1/src/mpl/include -I/tmp/mvapich2-2.1/src/mpl/include -I/tmp/mvapich2-2.1/src/openpa/src -I/tmp/mvapich2-2.1/src/openpa/src -D_REENTRANT -I/tmp/mvapich2-2.1/src/mpi/romio/include -I/include -I/include -I/include -I/include'
    Process Manager:                         pmi
    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
    Topology libraries available:            hwloc
    Resource management kernels available:   user slurm ll lsf sge pbs cobalt
    Checkpointing libraries available:      
    Demux engines available:                 poll select
Singularity ubuntu-mvapich2.img:/scratch/mkandes/16364303>
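The side-by-side check in [1] can be condensed into a short script. This is a hedged sketch: the image name and the awk field index are assumptions based on the HYDRA `mpirun --version` output format shown above.

```shell
# Pull the HYDRA "Version:" field from `mpirun --version` on each side and
# compare them. The image name is illustrative; adjust for your site.
host_ver=$(mpirun --version | awk '/Version:/ {print $2}')
ctr_ver=$(singularity exec ubuntu-mvapich2.img mpirun --version | awk '/Version:/ {print $2}')

if [ "$host_ver" = "$ctr_ver" ]; then
    echo "MPI versions match: $host_ver"
else
    echo "MISMATCH: host=$host_ver container=$ctr_ver"
fi
```

Note that matching HYDRA version numbers are necessary but not sufficient; the configure options (PMI, launcher) shown in the full output can still differ.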



--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity...@lbl.gov.

George Zaki

May 18, 2018, 1:57:15 PM

to singu...@lbl.gov
Thanks Marty

Below are the values I got; any obvious mismatch? I got this working fine with OpenMPI.

Here is also what I try to run:

singularity exec /data/zakigf/candle/swift-hypervisor-horovod-mvapich.simg mpicc mpi-pi.c -o  mpi-pi.o 

[zakigf@cn2360 mpi-example]$ mpiexec -n 1 singularity exec /data/zakigf/candle/swift-hypervisor-horovod-mvapich.simg mpi-pi.o 

Then I kill it after getting no response:

^C[mpiexec@cn2360] Sending Ctrl-C to processes as requested

[mpiexec@cn2360] Press Ctrl-C again to force abort

[mpiexec@cn2360] HYDU_sock_write (utils/sock/sock.c:286): write error (Bad file descriptor)

[mpiexec@cn2360] HYD_pmcd_pmiserv_send_signal (pm/pmiserv/pmiserv_cb.c:169): unable to write data to proxy

[mpiexec@cn2360] ui_cmd_cb (pm/pmiserv/pmiserv_pmci.c:79): unable to send signal downstream

[mpiexec@cn2360] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status

[mpiexec@cn2360] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:198): error waiting for event

[mpiexec@cn2360] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion


Now about the versions:

Singularity: Invoking an interactive shell within container...


Singularity swift-hypervisor-horovod-mvapich.simg:~> mpiexec --version 

HYDRA build details:

    Version:                                 3.1.4

    Release Date:                            Wed Sep  7 14:33:43 EDT 2016

    CC:                              gcc    

    CXX:                             g++    

    F77:                             gfortran   

    F90:                             gfortran   

    Configure options:                       '--disable-option-checking' '--prefix=NONE' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'CFLAGS= -DNDEBUG -DNVALGRIND -O2' 'LDFLAGS=-L/lib -L/lib -L/lib -Wl,-rpath,/lib -L/lib -Wl,-rpath,/lib -L/lib -L/lib' 'LIBS=-libmad -libumad -libverbs -ldl -lrt -lm -lpthread ' 'CPPFLAGS= -I/tmp/mvapich2-2.2/src/mpl/include -I/tmp/mvapich2-2.2/src/mpl/include -I/tmp/mvapich2-2.2/src/openpa/src -I/tmp/mvapich2-2.2/src/openpa/src -D_REENTRANT -I/tmp/mvapich2-2.2/src/mpi/romio/include -I/include -I/include -I/include -I/include'

    Process Manager:                         pmi

    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist

    Topology libraries available:            hwloc

    Resource management kernels available:   user slurm ll lsf sge pbs cobalt

    Checkpointing libraries available:       

    Demux engines available:                 poll select

Singularity swift-hypervisor-horovod-mvapich.simg:~> gcc --version 

gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609

Copyright (C) 2015 Free Software Foundation, Inc.

This is free software; see the source for copying conditions.  There is NO

warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


Singularity swift-hypervisor-horovod-mvapich.simg:~> exit 

exit


[zakigf@cn2360 ~]$ ml mvapich2/2.2/gcc-5.3.0 

[+] Loading mvapich2 2.2 for GCC 5.3.0

[zakigf@cn2360 ~]$ gcc --version

gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17)

Copyright (C) 2010 Free Software Foundation, Inc.

This is free software; see the source for copying conditions.  There is NO

warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


[zakigf@cn2360 ~]$ mpiexec --version

HYDRA build details:

    Version:                                 3.1.4

    Release Date:                            Wed Sep  7 14:33:43 EDT 2016

    CC:                              /usr/local/GCC/5.3.0/bin/gcc    

    CXX:                             /usr/local/GCC/5.3.0/bin/g++    

    F77:                             /usr/local/GCC/5.3.0/bin/gfortran   

    F90:                             /usr/local/GCC/5.3.0/bin/gfortran   

    Configure options:                       '--disable-option-checking' '--prefix=/usr/local/MVAPICH2/2.2/gcc-5.3.0' '--with-slurm-lib=/usr/local/slurm/lib' '--with-slurm-include=/usr/local/slurm/include' '--enable-debug=none' '--enable-timing=runtime' 'CC=/usr/local/GCC/5.3.0/bin/gcc' 'CXX=/usr/local/GCC/5.3.0/bin/g++' 'FC=/usr/local/GCC/5.3.0/bin/gfortran' 'F77=/usr/local/GCC/5.3.0/bin/gfortran' '--cache-file=/dev/null' '--srcdir=.' 'CFLAGS= -DNDEBUG -DNVALGRIND -O2' 'LDFLAGS=-L/lib -L/lib -L/lib -Wl,-rpath,/lib -L/lib -Wl,-rpath,/lib -L/lib -L/lib' 'LIBS=-libmad -libumad -libverbs -ldl -lrt -lm -lpthread ' 'CPPFLAGS= -I/usr/local/src/mvapich2/mvapich2-2.2/src/mpl/include -I/usr/local/src/mvapich2/mvapich2-2.2/src/mpl/include -I/usr/local/src/mvapich2/mvapich2-2.2/src/openpa/src -I/usr/local/src/mvapich2/mvapich2-2.2/src/openpa/src -D_REENTRANT -I/usr/local/src/mvapich2/mvapich2-2.2/src/mpi/romio/include -I/include -I/include -I/include -I/include'

    Process Manager:                         pmi

    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist

    Topology libraries available:            hwloc

    Resource management kernels available:   user slurm ll lsf sge pbs cobalt

    Checkpointing libraries available:       

    Demux engines available:                 poll select


Jason Stover

May 18, 2018, 2:14:29 PM

to singu...@lbl.gov
Hi George,

Can you run it from inside the container? For example:

singularity exec /data/zakigf/candle/swift-hypervisor-horovod-mvapich.simg mpiexec -n 1 mpi-pi.o

-J

victor sv

May 21, 2018, 3:03:53 AM

to singu...@lbl.gov
Hi George,

I don't have any experience with MVAPICH, but I think the compiler version has no effect here.

Jason's suggestion is a good starting point to check whether the container MPI works on a single node (as opposed to across several nodes).

To run the hybrid MPI approach, you should take into account that both the version and the vendor of MPI and PMI must match. Can you check if the PMI libraries match?
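One way to inspect the PMI linkage is with `ldd` (a sketch; the binary and image names are taken from earlier in the thread, and the grep pattern is an assumption about how the PMI/Slurm libraries are named):

```shell
# Show which PMI/Slurm libraries the compiled program is linked against,
# first on the host, then inside the container.
ldd ./mpi-pi.o | grep -iE 'pmi|slurm'
singularity exec /data/zakigf/candle/swift-hypervisor-horovod-mvapich.simg \
    sh -c 'ldd ./mpi-pi.o | grep -iE "pmi|slurm"'
```

If one side lists a Slurm PMI library and the other does not, that mismatch is a plausible cause of the hang.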

BR,
Víctor.



George Zaki

May 21, 2018, 8:58:35 AM

to singu...@lbl.gov
Hi Jason, 

I was not able to run the program even without Singularity when I use MVAPICH. I am in contact with our system admin.

Here is the output I got when I ran mpiexec within Singularity:

mpiexec -n 1 mpi-pi.o

[mpiexec@cn3137] HYDU_create_process (utils/launch/launch.c:75): execvp error on file srun (No such file or directory)

Best regards,

George.




victor sv

May 21, 2018, 9:17:25 AM

to singu...@lbl.gov
Hi George, 

please check that you are calling the right program. Is the executable's directory in the PATH environment variable? If not, you have to prepend the path when calling the executable.

Take a look at this:


BR,
Víctor.



George Zaki

May 21, 2018, 11:36:06 AM

to singu...@lbl.gov
Thanks Victor, 

Here are the results with the path of the executable:

Singularity swift-hypervisor-horovod-mvapich.simg:~/mpi-examples/mpi-example> mpiexec -n 1 ./mpi-pi.o 

[mpiexec@cn3112] HYDU_create_process (utils/launch/launch.c:75): execvp error on file srun (No such file or directory)





victor sv

May 23, 2018, 2:55:56 AM

to singu...@lbl.gov
Hi George,

it's strange. The file it cannot find is "srun", not "mpi-pi.o". Is this something related to the Slurm workload manager? Is srun installed inside the container?
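Two quick things worth trying here (hedged: `-launcher` is a standard mpiexec.hydra option, and `fork` appears under "Launchers available" in the build details quoted earlier in the thread, but neither has been verified on this particular setup):

```shell
# 1. Check whether srun exists inside the container.
singularity exec swift-hypervisor-horovod-mvapich.simg which srun \
    || echo "srun not found in container"

# 2. If it is missing, tell Hydra to use a launcher that does exist,
#    instead of its compiled-in Slurm default.
mpiexec -launcher fork -n 1 ./mpi-pi.o
```

The fork launcher only makes sense for a single-node test; multi-node runs still need a launcher (or PMI) that both the host and the container agree on.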

Best,
Víctor



George Zaki

May 23, 2018, 4:08:59 PM

to singu...@lbl.gov
Hi Victor,

No, I did not explicitly install Slurm; it is not installed in the container. Does the native installation of MVAPICH require Slurm to be installed?


Thanks and regards,
George.



victor sv

May 24, 2018, 2:54:52 AM

to singu...@lbl.gov
HI George,

as you can see in the configure options, the native installation was configured with "--with-slurm=/usr/lib64/slurm". Probably it is being linked against the Slurm PMI.

One possible test is to compile a version of MVAPICH without Slurm support on the host and try again. If this works, check whether it is possible to run the native MVAPICH with its own PMI instead of the Slurm one.

On the other hand, if mpirun is some kind of alias of "srun", you can check the PMI versions supported with "srun --mpi=list" and then, if any of them is compatible with the PMI(x) version in the container, choose the right one with "srun --mpi=YOURCHOICE".

Sorry, but I have no experience with MVAPICH.
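The rebuild-without-Slurm test could be sketched like this. It is a build-configuration sketch, not a verified recipe: the tarball name, install prefix, and the `--without-slurm` flag are assumptions based on the autoconf-style configure options visible in the build details above.

```shell
# Build a host-side MVAPICH2 2.2 without Slurm integration, into a private
# prefix, so that mpiexec falls back to its own PMI and launcher instead
# of trying to exec srun.
tar xf mvapich2-2.2.tar.gz
cd mvapich2-2.2
./configure --prefix=$HOME/mvapich2-2.2-noslurm --without-slurm
make -j4 && make install

# Put the new build first in PATH before re-running the test.
export PATH=$HOME/mvapich2-2.2-noslurm/bin:$PATH
```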

Hope it helps ... 
Víctor


