--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity+unsubscribe@lbl.gov.
2. Yes, you should embed all of the libraries and headers necessary to work on the hardware configurations you wish to be compatible with. Luckily, we have figured this out with GPUs, but not yet for OFED, QLogic, or OmniPath.
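In practice, "embedding" here means installing the fabric userspace libraries inside the image at build time, e.g. from the %post section of a Singularity bootstrap definition. A minimal sketch for an Ubuntu 16.04 image, using the PSM package names Cédric mentions below (other fabrics, such as Mellanox OFED, ship their own packages and are not covered here):

```shell
# Run inside the container, e.g. from the %post section of a bootstrap
# definition. Package names are for Ubuntu 16.04 (from Cédric's case);
# adjust for your distribution and fabric.
apt-get update
apt-get install -y libpsm-infinipath1 libpsm-infinipath1-dev
```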
Hope that helps!
Greg
On Thu, May 11, 2017 at 2:55 PM, Cédric Clerget <cedric....@gmail.com> wrote:
Hello,
I will be speaking next week at a workshop about reproducible science and portability, and I don't want to say anything inaccurate about MPI and Singularity containers.
I managed to run MPI applications with Singularity and OpenMPI.
So I installed version 2.1.0rc4 on the host (CentOS 6) and in the container (Ubuntu 16.04). Following the documentation, I simply compiled OpenMPI in the container with:
./configure && make && make install
On the host: ./configure --with-sge --with-psm && make && make install
Everything worked as expected with a hello-world example. To be sure it ran over InfiniBand, I launched a PingPong between two hosts,
and the latency results showed it was using Ethernet.
The solution was to install the libpsm-infinipath1 and libpsm-infinipath1-dev packages and recompile OMPI with ./configure --with-psm.
All of the documentation just runs ./configure in the container without any options.
I read in this group that MVAPICH works without problems with Singularity, so I gave it a try: same behaviour; I needed to install the PSM headers and recompile there too.
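A sketch of the rebuild described above (the version is from this thread; the binary name and image name in the launch line are placeholders, not from the thread):

```shell
# Inside the container: rebuild OpenMPI 2.1.0rc4 with PSM support.
# Requires the libpsm-infinipath1-dev headers to be installed first.
./configure --with-psm
make
make install

# On the host: launch through Singularity as usual. "hello" and
# "container.img" are placeholders for your own binary and image.
mpirun -np 2 singularity exec container.img /usr/bin/hello
```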
This brings me to these questions (I would be grateful if you could share your experience):
- Are there options that should be passed to configure for OMPI/MVAPICH on the host?
- For portability, should I embed all of the libs/headers needed to work with many hardware configurations (Mellanox, QLogic, Intel)?
Regards,
Cédric Clerget
Now if you compiled your OFED libraries in the same place you were bind mounting the OpenMPI from on your host, *and* if those libraries were glibc compatible with your container (which I am assuming they were, because you didn't mention any problems), then all would indeed work as expected!
Hope that helps, and yes on the PR to the docs! PLEASE!
Thanks Greg,

> Now if you compiled your OFED libraries in the same place you were bind mounting the OpenMPI from on your host, *and* if those libraries were glibc compatible with your container (which I am assuming they were, because you didn't mention any problems), then all would indeed work as expected!

You've surmised correctly! In this case I was running a recent Ubuntu (16.04) container on an older (CentOS 7) host, with OFED and MPI compiled with the older CentOS 7 glibc.

I guess my strategy of bind mounting helps me run new software on older stable cluster nodes, but it would not help with the reverse strategy of running old stable containers for reproducible science on new clusters.
So, is there any functional difference in container integration between the Open MPI 1.x series and the Open MPI 2.1 series? I'm not sure which (if any) of the above assumptions I can relax for 2.1.
@Vanessa: That helps, but you didn't notice that I submitted that PR to you ;-) I want to update it to make sure that it's crystal clear what the Open MPI 2.1 series enables and what the differences with the Open MPI 1.x and 2.0 series are (at the moment I can't find any when using bind mounts and container glibc > host glibc, so the example should work for 1.10 as well as 2.1, although I need to verify).

> Hope that helps, and yes on the PR to the docs! PLEASE!

Definitely!
--
Thanks Greg, good idea. I'll put it on my list. I first want to find some time to do a few more experiments with things like the GPUs and MPI. But, once it's on a list, it'll get done sometime ;-)
MC