1. --mca btl ^openib
2. --mca btl vader,tcp,self --mca btl_tcp_if_include ib0
3. --mca btl vader,tcp,self --mca btl_tcp_if_include eth0
One of these may fix your issue. The first tells OpenMPI not to use the InfiniBand (openib) transport; the InfiniBand libraries use registered memory in a way that causes system calls to generate segfaults, so disabling it will usually force communication over another adapter. The second still uses the InfiniBand adapter, but runs TCP over InfiniBand (an indirect way to bypass the problem-causing libraries). The third specifically forces the use of the Ethernet adapter instead of the InfiniBand adapter.
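For example, these flags just get added to the MPI launch command in front of the maker executable. A minimal sketch (the process count of 4 is illustrative, and this assumes MAKER was built against this OpenMPI installation):

# option 1: disable the openib transport entirely
mpiexec -n 4 --mca btl ^openib maker
# option 3: force TCP over the Ethernet interface
mpiexec -n 4 --mca btl vader,tcp,self --mca btl_tcp_if_include eth0 maker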
--Carson
Alternatively, MPICH3 and Intel MPI (with some extra configuration for Intel MPI) can be used. If you decide to try Intel MPI, let me know and I can provide you with the configuration info.
—Carson
A few options you will need if trying Intel MPI (a full example invocation is sketched after this list):
-binding pin=disable #required to disable processor affinity (otherwise MAKER's calls to BLAST and other programs that are parallelized independently of MPI may not work)
Environment variables to set:
export I_MPI_PIN_DOMAIN=node #otherwise MAKER's calls to BLAST and other programs that are parallelized independently of MPI may not work
export I_MPI_FABRICS='shm:tcp' #avoid potential complications with the OpenFabrics libraries (they block system calls because of how they use registered memory, i.e. MAKER calling BLAST would fail)
export I_MPI_HYDRA_IFACE=ib0 #set to eth0 if you don't have an IP-over-InfiniBand interface (required because of the I_MPI_FABRICS change above)
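Putting those pieces together, a minimal sketch of an Intel MPI launch might look like the following (the process count is illustrative, and ib0 vs. eth0 depends on your interfaces as noted above):

# assumed settings from above; adjust ib0/eth0 to match your cluster
export I_MPI_PIN_DOMAIN=node
export I_MPI_FABRICS='shm:tcp'
export I_MPI_HYDRA_IFACE=ib0
# illustrative process count; -binding pin=disable as described above
mpirun -binding pin=disable -n 4 maker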
Also, make sure to compile on the same node you run on. You can try expanding to other nodes after that.
—Carson