[maker-devel] some problem with MPI


Michael Thon

Sep 23, 2015, 3:45:49 AM
to MAKER
Hi -

I'm installing MAKER and I can't get it to run with MPI. I'm using Ubuntu Linux and the Open MPI packages from the distribution's package manager. When I ran perl Build.PL I made sure that the paths were correct. Running ./Build install gave me these errors:

./Build install
Configuring MAKER with MPI support
Installing MAKER...
Configuring MAKER with MPI support
Subroutine dl_load_flags redefined at (eval 125) line 8.
Subroutine Parallel::Application::MPI::C_MPI_ANY_SOURCE redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_ANY_TAG redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_SUCCESS redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_Init redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_Finalize redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_Comm_rank redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_Comm_size redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_Send redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::C_MPI_Recv redefined at (eval 125) line 9.
Subroutine Parallel::Application::MPI::_comment redefined at (eval 125) line 9.
Installing /home/mike/maker/maker/src/../perl/lib/MAKER/ConfigData.pm
Installing /home/mike/maker/maker/src/../perl/lib/auto/Parallel/Application/MPI/MPI.inl
Installing /home/mike/maker/maker/src/../perl/man/MAKER::ConfigData.3pm
Skip /home/mike/maker/maker/src/../perl/config-x86_64-linux-gnu-thread-multi-5.018002 (unchanged)


Here are the errors I get when trying to run MAKER. MAKER seems to work fine if I run it without MPI. Any suggestions are welcome.
Thanks

mpiexec -n 2 /home/mike/maker/maker/bin/maker -nodatastore >out
[odie:28576] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_paffinity_hwloc: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28576] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_carto_auto_detect: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28576] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_carto_file: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28576] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28576] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_posix: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28576] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

opal_shmem_base_select failed
--> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
[odie:28576] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file runtime/orte_init.c at line 79
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
[odie:28576] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

ompi_mpi_init: orte_init failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
[odie:28575] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_paffinity_hwloc: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28575] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_carto_auto_detect: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28575] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_carto_file: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28575] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28575] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_posix: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
[odie:28575] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv: perhaps a missing symbol, or compiled for a different version of Open MPI? (ignored)
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

opal_shmem_base_select failed
--> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
[odie:28575] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file runtime/orte_init.c at line 79
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
[odie:28575] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

ompi_mpi_init: orte_init failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpiexec noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
_______________________________________________
maker-devel mailing list
maker...@box290.bluehost.com
http://box290.bluehost.com/mailman/listinfo/maker-devel_yandell-lab.org

Carson Holt

Sep 28, 2015, 11:46:29 AM
to Michael Thon, MAKER
Sorry for the slow reply. I’ve been away for the last week.

I’ve found that using Ubuntu’s apt-get doesn’t always set up OpenMPI and MPICH2 correctly for shared libraries. You may have to do a manual install.

Also, if you're using Open MPI, make sure to set the LD_PRELOAD environment variable to the location of libmpi.so before even trying to install MAKER. It must also be set before running MAKER (or any program that uses Open MPI's shared libraries), so it's best to just add it to your ~/.bash_profile (e.g. export LD_PRELOAD=/usr/local/openmpi/lib/libmpi.so).
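[Editor's note: assuming a manual Open MPI install under /usr/local/openmpi, the path in Carson's example (yours may differ), the ~/.bash_profile addition would look something like this:]

```shell
# Preload Open MPI's shared library so Perl's dynamic loader resolves
# MPI symbols correctly. Adjust the path to your actual install;
# /usr/local/openmpi is the default prefix for a manual build.
export LD_PRELOAD=/usr/local/openmpi/lib/libmpi.so
```

[Run `ls /usr/local/openmpi/lib/libmpi.so` (or search with `find / -name 'libmpi.so*'`) first to confirm the library is actually at that path before exporting it.]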

--Carson

Michael Thon

Sep 29, 2015, 10:59:30 PM
to Carson Holt, MAKER
Apparently my system (Ubuntu 14.04) has both mpiexec and mpiexec.openmpi executables. mpiexec.openmpi works with MAKER.
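[Editor's note: on Debian/Ubuntu the unqualified mpiexec is typically a symlink managed by the alternatives system, so it may resolve to MPICH rather than Open MPI, which would explain the symptoms above. A quick way to check which implementation it points at (a sketch; it handles mpiexec being absent from PATH):]

```shell
# Show which real binary the generic mpiexec resolves to.
target=$(command -v mpiexec || true)
if [ -n "$target" ]; then
    readlink -f "$target"   # e.g. a path under an mpich or openmpi directory
else
    echo "mpiexec not on PATH"
fi
```

[On systems where the MPI packages register with update-alternatives, `sudo update-alternatives --config mpirun` lets you switch the system-wide default; otherwise calling the suffixed mpiexec.openmpi directly, as above, sidesteps the issue.]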

-Mike

Carson Holt

Sep 29, 2015, 11:26:33 PM
to Michael Thon, MAKER
Good to know.

Thanks,
Carson
