[maker-devel] MPI test


Sebastien Moretti

Aug 2, 2016, 10:11:31 AM
to maker...@yandell-lab.org
Hi

I compiled maker with MPI support and would like to test it.
So I tried to run mpi_evaluator but it fails because Parallel/MPIcar.pm
is missing.

What does mpi_evaluator do?
How can I properly test MPI in maker?

maker 2.31.8, Linux x86_64 2.6.32, Perl 5.18.2
openmpi 1.8.1

Regards

--
Sébastien Moretti
Staff Scientist
SIB Vital-IT EMBnet, Quartier Sorge - Genopode
CH-1015 Lausanne, Switzerland
Tel.: +41 (21) 692 4079/4221
http://www.vital-it.ch/ http://myhits.vital-it.ch/ http://MetaNetX.org/

_______________________________________________
maker-devel mailing list
maker...@box290.bluehost.com
http://box290.bluehost.com/mailman/listinfo/maker-devel_yandell-lab.org

Carson Holt

Aug 2, 2016, 10:19:53 AM
to Sebastien Moretti, maker...@yandell-lab.org
mpi_evaluator is a maker development-related accessory script. You should ignore it.

—Carson

Carson Holt

Aug 2, 2016, 11:00:03 AM
to Sebastien Moretti, maker...@yandell-lab.org
To run MAKER with MPI, you just call maker with mpiexec. Examples can be found on the MAKER wiki, in the devel list archives, and in the README/INSTALL guide.

Basically something like this to run on 10 CPUs —> mpiexec -n 10 maker

Any other MPI options will depend on the flavor of MPI you use, and your cluster settings.
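
For example, a minimal end-to-end sketch, assuming Open MPI and a run directory of your choosing (maker -CTL generates the three control files):

#hypothetical run directory
cd /path/to/annotation_run
maker -CTL                 #writes maker_opts.ctl, maker_bopts.ctl, maker_exe.ctl
#edit maker_opts.ctl to point at your genome and evidence, then launch on 10 CPUs
mpiexec -n 10 maker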

—Carson

> On Aug 2, 2016, at 6:00 AM, Sebastien Moretti <sebastie...@unil.ch> wrote:
>

sebastie...@unil.ch

Aug 3, 2016, 11:59:31 AM
to Carson Holt, maker...@yandell-lab.org
OK, I removed execution rights for mpi_evaluator.

I tried mpiexec -n 10 maker, but MPI tries to write to the maker/perl/
directory. This is an issue for me because maker is installed on a
read-only file system.

Is there an option or something to tell MPI to write elsewhere?

Sébastien

Carson Holt

Aug 3, 2016, 12:17:18 PM
to Sebastien Moretti, maker...@yandell-lab.org
The issue is that you moved the …/maker/perl directory after initial setup. So when maker runs, it can’t find the files and tries to rebuild the missing shared objects for the C/Perl MPI bindings (it compiles them on demand).

Solutions (one of the following two options):
1. Install maker as indicated in the INSTALL file and then append the path for …/maker/bin/ to the PATH environment variable in /etc/profile (i.e. export PATH=/usr/local/maker/bin:$PATH).
2. Install maker as indicated in the INSTALL file and then soft link the executables in …/maker/bin/ to the location of your choice (e.g. /usr/bin/ or /usr/local/bin/; this is what homebrew does, for example).

Example:
MacBook-Pro:~ cholt$ ls -al /usr/local/bin/maker
lrwxr-xr-x  1 cholt  admin  32 Oct 25  2015 /usr/local/bin/maker -> ../Cellar/maker/2.31.8/bin/maker

Notice that the softlink in /usr/local/bin/ points to the actual location where homebrew placed the installation. Doing something similar would be better than trying to break up the installation structure. Basically, homebrew moves the contents of …/maker/* to /usr/local/Cellar/maker/2.31.8/, performs all installation steps there, and then softlinks the executables elsewhere using ‘ln -s’.
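
A rough shell sketch of that pattern (the prefix below is a placeholder, not the actual homebrew recipe; the full install steps are in the INSTALL file and in the example further down):

#install in place under a versioned, placeholder prefix, then softlink the executable
mkdir -p /usr/local/Cellar/maker
tar -zxvf maker.tgz && mv maker /usr/local/Cellar/maker/2.31.8
cd /usr/local/Cellar/maker/2.31.8/src && perl Build.PL && ./Build install
ln -s /usr/local/Cellar/maker/2.31.8/bin/maker /usr/local/bin/maker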


When you install maker, you have to run the setup steps in …/maker/src/. After install, …/maker/bin/ will contain executables, and …/maker/perl/ and …/maker/lib/ will contain libraries and shared objects used by those executables. Remember that the executables are not binaries, they are scripts, so they need access to the libraries and shared objects they use. The executables know to look for libraries/files in the installation folder, but if you move them, the @INC search list used by perl will not find them. You can try to manually declare new locations to search in the @INC list using the PERL5LIB environment variable (but using PERL5LIB is beyond the scope of the maker-devel list and can be investigated using Perl’s own docs).
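
For illustration only, a hedged sketch of the PERL5LIB approach (the install prefix is a placeholder):

#hypothetical prefix; adjust to wherever …/maker actually lives
export PERL5LIB=/opt/maker/perl:/opt/maker/lib:$PERL5LIB
perl -le 'print for @INC'   #the two maker directories should now be listed in @INC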

Also as explained in the homebrew example above, the base directory does not have to be named ‘maker’. You can name it whatever you want. It is just the installation structure within the containing directory that has to be consistent.

Alternatively, if you are doing something more complex, like a ‘module’ setup using lua-based modules, just use the prepend_path command to add the .../maker/bin folder to the environment on demand —>

Example:

-- -*- lua -*-
help(
[[
This module loads the MAKER genome annotation pipeline
]])

whatis("Name: MAKER")
whatis("Version: 2.31.8")
whatis("Category: Gene Annotation")
whatis("Keywords: Gene Annotation")
whatis("URL: http://www.yandell-lab.org/software/maker.html")
whatis("Description: MAKER is a highly parallelized genome annotation/re-annotation pipeline")

-- Load required modules
load("openmpi/1.8.4.i", "genemark-ES", "augustus", "snap", "exonerate", "ncbi-blast+", "RepeatMasker")

-- Prepare needed values
local APPS     = "/ucgd/apps/"
local id       = "maker"
local version  = "2.31.8"
local base     = pathJoin(APPS, id, version)
local bin      = pathJoin(base, "bin")

-- Set PATH and other environmental variables
prepend_path("PATH", bin)
prepend_path("LD_PRELOAD", "
/ucgd/apps/openmpi/1.8.4i/lib/libmpi.so")
setenv("OMPI_MCA_mpi_warn_on_fork", 0
)


—Carson

Carson Holt

Aug 4, 2016, 4:21:59 PM
to Sebastien Moretti, maker...@yandell-lab.org
Hi Sebastien,

As long as you are not following the …/maker/INSTALL instructions, and are instead moving files around manually and trying to modify the default behavior of ./Build, I cannot provide support. I’ve tried my best to explain how you should be doing things in the previous e-mails as well as in the information below.

If you are installing for a module based setup, then the proper way to do this would be to move the maker tarball to the desired module location, then follow the INSTALL instructions.

Example (geared toward a module-based setup, since you said that is what you are doing):
#load required tools
module load snap
module load openmpi
module load blast
module load exonerate
module load RepeatMasker

#set up environment for OpenMPI install (explained in …/maker/INSTALL instructions)
export LD_PRELOAD=/location/of/openmpi/lib/libmpi.so

#download and untar maker into desired location
mkdir $APPS/maker
cd $APPS/maker
wget http://maker_url/maker.tgz
tar -zxvf maker.tgz
mv maker 2.31.8

#go to src directory
cd $APPS/maker/2.31.8/src

#install according to …/maker/INSTALL instructions
perl Build.PL
./Build installdeps
./Build installexes
./Build
./Build install

#test normal installation
$APPS/maker/2.31.8/bin/maker --help

#test MPI installation (will print only one help message - something is wrong if it prints 5)
mpiexec -mca btl ^openib -n 5 $APPS/maker/2.31.8/bin/maker --help

#then set up a module file for maker that includes (a plain-shell sketch follows this list):
1. steps for loading prerequisite tools and modules (such as openmpi)
2. setting OMPI_MCA_mpi_warn_on_fork=0
3. setting LD_PRELOAD=/location/of/openmpi/lib/libmpi.so
4. prepending $APPS/maker/2.31.8/bin to the PATH environment variable
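
For reference, a plain-shell sketch of those four steps (a real module file would use load/setenv/prepend_path as in the lua example from my earlier message; paths are placeholders):

module load openmpi                                    #1. prerequisite tools/modules
export OMPI_MCA_mpi_warn_on_fork=0                     #2.
export LD_PRELOAD=/location/of/openmpi/lib/libmpi.so   #3.
export PATH=$APPS/maker/2.31.8/bin:$PATH               #4.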

Thanks,
Carson


> On Aug 4, 2016, at 8:28 AM, Sebastien Moretti <sebastie...@unil.ch> wrote:
>
> I already use a module to load maker and external tools ;-)
> As well as to set OMPI_MCA_mpi_warn_on_fork = 0
>
>
> In my case perl/config-x86_64-linux-thread-multi-5.018002 was created
> properly but not copied.
> I use ./Build install --destdir=<my_path> and the destdir value is not
> reported at the end of the install process when
> blib/config-x86_64-linux-thread-multi-5.018002 is copied to ../perl/
>
> I manually copy blib/config-x86_64-linux-thread-multi-5.018002 to
> my_path/../perl/ and MPI does not try to write it now.
>
>
>
> Now I have issues when MPI tries to send messages.
> Don't know if you can help me with this.
> Here is part of the error message I got:
>
> [devfrt01.XXX:10256] mca: base: component_find: unable to open
> /software/lib64/openmpi/lib/openmpi/mca_shmem_mmap: perhaps a missing
> symbol, or compiled for a different version of Open MPI? (ignored)
> ...
>
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_shmem_base_select failed
> --> Returned value -1 instead of OPAL_SUCCESS
> ...
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_init failed
> --> Returned value Error (-1) instead of ORTE_SUCCESS
> ...
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Error" (-1) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [devfrt01.vital-it.ch:10252] Local abort before MPI_INIT completed
> successfully; not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [devfrt01.vital-it.ch:10256] Local abort before MPI_INIT completed
> successfully; not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
> [the same MPI_INIT failure block is repeated for processes 10253, 10257, and 10251]
> -------------------------------------------------------
> Primary job terminated normally, but 1 process returned
> a non-zero exit code.. Per user-direction, the job has been aborted.
> -------------------------------------------------------
> --------------------------------------------------------------------------
> mpiexec detected that one or more processes exited with non-zero status,
> thus causing
> the job to be terminated. The first process to do so was:
>
> Process name: [[21120,1],1]
> Exit code: 1

>>> <mailto:sebastie...@unil.ch> wrote:
>>>
>>> OK, I removed execution rights for mpi_evaluator.
>>>
>>> I tried mpiexec -n 10 maker
>>> MPI tries to write in the maker/perl/ directory.
>>> This is an issue for me because maker is installed in a read-only file
>>> system.
>>>
>>> Is there an option or something to tell MPI to write elsewhere?
>>>
>>> Sébastien
>>>
>>>> To run MAKER with MPI, you just call maker with mpiexec. Examples can
>>>> be found on the MAKER wiki, in the devel list archives, and in the
>>>> README/INSTALL guide.
>>>>
>>>> Basically something like this to run on 10 CPUs —> mpiexec -n 10 maker
>>>>
>>>> Any other MPI options will depend on the flavor of MPI you use, and
>>>> your cluster settings.
>>>>
>>>> —Carson
>>>>
>>>>
>>>>
>>>>> On Aug 2, 2016, at 6:00 AM, Sebastien Moretti

>>>>> <sebastie...@unil.ch <mailto:sebastie...@unil.ch>> wrote:
>>>>>
>>>>> Hi
>>>>>
>>>>> I compiled maker with MPI support and would like to test it.
>>>>> So I tried to run mpi_evaluator but it fails because Parallel/MPIcar.pm
>>>>> is missing.
>>>>>
>>>>> What does mpi_evaluator do?
>>>>> How to test properly MPI in maker?
>>>>>
>>>>> maker 2.31.8, Linux x86_64 2.6.32, Perl 5.18.2
>>>>> openmpi 1.8.1
>>>>>
>>>>> Regards
>
> --
> Sébastien Moretti
> Staff Scientist
> SIB Vital-IT EMBnet, Quartier Sorge - Genopode
> CH-1015 Lausanne, Switzerland
> Tel.: +41 (21) 692 4079/4221
> http://www.vital-it.ch/ http://myhits.vital-it.ch/ http://MetaNetX.org/
