I compiled maker with MPI support and would like to test it.
So I tried to run mpi_evaluator but it fails because Parallel/MPIcar.pm
is missing.
What does mpi_evaluator do?
How can I properly test MPI in maker?
maker 2.31.8, Linux x86_64 2.6.32, Perl 5.18.2
openmpi 1.8.1
Regards
--
Sébastien Moretti
Staff Scientist
SIB Vital-IT EMBnet, Quartier Sorge - Genopode
CH-1015 Lausanne, Switzerland
Tel.: +41 (21) 692 4079/4221
http://www.vital-it.ch/ http://myhits.vital-it.ch/ http://MetaNetX.org/
_______________________________________________
maker-devel mailing list
maker...@box290.bluehost.com
http://box290.bluehost.com/mailman/listinfo/maker-devel_yandell-lab.org
—Carson
Basically something like this to run on 10 CPUs —> mpiexec -n 10 maker
Any other MPI options will depend on the flavor of MPI you use, and your cluster settings.
—Carson
> On Aug 2, 2016, at 6:00 AM, Sebastien Moretti <sebastie...@unil.ch> wrote:
>
I tried mpiexec -n 10 maker
MPI tries to write to the maker/perl/ directory.
This is an issue for me because maker is installed in a read-only file
system.
Is there an option or something to tell MPI to write elsewhere?
Sébastien
If you are not following the …/maker/INSTALL instructions, and are instead moving files around manually and modifying the default behavior of ./Build, then I cannot provide support. I’ve tried my best to explain how you should be doing things in the previous e-mails, as well as in the information below.
If you are installing for a module-based setup, the proper way to do this is to move the maker tarball to the desired module location and then follow the INSTALL instructions.
Example (tailored to a module-based setup, since you said that is what you are doing):
#load required tools
module load snap
module load openmpi
module load blast
module load exonerate
module load RepeatMasker
#set up environment for OpenMPI install (explained in …/maker/INSTALL instructions)
export LD_PRELOAD=/location/of/openmpi/lib/libmpi.so
#download and untar maker into desired location
mkdir $APPS/maker
cd $APPS/maker
wget http://maker_url/maker.tgz
tar -zxvf maker.tgz
mv maker 2.31.8
#go to src directory
cd $APPS/maker/2.31.8/src
#install according to …/maker/INSTALL instructions
perl Build.PL
./Build installdeps
./Build installexes
./Build
./Build install
#test normal installation
$APPS/maker/2.31.8/bin/maker --help
#test MPI installation (should print the help message only once; something is wrong if it prints 5 copies)
mpiexec -mca btl ^openib -n 5 $APPS/maker/2.31.8/bin/maker --help
#then set up a module file for maker that includes:
1. steps for loading prerequisite tools and modules (such as openmpi)
2. setting OMPI_MCA_mpi_warn_on_fork=0
3. setting LD_PRELOAD=/location/of/openmpi/lib/libmpi.so
4. prepending $APPS/maker/2.31.8/bin to the PATH environment variable
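As a sketch, the module file’s net effect amounts to the following shell environment (paths are illustrative; adjust the openmpi location and $APPS for your site):

```shell
# what the maker module file should arrange, expressed as plain shell
module load openmpi                                   # 1. prerequisite modules
export OMPI_MCA_mpi_warn_on_fork=0                    # 2. silence fork warnings
export LD_PRELOAD=/location/of/openmpi/lib/libmpi.so  # 3. preload the MPI lib
export PATH="$APPS/maker/2.31.8/bin:$PATH"            # 4. put maker on PATH
```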
Thanks,
Carson
> On Aug 4, 2016, at 8:28 AM, Sebastien Moretti <sebastie...@unil.ch> wrote:
>
> I already use a module to load maker and the external tools ;-)
> as well as to set OMPI_MCA_mpi_warn_on_fork=0.
>
>
> In my case perl/config-x86_64-linux-thread-multi-5.018002 was created
> properly but not copied.
> I use ./Build install --destdir=<my_path>, but the destdir value is not
> honored at the end of the install process, when
> blib/config-x86_64-linux-thread-multi-5.018002 is copied to ../perl/.
>
> I manually copied blib/config-x86_64-linux-thread-multi-5.018002 to
> my_path/../perl/, and MPI no longer tries to write it.
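The manual workaround described above can be sketched as shell. Everything here is a stand-in: the real source is the config directory that ./Build leaves under src/blib/, and the real destination is the perl/ directory under the destdir install tree. The sketch is simulated in a temporary directory so it runs anywhere:

```shell
# Illustrative only: simulate copying the built Perl config dir, which the
# --destdir install step misses, into the install tree's perl/ directory.
tmp=$(mktemp -d)
CONF=config-x86_64-linux-thread-multi-5.018002
mkdir -p "$tmp/src/blib/$CONF"        # stand-in for the built config dir
mkdir -p "$tmp/destdir/maker/perl"    # stand-in for the destdir install tree
cp -r "$tmp/src/blib/$CONF" "$tmp/destdir/maker/perl/"
ls "$tmp/destdir/maker/perl"
```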
>
>
>
> Now I have issues when MPI tries to send messages.
> I don't know whether you can help me with this.
> Here is part of the error message I got:
>
> [devfrt01.XXX:10256] mca: base: component_find: unable to open
> /software/lib64/openmpi/lib/openmpi/mca_shmem_mmap: perhaps a missing
> symbol, or compiled for a different version of Open MPI? (ignored)
> ...
>
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_shmem_base_select failed
> --> Returned value -1 instead of OPAL_SUCCESS
> ...
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_init failed
> --> Returned value Error (-1) instead of ORTE_SUCCESS
> ...
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Error" (-1) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [devfrt01.vital-it.ch:10252] Local abort before MPI_INIT completed
> successfully; not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [devfrt01.vital-it.ch:10256] Local abort before MPI_INIT completed
> successfully; not able to aggregate error messages, and not able to
> guarantee that all other processes were killed!
> --------------------------------------------------------------------------
> [The same "MPI_INIT failed" explanation and MPI_Init abort messages repeat
> for the remaining processes: PIDs 10253, 10257, and 10251.]
> -------------------------------------------------------
> Primary job terminated normally, but 1 process returned
> a non-zero exit code. Per user-direction, the job has been aborted.
> -------------------------------------------------------
> --------------------------------------------------------------------------
> mpiexec detected that one or more processes exited with non-zero status,
> thus causing
> the job to be terminated. The first process to do so was:
>
> Process name: [[21120,1],1]
> Exit code: 1
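The "perhaps a missing symbol, or compiled for a different version of Open MPI?" warning at the top of this log usually indicates that the libmpi.so picked up at run time (for instance via LD_PRELOAD) does not match the Open MPI installation whose plugins are being opened. A few quick checks, assuming the standard OpenMPI tools are on PATH:

```shell
# verify that the preloaded library, the launcher, and the plugin tree
# all come from the same Open MPI installation
echo "$LD_PRELOAD"                    # library forced in by the module file
which mpiexec                         # which launcher is actually on PATH
ompi_info --version                   # version of the Open MPI being used
ldd "$(which mpiexec)" | grep -i mpi  # libraries the launcher links against
```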
>>> Sebastien Moretti <sebastie...@unil.ch> wrote:
>>>
>>> OK, I removed execution rights for mpi_evaluator.
>>>
>>>> To run MAKER with MPI, you just call maker with mpiexec. Examples can
>>>> be found on the MAKER wiki, in the devel list archives, and in the
>>>> README/INSTALL guide.