Hi Pawel,
I have installed pydusa-1.15-sparx-8.tgz, but I ran into another problem.
When I start ISAC with:
sxisac.py bdb:data isac1 --radius=35 --CTF
I get these errors:
[:35409] mca: base: component_find: unable to open
/usr/lib64/openmpi/lib/openmpi/mca_shmem_mmap: perhaps a missing
symbol, or compiled for a different version of Open MPI? (ignored)
[medusa.ohsu.edu:35409] mca: base: component_find: unable to open
/usr/lib64/openmpi/lib/openmpi/mca_shmem_posix: perhaps a missing
symbol, or compiled for a different version of Open MPI? (ignored)
[:35409] mca: base: component_find: unable to open
/usr/lib64/openmpi/lib/openmpi/mca_shmem_sysv: perhaps a missing
symbol, or compiled for a different version of Open MPI? (ignored)
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
opal_shmem_base_select failed
--> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
opal_init failed
--> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[:35409] Local abort before MPI_INIT completed successfully; not able
to aggregate error messages, and not able to guarantee that all other
processes were killed!
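
In case it helps with diagnosing this, below is a minimal standalone test I can run outside of ISAC (a sketch, assuming pydusa installs an "mpi" module with the C-style bindings that the SPARX scripts use: mpi_init, mpi_comm_rank, mpi_comm_size, mpi_finalize, MPI_COMM_WORLD). If this also aborts during mpi_init, the problem would seem to be in the pydusa/Open MPI setup rather than in sxisac.py itself.

# mpi_check.py -- minimal pydusa/Open MPI sanity check (API names assumed
# from the way SPARX scripts import the mpi module)
import sys
from mpi import mpi_init, mpi_finalize, mpi_comm_rank, mpi_comm_size, MPI_COMM_WORLD

# mpi_init wraps MPI_Init; this is the step where the opal_init/orte_init
# errors above are reported, so the test isolates that call.
sys.argv = mpi_init(len(sys.argv), sys.argv)
myid = mpi_comm_rank(MPI_COMM_WORLD)   # rank of this process
ncpu = mpi_comm_size(MPI_COMM_WORLD)   # total number of processes
print("Hello from rank %d of %d" % (myid, ncpu))
mpi_finalize()

I would run it with something like: mpirun -np 2 python mpi_check.py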
Thanks,
Lei