SLURM and OMPI


elyas goli

Jul 17, 2020, 6:07:46 PM
to moose-users
Hi All,

Has anyone ever had an issue using OpenMPI with a job scheduler like SLURM? What do you suggest to solve the issue?

[goli@n0081 problems]$ ./amrita-opt -i ZmechNew.i
[n0081.savio2:22127] OPAL ERROR: Not initialized in file pmix3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.

  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[n0081.savio2:22127] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
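
For reference, the two build routes the error message describes translate roughly into the commands below. Every path here is hypothetical and site-specific, so confirm the real locations with the cluster admins before trying this:

  # SLURM 16.05 or later: build SLURM itself against PMIx,
  # then Open MPI's pmix component can initialize under srun
  ./configure --with-pmix=/opt/pmix && make && make install    # run in the SLURM source tree

  # Older SLURM: rebuild Open MPI against SLURM's PMI-1/PMI-2 library
  ./configure --prefix=$HOME/openmpi --with-slurm --with-pmi=/usr
  make -j8 && make install                                     # run in the Open MPI source tree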

Fande Kong

Jul 17, 2020, 8:41:31 PM
to moose...@googlegroups.com
On Fri, Jul 17, 2020 at 4:07 PM elyas goli <goli....@gmail.com> wrote:
Hi All,

Has anyone ever had an issue using OpenMPI with a job scheduler like SLURM? What do you suggest to solve the issue?

[goli@n0081 problems]$ ./amrita-opt -i ZmechNew.i
[n0081.savio2:22127] OPAL ERROR: Not initialized in file pmix3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute.

I would suggest asking the HPC system admins for help. It seems this OpenMPI was not built with SLURM's PMI support, so you cannot launch the simulation with srun using it.
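
Before rebuilding anything, it is worth checking what the installed SLURM already offers. Assuming a reasonably recent SLURM, srun can list its PMI plugins, and if pmix appears in that list, requesting it explicitly may be all that is needed (the process count of 4 below is arbitrary):

[goli@n0081 problems]$ srun --mpi=list
[goli@n0081 problems]$ srun --mpi=pmix -n 4 ./amrita-opt -i ZmechNew.i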

You could also check whether the system already provides an installed MPI. If so, you are encouraged to use that system-provided MPI, since it will have been built to work with the scheduler.
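
On many clusters the system MPI is exposed through environment modules; the module name below is a guess, so check what "module avail" actually reports on your machine:

[goli@n0081 problems]$ module avail openmpi
[goli@n0081 problems]$ module load openmpi
[goli@n0081 problems]$ which mpicc mpirun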


Thanks,

Fande,

