OpenMPI and CUDA issues


Saksham Pande
May 15, 2023, 6:12:48 AM
to moose-users
Hi all,
I am running into the following issue when trying to execute a simulation file on a GPU HPC cluster. The error is:


._____________________________________________________________________________________
|
| Initial checks...
| All good.
|_____________________________________________________________________________________
[gpu008:162305] OPAL ERROR: Not initialized in file pmix3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.

  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[gpu008:162305] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
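
The message points to a mismatch between the srun launcher and the PMI support compiled into this Open MPI build. One way to narrow that down is to compare what each side supports and, if that fails, launch through mpirun instead; for example (the executable name and input file below are placeholders):

  srun --mpi=list                        # PMI plugins this SLURM installation provides
  ompi_info | grep -i pmi                # PMI/PMIx support built into this Open MPI
  srun --mpi=pmix ./app-opt -i input.i   # ask srun for a flavour Open MPI understands
  mpirun -np 4 ./app-opt -i input.i      # or use Open MPI's own launcher inside the job

If ompi_info shows no PMI/PMIx support at all, the error text above applies: Open MPI would need to be rebuilt with --with-pmi pointing at the SLURM PMI library, or a cluster-provided Open MPI module built against this SLURM should be used.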

Previously I was using the loaded openmpi4.1.1 module, but after finding another install in /usr/mpi/gcc/openmpi4.0.1a1 I edited $PATH and $LD_LIBRARY_PATH based on the output of ldd <executable>, and I am still getting the same error.
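
That kind of switch typically amounts to prepending the alternative install to the environment and then checking which libraries the executable resolves to, roughly like this (the bin/lib64 subdirectories are assumptions about that install's layout):

  export PATH=/usr/mpi/gcc/openmpi4.0.1a1/bin:$PATH
  export LD_LIBRARY_PATH=/usr/mpi/gcc/openmpi4.0.1a1/lib64:$LD_LIBRARY_PATH
  which mpirun                           # should now point at the 4.0.1a1 install
  ldd <executable> | grep -i mpi         # confirm libmpi resolves to the same install
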
I am using the loaded gcc/10.2 and cuda/11.1 modules.
Is there anything I missed? Any suggestions would be great.

Thanks

dan...@schwen.de
May 15, 2023, 6:15:12 AM
to moose...@googlegroups.com
We are migrating our mailing list to
https://github.com/idaholab/moose/discussions,
could you please start a new discussion with your question there or check if the
question has been answered already? Thank you.

P.S.: This is an automatic message :-)