upcxx-run fails during SLURM batch-job (No such file or directory)


Soonni

Aug 9, 2022, 12:07:38 PM
to UPC++
Hello,

I tried to run a UPC++ program in a batch session with the following SLURM job script:

#!/bin/bash

### Job name
#SBATCH --job-name=test

### File for the output
#SBATCH --output=test_output

### Time for job execution
#SBATCH --time=00:10:00

### No. of nodes
#SBATCH --nodes=4

### Change to working directory
cd ~/workspace/bachelor-thesis-2022ss/code/parallel

### Set modules
module load gcc/7
module list

### Set path
export PATH=~/workspace/upcxx-2022.3.0_iibv/upcxx-install-path/bin/:$PATH
upcxx --version

### Run parallel application
upcxx-run -n 4 ./heston.exe


Running the same commands in an interactive session on the same system works, but as a batch job I get the following output and error message:

(OK) Loading gcc 7.3.0
(!!) OpenACC / OpenMP offload to Pascal GPUs might be slow. Use
(!!)  $FLAGS_OFFLOAD_OPENMP or $FLAGS_OFFLOAD_OPENACC envvars.
Currently Loaded Modulefiles:
 1) DEVELOP        2) intel/19.0     3) intelmpi/2018  4) gcc/7        
UPC++ version 2022.3.0  / gex-2022.3.0-0-gd509b6a
Citing UPC++ in publication? Please see: https://upcxx.lbl.gov/publications
Copyright (c) 2022, The Regents of the University of California,
through Lawrence Berkeley National Laboratory.
https://upcxx.lbl.gov

icpc (ICC) 19.0.1.144 20181018
Copyright (C) 1985-2018 Intel Corporation.  All rights reserved.

error running /rwthfs/rz/cluster/home/<MYUSERID>/workspace/upcxx-2022.3.0_iibv/upcxx-install-path/gasnet.opt/bin/gasnetrun_ibv-mpi.pl:
 gasnetrun: exec(/opt/MPI/bin/mpirun -np 4 /usr/bin/env UPCXX_SHARED_HEAP_SIZE=128%020MB GASNET_PSHM_ENABLED=yes GASNET_MAX_SEGSIZE=128MB/P GASNET_SPAWN_CONTROL=mpi GASNET_SPAWN_HAVE_MPI=1 GASNET_ENVCMD=/usr/bin/env GASNET_SPAWN_HAVE_PMI=0 GASNET_PLATFORM=generic GASNET_SPAWN_CONDUIT=IBV /rwthfs/rz/cluster/home/<MYUSERID> /workspace/bachelor-thesis-2022ss/code/parallel/./heston.exe) failed: No such file or directory


Does anybody see where the issue lies?

Thanks in advance.
 

Dan Bonachea

Aug 9, 2022, 2:00:45 PM
to Soonni, UPC++
Hi Soonni -

Based on this error message:

error running /rwthfs/rz/cluster/home/<MYUSERID>/workspace/upcxx-2022.3.0_iibv/upcxx-install-path/gasnet.opt/bin/gasnetrun_ibv-mpi.pl:
 gasnetrun: exec(/opt/MPI/bin/mpirun -np 4 [...] /rwthfs/rz/cluster/home/<MYUSERID> /workspace/bachelor-thesis-2022ss/code/parallel/./heston.exe) failed: No such file or directory

My guess would be that /opt/MPI/bin/mpirun does not exist on the compute node - either due to some difference in the file system mounts or perhaps this is the wrong job launch command for your cluster. The `<MYUSERID>` component also looks "fishy", but I'm guessing that was your manual redaction?

I would recommend confirming whether `/opt/MPI/bin/mpirun -np 2 hello-world.exe` works for a simple MPI hello world program (with no UPC++) - this is a prerequisite for using the MPI-based job spawner you are currently using. You can modify the command by setting e.g. envvar MPIRUN_CMD='mpirun -np %N %P %A' at run time.
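To make the template substitution concrete, here is a small shell sketch of how the `%N`/`%P`/`%A` placeholders in `MPIRUN_CMD` get expanded (the `expand_template` helper is purely illustrative and not part of GASNet, which performs this substitution internally):

```shell
#!/bin/sh
# The MPIRUN_CMD template: %N = process count, %P = program, %A = arguments.
MPIRUN_CMD='mpirun -np %N %P %A'

# Illustrative-only helper mimicking the spawner's substitution
# (arguments containing '|' would break this simple sed).
expand_template() {
  nprocs="$1"; prog="$2"; args="$3"
  printf '%s\n' "$MPIRUN_CMD" \
    | sed -e "s|%N|$nprocs|" -e "s|%P|$prog|" -e "s|%A|$args|"
}

expand_template 4 ./heston.exe ""
```

Confirming that the expanded command (e.g. `/opt/MPI/bin/mpirun -np 2 ./hello-world.exe`) runs a plain MPI program is exactly the prerequisite check described above.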

If you cannot get the MPI spawner to work, we have two alternative job-spawn mechanisms: PMI and SSH.
If functional PMI support was found at configure time, you might be able to run by setting the envvar GASNET_IBV_SPAWNER=pmi.
Otherwise you could try the SSH spawner with GASNET_IBV_SPAWNER=ssh, although that one requires functional password-less ssh access to the compute nodes, plus additional settings to pass the compute-node names (see the README).
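In job-script form, those alternatives might look like the following sketch (the hostnames are placeholders, and `GASNET_SSH_SERVERS` is one way the README describes for passing node names — check the ibv-conduit README for your site's exact requirements):

```shell
# Option 1: PMI spawner (only if PMI support was detected at configure time)
export GASNET_IBV_SPAWNER=pmi

# Option 2: SSH spawner (needs password-less ssh to the compute nodes)
# export GASNET_IBV_SPAWNER=ssh
# export GASNET_SSH_SERVERS="node01 node02 node03 node04"   # placeholder names

upcxx-run -n 4 ./heston.exe
```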

-D


--
You received this message because you are subscribed to the Google Groups "UPC++" group.
To unsubscribe from this group and stop receiving emails from it, send an email to upcxx+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/upcxx/a5233fbf-060a-4a55-b76e-08ec1f8e65f4n%40googlegroups.com.

Soonni

Aug 16, 2022, 2:24:00 PM
to UPC++
Yes, the "<MYUSERID>" was my attempt to redact my username, and I accidentally inserted a space into the path. Sorry for the confusion.

I have another question: I'm trying to install UPC++ with the ofi-conduit, but during the "make check" step after "make all", all 16 tests compile successfully yet fail to run with the error message below. I didn't install libfabric beforehand because I can't use sudo, and I don't know whether libfabric is already installed on the cluster, so that might be the source of the problem. If I can't install UPC++ with the ofi-conduit, are there any drawbacks to using the ibv-conduit (which works for me) on a cluster with an Intel Omni-Path network? The program I'm going to test with UPC++ is a Monte Carlo simulation, which transfers very little data between the nodes besides a reduction at the end, if that is relevant.

Wed Aug 10 14:41:32 CEST 2022
+ /rwthfs/rz/cluster/home/<MYUSERID>/workspace/upcxx-2022.3.0_iofi/bld/upcxx.assert1.optlev0.dbgsym1.gasnet_seq.ofi/bin/upcxx-run -np 4 -network ofi -- timeout --foreground -k 420s 300s ./test-hello_upcxx-ofi
[0] MPI startup(): I_MPI_TCP_BUFFER_SIZE variable has been removed from the product, its value is ignored

*** FATAL ERROR (proc 0): in gasnetc_ofi_init() at 2.3.0_iofi/bld/GASNet-2022.3.0/ofi-conduit/gasnet_ofi.c:722: fi_endpoint for rdma failed: -22(Invalid argument)
NOTICE: Before reporting bugs, run with GASNET_BACKTRACE=1 in the environment to generate a backtrace.
*** FATAL ERROR (proc 2): in gasnetc_ofi_init() at 2.3.0_iofi/bld/GASNet-2022.3.0/ofi-conduit/gasnet_ofi.c:722: fi_endpoint for rdma failed: -22(Invalid argument)
NOTICE: Before reporting bugs, run with GASNET_BACKTRACE=1 in the environment to generate a backtrace.
*** FATAL ERROR (proc 1): in gasnetc_ofi_init() at 2.3.0_iofi/bld/GASNet-2022.3.0/ofi-conduit/gasnet_ofi.c:722: fi_endpoint for rdma failed: -22(Invalid argument)
NOTICE: Before reporting bugs, run with GASNET_BACKTRACE=1 in the environment to generate a backtrace.
*** FATAL ERROR (proc 3): in gasnetc_ofi_init() at 2.3.0_iofi/bld/GASNet-2022.3.0/ofi-conduit/gasnet_ofi.c:722: fi_endpoint for rdma failed: -22(Invalid argument)
NOTICE: Before reporting bugs, run with GASNET_BACKTRACE=1 in the environment to generate a backtrace.
/usr/bin/timeout: the monitored command dumped core

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 0 PID 113871 RUNNING AT nrm213
=   KILLED BY SIGNAL: 6 (Aborted)
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 1 PID 113872 RUNNING AT nrm213
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 2 PID 113873 RUNNING AT nrm213
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 3 PID 113874 RUNNING AT nrm213
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================
Failure executing command /opt/intel/impi/2019.8.254/compilers_and_libraries/linux/mpi/intel64/bin/mpirun -launcher ssh -machinefile /tmp/<MYUSERID>/login18-1_197941/hostfile-9412 -np 4 /usr/bin/timeout --foreground -k 420s 300s /rwthfs/rz/cluster/home/<MYUSERID>/workspace/upcxx-2022.3.0_iofi/./test-hello_upcxx-ofi
real 2.33
user 0.49
sys 0.23
Wed Aug 10 14:41:35 CEST 2022


Thanks in advance!

Dan Bonachea

Aug 16, 2022, 4:58:14 PM
to Soonni, UPC++
Hi Soonni -

Apologies, I did not realize we were discussing an Omni-Path system. This failure mode indicates that the MPI library being used for job spawning is consuming the only endpoint available on the Omni-Path network adapter, preventing its use for GASNet-EX/UPC++ communication.

From the "Job Spawning" section of the ofi-conduit docs:
Depending on the libfabric provider in use, there may be restrictions on how
mpi-based spawning is used.  In particular, the psm2 provider has the
property that each process may only open the network adapter once.  If you
wish to use mpi-spawner, please consult its README for advice on how to set
your MPIRUN_CMD to use TCP/IP.
So you'll need to either use a different GASNet spawner (PMI or SSH) to circumvent this restriction, or find the right knobs to tell the MPI library to communicate via TCP instead of native PSM2. The relevant instructions are in the mpi-spawner README:
If one is using mpi-conduit, or is expecting to run hybrid GASNet+MPI
applications with any conduit, then MPIRUN_CMD should be set as one would
normally use mpirun for MPI applications.  However, since mpi-spawner itself
uses MPI only for initialization and finalization (and MPI is not used for
GASNet communications other than in mpi-conduit) one can potentially reduce
resource use by setting MPIRUN_CMD such that the MPI will use TCP/IP for its
communication.  Below are recommendations for several MPIs.  When recommended
to set an environment variable, the most reliable way is to prefix the mpirun
command.  For instance
   MPIRUN_CMD='env VARIABLE=value mpirun [rest of your template]'

    - Open MPI
      Pass "--mca btl tcp,self" to mpirun, OR
      Set environment variable OMPI_MCA_BTL=tcp,self.
    - MPICH or MPICH2
      These normally default to TCP/IP, so no special action is required.
    - MVAPICH family
      These most often support only InfiniBand and therefore are not
      recommended for use as a launcher for GASNet applications if one
      is concerned with reducing resource usage.
    - Intel MPI
      Set environment variable I_MPI_DEVICE=sock.
    - HP-MPI
      Set environment variable MPI_IC_ORDER=tcp.
    - LAM/MPI
      Pass "-ssi rpi tcp" to mpirun, OR
      Set environment variable LAM_MPI_SSI_rpi=tcp.
If I_MPI_DEVICE=sock doesn't work, then you might need I_MPI_OFI_PROVIDER=sockets, and please let us know if this fixes your issue!
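Putting that advice together for Intel MPI, a run-time override might look like this sketch (which variable your Intel MPI version honors varies, and `./heston.exe` stands in for your application):

```shell
# Keep GASNet on the native provider; push only the MPI spawner onto TCP.
export MPIRUN_CMD='env I_MPI_OFI_PROVIDER=sockets mpirun -np %N %P %A'
upcxx-run -n 4 -network ofi ./heston.exe
```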
Note this same restriction also applies to MPI spawning of ibv-conduit on this hardware, as that conduit also needs to open the OmniPath adapter underneath the verbs-over-PSM2 layer. However, ofi-conduit is the strongly recommended transport for OmniPath hardware, for reasons of both performance and stability.

If you really don't care about the performance of the UPC++ communication at all, you could alternatively use the mpi-conduit, which should "just work", although it offers far less efficient communication than the native ofi-conduit backend.

Hope this helps..
-D


Soonni

Aug 19, 2022, 3:13:22 PM
to UPC++
Hi Dan,

setting I_MPI_DEVICE=sock or I_MPI_OFI_PROVIDER=sockets does not work. These are some of the error messages:

MPI startup(): I_MPI_DEVICE environment variable is not supported.
MPI startup(): To check the list of supported variables, use the impi_info utility or refer to https://software.intel.com/en-us/mpi-library/documentation/get-started.
FATAL ERROR (proc 0): in gasnetc_ofi_init() at 2.3.0_iofi/bld/GASNet-2022.3.0/ofi-conduit/gasnet_ofi.c:618: OFI provider 'psm2' selected at configure time is not available at run time and/or has been overridden by FI_PROVIDER='sockets' in the environment.

Instead, I had to configure and run "make all" again with --with-ofi-provider=sockets, which ended up working. This is the complete configure command:
./configure --prefix=upcxx-install-path --disable-smp --disable-udp --disable-ibv --enable-ofi --with-ofi-spawner=mpi --with-cc=mpiicc --with-cxx=mpiicpc --with-ofi-provider=sockets

I have some other questions:
Before program execution I get following warning:
WARNING: Using OFI provider (sockets), which has not been validated to provide
WARNING: acceptable GASNet performance. You should consider using a more
WARNING: hardware-appropriate GASNet conduit. See ofi-conduit/README.
WARNING: ofi-conduit is experimental and should not be used for
         performance measurements.
But since I want to measure the performance of my program (for example its scalability), is there something I have to consider, or can I safely ignore this message?

I also noticed some inconsistencies in the output of my program. My program parallelizes a big for loop, assigning iterations to processes with
if (upcxx::rank_me() == i % upcxx::rank_n())
where each iteration also prints its loop index and UPC++ process rank. Using mpirun/mpiexec/upcxx-run with two processes in an interactive session, the output from both processes is fairly even (i.e. they take turns printing about one message at a time). But as a batch job, srun results in one process printing about 10 to 50 iterations at a time before the other process does the same, while with mpirun/mpiexec it's about 100 to 200 iterations per turn. upcxx-run still doesn't work in the batch job, with the same error message as in my very first post. Do you have an idea what might cause all this?
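The iteration-to-rank mapping in that condition is plain round-robin; a small shell sketch (with a hypothetical `owner_of` helper and `rank_n=2` standing in for the process count) shows which rank owns which iteration:

```shell
#!/bin/sh
# Round-robin ownership, mirroring: upcxx::rank_me() == i % upcxx::rank_n()
owner_of() {   # owner_of <iteration> <rank_count>
  echo $(( $1 % $2 ))
}

rank_n=2   # hypothetical process count
for i in 0 1 2 3 4 5; do
  echo "iteration $i -> rank $(owner_of "$i" "$rank_n")"
done
```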


Thanks in advance!

Dan Bonachea

Aug 19, 2022, 4:43:19 PM
to Soonni, UPC++
Hi Soonni -

Sorry to hear you're still encountering difficulties, responses interspersed below..

On Fri, Aug 19, 2022 at 3:13 PM Soonni <son...@hotmail.de> wrote:
setting  I_MPI_DEVICE=sock or  I_MPI_OFI_PROVIDER=sockets does not work. These are some of the error messages:

MPI startup(): I_MPI_DEVICE environment variable is not supported.
MPI startup(): To check the list of supported variables, use the impi_info utility or refer to https://software.intel.com/en-us/mpi-library/documentation/get-started.
FATAL ERROR (proc 0): in gasnetc_ofi_init() at 2.3.0_iofi/bld/GASNet-2022.3.0/ofi-conduit/gasnet_ofi.c:618: OFI provider 'psm2' selected at configure time is not available at run time and/or has been overridden by FI_PROVIDER='sockets' in the environment.


The first message above indicates you have a newer version of Intel MPI; the syntax for this option changed over successive versions of Intel MPI and Libfabric.
The last message indicates GASNet wants to use the high-speed psm2 provider, but it appears that Intel MPI startup may be setting FI_PROVIDER='sockets' in the environment, overriding its ability to do that.

I think I_MPI_OFI_PROVIDER=sockets is the correct variable we need you to set to force MPI to use the TCP interface. Unfortunately Intel MPI seems to have the undesirable side-effect of propagating this setting to the FI_PROVIDER envvar, which will also override GASNet's provider choice. Thank you very much for bringing this problem to our attention! I've added an entry in our bug database (Bug 4486) to request we improve this behavior for an upcoming UPC++/GASNet release.

Attached is a quick-and-dirty patch you can apply to your GASNet source tree to force it to use the high-speed PSM2 provider. Note the GASNet sources where you'll need to apply this are probably in the UPC++ build directory inside $builddir/bld/GASNet-2022.3.0

Try applying that patch to your build that is using the default configure --with-ofi-provider=psm2, rebuild UPC++ and your application, and then run with  I_MPI_OFI_PROVIDER=sockets . Please let us know how that goes!
 
Instead I had to configure and "make all" again using --with-ofi-provider=sockets, which ended up working. This is the complete configure-command:
./configure --prefix=upcxx-install-path --disable-smp --disable-udp --disable-ibv --enable-ofi --with-ofi-spawner=mpi --with-cc=mpiicc --with-cxx=mpiicpc --with-ofi-provider=sockets

This is the opposite of what you probably want, because it gives the high-speed endpoint to MPI (which is only used for job spawning) and relegates all the UPC++/GASNet communication to the slower TCP interface. This would likely run correctly, but you'd get far more efficient communication by keeping the default --with-ofi-provider=psm2 to get high-speed UPC++ communication, and finding a different solution to the job spawning problem (tweaking MPI to use TCP, or switching to GASNet's PMI or SSH spawner)
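For contrast, a sketch of the configuration being recommended here (the install path is a placeholder; `--with-ofi-provider=psm2` is the default mentioned above, shown explicitly for clarity):

```shell
./configure --prefix=<install-path> \
            --enable-ofi --with-ofi-provider=psm2 --with-ofi-spawner=mpi \
            --with-cc=mpiicc --with-cxx=mpiicpc
```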
 
I have some other questions:
Before program execution I get following warning:
WARNING: Using OFI provider (sockets), which has not been validated to provide
WARNING: acceptable GASNet performance. You should consider using a more
WARNING: hardware-appropriate GASNet conduit. See ofi-conduit/README.
WARNING: ofi-conduit is experimental and should not be used for
         performance measurements.
But because I want to measure the performance of my program like for example its scalability, is there something I have to consider, or can I safely ignore this message?

This is the warning telling you that sockets provider is probably not what you want for UPC++/GASNet communication.
 

I also noticed some inconsistencies in the output of my program. My program parallelizes a big for loop, assigning iterations to processes with
if (upcxx::rank_me() == i % upcxx::rank_n())
where each iteration also prints its loop index and UPC++ process rank. Using mpirun/mpiexec/upcxx-run with two processes in an interactive session, the output from both processes is fairly even (i.e. they take turns printing about one message at a time). But as a batch job, srun results in one process printing about 10 to 50 iterations at a time before the other process does the same, while with mpirun/mpiexec it's about 100 to 200 iterations per turn. upcxx-run still doesn't work in the batch job, with the same error message as in my very first post. Do you have an idea what might cause all this?

The behavior you describe is most likely due to console output buffering that takes place partially outside your rank processes, which can lead to some "surprising" interleavings of nearly-concurrent output from multiple processes, especially in a distributed job. See this FAQ entry for more info.

Hope this helps..
-D

force-psm2.diff

Soonni

Aug 20, 2022, 4:14:50 PM
to UPC++
Hi Dan,

unfortunately, the patch didn't work and the error message was the same ("OFI provider 'psm2' selected at configure time is not available at run time and/or has been overridden by FI_PROVIDER='sockets' in the environment."). So I switched to an older version of Intel MPI (2018.4.274), and setting the environment variable I_MPI_FABRICS=tcp ended up working. I still get this warning:
WARNING: ofi-conduit is experimental and should not be used for
         performance measurements.
         Please see `ofi-conduit/README` for more details.
But this warning can be ignored in my performance evaluation, right?

Other than that, I think this should be the proper build. Thank you for all your help and replies!
-S

Dan Bonachea

Aug 20, 2022, 6:33:12 PM
to Soonni, UPC++
On Sat, Aug 20, 2022 at 4:14 PM Soonni <son...@hotmail.de> wrote:
Hi Dan,

unfortunately, the patch didn't work and the error message was the same (OFI provider 'psm2' selected at configure time is not available at run time and/or has been overridden by FI_PROVIDER='sockets' in the environment.)

Thanks for the update. We're already working on a superior fix for our next release, and might even invite you to try it out once it's ready.
 
So I switched to an older version of Intel MPI (ver. 2018.4.274) and setting the environment variable I_MPI_FABRICS=tcp ended up working.

Outstanding! Glad to hear you finally got a working setup!
As I mentioned before, ofi-conduit only uses MPI for job spawning, so (unless your program is making MPI calls) the MPI library version/performance should be irrelevant.
 
I still get this warning:
WARNING: ofi-conduit is experimental and should not be used for
         performance measurements.
         Please see `ofi-conduit/README` for more details.

But this warning can be ignored in my performance evaluation, right?

Yes, this warning is normal since ofi-conduit is still officially in experimental status, although it's still the best (and only "native") choice for Omni-Path hardware.
 
-D
