[slurm-users] Submitting hybrid OpenMPI and OpenMP Jobs


Selch, Brigitte (FIDD)

unread,
Sep 22, 2023, 7:58:23 AM9/22/23
to slurm...@schedmd.com

Hello,

one of our applications needs a hybrid OpenMPI and OpenMP job submission.

Only one task is allowed per node, but this task should use all cores of the node.

So, for example, I used:

 

#!/bin/bash

#SBATCH --nodes=5
#SBATCH --ntasks=5
#SBATCH --cpus-per-task=44
#SBATCH --export=ALL

export OMP_NUM_THREADS=44

mpiexec PreonNode test.prscene

But the job does not use more than one thread:

 

Thread binding will be disabled because the full machine is not available for the process.
Detected 44 CPU threads, 2 l3 caches and 2 packages on the machine.
Number of CPU processors reported by OpenMP: 1
Maximum number of CPU threads reported by OpenMP: 44

Warning: OMP_NUM_THREADS was set to 44, which is higher than the number of available processors of 1. Will use 1 threads now.

What did I do wrong?

Does anyone have any idea why OpenMP thinks it can only use one thread per node?

 

Thanks !

 

Best regards,

Brigitte Selch

 

MAN Truck & Bus SE

IT Produktentwicklung Simulation (FIDD)

Vogelweiher Str. 33

90441 Nürnberg




MAN Truck & Bus SE
Registered office: Munich
Court of registration: Local Court of Munich (Amtsgericht München), HRB 247520
Chairman of the Supervisory Board: Christian Levin; Management Board: Alexander Vlaskamp (Chairman), Murat Aksel, Friedrich-W. Baumann, Michael Kobriger, Inka Koljonen, Arne Puls, Dr. Frederik Zohm

You can find information about how we process your personal data and your rights in our data protection notice: www.man.eu/data-protection-notice

This e-mail (including any attachments) is confidential and may be privileged.
If you have received it by mistake, please notify the sender by e-mail and delete this message from your system.
Any unauthorised use or dissemination of this e-mail in whole or in part is strictly prohibited.
Please note that e-mails are susceptible to change.
MAN Truck & Bus SE (including its group companies) shall not be liable for the improper or incomplete transmission of the information contained in this communication nor for any delay in its receipt.
MAN Truck & Bus SE (or its group companies) does not guarantee that the integrity of this communication has been maintained nor that this communication is free of viruses, interceptions or interference.

Lambers, Martin

unread,
Sep 22, 2023, 8:25:09 AM9/22/23
to slurm...@lists.schedmd.com
Hello,

for this setup it typically helps to disable MPI process binding with
"mpirun --bind-to none ..." (or a similar option) so that OpenMP can use all cores.
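A minimal sketch of the combined batch script (assuming Open MPI's mpirun; PreonNode, the scene file, and the 44-core node layout are taken from the original post):

```shell
#!/bin/bash
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=44

# Let one MPI rank per node spawn a thread per allocated core.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# --bind-to none stops Open MPI from pinning each rank to a single
# core/socket, so the rank's OpenMP threads can spread over the node.
mpirun --bind-to none PreonNode test.prscene
```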

Best,
Martin

On 22/09/2023 13:57, Selch, Brigitte (FIDD) wrote:
> Hello,
>
> one of our applications needs a hybrid OpenMPI and OpenMP job submission.
> Only one task is allowed per node, but this task should use all cores of the node.
> […]

--
Dr. Martin Lambers
Forschung und wissenschaftliche Informationsversorgung
IT.SERVICES
Ruhr-Universität Bochum | 44780 Bochum | Germany
fon : +49 234 32 29941
https://www.it-services.rub.de/

Paul Edmon

unread,
Sep 22, 2023, 9:31:24 AM9/22/23
to slurm...@lists.schedmd.com
You might also try using srun instead of mpiexec, as that way Slurm can
give more direction as to which cores have been allocated to which task.
I've found in the past that mpiexec will ignore what Slurm tells it.
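The launch line from the original batch script might then look roughly like this (a sketch; the exact flags depend on the Slurm version and how the MPI library was built):

```shell
# Launch one task per node, giving each task all 44 allocated cores.
# --cpu-bind=none leaves thread placement to the OpenMP runtime.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun --ntasks=5 --ntasks-per-node=1 --cpus-per-task=44 --cpu-bind=none \
     PreonNode test.prscene
```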

-Paul Edmon-

Ozeryan, Vladimir

unread,
Sep 22, 2023, 9:32:41 AM9/22/23
to Slurm User Community List, slurm...@schedmd.com

Hello,

 

I would set "--ntasks" to the number of CPUs you want to use for your job and remove "--cpus-per-task", which defaults to 1.

 

From: slurm-users <slurm-use...@lists.schedmd.com> On Behalf Of Selch, Brigitte (FIDD)
Sent: Friday, September 22, 2023 7:58 AM
To: slurm...@schedmd.com
Subject: [EXT] [slurm-users] Submitting hybrid OpenMPI and OpenMP Jobs

 


 

Selch, Brigitte (FIDD)

unread,
Sep 25, 2023, 4:55:34 AM9/25/23
to Slurm User Community List
Hello Martin,

your solution works like a charm.
Thank you!

Best regards,
Brigitte

MAN Truck & Bus SE
IT Produktentwicklung Simulation (FIDD)
Vogelweiher Str. 33
90441 Nürnberg

Brigitt...@man.eu

-----Original Message-----
From: slurm-users <slurm-use...@lists.schedmd.com> On Behalf Of Lambers, Martin
Sent: Friday, 22 September 2023 14:25
To: slurm...@lists.schedmd.com
Subject: Re: [slurm-users] Submitting hybrid OpenMPI and OpenMP Jobs