[slurm-users] Minimum cpu cores per node partition level configuration


Jeherul Islam via slurm-users

Mar 28, 2025, 12:39:04 AM
to Slurm User Community List
Dear All,
I need to configure Slurm so that users must request at least a certain minimum number of CPU cores in a particular partition (not system-wide); otherwise, the job must not run.

Any suggestions will be highly appreciated.


With Thanks and Regards
--
Jeherul Islam

Ole Holm Nielsen via slurm-users

Mar 28, 2025, 3:34:48 AM
to slurm...@lists.schedmd.com
Hi Jeherul Islam,

Such a policy may be implemented using a job_submit plugin, which you have
to write yourself. You may perhaps find this Wiki page useful:
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_configuration/#job-submit-plugins
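For example, a minimal job_submit.lua along these lines could enforce such a limit. (This is a sketch, not a tested site configuration: the partition name "large" and the 32-core threshold are placeholders, and it assumes the job_desc.min_cpus field carries the requested core count for your submission style.)

```lua
-- job_submit.lua: reject jobs requesting fewer than MIN_CPUS cores
-- in the "large" partition.  Partition name and threshold are
-- placeholders; adjust for your site.
local MIN_CPUS = 32

function slurm_job_submit(job_desc, part_list, submit_uid)
    if job_desc.partition == "large" then
        -- min_cpus may be nil if the user did not specify a count
        local cpus = job_desc.min_cpus or 1
        if cpus < MIN_CPUS then
            slurm.log_user("Jobs in the 'large' partition must request at least %d CPU cores", MIN_CPUS)
            return slurm.ERROR
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```

Enable it with JobSubmitPlugins=lua in slurm.conf and place the script in the same directory as slurm.conf.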

On 3/28/25 05:36, Jeherul Islam via slurm-users wrote:
> I need to configure the slurm so the user must take a certain minimum
> number of CPU cores for a particular partition(not system-wide).
> Otherwise, the job must not run.
>
> Any suggestions will be highly appreciated.

IHTH,
Ole

--
slurm-users mailing list -- slurm...@lists.schedmd.com
To unsubscribe send an email to slurm-us...@lists.schedmd.com

Cutts, Tim via slurm-users

Apr 3, 2025, 12:54:01 PM
to Jeherul Islam, Slurm User Community List

You can set a partition QoS which specifies a minimum.  We have such a QoS on our large-gpu partition; we don't want people scheduling small stuff to it, so it looks like this:

 

$ sacctmgr show qos large-gpu --json | jq '.QOS[] | { name: .name, min_limits: .limits.min }'
{
  "name": "large-gpu",
  "min_limits": {
    "priority_threshold": {
      "set": false,
      "infinite": true,
      "number": 0
    },
    "tres": {
      "per": {
        "job": [
          {
            "type": "cpu",
            "name": "",
            "id": 1,
            "count": 32
          },
          {
            "type": "mem",
            "name": "",
            "id": 2,
            "count": 262144
          },
          {
            "type": "gres",
            "name": "gpu",
            "id": 1002,
            "count": 3
          }
        ]
      }
    }
  }
}

 

i.e. the user has to request at least 32 cores, 256 GB of memory, and at least 3 GPUs.  If I try to allocate less, I get an error:

 

$ salloc -p large-gpu --gres=gpu:1 -c 32 --mem 256G
salloc: error: QOSMinGRES
salloc: error: Job submit/allocate failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)
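For anyone wanting to set up something similar, a sketch of how such a QoS could be created and attached to a partition (the QoS name, limits, and partition line are illustrative, not our exact configuration, and MinTRESPerJob requires slurmdbd accounting to be in use):

```
# Create the QoS and set per-job minimum TRES
sacctmgr add qos large-gpu
sacctmgr modify qos large-gpu set MinTRESPerJob=cpu=32,mem=256G,gres/gpu=3

# In slurm.conf, attach it to the partition so it applies to every job there:
#   PartitionName=large-gpu Nodes=gpu[01-04] QOS=large-gpu ...
# then: scontrol reconfigure
```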

 

-- 

Tim Cutts

Senior Director, R&D IT - Data, Analytics & AI, Scientific Computing Platform

AstraZeneca

 

Find out more about R&D IT Data, Analytics & AI and how we can support you by visiting our Service Catalogue |



Loris Bennett via slurm-users

Apr 4, 2025, 1:37:20 AM
to Slurm Users Mailing List
Hi Tim,

"Cutts, Tim via slurm-users"

<slurm-users-rGrgPyRx505G7+FkpxDULAC/G2K4...@public.gmane.org> writes:

> You can set a partition QoS which specifies a minimum. We have such a qos on our large-gpu partition; we don’t want people scheduling small stuff to it, so we
> have this qos:

How does this affect total throughput? Presumably, 'small' GPU jobs
might potentially have to wait for resources in other partitions, even
if resources are free in 'large-gpu'. Do you have other policies which
ameliorate this?

Cheers,

Loris

[snip (135 lines)]


--
Dr. Loris Bennett (Herr/Mr)
FUB-IT, Freie Universität Berlin

Cutts, Tim via slurm-users

Apr 10, 2025, 10:13:29 AM
to Loris Bennett, Slurm Users Mailing List

As you say, it does reduce overall throughput, because those large-gpu nodes are dedicated.  We don't have many nodes in that partition; the rest are in our standard queues.  Ultimately, this is a tradeoff.  We have chosen to reduce total throughput slightly in order to make sure that large jobs actually get scheduled in a timely fashion.

 

This is the boulders-and-sand scheduling problem; I don't think there's any single perfect configuration.  It just depends on what's most important to your business: throughput, or the turnaround time of particular jobs.

 

Tim

 

