[slurm-users] SLURM config option to not tie up a host completely.


Ron Gould via slurm-users

May 8, 2026, 1:15:06 PM
to slurm...@lists.schedmd.com
I have a user asking about a SLURM config setting that would allow a job to not tie up a node's unused resources, thereby allowing another job to run concurrently.

User:
On our cluster, our Fluent GPU jobs each use only 1 of the 2 GPUs on GPUServer[1,2]. Currently the node is fully allocated to whichever job lands on it first, leaving the second GPU, 38 CPU cores, and 270+ GB of RAM idle.

Has anyone dealt with this? What options would facilitate this? Are there any gotchas or pitfalls?

--
slurm-users mailing list -- slurm...@lists.schedmd.com
To unsubscribe send an email to slurm-us...@lists.schedmd.com

Cutts, Tim via slurm-users

May 8, 2026, 1:52:54 PM
to Ron Gould, slurm...@lists.schedmd.com

Slurm has support for cgroups, which confine each job to only the resources allocated to it.

https://slurm.schedmd.com/cgroups.html
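
Concretely, enabling cgroup confinement usually involves the cgroup plugins in slurm.conf plus the constraint options in cgroup.conf. A minimal sketch (parameter names are from the Slurm docs; verify defaults against your Slurm version):

```
# slurm.conf
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup

# cgroup.conf
ConstrainCores=yes        # pin each job's tasks to its allocated cores
ConstrainRAMSpace=yes     # enforce the job's memory allocation
ConstrainDevices=yes      # limit GPU access to the job's allocated GRES
```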

Tim



Paul Edmon via slurm-users

May 8, 2026, 2:12:26 PM
to slurm...@lists.schedmd.com
You probably want to look at this setting:
https://slurm.schedmd.com/slurm.conf.html#OPT_CR_Core_Memory
which is what we use in combination with
https://slurm.schedmd.com/slurm.conf.html#OPT_select/cons_tres
to allow users to specify what fraction of the node they want to use, leaving the rest open for other users.
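
For reference, a minimal sketch of the relevant slurm.conf lines. The node definition is illustrative only, loosely based on the 2-GPU servers described above; the CPU count, memory, and partition name are assumptions to adjust for your hardware:

```
SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory

# Illustrative node/partition definition (CPUs and RealMemory are assumptions)
GresTypes=gpu
NodeName=GPUServer[1-2] Gres=gpu:2 CPUs=40 RealMemory=384000 State=UNKNOWN
PartitionName=gpu Nodes=GPUServer[1-2] OverSubscribe=NO Default=YES
```

With cons_tres and CR_Core_Memory, cores and memory are allocated per job rather than per node, so a second job can land on the same node's remaining resources.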

-Paul Edmon-

Davide DelVento via slurm-users

May 8, 2026, 2:18:42 PM
to Ron Gould, slurm...@lists.schedmd.com
As long as the partition does not force exclusive jobs, users can request in Slurm the exact number of cores and amount of memory they need, and Slurm will "automagically" create the cgroup to isolate the job and happily schedule additional ones (each in its own cgroup).
We run in that mode, and perhaps you do too already; just make sure jobs request only some cores and some memory rather than the whole node.
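
As a usage sketch, a batch script that leaves the second GPU and the remaining cores and memory free might look like the following; the resource values and partition name are illustrative assumptions, not tuned recommendations:

```
#!/bin/bash
#SBATCH --gres=gpu:1          # one of the node's two GPUs
#SBATCH --cpus-per-task=2     # illustrative core count
#SBATCH --mem=64G             # illustrative memory request; leaves the rest free
#SBATCH --partition=gpu       # assumed partition name

# launch Fluent here as usual
```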