[slurm-users] partition qos without managing users


ego...@posteo.me

Nov 20, 2023, 4:45:41 PM
to slurm...@lists.schedmd.com
Hello,

I'd like to configure some sort of partition QoS so that the number of
jobs or CPUs is limited for a single user.
So far my testing has always depended on creating users within the
accounting database; however, I'd like to avoid managing each user and
having to create or sync _all_ LDAP users within Slurm as well.
Or - are there solutions to sync LDAP or AzureAD users to the Slurm
accounting database?

Thanks for any input.


Best - Eg.


Brian Andrus

Nov 20, 2023, 5:37:36 PM
to slurm...@lists.schedmd.com
You would have to do that syncing with your own scripts. There is no way
Slurm could tell which users should have access, and what kind of access,
without the slurmdb, and such info is not contained in AD.

At our site, we iterate through the group(s) that hold our Slurm users
and add any users that do not yet exist. We also delete users when they
are removed from AD. This does have the effect of losing the job info
produced by those users, but since we export that into a larger historic
repository, we don't worry about it.

So the simple case is to iterate through an AD group that your Slurm
users belong to and add them to slurmdbd (sketched below). Once they are
in there, you can set defaults with exceptions for specific users.
If you only want settings that apply to all users, you don't have to
import the users at all. Set the QoS for the partition, as sketched below.
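
An untested sketch of that last point; the QoS and partition names are
placeholders, and note that as far as I know QoS limits are only
enforced when AccountingStorageEnforce includes qos (and limits):

    # Create a QoS capping each user at 8 CPUs, then attach it to the
    # partition as a partition QoS:
    sacctmgr -i add qos part_normal set MaxTRESPerUser=cpu=8

    # In slurm.conf:
    AccountingStorageEnforce=limits,qos
    PartitionName=normal Nodes=node[01-10] QOS=part_normal State=UP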

Brian Andrus

ego...@posteo.me

Nov 21, 2023, 1:52:03 PM
to slurm...@lists.schedmd.com
OK, I understand that syncing users to the Slurm database is not
built in, but it could be added outside of Slurm :-)

With regard to the QoS or partition QoS settings, I've tried several
settings and configurations, but it was not possible to configure a QoS
at the partition level alone without adding specific users to the Slurm
database.
Either I don't understand the docs properly, or there is no
configuration option to limit jobs to e.g. cpu=4 globally on a partition.

Could anybody share a configuration that sets a partition QoS
(e.g. cpu=8) without managing users, or a configuration that silently
changes the job QoS via job_submit.lua, again without maintaining users
in the Slurm database?


Thanks


Brian Andrus

Nov 22, 2023, 7:42:14 PM
to slurm...@lists.schedmd.com
Eg,

Could you be more specific as to what you want?
Is there a specific user you want to control? Should no user get more
than x CPUs in the partition? Or should no single job get more than x CPUs?
The details matter for determining the right approach and settings.

Brian Andrus

Loris Bennett

Nov 23, 2023, 8:42:49 AM
to Slurm User Community List
ego...@posteo.me writes:

> [...]
>
> Could anybody share a configuration that sets a partition QoS
> (e.g. cpu=8) without managing users, or a configuration that silently
> changes the job QoS via job_submit.lua, again without maintaining
> users in the Slurm database?

We add users to the Slurm DB automatically via job_submit.lua if they
do not already exist (a rough sketch follows below). This is probably
not what you want if you have very high throughput, which we do not.
For us it means that we minimize the amount of cleanup needed when
someone applies for HPC access, does not use it within a certain period,
and is therefore removed from the system.
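
A rough sketch of the idea; the account name and the uid-to-name lookup
are assumptions, and error handling is left out:

    -- job_submit.lua: auto-create the submitting user in slurmdbd.
    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- Resolve the numeric uid to a user name via NSS.
        local p = io.popen("getent passwd " .. submit_uid .. " | cut -d: -f1")
        local user = p:read("*l")
        p:close()
        if user then
            -- sacctmgr -i skips the confirmation prompt; the call is a
            -- harmless no-op if the association already exists.
            os.execute("sacctmgr -i add user " .. user .. " account=general")
        end
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end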

Cheers,

Loris
--
Dr. Loris Bennett (Herr/Mr)
ZEDAT, Freie Universität Berlin
