[slurm-users] Why AllowAccounts does not work in slurm-23.11.6


daijiangkuicgo--- via slurm-users

Jun 24, 2024, 11:51:16 PM
to slurm...@lists.schedmd.com
I have set AllowAccounts=sunlabc5hpc,root, but it doesn’t seem to work. User c010637 is not part of the sunlabc5hpc account but is still able to use the sunlabc5hpc partition. I have tried setting EnforcePartLimits to ALL, ANY, and NO, but none of these options resolved the issue.

[c010637@sl-login ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
cpu* up infinite 3 mix sl-c[0035,0042-0043]
cpu* up infinite 1 idle sl-c0036
gpu up infinite 3 idle sl-c[0045-0047]
sunlabc5hpc up infinite 1 idle sl-c0048
[c010637@sl-login ~]$ scontrol show partition sunlabc5hpc
PartitionName=sunlabc5hpc
AllowGroups=ALL AllowAccounts=sunlabc5hpc,root AllowQos=ALL
AllocNodes=ALL Default=NO QoS=N/A
DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
Nodes=sl-c0048
PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
OverTimeLimit=NONE PreemptMode=OFF
State=UP TotalCPUs=256 TotalNodes=1 SelectTypeParameters=NONE
JobDefaults=(null)
DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
TRES=cpu=256,mem=515000M,node=1,billing=256,gres/gpu=8

[c010637@sl-login ~]$ sacctmgr list assoc format=cluster,user,account%20,qos user=$USER
Cluster User Account QOS
---------- ---------- -------------------- --------------------
snowhpc c010637 c010637_bank normal
[c010637@sl-login ~]$ sacctmgr list account sunlabc5hpc
Account Descr Org
---------- -------------------- --------------------
sunlabc5h+ sunlabc5hpc sunlabc5hpc
[c010637@sl-login ~]$ sacctmgr show assoc where Account=sunlabc5hpc format=User,Account
User Account
---------- ----------
sunlabc5h+
c010751 sunlabc5h+
snowdai sunlabc5h+
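In other words, a test submission like the one below (the hostname job is just a placeholder) is accepted, even though c010637 has no association with the sunlabc5hpc account and I would expect it to be rejected:

[c010637@sl-login ~]$ srun -p sunlabc5hpc -A c010637_bank hostname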


daijiangkuicgo--- via slurm-users

Jun 29, 2024, 4:30:23 AM
to slurm...@lists.schedmd.com
AllowGroups, on the other hand, works as expected.
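For example, a partition line along these lines does enforce access (the Unix group name here is hypothetical):

PartitionName=sunlabc5hpc Nodes=sl-c0048 State=UP AllowGroups=sunlabhpcgrp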

shaobo liu via slurm-users

Oct 16, 2024, 8:58:09 PM
to daijian...@gmail.com, slurm...@lists.schedmd.com
I have tested the slurm-23.* releases, and the AllowAccounts parameter does not work.


Paul Raines via slurm-users

Oct 17, 2024, 10:23:58 AM
to slurm-users

I am using Slurm 23.11.3 and AllowAccounts works for me. We
have a partition defined with AllowAccounts, and if one tries
to submit under an account not in the list, one will get:

srun: error: Unable to allocate resources: Invalid account or
account/partition combination specified
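For example, a submission along these lines (partition and account names are placeholders) triggers that error:

srun -p restricted -A outside_acct hostname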


Do you have EnforcePartLimits=ALL?



shaobo liu via slurm-users

Oct 19, 2024, 4:36:33 AM
to Paul Raines, slurm-users
My Slurm is 23.02.6, and I have set EnforcePartLimits=ALL, but users outside of AllowAccounts can still submit.

root@node196:/etc/slurm# scontrol show config |grep EnforcePartLimits
EnforcePartLimits       = ALL
root@node196:/etc/slurm# scontrol show part
PartitionName=T1
   AllowGroups=ALL AllowAccounts=root AllowQos=ALL
   AllocNodes=ALL Default=YES QoS=N/A

   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
   Nodes=node196

   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=4 TotalNodes=1 SelectTypeParameters=NONE
   JobDefaults=(null)
   DefMemPerCPU=1024 MaxMemPerNode=UNLIMITED
   TRES=cpu=4,mem=4G,node=1,billing=4



shaobo liu via slurm-users

Oct 19, 2024, 4:54:15 AM
to Paul Raines, slurm-users
My slurm.conf file is as follows:

ClusterName=hpc01
SlurmctldHost=node196
SlurmctldPort=6817
SlurmdPort=6818
SlurmUser=slurm
SlurmctldDebug=debug
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdDebug=debug
SlurmdLogFile=/var/log/slurm/slurmd.log
SlurmctldPidFile=/var/run/slurmctld/slurmctld.pid
SlurmdPidFile=/var/run/slurmd/slurmd.pid
SlurmdSpoolDir=/var/spool/slurmd
StateSaveLocation=/var/spool/slurmctld
AccountingStorageEnforce=associations,limits,qos
AccountingStorageHost=node196
AccountingStoragePort=6819
AccountingStorageType=accounting_storage/slurmdbd
JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/cgroup
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_CORE_MEMORY
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=60
SlurmdTimeout=120
Waittime=0
AuthType=auth/munge
CredType=cred/munge
EnforcePartLimits=YES
MpiDefault=none
ProctrackType=proctrack/cgroup
ReturnToService=1
SwitchType=switch/none
TaskPlugin=task/affinity,task/cgroup
MaxJobCount=10000
MaxArraySize=1001
MailProg=/usr/bin/smail
PluginDir=/usr/local/slurm-23.02.6/lib/slurm
NodeName=node196 Sockets=1 CoresPerSocket=2 ThreadsPerCore=2 RealMemory=4096 State=UNKNOWN
PartitionName=T1 Nodes=node196 DefMemPerCPU=1024 Default=YES State=UP AllowAccounts=root


Laura Hild via slurm-users

Oct 19, 2024, 10:01:42 AM
to shaobo liu, slurm-users
What do you have for `sacctmgr list account`? If "root" is your top-level Slurm (bank) account, AllowAccounts=root may just end up meaning any account. To have AllowAccounts limit which users can submit, you'd need to name a lower-level Slurm (bank) account that only some users have an association with.
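A minimal sketch of that setup (account and user names here are hypothetical):

sacctmgr add account t1users Description="T1 partition users" Parent=root
sacctmgr add user alice Account=t1users
scontrol update PartitionName=T1 AllowAccounts=t1users

Only users holding an association with t1users (or one of its sub-accounts) could then submit to T1, and you would mirror the AllowAccounts setting in slurm.conf so it survives a restart.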

shaobo liu via slurm-users

Oct 29, 2024, 8:55:25 PM
to Laura Hild, slurm-users
My Slurm is 23.02.6 with EnforcePartLimits=ALL; accounts outside of AllowAccounts can still submit.


Laura Hild via slurm-users

Oct 30, 2024, 8:52:27 AM
to shaobo liu, slurm-users
If you run

sshare

or

sacctmgr show association where parent=root

(and so forth recursively where parent= each of the children) do you find that these other accounts that can submit to the partition are not ultimately sub-accounts of "root"?

Quoting man 5 slurm.conf, concerning AllowAccounts, "This list is also hierarchical, meaning subaccounts are included automatically."
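A convenient way to see the whole hierarchy at once, if your sacctmgr supports the tree option, is:

sacctmgr show associations tree format=Account,User

which indents each account under its parent, so you can check whether the accounts that slip through are in fact sub-accounts of the one you listed.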



Marko Markoc via slurm-users

Oct 30, 2024, 12:37:45 PM
to Laura Hild, shaobo liu, slurm-users
Hi All,

AllowAccounts broke for me when I moved from version 22 to 23. I've opened this bug report: https://support.schedmd.com/show_bug.cgi?id=19315 . It's been a while since I tackled it, but more info is available in the bug report. I think the change is related to the following commit: https://github.com/SchedMD/slurm/commit/776edd83952082e3ec46b3726858755c96af8a60

Thanks,
Marko