slurm-users
1–30 of 8074
You have reached the Slurm Workload Manager user list archive. Please post all new threads to slurm-users@schedmd.com. All communication will be copied here. (This is just an archive.)
Paul Edmon via slurm-users, Apr 25
[slurm-users] HPC Principal System Engineer at the Broad
A friend asked me to pass this along. Figured some folks on this list might be interested. https://
Gestió Servidors via slurm-users, Apr 23
[slurm-users] Apply a specific QoS to all users that belong to a specific account
Hi, I would like to know if it is possible to apply a specific QoS to all users that belong to an
Robert Kudyba via slurm-users, Apr 19
[slurm-users] any way to allow interactive jobs or ssh in Slurm 23.02 when node is draining?
We use Bright Cluster Manager with Slurm 23.02 on RHEL9. I know about pam_slurm_adopt https://slurm.
Jeffrey Layton via slurm-users (6 messages), Apr 19
[slurm-users] Integrating Slurm with WekaIO
On Bright it's set in a few places: grep -r -i SLURM_CONF /etc /etc/systemd/system/slurmctld.
Ole Holm Nielsen via slurm-users (11 messages), Apr 19
[slurm-users] Munge log-file fills up the file system to 100%
It turns out that the Slurm job limits are *not* controlled by the normal /etc/security/limits.conf
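A hedged illustration of the point in that snippet: the limits a Slurm job sees are not read from /etc/security/limits.conf (a PAM mechanism); they come from what slurmd itself carries or propagates. Two places they are commonly adjusted instead (all values below are illustrative assumptions, not from the thread):

```ini
# /etc/slurm/slurm.conf - stop submit-host rlimits from reaching jobs
PropagateResourceLimits=NONE

# systemd drop-in for slurmd (systemctl edit slurmd) - limits jobs inherit
[Service]
LimitNOFILE=131072
LimitMEMLOCK=infinity
```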
Joe Teumer via slurm-users, Apr 18
[slurm-users] Job Invalid Account
We installed Slurm 23.11.5 and we are receiving "JobId=n has invalid account" for every
Shooktija S N via slurm-users (3 messages), Apr 17
[slurm-users] Reserving resources for use by non-slurm stuff
On a single Rocky8 workstation with one GPU where we wanted ssh interactive logins to it to have a
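For the use case in that thread, slurm.conf offers specialized-resource parameters that hold cores and memory back from Slurm scheduling so interactive ssh sessions can use them. A hedged one-line sketch (node name and sizes are illustrative assumptions; MemSpecLimit is in MB):

```ini
# slurm.conf: reserve 4 cores and 8 GiB on this node for non-Slurm use
NodeName=ws1 CPUs=32 RealMemory=128000 CoreSpecCount=4 MemSpecLimit=8192
```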
KK via slurm-users, Apr 17
[slurm-users] Inconsistencies in CPU Time Reporting by sreport and sacct Tools
I wish to ascertain the CPU core time utilized by users dj1 and dj. I have tested with sreport cluster
Gestió Servidors via slurm-users, Apr 17
[slurm-users] Association limit problem
Hello, I'm doing some tests with “associations” with “sacctmgr”. I have created three users (
wdennis--- via slurm-users (2 messages), Apr 16
[slurm-users] Redirect jobs submitted to old partition to new
For jobs already in default_queue: squeue -t pd -h --Format=jobID | xargs -L1 -I{} scontrol update
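The command in that snippet is cut off mid-line; the usual completion moves each pending job to the new partition with scontrol update. A runnable sketch of the same pipeline, with squeue and scontrol stubbed out so it works without a Slurm installation (the job IDs and the partition name new_partition are illustrative assumptions):

```shell
#!/bin/sh
# Stub squeue/scontrol in a scratch bin dir so the pipeline itself can run;
# on a real cluster, drop the stubs and use the actual commands.
bindir=$(mktemp -d)
printf '#!/bin/sh\nprintf "1001\\n1002\\n"\n' > "$bindir/squeue"
printf '#!/bin/sh\necho "scontrol $*"\n'      > "$bindir/scontrol"
chmod +x "$bindir/squeue" "$bindir/scontrol"
PATH="$bindir:$PATH"

# The pattern from the thread: one scontrol update per pending JobID.
squeue -t pd -h --Format=jobID | xargs -L1 -I{} scontrol update JobID={} Partition=new_partition
```

Each pending job ID becomes one `scontrol update JobID=<id> Partition=new_partition` invocation; `-I{}` substitutes the ID into the command line.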
Marshall Garey via slurm-users, Apr 16
[slurm-users] Slurm version 23.11.6 is now available
We are pleased to announce the availability of Slurm version 23.11.6. The 23.11.6 release includes
KK via slurm-users, Apr 15
[slurm-users] Fwd: sreport cluster UserUtilizationByAccount Used result versus sreport job SizesByAccount or sacct: inconsistencies
---------- Forwarded message --------- From: KK <daijian...@gmail.com> Date: Mon, Apr 15, 2024, 13
Xaver Stiensmeier via slurm-users (2 messages), Apr 15
[slurm-users] Slurm.conf and workers
Xaver, If you look at your slurmctld log, you likely end up seeing messages about each node's
nico.derl--- via slurm-users (2 messages), Apr 15
[slurm-users] Interfaces of topology/tree and Topology Awareness
I know this isn't a developer forum, but I don't really know where else to ask. I've had
shaobo liu via slurm-users (3 messages), Apr 14
[slurm-users] slurmrestd connect to 192.168.87.113:6819 Connection refused
Thanks, the reason was found: it was caused by the expiration of the REST API token. <nico.derl@
Josef Dvoracek via slurm-users (2 messages), Apr 12
[slurm-users] visualisation of JobComp and JobacctGather data with Grafana - screenshots, ideas?
Hi Josef, we use ClusterCockpit for that purpose. Users could monitor their running jobs or have a
Tristan LEFEBVRE, …, Williams, Jenny Avis via slurm-users (7 messages), Apr 11
[slurm-users] Slurmd enabled crash with CgroupV2
The end goal is to see the following 2 things – jobs under the slurmstepd cgroup path, and the cpu,
archisman.pathak--- via slurm-users (6 messages), Apr 11
[slurm-users] Jobs of a user are stuck in Completing stage for a long time and cannot cancel them
On 4/10/24 10:41 pm, archisman.pathak--- via slurm-users wrote: > In our case, that node has been
Steve Berg via slurm-users (2 messages), Apr 10
[slurm-users] Upgrading nodes
Yes. You can build the 8 rpms on 9. Look at 'mock' to do so. I did similar when I still had
Alison Peterson via slurm-users (2 messages), Apr 10
[slurm-users] single node configuration
On Tue, 2024-04-09 at 11:07:32 -0700, Slurm users wrote: > Hi everyone, I'm conducting some
Gerhard Strangar via slurm-users (6 messages), Apr 10
[slurm-users] Avoiding fragmentation
Various options that might help reduce job fragmentation. Turn up debugging on slurmctld and add the
Alison Peterson via slurm-users (10 messages), Apr 9
[slurm-users] Nodes required for job are down, drained or reserved
Alison, I'm glad I was able to help. Good luck. Jeff. From: Alison Peterson <apete...@sdsu.edu
Glen MacLachlan via slurm-users (2 messages), Apr 9
[slurm-users] Trouble Running Slurm C Extension Plugin
Glen, I don't think I see it in your message, but are you pointing to the plugin in slurm.conf
Ansgar Esztermann-Kirchner via slurm-users, Apr 9
[slurm-users] DefCpuPerGPU and multiple partitions
Hello List, does anyone have experience with DefCpuPerGPU and jobs requesting multiple partitions? I
Xaver Stiensmeier via slurm-users (3 messages), Apr 9
[slurm-users] Elastic Computing: Is it possible to incentivize grouping power_up calls?
Thank you Brian, while ResumeRate might be able to keep the CPU usage within an acceptable margin,
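For context on the knob mentioned in that snippet, a hedged slurm.conf power-saving fragment (all values and script paths are illustrative assumptions, not from the thread). ResumeRate caps how many nodes the controller powers up per minute, which is the throttle under discussion:

```ini
# slurm.conf power-saving sketch (illustrative values)
SuspendProgram=/usr/local/sbin/node_suspend.sh   # assumed site script
ResumeProgram=/usr/local/sbin/node_resume.sh     # assumed site script
SuspendTime=600        # idle seconds before a node is powered down
ResumeRate=10          # max nodes resumed per minute
ResumeTimeout=300      # seconds to wait for a resumed node to respond
```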
Victoria Hobson via slurm-users, Apr 8
[slurm-users] Slurm User Group 2024 Call for Papers
Slurm User Group (SLUG) 2024 is set for September 12-13 at the University of Oslo in Oslo, Norway.
Shooktija S N via slurm-users (4 messages), Apr 8
[slurm-users] How to reinstall / reconfigure Slurm?
Follow up: I was able to fix my problem following advice in this post which said that the GPU GRES
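The fix referenced in that snippet concerns GPU GRES configuration. For orientation, a hedged sketch of the two files involved (node name, sizes, and device path are illustrative assumptions): gres.conf declares the device, and the matching slurm.conf node line must advertise it:

```ini
# /etc/slurm/gres.conf
NodeName=ws1 Name=gpu File=/dev/nvidia0

# /etc/slurm/slurm.conf (node line advertises the same GRES)
GresTypes=gpu
NodeName=ws1 Gres=gpu:1 CPUs=32 RealMemory=128000
```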
greent10--- via slurm-users, Apr 5
[slurm-users] No SLURM_SHARDS_ON_NODE in prolog/epilog
Hi, I have found that SLURM_SHARDS_ON_NODE is not an environment variable in prolog or epilog. Is
thomas.hartmann--- via slurm-users (5 messages), Apr 4
[slurm-users] Suggestions for Partition/QoS configuration
Hi, I'm currently testing an approach similar to the example by Loris. Why consider preemption?
Alison Peterson via slurm-users (4 messages), Apr 4
[slurm-users] SLURM configuration help
Thank you!!!! That was the issue, I'm so happy :-) sending you many thanks. On Thu, Apr 4, 2024