slurm-users
Threads 1–30 of 8084
You have reached the Slurm Workload Manager user list archive. Please post all new threads to slurm-users@schedmd.com; all communication will be copied here. (This is just an archive.)
Dan Healy via slurm-users
4
2:28 AM
[slurm-users] Convergence of Kube and Slurm?
Tim Wickberg via slurm-users <slurm...@lists.schedmd.com> writes: > [1] Slinky is not an
Nuno Teixeira via slurm-users
8
May 6
[slurm-users] FreeBSD/aarch64: ld: error: unknown emulation: elf_aarch64
On 5/6/24 3:19 pm, Nuno Teixeira via slurm-users wrote: > Fixed with: > [...] > > Thanks
ARNULD via slurm-users
May 6
[slurm-users] Rootless Docker Errors with Slurm
I am trying to integrate Rootless Docker with Slurm. I have set up Rootless Docker as per the docs
Gestió Servidors via slurm-users
May 6
[slurm-users] Invalid/incorrect gres.conf syntax
Hello, I have configured my “gres.conf” in this way: NodeName=node-gpu-1 AutoDetect=off Name=gpu Type
Shooktija S N via slurm-users
May 3
[slurm-users] GPU GRES verification and some really broad questions.
Hi, I am a complete slurm-admin and sys-admin noob trying to set up a 3 node Slurm cluster. I have
WANG, Hongying via slurm-users
May 3
[slurm-users] slurm reservation: how to use more nodes (bigger than NodeCnt) when submitting jobs
Hi all, I have a question about the reservation. If I create a reservation with 3 nodes (NodeCnt=3),
Henderson, Brent via slurm-users
2
May 2
[slurm-users] srun launched mpi job occasionally core dumps
Re-tested with slurm 23.02.7 (had to also disable slurmdbd and run the controller with the '-i
Jason Simms via slurm-users
2
May 2
[slurm-users] Partition Preemption Configuration Question
Hi Jason, I wanted exactly the same and was confused exactly like you. For a while it did not work,
Dietmar Rieder via slurm-users
11
Apr 30
[slurm-users] scheduling according time requirements
Hi Loris, On 4/30/24 4:26 PM, Loris Bennett via slurm-users wrote: > Hi Dietmar, > > Dietmar
Jason Simms via slurm-users
3
Apr 29
[slurm-users] Trying to Track Down root Usage
Thanks, Juergen. I think you've solved it in one. I do have a root reservation on some nodes and
Paul Edmon via slurm-users
Apr 25
[slurm-users] HPC Principal System Engineer at the Broad
A friend asked me to pass this along. Figured some folks on this list might be interested. https://
Gestió Servidors via slurm-users
Apr 23
[slurm-users] Apply a specific QoS to all users that belong to a specific account
Hi, I would like to know if it is possible to apply a specific QoS to all users that belong to a
Robert Kudyba via slurm-users
Apr 19
[slurm-users] any way to allow interactive jobs or ssh in Slurm 23.02 when node is draining?
We use Bright Cluster Manager with Slurm 23.02 on RHEL9. I know about pam_slurm_adopt https://slurm.
Jeffrey Layton via slurm-users
6
Apr 19
[slurm-users] Integrating Slurm with WekaIO
On Bright it's set in a few places: grep -r -i SLURM_CONF /etc /etc/systemd/system/slurmctld.
Ole Holm Nielsen via slurm-users
11
Apr 19
[slurm-users] Munge log-file fills up the file system to 100%
It turns out that the Slurm job limits are *not* controlled by the normal /etc/security/limits.conf
Joe Teumer via slurm-users
Apr 18
[slurm-users] Job Invalid Account
We installed slurm 23.11.5 and we are receiving "JobId=n has invalid account" for every
Shooktija S N via slurm-users
3
Apr 17
[slurm-users] Reserving resources for use by non-slurm stuff
On a single Rocky8 workstation with one GPU where we wanted ssh interactive logins to it to have a
KK via slurm-users
Apr 17
[slurm-users] Inconsistencies in CPU time Reporting by sreport and sacct Tools
I wish to ascertain the CPU core time utilized by user dj1 and dj. I have tested with sreport cluster
Gestió Servidors via slurm-users
Apr 17
[slurm-users] Association limit problem
Hello, I'm doing some test with “associations” with “sacctmgr”. I have created three users (
wdennis--- via slurm-users
2
Apr 16
[slurm-users] Redirect jobs submitted to old partition to new
For jobs already in default_queue squeue -t pd -h --Format=jobID |xargs -L1 -I{} scontrol update
Marshall Garey via slurm-users
Apr 16
[slurm-users] Slurm version 23.11.6 is now available
We are pleased to announce the availability of Slurm version 23.11.6. The 23.11.6 release includes
KK via slurm-users
Apr 15
[slurm-users] Fwd: sreport cluster UserUtilizationByaccount Used result versus sreport job SizesByAccount or sacct: inconsistencies
---------- Forwarded message --------- From: KK <daijian...@gmail.com> Date: Mon, Apr 15, 2024, 13
Xaver Stiensmeier via slurm-users
2
Apr 15
[slurm-users] Slurm.conf and workers
Xaver, If you look at your slurmctld log, you likely end up seeing messages about each node's
nico.derl--- via slurm-users
2
Apr 15
[slurm-users] Interfaces of topology/tree and Topology Awareness
I know this isn't a developer forum, but I don't really know where else to ask. I've had
shaobo liu via slurm-users
3
Apr 14
[slurm-users] slurmrestd connect to 192.168.87.113:6819 Connection refused
Thanks, The reason was found. It was caused by the expiration of the rest api token. <nico.derl@
Josef Dvoracek via slurm-users
2
Apr 12
[slurm-users] visualisation of JobComp and JobacctGather data with Grafana - screenshots, ideas?
Hi Josef, we use ClusterCockpit for that purpose. Users could monitor their running jobs or have a
Tristan LEFEBVRE, …, Williams, Jenny Avis via slurm-users
7
Apr 11
[slurm-users] Slurmd enabled crash with CgroupV2
The end goal is to see the following 2 things – jobs under the slurmstepd cgroup path, and the cpu,
archisman.pathak--- via slurm-users
6
Apr 11
[slurm-users] Jobs of a user are stuck in Completing stage for a long time and cannot cancel them
On 4/10/24 10:41 pm, archisman.pathak--- via slurm-users wrote: > In our case, that node has been
Steve Berg via slurm-users
2
Apr 10
[slurm-users] Upgrading nodes
Yes. You can build the 8 rpms on 9. Look at 'mock' to do so. I did similar when I still had
Alison Peterson via slurm-users
2
Apr 10
[slurm-users] single node configuration
On Tue, 2024-04-09 at 11:07:32 -0700, Slurm users wrote: > Hi everyone, I'm conducting some