slurm-users
1–30 of 8182
You have reached the Slurm Workload Manager user list archive. Please post all new threads to slurm-users@schedmd.com; all communication will be copied here. (This is just an archive.)
Steffen Grunewald via slurm-users
4
5:06 PM
[slurm-users] Background tasks in Slurm scripts?
Generally speaking, when the batch script exits, slurm will clean up (ie kill) any stray processes.
jpuerto--- via slurm-users
2
4:01 PM
[slurm-users] API - Specify GPUs
On Fr, 2024-07-26 at 19:34 +0000, jpuerto--- via slurm-users wrote: > It does not seem that the
Shooktija S N via slurm-users
6:10 AM
[slurm-users] slurmd error: port already in use, resulting in slaves not being able to communicate with master slurmctld
Hi, I'm trying to set up a Slurm (version 22.05.8) cluster consisting of 3 nodes with these
Josef Dvořáček via slurm-users
3
Jul 24
[slurm-users] slumrestd 24.05.1: crashes when GET on /slurm/v0.0.41/nodes : unsorted double linked list corrupted
This is a known issue and resolved in 24.05.2 in the patches labeled "Always allocate pointers
Jason Ellul via slurm-users
4
Jul 24
[slurm-users] slurmctld hourly: Unexpected missing socket error
Hi, we're on 389 directory server (aka 389ds), which is a pretty large instance. One of
Shooktija S N via slurm-users
Jul 23
[slurm-users] Error binding slurm stream socket: Address already in use, and GPU GRES verification
Hi, I am trying to set up Slurm with GPUs as GRES on a 3 node configuration (hostnames: server1,
stth via slurm-users
2
Jul 22
[slurm-users] Cgroup
On 7/22/24 12:05, stth via slurm-users wrote: > I am configuring cgroups on my server for the
Bhaskar Chakraborty via slurm-users
8
Jul 20
[slurm-users] Custom Plugin Integration
Just to add some more ideas in response to your other comments, which I realized & read a little
Martin Lee via slurm-users
3
Jul 19
[slurm-users] CLOUD nodes with unknown IP addresses
I had missed cloud_reg_addrs - we're running an older version of Slurm and although I'd found
Alper AYKUT via slurm-users
Jul 18
[slurm-users] Slurm Users cannot open gui login nodes with X11 without logging into the Compute node.
Hello 1 Login Server I have a total of 3 calculation nodes. Especially in non-GUI jobs, I can run
William VINCENT via slurm-users
14
Jul 18
[slurm-users] Slurmctld process error 'double free or corruption' on RHEL 9 (Rocky Linux)
On 18-07-2024 08:15, William V via slurm-users wrote: > yes ! that work with crb repo Thanks for
Mike Mikailov via slurm-users
Jul 16
[slurm-users] Slurm sacct ResvCPURAW invalid field in version 24.12.5
Dear All, Does anyone know what is the equivalent field for ResvCPURAW of sacct command of Slurm in
joao.damas--- via slurm-users
3
Jul 15
[slurm-users] _refresh_assoc_mgr_qos_list: no new list given back keeping cached one
Hi João, did you get this problem solved? I have the exact same problem and would be very interested.
Emyr James via slurm-users
2
Jul 12
[slurm-users] Job Step State
There's an enum job_states in slurm.h. It becomes OUT_OF_MEMORY, &c. in the job_state_string
jack.mellor--- via slurm-users
5
Jul 12
[slurm-users] Nodes TRES double what is requested
Not sure if this is correct but I think you need to leave a bit of RAM for the OS to use so best not
Cutts, Tim via slurm-users
2
Jul 11
[slurm-users] SLURM noob administrator question
You probably want to look at scontrol show node and scontrol show job for that node and the jobs on
Daniel Letai via slurm-users
Jul 11
[slurm-users] Replacing MUNGE with SACK (auth/slurm)
Does SACK replace MUNGE? As in - MUNGE is not required when building Slurm or on compute? If so, can
Paul Raines via slurm-users
3
Jul 9
[slurm-users] Job submitted to multiple partitions not running when any partition is full
Thanks. I traced it to a MaxMemPerCPU=16384 setting on the pubgpu partition. -- Paul Raines (http://
Daniel L'Hommedieu via slurm-users
5
Jul 9
[slurm-users] Temporarily bypassing pam_slurm_adopt.so
At HMS we do the same as Paul's cluster and specify the groups we want to have access to all our
LEROY Christine 208562 via slurm-users
2
Jul 9
[slurm-users] vers 23: slurmctld pb (memory leak and response time)
Hi all, I'm replying to myself: in fact the memory leak happened when the slurm.conf file was
Dan Healy via slurm-users
4
Jul 8
[slurm-users] Can SLURM queue different jobs to start concurrently?
Dan, The requirement for varying CPU and RAM requirements sounds like it could be met with the
Chris Taylor via slurm-users
4
Jul 6
[slurm-users] cgroups/v2 plugin rpmbuild issue
Ah, thanks :) - I just realized this when I saw v1 was the only plugin included in the source. Yes I
Roland Fehrenbacher via slurm-users
Jul 6
[slurm-users] Qlustar HPC Core Stack 24.06
Hi all, the Qlustar HPC Core Stack 24.06 update is available. It includes an update to Slurm 23.11.8.
Karri Vrkreddy via slurm-users
Jul 5
[slurm-users] Run a program via Strigger when a node joins the cluster
Hi, We have a requirement to run a specific program whenever any new node joins the slurm cluster.
Ricardo Cruz via slurm-users
8
Jul 5
[slurm-users] Using sharding
I would try specifying cpus and mem just to be sure it's not requesting 0/all. Also, I was running
Robert Kudyba via slurm-users
3
Jul 3
[slurm-users] Re: Slurm commands fail when run in Singularity container with the error "Invalid user for SlurmUser slurm, SINGULARITYENV_SLURM_CONF
Thanks Ben but there's no mention of SINGULARITYENV_SLURM_CONF in that page. Slurm is not in the
Markus Köberl via slurm-users
5
Jul 3
[slurm-users] problem with squeue --json with version 24.05.1
On 7/2/24 18:48, Markus Köberl via slurm-users wrote: > $ squeue --version > slurm 24.05.1 >
Marko Markoc via slurm-users
8
Jul 2
[slurm-users] AllowAccounts partition setting
Hi Christine, we don't use AllowGroups but have AllowAccounts, which is not working anymore as
daijiangkuicgo--- via slurm-users
2
Jun 29
[slurm-users] Why AllowAccounts not work in slurm-23.11.6
AllowGroups is ok. -- slurm-users mailing list -- slurm...@lists.schedmd.com To unsubscribe send
Tim Wickberg via slurm-users
Jun 27
[slurm-users] Slurm version 24.05.1 is now available
We are pleased to announce the availability of Slurm version 24.05.1. This release addresses a number