slurm-users
1–30 of 8568
You have reached the Slurm Workload Manager user list archive. Please post all new threads to slurm-users@schedmd.com; all communication is copied here. (This is an archive only.)
sami sami via slurm-users · 5:52 AM
[slurm-users] Technical Update: Native Slurm Simulation Environment Successfully Configured
Hi All, I am currently working on a single Linux machine running Ubuntu 25.10 and am looking to …

Paul Raines via slurm-users · 2 messages · 3:38 AM
[slurm-users] inconsistent 'Requested node configuration is not available' error
Hi Paul, Just my 2 cents: Are you running the latest Slurm version? BR Ole On 5/13/2026 6:48 PM, Paul …

Ron Gould via slurm-users · 4 messages · May 8
[slurm-users] SLURM config option to not tie up a host completely.
As long as the partition does not force exclusive jobs, people can request in slurm the exact amount …

James A Allsopp via slurm-users · 2 messages · May 8
[slurm-users] Moving a Slurm database from one slurm database host to another
Hello Dr Allsopp. Only thing that comes to mind is this part in the upgrade guide: https://slurm. …

Tim Wickberg via slurm-users · May 7
[slurm-users] Slurm release candidate version 26.05.0rc1 is available for testing
We are pleased to announce the availability of Slurm release candidate 26.05.0rc1. To highlight some …

Emyr James via slurm-users · 2 messages · May 6
[slurm-users] Re: cgroup/v2 regression in 25.11.2 — memory.peak read disabled after first task
Dear all, Heads-up for anyone running 25.11.2 with cgroup/v2 + jobacct_gather/cgroup: there's a …

Pols, Maarten via slurm-users · 4 messages · May 6
[slurm-users] slurmdbd and slurmctld prevent alma9 login
Dear John, We don't see any strange warnings in the logs that could explain this. The disks aren…

Pharthiphan Asokan via slurm-users · 3 messages · May 5
[slurm-users] Jobs canceling when nodes become unreachable – need guidance
I would suggest making very sure that all compute nodes are time synced properly. Then look at logs …

fernando.seguro--- via slurm-users · 5 messages · Apr 24
[slurm-users] How to track solver usage in Slurm
Yes I suppose anything is possible with LD_PRELOAD hacks and such, but it would be much nicer if it …

Berg, Stephen P CIV USN NRL DET SSC MS (USA) via slurm-users · 5 messages · Apr 17
[slurm-users] Can't clear the REASON
I have tried that and it does work but the reason persists after the nodes get to an idle state. …

Pharthiphan Asokan via slurm-users · 3 messages · Apr 16
[slurm-users] Jobs aborting after slurmctld reload on Intel nodes - AMD unaffected
Have you run lstopo on Intel and AMD nodes? Run it in text mode and graphical mode. It might be worth …

Marshall Garey via slurm-users · Apr 14
[slurm-users] We are pleased to announce the availability of Slurm version 25.11.5
We are pleased to announce the availability of Slurm version 25.11.5. Some of the changes in 25.11.0 …

Massimo Sgaravatto via slurm-users · 11 messages · Apr 14
[slurm-users] Why my job can't start (backfill reservation issue)
I didn't mention that I have: SchedulerType=sched/backfill in slurm.conf I am reading https:// …

Ratnasamy, Fritz via slurm-users · 7 messages · Apr 9
[slurm-users] pam_slurm_adopt
You are getting some good insights here. To add to them, there is a VERY good chance that your pam.d …

Faraz Hussain via slurm-users · Apr 7
[slurm-users] How to delete my defaultwckey ?
I want every submitted job to have some value for the wckey, ie: #SBATCH --wckey=myproject I made the …

Hany Ibrahim via slurm-users · Apr 7
[slurm-users] Difference between sreport and sacct
Hello, I am trying to understand how sreport works. So I run sreport and obtain the statistics about …

Dustin Lang via slurm-users · 5 messages · Mar 31
[slurm-users] Shared queue: how to request full node with all resources?
No, as I replied to a previous poster, when you put "--exclusive" in the sbatch command, …

Antonio Jose Alonso-Stepanov via slurm-users · 3 messages · Mar 26
[slurm-users] How do you handle GPU node failures during long jobs?
On 3/14/26 11:46 pm, Antonio Jose Alonso-Stepanov via slurm-users wrote: > When a GPU node goes …

Xaver Stiensmeier via slurm-users · 4 messages · Mar 23
[slurm-users] _node_config_validate: gres/gpu: Count changed on node (0 != 2)
Hey, I am not 100% sure yet as that needs further testing (in case it is a race condition), but I …

Marshall Garey via slurm-users · Mar 12
[slurm-users] We are pleased to announce the availability of Slurm versions 25.11.4 and 25.05.7.
We are pleased to announce the availability of Slurm versions 25.11.4 and 25.05.7. Changes in both …

Gestió Servidors via slurm-users · Mar 9
[slurm-users] Server with two GPUs sharing each GPU to different partitions
Hello, First of all, sorry if my question is about something easy, but in my environment, this is the …

Guillaume COCHARD via slurm-users · 3 messages · Mar 2
[slurm-users] Optimizing CPU allocation in Slurm with hyperthreading enabled
This sounds like https://slurm.schedmd.com/slurm.conf.html#OPT_CR_ONE_TASK_PER_CORE …

Michael Gutteridge via slurm-users · Feb 25
[slurm-users] Missing mounts.h in Ubuntu Bionic build
Hi We're running into a problem with building 25.11.3 for Ubuntu Bionic. When the cgroup/v2 …

Groner, Rob via slurm-users · 2 messages · Feb 24
[slurm-users] Issue with coordinator increasing limits
Oops... in my error message command example, I used the account I had actually been using ( …

Adam Novak via slurm-users · 8 messages · Feb 24
[slurm-users] Reliable, Atomic, Idempotent, or Transactional Job Submission?
I'm not really in a position to check, since I'm not our cluster admin. I asked him and he …

Edmundo Carmona Antoranz via slurm-users · 5 messages · Feb 23
[slurm-users] Get the list of added dynamic nodes
On 2/23/26 3:19 am, Edmundo Carmona Antoranz via slurm-users wrote: > That pulled it off, thanks! …

Tim Wickberg via slurm-users · Feb 19
[slurm-users] We are pleased to announce the availability of Slurm version 25.11.3
We are pleased to announce the availability of Slurm version 25.11.3. 25.11.3 fixes a number of …

Marchand Aurélia via slurm-users · 5 messages · Feb 19
[slurm-users] preempt/qos is not working as expected
Thanks for letting us know, I was not aware of this necessity. On Thu, Feb 19, 2026 at 2:09 AM …

squallabc--- via slurm-users · 5 messages · Feb 16
[slurm-users] for srun jobs, if we try to get UserCPU time by sacct. it always shows 0.
Hello, This is an issue we already reported to the support[1]. For your information, we were using …

Alessandro D'Auria via slurm-users · 2 messages · Feb 13
[slurm-users] Sharding GPUs
I'll add some more information: OS RHEL 8.9, Slurm 25.11.2 …
