slurm-users
1–30 of 8488
You have reached the Slurm Workload Manager user list archive. Please post all new threads to slurm-users@schedmd.com; all communication will be copied here. (This is just an archive.)
Paul Raines via slurm-users
3:00 PM
[slurm-users] Setting GrpTRES for specific Account only for specific Partition(s)
I am trying to figure out how one can limit resources like with GrpTRES for specific accounts only
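One approach often used for per-partition limits is to put the GrpTRES on the association for that partition only; the account, user, and partition names below are hypothetical and this is only a sketch, not a confirmed answer from the thread:
    sacctmgr add user alice account=proj partition=gpu          # association scoped to one partition
    sacctmgr modify user where name=alice account=proj partition=gpu set GrpTRES=cpu=64,gres/gpu=4
    sacctmgr show assoc where account=proj format=Account,User,Partition,GrpTRES   # verify the limit landed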
Ozeryan, Vladimir via slurm-users
2
12:23 PM
[slurm-users] Unexpected missing socket error
Hello, we had this issue previously - it was connected to timeouts, where the socket disappeared due
Yusuke Saiki via slurm-users
3
11:58 AM
[slurm-users] About topology.conf
Thank you, Mr. Paul. You're a big help. -- slurm-users mailing list -- slurm...@lists.schedmd.
Tom Sparks via slurm-users
2
Oct 3
[slurm-users] cloud compute limits/budget
Hey, there are definitely people who can answer this question better as I am not overly familiar with
Bruno Bruzzo via slurm-users
8
Oct 1
[slurm-users] SRUN and SBATCH network issues on configless login node.
On 9/30/25 20:52, Bruno Bruzzo via slurm-users wrote: > Update: > We have solved the issue.
Grigory Shamov via slurm-users
5
Sep 30
[slurm-users] How to make TLS and PMIx v4 work together?
Regarding performance, have a look at the release notes: https://slurm.schedmd.com/release_notes.html
jibl3azzmenusa--- via slurm-users
2
Sep 28
[slurm-users] Paternity certificate in Morocco
Everything you need to know about the paternity certificate in Morocco. You can download the application form via the direct link https://www.
Steve Kirk via slurm-users
3
Sep 26
[slurm-users] FUTURE nodes do not return to idle on slurmctld restart
Afternoon, On Fri, 2025-09-26 at 09:06 +0200, Bjørn-Helge Mevik via slurm-users wrote: > I think
Julien Tailleur via slurm-users
5
Sep 24
[slurm-users] Node switching randomly to down state
Look at the slurmd logs on these nodes. Or try to run slurmd in non background mode. And as I said on
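For reference, running slurmd in the foreground with verbose logging, as suggested above, typically looks like this (a minimal sketch; the node name is whichever host keeps going down):
    slurmd -D -vvv                                  # stay in the foreground and log verbosely
    scontrol show node <nodename> | grep -i reason  # on the controller: why Slurm marked it down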
Dhumal, Dr. Nilesh via slurm-users
3
Sep 22
[slurm-users] No output and can't job by id
Have you set up and tested munge On Mon, Sep 22, 2025, 9:46 PM John Hearns <hea...@gmail.com>
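A quick way to test munge between hosts, as suggested above (hostnames are placeholders):
    munge -n | unmunge                        # local encode/decode round trip
    munge -n | ssh <compute-node> unmunge     # cross-host check; STATUS should report Success
Both hosts need the same munge.key and reasonably synchronized clocks.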
Gestió Servidors via slurm-users
8
Sep 22
[slurm-users] Node in drain state
Hi Patrick, On 9/22/25 07:39, Patrick Begou via slurm-users wrote: > I also see twice a node
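For anyone hitting the same symptom, the usual first steps are to read the drain reason and, once the cause is fixed, resume the node (node name is a placeholder; this is a generic sketch, not the thread's resolution):
    scontrol show node <nodename> | grep -E 'State|Reason'
    scontrol update NodeName=<nodename> State=RESUME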
Dhumal, Dr. Nilesh via slurm-users
6
Sep 21
[slurm-users] Compute node not responding
run slurmd -V This will give you the version On Sun, Sep 21, 2025, 7:57 AM Dhumal, Dr. Nilesh via
Josu Lazkano Lete via slurm-users
10
Sep 18
[slurm-users] seff for GPU
> That's sad that seff is being deprecated due to dropping the perl api. Agreed, but at least
Kevin M. Hildebrand via slurm-users
8
Sep 17
[slurm-users] Scheduling issues with multiple different types of GPU in one partition
We have heterogeneous partitions too. We see this occasionally, but it's not a huge problem. The
Ole Holm Nielsen via slurm-users
Sep 10
[slurm-users] New "NOT-state" selection of the sinfo command in Slurm 25.05
We just upgraded Slurm to 25.05.3, and I would like to highlight a new functionality of the "
John Hearns via slurm-users
6
Sep 10
[slurm-users] Development RPMs for cgroups v2
Thankyou! On Wed, 10 Sept 2025 at 08:11, Bjørn-Helge Mevik via slurm-users <slurm-users@lists.
blines--- via slurm-users
Sep 8
[slurm-users] NHC skips nodes in "mix-" state
The node-mark-offline script skips nodes in the "mix-" state with the error: State "
John Snowdon via slurm-users
8
Sep 8
[slurm-users] Creating /run/user/$UID - for Podman runtime
Thanks to everyone for your feedback. We've now implemented two simple prolog/epilog scripts
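The prolog/epilog scripts themselves are not quoted in the snippet; one minimal sketch of the idea, assuming systemd-logind is available (and not necessarily what the posters implemented), is:
    # Prolog (root on the compute node):
    loginctl enable-linger "$SLURM_JOB_USER"    # logind starts user@<uid>.service, which creates /run/user/<uid>
    # Epilog:
    loginctl disable-linger "$SLURM_JOB_USER"   # only safe once the user has no other jobs left on the node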
Marshall Garey via slurm-users
Sep 4
[slurm-users] Slurm version 25.05.3 is now available
We are pleased to announce the availability of Slurm version 25.05.3. This version fixes an issue
Michele Esposito Marzino via slurm-users
Sep 4
[slurm-users] Discussion about the --cpus-per-task flag
Hi everyone, I would like to ask about the rationale behind the --cpus-per-task flag. While I do
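As a generic illustration of how the flag is usually read (not taken from the thread): --ntasks sets how many processes are launched, while --cpus-per-task reserves CPUs for the threads inside each process:
    #!/bin/bash
    #SBATCH --ntasks=4            # 4 ranks/processes
    #SBATCH --cpus-per-task=8     # 8 CPUs reserved per rank, e.g. for OpenMP threads
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    srun --cpus-per-task="$SLURM_CPUS_PER_TASK" ./my_hybrid_app   # hypothetical binary; 4*8 = 32 CPUs total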
Rémy Dernat via slurm-users
2
Sep 4
[slurm-users] sacctmgr output with and without associations
Hi Rémy, I'm not sure how to do this using only sacctmgr, but to achieve something similar we
Matthias Leopold via slurm-users
Sep 1
[slurm-users] Can't remove account
Hi, I have a cluster with account foo. I want to delete this account. For every user that had
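The snippet is truncated, but the usual ordering (using the account name foo from the message; the exact commands are a sketch) is to delete the user associations first and only then the account:
    sacctmgr remove user where account=foo    # drops every user association under foo
    sacctmgr remove account foo
Running or pending jobs still tied to those associations will block the removal.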
Jon Marshall via slurm-users
5
Aug 28
[slurm-users] slurmrestd unable to find plugin rest_auth/jwt
For anyone interested, I managed to resolve this by running: rpmbuild -ta slurm-24.11.6.tar.bz2 --
Manisha Yadav via slurm-users
9
Aug 28
[slurm-users] Assistance with Node Restrictions and Priority for Users in Floating Partition
Manisha Yadav via slurm-users <slurm...@lists.schedmd.com> writes: > Hii Bjørn-Helge,
Paul Edmon via slurm-users
17
Aug 27
[slurm-users] Node Health Check Program
Related to the NHC "dev" branch (version 1.5) I've been looking at the issue https://
David Gauchard via slurm-users
3
Aug 27
[slurm-users] Slurm 25.05: Retrieving jobs GPU Indices on Heterogeneous Cluster
Many thanks, this command gives indeed what I need ! Le 26/08/2025 à 22:35, Laura Hild via slurm-
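The command being thanked for is not quoted in the snippet; one command that does expose per-job GPU indices (which may or may not be the one suggested) is the detailed job view:
    scontrol show job -d <jobid> | grep -i gres    # per-node detail lines include something like GRES=gpu:2(IDX:0,1)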
Leonardo Sala via slurm-users
Aug 26
[slurm-users] Issues with dynamic configless nodes and pam_slurm_adopt
Hallo everyone we have recently noticed that when running nodes in configless and dynamic mode,
Bjørn-Helge Mevik via slurm-users
2
Aug 21
[slurm-users] Tips or experiences with Burst Buffers?
We are listening here too and have been dragging our feet for awhile now. Bill On 8/21/25 6:48 AM,
Ratnasamy, Fritz via slurm-users
Aug 18
[slurm-users] GPFS nvme limit storage per job
We have our new dedicated GPFS storage and were wondering how to accommodate all the users on the
Xaver Stiensmeier via slurm-users
4
Aug 18
[slurm-users] Nodes Become Invalid Due to Less Total RAM Than Expected
Guillaume, Jobs shouldn't fail if they are requesting the max amount of memory they intend to use
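A common workaround for nodes reporting slightly less RAM than slurm.conf expects (node names and figures below are placeholders, not from the thread) is to check what slurmd detects and set RealMemory with a little headroom:
    slurmd -C                                        # prints the NodeName=... RealMemory=... line detected on this host
    NodeName=node[01-10] CPUs=64 RealMemory=191000   # slurm.conf entry kept below the smallest detected value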