[slurm-users] SlurmdSpoolDir full

Xaver Stiensmeier

Dec 8, 2023, 10:01:15 AM
to slurm...@lists.schedmd.com
Dear slurm-user list,

during a larger cluster run (the same one I mentioned earlier, with 242
nodes), I got the error "SlurmdSpoolDir full". The SlurmdSpoolDir is
apparently a directory on the workers that is used for job state
information (https://slurm.schedmd.com/slurm.conf.html#OPT_SlurmdSpoolDir).
However, I was unable to find more precise information about that
directory. We compute all data on another volume, so the partition
holding SlurmdSpoolDir has roughly 38 GB of free space and nothing is
intentionally written there during the run. This error only occurred on
very few nodes.

I would like to understand what slurmd is placing in this directory
that fills up the space. Do you have any ideas? Due to the workflow
used, we have a hard time reconstructing the exact scenario that caused
this error. I guess the "fix" is to just pick a somewhat larger disk,
but I am unsure whether Slurm behaves normally here.

Best regards
Xaver Stiensmeier


Ole Holm Nielsen

Dec 8, 2023, 10:25:52 AM
to slurm...@lists.schedmd.com
Hi Xaver,
With a Slurm RPM installation this directory is configured as:

$ scontrol show config | grep SlurmdSpoolDir
SlurmdSpoolDir = /var/spool/slurmd

In SlurmdSpoolDir we find job scripts and various cached data. In our
cluster it's usually only a few megabytes on each node. We have never
had any issues with the size of SlurmdSpoolDir.
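
To see what actually occupies the space in there, something along
these lines should work (a rough sketch, assuming the default path
shown above):

$ du -xsh /var/spool/slurmd/* 2>/dev/null | sort -h | tail
$ ls -lhS /var/spool/slurmd | head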

Do you store SlurmdSpoolDir on shared network storage, or is it on a
local disk?

Can your job scripts contain large amounts of data?

/Ole

Brian Andrus

Dec 9, 2023, 6:42:33 PM
to slurm...@lists.schedmd.com
Xaver,

It is likely your /var or /var/spool mount.
That may be a separate partition or part of your root partition. It is
the partition that is full, not the directory itself, so the cause
could very well be log files in /var/log. I would check which (if any)
partitions are getting filled on the node. Running 'df -h' will give
you some info to get started.
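
Something like this would show where each of the likely directories
lives and what is eating the space (the paths are just the defaults,
adjust to your layout):

$ df -h /var/spool/slurmd /var/log /tmp
$ du -xsh /var/* 2>/dev/null | sort -h | tail

The first command shows which partition each directory is actually on
and how full it is; the second points at the biggest consumers under
/var.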

Brian Andrus

Xaver Stiensmeier

Dec 10, 2023, 3:54:00 AM
to slurm...@lists.schedmd.com
Hello Brian Andrus,

we ran 'df -h' to determine the amount of free space I mentioned
before. I should also add that at the time we inspected the node there
were still around 38 GB of free space; however, we were unable to watch
the remaining space while the error occurred, so maybe the large
file(s) were removed immediately.

I will take a look at /var/log. That's a good idea. I don't think there
will be anything unusual, but it's something I haven't considered yet
(that the cause of the error might be somewhere else).
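
To catch it next time, we will probably log the usage on the affected
nodes while the workflow is running, roughly like this (just a sketch,
assuming the default spool path; the log file location is only an
example):

$ while true; do date; df -h /var/spool/slurmd /tmp /var/log; \
    sleep 30; done >> /root/disk-usage.log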

Best regards
Xaver

Ryan Novosielski

Dec 10, 2023, 11:30:33 AM
to Slurm User Community List
This is basically always somebody filling up /tmp and /tmp residing on the same filesystem as the actual SlurmdSpoolDirectory.

/tmp, without modifications, is almost certainly the wrong place for
temporary HPC files; they are simply too large for it.
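
A quick way to check whether that is the case on a node (a rough
sketch, assuming the default spool path):

$ findmnt -T /tmp
$ findmnt -T /var/spool/slurmd

If both report the same source device, anything written to /tmp eats
directly into the space slurmd has available.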

Sent from my iPhone

> On Dec 8, 2023, at 10:02, Xaver Stiensmeier <xaversti...@gmx.de> wrote:
>
> Dear slurm-user list,

Peter Goode

Dec 10, 2023, 12:07:56 PM
to Slurm User Community List
We maintain /tmp as a separate partition on all nodes to mitigate this
exact scenario, though it doesn't necessarily need to be part of the
primary system RAID; there is no need for /tmp resiliency.
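
A minimal /etc/fstab sketch for such a setup (the device name is only
an example; any node-local disk or LVM volume will do):

/dev/sdb1   /tmp   ext4   defaults,nodev,nosuid   0 2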


Regards,
Peter

Peter Goode
Research Computing Systems Administrator
Lafayette College

> On Dec 10, 2023, at 11:33, Ryan Novosielski <novo...@rutgers.edu> wrote:
>
> This is basically always somebody filling up /tmp and /tmp residing on the same filesystem as the actual SlurmdSpoolDirectory.

Ole Holm Nielsen

Dec 10, 2023, 1:22:15 PM
to slurm...@lists.schedmd.com
On 10-12-2023 17:29, Ryan Novosielski wrote:
> This is basically always somebody filling up /tmp and /tmp residing on the same filesystem as the actual SlurmdSpoolDirectory.
>
> /tmp, without modifications, is almost certainly the wrong place for temporary HPC files; they are simply too large for it.

Agreed! That's why temporary job directories can be configured in
Slurm; see the Wiki page for a summary:
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_configuration/#temporary-job-directories
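
One common way to do this (only a sketch; check the wiki page and the
job_container.conf man page for your Slurm version) is the
job_container/tmpfs plugin, which gives every job a private /tmp backed
by a node-local path:

# slurm.conf
JobContainerType=job_container/tmpfs
PrologFlags=Contain

# job_container.conf
AutoBasePath=true
# example path; point this at a node-local disk
BasePath=/local/scratch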

/Ole