[slurm-dev] Is /tmp guaranteed to be writable?


Bob Moench

Nov 11, 2014, 2:38:20 PM
to slurm-dev

Does anyone know if /tmp is guaranteed to be writable on the
cluster nodes under SLURM?

Thanks,
Bob


--
Bob Moench (rwm); PE Debugger Development; 605-9034; 354-7895; SP 24227

Jared David Baker

Nov 11, 2014, 3:40:22 PM
to slurm-dev
Bob,

It's not strictly guaranteed, but almost every system I've seen or worked with has /tmp set to mode 1777: read, write, and execute for user, group, and other, with the sticky bit set. The sticky bit means that users (other than root) cannot delete files owned by other users. Those are the permissions I have set on all of our Linux compute and infrastructure nodes, and as far as I know it has nothing to do with SLURM itself.
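A minimal sketch of checking for those mode-1777 bits from Python (the helper name is illustrative, not from any existing tool):

```python
import os
import stat

def is_world_writable_sticky(path):
    """Return True if `path` is world-writable and has the sticky bit
    set (the 'other write' and S_ISVTX bits of a 1777-mode directory)."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IWOTH) and bool(mode & stat.S_ISVTX)

if __name__ == "__main__":
    # On most Linux systems this prints True, but as noted above,
    # nothing forces a site to configure /tmp this way.
    print("/tmp world-writable with sticky bit:", is_world_writable_sticky("/tmp"))
```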

Jared

Bob Moench

Nov 11, 2014, 3:55:21 PM
to slurm-dev

Yes, it is certainly the norm for /tmp to be writable. In fact,
it can be quite a pain if it is not. Unfortunately,
I am familiar with Linux sites that do not allow /tmp to be
writable, and I was hoping that I could hide behind the shadow
of SLURM requiring a writable /tmp.

It sounds like you are not aware of an explicit need for SLURM to write
to /tmp. Yet maybe someone else is... or maybe there just isn't
one.

Bob

Morris Jette

Nov 11, 2014, 4:03:22 PM
to slurm-dev
Slurm has no need of /tmp to even exist.
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Jared David Baker

Nov 11, 2014, 4:08:18 PM
to slurm-dev
I can understand why admins don't allow writing to /tmp; it's just not enforced on our clusters, I suppose. Perhaps /tmp is stateless and very limited, or you have badly behaved users. So I understand. I don't have our Slurm installation write anything to /tmp on the compute nodes. I suppose you could; I just don't see a need to from an admin perspective. Are you asking from a user's point of view or an administrator's?

Bob Moench

Nov 11, 2014, 4:30:18 PM
to slurm-dev

Neither. I am a tool writer. I need some temporary space to save
scalably staged data and executables over on the compute
nodes. To make matters worse, I need to reliably clean up
afterward. I mention "scalably" to indicate that such space
should not be some centrally located, cross mounted
location. The cluster node /tmp fits the bill... if I can
write on it.

Does SLURM support any other user accessible disk space on
the cluster nodes?

Christopher Samuel

Nov 11, 2014, 5:41:24 PM
to slurm-dev

On 12/11/14 06:38, Bob Moench wrote:

> Does anyone know if /tmp is guaranteed to be writable on the
> cluster nodes under SLURM?

If they wish to comply with the FHS (Filesystem Hierarchy Standard) or
the LSB (Linux Standards Base which uses the FHS) then yes, it must be.

http://refspecs.linuxfoundation.org/FHS_2.3/fhs-2.3.html

# /tmp : Temporary files
#
# Purpose
#
# The /tmp directory must be made available for programs that require
# temporary files.
#
# Programs must not assume that any files or directories in /tmp are
# preserved between invocations of the program.
#
# Rationale
#
# IEEE standard P1003.2 (POSIX, part 2) makes requirements that are
# similar to the above section.

Oh yes, the small matter of POSIX... The old (1992) version of POSIX is here:

http://www.oldlinux.org/Linux.old/Ref-docs/POSIX/all.pdf

and says:

# 2.7 Required Files
#
# [...]
# The following directory shall exist on conforming systems and
# shall be used as described.
#
# /tmp
#
# A directory made available for programs that need a place to
# create temporary files. Applications shall be allowed to create
# files in this directory, but shall not assume that such files
# are preserved between invocations of the application.

The current version (2013) is basically the same, only a single word
changed, and says:

http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap10.html

# The following directory shall exist on conforming systems and shall
# be used as described:
#
# /tmp
# A directory made available for applications that need a place to
# create temporary files. Applications shall be allowed to create
# files in this directory, but shall not assume that such files are
# preserved between invocations of the application.

So there you go! If /tmp isn't writeable then it's a bug, not a feature.

All the best,
Chris
--
Christopher Samuel Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
http://www.vlsci.org.au/ http://twitter.com/vlsci

Christopher Samuel

Nov 11, 2014, 5:44:21 PM
to slurm-dev

On 12/11/14 08:30, Bob Moench wrote:

> Neither. I am a tool writer. I need some temporary space to save
> scalably staged data and executables over on the compute
> nodes. To make matters worse, I need to reliably clean up
> afterward. I mention "scalably" to indicate that such space
> should not be some centrally located, cross mounted
> location. The cluster node /tmp fits the bill... if I can
> write on it.

Whilst our compute nodes have /tmp (and /var/tmp) writeable, it is only
1GB in size, as it is a RAM disk (our compute nodes are diskless). Our
instructions to users are to use $TMPDIR instead, which maps to our
global /scratch GPFS filesystem and is cleaned up for them on job exit.

I have great interest in this SPANK plugin that uses Linux namespaces to
map /tmp for a job to a temporary directory (i.e. $TMPDIR in our case)
so we don't have to do so much hacking of code that seems to think it
should be able to dump gigabytes of stuff to /tmp.

https://github.com/hpc2n/spank-private-tmp

I just need some Copious Free Time(tm) to give it a whirl..

> Does SLURM support any other user accessible disk space on
> the cluster nodes?

Speaking as a sysadmin: any application should honour and use $TMPDIR if defined.

If that isn't set then all you can really rely on is /tmp and /var/tmp.
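That fallback order can be sketched in Python, whose tempfile module already consults $TMPDIR before falling back to /tmp; the context manager also gives the reliable cleanup Bob asked about (the "stage-" prefix and payload filename are illustrative):

```python
import os
import tempfile

# tempfile.gettempdir() checks $TMPDIR first, then /tmp, so a tool
# that uses it automatically honours a site-provided scratch area.
print("temp root:", tempfile.gettempdir())

# TemporaryDirectory removes the directory and its contents when the
# with-block exits, which matters for per-node staging that must not
# leak files behind the job.
with tempfile.TemporaryDirectory(prefix="stage-") as workdir:
    staged = os.path.join(workdir, "payload.bin")
    with open(staged, "wb") as f:
        f.write(b"staged data")
    # ... run the tool against the staged files here ...

# By this point the whole directory is gone.
assert not os.path.exists(workdir)
```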