[slurm-users] Elastic Compute on Cloud - Error Handling


Felix Wolfheimer

Jul 28, 2018, 2:33:48 PM
to slurm...@schedmd.com
I'm experimenting with SLURM Elastic Compute on a cloud platform and I'm facing the following situation: SLURM requests that a compute instance be started. The ResumeProgram tries to create the instance but doesn't succeed, because the cloud provider can't provide the requested instance type at that point in time (this happens, for example, when a GPU instance is requested but the datacenter simply doesn't have the capacity to provide it).
SLURM then marks the node as "DOWN" and doesn't try to request it again. For this scenario that behavior is not optimal. Instead of marking the node DOWN permanently, I'd like slurmctld to simply forget about the failure and try to start the node again after some time. Is there a knob which can be used to achieve this? Ideally, the behavior would be controlled by the return code of the ResumeProgram, e.g.,

return code=0 - Node is starting up
return code=1 - A permanent error has occurred, don't try again
return code=2 - A temporary failure has occurred. Try again later.

Lachlan Musicman

Jul 28, 2018, 10:00:31 PM
to Slurm User Community List
I don't have an answer to your question - but I would like to know how you manage injecting the hostname and/or IP address into slurm.conf and then distributing it in this situation?

I have read the documentation, but it doesn't indicate a best practice in this scenario iirc.

Is it as simple as doing those steps - wait for boot, grab hostname, inject into slurm.conf, distribute slurm.conf to nodes, restart slurm?

Cheers
L.

Felix Wolfheimer

Jul 31, 2018, 2:09:51 AM
to Slurm User Community List
After a bit more testing I can answer my original question: I was just too impatient. When the ResumeProgram comes back with an exit code != 0, SLURM doesn't taint the node, i.e., it tries to start it again after a while. Exactly what I want! :-)
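
So the behavior I asked for is already there. The failure-handling part of a ResumeProgram can be sketched roughly like this (a sketch only; create_instance_for is a placeholder for whatever your cloud provider's API/CLI call actually is):

#!/bin/bash
# Sketch of the failure handling in a ResumeProgram.
# SLURM passes the node name(s) to resume as the first argument.
NODE="$1"

# Placeholder: replace with your cloud provider's call that creates the
# instance for $NODE from a pre-configured template.
if ! create_instance_for "$NODE"; then
    # Capacity error (or any other failure): exit non-zero. slurmctld does
    # not mark the node DOWN and tries to resume it again after a while.
    echo "resume of $NODE failed, slurmctld will retry later" >&2
    exit 1
fi

# Instance request accepted: exit 0 and let the node register on its own.
exit 0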

@Lachlan Musicman: My slurm.conf Node and Partition section looks like
this (if I want to have a maximum cluster size of 10 instances):


ResumeProgram=<myprog>
NodeName=compute-[1-10] CPUs=<whatever> <...> State=CLOUD
PartitionName=compute Nodes=compute-[1-10] <...> State=UP
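
Alongside the Node and Partition lines, the power-saving knobs that go with this look roughly like the following (a sketch with example values; <myprog_shutdown> stands for the script that terminates the instance, and the timeouts depend on how long your instances need to boot and shut down):

SuspendProgram=<myprog_shutdown>
# Seconds a cloud node may sit idle before SLURM calls the SuspendProgram
SuspendTime=600
# How long slurmctld waits for a resumed node to register before giving up
ResumeTimeout=600
# How long slurmctld waits for a suspended node to actually power down
SuspendTimeout=120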

<myprog> is called by SLURM when it wants to spin up one of the instances. In this script I create an instance from an instance template, which is pre-configured to mount the SLURM installation from the master node. That installation also contains the slurm.conf file, i.e., the slurm.conf the new node sees is the one from the master, so all instances are consistent. Although I could probably set the hostname of the new node to the NodeName (compute-<x>) and create a DNS record for the subnet the node is deployed to, so that the hostname gets correctly resolved, I use the following simpler approach:

When the instance starts up, it gets an arbitrary name from DHCP. It's
easy to get this name in <myprog>. Once I have this, I only need to map
it to the NodeName with the following command (also in <myprog>):

scontrol update NodeName=compute-<x> NodeHostName=<hostname> NodeAddr=<hostname>

where <hostname> is the hostname assigned by DHCP. This way, the node
registers itself correctly. I assign a tag to the instance which
contains the NodeName, such that I can find it easily when SLURM calls
the SuspendProgram to terminate the node.
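
Put together, the flow in <myprog> for a single node is roughly the following (a sketch; create_instance_from_template and get_instance_hostname are placeholders for whatever your cloud provider's CLI/API offers):

#!/bin/bash
# Sketch of <myprog> for a single node.
NODENAME="$1"   # e.g. compute-3

# 1. Create the instance from the pre-configured template and tag it with
#    the NodeName so the SuspendProgram can find it again later.
create_instance_from_template "$NODENAME" "slurm-node=$NODENAME" || exit 1

# 2. Ask the provider for the hostname the instance got from DHCP.
DHCP_NAME=$(get_instance_hostname "$NODENAME") || exit 1

# 3. Map that hostname to the SLURM NodeName so the node registers correctly.
scontrol update NodeName="$NODENAME" NodeHostName="$DHCP_NAME" NodeAddr="$DHCP_NAME"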