I think I figured it out.
There is, indeed, a script (slurmsync.py) that runs every minute on the controller and restarts nodes that have been preempted.
That was not, however, my concern.
It turns out that RebootProgram is not set in slurm.conf.tpl. Setting it to the same value as SuspendProgram (i.e. suspend.py) makes scontrol reboot work.
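For reference, a minimal sketch of the relevant slurm.conf lines; the exact path to suspend.py is an assumption here and depends on where your deployment installs the scripts:

```
# Path is illustrative; match whatever SuspendProgram already points at
SuspendProgram=/path/to/suspend.py
RebootProgram=/path/to/suspend.py
```

Since suspend.py tears the instance down and slurmsync.py (or ResumeProgram) brings it back, reusing it as the RebootProgram effectively gives you a power-cycle-style reboot for cloud nodes.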
After that, it's as simple as issuing scontrol reboot ASAP <nodename> once it's clear that no further jobs should be scheduled on that node.
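Concretely, the sequence looks like this (<nodename> is a placeholder):

```shell
# Reboot the node as soon as it becomes idle; ASAP also drains it,
# so no new jobs are scheduled on it in the meantime.
scontrol reboot ASAP <nodename>

# Check progress: the node carries a reboot/drain flag until it
# re-registers with the controller.
sinfo -n <nodename> -o "%N %T"
```

Note that the node only reboots once running jobs have finished; ASAP just guarantees nothing new lands on it first.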