Killing a thread is different from breaking a thread. Killing a thread kills the thread unrecoverably, and no cleanup actions are run. This usually isn’t what you want, but there’s always a tension between these kinds of things: defensive programmers ask “How do I make myself unkillable so I can safely clean up?” but then implementors of a dynamic environment (like, say, DrRacket) find themselves asking “How do I kill a runaway thread?” Assuming you’re not DrRacket, you usually want `break-thread`, not `kill-thread`.
But perhaps you know that already, and your question is really about breaking, so by “kill-safe” you mean “break-safe.” You ask why `semaphore-wait` doesn’t just disable breaks while it waits, but that wouldn’t help with the problem the documentation alludes to. The problem is that there’s fundamentally a race condition in code like this:
(semaphore-wait sem)
; do something important
(semaphore-post sem)
If this code is executed in a context where breaks are enabled, it isn’t break-safe, even if `semaphore-wait` were to disable breaks while waiting on the semaphore. As soon as `semaphore-wait` returned, the queued break would be delivered, the stack would unwind, and the matching `semaphore-post` call would never execute, potentially leaving a lock held forever. So the issue isn’t that the semaphore’s internal state somehow gets corrupted, but that the state no longer reflects the value you want.
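To make the race concrete, here’s a small sketch (the thread body and timing are illustrative, not from the question): a break delivered mid-critical-section unwinds the stack right past the `semaphore-post`, leaving the count stuck at zero.

```racket
#lang racket

(define sem (make-semaphore 1))

(define t
  (thread
   (λ ()
     (semaphore-wait sem)
     (sleep 1)                ; stand-in for "do something important"
     (semaphore-post sem))))  ; never runs if a break arrives first

(sleep 0.1)                   ; let the thread enter the critical section
(break-thread t)              ; queue a break; it unwinds the thread's stack
(thread-wait t)

(semaphore-try-wait? sem)     ; => #f: the count is stuck at 0
```

Any later `semaphore-wait` on `sem` now blocks forever, even though nothing is actually using the protected resource.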
The right way to write that code is to disable breaks in the critical section:
(parameterize-break #f
  (semaphore-wait sem)
  ; do something important
  (semaphore-post sem))
This eliminates the race condition, since a break cannot be delivered until the `semaphore-post` executes (and synchronous, non-break exceptions can be protected against via `dynamic-wind` or an exception handler). But it creates a new problem: if a break is delivered while the code is blocked on the semaphore, it won’t be delivered until the semaphore is posted, which may take a very long time. You’d really rather just break the thread, since it hasn’t entered the critical section yet, anyway.
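For completeness, the `dynamic-wind` combination mentioned above looks something like this sketch, where `do-something-important` is a hypothetical placeholder for the body of the critical section:

```racket
#lang racket

;; sketch: disable breaks around the critical section, and use
;; dynamic-wind so the post also happens if the body raises an
;; ordinary (non-break) exception
(define (with-lock sem do-something-important)
  (parameterize-break #f
    (semaphore-wait sem)
    (dynamic-wind
     void
     do-something-important
     (λ () (semaphore-post sem)))))

;; usage:
;; (define lock (make-semaphore 1))
;; (with-lock lock (λ () ...))
```

Racket’s `call-with-semaphore` packages up essentially this post-on-escape pattern for you (see its documentation for the exact break behavior).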
This is what `semaphore-wait/enable-break` is for. You can think of it as a version of `semaphore-wait` that re-enables breaks internally, inside its implementation, and it installs an exception handler to ensure that if a break is delivered at the worst possible moment (after the count has been decremented but before breaks are disabled again), it reverses the change and re-raises the break exception. (I have no idea if this is how it’s actually implemented, but I think it’s an accurate model of its behavior.) This does exactly what we want, since it ensures that if we do enter the critical section, breaks are disabled until we exit it, but we can still be interrupted if we’re blocked waiting to enter it.
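That model can even be written down in user code, though (tellingly) not quite correctly. The following is a sketch of the behavior described above, not Racket’s actual implementation; it assumes the caller has breaks disabled, and the caveat at the end is the whole point:

```racket
#lang racket

;; a user-level approximation of semaphore-wait/enable-break,
;; assuming breaks are disabled in the calling context
;; (not the actual implementation)
(define (semaphore-wait/enable-break* sem)
  (define acquired? #f)
  (with-handlers ([exn:break? (λ (e)
                                ;; if the break arrived after the count
                                ;; was decremented, undo the decrement
                                ;; before re-raising
                                (when acquired? (semaphore-post sem))
                                (raise e))])
    (parameterize-break #t
      (semaphore-wait sem)
      (set! acquired? #t))))
```

The caveat: a break delivered in the instant between `semaphore-wait` returning and the `set!` executing still leaks the decrement, so this approximation has exactly the race it’s trying to close. The real primitive can make that step atomic; user code built on plain `semaphore-wait` can’t.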
So it’s not that anything really special is going on here; it’s that break safety is inherently anti-modular where state is involved, and you can’t implement `semaphore-wait/enable-break`-like constructs if you only have access to their `semaphore-wait`-like siblings.