I've recently been going through the Standard, line by line, and realized that it needs rewording wherever it says "evaluated exactly and then rounded", as in the definition of "fused", for example.
It is only necessary to evaluate a function to enough bits of precision that the way it rounds is determined; that is how all correctly rounding math libraries actually work. As written, the Standard implies that you must first evaluate the function to an infinite number of bits!
The Table-Maker's Dilemma (TMD) is that for transcendental functions (log, exponential, trig, and inverse trig), the number of bits required to determine the correct rounding can be arbitrarily large. There are two ways to deal with this. The first is Ziv's method: evaluate with several extra bits, and in the uncommon cases where you still can't tell which way the result rounds, re-evaluate with even more bits, repeating until the rounding is determined. The second, greatly preferred, is the Minefield Method: the locations of the hard-to-round values are found in advance, and the approximation is designed so that it rounds all of them correctly.
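Ziv's strategy can be sketched in a few lines. This is a toy illustration, not how a production library is written: the function name, the use of Python's `decimal` module for the high-precision evaluation, and the particular error margin are all my choices, and the sketch assumes x > 0 with a nonzero logarithm.

```python
from decimal import Decimal, localcontext

def ziv_ln(x, digits=15, extra=5, max_prec=200):
    """Round ln(x) to 'digits' significant digits, Ziv-style: evaluate
    with a few guard digits, and re-evaluate at higher precision only
    when the result is too close to a rounding boundary to decide."""
    prec = digits + extra
    while prec <= max_prec:
        with localcontext() as ctx:
            ctx.prec = prec
            hi = Decimal(x).ln()          # ln correct to prec digits
            ctx.prec = digits
            lo = +hi                      # round down to the target precision
            ctx.prec = prec
            tail = abs(hi - lo)           # distance from hi to the rounded value
            # one unit in the last place of lo at the target precision
            ulp = Decimal(1).scaleb(lo.adjusted() - digits + 1)
            # hi is itself off from the true ln(x) by about one ulp of its
            # own (higher) precision; if tail is farther than that from the
            # halfway point ulp/2, the rounding of the true value is settled
            margin = ulp * Decimal(10) ** (digits - prec)
            if abs(tail - ulp / 2) > margin:
                return lo
        prec *= 2                         # hard case: retry with more bits
    raise RuntimeError("precision cap reached: a genuinely hard-to-round input")
```

The loop terminates quickly for almost all inputs; the whole point of the TMD is that no precision cap chosen in advance is provably sufficient for every input of a transcendental function.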
Functions like sqrt are algebraic, that is, the results can be expressed as roots of polynomials with integer coefficients. For such functions the correct rounding can always be found with predictable, bounded resources, so they do not suffer from the TMD.
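Why sqrt is easy can be shown concretely: whether a candidate result is the correctly rounded square root reduces to two exact rational comparisons, with no open-ended precision search. A minimal sketch (the function name is mine; it ignores zeros, infinities, and the half-ulp asymmetry when the result sits exactly on a power of two):

```python
import math
from fractions import Fraction

def is_correctly_rounded_sqrt(x: float, r: float) -> bool:
    """Exact check that r is sqrt(x) rounded to nearest: the true root
    lies within half an ulp of r iff
        (r - ulp/2)**2 <= x <= (r + ulp/2)**2,
    and both comparisons are exact in rational arithmetic, since floats
    and their ulps are dyadic rationals."""
    rf = Fraction(r)                          # exact value of the float r
    half_ulp = Fraction(math.ulp(r)) / 2
    lo, hi = rf - half_ulp, rf + half_ulp
    return lo * lo <= Fraction(x) <= hi * hi
```

The comparisons bound the *square* of the interval endpoints against x, so no square root is ever extracted; that is exactly the predictable-resource argument, and the same trick works for any algebraic function via its defining polynomial.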