a) interpolation is just a certain (lower-order) polynomial approximation to (ahem) a full-scale polynomial.
b) the J.-M. Muller book indicates that the number of bits required to correctly round ALL cases is somewhere in the neighborhood of 2×n+3..2×n+12 (chapter 12) — for double precision (n = 53) that would be roughly 109..118 bits.
c) given an accurate polynomial with somewhat less than 1 ULP of error, call it p(): John Gustafson's paper on the "mine field" method constructs a correction to the polynomial p(), call it c(), that should close the gap. y = p(x) has some residual error, and z = c(x) carries the corrections to p(); so r = (p+c)(x) should be a single polynomial that avoids the hard-to-round cases directly. Proving that it does is the hard part.
d) my research indicates that something "about" n+8 bits of fraction should be able to deliver IEEE 754-quality rounding using (p+c)(x).
e) my patented work has already indicated that n+6 bits of fraction can deliver an accuracy of 0.5+1/247 ULP RMS, and exp(x) takes only 14 cycles (yes, no missing 0s at the end) in IEEE 754; it should take 16 cycles in unum and have similar accuracy even without correction.
f) how much more do you actually need if you only incur a rounding error once every ~300 calculations?
g) given a 64×64 multiplier tree, one has n+12 bits of fraction (2×n+24 before rounding), and one can drive this down to one rounding error every 10,000 calculations RMS (and get both integer multiplication and integer division for free!). How much better does one need?