Table-maker's dilemma

Tomek Czajka

Jan 7, 2022, 5:00:39 AM
to Unum Computing
The "Beating Floating Point" paper says this:

"Some math library programmers worry about “The Table-Maker’s Dilemma” that some values can take much more work to determine their correct rounding; this can be eliminated by using interpolation tables instead of polynomial approximations."

Are the details of this described somewhere? I don't understand how using interpolation tables helps.

How do you calculate exp(x) with correct rounding by using interpolation tables?

I would think that instead, you can do this by iteratively using more and more accurate polynomial approximations until you're certain what the last bit is.

I'm not sure how feasible it is to figure out the worst case for this procedure, but at least for 64 bits or fewer it should be possible to brute-force it (in a smart way, skipping most easy cases).
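The iterative scheme described above can be sketched concretely. This is a minimal, hypothetical illustration in Python that stands in for polynomial evaluation with the stdlib `decimal` module (whose `exp()` is documented to be correctly rounded at the working precision); the point is the precision-doubling loop and the stopping test, not the evaluation method, and the starting precision of 25 digits is an arbitrary choice:

```python
from decimal import Decimal, getcontext

def exp_correctly_rounded(x: float) -> float:
    # Evaluate exp(x) at increasing working precision until both ends of
    # the error interval round to the same double; that double is then
    # guaranteed to be the correctly rounded result.
    prec = 25  # decimal digits: roughly double precision plus a margin
    while True:
        getcontext().prec = prec
        y = Decimal(x).exp()  # correctly rounded to `prec` digits
        # The true value lies within one unit in the last place of y.
        ulp = Decimal(10) ** (y.adjusted() - prec + 1)
        lo, hi = float(y - ulp), float(y + ulp)
        if lo == hi:   # both interval endpoints round to the same double
            return lo
        prec *= 2      # hard-to-round case: retry with more precision
```

The loop terminates for every x where exp(x) is not exactly a rounding boundary, but (as discussed below) bounding how large `prec` can get in the worst case is exactly the Table-Maker's Dilemma.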

Nicolas Brunie

Jan 7, 2022, 5:21:00 AM
to Tomek Czajka, Unum Computing
Hello Tomek,

If I remember correctly, even at 64-bit precision, brute-forcing the search for hard-to-round cases is not trivial, and some functions require up to 3 times the result precision.
Vincent Lefevre has been working for a while on determining the hard-to-round cases for 64-bit formats (http://perso.ens-lyon.fr/jean-michel.muller/TMDworstcases.pdf).
I think you will find the following references interesting:

The method you mention (iterating with more and more accurate implementations while testing the error) is called Ziv's method: https://hal-ens-lyon.archives-ouvertes.fr/ensl-00693317v2/document
It has been used and studied extensively, so it should be easy to find further references.

I hope this helps,

Regards,
N. Brunie


MitchAlsup

Jan 8, 2022, 8:05:09 PM
to Unum Computing
a) Interpolation is just a certain (lower-order) piecewise polynomial approximation, as opposed to (ahem) a single full-range polynomial.

b) The J.-M. Muller book indicates that the number of bits required to correctly round ALL cases is somewhere in the neighborhood of 2×n+3 to 2×n+12 (chapter 12).

c) Given an accurate polynomial with somewhat less than 1 ULP of error, call it p(), John Gustafson's paper on the "mine field" method constructed a correction to p(), call it c(), that should close the gap. y = p(x) has some residual error, and z = c(x) carries the corrections to p(), so r = (p+c)(x) should be a direct polynomial that avoids the hard-to-round cases. Proving that it does is the hard part.

d) My research indicates that something "about" n+8 bits of fraction should be able to deliver IEEE 754-quality rounding using (p+c)(x).

e) My patented work has already indicated that n+6 bits of fraction can deliver an accuracy of 0.5+1/247 ULP RMS, and exp(x) only takes 14 cycles (yes, no zeros missing at the end) in IEEE 754; it should take 16 cycles in unum and have similar accuracy even without correction.

f) How much more do you actually need if you only have a rounding error once every ~300 calculations?

g) Given a 64×64 multiplier tree, one has n+12 bits of fraction (2×n+24 before rounding), and one can drive this down to one rounding error every 10,000 calculations RMS (and get both integer multiplication and integer division for free!). How much better does one need?
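Point (a) can be made concrete with a hypothetical sketch: exp on [0, 1) from a table of sampled values plus linear interpolation. Each segment is exactly a degree-1 polynomial, so the interpolation table really is a (piecewise) polynomial approximation; the table size N = 1024 is an arbitrary choice for illustration:

```python
import math

N = 1024  # arbitrary table size for this sketch
TABLE = [math.exp(i / N) for i in range(N + 1)]

def exp_interp(x: float) -> float:
    # exp(x) on [0, 1): look up the two bracketing table entries and
    # blend them with a degree-1 (linear) polynomial in the offset t.
    i = int(x * N)
    t = x * N - i
    return TABLE[i] * (1.0 - t) + TABLE[i + 1] * t
```

The worst-case error of this linear blend is roughly e/(8·N²) ≈ 3×10⁻⁷, far worse than 0.5 ULP of a double; real table-based schemes use finer tables and higher-order blending, but the piecewise-polynomial character, and hence the exposure to hard-to-round cases, is the same.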
