Epsilon values for posits?


Jack Lewis

May 21, 2018, 4:53:18 PM
to Unum Computing

Some C code I'm converting to use posits uses the value "epsilon", which I have learned is the difference between exactly 1.0000 and the next larger number that can be represented in a given number format. 

Has anyone published epsilon values for the various posit variants? 

John L. Gustafson

May 21, 2018, 6:49:24 PM
to Jack Lewis, Unum Computing
Rob/Jack,

The epsilon value you describe is easily calculated as 2^(es + 3 - nbits).

However, be careful to understand how the C code uses that value. It may assume constant accuracy across the dynamic range of normalized floats, which is not how posits work... they have tapered accuracy. When the calculation strays out of the dynamic range 1/useed to useed, you lose one bit of accuracy, for example, and a code that attempts to automate some numerical analysis may be overly optimistic if it assumes the epsilon value applies to every binade.

John


Jack Lewis

May 23, 2018, 7:10:47 PM
to Unum Computing
Mister Gustafson, 

Thank you for the reply. You are correct; using the posit epsilon for this benchmarking suite I'm converting may not be a good or useful approach. 

I was wondering if you had any suggestions or knew of any ways in which one might demonstrate programmatically the value of posits over floats or doubles?

Thanks, 

-Jack


John L. Gustafson

May 23, 2018, 9:05:37 PM
to Jack Lewis, Unum Computing
Jack,

I'm not sure what you mean by "programmatically." One can show the advantage of posits over floats analytically, by the methods I presented in my Stanford talk and the paper "Beating Floating Point at Its Own Game." It is also possible to show empirical results by running programs with floats versus posits and comparing the accuracy. Perhaps the best examples of that are the studies done by Peter Lindstrom and his team at LLNL, showing that posits are two orders of magnitude more accurate than IEEE floats of the same precision when running shock hydrodynamics codes and an Eulerian incompressible flow simulator. And that's without even using the quire, which could potentially increase the accuracy another thousandfold.

Now that we're about to release SoftPosit, patterned after Berkeley's SoftFloat, people will be able to see that posits are slightly faster than floats at the same technology level (FPGA studies are also confirming this).

What is this benchmark suite you want to run? Some kind of accuracy tester? It would be interesting to run one of those, but it might have to be modified to run posits for the comparison to be fair.

Maybe the more direct question I should ask is: why have the results published to date not convinced you of the value of posits over floats of similar precision (or over fixed-point arithmetic, in many cases)?

John



Jack Lewis

May 24, 2018, 1:41:46 PM
to Unum Computing
John, 

I've been running a modified version of a project called miniFE, altered to use Stillwater's posit implementation. The project uses the conjugate gradient method to simulate heat flow through a cube of voxels of given x, y, and z dimensions. As the program approaches a perfect solution, the reported "residual" gets closer and closer to zero (never actually reaching it). The residual is the norm of the vector r = b - Ax; as x converges to the solution of the linear system, the norm of r becomes small. 

How the program decides when to stop iterating is determined by a tolerance parameter, which by default is set to the epsilon value of the number format in use. I'm not sure whether this is appropriate for a posit implementation; do you have any insight? The residual does not seem to converge all the way down to the particular posit's epsilon, which I believe is due to the tapered-accuracy behavior inherent to posit arithmetic. 

I'm still working on understanding posits fully, but one thing I've gleaned is that the choice of the optimum es value seems to be very problem-dependent. I haven't dug into the internals of the miniFE application to determine the dynamic range of the numbers it uses; perhaps I will attempt that. Do you have any guidelines for how to choose es, and is there any "general-purpose" value that works in most cases as a replacement for IEEE floats and doubles? 

I will definitely be investigating the Lawrence Livermore studies that you mention! 

Thanks, 
-Jack
