There is a feature of posits that I hope someone will exploit in any formatting software: because of the tapered relative accuracy, the power-of-ten exponent needs more space exactly when the significant decimal digits need less. The number of significand digits need never exceed the number needed to convert back to the posit without ambiguity. Working this out is a project almost worthy of a technical paper, if anyone wants to take it on. Let me clarify with some examples:
Suppose you are using 16-bit standard posits (exponent size 2). I think 10 characters suffice to express every value in human-readable form, which is pretty cool and very compact. Here's what I mean: maxPos is 72,057,594,037,927,936, but it has only one significant digit, so you can write it
+7.~E+16
The eight characters express the sign; the decimal point (probably not needed); the "~" to express that the notation is inexact and there are more digits after the one shown; the E or e symbol to demarcate the exponent; the sign of the exponent; and two decimal digits for the exponent. The "~" is a unum computing concept; I prefer to write a raised ellipsis "⋯" after the last significant digit, but few people know how to type that on a standard keyboard, so "~" makes more sense.
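These claims are easy to check mechanically. Here is a minimal Python sketch of a 16-bit standard-posit decoder (my own, not from any posit library; the helper name `posit16_to_fraction` is mine) that turns a bit pattern into an exact rational value, confirming that the maxPos pattern decodes to 72,057,594,037,927,936:

```python
from fractions import Fraction

ES = 2  # exponent-size field of 16-bit standard posits

def posit16_to_fraction(p):
    """Decode a 16-bit posit bit pattern to an exact Fraction (None for NaR)."""
    p &= 0xFFFF
    if p == 0:
        return Fraction(0)
    if p == 0x8000:
        return None                           # NaR, "not a real"
    sign = -1 if p & 0x8000 else 1
    if sign < 0:
        p = (-p) & 0xFFFF                     # two's complement for negatives
    bits = f"{p:016b}"[1:]                    # the 15 bits after the sign bit
    run = len(bits) - len(bits.lstrip(bits[0]))   # regime run length
    k = run - 1 if bits[0] == "1" else -run
    rest = bits[run + 1:]                     # skip the regime-terminating bit
    e = int(rest[:ES].ljust(ES, "0"), 2) if rest else 0   # missing exp bits are 0
    frac = rest[ES:]
    f = Fraction(int(frac, 2), 1 << len(frac)) if frac else Fraction(0)
    return sign * (1 + f) * Fraction(2) ** ((1 << ES) * k + e)

# maxPos (pattern 0x7FFF) is 2**56 = 72,057,594,037,927,936
assert posit16_to_fraction(0x7FFF) == 2**56
```

The paper-worthy part would be a small extension of this: for each of the 65,536 patterns, search for the fewest significant decimal digits whose rounding lands back on that posit and no other.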
When the number is in the "sweet spot" with more significant digits, you need as many as five decimals to express a 16-bit posit (this is shown in the Posit Standard, along with the minimum counts for 32-bit and 64-bit posits). The number just barely bigger than 1 is 1 + 1/2048 = 1.00048828125, but all you need is "1.0005" to express it unambiguously as a 16-bit posit. For that number, the format would be
+1.0005~
and there is no need for an exponent field. For a number like 5/4 = 1.25, which can be expressed exactly, you would format it as
+1.25
and not need the "~" at all; I assume the three unneeded spaces would be leading blanks, but they could also be trailing blanks.
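The "1.0005" claim can be checked with exact rational arithmetic; this is a plain-Python sketch assuming only the standard fractions module:

```python
from fractions import Fraction

ulp = Fraction(1, 2048)   # fraction ULP of 16-bit posits just above 1
x = 1 + ulp               # the smallest 16-bit posit greater than 1
assert x == Fraction("1.00048828125")

# "1.0005" must round back to x: closer to x than to either posit neighbor
d = Fraction("1.0005")
assert abs(d - x) < abs(d - 1)
assert abs(d - x) < abs(d - (x + ulp))
```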
The smallest positive 16-bit posit, minPos = 1/maxPos, is about 1.3878E-17, but again there is only one significant digit, so
+1.~E-17
is sufficient to express the posit without ambiguity. You can go all the way to the posit for 999,424 with eight characters; beyond that, you need to start using scientific notation. The next-larger posit is 1,003,520, which can still be written unambiguously with ten characters because there are fewer significant digits at that larger magnitude.
What about when the exponent requires two decimals to express? By that point, near 1E10, adjacent posits express the values 9,663,676,416 and 10,737,418,240, so you only need three significant decimal digits to express the number unambiguously, and the formatting still fits a ten-character budget:
+1.07~E+10
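That spacing is easy to confirm in plain Python: near 1E10 only three fraction bits remain in a 16-bit posit, so consecutive posits differ by 2^30 and three significant digits tell them apart:

```python
a, b = 9_663_676_416, 10_737_418_240    # consecutive 16-bit posits near 1E10
assert a == 2**33 * 9 // 8 and b == 2**33 * 5 // 4
assert b - a == 2**30                   # the local posit spacing (ULP)

# three significant digits are enough to distinguish them:
assert f"{a:.2e}" == "9.66e+09"
assert f"{b:.2e}" == "1.07e+10"
```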
While expressing the exact posit value may be useful in some situations, it is mostly misleading since it makes it look like there are more significant digits than there really are. A fundamental principle of unum computing is to express what you don't know about a number. If we could get people used to the "~" annotation, it would go a long way to avoid confusion about why 0.1 + 0.2 isn't exactly 0.3 in binary formats for real numbers.
I envision a compiler that flags an error if you try to compile
x = 0.1
just as it would complain if you tried to assign 0.1 to an integer. At least, I hope it would complain and not just silently truncate the 0.1 value to 0. The compiler would accept, for example,
x = 0.1~
since the closest 16-bit posit to 0.1 is 0.100006103515625. Now you have a way of making explicit the inexactness of translating (most) decimals into binary.
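The quoted value can be reproduced with exact arithmetic; this sketch (plain Python, my own reconstruction of the rounding) relies on the fact that at that magnitude a 16-bit posit has 11 fraction bits:

```python
from fractions import Fraction

scale = Fraction(1, 16)                   # 0.1 lies in the binade [2**-4, 2**-3)
hidden = Fraction(1, 10) / scale - 1      # part past the hidden bit: exactly 0.6
f = Fraction(round(hidden * 2048), 2048)  # round to the 11 available fraction bits
nearest = (1 + f) * scale                 # the 16-bit posit closest to 0.1
assert nearest == Fraction("0.100006103515625")
```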
John