Can't post to this group? Let me know!


Zoltán Lehóczky (Lombiq Technologies)

Jun 21, 2019, 3:22:45 PM
to Unum Computing
Robbert has handed over the group ownership to me so I'll be taking on administering the technicalities. So let me know if you have trouble posting to this group or have any other issues!

If you can't even access the group, then, well, you won't see this :). But I just checked, and every topic is visible even if you are not logged in, as it should be according to the configuration. So I have no idea what these reports stem from.

John Gustafson

Jun 24, 2019, 11:15:18 AM
to "Zoltán Lehóczky (Lombiq Technologies)", Unum Computing
Zoltán,

I am delighted that you have taken ownership of this Google group. It may not be ideal, as your earlier email indicated, but it seems useful and perhaps the best way of sharing information at some level. I think the reason people were having trouble was that they were trying to log in with an email other than the one Google knows them by.

To kick off a new discussion, I just discovered something I found slightly mind-blowing and wanted to share it with the group, as Jonathan Low and I start a GitLab site with fast math functions for posits (patterned after Cerlane Leong's SoftPosit). The logarithm base 2, unlike the natural logarithm, is exact for quite a few input arguments. Similarly, 2^x can be exact for representable finite x values, something that e^x can only do if x is 0.
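A tiny Python illustration of this exactness (my sketch in standard doubles, not John's posit library, but the same property holds in any binary format):

```python
import math

# Any power of two has an exactly representable base-2 logarithm,
# so a correctly rounded log2 returns it with zero error.
for x in [0.25, 1.0, 2.0, 1024.0]:
    print(x, "->", math.log2(x))   # -2.0, 0.0, 1.0, 10.0 -- all exact

# The natural log has no such exact cases except ln(1) = 0.
print(math.log(1.0))               # 0.0, the only exact value
```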

So, just for fun, if we do a 'round trip' of computing y = log2(x) and then x' = 2^y, with each computation correctly rounded, how often do we get x = x' with every bit identical? I tried it with all the positive posits (bit strings that look like integers 1 to 32767) and plotted the graph of the difference between the bit string representing x and the bit string representing x', treated as integers. In other words, the ULPs of difference. It looks like this:

posit-roundtrip.pdf
float-roundtrip.pdf
posit-naturalroundtrip.pdf
float-natural-roundtrip.pdf

Job van der Zwan

Jun 24, 2019, 11:35:00 AM
to John Gustafson, Zoltán Lehóczky, Unum Computing
Very cool! Do you have an idea whether there is something about the underlying representation of numbers that should make us expect this difference? Similarly, how does it change with more bits? Do floats catch up, or does the gap only get bigger? And finally, would that mean that fast log-based tricks for approximate calculations might work a lot better for posits than for floats? (Logarithms were originally invented to speed up calculation, after all.)

/Job

The errors are never more than 1 ULP in either direction, and the round trip is lossless for about 89% of all input values. There is also a big swath of values in the center that is free of ULP errors: posits with bit strings that look like integers 12287 to 22316, representing values ~0.49994 to ~2.8965.

Now try it for IEEE 754 Standard 16-bit floats. The tested inputs look like integers 1 to 31743, which covers all the floats that represent finite positive values:

Only about 33% of the values make the round trip correctly, and the errors are as large as ±5 ULPs.
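The float16 half of this experiment can be sketched in NumPy. This is my reconstruction, not John's code; computing log2/exp2 in float64 and rounding the result once to float16 is an assumption standing in for a correctly rounded half-precision implementation (it can differ in rare hard-to-round cases):

```python
import numpy as np

# All finite positive float16 values, as the integers 1..31743.
bits = np.arange(1, 31744, dtype=np.uint16)
x = bits.view(np.float16)

# Round trip: y = log2(x), x' = 2^y, each rounded to float16.
y  = np.log2(x.astype(np.float64)).astype(np.float16)
xp = np.exp2(y.astype(np.float64)).astype(np.float16)

# ULP error = difference of the bit strings treated as integers.
ulp = xp.view(np.uint16).astype(np.int32) - bits.astype(np.int32)
print("lossless fraction:", (ulp == 0).mean())
print("worst |ULP error|:", np.abs(ulp).max())
```

The lossless fraction should come out near the ~33% John reports, with only small ULP errors.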

We shouldn't expect this big a contrast for round trip experiments in general, and floats may perform better than posits for some function-inverse function compositions. Still, the qualitative difference between the two plots above is pretty jaw-dropping.

I just tried it for y=ln(x) and x'=e^y and the plots were quite similar. Posits were again lossless in 89% of the cases, but there were a few large inputs that were off by as much as 2 ULPs:

Floats got worse as well, off by as much as 8 ULPs:


John G.

Job van der Zwan

Jun 24, 2019, 11:39:04 AM
to John Gustafson, Zoltán Lehóczky, Unum Computing
Err… apologies for that mess of a sentence near the end of my previous email. Serves me right for trying to type and edit a response on my phone. I hope the intended question came through regardless ;)

/Job

theo

Jun 24, 2019, 1:25:56 PM
to Unum Computing
Hi John:

I have been playing games with Euler's number, PI, sqrt(2), and the golden ratio, phi, and how they map to invariants.

So, for example, I would expect the sin function to map to 0 for the argument that is the number system's approximation of pi, but I am getting minpos.

 3.14159 -> sin(3.14159) = 1.22465e-16
01101001 -> sin( +3.125) = 00000001 (reference: 00000001)   PASS

 3.1415926535898 -> sin(3.1415926535898) = 1.2246467991474e-16
0101100100100010 -> sin( +3.1416015625) = 1111111111001101 (reference: 0000000000000001)   FAIL

 3.14159265358979311599796346854 -> sin(3.14159265358979311599796346854) = 1.22464679914735320717376402946e-16
01001100100100001111110110101010 -> sin( +3.1415926516056060791015625) = 00000000011100010000101101000110 (reference: 00000000000000011100011010011001)   FAIL

These roundings to minpos are all caused by the projection away from 0, but they imply that I cannot properly evaluate cancellations in Fourier series expansions and other infinite series approximations whose arguments are multiples of pi, pi/2, pi/4, etc.
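For comparison, a quick Python check (standard doubles, not posits, and my illustration rather than theo's test harness) shows the root cause is the argument, not the sin function: the nearest representable value to pi already misses pi, so sin of it cannot be 0.

```python
import math

# math.pi is the nearest double to pi, off by roughly 1.22e-16.
# sin at that point is approximately (true pi - math.pi), not 0,
# matching the 1.2246e-16 residuals in the test output above.
residual = math.sin(math.pi)
print(residual)   # about 1.2246e-16
```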

In general, how do you envision working with roots? The rounding away from 0 seems to make these types of evaluations error-prone.

John Gustafson

Jun 25, 2019, 1:22:37 AM
to Job van der Zwan, Zoltán Lehóczky, Unum Computing
Job,

We start with only about half the possible bit patterns (the ones representing finite positive values), and the logarithm maps them into the range from log(minimum real) to log(maximum real): both negative and positive values of much smaller magnitude. Because the magnitude is reduced, posits have more bits of accuracy for the log. Then, in exponentiating, the coarser spacing of the represented very large or very small numbers makes it more likely that the result will round to the correct one. I think that's the quickest explanation of why posits do so well, and so much better than floats. A similar effect happens if you take the square root of a number (reducing its magnitude) and then square it; posits do much better than floats in making that round trip as well.
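The sqrt/square round trip mentioned here can be sketched the same way as the log2/exp2 one. Again this is my float16-only reconstruction (float64 intermediates stand in for correctly rounded half-precision operations), so it shows the mechanics of the experiment rather than the posit-vs-float comparison itself:

```python
import numpy as np

# All finite positive float16 values.
bits = np.arange(1, 31744, dtype=np.uint16)
x = bits.view(np.float16)

# Round trip: r = sqrt(x) shrinks the magnitude, r^2 restores it,
# each result rounded to float16.
r  = np.sqrt(x.astype(np.float64)).astype(np.float16)
xp = np.square(r.astype(np.float64)).astype(np.float16)

ulp = xp.view(np.uint16).astype(np.int32) - bits.astype(np.int32)
print("lossless fraction:", (ulp == 0).mean())
```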

It doesn't matter how many bits are in the representation: posits will do better than floats whenever a function generally reduces the magnitude of a value.

I know we need the general x^y function, and this is one of the toughest functions to write and round correctly for all inputs. The general approach is to compute y * log(x) and then exponentiate that, where the log and the exponentiation are done using higher precision than the working precision. You have to use more bits or you will make rounding errors like crazy, and for a few of the values you need hundreds of bits (the Table Maker's Dilemma). This is on my Things To Do pile, and I think the Minefield method that I've mentioned previously on this forum will work for a three-dimensional minefield {x, y, f(x, y)}. But in the meantime, we can use the log(x) and exp(x) functions (or log2(x) and exp2(x) if it turns out they work better) to compute a slightly inaccurate x^y. And the fact that taking log(x) raises the number of significand bits is going to help make that a fairly decent approximation.
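The interim x^y approach described here can be sketched as follows. This is my illustration of the general idea, not John's implementation: evaluate exp2(y * log2(x)) in a wider format (float64 here) and round once to the working precision (float16). It is not correctly rounded for all inputs, which is exactly the Table Maker's Dilemma.

```python
import numpy as np

def pow16(x, y):
    """Approximate float16 x^y via exp2(y * log2(x)) in float64."""
    xd, yd = np.float64(x), np.float64(y)
    return np.float16(np.exp2(yd * np.log2(xd)))

# log2(2.0) = 1.0 exactly, so this case incurs no rounding error at all.
print(pow16(np.float16(2.0), np.float16(10.0)))   # 1024.0
print(pow16(np.float16(3.0), np.float16(0.5)))    # approximately sqrt(3)
```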

John

Zoltán Lehóczky (Lombiq Technologies)

Aug 16, 2019, 9:00:19 AM
to Unum Computing
I have now changed the visibility of this group in the "group directory" to public as well (formerly it was visible just to members). This should resolve all the issues with people not being able to see the group; it fixed it for Vijay, at least.

This is quite strange, though, as this setting is only supposed to affect how people can find the group ("Listing a group in the directory will make it easier for people to find your group. When listed in the directory your group's name, email and description will be visible and searchable."), but since the group is (and always was) publicly visible to everyone with the URL, you should have been able to see it anyway...

Please let me know if you know of anybody who had trouble with the group before!