Posit Standard has been ratified and posted

John Gustafson

Apr 29, 2022, 10:55:08 PM
to Unum Computing, Tommy Thorn
If you go to posithub.org, the home page has a button that provides the PDF of the ratified Posit Standard. After almost five years, the Posit Working Group finally worked through all points of disagreement. The document is well-hyperlinked and only 12 pages long, in sharp contrast to the nearly one hundred pages of IEEE Std 754™ (2019).

The Standard is not intended to discourage experimentation with, say, different exponent sizes (the eS value). By fixing eS = 2 for all precisions, many things become much simpler; for example, conversion between different posit precisions no longer requires decoding the bit fields. In the hundreds of papers that have been written about the application of posit arithmetic, experiments with neural networks (~8-bit posits) and with signal/image processing and weather/climate modeling (~16-bit posits) have tried various eS values and found that eS = 2 works best. So this deviates from the original hunch, five years ago, that we should use

n =   8, eS = 0
n = 16, eS = 1
n = 32, eS = 2
n = 64, eS = 3.
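With eS fixed at 2, decoding becomes mechanical. As a rough illustration (my own sketch, not code from the Standard), the following Python function decodes an n-bit posit bit pattern into a float:

```python
def decode_posit(bits, n, es=2):
    """Decode an n-bit posit into a float.

    A sketch of the Posit Standard (2022) decoding rules; es defaults
    to the standard's fixed eS = 2. Not a production decoder.
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                    # NaR ("Not a Real")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0.0:
        bits = -bits & mask                    # two's-complement negation
    body = format(bits, f"0{n}b")[1:]          # the n-1 bits after the sign
    run = len(body) - len(body.lstrip(body[0]))  # regime run length
    k = run - 1 if body[0] == "1" else -run      # regime value
    rest = body[run + 1:]                      # skip the run and its terminator
    exp_bits = rest[:es]                       # exponent; zero-padded if truncated
    exp = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
    frac_bits = rest[es:]
    frac = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    useed = 1 << (1 << es)                     # 2^(2^es); 16 when es = 2
    return sign * useed**k * 2**exp * (1.0 + frac)
```

With eS the same at every precision, widening a posit is just appending zero bits on the right: decode_posit(0x50, 8) and decode_posit(0x5000, 16) both yield 4.0.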

Another huge simplification is that the quire for n-bit posits is always 16n bits long. And n need not be an integer power of 2 for that to work.
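To see why 16n bits suffice: with eS = 2 the extreme posit magnitudes are 2^±4(n−2), so products span 2^±8(n−2), and an exact accumulator needs 8n − 16 bits on each side of the binary point plus sign and carry room. Below is a toy model (my own sketch; the constants follow my reading of the Standard's quire layout) that uses a Python integer as the fixed-point register:

```python
from fractions import Fraction

n = 8                        # posit<8,2>
QUIRE_BITS = 16 * n          # 128-bit quire for 8-bit posits
FRAC_BITS = 8 * n - 16       # 48 bits below the quire's binary point

# Python's unbounded ints stand in for the 16n-bit fixed-point register.
quire = 0
# maxpos*maxpos + minpos*minpos: a sum float64 cannot hold exactly.
for a, b in [(2.0**24, 2.0**24), (2.0**-24, 2.0**-24)]:
    quire += int(Fraction(a) * Fraction(b) * (1 << FRAC_BITS))

exact = Fraction(quire, 1 << FRAC_BITS)   # 2^48 + 2^-48, with no rounding
assert quire.bit_length() <= QUIRE_BITS   # fits in the 128-bit quire
```

A float64 accumulator would round this dot product to 2^48; the quire holds both terms exactly.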

Tommy Thorn, cc'd here, suggested that we inform Hacker News (news.ycombinator.com). I'll leave that to Tommy, but now we can simply point people to posithub.org if they want to get a copy of Posit Standard (2022).

The hope of the Posit Working Group is that the Standard will facilitate the construction of full custom VLSI processors and high-quality math libraries that will allow interoperability and reproducibility at speeds similar to CPUs that support IEEE 754 floats in hardware. FPGAs are great, but it's time to move beyond prototyping and really build posit-enabled CPUs and GPUs. The language standards group might also start thinking about supporting posits in a way that follows the Standard, which will make computations with real numbers bitwise reproducible for the first time.

John

MitchAlsup

May 23, 2022, 8:03:00 PM
to Unum Computing
Congratulations, and good luck.

elias khatami

Jul 30, 2022, 4:00:53 AM
to Unum Computing
Hello,
Is there now a standard library for posits in Python, analogous to float? If possible, please point me to one.
Thank you

John Gustafson

Jul 30, 2022, 12:18:56 PM
to elias khatami, Unum Computing
Hello Elias,

SoftPosit by Cerlane Leong has a Python wrapper. It's open source on GitLab. The Universal library from Theo Omtzigt (Stillwater Supercomputing) also has Python support. Speedgo, by Shin Yee Chung, is a Python library. Bill Zorn wrote sfpy with SoftPosit bindings for Python; I don't know how it differs from the one Cerlane Leong did. And there's PySigmoid by Ken Mercado, and Deep PeNSieve by Raul Murillo. The last one is the most recent and may well be the best.

Check out the projects table at Wikipedia for comparative details:


There is also open-source support for PyTorch from several places, like the recent "Qtorch+" paper.

Best,
John G.


Glenn Matlin

Sep 1, 2022, 3:41:18 PM
to Unum Computing
Thank you John for pointing out Deep PeNSieve -- it looks like a promising library to try.

Another option you have mentioned to me before is QPyTorch+ (https://github.com/minhhn2910/QPyTorch), which builds on QPyTorch out of Cornell (https://arxiv.org/abs/1910.04540, https://github.com/Tiiiger/QPyTorch).