CoNGA 2025 Call for Papers

John Gustafson

Jun 1, 2025, 8:05:17 PM
to Unum Computing
Slap to the forehead that I did not announce this earlier to this group:

I really hope all of you will consider submitting a paper to CoNGA this year, which for the first time will be co-located with the annual Supercomputing Conference (held on the Saturday before the main conference begins).

Some people tried submitting papers about novel arithmetic formats like posits to the ARITH conference and got woefully ignorant reviews from referees who did not understand what the papers were about; the papers were rejected. At CoNGA, such papers will get expert and fair reviews from the Program Committee.

Something else I want to mention: my most recent experiments indicate that a superior variant of the Posit Standard (2022) format (which uses 2 exponent bits and no limit on regime length) is to use 3 exponent bits but cap the regime at a maximum of 6 bits. This results in a dynamic range of about 1e–15 to 1e15 for all precisions, and a quire size of only 256 bits for all precisions. We may need to add it to the existing Standard. I call it a "b-posit", for "bounded posit". Early work on b-posit hardware indicates a sharp reduction in chip area, latency, power dissipation, and energy per operation compared to standard posits with eS = 2. The decode and encode cost of b-posits now beats that of IEEE 754 floats. I describe b-posits in Every Bit Counts: Posit Computing, but you shouldn't have to buy the book to hear about the idea.
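
For anyone who wants a back-of-envelope feel for where "about 1e15" comes from, here is a rough sketch in Python. It assumes the familiar posit scale factor 2^(k·2^eS + e) and simply caps the regime value k at what a 6-bit regime field allows; the exact b-posit encoding may differ in the details.

```python
# Rough back-of-envelope check of the quoted b-posit dynamic range.
# Assumption (not the official encoding): scale factor 2**(k * 2**eS + e),
# with the regime value k capped by the 6-bit regime field. Whether a
# terminating regime bit is still needed at the cap changes k_max by one,
# so both cases are shown.

eS = 3                    # proposed exponent-field width
max_regime_bits = 6       # proposed cap on the regime field
e_max = 2**eS - 1         # largest exponent-field value

for k_max in (max_regime_bits - 1, max_regime_bits):
    max_scale = k_max * 2**eS + e_max
    print(f"k_max = {k_max}: maxpos ~ 2^{max_scale} ~ {2.0**max_scale:.1e}")

# Prints ceilings of ~1.4e14 and ~3.6e16, i.e. the 1e14..1e17 neighborhood,
# consistent with the quoted "about 1e-15 to 1e15" (minpos ~ 1/maxpos).
```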

Best,
John

Oscar Smith

Jun 1, 2025, 8:18:30 PM
to Unum Computing
> This results in a dynamic range of about 1e–15 to 1e15 for all precisions

This seems incredibly bad for 32- and 64-bit. Electrical engineering commonly works with capacitances in the 1e-15 F range, and astronomers care about distances in the 5e26 m range. Limiting the maximum range to ~[1e-15, 1e15] is, IMO, far too restrictive for a 64-bit type, and doing so would mean there is no posit type big enough to let scientists use it without worrying about overflow/underflow when representing real-world quantities in SI units.

John Gustafson

Jun 1, 2025, 8:42:01 PM
to Oscar Smith, Unum Computing
Thanks for the feedback. If you can accept 1e–30 to 1e30 instead, going to eS = 4 does that. The point is to bound the maximum number of bits that express the exponent, and to always have at least some fraction bits at precisions of 16 and up.
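
As a rough intuition (an approximation on my part, not the standardized encoding): with the regime length capped, the scale factor is about 2^(k·2^eS + e), so each extra exponent bit roughly squares the representable range.

```python
# Approximate reasoning only: with the regime length capped, doubling the
# weight 2**eS of the regime (eS = 3 -> 4) roughly squares the dynamic range.
maxpos_es3 = 1e15          # the ceiling quoted above for eS = 3
print(maxpos_es3 ** 2)     # 1e+30 -- the ceiling quoted here for eS = 4
```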

What I've seen in actual programs is that units are adjusted when magnitudes become extreme. Astronomers switch to parsecs or light-years or A.U.s. Electrical engineers express capacitance in nanofarads, not farads. Chemists work with femtoseconds. Chemists also switch to logarithms to deal with extreme magnitudes (like pH values). So the actual codes input and output more human-friendly quantities closer to zero in magnitude. 

Algorithms are engineered to avoid overflow at all costs, and to avoid underflow where possible (though the latter matters less when doing accumulations). Overflow to infinity means complete loss of information about the answer.
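
A concrete example of that engineering (a standard rescaling trick, sketched here in Python; nothing posit-specific): the naive Pythagorean sum overflows long before the answer does, so codes factor out the larger magnitude first.

```python
import math

def hypot_scaled(a: float, b: float) -> float:
    """sqrt(a*a + b*b) without overflowing on the intermediate squares.

    The classic rescaling trick (the same idea is behind math.hypot and
    BLAS nrm2): factor out the larger magnitude so the squared term is <= 1.
    """
    a, b = abs(a), abs(b)
    big, small = max(a, b), min(a, b)
    if big == 0.0:
        return 0.0
    ratio = small / big              # in [0, 1], cannot overflow
    return big * math.sqrt(1.0 + ratio * ratio)

x, y = 3e200, 4e200                  # squares would exceed FP64's ~1.8e308
print(math.sqrt(x * x + y * y))      # inf  -- naive formula loses the answer
print(hypot_scaled(x, y))            # 5e+200
print(math.hypot(x, y))              # 5e+200 -- library routines do the same
```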

I have a couple of projects under way to histogram the values used by a wide taxonomy of HPC codes and find out what the distribution is. Preliminary work showed it was very rare to see values outside the range 1e–13 to 1e13.
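
For the curious, the mechanics are simple. Here is a minimal sketch (just an illustration, not the actual instrumentation from those projects) that bins values by their decimal exponent:

```python
import math
from collections import Counter

def exponent_histogram(values):
    """Count how many values fall in each decade of magnitude."""
    hist = Counter()
    for v in values:
        if v == 0.0 or math.isnan(v) or math.isinf(v):
            hist["zero/nan/inf"] += 1
        else:
            hist[math.floor(math.log10(abs(v)))] += 1
    return hist

# Toy stand-in for a traced operand stream; a real study would feed in
# values captured from an instrumented HPC code instead.
sample = [3.2e-4, 9.81, 6.02e23, 1.6e-19, 0.0, 2.5e3, 7.7e-1]
hist = exponent_histogram(sample)
for decade in sorted(k for k in hist if isinstance(k, int)):
    print(f"[1e{decade}, 1e{decade + 1}): {hist[decade]}")
print("zero/nan/inf:", hist["zero/nan/inf"])
```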

I'm very interested in being proved wrong, with an actual production code. Please let me know.

Best,
John


Oscar Smith

Jun 1, 2025, 10:48:21 PM
to Unum Computing
> If you can accept 1e–30 to 1e30 instead, going to eS = 4 does that.

I think that's totally reasonable for 32-bit. For 64-bit, I'd really like something more like 1e50 to 1e60 (so that you can apply the Pythagorean theorem to numbers in the 1e25-1e30 range). One of the main reasons FP64 is so popular is that it wastes a bit or two of precision to cover an absolutely humongous range. Having a format that is big enough to easily handle the biggest/smallest number anyone will throw at it is a really nice feature. FP32 has enough range that you don't run into problems until you start going to planetary scales, and FP64 is big enough to let you express universe-sized scales in whatever units you feel like. It's not maximally efficient, but with 64 bits it's better to tune for ease rather than efficiency.
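
To make that concrete (a quick illustration of my own; FP32 here just stands in for any format whose ceiling sits far below the squares involved):

```python
import math
import numpy as np

x, y = 3e25, 4e25                 # magnitudes in the range discussed above

# FP64: the intermediate squares (~1e51) are far below the ~1.8e308 ceiling,
# so even the naive formula just works.
print(math.sqrt(x * x + y * y))   # 5e+25

# FP32 stands in for a format with a much lower ceiling (~3.4e38): the same
# naive squares overflow to inf and the answer is lost.
with np.errstate(over="ignore"):
    x32, y32 = np.float32(x), np.float32(y)
    print(np.sqrt(x32 * x32 + y32 * y32))   # inf
```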


> What I've seen in actual programs is that units are adjusted when magnitudes become extreme. Astronomers switch to parsecs or light-years or A.U.s. Electrical engineers express capacitance in nanofarads, not farads. Chemists work with femtoseconds. Chemists also switch to logarithms to deal with extreme magnitudes (like pH values). So the actual codes input and output more human-friendly quantities closer to zero in magnitude.

Some do, some don't (and those that do will often break their normalization accidentally). NGSpice internally uses meter/kilogram/second units (https://github.com/ngspice/ngspice/blob/032b1c32c4dbad45ff132bcfac1dbecadbd8abb0/src/spicelib/devices/hisimhv1/hsmhvset.c#L1390), chemists will often represent their inputs as pH and then take an exponential to get a concentration internally, etc. The important part is that once you aren't trying to squeeze real numbers into 16 bits (which is small enough to inherently force normalization), non-expert programmers will try to do things in every way imaginable (and in lots of ways that numerics people would never imagine, because they're too weird). Especially in 64-bit land, the goal of a number format should be for the user to have to think about it as little as possible. FP64 (for all of IEEE 754's faults) does an excellent job of that, and replacing it with a new number format that requires users to think about normalization would be a step backwards.


> I'm very interested in being proved wrong, with an actual production code. Please let me know.

I hope the ngspice link satisfies this, but I think production code is only half of the issue. A 64-bit real number format is the format we expect intro CS students, undergrad physics students, hacked-together research scripts, and all other manner of people to use. Such a limited exponent range would be fine if we were in the 1980s, when computers were expensive and users were carefully trained, but in the modern world the default number format will be used by tens of millions of people, and 99% of them won't even know the name of the number format they're using.