How do you perform posit addition?


Koz Ross

Jun 10, 2017, 5:25:38 PM
to unum-co...@googlegroups.com
Thanks to Dr. Gustafson's paper on posits, I now feel I understand the posit
representation. However, I'm not quite sure how you would perform posit addition
(multiplication I can see an algorithm for). Would someone be able to explain
it, or link me to an explanation? It would be a really big help.
--
Koz Ross <koz....@retro-freedom.nz>
www.retro-freedom.nz

On the organizing committee for the AUT Computer Science Club.
Check us out: https://notabug.org/aut-csc
Talk with us at ##aut-csc on Freenode via IRC.

Proud member of the Open Wireless Movement.
Find out more at https://openwireless.org/

John L. Gustafson

Jun 10, 2017, 8:30:04 PM
to Koz Ross, unum-co...@googlegroups.com
Koz, adding posits is very much like adding floats, but with a simplification. The regime bits and exponent bits determine the scale factor of each number, and you shift the mantissa bits by the difference of the two scale factors. The smaller magnitude number is the one to shift. (If the two values have the same scale factor, no shifting is needed.) Then add the mantissas, and convert the result back as you would for a float, except use the regime bits to abbreviate the scaling factor.

The simplification is that the mantissas, once aligned, add like two's complement integers and that works for both negative and positive values. The "hidden bit" for negative posits has the value -2 instead of 1. That saves at least one conditional test that floats have to do: "Are the inputs the same sign or opposite sign?"

Find a definition of how to do addition of conventional IEEE floats (I think Berkeley SoftFloat will work, or read Knuth's description in The Art of Computer Programming) and step through it until you understand it. Only a few lines of that algorithm need to be changed to work for posits.
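
Here is a rough C sketch of just the align-and-add core described above, for operands that have already been decoded. The struct layout, names, and significand width are assumptions for illustration; the decode, renormalize, and round/re-encode stages are only hinted at in comments.

#include <stdint.h>

/* One decoded posit operand: scale combines the regime and exponent
   (regime value * 2^es + exponent); sig holds the significand as a
   two's-complement fixed-point integer with the hidden bit just left
   of the binary point (+1 for positive posits, -2 for negative ones). */
typedef struct {
    int32_t scale;
    int64_t sig;
} decoded_posit;

/* Core of the add: align the operand with the smaller scale factor using
   an arithmetic right shift, then do a plain integer add -- no test for
   same sign versus opposite sign. */
static decoded_posit posit_add_core(decoded_posit a, decoded_posit b)
{
    if (a.scale < b.scale) { decoded_posit t = a; a = b; b = t; }

    int shift = a.scale - b.scale;
    if (shift > 62) shift = 62;          /* far-smaller operand shifts down to sticky territory */

    decoded_posit r;
    r.scale = a.scale;
    r.sig   = a.sig + (b.sig >> shift);  /* >> of a signed value is an arithmetic
                                            shift on mainstream compilers */
    /* Renormalization (shift r.sig until a single hidden bit remains, adjusting
       r.scale) and re-encoding the regime/exponent/fraction with rounding are
       omitted from this sketch. */
    return r;
}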

sco...@gmail.com

Aug 11, 2017, 1:04:52 PM
to Unum Computing, koz....@retro-freedom.nz, john.gu...@nus.edu.sg

On Saturday, June 10, 2017 at 6:30:04 PM UTC-6, John L. Gustafson wrote:
> Koz, adding posits is very much like adding floats, but with a simplification.
>
> ...
>
> The simplification is that the mantissas, once aligned, add like two's complement integers and that works for both negative and positive values. The "hidden bit" for negative posits has the value -2 instead of 1. That saves at least one conditional test that floats have to do: "Are the inputs the same sign or opposite sign?"

One follow-up question about this: the SFI paper says you have to take the absolute value of a posit before decoding its regime/exponent info (Sec. 2.1), which means we already have to test the signs of both operands in order to align them. At that point, how much do we save by avoiding the test for differing signs? For example, the posit implementation in Julia does compare the signs of the two operands [1], and invokes different subroutines for same-sign vs. different-sign inputs. Those subroutines also pay for additional branching to compare the fraction magnitudes of operands that were already naturally aligned.

Thoughts?
Ryan

[1] https://github.com/interplanetary-robot/SigmoidNumbers/blob/master/src/Math/addsub.jl#L35

John Gustafson

Aug 13, 2017, 1:42:01 AM
to sco...@gmail.com, Unum Computing, koz....@retro-freedom.nz, john.gu...@nus.edu.sg
While I work out a detailed example, here's something to think about. It is NOT necessary to take the absolute value first; that simply makes it easier for humans. It is faster and simpler for hardware to decode regime bits of negative numbers as they are, using XOR operations. The first two bits of a posit tell you which quadrant you are in and the integer meaning of the run of identical bits.

Then, the "hidden bit" of a negative posit can be treated as having value –2 instead of 1. Yonemoto's Verilog does this, saving a conditional test when adding. Since posits already seem confusing to people, I try not to introduce too many novel ideas at once when explaining them, so starting by taking the absolute value helps. Yonemoto started out designing a posit adder along the same lines as a float adder, and he noticed that the two conditional paths were doing the same thing to the data, so the branch condition was not needed!
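
To make the XOR idea concrete, here is one way to count the run of identical bits that starts just below the sign of a 32-bit posit, without branching on the sign. This is only a sketch of the run-length part; turning the count into the regime value k still depends on which quadrant the first two bits select, and __builtin_clz is a gcc/clang builtin.

#include <stdint.h>

/* Count the run of identical bits beginning at bit 30 (the first regime
   bit of a 32-bit posit), with no branch on the posit's sign. */
static int regime_run_length(uint32_t p)
{
    uint32_t rbit = (uint32_t)0 - ((p >> 30) & 1u); /* 0x00000000 or 0xFFFFFFFF */
    uint32_t x    = (p ^ rbit) << 1;                /* run becomes leading zeros; sign bit falls off */
    return x ? __builtin_clz(x) : 31;               /* the run fills the word when x == 0 */
}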

I would really like to see Berkeley's SoftFloat translated for posits. We may work on this here in Singapore, but if anyone has already started down that path, I hope they let this group know so we can combine efforts.

John


sco...@gmail.com

Aug 14, 2017, 1:49:44 PM
to Unum Computing, sco...@gmail.com, koz....@retro-freedom.nz, john.gu...@nus.edu.sg
I understand how to decode the regime bits of a positive-valued posit (regardless of the regime's sign), but I would have thought it unsafe to decode a negative posit directly. Wouldn't taking the two's complement risk generating a carry from the mantissa that could pollute the exponent?

For example, consider the following +/- pairs in {32,3} encoding:

+64+1/64 => 0 10 110 [+1.]00000000000100000000000000
-64-1/64 => 1 01 001 [-2.]11111111111100000000000000
            ^^^^^^^^
            perfect complement


+64-1/64 => 0 10 101 [+1.]11111111111000000000000000
-64+1/64 => 1 01 010 [-2.]00000000001000000000000000
            ^^^^^^^^
            perfect complement
  

+64      => 0 10 110 [+1.]00000000000000000000000000
-64      => 1 01 010 [-2.]00000000000000000000000000
                  !
            not perfect complement due to carry-in from mantissa

Assuming I'm not mistaken in my analysis, it seems like trying to detect and correct carry-in pollution would be more trouble than it's worth.
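
Here's a quick C check of that last pair, with the constants hand-packed from the field layout above (1 sign bit, 2 regime bits, 3 exponent bits, 26 fraction bits):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t pos = 0x58000000u;               /* +64: 0 10 110 then 26 zero fraction bits */
    uint32_t neg = (uint32_t)(-(int32_t)pos); /* two's complement: 0xA8000000 = 1 01 010 ... */
    uint32_t flp = ~pos;                      /* bitwise complement: 0xA7FFFFFF = 1 01 001 1...1 */

    printf("+64    : %08X\n", pos);
    printf("-64    : %08X\n", neg);  /* the +1 carries through the complemented fraction
                                        into the exponent field */
    printf("~(+64) : %08X\n", flp);  /* regime/exponent fields differ from those of -64 */
    return 0;
}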

Thoughts?
Ryan

John Gustafson

Aug 14, 2017, 11:53:24 PM
to sco...@gmail.com, Unum Computing, Koz Ross, john.gu...@nus.edu.sg
I think you misunderstand me. You still need to separate the scaling value (determined by the regime and exponent bits) from the fraction bits, and addition and subtraction are performed on the hidden bit and fraction bits just as they would be for a float. A float addition doesn't produce a carry bit that pollutes the exponent!

My point is that you can regard the hidden bit as 01 for positive posits and 10 for negative posits, and treat them as 2's complement fixed point numbers. As 2's complement numbers, 01 just left of the binary point means +1, and 10 just left of the binary point means –2. When you shift right to line up the binary points, you do an arithmetic shift (copy the sign bit). Ordinary integer addition hardware suffices, and the sign bit takes care of itself. Then you renormalize the result as you do with floats, which tells you the regime and exponent bits.
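
A tiny fixed-point illustration of that in C (the Q2.6 width here is just a convenient size for the example; a real adder would carry more fraction and guard bits):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Q2.6 fixed point: two bits left of the binary point, six to the right.
       The hidden pair is 01 (= +1) for a positive posit, 10 (= -2) for a
       negative one. */
    int8_t a = 0x60;            /* 01.100000 = +1.5              */
    int8_t b = (int8_t)0xB0;    /* 10.110000 = -2 + 0.75 = -1.25 */
    int8_t s = (int8_t)(a + b); /* ordinary integer add, no sign test */

    printf("%f\n", s / 64.0);   /* prints 0.250000, i.e. 1.5 + (-1.25) */
    return 0;
}

The 00.010000 result would then be renormalized (shifted left, with the scale adjusted) and re-encoded, just as a float result would be.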

John


sco...@gmail.com

Aug 15, 2017, 12:08:33 AM
to Unum Computing, sco...@gmail.com, koz....@retro-freedom.nz, john.gu...@nus.edu.sg
I think I'm failing to explain my question well, probably combined with misunderstanding you for good measure.

I got the actual addition part working, as you say. Works great. I was worried about all the (branchy) code that gets you to the point where you can issue that integer add, in particular how to extract regime and exponent bits efficiently before, and pack them back together after.

I got the idea from your previous replies that this could be done in a data-independent fashion. I am able to extract the regime bits from a positive-valued posit without needing to know whether the regime indicator bit is set, but to handle negative posits I had to essentially take the absolute value first, since there was no obvious way to extract the regime or exponent bits directly (because of the carries when you negate a posit's bit pattern). If that's normal and expected, then we're good. If there *is* in fact a way to extract both the regime and exponent fields without knowing or caring what the sign was, then I'd love to learn it. The closest I have right now is a branchless way to take the absolute value before, and restore the sign after:

int32_t smask = value >> 31;          // all ones if negative, else zero (arithmetic shift)
value = (value ^ smask) - smask;      // remove sign (if present)
/* manipulate bits */
value = (value ^ smask) - smask;      // restore sign (if needed)

Incidentally, it turns out gcc knows the above shift-xor-subtract trick, substituting it for expressions such as:

value = (value < 0) ? -value : value;

Regards,
Ryan