
Jan 28, 2024, 1:34:12 PM

How do computers nowadays actually perform decimal arithmetic,

and how did they do it in the past?

For adding, the first microprocessors used the "if the result of

A+B is larger than nine, then add six to the result and set carry"

method. If one wanted to speed this up with today's possibilities,

then the two additions could be done in parallel, with the carry

of A+B+6 selecting the result. This could then be integrated into

a conditional sum adder. If A+B+6 could be calculated efficiently

with a modified carry lookahead method, this method could work well.
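A minimal sketch of that carry-select idea at one digit position (illustrative C, not any particular CPU's circuit; in hardware the two sums would come from parallel adders and the `if` would be a mux):

```c
#include <assert.h>

/* One BCD digit position: compute a+b+cin and a+b+cin+6 in parallel;
   the carry out of bit 4 of the speculative "+6" sum selects which
   result is correct, and doubles as the decimal carry out. */
static unsigned bcd_digit_add(unsigned a, unsigned b, unsigned cin,
                              unsigned *cout)
{
    unsigned s0 = a + b + cin;  /* plain binary sum         */
    unsigned s1 = s0 + 6;       /* speculative adjusted sum */
    if (s1 & 0x10) {            /* decimal carry occurred   */
        *cout = 1;
        return s1 & 0xF;
    }
    *cout = 0;
    return s0;
}
```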

For multiplication... I am not sure that the method outlined above

for addition would be efficient.

My (uninformed) guess would be that people would re-encode the BCD

numbers to something with different weights than the traditional

8-4-2-1 (maybe 4-2-2-1) and then make modified Wallace or Dadda

trees, which would deliver the individual digits in that encoding,

from which they could then be converted to BCD or another format

as desired.

Does anybody know how this is actually done?


Jan 28, 2024, 3:05:49 PM

In my case, I had added BCDADC/BCDSBB instructions, which add a group of 16 BCD digits in 64

bits, using SR.T as a CarryIn/CarryOut flag.

For each digit, IIRC:

First do a 4-bit add with a carry in/out;

If result was > 9 (res[3] & (res[2] | res[1])),

Subtract 10 from result, and set carry.

Then chain 16 of these units together to form a full-width BCD adder.
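A digit-serial software sketch of that 16-digit chain (a hedged model, assuming each digit sum, carry included, is simply compared against 9):

```c
#include <stdint.h>

/* Add two packed-BCD values of 16 digits in 64 bits, one digit at a
   time, with the "if > 9, subtract 10 and set carry" fix-up from the
   description above. *carry is CarryIn on entry, CarryOut on exit. */
static uint64_t bcd_add64(uint64_t a, uint64_t b, unsigned *carry)
{
    uint64_t r = 0;
    unsigned c = *carry;
    for (int i = 0; i < 16; i++) {
        unsigned s = (unsigned)((a >> (4 * i)) & 0xF)
                   + (unsigned)((b >> (4 * i)) & 0xF) + c;
        c = (s > 9);
        if (c)
            s -= 10;
        r |= (uint64_t)s << (4 * i);
    }
    *carry = c;
    return r;
}
```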

The BCDSBB logic was similar, just adding a lookup table on the input

side to complement the value.

Say:

If SBB=0, RHS digit passed in as-is;

If SBB=1, RHS digit is complemented, carry-in is inverted.

For multiply, IIRC (in software):

Could use shift-and-add, just using a BCD adder.

For divide, IIRC:

Could use shift-and-subtract, just using a BCD subtract.

Multiplying/dividing by a power of 10 can be done with a plain shift.

I have a vague memory of naive shift-add happening to still work with

BCD, provided the BCD operations were still used for the main ADD/SUB

part (rather than needing to resort to a more complex long-division

algorithm). I suspect the reason is that each shift by a multiple of 4 bits

maps exactly to a power of 10, and each group of 4 bits

doesn't go outside of the allowed range.
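That shift-and-add multiply can be sketched as below, where every add and every doubling goes through a BCD adder so each 4-bit group stays in range (the 64-bit BCD adder is repeated here so the example stands alone):

```c
#include <stdint.h>

/* 16-digit packed-BCD add with carry in/out, as described earlier. */
static uint64_t bcd_add64(uint64_t a, uint64_t b, unsigned *carry)
{
    uint64_t r = 0;
    unsigned c = *carry;
    for (int i = 0; i < 16; i++) {
        unsigned s = (unsigned)((a >> (4 * i)) & 0xF)
                   + (unsigned)((b >> (4 * i)) & 0xF) + c;
        c = (s > 9);
        if (c)
            s -= 10;
        r |= (uint64_t)s << (4 * i);
    }
    *carry = c;
    return r;
}

/* Multiply a BCD value by a small binary multiplier n: naive
   shift-and-add, except "shifting" (doubling) is done as x+x with
   the BCD adder, which keeps every digit in the 0..9 range. */
static uint64_t bcd_mul_small(uint64_t x, unsigned n)
{
    uint64_t acc = 0, term = x;
    while (n) {
        unsigned c = 0;
        if (n & 1)
            acc = bcd_add64(acc, term, &c);
        c = 0;
        term = bcd_add64(term, term, &c);  /* decimal doubling */
        n >>= 1;
    }
    return acc;
}
```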

Not sure about a full hardware multiplier; I didn't implement one for BCD.

There was the nifty trick that one could do fastish binary to decimal:

Rotate-with-carry-left the binary value by 1 bit;

Do a BCDADC with the value of itself (adds the bit from the binary value);

Repeat for each bit of the input value.
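That rotate-and-BCDADC loop is essentially double dabble run through a BCD adder; a minimal sketch (the BCD adder is repeated so the example is self-contained):

```c
#include <stdint.h>

/* 16-digit packed-BCD add with carry in/out, as described above. */
static uint64_t bcd_add64(uint64_t a, uint64_t b, unsigned *carry)
{
    uint64_t r = 0;
    unsigned c = *carry;
    for (int i = 0; i < 16; i++) {
        unsigned s = (unsigned)((a >> (4 * i)) & 0xF)
                   + (unsigned)((b >> (4 * i)) & 0xF) + c;
        c = (s > 9);
        if (c)
            s -= 10;
        r |= (uint64_t)s << (4 * i);
    }
    *carry = c;
    return r;
}

/* Binary to BCD: for each bit (MSB first), shift the bit out and do
   bcd = bcd + bcd + bit with a BCD add, i.e. decimal 2*bcd + bit. */
static uint64_t bin_to_bcd(uint32_t bin)
{
    uint64_t bcd = 0;
    for (int i = 31; i >= 0; i--) {
        unsigned c = (bin >> i) & 1;  /* the bit becomes the carry-in */
        bcd = bcd_add64(bcd, bcd, &c);
    }
    return bcd;
}
```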

For signed values, I had used a wonky/modified version of 9's complement.

Top digit:

0..7: Positive

8/9: Negative

But, otherwise the same.

This variant mostly added the property that normal integer comparisons

could be used on the values, and would still give the same relative

ordering.

IIRC, there were also some ops to pack/unpack DPD values (DPD being

wonky, but not too difficult for hardware to pack/unpack).

I think my thinking was that with BCD, this minimized the amount of

"additional" logic needed to support BCD (could add BCDADC/BCDSBB, and

pretty much everything else could still leverage normal integer

instructions).

But, practically, not a whole lot of use-case for BCD in the stuff I am

doing...

Say, I am not imagining embedded/DSP style use-cases are going to have

much use for BCD or decimal arithmetic.

I guess one other possible scheme could be to add a version of decimal

math where, say, each group of 10 bits is understood as a value between

000 and 999, with each 64-bit register understood to hold 18 or 19

decimal digits (top 4 bits being special).

Multiplying/dividing would be a bit more complicated (I don't imagine

shift/add or shift/subtract will give the desired results). Would likely

need to use a form of (mod 1000) long multiply and long division.

To make this effective, the hardware support would need to be a bit more

involved, with special instructions needed to help facilitate multiply

and similar, etc. Vs just being able to get along entirely with ADD/SUB

helpers.

There is also the 30-bit 000000000..999999999 scheme, but this makes

more sense for a plain software implementation (can mostly leverage

normal integer operations without too much overhead). Would be less

viable for direct hardware support.
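A sketch of that software-friendly base-10^9 scheme, with each 32-bit limb holding nine decimal digits (0..999999999) so ordinary integer operations do almost all the work:

```c
#include <stdint.h>

#define LIMB_BASE 1000000000u  /* each limb holds 9 decimal digits */

/* Add two little-endian base-10^9 numbers of n limbs apiece: one
   plain integer add per limb, plus a compare/subtract for the
   decimal carry between limbs. */
static void dec9_add(uint32_t *r, const uint32_t *a,
                     const uint32_t *b, int n)
{
    uint32_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t s = (uint64_t)a[i] + b[i] + carry;
        carry = (s >= LIMB_BASE);
        r[i] = (uint32_t)(s - (carry ? LIMB_BASE : 0));
    }
}
```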

...

Jan 28, 2024, 4:05:46 PM

Thomas Koenig wrote:

> How do computers nowadays actually perform decimal arithmetic,

> and how did they do it in the past?

> For adding, the first microprocessors used the "if the result of

> A+B is larger than nine, then add six to the result and set carry"

> method. If one wanted to speed this up with today's possibilities,

> then the two additions could be done in parallel, with the carry

> of A+B+6 selecting the result. This could then be integrated into

> a conditional sum adder. If A+B+6 could be calculated efficiently

> with a modified carry lookahead method, this method could work well.

The above describes a suite of methods applicable to decimal arithmetic.

> For multiplication... I am not sure that the method outlined above

> for addition would be efficient.

For multiplication, one can look up each digit pair in a small product table and feed the partial-product digits into a 3-input decimal carry save adder. By restricting

the input values to the set {0..9} instead of {0..15} you save a whole

bunch of gates. It is equivalent to a 100 cell ROM after the Verilog

gate muncher gets rid of excess logic.

{ { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } // [*,0]

{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 } // [*,1]

{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 } // [*,2]

{ 0, 3, 6, 9, 12, 15, 18, 21, 24, 27 } // [*,3]

..

{ 0, 9, 18, 27, 36, 45, 54, 63, 72, 81 } // [*,9]

}

You route the leading digit to the next column of significance.

You route the trailing digit to this column of significance.
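A software model of that routing: each digit pair goes through a 10x10 product lookup (computed here, standing in for the 100-cell ROM), the trailing digit feeds its own column, the leading digit feeds the next column up, and a final pass resolves the decimal carries:

```c
#include <stdint.h>

/* Multiply two little-endian decimal-digit arrays; r gets na+nb
   digits. Illustrative only -- the hardware version would reduce
   the columns with decimal carry-save adders in a tree, not with
   this sequential final pass. */
static void dec_mul(const uint8_t *a, int na,
                    const uint8_t *b, int nb, uint8_t *r)
{
    unsigned col[64] = {0};  /* column sums, assuming na+nb < 64 */
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++) {
            unsigned p = a[i] * b[j];  /* the 10x10 table entry    */
            col[i + j]     += p % 10;  /* trailing digit: this col */
            col[i + j + 1] += p / 10;  /* leading digit: next col  */
        }
    unsigned carry = 0;
    for (int k = 0; k < na + nb; k++) {
        unsigned s = col[k] + carry;
        r[k] = (uint8_t)(s % 10);
        carry = s / 10;
    }
}
```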

> My (uninformed) guess would be that people would re-encode the BCD

> numbers to something with different weights than the traditional

> 8-4-2-1 (maybe 4-2-2-1) and then make modified Wallace or Dadda

> trees, which would deliver the individual digits in that encoding,

> from which they could then be converted to BCD or another format

> as desired.

Jan 28, 2024, 5:09:17 PM

According to Thomas Koenig <tko...@netcologne.de>:

>How do computers nowadays actually perform decimal arithmetic,

>and how did they do it in the past?

Here's some papers from IBM about the implementation of decimal arithmetic

in mainframes. Decimal floating point has a compressed format that puts

three digits into ten bits, but they say they unpack it into BCD to do

the arithmetic. The actual arithmetic is still largely digit by digit,

with multiplication somewhat parallel adding up partial products.

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=7c68c318aa6f98ae5a23163ff6f246180a782331

https://speleotrove.com/mfc/files/schwarz2009-decimalFP-on-z10.pdf

I get the impression the arithmetic algorithms haven't changed much

even though modern systems wrap fancier stuff around it like DFP and

vector registers. (No, I do not know what people do with decimal

vector registers, but they must do something since IBM added them

recently.)

--

Regards,

John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",

Please consider the environment before reading this e-mail. https://jl.ly


Jan 28, 2024, 6:17:43 PM

On Sun, 28 Jan 2024 18:34:08 +0000, Thomas Koenig wrote:

> How do computers nowadays actually perform decimal arithmetic,

> and how did they do it in the past?

There are many ways to perform decimal arithmetic.

Let's start with a decimal adder. If decimal digits

are represented as BCD, it's possible to put together

a logic circuit that takes two BCD digits and a carry

bit as input, and produces one BCD digit and a carry

bit as an output.

This logic circuit can then be put in a larger circuit

in the same way that a binary adder can be. So you

can speed up addition with a carry-select adder, for

example.

If you scroll down about 7/8 of the way on my page at

http://www.quadibloc.com/comp/cp0202.htm

I show a logic diagram for a decimal carry-save adder.

My scheme, though, is crude and simplistic, and better

designs are possible; the page mentions a paper from

1974 discussing a decimal carry-save adder.

So most of the techniques used in computers to speed up

binary arithmetic are also applicable to decimal arithmetic.

In the past, some very simple methods were used to implement

decimal arithmetic cheaply. For example, representing decimal

digits in "excess-3" notation meant that the carry out from

adding two decimal digits would occur when a normal binary

carry out would occur, so less special circuitry for decimal

is needed. One has to either add or subtract three, depending

on whether there was or wasn't a carry, to correct the digits

for future arithmetic afterwards.
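A sketch of one excess-3 digit position: since each digit is stored as d+3, a decimal carry appears as an ordinary binary carry out of the 4-bit add, and the fix-up is the +/-3 correction just described:

```c
#include <assert.h>

/* Add two excess-3 digits (each stored as value+3) with a binary
   carry-in. (x+3)+(y+3) = x+y+6, so the binary carry out of bit 4
   is exactly the decimal carry; afterwards add 3 (carry, since the
   wrap removed both biases) or subtract 3 (no carry, doubled bias). */
static unsigned xs3_add(unsigned a, unsigned b, unsigned cin,
                        unsigned *cout)
{
    unsigned s = a + b + cin;
    if (s & 0x10) {
        *cout = 1;
        return (s & 0xF) + 3;  /* wrapped digit: re-bias with +3 */
    }
    *cout = 0;
    return s - 3;              /* no carry: remove the extra bias */
}
```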

John Savard

Jan 29, 2024, 10:32:02 AM

Thomas Koenig <tko...@netcologne.de> writes:

>How do computers nowadays actually perform decimal arithmetic,

>and how did they do it in the past?

http://bitsavers.org/pdf/burroughs/MediumSystems/B2500_B3500/1025475_B2500_B3500_RefMan_Oct69.pdf

Starting at page 5-10.
