On 2/24/2013 10:10 AM, nm...@cam.ac.uk wrote:
> In article <kgdisp$39r$1...@dont-email.me>,
> Ivan Godard <iv...@ootbcomp.com> wrote:
>>>
>>> Or just throw the whole, horrible mess out of the window, use a
>>> binary format, and convert when needed. Decimal floating-point
>>> is a marketing solution to a known technical non-requirement.
>>
>> Not so. It is a very technical requirement, with real world consequences.
>
> No. Decimal FIXED-POINT is the requirement. It doesn't help.
Don't be misled by the names of things. IEEE decimal is in fact decimal
fixed point.
>
>> Many business computations are required *by law* to be exact and to have
>> exactly specified rounding, in decimal to match the real world of pounds
>> and dollars. Because many quantities that are exact in decimal are
>> non-terminating fractions in binary, this technical requirement cannot
>> be satisfied as you suggest, because round-off error in converted binary
>> will not provide the same result for a computation that is exact in
>> decimal. ...
>
> You are mistaken, because you assume that I meant binary floating-
> point. I didn't. This requirement has been met since time immemorial
> by using scaled integers. Improving the hardware support for fixed-
> point emulation would help.
Scaled integers (as built on ordinary integer arithmetic) suffer from
the problem that scale must be maintained statically, whereas one wants
(and many algorithms require) scale to be maintained dynamically. The
intent of the quantum mechanism in IEEE decimal is to permit automatic
scaling, as binary floating point does, while preserving significance
information. Loss of significance is detectable by the hardware in IEEE
decimal, and ignored in IEEE binary.
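To make the quantum idea concrete, here is a minimal sketch in Python's
decimal module (which implements Cowlishaw's General Decimal Arithmetic
specification, the base of the IEEE decimal formats); the values are
mine, purely illustrative:

    from decimal import Decimal

    # Equal in value, but carrying different quanta:
    a = Decimal("1.0")
    b = Decimal("1.00")
    print(a == b)            # True -- same numerical value
    print(a, b)              # 1.0 1.00 -- distinct representations

    # The arithmetic maintains the scale dynamically:
    price = Decimal("19.99")
    print(price * Decimal("3"))   # 59.97 -- quantum follows the operands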
You are absolutely right that better hardware support for fixed point
would be a good thing. However, neither the IEEE charter nor the
practicalities of legacy languages and applications would permit a
redefinition of integer. Consequently fixed-point support was defined in
the only domain in which the languages and practice already supported
extensive fixed-point, namely the commercial computing world.
You will find all the "hardware support for fixed point emulation" you
want in IEEE decimal.
>
> Decimal floating-point doesn't cut the mustard for a large number
> of reasons. Here are SOME of the reasons:
>
> a) The rounding modes include many that are not in IEEE 754;
> some even require carry values.
International accounting standards that long predate computers mandate
so-called "banker's rounding" for currency calculations; in many
countries, including your own, this requirement is also by statute. This
rounding mode is essential if hardware decimal is to be usable at all.
It is of course meaningless in binary, where the rounding will be wrong
anyway. We considered adding it to binary simply for symmetry, but
decided that there was no reason to burden the hardware guys with a
pointless mode that could potentially break binary codes that assume the
existing set of modes.
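For the curious, here is what banker's rounding looks like, again in
Python's decimal module; the amounts are mine, purely illustrative:

    from decimal import Decimal, ROUND_HALF_EVEN

    # Ties go to the even neighbour, so the bias does not accumulate
    # over millions of currency operations:
    for amount in ("2.125", "2.135", "2.145"):
        print(Decimal(amount).quantize(Decimal("0.01"),
                                       rounding=ROUND_HALF_EVEN))
    # 2.12, 2.14, 2.14 -- the half-cent ties round to the even cent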
> b) There are also strict rules on precision and overflow,
> at specific limits.
You would prefer no rules?
> c) Many of them constrain multiplication and division; the
> former has to be done by a double precision operation and rounding
> to single precision, and the latter is too horrible to contemplate
> starting from floating-point.
There are no "many of them". There is one standard, and almost nothing
is "up to the implementation". Implementation-defined things include the
bitwise representation of values (which you cannot detect with decimal
operations; you'd have to cast to integer and look at the bits). The
semantics of trap handlers is also implementation-defined, but that's
true of binary FP as well, and the definitions and facilities for traps
are identical in the two radices.
>
>> As for the impact, we on the committee (mostly HPC types with little
>> business background) wondered if there was really a need to upgrade 854
>> as part of our work. Then we saw measurements of real-world database
>> applications. If a column is typed "numeric" then the database keeps the
>> values in decimal, and the programs - yes, often COBOL - do the
>> arithmetic in decimal too. Over a sample of several thousand
>> applications, umptillion CPU cycles, and billions of dollars of
>> hardware, some 31% of all processor cycles were spent in *decimal
>> arithmetic emulation routines*.
>
> I don't believe that statistic. Waiting for memory and recovering
> from glitches is VASTLY higher. But, even if it were true, my
> point is that what has been standardised won't meet even Cobol's
> basic requirements, let alone those of all of the laws, and it
> will STILL require an emulation layer!
Well, I didn't do the studies, but I believe the counts were of issued
instructions and hence would not have included memory stalls that were
not hidden by the OOO. I *have* looked at the emulation codes that were
consuming the cycles, and I believe the numbers for apps in which
essentially all the computation is being done in decimal, which is the
routine case. YMMV.
I *can* address the COBOL question. The IEEE committee worked very
closely with the COBOL committee on the new standard, and they said they
were completely satisfied. They had to make one important change to the
COBOL standard, reducing the mandated precision from 34 digits to 33, so
as to get something that would fit in a quad (the former size dated from
when numbers were BCD strings on a 1401), but they were happy to do so
to get the added range and to be freed from the task of explicitly
coding the decimal-point maintenance.
>> Interesting to me, it turns out that decimal floating point has better
>> behavior than binary for many scientific codes as well. The reasons are
>> esoteric and this is not my expertise, but it boils down to the fact
>> that IEEE decimal is not really "floating" point in the sense that
>> binary is. Instead, decimal is really a scaled fixed-point
>> representation, because it doesn't do normalization. Consequently, in
>> decimal 1.0, 1.00, and 1.000 are three different numbers with three
>> different representations, whereas in binary they are all the same.
>
> Well, it is my area, and it isn't. Back in the days when there were
> a lot of radices in use (including decimal, though I never personally
> used it), the investigations found that binary was marginally better
> than anything else. Only marginally.
>
> What you say sounds like a garbling of the numerical differences
> between scaled fixed point and floating-point, which was well-known
> (to expert numerical analysts) by 1970. And, in general, floating-
> point is much better. For the few codes where fixed-point is better,
> the IEEE 754 form won't work because of multiplication and division.
I'm not sure why you single out multiplication and division, which work
just fine on IEEE decimal.
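A quick illustration of both, again with Python's decimal module (my
example, not anything from the standard):

    from decimal import Decimal, getcontext

    getcontext().prec = 16
    print(Decimal("1.00") * Decimal("3.0"))  # 3.000 -- exact; exponents add
    print(Decimal(1) / Decimal(3))  # 0.3333333333333333 -- rounded to prec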
There is only one case I know of in which a standard that preserves
significance requires added thought (and code). That's in algorithms
that actually *increase* significance, although they don't look like it.
A Newton-Raphson sqrt roughly doubles the number of correct bits with
every iteration, yet in standard decimal the quanta will be shrinking.
You have to use the (standard) operations to assert the new and correct
significance of the result.
Of course, I doubt many COBOL finance programs are doing
Newton-Raphson. :-)
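If you do want to play with such an algorithm, here is a minimal sketch
in Python's decimal module; dec_sqrt, its starting estimate, and the
iteration count are my own illustrative choices:

    from decimal import Decimal, getcontext

    getcontext().prec = 34               # decimal128-sized precision

    def dec_sqrt(x, iters=6):
        est = x / 2                      # crude starting estimate
        for _ in range(iters):
            est = (est + x / est) / 2    # Newton-Raphson step
        # Assert the significance the iteration has actually earned,
        # rather than whatever quantum the arithmetic happened to produce:
        return est.quantize(Decimal("1.000000"))

    print(dec_sqrt(Decimal(2)))          # 1.414214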
> Quanta are merely an extension of the old unnormalised floating-
> point and that was well-known to be a numerical disaster area,
> except in microkernels written by top numerical experts. As I
> understand it, IEEE 754 has essentially specified prenormalisation
> for multiplication and division, so it does avoid the number one
> disaster.
Unnormalized floating point with improper rounding is a disaster for
computations that approximate the real line; you tend to get cumulative
roundoff errors. It used to be that the same occurred in binary FP too. That
is fixed in the standard by requiring that IEEE decimal offer
"round-to-nearest-even" just like IEEE binary; use that and there is no
problem with unnormalization. The applications for IEEE decimal do *not*
approximate the real line, as you will discover if you walk into a bank
and try to cash a check for 2pi pounds. :-)
There is no specification for prenormalization; not sure where you got
the idea. Instead the standard uses the same terminology as it uses for
binary: computation shall be *as if* the operation were performed
exactly, and the result is then rounded as specified by the mode. I'm
reasonably confident that doing prenormalization followed by a hardware
binary operation and then post un-normalization would not conform to
standard; I do not recommend that you buy any machine that does it that
way.
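Python's decimal module exhibits the same as-if-exact rule (my example):

    from decimal import Decimal, getcontext

    getcontext().prec = 7
    # The 13-digit exact product is formed (conceptually) and then
    # rounded once to the 7-digit destination:
    print(Decimal("1234.567") * Decimal("10.00001"))   # 12345.68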
If you are interested in how to do it right, I suggest you contact Mike
Cowlishaw at IBM Cambridge who has a full software emulation package
that is public domain (and was used as the base for the IBM hardware
implementation). You can download the package at ICU:
http://download.icu-project.org/files/decNumber/
>> For some truly inspired rants about round-off and significance issues in
>> binary floating point, Google "William Kahan floating point".
>
> Which apply, even more strongly, to decimal floating-point, because
> of the "wobbling significance" problem. I can't remember offhand
> references to the numerical problems with fixed-point, but they
> are in the context of pivoting. Nor can I remember any references
> to the binary versus other investigations, though I did a few.
You are complaining that a sewing machine is not a hammer. I doubt many
will use IEEE decimal for LU decomposition. :-)
However, we did consider it apt for stability exploration. Kahan had a
technique that he had found very useful in heavy numeric applications as
a way to get a sense of the "goodness" of a result. He would run the
application twice, once with rounding mode set to round-up (toward
positive infinity) and once to round-down. The difference between the
two results gave an approximation to the significance. He had quite a
few harmless-looking examples where the difference was of the same
magnitude as the result itself.
Applied here, one could run an existing app again in decimal and then
try to figure out why the quantum had shrunk to zero :-)
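As a parting sketch, here is the two-runs trick transliterated into
Python's decimal module; harmonic and the loop bound are mine, purely
for illustration:

    from decimal import Decimal, getcontext, ROUND_CEILING, ROUND_FLOOR

    def harmonic(n):
        total = Decimal(0)
        for k in range(1, n + 1):
            total += Decimal(1) / Decimal(k)  # each divide rounds per context
        return total

    getcontext().prec = 8
    getcontext().rounding = ROUND_CEILING
    high = harmonic(10000)
    getcontext().rounding = ROUND_FLOOR
    low = harmonic(10000)
    print(high, low, high - low)  # the spread estimates lost significance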