On 4/8/21 2:09 AM, Juha Nieminen wrote:
> In comp.lang.c++ Richard Damon <Ric...@damon-family.org> wrote:
>> Floating Point should ALWAYS be treated as 'imprecise', and sometimes
>> imprecise can create errors big enough to cause other issues.
>
> That's actually one of the most common misconceptions about floating point
> arithmetic.
And I would say that THAT is one of the most common misconceptions about
floating point.
Yes, there are LIMITED situations where floating point will be exact,
or at least consistent, but unless you absolutely KNOW that you are in
one of those cases, assuming that you are is the source of most problems.
And, if you are in one of those cases, floating point is very often not
the best answer by many measures (it may be the simplest, but that
doesn't always mean best).
Double Precision floating point numbers represent only slightly less
than 2**64 discrete values. Of the real values in that range, the odds
that one chosen truly at random is exactly representable are minuscule.
They do tend to be 'close enough', as few things actually need more
precision than a double provides. So the way to go is to acknowledge
that they ARE imprecise, but precise enough for most cases, and to stay
alert for the cases where that imprecision becomes important, like the
case talked about previously.
>
> I know that you didn't mean it like this, but nevertheless, quite often
> people talk about floating point values as if they were some kind of
> nebulous quantum mechanics Heisenberg uncertainty entities that have no
> precise well-defined values, as if they were hovering around the
> "correct" value, only randomly being exact if you are lucky, but most
> of the time being off, and you can never be certain that it will have
> a particular value, as if they were affected by quantum fluctuations
> and random transistor noise.
>
Yes, imprecise does not mean 'random'; taking it that way shows a
misunderstanding of the terms.
> Or that's the picture one easily gets when people talk about floating
> point values.
>
> In actuality, floating point values have very precise determined
> values, specified by an IEEE standard. They, naturally, suffer from
> *rounding errors* when the value to be represented can't fit in the
> mantissa, but that's it. That's not uncertainty, that's just well-defined
> rounding (which can be predicted and calculated, if you really wanted).
Yes, a particular bit combination has an exact value that it represents,
and the operations on those bits have fairly precise rules for how they
are to be performed.
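You can even print the exact value a given bit combination represents
(a sketch, assuming IEEE-754 doubles):

#include <cstdio>

int main() {
    double d = 0.1;
    /* more digits than 0.1 itself survives: the stored value is not 0.1 */
    std::printf("%.20f\n", d);  /* 0.10000000000000000555 on IEEE-754 machines */
    std::printf("%a\n", d);     /* 0x1.999999999999ap-4, the exact bits */
    return 0;
}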
>
> Most certainly if two floating point variables have been assigned the
> same exact value, you can certainly assume them to be bit-by-bit
> identical.
>
> double a = 1.25;
> double b = 1.25;
> assert(a == b); // Will never, ever, ever fail
That just shows repeatability, not accuracy.
It also works for the values 0.1 and 0.2 even though they are NOT
exactly represented, as can be seen by:
double a = 0.1;
double b = 0.2;
double c = 0.3;
assert(a + b == c); /* Will fail on most binary floating point machines */
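Which is why computed values are normally compared against a tolerance
rather than with == (a sketch; nearly_equal is just an illustrative
helper, and the tolerance has to be chosen for the problem at hand):

#include <cassert>
#include <cmath>

/* illustrative helper: relative comparison against a tolerance */
bool nearly_equal(double x, double y, double tol) {
    return std::fabs(x - y) <= tol * std::fmax(std::fabs(x), std::fabs(y));
}

int main() {
    double a = 0.1, b = 0.2, c = 0.3;
    assert(nearly_equal(a + b, c, 1e-15));  /* passes where a + b == c fails */
    return 0;
}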
>
> If two floating point values have been calculated using the exact
> same operations, the results will also be bit-by-bit identical.
> There is no randomness in floating point arithmetic. There are no
> quantum fluctuations affecting them.
>
> double a = std::sqrt(1.5) / 2.5;
> double b = std::sqrt(1.5) / 2.5;
> assert(a == b); // Will never, ever, ever fail
Again, this shows consistency, not accuracy.
For almost all values of a we will find that
std::sqrt(a) * std::sqrt(a) != a
(Yes, the comparison will come out equal for some values of a, likely
gotten by taking a value with less than 26 bits of precision and
squaring it.)
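To make that concrete (assuming IEEE-754 doubles with a correctly
rounded sqrt):

#include <cassert>
#include <cmath>

int main() {
    /* sqrt(2) has to be rounded, so squaring the rounded value misses 2 */
    assert(std::sqrt(2.0) * std::sqrt(2.0) != 2.0);  /* 2.0000000000000004 */
    /* 4 is the square of an exactly representable value, so it round-trips */
    assert(std::sqrt(4.0) * std::sqrt(4.0) == 4.0);
    return 0;
}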
>
> Also, there are many values that can be represented 100% accurately
> with floating point. Granted that them being in base-2 makes it
> sometimes slightly unintuitive what these accurately-representable
> values are, because we think in base-10, but there are many.
>
> double a = 123456.0; // Is stored with 100% accuracy
> double b = 0.25; // Likewise.
>
> In general, all integers up to 2^53 (positive or negative)
> can be represented with 100% accuracy in a double-precision floating
> point variable. Decimal numbers have to be thought in base-2 when
> considering whether they can be represented accurately. For example
> 0.5 can be represented accurately, 0.1 cannot (because 0.1 has an
> infinite representation in base-2, and thus will be rounded to
> the nearest value that can be represented with a double.)
>
Yes, and if you really are dealing with just integers, keep them as
integers. If you have to stop to decide whether your number happens to
be one that is exact, you are probably doing something wrong, making
problems for yourself, and will likely make a mistake at some point.
Much better either to reformulate so they are ALWAYS exact, or to treat
them as always possibly inexact.
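As a concrete case (a sketch, assuming a 64-bit long long): once
integers pass 2**53, a double can no longer tell neighbours apart,
while an integer type can:

#include <cassert>

int main() {
    long long big = (1LL << 53) + 1;      /* 9007199254740993, exact as an integer */
    double d = static_cast<double>(big);  /* rounds to 9007199254740992 */
    assert(static_cast<long long>(d) != big);  /* the double lost the +1 */
    assert(big != (1LL << 53));                /* the integer did not */
    return 0;
}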