I just started on the Python learning path.
I tried some calculator stuff, and got this
sort of result:
http://static.userland.com/images/stanleydaily/pythonarithmetic.gif
I searched the Tutorial at http://www.python.org/doc/current/tut/tut.html
and the Reference manual at http://www.python.org/doc/current/ref/ref.html
for some discussion of why simple decimal arithmetic
gives these sorts of inaccurate results. But I couldn't find anything.
Can someone provide one or more URLs that
discuss the topic?
Thanks
Stan
Most decimal fractions aren't exactly representable in binary floating point.
Not unique to Python; same story in C, C++, Java, Perl, Fortran, etc. See
http://www.python.org/cgi-bin/moinmoin/RepresentationError
for details.
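The representation error is easy to see from any interpreter prompt; a minimal sketch in present-day Python (print as a function):

```python
# Most decimal fractions, like 0.1, have no exact binary
# representation, so arithmetic on them picks up tiny errors.
a = 0.1 + 0.2
print(repr(a))   # 0.30000000000000004 -- not exactly 0.3
print(a == 0.3)  # False
# The error is minuscule, though -- far below any practical tolerance:
print(abs(a - 0.3) < 1e-15)  # True
```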
If you need exact decimal arithmetic, and can afford the expense of
simulating decimal arithmetic, your best bet for now in Python is to use the
FixedPoint class at
ftp://ftp.python.org/pub/python/contrib-09-Dec-1999/
DataStructures/FixedPoint.py
Else you simply have to live with the fact that binary floats are just
approximations (but good ones) to decimal fractions.
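For readers on current Pythons: the standard-library decimal module (added after this thread, in Python 2.4) now plays the role the FixedPoint class plays here. A sketch:

```python
from decimal import Decimal

# Exact decimal arithmetic: construct from strings so the values
# never pass through a binary float.
total = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(total)                     # 0.3, exactly
print(total == Decimal("0.3"))   # True
# Constructing from a float instead captures the binary approximation:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```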
if-you-want-exact-buy-a-one-dollar-hand-calculator<wink>-ly y'rs - tim
"""
Taking the repr() of a float now uses a different formatting precision
than str(). repr() uses %.17g format string for C's sprintf(), while
str() uses %.12g as before. The effect is that repr() may occasionally show
more decimal places than str(), for certain numbers. For example, the
number 8.1 can't be represented exactly in binary, so repr(8.1) is
'8.0999999999999996', while str(8.1) is '8.1'.
"""
--
Eric Renouf
Software Engineer
Opticom Inc.
www.getiview.com
Who WAS that masked man!
> Not unique to Python; same story in C, C++, Java, Perl, Fortran, etc.
Hmmm. Here's a corresponding C program
http://static.userland.com/images/stanleydaily/carithmeticprogram.gif
and here's its output
http://static.userland.com/images/stanleydaily/carithmetic.gif
> See
>
> http://www.python.org/cgi-bin/moinmoin/RepresentationError
>
> for details.
Thanks for the link.
> If you need exact decimal arithmetic, and can afford the expense of
> simulating decimal arithmetic, your best bet for now in Python is to use the
> FixedPoint class at
>
> ftp://ftp.python.org/pub/python/contrib-09-Dec-1999/DataStructures/FixedPoint.py
>
Thanks again for the link.
> if-you-want-exact-buy-a-one-dollar-hand-calculator
<hehehehe>
stan
try using "%.17g" (show all significant digits) instead of "%f" (fudge it)
...or use "print" in Python, to get str() instead of repr() behaviour:
>>> 0.1
0.10000000000000001
>>> print 0.1
0.1
Cheers /F
> try using "%.17g" (show all significant digits) instead of "%f" (fudge it)
Aha ! Python defaults to Truth !
Verifying: Here's my C output using %.17g:
http://static.userland.com/images/stanleydaily/carithmetic17g.gif
The inaccuracies exist 15 or so places to the
right of the decimal point.
In the Python test
http://static.userland.com/images/stanleydaily/pythonarithmetic.gif
the inaccuracies also exist 15 or so places to the right of the decimal
point. Interestingly, in this particular test case, because the answers run
beneath the ideal, the inaccuracies propagate their visibility to just 2 or
3 places to the right of the decimal point.
> ...or use "print" in Python, to get str() instead of repr() behaviour:
Thanks for the tip !
Stan
just newbying along ....
[Stanley Krute]
> Aha ! Python defaults to Truth !
Nope: repr(float) gives a closer *approximation* to machine truth than does
str(float), and just *enough* of the truth so that eval(repr(x)) == x. In
general, you cannot rely on eval(str(x)) == x (in fact, it's unusual to get
the same float back if you go thru str() conversion first). If you wanted
the full truth, then e.g. you would get this instead:
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
>>>
That very long decimal number is the exact value of the binary approximation
to 0.1 stored in the machine by your HW -- and assuming that your platform C
library does best-possible conversion of the string "0.1" to an IEEE-754
double precision number to begin with.
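A short check of the round-trip claim, in present-day Python (the 12-digit format below stands in for the old str() precision):

```python
# eval(repr(x)) == x holds because repr() keeps enough digits;
# a 12-digit conversion (old str() precision) generally does not.
x = 0.1 + 0.2               # stored as 0.30000000000000004...
print(eval(repr(x)) == x)   # True
s = "%.12g" % x             # '0.3' -- the 12-digit rendering
print(float(s) == x)        # False: the original float is lost
```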
> ...
> The inaccuracies exist 15 or so places to the right of the decimal
> point.
In general, since there are 53 bits of precision in an IEEE-double, the tiniest
errors creep into the 53rd significant bit. An error of 1 in 2**53 is an
error of 1 in 10**x, for *some* value of x. A little math reveals x =
log10(2**53) = 53 * log10(2)
>>> import math
>>> 53 * math.log10(2)
15.954589770191003
>>>
So when viewed as decimal again, even the tiniest error will show up in the
15th or 16th significant decimal digit. Since that's what you've already
observed, you have reason to believe it <wink>.
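Tim's 53-bit arithmetic can be cross-checked against the float metadata modern Python exposes (sys.float_info did not exist when this was written; it arrived in Python 2.6):

```python
import math
import sys

# An IEEE-754 double carries 53 bits of mantissa, so adjacent
# doubles near 1.0 differ by 2**-52 (machine epsilon).
print(sys.float_info.mant_dig)               # 53
print(sys.float_info.epsilon == 2.0 ** -52)  # True
# Which corresponds to about 15.95 significant decimal digits:
print(53 * math.log10(2))                    # 15.954589770191003
```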
it's-not-insane-but-it-is-maddening-ly y'rs - tim
Thanks, Michael, for the most-excellent link !
Stan