
123.3 + 0.1 is 123.3999999999 ?


A Puzzled User

May 15, 2003, 12:14:13 PM
In Python 2.2.2

>>> float("123.4")+0.1
123.5

>>> float("123.3")+0.1
123.39999999999999

>>> float("123.1") + 1
124.09999999999999

how come there are these inaccuracies?


--------------------------------
P.S.
I just tried in Perl

print eval("123.3") + 0.1;

and it gives
123.4

S. Francis

May 15, 2003, 12:58:18 PM
> >>> float("123.1") + 1
> 124.09999999999999
>
> how come there are these inaccuracies?

Floating-point calculations are inexact by nature: e.g. ``.1''
looks very `round' within a base-10 context, but not when the
base switches to 2, where everything must be built from
1/2+1/4+1/8+1/16+1/32...

>>> reduce(lambda x,y:x+y, [2**-p for p in range(1, 21)])
0.99999904632568359

>>> .1
0.10000000000000001

It is simply a matter of intuitive expectations not matching binary
arithmetic.

Simon Brunning

May 15, 2003, 12:24:02 PM
> From: A Puzzled User [SMTP:kendear...@nospam.com]

> In Python 2.2.2
>
> >>> float("123.4")+0.1
> 123.5
>
> >>> float("123.3")+0.1
> 123.39999999999999
>
> >>> float("123.1") + 1
> 124.09999999999999
>
> how come there are these inaccuracies?

<http://www.brunningonline.net/simon/blog/archives/000710.html>

Cheers,
Simon Brunning
TriSystems Ltd.
sbru...@trisystems.co.uk

Fredrik Lundh

May 15, 2003, 12:38:10 PM
"An Anonymous Coward" wrote:

> In Python 2.2.2
>
> >>> float("123.4")+0.1
> 123.5
>
> >>> float("123.3")+0.1
> 123.39999999999999
>
> >>> float("123.1") + 1
> 124.09999999999999
>
> how come there are these inaccuracies?

It's the way binary floating point numbers work (if you're going
to squeeze an infinite range of numbers into a limited number of
bits, you have to cheat).

The Python tutorial has the full story:

http://www.python.org/doc/current/tut/node14.html

> I just tried in Perl
>
> print eval("123.3") + 0.1;
>
> and it gives
> 123.4

Perl lies. So does Python, if you ask it to:

>>> print eval("123.3") + 0.1
123.4
>>> print float("123.3") + 0.1
123.4

</F>


Anton Vredegoor

May 15, 2003, 1:38:25 PM
A Puzzled User <kendear...@nospam.com> wrote:

>In Python 2.2.2
>
> >>> float("123.4")+0.1
>123.5
>
> >>> float("123.3")+0.1
>123.39999999999999
>
> >>> float("123.1") + 1
>124.09999999999999
>
>how come there are these inaccuracies?

Computers work with bits internally. A bit can be either switched on
or off. If one wants to express something like the number 1 divided by
3, there's no way to perfectly represent the outcome using something
built out of 2-valued bits. If there were 3-valued bits this would be
possible, but then there would be problems representing 1 divided by
2. In fact, choosing *any* integer-valued bits, there will always be
fractional numbers that cannot be exactly represented. So until
there's some smarter way of representing numbers (qubits, variable
prime-number-valued bits?) we're going to keep having imprecise floats.
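
For example, at the prompt you can see which decimal fractions survive
the trip through binary (a quick check, illustrative only):

# Fractions whose denominator is a power of two are exact in binary;
# everything else gets rounded to the nearest 53-bit value.
for x in (0.5, 0.25, 0.125, 0.1, 0.3):
    print "%-6s -> %.20f" % (x, x)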

>
>
>--------------------------------
>P.S.
>I just tried in Perl
>
> print eval("123.3") + 0.1;
>
>and it gives
>123.4

Typing in "float("123.3")+0.1" in the interactive interpreter prints
more decimals (printing all *bits* would tell even more), but the same
command as above, but now used in Python gives this:

$ python
Python 2.3b1 (#2, Apr 26 2003, 15:09:25)
[GCC 2.95.3-5 (cygwin special)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.


>>> print eval("123.3") + 0.1
123.4
>>>

Now the question is: Can Perl give the value it *really* uses
internally? :->

Anton

Jeff Kowalczyk

May 15, 2003, 1:41:37 PM
> The Python tutorial has the full story:
> http://www.python.org/doc/current/tut/node14.html

Has the python community converged on a favored
solution for financial/currency apps? An uninformed use
of floats as currency would have obvious pitfalls.

I hope to write some order/inventory apps for a client
who uses product unit prices with variations of as little as
ten-thousandths of a cent, and unit counts of millions of items.

It would be reassuring to have the necessary float()
issues for fixed-accuracy calculation encapsulated
in a reusable class. Does a decent one already exist?

Tim Peters

May 15, 2003, 2:06:06 PM
[Jeff Kowalczyk]

> Has the python community converged on a favored
> solution for financial/currency apps? An uninformed use
> of floats as currency would have obvious pitfalls.

http://fixedpoint.sourceforge.net/

seems to be the most used at this time.

On the horizon, Python CVS's nondist/sandbox/decimal/ directory contains a
first-cut implementation of IBM's proposal for decimal arithmetic. The IBM
proposal is here:

http://www2.hursley.ibm.com/decimal/

I don't think this will make it into Python 2.3 -- although it might, if
people volunteered enough work to move it along quickly.


Fredrik Lundh

May 15, 2003, 2:10:03 PM
Jeff Kowalczyk wrote:

> Has the python community converged on a favored
> solution for financial/currency apps? An uninformed use
> of floats as currency would have obvious pitfalls.
>

> I hope to write some order/inventory apps for a client
> who uses product unit prices with variations of as little
> the ten-thousandths of a cent, and unit counts of
> millions of items.
>
> It would be reassuring to have the necessary float()
> issues for fixed-accuracy calculation encapsulated
> in a reusable class. Does a decent one already exist?

if Aahz didn't spend all his time arguing about object semantics,
he might get around to finishing his Decimal.py module:

http://starship.python.net/crew/aahz/Decimal.py

which is based on Mike Cowlishaw (of REXX fame)'s work over at:

http://www2.hursley.ibm.com/decimal/

there's also:

http://fixedpoint.sourceforge.net/

and probably others that I cannot think of right now.

</F>


Tim Peters

May 15, 2003, 2:35:14 PM
[Fredrik Lundh]

> if Aahz didn't spend all his time arguing about object semantics,
> he might get around to finish his Decimal.py module:
>
> http://starship.python.net/crew/aahz/Decimal.py
>
> which is based on Mike Cowlishaw (of REXX fame)'s work over at:
>
> http://www2.hursley.ibm.com/decimal/
>
> there's also:
>
> http://fixedpoint.sourceforge.net/
>
> and probably others that I cannot think of right now.

Time to update your notes: Eric Price took over Aahz's module, and greatly
expanded it. He says the current version passes the IBM test vectors. The
current version lives in Python CVS's nondist/sandbox/decimal directory.


Tony Meyer

May 15, 2003, 6:36:10 PM
> Computers work with bits internally. A bit can be either
> switched on or off. If one wants to express something like
> the number 1 divided by 3 there's no way to perfectly
> represent the outcome using something built out of 2-valued
> bits.

This isn't strictly true, of course. A third (or any other rational
number) can be represented by a pair of numbers (numerator, denominator)
like 1 and 3. You can build up a rational data type that will do
arithmetic and so on if you like; I'm sure that there are reasonably
good implementations out there already. I'm also pretty sure that
adding a rational data type to Python has been discussed here before.
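
For instance, a toy sketch (tuples standing in for a real class) adds
thirds exactly:

# Toy rational addition: (n1, d1) + (n2, d2) = (n1*d2 + n2*d1, d1*d2).
# No reduction step, so the result comes back unreduced.
def radd(a, b):
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

third = (1, 3)
print radd(radd(third, third), third)   # (27, 27), i.e. exactly one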

Irrational numbers (PI, e, ...) are another story, of course; other
methods are needed to deal with those.

=Tony Meyer


Erik Max Francis

May 15, 2003, 7:17:03 PM
Tony Meyer wrote:

> This isn't strictly true, of course. A third (or any other rational
> number) can be represented by a pair of numbers (numerator,
> denominator)
> like 1 and 3.

I think you mean a pair of _integers_. It's nitpicking, but it's a
rather important nitpick :-).

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ Men with empires in their purpose / And new eras in their brains
\__/ Lamya

Tony Meyer

May 15, 2003, 7:38:14 PM
> > This isn't strictly true, of course. A third (or any other rational
> > number) can be represented by a pair of numbers (numerator,
> > denominator) like 1 and 3.
>
> I think you mean a pair of _integers_. It's nitpicking, but
> it's rather important nitpick :-).

Ah, that was hidden in the meaning of 'like'. As in, 1 and 3 are
integers, so 'a pair of numbers that are integers' <wink>.

Actually, a rational number can be expressed as a pair of _rational_
numbers as well, although sooner or later you're going to have to use
only integers or run into infinity. You can use a pair of complex
numbers as well, as long as the real/imaginary parts are also
integers/rationals. In fact, for some numbers, you can even use
irrational pairs - one can be represented as PI/PI, for example.

If one was _implementing_ a rational class, of course, then yes,
integers are the way to go. (Unless there's some wacky complex stuff
going on).

=Tony Meyer


Erik Max Francis

May 15, 2003, 8:18:22 PM
Tony Meyer wrote:

> Actually, a rational number can be expressed as a pair of _rational_
> numbers as well, ...

Sure, but that's only because it's easy to decompose a valid (i.e., no
division by zero) ratio of rational numbers into a ratio of integers.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Take the slow train / To my destination
\__/ Sandra St. Victor

Isaac To

May 15, 2003, 9:48:28 PM
>>>>> "Jeff" == Jeff Kowalczyk <j...@yahoo.com> writes:

>> The Python tutorial has the full story:
>> http://www.python.org/doc/current/tut/node14.html

Jeff> Has the python community converged on a favored solution for
Jeff> financial/currency apps? An uninformed use of floats as currency
Jeff> would have obvious pitfalls.

Jeff> I hope to write some order/inventory apps for a client who uses
Jeff> product unit prices with variations of as little the
Jeff> ten-thousandths of a cent, and unit counts of millions of items.

Jeff> It would be reassuring to have the necessary float() issues for
Jeff> fixed-accuracy calculation encapsulated in a reusable class. Does
Jeff> a decent one already exist?

The type that you have mentioned is already there. It is called the
long integer.
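
That is, scale everything to the smallest unit you care about and keep
it integral. A sketch (the unit choice here is illustrative):

# Work in ten-thousandths of a cent (1/1,000,000 of a dollar);
# long integers keep every intermediate result exact.
UNIT = 10 ** 6
price = 12 * UNIT + 950000      # $12.950000 per item
total = 3500000L * price        # 3.5 million items
print total // UNIT             # 45325000 whole dollars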

Regards,
Isaac.

Isaac To

May 15, 2003, 10:07:58 PM
>>>>> "Isaac" == Isaac To <kk...@csis.hku.hk> writes:

Jeff> Has the python community converged on a favored solution for
Jeff> financial/currency apps? An uninformed use of floats as currency
Jeff> would have obvious pitfalls.

Jeff> I hope to write some order/inventory apps for a client who uses
Jeff> product unit prices with variations of as little the
Jeff> ten-thousandths of a cent, and unit counts of millions of items.

Jeff> It would be reassuring to have the necessary float() issues for
Jeff> fixed-accuracy calculation encapsulated in a reusable class. Does
Jeff> a decent one already exist?

Isaac> The type that you have mentioned is already there. It is called
Isaac> long integers.

By the way, I think even if you use float, there is no "obvious pitfall" in
financial/currency apps. Normal IEEE double-precision floating point
numbers have 16-17 significant digits of precision. It means that unless
you are doing things like adding up two numbers, one of size 1e13 (i.e.,
ten trillion) and another of size 1e-3 (and then subtracting the first), you
are pretty fine by just rounding to the nearest 1e-3. (Who the hell will
actually do that?) That you want to multiply two numbers, one as small as
1e-5 and one as large as 1e15, is not a problem either, since multiplications
do not cause much loss in precision.
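
The pathological case is easy to reproduce (numbers are illustrative):

# Add a tiny number to a huge one, then subtract the huge one again:
# the tiny part is lost to rounding.  It takes magnitudes this far
# apart to get into trouble.
print (1e13 + 1e-3) - 1e13    # roughly 0.001953125, not 0.001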

Somehow people keep yelling that they can't stand the inaccuracy of floating
points, without actually looking at how inaccurate (or actually, accurate)
they are.

Regards,
Isaac.

Asun Friere

May 16, 2003, 1:33:45 AM
an...@vredegoor.doge.nl (Anton Vredegoor) wrote in message news:<ba0jfo$7kh$1...@news.hccnet.nl>...

>
>
> Now the question is: Can Perl give the value it *really* uses
> internally? :->
>
> Anton


#!/usr/bin/perl
$x = 123;
$y = 0.1;
printf "%.14f\n" , $x+$y;

A Puzzled User

May 16, 2003, 1:40:14 AM
Isaac To wrote:
>
> Somehow people keep yelling that they can't stand the inaccuracy of floating
> points, without actually looking at how inaccurate (or actually, accurate)
> they are.
>
> Regards,
> Isaac.

I think people won't like it if 12.95 becomes 12.94999999999
and some comparison fails, such as checking if an item is more than
or equal to $12.95 and the 12.94999999999 doesn't satisfy
the >= 12.95 check. But in fact, it does -- because essentially,
it is checking against itself. 12.949999999 >= 12.95 is true!
because 12.95 is stored as 12.949999999 anyway.

Here is a test:

>>> a = 12.95
>>> a
12.949999999999999
>>> if (a >= 12.95): print "more than my budget"
...
more than my budget
>>>

I suppose there is no situation where the inaccuracy
can cause such obvious bugs? I just wonder why the
interactive command line gives us 12.949999999 while
print 12.95 will give us 12.95.... why doesn't the
command line lie also?


Erik Max Francis

May 16, 2003, 2:14:14 AM
A Puzzled User wrote:

> I suppose there is no situation that the inaccuracy
> can cause such obvious bugs? I just wonder why the
> interactive command line gives us 12.949999999 while
> print 12.95 will give us 12.95.... why doesn't the
> command line lie also?

It's str vs. repr:

>>> x = 0.1
>>> str(x)
'0.1'
>>> repr(x)
'0.10000000000000001'
>>> x
0.10000000000000001

In the interactive interpreter, it prints the repr of the value of an
expression. In this case, str gives a reasonable string representation
of the value, but repr is tasked to give the most accurate
representation possible.

Regrettably, both the str and repr of containers include the repr of
their contents, resulting in the somewhat confusing:

>>> l = [0.1, 0.2, 0.3] # these can't be expressed exactly
>>> str(l)
'[0.10000000000000001, 0.20000000000000001, 0.29999999999999999]'
>>> repr(l)
'[0.10000000000000001, 0.20000000000000001, 0.29999999999999999]'
>>> '[%s]' % ', '.join(map(str, l))
'[0.1, 0.2, 0.3]'

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Behind an able man there are always other able men.
\__/ (a Chinese proverb)

Dan Bishop

May 16, 2003, 2:16:50 AM
A Puzzled User <kendear...@nospam.com> wrote in message news:<3EC3BCCD...@nospam.com>...

> In Python 2.2.2
>
> >>> float("123.4")+0.1
> 123.5
>
> >>> float("123.3")+0.1
> 123.39999999999999
>
> >>> float("123.1") + 1
> 124.09999999999999
>
> how come there are these inaccuracies?

To make it easier to see the "problem", I will rewrite all the numbers
in the same base the computer uses, i.e., base 2.

decimal 123.4 + 0.1
= binary 1111011.0110 0110 0110 0110... + 0.0 0011 0011 0011...

decimal 123.3 + 0.1
= binary 1111011.0 1001 1001 1001... + 0.0 0011 0011 0011...

decimal 123.1 + 1
= binary 1111011.0 0011 0011 0011... + 1

Note that 1/10, 3/10, and 4/10 have repeating fraction digits in
binary. (I would have said "repeating decimals", but that gets
confusing when you aren't using base ten.) This is analogous to 1/3 =
0.333... in base 10 (but not, for example, in base 12, where 1/3 =
0.4).

The computer doesn't store an infinite number of bits, so the
repeating bits have to get rounded off at some point. In IEEE 754
double-precision (the usual underlying type of Python "float"), the
exact values the numbers get rounded to (expressed in C99 hex
literals) are

0.1 = 0x1.999999999999Ap-4
123.1 = 0x1.EC66666666666p+6
123.3 = 0x1.ED33333333333p+6
123.4 = 0x1.ED9999999999Ap+6

and the results of the additions are

123.4 + 0.1
= 0x1.ED9999999999Ap+6 + 0x1.999999999999Ap-4
# denormalize the smaller number to get the same exponent
= 0x1.ED9999999999Ap+6 + 0x0.0066666666666668p+6
= 0x1.EE00000000000668p+6
# round to 53 significant bits
= 0x1.EE00000000000p+6
= 123.5 (exactly)

By coincidence, the result is exactly equal to a "nice" decimal. But
this isn't usually the case. For example,

123.3 + 0.1
= 0x1.ED33333333333p+6 + 0x1.999999999999Ap-4
= 0x1.ED99999999999668p+6
= 0x1.ED99999999999p+6 (rounded)
= 123.3999999999999914734871708787977695465087890625

123.1 + 1
= 0x1.EC66666666666p+6 + 1
= 0x1.F066666666666p+6
= 124.099999999999994315658113919198513031005859375

Most people don't like seeing long strings of "noise" digits, so
repr(x) doesn't return the exact value stored, but a reasonably short
decimal string approximating the value. But it must have the property
that eval(repr(x)) == x.
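
A quick check of that property at the prompt (illustrative):

x = 123.3 + 0.1
print eval(repr(x)) == x    # 1: repr carries enough digits to round-trip
print eval(str(x)) == x     # 0: str's shorter form names a different double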

So why not just print "123.4" and "124.1"? For the last example,
"124.1" would be OK, because eval("124.1") is indeed equal to
0x1.F066666666666p+6. Unfortunately, eval("123.4") is
0x1.ED9999999999Ap+6, which isn't the correct value of
0x1.ED99999999999p+6.

So what should repr(123.3 + 0.1) be? Well, let's look at the next
nearest representable values that 123.3 + 0.1 needs to be
distinguished from.

0x1.ED99999999998p+6 = 123.3999999999999772626324556767940521240234375
0x1.ED99999999999p+6 = 123.3999999999999914734871708787977695465087890625
0x1.ED9999999999Ap+6 = 123.400000000000005684341886080801486968994140625

If we round to 17 significant digits, i.e.,

repr(0x1.ED99999999998p+6 ) == "123.39999999999998"
repr(0x1.ED99999999999p+6 ) == "123.39999999999999"
repr(0x1.ED9999999999Ap+6 ) == "123.40000000000001"

Then all 3 values have different reprs. It can be shown that 17
significant digits is the minimum necessary to ensure that repr(x) !=
repr(y) whenever x != y, so repr(x) for a float is defined as "%.17g"
% x (minus any trailing 0's at the end).

Therefore,

repr(123.4 + 0.1) == "123.5"
repr(123.3 + 0.1) == "123.39999999999999"
repr(123.1 + 1) == "124.09999999999999"

These are the exact results you got. Your Python interpreter is
working correctly.
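
As an aside: Python versions from 2.6 on (well after the interpreters
in this thread) can display these hex forms directly, which makes
checking the arithmetic above easier:

print (123.3).hex()           # 0x1.ed33333333333p+6
print (123.3 + 0.1).hex()     # 0x1.ed99999999999p+6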

> --------------------------------
> P.S.
> I just tried in Perl
>
> print eval("123.3") + 0.1;
>
> and it gives
> 123.4

Try

print ((eval("123.3") + 0.1) == 123.4);

(the outer parentheses matter; otherwise Perl takes the first
parenthesized expression as print's entire argument list and just
prints 123.4 again). You might be surprised.

Simon Brunning

May 16, 2003, 4:13:26 AM
> From: Isaac To [SMTP:kk...@csis.hku.hk]

> Somehow people keep yelling that they can't stand the inaccuracy of
> floating
> points, without actually looking at how inaccurate (or actually, accurate)
> they are.

Ah, but try this:

>>> sum = 0.0
>>> for i in range(10): sum += 0.1
...
>>> sum == 1
0

So, if you do any equality tests, even tiny inaccuracies can trip you up.
Using FixedPoint:

>>> import FixedPoint as fp
>>> sum = fp.FixedPoint(0.0)
>>> for i in range(10): sum += 0.1
...
>>> sum == 1
1

This works as expected.

Cheers,
Simon Brunning
TriSystems Ltd.
sbru...@trisystems.co.uk

Isaac To

May 16, 2003, 4:57:08 AM
>>>>> "A" == A Puzzled User <kendear...@nospam.com> writes:

A> I think people won't like it if 12.95 becomes 12.94999999999 and some
A> comparison fails such as checking if an item is more than or equal to
A> $12.95 and the 12.94999999999 doesn't satisfy the >= 12.95 check.
A> But in fact, it does -- because essentially, it is checking against
A> itself. 12.949999999 >= 12.95 is true! because 12.95 is stored as
A> 12.949999999 anyways.

A> Here is a test:

>>>> a = 12.95
>>>> a
A> 12.949999999999999

>>>> if (a >= 12.95): print "more than my budget"
A> more than my budget
>>>>

A> I suppose there is no situation that the inaccuracy can cause such
A> obvious bugs? I just wonder why the interactive command line gives
A> us 12.949999999 while print 12.5 will give us 12.95.... why doesn't
A> the command line lie also?

Anyone who knows how to work with floating point numbers will know that they
must compare against 12.955 in such cases. This is not new to anyone with an
elementary statistics background: when people talk about class boundaries
they are doing essentially the same thing. (So in short, never do equality
comparison with floating point.)
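
In code, the usual relaxation looks something like this (a sketch; the
right tolerance is application-specific):

# Compare floats against a tolerance instead of with ==.
def near(a, b, eps=1e-9):
    return abs(a - b) <= eps * max(abs(a), abs(b), 1.0)

total = 0.0
for i in range(10):
    total = total + 0.1
print total == 1.0      # 0 (false)
print near(total, 1.0)  # 1 (true)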

Such relaxation is always important, whether it is a financial
application or a scientific one. You can try writing a ray tracer in
whatever language you like using floating point numbers. If you don't relax
the floating point comparisons a little bit, around half of the rays you
shoot into an object will have their reflections blocked by the same object.

Of course, if one is in financial apps, he has the option to use a decimal
class instead. But I think it just hides the problem without solving it.
Sooner or later you will find that you need to perform divisions (e.g., to
convert between currencies), and at that time you have to lose accuracy
anyway.

And of course, if one doesn't like inexact arithmetic, he has the option to
use a rational class instead. But I think it just hides the problem without
solving it. Sooner or later you will find that you need to perform
logarithms (e.g., to compute present or future values), and at that time you
have to lose accuracy anyway.

Worst of all, both solutions lose time and space efficiency; and they tend
to lose accuracy *more* seriously than the binary representations once you
must lose accuracy. So in short, believe in your float, calculate the
error, relax your comparison, and that's enough for most practical purposes.
In cases where this does not solve the problem, you probably have to work on
your representation in a much more careful way than a rational or decimal
class would provide.

Regards,
Isaac.

Isaac To

May 16, 2003, 5:00:30 AM
>>>>> "Simon" == Simon Brunning <SBru...@trisystems.co.uk> writes:

>> From: Isaac To [SMTP:kk...@csis.hku.hk] Somehow people keep yelling
>> that they can't stand the inaccuracy of floating points, without
>> actually looking at how inaccurate (or actually, accurate) they are.

Simon> Ah, but try this:

>>>> sum = 0.0
>>>> for i in range(10): sum += 0.1
Simon> ...
>>>> sum == 1
Simon> 0

Simon> So, if you do any equality tests, even tiny inaccuracies can trip
Simon> you up. Using FixedPoint:

>>>> import FixedPoint as fp
>>>> sum = fp.FixedPoint(0.0)
>>>> for i in range(10): sum += 0.1
Simon> ...
>>>> sum == 1
Simon> 1

Simon> This works as expected.

But what if you divide the number by 3, multiply it by 3 again, and compare
with 1?

Regards,
Isaac.

Simon Brunning

May 16, 2003, 5:07:46 AM
> From: Isaac To [SMTP:kk...@csis.hku.hk]
> >>>>> "Simon" == Simon Brunning <SBru...@trisystems.co.uk> writes:
>
> >> From: Isaac To [SMTP:kk...@csis.hku.hk] Somehow people keep yelling
> >> that they can't stand the inaccuracy of floating points, without
> >> actually looking at how inaccurate (or actually, accurate) they
> are.
>
> Simon> Ah, but try this:
>
> >>>> sum = 0.0
> >>>> for i in range(10): sum += 0.1
> Simon> ...
> >>>> sum == 1
> Simon> 0
>
> Simon> So, if you do any equality tests, even tiny inaccuracies can
> trip
> Simon> you up. Using FixedPoint:
>
> >>>> import FixedPoint as fp
> >>>> sum = fp.FixedPoint(0.0)
> >>>> for i in range(10): sum += 0.1
> Simon> ...
> >>>> sum == 1
> Simon> 1
>
> Simon> This works as expected.
>
> But what if you divide the number by 3, multiply it by 3 again, and
> compare
> with 1?

In the context of monetary values (which is what we are talking about, I
believe), you wouldn't expect this to sum to 1.

£1 / 3 = £0.33. Multiply by 3, and you get £0.99. Accountants expect
rounding errors like this. So, in a *financial* context, floating point
does the wrong thing, and FixedPoint works as expected.

In *other* contexts, of course, FixedPoint does the wrong thing and
floating point is right.

Simon Brunning

May 16, 2003, 5:25:38 AM
> From: Isaac To [SMTP:kk...@csis.hku.hk]
> Of course, if one is in financial apps, he has the option to use a decimal
> class instead. But I think it just hide the problem without solving it.
> Sooner or later you will find that you need to perform divisions (e.g., to
> convert between currencies), and at that time you have to lose accuracy
> anyway.

It's not so much a matter of whether or not you lose accuracy at some point
- of course you do. The crucial point in financial apps is to be inaccurate
in the way that the accountants expect you to be.

Isaac To

May 16, 2003, 5:24:05 AM
>>>>> "Simon" == Simon Brunning <SBru...@trisystems.co.uk> writes:

Simon> In the context of monetary values, (which is what we are talking
Simon> about, I believe) then you wouldn't expect this to sum to 1.

Simon> £1 / 3 = £0.33. Multiply by 3, and you get £0.99. Accountants
Simon> expect rounding errors like this. So, in a *financial* context,
Simon> fixed point does the wrong thing, and FixedPoint works as
Simon> expected.

I won't say that it is a "correct" thing. Instead I'd say it is an accepted
failure. On the other hand, this is not quite as simple as what you imply.
If there are multiple number manipulations that need to be done in a single
transaction, most accountants are trained to do the calculation to the
accuracy provided by their digital calculators (typically 10 or 12 decimal
places), and round it to the nearest cent only as the last step. So even in
the context of financial computing it isn't producing expected values to
just use fixed point everywhere. Instead, one turns numbers into fixed point
only as a tool for external representation, and for auditors so that they
can verify your records without going through your internal machine
representation of numbers. All of this is done exactly the same with normal
floats.
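
A sketch of that workflow (the tax rate and helper name are made up for
illustration):

# Carry full float precision through the computation; round to cents
# only when producing the external figure.
def cents(x):
    return round(x * 100) / 100.0

subtotal = 19.99 * 3          # three items
total = subtotal * 1.0825     # 8.25% tax, say
print cents(total)            # 64.92, rounded once at the end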

Regards,
Isaac.

Grant Edwards

May 16, 2003, 9:13:24 AM
In article <mailman.105307298...@python.org>, Simon Brunning wrote:

> Ah, but try this:
>
>>>> sum = 0.0
>>>> for i in range(10): sum += 0.1
> ...
>>>> sum == 1
> 0
>
> So, if you do any equality tests, even tiny inaccuracies can
> trip you up. Using FixedPoint:
>
>>>> import FixedPoint as fp
>>>> sum = fp.FixedPoint(0.0)
>>>> for i in range(10): sum += 0.1
> ...
>>>> sum == 1
> 1
>
> This works as expected.

So did the first example. ;)

--
Grant Edwards grante Yow! Yes, but will I
at see the EASTER BUNNY in
visi.com skintight leather at an
IRON MAIDEN concert?

Christopher A. Craig

May 16, 2003, 10:20:33 AM
Isaac To <kk...@csis.hku.hk> writes:

> But what if you divide the number by 3, multiply it by 3 again, and compare
> with 1?

Use rationals:
http://sf.net/projects/pythonic and look at cRat

If you compile it for Windows, please mail me a binary so I can add it
to the downloads page. <r(1)/100 wink>

--
Christopher A. Craig <list-...@ccraig.org>
Code that does not adhere to the documentation is bad code that should
be broken, even if it is your own. -- Christian Tismer

Tim Roberts

May 17, 2003, 8:21:43 PM
A Puzzled User <kendear...@nospam.com> wrote:
>
>I think people won't like it if 12.95 becomes 12.94999999999
>and some comparison fails such as checking if an item is more than
>or equal to $12.95 and the 12.94999999999 doesn't satisfy
>the >= 12.95 check.

And that's why you have to be very careful when comparing floating point
numbers, a situation that is true in ALL languages on ALL digital
computers.

>I suppose there is no situation that the inaccuracy
>can cause such obvious bugs?

Of course there are! Floating point is a very dangerous tool when used
improperly or naïvely. It tends to lull you into thinking that all numbers
are exact, when in fact that is not the case. As long as you are careful,
things work fine.

>I just wonder why the
>interactive command line gives us 12.949999999 while
>print 12.95 will give us 12.95.... why doesn't the
>command line lie also?

Because the "print" command uses str() which ROUNDS the number to 12
decimal places, whereas the command line uses repr() gives you the "exact"
value, even if the exact value is not what you expected.
--
- Tim Roberts, ti...@probo.com
Providenza & Boekelheide, Inc.

Tim Peters

May 17, 2003, 8:53:14 PM
[A Puzzled User]

>I just wonder why the interactive command line gives us 12.949999999
> while print 12.95 will give us 12.95.... why doesn't the command line
> lie also?

[Tim Roberts]


> Because the "print" command uses str() which ROUNDS the number to 12
> decimal places, whereas the command line uses repr() gives you the
> "exact" value, even if the exact value is not what you expected.

Do note that they both lie: repr() lies less, rounding to 17 significant
decimal digits. The reasons for this are explained in a tutorial appendix:

http://www.python.org/doc/current/tut/node14.html

The exact value stored in the computer in this case is

12.949999999999999289457264239899814128875732421875

Rounding that to 17 significant decimal digits gives the

12.949999999999999

displayed at the prompt.


Anton Vredegoor

May 18, 2003, 8:18:00 PM
list-...@ccraig.org (Christopher A. Craig) wrote:

>http://sf.net/projects/pythonic and look at cRat
>
>If you compile it for Windows, please mail me a binary so I can add it
>to the downloads page. <r(1)/100 wink>

Please have a look at:

http://www.python.org/doc/FAQ.html#3.24

Anton

D.W.

Jun 2, 2003, 9:15:20 PM
Evidently, python has different rules for numbers entered in
interactive mode. Try this little snippet, or just use a float
variable. It works both as a program and in interactive mode.

x = 123.3
print x
for j in range(0, 3):
    x += 0.1
    print x

Results:
123.3
123.4
123.5
123.6

Onward through the fog.
D.W.

A Puzzled User <kendear...@nospam.com> wrote in message news:<3EC3BCCD.
309...@nospam.com>...


> In Python 2.2.2
>
> >>> float("123.4")+0.1
> 123.5
>
> >>> float("123.3")+0.1
> 123.39999999999999
>
> >>> float("123.1") + 1
> 124.09999999999999
>
> how come there are these inaccuracies?
>
>

D.W.

Jun 2, 2003, 9:36:22 PM
Good article. It is correct when it says that this happens in any
language. I always thought it was common knowledge among programmers
that "1" could be stored as 0.99999999999999999... It is also common
knowledge (although I don't know why, or even if it is true) that this
is a result of Intel's initial chip designs, i.e. from the way that
they store numbers. Hence the many commercial and some open-source
math packages that take care of this.

I don't know of any easy way to solve this except to round and convert
to a string if you are going to store the data. That way, at least,
you will be sure of the number of significant digits and therefore the
reliability.
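
For example (digits chosen to match this thread):

# Fix the significant digits at storage time, as described above.
x = 123.3 + 0.1
stored = "%.12g" % x       # twelve significant digits, like str() here
print stored               # 123.4
print float(stored) == x   # 0: the re-read value is a different double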

D.W.

>
> The Python tutorial has the full story:
>
> http://www.python.org/doc/current/tut/node14.html
>

> > I just tried in Perl
> >
> > print eval("123.3") + 0.1;
> >
> > and it gives
> > 123.4
>

> Perl lies. So does Python, if you ask it to:


>
> >>> print eval("123.3") + 0.1
> 123.4
> >>> print float("123.3") + 0.1
> 123.4
>
> </F>

Grant Edwards

Jun 2, 2003, 10:06:08 PM
In article <895e4ce2.03060...@posting.google.com>, D.W. wrote:

> Good article. It is correct when it says that this happens in any
> language. I always thought it was common knowledge among programmers
> that "1" could be stored as 0.99999999999999999...

Wha? I defy you to find a system where 1 isn't stored exactly.

The more typical problem is with the inability to exactly represent 0.1
(base-10) as a base-2 FP number.

> It is also common knowledge (although I don't know why or even if it is
> true) that this is a result from Intel's initial chip designs i.e. from the
> way that they store numbers.

Wha?

1) It's a basic problem with storing FP numbers in one base (in a finite
amount of memory) and displaying them in another. It's not solvable,
even in theory.

2) Intel just followed the IEEE floating point standard. Though Intel ought
to be thrashed for the awful IA32 architecture in general, Intel didn't
do anything particularly wrong with floating point.

> Hence the many commercial and some open-source math packages that take care
> of this.
>
> I don't know of any easy way to solve this except to round and convert
> to a string if you are going to store the data. That way, at least,
> you will be sure of the number of significant digits and therefore the
> reliability.

There are a number of solutions. Among them:

1) Store it as a rational number (fraction).

2) Store it in BCD fixed or floating point. Or, in a more general sense,
store it in any base that can exactly represent base-10 numbers (if
base-10 is what you care about).
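
A tiny demonstration of option (1), accumulating tenths exactly with
integer pairs (illustrative code, not a library):

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

# Add 1/10 ten times using exact integer arithmetic.
num, den = 0, 1
for i in range(10):
    num, den = num * 10 + den, den * 10
    g = gcd(num, den)
    num, den = num // g, den // g
print (num, den)    # (1, 1): exactly one, no drift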

--
Grant Edwards grante Yow! Yow! And then we
at could sit on the hoods of
visi.com cars at stop lights!

Erik Max Francis

Jun 2, 2003, 11:05:24 PM
Grant Edwards wrote:

> In article <895e4ce2.03060...@posting.google.com>, D.W.
> wrote:
>
> > Good article. It is correct when it says that this happens in any
> > language. I always thought it was common knowledge among
> > programmers
> > that "1" could be stored as 0.99999999999999999...
>
> Wha? I defy you to find a system where 1 isn't stored exactly.

I suspect he's being facetious, since 0.999... = 1.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Success and failure are equally disastrous.
\__/ Tennessee Williams

Erik Max Francis

Jun 2, 2003, 11:27:06 PM
"D.W." wrote:

> Evidently, python has different rules for numbers entered in
> interactive mode. Try this little snippet or just using a float
> variable. It works both as a program and in interactive mode.

It's because the interactive interpreter displays the repr of the value,
whereas print displays the str of it:

>>> x = 123.3
>>> y = x + 0.1
>>> str(y)
'123.4'
>>> repr(y)
'123.39999999999999'
>>> print y # this does the str
123.4
>>> y # this does the repr
123.39999999999999

It's not that Python handles the arithmetic differently, it's just that
it has trivially different ways of converting the numbers to strings for
display. Unfortunately, this difference causes a great deal of
confusion.

Grant Edwards

Jun 3, 2003, 12:02:36 AM
In article <3EDC1074...@alcyone.com>, Erik Max Francis wrote:
> Grant Edwards wrote:
>
>> In article <895e4ce2.03060...@posting.google.com>, D.W.
>> wrote:
>>
>> > Good article. It is correct when it says that this happens in any
>> > language. I always thought it was common knowledge among
>> > programmers
>> > that "1" could be stored as 0.99999999999999999...
>>
>> Wha? I defy you to find a system where 1 isn't stored exactly.
>
> I suspect he's being facetious, since 0.999... = 1.

Right. But it's never _stored_ as an infinite number of nines. Nor could
it be.

--
Grant Edwards grante Yow! Clear the
at laundromat!! This
visi.com whirl-o-matic just had a
nuclear meltdown!!

Erik Max Francis

Jun 3, 2003, 4:35:10 AM
Grant Edwards wrote:

> In article <3EDC1074...@alcyone.com>, Erik Max Francis wrote:
>
> > I suspect he's being facetious, since 0.999... = 1.
>
> Right. But it's never _stored_ as an infinite number of nines. Nor
> could
> it be.

Of course not, hence the facetiousness.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Sometimes there's no point in giving up.
\__/ Louis Wu

Paul Boddie

Jun 3, 2003, 9:54:46 AM
dwb...@yahoo.com (D.W.) wrote in message news:<895e4ce2.03060...@posting.google.com>...

> Evidently, python has different rules for numbers entered in
> interactive mode.

It depends on the "evidence".

> x = 123.3
> print x

# Yes, a different number (it works better for this example):
x = 123.4
print x
print repr(x)

> Results:
> 123.3

123.4
123.40000000000001

> Onward through the fog.

Consider this a lighthouse.

Paul

A. Lloyd Flanagan

Jun 3, 2003, 10:33:17 AM
Erik Max Francis <m...@alcyone.com> wrote in message news:<3EDC5DBE...@alcyone.com>...

> Grant Edwards wrote:
>
> > In article <3EDC1074...@alcyone.com>, Erik Max Francis wrote:
> >
> > > I suspect he's being facetious, since 0.999... = 1.
> >
> > Right. But it's never _stored_ as an infinite number of nines. Nor
> > could
> > it be.
>
> Of course not, hence the facetiousness.

Well, you can't store an infinite sequence, but you can store the
concept of an infinite number of nines:

from __future__ import generators   # needed on Python 2.2

def get_one():
    yield '0'
    yield '.'
    while 1:
        yield '9'

Exercise for the reader: write a program to prove that get_one()
equals one.

Steven Taschuk

Jun 3, 2003, 1:49:27 PM
Quoth A. Lloyd Flanagan:
[...]

> def get_one():
>     yield '0'
>     yield '.'
>     while 1:
>         yield '9'
>
> Exercise for the reader: write a program to prove that get_one()
> equals one.

Here's a hack for this purpose:

import copy

def generatorstate(genit):
    # broken, really; assumes no use of globals, for example
    frame = genit.gi_frame
    return copy.deepcopy((frame.f_locals, frame.f_lasti))

def period(genit):
    items = []
    states = [generatorstate(genit)]
    while True:
        items.append(genit.next())
        states.append(generatorstate(genit))
        if states[-1] in states[:-1]:
            break
    i = states.index(states[-1])
    return items[:i], items[i:]

def rational(genit):
    integerpart = int(''.join(list(iter(genit.next, '.'))))
    preamble, repeated = period(genit)
    preamble = ''.join(preamble)
    repeated = ''.join(repeated)
    repeated = Rational(int(repeated), 10**len(repeated)-1) \
               / 10**len(preamble)
    preamble = Rational(int(preamble), 10**len(preamble))
    return integerpart + preamble + repeated

Then, with a suitable Rational class (see below), we have

>>> rational(get_one())
Rational(1, 1)

as desired.

A simple implementation of a Rational class:

def gcd(a, b):
    while b:
        a, b = b, a % b
    return abs(a)

class Rational(object):
    def __init__(self, numerator, denominator=1):
        if denominator == 0:
            raise ZeroDivisionError('%r/%r' % (numerator, denominator))
        if denominator < 0:
            numerator = -numerator
            denominator = -denominator
        g = gcd(numerator, denominator)
        self.numerator = numerator//g
        self.denominator = denominator//g

    def __str__(self):
        return '%s/%s' % (self.numerator, self.denominator)

    def __repr__(self):
        return 'Rational(%r, %r)' % (self.numerator, self.denominator)

    def __add__(self, other):
        if isinstance(other, int) or isinstance(other, long):
            return self + Rational(other)
        elif isinstance(other, Rational):
            return Rational(self.numerator*other.denominator
                            + other.numerator*self.denominator,
                            self.denominator*other.denominator)
        else:
            return NotImplemented
    __radd__ = __add__

    def __mul__(self, other):
        if isinstance(other, int) or isinstance(other, long):
            return self * Rational(other)
        elif isinstance(other, Rational):
            return Rational(self.numerator * other.numerator,
                            self.denominator * other.denominator)
        else:
            return NotImplemented
    __rmul__ = __mul__

    def invert(self):
        return Rational(self.denominator, self.numerator)

    def __truediv__(self, other):
        if isinstance(other, int) or isinstance(other, long):
            return self / Rational(other)
        elif isinstance(other, Rational):
            return self * other.invert()
        else:
            return NotImplemented
    __div__ = __truediv__

    def __rtruediv__(self, other):
        if isinstance(other, int) or isinstance(other, long):
            return Rational(other) / self
        else:
            return NotImplemented
    __rdiv__ = __rtruediv__

--
Steven Taschuk stas...@telusplanet.net
"I tried to be pleasant and accommodating, but my head
began to hurt from his banality." -- _Seven_ (1996)

Tim Rowe

Jun 4, 2003, 7:31:18 PM
On Mon, 02 Jun 2003 20:05:24 -0700, Erik Max Francis <m...@alcyone.com>
wrote:

>I suspect he's being facetious, since 0.999... = 1.

... usually :-)

(It's true in the definition of real numbers mathematicians /usually/
use, but there are alternative definitions available -- and used --
when it's convenient to make the distinction)

Erik Max Francis

Jun 4, 2003, 8:04:57 PM
Tim Rowe wrote:

> (It's true in the definition of real numbers mathematicians /usually/
> use, but there are alternative definitions available -- and used --
> when it's convenient to make the distinction)

If you're not using the normal definitions of real numbers, you need to
explicitly say so or no one's going to understand what you're getting at
... :-)

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ If you think you're free, there's no escape possible.
\__/ Baba Ram Dass

Andrew Dalke

Jun 4, 2003, 10:06:21 PM
Erik Max Francis:

> >I suspect he's being facetious, since 0.999... = 1.

Tim Rowe:


> ... usually :-)
>
> (It's true in the definition of real numbers mathematicians /usually/
> use, but there are alternative definitions available -- and used --
> when it's convenient to make the distinction)

I seek enlightenment.

I remember back in high school reading a proof of Cantor's diagonalization,
which showed that cardinality(Z+) != cardinality(reals). It was the table
description, which looked like

1 | 0.01312314553754698372...
2 | 0.87348204895729483218...
3 | 0.74195034785983576422..
...

then take the diagonal to get a new number, which is 0.071...
add one to each digit to get 0.182... and hence a construction
of a number which is not matched to a Z+.

Only years later did I learn that that's an approximation, in that
if the diagonal happens to have 888888888.... then the number
generated is 99999999.... which may construct a number which
isn't in [0, 1).

I only recently read that Cantor's original proof was much more
elegant than this, using the intersection of an infinite number of
closed segments instead of the actual digit representation, and
showing that the intersection must be non-empty, hence more
reals than integers. (Elegant because I don't like proofs which
depend on the representation of a number - I would say axiom
of choice, but I was more of an analysis person, and not an
algebra weenie ;)

Anyway, so I wish to know which definition of the reals you
refer to, in the sense that I thought all the definitions used (like
continued fractions) were equivalent to the common notation
where "0.99999...." does equal 1.

"Die ganze Zahl schuf der liebe Gott, alles Ubrige is Menschenwerk."
(God made the integers, all else is the work of man.)
But then again, Kronecker didn't like Cantor :)

Andrew
da...@dalkescientific.com


Erik Max Francis

Jun 4, 2003, 10:17:46 PM
Andrew Dalke wrote:

> Anyway, so I wish to know which definition of the reals you
> refer to, in the sense that I though all the definitions used (like
> continued fractions) were equivalent to the common notation
> where "0.99999...." does equal 1.

There are all sorts of alternative definitions of the reals which have
differing properties than the reals we've all come to know and love.
They usually fall under the general category of "non-standard analysis."
I'm not familiar with the particular alternative that the original
poster mentioned, but non-standard analysis can be employed to get all
sorts of weird things, where you have for instance infinities that are
elements of the (non-standard) reals, often called "projected reals"
(and there are a few different ways of doing it). Same thing's probably
going on here.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Said it? yep / Regret it? nope
\__/ Ice Cube

Moshe Zadka

Jun 4, 2003, 11:51:55 PM
On Wed, 04 Jun 2003, Erik Max Francis <m...@alcyone.com> wrote:

> There are all sorts of alternative definitions of the reals which have
> differing properties than the reals we've all come to know and love.
> They usually fall under the general category of "non-standard analysis."

In non-standard analysis, 0.9999..... has *no* meaning if you just translate
the usual definition of limit from the real numbers. This shouldn't be
surprising, as limits are not a first-order property. In general, since
being a least upper bound is not a first-order property, there are no
least upper bounds in non-standard analysis. And again, not surprising --
it is easy to prove that any ordered field with least upper bounds is
the standard reals. Only when we restrict ourselves to first-order
sentences can the non-standard reals simulate the reals.

(Oh, of course, if you translate the definition of limits blindly
*enough*, that is to allow the sequence to range over the non-standard
integers too, then 0.999....=1 again. Although the *name* 0.99999...
is misleading, since it looks like it only ranges over the standard
integers).
--
Moshe Zadka -- http://moshez.org/
Buffy: I don't like you hanging out with someone that... short.
Riley: Yeah, a lot of young people nowadays are experimenting with shortness.
Agile Programming Language -- http://www.python.org/

Tim Rowe

Jun 5, 2003, 6:10:15 AM
On Wed, 04 Jun 2003 19:17:46 -0700, Erik Max Francis <m...@alcyone.com>
wrote:

>There are all sorts of alternative definitions of the reals which have
>differing properties than the reals we've all come to know and love.
>They usually fall under the general category of "non-standard analysis."
>I'm not familiar with the particular alternative that the original
>poster mentioned,

Yes, it's non-standard analysis I was thinking of. I don't know the
details, but I came across it when learning about fractals, in which
0.9999... and 1.000... might be on completely different branches of a
structure, and the axioms are chosen to keep them distinct.

Andrew Dalke

Jun 5, 2003, 6:44:27 PM
Erik Max Francis:

> There are all sorts of alternative definitions of the reals which have
> differing properties than the reals we've all come to know and love.
> They usually fall under the general category of "non-standard analysis."

Sadly, beyond a bachelor's in math I have only a lay knowledge of
math, which luckily includes Ian Stewart's excellent "The Problems of
Mathematics", p74 in my copy:

The usual list of axioms for the real numbers is second order; and it has
long been known that it has a unique model, the usual real numbers R.
This is satisfyingly tidy. However, it turns out that if the axioms are
weakened, to comprise only the first-order properties of R, then other
models exist, including some that violate (2) above. Let R* be such
a model. The upshot is a theory of non-standard analysis, initiated by
Abraham Robinson in about 1961. In non-standard analysis there are
actual infinities, actual infinitesimals. They are constants, not
Cauchy-style variables....

The (2) is
if x < 1/n for all integers n then x = 0

This is in agreement with the statement you and Moshe mentioned.
Reals are reals, and only reals. There are models related to reals
but which are not reals, and so while it may be that in some models
0.99999..... != 1, that isn't the case for R.

Andrew
da...@dalkescientific.com


Moshe Zadka

Jun 5, 2003, 8:20:02 PM
On Thu, 5 Jun 2003, "Andrew Dalke" <ada...@mindspring.com> wrote:

> This is in agreement with the statement you and Moshe mentioned.
> Reals are reals, and only reals. There are models related to reals
> but which are not reals, and so while it may be that in some models
> 0.99999..... != 1, that isn't the case for R.

Again, I reiterate that if you move to non-standard analysis, you want
to be careful about what you mean by 0.999...
The usual definition is lim (1-0.1**n) as n tends to infinity. But
which n would we be talking about? If you're talking about standard
natural numbers, then a limit does not exist (the property of increasing
sequences to have a limit is second order). If you're talking about
non-standard natural numbers, it's 1 (but the notation 0.9999...
is misleading).

Greg Ewing (using news.cis.dfn.de)

Jun 8, 2003, 9:45:37 PM
Andrew Dalke wrote:
> then take the diagonal to get a new number, which is 0.071...
> add one to each digit to get 0.182... and hence a construction
> of a number which is not matched to a Z+.
>
> Only years later did I learn that that's an approximation, in that
> if the diagonal happens to have 888888888.... then the number
> generated is 99999999.... which may construct a number which
> isn't in [0, 1).

I don't think there has to be any particular system to how you
modify the digits, you just have to choose a different digit at
each position from what was there before. So you can easily
make sure the number stays between 0 and 1.
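
Concretely (an illustrative rule, not from the proof itself): pick a
replacement digit that differs from the diagonal digit and is never 0
or 9, and the constructed number stays strictly between 0 and 1 with a
unique expansion:

def new_digit(d):
    # Differ from d while avoiding 0 and 9, dodging the
    # dual-representation problem (0.999... == 1.000...).
    if d == '5':
        return '4'
    return '5'

rows = ["01312314", "87348204", "74195034"]   # digits from the table above
diag = [rows[i][i] for i in range(len(rows))]
print ''.join(map(new_digit, diag))           # '555' differs from every row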

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg
