
Decimal arithmetic, was Re: Python GUI app to impress the boss?


sism...@hebmex.com

Sep 25, 2002, 10:00:48 AM
> >
> > There ought to be severe penalties for idiots that use
> > floating point dollars for financial applications.
> > If forced to use floating point (e.g. because customer
> > demands BASIC), then keep money amounts in whole pennies
> > (or whatever the smallest currency unit for the country
> > is), and divide by 100 (or whatever) for printing only (or
> > just add the decimal point yourself).
> >
>
> In many circumstances this is exactly what you can't do.
> The minute you need to calculate percentages you will get into
> things like .0001254 of a cent, no matter what your smallest
> unit of currency is.
>
> I worked for an insurance company (using Business Basic) many
> years ago on MAI and Prime gear, and we did everything as floats
> with 14 places after the decimal point, and only rounded when
> a human needed to see a number, and then used standard accounting
> practices for rounding.
>
> Rgds
>
> Tim
>

OK, so don't use whole pennies (or smallest currency) as your
base, use 10,000ths of a penny, but still use integer arithmetic
for your calculations.
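A minimal sketch of this scheme (names and helpers are mine, not from the thread, and negative amounts are ignored for brevity): hold every amount as an integer count of ten-thousandths of a cent, do all arithmetic on those integers, and only round to whole cents when printing.

```python
SCALE = 1_000_000  # ten-thousandths of a cent per dollar (100 * 10_000)

def dollars_to_units(s):
    """Parse a dollar string like '0.70' into integer units, no floats."""
    whole, _, frac = s.partition(".")
    return int(whole) * SCALE + int((frac + "000000")[:6])

def units_to_display(u):
    """Round to whole cents (half up) for printing only."""
    cents = (u * 100 + SCALE // 2) // SCALE
    return "%d.%02d" % divmod(cents, 100)

price = dollars_to_units("0.70")
tax = price * 5 // 100              # 5% tax, still pure integer math
print(units_to_display(tax))        # 0.04, where naive floats print 0.03
```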

Floating point has too many rounding errors to be trustworthy;
or, looking at it from another angle, floating-point calculations
cannot be 100% accurate because of the way it's implemented.

-gustavo



Delaney, Timothy

Sep 25, 2002, 7:50:41 PM
> From: dan...@yahoo.com [mailto:dan...@yahoo.com]
>
> "Stuart D. Gathman" <stu...@bmsi.com> wrote in message
> news:<amo0aq$5nf$1...@nntp2-cm.news.eni.net>...
> >...

> > There ought to be severe penalties for idiots that use
> floating point
> > dollars for financial applications. If forced to use floating point
> > (e.g. because customer demands BASIC), then keep money amounts in
> > whole pennies (or whatever the smallest currency unit for
> the country
> > is), and divide by 100 (or whatever) for printing only (or
> just add the
> > decimal point yourself).
>
> That's exactly how things were done at my old workplace, except that
> cents were also used for user input. Most of the time.

It's called Fixed Point arithmetic, and strangely enough, Tim Peters wrote
an excellent implementation ...

Tim Delaney

Steve Holden

Sep 29, 2002, 7:00:34 AM
<sism...@hebmex.com> wrote...
> >
[ ... ]

> OK, so don't use whole pennies (or smallest currency) as your
> base, use 10,000ths of a penny, but still use integer arithmetic
> for your calculations.
>
> Floating point has too many rounding errors to be trustworthy;
> or, looking at it from another angle, floating-point calculations
> cannot be 100% accurate because of the way it's implemented.

"...has too many errors to be trustworthy..." is clearly an overstatement
given the huge number of computations each day which use floating-point
without any apparent problem. Floating-point got us to the moon and back,
surely it can handle a few money computations.

The real problem is that you have to grok the nature of floating-point to
use it well and appropriately. This is not a simple task (and you only have
to look at the timbot to realise that it can have disastrous consequences
;-). A typical modern double (64-bit floating-point number) will give you an
effective 53 bits of fraction, which equates to over 14 significant figures.
Since you always need two digits after the decimal point you can work with
amounts up to 999,999,999,999.99 without loss of accuracy as long as you
know what you are doing. Appropriate rounding before storage will avoid
accumulation of rounding errors.
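A hedged illustration of "appropriate rounding before storage" (my sketch, not from the post): keep using floats, but round every stored amount to a whole number of cents, so tiny binary errors never get a chance to accumulate. Note that Python's `round()` breaks exact ties to even; real money code would choose its rounding rule explicitly.

```python
def store(amount):
    """Round a float dollar amount to the nearest cent before storing."""
    return round(amount * 100) / 100

balance = 0.0
for _ in range(10_000):
    balance = store(balance + 0.10)  # add ten cents, re-round each time

print(balance)  # 1000.0 -- no drift after ten thousand additions
```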

Of course, none of this is necessarily an argument that f-p is the *best*
representation for money, but I'm always amused to see people arguing
against it on general grounds when it's perfectly acceptable for many such
purposes *when correctly used*.


regards
-----------------------------------------------------------------------
Steve Holden http://www.holdenweb.com/
Python Web Programming http://pydish.holdenweb.com/pwp/
Previous .sig file retired to www.homeforoldsigs.com
-----------------------------------------------------------------------

Christopher Browne

Sep 29, 2002, 1:06:53 PM
Quoth "Steve Holden" <sho...@holdenweb.com>:

> <sism...@hebmex.com> wrote...
>> >
> [ ... ]
> OK, so don't use whole pennies (or smallest currency) as your
> base, use 10,000ths of a penny, but still use integer arithmetic
> for your calculations.
>
> Floating point has too many rounding errors to be trustworthy;
> or, looking at it from another angle, floating-point calculations
> cannot be 100% accurate because of the way it's implemented.

The problem with FP is that FP involves the use of binary
approximations, which commonly cannot exactly represent decimal
values.

Notably, the decimal fraction 0.1, which is /exactly/ 1/10, does not
have any exact encoding in binary. Its representation in binary is a
repeating binary fraction.
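This is easy to see from Python itself (using the standard `fractions` module to expose the exact stored value):

```python
from fractions import Fraction

# The float literal 0.1 is really the nearest binary fraction to 1/10;
# Fraction() shows the exact value the hardware stores.
print(Fraction(0.1))                     # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(1, 10))  # False
print(0.1 + 0.2 == 0.3)                  # False: the approximations compound
```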

When the (not unreasonable!) expectation that computers ought to be
able to deal /exactly/ with plain simple decimal values turns out to
be wrong, that ought to lead people to think.

But all too often it doesn't, as we get the typical reaction "Well,
IEEE FP values can be sufficiently exact for this range of numbers;
just use them right!"

There's thirty years of experience now of programmers being able to
use COBOL and PL/1 to manipulate decimal values without having to go
into the numerical analysis effort to "just use FP right!"

The "FP way" is that once in a while, you've got to insert operations
to round things appropriately. The "old school" folks can
legitimately gripe that they had decimal types that never required any
of that nonsense, and that they had those types before many of the
modern developers were even born.
--
(concatenate 'string "aa454" "@freenet.carleton.ca")
http://cbbrowne.com/info/linuxxian.html
Rules of the Evil Overlord #195. "I will not use hostages as bait in a
trap. Unless you're going to use them for negotiation or as human
shields, there's no point in taking them."
<http://www.eviloverlord.com/>

Bengt Richter

Sep 29, 2002, 4:53:17 PM
On 29 Sep 2002 17:06:53 GMT, Christopher Browne <cbbr...@acm.org> wrote:

>Quoth "Steve Holden" <sho...@holdenweb.com>:
>> <sism...@hebmex.com> wrote...
>>> >
>> [ ... ]
>> OK, so don't use whole pennies (or smallest currency) as your
>> base, use 10,000ths of a penny, but still use integer arithmetic
>> for your calculations.
>>
>> Floating point has too many rounding errors to be trustworthy;
>> or, looking at it from another angle, floating-point calculations
>> cannot be 100% accurate because of the way it's implemented.

Floating point IS EXACT over a large range of values and operations.
The problem is that people who don't understand exactly what is happening
want to be safe (fine) and substitute FUD or downright misinformation
(not fine) for fact in talking about it.


>
>The problem with FP is that FP involves the use of binary
>approximations, which commonly cannot exactly represent decimal
>values.
>
>Notably, the decimal fraction 0.1, which is /exactly/ 1/10, does not
>have any exact encoding in binary. Its representation in binary is a
>repeating binary fraction.

How is that a significantly different problem from repeating decimal
fractions? (E.g., 1/3. or 1/7. or wxyz/9999. which will give you 0.wxyzwxyz...?)


>
>When the (not unreasonable!) expectation that computers ought to be
>able to deal /exactly/ with plain simple decimal values turns out to
>be wrong, that ought to lead people to think.
>
>But all too often it doesn't, as we get the typical reaction "Well,
>IEEE FP values can be sufficiently exact for this range of numbers;
>just use them right!"
>
>There's thirty years of experience now of programmers being able to
>use COBOL and PL/1 to manipulate decimal values without having to go
>into the numerical analysis effort to "just use FP right!"

There's thirty years of experience now of programmers being able to
use cars and trains to manipulate bodily location values without having to go
into the numerical analysis effort to "just use diesel compression ignition right!"

Many/most of them don't understand the details of the mechanisms they depend on,
and don't care whether they're using gas/sparks or steam/coal or diesel underneath.
Thirty years of commuting doesn't automatically give the commuter insight into
engine technology.

>
>The "FP way" is that once in a while, you've got to insert operations
>to round things appropriately. The "old school" folks can
>legitimately gripe that they had decimal types that never required any
>of that nonsense, and that they had those types before many of the
>modern developers were even born.

I'm curious what those "old school" folks thought their programs did when
they had to divide or multiply, and how they conceived of the general rules
behind the results they got ;-)

IMO the big point re use of any kind of arithmetic in a program is that the
definitive rules come from the customer's requirements (whatever the process
of determining those ;-), and should be recorded as an unambiguous specification.

Then it's either implemented correctly or not, and it doesn't matter if you are
internally manipulating decimals with digits represented in bi-quinary
or using floating point, other than optimization or real time issues (ok
possibly also understandability issues for the lucky maintainers ;-)

A customer who wants to convert between European currencies according to law
will probably have references to those laws in his spec. A customer who wants to
control machinery or compress music is going to give you other rules to get right
(not unlikely among them to watch out for patent and copyright law ;-)

Regards,
Bengt Richter

Chris Gonnerman

Sep 29, 2002, 6:33:57 PM
----- Original Message -----
From: "Bengt Richter" <bo...@oz.net>


> On 29 Sep 2002 17:06:53 GMT, Christopher Browne <cbbr...@acm.org> wrote:
>
> >The problem with FP is that FP involves the use of binary
> >approximations, which commonly cannot exactly represent
> >decimal values.
> >
> >Notably, the decimal fraction 0.1, which is /exactly/ 1/10,
> >does not have any exact encoding in binary. Its
> >representation in binary is a repeating binary fraction.
>
> How is that a significantly different problem from repeating
> decimal fractions? (E.g., 1/3. or 1/7. or wxyz/9999. which
> will give you 0.wxyzwxyz...?)

Humans think in decimal. To a human, 1.0/10.0 == 0.1, not
0.10000000000000001. OK, so this *rounds* to 0.10 at two
decimal places, but what about:

>>> .70 * .05
0.034999999999999996

Rounded to two decimals, that's 0.03; but done in proper
decimal numbers, the answer should be 0.035, rounded to 0.04
using either "classic" or "banker's" decimal rounding.

If you are calculating sales tax, for instance, this is
unacceptable.

(I borrowed this example from memory from a previous poster,
and if I could remember who it was I'd give credit...)

You ask how it is a different problem. Simple. A human,
trained in decimal math in grade school, knows that

1/3 = 0.3333333...

and would *expect* it to round to 0.33 (still in two decimal
places). The problem with *binary* floating point is the
"surprise factor."

So, Bengt, how do I do this right using only floats? I can't.
I have to have decimal arithmetic to get the right answer.

Chris Gonnerman -- chris.g...@newcenturycomputers.net
http://newcenturycomputers.net

Paul Rubin

Sep 29, 2002, 7:14:20 PM
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> >>> .70 * .05
> 0.034999999999999996
>
> Rounded to two decimals, that's 0.03; but done in proper
> decimal numbers, the answer should be 0.035, rounded to 0.04
> using either "classic" or "banker's" decimal rounding.
> ...

> So, Bengt, how do I do this right using only floats? I can't.
> I have to have decimal arithmetic to get the right answer.

How about just adding a small amount before rounding at the end?

>>> (.70 * .05) + .000000001

rounds to the right thing.
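Spelled out (a demonstration of this one case, not a proof, as the thread goes on to debate): the product sits just below the true 0.035, so a tiny upward nudge makes `'%.2f'` round it the way pen-and-paper arithmetic would.

```python
# 0.70 * 0.05 computes to 0.034999999999999996 in binary floating point.
print('%.2f' % (.70 * .05))                 # 0.03 -- the surprising answer
print('%.2f' % ((.70 * .05) + .000000001))  # 0.04 -- after the nudge
```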

Magnus Lyckå

Sep 29, 2002, 6:58:33 PM
Steve Holden wrote:
> Of course, none of this is necessarily an argument that f-p is the *best*
> representation for money, but I'm always amused to see people arguing
> against it on general grounds when it's perfectly acceptable for many such
> purposes *when correctly used*.

My opinion is that Python has a lot of the features
required to become "the next COBOL".

Yes, I admit it sounds horrible, I hate COBOL as much
as anybody ;) but try to understand what I mean.

Python has the potential to be used to quickly and in
a practical way build millions of applications and tools
by people who aren't always from a CS, math or engineering
background.

Python also has the potential to become the next BASIC--a
simple language for education, autodidacts and power-users.

It can also become the next Pascal, but I don't think we
need to change any language features for that.

But to suit beginners and...well, amateurs...we need to
follow the principle of least surprise.

It IS surprising for a lot of people that
>>> 4/5
0.80000000000000004

You know that Steve, you see it here on c.l.py now and then.

It IS surprising for some people that for some numeric
values of 'w':
>>> 1+w+w+w+w == w+w+w+w+1
0

And
>>> 4/5
0

was certainly surprising for many, and that was
changed. The other big surprise was obviously

>>> w = 5
>>> print W
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
NameError: name 'W' is not defined

but that, I think, is not so difficult to explain:
We don't consider w and W to be the same thing.
It's a design decision, and it could have been the
other way. Guido chose this way, just like he chose
= for assignment and == for equality, instead of
:= and = as in Pascal. End of story.

But we can't explain why floating point arithmetic
works the way it does without going into details that the
groups I refer to feel they shouldn't have to
understand.

It's a bit like demanding that a car driver understands
the physics and chemistry of combustion engines. You
can't sell a lot of cars if you need a degree in science
to drive them...

Or, to put it the other way around, you can sell a lot
more cars if people can drive them without a B.Sc, and
you can "sell" Python to a lot more people if they don't
need a B.Sc. to understand it...

On the other hand, I feel that when you make the tool
so simple that it can be handled by any fool, any fool
will be put in charge of the tool, and regardless of the
quality of the tool, they will make a mess because they
lack the required thinking ability.

In other words, we who actually understand computers and
problem solving will have to clean things up. But
a) I'd rather fix real logical problems in the business
sphere than correct trivial programming blunders, and
b) We shouldn't make life more difficult than needed for
those who are clever enough, but just don't have the
expertise in our field. Finally,
c) we don't think that pen and paper ought to be difficult
to use, so that people need to learn a lot of things
unrelated to the ideas they want to put in writing.

Maybe this is not such a big issue as I sometimes think it
is, but I do feel that Python is way ahead of the
competition from a pedagogic perspective in most cases,
but this is one where it's...average.

Perhaps the right trade-off is still to educate the
programmers, and to build tools or routines to prevent
problems. But I think the issue is worth discussing.

It seems to me that a really smooth handling of dates,
times and decimal numbers would make Python even more
useful for most people who don't program today, but
could benefit from it if they did...

--
Magnus Lycka, Thinkware AB
Alvans vag 99, SE-907 50 UMEA, SWEDEN
phone: int+46 70 582 80 65, fax: int+46 70 612 80 65
http://www.thinkware.se/ mailto:mag...@thinkware.se

Chris Gonnerman

Sep 29, 2002, 10:19:10 PM

Cool. Now PROVE that's right in all cases.

Like I said, decimal arithmetic is STILL the only choice.

Paul Rubin

Sep 29, 2002, 11:19:33 PM
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> > >>> (.70 * .05) + .000000001
> >
> > rounds to the right thing.
>
> Cool. Now PROVE that's right in all cases.

What do you mean by "right"? What do you mean by "all cases"?

> Like I said, decimal arithmetic is STILL the only choice.

I'm not convinced decimal arithmetic is "right in all cases".

Chris Gonnerman

Sep 29, 2002, 11:42:52 PM
----- Original Message -----
From: "Paul Rubin" <phr-n...@NOSPAMnightsong.com>

> "Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> > > >>> (.70 * .05) + .000000001
> > >
> > > rounds to the right thing.
> >
> > Cool. Now PROVE that's right in all cases.
>
> What do you mean by "right"? What do you mean by "all
> cases"?

The original subject is *business* arithmetic. There is no
choice but to use decimal arithmetic if you want your math
results to match good-old-fashioned pen and paper math.

That defines "right." "All cases" refers to all numeric
domains common to *business* math. The exact number of
decimal places and the rounding rule (round-half-up vs. "banker's
rounding," which rounds an exact 0.5 to the nearest even digit)
will vary from nation to nation and sometimes from industry
to industry, but the expectation of reliable rounding results
is still the same.

Sorry not to have defined the terms; if you are following
the discussion I would expect you to know this. If we were
talking about tracking satellites or computing the mass of
the electron I'd defer to the experts (those who work in
those fields) regarding the best sort of computer math to
use. When it comes to business, though, I have experience.

As another poster noted, it really doesn't matter if *we*
think that results such as the above are "good enough," as
our bosses/customers/legislators/users control the definition
of "good enough."

Most of them can't stand to mislay a penny.

I've never worked for a boss who wasn't human (although I
have wondered sometimes if I was working for a weasel in
human clothing).

Dennis Lee Bieber

Sep 29, 2002, 11:17:02 PM
Chris Gonnerman fed this fish to the penguins on Sunday 29 September
2002 03:33 pm:


> You ask how it is a different problem. Simple. A human,
> trained in decimal math in grade school, knows that
>
> 1/3 = 0.3333333...
>
> and would *expect* it to round to 0.33 (still in two decimal

So, in 2 decimal place arithmetic

a = 1.00 / 3.00
b = a * 3.00

gives
b = 0.99

which is NOT equal to 1... But this is okay because it's done in
"decimal arithmetic"? <G>

The telling phrase is "training in DECIMAL math"... Most computers
don't work in decimal math, so the user /should/ be ready to learn the
ins&outs of binary floating point...

For every example of "what is wrong with binary floating point"
someone else can find a similar case when using decimal math. Even VB's
"currency" type (which looks to be a scaled 64-bit integer) carries 4
decimal places -- which allows for "proper" results when displaying
final results in two decimal places with rounding.

1/3 => 0.3333 internally
3*0.3333 => 0.9999
1.00 for display after rounding.
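A toy sketch of such a scaled-integer currency type (my own construction, not VB's actual implementation; positive amounts only), reproducing the 1/3 example above:

```python
class Currency:
    SCALE = 10_000  # four decimal places, like VB's Currency type

    def __init__(self, units):
        self.units = units  # integer count of 1/10000ths of a dollar

    @classmethod
    def from_str(cls, s):
        whole, _, frac = s.partition(".")
        return cls(int(whole) * cls.SCALE + int((frac + "0000")[:4]))

    def __mul__(self, n):
        return Currency(self.units * n)

    def __truediv__(self, n):
        # round the internal units half up
        return Currency((self.units + n // 2) // n)

    def display(self):
        cents = (self.units + 50) // 100  # round to whole cents for output
        return "%d.%02d" % divmod(cents, 100)

third = Currency.from_str("1.00") / 3   # 0.3333 internally
print((third * 3).display())            # 1.00 after rounding, as described
```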


--
> ============================================================== <
> wlf...@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG <
> wulf...@dm.net | Bestiaria Support Staff <
> ============================================================== <
> Bestiaria Home Page: http://www.beastie.dm.net/ <
> Home Page: http://www.dm.net/~wulfraed/ <

Paul Rubin

Sep 30, 2002, 12:42:21 AM
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> > What do you mean by "right"? What do you mean by "all
> > cases"?
>
> The original subject is *business* arithmetic. There is no
> choice but to use decimal arithmetic if you want your math
> results to match good-old-fashioned pen and paper math.

No, I'm still not convinced. Is business arithmetic well-defined
enough that you can never do the same calculation two ways and get
different results?

Example: bunches are three for a dollar. How much do you pay if you
buy nine bananas?

Answer #1:
price_per_banana = 1.00 / 3
number_of_bananas = 9
total_price = number_of_bananas * price_per_banana
print total_price

Answer #2:
price_per_bunch = 1.00 # three bananas
number_of_bunches = 3
total_price = number_of_bunches * price_per_bunch
print total_price

With floating point and rounding on output, you get the same answer
both ways.

With decimal arithmetic, you get $2.97 the first way and $3.00 the
second way.

For "business arithmetic" to always give the same answer, it has to
forbid at least one of those methods of doing the problem.
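The discrepancy is easy to reproduce with the decimal arithmetic of Python's later `decimal` module (which postdates this thread; the `cents` helper and rounding rule are my assumptions):

```python
from decimal import Decimal, ROUND_HALF_UP

def cents(d):
    """Round a Decimal to whole cents, half up."""
    return d.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Method 1: nine bananas at a third of a dollar each, priced per item.
per_banana = cents(Decimal("1.00") / 3)   # 0.33 once rounded to cents
print(9 * per_banana)                     # 2.97

# Method 2: three bunches at a dollar apiece.
print(3 * Decimal("1.00"))                # 3.00
```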

Bengt Richter

Sep 30, 2002, 12:42:39 AM

I hope that includes me ;-)


>trained in decimal math in grade school, knows that
>
> 1/3 = 0.3333333...
>
>and would *expect* it to round to 0.33 (still in two decimal
>places). The problem with *binary* floating point is the
>"surprise factor."
>

Old programmers aren't surprised by anything. They expect the unexpected ;-)

>So, Bengt, how do I do this right using only floats? I can't.
>I have to have decimal arithmetic to get the right answer.
>

It goes back to requirements. E.g.,

>>> for x in xrange(100):
... for y in xrange(100):
... fx = float('.%02d' %x)
... fy = float('.%02d' %y)
... fxy = fx*fy
... xy = x*y
... sfxy = '%.2f' % (fxy*1.000000001)
... sxy = '0.%02d' % ((xy+50)/100)
... assert sfxy == sxy
...

finishes sans problem, as a quick check on a limited problem domain. Obviously
a proper test suite and some math would be required to show validity for a
realistic problem domain.

If you can specify exactly how your decimal virtual machine works,
(i.e., how many significant figures are carried, how it rounds/truncates
intermediate values etc., and how many figures are going to be in the inputs
and outputs, then we can determine how much of a problem it will be to do it
using floating point.

The example you cited is included in the above:
>>> '%.2f' % ( .05 * .70)
'0.03'
>>> '%.2f' % (-.05 * .70)
'-0.03'

changes to

>>> '%.2f' % (( .05 * .70) * 1.000000001)
'0.04'
>>> '%.2f' % ((-.05 * .70) * 1.000000001)
'-0.04'

There is not unlimited room in the double floating point space around the
true abstract real values corresponding to decimal representations, but there is
quite a bit, unless you are carrying a lot of figures. E.g., as an
approximate indication,

>>> '%.2f' % ((.0349999999 ) * 1.0000000001)
'0.03'
>>> '%.2f' % ((.03499999999 ) * 1.0000000001)
'0.03'
>>> '%.2f' % ((.034999999999 ) * 1.0000000001)
'0.04'

and the multiplier is probably bigger than it has to be (and better specified
in binary). But that can be optimized when you specify your decimal virtual machine.

If you need rounding that alternates direction on alternate exact trailing fives,
then you need an extra step.

If you visualize the true theoretical decimal values representable in your decimal virtual
machine as little red glowing dots on the real axis, and visualize as little glowing blue dots
all the exact theoretical numbers corresponding to all valid bit combinations in a floating
point double, then if every pair of successive red dots has thousands of blue dots in between,
and the error in floating point calculations can be held to within a few blue dots, then
you can see how to select the red dot that represents the true decimal answer, even if
there is no blue dot exactly coinciding with it. Twelve decimal digits would give about a thousand
blue dots between the red ones. And a thousand times that while still in the FPU, before
storing in memory. Rounding an exact decimal is moving from one red dot to another,
perhaps skipping hundreds or thousands of dots, depending on fractional decimals carried. But
we must round by moving from some blue dot to another blue dot. By shifting the blue
uncertainty set reliably to one side of its true red dot, we can move from blue dot to
blue dot where the new blue dot will reliably be in an uncertainty set corresponding
to the proper red dot. If more blue dots are needed, they can be arranged.
If you are doing very few operations, so that the range of blue dot uncertainty that can be
accumulated is small, then you can have more red dots.

Regards,
Bengt Richter

Paul Rubin

Sep 30, 2002, 12:43:17 AM
Paul Rubin <phr-n...@NOSPAMnightsong.com> writes:
> Example: bunches are three for a dollar. How much do you pay if you
> buy nine bananas?

Whoops, meant "bananas are three for a dollar". Sorry.

Chris Gonnerman

Sep 30, 2002, 1:13:33 AM
----- Original Message -----
From: "Dennis Lee Bieber" <wlf...@ix.netcom.com>


> Chris Gonnerman fed this fish to the penguins on Sunday 29 September
> 2002 03:33 pm:
>
>
> > You ask how it is a different problem. Simple. A human,
> > trained in decimal math in grade school, knows that
> >
> > 1/3 = 0.3333333...
> >
> > and would *expect* it to round to 0.33 (still in two
> > decimal
>
> So, in 2 decimal place arithmetic
>
> a = 1.00 / 3.00
> b = a * 3.00
>
> gives
> b = 0.99
>
> which is NOT equal to 1... But this is okay because it's done
> in "decimal arithmetic"? <G>

It's not "OK"... it's EXPECTED.

> The telling phrase is "training in DECIMAL math"...
> Most computers don't work in decimal math, so the
> user /should/ be ready to learn the ins&outs of binary
> floating point...

Incorrect. The *user* expects the dang computer to produce
the same results as for a manual calculation. In other words,
truly serious businesspeople expect to be able to hand-check
the numbers. The occasional (but far too common) binary float
anomalies are unacceptable in that environment.

>
> For every example of "what is wrong with binary
> floating point" someone else can find a similar case when
> using decimal math.

True, but the decimal anomalies are *expected* and the binary
anomalies are *surprising*... I've been a programmer for
a long time, working in everything from a scientific lab to
a retail store. I still don't understand the ins and outs,
as you put it, of binary floating point.

> Even VB's "currency" type (which looks to be a scaled 64-bit
> integer) carries 4 decimal places -- which allows for
> "proper" results when displaying final results in two decimal
> places with rounding.
>
> 1/3 => 0.3333 internally
> 3*0.3333 => 0.9999
> 1.00 for display after rounding.

I hate to give anything to Microsoft, but they seem to have a
good, workable, not-often-surprising idea here.

I still prefer the FixedPoint.py module's programmer-controlled
scaling.

Chris Gonnerman

Sep 30, 2002, 1:20:40 AM
----- Original Message -----
From: "Paul Rubin" <phr-n...@NOSPAMnightsong.com>

> No, I'm still not convinced. Is business arithmetic well-defined
> enough that you can never do the same calculation two
> ways and get different results?
>
> Example: bunches are three for a dollar. How much do you pay
> if you buy nine bananas?
>
> Answer #1:
> price_per_banana = 1.00 / 3
> number_of_bananas = 9
> total_price = number_of_bananas * price_per_banana
> print total_price
>
> Answer #2:
> price_per_bunch = 1.00 # three bananas
> number_of_bunches = 3
> total_price = number_of_bunches * price_per_bunch
> print total_price
>
> With floating point and rounding on output, you get the same
> answer both ways.
>
> With decimal arithmetic, you get $2.97 the first way and
> $3.00 the second way.

Excellent example, wrong focus. The business user (in this
case, the retail store owner/manager) would define which
method, and which results, are right. The second mode is
the "canonical" way in the manual system, but lately I've
realized that many (most?) retail POS systems can do it "both"
ways, handling single items by a variation of method 1 which gives answers
consistent with method 2.

I'm glad I didn't have to write and debug that code.

> For "business arithmetic" to always give the same answer, it
> has to forbid at least one of those methods of doing the
> problem.

Except for the hack I described above, you're right. *The
business user will define the rules* regarding when and to
how much you round. You seem to think that I mean to round
to two places (as in US dollars and cents) when what I mean
is to handle numbers in (or as if in) base 10 all the time.

It isn't a question of when you round, so much as it is a
question of what *happens* when you round. Proper floating
decimal numbers would also solve the problem; it's floating
binary that causes unexpected anomalies.

The banana problem might be solved in a given POS (point of
sale) system by using four or five decimals, then rounding to
two at the line-item level. That would give correct results
for both methods above.

Paul Rubin

Sep 30, 2002, 2:36:50 AM
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> > With decimal arithmetic, you get $2.97 the first way and
> > $3.00 the second way.
>
> Excellent example, wrong focus. The business user (in this
> case, the retail store owner/manager) would define which
> method, and which results, are right. The second mode is
> the "canonical" way in the manual system, but lately I've
> realized that many (most?) retail POS systems can do it "both"
> ways, handling single items by a variation of method 1 which gives answers
> consistent with method 2.
>
> I'm glad I didn't have to write and debug that code.

I'm really skeptical that any "business user" can specify a set of
rules that gets all this stuff straight for a complex business
involving cents-off coupons, percentage advertising discounts, etc.
It sounds much more complicated than what we usually call software
implementation, and it sounds like it has to be done differently for
every business situation. They're almost certainly better off using
some method that gives consistent results at the end.

I worked on one business application where the vendor felt that any
floating point roundoff errors were unacceptable. He had me implement
exact rational arithmetic (easy in Python using the gmpy package).
All intermediate results were represented as exact integer ratios, so
you were guaranteed that 3 * (1/3) == 1. This was not the same as
decimal arithmetic. You would never get $2.97 as an answer for the
banana problem. But I don't think any customers complained.

Chris Gonnerman

Sep 30, 2002, 9:10:41 AM
----- Original Message -----
From: "Paul Rubin" <phr-n...@NOSPAMnightsong.com>

> "Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> > > With decimal arithmetic, you get $2.97 the first way and
> > > $3.00 the second way.
> >
> > Excellent example, wrong focus. The business user (in this
> > case, the retail store owner/manager) would define which
> > method, and which results, are right. The second mode is
> > the "canonical" way in the manual system, but lately I've
> > realized that many (most?) retail POS systems can do it
> > "both" ways, handling single items by a variation of method
> > 1 which gives answers consistent with method 2.
> >
> > I'm glad I didn't have to write and debug that code.
>
> I'm really skeptical that any "business user" can specify a
> set of rules that gets all this stuff straight for a complex
> business involving cents-off coupons, percentage advertising
> discounts, etc.

Usually, in the "canned" systems I've seen, those 3 for a
buck deals are supported by allowing three to five decimal
places rather than just two for amounts. Results come out
much as you are expecting there.

It seems that what you are finding fault with (IMHO) is not
decimal arithmetic, it's *fixed* decimal arithmetic. What
I am finding fault with isn't floating point arithmetic, it's
floating *binary* point arithmetic. Later today (tonight
probably) I plan to post a test module comparing the very
simple "tax" calculation I've been using as an example. My
code isn't quite clean enough to show yet, and given the
nature of this discussion I want it real clean.

However, here are some interesting results so far:

5% of $0.50 = 0.02 decimal, 0.03 float
5% of $0.70 = 0.04 decimal, 0.03 float
6% of $0.75 = 0.04 decimal, 0.05 float
9% of $0.50 = 0.04 decimal, 0.05 float

Sell a lot of fifty-cent items in a 5% tax district, and you'll
wind up with a large discrepancy in your sales tax. Of course,
it'll be long rather than short... just watch out for those
seventy-five cent items.

If you sell exactly as many $0.75 as $0.50, you're OK...

In other words, it's just not good enough.

Now, if you can point me at a *floating decimal* math library,
THAT would be COOL.
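[Editor's sketch: a floating *decimal* point library did later land in Python's standard library — the decimal module, in Python 2.4. This hedged sketch reproduces the "decimal" column of the results above, assuming banker's rounding to whole cents; the helper name `tax` is invented for illustration.]

```python
from decimal import Decimal, ROUND_HALF_EVEN

CENT = Decimal("0.01")

def tax(pct, amt):
    # Exact decimal product, then round once to whole cents.
    # ROUND_HALF_EVEN is banker's rounding, which matches the
    # "decimal" results quoted in the post above.
    return (Decimal(pct) * Decimal(amt)).quantize(CENT, rounding=ROUND_HALF_EVEN)

print(tax("0.05", "0.50"))   # 0.02
print(tax("0.05", "0.70"))   # 0.04
print(tax("0.06", "0.75"))   # 0.04
print(tax("0.09", "0.50"))   # 0.04
```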

James J. Besemer

Sep 30, 2002, 1:54:22 PM
to

Steve Holden wrote:

><sism...@hebmex.com> wrote...
>
>
>[ ... ]
>OK, so don't use whole pennies (or smallest currency) as your
>base, use 10,000ths of a penny, but still use integer arithmetic
>for your calculations.
>

Normal Integers aren't big enough. 2^31 / 10K == 215K.

--jb

--
James J. Besemer 503-280-0838 voice
2727 NE Skidmore St. 503-280-0375 fax
Portland, Oregon 97211-6557 mailto:j...@cascade-sys.com
http://cascade-sys.com

Steve Holden

Sep 30, 2002, 2:00:27 PM
to
>
> Steve Holden wrote:
>
> ><sism...@hebmex.com> wrote...
> >
> >
> >[ ... ]
> >OK, so don't use whole pennies (or smallest currency) as your
> >base, use 10,000ths of a penny, but still use integer arithmetic
> >for your calculations.
> >
> Normal Integers aren't big enough. 2^31 / 10K == 215K.
>

Please note that the remark you are responding to was made by the poster you
quote me as quoting.

sism...@hebmex.com

Sep 30, 2002, 1:59:03 PM
to

>
> Steve Holden wrote:
>
> ><sism...@hebmex.com> wrote...
> >
> >
> >[ ... ]
> >OK, so don't use whole pennies (or smallest currency) as your
> >base, use 10,000ths of a penny, but still use integer arithmetic
> >for your calculations.
> >
> Normal Integers aren't big enough. 2^31 / 10K == 215K.
>
> --jb
>
> --
> James J. Besemer 503-280-0838 voice
> 2727 NE Skidmore St. 503-280-0375 fax
> Portland, Oregon 97211-6557 mailto:j...@cascade-sys.com
> http://cascade-sys.com
>

"Integer arithmetic" != "32-bit integer arithmetic". The first
may be based on the second but they are definitely not the
same thing. Python longs are memory-limited, yet they
are also based on machine integers. Does that make them
unsuitable?
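[Editor's sketch of the point, in modern Python syntax: amounts held in ten-thousandths of a cent overflow a 32-bit integer almost immediately, but Python's arbitrary-precision integers handle them exactly. The dollar amount and tax rate here are made up for illustration.]

```python
# One unit = 1/10,000 of a cent; Python ints never overflow.
UNITS_PER_DOLLAR = 100 * 10_000

amount = 250_000 * UNITS_PER_DOLLAR      # $250,000 -- far past 2**31 units
assert amount > 2**31

tax = amount * 9 // 100                  # 9% tax, all-integer arithmetic
assert tax == 22_500 * UNITS_PER_DOLLAR  # exactly $22,500, no roundoff
```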

Chris Gonnerman

Sep 30, 2002, 10:12:00 PM
to
Ok, here it is. I have written some Python code which
displays the problems with float (floating binary)
arithmetic applied to monetary amounts. Run this, and
many of the places where the simple percentage-times-amount
calculation in floating binary falls down will become
obvious.

You need the fixedpoint.py module installed to run this.

http://fixedpoint.sourceforge.net

(code attached)

fp-test.py

James J. Besemer

Oct 1, 2002, 5:22:43 AM
to

sism...@hebmex.com wrote:

>"Integer arithmetic" != "32-bit integer arithmetic".
>

I never said it was. I made a true assertion about "normal integers,"
meaning the native machine word size on most machines.

>Python longs are memory-limited, yet they
>are also based on machine integers. Does that make them
>unsuitable?
>

I know. I never said they weren't.

But Steve said "integers", NOT "Python longs", and my only point was that
Steve's original suggestion (as stated) was not practical with most
machines' native integers.

And I'll add that if you're going to consider Longs you may as well go
with FixedPoint and do it "right". ;o)

James J. Besemer

Oct 1, 2002, 5:24:48 AM
to

Paul Rubin wrote:

>How about just adding a small amount before rounding at the end?
>
>
>
>>>>(.70 * .05) + .000000001
>>>>
>>>>
>
>rounds to the right thing.
>
>

Nope. It helps in some cases but introduces errors in other cases
(e.g., negative numbers).

Paul Rubin

Oct 1, 2002, 6:20:57 AM
to
"James J. Besemer" <j...@cascade-sys.com> writes:
> >>>>(.70 * .05) + .000000001
> >>>>
> >
> >rounds to the right thing.
> >
> Nope. It helps in some cases but introduces errors in other cases
> (e.g., negative numbers).

OK, fix the rounding direction:

def round(x):
    if x < 0: sign = -1.0
    else: sign = 1.0
    return sign * (abs(x) + .00000001)

Mark Charsley

Oct 1, 2002, 6:32:00 AM
to
In article <mailman.1033391388...@python.org>,
chris.g...@newcenturycomputers.net (Chris Gonnerman) wrote:

> Usually, in the "canned" systems I've seen, those 3 for a
> buck deals are supported by allowing three to five decimal
> places rather than just two for amounts. Results come out
> much as you are expecting there.

The system a previous company wrote said that "if they've bought 3
40p items these give them a discount of 20p". All nice integers: which,
given
a) the checkouts had a slow CPU, limited memory and no FPU
b) the FP software library was huge and slow
was a good thing.

--

Mark - personal opinion only, could well be wrong, not representing
company, don't sue us etc. etc.

Grant Edwards

Oct 1, 2002, 11:22:09 AM
to
In article <memo.20021001113235.1724A@a.a>, Mark Charsley wrote:

>> Usually, in the "canned" systems I've seen, those 3 for a
>> buck deals are supported by allowing three to five decimal
>> places rather than just two for amounts. Results come out
>> much as you are expecting there.
>
> The system a previous company wrote said that "if they've bought 3
> 40p items these give them a discount of 20p".

But, standard practice around here is that the "3 for a dollar"
price is good on quantities of less than three. (IOW the unit
price is 1/3 dollar). I have seen instances where you have to
buy 3 to get the discount but IME that's rare.

--
Grant Edwards grante Yow! MMM-MM!! So THIS is
at BIO-NEBULATION!
visi.com

Kim Petersen

Oct 1, 2002, 11:32:21 AM
to
Grant Edwards wrote:
> In article <memo.20021001113235.1724A@a.a>, Mark Charsley wrote:
>
>
>>>Usually, in the "canned" systems I've seen, those 3 for a
>>>buck deals are supported by allowing three to five decimal
>>>places rather than just two for amounts. Results come out
>>>much as you are expecting there.
>>
>>The system a previous company wrote said that "if they've bought 3
>>40p items these give them a discount of 20p".
>
>
> But, standard practice around here is that the "3 for a dollar"
> price is good on quantities of less than three. (IOW the unit
> price is 1/3 dollar). I have seen instances where you have to
> buy 3 to get the discount but IME that's rare.

(assuming IME is In My Environment) might be - but around here - All
shops (with possible exceptions (i've never seen it)) sell only
(multiple) quantities of 3 at the discount *even* if 2 will sell at a
higher price....

>


Steve Holden

Oct 1, 2002, 12:21:09 PM
to
"James J. Besemer" <j...@cascade-sys.com> wrote in message
news:mailman.1033464184...@python.org...

>
> sism...@hebmex.com wrote:
>
> >"Integer arithmetic" != "32-bit integer arithmetic".
> >
> I never said it was. I made a true assertion about "normal integers,"
> meaning the native machine word size on most machines.
>
> >Python longs are memory-limited, yet they
> >are also based on machine integers. Does that make them
> >unsuitable?
> >
> I know. I never said they weren't.
>
> But Steve said "integers' NOT "Python longs" and my only point was that
> Steve's original suggestion (as stated) was not practical with most
> machine's native integers.
>
I never said that, as I have already pointed out. It was Outlook Express'
weird quoting, which in that post I failed to correct, that made it look
like me.

Be that as it may, if that was your only point you'd do well to stop
hammering it now everyone's heard you.

> And I'll add that if you're going to consider Longs you may as well go
> with FixedPoint and do it "right". ;o)
>

Paul Boddie

Oct 1, 2002, 12:32:35 PM
to
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> wrote in message news:<mailman.1033438201...@python.org>...
> """fp-test.py -- fixedpoint versus float comparison"""

It's interesting to compare the results from FixedPoint with
ScaledDecimal and with FixedPoint objects that have increased
precision. Many issues in the supplied program can explain the
apparent difference between rounded floating point results and the
FixedPoint results. Here are some example results from an extended
version of the original program:

PCT AMT Fixed Fixed Fixed Scaled Scaled Float
2dp 4dp -> 2dp 2dp 4dp
0.01 0.50 0.00 0.0050 0.00 0.01 0.0050 0.01
0.02 0.25 0.00 0.0050 0.00 0.01 0.0050 0.01
0.05 0.10 0.00 0.0050 0.00 0.01 0.0050 0.01
0.05 0.50 0.02 0.0250 0.02 0.03 0.0250 0.03
0.05 0.70 0.04 0.0350 0.04 0.04 0.0350 0.03
0.05 0.90 0.04 0.0450 0.04 0.05 0.0450 0.05
0.06 0.75 0.04 0.0450 0.04 0.05 0.0450 0.05
0.09 0.50 0.04 0.0450 0.04 0.05 0.0450 0.05
0.10 0.05 0.00 0.0050 0.00 0.01 0.0050 0.01
0.10 0.25 0.02 0.0250 0.02 0.03 0.0250 0.03
0.10 0.35 0.04 0.0350 0.04 0.04 0.0350 0.03
0.10 0.45 0.04 0.0450 0.04 0.05 0.0450 0.05
0.10 0.65 0.06 0.0650 0.06 0.07 0.0650 0.07
0.10 0.85 0.08 0.0850 0.08 0.09 0.0850 0.09

When the floating point result is only a "wafer thin mint" below 0.35
(on this Pentium III laptop running Windows 2000), the rounded,
truncated result is given as 0.3. Such conditions could be seen to be
a flaw when attempting to highlight problems with the FixedPoint
implementation. Meanwhile, ScaledDecimal's reliance on the long
integer arithmetic in Python, as well as the automatic extension of
precision in certain operations, do seem to help it produce the
results you were *probably* expecting.

"""fp-test.py -- fixedpoint versus float comparison"""

# You need the fixedpoint.py module installed
# http://fixedpoint.sourceforge.net
# and the ScaledDecimal.py module installed
# http://www.boddie.org.uk/python/downloads/ScaledDecimal.py

from fixedpoint import FixedPoint
from ScaledDecimal import ScaledDecimal

print "PCT AMT Fixed Fixed Fixed Scaled Scaled Float"
print " 2dp 4dp -> 2dp 2dp 4dp"

# I'm checking percentages, from 1% to 10% inclusive,
# multiplied by "money" amounts from 0.01 (one cent)
# to 1.00 (a buck).

# Given appropriate precision, this problem applies
# to other monetary systems than just USA.

for percent in range(1,11):

    fixed_pct = FixedPoint(percent)
    fixed_pct /= 100
    fixed_pct_ext = FixedPoint(percent, 4)
    fixed_pct_ext /= 100
    # Think of it as 1..11 * (10 ** -2)
    scaled_pct = ScaledDecimal(percent, -2)
    float_pct = percent / 100.0

    for amount in range(1,101):

        fixed_amt = FixedPoint(amount)
        fixed_amt /= 100
        fixed_amt_ext = FixedPoint(amount, 4)
        fixed_amt_ext /= 100
        scaled_amt = ScaledDecimal(amount, -2)
        float_amt = amount / 100.0

        ##############################################################
        # Now for the crux of the matter:

        fixed_res = fixed_amt * fixed_pct # rounding is implicit here
        fixed_res_ext = fixed_amt_ext * fixed_pct_ext
        fixed_res_ext_rounded = fixed_res_ext.copy()
        fixed_res_ext_rounded.set_precision(2)
        scaled_res = scaled_amt * scaled_pct # precision is increased
        float_res = round(float_amt * float_pct, 2)

        ##############################################################
        # That's it, let's see if they match:

        if str(fixed_res) != ("%0.2f" % float_res) or \
           str(scaled_res.round(-2)) != ("%0.2f" % float_res):

            print "0.%02d 0.%02d %-6s %-6s %-6s %-6s %-6s %-6.2f" % \
                (percent, amount, fixed_res, fixed_res_ext,
                 fixed_res_ext_rounded, scaled_res.round(-2),
                 scaled_res.round(-4), float_res)

# end of file.

Grant Edwards

Oct 1, 2002, 12:44:17 PM
to
In article <3D99C005...@kyborg.dk>, Kim Petersen wrote:
>> In article <memo.20021001113235.1724A@a.a>, Mark Charsley wrote:
>>
>>>>Usually, in the "canned" systems I've seen, those 3 for a
>>>>buck deals are supported by allowing three to five decimal
>>>>places rather than just two for amounts. Results come out
>>>>much as you are expecting there.
>>>
>>>The system a previous company wrote said that "if they've bought 3
>>>40p items these give them a discount of 20p".
>>
>> But, standard practice around here is that the "3 for a dollar"
>> price is good on quantities of less than three. (IOW the unit
>> price is 1/3 dollar). I have seen instances where you have to
>> buy 3 to get the discount but IME that's rare.
>
> (assuming IME is In My Environment)

I was thinking in my experience, but either works. :)

> might be - but around here - All shops (with possible
> exceptions (i've never seen it)) sell only (multiple)
> quantities of 3 at the discount *even* if 2 will sell at a
> higher price....

I have vague memories of that practice here in the US back in
pre-UPC scanner days when prices were entered by the clerks
manually.

--
Grant Edwards grante Yow! I had a lease on an
at OEDIPUS COMPLEX back in
visi.com '81...

Tim Peters

Oct 1, 2002, 2:56:56 PM
to
[Paul Boddie]

I don't see what this shows beyond that FixedPoint implements "banker's
rounding" (round the infinitely precise result to the nearest retained
digit, but in case of tie round to the nearest even number), while you seem
to prefer "add a half and chop" rounding. No single rounding discipline is
suitable for all commercial applications, and I hope the FixedPoint project
finds a way to let users specify the rounding *their* app needs.
Unfortunately, I was never able to find a FixedPoint user who was able to
articulate the rounding rules they were trying to emulate <wink/sigh>.

BTW, note that the result of expressions like

"%0.2f" % float_res

is platform-dependent! The underlying C sprintf function differs across
platforms. Windows seems to use "add a half and chop" for halfway cases,
while *most* other platforms use IEEE-754 to-nearest/even rounding in
halfway cases (same as "banker's rounding" in this context). FixedPoint's
results are independent of platform.
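[Editor's sketch: in modern CPython the platform dependence is gone — round() and %-formatting are both correctly rounded on the stored binary value, with ties going to even. Behavior below was checked on CPython 3; the older Pythons under discussion deferred to the C library and may differ.]

```python
# Exact binary halves round to even under round-half-even.
assert round(0.5) == 0 and round(1.5) == 2 and round(2.5) == 2

# 0.125 is exactly representable in binary; the tie goes to the
# even final digit, so formatting gives 0.12, not 0.13.
assert "%.2f" % 0.125 == "0.12"

# 0.70 * 0.05 is NOT exactly 0.035 in binary, so there is no tie:
# the computed product is slightly below 0.035 and rounds down.
assert round(0.70 * 0.05, 2) == 0.03
```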


dougfort

Oct 1, 2002, 3:50:33 PM
to
On Tue, 01 October 2002, Tim Peters wrote:

<snip/>

> suitable for all commercial applications, and I hope the FixedPoint project
> finds a way to let users specify the rounding *their* app needs.

<snip/>

We're talking about a class method, or a mixin class in the constructor. Not too
clear on this yet, because we want to get another release out with the initial
set of proposed changes (__slots__, etc.), before we deal with it.

We're also trying to accumulate test cases from c.l.p or wherever. We'd like to
add a benchmark/rounding demo to the project.

The more concrete examples proposed here and at fixedpoint.sourceforge.net, the
better.

Doug Fort <doug...@dougfort.net>
http://www.dougfort.net

Bengt Richter

Oct 1, 2002, 5:03:53 PM
to
On Mon, 30 Sep 2002 21:12:00 -0500, "Chris Gonnerman" <chris.g...@newcenturycomputers.net> wrote:
[... test program whose modified version appears below ...]

Hi Chris,
This is unproved, but maybe with the help of timbot et al we can prove
the range over which it is safe and useful. The test run doesn't prove
anything, but I thought it interesting. Just run it as you would your version.

----< fptester.py >---------------
# fptester.py -- adding experimental evenround test to fptest.py - bokr
#
# On Mon, 30 Sep 2002 21:12:00 -0500,
# "Chris Gonnerman" <chris.g...@newcenturycomputers.net> wrote:
#
# >This is a multi-part message in MIME format.
# >
# >------=_NextPart_000_000B_01C268C6.04954560
# >Content-Type: text/plain;
# > charset="Windows-1252"
# >Content-Transfer-Encoding: 7bit
# >
# Ok, here it is. I have written some Python code which
# displays the problems with float (floating binary)
# arithmetic applied to monetary amounts. Run this, and
# many of the places where the simple percentage times
# amount calculation in floating binary falls down will
# become obvious.
#
# You need the fixedpoint.py module installed to run this.
#
# http://fixedpoint.sourceforge.net
#
# (code attached)
#
# Chris Gonnerman -- chris.g...@newcenturycomputers.net
# http://newcenturycomputers.net
# ___________________________________________________________________________
#
# Chris,
# The evenround function I defined below seems (!) to do ok for your test,
# and then some (I extended it to run through with smaller and smaller
# percentages using increasing numbers of decimal fraction digits)
#
# Warning! This is as yet unproved. I just had the idea in response to the
# discussion. So don't go putting it in production software! You have been advised ;-)
#
# I modified your code a fair amount to incorporate and test my rounding function,
# as you can see. It would be interesting to prove what the range of validity
# really is.
#
# Anyway, run this just as you would your version.
# BTW, straight text is nicer to post ;-)
#
# Regards,
# Bengt Richter
# ___________________________________________________________________________


"""fptester.py -- fixedpoint versus float versus evenrounded float comparison"""

# You need the fixedpoint.py module installed
# http://fixedpoint.sourceforge.net

from fixedpoint import FixedPoint

FMTCOLS = '%-14s '*5 + '%s'
TITLES = "PCT AMT Fixed Float EFloat EFloat==Fixed".split()
print FMTCOLS % tuple(TITLES)
print FMTCOLS % tuple(map(lambda x:'-'*len(x), TITLES))

# I'm checking percentages, from 1% to 10% inclusive,
# multiplied by "money" amounts from 0.01 (one cent)
# to 1.00 (a buck).

# (I added an outer loop to scale the percentages down by factors of 10**x -- Bengt)

# Given appropriate precision, this problem applies
# to other monetary systems than just USA.

####################
# Here is my evenround function, apparently letting you use floating point as-is
# over a large range until it's time to round and print with %f format.
# -- Bengt Richter (NO WARRANTIES! Experimental software!)
#
EPSILON = 1.0e-14
BIAS = 1.0 + EPSILON
def evenround(f, nfdec):
    """
    Return floating value that will round to even trailing digit
    with nfdec fractional digits in %.0<nfdec>f format.
    """
    f, fsigned = abs(f), f
    half = 0.5 * 10.**-nfdec
    f *= BIAS
    nh, rh = divmod(f, half)
    f -= (rh<EPSILON and int(nh)%4==1)*half*BIAS
    if fsigned<0: return -f
    return f
#
####################
# Here I run the percentages through smaller-by-factors-of-ten ranges
for decimals in range(2,13):
    ffmt = '%%.0%df' % decimals
    for percent in range(1,11):

        fixed_pct = FixedPoint(percent,decimals)
        fixed_pct /= 10**decimals

        float_pct = percent / 10.0**decimals

        for amount in range(1,101):

            fixed_amt = FixedPoint(amount, decimals)
            fixed_amt /= 100

            float_amt = amount / 100.0

            ##############################################################
            # Now for the crux of the matter:

            fixed_res = fixed_amt * fixed_pct # rounding is implicit here

            float_res = round(float_amt * float_pct, decimals)
            even_res = evenround(float_amt * float_pct, decimals)

            ##############################################################
            # That's it, let's see if they match:

            sfix, sfloat, seven = (
                str(fixed_res), (ffmt % float_res), (ffmt % even_res)
            )
            if sfix != sfloat or sfix != seven:
                print FMTCOLS % (
                    fixed_pct, fixed_amt, # for ease ;-)
                    sfix, sfloat, seven, ('No, BUMMER!', 'Yes')[sfix==seven]
                )

# end of file.
----------------------------------

Results are :

[14:01] C:\pywk\fixedpt>python fptester.py
PCT AMT Fixed Float EFloat EFloat==Fixed
--- --- ----- ----- ------ -------------
0.01 0.50 0.00 0.01 0.00 Yes
0.02 0.25 0.00 0.01 0.00 Yes
0.05 0.10 0.00 0.01 0.00 Yes
0.05 0.50 0.02 0.03 0.02 Yes
0.05 0.70 0.04 0.03 0.04 Yes
0.05 0.90 0.04 0.05 0.04 Yes
0.06 0.75 0.04 0.05 0.04 Yes
0.09 0.50 0.04 0.05 0.04 Yes
0.10 0.05 0.00 0.01 0.00 Yes
0.10 0.25 0.02 0.03 0.02 Yes
0.10 0.35 0.04 0.03 0.04 Yes
0.10 0.45 0.04 0.05 0.04 Yes
0.10 0.65 0.06 0.07 0.06 Yes
0.10 0.85 0.08 0.09 0.08 Yes
0.001 0.500 0.000 0.001 0.000 Yes
0.002 0.250 0.000 0.001 0.000 Yes
0.005 0.100 0.000 0.001 0.000 Yes
0.005 0.500 0.002 0.003 0.002 Yes
0.005 0.700 0.004 0.003 0.004 Yes
0.005 0.900 0.004 0.005 0.004 Yes
0.006 0.750 0.004 0.005 0.004 Yes
0.009 0.500 0.004 0.005 0.004 Yes
0.010 0.050 0.000 0.001 0.000 Yes
0.010 0.250 0.002 0.003 0.002 Yes
0.010 0.350 0.004 0.003 0.004 Yes
0.010 0.450 0.004 0.005 0.004 Yes
0.010 0.650 0.006 0.007 0.006 Yes
0.010 0.850 0.008 0.009 0.008 Yes
0.0001 0.5000 0.0000 0.0001 0.0000 Yes
0.0002 0.2500 0.0000 0.0001 0.0000 Yes
0.0003 0.5000 0.0002 0.0001 0.0002 Yes
0.0005 0.1000 0.0000 0.0001 0.0000 Yes
0.0005 0.3000 0.0002 0.0001 0.0002 Yes
0.0005 0.5000 0.0002 0.0003 0.0002 Yes
0.0005 0.9000 0.0004 0.0005 0.0004 Yes
0.0006 0.2500 0.0002 0.0001 0.0002 Yes
0.0006 0.7500 0.0004 0.0005 0.0004 Yes
0.0009 0.5000 0.0004 0.0005 0.0004 Yes
0.0010 0.0500 0.0000 0.0001 0.0000 Yes
0.0010 0.1500 0.0002 0.0001 0.0002 Yes
0.0010 0.2500 0.0002 0.0003 0.0002 Yes
0.0010 0.4500 0.0004 0.0005 0.0004 Yes
0.0010 0.6500 0.0006 0.0007 0.0006 Yes
0.0010 0.8500 0.0008 0.0009 0.0008 Yes
0.00001 0.50000 0.00000 0.00001 0.00000 Yes
0.00002 0.25000 0.00000 0.00001 0.00000 Yes
0.00005 0.10000 0.00000 0.00001 0.00000 Yes
0.00005 0.50000 0.00002 0.00003 0.00002 Yes
0.00005 0.70000 0.00004 0.00003 0.00004 Yes
0.00005 0.90000 0.00004 0.00005 0.00004 Yes
0.00006 0.75000 0.00004 0.00005 0.00004 Yes
0.00007 0.50000 0.00004 0.00003 0.00004 Yes
0.00009 0.50000 0.00004 0.00005 0.00004 Yes
0.00010 0.05000 0.00000 0.00001 0.00000 Yes
0.00010 0.25000 0.00002 0.00003 0.00002 Yes
0.00010 0.35000 0.00004 0.00003 0.00004 Yes
0.00010 0.45000 0.00004 0.00005 0.00004 Yes
0.00010 0.65000 0.00006 0.00007 0.00006 Yes
0.00010 0.85000 0.00008 0.00009 0.00008 Yes
0.000001 0.500000 0.000000 0.000001 0.000000 Yes
0.000002 0.250000 0.000000 0.000001 0.000000 Yes
0.000005 0.100000 0.000000 0.000001 0.000000 Yes
0.000005 0.500000 0.000002 0.000003 0.000002 Yes
0.000005 0.900000 0.000004 0.000005 0.000004 Yes
0.000006 0.750000 0.000004 0.000005 0.000004 Yes
0.000009 0.500000 0.000004 0.000005 0.000004 Yes
0.000010 0.050000 0.000000 0.000001 0.000000 Yes
0.000010 0.250000 0.000002 0.000003 0.000002 Yes
0.000010 0.450000 0.000004 0.000005 0.000004 Yes
0.000010 0.650000 0.000006 0.000007 0.000006 Yes
0.000010 0.850000 0.000008 0.000009 0.000008 Yes
0.0000001 0.5000000 0.0000000 0.0000001 0.0000000 Yes
0.0000002 0.2500000 0.0000000 0.0000001 0.0000000 Yes
0.0000005 0.1000000 0.0000000 0.0000001 0.0000000 Yes
0.0000005 0.5000000 0.0000002 0.0000003 0.0000002 Yes
0.0000005 0.9000000 0.0000004 0.0000005 0.0000004 Yes
0.0000006 0.7500000 0.0000004 0.0000005 0.0000004 Yes
0.0000009 0.5000000 0.0000004 0.0000005 0.0000004 Yes
0.0000010 0.0500000 0.0000000 0.0000001 0.0000000 Yes
0.0000010 0.2500000 0.0000002 0.0000003 0.0000002 Yes
0.0000010 0.4500000 0.0000004 0.0000005 0.0000004 Yes
0.0000010 0.6500000 0.0000006 0.0000007 0.0000006 Yes
0.0000010 0.9500000 0.0000010 0.0000009 0.0000010 Yes
0.00000001 0.50000000 0.00000000 0.00000001 0.00000000 Yes
0.00000002 0.25000000 0.00000000 0.00000001 0.00000000 Yes
0.00000003 0.50000000 0.00000002 0.00000001 0.00000002 Yes
0.00000005 0.10000000 0.00000000 0.00000001 0.00000000 Yes
0.00000005 0.30000000 0.00000002 0.00000001 0.00000002 Yes
0.00000005 0.50000000 0.00000002 0.00000003 0.00000002 Yes
0.00000005 0.70000000 0.00000004 0.00000003 0.00000004 Yes
0.00000005 0.90000000 0.00000004 0.00000005 0.00000004 Yes
0.00000006 0.25000000 0.00000002 0.00000001 0.00000002 Yes
0.00000009 0.50000000 0.00000004 0.00000005 0.00000004 Yes
0.00000010 0.05000000 0.00000000 0.00000001 0.00000000 Yes
0.00000010 0.15000000 0.00000002 0.00000001 0.00000002 Yes
0.00000010 0.25000000 0.00000002 0.00000003 0.00000002 Yes
0.00000010 0.35000000 0.00000004 0.00000003 0.00000004 Yes
0.00000010 0.45000000 0.00000004 0.00000005 0.00000004 Yes
0.00000010 0.65000000 0.00000006 0.00000007 0.00000006 Yes
0.00000010 0.85000000 0.00000008 0.00000009 0.00000008 Yes
0.00000010 0.95000000 0.00000010 0.00000009 0.00000010 Yes
0.000000001 0.500000000 0.000000000 0.000000001 0.000000000 Yes
0.000000002 0.250000000 0.000000000 0.000000001 0.000000000 Yes
0.000000005 0.100000000 0.000000000 0.000000001 0.000000000 Yes
0.000000005 0.500000000 0.000000002 0.000000003 0.000000002 Yes
0.000000005 0.900000000 0.000000004 0.000000005 0.000000004 Yes
0.000000006 0.750000000 0.000000004 0.000000005 0.000000004 Yes
0.000000009 0.500000000 0.000000004 0.000000005 0.000000004 Yes
0.000000010 0.050000000 0.000000000 0.000000001 0.000000000 Yes
0.000000010 0.250000000 0.000000002 0.000000003 0.000000002 Yes
0.000000010 0.450000000 0.000000004 0.000000005 0.000000004 Yes
0.000000010 0.650000000 0.000000006 0.000000007 0.000000006 Yes
0.000000010 0.850000000 0.000000008 0.000000009 0.000000008 Yes
0.000000010 0.950000000 0.000000010 0.000000009 0.000000010 Yes
0.0000000001 0.5000000000 0.0000000000 0.0000000001 0.0000000000 Yes
0.0000000002 0.2500000000 0.0000000000 0.0000000001 0.0000000000 Yes
0.0000000005 0.1000000000 0.0000000000 0.0000000001 0.0000000000 Yes
0.0000000005 0.5000000000 0.0000000002 0.0000000003 0.0000000002 Yes
0.0000000005 0.9000000000 0.0000000004 0.0000000005 0.0000000004 Yes
0.0000000006 0.7500000000 0.0000000004 0.0000000005 0.0000000004 Yes
0.0000000009 0.5000000000 0.0000000004 0.0000000005 0.0000000004 Yes
0.0000000010 0.0500000000 0.0000000000 0.0000000001 0.0000000000 Yes
0.0000000010 0.2500000000 0.0000000002 0.0000000003 0.0000000002 Yes
0.0000000010 0.4500000000 0.0000000004 0.0000000005 0.0000000004 Yes
0.0000000010 0.6500000000 0.0000000006 0.0000000007 0.0000000006 Yes
0.0000000010 0.8500000000 0.0000000008 0.0000000009 0.0000000008 Yes
0.00000000001 0.50000000000 0.00000000000 0.00000000001 0.00000000000 Yes
0.00000000002 0.25000000000 0.00000000000 0.00000000001 0.00000000000 Yes
0.00000000005 0.10000000000 0.00000000000 0.00000000001 0.00000000000 Yes
0.00000000005 0.50000000000 0.00000000002 0.00000000003 0.00000000002 Yes
0.00000000005 0.90000000000 0.00000000004 0.00000000005 0.00000000004 Yes
0.00000000006 0.75000000000 0.00000000004 0.00000000005 0.00000000004 Yes
0.00000000009 0.50000000000 0.00000000004 0.00000000005 0.00000000004 Yes
0.00000000010 0.05000000000 0.00000000000 0.00000000001 0.00000000000 Yes
0.00000000010 0.25000000000 0.00000000002 0.00000000003 0.00000000002 Yes
0.00000000010 0.45000000000 0.00000000004 0.00000000005 0.00000000004 Yes
0.00000000010 0.65000000000 0.00000000006 0.00000000007 0.00000000006 Yes
0.00000000010 0.85000000000 0.00000000008 0.00000000009 0.00000000008 Yes
0.000000000001 0.500000000000 0.000000000000 0.000000000001 0.000000000000 Yes
0.000000000002 0.250000000000 0.000000000000 0.000000000001 0.000000000000 Yes
0.000000000005 0.100000000000 0.000000000000 0.000000000001 0.000000000000 Yes
0.000000000005 0.300000000000 0.000000000002 0.000000000001 0.000000000002 Yes
0.000000000005 0.500000000000 0.000000000002 0.000000000003 0.000000000002 Yes
0.000000000005 0.700000000000 0.000000000004 0.000000000003 0.000000000004 Yes
0.000000000005 0.900000000000 0.000000000004 0.000000000005 0.000000000004 Yes
0.000000000006 0.750000000000 0.000000000004 0.000000000005 0.000000000004 Yes
0.000000000009 0.500000000000 0.000000000004 0.000000000005 0.000000000004 Yes
0.000000000010 0.050000000000 0.000000000000 0.000000000001 0.000000000000 Yes
0.000000000010 0.150000000000 0.000000000002 0.000000000001 0.000000000002 Yes
0.000000000010 0.250000000000 0.000000000002 0.000000000003 0.000000000002 Yes
0.000000000010 0.350000000000 0.000000000004 0.000000000003 0.000000000004 Yes
0.000000000010 0.450000000000 0.000000000004 0.000000000005 0.000000000004 Yes
0.000000000010 0.650000000000 0.000000000006 0.000000000007 0.000000000006 Yes
0.000000000010 0.850000000000 0.000000000008 0.000000000009 0.000000000008 Yes

Regards,
Bengt Richter

James J. Besemer

Oct 1, 2002, 6:29:10 PM
to

Paul Rubin wrote:

I believe this still contains a hidden error: blindly adding a fudge
factor always fixes some cases while making others worse. But I don't
have time to prove it. If you don't believe me then feel free to use
the above definition. It'll pro'lly be right most of the time.

Regards

James J. Besemer

Oct 1, 2002, 6:43:13 PM
to

Tim Peters wrote:

>No single rounding discipline is

>suitable for all commercial applications,
>

Ain't it the Truth!!

Businesses in the US are required to file quarterly reports about periodic tax payments made during the quarter. Taxes, e.g. FICA and Medicare, are computed as a percentage of each employee's wages and deducted from the employee's paycheck each pay period. On the quarterly report, the actual liability is computed as a percentage of total payroll for the period.

These two computations (sum of percentages and percentage of sums) do not always produce the same amount. And yet the amounts on the return have to match to the exact penny -- no rounding off to the nearest dollar (like on personal returns). Consequently, the form includes an explicit entry for "round off error," to correct for minor differences in computed liability vs. actual payments.
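[Editor's sketch of that mismatch, using the decimal module that later entered the standard library. The rate and wage figures are invented purely for illustration, not taken from any real tax table.]

```python
from decimal import Decimal, ROUND_HALF_UP

RATE = Decimal("0.062")            # hypothetical FICA-like rate
CENT = Decimal("0.01")
wages = [Decimal("1003.37")] * 6   # six identical pay periods

# Sum of the per-period rounded deductions...
withheld = sum((w * RATE).quantize(CENT, rounding=ROUND_HALF_UP) for w in wages)

# ...versus the liability computed once, on total payroll.
liability = (sum(wages) * RATE).quantize(CENT, rounding=ROUND_HALF_UP)

assert withheld == Decimal("373.26")
assert liability == Decimal("373.25")   # off by exactly one cent
```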

Paul Rubin

Oct 1, 2002, 7:17:00 PM
to
"James J. Besemer" <j...@cascade-sys.com> writes:
> >OK, fix the rounding direction:
> >
> > def round(x):
> >     if x < 0: sign = -1.0
> >     else: sign = 1.0
> >     return sign * (abs(x) + .00000001)
> >
> I believe this still contains a hidden error. I believe in all cases
> that blindly adding a fudge factor fixes some cases and makes others
> worse.. But I don't have time to prove it. If you don't believe me
> then feel free to use the above definition. It'll pro'lly be right
> most of the time.

I haven't yet seen a complete definition of "right", so I have no way
to prove or disprove that the rounding function above always gives the
"right" answer.

Chris Gonnerman

Oct 1, 2002, 9:45:56 PM
to
Whoops again... I didn't realize that the FixedPoint
module actually IS using "banker's" rounding.

Guess I should read the docs, eh?

I second Tim Peter's motion... the rounding rule should
be configurable on a per-application basis.

Chris Gonnerman

Oct 1, 2002, 9:43:50 PM
to
----- Original Message -----
From: "Paul Boddie" <pa...@boddie.net>


> Here are some example results from an extended
> version of the original program:
>
> PCT AMT Fixed Fixed Fixed Scaled Scaled Float
> 2dp 4dp -> 2dp 2dp 4dp

<< hack >>


> 0.10 0.45 0.04 0.0450 0.04 0.05 0.0450 0.05
> 0.10 0.65 0.06 0.0650 0.06 0.07 0.0650 0.07
> 0.10 0.85 0.08 0.0850 0.08 0.09 0.0850 0.09

Whoops... can someone on the FixedPoint team explain this
to me? When did 0.85 * 0.10, rounded to 2 places, become
0.08? It should be 0.09.

Perhaps I misunderstood. Am I supposed to use higher
precision and then lower it after computation?

This isn't any better than floats.

Chris Gonnerman

Oct 1, 2002, 9:21:09 PM
to
----- Original Message -----
From: "Paul Rubin" <phr-n...@NOSPAMnightsong.com>

> I haven't yet seen a complete definition of "right", so I
> have no way to prove or disprove that the rounding function
> above always gives the "right" answer.

How complete do you need? All arithmetic rounding,
whenever it is performed, must produce results entirely
and exactly equal to the results achieved when doing
the same arithmetic *in decimal*, by hand, on paper,
following one of the standard rounding rules. There
are two such rules used in the US, "classic" and
"banker's." For most business purposes, "classic"
rounding ( >= 0.5 up, < 0.5 down) is the expected mode,
but in an ideal world either should be available.

In other words, 0.7 * 0.05 should always be 0.035, and
never 0.034999...

It's really simple. For monetary purposes, anyone with
a (successful) eighth-grade education or equivalent
should be able to do the math on paper (or do they even
teach that anymore?) and not be surprised by the
computer's results.

If this isn't a good enough definition of "right" for
you, please tell me what, exactly, is wrong with it.
What do you need to understand the problem?

Paul Rubin

unread,
Oct 1, 2002, 10:22:46 PM10/1/02
to
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> If this isn't a good enough definition of "right" for
> you, please tell me what, exactly, is wrong with it.
> What do you need to understand the problem?

We've been over this several times already. 9 bananas at 3 for a dollar.

(9 * 1) / 3 = 3.00
9 * (1/3) = 2.97

What part of your definition of "right" says which one of these to choose?
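The ambiguity Paul describes is easy to reproduce even with exact integer-cent arithmetic; which answer is "right" depends entirely on where the spec says rounding happens. A sketch (the function names are illustrative, not from anyone's posted code):

```python
# Nine bananas at three for a dollar, computed two ways in whole cents.
def price_by_total(n_bananas, bunch_cents=100, bunch_size=3):
    # Multiply first, divide last: exact whenever n_bananas is a
    # multiple of bunch_size.
    return n_bananas * bunch_cents // bunch_size

def price_by_unit(n_bananas, bunch_cents=100, bunch_size=3):
    # Round the per-banana price to whole cents first, then multiply:
    # loses a third of a cent per banana.
    unit = round(bunch_cents / bunch_size)  # 33 cents
    return n_bananas * unit

print(price_by_total(9))  # 300 cents -> $3.00
print(price_by_unit(9))   # 297 cents -> $2.97
```

No rounding rule rescues the second version; the order of operations is part of the business rule.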

Christian Tismer

unread,
Oct 1, 2002, 10:45:24 PM10/1/02
to
Paul Rubin wrote:

> "Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
>
>>>>>>(.70 * .05) + .000000001
>>>>>
>>>rounds to the right thing.
>>
>>Cool. Now PROVE that's right in all cases.
>
>
> What do you mean by "right"? What do you mean by "all cases"?

Well, you can guess what's right:
What do you think is the definition of a business arithmetic?
Having no or minimum possible errors. And if, then positive
to my account, please. :)
All cases probably means every number inserted into the
above formula which can occur in business. This is finite.

>>Like I said, decimal arithmetic is STILL the only choice.
>
>
> I'm not convinced decimal arithmetic is "right in all cases".

It has a better chance to be right (meaning exact) since its
number base is identical to the money number base.
Having more prime factors than (2, 5) in it would lead to
even more fractions being exact, but I doubt that is
what they want. They want the computer to think exactly as
wrong as they do.

ciao - chris

--
Christian Tismer :^) <mailto:tis...@tismer.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/

Christian Tismer

unread,
Oct 1, 2002, 10:56:06 PM10/1/02
to
Paul Rubin wrote:
> "Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
>
>>>What do you mean by "right"? What do you mean by "all
>>>cases"?
>>
>>The original subject is *business* arithmetic. There is no
>>choice but to use decimal arithmetic if you want your math
>>results to match good-old-fashioned pen and paper math.
>
>
> No, I'm still not convinced. Is business arithmetic well-defined
> enough that you can never do the same calculation two ways and get
> different results?
>
> Example: bunches are three for a dollar. How much do you pay if you
> buy nine bananas?
>
> Answer #1:
> price_per_banana = 1.00 / 3
> number_of_bananas = 9
> total_price = number_of_bananas * price_per_banana
> print total_price
>
> Answer #2:
> price_per_bunch = 1.00 # three bananas
> number_of_bunches = 3
> total_price = number_of_bunches * price_per_bunch
> print total_price

Sorry, this is an argument I cannot accept.
A discussion about decimal or floating arithmetic
doesn't give the right to break basic rules of
numeric computation. It is a known fact that
division by a number that contains a prime factor
relatively prime to the number base produces
an error.
I would fire such a programmer in any case, whether the
error shows up in decimal math, or whether it hides
due to graceful rounding rules in floating point.

> With floating point and rounding on output, you get the same answer
> both ways.

Exercise: Find the extremes where this claim is proven wrong.

Finally I see a big advantage in decimal math:
Doing wrong numeric shows up earlier :-)

Chris Gonnerman

unread,
Oct 1, 2002, 11:00:33 PM10/1/02
to
----- Original Message -----
From: "Paul Rubin" <phr-n...@NOSPAMnightsong.com>

OK. You either didn't read my previous post or didn't
understand it. The problem you are talking about is
"above" the problem I'm talking about.

The answer to the question you are asking is not
directly relevant to my problem. I'm not complaining
about floating point arithmetic. I'm complaining about
floating BINARY points. BINARY, not decimal.

I don't care if it's ScaledDecimal or FixedPoint or
freakin' IBM BCD... I just don't EVER want to see:

0.70 * 0.05 = 0.034999...

... in a monetary application.

What you are discussing is the hazard of insufficient
precision. Fine. At four or five decimal places,
FixedPoint (for instance) produces the results you
appear to expect. The problem I am discussing is
entirely an artifact of BINARY floating point.

DECIMAL floating point would be fine... just so the
math doesn't surprise any reasonably intelligent,
educated user.

Chris Gonnerman

unread,
Oct 1, 2002, 11:11:48 PM10/1/02
to
----- Original Message -----
From: "Christian Tismer" <tis...@tismer.com>

> It has a better chance to be right (meaning exact) since its
> number base is identical to the money number base.
> Having more prime factors than (2, 5) in it would lead to
> even more fractions being exact, but I doubt that is
> what they want. They want the computer to think exactly as
> wrong as they do.

Huh. You managed to back me up and bust my chops at the
same time.

What, exactly, is wrong with wanting math involving
essentially decimal monetary amounts to lead to proper
decimal rounding? Paul keeps waving bananas in my face.
That's not the problem... since time immemorial, grocers
have had to handle three-for-a-buck sales. They know
how they want the math done; and adding some extra
precision to the intermediate steps solves Paul's
complaint. So why does he keep complaining?

I have read several posts asserting that "we" don't know
or can't explain properly how we want things rounded or
computed. What's so hard to follow here? Didn't you all
learn decimal arithmetic in grade school (or local
equivalent)? I'd like to know what the problem is here.

My only complaint is that the fixedpoint.py module does
only banker's rounding. Bah. I do have bank customers,
but they don't let me handle their math (it's done in
true, old-fashioned BCD as far as I can tell). Most of
my business customers do the >= 0.5 up, < 0.5 down form
they learned in school... so what I need is a math library
that does it that way without losing or gaining
"unexpected" pennies along the way.

Tim Peters

unread,
Oct 1, 2002, 11:07:02 PM10/1/02
to
[Chris Gonnerman]

> Whoops again... I didn't realize that the FixedPoint
> module actually IS using "banker's" rounding.

Yes, and rigorously so.

> Guess I should read the docs, eh?
>
> I second Tim Peter's motion... the rounding rule should
> be configurable on a per-application basis.

Then please help Doug do that, including *defining* the rounding disciplines
you think are needed; I was never able to bully <wink> those out of its
users, so "banker's rounding" is the only thing that's there (it happens to
be better for overall accuracy than biased "add a half and chop" rounding,
so I was keener to give casual users something less likely to get them into
trouble). There are also difficult design decisions whenever a "global
magic setting" is involved, including thread safety and what to do if
different parts of an app need different settings (where "different parts"
can include 3rd-party library modules you're only dimly aware of, too -- you
can't step on what they need to happen, and neither can you allow them to
step on what you need). I can't make time to help with this, other than to
point out such issues.


Tim Peters

unread,
Oct 1, 2002, 11:22:16 PM10/1/02
to
[Bengt Richter]

> This is unproved, but maybe with the help of timbot et al we can prove
> the range over which it is safe and useful. The test run doesn't prove
> anything, but I thought it interesting. Just run it as you would
> your version.

Note that you have no control over the accuracy of operations like 10.00**i,
or of % formats, because Python has no control over them -- they're
delegated to the platform libm pow() and libc sprintf(), and the quality of
those varies massively across platforms. If you want to be safe, you can't
leave such things to the competency of your platform library writers.
Floating divmod is also a bitch, limited by the quality of the platform
fmod().

I'm unclear on why so many are defending binary fp to people who insist they
don't want it. It's not like binary fp is in danger of going away in our
lifetimes -- this is like "defending" Perl against Python <wink>.

every-numeric-scheme-sucks-for-some-apps-and-you're-always-better-off-
using-one-that-doesn't-obviously-suck-for-yours-ly y'rs - tim


Chris Gonnerman

unread,
Oct 1, 2002, 11:19:59 PM10/1/02
to
----- Original Message -----
From: "Tim Peters" <tim...@email.msn.com>

> > I second Tim Peter's motion... the rounding rule should
> > be configurable on a per-application basis.
>
> Then please help Doug do that, including *defining* the
> rounding disciplines you think are needed; I was never able
> to bully <wink> those out of its users,

I keep hearing this... what is the problem, exactly? So far as I've ever
been taught, there are only two rules:

"Classic" rounding:
>= 0.5 (units) up, < 0.5 down

"Banker's" rounding:
> 0.5 up, < 0.5 down, ties broken by rounding
to the next even unit

(units = pennies in US)
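Both rules are one call each in the decimal module that was added to Python long after this thread (2.4); a sketch for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

CENT = Decimal("0.01")

def classic(amount):
    # "Classic" rounding: half a cent or more rounds away from zero.
    return amount.quantize(CENT, rounding=ROUND_HALF_UP)

def bankers(amount):
    # "Banker's" rounding: exact half-cent ties go to the even cent.
    return amount.quantize(CENT, rounding=ROUND_HALF_EVEN)

print(classic(Decimal("0.125")))  # 0.13
print(bankers(Decimal("0.125")))  # 0.12  (2 is even)
print(bankers(Decimal("0.135")))  # 0.14  (4 is even)
```

The two rules only disagree on exact ties, which is why the choice matters so rarely and yet so visibly.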

> so "banker's rounding" is the only thing that's there (it
> happens to be better for overall accuracy than biased "add
> a half and chop" rounding, so I was keener to give casual
> users something less likely to get them into trouble).

I have to agree here. This is not IMHO the "least surprising"
choice, but given two options, it's the most accurate.

> There are also difficult design decisions whenever a
> "global magic setting" is involved, including thread safety
> and what to do if different parts of an app need different
> settings (where "different parts" can include 3rd-party
> library modules you're only dimly aware of, too -- you
> can't step on what they need to happen, and neither can you
> allow them to step on what you need).

A class mixin was suggested for this; I can't think of a
better way.

> I can't make time to help with this, other than to
> point out such issues.

Granted. If I get the time I'll study the code.

Paul Rubin

unread,
Oct 2, 2002, 12:17:48 AM10/2/02
to
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> writes:
> What, exactly, is wrong with wanting math involving essentially
> decimal monetary amounts to lead to proper decimal rounding? Paul
> keeps waving bananas in my face. That's not the problem... since
> time immemorial, grocers have had to handle three-for-a-buck sales.
> They know how they want the math done; and adding some extra
> precision to the intermediate steps solves Paul's complaint. So why
> does he keep complaining?

I'm told that a certain rounding method might not always produce the
same answer as doing it the "right" way. So I asked for a deterministic
algorithm for finding the "right" answer in order to compare the results.
So far I haven't gotten that algorithm.

Chris Gonnerman

unread,
Oct 2, 2002, 12:46:15 AM10/2/02
to
----- Original Message -----
From: "Paul Rubin" <phr-n...@NOSPAMnightsong.com>


> I'm told that a certain rounding method might not always
> produce the same answer as doing it the "right" way. So I
> asked for a deterministic algorithm for finding the "right"
> answer in order to compare the results. So far I haven't
> gotten that algorithm.

Ah.

It's not (really) the rounding algorithm that's at fault.

0.70 * 0.05 in decimal is 0.035. (Hopefully this, at least,
can be treated as a fact) The nearest approximation to that
number which floats can handle (on my current Intel hardware)
is 0.034999...

>>> 0.7 * 0.05
0.034999999999999996

but if you just put in the number, you get this:

>>> 0.035
0.035000000000000003

so:

>>> 0.035 == (0.7 * 0.05)
0

Gah. It doesn't matter what "method" or "rules" of
rounding you use, when you can't rely on the basic math
to be right.

You have mentioned the three-for-a-buck bananas many
times. In decimal floating point, 1/3 = 0.3333... You
have indicated several times that this result is
problematic. Sure, it is. But, my business-major
friends and customers (intersecting sets) are expecting
that, and in some cases have even alerted me to it
before I thought of it.

None of them would ever expect that the 5% of 70 cents
calculation would come up with anything other than 3.5
cents, rounded to 4 cents (of course).
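A decimal representation delivers exactly that expectation. A sketch using the decimal module (added to Python well after this thread was written):

```python
from decimal import Decimal, ROUND_HALF_UP

rate = Decimal("0.05")
amount = Decimal("0.70")
product = rate * amount  # exactly 0.0350 -- no binary representation error
cents = product.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(product)  # 0.0350
print(cents)    # 0.04
```

Because both operands are stored in base ten, the intermediate result is exact and the rounding rule is the only place a penny can move.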

Dennis Lee Bieber

unread,
Oct 2, 2002, 12:55:03 AM10/2/02
to
Chris Gonnerman fed this fish to the penguins on Tuesday 01 October
2002 06:21 pm:


> In other words, 0.7 * 0.05 should always be 0.035, and
> never 0.034999...
>

When I took science classes, I'd have been dinged a few points if I
turned in 0.035 -- the rule was to report results to the same
significance as the inputs.

0.7 * 0.05 => 0.0 in those science classes. The assumption here being
that the accuracy of measurements is such that the last supplied
decimal place is already an estimate -- one does not add places which
imply increased precision.

>>> print "%5f" % (0.7 * 0.05)
0.035000
>>> print "%5.3f" % (0.7 * 0.05)
0.035
>>> print "%5.2f" % (0.7 * 0.05)
0.03
>>> print "%5.1f" % (0.7 * 0.05)
0.0


Any "formal" application likely has formatted output, and under those
constraints this example passes as "right"

--
> ============================================================== <
> wlf...@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG <
> wulf...@dm.net | Bestiaria Support Staff <
> ============================================================== <
> Bestiaria Home Page: http://www.beastie.dm.net/ <
> Home Page: http://www.dm.net/~wulfraed/ <

Dennis Lee Bieber

unread,
Oct 2, 2002, 1:02:12 AM10/2/02
to
Chris Gonnerman fed this fish to the penguins on Tuesday 01 October
2002 06:43 pm:


>
> Perhaps I misunderstood. Am I supposed to use higher
> precision and then lower it after computation?
>

Well, that /is/ what the M$ VB6 "currency" type does -- it maintains 4
decimal places (using 64-bit scaled integer arithmetic), and likely
expects the programmer to specify a display string with only two
decimals.


I wouldn't be surprised to find that many COBOL programs may be using
extra digits internally.
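That scheme is easy to sketch in Python with plain integers (the four-place scale mirrors the Currency type described above; the helper names are illustrative, and negatives are ignored for brevity since // floors):

```python
SCALE = 10_000  # four decimal places behind every amount

def mul(a, b):
    # Multiply two scaled amounts, rounding half-up at the 4th place.
    # Positive amounts only: integer // floors, so negatives would
    # need their own handling.
    return (a * b + SCALE // 2) // SCALE

def to_cents(a):
    # Collapse the two guard digits for display, again rounding half-up.
    return (a + 50) // 100

x = 7000  # 0.7000
y = 500   # 0.0500
p = mul(x, y)
print(p)            # 350 -> 0.0350, exact
print(to_cents(p))  # 4   -> displays as 0.04
```

The guard digits absorb intermediate fractions like Tim White's insurance percentages, and only the final display step rounds to pennies.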

Dennis Lee Bieber

unread,
Oct 2, 2002, 1:05:32 AM10/2/02
to
Chris Gonnerman fed this fish to the penguins on Tuesday 01 October
2002 08:11 pm:


> My only complaint is that the fixedpoint.py module does
> only banker's rounding. Bah. I do have bank customers,
> but they don't let me handle their math (it's done in
> true, old-fashioned BCD as far as I can tell). Most of
> my business customers do the >= 0.5 up, < 0.5 down form
> they learned in school... so what I need is a math library
> that does it that way without losing or gaining
> "unexpected" pennies along the way.
>

Okay, and which way should it do -0.5? Is that rounded to 0.0 or -1.0?

James J. Besemer

unread,
Oct 2, 2002, 3:38:40 AM10/2/02
to

Dennis Lee Bieber wrote:

> When I took science classes, I'd have been dinged a few points if I
>turned in 0.035 -- the rule was to report results to the same
>significance as the inputs.
>
> 0.7 * 0.05 => 0.0 in those science classes. The assumption here being
>that the accuracy of measurements is such that the last supplied
>decimal place is already an estimate -- one does not add places which
>imply increased precision.
>


Wow. That's pretty Weird Science, if you ask me. ;o)

You must have gotten a lot of ZEROS whenever using universal constants,
like Planck's Constant, Avogadro's number, Amperes, Joules, Ergs or
Electron Volts vs. any practical quantity in chemistry or physics class.
;o)

I think you're confusing some other message about precision. E.g., I
suspect you were supposed to learn that

0.7 * 0.05 => 0.04 ( instead of 0.035)

Precision -- the number of significant digits -- is independent of the
scale factor, or magnitude, of the number.

Brian Quinlan

unread,
Oct 2, 2002, 3:48:53 AM10/2/02
to
> When I took science classes, I'd have been dinged a few points
if
> I turned in 0.035 -- the rule was to report results to the same
> significance as the inputs.
>
> 0.7 * 0.05 => 0.0 in those science classes.

I'll bet that you were really taught to answer 0.04

Otherwise I fear for your scientific education :-)

Cheers,
Brian


dougfort

unread,
Oct 2, 2002, 4:49:33 AM10/2/02
to
"Tim Peters" wrote:

<snip/>


>
> Then please help Doug do that, including *defining* the rounding disciplines

<snip/>

This is a surprisingly hot topic. We were going to put it off to a later release,
but now I'd like to get something out before people lose interest.

Joe and I are supposed to get together for pair programming Thursday night (if my
day job cooperates). We'll try to have something out on CVS then. (Joe doesn't
know about this yet).

I'll be watching this list and fixedpoint.sourceforge.net for suggestions and
test cases.

Paul Boddie

unread,
Oct 2, 2002, 5:37:33 AM10/2/02
to
Tim Peters <tim...@comcast.net> wrote in message news:<mailman.1033498683...@python.org>...
>

[Example results]

> I don't see what this shows beyond that FixedPoint implements "banker's
> rounding" (round the infinitely precise result to the nearest retained
> digit, but in case of tie round to the nearest even number), while you seem
> to prefer "add a half and chop" rounding.

Well, ScaledDecimal attempts to implement the kind of rounding that
everyone learns in primary school. I think this is intuitive to most
people, although I'm willing to accept that indoctrination in the
banking sector can distort some people's expectations. ;-)

> No single rounding discipline is suitable for all commercial applications,
> and I hope the FixedPoint project finds a way to let users specify the
> rounding *their* app needs. Unfortunately, I was never able to find a
> FixedPoint user who was able to articulate the rounding rules they were
> trying to emulate <wink/sigh>.

Clearly, the class has to be configurable enough. The debate around
what people expect from decimal arithmetic or decimal data types has
demonstrated the need for a certain amount of flexibility.

> BTW, note that the result of expressions like
>
> %0.2f" % float_res
>
> is platform-dependent!

That's why I wrote...

"""
When the floating point result is only a "wafer thin mint" below 0.35
(on this Pentium III laptop running Windows 2000), the rounded,
truncated result is given as 0.3. Such conditions could be seen to be
a flaw when attempting to highlight problems with the FixedPoint
implementation.
"""

In other words, the use of the floating point arithmetic, the built-in
'round' function, and various formatting operators isn't a reliable
way of showing whether something (the floating point result) is
correct, let alone something else (the FixedPoint result). Indeed, not
having written the original test program, I can safely say that I
wouldn't have used floating point arithmetic to demonstrate anything
meaningful. <0.029999999999999999-wink>, as ActivePython 2.1.3 Build
214 on win32 says.

Paul

John Roth

unread,
Oct 2, 2002, 6:57:27 AM10/2/02
to

"Paul Rubin" <phr-n...@NOSPAMnightsong.com> wrote in message
news:7x4rc53...@ruckus.brouhaha.com...

That's because there isn't one. I used to work for an insurance
company that did one-off contracts for group insurance. Each
one had specialty rounding, specified by the actuaries to make
the contract come out right. In the face of that kind of illogic, you
can't even depend on there being three rounding policies: truncate,
round at .5 and roll up any fraction. I've seen specs that require
rounding up at .70, for example.

John Roth


Chris Gonnerman

unread,
Oct 2, 2002, 8:11:21 AM10/2/02
to
----- Original Message -----
From: "Dennis Lee Bieber" <wlf...@ix.netcom.com>


> Chris Gonnerman fed this fish to the penguins on Tuesday 01 October
> 2002 06:21 pm:
>
> > In other words, 0.7 * 0.05 should always be 0.035, and
> > never 0.034999...
>
> When I took science classes, I'd have been dinged a
> few points if I turned in 0.035 -- the rule was to report
> results to the same significance as the inputs.

Would you have been dinged for not paying attention in class?

This entire thread has to do with MONETARY applications. If
you are doing SCIENCE you probably want double-precision
floating point (the Python float).

Chris Gonnerman

unread,
Oct 2, 2002, 8:14:59 AM10/2/02
to
----- Original Message -----
From: "Dennis Lee Bieber" <wlf...@ix.netcom.com>


> Chris Gonnerman fed this fish to the penguins on Tuesday 01 October
> 2002 08:11 pm:
>
> > My only complaint is that the fixedpoint.py module does
> > only banker's rounding. Bah. I do have bank customers,
> > but they don't let me handle their math (it's done in
> > true, old-fashioned BCD as far as I can tell). Most of
> > my business customers do the >= 0.5 up, < 0.5 down form
> > they learned in school... so what I need is a math library
> > that does it that way without losing or gaining
> > "unexpected" pennies along the way.
> >
> Okay, and which way should it do -0.5? Is that
> rounded to 0.0 or -1.0?

I have to admit to being unclear on that point. If I
remember my business classes right, it's the absolute
value that matters; so for negative numbers, <= -0.5 down,
> -0.5 up. But I'd most likely dig up a business math
textbook and check before casting it in code.

Chris Gonnerman

unread,
Oct 2, 2002, 8:16:47 AM10/2/02
to
----- Original Message -----
From: "John Roth" <john...@ameritech.net>

> That's because there isn't one. I used to work for an
> insurance company that did one-off contracts for group
> insurance. Each one had specialty rounding, specified
> by the actuaries to make the contract come out right.
> In the face of that kind of illogic, you can't even
> depend on there being three rounding policies: truncate,
> round at .5 and roll up any fraction. I've seen specs
> that require rounding up at .70, for example.

I remember writing code to round up at 0.40, but I can't
remember who it was for...

Terry Reedy

unread,
Oct 2, 2002, 9:05:35 AM10/2/02
to
>>> OK, so don't use whole pennies (or smallest currency) as your
>>> base, use 10,000ths of a penny, but still use integer arithmetic
>>> for your calculations.

>> Normal Integers aren't big enough. 2^31 / 10K == 215K.

>That's why you want double-floats. 2^53 / 10K = 900G

Or 64-bit integers. AMD has just announced a new series of 64-bit chips
initially aimed at server market but ultimately at desktops. I
suspect that such will be common by 2010.

TJR
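Python itself already sidesteps the word-size worry: its integers grow without bound, so a scaled-integer scheme isn't capped at 2^31 or even 2^63. A quick illustration:

```python
# 10,000ths of a penny, as suggested earlier in the thread:
# 100 pennies * 10,000 = one million units per dollar.
scale = 100 * 10_000
debt = 10**13 * scale   # ten trillion dollars: 10**19 units, past 2**63
print(debt > 2**63)     # True
print(debt + 1 - debt)  # 1 -- arithmetic is still exact
```

(In 2002 this meant Python longs rather than machine ints; in modern Python there is only one integer type.)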

Steve Holden

unread,
Oct 2, 2002, 10:04:20 AM10/2/02
to
"Chris Gonnerman" <chris.g...@newcenturycomputers.net> wrote in message
news:mailman.1033528684...@python.org...

> ----- Original Message -----
> From: "Tim Peters" <tim...@email.msn.com>
>
> > > I second Tim Peter's motion... the rounding rule should
> > > be configurable on a per-application basis.
> >
> > Then please help Doug do that, including *defining* the
> > rounding disciplines you think are needed; I was never able
> > to bully <wink> those out of its users,
>
> I keep hearing this... what is the problem, exactly? So far as I've ever
> been taught, there are only two rules:
>
> "Classic" rounding:
> >= 0.5 (units) up, < 0.5 down
>
> "Banker's" rounding:
> > 0.5 up, < 0.5 down, ties broken by rounding
> to the next even unit
>
> (units = pennies in US)
>
> > so "banker's rounding" is the only thing that's there (it
> > happens to be better for overall accuracy than biased "add
> > a half and chop" rounding, so I was keener to give casual
> > users something less likely to get them into trouble).
>
> I have to agree here. This is not IMHO the "least surprising"
> choice, but given two options, it's the most accurate.
>

Then there's what I'll call "numerical analyst's rounding", intended to
remove the bias of always rounding the same way at the boundary. I may be
wrong here; I took the classes in 1973 ....

> 0.5 (units) round away from zero,
< 0.5 round towards zero
= 0.5: round towards zero if next most significant digit is odd,
otherwise away from zero.

regards
-----------------------------------------------------------------------
Steve Holden http://www.holdenweb.com/
Python Web Programming http://pydish.holdenweb.com/pwp/
Previous .sig file retired to www.homeforoldsigs.com
-----------------------------------------------------------------------

Aahz

unread,
Oct 2, 2002, 12:49:00 PM10/2/02
to
In article <77udna...@ix.netcom.com>,

Dennis Lee Bieber <wlf...@ix.netcom.com> wrote:
>Chris Gonnerman fed this fish to the penguins on Tuesday 01 October
>2002 06:21 pm:
>>
>> In other words, 0.7 * 0.05 should always be 0.035, and
>> never 0.034999...
>
>When I took science classes, I'd have been dinged a few points if
>I turned in 0.035 -- the rule was to report results to the same
>significance as the inputs.
>
>0.7 * 0.05 => 0.0 in those science classes. The assumption here being
>that the accuracy of measurements is such that the last supplied
>decimal place is already an estimate -- one does not add places which
>imply increased precision.

Nope. Here's the correct calculation:

7x10^-1 * 5x10^-2 = 35x10^-3 -> 4x10^-2 -> 0.04
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/

Project Vote Smart: http://www.vote-smart.org/

Tim Peters

unread,
Oct 2, 2002, 2:46:55 PM10/2/02
to
[Tim]

>> BTW, note that the result of expressions like
>>
>> %0.2f" % float_res
>>
>> is platform-dependent!

[Paul Boddie]


> That's why I wrote...
>
> """
> When the floating point result is only a "wafer thin mint" below 0.35
> (on this Pentium III laptop running Windows 2000), the rounded,
> truncated result is given as 0.3. Such conditions could be seen to be
> a flaw when attempting to highlight problems with the FixedPoint
> implementation.
> """

That's a different issue. The results of the given expression are
platform-dependent even if float_res is exactly representable in binary fp.
For example,

>>> '%0.2f' % 1.125
'1.13'
>>>

on Windows but produces '1.12' on most other platforms. This doesn't have
to do with representation error (decimal 1.125 is exactly representable in
binary fp), it has to do with tbe platform sprintf's *decimal* rounding
policy in "exactly halfway" cases. That varies across platforms, even with
identical FP HW.
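For what it's worth, this particular platform dependence later disappeared: modern CPython does its own correctly-rounded float formatting instead of delegating to the C library, so exact ties round half-to-even on every platform:

```python
# 1.125 and 1.375 are exactly representable in binary,
# so these are true ties at two decimal places.
print('%0.2f' % 1.125)  # '1.12' -- tie rounds to even (2)
print('%0.2f' % 1.375)  # '1.38' -- tie rounds to even (8)
```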


Tim Peters

unread,
Oct 2, 2002, 2:39:18 PM10/2/02
to
[Tim]

>> Then please help Doug do that, including *defining* the
>> rounding disciplines you think are needed; I was never able
>> to bully <wink> those out of its users,

[Chris Gonnerman]


> I keep hearing this... what is the problem, exactly?

I think the replies (including two of your own <wink>) spoke more eloquently
to this point than I could. The rules various commercial apps want are
utterly arbitrary by any rational criterion ("always round up", "round up
over 0.4", "round up over 0.7", "it depends on the *next* digit too", etc),
but that doesn't mean an arbitrary piece of code satisfies the requirements.
If I were still doing this, I expect I'd invent a way for users to plug in
their own "final rounding" function, and explicitly disown responsibility
for guessing what they want.


Tim White

unread,
Oct 2, 2002, 4:07:26 PM10/2/02
to
Actually, using the implied precision...
0.7 is a number between 0.65 and 0.75 and
0.05 is a number between 0.045 and 0.055,
which multiplied together give a number between 0.02925 and 0.04125
or 0.035 +/- 0.006 if we are not losing any precision.
This can be expressed as 0.04 with one significant digit (as were the
original numbers) - but loses some precision.

Tim

Christian Tismer

unread,
Oct 2, 2002, 4:48:23 PM10/2/02
to
Chris Gonnerman wrote:
> ----- Original Message -----
> From: "Christian Tismer" <tis...@tismer.com>
>
>>It has a better chance to be right (meaning exact) since its
>>number base is identical to the money number base.
>>Having more prime factors than (2, 5) in it would lead to
>>even more fractions being exact, but I doubt that is
>>what they want. They want the computer to think exactly as
>>wrong as they do.
>
>
> Huh. You managed to back me up and bust my chops at the
> same time.

Not exactly what I intended to do, but near to it :-)

What I tried to say was that "doing it right" means to
"do it as wrong as by hand". What bankers compute by
hand is something defined (according to rules that I
never have learned...), and the requirement is IMHO
that the computer should behave exactly the same.

Nobody claims that there is *one* way to correct
errors. There are multiple ways, depending on certain
criteria which might not be compatible to each other.
Decimal math cannot minimize all possible errors,
maybe some obvious ones are treated in a more obvious
way than using float, and therefore I consider
it wrong like any other non-rational math would be.

The key is minimizing the surprise effect. Find their
requirements and implement what matches the customer's
needs, and he'll be quiet.
(This is my primary goal with my customers: convince
them that I'm not questionable for their results.
Appears to be good to use decimal in this case).

> What, exactly, is wrong with wanting math involving
> essentially decimal monetary amounts to lead to proper
> decimal rounding? Paul keeps waving bananas in my face.

As said, dividing by 3 first and multiplying then is
like asking for rounding errors. Unless we have a ternary
base as well, this will always give problems.

My only reason to divide early is a possible overflow
condition. Otherwise, division almost always introduces
errors, which means it should be done as late as possible,
with the dividend being as large as possible.

(On bases: Actually, the primes (2, 3, 5) would give a quite
nice numeric base. They have lots of nice properties concerning
harmony, and the multiples of the prime factors produce
numbers people consider as "nice", most of the time.
So my number base would be at least 30 :-) )

all the best - chris

Dennis Lee Bieber

unread,
Oct 2, 2002, 7:43:15 PM10/2/02
to
Chris Gonnerman fed this fish to the penguins on Tuesday 01 October
2002 09:46 pm:


>
>>>> 0.7 * 0.05
> 0.034999999999999996
>
> but if you just put in the number, you get this:
>
>>>> 0.035
> 0.035000000000000003
>
> so:
>
>>>> 0.035 == (0.7 * 0.05)
> 0

The method that almost any good book on scientific computation gives
for comparison is to never test for equality for floating point
calculations. Rather one tests for an epsilon.

>>> a = .7 * .05
>>> b = .035
>>> print a, b
0.035 0.035
>>> a
0.034999999999999996
>>> b
0.035000000000000003
>>> eps = 0.00005
>>> abs(a-b) < eps
1
>>>
>>> eps = 0.001
>>> abs(a-b) < eps
1
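Later Pythons ship this idiom in the standard library as math.isclose, a relative-tolerance comparison by default (not available when this thread was written):

```python
import math

a = 0.7 * 0.05
b = 0.035
# The two values differ only in the last couple of bits,
# so a relative-tolerance test passes easily.
print(a == b)              # False
print(math.isclose(a, b))  # True (default rel_tol is 1e-09)
```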

Dennis Lee Bieber

unread,
Oct 2, 2002, 7:49:19 PM10/2/02
to
Steve Holden fed this fish to the penguins on Wednesday 02 October 2002
07:04 am:

> "Chris Gonnerman" <chris.g...@newcenturycomputers.net> wrote in
> message news:mailman.1033528684...@python.org...
>>

>> "Banker's" rounding:
>> > 0.5 up, < 0.5 down, ties broken by rounding
>> to the next even unit
>>
>

> Then there's what I'll call "numerical analyst's rounding", intended
> to remove the bias of always rounding the same way at the boundary. I
> may be wrong here; I took the classes in 1973 ....
>
> > 0.5 (units) round away from zero,
> < 0.5 round towards zero
> = 0.5: round towards zero if next most significant digit is odd,
> otherwise away from zero.
>

Same as "Banker's" with the difference that you are rounding to an
odd, and Banker's rounded to an even...

B nar
0.5 0 1
1.5 2 1
2.5 2 3
3.5 4 3

sum 8.0 8 8
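The two tie-breaking rules in the table can be sketched side by side (hypothetical helper names; the exact tie test below is only valid for values whose fractional part is exactly representable in binary, like the .5 cases here, and for nonnegative inputs):

```python
import math

def round_half_even(x):
    # Banker's rounding: ties go to the nearest even integer.
    f = math.floor(x)
    if x - f == 0.5:
        return f if f % 2 == 0 else f + 1
    return math.floor(x + 0.5)

def round_half_odd(x):
    # "Numerical analyst's" variant: ties go to the nearest odd integer.
    f = math.floor(x)
    if x - f == 0.5:
        return f if f % 2 == 1 else f + 1
    return math.floor(x + 0.5)

vals = [0.5, 1.5, 2.5, 3.5]
print([round_half_even(v) for v in vals])  # [0, 2, 2, 4] -> sum 8
print([round_half_odd(v) for v in vals])   # [1, 1, 3, 3] -> sum 8
```

Both schemes remove the upward bias of round-half-up; they differ only in which neighbour wins the tie.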

Dennis Lee Bieber

Oct 2, 2002, 8:11:44 PM
Brian Quinlan fed this fish to the penguins on Wednesday 02 October
2002 12:48 am:

I suspect those classes were rigged where all measurements were to the
same significance (not necessarily decimal places). 123000 * 0.00321 =>
1.23E5 * 3.21E-3 -> 3.95E2 -> 395

So yes, on that basis, both 0.7 and 0.05 have one significant digit,
giving 0.04 to one significant digit.

Then again, when I was taking those classes, I ran a slipstick and was
thereby automatically limited to ~2.5 significant digits (depending on
which end of the stick).

My math instructor of the time actually started the year off with a
declaration that answers only needed to be to that significance -- his
background would have been slipstick. Calculators were too expensive
for us poor folk...

Nowadays the calculator is lunch money, getting a good slipstick is
two+ weeks of lunch.

Chris Gonnerman

Oct 2, 2002, 11:16:01 PM
----- Original Message -----
From: "Christian Tismer" <tis...@tismer.com>

> (On bases: Actually, the primes (2, 3, 5) would give a quite
> nice numeric base. They have lots of nice properties
> concerning harmony, and the multiples of the prime factors
> produce numbers people consider as "nice", most of the time.
> So my number base would be at least 30 :-) )

Didn't the ancient Babylonians use 60? I've read we have
them to blame for 360 degrees in a circle.

Just has another factor of 2... they must have been pretty
sharp mathematicians.

Chris Gonnerman

Oct 2, 2002, 11:12:51 PM
----- Original Message -----
From: "Steve Holden" <sho...@holdenweb.com>


> "Chris Gonnerman" <chris.g...@newcenturycomputers.net> wrote in
> message
> news:mailman.1033528684...@python.org...

> > I keep hearing this... what is the problem, exactly? So
> > far as I've ever been taught, there are only two rules:
> >
> > "Classic" rounding:
> > >= 0.5 (units) up, < 0.5 down
> >
> > "Banker's" rounding:
> > > 0.5 up, < 0.5 down, ties broken by rounding
> > to the next even unit
> >
> > (units = pennies in US)
> >
> > > so "banker's rounding" is the only thing that's there (it
> > > happens to be better for overall accuracy than biased
> > > "add a half and chop" rounding, so I was keener to give
> > > casual users something less likely to get them into
> > > trouble).
> >
> > I have to agree here. This is not IMHO the "least
> > surprising" choice, but given two options, it's the
> > most accurate.
>
> Then there's what I'll call "numerical analyst's rounding",
> intended to remove the bias of always rounding the same way
> at the boundary. I may be wrong here; I took the classes in
> 1973 ....
>
> > 0.5 (units) round away from zero,
> < 0.5 round towards zero
> = 0.5: round towards zero if next most significant digit
> is odd, otherwise away from zero.

Hmm. Given 5.5, rounding towards 0 would be 5.0, right?
So this is the exact opposite of Banker's rounding...
effectively rounding to an odd rather than even unit value.

I'm beginning to understand Tim's annoyance.

Tim Peters

Oct 2, 2002, 11:48:01 PM
[Chris Gonnerman, ponders Yet Another Bizarre & Ill-Specified Rounding
Rule]

> ...


> I'm beginning to understand Tim's annoyance.

I'm just playing "bad cop" here, so that when someone is finally shamed into
really helping Doug sort this mess out, they'll love him by comparison.

i'm-old-i-don't-need-to-be-liked<wink>-ly y'rs - tim


Bengt Richter

Oct 3, 2002, 3:08:38 AM
On Tue, 1 Oct 2002 23:22:16 -0400, "Tim Peters" <tim...@email.msn.com> wrote:

>[Bengt Richter]
>> This is unproved, but maybe with the help of timbot et al we can prove
>> the range over which is is safe and useful. The test run doesn't prove
>> anything, but I thought it interesting. Just run it as you would
>> your version.
>
>Note that you have no control over the accuracy of operations like 10.00**i,
>or of % formats, because Python has no control over them -- they're
>delegated to the platform libm pow() and libc sprintf(), and the quality of
>those varies massively across platforms. If you want to be safe, you can't
>leave such things to the competency of your platform library writers.
>Floating divmod is also a bitch, limited by the quality of the platform
>fmod().
I understand. OTOH, if you had an installer that automatically tested safety
using the available infrastructure, the large majority of platforms might be
able to get optimized speed, where the fallback would be a guaranteed
platform-independent implementation... Which gets you to the question of what
safety means. Probably it should mean always reproducing the exact results
of the fixed point at a given precision, within some range of operator input
values... Which gets to what the range should be. Range and precision could be
specified to the installer... Which gets to whether a given range and precision
will be adequate for a given application... which gets to whether the user
understands what is required and can communicate it... which gets to motivating
a platform-independent-limited-only-by-memory-you-decide-how-to-use-it implementation...
which (among other things) makes me think you're smart ;-)

>
>I'm unclear on why so many are defending binary fp to people who insist they
>don't want it. It's not like binary fp is in danger of going away in our
>lifetimes -- this is like "defending" Perl against Python <wink>.
>
Personally, I'm not defending fp, I'm just reacting to FUD and superstition.
Using 4 digits of fraction and calling it good has consequences too (don't
get me wrong, I know you know, but how many of the users will?) e.g.,

>>> from fixedpoint import FixedPoint as F
>>> def compound(yearlyRate, timesPerYear, years):
... rate = 1 + yearlyRate/timesPerYear
... acc = yearlyRate/yearlyRate # make one of proper type
... for i in xrange(timesPerYear*years):
... acc *= rate
... return acc
...
>>> rate4 = F('.06',4) # 6%/yr
>>> rate50 = F('.06',50) # 6%/yr
>>> ratef = .06
>>> for r in [rate4,rate50,ratef]: print `r`
...
FixedPoint('0.0600', 4)
FixedPoint('0.06000000000000000000000000000000000000000000000000', 50)
0.059999999999999998
>>> for r in [rate4, rate50, ratef]:
... print compound(r, 12, 30)
...
6.0237
6.02257521226321618405404680891614858781950546708965
6.02257521226
>>> for r in [rate4, rate50, ratef]:
... print compound(r, 365, 30)
...
8.7520
6.04875261233655216027138099585778986587670630410143
6.04875261234

If you want to borrow money at 6%/year, read the fine print ;-)

For monthly it's not so bad, but on a million bucks the difference might
pay for lunch by 2032 ;-)

>>> print F((compound(rate4,12,30)-compound(rate50,12,30))*1000000)
1124.79

...especially compounded daily ;-)

>>> print F((compound(rate4,365,30)-compound(rate50,365,30))*1000000)
2703247.39
>>> print F((compound(ratef,365,30)-compound(rate50,365,30))*1000000)
0.00
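The same experiment can be rerun today with the standard-library `decimal` module in place of FixedPoint. Note one assumption: `decimal`'s context precision counts *significant* digits, not fractional digits, so the low-precision run drifts differently from the FixedPoint numbers above (a sketch only):

```python
from decimal import Decimal, getcontext

def compound(yearly_rate, times_per_year, years):
    # Grow a unit balance one period at a time, as in the post above.
    rate = 1 + yearly_rate / times_per_year
    acc = yearly_rate / yearly_rate     # a "one" of the same type
    for _ in range(times_per_year * years):
        acc *= rate
    return acc

getcontext().prec = 50
print(compound(Decimal("0.06"), 12, 30))   # 6.0225752122632...

getcontext().prec = 4
print(compound(Decimal("0.06"), 12, 30))   # visibly drifts at 4 digits
```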

BTW, you probably noticed I wasn't really rounding in my evenround function --
I was just noodging the value so the subsequent round inside %.f looked like it
was doing decimal bankers round. I just played a hunch with the BIAS multiply.
Using FixedPoint as a final formatter for floating point should work better than
bias-kludging the ordinary round. It's probably about as good an estimate of the
nearest decimal of a given precision to a given fp number as you can get.

>>> for i in range(20): print i, F(.35,i)
...
0 0.
1 0.3
2 0.35
3 0.350
4 0.3500
5 0.35000
6 0.350000
7 0.3500000
8 0.35000000
9 0.350000000
10 0.3500000000
11 0.35000000000
12 0.350000000000
13 0.3500000000000
14 0.35000000000000
15 0.350000000000000
16 0.3500000000000000
17 0.34999999999999998
18 0.349999999999999978
19 0.3499999999999999778

BTW, FWIW, since the denominator is 1L << -e where you round the conversion,
the following should hold (I think, not very tested):

exactHalf = 1L << (-e-1)
n = top + exactHalf # ok round except banker's tie with odd no
wasTie = not (n & ((exactHalf<<2)-1L)) # odd => even and no 1's below
n = (n >> -e) - wasTie # readjust for bad tie result

assert n == _roundquotient(top, 1L << -e)

Whether it's worth doing depends on speeds which I haven't tested. Depends on divmod and cmp...
Out of steam for tonight...

BTW2, I wonder if it would be worth while to do a banker's round like round, but guaranteeing
that the result's magnitude will be the first fp state >= the true decimal value. It would
be biased minisculely, but it might be helpful for printing non-surprising decimals. str(f)
seems to be solving part of the problem, but an alternate %f (%F?) to control bankers round
at a given precision might be nice. A few bits from fixedpoint could be cannibalized.

Regards,
Bengt Richter

Paul Boddie

Oct 3, 2002, 6:29:03 AM
Tim Peters <tim...@comcast.net> wrote in message news:<mailman.1033584485...@python.org>...

>
> That's a different issue. The results of the given expression are
> platform-dependent even if float_res is exactly representable in binary fp.
> For example,
>
> >>> '%0.2f' % 1.125
> '1.13'
> >>>
>
> on Windows but produces '1.12' on most other platforms. This doesn't have
> to do with representation error (decimal 1.125 is exactly representable in
> binary fp), it has to do with tbe platform sprintf's *decimal* rounding
> policy in "exactly halfway" cases. That varies across platforms, even with
> identical FP HW.

Well, you're the recognised master of floating point, so I'll bow to
your expertise. In any case, my original point (or perhaps the
intended meaning of my original point) still stands: verifying decimal
arithmetic results using a combination of "unreliable" technologies
(floating point arithmetic and various "uncontrollable" rounding and
presentation techniques) isn't going to convince that many people that
the decimal arithmetic is wrong.

Paul

John Roth

Oct 3, 2002, 8:13:43 AM

"James J. Besemer" <j...@cascade-sys.com> wrote in message
news:mailman.103351224...@python.org...

>
> Tim Peters wrote:
>
> >No single rounding discipline is
> >suitable for all commercial applications,
> >
> Ain't it the Truth!!
>
> Businesses in the US are required to file quarterly reports about
> periodic tax payments made during the quarter. Taxes, e.g. on FICA and
> Medicare, are computed as a percentage of each employee's wages and
> deducted from the employee's paycheck each pay period. On the quarterly
> report, the actual liability is computed as a percentage of total
> payroll for the period.
>
> These two computations (sum of percentages and percentage of sums) do
> not always produce the same amount. And yet the amounts on the return
> have to match to the exact penny -- no rounding off to the nearest
> dollar (like on personal returns). Consequently, the form includes an
> explicit entry for "round off error," to correct for minor differences
> in computed liability vs. actual payments.

It gets even worse. Sometimes you've got to stagger which employees get
their pay, deductions and so forth rounded which way so that the total
payroll comes out close to the overall computation. And there's simply
no way to put that kind of logic into a basic arithmetic computation.

John Roth

John Roth

Oct 3, 2002, 8:24:07 AM

"Tim Peters" <tim...@comcast.net> wrote in message
news:mailman.1033584066...@python.org...

Exactly. The question is how. I'm leaning toward making it a property
of the fixed point number object, but I don't see how that works as far
as either literals or combining two fixed point objects with different
rounding policies would go.

The best prior art I can figure is from COBOL, where most of this
stuff happens today. There, the rounding policy is set by the _result_
picture, which is a concept that simply isn't available in Python (or
most other "modern" languages, for that matter.)

And even there, oddball rounding policies require fixup code because
the language doesn't include anything other than round up if the
fraction
is >= .5.

John Roth


John Roth

Oct 3, 2002, 8:31:14 AM

"Chris Gonnerman" <chris.g...@newcenturycomputers.net> wrote in
message news:mailman.1033614847...@python.org...

> ----- Original Message -----
> From: "Christian Tismer" <tis...@tismer.com>
>
>
> > (On bases: Actually, the primes (2, 3, 5) would give a quite
> > nice numeric base. They have lots of nice properties
> > concerning harmony, and the multiples of the prime factors
> > produce numbers people consider as "nice", most of the time.
> > So my number base would be at least 30 :-) )
>
> Didn't the ancient Babylonians use 60? I've read we have
> them to blame for 360 degrees in a circle.

They actually based things on 360 (close approximation to
the number of days in a year.) Using 60 cut the number of
actual numbers they had to deal with down to something
more reasonable.

And that was for "scientific" computation. Commercial
computation, IIRC, used much simpler arithmetic. I have
yet to see an abacus or counting table that was base 60!


>
> Just has another factor of 2... they must have been pretty
> sharp mathematicians.

John Roth

Aahz

Oct 3, 2002, 1:44:11 PM
In article <mailman.1033616947...@python.org>,

Tim Peters <tim...@comcast.net> wrote:
>
>i'm-old-i-don't-need-to-be-liked<wink>-ly y'rs - tim

Can I be you when I grow up?

Bengt Richter

Oct 3, 2002, 2:16:55 PM
On 3 Oct 2002 07:08:38 GMT, bo...@oz.net (Bengt Richter) wrote:
[...]

>
>BTW, you probably noticed I wasn't really rounding in my evenround function --
>I was just noodging the value so the subsequent round inside %.f looked like it
>was doing decimal bankers round. I just played a hunch with the BIAS multiply.
>Using FixedPoint as a final formatter for floating point should work better than
>bias-kludging the ordinary round. It's probably about as good an estimate of the
>nearest decimal of a given precision to a given fp number as you can get.
>
[...]

>
>BTW, FWIW, since the denominator is 1L << -e where you round the conversion,
>the following should hold (I think, not very tested):
>
> exactHalf = 1L << (-e-1)
> n = top + exactHalf # ok round except banker's tie with odd no
> wasTie = not (n & ((exactHalf<<2)-1L)) # odd => even and no 1's below
> n = (n >> -e) - wasTie # readjust for bad tie result
>
> assert n == _roundquotient(top, 1L << -e)
>
>Whether it's worth doing depends on speeds which I haven't tested. Depends on divmod and cmp...
>Out of steam for tonight...
>
Not enough steam to get it right ;-/ -- this should work better and faster:

exactHalf = 1L << (-e-1)
wasTie = not (top & (exactHalf-1L)) # no 1's below
n = top + exactHalf # ok round except banker's tie => odd
n = (n >> -e)
n -= n & wasTie # readjust for bad tie result


assert n == _roundquotient(top, 1L << -e)
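Restating the shift trick with a positive shift count `e` (so the divisor is `1 << e` rather than `1 << -e`) and checking it against a straightforward divmod-based banker's round -- an illustrative sketch for nonnegative numerators only, not the fixedpoint.py internals:

```python
def round_half_even_div(top, bottom):
    # Reference: round top/bottom to the nearest integer, ties to even.
    n, leftover = divmod(top, bottom)
    doubled = 2 * leftover
    if doubled > bottom or (doubled == bottom and n & 1):
        n += 1
    return n

def round_half_even_shift(top, e):
    # Shift-based equivalent for bottom == 1 << e, top >= 0.
    exact_half = 1 << (e - 1)
    was_tie = (top & ((exact_half << 1) - 1)) == exact_half
    n = (top + exact_half) >> e     # round half up...
    n -= n & was_tie                # ...then fix ties that landed odd
    return n

# Exhaustive check for divisor 8 over a small range:
for top in range(256):
    assert round_half_even_shift(top, 3) == round_half_even_div(top, 8)
```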

Christian Tismer

Oct 3, 2002, 2:55:03 PM
Chris Gonnerman wrote:
> ----- Original Message -----
> From: "Christian Tismer" <tis...@tismer.com>
>
>
>>(On bases: Actually, the primes (2, 3, 5) would give a quite
>>nice numeric base. They have lots of nice properties
>>concerning harmony, and the multiples of the prime factors
>>produce numbers people consider as "nice", most of the time.
>>So my number base would be at least 30 :-) )
>
>
> Didn't the ancient Babylonians use 60? I've read we have
> them to blame for 360 degrees in a circle.
>
> Just has another factor of 2... they must have been pretty
> sharp mathematicians.

Right. And see how simple they built their positional
system, using just two symbols:
http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Babylonian_numerals.html

Base ten is embedded into base 60. They could have done
it with 30 as well, but I have no idea why they chose
60, exactly. The above article gives some
discussion about that.

ciao - chris

James J. Besemer

Oct 3, 2002, 3:54:25 PM

Christian Tismer wrote:

> Base ten is embedded into base 60. They could have done
> it with 30 as well, but I have no idea why they have
> chosen this, exactly.

I expect because 60 is also divisible by 12.

Bengt Richter

Oct 3, 2002, 8:51:40 PM

Sheesh ;-0
exactHalf = 1L << (-e-1)
wasTie = (top & ((exactHalf<<1)-1L))==exactHalf


n = top + exactHalf # ok round except banker's tie => odd
n = (n >> -e)
n -= n & wasTie # readjust for bad tie result
assert n == _roundquotient(top, 1L << -e)

I was too anxious to avoid the shift. Thanks for not stomping all over me ;-)
I'm testing a debugging class using sys.settrace, so the above is not getting thorough
attention, except that I can now look at it with the new tool ;-)

Greg Ewing

Oct 3, 2002, 9:59:24 PM
Christian Tismer wrote:

> And see how simple they built their positional
> system, using just two symbols:
> http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Babylonian_numerals.html


Although I suspect that scribes would quickly
have learned to recognise each "cluster" of
marks, drawn in a recognisable pattern,
as a single symbol, in which case there
were really 14 symbols (5 + 9).

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Greg Ewing

Oct 3, 2002, 10:02:53 PM
John Roth wrote:

> The best prior art I can figure is from COBOL, where most of this
> stuff happens today. There, the rounding policy is set by the _result_
> picture, which is a concept that simply isn't available in Python


Perhaps intermediate results could be
accumulated using arbitrary precision,
and then a rounding function of your
choice applied, e.g.

d = bankers_round(a * b + c, 2)
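With today's standard library that helper exists almost verbatim: `decimal` carries exact intermediate values and quantizes with an explicit rounding mode. A sketch of the idea, not Greg's actual code:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def bankers_round(value, places):
    # Quantize to `places` decimal digits, ties to the even neighbour.
    quantum = Decimal(10) ** -places
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_EVEN)

# Intermediate math stays exact; rounding happens once, at the end:
a, b, c = Decimal("1.05"), Decimal("2.5"), Decimal("0.005")
print(bankers_round(a * b + c, 2))   # 2.63  (2.625 + 0.005, no tie)

# Exact-decimal ties go to the even final digit:
print(bankers_round("2.675", 2))     # 2.68
print(bankers_round("2.665", 2))     # 2.66
```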

Christian Tismer

Oct 3, 2002, 10:31:14 PM
Greg Ewing wrote:
> Christian Tismer wrote:
>
>> And see how simple they built their positional
>> system, using just two symbols:
>> http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Babylonian_numerals.html
>
>
>
>
> Although I suspect that scribes would quickly
> have learned to recognise each "cluster" of
> marks, drawn in a recognisable pattern,
> as a single symbol, in which case there
> were really 14 symbols (5 + 9).

Right. Remarkably they did *lack a zero* !!!

Robin Becker

Oct 4, 2002, 1:13:32 PM
In article <mailman.1033498683...@python.org>, Tim Peters
<tim...@comcast.net> writes
>I don't see what this shows beyond that FixedPoint implements "banker's
>rounding" (round the infinitely precise result to the nearest retained
>digit, but in case of tie round to the nearest even number), while you seem
>to prefer "add a half and chop" rounding. No single rounding discipline is
>suitable for all commercial applications, and I hope the FixedPoint project
>finds a way to let users specify the rounding *their* app needs.
>Unfortunately, I was never able to find a FixedPoint user who was able to
>articulate the rounding rules they were trying to emulate <wink/sigh>.
The banker's and similar rounding schemes are intended to be fairer,
i.e. some 0.5 cases are rounded down and others rounded up.

I seem to remember that digits are unevenly distributed (Benford's law?)
so does that affect the 'fairness' of assuming an equal number of even
and odd cases?
--
Robin Becker

Tim Peters

Oct 4, 2002, 1:53:14 PM
[Robin Becker]

> The bankers and similar rounding schemes are intended to be fairer ie
> some 0.5 cases are rounded down and others rounded up.

Right.

> I seem to remember that digits are unevenly distributed (Benford's law?)

In some cases, and when it does apply, Benford's Law applies to the
*leading* digit; but rounding has to do with the trailing digit:

http://mathworld.wolfram.com/BenfordsLaw.html

> so does that affect the 'fairness' of assuming an equal number of even
> and odd cases?

Not in Benford's Law cases, no. That doesn't mean no cases exist in which
bias could be introduced, but worry about that is a tail-wagging-the-dog
kind of thing.


Piet van Oostrum

Oct 4, 2002, 3:57:34 PM
>>>>> Christian Tismer <tis...@tismer.com> (CT) writes:

CT> Greg Ewing wrote:
>> Christian Tismer wrote:
>>
>>> And see how simple they built their positional
>>> system, using just two symbols:
>>> http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Babylonian_numerals.html
>> Although I suspect that scribes would quickly
>> have learned to recognise each "cluster" of
>> marks, drawn in a recognisable pattern,
>> as a single symbol, in which case there
>> were really 14 symbols (5 + 9).

CT> Right. Remarkably they did *lack a zero* !!!

They used an empty space for the zero. But I saw in a museum a tablet
where the scribe made a calculation error because he failed to notice this
"zero".
--
Piet van Oostrum <pi...@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: P.van....@hccnet.nl

Bengt Richter

Oct 5, 2002, 1:11:19 AM
On 4 Oct 2002 00:51:40 GMT, bo...@oz.net (Bengt Richter) wrote:

>On 3 Oct 2002 18:16:55 GMT, bo...@oz.net (Bengt Richter) wrote:
>
>>On 3 Oct 2002 07:08:38 GMT, bo...@oz.net (Bengt Richter) wrote:
>>[...]
>>>

[...]


>I'm testing a debugging class using sys.settrace, so the above is not getting thorough
>attention, except that I can now look at it with the new tool ;-)

It's called tracewatch.py, and defines a single class TraceWatch
Here's a little log of using it on the unchanged fixedpoint.py:
(I defined a formatting function (bits below) to monitor the variables in binary,
also right-justifying on the next line, so things line up).

>>> from fixedpt.fixedpoint import FixedPoint as F
>>> from tracewatch import TraceWatch as TW
>>> tw = TW()
>>> from miscutil import prb
>>> def bits(x):
... return '\n%60s' % prb(x)
...
>>> tw.addwatch('_roundquotient','#f', bits) #watch calls and returns
>>> tw.addwatch('n','_roundquotient', bits) #watch n
>>> tw.addwatch('leftover','_roundquotient', bits)
>>> tw.addwatch('c','_roundquotient', bits)
>>> tw.addwatch('x','_roundquotient', bits)
>>> tw.addwatch('y','_roundquotient', bits)
>>> F(1.125)
FixedPoint('1.12', 2)
>>> tw.on()
>>> F(1.125)
--------------------------------------------------------------------
File: "fixedpoint.py"
Line [scope] C:all, R:eturn eX:cept S:tack N:ew M:od E:quiv U:nbound
---- ------- -------------------------------------------------------
447 [_roundquotient]: C: _roundquotient(x=15099494400L, y=134217728L)
N: y =
1000000000000000000000000000
N: x =
1110000100000000000000000000000000
N: n =
1110000
N: leftover =
100000000000000000000000000
454 [_roundquotient]:
N: c =
0
456 [_roundquotient]:
458 [_roundquotient]: R: _roundquotient(...) =>
1110000
FixedPoint('1.12', 2)

Doing the same thing, but just monitoring _tento with default output
formatting is interesting because of the exception:

>>> from fixedpt.fixedpoint import FixedPoint as F
>>> from tracewatch import TraceWatch as TW
>>> tw = TW()
>>> tw.addwatch('_tento','#f') # watch calls _tento and returns
>>> tw.on()
>>> F(1.125)
--------------------------------------------------------------------
File: "fixedpoint.py"
Line [scope] C:all, R:eturn eX:cept S:tack N:ew M:od E:quiv U:nbound
---- ------- -------------------------------------------------------
405 [_tento]: C: _tento(n=2, cache={})
408 [_tento]: X: KeyError(2)
S: < set_precision:246 < __init__:133 < ?:1
411 [_tento]: R: _tento(...) => 100L
405 [_tento]: C: _tento(n=2, cache={2: 100L})
408 [_tento]: R: _tento(...) => 100L
405 [_tento]: C: _tento(n=2, cache={2: 100L})
408 [_tento]: R: _tento(...) => 100L
FixedPoint('1.12', 2)
>>> F(1.125)
405 [_tento]: C: _tento(n=2, cache={2: 100L})
408 [_tento]: R: _tento(...) => 100L
405 [_tento]: C: _tento(n=2, cache={2: 100L})
408 [_tento]: R: _tento(...) => 100L
405 [_tento]: C: _tento(n=2, cache={2: 100L})
408 [_tento]: R: _tento(...) => 100L
FixedPoint('1.12', 2)
>>> F(1.125,3)
405 [_tento]: C: _tento(n=3, cache={2: 100L})
408 [_tento]: X: KeyError(3)
S: < set_precision:246 < __init__:133 < ?:1
411 [_tento]: R: _tento(...) => 1000L
405 [_tento]: C: _tento(n=3, cache={2: 100L, 3: 1000L})
408 [_tento]: R: _tento(...) => 1000L
405 [_tento]: C: _tento(n=3, cache={2: 100L, 3: 1000L})
408 [_tento]: R: _tento(...) => 1000L
FixedPoint('1.125', 3)
>>> F(1.125,3)
405 [_tento]: C: _tento(n=3, cache={2: 100L, 3: 1000L})
408 [_tento]: R: _tento(...) => 1000L
405 [_tento]: C: _tento(n=3, cache={2: 100L, 3: 1000L})
408 [_tento]: R: _tento(...) => 1000L
405 [_tento]: C: _tento(n=3, cache={2: 100L, 3: 1000L})
408 [_tento]: R: _tento(...) => 1000L
FixedPoint('1.125', 3)
>>> tw.show()
-------------------------------------------------------------
B: watch Binding F: watch Function calls/rets T: Trace scope
X: name scope formatter
-- --------------------- ------------------- --------------
F: _tento _tento repr

It's evolved, but it's not finished evolving. There's only one thing to import,
so it's easy to use. But I am not finished messing with it yet. I want to be
able to monitor instance and class variables, and also *args,**kwargs.
Also redirectable output and simultaneous log with interactive sessions.

As you can see, it monitors a name binding for different kinds of changes:

N:ew - new binding to different-valued object,
M:od - old binding to same but modified object
E:quiv - new binding to new object of equivalent value
U:nbound - no binding at all

>>> from tracewatch import TraceWatch as TW
>>> tw = TW()
>>> tw.addwatch('m','?') # m in the interactive scope
>>> tw.on()
>>> if 1:
... m=1
... m+=1
... m = [1]
... m += [2]
... m = [1, 2]
... del m
...
--------------------------------------------------------------------
File: "<stdin>"
Line [scope] C:all, R:eturn eX:cept S:tack N:ew M:od E:quiv U:nbound
---- ------- -------------------------------------------------------
N: m = 1
3 [?]:
N: m = 2
4 [?]:
N: m = [1]
5 [?]:
M: m = [1, 2]
6 [?]:
E: m = [1, 2]
7 [?]:
U: m = Unbound
7 [?]:
>>> m=99
This is a new binding
N: m = 99
1 [?]:
>>> m=99
>>> m=99
>>> m=99
but this is the identical 99, so no trigger
>>> m=100
N: m = 100
1 [?]:
>>> m=100
E: m = 100
the "E" says it's new but Equivalent, so we know where the magic set ends for 2.2 ;-)
1 [?]:
>>> m=100
E: m = 100
1 [?]:

Regards,
Bengt Richter

Tim Peters

Oct 5, 2002, 1:09:21 PM
[Bengt Richter]

> I understand. OTOH, if you had an installer that automatically
> tested safety using the available infrastructure, the large majority
> of platforms might be able to get optimized speed, where the fallback
> would be a guaranteed platform-independent implementation... Which
> gets you to the question of what safety means.

To me, it gets more to the question of what speed means <wink>. People
mucking with dollars and cents generally do two things a lot: I/O
conversions, and additions. Addition is fastest if using (conceptually
scaled) ints internally, but I/O conversions fastest if using a BCD-like
scheme internally. Fancy float alternatives are slower for both.

For this reason, I don't think it's worth investing any brainpower in
figuring out just how much you can do in binary fp and still get the same
results. As an intellectual exercise in the analysis and control of fp
rounding errors, it's a good one, but the more-or-less obvious all-integer
and all-BCD approaches are obviously correct instead of (at best) delicately
correct, and also run faster for most appropriate apps.

win/win-vs-lose/lose-it's-not-a-hard-choice<wink>-ly y'rs - tim
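The scaled-integer approach Tim describes is short enough to sketch (hypothetical helper names, nonnegative amounts with exactly two fractional digits assumed):

```python
def parse_dollars(s):
    # "12.34" -> 1234 cents; assumes a nonnegative amount written
    # with exactly two fractional digits.
    dollars, cents = s.split(".")
    return int(dollars) * 100 + int(cents)

def format_cents(c):
    # 1234 cents -> "12.34"; nonnegative amounts only.
    return "%d.%02d" % divmod(c, 100)

# Addition is exact -- it's plain integer arithmetic:
total = parse_dollars("19.99") + parse_dollars("0.01")
print(format_cents(total))   # 20.00
```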


Cameron Laird

Oct 14, 2002, 8:23:49 PM
In article <mailman.103299793...@python.org>,
Delaney, Timothy <tdel...@avaya.com> wrote:
.
.
.
>It's called Fixed Point arithmetic, and strangely enough, Tim Peters wrote
>an excellent implementation ...
>
>Tim Delaney
>

Someone who wants to pursue this, and likes the motivation
of a change to improve on the timbot, might consider a
Pythonic implementation of so-called "adaptive integers"
<URL: http://wiki.tcl.tk/4329 >. For a useful range of
data, these occupy *less* space than "normal" computer-
encoded numbers, and exhibit precision limited only by
available memory.

The URL above gives the most straightforward implementation.
Myself, I like a variant which encodes rational arithmetic ...
--

Cameron Laird <Cam...@Lairds.com>
Business: http://www.Phaseit.net
Personal: http://phaseit.net/claird/home.html
