Unexpected behaviour of math.floor, round and int functions (rounding)


René Silva Valdés

Nov 18, 2021, 9:30:51 PM
Hello, I would like to report the following issue:

Working with floats, I noticed that:

int(23.99999999999999/12) returns 1, and
int(23.999999999999999/12) returns 2

This implies that the int() function is rounding, which doesn't appear to be
expected (the documentation doesn't say anything about it). Looking further, I
noticed that

0.5 + 0.49999999999999994 returns 1.0. This seems to be related to
double-precision operations in C, where 0.49999999999999994 is the
greatest floating-point value less than 0.5. Building on this, several
examples can be constructed, such as:

round(0+0.49999999999999994) returns 0, and
round(1+0.49999999999999994) returns 2
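
The exact doubles behind these literals can be inspected with the standard
decimal module: 0.49999999999999994 is the largest double below 0.5, and adding
1 to it lands on the double 1.5, which round() then takes to the even neighbour
2 (a quick sketch, CPython 3.x):

>>> from decimal import Decimal
>>> Decimal(0.49999999999999994)   # the largest double below 0.5
Decimal('0.499999999999999944488848768742172978818416595458984375')
>>> 1 + 0.49999999999999994        # the sum rounds up to the double 1.5
1.5
>>> round(1.5)                     # ties round to the even integer
2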

This seems to be a known issue in Java (see reference)


Reference:

https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6430675

I hope this information is helpful, thanks in advance for reading this,

Kind regards,

René

2QdxY4Rz...@potatochowder.com

Nov 18, 2021, 9:40:58 PM
On 2021-11-18 at 23:16:32 -0300,
René Silva Valdés <rene.sil...@gmail.com> wrote:

> Hello, I would like to report the following issue:
>
> Working with floats i noticed that:
>
> int(23.99999999999999/12) returns 1, and
> int(23.999999999999999/12) returns 2
>
> This implies that int() function is rounding ...

It's not int() that's doing the rounding; that second numerator is being
rounded before being divided by 12:

Python 3.9.7 (default, Oct 10 2021, 15:13:22)
[GCC 11.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 23.999999999999999
24.0
>>> (23.999999999999999).hex()
'0x1.8000000000000p+4'
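
If hex mantissas are unfamiliar, the same rounding of the literal can be seen in
decimal form; a small sketch along the same lines, using the standard decimal
module:

>>> from decimal import Decimal
>>> Decimal(23.99999999999999)     # one nine fewer: still a distinct double below 24
Decimal('23.999999999999989341858963598497211933135986328125')
>>> Decimal(23.999999999999999)    # this literal is already the double 24.0
Decimal('24')
>>> (23.999999999999999).as_integer_ratio()
(24, 1)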

MRAB

Nov 18, 2021, 9:54:33 PM
I think this is a bit clearer because it shows that it's not just being
rounded for display:

Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929
64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 23.99999999999999 == 24
False
>>> 23.999999999999999 == 24
True
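
For reference, the nearest representable neighbour below 24.0 can be obtained
directly with math.nextafter (available since Python 3.9); any literal closer to
24 than halfway to that neighbour is stored as 24.0:

>>> import math
>>> math.nextafter(24.0, 0.0)      # the largest double below 24.0
23.999999999999996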

Julio Di Egidio

Nov 19, 2021, 2:05:37 AM
On Friday, 19 November 2021 at 03:30:51 UTC+1, René Silva Valdés wrote:

> Hello, I would like to report the following issue:
> This seems to be related to double numbers' [...]

What every *programmer* should know about floating point:

<https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html>

Yes, it's definitely a known issue... ;)

HTH,

Julio

ast

Nov 19, 2021, 6:43:27 AM
>>> 0.3 + 0.3 + 0.3 == 0.9
False

ast

Nov 19, 2021, 6:47:27 AM
Better to use math.isclose to test equality between two floats:

>>> import math
>>> math.isclose(0.3 + 0.3 + 0.3, 0.9)
True
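
math.isclose takes rel_tol and abs_tol keyword arguments; the absolute tolerance
matters in particular when one of the values is zero (a short sketch):

>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)               # default rel_tol is 1e-09
True
>>> math.isclose(1e-10, 0.0)                   # relative tolerance alone never matches zero
False
>>> math.isclose(1e-10, 0.0, abs_tol=1e-9)
True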

Mats Wichmann

Nov 19, 2021, 2:52:46 PM
On 11/18/21 19:40, 2QdxY4Rz...@potatochowder.com wrote:
> On 2021-11-18 at 23:16:32 -0300,
> René Silva Valdés <rene.sil...@gmail.com> wrote:
>
>> Hello, I would like to report the following issue:
>>
>> Working with floats i noticed that:
>>
>> int(23.99999999999999/12) returns 1, and
>> int(23.999999999999999/12) returns 2
>>
>> This implies that int() function is rounding ...
>
> It's not int() that's doing the rounding; that second numerator is being
> rounded before being divided by 12:
>
> Python 3.9.7 (default, Oct 10 2021, 15:13:22)
> [GCC 11.1.0] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> 23.999999999999999
> 24.0
> >>> (23.999999999999999).hex()
> '0x1.8000000000000p+4'
>

The documentation has a fair bit to say on the subject of floating
point. I never remember the precise link, but fortunately it's pretty
easy to search for:

https://docs.python.org/3/tutorial/floatingpoint.html
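
The tutorial's central example takes only a few lines to reproduce; the stored
values are exact, and the surprise comes from converting the decimal literals:

>>> 0.1 + 0.2 == 0.3
False
>>> 0.1 + 0.2
0.30000000000000004
>>> format(0.1, '.20f')            # the double closest to 0.1, to 20 places
'0.10000000000000000555'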

Chris Angelico

Nov 19, 2021, 3:18:24 PM
That's because 0.3 is not 3/10. It's not because floats are
"unreliable" or "inaccurate". It's because the ones you're entering
are not what you think they are.

When will people understand this?

(Probably never. Sigh.)
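
What the literal 0.3 actually denotes can be read off directly with the
fractions module (a quick sketch):

>>> from fractions import Fraction
>>> Fraction(0.3)                  # the exact value of the double written as 0.3
Fraction(5404319552844595, 18014398509481984)
>>> Fraction(0.3) == Fraction(3, 10)
False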

ChrisA

dn

Nov 19, 2021, 3:38:34 PM
On 20/11/2021 09.17, Chris Angelico wrote:
> On Sat, Nov 20, 2021 at 5:08 AM ast <ast@invalid> wrote:
>> Le 19/11/2021 à 03:51, MRAB a écrit :
>>> On 2021-11-19 02:40, 2QdxY4Rz...@potatochowder.com wrote:
>>>> On 2021-11-18 at 23:16:32 -0300,
>>>> René Silva Valdés <rene.sil...@gmail.com> wrote:
>>>>> Working with floats i noticed that:
>>>>> int(23.99999999999999/12) returns 1, and
>>>>> int(23.999999999999999/12) returns 2


Has the OP (now) realised that the observation is a "feature", not a
"bug"? It is one of the difficulties of representing small numbers or
fractional components in binary - there are many decimal values which
cannot be accurately expressed in binary - exactly as noted.

...
>> >>> 0.3 + 0.3 + 0.3 == 0.9
>> False
>
> That's because 0.3 is not 3/10. It's not because floats are
> "unreliable" or "inaccurate". It's because the ones you're entering
> are not what you think they are.
>
> When will people understand this?
> (Probably never. Sigh.)


Am not aware of any institution which teaches the inner-workings of a
CPU/ALU/FPU/GPU in a general programming class, ie "Programming" and
particularly "Coding", have diverged from "Computer Science" - in at
least this respect.

As well as the approximations involved in trying to maintain
decimal numbers (floats/floating-point numbers), and particularly values
to the 'right' of the decimal point; we had to study/suffer classes in
"Numerical Analysis" and be able to gauge the declining accuracy and
precision of sundry calculations. A skill disappearing as fast as
slide-rules!?

This 'pool of ignorance' is particularly noticeable in folk who have
come 'up' through the 'CodeCamp'/'BootCamp' approach to training. On the
other hand, if one is not intending to 'get into' a scientific or highly
mathematical branch of computing/Python, eg commercial applications
using (only) Decimal (or int), the average web-app, and similar; why
bother?
--
Regards,
=dn

Julio Di Egidio

Nov 19, 2021, 4:20:14 PM
On Friday, 19 November 2021 at 21:38:34 UTC+1, dn wrote:
> On 20/11/2021 09.17, Chris Angelico wrote:
> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast@invalid> wrote:
> >> Le 19/11/2021 à 03:51, MRAB a écrit :
> >>> On 2021-11-19 02:40, 2QdxY4Rz...@potatochowder.com wrote:
> >>>> On 2021-11-18 at 23:16:32 -0300,
> >>>> René Silva Valdés <rene.sil...@gmail.com> wrote:
> >>>>> Working with floats i noticed that:
> >>>>> int(23.99999999999999/12) returns 1, and
> >>>>> int(23.999999999999999/12) returns 2
> Has the OP (now) realised that the observation is a "feature" not a
> "bug"? It is one of the difficulties of representing small numbers or
> numerical-components in binary - there are many decimal values which
> cannot be accurately-expressed in binary - exactly as noted.

It's not "binary", it's *standard floating-point*: there exist countless other systems that are lossless up to arbitrary precision arithmetic and even intervallistic methods for correctness over the real numbers.

> >> >>> 0.3 + 0.3 + 0.3 == 0.9
> >> False
> >
> > That's because 0.3 is not 3/10. It's not because floats are
> > "unreliable" or "inaccurate". It's because the ones you're entering
> > are not what you think they are.
> >
> > When will people understand this?
> > (Probably never. Sigh.)
> Am not aware of any institution which teaches the inner-workings of a
> CPU/ALU/FPU/GPU in a general programming class, ie "Programming" and
> particularly "Coding", have diverged from "Computer Science" - in at
> least this respect.
>
> As well as the approximations involved in trying to maintain
> decimal-numbers (floats/floating-point numbers), and particularly values
> to the 'right' of a decimal-point; we had to study?suffer classes in
> "Numerical Analysis" and be able to gauge the declining accuracy and
> precision of sundry calculations. A skill disappearing as fast as
> slide-rules!?
>
> This 'pool of ignorance' is particularly noticeable in folk who have
> come 'up' through the 'CodeCamp'/'BootCamp' approach to training. On the
> other hand, if one is not intending to 'get into' a scientific or highly
> mathematical branch of computing/Python, eg commercial applications
> using (only) Decimal (or int), the average web-app, and similar; why
> bother?

You rather echo that very nonsense. "Computer science", despite the misnomer (it's called Informatics around Europe), is about data structures, algorithms, computability and complexity theory, and the like. It is a branch of *mathematics* and it has *zero* to do with computer architecture and/or programming proper (*). Indeed, on the other hand, my dear complete ass, I am not sure what happens around there but at least in my country computer architecture belongs to the first chapter of the first introductory course to programming in the software/information engineering course...

Indeed, this generalised emphasis on *computer science* is *completely misplaced* and is just another manifestation of the more general total fraud that our industry but really the entire (globalised) world has become.

And I'll stop there, but you by all means just keep going.

(HTH.)

Julio

(*) Yet you have been warned by your very own champion Nuts with his proverbial disclaimer: and if you do not understand what that means, you are utterly incompetent in computer science to begin with.

Chris Angelico

Nov 19, 2021, 4:21:48 PM
On Sat, Nov 20, 2021 at 7:39 AM dn via Python-list
<pytho...@python.org> wrote:
> >> >>> 0.3 + 0.3 + 0.3 == 0.9
> >> False
> >
> > That's because 0.3 is not 3/10. It's not because floats are
> > "unreliable" or "inaccurate". It's because the ones you're entering
> > are not what you think they are.
> >
> > When will people understand this?
> > (Probably never. Sigh.)
>
>
> Am not aware of any institution which teaches the inner-workings of a
> CPU/ALU/FPU/GPU in a general programming class, ie "Programming" and
> particularly "Coding", have diverged from "Computer Science" - in at
> least this respect.
>

I think what I find annoying about this sort of thing is that people
triumphantly announce that the computer is WRONG. It's the numeric
equivalent of XKCD 169, and people get smug for the exact same reason,
and perhaps unfortunately, do not get their arms cut off.

"Computer, give me a number as close as possible to three tenths."

"HAH! THAT ISN'T THREE TENTHS! Hah you suck!"

*slash*

ChrisA

Ben Bacarisse

Nov 19, 2021, 4:55:48 PM
Most people understand what's going on when it's explained to them. And
I think that being initially baffled is not unreasonable. After all,
almost everyone comes to computers after learning that 3/10 can be
written as 0.3. And Python "plays along" with the fiction to some
extent. 0.3 prints as 0.3, 3/10 prints as 0.3 and 0.3 == 3/10 is True.
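
Both sides of that last comparison are floats, which is why the fiction holds;
only an exact rational breaks the illusion (a small sketch):

>>> 0.3 == 3/10                    # both sides are the same double
True
>>> from fractions import Fraction
>>> Fraction(3, 10) == 0.3         # an exact rational against the double
False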

The language (a language, not Python) could tell you that you were not
getting the value you asked for. Every 0.3 could come with a warning
that 0.3 can not be represented exactly as a floating point value. To
avoid the warning, the programmer would write ~0.3 meaning, exactly, the
binary (or whatever the base really is) floating point number closest to
0.3.

--
Ben.

Chris Angelico

Nov 19, 2021, 5:16:09 PM
On Sat, Nov 20, 2021 at 9:07 AM Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>
> Chris Angelico <ros...@gmail.com> writes:
>
> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast@invalid> wrote:
>
> >> >>> 0.3 + 0.3 + 0.3 == 0.9
> >> False
> >
> > That's because 0.3 is not 3/10. It's not because floats are
> > "unreliable" or "inaccurate". It's because the ones you're entering
> > are not what you think they are.
> >
> > When will people understand this?
> >
> > (Probably never. Sigh.)
>
> Most people understand what's going on when it's explained to them. And
> I think that being initially baffled is not unreasonable. After all,
> almost everyone comes to computers after learning that 3/10 can be
> written as 0.3. And Python "plays along" with the fiction to some
> extent. 0.3 prints as 0.3, 3/10 prints as 0.3 and 0.3 == 3/10 is True.

In grade school, we learn that not everything can be written that way,
and 1/3 isn't actually equal to 0.3333333333. Yet somehow people
understand that computers speak binary ("have you learned to count
yet, or are you still on zeroes and ones?" -- insult to a machine
empire, in Stellaris), but don't seem to appreciate that floats are
absolutely accurate and reliable, just in binary.

But lack of knowledge is never a problem. (Or rather, it's a solvable
problem, and I'm always happy to explain things to people.) The
problem is when, following that lack of understanding, people assume
that floats are "unreliable" or "inaccurate", and that you should
never ever compare two floats for equality, because they're never
"really equal". That's what leads to horrible coding practices and
badly-defined "approximately equal" checks that cause far more harm
than a simple misunderstanding ever could on its own.

ChrisA

dn

Nov 19, 2021, 5:21:58 PM
On 20/11/2021 10.21, Chris Angelico wrote:
> On Sat, Nov 20, 2021 at 7:39 AM dn via Python-list
> <pytho...@python.org> wrote:
>>>> >>> 0.3 + 0.3 + 0.3 == 0.9
>>>> False
>>>
>>> That's because 0.3 is not 3/10. It's not because floats are
>>> "unreliable" or "inaccurate". It's because the ones you're entering
>>> are not what you think they are.
>>>
>>> When will people understand this?
>>> (Probably never. Sigh.)
>>
>>
>> Am not aware of any institution which teaches the inner-workings of a
>> CPU/ALU/FPU/GPU in a general programming class, ie "Programming" and
>> particularly "Coding", have diverged from "Computer Science" - in at
>> least this respect.
>>
>
> I think what I find annoying about this sort of thing is that people
> triumphantly announce that the computer is WRONG. It's the numeric
> equivalent of XKCD 169, and people get smug for the exact same reason,
> and perhaps unfortunately, do not get their arms cut off.


I guess I'm a little more familiar with the 'arrogant' side of this
phenomenon because in formal courses, eg uni, I'd predict at least one
such personality per group of 50~60 students!

However, cultural-components should be considered. There is no
requirement that the OP possess an advanced command of the English
language! (nor of Python, nor of computers/computing, ...) - see OP's name!


That said, it can be frustrating for those of us who see particular
problems over-and-over. If it were evidenced in a course, I would be
addressing a short-coming in the training materials - but there is no
one Python-course to suit us all...

Another behavior is to assume that because 'I' learned something
years-ago, so should 'everyone else'. Even superficial reflection
reveals that this just isn't true - particularly when we have an
education system which is largely based upon 'date of manufacture'!
(just because I'm older than you doesn't make me 'smarter' - although I
am definitely better-looking!)


The problem is not limited to the limitations of (hah!) floating-point
arithmetic. We've seen other questions 'here' and on "Tutor", which
illustrate lack(s) of understanding of basic relationships between
Python (software) and the hardware which runs its instructions, eg the
basic simplicity of using a 'paper computer' for debugging, the order of
precedence, that the RHS must happen before we can consider anything on
the LHS, ... I've met 'graduates' who have never even peered 'under the
hood' of a computer!

(Incidentally, I recently recommended against employing one such
otherwise extremely promising ("on paper" and 'in person') candidate.
Quizzed by A.N.Other on the panel (who was suitably impressed), the
justification was an apparent total lack of curiosity - a personality trait
essential for problem-solvers, IMHO.)


Another one of my 'pet-peeves' (one of the many! - although perhaps not
to the extent of chopping-off people's arms, no matter how Monty
Python-ic that might be, Sir Knight) is the lack of
spelling/typing/proof-reading skills evidenced by many. Criticising a
(school) teacher of my acquaintance for exactly this, I was bluntly told
"but you knew what I meant" and thus he felt "communication" had been
achieved. Um, yes, er, quite true - but at what effort, and whose effort?

I find this phenomenon in computer-people particularly fascinating,
given that Python (insert any other language-name here) is very
particular and finds "far" instead of "for" far-from acceptable (hah!
again!). Does this mean that their coding-technique is to
(almost-literally) throw stuff at Python and have the computer 'find'
all the typos and spelling-errors? (and if so, is this good use of
time/good technique?)

Thus, and on this, the members of the aforementioned panel had agreed
without discussion: any documents forwarded by a candidate which
contained spelling mistakes were 'binned' (trashed) without the usual
acceptance/tolerance/mercy one would apply to work-documents.


Training/education takes time and costs money. Accordingly, we have this
constant 'battle' between wanting to educate/acquire knowledge versus
the cost-benefit of some fact/skill which may never/rarely be used - and
the 'factory component' of choosing what is good for a group/for 'the
average trainee' cf what is so for the individual.

Entries on the back of a post-card...
--
Regards,
=dn

Ben Bacarisse

Nov 19, 2021, 8:26:42 PM
Chris Angelico <ros...@gmail.com> writes:

> On Sat, Nov 20, 2021 at 9:07 AM Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>>
>> Chris Angelico <ros...@gmail.com> writes:
>>
>> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast@invalid> wrote:
>>
>> >> >>> 0.3 + 0.3 + 0.3 == 0.9
>> >> False
>> >
>> > That's because 0.3 is not 3/10. It's not because floats are
>> > "unreliable" or "inaccurate". It's because the ones you're entering
>> > are not what you think they are.
>> >
>> > When will people understand this?
>> >
>> > (Probably never. Sigh.)
>>
>> Most people understand what's going on when it's explained to them. And
>> I think that being initially baffled is not unreasonable. After all,
>> almost everyone comes to computers after learning that 3/10 can be
>> written as 0.3. And Python "plays along" with the fiction to some
>> extent. 0.3 prints as 0.3, 3/10 prints as 0.3 and 0.3 == 3/10 is True.
>
> In grade school, we learn that not everything can be written that way,
> and 1/3 isn't actually equal to 0.3333333333.

Yes. We learn early on that 0.3333333333 means 3333333333/10000000000.
We don't learn that 0.3333333333 is a special notation for machines that
have something called "binary floating point hardware" that does not
mean 3333333333/10000000000. That has to be learned later. And every
generation has to learn it afresh.

> Yet somehow people
> understand that computers speak binary ("have you learned to count
> yet, or are you still on zeroes and ones?" -- insult to a machine
> empire, in Stellaris), but don't seem to appreciate that floats are
> absolutely accurate and reliable, just in binary.

Yes, agreed, but I was not commenting on the odd (and incorrect) view
that floating point operations are not reliable and well-defined, but on
the reasonable assumption that a clever programming language might take
0.3 to mean what I was taught it meant in grade school.

As an old hand, I know it won't (in most languages), and I know why it
won't, and I know why that's usually the right design choice for the
language. But I can also appreciate that it's by no means obvious that,
to a beginner, "binary" implies the particular kind of representation
that makes 0.3 not mean 3/10. After all, the rational 3/10 can be
represented exactly in binary in many different ways.

> But lack of knowledge is never a problem. (Or rather, it's a solvable
> problem, and I'm always happy to explain things to people.) The
> problem is when, following that lack of understanding, people assume
> that floats are "unreliable" or "inaccurate", and that you should
> never ever compare two floats for equality, because they're never
> "really equal". That's what leads to horrible coding practices and
> badly-defined "approximately equal" checks that cause far more harm
> than a simple misunderstanding ever could on its own.

Agreed. Often, the "explanations" just make things worse.

--
Ben.

Chris Angelico

Nov 19, 2021, 8:51:35 PM
On Sat, Nov 20, 2021 at 12:43 PM Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>
> Chris Angelico <ros...@gmail.com> writes:
>
> > On Sat, Nov 20, 2021 at 9:07 AM Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
> >>
> >> Chris Angelico <ros...@gmail.com> writes:
> >>
> >> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast@invalid> wrote:
> >>
> >> >> >>> 0.3 + 0.3 + 0.3 == 0.9
> >> >> False
> >> >
> >> > That's because 0.3 is not 3/10. It's not because floats are
> >> > "unreliable" or "inaccurate". It's because the ones you're entering
> >> > are not what you think they are.
> >> >
> >> > When will people understand this?
> >> >
> >> > (Probably never. Sigh.)
> >>
> >> Most people understand what's going on when it's explained to them. And
> >> I think that being initially baffled is not unreasonable. After all,
> >> almost everyone comes to computers after learning that 3/10 can be
> >> written as 0.3. And Python "plays along" with the fiction to some
> >> extent. 0.3 prints as 0.3, 3/10 prints as 0.3 and 0.3 == 3/10 is True.
> >
> > In grade school, we learn that not everything can be written that way,
> > and 1/3 isn't actually equal to 0.3333333333.
>
> Yes. We learn early on that 0.3333333333 means 3333333333/10000000000.
> We don't learn that 0.3333333333 is a special notation for machines that
> have something called "binary floating point hardware" that does not
> mean 3333333333/10000000000. That has to be learned later. And every
> generation has to learn it afresh.

But you learn that it isn't the same as 1/3. That's my point. You
already understand that it is *impossible* to write out 1/3 in
decimal. Is it such a stretch to discover that you cannot write 3/10
in binary?

Every generation has to learn about repeating fractions, but most of
us learn them in grade school. Every generation learns that computers
talk in binary. Yet, putting those two concepts together seems beyond
many people, to the point that they feel that floating point can't be
trusted.

> Yes, agreed, but I was not commenting on the odd (and incorrect) view
> that floating point operations are not reliable and well-defined, but on
> the reasonable assumption that a clever programming language might take
> 0.3 to mean what I was taught it meant in grade school.

It does mean exactly what it meant in grade school, just as 1/3 means
exactly what it meant in grade school. Now try to represent 1/3 on a
blackboard, as a decimal fraction. If that's impossible, does it mean
that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?

> > But lack of knowledge is never a problem. (Or rather, it's a solvable
> > problem, and I'm always happy to explain things to people.) The
> > problem is when, following that lack of understanding, people assume
> > that floats are "unreliable" or "inaccurate", and that you should
> > never ever compare two floats for equality, because they're never
> > "really equal". That's what leads to horrible coding practices and
> > badly-defined "approximately equal" checks that cause far more harm
> > than a simple misunderstanding ever could on its own.
>
> Agreed. Often, the "explanations" just make things worse.
>

When they're based on a fear of floats, yes. Explanations like "never
use == with floats because 0.1+0.2!=0.3" are worse than useless,
because they create that fear in a way that creates awful cargo-cult
programming practices.

If someone does something in Python, gets a weird result, and comes to
the list saying "I don't understand this", that's not a problem. We
get it all the time with mutables. Recent question about frozensets
appearing to be mutable, same thing. I have no problem with someone
humbly asking "what's happening?", based on an internal assumption
that there's a reason things are the way they are. For some reason,
floats don't get that same respect from many people.

ChrisA

Ben Bacarisse

Nov 19, 2021, 10:26:15 PM
Binary is a bit of a red herring here. It's the floating point format
that needs to be understood. Three tenths can be represented in many
binary formats, and even decimal floating point will have some surprises
for the novice.

>> Yes, agreed, but I was not commenting on the odd (and incorrect) view
>> that floating point operations are not reliable and well-defined, but on
>> the reasonable assumption that a clever programming language might take
>> 0.3 to mean what I was taught it meant in grade school.
>
> It does mean exactly what it meant in grade school, just as 1/3 means
> exactly what it meant in grade school. Now try to represent 1/3 on a
> blackboard, as a decimal fraction. If that's impossible, does it mean
> that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?

As you know, it is possible, but let's say we outlaw any finite notation
for repeated digits... Why should I convert 1/3 to this particular
apparently unsuitable representation? I will write 1/3 and manipulate
that number using fractional notation.

The novice programmer might similarly expect that when they write 0.3,
the program will manipulate that number as the fraction it clearly is.
They may well be surprised by the fact that it must get put into a
format that can't represent what those three characters mean, just as I
would be surprised if you insisted I write 1/3 as a finite decimal (with
no repeat notation).

I'm not saying your analogy would not help someone understand, but you
first have to explain why 0.3 is not treated as three tenths -- why I
(to use your analogy) must not keep 1/3 as a proper fraction, but I must
instead write it using a finite number of decimal digits. Neither is,
in my view, obvious to the beginner.

>> > But lack of knowledge is never a problem. (Or rather, it's a solvable
>> > problem, and I'm always happy to explain things to people.) The
>> > problem is when, following that lack of understanding, people assume
>> > that floats are "unreliable" or "inaccurate", and that you should
>> > never ever compare two floats for equality, because they're never
>> > "really equal". That's what leads to horrible coding practices and
>> > badly-defined "approximately equal" checks that cause far more harm
>> > than a simple misunderstanding ever could on its own.
>>
>> Agreed. Often, the "explanations" just make things worse.
>
> When they're based on a fear of floats, yes. Explanations like "never
> use == with floats because 0.1+0.2!=0.3" are worse than useless,
> because they create that fear in a way that creates awful cargo-cult
> programming practices.
>
> If someone does something in Python, gets a weird result, and comes to
> the list saying "I don't understand this", that's not a problem. We
> get it all the time with mutables. Recent question about frozensets
> appearing to be mutable, same thing. I have no problem with someone
> humbly asking "what's happening?", based on an internal assumption
> that there's a reason things are the way they are. For some reason,
> floats don't get that same respect from many people.

On all this, I agree. As a former numerical analyst, I want maximal
respect for all the various floating-point representations!

--
Ben.

Chris Angelico

Nov 20, 2021, 12:18:43 AM
Not completely a red herring; binary floating-point as used in Python
(IEEE double-precision) is defined as a binary mantissa and a scale,
just as "blackboard arithmetic" is generally defined as a decimal
mantissa and a scale. (At least, I don't think I've ever seen anyone
doing arithmetic on a blackboard in hex or octal.)

> >> Yes, agreed, but I was not commenting on the odd (and incorrect) view
> >> that floating point operations are not reliable and well-defined, but on
> >> the reasonable assumption that a clever programming language might take
> >> 0.3 to mean what I was taught it meant in grade school.
> >
> > It does mean exactly what it meant in grade school, just as 1/3 means
> > exactly what it meant in grade school. Now try to represent 1/3 on a
> > blackboard, as a decimal fraction. If that's impossible, does it mean
> > that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?
>
> As you know, it is possible, but let's say we outlaw any finite notation
> for repeated digits... Why should I convert 1/3 to this particular
> apparently unsuitable representation? I will write 1/3 and manipulate
> that number using factional notation.

If you want that, the fractions module is there for you. And again,
grade school, we learned about ratios as well as decimals (or vulgar
fractions and decimal fractions). They have different tradeoffs. For
instance, I learned pi as both 22/7 and 3.14, because sometimes it'd
be convenient to use the rational form and other times the decimal.
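
For comparison, the fractions module keeps such values exact, at the cost of the
different trade-offs mentioned (a minimal sketch):

>>> from fractions import Fraction
>>> Fraction(3, 10) + Fraction(3, 10) + Fraction(3, 10) == Fraction(9, 10)
True
>>> float(Fraction(22, 7))         # the grade-school rational approximation of pi
3.142857142857143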

> The novice programmer might similarly expect that when they write 0.3,
> the program will manipulate that number as the faction it clearly is.
> They may well be surprised by the fact that it must get put into a
> format that can't represent what those three characters mean, just as I
> would be surprised if you insisted I write 1/3 as a finite decimal (with
> no repeat notation).

Except that 0.3 isn't written as a fraction, it's written as a decimal.

> I'm not saying your analogy would not help someone understand, but you
> first have to explain why 0.3 is not treated as three tenths -- why I
> (to use your analogy) must not keep 1/3 as a proper fraction, but I must
> instead write it using a finite number of decimal digits. Neither is,
> in my view, obvious to the beginner.

Try adding 1/3 + e; either you have to convert 1/3 to a decimal, or
find a rational approximation for e (there aren't any really good ones
but 193/71 seems promising - that's 2.7183, close enough) and then go
to the work of rational addition. No, more likely you'll go for a
finite number of decimal digits.

> >> > But lack of knowledge is never a problem. (Or rather, it's a solvable
> >> > problem, and I'm always happy to explain things to people.) The
> >> > problem is when, following that lack of understanding, people assume
> >> > that floats are "unreliable" or "inaccurate", and that you should
> >> > never ever compare two floats for equality, because they're never
> >> > "really equal". That's what leads to horrible coding practices and
> >> > badly-defined "approximately equal" checks that cause far more harm
> >> > than a simple misunderstanding ever could on its own.
> >>
> >> Agreed. Often, the "explanations" just make things worse.
> >
> > When they're based on a fear of floats, yes. Explanations like "never
> > use == with floats because 0.1+0.2!=0.3" are worse than useless,
> > because they create that fear in a way that creates awful cargo-cult
> > programming practices.
> >
> > If someone does something in Python, gets a weird result, and comes to
> > the list saying "I don't understand this", that's not a problem. We
> > get it all the time with mutables. Recent question about frozensets
> > appearing to be mutable, same thing. I have no problem with someone
> > humbly asking "what's happening?", based on an internal assumption
> > that there's a reason things are the way they are. For some reason,
> > floats don't get that same respect from many people.
>
> On all this, I agree. As a former numerical analyst, I want maximal
> respect for all the various floating-point representations!
>

Yeah - or if not the full meaning of the representation, at least the
fact that it involves no more than 53 bits of "number", and all the
consequences of that.

Oh, and at least once, everyone should play with a stringified number
system, where you actually truly store the string representation and
do arithmetic on it. It has very interesting consequences that really
make you appreciate IEEE binary floating-point.
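
The standard decimal module is a convenient way to experiment along those lines:
it stores base-10 digits, so 0.3 really is three tenths there, but it moves the
problem rather than removing it (a small sketch):

>>> from decimal import Decimal, getcontext
>>> Decimal('0.3') + Decimal('0.3') + Decimal('0.3') == Decimal('0.9')
True
>>> getcontext().prec = 30
>>> Decimal(1) / Decimal(3) * 3 == 1   # 1/3 still cannot be written out exactly
False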

ChrisA

Julio Di Egidio

Nov 20, 2021, 5:17:53 AM
On Saturday, 20 November 2021 at 06:18:43 UTC+1, Chris Angelico wrote:
> On Sat, Nov 20, 2021 at 3:41 PM Ben Bacarisse <ben.u...@bsb.me.uk> wrote:

> > Binary is a bit of a red herring here. It's the floating point format

> Not completely a red herring; binary floating-point as used in Python

Indeed it is a *complete* red-herring: the fact that it is "binary" is just irrelevant.

> > Oh, and at least once, everyone should play with a stringified number

You and co. are just too *incompetent and sloppy* to ever suggest anything educational!

Learn when it is the moment to just shut the fuck up for a change!

(When Dunning-Kruger is a compliment, you bloody guardians of the worst shit for all.)

*Plonk*

Julio

Ben Bacarisse

Nov 20, 2021, 5:35:56 AM
You seem to be agreeing with me. It's the floating point part that is
the issue, not the base itself.

>> >> Yes, agreed, but I was not commenting on the odd (and incorrect) view
>> >> that floating point operations are not reliable and well-defined, but on
>> >> the reasonable assumption that a clever programming language might take
>> >> 0.3 to mean what I was taught it meant in grade school.
>> >
>> > It does mean exactly what it meant in grade school, just as 1/3 means
>> > exactly what it meant in grade school. Now try to represent 1/3 on a
>> > blackboard, as a decimal fraction. If that's impossible, does it mean
>> > that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?
>>
>> As you know, it is possible, but let's say we outlaw any finite notation
>> for repeated digits... Why should I convert 1/3 to this particular
>> apparently unsuitable representation? I will write 1/3 and manipulate
>> that number using factional notation.
>
> If you want that, the fractions module is there for you.

Yes, I know. The only point of disagreement (as far as I can see) is
that literals like 0.3 appear to be confusing for beginners. You think
they should know that "binary" (which may be all they know about
computers and numbers) means fixed-width binary floating point (or at
least might imply a format that can't represent three tenths), where I
think it's not unreasonable for them to suppose that 0.3 is manipulated
as the rational number it so clearly is.

> And again,
> grade school, we learned about ratios as well as decimals (or vulgar
> fractions and decimal fractions). They have different tradeoffs. For
> instance, I learned pi as both 22/7 and 3.14, because sometimes it'd
> be convenient to use the rational form and other times the decimal.
>
>> The novice programmer might similarly expect that when they write 0.3,
>> the program will manipulate that number as the faction it clearly is.
>> They may well be surprised by the fact that it must get put into a
>> format that can't represent what those three characters mean, just as I
>> would be surprised if you insisted I write 1/3 as a finite decimal (with
>> no repeat notation).
>
> Except that 0.3 isn't written as a fraction, it's written as a
> decimal.

Are you worrying about the term I used? 0.3 is a rational number. It
is the way to write a particular fraction in decimal. I think the only
point of disagreement is that you hope or expect that anyone writing
three tenths as 0.3 should expect that that literal might be converted
to some collection of bits that can't represent it. I think it's not
unreasonable for a beginner to think otherwise.

>> I'm not saying your analogy would not help someone understand, but you
>> first have to explain why 0.3 is not treated as three tenths -- why I
>> (to use your analogy) must not keep 1/3 as a proper fraction, but I must
>> instead write it using a finite number of decimal digits. Neither is,
>> in my view, obvious to the beginner.
>
> Try adding 1/3 + e; either you have to convert 1/3 to a decimal, or
> find a rational approximation for e (there aren't any really good ones
> but 193/71 seems promising - that's 2.7183, close enough) and then go
> to the work of rational addition. No, more likely you'll go for a
> finite number of decimal digits.

Of course, but I'm not sure how this adds to the discussion. e is not
like three tenths. Most programming languages give us a way to write
what looks like exactly three tenths, but most don't give us a way to write
a literal that looks like it means e exactly.

--
Ben.

Python

Nov 20, 2021, 8:08:51 AM
Stronzzo Julio Di Egidio wrote:
> On Saturday, 20 November 2021 at 06:18:43 UTC+1, Chris Angelico wrote:
>> On Sat, Nov 20, 2021 at 3:41 PM Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>
>>> Binary is a bit of a red herring here. It's the floating point format
>
>> Not completely a red herring; binary floating-point as used in Python
>
> Indeed it is a *complete* red-herring: the fact that it is "binary" is just irrelevant.
>
>>> Oh, and at least once, everyone should play with a stringified number
>
> You and co. are just too *incompetent and sloppy* to ever suggest anything educational!
>
> Learn when it is the moment to just shut the fuck up for a change!

Something that YOU should really consider, Julio, almost every time
you are about to post cranky, insane, inappropriate posts.


Avi Gross

Nov 20, 2021, 4:31:09 PM
This discussion gets tiresome for some.

Mathematics is a pristine world that is NOT the real world. It handles
near-infinities fairly gracefully but many things in the real world break
down because our reality is not infinitely divisible and some parts are
neither contiguous nor fixed but in some sense wavy and probabilistic or
worse.

So in any computer, or computer language, we have realities to deal with
when someone asks for, say, the square root of 2, or transcendental
numbers like pi or e, or things like sin(x), as often these are numbers
which in decimal require an infinite number of digits and in many cases do
not repeat. Something as simple as the fraction 1/7, in decimal, has an
interesting repeating pattern but is otherwise infinite.

.142857142857142857 ... ->> 1/7
.285714285714285714 ... ->> 2/7
.428571 ...
.571428 ...
.714285 ...
.857142 ...

No matter how many digits you set aside, you cannot capture such numbers
exactly IN BASE 10.

You may be able to capture some such things in another base but then yet
others cannot be seen in various other bases. I suspect someone has
considered a data type that stores results in arbitrary bases and delays
evaluation as late as possible, but even those cannot handle many numbers.

So the reality is that most computer programming is ultimately BINARY as in
BASE 2. At some level almost anything is rounded and imprecise. About all we
want to guarantee is that any rounding or truncation done is as consistent
as possible so every time you ask for pi or the square root of 2, you get
the same result stored as bits. BUT if you ask a slightly different
question, why expect the same results? sqrt(2) operates on the number 2. But
sqrt(6*(1/3)) first evaluates 1/3 and stores it as bits then multiplies it
by the bit representation of 6 and stores a result which then is handed to
sqrt() and if the bits are not identical, there is no guarantee that the
result is identical.

I will say this. Python has perhaps an improved handling of large integers.
Many languages have an assortment of integer sizes you can use such as 16
bits or 32 or 64 and possibly many others including using 8 or 1bits for
limited cases. But for larger numbers, there is a problem where the result
overflows what can be shown in that many bits and the result either is seen
as an error or worse, as a smaller number where some of the overflow bits
are thrown away. Python has indefinite length integers that work fine. But
if I take a real number with the same value and do a similar operation, I
get what I consider a truncated result:

>>> 256**40
2135987035920910082395021706169552114602704522356652769947041607822219725780
640550022962086936576
>>> 256.0**40
2.13598703592091e+96

That is because Python has not chosen to implement a default floating point
method that allows larger storage formats that could preserve more digits.

Could we design a more flexible storage form? I suspect we could BUT it
would not solve certain problems. I mean, consider these two squarings:

>>> .123456789123456789 * .123456789123456789
0.015241578780673677
>>> 123456789123456789 * 123456789123456789
15241578780673678515622620750190521

Clearly a fuller answer to the first part, based on the second, is
.015241578780673678515622620750190521
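
The decimal module can already produce that fuller answer if the context
precision is raised (a minimal sketch):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 40
>>> x = Decimal('0.123456789123456789')
>>> x * x
Decimal('0.015241578780673678515622620750190521')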

So one way to implement such extended functionality might be to have an
object that has a storage of the decimal part of something as an extended
integer variation along with storage of other parts like the exponent. SOME
operations would then use the integer representation and then be converted
back as needed. But such an item would not conform to existing standards and
would not trivially be integrated everywhere a normal floating point is
expected and thus may be truncated in many cases or have to be converted
before use.

But even such an object faces a serious problem as asking for a fraction
like 1/7 might lead to an infinite regress as the computer keeps lengthening
the data representation indefinitely. It has to be terminated eventually and
some of the examples shown where the whole does not seem to be the same
when viewed several ways, would still show the anomalies some invoke.

Do note pure Mathematics is just as confusing at times. The number
.99999999... where the dot-dot-dot notation means go on forever, is
mathematically equivalent to the number 1 as is any infinite series that
asymptotically approaches 1 as in

1/2 + 1/4 + 1/8 + ... + 1/(2**N) + ...

It is not seen by many students how continually appending a 9 can ever be
the same as a number like 1.00000 since every single digit is always not a
match. But the mathematical theorems about limits are now well understood
and in the limit as N approaches infinity, the two come to mean the same
thing.

Python is a tool. More specifically, it is a changing platform that hosts
many additional tools. For the moment the tools are built on bits which are
both very precise but also cannot finitely represent everything. Maybe if we
develop decent quantum computers and use QBITS, we might have a wider range
of what can be stored and used by programs using a language that can handle
the superpositions involved. But, I suspect even that kind of machine might
still not handle some infinities well as I suspect our real world, quantum
features and all, at some levels reduces to probabilities that at some level
are not exactly the same each time.

So, what should be stressed, and often is, is to use tools available that
let you compare numbers for being nearly equal. I have seen some where the
size in bits of the machine storage method is used to determine if numbers
are equal until within a few bits of the end of the representation. Other
cases should be noted too: a hill-climbing algorithm that is looking for a
peak or valley cannot be expected to converge to exactly what you want, and
you may settle for it getting close enough, as in 0.1%, or stop if it does
not improve in the last hundred iterations. The Mathematics may require a
precise answer, such as a point around which the slope of a curve is zero,
but the algorithm, especially when used with storage of variables limited
to some precision, may not reliably be able to zero in on a much better
answer, so again what is needed is not a test for actual equality, but one
close enough to likely be good enough.

I note how unamused I was when making a small table in EXCEL (Note, not
Python) of credit card numbers and balances when I saw the darn credit card
numbers were too long and a number like:

4195032150199578

was displayed by EXCEL as:

4195032150199570

It looks like I just missed having significant stored digits and EXCEL
reconstructed it by filling in a zero for the missing extra. The problem is
I had to check balances sometimes and copy/paste generated the wrong number
to use. I ended up storing the number as text using '4195032150199578 as I
was not doing anything mathematical with it and this allowed me to keep all
the digits as text strings can be quite long.
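
For what it is worth, an IEEE double holds integers exactly up to 2**53, so a
Python float would not have mangled that particular 16-digit number; the
15-significant-digit limit is Excel's own behaviour rather than a hard
floating-point limit (a quick check):

>>> 2.0**53                        # integers are exact in a double up to here
9007199254740992.0
>>> float(4195032150199578) == 4195032150199578
True
>>> 2.0**53 + 1 == 2.0**53         # past 2**53, consecutive integers collapse
True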

But does this mean EXCEL is useless (albeit some think so) or that the tool
can only be used up to some extent and beyond that, can (silently) mislead
you?

Having said all that, this reminds me a bit about the Y2K issues where
somehow nobody thought much about what happens when the year 2000 arrives
and someone 103 years old becomes 3 again as only the final two digits of
the year are stored. We now have the ability to make computers with
increased speed and memory and so on and I wonder if anyone has tried to
make a standard for say a 256-byte storage for multiple-precision floating
point that holds lots more digits of precision as well as allowing truly
huge exponents. Of course, it may not be practical to have computers that
have registers and circuitry that can multiply two such numbers in a very
few cycles, and it may be done in stages in thousands of cycles, so use of
something big like that might not be a good default.


Chris Angelico

Nov 20, 2021, 5:05:03 PM
Mostly, but all the problems come from people expecting decimal floats
when they're using binary floats.

> >> >> Yes, agreed, but I was not commenting on the odd (and incorrect) view
> >> >> that floating point operations are not reliable and well-defined, but on
> >> >> the reasonable assumption that a clever programming language might take
> >> >> 0.3 to mean what I was taught it meant in grade school.
> >> >
> >> > It does mean exactly what it meant in grade school, just as 1/3 means
> >> > exactly what it meant in grade school. Now try to represent 1/3 on a
> >> > blackboard, as a decimal fraction. If that's impossible, does it mean
> >> > that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?
> >>
> >> As you know, it is possible, but let's say we outlaw any finite notation
> >> for repeated digits... Why should I convert 1/3 to this particular
> >> apparently unsuitable representation? I will write 1/3 and manipulate
> >> that number using factional notation.
> >
> > If you want that, the fractions module is there for you.
>
> Yes, I know. The only point of disagreement (as far as can see) is
> that literals like 0.3 appears to be confusing for beginners. You think
> they should know that "binary" (which may be all they know about
> computers and numbers) means fixed-width binary floating point (or at
> least might imply a format that can't represent three tenths), where I
> think it's not unreasonable for them to suppose that 0.3 is manipulated
> as the rational number it so clearly is.

Rationals are mostly irrelevant. We don't use int/int for most
purposes. When you're comparing number systems between the way people
write them and the way computers do, the difference isn't "0.3" and
"3/10". If people are prepared to switch their thinking to rationals
instead of decimals, then sure, the computer can represent those
precisely; but that has different tradeoffs.

> > And again,
> > grade school, we learned about ratios as well as decimals (or vulgar
> > fractions and decimal fractions). They have different tradeoffs. For
> > instance, I learned pi as both 22/7 and 3.14, because sometimes it'd
> > be convenient to use the rational form and other times the decimal.
> >
> >> The novice programmer might similarly expect that when they write 0.3,
> >> the program will manipulate that number as the faction it clearly is.
> >> They may well be surprised by the fact that it must get put into a
> >> format that can't represent what those three characters mean, just as I
> >> would be surprised if you insisted I write 1/3 as a finite decimal (with
> >> no repeat notation).
> >
> > Except that 0.3 isn't written as a fraction, it's written as a
> > decimal.
>
> Are you worrying about the term I used? 0.3 is a rational number. It
> is the way to write a particular fraction in decimal. I think the only
> point of disagreement is that you hope or expect that anyone writing
> three tenths as 0.3 should expect that that literal might be converted
> to some collection of bits that can't represent it. I think it's not
> unreasonable for a beginner to think otherwise.

0.3 is a "rational number" but it's not being written as a fraction.
And most people aren't going to think of it as "3/10". The problem
here isn't mathematics, it's people, so the point isn't whether a
number could be considered rational in the abstract.

> >> I'm not saying your analogy would not help someone understand, but you
> >> first have to explain why 0.3 is not treated as three tenths -- why I
> >> (to use your analogy) must not keep 1/3 as a proper fraction, but I must
> >> instead write it using a finite number of decimal digits. Neither is,
> >> in my view, obvious to the beginner.
> >
> > Try adding 1/3 + e; either you have to convert 1/3 to a decimal, or
> > find a rational approximation for e (there aren't any really good ones
> > but 193/71 seems promising - that's 2.7183, close enough) and then go
> > to the work of rational addition. No, more likely you'll go for a
> > finite number of decimal digits.
>
> Of course, but I'm not sure how this adds to the discussion. e is not
> like three tenths. Most programming languages give us a way to write
> what looks like exactly three tenths, but most don't give us a way write
> a literal that looks like it means e exactly.
>

That's, once again, because people. People want a way to say "give me
a value that's as close as possible to 0.3" and so that's what we get.
And then they get caught out by the fact that it isn't what they want
it to be.

ChrisA

Chris Angelico

Nov 20, 2021, 5:17:42 PM
On Sun, Nov 21, 2021 at 8:32 AM Avi Gross via Python-list
<pytho...@python.org> wrote:
>
> This discussion gets tiresome for some.
>
> Mathematics is a pristine world that is NOT the real world. It handles
> near-infinities fairly gracefully but many things in the real world break
> down because our reality is not infinitely divisible and some parts are
> neither contiguous nor fixed but in some sense wavy and probabilistic or
> worse.

But the purity of mathematics isn't the problem. The problem is
people's expectations around computers. (The problem is ALWAYS
people's expectations.)

> So in any computer, or computer language, we have realities to deal with
> when someone asks for say the square root of 2 or other transcendental
> numbers like pi or e or things like the sin(x) as often they are numbers
> which in decimal require an infinite number of digits and in many cases do
> not repeat. Something as simple as the fractions for 1/7, in decimal, has an
> interesting repeating pattern but is otherwise infinite.
>
> .142857142857142857 ... ->> 1/7
> .285714285714285714 ... ->> 2/7
> .428571 ...
> .571428 ...
> .714285 ...
> .857142 ...
>
> No matter how many bits you set aside, you cannot capture such numbers
> exactly IN BASE 10.

Right, and people understand this. Yet as soon as you switch from base
10 to base 2, it becomes impossible for people to understand that 1/5
now becomes the exact same thing: an infinitely repeating expansion
for the rational number.

> You may be able to capture some such things in another base but then yet
> others cannot be seen in various other bases. I suspect someone has
> considered a data type that stores results in arbitrary bases and delays
> evaluation as late as possible, but even those cannot handle many numbers.

More likely it would just store rationals as rationals - or, in other
words, fractions.Fraction().

> So the reality is that most computer programming is ultimately BINARY as in
> BASE 2. At some level almost anything is rounded and imprecise. About all we
> want to guarantee is that any rounding or truncation done is as consistent
> as possible so every time you ask for pi or the square root of 2, you get
> the same result stored as bits. BUT if you ask a slightly different
> question, why expect the same results? sqrt(2) operates on the number 2. But
> sqrt(6*(1/3)) first evaluates 1/3 and stores it as bits then multiplies it
> by the bit representation of 6 and stores a result which then is handed to
> sqrt() and if the bits are not identical, there is no guarantee that the
> result is identical.

This is what I take issue with. Binary doesn't mean "rounded and
imprecise". It means "base two". People get stroppy at a computer's
inability to represent 0.3 correctly, because they think that it
should be perfectly obvious what that value is. Nobody's bothered by
sqrt(2) not being precise, but they're very much bothered by 1/10 not
"working".

> Do note pure Mathematics is just as confusing at times. The number
> .99999999... where the dot-dot-dot notation means go on forever, is
> mathematically equivalent to the number 1 as is any infinite series that
> asymptotically approaches 1 as in
>
> 1/2 + 1/4 + 1/8 + ... + 1/(2**N) + ...
>
> It is not seen by many students how continually appending a 9 can ever be
> the same as a number like 1.00000 since every single digit is always not a
> match. But the mathematical theorems about limits are now well understood
> and in the limit as N approaches infinity, the two come to mean the same
> thing.

Mathematics is confusing. That's not a problem. To be quite frank, the
real world is far more confusing than the pristine beauty that we have
inside a computer. The problem isn't the difference between reality
and mathematics, or between reality and computers, or anything like
that; the problem, as always, is between people's expectations and
what computers do.

Tell me: if a is equal to b and b is equal to c, is a equal to c?
Mathematicians say "of course it is". Engineers say "there's no way
you can rely on that". Computer programmers side with whoever makes
most sense right this instant.

> So, what should be stressed, and often is, is to use tools available that
> let you compare numbers for being nearly equal.

No. No no no no no. You don't need to use a "nearly equal" comparison
just because floats are "inaccurate". It isn't like that. It's this
exact misinformation that I am trying to fight, because floats are NOT
inaccurate. They're just in binary, same as everything that computers
do.
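
A small illustration of the point: equality on floats is exact, and it succeeds
whenever the values being compared really are the same double:

>>> 0.5 + 0.25 == 0.75             # halves and quarters are exact in binary
True
>>> 0.1 + 0.2 == 0.3               # none of these decimal literals is exact in binary
False
>>> (0.1 + 0.2).hex(), (0.3).hex() # the two results differ by one unit in the last place
('0x1.3333333333334p-2', '0x1.3333333333333p-2')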

> I note how unamused I was when making a small table in EXCEL (Note, not
> Python) of credit card numbers and balances when I saw the darn credit card
> numbers were too long and a number like:
>
> 4195032150199578
>
> was displayed by EXCEL as:
>
> 4195032150199570
>
> It looks like I just missed having significant stored digits and EXCEL
> reconstructed it by filling in a zero for the missing extra. The problem is
> I had to check balances sometimes and copy/paste generated the wrong number
> to use. I ended up storing the number as text using '4195032150199578 as I
> was not doing anything mathematical with it and this allowed me to keep all
> the digits as text strings can be quite long.
>
> But does this mean EXCEL is useless (albeit some thing so) or that the tool
> can only be used up to some extent and beyond that, can (silently) mislead
> you?

Oh, Excel is moronic in plenty of other ways.

https://www.youtube.com/watch?v=yb2zkxHDfUE

> Having said all that, this reminds me a bit about the Y2K issues where
> somehow nobody thought much about what happens when the year 2000 arrives
> and someone 103 years old becomes 3 again as only the final two digits of
> the year are stored. We now have the ability to make computers with
> increased speed and memory and so on and I wonder if anyone has tried to
> make a standard for say a 256-byte storage for multiple-precision floating
> point that holds lots more digits of precision as well as allowing truly
> huge exponents. Of course, it may not be practical to have computers that
> have registers and circuitry that can multiply two such numbers in a very
> few cycles, and it may be done in stages in thousands of cycles, so use of
> something big like that might not be a good default.
>

Yes, you could use 80-bit floats, 128-bit floats, or 256-bit floats,
but that won't change the fact that 0.3 can't be represented precisely
in binary, nor will it change the fact that 0.5 *can*. If people can't
think in binary, they won't think in binary with more bits either.
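A quick REPL check makes the asymmetry concrete (illustrative; Fraction(x) just reports the exact value of the stored double):

>>> from fractions import Fraction
>>> Fraction(0.5) == Fraction(1, 2)    # 0.5 is a power of two: stored exactly
True
>>> Fraction(0.3) == Fraction(3, 10)   # 0.3 is not: you get the nearest double
False
>>> Fraction(0.3) < Fraction(3, 10)    # which happens to sit just below 3/10
True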

ChrisA

Grant Edwards

unread,
Nov 20, 2021, 5:21:41 PM11/20/21
to
On 2021-11-20, Chris Angelico <ros...@gmail.com> wrote:

> But you learn that it isn't the same as 1/3. That's my point. You
> already understand that it is *impossible* to write out 1/3 in
> decimal. Is it such a stretch to discover that you cannot write 3/10
> in binary?

For many people, it seems to be.

There are plenty of people trying to write code who don't even understand
the concept of different bases.

I remember trying to explain the concept of CPU registers, stacks,
interrupts, and binary representations to VAX/VMS FORTRAN programmers
and getting absolutely nowhere.

Years later, I went through the same exercise with a bunch of Windows
C++ programmers, and they seemed similarly baffled.

Perhaps I was just a bad teacher.

--
Grant

Grant Edwards

unread,
Nov 20, 2021, 5:23:58 PM11/20/21
to
On 2021-11-20, Ben Bacarisse <ben.u...@bsb.me.uk> wrote:

> You seem to be agreeing with me. It's the floating point part that is
> the issue, not the base itself.

No, it's the base. Floating point can't represent 3/10 _because_ it's
base 2 floating point. Floating point in base 10 doesn't have any
problem representing 3/10.

--
Grant

Chris Angelico

unread,
Nov 20, 2021, 5:27:54 PM11/20/21
to
And to some extent, that's not really surprising; not everyone can
think the way other people do, and not everyone can think the way
computers do. But it seems that, in this one specific case, there's a
massive tendency to (a) misunderstand, and then (b) belligerently
assume that the computer acts the way they want it to act. And then
sometimes (c) get really annoyed at the computer for not being a
person, and start the cargo cult practice of "always use a
nearly-equal function instead of testing for equality", which we've
seen in this exact thread.

That's what I take issue with: the smug "0.1 + 0.2 != 0.3, therefore
computers are wrong" people, and the extremely unhelpful "never use ==
with floats" people.

ChrisA

Ben Bacarisse

unread,
Nov 20, 2021, 5:44:45 PM11/20/21
to
Every base has the same problem for some numbers. It's the floating
point part that causes the problem.

Binary and decimal stand out because we write a lot of decimals in
source code and computers use binary, but if decimal floating point were
common (as it increasingly is) different fractions would become the oft
quoted "surprise" results.

--
Ben.

Avi Gross

unread,
Nov 20, 2021, 6:00:07 PM11/20/21
to
Chris,

I generally agree with your comments albeit I might take a different slant.

What I meant is that people who learn mathematics (as I and many here
obviously did) can come away with idealized ideas that they then expect to
be replicable everywhere. But there are grey lines along the way where some
mathematical proofs do weird things like IGNORE parts of a calculation by
suggesting they are going to zero much faster than other parts and then wave
a mathematical wand about what happens when they approach a limit like zero
and voila, we just "proved" that the derivative of X**2 is 2*X or the more
general derivative of A*(X**N) is N*A*(X**(N-1)) and then extend that to N
being negative or fractional or a transcendental number and beyond.

Computers generally use finite methods, sometimes too finite. Yes, the
problem is not Mathematics as a field. It is how humans often generalize or
analogize from one area into something a bit different. I do not agree with
any suggestion that a series of bits that encodes a result that is rounded
or truncated is CORRECT. A representation of 0.3 in a binary version of some
floating point format is not technically correct. Storing it as 3/10 and
carefully later multiplying it by 20 and then carefully canceling part will
result in exactly 6. While storing it digitally and then multiplying it in
registers or whatever by 20 may get a result slightly different than the
storage representation of 6.0000000000... and that is a fact and risk we
generally are willing to take.

But consider a different example. If I have a filesystem or URL or anything
that does not care about whether parts are in upper or lower case, then
"filename" and "FILENAME" and many variations like "fIlEnAmE" are all
assumed to mean the same thing. A program may even simply store all of them
in the same way as all uppercase. But when you ask to compare two versions
with a function where case matters, they all test as unequal! So there are
ways to ask for a comparison that is approximately equal given the
constraints that case does not matter:

>>> alpha="Hello"
>>> beta="hELLO"
>>> alpha == beta
False
>>> alpha.lower() == beta.lower()
True

I see no reason why a comparison cannot be done like this in cases you are
concerned with small errors creeping in:

>>> from math import isclose
>>> isclose(1, .9999999999999999999999)
True
>>> isclose(1, .9999999999)
True
>>> isclose(1, .999)
False

I will agree with you that binary is not any more imprecise than base 10.
Computer hardware that works with binary is much easier to design, though.

So floats by themselves are not inaccurate but realistically the results of
operations ARE. I mean if I ask a long number to be stored that does not
fully fit, it is often silently truncated and what the storage location now
represents accurately is not my number but the shorter version that is at the
limit of tolerance. But consider another analogy often encountered in
mathematics.

If I measure several numbers in the real world such as weight and height and
temperature and so on, some are considered accurate only to a limited number
of digits. Your weight on a standard digital scale may well be 189.8 but if
I add a feather or subtract one, the reading may well shift to one unit up
or down. Heck, the same person measured just minutes later may shift. If I
used a deluxe scale that measures to more decimal places, it may get hard to
get the exact same number twice in a row as just taking a deeper breath may
make a change.

So what happens if I measure a box in three dimensions to the nearest .1
inch and decide it is 10.1 by 20.2 by 30.3 inches? What is the volume,
ignoring pesky details about the width of the cardboard or whatever?

A straightforward multiplication yields 6181.806 cubic inches. You may have
been told to round that to something like 6181.8 because the potential error
in each measure cannot result in more precision. In reality, you might even
calculate two sets of numbers assuming the true width may have been a tad
more or less and come up with the volume being BETWEEN a somewhat smaller
number and a somewhat larger number.

I claim a similar issue plagues using a computer to deal with stored
numbers, perhaps not stored 100% perfectly as discussed, and doing
calculations. The result often comes out more precisely than warranted. I
suspect there are modules out there that might do multi-step calculations
where at each step, numbers generated with extra precision are throttled
back so the extra precision is set to zeroes after rounding to avoid the
small increments adding up. Others may just do the calculations and keep
track and remove extra precision at the end.

And again, this is not because the implementation of numbers is in any way
wrong but because a real-world situation requires the humans to sort of dial
back how they are used and not over-reach.

So comparing for close-enough equality is not necessarily a reflection on
floats but on the design not accommodating the precision needed or perhaps
on the algorithm used not necessarily being expected to reach a certain
level.
--
https://mail.python.org/mailman/listinfo/python-list

Rob Cliffe

unread,
Nov 20, 2021, 6:20:48 PM11/20/21
to


On 20/11/2021 22:59, Avi Gross via Python-list wrote:
> there are grey lines along the way where some
> mathematical proofs do weird things like IGNORE parts of a calculation by
> suggesting they are going to zero much faster than other parts and then wave
> a mathematical wand about what happens when they approach a limit like zero
> and voila, we just "proved" that the derivative of X**2 is 2*X or the more
> general derivative of A*(X**N) is N*A*(X**(N-1)) and then extend that to N
> being negative or fractional or a transcendental number and beyond.
>
>
You seem to be maligning mathematicians.
What you say was true in the time of Newton, Leibniz and Bishop
Berkeley, but analysis was made completely rigorous by the efforts of
Weierstrass and others. There are no "grey lines". Proofs do not
"suggest", they PROVE (else they are not proofs, they are plain wrong).
It is not the fault of mathematicians (or mathematics) if some people
produce sloppy hand-wavy "proofs" as justification for their conclusions.
I am absolutely sure you know all this, but your post does not read
as if you do. And it could give a mistaken impression to a
non-mathematician. I think we have had enough denigration of experts.
Best
Rob Cliffe



Chris Angelico

unread,
Nov 20, 2021, 6:23:09 PM11/20/21
to
On Sun, Nov 21, 2021 at 10:01 AM Avi Gross via Python-list
<pytho...@python.org> wrote:
> Computers generally use finite methods, sometimes too finite. Yes, the
> problem is not Mathematics as a field. It is how humans often generalize or
> analogize from one area into something a bit different. I do not agree with
> any suggestion that a series of bits that encodes a result that is rounded
> or truncated is CORRECT. A representation of 0.3 in a binary version of some
> floating point format is not technically correct. Storing it as 3/10 and
> carefully later multiplying it by 20 and then carefully canceling part will
> result in exactly 6. While storing it digitally and then multiplying it in
> registers or whatever by 20 may get a result slightly different than the
> storage representation of 6.0000000000... and that is a fact and risk we
> generally are willing to take.

Do you accept that storing the floating point value 1/4, then
multiplying by 20, will give precisely 5? Because that is
*guaranteed*. You don't have to expect a result "slightly different"
from 5, it will be absolutely exactly five:

>>> (1/4) * 20 == 5.0
True

This is what I'm talking about. Some numbers can be represented
perfectly, others can't. If you try to represent the square root of
two as a decimal number, then multiply it by itself, you won't get
back precisely 2, because you can't have written out the *exact*
square root of two. But you most certainly CAN write "1.875" on a
piece of paper, and it really truly does exactly mean fifteen eighths.
And you can write that number as a binary float, too, and it'll mean
the exact same value.
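A tiny REPL illustration of the 1.875 case (as_integer_ratio just reports the exact stored value):

>>> (1.875).as_integer_ratio()   # fifteen eighths, stored exactly
(15, 8)
>>> 1.875 + 1.875 == 3.75        # so arithmetic on it is exact too
True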

> But consider a different example. If I have a filesystem or URL or anything
> that does not care about whether parts are in upper or lower case, then
> "filename" and "FILENAME" and many variations like "fIlEnAmE" are all
> assumed to mean the same thing. A program may even simply store all of them
> in the same way as all uppercase. But when you ask to compare two versions
> with a function where case matters, they all test as unequal! So there are
> ways to ask for a comparison that is approximately equal given the
> constraints that case does not matter:

A URL has distinct parts to it: the domain has some precise folding
done (most notably case folding), the path does not, and you can
consider "http://example.com:80/foo" to be the same as
"http://example.com/foo" because 80 is the default port.

> >>> alpha="Hello"
> >>> beta="hELLO"
> >>> alpha == beta
> False
> >>> alpha.lower() == beta.lower()
> True
>

That's a terrible way to compare URLs, because it's both too sloppy
AND too strict at the same time. But if you have a URL representation
tool, it should be able to consider two things equal.

Floats are representations of numbers that can be compared for
equality if they truly represent the same number. The value 3/6 is
precisely equal to the value 7/14:

>>> 3/6 == 7/14
True

You don't need an "approximately equal" function here. They are the
same value. They are equal.

> I see no reason why a comparison canot be done like this in cases you are
> concerned with small errors creeping in:
>
> >>> from math import isclose
> >>> isclose(1, .9999999999999999999999)
> True
> >>> isclose(1, .9999999999)
> True
> >>> isclose(1, .999)
> False

This is exactly the problem though: HOW close counts as equal? The
only way to answer that question is to know the accuracy of your
inputs, and the operations done.
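math.isclose makes that explicit: the tolerances are knobs you have to choose, and the defaults are themselves an assumption about your data (illustrative):

>>> import math
>>> math.isclose(100.0, 100.1)                  # default rel_tol=1e-09: not close
False
>>> math.isclose(100.0, 100.1, rel_tol=1e-2)    # "within 1%" is a choice you must make
True
>>> math.isclose(0.0, 1e-12)                    # relative tolerance is useless near zero
False
>>> math.isclose(0.0, 1e-12, abs_tol=1e-9)      # an absolute tolerance is another choice
True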

> So floats by themselves are not inaccurate but realistically the results of
> operations ARE. I mean if I ask a long number to be stored that does not
> fully fit, it is often silently truncated and what the storage location now
> represent accurately is not my number but the shorter version that is at the
> limit of tolerance. But consider another analogy often encountered in
> mathematics.

Not true. Operations are often perfectly accurate.

> If I measure several numbers in the real world such as weight and height and
> temperature and so on, some are considered accurate only to a limited number
> of digits. Your weight on a standard digital scale may well be 189.8 but if
> I add a feather or subtract one, the reading may well shift to one unit up
> or down. Heck, the same person measured just minutes later may shift. If I
> used a deluxe scale that measures to more decimal places, it may get hard to
> get the exact same number twice in a row as just taking a deeper breath may
> make a change.
>
> So what happens if I measure a box in three dimensions to the nearest .1
> inch and decide it is 10.1 by 20.2 by 30.3 inches? What is the volume,
> ignoring pesky details about the width of the cardboard or whatever?
>
> A straightforward multiplication yields 6181.806 cubic inches. You may have
> been told to round that to something like 6181.8 because the potential error
> in each measure cannot result in more precision. In reality, you might even
> calculate two sets of numbers assuming the true width may have been a tad
> more or less and come up with the volume being BETWEEN a somewhat smaller
> number and a somewhat larger number.

If those initial figures were accurate to three digits, you should
round it to 6180 cubic inches, because that's all the accuracy you
have. (Or, if you prefer, 6180 +/- 5.)

> I claim a similar issue plagues using a computer to deal with stored
> numbers, perhaps not stored 100% perfectly as discussed, and doing
> calculations. The result often comes out more precisely than warranted. I
> suspect there are modules out there that might do multi-step calculations
> where at each step, numbers generated with extra precision are throttled
> back so the extra precision is set to zeroes after rounding to avoid the
> small increments adding up. Others may just do the calculations and keep
> track and remove extra precision at the end.

When your input values aren't accurate, your output won't be accurate.
That's something the computer can never know. When you store the
number 3602879701896397/36028797018963968, did you actually mean that
number, or did you mean some other number that's kinda close to it? If
you don't tell the computer, it's going to assume that you wanted
exactly that number.
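That fraction isn't hypothetical, by the way; it's exactly what the literal 0.1 gives you (a quick check with the standard fractions module):

>>> from fractions import Fraction
>>> Fraction(0.1)                      # the double behind the literal 0.1
Fraction(3602879701896397, 36028797018963968)
>>> Fraction(0.1) == Fraction(1, 10)
False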

> And again, this is not because the implementation of numbers is in any way
> wrong but because a real-world situation requires the humans to sort of dial
> back how they are used and not over-reach.
>
> So comparing for close-enough inequality is not necessarily a reflection on
> floats but on the design not accommodating the precision needed or perhaps
> on the algorithm used not necessarily being expected to reach a certain
> level.

And close-enough equality is the correct thing to do when you know
exactly what the accuracy of your inputs is. If you need to be
completely rigorous about it, you'd have to store every number as a
range (so you might say that your input length is "10.05 to 10.15" or
"10.1, error 0.05") and do all arithmetic on those ranges. What you'd
find is that some operations widen the ranges and others don't. The
trouble is, that's not actually all that useful; Fermi estimates are
far more accurate than they seem like they "should be" because the
balance of probability is in favour of errors cancelling out, at least
partially.
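A toy sketch of that range idea (the helper name is made up, positive endpoints only, widths matching the +/- 0.05 reading above):

def interval_mul(a, b):
    # multiply two (lo, hi) intervals; assumes all endpoints are positive
    return (a[0] * b[0], a[1] * b[1])

length = (10.05, 10.15)   # 10.1 +/- 0.05
width  = (20.15, 20.25)   # 20.2 +/- 0.05
height = (30.25, 30.35)   # 30.3 +/- 0.05

volume = interval_mul(interval_mul(length, width), height)
print(volume)   # roughly (6125.85, 6238.06) cubic inches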

ChrisA

Chris Angelico

unread,
Nov 20, 2021, 6:58:25 PM11/20/21
to
And if decimal floating point were common, other "surprise" behaviour
would be cited, like how x < y and (x+y)/2 < x.

ChrisA

Avi Gross

unread,
Nov 20, 2021, 7:38:46 PM11/20/21
to
Can I suggest a way to look at it, Grant?

In base 10, we represent all numbers as the (possibly infinite) sum of ten
raised to some integral power.

123 is 3 times 1 (ten to the zero power) plus
2 times 10 (ten to the one power) plus
1 times 100 (ten to the two power)

123.456 just extends this with
4 times 1/10 (ten to the minus one power) plus
5 times 1/100 (10**-2) plus
6 time 1/1000 (10**-3)

In binary, all the powers are not powers of 10 but powers of two.

So IF you wrote something like 111 it means 1 times 1 plus 1 times 2 plus 1
times 4 or 7. A zero anywhere just skips a 2 to that power. If you added a
decimal point to make 111.111 the latter part would be 1/2 plus 1/4 plus 1/8
or 7/8 which combined might be 7 and 7/8. So any fractions of the form
something over 2**N can be made easily and almost everything else cannot be
made in finite stretches. How would you make 2/3 or 3 /10?

But the opposite does work. In decimal, the above becomes 7.875, and to make
other fractions of that kind you just need more digits. As it happens, all
such base-2 compatible expansions can be written in decimal because each step
is in some sense a divide by two.

7/16 = 1/2 * .875 = .4375
7/32 = 1/2 * .4375 = .21875

and so on. But this ability is a special-case artifact caused by a terminal
digit 5 always being able to be halved to give a 25 one digit further along,
and then again and again. Note 2 and 5 are factors of 10. In the more
general case, this fails. In base 7, 3/7 is written easily as 0.3 but the
same fraction in decimal is a repeating copy of .428571... which never
terminates. A number like 3/7 + 4/49 + 5/343 is written easily in base 7 (as
0.345) but generally cannot be written finitely in base 10, and the opposite
is also true: base 7 can only approximate many numbers that terminate in
base 2 or base 10. I am, of course, talking about the part to the right of
the point. Integers to the left can be written finitely in any base. It is
fractional parts that can end up never terminating.
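A small long-division sketch (made-up helper, nothing clever) makes those expansions easy to inspect in any integer base:

def expand(p, q, base, digits=12):
    # first `digits` fractional digits of p/q (0 <= p < q) in the given base
    out = []
    for _ in range(digits):
        p *= base
        out.append(p // q)
        p %= q
    return out

print(expand(3, 10, 2))    # [0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]  3/10 repeats in base 2
print(expand(3, 10, 10))   # [3, 0, 0, ...]                        and terminates in base 10
print(expand(3, 7, 7))     # [3, 0, 0, ...]                        3/7 is 0.3 in base 7
print(expand(3, 7, 10))    # [4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7, 1]  but repeats in base 10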

What about pi and e and the square root of 2? I suspect all of them have an
infinite sequence with no real repetition (over long enough stretches) in
any base! I mean an integer base, of course. The constant e in base e is
just 1.

As has been hammered home, computers have generally always dealt in one or
more combined on/off or Boolean idea so deep down they tend to have binary
circuits. At one point, programmers sometimes used base 8, octal, to group
three binary digits together, as in setting flags for a file's permissions:
one may use 01, 02 and 04 to be OR'ed with the current value to turn on
read/write/execute bits, or a combination like 7 (1+2+4) to set all of them
at once. And, again, for some purposes, base 16 (hexadecimal) is often used
with numerals extended to include a-f to represent a nibble or half byte, as
in some programs that let you set colors or whatever. But they are just a
convenience as ultimately they are used as binary for most purposes. In high
school, for a while, and just for fun, I annoyed one teacher by doing much
of my math in base 32 leaving them very perplexed as to how I got the
answers right. As far as I know, nobody seriously uses any bases not already
a power of two even for intermediate steps, outside of some interesting
stuff in number theory.

I think there have been attempts to use a decimal representation in some
accounting packages or database applications that allow any decimal numbers
to be faithfully represented and used in calculations. Generally this is not
a very efficient process, but it can handle 0.3, albeit it still has no way to
deal with transcendental numbers.

As such, since this is a Python Forum let me add you can get limited support
for some of this using the decimal module:

https://www.askpython.com/python-modules/python-decimal-module
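For example, with the standard decimal module (construct from strings so you start from the decimal value you typed rather than from a binary float):

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True
>>> 0.1 + 0.2 == 0.3                 # the binary-float version of the same question
False
>>> Decimal(1) / Decimal(3)          # but base 10 still cannot hold 1/3 exactly
Decimal('0.3333333333333333333333333333')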

But I doubt Python can be said to do things worse than just about any other
computer language when storing and using floating point. As hammered in
repeatedly, it is doing whatever is allowed in binary and many things just
cannot easily or at all be done in binary.

Let me leave you with Egyptian mathematics. Their use of fractions, WAY BACK
WHEN, only had the concept of a reciprocal of an integer. As in for any
integer N, there was a fraction of 1/N. They had a concept of 1/3 but not of
2/3 or 4/9.

So they added reciprocals to make any more complex fractions. To make 2/3
they added 1/2 plus 1/6 for example.

Since they were not stuck with any one base, all kinds of such combined
fractions could be done but of course the square root of 2 or pi were a bit
beyond them and for similar reasons.

https://en.wikipedia.org/wiki/Egyptian_fraction

My point is there are many ways humans can choose to play with numbers and
not all of them can easily do the same thing. Roman Numerals were (and
remain) a horror to do much mathematics with and especially when they play
games based on whether a symbol like X is to the left or right of another
like C as XC is 90 and CX is 110.

To do programming learn the rules that only what can be represented in
binary has a chance to ...





-----Original Message-----
From: Python-list <python-list-bounces+avigross=veriz...@python.org> On
Behalf Of Grant Edwards
Sent: Saturday, November 20, 2021 5:24 PM
To: pytho...@python.org
Subject: Re: Unexpected behaviour of math.floor, round and int functions
(rounding)

On 2021-11-20, Ben Bacarisse <ben.u...@bsb.me.uk> wrote:

> You seem to be agreeing with me. It's the floating point part that is
> the issue, not the base itself.

No, it's the base. Floating point can't represent 3/10 _because_ it's base 2
floating point. Floating point in base 10 doesn't have any problem
representing 3/10.

--
Grant
--
https://mail.python.org/mailman/listinfo/python-list

Chris Angelico

unread,
Nov 20, 2021, 8:03:17 PM11/20/21
to
On Sun, Nov 21, 2021 at 11:39 AM Avi Gross via Python-list
<pytho...@python.org> wrote:
>
> Can I suggest a way to look at it, Grant?
>
> In base 10, we represent all numbers as the (possibly infinite) sum of ten
> raised to some integral power.

Not infinite. If you allow an infinite sequence of digits, you create
numerous paradoxes, not to mention the need for infinite storage.

> 123 is 3 times 1 (ten to the zero power) plus
> 2 times 10 (ten to the one power) plus
> 1 times 100 (ten to the two power)
>
> 123.456 just extends this with
> 4 times 1/10 (ten to the minus one power) plus
> 5 times 1/100 (10**-2) plus
> 6 time 1/1000 (10**-3)
>
> In binary, all the powers are not powers of 10 but powers of two.
>
> So IF you wrote something like 111 it means 1 times 1 plus 1 times 2 plus 1
> times 4 or 7. A zero anywhere just skips a 2 to that power. If you added a
> decimal point to make 111.111 the latter part would be 1/2 plus 1/4 plus 1/8
> or 7/8 which combined might be 7 and 7/8. So any fractions of the form
> something over 2**N can be made easily and almost everything else cannot be
> made in finite stretches. How would you make 2/3 or 3 /10?

Right, this is exactly how place value works.

> But the opposite does work. In decimal, the above becomes 7.875, and to make
> other fractions of that kind you just need more digits. As it happens, all
> such base-2 compatible expansions can be written in decimal because each step
> is in some sense a divide by two.
>
> 7/16 = 1/2 * .875 = .4375
> 7/32 = 1/2 * .4375 = .21875
>
> and so on. But this ability is a special-case artifact caused by a terminal
> digit 5 always being able to be halved to give a 25 one digit further along,
> and then again and again. Note 2 and 5 are factors of 10. In the more
> general case, this fails. In base 7, 3/7 is written easily as 0.3 but the
> same fraction in decimal is a repeating copy of .428571... which never
> terminates. A number like 3/7 + 4/49 + 5/343 is written easily in base 7 (as
> 0.345) but generally cannot be written finitely in base 10, and the opposite
> is also true: base 7 can only approximate many numbers that terminate in
> base 2 or base 10. I am, of course, talking about the part to the right of
> the point. Integers to the left can be written finitely in any base. It is
> fractional parts that can end up never terminating.

If you have a number with a finite binary representation, you can
guarantee that it can be represented finitely in decimal too.
Infinitely repeating expansions come from denominators that are
coprime with the numeric base.

> What about pi and e and the square root of 2? I suspect all of them have an
> infinite sequence with no real repetition (over long enough stretches) in
> any base! I mean an integer base, of course. The constant e in base e is
> just 1.

More than "suspect". This has been proven. That's what transcendental means.

I don't think "base e" means the same thing that "base ten" does.
(Normally you'd talk about a base e *logarithm*, which is a completely
different concept.) But if you try to work with a transcendental base
like that, it would be impossible to represent any integer finitely.

(Side point: There are other representations that have different
implications about what repeats and what doesn't. For instance, the
decimal expansion for a square root doesn't repeat, but the continued
fraction for the same square root will. For instance, 7**0.5 is
2;1,1,1,4,1,1,1,4... with an infinitely repeating four-element unit.)
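A quick sketch of that, using the standard recurrence for the continued fraction of a square root (all integer arithmetic, so no float behaviour is involved):

from math import isqrt

def sqrt_cf(n, terms=9):
    # first `terms` continued-fraction terms of sqrt(n), n not a perfect square
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    out = [a0]
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

print(sqrt_cf(7))   # [2, 1, 1, 1, 4, 1, 1, 1, 4] -- the 1,1,1,4 block repeats forever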

> I think there have been attempts to use a decimal representation in some
> accounting packages or database applications that allow any decimal numbers
> to be faithfully represented and used in calculations. Generally this is not
> a very efficient process but it can handle 0.3 albeit still have no way to
> deal with transcendental numbers.

Fixed point has been around for a long time (the simplest example
being "work in cents and use integers"), but actual decimal
floating-point is quite unusual. Some databases support it, and REXX
used that as its only numeric form, but it's not hugely popular.

> Let me leave you with Egyptian mathematics. Their use of fractions, WAY BACK
> WHEN, only had the concept of a reciprocal of an integer. As in for any
> integer N, there was a fraction of 1/N. They had a concept of 1/3 but not of
> 2/3 or 4/9.
>
> So they added reciprocals to make any more complex fractions. To make 2/3
> they added 1/2 plus 1/6 for example.
>
> Since they were not stuck with any one base, all kinds of such combined
> fractions could be done but of course the square root of 2 or pi were a bit
> beyond them and for similar reasons.
>
> https://en.wikipedia.org/wiki/Egyptian_fraction

It's interesting as a curiosity, but it makes arithmetic extremely difficult.

> My point is there are many ways humans can choose to play with numbers and
> not all of them can easily do the same thing. Roman Numerals were (and
> remain) a horror to do much mathematics with and especially when they play
> games based on whether a symbol like X is to the left or right of another
> like C as XC is 90 and CX is 110.

Of course, there are myriad ways to do things. And each one has
implications. Which makes it even more surprising that, when someone
sits down at a computer and asks it to do arithmetic, they can't
handle the different implications.

ChrisA

Grant Edwards

unread,
Nov 20, 2021, 8:18:41 PM11/20/21
to
On 2021-11-21, Chris Angelico <ros...@gmail.com> wrote:

>> I think there have been attempts to use a decimal representation in some
>> accounting packages or database applications that allow any decimal numbers
>> to be faithfully represented and used in calculations. Generally this is not
>> a very efficient process but it can handle 0.3 albeit still have no way to
>> deal with transcendental numbers.
>
> Fixed point has been around for a long time (the simplest example
> being "work in cents and use integers"), but actual decimal
> floating-point is quite unusual. Some databases support it, and REXX
> used that as its only numeric form, but it's not hugely popular.

My recollection is that it was quite common back in the days before FP
hardware was "a thing" on small computers. CP/M and DOS compilers for
various languages often gave the user a choice between binary FP and
decimal (BCD) FP.

If you were doing accounting you chose decimal. If you were doing
science, you chose binary (better range and precision for the same
number of bits of storage).

Once binary FP hardware became available, decimal FP support was
abandoned.

--
Grant

Avi Gross

unread,
Nov 20, 2021, 8:55:18 PM11/20/21
to
Not at all, Rob. I am not intending to demean mathematicians as one of my degrees is in that subject and I liked it. I mean that some things in mathematics are not as intuitive to people when they first encounter them, let alone those who never see them and then marvel at results and have expectations.

The example I gave is now indeed on quite firm footing, but for quite a while it was not.

What we have in this forum recently is people taking pot shots at aspects of Python where in a similar way, they know not what is actually happening and insist it be some other way. Some people also assume that an email message works any way they want and post things to a text-only group that others cannot see or that become badly formatted, or complain when a very large attachment makes a message be rejected. They also expect SPAM checkers to be perfect and never reject valid messages and so on.

Things are what they are, not what we wish them to be. And many kinds of pure mathematics live in a Platonic world and must be used with care. Calculus is NOT on a firm footing when any of the ideas in it are violated. A quantum mechanical universe at a deep level does not have continuity, so continuous functions may not really exist and there can be no such thing as an infinitesimal smaller than any epsilon, and so on. Much of what we see at that level includes things like a probabilistic view of an electron cloud forming the probability that an electron (which is not a mathematical point) is at any moment at a particular location around an atom. But some, like the p-orbital, have a sort of 3-D figure-eight shape (sort of a pair of teardrops) where there is a plane between the two halves with a mathematically zero probability of the electron ever being there. Yet quantum tunneling effects let it cross through that plane without actually ever being in the plane, because various kinds of quantum jumps in a very wiggly space-time fabric can and will happen in a way normal mathematics may not predict or allow.

Which brings me back to the Python analogy of algorithms that gradually zoom in on an answer you might view as a local maximum or minimum. It may be that with infinite precision calculations, you might zoom in ever closer to the optimal answer where the tangent to such a curve has slope zero. Your program would never halt, though, if the condition was that it be exactly at that point to an infinite number of decimal places. This is a place where being near the answer (or in this case, near zero) is a good enough heuristic solution. There are many iterative problems (and recursive ones) where a close-enough condition is adequate. Some libraries incorporated into languages like Python use an infinite series to calculate something like sin(x) and many other such things, including potentially e and pi and various roots. Many of them can safely stop after N significant digits are locked into place, and especially when all available significant digits are locked. Running them further gains nothing much. So code like:

(previous_estimate - current_estimate) == 0

may be a bad idea compared to something like:

abs(previous_estimate - current_estimate) < epsilon
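For instance, a sketch of that stopping rule with Newton's method for a square root (the epsilon and the iteration cap are arbitrary choices, not magic constants):

def newton_sqrt(x, epsilon=1e-12, max_iter=100):
    # stop when successive estimates agree to within epsilon, not when they are equal
    estimate = x if x > 1 else 1.0
    for _ in range(max_iter):
        previous, estimate = estimate, (estimate + x / estimate) / 2
        if abs(previous - estimate) < epsilon:
            break
    return estimate

print(newton_sqrt(2))   # approximately 1.4142135623731, within the chosen tolerance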

No disrespect to mathematics intended. My understanding is that mathematics can only be used validly if all underlying axioms are assumed to be true. When (as in the real world or computer programs) some axioms are violated, watch out. Matrix multiplication does not have a symmetry so A*B in general is not the same as B*A and even worse, may be a matrix of a different dimension. A 4x2 matrix and a 2x4 matrix can result in either a 2x2 or 4x4 for example. The violation of that rule may bother some people but is not really an issue as any mathematics that has an axiom for say an abelian group, simply is not expected to apply for a non-abelian case.




-----Original Message-----
From: Python-list <python-list-bounces+avigross=veriz...@python.org> On Behalf Of Rob Cliffe via Python-list
Sent: Saturday, November 20, 2021 6:19 PM
To:
Subject: Re: Unexpected behaviour of math.floor, round and int functions (rounding)



--
https://mail.python.org/mailman/listinfo/python-list

Chris Angelico

unread,
Nov 20, 2021, 9:01:54 PM11/20/21
to
On Sun, Nov 21, 2021 at 12:56 PM Avi Gross via Python-list
<pytho...@python.org> wrote:
>
> Not at all, Robb. I am not intending to demean Mathematicians as one of my degrees is in that subject and I liked it. I mean that some things in mathematics are not as intuitive to people when they first encounter them, let alone those who never see them and then marvel at results and have expectations.
>
> The example I gave, is NOW, indeed on quite firm footing but for quite a while was not.
>
> What we have in this forum recently is people taking pot shots at aspects of Python where in a similar way, they know not what is actually happening and insist it be some other way. Some people also assume that an email message work any way they want and post things to a text-only group that other cannot see or become badly formatted or complain why a very large attachment makes a message be rejected. They also expect SPAM checkers to be perfect and never reject valid messages and so on.
>
> Things are what they are, not what we wish them to be. And many kinds of pure mathematics live in a Platonic world and must be used with care. Calculus is NOT on a firm footing when any of the ideas in it are violated. A quantum Mechanical universe at a deep level does not have continuity so continuous functions may not really exist and there can be no such thing as an infinitesimal smaller than any epsilon and so on. Much of what we see at that level includes things like a probabilistic view of an electron cloud forming the probability that an electron (which is not a mathematical point) is at any moment at a particular location around an atom. But some like the p-orbital have a sort of 3-D figure eight shape (sort of a pair of teardrops) where there is a plane between the two halves with a mathematically zero probability of the electron ever being there. Yet, quantum tunneling effects let it dross through that plane without actually ever being in the plane because various kinds of
> quantum jumps in a very wiggly space-time fabric can and will happen in a way normal mathematics may not predict or allow.
>
> Which brings me back to the python analogy of algorithms implemented that gradually zoom in on an answer you might view as a local maximum or minimum. It may be that with infinite precision calculations, you might zoom in ever closer to the optimal answer where the tangent to such a curve has slope zero. Your program would never halt though if the condition was that it be exactly at that point to an infinite number of decimal places. This is a place I do not agree that the concept of being near the answer (or in this case being near zero) is not a good enough heuristic solution. There are many iterative problems (and recursive ones) where a close-enough condition is adequate. Some libraries incorporated into languages like Python use an infinite series to calculate something like sin(x) and many other such things, including potentially e and pi and various roots. Many of them can safely stop after N significant digits are locked into place, and especially when all available signific
> ant digits are locked. Running them further gains nothing much. So code like:
>
> (previous_estimate - current_estimate) == 0
>
> may be a bad idea compared to something like:
>
> abs(previous_estimate - current_estimate) < epsilon
>
> No disrespect to mathematics intended. My understanding is that mathematics can only be used validly if all underlying axioms are assumed to be true. When (as in the real world or computer programs) some axioms are violated, watch out. Matrix multiplication does not have a symmetry so A*B in general is not the same as B*A and even worse, may be a matrix of a different dimension. A 4x2 matrix and a 2x4 matrix can result in either a 2x2 or 4x4 for example. The violation of that rule may bother some people but is not really an issue as any mathematics that has an axiom for say an abelian group, simply is not expected to apply for a non-abelian case.
>

All of this is true, but utterly irrelevant to floating-point. If your
algorithm is inherently based on repeated estimates (Newton's method,
for instance), then you can iterate until you're "happy enough" with
the result. That's fine. But that is nothing whatsoever to do with the
complaint that 0.1+0.2!=0.3 or that you should "never use == with
floats" or any of those claims. It's as relevant as saying that my
ruler claims to be 30cm long but is actually nearly 310mm long, and
therefore the centimeter is an inherently unreliable unit and anything
measured in it should be treated as an estimate.

ChrisA

Rob Cliffe

unread,
Nov 20, 2021, 9:19:22 PM11/20/21
to


On 21/11/2021 01:02, Chris Angelico wrote:
>
> If you have a number with a finite binary representation, you can
> guarantee that it can be represented finitely in decimal too.
> Infinitely repeating expansions come from denominators that are
> coprime with the numeric base.
>
>
Not quite, e.g. 1/14 is a repeating decimal but 14 and 10 are not coprime.
I believe it is correct to say that infinitely recurring expansions
occur when the denominator is divisible by a prime that does not divide
the base.
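A little sketch of that rule (made-up helper: strip from the reduced denominator every prime factor it shares with the base and see whether anything is left):

from math import gcd

def terminates(p, q, base):
    # True if p/q has a finite expansion in the given base
    q //= gcd(p, q)                # reduce the fraction first
    g = gcd(q, base)
    while g > 1:
        while q % g == 0:
            q //= g
        g = gcd(q, base)
    return q == 1

print(terminates(1, 14, 10))   # False: 14 contains the prime 7, which 10 lacks
print(terminates(7, 14, 10))   # True:  7/14 reduces to 1/2
print(terminates(1, 3, 10))    # False: hence 0.333... repeating
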
Rob Cliffe

Avi Gross

unread,
Nov 20, 2021, 9:37:00 PM11/20/21
to
Chris,

You know I am going to fully agree with you that within some bounds, any combination of numbers that can accurately be represented will continue to be adequately represented under some operations like addition and subtraction and multiplication up to any point where they do not overflow (or underflow) the storage mechanism.

Division may be problematic and especially division by zero.

But bring in any number that is not fully and accurately representable, and it can poison everything much in the way including an NA poisons any attempts to take a sum or mean. Any calculation that includes an e is an example.

Of course there is not much in computing that necessarily relies on representable numbers, and especially not when the numbers are dynamically gotten, as in from a file or user, and already not quite what is representable. I can even imagine a situation where some fraction is represented in a double and then "copied" into a regular single-precision float and some of it is lost/truncated.

I get your point about URL's but I was really focused at that point on filenames as an example on systems where they are not case sensitive. Some programming languages had a similar concept. Yes, you can have URL with more complex comparison functions needed including when something lengthens them or whatever. In one weird sense, as in you GET TO THE SAME PAGE, any URL that redirects you to another might be considered synonymous even if the two look nothing at all alike.

To continue, I do not mean to give the impression that comparing representable numbers with == is generally wrong. I am saying there are places where there may be good reasons for the alternative.

I can imagine an algorithm that starts with representable numbers and maybe at each stage continues to generate representable numbers, such as one of the hill climbing algorithms I am talking about. It may end up overshooting a bit past the peak and next round overshooting back to the other side and getting stuck in a loop. One way out is to keep track of past locations and abort when the cycle is seen to be repeating. Another is to leave when the result seems close enough.

However, my comments about over/underflow may apply here as enough iterations with representable numbers may at some point result in the kind of rounding error that warps the results of further calculations.

I note some of your argument is the valid difference between when your knowledge of the input numbers is uncertain and what the computer does with them. Yes, my measures of the height/width/depth may be uncertain and it is not the fault of a Python program if it multiplies them to provide an exact answer as if in a mathematical world where numbers are normally precise. I am saying that the human using the program needs external info before they use the answer. In my example, I would note the rule that when dealing with numbers that are only significant to some number of digits, the final calculation should often be rounded down according to some rules. So instead of printing out the volume as 6181.806, the program may call some function like round() as in round(10.1*20.2*30.3, 1) so it displays 6181.8 instead. The Python language does what you ask and not what you do not ask.

Now a statistical program or perhaps an AI or Machine Learning program I write, might actually care about the probabilistic effects. I often create graphs that include perhaps a smoothed curve of some kind that approximates the points in the data as well as a light gray ribbon that represents some kind of error bands above and below and which suggest the line not be taken too seriously and there may be something like a 95% chance the true values are within the gray zone an even some chance they may be beyond it in an even lighter series of gray (color is not the issue) zones representing a 1% chance or even less.

Such approaches apply if the measurement errors are assumed to be as much as .1 inches for each measure independently. The smallest volume would be (10.1 - 0.1)*(20.2 - 0.1)*(30.3 - 0.1) = 6070.2

The largest possible volume if all my measures were off by that amount in the other direction would be:

(10.1 + 0.1)*(20.2 + 0.1)*(30.3 + 0.1) = 6294.6

The above are rounded to one decimal place.

The Python program evaluates all the above representable numbers perfectly, albeit I doubt they are all representable in binary. But for human purposes, the actual answer for a volume has some uncertainty built-in to the method of measurement and perhaps others such as the inner sides of the box may not be perfectly flat or the angles things join at may not be precisely 90 degrees and filling it with something like oranges may not fit as much more if you enlarge it a tad as they may not stack much better from a minor change.

Python is not to blame in these cases if not programmed well enough. And I suggest the often minor errors introduced by a representation being not quite right in the last available decimal representation places (binary underneath) may be much smaller than these other introduced errors in the reality of such a superficially simple calculation.

From my side of this discussion, I do not see much we basically disagree on, albeit we may word some ideas differently. I think I may opt out of further comments unless something new is mentioned and it does relate to Python or similar programming languages. I am spending too much time today on this one and another in an R mailing list and other things elsewhere so my main plans for the day have fallen behind 😉

Avi

-----Original Message-----
From: Python-list <python-list-bounces+avigross=veriz...@python.org> On Behalf Of Chris Angelico
Sent: Saturday, November 20, 2021 6:23 PM
To: pytho...@python.org
Subject: Re: Unexpected behaviour of math.floor, round and int functions (rounding)

--
https://mail.python.org/mailman/listinfo/python-list

Chris Angelico

unread,
Nov 20, 2021, 9:40:58 PM11/20/21
to
True, my bad. I can't remember if there's a term for that, but your
description is correct.

ChrisA

Avi Gross

unread,
Nov 20, 2021, 10:09:48 PM11/20/21
to
Sorry Chris,

I was talking mathematically, where a number like pi or like 1/7 conceptually
needs an infinite number of digits, each added to a growing sum of ever
smaller powers of 10, in the decimal case.

In programming, and in the binary storage, the number of such is clearly
limited.

Is there any official limit on the maximum size of a python integer other
than available memory?

And replying sparsely, yes, pretty much nothing can be represented completely
in base e other than integral multiples of e, perhaps. No other numbers,
especially integers, can be linear combinations of e or e raised to an
integral power.

Having said that, if you throw in another transcendental called pi and
expand to include the complex number i, then you can weirdly combine them
another way to make -1. I am sure you have seen equations like:

e**(pi*i) +1 = 0

By extension, you can make any integer by adding multiple such entities
together.

On another point, an indefinitely repeating continued fraction is sort of
similar to an indefinitely summed series. Both can exist and demonstrate a
regularity even when the actual digits of the number seemingly show no pattern.



-----Original Message-----
From: Python-list <python-list-bounces+avigross=veriz...@python.org> On
Behalf Of Chris Angelico
Sent: Saturday, November 20, 2021 8:03 PM
To: pytho...@python.org
Subject: Re: Unexpected behaviour of math.floor, round and int functions
(rounding)

If you have a number with a finite binary representation, you can guarantee
that it can be represented finitely in decimal too.
Infinitely repeating expansions come from denominators that are coprime with
the numeric base.

> What about pi and e and the square root of 2? I suspect all of them
> have an infinite sequence with no real repetition (over long enough
> stretches) in any base! I mean an integer base, of course. The
> constant e in base e is just 1.

More than "suspect". This has been proven. That's what transcendental means.

I don't think "base e" means the same thing that "base ten" does.
(Normally you'd talk about a base e *logarithm*, which is a completely
different concept.) But if you try to work with a transcendental base like
that, it would be impossible to represent any integer finitely.

(Side point: There are other representations that have different
implications about what repeats and what doesn't. For instance, the decimal
expansion for a square root doesn't repeat, but the continued fraction for
the same square root will. For instance, 7**0.5 is 2;1,1,1,4,1,1,1,4... with
an infinitely repeating four-element unit.)

> I think there have been attempts to use a decimal representation in
> some accounting packages or database applications that allow any
> decimal numbers to be faithfully represented and used in calculations.
> Generally this is not a very efficient process but it can handle 0.3
> albeit still have no way to deal with transcendental numbers.

Fixed point has been around for a long time (the simplest example being
"work in cents and use integers"), but actual decimal floating-point is
quite unusual. Some databases support it, and REXX used that as its only
numeric form, but it's not hugely popular.

> Let me leave you with Egyptian mathematics. Their use of fractions,
> WAY BACK WHEN, only had the concept of a reciprocal of an integer. As
> in for any integer N, there was a fraction of 1/N. They had a concept
> of 1/3 but not of
> 2/3 or 4/9.
>
> So they added reciprocals to make any more complex fractions. To make
> 2/3 they added 1/2 plus 1/6 for example.
>
> Since they were not stuck with any one base, all kinds of such
> combined fractions could be done but of course the square root of 2 or
> pi were a bit beyond them and for similar reasons.
>
> https://en.wikipedia.org/wiki/Egyptian_fraction

It's interesting as a curiosity, but it makes arithmetic extremely
difficult.

> My point is there are many ways humans can choose to play with numbers
> and not all of them can easily do the same thing. Roman Numerals were
> (and
> remain) a horror to do much mathematics with and especially when they
> play games based on whether a symbol like X is to the left or right of
> another like C as XC is 90 and CX is 110.

Of course, there are myriad ways to do things. And each one has
implications. Which makes it even more surprising that, when someone sits
down at a computer and asks it to do arithmetic, they can't handle the
different implications.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list

Greg Ewing

unread,
Nov 21, 2021, 12:39:43 AM11/21/21
to
On 21/11/21 2:18 pm, Grant Edwards wrote:
> My recollection is that it was quite common back in the days before FP
> hardware was "a thing" on small computers. CPM and DOS compilers for
> various languages often gave the user a choice between binary FP and
> decimal (BCD) FP.

It's also very common for handheld calculators to work in decimal.
Most of HP's classic calculators used a CPU that was specifically
designed for doing BCD arithmetic, and many versions of it didn't
even have a way of doing arithmetic in binary!

--
Greg

Grant Edwards

unread,
Nov 21, 2021, 10:58:45 AM11/21/21
to
Yep, IIRC, it was a 4-bit processor because 4 bits is what it takes to
represent one decimal digit. The original Intel microprocessor was also a
4-bit processor designed around BCD rather than base-2 arithmetic
operations.

--
Grant



Peter J. Holzer

unread,
Nov 21, 2021, 1:32:40 PM11/21/21
to
On 2021-11-19 12:43:07 +0100, ast wrote:
> Le 19/11/2021 à 03:51, MRAB a écrit :
> > On 2021-11-19 02:40, 2QdxY4Rz...@potatochowder.com wrote:
> > > On 2021-11-18 at 23:16:32 -0300,
> > > René Silva Valdés <rene.sil...@gmail.com> wrote:
> > > > Hello, I would like to report the following issue:
> > > >
> > > > Working with floats i noticed that:
> > > >
> > > > int(23.99999999999999/12) returns 1, and
> > > > int(23.999999999999999/12) returns 2
> > > >
> > > > This implies that int() function is rounding ...
[...]
> > Python 3.10.0 (tags/v3.10.0:b494f59, Oct  4 2021, 19:00:18) [MSC v.1929
> > 64 bit (AMD64)] on win32
> > Type "help", "copyright", "credits" or "license" for more information.
> > >>> 23.99999999999999 == 24
> > False
> > >>> 23.999999999999999 == 24
> > True
>
> >>> 0.3 + 0.3 + 0.3 == 0.9
> False

Fascinating. The OP ran into the fact that FP numbers have a limited
number of digits in the mantissa (completely independent of the base).
Someone else mentions 0.3 and everybody takes off on that tangent.

hp

--
_ | Peter J. Holzer | Story must make more sense than reality.
|_|_) | |
| | | h...@hjp.at | -- Charles Stross, "Creative writing
__/ | http://www.hjp.at/ | challenge!"

Peter J. Holzer

unread,
Nov 21, 2021, 1:41:10 PM11/21/21
to
On 2021-11-20 03:25:53 +0000, Ben Bacarisse wrote:
> Chris Angelico <ros...@gmail.com> writes:
>
> > It does mean exactly what it meant in grade school, just as 1/3 means
> > exactly what it meant in grade school. Now try to represent 1/3 on a
> > blackboard, as a decimal fraction. If that's impossible, does it mean
> > that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?
>
> As you know, it is possible, but let's say we outlaw any finite notation
> for repeated digits... Why should I convert 1/3 to this particular
> apparently unsuitable representation?

Because you want to use tools which require that particular
representation? Like for example a pocket calculator?

> I will write 1/3 and manipulate that number using factional notation.

On paper, maybe. But if after a few more steps you have fractions like
37645 / 9537654, you might reconsider that choice.

In a program? Yes, there are cases where you really want to use
fractions. That's why fractions.Fraction exists in Python (and similar
datatypes in many other programming languages). But they have their
limits, too (no π or √2) and for most problems you don't need them. I
don't actually think I ever used fractions.Fraction in my 7 years of
Python programming. (I think I used Math::BigRat in Perl, but I've been
programming in Perl for a lot longer.)

Chris Angelico

unread,
Nov 21, 2021, 1:44:20 PM11/21/21
to
On Mon, Nov 22, 2021 at 5:42 AM Peter J. Holzer <hjp-p...@hjp.at> wrote:
> (I think I used Math::BigRat in Perl, but I've been
> programming in Perl for a lot longer.)
>

Rodents Of Unusual Size? I don't think they exist...

ChrisA

Peter J. Holzer

unread,
Nov 21, 2021, 1:59:46 PM11/21/21
to
On 2021-11-21 10:57:55 +1100, Chris Angelico wrote:
> And if decimal floating point were common, other "surprise" behaviour
> would be cited, like how x < y and (x+y)/2 < x.

Yup. Took me a bit to find an example, but this can happen. My HP-48
calculator uses a mantissa of 12 decimal digits.

666666666666 + 666666666667 = 1333333333330
1333333333330 / 2 = 666666666665

666666666665 < 666666666666. QED.
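The same effect can be reproduced in Python by shrinking the decimal context to 12 significant digits (a rough sketch, not an HP-48 emulator):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 12
>>> x, y = Decimal(666666666666), Decimal(666666666667)
>>> (x + y) / 2          # the sum is rounded to 12 digits before the division
Decimal('666666666665')
>>> (x + y) / 2 < x
True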


Greg Ewing

unread,
Nov 21, 2021, 8:02:21 PM11/21/21
to
On 22/11/21 4:58 am, Grant Edwards wrote:
> Yep, IIRC, it was a 4 bit processor because 4 bits is what it takes to
> represent one decimal digit.

That was the Saturn, first used in the HP-71B. The original
architecture (known as the "Nut") was weirder than that. It operated
serially on 56 bit words (14 BCD digits), and the instructions
had options for operating on various fields of a floating-point
number (mantissa, exponent, sign, etc.)

--
Greg

ast

unread,
Nov 23, 2021, 6:54:16 AM11/23/21
to
Le 19/11/2021 à 21:17, Chris Angelico a écrit :
> On Sat, Nov 20, 2021 at 5:08 AM ast <ast@invalid> wrote:
>>
>> Le 19/11/2021 à 03:51, MRAB a écrit :
>>> On 2021-11-19 02:40, 2QdxY4Rz...@potatochowder.com wrote:
>>>> On 2021-11-18 at 23:16:32 -0300,
>>>> René Silva Valdés <rene.sil...@gmail.com> wrote:
>>>>

>>
>> >>> 0.3 + 0.3 + 0.3 == 0.9
>> False
>
> That's because 0.3 is not 3/10. It's not because floats are
> "unreliable" or "inaccurate". It's because the ones you're entering
> are not what you think they are.
>
> When will people understand this?
>
> (Probably never. Sigh.)
>
> ChrisA
>

I posted that to make people aware of the danger of float comparison,
not because I was not understanding what happened.

We can see there is a difference in the lsb, due to rounding.

>>> (0.3+0.3+0.3).hex()
'0x1.cccccccccccccp-1'
>>> 0.9.hex()
'0x1.ccccccccccccdp-1'
>>>

An isclose() function is provided in module math to do float
comparison safely.

>>> math.isclose(0.3+0.3+0.3, 0.9)
True

Chris Angelico

unread,
Nov 23, 2021, 11:08:46 AM11/23/21
to
On Wed, Nov 24, 2021 at 3:04 AM ast <ast@invalid> wrote:
>
> Le 19/11/2021 à 21:17, Chris Angelico a écrit :
> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast@invalid> wrote:
> >>
> >> Le 19/11/2021 à 03:51, MRAB a écrit :
> >>> On 2021-11-19 02:40, 2QdxY4Rz...@potatochowder.com wrote:
> >>>> On 2021-11-18 at 23:16:32 -0300,
> >>>> René Silva Valdés <rene.sil...@gmail.com> wrote:
> >>>>
>
> >>
> >> >>> 0.3 + 0.3 + 0.3 == 0.9
> >> False
> >
> > That's because 0.3 is not 3/10. It's not because floats are
> > "unreliable" or "inaccurate". It's because the ones you're entering
> > are not what you think they are.
> >
> > When will people understand this?
> >
> > (Probably never. Sigh.)
> >
> > ChrisA
> >
>
> I posted that to make people aware of danger of float comparison,
> not because I was not understanding what happened.

And I posted to show that equality is not the problem, and that float
comparison is not dangerous.

> We can see there is a difference on the lsb, due to rounding.
>
> >>> (0.3+0.3+0.3).hex()
> '0x1.cccccccccccccp-1'
> >>> 0.9.hex()
> '0x1.ccccccccccccdp-1'
> >>>
>
> An isclose() function is provided in module math to do float
> comparison safely.
>
> >>> math.isclose(0.3+0.3+0.3, 0.9)
> True

This is why isclose() was so controversial: it is very very easy to
misuse it. Like this.

ChrisA