
4 hundred quadrillionth?


sean...@gmail.com

May 21, 2009, 5:05:25 PM
The explanation in my introductory Python book is not very
satisfying, and I am hoping someone can explain the following to me:

>>> 4 / 5.0
0.80000000000000004

4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end?
It bothers me.

MRAB

May 21, 2009, 5:17:04 PM
to pytho...@python.org

Christian Heimes

May 21, 2009, 5:36:04 PM
to pytho...@python.org
sean...@gmail.com wrote:

Welcome to IEEE 754 floating point land! :)

Christian

sean...@gmail.com

May 21, 2009, 5:41:14 PM
On May 21, 5:36 pm, Christian Heimes <li...@cheimes.de> wrote:
> seanm...@gmail.com wrote:

Thanks for the link and the welcome. Now onward to Bitwise
Operations....

Sean

Carl Banks

May 21, 2009, 5:53:38 PM
On May 21, 2:05 pm, seanm...@gmail.com wrote:
> The explanation in my introductory Python book is not very
> satisfying, and I am hoping someone can explain the following to me:
>
> >>> 4 / 5.0
>
> 0.80000000000000004
>
> 4 / 5.0 is 0.8. No more, no less.

That would depend on how you define the numbers and division.

What you say is correct for real numbers and field division. It's not
true for the types of numbers Python uses, which are not real numbers.

Python numbers are floating point numbers, defined (approximately) by
IEEE 754, and they behave similarly to, but not exactly like, real
numbers. There will always be small round-off errors, and there is
nothing you can do about it except understand it.


> It bothers me.

Oh well.

You can try rational numbers if you want; I think they were added in
Python 2.6. But if you're not careful the denominators can get
ridiculously large.
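
The rational type Carl mentions is the `fractions` module's Fraction,
new in Python 2.6. A minimal sketch of both the exactness and the
growing-denominator caveat:

```python
from fractions import Fraction

# Exact rational arithmetic: no binary rounding at all.
assert Fraction(4, 5) * 5 == 4

# The caveat: repeated arithmetic can blow up the denominator.
total = sum(Fraction(1, n) for n in range(1, 11))
print(total)  # 7381/2520 -- already a four-digit denominator
```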


Carl Banks

Chris Rebert

May 21, 2009, 6:36:07 PM
to Carl Banks, pytho...@python.org

The `decimal` module's Decimal type is also an option to consider:

Python 2.6.2 (r262:71600, May 14 2009, 16:34:51)
>>> from decimal import Decimal
>>> Decimal(4)/Decimal(5)
Decimal('0.8')

Cheers,
Chris
--
http://blog.rebertia.com

norseman

May 21, 2009, 6:45:57 PM
to sean...@gmail.com, pytho...@python.org
======================================

Machine architecture, the actual implementation of logic on the chip, and
what the compiler maker did all add up to create rounding errors. I
have read that Python, if left to its own devices, will output everything it
computed. I guess the idea is to show
1) Python's accuracy and
2) what was left over,
so the picky people can have something to gnaw on.

Astrophysicists, astronomers, and the like may want that.
If you work much in finite math you may want to test the combination to see if
it will allow the accuracy you need. Or do you need to change machines?

Beyond that - just fix the money at 2, gas pumps at 3 and the
sine/cosine at 8 and let it ride. :)


Steve

Grant Edwards

May 21, 2009, 7:09:57 PM

Floating point is sort of like quantum physics: the closer you
look, the messier it gets.

--
Grant

Carl Banks

May 21, 2009, 7:56:52 PM
On May 21, 3:45 pm, norseman <norse...@hughes.net> wrote:
> Beyond that - just fix the money at 2, gas pumps at 3 and the
> sine/cosine at 8 and let it ride. :)


Or just use print.

>>> print 4.0/5.0
0.8

Since the interactive prompt is usually used by programmers who are
inspecting values, it makes a little more sense to print enough digits
to give an unambiguous representation.
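
The difference Carl describes is just a matter of how many digits get
formatted. A sketch reproducing both behaviours with format strings (so
it works the same regardless of interpreter version):

```python
x = 4 / 5.0

# str() historically used 12 significant digits, which hides the error:
print('%.12g' % x)   # 0.8

# repr() in Python 2.x used 17, which exposes it:
print('%.17g' % x)   # 0.80000000000000004
```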


Carl Banks

MRAB

May 21, 2009, 8:01:02 PM
to pytho...@python.org
I have the same feeling towards databases.

Gary Herron

May 21, 2009, 8:19:42 PM
to pytho...@python.org
MRAB wrote:

+1 as QOTW


And just to add one bit of clarity: This problem has nothing to do with
the OP's division of 4 by 5.0, but rather that the value of 0.8 itself
cannot be represented exactly in IEEE 754. Just try

>>> repr(0.8)  # No division needed
'0.80000000000000004'

Gary Herron

Rob Clewley

May 21, 2009, 8:33:25 PM
to pytho...@python.org
On Thu, May 21, 2009 at 8:19 PM, Gary Herron <ghe...@islandtraining.com> wrote:
> MRAB wrote:
>>
>> Grant Edwards wrote:
>>>
>>> On 2009-05-21, Christian Heimes <li...@cheimes.de> wrote:
>>>>
>>>> sean...@gmail.com wrote:
>>>>>
>>>>> The explanation in my introductory Python book is not very
>>>>> satisfying, and I am hoping someone can explain the following to me:
>>>>>
>>>>>>>> 4 / 5.0
>>>>>
>>>>> 0.80000000000000004
>>>>>
>>>>> 4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end.
>>>>> It bothers me.
>>>>
>>>> Welcome to IEEE 754 floating point land! :)
>>>

FYI you can explore the various possible IEEE-style implementations
with my python simulator of arbitrary floating or fixed precision
numbers:

http://www2.gsu.edu/~matrhc/binary.html

R. David Murray

May 21, 2009, 8:50:48 PM
to pytho...@python.org
Gary Herron <ghe...@islandtraining.com> wrote:
> MRAB wrote:
> > Grant Edwards wrote:
> +1 as QOTW
>
> And just to add one bit of clarity: This problem has nothing to do with
> the OP's division of 4 by 5.0, but rather that the value of 0.8 itself
> cannot be represented exactly in IEEE 754. Just try
>
> >>> repr(0.8)  # No division needed
> '0.80000000000000004'

Python 3.1b1+ (py3k:72432, May 7 2009, 13:51:24)
[GCC 4.1.2 (Gentoo 4.1.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 4 / 5.0
0.8
>>> print(repr(0.8))
0.8

In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
point algorithm for Python so that the shortest repr that will round
trip correctly is what is used as the floating point repr....

--David

Gary Herron

May 21, 2009, 9:30:17 PM
to pytho...@python.org
R. David Murray wrote:
> Gary Herron <ghe...@islandtraining.com> wrote:
>
>> MRAB wrote:
>>
>>> Grant Edwards wrote:
>>>
>> +1 as QOTW
>>
>> And just to add one bit of clarity: This problem has nothing to do with
>> the OP's division of 4 by 5.0, but rather that the value of 0.8 itself
>> cannot be represented exactly in IEEE 754. Just try
>>
>> >>> repr(0.8)  # No division needed
>> '0.80000000000000004'
>>
>
> Python 3.1b1+ (py3k:72432, May 7 2009, 13:51:24)
> [GCC 4.1.2 (Gentoo 4.1.2)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>
>>>> 4 / 5.0
>>>>
> 0.8
>
>>>> print(repr(0.8))
>>>>
> 0.8
>
> In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
> point algorithm for Python so that the shortest repr that will round
> trip correctly is what is used as the floating point repr....
>
> --David
>

Which won't change the fact that 0.8 and lots of other favorite floats
are still not representable exactly, but it will hide this fact from
most newbies. One of the nicer results of this will be that these
(almost) weekly questions and discussions will become a thing of the
past.

With a sigh of relief,
Gary Herron

AggieDan04

May 21, 2009, 9:56:08 PM
On May 21, 5:45 pm, norseman <norse...@hughes.net> wrote:

> seanm...@gmail.com wrote:
> > The explanation in my introductory Python book is not very
> > satisfying, and I am hoping someone can explain the following to me:
>
> >>>> 4 / 5.0
> > 0.80000000000000004
>
> > 4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end.
> > It bothers me.
>
> ======================================
>
> Machine architecture, the actual implementation of logic on the chip, and
> what the compiler maker did all add up to create rounding errors. I
> have read that Python, if left to its own devices, will output everything it
> computed. I guess the idea is to show
>         1) Python's accuracy and
>         2) what was left over,
> so the picky people can have something to gnaw on.

If you want to be picky, the exact value is
0.8000000000000000444089209850062616169452667236328125 (i.e.,
3602879701896397/2**52). Python's repr function rounds numbers to 17
significant digits. This is the minimum that ensures that float(repr
(x)) == x for all x (using IEEE 754 double precision).
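
Both halves of this claim are easy to verify from Python itself
(constructing Fraction and Decimal directly from a float needs 2.7 or
3.x; on 2.6 use Fraction.from_float):

```python
from fractions import Fraction
from decimal import Decimal

# The exact binary value stored for 4 / 5.0: 3602879701896397 / 2**52.
print(Fraction(0.8))   # 3602879701896397/4503599627370496
assert Fraction(0.8) == Fraction(3602879701896397, 2**52)

# Decimal shows the same value written out in full:
print(Decimal(0.8))    # 0.8000000000000000444089209850062616169452667236328125
```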

> Astrophysicists, astronomers, and the like may want that.
> If you work much in finite math you may want to test the combination to see if
>   it will allow the accuracy you need. Or do you need to change machines?

The error in this example is roughly equivalent to the width of a red
blood cell compared to the distance between Earth and the sun. There
are very few applications that need more accuracy than that.


AggieDan04

May 21, 2009, 10:01:03 PM
On May 21, 5:36 pm, Chris Rebert <c...@rebertia.com> wrote:

> On Thu, May 21, 2009 at 2:53 PM, Carl Banks <pavlovevide...@gmail.com> wrote:
> > On May 21, 2:05 pm, seanm...@gmail.com wrote:
> >> The explanation in my introductory Python book is not very
> >> satisfying, and I am hoping someone can explain the following to me:
>
> >> >>> 4 / 5.0
>
> >> 0.80000000000000004
>
> >> 4 / 5.0 is 0.8. No more, no less.
...

>
> The `decimal` module's Decimal type is also an option to consider:
>
> Python 2.6.2 (r262:71600, May 14 2009, 16:34:51)
> >>> from decimal import Decimal
> >>> Decimal(4)/Decimal(5)
>
> Decimal('0.8')

>>> Decimal(1) / Decimal(3) * 3
Decimal("0.9999999999999999999999999999")
>>> Decimal(2).sqrt() ** 2
Decimal("1.999999999999999999999999999")

Decimal isn't a panacea for floating-point rounding errors. It also
has the disadvantage of being much slower.

It is useful for financial applications, in which an exact value for
0.01 actually means something.
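
A small sketch of the financial case, where quantities are exact
multiples of 0.01 and Decimal (constructed from strings) keeps them
that way:

```python
from decimal import Decimal

# Binary floats can't hold 0.10 or 0.30 exactly, so cents drift:
print(0.10 + 0.20 == 0.30)                                   # False

# Decimal stores the decimal digits themselves, so money adds up exactly:
print(Decimal('0.10') + Decimal('0.20') == Decimal('0.30'))  # True
```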

Dave Angel

May 21, 2009, 10:13:45 PM
to Rob Clewley, pytho...@python.org
Rob Clewley wrote:

> On Thu, May 21, 2009 at 8:19 PM, Gary Herron <ghe...@islandtraining.com> wrote:
>
>> MRAB wrote:
>>
>>> Grant Edwards wrote:
>>>
>>>> On 2009-05-21, Christian Heimes <li...@cheimes.de> wrote:
>>>>
>>>>> sean...@gmail.com wrote:
>>>>>
>>>>>> The explanation in my introductory Python book is not very
>>>>>> satisfying, and I am hoping someone can explain the following to me:
>>>>>>
>>>>>>
>>>>>>>>> 4 / 5.0
>>>>>>>>>
>>>>>> 0.80000000000000004
>>>>>>
>>>>>> 4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end.
>>>>>> It bothers me.
>>>>>>
>>>>> Welcome to IEEE 754 floating point land! :)
>>>>>
>
> FYI you can explore the various possible IEEE-style implementations
> with my python simulator of arbitrary floating or fixed precision
> numbers:
>
> http://www2.gsu.edu/~matrhc/binary.html
>
>

It was over 40 years ago that I studied Fortran, with the McCracken book.
There were big warnings in it about the hazards of binary floating
point. This was long before IEEE 754, Python, Java, or even C.

In any floating point system with finite precision, there will be some
numbers that cannot be represented exactly. Beginning programmers
assume that if you can write it exactly, the computer should understand
it exactly as well. (That's one of the reasons the math package I
microcoded a few decades ago was base 10).

If you try to write 1/3 in decimal notation, you either have to write
forever, or truncate it somewhere. The only fractions that terminate
are those whose denominator (in lowest terms) is composed only of
powers of 2 and 5. So 4/10 can be represented, and so can 379/625. Any
other fraction, like 1/7 or 3/91, will make a repeating decimal,
sometimes taking many digits to repeat, but never repeating with zeroes.

In binary fractions, the rule is similar, but only for powers of 2. If
there's any other prime factor, such as a 5, the value cannot be
represented exactly.
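
The terminating-fraction rule can be checked mechanically: reduce the
fraction, then strip from the denominator every prime factor it shares
with the base. A small sketch (the helper name is mine):

```python
from math import gcd  # on old Pythons: from fractions import gcd

def terminates(num, den, base):
    """True if num/den has a terminating expansion in the given base."""
    den //= gcd(num, den)          # reduce to lowest terms
    g = gcd(den, base)
    while g > 1:                   # divide out shared prime factors
        while den % g == 0:
            den //= g
        g = gcd(den, base)
    return den == 1

print(terminates(4, 10, 10))  # True  -- 0.4 terminates in decimal
print(terminates(4, 10, 2))   # False -- 2/5 repeats in binary
print(terminates(1, 7, 10))   # False -- 0.142857142857...
print(terminates(3, 8, 2))    # True  -- 0.011 in binary
```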

So people learn to use integers, or rational numbers (fractions), or
decimal representations, depending on what values they're willing to
have be approximate.

Something that escapes many people is that even when there's an error
there, sometimes converting it back to decimal hides the error. So 0.4
might have an error on the right end, but 0.7 might happen to look good.

Thanks for providing tools that let people play.

An anecdote from many years ago (1975) -- I had people complain about my
math package, that cos(pi/2) was not zero. It was something times
10**-13, but still not zero. And they wanted it to be zero. If
somebody set the math package to work in degrees, they'd see that
cos(90) was in fact zero. Why the discrepancy? Well, you can't
represent pi/2 exactly, (in any floating point or fraction system, it's
irrational). So if you had a perfect cos package, but give it a number
that's off just a little from a right angle, you'd expect the answer to
be off a little from zero. Turns out that (in a 13 digit floating point
package), the value was the next 13 digits of pi/2. And 12 of them were
accurate. I was pleased as punch.
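
The same effect is easy to reproduce with Python's math module: the
stored pi/2 is off from the true right angle by a hair, so a perfectly
good cosine of it comes out tiny but nonzero:

```python
import math

# math.pi/2 is the closest double to the true right angle, not the angle
# itself, so cos() of it is a tiny nonzero number rather than exactly 0.0.
c = math.cos(math.pi / 2)
print(c)          # about 6.12e-17 on IEEE 754 doubles
print(c == 0.0)   # False
```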


Andre Engels

May 22, 2009, 2:31:21 AM
to sean...@gmail.com, pytho...@python.org

Well, how much would 1 / 3.0 be? Maybe 0.3333333333... with a certain
(large) number of threes? And if you multiply that by 3, will it be
1.0 again? No, because you cannot represent 1/3.0 as a precise decimal
fraction.

Internally, what is used are not decimal but binary fractions. And as
a binary fraction, 4/5.0 is just as impossible to represent as 1/3.0
is (1/3.0 = 0.0101010101... and 4/5.0 = 0.110011001100... to be
exact). So 4 / 5.0 gives you the binary fraction of a certain
precision that is closest to 0.8. And apparently that is close to
0.80000000000000004.
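
That rounded binary value can be inspected directly with float.hex()
(added in Python 2.6); the repeating hex digit 9 is the 1001 1001...
bit pattern, with the last digit rounded up to a:

```python
x = 4 / 5.0

# The exact bits Python stores: 1.999999999999a (hex) times 2**-1
print(x.hex())   # 0x1.999999999999ap-1

# The hex string round-trips to exactly the same float:
assert float.fromhex('0x1.999999999999ap-1') == x
```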


--
André Engels, andre...@gmail.com

rustom

May 22, 2009, 3:26:46 AM
On May 22, 6:56 am, AggieDan04 <danb...@yahoo.com> wrote:
> The error in this example is roughly equivalent to the width of a red
> blood cell compared to the distance between Earth and the sun.  There
> are very few applications that need more accuracy than that.

For a mathematician there are no inexact numbers; for a physicist no
exact ones.
Our education system is on the math side; reality, it seems, is on the
other.

Steven D'Aprano

May 22, 2009, 10:28:05 AM
On Thu, 21 May 2009 18:30:17 -0700, Gary Herron wrote:

>> In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
>> point algorithm for Python so that the shortest repr that will round
>> trip correctly is what is used as the floating point repr....
>>
>> --David
>>
>>
> Which won't change the fact that 0.8 and lots of other favorite floats
> are still not representable exactly, but it will hide this fact from
> most newbies. One of the nicer results of this will be that these
> (almost) weekly questions and discussions will be come a thing of the
> past.
>
> With a sigh of relief,

Yay! We now will have lots of subtle floating point bugs that people
can't see! Ignorance is bliss and what you don't know about floating
point can't hurt you!

>>> 0.8 - (0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1) == 0.0
False
>>> 0.8 - 0.5 - 0.2 - 0.1 == 0.0
False
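
The usual defence against exactly these surprises is to compare within
a tolerance rather than with == (later Python versions grew
math.isclose for this; the helper below is a hand-rolled sketch):

```python
def almost_equal(x, y, tol=1e-9):
    # relative tolerance, with an absolute floor for values near zero
    return abs(x - y) <= tol * max(abs(x), abs(y), 1.0)

total = 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
print(total == 0.8)              # False: the rounding errors don't cancel
print(almost_equal(total, 0.8))  # True
```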


--
Steven

Steven D'Aprano

May 22, 2009, 10:28:12 AM
On Thu, 21 May 2009 18:56:08 -0700, AggieDan04 wrote:

> The error in this example is roughly equivalent to the width of a red
> blood cell compared to the distance between Earth and the sun. There
> are very few applications that need more accuracy than that.

Which is fine if the error *remains* that small, but the problem is that
errors usually increase and rarely cancel.


--
Steven

Mark Dickinson

May 22, 2009, 4:05:59 PM
On May 22, 3:28 pm, Steven D'Aprano <st...@REMOVE-THIS-

cybersource.com.au> wrote:
> On Thu, 21 May 2009 18:30:17 -0700, Gary Herron wrote:
> >> In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
> >> point algorithm for Python so that the shortest repr that will round
> >> trip correctly is what is used as the floating point repr....
>
> >> --David
>
> > Which won't change the fact that 0.8 and lots of other favorite floats
> > are still not representable exactly, but it will hide this fact from
> > most newbies.  One of the nicer results of this will be that these
> > (almost) weekly questions and discussions will be come a thing of the
> > past.
>
> > With a sigh of relief,
>
> Yay! We now will have lots of subtle floating point bugs that people
> can't see! Ignorance is bliss and what you don't know about floating
> point can't hurt you!

Why do you think this change will give rise to 'lots of subtle
floating point bugs'? The new repr is still faithful, in the
sense that if x != y then repr(x) != repr(y). Personally, I'm
not sure that the new repr is likely to do anything for
floating-point confusion either way.

What's gone in 3.1 is the capricious nature of the old "produce
17 significant digits and then remove all trailing zeros" rule
for repr.

For a specific example of the randomness of the current repr
rule, choose a random decimal in [0.5, 1.0) with at
most 12 places (say) after the decimal point; for example, 0.567819.
Now type that number into Python at the interpreter prompt.
Then there's approximately a 9% chance (where the number 0.09 comes
from computing 2.**53/10.**17) that you'll see the number you
typed in (possibly with trailing zeros removed if you typed something
like 0.846100), and a 91% chance that you'll get the full 17
significant digits.

Try the same experiment for random decimals in the
interval [1.0, 2.0) and there's about a 45% chance you'll get
back what you typed in, and a 55% chance you'll get 17 sig. digs.

With the new repr, a float that can be specified with 15 significant
decimal digits or fewer will always use those digits for its repr.
It's not a panacea, but I don't see how it's worse than the old
repr.
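
The two rules are easy to compare side by side: '%.17g' reproduces the
old always-17-digits behaviour, while on 3.1 and later repr itself
gives the shortest round-tripping form:

```python
x = 0.1

old_style = '%.17g' % x   # the old rule: 17 significant digits
print(old_style)          # 0.10000000000000001

# Either string recovers exactly the same float:
assert float(old_style) == x
assert float(repr(x)) == x
```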

Mark

Steven D'Aprano

May 22, 2009, 10:53:35 PM
On Fri, 22 May 2009 13:05:59 -0700, Mark Dickinson wrote:

>> > With a sigh of relief,
>>
>> Yay! We now will have lots of subtle floating point bugs that people
>> can't see! Ignorance is bliss and what you don't know about floating
>> point can't hurt you!
>
> Why do you think this change will give rise to 'lots of subtle floating
> point bugs'? The new repr is still faithful, in the sense that if x !=
> y then repr(x) != repr(y). Personally, I'm not sure that the new repr
> is likely to do anything for floating-point confusion either way.

I'm sorry, did I forget a wink? Apparently I did :)

I don't think this change will *cause* bugs. However, it *may* (and I
emphasize the may, because it hasn't been around long enough to see the
effect) allow newbies to remain in blissful ignorance of floating point
issues longer than they should.

Today, the first time you call repr(0.8) you should notice that the float
you have is *not quite* the number you thought you had, which alerts you
to the fact that floats aren't the real numbers you learned about in
school. From Python 3.1, that reality will be hidden just a little bit
longer.


[...]


> With the new repr, a float that can be specified with 15 significant
> decimal digits or fewer will always use those digits for its repr. It's
> not a panacea, but I don't see how it's worse than the old repr.

It's only worse in the sense that ignorance isn't really bliss, and this
change will allow programmers to remain ignorant a little longer. I
expect that instead of obviously wet-behind-the-ears newbies asking "I'm
trying to create a float 0.1, but can't, does Python have a bug?" we'll
start seeing not-such-newbies asking "I've written a function that
sometimes misbehaves, and after heroic effort to debug it, I discovered
that Python has a bug in simple arithmetic, 0.2 + 0.1 != 0.3".

I don't think this will be *worse* than the current behaviour, only bad
in a different way.


--
Steven

Lawrence D'Oliveiro

May 24, 2009, 3:17:07 AM
In message <mailman.525.1242941...@python.org>, Christian
Heimes wrote:

> Welcome to IEEE 754 floating point land! :)

It used to be worse in the days before IEEE 754 became widespread. Anybody
remember a certain Prof William Kahan from Berkeley, and the foreword he
wrote to the Apple Numerics Manual, 2nd Edition, published in 1988? It's
such a classic piece that I think it should be posted somewhere...

Lawrence D'Oliveiro

May 24, 2009, 6:47:51 AM
In message <7b986ef0-d118-4e0c-
afef-3c6...@b7g2000pre.googlegroups.com>, rustom wrote:

> For a mathematician there are no inexact numbers; for a physicist no
> exact ones.

On the contrary, mathematicians have worked out a precise theory of
inexactness.

As for exactitude in physics, Gregory Chaitin among others has been trying
to rework physics to get rid of real numbers altogether.

Message has been deleted

Dave Angel

May 24, 2009, 7:51:42 PM
to Dennis Lee Bieber, pytho...@python.org

Dennis Lee Bieber wrote:
> On Sun, 24 May 2009 22:47:51 +1200, Lawrence D'Oliveiro
> <l...@geek-central.gen.new_zealand> declaimed the following in
> gmane.comp.python.general:


>
>
>
>> As for exactitude in physics, Gregory Chaitin among others has been trying
>> to rework physics to get rid of real numbers altogether.
>>
>

> By decreeing that the value of PI is 3?
>
Only in Ohio.

Erik Max Francis

May 24, 2009, 7:59:57 PM

I only see used versions of it available for purchase. Care to hum a
few bars?

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 18 N 121 57 W && AIM, Y!M, Skype erikmaxfrancis
I get my kicks above the wasteline, sunshine
-- The American, _Chess_

Lawrence D'Oliveiro

May 24, 2009, 9:39:02 PM
In message <9MWdnTfMPPrjQoTX...@giganews.com>, Erik Max Francis
wrote:

> Lawrence D'Oliveiro wrote:
>
>> In message <mailman.525.1242941...@python.org>,
>> Christian Heimes wrote:
>>
>>> Welcome to IEEE 754 floating point land! :)
>>
>> It used to be worse in the days before IEEE 754 became widespread.
>> Anybody remember a certain Prof William Kahan from Berkeley, and the
>> foreword he wrote to the Apple Numerics Manual, 2nd Edition, published in
>> 1988? It's such a classic piece that I think it should be posted
>> somewhere...
>
> I only see used versions of it available for purchase. Care to hum a
> few bars?

Part I of this book is mainly for people who perform scientific,
statistical, or engineering computations on Apple® computers. The rest is
mainly for producers of software, especially of language processors, that
people will use on Apple computers to perform computations in those fields
and in finance and business too. Moreover, if the first edition was any
indication, people who have nothing to do with Apple computers may well buy
this book just to learn a little about an arcane subject, floating-point
arithmetic on computers, and will wish they had an Apple.

Computer arithmetic has two properties that add to its mystery:

* What you see is often not what you get, and
* What you get is sometimes not what you wanted.

Floating-point arithmetic, the kind computers use for protracted work with
approximate data, is intrinsically approximate because the alternative,
exact arithmetic, could take longer than most people are willing to wait--
perhaps forever. Approximate results are customarily displayed or printed to
show only as many of their leading digits as matter instead of all digits;
what you see need not be exactly what you've got. To complicate matters,
whatever digits you see are /decimal/ digits, the kind you saw first in
school and the kind used in hand-held calculators. Nowadays almost no
computers perform their arithmetic with decimal digits; most of them use
/binary/, which is mathematically better than decimal where they differ, but
different nonetheless. So, unless you have a small integer, what you see is
rarely just what you have.

In the mid 1960's, computer architects discovered shortcuts that made
arithmetic run faster at the cost of what they reckoned to be a slight
increase in the level of rounding error; they thought you could not object
to slight alterations in the rightmost digits of numbers since you could not
see those digits anyway. They had the best intentions, but they accomplished
the opposite of what they intended. Computer throughputs were not improved
perceptibly by those shortcuts, but a few programs that had previously been
trusted unreservedly turned treacherous, failing in mysterious ways on
extremely rare occasions.

For instance, a very Important Bunch of Machines launched in 1964 were found
to have two anomalies in their double-precision arithmetic (though not in
single): First, multiplying a number /Z/ by 1.0 would lop off /Z/'s last
digit. Second, the difference between two nearly equal numbers, whose digits
mostly canceled, could be computed wrong by a factor almost as big as 16
instead of being computed exactly as is normal. The anomalies introduced a
kind of noise in the feedback loops by which some programs had compensated
for their own rounding errors, so those programs lost their high accuracies.
These anomalies were not "bugs"; they were "features" designed into the
arithmetic by designers who thought nobody would care. Customers did care;
the arithmetic was redesigned and repairs were retrofitted in 1967.

Not all Capriciously Designed Computer arithmetics have been repaired. One
family of computers has enjoyed notoriety for two decades by allowing
programs to generate tiny "partially underflowed" numbers. When one of these
creatures turns up as the value of /T/ in an otherwise innocuous statement
like

if T = 0.0 then Q := 0.0 else Q := 702345.6 / (T + 0.00189 / T);

it causes the computer to stop execution and emit a message alleging
"Division by Zero". The machine's schizophrenic attitude toward zero comes
about because the test for T = 0.0 is carried out by the adder, which
examines at least 13 of /T/'s leading digits, whereas the divider and
multiplier examine only 12 to recognize zero. Doing so saved less than a
dollar's worth of transistors and maybe a picosecond of time, but at the
cost of some disagreement about whether a very tiny number /T/ is zero or
not. Fortunately, the divider agrees with the multiplier about whether /T/
is zero, so programmers could prevent spurious divisions by zero by slightly
altering the foregoing statement as follows:

if 1.0 * T = 0.0 then Q := 0.0 else Q := 702345.6 / (T + 0.00189 / T);

Unfortunately, the Same Computer designer responsible for "partial
underflow" designed another machine that can generate "partially
underflowed" numbers /T/ for which this statement malfunctions. On that
machine, /Q/ would be computed unexceptionably except that the product 1.0 *
T causes the machine to stop and emit a message alleging "Overflow". How
should a programmer rewrite that innocuous statement so that it will work
correctly on both machines? We should be thankful that such a task is not
encountered every day.

Anomalies related to roundoff are extremely difficult to diagnose. For
instance, the machine on which 1.0 * T can overflow also divides in a
peculiar way that causes quotients like 240.0 / 80.0, which ought to produce
small integers, sometimes to produce nonintegers instead, sometimes slightly
too big, sometimes slightly too small. The same machine multiplies in a
peculiar way, and it subtracts in a peculiar way that can get the difference
wrong by almost a factor of 2 when it ought to be exact because of
cancellation.

Another peculiar kind of subtraction, but different, afflicts the machines
that are schizophrenic about zero. Sets of three values /X/, /Y/ and /Z/
abound for which the statement

if (X = Y) and ((X - Z) > (Y - Z)) then writeln('Strange!');

will print "Strange!" on those machines. And many machines will print
"Strange!" for unlucky values /X/ and /Y/ in the statement

if (X - Y = 0.0) and (X > Y) then writeln('Strange!');

because of underflow.

/These strange things cannot happen on current Apple computers./

I do not wish to suggest that all but Apple computers have had quirky
arithmetics. A few other computer companies, some Highly Prestigious, have
Demonstrated Exemplary Concern for arithmetic integrity over many years. Had
their concern been shared more widely, numerical computation would now be
easier to understand. Instead, because so many computers in the 1960's and
1970's possessed so many different arithmetic anomalies, computational lore
has become encumbered with a vast body of superstition purporting to cope
with them. One such superstitious rule is "/Never/ ask whether floating-
point numbers are exactly equal".

Presumably the reasonable thing to do instead is to ask whether the numbers
differ by less than some tolerance; and this /is/ truly reasonable provided
you know what tolerance to choose. But the word /never/ is what turns the
rule from reasonable into mere superstition. Even if every floating-point
comparison in your program involved a tolerance, you would wish to predict
which path execution would follow from various input data, and whether the
different comparisons were mutually consistent. For instance, the predicates
X < Y - TOL and Y - TOL > X seem equivalent to the naked eye, but computers
exist (/not/ made by Apple!) on which one can be true and the other false
for certain values of the variables. To ask "Which?" violates the
superstitious rule.

There have been several attempts to avoid superstition by devising
mathematical rules called /axioms/ that would be valid for all commercially
significant computers and from which a programmer might hope to be able to
deduce whether his program will function correctly on all those computers.
Unfortunately, such attempts cannot succeed without failing! The paradox
arises because any such rules, to be valid universally, have to encompass so
wide a range of anomalies as to constitute the specifications for a
hypothetical computer far worse arithmetically than any ever actually built.
In consequence, many computations provably impossible on that hypothetical
computer would be quite feasible on almost every actual computer. For
instance, the axioms must imply limits to the accuracy with which
differential equations can be solved, integrals evaluated, infinite series
summed, and areas of triangles calculated; but these limits are routinely
surpassed nowadays by programs that run on most commercially significant
computers, although some computers may require programs that are so special
that they would be useless on any other machine.

Arithmetic anarchy is where we seemed headed until a decade ago when work
began upon IEEE Standard 754 for binary floating-point arithmetic. Apple's
mathematicians and engineers helped from the very beginning. The resulting
family of coherent designs for computer arithmetic has been adopted more
widely, and by more computer manufacturers, than any other single design.
Besides the undoubted benefits that flow from any standard, the principal
benefit derived from the IEEE standard in particular is this:

/Program importability:/ Almost any application of floating-point
arithmetic, designed to work on a few different families of computers in
existence before the IEEE Standard and programmed in a higher-level
language, will, after recompilation, work at least about as well on an Apple
computer or on any other machine that conforms to IEEE Standard 754 as on
any nonconforming computer with comparable capacity (memory, speed, and word
size).

The Standard Apple Numerics Environment (SANE) is the most thorough
implementation of IEEE Standard 754 to date. The fanatical attention to
detail that permeates SANE's implementation largely relieves Apple computer
users from having to know any more about those details than they like. If
you come to an Apple computer from some other computer that you were fond
of, you will find the Apple computer's arithmetic at least about as good,
and quite likely rather better. An Apple computer can be set up to mimic the
worthwhile characteristics of almost any reasonable past computer
arithmetic, so existing libraries of numerical software do not have to be
discarded if they can be recompiled. SANE also offers features that are
unique to the IEEE Standard, new capabilities that previous generations of
computer users could only yearn for; but to learn what they are, you will
have to read this book.

As one of the designers of IEEE Standard 754, I can only stand in awe of the
efforts that Apple has expended to implement that standard faithfully both
in hardware and in software, including language processors, so that users of
Apple computers will actually reap tangible benefits from the Standard. And
I thank Apple for letting me explain in this foreword why we needed that
standard.

Professor W. Kahan
Mathematics Department and
Electrical Engineering and
Computer Science Department
University of California at Berkeley
December 16, 1987


David Robinow

May 24, 2009, 11:19:35 PM
to Dave Angel, pytho...@python.org
On Sun, May 24, 2009 at 7:51 PM, Dave Angel <da...@ieee.org> wrote:
>>        By decreeing that the value of PI is 3?
>>
>
> Only in Ohio.
Please, we're smarter than that in Ohio. In fact, while the Indiana
legislature was learning about PI, we had guys inventing the airplane.

http://en.wikipedia.org/wiki/Indiana_Pi_Bill

Lawrence D'Oliveiro

May 25, 2009, 12:21:19 AM
to
In message <mailman.674.1243192...@python.org>, Dennis Lee
Bieber wrote:

> On Sun, 24 May 2009 22:47:51 +1200, Lawrence D'Oliveiro
> <l...@geek-central.gen.new_zealand> declaimed the following in
> gmane.comp.python.general:
>
>> As for exactitude in physics, Gregory Chaitin among others has been
>> trying to rework physics to get rid of real numbers altogether.
>
> By decreeing that the value of PI is 3?

Interesting kind of mindset, that assumes that the opposite of "real" must
be "integer" or a subset thereof...

Steven D'Aprano

May 25, 2009, 1:22:15 AM
to


(0) "Opposite" is not well-defined unless you have a dichotomy. In the
case of number fields like the reals, you have more than two options, so
"opposite of real" isn't defined.

(1/3) Why do you jump to the conclusion that "pi=3" implies that only
integers are defined? One might have a mapping where every real number is
transferred to the closest multiple of 1/3 (say), rather than the closest
integer. That would still give "pi=3", without being limited to integers.

(1/2) If you "get rid of real numbers", then obviously you must have a
smaller set of numbers, not a larger. Any superset of reals will include
the reals, and therefore you haven't got rid of them at all, so we can
eliminate supersets of the reals from consideration if your description
of Chaitin's work is accurate.

(2/3) There is *no* point (2/3).

(1) I thought about numbering my points as consecutive increasing
integers, but decided that was an awfully boring convention. A shiny
banananana for the first person to recognise the sequence.
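Point (1/3) above is easy to demonstrate; here is a minimal sketch, where `to_third` is a hypothetical helper (not anything Chaitin actually proposes):

```python
import math

def to_third(x):
    """Map a real number to the nearest multiple of 1/3."""
    return round(x * 3) / 3

print(to_third(math.pi))   # 3.0 -- "pi = 3" without restricting to integers
```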


--
Steven

Message has been deleted

Erik Max Francis

May 25, 2009, 2:23:30 AM
to
Dennis Lee Bieber wrote:
> On Mon, 25 May 2009 16:21:19 +1200, Lawrence D'Oliveiro
> <l...@geek-central.gen.new_zealand> declaimed the following in
> gmane.comp.python.general:
>
>> Interesting kind of mindset, that assumes that the opposite of "real" must
>> be "integer" or a subset thereof...
>
> No, but since PI (and e) are both transcendentals, there is NO
> representation (except by the symbols themselves) which is NOT an
> approximation.

Sure there are; you can just use other symbolic representations. In
fact, there are trivially an infinite number of them; e, e^1, 1/(1/e), e
+ 1 - 1, e + 2 - 2, etc.

Even if you restrict yourself to base-b expansions (for which the
statement is true for integer bases), you can cheat there too: e is 1
in base e.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 18 N 121 57 W && AIM, Y!M, Skype erikmaxfrancis

Men and women, women and men. It will never work.
-- Erica Jong

Dave Angel

May 24, 2009, 5:19:29 PM
to Lawrence D'Oliveiro, pytho...@python.org
I remember the professor. He was responsible for large parts of the
Intel 8087 specification, which later got mostly codified as IEEE 754.
In those days, the 8087 was a couple hundred extra dollars, so few
machines had one. And the software simulation was horribly slow (on a
4.7 mhz machine). So most compilers would have two math libraries. If
you wanted 8087 equivalence, and the hardware wasn't there, it was dog
slow. On the other hand, if you specified the other math package, it
didn't benefit at all from the presence of the 8087.


Lawrence D'Oliveiro

May 25, 2009, 3:52:50 AM
to
In message <mailman.702.1243237...@python.org>, Dave Angel
wrote:

> Lawrence D'Oliveiro wrote:
>
>> Anybody remember a certain Prof William Kahan from Berkeley ...


>>
> I remember the professor. He was responsible for large parts of the
> Intel 8087 specification, which later got mostly codified as IEEE 754.

The 8087 was poorly designed. It was stack-based, which caused all kinds of
performance problems that never really went away, though I think Intel tried
to patch over them with various SSE extensions. I believe AMD64 does have
proper floating-point registers, at last.

Apple's implementation of IEEE 754 was so rigorous that, when Motorola
introduced the 68881, which implemented a few of the "shortcuts" that Kahan
reviled in his foreword, Apple added a patch to its SANE library to restore
correct results, with the usual controversy over whether the performance
loss was worth it. If you didn't think it was, you could always use the
68881 instructions directly.

Hendrik van Rooyen

May 25, 2009, 5:59:39 AM
to pytho...@python.org
"Dennis Lee Bieber" <wlf...@ix.netcom.com> wrote:


> On Sun, 24 May 2009 22:47:51 +1200, Lawrence D'Oliveiro
> <l...@geek-central.gen.new_zealand> declaimed the following in
> gmane.comp.python.general:
>
> > As for exactitude in physics, Gregory Chaitin among others has been trying
> > to rework physics to get rid of real numbers altogether.
>
> By decreeing that the value of PI is 3?

naah - that would be too crude, even for a physicist - pi is 22//7..

;-)

- Hendrik

Scott David Daniels

May 26, 2009, 2:10:02 AM
to
Steven D'Aprano wrote:
> On Mon, 25 May 2009 16:21:19 +1200, Lawrence D'Oliveiro wrote:
>... (0) "Opposite" is not well-defined unless you have a dichotomy. In the
>... (1/3) Why do you jump to the conclusion that "pi=3" implies that only
>... (1/2) If you "get rid of real numbers", then obviously you must have a
>... (2/3) There is *no* point (2/3).
>... (1) I thought about numbering my points as consecutive increasing
> integers, but decided that was an awfully boring convention. A shiny
> banananana for the first person to recognise the sequence.

I'd call it F_3, but using a Germanic F (Farey sequence limit 3).

Do I get a banana or one with a few more ans?

--Scott David Daniels
Scott....@Acm.Org

Steven D'Aprano

May 26, 2009, 4:57:48 AM
to
On Mon, 25 May 2009 23:10:02 -0700, Scott David Daniels wrote:

> Steven D'Aprano wrote:
>> On Mon, 25 May 2009 16:21:19 +1200, Lawrence D'Oliveiro wrote:
>>... (0) "Opposite" is not well-defined unless you have a dichotomy. In the
>>... (1/3) Why do you jump to the conclusion that "pi=3" implies that only
>>... (1/2) If you "get rid of real numbers", then obviously you must have a
>>... (2/3) There is *no* point (2/3).
>>... (1) I thought about numbering my points as consecutive increasing
>> integers, but decided that was an awfully boring convention. A shiny
>> banananana for the first person to recognise the sequence.
>
> I'd call it F_3, but using a Germanic F (Farey sequence limit 3).

That's the one.


> Do I get a banana or one with a few more ans?

I know how to spell bananananana, I just don't know when to stop.

--
Steven

Lawrence D'Oliveiro

May 26, 2009, 7:33:51 PM
to
In message <pan.2009.05...@REMOVE.THIS.cybersource.com.au>, Steven
D'Aprano wrote:

> On Sun, 24 May 2009 22:47:51 +1200, Lawrence D'Oliveiro
> <l...@geek-central.gen.new_zealand> declaimed the following in
> gmane.comp.python.general:
>
>> .. Gregory Chaitin among others has been trying to rework physics to get
>> rid of real numbers altogether.
>

> (1/2) If you "get rid of real numbers", then obviously you must have a
> smaller set of numbers, not a larger.

Chaitin is trying to use only computable numbers. Pi is computable, as is e,
sqrt(2), the Feigenbaum constant, and many others familiar to us all.

Trouble is, they only make up 0% of the reals. It's the other 100% he wants
to get rid of.

Steven D'Aprano

May 26, 2009, 11:43:39 PM
to
On Wed, 27 May 2009 11:33:51 +1200, Lawrence D'Oliveiro wrote:

> Chaitin is trying to use only computable numbers. Pi is computable, as
> is e, sqrt(2), the Feigenbaum constant, and many others familiar to us
> all.
>
> Trouble is, they only make up 0% of the reals. It's the other 100% he
> wants to get rid of.


+1 QOTD


--
Steven

Luis Zarrabeitia

May 27, 2009, 11:07:21 AM
to pytho...@python.org
On Thursday 21 May 2009 08:50:48 pm R. David Murray wrote:

> In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
> point algorithm for Python so that the shortest repr that will round
> trip correctly is what is used as the floating point repr....

Little question: what was the goal of such a change? (is there a pep for me to
read?) Shouldn't str() do that, and leave repr as is?

While I agree that the change gets rid of the weekly newbie question
about Python's "lack of precision", I'd find it more difficult to explain
why 0.2 * 3 != 0.6 without showing them what 0.2 /really/ means.
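The mismatch in question is easy to reproduce on any machine with IEEE 754 doubles:

```python
# Neither 0.2 nor 0.6 is exactly representable in binary floating point,
# and the rounding errors in 0.2 * 3 and in the literal 0.6 differ.
from fractions import Fraction

print(0.2 * 3 == 0.6)   # False
print(0.2 * 3 - 0.6)    # a tiny nonzero difference
print(Fraction(0.2))    # the exact binary value stored for 0.2
```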

--
Luis Zarrabeitia (aka Kyrie)
Fac. de Matemática y Computación, UH.
http://profesores.matcom.uh.cu/~kyrie

Ned Deily

May 27, 2009, 2:33:38 PM
to pytho...@python.org
In article <200905271107...@uh.cu>,

Luis Zarrabeitia <ky...@uh.cu> wrote:
> On Thursday 21 May 2009 08:50:48 pm R. David Murray wrote:
> > In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
> > point algorithm for Python so that the shortest repr that will round
> > trip correctly is what is used as the floating point repr....
>
> Little question: what was the goal of such a change? (is there a pep for me
> to read?)

See discussion starting here:

http://article.gmane.org/gmane.comp.python.devel/103191/

--
Ned Deily,
n...@acm.org

Luis Zarrabeitia

May 27, 2009, 3:37:10 PM
to pytho...@python.org
On Wednesday 27 May 2009 02:33:38 pm Ned Deily wrote:
> In article <200905271107...@uh.cu>,

>
> > Little question: what was the goal of such a change? (is there a pep for
> > me to read?)
>
> See discussion starting here:
>
> http://article.gmane.org/gmane.comp.python.devel/103191/

Thank you.

Mark Dickinson

May 27, 2009, 4:26:57 PM
to
Luis Zarrabeitia <ky...@uh.cu> wrote:
> On Thursday 21 May 2009 08:50:48 pm R. David Murray wrote:
>
>> In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
>> point algorithm for Python so that the shortest repr that will round
>> trip correctly is what is used as the floating point repr....
>
> Little question: what was the goal of such a change? (is there a pep for me to
> read?) Shouldn't str() do that, and leave repr as is?

It's a good question. I was prepared to write a PEP if necessary, but
there was essentially no opposition to this change either in the
python-dev thread that Ned already mentioned, in the bugs.python.org
feature request (see http://bugs.python.org/issue1580; set aside
half-an-hour or so if you want to read this one) or amongst the people
we spoke to at PyCon 2009, so in the end Eric and I just went ahead
and merged the changes. It didn't harm that Guido supported the idea.

I think the main goal was to see fewer complaints from newbie users
about 0.1 displaying as 0.10000000000000001. There's no real reason
to produce 17 digits here. Neither 0.1 nor 0.10000000000000001
displays the true value of the float---both are approximations, so why
not pick the approximation that actually displays nicely? The only
requirement is that float(repr(x)) recovers x exactly, and since 0.1
produced the float in the first place, it's clear that taking
repr(0.1) to be '0.1' satisfies this requirement.
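That round-trip requirement can be spot-checked directly; the check below passes under both the old 17-digit repr and the new shortest repr:

```python
# Verify float(repr(x)) == x for a sample of random doubles.
import random

for _ in range(100000):
    x = random.uniform(-1e16, 1e16)
    assert float(repr(x)) == x
print("round-trip holds")
```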

The problem is particularly acute with the use of the round function,
where newbies complain that round is buggy because it's not rounding
to 2 decimal places:

>>> round(2.45311, 2)
2.4500000000000002

With the new float repr, the result of rounding a float to 2 decimal
places will always display with at most 2 places after the point.
(Well, possibly except when that float is very large.)

Of course, there are still going to be complaints that the following
is rounding in the wrong direction:

>>> round(0.075, 2)
0.07
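The reason is visible with Decimal.from_float (available in 2.7 and 3.1): the nearest double to 0.075 is slightly *below* 0.075, so a correctly rounded conversion has to round down:

```python
from decimal import Decimal

# The closest IEEE 754 double to the literal 0.075 is a hair less than
# 0.075, which is why round(0.075, 2) gives 0.07 rather than 0.08.
print(Decimal.from_float(0.075))
print(Decimal.from_float(0.075) < Decimal("0.075"))   # True
```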

I'll admit to feeling a bit uncomfortable about the fact that the new
repr goes a little bit further towards hiding floating-point
difficulties from numerically-naive users.

The main things that I like about the new representation is that its
definition is saner (give me the shortest string that rounds
correctly, versus format to 17 places and then somewhat arbitrarily
strip all trailing zeros) and it's more consistent than the old. With
the current 2.6/3.0 repr (on my machine; your results may vary):

>>> 0.01
0.01
>>> 0.02
0.02
>>> 0.03
0.029999999999999999
>>> 0.04
0.040000000000000001

With Python 3.1:

>>> 0.01
0.01
>>> 0.02
0.02
>>> 0.03
0.03
>>> 0.04
0.04

A cynical response would be to say that the Python 2.6 repr lies only
some of the time; with Python 3.1 it lies *all* of the time. But
actually all of the above outputs are lies; it's just that the second
set of lies is more consistent and better looking.

There are also a number of significant 'hidden' benefits to using
David Gay's code instead of the system C library's functions, though
those benefits are mostly independent of the choice to use the short
float repr:

- the float repr is much more likely to be consistent across platforms
(or at least across those platforms using IEEE 754 doubles, which
seems to be 99.9% of them)

- the C library double<->string conversion functions are buggy on many
platforms (including at least OS X, Windows and some flavours of
Linux). While I won't claim that Gay's code (or our adaptation of
it) is bug-free, I don't know of any bugs (reports welcome!) and at
least when bugs are discovered it's within our power to fix them.
Here's one example of an x == eval(repr(x)) failure due to a bug in
the OS X implementation of strtod:

>>> x = (2**52-1)*2.**(-1074)
>>> x
2.2250738585072009e-308
>>> y = eval(repr(x))
>>> y
2.2250738585072014e-308
>>> x == y
False

- similar to the last point: on many platforms string formatting is
not correctly rounded, in the sense that e.g. '%.6f' % x does not
necessarily produce the closest decimal with 6 places after the
decimal point to x. This is *not* a platform bug, since there's no
requirement of correct rounding in the C standards. However, David
Gay's code does provide correctly rounded string -> double and
double -> string conversions, so Python's string formatting will now
always be correctly rounded. A small thing, but it's nice to have.

- since round() and string formatting now both use Gay's code, we
can finally guarantee that round and string formatting give
equivalent results: e.g., that the digits in round(x, 2) are the
same as the digits in '%.2f' % x. That wasn't true before: round
could round up while '%.2f' % x rounded down (or vice versa) leading
to confusion and at least one semi-bogus bug report.

- a lot of internal cleanup has become possible as a result of not
having to worry about all the crazy things that platform string <->
double conversions can do. This makes the CPython code smaller,
clearer, easier to maintain, and less likely to contain bugs.
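The round()/formatting agreement mentioned above is easy to check on Python 3; 2.675 is a convenient test value, since the nearest double lies just below it:

```python
# Both operations now go through correctly rounded conversions, so they
# agree: the stored value is 2.67499999999999982..., which rounds down.
x = 2.675
print(round(x, 2))    # 2.67
print('%.2f' % x)     # 2.67
```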

> While I agree that the change gets rid of the weekly newbie question
> about Python's "lack of precision", I'd find it more difficult to explain
> why 0.2 * 3 != 0.6 without showing them what 0.2 /really/ means.

There are still plenty of ways to show what 0.2 really means. My
favourite is to use the Decimal.from_float method:

>>> Decimal.from_float(0.2)
Decimal('0.200000000000000011102230246251565404236316680908203125')

This is only available in 2.7 and 3.1, but then the repr change isn't
happening until 3.1 (and it almost certainly won't be backported to
2.7, by the way), so that's okay. But there's also float.hex,
float.as_integer_ratio, and Fraction.from_float to show the exact
value that's stored for a float.

>>> 0.2.hex()
'0x1.999999999999ap-3'
>>> Fraction.from_float(0.2)
Fraction(3602879701896397, 18014398509481984)

Hmm. That was a slightly unfortunate choice of example: the hex form
of 0.2 looks uncomfortably similar to 1.9999999.... An interesting
cross-base accident.
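float.as_integer_ratio, mentioned above but not demonstrated, exposes the same exact value as Fraction.from_float:

```python
# The exact numerator/denominator of the double nearest to 0.2.
num, den = (0.2).as_integer_ratio()
print(num, den)   # 3602879701896397 18014398509481984
```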

This is getting rather long. Perhaps I should put the above comments
together into a 'post-PEP' document.

Mark

Luis Zarrabeitia

May 27, 2009, 5:24:59 PM
to pytho...@python.org
On Wednesday 27 May 2009 04:26:57 pm Mark Dickinson wrote:
> Luis Zarrabeitia <ky...@uh.cu> wrote:
> > On Thursday 21 May 2009 08:50:48 pm R. David Murray wrote:
> >> In py3k Eric Smith and Mark Dickinson have implemented Gay's floating
> >> point algorithm for Python so that the shortest repr that will round
> >> trip correctly is what is used as the floating point repr....
> >
> > Little question: what was the goal of such a change? (is there a pep for
> > me to read?) Shouldn't str() do that, and leave repr as is?
>
> It's a good question. I was prepared to write a PEP if necessary, but
> there was essentially no opposition to this change either in the
> python-dev thread that Ned already mentioned, in the bugs.python.org
> feature request (see http://bugs.python.org/issue1580; set aside
> half-an-hour or so if you want to read this one) or amongst the people
> we spoke to at PyCon 2009, so in the end Eric and I just went ahead
> and merged the changes. It didn't harm that Guido supported the idea.

Thank you for the reply and the explanation.
After reading the thread, I was sold on the idea. It still feels weird not
being able to introduce the idea of float vs. real to newbies /that
quickly/, but now I understand the tradeoff.

(Now, I'm going to miss showing that to my students who almost inevitably
complain about the uselessness of the 'floating point representation' chapter
in the numerical analysis course :D).

> There are still plenty of ways to show what 0.2 really means. My
> favourite is to use the Decimal.from_float method:
>
> >>> Decimal.from_float(0.2)
> Decimal('0.200000000000000011102230246251565404236316680908203125')

Oh, thank you. That was the next thing I was going to ask.

Aahz

May 28, 2009, 10:55:33 AM
to
In article <4a1da210$0$90265$1472...@news.sunsite.dk>,

Mark Dickinson <dick...@gmail.com> wrote:
>
>This is getting rather long. Perhaps I should put the above comments
>together into a 'post-PEP' document.

Yes, you should. Better explanation of floating point benefits everyone
when widely available. I even learned a little bit here and I've been
following this stuff for a while (though by no means any kind of
numerical expert).
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/

"In many ways, it's a dull language, borrowing solid old concepts from
many other languages & styles: boring syntax, unsurprising semantics,
few automatic coercions, etc etc. But that's one of the things I like
about it." --Tim Peters on Python, 16 Sep 1993
