
Turn off ZeroDivisionError?


Neal Becker

Feb 9, 2008, 5:03:53 PM
to pytho...@python.org
I'd like to turn off ZeroDivisionError. I'd like 0./0. to just give NaN,
and when output, just print 'NaN'. I notice fpconst has the required
constants. I don't want to significantly slow floating point math, so I
don't want to just trap the exception.

If I use C code to turn off the hardware signal, will that stop Python from
detecting the exception, or is Python checking for a 0 denominator on its
own (hope not, that would waste cycles)?

Ryszard Szopa

Feb 9, 2008, 6:10:21 PM
On Feb 9, 11:03 pm, Neal Becker <ndbeck...@gmail.com> wrote:
> I'd like to turn off ZeroDivisionError. I'd like 0./0. to just give NaN,
> and when output, just print 'NaN'. I notice fpconst has the required
> constants. I don't want to significantly slow floating point math, so I
> don't want to just trap the exception.

What you are trying to do looks like something very, very wrong, in
the vast majority of cases. Think: normal Python code and the
interpreter itself are written under the assumption that dividing by
zero doesn't pass silently. Changing it is asking for bogus behavior.

Have you actually timed how big the overhead of catching the
exception is?

In [83]: import timeit

In [84]: x = """\
try:
    1.0/rand.next()
except:
    pass"""

In [85]: t = timeit.Timer(x, 'import random\nrand = iter([random.randint(0, 10) for i in xrange(1000000)])')

In [86]: x_nozero = "1.0/rand.next()"

In [87]: t_nozero = timeit.Timer(x_nozero, 'import random\nrand = iter([random.randint(1, 10) for i in xrange(1000000)])')

In [88]: t.repeat()
Out[88]: [0.91399192810058594, 0.8678128719329834, 0.86738419532775879]

In [89]: t_nozero.repeat()
Out[89]: [0.64040493965148926, 0.58412599563598633, 0.59886980056762695]

As you can see, the overhead isn't so huge.
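[For reference, here is the same comparison re-run in Python 3 syntax (xrange is gone and iterators are advanced with next(rand) rather than rand.next()); the sizes and counts are scaled down and the absolute numbers will of course vary by machine:]

```python
import timeit

# Setup strings build an iterator of random ints, as in the session
# above, but in Python 3 syntax. The first setup includes zeros, so the
# statement must catch ZeroDivisionError; the second never divides by 0.
setup_with_zeros = (
    "import random\n"
    "rand = iter([random.randint(0, 10) for i in range(100000)])"
)
setup_no_zeros = (
    "import random\n"
    "rand = iter([random.randint(1, 10) for i in range(100000)])"
)

stmt_trapped = """\
try:
    1.0 / next(rand)
except ZeroDivisionError:
    pass"""

t = timeit.Timer(stmt_trapped, setup_with_zeros).timeit(number=50000)
t_nozero = timeit.Timer("1.0 / next(rand)", setup_no_zeros).timeit(number=50000)
print(t, t_nozero)  # compare the two timings on your own machine
```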

If this overhead is too big for you, you should consider using a
different language (C? Fortran?) or at least a numeric package for
Python (NumPy?).

Anyway, turning off division by zero signaling looks like the wrong
answer to the wrong question.

HTH,

-- Richard

endange...@gmail.com

Feb 10, 2008, 11:46:41 AM
Would a wrapper function be out of the question here?

def MyDivision(num, denom):
    if denom == 0:
        return "NaN"
    else:
        return num / denom
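[A variant of this wrapper that returns real IEEE-style float values (a NaN or a signed infinity rather than the string "NaN") might look like the following; the function name is made up for illustration:]

```python
import math

def ieee_div(num, denom):
    # Mimic IEEE 754 non-stop division at the Python level:
    # f/0 -> infinity whose sign is sign(num) xor sign(denom),
    # 0/0 -> NaN, everything else -> ordinary division.
    if denom == 0:
        if num == 0:
            return float("nan")
        sign = math.copysign(1.0, num) * math.copysign(1.0, denom)
        return math.copysign(float("inf"), sign)
    return num / denom

print(ieee_div(1.0, 0.0))   # inf
print(ieee_div(-1.0, 0.0))  # -inf
print(ieee_div(0.0, 0.0))   # nan
```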

Mark Dickinson

Feb 10, 2008, 12:33:18 PM
On Feb 9, 5:03 pm, Neal Becker <ndbeck...@gmail.com> wrote:
> If I use C code to turn off the hardware signal, will that stop python from
> detecting the exception, or is python checking for 0 denominator on it's
> own (hope not, that would waste cycles).

Yes, Python does do an explicit check for a zero denominator. Here's
an excerpt from float_div in Objects/floatobject.c:

    if (b == 0.0) {
        PyErr_SetString(PyExc_ZeroDivisionError, "float division");
        return NULL;
    }

This is probably the only sane way to deal with differences in
platform behaviour when doing float divisions.
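[Because the check happens in the interpreter before the division ever reaches the hardware, the exception is raised on every platform regardless of FPU trap settings; a quick sanity check:]

```python
# 1.0/0.0 never reaches the hardware divide instruction: CPython tests
# the denominator first and raises ZeroDivisionError itself.
try:
    result = 1.0 / 0.0
except ZeroDivisionError as exc:
    result = exc
print(result)  # a ZeroDivisionError instance, not inf
```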

Dikkie Dik

Feb 10, 2008, 1:54:25 PM
Are you sure?

It could very well be that 1/(smallest possible number)>(greatest
possible number). So I would also trap any errors besides trapping for
the obvious zero division.

bearoph...@lycos.com

Feb 10, 2008, 3:10:19 PM
Mark Dickinson:

> This is probably the only sane way to deal with differences in
> platform behaviour when doing float divisions.

What Python runs on a CPU that doesn't handle NaN correctly?

Bye,
bearophile

Grant Edwards

Feb 10, 2008, 3:29:50 PM

I've always found that check to be really annoying. Every time
anybody asks about floating point handling, the standard
response is that "Python just does whatever the underlying
platform does". Except it doesn't in cases like this. All my
platforms do exactly what I want for division by zero: they
generate a properly signed INF. Python chooses to override
that (IMO correct) platform behavior with something surprising.
Python doesn't generate exceptions for other floating point
"events" -- why the inconsistency with divide by zero?

--
Grant Edwards grante Yow! Where's th' DAFFY
at DUCK EXHIBIT??
visi.com

Steve Holden

Feb 10, 2008, 3:31:07 PM
to pytho...@python.org

What's so special about one? You surely don't expect the Python code to
check for all possible cases of overflow before allowing the hardware to
proceed with a division?

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/

Neal Becker

Feb 10, 2008, 4:19:29 PM
to pytho...@python.org
endange...@gmail.com wrote:

I bought a processor that has hardware to implement this. Why do I want
software to waste time on it?

Christian Heimes

Feb 10, 2008, 4:39:57 PM
to pytho...@python.org
Grant Edwards wrote:
> I've always found that check to be really annoying. Every time
> anybody asks about floating point handling, the standard
> response is that "Python just does whatever the underlying
> platform does". Except it doesn't in cases like this. All my
> platforms do exactly what I want for division by zero: they
> generate a properly signed INF. Python chooses to override
> that (IMO correct) platform behavior with something surprising.
> Python doesn't generate exceptions for other floating point
> "events" -- why the inconsistency with divide by zero?

I'm aware the result is arguable and professional users may prefer +INF for
1/0. However, Python does the least surprising thing. It raises an
exception because everybody has learned at school that 1/0 is not allowed.

From the PoV of a mathematician Python does the right thing, too. 1/0 is
not defined; only lim(1/x) for x -> 0+ is +INF. From the PoV of a
numerics guy it's surprising.

Do you suggest that 1./0. results into +INF [1]? What should be the
result of 1/0?

Christian

[1]
http://en.wikipedia.org/wiki/Division_by_zero#Division_by_zero_in_computer_arithmetic

Jeff Schwab

Feb 10, 2008, 4:51:18 PM
to pytho...@python.org

Will the amount of time wasted by the software exceed the amount of time
required to implement Python-level access to the hardware feature? At
any rate, the work-around should at least let you work on the rest of
the application, while a more efficient implementation can be developed.

Grant Edwards

Feb 10, 2008, 4:55:09 PM
On 2008-02-10, Christian Heimes <li...@cheimes.de> wrote:
> Grant Edwards wrote:
>
>> I've always found that check to be really annoying. Every
>> time anybody asks about floating point handling, the standard
>> response is that "Python just does whatever the underlying
>> platform does". Except it doesn't in cases like this. All my
>> platforms do exactly what I want for division by zero: they
>> generate a properly signed INF. Python chooses to override
>> that (IMO correct) platform behavior with something
>> surprising. Python doesn't generate exceptions for other
>> floating point "events" -- why the inconsistency with divide
>> by zero?
>
> I'm aware result is arguable and professional users may prefer
> +INF for 1/0. However Python does the least surprising thing.

It appears that you and I are surprised by different things.

> It raises an exception because everybody has learned at school
> 1/0 is not allowed.

You must have gone to a different school than I did. I learned
that for IEEE floating point operations a/0. is INF with the
same sign as a (except when a==0, then you get a NaN).

> From the PoV of a mathematician Python does the right thing,
> too. 1/0 is not defined, only the lim(1/x) for x -> 0 is +INF.
> From the PoV of a numerics guy it's surprising.
>
> Do you suggest that 1./0. results into +INF [1]?

That's certainly what I expected after being told that Python
doesn't do anything special with floating point operations and
leaves it all up to the underlying hardware. Quoting from the
page to linked to, it's also what the IEEE standard specifies:

The IEEE floating-point standard, supported by almost all
modern processors, specifies that every floating point
arithmetic operation, including division by zero, has a
well-defined result. In IEEE 754 arithmetic, a/0 is positive
infinity when a is positive, negative infinity when a is
negative, and NaN (not a number) when a = 0.

I was caught completely off guard when I discovered that Python
goes out of its way to violate that standard, and it resulted
in my program not working correctly.

> What should be the result of 1/0?

I don't really care. An exception is OK with me, but I don't
write code that does integer divide by zero operations.

--
Grant Edwards grante Yow! does your DRESSING
at ROOM have enough ASPARAGUS?
visi.com

Mark Dickinson

Feb 10, 2008, 4:56:04 PM
On Feb 10, 3:10 pm, bearophileH...@lycos.com wrote:
> What Python run on a CPU that doesn't handle the nan correctly?

How about platforms that don't even have nans? I don't think either
IBM's hexadecimal floating-point format or the VAX floating-point
formats support NaNs. Python doesn't assume IEEE 754 hardware, though
much of its code could be a lot simpler if it did :-).

Mark

Grant Edwards

Feb 10, 2008, 4:56:07 PM

Exactly. Espeically when Python supposedly leaves floating
point ops up to the platform.

--
Grant Edwards grante Yow! Don't hit me!! I'm in
at the Twilight Zone!!!
visi.com

Grant Edwards

Feb 10, 2008, 4:57:39 PM
On 2008-02-10, Jeff Schwab <je...@schwabcenter.com> wrote:
> Neal Becker wrote:
>> endange...@gmail.com wrote:
>>
>>> Would a wrapper function be out of the question here?
>>>
>>> def MyDivision(num, denom):
>>>     if denom==0:
>>>         return "NaN"
>>>     else:
>>>         return num / denom
>>
>> I bought a processor that has hardware to implement this. Why do I want
>> software to waste time on it?
>
> Will the amount of time wasted by the software exceed the amount of time
> required to implement Python-level access to the hardware feature?

The "time required to implement Python-level access to the
hardware features" is simply the time required to delete the
lines of code that raise an exception when denom is 0.0.

> At any rate, the work-around should at least let you work on
> the rest of the application, while a more efficient
> implementation can be developed.

A more efficient implementation? Just delete the code that
raises the exception and the HW will do the right thing.

--
Grant Edwards grante Yow! Do you need
at any MOUTH-TO-MOUTH
visi.com resuscitation?

Mark Dickinson

Feb 10, 2008, 5:01:55 PM
On Feb 10, 3:29 pm, Grant Edwards <gra...@visi.com> wrote:
> platform does".  Except it doesn't in cases like this. All my
> platforms do exactly what I want for division by zero: they
> generate a properly signed INF.  Python chooses to override
> that (IMO correct) platform behavior with something surprising.
> Python doesn't generate exceptions for other floating point
> "events" -- why the inconsistency with divide by zero?

But not everyone wants 1./0. to produce an infinity; some people
would prefer an exception.

Python does try to generate exceptions for floating-point events
at least some of the time---e.g. generating ValueErrors for
sqrt(-1.) and log(-1.) and OverflowError for exp(large_number).

I agree that the current situation is not ideal. I think the ideal
would be to have a floating-point environment much like Decimal's,
where the user has control over whether floating-point exceptions are
trapped (producing Python exceptions) or not (producing infinities and
nans). The main difficulty is in writing reliable ANSI C that can do
this across platforms. It's probably not impossible, but it is a lot
of work.
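[The decimal module, already in the standard library by this point, shows what such an environment looks like: its DivisionByZero and InvalidOperation traps can be disabled per context, after which division returns infinities and NaNs instead of raising:]

```python
from decimal import Decimal, localcontext, DivisionByZero, InvalidOperation

with localcontext() as ctx:
    ctx.traps[DivisionByZero] = False    # 1/0 -> signed Infinity
    ctx.traps[InvalidOperation] = False  # 0/0 -> NaN
    print(Decimal(1) / Decimal(0))       # Infinity
    print(Decimal(-1) / Decimal(0))      # -Infinity
    print(Decimal(0) / Decimal(0))       # NaN

# Outside the with-block the original traps are restored and
# Decimal(1) / Decimal(0) raises DivisionByZero again.
```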

Christian Heimes

Feb 10, 2008, 5:34:51 PM
to pytho...@python.org
Grant Edwards wrote:
> You must have gone to a different school than I did. I learned
> that for IEEE floating point operations a/0. is INF with the
> same sign as a (except when a==0, then you get a NaN).

I'm not talking about CS and IEEE floating point ops. I was referring to
plain good old math. Python targets both newbies and professionals.
That's the reason for two math modules (math and cmath).

> That's certainly what I expected after being told that Python
> doesn't do anything special with floating point operations and
> leaves it all up to the underlying hardware. Quoting from the
> page to linked to, it's also what the IEEE standard specifies:
>
> The IEEE floating-point standard, supported by almost all
> modern processors, specifies that every floating point
> arithmetic operation, including division by zero, has a
> well-defined result. In IEEE 754 arithmetic, a/0 is positive
> infinity when a is positive, negative infinity when a is
> negative, and NaN (not a number) when a = 0.
>
> I was caught completely off guard when I discovered that Python
> goes out of its way to violate that standard, and it resulted
> in my program not working correctly.

Python's a/0 outcome doesn't violate the standards because Python
doesn't promise to follow the IEEE 754 standard in the first place. Mark
and I are working hard to make math in Python more reliable across
platforms. So far we have fixed a lot of problems but we haven't
discussed the a/0 matter.

The best we could give you is an option that makes Python's floats more
IEEE 754 like:

>>> from somemodule import ieee754
>>> with ieee754:
...     r = a/0
...     print r
inf

Christian

Paul Rubin

Feb 10, 2008, 5:38:17 PM
Christian Heimes <li...@cheimes.de> writes:
> Python targets both newbies and professionals.
> That's the reason for two math modules (math and cmath).

Ehhh??? cmath is for complex-valued functions, nothing to do with
newbies vs. professionals.

Jeff Schwab

Feb 10, 2008, 5:40:03 PM
Grant Edwards wrote:
> On 2008-02-10, Jeff Schwab <je...@schwabcenter.com> wrote:
>> Neal Becker wrote:
>>> endange...@gmail.com wrote:
>>>
>>>> Would a wrapper function be out of the question here?
>>>>
>>>> def MyDivision(num, denom):
>>>>     if denom==0:
>>>>         return "NaN"
>>>>     else:
>>>>         return num / denom
>>> I bought a processor that has hardware to implement this. Why do I want
>>> software to waste time on it?
>> Will the amount of time wasted by the software exceed the amount of time
>> required to implement Python-level access to the hardware feature?
>
> The "time required to implement Python-level access to the
> hardware features" is simply the time required to delete the
> lines of code that raise an exception when denom is 0.0.

You mean the C code that implements the interpreter? Assuming the
developer is comfortable getting the Python source code, modifying it,
compiling and deploying the modified version: in the best case, the
change introduces a portability nightmare. In the worst case, you break
all kinds of existing code, possibly in the standard libraries, that
assumes an exception will be thrown on any attempt to divide by zero.

>> At any rate, the work-around should at least let you work on
>> the rest of the application, while a more efficient
>> implementation can be developed.
>
> A more efficient implementation? Just delete the code that
> raises the exception and the HW will do the right thing.

"Just fork Python" is not a simpler solution than "just use this
pure-Python function." Whether it's worth the headache can be
considered at length, but in the meantime, the pure-Python solution is
a quicker, safer way to continue development of the client code that
requires the functionality (NaN on divide-by-zero).

Mark Dickinson

Feb 10, 2008, 5:44:58 PM
On Feb 10, 4:56 pm, Grant Edwards <gra...@visi.com> wrote:
> Exactly.  Espeically when Python supposedly leaves floating
> point ops up to the platform.

There's a thread at
http://mail.python.org/pipermail/python-list/2005-July/329849.html
that's quite relevant to this discussion. See especially the exchanges
between Michael Hudson and Tim Peters in the later part of the thread.
I like this bit, from Tim:

"I believe Python should raise exceptions in these cases by default,
because, as above, they correspond to the overflow and
invalid-operation signals respectively, and Python should raise
exceptions on the overflow, invalid-operation, and divide-by-0 signals
by default. But I also believe Python _dare not_ do so unless it also
supplies sane machinery for disabling traps on specific signals (along
the lines of the relevant standards here). Many serious numeric
programmers would be livid, and justifiably so, if they couldn't get
non-stop mode back. The most likely x-platform accident so far is
that they've been getting non-stop mode in Python since its
beginning."

Mark

Ben Finney

Feb 10, 2008, 5:50:07 PM
Mark Dickinson <dick...@gmail.com> writes:

> On Feb 10, 3:29 pm, Grant Edwards <gra...@visi.com> wrote:
> > platform does".  Except it doesn't in cases like this. All my
> > platforms do exactly what I want for division by zero: they
> > generate a properly signed INF.  Python chooses to override
> > that (IMO correct) platform behavior with something surprising.
> > Python doesn't generate exceptions for other floating point
> > "events" -- why the inconsistency with divide by zero?
>
> But not everyone wants 1./0. to produce an infinity; some people
> would prefer an exception.

Special cases aren't special enough to break the rules.

Most people would not want this behaviour either::

>>> 0.1
0.10000000000000001

But the justification for this violation of surprise is "Python just
does whatever the underlying hardware does with floating-point
numbers". If that's the rule, it shouldn't be broken in the special
case of division by zero.
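[Later Pythons shortened the repr of 0.1 back to "0.1", but the stored value is still not exact; the decimal module can display the underlying binary double precisely:]

```python
from decimal import Decimal

# Decimal(float) converts the binary double exactly, digit for digit,
# so this prints the value 0.1 actually stores.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```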

--
\ “If the desire to kill and the opportunity to kill came always |
`\ together, who would escape hanging?” —Mark Twain, _Following |
_o__) the Equator_ |
Ben Finney

Christian Heimes

Feb 10, 2008, 5:57:39 PM
to pytho...@python.org
Grant Edwards wrote:
> A more efficient implementation? Just delete the code that
> raises the exception and the HW will do the right thing.

Do you really think that the hardware and the C runtime library will do
the right thing? Python runs on lots of platforms and architectures. Some
of the platforms don't have an FPU or don't support hardware acceleration
for floating point ops for user space applications. Some platforms don't
follow IEEE 754 semantics at all.

It took us a lot of effort to get consistent results for edge cases of
basic functions like sin and atan on all platforms. Simply removing
those lines and praying that it works won't do it.

Christian

Neal Becker

Feb 10, 2008, 6:04:21 PM
to pytho...@python.org
Christian Heimes wrote:

> Grant Edwards wrote:
>> A more efficient implementation? Just delete the code that
>> raises the exception and the HW will do the right thing.
>

> Do you really think that the hardware and the C runtime library will do
> the right thing? Python runs on lots of platforms and architectures. Some
> of the platforms don't have an FPU or don't support hardware acceleration
> for floating point ops for user space applications. Some platforms don't
> follow IEEE 754 semantics at all.
>
> It took us a lot of effort to get consistent results for edge cases of
> basic functions like sin and atan on all platforms. Simply removing
> those lines and praying that it works won't do it.
>
> Christian
>

I think, ideally, that on a platform that has proper IEEE 754 support we
would rely on the hardware, and only on platforms that don't would we add
extra software emulation.

With proper hardware support, the default would be a hardware floating pt
exception, which python would translate.

If the user wanted, she should be able to turn it off during some
calculation (but that would not be the default).

Carl Banks

Feb 10, 2008, 7:01:35 PM
On Feb 10, 5:50 pm, Ben Finney <bignose+hates-s...@benfinney.id.au>
wrote:

> Mark Dickinson <dicki...@gmail.com> writes:
> > On Feb 10, 3:29 pm, Grant Edwards <gra...@visi.com> wrote:
> > > platform does". Except it doesn't in cases like this. All my
> > > platforms do exactly what I want for division by zero: they
> > > generate a properly signed INF. Python chooses to override
> > > that (IMO correct) platform behavior with something surprising.
> > > Python doesn't generate exceptions for other floating point
> > > "events" -- why the inconsistency with divide by zero?
>
> > But not everyone wants 1./0. to produce an infinity; some people
> > would prefer an exception.
>
> Special cases aren't special enough to break the rules.
>
> Most people would not want this behaviour either::
>
> >>> 0.1
> 0.10000000000000001
>
> But the justification for this violation of surprise is "Python just
> does whatever the underlying hardware does with floating-point
> numbers". If that's the rule, it shouldn't be broken in the special
> case of division by zero.

Do you recall what the very next Zen after "Special cases aren't
special enough to break the rules" is?


that's-why-they-call-it-Zen-ly yr's,

Carl Banks

Grant Edwards

Feb 10, 2008, 7:05:56 PM
On 2008-02-10, Ben Finney <bignose+h...@benfinney.id.au> wrote:
> Mark Dickinson <dick...@gmail.com> writes:


>>> platform does". Except it doesn't in cases like this. All my
>>> platforms do exactly what I want for division by zero: they
>>> generate a properly signed INF. Python chooses to override
>>> that (IMO correct) platform behavior with something surprising.
>>> Python doesn't generate exceptions for other floating point
>>> "events" -- why the inconsistency with divide by zero?
>>
>> But not everyone wants 1./0. to produce an infinity; some
>> people would prefer an exception.
>
> Special cases aren't special enough to break the rules.
>
> Most people would not want this behaviour either::
>
> >>> 0.1
> 0.10000000000000001
>
> But the justification for this violation of surprise is
> "Python just does whatever the underlying hardware does with
> floating-point numbers". If that's the rule, it shouldn't be
> broken in the special case of division by zero.

My feelings exactly.

That's the rule that's always quoted to people asking about
various FP weirdness, but apparently the rule only applies
when/where certain people feel like it.

--
Grant Edwards grante Yow! YOW!! I'm in a very
at clever and adorable INSANE
visi.com ASYLUM!!

Grant Edwards

Feb 10, 2008, 7:07:13 PM
On 2008-02-10, Christian Heimes <li...@cheimes.de> wrote:

> Python's a/0 outcome doesn't violate the standards

It does.

> because Python doesn't promise to follow the IEEE 754 standard in the first place.

Whether a certain behavior violates that standard is
independent of whether somebody promised to follow the standard
or not.

> Mark and I are working hard to make math in Python more
> reliable across platforms. So far we have fixed a lot of
> problems but we haven't discussed the a/0 matter.
>
> The best we could give you is an option that makes Python's floats more
> IEEE 754 like:
>
>>>> from somemodule import ieee754
>>>> with ieee754:
> ...     r = a/0
> ...     print r
> inf

That would be great.

--
Grant Edwards grante Yow! A dwarf is passing
at out somewhere in Detroit!
visi.com

Carl Banks

Feb 10, 2008, 7:08:22 PM
On Feb 10, 3:29 pm, Grant Edwards <gra...@visi.com> wrote:

I understand your pain, but Python, like any good general-purpose
language, is a compromise. For the vast majority of programming,
division by zero is a mistake and not merely a degenerate case, so
Python decided to treat it like one.


Carl Banks

Mark Dickinson

Feb 10, 2008, 8:12:56 PM
On Feb 10, 5:50 pm, Ben Finney <bignose+hates-s...@benfinney.id.au>
wrote:
> Most people would not want this behaviour either::
>
>     >>> 0.1
>     0.10000000000000001

Sure. And if it weren't for backwards-compatibility and speed issues
one could reasonably propose making Decimal the default floating-point
type in Python (whilst still giving access to the hardware binary
floating point). I dare say that the backwards-compatibility isn't
really a problem: I can imagine a migration strategy resulting in
Decimal default floats in Python 4.0 ;-). But there are
orders-of-magnitude differences in speed that aren't going to be
solved by merely rewriting decimal.py in C.

I guess it's all about tradeoffs.

> But the justification for this violation of surprise is "Python just
> does whatever the underlying hardware does with floating-point
> numbers". If that's the rule, it shouldn't be broken in the special
> case of division by zero.

I'm not convinced that this is really the justification, but I'm not
quite sure what we're talking about here. The justification for
*printing* 0.1000...1 instead of 0.1 has to do with not hiding binary
floating-point strangeness from users, since they're eventually going
to have to deal with it anyway, and hiding it arguably causes worse
difficulties in understanding. The justification for having the
literal 0.1 not *be* exactly the number 0.1: well, what are the
alternatives? Decimal and Rational are very slow in comparison with
float, and historically Decimal wasn't even available until recently.

Mark

Mark Dickinson

Feb 10, 2008, 8:29:05 PM
On Feb 10, 7:08 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
> I understand your pain, but Python, like any good general-purpose
> language, is a compromise.  For the vast majority of programming,
> division by zero is a mistake and not merely a degenerate case, so
> Python decided to treat it like one.

Agreed. For 'normal' users, who haven't encountered the ideas of
infinities and NaNs, floating-point numbers are essentially a
computational model for the real numbers, and operations that are
illegal in the reals (square root of -1, division by zero) should
produce Python exceptions rather than send those users hurrying to
comp.lang.python to complain about something called #IND appearing on
their screens.

But for numerically-aware users it would be nice if it were possible
to do non-stop IEEE arithmetic with infinities and NaNs.

Any suggestions about how to achieve the above-described state of
affairs are welcome!

Mark

Christian Heimes

Feb 10, 2008, 9:08:51 PM
to pytho...@python.org
Grant Edwards wrote:
> That would be great.

I'm looking forward to review your patch anytime soon. :)

Christian


Christian Heimes

Feb 10, 2008, 9:49:07 PM
to pytho...@python.org, Mark Dickinson
Mark Dickinson wrote:
> Any suggestions about how to achieve the above-described state of
> affairs are welcome!

I have worked out a suggestion in three parts.

Part 1
------

The PyFloat C API gets two more functions:

int PyFloat_SetIEEE754(int new_state) -> old state
int PyFloat_GetIEEE754(void) -> current state

By default the state is 0 which means no IEEE 754 return values for
1./0., 0./0. and maybe some other places. An op like 1./0. raises an
exception.

With state 1 float ops like f/0. returns copysign(INF, f) and for f=0.
it returns a NaN.

ints and longs aren't affected by the state.

The state is to be stored and fetched from Python's thread state object.
This could slow down floats a bit because every time f/0. occurs the
state has to be looked up in the thread state object.

Part 2
------

The two functions are exposed to Python code as math.set_ieee754 and
math.get_ieee754. As an alternative the functions could be added to a
new module ieee754 or to the float type.

Part 3
------

contextlib gets a new context for ieee754

class ieee754(object):
    def __init__(self, state=1):
        self.new_state = state

    def __enter__(self):
        self.old_state = math.set_ieee754(self.new_state)

    def __exit__(self, *args):
        math.set_ieee754(self.old_state)

usage:

with contextlib.ieee754():
    ...
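[A pure-Python sketch of the three-part proposal above; all names are illustrative, since the real change would live in the interpreter. This just models the intended semantics with a thread-local flag and an explicit division helper:]

```python
import math
import threading
from contextlib import contextmanager

# Per-thread state, standing in for the flag on the thread state object.
_state = threading.local()

def set_ieee754(new_state):
    """Set the thread-local ieee754 flag and return the old state."""
    old_state = getattr(_state, "ieee754", 0)
    _state.ieee754 = new_state
    return old_state

def fdiv(a, b):
    """Float division that honours the thread-local ieee754 flag."""
    if b == 0:
        if not getattr(_state, "ieee754", 0):
            raise ZeroDivisionError("float division")
        if a == 0:
            return float("nan")                    # 0./0. -> NaN
        return math.copysign(float("inf"), a)      # f/0. -> copysign(INF, f)
    return a / b

@contextmanager
def ieee754(state=1):
    old_state = set_ieee754(state)
    try:
        yield
    finally:
        set_ieee754(old_state)

with ieee754():
    print(fdiv(1.0, 0.0))  # inf
```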

Christian


Mark Dickinson

Feb 10, 2008, 9:52:46 PM
On Feb 10, 7:07 pm, Grant Edwards <gra...@visi.com> wrote:
> On 2008-02-10, Christian Heimes <li...@cheimes.de> wrote:
> >>>> from somemodule import ieee754
> >>>> with ieee754:
> > ...    r = a/0
> > ...    print r
> > inf
>
> That would be great.

Seriously, in some of my crazier moments I've considered trying to
write a PEP on this, so I'm very interested in figuring out exactly
what it is that people want. The devil's in the details, but the
basic ideas would be:

(1) aim for consistent behaviour across platforms in preference to
exposing differences between platforms;
(2) make default arithmetic raise Python exceptions in preference to
returning infs and nans. Essentially, ValueError would be raised
anywhere that IEEE 754(r) specifies raising the divide-by-zero or
invalid signals, and OverflowError would be raised anywhere that IEEE
754(r) specifies raising the overflow signal. The underflow and
inexact signals would be ignored;
(3) have a thread-local floating-point environment available from
Python to make it possible to turn nonstop mode on or off, with the
default being off. Possibly make it possible to trap individual flags.

Any thoughts on the general directions here? It's far too late to
think about this for Python 2.6 or 3.0, but 3.1 might be a possibility.

Ben Finney

Feb 10, 2008, 10:25:36 PM
Christian Heimes <li...@cheimes.de> writes:

> The two function are exposed to Python code as math.set_ieee754 and
> math.get_ieee754.

Or, better, as a property, 'math.ieee754'.

--
\ "My, your, his, hers, ours, theirs, its. I'm, you're, he's, |
`\ she's, we're, they're, it's." -- Anonymous, |
_o__) alt.sysadmin.recovery |
Ben Finney

Christian Heimes

Feb 10, 2008, 11:01:15 PM
to pytho...@python.org
Ben Finney wrote:
> Or, better, as a property, 'math.ieee754'.

No, it won't work. It's not possible to have a module property.

Christian

Christian Heimes

Feb 11, 2008, 1:22:12 AM
to pytho...@python.org
Christian Heimes wrote:

> Mark Dickinson wrote:
>> Any suggestions about how to achieve the above-described state of
>> affairs are welcome!
>
> I have worked out a suggestion in three parts.

[snip]

I've implemented my proposal and submitted it to the experimental math
branch: http://svn.python.org/view?rev=60724&view=rev

Christian

greg

Feb 11, 2008, 2:26:23 AM
Christian Heimes wrote:

> I'm not talking about CS and IEEE floating point ops. I was referring to
> plain good old math. Python targets both newbies and professionals.

Maybe there should be another division operator for
use by FP professionals?

/ --> mathematical real division
// --> mathematical integer division
/// --> IEEE floating point division (where supported)

--
Greg

greg

Feb 11, 2008, 2:35:00 AM
Christian Heimes wrote:

> The state is to be stored and fetched from Python's thread state object.
> This could slow down floats a bit because every time f/0. occurs the
> state has to be looked up in the thread state object.

An alternative implementation might be to leave zero division
traps turned on, and when one occurs, consult the state to
determine whether to raise an exception or re-try that
operation with trapping turned off.

That would only incur the overhead of changing the hardware
setting when a zero division occurs, which presumably is a
relatively rare occurrence.
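[At the Python level the same idea looks like the sketch below: take the normal, trapping path first, and only when a zero division actually occurs compute the IEEE-style non-stop result by hand, so the common path pays no extra cost. The function name is made up:]

```python
import math

def div_nonstop(a, b):
    """Divide normally; on a zero division, fall back to the IEEE
    non-stop result (signed infinity, or NaN for 0/0)."""
    try:
        return a / b
    except ZeroDivisionError:
        if a == 0:
            return float("nan")                    # 0/0 -> NaN
        sign = math.copysign(1.0, a) * math.copysign(1.0, b)
        return math.copysign(float("inf"), sign)   # a/0 -> +/-inf

print(div_nonstop(1.0, 0.0))   # inf
print(div_nonstop(-1.0, 0.0))  # -inf
```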

--
Greg

Steven D'Aprano

Feb 11, 2008, 7:07:49 AM
On Sun, 10 Feb 2008 23:34:51 +0100, Christian Heimes wrote:

> Grant Edwards wrote:
>> You must have gone to a different school than I did. I learned that
>> for IEEE floating point operations a/0. is INF with the same sign as a
>> (except when a==0, then you get a NaN).
>
> I'm not talking about CS and IEEE floating point ops. I was referring to
> plain good old math. Python targets both newbies and professionals.
> That's the reason for two math modules (math and cmath).

Alas, Python doesn't do "plain good old math" either:

>>> 1/3 + 1/3 + 1/3 # three thirds makes one exactly
0

Hmmm... okay, let's try this:

>>> import math
>>> math.sqrt(10)**2 == 10
False

How about something a little more advanced? Euler's Identity says that e
to the power of i*pi equals -1. Python writes i as j, so we write this:

>>> math.e**(math.pi*1j) == -1
False

How about something absolutely fundamental to good old maths, like the
Distributive Law?

>>> 0.3*(0.1 + 0.9) == (0.3*0.1) + (0.3*0.9) # a(b+c) = ab + ac
False

Or even something as basic as this:

>>> 0.1 + 0.9 - 0.9 == 0.1 # x + y - y = x
False


Floating point maths is not and can not be the same as the maths we learn
about in schools. Floats are not reals. We should just give up the
fantasy of making floats the same as real numbers, because it cannot
happen. I applaud the effort to hide the complexity of floating point
maths, but compared to the things newbies already get surprised by,
having 1.0/0 return inf isn't even a blip on the radar.

In fact, most school kids learn that "one over nothing is infinity" (not
from their teachers, I think they pick it up by osmosis), so that will
probably cause less grief than the other examples I gave.

[...]

> Python's a/0 outcome doesn't violate the standards because Python
> doesn't promise to follow the IEEE 754 standard in the first place. Mark
> and I are working hard to make math in Python more reliable across
> platforms. So far we have fixed a lot of problems but we haven't
> discussed the a/0 matter.

And thank you for your efforts, they are appreciated.


> The best we could give you is an option that makes Python's floats more
> IEEE 754 like:
>
>>>> from somemodule import ieee754
>>>> with ieee754:
> ... r = a/0
> ... print r
> inf


Sounds like a good plan to me. I could live with that.

--
Steven

Neal Becker

unread,
Feb 11, 2008, 9:15:21 PM2/11/08
to pytho...@python.org
Mark Dickinson wrote:

> On Feb 10, 3:10 pm, bearophileH...@lycos.com wrote:
>> What Python run on a CPU that doesn't handle the nan correctly?
>
> How about platforms that don't even have nans? I don't think either
> IBM's hexadecimal floating-point format, or the VAX floating-point
> formats
> support NaNs. Python doesn't assume IEEE 754 hardware; though much
> of its code
> could be a lot simpler if it did :-).
>
> Mark

So just use it where it's available, and emulate where it's not.

John Nagle

unread,
Feb 14, 2008, 11:09:38 PM2/14/08
to

You also need to think about how conditionals interact with
quiet NANs. Properly, comparisons like ">" have three possibilities:
True, False, and "raise". Many implementations don't do that well,
which means that you lose trichotomy. "==" has issues; properly,
"+INF" is not equal to itself.

If you support quiet NANs, you need the predicates like "isnan".

I've done considerable work with code that handled floating
point exceptions in complex ways. I've done animation simulations
(see "www.animats.com") where floating point overflow could occur,
but just meant that part of the computation had to be rerun with a
smaller time step. So I'm painfully familiar with the interaction
of IEEE floating point, Windows FPU exception modes, and C++ exceptions.
On x86, with some difficulty, you can turn an FPU exception into a
C++ exception using Microsoft's compilers. But that's not portable.
x86 has exact exceptions, but most other superscalar machines
(PowerPC, Alpha, if anybody cares) do not.

For Python, I'd suggest throwing a Python exception on all errors
recognized by the FPU, except maybe underflow. If you're doing
such serious number-crunching that you really want to handle NANs,
you're probably not writing in Python anyway.

John Nagle

Mark Dickinson

unread,
Feb 15, 2008, 1:19:53 PM2/15/08
to
On Feb 14, 11:09 pm, John Nagle <na...@animats.com> wrote:
> You also need to think about how conditionals interact with
> quiet NANs. Properly, comparisons like ">" have three possibilities:

True. There was a recent change to Decimal to make comparisons (other
than !=, ==) with NaNs do the "right thing": that is, raise a Python
exception, unless the Invalid flag is not trapped, in which case they
return False (and also raise the Invalid flag). I imagine something
similar would make sense for floats.
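The Decimal behaviour described above can be checked directly with the stdlib `decimal` module (as it behaves from Python 2.6 onward):

```python
# Ordering comparisons with a NaN raise InvalidOperation under the
# default (trapping) Decimal context; == and != never raise.
from decimal import Decimal, InvalidOperation, localcontext

nan = Decimal("NaN")
print(nan == Decimal(1))               # False, quietly

try:
    nan < Decimal(1)
except InvalidOperation:
    print("ordering comparison raised InvalidOperation")

# With the Invalid trap disabled, the comparison instead returns
# False and merely raises the flag, as described above.
with localcontext() as ctx:
    ctx.traps[InvalidOperation] = 0
    print(nan < Decimal(1))            # False
    print(bool(ctx.flags[InvalidOperation]))  # the flag was raised
```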

> True, False, and "raise". Many implementations don't do that well,
> which means that you lose trichotomy. "==" has issues; properly,
> "+INF" is not equal to itself.

I don't understand: why would +INF not be equal to itself? Having
INF == INF be True seems like something that makes sense both
mathematically and computationally.

> If you support quiet NANs, you need the predicates like "isnan".

They're on their way! math.isnan and math.isinf will be in Python
2.6.

> C++ exception using Microsoft's compilers. But that's not portable.
> x86 has exact exceptions, but most other superscalar machines
> (PowerPC, Alpha, if anybody cares) do not.

Interesting. What do you mean by 'exact exception'?

> For Python, I'd suggest throwing a Python exception on all errors
> recognized by the FPU, except maybe underflow.

Yes: I think this should be the default behaviour, at least. It was
agreed quite a while ago amongst the Python demigods that the IEEE
overflow, invalid and divide-by-zero signals should ideally raise
Python exceptions, while underflow and inexact should be ignored. The
problem is that that's not what Python does at the moment, and some
people rely on being able to get NaNs and infinities the old ways.

> If you're doing
> such serious number-crunching that you really want to handle NANs,
> you're probably not writing in Python anyway.

If you're worried about speed, then I agree you probably shouldn't be
writing in Python. But I can imagine there are use-cases for nonstop
arithmetic with nans and infs where speed isn't the topmost concern.

Mark

Grant Edwards

unread,
Feb 15, 2008, 1:38:14 PM2/15/08
to
On 2008-02-15, Mark Dickinson <dick...@gmail.com> wrote:

>> If you're doing such serious number-crunching that you really
>> want to handle NANs, you're probably not writing in Python
>> anyway.

I disagree completely. I do a lot of number crunching in
Python where I want IEEE NaN and Inf behavior. Speed is a
completely orthogonal issue.

> If you're worried about speed, then I agree you probably
> shouldn't be writing in Python.

Even if you are worried about speed, using tools like like
numpy can do some pretty cool stuff.

> But I can imagine there are use-cases for nonstop arithmetic
> with nans and infs where speed isn't the topmost concern.

Frankly, I don't see that speed has anything to do with it at
all. I use Python for number-crunching because it's easy to
program in. When people complain about not getting the right
results, replying with "if you want something fast, don't use
Python" makes no sense.

--
Grant Edwards grante Yow! I put aside my copy
at of "BOWLING WORLD" and
visi.com think about GUN CONTROL
legislation...

Mark Dickinson

unread,
Feb 15, 2008, 1:44:17 PM2/15/08
to
On Feb 15, 1:38 pm, Grant Edwards <gra...@visi.com> wrote:

> On 2008-02-15, Mark Dickinson <dicki...@gmail.com> wrote:
>
> >> If you're doing such serious number-crunching that you really
> >> want to handle NANs, you're probably not writing in Python
> >> anyway.
>

Some dodgy quoting here: that wasn't me!

> I disagree completely. I do a lot of number crunching in
> Python where I want IEEE NaN and Inf behavior. Speed is a
> completely orthogonal issue.
>

Exactly.

Mark

Grant Edwards

unread,
Feb 15, 2008, 2:02:07 PM2/15/08
to
On 2008-02-15, Mark Dickinson <dick...@gmail.com> wrote:
> On Feb 15, 1:38 pm, Grant Edwards <gra...@visi.com> wrote:
>> On 2008-02-15, Mark Dickinson <dicki...@gmail.com> wrote:
>>
>> >> If you're doing such serious number-crunching that you really
>> >> want to handle NANs, you're probably not writing in Python
>> >> anyway.
>
> Some dodgy quoting here: that wasn't me!

Yup. That's indicated by the extra level of ">". Sorry if that
misled anybody -- I accidentally deleted the nested attribution
line when I was trimming things.

>> I disagree completely. I do a lot of number crunching in
>> Python where I want IEEE NaN and Inf behavior. Speed is a
>> completely orthogonal issue.
>
> Exactly.

--
Grant Edwards grante Yow! I know how to do
at SPECIAL EFFECTS!!
visi.com

Steve Holden

unread,
Feb 15, 2008, 2:35:34 PM2/15/08
to pytho...@python.org
Mark Dickinson wrote:
> On Feb 14, 11:09 pm, John Nagle <na...@animats.com> wrote:
>> You also need to think about how conditionals interact with
>> quiet NANs. Properly, comparisons like ">" have three possibilities:
>
> True. There was a recent change to Decimal to make comparisons (other
> than !=, ==) with NaNs do the "right thing": that is, raise a Python
> exception, unless the Invalid flag is not trapped, in which case they
> return False (and also raise the Invalid flag). I imagine something
> similar would make sense for floats.
>
>> True, False, and "raise". Many implementations don't do that well,
>> which means that you lose trichotomy. "==" has issues; properly,
>> "+INF" is not equal to itself.
>
> I don't understand: why would +INF not be equal to itself? Having
> INF == INF be True seems like something that makes sense both
> mathematically and computationally.
> [...]

There are an uncountable number of infinities, all different.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/

Jeff Schwab

unread,
Feb 15, 2008, 2:41:47 PM2/15/08
to

+ALEPH0?

Carl Banks

unread,
Feb 15, 2008, 3:20:05 PM2/15/08
to
On Feb 14, 11:09 pm, John Nagle <na...@animats.com> wrote:
> You also need to think about how conditionals interact with
> quiet NANs. Properly, comparisons like ">" have three possibilities:
> True, False, and "raise". Many implementations don't do that well,
> which means that you lose trichotomy. "==" has issues; properly,
> "+INF" is not equal to itself.

I'm pretty sure it is. It certainly is on my machine at the moment:

>>> float(3e300*3e300) == float(2e300*4e300)
True

Are you confusing INF with NAN, which is specified to be not equal to
itself (and, IIRC, is the only thing specified to be not equal to
itself, such that one way to test for NAN is x!=x).
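Both points are easy to confirm at the interpreter: NaN is the one value unequal to itself, while +INF compares equal to itself just fine.

```python
# The classic x != x NaN test, next to math.isnan (new in Python 2.6),
# and a check that +INF really is equal to itself.
import math

nan = float("nan")
inf = float("inf")

print(nan != nan)        # True: the classic x != x NaN test
print(math.isnan(nan))   # True
print(inf == inf)        # True: +INF is equal to itself
```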


> For Python, I'd suggest throwing a Python exception on all errors
> recognized by the FPU, except maybe underflow. If you're doing
> such serious number-crunching that you really want to handle NANs,
> you're probably not writing in Python anyway.

Even if that were entirely true, there are cases where (for example)
you're using Python to glue together numerical routines in C, but you
need to do some preliminary calculations in Python (where there's no
edit/compile/run cycle but there is slicing and array ops), but want
the same floating point behavior.

IEEE conformance is not an unreasonable thing to ask for, and "you
should be using something else" isn't a good answer to "why not?".


Carl Banks

Grant Edwards

unread,
Feb 15, 2008, 3:36:15 PM2/15/08
to
On 2008-02-15, Carl Banks <pavlove...@gmail.com> wrote:

>> For Python, I'd suggest throwing a Python exception on all
>> errors recognized by the FPU, except maybe underflow. If
>> you're doing such serious number-crunching that you really
>> want to handle NANs, you're probably not writing in Python
>> anyway.
>
> Even if that were entirely true, there are cases where (for
> example) you're using Python to glue together numerical
> routines in C, but you need to do some preliminary
> calculations in Python (where there's no edit/compile/run
> cycle but there is slicing and array ops), but want the same
> floating point behavior.

Or when you're verifying/testing algorithms implemented on
another platform that follows the IEEE standard.

Or when propagating quiet NaNs and Infinities is by far the
simplest way to do what needs to be done. Being able to feed
NaN values into a set of calculations and have outputs
dependant on invalid inputs turn out to be NaNs is just _so_
much simpler than either

1) Having a complete set of boolean logic code in parallel to
the calculations that keeps track of what's valid and
what's not.

2) Writing a floating point class with some sort of "valid
flag" that travels along with the values.

> IEEE conformance is not an unreasonable thing to ask for,

Especially when the hardware already provides it (or at least
provides something close enough to satisfy most of us who
whinge about such things).

> and "you should be using something else" isn't a good answer
> to "why not?".

even for Usenet. ;)

--
Grant Edwards grante Yow! A shapely CATHOLIC
at SCHOOLGIRL is FIDGETING
visi.com inside my costume..

Mark Dickinson

unread,
Feb 15, 2008, 3:59:10 PM2/15/08
to
On Feb 15, 2:35 pm, Steve Holden <st...@holdenweb.com> wrote:
> There are an uncountable number of infinities, all different.

If you're talking about infinite cardinals or ordinals in set theory,
then yes. But that hardly seems relevant to using floating-point as a
model for the doubly extended real line, which has exactly two
infinities.

Mark

Steve Holden

unread,
Feb 15, 2008, 5:27:40 PM2/15/08
to pytho...@python.org
True enough, but aren't they of indeterminate magnitude? Since infinity
== infinity + delta for any delta, comparison for equality seems a
little specious.

Mark Dickinson

unread,
Feb 15, 2008, 5:43:49 PM2/15/08
to
On Feb 15, 5:27 pm, Steve Holden <st...@holdenweb.com> wrote:
> True enough, but aren't they of indeterminate magnitude? Since infinity
> == infinity + delta for any delta, comparison for equality seems a
> little specious.

The equality is okay; it's when you start trying to apply arithmetic
laws like

a+c == b+c implies a == b

that you get into trouble. In other words, the doubly-extended real
line is a perfectly well-defined and well-behaved *set*, and even a
nice (compact) topological space with the usual topology. It's just
not a field, or a group under addition, or ...

Mark

Steven D'Aprano

unread,
Feb 15, 2008, 7:55:29 PM2/15/08
to
On Thu, 14 Feb 2008 20:09:38 -0800, John Nagle wrote:

> For Python, I'd suggest throwing a Python exception on all errors
> recognized by the FPU, except maybe underflow. If you're doing such
> serious number-crunching that you really want to handle NANs, you're
> probably not writing in Python anyway.

Chicken, egg.

The reason people aren't writing in Python is because Python doesn't
support NANs, and the reason Python doesn't support NANs is because the
people who want support for NANs aren't using Python.

Oh, also because it's hard to do it in a portable fashion. But maybe
Python doesn't need to get full platform independence all in one go?

# pseudo-code
if sys.platform == "whatever":
float = IEEE_float
else:
warnings.warn("no support for NANs, beware of exceptions")


There are use-cases for NANs that don't imply the need for full C speed.
Number-crunching doesn't necessarily imply that you need to crunch
billions of numbers in the minimum time possible. Being able to do that
sort of "crunch-lite" in Python would be great.

--
Steven

Steven D'Aprano

unread,
Feb 15, 2008, 7:59:15 PM2/15/08
to
On Fri, 15 Feb 2008 14:35:34 -0500, Steve Holden wrote:

>> I don't understand: why would +INF not be equal to itself? Having INF
>> == INF be True seems like something that makes sense both
>> mathematically and computationally.
>> [...]
>
> There are an uncountable number of infinities, all different.


But the IEEE standard only supports one of them, aleph(0).

Technically two: plus and minus aleph(0).

--
Steven

Mark Dickinson

unread,
Feb 15, 2008, 8:31:51 PM2/15/08
to
On Feb 15, 7:59 pm, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
Not sure that alephs have anything to do with it. And unless I'm
missing something, minus aleph(0) is nonsense. (How do you define the
negation of a cardinal?)

From the fount of all wisdom: (http://en.wikipedia.org/wiki/
Aleph_number)

"""The aleph numbers differ from the infinity (∞) commonly found in
algebra and calculus. Alephs measure the sizes of sets; infinity, on
the other hand, is commonly defined as an extreme limit of the real
number line (applied to a function or sequence that "diverges to
infinity" or "increases without bound"), or an extreme point of the
extended real number line. While some alephs are larger than others, ∞
is just ∞."""

Mark

Paul Rubin

unread,
Feb 15, 2008, 9:10:40 PM2/15/08
to
Mark Dickinson <dick...@gmail.com> writes:
> > But the IEEE standard only supports one of them, aleph(0).
> > Technically two: plus and minus aleph(0).
>
> Not sure that alephs have anything to do with it.

They really do not. The extended real line can be modelled in set
theory, but the "infinity" in it is not a cardinal as we would
normally treat them in set theory. Almost all of what we usually call
real analysis can be done in fairly weak subsystems of second order
arithmetic, which is a countable theory. There is a fairly interesting
rant arguing basically that set theory itself is bogus (not in the
sense of being wrong, but in the sense of being unnecessary for
most mathematics):

http://math.stanford.edu/~feferman/papers/psa1992.pdf

Jeff Schwab

unread,
Feb 15, 2008, 11:42:27 PM2/15/08
to
Paul Rubin wrote:
> Mark Dickinson <dick...@gmail.com> writes:
>>> But the IEEE standard only supports one of them, aleph(0).
>>> Technically two: plus and minus aleph(0).
>> Not sure that alephs have anything to do with it.
>
> They really do not. The extended real line can be modelled in set
> theory, but the "infinity" in it is not a cardinal as we would
> normally treat them in set theory.

Georg Cantor disagrees. Whether Aleph 1 is the cardinality of the set
of real numbers is provably undecidable.

http://mathworld.wolfram.com/ContinuumHypothesis.html

Paul Rubin

unread,
Feb 16, 2008, 12:08:54 AM2/16/08
to
Jeff Schwab <je...@schwabcenter.com> writes:
> > They really do not. The extended real line can be modelled in set
> > theory, but the "infinity" in it is not a cardinal as we would
> > normally treat them in set theory.
>
> Georg Cantor disagrees. Whether Aleph 1 is the cardinality of the set
> of real numbers is provably undecidable.

You misunderstand, the element called "infinity" in the extended real
line has nothing to do with the cardinality of the reals, or of
infinite cardinals as treated in set theory. It's just an element of
a structure that can be described in elementary terms or can be viewed
as sitting inside of the universe of sets described by set theory.
See:

http://en.wikipedia.org/wiki/Point_at_infinity

Aleph 1 didn't come up in the discussion earlier either. FWIW, it's
known (provable from the ZFC axioms) that the cardinality of the reals
is an aleph; ZFC just doesn't determine which particular aleph it is.
The Wikipedia article about CH is also pretty good:

http://en.wikipedia.org/wiki/Continuum_hypothesis

the guy who proved CH is independent also expressed a belief that it
is actually false.

Steven D'Aprano

unread,
Feb 16, 2008, 7:08:40 PM2/16/08
to
On Fri, 15 Feb 2008 17:31:51 -0800, Mark Dickinson wrote:

> On Feb 15, 7:59 pm, Steven D'Aprano <st...@REMOVE-THIS-
> cybersource.com.au> wrote:
>> On Fri, 15 Feb 2008 14:35:34 -0500, Steve Holden wrote:
>> >> I don't understand: why would +INF not be equal to itself? Having
>> >> INF == INF be True seems like something that makes sense both
>> >> mathematically and computationally.
>> >> [...]
>>
>> > There are an uncountable number of infinities, all different.
>>
>> But the IEEE standard only supports one of them, aleph(0).
>>
>> Technically two: plus and minus aleph(0).
>
> Not sure that alephs have anything to do with it. And unless I'm
> missing something, minus aleph(0) is nonsense. (How do you define the
> negation of a cardinal?)

*shrug* How would you like to?

The natural numbers (0, 1, 2, 3, ...) are cardinal numbers too. 0 is the
cardinality of the empty set {}; 1 is the cardinality of the set
containing only the empty set {{}}; 2 is the cardinality of the set
containing a set of cardinality 0 and a set of cardinality 1 {{}, {{}}}
... and so on.

Since we have generalized the natural numbers to the integers

... -3 -2 -1 0 1 2 3 ...

without worrying about what set has cardinality -1, I see no reason why
we shouldn't generalize negation to the alephs. The question of what set,
if any, has cardinality -aleph(0) is irrelevant. Since the traditional
infinity of the real number line comes in a positive and negative
version, and we identify positive ∞ as aleph(0) [see below for why], I
don't believe there's any thing wrong with identifying -aleph(0) as -∞.

Another approach might be to treat the cardinals as ordinals. Subtraction
isn't directly defined for ordinals, ordinals explicitly start counting
at zero and only increase, never decrease. But one might argue that since
all ordinals are surreal numbers, and subtraction *is* defined for
surreals, we might identify aleph(0) as the ordinal omega ω then the
negative of aleph(0) is just -ω, or { | ..., -4, -3, -2, -1 }. Or in
English... -aleph(0) is the number more negative than every negative
integer, which gratifyingly matches our intuition about negative infinity.

There's lots of hand-waving there. I expect a real mathematician could
make it all vigorous. But a lot of it is really missing the point, which
is that the IEEE standard isn't about ordinals, or cardinals, or surreal
numbers, but about floating point numbers as a discrete approximation to
the reals. In the reals, there are only two infinities that we care
about, a positive and negative, and apart from the sign they are
equivalent to aleph(0).


> From the fount of all wisdom: (http://en.wikipedia.org/wiki/
> Aleph_number)
>
> """The aleph numbers differ from the infinity (∞) commonly found in
> algebra and calculus. Alephs measure the sizes of sets; infinity, on the
> other hand, is commonly defined as an extreme limit of the real number
> line (applied to a function or sequence that "diverges to infinity" or
> "increases without bound"), or an extreme point of the extended real
> number line. While some alephs are larger than others, ∞ is just ∞."""

That's a very informal definition of infinity. Taken literally, it's also
nonsense, since the real number line has no limit, so talking about the
limit of something with no limit is meaningless. So we have to take it
loosely.

In fact, it isn't true that "∞ is just ∞" even in the two examples they
discuss. There are TWO extended real number lines: the projectively
extended real numbers, and the affinely extended real numbers. In the
projective extension to the reals, there is only one ∞ and it is
unsigned. In the affine extension, there are +∞ and -∞.

If you identify ∞ as "the number of natural numbers", that is, the number
of numbers in the sequence 0, 1, 2, 3, 4, ... then that's precisely what
aleph(0) is. If there's a limit to the real number line in any sense at
all, it is the same limit as for the integers (since the integers go all
the way to the end of the real number line).

(But note that there are more reals between 0 and ∞ than there are
integers, even though both go to the same limit: the reals are more
densely packed.)

--
Steven

Mark Dickinson

unread,
Feb 16, 2008, 7:30:08 PM2/16/08
to
On Feb 16, 7:08 pm, Steven D'Aprano <st...@REMOVE-THIS-

cybersource.com.au> wrote:
> On Fri, 15 Feb 2008 17:31:51 -0800, Mark Dickinson wrote:
> > Not sure that alephs have anything to do with it.  And unless I'm
> > missing something, minus aleph(0) is nonsense. (How do you define the
> > negation of a cardinal?)
>
> *shrug* How would you like to?

> Since we have generalized the natural numbers to the integers


>
> ... -3 -2 -1 0 1 2 3 ...
>
> without worrying about what set has cardinality -1, I see no reason why
> we shouldn't generalize negation to the alephs.

The reason is that it doesn't give a useful result. There's a natural
process for turning a commutative monoid into a group (it's the
adjoint to the forgetful functor from groups to commutative monoids).
Apply it to the "set of cardinals", leaving aside the set-theoretic
difficulties with the idea of the "set of cardinals" in the first
place, and you get the trivial group.

> There's lots of hand-waving there. I expect a real mathematician could
> make it all vigorous.

Rigorous? Yes, I expect I could.

And surreal numbers are something entirely different again.

> That's a very informal definition of infinity. Taken literally, it's also
> nonsense, since the real number line has no limit, so talking about the
> limit of something with no limit is meaningless. So we have to take it
> loosely.

The real line, considered as a topological space, has limit points.
Two of them.

Mark

Mark Dickinson

unread,
Feb 16, 2008, 8:47:39 PM2/16/08
to
On Feb 16, 7:30 pm, Mark Dickinson <dicki...@gmail.com> wrote:
> The real line, considered as a topological space, has limit points.
> Two of them.

Ignore that. It was nonsense. A better statement: the completion (in
the sense of lattices) of the real numbers is (isomorphic to) the
doubly-extended real line. It's in this sense that +infinity and -
infinity can be considered limits.

I've no clue where your (Steven's) idea that 'all ordinals are surreal
numbers' came from. They're totally unrelated.

Sorry. I haven't had any dinner. I get tetchy when I haven't had any
dinner.

Usenet'ly yours,

Mark

Steven D'Aprano

unread,
Feb 16, 2008, 9:39:12 PM2/16/08
to
On Sat, 16 Feb 2008 17:47:39 -0800, Mark Dickinson wrote:

> I've no clue where your (Steven's) idea that 'all ordinals are surreal
> numbers' came from. They're totally unrelated.

Tell that to John Conway.

[quote]
Just as the *real* numbers fill in the gaps between the integers, the
*surreal* numbers fill in the gaps between Cantor's ordinal numbers. We
get them by generalizing our use of the {|} notation for the ordinal
numbers.
[...]
The ordinal numbers are those where there aren't any numbers to the right
of the bar:

{|} = 0, the simplest number of all
{0|} = 1, the simplest number greater than 0
{0,1|} = 2, the simplest number greater than 1 (and 0)

and so on.
[end quote]

"The Book of Numbers", John W Conway and Richard K Guy, Copernicus Books,
1996, p.283.

I trust I don't have to explain this to Mark, but for the benefit of
anyone else reading, Conway invented surreal numbers.


--
Steven

Mark Dickinson

unread,
Feb 16, 2008, 9:44:23 PM2/16/08
to
On Feb 16, 9:39 pm, Steven D'Aprano <st...@REMOVE-THIS-

cybersource.com.au> wrote:
> On Sat, 16 Feb 2008 17:47:39 -0800, Mark Dickinson wrote:
> > I've no clue where your (Steven's) idea that 'all ordinals are surreal
> > numbers' came from.  They're totally unrelated.
>
> Tell that to John Conway.

Apparently I also get stupid when I haven't had any dinner. Or
perhaps dinner has nothing to do with it. I was thinking of the
nonstandard reals.

You're absolutely right, and I hereby forfeit my Ph.D. (for the second
time today, as it happens).

Mark
