Would somebody have an idea of how to represent infinity?
I have variables to initialize to +inf (or -inf), so that later every value
will be smaller (or larger) than those variables.
I tried to redefine the comparison operators on an object. It works for
Inf>10, but I can't make 10>Inf work (because that uses int's >).
--------------------
def __gt__(self, val):
    return 1
--------------------
No need to tell me that I can use 99999999999999 or -99999999999999 or
some such value. Nor the machine's MAX_INT... since Python integers can go
higher.
Thank you for your answers (please also reply by email if possible, because
I will not have access to the newsgroup this weekend).
PS: Sorry for my English...
You can try to redefine __cmp__ to handle all the cases.
For example:
Inf(+1) is +oo
Inf(-1) is -oo
class Inf:
    def __init__(self, sign=+1):
        self.sign = sign
    def __str__(self):
        return self.sign<0 and "-Inf" or "+Inf"
    def __cmp__(self, other):
        if isinstance(other, Inf):
            return cmp(self.sign, other.sign)
        else:
            return self.sign
    def __pos__(self): return Inf(self.sign)
    def __neg__(self): return Inf(-self.sign)

inf = Inf()

for t in (
        "+inf", "-inf",
        "10<-inf", "10>-inf", "10<+inf", "10>+inf",
        "-inf<10", "-inf>10", "+inf<10", "+inf>10",
        "-inf<-inf", "-inf==-inf", "-inf<+inf", "-inf==+inf",
        "-inf>=-inf", "-inf>=+inf"):
    print t, "=", eval(t)
This gives:
+inf = +Inf
-inf = -Inf
10<-inf = False
10>-inf = True
10<+inf = True
10>+inf = False
-inf<10 = True
-inf>10 = False
+inf<10 = False
+inf>10 = True
-inf<-inf = False
-inf==-inf = True
-inf<+inf = True
-inf==+inf = False
-inf>=-inf = True
-inf>=+inf = False
You may also redefine __add__, __mul__, ...
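For example, arithmetic in the same sign convention might look like the
sketch below (the method names follow the suggestion above; what to do for
the undefined cases +Inf + -Inf and Inf * 0 is an assumption):

    def __add__(self, other):
        if isinstance(other, Inf) and other.sign != self.sign:
            raise ValueError("+Inf + -Inf is undefined")
        return Inf(self.sign)
    __radd__ = __add__

    def __mul__(self, other):
        s = cmp(other, 0)
        if s == 0:
            raise ValueError("Inf * 0 is undefined")
        return Inf(self.sign * s)
    __rmul__ = __mul__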
Best regards,
Christophe.
--
(o_ Christophe Delord _o)
//\ http://christophe.delord.free.fr/ /\\
V_/_ mailto:christop...@free.fr _\_V
>>> x=1e1000
>>> x
inf
>>> x>10
True
>>> x<10
False
>>> 10>x
False
>>> 10<x
True
>>> x-x
nan
>>> -x
-inf
Warning: +oo==+oo so don't be surprised by this example:
>>> 1e1000==1e2000
True
Christophe.
> Not need to say to me that I can use 99999999999999 or -99999999999999 or I
> do not know what.
> Nor from a MAX_INT of the machine... bus Python can go higher.
>
$ python
Python 2.0 (#1, Dec 18 2000, 10:19:52) [C] on osf1V4
Type "copyright", "credits" or "license" for more information.
>>> x = 1e1000
>>> x
1.7976931348623157e+308
>>> x > 10
1
>>> x - x
0.0
>>>
I missed the original post, hope this helps.
Donn Cave, do...@u.washington.edu
this is part of the IEEE 754 standard for floating point computation
(http://cch.loria.fr/documentation/IEEE754/). It is not guaranteed to be
implemented by Python (someone please correct me if I'm wrong). Whether
this works is a function of the machine, compiler and library used to
compile Python. I'm not sure how common it is, but if you want your code to
be portable, it may not be reliable on all machines, which is too bad,
because IEEE 754 is a pretty darn good standard.
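A small probe for that question -- does a particular build overflow float
arithmetic to an IEEE infinity, or complain instead? (A sketch only; as the
rest of the thread shows, the answer varies with machine, compiler and
libm.)

def overflow_gives_inf():
    try:
        huge = 1e308 * 10.0            # past the largest IEEE double
    except OverflowError:
        return 0                       # this build raises instead of overflowing
    # infinity is the only float that is unchanged by doubling
    return huge > 1e308 and huge == huge * 2

print overflow_gives_inf()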
BTW: I'd love to see literals for Inf, -Inf and NaN in Python.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
> Will somebody have an idea, to represent the infinite one?
>
> I have variables to initialize with is +inf (or - inf). To be sure that
> later, all will be smaller (or larger) than my variables.
> I tried to redefine the operators on an object. It goes for Inf>10 but I do
> not arrive for 10>Inf (because that takes > of Int).
>
> --------------------
>
> def __gt__(self, val):
>
>     return 1
>
> --------------------
Well, here's a start:
class inf(int):
    def __init__(self):
        int.__init__(self)
        self.sign = 1
    def __str__(self):
        return "<infinity>"
    def __repr__(self):
        return "<infinity>"
    def __cmp__(self, n):
        if isinstance(n, inf):
            return cmp(self.sign, n.sign)
        return self.sign
    def __neg__(self):
        retval = inf()
        retval.sign = self.sign * -1
        return retval

if __name__ == '__main__':
    import sys
    m = -inf()
    n = inf()
    assert(m < n)
    assert(n > m)
    assert(m == m)
    assert(n == n)
    assert(m != n)
    assert(m < 0)
    assert(n > 0)
    assert(m < -sys.maxint)
    assert(n > sys.maxint)
    assert(m < -m)
    assert(n == -m)
--
Cliff Wells, Software Engineer
Logiplex Corporation (www.logiplex.net)
(503) 978-6726 x308 (800) 735-0555 x308
With Python 2.2 and Linux it works. The example I gave is a copy-paste from IDLE. It also works with my Python 1.5.2 and 2.3 (I'm under Linux on an Intel processor).
So it seems it is not implemented in every binary and/or platform and/or compiler and/or ... So this solution may not be portable, and it may be safer to make a special class for infinity, as in my first reply.
Here is the proof that I'm not totally drunk yet ;-)
[christ@localhost py]$ python1.5
Python 1.5.2 (#1, Apr 3 2002, 18:16:26) [GCC 2.96 20000731 (Red Hat Linux 7.2 2 on linux-i386
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x=1e1000
>>> x
inf
>>>
[christ@localhost py]$ python2.2
Python 2.2 (#1, Apr 12 2002, 15:29:57)
[GCC 2.96 20000731 (Red Hat Linux 7.2 2.96-109)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> x=1e1000
>>> x
inf
>>>
[christ@localhost py]$ python
Python 2.3a0 (#2, Jun 20 2002, 19:48:46)
[GCC 2.96 20000731 (Red Hat Linux 7.3 2.96-110)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> x=1e1000
>>> x
inf
>>>
[christ@localhost py]$
> Will somebody have an idea, to represent the infinite one?
>
> I have variables to initialize with is +inf (or - inf). To be sure that
> later, all will be smaller (or larger) than my variables.
> I tried to redefine the operators on an object. It goes for Inf>10 but I do
> not arrive for 10>Inf (because that takes > of Int).
>
> --------------------
>
> def __gt__(self, val):
>
>     return 1
>
> --------------------
Here's a more complete infinite value (not that you'd need it, judging from
your requirements):
class inf(int):
    def __init__(self):
        int.__init__(self)
        self.sign = 1
    def __str__(self):
        return "<%sinfinity>" % {-1: "-", 1: ""}[self.sign]
    def __repr__(self):
        return str(self)
    def __cmp__(self, n):
        if isinstance(n, inf):
            return cmp(self.sign, n.sign)
        return self.sign
    def __neg__(self):
        retval = inf()
        retval.sign = self.sign * -1
        return retval
    def __mul__(self, n):
        retval = inf()
        retval.sign = self.sign * cmp(n, 0)
        return retval
    def __div__(self, n):
        return self * n
    def __add__(self, n):
        if isinstance(n, inf):
            if n.sign != self.sign:
                return 0
        return self
    def __sub__(self, n):
        return self + -n

if __name__ == '__main__':
    import sys
    m = -inf()
    n = inf()
    print m
    print n
    print m * n
    print n * n
    print m * m
    assert(m < n)
    assert(n > m)
    assert(m == m)
    assert(n == n)
    assert(m != n)
    assert(m < 0)
    assert(n > 0)
    assert(m != 0)
    assert(n != 0)
    assert(m < -sys.maxint)
    assert(n > sys.maxint)
    assert(m < -m)
    assert(n == -m)
    assert(n * 1 == n)
    assert(n * 1000 == n)
    assert(n * -1 == m)
    assert(n * -1000 == m)
    assert(n * 0 == 0)
    assert(m * 0 == 0)
    assert(n / 1 == n)
    assert(n / -1 == m)
    assert(n / -1000 == m)
    assert(m / 1 == m)
    assert(m / 1000 == m)
    assert(m / -1 == n)
    assert(m / -1000 == n)
    assert(m + 1 == m)
    assert(m - 1 == m)
    assert(n + 1 == n)
    assert(n - 1 == n)
    assert(n + m == 0)
    assert(n - m == n)
    assert(n - n == 0)
    assert(m + m == m)
    assert(m - m == 0)
    assert(m - n == m)
    # print 1 / m # exception: division by zero
    # print 1 / n # exception: division by zero
> Will somebody have an idea, to represent the infinite one?
>
> I have variables to initialize with is +inf (or - inf). To be sure that
> later, all will be smaller (or larger) than my variables.
> I tried to redefine the operators on an object. It goes for Inf>10 but I do
> not arrive for 10>Inf (because that takes > of Int).
>
> --------------------
>
> def __gt__(self, val):
>
>     return 1
>
> --------------------
Hm, for some reason, my last two posts on this thread didn't seem to go
through... one more try (with a couple of little enhancements):
from __future__ import generators
class Infinity(int):
    def __init__(self):
        int.__init__(self)
        self.sign = 1
    def __str__(self):
        return "%sInfinity" % {-1: "-", 1: ""}[self.sign]
    def __repr__(self):
        return "<%s>" % self
    def __cmp__(self, n):
        if isinstance(n, Infinity):
            return cmp(self.sign, n.sign)
        return self.sign
    def __neg__(self):
        retval = Infinity()
        retval.sign = self.sign * -1
        return retval
    def __mul__(self, n):
        retval = Infinity()
        retval.sign = self.sign * cmp(n, 0)
        return retval
    def __div__(self, n):
        if isinstance(n, Infinity):
            # or should this raise an exception?
            return self.sign / n.sign
        return self * n
    def __add__(self, n):
        if isinstance(n, Infinity):
            if n.sign != self.sign:
                return 0
        return self
    def __sub__(self, n):
        return self + -n
    def __iter__(self):
        return self.next()
    def next(self):
        n = 0
        while 1:
            yield n
            n += self.sign

if __name__ == '__main__':
    import sys
    m = -Infinity()
    n = Infinity()
    print m
    print n
    print m * n
    print n * n
    print m * m
    assert(m < n)
    assert(n > m)
    assert(m == m)
    assert(n == n)
    assert(m != n)
    assert(m < 0)
    assert(n > 0)
    assert(m != 0)
    assert(m < -sys.maxint)
    assert(n > sys.maxint)
    assert(m < -m)
    assert(n == -m)
    assert(n * 1 == n == -m)
    assert(n * 1000 == n)
    assert(n * -1 == m)
    assert(n * -1000 == m)
    assert(n * 0 == 0)
    assert(m * 1 == m)
    assert(m * 1000 == m)
    assert(m * -1 == n)
    assert(m * -1000 == -m == n)
    assert(m * 0 == 0)
    assert(n / 1 == n)
    assert(n / -1 == m)
    assert(n / -1000 == m)
    assert(m / 1 == m)
    assert(m / 1000 == m)
    assert(m / -1 == n)
    assert(m / -1000 == n)
    assert(m + 1 == m)
    assert(m - 1 == m)
    assert(n + 1 == n)
    assert(n - 1 == n)
    assert(n + m == 0)
    assert(n - m == n)
    assert(n - n == 0)
    assert(m + m == m)
    assert(m - m == 0)
    assert(m - n == m)
    print m / m
    print n / n
    print m / n
    print n / m
    try:
        print 1 / m
    except ZeroDivisionError:
        print "Can't divide by %s" % m
    try:
        print 1 / n
    except ZeroDivisionError:
        print "Can't divide by %s" % n
    for i in Infinity():
        if i > 10:
            break
        print i,
> On Thu, 20 Jun 2002 17:02:33 +0200
> erreur wrote:
>
>> Will somebody have an idea, to represent the infinite one? I have
>> variables to initialize with is +inf (or - inf). To be sure that
>> later, all will be smaller (or larger) than my variables. I tried to
>> redefine the operators on an object. It goes for Inf>10 but I do not
>> arrive for 10>Inf (because that takes > of Int). --------------------
>> def __gt__(self, val):
>>
>>     return 1
>> --------------------
> Hm, for some reason, my last two posts on this thread didn't seem to go
> through... one more try (with a couple of little enhancements):
Odd, the newsgroup -> mailing list transfer must be really slow (or down)
today. I pulled up Pan and lo, my posts are there; they just haven't made
it to the mailing list yet (after almost 2 hours).
Or maybe I accidentally killfiled myself ;)
Chris Barker wrote:
> whether
> this works is a function of the machine, compiler and library used to
> compile Python. I'm not sure how common it is, but if you want your code
> portable, it may not be reliabel on all machines, which is too bad,
> because IEEE 754 is pretty darn good standard.
After posting that, I went and read some more:
http://cch.loria.fr/documentation/IEEE754/wkahan/754story.html
A quote from that article:
"Programming languages new ( Java ) and old ( Fortran ), and their
compilers, still lack competent support for features of IEEE 754 so
painstakingly provided by practically all hardware nowadays. S.A.N.E.,
the Standard Apple Numerical Environment on old MC680x0-based Macs is
the main exception. Programmers seem unaware that IEEE 754 is a standard
for their programming environment, not just for hardware."
Too bad. Python can't do anything that isn't supported by the compilers
it is built with.
Full disclosure: I took a Numerical Analysis course with Kahan when I
was at Berkeley. I understood about 1/10 of what he was trying to teach
us, but I did learn this: Floating point is hard!
Does 1e200**2 also work for you out of the box? I had to get rid of the
artificial check for overflow in the source to really use inf in
computations. I don't remember whether 1e1000 works on my box without the patch.
Huaiyu
>>> 1e200**2
Traceback (most recent call last):
File "<pyshell#1>", line 1, in ?
1e200**2
OverflowError: (34, 'Numerical result out of range')
>>> 1e400
inf
When the overflow occurs, an OverflowError exception is raised, so inf can't be the result of a computation (without the patch and without catching the exception).
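A sketch of the catch-the-exception route just mentioned: fall back to a
pre-built infinity when the power overflows. (The literal 1e1000 is assumed
to evaluate to inf on the interpreter at hand, as in the sessions shown
earlier; on a build where it doesn't, this sketch won't help.)

POS_INF = 1e1000

def square_or_inf(x):
    try:
        return x ** 2
    except OverflowError:
        return POS_INF              # x**2 can only overflow toward +infinity

print square_or_inf(1e200)          # inf instead of an OverflowError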
>
> Huaiyu
> Christophe Delord wrote:
>> There is another simple solution. Just use a float that is bigger
>> than any 64-bit float. For example 10^1000. This float has a special
>> encoding meaning +oo !
>
> this is part of the IEEE 754 standard for floating point computation
> (http://cch.loria.fr/documentation/IEEE754/) It is not guaranteed to
> be implimented by Python (someone please correct me if I'm wrong).
> whether this works is a function of the machine, compiler and library
> used to compile Python. I'm not sure how common it is, but if you want
> your code portable, it may not be reliabel on all machines, which is
> too bad, because IEEE 754 is pretty darn good standard.
>
> BTW: I'd love to see literals for Inf, -Inf and NaN in Python.
do it yourself ;-)
>>> Inf = struct.unpack('f', '\x00\x00\x80\x7f')[0]
>>> Inf
1.#INF
>>> -Inf
-1.#INF
>>> NaN = struct.unpack('f', '\x7f\xff\xff\x7f')[0]
>>> NaN
1.#QNAN
I don't know whether it works everywhere, but it does on Py2.2 win32.
BTW:
>>> struct.unpack('>f', '\x00\x00\x00\x01')
(1.4012984643248171e-045,)
>>> struct.unpack('<f', '\x00\x00\x00\x01')
(2.350988701644575e-038,)
>>> struct.unpack('@f', '\x00\x00\x00\x01')
(2.350988701644575e-038,)
so the format code '<' is the same as '@' (little endian), which is correct on my
Intel P3, but:
>>> struct.unpack('@f', '\x00\x00\x80\x7f')
(1.#INF,)
>>> struct.unpack('<f', '\x00\x00\x80\x7f')
(3.4028236692093846e+038,)
huh? they're no longer the same for special values of floats??
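One way to sidestep the byte-order puzzle is to spell the IEEE
single-precision bit patterns in explicit big-endian ('>f'), so the byte
strings don't depend on the host (a sketch; whether every platform's C
library then propagates these values sanely is a separate question, as the
thread shows):

import struct
Inf = struct.unpack('>f', '\x7f\x80\x00\x00')[0]   # sign 0, exponent all ones, mantissa 0
NaN = struct.unpack('>f', '\x7f\xc0\x00\x00')[0]   # exponent all ones, mantissa non-zero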
chris
--
Chris <clie...@gmx.net>
OK, that's what I thought. I still don't understand why it should default
to disabling inf and nan in computation. I've lived without the overflow
checking for a year, without noticeable bad effect (with much benefit, of
course).
Huaiyu
It can, but it's a x-platform crapshoot as to exactly how. For example,
under current Windows CVS, this still "works":
>>> 1e300 * 1e300
1.#INF
>>>
Here's a clumsier way <wink>:
>>> 1.6e308 + 1.6e308
1.#INF
>>>
Both of those assume the user hasn't installed and enabled the fpectl
module, whose purpose in life is to cause even this to complain.
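For completeness, a sketch of what enabling fpectl looks like (this only
works on a Python configured --with-fpectl; on most builds the import
itself fails):

import fpectl
fpectl.turnon_sigfpe()      # from here on, 1e300 * 1e300 should raise
                            # FloatingPointError instead of quietly giving inf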
maybe-number-crunchers-will-agree-on-something-after-i'm-dead-
but-i'm-not-holding-my-breath-ly y'rs - tim
> It can, but it's a x-platform crapshoot as to exactly how. For example,
> under current Windows CVS, this still "works":
>
> >>> 1e300 * 1e300
> 1.#INF
> >>>
>
> Here's a clumsier way <wink>:
>
> >>> 1.6e308 + 1.6e308
> 1.#INF
> >>>
>
> Both of those assume the user hasn't installed and enabled the fpectl
> module, whose purpose in life is to cause even this to complain.
>
>>> 1e300
1.0000000000000001e+300
>>> 1e300*1e300
inf
>>> 1e300**2
Traceback (most recent call last):
File "<pyshell#12>", line 1, in ?
1e300**2
OverflowError: (34, 'Numerical result out of range')
>>>
(under Linux with yesterday CVS)
So I was wrong. But addition and multiplication seem to be implemented differently from exponentiation (1e300+1e300 is inf while 1e300**2 raises an overflow).
Is there a reason for these different behaviours?
Christophe.
The patch is in fact very simple: just remove the extra code whose sole
purpose is to throw an exception. No patch to catch the exception is
needed.
Tim Peters <tim...@comcast.net> wrote:
>
>It can, but it's a x-platform crapshoot as to exactly how. For example,
>under current Windows CVS, this still "works":
>
>>>> 1e300 * 1e300
>1.#INF
>>>>
>
>Here's a clumsier way <wink>:
>
>>>> 1.6e308 + 1.6e308
>1.#INF
>>>>
>
>Both of those assume the user hasn't installed and enabled the fpectl
>module, whose purpose in life is to cause even this to complain.
>
>maybe-number-crunchers-will-agree-on-something-after-i'm-dead-
> but-i'm-not-holding-my-breath-ly y'rs - tim
Now I'm confused. I thought from previous discussions that you intended to
disable inf and nan until full ieee floating point compliance is implemented
in Python. And I inferred that that's in order to make Windows work.
Since it could be made to work on Linux, I simply commented out the checking
in pyport.h. Voila, instant ieee compliance!
Your example appears to show that Windows can also live with nan and inf.
So my question is, are there any significant platforms out there that still
really need such checks, which make nan and inf almost useless?
If, in reality, only some very niche platforms need such checks, maybe it
is better to let those platforms without ieee compliance suffer (in terms of
having to add a small patch), rather than depriving most of the mainstream
platforms of a very useful feature?
I'm looking forward to the day when I don't need to patch Python when a new
version comes out.
Huaiyu
Huaiyu Zhu wrote:
> If, in reality, only some very niche platforms need such checkings, maybe it
> is better to let those platforms without ieee compliance suffer (in terms of
> having to add a small patch), rather than depriving most of the mainstream
> platforms of a very useful feature?
hear hear!! (or is that here here!)
Python float + and * are done via C + and * on C doubles. Python float **
is done via C libm pow(), and
x**y
strives to act the same as
import math
math.pow(x, y)
> Is there a reason for these different behaviours?
Of course. The question is whether they're rational reasons <0.9 wink>.
I intended that x**y act the same as math.pow(x, y).
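A quick way to see the claim above in action -- x**y and math.pow(x, y)
both go through the platform pow(), so they should fail (or not) together
(a sketch; the exact outcome depends on the build, as discussed):

import math
x, y = 1e200, 2
try:
    print x ** y
except OverflowError, e:
    print "x**y overflow:", e
try:
    print math.pow(x, y)
except OverflowError, e:
    print "math.pow overflow:", e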
> And I infered that that's in order to make Windows work.
It has nothing to do with Windows specifically; Python's 754 behavior "is
random" on all platforms, in platform-specific ways, because Python is
implemented in C, platform C 754 behavior "is (still) random", and the only
code ever contributed to address this sorry state of affairs is the fpectl
module (whose purpose is in a direction opposite the one you favor).
> Since it could be made to work on Linux, I simply commented out
> the checking in pyport.h. Voila, instant ieee compliance!
You happened to get one particular 754 behavior you like; full IEEE
compliance is a large and difficult task; subset compliance is worse than
none (if we're to believe Kahan's virulent rants about Java <wink>).
> Your example appears to show that Windows can also live with nan and inf.
Virtually all platforms can, given enough platform-specific support code,
almost none of which Python has.
> So my question is, are there any significant platforms out there
> that still really need such checkings which make nan and inf almost
> useless?
If Python were 100% 754-conforming, Guido and I would still favor enabling
exceptions on overflow, divide-by-zero, and invalid operation, by default.
As things are, you've got no choice at all, unless you want such exceptions
enabled and are on a platform where the fpectl module happens to work.
To recap, the sub-issues are:
1. How much 754 support is provided by the underlying platforms.
2. How does such support vary across platforms.
3. What are the Python designers' wishes.
Tim Peters <tim...@comcast.net> wrote:
>
>It has nothing to do with Windows specifically; Python's 754 behavior "is
>random" on all platforms, in platform-specific ways, because Python is
>implemented in C, platform C 754 behavior "is (still) random", and the only
>code ever contributed to address this sorry state of affairs is the fpectl
>module (whose purpose is in a direction opposite the one you favor).
I think you are talking about sub-issue 2 here. Is there some place we can
find out what these variations are, and how severe they are?
I can certainly contribute "the patch to enable inf and nan in Python" :-)
(attached), but I'm sure that's not the reason you are against it.
In any case I'd like to see what problems people report on different
platforms.
>> Since it could be made to work on Linux, I simply commented out
>> the checking in pyport.h. Voila, instant ieee compliance!
>
>You happened to get one particular 754 behavior you like; full IEEE
>compliance is a large and difficult task; subset compliance is worse than
>none (if we're to believe Kahan's virulent rants about Java <wink>).
I think you are talking about sub-issue 1 here. Do you actually believe
that subset-compliance is worse than none? I'd think that would depend on
what subset we are talking about. Certainly "can't provide full support"
does not imply "avoid as much as possible". There must be some concrete
criteria that are omitted here.
>> Your example appears to show that Windows can also live with nan and inf.
>
>Virtually all platforms can, given enough platform-specific support code,
>almost none of which Python has.
>
>> So my question is, are there any significant platforms out there
>> that still really need such checkings which make nan and inf almost
>> useless?
>
>If Python were 100% 754-conforming, Guido and I would still favor enabling
>exceptions on overflow, divide-by-zero, and invalid operation, by default.
>As things are, you've got no choice at all, unless you want such exceptions
>enabled and are on a platform where the fpectl module happens to work.
I think you are talking about sub-issue 3 here. Could you provide a
rationale in the context of numerical computation?
I can offer an example in support of the opposite side. Suppose x is a
vector containing elements that might be arbitrarily large, then
exp(-x**2)
might involve inf at certain elements, yet the result is perfectly regular,
containing only floats between 0 and 1. How can this be done by vectorized
computation if individual elements may throw exceptions?
Certainly you and Guido have the final say on sub-issue 3. On the other two
issues, however, I suspect that
1. Most platforms these days support 754 to a remarkably good degree, far
better than that is allowed in today's Python.
2. Most number crunchers would be more concerned that it works as well as
possible on the platforms available to them, rather than whether it also
works on other platforms.
3. The variations across major platforms are minor.
I'd like to hear people confirm or deny these suspicions. If answering
these questions is easier than answering the "should or shouldn't" question,
then the state of affairs could be advanced a little bit by considering
these questions first.
Huaiyu
PS. Here's the patch:
This patch removes the code that generate an OverflowError for 1e200**2.
It is generated with:
diff -rcN Python-2.2.1 Python-2.2.1.ieee > python-2.1.1-ieee-fpu.patch
It can be applied as
patch -p1 -d Python-2.2.1 < python-2.1.1-ieee-fpu.patch
Afterwards, it is still necessary to modify the Makefile to add -lieee to LIBS.
This itself can be changed by using autoconf, but I've forgotten how to do
it. The second part of the patch does that.
diff -rcN Python-2.2.1/Include/pyport.h Python-2.2.1.ieee/Include/pyport.h
*** Python-2.2.1/Include/pyport.h Mon Mar 11 02:16:23 2002
--- Python-2.2.1.ieee/Include/pyport.h Wed May 8 15:20:30 2002
***************
*** 301,311 ****
*/
#define Py_ADJUST_ERANGE1(X) \
do { \
! if (errno == 0) { \
if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL) \
errno = ERANGE; \
} \
! else if (errno == ERANGE && (X) == 0.0) \
errno = 0; \
} while(0)
--- 301,311 ----
*/
#define Py_ADJUST_ERANGE1(X) \
do { \
! /*if (errno == 0) { \
if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL) \
errno = ERANGE; \
} \
! else */ if (errno == ERANGE && (X) == 0.0) \
errno = 0; \
} while(0)
***************
*** 313,320 ****
do { \
if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL || \
(Y) == Py_HUGE_VAL || (Y) == -Py_HUGE_VAL) { \
! if (errno == 0) \
! errno = ERANGE; \
} \
else if (errno == ERANGE) \
errno = 0; \
--- 313,320 ----
do { \
if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL || \
(Y) == Py_HUGE_VAL || (Y) == -Py_HUGE_VAL) { \
! /* if (errno == 0) \
! errno = ERANGE; */ \
} \
else if (errno == ERANGE) \
errno = 0; \
diff -rcN Python-2.2.1/configure.in Python-2.2.1.ieee/configure.in
*** Python-2.2.1/configure.in Mon Mar 11 02:14:23 2002
--- Python-2.2.1.ieee/configure.in Wed May 8 15:47:07 2002
***************
*** 1777,1786 ****
# (none yet)
# Linux requires this for correct f.p. operations
! AC_CHECK_FUNC(__fpu_control,
! [],
! [AC_CHECK_LIB(ieee, __fpu_control)
! ])
# Check for --with-fpectl
AC_MSG_CHECKING(for --with-fpectl)
--- 1777,1787 ----
# (none yet)
# Linux requires this for correct f.p. operations
! #AC_CHECK_FUNC(__fpu_control,
! # [],
! # [AC_CHECK_LIB(ieee, __fpu_control)
! #])
! AC_CHECK_LIB(ieee, __fpu_control)
# Check for --with-fpectl
AC_MSG_CHECKING(for --with-fpectl)
diff -rcN Python-2.2.1/pyconfig.h.in Python-2.2.1.ieee/pyconfig.h.in
*** Python-2.2.1/pyconfig.h.in Wed Oct 24 10:10:49 2001
--- Python-2.2.1.ieee/pyconfig.h.in Wed May 8 15:53:51 2002
***************
*** 748,754 ****
#undef HAVE_LIBDLD
/* Define if you have the ieee library (-lieee). */
! #undef HAVE_LIBIEEE
#ifdef __CYGWIN__
#ifdef USE_DL_IMPORT
--- 748,754 ----
#undef HAVE_LIBDLD
/* Define if you have the ieee library (-lieee). */
! #define HAVE_LIBIEEE
#ifdef __CYGWIN__
#ifdef USE_DL_IMPORT
OK, here it is:
>>> from math import *
>>> pow (1e200, 2)
inf
>>> 1e200**2
inf
Here's the new patch (without any testing for other effects). Is there
documentation of the circumstances where the commented-out code is supposed
to be useful? Are there unit tests for it?
Huaiyu
diff -rcN Python-2.2.1/Include/pyport.h Python-2.2.1.i/Include/pyport.h
*** Python-2.2.1/Include/pyport.h Mon Mar 11 02:16:23 2002
--- Python-2.2.1.i/Include/pyport.h Wed May 8 15:20:30 2002
diff -rcN Python-2.2.1/Modules/mathmodule.c Python-2.2.1.i/Modules/mathmodule.c
*** Python-2.2.1/Modules/mathmodule.c Thu Sep 6 01:16:17 2001
--- Python-2.2.1.i/Modules/mathmodule.c Fri Jun 21 19:06:48 2002
***************
*** 35,44 ****
* overflow, so testing the result for zero suffices to
* distinguish the cases).
*/
! if (x)
! PyErr_SetString(PyExc_OverflowError,
! "math range error");
! else
result = 0;
}
else
--- 35,44 ----
* overflow, so testing the result for zero suffices to
* distinguish the cases).
*/
! /* if (x)
! PyErr_SetString(PyExc_OverflowError,
! "math range error");
! else */
result = 0;
}
else
diff -rcN Python-2.2.1/configure.in Python-2.2.1.i/configure.in
*** Python-2.2.1/configure.in Mon Mar 11 02:14:23 2002
--- Python-2.2.1.i/configure.in Wed May 8 15:47:07 2002
***************
*** 1777,1786 ****
# (none yet)
# Linux requires this for correct f.p. operations
! AC_CHECK_FUNC(__fpu_control,
! [],
! [AC_CHECK_LIB(ieee, __fpu_control)
! ])
# Check for --with-fpectl
AC_MSG_CHECKING(for --with-fpectl)
--- 1777,1787 ----
# (none yet)
# Linux requires this for correct f.p. operations
! #AC_CHECK_FUNC(__fpu_control,
! # [],
! # [AC_CHECK_LIB(ieee, __fpu_control)
! #])
! AC_CHECK_LIB(ieee, __fpu_control)
# Check for --with-fpectl
AC_MSG_CHECKING(for --with-fpectl)
diff -rcN Python-2.2.1/pyconfig.h.in Python-2.2.1.i/pyconfig.h.in
*** Python-2.2.1/pyconfig.h.in Wed Oct 24 10:10:49 2001
--- Python-2.2.1.i/pyconfig.h.in Wed May 8 15:53:51 2002
[on platform variations]
> ...
> I think you are talking about sub-issue 2 here. Is there some
> place we can find out what these variations are, and how severe they are?
Not that I know of, although several projects have tried and given up. In
addition to hardware, it varies according to all of compiler, specific
compiler release, libraries, specific library releases, and configuration
options specific to all of the preceding. If and when vendors chose to
support the optional C99 754 gimmicks, then there will be a uniform way to
proceed. gcc is making some progress in that direction.
> I can certainly contribute "the patch to enable inf and nan in Python" :-)
> (attached), but I'm sure that's not the reason you are against it.
Because it's a crock <0.1 wink>. Making x**y different than pow(x, y)
sucks, and even worse if it's in ways that can vary across platforms. It's
also the case that we believe a majority of Python users *want* overflow
exceptions, and are well served by them. Unless and until there's a
coherent and reasonably x-platform scheme allowing users to enable and
disable specific numeric exceptions at will, we'll favor griping on overflow
(and divide-by-0, and invalid operation) when it doesn't take heroic effort
to do so (we leave * + - and / alone, except for /-by-0, because it's too
hard to check them -- fpectlmodule.c makes heroic efforts in that direction,
but doesn't work on all platforms, or, over time, even on all platforms it
used to work on).
> ...
> I think you are talking about sub-issue 1 here. Do you actually believe
> that subset-compliance is worse than none?
For a sufficiently broken subset, yes. "No possibility ever to raise an
exception" is in fact technically conforming to the standard, as the
possibility for trap handlers of any kind are only encouraged ("should", not
"shall"). That's an area where I want something stronger than the std
requires.
> I'd think that would depend on what subset we are talking about.
For example, fine by me if you can't get at directed rounding modes, despite
that they're mandatory. Others would disagree with that, btw.
> Certainly "can't provide full support" does not imply "avoid as much as
> possible". There must be some concrete criteria that are omitted here.
The lack of the possibility to enable and disable exceptions on overflow,
divide-by-0, and invalid operation are hangups to me. Others may include
underflow in that mix. Others will include sticky status flags, etc. You
need to hammer this out in the NumPy community first, though.
[Tim]
>> If Python were 100% 754-conforming, Guido and I would still
>> favor enabling exceptions on overflow, divide-by-zero, and invalid
>> operation, by default. As things are, you've got no choice at all,
>> unless you want such exceptions enabled and are on a platform where
>> the fpectl module happens to work.
> I think you are talking about sub-issue 3 here. Could you provide a
> rationale in the context of numerical computation?
Sorry, it's a waste of time to do that again. There are examples on both
sides, and what they show in aggregate is that different apps need different
exceptional-case behavior, and sometimes even across phases within a single
app, not that NonStop Rulz. There are even cases that can benefit greatly
just from being able to examine the inexact flag. 754 has lots of good
stuff.
> I can offer an example in support of the opposite side.
Of course you can. If you tried 3x harder, you could give twenty <wink>.
> ...
> How can this be done by vectorized computation if individual elements
> may throw exceptions?
It's amazing how much numeric computation got done on old Cray boxes, which
didn't give you a choice about blowing up on overflow either. I don't know
that vectorized computations have any relevance to core Python, but if you
really want to know, in your exp(-x**2) example people used vector merge to
replace "too big" values of x before feeding the X vector into the rest of
the computation. In the case of Python on a 754 box, you can almost
certainly get away with writing it as exp(-x*x) instead -- and it will
likely run faster, too (x**2 calls the platform pow(); x*x is a straight
multiply).
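The vector-merge idea, sketched with today's NumPy (the 2002 thread would
have used the older Numeric module; the function names below are the modern
ones):

import numpy as np

x = np.array([0.0, 10.0, 1e200])
safe = np.clip(x, -1e150, 1e150)   # replace "too big" magnitudes before squaring
print(np.exp(-safe * safe))        # [1.0, ~3.7e-44, 0.0] -- nothing overflows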
> Certainly you and Guido have the final say on sub-issue 3. On
> the other two issues, however, I suspect that
>
> 1. Most platforms these days support 754 to a remarkably good degree, far
> better than that is allowed in today's Python.
The support code required at the C level is platform-specific, and needs a
phalanx of cooperative platform experts to contribute it. Slash and burn
will not lead to a good and maintainable result.
> 2. Most number crunchers would be more concerned that it works as well as
> possible on the platforms available to them, rather than whether it
> also works on other platforms.
As I said above, that's why the NumPy community has to agree first. The
overwhelming majority of plain-core Python users are not number crunchers,
and don't know a NaN from an umlaut. Giving them either 1j or a NaN from
sqrt(-1) is a Bad Idea. I would often like 1j or a NaN in my own apps --
but I often want the ValueError Python tries to deliver in that case too.
> 3. The variations across major platforms are minor.
I take it you haven't looked at Python's fpectlmodule.c <wink>.
"Cliff Wells" <logiplex...@earthlink.net> wrote in message
news:20020620.124436....@software1.logiplex.internal...
> Thanks.... excelent !!
Thanks for replying. Sometimes when no one responds to the bits of code I post
I worry that perhaps people don't say anything because my code was just too
lame to comment on, but then I remember that it's really because I'm a total
<expletive/> and I feel better ;)
> "Cliff Wells" <logiplex...@earthlink.net> wrote in message
> news:20020620.124436....@software1.logiplex.internal...
> > In article <mailman.102460072...@python.org>, "Cliff Wells"
> > <logiplex...@earthlink.net> wrote:
> >
> >
> >
> > > On Thu, 20 Jun 2002 17:02:33 +0200
> > > erreur wrote:
> > >
> > >> Will somebody have an idea, to represent the infinite one? I have
> > >> variables to initialize with is +inf (or - inf). To be sure that
> > >> later, all will be smaller (or larger) than my variables. I tried to
> > >> redefine the operators on an object. It goes for Inf>10 but I do not
> > >> arrive for 10>Inf (because that takes > of Int). --------------------
> > >> def __gt__(self, val):
[snip]
OK, so I infer that
- each platform may well support 754 to a good degree
- there is probably a common subset that is good enough and large enough
- but they are not invoked the same way, so a simple approach may support
only a small set of features and be incorrect on other features on various
platforms.
This is quite discouraging to hear. Maybe someone knowledgeable enough can
compile a list of major features on major platforms. Maybe for suitably
defined set of major features and major platforms, the cost is not high
enough to overwhelm the benefit. Once there is a start it might become
easier to enlarge the subset.
>> 2. Most number crunchers would be more concerned that it works as well as
>> possible on the platforms available to them, rather than whether it
>> also works on other platforms.
>
>As I said above, that's why the NumPy community has to agree first. The
>overwhelming majority of plain-core Python users are not number crunchers,
>and don't know a NaN from an umlaut. Giving them either 1j or a NaN from
>sqrt(-1) is a Bad Idea. I would often like 1j or a NaN in my own apps --
>but I often want the ValueError Python tries to deliver in that case too.
I realize that there are (at least) two types of numerical calculations,
those where the calculation determines the error bounds, and those where the
error bounds are calculated separately. The former is likely to occur in
traditional engineering with quite deterministic data, and the latter is
likely to occur in data mining and statistical applications with lots of
uncertainties. So I can see the point that a proper implementation would
have to have a choice whether overflow generates an exception.
Before that happens (and I understand that is going to be a long time), I'd
like to see the overflow exception raised only once for each vector, with proper
information in the exception denoting which element encountered it first.
This is not likely to upset those who want to see the exceptions, while
making it much easier to ignore the exceptions if one wants. I'm not sure
whether this is a core Python issue or NumPy issue, though.
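A hypothetical shape for that once-per-vector idea (nothing here is an
existing NumPy or Python API; it is just a sketch of the contract being
asked for):

class VectorOverflow(OverflowError):
    def __init__(self, index, result):
        OverflowError.__init__(self, "first overflow at element %d" % index)
        self.index = index      # which element overflowed first
        self.result = result    # the full result, with inf where elements overflowed

def squares(values):
    result, first_bad = [], None
    for i in range(len(values)):
        try:
            result.append(values[i] ** 2)
        except OverflowError:
            result.append(1e1000)          # platform inf, if the build allows it
            if first_bad is None:
                first_bad = i
    if first_bad is not None:
        raise VectorOverflow(first_bad, result)   # one exception for the whole vector
    return result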
Huaiyu
I think it's a NumPy issue. However, the hope for the future is that
NumPy (actually NumArray) will be part of the standard library someday
(it really should be; a whole lot of people who don't think they're doing
"serious" number crunching could benefit from its use). If it is,
and even if it is not, the better the compatibility between NumPy and
core Python, the better. Having inf, -inf and NaN is critical for array
operations, but it can be very useful for all numerical work; I really
wish it reliably existed in Python.
I have to say it is very annoying that the vast majority of hardware has
been IEEE 754 compliant for years, and yet there is very little support
in compilers and libraries... aaarrggg!
By the way, The MathWorks seems to be able to provide a reasonable
subset for MATLAB on many platforms... are they just dealing with a
maintenance nightmare?
Nice implementation. May I suggest two small changes:
    def __init__(self, sign=1):
        int.__init__(self)
        self.sign = sign

    def __repr__(self):
        return "%s(%s)" % (self.__class__.__name__, self.sign)
It's partially tested (my version of Python doesn't have generators...
I really should get with the times); the point is that, assuming
Infinity is available in the calling namespace, this will work:
Python 1.5.2 (#0, Apr 13 1999, 10:51:12) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> from some_module import Infinity
>>> i = Infinity()
>>> repr(i)
'Infinity(1)'
>>> j = eval(repr(i))
>>> j == i
1
>>> k = -eval(repr(i))
>>> k + i
0
>>> ^Z
Quoting from the Python reference manual (section 3.3.1):
> If at all possible, this should look like a valid Python expression
> that could be used to recreate an object with the same value (given
> an appropriate environment).
I'd be interested to hear if these minor modifications work with the
class proper; I had to remove the generator/yield lines, and the
subclassing from int, to get it to run under 1.5.2 :-)
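For reference, a classic-class variant along the same lines, which should
run under 1.5.2 (a sketch; it keeps the thread's names but drops the int
subclassing and the generator-based __iter__):

class Infinity:
    def __init__(self, sign=1):
        self.sign = sign
    def __repr__(self):
        return "%s(%s)" % (self.__class__.__name__, self.sign)
    def __cmp__(self, n):
        # classic-class __cmp__ is consulted for both i > 10 and 10 > i
        if isinstance(n, Infinity):
            return cmp(self.sign, n.sign)
        return self.sign
    def __neg__(self):
        return Infinity(-self.sign)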
Anyway, as I said: I like the class.
just-wanted-to-join-in-ly y'rs,
Steve
--
Steve Tregidgo
Thanks!
> Nice implementation. May I suggest two small changes:
>
>     def __init__(self, sign=1):
>         int.__init__(self)
>         self.sign = sign
>
>     def __repr__(self):
>         return "%s(%s)" % (self.__class__.__name__, self.sign)
>
> It's partially tested (my version of Python doesn't have generators...
> I really should get with the times); the point is that, assuming
> Infinity is available in the calling namespace, this will work:
>
> Python 1.5.2 (#0, Apr 13 1999, 10:51:12) [MSC 32 bit (Intel)] on win32
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> from some_module import Infinity
> >>> i = Infinity()
> >>> repr(i)
> 'Infinity(1)'
> >>> j = eval(repr(i))
> >>> j == i
> 1
> >>> k = -eval(repr(i))
> >>> k + i
> 0
Looks good to me. I was more concerned with the other stuff and kind of
stuck the __repr__ implementation in as a placeholder.
However, I think I prefer this:
class Infinity(int):
    def __init__(self):
        int.__init__(self)
        self.sign = 1
    def __repr__(self):
        return "%s()" % self
Basically the same effect, but it prints as "-Infinity()" rather than
"Infinity(-1)", not to mention being a bit shorter.
>>> from infinity import Infinity
>>> m = -Infinity()
>>> m
-Infinity()
>>> `m`
'-Infinity()'
>>> str(m)
'-Infinity'
>>> j = eval(repr(m))
>>> j
-Infinity()
>>> j == m
1
>>> k = -eval(repr(j))
>>> k
Infinity()
>>>
> Quoting from the Python reference manual (section 3.3.1):
> > If at all possible, this should look like a valid Python expression
> > that could be used to recreate an object with the same value (given
> > an appropriate environment).
>
> I'd be interested to hear if these minor modifications work with the
> class proper; I had to remove the generator/yield lines, and the
> subclassing from int, to get it to run under 1.5.2 :-)
I can't see how the __repr__ change would have any effect (other than
making it better). Not subclassing from int will be an issue:
>>> from infinity import Infinity
>>> i = Infinity()
>>> i / 1
Infinity()
>>> 1 / i
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for /: 'int' and 'instance'
>>>
But since that was supposed to raise an exception anyway, it may not be
much of an issue (it just raises a less informative exception). The
exception raised by Infinity(int) is:
>>> from infinity import Infinity
>>> i = Infinity()
>>> i / 1
Infinity()
>>> 1 / i
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ZeroDivisionError: integer division or modulo by zero
>>>
Which isn't entirely correct either but a bit more informative.
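If you wanted 1 / Infinity() to come back as 0 instead of raising, one
sketch is to add a reflected-division method to the class (behaviour
assumed for the 2.2-era interpreters discussed here: because Infinity
subclasses int and overrides the reflected method, Python should try it
before int's own division; if a given version doesn't do that, you still
just get the ZeroDivisionError shown above):

    def __rdiv__(self, n):
        # any finite n divided by infinity
        return 0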
> Anyway, as I said: I like the class.
It's always fun (and surprisingly easy) to write little things like this
in Python.
> just-wanted-to-join-in-ly y'rs,
> Steve
Regards,
Cliff