Are all sympy Floats rational?


Kalevi Suominen

Mar 20, 2014, 8:19:24 AM
to sy...@googlegroups.com
Hi,

The attribute 'is_rational' seems to give surprising results:

>>> from sympy import nfloat, sqrt
>>> x = nfloat(sqrt(2))
>>> x
1.41421356237310
>>> x.is_rational
True

This leads to problems if used as a basis for decisions.
There might be an explanation I don't see.
(Other than 'all truncated decimal (or binary) numbers are rational'.)

Kalevi

Sergey Kirpichev

Mar 20, 2014, 9:00:47 AM
to sy...@googlegroups.com, Julien Rioux

On Thursday, March 20, 2014 4:19:24 PM UTC+4, Kalevi Suominen wrote:
This leads to problems if used as a basis for decisions.

Sure. Julien, can you comment on this? It was introduced by commit 9c359bc (and it is not tested!).

In my view, it's wrong.  Floats are inexact numbers, we shouldn't mix them with rational arithmetic.
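The difference shows up already with plain Python floats (a sketch using the stdlib fractions module; SymPy's Float has the same binary nature): exact rational arithmetic is closed and predictable, while float arithmetic rounds at every step.

```python
from fractions import Fraction

# Exact rational arithmetic: 1/10 + 2/10 is exactly 3/10.
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)

# Binary float arithmetic rounds each operation, so the "same" sum differs.
assert 0.1 + 0.2 != 0.3
print(repr(0.1 + 0.2))  # 0.30000000000000004
```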

Julien Rioux

Mar 20, 2014, 9:07:18 AM
to sy...@googlegroups.com, Julien Rioux

Indeed, it looks like a bug.
Cheers,
Julien

Aaron Meurer

Mar 20, 2014, 11:29:18 AM
to sy...@googlegroups.com, Julien Rioux
That commit just replaced is_irrational = False with is_rational =
True. So I think you should dig further in the history.

Aaron Meurer
> --
> You received this message because you are subscribed to the Google Groups
> "sympy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sympy+un...@googlegroups.com.
> To post to this group, send email to sy...@googlegroups.com.
> Visit this group at http://groups.google.com/group/sympy.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/sympy/4710c28b-4aea-42d3-bc4a-33d6366b37fd%40googlegroups.com.
>
> For more options, visit https://groups.google.com/d/optout.

Sergey B Kirpichev

Mar 20, 2014, 11:47:26 AM
to sy...@googlegroups.com
On Thu, Mar 20, 2014 at 10:29:18AM -0500, Aaron Meurer wrote:
> That commit just replaced is_irrational = False with is_rational =
> True. So I think you should dig further in the history.

Hmm, yes. This commit:
https://github.com/skirpichev/old-sympy/commit/d474a83
?

Aaron Meurer

Mar 20, 2014, 5:15:25 PM
to sy...@googlegroups.com
So it's basically always been that way. The motivation is probably
that any float representation must be finite, and hence rational.

Aaron Meurer

Sergey B Kirpichev

Mar 20, 2014, 5:34:19 PM
to sy...@googlegroups.com
On Thu, Mar 20, 2014 at 04:15:25PM -0500, Aaron Meurer wrote:
> So it's basically always been that way. The motivation is probably
> that any float representation must be finite, and hence rational.

But arithmetic operations on rational numbers obey very
different algebraic properties (e.g. the rationals form a field).

Aaron Meurer

Mar 20, 2014, 5:35:43 PM
to sy...@googlegroups.com
Can you think of a fact in the assumptions system (implemented or not)
that would break if floats are rational?

Aaron Meurer

Christophe Bal

Mar 20, 2014, 5:55:08 PM
to sympy-list
Hello.

There is a difference between a decimal, i.e. a rational that can be written as N/10^P, and a float. A float is an approximation, so you can't really treat it as a rational. No?

Christophe BAL 


Sergey B Kirpichev

Mar 20, 2014, 5:57:24 PM
to sy...@googlegroups.com
On Thu, Mar 20, 2014 at 04:35:43PM -0500, Aaron Meurer wrote:
> Can you think of a fact in the assumptions system (implemented or not)
> that would break if floats are rational?

Any fact that uses the algebraic properties of the field of rational numbers.

An example, associativity:
"((x + y) + z) - (x + (y + z)) is zero"
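Concretely, with builtin Python floats (SymPy Floats behave analogously at their working precision):

```python
x, y, z = 0.1, 0.2, 0.3

# For exact rationals this difference is identically zero;
# for binary floats each addition rounds, so it need not be.
diff = ((x + y) + z) - (x + (y + z))
assert diff != 0.0
```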

Joachim Durchholz

Mar 21, 2014, 5:13:34 AM
to sy...@googlegroups.com
Am 20.03.2014 22:55, schrieb Christophe Bal:
> Hello.
>
> There is a difference between a decimal, ie a rational which can be written
> N/10^P and a float. A float number is an approximation so you can't really
> see it as rationals. No ?

One can see it as a rational. Then it's not a field, because not all
combinations of values and operations will return another float.
Or one can see it as an IEEE model number (basically, an interval of
rationals, plus some extra values). That's a field, but the semantics
are complicated and unreliable.

See
www.ucs.cam.ac.uk/docs/course-notes/unix-courses/NumericalPython/files/paper_1.pdf
for a rough overview of the messy details.

Both approaches have their uses, but I'm unsure whether any of them adds
value to SymPy.

Sergey B Kirpichev

Mar 21, 2014, 5:27:02 AM
to sy...@googlegroups.com
On Fri, Mar 21, 2014 at 10:13:34AM +0100, Joachim Durchholz wrote:
> One can see it as a rational. Then it's not a field because not all
> combinations of values and operations will return another float.
> Or one can see it as a IEEE model number (basically, an interval of
> rationals, plus some extra values). That's a field, but the
> semantics are complicated and unreliable.

A "field" in which "+" is not associative is not a field...

Aaron Meurer

Mar 21, 2014, 12:51:22 PM
to sy...@googlegroups.com
Well, that particular example falls outside the scope of the assumptions
system (it happens directly in the core).

I was looking for something more like sin(non-zero rational) ==
irrational, but sin(float) evaluates to float.

Aaron Meurer

Kalevi Suominen

Mar 21, 2014, 3:55:29 PM
to sy...@googlegroups.com
It looks like nobody really planned the semantics of Float.is_rational. I believe it does not make sense
and should be removed from Float's dictionary. If this breaks something it should be fixed anyway.

Kalevi Suominen

Aaron Meurer

Mar 21, 2014, 4:46:19 PM
to sy...@googlegroups.com
Actually, that's a good point. Try removing it and see what tests fail.

Aaron Meurer

Richard Fateman

Mar 21, 2014, 5:57:50 PM
to sy...@googlegroups.com, skirp...@gmail.com
Mathematically, all floats are rational numbers. They can be written as
<integer1> * 2^<integer2>.

They are in no way approximate except in the fuzzy minds of people who
took poorly constructed courses in "computing".  Some of those people
wrote crappy textbooks, too.




On Thursday, March 20, 2014 2:57:24 PM UTC-7, Sergey Kirpichev wrote:
On Thu, Mar 20, 2014 at 04:35:43PM -0500, Aaron Meurer wrote:
> Can you think of a fact in the assumptions system (implemented or not)
> that would break if floats are rational?

Any, that uses algebraic properties of the rational numbers field.

The properties that you expect fail because the operations + and * are
incorrect. They could be made correct in the following way: any time two
numbers are added or multiplied, see if the inexact flag in the IEEE
floating-point unit is raised.

If the result is inexact, re-do the computation using extended precision.
It no longer fits in the floating-point format, but it is (a) rational and
(b) obeys the expected field axioms.

If you ask how sin(float), which is a float, can be rational, you are confusing the
mathematical function sin(x) with a numerical (in fact, rational) computation, which
computes a rational number that is close to, but (except for sin(0)) not equal to,
the transcendental number sin(x).

So there is no issue that the floats are not rational.  Each number is rational.
Your arithmetic is defective unless you work at it.  Alternatively you could say
that the subset of the rationals that can be represented in floating format does
not constitute a field.  Nor is it closed under sin(), log(), exp(). 
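For example, in Python (a sketch; the stdlib math.sin stands in for the numerical sin):

```python
import math
from fractions import Fraction

s = math.sin(0.5)   # a float: a rational approximation to sin(1/2)
r = Fraction(s)     # the exact rational value that this float denotes
assert float(r) == s                             # the conversion is lossless
assert r.denominator & (r.denominator - 1) == 0  # denominator is a power of 2
```

So sin(0.5) the float is a rational number; it just isn't sin(1/2) the transcendental number.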

It is, in my opinion, a really bad mistake for sympy to accept the viewpoint that
the mathematically accurate answer is subsidiary to the misuses of ill-informed
(mathematically speaking) coders, or for that matter, programming language
designers.

Richard Fateman

Mar 21, 2014, 6:08:43 PM
to sy...@googlegroups.com


On Friday, March 21, 2014 9:51:22 AM UTC-7, Aaron Meurer wrote:
Well, that particular example falls out of the scope of the assumption
system (it happens directly in the core).

I was looking for something more like sin(non-zero rational) ==
irrational, but sin(float) evaluates to float.

If you want to prove things about the mathematical function sin, don't expect to
do it with the numerical function.  In FORTRAN they used to be called
sinf(<single_float>)   and sind(<double_float>)  but programming languages
today are so clever that they put them both, and some other things all under
the name sin().  This doesn't mean they are REALLY the same, and that's
one of the costs of having a programming language and a mindset that
is billed as  "easy to learn".
RJF

Sergey B Kirpichev

Mar 22, 2014, 6:07:46 AM
to sy...@googlegroups.com
On Fri, Mar 21, 2014 at 02:57:50PM -0700, Richard Fateman wrote:
> The properties that you expect fail because the operations + and * are
> incorrect. [...] So there is no issue that the floats are not
> rational.  Each number is rational.
> Your arithmetic is defective unless you work at it.

In [8]: (0.001 + 1.1) - 1.0
Out[8]: 0.10099999999999998

In [9]: 0.001 + (1.1 - 1.0)
Out[9]: 0.10100000000000009

[1]> (+ (+ 0.001 (+ 1.1 -1.0)))
0.101000026
[2]> (+ (+ 0.001 1.1) -1.0)
0.10100007

(That's for builtin floats in Python and CL.)

Christophe Bal

Mar 22, 2014, 6:25:20 AM
to sympy-list
>>> Mathematically, all floats are rational numbers.
>>> They can be written as <integer1>  * 2 ^<integer2>.

That is not true. Why? Because of the arithmetic operations. See the preceding mail for an example.

Mathematically, there is no set of floats. The floats are a lot more complex (and that is not a bad play on words) than the decimals.



Kalevi Suominen

Mar 22, 2014, 8:47:22 AM
to sy...@googlegroups.com


On Friday, March 21, 2014 10:46:19 PM UTC+2, Aaron Meurer wrote:
Actually, that's a good point. Try removing it and see what tests fail.

Aaron Meurer

On Fri, Mar 21, 2014 at 2:55 PM, Kalevi Suominen <jks...@gmail.com> wrote:
> It looks like nobody really planned the semantics of Float.is_rational. I
> believe it does not make sense
> and should be removed from Float's dictionary. If this breaks something it
> should be fixed anyway.

Running the tests with is_rational removed gave the following results.
 
============================= test process starts ==============================
executable:         /usr/bin/python  (2.7.3-final-0) [CPython]
architecture:       64-bit
cache:              yes
ground types:       python
random seed:        576970
hash randomization: on (PYTHONHASHSEED=1591061748)

.....
________________________________ xpassed tests _________________________________
sympy/core/tests/test_args.py: test_as_coeff_add
sympy/core/tests/test_args.py: test_sympy__matrices__expressions__matexpr__ZeroMatrix
sympy/core/tests/test_wester.py: test_V12

 tests finished: 5444 passed, 152 skipped, 338 expected to fail,
3 expected to fail but passed, in 3470.42 seconds

As far as I can see, none of these exceptions is connected with Float
(see e.g. commit 0663271f).
In fact, I was rather expecting that no one would produce useful code
with serious tests of rationality of floats.

Kalevi Suominen

Joachim Durchholz

Mar 22, 2014, 9:42:54 AM
to sy...@googlegroups.com
Am 21.03.2014 22:57, schrieb Richard Fateman:
> Your arithmetic is defective unless you work at it.

The problem is that it's not possible to implement reliable arithmetic
in Python.
You can't rely on the inexact flag: not all processors implement it
cleanly, I hear some C libraries modify the FPU settings, and it's not
even threadsafe (well, in Python it is, but not all C libraries that
Python uses are).

I think these issues can be tackled using a validation suite so people
can check that their hardware/software configuration works, and a lot of
effort.
You'd need to:
- Research all the corner cases known to exist.
- Define what fp operations SymPy uses, and how they should work for
every corner case.
- Write unit tests that are specially marked as "fp validation tests".
Anybody who does accuracy-sensitive work with SymPy would be asked to
run the validation tests on their hardware/software configuration.

Until (and if!) that's done, we should write a warning that IEEE
arithmetic isn't accurate, that some parts of SymPy assume accuracy
nevertheless, and that Float shouldn't be used where that matters.

Aaron Meurer

Mar 22, 2014, 12:24:12 PM
to sy...@googlegroups.com
Oh, that's good. This was setting is_rational and is_irrational to
None? I think that's what they should be.

In that case, please submit a pull request with this change (and a
test for it).

Aaron Meurer

Kalevi Suominen

Mar 22, 2014, 2:25:58 PM
to sy...@googlegroups.com


On Saturday, March 22, 2014 6:24:12 PM UTC+2, Aaron Meurer wrote:
Oh, that's good. This was setting is_rational and is_irrational to
None? I think that's what they should be.

I was testing with is_rational commented out (is_irrational does not exist in Float).
A missing attribute would reveal every attempted use. I think this is how it should
be in production code, to warn off any accidental use.

Kalevi Suominen

Aaron Meurer

Mar 22, 2014, 9:00:13 PM
to sy...@googlegroups.com
I'm pretty sure that removing it is the same as setting it to None,
because it falls back to the superclass which sets unknown assumptions
to None. I would set them both explicitly to None to be clear that
that really is what we want, and not just that it isn't implemented.

Aaron Meurer

Richard Fateman

Mar 23, 2014, 5:02:00 PM
to sy...@googlegroups.com
You can prove that any  valid IEEE float is a rational   (invalid: infinities, NaNs) by looking at the definition.

A fraction (or "mantissa") times an integer power of 2 is always a rational number. Using "*" as a shorthand
for "times" does not mean that the intention was to use the defective multiplication in programming languages
such as Python.
Common Lisp has the function rational.

(rational 0.1d0)  returns  3602879701896397/36028797018963968

which is  an integer times 2^(-55).
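Python offers the same conversion via float.as_integer_ratio (a sketch with the stdlib), and it agrees with the Common Lisp result:

```python
from fractions import Fraction

# The exact rational value of the binary double closest to 0.1.
n, d = (0.1).as_integer_ratio()
assert (n, d) == (3602879701896397, 36028797018963968)
assert d == 2 ** 55
assert Fraction(0.1) == Fraction(n, d)
```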

I agree that most hardware and software makes the "inexact" flag difficult to access.
Maybe if someone with a popular language made a sufficient fuss, that would change.

RJF

Christophe Bal

Mar 23, 2014, 5:12:38 PM
to sympy-list
>> You can prove that any  valid IEEE float is a rational 

No! Why? Because of the arithmetic rules. With floats you may have approximations to make; with decimals you can do exact calculations.

Christophe BAL.



Joachim Durchholz

Mar 23, 2014, 6:23:46 PM
to sy...@googlegroups.com
Am 23.03.2014 22:12, schrieb Christophe Bal:
>>> You can prove that any valid IEEE float is a rational
>
> No ! Why ? Because of the arithmetic rules. You can have approximation to
> do. With decimals, you have to do exact calculations.

You two are both right; you just mean different things when you say
"Float" and "Rational".

Richard refers to the domain, i.e. the set of valid values.
You to the abstract data structure, i.e. the domain plus operations.

Aaron Meurer

Mar 23, 2014, 6:35:21 PM
to sy...@googlegroups.com
And when SymPy says "rational" it means the field. The assumptions
system wouldn't be very useful if rational only meant integer/integer. We
want to be able to say things like rational + rational = rational and
rational1*rational2 = rational2*rational1, and so on.

Aaron Meurer


Richard Fateman

Mar 23, 2014, 8:58:02 PM
to sy...@googlegroups.com


On Sunday, March 23, 2014 3:35:21 PM UTC-7, Aaron Meurer wrote:
On Sun, Mar 23, 2014 at 5:23 PM, Joachim Durchholz <j...@durchholz.org> wrote:
> Am 23.03.2014 22:12, schrieb Christophe Bal:
>
>>>> You can prove that any  valid IEEE float is a rational
>>
>>
>> No ! Why ? Because of the arithmetic rules. You can have approximation to
>> do. With decimals, you have to do exact calculations.
>
>
> You two are both right; you just mean different things when you say "Float"
> and "Rational".
>
> Richard refers to the domain, i.e. the list of valid values.
> You to the abstract data structure, i.e. the domain plus operations.

And when SymPy says "rational" it means the field. The assumptions
system wouldn't very useful if rational only meant integer/integer.

Now we must address what is meant by integer. In Common Lisp, integer means an
arbitrary precision integer. Consequently, every rational number IS an integer/integer.

So is every double-precision IEEE float  (except inf and NaN) convertible to the
exact rational number which it represents in the form  integer/integer.

It seems to me the meanings of words in sympy should correspond to their
meanings in mathematics not some hack mockery of the word that appears in
one or even several programming languages.  The data structure for IEEE double-float
is used to represent a subset of the rational numbers.   If you want to make a test to
see if you are looking at a structure which is known to be an IEEE double-float, sure
you can do it sometimes.   (Not by looking at the 64 bits themselves, but presumably in
some compile-time or run-time symbol table kind of thing associated with those bits.)
You could also have a data structure for "rational as the ratio of two arbitrary precision
integers." That's been found fairly useful for symbolic programming.



We
want to be able to say things like rational + rational = rational and
rational1*rational2 = rational2*rational1 and so on.

I'm not sure which you are saying, actually.

1.  we have a program to add any two rational numbers.  The result is a rational number.
2. we  can deduce from the fact that a,b are rational numbers that a+b is a rational number.

By the way, you may want to consider whether you are willing to compute with 1/0 and -1/0
and perhaps 0/0.   These are not rational numbers, but you might find them useful computational
objects.   (compare with IEEE +-inf and NaN).

RJF

Sergey Kirpichev

Mar 24, 2014, 4:31:23 AM
to sy...@googlegroups.com

On Monday, March 24, 2014 4:58:02 AM UTC+4, Richard Fateman wrote:
So is every double-precision IEEE float  (except inf and NaN) convertible to the
exact rational number which it represents in the form  integer/integer.

It was noted in this thread several times: we are not interested
in this truism.  The problem is not with the data structure, but with operations.
Field properties don't hold for floats, as was shown to you several times in this
thread (the CLisp example included).
 
2. we  can deduce from the fact that a,b are rational numbers that a+b is a rational number.

But now, suppose a, b and c are rational numbers. Then we can deduce:
((a + b) + c) - (a + (b + c)) is zero. But this conclusion will be wrong if
we count floats as rationals.

Joachim Durchholz

Mar 24, 2014, 6:42:01 AM
to sy...@googlegroups.com
Am 24.03.2014 01:58, schrieb Richard Fateman:
>
> Now we must address what is meant by integer. In common lisp, integer
> means arbitrary precision integer.
> Consequently, every rational number IS an integer/integer.
> [...]
> It seems to me the meanings of words in sympy should correspond to
> their meanings in mathematics not some hack mockery of the word that
> appears in one or even several programming languages.

Your position seems inconsistent - you derive your idea about the nature
of rational numbers from Common Lisp, yet you denounce exactly that kind
of reasoning as "some hack mockery of the word that appears in [...]
programming languages".

> It seems to me the meanings of words in sympy should correspond to their
> meanings in mathematics not some hack mockery of the word that appears in
> one or even several programming languages. The data structure for IEEE
> double-float is used to represent a subset of the rational numbers.

To paraphrase, IEEE numbers are some hack mockery of the word that
appear in hardware implementations.

> You could also have a data structure for "rational as the ratio of two
> arbitrary precision integers,."
> That's been found fairly useful for symbolic programming.

Isn't that what SymPy does?

----

Just to demonstrate how far away "meaning in mathematics" is from any
computerized representation, including that of Common Lisp, here's a
short (and probably slightly wrong) description of what rational numbers
"are":

Going down to the very foundations, rational numbers "are" the minimum
model that satisfies the field axioms (see model theory for the
definition of "minimum" and why "the" minimum exists for these axioms).

Most importantly, rationals "are" not a pair of integers, because for
integer pairs, (1,2) != (2,4), but for rationals, 1/2 = 2/4.
Integer pairs are merely a possible *representation* of rationals.
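Any normalizing rational type makes this concrete, e.g. Python's stdlib Fraction (SymPy's Rational normalizes the same way):

```python
from fractions import Fraction

# Distinct integer pairs, but the same rational number:
assert (1, 2) != (2, 4)
assert Fraction(1, 2) == Fraction(2, 4)

# The pair is only a representation; it gets reduced to a canonical one.
assert (Fraction(2, 4).numerator, Fraction(2, 4).denominator) == (1, 2)
```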

And now, for fun, the real numbers:

These "are" the minimum model that satisfies the total and the dense
ordering axioms.
No mention of field properties. The real numbers just happen to have the
rational numbers embedded (where "embedded" is a term with a strict
definition in model theory).

Aaron Meurer

Mar 24, 2014, 10:32:48 AM
to sy...@googlegroups.com
> On Mar 24, 2014, at 5:42 AM, Joachim Durchholz <j...@durchholz.org> wrote:
>
> [...]
>
> Just to demonstrate how far away "meaning in mathematics" is from any computerized representation, including that of Common Lisp, here's a short (and probably slightly wrong) description of what rational numbers "are":
>
> Going down to the very foundations, rational numbers "are" the minimum model that satisfies the field axioms (see model theory for the definition of "minimum" and why "the" minimum exists for these axioms).

You also need to add a characteristic 0 condition. Otherwise, the
smallest field is the trivial field.

Aaron Meurer


Richard Fateman

Mar 29, 2014, 12:16:17 AM
to sy...@googlegroups.com


On Monday, March 24, 2014 3:42:01 AM UTC-7, Joachim Durchholz wrote:
Am 24.03.2014 01:58, schrieb Richard Fateman:
>
> Now we must address what is meant by integer.   In common lisp, integer
> means arbitrary precision integer.
 > Consequently, every rational number IS an integer/integer.
> [...]
 > It seems to me the meanings of words in sympy should correspond to
 > their meanings in mathematics not some hack mockery of the word that
 > appears in one or even several programming languages.

Your position seems inconsistent - you derive your idea about the nature
of rational numbers from Common Lisp, yet you denounce exactly that kind
of reasoning as "some hack mockery of the word that appears in [...]
programming languages".

No, I derive my idea about the nature of rational numbers from standard mathematical
definitions. It turns out that Common Lisp implements this. Though I suppose
there is a finiteness argument that there are integers whose binary representation
is so long that they cannot be stored in your computer's memory, and therefore
cause Common Lisp implementations some difficulty. There are also integers whose
binary representation is so long that, even using all the electrons in the known universe,
one cannot write the number out; so it is perhaps just a matter of how you feel about
such inherent limits of the finiteness of computational devices. Or the universe.
 

> It seems to me the meanings of words in sympy should correspond to their
> meanings in mathematics not some hack mockery of the word that appears in
> one or even several programming languages.  The data structure for IEEE
> double-float is used to represent a subset of the rational numbers.

To paraphrase, IEEE numbers are some hack mockery of the word that
appear in hardware implementations.

Um, if the phrase that appears in hardware implementations is IEEE 754 binary arithmetic,
then it should not be a hack mockery. Though of course it might be, if it doesn't actually
implement it.


> You could also have a data structure for "rational as the ratio of two
> arbitrary precision integers,."
 > That's been found fairly useful for symbolic programming.

Isn't that what SymPy does?

That's my belief. So what you could do is take any "float" and convert it to an exactly equal
numeric quantity that is a sympy rational. And you could take that number and convert it back
to a float, without loss.
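A sketch of that roundtrip with the stdlib Fraction (in sympy the analogous conversion would be Rational applied to a Float, which I believe is likewise exact):

```python
from fractions import Fraction

f = 1.4142135623730951        # the float approximating sqrt(2)
r = Fraction(f)               # the exactly equal rational
assert float(r) == f          # back to a float, without loss
assert r * r != 2             # but r is rational, hence not sqrt(2)
```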
 

----

Just to demonstrate how far away "meaning in mathematics" is from any
computerized representation, including that of Common List, here's a
short (and probably slightly wrong) description of what rational numbers
"are":

Going down to the very foundations, rational numbers "are" the minimum
model that satisfies the field axioms (see model theory for the
definition of "minimum" and why "the" minimum exists for these axioms).

You are just making this up.  For example, rational numbers arguably have always
existed.  They were discovered by humans at some time, but certainly before
model theory was discovered by humans.
 

Most importantly, rationals "are" not a pair of integers, because for
integer pairs, (1,2) != (2,4), but for rationals, 1/2 = 2/4.
Integer pairs are merely a possible *representation* of rationals.

I'll go along with that, and raise you.   (1,0) is an integer pair but usually
not considered a rational.   If you want to be pedantic.

And now, for fun, the real numbers:

There are models that have been developed by Blum, Shub and Smale for
computing with (computable) reals. Probably not of much value to sympy,
but you might find that more fun.

These "are" the minimum model that satisfies the total and the dense
ordering axioms.
Why would sympy programmers care, if most reals are not computable anyway?


No mention of field properties. The real numbers just happen to have the
rational numbers embedded (where "embedded" is a term with a strict
definition in model theory).

I think that real numbers were also discovered before model theory.
 

Richard Fateman

Mar 29, 2014, 12:34:36 AM
to sy...@googlegroups.com

Your problem is that the notions of +  and - in your computer programming language
are apparently inadequate.  It is certainly possible to do this correctly by converting a,b,c
into ratios of integers,  which provides sympy a hint about how it should be doing the arithmetic,
 and getting 0.

The fact that  +  gets different answers for adding numbers a,b    and for adding a', b'   where
a=a' and b=b'    can't be a good thing in a system that is supposed to do mathematics.

Joachim Durchholz

unread,
Mar 29, 2014, 4:27:14 AM3/29/14
to sy...@googlegroups.com
Am 29.03.2014 05:16, schrieb Richard Fateman:
>
>
> On Monday, March 24, 2014 3:42:01 AM UTC-7, Joachim Durchholz wrote:
>>
>> Am 24.03.2014 01:58, schrieb Richard Fateman:
>>>
>>> Now we must address what is meant by integer. In common lisp, integer
>>> means arbitrary precision integer.
>> > Consequently, every rational number IS an integer/integer.
>>> [...]
>> > It seems to me the meanings of words in sympy should correspond to
>> > their meanings in mathematics not some hack mockery of the word that
>> > appears in one or even several programming languages.
>>
>> Your position seems inconsistent - you derive your idea about the nature
>> of rational numbers from Common Lisp, yet you denounce exactly that kind
>> of reasoning as "some hack mockery of the word that appears in [...]
>> programming languages".
>>
>
> No, I derive my idea about the nature of rational numbers from standard
> mathematical
> definitions. It turns out that Common Lisp implements this.

You derive your idea that a number is just its representation from CL.
Which is not what math says. In math, a number is a model (not even a
representation) plus a set of axioms; by that definition, floats in any
programming language (including CL) aren't even numbers because they
don't satisfy the number axioms. (The purpose of floats is outside the
domain of symbolic math. That's why we're discussing it here.)

> Though I suppose
> there is a finiteness argument that there are integers whose binary
> representation
> is so long that they cannot be stored in your computer's memory.

There is, but it's a special case of what we need to do anyway - that
SymPy will either deliver a correct result or report an error.

> There are also
> integers whose
> binary representation is so long that even using all the electrons in the
> known universe,
> one cannot write the number out, so it is perhaps just a matter of how you
> feel about
> such inherent limits of finiteness of computational devices.

That's a more fundamental problem with floats actually.
A float runs out of precision long before the computer runs out of
memory, so SymPy could be forced to report failure long before memory is
exhausted.
Worse, float implementations do not reliably report precision loss. Not
for all C implementations (which Python takes its float implementation
from), and not even for all hardware chips that do the actual bit munging.

I have started to wonder whether we shouldn't throw Float support out.
We're not doing numerics anyway. Not the kind that uses floats.

Christophe Bal

unread,
Mar 29, 2014, 4:47:11 AM3/29/14
to sympy-list
>>> [...] what you could do is take any "float"  and convert it
>>> to an exactly equal numeric quantity that is a sympy rational.
>>> And you could take that number and convert it to a float.
>>> without loss.

There is a little problem with this approach. Float algorithms are generally more efficient than the exact rational ones.


--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sympy+unsubscribe@googlegroups.com.

To post to this group, send email to sy...@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.

Sergey Kirpichev

unread,
Mar 29, 2014, 8:24:13 AM3/29/14
to sy...@googlegroups.com
On Saturday, March 29, 2014 8:34:36 AM UTC+4, Richard Fateman wrote:
Your problem is that the notions of +  and - in your computer programming language
are apparently inadequate.

In which one?  Python, CLisp?
 
It is certainly possible to do this correctly by converting a,b,c
into ratios of integers

Sure.  It's possible to convert a,b,c to rationals, but then it will be rational
arithmetic, not arithmetic for floats (IEEE 754).
 
The fact that  +  gets different answers for adding numbers a,b    and for adding a', b'   where
a-a' and b=b'    can't be a good thing in a system that is supposed to do mathematics.
 
There are still reasons for floats (Christophe pointed you to a major one).  But I admit, we shouldn't allow
the use of Floats (or the builtin float type) in symbolic mathematics.  An example: https://github.com/sympy/sympy/pull/2801

Joachim Durchholz

unread,
Mar 29, 2014, 9:33:24 AM3/29/14
to sy...@googlegroups.com
Am 29.03.2014 09:47, schrieb Christophe Bal:
>>>> [...] what you could do is take any "float" and convert it
>>>> to an exactly equal numeric quantity that is a sympy rational.
>>>> And you could take that number and convert it to a float.
>>>> without loss.
>
> There is a little problem with this approach. Float algorithms are generally
> more efficient than the exact rational ones.

You don't get feedback about precision loss. So you never know whether
that efficient float computation was really faster.

Besides, I'm not sure that the float speedup will really matter for
SymPy in general. Do numeric calculations even take up a significant
fraction of SymPy's running times?
If no, we shouldn't care.

Christophe Bal

unread,
Mar 29, 2014, 9:38:35 AM3/29/14
to sympy-list
Does SymPy support something like Decimal("1.234") from the standard module decimal? This could do the job.

On the other hand, SymPy should print something like Float("1.234", 10) so as to tell the user that a float with a mantissa of 10 digits is used.

Good or bad advice? Is it feasible?

Joachim Durchholz

unread,
Mar 29, 2014, 10:02:15 AM3/29/14
to sy...@googlegroups.com
Am 29.03.2014 14:38, schrieb Christophe Bal:
> Does Sympy supports something like Decimal("1.234") of the standard module
> decimal ? This could do the job.

If that conversion goes through a Python float, this would already incur
a loss of precision because 1.234 is not exactly representable as a binary fraction.
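A short sketch of the two construction paths (stdlib `decimal` module; no SymPy involved):

```python
# The string form is parsed exactly; the float form has already been
# rounded to the nearest binary double before Decimal ever sees it.
from decimal import Decimal

exact = Decimal("1.234")    # parsed directly from the decimal string
via_float = Decimal(1.234)  # the exact value of the nearest binary double

assert exact != via_float   # the detour through a float loses exactness
```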

> On the other hand, Sympy should print something like Float("1.234", 10) so
> as to say to the user that a float with a mantisse of 10 digits is used.

It couldn't. There is no such thing as a mantissa of 10 decimal digits,
it's roughly 30 binary digits.

> Good or wrong advices ? Is it feasible ?

Feasible: certainly.
Good advice: Not sure. It really depends on what you're doing with SymPy.

I think the most important part is that SymPy should never introduce a
Float on its own. (I do not believe it does, but then I never checked
every line of code in SymPy.)

Other than that, I do not think it matters that much. If somebody uses
Float, they'll get what they deserved IMHO - and if they know what
they're doing they might even get away with it.

Christophe Bal

unread,
Mar 29, 2014, 2:14:36 PM3/29/14
to sympy-list
>>> If that conversion goes through a Python float, ...

No. If the user types 1.234, it will automatically be a float, because this is the way Python works; if he really needs a decimal, he has to use Decimal("1.234") explicitly.


>>> I think the most important part is that SymPy should
>>> never introduce a Float on its own.

Yes, I agree.


>>> If somebody uses Float, they'll get what they deserved IMHO...

It could be useful to know some technical information.



Aaron Meurer

unread,
Mar 29, 2014, 2:54:48 PM3/29/14
to sy...@googlegroups.com, Kalevi Suominen
Hi Kalevi.

Sorry that this discussion got a little derailed.

It would be great if you could submit this as a pull request. I think all you need to do is remove Float.is_rational, and then add tests that both Float.is_rational and Float.is_irrational are None (and maybe a brief comment by the tests explaining why).

Let us know if you need any help submitting a pull request. We have a guide here too https://github.com/sympy/sympy/wiki/development-workflow.

Aaron Meurer

On Sat, Mar 22, 2014 at 7:47 AM, Kalevi Suominen <jks...@gmail.com> wrote:


Joachim Durchholz

unread,
Mar 29, 2014, 3:34:33 PM3/29/14
to sy...@googlegroups.com
Am 29.03.2014 19:14, schrieb Christophe Bal:
>>>> If that conversion goes through a Python float, ...
>
> No. If the user types 1.234, this will be automatically a float, because
> this is the way Python works, but if he really needs a decimal, he has to
> explicitly use of Decimal("1.234"), because this is the way Python works.

I'd agree if you had written Decimal(1.234) in the part you snipped.
It was Decimal("1.234") though.

>>>> I think the most important part is that SymPy should
>>>> never introduce a Float on its own.
>
> Yes, I agree.
>
>
>>>> If somebody uses Float, they'll get what they deserved IMHO...
>
> It could be useful to know some technical informations.

Float arithmetic in Python is essentially what the C runtime offers.
Which is essentially just the hardware, which can vary in its behaviour,
particularly when it comes to the precision loss flag.

Richard Fateman

unread,
Mar 29, 2014, 4:04:49 PM3/29/14
to sy...@googlegroups.com


On Saturday, March 29, 2014 1:47:11 AM UTC-7, Christophe Bal wrote:
>>> [...] what you could do is take any "float"  and convert it
>>> to an exactly equal numeric quantity that is a sympy rational.
>>> And you could take that number and convert it to a float.
>>> without loss.

There is a little  {problem?} with this approach. The float algorithm are generally more efficient that the exact rational ones.

Take your choice.  Fast or correct.
 



Richard Fateman

unread,
Mar 29, 2014, 4:28:54 PM3/29/14
to sy...@googlegroups.com


On Saturday, March 29, 2014 5:24:13 AM UTC-7, Sergey Kirpichev wrote:
On Saturday, March 29, 2014 8:34:36 AM UTC+4, Richard Fateman wrote:
Your problem is that the notions of +  and - in your computer programming language
are apparently inadequate.

In which one?  Python, CLisp?

Common Lisp uses floating-point arithmetic for floats, exact arithmetic for the data type used for
rationals.   If you stick with integers and ratios of integers, Common Lisp does mathematically
valid  arithmetic.   There is a way of being considerably faster than ratio of integers, while
correctly representing WITHOUT ERROR OR ROUNDOFF all results from addition and multiplication of
numbers that originate as floats.  This is "binary rational"  and was used so far as I know in only
one system designed by George E. Collins.  It works like this:
a binary rational is an arbitrary-precision integer times 2^(power).   The power is a signed integer, and one could
also make it an arbitrary precision number, though 32 or even 16 bits is probably enough for most
practical and impractical purposes.
Advantage 1: No need for greatest common divisor calculations.  If the numerator in binary has k trailing zeros, strike them off and add k to the power.
Advantage 2: No overflow or underflow or rounding from add and multiply.  Exact results.
Advantage 3:  strictly a superset of floats.

Disadvantage 1.  Division is not exact.
Disadvantage 2:  Strictly a subset of "ratio of integers".
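A minimal sketch of such a binary-rational (dyadic) type. The class name `Dyadic` and its methods are invented for illustration; Python's arbitrary-precision `int` stands in for the bignum integers Fateman describes:

```python
# Sketch of Collins-style "binary rationals": an arbitrary-precision
# integer times a power of two.  Addition and multiplication are exact;
# division is not closed (e.g. 1/3 is not representable).
from dataclasses import dataclass

@dataclass(frozen=True)
class Dyadic:
    m: int  # arbitrary-precision "fraction" part
    e: int  # value represented = m * 2**e

    def normalized(self) -> "Dyadic":
        m, e = self.m, self.e
        if m == 0:
            return Dyadic(0, 0)
        while m % 2 == 0:  # strip trailing zero bits: no gcd needed
            m //= 2
            e += 1
        return Dyadic(m, e)

    def __add__(self, other: "Dyadic") -> "Dyadic":
        e = min(self.e, other.e)  # align to the smaller exponent
        m = self.m * 2 ** (self.e - e) + other.m * 2 ** (other.e - e)
        return Dyadic(m, e).normalized()

    def __mul__(self, other: "Dyadic") -> "Dyadic":
        return Dyadic(self.m * other.m, self.e + other.e).normalized()

# 0.5 + 0.75 is computed exactly, with no rounding or overflow:
assert Dyadic(1, -1) + Dyadic(3, -2) == Dyadic(5, -2)   # 1.25
assert Dyadic(1, -1) + Dyadic(1, -1) == Dyadic(1, 0)    # 1.0, trailing zeros stripped
```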


 
It is certainly possible to do this correctly by converting a,b,c
into ratios of integers

Sure.  It's possible to convert a,b,c to rationals, but then it will be rational
arithmetic, not arithmetic for floats (IEEE 754).

Well, if you want to do crappy arithmetic, maybe you should be forced to do this:

   crappy_float_plus(a, crappy_float_times(b,c))   etc...  instead of hinting/misinforming
the programmer as well as the reader of programs that the process indicated is addition or multiplication.
 
The fact that  +  gets different answers for adding numbers a,b    and for adding a', b'   where
a=a' and b=b'    can't be a good thing in a system that is supposed to do mathematics.
 
There are still reasons for floats (Christophe pointed you to a major one). 

Of course there are.  Speed. Storage.
 
But I admit, we shouldn't allow
the use of Floats (or the builtin float type) in symbolic mathematics. 

I disagree, from experience.  People want to introduce floats.  For example, some people
write squareroot(x)   as  x^0.5
Saves typing, and they believe it is the same thing.
 

Joachim Durchholz

unread,
Mar 30, 2014, 3:03:01 AM3/30/14
to sy...@googlegroups.com
That's a no-brainer.
SymPy doesn't do number crunching where numeric speed is of essence, it
does symbolic math and correctness is far more important, so "correct"
it is.

Joachim Durchholz

unread,
Mar 30, 2014, 3:19:00 AM3/30/14
to sy...@googlegroups.com
Am 29.03.2014 21:28, schrieb Richard Fateman:
>
>
> On Saturday, March 29, 2014 5:24:13 AM UTC-7, Sergey Kirpichev wrote:
>>
>> On Saturday, March 29, 2014 8:34:36 AM UTC+4, Richard Fateman wrote:
>>>
>>> Your problem is that the notions of + and - in your computer programming
>>> language
>>> are apparently inadequate.
>>>
>>
>> In which one? Python, CLisp?
>>
>
> Common Lisp uses floating-point arithmetic for floats, exact arithmetic for
> the data type used for
> rationals. If you stick with integers and ratios of integers, Common Lisp
> does mathematically
> valid arithmetic.

There is nothing wrong with these statements.
I'm taking offense with this:

> So is every double-precision IEEE float (except inf and NaN)
> convertible to the exact rational number which it represents in the
> form integer/integer.
>
> It seems to me the meanings of words in sympy should correspond to
> their meanings in mathematics not some hack mockery of the word that
> appears in one or even several programming languages.

This sounds as if IEEE numbers were somehow a mathematical valid
implementation of arithmetic, which they simply aren't.

> There is a way of being considerably faster than ratio
> of integers, while
> correctly representing WITHOUT ERROR OR ROUNDOFF all results from addition
> and multiplication of
> numbers that originate as floats. This is "binary rational" and was used
> so far as I know in only
> one system designed by George E. Collins. It works like this:
> a binary rational is an arbitrary-precision integer X 2^(power). Power is
> an integer +-, and one could
> also make it an arbitrary precision number, though 32 or even 16 bits is
> probably enough for most
> practical and impractical purposes.

I do not understand - if you use 32 or 16 bits, you're back at the data
model of IEEE arithmetic.

> Advantage 1: No need for greatest common divisor calculations. If the
> numerator in binary has k trailing zeros, strike them off and add k to the
> power.
> Advantage 2: No overflow or underflow or rounding from add and multiply.
> Exact results.
> Advantage 3: strictly a superset of floats.
>
> Disadvantage 1. Division is not exact.
> Disadvantage 2: Strictly a subset of "ratio of integers".

They still come with the same problems as IEEE arithmetic, most
importantly precision loss, manifest e.g. in the inability to exactly
represent 1/3.

>> But I admit, we shouldn't allow
>> using of Float's (or builtin float type) in symbolic mathematics.
>
> I disagree, from experience. People want to introduce floats. For
> example, some people
> write squareroot(x) as x^0.5
> Saves typing, and they believe it is the same thing.

Actually in this case it is, since 0.5 can be represented exactly.

What's hard is to make SymPy detect whether the float is an exact
representation of what the user wanted (it could silently convert to
rational) and when it isn't (it should warn the user).

What can be made to work is x^"0.5" - SymPy gets to see the
representation and can convert to rational directly.
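Durchholz's detection idea can be sketched with the stdlib `Fraction`. The helper name `literal_is_exact` is hypothetical, not SymPy code:

```python
# Given the decimal *string* the user typed, we can tell whether the
# binary float carries exactly the intended value.
from fractions import Fraction

def literal_is_exact(text: str) -> bool:
    # True iff the decimal literal survives the trip through a binary
    # float unchanged.
    return Fraction(text) == Fraction(float(text))

assert literal_is_exact("0.5")       # 0.5 is a binary fraction: exact
assert not literal_is_exact("0.1")   # 0.1 is not: it gets silently rounded
```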

Richard Fateman

unread,
Mar 30, 2014, 11:38:48 PM3/30/14
to sy...@googlegroups.com


On Sunday, March 30, 2014 12:19:00 AM UTC-7, Joachim Durchholz wrote:
Am 29.03.2014 21:28, schrieb Richard Fateman:
>
>
> On Saturday, March 29, 2014 5:24:13 AM UTC-7, Sergey Kirpichev wrote:
>>
>> On Saturday, March 29, 2014 8:34:36 AM UTC+4, Richard Fateman wrote:
>>>
>>> Your problem is that the notions of +  and - in your computer programming
>>> language
>>> are apparently inadequate.
>>>
>>
>> In which one?  Python, CLisp?
>>
>
> Common Lisp uses floating-point arithmetic for floats, exact arithmetic for
> the data type used for
> rationals.   If you stick with integers and ratios of integers, Common Lisp
> does mathematically
> valid  arithmetic.

There is nothing wrong with these statements.
I'm taking offense with this:

 > So is every double-precision IEEE float  (except inf and NaN)
 > convertible  to the exact rational number which it represents in the
 > form  integer/integer.
 >
 > It seems to me the meanings of words in sympy should correspond to
 > their meanings in mathematics not some hack mockery of the word that
 > appears in one or even several programming languages.

This sounds as if IEEE numbers were somehow a mathematical valid
implementation of arithmetic, which they simply aren't.

There are numbers and there are operations.  The addition of IEEE754 binary numbers
does not always correspond to the mathematical sum.  Except for overflow, the addition
should result in the IEEE754 representable number that is closest to the true value.
(I'm leaving out some edge cases like adding NaNs etc.)

I'm saying that every IEEE754 valid number is exactly some rational number.

 

 > There is a way of being considerably faster than ratio
> of integers, while
> correctly representing WITHOUT ERROR OR ROUNDOFF all results from addition
> and multiplication of
> numbers that originate as floats.  This is "binary rational"  and was used
> so far as I know in only
> one system designed by George E. Collins.  It works like this:
> a binary rational is an arbitrary-precision integer X 2^(power).   Power is
> an integer +-, and one could
> also make it an arbitrary precision number, though 32 or even 16 bits is
> probably enough for most
> practical and impractical purposes.

I do not understand - if you use 32 or 16 bits, you're back at the data
model of IEEE arithmetic.

I guess you don't.  I said that you use arbitrary precision integers for the "fraction".
The exponent could be a fixnum for most practical purposes. 
 

> Advantage 1: No need for greatest common divisor calculations.  If the
> numerator in binary has k trailing zeros, strike them off and add k to the
> power.
> Advantage 2: No overflow or underflow or rounding from add and multiply.
> Exact results.
> Advantage 3:  strictly a superset of floats.
>
> Disadvantage 1.  Division is not exact.
> Disadvantage 2:  Strictly a subset of "ratio of integers".

They still come with the same problems as IEEE arithmetic, most
importantly precision loss, manifest e.g. at the inability to exactly
represent 1/3.

Wrong. I said the numbers were closed under addition and multiplication.
They are not closed under division, and you need division to create 1/3.
You cannot create 1/3 from addition or multiplication of  floats or binary rationals.


>> But I admit, we shouldn't allow
>> using of Float's (or builtin float type) in symbolic mathematics.
>
> I disagree, from experience.  People want to introduce floats.  For
> example, some people
> write squareroot(x)   as  x^0.5
> Saves typing, and they believe it is the same thing.

Actually in this case it is, since 0.5 can be represented exactly.

The point I was making is that people seem to want to use floats.  In the
Macsyma / Maxima   rational function package, floats are converted to
rational numbers.  Sometimes this is just fine, and sometimes it really only
makes sense to leave the floats alone.
 

What's hard is to make SymPy detect whether the float is an exact
representation of what the user wanted (it could silently convert to
rational) and when it isn't (it should warn the user).
 
That's because it would have to read the user's mind. There has to be
some other mechanism.  For example, in Maxima there is a flag you
can set, keepfloat: true.


What can be made to work is x^"0.5" - SymPy gets to see the
representation and can convert to rational directly.

This looks like garbage.  If SymPy controlled the parser, it could get
some kind of structure like
   power(x,  float_like_object( exact_integer(0), exact_integer(5)))

and some later stage could decide whether that is a single/double/exact_ratio.... etc number.

for example, Maxima allows   1.2345d0   for a double float,  1.2345b0  for a bigfloat,
and presumably with a tweak of the system could allow for decimal floats, or conversion
of float_like_objects directly to ratios of integers.

I don't know if sympy has any access to such things.   But it could, if all input to sympy were
enclosed at the beginning and the end with

"   ................."

That is, all sympy programs are python character strings, and sympy has a hacked-up python parser
inside it.
Does python allow you to just write very long integers, e.g.
    x: := 213456789123456789123456789 ?
If you have to do some song-and-dance and/or use quotes, you lose some of the audience.
{Apologies -- I really don't know how big integers are integrated into python or sympy}

Sergey Kirpichev

unread,
Mar 31, 2014, 12:10:54 PM3/31/14
to sy...@googlegroups.com
On Monday, March 31, 2014 7:38:48 AM UTC+4, Richard Fateman wrote:
Does python allow you to just write very long integers, e.g.
    x: := 213456789123456789123456789 ?

Yes.  And almost every contemporary language allows you to do so.

Sergey Kirpichev

unread,
Mar 31, 2014, 12:28:23 PM3/31/14
to sy...@googlegroups.com
On Sat, Mar 29, 2014 at 01:28:54PM -0700, Richard Fateman wrote:
> Common Lisp uses floating-point arithmetic for floats, exact arithmetic
> for the data type used for rationals.

Good news. Python does the same.

> If you stick with integers and ratios of integers, Common
> Lisp does mathematically valid arithmetic.

Excellent! Python too.

The problem is: float arithmetic is not a "valid arithmetic" by
design. And the question is: should we use float "numbers" in SymPy
at all.

> But I admit, we shouldn't allow
> using of Float's (or builtin float type) in symbolic mathematics.
>
> I disagree, from experience. People want to introduce floats. For
> example, some people
> write squareroot(x) as x^0.5
> Saves typing, and they believe it is the same thing.

But those are not necessarily floats!  It's just a different
notation for the rational 1/2 (as "they believe").

In [1]: Rational('0.123')
Out[1]:
123
────
1000
In [2]: Float('0.123')
Out[2]: 0.123000000000000
In [3]: type(_)
Out[3]: sympy.core.numbers.Float
In [4]: sympify('0.123')
Out[4]: 0.123000000000000
In [5]: type(_)
Out[5]: sympy.core.numbers.Float

The little problem here is that float(0.3) != 3/10 internally, for
example (even though people think so). See this:
https://docs.python.org/2/tutorial/floatingpoint.html

In [16]: Decimal(0.3)
Out[16]:
Decimal('0.299999999999999988897769753748434595763683319091796875')
In [17]: Rational(0.3)
Out[17]:
5404319552844595
─────────────────
18014398509481984
In [18]: Rational(str(0.3))
Out[18]: 3/10

End of story. It seems people want to introduce not floats but a fancy
notation for rationals. If this is the only reason why we keep the Float
object, we should drop Floats and fix sympify to map Python's floats
to Rationals instead (for example: sympify(0.2) -> Rational(str(0.2))).
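A sketch of this proposed mapping (a hypothetical helper, not SymPy code), again with the stdlib `Fraction` standing in for Rational:

```python
# Map a float to the rational its shortest decimal repr denotes,
# rather than to its exact binary value.
from fractions import Fraction

def float_to_intended_rational(x: float) -> Fraction:
    # repr() gives the shortest decimal string that round-trips
    # back to the same float (Python >= 3.1).
    return Fraction(repr(x))

assert float_to_intended_rational(0.3) == Fraction(3, 10)
assert Fraction(0.3) != Fraction(3, 10)  # the exact binary value differs
```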

Is there a real need for floats in a symbolic mathematics package?

Joachim Durchholz

unread,
Mar 31, 2014, 2:15:52 PM3/31/14
to sy...@googlegroups.com
Well, I'm done here.

I'm not going to argue against the idea that somehow, the mere
convertibility of IEEE floats into rationals has any value for SymPy.

Richard Fateman

unread,
Mar 31, 2014, 5:11:21 PM3/31/14
to sy...@googlegroups.com
For suitable values of "contemporary".   I wonder what you think about the
widely used languages visual basic, excel,  etc.

Maybe you mean contemporary computer algebra language, in which case I agree.
But python is not a computer algebra language.
 

Richard Fateman

unread,
Mar 31, 2014, 5:17:56 PM3/31/14
to sy...@googlegroups.com


On Monday, March 31, 2014 9:28:23 AM UTC-7, Sergey Kirpichev wrote:
On Sat, Mar 29, 2014 at 01:28:54PM -0700, Richard Fateman wrote:
>    Common Lisp uses floating-point arithmetic for floats, exact arithmetic
>    for the data type used for rationals.

Good news.  Python does the same.

>    If you stick with integers and ratios of integers, Common
>    Lisp does mathematically valid  arithmetic.

Excellent!  Python too.

The problem is: float arithmetic is not a "valid arithmetic" by
design.  And the question is: should we use float "numbers" in SymPy
at all.

I think the answer from a practical perspective is that you can't exclude floats.
 

>      But I admit, we shouldn't allow
>      using of Float's (or builtin float type) in symbolic mathematics.
>
>    I disagree, from experience.  People want to introduce floats.  For
>    example, some people
>    write squareroot(x)   as  x^0.5
>    Saves typing, and they believe it is the same thing.

But those are not necessarily floats!  It's just a different
notation for the rational 1/2 (as "they believe").

So are you taking the view, now, that 0.5 is rational?
 

You can't predict the future.
 
 If this is only the reason why we keep Float
object - we should drop Float's and fix sympify to map python's floats
to Rational's instead (for example: sympify(0.2) -> Rational(str(0.2))).
Sometimes floats can be very handy, e.g. for finding approximations to roots
of a polynomial, compared to rational arithmetic, where different, usually
much slower, methods might be used.

Is there a real need for floats in a symbolic mathematics package?
Just because you might not need them now, doesn't really answer the question.

 

Sergey B Kirpichev

unread,
Mar 31, 2014, 5:21:15 PM3/31/14
to sy...@googlegroups.com
On Mon, Mar 31, 2014 at 02:11:21PM -0700, Richard Fateman wrote:
> For suitable values of "contemporary".

Haskell, Ruby, Python...

Richard Fateman

unread,
Mar 31, 2014, 5:37:44 PM3/31/14
to sy...@googlegroups.com
Too many negatives.

not against merely any value. ??  You lost me.

I think the fact is, one can convert a float to a rational.

Is it useful "for sympy"?  How do we know if anything in sympy is useful?
I'm sure there are features of Macsyma that were put in there because
it was possible to do so, without much consideration as to whether it
might be useful.  Some of those features may have indeed never been
useful. Even 45 years later.


 

Aaron Meurer

unread,
Mar 31, 2014, 6:35:23 PM3/31/14
to sy...@googlegroups.com
SymPy floats are not machine floats, but arbitrary precision floats. These are very useful, and (at least by my understanding) far more efficient to work with than rationals if that's what you want. Arbitrary here means arbitrary but fixed. Rationals are arbitrary but unbounded. That makes a large difference. If you care about 100 digits, but only 100 digits, then you really do want to throw away anything beyond that. If you used rationals, you would have to round them down constantly (which btw is what causes these "arithmetic issues"), or else use 10 times as much memory and CPU time as you wanted. 

Aaron Meurer
 


Richard Fateman

unread,
Mar 31, 2014, 8:37:14 PM3/31/14
to sy...@googlegroups.com


On Monday, March 31, 2014 3:35:23 PM UTC-7, Aaron Meurer wrote:



SymPy floats are not machine floats, but arbitrary precision floats. These are very useful, and (at least by my understanding) far more efficient to work with than rationals if that's what you want.

bigfloats can be expensive too.
 
Arbitrary here means arbitrary but fixed. Rationals are arbitrary but unbounded. That makes a large difference. If you care about 100 digits, but only 100 digits, then you really do want to throw away anything beyond that. If you used rationals,
Typically people do rational arithmetic exactly, not periodically truncating the results.  If it was appropriate
to truncate the results, they would be using bigfloats.
Running some computations to completion with exact rationals means that the number of digits in the numerator and denominator tend to grow rapidly.  Sometimes, however, you want to do this because the rational answer comes out right  (for example, exactly zero) and the floating point version is non-zero and hence, relatively speaking, infinitely wrong.

 
you would have to round them down constantly (which btw is what causes these "arithmetic issues"), or else use 10 times as much memory and CPU time as you wanted. 

The difference in costs and memory between floats and bigfloats tends to be enormous. there are also libraries for double-double  or "quad" precision floats, interval arithmetic, and other items of interest.  See what MPFR provides as an example.  But if sympy doesn't do regular double-float arithmetic, someone may want it later.
RJF


Aaron Meurer

unread,
Mar 31, 2014, 8:55:19 PM3/31/14
to sy...@googlegroups.com
On Mon, Mar 31, 2014 at 7:37 PM, Richard Fateman <fat...@gmail.com> wrote:


On Monday, March 31, 2014 3:35:23 PM UTC-7, Aaron Meurer wrote:



SymPy floats are not machine floats, but arbitrary precision floats. These are very useful, and (at least by my understanding) far more efficient to work with than rationals if that's what you want.

bigfloats can be expensive too.

Yes, but people rarely go beyond the default precision (15, which is roughly machine precision anyway). And anyway, bigints are also expensive...
 
 
Arbitrary here means arbitrary but fixed. Rationals are arbitrary but unbounded. That makes a large difference. If you care about 100 digits, but only 100 digits, then you really do want to throw away anything beyond that. If you used rationals,
Typically people do rational arithmetic exactly, not periodically truncating the results.  If it was appropriate
to truncate the results, they would be using bigfloats.

That's exactly what I'm saying. If your numbers represent something physically meaningful, then the smaller digits do not matter. There's no point keeping a billion digit numerator and denominator just because you wanted to have 1.0000000...01 instead of 1.0.
 
Running some computations to completion with exact rationals means that the number of digits in the numerator and denominator tend to grow rapidly.  Sometimes, however, you want to do this because the rational answer comes out right  (for example, exactly zero) and the floating point version is non-zero and hence, relatively speaking, infinitely wrong.

Yes, these false nonzeros are a pain. And they lead to wrong results if you don't compute with them carefully (e.g., https://github.com/sympy/sympy/issues/2949#issuecomment-38336823). But I don't think it's intractable. You have to keep track of precision carefully. 
 

 
you would have to round them down constantly (which btw is what causes these "arithmetic issues"), or else use 10 times as much memory and CPU time as you wanted. 

The difference in costs and memory between floats and bigfloats tends to be enormous. there are also libraries for double-double  or "quad" precision floats, interval arithmetic, and other items of interest.  See what MPFR provides as an example.  But if sympy doesn't do regular double-float arithmetic, someone may want it later.
RJF

We probably should have a MachineFloat class that just wraps Python's float. It might speed things up.
 
Aaron Meurer




Richard Fateman

Apr 1, 2014, 11:25:42 AM4/1/14
to sy...@googlegroups.com


On Monday, March 31, 2014 5:55:19 PM UTC-7, Aaron Meurer wrote:



On Mon, Mar 31, 2014 at 7:37 PM, Richard Fateman <fat...@gmail.com> wrote:


On Monday, March 31, 2014 3:35:23 PM UTC-7, Aaron Meurer wrote:



SymPy floats are not machine floats, but arbitrary precision floats. These are very useful, and (at least by my understanding) far more efficient to work with than rationals if that's what you want.

bigfloats can be expensive too.

Yes, but people rarely go beyond the default precision (15, which is roughly machine precision anyway). And anyway, bigints are also expensive...
 
Well, that's not a great idea if sympy floats look like machine precision but are not.  Too easy to confuse.
bigints should be substantially less expensive because the computations involved with rounding are
not needed.  Assuming the bigints have about the same number of bits.
 
 
Arbitrary here means arbitrary but fixed. Rationals are arbitrary but unbounded. That makes a large difference. If you care about 100 digits, but only 100 digits, then you really do want to throw away anything beyond that. If you used rationals,
Typically people do rational arithmetic exactly, not periodically truncating the results.  If it was appropriate
to truncate the results, they would be using bigfloats.

That's exactly what I'm saying. If your numbers represent something physically meaningful, then the smaller digits do not matter.
Many people use computers for calculations that are not physically meaningful. That's most of Sage.
 
There's no point keeping a billion digit numerator and denominator just because you wanted to have 1.0000000...01 instead of 1.0.
It does matter if you exactly subtract 1 from it.
 
 
Running some computations to completion with exact rationals means that the number of digits in the numerator and denominator tend to grow rapidly.  Sometimes, however, you want to do this because the rational answer comes out right  (for example, exactly zero) and the floating point version is non-zero and hence, relatively speaking, infinitely wrong.

Yes, these false nonzeros are a pain. And they lead to wrong results if you don't compute with them carefully (e.g., https://github.com/sympy/sympy/issues/2949#issuecomment-38336823). But I don't think it's intractable. You have to keep track of precision carefully. 

Unfortunately it is in general intractable.  (see Daniel Richardson's results on unsolvability of the zero-equivalence problem)
 
 

 
you would have to round them down constantly (which btw is what causes these "arithmetic issues"), or else use 10 times as much memory and CPU time as you wanted. 

The difference in costs and memory between floats and bigfloats tends to be enormous. there are also libraries for double-double  or "quad" precision floats, interval arithmetic, and other items of interest.  See what MPFR provides as an example.  But if sympy doesn't do regular double-float arithmetic, someone may want it later.
RJF

We probably should have a MachineFloat class that just wraps Python's float. It might speed things up.

Sounds plausible, 
You might have to do a lot of fiddly work.  E.g. consider the differences between cos(0.5), cos(1/2), cos(MachineFloat(1/2)), cos(sympy_bigfloat("0.5")) etc.

RJf
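Part of that distinction already exists in today's SymPy (MachineFloat and sympy_bigfloat above are hypothetical names): a Rational argument keeps cos symbolic, while a Float argument triggers numeric evaluation. A quick sketch:

```python
import math
from sympy import cos, Rational, Float

symbolic = cos(Rational(1, 2))  # stays as cos(1/2), exact and unevaluated
numeric = cos(Float(0.5))       # auto-evaluates to a Float

print(symbolic)       # cos(1/2)
print(float(numeric))
```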
 
 

Sergey Kirpichev

Apr 4, 2014, 10:44:11 AM4/4/14
to sy...@googlegroups.com
On Mon, Mar 31, 2014 at 02:17:56PM -0700, Richard Fateman wrote:
> I think the answer from a practical perspective is that you can't exclude
> floats.

But why do we need them as a class, derived from Number and Basic?

An example of real issue:
In [1]: n1=Float(0.001)
In [2]: n2=Float(1.1)
In [3]: n3=Float(1.0)
In [4]: x1=n1*x
In [5]: x2=n2*x
In [6]: x3=n3*x
In [10]: f1=Add(Add(x1, x2, evaluate=False), -x3, evaluate=False)
In [11]: f1
Out[11]: -1.0*x + 0.001*x + 1.1*x
In [13]: f2=Add(x1, Add(x2, -x3, evaluate=False), evaluate=False)
In [14]: f1
Out[14]: -1.0*x + 0.001*x + 1.1*x
In [20]: f1-f2 # Hey, sympy thinks associativity holds
Out[20]: 0

But:
In [21]: f1e=Add(Add(x1, x2), -x3); f1e
Out[21]: 0.101*x
In [22]: f2e=Add(x1, Add(x2, -x3)); f2e
Out[22]: 0.101*x
In [23]: f1e-f2e
Out[23]: -1.11022302462516e-16*x
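The same associativity failure can be reproduced with plain Python doubles, independent of SymPy:

```python
# the same coefficients as above, in plain double precision
a, b, c = 0.001, 1.1, -1.0

left = (a + b) + c   # group as (0.001 + 1.1) - 1.0
right = a + (b + c)  # group as 0.001 + (1.1 - 1.0)

# the two groupings differ by a few ulps: "+" is not associative
print(left - right)
```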

> So are you taking the view, now, that 0.5 is rational?

Yes, for the provided example (x**0.5)

> End of story. It seems, people want to introduce not floats but a fancy
> notation for rationals.
>
> You can't predict the future.

I'm not attempting to do so.

> If this is only the reason why we keep Float
> object - we should drop Float's and fix sympify to map python's floats
> to Rational's instead (for example: sympify(0.2) -> Rational(str(0.2))).
>
> Sometimes floats can be very handy. Finding approximations to roots
> of a polynomial. Compared to rational arithmetic, where different,
> usually much slower methods, might be used.

But I don't suggest forbidding floats altogether (Python has a builtin
float type, and we also have mpf from mpmath; we can convert SymPy's numbers to these
types in some use cases).

> Is there a real need for floats in symbolic mathematics package?
>
> Just because you might not need them now, doesn't really answer the
> question.

Good point. But if we don't need them now, it's a good idea to
introduce such an object only on demand...

Sergey Kirpichev

Apr 4, 2014, 10:59:00 AM4/4/14
to sy...@googlegroups.com
On Friday, April 4, 2014 6:44:11 PM UTC+4, Sergey Kirpichev wrote:
In [13]: f2=Add(x1, Add(x2, -x3, evaluate=False), evaluate=False)
In [14]: f1
Out[14]: -1.0*x + 0.001*x + 1.1*x

A little typo, should be:

In [13]: f2=Add(x1, Add(x2, -x3, evaluate=False), evaluate=False)
In [14]: f2
Out[14]: 0.001*x + -1.0*x + 1.1*x

Richard Fateman

Apr 4, 2014, 11:33:59 AM4/4/14
to sy...@googlegroups.com


On Friday, April 4, 2014 7:44:11 AM UTC-7, Sergey Kirpichev wrote:
On Mon, Mar 31, 2014 at 02:17:56PM -0700, Richard Fateman wrote:
>    I think the answer from a practical perspective is that you can't exclude
>    floats.

But why do we need them as a class, derived from Number and Basic?

It is perfectly possible for floating point numbers like 0.1  (which is not equal to 1/10)
to be subtracted from other floating point numbers and get 0.0,   or not, even if
the numbers you might (incorrectly) believe they exactly represent should come
out 0, or not.  It is not a proof of associativity, as I assume you realize :)

Any example in which you display floating point numbers in decimal is immediately
suspect unless you are using decimal floating point.  (for the purposes here of
exactness etc.)
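The claim that 0.1 is not 1/10 is easy to verify from the standard library, since `Fraction` converts a binary float to its exact value:

```python
from fractions import Fraction

# Fraction(float) recovers the float's exact binary value
tenth = Fraction(0.1)

print(tenth)                     # 3602879701896397/36028797018963968
print(tenth == Fraction(1, 10))  # False
```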

I have no idea what evaluate=False means.  I am guessing what Add(  ) means.


In my opinion, truly a bad idea, not a good idea.  You must think about many
different aspects of computation even if you don't implement them yet.  I think that
Mathematica, not necessarily my favorite system, nevertheless was vastly improved
by writing most of the documentation before coding.    But in any case leaving
out floats seems particularly hazardous, because floats are already around.

You should aim sympy to be conceptually like a traditional
multi-tiered wedding cake, with layers built sturdily upon other layers, and nice to
look at.
http://www.weddingcakeforbreakfast.com/

Not like a giant cranberry-orange scone with odd shaped pieces sticking out at
random angles.

http://sweetpeaskitchen.com/2010/08/cranberry-orange-scones/

That's how most computer algebra systems end up, despite efforts to the
contrary.  Having looked at sympy but only in small pieces, I think it is already
looking scone-ish.

RJF

Sergey B Kirpichev

Apr 12, 2014, 4:02:46 AM4/12/14
to sy...@googlegroups.com
On Fri, Apr 04, 2014 at 08:33:59AM -0700, Richard Fateman wrote:
> I have no idea what evaluate=False means.  I am guessing what Add(  )
> means.

This example was actually for other sympy developers. Think about
Add(2, 2, evaluate=False) as a poor-man's analog of the Mathematica Hold[2+2].
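For readers unfamiliar with it, `evaluate=False` keeps the arguments held and uncombined, and `doit()` forces the held evaluation:

```python
from sympy import Add

# a held sum, roughly Mathematica's Hold[2+2]
held = Add(2, 2, evaluate=False)

print(held)         # 2 + 2
print(held.doit())  # 4
```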

> But in any case leaving
> out floats seems particularly hazardous because:   Floats are already
> around.

You totally miss the point. I'm against using these "inexact" numbers
in a symbolic context, especially if we assume that some algebraic
properties are valid for any expression (associativity, in the example
provided above).

Richard Fateman

Apr 12, 2014, 6:11:28 PM4/12/14
to sy...@googlegroups.com, skirp...@gmail.com


On Saturday, April 12, 2014 1:02:46 AM UTC-7, Sergey Kirpichev wrote:
On Fri, Apr 04, 2014 at 08:33:59AM -0700, Richard Fateman wrote:
>    I have no idea what evaluate=False means.  I am guessing what Add(  )
>    means.

This example was actually for other sympy developers.  Think about
Add(2, 2, evaluate=False) as a poor-man's analog of the Mathematica Hold[2+2].
That's awful.
Lisp  would use the syntax (+ 2 2).    If typed into a read-eval-print loop you would get 4.
If you typed (quote(+ 2 2))  you would get the symbolic expression (+ 2 2).
A keyboard shortcut for (quote(+ 2 2))   is    '(+ 2 2)

I'm sure you could come up with an infix version of Lisp's quote.

 

>    But in any case leaving
>    out floats seems particularly hazardous because:   Floats are already
>    around.

You totally miss the point.  I'm against using these "inexact" numbers
in a symbolic context, especially if we assume that some algebraic
properties are valid for any expression (associativity, in the example
provided above).

I think you are missing my point, which is that these numbers are not inexact
at all. They are precise numbers on the real line.   If you assume that this
finite set of "binary rationals"   (sometimes called dyadic numbers) is closed
under various operations, you may encounter difficulties. However, there is
nothing wrong with using them as input values, or when appropriate, output
values.   If you choose to do operations using finite-precision arithmetic typical
of scientific computing, there are lots of things to consider. 

Saying "sympy does symbolic stuff and will ignore floats" is appropriate if
you wish to exclude a large portion of potential users of sympy.  Why?
Well, consider all the computer algebra systems in existence.  Would they
be easier to build if they excluded floats?  Yes.  and yet they include floats.
And so should sympy...



For amusement, I wrote a (lisp) package that does binary rational arithmetic,
representing numbers as pairs: an odd integer of arbitrary length*, and a
second integer representing a power of two.     *Except for zero, which is (0, 0).

If anyone cares for a copy, ask and  I'll mail you one.  Later I might post it.
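The same representation is a short exercise in Python (my names, not taken from the posted Lisp package): every nonzero binary float is exactly an odd integer times a power of two, with zero handled as the special pair (0, 0):

```python
from math import frexp, ldexp

def to_dyadic(f: float):
    """Represent a float exactly as (odd_mantissa, exponent), f == m * 2**e.
    Zero is represented as (0, 0), following the convention in the post."""
    if f == 0.0:
        return (0, 0)
    m, e = frexp(f)       # f == m * 2**e with 0.5 <= |m| < 1
    m = int(m * 2**53)    # scale the 53-bit significand to an exact integer
    e -= 53
    while m % 2 == 0:     # normalize: make the mantissa odd
        m //= 2
        e += 1
    return (m, e)

print(to_dyadic(0.5))  # (1, -1)
print(to_dyadic(0.1))  # a large odd integer and exponent -55
```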

Sergey B Kirpichev

Apr 14, 2014, 7:14:36 PM4/14/14
to sy...@googlegroups.com
On Sat, Apr 12, 2014 at 03:11:28PM -0700, Richard Fateman wrote:
> This example was actually for other sympy developers.  Think about
> Add(2, 2, evaluate=False) as a poor-man's analog of the Mathematica
> Hold[2+2].
>
> That's awful.
> Lisp  would use the syntax (+ 2 2).

That's fine, but my example was not about the syntax.

> You totally miss the point.  I'm against using these "inexact" numbers
> in symbolic context, especially if we assume that some algebraic
> properties are valid for any expression (associativity for the provided
> above example).
>
> I think you are missing my point, which is that these numbers are not
> inexact
> at all. They are precise numbers on the real line.

Rational numbers, real numbers, integer numbers - these are not just
notation. The notions also include the available operations, i.e.
addition and multiplication. "Inexact" in this context means that some
well-known algebraic properties of such "numbers" are broken.

In SymPy, we can't use such objects reliably - at least as long as our "+"
and "*" are assumed to be associative (see the Expr and AssocOp classes).

> However, there is
> nothing wrong with using them as input values

I'm not against python's float literals or using floats
as output values.

Chris Smith

Nov 4, 2023, 2:17:47 PM11/4/23
to sympy
The `is_rational` flag can be used by the assumption system to make inferences about properties. Currently it is failing to allow `Eq(i/2, 0.5)` to be true (for integer `i`) because the rational lhs has a non-rational rhs. Yet, `Eq(S.Half, 0.5)` evaluates to True. So either we should make the latter fail or make modifications so the former passes. In https://github.com/sympy/sympy/pull/25865 I restore the `Float.is_rational == True` behavior.
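The second behavior mentioned here is easy to check directly; since 0.5 is exactly representable in binary, the comparison is exact:

```python
from sympy import Eq, S, Float

# S.Half is the exact Rational 1/2; Float(0.5) is exactly representable,
# so the equation evaluates to True
print(Eq(S.Half, Float(0.5)))  # True
```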

Allowing something to report itself as `rational` in SymPy has no implication for how that object will behave in computation since that property must be defined and understood by those using SymPy. If some routine makes decisions based on the `is_rational` or `is_integer` property it will have to consider how Float should be included. An example is `ceiling(x + 2)` vs `ceiling(x + 2.0)`: the first simplifies to `ceiling(x) + 2` while the latter does not. Both should be treated the same (and both should consider what the simplification is intended to convey: the exact value or the calculated value, for if `x` is a small Float then the result matters):

    >>> ceiling(S(2) + 1e-16)
    2
    >>> ceiling(x + y).subs({x:1e-6,y:2})
    3

The user must understand why this is correct -- the argument in the first case truncated to 2, and in the second the evaluation machinery handled the calculation so it was symbolically correct.
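Both behaviors quoted above can be checked directly; the second line below reproduces the truncation example from the post:

```python
from sympy import ceiling, Symbol, S

x = Symbol('x')

# an exact integer term is pulled out of ceiling
print(ceiling(x + 2))         # ceiling(x) + 2

# 2 + 1e-16 rounds to the Float 2.0 before ceiling ever sees it
print(ceiling(S(2) + 1e-16))  # 2
```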

I think there is a case, however, to keep `rational` reserved for things that mathematically behave as rationals. Floats do not, e.g.

    >>> a, b = Float(.1, 1), Float(.3, 1)
    >>> s = a + b
    >>> s - a - b
    -0.004
    >>> s - s
    0

So there might be a better case made for modifying routines to handle Float (as I did in https://github.com/sympy/sympy/pull/25864) instead of limiting `is_rational` to the form of the number and invalidating other common mathematical assumptions about the behavior of rationals.

/c