This leads to problems if used as a basis for decisions.
To view this discussion on the web visit https://groups.google.com/d/msgid/sympy/CAKgW%3D6%2BdL44_G-05njyg1P2HnY%2Bq1Yg95p2xwxCH6jfKh9%2BA4A%40mail.gmail.com.
On Thu, Mar 20, 2014 at 04:35:43PM -0500, Aaron Meurer wrote:
> Can you think of a fact in the assumptions system (implemented or not)
> that would break if floats are rational?
Any fact that uses algebraic properties of the field of rational numbers.
Well, that particular example falls out of the scope of the assumption
system (it happens directly in the core).
I was looking for something more like sin(non-zero rational) ==
irrational, but sin(float) evaluates to float.
--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sympy+un...@googlegroups.com.
To post to this group, send email to sy...@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
To view this discussion on the web visit https://groups.google.com/d/msgid/sympy/20140322100746.GA21938%40darkstar.order.hcn-strela.ru.
Actually, that's a good point. Try removing it and see what tests fail.
Aaron Meurer
On Fri, Mar 21, 2014 at 2:55 PM, Kalevi Suominen <jks...@gmail.com> wrote:
> It looks like nobody really planned the semantics of Float.is_rational. I
> believe it does not make sense
> and should be removed from Float's dictionary. If this breaks something it
> should be fixed anyway.
Running the tests with is_rational removed gave the following results.
Oh, that's good. This was setting is_rational and is_irrational to
None? I think that's what they should be.
On Sun, Mar 23, 2014 at 5:23 PM, Joachim Durchholz <j...@durchholz.org> wrote:
> On 23.03.2014 22:12, Christophe Bal wrote:
>
>>>> You can prove that any valid IEEE float is a rational
>>
>>
>> No! Why? Because of the arithmetic rules. With floats you may have to
>> approximate; with decimals, you do exact calculations.
>
>
> You two are both right; you just mean different things when you say "Float"
> and "Rational".
>
> Richard refers to the domain, i.e. the list of valid values.
> You refer to the abstract data structure, i.e. the domain plus operations.
And when SymPy says "rational" it means the field. The assumptions
system wouldn't be very useful if "rational" only meant integer/integer.
We want to be able to say things like rational + rational = rational and
rational1*rational2 = rational2*rational1, and so on.
So every double-precision IEEE float (except inf and NaN) is convertible to the
exact rational number it represents, in the form integer/integer.
2. We can deduce from the fact that a and b are rational numbers that a+b is a rational number.
On 24.03.2014 01:58, Richard Fateman wrote:
>
> Now we must address what is meant by integer. In Common Lisp, integer
> means arbitrary-precision integer.
> Consequently, every rational number IS an integer/integer.
> [...]
> It seems to me the meanings of words in sympy should correspond to
> their meanings in mathematics not some hack mockery of the word that
> appears in one or even several programming languages.
Your position seems inconsistent - you derive your idea about the nature
of rational numbers from Common Lisp, yet you denounce exactly that kind
of reasoning as "some hack mockery of the word that appears in [...]
programming languages".
> It seems to me the meanings of words in sympy should correspond to their
> meanings in mathematics not some hack mockery of the word that appears in
> one or even several programming languages. The data structure for IEEE
> double-float is used to represent a subset of the rational numbers.
To paraphrase, IEEE numbers are some hack mockery of the word that
appears in hardware implementations.
> You could also have a data structure for "rational as the ratio of two
> arbitrary precision integers."
> That's been found fairly useful for symbolic programming.
Isn't that what SymPy does?
----
Just to demonstrate how far away "meaning in mathematics" is from any
computerized representation, including that of Common Lisp, here's a
short (and probably slightly wrong) description of what rational numbers
"are":
Going down to the very foundations, rational numbers "are" the minimum
model that satisfies the field axioms (see model theory for the
definition of "minimum" and why "the" minimum exists for these axioms).
Most importantly, rationals "are" not a pair of integers, because for
integer pairs, (1,2) != (2,4), but for rationals, 1/2 = 2/4.
Integer pairs are merely a possible *representation* of rationals.
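The representation/value distinction is visible in any language with a rational type; a quick plain-Python sketch using the stdlib fractions module:

```python
from fractions import Fraction

# As bare integer pairs, (1,2) and (2,4) are distinct objects:
assert (1, 2) != (2, 4)

# As rationals, they denote the same value; the type normalizes the pair:
assert Fraction(1, 2) == Fraction(2, 4)
assert (Fraction(2, 4).numerator, Fraction(2, 4).denominator) == (1, 2)
```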
And now, for fun, the real numbers:
These "are" the minimum model that satisfies the total and the dense
ordering axioms.
No mention of field properties. The real numbers just happen to have the
rational numbers embedded (where "embedded" is a term with a strict
definition in model theory).
Your problem is that the notions of + and - in your computer programming language
are apparently inadequate.
It is certainly possible to do this correctly by converting a,b,c
into ratios of integers
The fact that + gets different answers for adding numbers a,b and for adding a',b' where
a=a' and b=b' can't be a good thing in a system that is supposed to do mathematics.
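Fateman's complaint can be reproduced in a couple of lines; with doubles the grouping of + changes the answer, while exact rationals agree (a plain-Python illustration, not SymPy-specific):

```python
from fractions import Fraction

# IEEE addition is not associative: the same three numbers, grouped
# differently, give different machine results.
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)

# Exact rational addition is associative, as arithmetic requires:
a, b, c = Fraction(1, 10), Fraction(1, 5), Fraction(3, 10)
assert (a + b) + c == a + (b + c) == Fraction(3, 5)
```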
>>> [...] what you could do is take any "float" and convert it
>>> to an exactly equal numeric quantity that is a sympy rational.
>>> And you could take that number and convert it to a float
>>> without loss.

There is a little problem with this approach: the float algorithms are generally more efficient than the exact rational ones.
On Saturday, March 29, 2014 8:34:36 AM UTC+4, Richard Fateman wrote:
> Your problem is that the notions of + and - in your computer programming language
> are apparently inadequate.
In which one? Python, CLisp?
> It is certainly possible to do this correctly by converting a,b,c
> into ratios of integers
Sure. It's possible to convert a,b,c to rationals, but then it will be rational
arithmetic, not arithmetic for floats (IEEE 754).
> The fact that + gets different answers for adding numbers a,b and for adding a',b' where
> a=a' and b=b' can't be a good thing in a system that is supposed to do mathematics.

There are still reasons for floats (Christophe pointed you to a major one).
But I admit, we shouldn't allow
using Floats (or the builtin float type) in symbolic mathematics.
An example: https://github.com/sympy/sympy/pull/2801
On 29.03.2014 21:28, Richard Fateman wrote:
>
>
> On Saturday, March 29, 2014 5:24:13 AM UTC-7, Sergey Kirpichev wrote:
>>
>> On Saturday, March 29, 2014 8:34:36 AM UTC+4, Richard Fateman wrote:
>>>
>>> Your problem is that the notions of + and - in your computer programming
>>> language
>>> are apparently inadequate.
>>>
>>
>> In which one? Python, CLisp?
>>
>
> Common Lisp uses floating-point arithmetic for floats, exact arithmetic for
> the data type used for
> rationals. If you stick with integers and ratios of integers, Common Lisp
> does mathematically
> valid arithmetic.
There is nothing wrong with these statements.
I take issue with this:
> So is every double-precision IEEE float (except inf and NaN)
> convertible to the exact rational number which it represents in the
> form integer/integer.
>
> It seems to me the meanings of words in sympy should correspond to
> their meanings in mathematics not some hack mockery of the word that
> appears in one or even several programming languages.
This sounds as if IEEE numbers were somehow a mathematically valid
implementation of arithmetic, which they simply aren't.
> There is a way of being considerably faster than ratio
> of integers, while
> correctly representing WITHOUT ERROR OR ROUNDOFF all results from addition
> and multiplication of
> numbers that originate as floats. This is "binary rational" and was used
> so far as I know in only
> one system designed by George E. Collins. It works like this:
> a binary rational is an arbitrary-precision integer × 2^(power). Power is
> an integer +-, and one could
> also make it an arbitrary precision number, though 32 or even 16 bits is
> probably enough for most
> practical and impractical purposes.
I do not understand - if you use 32 or 16 bits, you're back at the data
model of IEEE arithmetic.
> Advantage 1: No need for greatest common divisor calculations. If the
> numerator in binary has k trailing zeros, strike them off and add k to the
> power.
> Advantage 2: No overflow or underflow or rounding from add and multiply.
> Exact results.
> Advantage 3: strictly a superset of floats.
>
> Disadvantage 1. Division is not exact.
> Disadvantage 2: Strictly a subset of "ratio of integers".
They still come with the same problems as IEEE arithmetic, most
importantly precision loss, manifested e.g. in the inability to exactly
represent 1/3.
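For concreteness, here is a minimal sketch of the Collins-style binary rational Fateman describes; the class name and API are invented for illustration, not taken from any existing system:

```python
from fractions import Fraction

class BinRat:
    """Exact number of the form m * 2**e (a 'binary rational')."""
    def __init__(self, m, e=0):
        # Advantage 1: normalization needs no gcd, only stripping
        # trailing zero bits from the mantissa.
        while m != 0 and m % 2 == 0:
            m //= 2
            e += 1
        self.m, self.e = m, e

    def __add__(self, other):
        # Align to the smaller exponent; the sum is exact (no rounding).
        e = min(self.e, other.e)
        return BinRat((self.m << (self.e - e)) + (other.m << (other.e - e)), e)

    def __mul__(self, other):
        # Multiplication is exact as well: multiply mantissas, add exponents.
        return BinRat(self.m * other.m, self.e + other.e)

    def to_fraction(self):
        return Fraction(self.m) * Fraction(2) ** self.e

# 0.5 + 0.25 is represented and added without any rounding:
s = BinRat(1, -1) + BinRat(1, -2)
assert s.to_fraction() == Fraction(3, 4)
# But, per Disadvantage 1, 1/3 has no finite representation here.
```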
>> But I admit, we shouldn't allow
>> using Floats (or the builtin float type) in symbolic mathematics.
>
> I disagree, from experience. People want to introduce floats. For
> example, some people
> write squareroot(x) as x^0.5
> Saves typing, and they believe it is the same thing.
Actually in this case it is, since 0.5 can be represented exactly.
What's hard is to make SymPy detect when the float is an exact
representation of what the user wanted (then it could silently convert to
rational) and when it isn't (then it should warn the user).
What can be made to work is x^"0.5" - SymPy gets to see the
representation and can convert to rational directly.
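Indeed, 0.5 is a power of two, so the double is exactly the rational 1/2; most other decimals are not so lucky. A quick check with Python's stdlib fractions module:

```python
from fractions import Fraction

# 0.5 = 2**-1 is exactly representable, so the float *is* the rational 1/2:
assert Fraction(0.5) == Fraction(1, 2)

# 0.1 is not: the nearest double differs from the decimal 1/10:
assert Fraction(0.1) != Fraction(1, 10)
```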
Does Python allow you to just write very long integers, e.g. x = 213456789123456789123456789?
On Sat, Mar 29, 2014 at 01:28:54PM -0700, Richard Fateman wrote:
> Common Lisp uses floating-point arithmetic for floats, exact arithmetic
> for the data type used for rationals.
Good news. Python does the same.
> If you stick with integers and ratios of integers, Common
> Lisp does mathematically valid arithmetic.
Excellent! Python too.
The problem is: float arithmetic is not "valid arithmetic" by
design. And the question is: should we use float "numbers" in SymPy
at all?
> But I admit, we shouldn't allow
> using Floats (or the builtin float type) in symbolic mathematics.
>
> I disagree, from experience. People want to introduce floats. For
> example, some people
> write squareroot(x) as x^0.5
> Saves typing, and they believe it is the same thing.
But those are not necessarily floats! It's just a different
notation for the rational 1/2 (as "they believe").
If this is the only reason why we keep the Float
object, we should drop Floats and fix sympify to map Python's floats
to Rationals instead (for example: sympify(0.2) -> Rational(str(0.2))).
Is there a real need for floats in a symbolic mathematics package?
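The proposed string route can be mimicked with the stdlib fractions module: parsing the decimal string recovers the rational the user meant, while converting the float gives the exact value of the nearest double:

```python
from fractions import Fraction

# From the string, we get the rational the user meant:
assert Fraction("0.2") == Fraction(1, 5)

# From the float, we get the exact binary value instead, which differs:
assert Fraction(0.2) != Fraction(1, 5)
print(Fraction(0.2))  # 3602879701896397/18014398509481984
```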
SymPy floats are not machine floats, but arbitrary precision floats. These are very useful, and (at least by my understanding) far more efficient to work with than rationals if that's what you want.
Arbitrary here means arbitrary but fixed. Rationals are arbitrary but unbounded. That makes a large difference. If you care about 100 digits, but only 100 digits, then you really do want to throw away anything beyond that. If you used rationals,
you would have to round them down constantly (which, btw, is what causes these "arithmetic issues"), or else use 10 times as much memory and CPU time as you wanted.
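The growth is easy to provoke; as a sketch (plain Python, not SymPy), a few Newton steps for sqrt(2) with exact rationals roughly double the denominator's digit count at every iteration:

```python
from fractions import Fraction

# Newton's method x -> (x + 2/x) / 2, converging to sqrt(2).
x = Fraction(3, 2)
digit_counts = []
for _ in range(6):
    x = (x + 2 / x) / 2
    digit_counts.append(len(str(x.denominator)))

# Exact arithmetic keeps every digit, so the sizes keep climbing:
print(digit_counts)
assert all(later > earlier
           for earlier, later in zip(digit_counts, digit_counts[1:]))
```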
On Monday, March 31, 2014 3:35:23 PM UTC-7, Aaron Meurer wrote:
> SymPy floats are not machine floats, but arbitrary precision floats. These are very useful, and (at least by my understanding) far more efficient to work with than rationals if that's what you want.
bigfloats can be expensive too.
> Arbitrary here means arbitrary but fixed. Rationals are arbitrary but unbounded. That makes a large difference. If you care about 100 digits, but only 100 digits, then you really do want to throw away anything beyond that. If you used rationals,

Typically people do rational arithmetic exactly, not periodically truncating the results. If it were appropriate
to truncate the results, they would be using bigfloats.
Running some computations to completion with exact rationals means that the number of digits in the numerator and denominator tend to grow rapidly. Sometimes, however, you want to do this because the rational answer comes out right (for example, exactly zero) and the floating point version is non-zero and hence, relatively speaking, infinitely wrong.
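A tiny illustration of such a false nonzero, in plain Python:

```python
from fractions import Fraction

# Mathematically 1.1 + 2.2 - 3.3 is zero, but the doubles disagree:
residual = 1.1 + 2.2 - 3.3
assert residual != 0.0

# The exact rational computation returns zero, as it should:
assert Fraction("1.1") + Fraction("2.2") - Fraction("3.3") == 0
```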
> you would have to round them down constantly (which, btw, is what causes these "arithmetic issues"), or else use 10 times as much memory and CPU time as you wanted.
The difference in costs and memory between floats and bigfloats tends to be enormous. There are also libraries for double-double or "quad" precision floats, interval arithmetic, and other items of interest. See what MPFR provides as an example. But if SymPy doesn't do regular double-float arithmetic, someone may want it later.
RJF
On Mon, Mar 31, 2014 at 7:37 PM, Richard Fateman <fat...@gmail.com> wrote:
> On Monday, March 31, 2014 3:35:23 PM UTC-7, Aaron Meurer wrote:
>> SymPy floats are not machine floats, but arbitrary precision floats. These are very useful, and (at least by my understanding) far more efficient to work with than rationals if that's what you want.
> bigfloats can be expensive too.

Yes, but people rarely go beyond the default precision (15 digits, which is roughly machine precision anyway). And anyway, bigints are also expensive...
>> Arbitrary here means arbitrary but fixed. Rationals are arbitrary but unbounded. That makes a large difference. If you care about 100 digits, but only 100 digits, then you really do want to throw away anything beyond that. If you used rationals,
> Typically people do rational arithmetic exactly, not periodically truncating the results. If it were appropriate
> to truncate the results, they would be using bigfloats.

That's exactly what I'm saying. If your numbers represent something physically meaningful, then the smaller digits do not matter.
There's no point keeping a billion-digit numerator and denominator just because you wanted to have 1.0000000...01 instead of 1.0.
> Running some computations to completion with exact rationals means that the number of digits in the numerator and denominator tend to grow rapidly. Sometimes, however, you want to do this because the rational answer comes out right (for example, exactly zero) and the floating point version is non-zero and hence, relatively speaking, infinitely wrong.
Yes, these false nonzeros are a pain. And they lead to wrong results if you don't compute with them carefully (e.g., https://github.com/sympy/sympy/issues/2949#issuecomment-38336823). But I don't think it's intractable. You have to keep track of precision carefully.
>> you would have to round them down constantly (which, btw, is what causes these "arithmetic issues"), or else use 10 times as much memory and CPU time as you wanted.
> The difference in costs and memory between floats and bigfloats tends to be enormous. There are also libraries for double-double or "quad" precision floats, interval arithmetic, and other items of interest. See what MPFR provides as an example. But if SymPy doesn't do regular double-float arithmetic, someone may want it later.
> RJF

We probably should have a MachineFloat class that just wraps Python's float. It might speed things up.
In [13]: f2=Add(x1, Add(x2, -x3, evaluate=False), evaluate=False)
In [14]: f1
Out[14]: -1.0*x + 0.001*x + 1.1*x
On Mon, Mar 31, 2014 at 02:17:56PM -0700, Richard Fateman wrote:
> I think the answer from a practical perspective is that you can't exclude
> floats.
But why do we need them as a class derived from Number and Basic?
On Fri, Apr 04, 2014 at 08:33:59AM -0700, Richard Fateman wrote:
> I have no idea what evaluate=False means. I am guessing what Add( )
> means.
This example was actually for other sympy developers. Think about
Add(2, 2, evaluate=False) as a poor-man's analog of the Mathematica Hold[2+2].
> But in any case leaving
> out floats seems particularly hazardous because: Floats are already
> around.
You totally miss the point. I'm against using these "inexact" numbers
in a symbolic context, especially if we assume that some algebraic
properties are valid for any expression (associativity, in the example
provided above).