I noticed this:
Sympy 0.6.7:
>>> 1.0 * Symbol('x')
x
Sympy 0.7.0:
>>> 1.0 * Symbol('x')
1.0*x
although (0.7.0 again):
>>> 1 == Float(1.0)
True
>>> 1 * Symbol('x')
x
I assume that's intended; is it obvious why?
Thanks a lot,
Matthew
I bisected this to this commit:
commit b361ecdaa156e4531d36f853535c23c516d281b2
Author: Chris Smith <smi...@gmail.com>
Date: Sun May 1 18:56:37 2011 +0545
don't special-case 1.0 in flatten
Real(1.0) was being changed to S.One by flatten. This was removed
since no other Real was being treated that way, e.g. -1.0 is retained.
A couple of tests were changed.
Aaron Meurer
> --
> You received this message because you are subscribed to the Google Groups "sympy" group.
> To post to this group, send email to sy...@googlegroups.com.
> To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/sympy?hl=en.
>
>
On Sat, Jul 23, 2011 at 11:26 PM, Aaron Meurer <asme...@gmail.com> wrote:
> Hi.
>
> I bisected this to this commit:
>
> commit b361ecdaa156e4531d36f853535c23c516d281b2
> Author: Chris Smith <smi...@gmail.com>
> Date: Sun May 1 18:56:37 2011 +0545
>
> don't special-case 1.0 in flatten
>
> Real(1.0) was being changed to S.One by flatten. This was removed
> since no other Real was being treated that way, e.g. -1.0 is retained.
>
> A couple of tests were changed.
I'm just asking - but is that change correct?
The reason to special case it would be that:
sympy.Float(1) == sympy.numbers.One()
and, for me, it's difficult to see the utility of leaving 1.0 in there.
Cheers,
Matthew
Also, as the commit message notes, there is the following
inconsistency in 0.6.7:
In [1]: -1.0*x
Out[1]: -1.0⋅x
In [2]: 1.0*x
Out[2]: x
Which is no longer there in 0.7.0+:
In [1]: -1.0*x
Out[1]: -1.0⋅x
In [2]: 1.0*x
Out[2]: 1.0⋅x
Aaron Meurer
On Sun, Jul 24, 2011 at 12:23 AM, Aaron Meurer <asme...@gmail.com> wrote:
> Note that Float(1) is not the same as Float(1.0). Fredrik or someone
> else would have to explain the details, but I think the reasoning
> behind Float(int) => Integer is something related to precision.
Right, sorry, I should have added that:
sympy.Float(1.0) == sympy.numbers.One()
> Also, as the commit message notes, there is the following
> inconsistency in 0.6.7:
>
> In [1]: -1.0*x
> Out[1]: -1.0⋅x
>
> In [2]: 1.0*x
> Out[2]: x
To me, that inconsistency is a benefit. Is there some disbenefit?
I'm asking honestly. For me it is just a question of reduced
readability in doctests and examples.
Cheers,
Matthew
==, yes, but is, no.
In [2]: Float(1.0) is S.One
Out[2]: False
In [3]: Float(1.0) == S.One
Out[3]: True
== works because of numeric type coercion in the comparison. You also get, for example:
In [4]: 0.5 == Rational(1, 2)
Out[4]: True
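The same cross-type equality holds in plain Python, independent of SymPy; a minimal stdlib illustration of value equality versus identity:

```python
from fractions import Fraction

# Python's == compares numeric *values* across types, so a float and
# an exact rational denoting the same value compare equal.
half = Fraction(1, 2)
x = 0.5
print(x == half)   # True
print(1 == 1.0)    # True

# Identity is a different question: they remain distinct objects/types.
print(x is half)   # False
```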
>
>> Also, as the commit message notes, there is the following
>> inconsistency in 0.6.7:
>>
>> In [1]: -1.0*x
>> Out[1]: -1.0⋅x
>>
>> In [2]: 1.0*x
>> Out[2]: x
>
> To me, that inconsistency is a benefit. Is there some disbenefit?
> I'm asking honestly. For me it is just a question of reduced
> readability in doctests and examples.
>
> Cheers,
>
> Matthew
>
Yes, I think there is a disbenefit, because you lose the precision
information in 1.0 when you convert it to S.One.
This will happen whenever you use Floats. They are assumed to be
close to (up to their precision), but not necessarily equal to, the
numbers they represent. So with the default precision of 15 digits or
so, 1.0 is really 1 ± 1e-15. If you really want exact numbers
(i.e., rationals), you should use them. Otherwise, SymPy assumes that
Floats are not exact, and treats them as such.
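Plain Python floats show the same "close, not exact" behavior; a stdlib-only sketch of the general point (SymPy's Float carries its own precision, which this deliberately does not model):

```python
import math

# Binary floats can't represent most decimal fractions exactly, so
# arithmetic accumulates tiny errors near machine epsilon.
total = 0.1 + 0.2
print(total == 0.3)              # False
print(math.isclose(total, 0.3))  # True: equal up to precision
print(abs(total - 0.3) < 1e-15)  # True: the error is tiny but nonzero
```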
Others, if any of this is not true, please correct me.
By the way, if you want to convert from floats to rationals, you can
use nsimplify:
In [14]: nsimplify(1.0*x, rational=True)
Out[14]: x
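As an aside, the underlying float-to-rational idea can be seen with the stdlib `fractions` module; `nsimplify(..., rational=True)` is the SymPy-level tool, this is just a sketch of the conversion itself:

```python
from fractions import Fraction

# Fraction(float) recovers the float's exact binary value.
print(Fraction(1.0))                         # 1
print(Fraction(0.5))                         # 1/2

# 0.1 is not exactly 1/10 in binary, so the exact conversion is ugly;
# limit_denominator finds the nearest "nice" rational instead.
print(Fraction(0.1) == Fraction(1, 10))      # False
print(Fraction(0.1).limit_denominator(100))  # 1/10
```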
Aaron Meurer
On Sun, Jul 24, 2011 at 12:36 AM, Aaron Meurer <asme...@gmail.com> wrote:
> On Sat, Jul 23, 2011 at 5:28 PM, Matthew Brett <matthe...@gmail.com> wrote:
>> Hi,
>>
>> On Sun, Jul 24, 2011 at 12:23 AM, Aaron Meurer <asme...@gmail.com> wrote:
>>> Note that Float(1) is not the same as Float(1.0). Fredrik or someone
>>> else would have to explain the details, but I think the reasoning
>>> behind Float(int) => Integer is something related to precision.
>>
>> Right, sorry, I should have added that:
>>
>> sympy.Float(1.0) == sympy.numbers.One()
>
> ==, yes, but is, no.
Surely == is the relevant operator?
I believe that integers up to 2^53 are exactly representable in
64-bit doubles:
http://stackoverflow.com/questions/440204/does-floor-return-something-thats-exactly-representable
so 1.0 will always be exactly representable as a float. Indeed, it
seems to me confusing to imply that I don't have exactly 1.0 by
retaining it.
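That exact-representability claim is easy to check in plain Python (a 64-bit double has a 53-bit significand):

```python
# Every integer up to 2**53 is exactly representable in a double,
# and 1.0 is exactly 1.
n = 2**53
print(float(n) == n)             # True: 2**53 round-trips exactly
print(float(n + 1) == float(n))  # True: n + 1 rounds back down to n
print(float(1) == 1.0)           # True: small integers are always exact
```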
> Others, if any of this is not true, please correct me.
>
> By the way, if you want to convert from floats to rationals, you can
> use nsimplify:
>
> In [14]: nsimplify(1.0*x, rational=True)
> Out[14]: x
Right - but it seems ugly and unfortunate to add that to the doctests
and examples, especially where we have matrices and have to iterate
over all the values looking for these guys.
Well - sorry - I'll consider my peanut thrown and missed :)
Cheers,
Matthew
Well, maybe others could chip in here.
Aaron Meurer
While I am at it, surely this is surprising?
In [44]: simplify(x * 1.0)
Out[44]: 1.0*x
while the additive identity doesn't get the same treatment:
In [42]: x + 0.0
Out[42]: x
What do Mathematica etc do in this case?
See you,
Matthew
Aaron Meurer
> 1.0*x;
1.0 x
> x + 0.0;
x
Aaron Meurer
On Sun, Jul 24, 2011 at 2:47 AM, Simon <simon...@gmail.com> wrote:
> Matthew, I agree that the x + 0.0 case should be treated the same as 1.0 *
> x.
> I think that the Reals/floats should never automatically (outside of
> possible internal routines) be converted to exact numbers.
> What happens if you're doing a machine-precision calculation that
> occasionally lands on exactly 1.0 and then gets converted to slower
> exact arithmetic?
Point taken.
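Simon's point can be made concrete: a pure float computation can legitimately land on exactly 1.0, and silently promoting that one value to an exact type would change the type (and speed) of everything downstream. A stdlib sketch of the scenario, not SymPy internals:

```python
import math

# A plain float computation that lands on exactly 1.0:
val = math.cos(0.0)
print(val == 1.0)              # True
print(isinstance(val, float))  # True: it is still a float

# If exact-looking floats were silently promoted to rationals, one
# element of an otherwise float-only pipeline would switch to slower
# exact arithmetic while its neighbours stayed machine floats.
vals = [0.5 + 0.5, 0.3 + 0.3, math.cos(0.0)]
print(all(isinstance(v, float) for v in vals))  # True: types stay uniform
```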
> I think that the mathematica convention (see below) is the correct one.
>
> IPython console for SymPy 0.7.0-git (Python 2.7.1-64-bit) (ground types:
> python)
> In [1]: 1.0 * x
> Out[1]: 1.0⋅x
> In [2]: x + 0.0
> Out[2]: x
> Mathematica 8.0 for Linux x86 (64-bit)
> In[1]:= 1.0 * x
> Out[1]= 1. x
>
> In[2]:= x + 0.0
> Out[2]= 0. + x
OK - sounds reasonable...
Best,
Matthew
I hesitate to belabor a point that seems agreed, but this is the
doctest that caused me to notice:
https://github.com/statsmodels/formula/blob/master/formula/random_effects.py
It comes up because we have array entries that can be floats or
symbols, and it's convenient to process the array using numpy.dot
with an array of floats.
But, not to worry, now I understand the issue, I can see it's better that way.
Best,
Matthew