1.0 * Symbol('x') in 0.7.0


Matthew Brett

Jul 23, 2011, 1:13:34 PM7/23/11
to sympy
Hi,

I noticed this:

Sympy 0.6.7:
>>> 1.0 * Symbol('x')
x

Sympy 0.7.0
>>> 1.0 * Symbol('x')
1.0*x

although (0.7.0 again):

>>> 1 == Float(1.0)
True
>>> 1 * Symbol('x')
x

I assume that's intended; is it obvious why?

Thanks a lot,

Matthew

Aaron Meurer

Jul 23, 2011, 6:26:18 PM7/23/11
to sy...@googlegroups.com
Hi.

I bisected this to this commit:

commit b361ecdaa156e4531d36f853535c23c516d281b2
Author: Chris Smith <smi...@gmail.com>
Date: Sun May 1 18:56:37 2011 +0545

don't special-case 1.0 in flatten

Real(1.0) was being changed to S.One by flatten. This was removed
since no other Real was being treated that way, e.g. -1.0 is retained.

A couple of test were changed.


Aaron Meurer

> --
> You received this message because you are subscribed to the Google Groups "sympy" group.
> To post to this group, send email to sy...@googlegroups.com.
> To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/sympy?hl=en.
>
>

Matthew Brett

Jul 23, 2011, 7:19:58 PM7/23/11
to sy...@googlegroups.com
Hi,

On Sat, Jul 23, 2011 at 11:26 PM, Aaron Meurer <asme...@gmail.com> wrote:
> Hi.
>
> I bisected this to this commit:
>
> commit b361ecdaa156e4531d36f853535c23c516d281b2
> Author: Chris Smith <smi...@gmail.com>
> Date:   Sun May 1 18:56:37 2011 +0545
>
>    don't special-case 1.0 in flatten
>
>        Real(1.0) was being changed to S.One by flatten. This was removed
>        since no other Real was being treated that way, e.g. -1.0 is retained.
>
>        A couple of test were changed.

I'm just asking - but is that change correct?

The reason to special case it would be that:

sympy.Float(1) == sympy.numbers.One()

and, for me, it's difficult to see the utility of leaving 1.0 in there.

Cheers,

Matthew

Aaron Meurer

Jul 23, 2011, 7:23:14 PM7/23/11
to sy...@googlegroups.com
Note that Float(1) is not the same as Float(1.0). Fredrik or someone
else would have to explain the details, but I think the reasoning
behind Float(int) => Integer is something related to precision.

Also, as the commit message notes, there is the following
inconsistency in 0.6.7:

In [1]: -1.0*x
Out[1]: -1.0⋅x

In [2]: 1.0*x
Out[2]: x

Which is no longer there in 0.7.0+:

In [1]: -1.0*x
Out[1]: -1.0⋅x

In [2]: 1.0*x
Out[2]: 1.0⋅x

Aaron Meurer

Matthew Brett

Jul 23, 2011, 7:28:18 PM7/23/11
to sy...@googlegroups.com
Hi,

On Sun, Jul 24, 2011 at 12:23 AM, Aaron Meurer <asme...@gmail.com> wrote:
> Note that Float(1) is not the same as Float(1.0). Fredrik or someone
> else would have to explain the details, but I think the reasoning
> behind Float(int) => Integer is something related to precision.

Right, sorry, I should have added that:

sympy.Float(1.0) == sympy.numbers.One()

> Also, as the commit message notes, there is the following
> inconsistency in 0.6.7:
>
> In [1]: -1.0*x
> Out[1]: -1.0⋅x
>
> In [2]: 1.0*x
> Out[2]: x

To me, that inconsistency is a benefit. Is there some disbenefit?
I'm asking honestly. For me it is just a question of reduced
readability in doctests and examples.

Cheers,

Matthew

Aaron Meurer

Jul 23, 2011, 7:36:00 PM7/23/11
to sy...@googlegroups.com
On Sat, Jul 23, 2011 at 5:28 PM, Matthew Brett <matthe...@gmail.com> wrote:
> Hi,
>
> On Sun, Jul 24, 2011 at 12:23 AM, Aaron Meurer <asme...@gmail.com> wrote:
>> Note that Float(1) is not the same as Float(1.0). Fredrik or someone
>> else would have to explain the details, but I think the reasoning
>> behind Float(int) => Integer is something related to precision.
>
> Right, sorry, I should have added that:
>
> sympy.Float(1.0) == sympy.numbers.One()

==, yes, but is, no.

In [2]: Float(1.0) is S.One
Out[2]: False

In [3]: Float(1.0) == S.One
Out[3]: True

== works because of some type casting. You also get, for example:

In [4]: 0.5 == Rational(1, 2)
Out[4]: True

>
>> Also, as the commit message notes, there is the following
>> inconsistency in 0.6.7:
>>
>> In [1]: -1.0*x
>> Out[1]: -1.0⋅x
>>
>> In [2]: 1.0*x
>> Out[2]: x
>
> To me, that inconsistency is a benefit.  Is there some disbenefit?
> I'm asking honestly.  For me it is just a question of reduced
> readability in doctests and examples.
>
> Cheers,
>
> Matthew
>

Yes, I think there is a disbenefit, because you lose the precision
information in 1.0 when you convert it to S.One.

This will happen whenever you use Floats. They are assumed to be close
to (up to their precision), but not necessarily equal to, the numbers
they represent. So with the default precision of 15 digits or so, 1.0
is really 1 +- 1e-15. If you really want exact numbers (i.e.,
rationals), you should use them. Otherwise, SymPy assumes that Floats
are not exact, and treats them as such.
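The point that binary floats only approximate most decimal values can be seen in plain Python, without SymPy at all; `fractions.Fraction` exposes the exact value a double actually stores:

```python
from fractions import Fraction

# 0.5 is a power of two, so the double stores it exactly
print(Fraction(0.5))   # 1/2

# 0.1 is not representable in binary, so the double stores a nearby value
print(Fraction(0.1))   # 3602879701896397/36028797018963968

# the classic consequence of that rounding
print(0.1 + 0.2 == 0.3)  # False
```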

Others, if any of this is not true, please correct me.

By the way, if you want to convert from floats to rationals, you can
use nsimplify:

In [14]: nsimplify(1.0*x, rational=True)
Out[14]: x

Aaron Meurer

Matthew Brett

Jul 23, 2011, 7:59:27 PM7/23/11
to sy...@googlegroups.com
Hi,

On Sun, Jul 24, 2011 at 12:36 AM, Aaron Meurer <asme...@gmail.com> wrote:
> On Sat, Jul 23, 2011 at 5:28 PM, Matthew Brett <matthe...@gmail.com> wrote:
>> Hi,
>>
>> On Sun, Jul 24, 2011 at 12:23 AM, Aaron Meurer <asme...@gmail.com> wrote:
>>> Note that Float(1) is not the same as Float(1.0). Fredrik or someone
>>> else would have to explain the details, but I think the reasoning
>>> behind Float(int) => Integer is something related to precision.
>>
>> Right, sorry, I should have added that:
>>
>> sympy.Float(1.0) == sympy.numbers.One()
>
> ==, yes, but is, no.

Surely == is the relevant operator?

I believe that the integers up to 2^53 are exactly representable in
64-bit doubles:

http://stackoverflow.com/questions/440204/does-floor-return-something-thats-exactly-representable

so 1.0 will always be exactly representable as a float. Indeed, it
seems to me confusing to imply that I don't have exactly 1.0 by
retaining it.
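For what it's worth, that cutoff can be demonstrated directly in plain Python (nothing SymPy-specific here):

```python
# every integer with magnitude up to 2**53 is exactly representable in a
# 64-bit double; above that, the gap between consecutive doubles exceeds 1
print(2.0**53 == 2.0**53 + 1)  # True: 2**53 + 1 rounds back down to 2**53
print(2.0**52 == 2.0**52 + 1)  # False: 2**52 + 1 is still exactly representable
```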

> Others, if any of this is not true, please correct me.
>
> By the way, if you want to convert from floats to rationals, you can
> use nsimplify:
>
> In [14]: nsimplify(1.0*x, rational=True)
> Out[14]: x

Right - but it seems ugly and unfortunate to add that to the doctests
and examples, especially where we have matrices and have to iterate
over all the values looking for these guys.
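For the matrix case, the iteration doesn't have to be written by hand; `Matrix.applyfunc` can apply `nsimplify` to every entry in one pass (a sketch; the matrix here is made up to stand in for the result of a numpy.dot over mixed float/symbol entries):

```python
from sympy import Matrix, Symbol, nsimplify

x, y = Symbol('x'), Symbol('y')

# hypothetical mixed float/symbol array, as from a numpy.dot
M = Matrix([[1.0*x, 0.0], [2.0, 1.0*y]])

# rationalize every entry at once
M_exact = M.applyfunc(lambda e: nsimplify(e, rational=True))
print(M_exact)  # Matrix([[x, 0], [2, y]])
```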

Well - sorry - I'll consider my peanut thrown and missed :)

Cheers,

Matthew

Aaron Meurer

Jul 23, 2011, 8:06:15 PM7/23/11
to sy...@googlegroups.com

Well, maybe others could chip in here.

Aaron Meurer

Matthew Brett

Jul 23, 2011, 8:23:54 PM7/23/11
to sy...@googlegroups.com
Hi,

While I am at it, surely this is surprising:

In [44]: simplify(x * 1.0)
Out[44]: 1.0*x

and identity under addition doesn't have the same feature:

In [42]: x + 0.0
Out[42]: x

What do Mathematica etc. do in this case?

See you,

Matthew

Aaron Meurer

Jul 23, 2011, 8:48:02 PM7/23/11
to sy...@googlegroups.com
By the way, if you only care about printing, I think it should be
possible to configure the printer to do what you want. Then you just
have to add one line that configures the printer to the top of your
doctests (if it's a Sphinx file, you just need one line at the top of
the file).
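I'm not certain which printer hook is meant here, but one way to get the old look purely at display time is to rationalize inside a custom printer (a sketch with a made-up class name; the underlying expression keeps its Float):

```python
from sympy import Symbol, nsimplify
from sympy.printing.str import StrPrinter

class RationalizingPrinter(StrPrinter):
    """Print expressions as if float coefficients were exact (display only)."""
    def doprint(self, expr):
        return super().doprint(nsimplify(expr, rational=True))

x = Symbol('x')
expr = 1.0 * x
print(StrPrinter().doprint(expr))            # 1.0*x
print(RationalizingPrinter().doprint(expr))  # x
```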

Aaron Meurer

Simon

Jul 23, 2011, 9:47:49 PM7/23/11
to sy...@googlegroups.com
Matthew, I agree that the x + 0.0 case should be treated the same as 1.0 * x.
I think that Reals/floats should never automatically (outside of possible internal routines) be converted to exact numbers.
What happens if you're doing a machine-precision calculation that occasionally reaches exactly 1.0 and then gets converted to slower exact arithmetic?
I think that the Mathematica convention (see below) is the correct one.

IPython console for SymPy 0.7.0-git (Python 2.7.1-64-bit) (ground types: python)

In [1]: 1.0 * x
Out[1]: 1.0⋅x

In [2]: x + 0.0
Out[2]: x

Mathematica 8.0 for Linux x86 (64-bit)

In[1]:= 1.0 * x
Out[1]= 1. x

In[2]:= x + 0.0
Out[2]= 0. + x
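The performance concern shows up even in plain Python: once a value becomes an exact rational, arithmetic on it can be far slower, because numerators and denominators grow without bound. A rough illustration (timings are machine-dependent; only the ratio matters):

```python
import timeit

# iterate x -> x/3 + 1 a thousand times: once with floats, once with
# exact rationals, whose denominators grow as 3**n along the way
t_float = timeit.timeit('x = x / 3 + 1.0', setup='x = 1.0', number=1000)
t_exact = timeit.timeit('x = x / 3 + 1',
                        setup='from fractions import Fraction; x = Fraction(1)',
                        number=1000)
print(t_exact / t_float)  # typically much greater than 1
```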

Simon

Aaron Meurer

Jul 23, 2011, 9:52:58 PM7/23/11
to sy...@googlegroups.com
Could you open an issue for this?

Aaron Meurer


> To view this discussion on the web visit
> https://groups.google.com/d/msg/sympy/-/qWaiHaxKvXkJ.

Aaron Meurer

Jul 23, 2011, 9:53:55 PM7/23/11
to sy...@googlegroups.com
By the way, Maple behaves like SymPy:

> 1.0*x;

1.0 x

> x + 0.0;

x

Aaron Meurer

Simon

Jul 24, 2011, 7:13:40 AM7/24/11
to sy...@googlegroups.com
The issue of cancelling floats and integers was also raised in

Should this issue be attached to that one?

Matthew Brett

Jul 24, 2011, 9:45:54 AM7/24/11
to sy...@googlegroups.com
Hi,

On Sun, Jul 24, 2011 at 2:47 AM, Simon <simon...@gmail.com> wrote:
> Matthew, I agree that the x + 0.0 case should be treated the same as 1.0 *
> x.
> I think that the Reals/floats should never automatically (outside of
> possible internal routines) be converted to exact numbers.
> What happens if you're doing a machine precision calculation that
> occasionally reaches exactly 1.0 then gets converted to a slower exact
> arithmetic?

Point taken.

> I think that the mathematica convention (see below) is the correct one.
>
> IPython console for SymPy 0.7.0-git (Python 2.7.1-64-bit) (ground types:
> python)
> In [1]: 1.0 * x
> Out[1]: 1.0⋅x
> In [2]: x + 0.0
> Out[2]: x
> Mathematica 8.0 for Linux x86 (64-bit)
> In[1]:= 1.0 * x
> Out[1]= 1. x
>
> In[2]:= x + 0.0
> Out[2]= 0. + x

OK - sounds reasonable...

Best,

Matthew

Vinzent Steinberg

Jul 24, 2011, 5:22:21 PM7/24/11
to sympy
On Jul 24, 1:28 am, Matthew Brett <matthew.br...@gmail.com> wrote:
> To me, that inconsistency is a benefit.  Is there some disbenefit?
> I'm asking honestly.  For me it is just a question of reduced
> readability in doctests and examples.

How would you get floating point numbers in doctests where you want
exact numbers?

Vinzent

Matthew Brett

Jul 25, 2011, 5:17:54 AM7/25/11
to sy...@googlegroups.com
Hi,

I hesitate to belabor a point that seems agreed, but this is the
doctest that caused me to notice:

https://github.com/statsmodels/formula/blob/master/formula/random_effects.py

It comes up because we have array entries that can be floats or
symbols, and it's convenient to process the array by using
numpy.dot with an array of floats.

But, not to worry, now I understand the issue, I can see it's better that way.

Best,

Matthew

Vinzent Steinberg

Jul 25, 2011, 7:41:01 AM7/25/11
to sympy
On 25 Jul., 05:17, Matthew Brett <matthew.br...@gmail.com> wrote:
> Hi,
>
> On Sun, Jul 24, 2011 at 10:22 PM, Vinzent Steinberg
>
> <vinzent.steinb...@googlemail.com> wrote:
> > On Jul 24, 1:28 am, Matthew Brett <matthew.br...@gmail.com> wrote:
> >> To me, that inconsistency is a benefit.  Is there some disbenefit?
> >> I'm asking honestly.  For me it is just a question of reduced
> >> readability in doctests and examples.
>
> > How would you get floating point numbers in doctests where you want
> > exact numbers?
>
> I hesitate to belabor a point that seems agreed, but this is the
> doctest that caused me to notice:
>
> https://github.com/statsmodels/formula/blob/master/formula/random_eff...
>
> It comes up because we have array entries that can be floats or
> symbols, and it's convenient to process the array by using
> numpy.dot with an array of floats.

Thanks, I have been curious. I think you should explicitly convert the
floats to ints in this case. (As already discussed, there is a reason
that Python does not autosimplify 1.0 to 1.)
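The explicit conversion could look something like this (a sketch with a made-up helper name, applied to each array entry before building the symbolic expression):

```python
from sympy import Integer

def exactify(f):
    """Turn a float holding an integral value into an exact SymPy Integer;
    leave everything else untouched."""
    if isinstance(f, float) and f.is_integer():
        return Integer(int(f))
    return f

print(exactify(1.0) * 5)  # 5 (exact Integer arithmetic)
print(exactify(1.5))      # 1.5 (unchanged)
```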

Vinzent