On 11.01.2012 00:27, krastano...@gmail.com wrote:
> On the other hand I don't know why a**2.0/a is not a**1.0. This seems like
> a bug...
However, 'simplify' works for this case:
In [3]: simplify(a**2.0/a)
Out[3]: a**1.0
Remarks:
We implicitly assume that a != 0 during this simplification.
If e.g.
>>> a = Symbol('a', positive=True)  # then 'a' becomes 'real' too
then this simplification can be automatic, I think. At least I have no
objection to adding this case to the issue tracker.
--
Alexey U.
--
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to sy...@googlegroups.com.
To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sympy?hl=en.
Thanks for all the responses,
you bring up a good argument about some of the additional
complications of simplification depending on the potential
assumptions on the Symbol. I appreciate your thoughtfulness in this
response.
On a similar note, what is the best way to replace an expression that
has 1.0s everywhere?
In [36]: print 1.0*a**1.0
1.0*a**1.0
this seems to be the simplest:
In [37]: print (1.0*a**1.0).subs(1.0, 1)
a
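For what it's worth, a less ad-hoc route than subs(1.0, 1) is nsimplify(), which converts the Float atoms in an expression to exact Rationals (a sketch against a reasonably recent SymPy, not necessarily the 0.7-era version in this session):

```python
import sympy

a = sympy.Symbol('a')
expr = 1.0 * a**1.0

# nsimplify with rational=True replaces every Float in the tree with an
# exact Rational, so 1.0 becomes 1 and a**1 evaluates back to a
exact = sympy.nsimplify(expr, rational=True)
print(exact)  # a
```

Unlike subs(1.0, 1), this also handles floats other than 1.0 (e.g. 0.5 becomes 1/2).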
On Jan 10, 2:41 pm, "Alexey U. Gudchenko" <pr...@goodok.ru> wrote:
> On 11.01.2012 00:27, krastanov.ste...@gmail.com wrote:
>
> > On the other hand I don't know why a**2.0/a is not a**1.0. This seems like
> > a bug...
>
> However, the 'simplify' works for this case:
>
> In [3]: simplify(a**2.0/a)
> Out[3]: a**1.0
>
> Remarks:
> We implicitly assume that a != 0 during this simplification.
>
> If e.g.
>
> >>> a = Symbol('a', positive=True)  # then 'a' becomes 'real' too
>
> then this simplification can be automatic, I think. At least I have no
> objection to adding this case to the issue tracker.
>
> --
> Alexey U.
As others have noted, this was done intentionally. 1.0 is not the
same as 1. You should avoid using floating point numbers in powers.
Using them as coefficients is fine. The reason is that anything that
works using polynomials will only work with exact integer powers. For
example,
In [18]: factor(x**2 - 1)
Out[18]: (x - 1)⋅(x + 1)
In [19]: factor(x**2.0 - 1)
Out[19]:
 2.0
x    - 1
We had some discussion about this in a few other places before. One is
at http://code.google.com/p/sympy/issues/detail?id=1374. I think there
were others too. If you want, you could probably bisect the change
and possibly find more info on it.
The non-simplification of x**2.0/x is indeed a bug. That should work.
I made http://code.google.com/p/sympy/issues/detail?id=2978 for this.
Aaron Meurer
f/g will be x, which is continuous. So yes, in this case, the
simplification changes the result.
For f=x and g=x**2, however, f/g could be simplified to 1/x without
eliminating a discontinuity (term?).
> The non-simplification of x**2.0/x is indeed a bug. That should work.
> I made http://code.google.com/p/sympy/issues/detail?id=2978 for this.
>
> Aaron Meurer
>
The last message from Stefan (about continuity) has convinced me that
this simplification should be automatic, but there is a question about
the result of this automatic simplification.
The description of issue 2978 says that this must give:
>>> x**2.0/x
x
I think the desired behavior should instead be:
>>> x**2.0/x
x**1.0
(and only then might one convert floats to integers)
>>> x**1.1/x
x**0.1
In other words, I am not inclined to convert floats like "1.0" to the
integer "1" automatically in expressions.
--
Alexey U.
On 10.01.2012 21:52, krastano...@gmail.com wrote:
And I'm not sure that one needs such an assumption here.
f(x)/g(x) where f(x)=x**2 and g(x)=x is continuous by continuation (I'm
unsure of the English terminology). In the cases where it's not continuous,
it will simplify to something that is also not continuous.
f/g will be x, which is continuous. So yes, in this case, the simplification changes the result.
x²/x has a discontinuity for x=0, and x does not, hence x²/x is not the
same as x. (I suspect it might even be possible to "prove" 1=0 if you
allow removing 0/0 from a product. Division by zero tends to be nasty
like that, but I haven't checked.)
However, (if x = 0 then 0 else x²/x) is the extension by continuity of
x²/x, and is indeed equal to x.
So:
Simplifying (x**2/x if x != 0 else 0) and (x**2/x if x != 0 else x) to x
is valid.
Simplifying extend_by_continuity(x**2/x) to x is valid (assuming a
hypothetical function extend_by_continuity).
simplify(x**2/x, extend_by_continuity=true) could be defined to return a
simplified expression that may, at the discretion of simplify(), have
been extended by continuity.
But simplifying x**2/x to x in the general case is invalid.
If simplify does that without being told to do that, that's a bug.
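The hypothetical extend_by_continuity() could be sketched with SymPy's limit() (the name comes from this discussion; it is not a real SymPy function):

```python
import sympy

x = sympy.Symbol('x')

def extend_by_continuity(expr, sym, point):
    # Hypothetical helper: define the value at `point` by the limit
    # there, instead of substituting directly (which may give nan).
    return sympy.limit(expr, sym, point)

# sin(x)/x is undefined at x = 0, but its continuous extension there is 1
print(extend_by_continuity(sympy.sin(x) / x, x, 0))  # 1
```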
No, no, it is continuous because the limit as x-->0 exists (it equals
0) and is the same as the value of the function at that point, 0**2/0
(which by definition equals 0).
It was my mistake when I began to talk about this case, but Stefan
convinced me.
--
Alexey U.
> On 11.01.2012 02:16, Aaron Meurer wrote:
>
>> The non-simplification of x**2.0/x is indeed a bug. That should work.
>> I made http://code.google.com/p/sympy/issues/detail?id=2978 for this.
>>
>> Aaron Meurer
>>
>
> The last message from Stefan (about continuity) has convinced me that
> this simplification should be automatic, but there is a question about
> the result of this automatic simplification.
>
> The description of issue 2978 says that this must give:
>
>>>> x**2.0/x
> x
>
> I think the desired behavior should instead be:
>
>>>> x**2.0/x
> x**1.0
>
Sorry, that was a typo on my part. It should be x**1.0.
Aaron Meurer
> (and only then might one convert floats to integers)
>
>>>> x**1.1/x
> x**0.1
>
> In other words, I am not inclined to convert floats like "1.0" to the
> integer "1" automatically in expressions.
>
> --
> Alexey U.
>
simplify() actually does more than this, by the way, as it calls
cancel(), which removes all common zeros from the numerator and
denominator.
If you want to keep track of this, you'll have to use evaluate=False
and build up the expression manually.
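A minimal sketch of the evaluate=False approach, assuming a reasonably recent SymPy:

```python
import sympy

x = sympy.Symbol('x')

# An ordinary product cancels immediately: x**2/x becomes x.
print(x**2 / x)  # x

# With evaluate=False the numerator and denominator are kept as-is...
expr = sympy.Mul(x**2, 1/x, evaluate=False)
print(expr == x)  # False: still an unevaluated product

# ...but cancel() (and hence simplify()) will remove the common zero.
print(sympy.cancel(expr))  # x
```

The unevaluated form survives only until something re-evaluates the expression, so this is fragile by design.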
Aaron Meurer
>>> e=(x**2*(1/x - z**2/x))
>>> e.expand()
-x*z**2 + x
>>> solve(_,x)
[0]
>>> solve(e,x)
[]
>>> e.subs(x,0)
0
I was going to say that you could use cancel() to get rid of these,
but that only holds for rational functions. For example, sin(x)/x - 1
has a "zero" at x = 0, but the only way to get this from solve is to
give check=False. So I think we should keep it like this, but document
that if you want the others, you should pass check=False to solve().
In [31]: solve(sin(x)/x, x)
Out[31]: []
In [32]: solve(sin(x)/x, x, check=False)
Out[32]: [0]
This is kind of analogous to the force option of many of the simplify()
functions. By default, we are rigorous, and only do the transformation
if it is valid. But we allow force=True to give the algebraic
solution. Rigorously, x/(x**2 + x) is not equal to 1/(x + 1), but many
fields consider them to be equivalent for the sake of simplifying
the theory.
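The x/(x**2 + x) example can be checked with cancel(), which performs exactly this not-strictly-rigorous cancellation:

```python
import sympy

x = sympy.Symbol('x')
expr = x / (x**2 + x)

# cancel() removes the common factor x, silently dropping the x = 0 pole
print(sympy.cancel(expr))  # 1/(x + 1)

# The original expression is undefined at x = 0
print(expr.subs(x, 0))     # nan
```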
Aaron Meurer
I think it's reasonable to keep it as it is now. The reason is that
we can then say that expr.subs(x, sol) will give 0. This wouldn't
hold for solutions based on continuous extension.

> I was going to say that you could use cancel() to get rid of these,
> but that only holds for rational functions. For example, sin(x)/x - 1
> has a "zero" at x = 0, but the only way to get this from solve is to
> give check=False. So I think we should keep it like this, but document
> that if you want the others, you should pass check=False to solve().
The limit exists, but that's just half of the definition of "continuous".
> and the same as a value of function at this point, 0**2/0 (which by
> definition is equal 0).
f(x) = x²/x has no definition for x=0. It involves division by zero.
If you go from functions to relations, then for a, b != 0, we have
- a/b is a one-element set
- a/0 is the empty set since no r satisfies 0*r = a
- 0/0 is the set of all values since all r satisfy 0*r = 0
So you can assign an arbitrary value to the result of 0/0 and it will be
"correct", but you don't know whether the value you assigned is "more"
or "less" correct than any other.
For example, what should (x²/x)/(x²/x) be for x=0?
If you do the x-->0 limit first, you'll get 1.
If you stick with the basic substitutability rules of math, you get
(x²/x)/(x²/x) = (0)/(0) = 0/0 = 0.
Hilarity ensues.
Oh, and I bet different people will have different assumptions about
which rule should take priority.
And if their assumptions differ from those that simplify() applies, they
will come here and ask what's wrong.
Regards,
Jo
Now, simplify will transform the latter expression into something that
doesn't strictly have the same definition at 0 (it removes the
singularity).
And x**a*x**b will automatically combine into x**(a + b) if a and b
have the same "term" in a coeff-term expansion (like
x**(2*y)*x**(4*y)). If it didn't do this, then x*x would not simplify
into x**2. OK, let's do that, but not if one of the exponents is
negative, you say. Or rather, not if one of the exponents is negative
but the resulting exponent is nonnegative. But what about exp(2)*exp(-2)
=> 1? That doesn't remove any information. exp(x)*exp(-x) => 1
doesn't remove any information either, as exp(x) can never be 0. And
what about x**(2*n)*x**(-n), where n is Symbol('n', negative=True)? So now
we start checking all kinds of assumptions in the core. And not just
anywhere in the core: in Mul.flatten, one of the most important
routines in the core as far as performance is concerned.
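The combining behavior described above can be checked directly (a sketch against a recent SymPy; this shows the observable effect of Mul.flatten, not its implementation):

```python
import sympy

x, y = sympy.symbols('x y')

# Exponents sharing the same symbolic "term" combine automatically
print(x**(2*y) * x**(4*y))           # x**(6*y)

# exp(x)*exp(-x) collapses to 1 automatically, losing no information
print(sympy.exp(x) * sympy.exp(-x))  # 1

# The same rule is what makes x*x simplify to x**2
print(x * x)                         # x**2
```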
If you want to keep track of the numerator and denominator, I suggest
you keep them separate. They won't cancel on their own if you do
that. http://code.google.com/p/sympy/issues/detail?id=1966 would
allow you to create x/x without it canceling (actually, you can
already do it with Mul(x, 1/x, evaluate=False), but it's not as easy
as that issue would allow). But even then, I would recommend just
keeping them separate.
As for whether assuming removable discontinuities are filled in by
continuous extension is standard practice, I think it depends on what
you are doing. In complex analysis, and any field that uses that kind
of math (like physics), this is common, because removable singularities
aren't very interesting. Such fields commonly assume analytic
continuation as well, so things like Sum(1/t**z, (t, 1, oo)) are assumed
to be defined for all z in the complex plane except z = 1, even
though the sum only converges for Re(z) > 1. Every other field of
mathematics I've encountered is definitely more careful about this,
though.
I personally think SymPy strikes a good balance with this. As long as
you understand the behavior of various operations, and don't use too
many magic functions (like simplify()), there shouldn't be too many
surprises. At the worst, you'll get nan where you expected a number,
and that should be a sign to you that you need to use limit() or do
something else to get around a removable discontinuity.
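A minimal illustration of the nan-as-a-sign pattern (the example function here is my own, not from the thread):

```python
import sympy

x = sympy.Symbol('x')
expr = (sympy.cos(x) - 1) / x**2

# Direct substitution at the removable discontinuity yields nan...
print(expr.subs(x, 0))          # nan

# ...which is the sign to reach for limit() instead
print(sympy.limit(expr, x, 0))  # -1/2
```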
Aaron Meurer
This letter is very significant and clarifies the behavior of SymPy
concerning these topics:
- automatic expression transformations (SymPy policy)
- simplify
- singularities and continuity
- core description (Mul.flatten policy)
Of course we should add this to the wiki pages and/or the documentation.
--
Alexey U.
So feel free to throw something up on the wiki. You can think of the
wiki as a breeding ground for documents that can eventually go into
the main docs once they mature.
If anyone's interested, I did start writing up my thoughts about
automatic simplification a while back
(https://github.com/sympy/sympy/wiki/Automatic-Simplification). It's
still a work in progress, though. And it's just my opinion, so don't
take it as the final word on anything.
Aaron Meurer
So Float objects should always compare unequal to Rational objects,
for example Float(3.0) == Rational(3) should be false?
Hm. Mathematically, they should be the same.
Is there a document somewhere that explains the various SymPy types,
their relationship to Python numbers, and which of them should behave
how in what contexts?
This would help not just users but coders, too.
Regards,
Jo
That's not correct.
> A float number is not a decimal one.
Indeed, it's a base-2 one.
Regards,
Jo
Integral multiples of some power of 2, such as 2, 1, 0.5, 0.25, and
0.125, are not approximate.
In general, I do not understand what you mean by "does not exist,
mathematically".
IEEE floating-point arithmetic certainly is a mathematical structure, so
each floating-point number does exist, mathematically - it's just not a
real number. (It's not even rational-number arithmetic if you get to the
fringe cases, because associativity may fail.)
> This is not the same implementation for the Python module decimal which
> keeps exact representation, if it can.
Floats do the same.
I mean: they keep the exact representation, if they can.
(IEEE float has been designed to do that even in some cases where you
wouldn't expect it to, e.g. 1.0/3.0*3.0 will be exactly 1.0, and this
works for many other divisors as well.)
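These claims can be checked quickly in Python, whose float is an IEEE 754 double:

```python
# Integral multiples of powers of two are represented exactly
print(0.5 + 0.25 == 0.75)        # True

# The rounding errors in 1.0/3.0 and the subsequent *3.0 cancel exactly
print((1.0 / 3.0) * 3.0 == 1.0)  # True

# But most decimal fractions are inexact in base 2
print(0.1 + 0.2 == 0.3)          # False
```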
Regards,
Jo
I'm upgrading sympy to sympy-0.7.1-py2.7 (from a fairly old
version).
One surprising change that I've noticed so far is
>>> a = sympy.Symbol('a')
>>> a**1 == a
True
>>> a**1.0 == a
False
I'm not clear on the reasoning. Is there any way to get around this?
This results in very strange behavior like
>>> a**1.0/a
a**1.0/a
The only solution I've found so far is the following:
>>> (a**1.0/a).subs(a**1.0, a)
1
I did attempt to search for this in the message board and the open
issues, but I didn't find anything.
Any help would be appreciated,
Thanks,
Rob