Hey guys,
I'm reading through the SymPy code and, well, it's somewhat overwhelming if you're new to the project because there's so much going on. (That's a good thing too - it means it's robust!)
Can someone help explain how this works?
>>> from sympy import *
>>> x = Symbol("x")
>>> my_expression = sin(x)**2 + 2*sin(x) + 1
>>> my_expression.factor()
(1 + sin(x))**2
>>>
For instance, what data structures are created when I build my_expression, what happens when I factor it, etc. A high-level walkthrough would help. I see there's stuff going on in polytools.py, and I think _symbolic_factor gets called. It's just hard to keep everything in my head when I don't yet have a high-level understanding of how SymPy expressions actually work.
So let's start with my_expression (the core part). You already know
that x is defined to be a Symbol object (because you did this
yourself). You should also know that sin and cos are Function
objects, which you can think of as container objects that hold
arguments and which also have various mathematical relations defined
on them.
As you may know, Python lets you override the behavior of the
built-in operators *, +, /, -, etc. on your own objects. So SymPy
objects have __mul__, __add__, etc. methods defined on them. When you
call x + y, this reduces to x.__add__(y). In SymPy, a.__add__(b) is
converted to Add(a, b). The same is true for __mul__ and Mul. So in
sin(x)**2 + 2*sin(x) + 1, sin(x) creates a sin object (when it is
called). This reduces to
sin(x).__pow__(2) + sin(x).__rmul__(2) + 1
Pow(sin(x), 2).__add__(Mul(2, sin(x))).__add__(1)
Add(Pow(sin(x), 2), Mul(2, sin(x)), 1)
which is what you will get if you call srepr(my_expression). Note
that 2*sin(x) actually calls __rmul__. That's because 2 (type int)
doesn't know how to multiply sin(x) (type sin), so sin(x)'s __rmul__
method is called. This brings up an important point. All Python
types are converted in this process to SymPy types, through the
sympify() function. So, for example, sin(x).__pow__(2) reduces to
sin(x).__pow__(sympify(2)), which results in
sin(x).__pow__(Integer(2)).
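To make the mechanism concrete, here is a small self-contained toy (assumed names, not SymPy's real classes) showing how __rmul__ plus a sympify-style wrapper turn a raw Python int into a wrapped object. Note the toy doesn't reproduce SymPy's argument ordering; it only illustrates the dispatch:

```python
class Expr:
    # base class: overload + and * the way described above
    def __add__(self, other):
        return Add(self, _toyify(other))
    __radd__ = __add__
    def __mul__(self, other):
        return Mul(self, _toyify(other))
    __rmul__ = __mul__

class Integer(Expr):
    def __init__(self, n):
        self.n = n
    def __repr__(self):
        return "Integer(%d)" % self.n

class Symbol(Expr):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return "Symbol(%r)" % self.name

class Add(Expr):
    def __init__(self, *args):
        self.args = args
    def __repr__(self):
        return "Add(%s)" % ", ".join(map(repr, self.args))

class Mul(Add):  # same storage as Add; only the name differs in this toy
    def __repr__(self):
        return "Mul(%s)" % ", ".join(map(repr, self.args))

def _toyify(obj):
    # stand-in for sympify(): wrap raw Python ints
    return Integer(obj) if isinstance(obj, int) else obj

x = Symbol("x")
# 2 (type int) returns NotImplemented for 2*x, so Symbol.__rmul__ runs
print(2*x + 1)   # Add(Mul(Symbol('x'), Integer(2)), Integer(1))
```

The same fallback protocol (try the left operand's __mul__, then the right operand's __rmul__) is exactly what lets 2*sin(x) end up inside SymPy's class hierarchy.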
Now, Mul, Pow, and Add's __new__ methods contain logic to do automatic
simplifications like x + 2*x => 3*x or x*x => x**2. In this case, no
such simplifications needed to be applied.
All the args for any SymPy expression are stored in expr.args. So you
would have
>>> my_expression.args
(1, sin(x)**2, 2*sin(x))
>>> my_expression.args[1].args
(sin(x), 2)
If you want to see the code for all of this, you can look at the files
in sympy/core. The __op__ stuff is mostly in basic.py and expr.py.
The flattening routines are in add.py, pow.py, and mul.py. The code
for functions that sin is built off of is in function.py.
Now, to the factoring part. my_expression.factor() is a shortcut to
factor(my_expression). Because factoring is a polynomial algorithm,
the expression has to be converted to a Poly first. Poly is able to
represent polynomials with arbitrary symbolic "generators". In this
case, it determines that it should use sin(x) as a generator, so it
creates Poly(sin(x)**2 + 2*sin(x) + 1, sin(x)), which you can think of
as a wrapper around the polynomial y**2 + 2*y + 1, where y is set to
be sin(x) (for the purposes of Poly, it does not matter what the
generators are, other than that coefficients cannot contain symbols
from them, so you can think of it in this way).
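To see why the generator can stay opaque, here is a toy sketch (not SymPy's Poly) that does polynomial multiplication purely on coefficient lists; nothing in it depends on what the generator stands for:

```python
def poly_mul(p, q):
    # multiply two coefficient lists; index i holds the coeff of gen**i
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (gen + 1)**2, where gen could be x, sin(x), anything opaque:
square = poly_mul([1, 1], [1, 1])
print(square)   # [1, 2, 1], i.e. 1 + 2*gen + gen**2
```

The coefficients [1, 2, 1] are exactly those of sin(x)**2 + 2*sin(x) + 1 once sin(x) is chosen as the generator.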
Then, it calls the factorization algorithm on Poly. If you are
interested in how this works, I suggest you read the code and the
papers referenced there. In this case, it is able to factor the
polynomial using the squarefree factorization algorithm, which is
actually not too difficult to understand. In the general case, it
uses a complicated multivariate factorization algorithm that factors
any multivariate polynomial into irreducibles.
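As a taste of the square-free idea (a sketch of the principle, not SymPy's implementation): a repeated factor of p also divides gcd(p, p'), so computing the formal derivative is the first step. For our polynomial:

```python
def poly_deriv(p):
    # formal derivative of a coefficient list (index i is the coeff of y**i)
    return [i * c for i, c in enumerate(p)][1:]

p = [1, 2, 1]          # y**2 + 2*y + 1, with y standing for sin(x)
print(poly_deriv(p))   # [2, 2], i.e. 2*y + 2 == 2*(y + 1)
```

Here gcd(y**2 + 2*y + 1, 2*y + 2) is y + 1, which exposes the repeated factor (sin(x) + 1 in the original expression).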
Anyway, Poly.factor_list returns something like [(Poly(sin(x) + 1),
2)]. factor() converts this into a normal SymPy expression (also
called a Basic expression or Expr expression) by passing it to Mul and
Pow (something like Mul(*[Pow(b, e) for b, e in factors]) would do it,
I think).
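That reconstruction step can be sketched with plain tuples (strings stand in for the Poly factors here; this is an illustration, not the actual factor() code):

```python
def rebuild(factors):
    # the analogue of Mul(*[Pow(b, e) for b, e in factors]),
    # with strings standing in for SymPy objects
    return "*".join("(%s)**%d" % (b, e) for b, e in factors)

print(rebuild([("sin(x) + 1", 2)]))   # (sin(x) + 1)**2
```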
All the Poly stuff lives in sympy/polys. If you are interested, I can
explain a little how they work (internal representation, etc.). The
code for the Poly class lives in polytools.py, though the actual
factorization algorithm lives in sqfreetools.py and factortools.py.
And one thing that I didn't mention (and maybe you didn't even think
of) is the printing. You do not use pretty printing, so the printing
is rather simple (just recursively print the objects using the proper
operators). If you are interested, you can look at the code in
sympy/printing.
Let me know if this made sense, and if there are bits that you still
would like to know about. Also, remember that SymPy is written in a
fairly modular way, so it's completely unnecessary to know how a
module works unless you want to work on that module specifically
(e.g., you don't need to know how the core works to write some
simplification algorithm, like in simplify.py).
Aaron Meurer
That's a good idea.
>
> So a little bit about my background with Python: I'm a pretty experienced
> Python programmer, I understand operator overloading, I see your
> @_sympifyit decorators all over the place (I'm guessing they're to convert 2
> to Integer(2), as Aaron explained). In fact, I started writing my own
> Python symbolics math library years ago.
OK. I just wanted to be sure to mention it, since to people who don't
know about operator overloading, it can be the most mysterious part of
the whole thing (how does it "know" to convert x + x into 2*x?)
>
> One follow-up question is:
>
> Is there an advantage to having your own classes for Pow, Add, Mul, and
> other operators? Why didn't you just absorb those in classes for
> expressions, functions, etc, and have an expression + expression return an
> expression?
It makes for much better object-oriented programming. Each of Add,
Mul, and Pow has methods that specifically handle the behavior of
that type.
Let's use differentiation as an example. diff(expr, x) is implemented
as expr._eval_derivative(x) (technically, this is not 100% true
because there's also fdiff, but let's suppose that it's just done this
way for simplicity). If you do it this way, it's a very simple
recursive algorithm. You define your base cases, and define
Add:

    def _eval_derivative(self, x):
        return Add(*[i._eval_derivative(x) for i in self.args])

Mul:

    def _eval_derivative(self, x):
        return Add(*[Mul(*(self.args[:i] +
            (self.args[i]._eval_derivative(x),) + self.args[i+1:]))
            for i in range(len(self.args))])
Pow:

    def _eval_derivative(self, x):
        # I hope I have the rule correct here
        return (self.exp*self.base._eval_derivative(x)*self.base**(self.exp - 1)
            + log(self.base)*self.exp._eval_derivative(x)*self.base**self.exp)
(note, I didn't copy this from the actual SymPy code, so it might be a
little different, but it will be similar).
This way, the code is all very simple, because each class only needs
to know how to deal with itself.
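Here is a runnable toy version of that recursive scheme (assumed class names, not SymPy's): each node type implements only its own differentiation rule, and the recursion does the rest.

```python
class Num:
    def __init__(self, v):
        self.v = v
    def diff(self, x):
        # derivative of a constant is 0
        return Num(0)
    def eval(self, env):
        return self.v

class Sym:
    def __init__(self, name):
        self.name = name
    def diff(self, x):
        # base case: dx/dx == 1, dy/dx == 0
        return Num(1 if self.name == x else 0)
    def eval(self, env):
        return env[self.name]

class Add:
    def __init__(self, *args):
        self.args = args
    def diff(self, x):
        # sum rule: differentiate each argument
        return Add(*[a.diff(x) for a in self.args])
    def eval(self, env):
        return sum(a.eval(env) for a in self.args)

class Mul:
    def __init__(self, *args):
        self.args = args
    def diff(self, x):
        # product rule over n factors
        terms = []
        for i in range(len(self.args)):
            factors = list(self.args)
            factors[i] = factors[i].diff(x)
            terms.append(Mul(*factors))
        return Add(*terms)
    def eval(self, env):
        out = 1
        for a in self.args:
            out *= a.eval(env)
        return out

# d/dx (x*x + 3*x) == 2*x + 3, which is 13 at x == 5
x = Sym("x")
e = Add(Mul(x, x), Mul(Num(3), x))
print(e.diff("x").eval({"x": 5}))   # 13
```

Note that nothing outside each class knows anything about that class's rule, which is exactly the modularity argument: adding a new node type means adding one class with one diff method.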
This also makes it much more modular. If you want to extend SymPy,
you just create your own class. You can make that class work with
diff() just by defining ._eval_derivative() on that class. With your
method, if someone wants to extend the code, they have to extend the
Expression class (which will be huge, btw).
Your method goes half way, because it has a separate "Add" class,
PolyTerm, but you are still keeping things like x and x**2 and x*y as
the same class, when the first should be a Symbol, the second a Pow,
and the third a Mul.
By the way, if you are interested in other ways to implement symbolics
in Python, you might look at the sympycore project, which is a
research project that tries to implement symbolics in the most
efficient way possible (and is often faster than SymPy as a result).
See http://code.google.com/p/sympycore/.
Aaron Meurer
Yeah, I see the advantage in that. That's really cool.
Here's another one. I see the Derivative class. But cos already has fdiff and an _eval_derivative from class inheritance. (Not sure what the difference between the two methods is, but anyway) So why is a Derivative class needed? Couldn't you just make it a function, something like:
    @_sympifyit
    def derivative(f, with_respect_to):
        return f.fdiff(with_respect_to)
The Derivative class has extra methods like _eval_nseries, but wouldn't those just get subsumed with whatever a derivative function returns? For example, if you call fdiff on cos, it returns -sin, which should have a _eval_nseries method anyhow. What's the point of having a separate Derivative class?
Aaron Meurer
Take Add(x, 2*x, y, -y) as an example. The second and fourth arguments
are Muls. Mul always puts any
Numerical coefficient in args[0] (note I use Numerical with an
uppercase N, meaning an actual number like 2 or 3/4 or 2.1; something
that would combine with another Number to create a new Number through
addition or multiplication). To simplify the below logic, suppose
that coeff_term(expr) returns (expr.args[0], Mul(*expr.args[1:])) if
expr is a Mul and expr.args[0] is a Number, and (1, expr) otherwise.
In other words, the first item is the Numerical coefficient and the
second argument is the rest. The logic in Add.flatten is something
like
    termdict = {}
    for arg in args:
        coeff, term = coeff_term(arg)
        if term in termdict:
            termdict[term] += coeff
        else:
            # Add term to termdict
            termdict[term] = coeff
Then follow through what we get for each term in our example:

x:
    coeff == 1
    term == x
    termdict == {x: 1}

2*x:
    coeff == 2
    term == x
    termdict == {x: 3}

y:
    coeff == 1
    term == y
    termdict == {x: 3, y: 1}

-y:
    coeff == -1
    term == y
    termdict == {x: 3, y: 0}
Then, at the end, we (efficiently) multiply the terms and items in the
dictionary and those are our Add.args. Note that only in this last
step is the 0*y actually reduced to nothing (again, speed is a concern
here, so this is all done efficiently; see Add.flatten for how it's
actually done).
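The termdict logic above can be run as-is with a toy representation, where a Mul with a coefficient is modelled as a (coeff, term) pair (an assumed stand-in for the real coeff_term split, not SymPy code):

```python
def coeff_term(arg):
    # split off the Numerical coefficient; a Mul with a coefficient
    # is modelled here as a (coeff, term) tuple
    if isinstance(arg, tuple):
        return arg
    return (1, arg)

def add_flatten(args):
    termdict = {}
    for arg in args:
        coeff, term = coeff_term(arg)
        termdict[term] = termdict.get(term, 0) + coeff
    # the final pass is where 0*y actually disappears
    return {t: c for t, c in termdict.items() if c != 0}

# Add(x, 2*x, y, -y) from the walkthrough above:
print(add_flatten(["x", (2, "x"), "y", (-1, "y")]))   # {'x': 3}
```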
For Mul, it's a little more complicated, but actually not that much.
The problem here is that we combine not only x*x => x**2, but also
x**y*x**y => x**(2*y). But it's actually not too difficult. Instead
of a dictionary of term:coeff items, we have a dictionary of
base:expdict items, where expdict is itself a dictionary of term:coeff
items that works just like Add. So for example, Mul(x**2, y,
x**(2*y)) becomes
{x:{1:2, y:2}, y:{1:1}}. Note that each item in the inner dictionary
represents a separate base. This leads to the rule for automatically
combining exponents in a Mul: only combine them if they have the same
term from as_coeff_terms (a Mul-specific method analogous to the
coeff_term split I described above). Note that we used to combine all
exponents automatically and unconditionally, but this was bad behavior
because it made the representation too restrictive (it was impossible
to represent exp(x)*exp(y) as a Mul, for example).
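The base:expdict bookkeeping can be sketched the same way (again a toy representation, with factors as (base, exponent) pairs and symbolic exponents as (coeff, term) pairs; assumed names, not SymPy code):

```python
def exp_coeff_term(exp):
    # split a symbolic exponent into (coeff, term), e.g. 2*y -> (2, 'y');
    # a plain number n is treated as n*1
    return exp if isinstance(exp, tuple) else (exp, 1)

def mul_flatten(factors):
    powdict = {}
    for base, exp in factors:
        coeff, term = exp_coeff_term(exp)
        # the inner dict works just like the Add termdict
        expdict = powdict.setdefault(base, {})
        expdict[term] = expdict.get(term, 0) + coeff
    return powdict

# Mul(x**2, y, x**(2*y)) from the text:
print(mul_flatten([("x", 2), ("y", 1), ("x", (2, "y"))]))
# {'x': {1: 2, 'y': 2}, 'y': {1: 1}}
```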
Note that although we convert the Add dict into Muls and the Mul dict
into Pows, this is done for historical/compatibility/hashing reasons. I
think it should be possible to build a core that just uses the dict
representation without .args. The main problem is how to keep things
hashable so that you can put them as dict keys.
By the way, Mul.flatten is way more complicated than Add.flatten, in
part because it actually does more than this (for example, it
automatically reduces things like sqrt(6)*sqrt(2) to 2*sqrt(3)).
Also, both methods are currently cluttered with special cases for
Order, nan, and the infinities, and the code is not always the easiest
to read because it's optimized for efficiency (which I want to stress
is *very* important at this level). Actually, Add.flatten is fairly
readable, and I would definitely start there if you want to understand
this stuff. Mul.flatten might require some print statements or a run
through with a debugger to fully understand everything that is going
on.
Aaron Meurer