sympy.plotting.experimental_lambdify vs. sympy.utilities.lambdify


Ondřej Čertík

Jun 30, 2012, 8:58:02 PM
to sympy
Hi,

I have read the initial commit introducing the experimental_lambdify
(https://github.com/sympy/sympy/commit/6ee1bd36924b068c1ac74b634e40628cb815137d),
but it is not clear to me what the problems are with the old lambdify
in sympy.utilities.

Can these two lambdify functions be merged? It seems to me that
sympy.utilities.lambdify is missing
a feature or two, so why not extend it?
Historically, sympy.utilities.lambdify was created for the plotting
module, so the current situation is quite confusing.

Ondrej

krastano...@gmail.com

Jun 30, 2012, 10:38:04 PM
to sy...@googlegroups.com
Basically two reasons:

- the original lambdify has very convoluted logic and seems hard to
maintain (the new one is not that great either, but it does not
break as easily); in the commit history one can see how things
completely unrelated to the functionality of lambdify were bolted
onto it

- there were serious differences between what the docstring says and
what lambdify does in many corner cases

I discussed at length why I dislike the original lambdify back in
November when I was writing the plotting module. I can bring those
discussions back if there is interest. I have also documented this in
detail at the beginning of the source file for the new lambdify. It
would be nice to merge them, but there is too much cruft in
the old lambdify, so this will be difficult (and maybe even pointless; I would
prefer simply removing both lambdify functions and relying on good
code in sympy for performance, not on some strange combination of
libraries and eval(str) operations)

Aaron Meurer

Jun 30, 2012, 10:48:15 PM
to sy...@googlegroups.com
On Jun 30, 2012, at 8:38 PM, "krastano...@gmail.com" wrote:
+1 to this. Each function should know how to numerically evaluate
itself using numpy or stdlib math (or whatever), and you should be
able to just do it directly, like expr.evalf(library=numpy) or
something like that. I don't see any reason why that wouldn't work.
eval'ing strings feels like a hack, but actually, IMO, anything that
works by rebuilding the expression tree in some way or another is
inefficient, because we already have the expression tree.

By the way, I myself am still a little confused about how
experimental_lambdify works, and how it differs from lambdify. Can
you give a simple example and show how it would work in each case?

Aaron Meurer

Ondřej Čertík

Jul 1, 2012, 3:20:36 AM
to sy...@googlegroups.com
The idea is quite simple. In SymPy, sin(x)+cos(x) is equivalent to
Add(sin(x), cos(x)), so if you need to evaluate it at x=5, you
need to run a couple of function calls, recursively, until you finally
call math.sin and math.cos from Python's math module.

Using Python's math module directly, sin(x)+cos(x) is just
two function calls and a "+" on two floats, so it is much faster.

How can this be done without lambdify()?
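In other words, what lambdify is after is a plain Python closure. A hedged sketch of the eval()-based trick (my own illustration, not the actual sympy source):

```python
import math

# The kind of function lambdify effectively produces for sin(x) + cos(x):
# per evaluation, just two math calls and one float addition.
src = "lambda x: math.sin(x) + math.cos(x)"
f = eval(src, {"math": math})

print(f(5.0))  # same value as math.sin(5.0) + math.cos(5.0)
```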

------------------

Stefan, yes, I read the module introduction, but it was not clear to me at all
what the difference is. The description sounds like a temporary hack.
But from what you are saying, it seems to me it's a better implementation
of lambdify(). I agree with you that the best would be to simply rely on sympy
directly, but I don't know how to do that.

Ondrej

krastano...@gmail.com

Jul 1, 2012, 4:58:25 AM
to sy...@googlegroups.com
>> +1 to this. Each function should know how to numerically evaluate
>> itself using numpy or stdlib math (or whatever), and you should be
>> able to just do it directly, like expr.evalf(library=numpy) or
>> something like that. I don't see any reason why that wouldn't work.
>> eval'ing strings feels like a hack, but actually, IMO, anything that
>> works by rebuilding the expression tree in some way or another is
>> inefficient, because we already have the expression tree.
>
> The idea is quite simple. In SymPy, sin(x)+cos(x) is equivalent to
> Add(sin(x), cos(x)) and so if you need to evaluate it at x=5, you
> need to run a couple function calls, recursively, and finally
> you just call math.sin or math.cos from the Python's math module.
>
> Using Python's math module directly, sin(x)+cos(x) is just
> two function calls and a "+" on two floats. So it is much faster.
>
> How can this be done without lambdify()?

I was just hoping that it won't be necessary performance-wise. If you
want to use numpy, you will work on big arrays, and most of the time
will be spent number crunching outside of Python, so a few more function
calls in total do not seem like a big problem. And you save many
more function calls by not using lambdify() (this counts if you are
evaluating the expression only once, as in plotting).

> ------------------
>
> Stefan, yes, I read the module introduction, but it was not clear to me at all
> what the difference is. The description sounds like a temporary hack.
> But from what you are saying, it seems to me it's a better implementation
> of lambdify(). I agree with you that the best would be to simply rely on sympy
> directly, but I don't know how to do that.
>

The new lambdify is indeed a bit better, mainly because you do not
need to manually add every single class in order for it to work (check
the commit that adds Integral to lambdify, for instance). However, it
does not support all the options of the old lambdify, so there is work
to be done if we want to merge them.

It basically rebuilds the object tree using string manipulations over
the "str(expression)" output. It is a hack, like the old lambdify in
this regard (the code is documented in detail, however).

For an example that does not work in the old lambdify:
lambdify(y, Sum(x**y, (x, 1, oo)))(-2)

krastano...@gmail.com

Jul 1, 2012, 5:02:19 AM
to sy...@googlegroups.com
> For an example that does not work in the old lambdify
> lambdify(y, Sum(x**y, (x, 1, oo)))(-2)

It works in the new lambdify, because

1. the namespace for eval() is built step by step out of the current
expression (the old lambdify just dumped a lot of different objects into
the namespace and hoped it would work)

2. things are wrapped in float(), complex() or ().evalf() as necessary

Joachim Durchholz

Jul 1, 2012, 5:58:51 AM
to sy...@googlegroups.com
On 01.07.2012 10:58, krastano...@gmail.com wrote:
> It basically rebuilds the object tree using string manipulations over
> the "str(expression)" output. It is a hack, like the old lambdify in
> this regard (the code is documented in details however).

It could be prone to breakage:
str(expression) might be changed by people who are not aware that
lambdify depends on things like parseability and adherence to naming
conventions in str().

A lambdify of whatever variant that traced the object tree directly
would be much preferable to parsing the str(expression) output.

krastano...@gmail.com

Jul 1, 2012, 6:16:20 AM
to sy...@googlegroups.com
> It could be prone to breakage.
> str(expression) might be changed by people who're not aware that lambdify
> depends on things like parseability and adherence to naming conventions in
> str().
>
> A lambdify of whatever variant that traced the object tree directly would be
> much preferable to parsing the str(expression) output.

I agree completely. However, if we stick to using the eval() trick,
there will always be some moment when a string is used.

Joachim Durchholz

Jul 1, 2012, 6:44:48 AM
to sy...@googlegroups.com
Hmm... I missed that one. (No surprise there :-) )
What does it do? And what were the reasons for deciding to use it?

krastano...@gmail.com

Jul 1, 2012, 7:05:09 AM
to sy...@googlegroups.com
If I understand the commit logs and the blog posts of the
GSoC student who did this some years ago correctly, sympy's evaluation to
floats was too slow to use for plotting. The solution was to
"compile" expressions down to standard-precision, non-symbolic numpy or
plain Python math. The horrifying solution employed both in the old lambdify
and in my code is to use eval() and some string processing. This is
(or is it not?) the best workaround found for the fact that
Python function calls are inherently slow (which prohibits the use of
nested lambdas instead of eval). I believe that this can be called
"imitating a closure".

I suppose this makes you, as a professional programmer, get a
headache over the fragile design :) I am sorry. I am not particularly
proud of this part of my code.

Joachim Durchholz

Jul 1, 2012, 8:40:53 AM
to sy...@googlegroups.com
On 01.07.2012 13:05, krastano...@gmail.com wrote:
> I suppose that this makes you, as a professional programmer, get a
> headache over the fragile design :)

Hehe, sort of.
Not that I should be too critical of that; I have certainly written my
share of ugly code, mostly due to performance constraints.

> The horrifying solution employed both in the old lambdify and in my
> code is to use eval() and some string processing. This is (or not?)
> the optimal workaround that was found around the fact that python
> function calls are inherently slow (thus prohibiting the use of
> nested lambdas instead of eval).

I'm rather surprised by such a finding.
I mean, eval() can't be working much differently under the hood: in the
end, you still need to call an evaluator function on each leaf and
internal node of the expression tree.
Or, putting it another way: whatever eval() is doing under the
hood, it should be faster if some Python code iterated over all nodes of
the expression tree and made the same calls, because the eval-parse
detour is cut out (except that one approach might be doing more work
in C than the other, which could cancel all savings; still, the
difference shouldn't be *that* great).

krastano...@gmail.com

Jul 1, 2012, 9:19:54 AM
to sy...@googlegroups.com
> Or, putting it another way: Whatever calls eval() is doing under the hood,
> it should be faster if some Python code iterated over all nodes of the
> expression tree and did the same calls. Actually it should be faster because
> the eval-parse detour is cut out (except one approach might be doing more
> work in C than the other, which could cancel all savings - still, the
> difference shouldn't be *that* great).

I do not think that this was ever tried. The original logic was
something like "multiprecision evalf is too slow, so translate
everything into numpy". It is just that at some point it was decided to
do the translation the "lambdify" way instead of what you just
suggested (and Aaron suggested mostly the same thing when he proposed
an "expr.evalf(library=numpy)" flag).

Deprecating lambdify and refactoring everything to use a new "library"
keyword argument (or something similar) would be great, though.

Joachim Durchholz

Jul 1, 2012, 10:02:58 AM
to sy...@googlegroups.com
On 01.07.2012 15:19, krastano...@gmail.com wrote:
> Deprecating lambdify and refactoring everything to use a new "library"
> keyword argument (or something similar) will be great though.

Sounds reasonable.

Aaron Meurer

Jul 1, 2012, 10:54:53 AM
to sy...@googlegroups.com
I see. So basically one has to convert the SymPy expression into a
literal Python closure in order to be efficient.

The only other way I know of to do this is to use the ast module.
That should be more robust than string processing, but probably a
little harder to do. The code converting SymPy -> ast could probably
be useful for other things as well, though.
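Building the function via the ast module might look roughly like this (a hedged sketch, not existing sympy code; the node layout follows CPython 3.8+):

```python
import ast
import math

def math_call(name, arg):
    # Build an AST node for math.<name>(arg), e.g. math.sin(x).
    return ast.Call(
        func=ast.Attribute(
            value=ast.Name(id="math", ctx=ast.Load()),
            attr=name, ctx=ast.Load()),
        args=[arg], keywords=[])

# AST for: lambda x: math.sin(x) + math.cos(x) -- built node by node,
# with no str(expression) parsing involved.
lam = ast.Expression(ast.Lambda(
    args=ast.arguments(
        posonlyargs=[], args=[ast.arg(arg="x")], vararg=None,
        kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]),
    body=ast.BinOp(
        left=math_call("sin", ast.Name(id="x", ctx=ast.Load())),
        op=ast.Add(),
        right=math_call("cos", ast.Name(id="x", ctx=ast.Load())))))
ast.fix_missing_locations(lam)

f = eval(compile(lam, "<ast>", "eval"), {"math": math})
print(f(5.0))  # same value as math.sin(5.0) + math.cos(5.0)
```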

Aaron Meurer


Ondřej Čertík

Jul 1, 2012, 2:23:29 PM
to sy...@googlegroups.com
Exactly. That is one approach, using pure Python.

The other (orthogonal) approach is to use numpy, exploiting the
fact that in plotting you usually need to evaluate things at many points
at once; there you simply use numpy to do the quick evaluation
for many points at once, and you can (probably?) live with the Python/SymPy
overhead of function calls. Then no eval() is necessary, just regular SymPy
methods.

However, I think we need both approaches, because in many use cases
(e.g. numerical ODE integration) you don't know the points in advance,
so the numpy approach doesn't provide enough speedup; the SymPy
overhead will kill you.

>
> The only other way I know how to do this is to use the ast module.
> That should be more robust than string processing, but probably a
> little harder to do. The code converting SymPy -> ast could probably
> be useful for other things as well, though.

Ah, yes! So if you have the code as an AST, is it possible to compile it to Python
bytecode? That would be the way to go, for both the Python and the numpy approach.

The AST can (hopefully) be built using SymPy methods --- clean and robust,
i.e. each SymPy class would have a method like "get_ast" or something. Depending
on the arguments to get_ast(), you would determine whether to use Python's
built-in math, or numpy, or any other library. And then you simply compile
the AST.
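A hypothetical mini version of that protocol ("get_ast" is the name suggested above, not an existing sympy method; the classes here are toy stand-ins, not sympy's):

```python
import ast
import math

# Each expression class knows how to emit its own AST, parameterized
# by the backend module name ("math", "numpy", ...).
class Symbol:
    def __init__(self, name):
        self.name = name
    def get_ast(self, lib):
        return ast.Name(id=self.name, ctx=ast.Load())

class Sin:
    def __init__(self, arg):
        self.arg = arg
    def get_ast(self, lib):
        return ast.Call(
            func=ast.Attribute(value=ast.Name(id=lib, ctx=ast.Load()),
                               attr="sin", ctx=ast.Load()),
            args=[self.arg.get_ast(lib)], keywords=[])

def compile_expr(expr, lib_name, lib_module):
    # Wrap the expression AST in "lambda x: ..." and compile to bytecode.
    lam = ast.Expression(ast.Lambda(
        args=ast.arguments(
            posonlyargs=[], args=[ast.arg(arg="x")], vararg=None,
            kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]),
        body=expr.get_ast(lib_name)))
    ast.fix_missing_locations(lam)
    return eval(compile(lam, "<ast>", "eval"), {lib_name: lib_module})

f = compile_expr(Sin(Symbol("x")), "math", math)  # or "numpy", numpy
print(f(0.0))  # 0.0
```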

If this works, that would indeed be the best.

Ondrej

krastano...@gmail.com

Jul 1, 2012, 2:35:05 PM
to sy...@googlegroups.com
The AST stuff may be interesting and amusing; however, before any
actual benchmarking it seems like an overly complicated solution.

.evalf() is slow not because it traverses the expression tree (any
evaluation method would do this; closures would not make fewer function
calls, which is the expensive part), but because it uses very
complicated logic to ensure precision. A simple flag "use numpy/python
math/whatever" that uses, for instance, methods called '_evalf_numpy'
and '_evalf_cmath' would be **way** simpler and presumably just as
fast.

And you would not need to cater specifically to cases where only half
of the expression can be translated (special functions); you just
default to multiprecision evalf() and throw away the high-precision
part.
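The flag idea could be as small as this (a sketch; "_evalf_cmath" is the method name floated above, not a real sympy API, and the classes are toy stand-ins):

```python
import cmath

# Sketch of per-backend evaluation methods with a fallback: nodes that
# have no fast path would fall back to the (slow) multiprecision logic.
class Symbol:
    def __init__(self, name):
        self.name = name
    def _evalf_cmath(self, subs):
        return complex(subs[self.name])

class Sin:
    def __init__(self, arg):
        self.arg = arg
    def _evalf_cmath(self, subs):
        return cmath.sin(self.arg._evalf_cmath(subs))

def evalf(expr, subs, library="cmath"):
    # Dispatch on the requested backend; fall back if unsupported.
    method = getattr(expr, "_evalf_" + library, None)
    if method is not None:
        return method(subs)
    raise NotImplementedError("fall back to multiprecision evalf() here")

print(evalf(Sin(Symbol("x")), {"x": 0.0}))  # 0j
```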

Aaron Meurer

Jul 1, 2012, 6:07:24 PM
to sy...@googlegroups.com
Yep. Use the built-in compile() function. See
http://pythonic.pocoo.org/2008/3/29/ast-compilation-from-python for a
simple example.

Aaron Meurer

> That would be the way to go, for both Python and numpy approach.
>
> The AST can (hopefully) be built using SymPy methods --- clean and robust,
> i.e. each SymPy class will have a method like "get_ast" or something. Depending
> on the arguments to get_ast(), you would determine whether to use Python
> built-in math, or numpy, or any other library. And then you simply compile
> the AST.
>
> If this works, that would indeed be the best.
>
> Ondrej
>

Aaron Meurer

Jul 1, 2012, 11:34:53 PM
to sy...@googlegroups.com
I did the following test. First, I applied this patch:

diff --git a/sympy/core/add.py b/sympy/core/add.py
index d2cac9a..d5c3855 100644
--- a/sympy/core/add.py
+++ b/sympy/core/add.py
@@ -271,6 +271,9 @@ def as_coefficients_dict(a):
             di.update(d)
         return di

+    def eval_with_numpy(self, var, value):
+        return sum(i.eval_with_numpy(var, value) for i in self.args)
+
     @cacheit
     def as_coeff_add(self, *deps):
         """
diff --git a/sympy/functions/elementary/trigonometric.py b/sympy/functions/elementary/trigonometric.py
index 70e9012..8c3f460 100644
--- a/sympy/functions/elementary/trigonometric.py
+++ b/sympy/functions/elementary/trigonometric.py
@@ -1,3 +1,5 @@
+import numpy
+
 from sympy.core.add import Add
 from sympy.core.numbers import Rational
 from sympy.core.basic import C, sympify, cacheit
@@ -165,6 +167,11 @@ def inverse(self, argindex=1):
         """
         return asin

+    def eval_with_numpy(self, var, value):
+        if self.args[0] != var:
+            raise NotImplementedError
+        return numpy.sin(value)
+
     @classmethod
     def eval(cls, arg):
         if arg.is_Number:
@@ -401,6 +408,11 @@ def fdiff(self, argindex=1):
     def inverse(self, argindex=1):
         return acos

+    def eval_with_numpy(self, var, value):
+        if self.args[0] != var:
+            raise NotImplementedError
+        return numpy.cos(value)
+
     @classmethod
     def eval(cls, arg):
         if arg.is_Number:

Then, I ran the following timings:

In [1]: a = sin(x) + cos(x)

In [3]: a.eval_with_numpy(x, 5)
Out[3]: -0.67526208919991215

In [4]: %timeit a.eval_with_numpy(x, 5)
10000 loops, best of 3: 58.5 us per loop

In [9]: import numpy

In [11]: b = lambdify(x, a, numpy)

In [13]: b(5)
Out[13]: -0.67526208919991215

In [12]: %timeit b(5)
100000 loops, best of 3: 9.64 us per loop

So you can see that the lambdify way is more than 6 times faster. The
interface I made up for eval_with_numpy is very bad; a better one
would involve many more function calls and more complex logic in
order to be properly modular. Also, the numpy import would be
inside the functions, which would make it even slower.

6x speedup makes a big difference if you are going to make an
evaluation thousands or even millions of times.

It's not too hard to see why this happens. The eval_with_numpy way
involves logic and three recursive function calls. The lambdify way
is just a fancy way of writing b = lambda x: numpy.sin(x) +
numpy.cos(x), which, as Ondrej noted, only involves two numpy
function calls and a float addition.

Aaron Meurer

krastano...@gmail.com

Jul 2, 2012, 6:24:08 AM
to sy...@googlegroups.com
> 6x speedup makes a big difference if you are going to make an
> evaluation thousands or even millions of times.

In fact, if you do the evaluation thousands or millions of times, the
difference will not be noticeable, because what takes a long time is
the evaluation of the array inside numpy.sin, numpy.cos and numpy.add.
It is the difference between the two times that is constant, not their
ratio.

In [22]: a = sin(x)+cos(x)

In [23]: b = lambdify(x, a, numpy)

In [24]: array_short = numpy.linspace(0,10,num=100)

In [25]: array_long = numpy.linspace(0,10,num=10000)

In [26]: %timeit a.eval_with_numpy(x, array_short)
10000 loops, best of 3: 65.9 us per loop

In [27]: %timeit a.eval_with_numpy(x, array_long)
1000 loops, best of 3: 1.05 ms per loop

In [28]: %timeit b(array_short)
100000 loops, best of 3: 15.7 us per loop

In [29]: %timeit b(array_long)
1000 loops, best of 3: 988 us per loop

As you can see, the difference is a constant of about 40 us, not a ratio of 6x.

Another argument against the original benchmark: you did not take into
account the call to lambdify itself (in plotting, for example, the
lambdified expression is used only once; the new adaptive sampling
changes this). The call to lambdify takes about as long as the
evaluation of the long array itself (10000 evaluations).

Of course, this is only true for numpy, not for Python's math and
cmath; there one would indeed have a 6x slowdown.

And all of this discussion is about CPython, not something with a JIT like PyPy.

In conclusion, there are some places where playing with closures will
be useful (all the non-numpy stuff, for instance), but there is no
obvious speed gain in the numpy case.

Although, if someone wants such high performance, shouldn't they be
pointed to the autowrap Cython and Fortran implementations that we
have? That seems like a much cleaner solution (far fewer hacks).

Finally, if we implement the eval_with_numpy stuff, it will also
close issue 537: http://code.google.com/p/sympy/issues/detail?id=537

Ondřej Čertík

Jul 2, 2012, 7:44:46 PM
to sy...@googlegroups.com
On Mon, Jul 2, 2012 at 3:24 AM, krastano...@gmail.com
<krastano...@gmail.com> wrote:
>> 6x speedup makes a big difference if you are going to make an
>> evaluation thousands or even millions of times.
>
> In fact, if you do the evaluation thousands or millions of times, the
> difference will not be noticeable, because what will take long time is
> the evaluation of the array inside numpy.sin, numpy.cos and numpy.add.

Let's label the numpy approach 2) and the direct Python math module
approach 1).

Approach 2) is only applicable when you know all the
points in advance,
while approach 1) can be used in any application where you need fast
evaluation of expressions at points that you don't know in advance.

Also, approach 2) depends on numpy, while 1) only needs pure Python.


> The difference between the two times is what is constant, not their
> ratio.
>
> In [22]: a = sin(x)+cos(x)
>
> In [23]: b = lambdify(x, a, numpy)
>
> In [24]: array_short = numpy.linspace(0,10,num=100)
>
> In [25]: array_long = numpy.linspace(0,10,num=10000)
>
> In [26]: %timeit a.eval_with_numpy(x, array_short)
> 10000 loops, best of 3: 65.9 us per loop
>
> In [27]: %timeit a.eval_with_numpy(x, array_long)
> 1000 loops, best of 3: 1.05 ms per loop
>
> In [28]: %timeit b(array_short)
> 100000 loops, best of 3: 15.7 us per loop
>
> In [29]: %timeit b(array_long)
> 1000 loops, best of 3: 988 us per loop
>
> As you can see it is a difference a constant of about 40us, not a ratio of 6x.

Yes, this is approach 2). I agree that using lambdify for 2) will not,
except in special cases, give a big speedup.

>
> Another argument against the original benchmark: You did not take into
> account the call to lambdify itself (in plotting, for example, the
> lambdified expression is used only once (the new adaptive sampling
> changes this)). The call to lambdify takes about as long as the
> evaluation of the long array itself (10000 evaluations).
>
> Of course, this is only true for numpy, not for python.math and
> python.cmath. There one would indeed have 6x slowdown.

Yes, that's case 1). There lambdify does provide a significant speedup.
The 6x speedup is only for the simple case sin(x)+cos(x); for complicated
expressions, I expect the speedup to be much larger.

As another data point: lambdify() was introduced at a time
when evalf() used just Python floats, as far as I know,
and it did provide a significant speedup (using approach 1).

>
> And all the discussion is about CPython, not something with a JIT like pypy.
>
> In conclusion, there are some places where playing with closures will
> be useful (all the non numpy stuff for instance), however there is no
> obvious speed gain in the numpy case.
>
> Although, I someone wants such a high performance, shouldn't he be
> pointed to the autowrap cython and fortran implementations that we
> have? This seems like a much cleaner solution (much less hacks).

Yes, ultimately, if you want such things to be fast, the best is to use
Fortran, and you can actually use both approaches 1) and 2)
in Fortran. In gfortran, approach 2) is sometimes faster,
probably because the compiler is somehow able
to generate faster code; for details, see for example [1].

For heavy array-oriented numerics (in double precision), I fully
recommend Fortran. I use it almost every day and the code is pretty
much as fast as one can ever get, with minimal effort.

However, while we can generate Fortran code, compile it and use
that for plotting, in many cases it is overkill, and a simple approach
1) or 2) provides enough speed to get the job done.

>
> Finally, if we implement the eval_with_numpy stuff, this will also
> close issue 537 http://code.google.com/p/sympy/issues/detail?id=537



Ondrej

[1] http://technicaldiscovery.blogspot.com/2011/07/speeding-up-python-again.html

Joachim Durchholz

Jul 3, 2012, 2:30:08 AM
to sy...@googlegroups.com
On 03.07.2012 01:44, Ondřej Čertík wrote:
> However, while we can generate Fortran code, compile it and use
> that for plotting, in many cases it is an overkill and a simple approach
> 1) or 2) provides enough speed to get the job done.

I'm not sure this is the right kind of overkill to look at here.

If you use a compiler to deal with the rare cases, you can apply the
same technique to the common cases. Yes, it's run-time overkill, but if
using a compiler would work for all cases, providing a separate
implementation for the common cases would be programmer-time overkill.

krastano...@gmail.com

Jul 3, 2012, 5:09:27 AM
to sy...@googlegroups.com
>> However, while we can generate Fortran code, compile it and use
>> that for plotting, in many cases it is an overkill and a simple approach
>> 1) or 2) provides enough speed to get the job done.
>
>
> I'm not sure that this is the right kind of overkill we're looking at here.

There is one more thing. If a user is **not** relying at least on
numpy for evaluation over arrays, or on autowrap for evaluation at many
unpredictable points, he is just doing it wrong.

The one exception is code inside sympy, which it was agreed at some point
should not require more than a Python interpreter. However, if you are
using nsolve with lambdify you are again doing it wrong, because the
only advantage that our nsolve has over scipy is that it can work in
arbitrary precision. If you don't need that, just work in scipy; do not
kill precision for performance inside sympy. The other example is
plotting, but because of matplotlib this already requires numpy.

Hence, with all the other stuff that needs refactoring in sympy, it
does not seem wise IMO to spend time on a complicated AST-based module
when we already have the low-performance evalf and the easy-to-implement,
high-performance eval_to_numpy idea.

Ondřej Čertík

Jul 3, 2012, 4:00:41 PM
to sy...@googlegroups.com
On Tue, Jul 3, 2012 at 2:09 AM, krastano...@gmail.com
<krastano...@gmail.com> wrote:
>>> However, while we can generate Fortran code, compile it and use
>>> that for plotting, in many cases it is an overkill and a simple approach
>>> 1) or 2) provides enough speed to get the job done.
>>
>>
>> I'm not sure that this is the right kind of overkill we're looking at here.
>
> There is one more thing. If a user is **not** relying at least on
> numpy for evaluation of arrays or on autowrap for evaluation of many
> unpredictable points, he is just doing it wrong.
>
> The one exception is code inside sympy which was agreed at some point,
> should not require more than a python interpreter. However, if you are
> using nsolve with lambdify you are again doing it wrong, because the
> only advantage that our nsolve has over scipy is that it can work in
> arbitrary precision. If you don't need it, just work in scipy, do not
> kill precision for performance inside sympy. The other example is
> plotting, however because of matplotlib this already requires numpy.

There is a difference between calling matplotlib, numpy or scipy,
and having a working Fortran or C compiler on your machine.

In particular, in scipy's ODE integration, you can provide a simple Python
function to get the ODE integrated. I am pretty sure that our approach 1)
will be quite a bit faster than a high-level eval_to_numpy().

Having to require a working C/Fortran compiler to do any plotting
or ODE solving is overkill in my opinion, and it will make
things more complicated for the end user.

(Allowing the use of C/Fortran as well is of course great for advanced
users.)

>
> Hence, with all the other stuff that needs refactoring in sympy, it
> does not seem wise IMO to spend time on complicated AST using module,
> when we already have low-performance evalf and easy-to-implement high
> performance eval_to_numpy idea.

We should implement this eval_to_numpy(); that shouldn't be too difficult.

Ondrej

Vinzent Steinberg

Jul 6, 2012, 5:37:53 AM
to sy...@googlegroups.com
On Tuesday, July 3, 2012 at 11:09:27 AM UTC+2, Stefan Krastanov wrote:
The one exception is code inside sympy which was agreed at some point,
should not require more than a python interpreter. However, if you are
using nsolve with lambdify you are again doing it wrong, because the
only advantage that our nsolve has over scipy is that it can work in
arbitrary precision. If you don't need it, just work in scipy, do not
kill precision for performance inside sympy. The other example is
plotting, however because of matplotlib this already requires numpy.

Actually nsolve() is just a simple wrapper that connects the symbolic with the numerical world. It's syntactic sugar to save you from using lambdify() and mpmath.findroot(). In principle, it could also support the scipy root-finding algorithms. Maybe it should.

Vinzent

Vinzent Steinberg

Jul 6, 2012, 5:59:44 AM
to sy...@googlegroups.com
On Tuesday, July 3, 2012 at 10:00:41 PM UTC+2, Ondřej Čertík wrote:
There is a difference between calling matplotlib, numpy or scipy,
and having a working fortran or C compiler working at your machine.

In particular, in scipy ODE, you can provide a simple Python function
to get the ODE integrated. I am pretty sure that our approach 1)
will be quite faster than high level eval_to_numpy().

Having to require a working C/Fortran compiler to do any plotting
or ODE solving is an overkill in my opinion and it will make
things more complicated for the end user.

(Allowing to also use C/Fortran is of course great for advanced
users.)

We already have sympy/utilities/compilef.py for compiling C code. It's a bit of a hack, but it only depends on libtcc (LGPL), which is quite small. Last time I tried it, it was a bit faster than numpy for the evaluation of complicated functions (including the overhead of compiling the C code).

The ugly part is that you have to get a development version of TCC (maybe it works with the current stable release; it did not back then) and compile libtcc. (We could, however, provide the binary.)

Vinzent