Just wanted to pop in and say that I really like this conversation.
Question: how fast could the SymPy core be if we pulled some of the assumptions logic out until a final step at the end? What stops the core from reaching polys speed?
--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sympy+un...@googlegroups.com.
To post to this group, send email to sy...@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
To view this discussion on the web visit https://groups.google.com/d/msgid/sympy/CADDwiVBUr%3DTGqE8OdfUQSnPqP-6Udb_98iuP2iXpCYyNwMMhKw%40mail.gmail.com.
>>>>>>> The goals are written above. I am myself concentrating on speed,
>>>>>>> that's what I really
>>>>>>> want to nail down.
>>>>>>
>>>>>>
>>>>>> I'm somewhat sceptical about this.
>>>>>> A conversion to C++ will give a linear improvement.
>>>>>> Better algorithms can improve the big-Oh class.
>>>>>> Unless algorithmic improvements have been exhausted, this would be premature
>>>>>> optimization. (Are algorithmic improvements exhausted yet?)
>>>>>
>>>>> That is true. I usually prefer to use faster algorithms.
>>>>>
>>>>> But to be the devil's advocate, there are two issues with this line of thinking:
>>>>>
>>> One of my worries is not listed here, which is that you are doing
>>> things completely backwards from good software design with CSymPy,
>>> which is getting speed first, and something that works later.
>>
For sympy.polys in this example, is it using the sparse polynomial representation? I.e., it stores the symbols (x, y, z) and then maps each monomial's exponent tuple to its coefficient in a dictionary, e.g.:
3*x^2*z + 2*x -> {(2, 0, 1): 3, (1, 0, 0): 2}
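For what it's worth, here is a plain-Python sketch of arithmetic on that kind of representation (the function name is hypothetical, not the actual sympy.polys API):

```python
# Sparse multivariate polynomial sketch: a polynomial is a dict mapping
# exponent tuples to coefficients, e.g. over the symbols (x, y, z):
#   3*x^2*z + 2*x  ->  {(2, 0, 1): 3, (1, 0, 0): 2}

def sparse_mul(p, q):
    """Multiply two sparse polynomials given as {exponents: coeff} dicts."""
    result = {}
    for mono_p, coeff_p in p.items():
        for mono_q, coeff_q in q.items():
            # Monomials multiply by adding their exponent vectors.
            mono = tuple(a + b for a, b in zip(mono_p, mono_q))
            result[mono] = result.get(mono, 0) + coeff_p * coeff_q
    # Drop terms that cancelled to zero.
    return {m: c for m, c in result.items() if c != 0}

# (3*x^2*z + 2*x) * x  ->  3*x^3*z + 2*x^2
p = {(2, 0, 1): 3, (1, 0, 0): 2}
x = {(1, 0, 0): 1}
print(sparse_mul(p, x))  # {(3, 0, 1): 3, (2, 0, 0): 2}
```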
Things like Function threw me off when I melded SymPy with LogPy (my little logic programming system) or with Theano. It would be nice if others didn't have to run into these issues.
By the way, is there any implementation of MiniKanren in C++? Maybe in that case one could write some algorithms in a rule-based format (as in MiniKanren) and then load those rules either through SymPy and LogPy, or through CSymPy and some other parsing library (or maybe even via template metaprogramming). But I'm just speculating.
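For reference, the core of a MiniKanren-style system is just syntactic unification over terms with logic variables. A minimal pure-Python sketch (not LogPy's actual API; terms are nested tuples):

```python
# Minimal MiniKanren-style unification. Terms are atoms or tuples like
# ('sin', ('mul', 2, 'x')); logic variables are fresh Var objects.

class Var:
    """A logic variable; compared by identity, so each Var() is fresh."""

def walk(term, subst):
    """Follow variable bindings in the substitution."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution unifying a and b, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Match sin(2*x) against the pattern sin(w):
w = Var()
s = unify(('sin', ('mul', 2, 'x')), ('sin', w), {})
print(s[w])  # ('mul', 2, 'x')
```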
I do not know what exactly these are.
> We only need things
like multi-pattern unification and AC unification.
I read that AC unification can be polynomial, which sounds like a potential performance bottleneck; do we really need it?
I'm not sure what the consensus on dependencies is, but I myself think that a dependency is okay if we can either make sure that a specific, tested version is used, or if we expect the APIs to be stable and the need to fork for bug fixes or extensions is nonexistent.
Note that there are mature efficient projects that do this (and lots of
other things) already. Elan and Stratego/XT come to mind. They're heavy
dependencies though.
I'm assuming that this is what was meant by "multi-pattern unification".
Given an expression (like sin(2*x) + cos(2*x)), we need to match it against many possible patterns (like every known trig identity).
It's essentially a search in a potentially infinite decision tree. I think that's exponential at the very least, possibly even undecidable.
>> It's possible to store all of the patterns in a good data structure so that we can check them all simultaneously (see Trie for something similar). We need to do this matching in a way that is aware of commutative operators (like +). Naive solutions to this are very slow. There exist sophisticated solutions.

Can't we deal with commutativity by normalizing the tree?
E.g. with polynomials, it's already commonplace to put the higher exponents to the left; set up a total order over expressions that includes this and similar conventions.
Similarly for associative operators: there we turn A+(B+C) into plus(A,B,C) and don't have to worry about burdening the unification algorithm with semantic subtleties.
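A sketch of that normalization, assuming expressions are nested tuples: flatten nested applications of the same associative operator, then sort the arguments under some total order, so commutative matching reduces to plain syntactic matching:

```python
# Canonicalize associative-commutative operators: flatten nested
# applications, then sort arguments under a total order (here: repr,
# purely as a stand-in for a proper term order).

ASSOC_COMM = {'plus', 'times'}

def canonicalize(expr):
    """expr is an atom (str/int) or a tuple ('op', arg1, arg2, ...)."""
    if not isinstance(expr, tuple):
        return expr
    op, args = expr[0], [canonicalize(a) for a in expr[1:]]
    if op in ASSOC_COMM:
        # Associativity: splice in the arguments of nested same-op nodes.
        flat = []
        for a in args:
            if isinstance(a, tuple) and a[0] == op:
                flat.extend(a[1:])
            else:
                flat.append(a)
        # Commutativity: sort into a canonical argument order.
        args = sorted(flat, key=repr)
    return (op, *args)

# A+(B+C) and (C+A)+B normalize to the same tree:
e1 = canonicalize(('plus', 'A', ('plus', 'B', 'C')))
e2 = canonicalize(('plus', ('plus', 'C', 'A'), 'B'))
print(e1 == e2)  # True
```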
For trig simplification, there is a paper that I think uses some systematic way to simplify it, so it might be that you don't need to check all the trig identities.
I think that's the general approach: do not exhaust the decision tree, follow a search strategy. This applies to trig, integrals, polynomial simplification, and probably almost all areas of mathematics.
Can this be captured as a simple priority value on each substitution rule?
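A minimal sketch of that idea, assuming rules are (priority, predicate, rewrite) triples and lower priorities are tried first; this is a greedy strategy, not how sympy.strategies actually spells it:

```python
# Priority-guided rewriting: at each step, try rules in priority order,
# apply the first one that matches, and repeat until no rule fires.

def rewrite(expr, rules, max_steps=100):
    """rules: list of (priority, matches, apply); lower priority tried first."""
    for _ in range(max_steps):
        for _, matches, apply in sorted(rules, key=lambda r: r[0]):
            if matches(expr):
                expr = apply(expr)
                break
        else:
            return expr  # no rule matched: normal form reached
    return expr

# Toy rules on integers standing in for expressions:
rules = [
    (1, lambda e: e % 2 == 0, lambda e: e // 2),  # prefer halving
    (2, lambda e: e > 1, lambda e: e - 1),        # fallback: decrement
]
print(rewrite(10, rules))  # 10 -> 5 -> 4 -> 2 -> 1
```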
I used the strategies module to pull out the tree-search stuff. See either sympy.strategies or github.com/logpy/strategies. It is used in fu simp here.