Why wouldn't simple type based dispatch work?
You might be right, I just want to understand the problem more.
To answer Aaron's question:
This, and another example is x + O(x). Let's stick to oo + 3.
On Wed, Jul 3, 2013 at 12:58 PM, Aaron Meurer <asme...@gmail.com> wrote:
> So, going back to what we discussed the first time we met in Los
> Alamos, how would you reimplement something like the oo logic so that
> it lives entirely in the Infinity class, not in Add.flatten (say for
> simplicity, oo + 3 should go to oo, but oo + 3*I should remain as oo +
> 3*I)?
This is a very good question and it is one of the details that I don't
know the answer to 100% yet.
But I feel it is solvable.
I think the best would be to create a demo (from scratch) where we can
play with these ideas. Hopefully I'll get to this eventually.
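For what it's worth, here is one way the oo logic could live entirely on an Infinity class, with Add reduced to a generic fallback, in the spirit of the from-scratch demo mentioned above. All names here (Expr, Integer, ImaginaryUnit, _add, add) are hypothetical stand-ins, not sympy's actual API:

```python
class Expr:
    def _add(self, other):
        # default: no special rule, let the caller build an Add
        return NotImplemented

class Integer(Expr):
    def __init__(self, n):
        self.n = n

class ImaginaryUnit(Expr):
    pass

class Infinity(Expr):
    def _add(self, other):
        # oo + (real integer) -> oo; anything else stays unevaluated
        if isinstance(other, Integer):
            return self
        return NotImplemented

class Add(Expr):
    def __init__(self, *args):
        self.args = args

def add(a, b):
    """Generic addition: ask each operand for a rule before flattening."""
    for x, y in ((a, b), (b, a)):
        result = x._add(y)
        if result is not NotImplemented:
            return result
    return Add(a, b)  # no rule matched: keep the sum symbolic

oo = Infinity()
assert add(oo, Integer(3)) is oo                  # oo + 3 -> oo
assert isinstance(add(oo, ImaginaryUnit()), Add)  # oo + I stays an Add
```

The same hook would let other classes (like Order) register their own absorption rules without touching Add.flatten.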
On Wed, Jul 3, 2013 at 5:40 PM, Ronan Lamy <ronan...@gmail.com> wrote:
> 2013/7/3 Ondřej Čertík <ondrej...@gmail.com>
>>
>> On Wed, Jul 3, 2013 at 1:48 PM, Aaron Meurer <asme...@gmail.com> wrote:
>>
>> Why wouldn't simple type based dispatch work?
>> You might be right, I just want to understand the problem more.
>>
>> To answer Aaron's question:
>>
>> On Wed, Jul 3, 2013 at 12:58 PM, Aaron Meurer <asme...@gmail.com> wrote:
>> > So, going back to what we discussed the first time we met in Los
>> > Alamos, how would you reimplement something like the oo logic so that
>> > it lives entirely in the Infinity class, not in Add.flatten (say for
>> > simplicity, oo + 3 should go to oo, but oo + 3*I should remain as oo +
>> > 3*I)?
>>
>> This, and another example is x + O(x). Let's stick to oo + 3.
>
>
> x + O(x) is a bad example, because it should really not be represented by an
> Add.
>
>> This is a very good question and it is one of the details that I don't
>> know the answer to 100% yet.
>> But I feel it is solvable.
>>
>> I think the best would be to create a demo (from scratch) where we can
>> play with these ideas. Hopefully I'll get to this eventually.
>
>
> How about this: https://github.com/rlamy/sympy/commits/binop ?
It's tough to mull through a list of commits, so let me just ask you
some questions about it (I know you posted this branch a while ago,
but I forgot the details).
- Does it handle nary operations or just binary?
- What about *args like Mul.flatten?
- If two types register dispatchers against one another, what are the
precedence rules?
>> - What about *args like Mul.flatten?
>
>
> It doesn't do anything about it.

Ditto here.

>> - If two types register dispatchers against one another, what are the
>> precedence rules?
>
>
> There are no precedence rules. If the dispatcher doesn't find a unique most
> derived implementation (e.g. if it finds implementations for types (A1, A2)
> and (B1, B2) such that A1 strictly subclasses B1 and B2 strictly subclasses
> A2) then it raises an error.

OK, but there is some kind of precedence on subclasses, right?
On Wed, Jul 3, 2013 at 4:40 PM, Ronan Lamy <ronan...@gmail.com> wrote:
> 2013/7/3 Ondřej Čertík <ondrej...@gmail.com>
>>
>> On Wed, Jul 3, 2013 at 1:48 PM, Aaron Meurer <asme...@gmail.com> wrote:
>>
>> Why wouldn't simple type based dispatch work?
>> You might be right, I just want to understand the problem more.
>>
>> To answer Aaron's question:
>>
>> On Wed, Jul 3, 2013 at 12:58 PM, Aaron Meurer <asme...@gmail.com> wrote:
>> > So, going back to what we discussed the first time we met in Los
>> > Alamos, how would you reimplement something like the oo logic so that
>> > it lives entirely in the Infinity class, not in Add.flatten (say for
>> > simplicity, oo + 3 should go to oo, but oo + 3*I should remain as oo +
>> > 3*I)?
>>
>> This, and another example is x + O(x). Let's stick to oo + 3.
>
>
> x + O(x) is a bad example, because it should really not be represented by an
> Add.
So the Order class would simply contain both the expression and the
"x", so for example to put this into sympy:
x^2 + x + O(x)
the user would write:
Order(x^2 + x, x)
? I think that's a good idea.
>
>> This is a very good question and it is one of the details that I don't
>> know the answer to 100% yet.
>> But I feel it is solvable.
>>
>> I think the best would be to create a demo (from scratch) where we can
>> play with these ideas. Hopefully I'll get to this eventually.
>
>
> How about this: https://github.com/rlamy/sympy/commits/binop ?

Yes! Thanks. Here is how to view the changes once you are in this branch:

git diff c84e5df
So I can see that you defined the __pow__ operator in Expr to return
power(a, b) instead of the Pow(a, b) class directly. The power(a, b)
is just a double-dispatched function. Then you changed all Pow(a, b)
occurrences in sympy to a**b, which then gets dispatched to power(a, b).
I assume you could have also just changed Pow -> power?
Finally power() is then defined as follows:
@power.define(Expr, Expr)
def _power_basecase(x, y):
    return Pow(x, y)

@power.define(Expr, One)
def _pow_Expr_One(x, one):
    return x

@power.define(One, Expr)
@power.define(One, NaN)
@power.define(One, One)
@power.define(One, Zero)
def _pow_One_Expr(one, x):
    return one
etc. (there are some more rules, not important here)
So from this it is clear that power(Expr, One) is used first if
available; otherwise power(Expr, Expr) is used as a backup plan.
Here are my questions:
* How is performance doing?
* Currently your dispatch implementation uses issubclass(c_left, left)
etc., which could be quite slow. Is there any way to first check the
types in a dictionary, and only if they are not there do the slow
inheritance-based dispatch that you implemented?
So for example, if you put in (Add, One), then on the first run it
would figure out that it should call (Expr, One), and this first run
might be slower; that's ok. But on subsequent runs it would simply
return it from the dictionary directly, so this should be very fast?
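The caching idea could look something like this (a sketch with made-up names, not Ronan's actual implementation): exact type pairs go into a dict, and the slow issubclass-based resolution runs only on a miss.

```python
class SlowDispatcher:
    """Resolves a (type, type) key by walking issubclass relations."""
    def __init__(self):
        self.impls = {}  # (type, type) -> function

    def define(self, t1, t2, fn):
        self.impls[(t1, t2)] = fn

    def resolve(self, key):
        c1, c2 = key
        matches = [(sig, fn) for sig, fn in self.impls.items()
                   if issubclass(c1, sig[0]) and issubclass(c2, sig[1])]
        if not matches:
            raise TypeError("no implementation for %r" % (key,))
        # crude "most derived wins" rule (ambiguity handling omitted)
        sig, fn = max(matches,
                      key=lambda m: len(m[0][0].__mro__) + len(m[0][1].__mro__))
        return fn

def cached(dispatcher):
    cache = {}
    def call(a, b):
        key = (type(a), type(b))
        impl = cache.get(key)
        if impl is None:               # first time: slow path
            impl = dispatcher.resolve(key)
            cache[key] = impl          # later calls: plain dict lookup
        return impl(a, b)
    return call

class Expr: pass
class One(Expr): pass

power = SlowDispatcher()
power.define(Expr, Expr, lambda x, y: ("Pow", x, y))
power.define(Expr, One, lambda x, one: x)

fast_power = cached(power)
x, one = Expr(), One()
assert fast_power(x, one) is x          # resolved slowly once, then cached
assert fast_power(x, Expr())[0] == "Pow"
```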
Currently we can't use __class__ for it, because:
In [9]: Symbol.__class__
Out[9]: sympy.core.assumptions.ManagedProperties
In [10]: Zero.__class__
Out[10]: sympy.core.singleton.Singleton
In [11]: One.__class__
Out[11]: sympy.core.singleton.Singleton
This is due to some sympy metaclass machinery or something.
But we can create some attribute like _sympy_class_, which would be a
string or a number, unique for each sympy class. User-defined types
would store the name of the class there, so that Expr, One, Zero, NaN
would all return a unique name.
* What are your other conclusions or impressions from implementing it?
Ondrej
I'm a bit concerned about that MRO usage.
MRO as a concept is fundamentally broken. That brokenness doesn't matter
as long as MRO doesn't matter, i.e. as long as no multiple inheritance
is in play - and I think (I hope!) that SymPy does not use it.
Hm. Not a problem if the same function isn't overridden in both Expr and
AssocOp, so maybe... hopefully...
... but yes, that's going to have to be considered for multiple dispatch.
Actually, the properties of the addition can vary according to what objects are inside the Add. If you have just numbers, then you get a number; with symbols, you get a summation of symbols; with infinities and complex numbers, more complex behavior.
In other parts of SymPy there are MatAdd and TensAdd to represent additions of matrices, tensors, etc., which have their own properties. Developers deemed that it would be too complicated to add rules to Add, so they wrote other Add-like objects. In other cases, like the aforementioned infinities, Add is still kept, by adding behavioural rules to it.
But isn't this breaking the logical correspondence of those two points in the bullet list? The plus sign is no longer mapped to a single object, while the properties of addition are sometimes handled by Add, sometimes by other objects.
What about using instead of this an approach closer to C++ templates? I mean, the Add class would always represent a plus sign, for every expression, while its properties are managed by templates, so, keeping a C++ syntax:
In this case, the properties and behaviour of the arguments of Add are managed by special classes.
The point is, the addition of new objects to an Add argument list, could change the template of the resulting new Add, e.g. Add(x, y) ==> Add<AssocOpT>(x, y), while Add(x, y) + oo ==> Add<InfinitiesT>(x, y, oo).
At this point, multiple dispatch could be defined so as to take templates into account, but I'm still unsure how this could work; I made up my mind to post this anyway. I'm not even sure that properties should be regarded as templates, but, in any case, such an external marker gives immediate information about special symbols inside Add.
So, in quantum mechanics, the exp( ) function could immediately catch that its Mul/Add content contains quantum mechanical operators, thus acting appropriately.
I was thinking about that. That AssocOp is interesting; it defines the commutative property of Add.
Mathematically, + has a thousand meanings, and we blur the
distinctions because they all act similarly. But at some point in a
CAS, you have to unblur any notational convenience that you would
otherwise make on paper.
> What about using instead of this an approach closer to C++ templates?
The problem with C++ is that many people do not know C++, and those that
believe they do sometimes misinterpret the definitions.
For that reason, it is usually better to stick with Python nomenclature
and concepts.
> I mean, the Add class would always represent a plus sign, for every
> expression, while its properties are managed by templates,
Here, the question would be how you'd represent those templates in Python.
I do not think that the proposal can be properly understood without that.
Add[AssocOp](x, y, z) ==> x + y + z
w = var('w', commutative=False)
Add[NonCommutative](x, w) ==> x + w
@dispatch(Add[NonCommutative], Add[InfinitiesT])
def f(x, y):
    # This function is called only when x is an Add with
    # non-commutative elements, while y is
    # an Add containing infinities.
    ...
Add(x, y) ===> Add[AssocOp](x, y)
Add(x, y) + Add(oo, 1) ===> Add(Add[AssocOp](x, y), Add[InfinitiesT](oo, 1))
                       ===> Add[InfinitiesT](x, y, oo, 1)
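The "template upgrade" rule above could be mimicked in plain Python with a tag recomputed on construction; a dispatcher could then key on (Add, flavor) pairs. This is only a sketch of the idea, and all names here, including the AssocOp/InfinitiesT strings, are made up:

```python
class Infinity:
    pass

class Add:
    def __init__(self, *args):
        self.args = args
        # the "template parameter", derived from what is inside
        if any(isinstance(a, Infinity) for a in args):
            self.flavor = "InfinitiesT"
        else:
            self.flavor = "AssocOp"

    def __add__(self, other):
        extra = other.args if isinstance(other, Add) else (other,)
        # rebuilding recomputes the flavor, so the tag can "upgrade"
        return Add(*(self.args + tuple(extra)))

a = Add("x", "y")
assert a.flavor == "AssocOp"
assert (a + Infinity()).flavor == "InfinitiesT"
```

A multiple-dispatch table could then match on the flavor in addition to the class, which is roughly what the Add[InfinitiesT] notation expresses.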
julia> immutable Point{T}
x::T
y::T
end
julia> p = Point{Int64}(3, 5)
Point{Int64}(3,5)
julia> p2 = Point{Float64}(3.0, 5.0)
Point{Float64}(3.0,5.0)
julia> p.x
3
julia> p.y
5
julia> f(v::Point{Float64}) = "Point... parametric type Float64"
f (generic function with 1 method)
julia> f(v::Point) = "Point... no parametric type"
f (generic function with 2 methods)
julia> f(p2)
"Point... parametric type Float64"
julia> f(p)
"Point... no parametric type"
julia> f(v::Point{Int64}) = "Point... parametric type Int64"
f (generic function with 3 methods)
julia> f(p)
"Point... parametric type Int64"
julia> p3 = Point{String}("hello", "world")
Point{String}("hello","world")
julia> f(p3)
"Point... no parametric type"
julia> g(v::Point{Integer}) = "Point... parametric type is Integer abstract class"
g (generic function with 1 method)
julia> g(p)
ERROR: no method g(Point{Int64})
type Expr{T} <: Basic
type NonCommutativeExpr{T} <: Expr{T}
type InfinitiesExpr{T} <: Expr{T}
type MatrixExpr{T} <: Expr{T}
type NonCommutativeInfinitiesExpr{T} <: Expr{T} # or better use multiple inheritance?
NonCommutativeExpr{Add}
NonCommutativeExpr{Mul}
MatrixExpr{Add} # instead of MatAdd
MatrixExpr{Mul} # instead of MatMul
I wonder how difficult it would be to write just a very simple core in
Julia, just so that we can experiment with this.
See how extensible it is, and so on.
Dispatch on types is natively supported; dispatch on values requires a pattern matching library (there are 3 under development in Julia).
I think that using types properly would allow us to avoid pattern matching.
One important thing concerning multiple dispatch: remove all multiple inheritance! With multiple inheritance there can be matching ambiguities, such as
class A:
    pass

class B:
    pass

class C(A, B):
    pass

@dispatch(A)
def func(x):
    pass

@dispatch(B)
def func(x):
    pass

c = C()
func(c)  # which one of the two dispatched methods should it match?
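For comparison, Python's stdlib functools.singledispatch (single-argument dispatch only) does not raise on this diamond; it resolves it through the MRO, so C(A, B) silently picks the implementation registered for A. Whether picking one is better than erroring is exactly the design question here:

```python
from functools import singledispatch

class A: pass
class B: pass
class C(A, B): pass

@singledispatch
def func(x):
    return "default"

@func.register(A)
def _(x):
    return "A"

@func.register(B)
def _(x):
    return "B"

assert func(C()) == "A"   # C.__mro__ lists A before B, so A's method wins
assert func(B()) == "B"
```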
It seems to me, who knows nothing of computer science, that Julia in fact has a powerful multiple dispatch system (though only on types, not on values).
By all means, please go ahead!
We will discuss this once you post a PR, see how it looks, and see if
the speed is sufficient for tensors/quantum etc. And we can play with it.