We are running into some issues with Mul in the quantum stuff. There
are two main things:
1. How Mul combines adjacent args.
2. How Mul decides if two adjacent args can be multiplied.
For now, let's focus on the second of these. The difficulty right now
is that Mul allows any two Exprs to be multiplied. But in quantum
mechanics, certain things can't be multiplied. For example, the
following don't make sense:
Ket('a')*Operator('O')
Operator('O')*Bra('a')
We would like to add logic to check for these cases into the
__mul__/__rmul__ methods of Operator and State. But, this logic will
never be triggered in situations like this:
Bra('a')*Ket('a')*Operator('O') = Mul(Bra('a'), Ket('a'))*Operator('O')
Because the Mul*Operator triggers Mul.__mul__ (which has no validation
for Operators or States) rather than Operator.__rmul__.
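In plain Python terms (nothing sympy-specific, class names made up), the
left operand's __mul__ wins as long as it returns something other than
NotImplemented:

class Left(object):
    def __mul__(self, other):
        return "Left.__mul__"      # returns a value, so Python stops here

class Right(object):
    def __rmul__(self, other):
        return "Right.__rmul__"    # only tried if Left.__mul__ returns
                                   # NotImplemented, or if Right subclasses Left

print(Left() * Right())            # prints: Left.__mul__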
I see two ways out of this:
1. Handle this like numpy arrays that have an __array_priority__
attribute. The idea is that the array having the highest numeric
value of the __array_priority__ attribute would have its __mul__ or
__rmul__ method called. This would make it possible for:
Mul(Bra('a'), Ket('a'))*Operator('O') to trigger Operator.__rmul__
This solution would not require changes to Mul, only to the __mul__
and __rmul__ methods of Expr. It would also require going through
sympy and adding the priority attributes to relevant classes.
2. To add a new set of methods to Expr that are used by Mul to
validate if two things could be multiplied. This is more flexible. I
propose methods like:
_validate_mul(self, other)
_validate_rmul(self, other)
The end of Mul.flatten would check for these attributes and then
validate adjacent objects to make sure they could really be
multiplied. This would help us with symbolic Matrices and Tensors, for
example, because these methods could do shape compatibility tests (see
the sketch right after this list).
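A rough sketch of how these hooks could behave, with throwaway stand-in
classes (the Bra/Ket/Operator here are toys, not the real quantum
classes), and only _validate_mul shown; _validate_rmul would be
symmetric:

class MulValidationError(TypeError):
    pass

class Bra(object):
    pass

class Ket(object):
    def _validate_mul(self, other):
        # From the examples above: Ket*Operator does not make sense.
        if isinstance(other, Operator):
            raise MulValidationError("Ket*Operator is not defined")

class Operator(object):
    def _validate_mul(self, other):
        # ... and neither does Operator*Bra.
        if isinstance(other, Bra):
            raise MulValidationError("Operator*Bra is not defined")

def validate_adjacent(args):
    """Roughly what the end of Mul.flatten would do with the flattened args."""
    for left, right in zip(args, args[1:]):
        validator = getattr(left, '_validate_mul', None)
        if validator is not None:
            validator(right)

validate_adjacent([Bra(), Operator(), Ket()])   # <a|O|a>: fine
# validate_adjacent([Ket(), Operator()])        # raises MulValidationError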
This issue is also related to how combines are done in Mul.flatten,
but I think we might be able to resolve this by itself. Thoughts?
Cheers,
Brian
--
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgra...@calpoly.edu
elli...@gmail.com
Also, Mul is optimized for commutative arguments, with non-commutatives special cased (read "hacked in"). I am thinking it might be smarter to have separate NCMul for general non-commutatives anyway.
So basically, these suggestions are on how to fix issue 1941, because as I see it, validating whether two objects can be multiplied together is just a special case of objects that combine with each other specially.
On Jul 21, 2010, at 1:34 PM, Brian Granger wrote:
> Hi,
>
> We are running into some issues with Mul in the quantum stuff. There
> are two main things:
>
> 1. How Mul combines adjacent args.
This one may prove to be the more difficult of the two to solve, because of the way Python evaluates expressions like a*b*c. Also, you have to consider that no matter how you try to make things work with __mul__ and __rmul__, it can always be thrown off with something like a*(b*c). Option 1 alone might not work for this reason.
It seems to me like you would have to have both options. The _validate_mul (or combine_mul, as Ondrej called it in http://github.com/certik/sympyx/commits/handler) would have to be there to handle the special multiplication behaviors. But I think you also need some kind of priority system to break ties when more than one object in the Mul has such a method. If you just do validation (no combining), then maybe you could do without option 1, except you will have to accept the fact that _validate_mul will always be called before _validate_rmul.
By the way, it is more general to send to the combination method the whole list of objects that have been gathered together already. This is for things that combine into each other, such as infinities and Order, so they can pull everything into themselves.
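Purely as illustration (the hook name and the absorption rule here are
oversimplified inventions), such a hook would receive the gathered list
like this:

class AbsorbsEverything(object):
    """Stand-in for something like an infinity or an Order term."""
    def _combine_mul(self, gathered):
        # Pull every factor gathered so far into this object.
        return [self]

def gather(args):
    """Roughly what a combine-aware Mul.flatten loop could do."""
    gathered = []
    for arg in args:
        hook = getattr(arg, '_combine_mul', None)
        if hook is not None:
            gathered = hook(gathered)
        else:
            gathered.append(arg)
    return gathered

print(gather([2, 'x', AbsorbsEverything()]))   # -> [<AbsorbsEverything ...>]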
> The end of Mul.flatten would check for these attributes and then
> validate adjacent objects to make sure they could really be
> multiplied. This would help us with symbolic Matrices and Tensors for
> example, because these methods could do shape compatibility tests.
>
> This issue is also related to how combines are done in Mul.flatten,
> but I think we might be able to resolve this by itself. Thoughts?
Maybe for now you could do validation separate from combination, if it's easier to handle. Ultimately, I would like to see both separated out into the objects. The validation is a trivial special case of the combination, but not the other way around, because combination requires handling all args, not just adjacent ones. On the other hand, the issue #1 you name above could cause efficiency problems for the combination, so might need to be solved concurrently.
Aaron Meurer
>
> Cheers,
>
> Brian
>
Aaron, you essentially summarized it all, thanks!
Some points:
1) The problem with extending Mul is that if it's not done right, it
slows things down a lot.
2) a*b and Mul(a, b) should be equivalent
However, what you raised is a real problem and we need to figure out
some solution to that.
Ondrej
I would very much like the option of using a custom Mul subclass, but
that is currently impossible in sympy. The reason is that my
expressions also have sympy scalars. Anytime one of those appears on
the left of a multiplication its __mul__ gets called and everything
becomes a Mul. To use custom Mul classes for anything, we need to
have something like the __array_priority__ that numpy has built into the
core.
> Also, Mul is optimized for commutative arguments, with non-commutatives special cased (read "hacked in"). I am thinking it might be smarter to have separate NCMul for general non-commutatives anyway.
Again, an NCMul still requires the __array_priority__ stuff. I do agree
though that an NCMul class probably makes sense.
> So basically, these suggestions are on how to fix issue 1941, because as I see it, validating whether two objects can be multiplied together is just a special case of objects that combine with each other specially.
> On Jul 21, 2010, at 1:34 PM, Brian Granger wrote:
>
>> Hi,
>>
>> We are running into some issues with Mul in the quantum stuff. There
>> are two main things:
>>
>> 1. How Mul combines adjacent args.
>
> This one may prove to be the more difficult of the two to solve, because of the way Python evaluates expressions like a*b*c. Also, you have to consider that no matter how you try to make things work with __mul__ and __rmul__, it can always be thrown off with something like a*(b*c). Option 1 alone might not work for this reason.
I am pretty sure it would cover things like this.
I haven't thought as much about the validation and combining things
and I need to look at Ondrej's branch for that.
> By the way, it is more general to send to the combination method the whole list of objects that have been gathered together already. This is for things that combine into each other, such as infinities and Order, so they can pull everything into themselves.
True
>> The end of Mul.flatten would check for these attributes and then
>> validate adjacent objects to make sure they could really be
>> multiplied. This would help us with symbolic Matrices and Tensors for
>> example, because these methods could do shape compatibility tests.
>>
>> This issue is also related to how combines are done in Mul.flatten,
>> but I think we might be able to resolve this by itself. Thoughts?
>
> Maybe for now you could do validation separate from combination, if it's easier to handle. Ultimately, I would like to see both separated out into the objects. The validation is a trivial special case of the combination, but not the other way around, because combination requires handling all args, not just adjacent ones. On the other hand, the issue #1 you name above could cause efficiency problems for the combination, so might need to be solved concurrently.
Yes, I think this makes sense. But I do think we need the
__array_priority__-type attributes to allow for custom Mul subclasses.
Cheers,
Brian
> Aaron Meurer
>>
>> Cheers,
>>
>> Brian
>>
>
Definitely. I may play with this some.
> 2) a*b and Mul(a, b) should be equivalent
I don't see how this can be the case, but maybe I am misunderstanding
you. Are you saying that the __mul__ and __rmul__ methods must always
return Mul and never custom subclasses? Different mathematical
entities have different multiplication rules and this goes beyond mere
commutativity. A perfect example is Matrix/vector/tensor
multiplication. If we ever want to have these classes inherit from
Basic and be used in symbolic contexts, we really need a custom Mul
that is created by A*B.
> However, what you raised is a real problem and we need to figure out
> some solution to that.
>
> Ondrej
>
> On Wed, Jul 21, 2010 at 1:15 PM, Aaron S. Meurer <asme...@gmail.com> wrote:
>> First off, pragmatically speaking, is there a reason why you need to use the core Mul instead of writing your own custom Mul for the quantum stuff? I am just saying this because even though I agree that we need this change, changing anything in the core is very difficult and time consuming, because the code is so fragile, and you also have to pay close attention to speed concerns.
>
> I would very much like the option of using a custom Mul subclass, but
> that is currently impossible in sympy. The reason is that my
> expressions also have sympy scalars. Anytime one of those appears on
> the left of a multiplication its __mul__ gets called and everything
> becomes a Mul. To use custom Mul classes for anything, we need to
> have something like the __array_priority__ that numpy has built into the
> core.
Ah, I understand. You can't just have a custom class that keeps the scalar part separate because Mul is too greedy. In that case, yes, your priority system should work (I was thinking of prioritizing custom combining calls a la issue 1941, which may or may not end up being the same thing).
Perhaps a hotfix would be to add some .no_sympy_Mul property to your objects and make Mul (or Basic.__rmul__) not grab things if they have that (return NotImplemented). Of course, it wouldn't be as good a fix as your proposal, but it would be much faster to implement if you don't want to spend too much time on it.
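In code, that hotfix would amount to something like this (the attribute
name is the one Aaron suggests; the stand-in Mul/Expr/QuantumThing
classes are just for illustration):

class Mul(object):
    """Stand-in for sympy.Mul; it just records its args."""
    def __init__(self, *args):
        self.args = args

class Expr(object):
    """Stand-in for sympy.Expr, showing where the check would go."""
    def __mul__(self, other):
        if getattr(other, 'no_sympy_Mul', False):
            return NotImplemented     # Python then tries other.__rmul__
        return Mul(self, other)

class QuantumThing(object):
    no_sympy_Mul = True               # opt out of being grabbed by Mul
    def __rmul__(self, scalar):
        return ('custom product', scalar, self)

print(Expr() * QuantumThing())   # ('custom product', <Expr ...>, <QuantumThing ...>)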
Aaron Meurer
Yes. But my point is that Mul is supposed to handle all of this. E.g.
"*" is "Mul". So a*b and Mul(a, b) are equivalent. And it's the job
of Mul to do the right thing with the objects.
That's probably not the right approach, though. So we'll have Mul, that handles
basic multiplication in the core. Then QuantumMul, that does something
else, and NCMul, that handles noncommutative multiplication.
Now all these statements are different:
Mul(a, b)
NCMul(a, b)
QuantumMul(a, b)
now the question is what happens with
a*b
and that is solved by your priorities idea. So I am +1 to that.
Ondrej
I think this can be done with a careful implementation of double
dispatch in __mul__ and __rmul__, so I don't see the point of this
__array_priority__. Though this implementation would require some sort of
complete ordering of the classes, which is conceptually similar to an
__array_priority__, it doesn't have to appear explicitly in the code.
Ronan
The nice thing about __array_priority__ is that it is easily
extensible. You can add new sympy types and fine-tune their priority
without touching the core. Also, like numpy, the priority should be
used in all special methods like __add__, __mul__, __pow__, etc.
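Stripped to its essentials, that kind of priority dispatch is quite
small. A toy version, not Brian's actual patch (the _op_priority name is
the one that comes up later in the thread; the classes are stand-ins):

class Expr(object):
    _op_priority = 10.0

    def __mul__(self, other):
        if getattr(other, '_op_priority', 0.0) > self._op_priority:
            return other.__rmul__(self)   # defer, like numpy's __array_priority__
        return Mul(self, other)

    def __rmul__(self, other):
        if getattr(other, '_op_priority', 0.0) > self._op_priority:
            return other.__mul__(self)
        return Mul(other, self)

class Mul(Expr):
    def __init__(self, *args):
        self.args = args

class Operator(Expr):
    _op_priority = 100.0

    def __rmul__(self, other):
        # Gets a chance to validate/route even when a Mul sits on the left.
        return ('OperatorRMul', other, self)

print(Mul(1, 2) * Operator())   # -> ('OperatorRMul', <Mul ...>, <Operator ...>)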
Cheers,
Brian
> Ronan
def __mul__(self,mv):
    """See MV.geometric_product(self,mv)"""
    return(MV.geometric_product(self,mv))

def __rmul__(self,mv):
    """See MV.geometric_product(mv,self)"""
    return(MV.geometric_product(mv,self))

@staticmethod
def geometric_product(mv1,mv2):
    """
    MV.geometric_product(mv1,mv2) calculates the geometric
    product the multivectors mv1 and mv2 (mv1*mv2). See
    reference 5 section 3.
    """
    product = MV()
    if isinstance(mv1,MV) and isinstance(mv2,MV):
        bladeflg1 = mv1.bladeflg
        bladeflg2 = mv2.bladeflg
        if bladeflg1:
            mv1.convert_from_blades()
        if bladeflg2:
            mv2.convert_from_blades()
        mul = MV()
        for igrade in MV.n1rg:
            gradei = mv1.mv[igrade]
            if isinstance(gradei,numpy.ndarray):
                for ibase in range(MV.nbasis[igrade]):
                    xi = gradei[ibase]
                    if xi != ZERO:
                        for jgrade in MV.n1rg:
                            gradej = mv2.mv[jgrade]
                            if isinstance(gradej,numpy.ndarray):
                                for jbase in range(MV.nbasis[jgrade]):
                                    xj = gradej[jbase]
                                    if xj != ZERO:
                                        xixj = MV.mtable[igrade][ibase][jgrade][jbase].scalar_mul(xi*xj)
                                        product.add_in_place(xixj)
        product.bladeflg = 0
        if bladeflg1:
            mv1.convert_to_blades()
        if bladeflg2:
            mv1.convert_to_blades()
        if bladeflg1 and bladeflg2:
            product.convert_to_blades()
    else:
        if isinstance(mv1,MV):
            product = mv1.scalar_mul(mv2)
        else:
            product = mv2.scalar_mul(mv1)
    return(product)
This only works because you have custom scalars as well. Because of
this, you have overridden the __mul__ and __rmul__ methods of *every*
object that is used in GA. But in quantum mechanics, I still need to
use all the regular sympy scalars that use the default Expr.__mul__
and Expr.__rmul__ that return Muls. Anytime a sympy scalar gets
multiplied by my classes on the left (scalar*mytype), Expr.__mul__
gets called, not mytype.__mul__.
Cheers,
Brian
I'm not sure this is the best design, but it's better than the current
situation, so +1 on the idea. Two remarks though:
* Since the whole point of this is to untie binary operators from Expr,
the mechanism (i.e. call_other()) shouldn't be implemented in expr.py.
* The repetitiveness of the code should be abstracted away in a
decorator.
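The kind of decorator Ronan means might look roughly like this (the
names are invented for the sketch, not the eventual sympy API); each
binary special method then shrinks back to its real body:

def priority_dispatch(reflected):
    """If `other` claims a higher _op_priority, hand the call off to its
    reflected method instead of running `method`."""
    def deco(method):
        def binary_op(self, other):
            mine = getattr(self, '_op_priority', 0.0)
            theirs = getattr(other, '_op_priority', 0.0)
            if theirs > mine:
                fallback = getattr(other, reflected, None)
                if fallback is not None:
                    return fallback(self)
            return method(self, other)
        return binary_op
    return deco

class Expr(object):
    _op_priority = 10.0

    @priority_dispatch('__rmul__')
    def __mul__(self, other):
        return ('Mul', self, other)

    @priority_dispatch('__mul__')
    def __rmul__(self, other):
        return ('Mul', other, self)

class QuantumExpr(object):       # deliberately not an Expr subclass
    _op_priority = 100.0
    def __rmul__(self, other):
        return ('QuantumMul', other, self)

print(Expr() * QuantumExpr())    # -> ('QuantumMul', <Expr ...>, <QuantumExpr ...>)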
On Saturday, 24 July 2010 at 13:10 -0700, Brian Granger wrote:
> I've written tests for this now.
>
>
> Brian
>
> On Sat, Jul 24, 2010 at 10:07 AM, Brian Granger <elli...@gmail.com>
> wrote:
> Here is an additional patch that adds this capability to all
> binary special methods in Expr:
>
>
> http://github.com/ellisonbg/sympy/commit/116acd6ef2bde6d0d0aa8c2f2ec1f380abbabab1
>
>
> The performance penalty is still around 1%. If there is
> support for this approach, I would like to merge it to trunk
> soon, as this issue is holding up the quantum stuff. Also I
> was thinking that in the long run this could improve
> performance. Here is the idea. Currently, all of the basic
> operation classes (Mul, Add, Pow) have to have custom logic to handle
> non-commutative entities. With the new priority-based binary
> operations, we could implement NCMul and NCPow classes
> that have all of that logic. The result is that the work Mul
> and Pow have to do decreases. It would also be much simpler.
On May 21, 2011, at 9:11 PM, Andy Ray Terrel <andy....@gmail.com>
wrote:
Is it possible to implement x*y*z as reduce(operator.mul, [x, y, z])?
Aaron Meurer
Yes this was my fear. Maybe Ronan's comment on the double dispatch
will work better.
-- Andy
Just to be clear, the problem is not so much when the class Mul calls
the Mul constructor but when other classes (like Add or simplify) call
Mul directly instead of the operators. The style in sympy seems to
prefer this to using the algebraic operators. These calls will
basically always route around *any* dispatch system unless it is tied
into the expression constructors.
One solution would be to have _op_priority checked inside the
constructor; that way the canonicalization could remain O(n) but the
correct constructor could still be called. Mul's internals could call a
private constructor to sidestep the check on every call.
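A sketch of that scheme as I read it (the _construct_mul and _from_args
names are invented for the example):

class Mul(object):
    _op_priority = 10.0

    def __new__(cls, *args):
        # Public constructor: one O(n) scan for a higher-priority owner.
        priorities = [getattr(a, '_op_priority', 0.0) for a in args]
        if priorities and max(priorities) > cls._op_priority:
            winner = args[priorities.index(max(priorities))]
            return winner._construct_mul(*args)   # e.g. build a QuantumMul
        return cls._from_args(args)

    @classmethod
    def _from_args(cls, args):
        # Private constructor: no dispatch check, for Mul's own internals.
        obj = object.__new__(cls)
        obj.args = tuple(args)
        return obj

class QuantumThing(object):
    _op_priority = 100.0

    @classmethod
    def _construct_mul(cls, *args):
        # Hypothetical hook: the winning class decides what gets built.
        return ('QuantumMul',) + args

print(Mul(2, QuantumThing()))   # -> ('QuantumMul', 2, <QuantumThing ...>)
print(Mul(2, 3).args)           # -> (2, 3): a plain Mul, built via _from_args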
-- Andy
Hi,

On 22 May 2011 18:17, Aaron S. Meurer <asme...@gmail.com> wrote:

> Is it possible to implement x*y*z as reduce(operator.mul, [x, y, z])?

It works:

In [2]: import operator
In [3]: reduce(operator.mul, [x, y, z])
Out[3]: x⋅y⋅z
In [4]: type(_)
Out[4]: <class 'sympy.core.mul.Mul'>

but I wouldn't use it on large examples because it takes O(n**2) steps to canonicalize the inputs (similar problem with sum([x, y, z]); we have even an issue open for this).
> — Andy
Even if we used the double dispatch, we could make Mul check the arguments to see if they are dispatchable before canonicalizing them. The only problem here might be the performance, because you would have to call __mul__ or __rmul__ on each object to see if it returns something or returns NotImplemented. Maybe we could require objects to implement some obj.__has_mul(other) that would quickly indicate whether obj.__mul__(other) would return NotImplemented or something else (and the same for __rmul__). We could probably implement further optimizations for regular sympy objects that will always just combine in the Mul, like some property of those objects that would just be set to True so that you don't have to call the function.
Does that sound like it would work? Does it make sense?
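For instance (single leading underscores here, since a genuine
__has_mul name would be mangled inside the class; the classes are
stand-ins):

class Expr(object):
    # The fast path mentioned above: ordinary sympy objects always just
    # combine inside Mul, so a class attribute avoids any method call.
    _plain_mul = True

class QuantumOp(Expr):
    _plain_mul = False

    def _has_mul(self, other):
        # Cheap test: would self.__mul__(other) do something custom rather
        # than returning NotImplemented?
        return isinstance(other, QuantumOp)

def needs_dispatch(args):
    """A single pass Mul.flatten could make before canonicalizing."""
    for left, right in zip(args, args[1:]):
        if getattr(left, '_plain_mul', True):
            continue
        if left._has_mul(right):
            return True
    return False

print(needs_dispatch([2, 3, 5]))                      # False: plain Mul is fine
print(needs_dispatch([2, QuantumOp(), QuantumOp()]))  # True: hand the product off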
Aaron Meurer
Well it wouldn't work either. It would be an infinite loop.
operator.mul just dispatches to the appropriate
.__mul__/.__rmul__ methods.
-- Andy
Well you would need to have the priority of the objects to know which
algebraic op to call anyways. And catching exceptions is very slow.
Another scheme would be to give each operator a resolution dictionary
where each class that wants to overload the operator can register
itself. Then the operator can look up each of its args in the dict
and tell which is the highest priority. This is how multiple dispatch
usually works anyways, just at the compiler level.
The meta_expr could take care of this:
1) detect whether _op_priority is defined
2) then, for each operator the class defines, register it
This would allow for the class lookup to happen at import time and
keep canonicalization to O(n).
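As I understand the registration step, it could look something like
this (the metaclass and registry names are made up for the sketch;
Python 3 syntax for the metaclass):

MUL_REGISTRY = {}   # class -> priority, filled in at class-creation/import time

class MetaExpr(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        priority = getattr(cls, '_op_priority', None)
        if priority is not None:
            MUL_REGISTRY[cls] = priority

class Expr(metaclass=MetaExpr):
    pass

class QuantumOp(Expr):
    _op_priority = 100.0

def resolve_mul_owner(args):
    """One dict lookup per arg; the highest-priority registered class
    (if any) gets to build the product."""
    best_cls, best_priority = None, 0.0
    for arg in args:
        priority = MUL_REGISTRY.get(type(arg), 0.0)
        if priority > best_priority:
            best_cls, best_priority = type(arg), priority
    return best_cls

print(resolve_mul_owner([2, 3]))             # None: the plain Mul handles it
print(resolve_mul_owner([2, QuantumOp()]))   # <class '...QuantumOp'>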
-- Andy
1) Implementing my own versions of Mul/Add/etc and using _op_priority
was not a good way to proceed. There are two reasons for this: i)
the dispatch of _op_priority is easily subverted by the many calls to
Mul/Add/Pow in the core. ii) Even if we fixed all of that, anyone who
wants to implement a custom Mul/Add/Pow has to reimplement most of the
entire sympy core to get basic things to work. And every time a new
class needs custom Mul/Add/Pow, all of that has to be reimplemented
*again*. That is a waste of time.
2) The best approach would be to *remove* most of the logic from
Mul/Add/Pow etc. and replace it with a type of rule system that allows
individual objects to declare how they are multiplied/added/etc.
The problem with 2) is that it would be a huge project that touches
most of the code base.
Cheers,
Brian
> On Sun, May 22, 2011 at 7:27 PM, Aaron S. Meurer <asme...@gmail.com> wrote:
>>
>> On May 22, 2011, at 12:12 PM, Mateusz Paprocki wrote:
>>
>> Hi,
>>
>> On 22 May 2011 18:17, Aaron S. Meurer <asme...@gmail.com> wrote:
>>>
>>> Is it possible to implement x*y*z as reduce(operator.mul, [x, y, z])?
>>
>> It works:
>> In [2]: import operator
>> In [3]: reduce(operator.mul, [x, y, z])
>> Out[3]: x⋅y⋅z
>> In [4]: type(_)
>> Out[4]: <class 'sympy.core.mul.Mul'>
>> but I wouldn't use it on large examples because it takes O(n**2) steps to
>> canonicalize the inputs (similar problem with sum([x,y,z]), we have even an
>> issue open for this).
>>
>> No, I mean is there a way for literally "x*y*z" to call (essentially)
>> reduce(operator.mul, [x, y, z])?
>> But like you said, it is O(n**2), so I guess it doesn't matter (unless you
>> can also do it O(n)).
>> Aaron Meurer
>
>
> Well it wouldn't work either. It would be an infinite loop.
> operator.mul just dispatches to the appropriate
> .__mul__/.__rmul__ methods.
>
> — Andy
Well, that's why I said "essentially".
Actually, obviously just vanilla "x*y*z" is equivalent to reduce(operator.mul, [x, y, z]). The problem comes when you associate the arguments explicitly, like (x*y)*z or x*(y*z). That's where even the double dispatch can mess up. Though one might argue that it should mess it up, since explicit grouping is precisely how you ask it to do something different.
Aaron Meurer