I kind of doubt this would conflict with Sherjil's work, since what
you are suggesting and actual matrices would be quite different, but
Sherjil would have to confirm.
I think if you implement non-commutative symbols with shape (see the
issue I linked to above), that would cover about 90% of use cases. If
you want to symbolically represent more than just matrix
multiplication and addition, you would have to do more than that, but
these would not be difficult to do. For example, A.T would just swap
the shape of A. The inverse would just be handled by Pow, i.e.,
A**-1. I suppose a little more work would need to be done if you want
to support matrices being non-invertible and have that handled
correctly. But even that would not be hard (just add an attribute
.invertible, which would probably default to True if the shape is
square, and make ._eval_power not work for negative powers if it is
False).
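To make that concrete, here is a rough sketch of what such a shaped,
non-commutative symbol could look like. Everything in it (the class
name, the .invertible attribute, the way the shape is stored in args)
is illustrative, not an existing SymPy API:

from sympy import Expr, Integer, Symbol

class ShapedSymbol(Expr):
    # Illustrative non-commutative symbol that carries a shape.
    is_commutative = False

    def __new__(cls, name, rows, cols, invertible=None):
        name = name if isinstance(name, Symbol) else Symbol(name)
        obj = Expr.__new__(cls, name, Integer(rows), Integer(cols))
        # Default: only assume invertibility when the shape is square.
        obj.invertible = (rows == cols) if invertible is None else invertible
        return obj

    @property
    def shape(self):
        return (self.args[1], self.args[2])

    @property
    def T(self):
        # Transpose just swaps the shape, as described above.
        return ShapedSymbol(self.args[0], self.args[2], self.args[1],
                            invertible=self.invertible)

    def _eval_power(self, exp):
        # Refuse inverses (negative powers) of symbols not marked
        # invertible; returning None leaves the Pow unevaluated.
        if exp.is_negative and not self.invertible:
            raise ValueError("%s is not invertible" % self.args[0])
        return None

With this, ShapedSymbol('A', 2, 3).T.shape gives (3, 2), while
ShapedSymbol('A', 2, 3)**-1 raises, since a non-square symbol defaults
to invertible=False.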
The real problem (and I think it is the main point of issue 1860) is
how to make Mul handle the shape attribute correctly. The
way it is now, you have to modify Mul.flatten to make it work (also
see issue 1941).
Aaron Meurer
@Aaron: Hrm, yes getting Muls and Adds and such to recognize matrices
is quite a hurdle. Should I shy away from adding to flatten()?
Alternatively I could subclass all of these into MatrixExpr, MatrixMul
(MatMul?), MatrixAdd, etc., and build a couple of checks into the
__add__, __mul__ operators before passing up to the superclass method.
There's also some cool stuff you could do here, like implementing a
matrix chain multiplication algorithm
(http://en.wikipedia.org/wiki/Matrix_chain_multiplication) to compute
the most efficient way to multiply a product of matrices ABCD...
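For the curious, here is a sketch of that classic dynamic-programming
algorithm, written over the plain list of dimensions rather than over
any particular SymPy class (so everything in it is generic Python):

def matrix_chain_order(dims):
    """Return the minimal number of scalar multiplications needed to
    compute a chain of matrices with shapes dims[0] x dims[1],
    dims[1] x dims[2], ..., dims[-2] x dims[-1], plus the split table
    that records where to parenthesize each subproduct."""
    n = len(dims) - 1                       # number of matrices
    cost = [[0] * n for _ in range(n)]      # cost[i][j]: cheapest i..j
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # length of the subchain
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float('inf')
            for k in range(i, j):
                q = (cost[i][k] + cost[k + 1][j]
                     + dims[i] * dims[k + 1] * dims[j + 1])
                if q < cost[i][j]:
                    cost[i][j] = q
                    split[i][j] = k
    return cost[0][n - 1], split

# Shapes 10x30, 30x5, 5x60: the cheapest order is (A*B)*C at
# 10*30*5 + 10*5*60 = 4500 scalar multiplications.
print(matrix_chain_order([10, 30, 5, 60])[0])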
Aaron Meurer
Note that you should be able to handle the other stuff with the
assumptions (in theory anyway; assumptions are kind of a mess right
now, so I couldn't say how well it would work).
But even if we fix the core and option 1 becomes feasible, the code
can be refactored, so if you want something now, I am +1 to trying
option 3. I would prefer to avoid option 2, unless you are
going to fix it in a more general way (like issue 1941).
Aaron Meurer
I *strongly* recommend against subclassing Mul/Pow/Add as you will
find that it is *impossible* even with op_priority to have your
Mul/Pow/Add used all of the time. In a variety of contexts, you will
*want* your subclasses to be used, but sympy's base Mul/Add/Pow will
be used. There is no way to prevent this with the current design.
Just use sympy's existing Mul/Add/Pow and live with its
limitations for now.
Cheers,
Brian
On Tue, Jun 28, 2011 at 9:00 AM, Matthew Rocklin <mroc...@gmail.com> wrote:
> There is a barebones implementation of a Matrix Expression class here:
> https://github.com/mrocklin/sympy/tree/matrix_expr/sympy/matrices
> Comments/criticism welcome.
--
Brian E. Granger
Cal Poly State University, San Luis Obispo
bgra...@calpoly.edu and elli...@gmail.com
Differentiation would only work with a scalar (commutative) differentiation operator. If the matrix function is a function of a vector or matrix, one would have to define the directional derivative for each case (which would be a scalar differential operator) and use the results of that operation to determine the properties of a vector or matrix derivative. Note that determining the operator properties would also require a definition for the scalar product of vectors and matrices.
Consider the vector directional derivative of a matrix M that is a function of a vector v, M(v). If a is an arbitrary vector, the definition of the directional derivative (in LaTeX notation) would be
a\cdot\nabla M(v) \equiv \lim_{h \rightarrow 0}\frac{M(v+ha)-M(v)}{h}
from which the properties of \nabla M(v) could be determined, and if v is expanded in an arbitrary basis the \nabla operator could also be expanded. A similar treatment is possible for a matrix that is a function of a matrix, if the scalar product of two matrices is defined.
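One property that follows immediately from this definition (assuming M and N are both matrix-valued functions of v) is an order-preserving product rule:
a\cdot\nabla\,[M(v)N(v)] = [a\cdot\nabla M(v)]\,N(v) + M(v)\,[a\cdot\nabla N(v)]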
On 06/30/2011 04:20 AM, Aaron Meurer wrote:
As I pointed out in the other thread, non-commutative differentiation
already works in SymPy, so doing this should not be difficult.
Aaron Meurer
This is something that you would (hopefully) be able to do with the
new assumptions system. In other words, without having to modify the
core, you should be able to say ask(Q.invertible(A*B)) and it would
determine it based on whether you set A and B to be invertible. Shape
should work too, though I'm not sure if the system can currently
handle non-boolean assumptions (someone else will have to fill in
here).
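As a rough sketch of what that could look like (MatrixSymbol and
Q.invertible are assumed names here, not something that existed at the
time of this discussion):

from sympy import MatrixSymbol, Q, ask

A = MatrixSymbol('A', 3, 3)
B = MatrixSymbol('B', 3, 3)

# The product of two invertible matrices is invertible, so given both
# facts this should come back True; with no facts it would stay None.
print(ask(Q.invertible(A*B), Q.invertible(A) & Q.invertible(B)))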
By the way, is_invertible is perhaps something that could be
implemented in the core directly, so you could have support for
non-invertible symbols in any case. It should just be a matter of
making ._eval_power do the right thing.
Aaron Meurer
I'm afraid that the best place is the source code in sympy/assumptions/.
I'm not aware of any comprehensive and current write-up on the new
assumptions.
I am -1 on putting shape into Expr itself. It is too specialized to
be there. It should only be in the classes that actually have shape
information. I agree, though, that we need to enhance sympy to know
about such things, but that type of logic should be in Mul and
should not be narrowly focused on shape but should include the
handling of all types that need custom logic upon multiplication.
As I have said before, I would forget about shape checking *entirely*
at this point in the game though so you can focus on the more
interesting and relevant aspects of this work. Once sympy is extended
in the right way, you will get shape checking almost for free...
Cheers,
Brian
On Thu, Jul 7, 2011 at 10:18 AM, Matthew Rocklin <mroc...@gmail.com> wrote:
> Hrm, I'm feeling blocked on this issue. I'd really like to be able to
> generate matrix expressions for the next section of my project and I'd like
> them to be able to do some things beyond what should go into the standard
> Expr class.
Other than shape checking, what types of things do you need to do?
> Do I have any options here other than to wait until the broader SymPy
> community decides what it's going to do about smarter handling of subclassed
> Expr's?
> Brian, you strongly recommend against subclassing Expr. Have you come up
> with a decent temporary alternative for the physics branch?
I may have said confusing things previously. I do think that
subclassing Expr is OK, just not in order to override methods like
__mul__/__add__/__pow__ to provide custom Mul/Add/Pow logic. Here is
how we handle this...
We simply put the additional logic in standalone functions that walk
an expression tree and perform the needed logic. Something like this:
A = Matrix('A', (4,4))
B = Matrix('B', (4,4))
c = A*B
check_shape(c)
Here Matrix would be a custom subclass of Expr, that has the shape
information in args, but it wouldn't have any custom Mul/Pow/Add logic
implemented.
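A minimal sketch of such a walker, assuming only that matrix-like
objects expose a .shape attribute or property (anything without one is
treated as a scalar); the names here are illustrative:

from sympy import Add, Mul

class ShapeError(ValueError):
    pass

def check_shape(expr):
    # Walk the tree and return the (rows, cols) of expr, raising
    # ShapeError if anything does not conform.
    if isinstance(expr, Add):
        shapes = set(check_shape(arg) for arg in expr.args)
        if len(shapes) > 1:
            raise ShapeError("addends have shapes %s" % shapes)
        return shapes.pop()
    if isinstance(expr, Mul):
        # Non-commutative factors keep their order inside a Mul, so
        # checking the non-scalar shapes left to right is valid.
        shape = (1, 1)
        for arg in expr.args:
            s = check_shape(arg)
            if s == (1, 1):              # scalars pass through
                continue
            if shape == (1, 1):
                shape = s
            elif shape[1] != s[0]:
                raise ShapeError("cannot multiply %sx%s by %sx%s"
                                 % (shape + s))
            else:
                shape = (shape[0], s[1])
        return shape
    # Pow, transpose, etc. are omitted for brevity.
    return getattr(expr, 'shape', (1, 1))

With the hypothetical Matrix class above, check_shape(A*B + A) would
then either return (4, 4) or raise ShapeError on a mismatch.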
We have found this approach to be extremely simple to implement and
fully functional as well. For details of how we do this in the
quantum stuff see:
For our subclasses of Expr:
sympy.physics.quantum.qexpr
sympy.physics.quantum.state
sympy.physics.quantum.operator
For our functions that walk expressions:
sympy.physics.quantum.qapply
sympy.physics.quantum.represent
> I could look into fixing the larger SymPy issue. Is this likely beyond my
> scope?
Yes, I think it would be a much larger effort than your project would allow for.
Cheers,
Brian
Well, you're right that it would be good to have an experimental code
base. You will learn much better what should and should not be done
by trying it and seeing what does and does not work than you will
sitting here discussing it. Those of us with more experience can
definitely direct you away from things that we can already see won't
work, but the only way to know the way you *should* do it is to try
implementing it.
Block matrices would be much more work than everything else, because
now you're combining your symbolic matrices with actual matrices,
which themselves would have to be extended to support such things.
Maybe MatrixExpr should just be added as a ground type to Matrix.
That might not be so easy to do, though, since most stuff isn't
written to take anything but shape 1x1 into account. I'd see what
Sherjil thinks about this.
Aaron Meurer
As has been discussed before, this is a bad idea. Just subclass Expr.
Aaron Meurer
>
> Presently, I'm quite busy with this new encapsulation class Matrix_
> and can only think of working fulltime on MatrixSymbol after I've
> merged in my work on DOKMatrix, LILMatrix and the ongoing work on
> Matrix_.
>
> Matthew, you could set up a wiki page on 'matrix expressions' with
> some example code in it. 'symbolic matrices' can mean a number of
> things, and is somewhat vague. It would be good if everyone could see
> what sort of concrete things are expected from matrix expressions.
> Approaches to implement Matrix Algebra, with pros and cons listed,
> could be added there for discussion, suggestion and improvement.
>
> Regards,
> Sherjil Ozair
>
Suppose x is an ordinary commutative symbol and A and B are
MatrixExpr objects. Consider the ways a product of the three could be
written:
1. x*(A*B)
2. (x*A)*B
3. (A*x)*B
4. A*(x*B)
5. (A*B)*x
6. A*(B*x)
Let us look at these. Obviously, 1 and 5 will work, since you have
complete control over A.__mul__(B). Similarly, in 4 and 6, you will
end up with A.__mul__(Mul(x, B)), but this is not a problem, since you
can make A.__mul__ check for the left-most noncommutative in a Mul.
But what about 2 and 3? Here, we have __mul__ being called on an Expr
type, namely, Mul(x, A). You want to get B.__rmul__(Mul(x, A)), since
you have implemented shape checking logic in MatrixExpr.__(r)mul__.
But b.__rmul__(a) is called with a*b only if a.__mul__(b) returns
NotImplemented. So, basically, we need Mul.__mul__(MatrixExpr) to
return NotImplemented. Well, let us look at the code for Mul.__mul__
(actually Expr.__mul__):
@_sympifyit('other', NotImplemented)
@call_highest_priority('__rmul__')
def __mul__(self, other):
    return Mul(self, other)
Well, this is not very helpful, because the logic is buried in some
decorators. So let us look at the code for those decorators. I will
not bore you by pasting the entire code here (you can see it in
sympy/core/decorators.py), but basically, _sympifyit tries to
sympify other, and makes __mul__ return NotImplemented if it fails. I
am not delving into this, because sympify would not fail for
MatrixExpr (otherwise, we would actually have a problem doing any form
of x*A*B).
So call_highest_priority seems to be a better bet. This decorator I
will also not paste here, but it basically lets you define
._op_priority on your object, and if it is greater than Expr's (which
is quite arbitrarily set to 10.0), then it will call
other.__rmul__(self).
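A rough sketch of a subclass relying on that mechanism (the class and
its methods are hypothetical; call_highest_priority is sympy's real
decorator from sympy.core.decorators):

from sympy import Expr, Mul
from sympy.core.decorators import call_highest_priority

class MatrixExpr(Expr):
    # Anything above Expr's default of 10.0 makes Expr's decorated
    # __mul__ defer to our __rmul__, which covers cases 2 and 3 above.
    _op_priority = 11.0
    is_commutative = False

    @call_highest_priority('__rmul__')
    def __mul__(self, other):
        # A shape check would go here before building the Mul.
        return Mul(self, other)

    @call_highest_priority('__mul__')
    def __rmul__(self, other):
        return Mul(other, self)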
But wait, there's more. _op_priority can make the above cases work,
but * is not the only way that SymPy multiplies things. You will also
have to deal with variations on Mul(x, A, B). Mul completely ignores
_op_priority. Unfortunately, even though this may seem like a more
esoteric way to multiply things, one that you could just recommend
users avoid, it is used internally a lot, because Mul(*args) is more
efficient than reduce(operator.mul, args).
Thus, you see that it is quite impossible to make things work 100% of
the time without modifying the core. And actually, because direct Mul
calls do not work at all and are made by so many core functions and
methods, you will not even get things working 90% of the time;
instead, things will break when used a certain way, and you will have
hard-to-track bugs.
Aaron Meurer
Aaron has provided a fantastic summary of the issues involved. The
summary is that even if you subclass Add/Mul/Pow, sympy will end up
creating expressions that don't use your subclasses everywhere. This
can't be fixed without changing the core, which is outside the scope
of your project. I know I sound like a broken record, but you should
resist the temptation to define your own Add/Mul/Pow classes.
Cheers,
Brian
On Wed, Jul 13, 2011 at 6:39 AM, Matthew Rocklin <mroc...@gmail.com> wrote:
> I have op_priority set to 11 so that basic arithmetic operations do use
> MatMuls and MatAdds. MatMul and MatAdd check shape, then call regular
> Mul/Add __new__ methods to take advantage of the already built flattening
> functions. After that's done they go through and change classes back to
> MatMul/MatAdd in case a normal Mul/Add was called within the __new__. So far
> this has worked very well. All of the associativity cases 1-6 throw the
> correct error. The more complex expressions that I've tried have also worked
> fine.
> Calling Mul or Add explicitly does break things. There's no entry point for
> me to insert clever logic as I don't want to edit core functions. My
> solution here is simply to say "Use MatMul or MatAdd instead, or, if you
> must use Mul/Add, call matrixify on the expression afterwards". Matrixify
> will be guaranteed to return a Matrix expression if MatrixSymbols exist
> within a normal Expression. I plan to put it in a few key functions like
> simplify and expand.
>
> Is this matrixify solution too much of a hack? Hopefully I can hide it away
> within a few key functions so that the user never needs to know about it
> unless they explicitly use Add or Mul. If they're using Add or Mul then I
> expect them to be clever enough to handle matrixify.
> I'm working on this branch .
As I understand it Matrixify goes through an expression and makes sure
that your Mul/Add/Pow subclasses are used. If that is the case, I
advise against it because the second a user starts to do things with
the expression they will have to call Matrixify again. Again, I would
just use the default Mul/Add/Pow and provide standalone functions like
check_shape that perform a specific type of action on an expression.
Cheers,
Brian
I'm not convinced. What other solution is there?
Also, creating Add/Mul/Pow subclasses looks like the obvious way of
extending sympy, so we want to make it work at some point. Creating
MatrixAdd, etc. will help a lot with that by highlighting all the places
where we assume things like a + b == Add(a, b). So if Matthew thinks he
can make it work without too much pain, compared to the alternatives,
then I think he should go for it.
For now just don't do it. The quantum stuff is just as complicated as
the matrix symbols stuff (in fact, from a mathematical perspective,
they are formally equivalent) and we are surviving just fine without
it. Last summer we started out by subclassing Add/Mul/Pow for the
quantum stuff. For weeks we kept finding things that were broken. It
was horrible, and we realized that 1) we would end up rewriting much of
sympy and 2) it would still be fundamentally broken. Once we gave up
on that and just used sympy's Mul/Add/Pow, we started to move 10 times
faster and had very few problems. We did have to give up on our grand
visions of doing custom logic in Mul/Add/Pow, but we have moved that
logic into standalone functions. Those functions are quite simple
though and work just great.
> Also, creating Add/Mul/Pow subclasses looks like the obvious way of
> extending sympy, so we want to make it work at some point. Creating
> MatrixAdd, etc. will help a lot with that by highlighting all the places
> where we assume things like a + b == Add(a, b). So if Matthew thinks he
> can make it work without too much pain, compared to the alternatives,
> then I think he should go for it.
It is not at all obvious to me that the model should be that
Add/Mul/Pow subclasses are encouraged. I think a much better approach
would be closer to what Mathematica has, where objects can define
rules that are applied when the objects are multiplied. IOW,
Mul/Add/Pow themselves should have extension APIs, but those extension
APIs should not be handled through subclassing. But again, this is a
very difficult problem and it is not clear to me what the solution is.
In either case, I still feel it is beyond the scope of the current
work.
Cheers,
Brian
Well, the dispatch solution only works for op.whatever, but as we
know, reduce(op.whatever, list_of_args) is asymptotically less
efficient than Whatever(*list_of_args).
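To make the efficiency point concrete (noncommutative symbols standing
in for matrix symbols here):

from functools import reduce
import operator
from sympy import Mul, symbols

args = symbols('x0:100', commutative=False)

# One pass over all the factors.
flat = Mul(*args)

# Builds and re-flattens an intermediate Mul at every step, which is
# what makes it asymptotically worse.
pairwise = reduce(operator.mul, args)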
Actually, half of the problem isn't so much making Add/Mul/etc. work
with dispatch rules, but making it *efficiently* work with dispatch
rules.
Aaron Meurer
-- Andy
Exactly. Just get the job done, implement the necessary code, and
whether or not Add/Mul/Pow classes can be extended is a separate
issue, and I would suggest not to, to keep things simple in the core,
so that we can refactor once we finish the assumptions refactoring.
Ondrej
I think your implementations are completely different, though perhaps
they could be merged. Mateusz's implementation is more about allowing
unevaluated functions on Matrices, whereas Matthew's is more about
expressions where the variables are to be understood as matrices (I
use this instead of the ambiguous "symbolic matrices").
Both have their advantages. For example, the laplace_expand() from
Mateusz's version doesn't make sense in Matthew's version. On the
other hand, Matthew's version truly lets you manipulate undefined
matrices, and can be a more efficient way of manipulating defined ones
(e.g., Identity(100000) vs. eye(100000)).
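For instance (a sketch using those two names; the point is only where
the cost lives):

from sympy import Identity, eye

# Identity(100000) is a single symbolic object whose size is just a
# stored number, so it is essentially free to create and manipulate.
I_sym = Identity(100000)

# eye(n) builds an explicit n x n matrix entry by entry, so eye(100000)
# would be completely impractical; only a small one is built here.
I_small = eye(3)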
But other than some functions like laplace_expand() only making sense
for one, I don't see any reason why these two approaches can't or
shouldn't be merged. Actually, this is basically what was being
suggested when people said subs should work on symbolic matrices (a la
Matthew).
Aaron Meurer
> I'm sure that, sooner or later, those approaches will have to be merged,
> because those are really two views of a very similar (if not the same)
> problem domain. My original motivation came from reading lecture notes
> for undergraduates about the finite element method. As usually there was an
> introduction to basics of algebra needed to understand the later material,
> and my question was why it must be so hard to do it in SymPy (if possible at
> all). My branch is about "symbolic matrices" with explicit content. However,
> I don't see any problem with allowing transition between those two views
> (well at least in one direction). Suppose we have expr = Eq(A*x, b), where
> A, x, b are matrices/vectors of appropriate shape. First, I would like to be
> able to manipulate the expression alone, check various shapes (and ask SymPy
> if it makes sense), etc. Then I would like to write something like
> expr.expand(fullform=True) and get the same but with MatrixExpr with
> explicit indexed symbols or values (if entities like zeros or ones matrix
> was used in expr). Then I would like to make further transformation on this
> "full form".
I would like to do things like differentiate c^T*A*c with respect to
the vector c. It's a common thing for finite elements. But I'd also
like block matrices. However, if you have symbolic matrix expressions,
you can put them in a matrix and then perform standard matrix
calculations on that and you'd have block matrix support. So, as long
as that's possible, there's no problem.
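(For reference, with A constant and the gradient taken as a column
vector, the derivative Tim mentions works out to
\frac{\partial}{\partial c}\left(c^{T} A c\right) = \left(A + A^{T}\right) c
which is the kind of identity such support would need to produce.)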
I've gotten by differentiating matrix-vector expressions in Maple
using their non-commutative support, along with hacked-together
handling of the transpose and its differentiation. But I'd like proper
support.
If Sympy could support block matrices, that would be extremely useful
in control theory (where they're used all the time).
Cheers,
Tim.
--
Tim Lahey
PhD Candidate, Systems Design Engineering
University of Waterloo
http://about.me/tjlahey
It might be a good idea to collect these uses in a wiki page that
describes the "User Stories", borrowing from the SCRUM process.
@Mateusz, doing symbolic FEM is not crazy; there are lots of people who
have done it, including yours truly. The issue is that FEM is usually
done for very large problems for which symbolics is prohibitively
expensive. The FEniCS, Sundance, and similar projects use symbolics to
describe operators for building a numeric matrix. SyFi took your
approach and built symbolics down to the matrix level, but for
difficult problems had real trouble with the expansion of the symbolic
expressions. Dohp does the symbolics at the quadrature points in a
mesh to evaluate the residuals, but again is having trouble with the
symbolic expressions.
-- Andy
That's a little misleading. Maybe you should put them in the second person.
Aaron Meurer