RFC: Linear algebra in SymPy


Andy Ray Terrel

Apr 3, 2011, 11:10:04 AM
to sympy
Hello,

We currently have several students interested in symbolic sparse
matrix calculations for GSOC. IMHO, we could use a linear algebra
interface more compatible with scipy.linalg but I don't have a use for
symbolic sparse matrices.

Since I believe in not adding unnecessary features to a library, I
would like to know who the customers of a symbolic sparse matrix
format would be and what applications (pointers to code, please) we
need to speed up.

Also, how big are the matrices you are working with? You have to
remember that sparse matrices usually don't make sense for typical
numerical algorithms until you hit about 1000x1000, because optimized
BLAS can stream so much better. I would guess the threshold is much
lower with symbolics, but storage will not be an issue below that
size, since each zero entry is just a pointer to S(0).
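
For concreteness, a minimal sketch (not an actual SymPy class; DOKMatrix is
an invented name) of a dict-of-keys layout that stores only the nonzero
entries, next to the dense case where every zero is just a reference to S(0):

from sympy import S, symbols

x = symbols('x')

class DOKMatrix:
    """Toy dict-of-keys sparse matrix: only nonzero entries are stored."""
    def __init__(self, rows, cols, entries=None):
        self.rows, self.cols = rows, cols
        self.data = dict(entries or {})   # (i, j) -> nonzero Expr

    def __getitem__(self, key):
        return self.data.get(key, S.Zero)

    def __setitem__(self, key, value):
        if value == 0:
            self.data.pop(key, None)      # never store explicit zeros
        else:
            self.data[key] = value

A = DOKMatrix(1000, 1000, {(0, 0): x, (999, 999): x**2})
print(A[0, 0], A[5, 7], len(A.data))      # x 0 2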

Having this information would be quite beneficial in organizing what
the student projects should implement.

-- Andy

Aaron S. Meurer

Apr 3, 2011, 7:15:28 PM
to sy...@googlegroups.com
Well, how sparse does a matrix have to be for a sparse implementation to be faster than the dense representation, in general?

There's also the line somewhere in between numerical matrices and symbolic matrices, which is matrices with exact numeric values (instances of Rational), to consider.

I remember people posting to this list with symbolic matrices in the past. Maybe a search will show how people are using them.

Aaron Meurer


Andy Ray Terrel

Apr 4, 2011, 1:11:37 AM
to sy...@googlegroups.com, Aaron S. Meurer
On Mon, Apr 4, 2011 at 2:15 AM, Aaron S. Meurer <asme...@gmail.com> wrote:
> Well, how sparse does a matrix have to be for a sparse implementation to be faster than the dense representation, in general?

I haven't really tested this, but most of the matrices I use in my day
job only have a small percentage of non-zeros (for example, on the
order of 10-20 entries per row for rows of length 100K).

>
> There's also the line somewhere in between numerical matrices and symbolic matrices, which is matrices with exact numeric values (instances of Rational), to consider.

Not sure what you are getting at here.

>
> I remember people posting to this list with symbolic matrices in the past.  Maybe a search will show how people are using them.

The only things I could find were with small matrices.

-- Andy

Brian Granger

Apr 4, 2011, 1:21:55 AM
to sy...@googlegroups.com, Andy Ray Terrel
Andy,

> We currently have several students interested in symbolic sparse
> matrix calculations for GSOC.  IMHO, we could use a linear algebra
> interface more compatible with scipy.linalg but I don't have a use for
> symbolic sparse matrices.
>
> Since I believe in not adding unnecessary features to a library, I
> would like to know who the customers of a symbolic sparse matrix
> format would be and what applications (pointers to code, please) we
> need to speed up.

In sympy.physics.quantum.qubit and gate we have extremely large and
very sparse symbolic unitary matrices. The sizes are as big as any
machine anyone has around. Currently we have support for
sympy.Matrix, numpy matrices and scipy.sparse, but we don't have any
way of handling large symbolic matrices. For *us* it would be killer
to have this capability in sympy, as in many cases, it is important to
have things in symbolic form. But, with that said, I can't think of
many (or any) other cases where sparse symbolic matrices are useful.

Another way of handling symbolic sparse matrices would be to add
support to scipy.sparse (numpy already has this support) for having
arbitrary objects as elements. I should also mention that we are
pretty much only interested in matrix-vector multiplies, matrix-scalar
mult. and matrix addition.

We can write some code examples if that would help.
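
As a rough sketch of the three operations above on a dict-of-keys symbolic
matrix ({(i, j): expr} plus a shape), assuming that layout rather than the
quantum module's actual code:

from collections import defaultdict
from sympy import S

def matvec(data, v, rows):
    """data: {(i, j): expr} sparse matrix, v: list of exprs -> list of exprs."""
    out = [S.Zero] * rows
    for (i, j), a in data.items():
        out[i] += a * v[j]
    return out

def scalar_mul(c, data):
    """Scalar times sparse matrix, entrywise on the stored entries only."""
    return {ij: c * a for ij, a in data.items()}

def add(d1, d2):
    """Sparse matrix addition; entries that cancel to zero are dropped."""
    out = defaultdict(lambda: S.Zero)
    for d in (d1, d2):
        for ij, a in d.items():
            out[ij] += a
    return {ij: a for ij, a in out.items() if a != 0}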

Cheers,

Brian

> Also, how big are the matrices you are working with?  You have to
> remember that sparse matrices usually don't make sense for typical
> numerical algorithms until you hit about 1000x1000, because optimized
> BLAS can stream so much better.  I would guess the threshold is much
> lower with symbolics, but storage will not be an issue below that
> size, since each zero entry is just a pointer to S(0).
>
> Having this information would be quite beneficial in organizing what
> the student projects should implement.
>
> -- Andy
>


--
Brian E. Granger
Cal Poly State University, San Luis Obispo
bgra...@calpoly.edu and elli...@gmail.com

Frank K.

Apr 4, 2011, 1:58:53 AM
to sympy

> > There's also the line somewhere in between numerical matrices and symbolic matrices, which is matrices with exact numeric values (instances of Rational), to consider.
>
> Not sure what you are getting at here.
>

There are some sparse matrix algorithms that are quite speedy, but
give approximate answers (so you get a determinant with a little error
bar, for example). These might be usable on a Rational matrix, but
certainly not a symbolic one.

Perhaps this is what Aaron is getting at, or just something I've made
up!

Andy Ray Terrel

Apr 4, 2011, 2:10:44 AM
to sy...@googlegroups.com, Frank K.

Only exact iterative methods, such as conjugate gradients, are
feasible in SymPy. SymPy expressions have no real way to do any
minimization or determine small fill-in so these methods are out from
the get go.
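
To make "exact iterative" concrete, here is a sketch of conjugate gradients
in exact arithmetic using plain Python Fractions; the same idea carries over
to Rational or symbolic entries as long as zero testing is reliable:

from fractions import Fraction

def exact_cg(A, b):
    """Conjugate gradients for a symmetric positive definite matrix A
    (list of lists of Fractions); in exact arithmetic it terminates in at
    most len(b) steps with the exact solution."""
    n = len(b)
    x = [Fraction(0)] * n
    r = list(b)                              # residual b - A*x with x = 0
    p = list(r)
    rs_old = sum(ri * ri for ri in r)
    for _ in range(n):
        if rs_old == 0:
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[Fraction(4), Fraction(1)], [Fraction(1), Fraction(3)]]
b = [Fraction(1), Fraction(2)]
print(exact_cg(A, b))                        # [Fraction(1, 11), Fraction(7, 11)]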

Andy Ray Terrel

Apr 4, 2011, 2:15:09 AM
to elli...@gmail.com, sy...@googlegroups.com
On Mon, Apr 4, 2011 at 8:21 AM, Brian Granger <elli...@gmail.com> wrote:
> Andy,
>
>> We currently have several students interested in symbolic sparse
>> matrix calculations for GSOC.  IMHO, we could use a linear algebra
>> interface more compatible with scipy.linalg but I don't have a use for
>> symbolic sparse matrices.
>>
>> Since I believe in not adding unnecessary features to a library, I
>> would like to know who the customers of a symbolic sparse matrix
>> format would be and what applications (pointers to code, please) we
>> need to speed up.
>
> In sympy.physics.quantum.qubit and gate we have extremely large and
> very sparse symbolic unitary matrices.  The sizes are as big as any
> machine anyone has around.  Currently we have support for
> sympy.Matrix, numpy matrices and scipy.sparse, but we don't have any
> way of handling large symbolic matrices.  For *us* it would be killer
> to have this capability in sympy, as in many cases, it is important to
> have things in symbolic form.  But, with that said, I can't think of
> many (or any) other cases where sparse symbolic matrices are useful.
>
> Another way of handling symbolic sparse matrices would be to add
> support to scipy.sparse (numpy already has this support) for having
> arbitrary objects as elements.  I should also mention that we are
> pretty much only interested in matrix-vector multiplies, matrix-scalar
> mult. and matrix addition.
>
> We can write some code examples if that would help.

My understanding was that you had dense blocks tiled in a
block-diagonal fashion. If this is the case, then a general sparse
matrix algorithm would be much slower than creating a matrix action
function, that is, a function that takes a vector and returns the
vector that the general A*x would produce.
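
As a sketch of the matrix-action idea under that block-diagonal assumption
(block_diag_action is just an invented name for illustration):

from sympy import Matrix, symbols

def block_diag_action(blocks):
    """blocks: square sympy Matrices tiling a block-diagonal A; returns a
    function computing A*x without ever assembling A."""
    def action(x):
        out, offset = [], 0
        for B in blocks:
            n = B.rows
            seg = Matrix(x[offset:offset + n])   # slice of x this block acts on
            out.extend(B * seg)
            offset += n
        return Matrix(out)
    return action

a, b = symbols('a b')
A_mul = block_diag_action([Matrix([[a, 1], [0, a]]), Matrix([[b]])])
print(A_mul([1, 2, 3]))        # Matrix([[a + 2], [2*a], [3*b]])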

If you do have a code pointer that would be great.

-- Andy

Andy Ray Terrel

Apr 4, 2011, 2:17:12 AM
to sy...@googlegroups.com, Frank K.
On Mon, Apr 4, 2011 at 9:10 AM, Andy Ray Terrel <andy....@gmail.com> wrote:
> On Mon, Apr 4, 2011 at 8:58 AM, Frank K. <ylfc...@gmail.com> wrote:
>>
>>> > There's also the line somewhere in between numerical matrices and symbolic matrices, which is matrices with exact numeric values (instances of Rational), to consider.
>>>
>>> Not sure what you are getting at here.
>>>
>>
>> There are some sparse matrix algorithms that are quite speedy, but
>> give approximate answers (so you get a determinant with a little error
>> bar, for example). These might be usable on a Rational matrix, but
>> certainly not a symbolic one.
>>
>> Perhaps this is what Aaron is getting at, or just something I've made
>> up!
>
> Only exact iterative methods, such as conjugate gradients, are
> feasible in SymPy.  SymPy expressions have no real way to do any
> minimization or determine small fill-in so these methods are out from
> the get go.
>

What I left out is that you won't in general be able to keep the
rational form. Most complex operations, such as sqrt, will turn a
Rational into a general Expr.

-- Andy

Aaron S. Meurer

Apr 4, 2011, 5:07:53 PM
to sy...@googlegroups.com, Frank K.
Well, rational numbers, and in fact rational functions as well, are a special case of symbolic expressions, because the zero-equivalence problem is solvable (and easily so). I thought most matrix operations only did things like multiplication, addition, and division. In that case, a rational number (function) will remain a rational number (function).

The matrices should be structured like the polys. In the polys, if the coefficients are rational numbers, they use special ground types. These can be optimized (e.g., gmpy). If they are symbolic expressions, you can potentially get wrong results (because of zero-equivalence problems), and the algorithms are slower. In fact, I think the matrices could use the exact same classes as the polys. A linear algebra GSoC project should seriously consider following the design of the polys.
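
A very rough sketch of that dispatch idea, in the spirit of the polys'
coefficient domains (the function and the domain labels here are invented,
not SymPy's actual API):

from sympy import Rational, sqrt, sympify, Symbol

def pick_ground_domain(entries):
    """Exact rationals get a fast, exact ground representation; anything
    else falls back to general symbolic expressions."""
    entries = [sympify(e) for e in entries]
    if all(e.is_Rational for e in entries):
        return "QQ-like domain (exact, can be gmpy-backed)"
    return "EX-like domain (general symbolic, slower, zero-testing is hard)"

x = Symbol('x')
print(pick_ground_domain([Rational(1, 2), 3, Rational(-7, 5)]))
print(pick_ground_domain([Rational(1, 2), sqrt(2) + x]))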

Aaron Meurer

Aaron S. Meurer

Apr 4, 2011, 5:11:14 PM
to sy...@googlegroups.com, Frank K.

On Apr 4, 2011, at 3:07 PM, Aaron S. Meurer wrote:

> Well, rational numbers, and in fact rational functions as well, are a special case of symbolic expressions, because the zero-equivalence problem is solvable (and easily so). I thought most matrix operations only did things like multiplication, addition, and division. In that case, a rational number (function) will remain a rational number (function).
>
> The matrices should be structured like the polys. In the polys, if the coefficients are rational numbers, they use special ground types. These can be optimized (e.g., gmpy). If they are symbolic expressions, you can potentially get wrong results (because of zero-equivalence problems), and the algorithms are slower. In fact, I think the matrices could use the exact same classes as the polys. A linear algebra GSoC project should seriously consider following the design of the polys.

This holds for rational functions too, by the way. This is a place where it would be nice to have a Frac() object, which we could plug directly into matrices.

Aaron Meurer

Brian Granger

Apr 4, 2011, 5:23:04 PM
to sy...@googlegroups.com, Andy Ray Terrel, Frank K.
On Sun, Apr 3, 2011 at 11:17 PM, Andy Ray Terrel <andy....@gmail.com> wrote:
> On Mon, Apr 4, 2011 at 9:10 AM, Andy Ray Terrel <andy....@gmail.com> wrote:
>> On Mon, Apr 4, 2011 at 8:58 AM, Frank K. <ylfc...@gmail.com> wrote:
>>>
>>>> > There's also the line somewhere in between numerical matrices and symbolic matrices, which is matrices with exact numeric values (instances of Rational), to consider.
>>>>
>>>> Not sure what you are getting at here.
>>>>
>>>
>>> There are some sparse matrix algorithms that are quite speedy, but
>>> give approximate answers (so you get a determinant with a little error
>>> bar, for example). These might be usable on a Rational matrix, but
>>> certainly not a symbolic one.
>>>
>>> Perhaps this is what Aaron is getting at, or just something I've made
>>> up!
>>
>> Only exact iterative methods, such as conjugate gradients, are
>> feasible in SymPy.  SymPy expressions have no real way to do any
>> minimization or determine small fill-in so these methods are out from
>> the get go.
>>
>
> What I left out is that you won't in general be able to keep the
> rational form.  Most complex operations, such as sqrt, will turn a
> Rational into a general Expr.

I have been confused by this and a few other comments. Won't the
sparse matrices work with *any* Expr subclass?

Brian

> -- Andy
>

SherjilOzair

Apr 5, 2011, 12:40:24 AM
to sympy
Brian,

Could you give an example of how you construct large sparse matrices?
What sort of constructors would you like to see in the class?

Are there any special classes of sparse matrices that you guys use
often, like tridiagonal or permutation matrices?

>> I should also mention that we are pretty much only interested in
>> matrix-vector multiplies, matrix-scalar mult. and matrix addition.

Aren't you also interested in solving A * x = B, or matrix-matrix
mult?

Thank you for your help,
Sherjil Ozair

Brian Granger

Apr 5, 2011, 1:30:35 AM
to sy...@googlegroups.com, SherjilOzair
Hi,

> Could you give an example of how you construct large sparse matrices?
> What sort of constructors would you like to see in the class?

Our big sparse matrices are formed out of tensor products of 2x2
matrices, such as Pauli and Hadamard matrices. But these tensor
products have *many* identity matrices as well, which makes them
sparse. Here is our matrix tensor product code:

https://github.com/sympy/sympy/blob/master/sympy/physics/quantum/matrixutils.py#L232


Here are some specific examples:

In [1]: from sympy.physics.quantum import *

In [2]: from sympy.physics.quantum.gate import *

In [3]: g = X(2)

In [4]: represent(g, nqubits=4)

You could also look at what happens when you increase nqubits and also look at:

g = Y(2)
g = H(2)

2 could also be any number < nqubits

In many ways one of the most useful things we could have would be a
sparse tensor product.
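
For illustration, a sparse tensor (Kronecker) product on dict-of-keys data
({(i, j): expr}) is just the usual index formula; this is a sketch, not the
matrixutils code:

def sparse_kron(a, ashape, b, bshape):
    """Kronecker product of two dict-of-keys matrices; only nonzero
    entries are ever touched or stored."""
    (ar, ac), (br, bc) = ashape, bshape
    out = {}
    for (i, j), x in a.items():
        for (k, l), y in b.items():
            out[(i * br + k, j * bc + l)] = x * y
    return out, (ar * br, ac * bc)

# e.g. Pauli-X tensor the 2x2 identity, on dict-of-keys data:
X = {(0, 1): 1, (1, 0): 1}
I2 = {(0, 0): 1, (1, 1): 1}
XI, shape = sparse_kron(X, (2, 2), I2, (2, 2))
print(shape, sorted(XI))   # (4, 4) [(0, 2), (1, 3), (2, 0), (3, 1)]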

> Are there any special classes of sparse matrices that you guys use
> often, like tridiagonal or permutation matrices?

Nope, in practice we get matrices that look quite different and have
different sparsities.

>> I should also mention that we are pretty much only interested in
>> matrix-vector multiplies, matrix-scalar mult. and matrix addition.

> Aren't you also interested in solving A * x = B, or matrix-matrix
> mult?

Nope, for this particular application we don't need linear solves or
matrix-matrix mult.

Cheers,

Brian

> Thank you for your help,
> Sherjil Ozair
>

Andy Ray Terrel

Apr 5, 2011, 1:48:13 AM
to elli...@gmail.com, sy...@googlegroups.com, Frank K.
On Tue, Apr 5, 2011 at 12:23 AM, Brian Granger <elli...@gmail.com> wrote:
> On Sun, Apr 3, 2011 at 11:17 PM, Andy Ray Terrel <andy....@gmail.com> wrote:
>> On Mon, Apr 4, 2011 at 9:10 AM, Andy Ray Terrel <andy....@gmail.com> wrote:
>>> On Mon, Apr 4, 2011 at 8:58 AM, Frank K. <ylfc...@gmail.com> wrote:
>>>>
>>>>> > There's also the line somewhere in between numerical matrices and symbolic matrices, which is matrices with exact numeric values (instances of Rational), to consider.
>>>>>
>>>>> Not sure what you are getting at here.
>>>>>
>>>>
>>>> There are some sparse matrix algorithms that are quite speedy, but
>>>> give approximate answers (so you get a determinant with a little error
>>>> bar, for example). These might be usable on a Rational matrix, but
>>>> certainly not a symbolic one.
>>>>
>>>> Perhaps this is what Aaron is getting at, or just something I've made
>>>> up!
>>>
>>> Only exact iterative methods, such as conjugate gradients, are
>>> feasible in SymPy.  SymPy expressions have no real way to do any
>>> minimization or determine small fill-in so these methods are out from
>>> the get go.
>>>
>>
>> What I left out is that you won't in general be able to keep the
>> rational form.  Most complex operations, such as sqrt, will turn a
>> Rational into a general Expr.
>
> I have been confused by this and a few other comments.  Won't the
> sparse matrices work with *any* Expr subclass?
>

It was in reference to the idea of getting speedy mathematics through
Rationals. I wouldn't be thrilled with anything that didn't accept
any Expr subclass.

@Aaron: You are describing BLAS-like routines; LAPACK-like routines
often require sqrts.

-- Andy

Aaron S. Meurer

Apr 5, 2011, 12:26:15 PM
to sy...@googlegroups.com, SherjilOzair

So what do you do with the matrices?

Aaron Meurer

Aaron S. Meurer

Apr 5, 2011, 12:28:15 PM
to sy...@googlegroups.com, elli...@gmail.com, Frank K.

Yes, it should work with anything, but if it is purely rational, you can use a more efficient implementation internally (like the coefficient domains in the polys). Ditto for rational functions.

>
> @Aaron: You are describing BLAS-like routines; LAPACK-like routines
> often require sqrts.

Sorry, I don't know what this means (as I said somewhere else, I have only taken basic undergraduate linear algebra).

Aaron Meurer

Tim Lahey

Apr 5, 2011, 12:35:57 PM
to sy...@googlegroups.com
On Tue, Apr 5, 2011 at 12:28 PM, Aaron S. Meurer <asme...@gmail.com> wrote:
>
> On Apr 4, 2011, at 11:48 PM, Andy Ray Terrel wrote:
>
>>
>> @Aaron: You are describing BLAS-like routines; LAPACK-like routines
>> often require sqrts.
>
> Sorry, I don't know what this means (as I said somewhere else, I have only taken basic undergraduate linear algebra).
>
> Aaron Meurer

http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms

http://en.wikipedia.org/wiki/LAPACK

Basically, BLAS provides the basic building blocks of linear algebra
(e.g., matrix-matrix and matrix-vector multiplies), while LAPACK
provides the higher-level algorithms, like factorizations and the
solution of linear systems. LAPACK builds upon BLAS.

Hope this helps, Aaron.

Cheers,

Tim.

--
Tim Lahey
PhD Candidate, Systems Design Engineering
University of Waterloo
http://about.me/tjlahey

Brian Granger

Apr 5, 2011, 12:54:57 PM
to sy...@googlegroups.com, Aaron S. Meurer, SherjilOzair

90% of the time, we build them and then apply a sequence of them to a vector:

A*B*C*D*E*...(Y*(Z*v))

We could do the matrix-matrix muls first and then apply to the vector:

(A*B*C*D*E*...Y*Z)*v

But the product of all the matrices is usually dense, and we don't
want to have to store the entire big dense matrix.
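
A sketch of that right-to-left evaluation: each step is a single (sparse)
mat-vec, and the dense product of all the matrices is never formed:

def apply_chain(mats, v):
    """Compute A*(B*(...*(Z*v))) right to left; works for anything that
    supports *, e.g. sympy Matrices or sparse operators."""
    for M in reversed(mats):
        v = M * v
    return v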

Brian

Aaron S. Meurer

Apr 5, 2011, 10:53:32 PM
to elli...@gmail.com, sy...@googlegroups.com, SherjilOzair

This reminds me. Implementing symbolic matrices would allow us to optimize multiplication of several matrices of different sizes using the matrix chain multiplication algorithm.
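
For reference, the standard matrix-chain-order dynamic program that would do
this, given only the dimensions of the factors:

def matrix_chain_order(dims):
    """dims has length n+1; matrix i is dims[i] x dims[i+1].  Returns the
    minimal scalar-multiplication cost table and the split points that
    realize it (the classic dynamic program)."""
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float('inf')
            for k in range(i, j):
                q = cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if q < cost[i][j]:
                    cost[i][j], split[i][j] = q, k
    return cost, split

# e.g. (10x100)(100x5)(5x50): best cost is 7500, by doing (A*B)*C first
print(matrix_chain_order([10, 100, 5, 50])[0][0][2])   # 7500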

Aaron Meurer

Andy Ray Terrel

Apr 6, 2011, 1:40:05 AM
to sy...@googlegroups.com, Aaron S. Meurer, elli...@gmail.com, SherjilOzair

Yes, doing Mat-Mat operations when Mat-Vec would do is almost never
the right thing to do.

>>
>> Brian
>
> This reminds me.  Implementing symbolic matrices would allow us to optimize multiplication of several matrices of different sizes using the matrix chain multiplication algorithm.

Well, I could use matrix expressions, and I already have a prototype
implementation in Ignition.

https://github.com/aterrel/ignition/tree/master/ignition/flame/tensors

I would get the functionality working, which would probably sustain a
student for the summer, before diving into the matrix chain.

-- Andy

Tim Lahey

Apr 6, 2011, 2:05:15 AM
to sy...@googlegroups.com
On Wed, Apr 6, 2011 at 1:40 AM, Andy Ray Terrel <andy....@gmail.com> wrote:
>
> Well I could use matrix expressions, and already have a prototype
> implementation in Ignition.
>
> https://github.com/aterrel/ignition/tree/master/ignition/flame/tensors
>
> I would get the functionality working, which probably would sustain a
> student for the summer, before I dived into the matrix chain.

I'm quite interested in this. I was looking at implementing something
like this for matrices as part of my GSoC project (along with block
matrices). That way, one would be able to do,

>>> a = x.transpose()*N*N.transpose()*x
>>> a.diff()
N*N.transpose()*x

where both x and N are column vectors. I chose this example because
this form is common in finite element applications. This is probably
one of the harder problems, since one needs to do some transformations
because of the x.transpose() at the beginning, but even being able to
do

>>> a = B*A
>>> a.transpose()
A' * B'

would be nice (where A' and B' are the transposes of A and B). Once
that's there, the other kinds of problems (like the derivative above)
can be built on top of that (since we already have non-commutative
support in SymPy).

Block matrix support would basically consist of building matrices out
of these matrix symbols.
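
A sketch of the transpose-of-a-product rule using today's non-commutative
symbols (the transpose() helper and the primed symbols are invented for
illustration, not the proposed GSoC design):

from sympy import Symbol, Mul

A = Symbol('A', commutative=False)
B = Symbol('B', commutative=False)
At = Symbol("A'", commutative=False)
Bt = Symbol("B'", commutative=False)

def transpose(expr, tmap):
    """Apply (X*Y)^T = Y^T * X^T by reversing the non-commutative factors;
    tmap maps each matrix symbol to its transpose symbol."""
    if expr.is_Mul:
        return Mul(*[transpose(f, tmap) for f in reversed(expr.args)])
    return tmap.get(expr, expr)

print(transpose(B*A, {A: At, B: Bt}))   # A'*B'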

Andy Ray Terrel

Apr 6, 2011, 2:37:45 AM
to sy...@googlegroups.com, Tim Lahey

I look forward to seeing your application!

-- Andy

> Cheers,
>
> Tim.
>
> --
> Tim Lahey
> PhD Candidate, Systems Design Engineering
> University of Waterloo
> http://about.me/tjlahey
>
