Uses and Size of Sympy Matrices


SherjilOzair

unread,
May 28, 2011, 9:56:54 AM5/28/11
to sympy
I would like to know how and where SymPy's matrices are used.
Are SymPy matrices used for numeric computing anywhere?
Are SymPy matrices expected to offer any advantage that matrices in
numpy/scipy or other libraries cannot offer?

Is their use limited to symbolic work? What sizes of matrices with
symbolic content are used?
Operations on Expr are far costlier than operations on numerics, so
knowing the sizes of the symbolic matrices that are required would help
me optimize when writing algorithms for sparse matrices, and also
when refactoring Matrix.

I expect that one cannot use very large symbolic matrices, as solving/
inverting/etc. would result in expression blowup.

I would be glad if you could also tell what running time you would
expect from the matrices that you use.

Matthew Rocklin

unread,
May 28, 2011, 1:10:33 PM5/28/11
to sy...@googlegroups.com
In my work I would like to use SymPy Matrices to represent matrices before I turn them into large numeric matrices for computation. I.e. I probably wouldn't use symbolic matrices for actual computation, I would love to use them to manipulate the math as much as possible beforehand. 

To this end I would like to see SymPy matrices implement pluggable backends. It would be nice to be able to represent mathematical matrices and then, once everything has been optimized, to substitute in a Numpy Matrix, Sparse Matrix, Linear Operator, C-Code, External program, etc.... Others will certainly want to substitute in pure SymPy matrices or pure SymPy sparse matrices. 

Regarding your previous post about Sparse matrix algorithms I think the same thinking could apply. It would be nice to have the algorithms be backend-agnostic. I'm sure some people will want sparse matrices of pure SymPy objects, others might want speed. Others might have a new datatype that you haven't anticipated. If it's possible it would be nice to let the type of the elements be switchable. 
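Matt's backend-agnostic idea can be sketched in code. Nothing below is actual SymPy API; it is just an illustration of writing an algorithm against a minimal matrix interface (shape and element access) so the matrix type can be swapped later:

```python
from sympy import Matrix, symbols

def naive_trace(M):
    """Sum of diagonal entries, using only M.shape and M[i, j].

    Because the function relies on nothing else, any matrix-like
    backend (dense, sparse, symbolic, numeric) can be passed in.
    """
    n = min(M.shape)
    total = M[0, 0]
    for i in range(1, n):
        total = total + M[i, i]
    return total

a, b = symbols('a b')
print(naive_trace(Matrix([[a, 1], [2, b]])))  # a + b -- fully symbolic
```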

-matt


--
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to sy...@googlegroups.com.
To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sympy?hl=en.


Mateusz Paprocki

unread,
May 28, 2011, 8:47:01 PM5/28/11
to sy...@googlegroups.com
Hi,

On 28 May 2011 15:56, SherjilOzair <sherji...@gmail.com> wrote:
I would like to know how and where Sympy's matrices are used.
Are SymPy matrices used for numeric computing anywhere?

btw. mpmath has support for matrices with inexact coefficients.
 
Are Sympy Matrices expected to offer any advantage that matrices in
numpy/scipy or other libraries cannot offer ?

Is its use limited to symbolic ? What size of Matrices with symbolic
content is used ?
Operations on Expr are way costlier than operations on numerics. So,
knowing the size of the symbolic matrices that are required would help
me in optimization when writing algorithms for sparse matrices, and
also when refactoring Matrix.

I expect that one cannot use very large symbolic matrices, as solving/
inverting/etc. would result in expression blowup.

I would be glad if you could also tell what running time you would
expect from the matrices that you use.

Look into heurisch(), ratint(), gosper_sum(), apart(), rsolve_*() (and maybe others). They all use a solver of linear systems. A few print statements will give you a clue how big and dense those matrices are and what the typical types of coefficients are. Running the sympy.integrals tests this way reveals that the largest matrix was 100 x 71 and there were very many matrices of size around 20 x 20 and larger. It's up to you to perform a detailed analysis, but the larger the symbolic matrices we are able to support, the better. As for coefficients, I would assume rationals in most cases, but from private discussions and a few issues (e.g. 1880) I know that support for large matrices with polynomial or rational function coefficients would be appreciated.
 



Mateusz

SherjilOzair

unread,
May 29, 2011, 3:28:31 PM5/29/11
to sympy
Backend-agnosticism is one of my major goals too. One of its cons is
that being unaware of the type can sometimes cost performance.
I don't know if I mentioned this before, but I faced a dilemma with
sum and Add.
Add is faster but only works for SymPy objects; sum is general but
slower.

This may be the wrong thread to say this, but maybe sum could be
modified to use Add more smartly when the objects are all SymPy
objects. Views?
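For reference, the sum/Add trade-off described above looks like this (both calls are real Python/SymPy API; the relative timing claim is Sherjil's):

```python
from sympy import Add, symbols

x, y, z = symbols('x y z')
terms = [x, y, z, 1]

# Built-in sum() folds left with repeated __add__ calls, starting from
# the integer 0, so each step pays SymPy's operator-dispatch cost.
s1 = sum(terms)

# Add(*args) builds the same sum in a single pass, but every argument
# must already be a SymPy object (or be sympify-able).
s2 = Add(*terms)

print(s1 == s2)  # True -- same result, different construction cost
```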


On May 28, 10:10 pm, Matthew Rocklin <mrock...@gmail.com> wrote:
> In my work I would like to use SymPy Matrices to represent matrices before I
> turn them into large numeric matrices for computation. I.e. I probably
> wouldn't use symbolic matrices for actual computation, I would love to use
> them to manipulate the math as much as possible beforehand.
>
> To this end I would like to see SymPy matrices implement pluggable backends.
> It would be nice to be able to represent mathematical matrices and then,
> once everything has been optimized, to substitute in a Numpy Matrix, Sparse
> Matrix, Linear Operator, C-Code, External program, etc.... Others will
> certainly want to substitute in pure SymPy matrices or pure SymPy sparse
> matrices.
>
> Regarding your previous post about Sparse matrix algorithms I think the same
> thinking could apply. It would be nice to have the algorithms be
> backend-agnostic. I'm sure some people will want sparse matrices of pure
> SymPy objects, others might want speed. Others might have a new datatype
> that you haven't anticipated. If it's possible it would be nice to let the
> type of the elements be switchable.
>
> -matt
>

Vinzent Steinberg

unread,
May 30, 2011, 1:23:43 PM5/30/11
to sympy
On 29 Mai, 21:28, SherjilOzair <sherjiloz...@gmail.com> wrote:
> Backend-agnostic is one of my major goals too. One of its cons is that
> being unaware of the type can sometimes lead to lack of performance.
> I don't know if I mentioned this before, but I faced a dilemma with
> sum and Add.
> Add is faster but only works for Sympy objects. sum is general but
> slower.
>
> This maybe the wrong thread to say this, but maybe sum could be
> modified to use Add more smartly when the objects are all Sympy
> objects. Views ?

You can't modify sum(), because it is a Python built-in. It will
always have

a + b + c == a.__add__(b).__add__(c)


You could use Domain.sum, which would be implemented by each Domain in
order to do the summation the most efficient way. So Matrix would only
need to worry about passing along all the relevant non-zeros.
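A hypothetical sketch of the Domain.sum idea above (no such class exists in SymPy; each domain would pick its own fastest summation strategy):

```python
from sympy import Add, Integer

class SymPyDomain:
    """Hypothetical domain of SymPy objects."""
    @staticmethod
    def sum(terms):
        # Fast path: one Add call instead of a left fold.
        return Add(*terms)

class GenericDomain:
    """Hypothetical fall-back domain for arbitrary Python objects."""
    @staticmethod
    def sum(terms):
        # Generic path: plain built-in sum().
        return sum(terms)

print(SymPyDomain.sum([Integer(1), Integer(2), Integer(3)]))  # 6
```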

Vinzent

Jeremias Yehdegho

unread,
May 31, 2011, 3:57:51 PM5/31/11
to sy...@googlegroups.com
Hi,

I might use sparse numeric matrices (coefficients typically in QQ or
some finite field) for computing reductions of a polynomial mod some
other polynomials. Currently this is a bit far away and I don't really
know the specifics other than that the matrices are sparse in general.

Jeremias

Brian Granger

unread,
May 31, 2011, 11:51:59 PM5/31/11
to sy...@googlegroups.com
Hi,

In sympy.physics.quantum we use sympy Matrix instances all over the
place. These can be quite large (100x100 up to 1000x1000; in the
future we could get even bigger) and always have symbolic entries.
At times we do like to convert them to numerical numpy arrays, but in
many cases we really want the symbolic forms.

> I would be glad if you could also tell what running time you would
> expect from the matrices that you use.

instant ;)

When we are dealing with large symbolic matrices, we are typically
just doing matrix/vector multiplies. But for small matrices we do
other things like linear solves, decompositions and eigenvalue
problems. Symbolic eigenvalues are great, but expressions quickly get
out of hand as the matrix size increases.
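A small-scale illustration of the symbolic matrix/vector products Brian describes (the particular matrix here is made up for the example):

```python
from sympy import Matrix, symbols

theta = symbols('theta', real=True)
M = Matrix([[0, theta],
            [theta, 0]])
v = Matrix([1, 0])

# Products keep entries fully symbolic -- no numeric evaluation happens.
print(M * v)   # Matrix([[0], [theta]])
print(M * M)   # Matrix([[theta**2, 0], [0, theta**2]])
```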

Cheers,

Brian


--
Brian E. Granger
Cal Poly State University, San Luis Obispo
bgra...@calpoly.edu and elli...@gmail.com

Aaron Meurer

unread,
Jun 2, 2011, 1:37:19 PM6/2/11
to sy...@googlegroups.com
I use Matrices in the Risch algorithm. See
https://github.com/asmeurer/sympy/blob/integration3/sympy/integrals/prde.py#L155.
The function in the test at
https://github.com/asmeurer/sympy/blob/integration3/sympy/integrals/tests/test_prde.py#L59
should give you an idea of a typical usage.

The matrix entries are rational functions, with possibly symbolic
coefficients, though the computability problems for symbolic
coefficients are something we know we will have to deal with (see the
comment at the top of constant_system()). At the moment, it doesn't
get very large with what is implemented, but it could when more things
are implemented. The main things that I need to do are rref(), with
correctness assured for rational functions, and computing null
spaces (mainly with rational entries, but I suppose they could be any
symbolic entries). This is the only part of the Risch algorithm code
that uses Expr instead of Poly, since Matrix doesn't work with Poly
(we would need a Frac class for that). I don't like how I have to
manually make sure rref calls cancel to assure correctness (actually,
if we had Frac, I could remove a ton of calls to Poly.cancel in my
code).
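An illustration (not Aaron's actual code) of the cancel-before-rref issue he mentions: over rational functions, an entry such as (x**2 - 1)/(x - 1) must be cancelled down to x + 1, or row reduction may not see that two rows are proportional.

```python
from sympy import Matrix, symbols, cancel

x = symbols('x')
M = Matrix([[(x**2 - 1)/(x - 1), 1],
            [x + 1,              1]])

# Cancel every entry first, then row-reduce.
reduced, pivots = M.applyfunc(cancel).rref()
print(reduced)   # second row is all zeros: the rows were proportional
print(pivots)    # (0,)
```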

Like Mateusz pointed out, heurisch() solves a huge linear system. The
sizes he gives are a little misleading, since those are only for the
integrals that run fast enough to be in the tests. If you try to run
an integral like the one from issue 1441, it hangs because of a sparse
system of about 600 equations in about 450 variables (put a print
statement in the code).

Aaron Meurer

SherjilOzair

unread,
Jun 11, 2011, 4:47:21 AM6/11/11
to sympy
Hi,

On Jun 1, 8:51 am, Brian Granger <elliso...@gmail.com> wrote:
> Hi,
>
> In sympy.physics.quantum we use sympy Matrix instances all over the
> place.  These can be quite large (100x100 up to many 1000x1000.  In
> the future we could get even bigger) and always have symbolic entries.
>  At times we do like to convert them to numerical numpy arrays, but in
> many cases we really want the symbolic forms.
>

Could you tell what the densities of those large matrices are?
Density here means (number of non-zeros)/(N**2).

Also, it would be great if you could post a few such matrices that you
use, or direct me to the pieces of code that generate them.
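The density measure defined above, computed for a small made-up example:

```python
from sympy import Matrix, symbols

a = symbols('a')
M = Matrix([[a, 0, 0],
            [0, 0, 1],
            [0, 0, 0]])

# density = (number of non-zeros) / N**2
nnz = sum(1 for entry in M if entry != 0)
density = nnz / (M.rows * M.cols)
print(nnz, density)  # nnz == 2, density == 2/9
```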

> On Sat, May 28, 2011 at 6:56 AM, SherjilOzair <sherjiloz...@gmail.com> wrote:
> > I would like to know how and where Sympy's matrices are used.
> > Is Sympy matrices used for numeric computing anywhere ?
> > Are Sympy Matrices expected to offer any advantage that matrices in
> > numpy/scipy or other libraries cannot offer ?
>
> > Is its use limited to symbolic ? What size of Matrices with symbolic
> > content is used ?
> > Operations on Expr are way costlier than operations on numerics. So,
> > knowing the size of the symbolic matrices that are required would help
> > me in optimization when writing algorithms for sparse matrices, and
> > also when refactoring Matrix.
>
> > I expect that one cannot use too large symbolic matrices, as solving/
> > inversing/etc. would result in expression blowup.
>
> > I would be glad if you could also tell what running time you would
> > expect from the matrices that you use.
>
> instant ;)
>
> When we are dealing with large symbolic matrices, we are typically
> just doing matrix/vector multplies.  But for small matrices we do
> other things like linear solves, decompositions and eigenvalue
> problems.  symbolic eigenvalues are great, but expressions quickly get
> out of hand as the matrix size increases.
>

Do you need to solve eigenvalue problems for matrices bigger than
4x4? How are you doing that currently?
Matrix.diagonalize only works for matrices smaller than 5x5, as
polys.roots can only solve equations of degree 4 and lower.
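A quick demonstration of the degree limit under discussion (roots() is real SymPy API):

```python
from sympy import roots, Symbol

x = Symbol('x')

# Generic low-degree polynomials get closed-form roots...
print(roots(x**2 - 3*x + 2))   # roots 1 and 2, each with multiplicity 1

# ...and some structured high-degree ones do too, despite degree > 4:
print(roots((x - 1)**5))       # {1: 5}
```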

> Cheers,
>
> Brian
>
>
> --
> Brian E. Granger
> Cal Poly State University, San Luis Obispo
> bgran...@calpoly.edu and elliso...@gmail.com

SherjilOzair

unread,
Jun 11, 2011, 4:48:11 AM6/11/11
to sympy
Aaron, it would be wonderful if we could discuss the implementation of
the Frac function.

On Jun 2, 10:37 pm, Aaron Meurer <asmeu...@gmail.com> wrote:
> I use Matrices in the Risch algorithm.  See https://github.com/asmeurer/sympy/blob/integration3/sympy/integrals/p....
>  The function in the test at https://github.com/asmeurer/sympy/blob/integration3/sympy/integrals/t...
>
> > --
> > Brian E. Granger
> > Cal Poly State University, San Luis Obispo
> > bgran...@calpoly.edu and elliso...@gmail.com

SherjilOzair

unread,
Jun 11, 2011, 4:49:33 AM6/11/11
to sympy
I meant the Frac class.

Aaron Meurer

unread,
Jun 11, 2011, 3:33:36 PM6/11/11
to sy...@googlegroups.com
Let's discuss it here on the mailing list. Mateusz will probably have
better input about it than I will; he will be better able to answer
questions about implementation. I can answer questions about my
potential use case.

Aaron Meurer

Vinzent Steinberg

unread,
Jun 11, 2011, 6:09:35 PM6/11/11
to sympy
On 11 Jun., 10:47, SherjilOzair <sherjiloz...@gmail.com> wrote:
> Do you require to solve eigenvalue problems of matrices bigger than
> 4*4 ?
> How are you doing it currently ?
> Matrix.diagonalize only works for matrices smaller than 5*5, as
> polys.roots can only solve degree 4 equations and less.

This is not entirely true, because we can find roots of higher degree
using, for example, factorization. But yes, for equations of degree
greater than 4 there is no general algorithm. But this does not matter
in this case, because sympy does not calculate eigenvalues using the
characteristic polynomial, AFAIK.

Vinzent

SherjilOzair

unread,
Jun 12, 2011, 8:49:44 AM6/12/11
to sympy


On Jun 12, 3:09 am, Vinzent Steinberg
Arbitrary higher-degree equations cannot be solved.
And the characteristic polynomial of an arbitrary matrix is itself
arbitrary. This means that we can't use that method to compute
eigenvalues reliably.
I don't think there are any other direct methods, though.

sympy does use this method:

def berkowitz_eigenvals(self, **flags):
    """Computes eigenvalues of a Matrix using the Berkowitz method."""
    return roots(self.berkowitz_charpoly(Dummy('x')), **flags)

eigenvals = berkowitz_eigenvals

Or are you referring to something else ?

>
> Vinzent

Aaron Meurer

unread,
Jun 12, 2011, 1:24:42 PM6/12/11
to sy...@googlegroups.com
If the eigenvalues cannot be expressed in radicals, then it doesn't
matter what method you use to compute them.

And for whatever reason, people always seem to be confused about this.
The general polynomial of degree five or higher does not have a
solution in radicals, and you can construct specific polynomials of
degree five or higher whose roots are not expressible in radicals
(like x**5 - x + 1). But that doesn't mean that *all* fifth-degree
polynomials lack solutions in radicals. It's very easy to
construct a polynomial of any degree that has solutions in radicals.
For example, (x - 1)**n is an nth-degree polynomial, and its roots are
all 1. It's even possible to have an irreducible polynomial of degree
5 or greater whose solutions are expressible in radicals.
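Aaron's point in code: a structured matrix of any size can have easily expressed eigenvalues, even though its characteristic polynomial has high degree. Here is a 6x6 upper-triangular example, whose eigenvalues are just the diagonal entries:

```python
from sympy import Matrix

# Upper-triangular 6x6 matrix: diagonal entries 1..6, ones above the
# diagonal. Its charpoly has degree 6, yet it factors completely over
# the rationals, so exact eigenvalues come out despite Abel-Ruffini.
M = Matrix(6, 6, lambda i, j: i + 1 if i == j else (1 if j > i else 0))
print(M.eigenvals())   # {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}
```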

So there's no reason to just "give up" if the polynomial is of degree 5
or higher unless you are always solving the general equation. You can
easily create a square matrix of any size whose eigenvalues are easily
expressed (in radicals, or, for example, just integers). Yes, there
will be cases where it can only produce roots in the form of RootOf,
but either just let those pass through, or raise the error only when
that happens (depending on whether it works to just let them pass through).

And by the way, maybe you were thinking that the characteristic
polynomial isn't computed by det(A - x*I), as is often taught in
linear algebra courses?

Aaron Meurer

SherjilOzair

unread,
Jun 12, 2011, 2:42:27 PM6/12/11
to sympy


On Jun 12, 10:24 pm, Aaron Meurer <asmeu...@gmail.com> wrote:
> If the eigenvalues cannot be expressed in radicals, then it doesn't
> matter what method you use to compute them.

What about symbolic elements ? Or exact elements ?

>
> And for whatever reason, people always seem to be confused about this.
>  The general fifth order and higher polynomial does not have a
> solution in radicals, and you can construct specific fifth order and
> higher polynomials whose roots are not expressible in radicals (like
> x**5 - x + 1).  But that doesn't mean that *all* fifth order
> polynomials don't have solutions in radicals.  It's very easy to
> construct a polynomial of any degree that has solutions in radicals.
> For example (x - 1)**n is a nth degree polynomial, and the roots are
> all 1.  It's even possible to have an irreducible polynomial of degree
> 5 or greater whose solution is expressible in radicals.

You would find that you were wrong in your last statement. If a
polynomial has solutions in radicals, then it is necessarily
reducible: if x0, x1, x2, ... are the exact solutions in
radicals, then (x - x0)(x - x1)... is the factored polynomial.

But we don't want to discuss the theoretical subtleties of the Abel-
Ruffini theorem here.

My point is that we can't be sure that an arbitrary matrix will have
such a well-behaved characteristic polynomial as (x - a)**n. Thus the
charpoly method (Berkowitz or det(A - x*I), it doesn't matter) is not
a good method, as it is not reliable for degree >= 5.

The method will not return an exact eigenvalue for large
matrices.

>
> So there's no reason to just "give up" if the polynomial is degree 5
> or higher unless you are always solving the general equation.  You can
> easily create a square matrix of any size whose eigenvalues are easily
> expressed (in radicals, or for example just integers).  Yes, there
> will be cases where it can only produce roots in the form of RootOf,
> but either just let those pass through, or raise the error only when
> that happens (depending on if it works to just let them pass through).

RootOf is a good approach, but only if the eigenvalue is the final
result the user wants. What if the user wants the eigenvectors or the
SVD?
Maybe we could follow the Wolfram|Alpha example: it returns "root
of <charpoly> near <approximate solution>".

So RootOf objects could return the approximate value when
explicitly asked for.

>
> And by the way, maybe you were thinking that the characteristic
> polynomial isn't computed by det(A - x*I), as is often taught in
> linear algebra courses?

I didn't get you here. det(A - x*I) and Berkowitz both give the
charpoly, so it wouldn't matter which we use, other than one being
faster than the other.

Aaron Meurer

unread,
Jun 12, 2011, 3:54:52 PM6/12/11
to sy...@googlegroups.com
On Sun, Jun 12, 2011 at 12:42 PM, SherjilOzair <sherji...@gmail.com> wrote:
>
>
> On Jun 12, 10:24 pm, Aaron Meurer <asmeu...@gmail.com> wrote:
>> If the eigenvalues cannot be expressed in radicals, then it doesn't
>> matter what method you use to compute them.
>
> What about symbolic elements ? Or exact elements ?

You can think of the "general" case as meaning symbolic elements, and
"special cases" as meaning numerical elements.

>
>>
>> And for whatever reason, people always seem to be confused about this.
>>  The general fifth order and higher polynomial does not have a
>> solution in radicals, and you can construct specific fifth order and
>> higher polynomials whose roots are not expressible in radicals (like
>> x**5 - x + 1).  But that doesn't mean that *all* fifth order
>> polynomials don't have solutions in radicals.  It's very easy to
>> construct a polynomial of any degree that has solutions in radicals.
>> For example (x - 1)**n is a nth degree polynomial, and the roots are
>> all 1.  It's even possible to have an irreducible polynomial of degree
>> 5 or greater whose solution is expressible in radicals.
>
> You would find that you were wrong in your last statement. If a
> polynomial has solutions in radicals, then it is necessarily a
> reducible polynomial. If x0, x1, x2, ... are the exact solutions in
> radicals then (x - x0)(x - x1)... is the reduced polynomial.

Irreducible means irreducible over the rationals. To take an example
from Wikipedia, x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21 is
solvable in radicals, but you get

In [15]: factor(x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21)
Out[15]: x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21

which, if I'm not mistaken, means that it is irreducible (by the way,
Mateusz, Poly.is_irreducible seems to be broken).

>
> But we don't want to discuss the theoretical subtleties of the abel-
> ruffini theorem.
>
> My point is that we can't be sure that an arbitrary matrix will have
> such a well-conditioned equation like (x - a)**n. Thus the charpoly
> method (berwkowitz OR det(A-x*I), doesn't matter) is not a good
> method, as it is not reliable for n > 5.

The point I was making is that it doesn't matter what method you use.
If the eigenvalue is not expressible in radicals, it's not expressible
in radicals.

>
> The method will not return with an exact eigenvalue for large
> matrices.
>
>>
>> So there's no reason to just "give up" if the polynomial is degree 5
>> or higher unless you are always solving the general equation.  You can
>> easily create a square matrix of any size whose eigenvalues are easily
>> expressed (in radicals, or for example just integers).  Yes, there
>> will be cases where it can only produce roots in the form of RootOf,
>> but either just let those pass through, or raise the error only when
>> that happens (depending on if it works to just let them pass through).
>
> RootOf is a good method, but only if the eigenval is the final result
> desired by the user. What if the user wants the eigenvectors or the
> SVD ?
> Maybe we could follow the wolfram-alpha example. It returns with "root
> of <charpoly> near <approximate solution>".

Well, it depends on how well those functions play with it. A single
RootOf object would be about as bad to work with as some arbitrary
symbolic expression. Like I was saying before, if it can work with it
(even if it has to return some enormous expression), just let it.
It's better to give a symbolic solution when you can, even if the
simplest form of it is not very simple.

If it's literally impossible to work with such symbolic eigenvalues,
maybe you could raise NotImplementedError.

Note that RootOf only works for polynomials with numerical
coefficients, so all the standard stuff is computable (e.g., you can
compare them). For polynomials with symbolic coefficients that
solve() can't handle (whether they aren't solvable or it just isn't
implemented), you will just have to raise NotImplementedError if you
need all n roots.

>
> So, that RootOf objects could return the approximate value if
> explicitly asked for.

Yes:

In [23]: [RootOf(x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21,
i).evalf() for i in range(5)]
Out[23]: [0.603805884142291, 1.5398957188017 - 4.39504023164041⋅ⅈ,
0.658201339127155 - 1.08185949306609⋅ⅈ, 0.658201339127155 +
1.08185949306609⋅ⅈ, 1.5398957188017 + 4.39504023164041⋅ⅈ]

>
>>
>> And by the way, maybe you were thinking that the characteristic
>> polynomial isn't computed by det(A - x*I), as is often taught in
>> linear algebra courses?
>
> Didn't get you here. det(A - x*I) or berkowitz both give a charpoly.
> So it wouldn't matter which we use, other than the fact that one is
> faster than the other.

I was referring to Vinzent's remark that the eigenvalues weren't
computed from the characteristic polynomial.

Aaron Meurer
