--
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to sy...@googlegroups.com.
To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sympy?hl=en.
I would like to know how and where SymPy's matrices are used.
Are SymPy matrices used for numeric computing anywhere?
Are SymPy matrices expected to offer any advantage that matrices in
numpy/scipy or other libraries cannot offer?
Is their use limited to symbolic computation? What sizes of matrices
with symbolic content are used?
Operations on Expr are way costlier than operations on numerics. So,
knowing the size of the symbolic matrices that are required would help
me in optimization when writing algorithms for sparse matrices, and
also when refactoring Matrix.
I expect that one cannot use very large symbolic matrices, since
solving, inverting, etc. would result in expression blowup.
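To make the blowup concern concrete, here is a small sketch (my own illustration, not from the thread) comparing the total operation count of the entries of a fully symbolic 2x2 inverse against a 3x3 inverse:

```python
from sympy import Matrix, symbols, count_ops

# Invert fully symbolic 2x2 and 3x3 matrices and compare the total
# operation count of the resulting entries (a rough size measure).
a = symbols('a:4')
b = symbols('b:9')
ops2 = sum(count_ops(e) for e in Matrix(2, 2, a).inv())
ops3 = sum(count_ops(e) for e in Matrix(3, 3, b).inv())

# Entry size grows quickly with dimension.
assert ops3 > ops2
```

The growth is combinatorial: each entry of the inverse is a cofactor over the determinant, and both are sums over permutations.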
I would be glad if you could also tell me what running times you
would expect from the matrices that you use.
I might use sparse numeric matrices (coefficients typically in QQ or
some finite field) for computing reductions of a polynomial mod some
other polynomials. Currently this is a bit far away and I don't really
know the specifics other than that the matrices are sparse in general.
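As a rough illustration of the kind of reduction meant here (my sketch, using SymPy's dense `reduced()` rather than any sparse machinery), reducing a polynomial modulo another:

```python
from sympy import reduced, symbols

x, y = symbols('x y')

# Reduce x**3 + y modulo the single polynomial x**2 - 1:
# x**3 + y = x*(x**2 - 1) + (x + y)
quotients, remainder = reduced(x**3 + y, [x**2 - 1], x, y)
assert quotients == [x]
assert remainder == x + y
```

In the sparse-matrix formulation, each monomial becomes a coordinate and the reducing polynomials become sparse rows over QQ or a finite field.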
Jeremias
In sympy.physics.quantum we use sympy Matrix instances all over the
place. These can be quite large (100x100 up to many 1000x1000; in
the future we could get even bigger) and always have symbolic entries.
At times we do like to convert them to numerical numpy arrays, but in
many cases we really want the symbolic forms.
When we are dealing with large symbolic matrices, we are typically
just doing matrix/vector multiplies. But for small matrices we do
other things like linear solves, decompositions, and eigenvalue
problems. Symbolic eigenvalues are great, but the expressions quickly
get out of hand as the matrix size increases.
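A minimal sketch of the small-matrix case (my example, not from the thread): a generic symbolic 2x2 still has compact eigenvalues, because they come from the quadratic formula.

```python
from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')
M = Matrix([[a, b], [c, d]])

# For a generic symbolic 2x2, the eigenvalues are the quadratic
# formula applied to the characteristic polynomial.
ev = M.eigenvals()  # dict of {eigenvalue: multiplicity}
assert sum(ev.values()) == 2
```

For 3x3 the entries already involve the cubic formula, and for a generic symbolic 5x5 and up no radical formula exists at all.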
Cheers,
Brian
--
Brian E. Granger
Cal Poly State University, San Luis Obispo
bgra...@calpoly.edu and elli...@gmail.com
The matrices are rational functions, with possibly symbolic
coefficients, though the computability problems for symbolic
coefficients are something we know we will have to deal with (see the
comment at the top of constant_system()). At the moment, it doesn't
get very large with what is implemented, but it could when more things
are implemented. The main things that I need to do are rref(), with
correctness assured for rational functions, and the ability to
compute null spaces (mainly with rational entries, but I suppose they
could be any symbolic entries). This is the only part of the Risch
algorithm code that uses Expr instead of Poly, since Matrix doesn't
work with Poly (we would need a Frac class for that). I don't like
how I have to manually make sure rref calls cancel to assure
correctness (actually, if we had Frac, I could remove a ton of calls
to Poly.cancel in my code).
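The cancellation issue can be sketched like this (my example; the entries are contrived so that the rank is only visible after cancel):

```python
from sympy import Matrix, symbols

x = symbols('x')

# The first entry is (x**2 - 1)/(x - 1), i.e. x + 1 after cancellation,
# so the two rows are proportional and the true rank is 1.
M = Matrix([[(x**2 - 1)/(x - 1), 2],
            [x + 1,              2]])

# Without simplification during zero-testing, rref can miss that the
# eliminated entry cancels to zero; simplify=True guards against this.
R, pivots = M.rref(simplify=True)
assert len(pivots) == 1
```

With a Frac (field-of-fractions) domain, entries would be kept in canonical form automatically and no explicit cancel calls would be needed.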
Like Mateusz pointed out, heurisch() solves a huge linear system. The
sizes he gives are a little misleading, since those are only for the
integrals that run fast enough to be in the tests. If you try to run
an integral like the one from issue 1441, it hangs because of a sparse
system of about 600 equations in about 450 variables (put a print
statement in the code).
Aaron Meurer
And for whatever reason, people always seem to be confused about this.
The general fifth order and higher polynomial does not have a
solution in radicals, and you can construct specific fifth order and
higher polynomials whose roots are not expressible in radicals (like
x**5 - x + 1). But that doesn't mean that *all* fifth order
polynomials don't have solutions in radicals. It's very easy to
construct a polynomial of any degree that has solutions in radicals.
For example, (x - 1)**n is an nth-degree polynomial, and its roots are
all 1. It's even possible to have an irreducible polynomial of degree
5 or greater whose solutions are expressible in radicals.
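Both halves of the claim are easy to check in SymPy (my illustration):

```python
from sympy import roots, symbols

x = symbols('x')

# (x - 1)**5 is a degree-5 polynomial, yet its roots are trivially
# expressible: 1 with multiplicity 5.
assert roots((x - 1)**5, x) == {1: 5}

# By contrast, x**5 - x + 1 has no roots expressible in radicals,
# so roots() finds nothing in closed form.
assert roots(x**5 - x + 1, x) == {}
```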
So there's no reason to just "give up" if the polynomial is degree 5
or higher unless you are always solving the general equation. You can
easily create a square matrix of any size whose eigenvalues are easily
expressed (in radicals, or for example just integers). Yes, there
will be cases where it can only produce roots in the form of RootOf,
but either just let those pass through, or raise the error only when
that happens (depending on whether it works to just let them pass
through).
And by the way, maybe you were thinking that the characteristic
polynomial isn't computed by det(A - x*I), as is often taught in
linear algebra courses?
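For reference, whichever algorithm is used, the result agrees with the determinant definition; a quick check (my example, using SymPy's det(lam*I - A) sign convention for charpoly):

```python
from sympy import Matrix, symbols, eye, expand

lam = symbols('lambda')
A = Matrix([[2, 1, 0],
            [1, 2, 1],
            [0, 1, 2]])

# SymPy computes charpoly with the Berkowitz algorithm, but the result
# is the same polynomial as det(lam*I - A) (which differs from
# det(A - lam*I) only by a factor of (-1)**n).
p = A.charpoly(lam).as_expr()
assert expand(p - (lam*eye(3) - A).det()) == 0
```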
Aaron Meurer
You can think of "general" case to mean symbolic elements, and
"special cases" to mean numerical elements.
>
>>
>> And for whatever reason, people always seem to be confused about this.
>> The general fifth order and higher polynomial does not have a
>> solution in radicals, and you can construct specific fifth order and
>> higher polynomials whose roots are not expressible in radicals (like
>> x**5 - x + 1). But that doesn't mean that *all* fifth order
>> polynomials don't have solutions in radicals. It's very easy to
>> construct a polynomial of any degree that has solutions in radicals.
>> For example (x - 1)**n is a nth degree polynomial, and the roots are
>> all 1. It's even possible to have an irreducible polynomial of degree
>> 5 or greater whose solution is expressible in radicals.
>
> You would find that you were wrong in your last statement. If a
> polynomial has solutions in radicals, then it is necessarily a
> reducible polynomial. If x0, x1, x2, ... are the exact solutions in
> radicals then (x - x0)(x - x1)... is the reduced polynomial.
Irreducible means irreducible over the rationals. To take an example
from Wikipedia, x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21 is
solvable in radicals, but you get
In [15]: factor(x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21)
Out[15]: x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21
which if I'm not mistaken, means that it is irreducible (by the way
Mateusz, Poly.is_irreducible seems to be broken).
>
> But we don't want to discuss the theoretical subtleties of the Abel-
> Ruffini theorem.
>
> My point is that we can't be sure that an arbitrary matrix will have
> such a well-conditioned equation like (x - a)**n. Thus the charpoly
> method (berkowitz OR det(A - x*I), doesn't matter) is not a good
> method, as it is not reliable for n > 5.
The point I was making is that it doesn't matter what method you use.
If the eigenvalue is not expressible in radicals, it's not expressible
in radicals.
>
> The method will not return with an exact eigenvalue for large
> matrices.
>
>>
>> So there's no reason to just "give up" if the polynomial is degree 5
>> or higher unless you are always solving the general equation. You can
>> easily create a square matrix of any size whose eigenvalues are easily
>> expressed (in radicals, or for example just integers). Yes, there
>> will be cases where it can only produce roots in the form of RootOf,
>> but either just let those pass through, or raise the error only when
>> that happens (depending on if it works to just let them pass through).
>
> RootOf is a good method, but only if the eigenval is the final result
> desired by the user. What if the user wants the eigenvectors or the
> SVD ?
> Maybe we could follow the wolfram-alpha example. It returns with "root
> of <charpoly> near <approximate solution>".
Well, it depends on how well those functions play with it. A single
RootOf object would be about as bad to work with as some arbitrary
symbolic expression. Like I was saying before, if it can work with it
(even if it has to return some enormous expression), just let it.
It's better to give a symbolic solution when you can, even if the
simplest form of it is not very simple.
If it's literally impossible to work with such symbolic eigenvalues,
maybe you could raise NotImplementedError.
Note that RootOf only works for polynomials with numerical
coefficients, so all the standard stuff is computable (e.g., you can
compare them). For polynomials with symbolic coefficients that
solve() can't handle (whether they aren't solvable or it just isn't
implemented), you will just have to raise NotImplementedError if you
need all n roots.
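A small sketch of the numeric-coefficient case (my example; in current SymPy the helper is spelled rootof):

```python
from sympy import rootof, symbols

x = symbols('x')

# The quintic x**5 - x + 1 is not solvable in radicals, but its roots
# are still well-defined, computable objects.
r = rootof(x**5 - x + 1, 0)   # index 0: the unique real root

assert r.is_real
assert r.evalf() < 0          # numerically about -1.1673
```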
>
> So, that RootOf objects could return the approximate value if
> explicitly asked for.
Yes:
In [23]: [RootOf(x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21, i).evalf()
          for i in range(5)]
Out[23]: [0.603805884142291, 1.5398957188017 - 4.39504023164041⋅ⅈ,
          0.658201339127155 - 1.08185949306609⋅ⅈ,
          0.658201339127155 + 1.08185949306609⋅ⅈ,
          1.5398957188017 + 4.39504023164041⋅ⅈ]
>
>>
>> And by the way, maybe you were thinking that the characteristic
>> polynomial isn't computed by det(A - x*I), as is often taught in
>> linear algebra courses?
>
> Didn't get you here. det(A - x*I) or berkowitz both give a charpoly.
> So it wouldn't matter which we use, other than the fact that one is
> faster than the other.
I was referring to Vinzent's remark that the eigenvalues weren't
computed from the characteristic polynomial.
Aaron Meurer