What is the difference between calculating Eigenvalues of a matrix M by
a) Eigenvalues[M]
b) Solve[CharacteristicPolynomial[M,x]==0,x]
c) Solve[Det[M - x IdentityMatrix[n]]==0,x]
?
Have a look at the following setting:
n = 12;
M = Table[Table[Random[Real, {0, 100}], {n}], {n}]
In this case all three methods (a, b and c) give the same set of eigenvalues
(neglecting small numerical differences, perhaps after applying Chop).
As soon as I increase n to 13 or more, method c gives a different set of
solutions. In particular, method c returns 18 solutions instead of the
expected 13, where the 5 spurious ones always lie close together and have
small imaginary parts.
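A minimal way to reproduce the comparison (a sketch; Sort and Chop are used only to normalize ordering and numerical fuzz, and the 4.x-style Random is kept to match the setting above):

```mathematica
(* compare the three methods on the same random real matrix *)
n = 13;
M = Table[Random[Real, {0, 100}], {n}, {n}];

a = Eigenvalues[M];
b = x /. Solve[CharacteristicPolynomial[M, x] == 0, x];
c = x /. Solve[Det[M - x IdentityMatrix[n]] == 0, x];

(* counting the results exposes the problem: method c can
   return more than n "roots" for n >= 13 *)
Map[Length, {a, b, c}]
```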
As far as I can see, the problem already occurs when calculating the
characteristic polynomial:
h[x_] = Det[M - x IdentityMatrix[n]]
looks good up to n=12; for n>=13 this function looks very strange,
containing a fraction and powers of x larger than n.
This is particularly annoying since the actual problem I am dealing with
involves a characteristic function like this:
h2[lambda_]=Det[M[lambda] - lambda IdentityMatrix[31]]
where M[lambda] is a sparse 31x31 matrix in which only the last row, the
last column, the main diagonal and the first off-diagonals are nonzero.
The lambda dependence enters as an Exp[-lambda] in M[[31,31]].
Of course I want to solve the transcendental equation
FindRoot[h2[lambda]==0,{lambda,...}] afterwards. But since the calculation of
the determinant already "fails" (producing spuriously high-order terms in
lambda), I have no chance of getting any sane results out of FindRoot.
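The matrix structure described above might be sketched like this; note that the numeric entries here are placeholders I made up for illustration, since the actual problem data is not given:

```mathematica
(* hypothetical sketch of the 31x31 structure: nonzero main diagonal,
   first off-diagonals, last row and column, with the lambda
   dependence as Exp[-lambda] in the (31,31) entry *)
n = 31;
M[lambda_] := Table[
  Which[
    i == n && j == n, Exp[-lambda],  (* transcendental corner entry *)
    i == j,           2.,            (* placeholder diagonal *)
    Abs[i - j] == 1, -1.,            (* placeholder off-diagonals *)
    i == n || j == n, 0.5,           (* placeholder last row/column *)
    True,             0],
  {i, n}, {j, n}];

h2[lambda_] := Det[M[lambda] - lambda IdentityMatrix[n]]
(* the eventual goal: FindRoot[h2[lambda] == 0, {lambda, 1.}] *)
```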
In this case it also makes no difference whether I use
h2[lambda_]=Det[M[lambda] - lambda IdentityMatrix[31]]
or
h2[lambda_]=CharacteristicPolynomial[M[lambda],lambda]
the latter being only available in Mathematica 5, whereas I often use
4.0 or 4.1.
Any suggestions, clarifications or hints are really appreciated.
Thank you,
Grischa
You know that all determinant calculations are very sensitive to
rounding errors? This becomes more and more critical as the size of
the matrix increases. The main task of all eigensystem solvers is to
avoid computing the determinant and the characteristic polynomial.
So you should try to increase the precision of your input data.
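For instance, one way to raise the working precision of approximate input (a sketch, assuming M is the random matrix from the question; Rationalize with tolerance 0 converts machine numbers to exact rationals):

```mathematica
(* two ways to get the input above machine precision *)
Mexact = Rationalize[M, 0];     (* exact rational entries *)
Mhigh  = SetPrecision[M, 30];   (* or 30-digit bignum entries *)

(* eigenvalues computed in 30-digit arithmetic *)
Eigenvalues[N[Mexact, 30]]
```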
Regards
Jens
"Grischa Stegemann" <grix7-...@yahoo.de>
wrote in the newsgroup message
news:d4kktj$ed3$1...@smc.vnet.net...
For a matrix of approximate numbers, Eigenvalues will use Lapack functions.
Solve will call on a Jenkins-Traub based rootfinder to obtain roots of
the input given by CharacteristicPolynomial. Depending on the matrix,
even finding the characteristic polynomial can be numerically unstable.
Moreover, the step of finding its roots can likewise be problematic even
if the polynomial, expressed in Horner form, is stable as a "black box"
for numeric evaluation.
Explicit computation of Det[...] will depend very much on internals of
how it is computed. I think you are seeing the effect of it using a row
reduction based approach when dimension is sufficiently high and input
contains symbolic data. As to getting too many solutions, the
explanation follows readily from what you have observed. Det is
producing a rational function that, were it done in exact arithmetic,
would have denominator exactly cancelled by numerator factors. The
numeric root finder is passed the numerator which has degree too high,
and any later checking in Solve will reveal that the denominator does
not vanish at the roots (it merely "almost" vanishes).
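If the behavior described above occurs, it can be inspected directly; a sketch that checks whether Det has returned a rational function rather than a polynomial:

```mathematica
(* examine what Det actually returns for a larger numeric matrix
   minus a symbolic multiple of the identity *)
n = 13;
M = Table[Random[Real, {0, 100}], {n}, {n}];
h = Det[M - x IdentityMatrix[n]];

Exponent[Numerator[h], x]    (* degree of the numerator; may exceed n *)
Exponent[Denominator[h], x]  (* a nonzero degree here betrays the fraction *)
```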
All this simply shows that symbolic methods will not be up to the task
if input is not exact. Among your choices are:
(1) Use exact arithmetic.
(2) Use, say, PolynomialReduce or PolynomialQuotient/Remainder to figure
out a "close" numerator of correct degree, that is, such that the
fractional part may be discarded.
(3) Use more refined methods based on optimization e.g. least squares,
in order to get a better approximation of the determinant.
(4) Compute the determinant by interpolation. Specifically, find e.g.
vals = Map[Det[mat/.lambda->#]&, Range[32]]
and use these in
InterpolatingPolynomial[vals,lambda]
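Option (2) above might be sketched as follows, assuming h names the rational function that Det returned:

```mathematica
(* discard the spurious fractional part of the rational function:
   in exact arithmetic the denominator would cancel exactly, so the
   polynomial quotient is the "close" numerator of correct degree *)
num = Numerator[h];
den = Denominator[h];

p = PolynomialQuotient[num, den, lambda];
(* PolynomialRemainder[num, den, lambda] would be exactly zero in
   exact arithmetic and is merely small here; it is thrown away *)
```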
I do not recommend (3) because it will involve nontrivial programming
and some familiarity with the relevant literature. Moreover it is not
likely to be worth the bother as an improvement to (2) since already in
computing Det you will have introduced numeric error. I would say either
(2) or (4) is your best bet unless exact arithmetic is an option and
does not turn out to be too slow.
Daniel Lichtblau
Wolfram Research