Out[164]= True
In[165]:= Eigenvalues[s, -6] // Chop
Out[165]= {0.382326, -0.382326, 0.350062, -0.350062, 0, 0}
In[166]:= Eigenvalues[SparseArray[s], -6] // Chop
Out[166]= {0.38245, -0.352447, 0.351011, 1.26736*10^-7, 0, 0}
----------------------------------------------------------------------
Why are the two results different? Is there an issue with precision
when computing eigenvalues of sparse arrays? The matrix is such that
the eigenvalues are symmetric about 0, so I'm pretty sure that the
first result is correct.
----------------------------------------------------------------------
> I am computing the eigenvalues of a matrix 's'
> In[165]:= Eigenvalues[s, -6] // Chop
> Out[165]= {0.382326, -0.382326, 0.350062, -0.350062, 0, 0}
>
> In[166]:= Eigenvalues[SparseArray[s], -6] // Chop
> Out[166]= {0.38245, -0.352447, 0.351011, 1.26736*10^-7, 0, 0}
>
> Why are the two results different?
You should read up on the topic of numerical linear algebra.
The Documentation Center has a nice tutorial on the subject,
as implemented in Mathematica, which you can find by typing
"tutorial/LinearAlgebraInMathematicaOverview". This page is
listed as deprecated in V7, but I can't find anything that has
replaced it with as much detail on the subject.
I think the short answer to your question is that different
algorithms are used in the two cases. If Eigenvalues is given a
SparseArray and fewer than 1/5 of the eigenvalues are requested,
it will use the Arnoldi method. When given a full matrix, one of
the LAPACK routines will be used. A few experiments show that the
results of the two will typically differ slightly. The reason for
the relatively large difference that you are getting is hard to
say without looking at the matrix, but it is certainly possible
that your matrix is ill-conditioned with respect to one technique
more than the other.
Mark McClure
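The same two code paths exist in SciPy, which wraps the same underlying libraries (LAPACK for dense arrays, ARPACK's Arnoldi iteration for sparse ones); a minimal sketch on a stand-in matrix, assuming SciPy is available (this is not from the thread):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# A well-conditioned symmetric tridiagonal test matrix (a stand-in,
# not the poster's matrix).
n = 50
A = diags([np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)], [-1, 0, 1])

# Dense route: LAPACK computes the full spectrum at once.
dense_vals = np.linalg.eigvalsh(A.toarray())        # sorted ascending

# Sparse route: ARPACK iterates toward a few extreme eigenvalues.
sparse_vals = np.sort(eigs(A, k=4, which='LM', return_eigenvectors=False).real)

# On a well-conditioned matrix the two agree to iterative tolerance.
print(np.allclose(sparse_vals, dense_vals[-4:], atol=1e-8))
```

Here k, which, tol, and maxiter control the ARPACK iteration, playing roughly the role of Mathematica's Method sub-options.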
----------------------------------------------------------------------
Thanks for your reply. I read those docs, but IMO they are somewhat
misleading: they state that the Arnoldi method may not converge, but
not that it may converge and still give spurious eigenvalues. Also,
Mathematica doesn't allow one to explicitly choose the LAPACK routine -
only "Automatic", which in this case picks Arnoldi for a sparse array
anyway. This seems like a bug.
The matrix I was using is here (it is only 156x156):
https://netfiles.uiuc.edu:443/aroy2/www/sparse%20array/bugmatrix.dat
Is there a way to tell that it is ill-conditioned? As you can see, I
am very new to numerical methods.
Thanks,
Abhishek
----------------------------------------------------------------------
The Arnoldi method is best suited to calculating the *largest*
eigenvalues, not the *smallest* as you desire. But there is an easy
solution. First, have a look at the documented options for using the
Arnoldi method with sparse matrices in Mathematica.
In particular the "Shift and Invert Spectral Transformation Mode" is
interesting here. When you want to calculate the *smallest*
eigenvalues by magnitude, Mathematica uses the Arnoldi method to
calculate the *largest* eigenvalues of the *inverse* of your matrix.
But since your matrix has two zero eigenvalues, it is not invertible,
and something seems to break in this method. The remedy is to *shift*
the matrix by a small amount before inverting, something like
Eigenvalues[SparseArray[s] // N, 6,
 Method -> {"Arnoldi", "MaxIterations" -> 10^5,
   "Criteria" -> "Magnitude", "Shift" -> 0.01}] // Sort
What this does is calculate the 6 largest-magnitude eigenvalues of
Inverse[s - 0.01 IdentityMatrix[Length[s]]], then back-transform the
results (m -> 0.01 + 1/m) to give you the 6 eigenvalues of s closest
to the shift 0.01, i.e. the smallest ones.
The error Mathematica makes in your original calculation seems to be
that it takes a default value of zero for the shift: in fact,
Eigenvalues[SparseArray[s] // N, 6,
 Method -> {"Arnoldi", "MaxIterations" -> 10^5,
   "Criteria" -> "Magnitude", "Shift" -> 0}] // Sort
again gives the wrong answer.
Here's the documentation on Arnoldi stuff as used in Mathematica:
http://www.caam.rice.edu/software/ARPACK/
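SciPy's sparse eigensolver wraps this same ARPACK library, so the shift-invert mode can be sketched there too. Below is an illustration (not from the thread) on a small stand-in matrix that, like the poster's, is singular with a spectrum symmetric about 0; the sigma parameter plays the role of Mathematica's Shift sub-option:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import eigs

# Stand-in matrix: [[0, B], [B^T, 0]] with a 6x4 block B has eigenvalues
# +/- (the singular values of B) plus two exact zeros -- singular, like
# the poster's 156x156 matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 4))
A = csc_matrix(np.block([[np.zeros((6, 6)), B],
                         [B.T, np.zeros((4, 4))]]))

# Shift-invert about sigma = 0.01: ARPACK finds the largest eigenvalues m
# of (A - sigma*I)^-1 and back-transforms lambda = sigma + 1/m.
# sigma = 0 would require factoring the singular matrix A itself.
vals = eigs(A, k=4, sigma=0.01, which='LM', return_eigenvectors=False).real
print(np.sort(np.abs(vals)))   # the four eigenvalues nearest 0.01
```

Two of the four returned values come out numerically zero, as they should, because A - 0.01 I is comfortably invertible even though A itself is not.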
Hope this works out!
Roman.
----------------------------------------------------------------------
> Thanks for your reply. I read those docs but IMO they are somewhat
> misleading. They state that the Arnoldi method may not converge but
> not that it may converge and still give spurious eigenvalues.
I agree; a warning should be issued. There are simple tests based on
condition numbers for this sort of thing.
> The matrix I was using is here (it is only 156x156):
> https://netfiles.uiuc.edu:443/aroy2/www/sparse%20array/bugmatrix.dat
>
Arnoldi iteration is a technique to find the *largest-magnitude*
eigenvalues of a matrix. We can find other eigenvalues of the matrix A by applying Arnoldi
iteration to the matrix (A-sI)^(-1), where s is a number close to the
eigenvalue we want. This works since s+1/m is an eigenvalue of A close to s,
whenever m is a large eigenvalue of (A-sI)^(-1). Not surprisingly, there is
a problem if s is too close to an exact eigenvalue, for then A-sI is
singular. A simple test for this possibility is the condition number of
A-sI, which should not be too large. In your case, you are looking for
eigenvalues close to zero of a matrix that is singular to start with. The
condition number of this matrix is huge, since it's singular.
dat = N[Import[
"https://netfiles.uiuc.edu/aroy2/www/sparse%20array/bugmatrix.dat"]];
sp = SparseArray[dat];
LinearAlgebra`MatrixConditionNumber[sp]
7.22615*10^17
We can fix the situation by using the "Shift" sub-option of the Arnoldi
method. This option specifies the number s and should be set close to
zero, but certainly not equal to zero. We can do this as follows.
Eigenvalues[sp, 6,
Method -> {"Arnoldi", "Shift" -> 0.01}]
{0.382326, -0.382326, 0.350062, -0.350062, 4.74967*10^-15,
9.95731*10^-16}
Eigenvalues[dat, -6]
{0.382326, -0.382326, -0.350062, 0.350062, 2.29823*10^-15,
7.57673*10^-16}
Hope that helps,
Mark
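The back-transformation in Mark's explanation (if m is an eigenvalue of (A - sI)^(-1), then s + 1/m is an eigenvalue of A) is easy to verify numerically; here is a NumPy sketch on a trivial matrix, not part of the thread:

```python
import numpy as np

# A diagonal matrix makes the spectrum obvious: eigenvalues 1, 2, 3.
A = np.diag([1.0, 2.0, 3.0])
s = 0.9            # shift near the eigenvalue we want, but not equal to it

# Eigenvalues of (A - s I)^-1 are 1/(lambda - s); the largest in magnitude
# (here 1/(1 - 0.9) = 10) corresponds to the eigenvalue of A nearest s.
m = np.linalg.eigvals(np.linalg.inv(A - s * np.eye(3)))

# The back-transform recovers the spectrum of A.
recovered = np.sort(s + 1.0 / m)
print(recovered)   # -> [1. 2. 3.] up to rounding
```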
----------------------------------------------------------------------
SingularValueList[dat]
This gives the list of the non-zero singular values. As you will see,
there are 154 of these for your 156x156 matrix, so the matrix has a
two-dimensional null space: two of the eigenvalues are zero to machine
accuracy.
Kevin
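Kevin's check has a direct NumPy analogue; here it is on a toy symmetric matrix (not the poster's) whose rank deficiency is visible by construction:

```python
import numpy as np

# Rank-1 symmetric matrix: the outer product of v = [1, 2, 3] with itself.
v = np.array([1.0, 2.0, 3.0])
A = np.outer(v, v)

# Singular values: one equals |v|^2 = 14, the other two are zero.
sv = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(float).eps * sv[0]   # standard rank tolerance
rank = int(np.sum(sv > tol))
print(rank, A.shape[0] - rank)   # rank 1, so two zero singular values
```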
----------------------------------------------------------------------
Thanks very much, Mark and Roman. That was quite educational.
I am going to submit a bug report about this, since it seems
Mathematica ought to give at least a warning when the shift is too
close to an eigenvalue, given that the user cannot explicitly choose
LAPACK over the Arnoldi method.
Abhishek
----------------------------------------------------------------------
I first noticed the problem when checking the quality of the solutions
by comparing the eigenvalues with the expectation value of the matrix
in each of the eigenvectors.
Even more dramatically, simply rerunning Eigenvalues twice on the same
matrix produces somewhat different results!!!
If I choose to get only a few eigenvalues (thus forcing it to use the
Arnoldi method), the problem goes away.
I was recently told that the version of LAPACK used by Mathematica was
recompiled by people at Wolfram for added functionality -- did they
break it too?
My trusty C++ code has no problems with these matrices....
Thoughts? Suggestions?
thanks!
PS
I can send the matrix in a file to anybody willing to look into this.
----------------------------------------------------------------------
What platform are you using? I've had problems with the numeric
linear algebra routines on OS X; the same code seems to work on Windows.
Regards,
Sseziwa
----------------------------------------------------------------------
Interestingly, a 32-bit laptop executes this code flawlessly.
----------------------------------------------------------------------
You can use the Evaluation->Kernel Configuration Options... dialog to
tailor the existing kernel, or create a new one to be launched as...
SystemFiles\Kernel\Binaries\Windows\MathKernel
Sincerely,
John Fultz
jfu...@wolfram.com
User Interface Group
Wolfram Research, Inc.