
Eigenvalues of sparse arrays


gopher
Jul 31, 2009, 5:52:52 AM

I am computing the eigenvalues of a matrix 's'.
----------------------------------------------------------------------
In[164]:= SparseArray[s] == s

Out[164]= True

In[165]:= Eigenvalues[s, -6] // Chop

Out[165]= {0.382326, -0.382326, 0.350062, -0.350062, 0, 0}

In[166]:= Eigenvalues[SparseArray[s], -6] // Chop

Out[166]= {0.38245, -0.352447, 0.351011, 1.26736*10^-7, 0, 0}
----------------------------------------------------------------------

Why are the two results different? Is there a precision issue when
computing eigenvalues of sparse arrays? The matrix is such that its
eigenvalues are symmetric about 0, so I'm pretty sure that the first
result is correct.
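
As a quick numerical check of that symmetry claim (just a sketch,
assuming s holds machine-precision entries), the sorted spectrum should
cancel against its own reversal:

evals = Eigenvalues[N[s]];
Max[Abs[Sort[evals] + Reverse[Sort[evals]]]]  (* should be roughly 0 *)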

Mark McClure
Aug 1, 2009, 3:58:43 AM

On Fri, Jul 31, 2009 at 5:52 AM, gopher <gophe...@gmail.com> wrote:

> I am computing the eigenvalues of a matrix 's'

> In[165]:= Eigenvalues[s, -6] // Chop
> Out[165]= {0.382326, -0.382326, 0.350062, -0.350062, 0, 0}
>
> In[166]:= Eigenvalues[SparseArray[s], -6] // Chop
> Out[166]= {0.38245, -0.352447, 0.351011, 1.26736*10^-7, 0, 0}
>

> Why are the two results different?


You should read up on the topic of numerical linear algebra.
The Documentation Center has a nice tutorial on the subject,
as implemented in Mathematica, which you can find by typing
"tutorial/LinearAlgebraInMathematicaOverview". This page is
listed as deprecated in V7, but I can't find anything else with
as much detail on the subject that has replaced it.

I think the short answer to your question is that different
algorithms are used in the two cases. If Eigenvalues is given a
SparseArray and fewer than 1/5 of the eigenvalues are requested,
it will use the Arnoldi method. When given a full (dense) matrix,
one of the LAPACK routines is used. A few experiments show that
the results from the two typically differ slightly. The reason for
the relatively large difference you are getting is hard to say
without looking at the matrix, but it is certainly possible that
your matrix is ill-conditioned with respect to one technique
more than the other.
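
For instance (writing m for the matrix in question), something along
these lines puts the two code paths side by side:

dense = Eigenvalues[Normal[m], -6];        (* dense matrix: LAPACK routines *)
sparse = Eigenvalues[SparseArray[m], -6];  (* sparse, few eigenvalues: Arnoldi *)
Chop[Sort[dense] - Sort[sparse]]           (* how far apart the two answers are *)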

Mark McClure


gopher
Aug 2, 2009, 5:57:13 AM

On Aug 1, 2:58 am, Mark McClure <mcmcc...@unca.edu> wrote:

Thanks for your reply. I read those docs, but IMO they are somewhat
misleading. They state that the Arnoldi method may not converge, but
not that it may converge and still give spurious eigenvalues. Also,
Mathematica doesn't allow one to explicitly choose the LAPACK routine -
only "Automatic", which in this case picks Arnoldi for a sparse array
anyway. This seems like a bug.

The matrix I was using is here (it is only 156x156):
https://netfiles.uiuc.edu:443/aroy2/www/sparse%20array/bugmatrix.dat

Is there a way to tell that it is ill-conditioned? As you can see, I
am very new to numerical methods.

Thanks,
Abhishek

Roman
Aug 3, 2009, 5:47:39 AM

Abhishek,

The Arnoldi method is best suited to calculating the *largest*
eigenvalues, not the *smallest* ones that you want. But there is an
easy solution. First, here are the options for using the Arnoldi
method with sparse matrices in Mathematica:

http://groups.google.com/group/comp.soft-sys.math.mathematica/browse_thread/thread/2a41064c0dbe5d54/26f379d722f6e1c8?hl=en#26f379d722f6e1c8

In particular the "Shift and Invert Spectral Transformation Mode" is
interesting here. When you want to calculate the *smallest*
eigenvalues by magnitude, Mathematica uses the Arnoldi method to
calculate the *largest* eigenvalues of the *inverse* of your matrix.
But since your matrix has two zero eigenvalues, it is not invertible,
and something seems to break in this method. The remedy is to *shift*
the matrix by a small amount before inverting, something like

Eigenvalues[SparseArray[s] // N, 6,
  Method -> {Arnoldi, MaxIterations -> 10^5, Criteria -> Magnitude,
    Shift -> 0.01}] // Sort

What this does is calculate the largest 6 eigenvalues of the inverse
of the shifted matrix (s with 0.01 subtracted along the diagonal),
then back-transform the results to give you the corresponding
6 smallest eigenvalues of s.
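
A rough by-hand version of the same shift-and-invert idea (just to
show where the back-transform comes from; the built-in option does
this more carefully) would be:

sigma = 0.01;
inv = Inverse[Normal[s] - sigma IdentityMatrix[Length[s]]];
mu = Eigenvalues[inv, 6];   (* largest eigenvalues of the inverse *)
Sort[sigma + 1/mu]          (* eigenvalues of s closest to the shift *)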

The error Mathematica makes in your original calculation seems to be
that it takes a default value of zero for the shift: in fact,

Eigenvalues[SparseArray[s] // N, 6,
  Method -> {Arnoldi, MaxIterations -> 10^5, Criteria -> Magnitude,
    Shift -> 0}] // Sort

again gives the wrong answer.

Here's the documentation for ARPACK, the Arnoldi implementation that Mathematica uses:
http://www.caam.rice.edu/software/ARPACK/

Hope this works out!
Roman.

Mark McClure
Aug 3, 2009, 5:49:23 AM

On Sun, Aug 2, 2009 at 5:59 AM, gopher <gophe...@gmail.com> wrote:

> Thanks for your reply. I read those docs but IMO they are somewhat
> misleading. They state that the Arnoldi method may not converge but
> not that it may converge and still give spurious eigenvalues.


I agree; a warning should be issued. There are simple tests based on
condition numbers for this sort of thing.

> The matrix I was using is here (it is only 156x156):
> https://netfiles.uiuc.edu:443/aroy2/www/sparse%20array/bugmatrix.dat

Arnoldi iteration is a technique for finding the *largest* (in magnitude)
eigenvalues of a matrix. We can find other eigenvalues of the matrix A by
applying Arnoldi iteration to the matrix (A-sI)^(-1), where s is a number
close to the eigenvalue we want. This works since s+1/m is an eigenvalue
of A close to s whenever m is a large eigenvalue of (A-sI)^(-1). Not
surprisingly, there is a problem if s is too close to an exact eigenvalue,
for then A-sI is singular or nearly so. A simple test for this possibility
is the condition number of A-sI, which should not be too large. In your
case, you are looking for eigenvalues close to zero of a matrix that is
singular to start with, so its condition number is huge.

dat = N[Import[
"https://netfiles.uiuc.edu/aroy2/www/sparse%20array/bugmatrix.dat"]];
sp = SparseArray[dat];
LinearAlgebra`MatrixConditionNumber[sp]
7.22615*10^17

We can fix the situation by using the Shift sub-option of the Arnoldi
method. This option specifies the number s, and it should be set close to
zero, but certainly not equal to zero. We can do this as follows.

Eigenvalues[sp, 6,
Method -> {"Arnoldi", "Shift" -> 0.01}]
{0.382326, -0.382326, 0.350062, -0.350062, 4.74967*10^-15,
9.95731*10^-16}

Eigenvalues[dat, -6]
{0.382326, -0.382326, -0.350062, 0.350062, 2.29823*10^-15,
7.57673*10^-16}

Hope that helps,
Mark


Kevin J. McCann
Aug 4, 2009, 4:30:18 AM

You might also try Singular Value Decomposition (SVD):

SingularValueList[dat]

This gives the list of non-zero singular values (for a symmetric matrix,
the absolute values of the non-zero eigenvalues). As you will see, there
are 154 of these for your matrix, which indicates that two of the
eigenvalues are zero to machine accuracy.
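
A compact way to count the numerically zero directions (assuming dat is
the imported matrix from Mark's message) is:

Length[dat] - Length[SingularValueList[dat]]  (* 156 - 154 = 2 zero singular values *)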

Kevin

gopher
Aug 7, 2009, 5:30:12 AM

On Aug 3, 4:49 am, Mark McClure <mcmcc...@unca.edu> wrote:

Thanks very much, Mark and Roman. That was quite educational.

I am going to submit a bug report about this, since it seems
Mathematica ought to give at least a warning when the shift is too
close to an eigenvalue, given that the user cannot explicitly choose
LAPACK over the Arnoldi method.

Abhishek

vadim.o...@gmail.com
Aug 28, 2009, 12:44:46 AM

I have been doing full diagonalization of sparse symmetric real
matrices (Hamiltonians) of different sizes and just noticed that
Eigenvalues (and Eigensystem) break once the matrices are 256x256
(maybe smaller, but that's where I first saw it).

I first noticed the problem when checking the quality of the solutions
by comparing each eigenvalue with the expectation value of the matrix
in the corresponding eigenvector.

Even more dramatically, simply rerunning Eigenvalues twice on the same
matrix produces somewhat different results!

If I choose to get only a few eigenvalues (thus forcing it to use the
Arnoldi method), the problem goes away.

I was recently told that the version of LAPACK used by Mathematica was
recompiled by people at Wolfram for added functionality -- did they
break it too? My trusty C++ code has no problems with these matrices...

Thoughts? Suggestions?
Thanks!
PS: I can send the matrix in a file to anybody willing to look into this.

Sseziwa Mukasa
Aug 29, 2009, 6:32:00 AM

What platform are you using? I've had problems with the numeric
linear algebra routines on OS X; the same code seems to work on Windows.

Regards,

Sseziwa

Vadim Oganesyan
Aug 29, 2009, 6:32:12 AM

Windows XP Pro x64

Interestingly, a 32-bit laptop executes this code flawlessly.

John Fultz
Aug 31, 2009, 6:35:05 AM

I have not a whit to contribute to the issue of whether or how this is
broken (too far off from my own field of expertise). But I thought you
might like to know that it's easy enough to run the 32-bit version from
your 64-bit install.

You can use the Evaluation->Kernel Configuration Options... dialog to
tailor the existing kernel or create a new one to be launched as...

SystemFiles\Kernel\Binaries\Windows\MathKernel

Sincerely,

John Fultz
jfu...@wolfram.com
User Interface Group
Wolfram Research, Inc.
