smallest eigenvalue of sparse matrix


Andrei Berceanu

Apr 9, 2015, 1:34:40 PM4/9/15
to julia...@googlegroups.com
How can I get the minimum eigenvalue of a sparse matrix?
I found eigmin(A), but it only seems to work on dense matrices:
ERROR: `eigmin` has no method matching eigmin(::SparseMatrixCSC{Complex{Float64},Int64})

Alex

Apr 9, 2015, 1:53:32 PM4/9/15
to julia...@googlegroups.com
Try

eigs(A, nev=1, which=:SM)

You might want to look at the docs for the other keyword arguments, for example if you need the eigenvector(s) as well.
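
For example, a quick sketch that also grabs the eigenvector (eigs returns a tuple with the Ritz values first and the Ritz vectors second):

d, v = eigs(A, nev=1, which=:SM)   # d: vector of eigenvalues, v: columns are eigenvectors
d[1], v[:,1]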

Best,

Alex.

Andrei Berceanu

Apr 9, 2015, 2:10:00 PM4/9/15
to julia...@googlegroups.com
I now get

CHOLMOD warning: not positive definite
ERROR: `-` has no method matching -(::(Array{Complex{Float64},1},Array{Complex{Float64},2},Int64,Int64,Int64,Array{Complex{Float64},1}), ::Float64)

I should probably mention that I generate my sparse matrix using the sparse(I, J, V) constructor, with
    I = Array(Int64,N)
    J = Array(Int64,N)
    V = Array(Complex{Float64},N)
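
Just to show the pattern (made-up toy entries, not my actual matrix):

I = [1, 1, 2, 2]
J = [1, 2, 1, 2]
V = Complex{Float64}[2.0, 1.0+1.0im, 1.0-1.0im, 3.0]
A = sparse(I, J, V)   # 2x2 Hermitian example: A[2,1] == conj(A[1,2])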

Viral Shah

Apr 9, 2015, 2:15:19 PM4/9/15
to julia...@googlegroups.com
Could you post self-contained code so that the errors can be reproduced?

-viral

Andrei Berceanu

Apr 9, 2015, 2:59:13 PM4/9/15
to julia...@googlegroups.com
It's hard to post the code because the function that generates the matrix A is quite involved.
However, I tracked down the error. In fact, the problem was that whereas before I had

val = eigmin(full(A))

I was now trying to do

val = eigs(A, nev=1, which=:SM)

This of course fails because eigs does not return a single value. So now I have come up with the "nice" construct

real(eigs(A, nev=1, which=:SR, ritzvec=false)[1][1])

which seems to give the same results as before.
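
Spelled out a bit more readably (same call, just unpacking the returned tuple):

# with ritzvec=false the tuple omits the Ritz vectors, but the
# converged eigenvalues still come first
d = eigs(A, nev=1, which=:SR, ritzvec=false)[1]
val = real(d[1])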

However, I still see two issues here:
1. My matrix A is Hermitian, and I don't know whether this info is used by eigs, or indeed whether it would make any difference (performance-wise).
2. Using eigs like this instead of eigmin on the full matrix seems to be a few percent slower (I can provide timings if necessary). Is that to be expected?

Viral Shah

Apr 10, 2015, 3:25:28 AM4/10/15
to julia...@googlegroups.com
If your sparse matrix is small enough, just make it dense and use eigmin.

If not, then eigs is the right way. The algorithm used by eigs is the Arnoldi method, which is completely different from eig, which calls LAPACK and factorizes the matrix.
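
In sketch form, the decision looks like this (2000 is an arbitrary cutoff, pick one for your problem):

function smallest_eig(A, cutoff=2000)
    if size(A, 1) <= cutoff
        return eigmin(full(A))   # dense path: LAPACK
    else                         # iterative path: ARPACK Arnoldi
        return real(eigs(A, nev=1, which=:SR, ritzvec=false)[1][1])
    end
end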

-viral

Andrei Berceanu

Apr 10, 2015, 5:04:12 AM4/10/15
to julia...@googlegroups.com
According to Wikipedia:

Arnoldi finds the eigenvalues of general (possibly non-Hermitian) matrices; an analogous method for Hermitian matrices is the Lanczos iteration.

Considering that my sparse matrix is Hermitian, is there a Julia implementation of the Lanczos method for sparse matrices?

Viral Shah

Apr 10, 2015, 5:15:19 AM4/10/15
to julia...@googlegroups.com
I am pretty sure ARPACK is doing Lanczos under the hood.

In fact, eigs does treat symmetric matrices differently. See:
https://github.com/JuliaLang/julia/blob/master/base/linalg/arnoldi.jl#L17

-viral

Andrei Berceanu

Apr 10, 2015, 5:43:58 AM4/10/15
to julia...@googlegroups.com
Yes, it treats real symmetric matrices differently:

    sym = issym(A) && !iscmplx

However, my interest is in complex Hermitian sparse matrices :)

Jiahao Chen

Apr 10, 2015, 12:03:36 PM4/10/15
to julia...@googlegroups.com
ARPACK does not contain special routines for Hermitian matrices. It only implements the implicitly restarted Arnoldi algorithm.

For Hermitian problems, Arnoldi and Lanczos are equivalent in exact arithmetic. You can find a presentation of both methods in any standard numerical linear algebra textbook, e.g. Trefethen and Bau.

In inexact arithmetic, the situation is a little more complicated. Lanczos is known to be numerically unstable and therefore is not used without extensive modification. Implicitly restarted Arnoldi, which is what ARPACK does, can be thought of as a method that tries to make Lanczos more stable.

As for the relative speeds of eigs and eigmin, there are so many factors involved in determining the rate of convergence of the two algorithms that I don't think any meaningful conclusion can be drawn.

Steven G. Johnson

Apr 10, 2015, 4:34:59 PM4/10/15
to julia...@googlegroups.com
For Hermitian problems, you should also try the LOBPCG algorithm, which is a nice alternative to Lanczos. Unfortunately, there is no Julia implementation of LOBPCG yet that I know of, but you can probably translate one from Matlab.

Here is a toy implementation of LOBPCG for the smallest eigenvalue that I put together for class (warning: if you run it for too many iterations it goes a little wonky because the X matrix becomes singular, though it is straightforward to modify it to desingularize... also, a serious implementation should update X'*X etcetera in-place):

# run n iterations of conjugate-gradient Rayleigh-quotient iteration on A, starting from x0,
# returning the list of n eigenvalue estimates
function cgrq(A, x0, n)
    λ = Array(typeof(real(zero(eltype(A)))), n)  # eigenvalue estimates (real scalars)
    xprev = x0   # note: on the first pass xprev == x, so X below starts rank-deficient
    x = x0
    Ax = A*x
    Axprev = Ax
    for i = 1:n
        Ad = A*Ax                     # A applied to the new direction Ax
        X = [xprev x Ax]              # 3-column trial basis: previous iterate, current, Ax
        AX = [Axprev Ax Ad]           # A*X, assembled from saved products
        ν, Z = eig(X' * AX, X' * X)   # Rayleigh-Ritz: 3x3 generalized eigenproblem
        iν = sortperm(ν, by=real)[1]  # index of the smallest Ritz value
        xprev = x
        Axprev = Ax
        x = X * Z[:,iν]               # new iterate = Ritz vector of the smallest Ritz value
        Ax = AX * Z[:,iν]             # update A*x without another mat-vec
        λ[i] = real(ν[iν])
    end
    return λ
end
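
For example, trying it on a random Hermitian matrix (a made-up test case, not from the class notes):

srand(1234)                        # arbitrary seed, just for reproducibility
B = randn(100,100) + im*randn(100,100)
A = (B + B')/2                     # random 100x100 Hermitian matrix
x0 = randn(100) + im*randn(100)
λ = cgrq(A, x0, 25)
λ[end] - eigmin(A)                 # should be near zero after enough iterations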

Jutho

Apr 13, 2015, 5:51:09 PM4/13/15
to julia...@googlegroups.com
When the matrix is real and symmetric, ARPACK does resort to Lanczos, or at least the implicitly restarted version thereof. Straight from the homepage: "When the matrix A is symmetric it reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method (IRLM)."

I always wondered why the ARPACK creators didn't do the same for complex Hermitian problems. Even in that case, the Lanczos/Arnoldi orthogonalization will automatically yield a reduced matrix which is real and tridiagonal (instead of Hessenberg in the generic case), so I would think there is some benefit.
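
To illustrate, here is a toy Arnoldi loop I just sketched (not ARPACK's actual code): for Hermitian A the projected matrix H comes out numerically real and tridiagonal, i.e. Lanczos in disguise.

# toy Arnoldi iteration; no breakdown or restart handling
function toy_arnoldi(A, b, m)
    n = length(b)
    Q = zeros(Complex{Float64}, n, m+1)   # orthonormal Krylov basis
    H = zeros(Complex{Float64}, m+1, m)   # projected (Hessenberg) matrix
    Q[:,1] = b / norm(b)
    for j = 1:m
        w = A * Q[:,j]
        for i = 1:j                       # Gram-Schmidt against earlier basis vectors
            H[i,j] = dot(Q[:,i], w)
            w -= H[i,j] * Q[:,i]
        end
        H[j+1,j] = norm(w)
        Q[:,j+1] = w / H[j+1,j]
    end
    return H[1:m,1:m]
end

B = randn(50,50) + im*randn(50,50)
A = (B + B')/2                            # random Hermitian test matrix
H = toy_arnoldi(A, randn(50) + im*randn(50), 10)
# maximum(abs(imag(H))) and the entries away from the tridiagonal band
# are at roundoff level, so H is effectively real tridiagonal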


On a different note: you typically cannot easily find the smallest eigenvalues of a matrix with a Krylov method that only multiplies by A, if smallest is supposed to mean closest to zero in absolute value (smallest magnitude). You can only find extremal eigenvalues, which lie in the outer regions of the spectrum. For the smallest-magnitude eigenvalues, in the generic case, one has to build the Krylov subspace using the inverse of A (shift-and-invert).

Since your matrix is Hermitian, you know the spectrum will be real, but it might still have many positive and negative eigenvalues, so the same comments apply if you are looking for the eigenvalues with smallest magnitude. If by smallest you mean most negative, then there is no problem (you should use :SR in eigs). If you know more, e.g. that your matrix is not only Hermitian but also positive definite, then all eigenvalues are positive and smallest magnitude also means smallest real.
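
Concretely, a sketch of the two cases (the sigma keyword, if your eigs version has it, switches on shift-and-invert and factorizes A internally):

# most negative eigenvalue: extremal, so plain Arnoldi/Lanczos works
val = real(eigs(A, nev=1, which=:SR, ritzvec=false)[1][1])

# smallest-magnitude eigenvalue: shift-and-invert around zero
val = real(eigs(A, nev=1, sigma=0.0, ritzvec=false)[1][1])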