
Lowest few eigenvalues (and eigenvectors) of a large Hermitian/symmetric matrix


Ender Aysal

Oct 22, 1999
Hi

I would like to calculate the lowest few eigenvalues, and if possible
also the eigenvectors, of a big matrix. This matrix is real symmetric
or complex Hermitian (depending on my problem).

So far I have used Lanczos, but the algorithm I have can only
calculate the lowest eigenvalue and eigenvector. I need a few more
eigenvalues, and perhaps also some eigenvectors.

Is there a Lanczos variant with which I can calculate a few more
eigenvalues and eigenvectors? Or any other algorithm? Or, given the
lowest eigenvalue and eigenvector, is it possible to calculate the
next one?

Hoping for help.

Ender Aysal
www.wupper-magic.de
ay...@wupper-magic.de


Wretch

Oct 22, 1999


I can't tell you the specific method to use, but you might like to
know that MATLAB has a function that does precisely what you want.

The function is called EIGS. If you have MATLAB available, type
"help eigs" to see how to use it.

AC

Hans D. Mittelmann

Oct 22, 1999
Hi,
have you looked at ARPACK?
http://www.caam.rice.edu/software/ARPACK/

Ender Aysal wrote:

> I would like to calculate the lowest few eigenvalues, and if possible,
> also the eigenvectors, of a big matrix.
....... snip .......

--
Hans D. Mittelmann          http://plato.la.asu.edu/
Arizona State University    Phone: (480) 965-6595
Department of Mathematics   Fax:   (480) 965-0461
Tempe, AZ 85287-1804        email: mitte...@asu.edu


Ender Aysal

Oct 23, 1999
Sorry

I don't have access to MATLAB. Only NAG.

Wretch wrote:

> The function is called EIGS. If you have Matlab available,
> type "help eigs" to see how to implement it.
....... snip .......

Ender Aysal

Oct 24, 1999
"Hans D. Mittelmann" wrote:

> Hi,
> have you looked at ARPACK?
> http://www.caam.rice.edu/software/ARPACK/
....... snip .......

I think this package is for Fortran?


Ender Aysal

Oct 24, 1999
Hi

I wanted to add that I am programming in C.
A further problem is that the matrix is too big to be stored in
memory, but I have a procedure that acts on a vector exactly as this
matrix would. Lanczos was therefore very convenient, because I only
needed to keep two vectors in memory.
So, does anyone know how to make Lanczos compute a few more
eigenvectors? Or is there another algorithm I could program?
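[For what it's worth, a matrix-free setup like this is usually
expressed in C through a function pointer for the operator. The names
below are illustrative, not from any particular library:]

```c
#include <assert.h>
#include <stddef.h>

/* The user supplies y = A*x as a procedure; A itself is never stored. */
typedef void (*matvec_fn)(const double *x, double *y, size_t n);

/* Stand-in operator: the diagonal matrix diag(1, 2, ..., n). */
static void diag_op(const double *x, double *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = (double)(i + 1) * x[i];
}

/* A Lanczos routine then only ever calls the operator, e.g.: */
static void apply_operator(matvec_fn A, const double *x, double *y, size_t n)
{
    A(x, y, n);
}
```

ARPACK's reverse-communication interface follows the same principle:
the caller performs the matrix-vector product and hands the result
back, so the matrix never has to exist in memory.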

Thanks

Ender Aysal


N. Shamsundar

Oct 24, 1999
Regardless of which algorithm you use to compute the smallest
eigenvalue, there is a simple technique to "deflate" the matrix
after the smallest eigenvalue has been found.

The deflation is performed as follows. Let x1 be a normalized
eigenvector corresponding to that eigenvalue, and let e1 be the first
unit vector. Apply the Householder reflection built from w = x1 - e1,
i.e., U = I - 2 w w^T / (w^T w), which maps x1 onto e1. Form
A1 = U A U^T and drop the first row and first column of A1. The
remaining (n-1) x (n-1) matrix has as its smallest eigenvalue the
second smallest eigenvalue of A. The procedure can be applied
recursively to find the next smallest eigenvalue, and so on.

Note that since U is a rank-one modification of the identity, the
matrix multiplications in U A U^T reduce to matrix-vector products
and rank-one or rank-two updates.
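[A small self-contained C sketch of this deflation, for illustration
only: fixed small dimensions, no attention to efficiency. The
reflector is built from w = x1 - e1 so that U maps x1 onto e1, which
is what makes the first row and column of U A U^T decouple:]

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Deflate a symmetric n x n matrix A (row-major, n <= 16) given a unit
   eigenvector x1 of its smallest eigenvalue (x1 must not equal e1).
   With w = x1 - e1, the reflector U = I - 2 w w^T / (w^T w) maps x1 to
   e1, so B = U A U has lambda1 in position (0,0), zeros in the rest of
   the first row and column, and the remaining spectrum in the trailing
   block.  A is overwritten with B. */
static void deflate(double *A, const double *x1, size_t n)
{
    double w[16], U[256], T[256], B[256], wn2 = 0.0;
    size_t i, j, k;
    for (i = 0; i < n; i++) {
        w[i] = x1[i] - (i == 0 ? 1.0 : 0.0);
        wn2 += w[i] * w[i];
    }
    for (i = 0; i < n; i++)              /* U = I - 2 w w^T / (w^T w) */
        for (j = 0; j < n; j++)
            U[i*n + j] = (i == j ? 1.0 : 0.0) - 2.0 * w[i] * w[j] / wn2;
    for (i = 0; i < n; i++)              /* T = U * A */
        for (j = 0; j < n; j++) {
            T[i*n + j] = 0.0;
            for (k = 0; k < n; k++)
                T[i*n + j] += U[i*n + k] * A[k*n + j];
        }
    for (i = 0; i < n; i++)              /* B = T * U  (U is symmetric) */
        for (j = 0; j < n; j++) {
            B[i*n + j] = 0.0;
            for (k = 0; k < n; k++)
                B[i*n + j] += T[i*n + k] * U[k*n + j];
        }
    for (i = 0; i < n * n; i++)
        A[i] = B[i];
}
```

On A = [[2, 1], [1, 2]] with eigenvalues 1 and 3, deflating with
x1 = (1, -1)/sqrt(2) leaves the second eigenvalue, 3, in the trailing
1 x 1 block.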

--
N. Shamsundar
University of Houston
shams...@uh.edu

Ender Aysal wrote in message
<381287B1...@wupper-magic.de>...

Steffen Boerm

Oct 25, 1999
"N. Shamsundar" <shams...@uh.edu> writes:

> Regardless of which algorithm you use to compute the smallest
> eigenvalue, there is a simple technique to "deflate" the matrix
> after the smallest eigenvalue has been found.

Since the matrix apparently isn't stored in memory, an explicit
deflation does not seem applicable.

But it is possible to remove a certain eigenvector from the space
that the Lanczos method considers by simply making sure that the
start vector doesn't contain a component of this eigenvector.

If v_0, ..., v_{q-1} are q pairwise orthogonal, normalized eigenvectors
computed in preceding Lanczos runs, we can use

  create_start_vector(x);
  for (i = 0; i < q; i++) {
      s = scalar_product(x, v[i]);      /* s = <x, v_i> */
      linear_combination(x, -s, v[i]);  /* x = x - s * v_i */
  }
  lanczos(x);

to find the smallest eigenpairs of the remaining space.

This kind of "deflation" works (at least in theory) for all methods
based on Krylov spaces as long as the matrix is symmetric.
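[Written out without the helper functions (a minimal sketch;
`project_out` is an illustrative name), the projection step above is:]

```c
#include <assert.h>
#include <stddef.h>

/* Remove from x every component along the orthonormal vectors
   v[0], ..., v[q-1] (stored row-major, n entries each), so that the
   Lanczos start vector no longer "sees" the eigenvectors already
   found. */
static void project_out(double *x, const double *v, size_t q, size_t n)
{
    for (size_t i = 0; i < q; i++) {
        const double *vi = v + i * n;
        double s = 0.0;
        for (size_t j = 0; j < n; j++)   /* s = <x, v_i> */
            s += x[j] * vi[j];
        for (size_t j = 0; j < n; j++)   /* x = x - s * v_i */
            x[j] -= s * vi[j];
    }
}
```

In exact arithmetic a single pass suffices; in floating point it is
common to repeat the projection during the iteration, since roundoff
gradually reintroduces small components of the removed eigenvectors.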

If the matrix comes from a finite-element or finite-difference
discretisation of an elliptic problem, I would suggest using the
eigenvalue multigrid method from Hackbusch, "Multi-Grid Methods and
Applications". It has optimal complexity for low-frequency
eigenvectors and only needs matrix-vector products.

Best regards,
cu, Steffen 8-)

--
Steffen Boerm
EMail: s...@numerik.uni-kiel.de
WWW: http://www.numerik.uni-kiel.de/~sbo/


Peter Spellucci

Oct 25, 1999, to Ender Aysal
In article <38105C5F...@wupper-magic.de>,
Ender Aysal <ay...@wupper-magic.de> writes:

|>
|> Hi
|>
|> I would like to caltulate the lowest few eigenvalues, and if possible,
|> also the eigenvectors, of a big matrix.
|> This matrix is real symmetric or complex hermitian (depending on my
....... snip .......
...current method computes the smallest only .......

try
http://ftp.caam.rice.edu/pub/software/ARPACK

This has everything you require.

hope this helps
peter

mitte...@asu.edu

Oct 25, 1999
Hi,
try the deflation suggested in other postings BUT note that there is a
C++ interface available to the ARPACK code!

Hans Mittelmann
--------------------
In article <38105C5F...@wupper-magic.de>,
Ender Aysal <ay...@wupper-magic.de> wrote:

> I would like to calculate the lowest few eigenvalues, and if possible,
> also the eigenvectors, of a big matrix.
....... snip .......


Ender Aysal

Oct 25, 1999
Thanks, that sounds good to me!

Ender


Niels L Ellegaard

Nov 8, 1999
I have a number of nonsymmetric dense 1500 x 1500 matrices, and I
would like to determine the lowest two eigenvalues (I mean the two
most negative eigenvalues). Is there a smart way of doing this, or
should I just go ahead and use QR factorisation?


--
Niels L Ellegaard

'http://dirac.ruc.dk/~gnalle/'

Peter Spellucci

Nov 9, 1999
In article <3826E00D...@ruc.dk>,

Niels L Ellegaard <gna...@ruc.dk> writes:
|> I have a number of nonsymmetric dense 1500 x 1500 matrices, and I would
|> like to determine the lowest two eigenvalues (I mean the two most
|> negative eigenvalues). Is there a smart way of doing this, or should I
|> just go ahead and use QR factorisation?
Effort for QR: about 30*n**3 ops, which would take a while.

http://ftp.caam.rice.edu/pub/software/ARPACK

has the solution. It might be that you need to shift the spectrum so
that these eigenvalues become the largest in absolute value, if you
cannot use inverse iteration with a small shift to make them just
positive. But for the dimensions you mention this should cause no
trouble.
hope this helps
peter

Hans Mittelmann

Nov 9, 1999
Peter Spellucci wrote:

the links are

http://www.caam.rice.edu/software/ARPACK/
ftp://ftp.caam.rice.edu/pub/software/ARPACK

Hans Mittelmann

--
-------------------------------------------------------------------------
Hans D. Mittelmann          WWW:   http://plato.la.asu.edu/
Arizona State University    Phone: (480) 965-6595
Department of Mathematics   Fax:   (480) 965-0461
Tempe, AZ 85287-1804        email: mitte...@asu.edu
-------------------------------------------------------------------------


Niels L Ellegaard

Nov 10, 1999
Hans Mittelmann wrote:
>
> Peter Spellucci wrote:
>
> > ....... snip .......
>
> the links are
>
> http://www.caam.rice.edu/software/ARPACK/
> ftp://ftp.caam.rice.edu/pub/software/ARPACK
>
> Hans Mittelmann

Thank you very much. I still have some questions, though. I would
also like to calculate the determinant and the number of negative
eigenvalues.

I have been told that the number of negative eigenvalues of a
symmetric matrix can be found using the Bunch-Parlett factorisation.
Is there a similar way of calculating the number of negative
eigenvalues of a dense nonsymmetric matrix?

I have been using LU factorisation to calculate the determinant, but
it doesn't seem very numerically stable.
Is this right, or am I making a mistake?

Summing up: is there a way of calculating the following without using
the QR factorisation?
* Most negative eigenvalue.
* Number of negative eigenvalues.
* Determinant.

Peter Spellucci

Nov 11, 1999
In article <38298715...@ruc.dk>,

Niels L Ellegaard <gna...@ruc.dk> writes:

|> I have been told that the number of negative eigenvalues of a symmetric
|> matrix can be found using Bunch Parlett factorisation. Is there a
|> similar way of calculating the number of negative eigenvalues in a dense
|> nonsymmetric matrix?
Unfortunately the answer is _no_. The use of the Bunch-Parlett
decomposition depends on Sylvester's law of inertia, which is valid
for Hermitian matrices only; there is no counterpart in the
unsymmetric case.

|>
|> I have been using LU factorisation to calculate the determinant, but it
|> doesnt seem very numerically stable.
|> Is this right or am I making a mistake?
This is not right. Indeed, using the LR decomposition with pivoting (!)
(or, of course, the more expensive QR decomposition) is the only
useful way to compute the determinant. However, in order to compute it
explicitly, you must be very careful to avoid overflow and underflow:
in multiplying the pivots (the diagonal elements of R), you must scale
the running result by multiplying or dividing by powers of 2 (with
proper housekeeping). Also, if you did interchanges, keep track of the
sign!
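[This scaling bookkeeping can be done conveniently with frexp/ldexp
from the C library. A sketch, assuming the pivots (the diagonal of R)
and the interchange count are already available from the
factorisation:]

```c
#include <assert.h>
#include <math.h>

/* det = (-1)^swaps * prod(pivot[i]), computed as mantissa * 2^exponent
   so that intermediate products can neither overflow nor underflow.
   Over/underflow can then occur only in the final ldexp, i.e. only if
   the determinant itself is out of range. */
static double det_from_pivots(const double *pivot, int n, int swaps)
{
    double m = (swaps % 2 == 0) ? 1.0 : -1.0;
    long e = 0;
    int ei;
    for (int i = 0; i < n; i++) {
        m *= frexp(pivot[i], &ei);   /* pivot[i] = mantissa * 2^ei */
        e += ei;
        m = frexp(m, &ei);           /* renormalize the running product */
        e += ei;
    }
    return ldexp(m, (int)e);
}
```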

|>
|> Summing everything up: Is there a way of calculating the following
|> without using QR-factorisation
You mean the QR algorithm, which uses the QR factorisation plus
iteration.

|> * Most negative eigenvalue.
codes from ARPACK

|> * Number of negative eigenvalues.

In the unsymmetric case you may try the following: shift by -||A||.
This gives all eigenvalues a negative real part. Now compute the p
absolutely largest ("dominant") eigenvalues of the shifted matrix
(that is, the p eigenvalues farthest to the left) for some p, and
increase p until an eigenvalue is found that is larger than -||A||.
(If the separation of the eigenvalues is not too bad and the number
of negative eigenvalues is not too large, this might go through.)

|> * Determinant.
Simple with the LR decomposition.

hope this helps
peter

Niels L Ellegaard

Nov 16, 1999
Peter Spellucci wrote:
>
> In article <38298715...@ruc.dk>,
> Niels L Ellegaard <gna...@ruc.dk> writes:
> |> I have been using LU factorisation to calculate the determinant, but it
> |> doesn't seem very numerically stable.
> ....... snip .......

I think my problem comes from the fact that my matrix is not
completely dense. Actually it has around 70% zeroes in it, so my
algorithm ends up with a division by zero. There must be a smart way
of solving this problem. Can anybody point me to a nice pivoting
algorithm that allows LU decomposition of a somewhat sparse matrix?

Thanks in advance

Ulrich Elsner

Nov 16, 1999
According to Niels L Ellegaard <gna...@ruc.dk>:

>I think my problem comes from the fact that my matrix is not
>completely dense. Actually it has around 70% zeroes in it, so my
>algorithm ends up with a division by zero. Can anybody point me to a
>nice pivoting algorithm that allows LU decomposition of a somewhat
>sparse matrix?

Pivoting is almost always a must, even for a dense matrix (unless you
KNOW that you'll have no problems, e.g. because your matrix is
positive definite).
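[For illustration, here is a bare-bones LU factorisation with partial
pivoting in C. A sketch only; a library routine such as LAPACK's
dgetrf is the right tool in practice. On a matrix with a zero in the
(1,1) position, the pivot search is exactly what avoids the division
by zero:]

```c
#include <assert.h>
#include <math.h>

/* LU factorisation with partial pivoting, in place (row-major n x n).
   Returns the number of row interchanges, or -1 if singular.  Without
   the pivot search, a zero on the diagonal -- common in sparse
   matrices -- would cause an immediate division by zero. */
static int lu_pivot(double *A, int *perm, int n)
{
    int swaps = 0;
    for (int i = 0; i < n; i++)
        perm[i] = i;
    for (int k = 0; k < n; k++) {
        int p = k;                        /* largest pivot in column k */
        for (int i = k + 1; i < n; i++)
            if (fabs(A[i*n + k]) > fabs(A[p*n + k]))
                p = i;
        if (A[p*n + k] == 0.0)
            return -1;                    /* singular */
        if (p != k) {                     /* swap rows p and k */
            for (int j = 0; j < n; j++) {
                double t = A[k*n + j];
                A[k*n + j] = A[p*n + j];
                A[p*n + j] = t;
            }
            int t = perm[k]; perm[k] = perm[p]; perm[p] = t;
            swaps++;
        }
        for (int i = k + 1; i < n; i++) { /* eliminate below the pivot */
            A[i*n + k] /= A[k*n + k];
            for (int j = k + 1; j < n; j++)
                A[i*n + j] -= A[i*n + k] * A[k*n + j];
        }
    }
    return swaps;
}
```

The swap count also supplies the sign needed when the pivots are later
multiplied together for the determinant.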

A sparse LU decomposition (SuperLU) can be found in ScaLAPACK:
http://www.netlib.org/scalapack/prototype/

But 70% zeros is not really sparse, so you might just want to use a
normal LU solver (e.g., from LAPACK).

Note: It is generally a mistake to write your own routines if there are
libraries available. Writing good, stable numerical software is hard.


Regards,

Ulrich

--
Ulrich Elsner, Fak. fuer Mathematik, TU Chemnitz, D-09107 Chemnitz, Germany

