I would like to calculate the lowest few eigenvalues and, if possible,
also the eigenvectors, of a big matrix.
This matrix is real symmetric or complex Hermitian (depending on my
problem).
So far I have used Lanczos, but the algorithm I have can only calculate
the lowest eigenvalue and eigenvector. But I need a few more
eigenvalues, and perhaps also some vectors.
Is there a Lanczos variant with which I can calculate a few more
eigenvalues and eigenvectors? Or any other algorithm?
Or, if I have the lowest eigenvalue and eigenvector, is it possible to
calculate the next one?
Hoping for help.
Ender Aysal
www.wupper-magic.de
ay...@wupper-magic.de
I can't tell you the specific method to use, but you might
like to know that MATLAB has a function that does precisely
what you want.
The function is called EIGS. If you have MATLAB available,
type "help eigs" to see how to use it.
AC
Ender Aysal wrote:
> I would like to calculate the lowest few eigenvalues and, if possible,
> also the eigenvectors, of a big matrix. [...]
--
Hans D. Mittelmann http://plato.la.asu.edu/
Arizona State University Phone: (480) 965-6595
Department of Mathematics Fax: (480) 965-0461
Tempe, AZ 85287-1804 email: mitte...@asu.edu
I don't have access to MATLAB. Only NAG.
Wretch wrote:
> Ender Aysal wrote:
> > I would like to calculate the lowest few eigenvalues and, if possible,
> > also the eigenvectors, of a big matrix. [...]
>
> Hi,
> have you looked at ARPACK?
> http://www.caam.rice.edu/software/ARPACK/
I think this is a package for Fortran?
I wanted to add that I am programming in C.
The problem is also that the matrix is too big to be stored in memory.
But I have a procedure that acts on a vector the way this matrix would.
So Lanczos was very good, because I only needed to store two vectors in
memory.
So, does anyone know how to make Lanczos calculate a few more
eigenvectors?
Or is there another algorithm I could program?
Thanks
Ender Aysal
Ender Aysal wrote in message
<381287B1...@wupper-magic.de>...

Regardless of which algorithm you use to compute the smallest
eigenvalue, there is a simple technique to "deflate" the matrix
after the smallest eigenvalue has been found.

The deflation is performed as follows. Let x1 be a normalized
eigenvector corresponding to that eigenvalue. Apply a Householder
reflection to your nxn matrix A that maps x1 onto the first unit
vector e1: with v = x1 - e1, form U = I - 2 v v^T/(v^T v), so that
U x1 = e1. Then compute A1 = U A U^T and drop the first row and
first column of A1. The remaining (n-1)x(n-1) matrix has as its
smallest eigenvalue the second smallest eigenvalue of A. The
procedure can be applied recursively to find the next smallest
eigenvalue, and so on.
Note that since U is a rank-one modification of the identity, the
matrix multiplications in U A U^T can be reduced to matrix-vector
multiplications and rank-one or rank-two updates.
--
N. Shamsundar
University of Houston
shams...@uh.edu
Since the matrix apparently isn't stored in memory, explicit
deflation does not seem applicable.
But it is possible to remove a given eigenvector from the space
that the Lanczos method considers by simply making sure that the
start vector doesn't contain a component of this eigenvector.
If v_0,...,v_{q-1} are q pairwise orthogonal, normalized eigenvectors
computed in preceding Lanczos runs, we can use

    create_start_vector(x);
    for (i = 0; i < q; i++) {
        s = scalar_product(x, v[i]);      /* component of x along v[i] */
        linear_combination(x, -s, v[i]);  /* x = x - s * v[i]          */
    }
    lanczos(x);

to find the smallest eigenvectors of the remaining subspace.
This kind of "deflation" works (at least in theory) for all methods
based on Krylov spaces as long as the matrix is symmetric.
If the matrix comes from a finite-element or finite-difference
discretization of an elliptic problem, I would suggest using the
eigenvalue multigrid method from Hackbusch, "Multigrid Methods and
Applications". It has optimal complexity for low-frequency eigenvectors
and needs only matrix-vector products.
Best regards,
cu, Steffen 8-)
--
Steffen Boerm
EMail: s...@numerik.uni-kiel.de
WWW: http://www.numerik.uni-kiel.de/~sbo/
try
http://ftp.caam.rice.edu/pub/software/ARPACK
This has everything you require.
Hope this helps,
Peter
Hans Mittelmann
--------------------
In article <38105C5F...@wupper-magic.de>,
Ender Aysal <ay...@wupper-magic.de> wrote:
> I would like to calculate the lowest few eigenvalues and, if possible,
> also the eigenvectors, of a big matrix. [...]
--
Niels L Ellegaard
http://ftp.caam.rice.edu/pub/software/ARPACK
has the solution. You may need to shift the spectrum so that these
eigenvalues become the largest in absolute value, if you cannot use
inverse iteration with a small shift to make them just positive. But
for the dimensions you mention this should cause no trouble.
Hope this helps,
Peter
the links are
http://www.caam.rice.edu/software/ARPACK/
ftp://ftp.caam.rice.edu/pub/software/ARPACK
Hans Mittelmann
Thank you very much. I still have some questions, though. I would also
like to calculate the determinant and the number of negative
eigenvalues.
I have been told that the number of negative eigenvalues of a symmetric
matrix can be found using the Bunch-Parlett factorization. Is there a
similar way of calculating the number of negative eigenvalues of a
dense nonsymmetric matrix?
I have been using LU factorization to calculate the determinant, but it
doesn't seem very numerically stable.
Is this right, or am I making a mistake?
Summing everything up: is there a way of calculating the following
without using a QR factorization?
* Most negative eigenvalue.
* Number of negative eigenvalues.
* Determinant.
|> * Most negative eigenvalue.
use the codes from ARPACK
|> * Number of negative eigenvalues.
in the unsymmetric case you may try the following: shift by -||A||.
This gives all eigenvalues a negative real part. Now compute the p
absolutely largest ("dominant") eigenvalues (that is, the p leftmost)
for some p, and increase p until an eigenvalue is found that is larger
than -||A||. (If the separation of the eigenvalues is not too bad and
the number of negative eigenvalues is not too large, this should go
through.)
|> * Determinant.
simple with an LR (i.e., LU) decomposition
hope this helps
peter
> this is not right. Indeed, using the LR decomposition with pivoting (!)
> (or, of course, the more expensive QR decomposition) is the only useful
> way to compute the determinant. However, to compute it explicitly you
> must be very careful to avoid overflow or underflow, i.e., in
> multiplying the pivots (the diagonal elements of R) you must scale the
> result by multiplying or dividing by 2 (with proper bookkeeping).
> Also, if you did interchanges, keep track of the sign!
I think my problem comes from the fact that my matrix is not completely
dense. Actually, it has around 70% zeros in it. Therefore my algorithm
ends up with a division by zero. There must be a smart way of solving
this problem. Can anybody point me to a nice pivoting algorithm that
allows LU decomposition of a somewhat sparse matrix?
Thanks in advance
>I think my problem comes from the fact that my matrix is not completely
>dense. Actually, it has around 70% zeros in it. Therefore my algorithm
>ends up with a division by zero. There must be a smart way of solving
>this problem. Can anybody point me to a nice pivoting algorithm that
>allows LU decomposition of a somewhat sparse matrix?
Pivoting is almost always a must, even in a dense matrix (unless you KNOW
that you'll have no problems, e.g., because your matrix is positive
definite).
A sparse LU decomposition (SuperLU) can be found in ScaLAPACK:
http://www.netlib.org/scalapack/prototype/.
But 70% zeros is not really sparse, so you might just want to use a
normal LU solver (e.g., from LAPACK).
Note: It is generally a mistake to write your own routines if there are
libraries available. Writing good, stable numerical software is hard.
Regards,
Ulrich
--
Ulrich Elsner, Fak. fuer Mathematik, TU Chemnitz, D-09107 Chemnitz, Germany