Speeding up and reducing memory footprint of matrix evaluations


Peter Brady

Apr 25, 2014, 1:35:08 PM
to sy...@googlegroups.com
So I start with two sparse matrices, L and R, each with data on just a few bands (i.e., 3 to 5).

My goal is to compute the largest and smallest eigenvalues of the matrix A given by:

A = -c*(L^-1*R)+d*(L^-1*R)**2     where c and d are constants

In my code this is written as:

    L = SparseMatrix(...)
    R = SparseMatrix(...)

    B = L.inv()*R
    
    A = np.array(-c*B+d*B**2).astype('double')

I can then use scipy/ARPACK to get the values I want. If I convert L, R, or B to numpy arrays before computing A, I get crappy eigenvalues, so this has to be done symbolically. My problem is that while computing B is manageable for the matrices I'm interested in (from 20x20 to 160x160), computing A takes about 5 minutes and eats up 15-30% of my memory, so I need to run this in serial. In contrast, if I convert B to a numpy array, it takes < 1 s to compute A (although it is the wrong A, so it's essentially worthless).

Is there some way to speed this up and/or reduce the memory footprint? Ideally, I would like to run hundreds (maybe thousands) of different cases. I'm fine with installing the necessary libraries on my machine (Linux).

Thanks,
Peter.

Jason Moore

Apr 25, 2014, 1:53:45 PM
to sy...@googlegroups.com
Do you use scipy.sparse? It has sparse solvers that should be able to compute L.inv()*R efficiently, and it supports addition, multiplication, and squaring. Why do you get crappy eigenvalues when doing this numerically?

Secondly, shouldn't you use http://docs.sympy.org/dev/modules/matrices/sparse.html#sympy.matrices.sparse.SparseMatrix.solve for the symbolic B = L.inv()*R?
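A minimal sketch of that suggestion (my illustration, not from the thread; the 2x2 matrices are made-up placeholders, and L is chosen symmetric so the default LDL method applies):

from sympy import SparseMatrix

L = SparseMatrix(2, 2, [2, 1, 1, 3])   # placeholder symmetric matrix
R = SparseMatrix(2, 2, [1, 0, 2, 1])   # placeholder right-hand side

# solve() computes B from L*B = R with a sparse solve, never forming L.inv()
B = L.solve(R)
print(B)   # equals L.inv()*R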



Ondřej Čertík

Apr 25, 2014, 1:54:22 PM
to sympy
Hi Peter,

On Fri, Apr 25, 2014 at 11:35 AM, Peter Brady <peter...@gmail.com> wrote:
> So I start with two sparse matrices, L and R, each with data on just a few
> bands (i.e., 3 to 5)
>
> My goal is to compute the largest and smallest eigenvalues of the matrix A
> given by:
>
> A = -c*(L^-1*R)+d*(L^-1*R)**2 where c and d are constants
>
> In my code this is written as:
>
> L = SparseMatrix(...)
> R = SparseMatrix(...)
>
> B = L.inv()*R
>
> A = np.array(-c*B+d*B**2).astype('double')
>
> I can then use scipy/ARPACK to get the values I want. If I convert L, R or B
> to numpy arrays before computing A, I get crappy eigenvalues, so this has to
> be done symbolically. My problem is that while computing B is manageable
> for the matrices I'm interested in (from 20x20 to 160x160), computing A takes
> about 5 minutes and eats up 15-30% of my memory, so I need to run this in
> serial. In contrast, if I convert B to a numpy array, it takes < 1 s to
> compute A (although it is the wrong A, so it's essentially worthless).

Can you post code that does the above? You can use a gist (https://gist.github.com/), an IPython notebook, or whatever you prefer.

Essentially, doing L.inv() means you can't use LAPACK, because the condition number will be bad. But I wonder if there is a way to rewrite the problem without doing L.inv(), possibly using some generalized eigenvalue problem or something, so that you can still use LAPACK.
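One concrete version of that idea (a hedged sketch of mine, not from the original thread): the eigenvalues of S = L^-1*R are exactly the generalized eigenvalues of the pencil (R, L), which LAPACK's QZ algorithm computes without ever forming L^-1, and since A = -c*S + d*S**2 is a polynomial in S, its eigenvalues follow by spectral mapping:

import numpy as np
from scipy.linalg import eig

n = 5
L = np.random.rand(n, n)   # stand-ins for the real banded matrices
R = np.random.rand(n, n)
c, d = 1.5, 2.5

# generalized eigenvalues of R x = lam * L x, computed via QZ (no L^-1)
lam = eig(R, L, right=False)

# spectral mapping: if S v = lam v, then A v = (-c*lam + d*lam**2) v
eig_A = -c*lam + d*lam**2
print(np.sort_complex(eig_A))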

It's always good to do solid numerics; it's faster, you can use Fortran, and so on. But a separate issue is how to do this symbolically, and SymPy should have the best algorithms. I need to play with your code to get a better opinion on how to speed this up.



> Is there some way to speed this up and/or reduce the memory footprint?
> Ideally, I would like to run hundreds (maybe thousands) of different cases.
> I'm fine with installing the necessary libraries on my machine (linux).

Perfect. This might be another great application for CSymPy. We are implementing matrix support this summer as part of GSoC.

Ondrej

Ondřej Čertík

Apr 25, 2014, 4:30:24 PM
to sympy
There is!

The idea is that for ARPACK, you only need to know A*x, not A itself. So here is how to calculate A*x without explicitly calculating L^-1:

from numpy import dot
from numpy.linalg import inv, solve
from numpy.random import random

n = 5
R = random((n, n))
L = random((n, n))
c, d = 1.5, 2.5
S = dot(inv(L), R)
A = -c*S + d*dot(S, S)
x = random(n)
print("A*x calculation using L^-1:")
print(dot(A, x))
print("A*x calculation without using L^-1:")
Sx = solve(L, dot(R, x))
S2x = solve(L, dot(R, Sx))
print(-c*Sx + d*S2x)



This prints:

A*x calculation using L^-1:
[-0.21153976 2.50748822 2.03177497 4.24144355 -4.15743541]
A*x calculation without using L^-1:
[-0.21153976 2.50748822 2.03177497 4.24144355 -4.15743541]

You can see that both approaches agree exactly. So you can continue using numerics for this, which is always better, if you can.

Ondrej

Peter Brady

Apr 25, 2014, 5:17:57 PM
to sy...@googlegroups.com
I've created a gist with the text of my L and R matrices, as well as a simple function to read them in and turn them into SparseMatrices, at https://gist.github.com/pbrady/11303375

I can switch from B = L.inv()*R to B = solve(L,R), but most of the time is not spent computing B but rather A from B (i.e., in computing A = -c*B + d*B**2).

@Ondrej, I don't think I know what 'x' is.

Ondřej Čertík

Apr 25, 2014, 5:42:52 PM
to sympy
On Fri, Apr 25, 2014 at 3:17 PM, Peter Brady <peter...@gmail.com> wrote:
> I've created a gist with the text for my L and R matrices as well as a
> simple function to read them in and turn them into SparseMatrices at
> https://gist.github.com/pbrady/11303375
>
> I can switch from B = L.inv()*R to B = solve(L,R), but most of the time is
> not spent computing B but rather A from B (i.e., in computing A = -c*B + d*B**2)
>
> @Ondrej, I don't think I know what 'x' is.

It's given to you by ARPACK. I've updated the script for you:

from numpy import dot
from numpy.linalg import inv, solve
from numpy.random import random
from scipy.sparse.linalg import LinearOperator, eigs

n = 5
R = random((n, n))
L = random((n, n))

def matvec(x):
    Sx = solve(L, dot(R, x))
    S2x = solve(L, dot(R, Sx))
    return -c*Sx + d*S2x

c, d = 1.5, 2.5
S = dot(inv(L), R)
A = -c*S + d*dot(S, S)
x = random(n)
print("A*x calculation using L^-1:")
print(dot(A, x))
print("A*x calculation without using L^-1:")
print(matvec(x))

print("Largest real part n-2 eigenvalues of A using L^-1:")
print(eigs(A, k=n-2, which="LR")[0])

print("Largest real part n-2 eigenvalues of A without using L^-1:")
A = LinearOperator((n, n), matvec=matvec)
print(eigs(A, k=n-2, which="LR")[0])



This prints:

A*x calculation using L^-1:
[-33.82564316 -8.22498638 7.44165407 36.72209849 -1.70563463]
A*x calculation without using L^-1:
[-33.82564316 -8.22498638 7.44165407 36.72209849 -1.70563463]
Largest real part n-2 eigenvalues of A using L^-1:
[ 473.88681348+0.j 1.82296261+0.j 1.06790730+0.j]
Largest real part n-2 eigenvalues of A without using L^-1:
[ 473.88681348+0.j 1.82296261+0.j 1.06790730+0.j]

As you can see, the last line is computed completely without actual knowledge of A, using only the "solve" mechanism. So this should fix the problem for you.

Can you please try this on your matrices and report back?

Ondrej

Jason Moore

Apr 25, 2014, 5:52:22 PM
to sy...@googlegroups.com
Does this work for you?

from __future__ import division
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

def get_matrix(filename):

    d = np.zeros(238)
    row = np.zeros(238)
    col = np.zeros(238)
    with open(filename,'r') as f:
        for k, line in enumerate(f):
            i,j,v = line.strip().split()
            d[k] = eval(v)
            row[k] = int(i)
            col[k] = int(j)
    return csr_matrix((d, (row, col)), shape=(80, 80))

R,L = get_matrix('R_80'),get_matrix('L_80')

B = spsolve(L, R)
c = 2.0
d = 3.0
A = -c * B + d * B ** 2

Ondřej Čertík

Apr 25, 2014, 6:11:03 PM
to sympy
On Fri, Apr 25, 2014 at 3:52 PM, Jason Moore <moore...@gmail.com> wrote:
> Does this work for you?
>
> from __future__ import division
> import numpy as np
> from scipy.sparse import csr_matrix
> from scipy.sparse.linalg import spsolve
>
> def get_matrix(filename):
>
>     d = np.zeros(238)
>     row = np.zeros(238)
>     col = np.zeros(238)
>     with open(filename,'r') as f:
>         for k, line in enumerate(f):
>             i,j,v = line.strip().split()
>             d[k] = eval(v)
>             row[k] = int(i)
>             col[k] = int(j)
>     return csr_matrix((d, (row, col)), shape=(80, 80))
>
> R,L = get_matrix('R_80'),get_matrix('L_80')
>
> B = spsolve(L, R)
> c = 2.0
> d = 3.0
> A = -c * B + d * B ** 2

I use Jason's get_matrix() but my own sparse solve for the eigenvalues:

https://gist.github.com/certik/11304917

Peter, are those eigenvalues correct? You can see that my matrix-vector product is correct, as I test it on a random vector, but the eigenvalues look off.

Ondrej

Peter Brady

Apr 25, 2014, 6:29:18 PM
to sy...@googlegroups.com
Hi guys,

I appreciate you taking the time to test some things out. Ondrej, I had never looked into the 'LinearOperator' function before. I tried your idea and ended up with an ARPACK error:

ValueError: Error in inverting M: function gmres did not converge (info = 800).

Here's what I did:

import numpy as np
import sympy as sp
from sympy import symbols, SparseMatrix
import scipy.sparse as scp
import scipy.sparse.linalg as linalg

L = SparseMatrix(..)
R = SparseMatrix(..)
c, d = 0.9, 0.01

Ls = scp.csc_matrix(np.array(L).astype('double'))
Rs = scp.csc_matrix(np.array(R).astype('double'))

def matvec(x):
    Sx = linalg.spsolve(Ls, Rs*x)
    S2x = linalg.spsolve(Ls, Rs*Sx)
    return -c*Sx + d*S2x

A = linalg.LinearOperator((npoints, npoints), matvec=matvec)


The eigenvalues should look like the attached plot. The data in the gist corresponds to r=1.0 (the best-conditioned case). Jason's approach works for the better-conditioned L and R, but once r < 0.8, computing A from a numerically evaluated B leads to spurious eigenvalues. (Numerics rather than symbolics was my first approach.) I can update my gist with more L and R matrices if anyone is interested.
[Attachment: eigen.png]

Ondřej Čertík

Apr 25, 2014, 6:43:56 PM
to sympy
On Fri, Apr 25, 2014 at 4:29 PM, Peter Brady <peter...@gmail.com> wrote:
> Hi guys,
>
> I appreciate you taking the time to test some things out. Ondrej, I had
> never looked into the 'LinearOperator' function before. I tried your idea
> and ended up with an arpack error:
>
> ValueError: Error in inverting M: function gmres did not converge (info =
> 800).

Can you post the whole script and the whole error? Did you try my script on your data from the gist, and if so, do you get the same error? If you do, then something is different about your scipy installation. If it works, then maybe it doesn't work for some other configuration that you tried.

Peter Brady

Apr 25, 2014, 7:30:19 PM
to sy...@googlegroups.com
Sadly, I forgot to upload my data before I left the office. However, a new thought occurred to me. I've been trying to determine the eigenvalues of this system to determine the stability of a finite difference scheme (unstable if any Re(lambda) > 0). This scheme will (of course) be coded up in double precision, so rather than trying to find "pure" eigenvalues (undirtied by finite-precision arithmetic), I should probably focus my attention on determining the eigenvalues of the actual system of equations as they will be implemented. As such, I think the LinearOperator approach would be best to focus on. I think this may be part of the reason that my eigenvalues say the system should be stable while the numerical implementation is unstable. I'll spend some time trying to get this to work and let you know how it goes.

Thanks again for your help.  

Ondřej Čertík

Apr 25, 2014, 7:52:24 PM
to sympy
On Fri, Apr 25, 2014 at 5:30 PM, Peter Brady <peter...@gmail.com> wrote:
> Sadly, I forgot to upload my data before I left the office. However, a new
> thought occurred to me. I've been trying to determine the eigenvalues of
> this system to determine the stability of a finite difference scheme
> (unstable if any Re(lambda) > 0). This scheme will (of course) be coded up
> in double precision, so rather than trying to find "pure" eigenvalues
> (undirtied by finite-precision mathematics), I should probably focus my
> attention on determining the eigenvalues of the actual system of equations
> as they will be implemented. As such, I think the LinearOperator approach
> would be best to focus on. I think this may be part of the reason that my
> eigenvalues say the system should be stable but the numerical implementation
> is unstable. I'll spend some time trying to get this to work and let you know
> how it goes.

Right. In any case, there is a way to avoid L^-1 by using solve, even for your equation. So that's progress.

That being said, for checking the numerical solver, having an exact solution would be invaluable, and we can even use SymPy to do fast matrix-vector multiplies and ARPACK to calculate the eigenvalues. So ARPACK returns the vector, and we use the exact matrix from SymPy to do the multiply. Or even better, we should calculate the eigenvalues to arbitrary precision. There are some algorithms for it in sympy, e.g.:

https://github.com/sympy/sympy/blob/master/sympy/mpmath/matrices/eigen.py

We should try them on your matrix. That way ill-conditioning shouldn't matter, as we'll just use high enough accuracy. So that might be the best approach, so that you know which eigenvalues are correct and how far off your numerical solver is.
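For example, a minimal sketch (assuming an mpmath recent enough to provide eig(), i.e. the module linked above; the 2x2 matrix is a placeholder for the exact A):

from mpmath import mp, matrix

mp.dps = 50                    # work with 50 significant digits
A = matrix([[2, 1], [1, 3]])   # placeholder; the real A would be filled in
                               # entry by entry from the exact SymPy matrix

E, ER = mp.eig(A)              # eigenvalues and right eigenvectors
print(E)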

Ondrej

Peter Brady

Apr 28, 2014, 7:08:03 PM
to sy...@googlegroups.com
A few updates.

If I switch to:

L = SparseMatrix(...)
R = SparseMatrix(...)*(npoints-1)

Linv_R = L.solve(R)
Linv_R2 = L.solve(d*R*Linv_R)

return np.array(-c*Linv_R + Linv_R2).astype('double')

instead of writing:

B = L.inv()*R
return np.array(-c*B + d*B**2).astype('double')

the computation runs about twice as fast and doesn't blow up my memory.

I'm trying to do this numerically but running into issues with using a linear operator for A:

Here's what I have (this is also in the updated gist: https://gist.github.com/pbrady/11303375):


from sympy import SparseMatrix,Rational
import scipy.sparse as scp
import scipy.sparse.linalg as splinalg
import scipy.linalg as linalg
from functools import partial
import numpy as np

def get_matrix(filename):

    d = {}
    with open(filename,'r') as f:
            for line in f:
                i,j,v = line.strip().split()
                d[(int(i),int(j))] = Rational(v)
    def fill_matrix(i,j):
        try:
            return d[(i,j)]
        except KeyError:
            return None
    nx = int(filename.split('_')[-1])
    return SparseMatrix(nx,nx,fill_matrix)

npoints = 40
L = get_matrix('L_{}'.format(npoints))
R = get_matrix('R_{}'.format(npoints))*(npoints-1)

banded = scp.dia_matrix(L).astype('double').data
banded[[0,2],:] = banded[[2,0],:]
Rs = scp.csc_matrix(np.array(R).astype('double'))
c,d = 1.0,1e-2


bsolve = partial(linalg.solve_banded,(1,1),banded)
def matvec(x):
    phi_x = bsolve(Rs.dot(x))
    bphi_xx = bsolve(d*Rs.dot(phi_x))
    return -c*phi_x+bphi_xx

A = splinalg.LinearOperator((npoints,npoints),matvec=matvec,dtype='double')

Linv_R = bsolve(Rs.todense())
Linv_R2 = bsolve(Rs.dot(Linv_R))
A_full = -c*Linv_R+d*Linv_R2

rhs = np.random.randn(npoints)

print(np.allclose(A_full.dot(rhs),A*rhs))

The output of this is True, indicating that the linear operator and the full matrix A are equivalent operations. However:

>>>scp.linalg.eigs(A_full,k=6,which='LM',sigma=0,return_eigenvectors=False)
array([  1.26665321e-09 -9.92038587e-05j,
         2.25603714e+00 +0.00000000e+00j,
        -3.95890789e+00 +0.00000000e+00j,
        -7.07685154e+00 +0.00000000e+00j,
        -2.10871433e+00 +1.07638101e+01j,  -2.10871433e+00 -1.07638101e+01j])

>>>scp.linalg.eigs(A,k=6,which='LM',sigma=0,return_eigenvectors=False)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-59-779c8d1f2586> in <module>()
----> 1 scp.linalg.eigs(A,k=6,which='LM',sigma=0,return_eigenvectors=False)

/home/ptb/miniconda3/lib/python3.4/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in eigs(A, k, M, sigma, which, v0, ncv, maxiter, tol, return_eigenvectors, Minv, OPinv, OPpart)
   1272 
   1273     while not params.converged:
-> 1274         params.iterate()
   1275 
   1276     return params.extract(return_eigenvectors)

/home/ptb/miniconda3/lib/python3.4/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in iterate(self)
    733             else:
    734                 Bxslice = slice(self.ipntr[2] - 1, self.ipntr[2] - 1 + self.n)
--> 735                 self.workd[yslice] = self.OPa(self.workd[Bxslice])
    736         elif self.ido == 2:
    737             self.workd[yslice] = self.B(self.workd[xslice])

/home/ptb/miniconda3/lib/python3.4/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in <lambda>(x)
    665             else:  # real type
    666                 if mode == 3:
--> 667                     self.OPa = lambda x: np.real(Minv_matvec(x))
    668                 else:
    669                     self.OPa = lambda x: np.imag(Minv_matvec(x))

/home/ptb/miniconda3/lib/python3.4/site-packages/scipy/sparse/linalg/interface.py in matvec(self, x)
    132             raise ValueError('dimension mismatch')
    133 
--> 134         y = self._matvec(x)
    135 
    136         if isinstance(x, np.matrix):

/home/ptb/miniconda3/lib/python3.4/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in _matvec(self, x)
    950             raise ValueError("Error in inverting M: function "
    951                              "%s did not converge (info = %i)."
--> 952                              % (self.ifunc.__name__, info))
    953         return b
    954 

ValueError: Error in inverting M: function gmres did not converge (info = 400).


Does anyone have any expertise on using the eigs function with a LinearOperator? The problem seems to be when sigma != 0 (which is important in the present application).

Thanks again for your help.


Jason Moore

Apr 28, 2014, 7:35:12 PM
to sy...@googlegroups.com
Why are you using a combination of SymPy and NumPy/SciPy? If you want to do the problem numerically, shouldn't you just use the NumPy/SciPy functionality?

It isn't clear to me what you gain by creating SymPy SparseMatrices full of SymPy Rationals and then passing them into numpy.array(). Is it because you need the precision that SymPy offers for numerical values?

Ondřej Čertík

Apr 28, 2014, 7:37:56 PM
to sympy
On Mon, Apr 28, 2014 at 5:35 PM, Jason Moore <moore...@gmail.com> wrote:
> Why are you using a combination of SymPy and NumPy/SciPy? If you want to do
> the problem numerically, shouldn't you just use the NumPy/SciPy
> functionality?
>
> It isn't clear to me what you gain by creating SymPy SparseMatrices which
> are full of SymPy Rationals and then inputing them into numpy.array(). Is it
> because you need the precision that SymPy offers for numerical values?

I think it's just ease of manipulation, or historical reasons. That shouldn't matter.

For the numerical problem, here is the documentation for "sigma" in eigs():


sigma : real or complex, optional
    Find eigenvalues near sigma using shift-invert mode. This requires
    an operator to compute the solution of the linear system
    ``[A - sigma * M] * x = b``, where M is the identity matrix if
    unspecified. This is computed internally via a (sparse) LU
    decomposition for explicit matrices A & M, or via an iterative
    solver if either A or M is a general linear operator.
    Alternatively, the user can supply the matrix or operator OPinv,
    which gives ``x = OPinv * b = [A - sigma * M]^-1 * b``.
    For a real matrix A, shift-invert can either be done in imaginary
    mode or real mode, specified by the parameter OPpart ('r' or 'i').
    Note that when sigma is specified, the keyword 'which' (below)
    refers to the shifted eigenvalues ``w'[i]`` where:

        If A is real and OPpart == 'r' (default),
        ``w'[i] = 1/2 * [1/(w[i]-sigma) + 1/(w[i]-conj(sigma))]``.

        If A is real and OPpart == 'i',
        ``w'[i] = 1/2i * [1/(w[i]-sigma) - 1/(w[i]-conj(sigma))]``.

        If A is complex, ``w'[i] = 1/(w[i]-sigma)``.



The default is sigma=None. Since it worked for me with sigma=None, does this work for you with sigma=None?

What exactly does "LM" (largest in magnitude) do with sigma=0? Isn't sigma=None what you need? Btw, didn't you say you need the largest real component (LR)?

Peter Brady

Apr 28, 2014, 10:38:13 PM
to sy...@googlegroups.com
A little background on my application:

I'm trying to assess the impact of highly irregular gridpoints on an otherwise uniform mesh for finite-difference stencils. I begin by specifying a simple string (i.e. "alpha*f!(-r)+f!(0)+beta*f!(h)+a0*f(-r)+b0*f(0)+c0*f(h)+d0*f(2h)") and, using sympy and Taylor series expansions, I can generate the appropriate finite difference scheme to a specified order of accuracy. The string is an example of a compact finite difference scheme, which is essentially an implicit spatial discretization that lends itself to small computational molecules and high order. In the examples, "L" is the coefficient matrix containing the derivatives of f (i.e. the "f!" terms in my string) and R is the coefficient matrix for the regular function evaluations of "f". These sparse matrices are dynamically generated with the appropriate exact values using sympy.
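The generator itself isn't shown in the thread, but the core Taylor-matching idea can be sketched like this (my own illustration; the 3-point first-derivative stencil and symbol names are made up):

from sympy import symbols, Matrix, simplify

h, r = symbols('h r', positive=True)
offsets = [-r*h, 0, h]   # irregular 3-point stencil around x

# Match Taylor coefficients: sum_k w_k * s_k**p = p! * delta_{p,1}
# for p = 0, 1, 2 gives weights with sum_k w_k f(x + s_k) = f'(x) + O(h**2).
V = Matrix(3, 3, lambda p, k: offsets[k]**p)
w = V.LUsolve(Matrix([0, 1, 0]))
print([simplify(wi) for wi in w])   # reduces to [-1/(2*h), 0, 1/(2*h)] at r = 1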

Solving my differential equation (not computing eigenvalues) is done strictly numerically after I get my sympy matrices. The problem with highly irregular stencils is that they can be very unstable, and when investigating stability, eigenvalues are key. Since the full discretization matrix would be ill-conditioned, I figured using SymPy would still allow me to compute the eigenvalues I need. However, it would be better if I could compute the eigenvalues of the actual spatial discretization as it is implemented numerically, which is a series of tridiagonal solves. Using the LinearOperator should allow me to do this.

Hope that helps put my quest in the appropriate context.  

Now on to eigenvalues..

Specifying 'LM' will give me the eigenvalues with the largest magnitude. These are interesting as they typically dictate the maximum allowable timestep.

Specifying 'LR' will (in theory) give me the eigenvalues with the largest real component. If a scheme is stable, these will typically be small and negative. The Arnoldi process used by eigs is not very good at finding small eigenvalues. To get around this, one can specify sigma=0, which then finds the largest eigenvalues of the shifted inverse (which correspond to my desired small eigenvalues).

In general, I have found that 'LM' with sigma=0 is more robust than 'LR' with sigma=0. In the actual code I make use of both and combine them to get a better idea of the whole eigenvalue spectrum.

In sum, I need this to work with sigma != None. I suspect the scipy mailing list might be the place to ask about this.
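One way this might work (a hedged sketch of my own, not tested on the matrices from the gist): eigs() accepts an explicit OPinv operator for the shift-invert solves, so supplying a direct factorization of (A - sigma*I) sidesteps the iterative gmres solver that fails to converge. The dense stand-in matrix here is an assumption; for the real problem one would assemble A once (e.g. from the banded solves) just to factor it:

import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, eigs

n = 40
A_dense = np.random.randn(n, n)   # stand-in for the discretization matrix
A_op = LinearOperator((n, n), matvec=lambda x: A_dense.dot(x),
                      dtype=np.float64)

sigma = 0.0
# Factor (A - sigma*I) once and hand its solve to eigs() as OPinv, so
# ARPACK never falls back to gmres for the shift-invert step.
lu = lu_factor(A_dense - sigma * np.eye(n))
OPinv = LinearOperator((n, n), matvec=lambda b: lu_solve(lu, b),
                       dtype=np.float64)

vals = eigs(A_op, k=6, which='LM', sigma=sigma, OPinv=OPinv,
            return_eigenvectors=False)
print(vals)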




Ondřej Čertík

Apr 29, 2014, 8:46:25 AM
to sympy

I see. You might also try SLEPc; some of its solvers might work better.

Sent from my mobile phone.
