Getting rid of tiny imaginary parts in eigenvalues


pedda

unread,
Dec 2, 2011, 2:17:29 AM12/2/11
to sympy
Hello everyone,

I have a problem with sympy that has been driving me nuts for the last
two nights. I am working on a simulation of a physical system that at
some point gives me this matrix:
[ 0, -1, 0, 0, 0, 0, -1, 0, 0]
[-1, 1.0*x, -1, 0, 0, 0, 0, -1, 0]
[ 0, -1, 1.0*x, 0, 0, 0, 0, 0, -1]
[ 0, 0, 0, 1.0*x, -1, 0, -1, 0, 0]
[ 0, 0, 0, -1, 1.0*x, -1, 0, -1, 0]
[ 0, 0, 0, 0, -1, 0, 0, 0, -1]
[-1, 0, 0, -1, 0, 0, 1.0*x, -1, 0]
[ 0, -1, 0, 0, -1, 0, -1, 0, -1]
[ 0, 0, -1, 0, 0, -1, 0, -1, 1.0*x]

My goal is to calculate the eigenvalues, but they always include an
imaginary part, which keeps me from using the Lambda function on them
because I always get errors about mpf. When I use evalf, though, I
get values with a residual imaginary part on the order of 1.0E-20*I.

I have also tried using as_real_imag() with no success. What is the
best way of plotting the eigenvalues? Also, is there a way to
rearrange this matrix into a block diagonal? I tried get_block_diag
but that didn't really work.

Thank you very much in advance!

Peter

Chris Smith

unread,
Dec 2, 2011, 11:03:26 PM12/2/11
to sy...@googlegroups.com

Try nsimplifying your input to the matrix, or nsimplify the matrix itself, so you are working only with Rationals, not Floats (like 1.0). Here, I compute the eigenvalues after nsimplifying the matrix:

>>> Matrix(m.rows, m.cols, [nsimplify(mi) for mi in m])
[ 0, -1,  0,  0,  0,  0, -1,  0,  0]
[-1,  x, -1,  0,  0,  0,  0, -1,  0]
[ 0, -1,  x,  0,  0,  0,  0,  0, -1]
[ 0,  0,  0,  x, -1,  0, -1,  0,  0]
[ 0,  0,  0, -1,  x, -1,  0, -1,  0]
[ 0,  0,  0,  0, -1,  0,  0,  0, -1]
[-1,  0,  0, -1,  0,  0,  x, -1,  0]
[ 0, -1,  0,  0, -1,  0, -1,  0, -1]
[ 0,  0, -1,  0,  0, -1,  0, -1,  x]
>>> Matrix(m.rows, m.cols, [nsimplify(mi) for mi in m]).eigenvals()
{2*x/3 + (-x**2/9 - 8/3)/((-1/2 + sqrt(3)*I/2)*(-8*x**3/27 + x*(x**2 - 8)/3 + 3*
x + sqrt((-x**2/9 - 8/3)**3 + (-16*x**3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/
3)) - (-1/2 + sqrt(3)*I/2)*(-8*x**3/27 + x*(x**2 - 8)/3 + 3*x + sqrt((-x**2/9 -
8/3)**3 + (-16*x**3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/3): 1, x/2 - sqrt(x*
*2 + 8)/2: 1, 0: 1, x + sqrt(2): 1, x: 1, 2*x/3 + (-x**2/9 - 8/3)/((-1/2 - sqrt(
3)*I/2)*(-8*x**3/27 + x*(x**2 - 8)/3 + 3*x + sqrt((-x**2/9 - 8/3)**3 + (-16*x**3
/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/3)) - (-1/2 - sqrt(3)*I/2)*(-8*x**3/27
+ x*(x**2 - 8)/3 + 3*x + sqrt((-x**2/9 - 8/3)**3 + (-16*x**3/27 + 2*x*(x**2 - 8)
/3 + 6*x)**2/4))**(1/3): 1, 2*x/3 + (-x**2/9 - 8/3)/(-8*x**3/27 + x*(x**2 - 8)/3
+ 3*x + sqrt((-x**2/9 - 8/3)**3 + (-16*x**3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))
**(1/3) - (-8*x**3/27 + x*(x**2 - 8)/3 + 3*x + sqrt((-x**2/9 - 8/3)**3 + (-16*x*
*3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/3): 1, x/2 + sqrt(x**2 + 8)/2: 1, x -
sqrt(2): 1}

If I use your original matrix and nsimplify the keys and values produced, it doesn't look as nice since that sqrt(2) isn't recognized.

>>> dict([[nsimplify(k),nsimplify(v)] for k,v in m.eigenvals().items()])
{2*x/3 + (-x**2/9 - 8/3)/((-1/2 + sqrt(3)*I/2)*(-8*x**3/27 + x*(x**2 - 8)/3 +
3*x + sqrt((-x**2/9 - 8/3)**3 + (-16*x**3/27 + 2*x*(x**2 - 8)/3 +
6*x)**2/4))**(1/3)) - (-1/2 + sqrt(3)*I/2)*(-8*x**3/27 + x*(x**2 - 8)/3 + 3*x +
sqrt((-x**2/9 - 8/3)**3 + (-16*x**3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/3):
1, x/2 - sqrt(x**2 + 8)/2: 1, x + 14142135623731/10000000000000: 1, 0: 1, x: 1,
x - 14142135623731/10000000000000: 1, 2*x/3 + (-x**2/9 - 8/3)/((-1/2 -
sqrt(3)*I/2)*(-8*x**3/27 + x*(x**2 - 8)/3 + 3*x + sqrt((-x**2/9 - 8/3)**3 +
(-16*x**3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/3)) - (-1/2 -
sqrt(3)*I/2)*(-8*x**3/27 + x*(x**2 - 8)/3 + 3*x + sqrt((-x**2/9 - 8/3)**3 +
(-16*x**3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/3): 1, 2*x/3 + (-x**2/9 -
8/3)/(-8*x**3/27 + x*(x**2 - 8)/3 + 3*x + sqrt((-x**2/9 - 8/3)**3 +
(-16*x**3/27 + 2*x*(x**2 - 8)/3 + 6*x)**2/4))**(1/3) - (-8*x**3/27 + x*(x**2 -
8)/3 + 3*x + sqrt((-x**2/9 - 8/3)**3 + (-16*x**3/27 + 2*x*(x**2 - 8)/3 +
6*x)**2/4))**(1/3): 1, x/2 + sqrt(x**2 + 8)/2: 1}
>>>
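For what it's worth, nsimplify can be made to recognize that constant if you pass sqrt(2) as a candidate — a small sketch, not from the original reply:

```python
from sympy import nsimplify, sqrt

# nsimplify alone converts 1.4142135623730951 into a large Rational;
# given sqrt(2) as a candidate constant it recovers the exact value
recovered = nsimplify(2**0.5, [sqrt(2)])
```

Here `recovered` comes back as sqrt(2) rather than a 14-digit Rational.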

pedda

unread,
Dec 6, 2011, 2:19:25 AM12/6/11
to sympy
Thank you very much for your answer. nsimplifying has sped up the
process of finding the eigenvalues, but it is still not possible to
plot them as they are still complex, so I get an error:

TypeError: cannot create mpf from ...

followed by the function. I have tried various approaches, but just
adding re() in front of it doesn't work. I then tried to define a
function which evaluates first and then returns only the real part,
but I still get the same error.

def eigenv(y):
    # evs are the eigenvalues
    v = list(evs)[current_function].evalf(subs={'x': y}).as_real_imag()
    return v[0]

mp.plot(eigenv, [0,30])


How can I work around this?

Chris Smith

unread,
Dec 6, 2011, 2:26:04 AM12/6/11
to sy...@googlegroups.com
If you know they are real just substitute them away:

>>> 3+I*2
3 + 2*I
>>> _.subs(I,0)
3

Did you already try the chop option to evalf?

>>> 3+1e-29*I
3 + 1.0e-29*I
>>> _.evalf(chop=True)
3.00000000000000
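Applied to the plotting problem, the chop route looks roughly like this (ev is a stand-in for one of the eigenvalue expressions, not the thread's actual output):

```python
from sympy import Symbol, sqrt, I

x = Symbol('x')
# stand-in for an eigenvalue expression carrying a tiny imaginary residue
ev = x + sqrt(2) + 1e-25*I
# chop=True discards the negligible 1.0e-25*I during evaluation,
# leaving a real Float that a plotting routine can consume
val = ev.subs(x, 2.0).evalf(chop=True)
```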

pedda

unread,
Dec 8, 2011, 6:05:51 AM12/8/11
to sympy
The chop option worked well!

Now one last problem: One of the tasks was to calculate the
eigenvalues for an even bigger matrix (20 by 20) which sympy was not
able to do in 24 hours and counting, while Mathematica solves this
problem in about one minute (according to my tutor). Am I doing
something wrong?

Thanks, Peter

Chris Smith

unread,
Dec 8, 2011, 6:22:51 AM12/8/11
to sy...@googlegroups.com
Was it singular like the last one? Go ahead and post the matrix.

pedda

unread,
Dec 8, 2011, 9:11:32 AM12/8/11
to sympy
So, actually the matrix I was talking about is even bigger: it is
400x400. As you can see, this is the smaller one, which my computer
also cannot handle. I also tried using Maxima, which didn't work
either. Am I doing something wrong in the setup of the program? I have
x declared as a Symbol, the coefficients are converted to integers
using int(), and I have included the assumption that x is real and
positive...

[2*x, -1, -1, -1, -1, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0,
0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0]
[ -1, x, 0, 0, 0, -1, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0,
0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0]
[ -1, 0, x, 0, 0, -1, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0,
0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0,
0, 0, 0]
[ -1, 0, 0, x, 0, -1, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1,
0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0,
0, 0, 0]
[ -1, 0, 0, 0, x, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0,
-1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0,
0, 0, 0]
[ 0, -1, -1, -1, -1, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0,
0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0,
0, 0, 0]
[ -1, 0, 0, 0, 0, 0, x, -1, -1, -1, -1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0,
0, 0, 0]
[ 0, -1, 0, 0, 0, 0, -1, 2*x, 0, 0, 0, -1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0,
0, 0, 0]
[ 0, 0, -1, 0, 0, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1,
0, 0, 0]
[ 0, 0, 0, -1, 0, 0, -1, 0, 0, x, 0, -1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-1, 0, 0]
[ 0, 0, 0, 0, -1, 0, -1, 0, 0, 0, x, -1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, -1, 0]
[ 0, 0, 0, 0, 0, -1, 0, -1, -1, -1, -1, x, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, -1]
[ -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, x, -1, -1, -1,
-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0,
0, 0, 0]
[ 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0,
0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0,
0, 0, 0]
[ 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 2*x, 0,
0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1,
0, 0, 0]
[ 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, x,
0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-1, 0, 0]
[ 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0,
x, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, -1, 0]
[ 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, -1, -1, -1,
-1, x, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, -1]
[ -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, x, -1, -1, -1, -1, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0,
0, 0, 0]
[ 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, -1, x, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, -1, 0,
0, 0, 0]
[ 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, -1, 0, x, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, -1,
0, 0, 0]
[ 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, -1, 0, 0, 2*x, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-1, 0, 0]
[ 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, -1, 0]
[ 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, -1, -1, -1, -1, x, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, -1]
[ -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, x, -1, -1, -1, -1, 0, -1, 0, 0,
0, 0, 0]
[ 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, -1, x, 0, 0, 0, -1, 0, -1, 0,
0, 0, 0]
[ 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, -1, 0, x, 0, 0, -1, 0, 0, -1,
0, 0, 0]
[ 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0,
-1, 0, 0]
[ 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 2*x, -1, 0, 0, 0,
0, -1, 0]
[ 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -1, -1, -1, x, 0, 0, 0,
0, 0, -1]
[ 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0,
0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, -1, -1,
-1, -1, 0]
[ 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0,
0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1, x, 0,
0, 0, -1]
[ 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0,
0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1, 0, x,
0, 0, -1]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1,
0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, -1, 0, 0,
x, 0, -1]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0,
-1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, -1, 0, 0,
0, x, -1]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0,
0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, -1, -1,
-1, -1, 2*x]

pedda

unread,
Dec 10, 2011, 7:01:11 AM12/10/11
to sympy
I noticed there is an error in the matrix, which I thought might have
prevented sympy from calculating the right values, but Mathematica
calculates this and even bigger matrices instantly. What I find
interesting, though, is that Maxima has the same problems with the
matrix. I have attached a new matrix from a simpler system, but my
sympy is still not able to calculate this. Any ideas on the reason?
Might this have something to do with precision? I am using a 64-bit
system, if this is of any help ...


[ x, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1, -1, 0, 0, 1, 0, 0, 0, 0, -1, 0, 0]
[ 0, x, 0, 0, 0, 1, 0, 0, -1, 0, 0, 0, -1, -1, 0, -1, 0, 0, -1, 0, 0, 0, 0, 0]
[-1, 0, x, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 1, 0]
[ 0, 0, -1, x, 0, 0, 0, 0, 0, 0, 0, 1, 0, -1, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0]
[ 0, 0, 0, 0, x, -1, 0, 0, 0, 0, -1, 0, 1, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0]
[ 0, 1, 0, 0, -1, x, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1]
[ 0, 0, 0, 0, 0, 0, x, -1, 0, 0, -1, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0]
[-1, 0, 0, 0, 0, 0, -1, x, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, -1, 0]
[ 0, -1, 0, 0, 0, 0, 0, 0, x, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, -1]
[ 0, 0, 0, 0, 0, 0, 0, 0, -1, x, 0, -1, 0, -1, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0]
[ 0, 0, 0, 0, -1, 0, -1, 0, 0, 0, x, 0, 0, 0, 1, 0, 0, 0, 0, 0, -1, 0, -1, -1]
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, -1, 0, x, 0, 0, 0, 0, 0, -1, 0, -1, 0, 0, -1, -1]
[-1, -1, 0, 0, 1, 0, -1, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0]
[-1, -1, 0, -1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 1, 0, -1, 0, 0, 0, 0]
[ 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 1, 0, -1, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0]
[ 0, -1, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[ 1, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1, 0]
[ 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, -1, 0, 1, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0]
[ 0, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1]
[ 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, -1, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0]
[ 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, -1, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0]
[-1, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, -1, 0]
[ 0, 0, 1, 0, 0, 0, 0, -1, 0, 0, -1, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1, 0, 0]
[ 0, 0, 0, 0, 0, -1, 0, 0, -1, 0, -1, -1, 0, 0, 0, 1, 0,

Chris Smith

unread,
Dec 10, 2011, 10:44:00 AM12/10/11
to sy...@googlegroups.com
On Sat, Dec 10, 2011 at 5:46 PM, pedda <notfor...@hmamail.com> wrote:
> I noticed there is an error in the matrix, which I thought might have
> prevented sympy from calculating the right values, but Mathematica
> calculates this and even bigger matrices instantly. What I find
> interesting though is that Maxima has the same problems with the
> matrix. I have attached a new matrix from a simpler system but my
> sympy is still not able to calculate this. Any ideas on the reason?
> Might this have something to do with precision ? I am using a 64 bit
> system if this is of any help ...

The matrix has a distinctive symmetry. Perhaps Mathematica is using
that, whereas sympy is trying to work through it with brute force. I
don't work with matrices often enough to recognize this myself (other
than that it looks like a finite-element sort of matrix).

pedda

unread,
Dec 10, 2011, 11:05:12 AM12/10/11
to sympy
Sorry for posting again, but I just made some progress on solving this
problem: I used sagenb.com and sympy to find the eigenvalues for the
matrix above, and they were calculated in a very short time. Using my
local Sage, I get the same result in the same time as well. Only when
using straight Python does the calculation never finish.

Sage uses Python 2.6 while I have Python 2.7.2 (updated yesterday) for
all my pure Python scripts. Are there any known issues related to
Python 2.7?

Thanks, Peter

Chris Smith

unread,
Dec 10, 2011, 11:42:02 AM12/10/11
to sy...@googlegroups.com
On Sat, Dec 10, 2011 at 5:46 PM, pedda <notfor...@hmamail.com> wrote:
> I noticed there is an error in the matrix, which I thought might have
> prevented sympy from calculating the right values, but Mathematica
> calculates this and even bigger matrices instantly. What I find
> interesting though is that Maxima has the same problems with the
> matrix. I have attached a new matrix from a simpler system but my
> sympy is still not able to calculate this. Any ideas on the reason?
> Might this have something to do with precision ? I am using a 64 bit
> system if this is of any help ...

Try the latest version of sympy. The determinant of 0 is computed in a
matter of a few seconds. What is disappointing is that the recent
changes to try to detect singular matrices don't work on this matrix:
the rref form is a beautifully computed identity matrix, without the
zero (somewhere along the way) being detected with .is_zero. And
trying to detect it by defining iszerofunc=lambda x: expand_mul(x) == 0
is still running as I write. Perhaps if a symbolic matrix is being
inverted, the determinant should be computed first to test for
invertibility, since detecting a symbolic zero for a system this size
is (apparently) difficult. I can't even get the length of the string
version of element (0,0) of the "inverted" matrix to print after more
than a minute.

How are you generating these matrices?

Chris Smith

unread,
Dec 10, 2011, 11:55:36 AM12/10/11
to sy...@googlegroups.com
> Sage uses python 2.6 while I have python 2.7.2 (updated yesterday) for
> all my pure python scripts. Are there any know issues related to
> python 2.7?
>
I can't help you here. Maybe someone else will know. When I try in 2.6
to calculate the eigenvalues in sympy I get

>>> m.eigenvals()
...
File "sympy\core\assumptions.py", line 154, in getit
return self._assumptions[fact]
KeyError: 'bounded'
>>>

And in 2.7 I get

...
File "sympy\core\add.py", line 441, in _eval_is_positive
elif r.is_unbounded:
File "sympy\core\assumptions.py", line 156, in getit
return self._what_known_about(fact)
File "sympy\core\assumptions.py", line 373, in _what_known_about
a = getattr(self, 'is_' + pk)
File "sympy\core\assumptions.py", line 156, in getit
return self._what_known_about(fact)
File "sympy\core\assumptions.py", line 359, in _what_known_about
a = getattr(self, '_eval_is_' + k)()
File "sympy\core\add.py", line 406, in <lambda>
_eval_is_bounded = lambda self: self._eval_template_is_attr('is_bounded')
File "sympy\core\operations.py", line 205, in _eval_template_is_attr
a = getattr(t, is_attr)
File "sympy\core\assumptions.py", line 156, in getit
return self._what_known_about(fact)
File "sympy\core\assumptions.py", line 359, in _what_known_about
a = getattr(self, '_eval_is_' + k)()
RuntimeError: maximum recursion depth exceeded

pedda

unread,
Dec 10, 2011, 12:07:25 PM12/10/11
to sympy
The matrices are the representations of Hamiltonians that come up in
the one dimensional Hubbard model that is used in condensed matter
physics. The size is determined by the number of lattice sites and
electrons that are present in the system. x is the interaction energy
for the electrons.

Using sage, I am now able to calculate the eigenvalues of matrices
with 4 sites, 2 up, 2 down but when I crank up the size of the matrix
to 100x100, it gets lost in the calculation once again and I actually
just got a "Killed" in the terminal window. I updated to the git
version, but still no improvement... Any other tricks I could try?

Chris Smith

unread,
Dec 10, 2011, 12:36:32 PM12/10/11
to sy...@googlegroups.com

Don't do it symbolically? i.e., substitute in a value of x before
computing the eigenvalues.
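For instance (M here is a tiny stand-in for the Hamiltonian, with x fixed to an arbitrary value):

```python
from sympy import Matrix, Symbol

x = Symbol('x')
# small stand-in for the Hamiltonian
M = Matrix([[x, -1], [-1, x]])
# substituting a numeric x first keeps eigenvals() fast and exact
evs = M.subs(x, 2).eigenvals()
```

The result is the plain dict {1: 1, 3: 1} instead of large radical expressions in x.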

pedda

unread,
Dec 10, 2011, 1:37:26 PM12/10/11
to sympy
I have tried to do it non-symbolically, but I think it is rather
inelegant. I am also trying to promote the use of python/sage over
Mathematica with my fellow students, and it is somewhat important to
show that they can do the same thing for free, with other advantages.
Since Mathematica is able to calculate the eigenvalues without any
problem or delay, it is hard to convince someone to switch to a system
that takes so much longer and is not as reliable.

I just installed the python3 version of sympy, which didn't get it
done either... You can, by the way, eliminate the recursion depth
error by adding sys.setrecursionlimit(2**20) to your file. So far,
this is what I found out from testing with various systems:
Sage with python 2.6: calculates up to 36x36 in a reasonable time (65s)
python 2.7/python 3.2: calculate up to ~20x20
Sage with Maxima for the calculation is faster, but I have problems
with really big matrices like 400x400 ... working on it though!

Ondřej Čertík

unread,
Dec 10, 2011, 2:19:35 PM12/10/11
to sy...@googlegroups.com
On Sat, Dec 10, 2011 at 10:37 AM, pedda <notfor...@hmamail.com> wrote:
> I have tried to do it non-symbolically, but I think it is rather
> inelegant. I am also trying to promote the use of python/sage over the
> use of Mathematica with my fellow students and it is somewhat
> important to show that they can do the same thing for free with other
> advantages. Since Mathematica is able to calculate the eigenvalues
> without any problem or delay, it is hard to convince someone to switch
> to a system that takes so much longer and is not as reliable.

Mathematica is an old, well-tested, and polished system, so it's
currently more reliable. SymPy is improved by people like you and me,
who have a problem and want to solve it using open-source tools
(because we believe in open source), as opposed to just feeding it to
Mathematica and being done with it.

All you need are just the eigenvalues of this matrix? Or also the
eigenvectors?

What is the algorithm that Mathematica uses? In particular, what is
the best algorithm for this type of matrix to get the eigenvalues?

Let's get it implemented in SymPy so that we can solve these kinds of
matrices. I would be very interested in that myself. I do my PhD in
atomic electronic structure; I do things numerically using LAPACK or
other libraries, but I would love to be able to solve simpler physical
systems symbolically in sympy. With Brian, we want to be able to do
such things in the sympy.quantum module.

Would you be willing to contribute some example of how you generate
the matrices?

Anyway, if you know the right algorithm for your matrix, I would help
you implement it in SymPy.

Ondrej

Ondřej Čertík

unread,
Dec 10, 2011, 2:21:42 PM12/10/11
to sy...@googlegroups.com
2011/12/10 Ondřej Čertík <ondrej...@gmail.com>:

So in particular -- can you send us a script that generates the matrix
above, just for a smaller number of electrons, so that we can solve it
in sympy (and check that the eigenvalues are correct)? Then we'll try
to scale it up, see why it gets slow, and then see how we can improve
the algorithm.

Ondrej

Brian Granger

unread,
Dec 10, 2011, 4:01:09 PM12/10/11
to sy...@googlegroups.com
On Sat, Dec 10, 2011 at 10:37 AM, pedda <notfor...@hmamail.com> wrote:

Some perspective is appropriate:

* Finding eigenvalues symbolically for anything but very small
matrices is incredibly slow, and the answers you get are not going to
be meaningful symbolically - you are going to want numerical answers.
I am not saying sympy's algorithms couldn't be better.
* 400x400 is a small matrix for numerical computations. By insisting
on a symbolic algorithm, you are making an easy problem hard. Moving
to a numerical approach would let you handle matrices of *many*
thousands of rows/columns almost instantly.
* For these types of matrices you really should be using the sparse
matrices in scipy.sparse. Otherwise you are wasting a ton of time
storing and multiplying zeros.
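As a sketch of that numerical route (not part of the original reply): the 9x9 matrix from the first message, with x fixed to an arbitrary value, handed to numpy.linalg.eigvalsh, which is valid here because the matrix is symmetric.

```python
import numpy as np

def hubbard9(xv):
    # the 9x9 matrix from the first message, with x set to a number
    return np.array([
        [ 0, -1,  0,  0,  0,  0, -1,  0,  0],
        [-1, xv, -1,  0,  0,  0,  0, -1,  0],
        [ 0, -1, xv,  0,  0,  0,  0,  0, -1],
        [ 0,  0,  0, xv, -1,  0, -1,  0,  0],
        [ 0,  0,  0, -1, xv, -1,  0, -1,  0],
        [ 0,  0,  0,  0, -1,  0,  0,  0, -1],
        [-1,  0,  0, -1,  0,  0, xv, -1,  0],
        [ 0, -1,  0,  0, -1,  0, -1,  0, -1],
        [ 0,  0, -1,  0,  0, -1,  0, -1, xv]], dtype=float)

# real eigenvalues of a symmetric matrix; for much bigger sparse
# systems scipy.sparse.linalg.eigsh plays the same role
evals = np.linalg.eigvalsh(hubbard9(2.0))
```

Among the results you can pick out the eigenvalues the symbolic computation earlier in the thread predicted, e.g. 0, x, and x +/- sqrt(2) at x = 2.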

Cheers,

Brian



--
Brian E. Granger
Cal Poly State University, San Luis Obispo
bgra...@calpoly.edu and elli...@gmail.com

pedda

unread,
Dec 11, 2011, 5:40:02 AM12/11/11
to sympy
Hello everyone,

thank you very much for your participation! I have attached the code.
It is probably not the cleanest you can find, but it works for now..
You can control the size of the matrix by setting the variables sites,
n_up, and n_down which I have moved to the top of the script. The goal
was to make it work with (6,3,3).


#!/usr/bin/env python2
#
#
# Simulation of the 1D Hubbard Model

import copy
import itertools
from matplotlib.pyplot import figure,show,savefig,plot
import numpy
from sympy import Symbol
import sympy
import sys
import time

sys.setrecursionlimit(2**20)

# System definitions
sites = 3
n_up = 2
n_down = 1

# Set up the initial state
def create_state(sv, nu, nd):
    for i in range(0, nu):
        sv[0][i] = 1
    for i in range(0, nd):
        sv[1][i] = 1
    return sv

# This function calculates the sign we have to add to the state
# when creating or annihilating a fermion.
# The convention is that down spins are created before up spins
# and that the Mth site is occupied before the (M-1)th site.
def fsign(sv, site, spin):
    power = 0
    for i in range(0, site):
        power += sv[spin][i]
    power += n_up*spin
    return int((-1)**power)


# Creation of an electron at site with spin;
# returns the zero matrix if an electron already exists
def create(sv, site, spin):
    coeff = 1 - sv[spin][site]
    sv[spin][site] = 1
    fermion_sign = fsign(sv, site, spin)
    return coeff*sv

# Annihilation of an electron at site with spin;
# returns the zero matrix if no electron is found to annihilate
def annihilate(sv, site, spin):
    coeff = sv[spin][site]
    sv[spin][site] = 0
    return coeff*sv


# Calculate the Hamiltonian for a given pair of states and
# return the energy
def matrix_element(l_s, r_s, sites):
    t_energy = 0
    u_energy = 0

    # determine interaction energy
    if numpy.sum(abs(l_s - r_s)) == 0:
        u_energy = numpy.sum([(r_s[0,i]*r_s[1,i])
                              for i in range(0, len(l_s[0]))])

    for i in range(0, sites):
        for j in range(0, sites):
            # only at neighboring sites (with periodic boundaries)
            if (i == (j+1) or i == (j-1) or (i == 0 and j == sites-1)
                    or (j == 0 and i == sites-1)):
                # sum over spins
                for k in [0, 1]:
                    # We make a copy since it is going to be modified
                    temp_state = copy.copy(r_s)

                    # We annihilate the electron at site i with spin k
                    # and create an electron at site j with spin k
                    temp_state = annihilate(temp_state, i, k)
                    fermion_sign = fsign(temp_state, i, k)

                    if numpy.sum(temp_state) != 0:
                        # try to create an electron at site j with spin k
                        temp_state = create(temp_state, j, k)
                        fermion_sign = fsign(temp_state, j, k)*fermion_sign

                        # if the right and the left hand states are the
                        # same, we can add 1 to the energy
                        if numpy.sum(abs(l_s - temp_state)) == 0:
                            t_energy = t_energy + 1*fermion_sign

    return -1*int(t_energy) + x*int(u_energy)

# How long is this going to take?
totic = time.clock()

# x is the ratio of the interaction energy U and the hopping
# energy t
x = Symbol('x')

# spin up in first row
# spin down in second row
state = numpy.zeros([2, sites])

# create initial state
state = create_state(state, n_up, n_down)

# find all unique permutations
perms_up = list(set(list(itertools.permutations(state[0], sites))))
perms_down = list(set(list(itertools.permutations(state[1], sites))))

print "Set up permutations"

# the hamiltonian in matrix form
print "Created matrix"

hamiltonian = sympy.matrices.Matrix(
    numpy.zeros([len(perms_up)*len(perms_down),
                 len(perms_up)*len(perms_down)]))

# just some info
print "Total combinations " + str(len(perms_up)*len(perms_down))

# preparation for the states
left_state = numpy.zeros([2, sites])
right_state = numpy.zeros([2, sites])

# We want to order the permutations so that the ones with the
# highest number of interactions are treated first
states = []
max_int = min(n_up, n_down)

while max_int >= 0:
    for i in range(0, len(perms_up)):
        for j in range(0, len(perms_down)):
            int_e = 0
            for k in range(0, sites):
                int_e = int_e + perms_up[i][k]*perms_down[j][k]
            if int_e == max_int:
                new_state = numpy.zeros([2, sites])
                new_state[0] = perms_up[i]
                new_state[1] = perms_down[j]
                states.append(new_state)
    max_int = max_int - 1

# Now calculate the matrix elements
for i in range(0, len(perms_up)):
    print "i: " + str(i)
    for j in range(0, len(perms_down)):
        for m in range(0, len(perms_up)):
            for n in range(0, len(perms_down)):
                left_state = states[i*len(perms_down)+j]
                right_state = states[m*len(perms_down)+n]
                energy = matrix_element(left_state, right_state, sites)
                hamiltonian[len(perms_down)*i+j, len(perms_down)*m+n] = energy

print hamiltonian
evs = hamiltonian.eigenvals()
print str(len(evs)) + " unique Eigenvalues"


def eigenv(y, current_function):
    return list(evs)[current_function].subs({x: y}).evalf(chop=True)

for i in range(0, len(evs)):
    print sympy.simplify(list(evs)[i])
    figure(0)
    xvalues = numpy.array(range(0, 30))
    yvalues = [eigenv(y, i) for y in xvalues]
    plot(xvalues, yvalues)

savefig('hubbard_'+str(sites)+'_'+str(n_up)+'_'+str(n_down)+'.png')
totoc = time.clock()
total = totoc - totic
print "This took me " + str(total) + "s"
show()

Chris Smith

unread,
Dec 11, 2011, 10:27:23 AM12/11/11
to sy...@googlegroups.com
On Sun, Dec 11, 2011 at 4:25 PM, pedda <notfor...@hmamail.com> wrote:
> Hello everyone,
>
> thank you very much for your participation! I have attached the code.


Have you checked to see if there are any tricks in "The
one-dimensional Hubbard model By Fabian H. L. Essler" (viewable at
google books)?

Ondřej Čertík

unread,
Dec 11, 2011, 3:35:56 PM12/11/11
to sy...@googlegroups.com
On Sun, Dec 11, 2011 at 2:40 AM, pedda <notfor...@hmamail.com> wrote:
> Hello everyone,
>
> thank you very much for your participation! I have attached the code.
> It is probably not the cleanest you can find, but it works for now..
> You can control the size of the matrix by setting the variables sites,
> n_up, and n_down which I have moved to the top of the script. The goal
> was to make it work with (6,3,3).

Can you please post it as a gist at https://gist.github.com/?

Google Groups has reformatted your message, so when you copy-paste it
and run it, you get tons of syntax errors.

Ondrej

pedda

unread,
Dec 11, 2011, 3:44:58 PM12/11/11
to sympy
Sure, you can download it from here:

https://gist.github.com/ee7157b4446dfe5a20fb

Chris, the book was recommended to us during class, but as far as I
know they "only" treat analytically solvable cases. People love this
model, which is why the book is hundreds of pages thick without
talking about anything numerical...

Aaron Meurer

unread,
Dec 22, 2011, 10:10:22 AM12/22/11
to sy...@googlegroups.com
Hi.

As you've discovered, linear algebra in SymPy can be slow. There are
two main reasons for this. The first is inefficient algorithms. All
over the module, things are implemented using the naive algorithms
that are taught in a first course in linear algebra, but which are not
actually the most efficient ways of computing things. The second is
slow data types (we should be using the ground types from the polys).

I think the problem here is a little bit of both. SymPy computes
eigenvalues by computing the characteristic polynomial and then
finding its roots. If you try this on your matrix, you will see that
the computation of the characteristic polynomial is what is slow. Now,
according to http://reference.wolfram.com/legacy/v5/Built-inFunctions/AdvancedDocumentation/LinearAlgebra/4.4.html
(scroll to the very bottom), Mathematica uses interpolation of the
characteristic polynomial to compute eigenvalues. I'm not sure what
this means; maybe someone else does. So let's just look at how
Mathematica computes characteristic polynomials.

According to http://reference.wolfram.com/mathematica/ref/CharacteristicPolynomial.html,
Mathematica just computes the characteristic polynomial using
det(A - lambda*I). This is also what we do.
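A toy illustration of that pipeline (charpoly, then roots — a 2x2 sketch, not the thread's matrix):

```python
from sympy import Matrix, Symbol, roots

x = Symbol('x')
lam = Symbol('lambda')
M = Matrix([[x, -1], [-1, x]])
# eigenvals() amounts to: build det(M - lam*I), then find its roots
p = M.charpoly(lam).as_expr()
eigs = roots(p, lam)
```

On the big Hamiltonians in this thread it is the charpoly step, not the root finding, that never finishes.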

As for the determinant, we use the Bareiss method by default (see the
docstring of Matrix.det), though Berkowitz is also implemented.

Maple uses Berkowitz if the Matrix elements are not in a field, and
the Hessenberg algorithm if they are (see
http://www.maplesoft.com/support/help/Maple/view.aspx?path=LinearAlgebra/Generic/CharacteristicPolynomial).
Perhaps we should look into this algorithm.

Regarding ground types, you may have noticed that the other systems
(Maple, Sage, Mathematica, etc.) take care to define the ground domain
of matrices. In SymPy, this is not done: we just compute with purely
symbolic objects the same as we would with numbers. In many cases,
proper simplification is not done to ensure that an element is zero,
which can lead not only to expression explosion, but to wrong results.
What we need to do is use the ground types from the polys in the
matrices. This would help reduce expression explosion (because
rational functions would always be kept in simplest terms), and reduce
wrong results (because zero equivalence falls out of automatic
simplification for rational functions).

Finally, the last step in computing eigenvalues, which you didn't even
get to, is computing the roots of the characteristic polynomial.
Right now, if we can't compute all the roots, it just fails. We
should return RootOf objects instead. This would require expanding
RootOf to support symbolic arguments for symbolic matrices.
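RootOf (spelled CRootOf for indexed roots in current SymPy) already handles numeric polynomials with no radical solution; the missing piece is symbolic coefficients. A quick sketch of what it gives you:

```python
from sympy import CRootOf, Symbol

t = Symbol('t')
# the real root of t**5 - t + 1, which has no closed form in radicals
r = CRootOf(t**5 - t + 1, 0)
v = r.evalf(20)  # arbitrary-precision numeric value on demand
```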

As Brian said, if your end goal is a floating point number, then you
should start with floating point numbers (and perhaps even use a
numerical library like scipy to compute them). But regardless, SymPy
should be faster with this symbolically.

Aaron Meurer
