determinant of a 3x3 matrix hangs


Nico

Jul 30, 2010, 9:14:28 PM
to sympy
Hi all,

this is a SymPy first-timer writing.
I just constructed a symbolic 3x3 matrix and would like to get its
determinant (code -- approx. 30 lines -- is below), but this
appears to be a major task -- sympy hangs for ages (i.e., five minutes
and more). I don't know what I'm doing wrong -- is this a bug in
SymPy? I'm using v0.6.7 here.

Any hints?

Cheers,
Nico


========================= *snip* =========================
#! /usr/bin/env python
# -*- coding: utf-8 -*-
'''
Calculates J^{-1} J^{-T}, where J is the Jacobian of the
bilinear mapping of the reference quadrilateral (-1,1)^2 to a given
quadrilateral in space.
'''
from sympy import pprint, Symbol
import sys
sys.displayhook = pprint
from sympy.matrices import *

# x0,...,x3 determine the quadrilateral
x00 = Symbol( 'x00' )
x01 = Symbol( 'x01' )
x02 = Symbol( 'x02' )
x0 = Matrix( [ x00, x01, x02 ] )

x10 = Symbol( 'x10' )
x11 = Symbol( 'x11' )
x12 = Symbol( 'x12' )
x1 = Matrix( [ x10, x11, x12 ] )

x20 = Symbol( 'x20' )
x21 = Symbol( 'x21' )
x22 = Symbol( 'x22' )
x2 = Matrix( [ x20, x21, x22 ] )

x30 = Symbol( 'x30' )
x31 = Symbol( 'x31' )
x32 = Symbol( 'x32' )
x3 = Matrix( [ x30, x31, x32 ] )

# The bilinear map
#
#   F(hat{x}, hat{y}) = alpha + beta hat{x} + gamma hat{y} + delta hat{x} hat{y}
#
# maps the reference quadrilateral (-1,1)^2 to the actual quadrilateral
# determined by the points x0,...,x3.
alpha = ( x0 + x1 + x2 + x3 ) / 4
beta = ( - x0 + x1 + x2 - x3 ) / 4
gamma = ( - x0 - x1 + x2 + x3 ) / 4
delta = ( x0 - x1 + x2 - x3 ) / 4

# The Jacobian of F is formed by
#
#   J( hat{x}, hat{y} ) = [ beta + delta hat{y}, gamma + delta hat{x}, C ].
#
# The last column can be chosen arbitrarily, so to avoid singularities
# take it to be the cross product of the first two columns.
hatx = Symbol( 'hat{x}' )
haty = Symbol( 'hat{y}' )
C1 = beta + delta * haty
C2 = gamma + delta * hatx
C3 = C1.cross( C2 ).transpose()

J = C1.row_join(C2).row_join(C3)


#print J.det().simplify()
print J
print J.det()
print J.inv()
========================= *snap* =========================

Aaron S. Meurer

Jul 31, 2010, 12:58:46 AM
to sy...@googlegroups.com
Hi!

I ran your script in SymPy 0.6.7 and got the same thing you did, with the slow performance. But I also ran it in the latest git master, and there the whole thing finished in just under a minute. So something in git master has fixed it. Actually, I have no doubt in my mind what did it: the new polys. Basically, the polys in SymPy have been rewritten completely, and many operations, especially those related to simplification, are now an order of magnitude faster than they were before.

So until SymPy 0.7.0 comes out (which could be some time; we don't exactly have a roadmap for it yet), I would recommend that you work off of the bleeding-edge development version, which is the git master. Since your script has a shebang line, I am assuming you are on some kind of Unix. If you have never used git before, you just need to install it and run

git clone git://github.com/sympy/sympy.git

in whatever directory you want to put it, and then

cd sympy
./bin/isympy

or you can install it using

./setup.py install

(sorry if you already know how to use git, but I am just trying to be helpful since most people don't)

If you want to contribute to SymPy, which you are always welcome to do, there are more things about git to learn, but if you just want to pull down the bleeding edge, that is it. The command

git pull

will update it whenever you want.

Please let us know how that works out for you.

Aaron Meurer

P.S., I just realized that this is the second time in as many days that I have explained this in an email, so I think I am going to make a wiki page for it :)


asmeurer

Jul 31, 2010, 12:59:45 AM
to sympy


On Jul 30, 10:58 pm, Aaron S. Meurer <asmeu...@gmail.com> wrote:
> [snip]
> P.S., I just realized that this is the second time in as many days that I have explained this in an email, so I think I am going to make a wiki page for it :)

http://code.google.com/p/sympy/wiki/GettingTheBleedingEdge

Nico Schlömer

Jul 31, 2010, 1:04:41 AM
to sy...@googlegroups.com
Hi,

thanks for the hint. I'll check out the latest dev version and try to run the
code with it.
But hey, calculating the determinant of a 3x3 matrix in under a
*minute*? Just counting the number of operations that need to be done,
it should take under a microsecond! -- Well, obviously there's something
I don't understand here, which is where the actual complication sits.
I'm more worried about actually calculating the inverse of that
matrix; I'm also gonna need that.

Cheers,
Nico

Aaron S. Meurer

Jul 31, 2010, 1:09:55 AM
to sy...@googlegroups.com

On Jul 30, 2010, at 11:04 PM, Nico Schlömer wrote:

> Hi,
>
> thanks for the hint. I'll check out the latest dev version and try to run the
> code with it.
> But hey, calculating the determinant of a 3x3 matrix in under a
> *minute*? Just counting the number of operations that need to be done,

> it should take under a microsecond! --

The problem is that it expands the determinant. If you don't want that, you can change the method from the default to "berkowitz" (see the docstring of Matrix.det()), and then it computes instantly (even in 0.6.7), but the result is unexpanded.
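
For instance (a minimal sketch on a 2x2 matrix standing in for your J; the same keyword works for the 3x3 case):

from sympy import Symbol
from sympy.matrices import Matrix

a, b, c, d = [Symbol(s) for s in 'abcd']
M = Matrix([[a, b], [c, d]])

# The default det() fully expands the result; "berkowitz" skips the
# expansion and is much faster on bulky symbolic entries.
print M.det(method='berkowitz')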

> Well, obviously there's something
> I don't understand here, which is where the actual complication sits.
> I'm more worried about actually calculating the inverse of that
> matrix; I'm also gonna need that.

There are also different methods for computing the inverse (see the docstring of Matrix.inv()), so you might give those a try too.
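
For instance (again just a sketch; if I remember correctly, the method strings include "GE", "LU", and "ADJ" -- check the docstring):

from sympy import Symbol
from sympy.matrices import Matrix

a, b, c, d = [Symbol(s) for s in 'abcd']
M = Matrix([[a, b], [c, d]])

# "ADJ" builds the inverse from the adjugate and the determinant;
# "LU" uses an LU decomposition instead of the default Gaussian
# elimination.
print M.inv(method='ADJ')
print M.inv(method='LU')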

Aaron Meurer

Nico

Jul 31, 2010, 12:47:41 PM
to sympy
Hi,

I'm running the latest dev version now (Gentoo ebuild here
https://bugs.gentoo.org/show_bug.cgi?id=330639, in case anyone's
interested), and the code indeed runs faster. Computing the inverse is
still a pain, but when using "LU" it seems to work out alright.
Well, eventually I end up with that huge *polynomial* in xi, eta,
which I want to integrate over (-1,1)^2 -- this seems like a trivial
thing to do, but now the code hangs there forever.

Can you confirm this? Any workarounds?

Cheers,
Nico

=========================== *snip* ===========================
#! /usr/bin/env python
# -*- coding: utf-8 -*-
'''
Calculates J^{-1} J^{-T}, where J is the Jacobian of the
bilinear mapping of the reference quadrilateral (-1,1)^2 to a given
quadrilateral in space.
'''
from sympy import pprint, Symbol, integrate
from sympy.matrices import Matrix
# (x0,...,x3 and alpha, beta, gamma, delta as constructed in the
# previous script.)
xi = Symbol( 'xi' )
eta = Symbol( 'eta' )
C1 = beta + delta * eta
C2 = gamma + delta * xi
C3 = C1.cross( C2 ).transpose()

J = C1.row_join(C2).row_join(C3)

absDetJ = J.det()
Jinv = J.inv( "LU" )
Lambda = Jinv * Jinv.transpose()

# Gradients of test functions
# phi0 = (1-xi) * (1-eta) / 4
# phi1 = (1+xi) * (1-eta) / 4
# phi2 = (1+xi) * (1+eta) / 4
# phi3 = (1-xi) * (1+eta) / 4
grad_phi0 = Matrix( [ -(1-eta), -(1-xi), 0 ] ) / 4
grad_phi1 = Matrix( [ (1-eta), -(1+xi), 0 ] ) / 4
grad_phi2 = Matrix( [ (1+eta), (1+xi), 0 ] ) / 4
grad_phi3 = Matrix( [ -(1+eta), (1-xi), 0 ] ) / 4

# Entry (I,J) of the local stiffness matrix is
#
# S(I,J) = int_{Quad} grad_phiI Lambda grad_phiJ.
#
# Compute those now.
print grad_phi0.transpose() * Lambda * grad_phi0
print 'Start integration...'
I = integrate( integrate( grad_phi0.transpose() * Lambda * grad_phi0 * absDetJ,
                          (xi,-1,1) ), (eta,-1,1) )
print 'Done.'
=========================== *snap* ===========================




On Jul 31, 1:09 am, "Aaron S. Meurer" <asmeu...@gmail.com> wrote:
> On Jul 30, 2010, at 11:04 PM, Nico Schlömer wrote:
>
> > Hi,
>
> > thanks for the hint. I'll check out the latest dev and try and run the
> > code with it.
> > But hey, calculating the determinant of a 3x3 matrix in under a
> > *minute? Just counting the number of operations that needs to be done,
> > it should be under a microsecond! —
>
> The problem is that it expands the determinant.  If you don't want this, you can change method from the default to berkowitz (see the docstring of Matrix.det()), and it does it instantly (even in 0.6.7), but the result is unexpanded.
>
> > Well obviously there's something
> > I don't understand here which is where the actual complication sits.
> > I'm more worried about actually calculating the inverse of that
> > matrix; I'm also gonna need that.
>
> There are also different methods for computing the inverse (see the docstring of Matrix.inv()), so you might give those a try too.  
>
> Aaron Meurer
>
>
>
>
>
>
>
>
>
> > Cheers,
> > Nico
>
> >>> For more options, visit this group athttp://groups.google.com/group/sympy?hl=en.
>
> >> --
> >> You received this message because you are subscribed to the Google Groups "sympy" group.
> >> To post to this group, send email to sy...@googlegroups.com.
> >> To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
> >> For more options, visit this group athttp://groups.google.com/group/sympy?hl=en.

Aaron S. Meurer

Jul 31, 2010, 4:27:05 PM
to sy...@googlegroups.com
Well, you can't get around expanding the expression at one point or another. You managed to compute the expression without expanding it, but to integrate it, it must first be expanded, and that is what takes all the time.

It seems that the thing you are integrating is a 1x1 Matrix, so after pulling out the single element and running .as_numer_denom() on it (which took some time), it looks like it is not a polynomial in xi or eta, but a rational function, unless they cancel out (I gave up on running cancel()).

I couldn't find any good ways to work around this. Basically, even though we have new, faster polys, they have not been fully integrated yet. In particular, expand() still uses an older, slower routine, and whatever function you call is going to try running expand on it. Actually, even if you use poly(), which skips the slower expand() and uses the polys' own expansion methods, it's still slow, because the new polys are inefficient for expressions with many variables. I tried all kinds of methods to expand what you got, but I couldn't get it to work.

So I went back and looked at the original matrix. Basically, you have a bunch of junk terms like (-x10/4 - x30/4 + x00/4 + x20/4) that are constant with respect to eta and xi, but are slowing down the simplification. So I recommend that you substitute the coefficients of xi and eta, and the constant terms, with dummy symbols and back-substitute once you have integrated. So you might do something like this:

In [264]: A = numbered_symbols('a')

In [265]: revsubsdict = {A.next():(-x00/4 - x30/4 + x10/4 + x20/4), A.next():(-x10/4 - x30/4 + x00/4 + x20/4), A.next():(-x00/4 - x10/4 + x20/4 + x30/4)} # And continue with the rest

In [266]: subsdict = {revsubsdict[i]:i for i in revsubsdict} # This is a dictionary comprehension and requires Python 2.7. Basically, reverse the dictionary.

Then work with J.subs(subsdict) instead of J, and do result.subs(revsubsdict) at the very end to get the original symbols back in again.

I didn't carry this through to the end, because it requires more copying and pasting than I was willing to do at the moment (maybe you could figure out how to automate it better -- see the sketch below), but I think it should be more efficient for you. You can also play around with cse(), though I couldn't figure out how to make it not pull out xi and eta.
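
Something along these lines might work for automating the substitution (an untested sketch; hide_coeffs is just a name I made up, and I haven't run it against your full script):

from sympy import numbered_symbols

def hide_coeffs(M, gens):
    """Replace every coefficient of M's entries (viewed as polynomials
    in gens) with a fresh dummy symbol.  Returns the new matrix and a
    dict mapping each dummy back to the original coefficient."""
    dummies = numbered_symbols('a')
    revsubsdict = {}
    def per_entry(e):
        p = e.as_poly(*gens)   # assumes each entry is polynomial in gens
        new = 0
        for monom, coeff in zip(p.monoms(), p.coeffs()):
            if coeff == 0:
                continue
            s = dummies.next()
            revsubsdict[s] = coeff
            term = s
            for g, k in zip(gens, monom):
                term *= g**k
            new += term
        return new
    return M.applyfunc(per_entry), revsubsdict

# usage:
#   J2, revsubsdict = hide_coeffs(J, (hatx, haty))
#   ... work with J2 ...
#   result = result.subs(revsubsdict)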

Aaron Meurer

Nico Schlömer

Jul 31, 2010, 4:39:11 PM
to sy...@googlegroups.com
Hi Aaron,

thanks a lot for looking into this.
I've toyed around with Mathematica this afternoon, and I noticed that
there, too, you run into problems if you don't manually expand things.
I managed to get it to work in the end, and I reckon that if I use the
same manual simplifications here (e.g., starting off with the vectors
alpha, beta, gamma, delta rather than x0,...,x3, making use of ...),
sympy would get a boost. I guess I'll go on reading up on the sympy
documentation and play around.

Thanks!
Nico

Bastian Weber

Aug 2, 2010, 3:57:48 PM
to sy...@googlegroups.com
Hi Aaron and Nico


Because I have already struggled with similar problems, I want to add my
two cents.

> On Sat, Jul 31, 2010 at 4:27 PM, Aaron S. Meurer <asme...@gmail.com> wrote:
>> [snip]
>> So I recommend that you substitute the coefficients of xi and eta, and the
>> constant terms, with dummy symbols and back-substitute once you have
>> integrated.
>> [snip]

I also think that temporarily "hiding" unneeded complexity in dummy symbols
is a good way to go. Just out of curiosity, I played around a bit.


Dealing with polynomials in xi and eta, one can avoid the copy-and-paste:
I wrote some quick-and-dirty utility functions which make it easy to
introduce a unique symbol for each coefficient.

After building the inverse there are more calculations (building Lambda
and multiplying by the gradient from two sides). Each of them increases
the number of elementary operations in the expanded expression quite a bit.

(As Ondrej told me a while ago, this can quickly be checked using
expr.count_ops().)
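
For example, on a toy expression:

from sympy import Symbol
x = Symbol('x'); y = Symbol('y')
print( ((x + y)**4).expand().count_ops() )   # rough measure of expression size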


So I would repeat the step of introducing dummy vars as often as necessary.

Another thing:

You can make the expressions a lot simpler if you calculate numerator and
denominator separately. This means that instead of using

Jinv * Jinv.T * absDetJ

you could use

Jadj * Jadj.T / absDetJ.
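
(A quick sanity check of the identity behind this on a 2x2 toy matrix:
inv(J) == adjugate(J)/det(J), so one power of the determinant cancels
against the Jacobian factor:)

from sympy import Symbol, simplify
from sympy.matrices import Matrix

a, b, c, d = [Symbol(s) for s in 'abcd']
J = Matrix([[a, b], [c, d]])
detJ = J.det()

lhs = J.inv() * J.inv().T * detJ              # what the script computes
rhs = J.adjugate() * J.adjugate().T / detJ    # the cheaper form
print( (lhs - rhs).applyfunc(simplify) )      # zero matrix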


At the end you should have a quite simple expression: a rational
function where numerator and denominator are polynomials in xi and eta
and each coefficient is a single symbol.

Before you pass this to integrate, maybe it is a good idea to convert
these two bivariate polynomials into univariate polynomials which just
depend on xi. Then you could perform the inner integral and substitute
back, so that you obtain a function which just depends on eta; see the
sketch below.
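
Here is a rough sketch of what I mean, with a toy integrand just to show
the mechanics (untested against the real expressions):

from sympy import Symbol, Poly, integrate, numbered_symbols

xi = Symbol('xi'); eta = Symbol('eta')
expr = (3*eta**2 + 1)*xi**2 + 2*eta*xi + 5   # toy stand-in for the integrand

# Hide the eta-dependent coefficients behind dummies, integrate in xi
# only, then substitute back and integrate in eta.
dummies = numbered_symbols('c')
revsubs = {}
hidden = 0
p = Poly(expr, xi)
for (k,), coeff in zip(p.monoms(), p.coeffs()):
    s = dummies.next()
    revsubs[s] = coeff
    hidden += s * xi**k

inner = integrate(hidden, (xi, -1, 1)).subs(revsubs)
outer = integrate(inner, (eta, -1, 1))
print(outer)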


But I am not sure if the solution of the integrals can be expressed in
closed form, and even if so, I don't know if sympy can find it.


Finally, another experience as a sympy user: while working with huge
expressions, it often seems to me that printing (and pretty-printing even
more so) is a very time-consuming task -- sometimes printing an expression
takes much longer than calculating it.

I attached the code with my additions (mainly the utility functions).

Comments are of course very welcome.

Regards,

Bastian.

det3by3.py

Aaron S. Meurer

Aug 3, 2010, 12:36:32 AM
to sy...@googlegroups.com
Hi Bastian. Thanks for your input.

Yes, I agree. If you don't keep the numerator and denominator separate, then integrate has to figure out what they are using .as_numer_denom(). Sometimes just adding a few extra lines like

fa = fa*gd + ga*fd
fd = fd*gd

instead of

f = f + g

to keep the numerator and denominator separate can make things faster. I have to do this a lot in my integration work, but that's actually because I use Poly, which does not support rational functions, so I have to keep the two separate. Of course, you can put them together as fa/fd at the end, because .as_numer_denom() is fast in the trivial case where the expression is already a fraction. Finally, calling cancel() will cancel any common terms that may be in the numerator and denominator, which can make things simpler. On the other hand, ratint() calls cancel() anyway, so you probably don't need to worry about it.
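
A trivial illustration of the bookkeeping (with f = fa/fd and g = ga/gd):

from sympy import Symbol, cancel

x = Symbol('x')
fa, fd = x + 1, x**2      # f = (x + 1)/x**2
ga, gd = 1, x             # g = 1/x

sa = fa*gd + ga*fd        # numerator of f + g
sd = fd*gd                # denominator of f + g
print( cancel(sa/sd) )    # -> (2*x + 1)/x**2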

>
> At the end you should have a quite simple expression: a rational
> function where numerator and denominator are polynomials in xi and eta
> and each coefficient is a single symbol.
>
> Before you pass this to integrate, maybe it is a good idea to convert
> these two bivariate polynomials into univariate polynomials which just
> depend on xi. Then you could perform the inner integral and substitute
> back, so that you obtain a function which just depends on eta.
>
>
> But I am not sure if the solution of the integrals can be expressed in
> closed form, and even if so, I don't know if sympy can find it.

Well, I can give you good news here. All rational functions have elementary integrals, and SymPy has the complete algorithm for finding all of them (see sympy/integrals/rationaltools.py).
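
For example, a quick check:

from sympy import Symbol
from sympy.integrals.rationaltools import ratint

x = Symbol('x')
print( ratint(1/(x**2 - 2), x) )   # elementary: a combination of logs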

Of course, the result may not be a rational function (it will be a rational function plus possibly some logarithms and/or arctangents), which means that it might not get so lucky when integrating with respect to eta. Actually, I need to verify, but I think it might be the case that these are always elementary too, though the present integrate() is not guaranteed to find them. That is where my work this summer will come in, because the algorithm I am currently working on will be able to integrate expressions of the type returned by ratint() in a complete manner.

I know that it can do quadratic/quadratic (which is what you have), at least in my integration3 branch, with symbolic coefficients because I ran integrate((a*x**2 + b*x + c)/(d*x**2 + e*x + f), x) once. However, this took a *very* long time to complete (maybe an hour), and the result was quite complex. And the integral you have is the same, except instead of a-f, you have even more complex symbolic coefficients.

I tried running it, and indeed, it hung on the integration of the first integral, until several hours passed and I stopped it.

In master, outside of my integration3 branch, it will probably fail because of issue 1793. If that happens (a CoercionFailed exception), try it in the integration3 branch, or cherry-pick commit f7e24f3b43 from http://github.com/asmeurer/sympy/commit/f7e24f3b43af5cc9319ee21a9b1547ef2b980ccd onto master.

But the simplest thing you can do is to integrate (a*x**2 + b*x + c)/(d*x**2 + e*x + f) and back-substitute for a-f, and then do a similar thing the second time around. But even then, as I said, this takes about an hour (if I remember correctly) to complete, and it requires the above patch.

>
>
> Finally, another experience as a sympy user: while working with huge
> expressions, it often seems to me that printing (and pretty-printing even
> more so) is a very time-consuming task -- sometimes printing an expression
> takes much longer than calculating it.

I agree. Do you know how to fix it? Vinzent has mentioned that printing is O(n**2) when it could be O(n) because of inefficient string concatenation when building the expression. I wonder to what extent that slows things down.

>
> I attached the code with my additions (mainly the utility functions).

I had to make one change to make it work with the new polys (see below).

>
> Comments are of course very welcome.
>
>
>
> Regards,
>
> Bastian.


> #! /usr/bin/env python
> # -*- coding: utf-8 -*-
> '''
> Calculates J^{-1} J^{-T}, where J is the Jacobian of the
> bilinear mapping of the reference quadrilateral (-1,1)^2 to a given
> quadrilateral in space.
> '''
> from sympy import pprint, Symbol, integrate

> import sympy

> # J ( hat{x}, hat{y} ) = [ beta + delta hat{y}, gamma + delta hat{x}, C ].


> #
> # The last column can be chosen arbitrarily, so to avoid singularities take it
> # to be the cross product of the first two columns.
> xi = Symbol( 'xi' )
> eta = Symbol( 'eta' )
> C1 = beta + delta * eta
> C2 = gamma + delta * xi
> C3 = C1.cross( C2 ).transpose()
>
> J = C1.row_join(C2).row_join(C3)
>
>
>

> #
> import itertools
>
> def symbol_with_idcs(symb, *args):
>     tmp = '_'.join(["%02i" % arg for arg in args])
>     return sympy.Symbol(symb + '_' + tmp)
>
> def introduce_dummies(M, vars, symb, degrees=3):
>     """
>     returns a tuple (M, d)
>     M ... Matrix (or expression) where all coeffs are single symbols
>     d ... dict for restoring the original expressions
>     """
>     squeeze = False
>     if not isinstance(M, sympy.Matrix):
>         M = sympy.Matrix([M])
>         squeeze = True
>     n, m = M.shape
>     M2 = M*0  # make a copy
>     revsubsdict = {}
>     for i in range(n):
>         for j in range(m):
>             coeff_orders = itertools.product(range(degrees), repeat=2)  # cartesian product
>             tmp_poly = M[i,j].as_poly(*vars)
>             for k, l in coeff_orders:
>                 c = tmp_poly.coeff(k,l)
Change this line to:
    c = tmp_poly.nth(k, l)
>                 if c == 0:
>                     continue
>                 koeff = symbol_with_idcs(symb, i, j, k, l)
>                 M2[i,j] += koeff * vars[0]**k * vars[1]**l
>                 revsubsdict[koeff] = c
>     if squeeze:
>         M2 = M2[0]
>     return M2, revsubsdict
>
> # obsolete: (?)
> def polynomize_matrix(M, vars):
>     return M.applyfunc(lambda w: w.as_poly(*vars))
>
>
> J_old = J
>
>
> print "Step 1"
> J, revsubsdict1 = introduce_dummies(J, (xi, eta), 'k')
>
> absDetJ = J.det()
>
> # old:
> #Jinv = J.inv( "LU" )
> #Lambda = Jinv * Jinv.transpose()
>
>
> print "Step 2"
>
> # exploit that J.inv() == J.adjugate() / J.det()
> Jadj = J.adjugate()
> Lambda2 = Jadj * Jadj.T
>
> Lambda2, revsubsdict2 = introduce_dummies(Lambda2, (xi, eta), 'f')


>
>
> # Gradients of test functions
> # phi0 = (1-xi) * (1-eta) / 4
> # phi1 = (1+xi) * (1-eta) / 4
> # phi2 = (1+xi) * (1+eta) / 4
> # phi3 = (1-xi) * (1+eta) / 4
> grad_phi0 = Matrix( [ -(1-eta), -(1-xi), 0 ] ) / 4
> grad_phi1 = Matrix( [ (1-eta), -(1+xi), 0 ] ) / 4
> grad_phi2 = Matrix( [ (1+eta), (1+xi), 0 ] ) / 4
> grad_phi3 = Matrix( [ -(1+eta), (1-xi), 0 ] ) / 4
>
> # Entry (I,J) of the local stiffness matrix is
> #
> # S(I,J) = int_{Quad} grad_phiI Lambda grad_phiJ.
> #
> # Compute those now.

> #print grad_phi0.transpose() * Lambda * grad_phi0 # consumes too much time
>
>
> print "Step 3"
>
>
>
>
>
> Numerator = grad_phi0.transpose() * Lambda2 * grad_phi0
> Numerator, revsubsdict3 = introduce_dummies(Numerator, (xi, eta), 'g')
> Numerator = Numerator[0] # its only 1-by-1
>
>
> absDetJ, revsubsdict4 = introduce_dummies(absDetJ, (xi, eta), 'h')
>
>
>
>
>
> print 'Start integration... (will probably hang)'
>
> # this still hangs:
> I1 = integrate( Numerator / absDetJ, (xi,-1,1) )
> I2 = integrate( I1, (eta,-1,1) )
>
>
> print 'Done.'
