The dimension of the inverted matrix is reduced and then
multiplied by another matrix (of dimension 206), and the
trace is taken (this again occurs over 2000 times). Thus,
using LinearSolve to accelerate the process doesn't appear
to be a valid solution to my problem: it would require a
separate solve for each column of the matrix being
multiplied.
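
For concreteness, here is a minimal sketch of the computation
just described. The matrix names, the random test data, and the
assumption that the reduction is a simple submatrix extraction
are all illustrative guesses, not my actual code:

    (* sketch of one pass: invert, reduce, multiply, trace *)
    n = 412; m = 206;
    a = RandomReal[{-1, 1}, {n, n}];  (* matrix to be inverted *)
    b = RandomReal[{-1, 1}, {m, m}];  (* matrix of dimension 206 *)
    inv = Inverse[a];
    reduced = Take[inv, m, m];        (* assumed form of the reduction *)
    trace = Tr[reduced.b];            (* this step repeats >2000 times *)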
I would be very grateful for any help or indeed insight anyone
may have on this matter.
Regards,
Roger Jones
(r...@leland.stanford.edu)
Compare the following two ways of computing the same result:

1: res1 = LinearSolve[a, rhs];
2: res2 = Inverse[a].rhs;

(here a and rhs are 412 x 412 matrices)
Case 1, applied repeatedly, consumes about 0.7 MB of
additional memory on each application, whereas case 2 stays
almost constant (or grows only slightly). I need to use
case 1, as it is faster than case 2; further, I need to
apply it many times (>2000), so this is a very important
consideration. Case 1, of course, rapidly runs out of
memory.
Does LinearSolve perhaps generate hidden variables
which are not cleared? Any help offered will be
very gratefully received.
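
One way to quantify the growth (a sketch with random stand-in
data, not taken from the original posting) is to bracket
repeated applications with MemoryInUse[]:

    (* measure per-application memory growth of case 1 *)
    n = 412;
    a = RandomReal[{-1, 1}, {n, n}];
    rhs = RandomReal[{-1, 1}, {n, n}];
    before = MemoryInUse[];
    Do[res1 = LinearSolve[a, rhs], {10}];
    Print[(MemoryInUse[] - before)/10., " bytes per application"]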
>1: res1 = LinearSolve[a, rhs];
>2: res2 = Inverse[a].rhs;
>(here a and rhs are 412 x 412 matrices)
>case 1, applied repeatedly, consumes about 0.7 MB more memory
I have discovered the memory leak. It turns out that using
LinearSolve[a, rhs] with rhs a matrix, rather than a vector,
does produce the correct result; however, it leads to a
constant increase in memory usage.
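
If the leak really is specific to matrix right-hand sides, a
hedged workaround is to solve column by column with vector
right-hand sides and reassemble the result (this assumes vector
right-hand sides do not leak, which the posting implies but does
not state outright):

    (* columns of rhs solved individually, then reassembled;
       the result equals Inverse[a].rhs *)
    res2 = Transpose[LinearSolve[a, #] & /@ Transpose[rhs]];

In later Mathematica versions, LinearSolve[a] also returns a
reusable LinearSolveFunction, so the factorization of a need be
computed only once across the >2000 applications.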