Lucas,
> I mean that the LU Decomposition solver takes a huge amount of RAM, and it
> seems to me that allocating that once and reusing the space would be better.
> Attached you can find a simple graph* showing how the free memory evolves over
> time. I ran an instance of my program with around 164k cells on 7 threads.
> As you can see, the solving step consumes a lot of RAM, and then deallocates
> it after the solver finishes. What I wonder is if it is useful and possible to
> just do this allocation/freeing once, at the start of the program.
I don't think it matters. First, you can't know in advance how much memory a
sparse LU decomposition is going to use -- it all depends on the sparsity
pattern of the matrix. Second, MUMPS manages this memory internally -- I don't
think you have control over it. Third, sparse decompositions are so expensive
to compute that the cost of memory allocation is likely completely negligible.
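To illustrate the first point, here is a small, hypothetical sketch (using
SciPy's splu as a stand-in for MUMPS, so only the principle carries over):
two matrices of the same size and with a comparable number of nonzeros can
produce factors with very different amounts of fill-in, and therefore very
different memory footprints.

    # Hypothetical sketch: fill-in (and hence memory) depends on the sparsity
    # pattern, not just on the matrix size. SciPy's SuperLU stands in for MUMPS.
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 100  # hypothetical problem size (grid points per direction)

    # 1D Laplacian: tridiagonal, its factors have essentially no fill-in.
    lap_1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1],
                      shape=(n * n, n * n), format="csc")

    # 2D Laplacian on an n x n grid: same dimension and similar nnz per row,
    # but a bandwidth of n, so the factors fill in far more.
    eye_n = sp.identity(n, format="csc")
    t = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    lap_2d = (sp.kron(eye_n, t) + sp.kron(t, eye_n)).tocsc()

    for name, a in [("1D Laplacian", lap_1d), ("2D Laplacian", lap_2d)]:
        lu = splu(a)
        print(f"{name}: nnz(A) = {a.nnz}, "
              f"nnz(L) + nnz(U) = {lu.L.nnz + lu.U.nnz}")

The 2D Laplacian's factors pick up far more nonzeros than the 1D one's even
though the input matrices are comparable in size, which is exactly why the
peak memory can't be reserved up front.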
I think there's little to gain from trying to tweak the memory allocation part
of this. The fact that MUMPS takes a lot of memory is also not something you
can change -- that's just what you get for using a direct solver.
Nice graph, though. It clearly shows what's going on!
Cheers