Hi all,
Our application models fractures in rock. We use adaptive mesh refinement to obtain a fine mesh at the fracture locations and a coarse mesh elsewhere.
We specify the locations of the fractures and, thus, where the refinement has to happen.
As the iterative solvers do not converge for our problem, we use a direct solver (MUMPS or SuperLU).
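In case the setup matters, here is a minimal sketch of how the direct solve is wired up, written against PETSc's KSP/PC interface (an assumption for this sketch; our real code may use a different wrapper, and solve_direct is a placeholder name):

    /* Direct solve via PETSc's KSP/PC interface (sketch).
     * Assumption: PETSc sits between the application and the
     * MUMPS/SuperLU libraries. */
    #include <petscksp.h>

    PetscErrorCode solve_direct(Mat A, Vec b, Vec x)
    {
      KSP            ksp;
      PC             pc;
      PetscErrorCode ierr;

      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
      ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr); /* no Krylov iterations: factorize and solve once */
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
      ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr); /* or MATSOLVERSUPERLU_DIST */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      return 0;
    }

Switching between the two solvers then only changes the MATSOLVER constant, or the -pc_factor_mat_solver_type option on the command line.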
Depending on the number of fractures in the model, the application crashes at an earlier or later mesh-refinement step.
SuperLU aborts with the error "Not enough memory to perform factorisation.", while MUMPS just crashes with a segmentation fault.
However, we are running this code on a fat node of our cluster with 512 GB of memory.
The log file indicates that at most 256 GB of memory were used, and no other user was on that node at the time.
Thus, in theory, there should have been more memory available.
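For what it is worth, the memory numbers above come from the cluster's log, not from the solver itself. Here is a sketch of the diagnostics that could be enabled through PETSc's MUMPS options (same PETSc assumption as in the sketch above; the option names are the standard PETSc ones):

    #include <petscsys.h>

    /* Enable memory diagnostics; must run before the solver objects
     * are created so that KSPSetFromOptions picks the options up. */
    PetscErrorCode enable_memory_diagnostics(void)
    {
      PetscErrorCode ierr;

      ierr = PetscOptionsSetValue(NULL, "-mat_mumps_icntl_4", "2");CHKERRQ(ierr);   /* MUMPS verbosity: print memory statistics */
      ierr = PetscOptionsSetValue(NULL, "-mat_mumps_icntl_14", "50");CHKERRQ(ierr); /* grow the estimated working space by 50%  */
      ierr = PetscOptionsSetValue(NULL, "-memory_view", "");CHKERRQ(ierr);          /* PETSc memory summary at PetscFinalize    */
      return 0;
    }

With ICNTL(4) raised, MUMPS prints its memory estimates from the analysis phase and the memory actually allocated for factorization, which should show whether the 256 GB in the log matches what the factorization really requested.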
Therefore, the question is: why does it crash?
Thanks in advance,
Marco and Jürg