Hello,
I'm currently working on upgrading my code, adding PETSc as an alternative to Trilinos as the linear algebra package.
I'm implementing this option following the step-55 tutorial.
However, I'm running into problems when I try to run massively parallel simulations.
In particular, the memory consumption during the system setup phase becomes very large.
After some debugging, I was able to figure out that the part of the code responsible for this is the generation of the sparsity pattern, i.e., the following lines:
BlockDynamicSparsityPattern dsp(local_partitioning);
DoFTools::make_sparsity_pattern(dof_handler, scratch_coupling, dsp, constraints, false, this_mpi_process);
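For context, the rest of my PETSc setup follows step-55 fairly closely. A minimal sketch of how I then use the dsp (variable names such as owned_partitioning, locally_relevant_dofs, system_matrix, and mpi_communicator stand in for the corresponding objects in my code, and the exact distribute_sparsity_pattern overload may depend on the deal.II version):

// Exchange the locally relevant entries so that every process knows the
// full sparsity of the rows it owns ...
SparsityTools::distribute_sparsity_pattern(
    dsp,
    Utilities::MPI::all_gather(mpi_communicator,
                               dof_handler.locally_owned_dofs()),
    mpi_communicator,
    locally_relevant_dofs);
// ... and initialize the PETSc block matrix from the distributed pattern.
system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);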
I want to point out that this behavior does not depend on PETSc; it is related only to the way the block sparsity pattern (BSP) is built. Indeed, I ran into the same issue with Trilinos when the above strategy is used.
In the previous version of the code, I used these lines to generate the BSP:
TrilinosWrappers::BlockSparsityPattern sp(local_partitioning, MPI_COMM_WORLD);
DoFTools::make_sparsity_pattern(dof_handler, matrix_coupling,
                                sp, constraints, false, this_mpi_process);
sp.compress();
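For completeness, the Trilinos matrix is then initialized directly from this already distributed pattern, roughly as:

system_matrix.reinit(sp);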
In this last case, the amount of memory required to generate the BSP is much smaller than in the first case.
Any ideas what is going on? Am I doing something wrong?
Thank you very much for your support.
Matteo