MPI with METIS causes memory leak


Alexander Greiner

Oct 21, 2024, 5:58:46 AM
to deal.II User Group
Dear all,

after updating to the deal.II 9.5.2 package (https://github.com/dealii/dealii/releases/download/v9.5.2/dealii-9.5.2-sonoma-intel.dmg), I encounter the very same issues already discussed here:

https://groups.google.com/g/dealii/c/-ogX1Bi243M/m/B5fUm6rNBAAJ

For step-17 I get the same error message indicating that the memory allocation failed; for step-18 and my own code, the system RAM simply fills up until the process is aborted. (Everything works when using only one core.) Step-42, for example, runs perfectly fine.
Since the issue seems to be Spack miscompiling METIS, I don't really expect there to be a solution other than recompiling everything or switching to parallel::distributed::Triangulation. Still, I'm very grateful for any suggestions!

All the best,
Alex

Wolfgang Bangerth

Oct 21, 2024, 1:28:13 PM
to dea...@googlegroups.com

On 10/21/24 03:58, 'Alexander Greiner' via deal.II User Group wrote:
>
> For step-17 I get the same error message indicating that the memory
> allocation failed, for step-18 and my personal code it just fills up the
> system RAM until one aborts it. (Everything works using only 1 core
> though.) Similarly, step-42 for example runs perfectly fine.
> Since the issue seems to be spack miscompiling METIS, I don't really
> expect there is a solution for that, other than trying to recompile
> everything or switching to parallel::distributed::Triangulation. Still,
> I'm very grateful for any suggestions!

Somewhere in step-18 and in your own code, you are calling
GridTools::partition_triangulation():
https://www.dealii.org/current/doxygen/deal.II/namespaceGridTools.html#a99eba8e3b388258eda37a2724579dd1d
It takes a third argument with a default value; you could set it to a partitioner other than METIS and see whether simply not using METIS addresses the issue.
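As a rough sketch of what that might look like (assuming deal.II headers are available and `triangulation` and `n_mpi_processes` already exist in your code), the third argument selects the partitioner:

```cpp
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/sparsity_tools.h>

// Sketch: partition an existing triangulation into n_mpi_processes
// subdomains using the built-in z-order (space-filling curve) partitioner
// instead of the default, METIS.
dealii::GridTools::partition_triangulation(
  n_mpi_processes,
  triangulation,
  dealii::SparsityTools::Partitioner::zorder);
```

This fragment cannot run standalone; it assumes a deal.II build and a previously created `dealii::Triangulation<dim>` object.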

Best
W.

Alexander Greiner

Oct 22, 2024, 7:13:35 AM
to deal.II User Group
Dear Wolfgang,

Excellent! Thank you so much!

Even though I'm not calling GridTools::partition_triangulation() directly, following your hint I found that the partitioner can be selected directly in the constructor of parallel::shared::Triangulation. So I changed

  triangulation(mpi_communicator, Triangulation<dim>::maximum_smoothing),

to

  triangulation(mpi_communicator,
                Triangulation<dim>::maximum_smoothing,
                false,
                parallel::shared::Triangulation<dim>::partition_zorder),
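For context, a minimal sketch of where such a member initializer lives in a step-17/step-18-style class (the class name `ElasticProblem` and the surrounding structure are illustrative, not from the original post):

```cpp
#include <deal.II/distributed/shared_tria.h>

template <int dim>
class ElasticProblem // hypothetical class name, for illustration only
{
public:
  ElasticProblem(const MPI_Comm mpi_communicator)
    : triangulation(mpi_communicator,
                    dealii::Triangulation<dim>::maximum_smoothing,
                    /*allow_artificial_cells=*/false,
                    // Select the built-in z-order partitioner instead of
                    // the default METIS partitioner:
                    dealii::parallel::shared::Triangulation<dim>::partition_zorder)
  {}

private:
  dealii::parallel::shared::Triangulation<dim> triangulation;
};
```

This is a sketch that assumes a working deal.II and MPI installation; it will not compile without them.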

Best,
Alex