Fully distributed triangulation with GMSH

Kumar Saurabh

unread,
Mar 14, 2023, 8:44:17 PM3/14/23
to deal.II User Group
Hi, 

I am trying to partition a mesh generated with GMSH. The mesh is fairly large, with around 500K elements.

I am new to deal.II, but my impression is that parallel::distributed::Triangulation works only for quadrilateral/hexahedral meshes, since it uses the p4est backend.

I tried GridTools::partition_triangulation, but it replicates the mesh on every processor. That is not ideal, since the number of elements is so large.

Can I use parallel::fullydistributed::Triangulation to partition a tetrahedral mesh without replicating it on all processors? If so, are there any examples of this? All the examples I have found work with hypercube-type meshes.
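For context, this is roughly what I have in mind, following the recipe from the parallel::fullydistributed::Triangulation class documentation (the file name "mesh.msh" and the dimension are placeholders; I have not verified this against my actual mesh):

```cpp
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/fully_distributed_tria.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_description.h>

#include <fstream>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm comm = MPI_COMM_WORLD;

  // Read the GMSH mesh into a serial Triangulation (on every rank here;
  // the grouped variant of create_description_... can restrict reading
  // to one rank per group).
  Triangulation<3> serial_tria;
  GridIn<3>        grid_in(serial_tria);
  std::ifstream    input("mesh.msh"); // hypothetical file name
  grid_in.read_msh(input);

  // Assign a subdomain id (partition) to every cell.
  GridTools::partition_triangulation(
    Utilities::MPI::n_mpi_processes(comm), serial_tria);

  // Convert the partitioned serial mesh into a description and build
  // the fully distributed triangulation from it.
  const auto description =
    TriangulationDescription::Utilities::create_description_from_triangulation(
      serial_tria, comm);

  parallel::fullydistributed::Triangulation<3> tria(comm);
  tria.create_triangulation(description);
}
```
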

Thanks,
Kumar Saurabh

Peter Munch

unread,
Mar 15, 2023, 2:33:04 AM3/15/23
to deal.II User Group

Kumar Saurabh

unread,
Mar 15, 2023, 5:22:33 PM3/15/23
to deal.II User Group
Hi Peter,

Thanks a lot, it seems to work. If I understand correctly, setting the group size equal to the number of MPI ranks means that only process 0 reads the mesh and distributes it. Is that right?

Is there a way to check the number of elements / memory use after the file is read but before partitioning, to verify that only process 0 actually reads and stores the mesh?

Thanks,
Kumar Saurabh

blais...@gmail.com

unread,
Mar 19, 2023, 10:23:06 PM3/19/23
to deal.II User Group
Dear Kumar,

You can monitor the processes using your OS to verify that.
If you are loading a relatively large triangulation, this will be very apparent since the memory footprint of one process will be significantly larger. Even a quick "top" will enable you to see this.
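If you prefer to check from within the program, deal.II can also report per-rank memory use via Utilities::System::get_memory_stats; a minimal sketch (call it once after reading the mesh and once after partitioning, then compare the ranks):

```cpp
#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>

#include <iostream>

// Print the resident memory (VmRSS, in kB) of the calling rank.
void print_memory(const MPI_Comm comm)
{
  dealii::Utilities::System::MemoryStats stats;
  dealii::Utilities::System::get_memory_stats(stats);
  std::cout << "rank "
            << dealii::Utilities::MPI::this_mpi_process(comm)
            << ": VmRSS = " << stats.VmRSS << " kB" << std::endl;
}
```
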
Best
Bruno

Kumar Saurabh

unread,
Mar 20, 2023, 2:24:00 PM3/20/23
to deal.II User Group
Hi Bruno,

Thanks for the suggestion. It seems to be working fine. 

Thanks,
Kumar Saurabh
