Different resulting meshes in spatial adaptivity for d::p::d::triangulation and d::triangulation


maurice rohracker

May 24, 2022, 11:23:30 AM5/24/22
to deal.II User Group
Dear deal.II community,

While parallelizing an in-house phase-field fracture code with spatial adaptivity using the p::d::triangulation, we observed that the serial and the distributed triangulation produce different meshes under spatial adaptivity.

We mark cells for refinement if at least one scalar value falls below a given threshold within a cell.

To see where this might come from, we counted the number of cells marked for refinement before and after the call to triangulation.prepare_coarsening_and_refinement() in the serial and the distributed case.
The number of user-marked cells is the same before the call. After the call, however, the numbers differ, and therefore different meshes result.

Ideally, we would expect that there is no difference for the same problem (same mesh, same parameters, etc.).

We are using dealii 9.0.1 and do not set any smoothing operation in the constructor of the serial and the parallel meshes.

Is such a behavior expected, or is it difficult to compare the result for the serial and the distributed triangulation one by one?

Attached are the resulting meshes (left: parallel, right: serial) as well as the phase-field value at a node before refinement (just to make sure that the problem is the same for the serial and the distributed version).

Thanks for your help in advance.
Best regards,
Maurice Rohracker


parallelSerialRefinement.png
parallelSerialPointPfValue.png

Peter Munch

May 24, 2022, 11:31:22 AM5/24/22
to deal.II User Group
Hi Maurice,

deal.II 9.0.1 is quite old; don't you want to update? ;)

If I understand correctly, this is the first refinement step, so the results should be the same if you do everything identically. Question: with how many MPI processes are you running the `p::d::triangulation` simulation? Could you try running with a single process? If that works and the difference only appears in parallel, my best guess is that you have forgotten to update the ghost values of the solution vector. Is the problematic cell near a process boundary? Are you running deal.II in debug mode?

Peter

Wolfgang Bangerth

May 24, 2022, 8:52:23 PM5/24/22
to dea...@googlegroups.com
On 5/24/22 09:23, 'maurice rohracker' via deal.II User Group wrote:
>
> Is such a behavior expected, or is it difficult to compare the result for the
> serial and the distributed triangulation one by one?

The latter. The p::d::T uses p4est, and p4est makes different assumptions than
deal.II about which cells need to be refined to satisfy internal invariants. (An
example of an invariant both libraries enforce is that there is at most one
hanging node per edge in 2d, but there are others.) I believe that it is
meaningful to compare the p::d::T meshes you get with different numbers of MPI
processes, but I don't think that we want to guarantee that different
triangulation classes result in the same numbers of cells.

Best
W.


--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

maurice rohracker

Jun 1, 2022, 10:08:32 AM6/1/22
to deal.II User Group
Thanks for your suggestions, and especially for explaining the different underlying concepts of the serial and the p::d::triangulation, which make it hard to compare the results 1:1; that clarified my doubts.
Best, Maurice