Multiple Triangulations


Michael Harmon

unread,
Mar 2, 2016, 6:28:50 PM
to deal.II User Group



I'm working on a problem where I have coupling between multiple triangulations. I have a super grid called Super_Triangulation, shown here:

And two sub triangulations, sub_triangulation_1:

and sub_triangulation_2,

I make Super_Triangulation by merging the two sub-triangulations and then locally refine all of them. I assemble terms involving variables on each triangulation by looping through the sub-triangulations individually and getting the corresponding super cell.


This works great in the case where I don't locally refine any of the meshes, but when I do have local refinements I get segmentation faults when looping over the cells.

Anybody have any advice on how to circumvent this???

Sebastian.Gonzalez-Pintor

unread,
Mar 4, 2016, 4:12:33 AM
to deal.II User Group
You are **not** supposed to merge triangulations that are already refined. When the refinement is global, you can usually "flatten" the triangulation (by generating a new triangulation that contains only the active cells of the original one) and then merge. Otherwise, you should perform the merging before the refinement (this should be the preferred way, because then you can still use coarsening in case you need it), and then the merging should pose no problem.
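A minimal sketch of that order of operations (the geometry, names, and refinement levels here are only illustrative, not taken from the original post):

#include <deal.II/base/point.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>

using namespace dealii;

void make_super_triangulation ()
{
  Triangulation<2> sub_1, sub_2, super;

  // Two coarse, unrefined sub-grids that share a common edge at x = 0.
  GridGenerator::hyper_rectangle (sub_1, Point<2>(-1, 0), Point<2>(0, 1));
  GridGenerator::hyper_rectangle (sub_2, Point<2>( 0, 0), Point<2>(1, 1));

  // Merge while everything is still coarse; vertices on the common
  // boundary are identified automatically.
  GridGenerator::merge_triangulations (sub_1, sub_2, super);

  // Only now refine (globally or locally) each of the three meshes.
  super.refine_global (2);
  sub_1.refine_global (2);
  sub_2.refine_global (2);
}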

Michael Harmon

unread,
Mar 5, 2016, 5:55:12 PM
to deal.II User Group
Hi Sebastian, 

The way I made the merged mesh above is to globally refine two meshes, print them out, and read them back in so that they only have one level. I then merge them into the super one and do more local refinement. Do you know if there is a way to end up with refined meshes like the above without resorting to merging?

Sebastian.Gonzalez-Pintor

unread,
Mar 6, 2016, 7:18:45 AM
to deal.II User Group
Hi Michael,

You should take a look at step-1 and some other tutorials that deal with meshes.

I guess that something like this is what you want:


This can be done with the following function:

#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_out.h>

#include <iostream>
#include <fstream>
#include <cmath>

using namespace dealii;

void your_grid ()
{
  Triangulation<2> tria;

  std::vector<unsigned int> rep (2);
  rep[0] = 8;
  rep[1] = 4;

  const Point<2> ini (-1, 0);
  const Point<2> end (+1, 1);
  GridGenerator::subdivided_hyper_rectangle (tria, rep, ini, end);

  Triangulation<2>::active_cell_iterator
    cell = tria.begin_active(),
    endc = tria.end();
  for (; cell != endc; ++cell)
    {
      const Point<2> cell_center = cell->center();
      const double x_component = cell_center(0);
      if (std::fabs(x_component) < 0.25)
        {
          cell->set_refine_flag ();
        }
    }
  tria.execute_coarsening_and_refinement ();

  std::ofstream out ("your_grid.eps");
  GridOut grid_out;
  grid_out.write_eps (tria, out);
  std::cout << "Grid written to your_grid.eps" << std::endl;
}

int main ()
{
  your_grid ();
}

Sebas
main.cc

Daniel Arndt

unread,
Mar 7, 2016, 5:58:53 PM
to deal.II User Group
Hey Michael,

Just as a sidenote: Coarse meshes must not have hanging nodes.
Therefore you need to use an approach as suggested by Sebastian.

Best,
Daniel

Michael Harmon

unread,
Mar 9, 2016, 4:14:48 PM
to deal.II User Group
Hey guys, thanks for the replies. I realize we can't have any hanging nodes on meshes that we merge together. What I do is first make two coarse meshes, call them mesh A and mesh B, merge them into one called mesh C, and then locally refine all three meshes after the merging.

The issue I'm having is that I have PDEs on all three meshes and they are coupled. When I construct all three meshes without any local refinement, I can assemble the coupling terms with no problem. I do this by looping through mesh A's cells and getting the corresponding cell in mesh C by using:

typename DoFHandler<dim>::active_cell_iterator
Mesh_C_Cell(&Mesh_A_triangulation, & Mesh_A_cell->level(), Mesh_A_cell->index(), &dof_handler);

And do the same thing for mesh B.


However, when I do the same thing after performing a local refinement on all the meshes after merging, I get segmentation faults when I try to get a mesh C cell using the technique above.

I'm not sure if the ordering gets all messed up after the refinement. Do you guys know what's going on? Please let me know if I can do a better job of describing the issue too; it's hard for me to put into words.

Wolfgang Bangerth

unread,
Mar 9, 2016, 5:21:54 PM
to dea...@googlegroups.com
On 03/09/2016 03:14 PM, Michael Harmon wrote:
>
> The issue I'm having is that I have PDES on all three meshes and they
> are coupled. When I construct all three meshes without any local
> refinement, I can assemble the coupling terms no problem. I do this by
> looping through mesh A' cells and getting the corresponding cell in Mesh
> C by using:
>
> typename DoFHandler<dim>::active_cell_iterator
> Mesh_C_Cell(&Mesh_A_triangulation, & Mesh_A_cell->level(),
> Mesh_A_cell->index(), &dof_handler);

This cannot work. You assume that the cells of mesh C are
arranged in exactly the same way as those in mesh A, i.e., that cells
that have the same physical location also have the same index and level.
This assumption is unwarranted, and is the reason you get segfaults.

You need to find a different way of doing things. For example, you build a
std::map<cell_iterator,cell_iterator> cell_map_A_to_C;
between the *coarse* mesh cells of meshes A and C (and similarly B and
C). Then, instead of the case above, whenever you traverse the cells of
mesh A, you do so in a hierarchic way (i.e., loop over the coarse cells
and recurse to the children), and keep track of which child of the
C-mesh cell corresponds to the A-mesh cell's child. In essence, you need
to see the two meshes as trees (or forests) of cells and traverse the
two trees synchronously.
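
A rough sketch of what such a synchronized traversal might look like (illustrative only; it assumes the coarse cells of A and C were created in matching order and refined compatibly, and names like cell_map_A_to_C and walk_synchronously are placeholders):

#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_iterator.h>
#include <map>

using namespace dealii;

// Recurse through matching cells of meshes A and C. Assumes that
// whenever cell_A is refined, cell_C is refined the same way, so that
// child c of one corresponds to child c of the other; real code would
// also have to handle C being refined further than A.
template <int dim>
void walk_synchronously (const typename Triangulation<dim>::cell_iterator &cell_A,
                         const typename Triangulation<dim>::cell_iterator &cell_C)
{
  if (!cell_A->has_children())
    {
      // cell_A is active and cell_C covers the same physical region:
      // assemble the coupling terms for this pair of cells here.
      return;
    }
  for (unsigned int c = 0; c < cell_A->n_children(); ++c)
    walk_synchronously<dim> (cell_A->child(c), cell_C->child(c));
}

template <int dim>
void traverse_coupled_meshes
  (const std::map<typename Triangulation<dim>::cell_iterator,
                  typename Triangulation<dim>::cell_iterator> &cell_map_A_to_C)
{
  // cell_map_A_to_C pairs the *coarse* cells of mesh A with the
  // corresponding coarse cells of mesh C; it is built once, e.g. by
  // matching cell centers right after the merge.
  for (const auto &p : cell_map_A_to_C)
    walk_synchronously<dim> (p.first, p.second);
}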

Best
W.


--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@math.tamu.edu
www: http://www.math.tamu.edu/~bangerth/

Michael Harmon

unread,
Mar 9, 2016, 5:35:47 PM
to deal.II User Group
Thanks that makes a lot of sense! 

I'm wondering, do you know if the fix with the map you mentioned would work in the case that the three meshes are distributed?

Wolfgang Bangerth

unread,
Mar 9, 2016, 5:43:26 PM
to dea...@googlegroups.com
On 03/09/2016 04:35 PM, Michael Harmon wrote:
>
> I wondering do you know if the fix with the map you mentioned would work
> in the case that the 3 meshes are distributed?

In the case of 3 meshes, conceptually, you still have three forests,
where one is the union of the other two. (A forest is a collection of
trees, where one quad-tree originates from each coarse cell.) Since each
processor stores the entire coarse mesh, you can still build the maps
between the coarse meshes of A and C, and of B and C.

The problem is that for distributed meshes, each processor only stores a
subset of all higher levels of the trees in the forest, namely those
that correspond to locally owned cells and their parents. The subforest
a processor stores of mesh C need not have anything to do with the
subforest it stores of mesh A or B, though, and you will have difficulty
working with this.


These conceptual complications indicate to me that you have the wrong
design for your problem. Can you describe again why you can't just store
a single mesh?

Michael Harmon

unread,
Mar 9, 2016, 6:09:29 PM
to deal.II User Group

Dear Dr. Bangerth,


My problem is on a semiconductor-liquid interface which is depicted in the image:


Different charged particles reside in separate subdomains (i.e., electrons only exist in the semiconductor Omega_{S} and ions only exist in the liquid Omega_{E}). The particles interact with one another at the interface (Sigma) between the subdomains. There is an electric field present throughout the union of Omega_{S} and Omega_{E}, and it depends on the density of the charged particles. The charged particles in both subdomains are in turn advected by the electric field.

At first I was going to use one triangulation and extend the carriers by zero outside their subdomain as was done in step-46, but realized this wouldn't work with a distributed triangulation. I had success by merging two triangulations that were only refined globally into a third, but am now having issues when I use local refinement. I also haven't quite gotten the distributed version to work yet... but that's another problem. =)


I'm not sure this is the best way to go about solving my problem. Is there a better way to do what I am trying to do?


Thanks a ton!

Michael Harmon

unread,
Mar 9, 2016, 6:11:32 PM
to deal.II User Group
Sorry, I meant to clarify that I had success when I made separate triangulations for Omega_{S} and Omega_{E} and then merged them together to make a triangulation for the electric field.

Thanks again,

Wolfgang Bangerth

unread,
Mar 9, 2016, 6:28:28 PM
to dea...@googlegroups.com
On 03/09/2016 05:09 PM, Michael Harmon wrote:
> <https://lh3.googleusercontent.com/-Lyz2EVMqYCQ/VuCpq3QdJCI/AAAAAAAAAiA/kJXVvPej7pA/s1600/Abstractdomaindecomp.png>
>
>
> Different charged particles reside in separate subdomains (i.e.
> electrons only exist in the semiconductor Omega_{S} and ions only exist
> the liquid Omega_{e}). The particles interact with one another at the
> interface (Sigma) of the subdomains. There is a electric field present
> throughout the union of Omega_{S} and Omega_{E} and depends on the
> density of the charge particles. The charge particle's in both
> subdomains are then advected by the electric field.
>
> At first was going to use one triangulation and extend the carriers by
> zero outside their subdomain as was done in step-46, but realized this
> wouldnt work with a distributed triangulation.

Yes, at least not yet. (But we also don't have a date of arrival for
this -- though it's high on our list.)

What are the equations you need to solve on the two subdomains for the
charge carriers?

Michael Harmon

unread,
Mar 10, 2016, 10:29:11 AM
to deal.II User Group
There are two charge carriers in each subdomain; each is modeled using a reaction-advection-diffusion equation and solved using an LDG method. Additionally, there is one Poisson equation for the electric field on the merged domain, which is solved using a mixed method.

The fact that I'm solving for densities and fluxes is making the number of DoFs blow up very quickly, so I looked into a non-uniform mesh to reduce the number of DoFs. However, assembling the coupling terms using data between the merged mesh and the sub-meshes is making it tricky to do any sort of non-global refinement.

Wolfgang Bangerth

unread,
Mar 10, 2016, 5:06:47 PM
to dea...@googlegroups.com
On 03/10/2016 09:29 AM, Michael Harmon wrote:
> There are two charge carriers in each subdomain; each is modeled using a
> reaction-advection-diffusion equation and solved using an LDG method.
> Additionally there is one Poisson equation to for the electric field on
> the merged domain which is solved using a mixed method.
>
> The fact Im solving for densities and fluxes is making the number of
> DOFs blow up very quickly and so I looked into a non-uniform mesh to
> reduce the size of the DOFs. However, the assembling coupling terms
> using data between the merged mesh and the sub-meshes is making it
> tricky to do any sort of non-global refinement.

Dealing with multiple meshes seems too complicated in the parallel
context. Since the hp::DoFHandler doesn't yet work on parallel meshes,
using FE_Nothing is also not an option, sadly.

My suggestion would be to create a mesh that has the cells of both of
your subdomains, but where the cells along the interface are not
connected. You can achieve this by creating a mesh by hand, as shown in
step-14, where the vertices along the interface exist twice, at the same
locations but with logically different numbers, and consequently the
mesh looks continuous but really has the equivalent of a discontinuous
crack along the middle.

You can then solve both of the r-a-d equations on the entire mesh. Of
course, you really only care about the solution on one half of the
domain, and will likely have no source and zero boundary values for this
part of the problem on the other half of the domain. The difficulty is
to solve the global Poisson equation on the entire mesh, because that is
now not connected any more between the two halves. To do this, you'll
have to create constraints (just like hanging node constraints) that
ensure that the solution values at the left and right of the interface
are equal.
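
As an illustration only (not from the original message), a brute-force sketch of tying the duplicated interface DoFs together; it assumes the finite element has support points (e.g. continuous Q elements), is quadratic in the number of DoFs, and a real implementation would restrict the search to DoFs on the interface and to the components of the Poisson variable:

#include <deal.II/base/point.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/lac/constraint_matrix.h>
#include <vector>

using namespace dealii;

template <int dim>
void constrain_crack (const DoFHandler<dim> &dof_handler,
                      ConstraintMatrix      &constraints)
{
  // Location of every DoF's support point.
  std::vector<Point<dim> > support_points (dof_handler.n_dofs());
  DoFTools::map_dofs_to_support_points (MappingQ1<dim>(),
                                        dof_handler,
                                        support_points);

  // For every pair of DoFs sitting at (numerically) the same point,
  // constrain one to equal the other, just like a hanging node
  // constraint with a single entry of weight 1.
  for (types::global_dof_index i = 0; i < support_points.size(); ++i)
    for (types::global_dof_index j = i + 1; j < support_points.size(); ++j)
      if (support_points[i].distance (support_points[j]) < 1e-12)
        {
          constraints.add_line (j);
          constraints.add_entry (j, i, 1.0);
        }
}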

I think if you follow this route, you'll be much happier because you
only have to deal with one mesh. I can foresee some difficulties in
parallel because a processor may end up with some fine cells on the left
of the interface without having access to those cells on the right, but
I think that these issues can be handled with appropriate algorithms by
communicating data as necessary -- maybe not the most effective, but
still easier than implementing hp::DoFHandler for the parallel case.

Best

Michael Harmon

unread,
Mar 11, 2016, 11:24:34 AM
to deal.II User Group
Thanks for the reply Dr. Bangerth, I might actually drop the distributed case and stick with your original suggestion.

I have one last question related to this; I can post it as another question if that's more appropriate. My problem is in 2D and I use 5 direct solvers, because each system has fewer than 100k DoFs and I have multiple right-hand sides. Right now 80% of the time is spent in the linear solves, so my question is: do you think using a distributed direct solver in this case would give me any speedup?

Thanks again!

Wolfgang Bangerth

unread,
Mar 11, 2016, 11:29:27 AM
to dea...@googlegroups.com
On 03/11/2016 10:24 AM, Michael Harmon wrote:
>
> I have one last question related to this. I can post it as another
> question if its more appropriate. My problem is in 2D and I use 5
> direct solvers, because the DOFS are less than 100k each and I have
> multiple RHSes. Right now 80% of the time is spend by the linear
> solves, so my question is do you think using a distributed direct solver
> in this case would give me any speed up?

Maybe, but if your linear systems are so small, how much time (in
seconds) do you even spend running your program? If you spend a couple
of hours (optimistic) implementing a feature that reduces your run time
from 1 minute to 30 seconds, you will need to run the program many times
to amortize your investment!

Michael Harmon

unread,
Mar 11, 2016, 11:44:43 AM
to deal.II User Group
The run times are rather long because there is bad scaling between the two subdomains, resulting in a really stiff system of equations with long physical times to steady state. The systems are rather small, but have to be solved many times, maybe hundreds of thousands of times, leading to simulation times of days. I'm a little bit stuck, because at this scale it seems direct solvers are the way to go, but I'm not sure how to get more performance out of the solvers.

I've thought about making threads that solve each of the linear systems, so that four systems can be solved at once, but I'm not sure this will work well, and it definitely won't scale well. :)

Wolfgang Bangerth

unread,
Mar 11, 2016, 11:53:05 AM
to dea...@googlegroups.com
On 03/11/2016 10:44 AM, Michael Harmon wrote:
> The run times are rather long because there is bad-scaling between the
> two subdomains resulting in a really stiff system of equations with long
> physical times to steady state. The systems are rather small, but have
> to solved many times, maybe hundreds of thousands of times, leading to
> simulation times that take days. I'm a little bit stuck, because at
> this scale it seems direct sovlers are the way to go.. but I'm not sure
> how to get more performance out of the solvers.
>
> I've thought about making threads that solve each of the linear systems,
> so that 4 four systems can be solved at once, but I'm not sure this will
> work well, and definitely wont scale well. :)

Yes, tough problem. A factor of 4 would already be helpful. I don't
really have a particularly good suggestion. A direct solver is likely
the fastest option. If the systems are so small, I'm also not sure how
much you can gain by using a parallel direct solver. Maybe another
factor of 2 or 4, but probably not a factor of 100.

If you're just solving for the steady state, why not directly solve for
it instead of using a time iteration?

Cheers

Bruno Turcksin

unread,
Mar 11, 2016, 12:07:15 PM
to deal.II User Group

On Friday, March 11, 2016 at 11:44:43 AM UTC-5, Michael Harmon wrote:
The run times are rather long because there is bad-scaling between the two subdomains resulting in a really stiff system of equations with long physical times to steady state.  The systems are rather small, but have to solved many times, maybe hundreds of thousands of times, leading to simulation times that take days.  I'm a little bit stuck, because at this scale it seems direct sovlers are the way to go.. but I'm not sure how to get more performance out of the solvers. 

If you have a lot of time steps it is possible to parallelize the problem in time (see  https://en.wikipedia.org/wiki/Parareal). This might require a lot of work to implement though.

Best,

Bruno

Michael Harmon

unread,
Mar 11, 2016, 12:15:46 PM
to deal.II User Group
The steady-state equations don't have unique solutions, so I used a time iteration in order to guarantee existence and uniqueness, but am now paying the price. :)

Michael Harmon

unread,
Mar 11, 2016, 12:18:04 PM
to deal.II User Group
Thanks!  I'll look into this!

Wolfgang Bangerth

unread,
Mar 11, 2016, 12:18:57 PM
to dea...@googlegroups.com
On 03/11/2016 11:15 AM, Michael Harmon wrote:
> The steady-state equations dont have a unique solutions, so I used a
> time iteration in order to guarantee existence and uniqueness, but now
> paying the price. :)

Use a time iteration to get close enough to the solution, then continue
on with a Newton iteration to finish it out. Or use an implicit time
integrator, in which case you should be able to make the time step very
large as you get closer to the solution you seek.

Best

Michael Harmon

unread,
Mar 13, 2016, 10:33:03 PM
to deal.II User Group
thanks!

Hamed Babaei

unread,
Sep 26, 2017, 5:19:58 PM
to deal.II User Group
Hi All,
I have used what Sebastian suggested to create a finer mesh within the center of a coarsely meshed square plate.
I solve an elasticity problem in 3D, applying an external in-plane displacement to deform a thin plate with one element in the thickness direction. I also plot the results in the deformed configuration using MappingQEulerian.
However, the problem is that the central part with the fine mesh gets separated from the rest of the domain and gets distorted in a nonphysical manner.

My Triangulation code follows, and pictures of the initial and distorted mesh are attached. I would appreciate it if you could guide me to the reason behind this problem.

Thanks and regards,
Hamed
   
std::vector<unsigned int> repetitions (dim, 20);
if (dim == 3)
  repetitions[dim-3] = 1;

GridGenerator::subdivided_hyper_rectangle (triangulation,
                                           repetitions,
                                           Point<dim>(0.0, 0.0, 0.0),
                                           Point<dim>(0.25, 10.0, 10.0),
                                           true);

typename Triangulation<dim>::active_cell_iterator
  cell = triangulation.begin_active(),
  endc = triangulation.end();
for (; cell != endc; ++cell)
  {
    const Point<dim> cell_center = cell->center();
    const double y_component = cell_center(1);
    const double z_component = cell_center(2);
    if (y_component < 6 && y_component > 4 &&
        z_component < 6 && z_component > 4)
      {
        cell->set_refine_flag ();
      }
  }
triangulation.execute_coarsening_and_refinement ();
initial mesh.tif
distorted mesh.tif

Wolfgang Bangerth

unread,
Sep 26, 2017, 5:31:27 PM
to dea...@googlegroups.com
On 09/26/2017 03:19 PM, Hamed Babaei wrote:
>
> My Triangulation part is followed and the pictures of the initial and
> distorted mesh is attached. I appreciate it if you could guide me to the
> reason behind this problem.

I don't think it's the generation of the mesh, but that it has to do
with either (i) computing the displacement field, or (ii) how the
displacement field is used in deforming the mesh.

Since you have hanging nodes, do you apply hanging node constraints to
the displacement field you then put into MappingQEulerian? And how do
you generate the plot with DataOut?

Best
W.


--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Hamed Babaei

unread,
Sep 26, 2017, 6:14:55 PM
to deal.II User Group
Dear Dr. Bangerth,

Since you have hanging nodes, do you apply hanging node constraints to
the displacement field you then put into MappingQEulerian? 

It turns out that I had not applied the hanging node constraints properly. I just did so and the problem was solved.
Thank you very much for your help.

Best regards,
Hamed  