parallel::distributed::Triangulation with copy_triangulation and create_union_triangulation

Sam Cox

unread,
Mar 12, 2015, 1:34:20 PM3/12/15
to dea...@googlegroups.com
Hi guys,

I'm attempting to create a union triangulation of two parallel::distributed::Triangulation<dim>. I want to make a copy of my current triangulation, then adapt my mesh, then take the union of the two. A MWE is:

#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_boundary_lib.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <fstream>
#include <cmath>

using namespace dealii;
void first_grid ()
{
    Triangulation<2> triangulation;
    Triangulation<2> triangulation2;
    Triangulation<2> triangulation3;

    GridGenerator::hyper_cube (triangulation);
 
    triangulation2.copy_triangulation(triangulation);
    triangulation.refine_global (4);
    GridTools::create_union_triangulation(triangulation,triangulation2,triangulation3);

}

void second_grid ()
{
    parallel::distributed::Triangulation<2> par_triangulation(MPI_COMM_WORLD,
                                                              typename Triangulation<2>::MeshSmoothing
                                                              (Triangulation<2>::smoothing_on_refinement |
                                                               Triangulation<2>::smoothing_on_coarsening));
    parallel::distributed::Triangulation<2> par_triangulation2(MPI_COMM_WORLD,
                                                              typename Triangulation<2>::MeshSmoothing
                                                              (Triangulation<2>::smoothing_on_refinement |
                                                               Triangulation<2>::smoothing_on_coarsening));
    parallel::distributed::Triangulation<2> par_triangulation3(MPI_COMM_WORLD,
                                                               typename Triangulation<2>::MeshSmoothing
                                                               (Triangulation<2>::smoothing_on_refinement |
                                                                Triangulation<2>::smoothing_on_coarsening));

    GridGenerator::hyper_cube (par_triangulation);
    par_triangulation.refine_global (4);
   
    par_triangulation2.copy_triangulation(par_triangulation);
    GridTools::create_union_triangulation(par_triangulation,par_triangulation2,par_triangulation3);
}

int main (int argc, char *argv[])
{
    Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);
    first_grid ();
    second_grid ();
}

first_grid() shows what I'd like to do in the serial case (which works fine); second_grid() shows the parallel case I'm interested in. This MWE fails: I get an EXC_BAD_ACCESS on the par_triangulation2.copy_triangulation() call. If, however, I remove the call to refine_global, it completes. Also, if I move the refine_global call to after the copy_triangulation, I get an EXC_BAD_ACCESS at create_union_triangulation.

Is there a way to do what I want? I'm using v8.1, hence the call to GridTools rather than GridGenerator. I'm happy to upgrade to v8.2 if necessary; I just haven't got round to it yet.
I have three questions:
(1) Is this the best way to generate an 'empty' par_triangulation2 and par_triangulation3, which I can then use in copy_triangulation and create_union_triangulation?
(2) Is there a way to use copy_triangulation with parallel::distributed::Triangulation when I have refined the first mesh?
(3) Will this then work with create_union_triangulation?

I'm aware this may well be beyond what the library can do - the two meshes will be distributed differently, so computing their union when cores don't even own similar areas of the meshes might be tricky. On the other hand, it might all work beautifully under the hood, and I'm just calling the wrong initializers!

Thanks!

PS: parallel::distributed::Triangulation<2> par_triangulation(MPI_COMM_WORLD,
                                                              typename Triangulation<2>::MeshSmoothing
                                                              (Triangulation<2>::smoothing_on_refinement |
                                                               Triangulation<2>::smoothing_on_coarsening));
is the 'emptiest' parallel triangulation I can seem to create - is this correct?

Sam

Timo Heister

unread,
Mar 12, 2015, 3:02:35 PM3/12/15
to dea...@googlegroups.com
Hey Sam,

this is not documented or checked right now, but I assume
create_union_triangulation is not implemented at all for distributed
Triangulations. I'm not sure what the best way to implement it is
either, because parts of the mesh likely exist on different processors
only. :-(

What are you trying to achieve? Maybe there is an easier workaround...



--
Timo Heister
http://www.math.clemson.edu/~heister/

Sam Cox

unread,
Mar 12, 2015, 3:55:50 PM3/12/15
to dea...@googlegroups.com
Hi Timo,

That's my thought entirely - the distribution probably makes it unworkable - I was just hoping there might be some deal.II magic underneath!

My aim is to compute a global error bound. I have one working nicely so far, but it has some nasty terms in it I have yet to implement - if I have meshes \zeta^j and \zeta^{j+1} at timesteps t^j and t^{j+1} respectively, I'd like to build terms on the cells and edges of the union mesh \zeta^j \cup \zeta^{j+1}, for example \sum_{K \in \zeta^j \cup \zeta^{j+1}} ||T^{j+1}-T^j||_K, where K are the cells in the union triangulation and T my FE variable.

My ideal strategy would be to compute the union triangulation, and use SolutionTransfer onto the union triangulation.
My error bound won't really need computing in full production runs of my code, only to show that it works in small cases, so I could always implement this using a standard Triangulation<dim> and turn it off when I want to run in distributed mode. But that wouldn't be ideal, and might take a fair bit of work on my code, which is a few thousand lines.
Any thoughts on how I might progress? Using SolutionTransfer etc. to project the old solution onto the new mesh would probably work for the cell integrals, but I can't see how I could do any edge integrals on the union triangulation if I only have the latest mesh, which may coarsen parts of the old mesh and thus not even contain some of the old mesh's edges.

Thanks,
Sam

Wolfgang Bangerth

unread,
Mar 12, 2015, 4:55:12 PM3/12/15
to dea...@googlegroups.com

> this is not documented or checked right now, but I assume
> create_union_triangulation is not implemented at all for distributed
> Triangulations. I'm not sure what the best way to implement it is
> either, because parts of the mesh likely exist on different processors
> only. :-(

We are missing an assertion. The source triangulations need to be local,
but I think the destination triangulation could be distributed.

I assume that that actually covers the most common cases where someone
wants to assemble a triangulation from different pieces. In that case,
just build the individual pieces as local triangulations.
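For instance, something along these lines is what I have in mind (untested sketch; whether the result argument may really be a parallel::distributed::Triangulation is exactly the part that still needs checking):

Triangulation<2> piece_1, piece_2;
GridGenerator::hyper_cube (piece_1);
piece_2.copy_triangulation (piece_1);   // same coarse mesh, held on every processor
piece_1.refine_global (2);              // refine the two copies differently
piece_2.refine_global (3);              // (adaptively, in practice)

parallel::distributed::Triangulation<2> result (MPI_COMM_WORLD);
GridTools::create_union_triangulation (piece_1, piece_2, result);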

W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@math.tamu.edu
www: http://www.math.tamu.edu/~bangerth/

Timo Heister

unread,
Mar 12, 2015, 5:17:05 PM3/12/15
to dea...@googlegroups.com
I don't have a solution but maybe an idea: if \zeta^{j+1} is created
from \zeta^j by coarsening and refining certain cells, wouldn't a mesh
\tilde{\zeta} that applies only refinements to \zeta^j be the right
space to work in?

Sam Cox

unread,
Mar 12, 2015, 5:56:39 PM3/12/15
to dea...@googlegroups.com
Yes, that would be correct. I could perhaps work from that, but that still relies on copying a distributed mesh which has been refined.

One idea you're giving me is to only do refinements, use the resulting triangulation, then apply the coarsening. This sadly wouldn't work (if it's possible at all) as I need to solve the full system on the new mesh before calculating the errors.

A crazier scheme (which I think logically works, though I doubt deal.II and p4est would oblige) might be:

1. Calculate T^j on \zeta^j
2. Apply only refinement, to give \tilde{\zeta}.
3. interpolate T^j onto \tilde{\zeta}.
4. Apply coarsening to \tilde{\zeta} to give \zeta^{j+1}.
5. Calculate T^{j+1} on \zeta^{j+1}.
6. Undo the coarsening to get back to \tilde{\zeta}.
7. interpolate T^{j+1} onto \tilde{\zeta}.
8. Perform integration for error bound on \tilde{\zeta}.
9. Apply coarsening again to end up at \zeta^{j+1}.

What are the chances of those selective refinements, coarsenings and uncoarsenings being possible? :) Alternatively, the full refine_and_coarsen could be done instead of steps 2-4, with step 3 then applied while on \tilde{\zeta}, before or after step 7.

There might be a way using save/load_coarsen/refine_flags, but I have no experience of these. Are they implemented for parallel::distributed (parallel::distributed seems to inherit these from Triangulation, but they might not actually be implemented), and is an operation like undoing a coarsening achievable, even on a locally-held mesh? (A sketch of the flag-based pattern I have in mind is below.) I've also just spotted copy_new_triangulation_to_p4est and copy_local_forest_to_triangulation - no idea whether these might provide a way in?
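
For concreteness, the flag-based pattern I have in mind is roughly the following (completely untested; tria_new is the mesh being adapted and tria_old is a second triangulation that starts out identical to it):

std::vector<bool> refine_flags, coarsen_flags;

// remember what refine_and_coarsen_fixed_number decided for tria_new
tria_new.save_refine_flags (refine_flags);
tria_new.save_coarsen_flags (coarsen_flags);
tria_new.execute_coarsening_and_refinement ();

// ... solve on tria_new, compute whatever needs both meshes ...

// replay the same adaptation on the old copy so the two meshes agree again
tria_old.load_refine_flags (refine_flags);
tria_old.load_coarsen_flags (coarsen_flags);
tria_old.execute_coarsening_and_refinement ();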

Sam

Timo Heister

unread,
Mar 14, 2015, 4:23:33 PM3/14/15
to dea...@googlegroups.com
One problem is that the coarse/refine flags you set are only hints and
you have no guarantee that this is what will happen exactly.

> There might be a way with using save/load_coarsen/refine_flags, but I have
> no experience of these. Are they implemented for parallel::distributed
> (parallel::distributed seems to inherit these from Triangulation, but they
> might not actually be implemented), and is such an operation as the undoing
> of a coarsening achievable, even on a locally-held mesh?

load/saving of flags should work, as long as you don't do any
refinement in between (because that will cause a repartition).

> I've just spotted
> copy_new_triangulation_to_p4est and copy_local_forest_to_triangulation, no
> idea whether these might provide a way in?

The first one copies the coarse mesh into p4est and the latter
reconstructs the local triangulation by coarsening/refining from the
local part of the p4est.

I think a reasonable strategy might be to work on an implementation
that allows you to copy a distributed Triangulation. That would be
useful to have in deal.II anyways.

Wolfgang Bangerth

unread,
Mar 15, 2015, 7:58:35 AM3/15/15
to dea...@googlegroups.com

> A crazier scheme (which I think logically works, though I doubt deal.II and
> p4est would oblige) might be:
>
> 1. Calculate T^j on \zeta^j
> 2. Apply only refinement, to give \tilde{\zeta}.
> 3. interpolate T^j onto \tilde{\zeta}.
> 4. Apply coarsening to \tilde{\zeta} to give \zeta^{j+1}.
> 5. Calculate T^{j+1} on \zeta^{j+1}.
> 6. Undo the coarsening to get back to \tilde{\zeta}.
> 7. interpolate T^{j+1} onto \tilde{\zeta}.
> 8. Perform integration for error bound on \tilde{\zeta}.
> 9. Apply coarsening again to end up at \zeta^{j+1}.
>
> What are the chances of those selective refinements, coarsenings and
> uncoarsenings being possible? :) Alternatively, the full refine_and_coarsen
> could be done (instead of 2-4), and then step 3 applied when on \tilde{\zeta}
> before or after 7.

This can be implemented for sure, but it's probably not what you want because
you lose information when you coarsen the mesh and transfer a solution onto
it. In other words, coarsening then refining a mesh and taking the solution
along for the ride does not yield the same finite element function on the
final mesh as you started out with. In fact, since refinement is an embedding
operation, what you get on the final mesh is pointwise identical to what you
had on the coarsened mesh.

That is, sometimes, not a big deal, but if you're into estimating errors then
the difference is probably precisely what you care about.
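
In formulas (my notation): if I_c denotes interpolation onto the coarsened mesh and E_r the embedding under the subsequent refinement, then what you end up with is

  E_r ( I_c T^j ) = I_c T^j   (pointwise),

and the piece that has been dropped is T^j - I_c T^j.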


> There might be a way with using save/load_coarsen/refine_flags, but I have no
> experience of these. Are they implemented for parallel::distributed
> (parallel::distributed seems to inherit these from Triangulation, but they
> might not actually be implemented), and is such an operation as the undoing of
> a coarsening achievable, even on a locally-held mesh? I've just spotted
> copy_new_triangulation_to_p4est and copy_local_forest_to_triangulation, no
> idea whether these might provide a way in?

You can save/load flags, but only for the part that is stored locally on each
processor. It is likely of no help to you in your context.

Like Timo, I'm not seeing a simple solution to what you're trying to do --
sorry :-(

Best

Wolfgang Bangerth

unread,
Mar 15, 2015, 7:59:01 AM3/15/15
to dea...@googlegroups.com
On 03/12/2015 02:01 PM, Timo Heister wrote:
> this is not documented or checked right now, but I assume
> create_union_triangulation is not implemented at all for distributed
> Triangulations.

Correct. A patch to check this in the form of an assertion is forthcoming.

Sam Cox

unread,
Mar 17, 2015, 11:53:19 AM3/17/15
to dea...@googlegroups.com
Thanks both of you.

To attempt to delve a little deeper, is there any way to manually control or halt the repartitioning after refinement? The idea would be to go to \zeta^{j+1} as usual except without allowing repartitioning, then undo the coarsening steps to get to \tilde{\zeta}, before reapplying the coarsening and forcing a repartition. Would this be possible?

Alternatively, would it be possible to use the save/load feature as a way of creating a copy? Save to file, then reload onto a blank Triangulation or parallel::distributed::Triangulation with an identical coarse mesh, giving a copy of the mesh without erasing the first? And how reproducible is the partitioning in this context? It's my understanding that p4est takes the cells in order, chops them into equal (+-1) parts, and distributes these parts (with suitable ghost cells) to the processors - how reproducible are the possible +-1 differences in part lengths, and the order in which parts are assigned to processors?
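
In code, the copy-via-checkpoint idea would be something like this (untested; par_tria is the existing mesh and par_tria_copy a second parallel::distributed::Triangulation built on the same communicator):

par_tria.save ("mesh.checkpoint");

GridGenerator::hyper_cube (par_tria_copy);   // recreate the identical coarse mesh
par_tria_copy.load ("mesh.checkpoint");      // hopefully reproduces the refinement
                                             // and partitioning of par_tria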

This of course would leave me in the position of needing to implement create_union_triangulation in parallel!

Timo Heister

unread,
Mar 20, 2015, 1:35:23 PM3/20/15
to dea...@googlegroups.com
> To attempt to delve a little deeper, is there any way to manually control or
> halt the repartitioning after refinement?

Good idea. This is not implemented yet, but I was thinking of adding
this for a different problem anyways. I will get back to you.
https://github.com/dealii/dealii/issues/673

Wolfgang Bangerth

unread,
Mar 21, 2015, 9:48:02 PM3/21/15
to dea...@googlegroups.com

Sam,

> Alternatively, would it be possible to use the save/load feature as a way of
> creating a copy? Save to file, then reload onto a blank Triangulation or
> parallel::distributed::Triangulation with identical coarse mesh, giving a copy
> of the mesh, without erasing the first? And how reproducible is the
> partitioning in this context?

This should create the exact same triangulation again, i.e., do what
copy_triangulation should be doing.

It's of course rather inefficient to do it this way. I think it shouldn't be
terribly difficult to implement a copy_triangulation function, though.


> It's my understanding that p4est takes the cells
> in order, chops these into equal (+-1) parts, and distributes these parts
> (with suitable ghost cells) to processors - how reproducible is the possible
> +-1 inequality in lengths, and the order of processors distributed to?

It's a deterministic algorithm that produces the same result every time.

Timo Heister

unread,
Mar 22, 2015, 9:14:59 AM3/22/15
to dea...@googlegroups.com
I just hacked something up: https://github.com/dealii/dealii/pull/678

Can you play with this and report back?

Sam Cox

unread,
Mar 27, 2015, 7:19:57 AM3/27/15
to dea...@googlegroups.com
Thanks Timo, Wolfgang,

that change looks to be working so far, Timo, though I am a little stuck on how to update my DoFHandler for the second mesh after copying. An MWE:

#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/dofs/dof_tools.h>

#include <cmath>
#include <iostream>

using namespace dealii;

int main (int argc, char *argv[])
{
    Triangulation<2> triangulation;
    Triangulation<2> triangulation2;
   
    DoFHandler<2>   dof_handler1 (triangulation);
    DoFHandler<2>   dof_handler2 (triangulation2);
    FE_DGQ<2>       fe (2);
   
    GridGenerator::hyper_cube (triangulation);
   
    dof_handler1.distribute_dofs(fe);
   
    triangulation2.copy_triangulation(triangulation);
    dof_handler2.distribute_dofs(fe);

    triangulation2.copy_triangulation(triangulation);
//    dof_handler2.clear();
//    dof_handler2.initialize(triangulation2,fe);
    dof_handler2.distribute_dofs(fe); //EXC_BAD_ACCESS ERROR

    std::cout<<"Complete"<<std::endl;
}


dof_handler2 is associated with triangulation2. The call to distribute_dofs(fe) works the first time, but if I change triangulation2 again by copying triangulation again, I get an EXC_BAD_ACCESS error on the commented line when I try to redistribute the dofs. Is this to do with triangulation2 having changed entirely, rather than through a process of coarsenings and refinements? If I put in the two commented lines, I get a similar error on the .initialize(). (I'm guessing clear() doesn't remove the association with triangulation2, just gets rid of all other data?)

What am I doing wrong here? Do I need to destroy either triangulation2 or dof_handler2 in between, or is there another way?

Bests,
Sam

Timo Heister

unread,
Mar 27, 2015, 7:59:32 AM3/27/15
to dea...@googlegroups.com
You are not running in debug mode, are you? I am getting the exception

An error occurred in line <8939> of file
</ssd/deal-git/source/grid/tria.cc> in function
void dealii::Triangulation<dim,
spacedim>::copy_triangulation(const dealii::Triangulation<dim,
spacedim>&) [with int dim = 2, int spacedim = 2]
The violated condition was:
(vertices.size() == 0) && (levels.size () == 0) && (faces == NULL)
The name and call sequence of the exception was:
ExcTriangulationNotEmpty(vertices.size(), levels.size())
Additional Information:
You are trying to perform an operation on a triangulation that is only
allowed if the triangulation is currently empty. However, it currently
stores 4 vertices and has cells on 1 levels.
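
In other words, the triangulation you copy into has to be empty. The simplest workaround is probably to give the copy and its DoFHandler a limited lifetime, roughly like this (untested sketch):

{
    Triangulation<2> tria_copy;
    tria_copy.copy_triangulation (triangulation);  // triangulation may be refined

    DoFHandler<2> dof_copy (tria_copy);
    dof_copy.distribute_dofs (fe);

    // ... use tria_copy / dof_copy ...
}   // both are destroyed here, so the next cycle starts from scratch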



Sam Cox

unread,
Mar 27, 2015, 11:33:31 AM3/27/15
to dea...@googlegroups.com
Ah, my mistake! I'm in Xcode, and my setup with 8.3pre must not be quite right - my debug build doesn't trigger this assertion!

However, I've run it on the command line and it seems to work perfectly. One slight question is whether the Triangulation constructor should call repartition(). Currently,
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_boundary_lib.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/distributed/solution_transfer.h>
#include <fstream>

#include <cmath>
#include <iostream>

using namespace dealii;

void check_partitioning(unsigned int n_T, DoFHandler<2> &dof_handler)
{
    IndexSet partitioning (n_T), relevant_partitioning (n_T);
    partitioning = dof_handler.locally_owned_dofs();
    DoFTools::extract_locally_relevant_dofs (dof_handler,
                                             relevant_partitioning);
    std::cout << "part";
    partitioning.print(std::cout);
    std::cout << "rel_part";
    relevant_partitioning.print(std::cout);

}

void third_grid ()
{
    parallel::distributed::Triangulation<2>::Settings settings = parallel::distributed::Triangulation<2,2>::no_automatic_repartitioning;
//    parallel::distributed::Triangulation<2>::Settings settings = parallel::distributed::Triangulation<2,2>::default_setting;

    parallel::distributed::Triangulation<2> par_triangulation(MPI_COMM_WORLD,
                                                              typename Triangulation<2>::MeshSmoothing
                                                              (Triangulation<2>::smoothing_on_refinement |
                                                               Triangulation<2>::smoothing_on_coarsening),
                                                              settings);


    GridGenerator::hyper_cube (par_triangulation);
   
    par_triangulation.refine_global (4);
   
    FE_DGQ<2>     fe (2);
    DoFHandler<2> dof_handler (par_triangulation);
    dof_handler.distribute_dofs (fe);
    unsigned int n_T = dof_handler.n_dofs();
    check_partitioning(n_T, dof_handler);
    par_triangulation.repartition();
    dof_handler.distribute_dofs (fe);
    check_partitioning(n_T, dof_handler);

}

int main (int argc, char *argv[])
{
    Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);
    third_grid ();
    return 0;
}

Sam Cox

unread,
Mar 27, 2015, 11:42:41 AM3/27/15
to dea...@googlegroups.com
Overeager on the submit button there!

I've also discovered that what I was worried about, and was in the middle of pointing out, doesn't happen anyway - it all looks great!

Sam

Timo Heister

unread,
Mar 28, 2015, 5:46:07 PM3/28/15
to dea...@googlegroups.com
Sam,

I am sorry, I don't understand your question, can you rephrase please?

Sam Cox

unread,
Mar 28, 2015, 7:20:07 PM3/28/15
to dea...@googlegroups.com
Don't worry Timo - I was in the middle of proposing a possible fix, but accidentally pressed 'submit' too soon. While finishing my message afterwards, I realised I had misunderstood the output from the MWE I was giving, and that the problem I thought needed fixing didn't actually exist. So no action required - it all looks to be working as I'd expect.

I will come back to this thread as I work through it, particularly regarding copying a parallel triangulation, but I'm off on vacation for a week, so I'll have to pick this up when I return!

Sam

Sam Cox

unread,
Apr 15, 2015, 11:52:38 AM4/15/15
to dea...@googlegroups.com
Hi both,

I've moved on in my thinking a fair way, but have come across another sticking point. Can I ask - is VectorTools::interpolate_to_different_mesh meant to work for parallel::distributed::Triangulation? I have an example that works fine for a p::d::Triangulation on one process, but breaks on two or more processes. My example (sorry that it is not very minimal-looking, but it isn't far off!):


#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_boundary_lib.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/compressed_simple_sparsity_pattern.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/distributed/solution_transfer.h>
#include <fstream>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/timer.h>
#include <cmath>
#include <deal.II/lac/generic_linear_algebra.h>
#include <deal.II/lac/sparsity_tools.h>

template <int dim>
class SeventhProblem
{
public:
    SeventhProblem (unsigned int prob_number);
    ~SeventhProblem ();
    void run (unsigned int cycle);
private:
    void setup_system ();
    void setup_second_system ();
    void assemble_system ();
    void solve ();
    MPI_Comm mpi_communicator;
    parallel::distributed::Triangulation<2>::Settings settings;
    parallel::distributed::Triangulation<dim> triangulation;
    DoFHandler<dim> dof_handler;
    FE_Q<dim> fe;
    IndexSet locally_owned_dofs;
    IndexSet locally_relevant_dofs;
    ConstraintMatrix constraints;
    TrilinosWrappers::SparseMatrix system_matrix;
    TrilinosWrappers::MPI::Vector locally_relevant_solution;
    TrilinosWrappers::MPI::Vector interpolated_locally_relevant_solution;
    TrilinosWrappers::MPI::Vector system_rhs;
    parallel::distributed::Triangulation<dim> second_triangulation;
    DoFHandler<dim> second_dof_handler;
    FE_Q<dim> second_fe;
    IndexSet second_locally_owned_dofs;
    IndexSet second_locally_relevant_dofs;
    TrilinosWrappers::MPI::Vector second_locally_relevant_solution;
    ConditionalOStream pcout;
    TimerOutput computing_timer;
    unsigned int prob_number;
    flag_struct flags;
};
template <int dim>
SeventhProblem<dim>::SeventhProblem (unsigned int prob_number)
:
mpi_communicator (MPI_COMM_WORLD),
settings (parallel::distributed::Triangulation<2,2>::no_automatic_repartitioning),
triangulation (mpi_communicator,
               typename Triangulation<dim>::MeshSmoothing
               (Triangulation<dim>::smoothing_on_refinement |
                Triangulation<dim>::smoothing_on_coarsening),
               settings),
dof_handler (triangulation),
fe (2),
second_triangulation (mpi_communicator,
                      typename Triangulation<dim>::MeshSmoothing
                      (Triangulation<dim>::smoothing_on_refinement |
                       Triangulation<dim>::smoothing_on_coarsening),
                      settings),
second_dof_handler (second_triangulation),
second_fe (2),
pcout (std::cout,
       (Utilities::MPI::this_mpi_process(mpi_communicator)
        == 0)),
computing_timer (pcout,
                 TimerOutput::summary,
                 TimerOutput::wall_times),
prob_number(prob_number),
flags (flags)
{}

template <int dim>
SeventhProblem<dim>::~SeventhProblem ()
{
    dof_handler.clear ();
    second_dof_handler.clear ();
}

template <int dim>
void SeventhProblem<dim>::setup_system ()
{
    TimerOutput::Scope t(computing_timer, "setup");
    dof_handler.distribute_dofs (fe);
    locally_owned_dofs = dof_handler.locally_owned_dofs ();
    DoFTools::extract_locally_relevant_dofs (dof_handler,
                                             locally_relevant_dofs);
    locally_relevant_solution.reinit (locally_owned_dofs,
                                      locally_relevant_dofs, mpi_communicator);
    system_rhs.reinit (locally_owned_dofs, mpi_communicator);
    system_rhs = 0;
    constraints.clear ();
    constraints.reinit (locally_relevant_dofs);
    DoFTools::make_hanging_node_constraints (dof_handler, constraints);
    VectorTools::interpolate_boundary_values (dof_handler,
                                              0,
                                              ZeroFunction<dim>(),
                                              constraints);
    constraints.close ();
    CompressedSimpleSparsityPattern csp (locally_relevant_dofs);
    DoFTools::make_sparsity_pattern (dof_handler, csp,
                                     constraints, false);
    SparsityTools::distribute_sparsity_pattern (csp,
                                                dof_handler.n_locally_owned_dofs_per_processor(),
                                                mpi_communicator,
                                                locally_relevant_dofs);
    system_matrix.reinit (locally_owned_dofs,
                          locally_owned_dofs,
                          csp,
                          mpi_communicator);
}

template <int dim>
void SeventhProblem<dim>::setup_second_system ()
{
    TimerOutput::Scope t(computing_timer, "setup");
    second_dof_handler.distribute_dofs (fe);
    second_locally_owned_dofs = second_dof_handler.locally_owned_dofs ();
    DoFTools::extract_locally_relevant_dofs (second_dof_handler,
                                             second_locally_relevant_dofs);
    second_locally_relevant_solution.reinit (second_locally_owned_dofs,
                                             second_locally_relevant_dofs, mpi_communicator);
    interpolated_locally_relevant_solution.reinit(second_locally_owned_dofs,
                                                  second_locally_relevant_dofs, mpi_communicator);
}

template <int dim>
void SeventhProblem<dim>::assemble_system ()
{
    TimerOutput::Scope t(computing_timer, "assembly");
    const QGauss<dim> quadrature_formula(3);
    FEValues<dim> fe_values (fe, quadrature_formula,
                             update_values | update_gradients |
                             update_quadrature_points |
                             update_JxW_values);
    const unsigned int dofs_per_cell = fe.dofs_per_cell;
    const unsigned int n_q_points = quadrature_formula.size();
    FullMatrix<double> cell_matrix (dofs_per_cell, dofs_per_cell);
    Vector<double> cell_rhs (dofs_per_cell);
    std::vector<types::global_dof_index> local_dof_indices (dofs_per_cell);
    typename DoFHandler<dim>::active_cell_iterator
    cell = dof_handler.begin_active(),
    endc = dof_handler.end();
    for (; cell!=endc; ++cell)
        if (cell->is_locally_owned())
        {
            cell_matrix = 0;
            cell_rhs = 0;
            fe_values.reinit (cell);
            for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
            {
                const double
                rhs_value
                = (fe_values.quadrature_point(q_point)[1]
                   >
                   0.5+0.25*std::sin(4.0 * numbers::PI *
                                     fe_values.quadrature_point(q_point)[0])
                   ? 1 : -1);
                for (unsigned int i=0; i<dofs_per_cell; ++i)
                {
                    for (unsigned int j=0; j<dofs_per_cell; ++j)
                        cell_matrix(i,j) += (fe_values.shape_grad(i,q_point) *
                                             fe_values.shape_grad(j,q_point) *
                                             fe_values.JxW(q_point));
                    cell_rhs(i) += (rhs_value *
                                    fe_values.shape_value(i,q_point) *
                                    fe_values.JxW(q_point));
                }
            }
            cell->get_dof_indices (local_dof_indices);
            constraints.distribute_local_to_global (cell_matrix,
                                                    cell_rhs,
                                                    local_dof_indices,
                                                    system_matrix,
                                                    system_rhs);
        }
    system_matrix.compress (VectorOperation::add);
    system_rhs.compress (VectorOperation::add);
}
template <int dim>
void SeventhProblem<dim>::solve ()
{
    TimerOutput::Scope t(computing_timer, "solve");
    LA::MPI::Vector
    completely_distributed_solution (locally_owned_dofs, mpi_communicator);
    SolverControl solver_control (dof_handler.n_dofs(), 1e-12);
    TrilinosWrappers::SolverCG solver(solver_control, mpi_communicator);
    TrilinosWrappers::PreconditionAMG preconditioner;
    TrilinosWrappers::PreconditionAMG::AdditionalData data;
    preconditioner.initialize(system_matrix, data);
    solver.solve (system_matrix, completely_distributed_solution, system_rhs,
                  preconditioner);
    pcout << " Solved in " << solver_control.last_step()
    << " iterations." << std::endl;
    constraints.distribute (completely_distributed_solution);
    locally_relevant_solution = completely_distributed_solution;
}

template <int dim>
void SeventhProblem<dim>::run (unsigned int cycle)
{
    if (cycle == 0)
    {
        GridGenerator::hyper_cube (triangulation);
        triangulation.refine_global (1);
        GridGenerator::hyper_cube (second_triangulation);
        second_triangulation.refine_global (1);
        setup_system();
    }
    else
    {
        flag_struct flags_blank;
       
        Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
        KellyErrorEstimator<dim>::estimate (dof_handler,
                                            QGauss<dim-1>(3),
                                            typename FunctionMap<dim>::type(),
                                            locally_relevant_solution,
                                            estimated_error_per_cell);
       
        parallel::distributed::GridRefinement::
        refine_and_coarsen_fixed_number (triangulation,
                                         estimated_error_per_cell,
                                         0.5, 0.3);
        flags.r_flags.clear();
        flags.c_flags.clear();
        triangulation.save_refine_flags(flags.r_flags);
        triangulation.save_coarsen_flags(flags.c_flags);
       
        triangulation.prepare_coarsening_and_refinement();
       
        triangulation.execute_coarsening_and_refinement ();
       
        setup_system();
        pcout << " Number of active cells: "
        << triangulation.n_global_active_cells()
        << std::endl
        << " Number of degrees of freedom: "
        << dof_handler.n_dofs()
        << std::endl;
        assemble_system ();
        solve ();
       
        setup_second_system();
        second_locally_relevant_solution = locally_relevant_solution;

        VectorTools::interpolate_to_different_mesh(dof_handler, locally_relevant_solution,
                                                   second_dof_handler, interpolated_locally_relevant_solution);
        second_triangulation.load_coarsen_flags(flags.c_flags);
        second_triangulation.load_refine_flags(flags.r_flags);
        second_triangulation.execute_coarsening_and_refinement();
    }
    pcout << std::endl;
}

void seventh_grid()
{
   
    ConditionalOStream           pcout(std::cout,
                                       (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)
                                        == 0));
   
    pcout << "7th Starting" << std::endl;
    SeventhProblem<2> lap(1);
    const unsigned int n_cycles = 5;
    for (unsigned int cycle=0; cycle<n_cycles; ++cycle)
    {
        pcout << "Cycle " << cycle << ':' << std::endl;
        lap.run(cycle);
    }
    std::cout << "7th Complete" << std::endl;

}

int main (int argc, char *argv[])
{
    Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);
    ConditionalOStream           pcout(std::cout,
                                       (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)
                                        == 0));
    deallog.depth_console (0);

    seventh_grid ();
    pcout<<"Complete"<<std::endl;
}

A rough outline: on cycle 0, set up triangulation and second_triangulation, and solve on triangulation.
for cycle 1..n,
     coarsen and refine triangulation. Solve on this new mesh. Then interpolate backwards to the previous mesh, which is held in second_triangulation.
     coarsen and refine second_triangulation to now match triangulation.
end for

This works for me (8.3.pre) on a single process, but on two processes gives me

7th Starting
Cycle 0:

Cycle 1:
 Number of active cells: 4
 Number of degrees of freedom: 25
 Solved in 1 iterations.
libc++abi.dylib: terminating with uncaught exception of type int
libc++abi.dylib: terminating with uncaught exception of type int
[host-119-96:18807] *** Process received signal ***
[host-119-96:18806] *** Process received signal ***
[host-119-96:18806] Signal: Abort trap: 6 (6)
[host-119-96:18806] Signal code:  (0)
[host-119-96:18807] Signal: Abort trap: 6 (6)
[host-119-96:18807] Signal code:  (0)
[host-119-96:18807] [ 0] 2   libsystem_platform.dylib            0x00007fff8f57ff1a _sigtramp + 26
[host-119-96:18807] [ 1] 3   ???                                 0x00007fff5fbfc658 0x0 + 140734799791704
[host-119-96:18807] [ 2] 4   libsystem_c.dylib                   0x00007fff8e6adb53 abort + 129
[host-119-96:18807] [ 3] 5   libc++abi.dylib                     0x00007fff8ac48a21 __cxa_bad_cast + 0
[host-119-96:18807] [ 4] 6   libc++abi.dylib                     0x00007fff8ac709d1 _ZL26default_unexpected_handlerv + 0
[host-119-96:18807] [ 5] 7   libobjc.A.dylib                     0x00007fff847457eb _ZL15_objc_terminatev + 124
[host-119-96:18807] [ 6] 8   libc++abi.dylib                     0x00007fff8ac6e0a1 _ZSt11__terminatePFvvE + 8
[host-119-96:18807] [ 7] 9   libc++abi.dylib                     0x00007fff8ac6db30 _ZN10__cxxabiv1L22exception_cleanup_funcE19_Unwind_Reason_CodeP17_Unwind_Exception + 0
[host-119-96:18807] [ 8] 10  libepetra.11.dylib                  0x0000000105853d15 _ZN15Epetra_BlockMap21ConstructUserVariableIiEEvT_iPKS1_PKiS1_RK11Epetra_Commb + 3845
[host-119-96:18807] [ 9] 11  libepetra.11.dylib                  0x000000010584c49f _ZN15Epetra_BlockMapC1EiiPKiS1_iRK11Epetra_Comm + 111
[host-119-96:18807] [10] 12  libepetra.11.dylib                  0x00000001058b6452 _ZN15Epetra_FEVector28createNonlocalMapAndExporterIiEEvv + 178
[host-119-96:18807] [11] 13  libepetra.11.dylib                  0x00000001058b5953 _ZN15Epetra_FEVector14GlobalAssembleIiEEi18Epetra_CombineModeb + 99
[host-119-96:18807] [12] 14  libepetra.11.dylib                  0x00000001058b4a6a _ZN15Epetra_FEVector14GlobalAssembleE18Epetra_CombineModeb + 58
[host-119-96:18807] [13] 15  libdeal_II.8.3.pre.dylib            0x000000010259e578 _ZN6dealii16TrilinosWrappers10VectorBase8compressENS_15VectorOperation6valuesE + 168
[host-119-96:18807] [14] 16  libdeal_II.8.3.pre.dylib            0x000000010230a3f1 _ZNK6dealii16ConstraintMatrix10distributeINS_16TrilinosWrappers3MPI6VectorEEEvRT_ + 1409
[host-119-96:18807] [15] 17  libdeal_II.8.3.pre.dylib            0x000000010123277b _ZN6dealii11VectorTools29interpolate_to_different_meshILi2ELi2ENS_10DoFHandlerENS_16TrilinosWrappers3MPI6VectorEEEvRKNS_12InterGridMapIT1_IXT_EXT0_EEEERKT2_RKNS_16ConstraintMatrixERSB_ + 667
[host-119-96:18807] [16] 18  libdeal_II.8.3.pre.dylib            0x0000000101231ffa _ZN6dealii11VectorTools29interpolate_to_different_meshILi2ELi2ENS_10DoFHandlerENS_16TrilinosWrappers3MPI6VectorEEEvRKT1_IXT_EXT0_EERKT2_S8_RS9_ + 314
[host-119-96:18807] [17] 19  step-32                             0x0000000100005119 _ZN14SeventhProblemILi2EE3runEj + 1193
[host-119-96:18807] [18] 20  step-32                             0x0000000100002a69 _Z12seventh_gridv + 329
[host-119-96:18807] [19] 21  step-32                             0x0000000100002bbc main + 124
[host-119-96:18807] [20] 22  libdyld.dylib                       0x00007fff83ac75c9 start + 1
[host-119-96:18807] [21] 23  ???                                 0x0000000000000001 0x0 + 1
[host-119-96:18807] *** End of error message ***
[host-119-96:18806] [ 0] 2   libsystem_platform.dylib            0x00007fff8f57ff1a _sigtramp + 26
[host-119-96:18806] [ 1] 3   ???                                 0x00007fff5fbfc658 0x0 + 140734799791704
[host-119-96:18806] [ 2] 4   libsystem_c.dylib                   0x00007fff8e6adb53 abort + 129
[host-119-96:18806] [ 3] 5   libc++abi.dylib                     0x00007fff8ac48a21 __cxa_bad_cast + 0
[host-119-96:18806] [ 4] 6   libc++abi.dylib                     0x00007fff8ac709d1 _ZL26default_unexpected_handlerv + 0
[host-119-96:18806] [ 5] 7   libobjc.A.dylib                     0x00007fff847457eb _ZL15_objc_terminatev + 124
[host-119-96:18806] [ 6] 8   libc++abi.dylib                     0x00007fff8ac6e0a1 _ZSt11__terminatePFvvE + 8
[host-119-96:18806] [ 7] 9   libc++abi.dylib                     0x00007fff8ac6db30 _ZN10__cxxabiv1L22exception_cleanup_funcE19_Unwind_Reason_CodeP17_Unwind_Exception + 0
[host-119-96:18806] [ 8] 10  libepetra.11.dylib                  0x0000000105853d15 _ZN15Epetra_BlockMap21ConstructUserVariableIiEEvT_iPKS1_PKiS1_RK11Epetra_Commb + 3845
[host-119-96:18806] [ 9] 11  libepetra.11.dylib                  0x000000010584c49f _ZN15Epetra_BlockMapC1EiiPKiS1_iRK11Epetra_Comm + 111
[host-119-96:18806] [10] 12  libepetra.11.dylib                  0x00000001058b6452 _ZN15Epetra_FEVector28createNonlocalMapAndExporterIiEEvv + 178
[host-119-96:18806] [11] 13  libepetra.11.dylib                  0x00000001058b5953 _ZN15Epetra_FEVector14GlobalAssembleIiEEi18Epetra_CombineModeb + 99
[host-119-96:18806] [12] 14  libepetra.11.dylib                  0x00000001058b4a6a _ZN15Epetra_FEVector14GlobalAssembleE18Epetra_CombineModeb + 58
[host-119-96:18806] [13] 15  libdeal_II.8.3.pre.dylib            0x000000010259e578 _ZN6dealii16TrilinosWrappers10VectorBase8compressENS_15VectorOperation6valuesE + 168
[host-119-96:18806] [14] 16  libdeal_II.8.3.pre.dylib            0x000000010230a3f1 _ZNK6dealii16ConstraintMatrix10distributeINS_16TrilinosWrappers3MPI6VectorEEEvRT_ + 1409
[host-119-96:18806] [15] 17  libdeal_II.8.3.pre.dylib            0x000000010123277b _ZN6dealii11VectorTools29interpolate_to_different_meshILi2ELi2ENS_10DoFHandlerENS_16TrilinosWrappers3MPI6VectorEEEvRKNS_12InterGridMapIT1_IXT_EXT0_EEEERKT2_RKNS_16ConstraintMatrixERSB_ + 667
[host-119-96:18806] [16] 18  libdeal_II.8.3.pre.dylib            0x0000000101231ffa _ZN6dealii11VectorTools29interpolate_to_different_meshILi2ELi2ENS_10DoFHandlerENS_16TrilinosWrappers3MPI6VectorEEEvRKT1_IXT_EXT0_EERKT2_S8_RS9_ + 314
[host-119-96:18806] [17] 19  step-32                             0x0000000100005119 _ZN14SeventhProblemILi2EE3runEj + 1193
[host-119-96:18806] [18] 20  step-32                             0x0000000100002a69 _Z12seventh_gridv + 329
[host-119-96:18806] [19] 21  step-32                             0x0000000100002bbc main + 124
[host-119-96:18806] [20] 22  libdyld.dylib                       0x00007fff83ac75c9 start + 1
[host-119-96:18806] [21] 23  ???                                 0x0000000000000001 0x0 + 1
[host-119-96:18806] *** End of error message ***
--------------------------------------------------------------------------
orterun noticed that process rank 0 with PID 18806 on node host-119-96.eduroam-local.wifi.le.ac.uk exited on signal 6 (Abort trap: 6).
--------------------------------------------------------------------------

Hence my question about whether interpolate_to_different_mesh() will work with p::d::Triangulation on multiple processes!

Thanks,
Sam

Timo Heister

unread,
Apr 15, 2015, 4:06:43 PM4/15/15
to dea...@googlegroups.com
> sticking point. Can I ask - is VectorTools::interpolate_to_different_mesh
> meant to work for parallel::distributed::Triangulation?

No. I doubt anybody checked it yet, sorry. It might be easy to
implement if you have the same partitioning (I see that you play with
that in the example), but it probably needs some minor fixes.

Sam Cox

unread,
Apr 17, 2015, 11:24:02 AM4/17/15
to dea...@googlegroups.com
Would you be able to advise what might be going on? I've looked at the source code, and the error is thrown at the dummy.distribute() line in interpolate_to_different_mesh(). I've discovered that removing the cell2->set_dof_values_by_interpolation(cache,u2) line (instead of the .distribute()) allows the program to proceed, so I suspect this is where the problem lies. I don't understand enough of the internals to get very far on this.

I've also looked into using FEFieldFunction + interpolate, but I couldn't get this to work in parallel either. Which of these (or anything else) is best when I have a second mesh (mesh2) and want to interpolate from mesh1 to mesh2 without changing mesh1? SolutionTransfer doesn't seem to fit this criterion.
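
For reference, the FEFieldFunction attempt looked roughly like this (a sketch using the names from the MWE above; it also needs <deal.II/numerics/fe_field_function.h>):

// wrap the solution on the first mesh as a Function object
Functions::FEFieldFunction<2, DoFHandler<2>, TrilinosWrappers::MPI::Vector>
  solution_function (dof_handler, locally_relevant_solution);

// interpolate it onto the second mesh (writable, non-ghosted target vector)
TrilinosWrappers::MPI::Vector interpolated (second_locally_owned_dofs,
                                            mpi_communicator);
VectorTools::interpolate (second_dof_handler, solution_function, interpolated);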

Sam

Timo Heister

unread,
Apr 17, 2015, 12:33:14 PM4/17/15
to dea...@googlegroups.com
Sam,

your code won't compile for me (flag_struct, namespaces, LA, etc.).

I think interpolate_to_different_mesh will work for the case that the
parallel distribution of both meshes is the same using this branch:

https://github.com/tjhei/dealii/tree/par_interp_diff_mesh

Can you test and report back?



Sam Cox

unread,
Apr 20, 2015, 5:26:28 AM4/20/15
to dea...@googlegroups.com
Thanks Timo. Apologies for the slightly shoddy MWE. Yes, that seems to be exactly what's needed - it works fine for me!

Thanks again,

Sam

Timo Heister

unread,
Apr 21, 2015, 1:58:01 PM4/21/15
to dea...@googlegroups.com