FETools::interpolate with distributed vectors

Daniel Abele

Aug 20, 2025, 4:12:05 PM
to deal.II User Group
Hi,
I'm writing a parallel DG solver using deal.II version 9.6.2. 
I would like to generate a continuous representation of the final state. I tried to do this by interpolating from `FE_DGQ` (of any degree) to `FE_Q(1)` using `FETools::interpolate`. This works as long as there is only one MPI process. With more than one process, I get significant errors at the borders between process subdomains.

I attached a minimal example of my code. I create one DoFHandler each for the DGQ and Q elements. For illustration, I create a made-up discontinuous solution using the DGQ DoFHandler; normally this would be the output of my solver. Then I interpolate to FE_Q(1) by first creating a ghosted vector from the discontinuous solution and a non-ghosted result vector for the continuous solution, and calling `FETools::interpolate` with them. I write the discontinuous and continuous solutions to VTK files.

I read somewhere (can't find it now, but it makes sense anyway) that the input vector of `interpolate` needs to be ghosted and the output vector non-ghosted. But the ghosts don't seem to make a difference. Normally I would use the ghost indices from `DoFTools::extract_locally_relevant_dofs`. I tried adding ALL indices (`add_range(0, n_dofs)`) as ghosts, or no ghosts at all (an empty index set), and got the same numerical errors. I also tried calling `Vector::update_ghost_values` manually, but I think `Vector::operator=` does that implicitly anyway?
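
For reference, a stripped-down sketch of the setup described above (this is not the attached main.cpp; the mesh, element degrees, and variable names are placeholders, and it assumes deal.II 9.6 built with p4est and `LinearAlgebra::distributed::Vector`):
```
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_tools.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/la_parallel_vector.h>

int main(int argc, char** argv)
{
    using namespace dealii;
    Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

    constexpr int dim = 2;
    parallel::distributed::Triangulation<dim> tria(MPI_COMM_WORLD);
    GridGenerator::hyper_cube(tria);
    tria.refine_global(4);

    const FE_DGQ<dim> fe_dg(2);   // discontinuous source space
    const FE_Q<dim>   fe_cont(1); // continuous target space
    DoFHandler<dim> dof_dg(tria), dof_cont(tria);
    dof_dg.distribute_dofs(fe_dg);
    dof_cont.distribute_dofs(fe_cont);

    using VectorType = LinearAlgebra::distributed::Vector<double>;

    // non-ghosted DG solution; in the real program this is the solver output
    VectorType u_dg(dof_dg.locally_owned_dofs(), MPI_COMM_WORLD);

    // ghosted copy of the input vector for FETools::interpolate
    VectorType u_dg_ghosted(dof_dg.locally_owned_dofs(),
                            DoFTools::extract_locally_relevant_dofs(dof_dg),
                            MPI_COMM_WORLD);
    u_dg_ghosted.copy_locally_owned_data_from(u_dg); // copy owned values
    u_dg_ghosted.update_ghost_values();              // make ghost entries valid

    // non-ghosted (writable) output vector for the continuous solution
    VectorType u_cont(dof_cont.locally_owned_dofs(), MPI_COMM_WORLD);

    FETools::interpolate(dof_dg, u_dg_ghosted, dof_cont, u_cont);
}
```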

I plotted the continuous and discontinuous solutions in ParaView (using the Warp By Scalar filter). With one process, the result is a perfect straight line through the cell centers. With two or more processes (pictured below: two processes) there is a kink in the middle where the processes meet.
interpolation_n2.png

Am I doing something wrong? Is there a different/better way to achieve what I need?

Thanks,
Daniel
main.cpp

Wolfgang Bangerth

Aug 26, 2025, 7:35:30 PM
to dea...@googlegroups.com

Daniel:
> interpolation_n2.png
>
> Am I doing something wrong? Is there a different/better way to achieve
> what I need?

I haven't explored this in detail, but I think what you see is
ultimately a result of the fact that what you are doing is not a
well-defined problem (though I admit that it could be done better).

At its core, you are interpolating onto a continuous finite element
space that has its nodes at exactly those locations where the function
you are trying to interpolate is discontinuous. The algorithm behind
FETools::interpolate() has to somehow choose which of the possible
values of the input function it wants to assign to each output DoF.
The way it does this is that it loops over all locally owned cells
https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L146-L148
and for each DoF on these cells, if the DoF is locally owned
https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L202-L206
it adds the value of the input function to the output vector (also
keeping track of how often it added something to the output vector):
https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L208-L212

In your case, this means that for every DoF in the interior of a
processor's subdomain, we write into the output vector more than once
and then we take the average of all of these values:
https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L234-L249
This makes sure that the output vector has *averages* of the inputs at
node positions. But for DoFs that lie at the interfaces between
processes, we only get contributions from the MPI process that owns the
DoF, and so you're not averaging over the contributions from all
adjacent cells but only those adjacent cells owned by that process (see
line 206).
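
In simplified form (this is only a sketch of the logic described above,
not deal.II's actual implementation; error handling, compress() calls,
and constraint handling are omitted), the algorithm amounts to:
```
#include <deal.II/base/index_set.h>
#include <deal.II/base/types.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_tools.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

#include <vector>

// Sketch of FETools::interpolate's per-cell accumulate-and-average logic.
// dof1/u1: source (e.g. FE_DGQ) DoFHandler and ghosted input vector.
// dof2/u2/touch_count: target DoFHandler and non-ghosted output vectors.
template <int dim, class InVector, class OutVector>
void interpolate_sketch(const dealii::DoFHandler<dim> &dof1,
                        const InVector                &u1,
                        const dealii::DoFHandler<dim> &dof2,
                        OutVector                     &u2,
                        OutVector                     &touch_count)
{
  // cell-local matrix mapping fe1 node values to fe2 node values
  dealii::FullMatrix<double> interpolation(dof2.get_fe().n_dofs_per_cell(),
                                           dof1.get_fe().n_dofs_per_cell());
  dealii::FETools::get_interpolation_matrix(dof1.get_fe(), dof2.get_fe(),
                                            interpolation);

  dealii::Vector<double> local_u1(dof1.get_fe().n_dofs_per_cell());
  dealii::Vector<double> local_u2(dof2.get_fe().n_dofs_per_cell());
  std::vector<dealii::types::global_dof_index> dofs2(
    dof2.get_fe().n_dofs_per_cell());

  const dealii::IndexSet locally_owned = u2.locally_owned_elements();

  for (const auto &cell : dof1.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        cell->get_dof_values(u1, local_u1);
        interpolation.vmult(local_u2, local_u1); // cell-wise interpolation
        cell->as_dof_handler_iterator(dof2)->get_dof_indices(dofs2);

        for (unsigned int i = 0; i < dofs2.size(); ++i)
          if (locally_owned.is_element(dofs2[i])) // <- the 'if' in line 206
            {
              u2(dofs2[i]) += local_u2(i); // accumulate contributions ...
              touch_count(dofs2[i]) += 1;  // ... and count them
            }
      }

  // Each locally owned entry becomes the *average* over the contributing
  // cells -- but only over the cells owned by this process.
  for (const auto k : locally_owned)
    if (touch_count(k) != 0)
      u2(k) /= touch_count(k);
}
```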

One could consider this undesirable: We should really be adding up from
all adjacent cells. You could try what happens if you remove the
if-condition in line 206. In the sequential case, this should make no
difference. In the parallel case, I think it should achieve what you are
trying to do. Would you want to try that out, and if it works as
suggested make that into a patch to deal.II?

Best
W.

blais...@gmail.com

Aug 27, 2025, 1:33:27 AM
to deal.II User Group

An alternative that might be better posed would be to assemble an L2 projection with the right-hand side being the DG values at the quadrature points. This would give you the "optimal" fit of the DG results with your CG scheme, which may not always coincide with the averaging of the nodal values that `FETools::interpolate` does :)
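
Written out, this is the standard L2 projection of the DG solution onto the continuous space, with the right-hand side assembled from the DG values at the quadrature points x_q and weights w_q:
```
\text{Find } u_h \in V_h^{\mathrm{CG}} \text{ such that}\quad
\int_\Omega u_h\, v_h \,\mathrm{d}x
  = \int_\Omega u^{\mathrm{DG}}\, v_h \,\mathrm{d}x
  \qquad \forall\, v_h \in V_h^{\mathrm{CG}},

\text{i.e. the mass-matrix system } M U = b \text{ with}\quad
M_{ij} = \int_\Omega \varphi_i\, \varphi_j \,\mathrm{d}x,
\qquad
b_i = \sum_K \sum_q w_q\, \varphi_i(x_q)\, u^{\mathrm{DG}}(x_q).
```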

Daniel Abele

Aug 28, 2025, 8:39:52 AM
to deal.II User Group
I haven't tried Wolfgang's suggestion yet.

I tried the L2 projection approach using VectorTools::project (https://dealii.org/current/doxygen/deal.II/namespaceVectorTools.html#a51a13e948e1296dbfdfc297167e4fd5a). The code I added is below. I initialize a CellDataStorage by evaluating the DG solution at the quadrature points using FEValues. The AffineConstraints object might contain some actual constraints in the real program. This seems to work fine, and the result is identical when run in parallel. Am I missing something? It's more expensive than the simple interpolation, but not significantly so compared to the DG solver, as long as it's not done too often. And I only really need it when writing output, which is quite expensive anyway. It's a bit more code, but more numerically sound.

This goes in the original code in place of the call to FETools::interpolate:
```
    // non-ghosted result vector for the continuous (FE_Q) solution;
    // 'Vector' is the distributed vector alias used in the rest of the program
    auto v = Vector{local_owned_dofs_cont, MPI_COMM_WORLD};

    // per-quadrature-point storage for the values of the DG solution
    struct MyData
    {
        double u;
    };
    auto gauss = d2::QGauss<dim>(fe_cont.tensor_degree() + 1);
    auto quadrature_data = d2::CellDataStorage<decltype(dof_cont)::active_cell_iterator, MyData>{};

    // evaluate the (ghosted) DG solution 'u' at the quadrature points of all
    // locally owned cells and store the values in quadrature_data
    auto fe_values = d2::FEValues<dim>(mapping, fe_dg, gauss, d2::update_values);
    for (auto&& cell : dof_dg.active_cell_iterators())
    {
        if (!cell->is_locally_owned())
        {
            continue;
        }

        fe_values.reinit(cell);

        quadrature_data.initialize(cell, fe_values.n_quadrature_points);
        auto data = quadrature_data.get_data(cell);
        auto local_u = std::vector<double>(fe_values.n_quadrature_points);
        fe_values.get_function_values(u, local_u);
        for (auto i : fe_values.quadrature_point_indices())
        {
            *data[i] = MyData{local_u[i]};
        }
    }

    // empty here; in the real program this may contain actual constraints
    auto ac = d2::AffineConstraints<double>{local_owned_dofs_cont};
    ac.close();

    // L2-project the stored quadrature-point values onto the FE_Q space
    d2::VectorTools::project(
        mapping,
        dof_cont,
        ac,
        gauss,
        [&quadrature_data](const decltype(dof_cont)::active_cell_iterator& cell, unsigned int qidx) -> double
        { return quadrature_data.get_data(cell)[qidx]->u; },
        v);
```

Daniel Abele

Aug 28, 2025, 9:49:13 AM
to deal.II User Group
I'm wondering if it could become a problem that the cell iterators are mixed: the CellDataStorage is initialized with cell iterators from the DG DoFHandler, but the lookup uses cell iterators from the CG DoFHandler. They should always refer to the same triangulation cell, but I don't know if iterators from two DoFHandlers are always compatible with regard to the project function.

Luca Heltai

Aug 28, 2025, 11:06:07 AM
to Deal.II Users
If you have a cell from dh1 and want the corresponding cell from dh2, you can always do

auto cell2 = cell1->as_dof_handler_iterator(dh2);

https://dealii.org/current/doxygen/deal.II/classCellAccessor.html#a404b1ab2d79efa71b55f03367132dc0f
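
Applied to the project() call above, that would look roughly like this (just a sketch, using the names from the earlier snippet and assuming both DoFHandlers are built on the same Triangulation):
```
// convert the FE_Q cell handed to the callback into the corresponding
// FE_DGQ cell before looking up the stored quadrature-point data
[&quadrature_data, &dof_dg](const decltype(dof_cont)::active_cell_iterator& cell, unsigned int qidx) -> double
{
    const auto dg_cell = cell->as_dof_handler_iterator(dof_dg);
    return quadrature_data.get_data(dg_cell)[qidx]->u;
}
```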

L.

blais...@gmail.com

Sep 1, 2025, 5:13:56 AM
to deal.II User Group
I think you will be fine if you proceed as you are doing right now.
It does not take much time, but if your solver ends up running in parallel and your problems are quite large, you might have to solve a big linear system (although that big linear system is a mass matrix, which is one of the easiest things to solve with an iterative linear solver).

Essentially, this projection enables you to find the optimal CG fit to your DG solution, which is, I presume, what you desire.
Cheers
Brunobär