Interpolated Boundary Conditions from Distributed Solution

ky...@math.uh.edu
Nov 29, 2018, 5:04:45 PM
to deal.II User Group

Hello All,

I am trying to interpolate a finite element solution to use as Dirichlet boundary conditions for an auxiliary problem. I have the following working for serial runs:

Functions::FEFieldFunction<dim, DoFHandler<dim>, TrilinosWrappers::MPI::BlockVector>
  boundaryconditions(dof_handler, curl_vel);

VectorTools::interpolate_boundary_values(dof_handler,
                                         wall,
                                         boundaryconditions,
                                         vorticity_constraints,
                                         fe.component_mask(vorticity_extractor));

In context, I project the curl of velocity onto a finite element space and I want to use this as boundary conditions for vorticity.

The problem is that when I want to do this in parallel, FEFieldFunction cannot handle it directly, as noted in the documentation.

The documentation does give a template for how to use FEFieldFunction with a distributed vector; see https://www.dealii.org/9.0.0/doxygen/deal.II/classFunctions_1_1FEFieldFunction.html

What I thought might work (following that template) was something like the following:

class DirichletConditions
  : public Functions::FEFieldFunction<dim, DoFHandler<dim>, TrilinosWrappers::MPI::BlockVector>
{
public:
  DirichletConditions(const DoFHandler<dim>      &dofh,
                      const LA::MPI::BlockVector &src,
                      int                         n_components)
    : Functions::FEFieldFunction<dim, DoFHandler<dim>, TrilinosWrappers::MPI::BlockVector>(dofh, src)
    , n_comp(n_components)
  {}

  void vector_value(const Point<dim> &p, Vector<double> &values) const
  {
    Vector<double> vec_val(n_comp);
    bool           point_found = true;

    try
      {
        for (int comp = 0; comp < n_comp; ++comp)
          vec_val[comp] = this->value(p, comp);
      }
    catch (...)
      {
        point_found = false;
      }

    if (point_found)
      values = vec_val;
    else
      values = 0.0;
  }

private:
  const int n_comp;
};

DirichletConditions BC(dof_handler, curl_vel, n_components);

VectorTools::interpolate_boundary_values(dof_handler,
                                         wall,
                                         BC,
                                         vorticity_constraints,
                                         fe.component_mask(vorticity_extractor));


This works in serial, but when I switch to MPI I get the following (expected) error:

An error occurred in line <488> of file </home/kyle/lib/dealii-8.5.0/include/deal.II/numerics/fe_field_function.templates.h> in function
    unsigned int dealii::Functions::FEFieldFunction<dim, DoFHandlerType, VectorType>::compute_point_locations(const std::vector<dealii::Point<dim> >&, std::vector<typename DoFHandlerType::active_cell_iterator>&, std::vector<std::vector<dealii::Point<dim> > >&, std::vector<std::vector<unsigned int> >&) const [with int dim = 2; DoFHandlerType = dealii::DoFHandler<2>; VectorType = dealii::TrilinosWrappers::MPI::BlockVector; typename DoFHandlerType::active_cell_iterator = dealii::TriaActiveIterator<dealii::DoFCellAccessor<dealii::DoFHandler<2>, false> >]
The violated condition was:
    !my_pair.first->is_artificial()
Additional information:
    The given point is inside a cell of a parallel::distributed::Triangulation that is not locally owned by this processor.


I'm really not sure if this is the right way to go about what I'm trying to do.

-Thanks,
Kyle Williams

P.S. I don't think it really matters in this case, since this part does not seem to have been updated for 9.0.0, but I am using version 8.5.0 with Trilinos and p4est.

Jean-Paul Pelteret
Dec 1, 2018, 6:05:27 AM
to dea...@googlegroups.com
Dear Kyle,

I’m not too familiar with FEFieldFunction, but I’m aware of the suggested usage within the parallel::distributed context. I’ve taken a quick look into the source code for FEFieldFunction (in particular the compute_point_locations() function that’s throwing this error), and I think that this might be a bug. Before doing anything, a cache of points is created to speed up the process of finding the cells that contain the evaluation points. In producing this cache, we call the GridTools::compute_point_locations() function. This seems to be incorrect when using a parallel::distributed::Triangulation and, as best as I can tell (see entry 71 in the 9.0.0 changelog), there is in fact a specialised version of this function, GridTools::distributed_compute_point_locations(), for this exact scenario.
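
For what it's worth, and only as a rough, untested sketch (none of this exists in 8.5.0, and the names triangulation, mpi_communicator and points below are placeholders rather than anything from your code), my reading of the 9.0 documentation is that the distributed variant is meant to be used roughly like this:

// Rough sketch based on the deal.II 9.0 documentation; not available in 8.5.0.
// "triangulation", "mpi_communicator" and "points" are placeholders.
#include <deal.II/base/bounding_box.h>
#include <deal.II/grid/filtered_iterator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_tools_cache.h>

// A cache of mesh information that speeds up repeated point searches.
GridTools::Cache<dim> cache(triangulation);

// Bounding boxes describing the locally owned part of the mesh,
// exchanged so that every process knows who might own a given point.
const std::vector<BoundingBox<dim>> local_boxes =
  GridTools::compute_mesh_predicate_bounding_box(
    triangulation, IteratorFilters::LocallyOwnedCell());
const std::vector<std::vector<BoundingBox<dim>>> global_boxes =
  GridTools::exchange_local_bounding_boxes(local_boxes, mpi_communicator);

// For each entry of "points", find the cell that contains it (possibly
// owned by another process) together with bookkeeping information such
// as the point's reference-cell coordinates.
const auto located =
  GridTools::distributed_compute_point_locations(cache, points, global_boxes);

The idea, as far as I understand it, is that the exchanged bounding boxes tell each process which other processes might own a given point, so that points inside non-local cells can be dealt with instead of triggering the assertion you saw.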

It would be a great help if you could make a minimal test case that exhibits this behaviour. We could then use that as a basis from which to construct a patch to fix the issue (if it is indeed a bug - as I said, I’m not quite so familiar with this part of our library).

Best,
Jean-Paul

ky...@math.uh.edu
Dec 13, 2018, 6:24:07 AM
to deal.II User Group
Thank you very much!

I will report back with a minimal test case.

I am still curious, though: would this generally be the correct way to implement what I am trying to achieve?

Wolfgang Bangerth
Dec 30, 2018, 1:36:13 PM
to dea...@googlegroups.com
On 12/13/18 4:24 AM, ky...@math.uh.edu wrote:
>
> I will report back with a minimal test case.
>
> I am still curious though, would this generally be the correct way to
> implement what I am trying to achieve?

Yes, I think your approach is reasonable. (Up to possible bugs in the
implementation of the FEFieldFunction class, as you mentioned.)

But FEFieldFunction is not particularly efficient. If I understand you right,
you first solve a problem with solution (say) U on a domain, and then you
want to use (a function of) the values of U at the boundary as the boundary
conditions for another problem V that is defined on the same domain. If the
two problems use the same mesh, it might be more efficient to use a different
approach for this that doesn't require FEFieldFunction. For example, you could
replicate what VectorTools::interpolate_boundary_values() does and evaluate U
via an FEFaceValues object because you know what faces you need to evaluate
on. Or you could impose the Dirichlet boundary conditions weakly, using
something like the Nitsche method (i.e., how DG methods impose boundary values).
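
A minimal, untested sketch of that FEFaceValues idea might look as
follows. It reuses the names from your snippets (dof_handler, curl_vel,
wall, fe, vorticity_extractor, vorticity_constraints), and it assumes
that curl_vel is a ghosted vector living on the same dof_handler and
that the element has face support points:

// Untested sketch, roughly what interpolate_boundary_values() does
// internally: evaluate U at the support points of the face DoFs on the
// "wall" boundary and turn those values into Dirichlet constraints.
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/quadrature.h>
#include <deal.II/fe/fe_values.h>

// Quadrature whose points are the unit support points of the face, so
// that quadrature point i corresponds to face DoF i.
const Quadrature<dim - 1> face_quadrature(fe.get_unit_face_support_points());
FEFaceValues<dim> fe_face_values(fe, face_quadrature, update_values);

std::vector<types::global_dof_index> face_dofs(fe.dofs_per_face);
std::vector<Vector<double>> u_values(face_quadrature.size(),
                                     Vector<double>(fe.n_components()));

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
      if (cell->face(f)->at_boundary() &&
          cell->face(f)->boundary_id() == wall)
        {
          fe_face_values.reinit(cell, f);
          fe_face_values.get_function_values(curl_vel, u_values);
          cell->face(f)->get_dof_indices(face_dofs);

          for (unsigned int i = 0; i < fe.dofs_per_face; ++i)
            {
              // Vector component to which face DoF i belongs.
              const unsigned int comp =
                fe.face_system_to_component_index(i).first;

              // Constrain only the vorticity components.
              if (fe.component_mask(vorticity_extractor)[comp])
                {
                  vorticity_constraints.add_line(face_dofs[i]);
                  vorticity_constraints.set_inhomogeneity(face_dofs[i],
                                                          u_values[i](comp));
                }
            }
        }
// ...then close() the constraints object as usual.

Because this only ever evaluates on faces of locally owned cells, no
cross-process point search is necessary at all.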

Or maybe that all doesn't matter because using FEFieldFunction is fast enough
for you. All I'm trying to say is that if FEFieldFunction is *not* fast
enough, then there are other options.

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/
