Constraints for pure Neumann boundary conditions in parallel

Marc Fehling

Apr 28, 2016, 7:36:51 AM
to deal.II User Group
Dear community,

I am currently facing an issue trying to impose global constraints on a parallel distributed mesh. I followed step-11 and basically tried to implement the third possibility, which constrains the boundary dofs to have a mean value of zero. This works with one CPU, but fails with more than one, because each CPU would need the information about all boundary dofs and I don't know how to distribute it. The other question is how much this will worsen the total runtime of the program, especially regarding the solver and preconditioner (currently: GMRES with AMG).

Although this topic was already discussed years ago (for example in this post), there may now be an easy and elegant solution available for this issue.

My motivation for all this is to solve the incompressible Navier-Stokes equations and use pure Neumann boundary conditions for the pressure Poisson equation in Chorin's projection method. Setting the first dof to zero by hand during matrix assembly (the first possibility mentioned in step-11) yields strange results and exploding velocities, so I tried constraining the dofs as in step-11, which seems reasonable to me.

If someone has experience with CFD: do you have another approach or opinion on how to get pure Neumann boundary conditions to work in parallel?

Best regards,
Marc

PS: You'll find a short code snippet attached, in which I tried to constrain the boundary dofs of the parallel distributed dof handler to one specific dof that I picked and distributed globally. The boundary dofs owned by the other CPUs are not yet considered here. I would be thankful for any hint that would help me achieve this. :)
(...)

dealii::IndexSet local_boundary_dofs;
DoFTools::extract_boundary_dofs (dof_handler,
                                 ComponentMask(),
                                 local_boundary_dofs);
local_boundary_dofs.compress ();

// get index of very first boundary dof and distribute it on all CPUs
const unsigned int first_local_boundary_dof
  = local_boundary_dofs.nth_index_in_set (0);
const unsigned int first_boundary_dof
  = dealii::Utilities::MPI::min (first_local_boundary_dof, MPI_COMM_WORLD);

// prepare ConstraintMatrix
constraints.clear ();
dealii::IndexSet relevant_dofs_with_first_boundary_dof = dofs_relevant;
relevant_dofs_with_first_boundary_dof.add_index (first_boundary_dof);
relevant_dofs_with_first_boundary_dof.compress ();
constraints.reinit (relevant_dofs_with_first_boundary_dof);
constraints.add_line (first_boundary_dof);

// constrain the remaining local boundary dofs to this line in the ConstraintMatrix
unsigned int local_to_global_index;
for (unsigned int i = 0; i < local_boundary_dofs.n_elements(); ++i) {
  local_to_global_index = local_boundary_dofs.nth_index_in_set (i);
  if (local_to_global_index != first_boundary_dof)
    constraints.add_entry (first_boundary_dof,
                           local_to_global_index,
                           -1);
}

DoFTools::make_hanging_node_constraints (dof_handler, constraints);
constraints.close ();

(...)


Timo Heister

Apr 28, 2016, 9:07:47 AM
to dea...@googlegroups.com
> constrains the boundary dofs to have a mean value of zero

This is really difficult because you don't know all the boundary DoFs
on a single node. Even if you figure out a way to communicate this
information, you would still have the issue of potentially
constraining a very large number of unknowns with each other.

> equation in Chorin's projection method. Setting the first dof to zero by
> hand during matrix assembly (the first possibility mentioned in step-11)
> yields strange results and exploding velocities

I have used the option to constrain a single DoF in the past and that
should certainly work.

Another option is to project out the null space in every Krylov
iteration to get a mean-zero solution.
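
In its simplest form you can also do this just once after each solve rather than inside the Krylov loop. A rough, untested sketch (pressure_dof_handler, fe, ghosted_pressure, and distributed_pressure are placeholders for your own objects, and it assumes that VectorTools::compute_mean_value() does the MPI reduction for a parallel::distributed::Triangulation in your deal.II version):

  // Sketch: shift the pressure so its mean value is zero. For Lagrange
  // elements, adding a constant to all dofs shifts the finite element
  // function by exactly that constant.
  const double mean_pressure
    = dealii::VectorTools::compute_mean_value (pressure_dof_handler,
                                               dealii::QGauss<dim>(fe.degree + 1),
                                               ghosted_pressure,   // locally relevant vector
                                               /*component=*/ 0);
  distributed_pressure.add (-mean_pressure);  // writable, non-ghosted vector
  ghosted_pressure = distributed_pressure;    // refresh ghost values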

Finally, I have assembled an epsilon*mass matrix into the pressure
Poisson problem, and that also works just fine.
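
Roughly, that amounts to adding a small multiple of the mass matrix while assembling the pressure Poisson matrix. A sketch using the usual cell-assembly names from the tutorial programs (the value of epsilon is an assumption here and is problem-dependent):

  const double epsilon = 1e-8;  // small regularization parameter (assumed value)

  for (unsigned int q = 0; q < n_q_points; ++q)
    for (unsigned int i = 0; i < dofs_per_cell; ++i)
      for (unsigned int j = 0; j < dofs_per_cell; ++j)
        cell_matrix(i, j) += (fe_values.shape_grad (i, q) *
                              fe_values.shape_grad (j, q)      // Laplace term
                              +
                              epsilon *
                              fe_values.shape_value (i, q) *
                              fe_values.shape_value (j, q))    // epsilon * mass term
                             * fe_values.JxW (q);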


--
Timo Heister
http://www.math.clemson.edu/~heister/

Wolfgang Bangerth

Apr 28, 2016, 10:09:51 AM
to dea...@googlegroups.com
On 04/28/2016 06:36 AM, Marc Fehling wrote:
> I followed step-11
> <https://dealii.org/8.4.0/doxygen/deal.II/step_11.html> and basically
> tried to implement the third possibility, which constrains the boundary
> dofs to have a mean value of zero. This works with one CPU, but fails
> with more than one, because each CPU would need the information about
> all boundary dofs and I don't know how to distribute it. The other
> question is how much this will worsen the total runtime of the program,
> especially regarding the solver and preconditioner (currently: GMRES
> with AMG).

In addition to what Timo already said, here is another consideration:

If you couple M of the total of N degrees of freedom (e.g., the M DoFs
on the boundary), you will get an MxM dense sub-block of your system
matrix. If you run in parallel, I suspect that you have a large problem,
where M is going to be large as well. In 3d, for example, you will have
M=O(N^{2/3}), so an MxM dense block of the matrix may well contain the
vast majority of matrix elements. You really don't want to do this. It
will certainly also throw off any of the preconditioners you would
typically use.
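(For a rough sense of scale: with N = 10^6 DoFs in 3d, M is on the order
of 10^4, so the dense MxM block alone would hold roughly 10^8 entries,
while the sparse part of the matrix only has O(N), i.e. a few times 10^7,
nonzeros.)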

The better approach is to pin a single degree of freedom to a particular
value, e.g., the first DoF on the first processor.
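
In terms of a ConstraintMatrix, this pinning could look roughly like the following sketch. It assumes that first_boundary_dof holds the same global index on every processor (e.g. determined as in your snippet) and that locally_relevant_dofs is the index set the constraints are built for:

  constraints.clear ();
  constraints.reinit (locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);
  if (locally_relevant_dofs.is_element (first_boundary_dof))
    {
      constraints.add_line (first_boundary_dof);                // constrain this single dof...
      constraints.set_inhomogeneity (first_boundary_dof, 0.0);  // ...to the value zero
    }
  constraints.close ();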

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@math.tamu.edu
www: http://www.math.tamu.edu/~bangerth/

Marc Fehling

Apr 29, 2016, 4:23:06 AM
to deal.II User Group
Thank you!

I'll give the option of constraining a single DoF another try; I guess something is wrong elsewhere in the code.
If anything relevant changes regarding this issue, I'll keep you updated.

Best,
Marc

bjpal...@gmail.com

May 6, 2016, 1:53:00 PM
to deal.II User Group

Marc,

I was able to get the single-constrained-DoF approach to work using the apply_boundary_values() function. My code looks like this:

        // Find the first degree of freedom on the boundary
        IndexSet boundary_dofs;
        DoFTools::extract_boundary_dofs(p_dof,ComponentMask(),boundary_dofs);
        // Get the global index of the first locally owned boundary dof on
        // this processor (or keep -1 if this processor owns no boundary dofs)
        int first = -1;
        if (boundary_dofs.n_elements() > 0) {
          first = boundary_dofs.nth_index_in_set(0);
        }
        // Find lowest-index boundary degree of freedom across all processors
        std::vector<int> fval(p_nprocs,0);
        fval[p_me] = first;
        Utilities::MPI::sum(fval,p_comm,fval);
        first = fval[0];
        int fsize = fval.size();
        for (int i=1; i<fsize; i++) {
          if (first == -1 && fval[i] != -1) first = fval[i];
          if (fval[i] != -1 && first > fval[i]) first = fval[i];
        }
        // We now have global index of first boundary degree of freedom.
        // Set it to zero
        std::map<types::global_dof_index,double> boundary_values;
        boundary_values.insert(
            std::pair<types::global_dof_index,double>(first,0.0));
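        // apply_boundary_values cannot work on a vector with ghost entries,
        // so copy the solution into a fully distributed temporary first
        // (see the note below the code)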
        LA::MPI::Vector tmp_solution;
        tmp_solution.reinit(p_locally_owned_dofs,p_comm);
        tmp_solution = p_local_solution;
        MatrixTools::apply_boundary_values(boundary_values,p_system,
            tmp_solution,p_rhs,false);
        p_local_solution = tmp_solution;

My local solution vector has ghost nodes, which causes the apply_boundary_values function to fail, so I copy the solution vector to a temporary vector without ghost nodes and use that in apply_boundary_values instead. I don't have any hanging nodes in my grid.

Bruce