PETSc-p4est Boundary


Florian

Nov 30, 2015, 10:59:20 AM
to deal.II User Group
Dear all,

For parallel computing I use the PETSc and p4est frameworks (or at least I try to). Everything works well so far, except for one thing.

The geometry is a cube whose faces get the following boundary ids:

typename Triangulation<dim>::cell_iterator cell = triangulation.begin(), endc = triangulation.end();
for (; cell != endc; ++cell)
  for (unsigned int face = 0; face < GeometryInfo<dim>::faces_per_cell; ++face)
    if (cell->face(face)->at_boundary())
      {
        if (cell->face(face)->center()[0] == 0)   cell->face(face)->set_boundary_id (2);
        if (cell->face(face)->center()[0] == 1)   cell->face(face)->set_boundary_id (3);
      }

triangulation.refine_global (2);


Now I'd like to apply some boundary conditions:

constraints.clear ();
DoFTools::make_hanging_node_constraints (dof_handler, constraints);
FEValuesExtractors::Scalar x_component (0);
FEValuesExtractors::Scalar y_component (1);
FEValuesExtractors::Scalar z_component (2);

VectorTools::interpolate_boundary_values (dof_handler, 2, ZeroFunction<dim> (dim), constraints);
VectorTools::interpolate_boundary_values (dof_handler, 3, ConstantFunction<dim> (0.5, dim), constraints, fe.component_mask (x_component));
constraints.close ();
constraints.distribute (locally_relevant_solution);

Let's check if it worked...

for (typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); cell != endc; ++cell)
  if (cell->is_locally_owned())
    for (unsigned int i = 0; i < GeometryInfo<dim>::vertices_per_cell; ++i)
      if (cell->vertex(i)(0) == 1)
        if (locally_relevant_solution(cell->vertex_dof_index(i, 0)) == 0)
          cout << cell->vertex(i)(1) << "\t" << cell->vertex(i)(2) << endl;

It didn't:
0.5 0
0.5 0.25
0.5 0.25
0.5 0.5
0.5 0.5
0.75 0.5
0.5 0.75
0.75 0.5
1 0.5
0.5 0.75
0.5 1
0 0.5
0.25 0.5
0.25 0.5
0.5 0.5

ParaView shows the same.


Any ideas?

Thank you guys

Bye Bye


Wolfgang Bangerth

Nov 30, 2015, 11:08:07 AM
to dea...@googlegroups.com
On 11/30/2015 09:59 AM, Florian wrote:
> Dear all,
>
> For parallel computing I use the PETSc and p4est frameworks (or at least I
> try to). Everything works well so far, except for one thing.
>
> The geometry is a cube whose faces get the following boundary ids:
>
> typename Triangulation<dim>::cell_iterator cell = triangulation.begin(), endc = triangulation.end();
> for (; cell != endc; ++cell)
>   for (unsigned int face = 0; face < GeometryInfo<dim>::faces_per_cell; ++face) {
>     if (cell->face(face)->at_boundary()) {
>       if (cell->face(face)->center()[0] == 0)   cell->face(face)->set_boundary_id (2);
>       if (cell->face(face)->center()[0] == 1)   cell->face(face)->set_boundary_id (3);
> <https://lh3.googleusercontent.com/-m_3rx3IP_a0/VlxxoNCRL0I/AAAAAAAAF2g/QEG0-cWk_aU/s1600/screenshot.png>
>
>
> Any ideas?

It's hard to tell what may be going wrong without seeing a complete program.
There are so many places where something could have gone wrong.

Best
W.


--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@math.tamu.edu
www: http://www.math.tamu.edu/~bangerth/

Florian

Nov 30, 2015, 11:16:35 AM
to deal.II User Group
Here is the entire code:

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>


#include <deal.II/lac/generic_linear_algebra.h>


#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_tools.h>

#include <deal.II/lac/petsc_parallel_sparse_matrix.h>
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_precondition.h>



#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_system.h>      
#include <deal.II/fe/fe_q.h>

#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/numerics/matrix_tools.h>


#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>

#include <fstream>
#include <iostream>



using std::cout;
using std::endl;
using namespace dealii;

template <int dim>
class GPMPP
{
public:
  GPMPP ();
  ~GPMPP ();

  void run ();

private:
  void make_grid ();
  void setup_system ();
  void setup_boundaries ();
  void solve ();

  std::streambuf *psbuf, *backup;

  MPI_Comm                                  mpi_communicator;

  parallel::distributed::Triangulation<dim> triangulation;

  DoFHandler<dim>                           dof_handler;
  FESystem<dim>                             fe;

  IndexSet                                  locally_owned_dofs;
  IndexSet                                  locally_relevant_dofs;

  ConstraintMatrix                          constraints;

  PETScWrappers::MPI::SparseMatrix          system_matrix;
  PETScWrappers::MPI::Vector                locally_relevant_solution;
  PETScWrappers::MPI::Vector                update_locally_relevant_solution;
  PETScWrappers::MPI::Vector                system_rhs;

  ConditionalOStream                        pcout;
  TimerOutput                               computing_timer;
};


 
template <int dim>
GPMPP<dim>::GPMPP ()
  :
  mpi_communicator (MPI_COMM_WORLD),
  triangulation (mpi_communicator,
                 typename Triangulation<dim>::MeshSmoothing
                 (Triangulation<dim>::smoothing_on_refinement |
                  Triangulation<dim>::smoothing_on_coarsening)),
  dof_handler (triangulation),
  fe (FE_Q<dim>(1), dim),
  pcout (std::cout,
         (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)),
  computing_timer (mpi_communicator,
                   pcout,
                   TimerOutput::summary,
                   TimerOutput::wall_times)
{}



template <int dim>
GPMPP<dim>::~GPMPP ()
{
  dof_handler.clear ();
}


template <int dim>
void GPMPP<dim>::make_grid ()
{
  TimerOutput::Scope t(computing_timer, "make_grid");
  GridGenerator::hyper_cube (triangulation, 0, 1);

  typename Triangulation<dim>::cell_iterator cell = triangulation.begin(), endc = triangulation.end();
  for (; cell != endc; ++cell)
    for (unsigned int face = 0; face < GeometryInfo<dim>::faces_per_cell; ++face)
      if (cell->face(face)->at_boundary())
        {
          if (cell->face(face)->center()[0] == 0)   cell->face(face)->set_boundary_id (2);
          if (cell->face(face)->center()[0] == 1)   cell->face(face)->set_boundary_id (3);
        }

  triangulation.refine_global (2);
}

template <int dim>
void GPMPP<dim>::setup_system ()
{
  TimerOutput::Scope t(computing_timer, "setup_system");
  dof_handler.distribute_dofs (fe);
  locally_owned_dofs = dof_handler.locally_owned_dofs ();
  DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs);

  locally_relevant_solution.reinit (locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
  update_locally_relevant_solution.reinit (locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
  system_rhs.reinit (locally_owned_dofs, mpi_communicator);

  constraints.clear ();
  constraints.reinit (locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);
  constraints.close ();

  DynamicSparsityPattern dsp (locally_relevant_dofs);
  DoFTools::make_sparsity_pattern (dof_handler, dsp, constraints, false);
  SparsityTools::distribute_sparsity_pattern (dsp, dof_handler.n_locally_owned_dofs_per_processor(), mpi_communicator, locally_relevant_dofs);

  system_matrix.reinit (locally_owned_dofs, locally_owned_dofs, dsp, mpi_communicator);
}

template <int dim>
void GPMPP<dim>::setup_boundaries ()
{
  constraints.clear ();
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);
  FEValuesExtractors::Scalar x_component (0);
  FEValuesExtractors::Scalar y_component (1);
  FEValuesExtractors::Scalar z_component (2);

  VectorTools::interpolate_boundary_values (dof_handler, 2, ZeroFunction<dim> (dim), constraints);
  VectorTools::interpolate_boundary_values (dof_handler, 3, ConstantFunction<dim> (0.5, dim), constraints, fe.component_mask (x_component));

  constraints.close ();
  constraints.distribute (locally_relevant_solution);

  for (typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); cell != endc; ++cell)
    if (cell->is_locally_owned())
      for (unsigned int i = 0; i < GeometryInfo<dim>::vertices_per_cell; ++i)
        if (cell->vertex(i)(0) == 1)
          if (locally_relevant_solution(cell->vertex_dof_index(i, 0)) == 0)
            {
              locally_relevant_solution(cell->vertex_dof_index(i, 0)) = 0.1;
              cout << cell->vertex(i)(1) << "\t" << cell->vertex(i)(2) << "\t" << locally_relevant_solution(cell->vertex_dof_index(i, 0)) << endl;
            }
}





template <int dim>
void GPMPP<dim>::run ()
{
  make_grid ();
  setup_system ();
  setup_boundaries ();
}





int main (int argc, char *argv[])
{
  using namespace dealii;
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  deallog.depth_console (0);
  GPMPP<3> problem;
  problem.run ();
  return 0;
}



Wolfgang Bangerth

Nov 30, 2015, 11:24:05 AM
to dea...@googlegroups.com
On 11/30/2015 10:16 AM, Florian wrote:
> Here the entire code:
> [...]

In debug mode, this program yields an error for me (regardless of whether I
use one or two processors):


--------------------------------------------------------
An error occurred in line <881> of file
</home/bangerth/p/deal.II/1/dealii/include/deal.II/lac/petsc_vector_base.h> in
function
const dealii::PETScWrappers::internal::VectorReference&
dealii::PETScWrappers::internal::VectorReference::operator=(const
PetscScalar&) const
The violated condition was:
!vector.has_ghost_elements()
The name and call sequence of the exception was:
ExcGhostsPresent()
Additional Information:
You are trying an operation on a vector that is only allowed if the vector has
no ghost elements, but the vector you are operating on does have ghost
elements. Specifically, vectors with ghost elements are read-only and cannot
appear in operations that write into these vectors.

See the glossary entry on 'Ghosted vectors' for more information.

Stacktrace:
-----------
#0 /home/bangerth/p/deal.II/1/install/lib/libdeal_II.g.so.8.4.0-pre:
dealii::PETScWrappers::internal::VectorReference::operator=(double const&) const
#1 /home/bangerth/p/deal.II/1/install/lib/libdeal_II.g.so.8.4.0-pre: void
dealii::ConstraintMatrix::distribute<dealii::PETScWrappers::MPI::Vector>(dealii::PETScWrappers::MPI::Vector&)
const
#2 ./step-1: GPMPP<3>::setup_boundaries()
#3 ./step-1: GPMPP<3>::run()
#4 ./step-1: main
--------------------------------------------------------

Does this run for you in debug mode? Which version of deal.II are you using?
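
In case it helps: ConstraintMatrix::distribute() can only write into a vector that has no ghost entries. The usual pattern (only a minimal sketch, reusing the member variables from your program) is to distribute into a non-ghosted vector first and then copy the result into the ghosted one:

  // distribute() needs a writable vector, i.e. one without ghost entries
  PETScWrappers::MPI::Vector tmp (locally_owned_dofs, mpi_communicator);
  // (if locally_relevant_solution already held data, you would copy it into tmp first)
  constraints.distribute (tmp);
  // assigning a non-ghosted vector to a ghosted one also updates the ghost values
  locally_relevant_solution = tmp;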

Jean-Paul Pelteret

Nov 30, 2015, 11:24:06 AM
to deal.II User Group
My first suggestion is that you should specify a tolerance for your initial face centre check:
if (std::abs(cell->face(face)->center()[0] - 1.0) < 1e-6) ... etc.
I've found this to be an issue for me in the past.
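
Applied to the loop in your make_grid(), it would look something like the following sketch (the 1e-6 tolerance is an arbitrary choice, and std::abs needs <cmath>):

  const double tol = 1e-6;
  if (cell->face(face)->at_boundary())
    {
      // compare against the expected coordinate up to a tolerance
      if (std::abs(cell->face(face)->center()[0] - 0.0) < tol)
        cell->face(face)->set_boundary_id (2);
      if (std::abs(cell->face(face)->center()[0] - 1.0) < tol)
        cell->face(face)->set_boundary_id (3);
    }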

Bruno Turcksin

Nov 30, 2015, 11:24:56 AM
to deal.II User Group
Hi

the boundary IDs are not kept after refinement in parallel (see the documentation https://dealii.org/developer/doxygen/deal.II/classparallel_1_1distributed_1_1Triangulation.html)
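
If that turns out to be the problem, a possible workaround (just a sketch, based on the loop you already have) is to (re)set the ids on the active cells after refine_global:

  triangulation.refine_global (2);
  typename Triangulation<dim>::active_cell_iterator cell = triangulation.begin_active(), endc = triangulation.end();
  for (; cell != endc; ++cell)
    for (unsigned int face = 0; face < GeometryInfo<dim>::faces_per_cell; ++face)
      if (cell->face(face)->at_boundary())
        {
          // the refined faces of the unit cube still lie exactly on x=0 and x=1
          if (cell->face(face)->center()[0] == 0)   cell->face(face)->set_boundary_id (2);
          if (cell->face(face)->center()[0] == 1)   cell->face(face)->set_boundary_id (3);
        }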

Best,

Bruno

Timo Heister

Nov 30, 2015, 12:05:53 PM
to dea...@googlegroups.com
> the boundary IDs are not kept after refinement in parallel (see the
> documentation
> https://dealii.org/developer/doxygen/deal.II/classparallel_1_1distributed_1_1Triangulation.html)

But he is setting them on the coarse mesh and every refinement will
copy the indicators to the children regardless of the parallel
partitioning. I think this should work as expected. Or am I totally
wrong here?
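
An easy way to check is to count the faces that carry the id after refinement, e.g. with a quick sketch like this (counting only locally owned cells):

  unsigned int n_id3_faces = 0;
  typename Triangulation<dim>::active_cell_iterator cell = triangulation.begin_active(), endc = triangulation.end();
  for (; cell != endc; ++cell)
    if (cell->is_locally_owned())
      for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
        if (cell->face(f)->at_boundary() && cell->face(f)->boundary_id() == 3)
          ++n_id3_faces;
  std::cout << "locally owned faces with boundary id 3: " << n_id3_faces << std::endl;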

--
Timo Heister
http://www.math.clemson.edu/~heister/

Florian

Dec 1, 2015, 4:19:48 AM
to deal.II User Group
I tried using a tolerance and setting the IDs after refinement. Unfortunately, neither of these worked.

I guess the problem is those 'ghost values'. Thanks, bangerth, for this hint. Maybe I can fix this somehow.

My version is the latest, 8.3.0, and I ran this in release mode. My bad!

Thanks so far. :)