Output incorrect along parallel partition

Jonathan Russ

Jun 5, 2016, 4:37:04 PM
to deal.II User Group
Hello -

I modified step-36 to perform a modal analysis of a structure and am trying to get the output_results function working properly. I am using a shared triangulation (partitioned with METIS), and the output (i.e., the eigenvectors) is incorrect only along the partition boundaries. Attached are two pictures illustrating what I mean (one colored by partition, the other colored by the first eigenvector, with the program run on 2 processors).

Is there anything obviously wrong with this output function? When I run the program in serial it works just fine. Could this have something to do with the difference between locally owned and locally relevant degrees of freedom?

  template <int dim>
  void EigenvalueProblem<dim>::output_results () const
  {
    std::vector<DataComponentInterpretation::DataComponentInterpretation>
      data_component_interpretation
      (dim, DataComponentInterpretation::component_is_part_of_vector);

    DataOut<dim> data_out;
    data_out.attach_dof_handler (dof_handler);

    for (unsigned int i=0; i<eigenvectors.size(); ++i)
      {
        data_out.add_data_vector (eigenvectors[i],
                                  std::string("Mode_") +
                                  Utilities::int_to_string(i+1),
                                  DataOut<dim>::type_dof_data,
                                  data_component_interpretation);
      }

    std::vector<types::subdomain_id> partition_int (triangulation.n_active_cells());
    GridTools::get_subdomain_association (triangulation, partition_int);
    const Vector<double> partitioning(partition_int.begin(),
                                      partition_int.end());
    data_out.add_data_vector (partitioning, "partitioning");

    data_out.build_patches ();
    const std::string filename = ("eigenvectors-" +
                                  Utilities::int_to_string (this_mpi_process, 1) +
                                  ".vtu");
    std::ofstream output (filename.c_str());
    data_out.write_vtu (output);

    if (this_mpi_process == 0)
      {
        std::vector<std::string> filenames;
        for (unsigned int i=0; i<n_mpi_processes; ++i)
          filenames.push_back ("eigenvectors-" +
                               Utilities::int_to_string (i, 1) +
                               ".vtu");
        std::ofstream master_output ("eigenvectors.pvtu");
        data_out.write_pvtu_record (master_output, filenames);
      }
  }


Thank you in advance for taking a look,
Jonathan
eigenvector1.png
partition.png

Artur Safin

Jun 7, 2016, 3:25:26 PM
to deal.II User Group
Jonathan,

You may have figured this out already -- you might have forgotten to add

hanging_node_constraints.distribute (localized_solution);

after you solve your system (see step-17 for example).
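
For reference, the step-17 idiom looks roughly like the sketch below. The names solution and hanging_node_constraints are the ones used in that tutorial, not necessarily in your program; the key step is copying the distributed PETSc vector into a localized Vector<double> before distributing the constraints.

    // Sketch of the step-17 idiom: gather the distributed solution into a
    // vector that stores every entry on every process, then apply the
    // constraints to that localized copy.
    Vector<double> localized_solution (solution);
    hanging_node_constraints.distribute (localized_solution);
    solution = localized_solution;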

Artur

Jonathan Russ

Jun 7, 2016, 7:13:18 PM
to deal.II User Group
Hi Artur -

Unfortunately I don't have any hanging node constraints in the model. I do have constraints but there aren't any along the mesh partition. I still can't figure out why my output looks the way it does. The solve function looks like this:

  unsigned int EigenvalueProblem<dim>::solve ()
  {
    TimerOutput::Scope t(computing_timer, "solve");
    pcout << "   Number of eigenvectors requested: "
          << eigenvectors.size() << std::endl;

    PETScWrappers::PreconditionBoomerAMG::AdditionalData data;
    data.symmetric_operator = true;
    PETScWrappers::PreconditionBoomerAMG preconditioner(mpi_communicator, data);
    SolverControl linear_solver_control(dof_handler.n_dofs(), 1.0e-16, false, false);
    PETScWrappers::SolverCG linear_solver(linear_solver_control, mpi_communicator);
    linear_solver.initialize(preconditioner);

    SolverControl solver_control (2000, 1e-14, false, false);
    SLEPcWrappers::SolverKrylovSchur eigensolver(solver_control, mpi_communicator);

    double shift_freq  = parameters.get_double ("Shift frequency");
    double eigen_shift = std::pow (2.0 * PI * shift_freq, 2.0);
    SLEPcWrappers::TransformationShiftInvert::AdditionalData additional_data(eigen_shift);
    SLEPcWrappers::TransformationShiftInvert shift (mpi_communicator, additional_data);
    shift.set_solver(linear_solver);
    eigensolver.set_transformation (shift);
    eigensolver.set_which_eigenpairs (EPS_SMALLEST_REAL);
    eigensolver.set_problem_type (EPS_GHEP);

    eigensolver.solve (stiffness_matrix, mass_matrix, eigenvalues, eigenvectors,
                       eigenvectors.size());

    // Normalize the vectors
    for (unsigned int i=0; i<eigenvectors.size(); ++i)
      eigenvectors[i] /= eigenvectors[i].linfty_norm ();

    // Finally return the number of iterations it took to converge:
    return solver_control.last_step ();
  }

Is the call to distribute necessary after a parallel solution even if there aren't any hanging node constraints? I have constraints on the circumference so that there is zero displacement there but I do not have any constraints along the partition.

Thanks again for your help,
Jonathan

Artur Safin

Jun 8, 2016, 2:00:07 PM
to deal.II User Group
Jonathan,

Sorry, you're right -- the distribute() function is not the culprit here.

I remember having the exact same issue a while ago, but don't quite remember what caused it. Can you post your program here?

Artur

Jonathan Russ

Jun 8, 2016, 6:44:37 PM
to deal.II User Group
Hi Artur!

Yes sir! Here are all the files necessary to run it in case you want to. Thank you very much for taking a look! If you have any questions about my code let me know!

Jonathan
CMakeLists.txt
mesh.ucd
parameters.prm
step-36.cc

Artur Safin

Jun 9, 2016, 3:11:51 PM
to deal.II User Group
Jonathan,

My understanding of the situation is this: when you output a solution in parallel, each processor loops over the cells it owns and writes out the DoF values on those cells. However, not all of the DoFs on a cell a processor owns necessarily belong to that processor; some belong to its neighbors. So to output the solution correctly, you also need the values of the locally relevant DoFs, which is something your code does not have.

This can be achieved by changing the beginning of the output function to:

-----------------------------------------------------------------------------------------------------------------
  template <int dim>
  void EigenvalueProblem<dim>::output_results () const
  {
    std::vector<DataComponentInterpretation::DataComponentInterpretation>
      data_component_interpretation
      (dim, DataComponentInterpretation::component_is_part_of_vector);
    
    DataOut<dim> data_out;
    data_out.attach_dof_handler (dof_handler);

    std::vector<Vector<double>> eigenvectorZ;
    eigenvectorZ.resize(eigenvectors.size());
    for (unsigned int i=0; i<eigenvectors.size(); ++i)
    {
      eigenvectorZ[i].reinit(dof_handler.n_dofs());
      eigenvectorZ[i] = eigenvectors[i];
      data_out.add_data_vector (eigenvectorZ[i],
                                std::string("Mode_") +
                                Utilities::int_to_string(i+1),
                                DataOut<dim>::type_dof_data,
                                data_component_interpretation);
    }

    // Print out the first eigenvector to a file

    ....

-----------------------------------------------------------------------------------------------------------------
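
The copy eigenvectorZ[i] = eigenvectors[i] is what does the work here: it gathers the distributed PETSc vector into a plain Vector<double> that holds every entry on every process, so DataOut can read the values of all DoFs on locally owned cells, including those owned by a neighboring process. An alternative that avoids replicating the whole vector on every process is to hand DataOut a ghosted vector instead. Here is a minimal sketch, assuming the program uses a parallel::shared::Triangulation so that each process owns only part of the DoFs, that the eigenvectors are PETScWrappers::MPI::Vector as in the modified step-36, and reusing the dof_handler, mpi_communicator, data_out, and loop index i from the function above:

    // Ghost entries cover the locally relevant DoFs of this process.
    IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs ();
    IndexSet locally_relevant_dofs;
    DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs);

    PETScWrappers::MPI::Vector ghosted_eigenvector (locally_owned_dofs,
                                                    locally_relevant_dofs,
                                                    mpi_communicator);
    ghosted_eigenvector = eigenvectors[i];   // copies values and updates ghosts

    data_out.add_data_vector (ghosted_eigenvector,
                              std::string("Mode_") + Utilities::int_to_string(i+1),
                              DataOut<dim>::type_dof_data,
                              data_component_interpretation);

Either way, the requirement is the same: the vector handed to DataOut must provide values for all locally relevant DoFs, not only the locally owned ones.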


Artur

Jonathan Russ

Jun 9, 2016, 7:36:50 PM
to deal.II User Group
Thank you again Artur. That worked perfectly.