Synchronization issue for parallel computation of material forces


Seyed Ali Mohseni

Feb 2, 2017, 7:34:49 AM
to deal.II User Group
Hi,

With regard to the approach which was recently discussed in: https://groups.google.com/d/msg/dealii/25C57MYvfpg/JFB8aXKoAgAJ

There is a little problem when computing the material forces in a parallel setting.

I assume it has something to do with the synchronization. 

The computation of the material forces is done similarly to the assembly of the system_rhs vector.



Vector<double> cell_cf(dofs_per_cell); // Cell configurational forces

typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                               endc = dof_handler.end();
for (; cell != endc; ++cell)
  {
    if (cell->is_locally_owned())
      {
        fe_values.reinit(cell);

        cell_cf = 0;

        ...

        // Configurational force computation ...

        ...

        cell->get_dof_indices(local_dof_indices);

        constraints.distribute_local_to_global(cell_cf,
                                               local_dof_indices,
                                               configurational_forces);
      }
  }

configurational_forces.compress(VectorOperation::add);


For a single-core computation it is validated and correct.


What could possibly be wrong when going parallel?



Best regards,

Seyed Ali Mohseni  




Seyed Ali Mohseni

Feb 2, 2017, 10:31:19 AM
to deal.II User Group
And this is the procedure I use to output them:

DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);

std::vector<std::string> configurational_forces_magnitude(dim, "config_forces");

std::vector<DataComponentInterpretation::DataComponentInterpretation> configurational_forces_interpretation(dim, DataComponentInterpretation::component_is_part_of_vector);

data_out.add_data_vector(configurational_forces, configurational_forces_magnitude, DataOut<dim>::type_dof_data, configurational_forces_interpretation);

Very weird. I can compute everything correctly on one core, but why is it not multicore-compatible?


Kind Regards,
S. A. Mohseni

Jean-Paul Pelteret

Feb 2, 2017, 10:38:20 AM
to deal.II User Group
Dear Seyed,

The method of outputting solution vectors in parallel is different from how it is done in serial. Have you compared your approach to the tutorials that use parallel computing?

Regards,
Jean-Paul

Seyed Ali Mohseni

Feb 2, 2017, 10:58:30 AM
to deal.II User Group
I am using a procedure similar to step-40.

DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);

// ========================[ CONFIGURATIONAL FORCES ]==========================


std::vector<std::string> configurational_forces_magnitude(dim, "config_forces");

std::vector<DataComponentInterpretation::DataComponentInterpretation> configurational_forces_interpretation(dim, DataComponentInterpretation::component_is_part_of_vector);

data_out.add_data_vector(configurational_forces, configurational_forces_magnitude, DataOut<dim>::type_dof_data, configurational_forces_interpretation);

// ================================[ WRITE VTU ]===============================

// Construct data structures
data_out.build_patches();

// Write output
const std::string filename = ("solution-" + Utilities::int_to_string(timestep_no, 4) + "." + Utilities::int_to_string(triangulation.locally_owned_subdomain(), 4)); 
std::ofstream output(("output/" + filename + ".vtu").c_str()); // Store file name string as output stream
data_out.write_vtu(output); // Write .vtu output file

// ================================[ WRITE PVTU ]==============================

// Write the "master record" (the names of the various files that, combined, represent the graphical data for the entire domain)
if (Utilities::MPI::this_mpi_process(mpi_com) == 0) // only the first processor runs this task!
  {
    std::vector<std::string> filenames;
    for (unsigned int i = 0; i < Utilities::MPI::n_mpi_processes(mpi_com); ++i)
      filenames.push_back("solution-" + Utilities::int_to_string(timestep_no, 4) + "." +
                          Utilities::int_to_string(i, 4) + ".vtu");

    std::ofstream master_output(("output/solution-" + Utilities::int_to_string(timestep_no, 4) + ".pvtu").c_str());
    data_out.write_pvtu_record(master_output, filenames);
  }

The above is my complete procedure for outputting the material forces only.

Kind regards,
S. A. Mohseni

Seyed Ali Mohseni

Feb 3, 2017, 6:56:54 AM
to deal.II User Group
I realized my vector is not an MPI vector, yet I am trying to use the compress() function. I assume this won't work. So I changed my global variable to

LA::MPI::Vector configurational_forces;

This means I have to initialize it by the following command:

configurational_forces.reinit(locally_owned_dofs, mpi_com);


Now I have a problem. If I use the above approach and synchronize, everything works, but when I try to output or store the vector, I think something goes wrong. In my opinion, I just need to initialize the configurational_forces vector with a larger IndexSet and not just with the locally owned one. Is that possible?
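In other words, instead of the owned-only initialization above, I think I would need the ghosted variant that step-40 uses for its output vector (just a sketch of the two reinit calls I mean):

// what I have now: locally owned entries only
configurational_forces.reinit(locally_owned_dofs, mpi_com);

// what I think I need for output: owned plus ghost (locally relevant) entries
configurational_forces.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_com);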

When I run the problem I receive the following error:

----------------------------------------------------
Exception on processing: 

--------------------------------------------------------
An error occurred in line <131> of file </home/seyedali/programming/c++/projects/dealii/source/lac/petsc_vector_base.cc> in function
    dealii::PETScWrappers::internal::VectorReference::operator double() const
The violated condition was: 
    (index >= static_cast<size_type>(begin)) && (index < static_cast<size_type>(end))
Additional information: 
    You tried to access element 18 of a distributed vector, but only elements 0 through 17 are stored locally and can be accessed.
--------------------------------------------------------

Aborting!
----------------------------------------------------


Any ideas?

Jean-Paul Pelteret

Feb 3, 2017, 9:09:33 AM
to deal.II User Group
Dear Seyed,

For output purposes, the output vectors must contain values for both locally owned and locally relevant DoFs. This is because DataOut needs to output the solution for an entire patch (i.e., one whole cell), which requires that it knows the solution at all of the DoFs of that cell.

In step-40, the two relevant lines are:

    locally_relevant_solution.reinit(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);

and

    data_out.add_data_vector(locally_relevant_solution, "u");

So one easy way to do this is to simply create a new temporary vector in your output() function and copy the solution from the vector that holds only the locally owned DoFs into the new one that contains the union of the owned and relevant indices:

    locally_relevant_solution = locally_owned_solution;

This will copy the relevant data from other processes to each process that requires it.
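Applied to your configurational forces, the whole pattern would look roughly like the sketch below. The *_owned / *_ghosted names are only illustrative, and I am assuming that LA::MPI::Vector maps to the PETSc wrappers as in step-40:

// Non-ghosted vector that is assembled into (locally owned DoFs only):
LA::MPI::Vector configurational_forces_owned;
configurational_forces_owned.reinit(locally_owned_dofs, mpi_com);

// Ghosted vector used only for output (owned + relevant DoFs):
LA::MPI::Vector configurational_forces_ghosted;
configurational_forces_ghosted.reinit(locally_owned_dofs,
                                      locally_relevant_dofs,
                                      mpi_com);

// ... assemble into configurational_forces_owned on the locally owned cells ...
configurational_forces_owned.compress(VectorOperation::add);

// The copy imports the ghost entries from the other processes:
configurational_forces_ghosted = configurational_forces_owned;

// Only the ghosted vector is handed to DataOut:
data_out.add_data_vector(configurational_forces_ghosted,
                         configurational_forces_magnitude,
                         DataOut<dim>::type_dof_data,
                         configurational_forces_interpretation);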

I hope this helps,
Jean-Paul

Seyed Ali Mohseni

Feb 4, 2017, 5:09:59 PM
to deal.II User Group
Thanks, Jean-Paul. I was thinking of something along the same lines as step-40.

I tried your approach, but can I really just set the two vectors equal to each other?

I did the following:

In the main class:
LA::MPI::Vector configurational_forces;
LA::MPI::Vector local_configurational_forces;

In setup_system():
configurational_forces.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_com);

In compute_config_forces():
Vector<double> cell_cf(dofs_per_cell); // Cell configurational forces

typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                               endc = dof_handler.end();
for (; cell != endc; ++cell)
  {
    if (cell->is_locally_owned())
      {
        fe_values.reinit(cell);

        cell_cf = 0;

        ...

        // Configurational force computation ...

        ...

        cell->get_dof_indices(local_dof_indices);

        constraints.distribute_local_to_global(cell_cf,
                                               local_dof_indices,
                                               local_configurational_forces);
      }
  }

local_configurational_forces.compress(VectorOperation::add);

In output_results():

// ========================[ CONFIGURATIONAL FORCES ]==========================

configurational_forces = local_configurational_forces; // <-- here I receive an error!

std::vector<std::string> configurational_forces_magnitude(dim, "config_forces");

std::vector<DataComponentInterpretation::DataComponentInterpretation> configurational_forces_interpretation(dim, DataComponentInterpretation::component_is_part_of_vector);

data_out.add_data_vector(configurational_forces, configurational_forces_magnitude, DataOut<dim>::type_dof_data, configurational_forces_interpretation);

I receive the following error:

make all 
Scanning dependencies of target solid_mechanics
[ 50%] Building CXX object CMakeFiles/solid_mechanics.dir/solid_mechanics.cc.o
/home/seyedali/fe_models/deal.II/solid_mechanics/solid_mechanics.cc(950): error: no operator "=" matches these operands
            operand types are: const dealii::LinearAlgebraPETSc::MPI::Vector = const dealii::LinearAlgebraPETSc::MPI::Vector
  configurational_forces = local_configurational_forces;
                        ^
          detected during:
            instantiation of "void SolidMechanics<dim>::do_initial_timestep() [with dim=2]" at line 1081
            instantiation of "void SolidMechanics<dim>::run() [with dim=2]" at line 1113

compilation aborted for /home/seyedali/fe_models/deal.II/solid_mechanics/solid_mechanics.cc (code 2)
make[2]: *** [CMakeFiles/solid_mechanics.dir/solid_mechanics.cc.o] Error 2
CMakeFiles/solid_mechanics.dir/build.make:62: recipe for target 'CMakeFiles/solid_mechanics.dir/solid_mechanics.cc.o' failed
CMakeFiles/Makefile2:195: recipe for target 'CMakeFiles/solid_mechanics.dir/all' failed
make[1]: *** [CMakeFiles/solid_mechanics.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

I assume the size doesn't match. Can I really just write it like you suggested?

Jean-Paul Pelteret

Feb 4, 2017, 5:24:31 PM
to dea...@googlegroups.com
Seyed,

That's a compile-time error, not a run-time error. I bet that your output_results() function is marked as const, so you cannot modify the configurational_forces vector unless it's declared mutable. But if you have any doubts, stick to what's done in step-40. I'm not particularly familiar with the way that one does things with PETSc vectors, but the principles that I mentioned earlier still apply.

Best,
Jean-Paul


Message has been deleted

Seyed Ali Mohseni

Feb 5, 2017, 1:04:22 PM
to deal.II User Group
I tried the procedure from step-40 and these are the results:

DEAL.II: [result screenshot not preserved]

C++ CODE: [result screenshot not preserved]
I already tried an interpolate_boundary_values approach, but I don't think it is related to the boundaries, since I only have boundary conditions at the bottom and top boundaries. We still get bad results at the left and right boundaries, though.

Any ideas what could cause such behavior?

Is it even correct to initialize a global MPI vector within the setup_system() function, using the locally_owned_dofs and locally_relevant_dofs information, before assembling and solving the system, to fill this vector after the system has been solved, and finally to pass it to the output_results() function?
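To make the ordering concrete, this is what I mean (only a sketch with the names from my earlier posts; the comments indicate which member function each part lives in):

// in setup_system():
configurational_forces.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_com); // ghosted, for output
local_configurational_forces.reinit(locally_owned_dofs, mpi_com);                  // non-ghosted, for assembly

// in compute_config_forces(), called after the system has been solved:
// ... assemble the cell contributions into local_configurational_forces ...
local_configurational_forces.compress(VectorOperation::add);

// in output_results():
configurational_forces = local_configurational_forces; // imports the ghost entries
data_out.add_data_vector(configurational_forces,
                         configurational_forces_magnitude,
                         DataOut<dim>::type_dof_data,
                         configurational_forces_interpretation);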


Kind regards,
Seyed Ali

Seyed Ali Mohseni

Feb 5, 2017, 3:45:00 PM
to deal.II User Group
Is it possible to output an MPI vector with pcout for different ranks/cores?

Within the constructor I define the following:

pcout(std::cout, (Utilities::MPI::this_mpi_process(mpi_com) == 0)),

Hence, when I replace 0 with 1 or 2, it should output for the second or third core, but it doesn't work and I receive a PETSc error.

Am I not able to output from a core other than the first one?
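For reference, what I am attempting amounts to something like the following (only a sketch of my own; the rank-by-rank loop with MPI_Barrier is not from any tutorial):

for (unsigned int rank = 0; rank < Utilities::MPI::n_mpi_processes(mpi_com); ++rank)
  {
    if (Utilities::MPI::this_mpi_process(mpi_com) == rank)
      {
        std::cout << "Process " << rank << ":" << std::endl;
        configurational_forces.print(std::cout); // should print only the locally stored entries
      }
    MPI_Barrier(mpi_com);
  }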

Best regards,
S. A. Mohseni

Seyed Ali Mohseni

Feb 6, 2017, 8:39:42 AM
to deal.II User Group
Maybe this helps: 

How do you compute "reaction forces" within deal.II? 

Did you ever compute reaction forces and write them as output within a parallel framework?

Best,
Seyed Ali

Daniel Arndt

Feb 6, 2017, 9:09:42 AM
to deal.II User Group
Seyed,

Your results look like the ghost values are wrong. My first guess would be that you are still using a completely distributed vector (one initialized with locally_owned_dofs only) instead of a ghosted one (initialized with locally_owned_dofs and locally_relevant_dofs). Can you confirm that after solving you do something like

configurational_forces = local_configurational_forces;

and that

configurational_forces.has_ghost_elements() == true
local_configurational_forces.has_ghost_elements() == false

holds? Can you confirm that you use configurational_forces and not local_configurational_forces for output?
What is the output of configurational_forces.print(std::cout)?
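If it helps, you can check both conditions directly in the code right after the copy, e.g. with something like (a sketch):

AssertThrow(configurational_forces.has_ghost_elements(),
            ExcMessage("The output vector is not ghosted."));
AssertThrow(!local_configurational_forces.has_ghost_elements(),
            ExcMessage("The assembly vector must not be ghosted."));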

What kind of error do you get when trying with pcout instead of std::cout?

Maybe this helps: 

How do you compute "reaction forces" within deal.II? 
What exactly do you mean by that? We can help much better if you provide us with formulas rather than application-specific names.
From my understanding, you mean a projection, don't you? That is, solving for u such that (u,v) = (f,v) for all v and a given f?
This is implemented in the VectorTools::project functions.[1]
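A minimal serial call would look roughly like the following sketch, assuming a dof_handler, a constraints object, a finite element fe, and a Function<dim> f already exist in your code:

Vector<double> projected(dof_handler.n_dofs());
VectorTools::project(dof_handler,
                     constraints,
                     QGauss<dim>(fe.degree + 1),
                     f,
                     projected);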

Best,
Daniel

[1] https://www.dealii.org/developer/doxygen/deal.II/namespaceVectorTools.html

Jean-Paul Pelteret

Feb 6, 2017, 9:42:12 AM
to dea...@googlegroups.com
Dear Seyed,

Daniel’s been nice enough to furnish you with more information, but at this point one of your deleted comments is quite telling to me:

Any ideas what could cause such a behavior? I am really not an expert yet to dive deeply inside the deal.II parallel structure. 

As we explain in our introductory note for the forum, it's really up to you to make some more effort to understand the parts of deal.II that you make use of. We do our best here to supplement the extensive documentation that's already in place, but if you don't have a basic understanding of what steps need to be taken to approach a problem and why they're necessary, then there's not much more we can do. Sometimes a bigger hammer is not the best way to solve a problem.

Good luck with solving your problem.

Best regards,
Jean-Paul

Seyed Ali Mohseni

Feb 6, 2017, 12:40:29 PM
to deal.II User Group
@Daniel: Thanks. I just tried Jean-Paul's approach again, but with a new procedure. It worked. If I could, I would mark both of your comments as the best answer.

Thank you everyone, again :) 

I will not forget your kind assistance and hope to be able to return the favor soon.
