Dear Luca,
Thanks for your answer. I've made some progress, but I am still encountering problems in the interpolation; this seems to be a problem with the way I use PETSc.
As far as I understand, I need to initialize a PETSc vector of global length n_global_particles * n_components (in my case two components, the space dimension), where each process locally owns n_locally_owned_particles * n_components entries.
First, a simple question: does n_global_particles work correctly on processes that own no particles? It seems to return zero when no particles are locally owned, but the documentation, as I read it, says it should yield the same value on all processes.
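To make the question concrete, this is the kind of check I mean (a minimal sketch on my part; in the full code below the same check happens right after inserting the particles):

particle_handler.update_cached_numbers();
// I would expect every rank to print the same global count here,
// but ranks that own no particles print 0:
std::cout << "rank " << this_mpi_process << ": n_global_particles = "
          << particle_handler.n_global_particles() << std::endl;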
The code I am using is this:
void Step3::setup_system()
{
  GridTools::partition_triangulation(n_mpi_processes, triangulation);
  dof_handler.distribute_dofs(fe);
  DoFRenumbering::subdomain_wise(dof_handler);

  const std::vector<IndexSet> locally_owned_dofs_per_proc =
    DoFTools::locally_owned_dofs_per_subdomain(dof_handler);
  const IndexSet locally_owned_dofs =
    locally_owned_dofs_per_proc[this_mpi_process];

  field.reinit(locally_owned_dofs, mpi_communicator);
  VectorTools::interpolate(dof_handler, initial_function, field);

  // Particles: one particle at the center of every locally owned cell
  // whose center lies below y = 0.03.
  particle_handler.initialize(triangulation, StaticMappingQ1<2>::mapping);

  std::vector<Point<2>> particles_to_insert;
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->subdomain_id() == this_mpi_process &&
        cell->center()(1) < 0.03)
      {
        particles_to_insert.emplace_back(cell->center());
        std::cout << "MPI_rank: " << this_mpi_process
                  << " cell_center: " << cell->center() << std::endl;
      }

  particle_handler.insert_particles(particles_to_insert);
  particle_handler.update_cached_numbers();

  // n_global_particles returns zero when no particles are locally owned,
  // so compute the real global number by summing over all processes:
  const auto total_particles =
    Utilities::MPI::sum(particle_handler.n_global_particles(),
                        mpi_communicator);

  for (unsigned int i = 0; i < n_mpi_processes; ++i)
    if (this_mpi_process == i)
      std::cout << "MPI_process: " << i
                << " locally_owned_particles: "
                << particle_handler.n_locally_owned_particles()
                << " total_particles: " << total_particles
                << " n_global_particles: "
                << particle_handler.n_global_particles() << std::endl;

  // Global size: all particles times 2 components; local size:
  // locally owned particles times 2 components.
  field_on_particles.reinit(mpi_communicator,
                            total_particles * 2,
                            particle_handler.n_locally_owned_particles() * 2);
  field_on_particles.compress(VectorOperation::add); // unnecessary?

  for (unsigned int i = 0; i < n_mpi_processes; ++i)
    if (this_mpi_process == i)
      std::cout << "VECTOR"
                << "MPI_process: " << i
                << " vec.size: " << field_on_particles.size()
                << " vec.local_size: " << field_on_particles.local_size()
                << std::endl;

  Particles::Utilities::interpolate_field_on_particles(dof_handler,
                                                       particle_handler,
                                                       field,
                                                       field_on_particles);
}
As you can see, I am following the documentation of Particles::Utilities::interpolate_field_on_particles, which says that "the size of the vector must be n_locally_owned_particles times the n_components"; I take that to be the local size, so I guess the global size should be n_global_particles * n_components. I might be completely wrong, though.
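For completeness, here is the IndexSet-based initialization that I understand should be equivalent to the reinit above (only a sketch on my part: it assumes each rank owns one contiguous, rank-ordered block of particle indices, which I have not verified):

// Sketch: build the locally owned index range via an exclusive prefix
// sum of the local sizes, then reinit the vector from the IndexSet.
const unsigned int local_size =
  particle_handler.n_locally_owned_particles() * 2;
unsigned int offset = 0;
MPI_Exscan(&local_size, &offset, 1, MPI_UNSIGNED, MPI_SUM, mpi_communicator);
if (this_mpi_process == 0)
  offset = 0; // MPI_Exscan leaves the result undefined on rank 0

IndexSet locally_owned_particle_entries(total_particles * 2);
locally_owned_particle_entries.add_range(offset, offset + local_size);
field_on_particles.reinit(locally_owned_particle_entries, mpi_communicator);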
The output I am getting is this, followed by the MPI abort:
/usr/bin/mpirun -np 4 cmake-build-debug/step-3
MPI_rank: 0 cell_center: 0.015625 0.015625
MPI_rank: 0 cell_center: 0.046875 0.015625
MPI_rank: 0 cell_center: 0.078125 0.015625
MPI_rank: 0 cell_center: 0.109375 0.015625
MPI_rank: 0 cell_center: 0.140625 0.015625
MPI_rank: 0 cell_center: 0.171875 0.015625
MPI_rank: 0 cell_center: 0.203125 0.015625
MPI_rank: 0 cell_center: 0.234375 0.015625
MPI_rank: 0 cell_center: 0.265625 0.015625
MPI_rank: 0 cell_center: 0.296875 0.015625
MPI_rank: 0 cell_center: 0.328125 0.015625
MPI_rank: 0 cell_center: 0.359375 0.015625
MPI_rank: 2 cell_center: 0.390625 0.015625
MPI_rank: 2 cell_center: 0.421875 0.015625
MPI_rank: 2 cell_center: 0.453125 0.015625
MPI_rank: 2 cell_center: 0.484375 0.015625
MPI_rank: 2 cell_center: 0.515625 0.015625
MPI_rank: 2 cell_center: 0.546875 0.015625
MPI_rank: 2 cell_center: 0.578125 0.015625
MPI_rank: 2 cell_center: 0.609375 0.015625
MPI_rank: 2 cell_center: 0.640625 0.015625
MPI_rank: 2 cell_center: 0.671875 0.015625
MPI_rank: 2 cell_center: 0.703125 0.015625
MPI_rank: 2 cell_center: 0.734375 0.015625
MPI_rank: 2 cell_center: 0.765625 0.015625
MPI_rank: 2 cell_center: 0.796875 0.015625
MPI_rank: 2 cell_center: 0.828125 0.015625
MPI_rank: 2 cell_center: 0.859375 0.015625
MPI_rank: 2 cell_center: 0.890625 0.015625
MPI_rank: 2 cell_center: 0.921875 0.015625
MPI_rank: 2 cell_center: 0.953125 0.015625
MPI_rank: 2 cell_center: 0.984375 0.015625
MPI_process: 1 locally_owned_particles: 0 total_particles: 32 n_global_particles: 0
MPI_process: 0 locally_owned_particles: 12 total_particles: 32 n_global_particles: 12
MPI_process: 3 locally_owned_particles: 0 total_particles: 32 n_global_particles: 0
MPI_process: 2 locally_owned_particles: 20 total_particles: 32 n_global_particles: 20
VECTORMPI_process: 3 vec.size: 64 vec.local_size: 0
VECTORMPI_process: 1 vec.size: 64 vec.local_size: 0
VECTORMPI_process: 0 vec.size: 64 vec.local_size: 24
VECTORMPI_process: 2 vec.size: 64 vec.local_size: 40
The error is the following:
--------------------------------------------------------
An error occurred in line <220> of file </media/murer/murer/Programmi/dealii_9.2.0_MPI_P4est/include/deal.II/particles/utilities.h> in function
void dealii::Particles::Utilities::interpolate_field_on_particles(const dealii::DoFHandler<dim, spacedim>&, const dealii::Particles::ParticleHandler<dim, spacedim>&, const InputVectorType&, OutputVectorType&, const dealii::ComponentMask&) [with int dim = 2; int spacedim = 2; InputVectorType = dealii::PETScWrappers::MPI::Vector; OutputVectorType = dealii::PETScWrappers::MPI::Vector]
The violated condition was:
static_cast<typename ::dealii::internal::argument_type<void( typename std::common_type<decltype(interpolated_field.size()), decltype(particle_handler.get_next_free_particle_index() * n_comps)>::type)>::type>(interpolated_field.size()) == static_cast<typename ::dealii::internal::argument_type<void( typename std::common_type<decltype(interpolated_field.size()), decltype(particle_handler.get_next_free_particle_index() * n_comps)>::type)>::type>(particle_handler.get_next_free_particle_index() * n_comps)
Additional information:
Dimension 64 not equal to 24.
Stacktrace:
-----------
#0 cmake-build-debug/step-3: void dealii::Particles::Utilities::interpolate_field_on_particles<2, 2, dealii::PETScWrappers::MPI::Vector, dealii::PETScWrappers::MPI::Vector>(dealii::DoFHandler<2, 2> const&, dealii::Particles::ParticleHandler<2, 2> const&, dealii::PETScWrappers::MPI::Vector const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
#1 cmake-build-debug/step-3: Step3::setup_system()
#2 cmake-build-debug/step-3: Step3::run()
#3 cmake-build-debug/step-3: main
--------------------------------------------------------
Calling MPI_Abort now.
To break execution in a GDB session, execute 'break MPI_Abort' before running. You can also put the following into your ~/.gdbinit:
set breakpoint pending on
break MPI_Abort
set breakpoint pending auto
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 255.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
The same assertion also fails on rank 2, with an otherwise identical report except for:
Additional information:
Dimension 64 not equal to 40.
[murerpc:11716] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[murerpc:11716] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Process finished with exit code 255
I have tried many solutions, even initializing the vector like the field solution so that it has excess space (a sketch of what I mean follows below), but to no avail. Note also that the assertion compares the global vector size (64) against get_next_free_particle_index() * n_comps, which evaluates to 24 = 12 * 2 on rank 0 and 40 = 20 * 2 on rank 2, so get_next_free_particle_index() apparently also counts only the locally owned particles.
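Concretely, by "initializing the vector like the field solution" I mean roughly this (a sketch, reusing the DoF IndexSet from setup_system() above, so the vector ends up larger than n_global_particles * n_components):

// Sketch of the "excess space" attempt: reuse the DoF index layout,
// which makes the vector at least as large as the particle data needs.
field_on_particles.reinit(locally_owned_dofs, mpi_communicator);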
What am I missing?
Thanks for any help you can provide!
Franco