When I run the code in serial, the solution vector is transferred correctly. However, the transfer fails as soon as I run on more than one processor (even two). The runtime error is shown below.
I examined the way the mesh is partitioned in both cases (from the parallel output files) and it appears to be the same.
Details of how each mesh and DoFHandler are defined in the code, as well as how the data is transferred (using VectorTools::interpolate), are appended after the error message.
Any guidance would be appreciated.
Best,
David
--------------------------------------------------------
An error occurred in line <1588> of file </usr/local/include/deal.II/lac/la_parallel_vector.h> in function
Number dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::operator()(dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::size_type) const [with Number = double; MemorySpace = dealii::MemorySpace::Host; dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::size_type = unsigned int]
The violated condition was:
partitioner->in_local_range(global_index) || partitioner->ghost_indices().is_element(global_index)
Additional information:
You tried to access element 54 of a distributed vector, but this
element is not stored on the current processor. Note: The range of
locally owned elements is [135,242], and there are 27 ghost elements
that this vector can access.
A common source for this kind of problem is that you are passing a
'fully distributed' vector into a function that needs read access to
vector elements that correspond to degrees of freedom on ghost cells
(or at least to 'locally active' degrees of freedom that are not also
'locally owned'). You need to pass a vector that has these elements as
ghost entries.
Stacktrace:
-----------
#0 ./main: dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::operator()(unsigned int) const
#1 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::extract_subvector_to<unsigned int const*, double*>(unsigned int const*, unsigned int const*, double*) const
#2 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::internal::DoFAccessorImplementation::Implementation::extract_subvector_to<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, double*>(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, unsigned int const*, unsigned int const*, double*)
#3 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::DoFCellAccessor<3, 3, false>::get_dof_values<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, double*>(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, double*, double*) const
#4 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::DoFCellAccessor<3, 3, false>::get_dof_values<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, double>(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, dealii::Vector<double>&) const
#5 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::DoFCellAccessor<3, 3, false>::get_interpolated_dof_values<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, double>(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, dealii::Vector<double>&, unsigned short) const
#6 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::FEValuesBase<3, 3>::CellIteratorContainer::get_interpolated_dof_values<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> >(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, dealii::Vector<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::value_type>&) const
#7 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::FEValuesBase<3, 3>::get_function_values<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> >(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, std::vector<dealii::Vector<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::value_type>, std::allocator<dealii::Vector<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::value_type> > >&) const
#8 /usr/local/lib/libdeal_II.g.so.9.5.0: dealii::Functions::FEFieldFunction<3, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, 3>::vector_value_list(std::vector<dealii::Point<3, double>, std::allocator<dealii::Point<3, double> > > const&, std::vector<dealii::Vector<double>, std::allocator<dealii::Vector<double> > >&) const
#9 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::VectorTools::internal::interpolate<3, 3, dealii::PETScWrappers::MPI::Vector, dealii::VectorTools::interpolate<3, 3, dealii::PETScWrappers::MPI::Vector>(dealii::hp::MappingCollection<3, 3> const&, dealii::DoFHandler<3, 3> const&, dealii::Function<3, dealii::PETScWrappers::MPI::Vector::value_type> const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)::{lambda(dealii::TriaActiveIterator<dealii::DoFCellAccessor<3, 3, false> > const&)#1} const>(dealii::hp::MappingCollection<3, 3> const&, dealii::DoFHandler<3, 3> const&, dealii::VectorTools::interpolate<3, 3, dealii::PETScWrappers::MPI::Vector>(dealii::hp::MappingCollection<3, 3> const&, dealii::DoFHandler<3, 3> const&, dealii::Function<3, dealii::PETScWrappers::MPI::Vector::value_type> const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)::{lambda(dealii::TriaActiveIterator<dealii::DoFCellAccessor<3, 3, false> > const&)#1} const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
#10 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::VectorTools::interpolate<3, 3, dealii::PETScWrappers::MPI::Vector>(dealii::hp::MappingCollection<3, 3> const&, dealii::DoFHandler<3, 3> const&, dealii::Function<3, dealii::PETScWrappers::MPI::Vector::value_type> const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
#11 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::VectorTools::interpolate<3, 3, dealii::PETScWrappers::MPI::Vector>(dealii::Mapping<3, 3> const&, dealii::DoFHandler<3, 3> const&, dealii::Function<3, dealii::PETScWrappers::MPI::Vector::value_type> const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
#12 /usr/local/lib/libdeal_II.g.so.9.5.0: void dealii::VectorTools::interpolate<3, 3, dealii::PETScWrappers::MPI::Vector>(dealii::DoFHandler<3, 3> const&, dealii::Function<3, dealii::PETScWrappers::MPI::Vector::value_type> const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
#13 ./main: MultiPhysicsBVP<3, 1>::solve_cp()
#14 ./main: crystalPlasticity<3>::run()
#15 ./main: main
--------------------------------------------------------
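(For reference, the distinction the message draws between a "fully distributed" and a ghosted vector looks roughly like this; dof_handler and mpi_communicator are placeholders for the objects described below.)
//Sketch only: a fully distributed vector stores just the locally owned range;
//a ghosted vector additionally stores read-only copies of DoFs on ghost cells.
IndexSet owned_dofs = dof_handler.locally_owned_dofs();
IndexSet relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, relevant_dofs);
LinearAlgebra::distributed::Vector<double> fully_distributed(owned_dofs, mpi_communicator);
LinearAlgebra::distributed::Vector<double> ghosted(owned_dofs, relevant_dofs, mpi_communicator);
ghosted = fully_distributed;   //copy the locally owned values
ghosted.update_ghost_values(); //fetch ghost entries from their owning processes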
The following describes how the solution vectors are declared and initialized on each of the two meshes.
Source mesh:
(note that dofHandlersSet, solutionSet, and fe for this mesh are defined as pointers)
//Declare and define triangulation
parallel::distributed::Triangulation<dim> triangulation_pf(mpi_communicator);
GridGenerator::subdivided_hyper_rectangle (triangulation_pf, userInputs_pf.subdivisions,Point<dim>(), Point<dim>(userInputs_pf.domain_size[0],userInputs_pf.domain_size[1],userInputs_pf.domain_size[2]));
//Create FESet containing a fe for each field
FESystem<dim>* fe;
fe=new FESystem<dim>(FE_Q<dim>(QGaussLobatto<1>(degree+1)),1);
FESet.push_back(fe);
//distribute DOFs for every field
DoFHandler<dim>* dof_handler;
dof_handler=new DoFHandler<dim>(triangulation_pf);
dofHandlersSet.push_back(dof_handler);
dof_handler->distribute_dofs (*fe);
//SolutionSet is declared and initialized as
typedef dealii::LinearAlgebra::distributed::Vector<double> vectorType_pf;
std::vector<vectorType_pf*> solutionSet;
//For every field we do
vectorType_pf *U, *R;
U=new vectorType_pf; R=new vectorType_pf;
solutionSet.push_back(U); residualSet.push_back(R);
matrixFreeObject.initialize_dof_vector(*R, fieldIndex); *R=0;
matrixFreeObject.initialize_dof_vector(*U, fieldIndex); *U=0;
constraintsDirichletSet[fieldIndex]->distribute(*solutionSet[fieldIndex]);
constraintsOtherSet[fieldIndex]->distribute(*solutionSet[fieldIndex]);
solutionSet[fieldIndex]->update_ghost_values();
//Apply Initial conditions for every field index (this interpolation works even in parallel)
VectorTools::interpolate (*dofHandlersSet[var_index], InitialCondition<dim,degree>(var_index,userInputs_pf,this), *solutionSet[var_index]);
Target mesh:
//Declare and define triangulation
parallel::distributed::Triangulation<dim> triangulation_cp(mpi_communicator);
GridGenerator::subdivided_hyper_rectangle (triangulation_cp, userInputs_cp.subdivisions, Point<dim>(), Point<dim>(userInputs_cp.span[0],userInputs_cp.span[1],userInputs_cp.span[2]), true);
//Declare finite element
FE_Q<dim> fe_q(1);
FESystem<dim> FE_Scalar(fe_q, 1); //FE_Scalar(fe_q, 1) in the constructor's initializer list
//Create dofHandler and distribute DOFs
DoFHandler<dim> dofHandler_Scalar(triangulation_cp); //dofHandler_Scalar(triangulation_cp) in the constructor's initializer list
dofHandler_Scalar.distribute_dofs (FE_Scalar);
//Declare solution vector
PETScWrappers::MPI::Vector solution_cp;
//Non-ghosted solution
PETScWrappers::MPI::Vector non_ghosted_solution_cp;
The data transfer is done in the following way:
IndexSet own_dofs = dofHandler_Scalar.locally_owned_dofs();
IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dofHandler_Scalar, locally_relevant_dofs);
//Creating function with data from source mesh
Functions::FEFieldFunction<dim, vectorType_pf> fe_function_1(
*pf_obj.getDofHandlersSet()[0], *pf_obj.getSolutionSet()[0]);
//Setting sizes of the solution_cp and non_ghosted_solution_cp vectors
solution_cp.reinit(own_dofs, locally_relevant_dofs, mpi_communicator);
non_ghosted_solution_cp.reinit(own_dofs, mpi_communicator);
// Data transfer via interpolation of fe_function_1 - works in serial but crashes in parallel
VectorTools::interpolate(dofHandler_Scalar, fe_function_1, non_ghosted_solution_cp);
solution_cp = non_ghosted_solution_cp;
solution_cp.compress(VectorOperation::insert);
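One check I have not added yet (a hypothetical snippet) would be to verify that the vector handed to FEFieldFunction is actually ghosted and up to date:
//Hypothetical check: FEFieldFunction must be able to read DoFs on ghost cells
//of the source mesh, so the vector it wraps has to carry ghost entries.
Assert(pf_obj.getSolutionSet()[0]->has_ghost_elements(),
       ExcMessage("source vector is not ghosted"));
pf_obj.getSolutionSet()[0]->update_ghost_values();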
David wrote:
> I have two distinct distributed meshes representing the same spatial
> domain and (for now) with equal refinement (number of cells),
Do I understand correctly that these meshes are identical, but that you
store them in separate variables? Why not just one mesh that two
DoFHandlers build on?
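Something along these lines, i.e., one triangulation and two DoFHandlers (a sketch with placeholder names; fe_pf and fe_cp stand in for your two finite elements):
//Sketch: a single distributed triangulation that both fields live on.
parallel::distributed::Triangulation<dim> triangulation(mpi_communicator);
GridGenerator::subdivided_hyper_rectangle(triangulation, subdivisions,
                                          Point<dim>(), upper_corner);
DoFHandler<dim> dof_handler_pf(triangulation);
DoFHandler<dim> dof_handler_cp(triangulation);
dof_handler_pf.distribute_dofs(fe_pf); //placeholder for the QGaussLobatto-based FE_Q
dof_handler_cp.distribute_dofs(fe_cp); //placeholder for the FESystem of FE_Q(1)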
> The violated condition was:
>
> partitioner->in_local_range(global_index) ||
> partitioner->ghost_indices().is_element(global_index)
>
> Additional information:
> You tried to access element 54 of a distributed vector, but this
> element is not stored on the current processor. Note: The range of
> locally owned elements is [135,242], and there are 27 ghost elements
> that this vector can access.
>
> A common source for this kind of problem is that you are passing a
> 'fully distributed' vector into a function that needs read access to
> vector elements that correspond to degrees of freedom on ghost cells
> (or at least to 'locally active' degrees of freedom that are not also
> 'locally owned'). You need to pass a vector that has these elements as
> ghost entries.
Before we go into the weeds, is this true in your case? Is the vector
you are passing into the FEFieldFunction (where the error originates)
ghosted?
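If it is not, you can create a ghosted copy and hand that to the FEFieldFunction instead (a sketch reusing the names from your snippets):
//Sketch: ghosted copy of the source solution for FEFieldFunction to read.
IndexSet relevant_pf;
DoFTools::extract_locally_relevant_dofs(*pf_obj.getDofHandlersSet()[0], relevant_pf);
vectorType_pf ghosted_pf(pf_obj.getDofHandlersSet()[0]->locally_owned_dofs(),
                         relevant_pf, mpi_communicator);
ghosted_pf = *pf_obj.getSolutionSet()[0]; //copy locally owned values
ghosted_pf.update_ghost_values();         //make ghost entries readable
Functions::FEFieldFunction<dim, vectorType_pf> fe_function_1(
  *pf_obj.getDofHandlersSet()[0], ghosted_pf);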
Best
Wolfgang
>> Do I understand correctly that these meshes are identical, but that you
>> store them in separate variables? Why not just one mesh that two
>> DoFHandlers build on?
>
> Yes, right now they are the same mesh (for testing) but in the future we will
> want to have different meshes that represent the same domain but with
> different subdivisions.
There lies heartbreak. If you use meshes with different partitioning, you
cannot easily transfer information from one mesh to another because the
information you need from one mesh to interpolate onto the other is no longer
guaranteed to live on the same MPI process.
It's possible to do this (there are tools such as FERemoteEvaluation), but it
becomes *much* more difficult and expensive to have meshes that are
partitioned differently.
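To make the failure mode concrete (a hypothetical snippet): if a process asks a FEFieldFunction for a value at a point whose cell it neither owns nor ghosts, deal.II throws rather than silently fetching the value from another process:
//Illustration only: evaluating at a point that lives on another process fails.
try
  {
    const double v = fe_function_1.value(point_on_remote_cell);
    (void)v;
  }
catch (const VectorTools::ExcPointNotAvailableHere &)
  {
    //The owning process has to supply this value; organizing that exchange is
    //what tools like FERemoteEvaluation (or Utilities::MPI::RemotePointEvaluation)
    //are for.
  }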