Access to Trilinos::MPI::Vector::owned_elements() for debugging purposes?


Maxi Miller

Sep 10, 2017, 3:46:55 PM
to deal.II User Group
In my program I have a vector "present_solution", defined as

LinearAlgebraTrilinos::MPI::Vector      present_solution;
const size_t dof_numbers = dof_handler.n_dofs();

IndexSet solution_partitioning(dof_numbers), solution_relevant_partitioning(dof_numbers);

solution_partitioning = dof_handler.locally_owned_dofs();
DoFTools::extract_locally_relevant_dofs(dof_handler, solution_relevant_partitioning);
present_solution.reinit(solution_relevant_partitioning, MPI_COMM_WORLD);



and a ConstraintMatrix "boundary_constraints" defined as

ConstraintMatrix boundary_constraints;

boundary_constraints.clear();
boundary_constraints.reinit(solution_relevant_partitioning);
DoFTools::make_hanging_node_constraints(dof_handler, boundary_constraints);
VectorTools::interpolate_boundary_values(dof_handler, 0, BoundaryValues<dim>(), boundary_constraints);
boundary_constraints.close();

Thus I assume that both have the same dimensions. But now when calling
boundary_constraints.distribute(present_solution);

I get the following error (when running with more than one MPI process):

An error occurred in line <1367> of file </home/roland/Downloads/dealii/include/deal.II/lac/trilinos_vector.h> in function
    dealii::IndexSet dealii::TrilinosWrappers::MPI::Vector::locally_owned_elements() const
The violated condition was:
    owned_elements.size()==size()
Additional information:
    The locally owned elements have not been properly initialized! This happens for example if this object has been initialized with exactly one overlapping IndexSet.

In order to debug this, I would like to access the member "owned_elements", but it is declared private. Is there another way to debug my problem?
Thanks!



Timo Heister

Sep 10, 2017, 5:25:07 PM
to dea...@googlegroups.com

Maxi Miller

Sep 11, 2017, 7:45:03 AM
to deal.II User Group
Then I get:

An error occurred in line <754> of file <~/Downloads/dealii/include/deal.II/lac/constraint_matrix.templates.h> in function
    void dealii::internal::{anonymous}::import_vector_with_ghost_elements(const dealii::TrilinosWrappers::MPI::Vector&, const dealii::IndexSet&, const dealii::IndexSet&, dealii::TrilinosWrappers::MPI::Vector&, dealii::internal::bool2type<false>)
The violated condition was:
    !vec.has_ghost_elements()
Additional information:
    You are trying an operation on a vector that is only allowed if the vector has no ghost elements, but the vector you are operating on does have ghost elements. Specifically, vectors with ghost elements are read-only and cannot appear in operations that write into these vectors.

See the glossary entry on 'Ghosted vectors' for more information.
 
Stacktrace:
-----------
#0  /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre:  
#1  /opt/dealII/lib/libdeal_II.g.so.9.0.0-pre: void dealii::ConstraintMatrix::distribute<dealii::TrilinosWrappers::MPI::Vector>(dealii::TrilinosWrappers::MPI::Vector&) const
#2  main: Step15::MinimalSurfaceProblem<2>::run()
#3  main: main
--------------------------------------------------------
 
Calling MPI_Abort now.
To break execution in a GDB session, execute 'break MPI_Abort' before running. You can also put the following into your ~/.gdbinit:
  set breakpoint pending on
  break MPI_Abort
  set breakpoint pending auto
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 255.
 
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
 
[linux-lb8c:17949] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[linux-lb8c:17949] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

at the same line (the call to boundary_constraints.distribute()).

Timo Heister

Sep 11, 2017, 11:03:42 AM
to dea...@googlegroups.com
ConstraintMatrix::distribute() wants a vector without ghost elements:
http://www.dealii.org/developer/doxygen/deal.II/classConstraintMatrix.html#a1676b89d3936a007bb48a9d5210e6f07

You should do this like it is done in step-40, for example: solve and call distribute() on a vector without ghost elements, then copy the result into a vector with ghost elements.
