On 9/27/22 11:54, Rahul Gopalan Ramachandran wrote:
>
> I am having a problem applying static condensation as explained in step-44
> in parallel. The objective is to employ a parallel solver. Any guidance on
> the issue would be very helpful.
>
> As of now, the code works when run on a single MPI process, but fails when
> np > 1. The error originates from the following part of the code, where it
> tries to read elements of the tangent_matrix, which is a
> PETScWrappers::MPI::BlockSparseMatrix. The issue arises when trying to
> access DoFs not owned by the MPI process at the partition boundary.
>
> //-----------------------------------------------------------------------------
> for (unsigned int i = 0; i < dofs_per_cell; ++i)
>   for (unsigned int j = 0; j < dofs_per_cell; ++j)
>     data.k_orig(i, j) =
>       tangent_matrix.el(data.local_dof_indices[i],
>                         data.local_dof_indices[j]);
> //-----------------------------------------------------------------------------
>
Rahul -- the problem is that, unlike vectors, matrices are generally not
written to use data structures that provide you with the equivalent of "ghost
elements". That is because, in most applications, matrices are "write only":
you build the matrix element-by-element, but you never read from it
element-by-element. As a consequence, people don't provide the facilities to
let you read elements stored on another process.
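For comparison, this is what the "ghost element" mechanism looks like on the
vector side -- a rough sketch only, using the usual step-40-style names
(locally_owned_dofs, locally_relevant_dofs, mpi_communicator,
distributed_solution), which may not match your program:

//-----------------------------------------------------------------------------
// Vectors *do* have a ghosted variant whose off-process entries you can read:
PETScWrappers::MPI::Vector ghosted_solution(locally_owned_dofs,
                                            locally_relevant_dofs,
                                            mpi_communicator);
ghosted_solution = distributed_solution; // copies values and imports ghosts

// Reading a locally-relevant-but-not-owned entry is now allowed:
const double u = ghosted_solution(some_ghost_dof_index);

// There is no matrix analogue: you may add()/set() into rows you do not own
// (those entries travel to the owning process during compress()), but el()
// can only read rows the calling process owns.
//-----------------------------------------------------------------------------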
If you need this kind of functionality, you probably have to build it
yourself. The way I would approach this is to look at your algorithm and
figure out which matrix elements you need to be able to read from rows that
are not locally owned (most likely the rows corresponding to locally-relevant
but not locally-owned DoFs; it may also be the locally-active but not
locally-owned ones). You
will then have to read these elements on the process that owns them and send
them to all processes that need them but don't own them. In the current
development version, you can use functions such as Utilities::MPI::Isend to do
that with whatever data structure you find convenient, but it can also be done
with standard MPI calls.
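To make this concrete, here is a rough, untested sketch of what such an
exchange could look like, built on Utilities::MPI::some_to_some(). All names
other than the deal.II calls (gather_ghost_matrix_entries, needed_entries,
row_owner, ...) are made up for illustration; it assumes you have already
collected the (row, column) pairs you need from rows you do not own, and that
you know which rank owns each of those rows:

//-----------------------------------------------------------------------------
// Sketch: fetch matrix entries that live in rows owned by other processes.
#include <deal.II/base/mpi.h>
#include <deal.II/base/types.h>
#include <deal.II/lac/petsc_block_sparse_matrix.h>

#include <map>
#include <utility>
#include <vector>

using namespace dealii;

using Entry     = std::pair<types::global_dof_index, types::global_dof_index>;
using EntryList = std::vector<Entry>;

std::map<Entry, double> gather_ghost_matrix_entries(
  const PETScWrappers::MPI::BlockSparseMatrix &tangent_matrix,
  const EntryList &needed_entries,         // (row,col) pairs in non-owned rows
  const std::map<types::global_dof_index, unsigned int> &row_owner,
  const MPI_Comm   mpi_communicator)
{
  // Group the requested (row,col) pairs by the rank that owns the row.
  std::map<unsigned int, EntryList> requests;
  for (const Entry &e : needed_entries)
    requests[row_owner.at(e.first)].push_back(e);

  // Ship the requests to the owners. On each process, 'incoming_requests'
  // then contains what the *other* processes want from me.
  const std::map<unsigned int, EntryList> incoming_requests =
    Utilities::MPI::some_to_some(mpi_communicator, requests);

  // Look up the requested values -- these rows are locally owned here, so
  // el() is allowed -- and send them back in the same order.
  std::map<unsigned int, std::vector<double>> replies;
  for (const auto &p : incoming_requests)
    for (const Entry &e : p.second)
      replies[p.first].push_back(tangent_matrix.el(e.first, e.second));

  const std::map<unsigned int, std::vector<double>> incoming_replies =
    Utilities::MPI::some_to_some(mpi_communicator, replies);

  // Match the returned values back to the entries I asked for.
  std::map<Entry, double> ghost_entries;
  for (const auto &p : requests)
    {
      const std::vector<double> &values = incoming_replies.at(p.first);
      for (unsigned int k = 0; k < p.second.size(); ++k)
        ghost_entries[p.second[k]] = values[k];
    }

  return ghost_entries;
}
//-----------------------------------------------------------------------------

Which rows you need, and which rank owns them, you can typically work out
from your IndexSets (for example, locally_relevant_dofs minus
locally_owned_dofs, together with something like
Utilities::MPI::compute_index_owner()). Note also that the two exchanges
above are collective, so every process has to call them even if it needs
nothing.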
This isn't particularly convenient, but it is the best anyone can offer short
of writing the necessary functionality in PETSc itself.
Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth          email:  bang...@colostate.edu
                           www:    http://www.math.colostate.edu/~bangerth/