Dear developers and users,
I am trying to parallelize my incompressible Navier-Stokes solver, which is based on step-57. I am not sure how to initialize the various temporary distributed vectors used in the preconditioners. For example, I have a class called ApproximateMassSchur which applies B*(diag(Mu))^{-1}*B^T, where Mu is the velocity mass matrix.
Here is the serial code, which works fine:
class ApproximateMassSchur : public Subscriptor
{
public:
  ApproximateMassSchur(const BlockSparseMatrix<double> &M);

  void vmult(Vector<double> &dst, const Vector<double> &src) const;

private:
  const SmartPointer<const BlockSparseMatrix<double>> mass_matrix;
  mutable Vector<double> tmp1, tmp2;
};

ApproximateMassSchur::ApproximateMassSchur(const BlockSparseMatrix<double> &M)
  : mass_matrix(&M)
  , tmp1(M.block(0, 0).m())
  , tmp2(M.block(0, 0).m())
{}

void ApproximateMassSchur::vmult(Vector<double> &dst,
                                 const Vector<double> &src) const
{
  mass_matrix->block(0, 1).vmult(tmp1, src);
  mass_matrix->block(0, 0).precondition_Jacobi(tmp2, tmp1);
  mass_matrix->block(1, 0).vmult(dst, tmp2);
}
Here mass_matrix is a block matrix with sub-blocks [Mu, B^T; B, Mp], so as you can see in vmult I am computing dst = B*(diag(Mu))^{-1}*B^T*src.
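To spell out the linear algebra independently of deal.II, this is the operation vmult performs, written out as a tiny dense sketch (the matrix sizes and entries here are made up, just for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// dst = B * diag(Mu)^{-1} * B^T * src, where B is (n_p x n_u) and
// Mu_diag holds the diagonal of the velocity mass matrix Mu.
std::vector<double> apply_approx_schur(const std::vector<std::vector<double>> &B,
                                       const std::vector<double> &Mu_diag,
                                       const std::vector<double> &src)
{
  const std::size_t n_p = B.size();       // pressure dofs (rows of B)
  const std::size_t n_u = Mu_diag.size(); // velocity dofs (columns of B)

  // tmp1 = B^T * src  (velocity-sized)
  std::vector<double> tmp1(n_u, 0.0);
  for (std::size_t j = 0; j < n_p; ++j)
    for (std::size_t i = 0; i < n_u; ++i)
      tmp1[i] += B[j][i] * src[j];

  // tmp2 = diag(Mu)^{-1} * tmp1 -- this is the step that
  // precondition_Jacobi does in the serial code
  std::vector<double> tmp2(n_u);
  for (std::size_t i = 0; i < n_u; ++i)
    tmp2[i] = tmp1[i] / Mu_diag[i];

  // dst = B * tmp2  (pressure-sized)
  std::vector<double> dst(n_p, 0.0);
  for (std::size_t j = 0; j < n_p; ++j)
    for (std::size_t i = 0; i < n_u; ++i)
      dst[j] += B[j][i] * tmp2[i];

  return dst;
}
```

Both tmp1 and tmp2 are velocity-sized, which is why I size them from block(0, 0) above.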
Here is how I attempted to adapt it to PETSc; it is just an incomplete sketch:
class ApproximateMassSchur : public Subscriptor
{
public:
  ApproximateMassSchur(const PETScWrappers::MPI::BlockSparseMatrix &M);

  void vmult(PETScWrappers::MPI::Vector &dst,
             const PETScWrappers::MPI::Vector &src) const;

private:
  const SmartPointer<const PETScWrappers::MPI::BlockSparseMatrix> mass_matrix;
  const MPI_Comm &comm;
  mutable PETScWrappers::MPI::Vector tmp1, tmp2;
};

ApproximateMassSchur::ApproximateMassSchur(
  const PETScWrappers::MPI::BlockSparseMatrix &M)
  : mass_matrix(&M)
  , comm(mass_matrix->get_mpi_communicator())
  // owned_partitioning is not defined anywhere yet -- this is question 1 below
  , tmp1(owned_partitioning[0], comm)
  , tmp2(owned_partitioning[0], comm)
{}

void ApproximateMassSchur::vmult(PETScWrappers::MPI::Vector &dst,
                                 const PETScWrappers::MPI::Vector &src) const
{
  mass_matrix->block(0, 1).vmult(tmp1, src);
  // precondition_Jacobi does not exist here -- this is question 2 below
  mass_matrix->block(0, 0).precondition_Jacobi(tmp2, tmp1);
  mass_matrix->block(1, 0).vmult(dst, tmp2);
}
The temporary vectors tmp1 and tmp2 should have the same size as mass_matrix->block(0, 0).m(), i.e. the number of velocity dofs, and should be distributed. I would like to know:
1. What is the best practice for initializing tmp1 and tmp2? One way I can think of is to construct a std::vector<IndexSet> owned_partitioning as in step-55 and pass it into ApproximateMassSchur (assuming owned_partitioning[0] holds the locally owned velocity dofs). But I don't like this idea, because I think the distribution information should be obtainable from mass_matrix itself -- is there something like locally_owned_range_indices() on the individual blocks that I could use for this?
2. PETScWrappers::SparseMatrix does not have a precondition_Jacobi member function. Should I construct a temporary matrix and invert the diagonal terms manually, or is there an existing class such as PETScWrappers::PreconditionJacobi that would do the equivalent?
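To be concrete about what I mean by inverting the diagonal manually, this is the kind of helper I have in mind, as a minimal library-free sketch. In the real code the diagonal entries would of course have to come from the PETSc matrix (perhaps via something like diag_element(), if that is safe to call in parallel -- I am not sure):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Replacement idea for precondition_Jacobi: store 1/M_ii once, then each
// application is just an element-wise scaling of the locally owned entries.
struct DiagonalInverse
{
  std::vector<double> inv_diag;

  // diag holds the diagonal entries M_ii of the mass matrix
  explicit DiagonalInverse(const std::vector<double> &diag)
    : inv_diag(diag)
  {
    for (double &d : inv_diag)
      d = 1.0 / d;
  }

  // dst_i = src_i / M_ii
  void vmult(std::vector<double> &dst, const std::vector<double> &src) const
  {
    dst.resize(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
      dst[i] = src[i] * inv_diag[i];
  }
};
```

Since each process only needs the reciprocals of its own diagonal entries, this operation would not require any communication.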
Thank you very much!
Jie