PETScWrappers::SparseDirectMUMPS use in parallel version of step-22


Anna Avdeeva

Sep 13, 2017, 4:49:50 AM
to deal.II User Group
Dear All,

I have modified step-22 to solve a system of Maxwell's equations, and then used step-40 (and step-55) to parallelize the code.
In the serial version I used

template <int inner_solver_id>
struct InnerPreconditioner;

template <>
struct InnerPreconditioner<0>
{
  typedef SparseDirectUMFPACK type;
};

template <>
struct InnerPreconditioner<1>
{
  // type of preconditioner for an iterative solver
};

This typedef determines the type of preconditioner I need for the CG solver, which in turn is used to build yet another preconditioner for the FGMRES solution of my system.
Now, in the parallel implementation, I would like to use PETScWrappers::SparseDirectMUMPS instead of SparseDirectUMFPACK.

However, I am getting the following error:

 error: ‘SparseDirectMUMPS’ in namespace ‘LA’ does not name a type
 typedef LA::SparseDirectMUMPS type;

Here LA is defined as:
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
}

Is it possible to do something similar to step-22 with SparseDirectMUMPS, or do I have to rewrite the code and remove the struct InnerPreconditioner?

Thank you very much in advance for all your help.

Anna

Timo Heister

Sep 13, 2017, 8:52:22 AM
to dea...@googlegroups.com
Anna,

> Now with parallel implementation I would like to use
> PETScWrappers::SparseDirectMUMPS instead of SparseDirectUMFPACK.
>
> However I am getting the following error:
>
> error: ‘SparseDirectMUMPS’ in namespace ‘LA’ does not name a type
> typedef LA::SparseDirectMUMPS type;

You have to use PETScWrappers::SparseDirectMUMPS here; SparseDirectMUMPS is not part of the LA (LinearAlgebraPETSc) namespace, which is why the typedef fails.
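
For example (a minimal sketch, reusing the InnerPreconditioner template from your post):

template <>
struct InnerPreconditioner<0>
{
  typedef PETScWrappers::SparseDirectMUMPS type;
};

Note, though, that SparseDirectMUMPS has no vmult() member, so the code that uses this typedef has to call solve() instead.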


--
Timo Heister
http://www.math.clemson.edu/~heister/

Anna Avdeeva

Sep 19, 2017, 4:30:38 AM
to deal.II User Group
Dear Timo,

Thank you for your reply.

I am still having trouble implementing my code with MUMPS.

I will briefly describe the problem:

I am solving two systems of Maxwell's equations in the following way:

The systems are Ax1 = b1 and Ax2 = b2, where A is a sparse, block-symmetric matrix (C -M; -M -C).
I solve them with FGMRES preconditioned with Pa = (B 0; 0 B) = (C+M 0; 0 C+M).
To apply inv(Pa) I need to apply inv(B), and this is what I would like to achieve with MUMPS. However, to save time, I want to factorize B only once for both b1 and b2.

So I would like to reuse the LDLt factorization the second time I solve the system, with b2.
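
Written out (this just restates the above):

\[
A = \begin{pmatrix} C & -M \\ -M & -C \end{pmatrix}, \qquad
P_a = \begin{pmatrix} B & 0 \\ 0 & B \end{pmatrix}, \qquad
B = C + M,
\]

so applying inv(Pa) to a block vector amounts to two independent solves with the same matrix B, one per block; a single factorization of B therefore suffices for all of them.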

With SparseDirectUMFPACK I did the following:

1) I declared
     std_cxx11::shared_ptr<typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type> matrix_B_preconditioner;
   in my main class,
2) at the beginning of setup_dofs I call matrix_B_preconditioner.reset();
3) I assemble the system matrix and right-hand side separately, and after assembling the system matrix I set
     matrix_B_preconditioner = std_cxx11::shared_ptr<typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type>(
       new typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type());
     matrix_B_preconditioner->initialize(matrix_B.block(0, 0),
       typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type::AdditionalData());
4) when solving, I create
     const LinearSolvers::Pa_Preconditioner<typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type>
       pa_preconditioner(*matrix_B_preconditioner);

where
a) InnerPreconditioner was:

template <int inner_solver_id>
struct InnerPreconditioner;

template <>
struct InnerPreconditioner<0>
{
  typedef SparseDirectUMFPACK type;
  // typedef PETScWrappers::SparseDirectMUMPS type;
};

b) Pa_Preconditioner was:

template <class PreconditionerB>
class Pa_Preconditioner : public Subscriptor
{
public:
  Pa_Preconditioner(const PreconditionerB &preconditioner_B);
  void vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const;

private:
  const PreconditionerB &preconditioner_B;
};

template <class PreconditionerB>
Pa_Preconditioner<PreconditionerB>::Pa_Preconditioner(
  const PreconditionerB &preconditioner_B)
  : preconditioner_B(preconditioner_B)
{}

template <class PreconditionerB>
void Pa_Preconditioner<PreconditionerB>::vmult(
  LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const
{
  preconditioner_B.vmult(dst.block(0), src.block(0));
  preconditioner_B.vmult(dst.block(1), src.block(1));
}

As far as I understand, with MUMPS I have to change this strategy: modify the Pa_Preconditioner class so that it takes the matrix B as input, and change Pa_Preconditioner::vmult as follows:

template <class Btype>
Pa_Preconditioner<Btype>::Pa_Preconditioner(const Btype &B,
                                            const MPI_Comm &mpi_communicator)
  : B(&B)
  , mpi_communicator(&mpi_communicator)
{}

template <class Btype>
void Pa_Preconditioner<Btype>::vmult(LA::MPI::BlockVector &dst,
                                     const LA::MPI::BlockVector &src) const
{
  SolverControl cn;
  PETScWrappers::SparseDirectMUMPS solver(cn, *mpi_communicator);
  solver.set_symmetric_mode(true);
  solver.solve(*B, dst.block(0), src.block(0));
  solver.solve(*B, dst.block(1), src.block(1));
}

Does this look reasonable? I think this makes the program much less general than my previous version, where I could choose between direct and iterative solvers via the template parameter. Also, I am not sure that the LDLt decomposition of B is reused in this case.

Any suggestions will be highly appreciated.

Thank you in advance, 
Anna


Timo Heister

Sep 19, 2017, 1:45:31 PM
to dea...@googlegroups.com
> template <class Btype>
> void Pa_Preconditioner<Btype>::vmult(LA::MPI::BlockVector &dst,
>                                      const LA::MPI::BlockVector &src) const
> {
>   SolverControl cn;
>   PETScWrappers::SparseDirectMUMPS solver(cn, *mpi_communicator);
>   solver.set_symmetric_mode(true);
>   solver.solve(*B, dst.block(0), src.block(0));
>   solver.solve(*B, dst.block(1), src.block(1));
> }
>
> Does this look reasonable? I think this makes the program much less general
> than my previous version, where I could choose between direct and iterative
> solvers via the template parameter. Also, I am not sure that the LDLt
> decomposition of B is reused in this case.

It looks like you are reconstructing the decomposition every time
inside your vmult. Why do you do that?

Anna Avdeeva

Sep 20, 2017, 2:12:55 AM
to deal.II User Group
Dear Timo,


The main reason I do this is that I do not understand how to reuse this decomposition in deal.II.
I am relatively new to deal.II and C++, and I have never used MUMPS before.

The way I set it up with SparseDirectUMFPACK was to use the InnerPreconditioner structure:


template <int inner_solver_id>
struct InnerPreconditioner;

template <>
struct InnerPreconditioner<0>
{
  typedef SparseDirectUMFPACK type;
};


and defined the inner preconditioner as:


matrix_B_preconditioner = std_cxx11::shared_ptr<
  typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type>(
  new typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type());

matrix_B_preconditioner->initialize(
  matrix_B.block(0, 0),
  typename LinearSolvers::InnerPreconditioner<inner_solver_id>::type::AdditionalData());


I cannot simply substitute

typedef SparseDirectUMFPACK type;

with

typedef PETScWrappers::SparseDirectMUMPS type;

since PETScWrappers::SparseDirectMUMPS does not have a member function vmult.


I am having trouble understanding where I need to add this vmult function.
I would appreciate any advice on this. Maybe there is an example in deal.II that uses MUMPS to construct a preconditioner.


Thank you very much in advance,

Anna

Timo Heister

Sep 25, 2017, 8:29:38 AM
to dea...@googlegroups.com
Anna,

> The main reason I do this is that I do not understand how to reuse this
> decomposition in deal.ii.
> I am relatively new to deal.ii and C++, and I have never used MUMPS before.

Well, this has nothing to do with MUMPS or deal.II. It sounds like you
are struggling because you are not familiar with C++ templates and
template specialization and how we use them in step-22. I also think you
are hung up on a detail ("how can I switch between solvers based on
the dimension?"), while you could just always use MUMPS.

> since PETScWrappers::SparseDirectMUMPS does not have member function vmult.

You could create a specialized InverseMatrix class that does something
different depending on the PreconditionerType. I would just
unconditionally make the vmult() inside InverseMatrix call the
::solve() of SparseDirectMUMPS.
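
For example, a minimal sketch of such a wrapper (the name MumpsInverse is made up for illustration; the headers are the deal.II 8.5-era names, and I assume B is a PETScWrappers::MPI::SparseMatrix):

#include <deal.II/base/subscriptor.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_parallel_sparse_matrix.h>
#include <deal.II/lac/petsc_parallel_vector.h>

using namespace dealii;

// Wrap SparseDirectMUMPS so that it offers the vmult() interface an
// outer block preconditioner expects. The solver object lives as long
// as the wrapper, so it does not have to be rebuilt on every apply.
class MumpsInverse : public Subscriptor
{
public:
  MumpsInverse(const PETScWrappers::MPI::SparseMatrix &B,
               const MPI_Comm &mpi_communicator)
    : B(B)
    , solver(solver_control, mpi_communicator)
  {
    // B = C + M is symmetric, so let MUMPS compute an LDLt factorization.
    solver.set_symmetric_mode(true);
  }

  void vmult(PETScWrappers::MPI::Vector &dst,
             const PETScWrappers::MPI::Vector &src) const
  {
    // Forward the preconditioner application to the direct solver.
    solver.solve(B, dst, src);
  }

private:
  const PETScWrappers::MPI::SparseMatrix &B;
  mutable SolverControl solver_control;
  mutable PETScWrappers::SparseDirectMUMPS solver;
};

Construct this once after assembly (in place of matrix_B_preconditioner->initialize(...)) and keep calling vmult() on it from Pa_Preconditioner, exactly as in the UMFPACK version. Whether the factorization is really computed only on the first solve() depends on the deal.II version, so it is worth timing the first and second application to check.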

> I would appreciate any advice on this. Maybe there is some example in
> deal.ii that use MUMPS to construct a preconditioner.

for example:
https://github.com/dealii/dealii/blob/master/tests/mpi/step-40_direct_solver.cc
https://github.com/dealii/dealii/blob/master/tests/petsc/sparse_direct_mumps.cc

Timo Heister

Sep 25, 2017, 8:31:33 AM
to dea...@googlegroups.com
And from your email, which I got off-list (please try to use the mailing list):

> [0]PETSC ERROR: #1 PetscCommDuplicate() line 137 in
> /home/anna/petsc-3.6.4/src/sys/objects/tagm.c
> An error occurred in line <724> of file
> </home/anna/dealii-8.5.0/source/lac/petsc_solver.cc> in function
> void dealii::PETScWrappers::SparseDirectMUMPS::solve(const
> dealii::PETScWrappers::MatrixBase&, dealii::PETScWrappers::VectorBase&,
> const dealii::PETScWrappers::VectorBase&)

I assume that you either did not set the MPI communicator in the MUMPS
object or you set it to something invalid.
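
For reference, the construction should look like this (a sketch; mpi_communicator must be the valid communicator the matrix and vectors live on, e.g. MPI_COMM_WORLD):

SolverControl cn;
PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
solver.set_symmetric_mode(true);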



Anna Avdeeva

Sep 25, 2017, 10:44:58 PM
to deal.II User Group
Dear Timo,

Sorry, I did not mean to write to your personal e-mail; I just pressed the wrong reply button by mistake.
Thank you very much for your reply. I will check how I set the mpi_communicator now and hopefully find the problem.

A couple of days ago I changed step-40 myself to use MUMPS instead of CG, and it works. So at least I can be sure that I understand how to use MUMPS and that PETSc with MUMPS were installed correctly.

Anna 