PETScWrappers::MPI::BlockSparseMatrix


Мария Бронзова

Jan 17, 2022, 6:42:41 AM1/17/22
to deal.II User Group
Dear colleagues,

I was trying to make adjustments to my script to enable parallel computing (it takes unbearably long to compute even small problems, presumably because of the large sparse matrix).

In my script I am using block matrices and vectors with complex elements (i.e., BlockSparseMatrix<std::complex<double>>). I discovered that LA::MPI::BlockSparseMatrix<std::complex<double>> system_matrix produces a compile error (error: ‘dealii::LinearAlgebraPETSc::MPI::BlockVector’ {aka ‘dealii::PETScWrappers::MPI::BlockVector’} is not a template
  149 |     LA::MPI::BlockVector<std::complex<double>> locally_relevant_solution;), whereas LA::MPI::BlockSparseMatrix system_matrix works just fine.

Could you please give me a hint on how to possibly fix this issue?

Thank you a lot!

With kind regards,
Mariia Bronzova




Wolfgang Bangerth

Jan 17, 2022, 7:27:13 PM1/17/22
to dea...@googlegroups.com

Mariia,

> I was trying to make adjustments to my script to enable parallel computing
> (it takes unbearably long to compute even small problems, presumably because
> of the large sparse matrix).
>
> In my script I am using block matrices and vectors with complex elements
> (i.e., BlockSparseMatrix<std::complex<double>>). I discovered that
> LA::MPI::BlockSparseMatrix<std::complex<double>> system_matrix produces a
> compile error (error:
> ‘dealii::LinearAlgebraPETSc::MPI::BlockVector’ {aka
> ‘dealii::PETScWrappers::MPI::BlockVector’} is not a template
>   149 |     LA::MPI::BlockVector<std::complex<double>>
> locally_relevant_solution;), whereas LA::MPI::BlockSparseMatrix
> system_matrix works just fine.
>
> Could you please give me a hint on how to possibly fix this issue?

There is no issue to fix. The class is not a template, so
LA::MPI::BlockSparseMatrix
is the correct syntax, and
LA::MPI::BlockSparseMatrix<std::complex<double>>
is wrong.

The underlying issue is that you choose the type to be used for vectors and
matrices in PETSc at the time you compile PETSc. This is different from
deal.II and Trilinos classes, where you choose the type whenever you create a
variable in your own code. If you compile PETSc with default options, vector
and matrix entries are always 'double'. If you want anything else, you have to
say so when you compile PETSc.
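
To illustrate, here is a minimal sketch of what this means in code (the
--with-scalar-type=complex option is PETSc's configure flag; everything else
is an illustrative fragment, not code from your program):

// Minimal sketch, assuming PETSc itself was configured with
//   ./configure --with-scalar-type=complex
// The deal.II PETSc wrapper classes take no template argument; their element
// type is PetscScalar, which is fixed when PETSc is built.
#include <deal.II/lac/generic_linear_algebra.h>
#include <complex>
#include <type_traits>

namespace LA = dealii::LinearAlgebraPETSc;

struct ComplexProblem
{
  LA::MPI::BlockSparseMatrix system_matrix;            // note: no <...> argument
  LA::MPI::BlockVector       locally_relevant_solution;
};

// PetscScalar is std::complex<double> only in such a complex PETSc build:
static_assert(std::is_same<PetscScalar, std::complex<double>>::value,
              "this sketch assumes a complex-scalar PETSc build");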

Best
W.


--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Мария Бронзова

Jan 19, 2022, 5:41:09 AM1/19/22
to deal.II User Group
Dear Wolfgang,

Thank you a lot for the explanation. PETSc came preinstalled on my system, so I had not had a chance to see that there are different compile options. Now it's clear!

Anyway, I ran into a further issue with the script I wanted to parallelize. I am dealing with non-SPD matrices arising from a mixed displacement-pressure formulation. I decided to use the MUMPS direct solver, as I cannot come up with a reasonable preconditioner for an iterative solver. I have a block matrix structure and was trying to solve the system via:

PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
solver.solve(system_matrix, completely_distributed_solution, system_rhs);

But I receive an error:

error: cannot convert ‘dealii::LinearAlgebraPETSc::MPI::BlockSparseMatrix’ {aka ‘dealii::PETScWrappers::MPI::BlockSparseMatrix’} to ‘const dealii::PETScWrappers::MatrixBase&’
  925 |    solver.solve(system_matrix, completely_distributed_solution, system_rhs);

 
I checked the member function documentation for MUMPS and found only a solve() function that takes a const MatrixBase&, but I guess in my case a const BlockMatrixBase& would make more sense.

Could you please tell me whether there is a way to go about this?

Thank you a lot for your support and time!

With kind regards,
Mariia Bronzova
On Tuesday, January 18, 2022 at 01:27:13 UTC+1, Wolfgang Bangerth wrote:

Wolfgang Bangerth

Jan 19, 2022, 11:42:10 AM1/19/22
to dea...@googlegroups.com
On 1/19/22 03:41, Мария Бронзова wrote:
> I checked the member function documentation for MUMPS and found only a
> solve() function that takes a const MatrixBase&, but I guess in my case
> a const BlockMatrixBase& would make more sense.
>
> Could you please tell me whether there is a way to go about this?

You just can't use a block matrix with the PETSc direct solver. But
since we use block matrices and vectors primarily for the construction
of solvers and preconditioners, there is also no real reason to use
block matrices if you use a direct solver.
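
In other words, if you assemble directly into non-block objects, the call you
had should work essentially unchanged. A rough sketch (the variable names just
mirror your snippet above):

// Rough sketch with non-block PETSc objects; names mirror the earlier snippet.
PETScWrappers::MPI::SparseMatrix system_matrix;
PETScWrappers::MPI::Vector       completely_distributed_solution;
PETScWrappers::MPI::Vector       system_rhs;

// ... reinit() the matrix and vectors, assemble the system ...

SolverControl                    solver_control;
PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);

// solve() expects a (non-block) PETScWrappers::MatrixBase, hence the earlier error:
solver.solve(system_matrix, completely_distributed_solution, system_rhs);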

Мария Бронзова

Jan 24, 2022, 8:03:12 AM1/24/22
to deal.II User Group
Dear Wolfgang,

Thank you for your response. There is indeed not much need to keep the block structure in the code with a direct solver, but I was still wondering, since the script looks more compact that way.

Anyway, I rewrote several parts of the script to get rid of block matrices and vectors. The compiler no longer complains while building the CXX object, but the linker returns an error while linking the CXX executable. It concerns the affine constraints (as shown below). I wonder whether there is a problem with using complex-valued constraints in parallel computations. Could you please assist in understanding the issue?

Thank you a lot!

Kind regards,
Mariia Bronzova

/home/masianichik/dealii/include/deal.II/lac/affine_constraints.h:2475: error: undefined reference to 'void dealii::AffineConstraints<std::complex<double> >::distribute_local_to_global<dealii::PETScWrappers::MPI::SparseMatrix, dealii::PETScWrappers::MPI::Vector>(dealii::FullMatrix<std::complex<double> > const&, dealii::Vector<std::complex<double> > const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, dealii::PETScWrappers::MPI::SparseMatrix&, dealii::PETScWrappers::MPI::Vector&, bool, std::integral_constant<bool, false>) const'
/home/masianichik/dealii/examples/step-22/step-22.cc:601: error: undefined reference to 'void dealii::AffineConstraints<std::complex<double> >::distribute<dealii::PETScWrappers::MPI::Vector>(dealii::PETScWrappers::MPI::Vector&) const'
collect2: error: ld returned 1 exit status
make[3]: *** [CMakeFiles/step-22.dir/build.make:137: step-22] Error 1
make[2]: *** [CMakeFiles/Makefile2:291: CMakeFiles/step-22.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:271: CMakeFiles/run.dir/rule] Error 2
make: *** [Makefile:215: run] Error 2

On Wednesday, January 19, 2022 at 17:42:10 UTC+1, Wolfgang Bangerth wrote:

Wolfgang Bangerth

Jan 27, 2022, 4:40:19 PM1/27/22
to dea...@googlegroups.com

Mariia,

> Anyway, I rewrote several parts of the script to get rid of block matrices
> and vectors. The compiler no longer complains while building the CXX object,
> but the linker returns an error while linking the CXX executable. It concerns
> the affine constraints (as shown below). I wonder whether there is a problem
> with using complex-valued constraints in parallel computations. Could you
> please assist in understanding the issue?
>
> Thank you a lot!
>
> Kind regards,
> Mariia Bronzova
>
> /home/masianichik/dealii/include/deal.II/lac/affine_constraints.h:2475: error:
> undefined reference to 'void dealii::AffineConstraints<std::complex<double>
> >::distribute_local_to_global<dealii::PETScWrappers::MPI::SparseMatrix,
> dealii::PETScWrappers::MPI::Vector>(dealii::FullMatrix<std::complex<double> >
> const&, dealii::Vector<std::complex<double> > const&, std::vector<unsigned
> int, std::allocator<unsigned int> > const&,
> dealii::PETScWrappers::MPI::SparseMatrix&,
> dealii::PETScWrappers::MPI::Vector&, bool, std::integral_constant<bool,
> false>) const'

I have to admit that I don't understand the problem -- but I also don't have
an installation that uses complex PETSc types. If you look at the bottom of
source/lac/affine_constraints.cc in the deal.II installation that you
compiled, you will find these lines:

#define INSTANTIATE_DLTG_VECTORMATRIX(MatrixType, VectorType)   \
  template void AffineConstraints<MatrixType::value_type>::     \
    distribute_local_to_global<MatrixType, VectorType>(         \
      const FullMatrix<MatrixType::value_type> &,               \
      const Vector<VectorType::value_type> &,                   \
      const std::vector<AffineConstraints::size_type> &,        \
      MatrixType &,                                             \
      VectorType &,                                             \
      bool,                                                     \
      std::integral_constant<bool, false>) const

...

INSTANTIATE_DLTG_VECTORMATRIX(PETScWrappers::SparseMatrix,
                              Vector<PetscScalar>);
INSTANTIATE_DLTG_VECTORMATRIX(PETScWrappers::SparseMatrix,
                              PETScWrappers::MPI::Vector);
INSTANTIATE_DLTG_VECTORMATRIX(PETScWrappers::MPI::SparseMatrix,
                              Vector<PetscScalar>);
INSTANTIATE_DLTG_VECTORMATRIX(PETScWrappers::MPI::SparseMatrix,
                              PETScWrappers::MPI::Vector);

This should tell the compiler to compile the (first) function you are missing
above. In particular, the last two lines should do this. I don't have an idea
why that isn't working. You might have to dig around a bit to see why it is
not instantiating the function you want.
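
For what it's worth, expanding the last of those lines by hand gives (roughly,
and assuming PetscScalar is std::complex<double> in a complex PETSc build)
exactly the signature the linker reports as missing:

// Hand expansion of
//   INSTANTIATE_DLTG_VECTORMATRIX(PETScWrappers::MPI::SparseMatrix,
//                                 PETScWrappers::MPI::Vector);
// where MatrixType::value_type is PetscScalar:
template void AffineConstraints<PetscScalar>::
  distribute_local_to_global<PETScWrappers::MPI::SparseMatrix,
                             PETScWrappers::MPI::Vector>(
    const FullMatrix<PetscScalar> &,
    const Vector<PetscScalar> &,
    const std::vector<AffineConstraints<PetscScalar>::size_type> &,
    PETScWrappers::MPI::SparseMatrix &,
    PETScWrappers::MPI::Vector &,
    bool,
    std::integral_constant<bool, false>) const;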