Difference between BlockDynamicSparsityPattern and TrilinosWrappers::BlockSparsityPattern


Bruno Blais

Aug 30, 2019, 4:27:48 PM
to deal.II User Group
Hello,
I am currently working on a parallel implementation of step-57, thus I am learning to live with BlockVectors, BlockMatrices and BlockSparsityPatterns in parallel.
Originally, I thought that I could make my sparsity pattern the following way (i.e., as in step-57, but distributing it afterwards):
std::vector<unsigned int> block_component(dim + 1, 0);
block_component[dim] = 1;
DoFRenumbering::component_wise(dof_handler, block_component);
dofs_per_block.resize(2);
DoFTools::count_dofs_per_block(dof_handler, dofs_per_block, block_component);
unsigned int dof_u = dofs_per_block[0];
unsigned int dof_p = dofs_per_block[1];

locally_owned_dofs.resize(2);
locally_owned_dofs[0] = dof_handler.locally_owned_dofs().get_view(0, dof_u);
locally_owned_dofs[1] = dof_handler.locally_owned_dofs().get_view(dof_u, dof_u + dof_p);

IndexSet locally_relevant_dofs_acquisition;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs_acquisition);
locally_relevant_dofs.resize(2);
locally_relevant_dofs[0] = locally_relevant_dofs_acquisition.get_view(0, dof_u);
locally_relevant_dofs[1] = locally_relevant_dofs_acquisition.get_view(dof_u, dof_u + dof_p);

... Place where I make my constraints ...

BlockDynamicSparsityPattern dsp(dofs_per_block, dofs_per_block);
DoFTools::make_sparsity_pattern(dof_handler, dsp, nonzero_constraints);

SparsityTools::distribute_sparsity_pattern(dsp,
                                           dof_handler.locally_owned_dofs_per_processor(),
                                           mpi_communicator,
                                           locally_relevant_dofs_acquisition);

system_matrix.reinit(dsp);
pressure_mass_matrix.reinit(dsp.block(1, 1));

When I run sequentially in Debug mode, I have no issue and my solver works perfectly. When I run in parallel, I get an error from within my preconditioner such as:
The violated condition was:
    in.trilinos_partitioner().SameAs(m.DomainMap()) == true
Additional information:
    Column map of matrix does not fit with vector map!

Clearly, I am doing something wrong with my sparsity pattern, since it appears that the blocks of my vector and the blocks of my matrix are incompatible in size.
I have found that step-32 uses a TrilinosWrappers::BlockSparsityPattern instead of a BlockDynamicSparsityPattern. Is this what I should do in my case?
I am unsure what the distinction is between what I am doing right now and what the TrilinosWrappers::BlockSparsityPattern would do.

If more information is needed, I can post a link to the code, which is, regretfully, quite big.
Best
Bruno

Wolfgang Bangerth

Aug 30, 2019, 11:41:05 PM
to dea...@googlegroups.com

Bruno,
I don't quite recall if we ever used the BlockDynamicSparsityPattern in a
parallel context. For sure, the way you're initializing it implies that every
process allocates memory for all DoFs, since it is not given the information
about locally_relevant_dofs. I'd have to look up whether there is a way to do
that. On the other hand, I would assume that the Trilinos version has a way to
designate what the locally relevant DoFs are on each process.
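
Something along the lines of what step-32 does should work (an untested
sketch; owned_partitioning and relevant_partitioning stand in for per-block
IndexSets of locally owned and locally relevant DoFs, and nonzero_constraints
is your constraints object):

TrilinosWrappers::BlockSparsityPattern sp(owned_partitioning,
                                          owned_partitioning,
                                          relevant_partitioning,
                                          mpi_communicator);
// Only add entries for cells owned by this process; the relevant rows
// passed above tell the pattern which off-processor rows may be written.
DoFTools::make_sparsity_pattern(dof_handler, sp, nonzero_constraints, false,
                                Utilities::MPI::this_mpi_process(mpi_communicator));
sp.compress();
system_matrix.reinit(sp);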

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Bruno Blais

Sep 1, 2019, 3:02:51 PM
to deal.II User Group
Dear Wolfgang,
Thank you very much for your message.
Is there a difference between how DynamicSparsityPatterns and BlockDynamicSparsityPatterns behave?
When you look at step-40, which is the first "MPI" step, the sparsity pattern is made as follows (https://dealii.org/current/doxygen/deal.II/step_40.html#LaplaceProblemsetup_system):
DynamicSparsityPattern dsp(locally_relevant_dofs);
DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);
SparsityTools::distribute_sparsity_pattern(
    dsp,
    dof_handler.n_locally_owned_dofs_per_processor(),
    mpi_communicator,
    locally_relevant_dofs);

My understanding was that you were only making the sparsity pattern for your own locally relevant DoFs, and the distribution step would make sure that everything was coherent. However, the sparsity pattern is made in a completely different way in step-32 (https://dealii.org/current/doxygen/deal.II/step_32.html), and the TrilinosWrappers::BlockSparsityPattern is used.

Consequently, I was a bit confused as to the difference between these approaches. It seems like the second one is necessary for Block matrices?
I will take a deeper look into this, but I must say my understanding of that point right now is relatively poor :(
Thanks!
Bruno

Wolfgang Bangerth

Sep 3, 2019, 7:09:00 PM
to dea...@googlegroups.com

Bruno,

> Is there a difference between how DynamicSparsityPatterns and
> BlockDynamicSparsityPatterns behave?

The latter is just an array of the former. Under the hood, every block
is simply a DynamicSparsityPattern that can be initialized in the same
way one always does.
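
For example, something like the following (an untested sketch, reusing the
per-block index sets from your code; I believe step-55 does essentially this)
only stores, on each process, the rows listed in the relevant index sets:

std::vector<IndexSet> relevant_partitioning = {locally_relevant_dofs[0],
                                               locally_relevant_dofs[1]};
// Each block only allocates the rows in its IndexSet instead of all DoFs.
BlockDynamicSparsityPattern dsp(relevant_partitioning);
DoFTools::make_sparsity_pattern(dof_handler, dsp, nonzero_constraints, false);
SparsityTools::distribute_sparsity_pattern(dsp,
                                           dof_handler.locally_owned_dofs_per_processor(),
                                           mpi_communicator,
                                           locally_relevant_dofs_acquisition);
// The matrix is then told which rows it owns (assuming system_matrix is a
// TrilinosWrappers::BlockSparseMatrix):
system_matrix.reinit(locally_owned_dofs, dsp, mpi_communicator);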


> When you look at step-40, which is the first "MPI" step, the sparsity
> pattern is made as follows
> (https://dealii.org/current/doxygen/deal.II/step_40.html#LaplaceProblemsetup_system):
>
> DynamicSparsityPattern dsp(locally_relevant_dofs);
> DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);
> SparsityTools::distribute_sparsity_pattern(
>     dsp,
>     dof_handler.n_locally_owned_dofs_per_processor(),
>     mpi_communicator,
>     locally_relevant_dofs);
>
> My understanding was that you were only making the sparsity pattern for
> your own locally relevant DoFs, and the distribution step would make
> sure that everything was coherent.

Correct.


> However, the sparsity pattern is made in a completely different way in
> step-32 (https://dealii.org/current/doxygen/deal.II/step_32.html) and
> the TrilinosWrappers::BlockSparsityPattern is used.

Correct. In step-40, we use PETSc, which has no sparsity pattern data
structure of its own, and so we need to initialize the PETSc matrices
with our own. In contrast, Trilinos has its own sparsity pattern
classes, and so to initialize a Trilinos matrix, we need to build one of
their sparsity patterns. (Or block patterns, as it may be -- which again
is just an array of patterns.)
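
Schematically, and with hypothetical variable names, the two routes end as
follows (dsp being a distributed DynamicSparsityPattern as in step-40, sp a
TrilinosWrappers sparsity pattern as in step-32):

// PETSc (step-40 style): deal.II's own distributed sparsity pattern is
// handed to the matrix directly, together with the locally owned rows.
petsc_matrix.reinit(locally_owned_dofs, locally_owned_dofs, dsp,
                    mpi_communicator);

// Trilinos (step-32 style): build and compress a Trilinos sparsity pattern
// first, then initialize the matrix from it.
sp.compress();
trilinos_matrix.reinit(sp);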


> Consequently, I was a bit confused as to the difference between these
> approaches. It seems like the second one is necessary for Block matrices?
> I will take a deeper look into this, but I must say my understanding of
> that point right now is relatively poor :(

The difference has nothing to do with blocks and everything to do with
whether you base your linear algebra on PETSc or Trilinos.

Does this help?

Cheers

Bruno Blais

Sep 4, 2019, 7:04:52 PM
to deal.II User Group
Dear Wolfgang,
Thank you, everything is clear now and I managed to accomplish what I wanted.
Thanks!
Bruno