PETSc & Trilinos BlockSparseMatrix reinit with zero locally owned components


richard....@gmx.at

Jul 30, 2019, 9:51:18 AM
to deal.II User Group
Hello everyone,

(the question is formulated at the bottom; read on for background)

I am working on an ALE-FSI code in deal.II, available here:
The nice feature of this specific method is that, by pulling the fluid back into the
Lagrangian/undeformed/reference configuration
(see, e.g., the publications by Wick, Richter, Jodlbauer, Langer & Yang, etc.),
one can use globally defined function spaces for the (vector-valued) velocity and
displacement fields and for the scalar pressure.
"Globally" here means on both the solid and the fluid domain, i.e., we can use the following:

FESystem<dim> fe(FE_Q<dim>(pressDegree + 1), dim,  // velocities
                 FE_Q<dim>(pressDegree + 1), dim,  // displacements
                 FE_DGP<dim>(pressDegree),   1);   // pressure

Now, since we want to solve this system efficiently using preconditioners for
each block independently, we use an LDU decomposition (Jodlbauer et al., 2019, IJNME).
Therefore, we sort the degrees of freedom via cell->material_id(), which
indicates the fluid or solid subdomain. So we go from
[vel ; def ; press] to [vel_fluid ; def_fluid ; press_fluid ; vel_solid ; def_solid ; press_solid].
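
For readers unfamiliar with this pattern: the material_id flag is typically set once on the triangulation right after mesh creation. A minimal sketch (cell_is_in_fluid_domain, fluid_id, and solid_id are hypothetical names, not taken from the actual code):

    const types::material_id fluid_id = 0;
    const types::material_id solid_id = 1;

    // Tag every cell as belonging to the fluid or the solid subdomain.
    for (const auto &cell : triangulation.active_cell_iterators())
      cell->set_material_id(cell_is_in_fluid_domain(cell) ? fluid_id : solid_id);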

The setup() so far looks something like this [see setup.txt]:
---------------------------------------
1.) distribute dofs.
2.) renumbering: Cuthill_McKee, component_wise, and finally depending on material_id().
3.) count the sizes of the blocks (which gives correct numbers).
4.) create index sets for locally owned & locally relevant dofs.
5.) set up constraints (a sketch of steps 1-5 follows after the code below).
6.) create bdsp:

    BlockDynamicSparsityPattern dsp(6, 6);
    std::vector<unsigned int> n_dof_per_block(6);
    n_dof_per_block[0] = n_v_f;
    n_dof_per_block[1] = n_u_f;
    n_dof_per_block[2] = n_p_f;
    n_dof_per_block[3] = n_v_s;
    n_dof_per_block[4] = n_u_s;
    n_dof_per_block[5] = n_p_s;

    for (unsigned int i = 0; i < 6; ++i)
      for (unsigned int j = 0; j < 6; ++j)
        if (true) // n_dof_per_block[i] > 0 && n_dof_per_block[j] > 0
          dsp.block(i, j).reinit(n_dof_per_block[i], n_dof_per_block[j]);

    dsp.compress();
    dsp.collect_sizes();

    DoFTools::make_sparsity_pattern(dof_handler, dsp,
                                    constraints, false,
                                    Utilities::MPI::this_mpi_process(mpi_communicator));

    SparsityTools::distribute_sparsity_pattern(dsp,
                                               dof_handler.locally_owned_dofs_per_processor(),
                                               mpi_communicator,
                                               locally_relevant_dofs);

7.) Now use bdsp to initialize system_matrix (LA::MPI::BlockSparseMatrix):

    system_matrix.reinit(locally_owned_partitioning,
                         locally_owned_partitioning,
                         dsp,
                         mpi_communicator);
--------------------------------
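
For completeness, here is a minimal sketch of what steps 1.)-5.) might look like. All names follow the snippet above, the per-block index sets are built with IndexSet::get_view, and AffineConstraints is the deal.II 9.1 name of what was ConstraintMatrix in 9.0 — treat this as an illustration, not the actual code from setup.txt:

    dof_handler.distribute_dofs(fe);                        // 1.)

    DoFRenumbering::Cuthill_McKee(dof_handler);             // 2.)
    DoFRenumbering::component_wise(dof_handler);
    // ...followed by a custom renumbering based on cell->material_id()

    // 3.) the block sizes n_v_f, n_u_f, ... are counted here
    //     (custom code, per component and per material id)

    // 4.) global and per-block index sets
    const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
    IndexSet locally_relevant_dofs;
    DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

    std::vector<IndexSet> locally_owned_partitioning;
    types::global_dof_index offset = 0;
    for (unsigned int b = 0; b < 6; ++b)
      {
        locally_owned_partitioning.push_back(
          locally_owned_dofs.get_view(offset, offset + n_dof_per_block[b]));
        offset += n_dof_per_block[b];
      }

    // 5.) constraints (hanging nodes, boundary values, ...)
    AffineConstraints<double> constraints;
    constraints.reinit(locally_relevant_dofs);
    DoFTools::make_hanging_node_constraints(dof_handler, constraints);
    constraints.close();

Note that on a rank owning no solid cells, the views for the solid blocks come out empty — exactly the situation described next.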

The very last part (the reinit) works only if every processor owns parts of both the fluid AND the solid domain (which is so far not ensured).
This behaviour can be seen by running exactly the same problem with mpirun -n 2 versus -n 3 (output attached).

THE QUESTION IS NOW:
How can one reinit the blocks of a BlockSparseMatrix if not every processor has locally_owned_dofs in all blocks?
Or what is the point I am missing here?

I added a chunk of code at the end that shows that block(block_row, block_col).reinit(...) fails if the locally_owned index set is empty.
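
To illustrate the failure mode, here is a hypothetical minimal sketch (not the attached code) for the PETSc backend: on a rank that owns no solid cells, the index set for, e.g., the solid-pressure block has the correct global size but no locally owned elements, and the block reinit then aborts:

    IndexSet empty_rows(n_p_s);  // correct global size, but this rank owns
                                 // no solid-pressure dofs, so the set is empty

    PETScWrappers::MPI::SparseMatrix pressure_block;
    pressure_block.reinit(empty_rows, empty_rows,
                          dsp.block(5, 5), mpi_communicator);
    // with deal.II v9.0.1 this trips an assertion about the empty local range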

Thanks for any help in advance, 
Richard
Attachments: setup.txt, success.txt, error.txt

Daniel Arndt

Jul 30, 2019, 11:04:16 AM
to dea...@googlegroups.com
Richard,

> [...]
> The very last part (the reinit) works only if every processor owns parts of both the fluid AND the solid domain (which is so far not ensured).
> This behaviour can be seen by running exactly the same problem with mpirun -n 2 versus -n 3 (output attached).
>
> THE QUESTION IS NOW:
> How can one reinit the blocks of a BlockSparseMatrix if not every processor has locally_owned_dofs in all blocks?
> Or what is the point I am missing here?
>
> I added a chunk of code at the end that shows that block(block_row, block_col).reinit(...) fails if the locally_owned index set is empty.

This assertion was removed in https://github.com/dealii/dealii/pull/6106. Either use a recent release or simply remove it in your copy of the deal.II source files.
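
If you want to guard your own sources against building with an affected installation, one option (a sketch, assuming per the answer above that the fix shipped with v9.1) is deal.II's compile-time version macro:

    #include <deal.II/base/config.h>

    #if !DEAL_II_VERSION_GTE(9, 1, 0)
    #  error "reinit with empty locally owned block ranges needs the fix from dealii/dealii#6106; build against deal.II >= 9.1"
    #endif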

Best,
Daniel

richard....@gmx.at

Jul 31, 2019, 4:43:47 AM
to deal.II User Group
Dear Daniel,

thank you very much for your quick and concise answer!
Just for the record & other beginners out there:
I was using deal.II v9.0.1, since the code I adapted (see original post)
was written for that version of deal.II, but I will take this as
a sign to switch to the newest release!

If switching to v9.1.0 does not do the trick,
I will post here again.

Thanks once again,
Kind regards,
Richard