First, I distribute the DoFs and collect the locally owned and locally relevant index sets:
// Solid
dof_handler_sld.distribute_dofs(fe_sld);
sld_owned_dofs = dof_handler_sld.locally_owned_dofs();
sld_relevant_dofs = DoFTools::extract_locally_relevant_dofs(dof_handler_sld);
// Shell
dof_handler_sh.distribute_dofs(fe_sh);
sh_owned_dofs = dof_handler_sh.locally_owned_dofs();
sh_relevant_dofs = DoFTools::extract_locally_relevant_dofs(dof_handler_sh);
Then I apply boundary conditions to both the solid and shell DoFHandlers (a rough sketch of that step is shown after the coupling code below). Next, I define the coupling points and retrieve the coupled DoFs:
sld_coup_dofs = nodal_coupling(coup_points, dof_handler_sld, tria_pft_sld.n_vertices());
sh_coup_dofs = nodal_coupling(coup_points, dof_handler_sh, tria_pft_sh.n_vertices());
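For completeness, the boundary-condition step looks roughly like the following. This is only a sketch: the boundary id 0, the AffineConstraints objects constraints_sld / constraints_sh, the zero Dirichlet values, and dim (the solid's space dimension) are stand-ins for my actual setup.
// Sketch only: boundary id, constraint objects, and zero values are placeholders.
AffineConstraints<double> constraints_sld;
constraints_sld.reinit(sld_relevant_dofs);
VectorTools::interpolate_boundary_values(dof_handler_sld,
                                         0, // boundary id (placeholder)
                                         Functions::ZeroFunction<dim>(fe_sld.n_components()),
                                         constraints_sld);
constraints_sld.close();
// ...and the same for the shell with dof_handler_sh, fe_sh, sh_relevant_dofs, constraints_sh.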
const std::vector<unsigned int> dofs_per_block = {n_dof_sld, n_dof_sh};
const std::vector<unsigned int> locally_owned_sizes = {sld_owned_dofs.size(), sh_owned_dofs.size()};
I reinitialize each of the blocks in my sparsity pattern:
BlockDynamicSparsityPattern dsp(dofs_per_block, dofs_per_block);
for (unsigned int i = 0; i < dofs_per_block.size(); ++i)
  for (unsigned int j = 0; j < dofs_per_block.size(); ++j)
    dsp.block(i, j).reinit(dofs_per_block[i], dofs_per_block[j]);
Then I use the solid and shell dof handlers to set up sub-matrices A and D.
dsp.collect_sizes();
DoFTools::make_sparsity_pattern(dof_handler_sld, dsp.block(0,0)); // A
DoFTools::make_sparsity_pattern(dof_handler_sh, dsp.block(1,1)); // D
Then, for the off-diagonal sub-matrices B and C, I add the coupled DOFs to the sparsity pattern.
for (unsigned int s = 0; s < sld_coup_dofs.size(); ++s)
  for (unsigned int d = 0; d < 3; ++d)
    {
      dsp.block(0, 1).add(sld_coup_dofs[s][d], sh_coup_dofs[s][d]); // B
      dsp.block(1, 0).add(sh_coup_dofs[s][d], sld_coup_dofs[s][d]); // C
    }
Where I start to get confused is when I try to reinitialize the system matrix. What I want to do is something like:
system_matrix.reinit(dsp, mpi_communicator);
This takes my MPI communicator and the sparsity pattern I’ve built up. However, this isn’t a valid call to PETScWrappers::MPI::BlockSparseMatrix::reinit(). There is a similar overload that takes the arguments (const std::vector<IndexSet> &sizes, const BlockDynamicSparsityPattern &bdsp, const MPI_Comm com).
I don’t really understand what I should put in for the "sizes" vector. What exactly is this argument expecting? Is it all of the locally owned/relevant DoFs? Do I just combine the locally owned solid DoFs and the locally owned shell DoFs into one vector?
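My current guess is that each entry of "sizes" is the locally owned IndexSet of the corresponding block, i.e. something like the sketch below, but I haven’t been able to confirm this:
// Guess only: one locally owned IndexSet per block (solid first, then shell).
std::vector<IndexSet> owned_partitioning = {sld_owned_dofs, sh_owned_dofs};
system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
Is that the intent, or do the IndexSets need to refer to the combined solid+shell numbering?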

