SparseMatrix initialization in a p4est program


gabriel...@koeln.de

Feb 17, 2019, 4:29:06 PM
to deal.II User Group
Hey everybody,
I have a problem with initializing a SparseMatrix in a parallel program.
I am using a parallel::distributed::Triangulation (as in the step-40 tutorial) on
the Navier-Stokes projection solver from the step-35 tutorial.
The problem arises in the function

NavierStokesProjection<dim>::initialize_gradient_operator()
{
  {
    DynamicSparsityPattern dsp(dof_handler_velocity.n_dofs(),
                               dof_handler_pressure.n_dofs());
    DoFTools::make_sparsity_pattern(dof_handler_velocity, dof_handler_pressure, dsp);
    sparsity_pattern_pres_vel.copy_from(dsp);
  }
  // ...
}

where I have to use two different FE spaces, one for the pressure and one for the velocity.
To perform this in parallel, I used the following code (following the sparse matrix setup in step-40):

{
  DynamicSparsityPattern dsp(locally_relevant_dofs_vel.size(),
                             locally_relevant_dofs_pres.size());
  DoFTools::make_sparsity_pattern(dof_handler_velocity, dof_handler_pressure, dsp);
  sparsity_pattern_pres_vel.copy_from(dsp);
}

This code compiles fine, but when running it with MPI it stops at the make_sparsity_pattern line with the error

this->is_artificial() == false
Additional information:
Can't ask for DoFs on artificial cells.

Does somebody know how I can fix this?

My last try was to use the IndexSets instead of their sizes when declaring dsp,
but unfortunately DynamicSparsityPattern can only take one IndexSet.
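
Roughly what I was aiming for was something like the following (only a sketch; locally_owned_dofs_vel is a placeholder for the velocity DoFHandler's locally owned IndexSet, passed as the row IndexSet in the third constructor argument):

// Sketch only: the third argument restricts which rows are stored locally.
DynamicSparsityPattern dsp(dof_handler_velocity.n_dofs(),
                           dof_handler_pressure.n_dofs(),
                           locally_owned_dofs_vel); // row IndexSet only
DoFTools::make_sparsity_pattern(dof_handler_velocity, dof_handler_pressure, dsp);

There is no way to pass a second IndexSet for the columns.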


Thanks a lot

Gabriel




David Wells

Feb 18, 2019, 4:21:06 PM
to deal.II User Group
Hi Gabriel,

I believe that you are still using the standard deal.II SparseMatrix class; is this correct? If so, this class won't work with fully distributed calculations. You will need to use either the Trilinos or PETSc wrappers since not all information is available on the local processor with the default SparseMatrix class.

Either way: I believe the error message you encountered is a result of a bug in deal.II. I don't think that the routine GridTools::get_finest_common_cells(), which is called by that particular make_sparsity_pattern variant, works with distributed triangulations. If you want to use distributed triangulations you will need to set up the coupled sparsity pattern in a different way.

If you want to parallelize step-35, it may be better to start by looking at step-32, which shows how to implement the Stokes equations (coupled to a Boussinesq equation) in a fully distributed setting. I believe they set up the coupling of the matrix with velocity and pressure blocks in a different way that should work for you.
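
For concreteness, the sparsity pattern setup there looks roughly like the following (a sketch only, assuming a single DoFHandler with velocity and pressure blocks as in step-32, the Trilinos wrappers, and placeholder names owned_partitioning, relevant_partitioning, constraints, and matrix):

// owned_partitioning / relevant_partitioning: std::vector<IndexSet> with one
// entry per block; constraints: the hanging-node/boundary constraints object.
TrilinosWrappers::BlockSparsityPattern sp(owned_partitioning,
                                          owned_partitioning,
                                          relevant_partitioning,
                                          MPI_COMM_WORLD);

// Couple only the velocity-pressure blocks, which is what a gradient/divergence
// operator needs; step-32 picks its own coupling table for the full Stokes matrix.
Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
for (unsigned int c = 0; c < dim + 1; ++c)
  for (unsigned int d = 0; d < dim + 1; ++d)
    coupling[c][d] = ((c < dim) != (d < dim)) ? DoFTools::always : DoFTools::none;

DoFTools::make_sparsity_pattern(dof_handler, coupling, sp, constraints, false,
                                Utilities::MPI::this_mpi_process(MPI_COMM_WORLD));
sp.compress();
matrix.reinit(sp); // a TrilinosWrappers::BlockSparseMatrix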

Does this make sense?

Thanks,
David Wells

gabriel...@koeln.de

Feb 25, 2019, 12:32:13 PM
to deal.II User Group




Hey,
sorry for the late response, I had some other work to take care of.

Today I had a closer look at the step-32 and step-31 tutorials,
and I think they don't help very much, because in the step-32 tutorial the pressure_fe_degree equals the velocity_fe_degree,
which, in my view, is what makes the function make_sparsity_pattern applicable, because you only have to pass in one DoFHandler object.
Thus one can use the same DoFHandler for velocity and pressure, and everything is created as in step-32.

In my case the velocity_fe_degree is one degree larger than the pressure_fe_degree.
If I understood the construction right, I can't do it that way in this case.

Do you see my problem, or am I wrong?

Best Regards
Gabriel

David Wells

Feb 26, 2019, 8:45:44 PM
to deal.II User Group
Hi Gabriel,

step-32 does use a pressure element of degree 1 lower than the velocity, but your main point is still right: step-35 uses multiple dof handlers for the velocity and pressure while step-32 uses FESystem to put everything in one dof handler. Therefore the step-32 approach won't work unless you rewrite step-35 to use FESystem and block vectors.
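
As a sketch of what that FESystem looks like with the pressure one degree lower (names like degree and triangulation are placeholders, and this is only an outline of the idea, not tested code):

FESystem<dim>   fe(FE_Q<dim>(degree + 1), dim, // velocity components
                   FE_Q<dim>(degree), 1);      // pressure
DoFHandler<dim> dof_handler(triangulation);
dof_handler.distribute_dofs(fe);

// Renumber so that velocity and pressure form two contiguous blocks.
std::vector<unsigned int> block_component(dim + 1, 0);
block_component[dim] = 1;
DoFRenumbering::component_wise(dof_handler, block_component);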

Regardless, we should support creating sparsity patterns in parallel with multiple dof handlers: this is a bug in the make_sparsity_pattern function in deal.II.

I am not sure what to suggest you do: the step-35 approach won't work due to a bug in the library itself, but rewriting step-35 (or similar code) to use FESystem and blocks is a lot of work. Unfortunately, I think the second approach is the most reasonable one, since it's what the other parallel examples do.

Thanks,
David Wells


David Wells

Feb 27, 2019, 5:22:50 PM
to deal.II User Group
Hi Gabriel,

In case you are interested, I have written a patch and a new test for this issue; it's available at


Thanks,
David Wells