Local solution vectors in HDG for MPI application


Praveen C

Mar 4, 2016, 12:13:12 PM
to Deal.II Googlegroup
Dear all

I have an HDG code which uses parallel::distributed::Triangulation and Trilinos MPI vectors.

In HDG, there is a local solution and a skeleton solution. The skeleton solution, which lives on the faces of the cells, is obtained by solving a global problem, so it has to be stored as a distributed Trilinos vector.

But the local (or cell) solution need not be distributed at all.

Is it possible to have an ordinary Vector for solution_local when my mesh is distributed?

Best

Martin Kronbichler

Mar 4, 2016, 12:21:35 PM
to dea...@googlegroups.com
Dear Praveen,

You would need to do that manually, but it is not very difficult. You need to do three things: First, you keep a separate local numbering of the cells (you obtain this by looping over the triangulation and counting only the locally owned cells for which you need degrees of freedom). Second, you create a dealii::Vector<double> of length n_local_cells * fe.dofs_per_cell. Finally, when you are on a cell, you index into the vector starting at local_cell_number * fe.dofs_per_cell. As long as you do not need to communicate anything, that should be perfectly fine.
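A minimal sketch of these three steps, assuming a DoFHandler dof_handler attached to the distributed triangulation and a finite element fe (the names local_cell_index, n_local_cells, offset and solution_local are illustrative, not taken from the actual code):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <map>

// Step 1: number the locally owned cells consecutively.
std::map<CellId, unsigned int> local_cell_index;
unsigned int n_local_cells = 0;
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    local_cell_index[cell->id()] = n_local_cells++;

// Step 2: one block of fe.dofs_per_cell entries per locally owned cell.
Vector<double> solution_local (n_local_cells * fe.dofs_per_cell);

// Step 3: on a locally owned cell 'cell', its dofs occupy the block
// starting at this offset, i.e. the i-th cell dof is solution_local(offset + i).
const unsigned int offset = local_cell_index[cell->id()] * fe.dofs_per_cell;

Since every process only ever touches its own entries with this layout, nothing needs to be communicated or compressed.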

Best,
Martin

Praveen C

Mar 4, 2016, 12:34:12 PM
to Deal.II Googlegroup
Dear Martin

Thanks, I see that this is doable with your approach.

Currently my local solution is a Trilinos MPI Vector which is allocated as

solution_local.reinit (locally_owned_dofs, mpi_communicator);

So I never need to call compress on this. Does making this into an ordinary Vector give me any speedup?

Would mesh adaptation and solution transfer work if I use an ordinary Vector for solution_local?

Best
praveen

Martin Kronbichler

Mar 5, 2016, 2:54:14 AM
to dea...@googlegroups.com
Dear Praveen,

> Currently my local solution is a Trilinos MPI Vector which is allocated as
>
> solution_local.reinit (locally_owned_dofs, mpi_communicator);
>
> So I never need to call compress on this. Does making this into an
> ordinary Vector give me any speedup ?

An ordinary vector is likely to give you a speedup because it does not have to perform the index translation between global and MPI-local numbers. On the other hand, it does not allow you to use FEValues::get_function_values, cell->get_dof_values(), and solution transfer in particular. (You would basically need to access the vector entries on your own whenever you want to do anything, because the local numbers in the vector do not match the global numbers in the DoFHandler.)

So I'd say that your current solution is pretty good and definitely the more flexible one. One optimization you could apply manually (and would need to do manually with dealii::Vector as well) is when you read the values of the DoFs on a cell, which is likely the place where performance differences matter: in DG, all cell dofs are typically consecutive. Since Trilinos stores the local numbers for array access in the same order as the global numbers, you could use TrilinosWrappers::MPI::Vector::trilinos_vector() to get a reference to the underlying Epetra vector object. This allows direct access via operator[]; you get the starting position by letting the IndexSet locally_owned_dofs tell you the local number of the first cell dof.
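A short sketch of that access pattern, assuming the cell is locally owned and its dofs are stored contiguously as in DG, and using solution_local and locally_owned_dofs from the earlier message (dof_indices, first_local and cell_values are illustrative local buffers):

std::vector<types::global_dof_index> dof_indices (fe.dofs_per_cell);
cell->get_dof_indices (dof_indices);

// Local (on-process) position of the first cell dof; the remaining
// cell dofs follow consecutively after it.
const types::global_dof_index first_local =
  locally_owned_dofs.index_within_set (dof_indices[0]);

// Raw array of the process-local entries of the Epetra vector.
const double *values = solution_local.trilinos_vector().Values();

Vector<double> cell_values (fe.dofs_per_cell);
for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
  cell_values(i) = values[first_local + i];

This avoids the per-entry global-to-local lookup that the wrapper's element access would otherwise perform.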

Best,
Martin

Praveen C

Mar 5, 2016, 5:05:34 AM
to Deal.II Googlegroup

On Sat, Mar 5, 2016 at 1:24 PM, Martin Kronbichler <kronbichl...@gmail.com> wrote:
So I'd say that your current solution is pretty good and definitely the more flexible one. One optimization you could apply manually (and would need to do manually with dealii::Vector as well) is when you read the values of the DoFs on a cell, which is likely the place where performance differences matter: in DG, all cell dofs are typically consecutive. Since Trilinos stores the local numbers for array access in the same order as the global numbers, you could use TrilinosWrappers::MPI::Vector::trilinos_vector() to get a reference to the underlying Epetra vector object. This allows direct access via operator[]; you get the starting position by letting the IndexSet locally_owned_dofs tell you the local number of the first cell dof.

Dear Martin

Your advice on Trilinos indexing for MPI is very relevant to my application. I will try this.

Thanks.
praveen