Dear Praveen,
> Currently my local solution is a Trilinos MPI Vector which is allocated as
>
> solution_local.reinit (locally_owned_dofs, mpi_communicator);
>
> So I never need to call compress on this. Does making this into an
> ordinary Vector give me any speedup?
An ordinary vector is likely to give you a speedup because it does not
have to do the index translation (MPI-local to global). On the other
hand, it does not allow you to use FEValues::get_function_values(),
cell->get_dof_values(), and solution transfer in particular. (You would
basically need to access the vector entries on your own whenever you
want to do anything, because the local numbers in the vector do not
match the global numbers in the DoFHandler.)
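For comparison, this is the kind of access the Trilinos vector keeps
working (a minimal sketch; read_cell_values is just an illustrative
name, and the DoFHandler/vector setup is assumed to be as in your
program):

  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/lac/trilinos_vector.h>
  #include <deal.II/lac/vector.h>

  using namespace dealii;

  // Read the solution on each locally owned cell via the global DoF
  // indices; this works because the Trilinos vector translates global
  // indices to its local storage internally.
  template <int dim>
  void read_cell_values(const DoFHandler<dim> &dof_handler,
                        const TrilinosWrappers::MPI::Vector &solution_local)
  {
    Vector<double> cell_values(dof_handler.get_fe().dofs_per_cell);
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        cell->get_dof_values(solution_local, cell_values);
  }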
So I'd say that your current solution is pretty good and definitely the
more flexible one. One optimization you could apply by hand (and would
also need to do by hand with a dealii::Vector) concerns reading the
values of the DoFs on a cell, which is likely the place where
performance differences matter: in DG, all DoFs of a cell are typically
consecutive. Since Trilinos stores the local numbers for array access
in the same order as the global numbers, you can use
TrilinosWrappers::MPI::Vector::trilinos_vector() to get a reference to
the underlying Epetra vector. That gives you direct access through
operator[] (or the raw values array); you obtain the local number of
the first cell DoF by asking the IndexSet locally_owned_dofs for the
position of that DoF's global number within the set
(IndexSet::index_within_set()).
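In code, the idea looks roughly like this (a minimal sketch;
read_cell_values_fast is just an illustrative name, and the code
assumes the consecutive per-cell DoF layout described above, so please
verify that for your element):

  #include <deal.II/base/index_set.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/lac/trilinos_vector.h>

  #include <vector>

  using namespace dealii;

  // Read all DoF values of a cell directly from the locally stored
  // Trilinos array, skipping the per-entry global->local translation.
  template <int dim>
  void read_cell_values_fast(
    const DoFHandler<dim>               &dof_handler,
    const IndexSet                      &locally_owned_dofs,
    const TrilinosWrappers::MPI::Vector &solution_local)
  {
    const unsigned int dofs_per_cell =
      dof_handler.get_fe().dofs_per_cell;

    // Pointer to the locally stored entries of the Epetra vector.
    const double *local_array =
      solution_local.trilinos_vector().Values();

    std::vector<types::global_dof_index> dof_indices(dofs_per_cell);
    std::vector<double>                  cell_values(dofs_per_cell);

    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        {
          cell->get_dof_indices(dof_indices);

          // In DG all DoFs of a cell are consecutive, so the local
          // position of the first one gives us the whole block.
          const types::global_dof_index local_start =
            locally_owned_dofs.index_within_set(dof_indices[0]);

          for (unsigned int i = 0; i < dofs_per_cell; ++i)
            cell_values[i] = local_array[local_start + i];
        }
  }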
Best,
Martin