LinearOperator MPI Parallel Vector


Doug Shi-Dong

Apr 22, 2020, 10:00:10 PM
to deal.II User Group
Hello,

I have been using the LinearAlgebra::distributed::Vector class for MPI parallelization, since it is closer to what I have worked with before and seemed more flexible.

However, for parallel runs I have to use either a Trilinos or a PETSc matrix, since the native deal.II SparseMatrix is serial only (correct me if I'm wrong). Matrix-vector multiplications between LA::dist::Vector and the wrapped matrices seem to work just fine. However, when it comes to LinearOperator, it looks like a TrilinosWrappers::SparseMatrix wrapped in a LinearOperator only works with a TrilinosWrappers::MPI::Vector, and the same goes for PETSc.
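
For concreteness, here is roughly the pattern I am after (just a sketch; system_matrix and owned_dofs stand in for an already assembled Trilinos matrix and the locally owned IndexSet from my actual code):

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/linear_operator.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

using namespace dealii;

void matvec_sketch(const TrilinosWrappers::SparseMatrix &system_matrix,
                   const IndexSet                       &owned_dofs)
{
  LinearAlgebra::distributed::Vector<double> src(owned_dofs, MPI_COMM_WORLD);
  LinearAlgebra::distributed::Vector<double> dst(owned_dofs, MPI_COMM_WORLD);

  // This works: the Trilinos matrix wrapper happily multiplies into an
  // LA::distributed::Vector.
  system_matrix.vmult(dst, src);

  // This is what I would like to write, but it does not compile for me;
  // the LinearOperator machinery seems to expect TrilinosWrappers::MPI::Vector.
  const auto op =
    linear_operator<LinearAlgebra::distributed::Vector<double>>(system_matrix);
  op.vmult(dst, src);
}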

I am wondering what the community is using as their go-to parallel matrices and vectors, and whether you have been mixing them, e.g. matrix-free with Trilinos/PETSc vectors, or PETSc matrices with LA::dist::Vector. From what I've seen in some tutorials, there is a way to write the code such that either the Trilinos or the PETSc wrappers can be used interchangeably, but LA::dist::Vector does not seem to be nicely interchangeable with the Trilinos/PETSc ones.

I was kind of hoping to be able to use LA::dist::Vector for everything; am I expecting too much from it? Maybe I just need to fix the LinearOperator implementation so that the data structures can be mixed and matched? If I do commit to Trilinos matrices/vectors, will I have trouble doing matrix-free or GPU computations further down the road?

Best regards,

Doug

Doug Shi-Dong

Apr 22, 2020, 10:40:43 PM
to deal.II User Group
Moreover, I just found that the AffineConstraints functions have not been instantiated for a mix of TrilinosWrappers::SparseMatrix and LA::dist::Vector, so this combination has likely not been tried out or tested.

It seems like LA::dist::Vector is just not meant to be used outside of the matrix-free context?
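
Concretely, the call I had in mind is the mixed-type distribute_local_to_global() during assembly; a sketch (cell_matrix, cell_rhs and local_dof_indices are the usual per-cell objects):

#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

void copy_local_to_global(
  const FullMatrix<double>                   &cell_matrix,
  const Vector<double>                       &cell_rhs,
  const std::vector<types::global_dof_index> &local_dof_indices,
  const AffineConstraints<double>            &constraints,
  TrilinosWrappers::SparseMatrix             &system_matrix,
  LinearAlgebra::distributed::Vector<double> &system_rhs)
{
  // Trilinos matrix together with an LA::distributed::Vector right-hand side:
  // this is the combination that appears to be missing an explicit instantiation.
  constraints.distribute_local_to_global(
    cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
}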

peterrum

Apr 23, 2020, 4:50:40 AM
to deal.II User Group
Dear Doug,

Could you post a short piece of code showing how you want to use the LinearOperator, so that I know what exactly is not working?

Regarding Trilinos + LA::dist::Vector: there is an open PR (https://github.com/dealii/dealii/pull/9925) which adds the instantiations (I hope I did not miss any).

Regarding GPU: currently there is only support for matrix-free and none for matrix-based algorithms. These GPU implementations currently use LA::dist::Vector.

Personally, I always use LA::dist::Vector, not just in the matrix-free code but also in combination with TrilinosWrappers::SparseMatrix. It works very well! I have no experience with how well it works with PETSc.
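
A minimal sketch of that combination, in case it helps (the names are placeholders for the objects you already have; in practice you would of course use a Trilinos preconditioner instead of PreconditionIdentity):

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

using namespace dealii;

void solve_sketch(const TrilinosWrappers::SparseMatrix &system_matrix,
                  const IndexSet                       &locally_owned_dofs)
{
  LinearAlgebra::distributed::Vector<double> solution(locally_owned_dofs,
                                                      MPI_COMM_WORLD);
  LinearAlgebra::distributed::Vector<double> rhs(locally_owned_dofs,
                                                 MPI_COMM_WORLD);

  SolverControl control(1000, 1e-12);
  SolverCG<LinearAlgebra::distributed::Vector<double>> cg(control);

  // The CG solver only needs vmult() on the matrix, and the Trilinos
  // matrix provides that for LA::distributed::Vector as well.
  cg.solve(system_matrix, solution, rhs, PreconditionIdentity());
}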

Peter

Doug Shi-Dong

Apr 23, 2020, 2:18:58 PM
to deal.II User Group
Hello Peter,

Glad to hear that my "expectation" that LA::dist::Vector should work everywhere wasn't wrong. I also mainly use Trilinos' SparseMatrix implementation, and matrix-vector products have been all I need.

This PR will indeed solve my AffineConstraints issue.

I have attached the modified linear_operator_04a.cc. I literally just added the la_parallel_vector header and changed the typedef to get the compilation error in the screenshot. This could easily be remedied by adding an initialize_dof_vector function to the TrilinosWrappers::SparseMatrix class.
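
For reference, the gist of the change (just a sketch of the two edits I described; the exact typedef name in the test may differ, the attached file is authoritative):

// Added in addition to the headers already included by the test:
#include <deal.II/lac/la_parallel_vector.h>

// The vector typedef used throughout the test, changed from
//   using vector_t = dealii::TrilinosWrappers::MPI::Vector;
// to
using vector_t = dealii::LinearAlgebra::distributed::Vector<double>;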

Doug
linear_operator_04.cc
Screenshot from 2020-04-23 14-15-33.png

Bruno Turcksin

Apr 24, 2020, 2:17:34 PM
to deal.II User Group

On Thursday, April 23, 2020 at 4:50:40 AM UTC-4, peterrum wrote:
Regarding GPU: currently there is only support for matrix-free and none for matrix-based algorithms. These GPU implementations currently use LA::dist::Vector.

We actually support matrix-based algorithms on the GPU, and we have wrappers around cuSOLVER and some cuBLAS functions, but you have to build a SparseMatrix on the CPU first. So it doesn't work with MPI.
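
Roughly, the workflow looks like this (a sketch only; it requires deal.II built with CUDA support, and the exact signatures of CUDAWrappers::SparseMatrix, the CUDA vector's import(), and CUDAWrappers::SolverDirect should be checked against the documentation of your version):

#include <deal.II/base/cuda.h>
#include <deal.II/lac/cuda_solver_direct.h>
#include <deal.II/lac/cuda_sparse_matrix.h>
#include <deal.II/lac/cuda_vector.h>
#include <deal.II/lac/read_write_vector.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/sparse_matrix.h>

using namespace dealii;

void gpu_sketch(const SparseMatrix<double> &matrix_host,   // assembled on the CPU
                const LinearAlgebra::ReadWriteVector<double> &rhs_host)
{
  Utilities::CUDA::Handle cuda_handle;

  // Copy the (serial) CPU matrix to the device.
  CUDAWrappers::SparseMatrix<double> matrix_dev(cuda_handle, matrix_host);

  LinearAlgebra::CUDAWrappers::Vector<double> rhs_dev(matrix_dev.m());
  LinearAlgebra::CUDAWrappers::Vector<double> solution_dev(matrix_dev.m());
  rhs_dev.import(rhs_host, VectorOperation::insert);

  // Direct solve on the device through the cuSOLVER wrappers.
  SolverControl control(1000, 1e-12);
  CUDAWrappers::SolverDirect<double> solver(cuda_handle, control);
  solver.solve(matrix_dev, solution_dev, rhs_dev);
}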

Best,

Bruno