Hello,
I have been using the LinearAlgebra::distributed::Vector class for MPI parallelization, since the way it works is closer to what I have worked with before and it seemed more flexible.
However, for parallelization I have to use either a Trilinos or a PETSc matrix, since the native deal.II SparseMatrix is serial only (correct me if I'm wrong). Matrix-vector multiplications between an LA::distributed::Vector and the wrapped matrices seem to work just fine. But when it comes to LinearOperator, a TrilinosWrappers::SparseMatrix wrapped in a LinearOperator appears to work only with a TrilinosWrappers::MPI::Vector, and the same goes for PETSc.
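To make this concrete, here is a stripped-down sketch of what I mean (setup of the matrix and vectors omitted; it's possible I am just misusing the API):

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/linear_operator.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

using namespace dealii;

void example(const TrilinosWrappers::SparseMatrix          &system_matrix,
             LinearAlgebra::distributed::Vector<double>    &dst,
             const LinearAlgebra::distributed::Vector<double> &src)
{
  // This works: the matrix's vmult() is templated on the vector type,
  // so deal.II's own distributed vectors are accepted.
  system_matrix.vmult(dst, src);

  // But wrapped in a LinearOperator, only TrilinosWrappers::MPI::Vector
  // seems to be accepted as the range/domain type:
  const auto op = linear_operator<TrilinosWrappers::MPI::Vector>(system_matrix);
  // linear_operator<LinearAlgebra::distributed::Vector<double>>(system_matrix)
  // does not compile for me.
}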
I am wondering what the community uses as its go-to parallel matrices and vectors, and whether you have been mixing them, e.g. matrix-free operators with Trilinos/PETSc vectors, or PETSc matrices with LA::distributed::Vector. From what I've seen in some tutorials (step-40, for instance), there is a way to write the code such that the Trilinos and PETSc wrappers are used interchangeably, but LA::distributed::Vector does not seem to be nicely interchangeable with either of them.
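For reference, the interchangeability I have in mind is the compile-time switch from step-40, roughly:

#include <deal.II/lac/generic_linear_algebra.h>

namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && \
  !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
#elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA

// LA::MPI::SparseMatrix and LA::MPI::Vector now resolve to either backend,
// but there is no analogous alias that would hand me
// LinearAlgebra::distributed::Vector instead.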
I was kind of hoping to be able to use LA::distributed::Vector for everything; am I expecting too much from it? Maybe I just need to fix up the LinearOperator implementation to mix and match the data structures? And if I do commit to Trilinos matrices/vectors, will I have trouble doing matrix-free or GPU work further down the road?
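To illustrate the "LA::distributed::Vector for everything" idea, this is the sort of solve I would like to standardize on (a sketch; I am assuming the templated vmult() overloads of the Trilinos matrix and preconditioner accept deal.II's distributed vectors, which at least seems to be the case for plain matrix-vector products):

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

using namespace dealii;

void solve(const TrilinosWrappers::SparseMatrix             &system_matrix,
           LinearAlgebra::distributed::Vector<double>       &solution,
           const LinearAlgebra::distributed::Vector<double> &rhs)
{
  TrilinosWrappers::PreconditionAMG preconditioner;
  preconditioner.initialize(system_matrix);

  SolverControl solver_control(1000, 1e-12 * rhs.l2_norm());
  SolverCG<LinearAlgebra::distributed::Vector<double>> solver(solver_control);

  // The Trilinos matrix and preconditioner both expose vmult() for
  // deal.II's distributed vectors, so only the LinearOperator route
  // forces me back to TrilinosWrappers::MPI::Vector.
  solver.solve(system_matrix, solution, rhs, preconditioner);
}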
Best regards,
Doug