Shared Memory Parallel Linear Solver

Paras Kumar

Feb 14, 2020, 7:22:15 AM2/14/20
to deal.II User Group
Dear deal.II Community,

I am working on a finite deformation hyperelasticity problem, which is essentially a nonlinear, vector-valued problem with the displacement (dim = 2, 3) as the unknown at each support point. With regard to parallelism, we currently restrict ourselves to shared memory parallelism (SMP) only. The FE assembly process has been parallelized using the WorkStream framework.
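
For reference, the assembly loop is structured roughly as follows. This is only a sketch: the ScratchData/PerTaskData classes and the two helper functions are placeholders for our problem-specific code.

#include <deal.II/base/work_stream.h>

// Cell-wise assembly parallelized over the active cells; ScratchData,
// PerTaskData, assemble_cell_contribution() and copy_local_to_global()
// are placeholders for the problem-specific pieces.
WorkStream::run(dof_handler.begin_active(),
                dof_handler.end(),
                [&](const typename DoFHandler<dim>::active_cell_iterator &cell,
                    ScratchData &scratch,
                    PerTaskData &copy_data)
                  { assemble_cell_contribution(cell, scratch, copy_data); },
                [&](const PerTaskData &copy_data)
                  { copy_local_to_global(copy_data); },
                ScratchData(fe, quadrature_formula),
                PerTaskData(fe.dofs_per_cell));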

This question concerns (possibly faster) linear solvers. For the current 2D problem, we use the SparseDirectUMFPACK solver. As can be seen in the time log below (this example was computed on a 36-core machine with 64 threads), the linear solver consumes most of the time. I also tried the CG solver, but it was much slower, probably due to the large condition number stemming from the highly refined mesh in the regions of interest.
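
For reference, the two solver variants look roughly like this (only a sketch; the actual tolerances differ, and the SSOR preconditioner for CG is shown just as an example):

#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>

// Direct solve as currently used:
SparseDirectUMFPACK direct_solver;
direct_solver.initialize(system_matrix);    // LU factorization
direct_solver.vmult(solution, system_rhs);  // forward/backward substitution

// The (much slower) CG variant:
SolverControl            solver_control(10000, 1e-10 * system_rhs.l2_norm());
SolverCG<Vector<double>> cg(solver_control);
PreconditionSSOR<SparseMatrix<double>> preconditioner;
preconditioner.initialize(system_matrix, 1.2);
cg.solve(system_matrix, solution, system_rhs, preconditioner);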

Triangulation:
     Number of active cells: 2059646
     Number of Degrees of Freedom: 4119454

+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |  7.81e+03s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble Linear System          |        50 |       875s |        11% |
| Linear Solver                   |        40 |  5.55e+03s |        71% |
| Make Constraints                |        50 |       255s |       3.3% |
| PBC setup                       |         1 |      1.02s |         0% |
| Set Boundary IDs                |         1 |     0.109s |         0% |
| Stress Output Computation       |        11 |       464s |       5.9% |
| Stress Output Overall           |        12 |     1e+03s |        13% |
| Stress Output Writing           |        11 |       379s |       4.9% |
| Stress output initialize        |         1 |       158s |         2% |
| System Setup                    |         1 |       260s |       3.3% |
+---------------------------------+-----------+------------+------------+

While going through the documentation, I came across options for distributed memory parallel (DMP) linear solvers, but couldn't find any for SMP.

Could someone please point me to any such possibilities within the library, or through an interface to an external library?

Best regards,
Paras

Bruno Turcksin

Feb 14, 2020, 8:34:41 AM2/14/20
to deal.II User Group
Paras,

You could try SuperLU_MT (see https://portal.nersc.gov/project/sparse/superlu/), but we don't have a wrapper for it. The Krylov solvers in deal.II are multithreaded, but the preconditioners are not. What you can try is to use the deal.II solvers together with STRUMPACK (https://github.com/pghysels/STRUMPACK) for the preconditioner, but again, we don't have a wrapper for it.
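
To illustrate the deal.II side only: the Krylov solvers merely require the preconditioner object to provide a vmult(dst, src) function, so an untested sketch could look like the following. The call into STRUMPACK is hypothetical, you would have to write that part yourself.

#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/vector.h>

// Any class with a vmult(dst, src) member can be used as a preconditioner
// by the deal.II Krylov solvers. The function called inside is a
// hypothetical user-written wrapper around STRUMPACK.
struct StrumpackPreconditioner
{
  void vmult(Vector<double> &dst, const Vector<double> &src) const
  {
    apply_strumpack_preconditioner(dst, src);  // hypothetical, user-written
  }
};

SolverControl            control(1000, 1e-8 * system_rhs.l2_norm());
SolverCG<Vector<double>> cg(control);
cg.solve(system_matrix, solution, system_rhs, StrumpackPreconditioner());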

Best,

Bruno

Wolfgang Bangerth

Feb 14, 2020, 12:42:40 PM2/14/20
to dea...@googlegroups.com

> While going through the documentation, I came across options for distributed
> memory parallel (DMP) linear solvers, but couldn't find any for SMP.
>
> Could someone please point me to any such possibilities within the library,
> or through an interface to an external library?

You could also go through PETSc. You can build the PETSc matrix on a single
node and then see whether PETSc supports any of its sparse direct solvers
using multiple threads.
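
An untested sketch of what that could look like, here with MUMPS (which deal.II wraps as PETScWrappers::SparseDirectMUMPS) standing in for "a PETSc sparse direct solver"; whether it actually uses multiple threads depends entirely on how your PETSc/MUMPS installation was built:

// Header names as in recent deal.II releases; they may differ slightly in 9.0.
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_solver.h>

// Copy the serial deal.II matrix entry by entry into a PETSc matrix:
PETScWrappers::SparseMatrix petsc_matrix(sparsity_pattern);
for (const auto &entry : system_matrix)
  petsc_matrix.set(entry.row(), entry.column(), entry.value());
petsc_matrix.compress(VectorOperation::insert);

const unsigned int n_dofs = dof_handler.n_dofs();
PETScWrappers::MPI::Vector petsc_rhs(MPI_COMM_SELF, n_dofs, n_dofs);
PETScWrappers::MPI::Vector petsc_solution(MPI_COMM_SELF, n_dofs, n_dofs);
for (unsigned int i = 0; i < n_dofs; ++i)
  petsc_rhs(i) = system_rhs(i);
petsc_rhs.compress(VectorOperation::insert);

// MUMPS through the PETSc wrappers (requires PETSc to be configured with
// MUMPS; the SolverControl tolerances are largely irrelevant for a direct solver):
SolverControl                    control(1, 1e-12);
PETScWrappers::SparseDirectMUMPS direct(control, MPI_COMM_SELF);
direct.solve(petsc_matrix, petsc_solution, petsc_rhs);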

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Paras Kumar

Feb 26, 2020, 12:27:42 PM2/26/20
to deal.II User Group

Dear Bruno,

Thank you for your response.


On Friday, February 14, 2020 at 2:34:41 PM UTC+1, Bruno Turcksin wrote:
Paras,

You could try SuperLU_MT (see https://portal.nersc.gov/project/sparse/superlu/), but we don't have a wrapper for it. The Krylov solvers in deal.II are multithreaded, but the preconditioners are not. What you can try is to use the deal.II solvers together with STRUMPACK (https://github.com/pghysels/STRUMPACK) for the preconditioner, but again, we don't have a wrapper for it.

Best,

Bruno


A bit more searching led me to the fact that the SuperLU_MT solver is also available via the Trilinos Amesos2 package (see https://docs.trilinos.org/latest-release/packages/amesos2/doc/html/classAmesos2_1_1Superlumt.html#a102812eafbb572475b7330b825809b09).

The current code uses dealii::SparseMatrix<double> and dealii::Vector<double> objects to store the system matrix and the solution and rhs vectors, respectively.

In order to test it out, my idea was to use a dealii::TrilinosWrappers::SparseMatrix (reinitialized from the already assembled dealii::SparseMatrix) and then extract the underlying Epetra matrix (using the TrilinosWrappers::SparseMatrix::trilinos_matrix() function) for use with the SuperLU_MT solver. This works fine.
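
In code, the matrix part looks roughly like this (just a sketch, error handling omitted; the exact solver name string accepted by Amesos2 may depend on the Trilinos installation, Amesos2::query() can be used to check availability):

#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <Epetra_Vector.h>
#include <Amesos2.hpp>

// Copy the deal.II matrix into a Trilinos matrix and get at the Epetra object:
TrilinosWrappers::SparseMatrix trilinos_matrix;
trilinos_matrix.reinit(system_matrix);
const Epetra_CrsMatrix &A = trilinos_matrix.trilinos_matrix();

// Solution and rhs as Epetra vectors on the matrix row map
// (filling B from the deal.II rhs vector is the part I ask about below):
Epetra_Vector X(A.RowMap());
Epetra_Vector B(A.RowMap());

// SuperLU_MT through Amesos2:
auto solver = Amesos2::create<Epetra_CrsMatrix, Epetra_MultiVector>(
  "superlu_mt",
  Teuchos::rcpFromRef(A),
  Teuchos::rcpFromRef(X),
  Teuchos::rcpFromRef(B));
solver->symbolicFactorization();
solver->numericFactorization();
solver->solve();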

For the vectors (solution & rhs), the idea was to copy the values from the dealii::Vector to a dealii::LinearAlgebra::EpetraWrappers::Vector. As an intermediate step, I first copy the dealii::Vector into a dealii::LinearAlgebra::Vector<double> using the constructor https://www.dealii.org/current/doxygen/deal.II/classLinearAlgebra_1_1Vector.html#af441b2c23134f8e6e08c833f918888a2. This works fine. However, when I then call dealii::LinearAlgebra::EpetraWrappers::Vector::reinit(dealii::LinearAlgebra::Vector<double>), the following error occurs:

An error occurred in line <85> of file </opt/spack/var/spack/stage/dealii-9.0.1-ssxm6qr7mlmymdmjssazlr4s32qar5cv/dealii-9.0.1/source/lac/trilinos_epetra_vector.cc> in function
    virtual void dealii::LinearAlgebra::EpetraWrappers::Vector::reinit(const dealii::LinearAlgebra::VectorSpaceVector<double>&, bool)
The violated condition was:
    dynamic_cast<const Vector *>(&V)!=nullptr
Additional information:
    (none)

Stacktrace:
-----------
#0  /opt/spack/opt/spack/linux-ubuntu18.04-x86_64/gcc-7.3.0/dealii-9.0.1-ssxm6qr7mlmymdmjssazlr4s32qar5cv/lib/libdeal_II.g.so.9.0.1: dealii::LinearAlgebra::EpetraWrappers::Vector::reinit(dealii::LinearAlgebra::VectorSpaceVector<double> const&, bool)
#1  /calculate/Paras_Work/PhD_Work/FRASCAL-CODE/MNC-Frac/_build/libmncfrac.so: std::pair<unsigned int, double> mncfrac::Solve_linearEquationSystem<2u, double>(mncfrac::LinearAlgebraItems<double>&, mncfrac::buildingblocks::inputreader::LinearSolverParameters const&, dealii::Vector<double>&)
#2  ./variants/homogenization: mncfrac::homogenization::RVE<2u, double>::Solve_timeStep(dealii::Tensor<2, 2, double> const&, mncfrac::utilities::BoundaryConditionType const&, dealii::Vector<double>&)
#3  ./variants/homogenization: mncfrac::homogenization::RVE<2u, double>::Homogenize(dealii::Tensor<2, 2, double> const&, mncfrac::utilities::BoundaryConditionType)
#4  ./variants/homogenization: main


I could not understand why the conversion fails. Could you please help me understand what I am doing wrong? Are there any better alternatives you would suggest?

It is clear that copying matrices and vectors for each solver call is probably not very efficient, but this is just for trying out the solver.


Best regards,
Paras

Bruno Turcksin

Feb 26, 2020, 9:25:18 PM2/26/20
to dea...@googlegroups.com
Paras,

On Wed, Feb 26, 2020 at 12:27, Paras Kumar <parasku...@gmail.com> wrote:
> For the vectors (solution & rhs), the idea was to copy the values from the dealii::Vector to a dealii::LinearAlgebra::EpetraWrappers::Vector. As an intermediate step, I first copy the dealii::Vector into a dealii::LinearAlgebra::Vector<double> using the constructor https://www.dealii.org/current/doxygen/deal.II/classLinearAlgebra_1_1Vector.html#af441b2c23134f8e6e08c833f918888a2. This works fine. However, when I then call dealii::LinearAlgebra::EpetraWrappers::Vector::reinit(dealii::LinearAlgebra::Vector<double>), the following error occurs:
>
> An error occurred in line <85> of file </opt/spack/var/spack/stage/dealii-9.0.1-ssxm6qr7mlmymdmjssazlr4s32qar5cv/dealii-9.0.1/source/lac/trilinos_epetra_vector.cc> in function
> virtual void dealii::LinearAlgebra::EpetraWrappers::Vector::reinit(const dealii::LinearAlgebra::VectorSpaceVector<double>&, bool)
> The violated condition was:
> dynamic_cast<const Vector *>(&V)!=nullptr
> Additional information:
> (none)
You are not allowed to use reinit with a vector of a different type. I
am afraid you will have to copy the elements yourself. I would advise
you to use this vector instead:
https://www.dealii.org/current/doxygen/deal.II/classTrilinosWrappers_1_1MPI_1_1Vector.html
It is easier to fill.
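
Untested sketch of what I mean (the vector names are placeholders):

#include <deal.II/base/index_set.h>
#include <deal.II/lac/trilinos_vector.h>

// Copy a serial dealii::Vector<double> into a TrilinosWrappers::MPI::Vector
// that owns all elements on this (single) process:
const IndexSet all_dofs = complete_index_set(dealii_rhs.size());
TrilinosWrappers::MPI::Vector trilinos_rhs(all_dofs, MPI_COMM_SELF);

for (types::global_dof_index i = 0; i < dealii_rhs.size(); ++i)
  trilinos_rhs[i] = dealii_rhs[i];
trilinos_rhs.compress(VectorOperation::insert);

// After the solve, copy the solution back element by element in the same way.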

Best,

Bruno