Finite Strain Compressibility problem with PETSc Data Structures


Animesh Rastogi IIT Gandhinagar

Jan 15, 2021, 12:46:30 PM
to deal.II User Group
Dear Dealii community,

I am using a modified version of the code gallery program "Quasi-static Finite-strain Compressible Elasticity" to compute eigenvalues and eigenvectors of my system matrix. I used the ARPACK solver but did not get the expected results, and Bruno suggested I use the PETSc/SLEPc solvers to compute the eigenvalues instead. For this, I would need to convert the deal.II data structures to PETSc data structures using the PETSc wrappers.

I looked at step-17 and step-36 for this. However, I am unable to understand how to change my current deal.II data structures to ones using the PETSc wrappers, and I am confused about which variables I need to convert. Also, for the system matrix, do I need to use PETScWrappers::MPI::BlockSparseMatrix or PETScWrappers::BlockSparseMatrix?

This finite-strain code already assembles the system matrix in parallel using multi-threading. What changes do I need to make in the assembly if I switch to PETSc data structures?

It would be great if someone could provide help with the above queries.

Thanks a lot!

Animesh

Wolfgang Bangerth

Jan 15, 2021, 3:51:30 PM
to dea...@googlegroups.com
On 1/15/21 10:46 AM, Animesh Rastogi IIT Gandhinagar wrote:
>
> I am using a modified version of Code Gallery Quasi Static Compressibility
> <https://dealii.org/developer/doxygen/deal.II/code_gallery_Quasi_static_Finite_strain_Compressible_Elasticity.html>
> program to compute eigenvalues and eigenvectors of my system matrix. I have
> used ARPACK solver, however, I did not get the expected results. Bruno
> suggested me to go for PETSc/Slepsc solver to compute the eigenvalues. For
> this, I would need to convert the dealii data structures to the PETSc data
> structures using PETSc wrappers.

All objects that relate to the linear system you are solving (or to the
eigenvalue problem you are solving). So, matrices and vectors.


> I looked at the Step-17 and Step-36 for this. However, I am unable to
> understand how I can change my current dealii data structures to the one using
> PETSc wrappers. I am also confused regarding what all variables do I need to
> convert using PETScWrappers. Also, for the system matrix, do I need to use
> PETScWrappers::MPI::BlockSparseMatrix or PETScWrappers::BlockSparseMatrix.

That depends on whether you want to run your problem on multiple machines
using MPI. If you've been using ARPACK, then I suspect that you're not using
MPI. So then PETScWrappers::BlockSparseMatrix.


> This finite strain code already assembles the system matrix in parallel using
> multi-threading. What changes do I need to do in the assembly if I go on to
> use PETSc data structures.

None, really. The functions filling matrices should work on PETSc data
structures just fine.
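To make this concrete, here is a minimal sketch of the local-to-global step, using the variable names from the code-gallery program (they may differ in your modified version). The one PETSc-specific caveat worth knowing is the compress() call at the end of assembly:

```cpp
// Inside the per-cell assembly loop — unchanged from the deal.II version:
constraints.distribute_local_to_global(cell_matrix,
                                       cell_rhs,
                                       local_dof_indices,
                                       tangent_matrix,
                                       system_rhs);

// After all cells have been assembled: PETSc objects buffer entries
// destined for other processes, so finalize them before use. This is a
// no-op in the serial case but required by the interface.
tangent_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);
```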

Best
W.


--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Animesh Rastogi IIT Gandhinagar

Jan 15, 2021, 5:16:37 PM
to deal.II User Group
Dear Prof. Bangerth,

Thank you for your reply. I tried as you suggested. However, I am getting the following error -

/home/animesh/Documents/dealii/dealii-9.2.0/examples/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis.cc:873:20: error: ‘BlockSparseMatrix’ in namespace ‘dealii::PETScWrappers’ does not name a type
     PETScWrappers::BlockSparseMatrix         tangent_matrix;


For vectors, the documentation says "This class is deprecated, use PETScWrappers::MPI::BlockVector". So I used that, as suggested in the documentation, and am getting the following error related to it.

/home/animesh/Documents/dealii/dealii-9.2.0/examples/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis.cc:1288:37: error: no matching function for call to ‘dealii::PETScWrappers::MPI::BlockVector::reinit(std::vector<unsigned int>&)’
     system_rhs.reinit(dofs_per_block);

Also, I wish to use a direct solver from PETSc and could not find one that works with the PETSc block vector. In my current code, I am using SparseDirectUMFPACK. Do I need to convert my entire code to use non-blocked vectors and matrices in order to use PETSc's direct solver? (Although the system is "blocked", it only has one block, containing the displacement degrees of freedom.)

Thanks!

Animesh

Wolfgang Bangerth

Jan 15, 2021, 11:53:17 PM
to dea...@googlegroups.com, Animesh Rastogi IIT Gandhinagar

> Thank you for your reply. I tried as you suggested. However, I am getting the
> following error -
>
> /home/animesh/Documents/dealii/dealii-9.2.0/examples/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis.cc:873:20:
> error: ‘BlockSparseMatrix’ in namespace ‘dealii::PETScWrappers’ does not name
> a type
>      PETScWrappers::BlockSparseMatrix         tangent_matrix;

I now remember that we removed the non-MPI class at some point.


> So I used as suggested in the documentation and am getting the following error
> related to that.
>
> /home/animesh/Documents/dealii/dealii-9.2.0/examples/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis.cc:1288:37:
> error: no matching function for call to
> ‘dealii::PETScWrappers::MPI::BlockVector::reinit(std::vector<unsigned int>&)’
>      system_rhs.reinit(dofs_per_block);

Look up the documentation of that class and what arguments the reinit() and
constructor functions take:
https://www.dealii.org/current/doxygen/deal.II/classPETScWrappers_1_1MPI_1_1BlockVector.html
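For illustration, the MPI::BlockVector reinit() variants take IndexSets rather than raw block sizes. A minimal sketch for the serial case, assuming `dofs_per_block` is the vector from the original program:

```cpp
// Each block owns all of its entries when running on one process, so a
// complete index set per block is the right partitioning.
std::vector<IndexSet> partitioning;
for (const auto n : dofs_per_block)
  partitioning.push_back(complete_index_set(n));

system_rhs.reinit(partitioning, MPI_COMM_SELF);
```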


> Also, I wish to use a Direct solver by PETSc and could not find any related to
> the PETSc block vector. In my current code, I am using SparseDirectUMFPACK. Do
> I need to convert my entire code to be using non-blocked vectors and matrices
> to use the Direct Solver by PETSc? (Although the system is "blocked", it only
> has one block that contains the displacement degree of freedom).

Correct. PETSc does not know the concept of "block" vectors and matrices. So
if you want to use a direct solver via PETSc, then you have to put everything
into non-blocked objects.

The point of blocks is to facilitate block-based solvers such as Schur
complements etc. If you don't intend to build solvers this way, there is no
benefit to using block matrices and vectors. Just because you have multiple
solution components doesn't mean that you *have* to split your linear algebra
objects into blocks.
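To make the last point concrete, the conversion amounts to replacing the blocked declarations with their non-blocked counterparts; a sketch, assuming the serial setting discussed in this thread:

```cpp
// Non-blocked PETSc types; the MPI variants also cover the
// single-process case when given MPI_COMM_SELF.
PETScWrappers::MPI::SparseMatrix tangent_matrix;
PETScWrappers::MPI::Vector       system_rhs;
PETScWrappers::MPI::Vector       solution_n;
```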

Animesh Rastogi IIT Gandhinagar

Jan 16, 2021, 2:06:14 PM
to deal.II User Group
Dear Prof. Bangerth,

Thank you for your explanation. I have converted the matrices and vectors into non-blocked objects and changed them to PETScWrappers::MPI data types.

I would like to use SparseDirectMUMPS to solve the linear system. The documentation says that I need an mpi_communicator object to use this solver. Since my code runs on a single machine in serial (only the assembly is multi-threaded), how should I initialize the mpi_communicator? Do I need to pass this mpi_communicator object when I set up the sparsity pattern, tangent matrix, etc. in the system_setup() function? I could not figure out what other changes I need to make to my code to use the SparseDirectMUMPS solver. Ultimately, my plan is to use the SLEPc eigenvalue solver after the solution converges at every load/displacement step.

Thanks a lot!

Animesh

Wolfgang Bangerth

Jan 17, 2021, 1:03:22 PM
to dea...@googlegroups.com

> I would like to use SparseDirectMUMPS to solve the linear system
> <https://github.com/dealii/code-gallery/blob/master/Quasi_static_Finite_strain_Compressible_Elasticity/cook_membrane.cc#L2171>.
> The documentation tells that I would need an mpi_communicator object to use
> this solver. Since my code is running on a single machine in serial (only the
> assembly is hyper-threaded), how do I need to initialize mpi_communicator?

Just use MPI_COMM_SELF in all places where you need to pass a communicator.
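A minimal sketch of what that looks like with the MUMPS wrapper, reusing the variable names from the code-gallery program (`newton_update` as the solution vector is an assumption):

```cpp
// MUMPS is a direct solver, so the SolverControl is only nominal here.
SolverControl solver_control;
PETScWrappers::SparseDirectMUMPS solver(solver_control, MPI_COMM_SELF);

// Solve tangent_matrix * newton_update = system_rhs by LU factorization.
solver.solve(tangent_matrix, newton_update, system_rhs);
```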

Animesh Rastogi IIT Gandhinagar

Jan 18, 2021, 3:29:27 PM
to deal.II User Group
Dear Prof. Bangerth,

Thank you for your reply. I have changed the required data structures using the PETSc wrappers. I have another question regarding this: since I am running on a single machine, how should I use the reinit() function for the PETSc sparse matrix? I was thinking of using the third (of five) reinit() overloads, which takes two index sets, a sparsity pattern, and a communicator.

How should I initialize the IndexSet that this function requires? In step-17, the locally_owned_dofs are used to initialize the index set; however, that program is meant for multiple machines, which is not the case for me.

Below is my system_setup() function currently -

template <int dim,typename NumberType>
  void Solid<dim,NumberType>::system_setup()
  {
    timer.enter_subsection("Setup system");

    //std::vector<unsigned int> block_component(n_components, u_dof); // Displacement

    // The DOF handler is then initialised and we renumber the grid in an
    // efficient manner. We also record the number of DOFs per block.

    dof_handler_ref.distribute_dofs(fe);
    DoFRenumbering::Cuthill_McKee(dof_handler_ref);

    //DoFRenumbering::component_wise(dof_handler_ref, block_component);
    //DoFTools::count_dofs_per_block(dof_handler_ref, dofs_per_block,
                                 //  block_component);

    std::cout << "Triangulation:"
              << "\n\t Number of active cells: " << triangulation.n_active_cells()
              << "\n\t Number of degrees of freedom: " << dof_handler_ref.n_dofs()
              << std::endl;

    // Setup the sparsity pattern and tangent matrix
    tangent_matrix.clear();
    {
      //const types::global_dof_index n_dofs_u = dofs_per_block[u_dof];

      DynamicSparsityPattern csp(dof_handler_ref.n_dofs());

      //csp.block(u_dof, u_dof).reinit(n_dofs_u, n_dofs_u);
      //csp.collect_sizes();

      Table<2, DoFTools::Coupling> coupling(n_components, n_components);
      for (unsigned int ii = 0; ii < n_components; ++ii)
        for (unsigned int jj = 0; jj < n_components; ++jj)
          coupling[ii][jj] = DoFTools::always;
      DoFTools::make_sparsity_pattern(dof_handler_ref,
                                      coupling,
                                      csp,
                                      constraints,
                                      false);
      sparsity_pattern.copy_from(csp);
    }

 
    tangent_matrix.reinit(<Index_Set_Required>,
                          <Index_Set_Required>,
                          sparsity_pattern,
                          mpi_communicator);

    system_rhs.reinit(mpi_communicator,dof_handler_ref.n_dofs(),dof_handler_ref.n_dofs());
    //system_rhs.collect_sizes();

    solution_n.reinit(mpi_communicator,dof_handler_ref.n_dofs(),dof_handler_ref.n_dofs());
    //solution_n.collect_sizes();

    setup_qph();

    timer.leave_subsection();
  }


Thanks!

Animesh

Wolfgang Bangerth

Jan 19, 2021, 2:25:53 AM
to dea...@googlegroups.com
On 1/18/21 1:29 PM, Animesh Rastogi IIT Gandhinagar wrote:
>
> How should I initialize the "index set" that is required for this function? In
> step-17
> <https://www.dealii.org/current/doxygen/deal.II/step_17.html>,
> it uses the locally_owned_dofs to initialize the index set, however, that
> program was for multiple machines which is not the case with me.

If you have only one processor, then the locally_owned_dofs are simply all
DoFs. You can use the same function you use to get the locally_owned_dofs set
on every processor in the parallel context, but an easier way is to call
complete_index_set(dof_handler.n_dofs())
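In code, the reinit() call from the earlier message could then read (a sketch for the serial case, using the names from your system_setup()):

```cpp
// On one process, every DoF is locally owned, so a complete index set
// serves as both the row and column partitioning.
const IndexSet all_dofs = complete_index_set(dof_handler_ref.n_dofs());
tangent_matrix.reinit(all_dofs, all_dofs, sparsity_pattern, MPI_COMM_SELF);
```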

Animesh Rastogi IIT Gandhinagar

Jan 24, 2021, 7:25:38 PM
to deal.II User Group
Dear Prof. Bangerth,

Thank you for your reply. I was able to set up the SLEPc eigenvalue solver. The issue is that it takes a very long time to compute the smallest eigenvalue for just 10,000 degrees of freedom, while the largest eigenvalue is computed quickly. I tried many solvers in SLEPc (Krylov-Schur, Jacobi-Davidson, Arnoldi iteration, Lanczos, etc.) but face the same issue with all of them. Also, after running for a long time, the Krylov-Schur solver gives me the error "The number of converged eigenvectors is 0 but 1 were requested." Could you please help me with this? I also tried the shift-and-invert transformation (wondering if that would help) and got the following error:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Shift-and-invert requires a target 'which' (see EPSSetWhichEigenpairs), for instance -st_type sinvert -eps_target 0 -eps_target_magnitude
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.13.1, May 02, 2020
[0]PETSC ERROR: ./cook_membrane on a arch-linux-c-debug named animesh-Inspiron-7572 by animesh Sun Jan 24 18:14:28 2021
[0]PETSC ERROR: Configure options --download-mumps --download-scalapack --download-ptscotch=0 --with-shared-libraries
[0]PETSC ERROR: #1 EPSCheckCompatibleST() line 112 in /home/animesh/Documents/slepc-3.13.4/src/eps/interface/epssetup.c
[0]PETSC ERROR: #2 EPSSetUp() line 303 in /home/animesh/Documents/slepc-3.13.4/src/eps/interface/epssetup.c
[0]PETSC ERROR: #3 EPSSolve() line 136 in /home/animesh/Documents/slepc-3.13.4/src/eps/interface/epssolve.c





Code snippet -

std::cout << "Calculating the smallest eigenvalue at " << time.current() << std::endl;

SolverControl eigen_solver_control(dof_handler_ref.n_dofs(), 1e-9);
SLEPcWrappers::SolverLanczos eigensolver(eigen_solver_control, mpi_communicator);
eigensolver.set_which_eigenpairs(EPS_SMALLEST_MAGNITUDE);
eigensolver.set_problem_type(EPS_GHEP);

eigensolver.solve(tangent_matrix,
                  eigenvalues,
                  eigenfunctions);

for (unsigned int i = 0; i < eigenfunctions.size(); ++i)
  eigenfunctions[i] /= eigenfunctions[i].linfty_norm();

mineigenvalue = *std::min_element(eigenvalues.begin(), eigenvalues.end());

Wolfgang Bangerth

Jan 25, 2021, 12:41:12 PM
to dea...@googlegroups.com
On 1/24/21 5:25 PM, Animesh Rastogi IIT Gandhinagar wrote:
>
> Thank you for your reply. I was able to setup the slepc eigenvalue solver. The
> issue is that it is taking a lot of time to calculate the smallest eigenvalue
> for just 10000 degrees of freedom. Largest eigenvalue is calculated pretty
> fast. I tried many solvers in slepc - Krylov Schur, JD, Arnoldi iteration,
> Lanczos etc. However, I am facing the same issue. Also, the KrylovSchur solver
> is giving me the following error after taking a lot of time - "The number of
> converged eigenvectors is 0 but 1 were requested." Could you please help me
> with this. I also tried to do shift-invert operation (wondering if that would
> help) and got the following error

Animesh,
I don't know the eigenvalue solvers well enough to help. But step-36 solves
for the smallest eigenvalues. You might want to compare what you have against
step-36 and understand why that program works and yours doesn't.
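For what it's worth, the PETSc error message quoted above spells out the missing pieces (-st_type sinvert together with a target). In deal.II terms, that would correspond roughly to the sketch below. This is an assumption based on the SLEPcWrappers documentation, not something tested against the program in question:

```cpp
SolverControl eigen_solver_control(dof_handler_ref.n_dofs(), 1e-9);

// Shift-and-invert around zero: eigenvalues nearest the shift become the
// dominant ones of the transformed operator, so the smallest-magnitude
// eigenvalues converge fast (at the cost of a linear solve per iteration).
SLEPcWrappers::TransformationShiftInvert::AdditionalData data(0.0);
SLEPcWrappers::TransformationShiftInvert shift_invert(mpi_communicator, data);

SLEPcWrappers::SolverKrylovSchur eigensolver(eigen_solver_control, mpi_communicator);
eigensolver.set_transformation(shift_invert);
eigensolver.set_which_eigenpairs(EPS_TARGET_MAGNITUDE); // matches -eps_target_magnitude
eigensolver.set_target_eigenvalue(0.0);                 // matches -eps_target 0

eigensolver.solve(tangent_matrix, eigenvalues, eigenfunctions, eigenfunctions.size());
```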