How to implement MatrixFreeOperation::compute_diagonal in a vector-valued problem? (modified based on step-37, which uses the multigrid method)

llf m

Nov 8, 2019, 10:10:54 AM
to deal.II User Group

Dear all,
I want to use the matrix-free method to implement a phase-field simulation; the governing equations consist of the Allen-Cahn equations and the mechanical equilibrium equation.
Is there any tutorial dealing with vector-valued problems using matrix-free? I have looked at step-37, 48, and 59, but they are all scalar problems.
As an implementation route, I want to first solve the single mechanics equation with matrix-free, modifying the step-37 code, and I ran into the following problems:

1) Since it is a vector-valued problem, I need to set the FEEvaluation template parameter n_components to dim:
FEEvaluation<dim, fe_degree, fe_degree + 1, dim, number> phi(data);
Then phi.submit_dof_value() should take the type Tensor<1, dim, VectorizedArray<number>>, and the DoF index in phi.submit_dof_value() runs over phi.dofs_per_component rather than phi.dofs_per_cell.
Is that right?
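For illustration, here is a minimal sketch of what I mean (assuming data is an already initialized MatrixFree<dim, number> object and cell a valid cell-batch index):

// With n_components = dim, each DoF value bundles the dim vector components, so
// get_dof_value()/submit_dof_value() work with Tensor<1, dim, VectorizedArray<number>>
// and the index runs over phi.dofs_per_component, not phi.dofs_per_cell.
FEEvaluation<dim, fe_degree, fe_degree + 1, dim, number> phi(data);
phi.reinit(cell);
for (unsigned int i = 0; i < phi.dofs_per_component; ++i)
  {
    const Tensor<1, dim, VectorizedArray<number>> value = phi.get_dof_value(i);
    phi.submit_dof_value(value, i);
  }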

2) I need to extract the diagonal values of the stiffness matrix. Step-37 uses
phi.submit_dof_value(make_vectorized_array<number>(1.), i);
to pick out the diagonal entry while setting all other DoF values to zero with
phi.submit_dof_value(VectorizedArray<number>(), j);
In the vector-valued case I first wanted to submit a unit vector (Tensor) (1,0,0)/(0,1,0)/(0,0,1) to the DoF values, but since the DoF index in phi.submit_dof_value() runs over phi.dofs_per_component rather than phi.dofs_per_cell, all I can do is submit a Tensor<1, dim, VectorizedArray<number>> equal to (1,1,1) to the data field.
Is that right?
The local_compute_diagonal(...) function would then be:
template <int dim, int fe_degree, typename number>
  void MechanicsOperator<dim, fe_degree, number>::local_compute_diagonal(
    const MatrixFree<dim, number> &             data,
    LinearAlgebra::distributed::Vector<number> &dst,
    const unsigned int &,
    const std::pair<unsigned int, unsigned int> &cell_range) const
  {
    FEEvaluation<dim, fe_degree, fe_degree + 1, dim, number> phi(data);
    AlignedVector<Tensor<1, dim, VectorizedArray<number>>>
      diagonal(phi.dofs_per_component);
    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      {
        phi.reinit(cell);
        for (unsigned int i = 0; i < phi.dofs_per_component; ++i)
          {
            // set all local DoF values to zero ...
            for (unsigned int j = 0; j < phi.dofs_per_component; ++j)
              phi.submit_dof_value(Tensor<1, dim, VectorizedArray<number>>(), j);
            // ... except for a (1,...,1) tensor at index i
            Tensor<1, dim, VectorizedArray<number>> diagonal_tensor;
            for (unsigned int d = 0; d < dim; ++d)
              diagonal_tensor[d] = 1.;
            phi.submit_dof_value(diagonal_tensor, i);
            phi.evaluate(false, true);
            for (unsigned int q = 0; q < phi.n_q_points; ++q)
              // Cmatx is the rank-4 elasticity tensor
              phi.submit_symmetric_gradient(Cmatx * phi.get_symmetric_gradient(q), q);
            phi.integrate(false, true);
            diagonal[i] = phi.get_dof_value(i);
          }
        for (unsigned int i = 0; i < phi.dofs_per_component; ++i)
          phi.submit_dof_value(diagonal[i], i);
        phi.distribute_local_to_global(dst);
      }
  }

3) Does multigrid still work well for vector-valued problems, or for multi-physics coupled problems?
I want to use it to solve the coupled equations (Allen-Cahn and mechanical equilibrium equations), because it might be the easiest preconditioner to build.
I am still searching the mailing list for earlier answers, but have not found an answer to topics 1) and 2) yet.
Any suggestion or material related to this implementation would be appreciated.

best,
m.

llf m

Nov 14, 2019, 3:01:11 AM
to deal.II User Group
Hello, I found one kind of treatment in dealii/tests/matrix_free/stokes_computation.cc (lines 884-919):
use FEEvaluation<...>::begin_dof_values() to loop over dofs_per_cell rather than dofs_per_component:

FEEvaluation<dim, degree_v, degree_v + 1, dim, number> velocity(data, 0);
    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      {
        velocity.reinit(cell);
        AlignedVector<VectorizedArray<number>> diagonal(velocity.dofs_per_cell);
        for (unsigned int i = 0; i < velocity.dofs_per_cell; ++i)
          {
            for (unsigned int j = 0; j < velocity.dofs_per_cell; ++j)
              velocity.begin_dof_values()[j] = VectorizedArray<number>();
            velocity.begin_dof_values()[i] = make_vectorized_array<number>(1.);

            velocity.evaluate(false, true, false);
            for (unsigned int q = 0; q < velocity.n_q_points; ++q)
              {
                velocity.submit_symmetric_gradient(
                  viscosity_x_2(cell, q) * velocity.get_symmetric_gradient(q),
                  q);
              }
            velocity.integrate(false, true);

            diagonal[i] = velocity.begin_dof_values()[i];
          }

        for (unsigned int i = 0; i < velocity.dofs_per_cell; ++i)
          velocity.begin_dof_values()[i] = diagonal[i];
        velocity.distribute_local_to_global(dst);
      }
I used this method for my case, but it still does not work well (worse than PreconditionIdentity).
My code is attached.

best,
m.
test_mechanics_mf_2.cc

Martin Kronbichler

Nov 15, 2019, 2:47:55 AM
to deal.II User Group
Dear m.,

As you have already found out, the code in the Stokes example should do what you need for computing the diagonal. Usually, the approach with FEEvaluation::begin_dof_values() is slightly simpler than the approach with the tensor you mentioned in the first email. We have computed vector-valued problems with matrix-free, including multigrid preconditioners which compute the diagonal in a similar way, so they do work.

One thing I would recommend is to compare the diagonal you get with the cell-by-cell approach against the global diagonal computed by applying the operator to all unit vectors, i.e., outside the matrix-free loop. This only works up to a few tens of thousands of unknowns because it is an n^2 operation and obviously the wrong choice for production runs, but it is a useful debugging approach in case you are not sure whether the interior parts are set up correctly (well, constraints might still mess this up, but that is a different question).
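As a rough sketch of what I mean (system_matrix stands for your matrix-free operator; the other names are plain distributed vectors):

// Build the reference diagonal column by column by applying vmult() to unit
// vectors. This is O(n^2) and only meant for small debugging runs.
LinearAlgebra::distributed::Vector<double> e_j, column, reference_diagonal;
system_matrix.initialize_dof_vector(e_j);
system_matrix.initialize_dof_vector(column);
system_matrix.initialize_dof_vector(reference_diagonal);
for (types::global_dof_index j = 0; j < e_j.size(); ++j)
  {
    e_j = 0.;
    if (e_j.locally_owned_elements().is_element(j))
      e_j(j) = 1.;
    system_matrix.vmult(column, e_j);
    if (column.locally_owned_elements().is_element(j))
      reference_diagonal(j) = column(j);
  }
// reference_diagonal can now be compared entry by entry with the diagonal
// assembled cell by cell in compute_diagonal().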

Regarding your last question:
Does multigrid still work well for vector-valued problems, or for multi-physics coupled problems?
I want to use it to solve the coupled equations (Allen-Cahn and mechanical equilibrium equations).

This depends a lot on the equation. For coupled problems with non-trivial coupling, the answer is generally "no, multigrid alone will not work out of the box". But that is not specific to the implementation (e.g. whether matrix-free or not); it is due to the mathematics of the underlying equations and multigrid convergence theory. In general, you can expect that simple smoothers like Chebyshev will not like multiphysics component coupling, and you need to address smoothing "by hand". What people do is to iterate between the fields in an appropriate way and use simple smoothers on each component of the multiphysics problem, as e.g. described in the paper here: https://www.sciencedirect.com/science/article/pii/S0045782516307575 . But I would say that the field is still pretty ad hoc.

Then, the next question with Allen-Cahn is whether something like Chebyshev + point-Jacobi still works well in the nonlinear case with the typical contrasts in the terms. Maybe yes, maybe no. The consensus in the field is that for strongly varying coefficients you need to either construct very specific smoothers or, more commonly, use operator-dependent level transfer operators that are able to react to those cases. AMG is supposed to handle some such cases better than GMG with the naive transfer we provide in deal.II - but AMG is far from a black box. I have done work with Cahn-Hilliard, which involves a block system of two components, and there you definitely do not want to simply apply multigrid to the 2x2 block system as a whole, but rather use some algebraic manipulations on the blocks with a basic preconditioner of a related operator instead: https://www.sciencedirect.com/science/article/pii/S0898122112004191 .

I hope this helps.

Best,
Martin

llf m

Nov 15, 2019, 11:37:46 AM
to deal.II User Group
Dear Martin,
Thanks for your reply; it helps me a lot indeed.
It will take me a bit of time to go through the papers you provided (thanks again).
I know that the preconditioner for a specific problem is complicated, especially since matrix-free does not provide much matrix information.
But I am still surprised and doubtful that GMG using the diagonal entries does not accelerate the solve of the simple elastic equilibrium equation, whose bilinear form is
A_ij = symmetric_gradient(phi_i) : C_ijkl : symmetric_gradient(phi_j);
it takes more time than without a preconditioner (PreconditionIdentity).
Yes, the coupled Allen-Cahn and mechanics problem is more complicated. I have tried:
1) assembling the whole system and using AMG to precondition and solve it together;
2) using a Schur-complement-like strategy with a block matrix (operator), but in this case AMG cannot be used on the Schur complement since it only provides vmult().
As the coupled system is time-dependent and nonlinear, matrix-free is very attractive because it does not need to assemble the system matrix in every Newton step of each time step.
I will try to understand the papers to see whether I need to construct a specific preconditioner or use coupled/decoupled MG.
It seems the strategy from the first paper cannot be used within matrix-free?
Best,
m.
On Friday, November 15, 2019 at 3:47:55 PM UTC+8, Martin Kronbichler wrote:

llf m

Nov 15, 2019, 11:44:58 AM
to deal.II User Group
Your suggestion to compare the diagonals is quite helpful. Since the inverse_diagonal_entries member is protected, one may need to add a few lines to output the diagonal entries computed cell by cell in compute_diagonal(), after distributing the local diagonal, while the global diagonal can be obtained by applying the operator to all unit vectors as you mention above.
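For example, something along these lines after compute_diagonal() (just a sketch, assuming the operator derives from MatrixFreeOperators::Base as in step-37, so that the public get_matrix_diagonal_inverse() exposes the stored inverse diagonal; system_matrix stands for the operator object):

// The base class stores the inverse of the diagonal, so invert each entry
// before comparing with the reference diagonal from the unit-vector approach.
const auto &inv_diag = system_matrix.get_matrix_diagonal_inverse()->get_vector();
for (const auto j : inv_diag.locally_owned_elements())
  std::cout << "diag(" << j << ") = " << 1. / inv_diag(j) << std::endl;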


On Friday, November 15, 2019 at 3:47:55 PM UTC+8, Martin Kronbichler wrote: