How to specify finite element component in MatrixFree?


yy.wayne

Nov 2, 2022, 5:24:23 AM
to deal.II User Group
In test/matrix-free/assemble_matrix_02.cc a Stokes matrix is assembled in MatrixFree form, and the DoFs corresponding to velocity and pressure are accessed via FEEvaluation<...,dim> velocity(..., 0) and FEEvaluation<...,1> pressure(..., dim), respectively.

I have a two-component system as in step-29, with real and imaginary parts. But initializing the two FEEvaluations as FEEvaluation<...,1> real(matrix_free_data, 0) and FEEvaluation<...,1> imag(matrix_free_data, 1) gives an error, because matrix_free_data.n_components() is 1, not 2. Maybe it's because I'm using an fe_collection rather than just an FESystem as in the assemble_matrix_02 test.

If I have a single FEEvaluation<...,2> integrator that includes both real and imaginary parts, is there a way to decouple them like in FEValues (for example, fe.system_to_component_index, or fe_values[FEValuesExtractors::Scalar(...)])?
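For reference, the matrix-based decoupling I have in mind looks roughly like this (a sketch along the lines of step-29; extractor names are made up):

```cpp
const FEValuesExtractors::Scalar real_part(0);
const FEValuesExtractors::Scalar imag_part(1);

// Inside the cell loop: each extractor view picks out one component,
// so real and imaginary contributions can be written separately.
for (const unsigned int i : fe_values.dof_indices())
  for (const unsigned int j : fe_values.dof_indices())
    for (const unsigned int q : fe_values.quadrature_point_indices())
      cell_matrix(i, j) +=
        (fe_values[real_part].gradient(i, q) *
           fe_values[real_part].gradient(j, q) +
         fe_values[imag_part].gradient(i, q) *
           fe_values[imag_part].gradient(j, q)) *
        fe_values.JxW(q);
```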

Best,
Wayne

yy.wayne

Nov 2, 2022, 9:40:36 AM
to deal.II User Group
I tried to modify my code so that FEEvaluation is constructed without matrix_free_data, as in the above .cc in dealii/test, but I get the following error message:
        "FEEvaluation was initialized without a matrix-free object. Integer indexing is not possible".

I guess the only way to do a multi-component MatrixFree problem right now is through a BlockVector, as in the matrix_vector_Stokes test? Since I have never written a deal.II project with blocked DoFs before, I'm afraid code besides the MatrixFree parts would have to change as well... Should I try rearranging my step-29-like problem with block vectors?

yy.wayne

Nov 2, 2022, 12:55:44 PM
to deal.II User Group
Well, a sad thing I noticed is that a function used for p-multigrid in step-75, MGTwoLevelTransfer::reinit(), is implemented only for LinearAlgebra::distributed::Vector, not LinearAlgebra::distributed::BlockVector.
[attachment: 202211-3.png]

I'm fine with writing some unimplemented functions myself, but guidance would be really helpful. While going through this problem I found several different choices and am not sure which is the best way to get this done. For example: LinearAlgebraTrilinos::MPI::BlockVector or LinearAlgebra::distributed::BlockVector? Do we need locally_relevant_dofs when reinitializing the solution vector (it is not used in stokes_computation)? Or does the non-specialized MGTransferLevel class work for p-coarsening as well?

In short, I'm solving a step-29-like problem (2 components) with matrix-free methods and the p-multigrid technique. Is current deal.II capable of solving it? If not, what's the fastest way to implement it? Thanks!

Peter Munch

Nov 2, 2022, 4:44:53 PM
to deal.II User Group
Hi

The block version of MGTransferGlobalCoarsening is MGTransferBlockGlobalCoarsening (see  https://www.dealii.org/developer/doxygen/deal.II/classMGTransferBlockGlobalCoarsening.html).

I don't know the details of step-29. But what I would recommend is to take a look at https://github.com/peterrum/dealii-spirk/blob/bb52df5d1023f9c41ca51a082be769bc0171bf75/include/operator.h#L616-L660. Here, I am solving a complex-valued Helmholtz operator (in my case a sum of mass and Laplace matrix, which arises from a complex implicit Runge-Kutta formulation; see https://arxiv.org/abs/2209.06700). This complex code was the motivation for writing MGTransferBlockGlobalCoarsening in the first place^^

I think the code should answer most of your questions.

Regarding

> FEEvaluation<...,1> real(matrix_free_data, 0) and FEEvaluation<...,1> real(matrix_free_data, 1)

I guess you have to replace the "1" by a "0" in the second FEEval.
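To make that concrete: since only one DoFHandler is registered with MatrixFree, both evaluators use dof_no 0, and the component is selected via the last constructor argument. A sketch, assuming the element is an FESystem with two scalar components (template arguments abbreviated):

```cpp
// dof_no = 0 for both evaluators: only one DoFHandler is registered.
// The component is chosen via first_selected_component (last argument).
FEEvaluation<dim, fe_degree, fe_degree + 1, 1, double>
  real(matrix_free_data, /*dof_no=*/0, /*quad_no=*/0,
       /*first_selected_component=*/0),
  imag(matrix_free_data, /*dof_no=*/0, /*quad_no=*/0,
       /*first_selected_component=*/1);
```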

Hope this helps,
Peter

yy.wayne

Nov 2, 2022, 10:47:05 PM
to deal.II User Group
Thank you Peter.

It looks awesome! I've got a few questions about it.
  1. In your GitHub code, you actually assemble a block-matrix system (in MatrixFree form) but didn't renumber DoFs as done in most Stokes examples? There are no multiple dof_handlers. The solution in HeatProblem is then a non-complex one (with half-size src and dst processed in the MatrixFree operator)?
  2. Can I implement a p-multigrid preconditioner with the given MGTransferBlockGlobalCoarsening?
  3. It's a parallel version, but the vectors are not initialized with solution.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). Does matrix_free->initialize_dof_vector do the same job?
Best,
Wayne

Peter Munch

Nov 3, 2022, 3:08:03 AM
to deal.II User Group
Hi Wayne,

> The solution in HeatProblem is a non-complex one then (half size of src and dst processed in MatrixFree operator)?

We are solving equation (10) from the paper; it is the reformulated version of equation (9), which is a complex system. The heat equation itself is indeed not complex, but the diagonalization approach in the case of the fully implicit stage-parallel IRK makes the system complex (the inner term of equation (2) consists of complex blocks that need to be solved).

> In your github code, you actually assemble a block matrix system(in MatrixFree) but didn't renumber dofs as done in most Stokes examples? There's no multi dof_handlers.

I am not sure which lines you are referring to, but if I assemble a matrix, it only happens for the coarse-grid problem; also, I don't think I set up the matrix for the complex system, and instead simply used Chebyshev iterations around point Jacobi. I have another project where we actually assemble the matrix.

I am not sure what happens in the Stokes examples. But my guess is that they use an FESystem consisting of a scalar FE for the pressure and a vectorial FE for the velocity. To be able to use a block vector in this case, the DoFs need to be renumbered so that the DoFs of each component are contiguous, which is normally not the case.
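The renumbering step in those examples is essentially the following (a sketch in the spirit of the Stokes tutorials, e.g. step-22):

```cpp
// Group DoFs by component so that each block of a BlockVector
// corresponds to a contiguous index range.
DoFRenumbering::component_wise(dof_handler);

// Query the resulting block sizes to size the BlockVector.
const std::vector<types::global_dof_index> dofs_per_block =
  DoFTools::count_dofs_per_fe_block(dof_handler);
```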

In my case, I am using a single scalar DoFHandler. This describes both the real and the imaginary part. As a consequence, one only has to pass a single DoFHandler/AffineConstraints to MatrixFree. This is a somewhat different approach than in the other tutorials and has been adopted from our two-phase solver adaflo (https://github.com/kronbichler/adaflo), where we use it extensively to reduce overhead for the level-set problem (a single DoFHandler that describes the level-set field, the normal, and the curvature). MatrixFree works quite nicely (just like in the case where you use an FESystem) if you use it with a scalar DoFHandler and pass a BlockVector to the matrix-free loops. I think we don't have a tutorial for this way of working with block vectors in the context of MatrixFree. However, we have made a lot of progress in improving the usability in the last two years, since we use this feature in 2-3 projects with external collaboration partners. Unfortunately, we have not moved all utility functions to deal.II yet.
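As a sketch of what this looks like in practice (assuming a scalar DoFHandler, block 0 = real part, block 1 = imaginary part; the actual coupling at the quadrature points is problem-specific and omitted):

```cpp
// Cell loop for a matrix-free operator working on a two-block vector.
// Two scalar FEEvaluation objects share the same (scalar) DoFHandler.
template <int dim, int degree>
void local_apply(const MatrixFree<dim, double> &data,
                 LinearAlgebra::distributed::BlockVector<double> &dst,
                 const LinearAlgebra::distributed::BlockVector<double> &src,
                 const std::pair<unsigned int, unsigned int> &cell_range)
{
  FEEvaluation<dim, degree, degree + 1, 1, double> real(data), imag(data);
  for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
    {
      real.reinit(cell);
      imag.reinit(cell);
      real.gather_evaluate(src.block(0), EvaluationFlags::gradients);
      imag.gather_evaluate(src.block(1), EvaluationFlags::gradients);
      for (unsigned int q = 0; q < real.n_q_points; ++q)
        {
          // Laplace-like placeholder; insert the real/imaginary
          // coupling of your PDE here.
          real.submit_gradient(real.get_gradient(q), q);
          imag.submit_gradient(imag.get_gradient(q), q);
        }
      real.integrate_scatter(EvaluationFlags::gradients, dst.block(0));
      imag.integrate_scatter(EvaluationFlags::gradients, dst.block(1));
    }
}
```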

> Can I implement p-multigrid precondition with given MGTransferBlockGlobalCoarsening?

Yes. It is used in the code.

> It's a parallel version but vectors are not initialized with solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So matrix_free->initialize_dof_vector has same function done?

Yes, initialize_dof_vector() does the same, just smarter. A general recommendation: always use initialize_dof_vector(). If MatrixFree notices that you use its internal partitioner, it can take some shortcuts; otherwise, it needs to do additional checks to figure out whether the vectors are compatible.
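A sketch of the recommended initialization for a two-block vector (names are made up):

```cpp
// Let MatrixFree create each block so that it shares the internal
// partitioner; this enables the shortcuts mentioned above.
LinearAlgebra::distributed::BlockVector<double> solution(2);
for (unsigned int b = 0; b < solution.n_blocks(); ++b)
  matrix_free.initialize_dof_vector(solution.block(b));
solution.collect_sizes();
```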

Hope this helps,
Peter

yy.wayne

Nov 3, 2022, 6:18:58 AM
to deal.II User Group
Hi Peter,

Thanks for your comprehensive reply, really helpful. By "assemble" I actually mean the MatrixFree integrator part that corresponds to assembly in the matrix-based form :). But you resolved my question, since my main concern was one scalar DoFHandler describing both the real and imaginary parts. A last question: what would you do when the real and imaginary parts have different boundary conditions, if only one DoFHandler is provided?

Best,
Wayne

Peter Munch

Nov 3, 2022, 12:06:28 PM
to deal.II User Group
Hi Wayne,

in that case I would pass the same DoFHandler to MatrixFree twice, together with two different AffineConstraints objects. See the version of MatrixFree::reinit() that takes vectors: https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#abb081e301fadbfd94e0169611bae3838.
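A sketch of that setup (variable names are assumed):

```cpp
// Register the same DoFHandler twice, each entry with its own
// constraints (e.g., real-part BCs vs. imaginary-part BCs).
std::vector<const DoFHandler<dim> *> dof_handlers = {&dof_handler,
                                                     &dof_handler};
std::vector<const AffineConstraints<double> *> constraints =
  {&constraints_real, &constraints_imag};
std::vector<Quadrature<dim>> quadratures = {QGauss<dim>(fe_degree + 1)};

matrix_free.reinit(mapping, dof_handlers, constraints, quadratures,
                   additional_data);

// Afterwards, select the constraint set via the dof_no argument:
// FEEvaluation<...> real(matrix_free, /*dof_no=*/0);
// FEEvaluation<...> imag(matrix_free, /*dof_no=*/1);
```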

Peter

yy.wayne

Nov 3, 2022, 9:40:50 PM
to deal.II User Group
Thanks for your patience and wonderful answers :)

Best,
Wayne

yy.wayne

Nov 17, 2022, 8:53:29 PM
to deal.II User Group
Hi Peter, sorry for coming back to this discussion again. 

I've implemented the vmult and compute_diagonal functions for a complex MatrixFree operator, and tested them. But there is one last ingredient missing from the multi-component MatrixFree framework with p-coarsening: the transfers.

Different from your spirk article, where the two components have the same constraints, in my case the Dirichlet BCs on the real and imaginary DoFs are different. However, when constructing MGTransferBlockGlobalCoarsening or MGTransferGlobalCoarsening, an MGTwoLevelTransfer object is needed, but it only receives a single DoFHandler and a single constraint object. The problem here is the single constraint object, since the DoFHandlers for the real and imaginary parts are the same. I haven't figured out a solution yet.

Alternatively, I've noticed that a GMG block transfer class, MGTransferBlockMatrixFree, receives a vector of MGConstrainedDoFs. Your implementation of a complex Helmholtz MatrixFree method with p-coarsening is very helpful and addresses most of my problems; the only remaining difference is the different constraints for the two components.

Best,
Wayne

Peter Munch

Nov 18, 2022, 12:32:27 AM
to deal.II User Group
Hi Wayne,

currently, MGTransferBlockGlobalCoarsening takes only a single MGTransferGlobalCoarsening instance in the constructor and uses it for all blocks (i.e., the same AffineConstraints instance for all blocks): https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.h#L691-L692

However, this can simply be generalized by adding a second constructor that takes a vector of MGTransferGlobalCoarsening objects. This should not be too much work, since MGTransferBlockGlobalCoarsening is derived from MGTransferGlobalCoarsening, which contains all the logic. The new constructor would pass false here https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3379, and the function https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3385-L3393 would need to be generalized to access the entries within the vector.
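The proposed addition would look roughly like this (a hypothetical signature, not in deal.II at the time of writing):

```cpp
// Hypothetical second constructor: one transfer operator per block,
// so that each block can carry its own AffineConstraints.
template <int dim, typename VectorType>
MGTransferBlockGlobalCoarsening<dim, VectorType>::
  MGTransferBlockGlobalCoarsening(
    const std::vector<const MGTransferGlobalCoarsening<dim, VectorType> *>
      &transfer_operators);
```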

Hope this helps,
Peter

yy.wayne

Nov 18, 2022, 1:53:36 AM
to deal.II User Group
Thank you Peter.

Best.
Wayne
