Hi Wayne,
> The solution in HeatProblem is a non-complex one then(half size of src and dst processed in MatrixFree operator)?
We are solving equation (10) from the paper; it is the reformulated version of equation (9), which is a complex system. The heat equation itself is indeed not complex, but the diagonalization approach in the case of fully implicit stage-parallel IRK makes the system complex (the inner term of equation (2) consists of complex blocks that need to be solved).
> In your github code, you actually assemble a block matrix system(in MatrixFree) but didn't renumber dofs as done in most Stokes examples? There's no multi dof_handlers.
I am not sure which lines you are referring to, but if I assemble a matrix, it only happens for the coarse-grid problem. Also, I don't think I set up the matrix for the complex system; instead I simply used Chebyshev iterations around point Jacobi. I have another project where we actually assemble the matrix.
I am not sure what happens in the Stokes examples, but my guess is that they use an FESystem consisting of a scalar FE for the pressure and a vectorial FE for the velocity. To be able to use a block vector in this case, the DoFs need to be renumbered so that the DoFs of each component are contiguous, which is normally not the case.
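To illustrate the renumbering step, here is a minimal sketch of the component-wise renumbering used in the Stokes tutorials (e.g., step-22). All names here are illustrative, not taken from your repository:

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/grid/tria.h>

using namespace dealii;

template <int dim>
void setup(Triangulation<dim> &triangulation)
{
  // Vector-valued velocity (degree 2) plus scalar pressure (degree 1):
  FESystem<dim> fe(FE_Q<dim>(2), dim, FE_Q<dim>(1), 1);

  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // Renumber so that all velocity DoFs come before all pressure DoFs.
  // Only after this step can the solution be stored in a BlockVector
  // with one contiguous block per component group.
  DoFRenumbering::component_wise(dof_handler);
}
```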
In my case, I am using a single scalar DoFHandler, which describes both the real and the imaginary part. As a consequence, one only has to pass a single DoFHandler/AffineConstraints to MatrixFree. This is a somewhat different approach than the one taken in the other tutorials and has been adopted from our two-phase solver adaflo (https://github.com/kronbichler/adaflo), where we extensively use it to reduce overhead for the level-set problem (a single DoFHandler that describes the level-set field, the normal, and the curvature). MatrixFree works quite nicely (just as if you used an FESystem) if you set it up with a scalar DoFHandler and pass a BlockVector to the matrix-free loops. I don't think we have a tutorial for this way of working with block vectors in the context of MatrixFree. However, we have made a lot of progress in improving the usability over the last two years, since we use this feature in 2-3 projects with external collaboration partners. Unfortunately, we have not moved all utility functions to deal.II yet.
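A rough sketch of what this looks like in practice (not the actual code from the repository; the operator, the coefficients alpha/beta, and the coupling are illustrative): block 0 of a distributed BlockVector holds the real part, block 1 the imaginary part, and two FEEvaluation objects on the same scalar DoFHandler read and write the respective blocks inside a single cell_loop.

```cpp
#include <deal.II/lac/la_parallel_block_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

// Hypothetical matrix-free operator for a complex-valued system
// using a single scalar DoFHandler.
template <int dim, typename Number>
class ComplexOperator
{
public:
  using BlockVectorType = LinearAlgebra::distributed::BlockVector<Number>;

  void vmult(BlockVectorType &dst, const BlockVectorType &src) const
  {
    matrix_free.cell_loop(&ComplexOperator::local_apply, this, dst, src, true);
  }

private:
  void local_apply(const MatrixFree<dim, Number> &data,
                   BlockVectorType &dst,
                   const BlockVectorType &src,
                   const std::pair<unsigned int, unsigned int> &range) const
  {
    // Two evaluators on the same scalar DoFHandler: one for the real
    // block, one for the imaginary block.
    FEEvaluation<dim, -1, 0, 1, Number> phi_re(data), phi_im(data);
    for (unsigned int cell = range.first; cell < range.second; ++cell)
      {
        phi_re.reinit(cell);
        phi_im.reinit(cell);
        phi_re.read_dof_values(src.block(0));
        phi_im.read_dof_values(src.block(1));
        phi_re.evaluate(EvaluationFlags::values);
        phi_im.evaluate(EvaluationFlags::values);
        for (unsigned int q = 0; q < phi_re.n_q_points; ++q)
          {
            // Illustrative complex coupling: (alpha + i*beta) * u
            const auto u_re = phi_re.get_value(q);
            const auto u_im = phi_im.get_value(q);
            phi_re.submit_value(alpha * u_re - beta * u_im, q);
            phi_im.submit_value(beta * u_re + alpha * u_im, q);
          }
        phi_re.integrate(EvaluationFlags::values);
        phi_im.integrate(EvaluationFlags::values);
        phi_re.distribute_local_to_global(dst.block(0));
        phi_im.distribute_local_to_global(dst.block(1));
      }
  }

  MatrixFree<dim, Number> matrix_free;
  Number alpha = 1.0, beta = 1.0;
};
```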
> Can I implement p-multigrid precondition with given MGTransferBlockGlobalCoarsening?
Yes. It is used in the code.
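For reference, a hedged sketch of how such a block transfer could be set up (all variable names are illustrative; check the repository for the actual setup). MGTransferBlockGlobalCoarsening wraps a scalar MGTransferGlobalCoarsening and applies it to each block of a BlockVector:

```cpp
#include <deal.II/lac/la_parallel_block_vector.h>
#include <deal.II/multigrid/mg_transfer_global_coarsening.h>

using namespace dealii;

using VectorType      = LinearAlgebra::distributed::Vector<double>;
using BlockVectorType = LinearAlgebra::distributed::BlockVector<double>;

// Assumed: dof_handlers[l] / constraints[l] describe the p-multigrid
// levels, with l = 0 the coarsest.
MGLevelObject<MGTwoLevelTransfer<dim, VectorType>> transfers(0, max_level);
for (unsigned int l = 0; l < max_level; ++l)
  transfers[l + 1].reinit(dof_handlers[l + 1], dof_handlers[l],
                          constraints[l + 1], constraints[l]);

// Scalar transfer over all levels, then its block wrapper:
MGTransferGlobalCoarsening<dim, VectorType>      transfer(transfers);
MGTransferBlockGlobalCoarsening<dim, VectorType> block_transfer(transfer);
```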
> It's a parallel version but vectors are not initialized with solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So matrix_free->initialize_dof_vector has same function done?
Yes, initialize_dof_vector() does the same, just in a smarter way. A general recommendation: always use initialize_dof_vector(). If MatrixFree notices that a vector uses its internal partitioner, it can take some shortcuts; otherwise, it needs to do additional checks to figure out whether the vectors are compatible.
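A minimal sketch of the two variants side by side (assuming `matrix_free`, `locally_owned_dofs`, etc. have been set up as in your code):

```cpp
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;

LinearAlgebra::distributed::Vector<double> solution;

// Manual setup: works, but MatrixFree must later check whether the
// vector's ghost layout is compatible with its own partitioner.
// solution.reinit(locally_owned_dofs, locally_relevant_dofs,
//                 mpi_communicator);

// Preferred: let MatrixFree create the vector with its internal
// partitioner, so the matrix-free loops can take shortcuts.
matrix_free->initialize_dof_vector(solution);
```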
Hope this helps,
Peter