Incremental build of MG objects

luca.heltai

Feb 4, 2021, 5:15:41 AM2/4/21
to Deal.II Users
Dear all,

I’ve been experimenting a bit with the MatrixFree MG framework, and I was wondering how the more expert among you usually do things.

Background: we're building a framework called smoothed adaptive fem, where the usual

solve -> estimate -> mark -> refine

is done in full only in the first and last steps of the solution process, while in between we replace the solve step with a smooth step:

smooth -> estimate -> mark -> refine

This is, in essence, one ascending phase of the V-cycle multigrid method. Things work very well, but they’d work much better if I replaced the smooth step by one (or more) actual V-cycle steps.

I’ve been using step-37 and step-50 (thanks to the fantastic job of Katharina, Martin, Thomas, and Timo!) as a base for my experiments, and they work fantastically well.

This is what I have implemented:


              Estimate / mark / refine
                        ^
                        |
   Current level                 Current level
        |                             ^
   Pre-smooth                    Post-smooth
        v                             |
   Coarser level                 Coarser level
        |                             ^
   Pre-smooth                    Post-smooth
        v                             |
   Coarsest level  +-->  Coarse solve



The issue I’m facing is the following: at each adaptation cycle, I’m setting up the whole hierarchy of the MG framework just to apply one or two V-cycle steps, and the whole setup cost is basically comparable to the cost of applying the V-cycle itself.

The thing is, once I have finished the V-cycle, estimated the error, and transferred the solution, the whole MG hierarchy is destroyed and rebuilt from scratch. While this is fine if you are using MG as a preconditioner and need to call it many times, in my case this is the most time-consuming part (!).

Is there a way to reuse as much as possible of the existing MG objects, i.e., detect what levels need to be rebuilt and what can remain the same?

I was under the impression that the entire coarse hierarchy depends on the dof partitioning of the finest level. Is this true? If not, is there a way to detect which levels of the MG hierarchy have changed, and which ones are left alone after refining? This would save me, for example, from recomputing the coarse matrix every time (since it is, after all, the same matrix!) and reinitializing AMG on it.

Thanks,
Luca.





Martin Kronbichler

Feb 13, 2021, 9:41:06 AM2/13/21
to dea...@googlegroups.com, Peter Munch
Hi Luca,

Sorry for the delayed answer.

>               Estimate / mark / refine
>                         ^
>                         |
>    Current level                 Current level
>         |                             ^
>    Pre-smooth                    Post-smooth
>         v                             |
>    Coarser level                 Coarser level
>         |                             ^
>    Pre-smooth                    Post-smooth
>         v                             |
>    Coarsest level  +-->  Coarse solve
This is an interesting approach.
> The issue I’m facing is the following: on each level, I’m setting up the whole hierarchy of the MG framework, just to apply one or two V-cycle steps, and the whole setup cost is basically comparable to the application of the single V-cycle algorithm.
>
> The thing is, once I finished the V-cycle, estimated the error, and transferred the solution, the whole hierarchy of MG is destroyed and rebuilt from scratch. While this is fine if you are using MG as a preconditioner, and need to call it many times, in my case this is the most time consuming part (!).

I can easily imagine that. The question is: have you measured what
exactly consumes the time? If you go through the whole process again, I
think the heaviest parts are typically the setup of the MatrixFree
objects, followed by MGTransferMatrixFree (I assume those are the data
structures you are using). But the setup of the Triangulation and
DoFHandler::distribute(_mg)_dofs() should also be quite expensive.


> Is there a way to reuse as much as possible of the existing MG objects, i.e., detect what levels need to be rebuilt and what can remain the same?

There is a way, but it is not straightforward and takes a few days of
work (if you already know where to look). As you suspect, the coarse
hierarchy depends on the partitioning of the finest level, so the
objects really need to be refreshed because the parallel distribution
might change. I think the easiest way would be to hook things up to the
global coarsening infrastructure that Peter (on CC) has been building
recently, because there we already have separate triangulations and
other things. At the same time, the transfer would probably be somewhat
orthogonal to what is done right now, because you would only transfer
into new unknowns on a few isolated mesh cells.

I think we could definitely identify some strategy to make this work, as
I see your point about reusing a coarse AMG, for example. What are your
time plan and resources for this topic?

Best,
Martin
