Exception when using MGTransferMatrixFree::interpolate_to_mg with periodic BCs


Guilhem Poy

Sep 2, 2021, 12:14:23 PM
to deal.II User Group
Dear deal-ii users and developers,

I am currently implementing a code that minimizes an energy functional depending on a traceless symmetric tensor (5D solution) for liquid crystal physics. The physics details do not matter much here; I will just mention that I am using a combination of trust-region Newton iterations with a truncated conjugate gradient (TCG) solver and a matrix-free approach with multigrid preconditioning. This works exceedingly well for simple problems with Dirichlet boundary conditions, and I must say I am very impressed by the quality and efficiency of the deal.II library and the excellent documentation, so a big thanks to all the developers!

However, I have a problem when I want to use periodic boundary conditions together with matrix-free multigrid preconditioning (it works fine with matrix-free but no preconditioner). Specifically, the problem arises when I try to use the method MGTransferMatrixFree::interpolate_to_mg, which allows me to transfer the solution of my problem to each level of the triangulation (where it is then used in the assembly of the system matrix on that level, which in my case corresponds to the Hessian of the free energy functional, but I think this is irrelevant to the problem at hand).

Basically, this method tries to access an element of a distributed vector that is not stored on the current MPI process, so I get an exception raised by LinearAlgebra::distributed::Vector. If I use Dirichlet BCs instead of periodic ones, the problem goes away, so it seems to be a problem with the way the vectors of my problem are partitioned when I include periodic BCs. To illustrate this, I am attaching a MWE that shows how I set up the matrix-free and multigrid objects with periodic BCs and that raises the same error as my main code. I am using deal.II v9.3 on an Arch Linux distribution.
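For reference, here is roughly how the periodicity is set up in my MWE (a sketch with illustrative names, not the exact attached code): the periodic face pairs are collected on the parallel triangulation, periodicity constraints are added to the AffineConstraints object, and MGConstrainedDoFs is initialized from the DoFHandler.

```cpp
// Sketch (deal.II 9.3 API); triangulation, dof_handler, constraints and
// mg_constrained_dofs are the usual objects of a matrix-free GMG setup.
std::vector<GridTools::PeriodicFacePair<
  typename parallel::distributed::Triangulation<dim>::cell_iterator>>
  periodic_faces;
GridTools::collect_periodic_faces(triangulation,
                                  /*b_id1 =*/2,
                                  /*b_id2 =*/3,
                                  /*direction =*/1,
                                  periodic_faces);
triangulation.add_periodicity(periodic_faces);

// ... after dof_handler.distribute_dofs() and distribute_mg_dofs():
DoFTools::make_periodicity_constraints(dof_handler,
                                       /*b_id1 =*/2,
                                       /*b_id2 =*/3,
                                       /*direction =*/1,
                                       constraints);
mg_constrained_dofs.initialize(dof_handler);
```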

I have looked at all the tutorials related to matrix-free and multigrid objects (this is how I wrote the base program in the first place), but was not able to find a solution on my own. I have tried various ways of specifying the AffineConstraints objects for the level matrix-free objects, but none worked. I also tried dummy AffineConstraints objects as in step-59, which uses periodic faces, but this does not work either, so I guess that approach is specific to FE_DGQ (whereas I am using FE_Q spaces). I must be doing something wrong, because it works so well without periodic BCs, so if anyone has an idea that would be great :) I am happy to provide any additional details you deem important.

Best regards,

Guilhem

main.cpp

peterrum

Sep 3, 2021, 4:29:46 AM
to deal.II User Group
Hi Guilhem,

Thanks for reporting the bug and providing a failing test. I have opened a PR that (hopefully) fixes the issue (https://github.com/dealii/dealii/pull/12738) and simplified your code a bit (since the issue was not related to MatrixFree). Maybe you could verify that it indeed works for you!?

Looking forward to the results you will obtain with MatrixFree! If you want, you can also check out the new global-coarsening multigrid infrastructure as shown in step-75.

PM

Guilhem Poy

Sep 3, 2021, 6:33:23 AM
to deal.II User Group
Hi Peter,

I just used your PR as a patch in my candi deal.II setup, and I am happy to report that the bug is indeed fixed, thanks a lot for your responsiveness! If you want to further simplify the new test in your PR, I think the AffineConstraints object is not needed.

Now I just need to investigate why I get terrible performance (in terms of solver iterations) with my multigrid preconditioner when there are periodic BCs; this is probably an implementation problem on my side.

Step-75 uses hp-adaptivity, right? I am really not familiar with this, so I think I will first try to make things work correctly with an adaptive mesh. But I am definitely already extremely impressed with the matrix-free performance: back in 2016 I wrote a code using matrix-based minimization of a simpler version of my free energy functional (unit-norm 3D solution field; I think I have two papers listed in the deal.II publication list), and my new (and more complex) matrix-free code is actually much faster than the old one, even without multigrid (the old one used AMG, so maybe it was not optimal)!

Best

Guilhem

Guilhem Poy

Sep 3, 2021, 8:41:56 AM
to deal.II User Group
Dear all,

I have a follow-up question concerning the aforementioned performance loss with PBCs + GMG. To keep things simple, I modified step-37 by colorizing the hypercube and adding periodic BCs on the faces with boundary ids 2 and 3; this is only a few additional lines compared to the original step-37.cc. The modified file is attached; the PBCs can be switched off by commenting out the macro PERIODIC_BCS at the top. As is clearly visible when running the program, the O(N) scaling is lost as soon as the PBCs are activated, with a terrible number of solver iterations (>100).
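The PBC-related additions to step-37 boil down to something like this (a sketch, not the exact attached file; boundary ids are those produced by the colorize flag of GridGenerator::hyper_cube):

```cpp
#ifdef PERIODIC_BCS
  // colorize = true assigns boundary ids 0..2*dim-1 to the faces of the
  // cube; ids 2 and 3 are the two faces orthogonal to the y-axis.
  GridGenerator::hyper_cube(triangulation, 0., 1., /*colorize =*/true);

  std::vector<GridTools::PeriodicFacePair<
    typename parallel::distributed::Triangulation<dim>::cell_iterator>>
    periodic_faces;
  GridTools::collect_periodic_faces(triangulation,
                                    /*b_id1 =*/2,
                                    /*b_id2 =*/3,
                                    /*direction =*/1,
                                    periodic_faces);
  triangulation.add_periodicity(periodic_faces);
#else
  GridGenerator::hyper_cube(triangulation, 0., 1.);
#endif
```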

So the problem is probably that I do not call make_periodicity_constraints on the level_constraints for the level matrix operators, but I cannot figure out how I should specify them. What I find weird is that an AffineConstraints object containing the PBCs is already built internally in MGConstrainedDoFs, so I was hoping the MatrixFreeOperator would automatically take these into account, but I guess this is not the case? I tried merging mg_constrained_dofs.get_level_constraints with the custom-built level_constraints in the level setup loop, but this does not work (and makes things worse in terms of iterations). Or maybe the loss of performance is due to something else?
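For concreteness, this is the kind of level loop I mean (a sketch of what I tried, with illustrative names; I am not claiming this is the right approach):

```cpp
for (unsigned int level = 0; level < nlevels; ++level)
  {
    IndexSet relevant_dofs;
    DoFTools::extract_locally_relevant_level_dofs(dof_handler,
                                                  level,
                                                  relevant_dofs);

    AffineConstraints<double> level_constraints;
    level_constraints.reinit(relevant_dofs);
    level_constraints.add_lines(
      mg_constrained_dofs.get_boundary_indices(level));

    // my attempt at pulling in the internally stored periodicity
    // constraints; this made the iteration counts worse:
    level_constraints.merge(
      mg_constrained_dofs.get_level_constraints(level),
      AffineConstraints<double>::left_object_wins,
      /*allow_different_local_lines =*/true);

    level_constraints.close();
    // ... reinit the level MatrixFree object with level_constraints ...
  }
```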

Best regards,

Guilhem
step-37.cc

peterrum

Sep 6, 2021, 10:06:25 AM
to deal.II User Group
Hi Guilhem,

> I just used your PR as a patch in my candi-deal-ii setup, and I am happy to report that indeed the bug was solved, thanks a lot for your reactivity!

Thanks for the feedback!

> But definitely, I am already extremely impressed with Matrix-Free performance: back in 2016 I wrote a code using matrix-based minimization of a simpler version of my free energy functional (unit-normed 3D solution field, I think I have two papers listed in the dealii publication list), and my new (and more complex) matrix-free code is actually much faster than the old one, even without multigrid (the old one was using AMG, so maybe not optimal)!

Happy to hear!

> So probably the problem is that I do not use make_periodicity_constraints on the level_constraints for the level matrix operators, but I cannot figure out how I should specify them. What I find weird is that an AffineConstraints object containing the PBCs is already built internally in MGConstrainedDoFs, so I was hoping that the MatrixFreeOperator will automatically take these into account, but I guess this is not the case? I tried merging mg_constrained_dofs.get_level_constraints with the custom-built level_constraints in the level setup loop, but this does not work (and makes things worse in terms of iterations). Or maybe the loss of performance is due to something else?

I have taken a look at your code and made some fixes (in particular regarding the boundary ids). However, these changes did not help with the convergence. In a second step I replaced the local-smoothing infrastructure with the (new) global-coarsening infrastructure, and that change worked (constant iteration counts of 6; with one fix, see: https://github.com/dealii/dealii/pull/12745). The local-smoothing infrastructure should also work, but I am not that familiar with it. Maybe one of the local-smoothing experts could take a look at the code. It is probably something trivial...
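In case it helps others reading this thread, the core of the change looks roughly like this (a sketch following the step-75 pattern, not the exact attached main.cc; it assumes one DoFHandler, one AffineConstraints and one MatrixFree object per level, stored in the vectors named below):

```cpp
using VectorType = LinearAlgebra::distributed::Vector<double>;

// build a two-level transfer between each pair of consecutive levels
MGLevelObject<MGTwoLevelTransfer<dim, VectorType>> transfers(min_level,
                                                             max_level);
for (unsigned int l = min_level; l < max_level; ++l)
  transfers[l + 1].reinit_geometric_transfer(dof_handlers[l + 1],
                                             dof_handlers[l],
                                             constraints[l + 1],
                                             constraints[l]);

// collect them into the global-coarsening transfer object, which also
// needs to know how to initialize level vectors
MGTransferGlobalCoarsening<dim, VectorType> transfer(
  transfers,
  [&](const unsigned int l, VectorType &vec) {
    matrix_free_objects[l].initialize_dof_vector(vec);
  });
```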

The global-coarsening infrastructure is not a replacement for the local-smoothing infrastructure but an alternative implementation with its own pros and cons. For details, just ping me or take a look at step-75 and the 9.3 release paper!

Hope this helps!

PM

main.cc

Guilhem Poy

Sep 6, 2021, 10:41:43 AM
to deal.II User Group
Hi Peter,

Thanks for taking a look at the code. Indeed, I tried the same merge of constraints as you, but for some reason this does not seem to be the right approach with local smoothing. With global coarsening things seem to be simpler, because you assemble the dof_handlers and constraints the same way for every level (whereas with local smoothing one has to use mg_constrained_dofs).

I am tempted to switch to global coarsening (which your modified step-37 perfectly illustrates, thanks for that!), but I saw one sentence in the documentation which makes me a bit afraid that I will not be able to use it for my full code with a multi-component solution: "Systems involving multiple components of one of these element, as well as, systems with different elements or other elements are currently not implemented" (https://www.dealii.org/current/doxygen/deal.II/classMGTransferGlobalCoarsening.html). So I cannot do this for my FESystem(FE_Q(p), 5)?

Best regards,

Guilhem

peterrum

Sep 6, 2021, 12:28:05 PM
to deal.II User Group
> I tried the same merge of constraints as you but for some reason, this does not seem to be the right approach with local smoothing.

It should somehow work. Let's see if we get some input from others!

> For global coarsening things seem to be simpler because you assemble dof_handlers and constraints the same way for every level (whereas with local one has to use mg_constrained_dofs).

Exactly. The "magic" happens inside of the transfer operator.

> I am tempted to switch to global coarsening (which your modified step-37 perfectly illustrates, thanks for that!), but I saw one sentence in the doc which makes me a bit afraid that I will not be able to do it for my full code with multi-component solution: Systems involving multiple components of one of these element, as well as, systems with different elements or other elements are currently not implemented (https://www.dealii.org/current/doxygen/deal.II/classMGTransferGlobalCoarsening.html). So I cannot do this for my FESystem(FE_Q(p), 5) ?

Actually, it does. The documentation was just a bit outdated: https://github.com/dealii/dealii/pull/12746

PM

Guilhem Poy

Sep 6, 2021, 2:20:28 PM
to deal.II User Group
That is great to know, I will be sure to check how global coarsening works out in my code, thanks!