Mean Value Constraints

Stephen

Dec 26, 2020, 12:43:21 AM
to deal.II User Group
Hi all,

Does anyone here have any experience applying mean value constraints (specifically with periodic boundary conditions)? I'm having some trouble. As far as I can tell, there are two approaches, both of which I have been able to get working.

The first approach is to add the mean value constraint straight into the constraint matrix. I got this working and was able to get output, but the condition seems to make my matrices dense, or at the very least require a huge amount of memory, which hinders this approach for the fine spatial meshes I require for this application.

The other approach is to apply only the periodic boundary conditions to the matrices, try to solve the system anyway and, because UMFPACK is awesome, somehow get a solution out of all of this (which is of course only defined up to a constant). The problem here is that the solution I am getting is of order 10^12! When I postprocess and subtract the mean value, the result looks correct, but I lose a lot of accuracy in the process; it does, however, work for the fine meshes that I require. Does anyone have any ideas, or do I just have to live with the reduced accuracy here? Thanks!

Konrad Simon

Dec 26, 2020, 5:06:36 AM
to deal.II User Group
Hi,

On Saturday, December 26, 2020 at 6:43:21 AM UTC+1 smetca...@gmail.com wrote:
Hi all,

Does anyone here have any experience applying mean value constraints (specifically with periodic boundary conditions)? I'm having some trouble. As far as I can tell, there are two approaches, both of which I have been able to get working. The first approach is to add the mean value constraint straight into the constraint matrix. I got this working and was able to get output, but the condition seems to make my matrices dense, or at the very least require a huge amount of memory, which hinders this approach for the fine spatial meshes I require for this application.

If you think of the mean value constraint as an additional row, it looks like (for the Poisson equation) u_0 + u_1 + ... + u_n = 0 and hence couples all DoFs. This is mathematically fine but not efficient, since the constraint is non-local. What one can also do is constrain a single DoF to a specific value (in elasticity an analogous trick can be used to remove rigid body motions). But think about your solution variable: if it lives in the Sobolev space H^1, then point evaluations are not well-defined in two or more space dimensions. Similarly, the pressure in a mechanical or fluid problem, for example, is often only in L^2, where point evaluations do not make sense at all.
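For illustration, here is a minimal, untested sketch of what that single constraint row could look like in deal.II. The helper name add_mean_value_constraint is made up, it constrains the plain sum of nodal values rather than the true L^2 mean (which would need weights from the mass matrix), and the caller still has to close() the constraints object afterwards.

    #include <deal.II/base/types.h>
    #include <deal.II/lac/affine_constraints.h>

    // Hypothetical helper: constrain DoF 0 to equal the negative sum of all
    // other DoFs, so that u_0 + u_1 + ... + u_n = 0. This one constraint line
    // carries an entry for every other DoF, which is exactly the non-locality
    // described above and the reason the condensed matrix becomes (nearly) dense.
    void add_mean_value_constraint(dealii::AffineConstraints<double>    &constraints,
                                   const dealii::types::global_dof_index n_dofs)
    {
      constraints.add_line(0);
      for (dealii::types::global_dof_index i = 1; i < n_dofs; ++i)
        constraints.add_entry(0, i, -1.0);
    }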
 
The other approach is to apply only the periodic boundary conditions to the matrices, try to solve the system anyway and, because UMFPACK is awesome, somehow get a solution out of all of this (which is of course only defined up to a constant). The problem here is that the solution I am getting is of order 10^12! When I postprocess and subtract the mean value, the result looks correct, but I lose a lot of accuracy in the process; it does, however, work for the fine meshes that I require. Does anyone have any ideas, or do I just have to live with the reduced accuracy here? Thanks!

This is of course like solving an ill-posed (singular) problem, but some direct solvers are smart enough to deal with zero pivots (redundant rows).

Both approaches above do work, but only for small, non-parallel problems. Here is what you can do instead:

Many people are not aware that even iterative solvers can deal with singular problems if the right-hand side is in the orthogonal complement of the kernel. For the mean value constraint, do the following:

1. Set up boundary conditions in your constraints object if necessary.
2. Assemble the system matrix and right-hand side and apply your constraints.
3. Create a class that wraps your system matrix; this class must provide a vmult() function (see the sketch after this list).
4. Before starting the iterative solver, remove the mean value from the right-hand side (see step-57).
5. In the vmult() function remove the mean value (i.e., project the rhs on the orthogonal complement of the kernel of the kernel).
6. Use the wrapped matrix in the solver.
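To make steps 3 to 6 concrete, here is a minimal, untested sketch for a serial deal.II SparseMatrix and Vector. The class name MeanValueFilteredMatrix, the function solve_filtered and the solver tolerance are illustrative choices rather than code from any tutorial, and the sketch assumes the discrete kernel is simply the constant vector.

    #include <deal.II/lac/precondition.h>
    #include <deal.II/lac/solver_cg.h>
    #include <deal.II/lac/solver_control.h>
    #include <deal.II/lac/sparse_matrix.h>
    #include <deal.II/lac/vector.h>

    // Step 3: wrap the system matrix. Every matrix-vector product is followed by
    // a projection onto the orthogonal complement of the constant vector, i.e.
    // the mean of the result is subtracted.
    template <typename MatrixType>
    class MeanValueFilteredMatrix
    {
    public:
      explicit MeanValueFilteredMatrix(const MatrixType &m)
        : matrix(&m)
      {}

      void vmult(dealii::Vector<double> &dst, const dealii::Vector<double> &src) const
      {
        matrix->vmult(dst, src);
        dst.add(-dst.mean_value()); // step 5: remove the mean after multiplying
      }

    private:
      const MatrixType *matrix;
    };

    // Steps 4 and 6: project the right-hand side, then hand the wrapped matrix
    // to the iterative solver. Constraints are assumed to have been applied already.
    void solve_filtered(const dealii::SparseMatrix<double> &system_matrix,
                        dealii::Vector<double>             &solution,
                        dealii::Vector<double>             &system_rhs)
    {
      system_rhs.add(-system_rhs.mean_value()); // step 4

      MeanValueFilteredMatrix<dealii::SparseMatrix<double>> filtered(system_matrix);

      dealii::SolverControl                    control(1000, 1e-12 * system_rhs.l2_norm());
      dealii::SolverCG<dealii::Vector<double>> cg(control);
      cg.solve(filtered, solution, system_rhs, dealii::PreconditionIdentity()); // step 6

      solution.add(-solution.mean_value()); // fix the free constant in the solution
    }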

Hope that helps.

 Best,
Konrad

Konrad Simon

Dec 26, 2020, 8:45:04 AM
to deal.II User Group
A small correction: I wrote "In the vmult() function remove the mean value (i.e., project the rhs on the orthogonal complement of the kernel of the kernel)", but I meant "In the vmult() function remove the mean value after you multiply (i.e., project the result of the multiplication onto the orthogonal complement of the kernel of the system matrix)".

The paper by Bochev and Lehoucq ("On the Finite Element Solution of the Pure Neumann Problem", SIAM Review, 2005) describes the technique for a toy model (Poisson with purely natural BCs). The idea was not new at that time, but to the best of my knowledge they were the first to write it down properly. The technique I mentioned above works for other kernels of system matrices as well. In linearized elasticity, for example, one may need to remove not the mean value but rigid body motions (not truly rigid motions, but infinitesimal rotations, since invariance under finite rotations cannot be linear). Such motions do not affect the stress but constitute nontrivial displacements. In this case, too, one can project the rhs onto the orthogonal complement of the kernel of the system matrix. Here the kernel has dimension 3 in 2D (one infinitesimal rotation, i.e. the position vector rotated by 90 degrees, and two independent translations) and dimension 6 in 3D. Once you project the rhs, a Krylov solver can deal with your singular problem.
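For a kernel with more than one dimension the projection just loops over a basis. A short, untested sketch (the function name is made up, and it assumes the kernel vectors, e.g. discretized rigid body modes, have already been l2-orthonormalized, for instance by Gram-Schmidt):

    #include <deal.II/lac/vector.h>

    #include <vector>

    // Remove from 'rhs' its components in the span of the given l2-orthonormal
    // kernel vectors: rhs -= (k, rhs) * k for each basis vector k.
    void project_out_kernel(dealii::Vector<double>                    &rhs,
                            const std::vector<dealii::Vector<double>> &kernel_basis)
    {
      for (const auto &k : kernel_basis)
        rhs.add(-(k * rhs), k);
    }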

Cheers,
Konrad 

Stephen

Dec 26, 2020, 11:28:38 AM
to deal.II User Group
Thanks, I'll give it a try!

Wolfgang Bangerth

Dec 28, 2020, 1:23:19 PM
to dea...@googlegroups.com
On 12/26/20 3:06 AM, Konrad Simon wrote:
> What one can also do is constrain a single DoF to a specific value (in
> elasticity an analogous trick can be used to remove rigid body motions). But
> think about your solution variable: if it lives in the Sobolev space H^1,
> then point evaluations are not well-defined in two or more space dimensions.
> Similarly, the pressure in a mechanical or fluid problem, for example, is
> often only in L^2, where point evaluations do not make sense at all.

Right, this is the correct approach: Constrain a single degree of freedom to
zero (or any other value you choose) and solve the problem. Then you can
subtract the mean value of the solution *after* solving the linear system.
(See VectorTools::subtract_mean_value and VectorTools::compute_mean_value.)
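A minimal, untested sketch of that workflow; the helper names are made up, DoF 0 is an arbitrary choice, and assembly and solve are omitted:

    #include <deal.II/base/types.h>
    #include <deal.II/lac/affine_constraints.h>
    #include <deal.II/lac/vector.h>
    #include <deal.II/numerics/vector_tools.h>

    // Before assembly: constrain one arbitrary DoF (here DoF 0) to zero.
    void pin_one_dof(dealii::AffineConstraints<double>    &constraints,
                     const dealii::types::global_dof_index dof = 0)
    {
      constraints.add_line(dof); // u_dof = 0; use set_inhomogeneity() for another value
    }

    // After solving: shift the solution so that the mean of its entries is zero.
    void normalize_mean(dealii::Vector<double> &solution)
    {
      dealii::VectorTools::subtract_mean_value(solution);
    }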

If you're uncomfortable with the ill-posedness of taking a point value, you
can also take the mean value along a small segment of the boundary (step-1) or
a small part of the domain. But in practice, this is not necessary and
Konrad's solution is what everyone seems to be doing.

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Corbin Foucart

May 19, 2023, 12:04:26 PM
to deal.II User Group
I'm working on a similar pure Neumann problem with a rank-1 deficiency (pressure known only up to a constant). I've implemented the subspace projection method Konrad mentioned, with good results. However, I'm interested in comparing it to the single DoF constraint method, as well as an alternative penalization method alluded to in the paper by Bochev. I had a few questions:
  1. For a SparseMatrix or ChunkSparseMatrix object, is there a simple way of obtaining the eigenvalues? I know I can rebuild deal.II with ARPACK, but that seems like overkill.
  2. Exactly how would I use an AffineConstraints object to "pin" the degree of freedom? Is it a matter of a single call to AffineConstraints::add_line()? It's unclear to me from the documentation whether this would add an additional "row" to the linear system or replace an existing one (I'd be looking to do the latter).
  3. I've implemented the subspace projection by wrapping the ChunkSparseMatrix class and overriding the vmult() function; however, at the moment I'm forced to copy the original ChunkSparseMatrix object into the wrapped object, because the AffineConstraints::distribute_local_to_global() function doesn't recognize the wrapped matrix: I receive a linker error
libHDG_experimental.so: error: undefined reference to 'void dealii::AffineConstraints::distribute_local_to_global<HDG::LinearSystem::ChunkSparseWrapper, dealii::Vector >(dealii::FullMatrix const&, dealii::Vector const&, std::vector<unsigned int, std::allocator > const&, HDG::LinearSystem::ChunkSparseWrapper&, dealii::Vector&, bool, std::integral_constant<bool, false>) const'

I find that surprising, given that my wrapper class inherits publicly from ChunkSparseMatrix.

Any guidance would be appreciated!
Corbin

Daniel Arndt

May 19, 2023, 1:47:42 PM
to dea...@googlegroups.com
Corbin,

- Using ARPACK might indeed be an option. Do you really need to know the whole spectrum, though?
- Yes, you would just call AffineConstraints::add_line() to constrain a given DoF to zero. Note that it's a no-op if the DoF is already constrained. This doesn't change the size of the linear system.
- You will need to include deal.II/lac/affine_constraints.templates.h; deal.II/lac/affine_constraints.h only contains the declarations but not the definitions (see the snippet below).
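In code, the fix amounts to one extra include (a sketch; the paths follow the deal.II source layout):

    // Declarations only:
    #include <deal.II/lac/affine_constraints.h>
    // Template definitions, needed so that distribute_local_to_global() can be
    // instantiated for a custom (wrapped) matrix type:
    #include <deal.II/lac/affine_constraints.templates.h>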

Best,
Daniel

Corbin Foucart

May 26, 2023, 8:08:13 AM
to deal.II User Group
Thank you very much for the pointers, Daniel!

- Including the templates header did the trick; I no longer need to copy the data over.
- I was able to implement the penalty method and the DoF constraint method; the subspace projection method seems unambiguously the best in terms of accuracy, iteration count, and wall-clock time. I benchmarked all three approaches against a manufactured solution for the pure Neumann problem.
- Re: the spectrum: no, I only need the largest and smallest eigenvalues to get at the conditioning of the system. However, this isn't crucial, as I can just measure how the number of conjugate gradient iterations scales as a proxy.

Cheers,
Corbin
