Petrov-Galerkin stabilization approach to solve Navier-Stokes equations

Jie Cheng

Oct 3, 2017, 11:30:04 PM
to deal.II User Group
Hi everyone

I am working on an incompressible Navier-Stokes solver which may later be extended to a slightly compressible solver. The flow of interest can sometimes be convection-dominated, so I would like to use a Petrov-Galerkin stabilization method, specifically the streamline-upwind Petrov-Galerkin (SUPG) method.
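
To be concrete, the stabilization I have in mind adds element-wise, residual-weighted terms of roughly the following form to the Galerkin formulation (a sketch; tau_K is a cell-wise stabilization parameter):

  \sum_K tau_K ( u_h \cdot \nabla v_h ,
      \partial_t u_h + (u_h \cdot \nabla) u_h - nu \Delta u_h + \nabla p_h - f )_K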

There are already a few tutorials on solving the Navier-Stokes equations, but none of them uses a stabilization method. I started learning deal.II a month ago and I still don't feel comfortable writing such a complicated program from scratch, so I wonder if anyone has done something similar. Any suggestions on how to solve the resulting matrix system of the SUPG method are also welcome.

Thank you
Jie Cheng

Wolfgang Bangerth

Oct 3, 2017, 11:37:34 PM
to dea...@googlegroups.com, Jie Cheng

Jie,
I think you want to look at this code gallery program:
https://dealii.org/developer/doxygen/deal.II/code_gallery_two_phase_flow.html

It's a large program, so it may not be easy to read through, but I am pretty
sure that it uses stabilization for the N-S equations.

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Jie Cheng

Oct 4, 2017, 10:17:32 AM
to deal.II User Group
Hi Wolfgang

It seems that the NS solver in this gallery code is based on a projection method. Continuous Galerkin finite elements with high-order stabilization are used for the level set solver, though.

How hard would it be to first extend step-57 to be non-steady, and then add stabilization terms? My main concern is whether the Newton iteration + FGMRES approach remains the same.

Thank you
Jie

Wolfgang Bangerth

Oct 4, 2017, 6:24:25 PM
to dea...@googlegroups.com

> How hard would it be to first extend step-57 to be non-steady, and then
> add stabilization terms? My main concern is whether the Newton iteration
> + FGMRES approach remains the same.

I don't know how much work it will be to add time dependence, but if you
treat the nonlinearity implicitly, the time-discretized model will remain
nonlinear and will need to be solved with a Newton scheme or similar. The
appropriate solver for that system then remains FGMRES.
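
Schematically, one such Newton solve could look like the following
sketch; SolverControl, SolverFGMRES and BlockVector are actual deal.II
classes, while the matrix, the vectors, the preconditioner and the
assembly callback are placeholders a concrete program would have to
provide (this is not step-57 code):

#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_gmres.h> // also declares SolverFGMRES

#include <functional>

using namespace dealii;

template <class Matrix, class Preconditioner>
void newton_solve(Matrix &                     jacobian,
                  BlockVector<double> &        solution,
                  BlockVector<double> &        residual,
                  const Preconditioner &       preconditioner,
                  const std::function<void()> &assemble_system,
                  const double                 tolerance = 1e-10)
{
  BlockVector<double> update(solution);

  for (unsigned int step = 0; step < 15; ++step)
    {
      // Rebuild the Jacobian and the nonlinear residual R(u) at the
      // current iterate; this callback stands in for the program's
      // actual assembly routine.
      assemble_system();
      if (residual.l2_norm() < tolerance)
        break;

      // Linear solve J * update = R(u); FGMRES tolerates a
      // preconditioner that changes between iterations, e.g. one
      // containing inner iterative solves.
      SolverControl control(1000, 1e-4 * residual.l2_norm());
      SolverFGMRES<BlockVector<double>> fgmres(control);
      fgmres.solve(jacobian, update, residual, preconditioner);

      // Newton update: u <- u - J^{-1} R(u) (full step, no line search).
      solution -= update;
    }
}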

Jie Cheng

Oct 8, 2017, 12:15:42 AM
to deal.II User Group
Hi Wolfgang

I do plan to use Newton's iteration + FGMRES. Based on your experience, does the preconditioner remain the same?

Thanks
Jie

Jie Cheng

Oct 20, 2017, 12:46:02 AM
to deal.II User Group
Hi Wolfgang
 
 but if you treat the nonlinearity implicitly, the time-discretized model will remain
nonlinear and will need to be solved with a Newton scheme or similar. The
appropriate solver for that system then remains FGMRES.

  Thank you for your hints on my previous question. Now I have extended step-57 to be time-dependent. I added a u_t term which is approximated using backward Euler, and all the remaining terms are treated implicitly. Newton iteration is still used at every time step. The system becomes

  (A + M/dt) U = F - (\delta v, (u^{k+1} - u^k)/dt)_\Omega,   B U = P,

which is similar to what we have in step-57 except that the (0,0) block of the LHS and the (0) block of the RHS are changed.

  I'm still using the same preconditioner developed in step-57. So far it works for the simple case of lid-driven cavity flow. My questions are:
  1. It doesn't make much sense to keep using the preconditioner developed for the steady NSE here; are there any simple ways to adapt it for time-dependent problems?
  2. Instead of working on better preconditioners, can I simply treat the convection term explicitly (using the value at the last time step)? The reason is that doing so would make the LHS of the equation symmetric (I think), which should be much easier to solve.

  What is the correct direction to go (for engineering applications where convection is significant)?
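
  For question 2, the semi-implicit system I have in mind would look roughly like this (a sketch, with the convection term N(u^n) u^n evaluated at the old time step and nu the viscosity):

  (M/dt + nu K) u^{n+1} + B^T p^{n+1} = M u^n / dt + F - N(u^n) u^n,
  B u^{n+1} = 0,

  so the velocity block M/dt + nu K would be symmetric positive definite.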

Thanks
Jie
   

Martin Kronbichler

Oct 20, 2017, 5:42:43 AM
to dea...@googlegroups.com
Hi Jie,

> Thank you for your hints on my previous question. Now I have
> extended step-57 to be time-dependent. I added a u_t term which is
> approximated using backward Euler, and all the remaining terms are
> treated implicitly. Newton iteration is still used at every time step.
> The system becomes
> (A + M/dt) U = F - (\delta v, (u^{k+1} - u^k)/dt)_\Omega,  B U = P,
> which is similar to what we have in step-57 except that the (0,0)
> block of the LHS and the (0) block of the RHS are changed.
>
> I'm still using the same preconditioner developed in step-57. So far
> it works for the simple case of lid-driven cavity flow. My questions are:
> 1. It doesn't make much sense to keep using the preconditioner
> developed for the steady NSE here; are there any simple ways to adapt
> it for time-dependent problems?

You should check the literature. In case your time step is not too
large, the mass matrix will give a significant contribution to the
system matrix. This is good because one knows how to approximate the
Schur complement for a velocity mass matrix: it is simply a pressure
Poisson matrix, with Dirichlet conditions on that operator where the
velocity has Neumann conditions, and Neumann conditions where you
prescribe the flow. There are several variants of preconditioners in
the literature. One is Cahouet-Chabard (1988), which simply uses the
inverse Poisson matrix and the inverse mass matrix to take the mass
and viscous contributions in the velocity into account. I have used it
in a recent publication: https://doi.org/10.1177/1094342016671790 -
check also the literature cited there; this mailing list is not the
right place to ask for a literature review.
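
In formulas, and up to the exact scaling conventions, the
Cahouet-Chabard approximation of the inverse Schur complement reads

  S^{-1} \approx (1/dt) K_p^{-1} + nu M_p^{-1},

with K_p the pressure Poisson matrix (carrying the boundary conditions
described above) and M_p the pressure mass matrix.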

Then, there are also pressure reaction-convection-diffusion operators
that also take convection into account. I have not seen them used much
for the full block system in the time-dependent case, probably because
the range where they are really better than Cahouet-Chabard, yet the
Schur complement approximation stays good enough, is quite narrow -
especially once you also want a good yet cheap approximation of the
velocity matrix.

> 2. Instead of working on better preconditioners, can I simply treat
> the convection term explicitly (using the value at the last time
> step)? The reason is that doing so would make the LHS of the equation
> symmetric (I think), which should be much easier to solve.
Yes, this is Cahouet-Chabard and indeed done very efficiently. But you
get a CFL limit, which may or may not be limiting for you.
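
Roughly, the advective CFL limit means

  dt <= C h / |u|_max,

with h the local mesh size and a constant C depending on the time
stepping scheme and the polynomial degree.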

Best,
Martin

Wolfgang Bangerth

Oct 20, 2017, 8:34:12 AM
to dea...@googlegroups.com
On 10/20/2017 03:42 AM, Martin Kronbichler wrote:
> Then, there are also pressure reaction-convection-diffusion operators
> that also take convection into account. I have not seen them used much
> for the full block system in the time-dependent case, probably because
> the range where they are really better than Cahouet-Chabard, yet the
> Schur complement approximation stays good enough, is quite narrow -
> especially once you also want a good yet cheap approximation of the
> velocity matrix.

I think Timo Heister's PhD thesis has an overview.

Wolfgang Bangerth

Oct 20, 2017, 8:35:20 AM
to dea...@googlegroups.com
On 10/19/2017 10:46 PM, Jie Cheng wrote:
> but if you treat the nonlinearity implicitly, the time-discretized
> model will remain nonlinear and will need to be solved with a Newton
> scheme or similar. The appropriate solver for that system then
> remains FGMRES.
>
>
> Thank you for your hints on my previous question. Now I have extended
> step-57 to be time-dependent. I added a u_t term which is approximated
> using backward Euler, and all the remaining terms are treated
> implicitly. Newton iteration is still used at every time step. The
> system becomes
> (A + M/dt) U = F - (\delta v, (u^{k+1} - u^k)/dt)_\Omega,  B U = P,
> which is similar to what we have in step-57 except that the (0,0)
> block of the LHS and the (0) block of the RHS are changed.

Jie -- I think this would be a very interesting program for others as well.
Would you be interested in contributing it to the code gallery?

Best
Wolfgang

Jie Cheng

Oct 20, 2017, 12:36:07 PM
to deal.II User Group
Hi Martin and Wolfgang

  Thank you very much for the helpful comments and references. I'll start to read the works you mentioned.
 
Jie -- I think this would be a very interesting program for others as well.
Would you be interested in contributing it to the code gallery?

  I'd love to contribute after I work it out!

Jie 

Wolfgang Bangerth

Oct 20, 2017, 12:37:27 PM
to dea...@googlegroups.com
On 10/20/2017 10:36 AM, Jie Cheng wrote:
> Jie -- I think this would be a very interesting program for others
> as well.
> Would you be interested in contributing it to the code gallery?
>
>
> I'd love to contribute after I work it out!

Jie,
that would be fantastic -- please do let us know if you want help with
making the code available to others as well! In general, the
requirements for the code gallery are not very high, so it should not be
too much work once you have it running!

Best & thanks in advance

SebG

Nov 1, 2018, 5:58:23 PM
to deal.II User Group
Dear Jie, dear deal.II users,

I am working on the Cahouet-Chabard preconditioner in the context of buoyancy-driven flow problems. Somehow my preconditioner does not work very well: the number of iterations depends on the time step, which should not be the case, and with more than 50 iterations it is also quite large. I would like to ask if you or someone else could provide some details of your implementation or give some tips.

The velocity block of the system matrix is given by

  alpha / timestep * M + gamma * c * K,

where M and K are the velocity mass and stiffness matrices, the scalars alpha and gamma are related to the time discretization, and c is a dimensionless parameter. If I am not wrong, the Cahouet-Chabard Schur complement approximation is given by

  S^{-1} = alpha / timestep * K_p^{-1} + gamma * c * M_p^{-1}.

I am assembling the pressure stiffness and pressure mass matrices explicitly. However, my problem is a pure Dirichlet problem w.r.t. the velocity, which in turn means that the preconditioner uses Neumann BCs. Therefore, I am constraining one pressure dof, which regularizes the pressure Laplace matrix; this approach is discussed in another thread. For this reason I have two ConstraintMatrix objects, one for the system matrix and one for the preconditioner.
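
For reference, the pinning of the single pressure dof looks roughly like this (a sketch; the dof index passed in is an assumption for illustration, any valid pressure dof would do):

#include <deal.II/lac/constraint_matrix.h>

using namespace dealii;

// Build the constraints used only on the preconditioner side: fixing
// one pressure degree of freedom to zero regularizes the otherwise
// singular pure-Neumann pressure Laplace operator.
void make_preconditioner_constraints(const types::global_dof_index pressure_dof,
                                     ConstraintMatrix &            constraints)
{
  constraints.clear();
  constraints.add_line(pressure_dof); // a line without entries fixes the dof to zero
  constraints.close();
}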

I also attached my code, which is based on step-32 but runs in serial.

Best wishes,
Sebastian
buoyantfluid-solver.cc
default_parameters.prm

Jie Cheng

Nov 1, 2018, 8:30:23 PM
to dea...@googlegroups.com
Hi Sebastian,

I think what you described is correct; I cannot see why it would not work. But I recommend reading Timo's dissertation. Also, for the implementation, you can check out my code for the implicit scheme and the explicit-implicit scheme.

Best,
Jie

Sebastian Glane

Nov 9, 2018, 6:42:09 PM
to dea...@googlegroups.com
Hi Jie,

thanks a lot. The link to your repository was very helpful. I could significantly reduce the number of iterations using your way of computing the pressure Laplacian through

  K_p = B diag(velocity_mass_matrix)^{-1} B^T.
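
In code, this product can be formed with deal.II's SparseMatrix::mmult, which computes C = A * diag(V) * B for a given scaling vector V. The following is a sketch; the matrix names are placeholders for those in my program, and pressure_laplace_matrix must already be attached to a sparsity pattern object, which mmult then rebuilds:

#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Form K_p = B diag(M_u)^{-1} B^T, where B = system_matrix.block(1,0)
// and B^T = system_matrix.block(0,1) in the usual saddle-point layout.
void build_pressure_laplace(const BlockSparseMatrix<double> &system_matrix,
                            const SparseMatrix<double> &velocity_mass_matrix,
                            SparseMatrix<double> &pressure_laplace_matrix)
{
  // Inverse of the lumped (diagonal) velocity mass matrix.
  Vector<double> inverse_diagonal(velocity_mass_matrix.m());
  for (unsigned int i = 0; i < inverse_diagonal.size(); ++i)
    inverse_diagonal(i) = 1.0 / velocity_mass_matrix.diag_element(i);

  // C = A * diag(V) * B; the defaulted last argument of mmult rebuilds
  // the sparsity pattern of the result.
  system_matrix.block(1, 0).mmult(pressure_laplace_matrix,
                                  system_matrix.block(0, 1),
                                  inverse_diagonal);
}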

I have tested the dependence of the preconditioner on the time step and the mesh size and observed that it is likely to be mesh independent, i.e. if the mesh is refined the number of outer (GMRES) iterations does not change, which is great.

However, there is a dependence on the size of the time step. Both variants, the explicitly assembled K_p and your B diag(M)^{-1} B^T version, show an increase in the number of iterations:

time step | iterations with B B^T | iterations with assembly
----------+-----------------------+--------------------------
1e-5      | 6 to 7                | 6 or 16 (ringing)
1e-4      | 8 to 9                | 18 to 19
1e-3      | 10 to 11              | 21 to 22
1e-2      | 12 to 14              | 24 to 27
1e-1      | 23 to 24              | 51 to 54

Did you make the same or similar observations? I have read some of the literature, and as far as I understand it, the preconditioner should be independent of the step size and the viscosity.

Another point I would like to raise is related to the mesh independence, which does not quite hold on closer inspection. In every step, the solve with the pressure Laplace matrix (K_p) requires by far the largest number of iterations. I am monitoring the iterations of all subsolvers:

     7 GMRES iterations for stokes system,  (A: 8, Kp: 682, Mp: 60)

For A (the velocity block), one cycle of TrilinosWrappers::PreconditionAMG is used. Mp is preconditioned using PreconditionSSOR, and for Kp I am using SparseILU. For the example quoted, the pressure space has 4800 dofs. I am wondering why the SparseILU, at roughly 100 iterations per preconditioner application, does such a bad job in this case.
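
For completeness, the sub-preconditioners are set up roughly as follows (a sketch; the matrix names and the SSOR relaxation parameter are placeholders for the values in my program):

#include <deal.II/lac/precondition.h>
#include <deal.II/lac/sparse_ilu.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/trilinos_precondition.h>

using namespace dealii;

void setup_sub_preconditioners(const SparseMatrix<double> &velocity_block,   // A
                               const SparseMatrix<double> &pressure_mass,    // Mp
                               const SparseMatrix<double> &pressure_laplace, // Kp
                               TrilinosWrappers::PreconditionAMG &amg_A,
                               PreconditionSSOR<SparseMatrix<double>> &ssor_Mp,
                               SparseILU<double> &ilu_Kp)
{
  // One AMG V-cycle per application for the velocity block.
  amg_A.initialize(velocity_block);

  // SSOR with relaxation 1.2 (an assumed value) for the pressure mass matrix.
  ssor_Mp.initialize(pressure_mass, 1.2);

  // Incomplete LU factorization with default fill-in for the pressure
  // Laplace matrix.
  ilu_Kp.initialize(pressure_laplace);
}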

Of course, I don't expect you to answer all of this, but it would be great if you could make some comments about your experience with this preconditioner. By the way, I am also using an IMEX scheme in my solver, see https://github.com/sebglane/BoussinesqFluidSolver/tree/modular_version .

Best wishes,

Sebastian

Jie Cheng

Nov 28, 2018, 10:09:50 AM
to dea...@googlegroups.com
Hi Sebastian,

Sorry for the late reply. I have made similar observations: the effectiveness of the preconditioner depends on the time step size and the viscosity, and I am not surprised, because this preconditioner consists of three different parts, if I remember correctly, and these two parameters determine the "weights" of the two easy parts, the diffusion term and the mass term.

Pressure Laplace question: I don't have a good way to deal with that. If you do, please let me know.

For the other sub-preconditioners: I played with AMG before but never found a set of parameters that gives good performance in general. Mp is easy, with or without a preconditioner. SparseILU is a bad idea when you use adaptive meshes: for some unknown reason, it always crashes my program when I do local refinement. Also, don't use BlockJacobi when you go parallel; it performs badly as well.

Thank you,
Jie