SIPDG method for 4th order problem -- uniform refinement good, adaptive refinement trouble.


Sean Carney

Aug 17, 2024, 9:26:50 AM
to deal.II User Group
Hi all,

I recently implemented a symmetric interior penalty DG method for a 4th order problem with an interface.

I took as my "base code" step 74 (thank you for that!) and modified the assemble, estimator, and energy_norm functions to suit my needs.

Unfortunately, I'm seeing some troubling results when doing local refinement. A simple example of the unexpected behavior is shown in the attached images.

Here the analytic solution to my problem, posed on the (2d) unit square, is a piecewise polynomial. It depends only on the horizontal variable $x$: for $x < 1/2$, the solution is $u = 0$, and for $x > 1/2$, the solution equals [quintic polynomial]*[exp(x)].

On a uniform coarse mesh with 4x4 grid cells, the computed solution is already decent--in the eyeball norm, it is reasonable, and the error measured in my "energy" norm (a kind of $H^2$ norm) is on the order of 1e-1.

However, if I refine just one cell in the mesh, the computed solution is obviously incorrect to the eye, and the energy norm error is correspondingly large (~2e1).

Whatever is causing this behavior is, I believe, also causing me to see incorrect rates of convergence after a large number of local refinements. There may be trouble in the "estimate" and "energy_norm" functions, but the attached images suggest there is definitely trouble in the "assemble" function (I'm using a direct solver, so nothing here is related to iterative linear solvers).

In contrast, if I refine globally, I get nice convergence rates--consistent with (or slightly better than) what I expect from theory.

Additionally, I am, of course, observing nice rates from the step-74 code upon local refinement, both for the L-shaped domain with a re-entrant corner and for the smooth problem on the unit square.

I have been slowly losing my mind trying to figure out what sort of error I've introduced to make things not work, but can't find anything--for example, all of the logic for working with jumps and averages across faces when hanging nodes are present is, I think, handled by the MeshWorker framework.

Does anyone have any suggestions for things to try? I greatly appreciate any input!

Thank you--
--Sean
bad--locally-refined.png
good--base4x4grid.png

Wolfgang Bangerth

Aug 17, 2024, 8:10:24 PM
to dea...@googlegroups.com

Sean:
It is hard to tell what exactly is going wrong, but before trying to bark up a
tree that is perhaps the wrong one, here are two questions:

* For the case of adaptive refinement, the solution looks discontinuous also
at edges without refinement. Is that reasonable? What's the element you are using?

* One of the things worth keeping in mind is that what you are visualizing is
a piecewise linear representation of the solution. However, I suspect you are
using a higher polynomial finite element. In other words, what you see in
Visit is not what the solution *actually* is. Take a look at the documentation
of DataOut::build_patches() for a discussion of how this can be addressed by
using subdivisions of the mesh to get a closer representation of what you are
computing.

Best
W.


On 8/17/24 07:26, Sean Carney wrote:
> [...]

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/


Sean Carney

Aug 18, 2024, 11:02:35 PM
to deal.II User Group
Wolfgang,

Thanks for your reply, and also for being cautious about which tree to bark up :-)


* For the case of adaptive refinement, the solution looks discontinuous also
at edges without refinement. Is that reasonable? What's the element you are using?

I am currently using 2nd order FE_DGQ elements. Unfortunately, no, the solution certainly should not be discontinuous
at any point in the domain. 

My sense is that something is incorrect in my program's assemble_system function, which in turn causes the computed
solution to be wrong. The picture from the adaptive refinement case was simply intended to support this point.

To reiterate, what is incorrect is not an issue in the case of uniform refinement--
if I take the mesh with 4x4 cells and refine to 8x8, the energy norm error decreases accordingly. This continues with repeated 
refinements; I see reasonable convergence rates as I drive $h$ to zero.

Short of posting my assemble_system code, does this behavior sound like anything you have seen before?
I would be grateful for any other pointers on where to look.


* One of the things worth keeping in mind is that what you are visualizing is
a piecewise linear representation of the solution. However, I suspect you are
using a higher polynomial finite element. In other words, what you see in
Visit is not what the solution *actually* is. Take a look at the documentation
of DataOut::build_patches() for a discussion of how this can be addressed by
using subdivisions of the mesh to get a closer representation of what you are
computing.

This is good to keep in mind, thanks for pointing it out. At the moment, however, I think the situation is a bit more
complicated, for the reasons mentioned above and in my first message.

Best regards,
-Sean

Wolfgang Bangerth

Aug 18, 2024, 11:33:51 PM
to dea...@googlegroups.com
On 8/18/24 21:02, Sean Carney wrote:
>
> * For the case of adaptive refinement, the solution looks discontinuous also
> at edges without refinement. Is that reasonable? What's the element you
> are using?
>
>
> I am currently using 2nd order FE_DGQ elements. Unfortunately, no, the
> solution certainly should not be discontinuous
> at any point in the domain.

But if the element is discontinuous, how do you enforce that the solution
"should not be discontinuous"?


> My sense is that something is incorrect in my program's assemble_system
> function, which in turn causes the computed
> solution to be wrong. The picture from the adaptive refinement case was simply
> intended to support this point.
>
> To reiterate, what is incorrect is not an issue in the case of uniform
> refinement--
> if I take the mesh with 4x4 cells and refine to 8x8, the energy norm error
> decreases accordingly. This continues with repeated
> refinements; I see reasonable convergence rates as I drive $h$ to zero.
>
> Short of posting my assemble_system code, does this behavior sound like
> anything you have seen before?
> Would be grateful for any other places to look.

There's a bug somewhere, as there always is :-) I'm sure many of us have seen
the sort of thing you're seeing, but what specifically the problem is is of
course hard to tell without actually knowing both the method and the code :-(

The only suggestion I have is to double-check that all lifting or
stabilization terms you have use a stabilization factor that is large enough
for the small edge size you have on the refined side, not just for the large edge.

Best
W.

Sean Carney

Aug 27, 2024, 5:06:15 PM
to deal.II User Group
Wolfgang,

My apologies for the delayed response to your helpful suggestions. I have been in the process of moving this last week, and only
now am I finding some time to come back to my 4th order interface problem. :-)


But if the element is discontinuous, how do you enforce that the solution
"should not be discontinuous"?

I suppose this is a rhetorical question, but, to be clear, continuity of the solution is of course enforced weakly by incorporating
into the bilinear form terms that penalize jumps in the solution across quadrilateral edges.

 
The only suggestion I have is to double-check that all lifting or
stabilization terms you have use a stabilization factor that is large enough
for the small edge size you have on the refined side, not just for the large edge.


In the end, it turns out that the penalty parameter (that is weakly enforcing continuity of the discrete solution) was simply not large
enough.

This is a little embarrassing to admit, but alas, it is indeed the case. This outcome of course does not preclude the existence of
bugs in the code, but it is at least another piece of evidence that the method is implemented properly--it just needs to be used properly. =D

Actually, I did play around with the value of the parameter: I tried setting it equal to $p(p+1)$, as in step-47, and I tried
setting it equal to 10, as in a numerical example described in the literature. In the end, cranking the penalty parameter up to 50
was sufficient to observe results consistent with what we would expect from theory.

Of course, the ambiguity that comes with introducing a penalty parameter is a standard "complaint" one might expect about
interior penalty methods. I guess this experience shows there is some merit to these complaints.

Anyhow, if it can help anyone in the future who wants to implement adaptive mesh refinement with an interior penalty method:
please don't be afraid to experiment liberally with the numerical value of your penalty parameter.

Best regards,
-Sean

 

Jan Philipp Thiele

Aug 28, 2024, 6:37:29 AM
to deal.II User Group
Good to know it works now!
Out of curiosity:
Does the penalization in your jump terms include the edge size?
The formulation of SIP for Poisson that I know has a factor of $\eta/h_F$ with $\eta$ being the penalty parameter.
Consequently, smaller edges are penalized more which could be worth trying.
Best,
Philipp

Sean Carney

Aug 29, 2024, 5:15:12 PM
to deal.II User Group
Hi Philipp,

Yes, the formulation we're using indeed has a factor of the form $\eta/h_F^q$, where $q$ is an integer. 

Since we have a fourth order equation, loosely speaking our solution is expected to be in $H^2$,
so we have penalty terms involving jumps in both the solution and the gradient of the solution across faces.
From dimensional considerations, one can see that we need $q = 3$ for the terms involving jumps of the
solution, while $q = 1$ for terms involving jumps of the solution gradient.

I spent many, many hours trying to ensure that the correct $h_F$ was being used upon anisotropic (local) 
refinement, but it turns out deal.II indeed handles it correctly...it's simply that $\eta$ matters a lot too. =D

Best regards,
-Sean