Inaccurate convergence rate using ParsedConvergenceTable class

Marco Feder

Aug 7, 2021, 12:23:16 PM
to deal.II User Group
Dear all,

I was expanding step-12 using a manufactured solution to check the expected order p in the H^1 norm (and p+1 in the L^2 norm) on a uniformly refined mesh for the DG upwind method. I've used the ParsedConvergenceTable class with a parameter file, as described in the docs. As Rate_key I'm using the DoFs, and as Rate_mode I have reduction_rate_log2.

With p=1 and p=2 everything is fine, but if I set the finite element degree to 3, the H^1 convergence rate deteriorates, as you can see in the attached image.
[attachment: Screenshot 2021-08-07 at 17.56.06.png]
This, however, doesn't happen if I use a classical ConvergenceTable. Namely, I first compute the local error in each cell, and then the global error in the classical way:

VectorTools::integrate_difference(mapping,
                                  dof_handler,
                                  solution,
                                  Solution<dim>(),
                                  H1_error_per_cell,
                                  QGauss<dim>(fe->tensor_degree() + 1),
                                  VectorTools::H1_norm);

// assuming I also provided the gradient method for the Solution<dim> class
const double H1_error =
  VectorTools::compute_global_error(triangulation,
                                    H1_error_per_cell,
                                    VectorTools::H1_norm);



Does anyone have any idea why this is happening? My guess was that, while computing the H^1 semi-norm, the ParsedConvergenceTable class does some approximation to compute the gradient from the exact solution's expression, and that this could be the source of the issue. Conversely, in the "ConvergenceTable" approach I explicitly define the gradient of the exact solution in the Solution<dim> class.

Best,
Marco


Luca Heltai

Aug 8, 2021, 12:49:46 PM
to dea...@googlegroups.com
Ciao Marco,

> I was expanding step-12 using a manufactured solution to check the order p in H^1 (and p+1 in L^2) norm on a uniformly refined mesh for the DG upwind method. I've used the ParsedConvergenceTable with a parameter file as described in the docs. As Rate_key I'm using the DoFs, while as Rate_mode I have reduction_rate_log2.
>
> With p=1 and p=2 everything is fine. But if I set the finite element degree to 3, then the H^1 convergence rate decreases, as you can see in the attached image.
> <Screenshot 2021-08-07 at 17.56.06.png>

Is it possible that you are using different quadrature rules in the two cases? Your image shows a deterioration of the error on the order of 1e-8 for H1, and 1e-12 for L2, which is very close to machine precision.

Internally the parsed convergence table does the exact same thing you wrote explicitly. (If you check the source code, you’ll see that it calls integrate_difference and compute_global_error for each error type you specify in the parameter file).

> This, however, doesn't happen if I use a classical ConvergenceTable. Namely, I first compute the local error in each cell, and then the global error in the classical way:
>
> VectorTools::integrate_difference(mapping, dof_handler, solution, Solution<dim>(), H1_error_per_cell, QGauss<dim>(fe->tensor_degree() + 1), VectorTools::H1_norm);
>
> const double H1_error = VectorTools::compute_global_error(triangulation, H1_error_per_cell, VectorTools::H1_norm); // assuming I provided also the gradient method for the Solution<dim> class
>
> Does anyone have any idea why this is happening? My guess was that while computing the H^1 semi-norm the ParsedConvergenceTable class does some approximation to compute the gradient from the exact solution expression and hence that could be the source of the issue. Conversely, in the "ConvergenceTable" way I do define explicitly the gradient of the exact solution in the Solution<dim> class.

If the Solution<dim> class implements the Gradient, then ParsedConvergenceTable should use that. 

You are calling “error_from_exact” of that class, right?

This is where it is called:

Line 621. As you can see, the quadrature rule used is a Gauss formula of order (dh.get_fe().degree + 1) * 2.

Can you check if you get the same results with this order?

L.

Luca Heltai

Aug 8, 2021, 12:52:03 PM
to dea...@googlegroups.com
The other option is that you are not calling the `error_from_exact` overload that takes a mapping argument, but the one without it, and your boundary cells are introducing some error due to a worse mapping… Are you calling `error_from_exact` with the mapping as the first argument?

Luca

On Aug 8, 2021, at 6:49 PM, Luca Heltai <luca....@gmail.com> wrote:



Marco Feder

Aug 8, 2021, 3:15:25 PM
to deal.II User Group
Ciao Luca!

> Can you check if you get the same results with this order?
Yes, the rates are the same. 

> Are you calling `error_from_exact` with mapping as a first argument?
Yes

Actually, I've fixed the issue after reading this:
> If the Solution<dim> class implements the Gradient, then ParsedConvergenceTable should use that. 

Indeed, I realised I wrote:

error_table.error_from_exact(mapping, dof_handler, solution, exact_solution);

where exact_solution is a FunctionParser<dim>, which is initialised with the expression of the analytical solution and therefore doesn't provide any information about the gradient. If I instead replace exact_solution in the last argument with the "classical" Solution<dim>() (which also implements the gradient), I get my expected rates. Apparently it wasn't using the gradient function.

However, looking at the source code and the documentation of integrate_difference, I don't see how the H^1 norm could be computed without an implementation of the gradient. I would expect an exception asking for its implementation. Was it using some finite-difference-like approximation, or am I missing something?


Thanks,
Marco

Timo Heister

Aug 8, 2021, 4:26:01 PM
to dea...@googlegroups.com
FunctionParser uses a finite difference approximation for the gradient. I think this is explained in the class documentation.
This explains the results you see...

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dealii+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dealii/dc72f870-1549-471c-b93b-854bfe3b67d9n%40googlegroups.com.

Marco Feder

Aug 9, 2021, 10:39:02 AM
to deal.II User Group
You're right, I totally missed the default parameter h in the constructor.

Thanks,
Marco

luca.heltai

Aug 10, 2021, 5:04:21 AM
to Deal.II Users
If you want to use symbolic calculations, you could also leverage SymEngine, and use Functions::SymbolicFunction

https://www.dealii.org/current/doxygen/deal.II/classFunctions_1_1SymbolicFunction.html

In your code, you could create such an object by declaring

std::unique_ptr<Functions::SymbolicFunction<dim>> fun;

parsing its expression from a file, and then constructing it:

fun = std::make_unique<Functions::SymbolicFunction<dim>>(expression);

This way the generated function computes the gradient symbolically.

L.
