Accuracy and convergence of tau method

Ciro Sobrinho

Oct 10, 2023, 6:34:38 AM
to Dedalus Users
Dear Dedalus community,

I am interested in applying Dedalus v3 to a research problem in which accuracy plays an important role, so I have been trying to understand how accurate the tau method is.

By applying the tau method, one solves approximate equations. Interpreting the tau terms as residuals of those equations, I understand that a necessary condition for an accurate solution is that the taus come out with small magnitude. However, taking the Rayleigh-Bénard script as an example, with the given resolution Nx,Nz = 256,64, the tau terms are far from small. In the attached graphs, I plot the L^\infty norm over time for all the tau terms, and also the same norm for the divergence of the velocity field, which is supposed to be incompressible. As the results show, the only tau term which remains within machine error is tau_p, from the pressure gauge condition, while the others remain large. As a consequence, the equations have large residuals. The divergence of the velocity field reaches a maximum of approximately 0.23, which is unacceptable for my purposes.
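For reference, this is roughly how I compute these norms during the run (a sketch using the field names from the example script: solver, u, tau_u1, tau_u2; tracking squared quantities lets GlobalFlowProperty's parallel max reduction recover the L^\infty norms after a square root):

import numpy as np
import dedalus.public as d3

# Track squared magnitudes; the max of the square equals the square
# of the max absolute value, so a square root recovers the L-inf norm.
flow = d3.GlobalFlowProperty(solver, cadence=10)
flow.add_property(d3.div(u)**2, name='div_u_sq')
flow.add_property(tau_u1@tau_u1, name='tau_u1_sq')
flow.add_property(tau_u2@tau_u2, name='tau_u2_sq')

# Inside the main timestepping loop:
if (solver.iteration - 1) % 10 == 0:
    logger.info("max|div(u)| = %.2e, max|tau_u1| = %.2e, max|tau_u2| = %.2e" % (
        np.sqrt(flow.max('div_u_sq')),
        np.sqrt(flow.max('tau_u1_sq')),
        np.sqrt(flow.max('tau_u2_sq'))))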

By playing with the resolution, I found that the tau terms have lower magnitude if I increase the ratio of Nz to Nx. In the case Nx,Nz = 64,256, the tau terms look much better, even though the errors during the initialization of the flow are still large.

Surprisingly, this initialization problem does not improve if I also increase the x resolution, say to Nx,Nz = 256,256.

So I would like to ask: how can one control the errors when applying the tau method? Is there a way to guarantee that they are indeed small and that the equations are well satisfied? Is there a systematic study of the accuracy or convergence of this method with respect to resolution?

I am attaching the graphs mentioned above, as well as the script.

Thank you very much for any help or shared experience!
Best regards,
Ciro Campolina

Nx256Nz256.pdf
Nx64Nz256.pdf
Nx256Nz64.pdf
rayleigh_benard.py

Keaton Burns

Oct 10, 2023, 9:17:17 AM
to dedalu...@googlegroups.com
Hi Ciro,

These are great questions. The first thing to point out is that these types of errors are not unique to the tau formulation in Dedalus — really *every* polynomial spectral method (collocation, classical tau, ultraspherical, etc…) can be written as some type of tau method, just with different tau terms, since they all need to modify the equations in some way to accommodate the boundary conditions. The interface in Dedalus just makes this explicit and accessible, for exactly the examination you’re doing here, but the same sorts of errors are still present in every method.

The default resolution for the Rayleigh-Bénard example is indeed very low, because it is simply meant as a fast example people can quickly run on their laptops to make their first plots with Dedalus. The errors are large in the transient, but the div(u) error does go down to about 10^-4 as you approach the convective steady state. This behavior is fairly typical when starting from unstable equilibria, i.e. there is a strong transient that requires more resolution than the steady state.
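By the way, an easy way to monitor this over a run is to save div(u) as an analysis task and plot its norm afterwards (a sketch along the lines of the example's snapshot output; the handler name 'errors' is just illustrative):

# Save the velocity divergence alongside the usual outputs:
errors = solver.evaluator.add_file_handler('errors', sim_dt=0.25, max_writes=100)
errors.add_task(d3.div(u), name='div_u')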

As you are seeing, though, the tau error is really a measure of the truncation error in z, since these terms are introduced to accommodate the boundary conditions. The expected behavior is exactly what you see: the tau error decreases as you increase the z resolution. I think the large initial error (t < 5) is simply due to the random initial conditions; picking smoother initial conditions should reduce the initial taus and divergence.
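For example, one smoother alternative to the white-noise fill in the script (just a sketch; the amplitude and mode numbers are arbitrary) is a single low-wavenumber perturbation on top of the conductive background:

# Smooth initial condition: conductive background plus one low mode.
# The sin(pi*z/Lz) factor vanishes at the walls, so the perturbation
# is compatible with the fixed-buoyancy boundary conditions.
x, z = dist.local_grids(xbasis, zbasis)
b['g'] = Lz - z
b['g'] += 1e-3 * np.sin(np.pi * z / Lz) * np.cos(2 * np.pi * x / Lx)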

Now the behavior with x resolution is a little more complicated. It looks to me like the saturated amplitude of the taus (1e-15) is about the same when Nz=256 and Nx=64 or 256. The difference is that the second transient around t=30 has higher error for Nx=256. One explanation might be that there is a very sharp plume / flow structure forming, and the numerical truncation at Nx=64 limits how fine it can be, correspondingly limiting the z-truncation error. At Nx=256, the structure can become smaller and temporarily induce a larger error, since more horizontal modes are available in the tau terms as well as in the equations. Still, the peak pointwise error in that simulation (excluding the initial transient) is around 1e-10, which overall I think is pretty good for a nonlinear simulation!

In summary, though, I think you're taking the perfect approach of re-running the simulation and checking these errors as a function of resolution. Ultimately, refinement (in space and time) is the proper way to reduce the errors. Plotting 2D power spectra can also help identify which dimension needs more or less refinement: you typically want some balance and not to over-refine, since over-refining requires slower-than-necessary timestepping and at some point provides no improvement to the flow statistics.
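For the spectra, something like this works in serial (a sketch; in parallel, b['c'] only holds the locally stored coefficients, so you would need to gather them first):

import numpy as np
import matplotlib.pyplot as plt

# 2D power spectrum from the Fourier-Chebyshev coefficients of b;
# the small offset avoids log10(0) for unpopulated modes.
power = np.abs(b['c'])**2
plt.imshow(np.log10(power.T + 1e-30), origin='lower', aspect='auto')
plt.xlabel('x mode')
plt.ylabel('z mode')
plt.colorbar(label='log10 power')
plt.savefig('power_spectrum.png')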

I hope this helps a bit!
Best,
-Keaton



Ciro Sobrinho

Oct 10, 2023, 10:49:42 AM
to Dedalus Users
Dear Keaton,

Thanks for the feedback and the quick response.

As you explained, the tau terms are indeed very useful for tracking and controlling the truncation errors in the simulation outputs.

I verified what you said about the initial transient behavior. I tried a smoother initial condition, for a problem which does not start from an unstable equilibrium. The errors due to the taus then indeed simply oscillate around a stable reference value throughout the whole simulation.

About the resolution in x, I agree that the low resolution might be coarse-graining the simulation, damping finer structures that contribute to the larger errors seen at higher resolutions. The best approach is to keep tracking the errors for each simulation and check that they remain sufficiently small.

Thanks again for the useful information!
Best regards,
Ciro Campolina