Strange Adaptive grid

Antoon van Hooft

Feb 2, 2016, 9:41:49 AM2/2/16
to basilisk-fr
Hello, I have a weird result when simulating a Rayleigh-Taylor instability:
https://www.youtube.com/watch?v=rFio6F5Vrdk
The density field looks fine. But the adaptive grid levels make no sense:
https://www.youtube.com/watch?v=p1akjSM4qYU

You can see the strange grid refinement in the lower part of the simulation. Given the initial symmetry, I would expect the grid refinement to be symmetric.

What is going wrong?

The code I used is:


#include "grid/quadtree.h"
#include "navier-stokes/centered.h"
#include "tracer.h"
#include "diffusion.h"
#define temp 10
#define MAXLEVEL 9

scalar T
[];
scalar
* tracers = {T};
mgstats mgT
;
face vector av
[];

double Ra,Pr;
int main()
{
   
    boundary
(all);
    a
=av;
    L0
= 2.;
    X0
= -1;
    Y0
= -1;
    DT
= 0.2;
    init_grid
(1 << MAXLEVEL);

       
Ra = 1e11,Pr = 1; run();
}

event init (t = 0)
{    
    mask
(y >  0.5 ? top :
    y
< -0.5 ? bottom : none);
   
const face vector muc[] = {Pr/sqrt(Ra),Pr/sqrt(Ra)};
    mu
= muc;
   
foreach()
    T
[]=(y<0);
    boundary
(all);    
}

event acceleration (i++)
{   const face vector D[] = {1./sqrt(Ra), 1./sqrt(Ra)};
    mgT
= diffusion (T, dt, D);
   
const face vector delta_ij[] = {0.,1.};
    foreach_face
()
   
{
    av
.y[] = (T[0,1] + T[])/2.*delta_ij.y[];
           
}
}

event logfile (t <= temp; t += 0.1)
{
      scalar omg
[];
      vorticity
(u, omg);
      stats s
= statsf (omg);
      fprintf
(ferr, "%g %d %g %g %g\n", t, i, dt, s.sum, s.max);
}

event movie (t += 0.01; t <= temp)
{
     
static FILE * fp = popen ("ppm2mpeg > level.mpg", "w");
      scalar omg
[];
     
foreach()
    omg
[]=level;
      output_ppm
(omg, fp, linear = true);
   
static FILE * fp1 = popen ("ppm2mpeg > Temp.mpg", "w");
     
foreach()
    omg
[]=level;
        output_ppm
(T, fp1, spread = 2);
}
event profiles (t = end)
{
    scalar levol
[];
   
foreach()
    levol
[]=level;
    FILE
* fp1 = fopen("levels", "w");
   
for (double y = -0.5; y <= 0.5; y += 0.01)
        fprintf
(fp1, "%g %g\n", y, interpolate (levol, 0, y));
      fclose
(fp1);
}

event adapt (i+=1)
{
  adapt_wavelet
((scalar *){u,T}, (double[]){0.001,0.001,0.001}, MAXLEVEL);
       
   
}



Antoon van Hooft

Feb 3, 2016, 10:52:11 AM2/3/16
to basilisk-fr
I have studied the problem a bit more. It does not happen when I change the maximum discretization error of {u} to 0.1:
  adapt_wavelet ((scalar *){u,T}, (double[]){0.1,0.1,0.001}, MAXLEVEL);
Furthermore, a plot of the vorticity in the early stages of the instability shows that the refinement itself causes vorticity structures.

https://youtu.be/jIxeeoLJK6k







Stephane Popinet

Feb 4, 2016, 3:46:27 AM2/4/16
to basil...@googlegroups.com
Hi Anton,

I believe your troubles are due to the initial condition. Your initial
pressure field is zero, not the hydrostatic pressure field required to
balance gravity. The solver can find the pressure for you, but this
requires a few timesteps (and also depends on the value of your initial
timestep and on the TOLERANCE value). During this time the velocity
field is not balanced in the bulk (which is of course not correct). This
unbalanced velocity field is tracked (correctly) by the adaptive mesh.
You should try to, for example, turn adaptivity on only after, say, 10
timesteps and also reduce the initial timestep, so that the solution has
time to settle to hydrostatic balance. You can also try to initialise a
pressure field which is close to the hydrostatic pressure field. In any
case, you need to monitor closely what happens for the first few timesteps.
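
A minimal sketch of these two suggestions (an illustration only, not code from this thread), assuming the Boussinesq setup of the first post where the buoyancy is av.y = T and T = 1 for y < 0, so a rough hydrostatic guess is p = min(y, 0) up to a constant (p is the pressure field of the centered solver):

/* Approximate hydrostatic pressure and a smaller initial timestep;
   this event runs in addition to the existing init event. */
event init_pressure (t = 0)
{
  foreach()
    p[] = min (y, 0.);   // rough hydrostatic balance for av.y = T
  boundary ({p});
  DT = 0.01;             // reduced initial timestep
}

/* Delay adaptation, e.g. by replacing the adapt event with: */
event adapt (i++)
{
  if (i >= 10)           // let the pressure settle first
    adapt_wavelet ((scalar *){u,T}, (double[]){0.001,0.001,0.001}, MAXLEVEL);
}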

More generally, people often forget that initial conditions are
generally not trivial at all (in particular because they need to be
solutions of the Navier-Stokes equations...).

cheers

Stephane

felixh...@hispeed.ch

Feb 4, 2016, 6:42:45 AM2/4/16
to basilisk-fr
Hi Anton and Stephane

Could it also be an option to run a short simulation on a regular grid until the pressure field has settled, then dump it to a file and restart with an adaptive grid? How would this be achieved?
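
Maybe something along these lines (just a sketch, assuming Basilisk's dump()/restore() functions are available; the file name and switch-over time are made up): the first, non-adaptive run writes a snapshot,

/* in the non-adaptive run: write a snapshot once the pressure has settled */
event snapshot (t = 1.)
  dump (file = "settled");

and the adaptive run then restores it instead of re-initialising:

/* in the adaptive run: restore the snapshot if it exists */
event init (t = 0) {
  if (!restore (file = "settled")) {
    // fall back to the usual initial condition here
  }
}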

And another question: where do I find the code for adapt_wavelet? I fiddled around with it to get a feel for what happens and learned a lot :-)

cheers felix


Antoon van Hooft

Feb 4, 2016, 7:57:04 AM2/4/16
to basilisk-fr, pop...@basilisk.fr
Hello Stephane and Felix,

I tried both: adapting only after 100 timesteps and initializing the correct pressure field.
The following happens: after 100 timesteps the grid is coarsened (top and bottom), and the grid refinement temporarily focuses around the instability interface. Next, I see the erroneous vorticity structures appear in the lower part of the simulation. This coincides with the refinement of the lower half of the domain (see link).

If the erroneous vorticity structures were due to the incorrect pressure initialization, I would expect them to occur in the first 100 timesteps as well. However, they remain absent during the first 100 timesteps.

It seems that the problem is not that the system is adapting to the incorrect pressure field. Since the "problem" propagates from the fine levels in the downward-right direction at 45 degrees, I have the feeling that something goes wrong in the refinement procedure.



Antoon van Hooft

Feb 13, 2016, 6:22:53 AM2/13/16
to basilisk-fr
I did some more investigations on this problem.

I initialized a Lamb-Chaplygin dipolar vortex, but not the corresponding initial pressure distribution. This was no problem; out of enthusiasm I even made a movie:
https://www.youtube.com/watch?v=muBUNmdiTbo

Next I added an extra force term, namely F = e_z × u (× denotes the cross product). For a 2D incompressible flow this term is curl-free, so the vortex dynamics are not altered and the solution of the simulation should remain the same (only with a different pressure field).
Now the problem occurs.
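
A sketch of such a term in the acceleration event (illustration only, not necessarily the exact code used): since e_z × u = (-u_y, u_x) in 2D, the face-averaged components are

/* extra force F = e_z x u = (-u_y, u_x), averaged to faces */
event acceleration (i++)
{
  foreach_face(x)
    av.x[] -= (u.y[] + u.y[-1])/2.;    // -u_y on x-faces
  foreach_face(y)
    av.y[] += (u.x[] + u.x[0,-1])/2.;  //  u_x on y-faces
}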

Finally, I did not initialize the dipolar vortex but only applied a force term that has only an azimuthal component, whose magnitude depends only on the distance from the centre. This way the pressure is initially correct, and since the acceleration cannot be written as the gradient of a scalar field, it does not directly (in a hydrostatic sense) affect the pressure field.
The problem still remains, and the solution differs greatly from a simulation without adaptive grid refinement.
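
For concreteness, a sketch of such a purely azimuthal forcing a = G(r) (-y, x)/r, with a made-up magnitude profile G(r):

/* purely azimuthal forcing; the profile G(r) is a hypothetical example */
#define G(r) (0.1*(r)*exp(-sq(r)))

event acceleration (i++)
{
  foreach_face(x) {
    double r = sqrt(sq(x) + sq(y));
    av.x[] += r > 0. ? -G(r)*y/r : 0.;
  }
  foreach_face(y) {
    double r = sqrt(sq(x) + sq(y));
    av.y[] += r > 0. ?  G(r)*x/r : 0.;
  }
}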

Any suggestions?

Stephane Popinet

Feb 14, 2016, 10:16:54 AM2/14/16
to basilisk-fr
Hi Anton,

Very nice movie. Could you please file a bug report following the
instructions here:

http://basilisk.fr/sandbox/bugs/README

I will try to have a look soon.

cheers

Stephane

PS: when you have registered on the basilisk.fr wiki, send me your login
name and I will add you to the people who can run simulations on the
site. This will help with your bug report.

Stephane Popinet

Feb 16, 2016, 2:59:51 PM2/16/16
to Antoon van Hooft, basilisk-fr
Hi Antoon,

Thanks for the bug report, I made a few reasonably minor changes (aside
from the way adapt_wavelet is called). See:

http://basilisk.fr/sandbox/bugs/adapt_accel.c

The results look fine to me:

http://basilisk.fr/sandbox/bugs/adapt_accel/RT_T.mpg
http://basilisk.fr/sandbox/bugs/adapt_accel/RT_level.mpg

cheers

Stephane

PS: I added Antoonvh to the list of users who can run simulations on the
basilisk.fr server.

felixh...@hispeed.ch

Feb 17, 2016, 3:23:02 AM2/17/16
to basilisk-fr
Just for fun: I did a 3D version of this in a slightly different formulation (using a constant gravity vector as Stephane showed earlier), ran it with OpenMP, and realized that the fingering started at the boundaries of the processor domains. Have a look yourselves!

The code (ra3.c) is:

/**
# Fingering in 3D */

#include "grid/octree.h"
#include "navier-stokes/centered.h"
#include "tracer.h"

#define rho_fl1 1000
#define mu_fl1 0.01

# define rho_air 1020
# define mu_air 0.01
# define SIGMA 0.0

# define L0 2.

# define t_end 60.

#define LEVEL 7

/**
The interface between the two phases is tracked with the volume
fraction field *f*. We allocate the fields for the variable density
and viscosity (*alphav*, *rhov* and *muv* respectively). */

scalar f[];
scalar * tracers = {f};
face vector alphav[];
scalar rhov[];
face vector muv[];
scalar f1[];
scalar umag2[];


u.t[front]  = dirichlet(0);
uf.t[front] = dirichlet(0);
u.t[back]   = dirichlet(0);
uf.t[back]  = dirichlet(0);
u.t[top]  = dirichlet(0);
uf.t[top] = dirichlet(0);
u.t[bottom]   = dirichlet(0);
uf.t[bottom]  = dirichlet(0);
u.t[left]  = dirichlet(0);
uf.t[left] = dirichlet(0);
u.t[right]   = dirichlet(0);
uf.t[right]  = dirichlet(0);


int main() {
  size (L0);
  origin (-L0/2,-L0/2 ,-L0/2);
  init_grid (1 << (LEVEL));
  run();
}

event step (t+=0.001; t<0.02)
{
}


event init (t = 0) {
  
  /**
  The density and viscosity are defined by the variable fields we
  allocated above. */
  
  alpha = alphav;
  rho = rhov;
  mu = muv;

  foreach() {
    f[]= ((z+0.1*L0)<0.0?1.0:0.0);
  }
  boundary ({f});


  /* define a gravity vector */

  const face vector g[] = {0.,0.,-9.81};
  a = g;
}



#define rho(f) ((f)*rho_fl1 + (1. - (f))*rho_air)
#define mu(f)  ((f)*mu_fl1 + (1. - (f))*mu_air)

event properties (i++) {
  foreach()
    f[]=(f[]>1.0?1.0:
         f[]<0.0?0.0:
         f[]);

  foreach_face() {
    double ff = (f[] + f[-1,0,0])/2.;
    alphav.x[] = fm.x[]/rho(ff);
    muv.x[] = fm.x[]*mu(ff);
  }
  foreach() {
    rhov[] = cm[]*rho(f[]);
    umag2[] = sq(u.x[]) + sq(u.y[]) + sq(u.z[]);
  }
}

#if 1
event movies (t+=0.02) {
  static FILE * fp =
    popen ("gfsview-batch3D slide3.gfv | ppm2theora > ra3f.ogv", "w");
    output_gfs (fp, t = t);
    fprintf (fp, "Save stdout { format = PPM width = 1024 height = 768}\n");

}
#endif


event logfile (i+=1) {
  fprintf (stderr, "%g %g %d %d %d\n", 
 t, dt, mgp.i, mgpf.i, mgu.i);
}


event end(t=t_end) {
}

event adapt (i++) {
  adapt_wavelet ({rhov}, (double []){0.4}, maxlevel = LEVEL, minlevel = LEVEL-3);
}

/**
If gfsview is installed on the system, we can also visualise the
simulation as it proceeds. */

#if 1
event gfsview (t += 0.1) {
  output_gfs (stdout, t = t);
}
#endif



The viz code ra3.gfv is:

# GfsView 3D
View {
  tx = -0.015964 ty = 0.0908398
  sx = 1 sy = 1 sz = 1
  q0 = 0.540164 q1 = 0.263422 q2 = 0.351024 q3 = 0.718061
  fov = 28.1412
  r = 0.3 g = 0.4 b = 0.6
  res = 1
  lc = 0.001
  reactivity = 0.1
}
Squares {
  r = 0 g = 0 b = 0
  shading = Constant
  maxlevel = -1
  font_size = 1
  raster_font = 1
  line_width = 1
} {
  n.x = 0 n.y = -1 n.z = 0
  pos = -0.5
} f {
  amin = 1
  amax = 1
  cmap = Jet
}
Isosurface {
  r = 1 g = 1 b = 1
  shading = Constant
  maxlevel = -1
  font_size = 1
  raster_font = 1
  line_width = 1
} {
  n.x = 0 n.y = 0 n.z = 1
  pos = 0
} P {
  amin = 1
  amax = 1
  cmap = Jet
} f {
  level = 0.1674
  reversed = 1
  use_scalar = 0
}
Squares {
  r = 0 g = 0 b = 0
  shading = Constant
  maxlevel = -1
  font_size = 1
  raster_font = 1
  line_width = 1
} {
  n.x = 1 n.y = 0 n.z = 0
  pos = -0.5
} umag2 {
  amin = 1
  amax = 1
  cmap = Jet
}
Cells {
  r = 0 g = 0 b = 0
  shading = Constant
  maxlevel = -1
  font_size = 1
  raster_font = 1
  line_width = 1
} {
  n.x = 0 n.y = 0 n.z = 1
  pos = -0.5
}

Compile with: qcc -O2 -Wall -fopenmp -o ra3 ra3.c -lm

and run with: ./ra3 | gfsview -s ra3.gfv

Enjoy, and any comments, suggestions and improvements are very welcome :-)

cheers felix

PS: is there a way to send movies?

Stephane Popinet

Feb 17, 2016, 4:08:07 AM2/17/16
to basil...@googlegroups.com
Hi Felix,

What happens is that, since numerical errors are the only source of
noise in the system (you don't add noise explicitly), they are
amplified by the instability. If these numerical errors are not
distributed uniformly, you will see that non-uniform distribution in the
instability.

The main source of numerical errors is the tolerance on the Poisson
equation for the pressure. With the default relaxation scheme, the error
distribution (below the tolerance threshold) depends on the way the grid
is traversed. When you use OpenMP, the grid is traversed differently
than on a single core, which leads to an error distribution which
reflects the partitioning between cores, which is then amplified by the
instability.

Of course, this could also be due to some parallelisation bug. To check
that this is not the case, you can use the "Jacobi" relaxation scheme
instead of the default. The Jacobi relaxation is independent of the
order of traversal. You can try this using

% qcc -DJACOBI=1 ...

and you will see (as I did) that the results become independent of the
number of cores (and also that the instability takes much longer to grow
because the noise is more uniformly distributed).

A better way to do things is probably to add some initial noise
explicitly (as Antoon did for the Boussinesq instability).
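
For example, a minimal sketch for the 3D setup above (untested), using noise(), which returns a random value in [-1,1]; the 0.01 amplitude is arbitrary:

/* perturb the initial interface position with a small random noise */
event init (t = 0) {
  foreach()
    f[] = (z + 0.1*L0 + 0.01*noise() < 0. ? 1. : 0.);
  boundary ({f});
}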

> PS: is there a way to send movies?

I am not sure what the size limit is for the mailing list. You could use
YouTube or, better still, write up an example in the sandbox/ on basilisk.fr.

cheers

Stephane

Zhuoran Lyu

Feb 18, 2016, 11:05:15 PM2/18/16
to basilisk-fr, pop...@basilisk.fr
Hi Stephane,

I just ran the hele-shaw program in your examples folder but ran into a problem. It gives me a "Floating point exception (core dumped)" error every time. Could you please have a look?
Thank you very much!
Zhuoran

Stephane Popinet

Feb 23, 2016, 4:23:36 AM2/23/16
to basilisk-fr


On 23/02/16 08:54, Antoon van Hooft wrote:
> I do, however, not know how to run simulations on the basilisk server.
> When I am logged in I cannot (or do not know how to) play in the sandbox.

You just need to create a *.c file in the sandbox and either use
"Preview" or "Save" the page. This will then compile and run the *.c
file (and generate any figures you may have included in the page).

cheers

Stephane