Different results running same code


HUI WANG

Jul 9, 2021, 9:28:40 AM
to basilisk-fr
Dear All,

Recently, I have been doing convergence studies of drop impact with axi.h. However, the results always differ when I run the same code with different numbers of cores. Please find the test results in the attached file.

I reported this problem for 3D simulations several months ago, and the differences were negligible (https://groups.google.com/g/basilisk-fr/c/Rc3SCjNLfQs/m/OEO6pl6xAQAJ). But now, for axisymmetric cases, the evolution of the interfaces is strongly affected.

What am I missing? Has anyone had this issue before?

Best
Hui
axi_cores.pdf

HUI WANG

Jul 13, 2021, 3:30:26 AM
to basilisk-fr
Any help?

Michael Negus

Jul 13, 2021, 10:33:09 AM
to basilisk-fr
Hi Hui,

Whenever I've had this sort of issue, I've found that I'd not parallelised one of my for loops correctly (a plain for loop, as opposed to a foreach()). However, it's hard to diagnose the problem without seeing the source code, as there could be a number of reasons why results change in this manner. So would you be willing to share the source code?
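For example (a minimal sketch, not from your code; f stands for any scalar field), a global sum computed in a foreach() needs a reduction clause, otherwise each MPI process only accumulates the cells it owns and the value becomes core-count dependent:

double vol = 0.;
foreach (reduction(+:vol)) // without this clause, vol stays rank-local under MPI
  vol += f[]*dv();
fprintf (stderr, "volume = %g\n", vol);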

All the best,

Michael

HUI WANG

Jul 13, 2021, 12:01:22 PM
to basilisk-fr
Hello Michael,

Thanks for your response. Please find the original code attached.

Best
Hui

drop.c
adapt_wavelet_limited.h

HUI WANG

Jul 16, 2021, 9:10:22 AM
to basilisk-fr
Hello again,

I ran several tests using codes from the Basilisk sandbox. Please find the test results in the attachment. In short, all three tests show differences in the results under different numbers of processors (cores). It seems this may be a common problem affecting all simulations? And for cases where multiple droplets and bubbles are expected, or where the interface dynamics are more violent (drop impact), the differences appear more visible?

If that is the issue, my question becomes: how can this effect be mitigated? Is there any way to minimize it?

Thanks a lot
Hui
Core_effect.pdf

j.a.v...@gmail.com

Jul 16, 2021, 6:36:40 PM
to basilisk-fr
Hello Hui,
 
The residual field of the various Poisson problems may differ when using a different number of cores. You can set the JACOBI flag (i.e. compile with the -DJACOBI option) to mitigate this.

The effect is most prominent when the residual-induced perturbation acts as a seed for an instability. In that case, adding a physical perturbation to the initialization may be desirable, and perhaps also reducing the TOLERANCE.
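For example (just a sketch; the tolerance value below is only illustrative, the Basilisk default being 1e-3):

// compile with e.g.: qcc -O2 -DJACOBI drop.c -o drop -lm
int main() {
  ...
  TOLERANCE = 1e-5; // tighter convergence criterion for the multigrid solvers
  run();
}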

Antoon


HUI WANG

Jul 17, 2021, 3:56:26 AM
to basilisk-fr
Hello Antoon,

Thanks for the advice; I will try what you suggested.

Best
Hui

HUI WANG

Jul 26, 2021, 6:11:38 AM
to basilisk-fr
Hello Antoon, Hello all,

I tried reducing the TOLERANCE and compiling with the -DJACOBI flag, but the results still show big differences.

Furthermore, I found that the results are quite different even when I run the same code with the same number of cores (processes). The two attached pictures were output by the same code, run yesterday and today respectively, with the same number of cores. I ran this code multiple times; every time the results were different, and twice the run failed to converge with the error 'dtnext: Assertion `n < 0x7fffffff' failed'. I reported this problem before with 3D simulations (https://groups.google.com/g/basilisk-fr/c/Rc3SCjNLfQs/m/OEO6pl6xAQAJ). For my axisymmetric case this makes the convergence study rather difficult, since the results come out somewhat random on every run. Is this a common problem in the Basilisk code too?

Thanks a lot
Hui
25juillet.png
26juillet.png

HUI WANG

Nov 19, 2021, 4:37:07 AM
to basilisk-fr
Hello Antoon,

I found this post discussing result differences caused by the number of cores (https://groups.google.com/g/basilisk-fr/c/kqqMiMcuh5A/m/-bRCcLZfEgAJ). You and Professor Stephane Popinet suggested that adding initial noise would help mitigate this effect, but I am not very clear on how to add this noise properly and how it works. Could you give me some advice?

In your bug test on the Boussinesq Rayleigh-Taylor instability (http://basilisk.fr/sandbox/bugs/adapt_accel.c), you simply add noise at t = 0 as follows (I tried to run this test but it always gives a "Segmentation fault"):
foreach() 
  T[] = (y < 0.0) + 0.01*noise();

For my case of drop impact onto a liquid pool, how should I add this noise? For example, is the following the right way to define the initial drop and pool? Do I need to add noise to the initial velocity too?
event init (t = 0) {
  scalar f1[], f2[];
  fraction (f1, pool(x));
  fraction (f2, drop(x,y));
  foreach() {
    f[] = f1[] + f2[] + 0.001*noise();
    u.x[] = -f2[];
  }
}

Thanks a lot
Hui

j.a.v...@gmail.com

Nov 19, 2021, 5:42:18 AM
to basilisk-fr
Hello Hui,

> I tried to run this test but it always gives a "Segmentation fault"
It runs fine for me.

> how should I add this noise?
I do not think there is a generic answer. It is about adding a perturbation which can be amplified by the instability mechanism. Ideally, you should set a perturbation that matches the physics of your model system.

> is the following the right way to define the initial drop and pool?
For the droplet in a pool you could perturb the interfaces, but adding noise to every cell may be too crude. Perhaps something like:

...
foreach() {
  f[] = f1[] + f2[];
  if (f[] < 1 && f[] > 0) // only add noise to cells containing the interface
    f[] = clamp (f[] + 0.001*noise(), 0, 1);
  ...
}

Notice that the perturbation will still be sensitive to the number of cores you use.
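If you need the initial perturbation itself to be identical for every core count (e.g. for your convergence study), one option (a sketch, untested on your case; the amplitude and wavenumber are arbitrary) is to use a deterministic function of position instead of noise(), which every domain decomposition evaluates identically:

double eps = 0.001, k = 10.*pi; // hypothetical amplitude and wavenumber
foreach()
  if (f[] > 0 && f[] < 1)
    f[] = clamp (f[] + eps*sin(k*x)*sin(k*y), 0, 1);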

Antoon