Dump/restart with an STL file: shift in coordinates


tialou nal

Nov 23, 2021, 11:35:17 AM
to basilisk-fr

Dear all,

I would like to run a geometry (from an STL file) with MPI, so I am following the dump/restart example tangaroa.c.

After the calculation failed to converge on a computing cluster, I tried running sequentially from a restart file.

I encounter the following issue: after dumping a snapshot of the geometry and restarting from it, the geometry is shifted in its coordinates after the first iteration (see fig_shift.png: on the left, the geometries from the restart and from the first (dumping) run at t = 0; on the right, after the first iteration, with the shift from the restart shown in grey).

I follow this procedure, with the .c file attached:

rm restart dump*

qcc -DDUMP=1 -Wall -w -O2 debug_for_forum.c -o program -L$BASILISK/gl -lglutils -lm

./program

rm -r -f data_0

mv data data_0

mv dump-0 restart

./program

The geometry used is the one from cor.c in Antoonvh’s sandbox, and I’m exporting the data to ParaView with functions from output_vtu_foreach.h in Acastillo’s sandbox.

I tried restarting from different snapshots. I don’t understand what I’m doing wrong; any help would be greatly appreciated.

Kind regards,

Cyprien

fig_shift.png
debug_for_forum.c

j.a.v...@gmail.com

Nov 24, 2021, 4:18:31 AM
to basilisk-fr
Hello Cyprien,

I am not sure what causes your particular results, but I do have a comment:

Because dump does not work for face fields, the embedded boundary cannot be properly reconstructed after restoration. With MPI, you could reinitialize it after restoring, using the distance field, provided you store that too.
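In sketch form, something like this (untested; the STL file name is a placeholder and the fraction reconstruction follows the tangaroa.c example):

#include "embed.h"
#include "distance.h"

scalar d[]; // global, so dump() saves it and restore() brings it back

event init (t = 0) {
  if (!restore (file = "restart")) {
    coord * p = input_stl (fopen ("geometry.stl", "r")); // your STL file
    distance (d, p);
    // ... domain setup and mesh adaptation around the geometry ...
  }
  // dump()/restore() do not handle the face field fs, so rebuild the
  // embedded fractions from d in both cases (fresh start and restart)
  vertex scalar phi[];
  foreach_vertex()
    phi[] = (d[] + d[-1] + d[0,-1] + d[-1,-1] +
             d[0,0,-1] + d[-1,0,-1] + d[0,-1,-1] + d[-1,-1,-1])/8.;
  fractions (phi, cs, fs);
}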

Antoon

tialou nal

Nov 24, 2021, 11:34:48 AM
to basilisk-fr
Hi Antoon,

Thank you for your reply; it was indeed effective. It corrected the shift (and thus the simulation).

I need one more piece of advice.
I'm using the restore procedure to run a geometry with MPI on a mesh of up to 512^3 cells. Since the distance function is not parallelised yet, I have to run the first iteration sequentially... but that is too much for my local RAM (30 GB). I would be curious to know whether there is a trick to avoid a crash on my local machine due to lack of resources.

Kind regards,

Cyprien

Damien Huet

Sep 16, 2022, 12:37:09 PM
to basilisk-fr
Hi Cyprien and Antoon,

I have also struggled to run a case with embedded boundaries defined from an STL file in parallel. Thanks to Antoon's comment I am dumping the distance scalar and reconstructing the cell and face fractions at the restart step. In my case, restoring the original distance scalar "d" led to a segmentation fault; I am not sure where it comes from (it looks like the distance function sets a lot of attributes on "d" that I don't understand). The trick I found was to define a secondary scalar field (or use the cell fraction "cs"), copy the distance values into it with foreach() cs[] = d[];, and not dump "d" (i.e. keep it a local variable). I will publish this case in my sandbox, and maybe a bug report on the dump/restart of "d", once I can access the wiki via darcs again.
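In sketch form, the trick looks roughly like this (not my exact code; the STL file name is a placeholder and the reconstruction follows the tangaroa.c pattern):

#include "embed.h"
#include "distance.h"

event init (t = 0) {
  if (!restore (file = "restart")) {
    scalar d[]; // local, so it is never dumped
    coord * p = input_stl (fopen ("channel.stl", "r"));
    distance (d, p);
    foreach()
      cs[] = d[]; // park the distance values in cs, which dump() does store
  }
  // fresh run or restart: cs holds distance values at this point, so
  // rebuild the actual cell/face fractions from them
  vertex scalar phi[];
  foreach_vertex()
    phi[] = (cs[] + cs[-1] + cs[0,-1] + cs[-1,-1] +
             cs[0,0,-1] + cs[-1,0,-1] + cs[0,-1,-1] + cs[-1,-1,-1])/8.;
  fractions (phi, cs, fs);
}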

About Cyprien not being able to run a larger case due to the RAM limit on one core: were you able to find a way to bypass this problem in the end? I am personally able to generate a distance field on one core at level 9 or 10 for a winding channel: I make sure that the initial grid is coarse and refine the mesh from there (rather than defining a fine grid and coarsening it; see the sketch below). A better way would of course be to parallelize the distance() function. At first I thought that a few well-placed "boundary" calls would suffice, but I am not able to understand the implementation of this function (nor the functions it calls). Is there a conceptual reason why distance() would be hard to parallelize?
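Concretely, the refinement pattern I use looks roughly like this (a sketch following tangaroa.c; the grid size, maximum level and tolerance are placeholders):

// in main(): init_grid (32);  -- start from a coarse grid
// then, in event init (t = 0), in the non-restart branch:
scalar d[];
coord * p = input_stl (fopen ("channel.stl", "r"));
distance (d, p);
// refine only where the distance field varies, i.e. near the surface,
// rather than allocating a uniformly fine grid and coarsening it
while (adapt_wavelet ({d}, (double[]){5e-4}, 10).nf);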

Thank you both for your questions and comments, which were very helpful.

Best,

Damien

Cyprien jsc

Sep 22, 2022, 9:09:30 AM
to basilisk-fr
Hi Damien,

Basilisk's dump indeed does not store face fields, and embed.h needs the embedded face fraction field "fs", which is therefore lost on restart. In my case the work-around proposed by Antoon in this thread was very helpful to resolve the segmentation fault (but without using the distance function).

For the RAM issue, I used the same trick of refining the grid to the maximum level only in the solid geometry region. Otherwise, on a supercomputer you can increase the RAM per core, run the first iteration on a single task reserving several cores' worth of memory (one "mega-core"), save the ‘d’ field, and restart in parallel.
On TGCC/Irene, for iteration 0, it would be a job with:
#MSUB -n 1
#MSUB -c 5

Cheers,

Cyprien

Ning Wang

May 11, 2023, 3:40:49 AM
to basilisk-fr
Dear everyone,

Using dump() with its default configuration, I get a file "dump" which contains the flow-domain information, and its file size is consistent with my own output for Tecplot.
But when I use restore(), the grid configuration is reinitialized and is not the same as the adaptive (AMR) grid I dumped.
How can the grid information be saved and read back when restarting the calculation with restore()?
I sincerely look forward to any comments!

Regards,
Ning Wang


Cyprien jsc

May 11, 2023, 9:25:53 AM
to basilisk-fr
Hi Ning Wang,

You have a good example here: http://basilisk.fr/src/examples/tangaroa.c

I have an example in my sandbox to handle some assertion errors with embed.h and a complex geometry: http://basilisk.fr/sandbox/Cyprien_Lemarechal/pipe_geometry.c

Once you have your dump file, it should be as easy as:

event init (t = 0) {
  if (!restore (file = "restart")) {
    // do something when not restarting
  }
  else {
    // do something for the restart
  }
}

If your grid is reinitialized, you may have left an init_grid() or a refine() in the ‘else’ branch of your init event.

Cheers,

Cyprien

Ning Wang

May 11, 2023, 11:17:54 AM
to basilisk-fr
Hi Cyprien,

Thank you very much for your advice. I have now produced the dump file successfully and started the recalculation!

Regards,
Ning Wang


Ning Wang

Aug 17, 2023, 10:30:44 PM
to basilisk-fr
Dear everyone,

I am wondering whether I can write (and edit) the dump file used by dump() and restore(). I am aiming at "macro simulation + micro simulation", meaning that a coarse mesh on a large domain is used to solve a macro-scale problem while a fine mesh on a small domain is used to solve a micro-scale problem such as a boundary flow.

This may require invoking two different calculations at the same time. At each time step, the macro-scale results provide the initial field and boundary conditions for the micro-scale calculation: the macro-scale calculation maps its flow-field data to the micro-scale calculation; after the micro-scale calculation is completed, the data are remapped to the macro-scale calculation; then the macro-scale calculation for the next time step is performed. This means that an interface between the two calculations is required.

If the binary dump file can be customized and edited, I could invoke the two calculations at the same time through the sequence "macro-scale calculation dump() - modify the dump file - micro-scale calculation restore() - micro-scale calculation dump() - transcode the dump file - assign values to the macro-scale calculation's local fields". This process would realize the communication between the two calculations.

Are there any other examples where such "simulation + simulation" coupling has been achieved? Is there an easier communication interface between the calculations? Thanks!

Regards,
Ning Wang
