Dear Phantom users,
I am trying to plot the trajectories of some gas particles in my Phantom simulation. If I understand the code correctly, the particle indices serve as the particle IDs, so I can use the index to select a given particle at different timesteps (I believe SPLASH works the same way). This works fine when I use OpenMP parallelization only. However, with hybrid MPI+OpenMP parallelization the trajectory looks quite strange (see the attached plot). I suspect the indices are changed when particles are exchanged between MPI processes. Is this a bug in Phantom, or is it expected behaviour needed to make the MPI parallelization efficient?
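To make the question concrete, here is a toy Python/NumPy sketch of what I am worried about. It is not Phantom code, and the persistent ID column is only an assumption on my part about how one might track particles robustly; it just shows how selecting by array position breaks if the storage order is reshuffled between dumps, while matching on a persistent ID does not:

    # Toy illustration (not Phantom code): trajectory selection by array
    # position vs. by a persistent particle ID when the ordering changes
    # between snapshots (e.g. after an MPI domain reshuffle).
    import numpy as np

    rng = np.random.default_rng(42)
    npart = 10

    # Snapshot 1: each particle has an ID and a position
    ids1 = np.arange(npart)
    pos1 = rng.random((npart, 3))

    # Snapshot 2: same particles, slightly moved, but stored in a
    # permuted order (mimicking a possible reordering by MPI)
    perm = rng.permutation(npart)
    ids2 = ids1[perm]
    pos2 = (pos1 + 0.01 * rng.standard_normal((npart, 3)))[perm]

    target = 3  # the particle I want to follow

    # Breaks if the ordering changed: select by array position
    by_index = pos2[target]

    # Robust: select by matching the persistent ID
    by_id = pos2[ids2 == target][0]

    print("by array index :", by_index)   # generally a different particle
    print("by particle ID :", by_id)      # the particle I actually want

So my underlying question is whether Phantom stores (or could store) such a persistent ID when MPI is enabled, or whether the index is the only handle available.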
If the indices necessarily change when MPI is enabled, then I would have to fall back to OpenMP only, but OpenMP alone is generally not very scalable. For context: I am studying the effects of stellar bars/spirals on the galactic gaseous disc. The bar drives a lot of gas into the galactic centre, forming a high-density ring there, which makes the code run quite slowly (a run takes > 1 month with ~5 million particles, using ISOTHERMAL, OpenMP, and individual timesteps; CPU: 32 × Intel Xeon Gold 6136; compiler: ifort 18). I am not sure whether there is a better way to make the code run faster, with or without MPI (e.g. how many threads per physical core give the best performance in Phantom?).
I would really appreciate it if someone could offer a few suggestions.
Best,
Zhi