Hello,
I am an undergraduate student working on a simulation of two colliding vortex rings, based on the Basilisk code found here:
https://basilisk.fr/sandbox/Antoonvh/two_rings.c

In my initial tests on a personal laptop, I used a reduced scale (200 particles and a maximum grid refinement level of 8) to match the computational resources available to me. Even with these settings, the simulation took approximately five hours to complete, which is significantly longer than the runtime of the demonstration video even though that run used a much higher resolution (10^5 particles, maxlevel 11).
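For concreteness, the reduced-scale settings correspond to changes along these lines (the identifiers below are illustrative and may not match the exact names used in two_rings.c):

    int maxlevel = 8;       /* maximum refinement level, down from 11 in the demonstration run */
    long n_particles = 200; /* number of tracer particles, down from ~1e5 */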
My goal is to run this simulation more efficiently. I now have access to my university's GPU cluster and would like guidance on how to leverage this resource.
Could you please advise on the best way to accelerate this code? Specifically:
GPU Acceleration: Is it possible to adapt this specific simulation (which uses an octree grid) for GPU computation? If so, what would be the recommended approach?
Parallel Computing: Would standard CPU-based parallelization (e.g., OpenMP) be a more straightforward path to performance gains with this code? (I have sketched my current understanding of the OpenMP compile step just after this list.)
Simplifications: Are there other optimizations or simplifications I could implement to reduce the computational cost without sacrificing the core physics?
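For the OpenMP question in particular, my current understanding from the Basilisk documentation is that shared-memory parallelism mostly comes down to a compiler flag and an environment variable, something along these lines (please correct me if I have the flags wrong):

    qcc -O2 -fopenmp two_rings.c -o two_rings -lm
    OMP_NUM_THREADS=8 ./two_rings

Would this alone be expected to give a useful speedup for an adaptive octree simulation like this one?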
Any advice or pointers to relevant documentation would be greatly appreciated.
Thank you for your time and assistance.