Ways to reduce Dedalus memory for 2D Kelvin-Helmholtz simulation
Matthew Hudes
Dec 21, 2025, 5:17:06 PM
to Dedalus Users
Hello everyone,
I have recently started using Dedalus v3 for simulating the 2D Kelvin-Helmholtz instability. I used the 2D shear flow example script as a starting point (https://github.com/DedalusProject/dedalus/tree/master/examples/ivp_2d_shear_flow). The main difference in my code is that I have changed a lot of the parameters and switched to the vorticity representation of Navier-Stokes, since this will be important for my application. I have attached an MWE here. The code works at lower resolutions, but I am attempting an 8192 by 8192 grid, which fails (out of memory) before the first time step completes. I am using a node with 48 cores and 187.5 GB of memory.
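For readers without the attachment, here is a minimal sketch of one way a vorticity-streamfunction setup can look in Dedalus v3. The field names, sign conventions, tau/gauge handling, and every parameter value below are illustrative assumptions, not taken from the actual MWE:

import numpy as np
import dedalus.public as d3

# Illustrative parameters (assumed; not the values from the attached MWE)
Nx, Ny = 8192, 8192
Lx, Ly = 1, 1
nu = 1e-5
dealias = 3/2

coords = d3.CartesianCoordinates('x', 'y')
dist = d3.Distributor(coords, dtype=np.float64)
xbasis = d3.RealFourier(coords['x'], size=Nx, bounds=(0, Lx), dealias=dealias)
ybasis = d3.RealFourier(coords['y'], size=Ny, bounds=(0, Ly), dealias=dealias)

om = dist.Field(name='om', bases=(xbasis, ybasis))    # vorticity
psi = dist.Field(name='psi', bases=(xbasis, ybasis))  # streamfunction
tau_psi = dist.Field(name='tau_psi')                  # fixes the k=0 gauge of psi

# Velocity from the streamfunction: u = (-dy(psi), dx(psi)), so om = lap(psi)
u = d3.skew(d3.grad(psi))

problem = d3.IVP([om, psi, tau_psi], namespace=locals())
problem.add_equation("dt(om) - nu*lap(om) = -u@grad(om)")
problem.add_equation("lap(psi) - om + tau_psi = 0")
problem.add_equation("integ(psi) = 0")

solver = problem.build_solver(d3.RK222)

(Initial conditions, snapshots, and the main loop are omitted.)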
I have tried a few things to reduce memory usage, like reducing the number of tasks I save in snapshots and not storing the absolute value of vorticity as a flow property. However, I am not surprised these had no effect, since the code runs out of memory before the first time step is complete. I did try moving the "nu*lap(om)" term to the RHS in an attempt to reduce the size of the LHS matrix, but this did not seem to solve the problem. I also tried the SBDF2 timestepper, but I think it is equivalent memory-wise to RK222.
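For concreteness, the two placements of the diffusion term look like this as Dedalus v3 equation strings (a sketch, not the actual MWE):

# Implicit diffusion: the IMEX timestepper treats LHS terms implicitly,
# which is stable for stiff diffusion but enlarges the implicit system:
problem.add_equation("dt(om) - nu*lap(om) = -u@grad(om)")

# Explicit diffusion: moving nu*lap(om) to the RHS shrinks the implicit
# system but adds an explicit diffusive timestep restriction:
# problem.add_equation("dt(om) = -u@grad(om) + nu*lap(om)")

One caveat (an assumption about the setup, not a confirmed diagnosis): for a doubly periodic Fourier-Fourier problem, the implicit matrices are block-diagonal with very small per-mode blocks, so moving terms off the LHS would not be expected to change the memory footprint much, which may be why this did not help.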
I have seen various formulas around for calculating memory needs, such as
25 * Ncc * F^2 * Nx * Ny * Nz * 8 * 10^-9 GB = 335 GB
in my case, but I am unsure whether this formula applies to Dedalus v3 and my setup. If so, what are possible strategies for reducing the memory requirements of the code?
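As a back-of-envelope check of that formula (every input below is an illustrative guess, not a value from the MWE):

# Quoted estimate: mem_GB = 25 * Ncc * F^2 * Nx * Ny * Nz * 8e-9
Nx, Ny, Nz = 8192, 8192, 1  # grid from the post (Nz = 1 for 2D, assumed)
F = 3/2                     # dealiasing factor (assumed)
Ncc = 11                    # number of coefficient fields (assumed)
mem_GB = 25 * Ncc * F**2 * Nx * Ny * Nz * 8e-9
print(f"{mem_GB:.0f} GB")   # ~332 GB, in the ballpark of the quoted 335 GB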
I would also be happy for any advice about proper utilization of Dedalus! I am also curious what people's thoughts are on the memory usage of Dedalus compared to other pseudo-spectral codes. Does it seem to be roughly equivalent, or perhaps more memory-intensive than other packages (like GHOST, for example: https://github.com/pmininni/GHOST/tree/master)?
Adrian Fraser
Dec 29, 2025, 11:53:13 AM
to Dedalus Users
Hi there,
I think your MWE failed to attach; mind re-sending it? I don't know if there's a magic bullet to solve this for your use case, but I'm happy to take a peek at your MWE.
Adrian
Matthew Hudes
Dec 29, 2025, 10:27:43 PM
to Dedalus Users
Hello!
I don't know why it didn't attach the first time, but I think it is attached now. I appreciate you taking a look - any advice would be welcome! This MWE is fairly simple, but I think it is representative of the core of what I am doing.
Adrian Fraser
to Dedalus Users
Hi Matthew,
Unfortunately, I don't see anything obvious for improving memory usage. There are changes you could make to your CFL that would make things run faster (mainly adding a threshold so it isn't changing dt unless necessary), but I don't see any of the usual culprits for memory usage. Sorry!
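For reference, the threshold suggestion above corresponds to something like this in Dedalus v3 (the velocity name u and all parameter values are illustrative, not from the MWE):

import dedalus.public as d3

# CFL with a threshold: dt is only updated when the target timestep differs
# from the current one by more than 10%, so the implicit solver matrices
# are not rebuilt every cadence.
CFL = d3.CFL(solver, initial_dt=max_timestep, cadence=10, safety=0.2,
             threshold=0.1, max_change=1.5, min_change=0.5,
             max_dt=max_timestep)
CFL.add_velocity(u)

Each change of dt forces Dedalus to rebuild its timestepping matrices, so the threshold mainly saves time rather than memory.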