Building a local computer for DNS of multiphase flows


Alperen Türkyılmaz

unread,
Jul 21, 2025, 9:17:25 AM
to basilisk-fr

Hello everyone,

I hope you're doing well. I am currently working on bubble breakup simulations that require a high level of mesh refinement. I have been using the Bridges-2 cluster with one or two nodes (each equipped with 128 processors), but as the refinement increases, the simulation time has grown significantly and overall performance has slowed down considerably.

Since I do not have access to a larger number of CPUs to benchmark my case, I am considering building my own computer, but I am uncertain about how many CPUs would be optimal for this type of simulation.

Have any of you built your own cluster for similar DNS simulations? If so, could you kindly share your experience and provide guidance on the number of processors that would be reasonable for such simulations, and on what I should pay attention to when building my own computer? I understand that the requirements vary depending on the specific case, but any insights from your experience would be extremely helpful.

I would greatly appreciate any advice or suggestions you can share.

Best,

Alperen

Wojciech Aniszewski

unread,
Jul 22, 2025, 5:59:40 AM
to Alperen Türkyılmaz, basilisk-fr
Dear Alperen,

I do not have experience building a larger supercomputer myself. However, I know that the (regional) supercomputer I use is made up of over a hundred AMD EPYC 9654 (Genoa) nodes with 768 GB of RAM each; from what I was able to find, they have an MSRP of over $15k per node. My recent simulations generally stop at around 1024 cores, maybe up to 2048 for extreme cases. That of course means 8-16 such nodes employed.

By the way, that reminds me: even though I have invested many work-hours into optimising these simulations (e.g. spatially restricting the refinement, manually ensuring the maximum refinement happens only in the 'regions of interest', limiting RAM usage, etc.), I still get OOM (out-of-memory) errors every now and again, meaning I should probably use more nodes in fact. Of course, in Basilisk the number of scalar fields you have initialized, as well as the I/O and pre/post-processing you do at runtime, will impact RAM usage.

When using regular uniform grids, we came up with the uber-simplistic estimate that each 32^3 cube should have one processor serving it (that is of course 32,768 grid points per process). That doesn't really translate directly to an octree AMR simulation with an advanced code such as Basilisk (it actually needs more), but it is a good rule of thumb to start estimations.

hope any of that helps
regards
Vôitek


--
/^..^\
( (••) ) Wojciech (Vôitek) ANISZEWSKI
(|)_._(|)~
GPG ID : AC66485E
Twitter : @echo_dancers3
Mastodon: @w...@fediscience.org
BlueSky : @aniszewski.bsky.social
Scholar : https://tinyurl.com/y28b8gfp
OrcId : https://orcid.org/0000-0002-4248-1194
RG : https://www.researchgate.net/profile/Wojciech_Aniszewski


Conor Olive

unread,
Jul 22, 2025, 9:40:58 AM
to basilisk-fr
If you wanted to build a single node with more than 2x128 cores, you would be looking at the top-spec Threadripper Pro or EPYC series CPUs. I would not recommend attempting to build a cluster from scratch unless you have previous experience configuring these types of systems or access to cheap electricity, as you will need to learn quite a lot of sysadmin skills, including those specific to HPC. As for what is optimal for Basilisk in particular, I am not sure.

Best,
Conor