Recommended PC setup for running FDS (CPU)

John S.

Jan 28, 2022, 11:29:22 AM
to FDS and Smokeview Discussions
Hi all,

I am purchasing a new workstation to run FDS simulations.  It is important that I can cut down on my computation times, and I would also like to run PyroSim and/or Blender.

Currently, I am using an Ubuntu Linux build with a Xeon E5-2630 v3 @ 2.40 GHz (8 cores) and 16 GB of RAM (probably DDR3).

I am running a small, simple simulation of a candle that I have been working to improve.  I have enabled 3D heat transfer and decreased the radiative angle increments, and my current cell size for the "candle" OBST and plume is 3.75 mm (7.5 mm in the outer meshes).  The domain is 4.5 by 4.5 by 10 cm, about 20,000 cells in total across 8 meshes, and the current iteration takes about 3 days 10 hours to run, with a "candle wick" OBST of 3.75 by 3.75 mm and a vent supplying mass at 0.019713 kg/m^2/s.

I would like to improve my computation times significantly and, ideally, decrease my cell size to 1 or 2 mm.  I would also like to run a Windows 10/11 PC instead of Linux.  Any suggestions on clock speeds, architecture, number of cores, and, if it matters, RAM capacity?

Thanks for any help!
----
Note:  I have also been exploring running FDS on AWS, following a tutorial from the AWS documentation (https://aws.amazon.com/blogs/compute/fire-dynamics-simulation-cfd-workflow-using-aws-parallelcluster-elastic-fabric-adapter-amazon-fsx-for-lustre-and-nice-dcv/), but it is tough to do all the setup scripting.  If that is the faster option (a 72-core c5n.18xlarge instance), then I might just do that.

John S.

Jan 31, 2022, 9:38:52 AM
to FDS and Smokeview Discussions
With 8 meshes, what is the best CPU?  More cores, or higher clock speed?

Can I base my evaluation of processors on CPU benchmarks?  Higher benchmark score, better FDS computation time?

Thanks!

Marcos Vanella

Jan 31, 2022, 10:38:44 AM
to fds...@googlegroups.com
In terms of estimating simulation cost, keep in mind that every time you halve the cell size in each direction (in 3D), the computing cost goes up by a factor of 16: 8 times more computational cells, and about half the time step due to the CFL constraint.
In parallel, you can reduce that wall-clock time by dividing the domain into more meshes and using more MPI processes (ideally with all meshes having the same number of cells). So, for your 8-mesh case that takes 3 days: if you halve the cell size, you would need to split the domain into 8*16 = 128 meshes and processes (i.e., a computer with 128 cores of the same type), and the run should take about the same time, maybe a bit more. This should give you a rough idea of what you need for a specific calculation.
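To put rough numbers on it, here is a back-of-the-envelope sketch in Python (it just restates the arithmetic above; the 8-mesh starting point is your candle case):

def refinement_cost(halvings):
    # Halving the cell size in 3D: 8x more cells, and roughly half the
    # time step because of the CFL constraint, so ~16x cost per halving.
    cells = 8 ** halvings
    steps = 2 ** halvings
    return cells * steps   # = 16 ** halvings

base_meshes = 8            # the current candle case
for h in (1, 2):
    factor = refinement_cost(h)
    print(f"{h} halving(s): ~{factor}x cost; "
          f"~{base_meshes * factor} meshes/MPI processes to keep wall time similar")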
My experience is that processors nowadays run about twice as fast in serial as their counterparts of 6-8 years ago.


John S.

Jan 31, 2022, 10:55:03 AM
to fds...@googlegroups.com
Thank you for the information.

One follow-up question:  do you think there is a computational gain from running fewer meshes (say 8) on more cores (say 16-24)?

I read in another, older discussion that the gains in computation time were smaller when going from 8 to 16 to 128 meshes than when going from, say, 1 to 8.

Thanks!

--
Best,
John T. Stebbins

John S.

Jan 31, 2022, 10:56:09 AM
to fds...@googlegroups.com
Edit:  I think what you said about running in serial sort of answers my question.

Marcos Vanella

Jan 31, 2022, 11:06:13 AM
to fds...@googlegroups.com
What you might have read is probably related to using OpenMP and MPI together. Either way, splitting the domain into more meshes and using more MPI processes is, in general, the way to get more parallel efficiency. Note that there is a point where adding meshes and MPI processes leads to diminishing returns: the cost of communicating mesh boundary information among processes starts to rival the cost of computing on the mesh. You can read more about this in the User Guide.
As a rule of thumb, you should get reasonable strong parallel scaling with MPI down to meshes of about 16^3 cells. As for RAM, you want at least 2 GB per core if possible.
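These rules of thumb are easy to script as a sanity check. A small Python sketch (the total cell count below is just the ~20,000-cell candle case with the cell size halved, i.e. about 8x more cells, so the numbers are only approximate):

MIN_CELLS_PER_MESH = 16 ** 3   # ~4096 cells: strong-scaling rule of thumb
RAM_PER_CORE_GB = 2            # plan for at least 2 GB of RAM per core

def check_decomposition(total_cells, n_meshes):
    # Flag decompositions whose meshes are so small that communication
    # among MPI processes starts to dominate the computation.
    per_mesh = total_cells // n_meshes
    verdict = "ok" if per_mesh >= MIN_CELLS_PER_MESH else "too small; communication may dominate"
    print(f"{n_meshes:4d} meshes: {per_mesh:6d} cells/mesh ({verdict}); "
          f"plan for >= {n_meshes * RAM_PER_CORE_GB} GB RAM total")

total_cells = 20_000 * 8   # candle case after one halving of the cell size
for n in (8, 16, 32, 64, 128):
    check_decomposition(total_cells, n)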

John S.

Jan 31, 2022, 11:27:15 AM
to FDS and Smokeview Discussions
Thanks, I will check the section of the User Guide on this.

One more follow-up question:  what weight should I give user benchmarks, such as the PassMark benchmark software?

It gives the i9-12900K (16 cores, 3.2 GHz base) a score of 40,258 vs. the Xeon Gold 6230R (26 cores, 2.10 GHz base) at 30,391.  The Xeon is $1,100 more expensive and usually sold in a workstation setup, whereas the i9 is aimed more at high-performance desktops.  The scores seem consistent with other benchmarks (Geekbench, PCMark).

John S.

Jan 31, 2022, 11:30:42 AM
to FDS and Smokeview Discussions
I was hoping to model the plume in its own mesh (currently 8 meshes), because the Smokeview output looks fractured where the plume is cut across meshes generated with MULT.  I haven't done many studies comparing thermocouple and heat release outputs.

This means my meshes won't all be the same size, and it won't be trivial to calculate IJK values for 16+ meshes, especially when changing things like OBST sizes and the overall domain size.
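If I do end up with a uniform split, I suppose the IJK bookkeeping could be scripted.  Here is a minimal Python sketch that prints one &MESH line per block of an evenly split box (the 2x2x2 split and 2.5 mm cell size are purely illustrative; the extents follow the 4.5 by 4.5 by 10 cm domain from my first post):

def mesh_lines(xb, splits, dx):
    # Split the box XB into nx*ny*nz equal blocks and emit one uniform
    # &MESH line per block, so the IJK arithmetic is done automatically.
    (x0, x1, y0, y1, z0, z1), (nx, ny, nz) = xb, splits
    lx, ly, lz = (x1 - x0) / nx, (y1 - y0) / ny, (z1 - z0) / nz
    ijk = (round(lx / dx), round(ly / dx), round(lz / dx))
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                blk = (x0 + i * lx, x0 + (i + 1) * lx,
                       y0 + j * ly, y0 + (j + 1) * ly,
                       z0 + k * lz, z0 + (k + 1) * lz)
                yield ("&MESH IJK=%d,%d,%d, "
                       "XB=%.4f,%.4f,%.4f,%.4f,%.4f,%.4f /" % (ijk + blk))

for line in mesh_lines((0.0, 0.045, 0.0, 0.045, 0.0, 0.10), (2, 2, 2), 0.0025):
    print(line)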

Kevin McGrattan

Jan 31, 2022, 11:30:51 AM
to fds...@googlegroups.com
I would consider Specmark, because they focus on high-performance computing and floating-point intensive apps.

John S.

Jan 31, 2022, 11:31:36 AM
to FDS and Smokeview Discussions
Thanks for your suggestions.  I realize these are a lot of questions.

John S.

Jan 31, 2022, 3:01:47 PM
to FDS and Smokeview Discussions
Any general recommendations about an i9-12900K (16 cores, 3.2 GHz base) vs. a Xeon Gold 6238R (28 cores, 2.20 GHz base) or similar?

John Van Workum

Feb 1, 2022, 8:43:46 AM
to FDS and Smokeview Discussions
If you want to run some benchmarks, we have different CPUs available in our HPC cloud. You are welcome to run some evaluation tests.

John S.

Feb 1, 2022, 9:03:33 AM
to FDS and Smokeview Discussions
Thanks, I am doing some research of my own on Xeon CPUs, as there seems to be a lot of info out there.  There are also a lot of hardware recommendations for OpenFOAM, ANSYS, and other CFD packages.  I don't know if it is apples to apples, but it seems like a good place to start for general info about CPUs.

It would be great to run simulations on your HPC cloud as a direct test.  Do you have several different configurations available?  Please let me know how to get started.

Thanks for all the replies.
