Peter_J
Oct 23, 2012, 3:43:46 AM
to fds...@googlegroups.com
Hi Luke,
I have run several large-cell cases (30-50 mln cells) over the last few months, and so far my conclusions are as follows:
1. When you are running a single-mesh case, you can assume: 1 mln cells per 1.0 GB of RAM.
2. When you are running large scenarios involving multiple parallel meshes: 1 mln cells per 2.15 GB of RAM (calculated from my largest case).
3. It is crucial to reserve some memory for your OS. This value can vary between 0.5 and 2 GB depending on the installed OS.
If you are building a model where the number of grid cells is close to your RAM capacity, you cannot be sure whether the solver will use your hard drive to dump temporary data from the calculation process (page file / virtual memory) or not. If it happens, you will notice a significant performance drop in your calculation.

So, if you run a calculation with a cell count close to your maximum RAM capacity and then expand your memory, you may observe some improvement in calculation speed, because the solver will stop using virtual memory. But further expansion of RAM won't make any difference. The discussion about the impact of the amount of RAM on calculation speed therefore reduces to removing this bottleneck. If your amount of memory is adequate, the main factors in calculation speed are the speed of your cores and, of course, the cells-per-core ratio.
It is good practice to evaluate the maximum number of cells according to the rules of thumb listed above before starting any calculations.
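That evaluation is easy to script. Here is a minimal sketch of the rules of thumb above (the per-million-cell figures of 1.0 GB and 2.15 GB and the 0.5-2 GB OS reserve come from this post; the function names are my own):

```python
# Rough RAM sizing for an FDS case, based on the rules of thumb above.

def required_ram_gb(million_cells, multi_mesh=False, os_reserve_gb=2.0):
    """Estimate total RAM (GB) needed for a case of `million_cells` million cells."""
    gb_per_mln = 2.15 if multi_mesh else 1.0   # rule 2 vs. rule 1
    return million_cells * gb_per_mln + os_reserve_gb

def max_million_cells(total_ram_gb, multi_mesh=False, os_reserve_gb=2.0):
    """Largest case (millions of cells) that should fit without paging."""
    gb_per_mln = 2.15 if multi_mesh else 1.0
    return max(0.0, (total_ram_gb - os_reserve_gb) / gb_per_mln)

# Example: a 40 mln-cell parallel case, and what fits in 64 GB of RAM.
print(required_ram_gb(40, multi_mesh=True))
print(max_million_cells(64, multi_mesh=True))
```

If the first number exceeds your installed RAM, expect the virtual-memory slowdown described above.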
In my opinion you should have twice as many GB of RAM as you have cores if you are going to run large scenarios.
I have a single calculation station, equipped with a 4-processor mainboard with 16-core OPTERON CPUs. That gives 64 cores on a single workstation. I have 64 GB of RAM, but according to my benchmarks this value must be multiplied by 2 in order to use the full potential of this machine. Next year I will run several benchmarks to study more deeply the problem of calculation performance versus memory used, the optimal number of cells per process, and other questions, and I will gladly share the results with the FDS community.
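The 2 GB-per-core guideline applied to that workstation looks like this (a sketch, with names of my own choosing; the 64 cores / 64 GB figures are from the description above):

```python
# Sanity check of the "2 GB of RAM per core" guideline for large scenarios.

def recommended_ram_gb(cores, gb_per_core=2):
    """RAM (GB) suggested for running large multi-mesh scenarios on `cores` cores."""
    return cores * gb_per_core

cores = 64          # 4 sockets x 16-core Opteron
installed_gb = 64
needed_gb = recommended_ram_gb(cores)
print(f"{cores} cores -> {needed_gb} GB recommended, {installed_gb} GB installed")
```

Which matches the conclusion above: 64 GB installed, but roughly double that to use the machine's full potential.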