We are trying to run FDS for a geometry with a large mesh of 60 million cells. Even after some grid coarsening, we may still be around 45-50 million cells.
The cluster we will use has the following configuration:
Nodes: 10
Cores/node: 4 (Intel quad-core)
RAM/node: 8 GB (2 GB/core)
OS: Linux, 64-bit
FDS version: 5.1.6 (Linux MPI version, 64-bit)
We got stuck at the very first stage: FDS is not able to read the large mesh. It seems to be using the RAM of only the main node (8 GB) and does not share the task with the other nodes.
Is the mesh reading process not parallelised in FDS? Is it limited by the RAM of the main node? If so, how does one read a large mesh (~50 million cells)?
I would appreciate feedback from users who have run large-mesh problems in FDS, and from the developers of the FDS parallel version. Thanks in advance.
On Feb 2, 8:11 am, shashank <borde.shash...@gmail.com> wrote:
I have run 18 million cells divided between 50 meshes and processed on
50 processors. The processors on our cluster have only 1 GB of RAM each.
To reiterate Jason's question: into how many meshes have you divided the 60 million cells? Are they divided evenly? How many cells are in the largest mesh?
Dave
--
Dave McGill
School of Fire Protection
Seneca College
1750 Finch Ave E.
Toronto, ON
M2J 2X5
I am running the case with 16 mesh blocks.
Each of them is around 3.5 million cells.
The largest is a mesh block with 4.5 million cells.
I assigned these mesh blocks keeping in mind: 1 million cells = 1 GB RAM.
But, as I said above, it uses the RAM of only the main node (8 GB) while reading the mesh, and does not share the task with the other nodes.
Your views please!
Thank You !
-Shashank
In your first post you indicate that there is 2 GB of RAM per core. As Jason has indicated above, a single GB can handle about 1 million cells, so each core can handle about 2 million cells. If each is being assigned 3.5 million cells, that is the source of your problem.
If you want to keep the same number of cells, then split each of the 3.5-million-cell meshes into 2 separate meshes, and split the 4.5-million-cell mesh into 3 separate meshes. (You also have to ensure that the mesh boundaries are not in an area of high activity.)
Dave
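For readers who want to script this, here is a minimal sketch (in Python) of how one might generate the split &MESH lines. The IJK and XB values are made-up placeholders, not the actual geometry from this thread:

# Minimal sketch: split one mesh into n equal pieces along x and
# print FDS &MESH lines. All IJK/XB numbers are illustrative only.
def split_mesh(ijk, xb, n):
    i, j, k = ijk
    x0, x1, y0, y1, z0, z1 = xb
    assert i % n == 0, "I cells must divide evenly among the pieces"
    dx = (x1 - x0) / n
    for p in range(n):
        xa, xb2 = x0 + p * dx, x0 + (p + 1) * dx
        print(f"&MESH IJK={i // n},{j},{k}, "
              f"XB={xa:.2f},{xb2:.2f},{y0:.2f},{y1:.2f},{z0:.2f},{z1:.2f} /")

# Example: a 350 x 100 x 100 mesh (3.5 million cells) split in two:
split_mesh((350, 100, 100), (0.0, 35.0, 0.0, 10.0, 0.0, 10.0), 2)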
Try to keep the mesh interfaces as small as possible.
I have experienced up to 3.5 GB per 1 million cells with multiple grids. Apply this to 4.5 million cells, and the memory need can be > 15 GB for one mesh.
The computer will distribute more than 2 GB per core if needed.
If possible, try keeping the number of cells equal in all meshes, and the memory need below 2 GB per core.
Jens
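As a quick sanity check before submitting a job, the per-mesh memory need can be estimated from these rules of thumb. A minimal sketch in Python, assuming the figures quoted in this thread (empirical estimates, not exact FDS numbers):

# Rough per-mesh memory check. GB_PER_MCELL is a rule of thumb from
# this thread: about 1.0 GB per million cells typical, up to 3.5 GB
# per million cells observed in the worst case with multiple grids.
GB_PER_MCELL = 3.5
RAM_PER_CORE_GB = 2.0

mesh_sizes_mcells = [3.5] * 15 + [4.5]  # the 16 meshes described above
for n, mcells in enumerate(mesh_sizes_mcells, start=1):
    need_gb = mcells * GB_PER_MCELL
    verdict = "ok" if need_gb <= RAM_PER_CORE_GB else "too big for one core"
    print(f"mesh {n:2d}: {mcells:.1f} Mcells -> ~{need_gb:.1f} GB ({verdict})")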
Now I am trying to split the mesh into blocks of around 1 million cells each.
I hope it will work!
Thank you, Jens!
It is working.
Thank you to all of you for your suggestions.
I am currently monitoring the run, but I am concerned about the time step it uses for the calculation.
The time step is around 7 milliseconds, and the run is taking a lot of time.
If anyone has ideas on how to increase the time step and reduce the overall simulation time, please let me know.
Thank You!
-Shashank
dt < min(dx/u, dy/v, dz/w)
Make an estimate of dt based on your smallest grid cell and largest
velocity component. Does it make sense?
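To make this concrete, here is a minimal sketch of the estimate in Python. The cell size and velocity values are illustrative guesses (chosen to land near the ~7 ms reported above), not numbers from the actual model:

# CFL-style estimate: dt < min(dx/u, dy/v, dz/w).
dx = dy = dz = 0.10          # smallest cell size [m] (illustrative)
u, v, w = 15.0, 5.0, 10.0    # largest expected velocities [m/s] (illustrative)

dt = min(dx / u, dy / v, dz / w)
print(f"dt < {dt * 1000:.1f} ms")   # -> dt < 6.7 ms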
Absolutely, it makes sense.
But I need some clarifications:
1. How do I predict the largest velocity component? (Should I do some trial runs to get the values of u, v, and w?)
2. If I make an estimate of dt and set it initially, then as the simulation proceeds FDS will change the time step value.
3. Since we are calculating the smallest dt value, can we lock the time step, i.e. not allow it to change?
4. Also, as it is the smallest time step, the flow won't march across 2-3 cells in a single time step (please correct me if I am wrong), and thus it won't affect the result.
If all of the above is true, then I think I will be able to change the time step.
Your views please!
-Shashank
2. Yes.
3 + 4. We strongly recommend against locking the time step to a fixed value except under special circumstances (generally various verification exercises). If your time step input is too large, you will have stability problems and your results may be meaningless. If your time step input is too small, then your run will take longer than it needs to. Either way there is no benefit to you.
If your simulation is taking too long, you have three options:
a) use more processors
b) use a larger grid size
c) see if you can find a more efficient meshing strategy (if you have large blocked-off regions, changing the meshing may allow you to reduce the number of grid cells)
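On option b), note that a larger grid size pays off twice: coarsening cuts the cell count cubically and also relaxes the CFL limit on dt. A minimal sketch of the idealized scaling (uniform coarsening assumed; real savings and the effect on accuracy depend on the model):

# Idealized scaling for uniform grid coarsening (illustrative only).
# Doubling the cell size cuts the cell count by 2**3 = 8 and roughly
# doubles the allowable CFL time step, so total work drops ~16x.
cells_fine = 50e6      # ~50 million cells, as in this thread
coarsen = 2.0          # factor by which the cell size grows

cells_coarse = cells_fine / coarsen**3
work_ratio = coarsen**3 * coarsen   # fewer cells times fewer time steps
print(f"{cells_coarse / 1e6:.1f} million cells, ~{work_ratio:.0f}x less work")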