Reading large meshes (~50-60 million cells) in FDS

shashank

Feb 2, 2010, 8:11:57 AM2/2/10
to FDS and Smokeview Discussions
hi,


We are trying to run FDS on a geometry with a large mesh of about 60
million cells. Even after some grid coarsening, we may still be around
45 to 50 million cells.

The cluster we will use has the following configuration:

Nodes: 10
Cores / node: 4 (Intel Quad core)
RAM / node: 8 GB (2 GB / core)

OS: Linux 64 bit
FDS version: 5.1.6 (Linux MPI version, 64 bit)

We got stuck at the first stage itself. FDS is not able to read the
large mesh. It seems that it is using the RAM of only the main node (8
GB), and does not share the task with other nodes.

Is the mesh reading process not parallelised in FDS? Is it limited by
the RAM of the main node? If so, how does one read a large mesh
(~50 million cells)?

I would appreciate feedback from users who have run large-mesh
problems in FDS, and from the developers of the FDS parallel version.
Thanks in advance.

Kevin

Feb 2, 2010, 8:17:47 AM2/2/10
to FDS and Smokeview Discussions
Each process must store some, but not all, information about the other
meshes. How much is case-dependent. I suggest that you coarsen your
grids until you can run the case, then monitor the RAM usage of each.
Is it really the case that the 8 GB of RAM is divided by 4, 2 GB per
core? If that is the case, you may not have enough RAM for the
individual processes.
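One lightweight way to do that monitoring on a Linux node is to read the `VmRSS` field from `/proc/<pid>/status` for each FDS process. A minimal sketch (this is not FDS functionality, and matching the right pids is left to the reader):

```python
def parse_vmrss_kb(status_text):
    """Return resident memory (VmRSS) in kB from /proc/<pid>/status contents."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # field format: "VmRSS:    2048 kB"
    return None  # field absent (e.g. kernel threads)

# Example usage on a node: loop over numeric entries in /proc, identify the
# FDS processes via /proc/<pid>/cmdline, and print their VmRSS values.
sample = "Name:\tfds\nVmRSS:\t  2048 kB\nThreads:\t1\n"
print(parse_vmrss_kb(sample))  # resident kB for this (sample) status text
```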

dr_jfloyd

Feb 2, 2010, 8:23:02 AM2/2/10
to FDS and Smokeview Discussions
FDS divides memory on a MESH line basis. Do you have only one MESH?
If so, then it will only be assigned to one processor.


David Mcgill

Feb 2, 2010, 8:50:53 AM2/2/10
to fds...@googlegroups.com
Hi Shashank,

I have run 18 million cells divided between 50 meshes and processed on
50 processors. The processors on our cluster have only 1 GB of RAM each.
To reiterate Jason's question, how many meshes do you have the 60
million cells divided into? Are they divided evenly? How many cells are
in the largest mesh?

Dave

--
Dave McGill
School of Fire Protection
Seneca College
1750 Finch Ave E.
Toronto, ON
M2J 2X5

416-491-5050, ext, 6186

dr_jfloyd

Feb 2, 2010, 8:59:51 AM2/2/10
to FDS and Smokeview Discussions
As a rule of thumb: 1 million cells = 1 GB of RAM.
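Put as arithmetic, the rule of thumb gives a quick per-process feasibility check (a sketch; the 3.5 million cell figure below is just an example):

```python
def est_ram_gb(cells_millions, gb_per_million_cells=1.0):
    """Rule-of-thumb memory estimate: roughly 1 GB of RAM per million cells."""
    return cells_millions * gb_per_million_cells

# A 3.5 million cell mesh needs ~3.5 GB, more than 2 GB per core allows:
print(est_ram_gb(3.5))        # -> 3.5
print(est_ram_gb(3.5) > 2.0)  # -> True
```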

shashank

Feb 2, 2010, 11:52:48 PM2/2/10
to FDS and Smokeview Discussions
Thank you all for your replies!

I am running the case with 16 mesh blocks, each around 3.5 million
cells; the largest block is about 4.5 million cells.

I assigned these mesh blocks keeping in mind: 1 million cells = 1 GB
RAM.

But, as I said above, it is using the RAM of only the main node
(8 GB) while reading the mesh, and does not share the task with other
nodes.

Your views please!
Thank You !
-Shashank

Dave McGill

Feb 3, 2010, 8:23:42 AM2/3/10
to FDS and Smokeview Discussions
Hi Shashank,

In your first post you indicate that there is 2 GB of RAM per core. As
Jason has indicated above, a single GB can handle about 1 million
cells, so each core can handle about 2 million cells. If each process
is being assigned 3.5 million cells, that is the source of your
problem. If you want to keep the same number of cells, then split each
of the 3.5 million cell meshes into 2 separate meshes, and split the
4.5 million cell mesh into 3 separate meshes. (You also have to ensure
that the mesh boundaries are not in an area of high activity.)
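As a sketch of such a split in FDS input syntax, a single mesh of about 3.6 million cells could become two MESH lines divided along x (all IJK and XB values below are illustrative, not taken from the original model):

```
Before (one 300 x 120 x 100 = 3.6 million cell mesh):
&MESH IJK=300,120,100, XB=0.0,30.0,0.0,12.0,0.0,10.0 /

After (two 1.8 million cell meshes sharing a plane at x=15):
&MESH IJK=150,120,100, XB= 0.0,15.0,0.0,12.0,0.0,10.0 /
&MESH IJK=150,120,100, XB=15.0,30.0,0.0,12.0,0.0,10.0 /
```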

Dave

Jens

Feb 4, 2010, 4:30:00 AM2/4/10
to FDS and Smokeview Discussions
The "1 GB per 1 million cells" rule of thumb is OK with 1 or even a
few meshes. With more meshes, the interfaces between the meshes have a
great effect on the memory utilization.

Try to keep the interfaces as small as possible.

I have experienced up to 3.5 GB per 1 million cells with multiple
grids. Apply this to 4.5 million cells, and the memory need can be
more than 15 GB for one mesh.
The operating system will allocate more than 2 GB per core if needed.
If possible, try keeping the number of cells equal in all meshes, and
each mesh below 2 GB per core.

Jens

shashank

Feb 4, 2010, 4:46:58 AM2/4/10
to FDS and Smokeview Discussions
Thank you guys for your replies!!

Now I am trying to split the mesh into blocks of around 1 million
cells each. I hope it works!

Thank You!

Jens

Feb 9, 2010, 4:06:15 AM2/9/10
to FDS and Smokeview Discussions
Hi Shashank - did it work out for you?

Regards, Jens

shashank

Feb 10, 2010, 1:15:24 AM2/10/10
to FDS and Smokeview Discussions
I am running that case now, and it is working.

Thank you all for your suggestions.

I am currently monitoring the run, but I am concerned about the time
step the calculation is taking. The time step is around 7
milliseconds, so the simulation is taking a lot of time.
If anyone has ideas on how to increase the time step and reduce the
overall simulation time, please let me know.

Thank You!
-Shashank

Kevin

Feb 10, 2010, 9:23:30 AM2/10/10
to FDS and Smokeview Discussions
The CFL constraint is

dt < min(dx/u,dy/v,dz/w)

Make an estimate of dt based on your smallest grid cell and largest
velocity component. Does it make sense?
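A quick sketch of that estimate (the 0.1 m grid spacing and the velocity components below are assumed values, not from the original case):

```python
def cfl_dt_bound(dx, dy, dz, u, v, w):
    """Upper bound on dt from the CFL constraint dt < min(dx/u, dy/v, dz/w)."""
    return min(dx / abs(u), dy / abs(v), dz / abs(w))

# 0.1 m cells and a peak velocity of ~10 m/s (e.g. a strong fire plume):
print(cfl_dt_bound(0.1, 0.1, 0.1, 10.0, 5.0, 10.0))  # -> 0.01 (seconds)
```

A bound of ~10 ms is consistent with the ~7 ms steps reported above, which suggests the small time step reflects the grid and flow rather than a setup error.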


shashank

Feb 12, 2010, 7:16:49 AM2/12/10
to FDS and Smokeview Discussions
Thanks Kevin!

Yes, it makes sense. But I need some clarifications:

1. How do I predict the largest velocity component? (Should I do some
trial runs to get the values of u, v and w?)

2. If I make an estimate of dt and set it initially, then as the
simulation proceeds it will change the time step value.

3. Since we are calculating the smallest dt value, can we lock the
time step so that FDS does not change it?

4. Also, as it is the smallest time step, the flow should not march
2-3 cells in a single time step (please correct me if I am wrong),
and thus it should not affect the result.

If all of the above is true, then I think I will be able to change the
time step.

Your views please!
-Shashank

dr_jfloyd

Feb 12, 2010, 8:33:02 AM2/12/10
to FDS and Smokeview Discussions
1. Kevin's statement was meant as a way for you to estimate if the
time step seemed reasonable for your simulation or if something was
wrong. If you do not have velocity slices, you can use any number of
correlations in the literature to estimate what velocities in a fire
plume or a ceiling jet will be. Use that to approximate what your
time step size would be. You can also rerun your simulation over the
time period where you would expect the largest velocities and put in
velocity slice files.

2. Yes

3.+4. We strongly recommend against locking the time step to a fixed
value except under special circumstances (generally various
verification exercises). If your time step input is too large, you
will have stability problems and your results may be meaningless. If
your time step input is too small, then your run will take longer than
it needs to. Either way there is no benefit to you.

If your simulation is taking too long you have three options:
a) use more processors
b) use a larger grid size
c) see if you can find a more efficient meshing strategy (if you have
large blocked off regions changing meshing may allow you to reduce the
number of grid cells)
