Launching FDS jobs on Linux - command prompts


John S.

Sep 30, 2021, 4:48:32 PM
to FDS and Smokeview Discussions
I am running the latest version of FDS 6 on Ubuntu Linux, on a single machine with an 8-core, 16-logical-CPU Xeon processor.

I noticed with the latest version of FDS, when I run using the command:

fds filename.fds

The program only uses 4 of 16 logical processors at a time.  I can see it using 100% of four processors and jumping between logical processors.  I also noticed that on my Windows PC with an 11th-gen i5, the default fds command only uses about 40% of the total processor capacity.

I thought that you could ask FDS to use a specific number of cores in Windows with the command:

fds_local -p 8 -o 2 filename.fds

I was reading the documentation, and I don't see many instructions for Linux.  I see you can use Slurm to specify which processors to use, but that seems to be for Linux clusters and is a bit more complicated than I'd like to get.


My question:

Is there a way on Linux to ask FDS 6 to use all, or most, of the processors?


Thank you!

Kevin

Oct 1, 2021, 5:58:05 PM
to FDS and Smokeview Discussions
Try this

mpiexec -n 8 fds my_8_mesh_job.fds


John S.

Oct 4, 2021, 12:23:25 PM
to FDS and Smokeview Discussions
Thanks!  That is the right command.

Do you know if there is a way to specify the number of processors?  In the Windows version, you use "-o 2" to specify 8x2=16.

I should have specified: I want to use 1 mesh with 8 or 16 processors.  Is that possible?

Kevin McGrattan

Oct 4, 2021, 12:37:01 PM
to fds...@googlegroups.com
export OMP_NUM_THREADS=8
mpiexec -n 1 fds my_1_mesh_job.fds

John S.

Oct 5, 2021, 9:59:08 AM
to FDS and Smokeview Discussions
Thank you, that did the trick.

It didn't really speed up my computation time, though, as I'd hoped.  Didn't change it at all actually.


Any tips on how to speed up computation time?

Kevin McGrattan

Oct 5, 2021, 10:07:39 AM
to fds...@googlegroups.com
At best, doing what you are doing will speed the case by a factor of 2. At best. At worst, nothing. Depends on your hardware, hyperthreading, and all that. I rarely use the OpenMP functionality. It is best to divide your case into multiple meshes and just run

mpiexec -n 8 fds <my_8_mesh_case.fds>

o...@aquacoustics.biz

Oct 6, 2021, 5:11:44 AM
to FDS and Smokeview Discussions
If your processor has just 8 physical cores, then allocating all of them to MPI ends up with over-subscription, because the OS (and maybe other things) is a concurrent demand on at least one core.  Try running your model for a short time with mpiexec -n X fds my_8_mesh_job.fds, where X ranges from 2 to 7, and compare the performance.  And turn hyperthreading off through your BIOS (usually Del or a function key, F1, F2, F10 or F12, during boot).

John S.

Oct 8, 2021, 2:22:22 PM
to FDS and Smokeview Discussions
I had a little trouble breaking up my mesh into multiple meshes in the past.


Can you give an example of how to break up the following mesh?

&MESH IJK=30,30,30, XB=-.075,.075,-.075,.075,0,.15 /

Kevin McGrattan

Oct 8, 2021, 2:55:56 PM
to fds...@googlegroups.com
Divide everything by 2 if you want 8 meshes.
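Kevin's "divide everything by 2" rule can be applied mechanically. A minimal Python sketch (the helper name is made up; this is not an FDS utility) that halves IJK and each XB interval and prints the eight &MESH lines:

```python
# Split one FDS mesh into 8 equal octants by halving IJK and each
# coordinate range (hypothetical helper, not part of FDS).
def split_into_octants(ijk, xb):
    i, j, k = (n // 2 for n in ijk)
    x0, x1, y0, y1, z0, z1 = xb
    xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    meshes = []
    for za, zb in ((z0, zm), (zm, z1)):
        for ya, yb in ((y0, ym), (ym, y1)):
            for xa, xb2 in ((x0, xm), (xm, x1)):
                meshes.append(((i, j, k), (xa, xb2, ya, yb, za, zb)))
    return meshes

# The 30x30x30 candle mesh from this thread:
for ijk, xb in split_into_octants((30, 30, 30), (-.075, .075, -.075, .075, 0, .15)):
    print("&MESH IJK=%d,%d,%d, XB=%g,%g,%g,%g,%g,%g /" % (ijk + xb))
```

Each of the eight meshes keeps the original 5 mm cell size, so the total cell count is unchanged.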

John S.

Oct 8, 2021, 5:19:13 PM
to FDS and Smokeview Discussions
Or rather:

&MESH IJK=15,15,15, XB=-.075,0.0,-.075,0.0,0,.075 /
&MESH IJK=15,15,15, XB=0.005,.75,0.005,.75,0.08,.15 /
On Friday, October 8, 2021 at 5:17:27 PM UTC-4 John S. wrote:
So, for example:


&MESH IJK=15,15,15, XB=-.075,0.0,-.075,0.0,0,.075 /
&MESH IJK=15,15,15, XB=0.0,.75,0.0,.75,0.75,.15 /


Would be two meshes of the same mesh size as the original?

And if you had 8 meshes you would have IJK=3.75,3.75,3.75
and have 8 quadrants of 3.75 in x,y,z directions?

Kevin

Oct 8, 2021, 5:25:33 PM
to FDS and Smokeview Discussions
You seem to be making this harder than it is.

&MESH IJK=15,15,15, XB=-.075,0.0,-.075,0.0,0,.075, MULT_ID='mesh' /
&MULT ID='mesh', DX=0.075, DY=0.075, DZ=0.075, I_UPPER=1, J_UPPER=1, K_UPPER=1 /
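The MULT line stamps out shifted copies of the base mesh: one copy at every offset (i*DX, j*DY, k*DZ) for i up to I_UPPER, and so on. A small sketch of that expansion (semantics inferred from the example above; see the FDS User's Guide for the authoritative description):

```python
# Expand a base mesh XB by MULT-style offsets (illustrative sketch,
# not FDS code): one copy per (i, j, k) index combination.
def expand_mult(xb, dx=0.0, dy=0.0, dz=0.0, i_upper=0, j_upper=0, k_upper=0):
    x0, x1, y0, y1, z0, z1 = xb
    copies = []
    for k in range(k_upper + 1):
        for j in range(j_upper + 1):
            for i in range(i_upper + 1):
                copies.append((x0 + i * dx, x1 + i * dx,
                               y0 + j * dy, y1 + j * dy,
                               z0 + k * dz, z1 + k * dz))
    return copies

# Kevin's 8-mesh example: 2 copies in each direction.
meshes = expand_mult((-.075, 0.0, -.075, 0.0, 0.0, .075),
                     dx=0.075, dy=0.075, dz=0.075,
                     i_upper=1, j_upper=1, k_upper=1)
print(len(meshes))  # 8
```

Together the eight copies tile the original domain XB=-.075,.075,-.075,.075,0,.15.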

John S.

Oct 8, 2021, 7:37:30 PM
to FDS and Smokeview Discussions
I didn't know you could use MULT on mesh.  I will try this this weekend.  Thank you!

John S.

Oct 11, 2021, 8:39:31 AM
to FDS and Smokeview Discussions
The process seems to run, but before it starts counting the time step it throws a screen full of lines with this error:

WARNING: AREA_ADJUST not applied to reoriented SHAPE or SHAPE that spans multiple MESHES.


I think it may be the cylinder obstruction I drew with MULT in the center of the meshes.  I used the framework from the FDS User's Guide to draw it.  After I cut the simulation off early, I opened the SMV file and found that the cylinder and all other obstructions looked OK.  I may run a shortened simulation to see whether the temperatures and everything come out OK compared to an earlier run.

Any clues as to this Error message?

Kevin McGrattan

Oct 11, 2021, 8:59:25 AM
to fds...@googlegroups.com
It is not an ERROR but rather a WARNING. It means that a specified HRRPUA or similar will not be adjusted because of the fact that the cylinder is not perfectly aligned with the mesh. That is, the HRR is not HRRPUA times the ideal area of the circular cylinder, but rather the HRRPUA times the area of the Lego-block cylinder.
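The size of that effect is easy to estimate. A rough illustration (not FDS's internal algorithm): compare the ideal area of a circular burner with the stair-stepped "Lego-block" area you get when the circle is snapped to a uniform grid, for a hypothetical 3 cm burner radius:

```python
import math

# Rough illustration (not FDS's internal algorithm): area of a circular
# burner rasterized onto a uniform grid vs. the ideal disk area.
def rasterized_disk_area(radius, dx):
    # Count grid cells whose centers fall inside the circle.
    n = int(math.ceil(radius / dx)) + 1
    cells = 0
    for i in range(-n, n):
        for j in range(-n, n):
            xc, yc = (i + 0.5) * dx, (j + 0.5) * dx
            if xc * xc + yc * yc <= radius * radius:
                cells += 1
    return cells * dx * dx

r = 0.03                      # 3 cm radius (hypothetical)
ideal = math.pi * r * r
for dx in (0.01, 0.005, 0.0025):
    print(f"dx={dx}: blocked/ideal area ratio = {rasterized_disk_area(r, dx)/ideal:.3f}")
```

The ratio converges toward 1 as the grid is refined, which is why the mismatch Kevin describes matters most on coarse meshes.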

John S.

Oct 11, 2021, 9:13:45 AM
to fds...@googlegroups.com
Ok.  That makes sense.  Thank you.

I will test it to see if it affects results, and maybe read more into it if I can.


--
You received this message because you are subscribed to a topic in the Google Groups "FDS and Smokeview Discussions" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/fds-smv/7xD9WgqG54E/unsubscribe.
To unsubscribe from this group and all its topics, send an email to fds-smv+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/fds-smv/CAAJimDE8n5xK%3D%2Bn5GtLgvxavWMMYUK0xv2RR3JvU_-oXBNG3gA%40mail.gmail.com.
--
Best,
John T. Stebbins

John S.

Oct 12, 2021, 5:56:56 PM
to FDS and Smokeview Discussions

I tried the eight-mesh code.  It cut the computation time on my Linux 16-core Xeon system in half!  35 hours instead of 69, running 8x2=16 threads.  Thanks for all the help!


Another thing, I want to understand how to divide up the meshes a little better.  

Can you give me an example of the same computational grid divided into 12 meshes?  Is that possible?  I'm asking because I have a Windows system that has 12 logical cores, so I want to see how fast I can run FDS on max CPU usage on that setup.

Kevin McGrattan

Oct 13, 2021, 9:11:18 AM
to fds...@googlegroups.com
How did you run the case? How many MPI processes and how many OpenMP threads?

John S.

Oct 13, 2021, 9:23:28 AM
to fds...@googlegroups.com
I ran it with 8 MPI processes and OMP set to 2 on a 16 thread processor.


Kevin McGrattan

Oct 13, 2021, 9:25:06 AM
to fds...@googlegroups.com
When you say you cut the time in half, what are you comparing it with? A single MPI process with one OpenMP thread?

John S.

Oct 13, 2021, 9:29:42 AM
to fds...@googlegroups.com
Yes, single MPI process with one OpenMP thread.


Kevin McGrattan

Oct 13, 2021, 9:44:44 AM
to fds...@googlegroups.com
A factor of 2 speed up in your case is not good. It should be much better. Are your meshes the same size? Do you have sprinklers or particles that might require extra work in one particular mesh? Issue the command "lscpu" and report what you have. For example, on my linux cluster:

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    1
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
Stepping:              4
CPU MHz:               1209.863
CPU max MHz:           3100.0000
CPU min MHz:           1200.0000
BogoMIPS:              5200.26
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5
NUMA node1 CPU(s):     6-11



John S.

Oct 13, 2021, 9:54:09 AM
to fds...@googlegroups.com
The only thing I can think of is I have the radiative angles turned to 2.  I do not have any sprinklers or particles, just basic soot and CO yield.  Unfortunately, I am not in the location of my Linux cluster today, so I will have to check this tomorrow AM.


John S.

Oct 13, 2021, 10:12:02 AM
to FDS and Smokeview Discussions
I also have 3D Heat Transfer turned on across all OBST.

John S.

Oct 13, 2021, 10:15:31 AM
to FDS and Smokeview Discussions
It was also suggested to turn off hyperthreading and run just 7 MPI processes by 1 OMP thread so that no cores were oversubscribed.

Kevin McGrattan

Oct 13, 2021, 10:41:50 AM
to fds...@googlegroups.com
We do not use hyperthreading. Why 7 MPI processes?

I would suggest that you turn off hyperthreading and run 8 meshes with 8 MPI processes, one per core. Set OpenMP threads to 1.

John S.

Oct 13, 2021, 10:57:09 AM
to FDS and Smokeview Discussions
Is hyperthreading automatically enabled?  I thought that it was because I have 16 logical cores that the OS uses automatically.  How do I turn hyperthreading off?  I looked at a couple of websites but they seem a bit risky.  If you have an easy solution, I'd like to hear it.

7 MPI processes with 7 meshes was suggested because 1 core would be operating the OS and the job would only go as fast as the slowest core.

I will complete this simulation with 8x2 then try to run 8x1 and compare the difference.

Kevin McGrattan

Oct 13, 2021, 11:39:55 AM
to fds...@googlegroups.com
I do not know how to turn off hyperthreading. I think you need root privilege and you need to access the BIOS. Get a system administrator to help. 

As for your job, if you have eight meshes, run with 8 MPI processes. I do not understand what you mean by 1 core operating the OS. Are you saying you want to use the computer for other things? 

Glenn Forney

Oct 13, 2021, 11:42:35 AM
to fds...@googlegroups.com
Hyperthreading is a BIOS setting.  To turn it off you need to reboot your computer.  As it comes back up, look closely at the output for something like "Press F2 to change settings" (it may be F2, F10, Del, etc.).  This will take you into the BIOS settings.  Look for a setting for hyperthreading, disable it, then save the settings and resume the boot.  We turn off hyperthreading on all of the computers we use to run FDS; we have not found it to be of benefit for our cases.




--
Glenn Forney

John S.

Oct 13, 2021, 11:55:23 AM
to FDS and Smokeview Discussions
Kevin,

I think the idea is that one of the cores will be running the OS in the background, or any other processes that run while FDS is running.


Glenn,

Thanks, I think I saw a tutorial on how to do that.  I will try it tomorrow and see if it speeds up the code.

Glenn Forney

Oct 13, 2021, 12:02:21 PM
to fds...@googlegroups.com
On my work laptop, a Dell, I had to press F12 for boot options.  This brought up a screen that let me select BIOS settings.  Under BIOS settings I selected Performance settings.

My point was that a PC with 4 real cores and hyperthreading turned on (8 logical cores) will not be as good as a PC with 8 real cores.



--
Glenn Forney

Kevin McGrattan

Oct 13, 2021, 12:03:10 PM
to fds...@googlegroups.com
If you run 8 meshes with 7 cores, you have completely defeated the purpose of running in parallel. 8 meshes, 8 cores.

John S.

Oct 13, 2021, 12:11:39 PM
to FDS and Smokeview Discussions
Got it.  I will turn off hyperthreading and run 8x1 and let you know the results.

Tim O'Brien

Oct 13, 2021, 11:01:50 PM
to fds...@googlegroups.com

Dear John,

 

Yes, your model is inherently scalable under MPI, with the expected reductions in computational time.  Here are the results for a 10 second simulation, all with 1 OMP thread:

1 MPI    543.5 s
2 MPI    378.3 s
4 MPI    180.3 s
6 MPI    144.9 s
8 MPI    101.9 s
9 MPI     92.3 s

And here are the corresponding mesh designations:

Candle1

&MESH IJK=30,30,30, XB=-.075,.075,-.075,.075,0,0.15 /

Candle2

&MESH IJK=30,30,15, XB=-.075,.075,-.075,.075,0,0.075 /
&MESH IJK=30,30,15, XB=-.075,.075,-.075,.075,0.075,.15 /

Candle4

&MESH IJK=30,15,15, XB=-0.075,0.075,-0.075,0.000,0.000,0.075, MULT_ID='mesh' /
&MULT ID='mesh', DY=0.075, DZ=0.075, J_UPPER=1, K_UPPER=1 /

Candle6

&MESH IJK=30,10,15, XB=-0.075,0.075,-0.075,-0.025,0.000,0.075, MULT_ID='mesh' /
&MULT ID='mesh', DY=0.050, DZ=0.075, J_UPPER=2, K_UPPER=1 /

Candle8

&MESH IJK=15,15,15, XB=-0.075,0.000,-0.075,0.000,0.000,0.075, MULT_ID='mesh' /
&MULT ID='mesh', DX=0.075, DY=0.075, DZ=0.075, I_UPPER=1, J_UPPER=1, K_UPPER=1 /

Candle9

&MESH IJK=10,10,30, XB=-0.075,-0.025,-0.075,-0.025,0.000,0.150, MULT_ID='mesh' /
&MULT ID='mesh', DX=0.050, DY=0.050, I_UPPER=2, J_UPPER=2 /

I could keep adding MPI resources, but you can see that the returns on additional CPUs are diminishing.

Remember, when you run your model, to explicitly set 1 OMP thread; otherwise the default (4) will be applied and you will again be over-subscribed.

t.
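A quick way to read those numbers is speedup and parallel efficiency relative to the 1-MPI run (timings copied from the table above):

```python
# Speedup T1/Tn and parallel efficiency (speedup/n) for Tim's timings.
timings = {1: 543.5, 2: 378.3, 4: 180.3, 6: 144.9, 8: 101.9, 9: 92.3}
t1 = timings[1]
for n, t in sorted(timings.items()):
    speedup = t1 / t
    print(f"{n} MPI: speedup {speedup:.2f}x, efficiency {speedup / n:.0%}")
```

Efficiency drops from about 72% at 2 processes to about 65% at 9, which is the diminishing return Tim mentions.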


John S.

Oct 14, 2021, 10:31:07 AM
to FDS and Smokeview Discussions
The time improved from 69 hours to 40 hours on the 8 MPI 2 OMP run of my code.

I started the 8x1 code and it looks like it is running even faster.  Could be done at 29 hours.

However, the temperatures at my thermocouples were 4 degrees higher in the 8-MPI run, and the temperature-vs-time curve was very different.


I guess back to the drawing board.  I think in the paper NIST wrote about modeling candle flames, they had one mesh for the candle, i.e. all obstructions and vents, and other, larger meshes for the rest of the computational domain.  I might try that.



Thanks for all the help, folks.

John S.

Oct 14, 2021, 10:35:21 AM
to FDS and Smokeview Discussions
On a side note, I don't know how this system is set up, but when I went into the BIOS and disabled hyperthreading, it caused havoc.  Startup was slow and buggy, and the System Monitor reported only 2 CPUs active, spiking like crazy.  I ran lscpu:

$ lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual
CPU(s):                          2
On-line CPU(s) list:             0,1
Thread(s) per core:              2
Core(s) per socket:              1
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           63
Model name:                      Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping:                        2
CPU MHz:                         1356.911
CPU max MHz:                     3200.0000
CPU min MHz:                     1200.0000
BogoMIPS:                        4789.24
L1d cache:                       32 KiB
L1i cache:                       32 KiB
L2 cache:                        256 KiB
L3 cache:                        20 MiB
NUMA node0 CPU(s):               0,1

Kevin McGrattan

Oct 14, 2021, 10:41:04 AM
to fds...@googlegroups.com
I suggest you restore the BIOS settings and get some help. 

John S.

Oct 14, 2021, 10:43:25 AM
to fds...@googlegroups.com
I restored the BIOS to factory.  It’s fine now.


John S.

Oct 14, 2021, 11:03:48 AM
to fds...@googlegroups.com
There's not much help I can get from IT or anyone in my department when it comes to Linux and technical computing.  I think that while I was in the BIOS I may have flipped a switch that turned the number of cores per processor from two to one.  I can try disabling hyperthreading as instructed and seeing if that leaves me with 8 cores, or resetting the defaults and turning cores per processor from 2 to 1 without switching off hyperthreading, instead of changing both at once.  If neither of those works, maybe I need BIOS instructions specific to HP, and maybe their level 3 or 4 tech support can help.

Honestly, if I improve from 70 to 29 hours that’s pretty good for me.

Kevin McGrattan

Oct 14, 2021, 11:21:39 AM
to fds...@googlegroups.com
What does the lscpu command say now?

John S.

Oct 14, 2021, 11:22:08 AM
to fds...@googlegroups.com
But as I said, it looks like the 8 meshes messed with the computation and results.

Kevin McGrattan

Oct 14, 2021, 11:30:20 AM
to fds...@googlegroups.com
Try this -- run your 8 mesh job using 8 MPI processes. As the job is running, open up another shell and issue the 'top' command. Then hit 1, and you should see a breakdown of the activity on each CPU/core. Maybe it does not matter if the hyperthreading is on or off as long as the 8 MPI processes are assigned to the 8 cores.

John S.

Oct 14, 2021, 11:41:18 AM
to fds...@googlegroups.com
Will try and let you know.

As I said above, I think the meshes chopping the plume and OBST and vent are messing with the results.  I think I am going to try to run 1 mesh for the entire OBST, one for the vents and plume and then all the other meshes on the empty grid.

This surely won’t be as fast, but maybe I can knock off some computational time.


John S.

Oct 14, 2021, 1:30:53 PM
to FDS and Smokeview Discussions
It looks like the first 8 CPUs are running between 50-100%, the 9th at 40-50%, the 10th and 11th at 10-15%, and the 12th through 16th at less than 2%.

$ lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual
CPU(s):                          16
On-line CPU(s) list:             0-15
Thread(s) per core:              2
Core(s) per socket:              8
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           63
Model name:                      Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping:                        2
CPU MHz:                         2542.554
CPU max MHz:                     3200.0000
CPU min MHz:                     1200.0000
BogoMIPS:                        4788.98
L1d cache:                       256 KiB
L1i cache:                       256 KiB
L2 cache:                        2 MiB
L3 cache:                        20 MiB
NUMA node0 CPU(s):               0-15

Kevin McGrattan

Oct 14, 2021, 2:06:54 PM
to fds...@googlegroups.com
Try this

mpiexec -n 8 -genv KMP_AFFINITY=scatter <fds.exe> <jobname.fds>

I don't know how the scheduler assigns the MPI processes on your computer. The KMP_AFFINITY parameter above tells the scheduler to scatter the processes among the cores and threads. See if this changes anything.

John S.

Feb 4, 2022, 5:20:29 PM
to FDS and Smokeview Discussions
Hi all,

I was able to use the MULT namelist effectively on Windows to increase my meshes to 8 and 16.  However, I am unable to get the job to start on Linux.  When I use this command:

mpiexec -n 16 fds <fds16meshjob>.fds

I get the following error:

ERROR: Number of meshes exceeds number of MPI processes. Set MPI_PROCESS on each MESH line so that each MESH is assigned to a specific MPI process (CHID: CandleFigueComputeTest1_5)

ERROR: FDS was improperly set-up - FDS stopped


This is with OMP_NUM_THREADS=1.  This is what my mesh looks like:

&MESH IJK=8,8,8, XB=-.045,-.015,-.045,-0.015,0,.03, MULT_ID='mesh' /
&MULT ID='mesh', DX=0.03, DY=0.03, DZ=0.03, I_UPPER=2, J_UPPER=2, K_UPPER=2 /

The same mesh works fine on Windows with the command: fds_local -p 16 -o 1 <fds16meshjob>.fds
How can I set MPI_PROCESS on each MESH line when the input file doesn't list each mesh on its own line?

Any help?

Thanks!

Kevin McGrattan

Feb 4, 2022, 5:59:24 PM
to fds...@googlegroups.com
I created this error recently because I noticed that many people were running jobs with more meshes than MPI processes, not realizing that FDS will automatically group all the excess meshes onto a single MPI process. If you use the MULT feature as you have done, you must use the same number of MPI processes as meshes.

John S.

Feb 4, 2022, 6:24:15 PM
to fds...@googlegroups.com
Ok.  But I thought my mesh was 16 meshes.  Windows seems to think so, am I wrong?


Glenn Forney

Feb 4, 2022, 8:44:10 PM
to FDS and Smokeview Discussions

Your &MULT line creates 27 (3*3*3) meshes, not 16.
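In general, a single MULT line generates (I_UPPER+1) x (J_UPPER+1) x (K_UPPER+1) meshes, since each index runs from 0 through its upper bound. A one-line check (hypothetical helper, just mirroring that formula):

```python
# Number of meshes a single &MULT line generates: each of the I, J, K
# indices runs from 0 through its *_UPPER value inclusive.
def mult_mesh_count(i_upper=0, j_upper=0, k_upper=0):
    return (i_upper + 1) * (j_upper + 1) * (k_upper + 1)

print(mult_mesh_count(2, 2, 2))  # 27 -- the line above
print(mult_mesh_count(1, 1, 3))  # 16 -- one way to get 16 (2 x 2 x 4)
```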

John S.

Feb 4, 2022, 9:12:43 PM
to fds...@googlegroups.com
Thanks for the feedback.  I think I am confused about how to create meshes with MULT.  I used Kevin's example of 8 meshes, and I just doubled I_UPPER, J_UPPER, etc., and matched DX to the size of the repeating XB layout.

Can you give me an example of an 8 mesh and a 16 mesh with the same grid size so I can make sense out of the calculation?

I’ve been over the Guide and trial and error based on this thread but I can’t seem to get it.  I was able to successfully draw a cylinder with MULT a while ago but this I can’t get.

Thanks for your help

John S.

Feb 4, 2022, 9:29:10 PM
to fds...@googlegroups.com
I think I figured it out.  I_UPPER=0 would make 1x1x1, so I_UPPER=1 makes 2x2x2=8.  You can only make cubes: 3x3x3 would be 27, 4x4x4 would be 64, etc.

I would need two adjacent MULT lines of 8 meshes? Correct?  Match up the beginning to end like writing the meshes line by line.

Glenn Forney

Feb 4, 2022, 9:48:39 PM
to fds...@googlegroups.com
Almost.  I_UPPER, J_UPPER and K_UPPER can have different values, so I_UPPER=1, J_UPPER=1, K_UPPER=3 would get you 2*2*4=16.  You would have to adjust your cell size and number of cells to be what you want.



--
Glenn Forney

John S.

Feb 4, 2022, 9:55:33 PM
to fds...@googlegroups.com
I think I get it.  Thank you.

Kevin McGrattan

Feb 5, 2022, 9:44:12 AM
to fds...@googlegroups.com
I did it this way for the C programmers out there.