FDS 6.6 running using MPI_PROCESS


gsztar

Nov 3, 2017, 8:16:37 AM
to FDS and Smokeview Discussions
Hi,
I would like to run my FDS input file on 8 cores using MPI, but I received the following warnings:

 Mesh 1 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 2 and only one MPI process exists
 Mesh 2 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 3 and only one MPI process exists
 Mesh 3 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 4 and only one MPI process exists
 Mesh 4 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 5 and only one MPI process exists
 Mesh 5 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 6 and only one MPI process exists
 Mesh 6 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 7 and only one MPI process exists
 Mesh 7 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 8 and only one MPI process exists
 Mesh 8 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 9 and only one MPI process exists
 Mesh 9 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 10 and only one MPI process exists
 Mesh 10 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 11 and only one MPI process exists
 Mesh 11 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 12 and only one MPI process exists
 Mesh 12 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 13 and only one MPI process exists
 Mesh 13 is assigned to MPI Process 0
WARNING: MPI_PROCESS set for MESH 14 and only one MPI process exists
 Mesh 14 is assigned to MPI Process 0

[this block is printed once per duplicate FDS instance; the interleaved repeats are trimmed]
 
Could you please help me?

Salah Benkorichi

Nov 3, 2017, 8:19:19 AM
to fds...@googlegroups.com
I see,
This happens when you set MPI_PROCESS but then run the job in serial rather than in parallel.
Upload your file with the meshes only, so it can be checked.
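The difference is only in how the job is launched. A minimal sketch, where `job.fds` is a placeholder for your input file and 8 matches the highest MPI_PROCESS value plus one:

```shell
# Serial launch: only a single MPI rank exists, so every mesh falls back
# to rank 0 and each MPI_PROCESS line in the input prints the warning above.
serial="fds job.fds"

# Parallel launch: mpiexec starts 8 ranks, one for each MPI_PROCESS
# value 0..7, and the warnings disappear.
parallel="mpiexec -np 8 fds job.fds"

printf '%s\n%s\n' "$serial" "$parallel"
```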

--
You received this message because you are subscribed to the Google Groups "FDS and Smokeview Discussions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fds-smv+unsubscribe@googlegroups.com.
To post to this group, send email to fds...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/fds-smv/dc9458fc-c7ef-4450-b6c4-5e367afe9a3c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

gsztar

Nov 3, 2017, 8:21:23 AM
to FDS and Smokeview Discussions
Continuing the output:

 Completed Initialization Step  1
 Completed Initialization Step  2
 Completed Initialization Step  3
 Completed Initialization Step  4

 Fire Dynamics Simulator

 Current Date     : November  3, 2017  13:13:35
 Version          : FDS 6.6.0
 Revision         : FDS6.6.0-131-g88ae75a-HEAD
 Revision Date    : Wed Nov 1 16:03:29 2017 -0400
 Compiler         : Intel ifort 17.0.4
 Compilation Date : Nov 01, 2017 21:03:34

 MPI Enabled;    Number of MPI Processes:       1
 OpenMP Enabled; Number of OpenMP Threads:      1

 MPI version: 3.1
 MPI library version: Intel(R) MPI Library 2017 Update 3 for Linux* OS


 Job TITLE        :
 Job ID string    : SD1_test

 Time Step:      1, Simulation Time:      0.17 s
 Time Step:      2, Simulation Time:      0.33 s
 Time Step:      3, Simulation Time:      0.50 s
 Time Step:      4, Simulation Time:      0.67 s
 Time Step:      5, Simulation Time:      0.83 s

[every line above appears several times, once per duplicate instance; the interleaved repeats are trimmed]

Everything is repeated three times.

gsztar

Nov 3, 2017, 8:26:32 AM
to FDS and Smokeview Discussions

&MESH ID='Mesh03', IJK=52,174,13, XB=2.2,12.6,0.2,35.0,0.0,2.6, MPI_PROCESS=0/
&MESH ID='Mesh14', IJK=16,13,5, XB=2.2,5.4,10.8,13.4,2.6,3.6, MPI_PROCESS=1/
&MESH ID='Mesh15', IJK=8,5,5, XB=43.0,44.6,22.0,23.0,2.6,3.6, MPI_PROCESS=1/
&MESH ID='Mesh05', IJK=32,158,13, XB=24.0,30.4,0.2,31.8,0.0,2.6, MPI_PROCESS=1/
&MESH ID='Mesh16', IJK=21,9,5, XB=58.8,63.0,8.8,10.6,2.6,3.6, MPI_PROCESS=1/
&MESH ID='Mesh12', IJK=8,5,5, XB=26.2,27.8,22.0,23.0,2.6,3.6, MPI_PROCESS=1/
&MESH ID='Mesh07', IJK=80,87,13, XB=54.4,70.4,-6.8,10.6,0.0,2.6, MPI_PROCESS=2/
&MESH ID='Mesh09', IJK=33,12,13, XB=63.8,70.4,10.6,13.0,0.0,2.6, MPI_PROCESS=2/
&MESH ID='Mesh01', IJK=43,174,13, XB=-6.4,2.2,0.2,35.0,0.0,2.6, MPI_PROCESS=3/
&MESH ID='Mesh08', IJK=66,106,13, XB=70.4,83.6,-9.4,11.8,0.0,2.6, MPI_PROCESS=4/
&MESH ID='Mesh10', IJK=48,147,13, XB=30.4,40.0,0.2,29.6,0.0,2.6, MPI_PROCESS=5/
&MESH ID='Mesh11', IJK=26,131,13, XB=40.0,45.2,0.2,26.4,0.0,2.6, MPI_PROCESS=6/
&MESH ID='Mesh06', IJK=46,130,13, XB=45.2,54.4,-1.4,24.6,0.0,2.6, MPI_PROCESS=6/
&MESH ID='Mesh02-merged', IJK=57,174,13, XB=12.6,24.0,0.2,35.0,0.0,2.6, MPI_PROCESS=7/
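A quick sanity check one can run on a mesh list like this is to derive the required `-np` from the MPI_PROCESS values in the file. A sketch — the scratch file below reuses four of the &MESH lines above, and `job.fds` and the `fds` binary name are placeholders:

```shell
# Rebuild a small sample input from four of the &MESH lines above
# (job.fds is a scratch file name used only for this illustration).
cat > job.fds <<'EOF'
&MESH ID='Mesh03', IJK=52,174,13, XB=2.2,12.6,0.2,35.0,0.0,2.6, MPI_PROCESS=0/
&MESH ID='Mesh14', IJK=16,13,5, XB=2.2,5.4,10.8,13.4,2.6,3.6, MPI_PROCESS=1/
&MESH ID='Mesh07', IJK=80,87,13, XB=54.4,70.4,-6.8,10.6,0.0,2.6, MPI_PROCESS=2/
&MESH ID='Mesh02-merged', IJK=57,174,13, XB=12.6,24.0,0.2,35.0,0.0,2.6, MPI_PROCESS=7/
EOF

# The highest MPI_PROCESS value plus one is the -np the job needs:
NP=$(( $(grep -o 'MPI_PROCESS=[0-9]*' job.fds | cut -d= -f2 | sort -n | tail -n1) + 1 ))
echo "launch with: mpiexec -np $NP fds job.fds"

# Meshes per rank, handy for spotting load imbalance:
grep -o 'MPI_PROCESS=[0-9]*' job.fds | sort | uniq -c
```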

Salah Benkorichi

Nov 3, 2017, 8:26:56 AM
to fds...@googlegroups.com
OK, this is an issue with your environment.
I presume you're using Linux.
Did you put this line in your .bashrc file? If not, do so and retest it.

source /home/salah/FDS/FDS6/bin/FDSVARS.sh


Salah Benkorichi

Nov 3, 2017, 8:33:34 AM
to fds...@googlegroups.com
After the installation, you should be prompted with a message asking you to add some lines to your .bashrc file.
For me:

export PATH=/home/salah/FDS/FDS6/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib64:/home/salah/FDS/FDS6/bin/LIB64:$LD_LIBRARY_PATH
# number of OpenMP threads - set to no more than MIN(4, number of cores / 2)
export OMP_NUM_THREADS=4

Log off and log in again.
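A quick way to confirm the new environment took effect after logging back in — a sketch, not tied to any particular install path:

```shell
# Each check prints a diagnostic instead of failing, so this is safe to
# run even before the environment is fixed.
command -v fds     >/dev/null && echo "fds found: $(command -v fds)" || echo "fds not on PATH"
command -v mpiexec >/dev/null && echo "mpiexec found"                || echo "mpiexec not on PATH"
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS:-unset}"
```

If either binary is reported missing, the PATH line above has not been picked up by the shell yet.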


Salah Benkorichi

Nov 3, 2017, 8:35:42 AM
to fds...@googlegroups.com
Also, delete these environment settings that you still have in your .bashrc:
#FDS environment -----------------------
export MPIDIST_ETH=/shared/openmpi_64
export MPIDIST_IB=
source ~/.bashrc_fds $MPIDIST_ETH
#FDS -----------------------------------

export MPIDIST_FDS=/opt/FDS/FDS6/bin/openmpi_64
INTEL_SHARED_LIB=$IFORT_COMPILER_LIB/intel64



Salah Benkorichi

Nov 3, 2017, 8:41:16 AM
to fds...@googlegroups.com
When I run parallel jobs, I set OMP_NUM_THREADS=1.
Here is how it should work for you:
$ clear

salah@sbenkorichi /media/salah/Shared Work Area/fds_examples/fds-meshes-error $ mpiexec -np 8 fds test.fds 
 Mesh 1 is assigned to MPI Process 0
 Mesh 2 is assigned to MPI Process 1
 Mesh 3 is assigned to MPI Process 1
 Mesh 4 is assigned to MPI Process 1
 Mesh 5 is assigned to MPI Process 1
 Mesh 6 is assigned to MPI Process 1
 Mesh 7 is assigned to MPI Process 2
 Mesh 8 is assigned to MPI Process 2
 Mesh 9 is assigned to MPI Process 3
 Mesh 10 is assigned to MPI Process 4
 Mesh 11 is assigned to MPI Process 5
 Mesh 12 is assigned to MPI Process 6
 Mesh 13 is assigned to MPI Process 6
 Mesh 14 is assigned to MPI Process 7
 Completed Initialization Step  1
 Completed Initialization Step  2
 Completed Initialization Step  3
 Completed Initialization Step  4

 Fire Dynamics Simulator

 Current Date     : November  3, 2017  12:40:10
 Version          : FDS 6.6.0
 Revision         : FDS6.6.0-131-g88ae75a-HEAD
 Revision Date    : Wed Nov 1 16:03:29 2017 -0400
 Compiler         : Intel ifort 17.0.4
 Compilation Date : Nov 01, 2017 21:03:34

 MPI Enabled;    Number of MPI Processes:       8
 OpenMP Enabled; Number of OpenMP Threads:      1

 MPI version: 3.1
 MPI library version: Intel(R) MPI Library 2017 Update 3 for Linux* OS


 Job TITLE        : 
 Job ID string    : test

 Time Step:      1, Simulation Time:      0.17 s
 Time Step:      2, Simulation Time:      0.33 s
 Time Step:      3, Simulation Time:      0.50 s
 Time Step:      4, Simulation Time:      0.67 s

Try running it the way I do and report back.