FDS prediction of Gas Species


Damien

Aug 17, 2010, 7:52:21 AM
to FDS and Smokeview Discussions
Hi,

I am trying to validate FDS for predicting gas species in a building
containing an atrium and a first floor level. The fire is a kerosene
pool fire located in the atrium. The 6 gas sampling points are located
on the first floor level. I have uploaded the fds input file
("Atrium_2MW_kerosene.txt") and the results I have obtained ("Atrium
gas species results.doc"). The results of CO and CO2 concentrations do
not correlate at all well, and I am wondering if anyone has had
similar trouble with relatively large enclosures, or if there are
errors in my input file? I am relatively new to FDS and this
validation work is part of a master's thesis. I would really
appreciate any input.

Thanks.

dr_jfloyd

Aug 17, 2010, 8:24:09 AM
to FDS and Smokeview Discussions
You have a 2 MW fire in a large space. While I have not run this file,
my guess is that the fire remains fairly well ventilated, in which
case the CO is all post-flame. As stated in Section 6.1.3 of the Tech
manual: "The proposed model of CO production still does not contain
the necessary kinetic mechanism to predict the
“post-flame” concentration of CO without the prescription of the
measured value of the post-flame CO yield.” Tewarson's data in the
handbook is from the ASTM E2058 test apparatus. This is not the same
fire as a 2 MW pool fire. Why do you feel that the yields from the
ASTM E2058 are appropriate for a 2 MW pool fire?

Kevin

Aug 17, 2010, 9:02:11 AM
to FDS and Smokeview Discussions
Your experimental data does not look right to me. In my experience, CO
measurements are sometimes erratic because the production rate of CO
is not necessarily constant, especially for bigger fires. But the CO2
increase and the O2 decrease should look much like the temperature
data and follow the HRR. There appear to be a number of sensors where
the O2 and CO2 do not behave like I would expect.

I suggest that you first look at all the temperature data. If that is
not matching reasonably well, nothing else will. Next O2. Some of the
experimental measurements look reasonable, others I do not believe.
What is the difference between device G3 and G5 (Fig 16 and Fig 17)?
How can the model predict so well in one case and so badly in the
other? Fig 18 – why does the measured O2 concentration increase to
0.215 for approximately 100 s? Where are the uncertainty bounds on
these measurements? If you publish this work, you have to be able to
put uncertainty bounds on your measurements. Otherwise, how can you
assess the accuracy of the model?

What’s up with CO2 at G3 (Fig 10)? Do you believe the measurement?
Looks like noise to me. How about CO2 at G5 (Fig 12)? What happens at
200 s to make the CO2 concentration drop by 80%? I see no change in
the HRR at this point. Figure 9? What’s going on there? 16% CO2? Why
these sudden increases? Is someone smoking a cigarette near G2? Maybe
the forklift?

As for your input file, CO_PRODUCTION=.TRUE. is not appropriate
here. It is intended only for under-ventilated compartment fires. I
suggest
that you turn it off and just use the fixed yield. Even then, FDS
does not match CO data very well because it is very difficult to
predict the production rate. CO_PRODUCTION is an attempt to do it in
under-ventilated compartments. In your case, the fire is well-
ventilated and we just have to rely on an estimate of CO_YIELD.

Look at temperatures – the O2 and CO2 should follow the trend. If the
time trace does not qualitatively follow the FDS prediction, you need
to explain why the fire behavior so suddenly changed. Notice that all
the FDS plots rise with the HRR, steady out, and then decrease. If the
fire’s HRR is what is displayed, and nothing was done in the test lab
(like opening a door or whatever), then I would not trust either an O2
or CO2 measurement that does not follow the same trend as FDS. FDS may
over or under-predict the measurement, but typically it follows the
same trend.

Damien Flynn

Aug 17, 2010, 11:08:44 AM
to fds...@googlegroups.com, drjf...@gmail.com
I appreciate that the CO yield value of 0.012 does not apply to every fire; however, given the lack of available data, I used 0.012 because I could not find any other value for kerosene. Can you recommend a different value from your experience?

dr_jfloyd

Aug 17, 2010, 11:27:12 AM
to FDS and Smokeview Discussions
The theory manual states that the CO model does not predict CO for
well ventilated fires and only uses a post-flame value. Therefore,
your ability to validate the model relies very strongly on your
ability to specify a proper CO yield. I'm not sure how you thought
you would perform a validation exercise without being able to specify
appropriate inputs.

If I do a Google Scholar search for "kerosene pool fire CO yield" the
4th item is NISTIR 7013 which has CO yield (~3 %) and soot yield (~10
%) for a 2 MW Jet A fire. Jet A is basically kerosene. Since your CO
predictions appear to be a factor of 2 - 4 low and your input yield is
1 %...
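In input-file terms, Kevin's earlier suggestion amounts to something like the following (a sketch only, assuming FDS 5 namelist syntax; the fuel atom counts are an illustrative kerosene surrogate, and the yields are the approximate NISTIR 7013 values, not verified inputs for this test):

```
Well-ventilated fire: turn off the two-step CO model and rely on a
fixed post-flame yield.
&MISC CO_PRODUCTION=.FALSE. /

Yields per NISTIR 7013 for a 2 MW Jet A fire; C12H23 is an
approximate kerosene surrogate.
&REAC ID='KEROSENE', C=12., H=23., CO_YIELD=0.03, SOOT_YIELD=0.10 /
```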

welchs

Aug 19, 2010, 12:35:56 PM
to FDS and Smokeview Discussions
A few points of clarification here. I hold my hand up to supplying
the "dodgy data", this derives from the DETR-PIT sponsored atrium fire
test series conducted by a European consortium in the LBTF structure
at BRE Cardington in 1999. I agree with all of Kevin's observations
on the measurements (save the cigarette and forklift, G2 was just
broken!) but these types of uncertainties are inevitable in a major
test programme (13 full-scale tests with 400+ channels of data per
test). Whilst the temperature data is generally robust, the gas
species measurements are much trickier, and it is harder to filter
out rogue data, though it is usually obvious under closer
examination, as here. As Damien is aware, the data was supplied "as
is" earlier this year; those particular data cited are unpublished,
and I only became aware of the interest in gaseous species via the
mails to this list, hence I had not paid any particular attention to
them! Our own interests in model validation on this test series have
thus far focused purely on the fluid dynamics* due to the reasons
cited above.

Regarding defining an input yield for CO, please also note that the
tests were supported by a series of calibration burns using oxygen
depletion calorimetry on an "identical" corner wall setup using the
same fuel and fuel trays, so whilst allowing for minor differences
from things like ambient temperature variation we already have a means
of deriving the required inputs for each test. I have just looked up
the 2MW-kerosine fire measurements: the average CO/CO2 mass ratio in
the hood was 0.01095, which, multiplied by the CO2 yield (MCO2/MC * f
= 44/12 * 0.88 = 3.23), gives a CO yield of 0.0353 (SD=0.004), pretty
similar to the NISTIR 7013
value of 0.030 +/- 0.009. The exact fuel used was Jet A1, "kerosine"
is just used as a generic descriptor, so the results were expected to
be very similar to those for Jet A in the NIST study. This gives a
final confirmed scaling of 0.0353/0.012 = 2.94, pretty close to the
factor of ~3 needed to make Damien happy!

Stephen

* http://www.see.ed.ac.uk/~swelch/Atrium_project/

Kevin

Aug 19, 2010, 1:50:08 PM
to FDS and Smokeview Discussions
Thanks Stephen

What you are describing is exactly my experience over the past 10
years dredging up old experimental data sets for model validation.
Experiments done at NIST and elsewhere over the years have this same
problem -- too many channels of data that are not properly analyzed.
No one wants to "throw out" data, but it has to be done. It should be
done by the persons doing the experiment. It should not be left to the
modelers because it puts us in an awkward position -- if the model and
measurement don't match, do we throw it out?

It should be recognized that CFD models transport mass and energy in
much the same way. Different BCs, but there should be some consistency
in the species and temperature plots. If one changes dramatically and
the other does not, I usually suspect that something is wrong.