'a priori' vs 'a posteriori' modelling debate


Rein

Apr 24, 2008, 6:40:22 PM
to FDS and Smokeview Discussions
I know the topic is not popular in some circles of the fire community
but I think that users of fire models need to be more involved in the
'a priori' vs. 'a posteriori' debate.


I have just read this short article by Prof Beard of Heriot-Watt
University that appears in the latest issue of Industrial Fire Journal:

- Reliability of computer models in fire safety design
http://hemmingfire.com/news/fullstory.php/aid/148

He emphasises the need to conduct more a priori comparisons against
well-instrumented experimental tests to further increase our knowledge
of fire modelling. He has a point in that 'real world' fire engineering
applications most frequently simulate events whose real behaviour has
not been and will never be measured (thus these are a priori
simulations). However, most fire model validation has been conducted a
posteriori (see note below for some definitions).

I was wondering what the opinions in this discussion group are. I
certainly agree with Prof Beard on this one; we need more a priori
comparisons in fire modelling to reflect on the strengths and
limitations of current tools.

Cheers
G.

NOTE: Roughly speaking, there are two types of modelling comparisons: a
priori (aka blind) and a posteriori (aka open). In a priori
simulations the user has access only to the description of the
scenario and is responsible for developing appropriate inputs from
this description plus a series of assumptions that the user finds
appropriate. In a priori simulations, the user has no access to the
experimental results of the real behaviour. A posteriori simulations
are those conducted when the user had access to the results from the
experiment. Most fire model validations are a posteriori.

Rein

Apr 24, 2008, 6:44:21 PM
to FDS and Smokeview Discussions
The direct link to the article is this one:

http://www.hemmingfire.com/cp/6/Fire%20Modelling.pdf

G.

On Apr 24, 11:40 pm, Rein <rei...@gmail.com> wrote:
> I know the topic is not popular in some circles of the fire community
> but I think that users of fire models need to be more involved in the
> 'a priori' vs. 'a posteriori' debate.
>
> I have just read this short article by Prof Beard from Heriot-Watt
> University that appears in last issue of Industrial Fire Journal:
>
> - Reliability of computer models in fire safety design
>   http://hemmingfire.com/news/fullstory.php/aid/148

Kevin

Apr 25, 2008, 8:41:57 AM
to FDS and Smokeview Discussions
I'll give you the model developer's perspective. First, be careful
with words like "blind." There are subtle variations on this idea. In
fact, ASTM E 1355, Standard Guide for Evaluating the Predictive
Capability of Deterministic Fire Models, refers to three types of
validation exercises -- blind, specified and open. Blind means that
the modeler is given only a "basic description" of the scenario. I
believe that Dalmarnock falls into this category. Specified means that
the modeler is given the "most complete" information about the
scenario. For a "real" fire scenario with a variety of combustibles,
I'm not sure that this is even possible because we as a community have
not yet come to a consensus about what "complete" means. Finally, I
think everyone agrees that "open" means that the modeler has access to
as much information about the experiment as possible, including all
the results.

To a developer, a blind validation exercise is not of much value. This
does not mean that exercises like Dalmarnock are of little value to
AHJs and others, but for me, simply saying that 8 people simulated a
complicated fire scenario with FDS and got widely different results does
not tell me what to work on, other than everything. We've known for
years that it is difficult to model real combustibles because of
limited and ambiguous thermo-physical property data, simplified models
of the solid phase, simplified models of under-ventilated combustion,
and a user community with a wide range of experience. And we're
working in all of these areas. But with limited resources, where do we
put the money and effort to get the most bang for the buck?

For me, a preferable validation strategy is to work in stages. First,
for a given compartment geometry, test the model's ability to just do
smoke and heat transport from a specified fire like a gas burner. The
modeling can be done a priori and characterized as a "specified"
exercise if you like. A lot of routine FPE design work involves a fire
model and a specified "design" fire, and this kind of exercise is a
good way to quantify the accuracy of the model for this kind of design
work. After testing the smoke and heat transport, then move on to look
at target response (sprinklers, smoke detectors, etc), and then
finally the fire itself (prediction of growth and spread on real
objects). If you combine the prediction of the fire, the smoke and
heat transport, and the target response, then I, the developer, do not
know which part of the model is weak, and which is strong. Large
variations in temperature in Dalmarnock were a direct result of FDS
predicting different HRRs for furnishings that people modeled with
different assumptions and thermo-physical properties. So what is not
working? The solid phase? The gas phase? The user him or herself?
Probably all of the above. What do I do now?
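
To make concrete what a "specified" fire means above, here is a minimal
sketch (Python, with purely illustrative numbers, not taken from any
particular project) of the kind of prescribed HRR curve that gets handed
to the model in such an exercise: a t-squared growth capped at a peak.
The growth coefficients are the commonly tabulated slow/medium/fast
values; the peak HRR and the print interval are invented for
illustration.

# Minimal sketch of a prescribed t-squared design fire. The alpha values
# are the commonly tabulated slow/medium/fast growth coefficients; the
# peak HRR and the print interval are purely illustrative.

ALPHA = {"slow": 0.00293, "medium": 0.01172, "fast": 0.0469}  # kW/s^2

def design_fire_hrr(t, growth="medium", peak_kw=1000.0):
    """Heat release rate (kW) at time t (s), capped at peak_kw."""
    return min(ALPHA[growth] * t ** 2, peak_kw)

if __name__ == "__main__":
    for t in range(0, 601, 60):
        print(f"t = {t:4d} s   HRR = {design_fire_hrr(t):7.1f} kW")

The point is that the fire itself is an input, so any disagreement with
the experiment can be attributed to the transport part of the model
rather than to fire growth prediction.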

I have been struggling in recent months to put together a new
Validation Guide for FDS that takes up where our work with the US
Nuclear Regulatory Commission (NUREG 1824) left off. Struggling,
simply because it is a hell of a lot of work to take dozens of
experimental datasets and FDS calculations and present comparisons in
a cohesive way. We're trying to automate the process so that with each
"minor" upgrade (FDS 5.1 to 5.2, for example) we will rerun ALL of our
validation cases and replot everything. If we change the physics of
the model, we have to redo all of our validation work. This was a
lesson learned from the NRC. They don't care about what we published
in FSJ 5 years ago. How is the model now? Some of our validation
exercises were originally of the "specified" type. But that has little
meaning now. Now they are all effectively open, and we just don't have
the money to do full-scale fire experiments each time we update FDS.
Just not practical. Even if we could, I can tell you that it is rare
that an experiment is conducted exactly as specified in the test plan.
Life just doesn't work that way, and those of you who do full-scale
testing know exactly what I'm talking about.
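
Conceptually the automation is nothing fancy. The sketch below is only
an illustration in Python, not our actual scripts: the directory layout,
the measured-data file, and the device name are invented, and all that
is assumed of FDS itself is the usual "fds <input file>" invocation and
the CHID_devc.csv device output.

import glob
import os
import subprocess

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical layout: one sub-directory per validation case, each with a
# single FDS input file plus a measured.csv holding the experimental data.
for case in sorted(glob.glob("Validation/*/")):
    input_file = glob.glob(os.path.join(case, "*.fds"))[0]
    chid = os.path.splitext(os.path.basename(input_file))[0]

    # Rerun the case with whatever FDS executable is currently on the PATH.
    subprocess.run(["fds", os.path.basename(input_file)], cwd=case, check=True)

    # FDS device output (CHID_devc.csv): first row is units, second row names.
    pred = pd.read_csv(os.path.join(case, chid + "_devc.csv"), skiprows=1)
    meas = pd.read_csv(os.path.join(case, "measured.csv"))

    # "Upper Layer Temp" is an invented device ID; real cases name their own.
    fig, ax = plt.subplots()
    ax.plot(meas["Time"], meas["Upper Layer Temp"], "k-", label="Measured")
    ax.plot(pred["Time"], pred["Upper Layer Temp"], "r--", label="FDS")
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Temperature (C)")
    ax.set_title(chid)
    ax.legend()
    fig.savefig(os.path.join(case, chid + "_comparison.png"))
    plt.close(fig)

The hard part is not the loop; it is curating dozens of datasets into a
consistent format so that a loop like this can be run after every change
to the source code.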

All this being said, I'll say again that "blind" validation exercises,
however you define them, can be of value to the end user and the fire
authorities, but they are of less value to developers. I would prefer
that, given the opportunity and the resources, a series of increasingly
complex fire scenarios be part of any validation exercise. Then we learn
what the model does well and what it does not. If you
just jump to the fully-furnished building and set it alight, then we
cannot make progress. Worse, it says to the skeptics that fire models
are all wrong and should not be trusted.

dr_jfloyd

Apr 25, 2008, 9:31:25 AM
to FDS and Smokeview Discussions
For most fire protection applications, tests like Dalmarnock are not
representative of how modeling is used. Performance goals for fire
protection systems rarely involve fully flashed over compartment
fires. You want detection and egress to occur before conditions
become untenable. You want suppression systems to activate well
before flashover. Additionally, in a design context one has no
business trying to model a real fire. A building and its fire
protection systems will be in existence for many, many years and there
is no way a fire protection engineer can know what combustible loading
will be present throughout the lifetime of the building. Instead one
specifies design fires (which may be based upon tests like
Dalmarnock). For this, open- or specified-type validation exercises
are perfectly appropriate. However, the reality of testing is that it
is very difficult to do a pre-test computation even if one is given
detailed specifications. Inevitably something occurs during the test
that deviates from the specification (a door was opened at 50 s
rather than the 30 s specified in the test plan).

Simon Ham, FiSEC, UK

Apr 25, 2008, 10:35:34 AM
to FDS and Smokeview Discussions
I read Alan Beard’s article with considerable interest, and the bottom
line is that, yes, we are all playing with fire, and some of us may be
better at it, or more realistic in our use and understanding of the
limitations inherent in CFD modelling.

I deplore the use of emotive statements such as “using models as part
of decision-making may be dangerous” without far greater explanation
and justification. I accept that this is a magazine article and not a
technical paper, but given a choice between being able to experiment on
my computer and not being able to do so because I do not have
sufficient resources to carry out large-scale experimentation, I will
continue to opt for the former.

Having looked at ‘The Dalmarnock Fire Tests: Experiments and
Modelling’, I am not convinced that the particular exercise did
anything but demonstrate that there are inconsistencies in
experimental fires and also in models. Inconsistencies in timing are
likely to occur for a number of reasons, particularly during the early
stages of fire development. I very much doubt that detailed information
was available on the way in which the newspaper ignition source was
crumpled, the time between the heptane being poured on and it being
ignited by the blowtorch, or even the length of time that the blowtorch
was applied. All these factors might influence the early-stage growth
of the fire, together with the actual rather than assumed air leakage
of the fire compartment and the moisture content of the furniture and
other contents.

One of the differences between most fire models and full-scale fire
experiments is that the model results are repeatable, but full-scale
fire experiments are less so.

I am only making the above points because there is an implicit
assumption that fire experiments are a true reflection of reality
rather than artificial constructs in their own right.

There is a need to ensure that the inputs of fire models reflect a
realistic scenario. I am largely using FDS for researching smoke
movement. My models leak in appropriate places and contain stairways,
lift shafts and other elements that induce stack pressure, whereas the
models that I have seen used by others working in similar areas are
effectively sealed. Naturally I consider that my approach is the more
reasonable.

I have doubts as to the precision of the results from FDS models, but
use my judgement as to whether the outputs are a reasonable reflection
of what I would anticipate and are suitable for the purpose for which
the scenario is being set up. I tend to use a comparative approach to
fire engineering, generally comparing the results of the scenario that
I am modelling to a code-compliant base model. With this approach a
lot of the inconsistencies cancel out and become mostly irrelevant. I
do appreciate that users working in other fields need a greater degree
of precision from their outputs but I am satisfied that my results are
‘good enough’ for the purpose for which I use them. I am not sure that
I would want to rely solely on the temperature output from an FDS
model in a scenario where the temperature may be critical to the
safety of a building element. In most circumstances I would not be
particularly concerned whether a fire in a compartment takes 3 minutes
or 10 minutes to reach flashover, unless this makes a significant
difference to the safety of the occupants.
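
As a toy illustration of that comparative approach (all numbers
invented), the finding is reported as a relative change between the
trial design and the code-compliant base case, rather than as an
absolute prediction, so much of the systematic model error cancels:

# Toy numbers only: time to untenable conditions from two runs of the
# same model, same mesh and same design fire, differing only in design.
base_time_to_untenable = 420.0   # s, code-compliant base case (invented)
trial_time_to_untenable = 465.0  # s, trial design (invented)

change = trial_time_to_untenable / base_time_to_untenable - 1.0
print(f"Trial design sustains tenable conditions {change:+.0%} "
      "relative to the code-compliant base case.")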

The fact that an FDS model fails to reflect an experiment is not that
important. It only tells us what we already know. Some people are
better at modelling than others, and it is unwise to accept any results
as a reflection of reality without some form of corroboration from
other sources. We should be helping to explain to those involved in
fire safety decision-making that the outputs from a model may be
significantly better than an educated guess, not frightening them into
dismissing the use of models because of their apparent unreliability.

In my view, a priori and blind comparison issues are of academic
interest, but they should not be allowed to detract from the overall
benefits that may be derived from the use of such a powerful modelling
tool.


On Apr 24, 11:40 pm, Rein <rei...@gmail.com> wrote:
> I know the topic is not popular in some circles of the fire community
> but I think that users of fire models need to be more involved in the
> 'a priori' vs. 'a posteriori' debate.
>
> I have just read this short article by Prof Beard from Heriot-Watt
> University that appears in last issue of Industrial Fire Journal:
>
> - Reliability of computer models in fire safety design
>   http://hemmingfire.com/news/fullstory.php/aid/148