I'll give you the model developer's perspective. First, be careful
with words like "blind." There are subtle variations on this idea. In
fact, ASTM E 1355, Standard Guide for Evaluating the Predictive
Capability of Deterministic Fire Models, refers to three types of
validation exercises -- blind, specified and open. Blind means that
the modeler is given only a "basic description" of the scenario. I
believe that Dalmarnock falls into this category. Specified means that
the modeler is given the "most complete" information about the
scenario. For a "real" fire scenario with a variety of combustibles,
I'm not sure that this is even possible because we as a community have
not yet come to a consensus about what "complete" means. Finally, I
think everyone agrees that "open" means that the modeler has access to
as much information about the experiment as possible, including all
the results.
As a developer, a blind validation exercise is not of much value. This
does not mean that exercises like Dalmarnock are of little value to
AHJs and others, but for me, simply saying that 8 people simulated a
complicated fire scenario with FDS and got widely different results does
not tell me what to work on, other than everything. We've known for
years that it is difficult to model real combustibles because of
limited and ambiguous thermo-physical property data, simplified models
of the solid phase, simplified models of under-ventilated combustion,
and a user community with a wide range of experience. And we're
working in all these areas. But with limited resources, where do we
put the money and effort to get the most bang for the buck?
For me, a preferable validation strategy is to work in stages. First,
for a given compartment geometry, test the model's ability to just do
smoke and heat transport from a specified fire like a gas burner. The
modeling can be done a priori and characterized as a "specified"
exercise if you like. A lot of routine FPE design work involves a fire
model and a specified "design" fire, and this kind of exercise is a
good way to quantify the accuracy of the model for this kind of design
work. After testing the smoke and heat transport, then move on to look
at target response (sprinklers, smoke detectors, etc.), and then
finally the fire itself (prediction of growth and spread on real
objects). If you combine the prediction of the fire, the smoke and
heat transport, and the target response, then I, the developer, do not
know which part of the model is weak, and which is strong. Large
variations in temperature in Dalmarnock were a direct result of FDS
predicting different heat release rates (HRRs) for furnishings that people modeled with
different assumptions and thermo-physical properties. So what is not
working? The solid phase? The gas phase? The user him or herself?
Probably all of the above. What do I do now?
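To make that first stage concrete, here is a rough sketch of what a
"specified" exercise looks like as an FDS input file. Everything in it
is a made-up placeholder (the room size, the 1 MW burner, the device
location), and the exact namelist parameters vary from one FDS version
to the next, so treat it as schematic:

  A 4 m x 4 m x 3.2 m room on a 10 cm grid, with one open wall.
  &HEAD CHID='burner_stage1', TITLE='Stage 1: specified fire' /
  &MESH IJK=40,40,32, XB=0.0,4.0,0.0,4.0,0.0,3.2 /
  &TIME T_END=600. /
  &REAC FUEL='PROPANE' /
  The fire is imposed, not predicted: 1000 kW/m2 over a 1 m2 floor patch.
  &SURF ID='BURNER', HRRPUA=1000. /
  &VENT XB=1.5,2.5,1.5,2.5,0.0,0.0, SURF_ID='BURNER' /
  &VENT MB='XMIN', SURF_ID='OPEN' /
  A gas temperature device near the ceiling, to compare against the data.
  &DEVC ID='T_ceiling', QUANTITY='TEMPERATURE', XYZ=2.0,2.0,3.0 /
  &TAIL /

Because the HRR is specified, any disagreement with the measured
temperatures points at the transport part of the model, not at the
pyrolysis model or the user's choice of material properties.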
I have been struggling in recent months to put together a new
Validation Guide for FDS that takes up where our work with the US
Nuclear Regulatory Commission (NUREG-1824) left off. Struggling,
simply because it is a hell of a lot of work to take dozens of
experimental datasets and FDS calculations and present comparisons in
a cohesive way. We're trying to automate the process so that with each
"minor" upgrade (FDS 5.1 to 5.2, for example) we will rerun ALL of our
validation cases and replot everything. If we change the physics of
the model, we have to redo all of our validation work. This was a
lesson learned from the NRC. They don't care about what we published
in the Fire Safety Journal 5 years ago. How is the model now? Some of our validation
exercises were originally of the "specified" type, but that label has
little meaning anymore. They are all effectively open now, and we just don't have
the money to do full-scale fire experiments each time we update FDS.
Just not practical. Even if we could, I can tell you that it is rare
that an experiment is conducted exactly as specified in the test plan.
Life just doesn't work that way, and those of you who do full-scale
testing know exactly what I'm talking about.
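For what it's worth, the automation itself is not the hard part. A
sketch of the idea in Python, assuming a layout where each validation
case sits in its own directory with its FDS input file and the
experimental data (the file names, directory layout, and plotted
quantity are all placeholders):

  import csv
  import subprocess
  from pathlib import Path
  import matplotlib.pyplot as plt

  SUITE = Path("Validation")   # hypothetical: one directory per case
  FDS = "fds"                  # the current FDS executable

  def read_two_columns(path):
      # Read a time/value CSV, skipping whatever header rows it has.
      t, v = [], []
      with open(path) as fh:
          for row in csv.reader(fh):
              try:
                  t.append(float(row[0]))
                  v.append(float(row[1]))
              except (ValueError, IndexError):
                  continue
      return t, v

  for case in sorted(SUITE.iterdir()):
      if not (case / "case.fds").exists():
          continue
      # Rerun the case with the current version of the model.
      subprocess.run([FDS, "case.fds"], cwd=case, check=True)
      # FDS writes device output to <CHID>_devc.csv; CHID='case' here.
      t_mod, v_mod = read_two_columns(case / "case_devc.csv")
      t_exp, v_exp = read_two_columns(case / "exp.csv")
      # Regenerate the model/experiment comparison plot.
      plt.figure()
      plt.plot(t_exp, v_exp, "k-", label="Experiment")
      plt.plot(t_mod, v_mod, "r--", label="FDS")
      plt.xlabel("Time (s)")
      plt.ylabel("Temperature (C)")
      plt.legend()
      plt.savefig(case / "comparison.pdf")
      plt.close()

The hard part is everything around a loop like that: collecting dozens
of datasets, deciding which quantities to compare, and presenting it
all in a cohesive way.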
All this being said, I'll say again that "blind" validation exercises,
however you define them, can be of value to the end user and the fire
authorities, but they are of less value to developers. Given the
opportunity and the resources, I would prefer that a series of
increasingly complex fire scenarios be part of any validation
exercise. Then we learn what the model does well and what it does
poorly. If you
just jump to the fully-furnished building and set it alight, then we
cannot make progress. Worse, it says to the skeptics that fire models
are all wrong and should not be trusted.