Less than a year ago, I wrote a blog post about the verification and
validation process that we have developed for FDS:
http://blog.fds-smv.net/2009/09/fds-verification-and-validation.html
In particular, I addressed some of the issues that we have encountered
with journal papers on FDS. To quote the post:
"When FDS was first released, we had in mind the idea that V&V would
be performed by students and engineers using the model for research or
commercial applications, and the results would be published in the
fire literature. This did indeed happen, and there are numerous
papers, reports, and theses spread across the various journals and
websites. However, several years ago as we were working on a V&V study
with the US Nuclear Regulatory Commission, it became apparent that we
could not just depend on the fire literature as a repository of FDS
V&V work. There were several reasons:
-- V&V, especially Validation work, cannot be easily crammed into a
short journal article.
-- Results of older versions of the model lose their validity after a
few years.
-- Often the experimental conditions and uncertainties are unknown.
-- Often the work is performed by students who are just learning how
to use the model.
-- There are too many different ways of quantifying accuracy, which
gets back to the question above as to what "works well" means.
-- Cases have to be re-run with each new release, and we cannot expect
journals to keep publishing the same old stuff.
For these reasons, we decided to maintain two manuals, Volumes 2 and 3
of the FDS Technical Reference Guide, called the FDS Verification and
Validation Guides, respectively. In these, we have compiled existing
V&V work and continually add case studies to demonstrate mathematical
accuracy and physical fidelity."
The blog goes on to encourage students who would like to work with FDS
to work within the framework that we developed. We have a Road Map
filled with interesting, challenging project ideas, plus an ever-
growing collection of V&V calculations that we'd like to continue to
add to and improve. Many of the changes that we make to FDS come
directly from the results of our V&V calculations. They also come by
way of researchers who alert us to potential problems that they are
having with a particular feature in FDS or Smokeview.
All this being said, only a few students have contacted us or adopted
our process of working with the source code, automating the running of
cases and the plotting of results, and adding to our V&V Guides. And
yet, we receive paper after paper for review with the flaws we've
listed above.
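To make concrete what we mean by automating the running of cases and
the plotting of results, here is a minimal sketch in Python. It assumes
an 'fds' executable on the PATH and a folder of input files; the
'_devc.csv' naming is the FDS device output convention, but the
directory name and the plotting choices are purely illustrative, not
our actual scripts.

import glob
import os
import subprocess
import pandas as pd
import matplotlib.pyplot as plt

# Run every FDS input file in the (hypothetical) Validation_Cases folder.
for input_file in sorted(glob.glob("Validation_Cases/*.fds")):
    subprocess.run(["fds", os.path.basename(input_file)],
                   cwd="Validation_Cases", check=True)

# Plot every device output file. In FDS *_devc.csv files, the first
# row holds the units and the second row holds the column names.
for devc_file in sorted(glob.glob("Validation_Cases/*_devc.csv")):
    data = pd.read_csv(devc_file, skiprows=1)
    data.plot(x="Time")
    plt.savefig(devc_file.replace("_devc.csv", ".png"))
    plt.close()

A script along these lines, run before each release, is the whole idea:
no manual post-processing, the same plots regenerated every time.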
Most of these papers consist of routine validation work (comparison of
FDS version x.y.z with experiment) that is more appropriate for our
Validation Guide than for an archival journal. The journals cannot
possibly publish all of these papers, no matter how well the work is
performed.
We'd much prefer
to see ideas for new algorithms published in archival journals (along
with some validation work to demonstrate the new algorithm is an
improvement over the old). Such
work will stand the test of time. Routine validation work is only good
for a few months, after which a new minor release might slightly
change the results. This
is why we re-run all our validation cases with each minor release. It
is not only good development practice, it is a requirement of the
regulatory agencies that use FDS.
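As a sketch of what such a re-run check might look like, consider
comparing a peak prediction from the new release against the measured
value and flagging any case whose relative error exceeds a stated
tolerance. The file names, the 'TEMP' column, and the 10% tolerance
below are all made up for illustration; they are not our actual
acceptance criteria.

import pandas as pd

TOLERANCE = 0.10  # hypothetical acceptance tolerance on peak values

# Measured data and new-release model output (illustrative file names).
exp = pd.read_csv("HGL_Temperature_exp.csv")
mod = pd.read_csv("HGL_Temperature_devc.csv", skiprows=1)

# Relative error in the peak hot-gas-layer temperature.
rel_error = abs(mod["TEMP"].max() - exp["TEMP"].max()) / exp["TEMP"].max()
status = "PASS" if rel_error <= TOLERANCE else "FAIL"
print(f"HGL temperature: relative error {rel_error:.1%} [{status}]")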
I have had a number of long conversations with friends in academia,
and we always reach a very difficult impasse. It has to do with the
Goal. Our Goal as model developers is to produce a fast, accurate,
robust, useful piece of software for performing fire and low-speed
fluid flow simulations. This is consistent with our mission of
technology transfer at NIST, but it is also a personal passion shared
among all the developers and those who've helped us this past decade.
We feel that it is in everyone's best interest to develop and maintain
a fire model that is useful, open, and above all, accurate. This all
sounds great, so why the impasse? Because the mission of a university
professor is to educate students and to do research. And the way that
this research is communicated is via papers. Trouble is, much of what
we do to develop and maintain FDS is not appropriate for archival
journals. And certainly we do not work on 3 or 4 year cycles.
Here's a case in point. Last year, a message was posted to the
Discussion Group saying that there was a problem with the velocity
boundary condition in FDS. The person posting the message said that he/
she could not get the right pressure drop when simulating simple air
flow through a duct. Randy McDermott identified the problem and
implemented a better velocity BC (Werner-Wengle wall model) than the
one that was in FDS at the time. This is not to say that all
calculations done prior to this were fatally flawed, but rather that
the new model worked better than the old one, at no increase in CPU
time. Better physics, no CPU hit, a no-brainer -- we implemented the
model in the next minor release of FDS after checking that none of the
V&V cases we run with each release was adversely affected. In a
nutshell, this is how we develop code. Randy probably
spent the sum total of 2 weeks on it, and no archival journal papers
were published (alas for Randy!). Yet we all have benefited from this
improvement.
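For reference, the Werner-Wengle model uses a power-law velocity
profile that can be inverted in closed form: u+ = y+ in the viscous
sublayer, and u+ = A*(y+)^B beyond it, with A = 8.3 and B = 1/7. Here
is a minimal sketch in Python of that inversion; it is an illustration
of the idea, not the actual FDS source.

import numpy as np

A, B = 8.3, 1.0 / 7.0
YPLUS_MATCH = A ** (1.0 / (1.0 - B))  # ~11.81, where the two laws meet

def wall_shear_stress(u, y, nu, rho):
    """Estimate the wall shear stress from the tangential velocity u
    [m/s] at distance y [m] from the wall, kinematic viscosity nu
    [m^2/s], and density rho [kg/m^3]."""
    # First assume the near-wall cell sits in the viscous sublayer,
    # where u+ = y+ implies u_tau = sqrt(nu*u/y).
    u_tau = np.sqrt(nu * abs(u) / y)
    if y * u_tau / nu > YPLUS_MATCH:
        # Outside the sublayer, invert u/u_tau = A*(y*u_tau/nu)^B.
        u_tau = (abs(u) / (A * (y / nu) ** B)) ** (1.0 / (1.0 + B))
    return rho * u_tau ** 2 * np.sign(u)

Part of the appeal of the power law is that closed-form inversion: no
iteration per wall cell per time step, which is consistent with the
"no CPU hit" mentioned above.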
Now if instead of working at NIST, Randy were teaching at a
university, here's how this would have happened. Professor Randy would
have recognized that the FDS velocity BC needed improvement. He would
have assigned the problem to a student, who would have pored through
the literature and found papers on about a dozen different wall
models, including the W-W model. Months would be needed for this, then
more months to learn FDS, more months to learn how to program it, more
months to run a bunch of cases. Now a few years have gone by, and it's
time to publish those papers. The papers get sent to a fluids journal,
say, and they are rejected. Why? Because the W-W model has already
been shown to work in a wide variety of different applications, and
the work that the student did, while beneficial to the development of
FDS, is not really new, and so not appropriate for an archival journal.
Perhaps for conference proceedings. In any case, chances are that in
the few years all this took, Randy's alter ego at NIST or VTT or
wherever would already have done the work (Prof. Randy would not have
told us about his efforts). So at the end of all this, we have a
student who hopefully has received a good education, but there is
little else to show for all that hard work. Worse yet, the student
graduates and all of his/her experience is lost. This is not a very
efficient way to make technological advances.
The tone of the original post implies that there is something wrong
with the way we develop FDS; that it does not fit within the academic
framework that has evolved over the past millennium (427 years if
you're at the U of Edinburgh). All we can say is that we've borrowed
lots of
development ideas from
organizations and companies like the one whose name is at the top of
this web page. Do you think that Google, Apple, Microsoft, etc., can
delay making improvements to their products so as not to disrupt the
progress of graduate students? Do you not concede that the electronic
gizmo in your pocket is going to be rubbish in less than a year? I
cannot imagine a computer science professor writing to Google telling
them to slow down -- the students can't keep up.
But I can already anticipate what you're going to say next. What about
peer review? What about careful academic scholarship? My response is
that we've
put into place a system of quality control via our V&V Guides, a system
that maintains a much higher degree of reliability and accuracy than
we had even
a few years ago. And we believe that this system is much better than
the somewhat ad hoc way that we used to do things -- that is, develop
some
algorithm, run some calculations, publish a paper, and then repeat.
Until FDS 5, we did not carefully control the FDS source code. We had
no
versioning system and no systematic way of releasing new versions. We
now have what is commonly called a "Configuration Management Plan." A
description
of it is in Volume 4 of the FDS Technical Reference Guide. The four
volumes that make up the FDS Technical Reference Guide -- Mathematical
Model, Verification, Validation, and Configuration Management Plan --
ought to be the
starting point for a review of FDS. What is in the literature does not
present an accurate picture of what FDS is today. We're not opposed to
publishing papers. We even do it from time to time. But journal
publications do not drive our development efforts. We're driven by
user need, and we cannot wait 3 years to implement something that is
going to help someone right now.
When I was in school, we were taught that engineers solve problems.
That is what we do each and every day. Ask yourself this question --
are you solving problems or are you just publishing papers? Most of
the FDS papers I see in the literature do not offer solutions to
problems. They certainly identify problems, but that's easy. You don't
need an advanced degree to download FDS, run a few calculations, and
then observe that something doesn't seem quite right. We get that
every day in this forum. Instead of worrying about publishing papers,
it would be much more beneficial to all of us to read the Road Map and
monitor the Issue Tracker to see what we're working on and then help
us solve these problems. If I were sitting on a Ph.D. review panel,
this is the first (and last) question that I would ask the student --
did you solve a problem?
Guillermo writes: "The output that academia respects most and gives
the most credit for is funding, not papers. If my team were to win
funding, from NIST or otherwise, to research topics listed in the
roadmap, we could contribute to the roadmap. So far, our sponsors have
been convinced to fund us on other topics (blind simulations,
large-scale experiments and a priori/a posteriori modelling,
forecasting fire dynamics, multiscale modelling of tunnel fires,
modelling ignition at high heat fluxes). As we are always sending
proposals to potential sponsors, maybe NIST will get one soon."
I cannot believe that your sponsors would object to the fact that
their funding led to an improvement in FDS, regardless of the original
intent. Why else would they fund you? In fact, if you have used FDS to
address any of the topics that you have listed, then you have
undoubtedly come across limitations in the model. Why not try to solve
these problems? Simply publishing a paper pointing out that some older
version of FDS did not simulate something appropriately is not helpful
to us or anyone running a current version of the model. Proposing a
solution, and then working with us to implement that solution would be
of value, and the student would benefit enormously from learning some
very useful software development skills. Even if no papers are
published, I still see great value in working a problem through to a
final solution -- a detailed description in Volume 1 of the Tech
Guide, a few good verification cases in Volume 2, an experimental
dataset in Volume 3. Given that so many people in fire protection
engineering use FDS, you would know that you had made a contribution
to the field. A paper in a journal is just that -- a paper
in a journal.
On Aug 1, 6:40 am, Rein <rei...@gmail.com> wrote:
> I did not intend to imply that the reviewer criticized only the
> version of FDS. He/she had other issues with the work as well, of
> lesser importance though. BTW, the reviewer did not mention one single
> positive thing (or neutral) about the work. That is quite a
> challenge! You can judge for yourself; the work is Chapter 2 of this
> recent PhD thesis: http://hdl.handle.net/1842/3418. Anyway, enough of
> ...