Research and the release rate of FDS versions


Rein

Jul 30, 2010, 9:14:28 AM
to FDS and Smokeview Discussions, G.R...@ed.ac.uk
I would like to post in this forum some questions related to the
release rate of FDS versions.

First, note that I am not a code developer but a user. I am a
particular type of user: a researcher, an academic who studies
methodologies for using some of the tools available to fire safety
engineers. Hopefully I have helped, or will help, to develop the state
of the art in fire modelling. I certainly support improvements to
FDS and greatly admire the work of the FDS development team.

For the last two years I have been facing two problems stemming from
the different versions of FDS that are released with some frequency.
Research (my research at least) has a characteristic time that seems
to be significantly longer than the time between version releases.
This means that when we start a research project on modelling that
includes FDS, we use the newest version available, but by the time the
work is finishing and concluding, there are one or two new versions of
FDS available. Note: PhD theses in the UK last three or four years.

This has led to two issues in my research group:

I - Different results from different versions
We have accidentally observed twice already (in two different PhD
projects) that the same FDS input file produces significantly
different results in consecutive versions of FDS (e.g. v5.2.0,
v5.2.5 and v5.3.0). In one case it took us months to figure out the
problem. Both of these problems were discussed in due time in this
forum. My personal recommendation to my students is that they fix the
FDS version of their interest at the beginning of the project and stay
with it until the very end of the thesis.
I wonder if other researchers would like to express their views on
this issue?

II - Peer review of FDS results
We have had reviewers complaining and requesting rejection of our
modelling papers because the results were not obtained with the latest
version of FDS.
NOTE: just the review process in fire and combustion typically takes
anything from 3 to 16 months.

About modelling work on the Dalmarnock Fire Tests (Chp 2 of
http://hdl.handle.net/1842/3418), one anonymous reviewer recently said
"Given that [version] 5 has significant changes in the combustion
model and other submodels, there is no value to the community in
publishing a paper on FDS 4 unless the paper can demonstrate that the
conclusions on FDS performance will remain valid for the current
version and versions under active development".

To demonstrate that the conclusions of the work apply to the newest
version of FDS effectively implies repeating the thesis work. This is
not affordable in most cases, at least not for my group. Should
academics fear every new version of FDS because it means one extra
year of work for their PhD students?

I wonder if other researchers would like to express their views on
this issue?

III - Corollary
Following from issues I and II, I am confronted with the question:
what happens with buildings whose design was aided and approved using
previous versions of FDS? What are the implications for forensic
investigations that reached conclusions in court using previous
versions of FDS?

JWilliamson

Jul 30, 2010, 10:58:40 AM
to FDS and Smokeview Discussions
Guillermo,
I am glad that you are bringing this to everyone's attention.

"My personal recommendation to my students is that they fix the
FDS version of their interest at the beginning of the project and stay
with it until the very end of the thesis."
I would recommend a slightly more proactive approach. Given the
duration of thesis research, you cannot perform the final simulations
that will ultimately be published until later in the research process.
Students have to study the models and run preliminary simulations in
order to get up to speed with FDS. Along the way, the student needs to
become familiar with the development of FDS by participating in this
group and establishing contact with the developers directly. The
students need to determine whether upcoming revisions will have a
substantial impact on their results/conclusions. That knowledge will
have a big impact on the version of FDS selected when the student
performs their final simulations.

Even this is probably not a perfect approach, because the
writing/publishing process will frequently take a year (or more) from
the time you start running final simulations. Major revisions, like
advancing from 4.x.x to 5.x.x (to 6.x.x in the future), will always be
problematic because the model changes can be dramatic. Minor
revisions, e.g. 5.2.x to 5.3.x, are only problematic if a specific
submodel is changed that directly impacts your research. That is why
the student needs to be in contact with the developers along the way.

On reviewer comments: You do not always need to reproduce the
simulations with the newest version, but sometimes it will be
necessary. You can sometimes get away with identifying the issues that
have changed and those that remain the same. If you get stuck with a
comment, and the changes to the code are too dramatic to translate
across versions, I think you have to concede that you cannot publish
the work beyond the thesis. Alternatively, you could try to get a
second student (maybe an undergrad?) to reproduce the work with the
newer version of FDS.

On design/forensic applications: These concerns can be addressed by
managing uncertainty and using sensitivity analysis. It is admittedly
more difficult with forensic investigations.

Regards,
Justin

Bryan Klein

Jul 30, 2010, 2:07:17 PM
to FDS and Smokeview Discussions
I have seen journal articles that proclaim some deficiency or
curiosity in FDS results which had already been resolved and released
in subsequent versions of FDS by the time the article was written and
pushed through the publishing process. This is not to say that the
information was not valuable at the time it was written, especially as
feedback to the FDS development group and/or user community. But it
does beg the question: why was it deemed appropriate for archival
publication if it becomes practically meaningless in such a short
period of time?

I realize that the academic model places high merit on this popular
method of information dissemination. But that does not mean it is
the most appropriate form or method for the situation. In my opinion,
the output that academia respects and gives credit for should probably
be expanded to include work and formats that bring about more direct
improvements to the field. In this improved model, contributing an
updated sub-model to the FDS development project and working with the
development team to have it incorporated and documented in the
User/Technical guides would have much greater value overall than a
paper discussing why the version of the day doesn't do something quite
right or as expected. Then all the student would have to do for the
"thesis" would be to write up the effort and point to SVN revision
numbers.

It seems it is often the case that a student picks a topic to work
on, runs some models, maybe conducts an experimental series to
validate or compare against FDS results, and then writes up a paper
about the experience and washes his hands of any future involvement:
mission accomplished. This is almost entirely useless to the real work
that must be done to advance the field.

Kevin and the development team have spent a good amount of time
writing up documents like the 'FDS Road Map' (
http://code.google.com/p/fds-smv/wiki/FDS_Road_Map ) and even provided
there, in bold text, "Potential Research Topics" that any student or
professor could take on as a project. In the almost 4 years I was at
NIST, very few if any were taken on by anyone outside the development
team, and when they were, the work was not communicated or coordinated
with the developers and did not lead to changes in the FDS code base.
Why are such obvious opportunities for advancement overlooked? And if
these are not the most pressing issues of the day, and there are
better topics for research, why are they not being communicated to the
development team, either for posting to the Road Map or for
coordinated research and development? I guess because that would not
work well with the academic publishing model.

It seems that in general the problems that were stated are not caused
by the frequency of FDS releases, but by attempting to fit something
that is and should be evolving and dynamic into a container and
process that requires it to be essentially static and unchanging.
FDS will never be a finished work; it is a tool that must continue to
make progress as technology, science and numerical methods advance
what we know and can do with computational models.

To respond point by point:
I - Advise the students to understand the development pattern of the
project, and to make themselves aware over time of changes to the code
and what they mean for their work. There are change logs, an open
repository of source code that is browsable online, and a discussion
group/issue tracker that can be used to clarify what is happening with
FDS. If the changes impact the results, understand them and either
update or explain in the paper why updating was unnecessary (a minimal
sketch of such a check is given after point III below).

II - I would argue that rejecting a modeling study only on the grounds
of not using the very latest version of FDS is laziness on the part of
the reviewer. Not understanding what is different between the version
used and the most recent version at the time of submission is the same
characteristic on the part of the author. I could see a case where a
well-educated reviewer who knows what has changed in the FDS model
since submission might recognize an area that should be addressed in
the results, and in that case the reviewer should reject and state the
areas that need to be addressed.

I would agree with the reviewer you quoted if your paper attempts to
state anything about the predictive quality of FDS 4. FDS 5 has
improvements that may impact the results in favor of predictive
quality and this should not be overlooked. Just because it is hard to
do does not mean it isn't right.

III - I agree with Justin about how these things are handled in
engineering and forensic work. Managing uncertainty and erring on the
side of conservative/responsible engineering analysis tend to guard
against a need to rework the entire process with each subsequent
release. But, even so, if changes to FDS uncover significant
discrepancies in the merit of previous results, then there should be a
review and reassessment to ensure the best possible understanding of
the scenario is applied. My question would be, who is responsible for
the follow-up review to determine if subsequent releases provide
better or different answers?
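
Coming back to point I: as a rough illustration of such a check, the
Python sketch below compares the device outputs written by two
versions of FDS for the same input file and flags any device whose
peak value has drifted by more than a chosen tolerance. The file
names, the 10% threshold, and the assumption that the device output
sits in a CHID_devc.csv file (units on the first row, device names on
the second) are illustrative rather than prescriptive.

```python
# Sketch: flag large differences between the device outputs of two FDS
# versions run on the same input file. File names are illustrative.
import csv

def read_devc(path):
    """Read an FDS device output file (row 1: units, row 2: device names)."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    names = [n.strip() for n in rows[1]]
    return {name: [float(r[i]) for r in rows[2:]] for i, name in enumerate(names)}

def compare(old_path, new_path, tol=0.10):
    """Print devices whose peak value differs by more than tol (fractional)."""
    old, new = read_devc(old_path), read_devc(new_path)
    for name, values in old.items():
        if name == "Time" or name not in new:
            continue
        peak_old = max(abs(v) for v in values)
        peak_new = max(abs(v) for v in new[name])
        if peak_old > 0 and abs(peak_new - peak_old) / peak_old > tol:
            print(f"{name}: peak changed {peak_old:.3g} -> {peak_new:.3g}")

if __name__ == "__main__":
    # The same case run under two consecutive FDS releases.
    compare("case_v5.2.5_devc.csv", "case_v5.3.0_devc.csv")
```

If nothing exceeds the tolerance, that is already a useful sentence
for the paper; if something does, the student knows exactly which
change in the new release to go and understand.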

DanielBak

Jul 30, 2010, 2:40:53 PM
to FDS and Smokeview Discussions
All,

These are very interesting suggestions and comments about the academic
side of the key question.

Regarding the consequences for forensic work using FDS, I believe that
the following should be considered.

Let's say that an FDS user was tasked to determine whether an
individual was affected (injured or died) during the course of a fire.
The user utilized FDS version 4. The user, with all best intentions,
science, caution, ethical behavior, and appropriate sensitivity
analyses, determined that there was a high likelihood that the
individual died during that fire. The court then took appropriate
action against the individuals who contributed to the death of the
victim, based on the testimony of the expert witness.

A few years later, we discover that had the same user utilized
version 6 of FDS, the results and conclusions reached would have been
less critical to the well-being of the "victim". In other words, the
conclusion would be that the victim could not have died from the fire.
The courts would then have taken a different course of action.

The scenario painted is not unusual. I contend that it is the price
of progress, improvement, and refinement of the modelling.

I don't believe that it is recommended to revise forensic conclusions
unless it is done out of purely academic curiosity. I am certain that
this topic could be debated to death, but let's not go there. Rather,
let's recognize that fire science is evolving and should be
practiced with caution, best intentions, and the most up-to-date and
proven science.

Cheers, Daniel

Bryan Klein

Jul 30, 2010, 9:00:19 PM
to FDS and Smokeview Discussions
Daniel,

I would hope that cases of such a serious nature go through much more
rigorous review and would include more than the results from only a
few FDS simulations. For litigation, I have typically seen FDS used
as a supplemental tool to investigate and add understanding to an
issue, with the support of many other sources of information to reach
a final set of conclusions.

But, I accept your scenario as a possibility, and would say that a
similar trend of reevaluating forensic evidence is happening in
regards to DNA testing to overturn wrongful convictions, and is not
just for academic curiosity. http://truthinjustice.org/invalid-science.htm

How well would a modeler's work stand up under future scrutiny?

Interesting stuff,
-Bryan

Kristopher Overholt

Jul 31, 2010, 3:57:23 AM
to fds...@googlegroups.com

I would also add that there has been a great deal of talk in the computational science community about reproducible experiments, i.e. attaching the code and data that were used to run the models. The delay of traditional publishing, the development speed of the model, and the ease with which third parties could share the very input file discussed in a paper all create a unique situation that makes me wonder why the input file cannot simply be included with the final paper or on a website resulting from the study.
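
To show how little mechanical effort this would take, here is a minimal Python sketch that bundles an input file, the FDS version used, and the key output files into a single supplementary archive with checksums; the file names and version string are purely illustrative.

```python
# Sketch: package the input file, FDS version, and key outputs as
# supplementary material for a paper. File names are illustrative.
import hashlib
import json
import zipfile
from pathlib import Path

def package_case(input_file, output_files, fds_version, archive="supplementary.zip"):
    """Archive the case with a small manifest recording version and checksums."""
    manifest = {"fds_version": fds_version, "files": {}}
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
        for path in [input_file, *output_files]:
            p = Path(path)
            manifest["files"][p.name] = hashlib.sha256(p.read_bytes()).hexdigest()
            z.write(p, p.name)
        z.writestr("manifest.json", json.dumps(manifest, indent=2))

if __name__ == "__main__":
    package_case("compartment_test.fds", ["compartment_test_devc.csv"],
                 fds_version="5.5.3")
```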

It's one thing to have an open-source model that the developers are working hard on, and another to have a huge number of papers and talks that are out of sync due to factors beyond their control.

In my humble opinion, this is as much a result of the publication lag as it is of tradition or an unwillingness of scientists, students, or engineers to share their models with the community as a whole. In a field involving life safety and property protection, we should embrace the idea of open source more than any other, and I think we can lead the way with respect to openness and the speed and availability of results.

Kris


Rein

Jul 31, 2010, 7:15:12 AM
to FDS and Smokeview Discussions

Good debate. Thanks to all.

Justin and Bryan - thanks for the suggestion that the student keep
track of upcoming releases before settling on the FDS version for the
PhD. However, one important impediment to this approach is that parts
of the course of research are unpredictable by their very nature;
research is about discovering something we did not know before
starting, and thus parts of the work very often end up significantly
different from what the initial PhD thesis plan envisaged. Surely
anyone in this forum who has been through research graduate school
knows what I am referring to.

Bryan - the output that academia respects most and gives more credit
for is funding, not papers. If my team were to win funding, from NIST
or otherwise, to research topics listed in the roadmap, we could
contribute to the roadmap. So far, our sponsors have been convinced to
fund us on other topics (blind simulations, large-scale experiments
and a priori/a posteriori modelling, forecasting fire dynamics,
multiscale modelling of tunnel fires, modelling ignition at high heat
fluxes). As we are always sending proposals to potential sponsors,
maybe NIST will get one soon :)

Daniel - I agree; fire science is a very complex and slowly evolving
discipline that should be "practiced with caution, best intentions,
and the most up-to-date and proven science". The problem for me is
that in fire, we are too few people trying to solve too complex a
problem too fast. We ought to expand the community to reach further
into science and engineering, and also to attract more brilliant
minds. We also need to debate more, and in a transparent manner, as we
are doing here. Enough of the old boys' club behaviour.

Kris - I agree with you that the input file should be added to the
paper as an appendix. The only problem I have had with that in the
past is that certain types of reviewers will criticize every small
line of your code instead of focusing on the bigger picture: the
contribution and findings of the work. Thus, those papers have not
been published so far. But I wish we could publish a paper on FDS one
day with the input file included.



An alternative to the issues I originally posted, which I have been
considering lately, is to avoid PhD theses that involve only FDS or
put a large weight on FDS results. If the thesis uses other codes
as well (Fluent, CFX, Star-CD, FireFoam, zone models), it is more
difficult for the work to be damaged by diverging results in new
versions, or by reviewers demanding the latest release.

I do however think there is merit in studies using previous versions
of FDS (within limits). My take is that the conclusions of a study of
a previous version of FDS, while they might not apply directly to the
newest version, do apply to the technology and literature that used
previous versions of FDS. They also apply to the buildings designed
and to the forensic findings reached with the aid of previous
versions. It is not as if the history of science and technology
restarts from zero whenever a new version of a code is released.

I am finishing this reply with a note on a paper recently published in
Fire Safety Journal, which I read yesterday, comparing FDS 4 and FDS 5
results. It is an interesting read:

"Comparison of FDS predictions by different combustion models with
measured data for enclosure fires" Fire Safety Journal 45 (5), August
2010, Pages 298-313. http://dx.doi.org/10.1016/j.firesaf.2010.06.002

dr_jfloyd

Jul 31, 2010, 9:33:42 AM
to FDS and Smokeview Discussions
Guillermo, you paint the picture that the sole reason this paper was
rejected was that it used FDS 4. Is that truly the case or was the
use of a very obsolete version of FDS in a paper for an archival
journal only one component of the review?

When I review a journal paper that uses FDS, I look for a number of
things, including:

The quality of the paper:
-Is there enough of a description of the FDS modeling so that the
reader can understand what was done?
-Do I feel that FDS was used properly?
-Are conclusions substantiated by the work presented in the paper?

The quality of the work (is it worthy of publication in an archival
journal):
-Is the work original enough?
-Will the paper be of use to the larger community once published?

This last is where many FDS papers fall short. If all a paper does is
document how FDS x.y.z was used to simulate tests a, b and c, and
present the results, then I am probably not going to recommend
publishing the paper. Why? Because there is nothing there to stand the
test of time; it will in all likelihood be dated by the time it is
published. Now if the paper were to discuss, for example, how
different approaches were tried to model the tests, how the results
change, and why, then I would be more likely to recommend it for
publication, as there is value to the paper beyond just the results.

Does this mean students shouldn't use FDS? No, I don't think that is
at all the case; however, it may place some additional burden on the
student. Clearly, at some point the student needs to fix the version
being used, and I agree that the reality is that this will need to
take place a number of months before writing the dissertation. This
clearly brings with it the risk that a new version may make the
student's work dated. If all a student is doing is running FDS
simulations and comparing the results against a test, then becoming
dated is almost guaranteed (though I would ask: is that really worthy
of an advanced degree?). If a student is using FDS to gain insight
into a set of experiments, using FDS to perform virtual experiments,
adding a new physical model to FDS, or similar uses of FDS, then the
work is less likely to become dated. The student, though, should
monitor changes in FDS and, when doing the final writing, should
address the potential limitations of the version used relative to the
version currently released. In my opinion, however, the fact that a
student's work may be dated by the time the student is done is not a
reason for the student not to do the work (though it may make
publication difficult). The purpose of a dissertation is to
demonstrate the ability to perform original research, creative problem
solving, critical thinking, and all the other skills we expect of an
MS or PhD.



Rein

Aug 1, 2010, 6:40:39 AM
to FDS and Smokeview Discussions

I did not intend to imply that the reviewer only criticized the
version of FDS. He/she had other issues with the work as well, though
of lesser importance. BTW, the reviewer did not mention one single
positive (or even neutral) thing about the work. That is quite a
challenge! You can judge for yourself; the work is Chapter 2 of this
recent PhD thesis: http://hdl.handle.net/1842/3418. Anyway, enough
talking about this infamous review; I only added it as an illustrative
example in the original message. It is over now. The discussion here
should be more general.

I agree with the general guidelines that Jason follows for reviews. I
personally put more weight on originality. But experts are free to
choose their own criteria and guidelines, and it is up to the Editor
to decide whether the criteria are fair and well supported or not.

G.

Kevin

Aug 2, 2010, 9:14:12 AM
to FDS and Smokeview Discussions
Less than a year ago, I wrote a blog post about the verification and
validation process that we have developed for FDS:

http://blog.fds-smv.net/2009/09/fds-verification-and-validation.html

In particular, I addressed some of the issues that we have encountered
in regard to journal papers on FDS. To quote the blog:


"When FDS was first released, we had in mind the idea that V&V would
be performed by students and engineers using the model for research or
commercial applications, and the results would be published in the
fire literature. This did indeed happen, and there are numerous
papers, reports, and theses spread across the various journals and
websites. However, several years ago as we were working on a V&V study
with the US Nuclear Regulatory Commission, it became apparent that we
could not just depend on the fire literature as a repository of FDS
V&V work. There were several reasons:

-- V&V, especially Validation work, cannot be easily crammed into a
short journal article.

-- Results of older versions of the model lose their validity after a
few years.

-- Often the experimental conditions and uncertainties are unknown.

-- Often the work is performed by students who are just learning how
to use the model.

-- There are too many different ways of quantifying accuracy, which
gets back to the question above as to what "works well" means.

-- Cases have to be re-run with each new release, and we cannot expect
journals to keep publishing the same old stuff.

For these reasons, we decided to maintain two manuals, Volumes 2 and 3
of the FDS Technical Reference Guide, called the FDS Verification and
Validation Guides, respectively. In these, we have compiled existing
V&V work and continually add case studies to demonstrate mathematical
accuracy and physical fidelity."


The blog goes on to encourage students who would like to work with FDS
to work within the framework that we developed. We have a Road Map
filled with interesting, challenging project ideas, plus an ever-
growing collection of V&V calculations that we'd like to continue to
add to and improve. Many of the changes that we make to FDS come
directly from the results of our V&V calculations. They also come by
way of researchers who alert us to potential problems that they are
having with a particular feature in FDS or Smokeview.

All this being said, only a few students have contacted us or adopted
our process of working with the source code, automating the running of
cases and the plotting of results, and adding to our V&V Guides. And
yet, we receive paper after paper for review with the flaws we've
listed above.
Most of these papers consist of routine validation work (comparison of
FDS version x.y.z with experiment) that is more appropriate for our
Validation Guide than for an archival journal. These journals cannot
possibly publish all of these papers, no matter how well the work is
performed. We'd much prefer to see ideas for new algorithms published
in archival journals (along with some validation work to demonstrate
that the new algorithm is an improvement over the old). Such work will
stand the test of time. Routine validation work is only good for a few
months, after which a new minor release might slightly change the
results. This is why we re-run all our validation cases with each
minor release. It is not only good development practice, it is also a
requirement of the regulatory agencies that use FDS.
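
For readers curious what that kind of automation can look like, here
is a minimal sketch (not the actual NIST scripts; the executable path
and case names are illustrative, and it assumes each case's CHID
matches its input file name): it runs a short list of validation cases
with a pinned FDS executable and records the peak value of each device
so the numbers can be compared against the previous release.

```python
# Sketch: re-run a list of validation cases with a pinned FDS executable
# and collect the peak value of each device for later comparison.
# The executable path and case names are illustrative only.
import csv
import subprocess
from pathlib import Path

FDS_EXE = "/opt/fds/5.5.3/fds"          # executable pinned to this release
CASES = ["steckler_010.fds", "steckler_014.fds"]

def run_case(case):
    """Run one case and return {device name: peak value} from its _devc.csv."""
    subprocess.run([FDS_EXE, case], check=True)
    devc = Path(case).with_name(Path(case).stem + "_devc.csv")
    with open(devc, newline="") as f:
        rows = list(csv.reader(f))
    names, data = [n.strip() for n in rows[1]], rows[2:]
    return {n: max(abs(float(r[i])) for r in data)
            for i, n in enumerate(names) if n != "Time"}

if __name__ == "__main__":
    for case in CASES:
        print(case, run_case(case))
```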

I have had a number of long conversations with friends in academia,
and we always reach a very difficult impasse. It has to do with the
Goal. Our Goal as model developers is to produce a fast, accurate,
robust, useful piece of software for performing fire and low-speed
fluid flow simulations. This is consistent with our mission of
technology transfer at NIST, but it is also a personal passion shared
among all the developers and those who've helped us this past decade.
We feel that it is in everyone's best interest to develop and maintain
a fire model that is useful, open, and above all, accurate. This all
sounds great, so why the impasse? Because the mission of a university
professor is to educate students and to do research. And the way that
this research is communicated is via papers. Trouble is, much of what
we do to develop and maintain FDS is not appropriate for archival
journals. And certainly we do not work on 3 or 4 year cycles.

Here's a case in point. Last year, a message was posted to the
Discussion Group saying that there was a problem with the velocity
boundary condition in FDS. The person posting the message said that he/
she could not get the right pressure drop when simulating simple air
flow through a duct. Randy McDermott identified the problem and
implemented a better velocity BC (Werner-Wengle wall model) than the
one that was in FDS at the time. This is not to say that all
calculations done prior to this were fatally flawed, but rather the
new model worked better than the old, and there was no cost increase
in CPU time. Better physics, no CPU hit, no brainer -- with the next
minor release of FDS we implemented the model after checking that none
of the V&V cases that we run with each minor release was adversely
affected. In a nutshell, this is how we develop code. Randy probably
spent the sum total of 2 weeks on it, and no archival journal papers
were published (alas for Randy!). Yet we all have benefited from this
improvement.

Now if instead of working at NIST, Randy were teaching at a
university, here's how this would have happened. Professor Randy would
have recognized that the FDS velocity BC needed improvement. He would
have assigned the problem to a student, who would have pored through
the literature and found papers on about a dozen different wall
models, including the W-W model. Months would be needed for this, then
more months to learn FDS, more months to learn how to program it, more
months to run a bunch of cases. Now a few years have gone by, and it's
time to publish those papers. The papers get sent to a fluids journal,
say, and they are rejected. Why? Because the W-W model has already
been shown to work in a wide variety of different applications, and
the work that the student did, while beneficial to the development of
FDS, is not really new and appropriate for an archival journal.
Perhaps in conference proceedings. In any case, chances are that in the
few years that this has happened, Randy's alter-ego at NIST or VTT or
wherever might have already done the work (Prof Randy would not have
told us about his efforts). So at the end of all this, we have a
student who hopefully has received a good education, but there is
little else to show for all that hard work. Worse yet, the student
graduates and all of his/her experience is lost. This is not a very
efficient way to make technological advances.

The tone of the original post implies that there is something wrong
with the way we develop FDS: that it does not fit within the academic
framework
that has evolved over the past millennium (427 years if you're at the U
of Edinburgh). All we can say is that we've borrowed lots of
development ideas from
organizations and companies like the one whose name is at the top of
this web page. Do you think that Google, Apple, Microsoft, etc., can
delay making improvements to their products so as not to disrupt the
progress of graduate students? Do you not concede that the electronic
gizmo in your pocket is going to be rubbish in less than a year? I
cannot imagine a computer science professor writing to Google telling
them to slow down -- the students can't keep up.

But I can already anticipate what you're going to say next. What about
peer review? What about careful academic scholarship? My response is
that we've
put into place a system of quality control via our V&V Guides, a system
that maintains a much higher degree of reliability and accuracy than
we had even
a few years ago. And we believe that this system is much better than
the somewhat ad hoc way that we used to do things -- that is, develop
some
algorithm, run some calculations, publish a paper, and then repeat.
Until FDS 5, we did not carefully control the FDS source code. We had
no
versioning system and no systematic way of releasing new versions. We
now have what is commonly called a "Configuration Management Plan." A
description
of it is in Volume 4 of the FDS Technical Reference Guide. The four
volumes that make up the FDS Technical Reference Guide -- Mathematical
Model, Verification,
Validation, and the Configuration Management Plan -- ought to be the
starting point for a review of FDS. What is in the literature does not
present an accurate picture of what FDS is today. We're not opposed to
publishing papers. We even do it from time to time. But journal
publications do not drive our development efforts. We're driven by
user need, and we cannot wait 3 years to implement something that is
going to help someone right now.

When I was in school, we were taught that engineers solve problems.
That is what we do each and every day. Ask yourself this question --
are you solving problems or are you just publishing papers? Most of
the FDS papers I see in the literature do not offer solutions to
problems. They certainly identify problems, but that's easy. You don't
need an advanced degree to download FDS, run a few calculations, and
then observe that something doesn't seem quite right. We get that
every day in this forum. Instead of worrying about publishing papers,
it would be much more beneficial to all of us to read the Road Map and
monitor the Issue Tracker to see what we're working on and then help
us solve these problems. If I were sitting on a Ph.D review panel,
this is the first (and last) question that I would ask the student --
did you solve a problem?

Guillermo writes: "The output that academia respects most and gives
more credit
for is funding, not papers. If my team were to win funding, from NIST
or otherwise, to research topics listed in the roadmap, we could
contribute to the roadmap. So far, our sponsors have been convinced to
fund us on other topics (blind simulations, large-scale experiments
and a priori/a posteriori modelling, forecasting fire dynamics,
multiscale modelling of tunnel fires, modelling ignition at high heat
fluxes). As we are always sending proposals to potential sponsors,
maybe NIST will get one soon."

I cannot believe that your sponsors would object to the fact that
their funding led to an improvement in FDS, regardless of the original
intent. Why else would they fund you? In fact, if you have used FDS to
address any of the topics that you have listed, then you will
undoubtedly come across limitations in the model. Why not try to solve
these problems? Simply publishing a paper pointing out that some older
version of FDS did not simulate something appropriately is not helpful
to us or anyone running a current version of the model. Proposing a
solution, and then working with us to implement that solution would be
of value, and the student would benefit enormously from learning some
very useful software development skills. Even if no papers are
published, I still see great value in working a problem through to a
final solution -- a detailed description in Volume 1 of the Tech
Guide, a few good verification cases in Volume 2, an experimental
dataset in Volume 3. Given that many people in fire protection
engineering are using FDS, you would know that your contribution to
the field has been made. A paper in a journal is just that -- a paper
in a journal.




Rein

Aug 2, 2010, 6:56:04 PM
to FDS and Smokeview Discussions

I never said or intended to imply that FDS development should be
slowed down. The thought is ludicrous, but I do see that Kevin likes
academics...

G.