I am posting it here to open the conversation to the public and to archive it.
Wed, Jun 21, 2017
Dear Bryan,
I was asked on social media to give my opinion regarding the recent public resource shared by Thunderhead (
http://www.thunderheadeng.com/2017/06/smoke-visibility-and-obscuration-in-pyrosim/). Since I highly value your initiative in sharing knowledge and have some potentially critical opinions, I would rather move the discussion to personal communication. I have also taken advantage of this form of communication to include Kevin and Randy in the discussion, as it is more about the physics and less about the graphics engine.
To set the theme for my opinion, I will start with a quote from Ferziger, following Prof. Blocken (10.1016/j.jweia.2014.03.008):
“The frequently heard argument ‘any solution is better than none’ can be dangerous in the extreme. The greatest disaster one can encounter in computation is not instability or lack of convergence but results that are simultaneously good enough to be believable but bad enough to cause trouble.” (Ferziger, 1993c)
In the paper, Daniel presented the way visibility is calculated in CFD in a very plausible way, but then also gave the impression that we can model the optics with good results. The problem is, we cannot (or we are extremely uncertain about how well we do it). Here is why I think that:
1. The use of the Lambert-Beer law is transferred from optical spectroscopy studies, in which the uncertainty of transmittance modelling has also been critiqued (10.1002/cphc.201600114). In laboratory conditions with a single, precisely calibrated light source, it is challenging to get this right. How easily this law was transferred directly from optical theory to complex flow fields in buildings is something I have never understood.
2. In laboratory tests, I found it easy to match the obscuration from an optical densitometer with CFD for high-Ys fuels. We have the same chamber and measurement equipment as VTT (it is a standardized test chamber) and used a very similar fire source. The less sooty the fuel we used, the more difficult the match became. This is due to the exponential nature of the law. At high mass densities of smoke (obscuration densities of 0.5 and more), small changes in the mass density cause almost no difference in the measured transmittance, which gives the illusion that the error is small. At low values of obscuration, this measurement is very unpredictable and far more reliant on the correct value of the specific mass extinction coefficient (which, as Widmann showed, is dependent on wavelength). Sadly, at the smoke mass density corresponding to the critical visibility values (5-10 m), this is quite a problem. This is one of the main reasons why in our paper [10.1016/j.firesaf.2017.03.053] we proposed the use of a conservative Ys = 0.1 g/g value. The same problem with transmittance was noticed by Dr Justin Williamson from SWRI in his studies; I had a short discussion with him on this a year ago.
3. The K (or C) values used in theory for "emitting" and "reflecting" signs are not constants but variables dependent on the dimensionless brightness of the object (and the size of the object!). You can find this in the conclusions of Jin's original 1970 paper, but it was never carried into the SFPE Handbook. The original paper is attached (I found it on the open internet, so I hope I am not breaking the law by sharing it :)). You also need to understand that this value was plotted for 180 lx external light and for white smoke (a scattering effect; Lambert-Beer should not apply here...). Also, in the new SFPE Handbook chapter, Jin's K values are discredited as "possibly too low". It is impossible to transfer these experimental results into a general model of visibility in the way you would like to use it.
4. The critical contrast coefficient (typically taken as 0.02) ranges from 0.01 to 0.05, and this is also hidden somewhere in the K constant. There is an immense spread in the population on this, and the best research we have is Jin's work, done through glass and in white-smoke conditions.
5. Validation in laboratory conditions and results for real buildings are two different things. In buildings, you have a mix of daylight and artificial light (the Lambert-Beer law can be used only for single sources, and reflections ruin everything...). This means that not only can you not calculate the amount of light at each point in space, it is also difficult to get the luminosity of the walls right. Without this value, you cannot model the visibility of an object through RGB, as you did with the black boxes. Ray tracing does offer some possibilities, but that is still a rough estimation, not a measurement.
6. Daylight and LEDs are polychromatic, so again the Lambert-Beer law does not apply... We would probably have to use some approximation function over the different wavelengths. All these small uncertainties in the law make moving from validation studies to real buildings really difficult.
7. I have not even touched on how deeply the user influences this through the choices of Y_soot, H_c_eff, Ks, external light, etc. A small change in the input parameters yields huge changes in the results, and you must be aware that this approach could be used to justify choices related to evacuation signage and compartment lighting.
8. I would also like to mention the problems with the threshold values for visibility. For example, the 5 m value for familiar and 10 m for unfamiliar occupants come from experiments on BRI Tokyo experts (familiar) and their wives (unfamiliar), and their stress levels in a smoke chamber filled with irritant smoke. This is really not a well-documented theory of how people move through smoke... I know this is not connected to your model, but if we give people the illusion that they can model visibility, they will rely on this part of the science to find the threshold values.
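To make the exponential-sensitivity argument in points 1-2 concrete, here is a minimal Python sketch. All numbers in it (the K_m of 8700 m^2/kg, the soot densities, the 1 m path) are illustrative assumptions of mine, not measured values:

```python
import math

def transmittance(rho_soot, path_len, k_m=8700.0):
    """Lambert-Beer transmittance T = exp(-K_m * rho * L).
    rho_soot [kg/m^3], path_len [m]; k_m is an illustrative
    specific mass extinction coefficient [m^2/kg]."""
    return math.exp(-k_m * rho_soot * path_len)

# Dense smoke: a 10% error in soot density barely moves the measured T...
t_dense = transmittance(6e-4, 1.0)
t_dense_err = transmittance(6e-4 * 1.1, 1.0)

# ...while in thin smoke the same relative error shifts T far more,
# so inverting the measurement leans heavily on the assumed K_m value.
t_thin = transmittance(2e-5, 1.0)
t_thin_err = transmittance(2e-5 * 1.1, 1.0)

print(f"dense: T = {t_dense:.4f} vs {t_dense_err:.4f}")
print(f"thin:  T = {t_thin:.4f} vs {t_thin_err:.4f}")
```

A 10% density error is almost invisible in T for dense smoke yet clearly measurable for thin smoke, which is why the low-obscuration regime (the one that matters for 5-10 m visibility) depends so strongly on the assumed extinction coefficient.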
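Similarly, the input sensitivity described in points 3-7 can be sketched by sweeping plausible-looking user choices through the Jin-type estimate V = C / (K_m * rho_soot). The well-mixed assumption, the fuel mass, the volume, and all parameter ranges below are hypothetical, chosen only to show the spread:

```python
from itertools import product

def visibility(y_s, k_m, c_factor, fuel_mass=1.0, volume=1000.0):
    """Jin-type estimate V = C / K_ext, with K_ext = K_m * rho_soot.
    Soot is assumed well mixed: rho_soot = y_s * fuel_mass / volume.
    All values are hypothetical, for illustration only."""
    rho_soot = y_s * fuel_mass / volume   # kg/m^3
    return c_factor / (k_m * rho_soot)

# Sweep defensible-looking user choices: soot yield, mass extinction
# coefficient, and the "emitting" (8) vs "reflecting" (3) C factor.
sweeps = product((0.01, 0.1),            # y_s [g/g]
                 (4400.0, 8700.0),       # k_m [m^2/kg]
                 (3.0, 8.0))             # c_factor [-]
results = [visibility(*s) for s in sweeps]
print(f"visibility ranges {min(results):.1f} .. {max(results):.1f} m")
```

An order-of-magnitude spread from inputs that all look defensible on paper is exactly the room for abuse described above.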
Now, the idea of being able to visualize realistic visibility in smoke is fantastic. If we could prove through modelling that we should change the placement of signs, or use larger/brighter signs, this would be a powerful tool. But we do not really model visibility in CFD. We just use simple mathematics to translate the mass density of smoke into a value that a layman can understand. I fear that if you advertise this approach on the market, it will be abused to justify terrible engineering solutions; I base that on how massively FDS is misused in my country. I have already seen such an approach used to measure distances in Smokeview, but that was quite rare. Now people may cite your recommendations, and it may be much harder to explain to laymen why this modelling is so uncertain.
I wish I could write to you, "Bryan, this model is terrible, use this one instead", but sadly I do not see a viable alternative. The default values in FDS for the variables related to estimating visibility seem conservative and safe, but I don't know how well this transfers into post-processing of smoke obscuration. One of my colleagues at ITB has started a PhD on modelling light obscuration in sooty smoke at various wavelengths, but we are only starting to build the lab and the RGB densitometers. It will take 2-3 years at best to provide answers. If you have any ideas about how we can help you improve this in the near future, I will do my best to help. We will also try to communicate any reliable results we get. I look forward to a good visibility model!
As of today, I would rather advertise this approach as a way to give an immersive sense of what smoke conditions in a fire can look like (for VR training this is probably good enough), but I would refrain from building the illusion that this theory can be used as a precise engineering method. It is a very rough estimation, at best.
Please do not treat my opinion as a critique of your work - I truly believe that your developments are very useful to the community. I just had the "pleasure" of digging into this theory and was really shocked by the results... This is why I've chosen to share my doubts with you.
With best regards
Wojciech Wegrzynski
-------------- Next Message --------------
Wed, Jun 21, 2017
Wojciech Wegrzynski,
I want to thank you for your comments. Since this is not my area of expertise, your knowledge is very useful. My goal in the post was to summarize how the calculation is done in FDS.
I would like to hear what Kevin and Randy have to say.
Sincerely,
Daniel Swenson
-------------- Next Message --------------
Wed, Jun 21, 2017
All:
This recently became an issue in Smokeview when Glenn changed the default value of the smoke albedo. We noticed that the "beam_detector" verification case (which shows up in the user guide) changed behavior. This brought to my attention that we really did not have a quantifiable verification case for VISIBILITY (we just happened to look at the image in the user guide).
I know now that Glenn has a verification case for Smokeview, but we still do not have one in the FDS verification suite that would alert us to a problem with the beam_detector case.
Further, to my knowledge the 2007 VTT cases are not in the current validation guide. I do not know why, or maybe I am missing them (Jason or Kevin please comment).
It would be great if we could get these cases into our guides.
Best,
Randy
-------------- Next Message --------------
Wed, Jun 21, 2017
Re verification case:
Are you asking for a beam_detector case for the FDS devc output, or a case for Smokeview rendering? If we want a beam_detector case for the graphical representation of visibility, someone will have to write an image-processing routine (I don't know if Matlab can do this or if we would need other software) to determine whether a rendered Smokeview image has the correct RGB values as a function of depth.
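To illustrate what such a routine might look like, here is a sketch in plain Python/numpy (no Matlab needed). I do not know Smokeview's actual blending model, so the sketch assumes a simple alpha blend of an obstruction face against a uniform smoke colour along the ray; all grey levels, depths, and smoke properties are made up:

```python
import numpy as np

K_M, RHO_SOOT = 8700.0, 1e-4      # m^2/kg and kg/m^3, illustrative
BG, SMOKE = 200.0, 40.0           # grey levels: lit OBST face, smoke colour

depths = np.array([1.0, 2.0, 5.0, 10.0])        # m, camera-to-face distances
t_expected = np.exp(-K_M * RHO_SOOT * depths)   # Beer-Lambert per ray

# "Rendered" grey levels under an assumed alpha-blend-along-the-ray model
pixels = BG * t_expected + SMOKE * (1.0 - t_expected)

# The check: recover transmittance from the image and compare against
# what a DEVC-style obscuration output would predict at that depth.
t_recovered = (pixels - SMOKE) / (BG - SMOKE)
print("max error:", float(np.max(np.abs(t_recovered - t_expected))))
```

With the real renderer, the `pixels` array would instead be sampled from the Smokeview image at the known face locations, and the tolerance on the comparison would become the pass/fail criterion for firebot.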
Re VTT 2007:
Before Dan's post, I can't say I recall having seen a presentation on the VTT visibility experiments. Did the 2007 report author provide us with the fds files, data in csv format, and a test report?
Wojciech,
I don't disagree that there are a lot of assumptions behind what we use to determine visibility in design. However, buildings requiring smoke control systems are being built around the world today, and we can't tell an architect to just wait a decade before starting construction so we can settle the science on smoke obscuration.
If you use the simple equations in NFPA 92, you can wind up with cost-prohibitive exhaust systems that do not, in my opinion, add commensurate value for safety. If we want open floor plans and atriums in our buildings, what options do we have other than linking a prediction of soot density to the ease of egress? At this point, the concept of visibility as we output it is what we have available to us. In my opinion, if reasonably conservative fire sizes and total soot production are used, and designs also evaluate temperature and toxicity (which cause the actual harm), then we have mitigation for errors in our visibility assumptions. Now, I agree there are people using models whose modeling assumptions lack adequate conservatism. That, however, is not a developer issue; it is an issue for standards-setting bodies and regulatory authorities.
Jason Floyd, Ph.D. | Senior Consultant
JENSEN HUGHES
Advancing the Science of Safety
-------------- Next Message --------------
Wed, Jun 21, 2017
Re verification:
Glenn has the image part worked out (we may need to add something to more easily output the numbers for post-processing). I would like to get this coupled to the VISIBILITY device output to make sure those are consistent. AND that they are examined quantitatively by firebot in the verification process.
-------------- Next Message --------------
Wed, Jun 21, 2017
I added devices for obscuration to the faces of the OBSTs in the beam_detector case. The output of these could be compared to how Smokeview colors the obstructions.
-------------- Next Message --------------
Wed, Jun 21, 2017
Thank you all for your feedback,
It is amazing to have the developers here first-hand, eager to participate in solving engineers' problems!
Dear Jason, I am aware of the typical workflow - I am a smoke control engineer myself and do this on a daily basis. I wouldn't say I am happy with the status quo, but I am more or less okay with the way we work today. If someone takes a "visibility" plot and bases his results on the 10-meter value, with K = 3 (or even 8), for a reasonably chosen design fire, it seems conservative and safe.
Now, what Daniel showed is a bit different. What I wanted to emphasize is that with Unity 3D (or even Blender) you can produce a beautiful smoke texture that looks very real, but the scientific value of such a render is limited (or, at best, unknown). I would like to prevent a situation in which an engineer bases his design on the claim that "in my render the evacuation sign is visible, therefore the requirements are met", before we get too far into this. This already happened in the past with Smokeview, but I am afraid that with more realistic renders it may be overused. The current physics of multi-source light and smoke obscuration is nowhere close to that!
You, of course, are not responsible for the terrible engineering in the world (and I appreciate all of your efforts in this area!, especially the summer school in Julich :)), but I think there should be some kind of response to these new, emerging ideas before they enter the mainstream and surprise us. I think it would be safe to flag this concept as "useful for creating an immersive and realistic experience in VR", but we should not promote it as a viable engineering tool - at least with the current state of knowledge.
Regarding beam detectors - the application of the Lambert-Beer law there is much sounder than for evacuation signs, so that use seems justified. I will ask my colleagues to run some heptane fires with lower Ys than VTT used, and to save the mass loss, temperature, and obscuration measurements; then we can do the statistical analysis and share a reference case for validation with you. Maybe this will be helpful.
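As a sketch of how such chamber data could be reduced for a reference case, one could invert the Lambert-Beer law per sample and average the extinction coefficient. The path length and the (density, transmittance) pairs below are made-up placeholders, not our measurements:

```python
import math

PATH_LEN = 1.0  # m, densitometer beam path (illustrative)

# (soot mass density [kg/m^3], measured transmittance) -- made-up samples
samples = [(5e-5, 0.66), (1e-4, 0.43), (2e-4, 0.18)]

# Invert Lambert-Beer per sample: K_m = -ln(T) / (rho * L)
k_m_estimates = [-math.log(t) / (rho * PATH_LEN) for rho, t in samples]
k_m_mean = sum(k_m_estimates) / len(k_m_estimates)
print(f"K_m estimates: {[round(k) for k in k_m_estimates]}")
print(f"mean K_m ~ {k_m_mean:.0f} m^2/kg")
```

With real data, the scatter of the per-sample estimates around the mean would itself quantify how well (or poorly) a single constant K_m describes the fuel.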
Best regards
Wojciech