MaCFP-2 Condensed Phase Workshop - Developing requirements for data set quality


MaCFP Condensed Phase Discussions

Apr 23, 2021, 2:21:07 PM
to MaCFP Condensed Phase Discussions
During the Condensed Phase portion of the MaCFP-2 Workshop, we had a lively discussion on a number of topics. In hopes of continuing that progress, I have copied below some of the key questions & comments provided during this meeting.

This thread focuses specifically on what should be our next steps to develop requirements for data set quality. Please feel free to add new questions to this thread or your thoughts on how to address any of these issues.

The key questions are "What constitutes 'good' data?" and "How can we identify (and update, if possible) data that does not meet this standard, as needed?"

The GitHub repository (https://github.com/MaCFP/matl-db/tree/master/Non-charring/PMMA) provides a preliminary summary of key factors influencing material response in various mg- and g-scale tests, along with initial "outlier criteria" that can be used to help identify clearly incorrect behavior in measurement data (for this material and our current collection of experimental measurements).

Key topics discussed during the workshop:

1. Calibration
1a. Should we require calibration data in the repository? What data and what calibration frequency are needed, and how should this be defined/reported?
1b. What are these calibrations for mg- and g-scale tests (specifically when applied to characterize pyrolyzing solids)?
1c. Can we define and/or provide a series of standard reference materials and procedures needed to calibrate our apparatus of interest?


2. Uncertainty
Calculation and reporting of uncertainty for each experimental data set is important, and it was suggested that such information be provided. How can we best do this and, ultimately, who is responsible (individual labs, the community)? A combined uncertainty analysis should be provided to account for all sources of error in a measurement (not simply to calculate repeatability).

It was noted that a lack of knowledge produces some of this uncertainty (e.g., test to test, measured HRR in cone experiments may change if, for example, boundary conditions (BCs) vary). As a community, it would be valuable to define exactly what we need to control, measure, and report in each test to reduce this uncertainty. This could include clearly defining BCs and providing validation simulations of the model setup.


3. "Acceptance" of data
(How) can we remove extraneous measurements (either clearly aphysical responses or outliers relative to other data)? It was suggested that we (a) request additional experiments to confirm results [some labs did provide such data and, in doing so, were able to identify causes of the observed effects, which was a valuable outcome for the community] and (b) 'flag' data as not approved, with a description of why that is the case.

David Morrisset

Apr 24, 2021, 12:47:53 PM
to MaCFP Condensed Phase Discussions
Very good presentations by all this week! 

A few thoughts for discussion...

1) In terms of calibration for HRR results at the gram scale, standard procedures for the cone and FPA outline a methane burner calibration process. Each participant could therefore submit the data for the initial burner calibration, reporting the methane flow rate, O2, CO2, CO, duct flow rate, duct temperature, etc. - all of the constituent inputs to the HRR calculation. Seeing the variation in each individual component between labs, as opposed to the compounded variations expressed in the HRR alone, may be insightful. Other metrics to consider (and include in the metadata) could include the model of gas analyzer and data logger used and, more importantly, the delay times and response times of the analyzers.

Characterizing any variations in these metrics across labs may lend insight into the apparent scatter, or at least give further intuition into which of the constituent components of the HRR calculation are most consistent across labs.

If mass loss rate is of particular interest in characterizing materials for modeling, then perhaps the time-resolved MLR can also be compared across each lab (having fewer inputs to compound error). I'm sure the community might know of a procedure using precision weights, or perhaps an initial experiment with a small pool fire, to calibrate the load cells for MLR.
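
To make the point about the constituent inputs concrete, below is a minimal Python sketch of the simplified oxygen-consumption HRR relation used in ASTM E1354-style data reductions. The function and its default values are illustrative assumptions on my part, not any lab's actual reduction code:

```python
import numpy as np

def hrr_oxygen_consumption(dp, T_e, X_O2, X_O2_0=0.2095,
                           C=0.042, E=13100.0):
    """Simplified oxygen-consumption HRR [kW] (ASTM E1354 form).

    dp     : orifice-plate pressure differential [Pa]
    T_e    : duct gas temperature [K]
    X_O2   : measured (dried) O2 mole fraction [-]
    X_O2_0 : ambient O2 mole fraction [-]
    C      : orifice constant from the methane burner calibration
    E      : heat released per kg of O2 consumed [kJ/kg] (Huggett)
    """
    m_dot_e = C * np.sqrt(dp / T_e)                # duct mass flow [kg/s]
    phi = (X_O2_0 - X_O2) / (1.105 - 1.5 * X_O2)   # O2 depletion factor
    return 1.10 * E * m_dot_e * phi

# e.g. one scan: 120 Pa across the orifice, 400 K duct, 2 % O2 depletion
print(hrr_oxygen_consumption(dp=120.0, T_e=400.0, X_O2=0.1895))  # ~8 kW
```

Comparing the scatter in C, dp, T_e, and the O2 reading separately across labs would then show directly which term dominates the compounded variation in HRR.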

Another aspect to consider for calibration is checking the external heat flux boundary condition. This could include determining the uniformity of the heat flux distribution over the sample surface for each cone/heating element; while this should be standardized, it would be interesting to see whether the distributions align across labs. It could also include assessing how uncertainty in the HF calibration may lead to variation in HRR results.

2) Uncertainty - Again, for HRR at the bench scale, outlining the total compounded uncertainty of the HRR measurement in the way presented in examples elsewhere (e.g., Bryant & Bundy 2019 https://doi.org/10.6028/NIST.TN.2077 , Zhao & Dembsey 2007 https://doi.org/10.1002/fam.947) could be incorporated into reporting the data collected for this study. In addition to measurement error, statistical uncertainty and repeatability can be accounted for with the scatter between trials in a given lab, which could be greater than two standard deviations of the observed data for small sets of trials (https://doi.org/10.1016/j.firesaf.2021.103335 - the analysis would have to be modified to use the Student's t-distribution and calculate degrees of freedom based on the inputs to the calculation). Isaac makes a good point that the total uncertainty reported needs to capture more than the statistical repeatability; a rough sketch of such a combined calculation is given below.
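
As a very rough sketch of what such a combined calculation might look like (the trial values, component uncertainties, and the simple coverage-factor expansion below are all assumptions; a Welch-Satterthwaite effective-degrees-of-freedom treatment would be more rigorous):

```python
import numpy as np
from scipy import stats

def combined_hrr_uncertainty(q_trials, rel_u_components):
    """Combine propagated measurement uncertainty (Type B) with
    small-sample repeatability (Type A, Student's t) for a peak or
    time-averaged HRR.

    q_trials         : HRR from n repeat trials [kW/m2]
    rel_u_components : relative standard uncertainties of the
                       constituent measurements (O2, flow, temp, ...)
    """
    q = np.asarray(q_trials, dtype=float)
    n = q.size
    q_bar = q.mean()

    # Type B: root-sum-square of the propagated component uncertainties
    u_meas = q_bar * np.sqrt(sum(u**2 for u in rel_u_components))

    # Type A: repeatability of the mean, with n - 1 degrees of freedom
    u_rep = q.std(ddof=1) / np.sqrt(n)

    u_c = np.hypot(u_meas, u_rep)      # combined standard uncertainty
    k = stats.t.ppf(0.975, df=n - 1)   # crude ~95 % coverage factor
    return q_bar, k * u_c

# three repeat trials, assumed 2 % / 3 % / 1 % component uncertainties
mean_q, U95 = combined_hrr_uncertainty([612.0, 598.0, 630.0],
                                       [0.02, 0.03, 0.01])
print(f"{mean_q:.0f} +/- {U95:.0f} kW/m2 (~95 %)")
```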

One other source of uncertainty for these gram-scale experiments is the uncertainty of the heat flux gauges used for calibration. The error on HF gauges can be on the order of kW/m2; do we know as a community how to propagate this calibration uncertainty into the ultimate HRR or MLR recorded? A known HF uncertainty can easily be accounted for in ignition experiments (plotting t_ig vs. HF), for example, but in what way can the HF uncertainty be reflected in error bars placed on the time-resolved HRR curves for a given HF exposure? Or are these differences negligible?

A compounded uncertainty does not address the scatter seen across laboratories, but calculating a total uncertainty seems to be best practice regardless, and it may indicate whether or not the variation between labs falls within the error bars of the compounded uncertainty in the HRR measurement. There is a chance that some degree of scatter across labs could be attributed to slight variations arising from uncertainty in the HF gauges, as mentioned above.

The uncertainty coming from a "lack of knowledge" is a very good point. I guess one way to assess this - albeit overly idealistic - would be to have the same experimentalist bring the exact same sample material, insulation backing, sample holders, heat flux gauge for calibration, etc., and run the experiments themselves at a handful of laboratories. If nothing else, this could give insight into relevant aspects of the methodology that are not currently conveyed in the metadata and might vary lab to lab.


-David Morrisset
The University of Edinburgh

Kevin McGrattan

Apr 28, 2021, 10:27:49 AM
to MaCFP Discussions
I have a suggestion for improving the uniformity of cone calorimeter measurements. Why not ask all participants to measure the temperature of a plate thermometer with the exact same Inconel plate thickness and the exact same insulation material, for all nominal heat flux values? Then, use this same insulation material as the backing for the actual sample. I agree that there are uncertainties in heat flux gauge measurements and in specific lab procedures. The plate thermometer should provide a more robust way to ensure that the actual samples are being exposed to the same heat source. This is also a good way for modelers to check their simulations of the cone and their assumptions about "incident heat flux."

Morgan Bruns

Apr 29, 2021, 4:15:04 PM
to MaCFP Discussions
This seems like a good idea to me. It's an extra step, but it might be the easiest approach to ensure consistency between labs. Plate thermometers are about as simple and inexpensive as you can get. The only technical issue I can imagine would be spectral differences between heaters, but that seems unlikely to have a significant effect. Is there any reason not to specify this as a requirement in future rounds of cone tests? What nominal heat fluxes would be reasonable? 25 kW/m^2 and 50 kW/m^2? Is anyone willing to give this a try and provide the community with some characteristic plate thermometer temperatures at these heat fluxes?

isaac.l...@gmail.com

Apr 29, 2021, 6:01:35 PM
to MaCFP Discussions
Effectively, this would check only whether incident heat flux is the same across our labs, correct? There may be significant differences in net heating of the sample due to insulation/backing conditions and sample holder configuration (e.g., presence of a holder frame). Before starting such a round robin, we might want to review the metadata to confirm whether cone temperatures and sample distances (to the cone) are similar across tests. Many of our labs provided this info; others that did not could hopefully be encouraged to do so. Assessing variations in this info, along with a review of backing conditions, might let us group results of high/low measured HRR in a reasonable way (i.e., by correlation with known differences in setup); a sketch of such a grouping follows below.
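
For instance, a hypothetical pandas sketch of that grouping; the column names and values below are invented for illustration and do not reflect the actual repository schema:

```python
import pandas as pd

# Hypothetical setup metadata and results from lab submissions
meta = pd.DataFrame({
    "lab":              ["A", "B", "C", "D", "E"],
    "backing":          ["Kaowool", "Kaowool", "steel plate",
                         "Kaowool", "steel plate"],
    "cone_distance_mm": [25, 25, 60, 25, 60],
    "peak_hrr_kW_m2":   [640.0, 655.0, 580.0, 700.0, 565.0],
})

# Do known setup differences correlate with high/low measured HRR?
print(meta.groupby(["backing", "cone_distance_mm"])["peak_hrr_kW_m2"]
          .agg(["mean", "std", "count"]))
```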

We might want to work through that analysis first before starting a new round robin that would only check, effectively, whether each lab uses properly calibrated heat flux gauges and sets its heaters appropriately. Each lab should already be following ASTM E1354 (or a similar standard), which mandates calibration of the heat flux gauge used to set this heat flux, and describes how to set the temperature controller accordingly, prior to each test.

ASTM E1354 also prescribes a backing material of set density and minimum thickness. Some groups intentionally choose a lower-density material, others a metal plate; I suspect each believes their method offers specific advantages. Really, if the top and back surface boundary conditions are appropriately accounted for during model parameter calibration, these variations should not matter (from a model prediction standpoint, though direct comparison of HRR would remain a challenge).

-----

What might be more valuable, then, would be not just to start a round robin where we all check whether a plate thermometer with identical backing records the same (nominal) incident heating across labs, but to have each lab validate its respective heating conditions. For example, regardless of the insulation type/thickness, holder configuration, and exact heater temperature you use, each group should verify that its assumed boundary conditions are indeed correct. We might do this by having each lab agree to take a standardized representative sample (i.e., a flat copper or brass plate of known thickness, coated with a thermally stable black paint of known absorptivity), place that on top of their insulation of choice (of known thickness, density, and assumed thermophysical properties), and demonstrate that they can reproduce (i.e., simulate) the experimentally measured temperature rise in that plate when it is exposed to a prescribed heating condition.

This would not just validate the incident heat flux at the top of the sample; it would better confirm that both the top and bottom boundary conditions are correctly parameterized. Including a standardized plate TC test, as Kevin suggested, might assist in this exercise by confirming boundary conditions (top plate, radiation only, not convective losses, for a given setup), but that would likely be just one step in the process. A rough sketch of the plate simulation each lab might run follows below.
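
Here is a rough lumped-capacitance sketch of that plate simulation. Every geometry, property, and loss-term value below is an illustrative assumption, and the steady-conduction backing term is the crudest possible stand-in (a 1-D conduction model of the insulation would be the obvious refinement):

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W/m2K4]

def plate_temperature(q_inc, t_end=300.0, dt=0.05,
                      thickness=3e-3, rho=8900.0, cp=385.0,  # copper
                      alpha=0.95, eps=0.95,                  # black paint
                      h=10.0, k_ins=0.1, L_ins=0.025, T_inf=293.0):
    """Explicit integration of the painted plate's energy balance:

    rho*cp*d * dT/dt = alpha*q_inc - eps*sigma*(T^4 - Tinf^4)
                       - h*(T - Tinf) - (k_ins/L_ins)*(T - Tinf)
    """
    n = int(t_end / dt)
    T = np.full(n, T_inf)
    C = rho * cp * thickness  # areal heat capacity [J/m2K]
    for i in range(1, n):
        Tp = T[i - 1]
        q_net = (alpha * q_inc
                 - eps * SIGMA * (Tp**4 - T_inf**4)   # re-radiation
                 - h * (Tp - T_inf)                   # convective loss
                 - (k_ins / L_ins) * (Tp - T_inf))    # backing loss
        T[i] = Tp + dt * q_net / C
    return T

T = plate_temperature(q_inc=50e3)  # 50 kW/m2 nominal exposure
print(f"plate temperature after 5 min: {T[-1] - 273.15:.0f} C")
```

Agreement (or not) between this kind of prediction, using each lab's own assumed BCs, and that lab's measured plate temperature is the check being proposed.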

If there's support for such an effort, it could likely be organized similarly to the round robin assessing heat flux gauge calibrations that was performed in the early 2000s: https://doi.org/10.1016/j.firesaf.2006.04.004

Kevin McGrattan

Apr 29, 2021, 6:10:11 PM
to MaCFP Discussions
A plate thermometer is a thin plate of Inconel on top of, I think, a centimeter of insulation material. Why use copper or brass? Now if the cone standard calls for a particular kind of backing insulation, just set the Inconel plate on top of that and use this same insulation material for the actual sample. In essence, the steady-state Inconel temperature is the highest surface temperature achievable for that particular nominal heat flux. This temperature is also handy for setting a lower bound on your heat flux (see the sketch below).
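
For example, neglecting back-side conduction (which can only make the true incident flux higher than the estimate), the steady plate energy balance gives a quick lower bound on the heat flux. The emissivity, absorptivity, and convection coefficient below are assumed values, not calibrated ones:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W/m2K4]

def heat_flux_lower_bound(T_ss, T_inf=293.0, eps=0.9, alpha=0.9, h=10.0):
    """Lower-bound incident flux [W/m2] implied by a steady plate
    temperature T_ss [K], from
    alpha*q_inc >= eps*sigma*(T_ss^4 - Tinf^4) + h*(T_ss - Tinf)."""
    return (eps * SIGMA * (T_ss**4 - T_inf**4) + h * (T_ss - T_inf)) / alpha

# e.g. a plate holding steady at 700 C implies roughly 58 kW/m2 or more
print(heat_flux_lower_bound(700.0 + 273.15) / 1e3, "kW/m2")
```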


isaac.l...@gmail.com

Apr 29, 2021, 6:21:48 PM
to MaCFP Discussions
" Now if the cone standard calls for a particular kind of backing insulation, just set the Inconel plate on top of that and use this same insulation material for the actual sample. "
The challenge is that different labs are already *not* using this setup, so it is unclear to me how this would fully characterize the BCs in a given lab's configuration. The insulation used for a standard plate TC, or for an ASTM E1354 'standard cone test', is apparently not the universal default choice. Various groups have written their respective justifications for using another backing material, so I suspect we will have more success not in convincing them to change that, but in simply setting up a means to confirm that their assumed boundary conditions (in their preferred configuration) are correct.

Copper, Inconel... I don't have a strong preference either way, so long as it's painted uniformly black and has a well-known kρc for this analysis. If all labs would switch to a single proposed insulation, then a standard plate TC might work. If we're committed to our respective setups, which I think is more likely, just keeping the metal plate part of that plate TC and simulating its response might offer more directly useful information on each group's test setup.

Hostikka Simo

Apr 30, 2021, 3:52:12 AM
to MaCFP Discussions

On the plate tests:

As all labs are supposed to calibrate their heaters using heat flux gauges, which are presumably more accurate means of heat flux metering than the plate, the plate tests should not show any differences. In theory. In practice, of course, this would be a relatively easy way of checking the consistency of assumed incident heat fluxes. Doing this test before a flammability test would be a verification test for the experimental work.

As mentioned already, the plate cannot reveal differences between setups that are caused by more detailed characteristics of the incident radiation. For example, we are currently investigating whether the different source temperatures in the cone and FPA, in interaction with the PMMA absorption spectrum, can explain why the cone and FPA give different burning rates. Plate verification could still claim that these tests were identical!

So, the plate will be a useful verification, but not sufficient for removing differences.

Simo


Brännström, Fabian

Apr 30, 2021, 4:10:21 AM
to MaCFP Discussions
Dear all,

further thoughts about the plate...

Please correct me if I am wrong; I have only a bit of experience with plate thermometers. It sounds to me like we are shifting the problem of imprecise measurements from the standard cone to the plate thermometer.

When looking for a common, precise measurement system, I am not exactly sure the standard plate thermometer is the solution. To me it looks like a rough measurement tool, where the details (e.g., equal contact resistance between the insulation and the plate, on top and on the sides) might again differ among the labs (I think this is similar to Simo's comment).

Furthermore, the boundary (of the plate thermometer and the standard cone) and the connection to the sample holder itself are a challenge, and at the least differ from lab to lab (I assume). Perhaps I am completely wrong here, but if this is right, we would need a common, robust sample holder system with better-known boundary conditions (which I assume a few of us already have).

In this robust sample holder, one could place a "robust" plate thermometer or something better. For testing the actual material, one would keep the same setup on the backside.

In general, special care has to be taken that the contact resistances between the different layers (and to the lateral boundaries) are reproducible among the labs.

@Simo: Nicolas Bal, with Guillermo, did a quite interesting investigation of the spectral impact of the FPA vs. the cone; you probably know this already. Bal, Nicolas. 'Uncertainty and Complexity in Pyrolysis Modelling', 2012. https://www.era.lib.ed.ac.uk/handle/1842/6511


Best Regards
Fabian

-- 
Univ.-Prof. Dr.-Ing. Fabian Brännström

Brandtechnologie und Brandschutzingenieurwesen
Fire Technology and Fire Safety Engineering

+49 (0) 202 439-2071
Raum W08.093
https://fire.uni-wuppertal.de/

Bergische Universität Wuppertal
Fakultät für Maschinenbau und Sicherheitstechnik
Gaußstraße 20
42119 Wuppertal

Stanislav I. Stoliarov

Apr 30, 2021, 9:50:54 AM
to MaCFP Discussions

I tend to agree with Fabian. I think plate thermometers have their own issues. This discussion implies that the cone should be our standard reference measurement, a benchmark for our models. Are we certain about that? Personally, I think that an inert-environment, radiation-driven gasification experiment represents a better benchmark for pyrolysis. However, I also realize that few labs have this capability.

In any case, if we decide that the cone is our benchmark and we need to harmonize these measurements between the various labs, I suggest that we start with something simple. Let us run 50 kW/m2 tests of our black PMMA and ask everyone to wrap the sample in aluminum foil, place it directly onto Kaowool PM insulation (at least 1 cm thick), and make sure that the sample is positioned 25 mm from the cone heater. No edge frames or any other contraptions should be used. We can distribute the insulation and foil to make sure that everyone uses the same materials. Let us also use the same standard duct flow rate of 24 L/s.

This approach will eliminate sample holder uncertainties, and we will be left with uncertainties in the oxygen consumption measurements and the radiant heat flux. The uncertainties in the oxygen consumption measurements will manifest themselves in the heat of combustion values obtained through integration of the HRR. The heat of combustion should be between 24 and 26 kJ/g. If it is not, we know that the measurement is problematic and where the problem is. We can then re-normalize all HRR curves to produce the same heat of combustion, let us say 24.5 kJ/g. The remaining scatter in the average and peak HRR can then be attributed to uncertainties in the radiant heat flux settings. If all labs also send us the heater temperature data, which should be available in any standard cone, we should be able to figure out whether it is the calibration of the gauge or spatial non-uniformity of the heat flux that is responsible for each observed discrepancy. A sketch of the heat-of-combustion check and re-normalization follows below.
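
A minimal sketch of that check and re-normalization, assuming time-resolved total HRR in kW and sample masses in g (this is one reading of the procedure, not Stas's actual analysis code):

```python
import numpy as np

def check_and_renormalize(t, hrr, m0, mf, dhc_target=24.5):
    """Integrate HRR to get the effective heat of combustion [kJ/g],
    flag values outside 24-26 kJ/g, and rescale the curve so that all
    labs share the same target heat of combustion.

    t   : time [s]
    hrr : total heat release rate [kW] (per-unit-area data would need
          the sample area folded in first)
    m0, mf : initial and final sample mass [g]
    """
    t, hrr = np.asarray(t, float), np.asarray(hrr, float)
    total_heat = np.sum(0.5 * (hrr[1:] + hrr[:-1]) * np.diff(t))  # [kJ]
    dhc_eff = total_heat / (m0 - mf)
    if not 24.0 <= dhc_eff <= 26.0:
        print(f"flag: effective dHc = {dhc_eff:.1f} kJ/g, outside 24-26 kJ/g")
    return hrr * (dhc_target / dhc_eff), dhc_eff
```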

Stas                              

Morgan Bruns

May 1, 2021, 12:13:54 PM
to MaCFP Discussions
I'm not sure that we need to consider the cone a benchmark any more or less than other tests. I think we need to be able to perform and model a range of experiments, and we need careful specifications like what Stas lays out above if we are going to compare data and model predictions.

What I like about adding a plate thermometer specification is that it seems to be a quick and easy check on the calibration of cone heaters. Additionally, if we keep the plate thermometer as the back surface in actual tests, it provides a well-defined boundary condition plus another well-defined measurement to compare against in models.

Dietenberger, Mark -FS

May 10, 2021, 3:21:21 AM
to Brännström, Fabian, MaCFP Discussions

The reason I have not considered the flat plate is the question of how confident we are that it will keep its properties after several years of use.

In the past I have used the time to ignition (or even time to peak HRR) of PMMA to verify the heat fluxes, but I cannot do that now since its properties have changed over the years. So, on to the Rexolite PS…

Anyway, I am quite satisfied with pure ethylene glycol (EG) liquid, as it provides a nice way to check the full functioning of the cone calorimeter in a single test (and also of the FTIR, which has EG among its characterized species, especially during the pre-ignition period). Since I am interested in wood and vegetation, the water vapor emitted during combustion is also of interest; EG is cooperative in the calibration for that, and, as a liquid, it behaves most like wood.

The weigh scale is also an important part of our cone calorimeter calibration. Because of its fast response, digital exponential smoothing needs to be applied to its raw data to match the T90 responses of the CO2, CO, and O2 analyzers; for that, EG combustion is cooperative in that it does not char and hardly produces any black smoke. Our H2O sensor does not respond as fast as the CO2, CO, or O2 analyzers, so we apply digital deconvolution sharpening to the H2O sensor data. So, if you have a very fast pyrolysis process and model it with mechanistic kinetics, you will also need to account for the T90 responses of the cone sensors for a more accurate assessment of the pyrolysis and combustion modeling. A sketch of this response matching is given below.
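
A minimal sketch of that response matching for a first-order instrument, for which T90 = tau*ln(10); this is one plausible implementation under that first-order assumption, not Mark's actual processing:

```python
import numpy as np

def match_t90(x, dt, t90):
    """Exponentially smooth a fast signal (e.g., the load cell) so its
    response time matches a slower analyzer's T90.

    x   : raw samples
    dt  : sample interval [s]
    t90 : target 90 % response time [s]; for a first-order lag,
          T90 = tau * ln(10), so tau = t90 / ln(10).
    """
    tau = t90 / np.log(10.0)
    a = dt / (tau + dt)  # discrete (backward-Euler) first-order gain
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y
```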

 

This question of radiation absorption differences between the FPA and the cone calorimeter would be interesting to explore with the EG.

Mark


Mark Dietenberger, PhD
Research General Engineer

Forest Service

Forest Products Laboratory

p: 608-231-9531
f: 608-231-9303
mark.a.di...@usda.gov

One Gifford Pinchot Dr.
Madison, WI 53726
www.fs.fed.us

 

 
