Experimental Datasets (and metadata) of Interest


Isaac Leventon

Oct 15, 2020, 3:12:58 PM
to MaCFP Condensed Phase Discussions
Modelers,

One question/request that came up during the Oct. 15 virtual presentation of experimental results was to understand exactly which experimental measurements are most needed/wanted by modelers who plan to develop pyrolysis model parameter sets.

In this thread, please highlight what data (in general, not necessarily a specific institution's measurements) are most valuable/needed for your approach.

Further, please identify key metadata of interest. Much of this metadata is provided in related README files on the repo (https://github.com/MaCFP/matl-db). A brief overview of this data is provided in Table 1.2 of the report (https://github.com/MaCFP/matl-db/releases/tag/v1.0.0). Please consult these references first to confirm whether certain data is already available (but still critical, and thus worth mentioning here) or not provided at all.

tristan...@gmail.com

Oct 28, 2020, 7:01:45 AM
to MaCFP Condensed Phase Discussions
For the cone calorimeter experiments, it might be useful to also provide information on the ignition time.

Simo Hostikka

Oct 28, 2020, 7:40:57 AM
to MaCFP Condensed Phase Discussions
Good point Tristan! And I would think flame-out time as well.

Regarding metadata:
1) I am not sure how well the actual oxygen concentrations are reported for the controlled-atmosphere cone calorimeters, CAPA II, and FPA. In our device, for example, we can currently measure O2 only before the test, not during it, if we want to include possible HRR in the measurement outputs. Some information on the oxygen concentration is needed in any case.
2) We have had discussions about the absorption coefficient and radiation penetration, for which the source temperature is very important. The heater temperature of the cone, CAPA, FPA, or similar apparatus should be known when developing radiation models.

Brännström, Fabian

Oct 28, 2020, 7:53:41 AM
to MaCFP Condensed Phase Discussions
Hi all,

Yes, I agree with both of the latest comments ... I was just writing the same, but I am a bit slower at writing than Simo.

As mentioned during the Webex session, it would be very valuable to get a more detailed understanding of the systems used. This applies not only to the TGA system, but also to the different variants of small-scale testing apparatus, such as the cone tests (and all other systems as well).

- In particular, it would be good to know the specifications (e.g. accuracy, range, peak-to-peak noise, ...) of the load cell/balance in the TGA and Cone/FPA/CAPA. For the cone, for example, I expect that an accuracy of 0.1 g is challenging to achieve.

- As with the TGA measurements, the cone heater temperature curve would be of
  interest, especially when comparing different systems. I am not sure whether
  similar measurements exist for the FPA lamps.

- Temperature profile in the exhaust duct.

- For the back-side temperature measurements, it would be of interest how the
contact was accomplished during the tests.

- It would also be good to record any special visual observations made during the tests.

Furthermore, for the reporting I would like to see the unfiltered data (for TGA and
cone) as well. This would give some additional insight into the fluctuations of
the measurements. For example, it would be good to have both the filtered and
unfiltered MLR data for the cone measurements, in addition to the HRR curves.

I also agree that shorter names should be used for plotting. In the beginning I was a bit
confused by the undisclosed laboratory names in the report, but I actually
think it was a good decision, since it gave everyone an unbiased first overview.


Best Regards
Fabian




tristan...@gmail.com

Oct 28, 2020, 8:31:25 AM
to MaCFP Condensed Phase Discussions
Additionally, I would like the original/raw data to be submitted, as well as the cleaned/processed data. As I see it, the files currently in the repository are the cleaned files. Having the raw data would be of interest for three reasons:
1. If any mistakes were made when processing the data, anyone could easily check them.
2. Simulation campaigns develop over time, and it is not necessarily possible to define all the needed information from the start, so the additional information comes in handy.
3. People can employ their own favourite strategy to process the data.

Proposed file structure within the repo:
+ Institute_Label_Dir (e.g. NIST)
|   + Raw_Data_Dir
|    | raw_file_01.csv
|    | raw_file_02.csv
|    | raw_file_03.csv
|    | raw_file_04.csv
| cleaned_data_01.csv
| cleaned_data_02.csv
| README.md


Further explanation:
1. I think this point is self-explanatory.
2. I worked with an FTT cone calorimeter a while back. It allowed the user to save two CSV files (`test_label.csv` and `test_label_red.csv`), and both provided a lot of extra information. I assume this holds for most of the other testing apparatuses out there, not only this specific cone calorimeter.
Since CSV files are text files, they are well suited to Git: only the differences are saved (i.e. the individual characters that have changed), so the repo is not blown up with copies of whole files, as is the case with images, for example. Their size is also relatively small to begin with, only a few hundred kB.
Obviously, some care needs to be taken that submitting data does not get out of hand. But submitted data goes through some screening anyway, either via a pull request to the repo directly or by sending the data to Morgan Bruns. Then it can be checked that the files are all plain text files (*.md, *.txt, *.csv or what have you - and also check the encoding ;) ) and that not too much data is submitted - even though I am not sure what this "too much" would actually mean.
3. The main idea here is that the submitted cleaned files could/should follow some general procedure defined by the MaCFP committee/group. For example, the procedures used by Isaac for the predecisional report could serve as the baseline. With the raw data available as well, people are free to use their own strategies to process the data, which would be especially helpful when testing out new strategies.
One institute submitted data that is an average over seven TGA experiments, and normalised as well. Even though the effort is appreciated, I would rather have access to the raw data, for the reasons described above. It would be fine to provide the processed data in extra columns in one of the cleaned data files, but not solely the processed data. The averaging might, however, be a good strategy for the data processing, and we could set up functions in the utilities Python module that would allow users to process arbitrary data following this procedure. As a brief side note, there is some discussion going on about transferring the MATLAB functionality currently used to create the predecisional report into Python.
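Such a normalize-and-average procedure could be sketched roughly as follows (a hypothetical illustration only; `normalize_mass` and `average_replicates` are made-up names, not existing functions in the utilities module, and the (temperature, mass) column layout is an assumption):

```python
# Hypothetical sketch of a MaCFP-style processing utility: normalize the
# mass signal of a single TGA run by its initial value, then average
# several replicate runs onto a common temperature grid.
import numpy as np

def normalize_mass(mass):
    """Normalize a mass signal by its initial value (m/m0)."""
    mass = np.asarray(mass, dtype=float)
    return mass / mass[0]

def average_replicates(runs, grid):
    """Interpolate each (temperature, normalized-mass) run onto a common
    temperature grid and return the point-wise mean across replicates."""
    interp = [np.interp(grid, t, m) for t, m in runs]
    return np.mean(interp, axis=0)

if __name__ == "__main__":
    # Two synthetic replicate runs with slightly different sampling rates
    # and initial masses, but the same linear decay when normalized.
    t1 = np.linspace(300.0, 700.0, 81)
    t2 = np.linspace(300.0, 700.0, 101)
    m1 = normalize_mass(10.0 - 0.010 * (t1 - 300.0))
    m2 = normalize_mass(12.0 - 0.012 * (t2 - 300.0))
    grid = np.linspace(300.0, 700.0, 41)
    avg = average_replicates([(t1, m1), (t2, m2)], grid)
    print(round(float(avg[0]), 3), round(float(avg[-1]), 3))  # prints: 1.0 0.6
```

Keeping the averaged curve in extra columns next to the per-run raw data, as suggested above, would let anyone re-run or replace this step.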

Tristan

Stanislav I. Stoliarov

Oct 28, 2020, 9:04:38 AM
to Simo Hostikka, MaCFP Condensed Phase Discussions
Hi Simo,

We have measured the oxygen concentration in the CAPA II multiple times. Under standard operating conditions (nitrogen purge), this concentration is at or below 0.6 vol% in the vicinity of the sample. We can also provide estimates of the heater temperature.

Stas
--------------------------------------------------------
Dr. Stanislav I. Stoliarov, Professor
University of Maryland
Department of Fire Protection Engineering
3104C J.M. Patterson Bldg.
4356 Stadium Dr.
College Park, MD 20742
Phone: 301.405.0928
Fax: 301.405.9383
Email: sto...@umd.edu



Randy McDermott

Oct 28, 2020, 9:08:21 AM
to tristan...@gmail.com, MaCFP Condensed Phase Discussions
Tristan,

It is important that we think hard about what "raw" data you want submitted to the database.  Later in your email, you make a comment that there could possibly be "too much" data submitted.  You understand Git well enough to know that if we just dump every byte of data into the repo, it will balloon and become unwieldy.

There is another possibility, one that we employ with the FDS output data. We keep this data in a separate, neighboring repository (for example, we could create a repo called matl-raw). This keeps the main repo from growing out of control in size. If you are not careful about this, you could (in the long run) be forced to rewrite the history of the repo. I've had to do it; it's not pleasant.

Randy

tristan...@gmail.com

Oct 29, 2020, 8:46:27 AM
to MaCFP Condensed Phase Discussions
Randy,

Well, maybe I do not understand Git well enough. I have been under the impression that files that are not touched are not part of downstream commits, i.e. they are not copied and thus do not bloat the repo. Furthermore, when (text) files are changed, not the whole new file is stored but only information on the difference - in contrast to, for example, images, where Git cannot keep track of the changes within the file and therefore needs to save the whole new file.

I would not expect the individual raw data files to be larger than a couple of hundred kB, and they would likely not be changed at all; if there were changes, their frequency should be very low.
I'm not aware of the specific conditions of the FDS output data, but I could imagine that if there were substantial daily changes to all the files, effectively recreating/overwriting all of them, this would get out of hand fast. However, if you have more experience with these kinds of things and think that a neighboring-repo approach would be better, I would be fine with that.

It might be a good idea to find out what the actual raw files look like and how large they are, depending on the different apparatuses and manufacturers.
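Once a few example raw files are collected, a small survey script along these lines could report the count and total size per file extension (the script and the directory it is pointed at are hypothetical, not part of the repo):

```python
# Survey a directory tree of submitted raw files: count files and sum
# their sizes per extension, to help judge whether the data fits in the
# main repo or belongs in a neighboring one.
import os
from collections import defaultdict

def survey(root):
    """Return {extension: (file_count, total_bytes)} for all files under root."""
    totals = defaultdict(lambda: [0, 0])
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(none)"
            size = os.path.getsize(os.path.join(dirpath, name))
            totals[ext][0] += 1
            totals[ext][1] += size
    return {ext: tuple(counts) for ext, counts in totals.items()}

if __name__ == "__main__":
    # Point this at a local folder of example raw data files.
    for ext, (count, nbytes) in sorted(survey(".").items()):
        print(f"{ext:8s} {count:5d} files {nbytes / 1024:10.1f} kB")
```

Running this over a handful of representative submissions from different apparatuses would give concrete numbers for the size discussion.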

Tristan

Randy McDermott

Oct 29, 2020, 8:52:46 AM
to tristan...@gmail.com, MaCFP Condensed Phase Discussions
Tristan,

It depends on the size of the files and whether or not they will be routinely overwritten (like the images example you gave). My impression is that "raw data" refers to every data point taken for replicates of all experiments. I would imagine this to be massive, not a few kB. But, as you said, we can try it and see. I think this is something that you, Isaac, and I can sort out offline. We know we have a backup plan of using a neighboring repo.

Best,
Randy

tristan...@gmail.com

Nov 3, 2020, 5:52:25 AM
to MaCFP Condensed Phase Discussions
Back to the original topic: would it also be useful to collect information such as the temperature of the room where the experiments are conducted?