Citation for not discarding imprecise dates?


Erik Marsh

Feb 1, 2022, 12:42:05 PM
to OxCal
Hi all,

I have often seen archaeologists subjectively discard dates for having wide error ranges. As far as I know, this is a bad (and subjective) practice, as imprecise dates may in fact be accurate. And while perhaps not very useful individually, imprecise dates can certainly be built into Bayesian models.

(Assuming I'm right about this) – does anyone know of a citation that explicitly states this? That is, that wide error ranges are no reason to manually reject dates.

This is of course NOT in the list of reasons a date would be an outlier in Bronk Ramsey 2009. I checked the best-known classic pubs by Buck, Christen, etc. but did not see this point explicitly addressed.

thanks!
Erik

RAY KIDD

Feb 1, 2022, 10:07:25 PM
to ox...@googlegroups.com
Hello group, and well said, Erik!  There is, or was, an observation that a useful lesson needs to be re-learned every seven years.  I think it had to do with the average time it takes for a person to move up through the hierarchy from novice onward.
The difference between precision and accuracy may be one of these lessons and must not be ignored.
Did it ever become someone’s Law?
Regards
Ray

MILLARD, ANDREW R.

Feb 2, 2022, 5:12:31 AM
to ox...@googlegroups.com

Hi Erik,

 

This is one such statement, and maybe the cited references may say more: “Occasionally, we come across the belief that legacy radiocarbon dates with large standard errors are of little interpretative value because of their greater imprecision. …. however, not only can a Bayesian model handle these data effectively but these dates may actually have the most secure connection between sample and event (e.g., charcoal in a hearth or animal burial).  Despite their issues, legacy dates with large standard errors can be informative data for a Bayesian model (see Bayliss et al. 2011; Jay et al. 2012; Krus et al. 2015).”

 

Hamilton, W. D., & Krus, A. M. (2017). The Myths and Realities of Bayesian Chronological Modeling Revealed. American Antiquity, 83, 187-203. doi:10.1017/aaq.2017.57

 

 

 

Best wishes

Andrew

--

Dr. Andrew Millard

Associate Professor of Archaeology,

Durham University, UK

Email: A.R.M...@durham.ac.uk 

Personal page: https://www.dur.ac.uk/directory/profile/?id=160

Scottish Soldiers Project: https://www.dur.ac.uk/scottishsoldiers

Dunbar 1650 MOOC: https://www.futurelearn.com/courses/battle-of-dunbar-1650

 

 


Derek Hamilton

Feb 2, 2022, 6:11:16 AM
to OxCal
Hi Erik,

Oftentimes the real 'problem' is the effect that dates with larger errors have on the resulting model. If you have, say, 10 large-error dates and then add 2 or 3 dates with more typical modern errors, the model results will probably have lower overall precision than if you went out and got 10 new, well-selected dates. If the numbers are reversed (10 'precise' dates and 2 'imprecise' dates) the model will probably handle those older dates without issue. The size of the effect will depend on the number of dates, the size of the errors, and the overall width of the resulting calibrated date ranges. I think in most cases there is little to be gained from tossing older dates out simply because of the error size.
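[Editor's note] Derek's intuition about the asymmetry can be illustrated with a rough, uncalibrated sketch. Real model precision depends on the calibration curve and the model structure, but the inverse-variance weighted mean of uncalibrated determinations of a single event shows the same pattern: a couple of imprecise dates barely dilute a precise set, while a few precise dates dominate an imprecise one. The error values below are illustrative, not from Derek's dataset.

```python
import math

def pooled_sigma(errors):
    """Standard error of the inverse-variance weighted mean of
    measurements with the given 1-sigma errors (in 14C years)."""
    return 1.0 / math.sqrt(sum(1.0 / s**2 for s in errors))

# 10 precise (+-30) dates; adding 2 imprecise (+-80) ones barely changes anything.
print(round(pooled_sigma([30] * 10), 1))             # ~9.5
print(round(pooled_sigma([30] * 10 + [80] * 2), 1))  # ~9.4

# 10 imprecise dates: 3 precise additions help noticeably,
# but not as much as 10 new precise dates would.
print(round(pooled_sigma([80] * 10), 1))             # ~25.3
print(round(pooled_sigma([80] * 10 + [30] * 3), 1))  # ~14.3
```

Note the imprecise dates never make the pooled estimate worse; they simply contribute little weight.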

However...

I am working on a dataset of about 70 dates in the first millennium cal BC that were measured in the 1980s, with errors ranging from ±50 to ±90 years. Many of the dates were on bulk samples of charcoal from discrete dumps of burnt material in pits (accompanied by the remains of what were interpreted as oven superstructures) and from hearth deposits, but there were also dates on articulated animal bone. Recently, I made measurements on another ~70 samples with errors of approximately ±30 years.

Around half of the original dataset calibrates across the 'Hallstatt' plateau, and since the dated charcoal was unidentified (the excess material returned from the lab was subsequently partly identified), I went back to those returned sample bags and dated identified short-lived material from 14 of the contexts. Unsurprisingly, nearly every pair of measurements is statistically consistent, but the newer dates all calibrate off the 'Hallstatt' plateau, with a couple having a tail of probability just at the end of the plateau.
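[Editor's note] The pairwise consistency check Derek mentions is conventionally the Ward and Wilson (1978) chi-square test, which OxCal applies inside R_Combine. A minimal sketch, with a hypothetical pair of determinations (the BP values below are made up for illustration):

```python
import math

def ward_wilson(dates):
    """Ward & Wilson (1978) test for n radiocarbon determinations,
    given as (bp, sigma) pairs. Returns (weighted mean, its error, T).
    T is compared with the chi-square critical value on n-1 degrees
    of freedom (3.84 at 5% for a pair of dates)."""
    weights = [1.0 / s**2 for _, s in dates]
    mean = sum(bp * w for (bp, _), w in zip(dates, weights)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    t = sum(((bp - mean) / s) ** 2 for bp, s in dates)
    return mean, err, t

# Hypothetical pair: an old bulk-charcoal date and a new single-entity date.
mean, err, t = ward_wilson([(2470, 70), (2420, 30)])
print(round(mean), round(err), round(t, 2))  # consistent if t < 3.84
```

Consistency only tests the measurements against each other; as Derek's case shows, a consistent pair can still calibrate very differently when one of them spans a plateau.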

For the charcoal dates I've run one model using the Charcoal outlier model and another using After() for all of those older dates, but the start probability is still about 200 years earlier than if those dates are excluded.

So while all of the data are presented in the monograph, a decision has been made to exclude the older dates on the grounds that 1) the charcoal was unidentified and may have an 'old wood' offset, and 2) the number of lower-precision measurements, combined with the plateau in the calibration curve, is probably making the model importantly wrong. I should stress that everything is laid out in the monograph so that we can fully justify our reasoning and readers can decide for themselves whether they agree with our decision.

I hope this helps. Unfortunately, the question of excluding older dates usually isn't as simple as it is often presented in papers.

Best wishes,

Derek

Erik Marsh

Feb 2, 2022, 7:59:52 AM
to OxCal
Andrew – Derek's paper is right on the money, thanks! Perhaps the section heading is an even clearer citation:
"Misconception 2: Old Radiocarbon Measurements with Large Errors Should Be Ignored".

This is a great paper that I had not seen – thanks!
I did check the citations. Most are examples of this mistake, and of course there is not much explanation for an arbitrary cut-off. And there are many more examples that could be added to the list. From what I saw, none of the cited papers had a better or clearer summary of this important point. (However, I did not check Bayliss' 2011 Gathering Time book, since I don't have a PDF. If anyone does, please share!)

Derek – you have created an interesting problem by re-dating all those samples, but it sounds like you've come up with a more-than-reasonable solution. This is really ideal, as you were able to go back to the original sample bags. I have to imagine the old dates would have worked out quite well if you weren't also fighting the Hallstatt plateau. Importantly, you're rejecting them based on old wood, not on a large error range. Sounds like a fascinating study – I look forward to seeing the publication.

Ray – maybe you are right; I will relearn this in seven years. So far, no one has claimed it as a law... and as Derek's example makes clear, once you get into real data, things usually get messy.

Thanks all! Much appreciated.
Erik
