Chris and co.:
Can anyone explain to me why the summed probability distributions generated by OxCal and Calib are fundamentally different from those produced by CalPal, as observed by Buchanan et al. (2011), "A comment on Steele's (2010) 'Radiocarbon dates as data: quantitative strategies for estimating colonization front speeds and event densities'," in Journal of Archaeological Science?
-Matt
--
You received this message because you are subscribed to the Google Groups "OxCal" group.
To unsubscribe from this group and stop receiving emails from it, send an email to oxcal+un...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Ray:
Thanks for the emails. These are issues that have come up in several published articles, and a manuscript I have submitted has been harshly criticized in the review process for failing to address them.
Regarding the Williams (2012) paper, I am not convinced by a blanket statement that a sample size of 500 dates or a 500-800 year moving average is always appropriate, primarily because Williams dealt with (and ONLY with) a time frame of 50,000 years. No attempt was made to evaluate sample size for shorter time intervals, so a more accurate statement of his findings would be that these criteria are needed when dealing with intervals of ca. 50,000 years. This is an important distinction to make, especially for folks (like me) working in North America, where our total amount of time is 1/5 that range. I'm looking at an interval of 5000 years, where asking for 500 dates is equivalent to saying we need a date for every 10(!) years.
All of this is related, but somewhat beside the point, because (as several folks have observed) probability distributions created under OxCal and Calib are fundamentally different (regardless of smoothing or any other option) from those under CalPal. This is easily observed by creating an equal-interval series of dates with overlapping standard errors and calibrating them – just as you did. Logically, if the distributions of all the dates are continuous and overlap, then the summed probability for the entire range should be more or less constant over the entire interval of time – which, as you can see from your plots, it is not, regardless of smoothing.
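As a sanity check on that logical expectation, here is a minimal sketch (my own, not anyone's published method) that works with uncalibrated Gaussian date likelihoods, so the calibration curve plays no role at all: summing an equal-interval series of dates whose errors overlap produces an essentially constant total away from the edges.

```python
import numpy as np

def summed_probability(dates, sigma, grid):
    """Sum normalized Gaussian likelihoods for each date on a common grid."""
    total = np.zeros_like(grid, dtype=float)
    for d in dates:
        total += np.exp(-0.5 * ((grid - d) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return total

# Equal-interval dates with overlapping errors: 100 yr spacing, 120 yr error.
dates = np.arange(1000, 9001, 100)
grid = np.arange(0.0, 10001.0, 1.0)
spd = summed_probability(dates, 120.0, grid)

# Away from the edges, the summed probability is essentially flat.
interior = spd[(grid > 2000) & (grid < 8000)]
print(interior.max() / interior.min())  # very close to 1
```

The flat interior is what one would anticipate for a uniform distribution of overlapping dates; any peaks and valleys an SPD program produces from such input must therefore come from something other than the dates themselves.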
So, some aspect of the procedure in OxCal (and Calib) makes it such that the resulting SPD is influenced by the shape of the calibration curve. This is not the case in CalPal – and in fact, the SPDs from CalPal appear exactly as you would logically anticipate given a uniform distribution of dates with overlapping probabilities over a given time period. It is only when the standard error is less than the gap between individual dates that "artificial" peaks and valleys begin to form. Note that these are "artificial" in the sense that they are influenced by the composition of the sample itself, and not the calibration curve. You can observe this by doing effectively the same thing that you've done, but comparing the results with what you would get in CalPal – see the attached image, in which, for an interval of 8000 radiocarbon years and a constant error of 120 years, a sample size of 33 is perfectly sufficient to capture the underlying uniform constant probability of dates. The paper that Andrew directed me to actually addresses why this is the case, though I can't say I actually ~understand~ all of it, yet.
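The gap-versus-error point can be illustrated the same way. This sketch (again using plain Gaussians, with calibration deliberately left out so that only sample composition can matter) measures the peak-to-trough ripple of the summed distribution: when the errors overlap the sum is flat, and peaks appear only once the spacing between dates exceeds the error.

```python
import numpy as np

def spd_ripple(spacing, sigma):
    """Peak-to-trough ratio of a sum of Gaussians at equal spacing (interior only)."""
    dates = np.arange(2000, 10001, spacing)
    grid = np.arange(4000.0, 8001.0, 1.0)  # stay away from edge effects
    total = np.zeros_like(grid)
    for d in dates:
        total += np.exp(-0.5 * ((grid - d) / sigma) ** 2)
    return total.max() / total.min()

print(spd_ripple(100, 120.0))  # errors overlap the gaps: essentially flat
print(spd_ripple(400, 120.0))  # gaps exceed the error: pronounced peaks and valleys
```

The peaks in the second case are "artificial" in exactly the sense described above: they reflect where the sample happens to fall, not anything about a calibration curve.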
-Matt
In the model below, a uniform distribution of dates at 25 yr intervals between about 3500 and 1500 BP has a region of 21 additional dates at 5 yr intervals between 2600 and 2500 BP. Shown after modeling.
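For reference, the date list described above can be generated programmatically. The sketch below writes it out as OxCal-style CQL, a Sum() of R_Date() terms; the measurement error of 25 yr is my assumption (the post does not state the error used), and the exact Sum()/R_Date() syntax should be checked against the OxCal manual.

```python
def build_cql(error=25):
    """Build a CQL Sum() for the uniform-plus-dense-region model described above."""
    dates = list(range(3500, 1499, -25))   # 81 dates at 25 yr intervals, 3500-1500 BP
    dates += list(range(2600, 2499, -5))   # 21 additional dates at 5 yr intervals
    lines = [f" R_Date({d},{error});" for d in dates]
    return "Sum()\n{\n" + "\n".join(lines) + "\n};"

cql = build_cql()
print(cql.count("R_Date"))  # 102 dates in total
```

Note that five of the 5 yr dates (2600, 2575, 2550, 2525, 2500) coincide with dates already in the 25 yr series, so the dense region is genuinely oversampled relative to the uniform background.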
