Attributing additionality, as demanded for carbon offsets, is not straightforward. This paper tests two carbon projects using double machine learning, a recent causal attribution framework, and concludes that confounders and complex social-ecological interactions complicate the testing procedure. There are other papers, like
https://iopscience.iop.org/article/10.1088/1748-9326/ae0f44 which do pixel-wise matching to identify counterfactuals, and suggest that ex-post additionality assessments are more reliable than ex-ante (predictive) assessments. Standards bodies like Verra, which earlier used historic baselines from the project site, now use a method called dynamic baselines, which compares projects against predictions produced by models trained in the same regional jurisdiction.
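As a rough illustration of the double machine learning idea (this is a minimal partialling-out sketch on synthetic data, not the paper's actual pipeline), confounders that drive both project enrolment and forest outcomes are first predicted away with flexible learners, and the additionality effect is then estimated from the residuals. All variable names and the data-generating process here are hypothetical:

```python
# Double machine learning (partialling-out) on synthetic data.
# Hypothetical setup: X = confounders (e.g. rainfall, slope, market access),
# d = project "treatment" intensity, y = forest-cover outcome.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))                  # confounders
d = X[:, 0] + rng.normal(size=n)             # treatment depends on confounder
y = 0.5 * d + X[:, 0] + rng.normal(size=n)   # true additionality effect = 0.5

# Cross-fitted nuisance predictions: E[d|X] and E[y|X]
d_hat = cross_val_predict(
    RandomForestRegressor(n_estimators=50, random_state=0), X, d, cv=5)
y_hat = cross_val_predict(
    RandomForestRegressor(n_estimators=50, random_state=0), X, y, cv=5)

# Regress outcome residuals on treatment residuals to recover the effect
theta = LinearRegression().fit(
    (d - d_hat).reshape(-1, 1), y - y_hat).coef_[0]
print(f"estimated additionality effect: {theta:.2f}")  # close to 0.5
```

The point of the cross-fitting is that the same machine-learning flexibility that helps absorb confounders would otherwise overfit and bias the effect estimate; it does not, of course, resolve the deeper problem of unobserved confounders that the paper highlights.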
All of these are basically different methods of arriving at the "true" additionality, with reliability studies to determine how believable their version of "truth" is, but the bigger question is why go down this path at all. The very conceptualization of offsets is known to be objectionable. Moreover, imposing stringent standards to determine additionality can itself be oppressive and unjust when the pricing and methodology are dictated by buyers in the global north, even while additionality remains an enigma. The paper rightly argues that payments should simply be made for contributions towards creating credits, although estimating credits is not rock solid either, with so many unknowns about how to quantify the quality of ecosystem functioning. All in all, this is a super messy phase driven by a craze for quantification layered with top-down imposed accountability, but sadly there may be no way out given the economic systems running in the world today.
Adi
-- Aaditeshwar Seth
Microsoft Chair Professor, Computer Science and Engineering, IIT Delhi
Co-founder, Gram Vaani; Co-founder, CoRE Stack