Saturday, April 01, 2017

BlueSkiesResearch.org.uk: Independence day

We all know by now that Brexit means Brexit. However, it is not so clear whether independence means independence, or perhaps something else entirely. This has been an interesting and important question in climate science for at least 15 years, and probably longer. The basic issue is: how do we interpret the fact that all the major climate models, which have been built at various research centres around the world, generally agree on the major issues of climate change? That is, that the current increase in CO2 will generate warming at around the rate of 0.2°C/decade globally, with the warming rate being higher at high latitudes, over land, at night, and in winter. And that this will be associated with increases in rainfall, though not uniformly everywhere: the increases are focussed on the wettest areas, with many dry areas becoming drier. Etc etc at various levels of precision. Models disagree on the fine details but agree on the broad picture.

But are these forecasts robust, or have we instead merely created multiple copies of the same fundamentally wrong model? We know for sure that some models in the IPCC/CMIP collection are basically copies of other models with very minor changes. Others appear to differ more substantially, but many common concepts and methods are widely shared. This has led some to argue that we can’t really read much into the CMIP model-based consensus, as these models are all basically near-replicates and their agreement just means they are all making the same mistakes.

While people have been talking about this issue for a number of years, it seems to us that little real progress has been made in addressing it. In fact, there have been few attempts to even define what "independence" in this context should mean, let alone how it could be measured or how some degree of dependence could be accounted for correctly. Many papers present an argument that runs roughly like this:
  • We want models to be independent, but we won’t define independence rigorously
  • (Some analysis of a semi-quantitative nature)
  • Look, our analysis shows that the models are not independent!
Perhaps I’m not being entirely fair, but there really isn’t a lot to get your teeth into.
 
We’ve been pondering this for some time, and have given a number of presentations of varying levels of coherence over the last few years. Last August we finally managed to write something down in a manner that we thought tolerable for publication, as I wrote about at the time. During our trip to the USA, there was a small workshop on this topic which we found very interesting and useful, and that, together with the reviewers’ comments, helped us to improve the paper in various ways. The final version was accepted recently and has now appeared in ESD. Our basic premise is that independence can, and indeed must, be defined in a mathematically rigorous manner in order to make any progress on this question. Further, we present one possible definition, show how it can be applied in practice, and describe what conclusions flow from it.

Our basic idea is to use the standard probabilistic definition of independence: A and B are independent if and only if P(A and B) = P(A) x P(B). In order to make sense of this approach, it has to be applied in a fundamentally Bayesian manner. That is to say, the probabilities (and therefore the independence or otherwise of the models) are not truly properties of the models themselves, but rather properties of a researcher’s knowledge of (belief about) the models. So the issue is fundamentally subjective and depends on the background knowledge of the researcher: A and B are conditionally independent given X if and only if P(A and B given X) = P(A given X) x P(B given X). Depending on what one is conditioning on, this approach seems flexible and powerful enough to encapsulate in quantitative terms some of the ideas that have previously been discussed only vaguely. For example, if X is the truth, then we recover the truth-centred hypothesis, under which the errors of an ensemble of models generally cancel out and the ensemble mean converges to the truth. It’s not true (or even remotely plausible), but we can see why it was appealing. More realistically, if X is the current distribution of model outputs, and A and B are two additional models, then this corresponds quite closely to the concept of model duplication or near-duplication. If you want more details then have a look at the paper.
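To make the conditional version concrete, here is a minimal toy sketch (in Python; this is our illustration rather than anything from the paper, and all the variable names are invented for the purpose). Two hypothetical "models" A and B share a common component X, standing in for shared background knowledge or shared code. Marginally their outputs are correlated, so the product rule fails; conditioning on X restores it.

```python
# Toy Monte Carlo illustration of (conditional) independence.
# A and B each combine a shared component X with their own noise,
# so events about A and B are marginally dependent, but become
# independent once we condition on X.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.normal(0.0, 1.0, n)      # shared component (the "X" we may condition on)
a = x + rng.normal(0.0, 1.0, n)  # model A output: shared part + A's own noise
b = x + rng.normal(0.0, 1.0, n)  # model B output: shared part + B's own noise

A = a > 0                        # event A: model A output is positive
B = b > 0                        # event B: model B output is positive

# Marginally, A and B are linked through X, so P(A and B) != P(A) * P(B)
print("P(A and B)    =", (A & B).mean())   # ~0.33
print("P(A) * P(B)   =", A.mean() * B.mean())  # ~0.25

# Conditioning on (a narrow slice of) X restores the product rule:
# P(A and B | X) ~= P(A | X) * P(B | X)
sel = np.abs(x - 0.5) < 0.05     # restrict to samples with X near 0.5
print("P(A and B | X)  =", (A & B)[sel].mean())
print("P(A|X) * P(B|X) =", A[sel].mean() * B[sel].mean())
```

The point is not the particular numbers but that the same pair of models can be dependent or independent depending on what is conditioned on, which is exactly why the question has to be posed in Bayesian terms.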

Anyway, we don’t expect this to be the last word on the subject, though it may be the last we say about it for some while, as we are planning to head off in a different direction with our climate research for the coming months and perhaps years.

2 comments:

crandles said...

Glad you have got it published.

Different direction for research pointing to USA. Hmm, seems a little odd with current administration.

James Annan said...

Oh that confused someone else as well - must have been a bit cryptic. I just meant the deglaciation project.