Timely, given events all over the place, this new paper by Mann et al. has just appeared. It's a well-aimed jab at the detection and attribution industry, which could perhaps be held substantially responsible for the sterile “debate” over the extent to which AGW has influenced extreme events (and/or will do so in the future). I've argued against D&A several times in the past (such as here, here, here and here) and don't intend to rehash the same arguments over and over again. Suffice it to say that it doesn't usefully address the questions that matter, and cannot do so by design.
Mann et al. argue that the standard frequentist approach to D&A is inappropriate, both via a simple example which shows it generating poor results, and via the ethical argument that “do no harm” is a better starting point than “assume harmless”. The precautionary versus proactionary principles can be argued indefinitely, and neither really works when pushed to its reductio ad absurdum, so I'm not really convinced that the latter is a strong argument. A clearer demonstration could perhaps have been provided by a rational cost-benefit analysis in which the costs of action versus inaction (and the payoffs) were explicitly calculated. This would still have supported their argument of course, as the frequentist approach is not a rational basis for decisions. I suppose that's where I tend to part company with the philosophers (check the co-author list) in preferring a more quantitative approach. I'm not saying they are wrong, it's perhaps a matter of taste.
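For instance, here's a minimal sketch of the sort of calculation I mean (all the numbers are invented purely for illustration, and have nothing to do with Mann et al.):

```python
# Toy expected-cost comparison for acting vs. not acting on a possible harm.
# All numbers here are invented for illustration only.

def expected_cost(p_harm: float, cost_of_action: float, cost_of_harm: float):
    """Return (expected cost if we act, expected cost if we don't)."""
    act = cost_of_action          # we pay the mitigation cost regardless
    dont = p_harm * cost_of_harm  # we gamble on the harm not occurring
    return act, dont

for p in (0.05, 0.2, 0.5):
    act, dont = expected_cost(p_harm=p, cost_of_action=10.0, cost_of_harm=100.0)
    better = "act" if act < dont else "don't act"
    print(f"P(harm)={p:.2f}: E[cost|act]={act:.1f}, E[cost|inaction]={dont:.1f} -> {better}")
```

The point being that even a fairly modest probability of a large harm can make action the rational choice, and nothing in the calculation requires the harm to have first been “detected” at the 5% level.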
[I find to my surprise I have not written about the precautionary vs proactionary principle before]
Other points that could have been made (and had I been a reviewer, I'd probably have encouraged the authors to include them) are that when data are limited and the statistical power of the analysis is weak, any frequentist-based estimate that achieves statistical significance will inevitably be a large overestimate of the true magnitude of the effect, and there's even a substantial chance it will have the wrong sign! A Bayesian prior solves (or at least greatly ameliorates) these problems (the sketch at the end of this post illustrates both effects). Another benefit of the Bayesian approach is the ability to integrate different sources of information.

My favourite example of the weakness of traditional D&A here is the way that we can (at least this was the case a few years ago) barely “attribute” any warming of the world's oceans under this methodology. The reason for this is that the internal variability of the oceans is large (and uncertain) enough that we cannot be entirely confident that an unforced ocean would not have warmed up by itself. On the other hand, it is absurd to believe the null hypothesis that we haven't warmed it: it has been in contact with the atmosphere, which we have certainly warmed; the energy imbalance due to GHGs is significant; and we've even observed warming very closely in line with what our models predict should have happened. But D&A can't assimilate this information. In the context of Mann et al., we might consider information about warming sea surface temperatures as relevant to diagnosing and predicting hurricanes, for example, rather than relying entirely on storm counts.
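Here's the promised sketch of the low-power problem, a toy simulation with entirely invented numbers (a small true effect measured with a large standard error). Estimates that pass the usual 5% significance test are grossly exaggerated, and a worrying fraction point the wrong way, whereas even a crude prior centred on zero keeps the answer sane:

```python
# Toy simulation: with low statistical power, "significant" estimates are
# inevitably exaggerated and sometimes have the wrong sign.
# All numbers are invented purely to illustrate the argument above.
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.5   # small real effect
sigma = 2.0         # standard error of each estimate: badly underpowered
n_trials = 100_000

estimates = rng.normal(true_effect, sigma, n_trials)
significant = np.abs(estimates / sigma) > 1.96   # usual 5% two-sided test
sig_est = estimates[significant]

print(f"Power (fraction significant):        {significant.mean():.3f}")
print(f"Mean |significant estimate|:         {np.abs(sig_est).mean():.2f}  (truth: {true_effect})")
print(f"Significant results with wrong sign: {(sig_est < 0).mean():.3f}")

# A Bayesian prior centred on zero shrinks the noisy estimate back towards
# plausible values: with a N(0, tau^2) prior and Gaussian noise, the
# posterior mean is a precision-weighted compromise between prior and data.
tau = 1.0
posterior_means = (tau**2 / (tau**2 + sigma**2)) * estimates
print(f"Mean Bayesian posterior estimate:    {posterior_means.mean():.2f}")
```

With these numbers, only around 6% of trials reach significance, the significant estimates average several times the true effect, and roughly a quarter of them have the wrong sign.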
5 comments:
My favourite example of the weakness of traditional D&A here is the way that we can (at least this was the case a few years ago) barely “attribute” any warming of the world's oceans under this methodology. The reason for this is that the internal variability of the oceans is large (and uncertain) enough that we cannot be entirely confident that an unforced ocean would not have warmed up by itself.
I've seen you make this argument, but I've never quite understood it (which is almost certainly my own confusion, rather than a problem with your argument). When it comes to surface warming, I can see that it could be difficult because - by itself - you don't know if the warming is from a planetary energy imbalance, or from the oceans (for example). Hence, the attribution of surface warming can be difficult. When it comes to the oceans, though, the heat content is so large, and there is no other sufficiently large energy reservoir, that any warming has to be due to a planetary energy imbalance. Given that the existence of a long-term planetary energy imbalance is almost certainly due to anthropogenic emissions, it's hard to see why one couldn't attribute ocean warming to anthropogenic influences.
Well, actually I think it's changed in the latest IPCC report, but previously....
It's not actually about attribution at all, but detection (which must come first in the traditional approach). If our obs are sufficiently imprecise and limited then it may be impossible to reject the hypothesis that natural variability could have caused the observed warming. Having only a limited idea of how big natural variability is also makes this hard. Deep ocean obs are a bit sparse when you go back in time.
Okay, thanks, I see what you mean now.
The "Gambler's Fallacy" is little more than the application of Bayesian priors to roulette. (Or blackjack, or craps, etc.)
There are many real-world situations in which it is not only inappropriate, but misleading to the point of danger.
Is this one of those circumstances? I am not prepared to say.
But I would say that it should not be assumed, willy-nilly, that applying Bayesian priors will improve your predicting ability.
Oh, certainly silly priors can give poor results. But doing a calculation in which you have implicitly embedded prior assumptions, without even realising it or thinking about what they might be, is not really a good alternative. It's kind of like saying that because it's possible to make a numerical error, it's better to read tea leaves than try to calculate an answer.
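To make the implicit-prior point concrete, a minimal sketch (invented numbers, assuming only scipy): for a Gaussian measurement, the standard frequentist 95% confidence interval is numerically identical to the credible interval you get from a flat prior, i.e. from assuming up front that effects of any size and either sign were equally plausible.

```python
# Sketch: the frequentist 95% CI for a Gaussian measurement coincides with
# the 95% credible interval under a flat (improper) prior, so using it
# amounts to silently assuming that prior. Numbers invented for illustration.
from scipy import stats

estimate, sigma = 3.0, 2.0

# Frequentist 95% confidence interval:
lo, hi = estimate - 1.96 * sigma, estimate + 1.96 * sigma

# Flat-prior posterior is N(estimate, sigma^2); its 95% credible interval:
flat_lo, flat_hi = stats.norm.interval(0.95, loc=estimate, scale=sigma)

print(f"Frequentist CI:         ({lo:.2f}, {hi:.2f})")
print(f"Flat-prior credible:    ({flat_lo:.2f}, {flat_hi:.2f})  # identical")

# An informative N(0, tau^2) prior gives a different (shrunken) answer:
tau = 1.0
post_var = 1 / (1 / tau**2 + 1 / sigma**2)
post_mean = post_var * (estimate / sigma**2)
print(f"Informative-prior mean: {post_mean:.2f}, sd: {post_var**0.5:.2f}")
```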