Tuesday, September 01, 2009

Uniform prior: dead at last!

As I hinted at in a previous post, I've some news regarding the uniform prior stuff. I briefly mentioned a manuscript some time ago, which at that time had only just been submitted to Climatic Change (prompted in part by Myles Allen's snarky comments; I must remember to thank him if we ever meet). Well, eventually the reviews arrived, which were basically favourable, and the paper was accepted after a few minor revisions. The final version is here, and I've pasted the abstract at the bottom of this post.

The content is essentially the same as the various rejected manuscripts we've tried to publish (eg here and here): that is, a uniform prior for climate sensitivity certainly does not represent "ignorance" and moreover is a useless convention that has no place in research that aims to be policy-relevant. With a more sensible prior (even a rather pessimistic one) there seems to be no plausible way of creating the high tails that have been a feature of most published estimates. I'm sure you can join the dots to the recent IPCC report, and the research on this topic that it leant on so heavily, yourself.
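For anyone who wants to see the basic effect rather than take my word for it, here's a toy calculation - nothing to do with the actual analysis in the paper, and the likelihood shape and all the numbers are simply made up for illustration - showing how the same observational constraint gives a much fatter upper tail under a uniform prior than under a Cauchy-type one:

```python
import numpy as np

# Toy comparison, NOT the calculation from the paper: the same idealised
# observational constraint on climate sensitivity S combined with two
# different priors.  All numbers (likelihood shape, prior location and
# scale) are invented for illustration.

S = np.linspace(0.01, 20.0, 20000)   # sensitivity grid, degrees C
dS = S[1] - S[0]

# Observations pin down the low end of S fairly well but constrain the high
# end only weakly, so the toy likelihood gets a slowly decaying upper tail.
likelihood = 1.0 / (1.0 + ((S - 3.0) / 1.5) ** 2)

uniform_prior = np.ones_like(S)                       # "ignorant" U(0, 20)
cauchy_prior = 1.0 / (1.0 + ((S - 3.0) / 2.0) ** 2)   # heavy-tailed but informative

def tail_prob(prior, threshold=6.0):
    """Posterior P(S > threshold) on the discrete grid."""
    post = prior * likelihood
    post /= post.sum() * dS              # normalise to a density
    return post[S > threshold].sum() * dS

print("P(S > 6C | uniform prior): %.2f" % tail_prob(uniform_prior))
print("P(S > 6C | Cauchy prior):  %.2f" % tail_prob(cauchy_prior))
```

The point of the toy is simply that the uniform prior's posterior keeps a substantial chunk of probability above 6C, while the heavy-tailed but informative prior does not, even though the "data" are identical in both cases.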

Obviously there's the possibility of learning lessons about how to present criticism of existing research. This topic came up again only recently, and it's clear that there are pros and cons to the different approaches. I saw that Gavin Schmidt published a couple of papers recently (1, 2) that were comments without being comments, in that they basically focussed on weaknesses in previous papers without explicitly being presented as "Comment on" pieces with an accompanying reply. However, I'd certainly have liked to see Allen and Frame's attempted defence appear in public, as I believe its weakness goes a long way to making our case for us. As things stand, a third-party reader will see our point of view but may reasonably wonder whether there are strong arguments for the other side - but don't worry, there aren't :-)

On the other hand, there is no question that the final manuscript is improved by being able to go beyond the direct remit of merely criticising a single specific paper. In particular, the simple economic analysis that we tacked on converts what might be a rather abstruse and mathematical discussion of probability into a direct statement of financial implications (albeit a rather simplified one).

I think one particular difficulty we faced with either approach is that we were not able to present a simple glib solution to the choice of prior, as we do not believe that such a solution exists. The prior that we do use (Cauchy-type) is fairly pathological and hard to recommend. In particular, if one adopts an economic analysis based on a convex utility function such as Weitzman suggests then it's not going to give sensible answers as the expected loss will always be infinite (even for 1ppm extra of CO2, essentially irrespective of what observations we make). However, that is an argument primarily in the field of economics and even philosophy, and not particularly critical as far as the climate science itself goes. The take-home message is that even with such a horrible prior, the posterior is nothing like as scary as those presented in many recent papers.
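To make the divergence point a bit more concrete, here's a numerical sketch with made-up functional forms (not anything from the paper): a posterior tail decaying like 1/S^2 and an exponential loss standing in for the Weitzman-style convex loss. The partial expectation just keeps growing as you raise the upper cutoff, which is what "infinite expected loss" looks like in practice:

```python
import numpy as np

# Numerical sketch of the divergence, with made-up functional forms: a
# posterior tail decaying like 1/S^2 (Cauchy-type) and a loss growing like
# exp(S), standing in for a Weitzman-style convex loss.  The partial
# expectation never settles down as the upper cutoff is raised, i.e. the
# expected loss is infinite.

def partial_expected_loss(cutoff, n=200_000):
    S = np.linspace(0.01, cutoff, n)
    dS = S[1] - S[0]
    tail_density = 1.0 / (1.0 + S ** 2)   # ~ 1/S^2 at large S
    loss = np.exp(S)                       # convex, rapidly growing loss
    return (loss * tail_density).sum() * dS

for cutoff in (10, 20, 40, 80):
    print("cutoff %3d C -> partial expected loss %.3g"
          % (cutoff, partial_expected_loss(cutoff)))
```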

Of course, this result does bring with it my first loss in climate-related bets. Jules had wagered £500 with me that this previous paper would, if rewritten appropriately, be accepted in Climatic Change, and I was pessimistic enough to take her on. I'm quite happy to lose that bet! (I'd be happy to lose the one on 20 year trends too, if it meant that global warming was a much smaller problem than it now appears.) I suppose I should revise my opinions of the peer review system upwards a little. Apart from the extremely long delay - well over a year so far, and it's not published yet - the process worked well this time, with sensible reviewers making a number of helpful suggestions.

Anyway, here's the abstract:

The equilibrium climate response to anthropogenic forcing has long been one of the dominant, and therefore most intensively studied, uncertainties in predicting future climate change. As a result, many probabilistic estimates of the climate sensitivity (S) have been presented. In recent years, most of them have assigned significant probability to extremely high sensitivity, such as P(S > 6C) > 5%.

In this paper, we investigate some of the assumptions underlying these estimates. We show that the popular choice of a uniform prior has unacceptable properties and cannot be reasonably considered to generate meaningful and usable results. When instead reasonable assumptions are made, much greater confidence in a moderate value for S is easily justified, with an upper 95% probability limit for S easily shown to lie close to 4C, and certainly well below 6C. These results also impact strongly on projected economic losses due to climate change.

10 comments:

sylas said...

Well done! Where is it going to appear?

James Annan said...

Oops! Climatic Change, and I've amended the post to make that clearer.

David B. Benson said...

Much, much better than the earlier draft I read here.

The choice of Cauchy distribution is clearly motivated (and rather inspired, methinks). A pessimistic cloistered expert?

ac said...

grats.

Did you look at the effect of using a 'skeptical' prior, centered on zero instead of positively centered? This ought to represent the state of knowledge of someone with no reason, physical or otherwise, to expect a sensitivity in either direction.

James Annan said...

ac,

No, but off the top of my head I can say that the data basically truncates the distribution around 1C (give or take), so in that case I'd expect the results to depend largely on the shape of the tail above that value. A Cauchy-type tail (1/S^2) would give results similar to what we show, but squashed a bit more towards the lower values.
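Something like the following back-of-envelope sketch shows what I mean (the likelihood shape and every number are invented purely for illustration, not taken from the paper): a zero-centred Cauchy prior, once the likelihood has killed off the low values, ends up looking much like the positively centred one, just shifted a little lower.

```python
import numpy as np

# Back-of-envelope sketch only; the likelihood shape and every number here
# are invented for illustration, not taken from the paper.

S = np.linspace(-5.0, 20.0, 25000)
dS = S[1] - S[0]

# Toy likelihood: slowly decaying upper tail, with values below ~1C strongly
# disfavoured (the "truncation" mentioned above).
likelihood = 1.0 / (1.0 + ((S - 3.0) / 1.5) ** 2)
low = S < 1.0
likelihood[low] *= np.exp(-((S[low] - 1.0) / 0.3) ** 2)

skeptical_prior = 1.0 / (1.0 + (S / 3.0) ** 2)          # Cauchy centred on zero
positive_prior = 1.0 / (1.0 + ((S - 3.0) / 3.0) ** 2)   # Cauchy centred on 3C

def summarise(prior):
    post = prior * likelihood
    post /= post.sum() * dS
    cdf = np.cumsum(post) * dS
    median = S[np.searchsorted(cdf, 0.5)]
    return median, post[S > 4.5].sum() * dS

for name, prior in (("skeptical", skeptical_prior), ("positive ", positive_prior)):
    median, p_hi = summarise(prior)
    print("%s prior: posterior median %.1f C, P(S > 4.5C) = %.2f"
          % (name, median, p_hi))
```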

Some people involved in Bayesian approaches to detection and attribution have looked at similar "skeptical" priors - in fact, they are fairly standard (to mimic the classical frequentist approach). However these are actually priors for the forced response to date, not the equilibrium sensitivity, so it's not quite the same thing.

EliRabett said...

Eli was interested to see the approach you took to setting up an expert prior. It would perhaps be useful to go through a Delphi process at this point, to use as a prior say ten years from now.

Another point is that IEHO it is not easy to separate GCM results from observations, because the GCMs have been evaluated against observational data (note this is not tuning, this is sensitivity analysis in the broadest sense)

James Annan said...

10 years from now, the scientists of the day will decide they know much better than the previous generation and will discard their judgements as unreliable :-)

EliRabett said...

It would be good to have a data point....

Hank Roberts said...

Tangentially, curious if you've read this and have anything to add for readers like me (still trying to learn how to think better about statistics)
http://www.atm.damtp.cam.ac.uk/people/mem/#thinking-probabilistically

(link is down the page a ways, in this paragraph):

Here's a brief essay `On thinking probabilistically' ... written for the 15th 'Aha Huliko'a Workshop on Extreme Events, held at the University of Hawaii in January 2007. It tries to address some of the most deep-seated difficulties in understanding probability and statistics and, by implication, in understanding science itself. Even more than usual, the difficulties stem from unconscious assumptions. I try to show why there's far more to all this than the old `frequentist versus Bayesian' polemics.
... the deepest and most dangerous confusion of all comes from the hardcore frequentist or absolutist view of probability values as properties of things in the outside world, or material world -- i.e., as properties of what science calls objective reality....

James Annan said...

Thanks Hank,

That is a nice essay, I might post a more prominent link to it. I don't think there is anything revolutionary in it but it's nice to see some clear and firm statements on the topic.

It seems to me that the teaching of probability as a property of the system has done a lot of harm to the understanding of generations of scientists who end up blundering around hopelessly until they manage to unlearn everything they've been taught.

At an intuitive level, of course, we all act in a Bayesian manner all the time, but making it explicit means that calculations can be made more accurate and less prone to conceptual or careless errors.