Wednesday, September 02, 2009


Quite a coincidence. On the very day after I got the email indicating acceptance of this paper, Myles Allen has a (co-authored) manuscript up on the arXiv:

A new method for making objective probabilistic climate forecasts from numerical climate models based on Jeffreys' Prior

I thought Myles was vehemently opposed to scientists making any statements in public that had not been peer-reviewed, but maybe he was outvoted by his co-authors. Anyway, he now seems comfortable in criticising the approach of Frame et al 2005 as "arbitrary", and says that "Setting the prior to a constant [meaning uniform] is not an option". Shame he didn't agree with us three and a half years ago - or even in 2007 when he was still promoting uniform priors - but better late than never. I'm not going to gloat - seriously, I'd be glad if the whole sorry mess was finished with.

Unfortunately, it is not quite so clear that the whole sorry mess really is finally finished with. Although they now state that uniform priors are unacceptable, they don't actually go the whole hog and accept that subjective priors are unavoidable, but instead present another cook-book solution - the Jeffreys' prior! Apparently, this approach now provides an "objective" solution that eliminates the "arbitrariness". Of course Frame et al made exactly the same claims back in 2005, right down to the choice of words. Plus ça change...but this time, I suppose they really mean it :-)

As yet, it seems like no-one has actually calculated a Jeffreys' prior in any such complex case, and this paper suggests a bunch of simplifications to make it at all tractable - including the assumption that the data are independent, which of course is something Allen was quick to criticise whenever I dared to suggest it. Probably the tablets of stone are being engraved as I type and the solution will be breathlessly announced via the pages of Nature shortly.
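For anyone wondering what such a calculation even looks like, here is a toy sketch (mine, not the paper's method) for the simplest possible case: a one-parameter Gaussian model y ~ N(f(S), σ²) with known σ, where the Fisher information is f′(S)²/σ², so the Jeffreys prior is proportional to |f′(S)|/σ. The observable, forcing, and error values are all made up for illustration.

```python
import numpy as np

# Toy sketch (not the paper's method): Jeffreys prior for a
# one-parameter Gaussian model y ~ N(f(S), sigma^2) with known sigma.
# Fisher information I(S) = f'(S)^2 / sigma^2, so the Jeffreys prior
# is proportional to |f'(S)| / sigma.
F, sigma = 3.7, 0.5            # assumed 2xCO2 forcing (W/m^2) and obs error

def f(S):
    """Hypothetical observable: feedback-like quantity lambda = F/S."""
    return F / S

S = np.linspace(0.5, 10.0, 1000)     # sensitivity grid (K)
dS = S[1] - S[0]
dfdS = np.gradient(f(S), S)          # numerical derivative f'(S)
prior = np.abs(dfdS) / sigma         # unnormalised Jeffreys prior
prior /= prior.sum() * dS            # normalise on this interval
# The result is proportional to 1/S^2: it decays rapidly, putting very
# little weight on high sensitivities -- quite unlike a uniform prior.
```

The point of the toy example is that the answer depends entirely on which observable you feed in: "uninformative" in the feedback parameter is strongly informative in the sensitivity, which is exactly the ambiguity that the word "objective" papers over.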

As I said in an email recently (and demonstrated in our paper), a more constructive step IMO may not be to attempt to prescribe the one true prior that everyone must use, but rather to check carefully what any particular prior actually means, in terms of the decisions it supports. If the prior actually reduces to "OMG we're all going to die!!11!!eleventy!1!" (as a uniform prior on S does) then we should not be overly surprised if the posterior remains somewhat alarming, even when updated with whatever data we happen to have. But so far researchers seem curiously reluctant to present their prior predictive probabilities in that way.
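To spell out numerically what a uniform prior on S actually "says" before any data arrive (a toy sketch, with made-up bounds rather than any published choice):

```python
import numpy as np

# Toy sketch: prior implications of a uniform prior on climate
# sensitivity S.  The bounds (0-10 K) are made up for illustration.
lo, hi = 0.0, 10.0
S = np.random.default_rng(0).uniform(lo, hi, 100_000)

p_gt_6 = np.mean(S > 6.0)       # prior P(S > 6 K), before seeing any data
p_gt_4p5 = np.mean(S > 4.5)     # prior P(S > 4.5 K)
print(f"P(S > 6 K)   = {p_gt_6:.2f}")    # ~0.40
print(f"P(S > 4.5 K) = {p_gt_4p5:.2f}")  # ~0.55
```

That is, before looking at a single observation, this prior asserts that sensitivity above 6 K is a 40% bet. Whether the data can pull the tail back down is then doing all the work.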


EliRabett said...

An interesting exercise would be to go to a bunch of policy makers (and legislators) and ask them for their priors. It might require some formal set of questions to get at that: for example, what do you think is the most probable value, what is the chance it is > 6 K, etc. (At least some of the departmental bosses should be able to handle that, as well as committee chairs.) After that, run the observationally based calculation and show them the results. The change would be a good indicator of whether they are being realistic. This would be a policy exercise, not a science one.
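A sketch of the sort of before-and-after calculation this exercise would involve, with an entirely made-up "elicited" prior and a made-up pseudo-observational constraint (no real elicitation or data here):

```python
import numpy as np

# Sketch: a made-up "elicited" prior (peaking at 3 K) updated with a
# made-up pseudo-observational constraint.  Everything is illustrative.
S = np.linspace(0.0, 10.0, 1001)      # sensitivity grid (K)
dS = S[1] - S[0]

prior = np.exp(-0.5 * ((S - 3.0) / 2.0) ** 2)   # elicited prior (assumed)
prior /= prior.sum() * dS

like = np.exp(-0.5 * ((S - 2.5) / 1.0) ** 2)    # pseudo-observational likelihood
post = prior * like                              # Bayes' rule (unnormalised)
post /= post.sum() * dS

p_prior = prior[S > 6.0].sum() * dS   # P(S > 6 K) before updating
p_post = post[S > 6.0].sum() * dS     # ... and after
print(f"P(S > 6 K): prior {p_prior:.3f} -> posterior {p_post:.5f}")
```

The size of the shift from prior to posterior is the "indicator of whether they are being realistic" in quantitative form.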

Roger Jones said...

a more constructive step IMO may not be to attempt to prescribe the one true prior that everyone must use, but rather to check carefully what any particular prior actually means, in terms of the decisions it supports

This is the only sensible thing to do. And in that context, uniform priors are not the worst thing in the universe, because when successive uncertainties are combined along a chain of consequences, central-limit effects tend to concentrate probability on the most likely outcomes in any case.

I know I've commented here and in the hutch about the work I've been doing on uncertainties in impacts risk, where the decisions will be made. I haven't had time to lay it all out online, due to too many fingers in different pies, but will soon. Promise.

An even more useful thing to do is to drop pdfs wherever possible, because they look too much like forecasting, and switch to cdfs, expressing likelihoods in terms of exceedance; this allows likelihood and consequence to be expressed in risk terms. It also fosters more diagnostic approaches, which are likewise useful for assessing risk.
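The pdf-to-exceedance conversion itself is mechanical; a sketch, with a purely illustrative lognormal-ish pdf standing in for any published sensitivity distribution:

```python
import numpy as np

# Converting a pdf of sensitivity into an exceedance curve
# P(S > s) = 1 - CDF(s).  The lognormal-ish pdf is purely illustrative.
S = np.linspace(0.01, 10.0, 1000)
dS = S[1] - S[0]
pdf = np.exp(-0.5 * ((np.log(S) - np.log(3.0)) / 0.4) ** 2) / S
pdf /= pdf.sum() * dS                 # normalise on this interval

cdf = np.cumsum(pdf) * dS
exceed = 1.0 - cdf                    # likelihood of exceedance, P(S > s)

# Read off the exceedance probability at a consequence threshold,
# e.g. S = 4.5 K, for a likelihood-times-consequence risk statement:
idx = int(np.argmin(np.abs(S - 4.5)))
print(f"P(S > 4.5 K) = {exceed[idx]:.3f}")
```

Reading likelihoods off the exceedance curve at consequence thresholds is what lets the two halves of the risk calculation meet.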

J&J, I use your sensitivity here (doi:10.1016/j.gloenvcha.2008.04.008) in this way.

James Annan said...

Hi Roger,

Can you send me a copy of that? We don't seem to have a subscription here.


Joshua Stults said...

I've read this post a couple times, but I still don't understand your beef with the Jeffreys prior. Is it the simplifications they make to get the problem into a tractable state?

James Annan said...

My beef is partly that I don't know what it actually is - and neither does anyone else, it seems - and partly that I don't believe that there is such a thing as an "objective" approach to this sort of probabilistic prediction, in the sense that many seem to interpret it. We simply have to be prepared to use (and defend) our judgements, rather than hiding behind obscure calculations. Trying to reduce things to some sort of automatic cook-book is simply doomed in principle, IMO. The track record of supposedly "objective" methods in this area is hardly encouraging.

("Objective" does also have some less objectionable interpretations, indeed part of the problem may be due to the confusion and ambiguity in its use. And I don't have any objection to complex calculations per se, just their use to obscure what is really going on.)