At last, the IPCC AR4 is out - at least, most of it is (there is still supplementary material to come). Since I was unhappy with some of what was written in the previous (2nd) draft of Chapter 9, I looked at that first.
At first glance, I'm pleased to see that it has been significantly improved. The drafts were never meant to be a polished final version, and indeed were only released on condition that they were kept private (although the 2nd draft can easily be found on the web). So I'll restrict my comments to what they have agreed on for the final version itself.
Section 9.6 "Observational Constraints on Climate Sensitivity", contains the following:
"Note that uniform prior distributions for ECS [equilibrium climate sensitivity], which only require an expert assessment of possible range, generally assign a higher prior belief to high sensitivity than, for example, non-uniform prior distributions that depend more heavily on expert assessments (e.g., Forest et al., 2006)."Many people may think this statement is too trivial to be worth making much of, but when I made essentially the same point about a uniform prior implying high prior belief in high sensitivity, Allen and Frame dismissed it as "just a rhetorical flourish". This statement from the IPCC also appears to directly contradict much of the peer-reviewed literature, which claims that uniform priors represent ignorance. It is encouraging to see that it is now the consensus of 2,500 climate scientists that this is not the case :-) Another significant aspect is the comment that even uniform priors "require an expert assessment of possible range", which at least takes a baby step towards acknowledging our point that the choice of upper bound can have a dramatic influence on the result. As far as I know, this critical detail (which undermines the whole rationale for uniform priors) does not appear anywhere in the peer-reviewed literature, although one reviewer did single it out as a particularly interesting point in one of our submissions. It could also conceivably be called trivial were it not for the fact that so many people have apparently been oblivious to it (or else deliberately deceptive in failing to mention it) for several years.
The defence the IPCC authors provide for the use of the uniform distribution is that it "enables comparison of constraints obtained from the data in different approaches". Of course this is not the same thing as generating a pdf which credibly represents the opinion of an intelligent researcher, but they don't actually go so far as to explicitly state this rather embarrassing fact (which leads inescapably to the conclusion that these "pdfs" cannot be considered policy-relevant and used in decision support, e.g. economic analyses such as the Stern report, etc.). Most of the results they quote are based on uniform priors, but they hardly had a choice, since this approach dominates the recent literature.
The section also makes extensive reference to the "multiple constraints" argument (a significant feature of Hegerl et al's Nature paper, as well as our GRL paper), which is great. As I said more than a year ago, our calculation was rather simplistic, and anyone who doesn't like it is welcome to generate their own answer, taking account of the arguments we have presented. Interestingly, I'm still waiting...
So in summary, it might not be exactly what I would have written myself, but it's clearly a step in the right direction, and it seems the IPCC comment/review system has had some effect. We'll have to wait a little longer to see what else they wrote about Bayesian estimation in the Appendix, since that is still not published. Whether this means Frame and Allen will now have the sense to slink away and pretend the whole sorry mess about uniform priors never really happened remains to be seen.
16 comments:
I've seen comments to the effect that your approach tends to 'beg the question' (Eli springs to mind), but I'm not sure whether this is right. If you attribute a 'rational' prior, does this mean that the model response/pdf can't go beyond the range, or would something happen in the mathematical calculation which showed that something 'impossible' was going on?
Given that all ranges must include a probability for the extremes, is there a way in which you can work the statistical analysis so that a more 'realistic' probability curve results, with values less than 5% for the 'outliers'? I think what I'm asking about is the distribution curves, but I'm not sure.
BTW: what probability does the AR4 give for a climate sensitivity higher than 6.5C?
What you say, James, sounds like a set of obvious tautologies. I can't believe that anyone would disagree with these statements.
Of course the upper bound and shape of the priors will influence predictions. How could it not?
The dependence of the final result on the priors just reflects a lack of solid data. If there were a lot of solid data, you could pin the sensitivity down to a reasonable interval even if you started with a uniform prior on the interval (-100, 100) degrees per doubling.
The word "uniform" is itself misleading because by a functional redefinition, one would get a very different notion of "uniformity". At some level, one could also think about a "uniform" sensitivity at the log scale. Sensitivity "X plus minus 1%" is as likely as "10 X plus minus 1%", at least within a certain interval, because we really don't know what the right scale is.
Such a uniform prior on the log scale - i.e. the distribution p(S) = c/S - would clearly favor smaller values of sensitivity, and would be less sensitive to the (still required) truncation at high values of sensitivity.
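For concreteness, here's a rough numerical sketch (Python; the range 0.5-20 degrees and the 6-degree threshold are arbitrary, chosen purely for illustration) of how differently the two priors spread their weight:

```python
import numpy as np

# Prior probability of S > 6 on the (arbitrary) range [0.5, 20],
# under a flat prior versus a prior uniform in log(S), i.e. p(S) = c/S.
S_lo, S_hi, S_cut = 0.5, 20.0, 6.0
flat = (S_hi - S_cut) / (S_hi - S_lo)
log_uniform = np.log(S_hi / S_cut) / np.log(S_hi / S_lo)
print(f"flat: {flat:.2f}, log-uniform: {log_uniform:.2f}")  # 0.72 vs 0.33
```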
The prior might be taken to be Gaussian, based on one method, with the other methods then used for inference.
The language of an upper bound is perhaps the wrong way of saying that what you are using is a boxcar-shaped distribution with upper and lower bounds chosen by expert assessment, and then one should describe how the bounds were picked. What is also getting lost here is that the lower bound affects the result.
I would be interested in seeing what you would get if you chose the prior from an ensemble of GCM runs and got the pdf by matching this to the data, or vice versa.
Cripes, I think Lumo and I agree, except that a Gaussian probably underestimates the tails.
Fergus,
There are plenty of possible priors that cover the whole number line but only assign quite a small probability to high values. The inverse quadratic that we used in the "Can we believe..." paper is one such (rather arbitrarily chosen) example. Actually I truncated it at 20C for convenience, but it could have gone on forever... the prior (and therefore posterior) 95% credible interval would have been bounded even in that case.
It's a specific (and IMO serious) limitation of the uniform prior that you cannot even allow the possibility that S is as large as X without also assigning a prior belief of at least 50% to S > X/2 - that's keeping the lower bound fixed at zero, but this is a relatively uncontroversial choice. What this means is that people end up with such desperate compromises as U[0,10] - meaning a prior belief that S is more likely to be greater than 7C than it is to lie in the interval 2-4C, but that it cannot possibly be greater than 10C.
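Spelling out that arithmetic, trivially, in Python:

```python
# Prior beliefs implied by a U[0,10] prior on sensitivity S (degrees C).
print((10 - 7) / 10)  # P(S > 7)     = 0.3
print((4 - 2) / 10)   # P(2 < S < 4) = 0.2
print((10 - 5) / 10)  # P(S > X/2)   = 0.5, with X = 10
```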
The AR4 doesn't give a probabilistic statement for such high sensitivities as 6.5C, instead choosing the deliberately meaningless "Values substantially higher than 4.5°C cannot be excluded".
Shock horror, I just about agree with Lumo. Pass me the smelling salts :-)
Even though the prior necessarily affects the posterior, one might hope that a reasonable range of choices does not lead to radically different results. I believe that this is the case (and furthermore that I have adequately shown it to be so), but that depends (at least in part) on my successfully arguing that uniform priors are not reasonable :-) Clearly, others have disagreed on this point in the recent past, even if the IPCC marks a bit of a turning of the tide (which is by no means certain, but I can hope).
Lumo's comment about just collecting more data reminds me of a comment Eli made some time ago to similar effect. I also read something recently in New Scientist about a putative new physics discovery. IIRC, the article said it was a 2-sigma result and that usually people waited until they had got to the 5-sigma level before really claiming something was there. That's fine when (a) you can see the 5-sigma threshold approaching in a reasonable time frame and at reasonable cost, and (b) no critical decisions have to be made in the meantime. It is clear that many physicists don't understand Bayesian probability, but once the data are precise enough, it doesn't really matter what your prior was.
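As a toy illustration of that last point (all the numbers here are invented), grid-based Bayesian updating with two quite different priors shows the posteriors converging as the data accumulate:

```python
import numpy as np

# Estimate a parameter S (true value 3.0) from observations with
# noise sd 2.0, on a grid, under two rather different priors.
rng = np.random.default_rng(0)
S = np.linspace(0.01, 20.0, 2000)
priors = {"uniform": np.ones_like(S), "p(S) ~ 1/S": 1.0 / S}

for n in (5, 500):
    obs = rng.normal(3.0, 2.0, size=n)
    loglike = (-0.5 * ((obs[:, None] - S[None, :]) / 2.0) ** 2).sum(axis=0)
    like = np.exp(loglike - loglike.max())
    for name, prior in priors.items():
        post = like * prior
        post /= post.sum()
        print(n, name, f"posterior mean = {(S * post).sum():.2f}")
```

With 5 observations the two posteriors differ noticeably; with 500 they are essentially indistinguishable. The prior only matters when the data are weak, which is precisely the situation with climate sensitivity.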
However, we don't have that luxury in climate science. There are decisions to be made ("wait and see" is a decision) based on imperfect information. The real world tends to be like that...
(BTW Eli, the lower bound on any shape of prior doesn't really matter in practice, so long as you don't actually set it implausibly high.)
Ah, here's the article:
Convinced by their analysis, the entire CDF experiment team approved the data on 4 January and Conway presented it at a conference in Aspen, Colorado, a few days later. The team had found a signal which, in particle physics lingo, had a 2-sigma significance - a 1 in 50 chance of being a random fluctuation. Normally, to merit new particle status a signal must be significant to 5-sigma - where there's only a 1 in 10 million chance of it being a fluctuation.
Comments on the accuracy of that characterisation are welcome (especially if it's wrong).
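For what it's worth, here's a quick check of the quoted odds (assuming one-sided Gaussian tail probabilities; whether the convention used was one- or two-sided is a guess on my part):

```python
from scipy.stats import norm

# One-sided Gaussian tail probabilities for 2-sigma and 5-sigma.
for k in (2, 5):
    p = norm.sf(k)  # survival function: P(Z > k)
    print(f"{k}-sigma: p = {p:.2e}, about 1 in {1 / p:,.0f}")
```

On that convention the figures come out at about 1 in 44 and 1 in 3.5 million, so "1 in 50" and "1 in 10 million" both look a little off, though a two-sided convention would change the numbers.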
Concerning the idea that uniform priors represent ignorance: in the case of the bias of a coin flip, a uniform prior over [0,1] is the "no information" prior; one bias is as likely as any other. But you have natural upper and lower limits in the case of the bias of a coin. If you have to (or choose to) pick the limits, then the prior does not represent total ignorance.
Tom,
Even when there are natural bounds on a parameter, there are still any number of distributions that satisfy those bounds. Note that a uniform prior on P(Heads) ~ U[0,1] actually means that you think it is very unlikely that the bias is small (among other things). That's a specific belief that may well have some consequences for any rational decision you have to take - for example, it implies you would be relatively happy to bet on a fairly long run of the same side of the coin showing up in a sequence of tosses.
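To make the betting point concrete (a two-line check; the run length of 10 is arbitrary):

```python
# Probability of 10 identical tosses in a row. Under a uniform prior on
# the bias p, P = integral over [0,1] of p**10 + (1-p)**10 dp = 2/11,
# versus 2 * 0.5**10 for a coin known to be fair.
n = 10
print(2 / (n + 1))   # ~0.18 under the uniform prior on the bias
print(2 * 0.5 ** n)  # ~0.002 for a known fair coin
```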
The inability of Bayesian probability to represent the concept of "ignorance" is a well known limitation. Anyone who is going to use Bayesian probability has to deal with this. The solution is not to attempt to define "ignorance" as meaning a specific set of beliefs, especially when those beliefs are bizarre and extreme.
Homer moment over; it's not the range which matters; it's the uniformity! If you input 'ignorance' your output is going to be whatever it is that ignorance produces - could be right, could be wrong; could be clever, insightful, meaningful; but what are the odds?
So does this mean that no uniform prior can give a reliably meaningful result, other than in theoretical statistics?
If this is the case, then some kind of preliminary processing must be required; in this case, something which attributes a probability 'factor' before the event. Is this right?
Fergus,
It may be a bit of a trap to think of Bayesian probabilities as having the potential to be meaningful or insightful. Fundamentally, they represent the opinion of the researcher (assuming he has done his sums honestly and reasonably). The probabilities are no more insightful than he is! I suppose the process of exploring the relationship between priors and posteriors may lead to insights...
It's easy to construct situations in which uniform priors can give reasonable results. But they don't have a special role in representing reality, and I can't reconcile their properties with the term "ignorance" as it is usually used in the English language.
Perhaps the next step will be for someone to explicitly state "We define ignorance to be the uniform prior over [0,20]" :-)
True that the "no information" prior can be useless, since our prior state is usually not pure ignorance.
Do you know about conjugate priors? They can be useful:
http://en.wikipedia.org/wiki/Conjugate_prior
Useful if you are willing to model your ignorance as being due to too little of the same type of data that you are adding to the knowledge base.
But conjugate priors might just point to a normal distribution, nothing new.
Re: Conjugate priors
Yes, I'm aware of them - and also other ideas like Jeffreys Prior. I don't think we really need a mathematically "neat" solution though. As soon as one abandons a broad uniform prior the details don't matter too much anyway.
If the lower bound does not matter, that says something very strong about the data: that it shows zero probability below some cut-off.
Otherwise, the relative sensitivity of the result to the upper and lower bounds is a measure of the asymmetry of the data, which is also a strong outcome.
Well, we can happily assign zero probability below a cut-off of 0, since negative sensitivity is physically unstable. The shape of the prior at low positive values will of course have some effect on the result, but whether we end up assigning a 0%, 10% or 20% probability to S < 1C (few would consider even the 2nd one reasonable) is pretty irrelevant from the POV of influencing any decisions we might take.
Do you have some pointer to a discussion of why < 0 is physically unstable? Or is this so obvious, even a small animal should be able to figure it out.
Plenty of people don't see it immediately, but it is simple enough.
Negative sensitivity means a negative radiative feedback parameter: a small warming perturbation to the temperature results in less outgoing radiation, amplifying the warming in a runaway effect. The same goes for cooling.
Think of a ball bearing on the top of a hill (a -x^2 shape) compared to one in a U-shaped valley (x^2).
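Here's a minimal sketch of that stability argument as a toy zero-dimensional energy balance (all the constants are invented for illustration):

```python
# dT/dt = -lam * T / C for a perturbation T about equilibrium.
# A positive feedback parameter lam (positive sensitivity, S ~ 1/lam)
# relaxes back to equilibrium; negative lam (negative S) runs away.
def evolve(lam, T0=0.5, C=1.0, dt=0.1, steps=100):
    T = T0
    for _ in range(steps):
        T += dt * (-lam * T) / C
    return T

print(evolve(lam=+1.0))  # decays toward zero: the valley (stable)
print(evolve(lam=-1.0))  # grows without bound: the hilltop (unstable)
```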