You've seen the film, now read the book :-)
(Oh, there's a paper size issue which cuts the header line off the pages of the pdf - that would have made it clear that this is submitted to GRL.)
I guess you could see this as a re-writing of the Comment on Frame et al as a stand-alone paper, but it's designed as more of a general comment about the whole field (at least, a large part of it) and is based largely on the presentation I gave earlier this summer. However, since (as far as I can see) Frame and Allen are the only ones actually specifically advocating uniform priors, it's hard to avoid a direct rebuttal of their claims. The increased character limit means that as well as explaining the problems with other approaches, we can present some new results.
The paper is basically complementary to our previous multiple constraints paper. That considered the effect of combining different observations, this one looks in more detail at the prior and we hope has put some final nails in the coffin of the uniform prior. We show that this approach doesn't work at all, and even if it did, the results would not actually be of any use. If we'd thought more carefully about it, perhaps we could have rolled both halves of the argument up into one paper at the outset, but it's too late for that now. The new results we show are based on the Forster and Gregory analysis. I know I said I wasn't intending to publish a paper based on this, but their analysis is particularly useful due to its independence from climate models and forcing estimates. It now seems to me that an upper 95% probability limit for climate sensitivity of about 4C is easy to justify.
I hope we manage to get some referees who do not have too much of an axe to grind in this debate. Any meaningful comment from readers here is of course also welcome.
We gave Nature the chance to reject it first, which didn't take long. Of course I knew it would be a waste of time sending it there, but I think it's only fair to give them the chance to make amends if I'm going to criticise them. Also, it's amusing to pick the bones out of their excuses. In this case, it was because there is apparently nothing new in our work - this from the same Nature that puffed up Hegerl et al as "the best guide yet" and refused to consider our comment pointing out some rather obvious limitations (effectively the same points that we discuss in our new manuscript). It seems quite clear to me that their editorial filter acts to obstruct rather than enable scientific progress.
26 comments:
Figure 2 says:
cyan: extended high tail (see text)
It is quite clear that you have extended both high and low tails, not just the high tail.
How reasonable (or otherwise) would it be to extend just the high tail, and what effect would that have?
crandles
Textual comment: you end the abstract with "very unlikely" and it's unclear what this means... wouldn't it be more consistent with the earlier statement if you wrote it as P(S>4.5)<5%?
Chris,
We could have done that, but it wouldn't have materially affected the results. I actually thought it would be more misleading to say "extended tailS" as the prior P(S<1.5C) is smaller in the cyan version. In this region the posteriors basically coincide anyway.
Belette...yes, that's probably a good idea. I guess most people understand "v unlikely" as synonymous with ~5%. There's always the revision...
> I guess most people understand
> "v unlikely" as synonymous with ~5%.
Hm, I'd have thought (not based on this paper, but just from general usage and what little I recall of statistics) that
--- "~5%" meant p<=.05 and meant the difference was just barely enough to claim you thought you had a real rather than a chance outcome,
and
--- "very unlikely" meant, oh, one in a thousand.
So, I guess numbers do help understand!
Ankh,
Context (in particular, number 7) is everything :-)
This is a comment for a broad audience rather than regular readers here (and actually part of an outline of a review article I am obliged to write)....
It seems that there is a discrepancy of concepts which prevents this controversy from being settled in the public arena. The concept of "climate sensitivity" discussed by James (as well as by his peers who are the direct targets of his criticism) is well-defined, and careful readers of his discussion probably understand it. But in the broader world the term has a related but different sense, and casual readers tend to complain that his results (or his peers') do not conform to their expectations.
(Japanese shogi is a game which probably has the same origin as chess. In shogi, I can reuse a pawn which I have captured as my own pawn; I know I may not do that in chess. [This difference makes computer shogi much more difficult to program than computer chess.] How bizarre would it be if I commented on chess players while assuming that they were playing shogi?)
Climate sensitivity, as discussed by James, is the equilibrium response of the climate system to CO2 concentration in the atmosphere. [Note to physicists: The term "equilibrium" here does not mean thermodynamic equilibrium. Thermodynamically speaking, what we discuss are non-equilibrium steady states.] It is the difference between two long-term average states of the climate system, each given a different but constant CO2 concentration. In this case, the climate system in question does not include those processes which cause changes in CO2 concentration. Also, the response of the climate system to the temporal change of CO2 concentration is excluded by the definition of the problem. Even though the climate with constant CO2 concentration can be highly variable, only the long-term average states are considered relevant.
Many scientists who want to understand the behaviour of the climate in the real world consider that the concept of climate sensitivity in this restricted sense has great value as a guide. If we assume that so-called "climate surprise" events do not occur, the evolution of the state of the climate system can be approximated by relaxation towards an ever-changing equilibrium response, with a time constant of several decades for the temperature of the upper ocean and the atmosphere, and of millennia for sea level.
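As an illustration of this relaxation picture, here is a minimal one-box energy-balance sketch in Python. All the numbers (sensitivity, heat capacity) are merely illustrative choices of mine, not results from any study:

```python
# Minimal one-box energy-balance model:  C * dT/dt = F - (F2x / S) * T
# The equilibrium response to a constant forcing F is T_eq = S * F / F2x,
# approached with time constant tau = C * S / F2x.
F2x = 3.7   # forcing for doubled CO2 (W/m^2), standard value
S   = 3.0   # assumed equilibrium sensitivity (degC per doubling), illustrative
C   = 40.0  # effective heat capacity (W yr m^-2 K^-1), chosen so tau ~ decades
lam = F2x / S                  # feedback parameter (W m^-2 K^-1)

F, dt, T = F2x, 0.1, 0.0       # step forcing: an instantaneous CO2 doubling
for _ in range(int(200 / dt)): # integrate forward for 200 years
    T += dt * (F - lam * T) / C

print(f"T(200 yr) = {T:.2f} C; equilibrium = {S * F / F2x:.2f} C; tau ~ {C / lam:.0f} yr")
```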
To properly answer the lay person's question "How sensitive is the climate?" we should account for all the nonlinear behaviour of the climate system, which may be chaotic and which may cause "climate surprises". [Note: Here the word "chaotic" is used in the modern applied-mathematics sense, and therefore does not mean complete disorder.] But the only possible answers to this are subjective ones, or ones which depend on some particular studies.
"Climate surprises" are so called because no scientist can yet give objective probability to those presumed events. Some people intuitively or precautionarily think that the climate sensitivity would be greater if "surprise" events be included in the average expectation. But this is not certain either, because some among many "surprise" effects act oppositely to CO2 forcing.
I don't think this sort of Bayesian thing is very tractable or interesting to Nature or to most scientists. I think a simple examination of the evidence and pointing out that it gives different inferences than what your opponents get, is sufficient, without bringing Bayes in.
I'm not sure how overwhelming your evidence is (got sick of weeding through the Bayes stuff), so that might also argue against a Nature submission.
My advice: strip it down to its real elements and submit to a real specialty journal. Nature is for poseurs anyhow.
How dare you bash Nature? It's the best science fiction magazine still in publication.
TCO,
> I think a simple examination of the evidence and pointing out that it gives different inferences than what your opponents get, is sufficient, without bringing Bayes in.
> I'm not sure how overwhelming your evidence is (got sick of weeding through the Bayes stuff),
In that case perhaps you missed the main point of our argument, which is not that the evidence we have presented is fundamentally different from what has gone before, but that the prior assumptions (both stated and implicit) which are commonly made in the literature are wholly unreasonable - pathological, even, although we thought it prudent to keep that word out of the paper for now :-)
My meager analytical mind interprets the paper like this:
You are trying to define the tail of a probability curve for which you have insufficient data to nail down empirically. Due to the lack of data, the starting assumption affects the outcome, so people should be a little smarter about their initial assumption.
If this is done, the high sensitivity probabilities are lower than with a dumb assumption.
BTW, can you empirically tie down the sensitivity by comparing 180ppm paleoclimate to 360ppm modern climates? That should give you a value of about 6 degrees. Or is that the sensitivity of the Pleistocene, which is necessarily higher than the present due to larger ice-related feedback?
Hi James,
Interesting paper. Just a point of clarification: you say "Many observationally-based estimates of climate sensitivity (S) have been presented in recent years, with most of them assigning significant probability to extremely high sensitivity, such as P(S > 6C) > 5%. However, closer examination reveals that these estimates are based on a number of implausible implicit assumptions. We explain why these estimates cannot be considered credible and therefore have no place in the decision-making process."
We agree that these aren't policy relevant estimates of S, as we said in the final paragraph of our conclusions in Frame et al., [2005]: "suggesting traditional heuristic ranges of uncertainty in S [IPCC, 2001], may have greater relevance to medium-term policy issues than recent more formal estimates based on explicit uniform prior distributions in either S or lambda. This is not to suggest that formal estimates of uncertainty are unnecessary, but rather that their applicability in practical forecasting has been limited by [Bertrand's paradox]."
Just want to make that clear. [However, other authors using uniform priors have implied their estimates of S are policy relevant, so this criticism may be relevant to them.] Last post on this for me.
LL,
Yes, note that the prior assumption always affects the outcome (there is no "null prior"), so the question boils down to what range of priors we consider "reasonable", which is a necessarily subjective decision. If the results were so insensitive that even a uniform prior happened to give similar answers to more realistic assessments, that would be convenient, but since that isn't the case (at least when considering only a small amount of data), we have to consider the choice more carefully. No amount of rhetoric can overturn the plain fact that a uniform prior is not truly "ignorant" but actually makes very specific claims about S - in particular, it assigns a large probability to extremely alarming values.
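To make that concrete, here's a quick numerical sketch of how the prior carries through Bayes' rule on a grid. The likelihood here is an invented Gaussian for illustration only - it is not the Forster and Gregory constraint or anything from the paper:

```python
import numpy as np
from scipy.stats import norm

S = np.linspace(0.0, 20.0, 2001)   # grid of sensitivity values (degC)
dS = S[1] - S[0]

# The "ignorant" uniform prior on [0, 20] already takes a strong position:
uniform = np.full_like(S, 1 / 20.0)
print("uniform prior: P(S>6) =", round(uniform[S > 6].sum() * dS, 2))  # ~0.70

# A hypothetical moderate prior centred on the traditional range (illustrative):
expert = norm.pdf(S, loc=3.0, scale=1.5)
expert /= expert.sum() * dS

# An invented Gaussian likelihood standing in for one observational constraint:
like = norm.pdf(S, loc=3.0, scale=2.0)

for name, prior in (("uniform", uniform), ("expert", expert)):
    post = prior * like            # Bayes: p(S|D) is proportional to p(D|S) p(S)
    post /= post.sum() * dS
    print(f"{name} posterior: P(S>6) = {post[S > 6].sum() * dS:.3f}")
```

With one weakish constraint, the uniform prior leaves roughly an order of magnitude more posterior probability above 6C than the moderate prior does - the data alone don't settle it.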
The paleoclimate argument is basically the point of papers such as this, and this (and many others), and also forms part of the argument of this one. Once you account for the ice sheets, you get something like S~2.5C, but it's fair to say there is a debate as to how wide the uncertainty is on that central estimate. I think most people would agree that it adds support to the mainstream view of a moderate (but non-negligible) value for S.
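On the arithmetic: since forcing is logarithmic in concentration, 180 to 360 ppm is exactly one doubling, so the naive estimate is just the glacial cooling itself, and everything hinges on whether the ice-sheet (and dust) forcing is counted in the denominator. A rough sketch, with purely illustrative numbers:

```python
import math

dT    = 5.5                          # assumed glacial-interglacial cooling (degC), illustrative
F2x   = 5.35 * math.log(2)           # forcing per CO2 doubling (Myhre et al.), ~3.7 W/m^2
F_co2 = 5.35 * math.log(360 / 180)   # CO2-only forcing for 180 -> 360 ppm (equals F2x)
F_ice = 4.5                          # assumed ice-sheet albedo + dust forcing (W/m^2), illustrative

S_naive = dT * F2x / F_co2           # ignore the ice sheets: ~5.5 C per doubling
S_full  = dT * F2x / (F_co2 + F_ice) # include them: ~2.5 C per doubling
print(f"naive S ~ {S_naive:.1f} C; with ice-sheet forcing S ~ {S_full:.1f} C")
```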
Dave,
Thanks for commenting. Your comments about policy-relevant probabilities seem to rest substantially on your opinions as to what sort of policies you think people ought to be talking about, rather than anything to do with the climate system. Regardless, you have yet to convince me that your results are deserving of the term "probability", or to provide any reason why anyone should actually consider them policy relevant at all. Once the rhetoric is brushed aside, it is clear that there is no real foundation to what you have done. Basically, you have thrown away the standard axioms and interpretation of Bayesian probability, and you have not explained what you have put in their place. If you are not going to attempt to conform to the standard axioms, then how do you justify the calculations which you do undertake? If you don't believe your results, why should anyone else?
James wrote: "Once the rhetoric is brushed aside, it is clear that there is no real foundation to what you have done."
I think this is completely unfair. You can see the distributions in F05 as either likelihoods with uniform sampling across the forecast variable, or as probabilities constructed with uniform priors, no more or less arbitrary than Chris Forest's uniform priors, Andronova and Schlesinger, Knutti et al 02, etc (see below). If you don't accept uniform priors as adequate representations of ignorance, fine, but then this whole debate turns on just how hard it is to find an acceptable prior. We suggested a useful reference prior, but you think it's alarmist because it places too much weight in the parts of the distribution you don't like. Fine. But to me, expert priors that have been formed in the last, say, twenty years suffer from the problem of not being truly independent of the data they are being used with. In information terms, you think our forecasts are underconfident (you claim we can rule out more than we do); we think yours are overconfident (we think you spuriously rule out high S).
"Basically, you have thrown away the standard axioms and interpretation of Bayesian probability, and you have not explained what you have put in their place."
Well I'm happy to say that Dutch Book arguments are of limited real world value, if that's what you mean. They are important to statisticians, sure, but the value of coherence in the presence of error is something of an open question: why insist on all your beliefs cohering when you know plenty of them are wrong (as we do in the climate case)? If one of my fundamental beliefs is that some religious text is literally true and perfect, then I'm going to cohere around the propositions in that. It may be possible to have a fully coherent set of beliefs. These may bear no relation to reality.
Sigh. We've been around this loop several times, and it's pretty clear we disagree. You may disagree with our method, and it's your prerogative to disagree and publish your own work, but it does bother me that you seem to be trying to paint our group - and me in particular - as some sort of incompetent alarmist(s), while tactfully ignoring or downplaying more extreme claims in higher profile work. Lest your readers get the mistaken impression that what we have done is anything other than mainstream, here are some quotes lifted from two papers which were far more high profile than F05:
Andronova and Schlesinger: "Here we use a simple climate/ocean model, the observed near-surface temperature record, and a bootstrap technique to objectively estimate the probability density function for ΔT2xCO2."
Knutti et al 2002: "The combination of ensemble simulations that take into account uncertainties in input and model parameters with the use of observational evidence as an independent constraint provides a powerful approach for an objective uncertainty assessment in global warming projections."
Both of these used uniform priors, and both made much stronger claims to objectivity than we did. We suggested that uniform priors were useful in answering a single question: "what does this data tell me about X assuming no prior knowledge of X?" Our paper brought to the attention of the climate community the presence of Bertrand's paradox in ensemble climate forecasts, and suggested that uniform priors were a useful way of answering the question above. We also pointed out that forecasts of TCR (and variants on it) were less sensitive to choice of prior. They're also more policy relevant for most real world planning purposes. After all, it's the TCR we'll actually live through. This was what we thought our paper was doing. As I've said before, you've given it a far stronger reading than we intended, and we think this was a misreading. It's obviously the reading that suggested itself to you (and one or two others, though not most readers), so the paper failed in its aim of communicating its point clearly enough. If I was writing it now, I would write it more carefully, and I wouldn't let certain co-authors slip the word "objective" in at the last minute. I would also delete the phrase "solution, in this context" and replace it with some alternative which suggests more explicitly that this is a useful way of answering the question above, rather than any sort of "General Solution" in a grand sense. [It's pretty obvious we didn't think we'd solved Bertrand's paradox - if we had, we would have published the "solution" somewhere other than the specialist climate literature!]
Sigh. I'm pretty tired of all this. It all seems a bit OTT: you don't agree with our method of addressing this well-known statistical paradox regarding an empirically dubious quantity (equilibrium climate sensitivity) of borderline 21st century relevance, and we don't agree with yours. Great. I've done a pretty good job of not reading climate blogs in the last three or four months, and while I hope such a long post from my side of the fence might be of some use to your readers, I think I'll go back to spending way more time on SI.com than I do on climate blogs. As regards your paper, good luck with it. While I disagree with your reading of F05, I have no axe to grind with you presenting your methods and arguments clearly to the climate community.
How much data do you need before the prior becomes insignificant? More to the point, what experimental outcome is needed to distinguish between your and Dave's long tails? (as opposed to tall tales)
Japan's into big science; surely you can convince them to build a few billion Earths to give you some hard experimental results with good counting statistics for all possible outcomes...
LL,
It depends on "insignificant", but IMO we are already pretty much there, especially given the substantial uncertainties that arise in other aspects of the problem (like, what the regional/local effects will be, and how effective policy decisions will be in terms of actually affecting emissions anyway).
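Schematically, the effect looks like this (the Gaussian constraints below are invented for illustration and have nothing to do with the real data): with a single weak constraint the prior dominates the tail, while after a few independent constraints the tail probability collapses under either prior:

```python
import numpy as np
from scipy.stats import norm

S = np.linspace(0.0, 20.0, 2001)
dS = S[1] - S[0]
priors = {"uniform": np.full_like(S, 1 / 20.0),
          "expert":  norm.pdf(S, 3.0, 1.5)}    # illustrative moderate prior

# Invented independent constraints (made-up Gaussian likelihoods):
constraints = [norm.pdf(S, 2.5, 2.5), norm.pdf(S, 3.0, 2.0), norm.pdf(S, 2.8, 1.5)]

for n in (1, 3):
    for name, prior in priors.items():
        post = prior.copy()
        for like in constraints[:n]:
            post = post * like      # multiply in each independent constraint
        post /= post.sum() * dS     # normalise the posterior
        print(f"{n} constraint(s), {name:7s} prior: P(S>6) = {post[S > 6].sum() * dS:.4f}")
```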
Dave,
I'm not sure if your appeal to consensus is intended to support your claims, or instead to argue that you were misled by others. Clearly, neither of those other papers addresses issues surrounding the choice of prior. And FWIW, Schlesinger denied that he had used a prior at all at the workshop this summer, after you left [1]. There's a point at which attempting rational debate is pretty futile! But clearly you have thought about this problem a lot, and once you abandon the untenable claim that there is such a thing as a truly "ignorant" prior, I think you'll find the rest falls into place pretty quickly.
I'm puzzled by your comments about coherence vs beliefs being "wrong". Sorry if this is egg-sucking stuff (other readers might find it useful), but it is important to realise there is no such thing as the "correct" probability. Bayesian probability is simply about creating a rational basis for decision making, by correctly linking the prior to the posterior via new information. The link to reality is improved by updating your beliefs in the light of new information, but that doesn't mean you can claim to have the "correct" probabilities at any point. The suggestion that there is a "correct" probability at all is merely a disguised assertion that there exists a "correct" prior, since the latter is a precondition for the former to exist. If Bertrand's Paradox tells you anything, it should be that there is no such thing as an ignorant prior!
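For bystanders, the machinery under discussion is nothing more exotic than Bayes' theorem,

$$ p(S \mid D) \;=\; \frac{p(D \mid S)\, p(S)}{\int p(D \mid S')\, p(S')\, \mathrm{d}S'}, $$

and the point is simply that the posterior on the left cannot be computed without committing to some prior p(S) on the right.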
Nevertheless, your statement that our prior unreasonably rules out high S seems a sensible basis for meaningful discussion. There are numerous ways of characterising it, but I reckon that P(1.5 < S < 4.5) and P(S>6) are a useful shorthand. Our (rather arbitrary) "extended tail" prior has
P(1.5 < S < 4.5)=57%, P(S>6)=15%
and attempts to account for the state of knowledge about 20 years ago.
Your prior has
P(1.5 < S < 4.5)=15%, P(S>6)=70%
and claims to represent ignorance. However, writing with Hegerl in Nature, you switch to a prior which has
P(1.5 < S < 4.5)=30%, P(S>6)=40%
although there is no discussion or defence of this change, which clearly influences the results. In fact, given this range of choices you have made, IMO ours doesn't seem so markedly different, and Hegerl et al even rule out S>10 whereas ours assigns 5% probability to that outcome. So who exactly is spuriously ruling out high S a priori here??
I'd be genuinely interested in any meaningful defence of either one of your two choices. In case it's not clear, I don't think that there is any realistic way in which one can describe them both as ignorant - it is self-evident that they are making specific (and decidedly different) claims about S, which will directly feed through into some decision making processes. Even if you say you don't care about stabilisation scenarios, clearly some people do [2] :-)
In order to disagree substantively with our latest analysis, it seems to me that you'd have to argue either that Charney and the IPCC etc were wildly over-optimistic in their estimates (our prior already exaggerates their uncertainty), or extraordinarily prescient in anticipating the observations from a satellite that had not even been built, still less launched, at that time. Could have saved the cost of that, I guess! And we could have added Pinatubo and El Chichon on top for yet more evidence of a modest value for S, which only turned up after Charney. They obviously bolster the defence against any accusation of over-confidence in our results. Sure, natural variability could have rapidly obliterated (rather than adding to) the volcanically-forced cooling, but happening twice in a row is at the very most a 1 in 4 chance (probably much lower in a proper calculation), which therefore downweights any high tail for S by a substantial factor.
Nevertheless, I'd be interested to see any plausible estimate of climate sensitivity based on all the evidence that is available. I look forward to seeing how people address the issues we have raised, in their future papers.
So you feel unfairly picked on. OK, I can see your point of view on that. But as I've already mentioned, you are the one who was apparently offering advice on priors, the others were merely making careless and common mistakes, and you can be sure that if we had criticised them, they would have pointed to your paper as support (peer-reviewed, with eminent author list and all). [Indeed note that the Comment on Hegerl et al was rejected basically on the grounds that everyone else does it the same way.] It's disappointing that you have spent 6 months trying to prevent an open discussion of these issues in the literature.
James
[1] Note to innocent bystanders: the existence of the prior is both a consequence and requirement of the probability axioms - one can be ignorant of the choice implicitly made, but one cannot wish it away!
[2] Another note to bystanders - in case it's not clear, check the author list.
James, thanks. As always, helpful.
"If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone...."
http://www.analects-ink.com/mission/Confucius_Rectification.html
Are these fn7 definitions standard usage for climatology or statistics, that people in the field would assume are understood by one another (or even by the press or bystanders)?
---
"7 In this Summary for Policymakers and in the Technical Summary, the following words have been used where appropriate to indicate judgmental estimates of confidence: virtually certain (greater than 99% chance that a result is true); very likely (90-99% chance); likely (66-90% chance); medium likelihood (33-66% chance); unlikely (10-33% chance); very unlikely (1-10% chance); exceptionally unlikely (less than 1% chance). The reader is referred to individual chapters for more details."
Ankh,
I think it's fair to say that these terms are ubiquitous in climate science, but probably not elsewhere. AIUI there was a lot of discussion about the presentation of uncertainty prior to (and presumably during the writing of) the TAR. This paper by Moss and Schneider gives some more details. The likely, v likely and virtually certain descriptors conveniently include the cases of 1, 2, and 3 sigma of a gaussian respectively (in both one- and two-sided versions), but there is quite a difference between 1.1% and 9.9%, or 66% and 90%!
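To spell out that correspondence (the mapping is just the TAR table quoted above; the code merely evaluates the Gaussian tail areas):

```python
from scipy.stats import norm

# One- and two-sided Gaussian probabilities at 1, 2, 3 sigma, set against
# the IPCC TAR descriptors quoted in the earlier comment.
for k, term in [(1, "likely (66-90%)"),
                (2, "very likely (90-99%)"),
                (3, "virtually certain (>99%)")]:
    two_sided = norm.cdf(k) - norm.cdf(-k)   # P(|X| < k sigma)
    one_sided = norm.cdf(k)                  # P(X < k sigma)
    print(f"{k} sigma: two-sided {two_sided:.1%}, one-sided {one_sided:.1%} -> {term}")
```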
James,
1. Please, please have more of a story than selection of the prior. That is just so boring and philosophical.
2. Nature is still for poseurs.
3. I love your betting comments and looking to the markets for insight. Maybe you and Dick Cheney can get together and start the futures market for disasters. (joke, that was discussed and nixed as we didn't want people buying the options and then having an incentive to make the events happen). BTW, here is something that may intrigue you: http://www.climateaudit.org/?p=825#comment-48681
Dave Frame:
How about that hegerl paper with the messed up confidence intervals? You are a co-author. What is the status of correcting the confidence intervals that cross from upper to lower!?
Excuse me for joking (with a true story) first.
I am a disastrous player of shogi or chess. I know the formal rules of shogi and of chess, but I do not know at all what good strategies are in either game. (I once tried to learn them, but they did not stick in my brain at all.) Therefore I just choose a random move among the formally possible ones. I cannot beat experienced players, but my behaviour irritates them. Certainly their prior is not uniform, but mine is. (Remember, I happen to have no sense in betting either.)
---
By the way, what do you think about this?
This quotation is the first two paragraphs from an essay by A. Barrie Pittock, published in Eos (the newsletter of AGU) this August. (Its reference list is available free at http://www.agu.org/eos_elec/climatechange_refs.html .)
Dr. Pittock certainly knows that climate sensitivity and the projected temperature rise by 2100 are two different things, but he considers that upward revision of the former implies upward revision of the latter. I guess that he did not examine the methodologies used to evaluate climate sensitivity but just looked at the conclusions of papers. It seems important to have someone who knows the methods well write a review article ....
-----Quote
Eos, Vol. 87, No. 34, 22 August 2006, page 340 (Forum)
Are Scientists Underestimating Climate Change?
The consensus view of climate scientists, as represented by the 2001 Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report, is that the enhanced greenhouse effect likely will lead to global average surface warming by 2100 of between 1.4°C and 5.8°C, and global sea level rise of between 9 and 88 centimeters. This assumes the climate sensitivity is in the range 1.5°C-4.5°C for an equilibrium doubling of preindustrial carbon dioxide concentrations, and the Special Report on Emissions Scenarios (SRES) range of emissions scenarios [IPCC, 2000]. However, recent developments suggest that this dated IPCC view might underestimate the upper end of the range of possibilities and shift the probabilities toward an increasing risk of greater warmings and sea level rises by 2100.
Recent estimates of the climate sensitivity, based on modeling, in some cases constrained by recent or paleoclimatic data, suggest a higher range, around 2-6°C [Annan and Hargreaves, 2006; Forster and Gregory, 2006; Hegerl et al., 2006; Murphy et al., 2004; Piani et al., 2005; Stainforth et al., 2005; Torn and Harte, 2006]. These estimates throw doubt on the low end of the IPCC [2001] range and suggest a much higher probability of warmings by 2100 exceeding the midlevel estimate of 3.0°C.
-----End Quote
The rest of the essay suggests that Pittock is now a convinced alarmist. He says, "The object of policy-relevant advice must be to avoid unacceptable outcomes, not to determine the most likely outcome." It is a pity that we may have lost the well-balanced reviewer I saw in his book "Climate Change" (2005, CSIRO Publishing and Earthscan). Nevertheless I guess that he may accept a moderate equilibrium response, though he will maintain his outlook of a high temperature rise by 2100 through a logic involving transient processes.
Masuda-san,
Even random moves will beat an expert occasionally - but perhaps not within the lifetime of the universe :-)
Thanks for the Pittock quote, which appears to be a direct and blatant misrepresentation of our paper. He could have ignored our work, or said that the "consensus" disagrees with us (no doubt true), but pretending that it supports his point of view is pretty far-fetched. I'll email him to see what excuse he comes up with.
I certainly expect that sensitivity is close to 2.5C, and it is also clear that the more extreme SRES scenarios are fairly implausible. Therefore, a 2100 temperature rise of not much more than 2C should be quite easily achievable without great sacrifices. (I still think it would be sensible to try to increase the downward pressure on emissions.)
One big problem with designing policy for the extreme cases is that our opinions about the probability of extreme cases are likely to change rapidly. It is hard to agree what probability is the suitable threshold, and people are notoriously poor at estimating probabilities of extreme events, even when using formal methods!
James, interestingly, in a post in which he tries to use recent research on water feedback to downplay sensitivity, Chip Knappenberger refers to your post here when I ask him to comment on what Hansen said last year:
-“Paleoclimate data show that climate sensitivity is ~3°C for doubled CO2, including only fast feedback processes. Equilibrium sensitivity, including slower surface albedo feedbacks, is ~6°C for doubled CO2 for the range of climate states between glacial conditions and icefree Antarctica.”
- "Paleoclimate data and ongoing global changes indicate that ‘slow’ climate feedback processes not included in most climate models, such as ice sheet disintegration, vegetation migration, and GHG release from soils, tundra or ocean sediments, may begin to come into play on time scales as short as centuries or less."
Are you focussed on the sensitivity estimates that are included in the models, and not the longer-term, slow processes that Hansen raises?
Regards,
Tom
Tom, I've been focussing on the "fast feedback" definition, which basically includes ocean, atmosphere and sea ice changes (and all the work I have criticised is also using this definition). Hansen appeals to changes in the carbon cycle and large ice sheets to cause amplification but this is pretty speculative - and it's not just me, everyone I've spoken to is sceptical.
James,
Thanks for your site.
This reminds me of Jaynes' argument that a uniform prior actually implies a lot of knowledge (the examples in his book with Bernoulli trials, introducing the Jeffreys prior). What do you think about the usefulness of priors generated from a maximum entropy principle? How would you go about doing that in this context (identifying the constraints, etc)?
I'm not an expert, but I think the maximum entropy prior just conceals the choice of prior via a "reference density" which is completely arbitrary.
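To unpack that a little: maximising entropy relative to a reference density m(x), subject to moment constraints of the form ∫ p(x) f_i(x) dx = F_i, gives

$$ p(x) \;\propto\; m(x)\, \exp\!\Big(\sum_i \lambda_i f_i(x)\Big), $$

so with no constraints at all the "maximum entropy" prior is just m itself. The choice of m (uniform in S versus uniform in 1/S, say) is precisely the choice that Bertrand's paradox says we cannot dodge.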