Stoat posits a "challenge to JA", based on a paper by Michel Crucifix (MC). There are some slightly subtle points which require a lengthy response to do them justice, so I'll post this here rather than as a comment.
MC looks at a total of 4 models which were integrated under both LGM (Last Glacial Maximum, ~20,000 years before present) and 2xCO2 conditions, and finds little relationship between the two sets of results. (He does, actually, find a very good relationship between the Antarctic cooling at the LGM and the global warming at 2xCO2 - but as he has just pointed out to me, it's an inverse relationship!) He argues on this basis that it is inappropriate to simply scale the global LGM temperature change by the ratio of 2xCO2/LGM forcings to get a value for climate sensitivity.
To be honest, at first glance I thought that it was a bit of a straw-man argument, as surely no-one is seriously suggesting that one could do such a thing. However, James Hansen (eg here which refs to here) and some others have indeed presented pretty much this argument, so in that context MC's comments seem justified. In our GRL paper, we explicitly discussed the uncertainty in the LGM/2xCO2 relationship (which we had already shown to exist here) and attempted to account for this with a dollop of additional uncertainty on top of the simple forcing calculation. No doubt there is room for debate on the details of what we did, but we were hardly blazing a speculative trail here - Myles Allen has presented a vaguely similar analysis of the LGM on p42 of this presentation, for example, and there's a similar discussion on Ch29 of the "Avoiding dangerous climate change" book, as well as the cited Hansen work etc.
The main point behind our GRL paper was not to analyse the LGM, but to point out the fallacious nature of the (implied) arguments underlying many of the published climate sensitivity estimates. IMO these are based on what amounts to rather misleading wordplay rather than a valid calculation. The argument goes roughly as follows:
If we analyse event X (and use it to update a so-called "objective" or "ignorant" uniform prior), we end up with a broad posterior pdf for sensitivity with wide bounds -> X does not provide a "useful" constraint -> we can ignore event X completely in any further calculations to estimate climate sensitivity.
The fallacy is that between those two arrows, the term "useful" has changed its meaning from "providing a tight bound on its own in conjunction with a uniform prior" to "useful at all in conjunction with other data". This erroneous argument has been variously used for both the LGM state and short-term cooling after volcanic eruptions (and possibly elsewhere). But in order for event X to be truly useless, it would have to be the case that the likelihoods P(X|S=1C), P(X|S=3C), P(X|S=6C) and P(X|S=10C) (etc) are actually all equal, and no-one has actually made this (IMO) extraordinary claim!
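To make the fallacy concrete, here is a toy numerical sketch (in Python; the grid of sensitivity values and all the likelihood numbers are invented purely for illustration, and are not taken from any paper). Event X alone yields a broad posterior under a uniform prior, yet because its likelihoods are not all equal, it still changes the answer when combined with other evidence:

```python
# Toy Bayesian update on a coarse grid of sensitivity values (deg C).
# All numbers below are hypothetical, chosen only to illustrate the point.
S = [1, 3, 6, 10]
prior = [0.25, 0.25, 0.25, 0.25]   # the uniform "ignorant" prior

def update(prior, likelihood):
    """Multiply prior by likelihood and renormalise (Bayes' rule)."""
    post = [p * l for p, l in zip(prior, likelihood)]
    total = sum(post)
    return [p / total for p in post]

# Hypothetical likelihoods P(X|S) for event X: broad, but NOT all equal.
lik_X = [0.5, 1.0, 0.7, 0.4]
post_X = update(prior, lik_X)       # broad on its own - a "weak" constraint

# Combine with a second, independent line of evidence Y (also hypothetical):
lik_Y = [0.3, 1.0, 0.3, 0.05]
post_XY = update(post_X, lik_Y)     # using both X and Y
post_Y_only = update(prior, lik_Y)  # what we'd get if we "ignored" X

print([round(p, 2) for p in post_XY])      # → [0.11, 0.72, 0.15, 0.01]
print([round(p, 2) for p in post_Y_only])  # → [0.18, 0.61, 0.18, 0.03]
```

Ignoring X roughly doubles the weight on the highest sensitivity value in this toy setup (0.03 vs 0.01), so the supposedly "useless" constraint was still doing real work in the tail.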
An unfortunate limitation of MC's work - not in any way his own fault - is that there were only 4 coupled models available with both LGM and 2xCO2 integrations at the time of his investigation, and they only covered a fairly narrow range of sensitivity, which gives little chance for a significant result to emerge (any correlation of less than 0.95 would not have been significant at the 5% threshold). I suspect that he would have found stronger results if he'd had a larger sample of models encompassing a wider range of sensitivities (although I'm sure there would still have been uncertainty around any correlation). The Hadley Centre and/or climateprediction.net have been promising for some years now to run some LGM simulations with their ensembles. Until they or others actually get round to it, we are pretty much twiddling our thumbs, but here is a more optimistic look at things, and there is also a recently-submitted manuscript on jules' work page. IMO the real debate is not the binary yes/no question "Does the LGM constrain climate sensitivity?", but rather "What evidence does the LGM provide relating to climate sensitivity (and, more generally, other future climate changes), and how best can we use it?" If anyone wants to argue that the answer is "absolutely nothing whatsoever" then they are welcome to try, but I think they will find themselves well out on the scientific fringes.
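As a check on the 0.95 figure quoted above: with only n = 4 models, a standard Pearson correlation significance test has n - 2 = 2 degrees of freedom, and the t distribution with 2 degrees of freedom happens to have a closed-form CDF, so the critical correlation can be computed exactly (standard library only; the function name is mine):

```python
import math

def r_critical_n4(alpha=0.05):
    """Critical |r| for a two-tailed Pearson test with n = 4 (df = 2).

    Uses the closed-form CDF of the t distribution with 2 degrees of
    freedom, F(t) = 1/2 + t / (2*sqrt(2 + t**2)), together with the
    test statistic t = r*sqrt(df) / sqrt(1 - r**2).
    """
    c = 2 * ((1 - alpha / 2) - 0.5)      # = t / sqrt(2 + t**2) at the quantile
    t = c * math.sqrt(2 / (1 - c * c))   # invert the CDF for t
    df = 2
    return t / math.sqrt(df + t * t)     # convert t back to r

print(round(r_critical_n4(), 3))  # → 0.95
```

In fact the algebra collapses to r_crit = 1 - alpha when df = 2, so any correlation below 0.95 really is non-significant at the 5% level with four models.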
Two more side-notes:
Firstly, our recent manuscript, which revisits the question of an "ignorant" or "objective" prior, does not use the LGM at all (except inasmuch as it influenced the Charney report, which I guess is not very much).
And secondly, I see that Myles Allen was quite happy to describe the work of several climate scientists as "wrong" in the presentation I linked to above. So those who accuse me of libel in my criticism of others could perhaps benefit from a sense of proportion.
[quote]The main point behind our GRL paper was not to analyse the LGM, but to point out the fallacious nature of the (implied) arguments underlying many of the published climate sensitivity estimates. IMO these are based on what amounts to rather misleading wordplay rather than a valid calculation. The argument goes roughly as follows:
If we analyse event X (and use it to update a so-called "objective" or "ignorant" uniform prior), we end up with a broad posterior pdf for sensitivity with wide bounds -> X does not provide a "useful" constraint -> we can ignore event X completely in any further calculations to estimate climate sensitivity.
The fallacy is that between those two arrows, the term "useful" has changed its meaning from "providing a tight bound on its own in conjunction with a uniform prior" to "useful at all in conjunction with other data".
[/quote]
Implied arguments presumably means that it is not explicitly stated. I wonder if it is a case of you incorrectly inferring it.
It seems quite plausible to me that other scientists are carrying out the analysis of event X to update an arbitrary prior in order to gain a sense of which events are the more important constraints.
You are clearly interested in how you combine all the pieces of evidence. It doesn't follow that other scientists are ready to tackle this yet. So it seems to me that you are assuming other scientists are trying to do the same as you, and this leads you to say silly(?) things like 'implied arguments are fallacious'. All it really means is that other scientists are addressing different questions.
Putting all the evidence together does seem important to me but maybe you are out on a limb in attempting to do it so soon?
crandles
I didn't really mean to open up this particular bit of the debate again, but still...
It seems quite plausible to me that other scientists are carrying out the analysis of event X to update an arbitrary prior in order to gain a sense of which events are the more important constraints.
Chris,
that may be how some of them wish to reinterpret their contributions in the light of our comments :-) but statements such as:
... the 90% confidence interval for ΔT2x is 1.0°C to 9.3°C. Consequently, there is a 54% likelihood that ΔT2x lies outside the IPCC range.
are as unequivocal as it is possible to be. Those authors (Andronova and Schlesinger) were explicitly presenting their results as an estimate of climate sensitivity, not just an academic investigation as to what one might think sensitivity was if one only had the particular data set that they used.
However, it's quite clear that there is a lot of confusion about the status and interpretation of the various "pdfs" that have been produced. I hope that our papers (and even blog posts) have helped to clarify...
So where do your holy priors come from, if not the LGM? Do you just make 'em up?
Anon,
As you would know if you had read the paper, we referred to NAS 1979, Morgan and Keith 1995 and Sokolov and Webster 2002. The first of these was (I understand) based largely on the limited model results available at the time, which have remained broadly unchanged in the 3 decades since.
Nevertheless, we didn't take them at their word, but substantially extended the RH tail in order to demonstrate the robustness of our analysis - unlike Hegerl et al, we didn't want to rule out S>10 a priori :-)
Now, if anyone wants to argue that "S is likely (P=70%) greater than 6C" would have been a more plausible representation of the state of play even in 1979, let alone at the time of the IPCC TAR in 2001, then they are welcome to try. I only ask that they do so in an open and honest manner, rather than hiding behind terms such as "objective", "ignorant" or "unbiased" prior. As we have explained in tedious repetitive detail, these terms are at best rhetorical tricks, more likely plain wrong.
I think anyone who tries to claim that Charney (let alone the TAR) should have said that S is likely greater than 6 would rightly be regarded as a bit of a nutter. Even so, updating this with Forster and Gregory gets them to P(S>6)=~15%, and once we've had another volcanic eruption (with the accompanying short-term cooling characteristic of moderate sensitivity) maybe they will be able to rejoin the ranks of the sane again.
Talk about strawman arguments! Nobody has ever said or implied (other than an excitable Greenpeace website article) that the probability of a high sensitivity is greater than 70% or 15%.
Your "faith-based initiative" of chopping off high sensitivities has pretty much been, well, "crucified" shall we say. Your zeal to be cock-sure about the tight range of sensitivities seems to have no supporters, yet you whinge about the same few people it seems. I mean the authors are always "Annan & Hargreaves" -- you mean Jonty Rougier or Hans von Storch won't go out on a limb with you (of course the Oxford gang won't! :-)
Anon,
A uniform prior on [0,20] implies P(S>6)=70%.
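The arithmetic behind that figure, as a minimal sketch (the helper function and its name are mine, purely illustrative):

```python
def p_tail_uniform(threshold, upper, lower=0.0):
    """P(S > threshold) under a uniform prior on [lower, upper]."""
    return (upper - threshold) / (upper - lower)

print(p_tail_uniform(6, 20))   # → 0.7: uniform on [0,20] puts 70% above 6C
print(p_tail_uniform(10, 20))  # → 0.5: and fully 50% above 10C
```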
As for "chopping off high sensitivities", that is surely Hegerl et al with their prior P(S>10)=0, not us with our P(S>10)=5%. I guess I should find space to work that detail into the paper. It is the data that chops the high values off, assuming you don't assert an absurdly high level of belief in them a priori.
Sorry to spoil your rant with some facts. HTH HAND
You're sort of like a bad Republican politician, with your strawman readings of what constitutes a high sensitivity, and a low probability.
I have seen nothing in the literature of anybody claiming a high sensitivity (say >6K) of more than a few percent. At the risk of you whingeing about my IP address again -- I think Crucifix has demonstrably shown that such die-hard bounds or constraints that you & your wife espouse are not based on sound data. Which (I imagine among other personal reasons), is why you get no public support for your quixotic vendettas.
And M. Allen nowhere near descends to the depths that you do in snarky comments etc. Presumably his strong publication record in peer-reviewed literature is enough to sustain his career and he needn't stoop to the level of being a snippy blogger! If the JAMSTEC higher-ups knew enough English to read your blog (or cared enough about your "science" to do so), I wonder what they would think?
I have seen nothing in the literature of anybody claiming a high sensitivity (say >6K) of more than a few percent.
Shame your education didn't stretch to reading Andronova and Schlesinger, which I quoted above. But anyway, the debate here is more to do with what can be used as a credible prior, not the posterior (which inevitably has a markedly lower probability of high S than the prior does).
We'll find out what public support we get once more scientists have had the opportunity to see and digest our arguments. We've certainly had some encouragement from those who have seen it so far. I'm happy to put the ideas out there and see if they fly rather than issue pompous claims about what "the community" is ready to accept.
I don't think JAMSTEC has a policy on whether we are only allowed to do Democrat science :-)