So Judith is going on about probability and uncertainty again, this time on the back of our "new" paper (which, as you can see from its title page, was actually submitted in 2008, and previous versions of which date back a lot further). I suppose this is further evidence that the dead tree version actually means something to a lot of people. As jules suggests, I may have to recalibrate my opinion about the benefits of being talked about :-)
Judith doesn't seem to like Bayesian probability. Well, that's her opinion, and it does not appear to be shared by the majority. To be clear, I don't object at all to people trying more esoteric approaches. Indeed we were quite explicit in sidestepping this debate in the paper, which does not attempt to argue that the Bayesian way is the only way. What I do object to is people throwing away the Bayesian principle on the basis of inadequate analyses. If it is to be shown inadequate, let that at least be on the basis of decent attempts.
Perhaps a useful way to think about a Bayesian analysis is that rather than magically providing us with (probabilistic) answers, it is merely a rational process to convert initial assumptions into posterior judgements: thus establishing that the posterior is only as credible as the inputs. One obvious way to test the robustness of the posterior is to try different inputs, and (subject to space constraints and the whims of reviewers) we tried to be pretty thorough in both this paper and the earlier "multiple constraints" one in GRL. People often think this just means trying different priors, but other components of the calculation are also uncertain and subjective. I've also tried to be as explicit as possible in encouraging others to present alternative calculations, rather than either blindly accepting or rejecting our own. I'm aware of a couple of reasonably current observationally-based analyses from people who were certainly aware of our arguments, and they generated estimates for the equilibrium sensitivity of ~1.9-4.7C (Forest/Sokolov group) and ~1.5-3.5C (Urban and Keller, Tellus 2010). (I read those values off their graphs; they were not explicitly presented.) Like I said, it is going to be interesting to see how the IPCC handles this issue, as all these papers strongly challenge the previous consensus of the AR4.
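To make the prior-robustness point concrete, here is a toy grid calculation (entirely my own illustration, with a made-up pseudo-observation centred on 3C; it is not the calculation in the paper): the same likelihood is combined with two quite different priors, and the resulting posteriors can be compared directly.

```python
import numpy as np

# Grid over equilibrium sensitivity S (in C) -- purely illustrative values
S = np.linspace(0.1, 10.0, 1000)
dS = S[1] - S[0]

# Hypothetical observational constraint: Gaussian likelihood centred on 3C
likelihood = np.exp(-0.5 * ((S - 3.0) / 1.0) ** 2)

def posterior(prior):
    """Bayes' rule on the grid: normalise prior x likelihood to a density."""
    p = prior * likelihood
    return p / (p.sum() * dS)

# Two deliberately different priors: flat, and a heavier-tailed alternative
uniform_prior = np.ones_like(S)
heavy_tailed_prior = 1.0 / (1.0 + (S - 2.5) ** 2)

post_u = posterior(uniform_prior)
post_h = posterior(heavy_tailed_prior)

def median(p):
    """Read the 50th percentile off the discretised cdf."""
    cdf = np.cumsum(p) * dS
    return S[np.searchsorted(cdf, 0.5)]

print(median(post_u), median(post_h))
```

If the two medians (and more generally the two posteriors) land close together, the result is robust to that choice of prior; if they diverge wildly, the data are not doing the work and the posterior is mostly prior.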
The stuff Curry quotes at length about the lack of "accountable" forecasts (the term is a technical one) is basically a red herring. Accountable forecasts are not available for daily weather prediction either, or indeed any natural process known to man, but that does not prevent useful (and demonstrably valuable) probabilistic forecasts being made. In fact the lack of an accountable forecast system doesn't even prevent perfectly reliable (or at least arbitrarily close to perfectly reliable) probabilistic forecasts being made. What it does mean is that we need to be careful in how we make and interpret probabilistic forecasts, not least so that we don't throw out something that is actually useful, just because it does not reach a level of perfection which is actually unattainable. Which is somewhat ironic, given Judith's interpretation of what was written.
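For anyone unfamiliar with the jargon, "reliability" has a simple empirical meaning that is easy to sketch. This toy simulation (my own, not anything from Curry's post or our paper) checks the property directly: events forecast with probability p should occur roughly a fraction p of the time, which requires no "accountable" forecast system at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Issued forecast probabilities, and outcomes drawn from a perfectly
# reliable world (the event occurs with exactly the stated probability)
forecast_p = rng.uniform(0.0, 1.0, n)
outcome = rng.random(n) < forecast_p

# Reliability check: bin the forecasts, compare mean forecast probability
# with the observed event frequency in each bin
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(forecast_p, bins) - 1
for k in range(10):
    sel = idx == k
    print(f"forecast {forecast_p[sel].mean():.2f}  observed {outcome[sel].mean():.2f}")
```

The two columns agree to within sampling noise, which is the whole content of a reliability diagram: calibration is testable from the forecast record alone.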
Judith summarises with "I don't know why imprecise probability paradigms aren't more widely used in climate science. Probabilities seem misleading, given the large uncertainty."
I believe the reason why these paradigms aren't more widely used is that people have not yet shown that they are either necessary or sufficiently beneficial. I believe that in many areas, a sensible Bayesian analysis will generate reasonable and useful results that are adequately robust to the underlying assumptions, and I think our own sensitivity analyses, and the results I've cited above, bear this out (in the specific case of the climate sensitivity). If Judith wishes to make the case for other methods doing a better job here, she is welcome to try. In fact I've been waiting for some time for her to make a substantive contribution to back up her vague and incoherent "Italian flag" analysis. Merely handwaving about how she doesn't believe Bayesian analyses won't convince many people who actually work in this area. At least, that is my subjective opinion on the matter :-)
As a calibration of the value of her opinion, it's telling that she refers to the awful Schwartz 2007 thing as a "good paper". This was of course critically commented on not only by yours truly (along with Grant Foster, Gavin Schmidt, and Mike Mann), but perhaps more tellingly by Knutti, Frame and Allen - with whom I have not always seen eye to eye on matters of probability, so when we agree on something that may probably be taken as robust agreement! Even Nicola Scafetta found something (else) to criticise in Schwartz's analysis. Even Schwartz admitted it was wrong, in his reply to our comments! But Judith remains impressed. So much the worse for her.
I also spotted a comment on her post a couple of days ago, claiming to have found a major error in our paper. I expect Judith will answer it when she has the time, if one of her resident experts doesn't beat her to it. I'm busy with a barrel of my own fish to shoot :-)
32 comments:
I think the fish may need shooting.
One of the few inhabitants of Curry's blog I pay any attention to, Pekka the Finn, seems to agree with at least a part of Harvey's comment.
Paul Middents
James, you say "In fact I've been waiting for some time for her to make a substantive contribution to back up her vague and incoherent "Italian flag" analysis". Do you remember saying that McIntyre and McKitrick couldn't even do simple addition and subtraction (perhaps in 2003 or 04), in commenting on MM03's complete and utter refutation of Mannian math? Did you ever publish your criticism of MM03, James?
Paul, why not give Judy the opportunity to demonstrate her expertise?
Nice try, anon. I'm afraid nobody cares about that ancient history even enough to bother double-checking your reference. IMHO a better use of your time would be watching some ice melt.
Anon, no, I don't recall that clearly.
Oh Anon, the old wounds still hurt, don't they? I wonder if McKitrick is still emo about the whole degree/radian thing :)
Paul, I also agree with a part of Harvey's comment :-)
James,
Can't you see that arguing about the probability of AGW exceeding 5C is a bit like Russian officers arguing about how many chambers there are in the revolver they are about to hold to their heads?
No, I think that if S>5 we are in big trouble, so it is worth thinking about the credibility of this carefully. If S<3 (or optimistically, closer to 2C) then the trouble is far more moderate IMO.
So you wouldn't play Russian Roulette with a six cylinder revolver but you would with a twelve cylinder one?
For me it is much more important to discover which chamber it is that the bullet is in.
"Ron Cram:I am a fan of Stephen Schwartz of Brookhaven National Labs. ... In 2010, he also published “Why hasn’t Earth warmed as much as expected?”
curryja: Thanks for the links, i agree these are good papers."
Given Stephen's poster at the AGU on the non-anthropogenic source of the rise of atmospheric CO2, he's obviously gone off the deep end of bad-science...
"the trouble is far more moderate"
Depending on how you define "moderate," paleo seems to say otherwise. There is also the issue of there being no good analogue for the forcing we're applying.
James, you say you don't recall the comment made about McIntyre. On October 20, 2004, in a posting to sci.environment, you said McIntyre and McKitrick didn't know the difference between multiplication and subtraction and that you would start looking into the issue in more detail. Huh, interesting that you don't recall.
Anonymous, I'm going to ask for a direct link, 'cause I can't find it myself.
>"So you wouldn't play Russian Roulette with a six cylinder revolver but you would with a twelve cylinder one?"
And you believe that this is an excellent reason for all climate scientists to completely give up on all work on trying to understand more about the climate system?
And presumably you also believe we should take action that is so draconian that it has the effect of wiping out 99% of GDP because the game of Russian roulette just might have a low probability of having a worse effect. Very precautionary obviously, but do we all really want a policy that is highly likely to be a cure that is much worse than the disease?
You might prefer to be safe than sorry, but while we may dislike the denialosphere, there is plenty of volume there to suggest that a lot of people do not want a cure that is likely to be worse than the disease.
Have you shown that it is Russian roulette with completely disastrous consequences rather than just a disease (that has less than 100% infection and death rates), and that potential cures are definitely not worse than the disease?
Also how would you know how much of a cure was needed if all climate scientists decided to give up in accordance with what you are saying?
IMO, showing whether the really scary pdfs depend heavily on either ignoring most of the data or on using pathological priors, while the more moderate results are hardly altered by different reasonable priors (provided they use at least a reasonable portion of the available data), is a valuable contribution.
Well, Alexander Harvey has promoted A&H from wrong to merely vacuous. Progress!
Judy has yet to step in and clarify. You'd think that the reason might be a fear of demonstrating her lack of familiarity with the subject, but that can't really be it, can it? An ignorant prior joke comes to mind, but now everyone can roll their own without my having to say it out loud. :)
In which Our Judy takes the spotlight. Yikes.
Thanks for the link, from there I got to curryquotes which is rather good...
I'd go so far as priceless, James.
The conversation on your paper has spilled over to RC as well. I thought it was a good paper that showed the utility without overstating it. Pretty eloquent, actually, and I rarely agree with statistical approaches used by many climate scientists.
Oh, shame, it seems that at least some of them might have worked it out - the difference between a likelihood and probability density, that is. Though they managed to go all round the houses for a couple of days before getting there.
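The distinction is easy to demonstrate numerically. A minimal sketch (my own toy example, using an exponential model chosen purely for illustration, nothing from the thread): a likelihood viewed as a function of the parameter need not integrate to 1, so treating it directly as a probability density is exactly the confusion at issue.

```python
import numpy as np

x_obs = 2.0                              # a single observation
lam = np.linspace(1e-4, 50.0, 200001)    # grid over the rate parameter
dlam = lam[1] - lam[0]

# Likelihood L(lambda) = p(x_obs | lambda) for an exponential model
lik = lam * np.exp(-lam * x_obs)

# As a function of lambda this integrates to 1/x_obs**2 = 0.25, not 1
area = lik.sum() * dlam
print(round(area, 3))                    # prints 0.25

# Only after normalising (here, implicitly a flat prior) is it a density
density = lik / area
```

The likelihood ranks parameter values by how well they fit the data; turning it into a pdf over the parameter requires a prior and a normalisation step, and conflating the two is the error being untangled over there.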
Somehow it's all my fault that they got confused. So I can still be wrong, even when I'm right. Glad that's all sorted out.
Thanks also to crandles for his contribution over there which raised the average quality of discussion markedly :-)
I was really quite tempted to add that I really must get on with reviewing principia mathematica for climate change evidence so that I could declare principia mathematica to be vacuous ;)
As asked at Tamino's, this question might be apt here too:
http://tamino.wordpress.com/2011/01/25/milankovitch-cycles/#comment-47751
Imprecise probability theories (e.g. evidence theory, possibility theory, plausibility theory) seem to be far better fits to the climate sensitivity problem. As far as I can tell, these haven’t been used in climate science (other than Krieger’s 2005 Ph.D. thesis), and my fledgling attempts (e.g. Italian flag analysis; note Part II is under construction).
A candidate for Curryquotes?
DC, yeah, I thought that was good too.
James & Jules,
I am involved in an exchange with Judith Curry that involves your work.
http://judithcurry.com/2011/02/06/lisbon-workshop-on-reconciliation-part-v-the-science-is-not-settled/#comment-38944
I realize that you don’t follow her unless forced—somewhat like the feckless Sub Lieutenant damned with this in his first fitness report.
“The ratings follow him but only out of curiosity.”
What do you make of this statement concerning a conversation she says she had at the Lisbon conference?
“In my discussion with van der Sluijs at the Workshop, we both agreed that the uncertainty surrounding sensitivity was scenario uncertainty, and not statistical uncertainty (something to create a pdf for). I stand by my previous statements on that topic.”
I was referring to climate sensitivity to CO2 doubling. What on earth does this number have to do with scenario uncertainty? Obviously how bad it gets depends on both scenario and sensitivity. Is she really implying that gaining a better handle on sensitivity will not be of great assistance to the policy makers?
I will say she has engaged on this to greater extent than usual for her. However, her responses are mainly self referential. Come to think of it, there is this family in Colorado that argues the same way.
Paul Middents
Paul,
Thanks. It looks to me like someone is confused. As you commented there, van der Sluijs (with Dessai) seems entirely comfortable with the use of Bayesian probability for sensitivity, among other things (eg sec 4.1.2 of this). And you are right that it has nothing to do with scenarios.
Probably it is all made clear in her forthcoming papers, which she would no doubt have written by now if only people would stop pointing out inconsistencies and inadequacies in what she says :-)
Incidentally, it seems that the "Chatham House Rule" was honoured mainly in the breach at that workshop!
"the uncertainty surrounding sensitivity was scenario uncertainty, and not statistical uncertainty"
I also found the above to be uninterpretable.
"However, her responses are mainly self referential."
hmmm
"I have written entire posts on this subject (totalling over 10,000 words), see: [...]"
I'm even more confused how Stainforth et al. (2007) is meant to be a valid objection to James'n'Jules' paper?... which I find is the basis of her criticism.
Lazar,
Your attempt to pin Dr. Curry down was much more pointed and coherent than mine. Unfortunately you got exactly the same result. She seems to assume anyone who questions her has not been reading her blog.
Curry's habit of self-reference is aggravating. In the case of the Pielkes, at least, they sometimes refer to their peer reviewed literature and not just 10,000 bits of blogorrhea.
Chuckle... I actually thought yours were more to the point, those were good finds on van der Sluis' paper. And you were even polite about it.
JC's behavior just seems strange to me for a scientist, and not just any scientist but one with three decades' experience and a moderately sized catalog. She eschews a worked example and some maths and a published paper. She loses the only advantage of the blog approach by not engaging critics. And she expects to convince the world and science and governments to throw out probability theory. I guess reading Climate Audit for two years has such effects.
I must hope her paper gets published soon.
Maybe a preprint might be available.
Wow, she's "written entire posts on this subject (totalling over 10,000 words)"! I certainly concede to her in the blogorrhea stakes. I guess that settles things then. If that's not an unfortunate choice of words...
And clearly what she writes can't be utter nonsense, because she gets lots of comments :-)