He just can't stop himself, burrowing away (don't much like the idea of a Curry hole, euuuurgh.)
It's quite amusing to watch the contortions he'll go to in order to avoid admitting a mistake. Recall that this started with his novel idea that one could determine the "correctness" of a probabilistic prediction of an event, by whether the event in question actually happens. Eg the prediction "likely to rain tomorrow" is correct if and only if the rain actually falls tomorrow.
While this might sound intuitively appealing, it quickly falls apart under any careful examination (as Doswell and Brooks warn). That is, it leads to conclusions that are obviously nonsensical and/or inconsistent. For example, if we say that a roll of a fair die is likely to come up 1-5, then this statement is correct in the sense of, well, being correct, but Roger's analysis would determine it to have been false if the roll actually turned out to be 6.
Oh, but at this point, rather than admitting that his usage of "correct" made no sense, Roger decided that for some reason his method only applies in truly epistemic cases where probability is a state of belief rather than a long-run property. It's funny that while (dishonestly) accusing me of making the IPCC out to be infallible, he then tries his best to ensure that his personal "correctness" theory is unfalsifiable. But I'm sure he is blind to that irony. Of course, no explanation is forthcoming as to why his theory, if it is useful and valid, should fall flat so quickly when confronted with a simple example. I tried again with a handmade imperfect die which is initially not known to be fair, but for which I still make the same prediction and again throw a 6. In Roger-world the probabilistic prediction is incorrect. However, in this case the long-run frequency of a 6 can subsequently be found by experiment, and let's assume it turns out to be 20±0.1%. Was the original probabilistic statement still Roger-incorrect? Answer came there none...
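For concreteness, here is a minimal simulation of my own (a fair die and a made-up trial count assumed) showing what the outcome-based rule does to a perfectly calibrated prediction:

    import random

    random.seed(0)
    trials = 100_000

    # Roll a fair die many times. The prediction "likely to come up 1-5" has
    # probability 5/6 and is, simply, a correct statement about the die.
    sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)

    # Long-run frequency of a 6, found by experiment (cf. the handmade die).
    print(f"estimated P(6) = {sixes / trials:.3f}")    # about 0.167

    # Fraction of occasions on which the outcome-based rule would nevertheless
    # score that prediction "incorrect": every time a 6 happens to turn up.
    print(f"fraction scored 'incorrect' = {sixes / trials:.3f}")

The two numbers are necessarily the same, which is rather the point.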
Best of all, entirely unprompted, he came up with an example based on an asteroid falling on Boulder. While he had several times insisted that a prediction at the 90% level should be considered "incorrect" if the event did not occur, he then stated that if I predict that it is 10% probable that an asteroid hits Boulder tomorrow (ie 90% probable that it does not), then my prediction is correct if the asteroid DOES hit! This, he explains, is due to the "baseline expectation" which apparently allows Roger to invert his original definition of "correctness" whenever he feels like it. It's a bit odd that he came up with this new twist completely unprompted, as it blows apart all his previous analysis, but it's not as if his theory made any sense anyway.
Naturally, the actual paper that he co-authored contains no mention of this "baseline expectation".
With his latest post on aleatory and epistemic uncertainty, one might hope that he could have at last been starting to realise that the concept of "correctness" of a probabilistic prediction cannot in general be determined from the occurrence - or otherwise - of the predicted event (the occurrence of an event assigned a probability of zero is of course an exception). But based on the comments, it seems that this insight still eludes him.
It does seem that one infallible guide to "Pielkeian correctness" has emerged, though. If Roger says it, then it is correct, no matter how many impossible or ridiculous contortions and evasions are required to avoid admitting error.
19 comments:
I find it likely that Roger will eventually figure out he is very wrong about this. I also find it very likely that he will not have the ability to admit his mistake. Hey Roger, try to prove me wrong!
I think you are almost right - arguing about probability is probably a waste of time.
'There's glory for you!'
'I don't know what you mean by "glory",' Alice said.
Humpty Dumpty smiled contemptuously. 'Of course you don't — till I tell you.'
Hank:
Wasn't the ability to believe 3 impossible things before breakfast important behind the looking glass?
Brings Eric May to mind somehow.
Rumleyfips
Yes, it was the Red Queen and impossible things.
But where, oh where, is Jessica?
Thanks for the update on this, James. I read the original post at RP's place, and a few of the comments, but was insufficiently interested to keep following it.
The asteroid example you cite would seem to be actually related to the content of the paper. That is, the IPCC AR1 does include predictions of things that are unlikely, very unlikely, etc. I wonder how RP handled those in the paper.
I had been assuming that there was no way in hell that RP could be stupid enough (or deceptive enough) to handle a case where AR1 says "X is only 33% likely to occur" by counting that as a prediction for which AR1 will be wrong 67% of the time. Surely he can't possibly have done that ... right?
Please tell me I'm misunderstanding something here.
Perhaps someone should corner Roger and make him assign probabilities to a long list of highly improbable events. Then we can look forward to a blog post "Over 99% of Roger Pielke Jr's Predictions are Incorrect".
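The arithmetic behind that headline is simple enough; a toy sketch with entirely made-up numbers:

    import random

    random.seed(1)
    p = 0.01     # made-up probability assigned to each improbable event
    n = 10_000   # made-up number of such predictions

    # Under the outcome-based rule, a prediction is "correct" only if the
    # event actually occurs.
    hits = sum(1 for _ in range(n) if random.random() < p)

    print(f"judged 'correct':   {hits / n:.1%}")        # roughly 1%
    print(f"judged 'incorrect': {1 - hits / n:.1%}")    # roughly 99%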
Roger's most recent post suggests to me that he doesn't realise that the IPCC statements he is interested in can be viewed as specifying a p.d.f. in a subjectivist Bayesian sort of a way (and are not even intended to provide the basis for falsification of the theory, but to provide a rough indication of what might happen that is relevant to impact studies). I'd be very grateful if you could pop over and check my comments re. MAXENT etc., just to make sure I am being fair/reasonable.
Ned, I did wonder about this, but based on the numbers presented I suspect that Roger turned the few "(v) unlikely" statements around, ie counted them as 66/90% "correct" predictions of the inverse. Or else he could even have just ignored them; there actually aren't enough of them to materially affect the results anyway. Which at least avoids this one particular avenue of stupidity.
I wasn't trying to trap Roger into an accidental slip, merely leading him along the path to realising that his idea was irredeemably broken...
DM, will do....
Ta! FWIW, I think Roger's problem is caused by the difficulty of reasoning about the probability of a probability: he is confusing the expectation of a probability (after having marginalised over its uncertainty) with the probability itself. However, at the current time my comment on that is still in moderation, so it may not be available just yet.
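To put some entirely made-up numbers on that distinction:

    # A made-up state of knowledge about an uncertain probability p:
    # equal credence that p is 0.5 or 0.9.
    belief_over_p = {0.5: 0.5, 0.9: 0.5}

    # The predictive probability for the single event is the expectation of p,
    # obtained by marginalising over that uncertainty...
    expected_p = sum(p * w for p, w in belief_over_p.items())
    print(f"E[p] = {expected_p:.2f}")    # 0.70

    # ...but E[p] is not itself an "underlying" probability that the outcome
    # can later show to be right or wrong; it is just what the uncertainty
    # about p implies for the event in question.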
DM,
Well I'm not a fan of maxent for starters. But I'm also not a fan of the IPCC confusion over confidence in probabilities (especially when they end up with low confidence in high probabilities). That seems an unhelpful level of recursion to me, especially noting that most of the authors are basically physicists who struggle with the concept of probability as an expression of a degree of belief in the first place (eg statements about "the full range of uncertainty"). It seems that the IPCC approach only encourages this, by enabling them to think they can pigeonhole the "subjective" bit in the confidence and convince themselves that the "underlying" probability is somehow real. So I'm not going to defend them over that.
But principally, I think it's important to realise that Roger's blizzard of posts is a very straightforward smokescreen to bury the car crash of his original claims. His goal is to get everyone to agree that it's all far too complicated and even experts disagree. But they can't disagree on whether his original idea is credible, because it's obviously nonsense to think that the correctness of a probabilistic claim can be determined solely on the observed outcome. Which is where I came in :-)
For the record, it was SIX impossible things:
"I can't believe that!" said Alice.
"Can't you?" the Queen said in a pitying tone. "Try again: draw a long breath, and shut your eyes."
Alice laughed. "There's no use trying," she said: "one can't believe impossible things."
"I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."
http://en.wikiquote.org/wiki/Through_the_Looking-Glass
RPJr is clearly going for a new world record.
I hate to be a pedant (OK, not that much) but it was the White Queen who believed the six impossible things before breakfast. The Red Queen had to keep running faster and faster to stay in the same place.
Many thanks JA. RP Jr. has dismissed my counter example and (moderately politely) asked me to "push off, there's a good chap", so I'll leave him to it.
His introduction of the missing "expected", after I had pointed out the error, as mere blog shorthand (or words to that effect) is evidence that you are right.
I recently read Judith Curry's paper "Reasoning about climate uncertainty", and that seemed to me to be worded in order to cast doubt, while providing no useful guidance on how it could be improved (at least not in a form that could actually be used). Some concrete examples of specific statements and how JC would have made them might actually encourage progress, but they were completely absent.
Her comment "In the presence of scenario uncertainty, which characterises climate model simulations, attempts to produce a p.d.f. for climate sensitivity (e.g. Annan and Hargreaves 2009) are arguably misguided and misleading" was intriguing. I would have thought "scenario uncertainty" would normally relate to uncertainty over the way in which future emissions/forcings would work out, but for p.d.f.s of climate sensitivity based on hindcasts, the p.d.f.s should already include the uncertainty due to the uncertainty in estimates/measurements of the forcings?
The fact that in a paper on "reasoning about climate uncertainty" she included a footnote explaining that PDF meant "probability density function" was also mildly amusing!
The Curry thing...ugh, but being dissed by her is probably preferable to being praised. I think she is currying confusion by using "scenario uncertainty" in a broader sense, but I have long since given up on expecting her to come up with anything useful (*cough* Italian flag *cough*). I will blog on the whole issue of Climatic Change some time soon.
Maybe someone should send him this url:
http://www.metoffice.gov.uk/news/releases/archive/2011/weather-game
h/t Material World on BBC - which has an article on it in the latest edition.
Pielke pere and John n-g are having a delightful exchange over at Climate Abyss.
http://blog.chron.com/climateabyss/2011/08/roger-pielke-jr-s-inkblot/
Paul Middents
Just listening to Material World...coincidentally, we met someone who had been working on that game when they visited Japan recently. The Flash was more than a bit tedious and some bits of the display were missing, but I still managed to be a red-hot meteorologist of course :-)
I made the mistake of playing it while busy at work and misread the probability markers a few times, putting my marks at the opposite end to the one I intended, and putting a poor ice cream seller out of business.
I hope I didn't skew the results too much.