Roger Pielke has a
new post up asserting that 28% of the IPCC's findings are incorrect. Since that's obviously a rather implausible figure, I was expecting the claim to be backed up with some evidence of errors, or at least sloppiness, so I had a look at the paper he cites to justify it.
It turns out that 100%-28% = 72% is merely the average (lower bound) probability level attached to the statements they made, such as "It is very likely that hot extremes, heat waves and heavy precipitation events will continue to become more frequent." Here "very likely" means a probability greater than 90%. So, given 10 such statements, the IPCC is saying that it would expect the "very likely" outcome to occur about 9 times, and not occur about once. Similarly for "likely" (greater than 66%). Averaging over all the probabilistic statements, it should be expected that in about 28% of cases, the (probabilistically) preferred outcome will not actually happen.
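The arithmetic here is easy to check numerically. A minimal sketch, using a made-up mix of confidence levels (the actual IPCC distribution of statements is of course different): the expected fraction of "misses" for a perfectly calibrated set of statements is just one minus the average stated probability.

```python
import random

random.seed(0)

# Hypothetical mix of statement confidence levels (illustrative only, not
# the IPCC's actual distribution): ten "very likely" (90%) and ten
# "likely" (66%) statements.
probs = [0.90] * 10 + [0.66] * 10

# Expected fraction of statements whose preferred outcome does NOT occur,
# assuming each stated probability is accurate (perfect calibration).
expected_miss = 1 - sum(probs) / len(probs)
print(f"expected 'miss' fraction: {expected_miss:.0%}")  # 22% for this mix

# Monte Carlo check: simulate many independent assessments and count misses.
trials = 50_000
misses = sum(random.random() > p for _ in range(trials) for p in probs)
miss_rate = misses / (trials * len(probs))
print(f"simulated miss rate: {miss_rate:.3f}")
```

So a calibrated forecaster with this mix of statements *should* see about 22% of preferred outcomes fail to occur; that is what calibration means, not a count of errors.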
And in Roger-world, this means that 28% of the statements are "incorrect". Note, however, that he does not make this silly claim in the paper itself, but only in his blog post.
To see why this interpretation is nonsensical, consider a single roll of a fair die. I state (accurately) that it is "likely" to lie in the range 1-5. If I roll a 6, then in Roger-world, my statement was incorrect. However, it was not incorrect, and Roger is simply wrong to claim so.
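The die example can be simulated directly: the statement "likely to lie in 1-5" has probability 5/6 (about 83%), and over many rolls it should fail about one time in six, precisely because it is well calibrated.

```python
import random

random.seed(1)

# The claim: a fair die roll is "likely" (probability 5/6, about 83%) to
# land in the range 1-5. A single 6 does not make the claim wrong; over
# many rolls, a correctly calibrated claim *should* fail about 1/6 of
# the time.
rolls = [random.randint(1, 6) for _ in range(60_000)]
fail_rate = sum(r == 6 for r in rolls) / len(rolls)
print(f"fraction of rolls outside 1-5: {fail_rate:.3f}")  # close to 0.167
```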
As you can see from the comments, I challenged Roger on this, and his response (entirely in character) is to duck and weave. In his
comment #5, for example, he shamelessly misrepresents what I said, and brings up the red herring of a definitive prediction (when in fact I had clearly made a
probabilistic one, and the distinction is of course absolutely fundamental to the point). The obvious elephant in the room that Roger cannot bring himself to acknowledge is that the statement is correct irrespective of the outcome of the roll. "Correctness" of a single statement simply isn't something that can be directly validated (or invalidated) by the outcome, and the accurate calibration of a probabilistic prediction system actually relies on having the appropriate number of "failures" for each level of probability.
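That last point is how calibration is actually assessed: not statement by statement, but by grouping predictions at each stated confidence level and checking that the right fraction verified. A toy sketch of such a check (my own illustrative setup, not any standard tool):

```python
import random
from collections import defaultdict

random.seed(2)

# Group probabilistic statements by their stated confidence level, then
# compare the stated level against the fraction that actually verified.
# Here the outcomes are drawn at exactly the stated probabilities, so the
# system is perfectly calibrated by construction.
levels = [0.66, 0.90, 0.99]
outcomes = defaultdict(list)
for _ in range(30_000):
    p = random.choice(levels)
    outcomes[p].append(random.random() < p)  # True if the outcome occurred

observed = {p: sum(v) / len(v) for p, v in outcomes.items()}
for p in sorted(observed):
    print(f"stated {p:.0%}  observed {observed[p]:.1%}")
```

A well-calibrated forecaster shows observed frequencies matching the stated levels, which necessarily means the 66% statements "fail" about a third of the time.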
I realise of course that having done some rather boring textual analysis that in his own words amounts to "Nothing too interesting, really", Roger is just rabble-rousing on his blog. I'm confident that any competent scientist will see straight through it, but that's hardly his target audience.
As for what the 72%/28% average actually means, it tells us nothing except that the IPCC makes a lot of statements about things it is (by its own admission) only moderately confident about. It might in principle be interesting to see how the confidence level changes over time, but only if the set of statements were held fixed from one assessment to the next. People have looked at climate sensitivity estimates (hardly changed) and detection and attribution (increased markedly in confidence) but not much else, AIUI. I suppose we can anticipate Roger claiming that the next report is either more correct, or less, depending on what mix of statements it happens to include :-)
Incidentally, and although it's a minor point, it is perhaps telling in terms of his overall level of competence: Roger is also wrong when he claims that if the statements are not independent, then the proportion of "incorrect" ones will be higher than 28%. Actually, if the statements are not independent (while still being correctly calibrated), then the proportion that do not come to pass would still be 28% in expectation, just with higher variance, meaning that either a larger or a smaller proportion would not be surprising. Unlike the simple misinterpretation in his blog post title, this elementary error is made in the paper itself.
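The dependence point is easy to demonstrate numerically. A quick sketch (my own toy setup, using p = 0.72 for every statement so the expected miss rate is exactly 28%), comparing fully independent outcomes against the extreme case where one draw decides every statement at once:

```python
import random
import statistics

random.seed(3)

# 20 statements per "assessment", each with probability 0.72 of verifying,
# so the expected miss fraction is 0.28 either way. Dependence changes the
# spread across assessments, not the expectation.
p = 0.72
n_statements = 20
n_assessments = 20_000

def miss_fractions(correlated):
    fractions = []
    for _ in range(n_assessments):
        if correlated:
            # One draw decides all 20 statements together (extreme dependence).
            miss = n_statements if random.random() > p else 0
        else:
            miss = sum(random.random() > p for _ in range(n_statements))
        fractions.append(miss / n_statements)
    return fractions

indep = miss_fractions(False)
corr = miss_fractions(True)
print(f"mean miss (independent): {statistics.mean(indep):.3f}")  # near 0.28
print(f"mean miss (correlated):  {statistics.mean(corr):.3f}")   # near 0.28
print(f"stdev (independent): {statistics.stdev(indep):.3f}")
print(f"stdev (correlated):  {statistics.stdev(corr):.3f}")      # much larger
```

Both cases average 28% misses; the correlated case just swings much more wildly from one assessment to the next, which is exactly the "higher or lower would not be surprising" point.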