Roger continues to flounder away, trying to salvage something from his latest statistical train-wreck. It's all remarkably trivial stuff to get wrong for someone who claims to "fully understand verification of probabilistic forecasts". Commenter Steve Scolnik skewers him neatly with a quotation from Doswell and Brooks:
"An important property of probability forecasts is that single forecasts using probability have no clear sense of "right" and "wrong." That is, if it rains on a 10 percent PoP forecast, is that forecast right or wrong? Intuitively, one suspects that having it rain on a 90 percent PoP is in some sense "more right" than having it rain on a 10 percent forecast. However, this aspect of probability forecasting is only one aspect of the assessment of the performance of the forecasts. In fact, the use of probabilities precludes such a simple assessment of performance as the notion of "right vs. wrong" implies. This is a price we pay for the added flexibility and information content of using probability forecasts. Thus, the fact that on any given forecast day, two forecasters arrive at different subjective probabilities from the same data doesn't mean that one is right and the other wrong! It simply means that one is more certain of the event than the other. All this does is quantify the differences between the forecasters."Of course, there isn't a cigarette-paper of difference between what I was saying, and what Doswell and Brookes are saying, because this is all well-established basic stuff.
Nevertheless, RP chooses to make up nonsense and misrepresent what I said, without even having the decency to link to my post. I nowhere say, or imply, that the IPCC statements "could not be judged to be wrong because of their probabilistic nature"; indeed, as he well knows, I have explicitly contradicted this nonsense claim of his multiple times in the past. A single probabilistic statement at the "likely" level cannot generally be meaningfully validated, because no outcome is sufficiently improbable to falsify it (under the standard significance-testing paradigm). Once you have a large enough ensemble of statements, such as those the IPCC makes, their judgement as a whole can easily be validated, because it is highly improbable that either very few or very many of the particular events would occur if the stated probabilities were accurate.
(Even this approach suffers from the usual problems of frequentist statistics, in that it does not actually address the question "how likely is it that the probabilistic system is well calibrated, given these results?" but rather answers "how likely are these results, if the probabilistic system is well calibrated?". However, if that probability is small enough, we can safely reject the system anyway. This digression is probably best ignored by all readers; I just put it in to head off another avenue for nit-picking.)
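To make the ensemble point concrete, here is a minimal sketch of such a check using a binomial test. The counts below are invented purely for illustration, and I have taken "likely" as p = 0.66 for the sake of a round number (the IPCC guidance defines it as a probability greater than 66%):

```python
from scipy.stats import binom

# Suppose n independent statements are each made at the "likely" level,
# taken here as p = 0.66 (hypothetical round figure).
n, p = 50, 0.66
k = 20  # invented count: only 20 of the 50 statements verified

# Two-sided question: how improbable is a count at least this extreme,
# if the forecaster were well calibrated? Note this answers
# P(data | calibrated), not P(calibrated | data) -- the frequentist
# caveat in the parenthetical above.
p_low  = binom.cdf(k, n, p)     # P(X <= k)
p_high = binom.sf(k - 1, n, p)  # P(X >= k)
p_value = 2 * min(p_low, p_high)
print(p_value)  # far below 0.05: too few hits, so reject good calibration

# By contrast, a count near the expected n*p = 33 would be entirely
# consistent with the stated probabilities.
```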
Even within his own analogy, he fails to be consistent with himself. Having stated unequivocally that a large proportion of the findings of the IPCC are "incorrect", he admits, with respect to a hypothetical bet on a football game:
"It is important to understand that the judgment [A] may have been perfectly sound and defensible at the time that it was made ... Perhaps then the outcome was just bad luck, meaning that the 10% is realized 10% of the time. Actually, we can never know the answer to whether the expectation was actually sound or not"
So he's prepared to consider probabilistic judgements "perfectly sound" when it suits him, but "incorrect" whenever the IPCC make them. Uh-huh.
7 comments:
As I previously wrote:
What the hell? He wants to have a beef with the IPCC with statements like:
That is why evaluation of probabilistic statements is necessary.
while refusing to admit his own problems with statements here.
Me:
The issue is that neither the 30% nor the 50% projection has any basis in physical reality. Read what James Annan and others were trying to tell you 3 years ago. And then you used those invalid boundary parameters to question basic conclusions of the science community. So, of course, when it turns out that the invalid projections were wrong, it is going to be pointed out. A simple admission that these were wrong would suffice, I'm sure.
And he finalizes the argument by telling me:
More importantly you seem to gloss over the entire point of the sensitivity analysis exercise, which is to explore the sensitivity of conclusions to assumptions, not to predict a single 'right' conclusion. In this case the need to revisit the IPCC conclusion is insensitive to whether the community accepts a surface temperature change of 50%, 30% or 15% -- the implications are robust to such uncertainties.
This gives a rather clear explanation of probabilistic climate predictions:
http://ukclimateprojections.defra.gov.uk/content/view/1989/500/
“It is very important to understand what a probability means in UKCP09. The interpretation of probability generally falls into two broad categories. The first type of probability relates to the expected frequency of occurrence of some outcome, over a large number of independent trials carried out under the same conditions: for example the chance of getting a five (or any other number) when rolling a dice is 1 in 6, that is, a probability of about 17%. This is not the meaning of the probabilities supplied in UKCP09, as there can only be one pathway of future climate. In UKCP09, we use the second type (called Bayesian probability) where probability is a measure of the degree to which a particular level of future climate change is consistent with the information used in the analysis, that is, the evidence. In UKCP09, this information comes from observations and outputs from a number of climate models, all with their associated uncertainties. The methodology which allows us to generate probabilities is based on large numbers (ensembles) of climate model simulations, but adjusted according to how well different simulations fit historical climate observations in order to make them relevant to the real world. The user can give more consideration to climate change outcomes that are more consistent with the evidence, as measured by the probabilities. Hence, Figure 8(a) does not say that the temperature rise will be less than 2.3ºC in 10% of future climates, because there will be only one future climate; rather it says that we are 10% certain (based on data, current understanding and chosen methodology) that the temperature rise will be less than 2.3ºC. One important consequence of the definition of probability used in UKCP09 is that the probabilistic projections are themselves uncertain, because they are dependent on the information used and how the methodology is formulated.”
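The mechanics behind that description can be sketched in a few lines: weight each ensemble member by how well it reproduces an observation, then read percentiles off the weighted distribution of projections. To be clear, everything below (the member values, the pseudo-observation, the Gaussian error model) is invented purely for illustration and is not the actual UKCP09 methodology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: each member has a hindcast value (comparable to
# an observation) and a projected temperature rise (degrees C).
hindcasts = rng.normal(14.0, 0.5, size=1000)
projections = 2.5 + 1.5 * (hindcasts - 14.0) + rng.normal(0.0, 0.3, size=1000)

# Weight members by their fit to the observed value, assuming a Gaussian
# observational error with standard deviation 0.2 (all numbers invented).
obs, obs_err = 14.1, 0.2
weights = np.exp(-0.5 * ((hindcasts - obs) / obs_err) ** 2)
weights /= weights.sum()

# Evidence-weighted 10th percentile of the projections: "we are 10%
# certain the rise will be less than this", in the UKCP09 (Bayesian) sense.
order = np.argsort(projections)
cdf = np.cumsum(weights[order])
p10 = projections[order][np.searchsorted(cdf, 0.10)]
print(round(float(p10), 2))
```

The resulting figure is a statement of evidence-weighted belief, not a frequency over many future climates, which is exactly the distinction the quote draws.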
Which makes me think of Richard Tol’s comment over at Roger’s:
“Roger and James are both right. Roger is right if one assumes that the IPCC predicts events, James is right if one assumes that the IPCC predicts probability density functions.”
Could it be that the argument is really about what the IPCC actually predicts?
This also relates to the difference in the IPCC lingo between likelihood and confidence. Another hornet’s nest. It seems that the quote by Doswell and Brooks conflates the two where it says that “It simply means that one is more certain of the event than the other.” Being (un)certain relates to confidence, not likelihood (in IPCC lingo at least). It’s rather blurry at this point though, and perhaps I’m utterly confused here.
The title of Roger’s initial post is still entirely off-base of course, no matter how you slice it.
The only thing that is clear so far is that Roger is completely confused and his approach is incoherent. In his latest he appears to accept that a prediction may be wrong even when the event turns out precisely in agreement with it (non-appearance of an asteroid). He seems to think that rhetoric and sophistry can overcome simple mathematics.
He seems to be, at the very least, a dishonest broker, with his sleight-of-hand meta-statistic spin.
So what is the probability of Roger not posting a comment that points to this post? Eli did an experiment early this afternoon, and as of 9:22 it looks like the house won.
It's this sort of childish behavior that makes him such a delight.
Must be some software problem over at Blog Pielke; trying to select a userid and then preview a response seems to just blank the comment field. Repeated over multiple tries.