Apparently global warming is statistically significant again.
But we all know that the difference between "significant" and not significant is not itself statistically significant, don't we?
Richard Black is usually pretty good, so it's a shame to see the old canard "If a trend meets the 95% threshold, it basically means that the odds of it being down to chance are less than one in 20." Of course, you all know why that's not true (at least, if you don't, you will after reading this).
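For the unconvinced, here's a minimal simulation sketch (Python; the trend size, noise level, and the 50/50 mix of "noise" and "trend" worlds are all invented numbers). The 5% threshold fixes P(significant | pure chance), but the quantity Black describes is P(pure chance | significant), which also depends on how common real trends are and on how easily the test detects them:

```python
# Sketch: "significant at the 95% level" caps P(significant | chance) at 5%,
# but says nothing directly about P(chance | significant).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years, n_sims = 15, 20_000
t = np.arange(n_years)

def significant(y):
    # Two-sided OLS test of zero slope at the 5% level.
    return stats.linregress(t, y).pvalue < 0.05

# Invented world: half the series are pure white noise, half carry a
# small real trend (0.01 deg/yr in noise with sd 0.1 deg).
noise_only = rng.normal(0.0, 0.1, (n_sims, n_years))
with_trend = 0.01 * t + rng.normal(0.0, 0.1, (n_sims, n_years))

hits_noise = sum(significant(y) for y in noise_only)
hits_trend = sum(significant(y) for y in with_trend)

print(f"P(significant | chance) = {hits_noise / n_sims:.3f}")  # ~0.05 by construction
print(f"P(chance | significant) = {hits_noise / (hits_noise + hits_trend):.3f}")
```

With these numbers the second probability comes out well above one in twenty, and with a different mix of worlds it could be almost anything, which is the point.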
9 comments:
So, basically, we could test the hypotheses "the Earth follows a white noise (or red noise, or indigo noise) trend" versus "the Earth follows the IPCC mean model trend" for any given number of years, and come up with a Bayesian probability that one is true rather than the other, if we can apply reasonable priors? (Of course, there would always be another trend which could beat both.)
-M
You could, although the mean of the IPCC models is a very specific outcome compared to a much broader class of statistical models, which might complicate things. I'd prefer to consider the whole ensemble of IPCC models.
Such an approach would only give relative probabilities, unless you also included "none of the above" as an option, in which case this would probably win, given enough data :-)
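As a concrete (and heavily simplified) sketch of what such a comparison might look like, here's some Python; the synthetic data, the known noise level, and the Gaussian prior on the slope are all assumptions of mine for illustration, not anything the IPCC ensemble actually pins down:

```python
# Sketch of Bayesian model comparison: M0 = pure white noise,
# M1 = linear trend + white noise, with relative posterior probabilities
# computed from the marginal likelihoods. All numbers are invented.
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(1)
n = 30
t = np.arange(n)
y = 0.015 * t + rng.normal(0.0, 0.1, n)   # synthetic "observations"

sigma = 0.1                               # noise level, assumed known

# Marginal likelihood of M0: y ~ iid N(0, sigma).
log_ml_0 = stats.norm.logpdf(y, 0.0, sigma).sum()

# Marginal likelihood of M1: y = b*t + noise, slope prior b ~ N(0, 0.02),
# integrated numerically over a grid of slopes.
b = np.linspace(-0.1, 0.1, 2001)
db = b[1] - b[0]
log_like = np.array([stats.norm.logpdf(y, bi * t, sigma).sum() for bi in b])
log_ml_1 = logsumexp(log_like + stats.norm.logpdf(b, 0.0, 0.02)) + np.log(db)

# With a 50/50 prior on the two models, posterior odds = Bayes factor.
log_bf = log_ml_1 - log_ml_0
print(f"log Bayes factor (M1 vs M0): {log_bf:.1f}")
print(f"P(M1 | data, 50/50 prior):   {1.0 / (1.0 + np.exp(-log_bf)):.3f}")
```

And note this only ranks the candidates against each other; "none of the above" never gets a look in unless you explicitly put it on the list.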
A 'none of the above' win in this situation would be meaningless without a physical explanation (or at least a guess?) for why the speculative statistical model is more accurate?
Anyway, it sounds like this approach would give the models tested a definite ranking of predictive capability?
As I've said elsewhere, I'm not very good at statistics.
Given that no model is ever "true" (even a simple statistical one), it would be too easy to reject anything in favour of "something else". Probably what we really care about is the magnitude of the model's (predictive) error.
Plus, you'd only get people complaining that the models were tuned to the data, and thus that past performance is no guarantee of predictive skill (and they'd be largely right, though one might still reasonably prefer a model that has a historical fit over one that doesn't even achieve that). The trivial version of that scoring is sketched below.
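For what it's worth, here it is in code (Python; observations and forecasts are entirely made up), scoring models by the size of their out-of-sample error rather than by accept/reject:

```python
# Tiny sketch (invented numbers): rank models by predictive error magnitude.
import numpy as np

obs    = np.array([0.12, 0.18, 0.10, 0.25, 0.31])   # hypothetical anomalies
pred_a = np.array([0.10, 0.20, 0.15, 0.22, 0.28])   # model A forecasts
pred_b = np.array([0.00, 0.05, 0.02, 0.08, 0.10])   # model B forecasts

def rmse(pred):
    # Root-mean-square error against the observations.
    return np.sqrt(np.mean((pred - obs) ** 2))

print(f"RMSE A: {rmse(pred_a):.3f}, RMSE B: {rmse(pred_b):.3f}")
```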
This "statistic" might be of interest, what do you think of the method?
Reconstruction of the extra-tropical NH mean temperature over
the last millennium with a method that preserves low-frequency
http://web.dmi.dk/solar-terrestrial/staff/boc/millennium_reconstr.pdf
The method it uses have got some shortcomings highlighter earlier:
http://www.people.fas.harvard.edu/~tingley/Comment_on_Christiansen.pdf
IMO a limitation of these methods is that there really isn't much signal in the data, even in the best case where the proxies actually do indicate the local temperature with a given precision. I have some work in progress on this.
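To give a flavour of the problem, a toy sketch (Python; the signal, the signal-to-noise ratio, and the calibration window are all invented): even a proxy whose noise is only as large as the signal loses about half the variance when calibrated by ordinary regression, which is the sort of attenuation the method above is designed to avoid.

```python
# Toy sketch (all numbers invented): how proxy noise limits a regression-based
# reconstruction. proxy = temperature + noise; calibrating temperature on the
# proxy by OLS shrinks the reconstructed variance by roughly r^2, i.e. by
# SNR/(1+SNR) -- with signal and noise of equal size, half the variance is lost.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# A smooth "temperature" signal with low-frequency variability (random walk).
temp = np.cumsum(rng.normal(0.0, 0.05, n))
temp -= temp.mean()

snr = 1.0                                  # assumed signal-to-noise ratio
noise = rng.normal(0.0, temp.std() / np.sqrt(snr), n)
proxy = temp + noise

# Calibrate on the last 150 "years" (the instrumental period), then
# reconstruct the whole record by regressing temperature on the proxy.
cal = slice(-150, None)
slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)
recon = slope * proxy + intercept

print(f"corr(proxy, temp)      = {np.corrcoef(proxy, temp)[0, 1]:.2f}")
print(f"var(recon) / var(temp) = {recon.var() / temp.var():.2f}")  # ~ SNR/(1+SNR)
```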
Looking forward to it :)
jyyh, isn't "none of the above" the only supposition one can make coming from a position of complete ignorance? :)
Ignoring the kinds of entities used in describing the presumed physical world would suggest that words uttered from a position of complete ignorance would be unphysical too, and thus impossible to hear, but sadly yes. :)