Via Andrew Parnell's tweet comes this article, which has to be one of the worst explanations of p-values I have ever read. It's not just that the basic interpretation of a p-value is wrong (no, it is not the probability that the null hypothesis is true); the whole piece is also drowned in waffle and bafflegab.
Interestingly, it seems to be the one article on that site by author "Statistician Nathan Green" that does not have comments attached. I've retweeted AP's comment to the author just for fun...
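For the record, here is a quick sketch of what a p-value actually is: the probability, assuming the null hypothesis is true, of a result at least as extreme as the one observed. The simulation below is purely illustrative (the two-sample t-test, the sample size of 30 and the normal data are my choices, nothing to do with the article): if the null is true in every single experiment, roughly 5% of p-values still fall below 0.05, which is exactly why a small p-value cannot be read as the probability that the null is true.

# Illustrative sketch: p-values under a true null (Python, numpy/scipy)
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate 10,000 experiments in which the null hypothesis is exactly true:
# both groups are drawn from the same normal distribution.
p_values = []
for _ in range(10_000):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    p_values.append(p)

p_values = np.array(p_values)

# Under a true null, p-values are uniform on [0, 1]: about 5% land below
# 0.05 even though H0 holds in every experiment. The p-value is a statement
# about the data given H0, not about H0 given the data.
print("Fraction of p-values below 0.05:", np.mean(p_values < 0.05))  # ~0.05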
3 comments:
Tch. One is supposed to drown waffles in maple syrup.
The article does actually contain a cop-out:
"The strict statistical interpretation of what a significance test tells us is actually a little more subtle and is often misunderstood, but for now this explanation is just fine."
Hmmm. Is there hope of explaining this to lay folk, given that you need to bring in the notion of 'prior knowledge' and what not? Serious question. It would be great if it could be done, as "p < 0.05 so it must be true" is probably the statistical fallacy of our time...
Martin: yes, but having said that, he says that the "strict definition...is often misunderstood" without seeming to realise that it's precisely articles like his that perpetuate and promote the misunderstanding!
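A back-of-the-envelope sketch of Martin's point about prior knowledge, with made-up numbers (the 0.05 threshold, the 80% power and the assumption that only 1 in 10 tested hypotheses is real are purely illustrative): even with p < 0.05 in hand, the probability that the null is true can be far from 5%.

# Illustrative Bayes arithmetic: Pr(H0 | significant result)
alpha = 0.05        # false positive rate under H0
power = 0.80        # assumed chance of p < 0.05 when H0 is false
prior_h1 = 0.10     # assumed prior: 1 in 10 hypotheses tested is real
prior_h0 = 1 - prior_h1

# Bayes' theorem: of all "significant" results, how many come from true nulls?
p_sig = alpha * prior_h0 + power * prior_h1
p_h0_given_sig = alpha * prior_h0 / p_sig

print(f"Pr(H0 | p < 0.05) = {p_h0_given_sig:.2f}")  # about 0.36, not 0.05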
mmmmm...waffles....now where was I...oh never mind.