I read this amusing article in NS while travelling recently, and it reminded me that I'd been meaning to blog about the story for some time. A spot of googling reveals that several others have beaten me to it, but I wasn't going to miss the chance to use my headline pun...
The basic gist is that an astrophysicist called J. Richard Gott III claims to have discovered a principle by which the future duration of an ongoing event can be confidently predicted, with absolutely no knowledge other than its past duration. In this article in particular, he asserts that the human race doesn't have long left on Planet Earth and, further, that the human space program doesn't have long left either, so we had better get on with colonising somewhere else.
It's basically a warmed-over version of the Doomsday "argument", of course - one version of which says that, given a total of N humans over the entire lifespan of the species, I can assume with 95% probability that my position in the list lies in the interval (0.025N, 0.975N). As it happens, I am roughly number 60B in the order, meaning that I can expect there to be somewhere between about 1.5B and 2340B more people (with 95% probability). That means a 2.5% probability that we'll be extinct within the next few decades! Gott does the same thing with the number of years during which there will be a space program, works out that it is likely to end quite soon, and concludes that we had better get on with moving elsewhere while we can.
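For anyone who wants to check that arithmetic, here's a back-of-the-envelope version in Python (my own sketch; the 60B birth rank is the usual round figure, not an exact count):

```python
# Doomsday-style interval: if my birth rank k is uniform over N total humans,
# then with 95% probability 0.025*N < k < 0.975*N, which inverts to an
# interval for N (and hence for the number of humans still to come).

k = 60e9  # rough birth rank of someone alive today (a round figure)

N_low, N_high = k / 0.975, k / 0.025          # implied 95% interval for N
print(f"total humans:  {N_low:.3g} to {N_high:.3g}")
print(f"still to come: {N_low - k:.3g} to {N_high - k:.3g}")
# -> roughly 1.5e9 to 2.34e12 more people
```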
The argument is nonsense, and plenty of others have shredded it:
Andrew Gelman (where I first read about this) doesn't like it, but provides a charitable interpretation of the whole thing as a frequentist statement: given an ordered set, 95% of the members do indeed lie in the middle 95% of the ordering, and thus the intervals constructed by this method are valid confidence intervals for the size of the set, given random samples from it. That's true enough, but (as he also points out) it does not justify misinterpreting these frequentist confidence intervals as if they were meaningful Bayesian credible intervals, which is what Gott is doing. (It does explain how Gott can demonstrate the success of his method on large historical data sets, for that gives the procedure a meaningful frequentist interpretation.)
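The frequentist reading is easy to verify by simulation - something like this sketch (mine, not Gelman's): draw a genuinely random position from a set of unknown size, form the interval, and count how often it covers the truth.

```python
import random

def gott_interval(rank):
    """95% interval for the set size N implied by a uniformly-drawn rank."""
    return rank / 0.975, rank / 0.025

trials = hits = 0
for _ in range(100_000):
    N = random.randint(100, 10_000)   # arbitrary true size, unknown to the method
    rank = random.randint(1, N)       # a genuinely random sample of position
    lo, hi = gott_interval(rank)
    hits += (lo <= N <= hi)
    trials += 1

print(f"coverage: {hits / trials:.3f}")   # ~0.95, as advertised
```

The coverage only holds because the rank really is sampled uniformly; nothing licenses treating yourself, or the present moment, as such a sample.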
Brian Weatherall rips a hole in it, first with a bit of "mockery" (his term) about the idiotic predictions it generates for examples such as the durability of the internet or the iPhone (and if anyone doesn't think these predictions are indeed idiotic, I'll happily bet against them, as he offers to), and then with a simple example of how it leads to the following nonsensical claim: if A has a longer historical duration than B, then the future duration of A will certainly (with probability 1!) be at least as long as the future duration of B. He gets this by considering the durations of the events A, B, and "A and B": the compound event "A and B" has the same past duration as B (it began when the younger event did), so Gott's rule assigns its future the same distribution as B's - but the future of "A and B" is the minimum of the two futures, which can only match B's if A always outlasts B, as the little simulation below illustrates.
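For the numerically-minded, the inconsistency is easy to exhibit (my own sketch, using the standard "delta t" form of Gott's rule, under which past/(past+future) is uniform on (0,1)):

```python
import random

def gott_future(past):
    """Sample a future duration under Gott's rule: past/(past+future) ~ U(0,1)."""
    r = 1.0 - random.random()   # uniform on (0, 1], avoids division by zero
    return past * (1 - r) / r

past_A, past_B = 100.0, 10.0   # A is the older event; "A and B" began when B did
n = 100_000
future_A = [gott_future(past_A) for _ in range(n)]
future_B = [gott_future(past_B) for _ in range(n)]
future_AB = [min(a, b) for a, b in zip(future_A, future_B)]  # compound ends first

future_B.sort(); future_AB.sort()
print("median future of B:        ", round(future_B[n // 2], 1))   # ~10.0
print("median future of 'A and B':", round(future_AB[n // 2], 1))  # ~8.5, not 10
```

Gott's rule applied directly to "A and B" gives it the same predictive distribution as B (median 10 here), but combining his separate predictions for A and B gives a strictly smaller median for the compound event: the two applications of the rule contradict each other.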
Best of all, there is a lovely letter reprinted on Tierney's blog (which also covers the story). Gott has been pushing this idea for a long time now, and following his publication of it in Nature back in 1993(!), this rebuttal was published (I was going to just post an excerpt, but it is so nicely written that I don't want to cut anything out):
“There are lies, damn lies and statistics” is one of those colorful phrases that bedevil poor workaday statisticians who labor under the illusion that they actually contribute to the advancement of scientific knowledge. Unfortunately, the statistical methodology of astrophysicist Dr. John Gott, reported in Nature 363:315-319 (1993), which purportedly enables one to put statistical limits on the probable lifetime of anything from human existence to Nature itself, breathes new life into the saying.
Dr. Gott claimed that, given the duration of existence of anything, there is a 5% probability that it is in its first or last 2.5% of existence. He uses this logic to predict, for example, the duration of publication of Nature. Given that Nature has published for 123 years, he projects the duration of continued publication to be between 123/39 = 3.2 years and 123×39=4800 years, with 95% certainty. He then goes on to predict the future longevity of our species (5000 to 7.8 million years), the probability we will colonize the galaxy and the future prospects of space travel.
This technique would be a wonderful contribution to science were it not based on a patently fallacious argument, almost as old as probability itself. Dubbed the “Principle of Indifference” by John Maynard Keynes in the 1920s, and the “Principle of Insufficient Reason” by Laplace in the early 1800s, it has its origins as far back as Leibniz in the 1600s (1). Among other counter-intuitive results, this principle can be used to justify the prediction that after flipping a coin and finding a head, the probability of a head on the next toss is 2/3. (2) It has been the source of many an apparent paradox and controversy, as alluded to by Keynes: “No other formula in the alchemy of logic has exerted more astonishing powers. For it has established the existence of God from total ignorance, and it has measured with numerical precision the probability that the sun will rise tomorrow.” (3) Perhaps more to the point, Kyburg, a philosopher of statistical inference, has been quoted as describing it as “the most notorious principle in the whole history of probability theory.” (4)
Simply put, the principle of indifference says that if you know nothing about a specified number of possible outcomes, you can assign them equal probability. This is exactly what Dr. Gott does when he assigns a probability of 2.5% to each of the 40 segments of a hypothetical lifetime. There are many problems with this seductively simple logic. The most fundamental one is that, as Keynes said, this procedure creates knowledge (specific probability statements) out of complete ignorance. The practical problem is that when applied in the problems that Dr. Gott addresses, it can justify virtually any answer. Take the Nature projection. If we are completely uncertain about the future length of publication, T, then we are equally uncertain about the cube of that duration, T³. Using Dr. Gott’s logic, we can predict the 95% probability interval for T³ as T³/39 to 39T³. But that translates into a 95% probability interval for the future length of publication of T/3.4 to 3.4T, or 42 to 483 years, not 3 to 4800. By increasing the exponent, we can come to the conclusion that we are 95% sure that the future length of anything will be exactly equal to the duration of its past existence, T. Similarly, if we are ignorant about successively increasing roots of T, we can conclude that we are 95% sure that the future duration of anything will be somewhere between zero and infinity. These are the kinds of difficulties inherent in any argument based on the principle of indifference.
On the positive side, all of us should be encouraged to learn that there can be no meaningful conclusions where there is no information, and that the labors of scientists to predict such things as the survival of the human species cannot be supplanted by trivial (and in this case specious) statistical arguments. Sadly, however, I believe that this realization, together with the superficial plausibility (and wide publicity) of Dr. Gott’s work, will do little to weaken the link in many people’s minds between “lies” and “statistics”.
Steven N. Goodman, MD, MHS, PhD
Assoc. Professor of Biostatistics and Epidemiology
Johns Hopkins University
References
1. Hacking, I. The Emergence of Probability, p. 126 (Cambridge Univ. Press, Cambridge, 1975).
2. Howson, C. & Urbach, P. Scientific Reasoning: The Bayesian Approach, p. 100 (Open Court, La Salle, Illinois, 1989).
3. Keynes, J.M. A Treatise on Probability, p. 89 (Macmillan, London, 1921).
4. Oakes, M. Statistical Inference: A Commentary for the Social Sciences, p. 40 (Wiley, New York, 1986).
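Goodman's transformation argument is easy to replay numerically. A sketch (my own, using the future/past ratio form of Gott's 39× rule; the letter's exact figures differ slightly from this arithmetic):

```python
# Applying the principle of indifference to T**p rather than to the duration
# T itself: the 95% bounds on the future/past ratio become 39**(1/p).

T_past = 123.0   # years Nature had been in publication in 1993

for p in (1, 3, 10, 100):
    r = 39 ** (1 / p)
    print(f"p = {p:>3}: future duration in [{T_past / r:6.1f}, {T_past * r:7.1f}] years")

# p = 1 reproduces Gott's 3.2-4800 years; p = 3 gives roughly 36-417 (the
# letter quotes 42-483); as p grows the interval collapses onto the past
# duration itself, while fractional p (roots of T) stretches it towards
# (0, infinity).
```

The same data can thus be made to "justify" almost any answer, which is exactly Goodman's point.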
Apparently back then, Gott's argument was sufficiently novel that Nature did not feel able to argue that "everyone thinks like this, so you can't criticise it" :-) More likely, the lesser political importance of the topics under discussion meant that they did not feel such a strong need to defend a "consensus" built on such methods.
Regular readers will probably by now have recognised an uncanny resemblance between Gott's argument and the "ignorant prior" so beloved of certain climate scientists. Indeed both succumb to the same argument - Goodman's demonstration of inconsistency via different transformations of the variable (the duration of Nature magazine) is exactly what I did with Frame's method.
Of course I wasn't claiming to have discovered anything new in my comment, but it's interesting to note that essentially the same argument was thrashed out so long ago right there in the pages of Nature itself. It doesn't seem to have slowed down Gott either, as he continues to peddle his "theory" far and wide.