I read this amusing article in NS while travelling recently, and it reminded me that I'd been meaning to blog about the story for some time. A spot of googling reveals that several others have beaten me to it, but I wasn't going to miss the chance to use my headline pun...
The basic gist is that an astrophysicist called J. Richard Gott III claims to have discovered a principle by which the future duration of an ongoing event can be confidently predicted, with absolutely no knowledge other than the past duration. In particular in this article, he asserts that the human race doesn't have long left on Planet Earth, and further, that the human space program doesn't have long left either, so we had better get on with colonising somewhere else.
It's basically a warmed-over version of the Doomsday "argument", of course - one version of which is that given a total number of N humans (over the entire lifespan of the species), I can assume that with 95% probability my position in the list lies in the (0.025N, 0.975N) interval. Actually, I am number 60B in the order, meaning that I can expect there to be somewhere between 1.5B and 2340B more people, i.e. a total of at most 2400B (with 95% probability). That means a 2.5% probability that we'll be extinct in the next few decades! Gott does the same thing with the number of years during which there will be a space program, and works out that it is likely to end quite soon, so we had better get on with moving elsewhere while we can.
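For concreteness, here's that arithmetic as a minimal Python sketch (60B is just the usual round figure for the number of humans born so far):

```python
# Doomsday interval: if my birth rank r is "random", then with 95% probability
# r lies in (0.025*N, 0.975*N), i.e. the total number of humans N ever born
# lies in (r/0.975, r/0.025).
r = 60e9  # rough birth rank of someone alive today

n_lo, n_hi = r / 0.975, r / 0.025
print(f"total humans ever born: {n_lo:.3g} to {n_hi:.3g}")          # ~6.15e+10 to 2.4e+12
print(f"humans still to come: {n_lo - r:.3g} to {n_hi - r:.3g}")    # ~1.54e+09 to 2.34e+12
```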
The argument is nonsense, and many others have already shredded it:
Andrew Gelman (where I first read about this) doesn't like it but provides a charitable interpretation of the whole thing as a frequentist statement: given an ordered set, 95% of the members do indeed lie in the middle 95% of the ordering, and thus the intervals constructed by this method are valid confidence intervals for the size of the set, given random samples from it. That's true enough, but (as he also points out) does not justify the misinterpretation of these frequentist confidence intervals as if they were meaningful Bayesian credible intervals, which is what Gott is doing. (It does explain how Gott can demonstrate the success of his method on large historical data sets, for that gives the procedure a meaningful frequentist interpretation.)
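That frequentist claim is easy to check by simulation; here is a minimal sketch (the set size of 1000 is arbitrary):

```python
# Coverage check for Gott's 95% interval: draw a rank r uniformly from 1..N;
# the implied interval for N is (r/0.975, r/0.025). Over many draws this
# interval should contain the true N about 95% of the time.
import random

N = 1000          # arbitrary true size of the ordered set
trials = 100_000

hits = 0
for _ in range(trials):
    r = random.randint(1, N)         # uniform random position within the set
    if r / 0.975 <= N <= r / 0.025:  # equivalent to 0.025*N <= r <= 0.975*N
        hits += 1
print(f"coverage: {hits / trials:.3f}")  # ~0.95
```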
Brian Weatherall rips a hole in it, first with a bit of "mockery" (his term), showing how it leads to idiotic predictions for several examples such as the durability of the internet or the iPhone (and if anyone doesn't think these predictions are indeed idiotic, I'll happily bet against them, as he offers to). He then gives a simple example of how it leads to the following nonsensical claim: if A has a longer historical duration than B, then the future duration of A will certainly (with probability 1!) be at least as long as the future duration of B. He shows this by considering the durations of the events A, B, and "A and B": the compound event "A and B" has the same past duration as the younger event B, so Gott's rule must assign them the same future duration; but the future of "A and B" ends as soon as either A or B ends, so it can only match B's future if A outlasts B.
Best of all, there is a lovely letter reprinted on Tierney's blog (which also covers the story). Gott has been pushing this idea for a long time now, and following his publication of it in Nature back in 1993(!), this rebuttal was published (I was going to just post an excerpt, but it is so nicely written that I don't want to cut anything out):
“There are lies, damn lies and statistics” is one of those colorful phrases that bedevil poor workaday statisticians who labor under the illusion that they actually contribute to the advancement of scientific knowledge. Unfortunately, the statistical methodology of astrophysicist Dr. Richard Gott, reported in Nature 363:315-319 (1993), which purportedly enables one to put statistical limits on the probable lifetime of anything from human existence to Nature itself, breathes new life into the saying.
Dr. Gott claimed that, given the duration of existence of anything, there is a 5% probability that it is in its first or last 2.5% of existence. He uses this logic to predict, for example, the duration of publication of Nature. Given that Nature has published for 123 years, he projects the duration of continued publication to be between 123/39 = 3.2 years and 123×39=4800 years, with 95% certainty. He then goes on to predict the future longevity of our species (5000 to 7.8 million years), the probability we will colonize the galaxy and the future prospects of space travel.
This technique would be a wonderful contribution to science were it not based on a patently fallacious argument, almost as old as probability itself. Dubbed the “Principle of Indifference” by John Maynard Keynes in the 1920s, and the “Principle of Insufficient Reason” by Laplace in the early 1800s, it has its origins as far back as Leibniz in the 1600s. (1) Among other counter-intuitive results, this principle can be used to justify the prediction that after flipping a coin and finding a head, the probability of a head on the next toss is 2/3. (2) It has been the source of many an apparent paradox and controversy, as alluded to by Keynes: “No other formula in the alchemy of logic has exerted more astonishing powers. For it has established the existence of God from total ignorance, and it has measured with numerical precision the probability that the sun will rise tomorrow.” (3) Perhaps more to the point, Kyburg, a philosopher of statistical inference, has been quoted as describing it as “the most notorious principle in the whole history of probability theory.” (4)
Simply put, the principle of indifference says that if you know nothing about a specified number of possible outcomes, you can assign them equal probability. This is exactly what Dr. Gott does when he assigns a probability of 2.5% to each of the 40 segments of a hypothetical lifetime. There are many problems with this seductively simple logic. The most fundamental one is that, as Keynes said, this procedure creates knowledge (specific probability statements) out of complete ignorance. The practical problem is that when applied in the problems that Dr. Gott addresses, it can justify virtually any answer. Take the Nature projection. If we are completely uncertain about the future length of publication, T, then we are equally uncertain about the cube of that duration, T³. Using Dr. Gott’s logic, we can predict the 95% probability interval for T³ as T³/39 to 39T³. But that translates into a 95% probability interval for the future length of publication of T/3.4 to 3.4T, or 36 to 418 years, not 3 to 4800. By increasing the exponent, we can come to the conclusion that we are 95% sure that the future length of anything will be exactly equal to the duration of its past existence, T. Similarly, if we are ignorant about successively increasing roots of T, we can conclude that we are 95% sure that the future duration of anything will be somewhere between zero and infinity. These are the kinds of difficulties inherent in any argument based on the principle of indifference.
On the positive side, all of us should be encouraged to learn that there can be no meaningful conclusions where there is no information, and that the labors of scientists to predict such things as the survival of the human species cannot be supplanted by trivial (and in this case specious) statistical arguments. Sadly, however, I believe that this realization, together with the superficial plausibility (and wide publicity) of Dr. Gott’s work, will do little to weaken the link in many people’s minds between “lies” and “statistics”.
Steven N. Goodman, MD, MHS, PhD
Assoc. Professor of Biostatistics and Epidemiology
Johns Hopkins University
References
1. Hacking, I. The Emergence of Probability, 126 (Cambridge Univ. Press, Cambridge, 1975).
2. Howson, C. & Urbach, P. Scientific Reasoning: The Bayesian Approach, 100 (Open Court, La Salle, Illinois, 1989).
3. Keynes, J. M. A Treatise on Probability, 89 (Macmillan, London, 1921).
4. Oakes, M. Statistical Inference: A Commentary for the Social Sciences, 40 (Wiley, New York, 1986).
Apparently back then, Gott's argument was sufficiently novel that Nature did not feel able to argue that "everyone thinks like this, so you can't criticise it" :-) More likely, the lesser political importance of the topics under discussion meant that they did not feel such a strong need to defend a "consensus" built on such methods.
Regular readers will probably by now have recognised an uncanny resemblance between Gott's argument and the "ignorant prior" so beloved of certain climate scientists. Indeed both succumb to the same argument - Goodman's demonstration of inconsistency via different transformations of the variable (duration of Nature magazine) is exactly what I did with Frame's method.
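For anyone who wants to see Goodman's transformation trick in numbers, here is a minimal sketch (T = 123 years as in the letter; the exponents are arbitrary illustrations):

```python
# Goodman's point: applying Gott's 95% "indifference" interval to T**k rather
# than T gives (T / 39**(1/k), T * 39**(1/k)) for the future duration -- a
# different answer for every arbitrary choice of k.
T = 123  # years Nature had been published as of 1993

for k in (1, 3, 10, 100):
    f = 39 ** (1 / k)
    print(f"indifferent in T^{k}: ({T / f:.1f}, {T * f:.1f}) years")
# k=1 recovers Gott's (3.2, 4800); as k grows, the interval collapses onto
# (T, T), while fractional k (roots of T) stretches it towards (0, infinity).
```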
Of course I wasn't claiming to have discovered anything new in my comment, but it's interesting to note that essentially the same argument was thrashed out so long ago right there in the pages of Nature itself. It doesn't seem to have slowed down Gott either, as he continues to peddle his "theory" far and wide.
I read Gott's letter back in 1993 and could never make up my mind whether it was:
1. An April Fools' joke at the wrong time of year.
2. A hoax, like Sokal's.
3. Genuine in that both the author and the editors believed it made sense.
Apparently 3 was the correct answer.
James,
ReplyDeleteThe "Doomsday Argument" gets a good bit of ink in peer-reviewed journals (including one paper by me "Sorting out the Anti-Doomsday Arguments"). High level journals in physics, statistics, philosophy, and the general science biggies Nature and Science, and even one book. It's still not refuted. Just try to find a peer-reviewed paper is a slam-dunk refutation.
It proven to be a bit of a tar-baby, many have thought it easy to conquer, but none have succeeded in publishing the definitive refutation in a peer-reviewed journal.
I am reluctant to accept blog articles for this one. If you think its so obviously wrong, then I suggest you try to write that definitive refutation. And, good luck on that one :)
I guess I should try to refute your blog.
I assume you admit that the doomsday argument works for the urn model, since everyone else does.
The urn model: You have 2 identical-looking urns, one with a million consecutively numbered balls (1, 2, 3, ...) and one with 1. You pick an urn at random and take a random ball from it; the ball reads 7. You are pretty sure you picked the urn with 10 balls, huh?
So just tack on "But it works for the urn model." to the end of your blog. QED, you got nothing.
You can call it Bayesian, frequentist, Baptist, or Freudian. You still have to explain why it works for the urn model and not for the human situation.
Tom,
Your presentation of "the urn model" appears garbled (million balls or 10?). Can you clarify what you really mean? Note that in your case you have defined the prior (assuming you use "pick at random" to mean equal prior probability for each urn) and there is no possible paradox.
The whole thing turns on the prior!
I did make an error; the correct urn model is: 2 urns, 1 with 1,000,000 balls, 1 with 10 balls. You pick an urn at random and you get a ball with 7 on it. Therefore, it's a good guess you picked the urn with 10 balls. I do mean an equal prior.
I don't think the urn model itself leads to a paradox. But applying the same reasoning as in the Doomsday argument does lead to a paradox - a counter-intuitive result, on the face of it, at least.
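The corrected urn model is indeed a routine Bayes calculation, which is presumably why everyone accepts it; a minimal sketch, assuming the equal prior specified above:

```python
# Bayes' rule for the urn model: equal prior on the two urns; the chance of
# drawing ball number 7 is 1/10 from the 10-ball urn and 1/1,000,000 from
# the million-ball urn.
prior = 0.5
like_small, like_big = 1 / 10, 1 / 1_000_000

post_small = prior * like_small / (prior * like_small + prior * like_big)
print(f"P(10-ball urn | ball 7) = {post_small:.6f}")  # ~0.999990
```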
I'm a little slow on the uptake.
Your point is that the urn model is different from the Doomsday Argument. The urn model has a known prior and the Doomsday argument does not.
The proponents of the Doomsday argument counter by saying that there must be a probability shift if you learn that you are the 60 billionth human. In other words, if you had a prior, then you would have to shift it if you learned this fact. (Actually it's not really a counter, more of staking out a new position.)
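That claimed shift is also mechanical to illustrate; a minimal sketch with a made-up two-point prior over N (the hypothesis values are arbitrary):

```python
# The proponents' "probability shift": update any prior over the total number
# of humans N on the observation of one's birth rank r, with likelihood 1/N
# for each hypothesis N >= r (you are equally likely to be any one of the N).
r = 60e9
prior = {200e9: 0.5, 200e12: 0.5}  # illustrative "doom soon" vs "doom late"

unnorm = {N: p / N for N, p in prior.items()}  # prior times likelihood 1/N
total = sum(unnorm.values())
for N, p in unnorm.items():
    print(f"N = {N:.0e}: prior 0.5 -> posterior {p / total:.4f}")
# The small-N ("doom soon") hypothesis jumps from 0.5 to ~0.999.
```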