This got briefly mentioned at a workshop I attended last week, and has been splashed around the internet a bit, so I might as well add my ¥2. Marty Weitzman has been circulating early versions of his manuscript widely over a number of months (the latest one can be found
here), but despite several attempts I've not yet managed to convince him of my POV.
His paper, and its main result ("we're all dooomed"), has two basic components. The first is that under some assumptions about how one learns probabilistically about future hazards such as climate change, the pdf (eg of climate sensitivity S) will inevitably have a "long tail". By "long tail", he does not mean it will necessarily assign a particularly high probability to extreme cases such as P(S>6C), but rather that the pdf will naturally follow a shape that "only" decreases as a polynomial function in S (say 1/S^2 or 1/S^3), rather than, say, the exponential decay of a Gaussian (e^(-cS^2)) or other friendlier functions. The second component of his argument is the observation that under any reasonably risk-averse attitude, as one considers increasingly high impacts, the loss in utility arising from such impacts increases more rapidly than their probability decreases (based on the long-tailed pdf), giving a divergent sum, an infinite expected utility loss, and a conclusion that Something Must Be Done. Or perhaps, that we are all doomed whatever we do.
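To make the divergence point concrete, here is a quick numerical sketch. The tail exponent, the quadratic loss and the constants are entirely my own toy choices rather than anything taken from the manuscript, but they show the shape of the argument: with a polynomial tail the expected loss just keeps growing as you extend the integration, whereas with a Gaussian tail it settles down.

```python
# A toy numerical illustration of the divergence argument -- my own made-up
# tail exponent, loss function and constants, not anything from the manuscript.
# Compare the expected loss E[loss(S)] under a pdf with a polynomial (1/S^3)
# tail against one with a Gaussian tail, as the integration limit is raised.

import numpy as np
from scipy.integrate import quad

def poly_tail_pdf(s, k=3.0):
    """Unnormalised pdf decaying like 1/S^k for S >= 1."""
    return s ** (-k) if s >= 1.0 else 0.0

def gaussian_tail_pdf(s, c=0.5):
    """Unnormalised pdf decaying like exp(-c*S^2)."""
    return np.exp(-c * s ** 2)

def loss(s):
    """A rather tame quadratic loss in the warming S."""
    return s ** 2

for upper in (10, 100, 1000, 10000):
    poly, _ = quad(lambda s: loss(s) * poly_tail_pdf(s), 1.0, upper)
    gauss, _ = quad(lambda s: loss(s) * gaussian_tail_pdf(s), 1.0, upper)
    print(f"upper limit {upper:>6}: poly tail {poly:7.2f}   gaussian tail {gauss:7.4f}")

# The polynomial-tail column keeps growing (logarithmically here) as the limit
# is raised, while the Gaussian-tail column converges almost immediately.
```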
But there are IMO a few problems with this work. I've had a long, interesting but ultimately fruitless exchange of emails with Marty, in which I failed to persuade him of my points. He's a famous economist and I'm not, so maybe I am wrong. But it's my blog, so here are my opinions, for better or worse.
Firstly, I disagree with how he has characterised the nature of the uncertainty in the system. He models it as if S (climate sensitivity) is a sample from an unknown distribution, and the only way in which we can learn about S is to draw samples from this distribution in order to infer its shape. AIUI this is fundamentally incompatible with all of the Bayesian work that has been done, in which S is viewed as a constant about which we learn in various ways, with the pdf being simply an expression of our current uncertainty over S, rather than anything intrinsic to S itself. This may seem like a semantic detail at first but in fact it appears to be fundamental to his analysis. To appreciate the distinction, note that under his viewpoint, our estimate of S will converge to a pdf of finite width which must cover all of the recent individual estimates, whereas I (and I believe all climate scientists, even those with whom I have had strong disagreements recently) would say that our pdf of S will in principle converge towards a point estimate, especially if we were to carefully operationalise the definition and then go and do a suitable experiment on the whole earth system, which is at least a plausible thought experiment (even if in practice we might lose interest in estimating S).
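As a toy illustration of the viewpoint I'm defending (the normal-normal model, the "true" value of 3C and the observation noise are all invented for the sake of the example, not anyone's published estimate), here is how a standard Bayesian estimate of a fixed but unknown S behaves as evidence accumulates:

```python
# A toy sketch of the Bayesian viewpoint: S is treated as a fixed but unknown
# constant, and the pdf expresses our uncertainty about it. As evidence
# accumulates the posterior width shrinks towards zero, rather than settling
# on some distribution of fixed finite width. All numbers here are invented.

import numpy as np

rng = np.random.default_rng(0)

true_S = 3.0      # the fixed (but unknown to us) value, in C per doubling
obs_sd = 2.0      # noise on each hypothetical observation

mu, var = 3.0, 10.0 ** 2   # broad prior on S

for n in (1, 10, 100, 1000):
    obs = true_S + obs_sd * rng.standard_normal(n)
    # standard conjugate update of a N(mu, var) prior with n noisy observations
    post_var = 1.0 / (1.0 / var + n / obs_sd ** 2)
    post_mu = post_var * (mu / var + obs.sum() / obs_sd ** 2)
    print(f"n = {n:>4}: posterior mean {post_mu:5.2f}, posterior sd {np.sqrt(post_var):.3f}")

# The posterior sd heads towards zero: in principle the estimate converges
# towards a point, not towards a pdf of finite width.
```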
I do wonder if it might be possible to rescue the mathematical content of what he has done via some reinterpretation of his framework, but he doesn't seem to accept (or perhaps understand) my complaint in the first place, so I can't see that happening (at least not in his manuscript). I actually don't have any fundamental objection to distributions with polynomial tails; in fact
this paper presents such a distribution, and I had already realised when writing it that in principle it leads to an unbounded loss even under a rather tame quadratic cost function (although I truncated the pdf at 20C for pragmatic reasons; the arithmetic is sketched below). My criticism of much previously published work on estimating climate sensitivity is not that their estimates have long tails, but that the probabilities in these long tails are unreasonably high due to the pathological decisions which have been taken along the way.
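For what it's worth, the arithmetic behind that remark is simple. With an illustrative 1/S^3 tail (normalising constant c) and a quadratic cost aS^2 (neither of which is exactly the form used in that paper), the expected loss from some S_0 up to a cutoff S_max is

$$\int_{S_0}^{S_{\max}} a S^2 \cdot \frac{c}{S^3}\, dS \;=\; a c \,\ln\!\left(\frac{S_{\max}}{S_0}\right),$$

which grows without bound as S_max is pushed towards infinity, but is a perfectly ordinary finite number once the pdf is chopped off at S_max = 20C.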
Next, we have the utility function. I'm not convinced that it makes sense to extrapolate some convenient (perhaps also theoretically and/or empirically justifiable) functional form right down to the singularity at 0 (yes, complete destruction of the entire world economy). But not being an economist, I don't have any particular grounds to criticise and it would be rash to express too much scepticism based on nothing more than my own ignorance of these matters.
Notably, although he talks in terms of climate sensitivity, there is nothing in Marty's maths that depends specifically on a
doubling of CO2. A rise to 1.4x (which we have already passed) will cause half the climate change, since forcing scales roughly with the log of the concentration and 1.4 is about the square root of 2, but that would still give an unbounded expected loss in utility (half of infinity...). By the same argument, a rise even of 1ppm is untenable. Come to think of it, putting on the wrong sort of hat would become a matter of global importance (albedo effect).
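A back-of-envelope check of that remark, using nothing more than the standard approximation that radiative forcing scales with the log of the concentration ratio (the specific ratios below are just examples I've picked):

```python
# Back-of-envelope check: what fraction of a doubling's forcing does each
# concentration ratio give, under the standard logarithmic approximation?

import math

for ratio in (2.0, 1.4, 281.0 / 280.0):   # a doubling, a 1.4x rise, one extra ppm
    fraction = math.log(ratio) / math.log(2.0)
    print(f"concentration ratio {ratio:7.4f} -> {fraction:.4f} of a doubling's forcing")

# Every positive fraction of an infinite expected loss is still infinite, so
# under the Dismal Theorem's logic all of these are equally "untenable".
```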
When the Dismal Theorem was mentioned at the workshop, a wag in the audience (who had, I'm sure, already seen the full manuscript) described it as not so much an economic disaster as an economics disaster. If it becomes widely endorsed by the economics community (Richard Tol is already on record as enthusiastically endorsing it, and there's a long list of acknowledgements to people who presumably did not all say it was bunk), it may come to have more significance in determining to what extent modern economic theory can (or cannot) credibly inform decision making under uncertainty than in actually informing those decisions. Time will tell.