This got briefly mentioned at a workshop I attended last week, and has been splashed around the internet a bit, so I might as well add my ¥2. Marty Weitzman has been circulating early versions of his manuscript widely over a number of months (the latest can be found here), but despite several attempts I've not yet managed to convince him of my POV.

His paper, and main result ("we're all doomed"), has two basic components. The first is that under some assumptions about how one learns probabilistically about future hazards such as climate change, the pdf (eg of climate sensitivity S) will inevitably have a "long tail". By "long tail", he does not mean it will necessarily assign a particularly high probability to extreme cases such as P(S>6C), but rather that the pdf will naturally follow a shape that "only" decreases as a polynomial function in S (say 1/S^2 or 1/S^3), rather than, say, the exponential decay of a Gaussian (e^{-cS^2}) or other friendlier functions. The second component of his argument is the observation that under any reasonably risk-averse attitude, as one considers increasingly high impacts, the loss in utility arising from such impacts increases more rapidly than their probability decreases (based on the long-tailed pdf), giving a divergent sum, an infinite expected utility loss, and a conclusion that Something Must Be Done. Or perhaps, that we are all doomed whatever we do.

But there are IMO a few problems with this work. I've had a long, interesting but ultimately fruitless exchange of emails with Marty, in which I failed to persuade him of my points. He's a famous economist and I'm not, so maybe I am wrong. But it's my blog, so here are my opinions, for better or worse.

Firstly, I disagree with how he has characterised the nature of the uncertainty in the system. He models it as if S (climate sensitivity) is a sample from an unknown distribution, and the only way in which we can learn about S is to draw samples from this distribution in order to infer its shape. AIUI this is fundamentally incompatible with all of the Bayesian work that has been done, in which S is viewed as a constant about which we learn in various ways, with the pdf being simply an expression of our current uncertainty over S, rather than anything intrinsic to S itself. This may seem like a semantic detail at first, but in fact it appears to be fundamental to his analysis. To appreciate the distinction, note that under his viewpoint, our estimate of S will converge to a pdf of finite width which must cover all of the recent individual estimates, whereas I (and I believe all climate scientists, even those with whom I have had strong disagreements recently) would say that our pdf of S will in principle converge towards a point estimate, especially if we were to carefully operationalise the definition and then go and do a suitable experiment on the whole earth system, which is a plausible experiment at least in thought (though we may in practice lose interest in estimating S).

I do wonder if it might be possible to rescue the mathematical content of what he has done via some reinterpretation of his framework, but he doesn't seem to accept (or perhaps understand) my complaint in the first place, so I can't see that happening (at least not in his manuscript). I actually don't have any fundamental objection to distributions with polynomial tails; in fact this paper presents such a distribution, and I had already realised when writing it that in principle it leads to an unbounded expected loss even for a rather tame quadratic cost function (although I truncated the pdf at 20C for pragmatic reasons). My criticism of much previously published work on estimating climate sensitivity is not that their estimates have long tails, but that the probabilities in these long tails are unreasonably high due to the pathological decisions which have been taken along the way.

Next, we have the utility function. I'm not convinced that it makes sense to extrapolate some convenient (perhaps also theoretically and/or empirically justifiable) functional form right down to the singularity at 0 (yes, complete destruction of the entire world economy). But not being an economist, I don't have any particular grounds to criticise, and it would be rash to express too much scepticism based on nothing more than my own ignorance of these matters.

Notably, although he talks in terms of climate sensitivity, there is nothing in Marty's maths that depends specifically on a doubling of CO2. A rise to 1.4x (which we have already passed) will cause half the climate change of a doubling, but that would still give an unbounded expected loss in utility (half of infinity...). By the same argument, a rise even of 1ppm is untenable. Come to think of it, putting on the wrong sort of hat would become a matter of global importance (albedo effect).

When the Dismal Theorem was mentioned at the workshop, a wag in the audience (who had, I'm sure, already seen the full manuscript) described it as not so much an economic disaster as an economics disaster. If it becomes widely endorsed by the economics community (Richard Tol is already on record as enthusiastically endorsing it, and there's a long list of acknowledgements to people who presumably did not all say it was bunk), it may come to have more significance in determining to what extent modern economic theory can (or cannot) be used to credibly inform decision making under uncertainty than in actually informing those decisions. Time will tell.
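To make the divergence concrete, here is a quick numerical sketch (mine, not Marty's; the tail shapes and the quadratic loss are illustrative, unnormalised stand-ins): with a pdf whose tail falls off like 1/S^2, the truncated expected loss under a quadratic cost grows without bound as the truncation point rises, whereas a Gaussian tail converges.

```python
import math

def expected_loss(pdf, loss, upper, n=100_000):
    """Crude trapezoidal estimate of the integral of pdf(s)*loss(s) over [1, upper]."""
    a, b = 1.0, upper
    h = (b - a) / n
    total = 0.5 * (pdf(a) * loss(a) + pdf(b) * loss(b))
    for i in range(1, n):
        s = a + i * h
        total += pdf(s) * loss(s)
    return total * h

poly_tail  = lambda s: s**-2                  # Weitzman-style polynomial tail
gauss_tail = lambda s: math.exp(-0.5 * s * s) # Gaussian-style tail
quad_loss  = lambda s: s * s                  # a "tame" quadratic cost function

for cap in (10, 100, 1000):
    # Polynomial tail: integrand is s^-2 * s^2 = 1, so the truncated
    # expected loss equals cap - 1 and grows without limit.
    # Gaussian tail: the same sum settles down to a finite value.
    print(cap, expected_loss(poly_tail, quad_loss, cap),
               expected_loss(gauss_tail, quad_loss, cap))
```

The truncation at 20C I mentioned above plays exactly the role of `cap` here: without some such bound, the polynomial-tail sum just keeps growing.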


## 17 comments:

I think there is a finite chance of me dying in the next 24 hours. This, to me, is as catastrophic as the destruction of the entire world economy is to the world. Does this mean that it so screws up my utility function that economic theory cannot help me make any decisions?

This seems like deciding it is impossible to answer the question of whether I should stay in bed all day to minimise the catastrophic risk.

Interesting post, I'm just reading the paper so won't comment further yet. On a related note, an ex-head of BAS (Dougal Goodman) once said that he had studied the distribution of losses in the oil industry and concluded that it had a "fat tail" and integrated to infinity. Dunno if that kind of stuff ever got published.

The climate system of this planet is exquisitely sensitive to small perturbations, on a scale of centuries to millennia.

A recent set of model runs showed that 10 ppm could make all the difference, between, say, the Greenland ice sheet melting or not.

(Hope this helps.)

I wrote up wot I think: http://scienceblogs.com/stoat/2007/10/weitzmans_dismal_theorem.php Which in summary is: it's irrelevant. Do take a look and point out my errors.

And I disagree with your previous commenter.

Belette, I think I disagree on almost all points except for your conclusion :-) I've replied over there.

David, ref please.

Chris, I think the general principle is that we can (and some say, should) choose as a society to behave in various ways even if as individuals we might wish to make different choices. It is clear that people often behave in "irrational" ways, but maybe we should still try to make "rational" policy. Also, some may have descendants to think about, and may attach utility to their well-being.

Ah well, some agreement at least. But you didn't pick me up on asserting that there is precious little evidence for the shape of the tail of the PDF. Given that, how do you justify integrating over it, when you could just as easily get a different shape by different assumptions?

James,

You should know by now not to believe what others say about me.

I like the Dismal Theorem because it is the first new thing in the economics of climate change since 1996. I don't know what it means, though.

The Dismal Theorem relies on Bayesian learning. Essentially, this means that observations are the only source of knowledge. At present, we cannot exclude really high climate sensitivities. I am not sure that we can design an experiment that would exclude those. That said, Weitzman assumes that we add observations one year at a time, but of course we also observe more and more of the past.

On the utility function, good point. All the standard utility functions go to minus infinity if income goes to zero (or earlier), but this is based on extrapolation, not on observation.

Anyway, what Weitzman really means to say is that uncertainty is central in climate policy, and that cost-benefit analysis needs to be applied with care.
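The point about standard utility functions going to minus infinity is easy to see with the usual CRRA form U(c) = c^(1-η)/(1-η). A minimal sketch (mine; η = 2 is just an illustrative choice of risk-aversion parameter):

```python
import math

def crra_utility(c, eta=2.0):
    """Constant relative risk aversion utility; unbounded below as c -> 0 when eta >= 1."""
    if eta == 1.0:
        return math.log(c)           # the eta = 1 limiting case
    return c**(1 - eta) / (1 - eta)

for c in (1.0, 0.1, 0.01, 0.001):
    print(c, crra_utility(c))        # utility: -1, -10, -100, -1000 ...
```

Every tenfold drop in consumption makes utility ten times worse, with no floor, which is exactly the extrapolation-to-the-singularity being questioned above.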

James>...some convenient (perhaps also theoretically and/or empirically justifiable) functional form right down to the singularity at 0 (yes, complete destruction of the entire world economy).

If we are going to consider that kind of catastrophic singularity, then the Vinge type of positive singularity where utility goes to infinity should be considered as well.

In that case, it can be argued that no resources should be spent to mitigate AGW if doing so would delay this positive singularity...

Belette, I agree it's hard to do anything too quantitative, but I think it would be sensible to consider a range of "reasonable" analyses and see what happens. The general argument for a polynomial tail seems OK to me, in the cases where Marty's characterisation of the uncertainty is relevant. However I don't think it is relevant in this case (and perhaps nowhere else outside of maths textbooks, either).

Richard:

"Anyway, what Weitzman really means"...

I've learnt not to believe what others say about him :-)

But on a more substantive point, my biggest complaint is that Marty's conception of "Bayesian learning" is too limited to have any direct relevance to the real world.

If the only way we could learn about S was to directly measure S on other planets, and then use the resulting distribution of S as our estimated distribution for the value of S on Earth (completely ignoring any special knowledge about the Earth and how its S may be expected to differ from S on other planets), his approach would be valid. But in fact we have a single planet, with a single S (not a distribution), and we learn directly about it by a variety of imperfect observations of S, not by drawing other samples from some distribution from which S is also presumed to be drawn.

To belabour the point: climate models are intended as approximations to the Earth. They are not random draws from some distribution, from which the Earth is also presumed to be independently drawn. In the latter case, there would be no purpose in comparing the models to the Earth.
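A toy version of the contrast (my own sketch, with made-up numbers): if S is a fixed constant observed with noise, the posterior sd shrinks towards zero as observations accumulate; if instead each "observation" is a fresh draw from an unknown distribution, the predictive sd for the next draw can never fall below the population spread.

```python
import math

OBS_SD = 1.0   # assumed observational noise (doubling, in the sampling view, as the population sd)

def posterior_sd(n, prior_sd=10.0):
    """Posterior sd for a fixed constant S after n noisy observations (normal-normal update)."""
    precision = 1.0 / prior_sd**2 + n / OBS_SD**2
    return 1.0 / math.sqrt(precision)

# Fixed-constant view: uncertainty about S itself converges towards a point.
for n in (1, 100, 10000):
    print(n, posterior_sd(n))

# Sampling view: even once the distribution's mean is pinned down exactly,
# the predictive sd for the next draw stays bounded below by the population sd.
predictive_sd = math.sqrt(posterior_sd(10000)**2 + OBS_SD**2)
print(predictive_sd)   # stays above OBS_SD = 1.0
```

In Marty's framework the pdf of S behaves like the second case: it converges to a distribution of finite width, not to a point.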

I think perhaps the only safe conclusion from the Dismal Theorem is that in situations where it applies, economics has no use. I'm not sure if it is really that new, after all Sting wrote 20 years ago:

"Your economic theory makes no sense"

but the margin on the record sleeve was too small to contain the proof.

:-)

Steve, well that sort of singularity is a rather different beast, but OTOH both could be argued to be silly extrapolations of relationships outside their domains of applicability :-)

Belette --- Disagree with what?

That it helps?

James Annan --- The link is in a comment to an older thread on Real Climate. I'll keep looking for it when I can.

In the meantime, Dr. James Hansen et al. make much the same point, although more qualitatively, IMHO:

Climate change and trace gases

James Annan --- This wasn't the paper I wanted, but I fear it will have to do:

GCM-Ice Model: LGM to Holocene

Wrong again. That is the correct paper in my previous post.

DB - with all of it. The climate is certainly not exquisitely sensitive to small perturbations or it wouldn't be stable, as it is. But rather than conduct our conversation at someone's table, if you want to continue I suggest the global change list. In fact the conversation you want is already in the archives there.

Belette --- Read the two papers I linked in previous posts. Not what I consider to be stable and I stand by my full statement, not your rendition of it.

I did go to Global Change, but I didn't see an appropriate thread...

David, I certainly (and I think William) thought you were making a general statement about the current (and near future) climate state, not referring to specific times in the past where thresholds may have existed. In any case, that is all rather irrelevant to Weitzman's manuscript.

(You can start new threads too on globalchange if you don't think anyone has discussed your specific point adequately.)

James,

You may be right. If we can learn faster than Bayesian, Weitzman's Dismal Theorem may fall apart.

One way to accelerate learning is to reduce sulphur emissions at a faster rate.

You may be interested to learn that your blog is apparently behind the Great Fire Wall. At least, I could not get in from Beijing.

Richard

Post a comment