Saturday, July 30, 2005

Emissions Scenarios

One major source of uncertainty in trying to predict how the climate is likely to change in the future is uncertainty in the future emissions of greenhouse gases. The IPCC generated a set of emissions scenarios (the SRES), published in 2001 (apparently no update is planned to coincide with AR4 - I don't know the reasoning behind this decision). The scenarios cover a wide range of possible future trajectories for demographic and economic change over the next 100 years, based on four main storylines (the A1 storyline has three main variants, making six scenario groups in all), each of which has roughly ten variants. The main "marker" scenarios describe emissions ranging from about 5 to 30 Gt of carbon per year in CO2 by 2100 (compare to current emissions of about 7 Gt per year).

In principle, we can simply convolve the uncertainty in scenarios with the uncertainty in climate response to generate a probabilistic forecast for future climate (and Wigley and Raper did exactly this back in 2001, in a paper in Science). However, I don't think it is as simple in practice as they indicated. There are in my opinion two major difficulties with trying to generate probabilistic forecasts using the scenarios.
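
To make the idea concrete, here is a minimal sketch in Python of what such a convolution amounts to. The numbers and the one-line "response function" are entirely made up for illustration and are not W&R's model; only the overall structure (sample a scenario, sample a sensitivity, read off a warming) is the point.

```python
import random

# Hypothetical illustration only: a handful of invented scenario emission
# levels (GtC/yr by 2100) and a toy climate response. None of these numbers
# comes from the SRES or from Wigley and Raper.
scenario_emissions = {"low": 5.0, "mid": 15.0, "high": 30.0}

def sample_warming():
    emissions = random.choice(list(scenario_emissions.values()))  # equal weights, as W&R assumed
    sensitivity = random.triangular(1.5, 4.5, 3.0)                # spanning the IPCC "likely" range
    return sensitivity * (0.3 + 0.03 * emissions)                 # made-up response function

draws = sorted(sample_warming() for _ in range(100000))
lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
print(f"toy 90% interval: {lo:.1f}C to {hi:.1f}C")
```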

The first problem is that the scenarios were all explicitly and deliberately predicated on no specific action being taken to reduce GHG emissions (although some storylines include emissions reductions as a side-effect of other environmental policies). E.g., from the Technical Summary:

As required by the Terms of Reference however, none of the scenarios in the set includes any future policies that explicitly address additional climate change initiatives, although GHG emissions are directly affected by non-climate change policies designed for a wide range of other purposes.

Now, although it may sometimes seem like not much is happening yet, in fact it seems clear to me that there is at least a modest groundswell of action in roughly the right direction. The Kyoto Protocol has been ratified, and even the USA is taking some steps towards mitigation (especially at the local level, if not the federal one). OK, it is not much so far, but give it a decade or two and it seems likely to me that the IPCC scenarios will prove to be an overall pessimistic view of where we are heading. So, a "forecast" based on them is at best a forecast of where we might have been going if we took no action at all to reduce emissions, not a forecast of where we are actually heading as of today. Of course more action could be taken (some will always argue that more action is needed, whatever is actually done), but any forecast should surely be based on a realistic assessment of how much action has already been taken and what is in the pipeline. I don't know why the scenarios were designed to exclude any mitigation effects, and it makes the decision to not update them seem rather unfortunate, but perhaps someone will have a good explanation for this.

The second problem is perhaps a little more subtle, and it is that there is no obviously correct way to attach probabilities to the individual scenarios. The scenarios are essentially presented as possibilities, with no assessment of their probabilities. Obviously they are intended to cover a range of reasonable possibilities (they would have little use otherwise) but they are quite explicitly NOT assigned any sort of relative likelihoods:
Preferences for the scenarios presented here vary among users. No judgment is offered in this report as to the preference for any of the scenarios and they are not assigned probabilities of occurrence.
This is reinforced again in the summary, even more explicitly (with my bold emphasis):
Probabilities or likelihoods are not assigned to individual SRES scenarios. None of the SRES scenarios represents an estimate of a central tendency for all driving forces and emissions, such as the mean or median, and none should be interpreted as such. The statistics associated with the frequency distributions of SRES scenarios do not represent the likelihood of their occurrence. The writing team cautions against constructing a central, "best-estimate" scenario from the SRES scenarios; instead it recommends use of the SRES scenarios as they are.
W&R take this as a green light to assign equal probability to each scenario. They say: "We therefore assume all 35 emissions scenarios to be equally likely" (my emphasis). It is not entirely clear from the wording in their paper whether they believe this is a reasonable deduction from the SRES giving no preference, or whether they are acknowledging that it is an entirely personal judgement on their part. If the former, they are clearly wrong; if the latter, they are certainly entitled to make this assumption if they believe it is appropriate, but it must be clearly flagged as their own opinion, and the resulting forecast should not be presented as if it were an objective one based on the IPCC TAR, as Karl and Trenberth did in referring to the W&R paper in Science, 2003:
In the absence of climate mitigation policies, the 90% probability interval for warming from 1990 to 2100 is 1.7C to 4.9C.
There is no such thing as "the 90% probability interval for warming". There is W&R's 90% probability interval, based on their beliefs about the scenarios (and their beliefs about climate sensitivity, although IMO adopting the IPCC's "likely" range of 1.5-4.5C is relatively uncontroversial). Karl and Trenberth may also endorse this estimate if they agree with W&R's assumption. But it is not comparable to (say) the 90% confidence interval for the number of heads in 100 tosses of a fair coin.

As I've mentioned before, there is also a subjective element in the estimate of climate sensitivity - but at least this is based on a considerable amount of evidence and has been the subject of substantial debate amongst climate scientists. In contrast, the probabilistic distribution over future scenarios seems little more than a wild guess made purely on the grounds of convenience.

So, it's one thing to poke holes in research, but that leaves the question of what climate scientists should do instead. In my view, it seems unwise (and is certainly unnecessary) for them to try to make socioeconomic forecasts when economists themselves are not prepared to do so. The obvious alternative, suggested in the SRES itself, is simply to use the different scenarios (perhaps just the marker scenarios) and present the results from each one separately. That means giving a number of probabilistic forecasts, each of which is conditional on an emissions scenario. This also makes it simple for climate scientists to make up their own scenarios which include mitigation, and to demonstrate the effects that mitigation could have. This is, of course, exactly the sort of information that policy-makers should find useful. After all, future emissions are at least in part a controllable input, and what we all want to know is to what extent we should try to control them.
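
As a sketch of what that alternative looks like (reusing the same invented numbers and toy response function as the earlier snippet, purely for illustration), one would simply report a separate interval conditional on each scenario rather than mixing them into a single distribution:

```python
import random

scenario_emissions = {"low": 5.0, "mid": 15.0, "high": 30.0}   # invented GtC/yr values, not SRES figures

for name, emissions in scenario_emissions.items():
    # only uncertainty in the climate response is sampled; the scenario itself is held fixed
    draws = sorted(random.triangular(1.5, 4.5, 3.0) * (0.3 + 0.03 * emissions)
                   for _ in range(100000))
    lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
    print(f"{name}: 90% interval {lo:.1f}C to {hi:.1f}C, conditional on that scenario")
```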

Anyone who wants to make a probabilistic estimate of climate change based on their estimates of emissions and climate response is welcome to do so, of course. However, even though one can reasonably use the IPCC's estimate of climate sensitivity as the basis for one input to the calculation, there is no such consensus interpretation of the scenarios, so the assignment of probabilistic weights is entirely the researcher's own responsibility. A deliberately ignorant "uniform prior" might be defensible from a Bayesian viewpoint, but the results will be highly dependent on this assumption and I for one have little confidence in them.
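
To illustrate how much hangs on that choice of weights, the same toy calculation (again, entirely invented numbers) can be run with different, equally arbitrary priors over the scenarios; the headline interval moves around accordingly:

```python
import random

scenario_emissions = {"low": 5.0, "mid": 15.0, "high": 30.0}   # invented GtC/yr values

def toy_interval(weights, n=100000):
    names = list(scenario_emissions)
    draws = []
    for _ in range(n):
        name = random.choices(names, weights=[weights[k] for k in names])[0]
        sensitivity = random.triangular(1.5, 4.5, 3.0)
        draws.append(sensitivity * (0.3 + 0.03 * scenario_emissions[name]))
    draws.sort()
    return round(draws[int(0.05 * n)], 1), round(draws[int(0.95 * n)], 1)

print(toy_interval({"low": 1, "mid": 1, "high": 1}))   # "ignorant" uniform prior
print(toy_interval({"low": 3, "mid": 1, "high": 0}))   # a prior that expects strong mitigation
```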

8 comments:

William M. Connolley said...

Nice post. Thanks.

Anonymous said...

Yes, nice post. Of course, an alternative to all of this divining might be to ask "What future do we want? (and who is we?)" and then ask "How do we get there?" Have a look at this discussion:

R.A. Pielke Jr., Sarewitz, D. and R. Byerly Jr., 2000: Decision Making and the Future of Nature: Understanding and Using Predictions. Chapter 18 in Sarewitz, D., R.A. Pielke Jr., and R. Byerly Jr. (eds.), Prediction: Science, Decision Making, and the Future of Nature. Island Press: Washington, DC.
http://sciencepolicy.colorado.edu/admin/publication_files/resourse-73-2000.06.pdf

James Annan said...

Roger,

I would certainly agree that there is quite a disconnect between what we predict (eg global or perhaps regional average temperature, largely cos we can) and what an end-user might want at the local (even national) level if they wanted to adapt.

OTOH it is clear that many warnings of environmental risk or damage do not rely on explicit predictions, at least in the somewhat limited sense in which you use the term. That seems to be a closer analogue to where we are in climate prediction, compared to (say) the skill and detail of weather forecasting.

Anyway, any rational decision-making process must rely on some estimate of how different decisions will influence the likely outcomes (ie predictions). I don't see this as an alternative to divining, just a generalisation of it.

markbahner said...

James,

I was looking for other information when I came across this post.

You write, "I don't know why the scenarios were designed to exclude any mitigation effects, and it makes the decision to not update them seem rather unfortunate, but perhaps someone will have a good explanation."

I have a very good explanation. But it seems hard to believe you can't puzzle through to what the explanation is. So on the "Teach a man to fish..." theory, I'll give you a hand.

1) Would you agree that there is no reason to update the projections from IPCC TAR (issued in 2001) if they still (as of 2006) represent a scientifically valid assessment of future events?

2) Do you think the projections in the IPCC TAR were a scientifically valid assessment of future events when it was issued in 2001?

HTH,
Mark

James Annan said...

1. Yes, it's obvious enough that this would be true, but it is also obvious that this is a hypothetical question. The issue is whether the updating would lead to significant enough changes to make it worth the effort, not whether new scenarios would be exactly the same as the existing ones (which no-one would argue to be the case).

2. I'm not an economist, but clearly there was a significant bunch of economists (ie SRES authors) who thought so at the time (or, more precisely, thought so at the time they were writing the scenarios, which was probably several years prior to 2001). It still seems rather limited to predicate all scenarios on the hypothesis that no mitigation takes place, but if that's what they were asked to do...

markbahner said...

1. OK, so the questions are whether the IPCC TAR analysis was scientifically valid in 2001, and whether the changes have been significant enough since then.

2. Let's not try to guess what the authors were thinking. I was asking you whether *you* think the IPCC TAR projections were scientifically valid when they were published in 2001.

For example, are the IPCC TAR projections falsifiable? And is it necessary for the projections to be falsifiable in order to be scientifically valid?

James Annan said...

Well, the first part of 1 is repeated in 2, so I will start with its second half:

1. I do think that the scenarios should be updated, although as long as they are carefully used and interpreted, they are probably good enough. That puts quite a responsibility on climate scientists who are not experts in economics though.

2. Yes, I think they were valid - not necessarily perfect - at that time (or at least, when written - clearly some aspects are bound to date rapidly, and the SRES were actually produced several years prior to 2001).

For your followup about falsifiability, I will do some of my own "teaching a man to fish" by asking you whether a forecast that says "70% chance of rain" tomorrow is falsifiable. Or even "my next coin toss has a 50% chance of turning up heads".

markbahner said...

You write, regarding the IPCC TAR projections, "Yes, I think they were (scientifically) valid..."

So does that mean you think they were falsifiable? Or do you think falsifiability is not necessary for a projection to be scientifically valid?

If you think the IPCC TAR projections were falsifiable, what future events do you think would show the IPCC TAR projections to be false? Note that the IPCC TAR *explicitly* states that, "Scenarios are images of the future or alternative futures. They are neither predictions nor forecasts." How can any future event falsify the IPCC TAR projections if they are explicitly NOT "predictions" or "forecasts"?

You then write, "For your followup about falsifiability, I will do some of my own 'teaching a man to fish' by asking you whether a forecast that says '70% chance of rain' tomorrow is falsifiable. Or even 'my next coin toss has a 50% chance of turning up heads'."

Here are my answers to your questions:

1) If a weather forecaster says there is a 70% chance of rain tomorrow, and it doesn't rain tomorrow, I would say there is a 70% chance the forecaster was wrong. So I'd say such a weather forecast is falsifiable.

2) I don't think a prediction of 50/50 chance of landing on heads for a single coin toss is falsifiable. However, a prediction of a 50/50 chance of landing on heads for 1000 coin tosses is falsifiable.
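
To make the 1000-toss case concrete, here is a rough sketch (Python, using the standard normal approximation to the binomial, with my own toy thresholds): a fair coin should give roughly 500 ± 32 heads about 95% of the time, so a count far outside that range would falsify the 50/50 claim at that level.

```python
import math

n, p = 1000, 0.5
mean = n * p
sd = math.sqrt(n * p * (1 - p))          # about 15.8 heads for a fair coin

def consistent_with_fair_coin(heads, z=2.0):
    # normal approximation to the binomial: flag counts more than
    # z standard deviations away from the expected 500
    return abs(heads - mean) <= z * sd

print(consistent_with_fair_coin(512))    # True  - within ordinary chance variation
print(consistent_with_fair_coin(600))    # False - would count against the 50/50 claim
```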

But I hope we can get back to my questions, since you were the one who asked for opinions on why the IPCC TAR scenarios weren't being updated in AR4. In your opinion, what future events would falsify the IPCC TAR projections…especially given the fact that they are explicitly NOT predictions or forecasts of the future?

Oh, that leads me to another question...have you read any of the "Limits to Growth" series of books (e.g., “Beyond the Limits,” or “Limits to Growth: The 30-year Update”)?