In this post, I'm going to be exclusively discussing the problem of estimating climate sensitivity (equilibrium surface air temperature response of the atmosphere-ocean system to doubled CO2), which has been a widely-studied subject for many years. It's not quite as valuable to policy-makers as a direct estimate of how the climate will change over time, but climate sensitivity is a good target for study since it is a simple easy-to-understand concept that directly relates to more complex projections, and is also accessible to a wide range of models from simple to complex. By sidestepping the issue of the large uncertainty in future emissions, we can also focus on the geophysical rather than socio-economic uncertainties, the latter of which are at least partially controllable.
The IPCC says that climate sensitivity is likely to be in the range of 1.5-4.5C, an estimate which has not changed for many years. Originally this was based on very limited evidence, but subsequent research appears to confirm that this early estimate was a pretty good (if lucky) one. However, there is still a 3 degree difference between the high and low ends of this range (which are themselves not hard limits), and moreover the use of a "likely range" is a rather woolly description of the uncertainty. So it would be nice to have a better answer.
Firstly, we need to understand where the uncertainty comes from. The basic radiative response to increased CO2 is well understood (although not by me, in any great detail). A doubling of CO2 will raise the surface temperature by about 1C, other things being equal. But other things aren't equal. The warmer temperature will almost certainly result in changes to the amount and distribution of water vapour in the atmosphere - which is itself a greenhouse gas - and this will probably add substantially to the warming due to CO2 alone. I'm not going to go into the details here (partly because I am no expert on the subject) but the IPCC take on water vapour can be found here. Cloud distributions are also likely to change, and they can have both a cooling effect (due to increased albedo) and a warming one (insulation, especially at night). The overall balance depends on several details such as the height at which they form. There are numerous competing hypotheses, but limited direct evidence for how these things are likely to change (the IPCC summary is here).
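Since the crux here is simple arithmetic, a minimal sketch may help. In the standard linear-feedback picture, a direct warming dT0 is amplified to dT0/(1-f) by a net feedback factor f; the Python below uses purely illustrative values of f (my own round numbers, not taken from any particular study) to show how modest uncertainty in the feedbacks maps onto a wide range of sensitivities.

```python
# Linear-feedback amplification: a minimal sketch with illustrative numbers.
def equilibrium_warming(dT0, f):
    """Warming after feedbacks: dT = dT0 / (1 - f), valid for f < 1."""
    return dT0 / (1.0 - f)

dT0 = 1.0  # C, the direct (no-feedback) response to doubled CO2

for f in (0.0, 0.4, 0.6, 0.75):
    print(f"feedback factor {f:.2f} -> equilibrium warming "
          f"{equilibrium_warming(dT0, f):.1f} C")
# f = 0.4 gives 1.7C and f = 0.75 gives 4.0C: modest uncertainty in the
# feedbacks maps onto a wide range of sensitivities, and the amplification
# grows without bound as f approaches 1.
```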
One approach to estimating climate sensitivity does not attempt to evaluate these individual uncertainties. Instead, we can look at historical changes in the climate, and relate them to the corresponding changes in radiative forcing. In principle, this gives us a measure of the climate sensitivity directly, without having to determine the detailed interaction of every effect. Two early attempts using recent decades/century can be found here and here, and I've discussed a recent attempt here (see also here for more analysis of that work). Unfortunately, since we do not know the net forcing accurately enough (especially the cooling due to sulphate aerosols), this approach does not produce a useful constraint. Another approach involves looking at the observed cooling due to volcanic eruptions. This seems potentially more useful to me, as the forcing is large and fairly well observed, but natural interannual variability means that the volcanic cooling is somewhat obscured in the observed record. Also, determining the relationship between the (short-term) response to a volcano and the longer-term response to increased CO2 relies on at least a moderately sophisticated model, which introduces another layer of uncertainty. For example, the coupled atmosphere-ocean MIROC3.2 model appears to respond less strongly to volcanoes than the simpler MAGICC model of Wigley and Raper, given the same climate sensitivity (which can be tuned in MAGICC to emulate different GCMs). At still longer time scales, we can look to the paleoclimate record. The Last Glacial Maximum had a much colder climate, lower greenhouse gases and much larger ice sheets than at present. A direct calculation of the implications for climate sensitivity is presented here. Further back in time, we find evidence of a much warmer climate in the Cretaceous, with an estimated CO2 level of perhaps 2000ppm.
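To make the arithmetic behind this concrete, here is a hedged sketch of the historical approach, using round illustrative numbers of my own (not values from the studies cited above) and ignoring ocean heat uptake for simplicity. The implied sensitivity is S = F_2x * dT / dF_net, and the sketch shows how the poorly-known aerosol cooling feeds through into a long upper tail:

```python
# Hedged sketch of the "historical" estimate: S = F_2x * dT / dF_net.
# All numbers are illustrative round figures; ocean heat uptake is
# ignored, which biases S high.
import random

F_2X = 3.7      # W/m^2, canonical forcing for doubled CO2
DT_OBS = 0.6    # C, rough 20th-century warming
F_GHG = 2.5     # W/m^2, illustrative greenhouse-gas forcing

random.seed(0)
samples = []
for _ in range(100_000):
    f_aerosol = random.uniform(-2.6, -0.5)   # the poorly-known cooling term
    f_net = F_GHG + f_aerosol
    if f_net > 0.1:          # a near-zero net forcing gives no constraint
        samples.append(F_2X * DT_OBS / f_net)

samples.sort()
print(f"median S  ~ {samples[len(samples) // 2]:.1f} C")
print(f"95th %ile ~ {samples[int(0.95 * len(samples))]:.1f} C")
# The median lands near the canonical range, but as the net forcing
# approaches zero the implied sensitivity blows up - which is why the
# recent record alone gives such a weak upper bound.
```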
These estimates all have their limitations, due to uncertainties in both the forcing and the global temperature change. But they do present a broadly consistent picture in the context of the uncertainty of the IPCC estimate. It's important to realise in particular that the "sensitivity could be >10C" result that one gets by looking at the last few decades alone does not in any way contradict or invalidate other methods that suggest a substantially lower upper bound. It is simply using a particular subset of the available information and saying that this information alone does not limit climate sensitivity. All of the various estimates point to most likely values within, or at least close to, the standard IPCC range.
Another approach to estimating climate sensitivity is to take a more direct, model-based route, in order to evaluate all those uncertain feedbacks I mentioned above. What this entails is building a model, based on our understanding of the basic physics of the climate system, and then cranking up the CO2 to see what happens. The IPCC collates results from all the major climate research centres, and last year Gerald Meehl stated that the most recent model results lie in the range of about 2-4.4C. There may be some differences from previous results, which some might consider important, but it is a small change: very much evolution rather than revolution.
Unfortunately, these models all contain a vast number of parameters which control the model behaviour, and which are not directly constrained by theory or observations. Examples include such things as "the average speed at which ice crystals fall" or "droplet size in clouds". Some of them describe rather abstract concepts, and it is not clear how they could be measured even in principle, let alone in practice. The range of plausible values often extends through orders of magnitude, and changing the parameters can have a significant effect on the model behaviour. Until recently, parameters were generally adjusted by hand until the model behaviour looked "good" according to the rather subjective opinion of the model builders - a time-consuming and tedious process that leaves open the possibility that other "good" parameter values would result in a model that simulates our current climate reasonably well, but which has a substantially higher or lower climate sensitivity.
If the uncertainty in the basic physics can be quantified, then this can in principle provide a constraint on climate sensitivity. Although many of the parameters cannot be directly determined, it is clear that the overall interaction between the physical processes actually generates our current climate. So if, by appropriately sampling the parameter uncertainty, we can generate an ensemble of models that simulates our climate reasonably, then this should give the answer. This is a technically complex task due to the computational demands, but recently several groups (including me) have been working on methods for generating probabilistic estimates in this manner.
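As a concrete (and heavily simplified) illustration of the idea, the sketch below replaces a GCM with an invented two-parameter toy: parameters are drawn from a prior, each model version is weighted by how well it simulates an "observed" metric, and the weighted spread of sensitivities is the output. Everything here - the toy model, the flat priors, the tolerance - is an assumption purely for illustration.

```python
# Perturbed-parameter ensemble, caricatured: sample uncertain parameters,
# weight each model version by its fit to "observations", and read off the
# resulting distribution of sensitivity. The toy_model stand-in is invented.
import math
import random

random.seed(1)

def toy_model(p1, p2):
    """One 'model version': returns (simulated climate metric, sensitivity).
    Both relationships are made up purely for illustration."""
    metric = 10.0 * p1 - 2.0 * p2
    sensitivity = 1.0 / max(0.05, 1.0 - 0.6 * p1 - 0.3 * p2)
    return metric, sensitivity

OBS, OBS_ERR = 5.0, 1.0   # "observed" metric and the tolerance on the fit

ensemble = []
for _ in range(50_000):
    p1, p2 = random.random(), random.random()   # flat priors on [0, 1]
    metric, sens = toy_model(p1, p2)
    weight = math.exp(-0.5 * ((metric - OBS) / OBS_ERR) ** 2)
    ensemble.append((sens, weight))

total = sum(w for _, w in ensemble)
mean_sens = sum(s * w for s, w in ensemble) / total
print(f"weighted mean sensitivity of the plausible ensemble: {mean_sens:.1f}")
# Note that the answer depends directly on OBS_ERR and on the choice of
# priors - exactly the two difficulties discussed in the next paragraph.
```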
There is an important weasel word in the above paragraph - the qualifier "reasonably" in "simulates our climate reasonably" is not easy to quantify. This is an issue which has only recently shot to prominence, so I will attempt to explain it in a bit more detail. No model can hope to simulate our climate perfectly, so it is natural to ask: how bad does a model have to get before we decide that it is no longer credible? Although I've stated that in simple enough terms, it turns out to be a rather difficult question to answer, and it has not (to my knowledge) been addressed anywhere in the climate prediction literature to date. This suggests that there could be significant limitations in much previous probabilistic climate estimation work, which may therefore need re-evaluation. Our first steps towards directly accounting for model error in climate prediction are described in a couple of as-yet unpublished manuscripts here and here (I'm sure there are imperfections in this work, and don't claim it is anything more than a small step in roughly the right direction). Jonty Rougier is doing a lot of interesting work in this area, and his recent papers are well worth a read - this manuscript provides a nice summary of the theoretical foundations. Another difficult question is how we decide on a prior distribution for the parameters, especially since their meaning is not always clear, and the expert's "prior" opinion is liable to be influenced by his knowledge of model output at different parameter values (i.e. it is not really a "prior", and there is a risk of double-counting evidence). Both of these factors can substantially affect the results.
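For what it's worth, one hedged caricature of "accounting for model error" in the toy above is simply to add a structural-discrepancy variance to the observational one, so that even the best parameter choice is not expected to match the observations exactly. This is only a cartoon of the framework in the manuscripts mentioned above, and the discrepancy scale is a pure assumption:

```python
# Cartoon of a model-discrepancy term: the tolerance on the model-data
# mismatch combines observational error and an assumed structural-error
# scale in quadrature. Widening it admits more parameter settings, which
# broadens the posterior and leans harder on the (subjective) prior.
import math

OBS_ERR = 1.0       # observational uncertainty, as in the toy above
DISCREPANCY = 2.0   # assumed structural-error scale: a pure guess

def weight(metric, obs=5.0):
    tol_sq = OBS_ERR ** 2 + DISCREPANCY ** 2   # add variances in quadrature
    return math.exp(-0.5 * (metric - obs) ** 2 / tol_sq)

print(weight(5.0), weight(8.0))   # a 3-unit miss is now only mildly penalised
```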
One crucial take-home message is that there is no way to generate a truly objective estimate of climate sensitivity, and there never will be. The strength of the various strands of evidence needs to be assessed and weighed up by experts, and even an ostensibly objective method necessarily relies on subjective decisions regarding its inputs. This is partly what motivated my recent ideas about "betting on climate change", since a prediction/betting market provides a transparent and open mechanism for aggregating subjective (albeit informed) opinions. It may be a little too far out in left field to catch on seriously, though.
In recent work using ensembles of GCM simulations, Murphy et al generated a distribution for climate sensitivity which was a bit higher than the IPCC TAR. Stainforth et al used a different approach and found that a substantial proportion of their ensemble had a sensitivity of about 10C, but due to the uncertainties mentioned above, they did not attempt to give a direct probabilistic interpretation of their results (unfortunately, it seems that some people didn't quite grasp this point). One particular aspect of the Stainforth et al work, which I was very surprised by, was their decision to assess only the annually-averaged climate of their models rather than look at the seasonal climate (as Murphy et al did). The seasonal cycle does not provide a brilliant constraint on climate sensitivity, but I strongly suspect it would substantially narrow their extremely broad distribution. In my cynical moments, I wonder if they didn't look at seasonal variation precisely because it would have eliminated their "exciting" results :-) I understand a more careful analysis will soon be forthcoming. However, it will still inevitably involve some subjectivity in deciding how poor a seasonal cycle is considered acceptable, and it will be interesting to see how they address this issue.
My bottom line is that there are no data which suggest that climate sensitivity is greater than about 6C, and there are also no models with a sensitivity of 6C or more which have been shown to provide credible simulations of the present and past climate. Perhaps the best candidate for such a model is the "high sensitivity" version of the MIROC3.2 model. This has a sensitivity of 6.3C and gives a reasonable simulation of the present-day climate; however, it provides a very poor hindcast of recent temperature changes (particularly the post-Pinatubo cooling, see the manuscript referred to above). The "lower sensitivity" (4.0C) version of MIROC3.2 generates a much better hindcast. In contrast, the ultra-sensitive models of Stainforth et al have not been shown to simulate even the seasonal cycle adequately. I wouldn't be shocked if the IPCC estimate of the range of plausible climate sensitivity creeps up a little, but as far as I can see none of the strands of evidence points to a value significantly above 5C as being likely, even if such a value cannot be categorically ruled out.
Disclaimer: my recent (as yet unpublished) work points towards an upper limit of about 6C (and a most likely value rather lower than this). But I honestly did not go looking for such a result, and much higher values would certainly have been more readily publishable :-)
And what prospects are there for the future? This post is already far too long, so I'll save that for another time.
5 comments:
Thanks for this article.
Re "it has not (to my knowledge) been addressed anywhere in the climate prediction literature to date."
Does this Hadley Centre technical note by David Sexton and James Murphy count?
Obviously the 'climate prediction index' discussed will be a work in progress for a long time to come.
Chris Randles
Maybe my wording was a little unclear. They certainly tried to address it, but through a somewhat unsatisfactory method :-) The crucial point is that the discrepancy between model and observational data is due both to errors in the model and to errors in the observations. Their "CPI" aims to account only for the latter. I know they are working hard to improve things.
James,
I am a little confused by your terminology. I would understand sensitivity to be in units of degrees (C or F depending on your location) per something (ppm of CO2, say). You talk about it in terms of degrees C with no qualification. Thus, you haven't described sensitivity at all, but the product of sensitivity times some forcing. The IPCC range (if I understand it correctly) is based on assumptions about economic growth and emissions intensity (all those messy socio-economic things you seem to be trying to avoid in this post because they are a huge source of uncertainty). Can you provide an estimate of sensitivity rather than an estimate of temperature change which implicitly includes projections of forcing variables (emissions, gas concentrations, whatever...)?
And for some reason I missed your practically first comment that the assumption was for a doubling of CO2! Sorry.
Anonymous #3 and #4,
:-)
You are right that "sensitivity" could mean pretty much anything, but unless otherwise defined, it is usually safe to assume in climate prediction that it means "sensitivity of equilibrium global temperature to a doubling of CO2".