I've been on holiday recently - yes, I flew, the first time I've gone on a foreign non-work-related trip in about a decade - so the first I heard about this was a few days ago when I bumped into someone I knew on the way home (can't go far in Boulder without meeting a climate scientist, it seems).
On the basis of "if you can't think of anything nice to say"...this ought to be a short post, but I don't have time for that, so you'll have to make do with a long one :-) RC has beaten me to it with the wonderfully diplomatic observation that the underlying idea has all been known for 20+ years but this version is "probably the most succinct and accessible treatment of the subject to date". R+B's basic point is that if "feedback" f is considered to be Gaussian, then sensitivity = l0/(1-f) is going to be skewed, which seems fair enough. Where I part company with them is when they claim that this gives rise to some fundamental and substantial difficulty in generating more precise estimates of climate sensitivity, and also that it explains the apparent lack of progress in improving on the long-standing 1979 Charney report estimate of 1.5-4.5C at only the "likely" level. (Stoat's complaints also seem pertinent: f cannot really be a true Gaussian, unless one is willing to seriously consider large negative sensitivity, and even though a Gaussian is a widespread and often reasonable distribution, it is hard to find any theoretical or practical basis for a Gaussian abruptly truncated at 1).
Let's just recap on a small subset of the things we have observed since 1979. Most obviously, there has been about 30 years of rather steady warming, just as expected by the models at the time including most famously the Hansen prediction. The overall ocean warming is also observable, but probably a little lower than models simulate. There have been 2 major volcanic eruptions, following each of which there was a clearly observable but rather short-term cooling, exactly characteristic of a mid-range sensitivity. IIRC the magnitude and duration of the second cooling (Pinatubo) was also explicitly predicted between the eruption and the peak of the cooling itself. Perhaps most interestingly (since it does not depend either on climate models, or uncertainties in ocean heat uptake), a satellite was sent up in 1983 to measure the radiation balance of the planet, and its data since then (as analysed by Forster and Gregory last year) are in line with a low sensitivity. Of course there is a lot more we've learnt besides that, and also substantial improvements in model resolution and realism - I've just focussed on some of the things that should most directly impact on estimates of climate sensitivity.
There seems to be a rather odd debate going on amongst some climate scientists about whether new observations will reduce uncertainty (I'll have more to say on this when a particular paper appears). I say it's rather odd, because I thought it was well known (it is certainly true, but true and well known are not always close cousins) that new observations are always expected to reduce uncertainty, and although it is possible that they may not do so on particular occasions, it is always a surprise when this occurs. However, the vast bulk of observations (not just limited to those I have mentioned) have been singularly unexceptional, matching mid-range expectations with an uncanny accuracy (I'm ignoring stuff like ice sheets which have no direct relevance to estimating S). I fully accept that some of these observations may not be an especially stringent test of sensitivity, but they do all point the same way and it is hard to find any surprises at all in there. Remember that one of the biggest apparent surprises, the lack of warming in the satellite atmospheric record, was effectively resolved in favour of the models.
I can think of several alternative theories as to why the uncertainty in the IPCC estimate has not reduced, which R+B do not touch upon. Most obviously, I've explained (here and here) that the probabilistic methods generally used to generate these long-tailed pdfs are essentially pathological in their use of a uniform prior (under the erroneous belief that this represents "ignorance"), together with only looking at one small subset of the pertinent data at a time, and therefore do not give results that can credibly represent the opinions of informed scientists. While I think this effect probably dominates, there may also be a sociological effect, with this range acting as some sort of anchoring device which people are reluctant to change despite its rather shaky origins. Ramping up uncertainty (at least at the high end) is a handy lever for those who argue for strong mitigation, and it would also be naive to ignore the fact that scientists working in this area benefit from its prominence.
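For anyone who hasn't seen the prior-dependence problem laid out numerically, here is a toy sketch (entirely my own illustrative numbers, not any published analysis): suppose an observation constrains the feedback f = 1 - λ0/S roughly as a Gaussian, and compare what a prior that is uniform in S does to the high tail versus a prior that is uniform in f.

```python
import numpy as np

# Toy numbers only; nothing here is taken from a real analysis.
lambda0 = 1.2                                  # K, assumed no-feedback sensitivity
S = np.linspace(0.5, 20.0, 20000)              # sensitivity grid, K
f = 1.0 - lambda0 / S

# A single "observation" constraining the feedback, roughly Gaussian in f:
likelihood = np.exp(-0.5 * ((f - 0.6) / 0.1) ** 2)

priors = {"uniform in S": np.ones_like(S),
          "uniform in f": lambda0 / S**2}      # |df/dS|, the change-of-variables factor

for name, prior in priors.items():
    post = prior * likelihood
    post /= post.sum()                         # normalise on the (uniform) grid
    print("%-13s prior:  P(S > 6 K) = %.2f" % (name, post[S > 6.0].sum()))
# The same likelihood leaves a much fatter high tail under the uniform-in-S prior.
```

Nothing about the physics changes between the two lines of output; only the supposedly "ignorant" prior does.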
So in summary, Roe and Baker have now attempted to justify the pdfs that have been generated as not only reasonable, but inevitable on theoretical grounds. However, they have made no attempt to address the issues we have raised. It is notable that in their lengthy list of acknowledgees, there are many eminent and worthy scientists thanked but not one who I recognise as having actually published any work in this area - apart from Myles Allen who appears to have been a referee. The real question IMO is not whether a fat tail is inevitable, but rather whether it is possible to generate a pdf which credibly attempts to take account of the points I have raised, and still maintains any such significant tail. That challenge has remained on the table for a year and a half now, and no-one has taken it up...
Allen and Frame certainly aren't going to try, because they have gleefully seized upon Roe and Baker to justify a bait-and-switch. After failing to make any progress themselves, they have conveniently decided that it isn't such an interesting question after all, so let's not take too close a look at what has gone on thankyouverymuch. There's a couple of bizarre curve-balls in their comment: they start off by saying that the uncertainty isn't surprising because 4C warmer will be a "different planet". But nothing in Roe and Baker, or anywhere else in the relevant literature, depends on such nonlinearity in the sensitivity. In fact some of the published estimates are explicitly phrased in terms of the classical definition of a sensitivity as the derivative dT/dF (and everyone else uses this implicitly anyway). That is, the uncertainty being discussed is in our estimate of that gradient, rather than the nonlinearity as this line is extrapolated out to +3.7W/m2. So I can only interpret that comment as them preparing the ground for when people eventually do get around to agreeing that the linear sensitivity is actually close to 0.75K/W/m2 (~3C for doubled CO2) so they can wring their hands and say "ooh, it might get worse in the future". Of course the reason that people use the linear sensitivity to directly derive the 2xCO2 value is that all the evidence available, including probably every plausible model integration ever performed, indicates a modest amount of nonlinearity in that range. Allen and Frame's comment doesn't even reach the level of a hypothesis, as they have not presented any testable idea about how a significant nonlinearity could arise. There are other details I'm not very impressed by - the wording seems a bit naive and imprecise but I bet they would just say they were dumbing down for the audience so it would only seem petty to nitpick. Anyway they have at last admitted elsewhere (if grudgingly) that a uniform prior does not actually represent "no knowledge" so I see no need to pursue them further.
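For the record, the arithmetic behind that parenthetical number is nothing more than the linear extrapolation of the gradient out to the canonical 2xCO2 forcing:

```python
# Linear extrapolation of the gradient definition of sensitivity:
dT_dF = 0.75    # K per W/m2
F_2x  = 3.7     # canonical forcing for doubled CO2, W/m2
print("2xCO2 sensitivity: %.1f K" % (dT_dF * F_2x))   # ~2.8 K, i.e. roughly 3C
```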
I don't think it is clearly spelled out in the R+B article itself, but in the comments to Stoat's post, Roe expounds his belief that sensitivity is intrinsically not a number, but a pdf. This seems to indicate rather muddled and confused thinking to me. True aleatory uncertainty is hard to find in the real world, and I've seen no plausible argument that the climate system exhibits it to any significant extent. We may on occasion choose to separate out some part of the uncertainty and treat it as effectively aleatory and therefore irreducible (eg consider the weather v climate distinction: if asked for the temperature on Christmas day 50 years from now, an honest answer will always be a rather broad pdf, however precisely we come to understand the forced response which will influence the shape and position of the pdf). But this is not a fundamental distinction, just a practical one - with a sufficiently accurate model and observations, the temperature really could in principle be predicted accurately. For concreteness in the current context, let's consider the following definition of S, which is based on Morgan and Keith's 1995 survey: S is defined to be the observed global temperature rise, measured as a 30-year average, 200 years after the CO2 level is doubled from the pre-industrial level and then held fixed (with other anthropogenic forcings unchanged). This experiment is just about within mankind's grasp if we chose to do it and weren't too bothered about killing a few people along the way, so it seems to be an operationally meaningful definition (at least as a thought experiment) that would clearly result in a specific number. Repeating this experiment several times in a model with different initial conditions will give very slightly different answers, but their range will be negligibly small (< 0.1C) compared to the uncertainties in S that we are presently stuck with. The only large initial-condition-related uncertainty in model calculations of sensitivity is the well-known numerical artefact that causes some slab ocean runs to go cold, and that has no physically realistic basis. So I don't see Roe's point here as a substantive one.
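To make the operational definition a little more concrete, here is a toy energy-balance version of the experiment; the model, parameters and noise level are all illustrative assumptions of mine, not anyone's actual GCM setup.

```python
import numpy as np

rng = np.random.default_rng(0)

C     = 8.0    # assumed effective heat capacity, W yr m^-2 K^-1
lam   = 1.3    # assumed feedback parameter, W m^-2 K^-1 (equilibrium S = 3.7/1.3 ~ 2.8 K)
F2x   = 3.7    # forcing from doubled CO2, W m^-2
noise = 0.2    # assumed "weather" noise in the yearly heat budget, W m^-2

def observed_S(T0):
    """Double CO2, hold it fixed, run 200 years, return the final 30-year mean warming."""
    T, series = T0, []
    for year in range(200):
        # one-year Euler step of C dT/dt = F2x - lam*T + noise
        T += (F2x - lam * T + noise * rng.normal()) / C
        series.append(T)
    return np.mean(series[-30:])

runs = [observed_S(T0) for T0 in np.linspace(-0.2, 0.2, 10)]   # varied initial states
print("mean 'observed' S:                %.2f K" % np.mean(runs))
print("spread across initial conditions: %.2f K" % (max(runs) - min(runs)))
```

The spread across initial conditions comes out at the level of the toy model's internal variability, far smaller than the structural uncertainty that the 1.5-4.5C range is describing, which is the sense in which S behaves as a number rather than a pdf.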
On the basis of "if you can't think of anything nice to say"...this ought to be a short post, but I don't have time for that, so you'll have to make do with a long one :-) RC has beaten me to it with the wonderfully diplomatic observation that the underlying idea has all been known for 20+ years but this version is "probably the most succinct and accessible treatment of the subject to date". R+B's basic point is that if "feedback" f is considered to be Gaussian, then sensitivity = l0/(1-f) is going to be skewed, which seems fair enough. Where I part company with them is when they claim that this gives rise to some fundamental and substantial difficulty in generating more precise estimates of climate sensitivity, and also that it explains the apparent lack of progress in improving on the long-standing 1979 Charney report estimate of 1.5-4.5C at only the "likely" level. (Stoat's complaints also seem pertinent: f cannot really be a true Gaussian, unless one is willing to seriously consider large negative sensitivity, and even though a Gaussian is a widespread and often reasonable distribution, it is hard to find any theoretical or practical basis for a Gaussian abruptly truncated at 1).
Let's just recap on a small subset of the things we have observed since 1979. Most obviously, there has been about 30 years of rather steady warming, just as expected by the models at the time including most famously the Hansen prediction. The overall ocean warming is also observable, but probably a little lower than models simulate. There have been 2 major volcanic eruptions, following each of which there was a clearly observable but rather short-term cooling, exactly characteristic of a mid-range sensitivity. IIRC the magnitude and duration of the second cooling (Pinatubo) was also explicitly predicted between the eruption and the peak of the cooling itself. Perhaps most interestingly (since it does not depend either on climate models, or uncertainties in ocean heat uptake), a satellite was sent up in 1983 to measure the radiation balance of the planet, and its data since then (as analysed by Forster and Gregory last year) are in line with a low sensitivity. Of course there is a lot more we've learnt besides that, and also substantial improvements in model resolution and realism - I've just focussed some of the things that should most directly impact on estimates of climate sensitivity.
There seems to be a rather odd debate going on amongst some climate scientists about whether new observations will reduce uncertainty (I'll have more to say on this when a particular paper appears). I say it's rather odd, because I thought it was well known (it is certainly true, but true and well known are not always close cousins) that new observations are always expected to reduce uncertainty, and although it is possible that they may not do so on particular occasions, is always a surprise when this occurs. However, the vast bulk of observations (not just limited to those I have mentioned) have been singularly unexceptional, matching mid-range expectations with an uncanny accuracy (I'm ignoring stuff like ice sheets which have no direct relevance to estimating S). I fully accept that some of these observations are not be an especially stringent test of sensitivity, but they do all point the same way and it is hard to find any surprises at all in there . Remember that one of the biggest apparent surprises, the lack of warming in the satellite atmospheric record, was effectively resolved in favour of the models.
I can think of several alternative theories as to why the uncertainty in the IPCC estimate has not reduced, which R+B do not touch upon. Most obviously, I've explained (here and here) that the probabilistic methods generally used to generate these long-tailed pdfs are essentially pathological in their use of a uniform prior (under the erroneous belief that this represents "ignorance"), together with only looking at one small subset of the pertinent data at a time, and therefore do not give results that can credibly represent the opinions of informed scientists. While I think this effect probably dominates, there may also be the sociological effect of this range as some sort of anchoring device, which people are reluctant to change despite its rather shaky origins. Ramping up uncertainty (at least at the high end) is a handy lever for those who argue for strong mitigation, and it would also be naive to ignore the fact that scientists working in this area benefit from its prominence.
So in summary, Roe and Baker have now attempted to justify the pdfs that have been generated as not only reasonable, but inevitable on theoretical grounds. However, they have made no attempt to address the issues we have raised. It is notable that in their lengthy list of acknowledgees, there are many eminent and worthy scientists thanked but not one who I recognise as having actually published any work in this area - apart from Myles Allen who appears to have been a referee. The real question IMO is not whether a fat tail is inevitable, but rather whether it is possible to generate a pdf which credibly attempts to take account of the points I have raised, and still maintains any such significant tail. That challenge has remained on the table for a year and a half now, and no-one has taken it up...
Allen and Frame certainly aren't going to try, because they have gleefully seized upon Roe and Baker to justify a bait-and-switch. After failing to make any progress themselves, they have conveniently decided that it isn't such an interesting question after all, so let's not take too close a look at what has gone on thankyouverymuch. There's a couple of bizarre curve-balls in their comment: they start off by saying that the uncertainty isn't surprising because 4C warmer will be a "different planet". But nothing in Roe and Baker, or anywhere else in the relevant literature, depends on such nonlinearity in the sensitivity. In fact some of the published estimates are explicitly phrased in terms of the classical definition of a sensitivity as the derivative dT/dF (and everyone else uses this implicitly anyway). That is, the uncertainty being discussed is in our estimate of that gradient, rather than the nonlinearity as this line is extrapolated out to +3.7W/m2. So I can only interpret that comment as them preparing the ground for when people eventually do get around to agreeing that the linear sensitivity is actually close to 0.75K/W/m2 (~3C for doubled CO2) so they can wring their hands and say "ooh, it might get worse in the future". Of course the reason that people use the linear sensitivity to directly derive the 2xCO2 value is that all the evidence available, including probably every plausible model integration ever performed, indicates a modest amount of nonlinearity in that range. Allen and Frame's comment doesn't even reach the level of a hypothesis, as they have not presented any testable idea about how a significant nonlinearity could arise. There are other details I'm not very impressed by - the wording seems a bit naive and imprecise but I bet they would just say they were dumbing down for the audience so it would only seem petty to nitpick. Anyway they have at last admitted elsewhere (if grudgingly) that a uniform prior does not actually represent "no knowledge" so I see no need to pursue them further.
I don't think it is clearly expounded the R+B article itself, but in the comments to Stoat's post, Roe expounds his belief that sensitivity is intrinsically not a number, but a pdf. This seems to indicate rather muddled and confused thinking to me. True aleatory uncertainty is hard to find in the real world, and I've seen no plausible argument that the climate system exhibits it to any significant extent. We may on occasion choose to separate out some part of the uncertainty and treat it as effectively aleatory and therefore irreducible (eg consider the weather v climate distinction: if asked for the temperature on Christmas day 50 years from now, an honest answer will always be a rather broad pdf, however precisely we come to understand the forced response which will influence the shape and position of the pdf). But this is not a fundamental distinction, just a practical one - with a sufficiently accurate model and observations, the temperature really could in principle be predicted accurately. For concreteness in the current context, let's consider the following definition of S, which is based on Morgan and Keith's 1995 survey: S is defined to be the observed global temperature rise, measured as a 30-year average, 200 years after the CO2 level is doubled from the pre-industrial level and then held fixed (with other anthropogenic forcings unchanged). This experiment is just about within mankind's grasp if we chose to do it and weren't too bothered about killing a few people along the way, so it seems to be an operationally meaningful definition (at least as a thought experiment) that would clearly result in a specific number. Repeating this experiment several times in a model with different initial conditions will give very slightly different answers, but their range will be negligibly small (< 0.1C) compared to the uncertainties in S that we are presently stuck with. The only large initial-condition-related uncertainty in model calculations of sensitivity is the well-known numerical artefact that causes some slab ocean runs to go cold, and that has no physically realistic basis. So I don't see Roe's point here to be a substantive one.
24 comments:
Maybe a bit naive but what if:
Temperature rise -----> methane release from tundra / swamps -----> higher temperature -----> different ocean circulation -----> less CO2 uptake -----> hotter -----> change in circulation -----> ice sheets melt -----> hotter, and so on... Could that not be a nonlinearity for certain T's? (I guess you could change the order)
Or is that ruled out by proxidata?
Just a thought like...
So... I'm curious as to how much the number-vs-pdf bit matters. I'm inclined to think that if people can't even agree on that, then the issue is very unclear, and we're back to mediaeval philosophers arguing about infinity without having first defined it.
Magnus,
All that could happen, but it's not relevant to climate sensitivity which is specifically defined as the temp change while holding the forcing fixed at 2xCO2 (or temp change per unit forcing for the "gradient" definition). In the 200 year experiment I mention, we might need to devise a method for absorbing methane if the natural environment starts to emit it in larger quantities.
Belette,
Mostly I think it just indicates muddled thinking. But if it is also used as a justification for some substantial "irreducible uncertainty" then it risks damaging the scientific process (and certainly damaging the credibility of this niche of climate science). I think it is also fundamental to Marty Weitzman's argument.
James, will you be writing to Science about this?
Oh no, it had never crossed my mind. It's not really wrong so much as irrelevant. I'm also optimistic that it will be largely ignored, since from what I've heard, other people working in this area (apart from Allen and Frame, of course) are equally surprised that it was considered publishable.
Ahh, never mind me :) I’ll blame it on the early morning!
What about an increase in solar input ------> more water vapour -----> less ice -----> more forest? Too small a change I guess... but at least now I'm in the right ballpark?
Gha! Never mind I just realised the mistake... should learn to think before I write :)
James wrote:
"It's not really wrong so much as irrelevant."
Isn't it fair to say that their conclusion - that a long tail is, by definition, inevitable - is wrong, in your opinion; and from a policy maker's perspective, isn't that important?
James wrote:
"I'm also optimistic that it will be largely ignored"
Maybe by the specialists, but it's certainly generated a lot of publicity, presumably because it got into Science.
Well, there is not really any advice for policymakers, just some analysis of what others have published. I don't think it so clearly wrong as to be worth commenting on officially - and I also know there would be no chance of getting such a comment published, given how hard it is even in cases when there are clearly identifiable errors with significant consequences.
You are not going to agree with this but I want to see why.
Your experiment involves 200 years of good climate observations versus having 30 years of good climate data. Obviously we cannot just take that 200 to 30 year ratio for at least 3 reasons.
1. There are considerable diminishing returns through the 200 year period.
2. We have thousands of years of paleo climate data but there is more uncertainty about the conditions at that time.
3. Your 200 years are specifically designed to measure sensitivity, whereas the last 30 years have seen CO2 levels rising at similar rates to the previous 30 years, which makes it difficult to apportion the temperature rise among the successive CO2 rises. To measure sensitivity accurately you need a sharp change, and we haven't had that.
If I think 3 is by far the most important reason, and completely unscientifically, off the top of my head, wildly estimate that 10% of the reduction in range (from 3K down to 0.1K) is due to the information in the last 30 years, then we would expect a 0.3K reduction in the range.
In fact, if the lower end has increased from 1.5K to 2K then we have seen a better reduction in range than my wild estimate would expect.
Why has this all occurred at the low end of the range rather than some at the top end of the range? Well, this is due to the reasons that Roe and Baker have detailed.
Thoughts?
James,
When you wrote that ..."with a sufficiently accurate model and observations, the temperature [Xmas day 50 years from now] really could in principle be predicted accurately," you can't have been serious, can you? What happened to Lorenz (not to mention just plain old stochastic noise)? I realize this isn't entirely relevant to the point about the pdf of f, but it surely confuses things. Comments?
Eric Steig (RC)
I'm guessing that this:
> if we chose to do it and weren't
> too bothered about killing a few
> people along the way
is meant as a reference to thinking about choices like this?:
http://www.sciam.com/article.cfm?articleID=76613503-E7F2-99DF-3E772052740833A2
"... In one setup, the choice was whether or not to push someone onto a railroad track to prevent a runaway train from killing five other people; in another, the choice was whether to flip a switch that would route the train from a track where it could strike five people to another track where it would kill only one...."
Either we kill some people now, in our own lifetimes, or we leave 'the train on the track headed toward the larger crowd' but it hits them after our lifetime and we don't actually see it happen?
Or, of course, a miracle ....
I think (ok, dangerous ground) that there will be one climate sensitivity for this earth in any particular number of years, and I think this is what JA is saying. However, for a different number of years the cs will be different. (Of course given 20 earths we would have 20 answers.) This may be my vision of a Bayesian POV. From that standpoint there is a single correct climate sensitivity; however, it neglects measurement uncertainty. Of course, I could be pushing my luck here.
Chris,
I don't think you got my point. One complaint of the Bayesians is that "climate sensitivity" is a rather hidden parameter (either in the classical derivative form dT/dF, or the alternative of a true equilibrium), so it is hard to make a direct observation of the climate system that can be used to challenge predictions of such a theoretical construct. I'm just trying to side-step that objection by presenting everything in terms of a directly observable (operationally defined) version which is pretty much the same thing (it's not quite identical, but would be just as useful to policymakers). It is also more-or-less what modellers do to calculate the sensitivity of their models.
Hank,
In order to keep at 550ppm someone might need to nuke a few nations...
Eric,
Obviously the model (and initialisation) would have to be essentially perfect :-)
My Lorenz models have always been deterministic, so any uncertainty in their outputs can only ever be epistemic. "Stochastic" inputs, when they are used, are (almost?) always really representing unknown processes, not "random" ones.
But this is perhaps an unnecessary distraction, because the issue is not whether there can ever be any truly "random" effect, but whether such random effects could be large enough to substantially affect the climate sensitivity. I don't believe it, and a vague comment about "chaos" does not support the hypothesis. Models are chaotic too, and show no such effect. In fact, as with Allen and Frame's "intelligent warming", there does not even seem to be a testable hypothesis associated with it.
Eli is a smart bunny, and generally close to the truth. But after 200 years, the climate would be close to equilibrium (for global temp if not details such as ice sheets), unless S is very high. And when talking of "20 earths", what are the proposed differences between these earths? Ie, what is the distribution from which they are taken? If one allows different land fractions, atmospheric composition, orbital parameters etc...of course S will change. If it's just the current state of the atmosphere then I don't believe it.
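To put the chaos point a bit more concretely, here is a minimal Lorenz-63 sketch (my illustration, obviously not a climate model): trajectories started from nearby initial conditions diverge completely, yet their long-term statistics - the analogue of an equilibrium quantity like S - agree closely.

```python
import numpy as np

def lorenz_mean(x0, steps=1_000_000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate Lorenz-63 from (x0, 1.0, 1.05) and return the long-term mean of z."""
    x, y, z = x0, 1.0, 1.05
    zs = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        zs.append(z)
    return np.mean(zs[steps // 10:])          # discard spin-up, average the rest

for x0 in (1.0, 1.0001, 1.001, 2.0):
    print("x0 = %-7g  long-term mean z = %.2f" % (x0, lorenz_mean(x0)))
# The trajectories are completely decorrelated, but the "climate" (the mean) is not.
```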
>"If it's just the current state of the atmosphere then I don't believe it."
So are you saying that the state of the atmosphere cannot affect El Nino cycles even with 50 years to try?
Or are you saying it is only the 30 year global average temperature that is predictable so such cycles will average out down to your 0.1C uncertainty?
or something else?
>"I don't think you got my point."
I realise that I was talking about a completely different point to what you were saying.
>"Or are you saying it is only the 30 year global average temperature that is predictable so such cycles will average out down to your 0.1C uncertainty?"
Yup, that's exactly my point.
CPDN did a whole lot of replicates (same parameters but different initial conditions) in their original experiment, didn't they? I don't recall any presentation of those results - do you know if they found these to give significant differences (other than perhaps the nonphysical crashes)?
For presentation of those results you want Knight et al, PNAS July 07.
I wrote to Sylvia mid July with a few questions. I saw her recently and she said it was still in her in-tray.
William has said he thinks a text file of the parameters and sensitivities should be available. At the time I knew Knight et al was in preparation and thought it might be reasonable for them to withhold it to allow them a stab at this before making the data available. Now it is published I agree it should be available and asked for it. So if anyone else wants to add their voices to a call for this then feel free to ask.
(Sorry about going off on a ranting tangent like that.)
William recently mentioned the known initial-condition-related cold equator models (ocean ice west of Ecuador) that occur with some slabs. We have seen large divergences arising just from initial conditions in this way, but this is known to be unphysical.
I think there are others with quite a bit more than 0.1C difference for a single year. With a 30-year average, the year-by-year differences could easily exceed 0.1C and yet average out to less than 0.1C.
This is only a model rather than reality, and the model may well not have enough natural variability - I don't recall seeing 1998 El Nino-sized variation. (Well, I have seen much larger, but dismissed it as computing error.) But see the paper rather than relying on me.
It might be table 6 of the supplementary information that you want.
Supp info
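As a back-of-envelope check of the averaging point above (toy numbers, nothing to do with the actual CPDN output): even if two runs differ by well over 0.1C in individual years, the difference between their 30-year means is much smaller.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_year = 0.15                     # assumed year-to-year spread between runs, K
run_a = rng.normal(0.0, sigma_year, 30)
run_b = rng.normal(0.0, sigma_year, 30)

print("typical single-year difference:  %.2f K" % np.abs(run_a - run_b).mean())
print("difference of the 30-year means: %.3f K" % abs(run_a.mean() - run_b.mean()))
# For independent years the difference of means scales like sigma*sqrt(2/30) ~ 0.04 K.
```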
Thanks, I hadn't seen that paper. It looks like my assertion was basically right - initial conditions have very little effect, so even if this is treated as an "intrinsic uncertainty" in climate sensitivity, it is not a significant one.
>"let's consider the following definition of S, which is based on Morgan and Keith's 1995 survey: S is defined to be the observed global temperature rise, measured as a 30-year average, 200 years after the CO2 level is doubled from the pre-industrial level and then held fixed (with other anthropogenic forcings unchanged)."
For sensitivity from *now*, presumably this creates a problem, and it would be necessary to hold the GHG levels fixed for 200 years, then double the CO2 level, then keep it fixed for another 200 years. If you didn't do this holding steady first, you would count the committed warming twice.
If we knew what the committed warming was, we would have a much better idea of the sensitivity, and the doubling experiment would be less necessary for narrowing it down.
So if you want to operationally define it, then why not just say stabilise CO2 levels as soon as possible. After 50 years at a stable CO2 level, estimate the equilibrium temperature with the best models at that time and compare to the preindustrial temperature. Finally, adjust for the change in CO2 not being a doubling. There is more uncertainty in doing this than with the full 500+ year method, but after 500 years I doubt we would be interested enough in the sensitivity to want to go to that much trouble to find out.
Any comments on my previous point about the need to shift the focus - that we also have to look at what has to happen in the future before we know what the sensitivity is, as well as your focus on what has already happened - to get some idea of the progress made?
Oh, I would start from pre-industrial, on the basis that we know the p-i temperature well enough to calculate S. Stabilising now and comparing to p-i would work just about as well (smaller signal, uncertainties are relatively larger) but that ignores the possibility of nonlinearity in the response out to 2xCO2 (not that I expect that to be large). Settling on 2xCO2 is largely historical of course...
I also agree we would be able to make a good estimate of S well in advance of the actual equilibration - I think we already have a pretty good estimate!