I've been on holiday recently - yes, I flew, the first time I've gone on a foreign non-work-related trip in about a decade - so the first I heard about this was a few days ago when I bumped into someone I knew on the way home (can't go far in Boulder without meeting a climate scientist, it seems).
On the basis of "if you can't think of anything nice to say"...this ought to be a short post, but I don't have time for that, so you'll have to make do with a long one :-) RC has beaten me to it with the wonderfully diplomatic observation that the underlying idea has been known for 20+ years but this version is "probably the most succinct and accessible treatment of the subject to date". R+B's basic point is that if the "feedback" f is considered to be Gaussian, then the sensitivity S = λ0/(1-f) is going to be skewed, which seems fair enough. Where I part company with them is when they claim that this gives rise to some fundamental and substantial difficulty in generating more precise estimates of climate sensitivity, and also that it explains the apparent lack of progress in improving on the long-standing 1979 Charney report estimate of 1.5-4.5C at only the "likely" level. (Stoat's complaints also seem pertinent: f cannot really be a true Gaussian, unless one is willing to seriously consider large negative sensitivity, and even though a Gaussian is a widespread and often reasonable distribution, it is hard to find any theoretical or practical basis for a Gaussian abruptly truncated at 1.)
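To make the skewness concrete, here is a minimal numerical sketch. The values of λ0 and of the mean and spread of f are purely illustrative choices of mine, not the numbers used by R+B; the point is simply that a symmetric spread in f maps into a distribution of S with a long upper tail, because S blows up as f approaches 1.

```python
import numpy as np

# Minimal illustration of the skewness argument, with made-up numbers:
# a symmetric (Gaussian) spread in the feedback factor f gives a
# right-skewed distribution of sensitivity S = lambda0 / (1 - f).
lambda0 = 1.2                 # no-feedback response to 2xCO2 (deg C), illustrative
f_mean, f_std = 0.65, 0.13    # illustrative mean and spread of f

rng = np.random.default_rng(0)
f = rng.normal(f_mean, f_std, 1_000_000)
f = f[f < 1.0]                # crude truncation, since S diverges as f -> 1

S = lambda0 / (1.0 - f)
print(np.percentile(S, [5, 50, 95]))   # the upper tail stretches far above the median
print(S.mean() - np.median(S))         # mean pulled above the median by the tail
```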
Let's just recap on a small subset of the things we have observed since 1979. Most obviously, there has been about 30 years of rather steady warming, just as expected by the models at the time, including most famously the Hansen prediction. The overall ocean warming is also observable, but probably a little lower than models simulate. There have been 2 major volcanic eruptions, following each of which there was a clearly observable but rather short-term cooling, exactly characteristic of a mid-range sensitivity. IIRC the magnitude and duration of the second cooling (Pinatubo) was also explicitly predicted between the eruption and the peak of the cooling itself. Perhaps most interestingly (since it does not depend either on climate models, or on uncertainties in ocean heat uptake), a satellite was sent up in 1983 to measure the radiation balance of the planet, and its data since then (as analysed by Forster and Gregory last year) are in line with a low sensitivity. Of course there is a lot more we've learnt besides that, and also substantial improvements in model resolution and realism - I've just focussed on some of the things that should most directly impact on estimates of climate sensitivity.
There seems to be a rather odd debate going on amongst some climate scientists about whether new observations will reduce uncertainty (I'll have more to say on this when a particular paper appears). I say it's rather odd, because I thought it was well known (it is certainly true, but true and well known are not always close cousins) that new observations are always expected to reduce uncertainty, and although it is possible that they may not do so on particular occasions, it is always a surprise when this occurs. However, the vast bulk of observations (not just limited to those I have mentioned) have been singularly unexceptional, matching mid-range expectations with an uncanny accuracy (I'm ignoring stuff like ice sheets which have no direct relevance to estimating S). I fully accept that some of these observations may not be an especially stringent test of sensitivity, but they do all point the same way and it is hard to find any surprises at all in there. Remember that one of the biggest apparent surprises, the lack of warming in the satellite atmospheric record, was effectively resolved in favour of the models.
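A toy illustration of that expectation, using arbitrary Gaussian numbers of my own rather than anything climate-specific: by the law of total variance, the spread in S that remains after seeing an observation, averaged over the possible observations, cannot exceed the prior spread, even if a particular observation can occasionally be unhelpful.

```python
import numpy as np

# Toy check of "new observations are expected to reduce uncertainty", with
# arbitrary made-up numbers: by the law of total variance, the variance of S
# given an observation Y, averaged over the possible Y, cannot exceed Var(S).
rng = np.random.default_rng(1)

S = rng.normal(3.0, 1.5, 500_000)       # hypothetical prior spread in sensitivity (deg C)
Y = S + rng.normal(0.0, 1.0, S.size)    # hypothetical noisy observation of S

labels = np.digitize(Y, np.linspace(Y.min(), Y.max(), 60))
groups = [S[labels == b] for b in np.unique(labels) if (labels == b).sum() > 50]
avg_cond_var = sum(len(g) * g.var() for g in groups) / sum(len(g) for g in groups)

print("prior variance:                    ", S.var())       # about 2.25
print("average variance after observing Y:", avg_cond_var)  # clearly smaller
```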
I can think of several alternative theories as to why the uncertainty in the IPCC estimate has not reduced, which R+B do not touch upon. Most obviously, I've explained (here and here) that the probabilistic methods generally used to generate these long-tailed pdfs are essentially pathological in their use of a uniform prior (under the erroneous belief that this represents "ignorance"), together with only looking at one small subset of the pertinent data at a time, and therefore do not give results that can credibly represent the opinions of informed scientists. While I think this effect probably dominates, there may also be a sociological effect, with this range acting as some sort of anchoring device which people are reluctant to change despite its rather shaky origins. Ramping up uncertainty (at least at the high end) is a handy lever for those who argue for strong mitigation, and it would also be naive to ignore the fact that scientists working in this area benefit from its prominence.
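To see how much work the prior is doing, here is a minimal sketch with my own toy numbers and setup, not a reproduction of any published analysis: feeding the same Gaussian "observation" of f through a prior that is uniform in S, versus one that is uniform in f, gives very different upper tails for S.

```python
import numpy as np

# Illustrative sketch with made-up numbers (not any published analysis):
# the same Gaussian "observation" of the feedback factor f gives a very
# different upper tail for S = lambda0/(1 - f), depending on whether the
# prior is uniform in S or uniform in f.
lambda0 = 1.2                       # no-feedback response to 2xCO2 (deg C), illustrative
f_obs, f_err = 0.65, 0.13           # hypothetical observational constraint on f

S = np.linspace(0.5, 20.0, 4000)    # sensitivity grid, with an arbitrary 20C cutoff
dS = S[1] - S[0]
f = 1.0 - lambda0 / S               # feedback implied by each value of S
like = np.exp(-0.5 * ((f - f_obs) / f_err) ** 2)

post_uniform_S = like / (like.sum() * dS)                 # prior uniform in S
jac = lambda0 / S**2                                      # |df/dS|
post_uniform_f = like * jac / ((like * jac).sum() * dS)   # prior uniform in f

tail = S > 6.0
print("P(S > 6C) with uniform-in-S prior:", (post_uniform_S[tail] * dS).sum())
print("P(S > 6C) with uniform-in-f prior:", (post_uniform_f[tail] * dS).sum())
```

The uniform-in-S case keeps substantial weight in the tail (and would not even be normalisable without the arbitrary cutoff), which is essentially the pathology referred to above.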
So in summary, Roe and Baker have now attempted to justify the pdfs that have been generated as not only reasonable, but inevitable on theoretical grounds. However, they have made no attempt to address the issues we have raised. It is notable that in their lengthy list of acknowledgees, there are many eminent and worthy scientists thanked but not one who I recognise as having actually published any work in this area - apart from Myles Allen who appears to have been a referee. The real question IMO is not whether a fat tail is inevitable, but rather whether it is possible to generate a pdf which credibly attempts to take account of the points I have raised, and still maintains any such significant tail. That challenge has remained on the table for a year and a half now, and no-one has taken it up...
Allen and Frame certainly aren't going to try, because they have gleefully seized upon Roe and Baker to justify a bait-and-switch. After failing to make any progress themselves, they have conveniently decided that it isn't such an interesting question after all, so let's not take too close a look at what has gone on, thankyouverymuch. There are a couple of bizarre curve-balls in their comment: they start off by saying that the uncertainty isn't surprising because 4C warmer will be a "different planet". But nothing in Roe and Baker, or anywhere else in the relevant literature, depends on such nonlinearity in the sensitivity. In fact some of the published estimates are explicitly phrased in terms of the classical definition of sensitivity as the derivative dT/dF (and everyone else uses this implicitly anyway). That is, the uncertainty being discussed is in our estimate of that gradient, rather than in any nonlinearity as the line is extrapolated out to +3.7 W/m2. So I can only interpret that comment as them preparing the ground for when people eventually do get around to agreeing that the linear sensitivity is actually close to 0.75 K per W/m2 (~3C for doubled CO2), so they can wring their hands and say "ooh, it might get worse in the future". Of course the reason that people use the linear sensitivity to directly derive the 2xCO2 value is that all the evidence available, including probably every plausible model integration ever performed, indicates only a modest amount of nonlinearity in that range. Allen and Frame's comment doesn't even reach the level of a hypothesis, as they have not presented any testable idea about how a significant nonlinearity could arise. There are other details I'm not very impressed by - the wording seems a bit naive and imprecise, but I bet they would just say they were dumbing down for the audience, so it would only seem petty to nitpick. Anyway, they have at last admitted elsewhere (if grudgingly) that a uniform prior does not actually represent "no knowledge", so I see no need to pursue them further.
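(To spell out the arithmetic behind that parenthesis, under the usual assumption of an approximately linear response over this range: a gradient of 0.75 K per W/m2 multiplied by the canonical 2xCO2 forcing of 3.7 W/m2 gives about 2.8C, i.e. roughly 3C for doubled CO2.)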
I don't think it is clearly expounded in the R+B article itself, but in the comments to Stoat's post, Roe sets out his belief that sensitivity is intrinsically not a number, but a pdf. This seems to indicate rather muddled and confused thinking to me. True aleatory uncertainty is hard to find in the real world, and I've seen no plausible argument that the climate system exhibits it to any significant extent. We may on occasion choose to separate out some part of the uncertainty and treat it as effectively aleatory and therefore irreducible (eg consider the weather v climate distinction: if asked for the temperature on Christmas day 50 years from now, an honest answer will always be a rather broad pdf, however precisely we come to understand the forced response which will influence the shape and position of that pdf). But this is not a fundamental distinction, just a practical one - with a sufficiently accurate model and observations, the temperature really could in principle be predicted accurately. For concreteness in the current context, let's consider the following definition of S, which is based on Morgan and Keith's 1995 survey: S is defined to be the observed global temperature rise, measured as a 30-year average, 200 years after the CO2 level is doubled from the pre-industrial level and then held fixed (with other anthropogenic forcings unchanged). This experiment is just about within mankind's grasp if we chose to do it and weren't too bothered about killing a few people along the way, so it seems to be an operationally meaningful definition (at least as a thought experiment) that would clearly result in a specific number. Repeating this experiment several times in a model with different initial conditions will give very slightly different answers, but their range will be negligibly small (< 0.1C) compared to the uncertainties in S that we are presently stuck with. The only large initial-condition-related uncertainty in model calculations of sensitivity is the well-known numerical artefact that causes some slab ocean runs to go cold, and that has no physically realistic basis. So I don't see Roe's point here as a substantive one.
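As a minimal sketch of why this definition yields a single number rather than a pdf, here is a toy one-box energy-balance calculation. It is entirely my own illustrative construction, with made-up parameter values and with internal variability omitted for clarity; it is not Morgan and Keith's setup or any real model. The point is simply that the initial state is forgotten long before year 200, so the prescribed 30-year average is, for all practical purposes, a fixed quantity.

```python
import numpy as np

# Toy one-box energy-balance sketch of the thought experiment above (illustrative
# parameters only). The temperature relaxes towards the forced equilibrium S_true,
# so the 30-year average taken 200 years after the doubling barely depends on the
# initial state.
S_true = 3.0     # hypothetical "true" equilibrium response to 2xCO2 (deg C)
tau = 15.0       # relaxation timescale in years, illustrative

def response(T0, years=260):
    T, series = T0, []
    for _ in range(years):
        T += (S_true - T) / tau        # relax towards the forced equilibrium
        series.append(T)
    return np.mean(series[200:230])    # 30-year average, 200 years after doubling

spread = [response(T0) for T0 in np.linspace(-0.5, 0.5, 11)]   # varied initial states
print(max(spread) - min(spread))       # negligibly small (of order 1e-6 C)
```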