Friday, February 01, 2013

A sensitive matter

So, sensitivity has been in the climate blogosphere a bit recently. Just a few days ago, that odd Norwegian press release got some people excited, but it's not clear what it really means. There is an Aldrin et al paper, published some time ago, which gave a decent constraint on climate sensitivity, though nothing particularly surprising or interesting IMO. We thought we had sorted out the sensitivity kerfuffle several years ago, but it seems that the rest of the world still hasn't caught up. As I said to Andy Revkin (and he published on his blog), the additional decade of temperature data from 2000 onwards (even the AR4 estimates typically ignored the post-2000 years) can only work to reduce estimates of sensitivity, and that's before we even consider the reduction in estimates of negative aerosol forcing, and the additional forcing from black carbon (the latter being very new and, AIUI, not yet included in any calculations). It's increasingly difficult to reconcile a high climate sensitivity (say over 4C) with the observational evidence for the planetary energy balance over the industrial era. But the Norwegian press release seems to refer to as-yet-unpublished research, and some of the claims seem a bit hard to credit, so we will have to wait for more details before drawing any more solid conclusions.

Before then, there was the minor blogstorm (at least in some quarters) surrounding Nic Lewis' criticism of the IPCC's stubborn adherence to their old estimate of climate sensitivity. This, of course, despite the additional evidence which I've just mentioned above.

When I looked at the IPCC drafts, I didn't actually notice the substantial change in estimated aerosol uncertainty that Nic focussed on. With limited time and energy to wade through several hundred pages of draft material, I mostly looked for how and where they had (or had not, but perhaps should have) referred to my work, to make sure it was fairly and accurately represented. I was pretty unimpressed with some parts of the first draft, actually, and made a number of suggestions. Of course, in line with the IPCC conditions, I'm not going to say what was or was not in any draft. According to IPCC policy, my comments will all be available in the fullness of time, but I have also criticised this delayed release, so in the spirit of openness here is one comment I made about their discussion of sensitivity in Chapter 12 (p55 in the first order draft):
It seems very odd to portray our work as an outlier here. Sokolov et al 2009, Urban and Keller 2010, and Olson et al (in press, JGR) have also recently presented similar results (and there may be more as yet unpublished, eg Aldrin at the INI meeting back in 2010). Such "observationally constrained pdfs" were all the rage a few years ago and featured heavily in the last IPCC report; there is no clear explanation for your sudden dismissal of them in favour of what seems to be a small private opinion poll. A more balanced presentation could be: "Annan and Hargreaves (2011a) criticize the use of uniform priors and argue that sensitivities above 4.5°C are extremely unlikely (less than 5%). Similar results have been obtained by a number of other researchers [add citations from the above]."

Note, for the avoidance of any doubt, I am not quoting directly from the unquotable IPCC draft, but only repeating my own comment on it. However, those who have read the second draft of Chapter 12 will realise why I previously said I thought the report was improved :-) Of course there is no guarantee as to what will remain in the final report, which for all the talk of extensive reviews, is not even seen by the proletariat, let alone opened to their comments, prior to its final publication. The paper I refer to as a "small private opinion poll" is of course the Zickfeld et al PNAS paper. The pollees in the Zickfeld paper are largely the self-same people responsible for the largely bogus analyses that I've criticised over recent years - analyses which, even if they were valid then, are certainly outdated now. Interestingly, one of them stated quite openly in a meeting I attended a few years ago that he deliberately lied in these sorts of elicitation exercises (i.e. exaggerated the probability of high sensitivity) in order to help motivate political action. Of course, there may be others who lie in the other direction, which is why it seems bizarre that the IPCC appeared to rely so heavily on this paper to justify their choice, rather than relying on published quantitative analyses of observational data. Since the IPCC can no longer defend their old analyses in any meaningful manner, it seems they have to resort to an unsupported "this is what we think, because we asked our pals". It's essentially the Lindzen strategy in reverse: having firmly wedded themselves to their politically convenient long tail of high values, their response to new evidence is little more than sticking their fingers in their ears and singing "la la la I can't hear you".

Of course, this still leaves open the question of what the new evidence actually does mean for climate sensitivity. I have mentioned above several analyses that are fairly up to date. I have some doubts about Nic Lewis' analysis, as I think some of his choices are dubious and will have acted to underestimate the true sensitivity somewhat. For example, his choice of ocean heat uptake is based on taking a short term trend over a period in which the observed warming is markedly lower than the longer-term multidecadal value. I don't think this is necessarily a deliberate cherry-pick, any more than previous analyses running up to the year 2000 were (the last decade is a natural enough choice to have made), but it does have unfortunate consequences. Irrespective of what one thinks about aerosol forcing, it would be hard to argue that the rate of net forcing increase and/or overall radiative imbalance has actually dropped markedly in recent years, so any change in net heat uptake can only reasonably be attributed to a bit of natural variability or observational uncertainty. Lewis has also adjusted the aerosol forcing according to his opinion of which values are preferred - coincidentally, he comes down on the side of an answer that gives a lower sensitivity. His results might be more reasonable if he had at least explored the sensitivity of his result to the assumptions made. Using the last 30y of ocean heat data and simply adopting the official IPCC forcing values rather than his modified versions (since, after all, his main point is to criticise the lack of coherence in the IPCC report itself) would add credibility to his analysis. A still better approach would be to use a model capable of representing the transient change, and to fit it to the entire time series of the various relevant observations. Which is what people like Aldrin et al have done, of course, and which is why I think their results are superior.
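For anyone who wants the arithmetic behind these energy-balance arguments, here is a minimal sketch of the standard estimator (the numbers are illustrative ones of my own choosing, not taken from any particular paper):

```python
# Minimal energy-budget estimate of equilibrium climate sensitivity:
#   S = F_2x * dT / (dF - dQ)
# dT: observed warming (K); dF: change in net forcing (W/m^2);
# dQ: ocean heat uptake (W/m^2). All numbers below are illustrative.
F2X = 3.7  # forcing from doubled CO2, W/m^2 (standard value)

def sensitivity(dT, dF, dQ):
    return F2X * dT / (dF - dQ)

print(sensitivity(0.75, 2.0, 0.3))  # weak aerosol cooling: ~1.6 K
# Strengthen the (negative) aerosol forcing by 0.6 W/m^2 and the same
# observed warming implies a much higher sensitivity:
print(sensitivity(0.75, 1.4, 0.3))  # strong aerosol cooling: ~2.5 K
```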

But the point stands, that the IPCC's sensitivity estimate cannot readily be reconciled with forcing estimates and observational data. All the recent literature that approaches the question from this angle comes up with similar answers, including the papers I mentioned above. By failing to meet this problem head-on, the IPCC authors now find themselves in a bit of a pickle. I expect them to brazen it out, on the grounds that they are the experts and are quite capable of squaring the circle before breakfast if need be. But in doing so, they risk being seen as not so much summarising scientific progress, but obstructing it.

There's a nice example of this in Reto Knutti's comment featured by Revkin. While he starts out by agreeing that estimates based on the energy balance have to be coming down, he then goes on to argue that now (after a decade or more of generating and using them) he doesn't trust the calculations, because these Bayesian estimates are all too sensitive to the prior choices. That seems to me to be precisely contradicted by all the available literature, which demonstrates that so long as absurd priors are avoided, the results are actually remarkably robust. Our own Climatic Change paper, Salvador Pueyo, Aldrin and the other papers above all use a wide range of different priors based on a range of different arguments, but still arrive at very similar answers (at least, similar enough in the context of the hypothetical "long tail" for the pdf of climate sensitivity)! It looks rather like the IPCC authors have invented this meme as some sort of talismanic mantra to defend themselves against having to actually deal with the recent literature.

204 comments:

Paul S said...

Regarding the new black carbon forcing estimates in Bond et al. 2012 (the paper reported on in the linked Guardian article), this is the main reference used for the BC direct estimate in the Second-order draft, which presumably informs the total aerosol RF estimate in some way.

I anticipate there will be much confusion/outrage/shrieking over this matter since the reported value for BC in the draft (and I'm pretty sure this won't change) is considerably different to the 0.71 figure in Bond et al. Essentially the difference is a matter of scope and some creative accounting in the Bond et al. paper.

The IPCC BC estimate refers to fossil fuel and biofuel sources only, with other BC sources included in other aerosol categories, whereas the Bond et al. estimate includes all sources. Also, their method of estimating "BC" direct forcing is necessarily contaminated by other aerosol types. For the purposes of their report organic aerosols only have a scattering (negative forcing) effect even though they are known to also absorb. The absorbing effect of organic aerosols is thus implicitly included in the BC estimate, although to an unknown extent.

Nick Barnes said...

Is the short version still "sensitivity is 3K"?

Ir'Rational said...

No, I'd suggest it's falling.
And I'd not forget the Millikan effect, either!

EliRabett said...

The real issue with BC forcing is that it is not global, but intensely local, depending not only on emissions (Asian brown cloud) but also absorptions (Greenland darkening)

A global value for forcing is thus misleading.

BBD said...

Ir'Rational

So say S falls to ~2.5K. Does it mean the sceptics are correct and we can all go to the pub and forget about AGW?

William M. Connolley said...

What Nick asked :-)

Mark B. said...

BBD

It would certainly mean that the skeptics were correct to question the high estimate that was the consensus of all the world's climate scientists (and thus, could not be questioned by any rational person).

BBD said...

Mark B

What, you mean this bit:

"...we conclude that the global mean equilibrium warming for doubling CO2, or 'equilibrium climate sensitivity', is likely to lie in the range 2°C to 4.5°C, with a most likely value of about 3°C. Equilibrium climate sensitivity is very likely larger than 1.5°C."

IPCC AR4 WG1 6.9.4

?

Brian French said...

My take on this...
The gig is (should be) up for climate alarmists. There are three factors that all need to be considered when determining policy associated with restricting the use of fossil fuels and the emission of CO2: 1/ Is it bad? 2/ If bad, is it a true crisis? and 3/ Do humans contribute significantly to the emission of CO2?
There is no argument about whether the earth has warmed over the past 150 years, so that's not a significant issue.
3/ Do humans contribute to CO2? The answer is yes. CO2 in the atmosphere has increased by 50% in the last generation.
These two "proscribed" facts result in the so-called 98% consensus (77/79 climate scientists agreed to these facts in a survey).
However, they are not critical factors in policy making.
1/ Are CO2 and fossil fuels bad? Considering that the growth in the standard of living in the world is almost entirely due to cheap energy provided by fossil fuels, this can be argued against. If one's world view is that humans are parasites, then this improvement in living standards can be seen to be of little value. If you have children, then you will probably sacrifice a level of harm to the environment to stay warm in the winter and feed your children.

A study now being reported upon (most notably by Andrew Revkin in the New York Times) shows that the most likely result of increased CO2 is below the lower end of the scale as forecast by the UN's IPCC. To describe the study in a sentence: the last 20 years of lack of increase in actual events (in warming and sea levels and extreme events) proves that the climate has little sensitivity to CO2. So the entire argument of Michael Mann, James Hansen and David Suzuki is proven wrong.

The temperatures are therefore likely to increase only about a degree in the next century, and sea levels to rise under a foot.

Here's the NY Times piece: http://dotearth.blogs.nytimes.com/2013/01/26/weaker-global-warming-seen-in-study-promoted-by-norways-research-council/?comments#permid=20

Brian French said...

Here's another study that brings into question worries about climate:
http://www.nature.com/nclimate/journal/v2/n12/full/nclimate1589.html

steven said...

BBD

"So say S falls to ~2.5K. Does it mean the sceptics are correct and we can all go to the pub and forget about AGW?"

No, it means

1. Lukewarmers were correct.
2. We might have a longer window of opportunity to decide on mitigation, although that's an open question. And folks who argue that we only have X years left to fix the problem should probably reconsider their claims, and reconsider the tactical wisdom of drawing lines in the sand.
3. Castigating folks who argued that sensitivity was more likely to be less than 3 rather than greater than 3 was probably not the optimal choice in tribal warfare.

Paul S said...

1. Lukewarmers were correct.

I just flipped a coin and guessed correctly. Do you think that suggests I have some deep understanding concerning coin tosses?

3. castigating folks who argued that sensitivity was more likely to be less than 3 rather than greater than 3, was probably not the optimal choice in tribal warfare.

Can you point out anywhere this has happened?

Anonymous said...

James
I very largely agree with what you say, but may I respond on some of your comments relating to my recent energy-balance based climate sensitivity estimate? You write, in reference to it:

"his choice of ocean heat uptake is based on taking a short term trend over a period in which the observed warming is markedly lower than the longer-term multidecadal value."

The ocean heat uptake (OHU) figure that I took, its value over the last decade, is actually higher than if I had computed the trend over a longer-term multi-decadal period, and therefore resulted in my sensitivity estimate being higher, not lower. It makes sense to estimate OHU over much the same period that surface temperatures are measured, as I did. If natural, internal, fluctuations move heat from the ocean to the atmosphere, that will be reflected in a reduction in ocean heat content, and hence in measured OHU, and an increase in surface temperatures. When computing the energy-balance climate sensitivity estimate, those two effects will countervail each other, reducing the impact of internal variability on the estimate.

" Lewis has also adjusted the aerosol forcing according to his opinion of which values are preferred"

I'm not sure that "adjusted" is the best description of what I did regarding aerosol forcing. I simply used the IPCC's own best estimate of -0.73 W/m^2 based on (satellite) observations, rather than the IPCC's preferred (main) estimate of -0.90 W/m^2. The latter was a composite estimate based on modelled aerosol forcing in GCM simulations, and on their "expert assessment" of a range of -0.68 to -1.52 W/m^2 for inverse estimates of aerosol forcing, in addition to the satellite-observation derived estimates. IMO, use of GCM-based estimates should, insofar as possible, be avoided when making an observationally-based estimate of climate sensitivity.

In principle including inverse estimates is fine, but, as set out in my Appendix 1, several of the inverse estimates included by the IPCC came from highly unsuitable studies. For instance, two that were based purely on global energy balance estimates, with climate sensitivity assumed to be 3 K; three did not themselves actually estimate global aerosol forcing; and one turns out to have used a model with a serious code error, correction of which substantially reduces its estimate of aerosol cooling. Excluding unsuitable studies like those, the mean of the inverse estimates is actually in line with the -0.73 W/m^2 satellite observation based best estimate.

"His results might be more reasonable if he had at least explored the sensitivity of his result to the assumptions made. Using the last 30y of ocean heat data and simply adopting the official IPCC forcing values rather than his modified versions"

I agree that would have been a useful addition to my work. I am happy to do as you suggest. Using 1981-2011 ocean heat data (again for 0-2000m, from Levitus et al, 2012), rather than the last 10 years, to compute the trend would have reduced the recent-period OHU estimate (scaled up as before to allow for heat uptake in the deeper ocean and elsewhere) by 0.08 W/m^2. Adopting the IPCC's main, composite, estimate of aerosol forcing would have reduced the change in total forcing between the 1870s (where I also originally scaled down the estimated aerosol forcing) and the latest decade by 0.13 W/m^2. The net change in {forcing - OHU} would therefore be reduced by 0.05 W/m^2. As a result, my best estimate of climate sensitivity would have increased by 0.05 K, from 1.62 K to 1.67 K. Not a very large change.
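For anyone wishing to check that arithmetic, here it is spelled out (a sketch: the ~0.73 K warming between the base and final decades and the implied {forcing - heat uptake} term of ~1.67 W/m^2 are backed out from the 1.62 K estimate assuming the standard 3.7 W/m^2 for a CO2 doubling, rather than quoted from my article):

```python
# Back-of-envelope check of the revision described above.
F2X = 3.7                        # W/m^2 per CO2 doubling (standard value)
dT = 0.73                        # decadal-mean warming, K (inferred)
net_old = F2X * dT / 1.62        # implied {forcing - OHU} term, ~1.67 W/m^2
net_new = net_old - 0.13 + 0.08  # forcing change down by 0.13; OHU trend down
                                 # by 0.08, which raises the net term by 0.08
print(F2X * dT / net_new)        # ~1.67 K, matching the revised estimate
```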

Paul S said...

Nic Lewis,

I simply used the IPCC's own best estimate of -0.73 W/m^2 based on (satellite)

There is no 'best estimate' of -0.73 given in the draft. That's just the average of satellite-based studies assessed. However, this doesn't translate to a satellite-based estimate of total aerosol forcing because many of these studies only cover indirect effects.

When accounting for the studies which don't include a direct forcing estimate the average from satellite-based observational studies is -1.0W/m^2.

Anonymous said...

We have a few lines of evidence through which to estimate ECS: obs, paleo, and GCMs. Of these, the first was the main source of very, very long tails, due to the poor constraints the instrumental record places particularly on OHU and forcings. The latter two pointed pretty strongly to an ECS of, say, 2.5-4.5.

What we've seen lately are a number of papers that try to hack off the long tails of the obs-calculated ECS, but in doing so tend to commit a number of pretty significant mistakes, causing them to underestimate ECS somewhat. Schmittner, for example; also the presentation of the results of Aldrin et al. as a 2C value when their apples-to-apples comparison to GCM-produced values went up to 3.3C.

We'll see what Skeie et al. produce, but being able to chop 1.2C off of a previous estimate due to a measly 10 years of additional data sounds a little odd, to say the least.

Meanwhile, the GCMs and paleo estimates seem to be pretty stubbornly hovering in the 3-4C neighborhood. Certainly the paleo record illustrates that either the planet is vastly more responsive to very small changes in globally averaged surface temp, or else ECS and Earth System Sensitivity aren't below 2C.

The first thing I would question, in light of that, is whether these recent obs-based attempts to constrain ECS are in fact doing so, or are instead calculating something between the TCR and ECS rather than a "pure" ECS. And there are some good reasons to suspect this is the case, e.g. Armour et al. 2013 (J Clim).

Anonymous said...

Paul S,
"There is no 'best estimate' of -0.73 given in the draft. That's just the average of satellite-based studies assessed. However, this doesn't translate to a satellite-based estimate of total aerosol forcing because many of these studies only cover indirect effects."

I think what you say is completely incorrect. Lines 30-34 on page 7-49 of the leaked SOD make clear that the -0.73 W/m^2 estimate (derived by bootstrap-processing the satellite-based estimates) is for AFari+aci - that is, for adjusted forcing (AF), including both direct and all indirect effects.

Anonymous said...

Er, 1.8C, not 1.2C wrt Skeie et al.

Paul S said...

Nic Lewis,

I'm fully aware what it says in the text, but I've also followed the references to the studies used to construct this average, given in Table 7.4. Here's a listing of the papers, their estimates and their scopes:

Bellouin et al., 2012 = -0.9; direct + first indirect effect, no AF.

Dufresne et al., 2005 = -0.72; given as total aerosol forcing but specifically simply the sum of direct + first indirect so unclear whether it includes AF.

Lebsock et al., 2008 = -0.42; first indirect effect only, no direct or AF.

Quaas and Boucher, 2005 = -0.3 and -0.4 (two separate estimates); first indirect effect only, no direct or AF.

Quaas et al., 2008 = -1.1; direct + first indirect effect, no AF.

Quaas et al., 2009 = -1.2; appears to be total including AF.

Storelvmo et al., 2009 = unable to find a stated observation-based estimate, IPCC authors may have inferred a figure from something in the paper.

Lohmann and Lesins, 2002 = -0.85; appears to be total including AF.

Quaas et al., 2006 = -0.3 and -0.5 (two separate estimates); indirect effects only but including cloud lifetime so would relate to AF indirect, no direct.

Sekiguchi et al., 2003 = -1.3; assume relates to AF.
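For what it's worth, a straight average of the central values listed above (counting the two-estimate papers twice, and omitting Storelvmo et al. since no figure is stated) essentially reproduces the draft's number, consistent with it being a simple average rather than an assessed best estimate:

```python
# Central values from the listing above, in W/m^2 (Storelvmo et al. omitted).
estimates = [-0.9, -0.72, -0.42, -0.3, -0.4, -1.1,
             -1.2, -0.85, -0.3, -0.5, -1.3]
print(sum(estimates) / len(estimates))  # -0.73 to two decimal places
```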

Anonymous said...

Paul S
"I'm fully aware what it says in the text"

In that case, why did you deny that the SOD text gave a best estimate of -0.73 W/m^2 for total aerosol forcing based on satellite observations?

I don't think one can place much confidence in your interpretation of the papers. Take, for instance, the first one you cite, Bellouin et al 2012. The estimate of -0.9 W/m^2 you give is based on an unadjusted direct forcing figure of -0.5 W/m^2. Bellouin goes on to make corrections for cloudy skies and changes in aerosols since pre-industrial times, giving a final direct forcing estimate of -0.3 W/m^2, leading to a total figure of -0.7 W/m^2, not -0.9 W/m^2.

I don't propose to argue with you further at this point. I shall wait and see what the final version of the WG1 report has to say.

Paul S said...

Nic Lewis,

'Best estimate' implies a value arrived at by a full assessment of different considerations. The phrase 'best estimate' is specifically used when referring to the -0.9 figure.

The -0.73 figure quoted is simply an average of various satellite-based estimates, many of which do not relate to total direct+indirect forcing, and only three to total AF. Whether or not the IPCC authors involved recognised this at the time of writing the draft isn't clear.

The -0.5 value for direct forcing in the Bellouin paper relates to the radiative difference between an Earth with anthropogenic + natural aerosols versus natural-only. It therefore is their estimate for the total all-history anthropogenic direct aerosol forcing.

IPCC and other estimates tend to instead specifically refer to a change from a particular date (e.g. 1750). This is problematic for the previous estimate because anthropogenic aerosol emissions existed prior to 1750. In order to account for this they make a comparison between natural and pre-industrial (including anthropogenic) aerosol conditions in HadGEM2-A historical runs and derive an adjustment factor for change from preindustrial. There's a problem here comparing to IPCC estimates because the model preindustrial is 1860 whereas we want to compare to 1750. This means the -0.3 figure is almost certainly an underestimate in terms of IPCC reporting.

Picking either value doesn't affect the average across studies of the total effect being -1.0, even though half of these don't include AF effects.

Magnus said...

Are you not dismissing other ways to get at the ECS a bit too easily? How can we be sure that the last ~100 years captures all the processes?

RobH said...

Funny, I argue about CS quite often with AGW skeptics, and the only time I ever use any argument for CS above 3C is to say that Lindzen's estimate of <1C is just as unlikely as the estimates above 4.5C. Outside of that, I say the IPCC central figures are probably close to reality.

But somehow, again, these comments create an illusion of false comparison, with the "skeptics" jumping on Annan's words as meaning CS is "likely" lower than 2C... omitting the comparison to the relative likelihood of CS above 4.5C. In skeptics' minds 3C is "high CS" so they seem to think he's comparing "lower than 2C" to the IPCC's central figures.

Correct me if I'm wrong, but I don't think that's what Annan is saying. As far as I can see, this doesn't really move the needle on CS. It's just a rather pedantic matter of clipping off the long tail.

Carrick said...

Wondering if people have seen the back and forth on RealClimate on uniform priors. I thought this comment (and some of the ones that follow) was pretty interesting.

Yes, using a flat prior for climate sensitivity doesn’t make sense at all.
Subjective and objective Bayesians disagree on many things, but they would agree on that. The reasons why are repeated in most text books that discuss Bayesian statistics, and have been known for several decades. The impact of using a flat prior will be to shift the distribution to higher values, and increase the mean, median and mode. So quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s hope the IPCC authors understand all that.
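To get a feel for the size of the effect being described, here is a toy calculation (entirely schematic: the Gaussian constraint on the feedback parameter is invented, and only the comparison between the two priors matters) contrasting a posterior computed under a flat prior in sensitivity with one computed under a flat prior in the feedback parameter:

```python
import numpy as np

F2X = 3.7
S = np.linspace(0.5, 10.0, 2000)  # sensitivity grid, K
lam = F2X / S                     # feedback parameter, W/m^2/K

# Toy "observation": feedback constrained to 1.23 +/- 0.4 W/m^2/K,
# i.e. a best estimate of S around 3 K (numbers invented).
like = np.exp(-0.5 * ((lam - 1.23) / 0.4) ** 2)

post_S = like / np.trapz(like, S)  # flat (uniform) prior in S
post_lam = like * F2X / S ** 2     # flat prior in feedback, transformed to S
post_lam /= np.trapz(post_lam, S)

dS = S[1] - S[0]
for name, p in (("flat in S", post_S), ("flat in feedback", post_lam)):
    mean = np.trapz(S * p, S)
    p95 = S[np.searchsorted(np.cumsum(p) * dS, 0.95)]
    print("%-16s mean %.1f K, 95th percentile %.1f K" % (name, mean, p95))
# Same "data", but the flat-in-S prior yields a noticeably higher mean
# and a much fatter upper tail.
```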

Carrick said...

Eli: The real issue with BC forcing is that it is not global, but intensely local, depending not only on emissions (Asian brown cloud) but also absorptions (Greenland darkening)


I guess the question for me is, are there any aerosol forcings that are truly global?

James Annan said...

Yeah, I should probably have had a tl;dr version, which is that sensitivity is still about 3C.

The discerning reader will already have noted that my previous posts on the matter actually point to a value more likely on the low side of this rather than higher, and were I pressed for a more precise value, 2.5 might have been a better choice even then. But I'd rather be a little conservative than risk being too Pollyanna-ish about it.

Joel said...

James: The fact that you believe that the high-end estimates of climate sensitivity are very unlikely is now a "Breaking..." story over in Wattsland: http://wattsupwiththat.com/2013/02/01/encouraging-admission-of-lower-climate-sensitivity-by-a-hockey-team-scientist I have submitted a comment there pointing out in essence that this story actually broke about 7 years ago.

James Annan said...

Carrick - yes, I spotted that, but didn't wade in as I wanted to see how it played out. Jewson is a sometime collaborator of Myles Allen and Dan Rowlands on Jeffreys prior stuff; I'm more in the subjective camp myself, but one sure prediction is that as soon as they abandon uniform priors (for sensitivity), the estimates will look much more convergent and short-tailed anyway. Which is, of course, precisely the reverse of what Reto said. His words only seem plausible if one considers that uniform priors are actually a reasonable choice for this problem. Which I believe we showed to be wrong, a few years ago.

Anonymous said...

Nic Lewis,

What is your take on the new black carbon data? In particular, would it further reduce your sensitivity estimate?

Anonymous said...

James,

I think you're correct in pointing out that the instrumental record shows high values of sensitivity to be unlikely. But I don't see how the last decade of temperature data makes such a difference to that argument (being heavily influenced by La Nina and the solar minimum (F&R 2011), and being relatively short in duration: a robust conclusion shouldn't depend on adding a few more datapoints).

The revised estimate of net forcing (in the SOD of AR5) has by far the bigger influence on shifting the (still very broad) pdf of sensitivity to smaller values.

That implies that high values of S are less likely than they were deemed before. Whether low values (say, below 2 or 1.5) are now more likely is a much trickier question, since then you quickly come into the range of values that seem incompatible with paleo and GCM's. So the pdf of S seems to have gotten narrower, taking everything into account.

Q, related to what TB brought up: How does the effective sensitivity (as derived from a straight summation over the instrumental period including OHU) differ from equilibrium sensitivity?

James Annan said...

I certainly agree that the Norwegian press release seems a bit unlikely in that respect (last 10y). However, if sensitivity is low, we learn more rapidly than if it is high. Also, estimates using the record up to 2000 actually already pointed to a moderate value as a best estimate, and the models actually predicted some gradual *acceleration* in warming subsequent to that. But certainly the changes in forcing have a substantial influence too.

I don't think the paleo record has any problem with a sensitivity down to about 2C, maybe even a touch lower. Once you go way back in time, it's questionable whether the concept of sensitivity really applies (it needs an equilibrium climate to exist, for starters). That's a whole new can of worms for a future day.

Any gap between equilibrium and effective sensitivity is likely to be quite small when the first is small - there's a plot in a Sokolov/Forest paper showing this for one particular model, but I'm pretty confident it will be generally true.
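For the curious, here is a toy two-layer illustration of that claim (all parameter values invented; an ocean heat uptake "efficacy" greater than one is included because without it the simple energy budget recovers the equilibrium value exactly, so only the qualitative behaviour matters):

```python
# Toy two-layer energy balance model:
#   C  dT/dt  = F - lam*T - eps*gam*(T - Td)   (surface/mixed layer)
#   Cd dTd/dt = gam*(T - Td)                   (deep ocean)
# The uptake "efficacy" eps > 1 opens a gap between the energy-budget
# ("effective") sensitivity and the true equilibrium value F2X/lam.
F2X, gam, eps = 3.7, 0.7, 1.3  # W/m^2, W/m^2/K, dimensionless
C, Cd, dt = 8.0, 100.0, 0.1    # heat capacities (W yr/m^2/K), step (yr)

def effective(lam, years=100.0):
    """Effective sensitivity diagnosed from an abrupt-2xCO2 run."""
    T, Td, F = 0.0, 0.0, F2X
    for _ in range(int(years / dt)):
        dT = (F - lam * T - eps * gam * (T - Td)) / C
        dTd = gam * (T - Td) / Cd
        T, Td = T + dt * dT, Td + dt * dTd
    N = F - lam * T - (eps - 1.0) * gam * (T - Td)  # TOA imbalance
    return F2X * T / (F - N)

for lam in (2.3, 1.2, 0.8):
    print("equilibrium %.1f K -> effective %.1f K" % (F2X / lam, effective(lam)))
# The gap is under ~0.1 K when equilibrium sensitivity is low, but
# grows to ~0.5 K or more when it is high.
```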

BBD said...

James says:

I don't think the paleo record has any problem with a sensitivity down to about 2C, maybe even a touch lower.

It's interesting that the lower estimates for S (~2 - 2.5C) do look low in the context of the most recent paleoclimate estimates.

Specifically the new Hansen et al. paper (currently on arXiv) and the intercomparison done by the PALAEOSENS project (Rohling et al. 2012). The Hansen estimate is again about 3C (actually just over, so up from Hansen & Sato 2012), and Rohling et al. gives a range of 2.2 - 4.8C.

Magnus Westerstrand asks above if estimates derived from modern observations might be missing something. I wonder about this too.

KarSteN said...

@nic-lewis:

Speaking of serious code errors - such corrections don't always tend to reduce the estimates of the negative aerosol forcing. As it happens, Bellouin et al. ACPD 2012 had to revise their estimate from -0.9 W/m2 (-0.7 W/m2 IPCC-adjusted) to -1.3 W/m2 (-1.0 W/m2 IPCC-adjusted). As Paul S correctly pointed out, it's RF rather than AF (so you may add another -0.2 W/m2 to this number in order to make it apples to apples).

Furthermore, your dismissal of GCM estimates is foolish at best. When it comes to TCR/ECS estimates, RF-only estimates are pointless, as the surface temperature response might well exhibit an entirely different spatial response (with inevitable changes in the resulting ECS, usually expressed in terms of AF or RF efficacy), as demonstrated in Jones et al. GRL 2007 (Fig. 12). If one seeks to deduce the TCR/ECS from satellite estimates, there is no way around complementary GCM simulations. Note that there is another satellite-based indirect forcing estimate (Penner et al. GRL 2012) which also points to a stronger effect, and which hasn't been included in the AR5-SOD yet (I also miss the comprehensive EUCAARI paper by Kulmala et al. ACP 2011, which provides global GCM estimates in strong agreement with current thinking).

On a related note, if Vernier et al. GRL 2011 are right, I reckon that the tropospheric AOD could be slightly overestimated at the expense of a higher stratospheric background AOD. I have yet to see a paper which makes mention of it, however. The sum would tend to increase the negative forcing even further (although by an insignificant margin), as stratospheric aerosol forcing is strictly negative.

Bottom line: what Paul S said. He knows exactly what he is talking about! The current most likely total anthropogenic aerosol AF estimate is -0.9 or -1.0W/m2. It happens to be also in agreement with the (yet irrelevant) AR5-SOD estimate. If you keep choosing to ignore GCM results regarding aerosol AF, I guarantee you that your ECS estimate will be wrong. If I were to put more mainstream numbers in an EBM, the TCR best estimate would be close to where your ECS number is (which would translate into an ECS best estimate of 2.8K).


@lindaserena:
The revised BC forcing estimate is merely based on an upward revision of the BC aerosol loading in the atmosphere, which seems to have been underestimated so far. This might turn out to be the case for other aerosol species as well, but that wasn't the scope of their study. I wouldn't expect too much of a change from what's already known (unless you wish to believe in some sort of miracle).

Howard said...

Eli:

What makes the anthropogenic WMGHG forcing so unusual in the historical and geologic record is the fact that it is well mixed over the entire globe.

Glaciation cycles are caused by focused hemispherical forcings that leverage huge feedbacks with very low global average change in watts. This fits your *intensely local* description perfectly and shows how such forcings can have significant global consequences. This leveraged local forcing that produces global climate change is also very strongly influenced by the Jekyll and Hyde geographic nature of the northern and southern hemispheres.

The fact that BC and the brown cloud influence the high-latitude, very highly sensitive north would imply that these effects may very well produce amplified positive feedbacks.

The temperature data in Alaska, Canada and Siberia, glacial melt in Greenland and declining arctic sea ice may all be strongly influenced or caused by these *intensely local* climate forcings.

I am sure you would agree that the local events of Greenland melting, Arctic sea ice reductions and extreme north latitude temperature anomalies have global weather and climate implications.

PantsOnFire said...

A controversial paper has now been published which proposes that rainforest condensation and evaporation is a major forgotten player in global climate: http://www.atmos-chem-phys.net/13/1039/2013/acp-13-1039-2013.pdf

EliRabett said...

Eli started to read the Bond, et al. paper (it is over 250 pages) when he hopped into a seminar about open burning observed from satellites where it was claimed that the GFED (Global Fire Emissions Database) they use is way too low. This would mean there is much more BC in the atmosphere on average (also organic carbon, etc) than Bond used in their modeling. YMMV

EliRabett said...

Howard, Eli certainly does not disagree, but the Rabett is not comfortable with single-valued simplifications such as global temperature and average BC forcings. In the latter case, as you point out, the focusing of effects is what has the global impact, not the average forcing.

David Young said...

James, I noticed in your discussion paper at Climate of the Past, you estimate sensitivity for the LGM as 1.7C with a range of 1.2-2.4C. You then say it's not robust because of nonlinear effects. What's the basis for this? Does your result agree or disagree with the latest Hansen estimate referenced above?

Anonymous said...

@ourchangingclimate: related to what TB brought up: How does the effective sensitivity (as derived from a straight summation over the instrumental period including OHU) differ from equilibrium sensitivity?

@BBD: Magnus Westerstrand asks above if estimates derived from modern observations might be missing something. I wonder about this too

Yes, probably. Which, to be fair to James's point, may indicate a bit of hypocrisy on the part of those who swore by it earlier. For the people who care about the politics of scientific ingroups, that may be worth getting into. For people who care about the right answer from a science standpoint, and for policymakers, it's probably worth focusing on the actual answer.

Which is that different regions of the climate system respond to TOA imbalances over different timescales, and focusing on shorter timescales can give you a much lower answer than focusing on longer timescales due to the relative dominance of negative feedbacks.

Anonymous said...

@David Young: I noticed in your discussion paper at Climate of the Past, you estimate sensitivity for the LGM as 1.7C with a range of 1.2-2.4C.

That's not an apples-to-apples comparison with 2xCO2 ECS estimates.

See: http://thingsbreak.wordpress.com/2012/10/11/a-new-lgm-reconstruction-with-implications-for-climate-sensitivity/

Howard said...

@thingsbreak: That's not an apples-to-apples comparison with 2xCO2 ECS estimates.

Absolutely.

However, it's not just cold regime feedbacks versus warm regime feedbacks. There is no way to equate the feedback responses from regionally concentrated very high forcings from insolation cycles (with a net low global average forcing) versus forcings from well-mixed greenhouse gas increases.

When you pile other localized and temporal anthropogenic effects like aerosols, deforestation, irrigated agriculture, ozone depletion on top of the well-mixed and long-lived CO2, you have a real mess that is, at this point in time, impossible to untangle.

Therefore, published TCS and ECS figures for our current situation are not believable. It's also very hard to understand what would be the best policy recommendations going forward.

Heretical, I know.

James Annan said...

For paleo, you want this post and the associated paper. I haven't seen the Hansen paper, but expect that he will be using a somewhat too cold estimate for the Last Glacial Maximum (ie, 6C colder than modern rather than the newer estimate of about 4C). As is a lot of the PALAEOSENS stuff.

By the way, the LGM temperature estimate paper has been formally accepted.

KenH said...

Given that in your Dec. 21, 2012 post, "How cold was the last glacial maximum", your conclusion was 4C colder, and that in your comment here you estimate a sensitivity of 2.5 - 3C, then with a doubling of CO2 we can expect a temperature increase of about 2/3 of the warming since the LGM. Is this a fair perspective / characterization of these results?

Anonymous said...

Isn't the Aldrin paper based on AR4 aerosol forcing data, and wouldn't their sensitivity estimate require a further reduction for the new data? (And another reduction for black carbon?)

Wouldn't that apply as well to many other studies and simulations? There appears to be a big incoherence issue if this is not addressed throughout the referenced literature, perhaps by the introduction of a correction formula.

Alex Harvey said...

Dear James,

Good on you for having the courage to write this.

I don't think you should back away from what you said, though.

You referred to the IPCC's "stubborn adherence to their old estimate of climate sensitivity" - which surely means the 3 K/doubling of CO2 figure? If you're really only talking about the high tail - didn't the Charney report in 1979 essentially dismiss the high tail as very unlikely?

Later you've said,

But I'd rather be a little conservative [3 K] than risk being too Pollyanna-ish about it [2.5 K].

Well, that's precisely why the IPCC remain stubborn, and it's likely to be exactly what Reto Knutti would say, too.

Is it reasonable, rational, or scientific to defend a target that is not arrived at by reasonable analysis simply because of considerations of a need to be conservative?

Regarding Reto Knutti have a look at,

R. Knutti, 2008: 'Why are climate models reproducing the observed global surface warming so well?', GRL, VOL. 35, L18704, doi:10.1029/2008GL034932.

Five years ago he noted, "First, the most likely and obvious (although not the only) interpretation ... is that the total aerosol effect is smaller than suggested by most aerosol models."
...
"I argue that the current agreement of model simulated and observed warming (given the other forcings) points towards a relatively small total aerosol effect."

Then he appeared lately as a co-author on the Rohling et al. (2012; Nature) paper that excludes climate sensitivities lower than 2.2 K - based on remote paleo studies.

As you've said,
"Once you go way back in time, it's questionable whether the concept of sensitivity really applies (it needs an equilibrium climate to exist, for starters)."

Michel Crucifix's work has shown this very well - even as recently in the paleo record as the LGM.

It seems to me that if the IPCC presses ahead with the 3 K figure despite all observations then it really will lose all credibility.

David Young said...

Perhaps this is coincidence, but Nic Lewis' estimate based on modern observations is very close to James' estimate based on the LGM. I would just note that in fact aerosol forcings are very uncertain according to AR4 with an error bar equal to almost 200% of the central value. I sometimes wonder if we will ever know these values very accurately since they are so small compared to the total energy flows in the system.

James Annan said...

Ken, yes that seems a reasonable way of putting it.

lindaserena, in principle you are probably right. Of course the literature is always going to be incoherent in this way whenever you try to take a snapshot, as it takes time for new data to percolate through. Rather than trying some detailed calculation (which is outside the IPCC's remit), I think a more workable approach would be for the IPCC authors to review the literature intelligently, eg cite a paper and explain why it is likely biased a bit one way or the other.

Magnus said...

Saw the discussion over at RC where "geologists" think that the data on the LGM is not that good at the moment... Might be worth remembering.

doskonaleszare said...

lindaserena

"Isn't the Aldrin paper based on AR4 aerosol forcing data and wouldn't their sensitivity estimate require a further reduction for the new data ? (And another reduction for black carbon ?)"

The old AR4 GHG+DIRAERO RF (as used in Aldrin et al 2012) is similar to the new AR5 GHG+DIRAERO+INDIRAERO RF, so I would expect the numbers to remain the same.

And it seems that the posterior values of the total RF in (unpublished) Skeie et al are very close to the AR5 SOD estimates, which isn't very surprising considering Myhre is their co-author.

http://www.uib.no/People/ngfhd/EarthClim/Calendar/Oslo-2012/ECS_Olavsgard.pdf
slide #13

Anonymous said...

KarSteN
"If you keep choosing to ignore GCM results regarding aerosol AF, I guarantee you that your ECS estimate will be wrong. If I were to put more mainstream numbers in an EBM, the TCR best estimate would be close to where your ECS number is (which would translate in an ECS best estimate of 2.8K)."
I take the opposite view regarding GCMs, although I accept that they have their uses. If GCM derived estimates are used for primary variables, IMO the result cannot be regarded as a genuine observationally-derived estimate. Inverse estimates of aerosol forcing do not have to rely much on GCMs, are for AF, and are in line with the SOD's satellite-observation derived central AF estimate of -0.73 W/m^2 (discounting estimates from studies in the SOD's list that are, for reasons such as those I mentioned in my 2/2/13 5:59 am comment, obviously useless).
I gave a calculation of the effects on my ECS estimate of substituting the main composite SOD aerosol adjusted forcing estimate of -0.9 W/m^2 for its satellite-derived estimate, along with James's suggestion of a 30 year OHU trend, in my 2/2/13 5:59 am comment. If only the aerosol forcing estimate were so changed, my ECS estimate would instead have increased by 9%, to 1.77 K - over 1 K short of the unsupported 2.8 K number that you plucked out of the air. (You can check the 9% change against my original, detailed calculations.)

Anonymous said...

lindaserena said

"Isn't the Aldrin paper based on AR4 aerosol forcing data and wouldn't their sensitivity estimate require a further reduction for the new data ? (And another reduction for black carbon ?)"

The Aldrin 2012 paper was based on AR4 forcing estimates, but the sensitivity estimate in their main results doesn't require reduction for the new aerosol forcing data. That is because the AR4 uncertainty ranges, which Aldrin used as his prior distributions, are just about wide enough for the observational data used by his model to overwhelm the original AR4 aerosol forcing estimates. Hence his final posterior direct + indirect aerosol estimate of -0.7 W/m^2. This is AF, not RF, since the observed NH and SH temperatures on which it is based reflect all effects of aerosols - they cannot and do not distinguish the main RF component from the total AF. When Aldrin adds a fixed further negative indirect aerosol forcing to his prior, to allow for the possible existence of a cloud lifetime (2nd indirect) effect, the upper (least negative) end of the prior range becomes too negative for the data to overwhelm the prior, biasing the aerosol forcing estimate (relative to what the observational data is implying) towards more negative values - hence the increase in the ECS estimate when he does so.

I'm not sure whether the new black carbon forcing estimates would, in theory at least, make much difference - the BC effect is already in the observed temperatures.

Aldrin's use of a uniform prior for ECS will have biased up his mean and 95%/97.5% bound estimates for sensitivity, but probably doesn't make much difference to his aerosol forcing estimates.


doskonaleszare said

"And it seems that the posterior values of the total RF in (unpublished) Skeie et al are very close to the AR5 SOD estimates, which isn't very surprising considering Myhre is their co-author."

The Skeie et al total aerosol forcing estimate is -0.8 W/m^2, between the SOD main composite central estimate of -0.9 W/m^2 and its satellite-observation only central estimate of -0.73 W/m^2. Skeie's -0.8 W/m^2 posterior mean estimate compares with a prior mean of -1.65 W/m^2 (90% range approximately -3.0 to -1.0 W/m^2). With the posterior mean less negative than the 95% point of the prior, the observational data must be pointing to an even less negative total aerosol AF estimate than -0.8 W/m^2, with the wing of the prior pulling the posterior to a more negative mean.

David Young said...

I agree with Nic Lewis about GCMs. There is little reason to take them seriously at conditions different from today's, and even over the last 30 years the track record is not that good. I will not go through the laundry list of issues, as they have been discussed elsewhere by myself and Gerry Browning. Considering the billions of dollars invested, it's a low return on investment. BTW, I don't think anyone has found the tropical upper-tropospheric hot spot predicted by all models yet. Maybe investing in better data would make sense. Observationally based methods are far more convincing for me.

BBD said...

James:

I haven't seen the Hansen paper, but expect that he will be using a somewhat too cold estimate for the Last Glacial Maximum (ie, 6C colder than modern rather than the newer estimate of about 4C). As is a lot of the paleosense stuff.

Isn't the problem here that MARGO may be biased high, especially for the tropics? Isn't this what David Lea was getting at in comments at RC? If Lea is correct, then the Hansen estimate *from the LGM* (Hansen & Sato 2012) may be more accurate than you allow.

HS12 uses an estimated LGM/Holocene difference of 5C. The latest Hansen study extends the analysis right across the Cenozoic, but estimates LGM/Holocene difference as 4.5C.

BBD said...

Looking back, I seem to have skipped a link to Hansen & Sato (2012).

David Young said...

James, Is it possible to reconstruct with any accuracy LGM global temperatures without the use of models? Is the proxy data geographically distributed over the whole planet?

KarSteN said...

@nic-lewis:
As a matter of fact, the inverse AF estimates in the AR5-SOD are in the ballpark of -1.1W/m2. Whether you discount some of these studies or not is irrelevant to me. Feel free to submit your own papers if you know better. As long as they are not disputed in the literature, their results are valid (unless they have major coding errors). I also disagree with your criticism (statistical flaws in one paper aside) or praise of particular studies. Take for example the Ring et al. ACS 2012 paper: they allow for natural variations, which tend to limit the aerosol forcing in their model. Unfortunately, the model won't tell you whether these natural variations are just a fluke or not (as no physics are involved). However, you seem to like it regardless.

Regarding your calculation, in my point of view it contains a major flaw, as you are using instantaneous forcing values instead of a temporally integrated forcing in order to deduce ECS (in terms of surface temperature). You can of course try to adjust for that, by assuming a reasonable value for the Earth's heat uptake. Your value is however an instantaneous number, while you'd indispensably be required to use an integrated value which reflects the counterbalancing long-term effects of the strong volcanic forcing pulses (which remain in the system, yet unaccounted for in your analysis). It is irrelevant whether your start and end periods aren't apparently affected by volcanic eruptions themselves. While true for their short-term effects, it's wrong for their long-term effects (see e.g. Gleckler et al. Nature 2006). That is one of the obvious reasons why GCMs tend to produce higher ECS estimates. Keep also in mind that the anthropogenic aerosol forcing was likely a bit stronger in the 1950s-70s period than in the 1980s-90s period. Another factor which has to be accounted for. I therefore strongly caution against such simplified ECS estimates. They are certainly wrong and biased too low.

You may circumvent this issue by using a more sophisticated EBM, or by focusing on the TCR over a shorter period of time. Unfortunately, the signal-to-noise ratio would decrease considerably. Getting rid of short-term fluctuations helps. For example, one can test the land temperatures as a proxy for an intermediate TCR/ECS value. Assuming a constantly increasing forcing from 1970 onwards (anthropogenic aerosols slightly positive, but counterbalanced by the El Chichon and Pinatubo eruptions), with the GHG forcing (1970-2010) taken to be 1.8W/m2 and a corresponding land temperature increase of roughly 1K, the intermediate TCR/ECS value would be 2K. The global temperature increased 0.6K (1970-2010), which yields a TCR of 1.23. This number is however likely to be contaminated by the mentioned volcanic eruptions (less important for land temperatures). Given that the land temperature TCR/ECS value is at least 2K, I regard your low ECS estimate to be wrong. I don't dare to say how far the land-based TCR/ECS value has to be corrected to obtain the true ECS, but 2.4-2.8 seems a reasonable range. Discarding all other feedbacks, the global number would translate into an ECS of 2.0-2.5 ... at least.
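Spelled out, that calculation is simply the ratio of warming to forcing scaled by the forcing for doubled CO2 (a sketch using my numbers above and the standard F_2x of 3.7 W/m2):

```python
# TCR-style estimate: warming per unit forcing, scaled to a CO2 doubling.
F2X, dF = 3.7, 1.8     # W/m^2; dF is the assumed 1970-2010 GHG forcing change
print(F2X * 1.0 / dF)  # land dT ~1.0 K   -> intermediate TCR/ECS ~2 K
print(F2X * 0.6 / dF)  # global dT ~0.6 K -> TCR ~1.23 K
```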

I strongly recommend consulting the corresponding literature on volcanic impacts on climate and oceans (e.g. Robock et al. RG; Stenchikov et al. JGR 2009; Cole-Dai CC 2010; Timmreck CC 2012). Of course you are always entitled to believe whatever you want, may it relate to volcanoes or tropospheric aerosols. You shouldn't be too surprised, however, that none of the mainstream experts gets excited about your numbers. Applause from Judith Curry is a safe sign that something went seriously wrong.

P.S.: In my previous post I erroneously referred to Jones et al. GRL 2007. It correctly reads Jones et al. JGR 2007.

Anonymous said...

But aren't we seeing an accelerating trend in the long-term warming recently, rather than a slowdown?

Lean/Rind (2008) and Foster/Rahmstorf (2011) have shown that the 00s have the same trend as the 80s and 90s once you adjust for volcanoes, sun and ENSO.

And according to the Wild (2012) review of historical aerosol forcing https://www1.ethz.ch/iac/people/wild/WildBAMS_2012.pdf there was a brightening in the 80s and 90s which turned into a dimming in the 00s.

This suggests that the underlying long-term trend was higher in the 00s. And of course, the ice melting is another indicator that shows acceleration.

Paul S said...

Eli,

The Bond et al. direct effect estimate for BC is derived using observational estimates of BC AAOD (Aerosol Absorption Optical Depth). They take modelled relationships between AAOD and direct RF, then scale by the difference between modelled and observed AAOD (in all cases models produce lower AAOD, particularly in South Asia, East Asia and Africa). Thus the direct estimate is not at all dependent on any emissions inventory.
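Schematically (my paraphrase of the method just described, with invented numbers), the scaling amounts to:

```python
# Schematic of the AAOD-based scaling of the BC direct effect
# (all numbers below are invented for illustration):
rf_model = 0.3      # modelled BC direct RF, W/m^2
aaod_model = 0.002  # modelled BC absorption aerosol optical depth
aaod_obs = 0.005    # observationally derived AAOD (models run low)
print(rf_model * (aaod_obs / aaod_model))  # 0.75 W/m^2, scaled-up estimate
```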

In order to estimate indirect effects that accompany the direct effect, Bond et al. then scale their emissions inventory/modelled aerosol burden to match this AAOD data. So, while they may have used an inventory which underestimates emissions, for the purposes of the paper they do scale these to match observations pertaining to atmospheric BC burden. It is therefore unlikely that their total BC forcing estimate would be altered much through the introduction of improved emissions inventories.

However, if some sources of BC are being more underreported than others, that could affect the apportionment of BC forcings to different sectors, which could have a significant effect on estimates for total co-emitter forcing.

The example you give is open burning, which is the BC source type which gives the largest negative forcing when evaluated with co-emitters.

PaulB said...

I agree that Nic Lewis is not justified in using the -0.73 value for aerosol forcing. The SOD does not present it as a best estimate, and on examination it's clear that it's not supported by the references. I wrote about it at some length here.

Anonymous said...

Dean,

Lean/Rind (2008) and Foster/Rahmstorf (2011) adjust for TSI, which may be different from the "sun" (a Maunder minimum would give a definite answer, at the latest), and they adjust for the ENSO index, which IS different from the ENSO effect on temperature.
El Nino leftover warm water pools drift into different parts of the oceans and continue to warm for years, an effect not characterized by the ENSO index, but clearly visible in sea surface temperature maps and in the temperature step changes and plateaus following El Ninos. The studies severely underestimate the ENSO regression coefficients and are very wrong.

aaaaa said...

"Lean/Rind (2008) and Foster/Rahmsdorf (2011) adjust for TSI, which may be different from the "sun""

I don't know about LR08, but FR11 adjust for an ~11-year cycle in the temperature record. They don't assume it's only TSI.

"El Nino leftover warm water pools drift into different parts of the oceans and continue to warm for years"

Yet after the 1998 El Nino global temperatures plummeted. To the extent some warming influence may have remained after the El Nino, it's evidently far smaller than the total temporary warming the El Nino produced which subsequently faded away. That goes for La Ninas too. After a La Nina GAT shoots back up.

It's clear that these temporary ENSO departures bias short-term trends in GAT.

Correcting for this using the ENSO index is at least addressing the matter. Far better than the alternative of not correcting for ENSO or the solar cycle *at all* which assumes the GAT trend since 2002 is not influenced by either.

Alec Rawls said...

James leaves out one of the changes in estimated forcing that would lower the sensitivity estimate. In addition to reduced aerosol cooling and increased black carbon warming, there is the IPCC's new admission of strong evidence for some mechanism of solar forcing substantially stronger than TSI (page 7-43):

"Many empirical relationships have been reported between GCR or cosmogenic isotope archives and some aspects of the climate system (e.g., Bond et al., 2001; Dengel et al., 2009; Ram and Stolz, 1999). The forcing from changes in total solar irradiance alone does not seem to account for these observations, implying the existence of an amplifying mechanism such as the hypothesized GCR-cloud link."

Chapter 7 goes on to assess the GCR-cloud mechanism as too weak to have any significant effect on temperature but this leaves intact the admission that the solar-climate correlation evidence implies the existence of SOME such mechanism ("AN amplifying mechanism") even if we don't know what it is.

TSI is the only solar variable included in the "consensus" climate models so they have too little 20th century forcing (and hence too high a sensitivity) on this front as well.

Anonymous said...

"As a matter of fact, the inverse AF estimates in the AR5-SOD are in the ballpark of -1.1W/m2. Whether you discount some of these studies or not is irrelevant for me. Feel free to submit your own papers if you know better. As long as they are not disputed in the literature, their results are valid (unless they have major coding errors)."

Personally, I prefer to read the inverse studies involved, engage my brain and make an intelligent assessment of how capable they are of providing a valid estimate of aerosol forcing. For instance, one of the studies, Gregory et al 2002, does not produce any estimate of aerosol forcing. It merely states what estimate thereof it uses. And that (highish) estimate was derived from HadCM3 AOGCM simulations, not observations. On your approach, that is a valid inverse estimate. But it clearly is no such thing, hence I reject it.

"It is irrelevant whether your start and end period isn't apparently affected from volcanic eruptions itself. While true for their short-term effects, it's wrong for their long-term-effects. … That is one of the obvious reasons why GCMs tend to produce higher ECS estimates."

I am aware that volcanic eruptions have longer-term effects. But that seems of little relevance to my heat-balance-based climate sensitivity estimate.

I prefer to make my own estimates of such effects using EBM simulations and observationally-constrained estimates of climate sensitivity (S) and effective ocean vertical diffusivity (Kv) rather than taking results of studies that use GCM simulations. Most GCMs appear to have excessive ocean heat uptake (OHU) (see, e.g., Hansen, 2011, ACP), as well as climate sensitivities that IMO are excessive. I have done EBM simulations based on Pinatubo magnitude volcanic forcing. I used Kv= 0.6e-4 m^2/s (in line with Hoffert's original 1980 estimate) or treble that figure (typical observational and inverse estimates are in this range). To be consistent with the results of my heat balance estimate of sensitivity, I used S= 1.63. Pinatubo effectively finished its eruption nearly 9 years before the start of my final 2002-2011 decade.

On the basis of my simulations, during 2002-2011 Pinatubo would have depressed the mean global surface temperature by about 0.02 K, and increased mean OHU by about 0.04 W/m^2. Not a large effect. And there was a smaller volcanic eruption a decade or so before my starting 1871-1880 period, which would have had perhaps 25%-30% as much effect during that decade. So, if we took out the effects of both volcanoes, the change in mean global surface temperatures between the two decades would have been about 0.015 K (2%) higher, and the increase in the change in { forcing net of OHU } would have been about 0.03 W/m^2 (also 2%) higher. Lo and behold, these two effects cancel out.

So these decade-earlier volcanic eruptions have no net effect at all on my climate sensitivity estimate. Which is actually obvious from the physics involved, if you think about it.
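For anyone who wants to check the plausibility of these numbers, here is a minimal sketch of this kind of calculation (illustrative only: the column discretisation and the forcing pulse shape below are simplifying assumptions, not the actual EBM I used):

import numpy as np

# A minimal mixed-layer-plus-diffusive-ocean EBM sketch, with S = 1.63,
# Kv = 0.6e-4 m^2/s, and an assumed Pinatubo-like pulse of -3 W/m^2 peak
# with a ~1-year e-folding decay.
F2X, S, kv = 3.7, 1.63, 0.6e-4
lam = F2X / S                        # feedback parameter, ~2.3 W/m^2/K
rho_c = 4.1e6                        # seawater heat capacity, J/m^3/K
h_mix, dz, n_deep = 100.0, 100.0, 30 # 100 m mixed layer, 3 km diffusive ocean
dt, n_steps = 86400.0, 25 * 365      # one-day steps, 25-year run
T = np.zeros(n_deep + 1)             # T[0] mixed layer, T[1:] deep layers
T_s, N_s = [], []
for step in range(n_steps):
    F = -3.0 * np.exp(-step / 365.0)            # volcanic forcing, W/m^2
    N = F - lam * T[0]                          # TOA imbalance = heat uptake
    flux = kv * rho_c * (T[:-1] - T[1:]) / dz   # downward diffusive fluxes
    T[0] += dt * (N - flux[0]) / (rho_c * h_mix)
    T[1:-1] += dt * (flux[:-1] - flux[1:]) / (rho_c * dz)
    T[-1] += dt * flux[-1] / (rho_c * dz)
    T_s.append(T[0]); N_s.append(N)
yrs = np.arange(n_steps) / 365.0
dec = (yrs >= 10) & (yrs < 20)       # a decade starting ~9 years after the pulse
print(np.mean(np.array(T_s)[dec]))   # surface anomaly, ~ -0.02 K
print(np.mean(np.array(N_s)[dec]))   # extra heat uptake, ~ +0.04 W/m^2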

David Young said...

Karsten (or whoever you are),

Many peer reviewed publications have turned out to be completely wrong. You may disagree with Nic, but don't expect the argument from the perfection of the literature to work for me.

Statistics is not a strong suit of climate scientists so far as I can see, so on this score I would tend to believe Nic over you, who have no credentials so far as I can tell.



s21519 said...

Maybe we should all stop assuming that climate sensitivity is a constant and instead consider the possibility that sensitivity may vary with temperature.

Paul S said...

Gregory et al 2002, does not produce any estimate of aerosol forcing. It merely states what estimate thereof it uses. And that (highish) estimate was derived from HadCM3 AOGCM simulations, not observations.

That's not what the text says:

'We derive limits for the forcing (Table 1) by comparison of the spatiotemporal patterns of temperature change in observations and experiments with the Hadley Centre AOGCM.'

That is, they compared patterns of change in a GCM with those in observations, and the best fit occurred when scaled with an aerosol forcing of -1.01 +/- 0.6 W/m^2. If you think about it, it doesn't make sense that they would state a GCM-produced forcing estimate with such large uncertainties.

The critique of Shindell & Faluvegi 2009 is also without merit, where it states:

A second does not estimate aerosol forcing over 90S–28S, and concludes that over 1976–2007 it has been large and negative over 28S–28N and large and positive over 28N–60N, the opposite of what is generally believed.

The geographical shifts of emissions sources over the past 30 years, with reductions in N.America & Europe and increases in Africa & South Asia, mean that we actually do expect the zonal aerosol forcing pattern described. The strength of what we should expect is not fully understood, but this potentially ties in with my earlier post concerning a large underestimation of open burning emissions. The largest sources of open burning, and of model-underestimated AAOD, are in the tropics.

Paul S said...

On a general point concerning inverse aerosol forcing estimates, most of these are necessarily estimating the forcing change from the late 19th Century or early 20th Century. In order to make them comparable to IPCC forcing estimates (from 1750/1765) you would need to add an adjustment factor to account for forcing change prior to the scope of the study. Typically this would result in an addition of about -0.1 to -0.2 W/m^2.
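For example (numbers purely illustrative):

# Converting a hypothetical inverse estimate with a ~1900 baseline to the
# IPCC's 1750 baseline by adding an assumed pre-study forcing change.
f_study = -0.8              # hypothetical inverse aerosol forcing estimate, W/m^2
pre_study = -0.15           # assumed mid-range of the -0.1 to -0.2 quoted above
print(f_study + pre_study)  # -0.95 W/m^2, now comparable to 1750-based values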

Anonymous said...

NnN

La Nina is NOT the opposite of El Nino.

You have to consider that La Nina upwells cold water. Cold water sinks back once the upwelling stops.

El Nino sloshes warm water from the Pacific warm pool eastwards. Much of the warm water remains on the surface after the El Nino and is then moved into different parts of the oceans.

Bob Tisdale explained this much better in his video and attributes temperature step functions to the ENSO process:

http://bobtisdale.files.wordpress.com/2012/05/4-rest-o-world.png

Howard said...

I agree with the Rabbit that global averaged forcings and global average temperature are immature measures of climate.

They are certainly not geologically sound factors in my experience.

As an admitted climate dilettante, I am puzzled that scientists equate *intensely local* insolation-driven climatic forcings (e.g. Pleistocene glacial Milankovitch cycles) to LLWMGHG forcings.

Am I wrong in thinking that physically different forcing mechanisms produce unique and measurably different feedbacks?

If this line of question is completely and fundamentally bollocks, I'd like to hear it.

Thanks,


KarSteN said...

@Nic Lewis:
"Personally, I prefer to read the inverse studies involved, engage my brain and make an intelligent assessment of how capable they are of providing a valid estimate of aerosol forcing."

I also have my reservations about inverse modeling studies. Some need to be rejected indeed (I mentioned one already), others might be useful. Unfortunately, I can't see how you'd be able to obtain a valid estimate of aerosol forcing given your dismissal of GCMs and expert opinion likewise. Re satellite estimates, I still wonder what your opinion on the revised Bellouin et al. ACPD 2012 estimate is (non-adjusted total RF of -1.0W/m2).

Re volcanoes ... the modelled impact on OHC due to volcanic eruptions for the 1955-1998 period alone is -0.11 W/m2 (HadCM3). Krakatoa or Katmai won't have been of lesser impact (although with some counterbalancing effects from early 19th century eruptions). The observed 0-2000m OHC between 1960-2010 provides another 0.25 W/m2, which makes it 0.35 W/m2 for this period. According to your estimate, the Earth's heat uptake in the remaining 1880-1960 period is hence a mere 0.08 W/m2. A tough assumption, to put it mildly. Note, I did nothing more than pull a few numbers together straightforwardly - a back-of-the-envelope calculation of the simplest sort. Take GISS instead of HadCRUT4, take a more objective aerosol forcing, and you get closer to where the mainstream is ...
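To make the arithmetic explicit (the full-period mean uptake figure below is my assumption for illustration; the others are the numbers above):

# Back-of-the-envelope apportionment of mean heat uptake between sub-periods.
full_mean, full_years = 0.18, 130     # assumed 1880-2010 mean uptake, W/m^2
recent_mean, recent_years = 0.35, 50  # volcanic legacy + observed OHC, 1960-2010
early_years = full_years - recent_years
early_mean = (full_mean * full_years - recent_mean * recent_years) / early_years
print(round(early_mean, 2))           # ~0.07 W/m^2 left for 1880-1960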


@David Young:
No worries, I am fully aware of what the papers are saying. If you prefer to listen to stats experts rather than climate experts, you are welcome to do so. There are however some statisticians who would object to Nic's analysis as well. Not sure you would trust them then ...

David Young said...

Karsten, I still find it hard to take you seriously. I can read the literature too, and I find that generally the data is noisy and the models questionable. I have 32 years' experience in solving the Navier-Stokes equations, and I also respect Gerry Browning, who actually rigorously proves his assertions. What is interesting about auditors such as Nic, Browning, and McIntyre is that they show a willingness to discuss and defend their conclusions in detail, and an honesty that is refreshing. And they are willing to use their real names. What's the deal with people who expect to be taken seriously and hide behind the cloak of anonymity? It doesn't inspire confidence.

Perhaps you don't know that there is a growing realization that the literature is pretty unreliable in fields such as medicine and climate science. The reasons are complex, but suffice it to say that there is a well documented positive results bias that should instill a sense of caution, especially where there is a strong overlap between science and monetary or ideological interests.

Carrick said...

Karsten: "Unfortunately, I can't see how you'd be able to obtain a valid estimate of aerosol forcing given your dismissal of GCMs and expert opinion likewise"

Um.. Why do you suppose you need GCMs or expert opinions (either one or separately) in order to get a "valid estimate of aerosol forcings"?

GCMs aren't used to derive aerosol histories (they are a consumer of them, in fact), and as to "expert opinions"... I take it knowledge is inaccessible in your worldview without a gatekeeper?

James Annan said...

Maybe an issue of terminology, GCMs use histories of aerosol loadings/emissions but calculate the resulting forcing.

James Annan said...

That Hansen and Sato paper says 5±1C, unless I'm mistaken.

I have done some sensitivity tests wrt the tropical SST (will be in the final LGM reconstruction paper). It actually doesn't make all that much difference, perhaps because there is a lot of other data. But still, I agree it would help to bring the estimates a bit closer together.

Anonymous said...

Karsten (or whoever you are)

and

And they are willing to use their real names. What's the deal with people who expect to be taken seriously and hide behind the cloak of anonymity? It doesn't inspire confidence.

It's a minor issue, since I'm all in favour of internet anonymity if people want that. However, Karsten isn't anonymous. Just click on his profile. You can then look him up:

http://www.geog.ox.ac.uk/staff/khaustein.html

Paul S said...

Howard,

With regard to comparison with the LGM climate to inform understanding of future WMGHG warming, this is usually done through equilibrium rather than transient experiments. Rather than asking how climate may have evolved from what might be considered an initial "spark" due to Milankovitch cycles, the question asked is: why, and by how much, are current climate conditions different from those at the LGM? It's worth noting that current insolation conditions are not so different from what they were at the LGM. The contrasting states are thus considered to be a function of the boundary conditions of each period.

The forcing difference between the two states is derived from the change in boundary conditions. Models differ on the quantity, but it seems to total about -9 W/m^2 (or +9 W/m^2, depending on your perspective). There are three main components to this total forcing, of roughly equal magnitude: land ice albedo, land elevation and WMGHGs. This, incidentally, is where James gets the 1.7ºC-per-doubling BoE calculation for warming due to 2xCO2: 4ºC LGM GAT change / 9 W/m^2 forcing * 3.7 W/m^2 2xCO2 forcing.
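Spelled out (the arithmetic only):

# The back-of-the-envelope LGM scaling just described:
dT_lgm, dF_lgm, F_2xCO2 = 4.0, 9.0, 3.7   # K, W/m^2, W/m^2
print(dT_lgm / dF_lgm * F_2xCO2)          # ~1.6C per doubling, i.e. the "1.7" above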

The reason this estimate is very likely wrong relates to your general point about different forcing types leading to different feedbacks. It seems to me that the climate efficacy of different forcings is the key diagnostic here. If you wanted to alter the GAT of Earth, the most efficient method of doing so would be to apply a radiative forcing over the oceans. A similar-magnitude forcing only over land would produce a much lower temperature change. In terms of the LGM forcings, only WMGHGs effect a significant forcing over the oceans. While the local effects of ice albedo and land elevation change are huge (because the local forcings are huge, particularly for ice albedo), their widescale climate effects are limited by their location on land. Hence, despite representing only 1/3 of the total forcing, WMGHGs are probably responsible for most of the GAT difference between the Holocene and LGM.

KarSteN said...

@Carrick:
Accessing knowledge and interpreting it correctly are two different things. Expertise clearly helps in the latter case. Of course you don't like the experts' message (and you never will). Nic's musings (which I personally appreciate) seem to be easier for you to digest.

@SteveF:
Thanks. I am in favour of anonymity as well. It's all about content. Sure enough, as contrarians will always blame you for "cloaking behind whatever they think it is" (David Young as prime example), I decided to leave anonymity behind. Some still can't connect the dots.

Alex Harvey said...

James,

You've implied above that the paleo record excludes very low sensitivities. Exactly how low you haven't specified, but in so far as you agree that the ECS at the LGM can't be the same as in the present climate, and you must agree there are enormous gaps in our knowledge about the climate in the LGM, it seems to me we are forced back into, to some extent, putting all faith back into the GCMs at this point, in order to assess the nonlinearities. That's one of the ideas I've taken from your latest sensitivity estimate, whether rightly or wrongly.

So if at this point all GCMs hypothetically turned out to share similar flaws - e.g. regarding the unknowns for which there's essentially no data - the responses at the LGM of water vapour, clouds, aerosols etc - wouldn't that undermine validated model approaches to estimating climate sensitivity from even the LGM? There are still some things about the LGM that we simply assume in faith because there's just no data, right?

Couldn't your recent estimate, therefore, in principle be vulnerable to the possibility of all models being wrong in the same way? Doesn't this imply, therefore, that actually, nothing has really been 'excluded'?

Carrick said...

KarSteN: Of course you don't like the experts message (and you never will). Nic's musings (which I personally appreciate) seem to be easier to digest for you.

Since you are apparently a relatively new postdoc, I will make the suggestion to stay away from trying to read other people's minds or assuming motives for a particular question.

James Annan: Maybe an issue of terminology, GCMs use histories of aerosol loadings/emissions but calculate the resulting forcing.

Thank you, that was helpful.

Dr Norman Page said...

It is increasingly clear that the earth has entered a cooling trend which will last until 2030 and probably beyond. It is also clear that the temperature sensitivity to CO2 is below the low end of the model ranges. The models are simply structured incorrectly, so that their average is an average of improperly structured models. For a discussion of this see my post “Global Cooling - Timing and Amount” on my blog
http://climatesense-norpag.blogspot.com/
An earlier post on that site, “Global Cooling Climate and Weather forecasting” (11/18/12), provides links to the relevant data suggesting cooling.
The best discussion of temperature sensitivity to CO2 is in John Kehr’s The Inconvenient SKEPTIC; on page 230 he persuasively estimates the sensitivity to a CO2 doubling from 380 ppm to be 0.7 degrees. Look at the Eemian Interglacial cooling-phase ice core temperature v CO2, for example, and also the SST temperature trend v CO2 for the last ten years, both of which would produce negative sensitivities. The modellers simply picked a time frame which produced a sensitivity to match their preconceptions.

KarSteN said...

Carrick:
Priceless reply (as expected). Your "logic" speaks volumes!
nuff said ...

Anonymous said...

lindaserena
You give no argument against my hypothesis that the long-term warming trend was faster during the 2000s. That the Foster/Rahmstorf (2011) method is not perfect is a truism.

BTW, how will IPCC handle the Brysse et al? http://www.sciencedirect.com/science/article/pii/S0959378012001215

They showed a low bias in IPCC evaluations for sea level rise, CO2 emissions, sea ice decline, permafrost melt and carbon feedbacks, rainfall intensity and northern hemisphere snow cover.

Perhaps the numbers in the final IPCC release generally should be revised upwards a bit to account for the ESLD ("erring on the side of least drama") effect? Regarding climate sensitivity, I speculate that the ESLD effect might cause underestimation of the cloud feedback and of the negative aerosol forcing. Both are very uncertain, and it is tempting to put them lower or even at zero, as some papers have done. Effects that you didn't account for or know about are more likely on the positive side.

Anonymous said...

Dean,

if you look at Bob Tisdale's graphic,

http://bobtisdale.files.wordpress.com/2012/05/4-rest-o-world.png

warming has gone up in steps with El Ninos and remained constant thereafter.

I would interpret this as a sequence of step functions followed by multiyear declines (as El Nino leftover warm water pools slowly lose their heat), superposed on a warming trend caused by other natural and anthropogenic causes, which in sum just keeps the temperature about constant after each step.

The regression requires that the explanatory variables be linearly related to their temperature responses.

Therefore you would have to feed the regression not with the ENSO index but with a function that is linear in the temperature response of the ENSO process - something like a step function followed by a slow multiyear decline for each El Nino event, perhaps with the ENSO index wiggles on top, though the latter do not matter much on multiyear timescales.

You would then certainly compute a MUCH higher ENSO regression coefficient, explaining a significant part of the warming since 1977.

The remaining trend due to anthropogenic and other natural forcings (AMO also not yet included) would then be reduced accordingly. That remainder may have increased a little recently or may have not.
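A sketch of what I mean (entirely illustrative: the response kernel, the synthetic data and all the coefficients below are assumptions, not a fitted model):

import numpy as np

# Regress temperature on the ENSO index convolved with an assumed impulse
# response (a jump followed by a slow multiyear decay), rather than on the
# raw index itself. Everything here is synthetic and for illustration only.
rng = np.random.default_rng(0)
n = 480                                   # 40 years of monthly values
enso = rng.standard_normal(n)             # white-noise stand-in for an ENSO index
kernel = np.exp(-np.arange(60) / 36.0)    # assumed ~3-year e-folding decay
response = np.convolve(enso, kernel)[:n] / kernel.sum()
t = np.arange(n) / 12.0
temp = 0.017 * t + 0.15 * response + 0.05 * rng.standard_normal(n)
X = np.column_stack([np.ones(n), t, response])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
print("trend K/yr:", beta[1], "ENSO coeff:", beta[2])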

Anonymous said...

lindaserena,
Bob Tisdale is not a serious reference. And neither can I be bothered delving into your speculations, which look pretty random. If someone here with some real knowledge has a good argument for why my reasoning was flawed, I'd like to know.

Anonymous said...

Dean,

The initial "speculation" was Rahmstorf/Forster's assumption of a linear relationship between the ENSO index and the temperature response of the ENSO process.

That speculation is unproven, unreferenced and obviously invalid.

And this would have been the basic prerequisite to apply a linear regression.

The ENSO index only considers parameters of a certain area of the tropical Pacific, and not leftover warm water pools elsewhere. I don't know if there is additional "real knowledge" or a "serious reference" discussing this issue besides Tisdale's work, but I guess if not, it would have been up to Foster/Rahmstorf to try to explain why sea surface temperature maps obviously fail to support their speculation.

Carrick said...

Karsten, aren't you being a trifle childish and petty here? People know who you are, and you still act this way in public.

There's nothing priceless about that, really it's a bit sad.

EliRabett said...

Sorry James, you are going to have to put out your own damn fire. Curry and the wursts do not do subtle

James Annan said...

To be honest, Curry doesn't feature on my radar much these days - too much hot air, not enough beef. On the other hand, I did have a very nice curry at the weekend (home-made with Christmas leftover duck, since you ask).

James Annan said...

Dean, agreed re: Tisdale, and I might start culling some of the lower-signal comments. Note however that all the literature implies a modest acceleration of the warming rate - faster for the higher sensitivity values. We simply aren't seeing it.

Carrick said...

Regarding "step functions", is it really that exotic to suggest that climate system might have metastable states that it oscillates about?

TLS certainly seems to do that.

(The apparent steps are volcanic eruptions of course. This leads to some wildly speculative thoughts that I refuse to publicly own.)

I've seen this sort of behavior in other (physical) complex systems before, where when you "bang" it, it finds a new, lower minimum.

I believe one name for this phenomenon is "alternative" (or "alternate") stable states. A good physics example of this is the double-well potential.
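A toy simulation of that double-well picture (purely illustrative):

import numpy as np

# Langevin dynamics in the double-well potential V(x) = x^4/4 - x^2/2:
# weak noise keeps the system near one minimum until a single large "knock"
# pushes it into the other well, where it then stays.
rng = np.random.default_rng(1)
dt, n = 0.01, 20000
x = np.empty(n)
x[0] = -1.0                                # start in the left well
for i in range(1, n):
    drift = -(x[i-1]**3 - x[i-1])          # -dV/dx
    kick = 3.0 if i == n // 2 else 0.0     # one big perturbation halfway through
    x[i] = x[i-1] + drift * dt + kick + 0.1 * np.sqrt(dt) * rng.standard_normal()
print(x[:n//2].mean(), x[n//2:].mean())    # ~ -1 before the knock, ~ +1 after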

PeteB said...

Hi James,

I'm interested in your reaction to http://www.realclimate.org/?comments_popup=13891#comment-312900

from David Lea

"The MARGO data is dominated by older foram transfer function estimates, which even its most ardent practitioners would agree do not record tropical changes accurately. This is an important point that is affecting a number of recent estimates of sensitivity using MARGO data. "

and his follow on post " if you look at Fig. 2 in Hargreaves et al, the observational band for LGM tropical cooling they use, based on MARGO, is -1.1 to -2.5 deg C, equating to a sensitivity of about 2.5 deg. Using an estimate of the mean tropical cooling based on geochemical proxies of 2.5-3 deg would yield a sensitivity closer to 3.5 deg (but perhaps Julia will comment)."

James Annan said...

Carrick,

The climate system has been pretty stable for 10,000 years. I would agree that once you start looking at much longer time scales, the whole concept of an equilibrium climate looks increasingly dubious.

PeteB, yes, I've had some email discussion with David, and the LGM paper has been slightly amended as a result. Interestingly, the overall LGM result isn't much changed by altering the tropical data. But we are very much at the mercy of the proxy people as they reinterpret their data :-)

guthrie said...

James - re. acceleration of warming - I thought the models were still generally too close together just now to tell; maybe by 2020 we'll have a better idea.

There's also the solar output decreasing, and the one I'm really curious about - oceanic heat content and ice melting. There's a massive amount of energy being used to melt Arctic ice, and I'm getting the unscientific impression that the atmospheric circulation seems designed to funnel all the heat that we should be detecting at lower latitudes up into the lovely heat sink at the north pole.

James Annan said...

Well, it's true that any acceleration would be hard to spot. But quite a sustained steadying, together with the limited ocean warming and the changes to forcing estimates, all points in the same direction.

Melting Arctic ice isn't a significant heat sink.

Magnus said...

Better put than me:
http://dotearth.blogs.nytimes.com/2013/02/04/a-closer-look-at-moderating-views-of-climate-sensitivity/?comments#permid=53

"Raymond Pierrehumbert of the University of Chicago sent this note within a group e-mail exchange:

All I have to say is -- remember the Pliocene. If the geochemists are right that the CO2 was only 400ppm then, and if that is how different a 400ppm world is from the present, then an 800ppm world is beyond contemplation. There are multiple, conflicting lines of evidence in climate sensitivity, and nothing has really ruled out the possibility of a tail that extends over 4C for a doubling, and that's without even allowing for some kind of carbon cycle feedback that causes land to turn from a sink to a source of CO2. Sure, there is more evidence favoring 4C or under than there is favoring the fatter tail, but the tail has a nonzero (and unquantified) probability and that's what low-probability catastrophic events are all about. The policy community has no reason to stop thinking about such things.

But even a 4C per doubling world gives plenty of cause for panic, and the PETM does tend to point towards that."

Looking too close in time, is it not possible that something big is being missed?

PeteB said...

"without even allowing for some kind of carbon cycle feedback that causes land to turn from a sink to a source of CO2."

that seems to be a big advantage of the paleo estimates over estimates based on the current century - that any non-linear effects (carbon cycle changes, ice albedo changes, etc.) are more likely to show up

James Annan said...

As I replied directly to that email:

I don't understand Ray Pierrehumbert's reference to "conflict" between different estimates. I'm not aware of any plausible analysis that does not assign high probability at least to the range 2.5-3 (and a bit beyond). To use an analogy, there is no conflict between my new bathroom scales that claim a precise value of 81.3±0.1kg and my old ones that say 81±0.5 (or even someone who, on eyeballing me, says "80±5").
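In numbers, this is just the standard inverse-variance combination of independent estimates:

# Combining the three "scales" from the analogy: consistent estimates of
# different precision are complementary, not in conflict.
estimates = [(81.3, 0.1), (81.0, 0.5), (80.0, 5.0)]   # (value, 1-sigma)
weights = [1.0 / s**2 for _, s in estimates]
mean = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
sigma = (1.0 / sum(weights)) ** 0.5
print(f"{mean:.2f} +/- {sigma:.2f} kg")   # ~81.29 +/- 0.10, dominated by the best scales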

Magnus said...

Methinks the problem then is in how the "media" and "blogs" report on the problem... http://www.realclimate.org/index.php/archives/2013/02/unforced-variations-feb-2013/comment-page-1/#comment-316402

A relatively small issue gets blown up because researchers discuss it, and the "media" fail to describe the weaknesses of single results and fail to discuss the different types of methods... and what kind of effect it might have in the future, let alone how it is represented in the literature in the past and today... (bla bla bla...)

BBD said...

James

But we are very much at the mercy of the proxy people as they reinterpret their data :-)

Exactly. And my understanding of Lea's point is that there is a problem with MARGO and that this impacts not just your estimate, but all estimates which use it.

Until this is investigated in greater detail I am going to remain properly sceptical about relatively low LGM sensitivity estimates.

DocRichard said...

Could it be that we are missing something very important in this debate on the minutiae of the CS figure, and that is, that the climate controversy is now over, as far as policy is concerned?

By this, I mean that lukewarmers, when pressed for a 90% confidence figure for ECS, will usually give a value between 1.2C and 2.4C.

In giving this, they are stating that it is likely that, if we continue BAU up to 2050, the planet is going to cross the 2C increase threshold at some point in the future - a threshold that has been set as something to be avoided.

Of course, some lukewarmers will then begin to argue that the effects of a 2C increase have been wrongly forecast on the alarmist side, or that the precautionary principle can be set aside.

On the other hand, other lukewarmers may well agree that we should continue with the decarbonisation programme that is already started in many enlightened countries and localities.

This is not to call an end to the fun for the climatologists. There is still plenty to argue about, as this comment list demonstrates, but as far as policy goes, someone should tell the policy makers the good news that the reasonable climate skeptics are now on side.

Carrick said...

I think I didn't state my question very clearly.

I'm not that interested in Tisdale's theories about why the Earth is warming.

I was raising a very different issue, which relates to Tisdale's claim that temperature doesn't rise steadily but rather in a stair-step fashion - and I pointed to the satellite measurements of lower stratospheric cooling as evidence for that.

In other physical systems (including ecological ones) you often observe alternative stable states. Amazon rain forests are a good example of that... the trees retain a lot of moisture and keep the system in a metastable rainforest-like environment. If you clear-cut the trees and later try to go back and grow new trees, the system "resists" returning to a rainforest-like state.

In simple English, knock a nonlinear system with hysteresis hard enough and it may not return to its original stability point. In the case of lower stratospheric temperature, those big "knocks" correspond of course to the El Chichón and Pinatubo eruptions.

With that long preamble, regarding "step functions", is it really that exotic to suggest that climate system might have metastable states that it oscillates about?

Seems to me the answer is "no".

Carrick said...

And just to be clear on this: "In simple English, knock a nonlinear system with hysteresis hard enough and it may not return to its original stability point. In the case of lower stratospheric temperature, those big "knocks" correspond of course to the El Chichón and Pinatubo eruptions."

If you look at the lower stratospheric temperature, you do see a large positive temperature perturbation immediately after the volcanic eruption (this is expected, of course), but rather than returning to the same relatively constant temperature, it shifts to a new, lower operating point.

And again to be clear, the attribution for that new lower stratospheric temperature is increased atmospheric CO2 concentration.

Mikel Mariñelarena said...

In my (very humble) view, the beauty of the observational approach to calculating the climate sensitivity, as opposed to paleo or GCMs, is that most terms of the equation are solved with values that we can be quite confident about.

For example, if we look at the past century and a half, we have an *instrumentally observed* TCR of ~0.7C for a radiative forcing that is also quite well known (except for its pesky anthropogenic aerosol component). If memory serves, the latest IPCC estimates of the RF for this period, excluding anthro aerosols, were +3.2 Wm2 or thereabouts.

This is already very close to the 2xCO2 RF of 3.7 Wm2, so it seems to me that we can start to draw some conclusions. For example, to go from a TCR of 0.7C to an ECS of, say, 3C, we would need a very strong negative anthro aerosol RF and a very large ocean delay. How else could we possibly multiply the observed TCR by over 4 with so little remaining RF?

I don't know much about OHU, but I happen to live in the vicinity of a city with very frequent pollution episodes associated with winter thermal inversions, so I do have a feeling for the direct aerosol/temperature relationship. I know that aerosols are a complicated issue but, frankly speaking, I think that the IPCC estimate for the direct effect of aerosols (-0.5 Wm2) is also too strong. During these episodes I am unable to find any difference between the temperatures in the city and the surroundings, or even distant rural areas of the same region.

In the same vein, for such a strong *global* direct effect, we should be seeing a huge effect in the areas where anthro aerosols (especially sulfates) are concentrated: http://en.wikipedia.org/wiki/File:Gocart_sulfate_optical_thickness.png On the GISTEMP or Met Office maps I fail to see any cooling (or even a comparative absence of warming) in China or Eastern Europe. How aerosols can have such a strong effect over the whole globe when we are unable to see any over the localized areas where they actually are has always escaped me. But perhaps I'm totally wrong in this respect and somebody will charitably explain why.

Steve said...

I think DocRichard makes a good point.

In this debate, I think it important that if the scientists involved believe that, despite the uncertainties, we know enough that it makes excellent sense to start on a serious program of reducing CO2 emissions, then that point needs to be made clear. Otherwise the back and forth will be used as evidence that it is unwise to make any policy decision.

Sorry, but the issue is too important for the attitude to be "hey, I'm just a scientist, I don't talk about policy implications."

Paul S said...

Mikel,

You're jumbling different timescales with the TCR estimate: IPCC forcings relate to the difference between 1750 and 2011.

I've got the GISS time series in front of me, so I'll use that. There is ~0.8ºC warming from 1880. Median IPCC anthro forcing without aerosols between 1880 and 2011 comes to about 2.6 W/m^2. TCR over this period is then 0.8 / 2.6 * 3.7 ≈ 1.1ºC. This would generally be considered to indicate an ECS range of something like 1.5 to 2.5ºC.
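Spelled out:

# Back-of-envelope TCR from the numbers just quoted:
dT, dF, F_2x = 0.8, 2.6, 3.7   # K, W/m^2, W/m^2
print(dT / dF * F_2x)          # ~1.1 C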

Regarding your local aerosol conditions: The inversion simply means you're getting more aerosols at ground level. When there is no inversion the aerosols are still around in the same quantity, exerting a climatic influence, just higher up and more dispersed.

The global average direct forcing is a sum of very large positive and negative forcings at regional scales. Positive forcing from absorbing aerosols in some parts of South Asia appears to reach about 10 W/m^2! Your local conditions will likewise consist of aerosol effects with competing forcings, plus there is Ozone to consider. It's not necessarily the case that you should expect cooling due to your local aerosol loading.

I'll also note that aerosols are not as local as most people think. One of the largest regional "pools" of negative radiative forcing to form over the past few decades is off the Pacific coast of North America. This is not due to emissions in the US, Canada or Mexico but particles originating in East Asia which cross the ocean in a matter of days.

KarSteN said...

@Mikel:
"In the same vein, for such a strong *global* direct effect, we should be seeing a huge effect in the areas where anthro aerosols (especially sulfates) are concentrated:"

Your link only shows the 2005-07 average AOD. The spatio-temporal variability is, however, huge! While the US, Europe and Russia were heavily affected by cooling sulfate aerosols between the 1950s and 80s, those very same regions have brightened thereafter (less pronounced in many parts of Eastern Europe). In turn, China is recording ever-increasing aerosol concentrations (though with slightly higher AAOD/AOD ratios), culminating in the last decade, as apparent in the (modelled) GOCART AOD for 2005-07. A sluggish warming with stagnation after 2000 (though not significant) is the result. Together with potentially increased tropical aerosol concentrations, they might even have an impact on the stratospheric background AOD currently (Vernier et al. GRL 2011; Fyfe et al. GRL 2013). The southern hemisphere, on the other hand, is generally less affected. The aerosol concentrations have only slowly risen, resulting in a smooth temperature increase.

More to the point, it is not enough just to live next to a pollution source and check the local thermometer readings (aerosols have a negligible cooling effect over land in winter). There is an entire branch of research devoted merely to studying atmospheric aerosols. I apologize for not being able to provide all the basic details here; there are dozens of textbooks which explain the physical background at any desired level of difficulty. I can, however, point to some figures which may help to support my point with regard to the local effects. Look carefully at the following images (all land temperatures from the recently established BEST dataset) and remember what I just said about the spatio-temporal aerosol distribution.

Cooling 1950-80: US, Europe, Russia, NH
No warming after 2000: China
Smooth trend: South America, SH

Although many more factors play a vital role (see Paul S's reply), one can clearly identify the aerosol effects in the surface temperature data. The oceans are also susceptible to aerosol cooling, as the ocean surface albedo is low (natural background aerosol loading is also low). Needless to say, the interplay of external forcing and internal variability complicates exact attribution. Note, however, that the inappropriately named "AMO" is no mystically driven cycle, as some papers insist on claiming. External forcing is definitely a (major) contributing factor.

Steve Bloom said...

Like Carrick, Mikel has a track record of being unable to keep the prize from distracting his eyes.

David Young said...

Yea, Karsten, what this says to me is that the SH temperature trends are likely to be more reflective of the true GHG sensitivity. Might be an interesting postdoc exercise and possibly a paper, assuming it gets by the "redefinition of the refereed literature" that some of your colleagues seem to favor. As you explain so well, NH temps are subject to so many influences aside from GHG forcings that they may be hopeless to model. You will not like the source, and it's not published yet, but Spencer has a post estimating the waste heat forcing in the US, and it's not negligible. And then there are land use changes, etc., etc., etc.

Carrick said...

Steve Bloom is another one to whom the advice I gave Karsten would apply.

Steve's one of these guys who thinks he reads other people's minds perfectly. Really I've seen few people more clueless. And simultaneously dislikable.

It's a gift really.

James Annan said...

Now now, play nicely everyone, or I might have to bring out the naughty step. So far you've mostly avoided being too snitty, for which I am grateful.

Carrick, I don't think there is really any theoretical or observational support for what you are suggesting. There is plenty of internal variability, such that small jumps in climatological behaviour cannot really be sustained as different equilibria. While it can be tempting to draw lots of little line segments, that introduces a lot of degrees of freedom and probably doesn't provide a physically meaningful description. Ignoring those two eruptions, the straight-line fit looks pretty reasonable to me.
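To make the degrees-of-freedom point concrete, one can compare the two descriptions with a criterion that penalizes extra parameters. A sketch on synthetic data (everything below is assumed purely for illustration):

import numpy as np

# Fit a straight line and a five-level "staircase" to trend-plus-noise data,
# then compare with BIC, which penalizes the staircase's extra parameters.
rng = np.random.default_rng(2)
n = 50
t = np.arange(n, dtype=float)
y = 0.02 * t + 0.1 * rng.standard_normal(n)    # genuine linear trend + noise

def bic(resid, k):
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

A = np.column_stack([np.ones(n), t])           # line: 2 parameters
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
bic_line = bic(y - A @ beta, 2)

segments = np.array_split(np.arange(n), 5)     # staircase: 5 level parameters
resid = np.concatenate([y[s] - y[s].mean() for s in segments])
bic_steps = bic(resid, 5)

print(bic_line, bic_steps)                     # the line wins (lower BIC)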

EliRabett said...

The real question is do you trust physics or observation. With physics the issue is did you leave something out or have an extra bit.

With observation you have noise and you may not be measuring what you think you are.

David Young said...


Carrick's suggestion is amply supported and well known. This behaviour is very common in fluid dynamical systems: multiple pseudo-stable steady states. The system can often jump back and forth, for example between attached flow and massively separated flow. Just examine any test of a simple wing to see it. There is a perfect example from the Swedish wind tunnel test; will try to dig up the link. It can be virtually impossible to simulate these nonlinear behaviours because the numerical dissipation often overwhelms the real dissipation. Too much dissipation usually leads to apparent but false solutions. Browning can prove that climate models suffer from too much dissipation. Simpler systems can often expose simulation issues that need attention. It would seem inevitable to me that this phenomenon arises in weather and climate, where you have all the ingredients: boundary layers, convection, shear layers, etc.

David Young said...

Eli, the distinction between "physics" and observation is so false. Physics can only be seen if it is accurately simulated, and it's not. We can't even predict a turbulent boundary layer in a pressure gradient, much less cumulus convection. Observation is the only real anchor where your models are really rather hopeless. That's how subgrid models are "tuned", after all. First-principles subgrid models simply don't exist, so your only other alternative is expert judgment, which is a euphemism for prejudice.

David Young said...

Another point. If you ever fly, you should thank the FAA that observation is still the gold standard for certification. Reliance on models would be a public safety disaster and everyone with any sense knows it. Of course, if you want to use models to design an airplane, I would like you to be the one to flight test it.

Carrick said...

James: I don't think there is really any theoretical or observational support for what you are suggesting. There is plenty of internal variability such that such small jumps in climatological behaviour cannot really be sustained as different equilibria.

I agree with what you're saying here about the lack of support for it. My question was I guess more whether it could really be ruled out, or whether it was obviously a ridiculous speculation on some other grounds...

How you go about testing for a series of steps seems to be a bit of an issue in this case, given the amount of internal noise in the climate system.

I went back and looked at HadAT2 (50 and 100 hPa), since this goes back to 1958. Truthfully, this data pretty much follows a linear trend, except for three spikes with corresponding downward overshoots in response to volcanic eruptions, and the period after about 2005, where there is a persistent slightly positive slope in the data.

Figure here.

This is an extremely weak case - which, though, is different from no case. ;-)

As to theoretical justifications, I would guess the biggest criticism would be that we don't see similar behavior in climate models, right?

More generically there is theoretical support for a system (like climate) that exhibits hysteresis to show persistent, quasistable behavior (even if not identifiable states).

I don't know offhand how well hysteresis in the physical system is actually characterized, to say whether or not climate models adequately capture that dynamics. Timothy Merlis, I believe, gave a presentation at AGU last fall that had some measurements of hysteresis from a climate model. If I recall right, he was comparing climate sensitivity for positive and negative changes in forcing, and got a small asymmetry between the two directions. We don't get negative volcanic eruptions very often, so it'll be a while before we can compare data to model on that one.

Carrick said...

David Young: Carrick's suggestion is amply supported and well known. This behaviour is very common in fluid dynamical systems


Indeed, most of us carry around two fluid dynamical systems that exhibit exactly this same behavior... namely our cochleas. I admit this is a digression and may not be relevant to the climate system.

Because they are an active system, cochleas are capable of generating spontaneous, stable, narrowband internal oscillations, which are often measurable in the ear canal as "spontaneous otoacoustic emissions". These emissions are present in about 80% of the normal-hearing population, sometimes being associated with objective tinnitus, though often the subject is unaware of their presence.

These spontaneous emissions typically are seen in a spectrum as a series of discrete emissions, approximately equally spaced in log-frequency (the human basilar membrane has an approximately log-frequency place-frequency map...these two have been shown to be related).

Anyway, it turns out that these emissions can also be perturbed by external stimulus... if you place a tone near in frequency to a spontaneous emission, you can completely suppress the emission. This means that, unlike climate, you can perform experiments on this fluid dynamical system - one that we each carry a couple of around with us.

One of the experiments you can perform is to play a loud noise into the ear with the emissions... in some subjects you will induce a second pattern of emissions, which is referred to in that field as "alternate state emissions".

I'd be interested in seeing David Young's wind tunnel reference.

Carrick said...

I think Eli's point is that you don't just have observations in the absence of a credible physical model, and it's one I agree with.

This is the particular weakness that I see in Tisdale's analysis of data. Even if he's right, he can't assign attribution to what he sees without a proposed, testable physical mechanism.

At best, he may have described an aspect of the dynamics associated with temperature change - one not yet widely accepted - that might pertain to how climate responds to a change in net forcing. So even if he is right, I don't see how this upsets any apple carts.

Or perhaps, at worst, he's just engaging in "wiggleology."

It's very easy to see (and hear!) patterns in noise.

James Annan said...

Carrick,

Models are part of it, but also I think the argument about internal variability is hard to deny. You are looking at fractions of a degree in the global average, when the local variability is orders of magnitude higher. There aren't disjoint wells here to switch between, and laminar versus turbulent flow is a complete red herring.

Note that once you get really picky, the concept and/or time scale of "equilibrium" gets a bit fuzzy anyway. A volcanic perturbation will take decades (or longer) to recover from fully, as it pushes a cold pulse some way down into the ocean (via convection) which takes a long time to reverse through diffusion. Warm anomalies, on the other hand, stay at the surface and can dissipate faster. If you talk about an atmosphere-only system, on the other hand (or even include a shallow mixed-layer ocean) then hysteresis really isn't an issue (at least in models under moderate perturbations). Of course if you are talking about snowball earth, that's another matter entirely. But you weren't.

Paul S said...

Carrick,

This (Forster et al. 2011) is a good reference for modelled stratospheric temperatures. You can see in Figure 3 that some of these chemistry-climate models do capture this overshoot behaviour following the effects of large volcanic eruptions. For reference, MSU4 is the same thing as TLS.

Alex Harvey said...

Carrick writes,

I agree with what you're saying here about the lack of support for it. My question was I guess more whether it could really be ruled out, or whether it was obviously a ridiculous speculation on some other grounds...
...
As to theoretical justifications, I would guess the biggest criticism would be that we don't see similar behavior in climate models, right?


This is the same point I am trying to make. There are assumptions in the models that can't be verified for lack of data.

If scientists are struggling to model clouds in the present climate, then how much harder is it to know what clouds did in remote climates that we can't directly observe?

I have never seen anyone explain how we *know* that cloud changes didn't cause a large part of the cooling at the LGM - without making the largely unjustifiable assumption that the GCMs must be getting just this bit right.

Why? No one really believes GCMs are getting clouds spot on even in the present climate, where we have some (poor) observations to compare them with. This accounts, for instance, for the recent modifications to the aerosol indirect forcing. Cloud observations have taught us that the indirect forcings aren't quite what we thought they were 5 years ago.

Meanwhile, Dessler's recent paper on MODIS (Zhou et al.) appeals to lack of resolution of thin clouds - these thin clouds simply can't be seen - so we really don't know for sure what's going on there.

Now imagine an earth covered in a largely unknowable configuration of kilometers-thick ice sheets. These change the dynamics of the atmosphere in ways that are, again, not fully understood, and it's not at all unrealistic to suppose that clouds in such a climate were quite different from clouds in the present climate.

At this point, all trust is thrown to the models - despite knowing that they're probably not right.

I'm not suggesting there's an alternative to such reasoning - but it does seem to me unreasonable, based on paleo where the data often isn't there, to say conclusively that such-and-such is "disproved".

All we can say is, "it's difficult to make a strong argument for such-and-such".

Paul S said...

Alex,

I can't really understand what you're getting at because cloud changes are considered to play a major role in the temperature difference between the Holocene and the LGM. It was one of the main criticisms of the Schmittner 2011 paper that it used a model which didn't incorporate the possibility of dynamic changes in clouds.

I would think the PMIP3 model runs which simulate the LGM all indicate different cloud properties between then and today.

Mikel Mariñelarena said...

Re Paul S 6/2/13 9:55 am

Paul,

Thanks a lot for your reply.

I think that the anthro RF differences between 1770 and 1880 must have been very small. How much could LLGHGs, ozone, aerosols, etc have changed over that period? And also to jump from a TCR of 1.1C to an ECS of 2.5C you would need an OHU that is simply not in the observations. But perhaps you could eventually get there with slow feedbacks, I don't know. We're still talking about a low sensitivity, anyway.

In any case, what really bothers me is how the IPCC arrives at a direct aerosol RF of -0.5 Wm2. If this were smaller we'd have another reason to pull the ECS down, in addition to those used in Nic's calculations. But I'm just speculating. Perhaps I don't understand the direct effect at all, and I'm trying to find out if that is the case.

The inversion simply means you're getting more aerosols at ground level. When there is no inversion the aerosols are still around in the same quantity, exerting a climatic influence, just higher up and more dispersed.

That's an interesting point. Sometimes I guess that this is what happens but, thinking about it, it doesn't seem to explain what I see. When there's no inversion, most of the time this aerosol load gets washed away by rain in winter or dispersed by winds in summer. In any case, we have a region (Santiago) with an average aerosol load over the last decades much higher than in the surrounding areas (Valparaiso, virtually no pollution, or Chillán, very little). I would expect to see some cooling (or at the very least somewhat less warming) in Santiago compared to those stations. What I see is more warming.

Positive forcing from absorbing aerosols in some parts of South Asia appears to reach about 10 W/m^2!

You mean at the surface? What aerosols can warm the surface so strongly in South Asia?

One of the largest regional "pools" of negative radiative forcing to form over the past few decades is off the Pacific coast of North America.

We all know about the brown cloud episodes, but that region (a very rainy one at that) can't possibly be one of the largest "pools" of negative aerosol RF. Not that my Wikipedia image is the "source of truth", of course, but it doesn't even show up there. If you read Wild or Ramanathan it is clear that the most important pools are in Asia (up to -40 Wm2 over some cities).

Regards, Mikel

Mikel Mariñelarena said...

Re KarSteN 6/2/13 10:55 am

Hi Karsten,

Thanks a lot for taking the time to respond to my comment.

A couple of remarks. The absence of warming post-2000 in China is actually visible in all the rest of your images; it seems to be a rather global phenomenon, as discussed at length elsewhere. Also, the pollution problem in China began in the 80s, with the establishment of the famous 16 "special economic zones" all along the east coast. I guess emissions must be higher now than ever, but they were in full swing in the 90s too.

As for the Russian image, I think that most of the Russian territory (especially Siberia) was largely unaffected by aerosols. One of the NH regions with the lowest aerosol loading in the mid-century must have been Greenland, but it shows an especially marked cooling in that period too: http://berkeleyearth.lbl.gov/regions/greenland I'm pretty sure that I've read our own host James expressing doubts about the aerosol attribution for the mid-century cooling.

However, I have read your profile and I would like to learn from you. Rather than arguing about the details above, I could make better use of your time if you explained what is wrong with my reasoning/understanding below:

1) The IPCC says that a significant part of the warming we should be seeing at the surface due to GHGs is being masked by the opposing net effect of anthro aerosols.

2) A considerable part of that effect (apparently larger than thought before, according to the SOD) is produced by the direct effect; aerosols scatter sunlight and cool the surface.

3) Unlike GHGs, aerosols are washed away in days/weeks and largely stay close to their emission points (industrialized areas) or downwind from them. Their effect is not global.

4) If the temperature of the whole globe is being dragged down by the aerosol direct effect, but most of the globe (the majority of the oceans, the polar regions, the deserts, ...) is basically unaffected by this DE, it makes sense to look at the instrumental record for the coolness in the affected industrialized regions that would compensate for the lack of aerosols over the rest of the globe. If the regions below the aerosols are not cooling, and the rest of the globe is not even affected by the aerosol DE, how can the global temperature be being dragged down by it?

Best regards, Mikel

David Young said...

Just a slight quibble. It's attached flow vs separated flow. Transition to turbulence is so nonlinear it's virtually impossible to predict. If models are too dissipative, then they will probably miss these things too.

Carrick said...

Thanks James & Paul for your comments. I have seen Forster et al. 2011, but thanks for the reference in any case.

I had been thinking, though I didn't elucidate it, of interactions between the atmosphere and the surface & ocean. I would accept that a full AOGCM would be much more likely to produce the sorts of behavior I've been mulling over than a "simple" atmospheric circulation model.

Regarding James' comments about the difference in response to a positive versus a negative change in forcing (volcanic versus warming)... isn't that an example of the sort of nonlinearities that lead to hysteresis?

Imagine going through a closed loop in forcings: clearly you won't return to the original state when you do so, though given enough time the system may relax back to the original quasistationary state. And if the relaxation time is slow enough, other parts of the system may adapt to the changed climate (I tend to think of biosphere feedbacks, but cryosphere works too), and you'll never end up back at the original starting point (without additional changes in forcings to bring you there).

Also, I wasn't thinking of snowball Earth so much; a better example, one I was thinking of but didn't bring up, is the Arctic ice melt-off we're witnessing.

Multiyear ice taking longer to reform than it does to melt would be one example of a similar phenomenon. Arctic ice melt itself is expected to lead to a change in polar atmospheric circulation patterns, which will likely produce a shift in seasonal climate patterns. I would say this is a clear example of a "state change".

Anyway, thanks for the comments. Don't mean to beat this into the ground. It's given me some stuff to think about.

EliRabett said...

Cripes, agreeing with James and Carrick, that's one for the books. However, David Young is simply wrong on wind tunnels. The large subsonic ones have pretty much vanished and been replaced by modeling. Last time Eli looked, some of the hypersonic ones were on shaky legs. Simply too expensive.

David Young said...

Bull!! I have first hand experience and know.

KarSteN said...

@Mikel:
Russia was perhaps not directly affected by aerosols, but European emissions got advected towards Russia and the Arctic by the prevalent westerly winds. The anthropogenic aerosols are also visible in Greenland ice cores - distinctively so: black carbon in the first half of the past century, sulfates with the typical spike between the 1950s and 80s. We know they got there. Keep in mind that small amounts really count, particularly in regions with low natural background concentrations (as already mentioned). Interestingly (from a layman's perspective), this is not the main reason for the cooling up there, as direct effects are only part of the story. Owing to its amplifying nature, the Arctic (including Greenland) reacts more strongly to any imposed forcing (this has to do with the change in the Hadley circulation, which depends on the average temperature).

With this, I hope you begin to understand that your third point is flawed. Aerosols are transported long distances, especially if they are not washed out by rain. We simply keep producing massive amounts of aerosols, and sure enough, at the end of the day it's a question of mass balance: the more we emit, the more stays in the atmosphere for its 3-7 day lifetime. I'd recommend watching this stunning model simulation: GOCART-GEOS5

Their effect is not global, but regionally so strong that we get a discernible global effect. In order to know how strong an effect they have on average, we can't set models aside. While we can measure their atmospheric concentrations and their local radiative effect, the global impact can only be modelled. But we can use these local measurements to verify and constrain the models. If you dare, have a look at some model results which show the temperature response at the surface after allowing for fast feedbacks (from secondary indirect effects and from flux perturbation): Jones et al. JGR 2007 --> Figure 2

Note that it is a particularly sensitive model, so the effect for all forcings will be a bit lower (say -0.8K instead of -1.16K for sulfates). Apparently, Russia is quite heavily affected by aerosol cooling, and so is the Arctic region (including Greenland). The authors reinforce what I've just said: "... [B]ut there is also a strong response over the Arctic where the forcing is minimal". If you look at these images, I am sure you can answer the fourth question for yourself. Of course, one can reject these results (as Nic Lewis conveniently does), but the better the aerosol physics and chemistry are represented in the models, the closer we get to the observed temperature patterns (in most cases). Whether they have a tendency to over- or underestimate things, we can't say for sure yet. There is however plenty of reason to assume that these nasty little aerosols matter a lot.

Btw, as it was brought up in the discussion, the more recent version of the same model (HadGEM2) reproduces the stratospheric temperatures fairly well. The results are in a freshly accepted paper by Mitchell et al. GRL 2013 --> Figure 2 ("OA" refers to "other anthropogenic", mainly ozone)

James Annan said...

According to Boeing, the 787 design used 800,000h of supercomputer modelling and 15,000h of wind tunnel tests.

http://www.boeing.com/commercial/787family/programfacts.html

I think that makes David Young 2% correct :-)

More generally, of course any strong scientific research program is going to combine observational analysis with theoretical/modelling work. I'm not sure of the point of this line of argument.

James Annan said...

Carrick,

Well, like I said, once you really get into the weeds, the concept of an equilibrium climate looks a bit shaky anyway. If nothing else, plants and animals evolve! Plus, the solar insolation is always changing, not just the sunspot cycle but also orbital parameters. Oh, I almost forgot topographic changes, though they are a few orders of magnitude slower (not counting ice sheets and associated sea level here).

Even a simple ocean model usually has a number of slightly different equilibria actually, due to shifts in the location of convective activity (by a few grid boxes - I'm not talking here about a big reorganisation of the circulation). The effect on global climate is very small however. It may be an artefact of finite resolution, I'm not sure if anyone has looked into it in detail.

David Young said...

Yes James, modeling is used a lot in aircraft design. However, the testing is still required for verification. And there have been some spectacular failures of modeling. Can't go into details but suffice it to say that the literature on fluid dynamics and structural modeling is very deceptive due to positive results bias and monetary interest in showing your model is better. The cost of flight testing a new aircraft is far bigger than either the wind tunnel testing or the computer modeling. There is also that little problem called nonlinearity and chaos. Little matters rarely discussed in the literature. The code "worked" is an incantation used to hide every problem known to man.

I think the point was raised earlier that "physics" was more reliable than observation. I was just making the point that observationally based estimates of sensitivity have some obvious advantages.

EliRabett said...

Whatever, and so on.

David Young said...

"The declining use of wind tunnels is due to a variety of reasons, including overseas competition, the consolidation of the airplane building industry and fees NASA enacted in 2002. With $2.46 billion in deferred maintenance projects, according to a 2010 report, the agency is moving to demolish many facilities in lieu of upgrading them." Didn't see any mention that modeling made them unnecessary.

EliRabett said...

See James, increasingly wind tunnels are only used for verification of calculations and not for the design work itself, which is what drives the whole thing. If they had to be used for everything, they would be. This is not only for aircraft. Thus many fewer are needed.

Eli has been told this extends to the back woods of Tennessee

David Young said...

Yes, but observation was critical in developing the computational models in the first place, and the models only have predictive power in the range of the data used to derive them. It's not like Newtonian mechanics where the model is almost perfect. In the nonlinear world, it's different.

Anyway, especially for such a complex system as climate or weather, my bet is on observations. Models are less trustworthy in my opinion.

Mikel Mariñelarena said...

Karsten,

Please let me ask a straightforward question. If the anthro aerosols' direct effect is a primary global forcing, why should I not expect to see cooling in the regions that have developed a huge aerosol load in the last decades?

I believe that this is the approach you took yourself when pointing out the mid-century cooling over such regions. Of course, I appreciate that there may be confounding factors in some places but surely not in all of them. I have examined the temperature evolution since the 90s over eastern Asia, Northern India and especially over some specific locations: Shanghai, Shenzhen, Beijing, Bombay,... I see no special cooling.

Many thanks again, Mikel

Paul S said...

James,

This is a bit late in the day but I thought I'd take issue with your response to Reto Knutti's comment. You seem to take it as a comment specifically about sensitivity priors but my reading is that his is a wider point about all the prior assumptions that go into these instrumental record studies.

I mentioned one interesting element to the Aldrin et al. estimate on a previous thread: their posterior fit produces more SH warming than NH. This is presumably because their simple hemispheric climate model is too linear/symmetric in response to RF to produce more warming in the NH, as is clearly the case in observations.

Because of this implicit model assumption, and the fact that their variable forcing parameter is NH-leaning aerosol forcing, their posterior sensitivity estimate appears to be dominated by the SH warming trend. The NH warming is effectively ignored in terms of sensitivity but seems to have been used to produce a lower aerosol RF in the posterior. After all, according to the prior expectations of their model, if the NH is warming faster than the SH it seems unlikely there can be much difference in RF between the hemispheres.

KarSteN said...

@Mikel,
good question. There are two issues here. One is that the composition of the Asian aerosols is a bit different: it contains more Black Carbon, which makes it more absorptive. Paul S has already elaborated on this issue (see posting from 6 Feb 9:55am). It will still have an effect away from the source regions once it's mixed upwards in the atmosphere. This could well have a surface cooling effect. But as they also contain sulfates and Organic Carbon, one should indeed expect a local cooling effect given their extremely high concentrations. I quickly consulted the Climate Explorer and checked the GISS data over the most heavily polluted Southern Asian region according to this and this (Fig.2a). The result for the period 1950-today looks like this (red line; temperature anomaly in °C): South Asia and Pacific

The delayed response of the Western Pacific region (just east of Southern China) is shown in orange. Note that the mid-1980s and 1990s dips in the Pacific are due to the El Chichon and Pinatubo volcanic eruptions. In black and blue, the temperature evolution over Europe and North America. While we saw truly global dimming between the 1950s-80s (the sulfate aerosols were literally blown around the entire northern hemisphere), we currently find a more heterogeneous pattern. What we definitely see is a cooling over Southern Asia, together with a sluggish warming of the Western Pacific.

Whether the total global anthropogenic aerosol forcing is lower (less negative) today than it was in the dimming phase 40 years ago can't be told with certainty. I would reckon that this is the case (at least for the NH), as recent emission inventories suggest. Another thing is certain however: we've seen brightening over Europe and North America. The GHG forcing between 1980 and today is a bit more than +1W/m2. The surface temperature response is a staggering +1°C on average for both regions over the same time. It would require a hellish climate sensitivity to accomplish that. Very unlikely!

David Young said...

Karsten, I don't think the data shows what you claim. South Asia kept up with N America temps.

Isn't this a good argument for using S hemisphere temps in sensitivity calculations? Aerosol forcing is half that of N hemisphere so the uncertainty is also lower.

James Annan said...

David,

There is some merit in your suggestion, and many analyses do consider NH and SH at least quasi-independently. But note that the SH also misses (or at least minimises) some of the feedbacks that act in the NH, such as snow/ice albedo. (not that there isn't any of that in the SH, but I'm pretty sure it will be a smaller effect).

James Annan said...

Paul,

It is not disputed that you can get pretty much any answer out if you put in sufficiently extreme assumptions. Also, the interesting question is not whether the range of plausible subjective decisions affects the output, but whether plausible subjective decisions can result in assigning as much as 15% probability to S greater than 4.5C. I don't see support for this in the literature.

Taking Aldrin, for example, his set of results in Fig 6 only achieves this extreme a result for at most two plots (he doesn't actually show the 85% level, but let's give the benefit of the doubt). In one of these, he throws away data past 1990, which is clearly not a reasonable thing to do but is simply included to show the effect of recent data. The other is the extreme case of cloud effect. Some may argue that this is reasonable, but....

*Both* of these results (and many others in the paper) *also* use a uniform prior on S! It's not even the 0-10 of most studies, but 0-20!

Now, if the IPCC authors wish to claim that this is a reasonable choice, then I think at this point the onus is very much on them to explain why, in the light of our paper on the topic. It's now got to the point that people are openly ridiculing them on the subject, such as Steve Jewson, who is not just a random blog commenter but a sometime collaborator with Myles Allen who publishes a lot on weather/climate statistics. None of the RC bloggers have tried to contradict him, either...

"quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s hope the IPCC authors understand all that."

and further on

"Sorry to go on about it, but this prior thing this is an important issue. So here are my 7 reasons for why climate scientists should *never* use uniform priors for climate sensitivity, and why the IPCC report shouldn’t cite studies that use them. [...]"

I have not seen any defence of uniform priors in the literature (or indeed elsewhere) subsequent to our paper of a few years ago. Yet the IPCC continues to use these studies, without any explanation or justification...

But perhaps you are talking more about other limitations and approximations in the model. Sure, all models are wrong. But where are the analyses that support P(S gt 4.5) = 15%? Why are all these recent analyses underestimating this so badly (if you accept the IPCC view)?

Has it actually got to the stage that the IPCC authors will say "this model/analysis must be wrong, because it does not assign 15% probability to S gt 4.5C, and we already know this is the answer"?

EliRabett said...

So the question devolves to at what point you have to set your prior. Eli surmises that James wants to use everything he knows about the system to set the prior. Others are of the Sgt. Schultz school.

No snark zone: Eli agrees that David is absolutely correct that the wind tunnels were vital in developing the computation models

Anonymous said...

KarSteN,

I am amazed by your interpretation of Berkeley earth data and attribution to aerosols.

The issues are,

1) there is hardly any decrease visible in China, despite the secular increase of aerosols in the new millennium,
2) the increase starting in the 1970s, due to the cleanup of aerosols from the 1970s in the US and the 1980s in Europe, is highest in ... Russia (!), as is the decrease in the 1940s-1970s, even though the cleanup in Russia took place in the 1990s as a consequence of the collapse of the Soviet Union,
3) there is no explanation for the sharp turns in temperature if they are supposed to be caused by (multi)decade-long cleanup processes,
4) and there is no explanation for the temperature increase 1915-1942, when CO2 just increased from 300 to 310 ppm.

All such data and these issues correlate much better with PDO and AMO and their main shifts in 1976/77 and 1942/1943, and if you would just add one more country - Greenland - the correlation with AMO and not (mainly) aerosols is evident.

http://berkeleyearth.lbl.gov/regions/greenland
http://en.wikipedia.org/wiki/File:Amo_timeseries_1856-present.svg


I think there are a lot of problems not yet solved, and in the context of someone earlier addressing the concern that climate scientists may run out of work if sensitivity were much lower than previously thought, I find it absolutely amazing that nobody appears to have studied yet the long-term temperature effects of the ENSO process.

James Annan said...

Eli, I wouldn't want to be too dogmatic about it; I'm certainly open to people summarising their prior knowledge in a wide variety of ways. My claim is that the results (in the context of S greater than 4.5) are robust to reasonable attempts to do this.

Conversely, if you start out with the presumption that sensitivity is "likely" (70% probability) greater than 6C, as Aldrin does, and then only use limited data from the last 100y or so to update this belief, then you should not be surprised to get an alarming result. If anything, it's surprising that their results are not more extreme. If they had used a uniform prior extending to 200C, or 10000000C (if 0-20 is ignorant, surely these choices are even more ignorant), they would have got an even higher result - as we showed, in these analyses the posterior mean (let alone upper bound) tends to infinity as the upper bound on the prior does. This is because the likelihood function does not quite reach zero but is bounded below by some epsilon for all large S. So if you integrate over a wide enough range, you can get any answer you want.
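To see the effect numerically, here is a minimal sketch (toy numbers throughout: a Gaussian likelihood centred on S=3C with an arbitrary floor epsilon standing in for the non-vanishing tail; an illustration of the point, not anyone's published analysis):

    import numpy as np

    def posterior_mean(upper, eps=1e-3):
        # Uniform prior U[0, upper]: the posterior is proportional to the
        # likelihood. The toy likelihood is a Gaussian centred on S=3C plus
        # a floor eps, standing in for a likelihood that never quite
        # reaches zero at large S.
        S = np.linspace(0.0, upper, 200_000)
        like = np.exp(-0.5 * (S - 3.0) ** 2) + eps
        post = like / np.trapz(like, S)
        return np.trapz(S * post, S)

    for U in (10, 20, 100, 1000):
        print(U, round(posterior_mean(U), 2))

The posterior mean creeps up from about 3C to well over 100C as the prior bound widens, precisely because the floor contributes probability in proportion to the width of the prior range.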

Rob Dekker said...

James,
I'm not an expert in statistics at all, as may be true of many other people reading your arguments.

Could you explain (or refer to a page that explains) what a 'uniform prior' actually is, and what the alternative is, when determining climate sensitivity (or any other variable) given one or more 'experiments' ?

Also, do you have an example of when using a 'uniform prior' is appropriate, and an example where it is not appropriate?

Paul S said...

James,

My comment doesn't relate to sensitivity priors, uniform or otherwise. It's more about the other prior assumptions made, and I think this is what Knutti is talking about too.

My own thoughts on ECS are mainly informed by your own work, and others looking at paleoclimate data, which generally seem to be finding that >4.5 is unlikely to very unlikely.

Regarding the instrumental record inverse studies, I do think they may be biased low due to their implicit and explicit assumptions. I've outlined one such issue with Aldrin et al. in the post above. While details differ between the studies pretty much all seem to share an assumed expectation that the spatial pattern of aerosol forcing should match the spatial pattern of climate response. Yet there is no evidence to support this assumption. If it's wrong then all these studies are likely to underestimate aerosol forcing and sensitivity.

Note that the above is a best case scenario, which I'm not sure applies to many of the inverse estimates. The Forest papers I've noticed all have their prior expected aerosol response set solely by latitudinally-defined SO2 emissions rather than forcing. They also completely ignore carbonaceous aerosols which have a different spatial pattern, being more dominant in the Tropics.

I decided to check what GCMs say about the relationship between aerosol forcing and response by comparing latitudinal temperature change difference between HistoricalGHG and Historical all-forcing runs. This isn't an ideal comparison since all-forcing includes other spatially diverse forcings but aerosols should be the most prominent of these.

In the MRI-CGCM3 and HadGEM2-ES models I checked the difference is fairly uniform across all latitude bands except NH high latitudes which warm much less in the all-forcing runs.

While I was doing that I thought I'd find a crude transient sensitivity estimate for each. Using IPCC estimates for WMGHG forcing between 1850/1860 and 2005 (~2.4W/m^2) and the GAT warming in a HadGEM2-ES historicalGHG run (1.6ºC) gives a transient sensitivity across the period of 2.45ºC. This is very similar to the diagnosed TCR from a 1% per year run, so I'm probably on the right track here. IPCC estimates for all forcings other than aerosol comes to ~2.6W/m^2. The diagnosed aerosol forcing in HadGEM2-ES comes to -1.2W/m^2 and the all-forcing GAT change is 0.53ºC, which means a transient sensitivity across the period of 1.4ºC.

With MRI-CGCM3 the same test for historicalGHG comes to 1.8ºC. For historical all-forcing, using the diagnosed -1.0W/m^2 aerosol influence and GAT change of 0.52ºC, transient sensitivity is calculated as 1.2ºC.

These calculations crudely indicate that estimates using simple climate models based on historical instrumental data may be considerably underestimating future sensitivity to WMGHGs.
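For anyone who wants to check the arithmetic, the crude estimate above is just a two-line scaling (F2X = 3.7 W/m^2 is the standard doubled-CO2 forcing; the other numbers are the ones quoted above):

    F2X = 3.7  # W/m^2 per CO2 doubling (standard value)

    def transient_sensitivity(dT, dF):
        # Scale the warming over the period to an equivalent 2xCO2 forcing.
        return F2X * dT / dF

    # HadGEM2-ES historicalGHG: ~2.4 W/m^2 WMGHG forcing, 1.6C warming
    print(transient_sensitivity(1.6, 2.4))         # ~2.5C
    # HadGEM2-ES historical: (2.6 - 1.2) W/m^2 net forcing, 0.53C warming
    print(transient_sensitivity(0.53, 2.6 - 1.2))  # ~1.4C
    # MRI-CGCM3 historical: (2.6 - 1.0) W/m^2 net forcing, 0.52C warming
    print(transient_sensitivity(0.52, 2.6 - 1.0))  # ~1.2C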

James Annan said...

Rob,

A (bounded) uniform distribution is just one where the probability of a range within the bounds is directly proportional to its width. In the case of the previously popular U[0,10] prior for S, this means P(S gt 6)=40%, for example, and P(1.5 lt S lt 4.5) = 30%. I'm not going to risk symbols for "less than" or "greater than" because of html problems.

The detailed versions of my argument are probably best found here and in the associated paper, which I hope is quite readable.

The executive summary is that there is no way to capture the concept of "ignorance" in Bayesian probability, in that any probability distribution represents a precise and specific set of probabilities assigned to particular events. You cannot start out from "ignorance" and then use the data to determine the objective probability of anything.

As for when it doesn't matter, if the observations happen to provide a strong constraint with short (eg gaussian) tails then it may not matter. In estimating my weight, it may be reasonable to start with a uniform prior and directly interpret the readout of my scales with associated error (assumed gaussian) as an estimate of my weight (eg 81.3±0.1kg). In fact that is what most people will do automatically. More properly, you should start out with a prior, maybe 82±1, and update it - but so long as the prior is broad relative to the obs error, the result is virtually identical anyway.
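In code, that weight example is the standard conjugate update of one Gaussian by another (a minimal sketch with the same illustrative numbers; nothing climate-specific about it):

    def gaussian_update(mu0, sd0, obs, sd_obs):
        # Combine a Gaussian prior N(mu0, sd0^2) with a Gaussian observation
        # obs +/- sd_obs: precisions add, and the posterior mean is the
        # precision-weighted average.
        w0, w1 = sd0 ** -2, sd_obs ** -2
        mu = (w0 * mu0 + w1 * obs) / (w0 + w1)
        sd = (w0 + w1) ** -0.5
        return mu, sd

    # Broad prior 82±1 kg, precise scales reading 81.3±0.1 kg:
    print(gaussian_update(82.0, 1.0, 81.3, 0.1))  # ~(81.31, 0.0995)

As claimed, the posterior is virtually indistinguishable from the observation whenever the prior is broad relative to the observation error.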

James Annan said...

Paul,

An obvious test that would be more directly informative than your estimates, would be to see if the energy balance model analysis correctly diagnoses the sensitivity of a GCM, given equivalent observations. Aldrin does this for 3 GCMs and gets good results with all true values lying within the predicted "likely" ranges, a max error of 0.8C and negligible mean bias. It is hard to argue from model results that his method does not work extremely well.

It is also the case that some EBMs have been used directly as simple emulators of GCMs, eg MAGICC and the Bern model. If this required incompatible sensitivities, I'm sure it would have been spotted.

I could be wrong, but I think that one possible issue with your analysis is that many of the GCMs mix too much heat down into the deep ocean, which would cause a bias in your estimation (in that their transient response will be damped relative to the equilibrium, to a greater extent than in reality).

Paul S said...

James,

The simulations they use for evaluating their method are 1% per year CO2 increase, not historical runs with spatially diverse forcing. I'm not so surprised that they were able to get a decent result in that situation.

A small matter: they give two different sensitivities for IPSL - 4.4 in one place and 4.1 in another. The actual reported sensitivity is 4.4 so their calculated 3.1 is 1.3 out.

Paul S said...

Er.. should be '3.3 is 1.1 out.'

KarSteN said...

@lindaserena:
I would kindly ask you to read what others and I have already pointed out several times. Aerosols are transported over vast distances. Russia is as much affected by European emissions as it is by local emissions. Above all, Northern Russia has got subarctic conditions. The amplifying Arctic response to changes in the external forcing (in both directions) hence plays a major role.

I've also mentioned already that there are different aerosol types. Southern Asia has a higher proportion of absorptive BC aerosols, which hampers attribution of their local temperature effect. The latest temperature decrease is therefore not simply attributable to aerosols (and is unlikely to be caused by them anyway).

We saw increasing BC emissions also in the first half of the last century, which acted to counterbalance the cooling of the similarly increasing sulfate aerosols to some extent. Likely no net aerosol forcing effect. Secondly, it was a recovery phase from several major volcanic eruptions between 1883-1910. Third, the solar forcing got noticeably stronger in this period. Fourth, due to bad farming practices (mainly US), the surface albedo was reduced locally (keyword "Dust Bowl"). Together with a weak GHG forcing, there was no way around a pronounced warming phase. Thus forced, the AMOC strength as well as the preferred ENSO phase can vary, leading to temporary temperature amplifications (e.g. in the form of a decade-long NAO+/- or El Nino/La Nina phase).

Regarding Temperature vs "AMO": Correlation and causation are two entirely different things. Sulfate emissions do perfectly correlate with "AMO". We know they cool the surface. For more on the ENSO, consult the scientific literature. It is an endless list. Textbooks cover it likewise ...

James Annan said...

Paul, that's a fair point, but it still doesn't show that (or explain why) their results should be biased for a realistic scenario. Surely all you've shown is that your method doesn't work very well?

Rob Dekker said...

Thanks, James, your explanation of 'uniform prior' and the refs to further explanation of your argument are crystal clear. As I understand now, a uniform prior does impose 'assumptions', such as a uniform distribution of probability for ranges.

Now, armed with this understanding, but still quite inexperienced in statistics, I have two follow-up questions :

1) Why is a 'prior' needed at all? In a set of experiments, why not just take the first experiment as the 'prior' and let any subsequent experiments adjust the pdf according to Bayesian statistics? Or am I opening up a can of worms now?

2) Does the IPCC rely on a 'uniform prior' anywhere in their assessment of climate sensitivity ? If so, how ? Is that by individual papers that used a uniform prior in their individual assessment of CS, or is the IPCC range 1.5-4.5 C itself based on a uniform prior corrected by all the observations ?

Rob Dekker said...

I noticed that Nic Lewis is around, and I wonder if he intends to nicely write up his argument and submit it to a peer-reviewed journal.

I read Nic's defense here, but I did not see that he refuted the argument I made on his estimate of CS on Stoat :
http://scienceblogs.com/stoat/2012/12/20/people-if-you-want-to-argue-with-stoats-first-read-enough-to-be-a-weasel-parrots-neednt-apply/#comment-24749
----
It seems to me that Lewis’ calculations are reasonable, but they lowball temperature change, ignore ocean heat absorption below 2000 m, and high-ball radiative forcing.
As a result, the number 1.6 C he ends up with is at the low end, just as James Annan already asserted.

For corrections to Lewis' number for CS, the following comments may be relevant:

(1) Lewis picked HADCRUT for global temperature, and calculated 0.727 C since 1880.
However, we know that HADCRUT has poor coverage over the Arctic, where significant warming is happening over the past couple of decades.
If we take GISS LOTI instead, we obtain something like 0.85 C since 1880.
http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A2.gif

(2) Lewis’ assessment of 0.4 W/m^2 ocean heat uptake (up to 2000 m) over the 2001-2010 period may be an underestimate.
Loeb et al 2012 obtained 0.5 W/m^2 based on a combination of ocean measurements and satellite data.
http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo1375.html

(3) Lewis assumed heat uptake below 2000 m to be negligible, which is almost certainly an underestimate.
The deep Southern Ocean alone is very likely warming at 0.03 C/decade, for which AR5 reports 48 TW warming since at least 1992.
This Southern Ocean warming translates to approximately 0.1 W/m^2 increase in the Ocean Heat Flux number that Lewis used.
I’m pretty sure this number is also an underestimate, but unfortunately we have very little (or no) data for deep ocean warming in other oceans.

(4) The last, and largest, adjustment to Lewis’ numbers is radiative forcing. He obtains 2.09 W/m^2 from Figure 8.18 in AR5.
However, figure 8.18 shows that 2.09 W/m^2 is the total radiative forcing since 1750, not since the 1880 timeframe that he based his temperature number on.
By 1880 there was already a 0.25 W/m^2 forcing in place, so Lewis' RF needs to be adjusted down by some 0.25 W/m^2.

Taking all these 4 adjustments into account, for climate sensitivity we obtain 3.7 * (0.85 / (2.09 – 0.25 – 0.1 – 0.5)) = 2.53 C/doubling.

Now, I’m not saying that my “analysis” of the factors involved is any better than Lewis’ numbers, but I also can’t see that it is any worse.
-----
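To make the arithmetic transparent, the adjustment boils down to the usual energy-balance formula S = F2x * dT / (dF - dQ); the formula is standard, and the numbers below are simply the adjusted ones argued for above:

    F2X = 3.7  # W/m^2 per CO2 doubling

    def ecs_energy_balance(dT, dF, dQ):
        # S = F2X * dT / (dF - dQ): dF is the radiative forcing over the
        # period and dQ the planetary (mostly ocean) heat uptake rate.
        return F2X * dT / (dF - dQ)

    # dT = 0.85C (GISS LOTI since 1880); dF = 2.09 - 0.25 W/m^2 (forcing
    # rebased from 1750 to 1880); dQ = 0.5 + 0.1 W/m^2 (0-2000m ocean heat
    # uptake per Loeb et al., plus the deep Southern Ocean)
    print(ecs_energy_balance(0.85, 2.09 - 0.25, 0.5 + 0.1))  # ~2.53 C/doubling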

Alex Harvey said...

Paul S.,

I didn't follow the Schmittner et al. controversy much. Meanwhile you're missing my point a bit, probably because I haven't made it well.

In the Kohler et al. 2010 QSR paper on LGM forcing -

The feedback parameters for water vapour, lapse rate, and clouds can only be estimated with climate models... (p. 3).

And,

Aerosols in the climate system are responsible for various effects. They scatter and reflect incoming radiation (direct or albedo effect) and they alter the physics of clouds (indirect effects). The physical understanding of the impact of aerosols (including dust) on climate for present day is very low ... Here, we focus only on the direct effect of aerosols and base our estimates on observations and modelling results concentrating on mineral dust in the atmosphere. We are aware that this view does not cover all effects which might need consideration, but our understanding of these additional effects is still incomplete and for paleo-applications too limited to come to quantitative conclusions.

Models agree that the net cloud feedback is strongly positive, mainly because (my understanding) it was impossible to simulate features of the 20th century climate in models with neutral or negative cloud feedbacks (or, perhaps I should just say, models with a low climate sensitivity).

Still, there are features of the 20th century that can't be modelled anyway - e.g. certain modes of internal variability.

Now, you say there was a controversy about Schmittner et al. not getting the cloud response 'right' in relation to LGM dynamics. So, my point is, how can anyone know what the 'right' cloud response is when we can't even make a 3-d model of the LGM earth without some guessing?

Dessler made a fascinating observation of cloud feedbacks in some of the models he looked at in -

A determination of the cloud feedback from climate variations over the past decade, A.E. Dessler, Science 330, 1523(2010); DOI 10.1126/science.1192546

He writes,

The sign of the short-wave feedback shows more variation among models; it is positive in five of the models and negative in three. There is also a clear tendency for models to compensate for the strength of one feedback with weakness in another. The models with the strongest shortwave feedbacks tend to have the weakest longwave feedbacks, whereas models with the weakest short-wave feedbacks have the strongest longwave feedbacks.


So the models all agree on the net cloud feedback but they don't even agree on the sign of the component LW/SW cloud feedbacks.

So, what basis do we have for believing that these models are getting clouds about right in the LGM, or during the transition from glacial to interglacial states? If there was any alternative to using the models for clouds in paleo studies I am sure we all agree we would prefer these alternatives.

For instance, who is to say the LGM net cloud feedback wasn't strongly positive and the interglacial net cloud feedback somewhat negative? Now, I am not suggesting you can make a strong argument for this - the point is, with such an absence of both data and theory, how can the possibility - or similar speculations about the many other unknowns - be said to be truly "excluded"?

Paul S said...

James,

Surely all you've shown is that your method doesn't work very well?

I would agree with that, under the assumption that the purpose of the method is to determine expected temperature change due to doubled CO2. However, the point I'm trying to make is that the assumptions which make my method fail are also embedded in most of these inverse estimates. I would predict that the Aldrin method would not be able to accurately determine 2xCO2 ECS for CMIP5 models if tested against historical runs rather than 1% CO2 per year, and that it would tend towards significant underestimates.

Steve Bloom said...

"So, what basis do we have for believing that these models are getting clouds about right in the LGM, or during the transition from glacial to interglacial states?"

One thing to do is consider cloud forcing in the context of what climate was actually doing at those (and, importantly, other deeper) times and what is known about other forcings (quite a lot). It's hard for cloud forcing to go too far off the rails without becoming unphysical relative to those past climate states.

James Annan said...

Several distinct threads going on here...

Rob: Yes, these Bayesian calculations provided one major source of evidence in the last IPCC assessment, and they quite specifically and deliberately adopted the U[0,10] prior as defining ignorance, even going so far as to re-analyse some published results which did not originally use this prior in order to work out what they would have got had they done so. Nic Lewis has written on this too - they seem to have messed up their calculations in one case.

To defend the IPCC for a moment (am I allowed to do that?), the whole debate did only rise to the fore while they were engaged in writing, so one could, if generously-minded, excuse them for doing this last time. However, they seem to be sticking to much the same approach in the latest draft of the AR5, which is completely indefensible in my view. It seems that others are more prepared to agree publicly, making it hard for them to claim to represent a consensus on the matter.

PaulS, I agree that it would be a better test to use more realistic scenarios, but what are your reasons for claiming that other methods will have the same bias as yours? Do you think the Gillett et al. study (which uses a completely unrelated approach but also finds a low transient response) also has a low bias?

Alex,

We can't be sure the models are right in detail - to the extent that they disagree with each other, of course they must mostly be wrong - but on the other hand, we can compare the simulations to proxy data, and find that the models do a reasonably good job overall. So whatever they are doing wrong, can't be a huge problem. See eg here and here for our own particular approach to this (but many others have worked on it, of course).

Alex Harvey said...

Steve,

That's the thing - what do we really know?

The cloud feedback according to the AR4 is 0.69 W m-2 K-1. If we use the Annan and Hargreaves value of LGM temperature drop ~ 4 K, we get that the forcing from clouds must be of magnitude ~ 2.76 W m-2 - which is large, even with the smaller temperature drop.

Meanwhile Kohler finds the net GHG forcing is of a similar magnitude - 2.81 W m-2. Dust is 1.88 W m-2. The largest forcing comes from changes in land ice - 3.17 W m-2.

It seems to me there's a fair amount of 'slack' in these figures. If the net cloud feedback changed sign the slack in lost forcing could be picked up by compensation in the other unknowns.

Just eyeballing the figures, it appears to me you could make up the lost forcing from the uncertainty alone on the other figures. Total uncertainty is +/- 3.19 W m-2. But given the lack of scientific knowledge and data it doesn't seem reasonable to take the uncertainty as some sort of physical limit.
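To spell out the eyeballing (just the AR4 and Kohler numbers already quoted, nothing more):

    # Implied LGM cloud term: AR4 net cloud feedback times the A&H cooling
    print(0.69 * 4.0)  # ~2.76 W m-2

    # For comparison, Kohler's terms (W m-2): GHG 2.81, dust 1.88,
    # land ice 3.17, and a quoted total uncertainty of +/- 3.19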

And ironically it appears to be that tiny 0.01 W m-2 orbital forcing that accounts for most of the visible changes in climate - i.e. the melting of mid-high latitude ice - reminding us again that treating the earth as a 0-dimensional model responding to changes in the global average forcing doesn't make sense.

In any case, how do you see an alternate view of cloud forcing as unphysical?

Alex Harvey said...

James, I saw your post after I submitted my last one - thanks for the refs - I'll look at these.

Paul S said...

James,

My method assumes a linear relationship between forcing and response at the global scale. This works ok for WMGHG simulations but breaks down, so far always on the side of underestimation, when applied to simulations which incorporate more realistic forcings. Most inverse methods also assume this linear relationship, so it seems intuitive that they would similarly underestimate, at least in comparison to model runs with decent aerosol implementation.

The Gillett et al. study seems a better approach to me, using a GCM to provide a more educated guess for expected temperature response. I think it depends on your perspective how far it agrees with Aldrin et al. They suggest a TCR range of 1.3 - 1.8, which would presumably scale to something like 1.6 - 3.6 or 2 - 3 for ECS. The uncertainty ranges overlap but Aldrin's median would lie at the low end of Gillett's range.

I would like to see Gillett's approach applied to other CMIP5 models and suspect the 1.3 - 1.8 range wouldn't survive too well, though perhaps that's obvious. If we take the MRI-CGCM3 model mentioned previously, that has a diagnosed 2xCO2 TCR of 1.7ºC and ECS ~2.6ºC, and its historical run underestimates observed global temperature change by about 0.2 - 0.3ºC. The Gillett method would, I assume, prescribe a scaling up to perhaps 2.5ºC for TCR in order to match observed change.

As an aside (though relevant, I think, in terms of the other GCMs discussed), my linear method doesn't do too badly with CanESM2 historical. There is about 1ºC warming, -0.8W/m^2 aerosol forcing according to Gillett et al. text, so estimated TCR would be ~2.1ºC compared to 2.3ºC diagnosed for 2xCO2.
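(Same crude scaling as sketched earlier, with the CanESM2 numbers just quoted:)

    # CanESM2 historical: ~1.0C warming, net forcing ~2.6 - 0.8 W/m^2
    print(3.7 * 1.0 / (2.6 - 0.8))  # ~2.1C estimated TCR, vs 2.3C diagnosed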

Mikel Mariñelarena said...

Hi Karsten,

If you're still reading this, I'm still trying to get my head around the notion that we don't need to see much cooling under the most aerosol-laden areas for the direct effect to be strongly negative at the global level.

I see two issues in your explanations of why the current situation is different from the mid-century cooling:

1) While your graph shows a noticeable cooling over China in the 00s (much more noticeable than the HAD or GISS maps I had checked), it's a bit strange that this cooling starts so late. China's sulfur emissions had been growing relentlessly for 2 1/2 decades before this cooling took place and had surpassed the US by the 90s. See for example Smith et al figure 3 http://www.atmos-chem-phys.net/11/1101/2011/acp-11-1101-2011.pdf

2) My understanding is that both reflective and absorptive species exert a cooling forcing at the surface level. So I'm not sure why more BC in the Asian emissions should prevent us from seeing the aerosol cooling in the surface instruments.

Any feedback appreciated.

Mikel

Alex Harvey said...

James,

I've had a bit of a look at your papers and while I haven't fully digested them I note the following -

You look at the PMIP models. Now the PMIP models, like all the GCMs, only exist because they were first able to simulate various features of the 20th century climate.

Therefore, you don't have any models with a very low climate sensitivity in PMIP. So it is not accurate to say that you have shown that very low sensitivity models perform badly against the LGM data.

Let's suppose hypothetically a fundamental breakthrough in physics allows us to build a very low sensitivity model that simulates the 20th century climate better than any of the existing models. Let's suppose this model also simulates the internal variability better than existing ones too.

Can you truly say you know how such a model will perform against the LGM data? I think, the best you can say is it's pointless to speculate about such a hypothetical unknown.

Rob Dekker said...

James,
Thank you very much for your response in the use of priors (uniform ones) in the IPCC AR4.
I found Nic's argument about Figure 9.20 in AR4, here:
http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/

I assume that is the one you are talking about ?
Incidentally, his reasoning (and choice of paper) reminded me of the example you gave in your 2011 paper. Was that a coincidence ?

I read the piece, and I agree with Nic and you. It seems inappropriate to apply a uniform prior on S before plotting the result of Forster and Gregory 2006 in Figure 9.20.

However, I think the rhetoric about this "the IPCC alteration" etc is overblown (as always in the blogosphere?).

After all, we are just talking about a figure in the AR4, not about the actual range of climate sensitivity that Forster and Gregory calculated.

Besides, AR4 does not distinguish between the short-term (transient) climate response and equilibrium climate sensitivity (while the AR5 does). The Forster and Gregory paper clearly falls in the former category, while your study on LGM clearly is in the latter, so it seems that Lewis and you are talking about two different beasts. Is that correct?

Also, it is very clear (from the SOD) that there is little dispute about the range of the TCR, but there is still uncertainty about the long tail of the Equilibrium Climate Sensitivity, which (according to the SOD) is still kind of determined by an "expert consensus". In other words, an "opinion".

So it seems to me that maybe a good proposal would be to suggest that we set up a formal method to determine equilibrium climate sensitivity from (what must be) paleo-climate research and model results that include ice sheet and other long-term effects of a forcing.

Which would thus NOT include papers like Forster and Gregory 2006, which deal with only short-term climate response.

KarSteN said...

@Mikel:
The recent cooling (last 5 years or so) in Asia is most likely due to the frequent "Warm Arctic Cold Continent" (WACC) weather pattern in winter. That also has a considerable impact on the annual average temperature. The stagnation in the early 2000s might be more related to aerosols, but not necessarily either.

While all aerosols have a cooling influence at the surface, this doesn't necessarily apply if you allow for atmospheric adjustments. It also doesn't apply in the case of BC aerosols next to the surface. The issue is that BC and sulfate aerosols have increased in lockstep in China. Only modeling can provide a clue as to what’s happening locally. As BC for example has a heating effect in higher layers of the atmosphere, changes in convection, precipitation and so on and so forth are to be expected. In essence, you need to simulate the so-called adjusted forcing, or the forcing efficacy for each aerosol component (which takes these effects into account). You will never know exactly what impact they have upon the atmospheric circulation without studying these effects in models. While models have their issues (you find many possible solutions using different models), they are the only option we've got to find out what's going on.

As far as the global temperature of the last decade is concerned, no tropospheric aerosols are required to explain things. Only stratospheric aerosols play a minor cooling role (perhaps related to tropospheric aerosols). Further cooling comes from a slightly decreased solar flux and a series of La Nina events. On top of that, 2012 saw three extreme cold NH winter months (Jan, Feb, Dec in Eurasia) which acted to decrease the annual average global temperature by approx. 0.1K (refers to what I've just mentioned in the first paragraph). Whether a fluke or due to Arctic sea ice feedbacks, we don't know yet. Take all these effects together and it is absolutely no surprise that the global trend seems to have weakened. Not for long I’m afraid ...

David Young said...

James, I have a question about your paleo discussion paper. You say that the linear sensitivity is 1.7K based on delta T / delta forcing. My question is why you then say nonlinearity means that the response to CO2 must be greater. The difference between LGM and today is very large and so it should encompass the nonlinearities associated with albedo feedbacks and significant CO2 feedback. Is the response of albedo today going to be significantly different?

James Annan said...

Rob, no you're not right here - the Forster and Gregory work was definitely estimating the equilibrium response, even though only based on transient output. Certainly I agree that these sort of analyses were only one part of the evidence that the IPCC based their overall judgement on, but they seemed to be quite a major component.

Alex, yes let's assume the existence of a flying pink unicorn, and then all our problems are solved.

David, the main issue is that the negative forcings for the LGM include a large albedo effect from the ice sheets, and this does not add linearly with the effect of lower CO2. However, models disagree as to how much non-linearity there is here. So the direct energy balance argument can't give a precise estimate for 2xCO2, even if we knew the exact forcing and temperature change at the LGM.

Alex Harvey said...

James, I don't see why you need to ridicule the suggestion, or, indeed, why you have related this to "then all our problems are solved". My interest is purely scientific and I remain somewhat disturbed that discussion of very low climate sensitivity is somehow made into a thought-crime. Because, you see, I have in front of me a set of university lecture notes (www.envsci.rutgers.edu/~broccoli/MPCC_lectures/climate_sensitivity.ppt) on paleoclimate sensitivity and it states quite clearly that even the sign of the cloud feedback is uncertain. The possibility that the net cloud feedback is negative cannot be truly excluded, and everyone knows that. My point would stand, though, wouldn't it? Until someone actually builds a GCM with a very low climate sensitivity no one can in principle know how it would perform in paleoclimate simulations.

Alex Harvey said...

James, let me ask the question another way - you may feel paleoclimate studies including your own point strongly to a most likely value and range on climate sensitivity. That's fine. Do you feel equally comfortable arguing,

paleosensitivity = 2 - 4.5 K
Therefore, the net cloud feedback is positive

If no, then you must admit that it would be hard to test a net negative cloud feedback against paleo data without having any models available that include such a feedback. Don't you agree?

James Annan said...

Alex, I ridiculed it because your entire line of argument is ridiculous. If you imagine into existence a model that has zero sensitivity to CO2 but which otherwise simulates every directly observable behaviour of the earth system in perfect detail, then sure, we might well consider that the climate system sensitivity could be zero. So what? While you're about it, how about hypothesising a cure for cancer too?

Alex Harvey said...

James, of course we can't hypothesise into existence a low sensitivity model, or a cure for cancer.

My point concerns only the logic of some of these arguments. It is gratuitous (a non sequitur, I think) to assert that a validated-models approach to paleo sensitivity - while I do not at all dispute the value of these methods in and of themselves or the importance of their contributions to our knowledge - rules out low sensitivity. It is just logically an invalid argument - as far as I can see. Unless I am missing something (and I may be...), the only valid argument against low sensitivity is that we have failed (thus far) to build low sensitivity models that can simulate some features of the 20th century climate. (And of course, there are question marks hanging over how hard we have actually tried, e.g. P. Huybers 2010 J. Clim).

Steve Bloom said...

Alex, the thing is that a balance of forcings and feedbacks needs to be able to track along with known climate. If you change that balance by making cloud feedback substantially negative, you need to make some other forcing(s)/feedback(s) more positive to keep things on the rails. Which ones? And if as you seem to want to believe cloud feedback is substantially negative now, what should we be expecting those more positive ones to do?

Alternatively, if you don't change anything else, you end up with having to explain a climate that's more stable than the proxies indicate.

Also, recall that sensitivity is a model output, not something that's specified.

BTW, my impression is that the reason it's still said that the cloud feedback could be a little negative is because it's hard to measure directly, and so to constrain its value such that a negative value can be excluded with some certainty it's necessary to pin down the values of other forcings/feedbacks sufficiently. That said, at this point the smart money seems to be firmly on a positive value.

Steve Bloom said...

"the only valid argument against low sensitivity is that we have failed (thus far) to build low sensitivity models that can simulate some features of the 20th century climate."

20th c. changes are pretty subtle. Paleo sounds much more challenging. As Lindzen says IIRC, do we get big changes like the glacial cycles even with low sensitivity? If so, where does that leave us?

Paul S said...

Mikel,

Regarding the effect of Black Carbon on surface temperatures, this is strongly dependent on the altitude at which the aerosols are situated. Ban-Weiss et al. 2011 is a handy reference. Table 1 summarises relevant information. As a quick-and-dirty general rule, BC below 5km will warm the surface, increasingly so at lower altitudes. Above 5km will cool the surface.

Unfortunately observations of the vertical distribution of BC are few and far between. The Bond et al. BC paper uses results from a single aeroplane-mounted experiment in January 2009 which sampled varying atmospheric heights along a Pacific longitude from 67S to 80N latitude. The general picture appears to be that BC is mostly below 5km except in the SH extratropics.

Mikel Mariñelarena said...

Thanks for the link, Paul S. I'll make sure to read it.

Karsten:

So the take-home message would then be that 1) in the mid-century the NH cooled because of anthropogenic aerosols (especially in the most polluted areas); 2) now anthropogenic aerosols have a larger content of absorptive elements, so directly observing this cooling in the most polluted areas is very complicated, but we can nevertheless be sure that the global effect of these aerosols is markedly negative.

Would that be about right?

Thanks,

Mikel

David Young said...

Mikel and Karsten, I now feel like I know less about aerosols than I did before this post. But I guess that's true for any complex topic. The more you know, the more cautious you are. But I forgot, the "science is settled."

Alex Harvey said...

Karsten/Mikel,

As far as the mid-century northern hemisphere cooling goes, usually attributed to anthropogenic aerosol emissions as you are saying, I wonder what became of the highly publicised Thompson et al. 2010 Nature article -

Thompson, D. W. J., Wallace, J. M., Kennedy, J. J. & Jones, P. D., (2010): 'An abrupt drop in Northern Hemisphere sea surface temperature around 1970', Nature 467, 444-447.

The paper has been cited 28 times (in Google Scholar) and as far as I can see it has not been rebutted. (Although I didn't check thoroughly.)

In any case, this paper and the theory therein significantly reduce the need to invoke aerosol forcing in explaining the mid-century cooling.

Incidentally, it is also relevant to the matter of how much we should trust the treatment of feedbacks in the GCMs. It certainly makes me uneasy that models required a certain forcing from aerosols, that varies significantly from model to model, to simulate the 20th century climate - if it turns out that the physical reality of the aerosol cooling is challenged.

If Thompson et al. are correct, it would be worrying that none of the GCMs simulate this abrupt 1970s ocean cooling.

Steve Bloom said...

Models don't really do abrupt, Alex. That's cause for concern about the future.

Re Thompson et al. (2010), maybe you should check more thoroughly before putting too much weight on it.

Alex Harvey said...

Steve,

If you change that balance by making cloud feedback substantially negative, you need to make some other forcing(s)/feedback(s) more positive to keep things on the rails. Which ones?

My point is that uncertainty both in theory and in data makes it conceivable (to me, anyway), that a net negative cloud feedback could be compensated by changes in forcings elsewhere.

The largest forcing from Kohler's Table 1 actually relates to the cryosphere - 3.17 W m-2 comes from changes in land ice, 0.55 W m-2 from sea level, and 0.82 W m-2 from snow. Total uncertainty of the cryosphere forcing is 1.5 W m-2.

But how reliable is this data? Let's consider land ice. Our knowledge comes from two methods of interpreting oxygen isotopes from the deep ocean that seem to agree nicely with each other. Kohler says there is an assumption that the fraction of delta-O-18 that varies due to sea level change is caused to 85% (15%) by the waxing and waning of the N. American and Eurasian ice sheets. So it's incredibly clever, but do you feel that it's certain and beyond challenge? I doubt it. Maybe it's because I don't understand it, but it seems God-like to find oxygen isotopes in the deep ocean and deduce a 3-d model of the ice-covered earth at the LGM.

Meanwhile, the inferred equilibrium climate sensitivity depends directly and indirectly on thousands of kinds of data that may be revised later.

The history of LGM cooling is a good example. James and Julia found the LGM global average cooling is about 4 K. But very early estimates (CLIMAP, 1976) had this as very small (around 1 K I think); meanwhile, other estimates have been as high as 6 K.

So is the latest Annan and Hargreaves cooling the final word? I am sure James agrees that it probably isn't - because his estimate depends on other data that depends on other data and so on and this data can change.

So maybe you'll say I just don't want to accept what the science is telling me. I meanwhile feel that you're all being a bit overconfident, that is, drawing too many conclusions from not enough data. And sadly, the only alternative would be the unsatisfying option of simply not drawing any conclusions. Now, I wonder how I'd go at getting a paper published that requested people to draw fewer conclusions please. :-)

James Annan said...

Alex,

There's a vast amount of evidence of a heavily-glaciated planet at the LGM, including sea level changes, isostatic rebound, direct geological evidence...really, you are barking up a very silly tree here. There are some minor quibbles about the precise boundaries of the ice sheets and how they varied over time, and there are several subtly different ice sheet reconstructions. None of this matters at all for the general picture, however.

(In fact, the "LGM" was not really a contemporaneous state globally, ice was retreating in some areas and expanding in others throughout this interval. However this really is a minor detail on the global scale.)

Alex Harvey said...

Steve,

Re Thompson et al. (2010), maybe you should check more thoroughly before putting too much weight on it.

Fair enough. So I've had a closer albeit quick look at the abstracts citing it and I do see one paper by Xu and Ramanathan 2012 GRL that appears to argue for aerosol forcing as the cause of mid century cooling. I note that these authors take the Thompson et al. 2010 Nature article as implying measurement error - which is odd because that doesn't seem to be what Thompson et al. actually said.

Meanwhile, as far as I can see, most other papers citing Thompson et al. do so favourably and in the explanation of natural variability in the 20th century climate.

E.g.

Sutton and Dong, (2012): 'Atlantic Ocean influence on a shift in European climate in the 1990s', Nature Geoscience, 5, 788–792, doi:10.1038/ngeo1595.

Wu, Z., N.E. Huang, J.M. Wallace, B.V. Smoliak, and X. Chen, (2011): 'On the time-varying trend in global-mean surface temperature', Climate Dynamics, Volume 37, Issue 3-4, pp 759-773, DOI:10.1007/s00382-011-1128-8.

Steve Bloom said...

"Now, I wonder how I'd go at getting a paper published that requested people to draw fewer conclusions please."

Easy, you'd run some numbers indicating that much if not most of what's currently understood about climate is wrong. And you'd have plenty of company -- papers like that get published with some frequency.

Re Thompson et al. (2010), they discuss an interesting question but AFAIK there's no definitive answer as yet (although maybe check the SOD). Whether it's measurement error (or misinterpretation, more to the point) or the AMO or a freshwater injection or internal variability (with or without an anthropogenic component) doesn't seem to be crucial to the big picture, especially since the phenomenon (if that's what it was) reversed itself after a while. Or maybe I'm just not clear as to your point.

If we see another one it'll be a different story since adequate instrumentation is now in place. If not, it may be a mystery permanently, along with many other such.

Serendipitously, this paper just appeared. It's a reminder that much of the ocean-atmosphere circulation is changing, including the AMOC. Expect lots of interesting consequences. Little AMOC wobbles are going to seem like small potatoes.

Also: "it seems God-like" This is perhaps more a case of a sufficiently advanced technology seeming like magic, per Clarke. Or maybe it's just that flattery is the sincerest form of flat-earthery. :)

Rob Dekker said...

James, you are right. I confused Forster and Gregory 2006 with Gregory and Forster 2008, which does attempt to estimate the Transient Climate Response (TCR) from past temperature records and estimated forcing.
http://www.gfdl.noaa.gov/cms-filesystem-action?file=user_files/ih/papers/gregory_forster.pdf

Incidentally, they obtain a TCR of 1.3 - 1.7 - 2.3 K (in your triad notation - P.S. I really like that notation since it captures the relevant points in the PDF very nicely).

The interesting thing about their result is not just that the median is very close to the Equilibrium Climate Sensitivity (ECS) from the Forster and Gregory 2006 paper, but also that the 5%-95% uncertainty range is very tight.

This triggered an idea. Since we know that ECS must be greater than TCR (since ocean heat capacity is surely positive), and that F&G 2006 used a completely different method than G&F 2008, isn't it possible to statistically constrain the ECS result from F&G 2006 (or any other ECS estimate) with the PDF of TCR from G&F 2008?

That would be a constraint on the low end of the ECS PDF, which, combined with the high-end constraints (from not using a uniform prior), should result in a nicely constrained ECS estimate from the F&G 2006 study. Are such inter-study constraints allowed statistically, and acceptable to the larger scientific community?
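
To make the idea concrete, here is a minimal Monte Carlo sketch. The lognormal shapes are my own rough stand-ins fitted to the published triads, not the actual G&F 2008 or F&G 2006 PDFs, and the two estimates are treated as independent, which is itself debatable given their overlapping observational data:

```python
# Minimal Monte Carlo sketch of the inter-study constraint idea.
# The lognormals below are illustrative stand-ins, NOT the published PDFs.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Stand-in TCR pdf: median 1.7 K, 95th percentile ~2.3 K (a lognormal
# cannot match the 1.3 - 1.7 - 2.3 triad exactly, but comes close).
sigma_tcr = np.log(2.3 / 1.7) / 1.645
tcr = rng.lognormal(mean=np.log(1.7), sigma=sigma_tcr, size=n)

# Stand-in ECS pdf (a hypothetical observationally constrained estimate
# with a non-uniform prior): median 2.4 K, broad upper tail.
ecs = rng.lognormal(mean=np.log(2.4), sigma=0.35, size=n)

# Apply the physical constraint ECS > TCR (positive ocean heat capacity
# means the equilibrium response exceeds the transient one): simple
# rejection sampling from the joint distribution.
ecs_constrained = ecs[ecs > tcr]

# Report the result in triad notation: 5% - median - 95%.
lo, med, hi = np.percentile(ecs_constrained, [5, 50, 95])
print(f"constrained ECS: {lo:.1f} - {med:.1f} - {hi:.1f} K")
```

Even in this toy version the constraint mainly trims the low tail of the ECS pdf while leaving the upper tail alone, which is the behaviour I'd expect.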

Alex Harvey said...

James/Steve,

Yes, I have reconsidered and agree that ice sheet forcing is a bad example. I was thinking of the difficulties of understanding ice-albedo feedbacks in the present climate, where we have satellite observations - but of course to get the LGM forcing we don't need to understand why the ice moved, just where it moved from and where it moved to. Okay.

I think the other points I have made still stand.

Paul S said...

Alex,

I don't understand Thompson et al.'s attribution of the abrupt change in NH-SH to an NH cooling event. If you look at the NH and SH time series next to each other, it seems to me the more obvious change is an abrupt warming in the SH - an apparent step-change from a relatively cool period between 1946 and 1970 to a warm period from 1970 to the present.

KarSteN said...

Mikel,
the Ban-Weiss paper is indeed the best reference I know of demonstrating the height dependency of absorbing aerosols (thanks Paul for this addition). The mid-20th-century cooling is mostly sulfate-driven, which makes attribution much easier indeed. One striking feature is that land cooled more than ocean at that time - rather incontrovertible evidence that the oceans can't be the driver. They do play their role, no doubt about that, but you need aerosols to explain most of the observed global temperature pattern. And yes, the high share of non-absorbing aerosols exerts a strong negative forcing also today.


Alex Harvey,
In fact, Thompson et al. 2010 elaborate on the Great Salinity Anomaly (GSA). They clearly mention that the drop is explicable neither with aerosols nor with multidecadal ocean variability. It's a bit puzzling, though, why Nature found it worth publishing - IMHO not really surprising news. The GSA also seems fairly well understood (see Gelderloos et al. 2012). Xu and Ramanathan don't argue for the aerosol cooling; they just take it for granted, as it is a non-controversial issue. They merely try to answer the question of whether the observed latitudinal forcing response is a robust feature in all transient climate states. In doing so, they must account for the strong inter-hemispheric asymmetry in aerosol forcing in order not to render the analysis useless.

From my point of view, you fail to understand that decadal ocean variability (e.g. the extremely strong positive NAO phase in the North Atlantic in the early 1990s, which is the subject of the Sutton and Dong 2012 paper) and aerosol forcing are two entirely different things. Both are non-controversial issues, supported by a vast amount of evidence. They are there! We know it! We just haven't managed to quantify exactly how the oceans respond to different external forcing factors at different time-scales yet. And there are a bunch of different time-scales to consider (decadal to multi-centennial variability for the AMOC alone). However, it is clear that ocean variability interferes with aerosol forcing, as well as with any other forcing.

Mikel Mariñelarena said...

"And yes, the high share of non-absorbing aerosols exerts a strong negative forcing also today."

Karsten,

The crux of the issue is this: if we cannot really observe the strong negative forcing of sulfates at the surface over the most polluted areas, because their effect is confounded by other forcings such as warming aerosols, how can we be so sure that the total *global* aerosol effect at the surface is strongly (or even weakly) negative? How do we know that the absorbing-aerosol effect counteracts the scattering-aerosol effect less in the non-polluted areas, so as to produce a net global negative forcing?

Alex Harvey said...

Karsten,

You seem to be misrepresenting the views of the Thompson et al. authors - all of whom are, individually, big names in climate science.

From the accompanying Nature News article -
The ocean cooling also coincides with a 0.2 °C drop in global mean temperature from the late 1960s to the mid-1970s ... . Researchers have blamed this short-lived cooling, more pronounced in the Northern Hemisphere, on a build-up of sunlight-blocking sulphate aerosols from fossil fuels, which began to clear in the 1970s as pollution controls took hold.

Thompson and his colleagues think a circulation change in the North Atlantic is a more likely culprit. ... Michael Mann ... isn't so sure. ...

http://www.nature.com/news/2010/100922/full/467381a.html

To make it clearer that Nature News wasn't misrepresenting them, Andrew Revkin solicited their views by email at the same time and printed them on his blog:
http://dotearth.blogs.nytimes.com/2010/09/22/a-sharp-ocean-chill-and-20th-century-climate/

There were other comments at the time from researchers, e.g. Roger Pielke Sr.
http://pielkeclimatesci.wordpress.com/2010/09/23/comment-to-andy-revkin-on-the-dot-earth-post-a-sharp-ocean-chill-and-20th-century-climate/

So you are entitled to your views about aerosol cooling, but you can't claim there is no controversy or ongoing debate.

Steve Bloom said...

Alex, this is only a debate because of the very limited data available for the time. Karsten correctly points out that the resolution of this debate in favor of a given explanation won't mean much.

Why you even brought it up is a mystery. As I said above, there are many past climate events for which we lack the data to explain with certainty, and this is just one of them.

Steve Bloom said...

Alex, read the whole Revkin post, at the end of which it becomes clear, and he agrees, that science-by-press-release had the effect of way overstating the significance of the paper (which wasn't even novel, since Trenberth & Shea had previously spotted the issue). And indeed, there have now been several years and 28 cites and... no excitement. Sorry.

Alex Harvey said...

Steve, Revkin's opinion is hardly what matters here.

The authors of the paper - being very big names in climate science - believe that aerosol cooling is not the right explanation of the mid century cooling.

And then there are others making arguments like this independently - e.g.

Tsonis, A. A., K. Swanson, and S. Kravtsov (2007), A new dynamical mechanism for major climate shifts, Geophys. Res. Lett., 34, L13705, doi:10.1029/2007GL030288.

The standard explanation for the post 1970s warming is that the radiative effect of greenhouse gases overcame shortwave reflection effects due to aerosols [Mann and Emanuel, 2006]. However, comparison of the 2035 event in the 21st century simulation and the 1910s event in the observations with this event, suggests an alternative hypothesis, namely that the climate shifted after the 1970s event to a different state of a warmer climate, which may be superimposed on an anthropogenic warming trend.

Steve Bloom said...

Alex, now you're just throwing things at the wall. Personally, I spend my time on this subject attempting to gain a coherent understanding of the climate system, in particular large-scale circulation changes (and so yes, I've seen all this material before). You seem interested in the opposite. Good luck and so long.

Alex Harvey said...

Steve, no I am not throwing things at the wall.

I don't see how you or anyone can honestly claim that this mid century cooling business is settled. Plenty of scientists are skeptical that aerosol cooling is necessarily the right answer.

I suppose my argument isn't strong insofar as I'm mostly dropping names of doubting scientists, but on the other hand the existence of doubting scientists does show there is a controversy.

You say above,

"this is only a debate because of the very limited data available for the time."

I find this a strange thing to say. It sounds like "our beliefs are correct, and all doubters will know it as soon as we have the data to prove it". Isn't that circular?

And how can you imply that Tsonis et al. is not relevant here? The Tsonis theory is taken seriously by many. And I don't see any fundamental contradiction between Tsonis et al. and Thompson et al.

But what I do find strange is the lack of debate about all this despite the obvious existence of opposing theories in the literature. If the aerosol cooling hypothesis is so rock solid why can't you just point me to a rebuttal of either Tsonis et al. or Thompson et al.? You don't find this odd?

KarSteN said...

Mikel,
well, we can and we do observe the forcing of aerosols on a regular basis. Downwelling irradiance measurements, data from AERONET and EARLINET (Europe), and satellite imagery are operationally available. Extensive field campaigns provide further evidence and guidance as to whether the measured effects are aerosol-related, and to what extent. Clear-sky and all-sky measurements in different regions of the planet tell us a lot about their impact. We also have some confidence in the emission estimates, such that the aerosol composition isn't a mystery after all. Sure, it is difficult to know exactly what is going on at each particular moment, but the global numbers have converged considerably over the last few years. This is expressed in the reduction of the uncertainty range for the aerosol forcing shown in the leaked AR5 SOD.

Rob Dekker said...

Alex,
Nobody here (nor any scientist that you or anyone here referred to) claims "that this mid century cooling business is settled".

As Steve said, there are many past climate events for which we lack the data to explain with certainty, and this is just one of them.

There is hardly ever "certainty" in scientific work, but that is not a valid reason to give uncertainty undue weight by overemphasizing doubt, misinterpreting people's comments and creating strawman arguments.

Paul S said...

"If the aerosol cooling hypothesis is so rock solid why can't you just point me to a rebuttal of either Tsonis et al. or Thompson et al.? You don't find this odd?"

Because no-one believes it presents any contradiction to the idea of aerosol cooling?

I'll give you an example of why not: this image shows a comparable NH-SH SST plot from the CSIRO CMIP5 model output. Viewed in terms of the Thompson et al. study, I would identify from this time series an abrupt NH cooling in the late 1960s, which is obviously too quick to have been caused by anthropogenic aerosols.

Trouble is, we know the anthropogenic aerosol cooling influence in this model is considerable.

When looking at noisy datasets these types of apparent patterns can pop up, but you have to be cautious about what they really mean. In the case of the SST datasets analysed by Thompson et al., it looks to me as though natural variability (volcanic influences, ENSO etc.) has conspired to make a relatively gradual trend look like a sudden one.

Unfortunately their attempt to account for such natural variability doesn't really improve the picture. They apply equal-sized adjustments to each hemispheric dataset for volcanic and ENSO effects. This might be OK for most volcanoes, but the 1963-4 Mt Agung eruption had distinctly asymmetric hemispheric effects, with an SH bias. Their adjustment left the SH still feeling the cold, while making the NH too warm. I'm also not too sure about their ENSO adjustment - I believe it tends to affect SH SSTs more (?).
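
To make that concrete, here is a rough sketch of the hemisphere-specific alternative: regress each hemispheric SST series separately on lagged ENSO and volcanic indices, rather than subtracting one common correction from both hemispheres. The function, its inputs and the lag are illustrative assumptions on my part, not the actual Thompson et al. procedure:

```python
# Rough sketch: regress out ENSO and volcanic signals from a hemispheric
# SST series, fitted per hemisphere. Illustrative only - not the actual
# Thompson et al. (2010) adjustment.
import numpy as np

def remove_forced_component(sst, enso_index, volcanic_aod, lag_months=3):
    """Return SST residuals after regressing out lagged ENSO and volcanic
    predictors. All inputs are 1-D monthly arrays of equal length; the
    3-month lag is an assumed response time, not a fitted one."""
    n = len(sst) - lag_months
    y = np.asarray(sst)[lag_months:]
    X = np.column_stack([
        np.ones(n),                     # intercept
        np.asarray(enso_index)[:n],     # lagged ENSO predictor
        np.asarray(volcanic_aod)[:n],   # lagged volcanic aerosol optical depth
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # per-hemisphere coefficients
    return y - X @ beta                 # the "adjusted" series

# Fitting NH and SH separately (hypothetical inputs below) lets an
# asymmetric eruption such as Agung cool the SH more than the NH, instead
# of imposing the same equal-sized correction on both hemispheres:
# nh_adjusted = remove_forced_component(nh_sst, nino34, nh_aod)
# sh_adjusted = remove_forced_component(sh_sst, nino34, sh_aod)
```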

James Annan said...

Well, Blogger has now decided that this conversation should be drawing to a close.

I don't disagree with their judgement :-)

David Young said...

James, I want to thank you for an interesting and informative post and discussion.

Alex Harvey said...

James, I thank you as well for hosting the interesting discussions - thanks for your patience regarding my perhaps naive ideas. :-)

By the way, for those interested in the aerosols vs internal variability issue an interesting post serendipitously appeared today at Isaac Held's blog - link

James Annan said...

Of course compliments are always welcome :-) But (to explain what I meant) Blogger has gone into manual approval mode for comments, which makes it hard to carry on a conversation, especially if I miss some email notifications.

I'm sure Isaac Held will appreciate your discussions over there :-)
