Monday, May 20, 2013

More on that recent sensitivity paper

Now I'm embarrassed at my naivety...it is all as clear as day. The story goes as follows:

Way back in the mists of time (well, about 2011 or so) the IPCC authors agreed that the "likely" value for the equilibrium climate sensitivity was 2-4.5C. They then wrote the first draft to match, which was easy enough as they seemed to be unaware of most of the recent literature on the matter, and could easily brush off the few papers they did know about (like ours) as outliers.

Inconveniently for them, the observations of the planetary energy balance are actually incompatible with their preferred choice, and as well as some reviewers telling them about the papers that had already appeared, more papers continued to be written - too many to be just ignored this time. So that left them with a bit of a credibility gap.

The brilliant solution they have come up with is to write a paper on the planetary energy balance, which in numerical terms of course basically confirms what all the recent papers have said, but describe this with the phrasing that their result "is in agreement with earlier estimates, within the limits of uncertainty." (Here, "earlier" clearly refers to papers which do not use the last decade of data, ie those up to around AR4 time). Thus, this paper can be cited as support for the 2-4.5C "likely" range! They've even got one of their loudest critics, Nic Lewis, to agree with this!

Whoever came up with that wording certainly deserves a Nobel prize...for chutzpah. I suspect that Nic may regret putting his name to it, although he could argue - with some justification - that the numerical results should outweigh the verbal gymnastics.

Note by the way that it's not just the recent decade of data that points to a more moderate sensitivity estimate. For example, back in 2000, Forest et al generated a 90% range of 1.3-4.2C when they used an expert prior - but at that time, the IPCC experts had all decided that a uniform prior was the correct approach.
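To illustrate why the choice of prior matters so much, here is a toy calculation (entirely my own construction for illustration, not Forest et al.'s actual method). Energy-budget-type observations constrain the feedback parameter lambda = F_2x/S roughly symmetrically, so the implied likelihood in S itself has a fat upper tail; a uniform-in-S prior preserves that tail, while an expert prior clips it:

    import numpy as np

    # Toy illustration of prior sensitivity (assumed numbers throughout).
    F2x = 3.7                              # W/m2 forcing for doubled CO2
    S = np.linspace(0.5, 10, 2000)         # sensitivity grid, deg C
    lam = F2x / S                          # feedback parameter
    like = np.exp(-0.5 * ((lam - 1.3) / 0.4) ** 2)  # assumed Gaussian constraint on lam

    def upper95(prior):
        """95th percentile of the posterior for S under a given prior."""
        post = like * prior
        post /= np.trapz(post, S)
        cdf = np.cumsum(post) * (S[1] - S[0])
        return S[np.searchsorted(cdf, 0.95)]

    print(upper95(np.ones_like(S)))                      # uniform-in-S prior: fat upper tail
    print(upper95(np.exp(-0.5 * ((S - 3) / 1.5) ** 2)))  # 'expert' prior: much tighter bound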

157 comments:

SteveS said...

Has David Rose just stolen your password and taken over your blog, James? That's pretty strong stuff :-)

Anyway, I've been wondering: is it fair to lump all the 'IPCC authors' of this paper together as one?

Piers Forster, Jonathan Gregory and Gunnar Myhre have all published estimates on the low side before.

I wonder if they agree or disagree with the 2-4.5K range. If they do, what's their logic? Perhaps they look at the other lines of evidence and think that the energy budget results need tweaking up a bit. If so they could do a great service by making sure AR5 explains this very clearly.

Or do you think they disagree with 2-4.5K and just get steamrollered by the rest?

Unknown said...

James,

Why do you lay so much emphasis on the recent climate results (which cover only 40 years, after all), and so little on the results from the LGM, like your own?

I am thinking also of the PALEOSENS team results synthesising a couple of dozen papers, which ended up with estimates for ECS much closer to the IPCC 2-4.5C range.

Why do the paleoclimate and recent-warming estimates not agree, or at least agree better?

Anonymous said...

James,

Interesting thoughts. I wonder if you are right.

As I now understand it, having had a debate about this point today with one of the relevant WG1 lead authors involved, any study with a 95% (or even 97.5%) bound that is higher than 4.5 K will be considered by the IPCC as being consistent with a 4.5 K 'likely' range upper bound, at least if their lower 5% bound is 2 K or lower.

Likewise, any study with a 5% (or even 2.5%) lower bound below 2 K will be considered by the IPCC as being consistent with a 2 K 'likely' range lower bound, at least if their upper 95% bound is 4.5 K or higher.

On that basis, few studies are going to be treated as inconsistent with the 4.5 K upper bound (my Journal of Climate study being one), and they will no doubt be dismissed as outliers. And even if there were several studies that came up with a lower bound, or a mode or median, well below 2 K they would all be treated as consistent with a 2-4.5 K range if their upper 95% bounds exceeded 4.5 K.
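To put the rule in concrete terms, here is a toy formalisation (my own, not anything the authors have written down):

    def consistent_with_likely(lo5, hi95, likely_lo=2.0, likely_hi=4.5):
        """Toy version of the consistency rule described above: a study is
        'consistent' with the likely range so long as its 5-95% interval
        straddles both ends of it."""
        return lo5 <= likely_lo and hi95 >= likely_hi

    print(consistent_with_likely(0.9, 5.0))  # True - almost any wide interval passes
    print(consistent_with_likely(1.2, 3.9))  # False - upper bound below 4.5 K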


They are certainly going to cite this paper in AR5, BTW - it was accepted by the 15 March deadline.

I certainly think the numerical results should outweigh the verbal gymnastics. I suspect the IPCC authors were never likely to change their minds on the 2-4.5 K range no matter how strong the observational evidence. They seem in thrall to the lure of GCM simulations. Not very scientific IMO.

Mac said...

Please excuse me if I might seem unimpressed :)

1) Do you believe this Otto study has really excluded any values for ECS that were previously considered significant?

2) Do you believe this Otto study might be biased low - possibly by at least 0.5 degrees? Among many other things, it makes a lot of assumptions (like linear feedbacks) and seems to place almost no weight on the energy imbalance already present in the oceans before 1970 (or now, for that matter). (Or for some other reason - your insight would be interesting.)

3) What do you think about the so-called long-term feedbacks - given that in 10-15 years at most the Arctic could be free of ice in the NH summer (when it really matters; it would still have ice in winter, minimising heat loss from the ocean) - do you believe now would be a good time to also talk about this?

Chip Knappenberger said...

Nic,

Considering Kyle Swanson's latest

http://onlinelibrary.wiley.com/doi/10.1002/grl.50562/abstract

which shows the CMIP5 models do worse than the CMIP3 models (which are already doing badly enough) with recent temperature trends, you'd think that the IPCCers would be starting to have cause for concern about GCM performance, rather than being "in thrall" to their "lure"!

-Chip

PeteB said...

nic,

I'm no expert in any of this and apologies if I am teaching my grandma to suck eggs

'likely' in IPCC-speak means > 66% probability
'very likely' means > 90%

http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch1s1-6.html

the current AR4 'likely' range is 2 - 4.5

James has made the point that recent studies would not seem to support <2 and >4.5 being equally likely

AR4 has separate sections for observational constraints on climate sensitivity

http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-6.html

9.6.2.1 Estimates of Climate Sensitivity Based on 20th-Century Warming

9.6.2.2 Estimates Based on Individual Volcanic Eruptions

9.6.3.1 Estimates of Climate Sensitivity Based on Data for the Last Millennium

9.6.3.2 Inferences About Climate Sensitivity Based on the Last Glacial Maximum


In section 9.6.4 "Summary of Observational Constraints for Climate Sensitivity" it concludes
.....Results from studies of observed climate change and the consistency of estimates from different time periods indicate that ECS is very likely larger than 1.5°C with a most likely value between 2°C and 3°C. The lower bound is consistent with the view that the sum of all atmospheric feedbacks affecting climate sensitivity is positive. Although upper limits can be obtained by combining multiple lines of evidence, remaining uncertainties that are not accounted for in individual estimates (such as structural model uncertainties) and possible dependencies between individual lines of evidence make the upper 95% limit of ECS uncertain at present. Nevertheless, constraints from observed climate change support the overall assessment that the ECS is likely to lie between 2°C and 4.5°C with a most likely value of approximately 3°C

I guess we can't just rely on energy balance models of current warming, and need to look at other lines of evidence, in particular LGM estimates (like Jules and James's recent paper, which IIRC had an estimate of about 2.5°C with a high probability of being under 4°C)

James Annan said...

SteveS,

I agree that the different authors probably have somewhat different perspectives. But the IPCC as a collective has a unified voice. "Steamrollered" is your choice of word, but I'm not going to disagree with it :-)

I'd be interested to know which bits you particularly think of as "strong stuff": some of it is my own interpretation which may not be entirely correct, but some of it is a simple description of facts.

Unknown: there is no reason at all why estimates generated from different data should "agree" in the sense of generating the same pdfs. They should "agree" in the sense of overlapping within their uncertainties (which they do), and the true value of the climate sensitivity should lie in their intersection. This seems to be one of the major misunderstandings of some climate scientists, even though we wrote a very straightforward paper about this back in 2006.

Nic, I don't think it is necessarily based on the primacy of the GCM results, this range may be merely being used as a convenient justification. I think there is strong resistance to a change, for a number of reasons, mostly bogus.

Mac, the last IPCC report said a value substantially higher than 4.5C could not be excluded, and such high values are routinely used in economic analysis (eg Marty Weitzman). I don't believe these values are plausible, but they seem "really significant" to some people.

Anonymous said...

So is the Annan number still 2.5C, or has it been steamrollered down to 2C?

BFrykman said...

Has it ever occurred to any of you that perhaps none of you are really doing any real science at all?

You are all engaged in esoteric jargon, but does that really make science? It didn't make phrenology science even though a community calling themselves scientists, armed with their own impressive jargon, studied it intensely.

You do engage in predictions, and science is generally about predictions (concerning real life, not computer games). But Jeane Dixon also spent a lifetime making predictions, and her science had a huge political following too. Some prominent people in Washington supported her science as well, just as they do yours (hers was called the science of Astrology).

It seems to me that real science has to, at some point, square with accurate predictions about the real world.

Can your science beat darts or dice in this regard?

Lastly, even the greatest men of real science often then go on to get things completely wrong and lead great movements of their disciples down a dead end trail of nonsense. (Newton devoted the majority of his adult career towards the study of the science of Alchemy).

More recent examples of this include Edward Teller leading his throng of followers towards building X-ray laser cannons for Reagan - it was all a fiction, and only the dimmest bulbs in his rat pack believed in any of it, but the federal money kept flowing and their parties were fabulous.

Why doesn't one of you brave souls go out and seek federal funding for research developing the idea that climatology is nothing but a farce?

You could sell it based on the trillions it would save the economy. Get your life insurance up to date first, however, and then wear a bullet-proof vest. You won't be winning a popularity contest at the next IPCC Gala.

Unknown said...

BFrykman

Newton is a poor example because he spanned the historic divide between science and alchemy. JM Keynes called him "Not the first scientist, but the last of the Babylonians".

Einstein also spent the last years of his life in what many considered useless endeavours, but they were not alchemical.

Otherwise, all I can say is - would you travel on an aeroplane designed by the principles Jeane Dixon brings to her "art"?

Mac said...

James> ... the last IPCC report said a value substantially higher than 4.5C could not be excluded, and such high ranges are routinely used in economic analysis (eg Marty Weitzmann). I don't believe these value are plausible, but they seem "really significant" to some people.

That is very relevant in the context I mentioned of the feedbacks from Arctic ice loss (which you seem to want to avoid talking about) - I seem to remember that the IPCC and most models were predicting an ice-free summer around 2070, but we might see it before 2020 - how big are those (normally very, very slow) feedbacks? And if such a large uncertainty proved to be true for that essential aspect, wouldn't it be much wiser to also consider the bigger values for sensitivity - at least until the polar ice is (mostly) gone, as it was the last time CO2 was at 400-500 ppm?

PeteB said...

Mac,


http://onlinelibrary.wiley.com/doi/10.1029/2011JD015804/abstract

Results show that the globally and annually averaged radiative forcing caused by the observed loss of sea ice in the Arctic between 1979 and 2007 is approximately 0.1 W m−2; a complete removal of Arctic sea ice results in a forcing of about 0.7 W m−2, while a more realistic ice-free summer scenario (no ice for 1 month and decreased ice at all other times of the year) results in a forcing of about 0.3 W m−2, similar to present-day anthropogenic forcing caused by halocarbons.

I think that is why we need to use several lines of evidence (including times where there was a much bigger temperature change, e.g. LGM studies) to come up with the overall best estimate sensitivity - as per lots of papers (several by James and Jules)

SteveS said...

Ignoring incompatible papers... massaging messages in the peer-reviewed literature out of fear of a credibility gap... bringing on board Nic Lewis to cover their backs...

I don't know either way, so I used the word 'strong' rather than 'wrong'.

It seems every media outlet in the English-speaking world is talking about climate sensitivity now. People see you as some kind of independent voice of sanity. When you put fingers to keyboard in anger like this, all sorts of people will read, and they may not be able to judge at which points they should laugh!

I enjoy strong coffee. I also like to know it's responsibly sourced :)

BFrykman said...

RE: "Otherwise, all I can say is - would you travel on an aeroplane designed by the principles Jeane Dixon brings to her 'art'?"

Did I convey the sense that I am not a supporter of science? I apologize if I conveyed that impression.

But your example raises an interesting point:

Applied science (engineering) builds our airplanes for us which are, at least scientifically speaking, terribly simple things.

At one point in my career I worked with auto pilots and in doing so had to learn the basic principles upon which heavier than air flight was made possible.

In fact theoretical science at that time had it all wrong.

Aerodynamic lift was, until recently, thought to be the result of Mr Bernoulli's Principle. In fact, four-engine jet transports capable of carrying hundreds of people on safe, fast and efficient transcontinental journeys were all built upon some pretty bad science.

Mr Bernoulli's Principle has been grounded since my earlier flights and now Newton's third law of motion is flying high in its place, or so it seems.

Modeling is the business of science to be sure, but as Clint once so eloquently put it "A man has got to know his limitations" and in my opinion, so does science.

I suggest climate modelers turn their attention to the simpler side of complex and show us how to model the behavior of the world's economy or, even simpler than that, just model the, easy as pie, Dow Jones Averages.

Once they have made their followers wealthy beyond measure by applying their models to their own behavior, they can then gain the confidence of us skeptics who claim the Earth's economy is Tiddly Winks when compared with the Earth's climatological behavior.

Now you will have the ammo to shoot all the skeptics right out of the sky when we claim darts are at least as accurate as your GCMs in predicting what data sets purporting to represent the Earth's average temperature might be doing at some point in the future.

Carl C said...

I guess (basically) they wanted Nic Lewis inside the tent pissing out, rather than the usual, outside pissing in? It seems this paper is both ammunition for the usual hordes of skeptics ie "we beat the climate scientists by dragging their estimates down a half-degree" as well as the climate scientists ie "the range didn't change by much - global warming is still disastrous."

James Annan said...

SteveS, the content of the first draft will be public eventually, at which point everyone will be able to see how they originally argued for their preferred sensitivity estimate. I expect that the inclusion of Nic Lewis on this paper was more serendipitous than conspiratorial. And the massaged message...well, I can't see any other plausible interpretation, but I'm open to suggestions, not just from you, but from anyone: is there a reasonable defence of their claim that this result for ECS is "in agreement with earlier estimates"? Note that the press release makes it explicitly clear that this is intended to endorse the 2-4.5C range: "A new study led by Oxford University concludes the latest observations of the climate system’s response to rising greenhouse gas levels are consistent with conventional estimates of the long-term ‘climate sensitivity’, despite a “warming pause” over the past decade."

JCH, 2.5 is still fine by me, though I wouldn't be surprised by a value a bit lower or higher. I don't think the recent decade really changes the best estimate all that much, but it helps to confirm what sensible people were saying several years ago about extremely high values :-)

BBD said...

Just up, on the Met Office research news page, an article from Alexander Otto:

Using only the data from the decade from 2000-2009 we find a 5-95% confidence interval for equilibrium climate sensitivity of 1.2-3.9°C. We compare the range to the range of the CMIP5 models of 2.2-4.7°C saying that the range overlaps but is slightly moved to lower values. If we use the data from 1970-2009, also including the last decade, instead we find a 5-95% confidence interval of 0.9-5°C for equilibrium climate sensitivity.

Comparing these ranges directly to the IPCC's range for climate sensitivity from AR4 is difficult. For one, the IPCC didn't directly give a 5-95% confidence interval (i.e. no upper 95% limit), and secondly, the IPCC range is not derived formally from an analysis of data, but is a consensus expert assessment of all the different lines of evidence underlying the IPCC report. Hence the IPCC's likely range of 2.0-4.5°C is not directly comparable to a 17-83% confidence interval derived from our study. IPCC typically down-grades confidence levels from those reported in individual studies to account for "unknown unknowns".

For all investigated periods apart from the last decade alone our derived confidence intervals fully include the 2-4.5°C range. They do extend below it, but that is not an inconsistency - which is why we conclude that, given all the uncertainties, our results are consistent with previous estimates for ECS.

Carl C said...

I just don't quite get the zeal for the new constraints as some "smoking gun" based on less than a decade (2000-2009) of data, in a field that usually goes by "decadal means", especially given the millennial reconstructions & hindcasts etc, even though skeptics like Nic Lewis just wave their hands and say "GCMs are useless." It seems the Oxford crew were being magnanimous in allowing Nic Lewis on a Nature paper, but he & the "Watts" & "auditors" will just use it as ammunition. I think if 10 years ago the climate sensitivity had been given as 0.5-1.5C -- they'd still have careers saying it's "-1 to 0" ;-)

Martin Vermeer said...

I suggest climate modelers turn their attention to the simpler side of complex and show us how to model the behavior of the world's economy or, even simpler than that, just the model the, easy as pie, Dow Jones Averages.

I don't think economics, i.e., the behaviour of flocks of humans, is the example you really want to give for illustrating the notion of 'simple' --- especially not given the recent track record of whatever passes for modelling this ;-)

Paul S said...

Carl C,

It's confusingly worded in places, but when they talk about the 2000-2009 result they are actually talking about 2000-2009 as a decadal mean compared to their 1860-1879 reference period.

Carrick said...

Here is something I don't quite understand.

Nic's post on BH has this nice little graphic:

[figure]

where he's shown the results by decade for the ECS probability density function.

What I don't get is that 2000-2009 has the largest mode of any of the decadal periods he considered.

Given the apparent slow down in surface-air temperature for that decade, what is driving the larger estimate of ECS?

Paul S said...

Carrick,

I think that's partly because "the slow down" is only apparent as a slow down if you look at the past five years in the context of the previous decade of particularly fast warming. Try plotting the HadCRUT4 annual time series from 1970 but with 1998-2005 blanked out - the past five years look like they could easily be a linear continuation. Also, this study only uses data up to 2009, and "the slow down" wasn't apparent at all at that point.

The main reason for a greater ECS estimate is ocean heat uptake rate, which dramatically increased in the early 2000s.

David Young said...

Carl & Martin,

Saying models are useless is extreme. You know what I think, namely that observationally constrained estimates are much more trustworthy. We should continue to improve models mindful of what they can and cannot do and the numerous untrue dogmas surrounding them. And by all means use modern methods, not the best methods of the 1960's.

David Young said...

PaulS, I would be less concerned about people misinterpreting what James says than about people like Hansen, who seems intent on going Linus Pauling. No matter what you say, some will misrepresent it. This is a critical issue, and I would be more afraid of self-censorship than of "overflowings of liberty", as a famous American founding father said.

Carl C said...

Well it's the models, even really old models actually, that showed us climate sensitivity in the first place, e.g. what a doubling of CO2 would do to the atmospheric system.

And as I've mentioned paleo & modern hindcasts & ensembles run for 100 to 1000 simulated years have proven their worth, and their errors/uncertainties are pretty well-known by now. Which is why it's hard to take seriously someone the deniers prop up who just immediately "pooh-poohs" GCMs etc, whether it's Nic Lewis or Screaming Lord Monckton or whatever.

David Young said...

Carl, there are some papers that shed light on this. It is discussed on the anthropogenic data point thread here. Actually, Gerry Browning is better on this than our work.

I thought radiative physics and simple energy balance models were what told us about sensitivity, but the story keeps changing. Why in the world would you think that a model with acknowledged large errors would be accurate for long time periods when it's bad for shorter periods compared to a weather model run? To any mathematician this requires at least a plausible explanation.

David Young said...

One other thing deserves mention. In the David Rose star chamber proceedings, it was commented here rather cynically that Myles Allen's very mild attitude must be disingenuous. It now appears that Allen deserves credit for honesty since at the time he surely already knew about these results.

I claimed that "Myles Allen is even talking about a sensitivity of 2C" and was excoriated for it with citations from Allen's previous work. Neither expecting an admission of error nor really caring for one, I think it is important to correct the record.

David Young said...

Paul S, Unless I'm not interpreting it correctly, I seem to remember that the rate of ocean heat content increase slowed starting around 2002 or 2003. Perhaps that was sea surface temperature?

Paul S said...

David Young,

That's basically the picture down to 700m depth, but the rise from 2000-2003 is large enough that the average uptake rate over the whole decade remains relatively high despite flattening after 2003.

Down to 2000m that flattening isn't apparent in observations at all.

You can see the data for different depths plotted on this slideshow.

Patr.Fleg. said...
This comment has been removed by the author.
Patr.Fleg. said...

Regarding estimates of sensitivity, James, what is your opinion of the latest research by Julie Brigham-Grette et al., which indicates that the climate around three million years ago, with CO2 concentrations similar to today's, was warmer, especially in the Arctic? Doesn't this indicate a rather high climate sensitivity?

Link is here: http://www.umass.edu/newsoffice/ice-free-arctic-may-be-our-future

Paul S said...

Patr.Fleg.,

The PALEOSENS review mentioned previously features one estimate referring to that period (this paper - Pagani et al. 2010). Using a global temperature anomaly estimate of ~4ºC and estimated CO2 ~400ppm they discern a 2xCO2 sensitivity of ~8ºC, which is just derived from (4ºC/1.9W/m2) * 3.7W/m2.
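As a sanity check, that back-of-envelope conversion is just the following (numbers as quoted above, with the canonical 3.7 W/m2 for doubled CO2):

    dT = 4.0     # K, Pliocene global temperature anomaly (as quoted above)
    dF = 1.9     # W/m2, estimated forcing difference for ~400 ppm CO2
    F2x = 3.7    # W/m2, canonical forcing for a doubling of CO2

    S2x = dT / dF * F2x   # implied 2xCO2 sensitivity
    print(round(S2x, 1))  # -> 7.8, i.e. the ~8 C figure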

Note that the authors refer to this estimate specifically in relation to Earth System Sensitivity, as opposed to the fast feedback ECS estimates under discussion here. The key difference is that they don't factor out all the slow boundary condition changes from our Holocene situation, such as topography, ice sheets and glaciers (as I understand it there was very little ice anywhere), sea level, other GHGs, which all influence the fast feedback climate conditions.

Aside from that distinction I'm not sure how much the result really says about Earth System Sensitivity to 400ppm looking out from today. I'm thinking about this in terms of what prefigured these Pliocene conditions - a "hothouse" world for many millions of years - and, as a corollary to that, situational differences in climate stability. ~3 million years ago the continents were in basically the same position as today, importantly with Antarctica over the South Pole, yet it was still a hothouse rather than the current ice age state. Given what happened next I think it reasonable to suppose the hothouse state had become somewhat unstable at that point and just needed a little push to nudge things into an ice age. Now we're in an ice age, which seems to be stable, so I suspect we'll need a bigger push than 400ppm to get back up to Pliocene conditions, even if we talk about a timescale of 100k years for an Earth system response.

David Young said...

On a personal level I'm quite sympathetic towards Prof. Allen. His monitor is probably shooting white-hot flames at him from email from the usual suspects who are wondering why he didn't manage to "hide the decline." I'm sure some of it is mean, libelous, and nasty. As Churchill observed, anyone can rat once, but to re-rat takes real talent. I would advise Allen that to rat honestly is superior to being a weasel, which, based on the last few days' events, may seem like an attractive option but in the long run will prove less attractive.

KarSteN said...

@David Young:

Your comment nicely demonstrates which planet you seem to be living on. Working in the same building/lab as the principal/first co-author (as well as Myles Allen), I can assure you that no pressure whatsoever from whomsoever is put on them/him. I find your comment utterly ridiculous.


@Paul S:

Re your comment in the other thread: I also don't exactly get why Otto et al. 2013 made this assumption regarding the aerosol forcing. The two references aren't in support of their claim. Luckily, they provided the results without adjusted aerosol forcing (see Table S2; case D). As a result, ECS/TCR increases to 2.4/1.5. Currently I'm on vacation, so I can't ask them directly why they decided to choose the numbers as they did. Given that Gunnar Myhre, Bjorn Stevens and Ulrike Lohmann are on the paper, I'd reckon that they knew why they did it. I'm gonna ask them upon my return in mid-June.

Given that they are using the adjusted radiative forcing from Forster et al. 2013 (which implies 3.44W/m2 rather than 3.7W/m2 forcing for CO2 doubling), does that imply a scaling factor for the current forcing as well? I am asking because I haven't had the time to check the details yet.

On a related issue, I wonder how the results would change if they had applied the Balmaseda et al. 2013 (B13) results. I find it particularly interesting, as the integrated OHC change after the Pinatubo eruption in B13 matches that of the modelled volcanic signal in the oceans (see Stenchikov et al. 2009). Given that the volcanic signal remains in the system, I wonder how it is accounted for with their method? My point is, are the additional 4x10^22J of heat content change (which are currently still in the system, assuming that the model results from Stenchikov et al. 2009 are correct) reflected in the total system heat uptake in Otto et al. 2013? Although their heat uptake for the 1990s is positive, I think this is merely a result of the OHC estimate from Levitus et al. 2012 which they are using (which shows no significant OHC drop after the Pinatubo eruption). Any thoughts on that?

David Young said...

KarSTen,

I don't think Oxford is pressuring Allen. If his institute is funded by soft money there is always the inherent bias introduced by wanting that funding to continue. I'm much more concerned by the political hangers-on in this debate, you know the types who can't wait to read their daily dose of the Guardian. Allen blogs there. Commenters are pretty biased and have fixed political points of view.

If you have read this blog regularly you have seen some milder hints of it here, mostly by anonymous posters. There has been some indirect criticism of James here also. Jules has borne more of the brunt. I remember several suggestions that James's statements could be misinterpreted by "the enemy." Anyway, I don't know where you have been, but any grown-up would realize that there is often subtle pressure in this field to "keep your nose clean."

Myles Allen has shown promising signs recently. He defended Richard Lindzen against a hostile moderator at a recent debate. Lindzen holds that ECS is about 0.8C and has received some very hostile treatment elsewhere. He more or less passed up the opportunity to pile onto David Rose. But he just attacked Matt Ridley in rather personal and in my view dishonest terms. Prof. Allen apparently still holds that urgent political action is needed.

You also might find it instructive to read up on Vioxx and Merck, or the whole issue of vertebroplasty, or even Vitamin C. Those things happened before your time, but were wake-up calls in medicine that led to recognition of the problems that needed to be addressed.

I realize you are young and inexperienced, but I suggest you read some of the climate gate commentary. Richard Muller is actually excellent and not a "denier."

I would be quite interested to have Prof. Allen post here. I would be happy to have him answer questions and of course would regard him as rather a final authority on his position. I do think though that recent ECS estimates are not really fully consistent with IPCC AR4 and I agree with James that the IPCC needs to rethink their position.

Alex Harvey said...

James,

You and others here are talking a lot about squaring the AR4 best estimate of ECS with the results of Otto et al., and the verbal gymnastics are of course impressive.

However, I want to know why we're not talking about squaring a possible observed ECS of ~1.9 K with the AR4 GCMs specifically.

AR4 Table 8.4 shows the sensitivities of 23 GCMs and they range from 2.1 to 4.4. If the ECS is in reality 1.9 then surely the most scientifically important issue is how wrong the models are.

I am also still curious how you feel this would impact your own results - particularly your most recent paper on paleo ECS. After all, you have used the same models - even if your method weighted the ones that agreed best with the observations. Surely if you had new models that agree with nature, you'd get a revision down of ECS - wouldn't you?

Paul S said...

Karsten,

I'll make one correction to my comment on the other thread. I specifically said that the Quaas 2009 estimate seemed to be in agreement with ACCMIP's -1.2W/m2 because I understand comparing these things can be a minefield. Reading the method for each I now see Quaas 2009 estimates the difference between 1750-early 2000s whereas the ACCMIP modelled values are for 1850-2000. Adopting a typical adjustment would put Quaas 2009 at ~ -1.0W/m2 compared to 1850. The main point that ACCMIP != CMIP5 still stands though.

Regarding the Balmaseda reanalysis results, if I dodgily eyeball the difference between that and Levitus 0-2000m + Purkey and Johnson abyssal I think you'd get something like 2.4ºC adopting their other best estimate values, and up to 3ºC taking into account the sensitivity tests.

One thing this does tend to indicate is a rather robust most likely range of 2-3ºC, in agreement with what AR4 said about time-evolving observational estimates.

My take is that 2-4.5ºC is arguably a defensible likely range, but also that reducing it to 2-3.5ºC would be a reasonable reflection of the available evidence (actually I like 1.8-3.6, so you can talk about a factor of 2 difference and also break the dependence on numbers divisible by 0.5). It depends on how conservative/aggressive you want to be with regard to the range of estimates and unknown unknowns.

One problem I do see is that the 2-4.5ºC range is generally discussed as if it has a normal distribution or even a uniform distribution, whereas I think it has to be said the lower half appears much more likely than the upper half.

BBD said...
This comment has been removed by the author.
BBD said...

Paul S; Karsten

According to the SI, Otto et al. didn't use Levitus 2012 alone. They used an update to Domingues 2008 for the 0 - 700m layer and L12 for 700 - 2000m:

The oceans account for about 94% of the estimated trend for total heat uptake from 1971–2010. For the upper (0–700 m) oceans we use an update [1] up to 2009 of a 3-year running mean of annual upper ocean heat content estimated from ocean temperature observations [2]. We add a deep (700–2000 m) ocean heat uptake estimated from five-year observational averages [3]. For the abyssal (2000–6000 m) ocean heat uptake we use a global trend estimate [4] made from observations taken between 1981 and 2010, but centred on 1992–2005. We apply that abyssal ocean trend only from 1992–2010 given limited observations prior to this period.

Apparently this increases the ECS estimate (again, from the SI):

The ECS and TCR estimates are of course sensitive to the choice of the specific datasets, i.e. the choice of the upper ocean heat uptake dataset makes a big difference (an alternative would be Ref. 3 [3], lowering the ECS estimate), comparable to the impact of the forcing adjustments due to aerosols (see below).

***

References:

1 Church, J. A. et al. Revisiting the Earth's sea-level and energy budgets from 1961 to 2008. Geophysical Research Letters 38, L18601, doi:10.1029/2011gl048794 (2011).

2 Domingues, C. M. et al. Improved estimates of upper-ocean warming and multi-decadal sea-level rise. Nature 453, 1090-1093, doi:10.1038/nature07080 (2008).

3 Levitus, S. et al. World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010. Geophysical Research Letters 39, L10603, doi:10.1029/2012gl051106 (2012).

4 Purkey, S. G. & Johnson, G. C. Warming of Global Abyssal and Deep Southern Ocean Waters between the 1990s and 2000s: Contributions to Global Heat and Sea Level Rise Budgets. Journal of Climate 23, 6336-6351, doi:10.1175/2010jcli3682.1 (2010).

Paul S said...

BBD,

I just took the Balmaseda 2013 chart, drew an eyeball trend on it from 2000-2009 and checked the slope against their heat uptake rate key. It read out at ~ 0.85W/m2. That compares to 0.65W/m2 listed in the SI. Plugging 0.85 into their ECS formula gives 2.4, using the alternative forcing from the sensitivity test makes it 3.

Interestingly, 1990-1999 shows a negative heat uptake rate in the Balmaseda 2013 reanalysis time series, resulting in an ECS estimate of ~ 1.5ºC.
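For anyone wanting to reproduce these back-of-envelope numbers, the energy-budget relation is just ECS = F_2x x dT / (dF - Q). A minimal sketch - the dT and dF values below are rough stand-ins of mine, not the paper's exact inputs:

    def energy_budget_ecs(dT, dF, Q, F2x=3.44):
        """Energy-budget ECS estimate, ECS = F2x * dT / (dF - Q).
        F2x = 3.44 W/m2 is the adjusted CO2-doubling forcing implied by
        Forster et al. 2013, as noted above in the thread."""
        return F2x * dT / (dF - Q)

    dT, dF = 0.75, 1.95                     # K and W/m2: assumed, roughly Otto-like
    print(energy_budget_ecs(dT, dF, 0.65))  # ~2.0 with the SI's 0.65 W/m2 uptake
    print(energy_budget_ecs(dT, dF, 0.85))  # ~2.4 with the eyeballed Balmaseda rate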

Carl C said...

David Young, I have to agree with "KarSteN" -- you seem to be pretty delusional re: Allen and everything else. The notion that there is some "great climate scientist conspiracy" is hilarious considering you're posting on James' blog, which has probably had the biggest attacks against Prof. Allen et al.

And his "defense" of Lindzen was on personal grounds; I suppose you probably don't even know that he was a post-doc of Lindzen's at MIT? So I suppose although he doesn't agree with Lindzen's climate skepticism, he doesn't want to see his old boss just flamed in public. The impression I always got about Lindzen & other crotchety old scientists is the one I get from reading your posts, i.e. "old geezer thinks he can come back from zombie-hood and teach those young whippersnappers a thing or two."

I am continually finding it amazing that the prospect of Nic Lewis on a Myles Allen paper is somehow a "smoking gun" that "global warming is over" -- as if it legitimizes all the skeptic bullshit over the years.....

David Young said...

Carl, your post has little to do with what I actually said and more to do with stereotypes of skeptics.

There is no conspiracy of climate scientists. It's mostly very open, but have you forgotten about climate gate? Maybe a little bit of a secret collaboration. Some people tend to forget things that they wish had not happened.

I know Allen disagrees with Lindzen. Allen's defense of him from personal attack is a hopeful sign, just as I said.

The issue of bias in science is just as bad in medicine, but I've said that here many times. Did you read it or just lash out?

Finally, Lewis' paper is not a smoking gun but a wake-up call to confess your errors and correct the record. If medicine can do it, climate science can.

Read the references given on previous threads and get back to me if you have substantive responses. It is no surprise to me that GCMs disagree with the recent string of ECS estimates.

Your post is just more confirmation that there might be a problem that should be addressed by climate science. Stereotyping outsiders is of course a sign of prejudice, but I'm sure you harbor no such biases.

Carl C said...

oh yes David, your invention of "scandals" is really persuasive, as is your concern trolling and numerous intimations that "climate scientists need to confess". Jesus H. Christ, you must be getting Alzheimer's......

BBD said...

Paul S

Dare I say it (?) but these sensitivity estimates derived from the instrumental record are a little too sensitive to decadal variability in OHC (and aerosol forcing!) to inspire confidence.

David Young said...

Now, now Carl, I was wrong apparently that you harbour no such bias. I confess my error!! I did not invent anything. Some people still think that Vioxx is safe and effective, that vertebroplasty works or that climategate didn't happen.

I can't help you with the name calling.

Carl C said...

"climategate" was an invention of the right-wing media machine (e.g. Murdoch et al). It did fool me for about a day until I looked into what the hell it really was. That you are still talking about it says more about your gullibility. Based on such concocted BS as "climategate" - I suppose you think Obama is a Muslim born in Kenya and forged his birth certificate too....

David Young said...

OK, Carl, I see I really misjudged you. I had thought you were just misguided, but now I see you are a "denier." Richard Muller is excellent on climategate. Just to expand your horizons, you might want to view his video on it. There's a lot of other stuff there as well, such as some debunking of An Inconvenient Truth and energy policy material. Muller is not a member of your mythical right-wing media machine.

Carl C said...

you mean Richard Muller who, even after taking right-wing Libertarian Koch brothers money, had to come to the conclusion that global warming was a reality? If he can finally come around to reason, what's your excuse (other than senility)? http://www.huffingtonpost.com/2012/07/29/richard-muller-climate-change-humans-koch_n_1715887.html

David Young said...

This conversation is not productive Carl, so I'll leave your comments to speak for themselves.

Carl C said...

Translation - you've had your ass kicked once again.....

David Young said...

This "exchange" I think makes my point that there is a problem here. Basically, its the political hangers on who are the most libelous and nasty. i still feel sympathy for Myles Allen if he is subjected to the same abuse. Political diatribes are not science and they do not help advance science or really do anything but make the abuser feel good.

Carl C said...
This comment has been removed by the author.
Carl C said...

Look, David, you're the one coming here posing as "concerned about climate scientists/Myles Allen" - yet scratch under the surface and you have the same idiotic and discredited views a la Senator Inhofe et al. If Myles et al get "harsh political abuse" and "hate mail" it's from your moronic ilk the "climate skeptics" and "right-wing think tank" jerkoffs like Pat Michaels & Myron Ebell. You're a classic "concern troll" and you probably don't even know it....

David Young said...

Name calling and nothing more. I rest my case.

James Annan said...

Hey, play nicely, people - I don't have time to referee all this.

FWIW climategate was hugely overstated in some sections of the media/blogs - while some of the humans involved do seem to have been guilty of some human foibles (which I will not defend), the scientific relevance was negligible. And Muller owes plenty of apologies for his hubristic and insulting attitude towards the scientists whose results he ended up basically replicating, to no-one's surprise except presumably his own.

James Annan said...

Alex, it would be a bit of a surprise to me, but not a huge shock, if the true value of sensitivity were to lie just below the GCM range. For what it's worth, the ensemble of PMIP2 models we used actually included a simpler EMIC-type model with a sensitivity of 1.8C, and it did compare well to the data (though it seemed a little too insensitive).

David Young said...

I'm not sure I would call climate gate of little scientific relevance. You must admit that Muller is a straight shooter who is not afraid to call it the way he sees it. He's been vilified by both sides in this sad saga. History will judge this matter. I note that in fact there was no public response to Muller from Schmidt or Mann. I do regard it as telling too that Annals of Statistics was where the real debate took place and not in a climate journal.

I agree with Alex that these results raise the question of why the models seem so wrong. I'm not surprised of course, but some people have some work to do to find out why.

Paul S said...

BBD,

I guess you do dare ;)

I've noticed one thing which would go some way to explain the lower sensitivities derived for the 90s: the volcanic forcings are very weedy.

This is strange because the CMIP5 Forster et al. 2013 forcing time series, which this study uses as a base, features much larger volcanic signatures. I can't work out why or how, but they've ended up with a Pinatubo forcing signature which peaks ~ 0.6W/m2 smaller than the mean in Forster 2013. Translated to the whole decade that would mean an RF ~ 0.2W/m2 lower than published, indicating an ECS of ~ 2.5ºC with their ocean data or ~ 1.8ºC using Balmaseda 2013.

Magnus said...

And now this:
http://link.springer.com/article/10.1007%2Fs00382-012-1647-y

Paul S said...

Magnus,

Early twenty-first century aerosol forcing is found to be extremely unlikely to be less than −1.7 W m−2

Well... that's a bolt out of the blue.

Paul S said...

On reflection I think they mean it's extremely unlikely that aerosol forcing is more negative than -1.7. I tend to assume the sign is negative and talk about higher and lower values in relation to their distance from zero.

-1.7 as a most-negative result is not so surprising.

Steve Reynolds said...

Latest from Nic Lewis:
http://bishophill.squarespace.com/blog/2013/5/24/updated-climate-sensitivity-estimates-using-aerosol-adjusted.html

Best estimate of ECS using most recent data and different 0-700m heat uptake data sets: 1.53, 1.59, 1.71, 1.79 C.

BBD said...

NL is going to come a cropper with this unsubtle enthusiasm for unfeasibly low ECS estimates contrived from uncertain data.

It's as if he doesn't even know about the paleoclimate-derived estimates - termed "constraints" in some circles.

BBD said...

Paul S

WRT volcanic forcing oddities. Thanks very much for this. More raised eyebrows.

Think we should get the Auditor in?

;-)

Alex Harvey said...

The question will need to be asked - if for the sake of argument ECS did settle on 1.9 C - how is it that models came to be built with such large sensitivity and yet no models were ever built that had low sensitivity?

I must confess that my pet theory, based on my prior interest in Thomas Kuhn's work, has always been that climate science - with both its hotly charged political implications and large uncertainty in both its theory and data - is the perfect recipe for the biases of researchers collectively to significantly impact both the data and the theory.

So much for my own bias, but surely if the experts are finally conceding that the models significantly exaggerate sensitivity - a thing that is supposed to be an 'emergent' property of the models - how is it possible that in the last 40 years a model with a sensitivity *lower* than the actual sensitivity of the climate has never been proposed, never been built?

It's kind of obvious, I think, isn't it, that the modellers simply didn't like models with low sensitivity, and surely that must be the only reason why there aren't any.

The other point is - who is going to finally build a model with a low sensitivity so that such a model can actually be studied? This lack of models with low sensitivity seems to be a gaping hole in climate science - all the research that simply can't occur because modellers won't build low sensitivity models.

Am I wrong? :)

BBD said...

It's kind of obvious, I think, isn't it, that the modellers simply didn't like models with low sensitivity, and surely that must be the only reason why there aren't any.

And:

all the research that simply can't occur because modellers won't build low sensitivity models.

Whoa! Steady now! The models are attempts to approximate the physics of the climate system. They aren't "built with" a particular sensitivity - S is an *emergent behaviour* of the model once it is run.

IMO you are teetering on the brink of going too far ;-)

Next, the ECS-is-less-than-2C meme is extremely premature. As James says, it is a possibility, but it would still be a surprise.

Alex Harvey said...

BBD, I think my point would stand even using James' value of 2.5. Even 2.5 is well on the low side of the range. Don't you find it strange that we've got more than 20 models in the 2.1 - 4.4 range and we haven't got one that's 1.5? 1.5 is after all just a model with the usual positive water vapour feedback and a neutral net cloud feedback.

I think this point needs to be thought about.

I also note you haven't really said anything about my point about all the studies that can't occur absent these lower sensitivity models.

PeteB said...

Has anybody tried this technique on GCM model runs of known sensitivity?

Magnus said...

The 10th, 50th and 90th percentiles of our observationally constrained PDF for the Transient Climate Response are 1.6, 2.0 and 2.4 °C respectively, compared with the 10–90 % range of 1.0–3.0 °C assessed by the IPCC.

BBD said...

Alex Harvey

BBD, I think my point would stand even using James' value of 2.5. Even 2.5 is well on the low side of the range.

No, it isn't. The IPCC AR4 range for ECS is 2C - 4.5C with ~3C as most likely value.

Don't you find it strange that we've got more than 20 models in the 2.1 - 4.4 range and we haven't got one that's 1.5?

No, not at all. It is simply an indication that ECS is greater than 1.5C (as is paleoclimate behaviour). You have ignored what I said about sensitivity being an emergent property of the models. I get the feeling you are concern trolling.

PeteB said...

Alex,

"1.5 is after all just a model with the usual positive water vapour feedback and a neutral net cloud feedback."

have you got a source for that - I don't think that is correct

BBD said...

Magnus, that's an interesting link. Thanks for posting it up.

Carl C said...

I find it fascinating that the "perennially skeptical ones" are so easily tossing out decades of climate model research in favor of one paper they got one of their lackeys onto, to drag the PDFs down a little bit. Further hilarity ensues when they assert that climate modelers have conspired to keep "low sensitivity runs" out of the literature. That would be akin to herding cats for 30 years, to ultimately have a final Armageddon with dogs.

KarSteN said...

Paul S,

the Quaas et al. 2008 forcing estimate is indeed lower, but the trouble I'm having is understanding why they referred to Bellouin et al. 2013 and Lebsock et al. 2008 in order to justify their reduced aerosol forcing estimate. The first reference doesn't support their assumption, while the latter reports only on the first indirect effect. Rather, the scaled TRF (total aerosol RF) from Bellouin et al. 2013 seems to match the ACCMIP results from Shindell et al. 2013 fairly well, apart from the fact that they apply the "effective" RF (which I interpret to be the adjusted RF), which Bellouin et al. 2013 didn't. However, ARF/ERF should be even stronger (slightly more negative), shouldn't it? I'm not saying that the scaled forcing should be as high, but their referencing puzzles me (for the time being).

Regarding Balmaseda, and re what you've noticed for their 1990s OHC trend, I interpret this negative value as purely volcanically induced. Without Pinatubo, we would certainly have seen a positive heat uptake rate that decade. The difference is what is still (to a large extent) in the system. Any meaningful ECS estimate should therefore account for the difference, or am I missing something fundamental here? An example: the average OHC increase between 1970-2010 is a bit more than 21x10^22 J (= 0.3 W/m2 heating rate per unit area of the Earth's surface) according to Balmaseda et al. 2013. Add approx. 5x10^22 J for the Pinatubo and El Chichon eruptions (two dips) which are still in the system (60% of 8x10^22 J, if we follow Stenchikov et al. 2009), and the OHC increase becomes 26x10^22 J (= 0.37 W/m2 heating rate) for the same period. Think of it as a volcanic OHC offset with a very slowly e-folding signal abatement. This number is already above the total system heat uptake of 0.35 W/m2 for 1970-2010 in Otto et al. 2013. Sure, the strength of the volcanic signal in the ocean is model dependent, but if the models are right, the ECS estimates were biased low, as our current OHC anomaly estimates contain the volcanic signal (and hence obscure the full anthropogenic OHC signal). Is it just me who thinks that this issue tends to be underappreciated in most recent energy balance estimate discussions? Or am I really that wrong?

As BBD rightly said, the sensitivity of the ECS estimates to decadal OHC variability is insanely high. I would therefore not rely on them if I were to make an educated guess. By the way, I'm less concerned about the volcanic forcing estimate. If I am right, the uncertainties which relate to volcanically induced OHC changes are much larger (as just detailed), outweighing the former particularly at longer time scales.
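For the unit conversions above, the quick arithmetic goes as follows (standard constants; I've taken the period as 40 years):

    SECONDS_PER_YEAR = 3.156e7   # s
    EARTH_SURFACE = 5.1e14       # m2

    def mean_heating_rate(delta_ohc_joules, years):
        """Mean heating rate per unit area of the Earth's surface (W/m2)
        implied by an ocean heat content change (J) over a given period."""
        return delta_ohc_joules / (years * SECONDS_PER_YEAR * EARTH_SURFACE)

    print(mean_heating_rate(21e22, 40))  # ~0.33 W/m2 for 1970-2010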

BBD,

I think Nic Lewis still believes a low aerosol forcing estimate is justified. If he thinks his dismissive attitude towards more credible results is justified, fine for him. I beg to differ. I read and value his contributions accordingly.

David Young said...

I thought the sensitivity to data uncertainty was taken into account in the ECS estimates and their uncertainty bounds.

I continue to be surprised that people suggest that models or LGM ECS estimates are more reliable than estimates using modern data. If modern data is uncertain, then LGM estimates of temperature and forcings must, it seems to me, be much more uncertain. Aerosol forcing for 24,000 years ago? We still can't agree on what it has been over the last decade. I would refer you to Annals of Statistics. James may be able to better defend LGM estimates or comment on the error bars.

As to GCMs, I won't rehash here what was said on previous threads. I really don't think many people have seriously tried to defend them on a technical level. Certainly no one here. Arguments from expended effort are not technical arguments. Kind of like saying that vertebroplasty must work since it's done hundreds of thousands of times a year by physicians and anecdotes support it, and the investment is huge, or that Vioxx must be safe because Merck invested billions in it.

Trying to be as fair as I can, the answer I got from the experts was that "the climates I get look reasonable." Plausible, but not very satisfying, especially in light of their failure to predict the last decade's worth of global temperatures, their lack of skill at regional climate, etc. I also think there is plenty of a priori evidence that they are not trustworthy, as does Gerry Browning, who is far more of an expert than I am, having worked with these models for 30 years.

Magnus said...

Isn't too much made of a single number for ECS - say 2 or 3 - when the distribution is changed? One distribution might be better at getting the single number right, the other at describing the actual distribution... or what am I missing?

PeteB said...

David,

"If modern data is uncertain, then LGM estimates of temperature and forcings must it seems to me be much more uncertain"

I'm no expert - but as I understand it , the advantages of LGM based studies are

1) It is a much bigger temperature difference,
2) It is over a much longer timescales -between two quasi-equilibrium climate states, so decadal variations tend to be averaged out

Although, as you point out, there is significant uncertainty in both temperature change and forcings (and even if the sensitivity was the same then as now)

Did you find a source for the 1.5 deg C climate sensitivity including water vapour feedback?

I'd be interested in applying the Otto calculation and the Lewis calculation to different model runs (using different models) and seeing how accurately it estimates the sensitivity of the model. (Maybe this has already been done.) I suspect that, depending on the model, there may be some differences.
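Reusing the energy-budget one-liner sketched earlier in the thread, such a perfect-model test is only a few lines - the model diagnostics below are invented purely for illustration:

    def energy_budget_ecs(dT, dF, Q, F2x=3.7):
        """Energy-budget estimate, ECS = F2x * dT / (dF - Q); here F2x is
        the canonical 3.7 W/m2, which should match the model's own forcing."""
        return F2x * dT / (dF - Q)

    # Invented diagnostics from a hypothetical GCM run with known ECS = 3.2 K:
    dT, dF, Q = 0.90, 2.00, 0.95         # K, W/m2, W/m2
    print(energy_budget_ecs(dT, dF, Q))  # ~3.17 K; a systematic offset from the
                                         # known ECS would expose biases in the method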

Anonymous said...

Carrick:
"What I don't get is 2000-2009 has the largest mode of any of the decadal periods he considered. Given the apparent slow down in surface-air temperature for that decade, what is driving the larger estimate of ECS?"

That is because the study used the 0-700 m ocean heat uptake observational dataset with about the highest increase over 2000-09 of all the datasets.

Paul S:
" This is strange because the CMIP5 Forster et al. 2013 forcing time series, which this study uses as a base, features much larger volcanic signatures. I can't work out why or how, but they've ended up with a Pinatubo forcing signature which peaks ~ 0.6W/m2 smaller than the mean in Forster 2013."

Well, a regular poster on a blog you no doubt view with disdain, WUWT, was able to figure out the reason for himself. Maybe that's because sceptics like him (Willis Eschenbach) concentrate on analysing data themselves.

Pete B:
"Did you find a source for the 1.5 deg C climate sensitivity including water vapour feedback ?"

Table 1 of Soden and Held, 2006, An Assessment of Climate Feedbacks in Coupled Ocean-Atmosphere Models, supports a 1.5 C sensitivity from the Planck feedback net of the combined, intimately associated, lapse rate/water vapour feedbacks.

PeteB said...

Thanks Nic,

The range of combined λ(WV) + λ(LR) feedbacks is 0.81–1.20 W m–2 °C–1 in IPCC AR4

So what range of climate sensitivities would that give for a doubling of CO2 if we just included water vapour and lapse rate feedbacks?

Alex Harvey said...

Pete B,

My source is AR4 chapter 8 -

"In AOGCMs, the water vapour feedback constitutes by far
the strongest feedback, with a multi-model mean and standard
deviation for the MMD at PCMDI of 1.80 ± 0.18 W m–2 °C–1, followed by the (negative) lapse rate feedback (–0.84 ± 0.26 W m–2 °C–1) and the surface albedo feedback (0.26 ± 0.08 W m–2 °C–1). The cloud feedback mean is 0.69 W m–2 °C–1 with a very large inter-model spread of ±0.38 W m–2 °C–1
(Soden and Held, 2006)."

If you add this all up you get about 1.46 °C per doubling of CO2, excluding the cloud feedback.

BBD, Carl C, there is no conspiracy theory in the assertion that the AR4 studied 23 GCMs and not one of them included a net neutral or (gasp!) a net negative cloud feedback.

I'm also sure you'll both agree that Andrew Dessler is not a conspiracy theorist and he knows a thing or two about clouds and cloud feedbacks and he observed (Dessler, 2010, Science):

"The observations show that 60 to 80% of the total cloud feedback comes from a positive long-wave feedback, with the rest coming from a weaker and highly uncertain positive short-wave feedback. With the exception of onemodel, the models also produce positive longwave cloud feedbacks, a result also in accord with simple theoretical arguments (34).

"The sign of the short-wave feedback shows more variation among models; it is positive in five of the models and negative in three. There is also a clear tendency for models to compensate for the strength of one feedback with weakness in another. The models with the strongest shortwave feedbacks tend to have the weakest longwave feedbacks, whereasmodels with theweakest short-wave feedbacks have the strongest longwave feedbacks."


So note this carefully - while all models stubbornly agree on a strongly positive net cloud feedback, they can't agree with each other as to whether this is caused primarily by a very strong LW cloud feedback or a very strong SW cloud feedback.

I am puzzled as to why no one can see that this is evidence of some kind of unscientific problem in the completely mysterious and undocumented model development process.

Alex Harvey said...

I should add to my previous comment that Dessler only examined 10 models - and of these 9 had a positive LW feedback, and 1 had a negative LW feedback. So there is no problem for modellers to build models with negative LW feedbacks - just so long as it gets compensated by a very strong positive SW feedback.

Clearly, Nature abhors a net negative cloud feedback, but is okay with a negative LW feedback so long as it gets compensated by a highly uncertain and strongly positive SW feedback. ;-)

PeteB said...

Alex - thanks

so if I take the mean values for a minute

wv feedback = +1.80
lr feedback = -0.84
total = 0.96
and that equates to 1.46°C per doubling of CO2

If we add in cloud feedback = 0.69

total = 1.65

So is the mean of the models including cloud feedback = 1.46°C * 1.65/0.96 = 2.51°C per doubling of CO2 ?

- I thought it was higher than that
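
One possible source of the discrepancy - a sketch, again assuming a Planck response of ~3.2 W m-2 °C-1 and F_2x ~ 3.7 W m-2 (neither value is stated above):

```python
# The usual linear-feedback form is ECS = F_2x / (planck - f_sum), which is
# hyperbolic in the feedback sum f_sum, not linear, so simple ratio scaling
# understates the effect of adding clouds. Assumed: planck ~ 3.2, F_2x ~ 3.7.
F_2x, planck = 3.7, 3.2
for label, f_sum in [("wv + lr            (0.96)", 0.96),
                     ("wv + lr + clouds   (1.65)", 1.65),
                     ("wv + lr + alb      (1.22)", 1.22),
                     ("wv + lr + alb + cl (1.91)", 1.91)]:
    print(f"{label}: ECS = {F_2x / (planck - f_sum):.2f} C")
# With albedo included, clouds take ~1.9 C to ~2.9 C -- much nearer the AR4
# model mean of 3.2 C than the linear scaling suggests.
```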

James Annan said...

Magnus, thanks for the link, I'll not be reading it for a few weeks though. Remember that the earlier ensemble of HadCM3 had a 90% sensitivity range of 2.4-5.4C, which was originally thought to be quite reasonable, but now looks rather high....

Alex, you are over-reaching quite a lot there on model problems. I don't think you should be too over-excited at the possibility of all models being too sensitive: it's possible, but seems unlikely to me and your ideas about how modellers behave seem completely spurious. If you think it is so easy to build a GCM with low sensitivity, why don't you build one yourself? BTW in all your feedback calculations (and PeteB too), what are you doing with the Planck response? I don't follow the numbers, but I'm not really paying attention.

Anonymous said...

Maybe someone here can clarify something for me (apologies if it has already been covered). I've been reading Nic Lewis's posts on WUWT. He seems to define the TCR and ECS as

TCR = F_2x \Delta T / \Delta F

ECS = F_2x \Delta T / (\Delta F - \Delta Q),

where F_2x is the change in forcing due to a doubling of CO_2 (I hope my notation makes sense).

It seems that Nic Lewis then uses measured values for \Delta T, \Delta Q and \Delta F. This seems correct for \Delta T, but seems wrong for \Delta F and \Delta Q. It seems that these should only be the contributions due to changes in CO_2 and not the full changes in forcing and ocean heating rate for the time interval considered. Otherwise \Delta F could equal F_2x before the CO_2 has doubled.

Have I understood this correctly or am I missing something?
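
To make my question concrete, here is how I read the formulas, with invented round numbers (not values from Otto et al. or from Nic's posts):

```python
# My reading of the energy-budget formulas, with purely illustrative numbers
# (dT, dF, dQ below are invented, not taken from any paper):
F_2x = 3.44   # W m-2 per doubling of CO2 (an assumed value)

def tcr(dT, dF):
    """TCR = F_2x * dT / dF"""
    return F_2x * dT / dF

def ecs(dT, dF, dQ):
    """ECS = F_2x * dT / (dF - dQ)"""
    return F_2x * dT / (dF - dQ)

dT, dF, dQ = 0.75, 1.95, 0.65   # K, W m-2, W m-2 -- illustrative only
print(f"TCR ~ {tcr(dT, dF):.2f} K, ECS ~ {ecs(dT, dF, dQ):.2f} K")
# Note dF and dQ are the *total* changes: the ratio dT/(dF - dQ) estimates
# the response per unit forcing, which is then rescaled to a doubling by
# F_2x, so dF reaching F_2x before CO2 has doubled may not in itself be a
# contradiction -- though I'd welcome correction on that reading.
```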

Alex Harvey said...

Pete B., James,

I am also confused.

I assumed the zero-dimensional linear feedback model, delta T = (G0 / (1 - (G0*F))) * delta Q, where G0 =~ 0.266 K W-1 m2, delta Q(2xCO2) =~ 3.7 W m-2, and F = 1.8 - 0.84 + 0.26 = 1.22 W m-2 K-1. Thus, delta T(2xCO2) = (0.266 / (1 - (0.266*1.22))) * 3.7 =~ 1.46 K / 2xCO2.

If I then include the cloud feedback, F =~ 1.8 - 0.84 + 0.26 + 0.69 = 1.91 W m-2 K-1, and delta T(2xCO2) is only 2 K / 2xCO2, which is certainly too low for the AR4 central value. If I include the errors then the upper bound is 1.8 + 0.18 - 0.84 + 0.26 + 0.26 + 0.08 + 0.69 + 0.38 = 2.81, and delta T(2xCO2) is 3.9 K / 2xCO2.

I guess in the GCMs the feedbacks don't really add linearly like this, but I suspect I am doing something else wrong. I also note that Forster and Gregory (2006) found a climate feedback parameter of 2.3 +/- 1.4 W m-2 K-1 - which appears to be much higher than these AR4 numbers.

Maybe someone can point out my error.
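
For reference, here is the arithmetic spelled out, so the wrong step is easy to point at:

```python
# Spelling out my own calculation, nothing new assumed:
G0, dQ = 0.266, 3.7   # K per (W m-2), and 2xCO2 forcing (W m-2)

def dT(F):
    # delta T = (G0 / (1 - G0*F)) * delta Q, feedback sum F in W m-2 K-1
    return G0 / (1.0 - G0 * F) * dQ

print(dT(1.8 - 0.84 + 0.26))          # ~1.46 K  (wv + lr + albedo)
print(dT(1.8 - 0.84 + 0.26 + 0.69))   # ~2.0 K   (plus clouds)
print(dT(2.81))                       # ~3.9 K   (upper-bound feedback sum)
# Note that G0 = 0.266 implies a Planck response of 1/0.266 ~ 3.76 W m-2 K-1.
```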

David Young said...

Alex, I think it is almost certainly true that modelers do not use ECS as a figure of merit to set model parameters. That said, they do use a host of others.

I have been told many times that models are not "tuned." Based on my experience with subgrid models, this cannot be true. Simple turbulence models are "tuned": the constants are set so that certain figures of merit are matched more or less exactly. These vary from analytic solutions in simple situations to well known experimentally determined relationships. Lucia recently found a paper in which modelers describe how they do the tuning. The problem is that there is an infinite set of figures of merit to choose from, and in a chaotic system it's clear to most people that it's simply impossible to match all of the important ones.

What may be happening is that to match the figures of merit, the cloud feedbacks have to add up to roughly a constant.

I think it likely that no one modeler understands in detail all the subgrid models used. In climate there are many of them. Aerosols, turbulence, convection, evaporation, etc. etc.

Turbulence modeling itself is a complex field and there is a relatively small group of experts who design the models. Most model consumers just use them without delving too deeply into the details. I suspect this is part of the problem here. The "communicators", with a few exceptions, probably have little idea what the details of the models are, so they must rely on glosses on the truth. One gloss that is simply not true is the doctrine that models convert an initial value problem into a boundary value problem. This is patently untrue: all climate and weather simulations are initial value problems. The climate models use time-varying boundary conditions, but that does not change the fundamental mathematics of the simulation. Our recent work shows that the idea that the "boundary value problem" is well posed, while the initial value problem is not, is simply false. It is a comforting doctrine that is believed by a lot of people and leads to all kinds of secondary errors in reasoning.

Another thing to bear in mind is that making the model more complex by including more "physics" can actually make it less accurate. One problem is that there are more parameters and relationships to tune. The relationships may not be constrained by actual data, so you end up guessing. You need to choose a level of modeling where things are well constrained by data. This fidelity dogma has been held out here by a vocal hanger-on whom I will not embarrass further.

So, it is indeed quite complicated. My feeling is that most model builders are rather honest on the science. That's true of Lacis, for example. He is a New Yorker, but we won't hold that against him. :-) Turbulence modelers are generally the most honest of the computational fluid dynamics experts, because they know all about the failures as well as the successes, and know how weak many of the constraints on their models are.

Alex Harvey said...

James,

I don't understand why climate scientists seem to get defensive when I say that scientists are just as biased as the rest of us.

In other fields - e.g. medicine, psychology, physics - scientists know they are biased and thus design experiments in ways that mitigate their own bias: blind and double-blind trials.

I also can't see how it can be denied that the model development process appears to us as ad hoc and mysterious. There is no documentation, so we can't go back and find out what decisions were made or why.

Thus, we have papers in the literature (like Dessler 2010, the Swanson 2013 paper mentioned above, and Huybers 2010 J.Clim) that look at the outputs of these models and reason back to the model development process. It has struck me as nothing short of bizarre that scientists are applying the scientific method to the models themselves to work out how they might have been built.

I'll bet no one here can tell me or find out why some of the GCMs have a negative LW cloud feedback - especially if 'simple theoretical arguments' suggest the LW feedback ought to be positive.

I don't see this as a conspiracy theory, and there is no accusation of 'misbehaviour'. But it does appear to be an unfortunate fact that we don't really understand these models that so much of climate science has put so much trust into.

Now who is going to build a model with a low sensitivity? Obviously these models weren't built by one person but by hundreds of mostly anonymous people. My guess is low sensitivity models will appear soon as a matter of course - they'll be needed simply to model the observed energy balance.

But one would hope that completely new models would be built from the ground up in such a way that every single decision is explained clearly so that the whole model can be properly scrutinised.

PeteB said...

Alex, James

Yes - I don't know how to get from table 1 in Soden and Held 2006 to climate sensitivity in deg C per doubling of CO2

http://www.gfdl.noaa.gov/bibliography/related_files/bjs0601.pdf

that has Planck, Lapse Rate, Water Vapour, Surface Albedo (which I think we have missed) and cloud feedback

If we wanted to estimate the sensitivity excluding cloud feedback, I guess it isn't as simple as adding just the bits that you want together - because any cloud feedback will cause extra water vapour / lapse rate feedback etc?

It was just Alex's original claim that models just including water vapour feedback was around 1.5 deg C and I thought that was too low

I can see my original calculation was rubbish - but I don't see how excluding a (mean) cloud feedback of 0.64 would reduce the mean sensitivity to 1.46. I guess this is partly because we have missed the surface albedo feedback

I'd actually be more interested in the sensitivity excluding cloud feedback (including water vapour, lapse rate and albedo)

Alex Harvey said...

Pete,

Oddly enough, my claim wasn't based on the calculation I showed above, or the section of the AR4 I quoted; I have believed/"known" this for a while, and now can't remember where I read it, or how/why I came to this conclusion.

I did include the albedo by the way - that's the 0.26 figure.

I *think* to get from Soden & Held's table you would use another formula, delta T = delta Q / lambda, where lambda is the 'effective climate sensitivity', and this appears to be the difference between the Planck feedback and the sum of the other feedbacks.

That leads to a different result though - assuming the Planck feedback is ~ 3.3 W m-2 then 3.7 / (3.3 - (1.8 - 0.84 + 0.26 + 0.69)) = 2.7 K / doubling CO2 (clouds included) or 3.7 / (3.3 - (1.8 - 0.84 + 0.26)) = 1.8 K / doubling CO2 (clouds excluded).

I also think this is why Forster and Gregory's feedback parameter is larger, i.e. it includes the Planck feedback.

Anyway it would be nice if an expert can shed some light onto this. :-)

Alex Harvey said...

David,

To reiterate the most important point: I have no doubt that everyone involved is meticulously honest, and no intention of implying otherwise.

So:
> Alex, I think it is almost certainly true that modelers do not use ECS as a figure of merit to set model parameters.

I don't know, let me quote Huybers, 2010, J.Clim:

As another example, of the 414 stable model versions Stainforth et al. (2005) analyzed, six versions yielded a negative climate sensitivity. Those six versions were apparently subjected to greater scrutiny and were excluded because of nonphysical interactions between the model’s mixed layer ocean and tropical clouds. Scrutinizing models that fall outside of an expected range of behavior, while reasonable from a model development perspective, makes them less likely to be included in an ensemble of results and, therefore, is apt to limit the spread of a model ensemble. In this sense, the covariance between the CMIP3 model feedbacks may be symptomatic of the uneven treatment of outlying model results.

So there is a literature about this.

David Young said...

Yes, but you notice the negative sensitivity models were ruled out based on nonphysical interactions. You don't need to consider a model that shows hurricanes in Antarctica. That's the only rational or scientific way to choose models and their parameters: by comparison with actual data.

BBD said...

David Young

That's the only rational or scientific way to choose models and their parameters, by comparison with actual data.

And I agree wholeheartedly.

If the LGM and the Holocene are about 4.5C and 6W/m^2 apart, then models with an emergent sensitivity of ~3C ECS/2xCO2 appear to be doing a reasonable job of simulating the climate system.
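
The implied number, assuming F_2x = 3.7 W/m^2 (my assumption, not stated above):

```python
# Rough implied equilibrium sensitivity from the LGM numbers above, ignoring
# the usual caveats about LGM/2xCO2 state-dependence discussed elsewhere in
# this thread. F_2x = 3.7 W m-2 is an assumed value.
dT_lgm, dF_lgm, F_2x = 4.5, 6.0, 3.7
print(f"ECS ~ {dT_lgm / dF_lgm * F_2x:.1f} C per doubling")   # ~2.8 C
```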

Magnus said...

So is it just me who is a bit confused by the silence over swapping distributions without discussing what it changes?

David Young said...

Yeah, LGM estimates vary a lot. James and Jules come up with a central value of 1.7C for the "linear" sensitivity. I know there are lots of higher ones too. What was aerosol forcing 24,000 years ago? We don't know what it is at present. You saw some of that on this blog earlier, where James was being challenged on their temperature proxies, if I remember correctly.

I would want to use modern data for calibrating models just because climate proxies are pretty controversial, and I believe the error bars and unknown unknowns must get bigger the further back you go.

Modern aircraft test data is a lot more reliable than the data from 50 years ago. Measurement technology has gotten a lot better since then. I don't think it would fly to use data from a B17 to argue for a turbulence model.

It is also true that even if a GCM had the "right" sensitivity (assuming we can define that number exactly), that does NOT mean that anything else about the GCM is right. I actually think the modelers' claim about setting things where possible based on first principles and data is right. Sensitivity is a fall-out of these choices. But I believe Browning that the GCMs have way too much dissipation and probably damp the real dynamics quite strongly. One can imagine a situation where this leads to too high a temperature increase, but I would need some help from Gerry to see if that's the expected outcome.

Alex Harvey said...

David,

As Huybers said, it is correct to exclude a model that is unphysical, but on the other hand the list of things that models can't simulate properly is very long. In the AR5 SOD we've got: problems simulating large-scale precipitation; regional climate change; concerns about ENSO, NAO, QBO; and they still can't do the Madden-Julian Oscillation.

And of course we know that they still can't really simulate clouds or aerosols.

So if you wanted to exclude any model for being unphysical it wouldn't be hard to find a justification to do so.

The paper by Swanson is also very interesting and seems to provide more evidence that this sort of filtering is occurring based on the modellers' subjective expectations or desires, rather than only on the purely objective considerations of the underlying physics:

Swanson 2013 GRL

[1] Climate change simulations are the output of enormously complicated models containing resolved and parameterized physical processes ranging in scale from microns to the size of the Earth itself. Given this complexity, the application of subjective criteria in model development is inevitable. Here we show one danger of the use of such criteria in the construction of these simulations, namely the apparent emergence of a selection bias between generations of these simulations. Earlier generation ensembles of model simulations are shown to possess sufficient diversity to capture recent observed shifts in both the mean surface air temperature as well as the frequency of extreme monthly mean temperature events due to climate warming. However, current generation ensembles of model simulations are statistically inconsistent with these observed shifts, despite a marked reduction in the spread among ensemble members that by itself suggests convergence towards some common solution. This convergence indicates the possibility of a selection bias based upon warming rate. It is hypothesized that this bias is driven by the desire to more accurately capture the observed recent acceleration of warming in the Arctic and corresponding decline in Arctic sea ice. However, this convergence is difficult to justify given the significant and widening discrepancy between the modeled and observed warming rates outside of the Arctic.

David Young said...

Alex, don't worry, in my view it's a miracle climate models are not worse than they are. Using them for ECS determination is not convincing to me. I have found some evidence that in fact when Hansen started pushing weather models with very coarse grids, and the attendant massive unphysical numerical dissipation, for climate, some of his reviewers said much the same thing Browning and I have been saying. Despite being very persistent, I have found no real scientific or mathematical reason why this should work. "The results look reasonable" is not a scientific assertion, so it's impossible to test. "It's an attractor" is really a tautology.

Swanson may be right that there is an undue convergence of models that makes their range of results too small. Slingo and Palmer pointed out a similar problem in Transactions of the Royal Society. It's one of Chief's favorite references and it's very good on this. It is in fact impossible to disentangle these ranges of choices, and there are so many of them that establishing cause and effect between modelers' motivations and results is virtually impossible, I would say. As always with an impossibly complex system (models plus modelers), one can only observe the results, and those are not very good in my opinion.

Another possibility is that as more "physics" is included in models, they become less accurate. I explained above how this can happen.

PeteB said...

Alex - excellent thanks

I tried copying and pasting the table from appendix 1 into Excel and (for the 12 models that show an effective sensitivity) I got

Everything including cloud feedback
(in ascending order)
2.27
2.27
2.40
2.47
2.53
2.70
3.14
3.16
3.81
3.81
4.07
4.20
mean 3.07

Exclude Cloud Feedback
1.62
1.70
1.80
1.80
1.82
1.85
1.89
2.00
2.04
2.09
2.13
2.15
Mean 1.91

To be honest I didn't realise cloud feedback made that big a difference - so thanks !

PeteB said...

not appendix 1, I mean table 1

Alex Harvey said...

Pete,

Later in the AR4 Chapter 8 -

Using feedback parameters from Figure 8.14, it can be estimated that in the presence of water vapour, lapse rate and surface albedo feedbacks, but in the absence of cloud feedbacks, current GCMs would predict a climate sensitivity (±1 standard deviation) of roughly 1.9°C ± 0.15°C (ignoring spread from radiative forcing differences). The mean and standard deviation of climate sensitivity estimates derived from current GCMs are larger (3.2°C ± 0.7°C) essentially because the GCMs all predict a positive cloud feedback (Figure 8.14) but strongly disagree on its magnitude.

Alex Harvey said...

I would be most grateful if someone can tell me why I get a significantly lower climate sensitivity using Soden & Held's values for wv/lr + clouds + albedo in

1) delta T = G0 * (delta Q + F*delta T)

where the Planck response is implicit in G0 =~ 0.266 K W-1 m2 than I do using another formula

2) lambda = delta Q / delta T

where the Planck response is included in lambda.

I suppose there is something fundamental that I don't understand or am doing wrong, but I thought both methods should yield approximately the same result.
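
One tentative thought on my own question: the two formulas are algebraically identical only when G0 is the reciprocal of the Planck response used in the lambda form, and 0.266 is the reciprocal of ~3.76, not of 3.3. A sketch:

```python
# Formula 1 rearranges to dT = G0*dQ / (1 - G0*F), and this equals
# formula 2, dT = dQ / (planck - F), exactly when G0 = 1/planck.
# Mixing G0 = 0.266 (i.e. planck ~ 3.76) with planck = 3.3 (i.e.
# G0 ~ 0.303) would explain the gap:
dQ, F = 3.7, 1.91   # 2xCO2 forcing; feedback sum incl. clouds (W m-2 K-1)
for planck in (3.76, 3.3):
    G0 = 1.0 / planck
    form1 = G0 * dQ / (1.0 - G0 * F)
    form2 = dQ / (planck - F)
    print(f"planck = {planck}: form 1 = {form1:.2f} K, form 2 = {form2:.2f} K")
# With a consistent Planck value the two methods agree to rounding.
```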

Magnus said...

Missed the 100th comment; still, this is interesting:
http://www.businessspectator.com.au/article/2013/5/27/science-environment/uncertainty-no-excuse-procrastinating-climate-change#ixzz2UcFWjwIi

David Young said...

James, This has been an interesting discussion. I also want to express my heartfelt thanks to Carl C. for falling into the usual pattern of hangers on and proving my point about the nasty nature of the climate debate far better than I could have done without his help. ;-)

I'm still very interested in technical defenses of GCM's and really would like to know if I am missing something.

Carl C said...

???? as I pointed out - the "nasty nature of climate debate" has been primarily from the denialist/skeptic camp. Just look up Ken Cuccinelli et al. Your concern trolling over GCM's is laughable. There is plenty of literature out there if you really wanted to learn about climate modelling and their history - no need to just troll on a handful of blogs....

David Young said...

Name calling again. See anthropogenic data point thread for references to the literature. Probably above your pay grade, but you could try.

Carl C said...

Name calling is appropriate considering your ignorance on this topic - you are just reiterating the same old discredited septic nonsense ie "I can't believe climate models work" etc. You obviously never looked at any of the physics in the models, from any of the groups etc. You're a typical hick who fancies themselves an expert just by hanging around a blog and acting "concerned for science". Absolutely laughable.....amongst the dumbest white-trash morons that America offers no doubt....

David Young said...

Self parody is your strong suit!! If you can't read the previous thread, I can't help you. James may moderate this soon, hopefully. You have the first prize here for nasty insults.

Carl C said...

Again - your strawman arguments are a joke. To wit - you have no evidence that any scientist on the Otto et al paper has been personally slammed by other climate scientists (for "downgrading" sensitivity or whatever). Yet you still persist in this bogus meme that there is "nasty backlash" over this paper. You just make stuff up and believe it - like a good little Fox News viewer.....

David Young said...

Literal-minded, aren't we? Switching topics? Your comments here are what I fear other scientists are subjected to. Are you unable to see that?

Carl C said...

Science should be "literal minded" -- it's obvious you're not up to even that basic standard. Your bogus "fear" for "climate scientists" & other concern trolling is mildly amusing at best....

David Young said...

Prof. Emanuel took it very seriously. I really need add nothing else to your exceptional demonstration of your gift for self-parody. A bronze medal for ethical behavior is in order, right behind leading politicians. You can see some mild forms of it here on previous threads concerning Nic Lewis and even Jules. Did you miss that bit?

Carl C said...
This comment has been removed by the author.
David Young said...

We have a number of papers in the "refereed" literature on this subject. See the previous thread. You are an outsider, so you have no idea what I've been saying on a technical level. I would suggest Google Scholar or the previous thread. I assume you can read, but perhaps I am mistaken. Papers accepted but in press are available to responsible scientists. You will have to wait.

Carl C said...

Hilarious - both your imaginary papers & imaginary credentials. Oh sure they're "in press" now. You sound (again) like the typical hick who gets all his information from Rupert Murdoch...

David Young said...

You could start with

Math. Model. Nat. Phenom., Vol 6, No 3, 2011, pp. 2-27.

AIAA paper 2013-663.

You may find that your libelous comments are not just par for the climate science course but manifestly false.

Carl C said...

haha libelous? some old fart from Boeing comes here pontificating about climate models, as if his third-rate background in CFD is of any significance? hell, MetOffice models have a longer documented and tested history & background than any Fortran 66 you're still running....

Steve Bloom said...

Looked at the two abstracts. Sorry, David, I just don't see how fluid dynamics at that scale is going to have any relevance to GCMs. As previously noted, if you can't convince the Proprietors, you're highly unlikely to be able to convince anyone else in the modeling community.

The models have much larger problems, although I suppose your particular expertise isn't especially relevant to those.


Carl C said...

Dave Young is the typical old geezer who feels irrelevant yet pompous enough to throw his imaginary weight around in fields he doesn't fully understand - you can see this phenom with angry old men like Pielke & Lindzen (although they're much closer to the actual fields of climate of course).....

Alex Harvey said...

Carl C, Pielke Sr. has argued since at least 2006 or so that the climate sensitivity is being exaggerated by the IPCC, based on the observed energy balance. If the IPCC lead authors have now admitted the same, for approximately the same reasons, but six or seven years late, it's extraordinary to me that you'd smear him as an 'angry old man'. If he's an 'angry old man', what are these recalcitrant IPCC lead authors?

Carl C said...

Trying to spin Pielke Sr's work/ravings as if it aligns/predates the current stuff in the Otto paper is pretty weak. Especially as Pielke uses climate models which of course you'd disregard right out of hand with the typical, tedious "argument of incredulity".....

David Young said...

Steve, I gave a rationale for relevance on the previous thread. Navier-Stokes is a subset of the weather/climate problem. Gerry Browning is making some of the same points and he's worked in climate for 30 years. He's rigorous and trustworthy and smart.

Alex Harvey said...

Carl C,

I think I've got your argument.

Pielke has used climate models
Pielke also criticises climate models,
See, he's senile!

I think it has been as obvious as day since at least

Kiehl, J. T. (2007), Twentieth century climate model response and climate
sensitivity, Geophys. Res. Lett., 34, L22710, doi:10.1029/2007GL031383
http://www.atmos.washington.edu/2008Q2/591A/Articles/Kiehl_2007GL031383.pdf

that climate models are tuned to reproduce the 20th century warming, and thus the entire IPCC argument that 'without CO2 forcing, the 20th century warming cannot be explained' is bunk.

I have been amazed for a long time that Kiehl's paper didn't settle the matter there and then.

I think everyone has known that the models are not reliable for either 20th century warming, or sensitivity, and I think scientists have hoped for a long time that a sudden temperature rise would save them from having to admit this.

Of course, Kiehl's paper shows that most of the models are certainly wrong, so I guess the hope was for a 'wrong method, right answer' type of defence.

So Trenberth has been searching for missing heat in the deep oceans; Hansen searching for it in implausible theories about volcanic aerosols. Only skeptics have consistently argued that the data should just be taken at face value.

Thus, Pielke was right. Everything he said was based on the up-to-date data at the time, and as someone who has followed his points closely, I have never seen anyone actually discuss the matter with him sensibly. All responses have been to ignore him, to call him a denier, to ask why he publishes stuff on his blog, to ask why he published stuff in low-impact journals, and most of all, to appeal always to the uncertainty in the data.

One of these authors, Reto Knutti, is also in the peer reviewed literature in 2008, asserting the same - that it's pretty likely that climate sensitivity has been exaggerated. These guys have known this for a long time.

All that seems to have changed, in Otto et al., I think, is that it's just getting too embarrassing now for them to continue to deny what data says at face value. Of course, I suspect they'd still be denying this data, were it not for the fact that too many of their colleagues have come out and agreed essentially with the lukewarmers.

This data says what it says and has said it for a long time now.

David Young said...

Alex, there are two options with people like Carl C, who are unfortunately pretty common in this debate. The first is to ignore them (wrestling with a hog will get you very muddy and make the hog frothing mad). The second is to put on your climbing coveralls, boots and crampons and hope for a teachable moment.

You noted, I'm sure, all the political and cultural prejudice that is common in politics but well moderated in many fields of science. "Smear" is accurate, as is "libel" when it's about professional credentials. Sad and dishonest.

Carl C said...

Alex, simply post three references to Pielke Sr papers that have said years ago that climate sensitivity has been overstated based on energy balance calcs. Why do you bring up and run with a straw man Kiehl paper? It sure sounds like you're running from your thesis that Pielke Sr is some friggin' Cassandra....

Carl C said...

and if you want to talk about politics & science & public policy --- consider all the scrutiny from various Oxbridge twat "auditors" like McIntyre & Monckton & Nic Lewis on climate issues --- yet how conservative cause-celebres academic piffle such as the "Laffer curve" and "Reinhart-Rogoff austerity" pontifications go straight into public policy & conservative agendas etc.

David Young said...

These mathematical issues take a long time to come to fruition. Alex is right I think that model performance is not looking very defensible. That may cause some honest people to start asking why. That's how we started down this road on CFD and found the literature to be less than objective. This is not a matter that is of anything other than scientific interest to me so I have confidence the truth will come out eventually. Collaborations to "hide the decline" never last forever.

Steve, we do have another paper that will come out soon on your point about "missing physics." James has seen it. It makes the case that simpler models constrained by good data can be better than those that have "more physics."

In any case, in the absence of good numerics, calibrating subgrid models can turn into a hopeless exercise.

PeteB said...

Carl C,

You have loads of papers in this area - what is your current thinking?

Would you now agree with James that the chance of a sensitivity greater than 4 or 4.5 is very low? Or do you think that the "long tail" has a high probability?
http://julesandjames.blogspot.co.uk/2006/12/inconvenient-truth.html

What about James' 'best guess' of 2.5 deg C per doubling?

Alex Harvey said...

Carl C,

You only need to type 'Pielke ocean heat' into Google to see how many posts he has written on the observed energy balance and its inconsistency with the high climate sensitivity claims. He also has peer reviewed papers on this.

What I want to know is, since you obviously don't know anything at all about Pielke or his views, what makes you feel it's okay to be making personal attacks and false claims about his work in public forums?

PeteB said...

Just reread the thread that I linked to earlier from 2006 and noted the following comment

2) Yes, although I would personally choose 2.5C as my best estimate - the gentle upward creep in recent years seems to be based on sociological factors more than scientific.

Also, do you and Jules count as real climatologists yet :-)

David Young said...

One of Browning's main points, which is very conclusively supported, is that the continuum equation being solved (the hydrostatic approximation, if my memory is accurate) is ill posed, and thus simulations blow up, i.e., numbers go to infinity in finite time. This is "fixed" using nonphysical viscosity and even hyperviscosity. The well known result of this is damping of the dynamics: a lot of the frequency content is simply removed over time by this unphysical viscosity. So models cannot predict regional climate, for example. I personally think Gerry is almost certainly right about this.

By the way, grid refinement studies would not fix this problem. The infinite-grid answer is not a solution to the continuum PDE being solved.

David Young said...

Steve, I'm wondering what you mean by "fluid dynamics at this scale." Are you talking about the Reynolds number? Turbulence is important in the atmosphere, and the same modeling techniques are used there as we use.

Carl C said...

Well I asked for a list of Pielke's published papers in this area, and now you just refer me to googling his old blog posts? If it were that important to him couldn't he have published a paper on it, maybe with his greatest cheerleader, "Junior" (and can you really trust a guy who agrees with his dad so much ;-)?

It just seems too convenient IMHO to shovel off everything catastrophic into the deep ocean now e.g. heat uptakes & tipping points etc all "covered" by cramming it into the deep blue sea. I've seen vogues & fads in this field for over 10 years now - but it seems that the fundamentals of climate sensitivity are still there e.g. 2.5-3C rise being likely/very likely. If Myles Allen & James Annan are agreeing on a 2.5C sensitivity (and they've hardly agreed on much before ;-) --- then that's a hell of a consensus for me.

And we're still screwed aren't we? Because ultimately James Inhofe & Rand Paul & Sarah Palin et al will think Allen & Annan are both just Euro-commies trying to bring down America etc etc.....

Carrick said...

Doing my best to avoid stepping in the ____ <- fill in blank that Carl has left lying on the floor...

It has always been interesting to me how many people in the modeling community do not appreciate the consequences of too coarse a quantization of the model. I don't really have a good explanation for it, so this is just an observation.

That said, Isaac Held certainly is aware.

For people that are, shall we say a bit "green" to the publication field, that might be a place to start... unless this catapults Isaac into the land of "deniers" now. /facepalm

Carrick said...

When I say "modeling community" I don't mean global circulation modeling community. I've limited experience talking to climate modelers... speaking from my own areas of expertise in physics & acoustics.

David Young said...
This comment has been removed by the author.
Carl C said...

*sigh* I'll try to make it simpler for you dopes. Climate modellers are not trying to model via CFD as you do on your sewer pipe studies in Mississippi or your crashing plane studies in Seattle (ie specific, small time & dimensional scales). What would the friggin' Reynolds number of engineering mean at even a high resolution .1 degree climate model?

Climate models are basically weather models run for long periods of time to glean stats on temperature, precipitation etc. They are not trying to predict exact weather patterns, but they use the same/similar models. And these models nowadays are doing pretty good jobs. I mean, when you hear a weather forecast on Faux News of "90% rain tomorrow" -- do you similarly screech BS about how it's not modelling the resolution you want etc?

David Young said...
This comment has been removed by the author.
David Young said...

Carrick, thanks. What little I've read of Held seems excellent. As to why there isn't more interest in these issues, I have a hypothesis. In a highly competitive field there is constant pressure to get the "right" answer from your code. This has both professional and monetary sides to it. Since the codes are complicated and have many knobs, there are a large number of witches to be hunted in case of a negative result. We make the case that this situation has led to a lack of progress and also to incorrect heuristics for interpreting model results. It is impossible to come up with a predictive subgrid model if there is no clear understanding of numerical convergence issues. You already know this, I suspect.

As to the ____ on the floor, my apologies for using he who must not be named as a teaching foil. He illustrates my point perfectly.

David Young said...

James, is this thread still open for comments?

Tom C said...

My, my, Carl C is quite the class act. Do I understand correctly that he is an actual climate modeler? Ironic that Lewandowsky et al. are publishing papers about how skeptics are paranoid conspiracy nuts, while every other post from Carl mentions the Koch brothers, Sarah Palin, and every other mental tic of the Left. Maybe Carl should refrain from insults re religion and politics and hike over to Roy Spencer's latest post. Seems to me there is something that must be dealt with honestly at this point, and vulgar bluster will not help.

David Young said...

James and Steve Bloom, There is a new paper in Science that seems to confirm our results on complexity of models.

Rather than reducing biases stemming from an inadequate representation of basic processes, additional complexity has multiplied the ways in which these biases introduce uncertainties in climate simulations. – Bjorn Stevens and Sandrine Bony

James Annan said...

Back from our extended break, will resume usual service shortly - or at least, some sort of service :-)

But I think this thread has pretty much run its course anyway...

David Young said...

I'm going to perform a test here to see if Carl is anything more than an Al Gore bot. In any discrete realization of the Navier-Stokes equations or the hydrostatic approximation (which seems to rule out accurate simulation of cumulus convection) there is a numerical viscosity associated with the grid size. Now, my research indicates that the Reynolds number for the atmosphere is quite large at the planetary scales and not small at smaller scales. If the grid doesn't resolve the viscous scales of the real viscosity at the planetary scales, there is a very large nonphysical dissipation that will damp the dynamics. This is probably what Browning refers to in his papers as "unphysical dissipation". There is apparently even a "hyperviscosity" that is larger than the discrete viscosity. So, given this large artificial dissipation, why would you expect dynamics at any scale to be resolved?

In terms of the "scale" of the dynamics, this is just a fiction Carl and Bloom have come up with. The dynamics is the same at large scales as at small scales: it involves vortex evolution and dissipation. If Carl has ever gotten on one of those "crashing" airplanes, he can see this vortex dynamics at take-off if the humidity is very near the saturation point. There are shear layers and boundary layers too in climate. They must be unresolved, and so must be included, if at all, through subgrid models. The "scale" issue seems to me to be an unscientific gloss that can only be rooted in a schoolyard bullying tactic: "My problem is harder than your problem." Perhaps acceptable in the Al Gore world of political smears and untruths, but not a very scientific statement, unless of course there might be some actual substance behind the bluster.
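
To make the dissipation point concrete, here is a toy calculation - plain first-order upwind advection, which is not any GCM's actual scheme, just the simplest illustration of grid-dependent numerical viscosity:

```python
import numpy as np

# First-order upwind advection of a sine wave around a periodic domain.
# The scheme's truncation error acts like a viscosity of order
# u*dx*(1-CFL)/2, so the wave is damped even though the PDE itself has no
# dissipation at all -- and the damping grows as the grid coarsens.
L, u, cfl, t_end = 1.0, 1.0, 0.5, 1.0
for n in (50, 200, 800):                     # grid resolutions
    dx = L / n
    dt = cfl * dx / u
    x = np.arange(n) * dx
    q = np.sin(2 * np.pi * x)                # unit-amplitude initial wave
    for _ in range(round(t_end / dt)):
        q = q - cfl * (q - np.roll(q, 1))    # upwind update, periodic BCs
    print(f"n = {n:4d}: amplitude after one transit = {abs(q).max():.3f}")
# The amplitude loss is purely numerical and vanishes only as dx -> 0.
```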

Carrick said...

Tom C: "Do I understand correctly that he is an actual climate modeler"

He says he is an IT specialist from PA, so that doesn't seem likely.

He does seem to be an expert on paranoia.

Resolution matters in weather models too.

Duh. ECMWF for the win.

I admit my knowledge of sewer-pipes is limited to what Hagen–Poiseuille tells me. Perhaps Carl is confused on that issue too. This is my newest research tool (went up yesterday), and does involve a pipe at least.

David Young said...

Just did a quick scan of the documentation of the latest NCAR community climate model. I was able to verify that, yes, the leapfrog time-marching scheme is still used, despite Paul Williams having demonstrated that the filter used is vastly too dissipative. There is a nonphysical horizontal diffusion added, so regardless of grid resolution, there is Browning's nonphysical diffusion. The real diffusion is so small I suspect it will never be resolvable with these methods. The spatial method is a spectral method. Forget about resolving sharp fronts, and of course there is the usual eddy viscosity for the boundary layer, with the usual hand waving as to why it isn't totally ad hoc.
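
For anyone who wants to see the filter effect for themselves, here is a toy version of Williams' test problem. The nu and alpha values are typical textbook choices, not numbers taken from the NCAR documentation:

```python
import numpy as np

# Filtered leapfrog on the oscillation test equation dx/dt = i*omega*x.
# The exact solution keeps amplitude 1 forever, so any amplitude loss
# below is dissipation introduced purely by the time filter.
omega, dt, nsteps, nu = 1.0, 0.2, 1000, 0.2

def filtered_leapfrog(alpha):
    """alpha = 1.0 gives the classic Robert-Asselin filter;
    alpha ~ 0.53 gives Williams' RAW modification."""
    x_prev = 1.0 + 0.0j                    # exact initial condition
    x_curr = np.exp(1j * omega * dt)       # exact first step
    for _ in range(nsteps):
        x_next = x_prev + 2 * dt * 1j * omega * x_curr   # leapfrog step
        d = 0.5 * nu * (x_prev - 2 * x_curr + x_next)    # filter displacement
        x_prev = x_curr + alpha * d        # filter the middle time level
        x_curr = x_next + (alpha - 1) * d  # RAW also nudges the newest level
    return abs(x_curr)

print(f"RA  (alpha=1.00): amplitude after {nsteps} steps = {filtered_leapfrog(1.0):.3f}")
print(f"RAW (alpha=0.53): amplitude after {nsteps} steps = {filtered_leapfrog(0.53):.3f}")
# The RA run loses a large fraction of its amplitude; the RAW run stays
# close to 1 -- which is Williams' point about excess dissipation.
```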

KarSteN said...

As a late update to my comment on 25/5/13 12:01 pm, let me add a few things in response to a quite lengthy chat I had with Alex Otto last week.

First, I was wrong in assuming that their method would neglect the volcanic OHC imprint. Eq. 1 in their paper explicitly takes the volcanic forcing into account by means of a reduced decadal forcing. The only missing component is the long-term impact it will have on temperature in the future. This might otherwise well be balanced by the long-term impact of past eruptions (before the time interval in consideration). Assuming it is balanced, one shouldn't assume the decadal heat uptake in the reference period (1860-1879 in their case) to be positive (0.08 W/m2). It's quite a stretch, for which they didn't have a good reference (or justification for that matter). Assuming zero forcing in the reference period, I would rather set the heat content uptake to zero as well. Only GCM results can provide more reliable results. One would have to determine the fraction of the natural OHC at the beginning and at the end of the time interval in question, starting the simulation at least 500 years before. The difference would then have to be considered in the forcing estimate.

Second, the aerosol forcing was deliberately chosen such that it fits the lower AR5 estimate. However, the references are simply wrong. This should have been picked up, but the whole thing had to be done in quite a hurry. It was meant to be a confirmation exercise, aiming at demonstrating that the current forcing assumptions in AR5 are still consistent with the previous estimates. The list of co-authors does therefore comprise all AR5 lead authors. For aerosols, only Drew Shindell contributed. So Drew might not have double-checked what the final ACP version of ref 19 (Bellouin et al. 2013) actually says. I agree with Paul S that half their aerosol forcing reduction (adding 0.15 instead of 0.3 W/m2) would have been a smarter choice. Given that they performed the test for both results, ECS would be 2.2, everything else being equal.

Third, as OHC is the dominating factor, the results are crucially dependent on the choice of the data set. On top of that, the ECS estimate for each decade varies widely in both directions due to internal variability or delayed responses. Four decades is therefore none too long an interval for making reliable statements. Keeping in mind that the evolution of the anthropogenic forcing over the last century also remains afflicted with uncertainties (GISS vs CMIP5 vs Skeie et al. 2011), one is left with a variety of options, covering the entire range of accepted forcing estimates. I personally think that Skeie et al. 2011 come closest to the truth. A crude estimate of their forcing (omitting O3 and land use changes), combined with the NODC/Levitus et al. 2012 (L12) and the most recent ORAS4/Balmaseda et al. 2013 (B13) decadal OHC trends, yields the following result: http://www.karstenhaustein.com/Dateien/Climatedata/R/Forcing_dOHC.png
It is a 10 yr trailing average for all forcing equivalents. It seems that at least the average of L12 and B13 over the last 50 years is not too far off. In any case, B13 looks more plausible to me than L12. I remain unconvinced that you can deduce a reliable ECS estimate with only those five decades of data.

Finally, two more aspects which people should keep in mind. GISS temperature will produce slightly higher estimates than HadCRUT4 used in this study (GISS does not provide data for their reference period). Despite the fact that they call it ECS, it is actually the Charney sensitivity since non-linear feedbacks are excluded. Therefore their TCR/ECS ratio is lower than in most GCMs. They should have made this point clearer, as all GCMs will ultimately produce non-linear feedbacks which tend to increase ECS.

Hope that helps ...

David Young said...

KarSteN, what do you make of the response to the Pinatubo eruption? Ed Hawkins' graph appears to show that GCMs badly overestimate the response, and Ed says that is true and is the subject of continuing research.

While it is true that GCMs include "nonlinear" responses, is there really any reason to suppose that those responses will be trustworthy, given the huge levels of nonphysical dissipation in the models' realization of the Navier-Stokes equations? See my previous comment.

KarSteN said...

@David Young

Ed merely agrees that "models show a larger effect to Pinatubo than the observations". There's plenty of literature available on that subject, which he might not have been fully aware of at that particular moment. In fact, many individual models get the response right, and if you correct for ENSO, the observations even match the (CMIP5) ensemble: CMIP5 vs GISS
Note that the ensemble does not suppress the magnitude of the response, because the responses are synchronized. Apart from that, it shouldn't come as a surprise that some models show a stronger temperature response, as they have higher sensitivities than others.

Re model reliability: it's because we have people like Paul Williams that I wouldn't bet a penny on the vanishingly small chance that all models are wrong. They are trustworthy within the very well known limits intrinsic to each and every existing model. That's why it's called a model. Thousands of people are working on improving and understanding them better. If you don't trust the modellers, then you had better never get on an airplane ...

David Young said...

KarSteN,

I guess it all depends on what the meaning of the word 'match' is. Opinions will vary.

What you say about models is not convincing to me. Paul Williams is doing good work, but he has already uncovered some rather bad examples of excessive dissipation that do affect skill. He has a fix for one of them, the notorious leapfrog scheme, which 32 years ago I was taught is not a good method. He seems to be having trouble getting the modelers' attention. I just hope he is continuing to get funding, hopefully a lot of it. A lot of the money goes into just "running" models, which is not going to result in any breakthroughs.

Modelers, I have found, are usually honest, at least when they are not in "grant getting" mode. The models and the "communicators" are another story. Whether the models are "wrong" is not the issue. The issue is: can they pass normal numerical tests, and do they predict actual data before the data is known? I've gone into the problems in great detail here on a couple of threads, including the Anthropogenic Data Point thread and this one. I'd welcome any real responses. So far it's just the argument from expended effort (we've invested tens of thousands of man years in this so it can't be all wrong) or the honest Abe argument (you can't believe all the modelers are dishonest, can you?). My main concern is just the artificial dissipation issue, because everyone in this field has seen the documentation on how serious an issue it is.

As to the airplane analogy, there are 2 things to be said.
1. The first paper I referenced earlier on this thread says it clearly. For attached flows, models are pretty good. Of course simple models are also pretty good and cost a lot less to run (paper in press). For separated flow, there are a lot of issues and testing is still critical and required by the FAA as it should be.
2. You must be extremely careful about the literature on this subject. A lot of it is "colorful fluid dynamics" and not meaningful at a quantitative level. Also, there is a pretty strong positive-results bias, just as in medicine, for all the same reasons. The RANS models are not as good as one might be led to believe. Of course climate models use a very dissipative form of the RANS equations. One thing is clear, however: adding explicit nonphysical dissipation requires great care and usually destroys the accuracy of the simulation by damping the dynamics. I've gone into the "dogma of the attractor" elsewhere, viz., that short term errors don't matter because we get "sucked into the attractor". It is just hand waving.

A more appropriate challenge for you is: would you get on an airplane designed with models before it was flight tested? If so, you are naive about the issues exposed in flight test on every airplane program. They are all fixed, of course, and air travel is safer than virtually any other human activity, which is a track record to be proud of. Most of this is due to very careful testing and continuing in-service monitoring.

KarSteN said...

Speaking of appropriate challenges: would you get on an airplane which is bound to fly through turbulence so severe that no one could ever safely say it would withstand it damage-free?

As an aside: I have yet to meet a colleague who is naive enough to blindly believe what the models are telling him. And believe it or not, I happen to know a few brilliant colleagues (apart from Paul) who devote their entire scientific careers to challenging our models. Yet they are smart enough to realize that these very models are not completely useless.

If you want to discuss details of the model dynamics, do it with those experts. I'm happy to put you in contact with them ...

David Young said...

Yes, there are some good papers on the limits of the GCMs. But I would claim that in general, and in particular for the IPCC, there is a pretty pervasive positive-results bias. Of course GCMs are not completely useless, but I do have to wonder if appropriately constrained simpler models might be better. I have still not seen any substantive response to my challenges, however. Gerry Browning has been systematically ignored. Perhaps climate modelers don't read blogs, or are too busy to respond. I have made some of them aware of this work. The response has been superficial. Paul Williams said basically that he was focused on getting modelers to pay attention. And that's the problem: Paul has some very good work that should be very interesting to them.

At a recent NASA workshop on these issues, a turbulence modeler said that our current methods are "post-dictive" and not really predictive. But the users of these turbulence models (who build and maintain the codes) mentioned no such thing; they presented their usual positive results and passed over the negative ones in silence.

I really am interested in substantive debate on this. Can you suggest a forum for such a discussion or some people to contact? Please, don't give the names of communicators (such as Real Climate) or those politically invested in this (you know who I mean) or those who censor comments that disagree but allow the most vile insults and libels to stand. I don't need to deal with Carl C and his constant political insults and bigotry. Unless you are very dishonest, you know that this is a real problem.

Airplanes are designed to withstand any turbulence so far encountered, and there is a 50% safety factor: the airframe must withstand 150% of the maximum design load. Your challenge is not very interesting. You, I am sure, fly all the time and don't give it a second thought. Preference is given to observationally based models and actual data over Navier-Stokes models, especially for design loads, for which the models tend to be pretty poor.

Hank Roberts said...

> Airplanes are designed to withstand
> any turbulence so far encountered

Since when?

http://link.springer.com/article/10.1007/s00703-004-0080-0

David Young said...

OK, maybe not able to withstand "any turbulence so far encountered", but let's say 99.999% of all turbulence in the atmosphere. I can't tell without buying the article whether any of these accidents were due to structural failure (which is what I was talking about) or to pilot error or failure of other systems. You know commercial aviation is safer than virtually any other human activity. It's a remarkable achievement. I've actually been in severe turbulence on a commercial flight, and while it was unsettling, everything worked fine and the pilot found another altitude. I've also been on a flight that was struck by lightning. Once again, not exactly fun, but everything worked. Apparently, the hot spot for severe clear-air turbulence is over Wyoming & the Dakotas, but I haven't checked into it recently.

According to the Annan criterion, that makes me 99.999% right :-) Not even the Team is right all the time. ;-)

David Young said...

Just wanted to add a postscript for KarSteN and James' benefit. I took KarSteN seriously and tried to discuss this topic of multiple solutions and ill-posedness at Stoat, believing that people are reasonable until they prove otherwise. My comment summarizing the science, which was strictly technical in nature, was consigned to the "burrow" because it was "boring". Just one more piece of evidence of the politicization of this field. Of course Connolley is a partisan and very political, but the conversation did seem to have a lot of scientist participation. I'm not complaining, just reporting another piece of evidence of the nature of the debate in climate science.

[At this point I got bored. Further comments from DY will just get trashed if they're off-topic or trolling -W]

James and Jules run a more honest forum here, it would seem. For those who are interested, here's the comment.

I see from W's inline response to my last comment that the witch hunt over qualifications has begun. The default assumption seems to be that as soon as someone says something controversial, the demand that he state his qualifications follows, and I understand that urge.

There are a couple of papers I mentioned at James' place, on the "More on that recent sensitivity paper" thread. You will also see some very intense name-calling and slanderous comments there, something which I'm sure W does not allow here. ;-) The references give some details of our recent work. We are not in the business of "getting people to read our papers", so they are not that visible unless you know what to look for.

The doctrine that seems to me to be the basis of this whole thread and the previous one is the uniqueness of climate as a function of forcings. If this is false, pretty much everything else said here is questionable.

In Navier-Stokes simulation of fluid flows, a similar doctrine had taken hold by about 15 years ago. It's true that the Navier-Stokes equations at high Reynolds number are essentially ill-posed as an initial value problem. But people had developed a dissipation called eddy viscosity, which "time averages" the effect of the small-scale eddies to give their effect on the resolved scales, and thereby converts the ill-posed initial value problem into a boundary value problem that was claimed to be well-posed. In this case, the RANS problem really is a boundary value problem. This doctrine was supported by tons of computational experience that seemed to confirm it. Of course, there is positive-results bias and the placebo effect, but that's another story. To be fair, part of this experience was based on the easier flow problem, namely the attached-flow cases, where the methods perform reasonably well, but not much better than simpler and far less costly methods.
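To make the eddy-viscosity idea concrete, here is a toy Python sketch of my own (nothing resembling a production RANS code): the 1D Burgers equation with a Smagorinsky-type subgrid term, where the extra dissipation nu_t = (Cs*dx)^2 * |du/dx| is added to the molecular viscosity to mimic the damping effect of the unresolved eddies.

import numpy as np

N = 64                         # deliberately coarse grid
dx = 2.0 * np.pi / N
x = np.arange(N) * dx
u = np.sin(x)                  # smooth initial condition that steepens
nu, Cs = 2e-3, 0.17            # molecular viscosity, Smagorinsky constant
dt, steps = 1e-3, 800          # stop before the front gets too sharp

for _ in range(steps):
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    nu_t = (Cs * dx) ** 2 * np.abs(dudx)       # grid-dependent eddy viscosity
    dfdx = (np.roll(0.5 * u**2, -1) - np.roll(0.5 * u**2, 1)) / (2 * dx)
    d2u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-dfdx + (nu + nu_t) * d2u)   # crude: nu_t kept outside d/dx

print("max nu_t =", nu_t.max(), " molecular nu =", nu)

A proper Smagorinsky model puts nu_t inside the divergence, d/dx(nu_t * du/dx); I have kept it outside only to keep the sketch short. The point is simply that the subgrid term is pure added dissipation, and its size depends on the grid, not only on the physics.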

F. Johnson, with a team of top-notch people, set out to build a code that would embody this principle, use the latest methods, and avoid the huge number of knobs used in existing codes. The effort went on for a long time and fell further and further behind schedule. Finally, people were forced to admit that the underlying assumption of well-posedness was probably wrong.

If you read AIAA 2013-0063, you will find a rather convincing proof that there are in fact multiple steady-state, fully converged solutions to the RANS equations given identical forcings. Further, which one you find depends on the details of the numerics used to get there, as well as on the grid. In fact, pseudo-time marching techniques are required to find converged solutions reliably. It is unknown whether all of these solutions are physical, but at least some of them are, as was shown by very early testing many decades ago.
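A toy scalar analogue (my own construction, nothing from the paper) shows the flavor of this. The "residual" R(u) = u - u^3 has three exact steady states, u = -1, 0, +1, and which one you converge to depends both on the starting guess and on whether you use Newton's method or pseudo-time marching:

def R(u):  return u - u**3           # residual; steady states where R(u) = 0
def dR(u): return 1.0 - 3.0 * u**2   # Jacobian of the residual

def newton(u, iters=50):
    for _ in range(iters):
        u = u - R(u) / dR(u)
    return u

def pseudo_time(u, dtau=0.1, iters=500):
    for _ in range(iters):
        u = u + dtau * R(u)          # march du/dtau = R(u) to steady state
    return u

for u0 in (0.2, -0.2, 0.7):
    print(f"u0 = {u0:+.1f}   newton -> {newton(u0):+.4f}   "
          f"pseudo-time -> {pseudo_time(u0):+.4f}")

Newton from a small guess happily converges to u = 0, a steady state that pseudo-time marching can never reach because it is unstable. In a real RANS code the analogous choices are buried in the solver settings and the grid.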

David Young said...

Continuation of the previous comment:

There is an even more disturbing phenomenon described in the paper: the "pseudo-solution". All other RANS codes simply NEVER converge to more than 4-5 digits in the norm of the residual. Hey, it's a tough problem to converge reliably. The paper documents flows whose residual is 5-6 orders of magnitude below the freestream level yet which ARE NOT TRUE SOLUTIONS. If the algorithm is allowed to find a true solution, the overall forces differ by 70%. Does this mean all the other codes are called into question? You can be the judge.

Further on the bad-news front, the pseudo-solutions, at least on a common test case, are substantially closer to the test data than the true solutions. What does this mean? My guess is that the eddy-viscosity models have been incorrectly calibrated against unconverged solutions. By the way, it's commonly acknowledged by the modelers themselves that the viscosity models are too dissipative in some common situations.
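The linear analogue of the pseudo-solution trap is easy to demonstrate (again a toy of mine, not the paper's nonlinear case). For an ill-conditioned system, an iterate can sit a long way from the true solution while its residual looks impressively small:

import numpy as np

n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)              # Hilbert matrix, condition number ~1e13
b = A @ x_true

# Perturb the solution along the direction A squashes the most
# (the right singular vector of the smallest singular value).
U, s, Vt = np.linalg.svd(A)
x_fake = x_true + Vt[-1]

print("residual of x_fake:", np.linalg.norm(A @ x_fake - b))   # ~1e-13
print("error    of x_fake:", np.linalg.norm(x_fake - x_true))  # exactly 1

An order-one error, a residual thirteen digits down. A nonlinear RANS solve is far nastier, but the moral is the same: a small residual norm does not by itself certify a solution.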

So, climate is the Navier-Stokes equations with lots of subgrid models of everything from aerosols to clouds to albedo changes. Making something more complex doesn't usually make it more stable. Linear potential-flow methods are much simpler and are absolutely stable; Navier-Stokes is not. Generally, adding chemistry, combustion, or convection to a Navier-Stokes simulation merely makes the problem stiffer and less well-posed, though it's hard to generalize on this subject.
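The stiffness point is textbook material, and a two-minute experiment shows it (a toy ODE, not a climate model). Bolt a fast relaxation term onto a slow problem and an explicit time step that used to be fine suddenly produces garbage:

import numpy as np

def explicit_euler(k, dt, T=2.0):
    u, t = 1.0, 0.0
    for _ in range(int(T / dt)):
        u = u + dt * (-k * (u - np.cos(t)))  # fast "chemistry-like" relaxation
        t += dt
    return u

dt = 0.01
for k in (1.0, 50.0, 500.0):     # stiffer and stiffer source term, same step
    print(f"k = {k:5.0f}   u(T) = {explicit_euler(k, dt):.3e}")

Explicit Euler is stable here only for dt < 2/k, so the k = 500 case explodes at dt = 0.01. The added physics is innocuous; it is the numerics that suffer.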

I will write a follow-up comment on the relationship of numerical stability to well-posedness, and on the sensitivity of results to compilers and other rounding-error details. This subject is pretty well covered in graduate courses on numerical methods. Basically, if your problem is well-posed, these rounding errors can be proven not to make much difference. For ill-posed problems, they do make a difference, as discussed above. Of course, if you add enough dissipation, you can make any problem well-posed.
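Here is the short version of that point (standard classroom material, not any particular model). Perturb an iteration by one rounding error's worth and watch what happens in a contractive map versus a chaotic one:

import numpy as np

eps = np.finfo(float).eps                      # ~2.2e-16, a rounding-sized nudge
contractive = lambda x: 0.5 * x + 0.1          # Lipschitz constant 0.5
chaotic     = lambda x: 4.0 * x * (1.0 - x)    # logistic map, chaotic

for name, f in (("contractive", contractive), ("chaotic", chaotic)):
    a, b = 0.3, 0.3 + eps
    for _ in range(60):
        a, b = f(a), f(b)
    print(f"{name:12s} |difference after 60 steps| = {abs(a - b):.3e}")

For the contractive (well-posed) map the perturbation dies away; for the chaotic map it grows by roughly a factor of two per step until the two runs are unrelated. That is the compiler and rounding-error sensitivity in miniature.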

If you use poor methods, like the leapfrog scheme, which was already known to be bad when I was in graduate school, you shoot yourself in the foot from day one, as Paul Williams has shown rather convincingly. I know GISS doesn't use the method, but NCAR does.
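The classic demonstration (textbook, not anyone's GCM) uses the damping equation du/dt = -u. Leapfrog carries a spurious "computational mode" that grows even though the true solution decays, which is exactly what the Robert-Asselin filter, and Williams's improvement to it, exists to suppress:

import math

dt, n = 0.1, 200
f = lambda u: -u

u_prev = 1.0                 # u at t = 0
u_curr = math.exp(-dt)       # exact value at t = dt, to start the scheme
for _ in range(n - 1):
    u_prev, u_curr = u_curr, u_prev + 2.0 * dt * f(u_curr)   # leapfrog step

print("leapfrog u(t=20):", u_curr)             # huge: the spurious mode wins
print("exact    u(t=20):", math.exp(-n * dt))  # ~2e-9

Even with a perfect starting step, the computational mode is excited at truncation-error level and then amplifies by roughly (1 + dt) per step until it swamps the answer.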

William M. Connolley said...

Sorry, my ejecting DY's confused trolling seems to have pushed him over here.

David Young said...

W, glad to see you followed me here; I thought I wasn't worth following :-) Did you actually read the technical substance before judging it? It is technically correct, and interesting to a lot of people without a political filter.