Lots of people have asked about this paper (which I think is open access).
To cut a long story short, it's not silly - the authors are entirely respectable and the work is interesting - but I don't think it is really that credible in terms of overturning established consensus. In fact it looks to me like they've gone astray in a few ways which add up to provide plenty of reasons for doubting the result.
The underlying idea is interesting enough and I have no problem with it in principle. They looked at climate change over multiple glacial cycles, not only to estimate the climate sensitivity, but also to tease out how much it varies with temperature. Their observed temperature record comes from a handful of long-term proxy records of sea surface temperature, just 14 in total, which do not give very good global coverage. So they start by calibrating these records to a global mean temperature by comparing the local (proxy location) to global temperature at the last glacial maximum as simulated by models. The LGM temperature change arising from their 14 proxy records scaled to global temperature is about 5C colder than pre-industrial. This is a fair bit colder than the 4C we got with 400 data points over both ocean and land. But not content with this, they then average it with the mean of the PMIP model simulations, which is 6.5C colder than PI, thus getting a cooling of almost 6C.
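For concreteness, here is a minimal sketch of what that calibration step amounts to on my reading. The 1.95 scaling factor is the one quoted in the comments below; the site anomalies are illustrative placeholders, not the actual proxy data.

```python
import numpy as np

def pmip_scaling(model_global, model_at_sites):
    """Scaling factor: model global-mean LGM anomaly / model mean at the proxy sites."""
    return model_global / np.mean(model_at_sites)

def global_estimate(proxy_anomalies, scaling):
    """Apply the model-derived scaling to the proxy site mean."""
    return scaling * np.mean(proxy_anomalies)

proxy_lgm = np.full(14, -2.56)           # placeholder site anomalies (K)
print(global_estimate(proxy_lgm, 1.95))  # ~ -5.0 K, their global LGM cooling
```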
Edit: Thanks to an email from Axel, I've had a more careful read and the above is wrong. One estimate is the PMIP models scaled to match data ("proxy-based"), another is their LOVECLIM simulation scaled to match a different data set ("model-based").
It is probably defensible to use the PMIP models in this way as some sort of independent estimate of the LGM state, but surely it is inconsistent not to then also use the PMIP models to estimate the climate sensitivity and/or its nonlinearity. Anyway, this cold LGM state feeds through into a high sensitivity. An important additional factor here is the nonlinearity, which they diagnose by comparing temperature to net forcing throughout the time series. I think a fair bit of this nonlinearity relates to the very warm interglacials, which are at best poorly calibrated since they only calibrated the proxy records to a fully glacial state. Interglacials have a much smaller global temperature signal relative to the present, with the regional differences being much more important, and it seems doubtful whether a single scaling applied to these 14 proxy records could represent the true relationship with adequate precision for their purposes. In support of this, the last interglacial appears extremely warm in their calibrated proxy record, some 3C above pre-industrial, which I don't think is widely accepted. On the other hand, some nonlinearity is probably quite plausible, so let's press on. Using the "warm" sensitivity of 4.9C/doubling, they then generate a transient prediction, using a simple energy balance with the ocean heat uptake factor again taken directly from the CMIP models.
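To illustrate the scale of numbers involved, here is a zero-layer version of that energy-balance step. The kappa value is my assumption (broadly CMIP-like), not necessarily theirs, and their actual formulation may well differ:

```python
F2x = 3.7                  # W/m2 per CO2 doubling (canonical value)
S_warm = 4.9               # K per doubling: the paper's "warm" sensitivity
lam = F2x / S_warm         # feedback parameter, ~0.76 W/m2/K
kappa = 0.6                # W/m2/K ocean heat uptake efficiency (my assumption)

def transient_T(F):
    """Quasi-steady transient response: N = kappa*T, so F = (lam + kappa)*T."""
    return F / (lam + kappa)

print(round(F2x / (lam + kappa), 2))  # implied TCR ~2.7 K (cf. comments below)
print(round(transient_T(2.5), 2))     # ~1.8 K for ~2.5 W/m2 of present forcing
```

That back-of-envelope ~1.8 K for present-day forcing is in the same ballpark as the paper's curve, and already above what we've observed.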
Disappointingly, their plot of the transient warming from 1880-2100 doesn't show the actual observations up to the present. It is hard to be precise from eyeballing a computer screen, but it looks to me that their new improved prediction is already way ahead of observations. It suggests a warming that first reaches 1C (relative to 1880) back in the early 1990s before Pinatubo, rebounding from that brief dip to reach about 1.5C by the present. HadCRUT4 shows rather less warming than this, with even the current extraordinary hot year (boosted by a strong El Nino) not reaching 1.2C on that anomaly scale. In my view, failing to show, or discuss, this discrepancy is a major failing of the paper. If they think it can be explained by internal variability then they should have presented that discussion, and I'm surprised the reviewers let them get away without it.
Edit: ok, here is a very quick and dirty attempt to show what their pic would have looked like with real temperatures on it:
Not a great graphic, I just scaled the HadCRUT pic from here and tried to line it up with the authors' own axes, matching the baseline temperatures around the end of the 19th century. As anticipated, recent years are well below their prediction, with 2016 just about reaching the CMIP mean.
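For anyone wanting to do this less dirtily, the rebaselining itself is trivial. A sketch, assuming an annual-mean HadCRUT4 text file with year and anomaly columns (the file name and layout are assumptions about whatever download you have):

```python
import numpy as np

# HadCRUT4 annual anomalies are natively on a 1961-1990 baseline; shift them
# so the 1880-1899 mean is zero, to match the authors' axes.
years, anom = np.loadtxt("hadcrut4_annual.txt", usecols=(0, 1), unpack=True)
baseline = anom[(years >= 1880) & (years <= 1899)].mean()
rebased = anom - baseline
for y, t in zip(years[-3:].astype(int), rebased[-3:]):
    print(int(y), round(float(t), 2))
```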
Edit: Axel claims that internal variability can explain this discrepancy, but I don't believe it. The magnitude of decadal-scale internal variability is about 0.1C (Watanabe et al 2014 and Dai et al 2015) and this new forecast would be even hotter if it wasn't also hugely overestimating the response to volcanoes.
So, in summary, nice try, but I don't believe it, and I don't really think the authors do either.
[Blog post title inspired by the Mark Lynas quote, which is not the authors' fault. Incidentally, it is disappointing to see journalists falling for the parasitic publishing scam in which "one of the most respected academic journals" cashes in on its name by setting up numerous sister journals which share some elements of the name but neither the editorial policy nor barriers to entry. "Science Advances" is not Science and it's only been around for a year or so, nowhere near long enough to have any sort of reputation. But if journalists don't know the difference, scientists will happily pay the steep publication charge and reap the publicity benefits. Nature have been doing this for a few years now (eg Nature Communications) so it's hardly surprising Science have followed suit.]
27 comments:
Glad you wrote this, I was going to ask, too. Nice to note at least two of your papers are cited, too.
Another thing you don't mention here is their assumption of RCP8.5, which also skews the rate of doubling, accelerating it.
In a sense, their conclusions, given their assumptions, are not especially surprising. Somewhere in the paper there's also a reference to a CS of 3.2, which doesn't sound unreasonable.
If the point of the paper is that CS is underestimated because it's accelerated by happening during a warm phase (S_warm), have they established this in their workings?
I'm still inclined to think that CS is around 3, give or take, but more inclined to imagine the possibility of this happening faster than model scenarios because of the recent turn of events (and lack of action) in the political sphere.
check your spelling ;) APOCALYPSE
Fergus, RCP 8.5 is probably unrealistic within its direct terms (fair enough since "representative"), *but* it lacks carbon feedbacks. Looking at AGU abstracts and considering recent comments by some formerly-staid leading researchers, permafrost in particular seems to be going off the rails fairly quickly. Boreal forests and tropical forests may not be far behind. This stuff is moving very, very fast now.
Interesting fact: AR5 SYR SPM included not a single mention of carbon feedbacks even though there was already enough scientific cause for considerable concern. Hard to model, yes, but even so that's a very strange decision.
Thanks so much for this, James. OTOH why did they submit the paper as it is if they knew it was wrong? And it does seem especially bizarre for reviewers to let that projection discrepancy through undiscussed.
I mentioned on the last post before seeing this one that Andreas Schmittner also pointed out the LGM temp issue as a big problem.
Steve,
Indeed. My point was that using this means that the curve steepens in a similar way that using an LGM anomaly beyond -5C does. I'm definitely not saying that this isn't plausible, in terms of the speed of changes. Excuse the multiple negatives.
For a long time I've been trying to get people to focus on the elephant and not be diverted by the methane, but not so much these days - if it is (as I read somewhere) approaching 20% of the forcing already, I'm inclined to add it to the list of 'things to keep an eye on'.
All of that is rather Britishly understated. Sorry (also British to apologise)
James
I agree with your criticisms of this study - I have been telling people much the same. The idea that one can estimate state-dependence (non-linearity is the wrong term) of climate sensitivity by the method they use seems ridiculous to me, given the poor quality and quantity of the data for interglacials, allied to what I presume must be, in effect, a very high efficacy of solar forcing during some parts of glacial-interglacial transitions.
You didn't mention that there is also a problem with the forcings they use, which in total are much smaller at the LGM than those of Kohler et al (2010), the most recent carefully-assembled LGM forcings I am aware of.
Friedrich et al. use a LGM to preindustrial forcing change of 6.5 W/m2, much lower than the Kohler et al estimate of 9.5 W/m2. The main differences are that Friedrich et al. a) estimate the land ice sheet forcing change as only ~1.5 W/m2 rather than 3.2 W/m2; b) treat vegetation changes as a feedback rather than a forcing change of 0.9 W/m2, which is wrong for ECS as opposed to ESS; and c) estimate the forcing resulting from sea level change at 0.2 W/m2, much lower than Kohler et al's 0.6 W/m2 estimate.
Adjusting Friedrich et al's LGM to preindustrial forcing of 6.5 W/m2 and ~6 K GMST change respectively to Kohler's value of 9.5 W/m2 and your 2013 value of 4 K would reduce the ECS to about 45% of the value they calculate.
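For reference, spelling that arithmetic out (the numbers below are the ones quoted just above):

```python
# ECS scales as dT/dF, so swapping in Kohler's forcing and the 2013
# temperature estimate rescales the paper's ECS by:
dT_paper, dF_paper = 6.0, 6.5  # K, W/m2 (Friedrich et al.)
dT_alt, dF_alt = 4.0, 9.5      # K, W/m2 (Annan & Hargreaves 2013; Kohler et al. 2010)
print(round((dT_alt / dF_alt) / (dT_paper / dF_paper), 2))  # ~0.46
```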
Hi Nic, yes I spotted the low forcing, but I thought it might not be that important for their final result, as the "warm" sensitivity is associated with small contributions from all the forcings you mention anyway. So with a larger LGM forcing, they might just get an even greater nonlinearity to compensate (at least to some extent).
I have now added a quick and dirty pic with modern obs superimposed on their graphic, which seems to confirm what I thought (though I might not have done a particularly precise job of it).
James,
Looks like your figure is showing that their best estimate for the TCR of just over 2.7K is higher than seems reasonable given how much we've warmed to date. If the TCR really is around 2.7K then that would imply that internal variability has suppressed about 0.7K of warming, which seems rather implausible.
Yes, I suppose if one's looking for excuses, they might argue that their paper isn't really about short-term variability and so a comparison with recent decades isn't appropriate or perhaps within their scope. It's difficult because I know and like several of the authors, and they are certainly talented scientists, but I doubt they will look back at this as one of their finest moments.
Fergus, TBC there's an important distinction between a catastrophic methane hydrate release and methane released otherwise (including small ongoing hydrate releases of the sort reported in the Arctic). Observations of the latter aren't evidence for the former, however mightily some people strive to make it so. AIUI (primarily from David Archer), a catastrophic hydrate release can't happen this century since the oceans would need to warm several degrees. I guess the implied advice is to not warm the oceans up like that, but at the same time I don't see how crying wolf about hydrates is anything other than a hindrance to addressing that very real problem.
It's not widely known, probably because the AMEG types don't ever talk about it, that following the initial reports from Shakhova et al. the NSF put some serious resources into an observing campaign to see what was going on in the ESS. In short, they found some ongoing releases, but no evidence of any trend. The American PI was Mandy Joye, a biogeochemist and AFAICT the leading expert globally on what happens to methane in water.
Steve,
What on earth made you think I'm of an AMEG inclination? I know enough about clathrates to be able to agree with you on this. It's the other sources, non-burp-like, which I think need more awareness. I'll check out some recent stuff if I can find it. I'm a long way from the catastrophists on almost everything climate-inclined.
Part of the underlying problem is that this is an emerging emergency, not a sudden crisis we are dealing with. When you're in the bus with no brakes and your journey down the hill is near its start, someone who says 'it might be a good idea to try and pull over' doesn't get traction with those who want to get to where they're going.
James,
The most curious aspect for me is the ice sheet forcing, which seems to be about half the size of what's typically found using GCMs. Do you have any idea why that might be? In the methods they describe why using planetary albedo change rather than surface albedo change is more accurate and returns a smaller radiative forcing, but I'm not sure that's a distinction which is relevant to previous GCM estimates?
The typical LGM-to-PI difference for F_SW,ICE(t) is −1.45 W/m2 (fig. S7). Note that the choice of albedo is critical when calculating radiative forcing anomalies. The glacial-interglacial change in surface albedo of ice sheet areas simulated by our model amounts to ~0.4. This value is in good agreement with a surface cover change from ice sheets (albedo of ~0.7) to a seasonal mix of vegetation and snow (annual mean albedo of ~0.3 to 0.4). However, the effect of changes in ice sheet cover on planetary albedo is substantially smaller compared to surface albedo. For planetary albedo, the simulated glacial-interglacial amplitude reaches only ~0.2, resulting in a smaller radiative forcing. Thus, using surface albedo to calculate the ice sheet effect on shortwave forcing results in an erroneously large forcing estimate.
Paul,
Yes, I'm not sure what to make of that. I don't know how robust the typical ~3 W/m2 estimate is either. I presume it relates to the amount of cloud over the ice. Hard to imagine that everyone else has got it horribly wrong in the past, and their model is simplified compared to most GCMs. But again, it probably doesn't have much effect on their "warm sensitivity" estimate.
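A back-of-envelope version of why the albedo choice roughly halves the forcing: only the 0.4/0.2 albedo amplitudes come from the paper's methods; the ice-area fraction here is an assumed free parameter chosen purely for illustration.

```python
# Shortwave forcing from an albedo change d_alpha over a fraction f of the
# globe is roughly -(S0/4) * d_alpha * f.
S0_over_4 = 340.0  # W/m2, global-mean insolation
f = 0.02           # assumed extra ice-sheet area fraction at the LGM
for label, d_alpha in [("surface albedo", 0.4), ("planetary albedo", 0.2)]:
    print(label, round(-S0_over_4 * d_alpha * f, 2), "W/m2")
# surface -2.72 vs planetary -1.36 W/m2: roughly the gap between the
# GCM-typical ~3 W/m2 and the paper's ~1.45 W/m2.
```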
Didn't think so, Fergus, just wanted to clarify.
Perhaps not strictly relevant AGU abstract, but interesting even so.
Should note, their secondary reconstruction has nothing to do with PMIP simulations. From my reading it tries to reconstruct by fitting the LOVECLIM transient simulation variability (using standard deviation as a metric) to that of 63 SST proxies going back to 140 kya.
Going back to the first reconstruction, they do use PMIP models to determine a global SAT from sub-sampled average of SST anomalies at proxy locations. They find a scaling factor of 1.95. Given the stated global SAT of 5ºC this must mean a sub-sample average SST of 2.56ºC. Averaging the relevant gridcells of your own SST reconstruction I find average SST of 3ºC. So your reconstruction would give a scaling factor of 1.33, much lower than in any of the PMIP3 models. I guess that points to substantial spatial differences in temperature anomalies between your reconstruction and PMIP3 models?
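The arithmetic in that comment, made explicit:

```python
sat_global = 5.0                     # K, the paper's global LGM cooling
scale = 1.95                         # the paper's PMIP-derived scaling factor
print(round(sat_global / scale, 2))  # ~2.56 K implied sub-sampled SST mean
sat_2013, sst_sub = 4.0, 3.0         # K, the 2013 reconstruction values above
print(round(sat_2013 / sst_sub, 2))  # ~1.33 implied scaling factor
```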
Um...well it may depend a bit on what "substantial" means. Our reconstruction fits the data substantially better than any individual model, but it certainly can't match all the data points perfectly. We did find that the models typically have a lower land/sea contrast than the data, which made it challenging to fit both simultaneously. But that also makes it seem odd that this paper calculates a larger global temp change from SST data alone. I haven't checked how their data compares to the larger MARGO set but it looks like it must be a fair bit cooler on average. There is still a bit of a debate as to how well the MARGO data set represents a consensus but it does have a lot of data points and authors!
Got an email from Axel Timmermann, which has corrected what I said about the origins of the LGM estimate.
Question I also asked at RC:
Who's this? Google just showed me an unfamiliar site that says:
Real Global Temperature Trend, p9 – ‘Not all Climate Forcers are equal, so Climate Sensitivity is Higher,’ NASA says
Posted on April 9, 2016 by Rolf Schuttenhelm
Climate sensitivity is hot these days. That is because ‘the lukewarmers’* have tried to suggest it is overestimated – and now real climate scientists are publishing studies showing the opposite: climate sensitivity may be underestimated…..
Real Global Temperature Trend, p10 – Refining cloud feedbacks lifts climate sensitivity to 5-5.3 degrees(!), say Yale researchers ….
http://www.bitsofscience.org/real-global-temperature-trend-forcers-climate-sensitivity-nasa-7046/
The NASA stuff will be Kate Marvel and Gavin Schmidt (et al) looking at individual forcings. Talked about the Yale thing here. Not hugely exciting, and misleadingly presented.
Thank you (once again)
Is it possible that some of this warming is masked by aerosol forcing?
Certainly aerosol forcing is masking some of the warming we would otherwise have observed. However, not only are estimates of the aerosol forcing effect a little better constrained than they used to be, but also the effect is decreasing in relative (perhaps even absolute) magnitude as power generation gets gradually cleaner and CO2 continues to accumulate.
Is there a best estimate of how much warming aerosol forcing might have masked (in degrees C)?
In other words, could it be that we would have in fact seen 1C of warming in the early 90s if it wasn't for the negative forcing from aerosols (sorry, I'm not too well versed in climate science)
Short answer is I don't know precisely but as a reasonable estimate, yes: aerosols are roughly -1 W/m2 and the current total forcing is +2.5 W/m2, so it would have been +3.5 W/m2 without the aerosols, and the resulting warming in that case would have been roughly 40% higher than we've seen. Just a ballpark figure though.
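That ballpark, written out:

```python
total, aerosol = 2.5, -1.0        # W/m2, rough present-day figures from above
print((total - aerosol) / total)  # 1.4, i.e. ~40% more warming without aerosols
```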
So doesn't that give more legitimacy to the conclusions on the nonlinear sensitivity study?
Basically, no, because that effect was already taken into account and they haven't said anything new about it. Their model simulation showing a warming of about 1.5C by now (as per my pic above) includes the aerosol cooling.