Friday, December 21, 2012

How cold was the Last Glacial Maximum?

Several years ago, when we first got involved in paleoclimate research and were thinking about using the Last Glacial Maximum to constrain models, jules asked a respected researcher for an estimate of how cold the LGM was, in terms of a global annual mean temperature anomaly relative to the modern pre-industrial climate. Their surprising response was, "I haven't a clue". Of course this wasn't quite true, as the person in question was surely confident that the LGM was at least a degree colder overall than the preindustrial state, but not as much as (say) 20C colder. But they weren't prepared to specify a global average value, because the available proxy data were few and far between, and could not be considered to give good global coverage. 
 
When we looked at the literature, we found lots of analyses which looked at proxy data on a regional basis (such as SST in the tropics, and various polar ice cores) but not much relating to a global average value. Some people (including Jim Hansen) have given rough estimates of about 5-6C, and we used similar arguments to give an estimate of 6C when we needed one for this paper. But it was a bit hand-wavy. As for modelling results, in 2007, the IPCC AR4 gave an estimate of 4-7C colder than pre-industrial, which was based more-or-less directly on GCM simulations from the PMIP2 project.

Around the same time, various people (including us here) started more formally combining models with proxy data, by running model ensembles with different parameter values, and came up with similar estimates (e.g. here and here). So, a consensus view of around 5-6C as a best estimate seemed well established.

Since then, several major compilations of proxy data have been published (most notably, MARGO for the ocean, and Bartlein et al for land), and although the MARGO authors did not generate a global mean estimate, there were strong hints that their data set was a bit warmer than the previously prevailing interpretation of ocean proxy data, especially in the tropics. When Andreas Schmittner and colleagues published their Science paper last year, their climate sensitivity estimate made the headlines, but it was actually their LGM reconstruction that was more immediately eye-catching to us. They fitted a simple(ish) climate model to these most recent and comprehensive proxy syntheses, and came up with a global mean temperature anomaly of only 3C (with an uncertainty range of 1.7-3.7C), which is far milder than most previous estimates. Irrespective of the resulting sensitivity estimate (which we'll return to later), such a warm LGM would be hard to reconcile with GCM simulations. Therefore, we thought it would be worth considering the LGM temperature field as a stand-alone problem. One obvious weakness of Schmittner et al's paper is that, due to computational limitations, they only used a fairly simple climate model, which didn't seem to fit the data all that well. Our main idea was to see if we could do a better job using the state-of-the-art GCMs. That requires a rather different approach, as we can't run large ensembles of these models.

Our resulting paper is still under review at CP, and available here. It has had useful and positive reviews, and I'm now revising it, but I don't expect the overall answer (which is an LGM cooling of 4.0+-0.8C) to change.

Our main result is based on a multi-model pattern-scaling approach, adapted from a method I'd seen in the numerical weather prediction literature. It's actually just a simple multiple linear regression: each model (from the PMIP2 database) is assigned a scaling factor such that the weighted sum of the model patterns optimally matches the data (and, hopefully, interpolates meaningfully between the data points). This gave substantially better results than either of the other two methods (which I'll describe below for the interested reader), and also worked well in a number of validation tests. The resulting reconstruction fits the data rather better than the results that Schmittner et al reported. The estimated global temperature anomaly is a nice round 4.0+-0.8C, with the uncertainty representing a 95% confidence interval. So that is a bit colder than Schmittner et al estimated, but also substantially warmer than most previous estimates. It's towards the low end of the range of GCM results, though with significant overlap. Our formal uncertainty range doesn't take account of the possibility that someone might come along in a few years and decide that these proxy data are all misinterpreted or hugely biased. For example, some people argue that the MARGO ocean data are too warm, especially in the tropics. We don't have a particular position on that, but intend to report some more sensitivity analyses in the final paper. Our uncertainty estimate is based just on the need to interpolate into data voids, and the fact that different models suggest somewhat different interpolations.
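For the curious, here's a minimal sketch of the regression idea in Python, with made-up array names and sizes; it is just an illustration of the principle, and things the real calculation has to deal with (area weighting, data uncertainties, validation) are ignored here.

    import numpy as np

    # Minimal sketch of the multi-model regression idea (illustrative only).
    rng = np.random.default_rng(0)
    n_sites, n_models, n_grid = 400, 9, 96 * 72          # hypothetical sizes

    # Each model's LGM-minus-preindustrial anomaly, on the full grid and
    # sampled at the proxy locations, plus the proxy anomalies themselves.
    model_fields = rng.normal(size=(n_models, n_grid))
    model_at_proxies = rng.normal(size=(n_sites, n_models))
    proxy_anom = (model_at_proxies @ rng.normal(size=n_models)
                  + rng.normal(scale=0.5, size=n_sites))

    # Multiple linear regression: one scaling factor per model, chosen so
    # that the weighted sum of model patterns best matches the proxy data.
    weights, *_ = np.linalg.lstsq(model_at_proxies, proxy_anom, rcond=None)

    # The same weighted sum applied to the full model fields gives the
    # reconstruction, interpolating into the data voids.
    recon = weights @ model_fields
    print(weights.round(2), recon.mean().round(2))   # (area weighting omitted)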

And here is what you have all been waiting for - pictures of our reconstruction of the LGM anomaly, which are probably worth more than the 1000 words of this blog post. First, the surface air temperature anomaly, and then the sea surface temperature anomaly. These match closely over the open ocean, but at high latitudes the layer of sea ice insulates the ocean and allows them to diverge.





The dots are the proxy data points. Overall, the main features are very much as expected. Some of the smaller details are probably just noise, like the apparent "hole" in the top plot in the Southern Ocean at around 70S 30W, and also in the N Pacific. The method generates uncertainty estimates which are particularly large in these regions. The slight warming in the Arctic SST (under thick sea ice) is in accordance with the proxy reconstructions; that in the south is more speculative, as there are no data there.

What this means for climate sensitivity is left for a future post (some readers may be ahead of me at this point)...


Appendix: For anyone who is interested, here's a quick summary of two other methods we investigated. The paper has a fuller description.

We first tested the simplest method we could think of: direct smoothing of the data, as is commonly performed for modern temperature data by the likes of GISS and NCDC. However, the paleoclimate data are much more sparse and noisy than modern data, and the results looked pretty messy. Worse, when we sampled "pseudoproxy data" from GCM results (i.e., took data from them at the same locations as the real proxy data) and smoothed these into fields to see how well we could recover the full GCM output, the results had a huge overall bias. The basic reason for this is that there are no proxy data available in the areas which were covered by huge ice sheets back then but which are now bare (particularly North America and Scandinavia), and it is these regions which exhibit the largest anomalies for the LGM. So although we could smooth the real proxy data and obtain an estimated global mean temperature anomaly of 3.2C, we also knew this could easily be biased by a degree (or more, or less) compared to the real temperature. Bias-correcting this (based on the results of the pseudoproxy experiments) gives an estimate of 4.1+-0.9C cooling. An ugly feature of the smoothing is that the spatial pattern generated is completely unreasonable, not showing any of the extreme cooling that must have occurred (simply due to direct albedo and altitude effects) over the large ice sheets that existed then.
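In case it helps, here's a toy version of that pseudoproxy check in Python; everything in it is invented (the real test used actual GCM fields and the real proxy locations, not random numbers), but it shows why missing the ice-sheet regions produces a warm bias in the naive estimate.

    import numpy as np

    # Toy pseudoproxy bias check: take a "true" anomaly field, sample it only
    # where proxies exist, and compare the naive mean with the true one.
    rng = np.random.default_rng(1)
    true_field = rng.normal(loc=-4.5, scale=3.0, size=10_000)

    # Pretend the most strongly cooled cells (e.g. over the former ice sheets)
    # have no proxy coverage at all.
    covered = true_field > np.percentile(true_field, 20)
    pseudo_proxies = true_field[covered] + rng.normal(scale=0.5, size=covered.sum())

    bias = pseudo_proxies.mean() - true_field.mean()   # warm bias from the gaps
    print(f"true {true_field.mean():.2f}, naive {pseudo_proxies.mean():.2f}, "
          f"bias {bias:+.2f}")
    # It is this sort of bias, estimated from pseudoproxy experiments, that
    # gets subtracted from the smoothed real-data estimate.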

So, we quickly concluded that we needed to use a model of some sort to interpolate between the data points. As an alternative to running an ensemble of simple climate models, we decided to use the state-of-the-art PMIP2 model results, which were generated by essentially the same full-complexity GCMs that also contributed to the IPCC AR4 (the CMIP3 project). We obviously weren't able to re-run ensembles of these models with different parameter values, so instead we just used a simple pattern-scaling approach to fit them to the data. We think this is pretty much comparable to changing the sensitivity parameter of a simple model - opinions seem to differ on the extent to which this is true, but anyway it's all we could reasonably do with these models.

The results were markedly better than we got from the smoothing, but still rather disappointing. One minor curiosity, which we had not anticipated but which is obvious with hindsight, is that this pattern-scaling approach tends to underestimate the overall temperature change. This is essentially due to regression attenuation, which interested readers can google for more info (or look at our manuscript). Anyway, after bias correction this gives an overall estimate of about 4.5C of cooling, but the uncertainty is actually a bit higher than for smoothing, because different model patterns interpolate into the data voids differently.
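For anyone who'd rather see the attenuation effect than google it, here's a toy Python example of the classical mechanism (error in the predictor pulls the fitted scaling factor towards zero); the numbers are invented and this is not the calculation from the paper.

    import numpy as np

    # Toy demonstration of regression attenuation in single-pattern scaling.
    # If the model pattern differs from the truth by "noise", the fitted
    # scaling factor is biased towards zero, so the scaled reconstruction
    # underestimates the true anomaly. Purely illustrative numbers.
    rng = np.random.default_rng(2)
    n_sites, n_trials = 300, 2000

    slopes = []
    for _ in range(n_trials):
        truth = rng.normal(loc=-4.0, scale=3.0, size=n_sites)      # true anomalies
        pattern = truth + rng.normal(scale=2.0, size=n_sites)      # imperfect model
        proxies = truth + rng.normal(scale=1.0, size=n_sites)      # noisy proxies
        # least-squares scaling factor, no intercept
        slopes.append(pattern @ proxies / (pattern @ pattern))

    print(f"mean fitted scaling factor: {np.mean(slopes):.2f} (1.0 would be unbiased)")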

So these two methods generate results which are compatible with our main result, but which are somewhat inferior.

16 comments:

A.Grinsted said...

... and what does this mean in terms of glacial-interglacial sea level sensitivity?

James Annan said...

I think that's for you to tell us :-)

First instinct is to say that sensitivity of SLR to temp is 50% higher than previously thought (assuming a previous belief of ~6C for LGM cooling). But then, the forcing estimate has not changed, so my second instinct is to say the sensitivity of SLR to radiative forcing is the same.

Steve Bloom said...

4C? I don't think the Pap is going to care for this at all, no sirree, not one bit. You could go from quoted to cursed in record time. :)

Alex Harvey said...

Dear James,

Gavin Schmidt assured me some time ago that Kohler et al. 2010 QSR is the most up-to-date treatment of the forcing data. They use Schneider von Deimling et al. 2006's value of 5.8 +/- 1.4 K and their own forcing to obtain a best estimate of equilibrium climate sensitivity of 2.4 K and a range at 95% confidence of 1.4-5.2 K.

Your LGM is nearly 2 K less than SvD's. So, what does this tell us about ECS? We're looking at something close to 2 K, right?

Best regards,
Alex Harvey

bigcitylib said...

Alex Harvey,

OMG! Nic Lewis is right?

EliRabett said...

Interesting, but what does this imply about the temperatures in iceball earth eras and the forcings needed to escape therefrom?

Paul S said...

Is the LGM estimate Land SAT + SST or global land+ocean SAT?

David B. Benson said...

What I now request is the temperature change from the LGM to (roughly) the Holocene Climatic Optimum (HCO). The HCO is warmer than the, say, 1750 CE global temperature, but I'm rather unsure by how much.

Paul S said...

Alex Harvey,

The answer to that question is already publicly available if you look hard enough, but I won't spoil James and Jules's fun by giving it here.

James Annan said...

Paul, it is global SAT, but I'm pretty sure that using SST over the open ocean wouldn't have changed it appreciably (though I haven't actually checked). Using ocean SST under the sea ice would certainly make a difference, but that wouldn't be clever.

Anonymous said...

First of all:
Congratulations on this paper; I think that, when published, it will be a very important step towards determining a good ECS estimate from the LGM.
And IMO a good ECS estimate is crucial for assessing what implications will follow from our previous and future GHG emissions, and thus what policies are needed.

Second of all:
I'm a big fan of open review journals. It makes the whole process so much more transparent for outsiders, which I think is a big advantage.

Paul S said...

Thanks James,

I have another question - it's not very relevant to this particular topic, but it just occurred to me and I wondered if you knew:

How are near-surface air temperatures calculated from model runs?

As I understand it, each model is set up by splitting the world into "boxes" with some horizontal and vertical resolution. From what I've seen, CMIP5 models tend to have 17 boxes in the vertical, corresponding to atmospheric pressure levels; I'm not sure about the PMIP ones.

The lowest box corresponds to sea level and is below most land, so no temperatures are reported over those areas. The first reported temperatures over most land areas occur in the next two boxes up. The highest-altitude regions only start reporting at about the fifth box, I think.

So, in different regions the near surface occurs at different vertical layers in the model. Also, in the low atmosphere each box covers roughly 500 m to 1 km in the vertical. My question is: how does the model output a 2-metre near-surface temperature for each grid cell, given the setup described above?

Is it simple interpolation using the output gridded data or sub-grid processes running live in the model to determine what should be the temperature at the specific height?

James Annan said...

I would expect that 2m temperature (which is what SAT means) is calculated by interpolation when necessary. I expect that most atmos models will have a topography-fitting vertical coordinate system (ocean models certainly do), which works better than a set of boxes at fixed heights.

Sigma coordinate is the googleable term.

The standard pressure levels are also not direct model variables as such. I think each group can in principle do its own thing here, but I'd expect them all to use interpolation. Most models will have more than 17 levels. E.g. MIROC3.2 (an AR4 model) had 20 or 56 levels in its two versions.
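To make the sigma idea concrete, here's a toy Python sketch; the level values, the scale-height conversion and the constant lapse rate are all just assumptions for illustration, and real models use their own level placements and boundary layer schemes rather than this naive interpolation.

    import numpy as np

    # Toy terrain-following (sigma) coordinate and a naive interpolation to 2m.
    sigma = np.array([0.995, 0.97, 0.93, 0.85, 0.70])   # hypothetical lowest levels
    p_surf = 850.0                     # surface pressure (hPa) over high ground

    p_levels = sigma * p_surf          # level pressures follow the terrain
    H = 7000.0                         # assumed scale height (m)
    z_levels = -H * np.log(sigma)      # rough level heights above the surface

    t_lowest = 270.0                   # assumed temperature at the lowest level (K)
    lapse = 0.0065                     # assumed constant lapse rate (K/m)
    t_2m = t_lowest + lapse * (z_levels[0] - 2.0)   # extrapolate down to 2m

    print(p_levels.round(1), z_levels.round(0), round(t_2m, 2))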

Paul S said...

Ok, the box setup did seem needlessly rigid.

I'm just wondering about the potential for error caused by using simple interpolation. Seems unlikely to make much difference over large scale averages but wouldn't this be a potential problem for accurate reproduction of more intricate details like seasonal or diurnal cycles in individual grid cells or regions?

I'm assuming any interpolation would effectively follow the local lapse rate, but are lapse rates really so stable? How would the calculations account for inversions, for example?

James Annan said...

The short answer is...I don't know :-) However, models can have quite sophisticated boundary layer schemes even with limited resolution.