Several years
ago, when we first got involved in paleoclimate research and were
thinking about using the Last Glacial Maximum to constrain models, we asked a respected researcher for an estimate of how cold
the LGM was, in terms of a global annual mean temperature anomaly
relative to the modern pre-industrial climate. Their surprising response
was, "I haven't a clue". Of course this wasn't quite true, as the
person in question was surely confident that the LGM was at least a
degree colder overall than the preindustrial state, but not as much as
(say) 20C colder. But they weren't prepared to specify a global average
value, because the available proxy data were few and far between, and
could not be considered to give good global coverage.
When we looked at the literature, we found plenty of analyses of proxy data on a regional basis (such as SST in the tropics, and various polar ice cores), but not much relating to a global average value. Some people (including Jim Hansen) have given rough estimates of about 5-6C, and we used similar arguments to give an estimate of 6C when we needed one for this paper. But it was a bit hand-wavy. As for modelling results, in 2007 the IPCC AR4 gave an estimate of 4-7C colder than pre-industrial, which was based directly on GCM simulations from the PMIP2 project.
Around the same time, various people (including us here) started more formally combining models with proxy data, by running ensembles with different parameter values, and came up with similar estimates (e.g. here and here). So, a consensus view of around 5-6C as a best estimate seemed well established.
Since then, several major compilations of proxy data have been published (most notably, MARGO for the ocean, and Bartlein et al
for land), and although the MARGO authors did not generate a global
mean estimate, there were strong hints that their data set was a bit
warmer than the previously prevailing interpretation of ocean proxy
data, especially in the tropics. When Andreas Schmittner and colleagues
published their Science paper last year, their climate
sensitivity estimate made the headlines, but it was actually their LGM
reconstruction that was more immediately eye-catching to us. They fitted
a simple(ish) climate model to these most recent and comprehensive
proxy syntheses, and came up with a global mean temperature anomaly of
only 3C (with an uncertainty range of 1.7-3.7C), which is far milder
than most previous estimates. Irrespective of the resulting sensitivity
estimate (which we'll return to later), such a warm LGM would be hard to
reconcile with GCM simulations. Therefore, we thought it would be worth
considering the LGM temperature field as a stand-alone problem. One
obvious weakness of Schmittner et al's paper is that due to
computational limitations, they only used a fairly simple climate model,
which didn't seem to fit the data all that well.
The dots are the proxy data points. Overall, the main features are very much as expected. Some of the smaller details are probably just noise, like the apparent "hole" in the top plot in the Southern Ocean at around 70S 30W, and also the N Pacific. The method generates uncertainty estimates which are particularly large in these regions. The slight warming in the Arctic SST (under thick sea ice) is in accordance with the proxy
reconstructions; that in the south is more speculative, as there are no data there.
What this means for climate sensitivity is left for a future post (some readers may be ahead of me at this point)...
We
first tested the simplest method we could think of: direct smoothing of
the data, as is commonly performed for modern temperature data by the
likes of GISS and NCDC. However, the paleoclimate data are much more
sparse and noisy than modern data, and the results looked pretty messy.
Worse, when we tried sampling "pseudoproxy data" from GCM results (i.e.,
taking data from them in the same locations as the real proxy data) and
smoothing these into fields to see how well we could recover the whole field
of GCM output, the results from this process had a huge overall bias.
The basic reason for this is that there are no proxy data available in
the areas which were covered by huge ice sheets back then but which are
now bare (particularly North America and Scandinavia), and it is these
regions which exhibit the largest anomalies for the LGM. So although we
could smooth the real proxy data and obtain an estimated global mean
temperature anomaly of 3.2C, we also knew this could easily be biased by
a degree (or more, or less) compared to the real temperature.
Bias-correcting this (based on the biases
generated by the pseudo-proxy experiments) gives an estimate of
4.1+-0.9C cooling. An ugly feature of the smoothing is that the spatial
pattern generated is completely unreasonable, not showing any of the
extreme cooling that must have occurred (simply due to direct albedo and
altitude effects) over the large ice sheets that were in existence
then.
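
For anyone who wants the gist of that pseudoproxy test in code form, here is a minimal Python sketch. Everything in it is invented for illustration - the 2-degree grid, the Gaussian smoothing radius, the toy "truth" field with its extra high-latitude cooling, and the proxy site locations are all stand-ins, not the data, model fields or settings actually used in the manuscript:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 2-degree grid
    lats = np.arange(-89.0, 90.0, 2.0)
    lons = np.arange(0.0, 360.0, 2.0)
    LAT, LON = np.meshgrid(lats, lons, indexing="ij")

    def global_mean(field, lat2d=LAT):
        """Area (cos-latitude) weighted global mean."""
        w = np.cos(np.deg2rad(lat2d))
        return np.sum(field * w) / np.sum(w)

    def smooth_from_points(plat, plon, pval, radius_km=2000.0):
        """Gaussian (distance-weighted) smoothing of scattered points onto the grid."""
        R = 6371.0  # Earth radius, km
        out = np.empty(LAT.shape)
        for i in range(LAT.shape[0]):
            for j in range(LAT.shape[1]):
                # great-circle distance from grid cell (i, j) to every proxy site
                d = R * np.arccos(np.clip(
                    np.sin(np.deg2rad(LAT[i, j])) * np.sin(np.deg2rad(plat))
                    + np.cos(np.deg2rad(LAT[i, j])) * np.cos(np.deg2rad(plat))
                    * np.cos(np.deg2rad(LON[i, j] - plon)), -1.0, 1.0))
                w = np.exp(-0.5 * (d / radius_km) ** 2)
                out[i, j] = np.sum(w * pval) / np.sum(w)
        return out

    # Toy "truth": 4C cooling everywhere plus strong extra cooling poleward of
    # 55 degrees, standing in for the ice-sheet regions with no proxy coverage.
    truth = -4.0 - 12.0 * (np.abs(LAT) > 55.0)

    # Pseudoproxy sites only equatorward of 55 degrees, mimicking the coverage gap
    nproxy = 300
    plat = rng.uniform(-55.0, 55.0, nproxy)
    plon = rng.uniform(0.0, 360.0, nproxy)
    ii = np.abs(lats[:, None] - plat).argmin(axis=0)
    jj = np.abs(lons[:, None] - plon).argmin(axis=0)
    pval = truth[ii, jj] + rng.normal(0.0, 1.0, nproxy)   # add proxy "noise"

    recon = smooth_from_points(plat, plon, pval)
    bias = global_mean(recon) - global_mean(truth)        # positive = reconstruction too warm
    print("bias of smoothed pseudoproxy reconstruction: %+.2f C" % bias)

The toy numbers are deliberately exaggerated; the point is simply that a field smoothed only from sites outside the ice-sheet regions comes out too warm, and the pseudoproxy experiments quantify (roughly) how much too warm, which is the sort of correction that turns the raw 3.2C into the 4.1+-0.9C quoted above.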
So, we quickly concluded that we needed to use a model of some sort to
interpolate between the data points. As an alternative to running an
ensemble of simple climate models, we decided to use the state of the
art PMIP2 model results which were generated by essentially the same
full complexity GCMs which also contributed to the IPCC AR4 (CMIP3
project). We obviously
weren't able to re-run ensembles of these models with different
parameter values, so instead, we just used a simple pattern-scaling
approach to fit them to the data. We think that this is pretty much
comparable to changing the sensitivity parameter of a simple model -
opinions seem to differ on to what extent this is true, but anyway it's
all we could reasonably do with these models.
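
For concreteness, here is a toy Python sketch of one plausible form of pattern scaling: fit a single scaling factor so that a model's LGM anomaly pattern, sampled at the proxy sites, best matches the proxy values, then apply that factor to the model's full field (here reduced to its global mean). The numbers, and the regression-through-the-origin form, are illustrative assumptions rather than the exact regression used in the manuscript:

    import numpy as np

    rng = np.random.default_rng(1)

    # "pattern" stands in for a GCM's LGM-minus-preindustrial anomaly sampled
    # at the proxy sites; "obs" stands in for the proxy anomalies at those sites.
    n = 300
    pattern = rng.normal(-4.0, 3.0, n)                       # model anomalies at the sites
    true_scaling = 1.2                                       # pretend reality is 20% colder than this model
    obs = true_scaling * pattern + rng.normal(0.0, 1.5, n)   # noisy "proxy" anomalies

    # Least-squares scaling factor (regression through the origin)
    alpha = np.sum(pattern * obs) / np.sum(pattern ** 2)

    # Scale the model's own global mean anomaly by the fitted factor
    model_global_mean = -4.0                                 # degrees C, again invented
    print("fitted scaling factor: %.2f" % alpha)
    print("implied global mean anomaly: %.1f C" % (alpha * model_global_mean))

With several models, one would repeat this fit for each pattern and look at the spread of the scaled global means, which is part of where the larger uncertainty mentioned below comes from.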
The results were markedly better, but still not perfect. One minor curiosity, which
is obvious with hindsight, is that this pattern-scaling approach tends
to lead to an underestimate of the overall temperature change. This is
essentially due to regression attenuation which interested readers can
google for more info (or look at our manuscript). Anyway, this gives an overall estimate of about 4.5C, but the uncertainty is actually a bit higher, because different model patterns interpolate into the data voids differently.
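
For the curious, regression attenuation is easy to demonstrate in a few lines of Python: when the predictor (standing in for the model pattern at the proxy sites) contains errors relative to the truth, the fitted slope, and hence the scaled temperature change, is biased towards zero. The variances below are arbitrary choices made purely for the demonstration:

    import numpy as np

    rng = np.random.default_rng(2)

    # The proxies y are driven by the true local anomalies x_true with a scaling
    # of exactly 1, but we can only regress y on a noisy version of x (standing
    # in for errors in the model pattern).  The fitted slope is then pulled
    # towards zero by roughly var(x) / (var(x) + var(pattern error)).
    n = 300
    x_true = rng.normal(-5.0, 3.0, n)             # "true" anomalies at the proxy sites
    y = x_true + rng.normal(0.0, 1.0, n)          # proxies: the true scaling factor is 1
    x_model = x_true + rng.normal(0.0, 2.0, n)    # model pattern = truth + pattern error

    slope = np.polyfit(x_model, y, 1)[0]
    expected_attenuation = 3.0**2 / (3.0**2 + 2.0**2)
    print("fitted slope: %.2f (naive expectation 1.0, attenuation factor ~%.2f)"
          % (slope, expected_attenuation))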
16 comments:
... and what does this mean in terms of glacial-interglacial sea level sensitivity?
I think that's for you to tell us :-)
First instinct is to say that sensitivity of SLR to temp is 50% higher than previously thought (assuming a previous belief of ~6C for LGM cooling). But then, the forcing estimate has not changed, so my second instinct is to say the sensitivity of SLR to radiative forcing is the same.
4C? I don't think the Pap is going to care for this at all, no sirree, not one bit. You could go from quoted to cursed in record time. :)
Dear James,
Gavin Schmidt assured me some time ago that Kohler et al. 2010 QSR is the most up to date treatment of the forcing data. They use Schneider von Deimling et al. 2006's value of 5.8 +/- 1.4 K and their own forcing to obtain a best estimate of equilibrium climate sensitivity of 2.4 K and a range at 95% confidence of 1.4–5.2 K.
Your LGM is nearly 2 K less than SvD's. So, what does this tell us about ECS? We're looking at something close to 2 K, right?
Best regards,
Alex Harvey
Alex Harvey,
OMG! Nic Lewis is right?
Interesting, but what does this imply about the temperatures in iceball earth eras and the forcings needed to escape therefrom?
Is the LGM estimate Land SAT + SST or global land+ocean SAT?
What I now request is the temperature change from LGM to (roughly) the Holocene Climatic Optimum (HCO). The HCO is warmer than, say, the 1750 CE global temperature, but I'm rather unsure by how much.
Alex Harvey,
The answer to that question is already publicly available if you look hard enough, but I won't spoil James and Jules' fun by giving it here.
Paul, it is global SAT, but I'm pretty sure that using SST over the open ocean wouldn't have changed it non-negligibly (though I haven't actually checked). Using ocean SST under the sea ice would certainly make a difference, but that wouldn't be clever.
First of all:
Congratulations on this paper. I think when published, this will be a very important step towards determining a good ECS estimate from the LGM.
And IMO a good ECS estimate is crucial for assessing what implications will follow from our previous and future GHG emissions and thus what policies are needed.
Second of all:
I'm a big fan of open review journals. It makes the whole process so much more transparent for outsiders, which I think is a big advantage.
Thanks James,
I have another question - it's not very relevant to this particular topic but it just occurred to me and I wondered if you knew:
How are near-surface air temperatures calculated from model runs?
As I understand it each model is set up by splitting the world into "boxes" with some horizontal and vertical resolution. From what I've seen CMIP5 models tend to have 17 boxes in the vertical, corresponding to atmospheric pressure levels, not sure about PMIP ones.
The lowest box corresponds to sea level and is below most land so no temperatures are reported over those areas. The first reported temperatures over most land areas occur in the next two boxes up. Highest altitude regions only start reporting at about the fifth box I think.
So, in different regions the near surface occurs at different vertical layers in the model. Also, in the low atmosphere each box relates to about a 500 metre to 1km vertical area. My question is: how does the model specifically output a 2 metre near-surface temperature for each grid cell given the setup described above?
Is it simple interpolation using the output gridded data or sub-grid processes running live in the model to determine what should be the temperature at the specific height?
I would expect that 2m temperature (which is what SAT means) is calculated by interpolation when necessary. I expect that most atmos models will have a topography-fitting vertical coordinate system (ocean models certainly do); this works better than a set of boxes at fixed heights.
Sigma coordinate is the googleable term.
The standard pressure levels are also not direct model variables as such. I think each group can in principle do its own thing here but I'd expect them all to use interpolation. Most models will have more than 17 levels. E.g. MIROC3.2 (an AR4 model) had 20 or 56 levels for its two versions.
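
As a very rough illustration of the idea (a sketch only, not how any particular GCM actually diagnoses 2m temperature; real models typically use surface-layer similarity theory rather than plain linear interpolation, and all the numbers below are made up):

    import numpy as np

    # Terrain-following sigma levels: sigma = p / p_surface, so the lowest
    # model level hugs the topography everywhere.
    sigma_levels = np.array([0.995, 0.98, 0.95, 0.90])   # made-up level placement
    T_levels = np.array([268.0, 267.2, 265.5, 262.8])    # K, made-up temperatures on those levels
    T_skin = 268.5                                        # K, made-up surface (skin) temperature

    # Crude hydrostatic estimate of each level's height above the local surface
    Rd, g = 287.0, 9.81
    z = -(Rd * T_levels / g) * np.log(sigma_levels)       # metres above the surface

    # Interpolate linearly in height between the surface and the model levels
    z_all = np.concatenate(([0.0], z))
    T_all = np.concatenate(([T_skin], T_levels))
    print("2m temperature: %.2f K" % np.interp(2.0, z_all, T_all))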
Ok, the box setup did seem needlessly rigid.
I'm just wondering about the potential for error caused by using simple interpolation. Seems unlikely to make much difference over large scale averages but wouldn't this be a potential problem for accurate reproduction of more intricate details like seasonal or diurnal cycles in individual grid cells or regions?
I'm assuming any interpolation would effectively follow the local lapse rate, but are lapse rates really so stable? How would the calculations account for inversions, for example?
The short answer is...I don't know :-) However, models can have quite sophisticated boundary layer schemes even with limited resolution.