Thursday, March 27, 2008

More Chylek on sensitivity

Here's another paper that Chylek referred to in his AGU presentation (I blogged the first one here). At that time, his description was vague enough that it wasn't clear where the errors were. But given the full paper, it only took me a few minutes to diagnose some serious (no, fatal) problems.

The basic premise isn't wholly unreasonable. Their overall goal is to estimate climate sensitivity from the paleoclimate record. Essentially, the paper hinges on the premise that there are two major unknowns to be estimated: one is climate sensitivity, and the other is the forcing effect of a given atmospheric dust (aerosol) loading. They solve for both factors simultaneously by using two intervals in the paleoclimate record over which the changes in forcings, and the temperature response, are reasonably well known. The intervals they use are (1) the strong warming from the Last Glacial Maximum (21 ka before present) to the Holocene, and (2) the rather smaller temperature change between the LGM and a warmer period observed at about 41 ka BP. The changes in the major GHG concentrations, in dust concentration, and in temperature across these two intervals are all observed in the Vostok ice core. Their analysis generates a very large forcing estimate for dust (about 3 times as large as the standard estimates) and simultaneously a small estimate for climate sensitivity of 1.3-2.3C (95% probability interval).
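
Schematically (my notation, not necessarily theirs), the two-interval approach amounts to solving two equations in two unknowns, the sensitivity lambda and a dust forcing coefficient k:

```latex
\Delta T_1 = \lambda \,(F_1 + k\,\Delta D_1), \qquad
\Delta T_2 = \lambda \,(F_2 + k\,\Delta D_2)
```

where F_i is the (known) non-dust forcing change and Delta D_i the dust loading change over interval i. Taking the ratio of the two temperature changes eliminates lambda and pins down k, after which lambda follows from either equation. This is why the method is so sensitive to the ratio of the two temperature differences.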

The most glaring problem is probably in the bizarre and strongly biased interpretation of the temperature data. I've reprinted their figure here, with some colour annotations of my own:


The main feature to focus on is the solid wiggly line showing temperature from the Vostok core. The authors claim a temperature difference of 10.2C for LGM to Holocene, and 4.8C for LGM to 41 ka BP. They obtain these by looking for the very lowest single data point in the LGM epoch and comparing it to the highest peaks in the other eras, with no attempt to justify this choice. I don't know exactly which data points they used, but based on the numbers they quote it must have been something like the spikes I have circled in red.

These values are wholly unrepresentative of the average temperature over the broader time scale, and thus badly misrepresent the situation by suggesting that the 41 ka warming (from the LGM baseline) is almost half as big (47%, to be precise) as the Holocene warming. From the graph, the LGM temperature is actually hovering around 20 (on their scale on the LHS, in units of 0.1C), the Holocene is clearly below 100 on average (say 95), and the "41ka" peak is actually about 40 if we take an average over several thousand years. I've added horizontal magenta lines to indicate roughly where I think these means lie. Note that they are already averaging the dust measurements over 5 points (~5000 years), and the GHG observations have a similar resolution, so using a temperature average over several thousand years would be entirely consistent and reasonable. To put it another way, if cherry-picking single data points were meaningful, they could equally have cherry-picked the points I've circled in blue, in which case they would have concluded that the dust forcing is strongly negative, which completely destroys the confidence interval claimed in their publication.

If we (reasonably) deduce that this high-frequency variation in temperature is actually a sign of some internal (perhaps regional) variability, rather than a global response to (nonexistent) forcing changes, then the obvious thing to do is take an average over a longer interval, as I've indicated with the magenta lines. The only problem is that as soon as we do that, the need for a strong dust forcing simply melts away, and the resulting climate sensitivity estimate of 2-3.9C is entirely unremarkable (and completely consistent with what we already know). That would still be tighter than many other published estimates, but they've openly ignored several sources of uncertainty in their calculation.
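
To see how strongly the solved-for dust forcing depends on the choice of temperature differences, here's a minimal sketch of the two-interval solve. The GHG/albedo forcings and dust-loading changes below are invented placeholders (the paper's actual inputs aren't quoted here), so the absolute numbers are meaningless; only the two pairs of temperature differences come from the discussion above, and the point is purely qualitative: swapping the single-point values (10.2C, 4.8C) for multi-millennial averages (roughly 7.5C and 2.0C) collapses the inferred dust coefficient.

```python
def solve_two_intervals(dT1, dT2, G1, G2, D1, D2):
    """Solve dT_i = lam * (G_i + k * D_i), i = 1, 2, for (lam, k).

    Taking the ratio r = dT1/dT2 eliminates lam and leaves a linear
    equation for the dust coefficient k:
        r * (G2 + k * D2) = G1 + k * D1
    """
    r = dT1 / dT2
    k = (G1 - r * G2) / (r * D2 - D1)  # dust forcing per unit dust change
    lam = dT1 / (G1 + k * D1)          # sensitivity per unit total forcing
    return lam, k

# Hypothetical non-dust forcing and dust-loading changes (made up):
G1, G2 = 2.5, 0.5   # LGM->Holocene, LGM->41ka
D1, D2 = 1.0, 0.8

# Single-point (cherry-picked) temperature differences from the paper:
lam_a, k_a = solve_two_intervals(10.2, 4.8, G1, G2, D1, D2)
# Multi-millennial averages read off the figure:
lam_b, k_b = solve_two_intervals(7.5, 2.0, G1, G2, D1, D2)

print(f"cherry-picked dT: lambda = {lam_a:.2f}, dust k = {k_a:.2f}")
print(f"averaged dT:      lambda = {lam_b:.2f}, dust k = {k_b:.2f}")
```

With these placeholder inputs, moving from the single-point to the averaged temperature differences shrinks the dust coefficient by a factor of several while nudging the sensitivity upward, which is exactly the pattern described above: once the averaging is done honestly, the need for an anomalously strong dust forcing melts away.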

There are a few other highly dubious things in the paper (such as their treatment of albedo, and ignoring changes in orbital parameters), but it's hardly worth bothering with lesser nit-picks when the temperature analysis is so clearly flawed. Thankfully, the paper seems to have been largely ignored, but I see that World Climate Report embarrassed themselves by puffing it. That's a shame as I was actually going to write something moderately nice about them (OK, I will anyway, probably over the weekend).

12 comments:

J said...

"I was actually going to write something moderately nice about [World Climate Report] (OK, I will anyway, probably over the weekend)."

Say something nice about WCR?

At least wait 'til Tuesday ... that's the one day of the year when one might be able to justify doing something so silly.

:-)

J said...

I forgot to add ... nice post.

Chip said...

And to think James, I (nearly) always have moderately nice things to say about you :^) !

-Chip
www.worldclimatereport.com

Heiko said...

A while back I saw an interesting post by Tamino on boreholes, which referenced this database:

http://www.geo.lsa.umich.edu/climate/core.html

I was interested in arctic temperatures, so I checked out the Northernmost boreholes in N. America and hit these two graphs:

http://www.geo.lsa.umich.edu/climate/RECONSTRUCTION/CA-290-1.html

http://www.geo.lsa.umich.edu/climate/RECONSTRUCTION/CA-066-0.html

One has a downward trend over the last 500 years of 2.5C and the other an upward trend of 3C.

Are boreholes really rubbishy proxies?

Or does local temperature history vary massively from the world trend? If the latter, how can we use a single proxy for a single place to stand in for world temperature?

James Annan said...

Heiko,

I don't know much about boreholes but I think they are probably "rubbishy proxies", to use your term. It must be hard to reliably invert these very small temperature deviations from a poorly-known background steady state (i.e. the flow of geothermal heat through a medium whose properties are themselves poorly known). For Vostok there is a whole lot of broadly consistent evidence, at least on the longer time scale - and the authors allowed for ±0.5C of uncertainty in the LGM to Holocene temperature change, which is probably in the right ballpark.

Chuck said...

Won't looking for natural global signals in S. Hemisphere dust records in the interval after 41KA be fucked up by the contemporaneous anthropogenic aridification of central Australia?

EliRabett said...

IEHO boreholes are not very good proxies, but they may be the only game in town to go way back (beyond the ice cores). Again, you have to differentiate between useful and perfect.

ErikS said...

OK, I may be wrong here (stupid Swede...), but haven't C&L got the equation for the radiative forcing totally wrong? They are including the CO2 feedback in the forcing, which seems strange to me. Shouldn't it be lambda = deltaT/(external forcing) rather than lambda = deltaT/(external forcing + CO2 feedback + ...)?
Please correct me if I'm wrong, or give me some award if I'm right...
/ErikS

James Annan said...

Erik,

The forcing/feedback distinction is somewhat artificial, but their usage is standard. We want the response of the atmosphere-ocean system to a radiative forcing, so that includes the CO2 change.
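
In this usage, the sensitivity is defined against the total radiative perturbation, with only the fast feedbacks folded into lambda (a schematic in my notation, not the paper's exact formula):

```latex
\lambda = \frac{\Delta T}{\Delta F_{\mathrm{CO_2}} + \Delta F_{\mathrm{dust}} + \Delta F_{\mathrm{ice}}}
```

Slow components (GHG changes, continental ice sheets) are counted as forcings here because they are externally prescribed from the ice-core record; the fast feedbacks (water vapour, clouds, sea ice) are what lambda itself encapsulates.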

ErikS said...

I think I now get it with the CO2 (because the CO2 concentration is the parameter being set to a certain value in the model estimates), but the albedo change? Should that be included in the RF, as C&L do? As I used to understand things it was positive feedbacks such as albedo and water vapor which made the climate sensitivity high. If you count these into the RF, what is then making the climate sensitivity high?

James Annan said...

The albedo they are talking about is the slow growth and melt of large continental ice sheets over the 1000y time scale. We now have no really big ice sheets to melt (and what we do have are very polar, hence relatively little albedo effect). A change in sea-ice extent, which we will see (indeed are already seeing), is counted as a feedback because it is fast.

Dust and vegetative albedo are perhaps the hardest to call, as they really ought to change quite rapidly in response to a climate change. But given that so much of the world's surface is under direct anthropogenic influence we can probably dominate these changes with direct action irrespective of what the "natural" response would be. Also, they are expected to be fairly modest, so there is little penalty for ignoring them in future climate change experiments.

ErikS said...

OK, thanx for clarifying this.