Tuesday, February 01, 2011

Hot....and not

So the surface temperature analyses for 2010 are in, and depending on who you believe, it was either the hottest year (NCDC), or it wasn't (HadCRUT), or it was basically a dead heat (GISTEMP). Even if it wasn't a clear record, it was certainly close. I wonder whether the sceptics will start talking about "no global warming since 2010"?

Realclimate has a nice round-up of various indicators in comparison to model outputs, and I don't have much to add to that. Back in 2007, Smith et al gave a 50% probability of 2010 (and every subsequent year) exceeding 1998 by the HadCRUT measure. Given the current La Nina conditions, it looks like 2011, at least, will stay stubbornly below that threshold. It might be interesting to consider how long it will take before half of the years after 2009 actually exceed the 1998 value (a quick way of keeping score is sketched below). Just for fun, I've roughly updated the plot in their paper with more recent obs (annual means, blue dots in the plot below), but only by hand, so I'm not confident it is very accurate. I would say it is too early to be sure they are wrong: the obs are persistently at the low end of their predicted range, but it's only been a few years. Even with the La Nina, there probably isn't a great chance of next year (or any year soon after) falling outside their 90% range, but I can't see temperatures reaching the upper half of their predicted range in the next few years either.
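For anyone who wants to keep score at home, the bookkeeping is trivial; here is a minimal Python sketch, where the anomaly values are placeholders rather than real HadCRUT numbers (so don't quote the output):

```python
# Count how many post-2009 annual means have beaten the 1998 HadCRUT value.
# NB: the anomaly values below are illustrative placeholders, NOT real data.
anomalies = {
    1998: 0.52,   # stand-in for the 1998 record value
    2010: 0.50,   # stand-ins for post-2009 annual means
    2011: 0.44,
}

ref = anomalies[1998]
recent = {yr: t for yr, t in anomalies.items() if yr > 2009}
beaten = sum(t > ref for t in recent.values())

# Smith et al's forecast implies this fraction should head towards ~50%.
print(f"{beaten} of {len(recent)} post-2009 years exceed 1998 "
      f"({beaten / len(recent):.0%})")
```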

Keenlyside et al, on the other hand, looks substantially worse, given that their error bars are for the decadal average: only a single year has even come close to that level, and their initialised forecast performs even worse than the free run with no assimilation! Of course it was clear that their forecast was wrong as soon as it was published. But hey, it got headlines and 140 (Google Scholar) citations so far, which of course is what really matters these days...



I spotted that Vicky Pope described the 2010 temperature as a "dead heat" with 1998, despite their analysis coming in about 0.05C colder. I would guess that when the temperature actually does beat the 1998 value by a similar margin, she won't put it in quite the same terms :-) I don't think there is any need to talk things up; the evidence for ongoing warming is quite clear enough as it is. Whatever the details of 2010 beating the HadCRUT record or not, it doesn't change the fact that the world continues to warm at more or less the expected rate.

One place where I do have a nit to pick in RC's post is their explanation of the multimodel mean not providing a perfect fit to the data. They excuse this (as if it needed excusing) by pointing out that the models are merely an "ensemble of opportunity". However, even if it were a carefully designed ensemble that perfectly described our uncertainty, the ensemble mean would still not match the truth; it would inevitably be "biased" relative to some future observation (although we could not know in advance in which direction the bias would lie). My point is that the mere fact that the ensemble mean does not match incoming observations of reality - even ignoring the fact that these observations are themselves imperfect - tells us nothing about the design and adequacy of the ensemble as an indication of uncertainty. It is not possible, even in principle, to design an ensemble with this "truth-centred" property, even in simplistic idealised "perfect model" settings. It is therefore not meaningful to use the mean-observation mismatch as a means of assessing (or criticising) the ensemble.
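To make that concrete, here is a toy simulation of the best possible case: an ensemble that is "perfect" in the sense that the truth is statistically indistinguishable from the members, i.e. all are drawn from the same distribution (a Gaussian here purely for convenience). Even then, the RMS distance between the ensemble mean and the truth is sigma*sqrt(1 + 1/n), which does not vanish however large the ensemble:

```python
# Toy demonstration: even a perfectly-designed ensemble, with truth drawn
# from the same distribution as the members, has a mean that does not match
# the truth. Its RMS distance from truth is sigma*sqrt(1 + 1/n) > sigma.
import math
import random

random.seed(0)
n, sigma, trials = 20, 1.0, 100_000

total = 0.0
for _ in range(trials):
    truth = random.gauss(0.0, sigma)
    members = [random.gauss(0.0, sigma) for _ in range(n)]
    total += (sum(members) / n - truth) ** 2

print("simulated RMS(mean - truth):", round(math.sqrt(total / trials), 3))
print("theory, sigma*sqrt(1 + 1/n):", round(sigma * math.sqrt(1 + 1 / n), 3))
```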

24 comments:

Anonymous said...

Perhaps it will be a dead heat for HadCRUT after the buoy SST corrections:

http://www.reportingclimatescience.com/news-stories/article/met-office-to-revise-global-warming-data-upwards.html

James Annan said...

hmmm...I'd be surprised if this actually brought 2010 up to 1998...

toto said...

Your modesty prevented you from linking to this post about why the multi-model mean will always be "better" than the average model, and why it doesn't mean that much (I liked the "climate of Mars" bit).

Or is it just that you're ashamed at misspelling Pythagoras? ;)
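(For reference, the "Pythagorean" result in question is presumably the standard decomposition - reconstructed here from memory, not quoted from the linked post - which holds for any truth T and ensemble members x_i with mean x̄:

```latex
\frac{1}{n}\sum_{i=1}^{n}(x_i - T)^2
  = (\bar{x} - T)^2 + \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2
```

so the squared error of the ensemble mean can never exceed the average squared error of the members, with equality only when the ensemble has zero spread.)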

James Annan said...

toto, it's all Greek to me :-)

Unknown said...

"Dead heat" is apt with the JMA analysis. (But, regrettably, I do not find in their document how they handled data-void grid boxes.)

Anonymous said...

How did the actual forcings (both anthropogenic and natural) for 2000-2010 compare to the A1B scenario?

Would hindcasts based on updated historical forcings alter the model ensemble significantly?

James Annan said...

Ben Santer argued at the AGU that a direct splice of A1B onto the historical run was not quite right, mostly (?) due to solar forcing, which is generally just held flat in the scenario runs rather than cycling on an 11y basis. He said (IIRC) that a more accurate forcing series would slightly lower the modelled trend and bring it closer to obs. TBH I'm not really convinced it is that big a deal; he seemed particularly concerned about the mean being wrong, but as I've explained (in tedious detail) that isn't really anything to worry about in the first place.

Anonymous said...

The last time I looked at this (some time in 2009), the obs showed about 80% of the warming in the model mean, relative to the 1980-99 baseline.

If that continued to 2030, then we'd have ~0.5C of warming, instead of the ~0.65C projected for 2011-30.
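(The arithmetic is just the 80% ratio applied to the projected warming:

```latex
0.80 \times 0.65\,^{\circ}\mathrm{C} \approx 0.52\,^{\circ}\mathrm{C} \approx 0.5\,^{\circ}\mathrm{C}
```

rounded to one decimal place.)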

Of course, it will be interesting to see if HadCRUT and GISTEMP continue to diverge. Or will HadCRUT be overhauled to account for its current de facto underestimate of Arctic warming? It would be good if they did that before the IPCC cutoff.

Anonymous said...

James,
Actually, the GISS model does include the 11-year solar cycle, but I think it's the only one that does. In any case, the actual solar forcing was even lower than the assumed one, 11-year cycle or no, so a hindcast should reflect this in recent years even more than a "nominal" 11-year cycle would.

OTOH, maybe the historical anthropogenic forcings would go the other way, so who knows. Certainly not me - that's why I'm asking. :>)

James Annan said...

I think the new runs will start to appear in a few months. The hindcast forcing is slightly changed and runs to 2005 or 2010 (I think there is some sort of overlap period). So we will just have to wait and see! I'm sure any changes will be quite subtle.

Anonymous said...

"... it doesn't change the fact that the world continues to warm at more or less the expected rate."

Was that the conclusion of Michaels et al that caused all the kerfuffle last year? Somehow, Chip Knappenberger gave a distinctly different impression. (Sorry, couldn't resist).

James Annan said...

That's a fair cop :-)

I think it's fair to say that the warming over the past decade(ish) has been towards the low end of the model range, though I haven't checked whether that is still true if you include 2010. Michaels et al tried to spin that result too hard, which made it difficult to publish; they seemed more interested in getting a controversial soundbite than actually getting published. Mind you, the reviewers were also rather picky considering the analysis was basically correct in principle. (I've had worse reviewers for some of my papers; it was nothing outrageous.)

Chip Knappenberger said...

I am working on an update through 2010...

I suppose if I heap praise on the climate models' performance, the road to publication will be paved with gold. Maybe something like... "the models themselves undoubtedly work great; it is just that we are neither doing a very good job at predicting the inputs to the models nor at measuring the real-world variables to compare with the model output."

In our submission, we, of course, did include these issues, but, I guess, made the mistake of also suggesting that some of the problem may actually lie within the models themselves. Silly us.

-Chip

Anonymous said...

James:
"I think it's fair to say that the warming over the past decade(ish) has been towards the low end of the model range"

I agree, and I said as much above (and in my "smoothed" projection post at Deep Climate last year). One year close to the model mean is not going to change that. I would add the distinction that the important metric is the decadal warming (i.e. decade-over-decade, or relative to a baseline), not the linear "trend" *within* the last decade, which picks up more of the "noise" from natural variation.
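To illustrate the distinction with a quick sketch (the 0.02 C/yr underlying trend and 0.1 C noise level are invented for illustration, not estimated from any dataset): both quantities target the same warming rate, but the within-decade trend picks up far more of the noise.

```python
# Synthetic illustration: decade-over-decade warming (difference of decadal
# means) vs. an OLS trend fitted *within* the last decade only.
# Trend (0.02 C/yr) and noise level (0.1 C) are invented for illustration.
import random

random.seed(1)
trend, noise = 0.02, 0.10
temps = {y: trend * (y - 1991) + random.gauss(0.0, noise)
         for y in range(1991, 2011)}

d1 = [temps[y] for y in range(1991, 2001)]
d2 = [temps[y] for y in range(2001, 2011)]
decadal = sum(d2) / 10 - sum(d1) / 10        # decade-over-decade warming

xs = list(range(2001, 2011))                 # OLS slope, last decade only
xbar, ybar = sum(xs) / 10, sum(d2) / 10
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, d2))
         / sum((x - xbar) ** 2 for x in xs))

print(f"decade-over-decade:      {decadal:.3f} C (truth 0.200 C)")
print(f"within-decade trend x10: {slope * 10:.3f} C (truth 0.200 C)")
# Across repeated draws, the within-decade estimate has roughly 2.5x the
# RMS error of the decadal-mean difference, i.e. it sees more "noise".
```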

I think the reviewers were absolutely right to insist that Michaels et al not "spin that result too hard".

If Chip Knappenberger really wants to rebut James's assertion that he and Michaels "seemed more interested in getting a controversial soundbite than actually getting published", there is a very easy way to do it.

Unfortunately for Michaels and Knappenberger, there is no longer a complaisant editor, like Chris de Freitas at Climate Research in the good old days, who will let them get away with that sort of nonsense.

Chip Knappenberger said...

DeepClimate,

My rebuttal to James' assertion is that what we wrote in the paper was not so over the top as to scare him off. Perhaps we focused too hard on potential climate model shortcomings as a likely cause of the dearth of recent warming. In future submissions, it would seem wise to soften that stance.

Still, as James mentioned, there was a lot of good stuff in the paper which would have been useful for making model/observation comparisons... or, at the very least, would serve as a good starting point for such comparisons.

What I learned from the experience was that there is a heck of a lot less scrutiny placed on model/observation comparisons that are favorable than on those that are not so favorable. Everything but the kitchen sink was paraded out to potentially explain why we got the results that we did (most of these explanations we had already mentioned in our paper). In favorable comparisons, those same potential influences are rarely brought up (at least by model supporters), although it seems that they should be equally applicable.

-Chip

Hank Roberts said...

Watts thinks it's news:

A Cherry-Picker’s Guide to Temperature Trends Update: Warming Crisis Not
Posted on February 8, 2011 by Anthony Watts
by Chip Knappenberger at Master Resource
One of a series:
http://www.google.com/search?q=masterresource+knappenberger

James Annan said...

Heh...he's straining to cope with 2010...mind you 2011 will be cooler and probably strengthen the claim.

Ron Broberg said...

... internally generated natural variability ...

thanks for the ptr, JA.

Ron Broberg said...

And - just for laughs - some other predictions:
http://rhinohide.wordpress.com/resources/paleoclimate/anthropocene/predictions/

Anonymous said...

The JMA analysis is mentioned above by Kooiti MASUDA. I am glad to see this. I thought there was a separate Japanese analysis, but it is rarely mentioned, and at another blog someone told me "No, you must be thinking of JAXA, which is not Japanese." Would anyone be so kind as to tell me just a little about it? Is there another set of surface stations? Or what else makes it unlike NASA GISS?

Many thanks

Pete Dunkelberg

James Annan said...

JAXA is definitely Japanese (google it), and they produce one of the widely-used sea ice estimates based on satellite obs (which, having googled JAXA, you will not find surprising).

The JMA surface temperature analysis must be based on broadly the same set of obs as all the others, because that is all there is. I don't know the details of the analysis, nor why it is not more widely used; I speculate it may be a relative newcomer that does not have such a long history.

Magnus said...

Might be the spam filter... however, this is worth reading:
http://www.nrcse.washington.edu/NordicNetwork/reports/Significance.pdf

James Annan said...

Not really filtered, just old posts get automatically moderated (though I suppose this is really to cut down on random spam).

Thanks for the ref, at first glance it looks reasonable... as did the Ambaum paper :-)

Ron Broberg said...

JMA page
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/explanation.html