Thursday, April 04, 2013

Decadal prediction part 3

The first one isn't really a decadal prediction, but having finally got round to downloading the HadCRUT4 data and plotting it out, it seemed an obvious comparison to make. The pic is the famous CMIP3 model hindcast and projections from the IPCC AR4 (it's Fig 5 in the SPM), with real data plotted on top (appropriately anomalised). I've blown up the relevant portion on the left:



As you can see, it looks pretty good up to about 2000. In fact, it looks ok up to about 2007, but since then has perhaps started to go a little bit pear-shaped. The yellow line should be ignored, of course - it's from the "constant composition" integrations, which hold the atmospheric composition fixed at year 2000 values. The other colours refer to the three main projections, all of which give basically indistinguishable results over this time scale.

The shaded region only covers ±1σ of the ensemble, and more importantly, I must point out quite clearly that the IPCC explicitly state that this analysis is not intended as a probabilistic prediction, so the fact that reality lies quite clearly outside this shaded area for the last couple of years does not invalidate or falsify any particular prediction. On the other hand, it does suggest that the models are generally warming up a bit too fast. I make that statement here without prejudice as to whether this is due to errors in physics, or forcing, or just blind luck.

Of course, I'm sure most of you will have also seen the equivalent plot with the CMIP5 models on Ed Hawkins' blog, but in that case it is more of an open question as to how much they might have been tuned to recent data. In the case of CMIP3, the "historical" scenario ended in 2000, so even if that period was used to some extent in tuning (and there are strong arguments that it was), data subsequent to that is much less likely to have been used.
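
For anyone wanting to reproduce the overlay, "appropriately anomalised" just means shifting the observations onto the same reference period as the projection plot. Here's a minimal sketch in Python, assuming a 1980-1999 baseline and made-up numbers in place of the actual HadCRUT4 series (the rebaseline helper is purely illustrative):

```python
import numpy as np

def rebaseline(years, temps, ref_start=1980, ref_end=1999):
    """Return temps as anomalies relative to the ref_start-ref_end mean."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    ref = (years >= ref_start) & (years <= ref_end)
    return temps - temps[ref].mean()

# Example call, with a placeholder trend standing in for HadCRUT4 annual means:
years = np.arange(1970, 2013)
temps = 0.017 * (years - 1970)
anoms = rebaseline(years, temps)   # now zero-mean over 1980-1999
```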

As Ed shows, the CMIP5 models are running a little hot too. More amusing is to do a similar evaluation on the latest forecast from Stott et al (inc Hawkins), as published in ERL just recently. These authors did another D&A-type of analysis and concluded that "The upper end of climate model temperature projections is inconsistent with past warming". They also generated their own probabilistic predictions, based on data up to 2010. I originally thought that, in contrast to Allen et al's decadal means, their forecasts were explicitly provided for annual temperatures, in which case we could already evaluate them against 2011 and 2012. As Paul S pointed out in the comments, however, the prediction is again for decadally averaged temperatures, so we can't yet evaluate a true forecast, but we can still see how it is looking based on another 2 years of data:



Blue is HadCRUT4, showing the latest 2 years, and red is the decadal average (up to the latest available 2003-2012 interval, therefore plotted at year 2007.5). Even though they downscaled the model projections, both years since 2010 already lie outside the 5-95% range of the decadal mean prediction (though not by a huge margin). The decadal average is bumping along near the bottom of their forecast range and looks quite likely to lie outside the CMIP5 spread, even though this is already normalised to 1986-2005.
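
For clarity, the red curve is just a 10-year mean of the annual anomalies, plotted at the centre of each window (so 2003-2012 lands at 2007.5). A quick sketch of that calculation, with placeholder data rather than HadCRUT4 itself:

```python
import numpy as np

def decadal_means(years, temps, window=10):
    """10-year means plotted at the centre of each window (2003-2012 -> 2007.5)."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    centres = np.array([years[i - window + 1:i + 1].mean()
                        for i in range(window - 1, len(years))])
    means = np.array([temps[i - window + 1:i + 1].mean()
                      for i in range(window - 1, len(years))])
    return centres, means

yrs = np.arange(1990, 2013)
anoms = 0.015 * (yrs - 1990)                     # placeholder annual anomalies
centres, dec_means = decadal_means(yrs, anoms)   # last point sits at 2007.5
```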

It's not really that surprising in hindsight, since the last year they used (2010) was an El Nino year, and yet they still managed to forecast more warming immediately following. Furthermore, their lower 5th percentile line seems to have a slope of about 0.2C per decade, implying high confidence in at least a maintenance of the recent warming rate.

Incidentally, there is no sign of any El Nino on the horizon, so 2013 isn't likely to be particularly warm either (quite possibly below 2003, meaning the new decadal mean would drop). The current 2-month anomaly has it running a bit above 2012's result but still probably just outside the 5th percentile line in the above pic, though that could easily change. Anyone want to take bets on when (if?) we'll see a year inside the forecast range?

19 comments:

Paul S said...

Ed Hawkins stated on his blog that the bars shown relate to decadal averages rather than annual.

I think the forecasts were made in relation to the observationally-scaled model runs rather than directly attached on the end of observations, so I don't think the 2010 El Nino made much difference.

That the forecasts begin at an offset from HadCRUT4, when they have been spatially-scaled to those same observations, could be taken as an indication that the HadCRUT4 global average is biased low. Ed Hawkins has previously shown that CMIP5 model output masked to HadCRUT4 coverage produces ~0.1°C less warming between 1850 and the present.
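
The coverage-masking comparison mentioned above boils down to taking an area-weighted global mean of the model field only where the observations have data. A rough sketch of that step, with the array layout and the NaN-for-missing convention assumed purely for illustration:

```python
import numpy as np

def masked_global_mean(model_field, obs_field, lats):
    """Area-weighted mean of model_field over cells where obs_field is not NaN.

    model_field, obs_field: 2D arrays (lat, lon); lats: 1D latitudes in degrees.
    """
    weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(model_field)
    valid = ~np.isnan(obs_field)
    return np.sum(model_field[valid] * weights[valid]) / np.sum(weights[valid])
```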

James Annan said...

Oh yes, I can see that now - would have been helpful to mention it in the figure caption...

I'll give the post a quick ninja edit and hope no-one has read it in the meantime :-)

James Annan said...

Since the anomaly period is recent temps, I don't think that can explain the current (immediate future) offset though. Yes I agree that the latest obs can't have affected the results, but that's a bit of a problem with the method, because reality will continue from its own recent history!

C W Magee said...

This is probably a stupid question, but if ice is melting faster than predicted, and temps are going up slower than predicted, can the difference be explained in terms of hat of fusion of water, or is the energy balance off by many orders of magnitude?

C W Magee said...

heat, not hat...

James Annan said...

Not a ridiculous question, but certainly the vast bulk of the heating is in the ocean, so I think it would be easier to explain a slowdown in terms of large ocean heat uptake.

Also, ice decline actually generates a substantial positive feedback in forcing (albedo) to compensate for any heat absorption.
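
A rough back-of-the-envelope check, using round illustrative numbers (a few hundred km³ of extra ice loss per year, and a planetary imbalance of around 0.5 W/m²) rather than anything measured, suggests the latent heat of melting only accounts for around 1% of the total heat uptake:

```python
# Back-of-envelope: energy consumed by extra ice melt vs planetary heat uptake.
# The 300 km^3/yr ice loss and 0.5 W/m^2 imbalance are illustrative round numbers.
LATENT_HEAT_FUSION = 3.34e5     # J/kg
ICE_DENSITY = 917.0             # kg/m^3
EARTH_AREA = 5.1e14             # m^2
SECONDS_PER_YEAR = 3.15e7

ice_loss_volume = 300e9         # m^3/yr (assumed extra melt)
melt_energy = ice_loss_volume * ICE_DENSITY * LATENT_HEAT_FUSION  # ~9e19 J/yr

imbalance = 0.5                 # W/m^2 (assumed energy imbalance)
uptake_energy = imbalance * EARTH_AREA * SECONDS_PER_YEAR         # ~8e21 J/yr

print(melt_energy / uptake_energy)  # ~0.01, i.e. about 1%
```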

C W Magee said...

And Asian smog?

James Annan said...

Yeah, it was pretty bad a few weeks ago, I thought I had developed full-blown hay-fever for a few days, but it seemed to clear up. Hope it didn't get as far as you :-)

Oh, you mean its relevance for global temps. Well, I must admit it is a bit of a puzzle to me, because all the aerosol estimates seem to say that the global load has been steady/decreasing (and that's distinct from the re-evaluation of what the overall effect of a given load is). And yet, China at least seems to have turned into a complete smogfest. I assume the aerosol scientists know what they are doing...maybe some of them will chime in here, hint hint...

Anonymous said...

For what it is worth, John N-G has a prediction of a record warmest year with ENSO neutral conditions.

Paul S said...

It would also have been helpful if their observations plot showed decadal averages for a proper comparison. I've produced a Gonzo overlay, which looks like this. The trajectory appears to fit between the lines quite snugly, although that's dependent on what happens over the next few years.

Yes, I see now the 1986-2005 baseline would reduce the influence of an observational cold bias present through the whole record. I think it could still make a difference to the offset of a few hundredths of a degree though.

Paul S said...

I should point out that my decadal average overlay uses HadCRUT4 data up to 2012.

----------------

On a related note, it's been bugging me for a while that these observational comparisons almost always use modelled near-surface air temperatures over ocean areas rather than SSTs, which is what the observations actually are.

I saw in Geert Jan van Oldenborgh's paper he briefly mentions this but suggests the difference is negligible. When I looked at trend differences in CMIP3 models, global average SAT trends were generally ~0.02-0.03°C/decade greater than SST over 1979-2012. It depends on context but to me that isn't a negligible difference.

I would guess the reason SAT is used over SST is due to the way model data is archived - the SST (tos) data is considered to be located in a different "realm" (ocean) from the SAT (tas - atmosphere) data. Part of this distinction between realms is a difference in grid resolution in the output datasets, even for the same model. It's therefore not a simple matter to conjoin land SAT and ocean SST from a model run.

It's perhaps not directly relevant for this forecast because technically the model SATs will have been scaled to the observed SSTs and therefore can be thought of as relating to model SSTs.
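
The blend itself is trivial once everything is on a common grid; it's the regridding and masking across realms that make it a nuisance. A minimal sketch of the blending step, assuming the fields have already been put on the same grid and a land-fraction field is available:

```python
import numpy as np

def blend_tas_tos(tas, tos, land_frac):
    """Blend air temperature over land with SST over ocean.

    All inputs are 2D (lat, lon) on the same grid; land_frac is in [0, 1];
    missing SST (e.g. land-only cells) is marked as NaN.
    """
    blended = land_frac * tas + (1.0 - land_frac) * tos
    return np.where(np.isnan(tos), tas, blended)  # fall back to tas where SST is missing
```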

Carrick said...

Paul S: On a related note, it's been bugging me for a while that these observational comparisons almost always use modelled near-surface air temperatures over ocean areas rather than SSTs, which is what the observations actually are.

Yes, that's bothered me as well. I suspect that if you're worried about whether the line stays inside the 95% CL range, including all of these tweaks in the analysis really is important.

EliRabett said...

Doesn't the Asian smog have particularly nasty characteristics as opposed to most aerosols?

James Annan said...

I'm not going to say the SST/SAT over ocean thing doesn't have any effect at all, though it is probably quite small.

But I agree it's basically convenience (most times) that leads people to use the latter as the former. Ocean data (CMIP) is a whole new data set in a different directory, and then you have to bother with masking, and different resolution, and mixed ocean/land grid boxes...

David Young said...

I'm wondering if there is any significance to this seeming failure of the models. One can always chalk it up to "natural variations" that are not adequately modeled. The old ploy that we can't include all the "physics." True enough, but not exactly helpful if one wants to try to validate the models against real data. What is your time frame for saying that the models are missing something significant?

I am still troubled by the tropospheric hot spot predicted by the models and seemingly very hard to find in the data. I don't find the argument that wind speed is a more accurate way to determine temperature than a thermometer persuasive. In my experience, wind speed is remarkably noisy. Seems like a desperate attempt to find SOME data that agrees with model predictions.

Paul S said...

Referring back to CMIP models (CMIP5 data is back up at Climate Explorer), SST and over-ocean SAT trends differ quite consistently by ~15% in both CMIP3 and CMIP5 ensembles.

That's not a big difference, but I wouldn't call it small either. Arguments nowadays are taking place over differences at the level of hundredths of a degree, so the common practice of introducing a potential 15% bias into any analysis, without any attempt at accounting for it, seems like it should be under more scrutiny at least.

James Annan said...

JCH, do you mean John N-G is predicting a new record year, despite it being ENSO-neutral, or rather that he's predicting it to be the warmest of the ENSO-neutral years? Presumably the latter, but in that case, what is his definition of "ENSO-neutral"?

Layzej said...

The former - at least for GISTEMP and HADCRUT:

"With almost all the ENSO forecasts pointing toward neutral conditions, my global temperature forecast will reflect the long-term trend during neutral years."

"My GISTEMP forecast for 2013 is +0.70 +/- .09 C. This would be the warmest global anomaly in this data set, breaking the record set in 2010 by +0.04 C. Given the uncertainty range, I rate the odds of breaking the record as 2 in 3."

"My HadCRUTv4 forecast for 2013 is +0.59 +/- .08 C. This too would be the warmest global anomaly in this data set, breaking the record set in 2010 by +0.05 C. Again, the odds of setting a new record are about 2 in 3."

- http://blog.chron.com/climateabyss/2013/01/global-temperature-anomaly-forecasts-january-2013/

The satellite records tend to exaggerate ENSO so they make it more difficult to set a record outside of an El Nino.

Layzej
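
Incidentally, the quoted 2-in-3 odds are roughly what falls out if the ±0.09 is read as a 1σ uncertainty on the 0.70 GISTEMP forecast against the 0.66 record; that interpretation of the interval is an assumption, but the arithmetic is easy to check:

```python
from math import erf, sqrt

def prob_exceed(mean, sigma, threshold):
    """P(X > threshold) for X ~ Normal(mean, sigma)."""
    z = (threshold - mean) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# 0.70 +/- 0.09 treated as mean +/- 1 sigma; 2010 record = 0.70 - 0.04 = 0.66
print(prob_exceed(0.70, 0.09, 0.66))   # ~0.67, i.e. about 2 in 3
```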

James Annan said...

Thanks, I will have more to say on that shortly...