Sunday, March 01, 2009

That MIT report in full, in brief

OK, a few people have asked me about this study. So far it's only a department report but no doubt there is a paper in preparation somewhere. There is stuff about emissions scenarios which I am not going to comment on here, instead focussing on the physical climate science.

Executive summary: I don't believe it. It shares several of the same old problems that all of the probabilistic prediction work exhibits. I hope that the GCM-based community realises what a strong threat this sort of analysis poses to their credibility, and also understands the work well enough to interpret it realistically. Basically, this report claims that all the GCMs are rubbish, and indeed (with high probability) horribly biased in the same direction, even though each GCM is validated in far more detail than any of the MIT model instances has been.

First the good bit: they have abandoned the uniform prior for sensitivity, although no reasoning is given. Of course we all know why they did this but I don't mind admitting it rankles to see absolutely no acknowledgment of the substantial amount of time and effort I put into correcting this widespread misunderstanding over the past few years (most recently here, still awaiting reviews). I do realise the "expert prior" they use predates my involvement in the field, and indeed this same group have used it before, but previously results generated in this way have always been deprecated in comparison with those based on a uniform prior (which they have also always used, and presented as the main analysis). The IPCC even refused to mention their previous expert prior results (which I had specifically requested to offset some of the uniform prior nonsense).
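The effect of the prior choice can be seen in a toy Bayesian update (all numbers here are illustrative, and nothing is taken from the MIT analysis): a prior that is uniform in sensitivity S is far from "ignorant", because it corresponds to a strongly informative prior on the feedback parameter λ = F_2x/S, and it inflates the high-sensitivity tail of the posterior relative to a prior that is uniform in λ.

```python
import numpy as np

# Grid of equilibrium climate sensitivity values (K per CO2 doubling)
S = np.linspace(0.1, 10.0, 1000)
dS = S[1] - S[0]

# Toy Gaussian likelihood centred on 3 K (illustrative only)
like = np.exp(-0.5 * ((S - 3.0) / 1.5) ** 2)

# Prior 1: uniform in S on [0.1, 10]
prior_uniform = np.ones_like(S)

# Prior 2: uniform in feedback lambda = F_2x / S, which transforms
# to p(S) proportional to 1/S^2 (the Jacobian of the change of variables)
prior_feedback = 1.0 / S**2

def posterior(prior):
    p = prior * like
    return p / (p.sum() * dS)           # normalise on the grid

def tail_prob(p, threshold=4.5):
    return p[S > threshold].sum() * dS  # P(S > threshold)

p_u = posterior(prior_uniform)
p_f = posterior(prior_feedback)

# The uniform-in-S prior assigns much more posterior mass to high S
print(tail_prob(p_u), tail_prob(p_f))
```

The same likelihood gives quite different tail probabilities under the two priors, which is the whole point: the "result" for the high-sensitivity tail is largely an artefact of the prior, not the data.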

Now the not so good: despite apparently accepting that uniform priors do not represent ignorance for equilibrium climate sensitivity, they arbitrarily choose uniform priors for the other climate model parameters! Of course the only reason I had previously focussed on sensitivity was that this was a parameter of topical interest which had featured in the literature: I could just as easily have focussed on ocean diffusivity, which controls the rate of heat uptake and thus influences the warming rate....and looking at ocean heat uptake (diapycnal diffusion) in more detail:-

Some time ago I suggested to (co-author on this report) Chris Forest that both the extremely high, and extremely low values of diffusion that they use are implausible because these would have a strongly detrimental influence on the overall ocean circulation and structure. While we do not have very good data on ocean warming (which is the only ocean data the MIT people use) we have very extensive observational data on the basic structure of the ocean, such as general circulation and stratification (surface to bed temperature difference). We already showed several years ago that this sort of observational data of the ocean structure can provide a very good constraint on overall ocean mixing and transient response (Hargreaves et al 2004) although this was only an idealised study so I do not trust those results in quantitative detail. So it is interesting to see the following statement:

"However, changing the diapycnal coefficient also alters the ocean circulation, in particular the strength of North Atlantic overturning (Dalan et al. 2005a). Unfortunately it appears infeasible (certainly without changes to parameterizations in the 3-D models) to vary the heat uptake over the full range consistent with observations during the 20th century (Forest et al. 2008) and at the same time to maintain a reasonable circulation. "

I believe the correct way to interpret this situation is that it is fortunate that the extreme ocean diffusion parameter values tested give unrealistic behaviour because it implies that this analysis provides a tighter constraint on ocean mixing than we can glean from the very limited observation of warming rate. If they had taken advantage of this understanding, their results might have been more credible. As it is, they are using parameter values that they know are strongly inconsistent with the real behaviour of the climate system.

Another problem is in the way they handle volcanic forcing. There have been multiple instances of large volcanic eruptions over the well-observed historical period, and each time this has resulted in a brief 1-2 year cooling spike. It is well-known that this is supportive of both mid-range sensitivity and ocean heat uptake - it is certainly difficult to reconcile both a high sensitivity and low ocean mixing with these observations, as this would result in a much longer and stronger cool period than has ever been seen. However, this MIT analysis only looks at the long-term trend. Over this interval, volcanoes provide a modest cooling (eg pic in this post) due to the preponderance of eruptions in the latter half of the 20th century. The authors explicitly state that in their analysis, the influence of volcanoes is to shift their estimate towards both higher sensitivity and low ocean heat uptake, even though the interannual behaviour should downweight that possibility.
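The interannual argument can be sketched with a toy one-box energy balance model (purely illustrative parameter values, not the MIT model): C dT/dt = F(t) − λT, with λ = F_2x/S, driven by a short volcanic forcing pulse. A high-sensitivity, low-heat-uptake combination produces a markedly deeper and more persistent cooling than a mid-range case.

```python
import numpy as np

def pulse_response(sensitivity, heat_capacity, years=20, dt=0.01):
    """One-box energy balance model: C dT/dt = F(t) - lam*T.
    lam = F2x / S; forcing is a 2-year volcanic pulse of -3 W/m^2.
    All values are illustrative, chosen only to show the qualitative effect."""
    F2x = 3.7                          # W/m^2 per CO2 doubling
    lam = F2x / sensitivity            # feedback parameter (W/m^2/K)
    n = int(years / dt)
    T = np.zeros(n)
    for i in range(1, n):
        t = i * dt
        F = -3.0 if t < 2.0 else 0.0   # volcanic forcing pulse
        T[i] = T[i-1] + dt * (F - lam * T[i-1]) / heat_capacity
    return T

# Mid-range case: S = 3 K, larger effective heat capacity (stronger mixing)
mid = pulse_response(sensitivity=3.0, heat_capacity=10.0)
# High S with weak ocean mixing: deeper dip, slower recovery
hot = pulse_response(sensitivity=8.0, heat_capacity=5.0)

print(mid.min(), hot.min())   # peak cooling in each case
```

Since no cooling of the "hot" magnitude and duration has ever been observed after a real eruption, the short-term response argues against that corner of parameter space, which is exactly the information a trend-only analysis throws away.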

That's enough for now. I do not know for sure how much these flaws influence the final result, but I do know they are worrying enough that I don't trust the results. I do hope that the GCM-using community (including the few who read this) will take up the gauntlet that has been thrown. I can blog all I like about it (and send a few grumbling emails) but there is not a lot more that I can do about it by myself. [FWIW we are extending the previous work I mentioned, but it's a fairly slow process.]

11 comments:

EliRabett said...

We have had a number of exchanges about how putting in physical/observational limits on parameterization improves statistical analysis. You point to a number of excellent examples.

You also point to backing stuff out in the other direction. If a parameterization yields an unreasonable estimate, the parameterization is probably incorrect.

Hank Roberts said...

another?
Urban, N. M., and K. Keller (2009), Complementary observational constraints on climate sensitivity, Geophys. Res. Lett., 36, L04708, doi:10.1029/2008GL036457.

James Annan said...

Thanks Eli and Hank,

Coincidentally, I had just emailed that paper to someone over the weekend. I will blog about it soon...

Magnus Westerstrand said...

Just a thought... that several probably already thought...

Since the ocean is heating "faster", putting heat in the "pipeline", will this mean that as we get closer to the equilibrium balance the ocean will warm relatively slower... and would that affect the CO2 uptake enough to be modelled in such a way?

(slower warming, higher uptake than in a normal warming...)

James Annan said...

Not sure if it quite answers your question, but lower heat uptake means faster warming but also means that we are closer to equilibrium (less committed warming). And conversely for greater heat uptake.

Magnus Westerstrand said...

As usual a bit sloppy from me... thought about it last night... what I was thinking was, relatively, that the ocean would reach an equilibrium faster than land... which would then give a larger decline in CO2 uptake at the beginning of the warming and not as much at the end... However the models are, I guess, built on that physics, so if that is what would happen I guess they would show it.

Magnus Westerstrand said...

That was strange, I got a link here by posting on my blog without linking... I guess it must feel my "active" link list in my sidebar...

Hank Roberts said...

James, did you blog on that Urban and Keller paper yet? Any more to say about MIT?

James Annan said...

Um...I guess not. I'm a Bad Person.

I am looking for my Round Tuit and hope to find it this weekend...

:-)

James

Magnus Westerstrand said...

So what about the final result? :)

http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F2009JCLI2863.1&ct=1

James Annan said...

Same old same old....