Tuesday, November 06, 2007

A defence of climateprediction.net

Another of my "just writing it so I can use the title" posts, perhaps :-) But having ranted about them a couple of times in the past, there's no harm in taking this opportunity to say something a bit more conciliatory.

Before anyone thinks I must have gone soft in the head, I should emphasise that this post doesn't mean I'm going to stop teasing them when they say things I think are foolish - in fact they have a howler currently in press that I'm looking forward to blogging about when it appears. But criticism should be well targeted, and I think Stoat misses the mark in his recent posts about this paper.

He says:
Clearly they have had some jolly fun dividing the runs up into trees, but the paper is a disappointment to me, as it doesn't really deal with the main issue, which is the physical plausibility of some of the runs.
While I pretty much agree with Stoat's characterisation of "the main issue", I have no problem with papers that don't address it, so long as they don't oversell their results as having any meaningful applicability to the real world (which is probably a valid criticism of the original "sensitivity might be 11C" paper, but not of the one under discussion here).

In fact this "main issue" is an incredibly complex one to address. It is effectively the crux of the whole climate prediction problem (and of many other prediction problems besides). It can be roughly restated as "how good/bad does a model have to be before we trust/distrust its output", or perhaps more precisely as "how do we make meaningful inferences about reality, given the output of some model runs, none of which really looks much like reality if you examine it in any detail?". It is certainly not as simple as just choosing (discovering?) some convenient "objective criterion" (or a laundry list of such criteria) against which to measure our models, although such criteria may provide some guidance. As one adds more criteria to the list, the number of models that pass all of them will simply drop to zero - at what point do we decide to stop? (See the toy sketch below.)

And although some of us have been thinking about this question for a few years already, it may be a few more years yet before we start to agree on some answers. Meanwhile there is plenty of more technical work to do, and so long as they are doing something interesting and valid, I don't think it is fair to criticise papers simply because they did not address the particular problem that you would have liked them to.
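To put a rough number on the laundry-list point: if each extra criterion independently knocked out even a modest fraction of the ensemble, the number of survivors would shrink geometrically. Here is a toy sketch in Python - all the numbers are made up, and real criteria are of course correlated, but the arithmetic makes the point:

```python
import random

random.seed(0)  # reproducible toy example

N_MODELS = 10_000   # hypothetical ensemble size
PASS_RATE = 0.7     # hypothetical chance a model passes any single criterion
N_CRITERIA = 30     # length of the "laundry list"

# Crude assumption: each model passes each criterion independently with
# probability PASS_RATE; keep only the models that have passed everything.
survivors = list(range(N_MODELS))
for k in range(1, N_CRITERIA + 1):
    survivors = [m for m in survivors if random.random() < PASS_RATE]
    if k % 5 == 0 or not survivors:
        print(f"after {k:2d} criteria: {len(survivors)} models remain")
    if not survivors:
        break

# Expected survivors after k criteria is N_MODELS * PASS_RATE**k:
# 10000 * 0.7**30 is about 0.2, i.e. essentially nobody passes the full list.
```

In reality the criteria are far from independent, which slows the decay, but it doesn't change the qualitative conclusion: a long enough list eliminates every model, so the list by itself can't be the answer.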
