Tuesday, September 05, 2017

Practice and philosophy of climate model tuning across six US modeling centers

A paper with the above title has just appeared in GMD. Although GMD is a European English-language journal, we welcome Americans and even Americanisms, so I'll quote the title as written rather than as it should be :-) In this paper, Gavin has nicely summarised (or perhaps I should say, summarized) how approaches to model tuning vary across the US climate science community.

It's a slightly unusual manuscript type for GMD in that it doesn't present any technical advances (such as the parameter estimation techniques, examples of which have been published in GMD previously) but instead describes the rather more ad hoc hand tuning that model developers currently do. As such, it generated some behind-the-scenes discussion as to how best to handle the manuscript within the GMD framework. We at GMD have always seen our role as enabling rather than constraining the publication of modelling science, and we were already considering the concept of “review” paper types which survey a field rather than notably advancing it, so this was an opportunity rather than a problem for us. The reviewers also made constructive comments, which made my job as editor fairly straightforward.

A major point of interest in the paper (and in model tuning generally) is the extent to which the models have or have not been tuned to reproduce the 20th century warming. This has significant implications for how we should interpret their performance, and for whether we can use the observational record to discriminate between models. Gavin has always been quite insistent that he doesn't use these data, and I certainly have no reason to doubt his claim.

On the other hand, this Isaac Held post on tuning is also worth reading. In it, Isaac argues that the warming is probably baked in to some extent by the way the models are built and evaluated during their construction. On balance I think I prefer Isaac's way of putting it to Gavin's, but it's a nuanced point.

Certainly, modellers do not repeatedly re-run 20C simulations, tweaking parameter values each time until they get a good fit to the observed record. If that is what people envisage when they discuss “model tuning”, then Gavin is certainly correct: it simply doesn't happen. And I'm happy to believe that some modelling teams don't run the 20C simulation at all until the very end of the model development phase, and simply send their very first set of simulation results to the CMIP database. But I've seen for myself that some groups have sometimes done these simulations at an earlier stage and, on seeing a poor result, have gone back and redesigned some aspects of the model to fix the problems that arose (problems which are likely to be more specific than just “the wrong trend”). Beyond this, it is an open question to what extent the tuning of individual model components is truly independent of our knowledge of the recent warming, which also constrains our estimates of various aspects of model behaviour. But given the limited nature of any such tuning (and indeed the limited agreement between models and data!), perhaps it's a close enough approximation to the truth to just call them untuned.
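To make concrete the kind of explicit tuning loop that everyone agrees does not happen, here is a deliberately simplified sketch in Python. Everything in it is hypothetical: a toy zero-dimensional energy-balance “model” with a single free feedback parameter, a round-number synthetic target standing in for the observed century-scale warming, and a brute-force parameter sweep. Real models have many more knobs and far more expensive simulations, and (as argued above) nothing like this direct fit to the 20C record is performed.

# Caricature of "tuning to the 20th-century record" -- illustrative only.
# Toy energy balance: dT/dt = (F(t) - feedback * T) / C, with a linear forcing ramp.

def toy_model_warming(feedback, heat_capacity=8.0, years=100, forcing_end=2.0):
    """Return the end-of-century temperature anomaly (K) of the toy model.

    feedback      : climate feedback parameter (W m-2 K-1), the "tunable" knob
    heat_capacity : effective heat capacity (W yr m-2 K-1), held fixed
    forcing_end   : forcing reached at the end of the ramp (W m-2)
    """
    temp = 0.0
    dt = 1.0  # one-year time step
    for year in range(years):
        forcing = forcing_end * (year + 1) / years  # linear forcing increase
        temp += dt * (forcing - feedback * temp) / heat_capacity
    return temp

OBSERVED_WARMING = 0.8  # K over the century -- a synthetic target, not real data

# The naive tuning loop: sweep the free parameter and keep whichever value
# gives the smallest mismatch with the "observed" warming.
candidates = [0.5 + 0.05 * i for i in range(60)]
best = min(candidates, key=lambda f: abs(toy_model_warming(f) - OBSERVED_WARMING))

print(f"best-fit feedback = {best:.2f} W m-2 K-1, "
      f"simulated warming = {toy_model_warming(best):.2f} K")

The point of the caricature is simply to sharpen the distinction: this explicit fitting loop is what doesn't happen, whereas the subtler baking-in that Isaac describes operates through component-level choices and the evaluation targets used during model construction.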
