As you will have
seen, we recently spent a few days in Tsukuba, which must be my least favourite place in Japan, by quite a distance. It's a horrible windswept boring new town, built about 40 years ago as a "science city" (
Nature article here). There are no interesting shops, restaurants or other facilities that I am aware of, just a regular grid of offices and apartment blocks. Going to Tsukuba is about on a par with visiting Milton Keynes. Unfortunately, one of our main collaborating labs is based in Tsukuba, so we have to show our faces there occasionally.
When first built, it was almost impossible to travel to (being in the middle of nowhere, with a poor bus service) and had a famously high suicide rate even by Japanese standards.
This web page says the university still has the highest suicide rate in Japan, even though it's supposed to have gone down a lot. A few years ago, the new train line was completed and now the scientists seem happier in the knowledge that they can get whisked straight to the geek heaven of "electric town" Akihabara in under an hour. The train terminus is in the basement of
Yodobashi, no less. If our colleagues are representative, Tsukuba also has an unusually high birth rate, so it seems there still isn't a lot to do there :-) I think all 5 of our closest associates have young children, whereas no-one in our institute does. Of course the different employment system (tenure vs short contracts, basically) may also be a factor in that...
Anyway, enough about Tsukuba. The reason we were there was a workshop on downscaling, which in practice really meant regional modelling and prediction. It's not that much of a focus of mine, but it does seem a natural progression from the global scale down to smaller ones. And being an "International Workshop", all the Japanese-side presentations were in English, so it was a good chance to hear what was going on. As well as the Japan-based work (which I'm not really part of), there were quite a few invitees from overseas, who mostly said interesting and useful things. One of them was Roger Pielke Snr. He didn't actually come over, but gave a presentation via videolink, which was described as "Skype" but looked like it might have been something else - Google video chat perhaps? Technically it worked very well, and while I accept there are benefits to physical attendance, the time and cost savings of such an approach make it very attractive.
His main theme was that neither statistical nor dynamical downscaling of multidecadal global climate model projections generates any value. His reasoning rested on the typology expounded
here, where "Type 1" is basically hindcasting or nowcasting forced by observations, and "Type 4" is basically pure modelling as is required for long-term climate projections. Pielke's article explicitly states:
Observational constraints on the solution become less as we move from Type 1 to Type 4. Thus forecast skill will diminish from Type 1 to Type 4.
This final sentence is, of course, completely false, at least when you realise that Type 1 and Type 4 are likely to be used for different purposes (we don't have obs for forcing regional climate models in 2100...). While it is true that model accuracy will (other things being equal) generally degrade as observational conditions are replaced by model outputs, this does not mean that
skill will diminish, for the simple reason that skill is always a comparative measure against some alternative hypothesis (typically a null hypothesis such as persistence, or climatology, or indeed persistence of climatology, when talking about multidecadal forecasts), and the performance of this null hypothesis will ALSO degrade as we look further ahead. We all agree (including Roger) that
stationarity is dead (if it was ever truly alive), and thus the climate in 100 years is likely to be substantially different to that of today, or the next 10 years. In order for Roger's claim to be true, it would have to be the case that the model performance degrades MORE RAPIDLY than the performance of whatever null hypothesis he would use in the absence of the models. That could in principle be true, but he has not, as far as I'm aware, ever attempted to show it. I'm disappointed to see him still playing fast and loose with the terminology of "skill", several years after I first pulled him up on it.
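To make the point concrete, here is a minimal sketch (my own illustration, not anything presented at the workshop) of the usual skill-score convention: skill is measured relative to a reference forecast, so a model can remain skilful even as its absolute errors grow, provided the null hypothesis degrades at least as fast. The numbers below are made up purely for illustration.

```python
# Illustrative sketch: "skill" as a comparative measure.
# A common convention is the mean-squared-error skill score,
#   SS = 1 - MSE(forecast) / MSE(reference),
# where the reference is a null hypothesis such as persistence of climatology.
import numpy as np

def mse(pred, obs):
    return np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)

def skill_score(forecast, reference, obs):
    """Positive => forecast beats the reference; zero or negative => no skill."""
    return 1.0 - mse(forecast, obs) / mse(reference, obs)

# Hypothetical regional temperature anomalies (degC), purely for illustration
obs_2100    = np.array([3.1, 2.6, 3.8, 2.9])   # what actually happens
model_2100  = np.array([2.5, 2.2, 3.2, 2.4])   # an imperfect model projection
climatology = np.array([0.0, 0.0, 0.0, 0.0])   # "no change from today" null hypothesis

print(skill_score(model_2100, climatology, obs_2100))  # ~0.97
# The model's absolute errors are non-trivial, yet its skill is high,
# because the no-change null hypothesis degrades much faster on this horizon.
```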
Lest you think I'm being too technical and pedantic in my usage of the term "skill", it is quite clear from reading what Roger says that he is also using it in this sense, at least when it suits his purpose. Eg, he explicitly states that, as a consequence of his argument, model runs are not suitable for use as projections or even scenarios.
I challenged Roger on this at the end of his talk, and he retorted that the models had yet to demonstrate skill over these time and space scales. I agree with that to some extent, but it's a very much weaker claim than his original one that the models necessarily have no (or at least negligible) skill. It is perhaps debatable to what extent we can consider skill in a rigorous sense, as we aren't dealing with a lot of repeatable experiments like daily weather forecasting, but
we are working on it to the extent possible. (The analysis presented there only considers global mean temperature, because that is all that was available. However, it's pretty obvious that if the data had been available, it would have given similar results at the continental scale, since the spatial patterns are very consistent between all models and obs. That's still some way from a really fine scale of course, but it's a start.)