The 1st UJCC (UK-Japan Climate Collaboration) workshop took place here on 24-25 November. This is part of the Hadley Centre/CGAM collaboration with the Earth Simulator Center, and there were several attendees from the UK who I'd not bumped into before. The undercurrent was a bit different from most workshops, because as well as discussing current issues in research, this meeting was aimed at developing themes and plans for future collaboration, and so I suspect that some people were covertly if not overtly staking out claims for their particular interests.
The introductory talks were all very interesting, which was a pleasant surprise. Not too much of a focus on abstruse details, but a broad exploration of interesting issues, well exemplified by Julia Slingo's discussion of the competing demands of complexity vs resolution vs uncertainty. Kimoto-sensei presented some evidence that although increasing resolution doesn't change the broad picture, it strongly affects the model's ability to represent changes in extremes - in the particular case of rainfall in Japan, the projected change is towards more days with either no rain or extremely heavy rainfall, and fewer with modest rainfall. Observations over the 20th century seem to support these modelled trends.
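As an aside, the underlying statistical effect is easy to illustrate. Here's a minimal sketch (my own toy example, nothing to do with Kimoto's actual analysis - the rainfall distribution, box sizes and thresholds are all invented) showing how averaging over a larger "grid box" damps both the heavy-rain and the dry ends of a daily rainfall distribution:

```python
# Toy illustration: coarser grids damp rainfall extremes. We draw synthetic
# point-scale daily rainfall and compare its distribution with block averages
# over progressively larger "grid boxes". All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point-scale daily rainfall: many dry days, occasional heavy rain.
n_days, n_points = 10_000, 64
wet = rng.random((n_days, n_points)) < 0.3           # 30% chance of rain
amount = rng.gamma(shape=0.5, scale=20.0, size=(n_days, n_points))
rain = np.where(wet, amount, 0.0)                    # mm/day at each point

for boxsize in (1, 8, 64):                           # points per "grid box"
    boxed = rain[:, :boxsize].mean(axis=1)           # average over the box
    heavy = (boxed > 50).mean()                      # fraction of days > 50 mm
    dry = (boxed < 0.1).mean()                       # fraction of ~dry days
    print(f"box of {boxsize:2d} points: "
          f"P(>50mm) = {heavy:.4f}, P(dry) = {dry:.4f}")
```

The coarse average ends up drizzling a little on almost every day, with hardly any very heavy or completely dry days - which is one way of seeing why a low-resolution model can get the mean right while misrepresenting the tails.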
Then we went on to more detailed modelling, with different people illustrating the effects of focussing on complexity and resolution (not much on uncertainty). The details were certainly interesting but it's not quite my field of research - I'm primarily interested in prediction skill and uncertainty, and the question of deciding the priorities (from a modelling POV) for future work is mostly someone else's problem. I was not alone in making the point that exploring feedbacks and processes is not the same thing as improving predictive skill, and it's important to be clear about the specific goals when new stuff is added to models. There seems to be widespread agreement that increased resolution should help, since the basic fluid flow equations are well understood (and NWP results are also available as support). But even though it's been shown that feedbacks due to the ecosystem etc can certainly be important, it's less clear that we understand them well enough for including these sub-models to actually improve the model output.
Jules and I gave our talks on Friday morning - she presented work from our SOLA paper, and I talked about something we did more recently. The latter stirred up a bit of debate, as we had hoped. The last session was about the computer science of making large models run efficiently on huge computers - which I'm relieved to be able to say is not of much direct importance to me (though I'm glad other people exist to worry about this stuff). We then closed with an interesting discussion about the sort of scientific problems we were hoping to solve, and what sort of models would be necessary to achieve this. There was quite a lot of grumbling at how IPCC deadlines and priorities were forcing people to rush into big modelling efforts and "global warming prediction" at the expense of the underpinning science. It's hard to see a good answer to this - it is only due to the political demands that the science is so heavily funded in the first place, so IMO we can hardly complain if the funding is tied to those demands. OTOH there has to be some room for process-oriented and fundamental research without it necessarily having to be justified and presented in terms of the next generation of IPCC results. Does the grumbling indicate a really fundamental problem with the sustainability of climate science and the IPCC process, or is it just the standard cynicism and jockeying for position that can usually be expected from British scientists? Time will tell.
7 comments:
John Fleck asks -
James, could you elaborate on the grumbling? As a consumer of the science y'all produce, the sacrifice of underpinnings for rushed predictions makes me a tad nervous.
John,
Well, I've not got much to add to what I said. There seems to be pressure to simply patch any newly-developed parameterisation (say, vegetation models) into GCMs for the purpose of 100 year projections without always understanding their strengths and weaknesses adequately. A relationship that seems robust at the individual leaf scale may not be relevant for 300km grid boxes, and may fail completely once the temperature changes sufficiently. Does this really mean that we face massive vegetative die-back, or is it merely a case of "more research is needed"? So long as the uncertainties in the research are presented honestly, maybe it doesn't matter too much, but I suspect most people see the scary headlines from Nature press releases, and might not catch the more measured reassessments in the more conventional literature.
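To make the scale-mismatch point concrete, here's a toy sketch (my own illustration, not any particular vegetation scheme - the response curve, its parameters and all the numbers are invented): a nonlinear leaf-scale response evaluated at a grid-box mean temperature can differ substantially from the average of that response over sub-grid variability, and extrapolating the same curve into a warmer regime takes it well outside the range it was ever fitted to.

```python
# Toy illustration of Jensen's inequality at work in upscaling: applying a
# nonlinear "leaf-scale" response to a grid-box mean is not the same as
# averaging the response over sub-grid variability. Everything is invented.
import numpy as np

rng = np.random.default_rng(1)

def leaf_response(t):
    """Hypothetical productivity curve, peaking near 25 C."""
    return np.exp(-((t - 25.0) / 8.0) ** 2)

grid_mean_t = 25.0                                    # grid-box mean temp (C)
subgrid_t = grid_mean_t + rng.normal(0, 6, 100_000)   # sub-grid spread

print("response at grid mean: ", leaf_response(grid_mean_t))        # = 1.0
print("mean of point responses:", leaf_response(subgrid_t).mean())  # ~0.69

# And under a warmed climate the same curve, fitted near 25 C, gets
# extrapolated far outside the range it was ever tested against:
print("response at +10 C:     ", leaf_response(grid_mean_t + 10.0))
```

The grid-box-mean shortcut overstates the response by roughly half in this contrived setup, and the extrapolated value at +10 C is essentially a guess - which is the sort of thing I mean by a relationship "failing completely" once conditions move far enough.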
You may or may not be surprised by the fact that development plans and timetabling are already well in place for the anticipated modelling requirements of AR5. It sometimes seems like the IPCC is not so much assessing the science, as determining what science needs to be done, and by when.
The flip-side of this of course is that the pressure to perform has driven research on at a great rate. So it's not by any means all bad.
"...I talked about something we did more recently. The latter stirred up a bit of debate, as we had hoped."
Another tease, eh? Still not down to a glimpse of the fur-lined lingerie, it seems. :) Come on, it sounds as if this event was a sufficiently public forum that surely you can say a little bit about the new paper here.
Sorry Steve,
The journal in question (GRL) seems to have extremely restrictive rules regarding "public" disclosure (more so than Nature, for example). And due to the particular circumstances I really don't want to pre-empt the formal scientific process. I'll have plenty to say in a few weeks...
John,
This paper makes the case against over-complexifying in one particular context. (The author was at the workshop.)
Re It sometimes seems like the IPCC is not so much assessing the science, as determining what science needs to be done, and by when.
I know what you mean, but you're in danger of over/mis-stating this, IMHO. IPCC makes no (little?) attempt to determine who does what; the push comes because people are desperate to get their stuff into ARx, and funding bodies also (rightly?) see inclusion of science in IPCC as some seal of approval/relevance.
I know what you mean, but you're in danger of over/mis-stating this, IMHO
Probably - the point is not so much about the particular chain of command, as the overall effect. But certainly we performed a suite of modelling experiments specifically because IPCC authors wanted to write about a particular subject in a particular way.