Thursday, October 27, 2011

WCRP OSC Day 3

Today was the busy day, with the schedule including two posters for me, a talk for jules, a business lunch and the conference gay-la (USAian pronunciation of Gala) party. Again, we didn't have the stomach for breakfast, especially after only having about 4h sleep.

Unfortunately, the plenaries were more posturing than science. Christian Jakob made the rather risible claim that physical climate modellers were an endangered species. I don't dispute that a significant emphasis has passed to the sexier chemistry/ecosystem/aerosol components of the climate system, and I don't have hard numbers to refute him with, but I find it very hard to believe that there are not still far more people now engaged in physical model development (he included numerics in this) than there were, say 30 years ago. Certainly our lab, which did not exist 15 years ago, boasts several groups of them, working on diverse aspects of climate modelling. He also promoted Tim Palmer's idea of a worldwide "Manhattan Project" to build one model to rule them all, and my only criticism of Gavin's rather unenthusiastic comment was that he was (predictably) rather too polite...

Adam Scaife of the Hadley Centre then did his best to cherry-pick some marginal successes out of the wreckage of their "decadal" prediction program, with such gems as the "likely to be globally hottest ever" forecast for 2010 (which wasn't actually hottest, by their preferred HadCRUT measure) and the "45% chance of cold" UK 2010 winter (ie 55% chance of warm or near-normal temperatures) which of course turned out to be remarkably cold. Interestingly, he didn't find time to mention the "most years past 2009 will be warmer than 1998" prediction of the Smith et al Science paper, which has failed to pan out for 2010, 2011 and most likely (based on ENSO forecasts) 2012 so far. Of course I must not be churlish about the improvements in the 3-10 day horizon and indeed ENSO forecasting which are notable in themselves, but perhaps they should stop pretending they can do much apart from that.

It was rather a relief when Sandrine Bony gave a nice review of what had (not) been learnt about climate sensitivity since the Charney Report, and why the model results had not converged since then (she didn't cover the probabilistic estimation approaches with the "long tail"). She made the point that increased process understanding is a key component of reducing uncertainties in climate change predictions.

After coffee (and a remarkably heavy ham and cheese croissant), I had two posters to defend in different rooms, so focussed my attention on one of them. Having just received some encouraging reviews on a related manuscript, this will be the subject of another blog post in the near future. The one I abandoned was mostly a re-hash of the JClim paper which was finally published not long ago. A fair few people came by, though not necessarily the ones who really needed to read it...(but they might have seen it during the rest of the day).

The afternoon session was on reliability of climate models, which was supposed to be looking at CMIP5, but these results are only coming in around now so many people (including us) are still looking at CMIP3. Karl Taylor kicked things off with an unfortunate mis-statement regarding the interpretation of the multi-model ensemble (the stuff we have tried to correct in recent papers), but fortunately it wasn't a major part of the session, or even his talk. I found another virtually identical talk of his on the web here; the problem is on p32, and I can rant in more detail if anyone cares...

Grant Branstator showed that the inherent predictability of the models on the decadal time scale varied substantially, which may help to explain the problems they are having. Sandy Harrison energetically and enthusiastically showcased what paleoclimate simulations (now part of CMIP5, for the first time) can offer in terms of model testing and validation, and Reto Knutti gave a nice overview of what (little) we know how to do in terms of evaluating and weighting climate models to improve their predictions. Both of these talks provided a fine background to jules' [brilliant -ed] talk on assessing model reliability and skill with paleoclimate simulations. She rattled through some of our recent work, and also some stuff that is yet to be written. So far our ideas have been developed and applied to the old CMIP3 models, so we look forward to seeing how the new crop of models measures up.

Conveniently, the aforementioned gala, held in the local Denver art museum, followed straight after. To be honest, I was rather underwhelmed by the art, but that's probably just me.

19 comments:

skanky said...

Though December was very cold, January was about average and February was remarkably warm.

Adam Scaife was discussing seasonal forecasts on R4 today, and gave a tentative forecast of a similar pattern, based on La Niña, sea ice and UV - though less pronounced, as two of those factors (UV and La Niña) are weaker this time.

Previous years I've only really seen references to ENSO for winter forecasting.

It's interesting to see a forecast break a season up as those three months tend to see very different weather patterns. Whether they'll start to get more accurate soon remains to be seen.

Alastair said...

Karl Taylor's Slide 32 states:

Is “uncertainty” based on spread of model results misleading?
• It doesn’t include possibility of a common bias across models
  - If the common bias is zero, then the multi-model mean provides a good estimate of the “truth”
  - If the bias is not zero, the truth may lay outside model results


Is that the item you disagree with? If so, I am curious to know how you can find fault
with such a simple proposition.

Cheers, Alastair.

crandles said...

If the common bias is zero then you know the answer and there is no issue. The problem is defined by not knowing the common bias. This if-then only distinguishes whether the problem you are dealing with is that trivial case or not, and therefore does not seem to me to be helpful.

It also seems highly geared towards a truth-centered paradigm when a statistically indistinguishable paradigm seems much more appropriate, as James has ranted about before.

James Annan said...

Crandles gets it right of course. The slide Taylor showed was actually slightly different wording, but the same intent. The term "common bias" was not clearly defined. If he means a bias in the ensemble mean (which he probably does, as many have talked about it in these terms), then the first statement is simply false - use of the ensemble in this way actually rests on the assumption that the inter-model spread is a useful indicator of the likely magnitude of the mean bias. If instead he means all models actually having the same sign of bias, then of course the truth lies outside the ensemble range by definition, making the last statement too trivial to write down and in fact misleading in its imprecision (there's no "may" about it).
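To make that concrete, here's a toy Monte Carlo sketch (my illustration only, with arbitrary numbers for the ensemble size and spread, not anything from Taylor's talk): if the truth and the models are treated as exchangeable draws from the same distribution, then the inter-model spread itself predicts the typical size of the ensemble-mean error, which is exactly the sense in which a spread-based uncertainty already allows for a "common bias" of the mean.

```python
# Toy illustration (assumed/arbitrary numbers): under the "statistically
# indistinguishable" view, the truth and the models are exchangeable draws
# from the same distribution, so the inter-model spread predicts how far
# the ensemble mean is likely to sit from the truth.
import numpy as np

rng = np.random.default_rng(0)
n_models = 20       # hypothetical ensemble size
n_trials = 50_000   # Monte Carlo repetitions
sigma = 1.0         # spread of the distribution both truth and models come from

biases = np.empty(n_trials)
spreads = np.empty(n_trials)
for i in range(n_trials):
    truth = rng.normal(0.0, sigma)
    models = rng.normal(0.0, sigma, size=n_models)
    biases[i] = models.mean() - truth   # the "common bias" of the ensemble mean
    spreads[i] = models.std(ddof=1)     # inter-model spread

# Under exchangeability, RMS(bias) = sigma * sqrt(1 + 1/n_models), i.e. it is
# set by the same sigma that the inter-model spread estimates.
print("RMS ensemble-mean bias:        ", np.sqrt(np.mean(biases ** 2)))
print("mean inter-model spread:       ", spreads.mean())
print("bias predicted from the spread:", spreads.mean() * np.sqrt(1 + 1 / n_models))
```

The first and third printed numbers come out nearly equal, i.e. the spread does "include the possibility" of the ensemble mean being biased, contrary to the slide's first bullet under this reading.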

jules said...

When I saw it I just assumed that the "may" was sloppy casual language so didn't see the mistake until James pointed it out. Being here watching these people talk to each other, it has become easier to understand how these errors in thinking creep in. They are badly in need of more pedantic mathematicians to keep them on the straight and narrow, but somehow they prefer the poetic types...

James Annan said...

I reckon it is because they are more eager to agree with each other and coalesce around a "consensus", than critically examine the science and precision of language. A little bit of excusable vagueness can easily turn into a full-blooded error.

Hank Roberts said...

> common bias

Who was it who recently had a model without air pollution that apparently solved a longstanding problem because the result matched fossil temperature proxies for the PETM extreme? New Scientist reported it from a Royal Society meeting; I think the author was from the Denver area. Looking ...

Hank Roberts said...

oops. Easy to find once I remember the terms to search.

I realize New Scientist is an entertainment niche, not a science magazine; they make it sound like the aerosol assumption could be a commonly shared bias:

http://www.newscientist.com/article/dn21051-clean-air-fixes-cold-poles-in-model-of-ancient-climate.html

"Climate models have long failed to simulate these warm times, says modeller Paul Valdes of the University of Bristol, UK. Specifically, they don't heat the poles enough, often falling as much as 15 °C short. The models can get the poles right if modellers inject more greenhouse gases into the simulated atmosphere, but then the tropics overheat.

Trying to fix this cold-pole problem, Jeff Kiehl of the National Center for Atmospheric Research in Boulder, Colorado, simulated the climate of 55 million years ago....
with clean skies, and found that the cold-pole problem largely disappeared. With clouds forming in unpolluted air, the poles warmed up much more than the tropics, giving a climate within a few degrees of the one that actually existed...."

Alastair said...

I took it to mean that the true mean would only lie outside the model results if the common bias was greater than (half of) the range.

There is a more fundamental criticism of that thinking, which is that if all the models give different results then at most only one of them can be correct. And since they are all using the same basic paradigm, then it seems highly likely that they are all wrong!

Perhaps that was what Karl Taylor was hinting at.

Cheers, Alastair.

Alastair said...

Hank,

your excerpt ended "... giving a climate within a few degrees of the one that actually existed...."

It did not give the correct value so the model was still wrong.

Steve Bloom said...

The Nude Scientist's problems notwithstanding, it was an unembellished report of a serious-sounding result from a serious researcher.

Was it mentioned/discussed at the conference? A search tool apparently exists, but I couldn't find it.

William M. Connolley said...

You seen the Schmittner thing?

crandles said...

Re "The term "common bias" was not clearly defined, but if he means a bias in the ensemble mean (which he probably does, as many have talked about it in these terms), then the first statement is simply false - use of the ensemble in this way actually rests on the assumption that the inter-model spread is a useful indicator of the likely magnitude of the mean bias"

I took the 'If the common bias is zero, then the multi-model mean provides a good estimate of the “truth” ' as an obvious truism. I previously tried to emphasise that it is also completely unrelated to the problem we are really dealing with so is completely useless. Discarding the irrelevant stuff just leaves 'the truth may lay outside model results' which is another obvious but not helpful statement.

However you are saying the first statement is false. A possible alternative interpretation might be that 'provides a good estimate of the “truth” ' is suggesting that the spread of model results provides a good indicator of the level of uncertainty. If you know the mean bias is zero then this is false as we know the answer with much more precision than the uncertainty of the ensemble. However, perhaps we do not really know the mean bias is zero and are just speculating about what if the mean bias is small?

Your answer seems to somehow conclude the statement is false, perhaps because some circular reasoning is being used. However, I am struggling to follow what you are saying.

Steve Bloom said...

Let me know when all of the unknown unknowns have been identified and incorporated into the models (a little unfair, I know).

wv suggests a term for use in scaling denial: nononiti

James Annan said...

Belette, I know of it (he presented it last year at PMIP) but have not read the paper yet.

James Annan said...

Chris, well you were right first time, so maybe I've confused things further. Just to re-state, my main point is that under the most natural (IMO) definition of "common bias", the statement (that using the model spread does not include the possibility of such a bias) is false. Alternatively, under the only interpretation of "common bias" I can think of for which the statement is actually true (ie all models having a bias of the same sign), the last statement is so trivial/misleading (or indeed false, for a strict reading, since there is no "may" about it) that I don't think he can reasonably have meant that interpretation.

Hank Roberts said...

Alastair, all models are wrong. Ok.

Was Kiehl at the meeting?

Alastair said...

Hank,

Yes, but some models are more wrong than others!

James Annan said...

Kiehl is listed as having a poster, but not on that topic.