I recently attended the second installment of this sequence of workshops, this time in Durham. It was, if anything, better than the first, not merely because I had a trouble-free journey there, but also because the Bayesians and climate scientists seemed better able to communicate this time around. As I've mentioned, Marty Weitzman's Dismal Theorem made a brief appearance, but the most interesting discussion focussed on whether and how we could use model output to generate (or inform) the detailed probabilistic predictions of regional climate change that are increasingly being made.
Lenny Smith gave a very entertaining rant on this topic, which I found very useful as I'd been aware of his scepticism for some time, but not quite understood the reasons for it. Just for clarity, he is not sceptical of the broad picture of global climate change in terms of the expected large-scale future warming, but rather of the ability of models to provide such detailed predictions as, say, "typical" weather time series for specific locations and seasons several decades ahead (which UKCIP is promising). As he further points out, the time scale on which credibility may be lost is not the decades it takes for such predictions to be falsified observationally, but rather the much shorter time scale over which someone produces a new, conflicting prediction with the next "bigger and better" model. I've generally been thinking in terms of global and large regional scales, with variables such as (ok, exclusively) mean temperature, so I had not really considered the detailed predictability of local climate changes, but he certainly painted a very persuasive picture of the difficulty of this. Note this is not the trivial "weather versus climate" meme, but rather the question of whether a model can usefully inform on (eg) the longest sequence of consecutive hot days in a summer, when it simply does not adequately simulate the processes that control long sequences of hot days. It is the statistics of "weather" events such as these that actually matter to end-users, much more so than global annual mean surface temperature.
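To make the point concrete, here is a toy sketch (in Python, with entirely invented numbers) of why run statistics are harder than means: two synthetic daily temperature series with identical mean and variance, but different day-to-day persistence, produce quite different "longest hot run" statistics. A model that gets the seasonal mean right but the autocorrelation wrong will get this sort of statistic badly wrong.

```python
import numpy as np

def longest_hot_run(tmax, threshold=30.0):
    # Length of the longest run of consecutive days with tmax > threshold.
    run = best = 0
    for t in tmax:
        run = run + 1 if t > threshold else 0
        best = max(best, run)
    return best

def ar1_summer(mean, sd, phi, n_days=92, rng=None):
    # AR(1) daily series with marginal mean/sd and lag-1 autocorrelation phi.
    if rng is None:
        rng = np.random.default_rng()
    x = np.empty(n_days)
    x[0] = rng.standard_normal()
    for i in range(1, n_days):
        x[i] = phi * x[i - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
    return mean + sd * x

rng = np.random.default_rng(42)
# Two "models" with identical daily mean (27C) and sd (3C), differing only
# in persistence; the hot-run statistic responds to that difference.
print(longest_hot_run(ar1_summer(27.0, 3.0, phi=0.2, rng=rng)))  # short runs
print(longest_hot_run(ar1_summer(27.0, 3.0, phi=0.8, rng=rng)))  # long runs
```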
However, I do think there is room for coexistence between his view and the Bayesian approach. Indeed it was suggested that his criticism was more an attack on a straw man of "naive Bayesianism" (albeit that this naive Bayesianism is pretty much the path that has been followed so far in climate science) than on the principles of Bayesian probabilistic prediction themselves. The distinction as I see it is that the naive approach which Lenny is criticising is to generate some model (ensemble) output, dress it up in some sort of uncertainty kernel (to represent "model inadequacy") and present this as a pdf. Perhaps a sounder way of addressing things is to start off with a prior on the future operationally-defined variable of interest, and then consider, through the likelihood function, to what extent the outputs of (highly imperfect) model runs should cause one to update that prior at all. That doesn't amount to any sort of get-out-of-jail-free card - all the hard judgements still have to be made - but it might perhaps encourage climate scientists to address the issues within a more comprehensive, coherent and plausible framework than they have done previously.
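As a minimal sketch of the distinction (all numbers invented purely for illustration): treat a model run as a noisy and possibly badly biased measurement of the operationally-defined quantity, with a variance term encoding the judged model inadequacy. That judgement then controls how much, if at all, the model output updates the prior; a sufficiently inadequate model leaves the prior essentially untouched.

```python
def update_with_model_run(prior_mean, prior_var, model_value, model_error_var):
    # One conjugate-normal Bayesian update, treating the model output as
    # model_value = truth + error, error ~ N(0, model_error_var), where
    # model_error_var encodes model inadequacy plus internal variability.
    w = prior_var / (prior_var + model_error_var)  # weight given to the model
    post_mean = prior_mean + w * (model_value - prior_mean)
    post_var = (1 - w) * prior_var
    return post_mean, post_var

# Hypothetical prior on a local summer-mean warming (degC) by mid-century:
prior = (2.0, 1.5 ** 2)

# A model judged nearly adequate moves the posterior substantially...
print(update_with_model_run(*prior, model_value=3.5, model_error_var=0.5 ** 2))
# ...while the same output from a model judged highly inadequate
# barely updates the prior at all.
print(update_with_model_run(*prior, model_value=3.5, model_error_var=5.0 ** 2))
```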
My talks (one on my own behalf, one of jules' work) were fairly unadventurous. I was relieved to see that the uniform prior really does seem to be increasingly acknowledged as a dead duck now, with one of the climate scientists bothering to mention as an aside that of course one could not pretend that a uniform prior was really "ignorant" (only last year, the IPCC was asserting precisely the opposite, but perhaps this little episode has been airbrushed out of history now). Other than that, there was a range of interesting presentations, some maths that was way over my head, and other stuff I thought was probably wrong, including a claim that seems to contradict some well-established mathematical theory, of which the claim's originator was apparently not aware. Par for the course really :-)
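For anyone still wondering why the uniform prior is a dead duck: uniformity is not preserved under a change of variables, so a prior that is "ignorant" about one quantity is quite opinionated about a transformation of it. A quick Monte Carlo sketch (with purely illustrative ranges) shows that a uniform prior on climate sensitivity S implies a prior on the feedback parameter, taken here as proportional to 1/S, that piles up at the low end.

```python
import numpy as np

rng = np.random.default_rng(1)
# A supposedly "ignorant" uniform prior on climate sensitivity S
# (degC per CO2 doubling); the range is purely illustrative.
S = rng.uniform(0.5, 10.0, size=100_000)
# The implied prior on a feedback parameter ~ 1/S is far from uniform:
lam = 1.0 / S
counts, edges = np.histogram(lam, bins=5, range=(0.1, 2.0))
for n, lo, hi in zip(counts, edges, edges[1:]):
    print(f"lambda in [{lo:.2f}, {hi:.2f}): {n} samples")
```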
2 comments:
You might find this of interest re: climate change.
Criminals And Moralists Working Together
Enron Carbon Trading And Hansen
Enron And Carbon Trading
James,
What if the sun is really the independent variable and CO2 the dependent one?
What if (as some solar scientists predict) we are headed for a Dalton Minimum?
That would throw all the predictions into a cocked hat.