A commenter worked out where we were and asked about the programme. I haven't had anything to say so far because although we have now been here for more than a week, it was pretty quiet at first, and we were mostly spending our time on some fairly routine work we had brought with us. This week, however, is the first workshop, and it is rather busier.
My first thought on reading the blurb (back when we were first invited last year) was that it had a slightly split personality, with two components that are only tenuously linked. The second component is the one that directly interests me, as it concerns the development and use of probabilistic methods for climate prediction (which, in context and based on the attendees, primarily means the O(100y) problem). The first component, however, which is the main topic of this opening workshop, is the use of stochastic sub-grid physics parameterisations, which can modify (and improve) the behaviour of weather and climate models in various ways. Tim Palmer has been pushing this for some time and it's clear that he is on to something as far as improving short- and medium-term weather forecasting goes. He presented it as a grand plan for replacing the ensemble of CMIP models, but it seems immediately obvious to me that the stochastic physics approach does not begin to address the sort of uncertainties in the equilibrium climate state and response that dominate the climate prediction problem on the century time scale. It's just a better parameterisation, but still in principle a single parameterisation, which will give rise to a single climate, a single climate sensitivity and so on, since the uncertainty the stochastic part introduces will generally be negligible over long time scales (coincidentally, I wrote a short and trivial paper on this way back in the mists of time). In order to generate climatologically different models, we would need different probabilistic schemes, not just a different sequence of samples from the same scheme. I asked him about this after his talk and was not convinced by his response.
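To illustrate the washing-out point, here is a minimal sketch of my own using a toy zero-dimensional energy balance model (nothing to do with the actual models or schemes under discussion): a run with a stochastic term added to the feedback ends up with essentially the same time-mean warming as the deterministic run, because the equilibrium is set by the shared mean feedback parameter rather than by the noise.

```python
# Toy zero-dimensional energy balance model,
#     C dT/dt = F - lam*T + noise,
# integrated with and without a stochastic "parameterisation" term.
# The long-term mean warming is F/lam in both cases; the noise only adds
# variability about that equilibrium. Entirely schematic, toy numbers.
import numpy as np

rng = np.random.default_rng(0)

C = 8.0      # effective heat capacity (W yr m^-2 K^-1), toy value
F = 3.7      # forcing (W m^-2), roughly 2xCO2
lam = 1.2    # feedback parameter (W m^-2 K^-1): one fixed "model"
dt = 0.1     # time step in years
n = 20000    # about 2000 years

def integrate(noise_amp):
    """Euler-Maruyama integration of the toy model."""
    T, Ts = 0.0, np.empty(n)
    for i in range(n):
        noise = noise_amp * rng.standard_normal() / np.sqrt(dt)
        T += dt * (F - lam * T + noise) / C
        Ts[i] = T
    return Ts

det = integrate(0.0)   # deterministic parameterisation
sto = integrate(2.0)   # same scheme plus a fairly large stochastic term

print("equilibrium F/lam       :", F / lam)
print("deterministic late mean :", det[n // 2:].mean())
print("stochastic late mean    :", sto[n // 2:].mean())
# The two late means agree closely: the stochastic run just wiggles about
# the same equilibrium, so the implied sensitivity is unchanged.
```

Getting a genuinely different sensitivity out of this toy would require changing lam itself, i.e. a different scheme, not a different noise realisation.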
There was also a talk on maximum entropy, which confirmed my suspicion that I should not attempt to learn anything about maximum entropy. I consider that an hour well spent!
8 comments:
Thank you for this.
IMHO, applying stochastic methods at specific grid points in the climate models where something 'unusual' might be going on, such as a random forest fire, forest clearance, crop failure, a vast algal bloom, or overfishing, might be reasonable, but determining the boundary conditions for these to happen is another matter. I've heard of maximum entropy once or twice and I am of the same opinion :-). Was it Helmholtz free energy the models use as a measure of maximum energy for the grid points, or what? Anyway, as the globe is a rather closed system as far as matter is concerned (discounting the occasional meteor storm), I too think there's no need to invoke entropy in the deterministic calculations... Simple statistical methods are to me somewhat deterministic, though I can understand the need to make the models as deterministic as possible.
nice
A little bit wry and rough at the edges, I s'pose.
Firstly, climate model biases are still substantial, and may well be systemically related to the use of deterministic bulk-formula closure - this is an area where a much better basic understanding is needed. Secondly, deterministically formulated climate models are incapable of predicting the uncertainty in their predictions; and yet this is a crucially important prognostic variable for societal applications. Stochastic-dynamic closures can in principle...
At least it is an admission of potential theory failure.
Good points, and the limited human resources and funds are the real problem.
I have heard of a maximum entropy production theory of climate before. Didn't really dig into it all that far. I remember their claiming that by assuming that the climate system will rearrange itself for maximum entropy production they were presumably able to identify the equilibrium state after the climate system has adjusted itself to higher levels of carbon dioxide and that this somehow agreed with what one gets by running a climate model long enough. But exactly which climate model they didn't say.
I was skeptical to say the least. Actually from what I understand the whole field of far from equilibrium thermodynamics is in its infancy -- if that. Some speak of maximum entropy production principles, others of minimum entropy production principles and sometimes, depending upon the definitions they might even be speaking of the same thing.
Part of what had gotten me interested was my childhood interest in Ilya Prigogine's unfinished Microscopic Theory of Irreversible Processes. It sought a complementarity between thermodynamics and quantum mechanics by means of star-hermitian operators and the view that it is the instability of a physical system that is responsible for the amplification and collapse of the wave function. The other thing was some article by Roderick Dewar, who viewed himself as building upon the work of E. T. Jaynes. Don't remember at the moment which article.
However, his approach and more or less the whole field was subject to a fairly devastating critique here:
Stijn Bruers, "Classification and discussion of macroscopic entropy production principles", arXiv:cond-mat/0604482v3 [cond-mat.stat-mech], 2 May 2007. http://arxiv.org/PS_cache/cond-mat/pdf/0604/0604482v3.pdf
Maybe at some point progress will be made in the field. Then again it may prove exceedingly difficult to ever create a workable unified theory of far from equilibrium conditions.
Dewar was the speaker yesterday. I should emphasise that I have absolutely nothing against maxent as a research topic, I'm sure some people find it interesting and useful, but it's clearly not something I have the time or energy (and perhaps most importantly, relevant background) to want to get interested in.
If you're into Bayesian reasoning you shouldn't find the basic idea of maxent methods difficult to understand. It says: if you're trying to choose a probability distribution p(i) on some set of events, you should pick it to maximize
- Σ p(i) ln(p(i))
subject to the constraints given by what you believe.
The rest is figuring out how to do this in specific situations, and arguing about whether you really should do it, and why (or why not).
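For what it's worth, here is a small sketch of that recipe on the classic dice example (my own illustration, not anything from the talk): maximise -Σ p(i) ln(p(i)) over distributions on the faces 1..6, subject to normalisation and a believed mean of 4.5.

```python
# Brandeis dice problem (Jaynes): choose p(1..6) maximising the entropy
# -sum p_i ln p_i subject to sum p_i = 1 and a mean constraint E[i] = 4.5.
# Illustrative sketch only; numbers and names are my own choices.
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)       # avoid log(0)
    return np.sum(p * np.log(p))     # minimising this maximises the entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},         # normalisation
    {"type": "eq", "fun": lambda p: np.dot(p, faces) - 4.5},  # "what you believe"
]

res = minimize(neg_entropy, x0=np.full(6, 1 / 6),
               bounds=[(0, 1)] * 6, constraints=constraints)
print(res.x)            # the maxent distribution over the six faces
print(res.x @ faces)    # check: the mean comes out at 4.5
```

The solution comes out exponential in the face value, which is the usual maxent answer under a mean constraint; drop the mean constraint and you just get the uniform distribution.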
I'm just worried that you went to a talk that made the idea seem complicated, or boring.
E. T. Jaynes' book "Probability Theory: The Logic of Science" is a compelling, readable introduction to maxent methods and also to Bayesianism.
There are Bayesians who regard maxent methods as heresy, and 'Bayesian-Jaynesians', and so on - it makes for jolly good arguments.
Thanks, yes that seems compatible with what was said. The problem I have with it is its usefulness: the speaker basically said that if the answer didn't match observations then that just meant the constraints were wrong (or possibly the prior - he did not insist on a uniform one), which implies it doesn't have any practical predictive value, as there seems to be no way of knowing which constraints are the right ones other than by checking whether the answer looks good...
Rumour has it that one of the later speakers may be going to say it is not useful for our sort of climate research for some reason...it will all be on video at some later date for those who are interested.
I suppose I should have a relevant background as my DPhil touched on statistical physics (Tutte polynomial etc) but the physics bit was not my strong point.
no problem
Another important quantity is the dissipation rate density or dissipation function, which is only a function of the fluxes:
D = D(J).
Specific examples will be given below. Note that some authors have different definitions for the dissipation function, such as Tσ, with T the local temperature field, or T₀σ, with T₀ the temperature of the environment or the temperature the system would acquire if it is in thermodynamic equilibrium with the environment. The different usages of the word 'dissipation function' sometimes causes confusion... yep
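To make the quoted definitions concrete (my own textbook-style illustration, not taken from the Bruers paper): for simple heat conduction obeying Fourier's law $J = -k\nabla T$, the local entropy production is

$$\sigma = J \cdot \nabla\!\left(\frac{1}{T}\right) = \frac{|J|^2}{k T^2},$$

so the two candidate dissipation functions are $T\sigma = |J|^2/(kT)$ and $T_0\sigma = T_0|J|^2/(kT^2)$. Both are quadratic in the flux $J$ but differ by the choice of temperature, which presumably is the sort of confusion being referred to.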
In this case nobody has right or wrong weather forecasts. Good luck!