Comments on James' Empty Blog: Probability, prediction and verification VI: Verification

2006-02-20 02:24
Well, definitions of aleatory uncertainty aren't always very clear, but give me good enough knowledge of the initial conditions and I can predict the future of the Lorenz model as far ahead as I want... and given enough knowledge to also build a good model of the weather/climate system, the same is true in real life.
- James Annan

2006-02-19 14:38
James- Thanks. Just a quick follow-up on the notion of aleatory uncertainty. You seem to be defining this term to mean randomness. I'd suggest that aleatory uncertainty includes all of those uncertainties that cannot be reduced through new knowledge. From this perspective, randomness is thus an example of aleatory uncertainty, but not the same thing.
- Anonymous

2006-02-19 11:04
Roger,

That's an interesting set of questions that almost justifies a new post, but:

1. Stationarity would be nice, but it only really applies in a situation of frequentist, aleatory uncertainty - which means it is always an abstraction of the real world. The reliability of a specific forecast is not knowable even in that best case.

2.
The Lorenz model is a prime example of uncertainty in a deterministic context. That's not aleatory uncertainty! Randomness in forecasting is a useful way of coping with our uncertainty; it is not intrinsic to the system.

3. I was specifically looking for examples of verification of climate models, but yes, I could have also mentioned Gray. It's obviously hard to differentiate between clever and lucky forecasts in any one-off situation, but it seems reasonable to prefer people with a hypothesis (model) which fits a wide range of conditions, versus someone who gives some ad-hoc prediction with no testable method behind the claim (and that goes double when they refuse to bet on their forecast, or make it just happen to fit the consensus over the plausible betting horizon of 20 years or so).

4. I've talked about skill before <A HREF="http://julesandjames.blogspot.com/2006/01/probability-prediction-and_14.html" REL="nofollow">here</A> and <A HREF="http://julesandjames.blogspot.com/2006/01/probability-prediction-and_17.html" REL="nofollow">here</A>, and Chris Randles made similar points about the baseline. Of course, just happening to extrapolate a 30-year trend for 30-50 years into the future is probably a good forecast right now, but it's only through the models that we know these time scales to be appropriate. Anyone who wants to argue for some sort of trend extrapolation as a baseline for measuring skill would have to show that it would have outperformed stationarity in the past. It would be an interesting question to investigate further. Certainly very simple models give a good simulation of global temperature when forced appropriately; it is not clear to me that GCMs add a great deal of skill on top of that.

5. Obviously value is what ultimately matters, and that depends on the user(s).
Given that current long-term predictions are completely ignored irrespective of what they say, it is hard to argue that they have any value whatsoever :-( However, some people are doing stuff that actually has users over shorter time scales, and I'm hoping to head in that direction myself. If the <A HREF="http://julesandjames.blogspot.com/2006/02/alarming-new-research.html" REL="nofollow">Tyndall Centre's millennial assessment</A> stops someone from putting a nuclear waste store within a few metres of the coast, then it might prove to be very valuable indeed, despite the sarcastic comments I made about its relevance to mitigation.
- James Annan

2006-02-19 05:29
James-

Excellent post, a few questions:

1) There seems to be an assumption of statistical stationarity here. What about verification in the context of non-stationarity?

2) You assert that forecast uncertainty is not aleatory. I find this implausible; what about Lorenz? Do you really think that forecasts can be made deterministically?

3) You positively cite Hansen's 1980s forecasts, but what about Bill Gray's equally accurate forecasts of increased hurricane activity? How to differentiate forecasts verified for the right reasons from the others?

4) You don't include the notion of skill here, which requires a naive baseline. Choice of the naive baseline matters for understanding forecast "goodness" -- how to choose this baseline? On ENSO forecasts, Knaff and Landsea claim that climatology is overly simplistic since ENSO is cyclical.
Should there be a trend line as the naive forecast of future temperature, or is stationarity appropriate?

5) Finally, Murphy differentiates between forecast quality, skill, and value as qualities of forecast goodness. If the ultimate goal is to make forecasts useful to people who make decisions, isn't this degree of precision warranted in such discussions?

Thanks!
- Anonymous
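The baseline question running through points 4 and 5 above can be made concrete with a small sketch. This is an illustration only: the numbers are invented, not real forecast or temperature data, and the `skill_score` helper is just the standard mean-squared-error form of a skill score, 1 - MSE(forecast)/MSE(baseline). The point it demonstrates is the one in the exchange: the same forecast can look skilful against a stationary climatology baseline and unskilful against a trend-extrapolation baseline.

```python
def mse(pred, obs):
    """Mean squared error between two equal-length sequences."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(forecast, baseline, obs):
    """MSE skill score: 1 is perfect, 0 is no better than the baseline,
    negative is worse than the baseline."""
    return 1.0 - mse(forecast, obs) / mse(baseline, obs)

# Invented illustrative data: observed anomalies with an upward trend.
obs = [0.1, 0.15, 0.25, 0.3, 0.4]

climatology = [0.0] * len(obs)               # stationary baseline (long-term mean)
trend = [0.1 + 0.07 * i for i in range(5)]   # linear trend-extrapolation baseline
forecast = [0.12, 0.18, 0.22, 0.33, 0.38]    # some model forecast

print(skill_score(forecast, climatology, obs))  # positive: beats stationarity
print(skill_score(forecast, trend, obs))        # negative here: trend is a tougher baseline
```

With a trending observed series, nearly anything beats the stationary baseline, which is exactly why the choice of naive baseline has to be justified (e.g. by showing trend extrapolation would have outperformed stationarity in the past) before a skill claim means much.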