This paper in Science has had a surprisingly muted reaction in the blogosphere. It's almost as if climate scientists aren't supposed to validate their methods and/or make falsifiable predictions.
In contrast to those rather underwhelmed posters, I think it's a really important step forwards, not just in terms of the actual prediction made (which, to be honest, is not all that exciting) but what it implies about how people are starting to think more quantitatively and rigorously about the science of prediction. Of course the Hadley Centre is well placed for this trend given their close links to the UKMO. I could probably do the odd bit of trivial nit-picking about the paper if I felt like it, but that would be churlish in the absence of a better result. I am sure they are well on the way to improving their system anyway (the paper was submitted way back in January).
A quick note about the forecast "plateau" in temperatures that was the focus of much of the news coverage: the central forecast may stay slightly below the 1998 observed peak until 2010, but the spread around this forecast assigns significant probability to a higher value. If one assumes that the annual anomalies (relative to the forecast mean) are independent, with each of 2008 and 2009 having a 30% chance of exceeding 1998 (just from eyeballing their plot), then that gives a 50% chance of a new record before 2010; and if 2010 itself has roughly even odds, as the warming central forecast suggests, that rises to 75% including 2010, which is virtually the same as what I wrote here.
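A back-of-envelope sketch of that arithmetic (the per-year chances are my eyeballed assumptions from the paper's plot, not published values):

```python
# Eyeballed per-year chances of beating the 1998 record (assumptions,
# not numbers from the paper): ~30% each for 2008 and 2009, and
# roughly even odds for 2010 as the central forecast warms.
p = {2008: 0.30, 2009: 0.30, 2010: 0.50}

def p_record_by(years, probs):
    """P(at least one year sets a record), assuming independent years."""
    p_none = 1.0
    for y in years:
        p_none *= 1.0 - probs[y]
    return 1.0 - p_none

print(f"{p_record_by([2008, 2009], p):.0%}")        # 51%, i.e. about 50%
print(f"{p_record_by([2008, 2009, 2010], p):.0%}")  # 76%, i.e. about 75%
```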
2 comments:
>"what it implies about how people are starting to think more quantitatively and rigorously about the science of prediction"
Ok I'll bite. What does it imply?
If the estimate of a year hotter than 1998 is still around 75% for up to and including 2010, then the yes/no outcome doesn't do much to verify the 75%. They are going to need a lot of that sort of prediction to do any verification of their estimated probabilities.
So does it show people are starting to think more quantitatively and rigorously? If so, how?
Maybe it would be obvious that they are (and how) if I had access and read the paper?
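To make the verification point concrete, a standard way to score a pile of probabilistic yes/no forecasts is the Brier score; a minimal sketch (all numbers invented purely for illustration):

```python
# Brier score: mean squared error of forecast probability vs. 0/1
# outcome. 0 is a perfect score; always saying "50%" earns 0.25.
def brier_score(probs, outcomes):
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A single 75% forecast that verifies tells you very little:
print(brier_score([0.75], [1]))          # 0.0625

# A larger collection starts to separate calibration from luck:
probs    = [0.75, 0.30, 0.90, 0.60, 0.20, 0.80]
outcomes = [1,    0,    1,    1,    0,    1   ]
print(brier_score(probs, outcomes))      # ~0.067
```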
Chris,
This paper nicely illustrates people actually building a prediction system and testing its performance in a series of decadal hindcasts (e.g., how well it would have forecast 1995 to the present, based only on data prior to 1995). It's an almost exact analogue of what weather prediction systems do, and how they are tested. Now it wouldn't do to be too harsh on the ~100 year climate science types, as it's a much harder problem in that case. But it's nice to see people pushing the window of genuine forecast validation out to 10 years. As I've said before, I think there's plenty of opportunity for learning from those who make weather/climate forecasts for a living, and who have to get them right!
I could illegally send you a copy if you want. It's not actually that exciting a read though...just "this is the performance of our prediction system" really. The system itself looks fairly standard...the achievement is really just that they have put it together, and found that it works reasonably well (certainly a useful benchmark to measure future improvements against).
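For anyone curious what that hindcast-style validation looks like in practice, here is a schematic sketch; `run_model` and `observed` are hypothetical stand-ins, not the actual Hadley Centre system:

```python
# Schematic of rolling decadal hindcast validation: for each start year,
# initialise the model using only data available before that date, run
# it forward a decade, and score against what actually happened.
import math

def hindcast_rmse(observed, run_model, start_years, horizon=10):
    """RMS error of decadal hindcasts, pooled over all start dates."""
    sq_errors = []
    for y0 in start_years:
        past = {y: t for y, t in observed.items() if y < y0}  # no peeking
        forecast = run_model(past, y0, horizon)  # maps year -> anomaly
        for y in range(y0, y0 + horizon):
            if y in observed and y in forecast:
                sq_errors.append((forecast[y] - observed[y]) ** 2)
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```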