...is the title of an
interesting TV programme that was on BBC 4 last night. It is quite amazing that they dared to show such a maths/stats/science-heavy programme at prime time, albeit on a minor channel, so I will start by commending them for that (the inevitable grumbles follow later). The three numbers they featured were the 0.85°C of warming since the 1880s, the 95% confidence that anthropogenic influence had caused most of this, and the 1 trillion tonnes of carbon that would take us to about 2°C of warming. I think it was originally planned to be three 30-minute programmes, but they ran it all together as one long piece, which seemed to me to work well.
I think they told the stories in an engaging manner, and there was also lots of interesting historical stuff about how our understanding of the climate system has developed, which was mostly very well done and would probably have been even more interesting had I not already known it! But of course I was hardly the target audience.
In fact one of the researchers making the programme contacted me last year to talk about Bayesian vs frequentist approaches to detection and attribution, specifically the IPCC's statement attributing most of the warming of the last century to anthropogenic effects. Unfortunately I wasn't able to be very encouraging about the idea of explaining the differences between the Bayesian and frequentist approaches to the general public; after all, most climate scientists struggle with this question, as is demonstrated by the IPCC's misrepresentation of D&A results! I've
written on this (really must update my web pages, that link won't last for ever...or will it?) but the argument has little traction even in the climate science community because most people are quite content to continue in their comfortably-erroneous way.
Anyway, the Bayesian thing didn't make it into the transmitted programme, which I was neither surprised nor disappointed about, as I really can't see how to present it in such a way that the general public would get anything out of it. And the traditional misrepresentation of the probability of observations more extreme than those observed, given the null, as the probability of the null given the observations, was heavily featured (that's basically where the 95% comes from). Sigh.

But what I really want to grumble about most strongly is the garbled and nonsensical representation of Kalman filtering in the first section, which, contrary to the claims in the programme, is not a method for checking observations against each other and has not been used for temperature data homogenisation. The Kalman filter is actually used for updating a model prediction with new observations, and this is how it was used for space navigation. That is, based on current estimates of position and velocity at time t1, the equations of motion are used to predict the position and velocity at a subsequent time t2, and then imperfect observations of the position at t2 are used to update the estimates of position and velocity, and so on ad infinitum.
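For the curious, that predict/update loop is simple enough to sketch in a few lines of code. Here's a minimal illustration in Python, assuming a 1-D constant-velocity model with entirely made-up noise levels (none of this comes from the programme, obviously):

```python
import numpy as np

dt = 1.0                      # time step between observations
F = np.array([[1.0, dt],      # state transition: equations of motion
              [0.0, 1.0]])    # (new position = x + v*dt, velocity unchanged)
H = np.array([[1.0, 0.0]])    # we observe position only, not velocity
Q = np.eye(2) * 1e-4          # process noise covariance (assumed)
R = np.array([[0.5]])         # observation noise covariance (assumed)

x = np.array([[0.0], [1.0]])  # initial estimate: position 0, velocity 1
P = np.eye(2)                 # initial uncertainty in that estimate

def kalman_step(x, P, z):
    # Predict: propagate the state estimate and its uncertainty to t2
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the imperfect position observation z
    y = z - H @ x_pred                   # innovation (obs minus prediction)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

# Feed in noisy position observations, one per time step, "ad infinitum"
rng = np.random.default_rng(0)
for t in range(10):
    true_pos = 1.0 * (t + 1) * dt        # truth: unit velocity from origin
    z = np.array([[true_pos + rng.normal(0, 0.7)]])
    x, P = kalman_step(x, P, z)
print("estimated position, velocity:", x.ravel())
```

The point is that the filter blends a physical prediction with a noisy measurement, weighting each by its estimated uncertainty - nothing to do with cross-checking observations against each other.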
Ok, pedants may observe that NCEP has pioneered the use of an ensemble Kalman filter for its 20th century reanalysis project, but this is somewhat tangential to climate change, and their results, interesting as they are, have their own homogenisation problems and are hardly central to the debate on global warming. Ironically, Doug McNeall (who was involved as a scientific consultant, though I'm not blaming him for anything in particular) tweeted a link to the Wikipedia page on Kalman filtering, which is a much better resource for anyone interested in learning more about the topic. Anyway, I'm really baffled as to where this bit came from - maybe they just couldn't resist a link to “rocket science” :-) Or did someone think “filter” might be related to filtering out bad data? Well, it isn't.
The “pixel sticks” were very clever, but I don't really think a line graph is improved by drawing it on wobbly axes, especially if a straight-line trend is then drawn through the data! I wonder if Doug will feature that on his
Better Figures blog :-) And as for the presenters spending most of their time walking away from the camera...I'm probably sounding like a grumpy old man, so I'd better stop. As I said, I think it was pretty good overall, but if you want a mathematical/statistical programme that really doesn't make any concessions to dumbing down, and that does cover climate change (and Bayesian statistics) on occasion, I strongly recommend “
More or Less” on Radio 4.
Update: Oh,
this is interesting. It's a blog post about the programme from the mathematician (Norman Fenton) who presented the 95% section. Turns out he is actually a Bayesian who clearly understands how that number is tarnished by the prosecutor's fallacy, and he argues that the scientific debate would be improved by a greater use of Bayesian methods!
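Since the distinction between P(obs | null) and P(null | obs) is exactly what gets muddled, here's a toy calculation with entirely hypothetical numbers (not the real D&A ones) showing just how different the two can be:

```python
# Toy illustration of the prosecutor's fallacy, with hypothetical numbers.
# A small P(obs | null) does not imply a small P(null | obs).
p_obs_given_null = 0.05  # chance of data this extreme if the null is true
p_obs_given_alt = 0.50   # chance of such data if the null is false (assumed)
p_null = 0.80            # prior probability of the null (assumed)

# Bayes' theorem: P(null | obs) = P(obs | null) * P(null) / P(obs)
p_obs = p_obs_given_null * p_null + p_obs_given_alt * (1 - p_null)
p_null_given_obs = p_obs_given_null * p_null / p_obs
print(f"P(null | obs) = {p_null_given_obs:.2f}")  # 0.29, not 0.05
```

With a prior that favours the null, a 5% tail probability still leaves the posterior probability of the null at nearly 30%, so “95% confidence” simply doesn't follow from the frequentist calculation alone.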