Our brief foray into the last millennium was published recently in Climate of the Past. I didn't want to tread on too many toes, so I stayed well clear of any attempt to generate a reconstruction of past climate, instead focussing purely on methodological issues. I'm interested in ensemble-based data assimilation methods and have been following the pioneering work of Hugues Goosse in applying these ideas to climate reconstruction. The two main questions the work tried to address were: (1) how well is it possible to reconstruct the climate from a handful of sparse and imprecise observations, and (2) are ensemble-based methods a viable approach for this? Rather than using real proxy data, this investigation was a purely synthetic experiment in which pseudoproxy data are taken from a model simulation. This makes it easy to check the performance of the algorithm, since we know what the real answer is supposed to be.
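For anyone unfamiliar with the pseudoproxy idea, the recipe is basically: pick a few grid points from the model output, add noise, and pretend those noisy series are your proxy network. Here's a minimal sketch in Python; the grid size, proxy count and noise level below are placeholders for illustration, not the values we actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for simulated surface temperature: 1000 years on a coarse grid
# (the dimensions and numbers here are purely illustrative).
n_years, n_lat, n_lon = 1000, 36, 72
truth = rng.standard_normal((n_years, n_lat, n_lon))

# A "handful" of proxy sites, picked at random from the grid.
n_proxies = 15
sites = [(rng.integers(n_lat), rng.integers(n_lon)) for _ in range(n_proxies)]
signal = np.stack([truth[:, i, j] for i, j in sites], axis=1)

# Degrade the true series with white noise at a chosen signal-to-noise ratio
# (paleoclimate convention, so noise amplitude = signal / snr).
snr = 0.4
noise_sd = signal.std(axis=0) / snr
pseudoproxies = signal + rng.standard_normal(signal.shape) * noise_sd

# Because the "truth" is known, any reconstruction built from the
# pseudoproxies can be scored directly against it.
```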
A pessimistic interpretation of our results would be that rather little can be learnt from the scarce data available before about 1500AD, although the performance was significantly better in the presence of external forcing (with its associated large-scale response) than when we just considered a control run with internal variability alone (which places much greater emphasis on regional variability). With a global change, even sparsely distributed data can capture the overall picture pretty well, but internal variability of the climate system generates such small-scale patterns that you need local and accurate data to have much idea of what is going on.
These results aren't due to any peculiarity or limitation of the particular method we used, but are fundamental constraints arising from the low information content of a handful of proxies. One thing I hadn't really thought through before was the implication of the limited accuracy of the proxies. A typical "signal to noise ratio" of 40% (using the paleoclimate convention for this term) means that the uncertainty is 2.5 times larger than the signal, so it takes quite a lot of proxies, averaged together, to reduce the error to a useful level. We didn't even consider the realistic possibility that the errors might be correlated across different proxies (e.g. due to a large-scale precipitation anomaly, or even calibration issues).
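The arithmetic of averaging makes the point starkly enough. A quick back-of-the-envelope calculation, assuming the optimistic case of independent white noise on each proxy:

```python
import math

snr = 0.4                      # paleoclimate-convention signal-to-noise ratio
noise_over_signal = 1 / snr    # for a single proxy, the noise is 2.5x the signal

# With independent errors, averaging n proxies shrinks the noise by sqrt(n).
for n in (1, 5, 10, 25, 50):
    print(f"{n:3d} proxies -> noise/signal ~ {noise_over_signal / math.sqrt(n):.2f}")

# Just getting the averaged noise down to the size of the signal takes about
# (1/snr)**2 ~ 6 proxies; getting it down to half the signal takes ~25.
```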
When there were lots of proxies (as in the last few hundred years), the ensemble method technically failed, in that the ensemble collapsed. This is a well-known phenomenon with this method, but in fact the mean estimate was still quite good; it was just that the (predicted) uncertainty was rubbish. So the method's practical value may exceed its theoretical performance in some cases. That was a surprisingly positive result, to me at least.
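For what it's worth, the collapse itself is easy to reproduce in a toy setting. The sketch below uses a simple importance-weighting (particle-filter-style) update, which is not necessarily identical to the scheme in our paper, but it shows how the effective ensemble size crashes as the number of assimilated observations grows:

```python
import numpy as np

rng = np.random.default_rng(1)

n_ens = 100                                          # hypothetical ensemble size
truth = rng.standard_normal(500)                     # true values at 500 notional proxy sites
ensemble = rng.standard_normal((n_ens, truth.size))  # prior ensemble with the same spread
obs_error = 2.5                                      # noise 2.5x the signal, as above

def effective_size(weights):
    """Kish effective sample size: drops towards 1 as the weights degenerate."""
    return 1.0 / np.sum(weights ** 2)

for n_obs in (5, 20, 50, 200, 500):
    obs = truth[:n_obs] + obs_error * rng.standard_normal(n_obs)
    # Gaussian log-likelihood of each ensemble member given the pseudo-observations
    loglik = -0.5 * np.sum((ensemble[:, :n_obs] - obs) ** 2, axis=1) / obs_error ** 2
    weights = np.exp(loglik - loglik.max())
    weights /= weights.sum()
    print(f"{n_obs:3d} obs -> effective ensemble size ~ {effective_size(weights):.1f}")
```

Once the effective size is down to one or two members, the quoted uncertainty is meaningless, even though the surviving members may still track the truth reasonably well.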
Although it was interesting doing the work, I don't expect to do much more along these lines in the near future. There is simply too much other higher-priority work to be done. It does perhaps help to provide some perspective on the hype surrounding this paper. There is simply no way that a local proxy can provide a meaningful estimate of the hemispheric, let alone global, temperature.
The manuscript managed to get quite high up on the "most commented papers" page. This was not due to any particular notoriety or controversy, but simply due to the fact that three reviewers and another commenter all made useful contributions, to which I responded individually. The open review system seems to be working pretty well, I'd say. It's a shame that the AGU hasn't taken the opportunity to do something more radical with its recent reorganisation of its publications. Hooking up with a conventional profiteering toll-access publisher (and one with some fairly unsavoury activity in its recent history) is, I suppose, the easy option for a bunch of conservative greybeards, but I can't help but think of it as a missed opportunity.