This ridiculous paper has already been eviscerated by Tamino, RC, and mt, so I won't waste too much time on it, but I have spotted one more error that no-one else has commented on so far, which I'll deal with before I get to the main point of my post.

So first, the error. It's not as significant as the one Tamino deals with, but here it is anyway. Paragraph 30 reads as follows:

[30] For the 30 years prior to the 1976 shift (i.e., 1946–1975) the SOI averaged +1.93 but in the 30 years after 1976 (i.e., 1977–2006) the average was −3.06, which represents a shift from a La Niña inclination to an El Niño inclination. The standard deviations for the two periods were 9.48 and 10.40 on monthly SOI averages, and 6.56 and 6.35 on calendar year averages, which indicates consistent variation about a new average value. Only the RATPAC-A data are available for lower tropospheric temperatures both before and after this shift, and even then we are limited to 17-year periods for our analysis of RATPAC-A data because monitoring did not commence until mid-1958. From 1959 to 1975 the RATPAC LTT averaged −0.191°C and from 1977 to 1993 it averaged +0.122°C. The standard deviations on the seasonal data were 0.193° and 0.163 C°, and on monthly data 0.162°C and 0.146°C. We have already illustrated the close relationship between SOI and GTTA, but this description of the respective changes before and after the Great Pacific Climate Shift indicates a stepwise shift in the base values of each factor but otherwise relatively consistent ranges of variation.

(SOI and RATPAC are time series; their precise definitions are irrelevant to my point.)

So, to parse this clearly: the authors are claiming that when the mean of the first half of a time series differs from the mean of the second half, but the variability within each interval is about the same, this indicates that there was a step shift in the middle.

Let's take a linear trend plus noise, y = at + e, where t (time) runs from -T to T, and e is any additive noise with variance s². The expected mean over the first half [-T, 0] is -aT/2, and the mean over the second half [0, T] is aT/2. The standard deviation of the first half is sqrt(a²T²/12 + s²), where the two contributions come from the linear trend and the noise respectively. The standard deviation of the second half is, um, sqrt(a²T²/12 + s²). In other words, when the means of the first and second halves of a time series differ, but the variability does not, this tells us precisely nothing about whether there was a step change or just a linear trend. Ooops.
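This is easy to check numerically. The following is a minimal sketch of my own (not from the paper, and the amplitudes and noise level are arbitrary choices): generate a pure linear trend plus noise and a pure step change plus noise, then apply the paper's own diagnostic of comparing half-series means and standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = np.linspace(-1.0, 1.0, n)
noise_sd = 0.1

# Two series with the same half-series means (-0.25 and +0.25 in expectation):
trend = 0.5 * t + rng.normal(0, noise_sd, n)           # linear trend + noise
step = 0.25 * np.sign(t) + rng.normal(0, noise_sd, n)  # step change + noise

for name, y in [("trend", trend), ("step", step)]:
    first, second = y[: n // 2], y[n // 2:]
    print(f"{name}: means {first.mean():+.3f} / {second.mean():+.3f}, "
          f"sds {first.std():.3f} / {second.std():.3f}")
```

Both series show a shifted mean with near-identical standard deviations in the two halves, exactly as the formulas above predict, so the paper's "consistent variation about a new average value" criterion cannot distinguish a step from a trend.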

I hate to think what they might have done were it not for Craig Loehle's graciously acknowledged assistance with the statistical analysis. I'm sure he is delighted to be associated with this sorry mess of a paper.

Now to the real point, which is that the AGU journals seem to have become rather prone to publishing this sort of nonsense recently (remember Schwartz, and Chylek and Lohmann, to name but two). Although of course no system will ever be infallible (and a system that blocked out all the mistakes would block a lot of interesting and important stuff too), the errors in these papers are so blindingly obvious that it is hard to believe that any reasonably diligent and competent reviewers would miss them.

When you submit a paper to an AGU journal, you are asked to suggest 5 reviewers. It's a common enough practice (pretty much ubiquitous) that helps the editor, who may not be well acquainted with the particular subfield that the paper addresses. However, it also serves as an open invitation to game the system by suggesting people who you think are likely to be particularly generous and uncritical. Of course any editor worth his (or her) salt should also look outside this list, especially if he suspects that the authors have played this game. But if editors have a lot of papers to deal with, and no real stake in the outcome, they might not bother.

I'd like to see AGU editors attach their names to the papers they handle. This seems to be standard practice in the EGU journals, which have not (AFAIK) suffered from this sort of nonsense. This leaves the editors somewhat accountable for the mistakes they make, and any pattern of repeated carelessness would be easily spotted. Of course, the main responsibility lies with the authors and reviewers, but as things stand, it seems like a small clique can publish anything they want so long as they all pat each other on the back. Peer review isn't well set up to deal with deliberate gaming of the system.

Of course, under the EGU's open review system, the gaping holes in this paper would have been spotted very quickly and it would never have been published.