For various reasons mostly related to the IPCC rumour-mill, the "hiatus" (seems to be the politically correct term these days) in global temperature is in the news again. Which brings to mind this manuscript which was rejected by GRL a few years ago (and which I just put on the arXiv a few days ago):
Our results indicate cause for concern regarding the consistency between climate model projections and observed climate behavior under conditions of increasing anthropogenic greenhouse-gas emissions.
The analysis was extensively discussed back at the time, and the paper submitted to (and rejected by) GRL at about the same time. From memory, it got quite an involved treatment from the reviewers. Rejection from GRL isn't something I can get too worked up over. I'm confident that the paper was fundamentally correct, worth publishing, and that it would have had plenty of impact. However, the peer review filter is pretty noisy at journals like GRL with high rejection rates, and decisions can't be parsed too finely. I did subsequently encourage submission to other journals, but for various reasons that didn't happen. Of course it's easy for a minor author to encourage other people to do the work for a new submission :-) In case it isn't already clear, my listing as last author is not an indication that I'm the Machiavellian brains masterminding this nefarious plot to discredit climate models, but instead just a fair reflection of the minor magnitude of my contribution.
3 years later, it seems reasonable to conclude that our main error was merely in being several years ahead of the rest of the field.
22 comments:
Do the data since the time the paper was written alter the conclusions? Can you do a similar type of analysis for other climate variables (sea level rise, ocean temp, sea ice)?
Lucia has done that at the Blackboard. McIntyre also did it in his latest post.
"3 years later, it seems reasonable to conclude that our main error was merely in being several years ahead of the rest of the field."
Another way of putting that is that there wasn't yet enough evidence to support the thesis. And it didn't help that at the time of review, El Niño was blowing away the evidence that was claimed.
But my main objection at the time was that you were testing weather trends against model variability, not weather variability. That could at best lead to a conclusion that either trends were out of line, or weather was more variable than the models suggest.
I see that Fyfe et al, whose publication seems to have aggrieved Lucia, used HADCRUT realisations, not model variability.
Hmm, I realise my previous comment was kind of dumb, since climate sensitivity is defined as the response of surface temperature (only) to CO2.
I still like the idea of looking at how other parameters are tracking against forecasts though, since they are less bumpy than surface temps. Perhaps we need to be paying attention to 'ocean heat sensitivity' as much as climate sensitivity.
SCM,
It is widely acknowledged that the models don't represent all major sources of sea level change (esp. ice sheet changes), so an error is pretty likely. Ocean temp is worth looking at. A few years ago the common opinion was that models mixed a bit too much heat into the ocean; now people are using the reverse argument to explain the hiatus. RC sometimes posts on this.
Nick, are you serious? El Niño has blown away the evidence that models over-estimate warming? Wow. Why weren't we told.
"Nick, are you serious? El Niño has blown away the evidence that models over-estimate warming?"
It blew it away for the period of the GRL consideration of your paper. You showed at end 2009 that observed 5-15 yr trends were approaching a region suggesting significant difference from models. But four months later, with the 2010 El Niño, they were no longer near that region.
Nick,
The numbers through July 2013 are presented here:
http://www.cato.org/blog/ipcc-pretty-much-dead-wrong
Still pretty much hugging the 2.5th percentile.
-Chip
Nick: But four months later, with the 2010 El Niño, they were no longer near that region.
In short, place an outlier at an endpoint of an OLS trend estimate, and you distort the result.
Brilliant work.
Thanks for the reply James.
I played with subtracting the ice sheet contribution (from Shepherd 2012) from the SLR (data from CMAR) in the hope of getting somewhere close to the thermal expansion component.
After doing this, the SLR is in the upper half of the projection range of the A1FI scenario (without ice-sheet contribs). I think this scenario is the closest to real emissions. It does stay within the A1FI envelope, though, rather than frequently hopping above it as it does with the full SLR.
Ocean heat / temp would be good to look at, but I haven't tracked down any projections for that yet. Perhaps there will be lots of exciting things to read about these things in the WG1 report when it comes out next week.
Chip, Carrick
To be fair to Nick I think he was just suggesting this may have influenced how the paper looked at the time when it was being reviewed.
Why don't you re-jig your paper and resubmit with the newer data if it makes the conclusions stronger? It might get a better run from the referees now.
The paper was submitted early in 2010; those data simply didn't exist at that time (even from the reviewers' POV, and IIRC none of them raised this as an issue).
I suppose we could write a short note saying "we agree with Fyfe et al", but I'm not sure who would be willing to publish it...
SCM said...
"Chip, Carrick
To be fair to Nick I think he was just suggesting this may have influenced how the paper looked at the time when it was being reviewed."
Thanks, SCM. Indeed my first comment said:
"And it didn't help that at the time of review, El Nino was blowing away the evidence that was claimed."
But as to
Carrick said...
"In short, place an outlier at an endpoint of an OLS trend estimate, and you distort the result"
Well, you can see the true situation here. It shows, with color, trends from each start year to each end year. Chip et al.'s plot corresponds to a vertical line up from end 2009, terminating at end 2004 on the y-axis (click for details). It's pretty blue. 2010 leads into the greener strip adjacent. From that perspective, the blue 08/09 stripe is the outlier, and there is a kind of repetition lately, but it's been very up and down.
(previous comment deleted to repair link. It may still fail because blogspot encodes [ and ]. If so, please try this)
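In case the link still misbehaves, the underlying calculation is simple enough to sketch. A minimal version in Python, assuming hypothetical arrays `annual` (annual mean anomalies) and `years` (the matching years), not the actual data behind the linked plot:

```python
import numpy as np

def trend_matrix(annual, years):
    """OLS trend (degrees per decade) from each start year to each end year.

    Returns an upper-triangular matrix M with M[i, j] holding the slope
    over annual[i:j+1]; colour-mapping M gives the start-year vs end-year
    picture described above."""
    n = len(annual)
    M = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(i + 2, n):  # need at least three points for a trend
            M[i, j] = 10.0 * np.polyfit(years[i:j + 1], annual[i:j + 1], 1)[0]
    return M
```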
SCM: To be fair to Nick I think he was just suggesting this may have influenced how the paper looked at the time when it was being reviewed.
Well what Nick actually said was "El Niño has blown away the evidence that models over-estimate warming" and then "But four months later, with the 2010 El Niño, they were no longer near that region".
These sound more like rather bold assertions to me than inquiries. Regardless, the answer does seem to be "no, nobody raised this issue", so I guess the issue Nick is raising is not relevant to considerations of why the paper was rejected.
From my perspective, there are two problems with Nick's assertions.
One, data for April 2010 only show up five-plus months after submission, not at the start of the fourth month. So the second assertion should be "But five+ months later". (As James comments, these data simply weren't available during the review process.)
Secondly, any sensible person should see an ENSO event for what it was, and not demand that an outlier sitting at an end point be included in an OLS fit.
One should not place any value in an OLS fit starting with January, 1998. Nor should one place any value in an OLS fit ending in April, 2010.
Also, Nick: Figure 1 of the paper goes from 5 to 15 years. So the start year is 1995, not 1999 as you show in your browser.
I've looked briefly at the influence of adding the additional four months to an otherwise 15-year trend. It appears to change the slope from 0.107°C/decade to 0.110°C/decade.
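That check is easy to reproduce. A minimal sketch, assuming `monthly` is a hypothetical array of monthly anomalies starting January 1995 (not the actual series used above):

```python
import numpy as np

def ols_trend(y, months_per_year=12):
    """OLS slope of a monthly series, in units per decade."""
    decades = np.arange(len(y)) / (10.0 * months_per_year)
    return np.polyfit(decades, y, 1)[0]

# trend_15yr  = ols_trend(monthly[:180])   # Jan 1995 - Dec 2009
# trend_plus4 = ols_trend(monthly[:184])   # ... plus Jan-Apr 2010
# print(trend_plus4 - trend_15yr)          # sensitivity to the extra months
```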
Carrick,
"I guess the issue Nick is raising is not relevant to considerations of why the paper was rejected"
The reviewers may not have done the calculations needed to be the basis for rejection. But even if they checked one or two trends, that would be enough to make them wonder about the durability of the result. They rejected because of doubts about whether the criteria were appropriate; it's easier to have those thoughts if you know the weather has been warming.
"So the second assertion should be "But five + months later"."
The paper has Dec 09 results, so it was presumably submitted in 2010. It was refereed and rejected; then James came on as an author, and it was rewritten and resubmitted, with four reviewers. I know GRL likes to hurry things along, but that takes a few months. And January 2010 was already a very warm month. Three months of warming is less conclusive, but not nothing.
"any sensible person should see an ENSO event for what it was, "
Yes, but 08/09 was also an ENSO event. This actually brings up my basic objection to this sort of analysis. I showed here the ACFs of the temperature data. Stochastic theory says that the trend uncertainty should be essentially proportional to the area underneath. ARMA models approximate the central peak, and taper rapidly to zero. But the actual ACF doesn't taper off - it keeps oscillating, and in fact has a much larger area. The oscillations seem to be primarily ENSO.
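To put numbers on that, here is a minimal sketch of the calculation I have in mind (Python; `x` stands for a hypothetical monthly anomaly series, and the lag cutoff is an arbitrary choice):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of a 1-D series out to max_lag."""
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

def effective_n(x, max_lag=120):
    """Two estimates of the effective sample size for trend uncertainty.

    The AR(1) shortcut sees only the central peak of the ACF; summing the
    full ACF (the area under the curve) also counts the oscillating ENSO
    tail, which makes the effective n smaller still. In practice max_lag
    should be truncated where the ACF becomes noise."""
    r = acf(x, max_lag)
    n = len(x)
    n_ar1 = n * (1 - r[1]) / (1 + r[1])
    n_full = n / (1 + 2 * r[1:].sum())
    return n_ar1, n_full
```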
Now Michaels et al. did not fit such a model, but used the statistics of model run trends, with this assumption:
"Our working hypothesis is that these random processes operate to influence model trends to the same degree as they do observed trends. Therefore, we assume that the model trend distributions represent the spread of potential realities (including these uncertainties), of which the single realization of the observed trend is a member."
It's a big assumption, especially with CMIP3 models, which often don't do ENSO very well. So it seems to me that the fact that the paper's contentions are only sometimes true suggests that the assumption is shaky. This may have occurred to the reviewers too.
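For concreteness, the test itself amounts to something like this sketch (Python; `obs_trend` and `model_trends` are hypothetical stand-ins for the observed trend and the individual model-run trends over the same period):

```python
import numpy as np

def trend_percentile(obs_trend, model_trends):
    """Percentile rank of the observed trend within the model-run trend
    distribution: per the working hypothesis quoted above, the single
    observed realization is treated as one more draw from that spread."""
    model_trends = np.sort(np.asarray(model_trends))
    rank = np.searchsorted(model_trends, obs_trend)
    return 100.0 * rank / len(model_trends)
```

An observed trend hugging the 2.5th percentile, as Chip says above, is what flags the inconsistency; whether the model spread really stands in for weather variability is exactly the assumption in question.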
"So the start year is 1995, not 1999 as you show in your browser. "
Yes, this is the Blogger encoding problem that I worried about. I tried to direct to the post-1989 plot, but the browser didn't implement the Javascript. You can do it manually.
Carrick,
A few follow-up points,
I've made the link work as it should.
On
"So the second assertion should be "But five + months later"."
there was a discussion on this very blog in late May 2010. Chip joined in. Hank Roberts asked if it had been accepted. No answer, but a clear rejection would have been relevant information. So yes, 5+.
I missed "I've looked briefly at the influence of adding the additional four months...". The 15yr trend is not controversial, and is least sensitive to the extra data. I've plotted here the effect at shorter times.
Nick,
It was finally rejected in the first half of 2010. I have the email at work, or I'd give you the exact date.
And just now that Stocker chap was on the radio bemoaning the lack of peer-reviewed literature discussing this topic...
"I suppose we could write a short note saying "we agree with Fyfe et al", but I'm not sure who would be willing to publish it..."
What a pity (never mind!) - oh well, you won't be the first in science to experience this kind of annoyance, and I doubt you'll be the last!
Hi Nick,
Thanks for the comments.
Regarding this:
Yes, but 08/09 was also an ENSO event. This actually brings up my basic objection to this sort of analysis. I showed here the ACFs of the temperature data. Stochastic theory says that the trend uncertainty should be essentially proportional to the area underneath. ARMA models approximate the central peak, and taper rapidly to zero. But the actual ACF doesn't taper off - it keeps oscillating, and in fact has a much larger area. The oscillations seem to be primarily ENSO.
Yes, I think the deficiency of ARMA models is an important point, and one that needs to be further explored. While that wasn't addressed by this paper in 2009, I don't think it's entirely fair to say retrospectively that it's a good reason why the paper shouldn't have been published.
One common alternative method involves computing the RMS value of the Fourier amplitudes, generating uniform random phases from 0 to 2π, then inverse Fourier transforming. Probably this can be improved on, but I would expect it to be superior to ARMA, since it should capture ENSO-related short-period fluctuations.
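A minimal sketch of that surrogate recipe, assuming `x` is again a hypothetical monthly anomaly series:

```python
import numpy as np

def phase_randomized_surrogate(x, rng=None):
    """Surrogate series with the same amplitude spectrum as x but
    uniformly random phases, so ENSO-band power is preserved even
    though the particular wiggles are scrambled."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    amps = np.abs(np.fft.rfft(x - x.mean()))
    phases = rng.uniform(0.0, 2.0 * np.pi, len(amps))
    phases[0] = 0.0       # keep the mean (DC) component real
    if n % 2 == 0:
        phases[-1] = 0.0  # the Nyquist bin must also stay real for even n
    surrogate = np.fft.irfft(amps * np.exp(1j * phases), n)
    return surrogate + x.mean()
```

Trend distributions from an ensemble of such surrogates would then provide the null, in place of an ARMA fit.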
The pause or hiatus seems most clearly connected to the fluctuations of the Southern Oscillation Index (SOI). Over the last 130 years, every major temperature excursion corresponds to an SOI excursion. It seems obvious to compensate the GISS temperature record with a scaled and lagged version of the SOI, also correcting for volcanic disturbances. This is what I get:
http://imageshack.us/a/img69/9159/hpi.gif
In this case, no pause is observed and the TCR holds steady at 2°C for a doubling of CO2.
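A minimal sketch of that kind of compensation (Python; `temp` and `soi` are hypothetical, equal-length monthly arrays, and the volcanic correction would be handled the same way):

```python
import numpy as np

def remove_enso(temp, soi, max_lag=12):
    """Subtract the best-fit scaled, lagged SOI from a temperature series.

    Scans lags of 0..max_lag months, fits temp[t] ~ a*soi[t-lag] + b by
    OLS, and removes the lagged, scaled SOI at the lag with the smallest
    residual variance. Returns (corrected series, lag, scale)."""
    best = (np.inf, 0, 0.0)
    for lag in range(max_lag + 1):
        t = temp[lag:]
        s = soi[:-lag] if lag else soi
        A = np.column_stack([s, np.ones_like(s)])
        (a, b), *_ = np.linalg.lstsq(A, t, rcond=None)
        rss = float(np.sum((t - (a * s + b)) ** 2))
        if rss < best[0]:
            best = (rss, lag, a)
    _, lag, a = best
    corrected = temp.astype(float)
    corrected[lag:] -= a * (soi[:-lag] if lag else soi)
    return corrected, lag, a
```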
It would be really interesting to run this again up to 2013.
I wonder how much difference HADCRUT4 makes to the analysis - will it be much closer to GISS?
Also, for the lower troposphere model output - were these 'masked' so that they have the same coverage as the satellites? I wonder how much difference that makes.