Showing posts sorted by relevance for query heartland.

Monday, February 20, 2012

Yawn

I wasn't going to bother commenting on the "Heartland" leaked documents. I mean, what is the actual story here:

"Morally and intellectually bankrupt right-wing-nut so called `think tank' caught engaging in morally and intellectually bankrupt behaviour shock horror."

Or is it

"Washed up has-beens and never-wases well outside the fringe of science get paid to say things that are demonstrably false."

Hold the presses. On second thoughts, don't bother. I challenge anyone, skeptic or otherwise, to say with a straight face that they actually thought that Heartland has ever been engaged in a genuine good-faith effort to improve the quality of public understanding of climate science.

But on the other hand, since they are threatening to sue anyone who dares to mention the documents...

Go ahead, make my day.

Saturday, September 23, 2017

Currywurst

I had curry for lunch on Thursday.

It was the wurst!

(Actually it was rather good, but we forgot to take a pic so you'll have to make do with this less appealing version from Wikipedia.)



By massive coincidence I saw this tweet from Gavin on the same day:


which quotes from this NY Times article.

Google tells me Curry's been all over this "fundamentally dumb" idea like a rash. It must have seemed like a good wheeze to earmark some funding and publicity for those who can't raise it on the merits of their research. But now that she's obviously been tapped up for membership of the “team”, it's finally dawned on her that she'd have to work with a bunch of crazies and losers who have no idea what the hell they are talking about.

What hasn't dawned on her yet, is that that's where she belongs.

Seriously, who is she trying to kid? This is the very same Judith Curry who infamously puffed some brain-meltingly abysmal drivel by Murray Salby, doesn't know what the word “most” means, and wrapped herself in flags of convenience but couldn't explain what they meant. To name just three episodes early in her blogging career before I gave up even bothering to check what she was saying.

Apropos of not very much, she sent me her CV a couple of days ago.



Wonder why she thought I might be interested in it?

This “red team” stuff is hardly new. Who can forget the “Not the IPCC” report that never saw the light of day? Or the various attempts to set up sceptical journals or scientific societies that are invariably still-born (or more often, never-born)? You'd think they'd work it out eventually. Same shit, different day, as they say in Georgia.

Wednesday, June 06, 2007

From the department of "you couldn't make it up"

This barking mad press release recently appeared in my in-box.

Scientists Rally Around NASA Chief After Global Warming Comments

Mostly it's just the same old motley crew:
Professor Robert Carter observed...Dr. Tim Ball, a Canadian climatologist, responded...Said Ross McKitrick...Dr. Pat Michaels...
But like all the best jokes, they save the punchline for the end:
Finally, Harvard University physicist Lubos Motl praised Griffin's climate comments, calling them "sensible."
If "jumping the shark" refers to the point at which a TV series loses all credibility, perhaps "quoting a Motl" could be analogous in the context of coverage of climate science issues.

Actually, a quick investigation into the "organisation" behind this press release is mildly amusing. The website of the "Science and Public Policy Institute" seems abandoned, but Google links it to the slightly kooky Jill Ungar ("Research interests: Marine mammal care and rehab, especially involving more holistic medical care and less western medicine. Probiotic, herbal, energy work"). However, the first contact name on the press release (Robert Ferguson) runs the similarly-named Center for Science and Public Policy, which, as you can see from its front page, seems to enjoy puffing up the septics and gets puffed in return by the Heartland Institute. 'Nuff said.

Tuesday, August 10, 2010

How not to compare models to data part eleventy-nine...

Not to beat the old dark smear in the road where the horse used to be, but...

A commenter pointed me towards this which has apparently been accepted for publication in ASL. It's the same sorry old tale of someone comparing an ensemble of models to data, but doing so by checking whether the observations match the ensemble mean.

Well, duh. Of course the obs don't match the ensemble mean. Even the models don't match the ensemble mean - and this difference will frequently be statistically significant (depending on how much data you use). Is anyone seriously going to argue on the basis of this that the models don't predict their own behaviour? If not, why on Earth should it be considered a meaningful test of how well the models simulate reality?
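The "even the models don't match the ensemble mean" point is easy to demonstrate with a toy simulation. Everything below is made up for illustration (a synthetic 100-member ensemble with AR(1) noise standing in for internal variability, and a two-sigma threshold on the white-noise standard error); it is not the setup of the paper being criticised:

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_years = 100, 10
t = np.arange(n_years, dtype=float)

def ar1(n, phi=0.6, sigma=0.1):
    """AR(1) noise: a crude stand-in for autocorrelated internal variability."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

forced = 0.02 * t                       # every run has the same forced trend
runs = np.array([forced + ar1(n_years) for _ in range(n_runs)])

ens_trend = np.polyfit(t, runs.mean(axis=0), 1)[0]

# The flawed test: compare each run's OLS trend to the ensemble-mean trend
# using only that run's white-noise trend standard error.
rejections = 0
for y in runs:
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    se = resid.std(ddof=2) / np.sqrt(((t - t.mean()) ** 2).sum())
    if abs(b - ens_trend) > 2 * se:
        rejections += 1

print(f"{rejections}/{n_runs} perfect-model runs fail their own consistency test")
```

A substantial fraction of the runs get "rejected" against their own ensemble mean, despite every run coming from the same model: exactly the sense in which the test is meaningless as a check on reality.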

Of course the IPCC Experts did effectively endorse this type of analysis in their recent "expert guidance" note, where they remark (entirely uncritically) that statistical methods may assume that "each ensemble member is sampled from a distribution centered around the truth". But it's utterly bogus nevertheless, as there is no plausible situation in which that can occur, for any ensemble prediction system, ever.

Having said that, IMO a correct comparison of the models with these obs does show the consistency to be somewhat tenuous, as we demonstrated in that (in)famous Heartland presentation. It is quite possible that they will diverge more conclusively in the future. Or they may not. They haven't yet.

Monday, May 31, 2010

Assessing the consistency between short-term global temperature trends in observations and climate model projections

People seem to have got very excited over the presentation Chip Knappenberger gave at the Heartland conference, which I am a co-author on. So perhaps it is worth a post. Judith Curry described it as a "Good study with appropriate analysis methods as far as I can tell." But please don't let her endorsement put you off too much :-)

The work presented is a straightforward comparison of temperature trends, both observed and modelled. The goal is to check the consistency of the two - ie, asking the question "are the observations inconsistent with the models?"

This is approached through a standard null hypothesis significance test, which I've talked about at some length before. The null hypothesis is that the observations are drawn from the distribution defined by the model ensemble. We are considering whether or not this null hypothesis can be rejected (and at what confidence level). If so, this would tend to cast doubt on either or both of the forced response and the internal variability of the models.
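The core of such a test is very simple: locate the observed trend within the distribution of modelled trends. Here is a minimal sketch with entirely made-up numbers (the ensemble and the observed value are illustrative, not taken from the paper):

```python
import numpy as np

def one_sided_p(model_trends, obs_trend):
    """Cumulative probability of the observed trend within the ensemble:
    the fraction of modelled trends at or below the observation."""
    return float((np.asarray(model_trends) <= obs_trend).mean())

# Illustrative numbers only (K/decade), not taken from the paper:
rng = np.random.default_rng(1)
model_trends = rng.normal(0.2, 0.1, size=500)   # spread of modelled trends
obs_trend = 0.05

p = one_sided_p(model_trends, obs_trend)
# A two-sided test at the 5% level rejects only if p < 0.025 or p > 0.975.
```

An empirical percentile like this makes no Gaussian assumption; with a large enough ensemble sample it converges on whatever the true model distribution is.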

It may be worth emphasising right at the outset that our analysis is almost identical in principle to that presented by Gavin on RC some time ago. In that post, he formed the distribution of model results (over two different intervals) and used this to assess how likely a negative trend would be. Here is his main picture:


He argued (correctly) that if the models described the forced and natural behaviour adequately, a negative 8-year trend was not particularly unlikely, but over 20 years it would be very unlikely, though not impossible (1% according to his Gaussian fit).

We have extended that basic calculation in a few ways, firstly by considering a more complete range of intervals (to avoid accusations of cherry-picking on the start date). Also, rather than using an arbitrary threshold of zero trend, we have specifically looked at where the observed trends actually lie (well, we also show where zero lies in the distributions). I don't believe there is anything remotely sneaky or underhand in the basic premise or method.

One subtle difference, which I believe to be appropriate, is to use an equal weighting across models rather than across simulations (which is what I believe Gavin did). I don't think there is any reason to give one model more weight just because more simulations were performed with it. In practice this barely affects the results.

Another clever trick (not mine, so I can praise it without a hint of boastfulness) is to use not just the exactly matching time intervals from the models to compare to the data, but also to consider other intervals of equal length but different start months. It so happens that the mean trend of the models is very nearly constant up to 2020, and of course there were no exciting external events like volcanoes, so this gives a somewhat larger sample size with which to characterise the model ensemble. For longer trends, these intervals are largely overlapping, so it's not entirely clear how much better this approach is quantitatively, but it's still a nice idea.
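The two ensemble-handling choices mentioned above - sliding windows over all start months, and equal weight per model rather than per simulation - can be sketched as follows. This is my own illustrative reconstruction with synthetic monthly series, not the paper's actual code or data:

```python
import numpy as np

def window_trends(monthly, n_months):
    """OLS trend (per year) of every n_months-long window in a monthly
    series, not just the window matching the observed dates."""
    t = np.arange(n_months) / 12.0
    return np.array([np.polyfit(t, monthly[s:s + n_months], 1)[0]
                     for s in range(len(monthly) - n_months + 1)])

def pooled_trends(runs_by_model, n_months):
    """Pool windowed trends from every run of every model, weighted so
    that each *model* counts equally however many runs it contributed."""
    trends, weights = [], []
    for runs in runs_by_model:                  # one list of runs per model
        for run in runs:
            tr = window_trends(run, n_months)
            trends.append(tr)
            weights.append(np.full(tr.size, 1.0 / (len(runs) * tr.size)))
    return np.concatenate(trends), np.concatenate(weights)

# Synthetic example: model A contributes three runs, model B just one.
rng = np.random.default_rng(0)
make_run = lambda: 0.002 * np.arange(240) + rng.normal(0.0, 0.1, 240)
ensemble = [[make_run() for _ in range(3)], [make_run()]]

trends, w = pooled_trends(ensemble, 120)        # all 10-year trends
# Each model's weights sum to 1, so the weighted empirical distribution
# treats model B's single run on a par with model A's three.
```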

Anyway, without further ado, here are the results. First the surface observations, plotted as their trend overlaying the model distribution:



You should note that our results agree pretty well with Gavin's - over 8 years, the probability of a negative trend is around 15% on this graph, and we don't go to 20y but it's about 1% at 15y and changing very slowly. So I don't think there is any reason to doubt the analysis.

Then the satellite analyses (compared to the appropriate tropospheric temps, so the y axis is a little different):


And finally a summary of all obs plotted as the cumulative probability (ie one-sided p-level):

As you can see, the surface obs are mostly lowish (all in the lower half), and for several of the years the satellite analyses are really very near the edge indeed.

Note that the observational data points are certainly not independent realisations of the climate trend - they all use overlapping intervals which include the most recent 5 years. Really it's just a lot of different ways of looking at the same system. (If each trend length were independent, then the disagreement would be striking, as it's not plausible that all 11 different values would lie so close to the edge, even with the GISS analysis. But no-one is making that argument.)
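How strong that overlap effect is can be illustrated with a toy calculation: trends of different lengths ending at the same date, computed from the same series, are strongly correlated even for pure white noise. The numbers below (2000 synthetic "climates", 8- vs 12-year trends) are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)
n_real, n_years = 2000, 15

# Many independent synthetic "climates": white-noise annual anomalies.
data = rng.normal(0.0, 0.1, size=(n_real, n_years))

def end_trend(x, length):
    """OLS trend of the last `length` years of the series."""
    t = np.arange(length)
    return np.polyfit(t, x[-length:], 1)[0]

t8 = np.array([end_trend(x, 8) for x in data])
t12 = np.array([end_trend(x, 12) for x in data])

# The 8- and 12-year trends share their final 8 years of data, so they
# are far from independent even with no persistence in the noise at all.
r = np.corrcoef(t8, t12)[0, 1]
```

With real climate variability (which is autocorrelated) the dependence between the plotted trend lengths would be stronger still, which is why the 11 points can't be read as 11 independent pieces of evidence.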

It is also worth pointing out that this analysis method contradicts the confused and irrelevant calculations that some have previously presented elsewhere in the blogosphere. Contrary to the impression you might get from those links, the surface obs are certainly not outside the symmetric 95% interval (ie below the 2.5% threshold on the above plots), though you can get just past 5% for HadCRU for particular lengths of trend and a couple of the satellite data points do go below 2.5%, particularly those affected by the super-El-Nino of 1998.

As for the interpretation...well this is where it gets debatable, of course. People may not be entitled to their own facts, but they are entitled to reasonable interpretations of these facts. Clearly, over this time interval, the observed trends lie towards the lower end of the modelled range. No-one disputes that. But at no point do they go outside it, and the lowest value for any of the surface obs is only just outside the cumulative 5% level. (Note this would only correspond to a 10% level on a two-sided test). So it would be hard to argue directly for a rejection of the null hypothesis. On the other hand, it is probably not a good idea to be too blasé about it. If the models were wrong, this is exactly what we'd expect to see in the years before the evidence became indisputable. Another point to note is that the satellite data shows worse agreement with the models, right down to the 1% level at one point, and I find it hard to accept that this issue has really been fully reconciled.

A shopping list of possible reasons for the results include:
  • Natural variability - the obs aren't really that unlikely anyway, they are still within the model range
  • Incorrect forcing - eg some of the models don't include solar effects, but some of them do (according to Gavin on that post - I haven't actually looked this up). I don't think the other major forcings can be wrong enough to matter, though missing mechanisms such as stratospheric water vapour certainly could be a factor, let alone "unknown unknowns"
  • Models (collectively) over-estimating the forced response
  • Models (collectively) under-estimating the natural variability
  • Problems with the obs
I don't think the results are very conclusive regarding these reasons. I do think that the analysis is worth keeping an eye on. Anyone who thinks that even mainstream climate scientists are not wondering about the apparent/possible slowdown in the warming rate is kidding themselves. As I quoted recently:

However, the trend in global surface temperatures has been nearly flat since the late 1990s despite continuing increases in the forcing due to the sum of the well-mixed greenhouse gases (CO2, CH4, halocarbons, and N2O), raising questions regarding the understanding of forced climate change, its drivers, the parameters that define natural internal variability (2), and how fully these terms are represented in climate models.

That wasn't some sceptic diatribe, but rather Solomon et al, writing in Science (stratospheric water vapour paper). And there was also the Easterling and Wehner paper (which incidentally also uses a very similar underlying methodology for the model ensemble). Knight et al as well: "Observations indicate that global temperature rise has slowed in the last decade"

So all those who are hoping to burn me at the stake, please put away your matches.