Three papers have just appeared in WIREs Climate Change (here, here and here) discussing the role of the null hypothesis in climate science, especially detection and attribution.
Trenberth argues that, since the null (that we have not changed the climate) is not true, we should instead test some other null hypothesis. He sounds like someone who has just discovered that the frequentist approach is actually pretty useless in principle (as I've said many times before, it is fundamentally incapable of even addressing the questions that people want answers to), but although he seems to be grasping towards a Bayesian approach, he hasn't really got there, at least not in a coherent and clear manner. Curry's piece is just nonsense as usual, and besides noting that she has (1) grossly misrepresented the IAC report and (2) abjectly failed to back up the claims that Curry and Webster made in a previous paper, there isn't really anything meaningful to discuss in what she said.
Myles Allen's commentary is by some distance the best of the bunch; in fact I broadly agree (shock horror) with what he has said. If one is going to take a frequentist approach, the null hypothesis of no effect is often an entirely reasonable starting point. It is important to understand that rejecting the null does not simply mean learning that there has been some effect, but it also indicates that we know (at least at some level of confidence) the direction of the effect! That is, it is not only an effect of zero which is rejected, but all possible negative (say) effects of any magnitude too - this generalisation may not be strictly correct in all possible applications of this sort of methodology, but I'm pretty sure it is true in practice for the D&A field. Especially when we are talking about the local incidence of extreme weather, there really are many cases where we have little reason for a prior belief in an anthropogenically-forced increase versus a decrease in these events, so a reasonable Bayesian approach would also start from a prior which was basically symmetric around zero. The correct interpretation of a non-rejection of the null here is not "there has been no effect" but rather "we don't know if AGW is making these events more or less likely/large". Much of Trenberth's complaint could be more productively aimed at the routine misinterpretation of D&A results, rather than the method of their generation. Trenberth also sometimes sounds like he is arguing that we should always assume that every bad thing was caused by (or at least exacerbated by) AGW, but this simply isn't tenable. Even if storminess increases in general, changes in storm tracks might lead to a reduction in events in some areas; Zahn and von Storch's work on polar lows is an obvious example of this. On the other hand, there are also some types of event where we may have a decent prior belief about the nature of the anthropogenically-forced change (such as temperature extremes), and in these cases it would be reasonable for a Bayesian to use a prior that reflects this belief.
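To make that distinction concrete, here is a minimal numerical sketch (the trend estimate, standard error and prior width are all invented for illustration, not taken from any actual D&A study) contrasting the one-sided frequentist reading of such a test with a Bayesian update from a prior that is symmetric around zero:

```python
# Illustrative sketch only: made-up numbers, Gaussian assumptions throughout.
import numpy as np
from scipy import stats

# Hypothetical estimate of an anthropogenic effect on some extreme-event index.
trend_hat = 0.8   # estimated effect (arbitrary units)
se = 0.5          # standard error of that estimate

# Frequentist reading: test H0 "effect <= 0". Rejecting it rules out zero
# *and* all negative effects at the chosen confidence level, i.e. it
# establishes the sign of the effect.
z = trend_hat / se
p_one_sided = 1 - stats.norm.cdf(z)
print(f"one-sided p-value = {p_one_sided:.3f}")
# If p > 0.05 the correct reading is "sign not established",
# not "there has been no effect".

# Bayesian reading: prior symmetric around zero (no prior reason to expect
# an increase rather than a decrease), Gaussian likelihood, conjugate update.
prior_mean, prior_sd = 0.0, 2.0
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + trend_hat / se**2)
p_positive = 1 - stats.norm.cdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
print(f"posterior P(effect > 0) = {p_positive:.2f}")
```

With these made-up numbers the one-sided test just fails to reject at the 5% level, while the Bayesian calculation still assigns a substantial posterior probability to a positive effect; neither result licenses the statement "there has been no effect".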
I can find one thing to object to in Myles' commentary, though, and that's the manner in which he tries to pre-judge the "consensus" response to Trenberth's argument. Given that he (Allen) is in fact a major figure in forming that "consensus" in the private meetings where the handful of IPCC authors decide what to say, it sounds to me rather like a pre-emptive strike against anyone who might be tempted to take the opposing view. I would prefer it if he restricted himself to arguing on the basis of the issues, rather than on the fact that he holds/forms the majority view. His behaviour here is reminiscent of the way he (and others) tried to reject our arguments about uniform priors on the basis that everyone had already agreed that his approach was the correct solution. All that achieved was to slow the progress of knowledge by a few years.