This CPDN paper doesn't seem to have attracted much comment, perhaps because the results aren't actually very far off what the IPCC already said (just a touch higher). But Chris (and Carrick) commented on it
down here, so I think it is worth a post.
It's the results from a large ensemble of transient simulations of the mid-20th to mid-21st centuries, analysed to produce a "likely" range of warming by 2050.
Here is the main result:
(click for full size) where the vertical dotted lines demarcate their "likely" range, and the horizontal line is the threshold for goodness of fit (such that only the results below this line actually contribute to the final answer). The grey triangles represent models that are thrown out due to large radiative imbalance.
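Just to make the construction concrete, here is a little sketch of how I read the figure (my own reconstruction, not the authors' code, and with every number invented purely for illustration): throw out members with a large radiative imbalance, keep those whose goodness-of-fit statistic falls below the horizontal threshold, and take the extremes of the surviving warming values.

```python
import numpy as np

# Toy ensemble summary, one row per simulation. All numbers are invented,
# purely to illustrate how the range in the figure is read off.
rng = np.random.default_rng(1)
n = 5000
warming_2050 = rng.normal(1.9, 0.5, n)    # warming by 2050 relative to 1961-90
fit_stat = rng.chisquare(10, n)           # goodness-of-fit statistic (lower = better)
imbalance = rng.normal(0.0, 1.0, n)       # radiative imbalance

fit_threshold = 20.0       # the horizontal line in the figure (illustrative value)
imbalance_limit = 2.0      # grey-triangle rejection criterion (illustrative value)

accepted = (np.abs(imbalance) < imbalance_limit) & (fit_stat < fit_threshold)

# The "likely" range is then simply the extremes of the accepted members.
print(f"likely range: {warming_2050[accepted].min():.2f}"
      f" to {warming_2050[accepted].max():.2f} C")
```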
I am puzzled by a few aspects of this research. Firstly, on a somewhat philosophical point, I don't have much of a feel for what "likelihood profiling" is or how/why/if it works, and that's even after having obtained the book that they cite on the method. The authors are quite emphatic about not adopting a Bayesian interpretation of probability as a degree of belief, so the results are presented as a
confidence interval (remember, this is
not the same thing as a credible interval). Therefore, I don't really think the comparison with the IPCC "likely" range is meaningful, since the latter is surely intended as a Bayesian credible interval. Whatever this method does, it certainly does not generate an interval that anyone can credibly believe in!
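For anyone else trying to get a feel for it, here is a toy example of the textbook construction of a profile-likelihood confidence interval (entirely unrelated to the paper's actual statistical model): maximise the likelihood over the nuisance parameters at each value of the parameter of interest, then keep those values whose profiled log-likelihood lies within a chi-squared cutoff of the maximum.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
x = rng.normal(loc=0.7, scale=0.3, size=50)      # toy "observations"

def profile_loglik(mu, x):
    # Profile out the nuisance variance: its MLE at fixed mu is mean((x - mu)^2).
    s2 = np.mean((x - mu) ** 2)
    return -0.5 * len(x) * (np.log(2 * np.pi * s2) + 1)

mus = np.linspace(0.0, 1.5, 1501)
ll = np.array([profile_loglik(m, x) for m in mus])
cutoff = ll.max() - 0.5 * chi2.ppf(0.95, df=1)   # likelihood-ratio threshold
inside = mus[ll >= cutoff]
print(f"95% profile-likelihood interval for mu: "
      f"[{inside.min():.3f}, {inside.max():.3f}]")
```

Whether that sort of interval deserves the IPCC's "likely" label is, of course, exactly the point at issue.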
Secondly, on a more practical point, it seems a bit fishy to use the range of results achieved by 2050, relative to 1961-90, without accounting for the fact that almost all of their models have already over-estimated the warming by 2010, many by quite a large margin (albeit at an acceptable level according to their statistical test of model performance). The point is, given that we currently enjoy about 0.5C of warming relative to the baseline, reaching 3C by 2050 implies an additional warming of 2.5C over the next 40 years. However, as far as I can see, none of the models in their sample warms by this much. Certainly the two highest values in their sample - which are the only ones that lie outside the IPCC range, and which can be clearly identified in both panels of the figure above - were already far too warm by 2010, by about 0.3-0.4C. So although they present a warming of 3C by 2050 as the upper bound of their "likely" range, none of their models actually warms over those 40 years by as much as the real world would have to do to reach this level.
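To spell out the arithmetic with the round numbers quoted above (these are my approximate readings off the figure, not values taken from the paper):

```python
obs_2010 = 0.5       # observed warming by ~2010 relative to 1961-90 (rounded)
upper_2050 = 3.0     # upper end of their "likely" range for 2050

# What the real world would have to do from here to hit that upper bound:
print(f"real world: {upper_2050 - obs_2010:.1f} C of further warming over ~40 years")

# What the warmest ensemble members actually do, given that they were
# already roughly 0.3-0.4 C too warm by 2010 (my reading of the figure):
for model_2010 in (0.8, 0.9):
    print(f"model at {model_2010:.1f} C in 2010: only "
          f"{upper_2050 - model_2010:.1f} C of further warming to reach 3 C by 2050")
```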
Finally, on a fundamental point about the viability of the method, the authors clearly state (in the SI) that they "assume that our sample of ensemble members is sufficient to represent a continuum (i.e. infinite number)". They also use a Gaussian model of natural variability in their statistical method (which is entirely standard and non-controversial, I should point out - if anything, optimistic in its lack of long tails). Their "likely" range is defined as the extremal values from their ensemble of acceptable models. This seems to imply that as the ensemble size grows, the range will also grow
without limit. (Most people would of course use a quantile range, and not have this problem.) So I don't understand how this method can work
at all in this sort of application, where there is a formally unbounded (albeit probabilistically small) component of internal variability. In a mere 10^23 samples or so, their bounds would have been as wide as ±10 sigma of the natural variability alone - which, based on the left-hand panel of the figure, would have been rather wider than what they actually found.
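A quick back-of-envelope check on that number (my own rough calculation, nothing from the paper or its SI): the probability of a single Gaussian draw exceeding 10 sigma is about 8e-24, so a first such excursion is expected after roughly 10^23 samples, and more generally the typical extreme of n draws keeps growing like sqrt(2 ln n), i.e. without bound.

```python
import numpy as np
from scipy.stats import norm

# How many independent standard-normal draws before a 10-sigma excursion is expected?
p = norm.sf(10.0)                      # P(Z > 10), about 7.6e-24
print(f"samples needed for one expected 10-sigma exceedance: {1.0 / p:.1e}")

# And the typical extreme keeps growing (roughly sqrt(2 ln n)) as the ensemble grows:
for n in (1e2, 1e4, 1e6, 1e23):
    print(f"n = {n:.0e}: expected max around {np.sqrt(2.0 * np.log(n)):.1f} sigma")
```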