Thursday, March 02, 2006

Climate sensitivity is 3C

Plus or minus a little bit, of course. But not plus or minus as much as some people have been claiming in recent years :-)

So, our paper has now been accepted, and should be published in a week or two. We think it poses a strong challenge to the "consensus" that has emerged in recent years relating to observationally-based estimates of climate sensitivity, both in terms of the methods used, and the value itself. Remember that climate sensitivity is generally defined as the equilibrium globally-averaged surface temperature rise for a doubled concentration of atmospheric CO2 - so it's a simple benchmark to describe the sensitivity of the global climate to the sort of perturbation we are imposing. Here is what we did...

As you might have noticed, over recent years there have been a number of papers using observational data in an attempt to generate what is sometimes called an "objective" estimate of climate sensitivity. Of course, as you will hopefully realise having read my previous posts about Bayesian vs frequentist notions of probability, there isn't such a thing as a truly objective estimate, since in a situation of epistemic uncertainty, observations can only ever update a subjective prior, and never fully replace it. Moreover, subjectivity goes a lot deeper than merely choosing priors over some unknown parameters - in all scientific research, we always have to make all sorts of judgements about how to build models and analyse evidence. But still, we'd all like to have an estimate of climate sensitivity which can be traced more directly to the data, to replace the old IPCC/Charney report estimate of "likely to be between 1.5-4.5C".

A common approach is to use an ignorant prior (generally, although not always, uniform in climate sensitivity) and look at how observations of the recent (say 20th century) warming narrow the distribution. The unfortunate answer is that they don't actually narrow it much, mainly because we don't know the recent net forcing (sulphate aerosols have a highly uncertain but probably cooling effect which offsets the GHG forcing - if the net forcing is low, then sensitivity must be high to explain the observed warming). I've discussed that further here, and see also the RealClimate posts here and here. Our best estimates give a value of around 3C for climate sensitivity, but values in excess of 6C and perhaps even 10C cannot be ruled out. As a result of numerous studies of this nature, it has been frequently written that we cannot rule out a climate sensitivity of 6C or even substantially more, which is widely regarded as an essentially disastrous situation.
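
To illustrate the shape of the problem, here's a minimal Monte Carlo sketch (the numbers are round illustrative values I've picked for this post, not the inputs to any particular study): treat the observed warming, the net forcing and the ocean heat uptake as uncertain, and compute the implied sensitivity from a simple energy balance. Because the denominator can get close to zero, the resulting pdf has a long upper tail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

F2x = 3.7                      # forcing of doubled CO2, W/m2
dT = rng.normal(0.6, 0.1, n)   # observed 20th century warming, C
dQ = rng.normal(0.3, 0.1, n)   # ocean heat uptake, W/m2
dF = rng.normal(1.0, 0.5, n)   # net forcing: GHG minus aerosols, W/m2

# Simple energy balance: S = F2x * dT / (dF - dQ).
# Keep only cases where forcing minus uptake is positive.
ok = (dF - dQ) > 0
S = F2x * dT[ok] / (dF - dQ)[ok]

for p in (50, 95):
    print(f"{p}th percentile: {np.percentile(S, p):.1f} C")
# The median lands near 3C, but the 95th percentile is far higher:
# the uncertain aerosol forcing is what creates the long tail.
```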

There are some other approaches that can be tried. The cooling effect of a volcanic eruption such as Mt Pinatubo in 1991 also provides some evidence about climate sensitivity. If climate sensitivity is very low, we would expect a modest short-term cooling, but if sensitivity is high, a greater cooling is expected, and it should take longer to recover. We can't get a precise value from this method, since this forced cooling isn't much greater than interannual variability in surface temperature. Wigley et al analysed several recent volcanic eruptions with a simple energy-balance model and found that a value of about 3C looked pretty good in each case, but values as high as about 6C (and as low as 1.5C) could not be completely ruled out. Yokohata et al got broadly consistent results with two versions of a full AOGCM.
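
For a feel of the mechanism, here's a minimal one-box energy-balance sketch (the pulse shape, mixed-layer depth and forcing magnitude are illustrative assumptions of mine, not the Wigley et al or Yokohata et al setups):

```python
import numpy as np

F2x = 3.7              # forcing of doubled CO2, W/m2
C = 2.0e8 / 3.15e7     # ~50m mixed layer heat capacity, in W yr m-2 K-1

def volcanic_response(S, years=10.0, dt=0.01):
    """One-box model C dT/dt = F(t) - (F2x/S) T, with a toy Pinatubo-like pulse."""
    lam = F2x / S                  # feedback parameter, W/m2/K
    t = np.arange(0.0, years, dt)
    F = -3.0 * np.exp(-t)          # forcing pulse decaying over ~1 year
    T = np.zeros_like(t)
    for i in range(1, len(t)):
        T[i] = T[i-1] + dt * (F[i-1] - lam * T[i-1]) / C
    return t, T

for S in (1.5, 3.0, 6.0):
    t, T = volcanic_response(S)
    print(f"S={S}C: peak cooling {T.min():+.2f}C, after 5yr {T[500]:+.2f}C")
# Higher sensitivity gives a deeper cooling and a slower recovery - but
# the differences are modest compared to interannual variability.
```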

We can also look to the paleoclimate record for evidence from our planet's past climate. During the last ice age, the total radiative forcing was roughly 8Wm-2 lower than today (mostly due to lower CO2 and large ice sheets, with dust and vegetation changes also contributing). 8Wm-2 is roughly twice the forcing of doubled CO2 (although in the opposite direction), so with the global temperature at that time being about 6C cooler than at present, a climate sensitivity of about 3C looks pretty good again. However, again there are significant uncertainties in all of these values I've quoted, and it's also not clear that one value of climate sensitivity will necessarily apply both to doubled CO2 and to this rather different forcing. In fact model results (such as our own) show a fair amount of uncertainty in the response to these different scenarios.
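
The back-of-envelope version of that calculation, using the round numbers just quoted (both of which are uncertain, as noted):

```python
# LGM back-of-envelope estimate (round numbers from the text):
dF_lgm = -8.0   # LGM forcing relative to present, W/m2
dT_lgm = -6.0   # LGM cooling relative to present, C
F2x = 3.7       # forcing of doubled CO2, W/m2

S = F2x * dT_lgm / dF_lgm
print(f"Implied sensitivity: {S:.1f}C per doubling")   # about 2.8C
```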

There have been some other ideas, based on how well a model reproduces our current climate (say the magnitude of the seasonal cycle) or other quasi-steady climate states with significantly different forcing, such as the Maunder Minimum. Again, these analyses point towards ~3C as being the best answer, but the uncertainties involved mean that none of them can rule out 6C or thereabouts as an upper limit.

So all these diverse methods generate pdfs for climate sensitivity that peak at about 3C, but which have a long tail reaching to values as high as 6C or beyond at the 95% confidence level (and some are even worse). As a result, it's been widely asserted that we cannot reasonably rule out such a high value.

So, what did we do that was new? People who have read this post will already have worked out the answer. We made the rather elementary observation that the estimates above are based on essentially independent observational evidence, and therefore can (indeed must) be combined by Bayes' Theorem to generate an overall estimate of climate sensitivity. Just like the engineer and physicist in my little story, an analysis based on a subset of the available data does not actually provide a valid estimate of climate sensitivity. The question that these previous studies address is not
"What do we estimate climate sensitivity to be"
but is instead
"What would we estimate climate sensitivity to be, if we had no information other than that considered by this study."
The answers to these two questions are simply not equivalent at all. In their defence - and I don't want people to think I'm slamming the important early work in this area - at the time of the first estimates, the various distinct strands of evidence had not been examined in anything like so much detail, so arguably the first few results could be considered valid at the time they were generated. However, with more evidence accumulating, this is clearly no longer the case.
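
Here is a toy numerical version of the argument (the three likelihoods are made-up shapes that peak near 3C with a fat upper tail, standing in for the real constraints - they are not the ones used in the paper):

```python
import numpy as np
from scipy import stats

S = np.linspace(0.1, 10.0, 2000)   # climate sensitivity axis, C
dS = S[1] - S[0]

def fat_tailed(peak, spread):
    """Stand-in likelihood: lognormal, peaking near `peak`, long upper tail."""
    return stats.lognorm.pdf(S, s=spread, scale=peak)

constraints = {
    "20th century": fat_tailed(3.0, 0.45),
    "volcanic":     fat_tailed(3.0, 0.35),
    "LGM":          fat_tailed(2.8, 0.40),
}

def upper95(pdf):
    cdf = np.cumsum(pdf) * dS
    return S[np.searchsorted(cdf, 0.95 * cdf[-1])]

for name, L in constraints.items():
    print(f"{name:13s} alone: 95% limit ~{upper95(L):.1f}C")

# Bayes: uniform prior times the product of the independent likelihoods.
posterior = np.ones_like(S)
for L in constraints.values():
    posterior *= L
posterior /= posterior.sum() * dS

print(f"combined          : 95% limit ~{upper95(posterior):.1f}C")
# The combined posterior is much narrower than any individual constraint.
```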

When we combined some of the most credible and solidly-grounded (in our opinion) estimates arising from different observational evidence, we found that the resulting posterior pdf was substantially narrower than any of the observationally-based estimates previously presented. Some narrowing was of course inevitable, but the size of the effect, and its robustness to uncertainties in the individual constraints, did rather take us by surprise - with hindsight, perhaps it shouldn't have. As recently as last summer, I was happily talking about values in the 5-6C region as being plausible, even if the 10C values always seemed pretty silly.

The paper didn't exactly sail through the refereeing process, but has now been seen by a lot of researchers working in this area. Although many of our underlying assumptions are somewhat subjective, our result appears very robust with respect to plausible alternatives (this was rather a surprise to us). No-one has actually suggested that we have made any gross error (well, some people are rather taken aback at a first glance, but they have all come round quickly so far). It's important to realise that we have not just presented another estimate of climate sensitivity, to be considered as an alternative to all the existing ones. We have explained in very simple terms why the more alarming estimates are not valid, and anyone who wants to hang on to those high values is going to have to come up with some very good reasons as to why our argument is invalid, coupled with solid arguments for their alternative view. A few nit-picks over the specific details of our assumptions certainly won't cut it.

As for the upper limit of 4.5C - as should be clear from the paper, I'm not really happy assigning as high a value as 5% to the probability of exceeding that value. But there's a limit to what we could realistically expect to get past the referees, at least without a lengthy battle. It is, as they say, good enough for Government work :-)

40 comments:

Anonymous said...

Here's the key question policy-wise: Can we start to ignore the consequences of exceeding 4.5C for a doubling? What percentage should we be looking for to make such a decision? And all of this begs the question of exactly what negative effects we might get at 3C or even lower. My impression is that the science in all sorts of areas seems to be tending toward more harm with smaller temp increases. Then there's the other complicating question of how likely it is we will reach doubling and if so when.

William M. Connolley said...

At last the post we've been waiting for... :-) But where is the report on the press conf?

BTW (to save me reading the paper again) suppose you are fairly strict instead of generous, what is your upper 5% conf point?

James Annan said...

Belette,

We said that a more optimistic (but not completely unrealistic) view might give an upper limit of about 4C at 95%. If I was giving round numbers, I would simply say 3+-0.5 (at 1sd, Gaussian) is a pretty good estimate - that makes:
2.5-3.5C likely (68%)
2-4C very likely (95%)
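
(For anyone who wants to check those round numbers against the stated Gaussian assumption:)

```python
from scipy import stats

# 3 +/- 0.5 at 1sd, Gaussian:
S = stats.norm(loc=3.0, scale=0.5)
print(S.cdf(3.5) - S.cdf(2.5))   # ~0.68: 2.5-3.5C "likely"
print(S.cdf(4.0) - S.cdf(2.0))   # ~0.95: 2-4C "very likely"
```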

but I suspect the world isn't quite ready for that yet :-)

(Belette sounds like some sort of feminine French name - can't you become a rat-fancier instead?)

James Annan said...

Steve,

I agree that there are still plenty of questions left about what is really going to happen :-)

IMO, there are plenty of ethical, economic, political and environmental reasons for trying to minimise our overall environmental footprint, including fossil fuel consumption as a significant component of that. Almost all of these reasons apply even if climate sensitivity is 0C, let alone a realistic estimate of 3C. IMO it's not particularly sensible to frame the entire issue in terms of the small chance of "climate catastrophe" because then we risk having the rug abruptly pulled out from under our policies when someone proves that the catastrophe is less likely than was previously thought :-) Also, arguing over the precise threshold probability for particular outcomes risks turning into angels-on-pins stuff.

Anonymous said...

james, which journal? and come on, M$ Word isn't that bad and LaTeX is sooooo 2005!

James Annan said...

It will be in GRL - for some reason, the pdf cuts off the top line (only a header). And I'm certainly not sullying my Mac with M$ products, especially as the AGU even kindly provide a LaTeX template :-)

Brian said...

I should already know the answer to this, but does "doubling" generally refer to doubling of natural CO2 levels or doubling from current levels?

If natural is the baseline, it seems quite possible that we will more than double CO2 by 2100.

BTW, use caution when googling about this - I typed "co2 projection 2100" into Google and got an alarmist site at the top of the list:

http://www.google.com/search?q=co2+projection+2100&sourceid=mozilla-search&start=0&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official

It was jamestec something or other...

James Annan said...

Brian,

Doubling is doubling, wherever you start from (roughly speaking). So 550ppm will be about 3C warmer than 275ppm, and unless we work at reducing emissions, I wouldn't be very surprised to see that level of CO2 at or before 2100 (not that I'll be around at that time...). But note that this temperature rise is for the equilibrium state, so we wouldn't expect to feel the full effect until a few decades after hitting the magic number (thermal inertia).

Also, this is all assuming other forcings are unchanged - in practice, sulphate emissions may offset some of the warming, methane may increase it (etc).
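
To see why doubling is doubling wherever you start, here's a two-line check using the standard simplified expression for CO2 forcing, F = 5.35 ln(C/C0) (the coefficient is the usual fitted approximation; only a sketch, of course):

```python
import math

def co2_forcing(C, C0=275.0):
    """Simplified CO2 radiative forcing in W/m2: F = 5.35 ln(C/C0)."""
    return 5.35 * math.log(C / C0)

# Each doubling adds the same ~3.7 W/m2, wherever you start:
print(co2_forcing(550) - co2_forcing(275))    # 275 -> 550 ppm
print(co2_forcing(1100) - co2_forcing(550))   # 550 -> 1100 ppm: identical
```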

Anonymous said...

Since the effect of CO2, indeed of any atmospheric greenhouse gas, is logarithmic - that is, each subsequent doubling of CO2 has half the effect - how can you say that any doubling will have a 3C effect?

To further make my case:
CO2 warms the earth by preventing energy in the form of photons of certain frequencies from escaping into space.
Since the supply of these photons is by nature limited, each increase in CO2 acts only on those photons which were not blocked by the previous increases. I.e., each increase works on a steadily shrinking supply of photons.

Anonymous said...

Anonymous,

I don't agree with your interpretation of "logarithmic". According to Section 1.3.1 of the IPCC's Third Assessment Report, each doubling of CO2 concentration adds an additional 4 W/m2 to the radiative forcing. So each additional doubling of CO2 adds the same (not half) effect, in terms of radiative forcing.

Going further with the point made in that section: what is happening is that the sides of the absorption band (the "wings"), which were hardly affected before, are getting blocked. So although it's true that the number of photons in the IR population is decreasing, in the first several rounds of doubling you are not close to running out of them, so the "diminishing returns" concept does not apply.

Another point is that the amount of temperature increase is not linear in the amount of forcing. As I understand it, the transport of IR out of the earth would be equal to the "original" amount (a reference value), minus the radiative forcing. As the transport of radiation outward becomes less efficient, the temperature of the earth's surface must increase to reach a power balance with the absorbed light from the sun. I don't think this is linear!

Finally, even if the increase is "only" 3 degrees Celsius, remember that this is an increase in the average global temperature. The difference between our current average global temperature and the most recent Ice Age is only 5 degrees Celsius. What a difference that made!

Anonymous said...

I would like to focus on what James said about sulphate aerosols acting as a cooling agent. I recall that about three years ago the IPCC said that these aerosols were counteracting global warming, but that as regulatory programs to counteract acid rain were fully implemented, the sulphur compounds would decrease, and the warming from the CO2 emissions would be unconstrained.

I recall that in 1990, just before the US Congress passed the 1990 Clean Air Act Amendments, which established the first Acid Rain regulations, the report of the National Acid Precipitation Assessment Program (NAPAP) was published. It represented ten years of study by hundreds of scientists at a cost of $500 million (US), and the conclusion was that acid rain was only a problem for a few lakes in the Adirondack mountains of northern New York State. Despite that finding, Congress enacted the program, and set the US on the course of cutting down on sulphur emissions from coal fired power plants.

The result is that we have a program that is reducing sulphur emissions from power plants which do NOT cause a significant problem with acid rain, and by doing so, it is EXACERBATING what may be a REAL problem with global warming. At the very least, the science indicates that we ought to consider cancelling the "acid rain" controls and take advantage of the cooling effect of the aerosols to buy us some time against greenhouse warming. Perhaps one of the factors in the cooling that occurred from the 1930s to the 1970s was this aerosol pollution, counteracting the expected warming from CO2 emissions, and the warming since the 1970s is a direct result of air pollution regulations that have decreased emissions of all kinds of particulates.

In effect, the major direct cause of global warming in the US may be the Environmental Protection Agency!

Anonymous said...

neilking,

If you would think about it for a minute, you would realize that the linear model you are proposing would require that the atmosphere generate energy.

The forcing is energy from ground emissions that are being reflected back to the ground, rather than escaping into space. Once all of this radiation is reflected, there can be no further increases in the forcing.

As the amount of CO2 increases, there is less energy escaping. So each increase in CO2 acts on a smaller and smaller amount of energy.

The linear model is good for small changes in CO2. It fails utterly when you are attempting to quantify large changes.

Anonymous said...

If these are correct:

http://en.wikipedia.org/wiki/Image:Five_Myr_Climate_Change.png

http://en.wikipedia.org/wiki/Image:Holocene_Temperature_Variations.png

I really don't think any of this matters.

James Annan said...

Robert,

I could just as reasonably say that if my work is correct, those (your?) pictures don't matter :-)

It's largely a matter of perspective...but I'm personally more interested in forecasting what will (or may) happen, than in describing what has happened.

Anonymous said...

Surely we can't simply look at the CO2 and think that's all there is that is having an effect? From what I can tell, the science of ecology is even MORE complicated than quantum physics!
If a 3 degrees C rise in temperatures is predicted (as a minimum), then what will that trigger? I have heard that this will trigger the release of millions of tonnes of methane from the sea bed and the Russian tundra (as it thaws), which in turn will speed up the warming.
Surely the imperative is to do something about our emissions of CO2 now, so that we don't see what happens? It's all very interesting as a scientific exercise (or as a model in a super-computer), but this shouldn't be carried over into the real world to see what happens and test these theories?!
3 degrees would be catastrophic for the majority of people on this planet and would lead to millions of deaths. 10 degrees would be a planet killer (because of all the other things that would then be triggered at that temperature).
It's a bit like asking what would be the difference between putting 7 bullets into someone's head compared to 4! The result will be a dead person either way!

James Annan said...

Earl,

I certainly don't believe that 3C of warming will "lead to millions of deaths", and I don't think there is any scientific support for such a position. It will cause changes, for sure, and if we could stop all anthropogenic carbon emissions for free then I'd be all for it, but as things stand we have to consider the trade-offs between current economic growth and future climate changes, both of which contain uncertainties.

Hank Roberts said...

http://www.agu.org/pubs/crossref/2008/2007GL032759.shtml

Chylek

James Annan said...

Thanks Hank,

I'd seen a glimpse of that at the AGU and wondered where the full paper was. Will probably blog it once I've had time to read it.

Hank Roberts said...

Any new citation info? it should go here: http://home.badc.rl.ac.uk/lawrence/blog/2006/03/07/climate_sensitivity_and_politics

mugwump said...

I am curious why you did not reference Douglass and Knox, which shows that the climate sensitivity estimate from Pinatubo is much smaller, and probably cannot be used to derive a 2XCO2 estimate because of the different processes involved in a volcanic eruption (see also Robock, Wigley et al., and the Douglass and Knox responses to Robock and to Wigley et al.).

James Annan said...

I think Douglass' stuff is rubbish (and no reviewer suggested it rated a mention).

In more detail, the natural variability happened to oppose the Pinatubo cooling, so the forced response was actually greater than the observed change. And GCMs with much larger sensitivity than Douglass would allow still simulate the cooling rather well. So although I would agree that this and other eruptions point towards moderate sensitivity, they by no means prove it is negligible.

mugwump said...

Can you explain why you think Douglass is rubbish? From my reading his is a simple, physical energy balance model that fits the data very well.

Also, Douglass did not claim that his sensitivity results from Pinatubo necessarily established anything about 2XCO2 sensitivity, given that the feedbacks in each case are probably very different (unlike 2XCO2, Pinatubo belched enormous quantities of crap into the atmosphere)

James Annan said...

Douglass completely ignored the ocean heat uptake, which is a huge red flag. Everyone knows that this is a major uncertainty, and every plausible climate model suggests it is a substantial effect. It is not credible that the authors and reviewers were not aware of this, assuming they have a rudimentary awareness of the field. Yet Douglass and Knox don't even mention it!

You may not realise that GRL basically allows authors to pick their own reviewers, which means that complete nonsense occasionally gets through.

mugwump said...

"Douglass completely ignored the ocean heat uptake"

They addressed that in their followup responses I linked to above. In fact the ocean heat uptake fit very neatly into their model. Once included, it made little difference to their estimates.

mugwump said...

No response? Do you agree then that Douglass is not "rubbish" as you put it? If so, how do you think inclusion of his results will modify your sensitivity estimate?

James Annan said...

Well I'm puzzled, because I definitely did reply some time ago, but it doesn't seem to have stuck...

Douglass do try to salvage their original work by claiming that their original error has negligible effect, but their apparent confusion between the ocean interior diffusion and the effective basin-wide diffusion (substantially larger, due to topographic effects over ridges and near edges, and convection) makes it pretty dubious. And using a quote from the SAR for something written in 2005 is decidedly odd. Modern estimates of the effective diffusion seem to agree with Wigley et al and disagree with Douglass.

mugwump said...

"Modern estimates of the effective diffusion seem to agree with Wigley et al and disagree with Douglass."

I am curious to which modern estimates you refer. Ledwell's 1998 SF6 tracer experiments established an eddy diffusion coefficient of around 10^-5 m^2/s, which is what Douglass uses. Wigley uses values between 10 and 40 times that.

The book "A Turbulent Ocean" by S A Thorpe has an interesting discussion on pages 39-40, which describes the smaller tracer-derived numbers as the correct "modern" value.

Regardless, describing Douglass as "rubbish" because he used recent, published values for the eddy diffusion coefficient seems rather unfair. The model and analysis in Douglass is, in my opinion, an object lesson in parsimony.

James Annan said...

You also seem to be confusing the ocean interior diffusion with the basin average. They are not the same! Vertical mixing around the boundaries and over steep topography is far greater - Douglass's value refers to the extremal low value in the deep interior, not a representative average.

Note also that he has already effectively claimed in the first paper that the ocean mixed layer is very shallow - so for the effective diffusion out of the bottom of that layer, he should use a diffusion coefficient appropriate for 50m depth or thereabouts, not 500m.

No-one credible considers his analysis reasonable, and I guess that the reason he does not explicitly discuss the implied depths in his paper is that the inconsistency would be too stark.

It is not "parsimonious" to ignore factors that are known to be important, and then produce a spurious and misleading argument to attempt to defend the initial error.

mugwump said...

"Vertical mixing around the boundaries and over steep topography is far greater - Douglass's value refers to the extremal low value in the deep interior, not a representative average."

The average depth of the ocean is 3,790m. Two thirds of the earth's surface is covered by ocean greater than 200m deep. Therefore, it is much more likely that the average diapycnal heat transfer of the ocean is determined by the interior than it is by the boundaries or regions of steep topography. Do you have references supporting your assertion to the contrary?

"I guess that the reason he does not explicitly discuss the implied depths in his paper is that the inconsistency would be too stark."

I doubt that is the reason. The original paper included no ocean heat-flux, and fit the data very well. You don't even need the model to see that a huge ocean-induced lag is unnecessary to explain the data. Just eyeballing the data, you can see the eruption, the response, and then the relaxation back to equilibrium with a lag of about 6 months. If there is a big ocean lag in there it is not significantly impacting the temperature response.

James Annan said...

Mugwump, you talk about the "average diapycnal heat transfer of the ocean", but what Douglass' revised model actually requires is the heat transfer out of the upper mixed layer, which must (according to his ~5 month time scale) be very shallow. However, in his revision, he did not use such a (shallow) heat transfer coefficient, but explicitly used a value appropriate to the ocean interior.

Your comments about "huge ocean-induced lag" are just nonsense bluster. The rapid decay of the cold anomaly is (partly) because it is diffused downwards. It is perhaps excusable that you can't get your head around the physical behaviour of the climate system, but that doesn't mean that everyone else who has studied it is a moron who is missing your supposed insight.

mugwump said...

"It is perhaps excusable that you can't get your head around the physical behaviour of the climate system, but that doesn't mean that everyone else who has studied it is a moron who is missing your supposed insight."

What a surprisingly aggressive response to my question, which was simply: "Do you have references supporting your assertion that the average ocean heat flux is determined by 'Vertical mixing around the boundaries and over steep topography'?"
It wasn't a rhetorical question. I am genuinely curious. And it goes directly to the heart of whether Douglass and Knox's (DK's) model makes sense.

DK countered Wigley et al's objections here

They came up with a neat modification (eq (3) and (4)) to their original model that allowed them to incorporate ocean heat uptake without changing the functional form of the model, hence all their original fits were still valid.

They then estimated the heat flux into the thermocline using a standard (accepted) model, with a thermocline eddy diffusion coefficient of 1.2E-5 m^2/s from Ledwell:

"We estimate s by using this slope along with k = 1.2x10^-5 m^2/s (the eddy diffusion coefficient in the thermocline [Ledwell et al., 1998])"

So if they are wrong, either their basic model is wrong (which seems unlikely - it is just a simple energy balance model after all), or their choice of eddy diffusion coefficient is wrong.

James Annan said...

DK's analysis fails on its own premises. End of story. The diffusion out of their hypothesised thin mixed layer cannot be controlled by the diffusion coefficient in the thermocline because their mixed layer is far too thin to reach this thermocline, based on their own figures.

I can't be bothered generating a reading list for you to learn about the difference between effective diffusion and the abyssal interior value, because it is actually not relevant to this simpler point above which directly refutes their analysis. However, a google search found a bunch of it very easily, which suggests you are more interested in wasting my time than actually learning.

mugwump said...

"The diffusion out of their hypothesised thin mixed layer cannot be controlled by the diffusion coefficient in the thermocline because their mixed layer is far too thin to reach this thermocline, based on their own figures."

DK did not hypothesize a "thin mixed layer". They simply added a term representing heat flux into the ocean to their model and estimated how great that flux should be based on first principles. They freely admit that theirs is an approximate treatment - the so-called "separability hypothesis", ΔQ = sΔT - but that seems like a reasonable first-order model.

So the argument just comes down to how great an additional heat flux out of the ocean you would expect from a peak temperature excursion of approximately -0.5C due to Pinatubo. Wigley et al claim it is of the order of 2W/m2, DK claim more like 0.25W/m2. DK's estimate has the advantage that it does not rely on further modeling.

My educated physicist's gut (layman as far as climate science goes, not physics) tells me 2W/m2 out of the ocean seems pretty high given that the temperature difference is generated by a peak forcing of only -3.4W/m2 - it implies that the ocean response is of the same order as the atmospheric response, which seems unlikely given the "impedance mismatch" between the ocean and atmosphere.

Of course DK could be wrong, but it just comes down to basic physics: what governs the ocean heat flux change. They argue the ocean heat flux change is proportional to the atmospheric temperature change, with a small constant of proportionality.

"I can't be bothered generating a reading list for you to learn about the difference between effective diffusion and the abyssal interior value, because it is actually not relevant to this simpler point above which directly refutes their analysis."

As above, I don't see how you have refuted them.

"However, a google search found a bunch of it very easily, which suggests you are more interested in wasting my time than actually learning."

I can assure you I am not trying to waste your time. I would have thought given the references I have already cited that it was obvious that I have indeed been doing my own research (eg - Thorpe above) to try to get to the bottom (no pun intended) of this. My main interest is in understanding what, if anything, we can say about climate sensitivity that does not rely on General Circulation Models. Hence, I would be most grateful if you could post the references you found.

James Annan said...

DK did not hypothesize a "thin mixed layer".

As I said before, they did not explicitly provide a number for the depth of the surface layer in their analysis, most probably because it would destroy the credibility of their results. But a relaxation time scale of a few months requires a thin mixed layer because a thick one takes longer to warm up or cool down.
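
To spell out the arithmetic (a one-box sketch with illustrative numbers of my own, not DK's published fit): the relaxation time is tau = rho * c_p * h / lambda, so a ~5 month tau pins down the mixed layer depth h once the feedback parameter lambda is chosen.

```python
# One-box relaxation: tau = rho * c_p * h / lam  =>  h = tau * lam / (rho * c_p)
rho, c_p = 1025.0, 3990.0    # seawater density (kg/m3) and specific heat (J/kg/K)
MONTH = 2.6e6                # seconds in a month, roughly

# What mixed-layer depth does a ~5 month relaxation time imply?
tau = 5 * MONTH
for lam in (1.2, 2.5):       # feedback parameter, W/m2/K (S near 3C vs near 1.5C)
    print(f"lam={lam}: implied depth ~{tau * lam / (rho * c_p):.0f} m")

# Conversely, a realistic ~50m mixed layer relaxes over years, not months:
for lam in (1.2, 2.5):
    tau_months = rho * c_p * 50.0 / lam / MONTH
    print(f"h=50m, lam={lam}: tau ~{tau_months:.0f} months")
```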

mugwump said...

This comment has been removed by a blog administrator.

Hank Roberts said...

Cox and Jones?
http://secamlocal.ex.ac.uk/people/staff/pmc205/papers/2008/

C W Magee said...

Apologies if this is a dead thread, but does the PETM offer any constraints on climate sensitivity? Or is the sensitivity of the Eocene planet irrelevant to that of the modern one?

James Annan said...

I think there are enough uncertainties, including the magnitude of the forcing, response, and other boundary conditions, on top of the dubious relevance to the modern system, that it is hard to use directly. But there are certainly people working on all these things.

Hank Roberts said...

Clim. Past Discuss., 7, C26–C31, 2011
www.clim-past-discuss.net/7/C26/2011/
Climate of the Past Discussions

There's currently only one interactive comment, by D. Royer (Referee), droyer@wesleyan.edu:

"... a novel way to calculate temperature and CO2 for the last 20 Myrs and explore the implications of this association, especially with regards to climate sensitivity.... Overall, the manuscript is in pretty good shape except for the discussion of climate sensitivity...."

James Annan said...

Thanks, I saw the manuscript but didn't check through the sensitivity calc. I agree with the reviewer that it's a bit of a mess but expect it will get sorted out in the review process. I hope that CP will add "comment feeds" of some type to make it easier to keep up with the discussions; they are generally very sparse at the moment and there's no way of keeping informed (other than submitting a comment, after which you get emailed with each update).