Via email, I hear that this paper from Stephen Schwartz is making a bit of a splash in the delusionosphere. In it, he purports to show that climate sensitivity is only about 1.1C, with rather small uncertainty bounds of +-0.5C.

Usually, I am happy to let RealClimate debunk the septic dross that still infects the media. In fact, since I have teased them about their zeal in the past, it may seem slightly hypocritical of me to bother with this. However, this specific paper is particularly close to my own field of research, and the author is also rather unusual in that he seems to be a respected atmospheric scientist with generally rather mainstream views on climate science (although perhaps a bit critical of the IPCC here). However, his background is in aerosols, which suggests that he may have stumbled out of his field without quite realising what he is getting himself into.

Anyway, without further ado, on to the mistakes:

Mistake number 1 is a rather trivial mathematical error. He estimates sensitivity (K per W/m^2) via the equation

S=t/C

where C is the effective heat capacity (mostly ocean) and t is the time constant of the system (more on this later).

His numerical values for t and C are 5+-1 and 16.7+-7 respectively (with the uncertainties at one standard deviation; the units are years and W yr m^-2 K^-1, so that t/C comes out in K per W/m^2). It is not entirely clear what he really intends these distributions to mean (itself a sign that he is a little out of his depth perhaps), but I'll interpret them in the only way I think reasonable in the context, as gaussian distributions for the parameters in question. He claims these values give S equal to 0.3+-0.09, although he also writes 0.3+-0.14 elsewhere. This latter value works out at 1.1C+-0.5C for a doubling of CO2. But the quotient of two gaussians is neither gaussian nor symmetric. I don't know how he did his calculation, but it's clearly not right.

In fact, the 16%-84% probability interval (the standard central 68% probability interval corresponding to +-1 sd of a gaussian, and the IPCC "likely") of this quotient distribution is really 0.18-0.52 K/W/m^2 (0.7-1.9C per doubling), and the 2.5%-97.5% interval (the 2 sd analogue) is 0.12-1.3 K/W/m^2 (0.4-4.8C per doubling). While this range still focuses mostly on lower values than most analyses support, it also reaches the upper range that I (and perhaps increasingly many others) consider credible anyway. His 68% estimate of 0.6-1.6C per doubling is wrong to start with, and doubly misleading in the way that it conceals the long tail that naturally arises from his analysis.
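For what it's worth, the quotient distribution is easy to check by Monte Carlo. The sketch below (assuming, as above, independent gaussians t ~ N(5, 1) and C ~ N(16.7, 7), and taking ~3.7 W/m^2 as the forcing from doubled CO2) should roughly reproduce the intervals just quoted:

```python
import numpy as np

# Assumed gaussian inputs, as interpreted above (illustrative interpretation):
# t ~ N(5, 1) years, C ~ N(16.7, 7) W yr/m^2/K
rng = np.random.default_rng(0)
n = 1_000_000
t = rng.normal(5.0, 1.0, n)
C = rng.normal(16.7, 7.0, n)
S = t / C                      # sensitivity in K per W/m^2; NOT gaussian

# Central 68% ("likely") and 95% intervals of the quotient distribution,
# also expressed per doubling of CO2 (~3.7 W/m^2 forcing)
p16, p84 = np.percentile(S, [16, 84])
p025, p975 = np.percentile(S, [2.5, 97.5])
print(f"68%: {p16:.2f}-{p84:.2f} K/W/m^2 "
      f"({3.7 * p16:.1f}-{3.7 * p84:.1f} C per doubling)")
print(f"95%: {p025:.2f}-{p975:.2f} K/W/m^2 "
      f"({3.7 * p025:.1f}-{3.7 * p975:.1f} C per doubling)")
```

A handful of samples have C near or below zero; these land in the extreme tails and do not materially affect the central intervals.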

Mistake number 2 is more to do with the physics. In fact this is the big error, but I worked out the maths one first.

He estimates a "time constant" which is supposed to characterise the response of the climate system to any perturbation. On the assumption that there is such a unique time constant, this value can apparently be estimated by some straightforward time series analysis - I haven't checked this in any detail but the references he provides look solid enough. His estimate, based on observed 20th century temperature changes, comes out at 5y. However, he also notes that the literature shows that different analyses of models give wildly different indications of characteristic time scale, depending on what forcing is being considered - for example the response to volcanic perturbations has a dominant time scale of a couple of years, whereas the response to a steady increase in GHGs takes decades to reach equilibrium. Unfortunately he does not draw the obvious conclusion from this - that there is no single time scale that completely characterises the climate system - but presses on regardless.

Schwartz is, to be fair, admirably frank about the possibility that he is wrong:

This situation invites a scrutiny of each of these findings for possible sources of error of interpretation in the present study.

He also says:

It might also prove valuable to apply the present analysis approach to the output of global climate models to ascertain the fidelity with which these models reproduce "whole Earth" properties of the climate system such as are empirically determined here.

Perhaps a better way of putting that would be to suggest applying the analysis to the output of computer models in order to test if the technique is capable of determining their (known) physical properties. Indeed, given the screwy results that Schwartz obtained, I would have thought this should be the first step, prior to his bothering to write it up into a paper. I have done this, by using his approach to estimate the "time scale" of a handful of GCMs based on their 20th century temperature time series. This took all of 5 minutes, and demonstrates unequivocally that the "time scale" exhibited through this analysis (which also comes out at about 5 years for the models I tested) does not represent the (known) multidecadal time scale of their response to a long-term forcing. In short, this method of analysis grossly underestimates the time scale of response of climate models to a long-term forcing change, so there is little reason to expect it to be valid when applied to the real system.

In fact there is an elementary physical explanation for this: the models (and the real climate system) exhibit a range of time scales, with the atmosphere responding very rapidly, the upper ocean taking substantially longer, and the deep ocean taking much longer still. When forced with rapid variations (such as volcanoes), the time series of atmospheric response will seem rapid, but in response to a steady forcing change, the system will take a long time to reach its new equilibrium. An exponential fit to the first few years of such an experiment will look like there is a purely rapid response, before the longer response of the deep ocean comes into play. This is trivial to demonstrate with simple 2-box models (upper and lower ocean) of the climate system.
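Here is a minimal sketch of that 2-box demonstration; all the parameter values below are illustrative choices of mine, not numbers from the paper. A small upper-ocean box is coupled to a large deep-ocean box and stepped with a constant forcing; the time constant suggested by the early response is a few years, while the approach to the true equilibrium takes centuries:

```python
# Minimal 2-box (upper/deep ocean) energy balance sketch.
# All parameter values are illustrative assumptions, not fitted values.
F = 3.7       # step forcing, W/m^2 (roughly doubled CO2)
lam = 1.25    # feedback parameter, W/m^2/K (equilibrium dT = F/lam ~ 3 K)
k = 0.7       # upper/deep ocean heat exchange coefficient, W/m^2/K
C1, C2 = 8.0, 100.0   # heat capacities, W yr/m^2/K

dt = 0.01     # time step, years
T1 = T2 = 0.0
T1_series = []
for step in range(int(1000 / dt)):   # integrate 1000 years (Euler)
    dT1 = (F - lam * T1 - k * (T1 - T2)) / C1
    dT2 = k * (T1 - T2) / C2
    T1 += dT1 * dt
    T2 += dT2 * dt
    T1_series.append(T1)

T_eq = F / lam                        # true equilibrium warming
# "Apparent" time constant from the early response: initial slope F/C1
# against the quasi-plateau reached after the fast adjustment (~20 yr)
tau_apparent = T1_series[int(20 / dt)] / (F / C1)
# Time taken to reach 90% of the true equilibrium
t90 = next(i * dt for i, T in enumerate(T1_series) if T >= 0.9 * T_eq)
print(f"apparent tau ~ {tau_apparent:.1f} yr, "
      f"but 90% of equilibrium takes {t90:.0f} yr")
```

The fast time scale is roughly C1/(lam+k), a few years, while the slow adjustment of the deep box stretches over centuries - which is exactly why an exponential fit to the early response is so misleading.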

Changing Schwartz's 5y time scale into a more representative 15y would put his results slap bang in the middle of the IPCC range, and confirm the well-known fact that the 20th century warming does not by itself provide a very tight constraint on climate sensitivity. It's surprising that Schwartz didn't check his results with anyone working in the field, and disappointing that the editor in charge at JGR apparently couldn't find any competent referees to look at it.

## 28 comments:

So... the obvious question: are you going to write this up as a comment for JGR?

As a lay reader, looking back and forth between his Nature Blog piece:

"Quantifying climate change — too rosy a picture?"

I don't pretend to follow the math.

But is he actually defending the approach ("autocorrelation analysis" I think) that he illustrates?

OR is the paper meant to be illustrating what I read him as saying in the Nature blog article ---- that doing it this way, the analyses done, by various modelers, when compared are giving us too small a figure for sensitivity?

From the paper:

"Is the relaxation time constant of the climate system determined by autocorrelation analysis the pertinent time constant of the climate system? Of the several assumptions on which the present analysis rests, this would seem to invite the greatest scrutiny."

I can't read science papers this closely, just puzzled whether this is meant to take a position and show that it's leading to unreasonable results and invite others to take it apart ---- or if he's defending the approach as he illustrates it.

Belette,

I may have my arm twisted in that general direction :-) But who knows, he may withdraw it in embarrassment - I emailed him a few days ago (and got an out of office reply).

Ankh,

Putting a sympathetic spin on it, perhaps he just thinks he's had an interesting idea and is putting it out there for discussion. But in that case, it baffles me that he didn't think it would be more appropriate to first try informally asking some people with some experience in this area.

It's not even as if the error is a subtle one - I certainly don't claim any brilliance in exposing the fatal flaw.

RP has picked it up uncritically. I was going to point him here but someone else beat me to it

The climate problem is similar to the capacitor soakage problem in electronics.

There is a primary time constant and a number of secondary time constants.

The secondary time constants are generally not very influential except at very high precisions. Even then their influence is limited to very low frequency signals.

Roy Spencer and a number of others have worked out the primary time constant by other means and have also come up with numbers around five years.

In control theory to assure system stability you generally want a system where a first order lag is dominant. This appears to be the case in the climate system according to a number of different analysis methods.

In addition because of water vapor evaporation/condensation the atmosphere is more like a heat pipe than a blanket at the time scales (five years) in question. At shorter time scales it is more like a blanket due to the lags. In fact the primary time constant is determined by the evaporation/condensation time constant according to Roy Spencer.

IMO the important question here is not so much "what is the dominant time constant of the climate system" (which may be a rather ill-defined and complex issue) but "does this analysis method correctly diagnose sensitivity" and the answer to that is clearly "no, it does not". End of story.

The fact that there are clear physical reasons why one might reasonably expect the analysis to generate an unreasonably low time scale and therefore an incorrect sensitivity estimate is the icing on the cake, but not fundamental to the argument.

If you have time, you should review this study

I.m.o., much more interesting :)

Great post.

I have also done a debunking of this lame study you may be interested in at climateprogress.org

Hi James,

I came across your comment on the recent Schwartz paper, and since your expertise is climate prediction, I have two questions in response to your analysis.

1) You state that " ... confirm the well-known fact that the 20th century warming does not by itself provide a very tight constraint on climate sensitivity."

What does this mean for the validation/evaluation of climate models with 20th century temperature observations? Comparison with 20th century temperatures is often used as a means of 'proving' the skill of climate models. See for example IPCC 4AR. Maybe you could shed a light on this.

2) You later on state that "When forced with rapid variations (such as volcanoes), the time series of atmospheric response will seem rapid, but in response to a steady forcing change, the system will take a long time to reach its new equilibrium."

If the climate system (also) responds on long time scales, what does this mean for the possibility of recovery from the little ice age? Maybe I should rephrase this question. Doesn't this mean that the current climate state is/may (also) be an initial value problem? With long response time it surely appears possible that the current climate is still (partially) responding to what happened in the distant past. Kevin Trenberth suggested something in this direction in a recent post at Nature's ClimateBlog, i.e. that it is important to also know the initial state of the climate when analyzing 20th century climate.

Combining question 1+2)

How do we then know that the 20th century warming is - especially in the latter part - predominantly due to enhanced greenhouse gases? For example, observational data is for sure not good enough to construct an accurate initial climate condition before, let's say, the 1950s (an initial condition should include any climate parameter that responds on long timescales, like the deep ocean, but also the biosphere and the large icecaps). Which suggests that it is simply not possible to uniquely attribute 20th century warming to solar variability, aerosols and greenhouse gases.

(which does not say that these three parameters combine to provide a plausible explanation for the 20th century temperature variations)

Cherio, Jos.

ps. "Count Iblis": thanks for pointing out the Verdes paper!!

ps2. Sorry about the nickname, I have recently been experimenting with making a weblog on 'blogger.com'.

Jos,

I don't think it is at all reasonable to claim that the IPCC relies heavily on C20th global average temperature trend for evaluating models - there is a whole chapter on evaluation (Ch. 8) which covers a wide range of physical processes.

Trenberth's blog seems written in somewhat provocative manner (not that I'm going to throw stones on that score!), but he is clearly focussing on regional predictions of seasonal/annual climate up to a decade ahead, which is a very different matter from (say) a global temperature trend over the next 30 years. The former is strongly dependent on initial conditions, the latter is not. It is wholly implausible to think that a multicentennial-scale recovery from a ~0.5C cooling (LIA) can be making a significant contribution to the recent 0.5C/30 year warming. Any long response is still at a reducing rate with time, just not quite in the way a simple exponential decay would look.

Hi, I'm the original Mr Layman, and want to thank you all for trying to make the debate as accessible as possible. I have a question, which I saw on a Freeper thread somewhere (I know, I know, but sometimes I like to scare myself).

Is it true that Schwartz' model is the only one that correctly gets the drop in global mean temp from 1940-1970? If so, does that make it more credible than all the other models?

Not sure if this got through the first time, so forgive the double posting. I'm passing this on from a NZ blog where I'm commenting as John A. Another commenter (Falafulu Fisi) wanted to ask this question but doesn't have a Google account, so I'm cutting and pasting on his behalf, since it seems like an interesting question to a non-expert like me:

[cut and paste follows]

Here is my message that I was gonna post at James Annan's site, but it requires the poster to have Google account. If you believe James, mathematical analysis, then you should read the following analysis, and may be you could see that Annan's analysis is exactly what the following paper suggests that it should be dismissed, because it is inaccurate to use linear models. BTW, Scwartz analysis is still based on linear model where the real climate system is non-linear and multi-coupled feedback.

James have you read the followings:

Inferring instantaneous, multivariate and nonlinear sensitivities for the analysis of feedback processes in a dynamical system: Lorenz model case-study

[http://pubs.giss.nasa.gov/docs/2003/2003_Aires_Rossow.pdf]

Appeared in Q. J. Royal Meteorol. Soc., 129, 239-275.

Sensitivity is a non-linear function, and it is pretty much hard to estimate, in current modeling.

Until, this huge barrier in climate numerical modeling is solved or close to being solved, then I think your attack on Dr. Schwartz's work is despicable.

How about you address the science and not the person, and BTW Dr. Schwartz model is a linear one, whether his analysis is correct or not, it won't change the fact that the sensitivity parameter is dynamic, which makes linear feedback climate analysis including those models the IPCC used unreliable.

I recommend that should address the shortfall of numerical modeling rather pushing your religion.

[cut and paste ends]

How about it. Any takers?

Plum,

Schwartz just uses a simple energy balance model, there's nothing special about it.

Hi James - I've found your analysis of the Schwartz paper plausible, but not being a climatologist or geophysicist myself, I wonder if you could elaborate. I'll pose it as a number of questions that might arise in a dialog between you and Schwartz:

Are there any unequivocally demonstrable forcings or feedbacks during current or earlier eras that are known to equilibrate over time scales much longer than 5 years?

How can a 5-year equilibration interval be reconciled with the very long intervals involved in mixing between deep and shallow ocean waters resulting from the meridional overturning circulation? Or possible other types of heat exchange between surface and deep waters?

How much heat goes into ice or snow melting? What mechanism could constrain certain ice/snow melting feedbacks to a 5-year equilibration interval, when (a) reduction in albedo establishes a continuing feedback loop of its own, with higher temperature mediating further albedo loss in a continuing cycle until some compensatory mechanism operates to control the effect; and (b) thinning of snow or ice might not manifest itself climatically until sufficient time for complete disappearance that reveals exposed ground or water - possibly decades. Are these phenomena quantitatively important enough to make a difference?

CO2 rises in the current trend tend to start around 1840, with temperature rises not demonstrable until about 1910. However, the change in CO2 during that lag is rather small. Is it sufficient, however, so that rapid temperature equilibration should have shown up in the late 1800's rather than after 1900?

I'd be interested in your comments and those of others.

Fred

Fromnotpc,

I wonder if anyone actually read that paper as far as page 2, where it says:

However, this classical feedback analysis can still be useful if one is interested in the equilibrium (or transient) response of one variable to a perturbation of one other, especially when one is comparing two nonlinear integrations.

And again on page 12:

Previous approaches to feedback analysis are often only a characterization of the equilibrium state of the system after the introduction of an external forcing.

But of course this is exactly what we are interested in - no-one is seriously trying to estimate climate change based on truly instantaneous sensitivities (the topic of the paper), and mostly we are not interested in the actual transient in all its chaotic detail. Instead, we use observational estimates based on intervals which are at least many times greater than the characteristic time scale of the atmosphere, and even in the case of multi-annual perturbations have to make adjustments for the ocean disequilibrium. As is usual, the sceptics don't even understand the basics of what they cherry-pick and selectively cite.

Thanks, James. I'm just a normal bloke who's not too snappy on maths, so I appreciate it when scientists take the time to seriously consider sceptics' arguments.

BTW, I've been reading your site for a long time, and your failure to come across as a so-called lockstep climate cleric only makes you more credible in my view.

fmoolten,

Over very long time scales the major ice sheets and orbital forcing are significant O(1000y) effects. Even in the modern era, a 5 year time scale is only plausible if you postulate that the heat exchange with the deep ocean is very limited indeed.

One minor detail that is worth noting: Schwartz says "less of the deeper water is coupled to the surface", but actually the real situation is that it is basically all coupled, but with a long time constant (so it lags the surface warming more substantially, and is further from equilibrium). I doubt that anything else is worth much analysis - there are ample reasons to distrust his result.

Re. James' comment that:

"I may have my arm twisted in that general direction :-) But who knows, he may withdraw it in embarrassment."

He won't - I emailed him as well, and his reply was "I believe I can respond to the concerns raised by Annan, should that be necessary, in an appropriate forum."

So I think you'll have to write to JGR in order to get a response from him.

Dave

I left a comment for you on Dr. Pielke's blog. http://climatesci.colorado.edu/2007/08/20/new-paper-on-the-diagnosis-and-significance-of-ocean-heat-content-changes/

I just wanted to draw your attention to it in case you cared to comment.

Ron,

Doesn't seem particularly sensible carrying on a conversation about my comments on some other blog (especially one that is now defunct according to its owner).

"It appears to me that the relaxation constant you favor is more a tenet of the AGW faith than it is a fact."

The relaxation constant I quoted is a fact (or close to it) in relation to the models. The point is that Schwartz's method fails to diagnose their time constants and sensitivities, and he presents no evidence or even reasoned argument that it is likely to work for the real world - it is purely supposition on his part. In fact it is easy enough to demonstrate that his method fails even when applied to the simple energy balance model he postulates.

The hope that Schwartz has uncovered something deep and meaningful is absurd, and shows up the straw-clutching desperation of the septics: in fact, his analysis is rather superficial and clearly unreliable for several reasons, including (but not limited to) those presented here.

It seems to me that there is a simple explanation for Schwartz's low sensitivity value, which he draws attention to himself, starting at the bottom of p 13. His original analysis gave a relaxation time constant of 15-17 years, which is right in the normal ballpark. But he got inconsistent results for different subperiods, and so detrended the data, even though acknowledging that this amounts to applying a high-pass filter.

His demurral is right. Detrending attenuates the long term effects, and almost guarantees a short relaxation constant. It throws out the data that caused the discrepancy, but that is also the data that is needed. It is quite the wrong thing to do.

Well, there are a number of ways of describing the wrongness :-)

In fact some simple experiments show that Schwartz's approach is not capable of detecting the time scale even when this is well defined (and instead it gives an unrealistically low estimate).
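A sketch of one such experiment (all numbers are my own illustrative choices, not taken from the paper): generate AR(1) series with a known 30-year time constant, estimate the time constant from the lag-1 autocorrelation as tau = -dt/ln(r1), and compare raw against linearly detrended versions. Both come out well short of 30 years, with detrending shortening the estimate further:

```python
import numpy as np

def tau_from_lag1(x, dt=1.0):
    """Time constant from lag-1 sample autocorrelation: tau = -dt/ln(r1)."""
    x = x - x.mean()
    r1 = (x[:-1] * x[1:]).sum() / (x * x).sum()
    return -dt / np.log(r1)

def detrend(x):
    """Remove a least-squares linear trend (a crude high-pass filter)."""
    t = np.arange(len(x))
    slope, icept = np.polyfit(t, x, 1)
    return x - (slope * t + icept)

rng = np.random.default_rng(1)
tau_true, n = 30.0, 100          # 30-year time constant, 100 annual samples
rho = np.exp(-1.0 / tau_true)    # corresponding lag-1 autocorrelation
raw_est, detr_est = [], []
for _ in range(500):
    # simulate AR(1): x[i] = rho*x[i-1] + noise
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0] / np.sqrt(1 - rho**2)   # start from stationary distribution
    for i in range(1, n):
        x[i] = rho * x[i - 1] + e[i]
    raw_est.append(tau_from_lag1(x))
    detr_est.append(tau_from_lag1(detrend(x)))

print(f"true tau: {tau_true}, median raw estimate: {np.median(raw_est):.1f}, "
      f"median after detrending: {np.median(detr_est):.1f}")
```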

James said...

As is usual, the sceptics don't even understand the basics of what they cherry-pick and selectively cite.

No, James, I understand the derivation that Rossow & Aires were stating in their paper.

I am not a climate scientist although I was trained as a physicist; however, I specialize in numerical computing (scientific computing), and the climate dynamical processes described in the paper are real physical processes for which we have very little understanding of the nested feedback that is taking place. I have always used non-linear models such as Support Vector Machines (SVM) or Artificial Neural Networks (ANN) to solve nested-feedback economic models that I have developed, and always run into difficulties, such as the solutions oscillating wildly or becoming too unstable numerically. Non-linear modeling in economics is similar to dynamical-system climate modeling, however I find it very hard to pre-determine the dynamics & transfer function of a financial model that I have developed, because it is bloody hard to formulate one. If it wasn't hard, then scientists who applied feedback control theory (linear & non-linear) to economic dynamical systems would be millionaires by now, including myself, but in reality it is not the case. There are numerous papers in the Economics/Finance literature that cover the use of non-linear feedback control theory, almost exactly the same as the model derived by Rossow & Aires in their dynamical feedback sensitivity analysis. The work of Rossow & Aires was just the beginning of taking non-linear feedback in climate modeling a few notches up from the linear feedback model that dominates the current models. Climate and economics are both non-linear dynamical systems, and if the climate models are so convincing to you now, then perhaps you could modify them to use for modeling of the financial markets, and I am sure that you and your fellow modelers would be millionaires if you truly believed your models. The reason I am not a bloody millionaire myself is that there is so much uncertainty in the models I have developed, be it interest rate models, foreign exchange, etc. - my software seems to have good predictions sometimes and seems to be wrong on other occasions.

I think that more analysis using ARMAX (auto-regressive moving-average exogenous), as described by Rossow & Aires, would be a step forward in solving the multi-coupled feedback in climate systems, because in reality non-linear multi-coupled feedback does exist, but we have simplified it into a linear one.

See a NASA-sponsored workshop from a few years back here. Note: if you click on the link, just refresh the page when it appears, so that the text doesn't get squashed up to the left side.

WORKSHOP ON CLIMATE SYSTEM FEEDBACKS

Cheers,

Falafulu Fisi.

You've got to realise that there is a large difference between estimating the long-term change arising from a change in boundary conditions, versus trying to predict the detailed trajectory. The latter is often much harder than the former. That's not to say the former is trivial, or that we can claim to know all the details on a regional basis, but (for example) generalised warming in response to increased GHGs is basically zeroth-order energy balance.

James,

You write: "The relaxation constant I quoted is a fact (or close to it) in relation to the models."

I find your comment comical since we are talking about the real world and not models. I was shocked the first time I heard a modeler talk about his modeling runs as "experiments." They are not experiments because they are not observing the natural world. You seem to be making the same mistake.

Modeling runs are not experiments and predictions are not evidence. Schwartz's observations are based on the real world and not models. This shows how bad the models are, not an error by Schwartz.

May I suggest you read "Useless Arithmetic" by Orrin Pilkey and his daughter and also "Principles of Forecasting" by J. Scott Armstrong.

Ron,

The problem I am trying to explain to you is that Schwartz's method does not actually calculate the "relaxation time" of systems where this constant is known.

In that context, whether or not the models are really good models of the climate system is entirely beside the point. So long as they capture the basic physical properties on which Schwartz bases his analysis - thermal inertia, and relaxation to a new radiative equilibrium in response to a forcing (which they clearly do) - his method should work on them just as for the real climate system. But it demonstrably does not work for them, and it is therefore unreasonable to believe that it might magically happen to work for the real climate.

Actually, some further investigations have indicated that things are a whole lot worse than I wrote in the original post. Schwartz's whole method of analysis is predicated on his belief that an AR1 series is an adequate model of the climate system... but his analysis method fails to work even for an AR1 series itself! (Clearly an AR1 series is not a very good model of the climate system, but that is another issue - it is obviously enough to show that his method fails even if his underlying hypothesis is true.)

James,

You write:

"The problem I am trying to explain to you is that Schwartz's method does not actually calculate the "relaxation time" of systems where this constant is known."

This is misleading. Actually Schwartz has innovated a new method to determine the relaxation time which yields a different result from what is currently accepted by climatologists. This new method is based on observations of global mean surface temperature and ocean heat content. I am certain you are aware of Dr. Pielke's support for using ocean heat content as a preferable metric for climate change. Schwartz uses it and reaches a different conclusion about relaxation time.

It would seem to me far better for you to show a little humility and keep an open mind. What are you going to do if Schwartz is proven right?

BTW, Schwartz's paper is in line with the recent paper by Roy Spencer. If Spencer is right about the negative feedback in the tropics, one would expect a relaxation time more in line with Schwartz's.

Actually Schwartz has innovated a new method to determine the relaxation time

No he hasn't actually, he has picked up a well-established method in time series analysis which is known to give biased answers (with a literature on this detail dating back some 60 years).

This is trivial to show experimentally: make yourself an AR1 series with a known time constant, and then use Schwartz's method to estimate the time constant. It gets it wrong (and wildly wrong for plausible parameter values). This can be demonstrated in just a few lines of code in any maths/statistics package - ask on Climate Audit; I'm sure that any competent econometrician will quote the appropriate references off the top of their head. You can expect a longer blog post on this in the near future, once everything is tidied up for public consumption.
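Here is that experiment sketched out (parameter choices are mine, for illustration): an AR(1) series with a known 30-year time constant, with the time constant re-estimated from the lag-1 sample autocorrelation as tau = -1/ln(r1). The estimate comes out biased low, and the bias only fades as the series becomes much longer than the time constant - the classic small-sample autocorrelation bias:

```python
import numpy as np

rng = np.random.default_rng(2)
tau_true = 30.0                   # known time constant, in years
rho = np.exp(-1.0 / tau_true)     # corresponding lag-1 autocorrelation

def estimate_tau(n, trials=300):
    """Median time constant estimated via tau = -1/ln(r1)."""
    taus = []
    for _ in range(trials):
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0]
        for i in range(1, n):     # AR(1): x[i] = rho*x[i-1] + noise
            x[i] = rho * x[i - 1] + e[i]
        x -= x.mean()
        r1 = (x[:-1] * x[1:]).sum() / (x * x).sum()
        taus.append(-1.0 / np.log(r1))
    return float(np.median(taus))

results = {n: estimate_tau(n) for n in (100, 400, 1600)}
for n, tau in results.items():
    print(f"n={n:5d} years: estimated tau ~ {tau:.1f} yr (true value: 30)")
```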
