Some time ago, Stephan Lewandowsky wrote an article on planet3.0, "The Inescapable Implication of Uncertainty" (also available on his own blog as part of a series), which made the fairly straightforward point that the expected cost of climate change is greater as a result of uncertainty about its magnitude (e.g., the canonical example of climate sensitivity), and thus those who argue that uncertainty is a justification for inaction are precisely backwards in their thinking.
It's a pretty simple point, one that Michael Tobis has been making for a long time. And it's not at all controversial, scientifically speaking. So I didn't think it needed commenting on.
But recently Ben Pile wrote a really bizarre attempt at criticism, so it might be worth revisiting the topic.
The crux of Stephan's argument is quite simple. The cost of climate change is generally considered to be a nonlinear (convex, not "concave" as I originally wrote - see comments) function of the magnitude of warming. This is a standard result of all attempts at economic modelling that I am aware of, and in my opinion is very intuitive and natural. For example, I used the quadratic function C(T) = 0.284T² (where T is the temperature change, and the cost is expressed as % GDP loss) in our Climatic Change paper (available here). This function was directly based on the DICE model of Nordhaus. AIUI all credible economic modelling generates qualitatively similar results. (Incidentally, it doesn't affect the argument in any way at all if the loss function actually has an optimum at some nonzero temperature change, as some others have found.)
The point about a convex function (again, not "concave" - see comments) - indeed it's (almost) the very definition of convexity - is that for any small t, (C(T+t)+C(T-t))/2 > C(T). Or in words, the average of the costs of T+t and T-t is greater than the cost of T. The consequence of this is that symmetric uncertainty about the value of T leads to an increase in expected cost, compared to a deterministic outcome.
The application to climate change is straightforward, as illustrated with the following simple example. If we know that the sensitivity is 3C (say), then the cost function based on the DICE model gives an ultimate loss of 2.6% GDP for a highly simplistic scenario in which CO2 doubles and is then held constant. If instead of a known sensitivity of 3C, we thought the sensitivity might equiprobably be 2C or 4C, then even though the mean value (our expectation of the temperature change) is unchanged at 3C, the expected cost is (1.1+4.5)/2 = 2.8% GDP. For a 50-50 chance of either 1C or 5C, the expected cost rises to (0.3+7.1)/2 = 3.7%, and for 0C or 6C it's 10.2/2 = 5.1%. The discerning reader may have noticed the first hints of a pattern here...
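The arithmetic is easy to check. Here is a minimal sketch of the example (the function name is mine; the quadratic is the DICE-based fit quoted above):

```python
# Worked example: quadratic damage function C(T) = 0.284 * T**2
# (% GDP loss), for a known sensitivity versus symmetric 50-50
# uncertainty about the same mean of 3C.

def cost(T):
    """Damage as % GDP loss for warming T (deg C)."""
    return 0.284 * T**2

# Known sensitivity of 3C:
print(round(cost(3.0), 1))  # 2.6

# Equiprobable pairs with the same mean of 3C:
for lo, hi in [(2.0, 4.0), (1.0, 5.0), (0.0, 6.0)]:
    expected = 0.5 * (cost(lo) + cost(hi))
    print(lo, hi, round(expected, 1))
```

The three loop iterations reproduce the 2.8%, 3.7% and 5.1% figures: widening the spread while keeping the mean fixed steadily raises the expected cost.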
Another way of saying it, is that the expected cost of (uncertain) climate change is greater than the cost of the expected climate change. (This is using the concept of expectation in the mathematical sense - note that in the uncertain case, there is no possibility of the cost actually being 2.8%, it will either be 1.1 or 4.5, and we don't know which.) The result is not specific to the particular example, of course, but applies widely. Increasing uncertainty (for any sensible definition of "increasing uncertainty") will generally lead to an increase in the expected cost.
So what's Ben Pile so worked up about? He accuses Stephan of producing "the most remarkable attempt to formulate — or reformulate — the precautionary principle I have ever seen", and describes it as "an incredibly tortured attempted to alternate between word play and maths abuse". There's more:
"Lewandowsky, over the course of three posts – one, two, three — reinvents the precautionary principle without ever calling it the precautionary principle. This is interesting in itself… An academic in the field of climate policy has forgotten that the precautionary principle already exists, is already applied to the science, and is already manifested in policy. "
And there's plenty more vacuous hyperbole where that came from.
Unfortunately, Pile is dead wrong. Lewandowsky's argument has nothing to do with the precautionary principle, so it's hardly surprising that he doesn't mention it. Instead, it's just a simple application of standard economic analysis under uncertainty, which is implicit in all academic work in this area. It was certainly implicit in our Climatic Change paper - I didn't think it worth specifically highlighting in that work precisely because it is so elementary and well known. But Pile has got such a bee in his bonnet about the PP that he doesn't even realise that Lewandowsky isn't even using it. It's a bit odd, because from what I recall of previous posts of Pile's, they are usually fairly sensible (I'm not an avid follower though). But of course it is hardly the first time that a social scientist has blundered into a debate and made a fool of himself through not having the requisite (albeit rather minimal) mathematical skills to understand the issues...
70 comments:
I love the idea that 'those who argue that uncertainty is a justification for inaction are precisely backwards in their thinking', next to your claim that I have misconceived Lewandowsky's argument as a (re)formulation of the precautionary principle.
You seem to want to sustain your cake and eat it.
Pile's post, and first comment, is a nice illustration of how reducing a quantitative argument to a verbal argument constitutes a massive loss of information. Because he lacks a model, Pile can say all kinds of internally inconsistent things, and make attributions about Lewandowsky, which cannot be precisely refuted. That makes the conversation rather useless.
I think the reason Pile completely misses the point about convexity is that he's not really arguing about uncertainty around some mean. He's arguing that the mean or uncertainty is somehow misspecified, such that the expected damage is lower. ("Indeed, it is only by virtue of their over-estimations of risk that environmentalism has achieved any influence at all. And this is the reason why environmentalists cannot abandon the precautionary principle.")
Seen this one? I see only the abstract, uninformative, but catchy title. Yours is the first 'related' doc listed: https://springerlink3.metapress.com/content/3p8486p83141k7m8/resource-secured/?target=fulltext.html&sid=rfnnarlyhth42zhcz2kd5cce&sh=www.springerlink.com
Shorter version: Uncertainty costs money. Anybody who is involved in any sort of real R&D knows this. I don't see this as a particularly controversial point.
But if you don't like the model based argument... just look at the impact of uncertainty on the oil commodity market or the stock market in general.
<a href="http://www.stanford.edu/~nbloom/uncertaintyshocks.pdf">Here's a little study on the impact on markets from uncertainty.</a>
In detail, here are two examples relating directly to the potential for CAGW: 1) climate change turns out to be a less serious threat than the "median" forecast result, and we overspend on climate change remediation; 2) climate change turns out to be more serious, and we underspend on remediation (so the direct costs of climate change are higher than they would have been with a modest increase in remediatory spending).
Neither of these are guaranteed to happen, but the likelihood of one of these scenarios playing out increases as uncertainty increases.
The other asymmetry is that there is lots of stuff to do that you would want to do even if there was no threat of climate change, which means the loss of opportunity costs to dealing with the uncertainty on the high side is lower.
The comments on Pile's post are hilarious. I particularly like the part near the end, when someone who actually understands Lewandowsky's point shows up and points out that 'expectation' actually has a precise mathematical meaning. Pile's response:
Well, I took Lewandowsky’s word for it that his argument was based on ‘simple mathematics’, rather than advanced statistics.
Oh, just in case you missed it - which is entirely possible, given that Pile didn't bother to mention it here - he's posted a 'response' to this post. Comment #63.
Tom, it's not so much that Ben Pile has reduced the simple equations Lewandowsky described to nonsense verbiage; he has just completely ignored them - and notice that James' lucid explanation above has no effect either.
He seems to believe quantified argument is useless, and if so, as jmc said, he is doomed to talk nonsense.
It'd be interesting to put him in a room with James and a whiteboard and see what James could convey to him.
Eli: The other asymmetry is that there is lots of stuff to do that you would want to do even if there was no threat of climate change, which means the loss of opportunity costs to dealing with the uncertainty on the high side is lower.
Yes, that's an excellent point. Some of the "costs" of climate mitigation actually incur a net benefit (though you could argue it damages some powerful individuals' pocketbooks and agendas).
Example: reduction in oil usage = reduction in oil dependency = increased national security + its impact on (near-)totalitarian states that depend on foreign oil money to maintain stability.
Thanks all. Hank, see this post.
It's telling that Pile seems to think this elementary argument is "advanced statistics" :-) Ben, haven't you got a substantive response to the point I (and Stephan) made? No, I thought not.
Of course, he won't care about that, because he's won the "debate", according to the adulation from the peanut gallery on his blog. But all that requires is the generation of a content-free rant about the "irrational and incoherent nature of environmentalism". Job done, move on to the next talking point.
James Annan
I can’t tell if you referred to us commenters at Climate Resistance as “the peanut gallery” before or after TomFP (comment #64 to Ben’s article) described the kind of catastrophe-based environmentalism you practise as “simian grooming”. No matter, I don’t give a monkey’s either way. TomFP was making an interesting point about primate behaviour. What are you doing?
Tom Fiddaman said: “Pile's post, and first comment, is a nice illustration of how reducing a quantitative argument to a verbal argument constitutes a massive loss of information”.
Do commenters here really believe that formulating statements about climate change in terms of T and T+t and dt adds information? To normal human beings it looks like theologians arguing in Latin in order to impress the mathematically challenged masses. It’s not working, is it?
In your defence of Lewandowsky, you state that “the cost of climate change is generally considered to be a nonlinear (concave) function of the magnitude of warming”, describe this belief as “intuitive and natural”, and give your own personal quadratic function C(T) = 0.284T² as a concrete example.
Your dismissal of Ben’s critique of Landowsky and of the comments which follow rests on the assumption that Ben (and we in the peanut gallery) can’t follow such advanced mathematical reasoning. This assumption is wrong. We’re not challenging your equation, or your point about the average of T+t and T-t being greater than T for a concave function. We do wonder - at great length - why you do it.
Some think the algebra is a cover for a secret plan for world domination; others think you’ve found the formula to turn base modelling into gold; others that you’re just deluded practitioners of post-modern numerology.
I look at that 0.284 and think: “Wow. Annan’s constant. To three decimal places. Think of all the drowned Bangladeshis summed up in that one figure. And think what a massive loss of information would result if he tried to explain it in words.”
The information contained in your formulae (and which Tom Fiddaman fears will be lost if expressed in words) is information about the future. This, I would suggest, is the reason why it has to be preserved in an algebraic form. It is, as the theologians say, ineffable - or should that be in-F()able?
Your dismissal of Ben’s critique of Landowsky and of the comments which follow rests on the assumption that Ben (and we in the peanut gallery) can’t follow such advanced mathematical reasoning. This assumption is wrong.
Anyone who understands elementary probability theory will tell you otherwise, I'm afraid.
Geoff,
Thanks for turning up and proving me right.
"He who refuses to do arithmetic is doomed to talk nonsense."
John McCarthy, 1927-2011
Tsumetai
How does elementary probability theory help you to determine whether we can follow James' and Stephan's reasoning?
James
And he who thinks he can demonstrate truths about the real world using simple arithmetic is doomed to do what?
Well, first, check whether that future cost is shown in inflation-adjusted funds. If it's going up fast, adjust that inflation rate.
If that doesn't lower the anticipated future cost increase below linear, then adjust the discounting-of-the-future-costs slider.
Goal: show that it makes more sense for us older folks to party on now and let your grandchildren, who will all be much richer and wiser than we are, pay for any little problems we leave them along with the richly entertaining YouTube documentation of our lives.
He who can't understand statistics is doomed to do accounting.
Geoff,
A decent analogy to climate change is medical care.
When you are treating a patient, there is never absolute certainty that the diagnosis is correct or that the treatment will be 100% effective.
And if the person is suffering from a potentially fatal illness you don't have an indefinite time to treat the patient.
As one can imagine, unlike the Earth's climate and future climate change, where we only have one go at it, there is a vast body of research on the effects of uncertainty and decision making on the costs of medical care.
See e.g. this just as an example.
Can you imagine a doctor who refuses to try any treatment on you until he knows without any doubt that the treatment he proposes to try will work? There are costs of action, and there are consequences of inaction that have costs associated with them.
Since we can't know the future, all we can do is predict the most likely outcome, and ironically, the more uncertainty there is in the outcome, the greater the expected costs associated with that uncertainty are.
This is why it is absolutely critical that efforts like James's and Jules's that seek to reduce uncertainty be well funded, and equally important that those of people who seek to increase uncertainty by intentionally muddying the water be defunded.
You might think that it helps the case of the skeptic to increase uncertainty; in reality it is going to cost us all more money. That I'm certain of.
Not particularly relevant to the wider point, but a mathematical nitpick: your example and your definition would be what I was brought up to call a "convex function", not a concave one, and the inequality you state is what I would have called Jensen's inequality for convex functions. (I see that Lewandowsky's article also uses "convex" rather than "concave".) But perhaps you are following terminology from another conversation that I've missed.
Campion,
Ooops that's embarrassing - I've fixed it.
Campion
Lewandowsky is in Australia. Convex looks concave from down under.
Carrick
Why argue by analogy from health care if you’ve got good arguments in the area of earth care? When you are treating a patient, one thing you can be absolutely certain of is that he will be dead by 2100, whatever you do. The analogy breaks down before you begin.
If you really want a medical analogy, try this. You’re running a mental asylum with 6 billion patients, and have ambitious plans to double its capacity. One day a heating engineer calls and says that, due to unforeseen problems with the thermostat, the central heating will start to malfunction in 50 to a hundred years’ time. He says he can probably fix it, but first he’ll have to fly to Rio to consult his colleagues. You ask: How much will it cost? He says 0.284 multiplied by the square of the number of degrees he thinks the temperature will rise by. Do you:
1) Pay up
2) Throw out the heating system and start designing a new one from scratch
3) Release all the patients and intern the heating engineer?
Geoff Chambers wrote:
Do commenters here really believe that formulating statements about climate change in terms of T and T+t and dt adds information? To normal human beings it looks like theologians arguing in Latin in order to impress the mathematically challenged masses. It’s not working, is it?
---
Presumably theologians would argue in Latin to impress the Latinically challenged masses, not the mathematically challenged ones. But I get your point. In either case you wouldn't understand.
Geoff, I'm not arguing by analogy. I was giving you an analogy so that you might be able to make the link.
It's of course not necessary for me to make an analogy, since the point is already obvious and can be derived in general terms for any similar economic situation.
Your example was meaningless, and informative only in that you don't understand the underlying issue well enough to be able to formulate a real-world example of your own where uncertainty increases the cost of a project.
Let me ask you a question...
What is it you hope to accomplish here? You obviously don't know anything about the topic, and you can't reason worth a flip.
How does elementary probability theory help you to determine whether we can follow James' and Stephan's reasoning?
Gosh, how would an understanding of elementary probability theory help me determine whether you can follow an argument based in elementary probability theory? It's a mystery.
Ben has a complex argument about how Lewandowsky smuggles the precautionary principle back in under cover of what he claims to be a purely mathematical analysis of the effect of variations in uncertainty on our expectations with respect to climate change.
My point is different. Lewandowsky is saying nothing at all about climate change, for the very simple reason that his whole argument is about the characteristics of the graphs he analyses, nothing more. He is describing some very ordinary mathematical characteristics of frequency distribution curves, that’s all.
He takes a skewed probability graph of estimated climate sensitivity from Roe and Baker, and produces four simulated lognormal versions, each time increasing the spread (which he identifies with uncertainty) while maintaining the same mean sensitivity. Since he assumes that a negative sensitivity is an impossibility, the left tail of the graph is blocked at zero, and since he has decided to hold the mean at 3°C, an increasing spread (increasing uncertainty) naturally results in a fatter tail to the right. He then assumes that a certain temperature is catastrophic (pointing out in a correction to his second article that the particular choice of temperature doesn’t affect the argument) and demonstrates that increasing uncertainty necessarily results in a greater probability of a result to the right of his danger point.
All this is a perfectly reasonable demonstration of the properties of a certain kind of skewed distribution curve, but it tells you nothing about climate sensitivity or our knowledge of it, or the correct reaction to it. At the same time as he’s increased uncertainty (the spread) he’s held the mean steady. What situation in the real world could lead to such a mix of knowledge and doubt?
Let’s take an example from the familiar world of vital statistics - women’s breast sizes, for example. We plot them along the x axis, from A to triple F or whatever. We know that B is the mean, and that a size smaller than A is impossible, so we naturally expect a skewed distribution with a long tail trailing off into the realms of fantasy. We have a graph with a mean at B and a peak somewhere between A and B, which we believe represents the truth, and from that we derive three other graphs similar to Lewandowsky’s which represent increasing uncertainty as to the true distribution, while all the while holding to the principle that the B cup is the mean. (The uncertainty might represent our acquaintance with the female form, or the amount of clothing worn.)
What do these graphs mean? If you’re Lewandowsky, they mean that the less you know about women, the more likely you are to think you’re going to meet a woman with really enormous ones. In your dreams. And the more you think that, the more necessary it is to be prepared to meet such a woman, because you never know, and the less you know, the more prepared you need to be.
As a lifestyle choice it has its attractions. So does catastrophe-based environmentalism. As an example of rational thought, it stinks.
I just realised you mean Ben, yes, that Ben, me old pal* Ben Pile, with whom I and others on the bad science forum wasted many an hour trying to work out what the **** he was on about. Finally it became clear he was looking at the science through the lens of politics and his own weird philosophy, and we all gave up.
Nice to see someone else doesn't like his work.
*Sarcasm alert
Geoff, until you can explain to us how you are going to follow an argument that is based on probability theory without resorting to probability theory, I think you're wasting your time. Again your examples are poor, but you reject ones that are reasonable.
I'm going to assume you're one of those people who don't know that they don't know and move on.
Cheers.
Perhaps Eli can simplify the argument for Geoff. Everyone (and that includes Richard Tol, William Nordhaus and any climate scientist you can shake a stick at) agrees that the cost of a smaller increase in global temperature than we would expect based on our best understanding would be not very much.
Unfortunately the cost of a similarly larger increase would be very very much. When you add them up (very very much minus not very much), the difference is very much, i.e. it doesn't average out, so for planning purposes the cost of inaction is more likely to be very much.
Hello James,
According to Nullius in Verba, your "assertion is mathematically false". His reasoning is this:
> The expected cost is the mean of the cost distribution. The uncertainty is measured by the standard deviation of the cost distribution. The two can vary independently – distributions exist with those parameters for any combination of mean and SD. (And there are also distributions where one or both is infinite.)
Since Nullius would not bother coming here because of the login policy (I must take his word for that claim, thus am glad of not being named Nullius in verba), I took the liberty of creating a Wordpress account and signing in to copy his argument.
I hope the issue is not as convex, I mean as complex as it seems. In any case, we may notice the shift from "is false" to "I see no reason for" in the comments that follow.
Hope you don't mind,
Congratulations for your award!
willard
PS: Do you know Ron Broberg's blog, by any chance?
Guthrie,
I've been racking my brain for why I thought Pile had previously been "fairly sensible" and suspect I'd confused him for someone else...
Willard, NiV is just waffling randomly, demonstrating again the value of John McCarthy's quote. The uncertainty in the cost isn't the issue. The argument that Stephan presents is that uncertainty in the magnitude of climate change directly affects the mean cost via the mechanism I described. If NiV thinks this is "mathematically falsified" by his verbiage, he has a different concept of both mathematics, and falsification, than I do.
If he really does want to falsify the claim, all he has to do is come up with a plausible cost function (which will be convex), and reasonable distribution of climate outcomes, such that increasing the uncertainty of the climate outcome (while keeping the mean unchanged) does not increase the expected cost.
I'm not holding my breath.
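In case anyone does want to try, here is what such an attempt runs into. A quick Monte Carlo sketch (the function name, spreads and sample size are my own illustrative choices): a convex quadratic cost, a Gaussian distribution of warming, and a mean-preserving increase in spread.

```python
# Monte Carlo sketch: convex cost C(T) = 0.284*T^2, Gaussian warming
# with fixed mean 3C and increasing spread. The expected cost rises
# with the spread, as Jensen's inequality guarantees for a convex C.

import random

def expected_cost(mean, sd, n=100_000, seed=1):
    """Estimate E[C(T)] for T ~ Normal(mean, sd) by simple sampling."""
    rng = random.Random(seed)
    return sum(0.284 * rng.gauss(mean, sd)**2 for _ in range(n)) / n

costs = [expected_cost(3.0, sd) for sd in (0.0, 0.5, 1.0, 1.5)]
print(costs)
assert costs == sorted(costs)  # monotonically increasing with spread
```

Any symmetric distribution and any convex cost function will behave the same way; the only question is how fast the expected cost grows with the spread.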
I've seen Broberg's name somewhere, didn't know the blog.
It’s worse than we thought. My example was far too favourable to Lewandowsky, because, unlike my example of a skewed distribution of breast sizes, which represents something in the real world, his distribution is one of estimates of something in the real world. There is no “distribution of climate sensitivities”, just one true figure, which we don’t know, and a distribution of estimates. So, whereas, in the case of climate sensitivity, he can get away with reasoning about what’s going on in his own head (his “expectation” of high sensitivities), as soon as he applies his reasoning to the real world, his logic leads him to the conclusion that thinking about the world changes it.
In my example, holding the mean at the B cup and disregarding sizes lower than A logically obliges you to make more high estimations as your uncertainty increases (due, for example, to looser, woollier jumpers). You are logically obliged to say: “Knowing that the average is a B cup, I’m bound to interpret the fashion for loose jumpers as hiding more big-breasted women”. Lewandowsky can’t avoid the next step, which is to say “therefore the more women there are wearing loose woolly jumpers, the more big-breasted women there must be”. He really believes that his increasing uncertainty about the world is changing the way things are in the world.
The way out is obviously to reject his initial step of imagining synthetic skewed distributions with increasing uncertainty (flattened curves) but a known mean. Your certainty of a derived function (the mean) can’t be greater than your certainty of the data on which the function is based.
Since his initial step is logically false, all discussion based on it (eg of costs) is a waste of time.
There’s a related, and much simpler argument, of course; you can’t be more certain of the risks or costs than you are of the factors (temperature rise) of which those risks or costs are a function. No amount of averaging big guesses and little guesses on a concave curve can alter that.
(Though you may be more certain of your expectation of those risks or costs - i.e. of what’s going on inside your head. But even here Lewandowsky is wrong in thinking that your inner thoughts can be logically determined by his musings about lognormal distributions.)
I’ll be interested to see how you demolish my argument. And even more interested to see how you’re going to persuade politicians that the less certain the risks, the more certain it is that they’re going to have to spend more.
Well, if you'll excuse the slight diversion, not all his writings were wrong exactly, although they could be a little hard to understand. I don't recall anyone accusing him of being stupid, just biased.
A decent analogy to climate change is medical care.
It is.
Modern medicine is reaching a point where people are taking tests they don't need, in the false belief that it makes them safer. They don't understand that, unless they have extremely good reasons for testing, the false positive rate is actually more dangerous than the disease they are testing for. And so people get expensive and dangerous treatment for diseases they don't have. Because they got "positive" test results.
Climate science is the science of false positives.
Basically if you do enough testing you will get false positives. So we see climate proxies tortured until they confess, and all the negative evidence ignored.
Worse, climate science is like those alternative medicine types. They can always find something that needs to be cured. No need for false positives, as every test yields some disease.
Just as you would never go to a doctor who needs to find you sick for reasons of his own, you should never trust an environmental activist to dispassionately observe the environment.
================================
Incidentally, it doesn't affect the argument in any way at all if the loss function actually has an optimum at some nonzero temperature change, as some others have found.
What if the function is C(T) = 0.284T(T-10)? Still convex. If that were true, then we should welcome a rise of 5 degrees, since that would minimise cost.
Of course it matters where the optimum is. How can you be so rude about the maths of others, yet not even understand rudimentary parabolas?
I think you are trying to baffle us with Maths if you are prepared to state that it doesn't matter where the optimum is. No-one with even half a brain thinks that is true.
Now I like Maths. I like Maths enough to teach it. Yet I know when it is not applicable.
It is not applicable when all the numbers are made up, based on a complex set of assumptions which under rudimentary examination turn out to be biases.
We need a much better understanding of every aspect of climate and its effects before we can hope to put solid numbers to it.
Your mathematical reasoning reminds me strongly of those super-clever hedge fund managers with their complicated models. Which turn out to be as often as not utterly wrong and very dangerous.
Meanwhile people who argue that you shouldn't try to make it all maths, but actually pay attention to the biases of the people involved have often been shown to be much wiser.
I'm with Ben Pile. Your maths is so much fakery to try to baffle the unwise.
Mark,
Regarding the location of the optimum, and its value - in your example, a warming of 5C would indeed minimise the cost (climate change would actually be a large benefit at that point). However, uncertainty in the future change around this value would still increase the expected cost (or decrease the expected benefit, if you prefer to put it that way). Which is just what I said at the outset... it just depends on the convexity of the cost function, which is one thing that all economists agree on, even though they disagree about many things.
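To make that concrete, here is a quick check of Mark's own function (the function name is mine):

```python
# Mark's proposed cost function C(T) = 0.284*T*(T-10): still convex,
# with its optimum (a net benefit) at T = 5. Symmetric uncertainty
# about that optimum still raises the expected cost relative to
# knowing the outcome for certain.

def cost2(T):
    return 0.284 * T * (T - 10)

print(round(cost2(5.0), 1))  # cost at the optimum: -7.1 (a benefit)

for t in (1.0, 2.0, 3.0):
    spread_cost = 0.5 * (cost2(5.0 - t) + cost2(5.0 + t))
    assert spread_cost > cost2(5.0)  # uncertainty erodes the benefit
    print(t, round(spread_cost, 1))
```

The expected benefit shrinks as the spread t grows, exactly as with the original quadratic; shifting the optimum moves the best-case outcome but not the effect of uncertainty.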
Geoff,
There is no “distribution of climate sensitivities”, just one true figure, which we don’t know, and a distribution of estimates.
Yes, it requires a Bayesian interpretation of probability. Glad to see you've worked out that much. It doesn't invalidate the argument at all. Some would say that all relevant uncertainties in any real-world case of decision making under uncertainty are epistemic.
James Annan
I didn’t say that the distribution of estimates invalidates the argument. (It weakens my mammary analogy, that’s all.) It’s something quite different which invalidates the argument. Read my comment again.
Excuse me if I don’t express myself very clearly. Simple statistics makes my brain hurt.
Lewandowsky’s error is this:
His synthetic graphs using simulated data aim to express increased uncertainty by increasing spread while holding the mean steady. This results necessarily in a fat tail and can produce a huge increase in the probability of a “catastrophic” high end result.
All he’s done is
1) reduce the degrees of freedom by chopping off the left hand tail in accordance with I know not what a priori knowledge (even the IPCC accepts the possibility of a negative value for sensitivity);
2) insisted that we know the mean value for sensitivity estimates is 3°C; and
3) flattened the curve to simulate increased uncertainty.
It’s the statistical equivalent of screwing the top on the toothpaste, squeezing hard on the middle, and being surprised when it splurts out the bottom.
This is how he comes to the counter-intuitive result that the less we know about sensitivity, the more likely it is to be big and bad.
I said the error was a logical one. It all depends I suppose on his reason for insisting on the necessity of a skewed lognormal distribution. He says: “I used a lognormal distribution because it has the fat-tail property that we know is an attribute of climate sensitivity”. So it’s an empirical decision, and the conclusions he draws are therefore contingent, and don’t follow necessarily from simple maths, as he claims.
Geoff, you are simply wrong, as you would soon discover if you actually did the arithmetic.
Nothing in the argument depends on using a skewed distribution.
Why don't you just try it for yourself, rather than trying to make up reasons why you think it has to be wrong? It's really not very complicated.
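To take that invitation literally, here is one quick Monte Carlo sketch (my own illustrative choices: a perfectly symmetric normal distribution of sensitivities with the mean pinned at 3C, and the quadratic cost function from the post — no skewed distribution anywhere):

```python
import random

random.seed(0)

def cost(T):
    # Quadratic damage function from the post (% GDP loss).
    return 0.284 * T ** 2

def expected_cost(mean, sd, n=100_000):
    # Monte Carlo expectation of the cost over a symmetric normal
    # distribution of sensitivities -- no skew involved at all.
    return sum(cost(random.gauss(mean, sd)) for _ in range(n)) / n

# Mean held fixed at 3C; only the spread (the uncertainty) increases.
costs = [expected_cost(3.0, sd) for sd in (0.0, 0.5, 1.0, 1.5)]
```

The expected cost rises with the spread even though the distribution is symmetric — analytically it is 0.284*(9 + sd²) — so no fat tail is doing any of the work.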
James Annan
If you don’t use a skewed distribution, and hold the mean at 3°C, increasing uncertainty means you get a fat tail at both ends. So you now no longer know whether doubling CO2 makes temperatures go up or down. That’s not the kind of uncertainty Landowsky wants.
Landowsky says he needs a skewed distribution, not me. His argument is logically false and also incoherent. He tries to maintain that it follows from simple maths, importing empirical assumptions about “things we know” while at the same time conducting statistical thought experiments simulating uncertainty. It’s a conceptual dog’s breakfast. Why do you defend it?
James,
Thanks. It seems that commenters at Keith's got the cue and are now questioning the necessity of convexity with more reasonable statements like:
> The cost function is convex because everybody knows it is convex because that is what everybody says, at least everybody that counts, or more precisely anybody that wants to publish or have research funds, and if you don’t publish or have funds then you don’t get to have a say. But if you play nicely you can sing along.
followed with the South Park theme.
And then I'm the one accused of going meta.
So, to make sure I understand the point, under what conditions would assuming non-convexity of the cost-function change the conclusion of L's argument?
***
Geoff,
You claim:
> His argument is logically false and incoherent.
If this is true, there is no need to appeal to any thought experiment: a simple argument would show this.
Where's that argument?
I suppose we can take Eli's formulation as evidence that L's argument is both coherent and valid.
Eli: "Everyone (and that includes Richard Tol, William Nordhaus and any climate scientist you can shake a stick at) agrees that the cost of a smaller increase in global temperature than we would expect based on our best understanding would be not very much."
This seems not right. Sufficiently smaller, sure. It's a sliding scale, complicated by tipping points.
geoffchambers: "So you now no longer know whether doubling CO2 makes temperatures go up or down."
Do try to keep the argument connected to physical reality.
Interesting that experts in climate science and those who attempt to understand and appreciate their work are not to be trusted to observe climate. Seems a bit twisted to me.
As far as can be seen, politicians on the right cannot be trusted to look at anything straight, as they use a lens of short-term profit and obfuscation first, last, and always. If you want an example of not noticing what is all around you, they are it.
Interestingly, the toys you rely on to distract you from world news about the developing climate crisis, which is now manifesting in a number of life- and well being-threatening ways are the products of these same scientists.
Maybe you'd prefer to return to the stone age, with a vastly greater population. How do you think you'd survive if nobody took thought for anything but profit? Even the economic theoreticians on whom you appear to base your magic thinking never envisioned the extremes to which you take their thinking.
The most basic physics principle I know is that you never get something for nothing.
Gerard Roe, one of Lewandowsky's main sources for his series on uncertainty, recently published a paper questioning whether wholly abstract 'simple maths' approaches to climate sensitivity like L.'s really do establish what L. called 'the inescapable implications of uncertainty'. See _Climate sensitivity: should the fat tail wag the policy dog?_ by Roe and Bauman. (It wasn't a response to L. It was published half way through the series.) They concluded that climate sensitivity's fat tail is irrelevant for policy purposes and that policy-makers should instead be looking at uncertainties in economic growth and the damage function.
Speaking of which, L. came up with what he thought was an important argument against temporal discounting: because a larger sensitivity means more rapid warming, the convexly greater damage made plausible by greater uncertainty about climate sensitivity would arrive more quickly than lesser damage associated with a low sensitivity. But would it? Some of the damage would arrive more quickly but wouldn't most of it arrive more slowly because of the longer equilibrium times required by a larger sensitivity? L. warned about being confused by equilibrium times. Am I confused or is he?
I also wonder whether he was strictly correct to apply a convex damage function to sensitivity rather than global temperature. If a higher equilibrium sensitivity means that an increasing fraction of the warming goes into the oceans (does it?) then its damage function might well be closer to a straight line.
Or not. I'm at the limits of my understanding here. (The answer is probably in the Roe and Bauman paper but much of that is beyond those limits.)
Re the precautionary principle, L. started the series by objecting to a weak (and wholly sensible) formulation of the PP by an Australian civil servant, which formulation he then converted with shameless aplomb into a mirror image of his own position - '[t]here is so much uncertainty that I'm certain there isn't a problem' - so that he could knock it down. It's the strawiest straw man I've ever seen.
neverendingaudit
Just like everyone else who has “replied” to me here, you ignore what I say and counter an argument I haven’t made.
I didn’t “appeal to any thought experiment”. I called Landowsky’s four-graph trick a “thought experiment”. He uses a Monte Carlo simulation to demonstrate the effect of increasing uncertainty as to the “shape” of the distribution curve of predictions of climate sensitivity (while claiming to “know” the mean of those predictions) in order to support his conclusion summarised at:
http://www.shapingtomorrowsworld.org/lewandowskyUncertainty_I.html
that “Uncertainty should make us worry more than certainty”. He claims to have shown “that in the case of climate change, uncertainty is asymmetrical and things are more likely to be worse, rather than better, than expected”.
You don’t need a mastery of Bayesian statistics to realise how ridiculous the last statement is. Any high school philosophy student - any fan of Monty Python - could do better than that:
- It’s going to be worse than you think, you know.
- No, I already think it’s going to be that bad.
- No, it’s going to be worse even than that.
- What, worse even than I think it will be now?
- How bad is that?
- As bad as you made me think it would be when you said it was going to be worse than I expected.
..and so on.
Steve Bloom
Same remark as my first sentence to neverendingaudit above.
On “keeping the argument connected to physical reality” - the IPCC envisages the possibility of feedbacks to CO2 warming being negative overall. Argue with them.
I’m extremely bewildered by the replies to my comments here. I expected some sneering at my ignorance of statistics, but not the total failure to counter my reasoning, or even to engage with it. You’re not really interested in Landowsky’s argument, let alone Ben Pile’s counter-argument, are you?
Geoff,
Thank you for your kind comment.
You now say:
> I didn’t “appeal to any thought experiment”
Here is what you say earlier:
> If you really want a medical analogy, try this. You’re running a mental asylum with 6 billion patients, and have ambitious plans to double its capacity.
I believe this analogy works like a thought experiment.
I also believe that this analogy, which works like a thought experiment, is the only place where I see something like an argument from you.
Perhaps I'm wrong, but I won't read the thread thrice just to make sure.
So I asked you.
Could you state your argument in a nutshell?
Just the argument. You can keep your appeal to Monty Python, papal ineffability, or any other images.
Since I am asking for your argument, that I am supposed to "counter an argument you haven’t made" makes little sense.
You claimed that L's argument is incoherent. Now you say it's ridiculous.
Please provide an argument for these claims.
Thank you for your concern,
willard
> Just like everyone else who has “replied” to me here, you ignore what I say and
willard
You’re quite right, my medical analogy at (12/6/12 5:01 PM) can be interpreted as a thought experiment, though I think of it rather as an ironic comment on James’s calculation (to three decimal places) of cost as a function of the square of temperature. I naturally thought you were referring to the only comment where I used the term “thought experiment” (13/6/12 10:45 PM).
My argument is at 13/6/12 4:58 AM, 13/6/12 2:51 PM, 13/6/12 4:39 PM, 13/6/12 9:46 PM, and 13/6/12 10:45 PM and of course in my imitation Python sketch at 14/6/12 4:22 AM, to which you are replying.
It refers to the four graphs in his first article, which he claims establish that “things are more likely to be worse, rather than better, than expected”. He claims that his findings are based on simple maths. Sometimes he seems to be aware of the absurdity of claiming that you can deduce claims about the real world from “simple maths”, sometimes not. Shifting from Bayesian to ordinary statistics - from “expectation of cost” to cost - helps to preserve the confusion.
Since his first argument about fat tails and “we expect it to be worse than we expect it to be” is based on a logical absurdity, all the rest of the argument here about concave cost functions is irrelevant.
I love James’s by-line at the top of the page about treading on the toes of giants. Landowsky’s apparent belief that you can derive truths about the real world from mathematics alone seems to derive ultimately from Pythagoras. In this case, Landowsky is not so much treading on Pythagoras’s toes, as talking out of a completely different part of his anatomy.
So first he says: "So you now no longer know whether doubling CO2 makes temperatures go up or down."
I objected, saying we do know that.
Then he says in response: "(T)he IPCC envisages the possibility of feedbacks to CO2 warming being negative overall."
Oops. Sloppy, sloppy.
With someone else, it might be worth dissecting the second claim, but not this guy.
Geoff,
Thank you for your response.
I'll take a peek later, more so that I notice that your example about women's breasts might help me understand your position.
But please do consider the fact that what you're saying so far sounds a lot like an appeal to incredulity.
Steve Bloom
You misquote me. I didn’t say: “So you now no longer know whether doubling CO2 makes temperatures go up or down.” I said (in reply to James): “If you don’t use a skewed distribution, and hold the mean at 3°C, increasing uncertainty means you get a fat tail at both ends. So you now no longer know whether doubling CO2 makes temperatures go up or down.”
Not the same thing at all.
This kind of argument is boring boring boring. Most of us grew out of it in High School. Some of us (most of the commenters here I imagine) went on to PhDs and successful scientific careers. Others, like me, (I haven’t opened a maths book for fifty years) just shake our heads and wonder what’s the point of it all.
No need to “dissect my claim” about the IPCC and overall negative feedback. Just go and look.
neverendingaudit
“...consider the fact that what you're saying so far sounds a lot like an appeal to incredulity”.
YES! That’s EXACTLY what I’m saying! (though some call it scepticism)
Seriously, there’s something I don’t understand. I’m not sure why Landowsky’s “uncertainty” graphs can’t leak off to the left as they leak off to the right. It’s pure ignorance on my part, and I’d appreciate some enlightenment. Is it a function of his lognormal distributions? Or is it an empirically derived assumption from the original Roe and Baker climate sensitivity graph? Or has he just arbitrarily decided that sensitivity can’t descend below a certain value? I’d really appreciate some expert advice on that point.
Geoff, the 'things are more likely to be worse, rather than better, than expected' was drawn from the first figure in the first article, not the four Monte Carlos. Like much in the series, it's clumsily worded but the underlying message isn't controversial - indeed it's pretty much tautological. What he means is that with a fat-tailed distribution the actual value is more likely to be greater than the mode than smaller than the mode (the most likely single value). That could almost work as a definition of a fat-tailed distribution, so he's just saying that skewness is skewness.
Are estimates of climate sensitivity fat-tailed? It seems so.
So is his statement correct? Yes - although not if, as so many insist, he was using the mathematical meaning of 'expectation' (the mean). But he wasn't. In this instance, 'expectation' meant the mode.
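That mode-versus-mean reading is easy to check numerically (a sketch with arbitrary illustrative parameters: the median of 3C and the sigma are my assumptions, not anything taken from the series):

```python
import math
import random

random.seed(1)

# For a lognormal distribution the mode sits below the mean, so random
# draws exceed the mode ("the most expected single value") more often
# than not.
mu, sigma = math.log(3.0), 0.5     # median 3C; sigma chosen for illustration
mode = math.exp(mu - sigma ** 2)
mean = math.exp(mu + sigma ** 2 / 2)

draws = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
frac_above_mode = sum(d > mode for d in draws) / len(draws)
# Analytically P(X > mode) = Phi(sigma): about 0.69 for sigma = 0.5.
```

So with any such skewed distribution, the outcome really is "more likely to be worse than expected", provided "expected" means the mode rather than the mean.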
Lewandowsky concludes: “There is only one way to escape that uncertainty: Mitigation. Now”
Does anyone have an idea how long this categorical recommendation will hold for? Is there a number for the years elapsed after this recommendation before observations start to set the expected value for doubling with more certainty and this statement could be revised?
TLITB, Ben Santer has noted that a good analogy here is beating up little old ladies. We'll have done enough to mitigate when the mitigation is complete. Note that we're already committed to plenty of adaptation along the way.
Steve Bloom said...
"We'll have done enough to mitigate when the mitigation is complete."
It is impossible to argue against that.
Don't worry, I am not arguing against mitigation, but I can hardly ignore that no matter how rigorous the math, it isn't always a sure driver of policy (I am not saying Lewandowsky says it is either). Sometimes we have to just use math as an observational tool. So my question about observation feeding back into the certainty, or lack of it, discussed by Lewandowsky remains.
The premise of uncertainty heightening the demand for certain actions has to be limited by relevant information becoming available doesn’t it? Surely we can all agree that a 4 degree increase is not going to appear only at the last year before doubling?
Geoff,
Thank you for your openness.
Since you are arguing from incredulity, here is a proposal to reconcile your position with J's this way:
1. You do not need to directly argue against L's argument. All you need is to claim that L's conclusion is incredible. In that case, all you need to hold is that L's conclusion is so incredible that something must be wrong with the argument. I believe that you are hinting at the gap between knowledge and reality.
2. You do not need to underline that L's argument is not pure maths. Looking at L's argument shows this is obvious. The two premises are: (1) a convex function borrowed from economics and (2) a data model based on climate projections. These are quasi-empirical premises: the first is conventional, the second simulational. The math lies in the application of the function to the model, which L claims is simple.
3. Here L questions the intuition according to which "we shouldn't take actions which have a high severity the other way”, as some political talking head said, if we believe L's report. L basically claims that, taken at face value, this intuition makes no mathematical sense.
4. I believe we can concede that L is perhaps going a bit too far in his evaluation of his argument. But that does not touch his basic point. This basic point seems to agree with his own intuition about decision under uncertainty.
5. So far we mainly have a clash of intuitions.
Pile's argument seems to amount to saying that L's intuition rests on the precautionary principle. His argument against the PP amounts to saying that the principle does not apply to itself. This reflexivity argument is an old tack in philosophy: if it were that strong, relativity theory would be incoherent, since relativity can't apply to itself. And that's notwithstanding the fact that if you hold PP, you don't even need L's argument!
Please think about that. Ben's argument is a pile of angered talking points.
6. Skepticism looks a lot like arguing from incredulity. But an important difference is that skepticism is an overall principle which guides our epistemic practices, while arguing from ignorance is, well, an argument.
Here's another way to illustrate the difference:
L claims that P. G claims that P is incredible: G can't believe that P. Therefore G concludes that P can't be true, plausible, the result of some formal sleight of hand, or whatnot.
Contrast this with:
L claims that P. G asks on what basis is P asserted. L offers an argument: a conclusion C following some assumptions A. G can question the choices of A. G can ask if the conclusion follows from A. Or G can simply ask how to interpret C.
In the first instance, we have an incredulous chap who simply can't believe anything his interlocutor says.
In the second instance, we have a skeptical chap who forces his interlocutor to come up with the best argument the exchange can produce.
Which process do you prefer?
Incredulity arguments generally lack credibility, because they have the curious tendency to be backed up by other arguments from incredulity.
7. Even if the incredulity is justified, an argument stays on the table as long as it's not replaced by a better one. Therefore, we need another quasi-empirical counter-argument.
8. To return to your bra example, I would personally dislike more risks, unless I can make sure that enough people like you will take care of the right side of the distribution.
I hope I am not teaching you anything here, except perhaps for #8.
Bye,
willard
TLITB, we might reasonably expect uncertainty (at least regarding the global mean change) to decrease gently over the coming years/decades. There's little prospect of a rapid breakthrough, and global mean is only one aspect of the change anyway.
A 4C rise (if the sensitivity is really 4C) will only appear many years *after* doubling of CO2, due to the thermal capacity of the system.
Tom Fiddaman,
I understand why you say:
> Because he lacks a model, Pile can say all kinds of internally inconsistent things, and make attributions about Lewandowsky, which cannot be precisely refuted.
but I believe that's false. Take for instance:
> All other things being equal, things are the same, no matter no matter what we think about them, or how certain we are about what we think about them.
Summoning Bishop Butler's "Every thing is what it is, and not another thing" with the addition of a ceteris paribus clause ("all things being equal") and introducing the always interesting concept of sameness lead to interesting ontological quandaries.
There is also this interesting analysis:
> This part of the sentence puts the degree of uncertainty into a necessary (i.e. it cannot be otherwise) relationship with what we have anticipated, and the outcome of events. The condition of uncertainty itself multiplies the anticipated result, to yield an impact of greater magnitude. This is an absurd claim, because the condition of uncertainty has no bearing on things.
Pile reads necessity in a very strong sense, i.e. metaphysically, and then finds out that it makes no sense.
These examples should show that Ben Pile is recycling the Chewbacca defense, as explained by WebHubTelescope over there:
http://judithcurry.com/2012/05/26/doubt-has-been-eliminated/#comment-204076
While I readily concede that precisely refuting this pile of angered talking points takes time, it is by no means impossible to do.
Sure, Willard, but by then the recipient is pretty much guaranteed to not be listening.
“A 4C rise (if the sensitivity is really 4C) will only appear many years *after* doubling of CO2, due to the thermal capacity of the system”
Thanks for the response.
I know there was some revision of the 5 degree figure and it was later considered too conservative a cut-off for being "catastrophic", but even so, when we talk of doubling aren't we required to add in the temp rise already experienced since the pre-industrial period - that’s why I used 4 degrees assuming we had already observed travelling about 1 degree down the road.
I admit I had assumed this was referring to an effect that was observable in corresponding time. Now you mention that time *afterwards* is to be considered, I wonder how that changes the considerations in the mathematical exercise. For instance, I know of Carl Wunsch the ocean guy, and that he talks of centuries before heat is cycled – that’s the only place I think heat can be hidden, in the oceans.
But that is a side point - I guess my main point is that the exercise by Lewandowsky, whilst it could be considered interesting in helping to ground the requirement for engendering alarm in the minds of people not pre-disposed to it currently, suffers from not being something static that can be considered cast in stone to be used as a permanent benchmark.
I admit to following the thought process only so far before I realise my motivation is diminished by my overwhelming feeling of the limitation of its application to reality. It seemed that the math can only be observed not applied. So when I cut to the end of the study and see Lewandowsky concludes:
“There is only one way to escape that uncertainty: Mitigation. Now”
I then respond and think; well that's not going to happen is it?
Just considering the usual suspects in the developing world who are not plugged into the discussion at this abstract level who will be outweighing mitigation for at least (here I pluck a figure out of the air) - a decade. We can't help the super tanker stop for at least a decade before we can start the process of mitigation and so then we will have almost certainly moved 10 years beyond Lewandowsky’s "Now". So at least we must have different information about the PDF by then.
I think whether anyone likes it or not the whole math exercise here can only have use as a “wait and see” and not really a promotional tool for a “start and do”.
BTW My last comment was a response to James Annan at 15/6/12 8:19 AM
neverendingaudit
Pretending to believe that my argument is entirely based on my emotional reaction won't work. Here is our previous exchange:
neverendingaudit: “...consider the fact that what you're saying so far sounds a lot like an appeal to incredulity”.
Me: “YES! That’s EXACTLY what I’m saying! (though some call it scepticism).
Seriously, [...]”
It’s pretty clear I think that the bit with the capital letters and exclamation marks just before the word “Seriously” wasn’t meant seriously. You pretend to think it is. The only possible reaction I can have to such a tactic is anger, irony, derision etc. This will confirm you in your conviction that your opponents’ arguments are emotional. You win.
And you think that’s clever.
You then call Landowsky’s disagreement with the conventional wisdom “a clash of intuitions”, accuse Ben of basing his argument on anger, and, out of nowhere, bring in the “argument from ignorance”.
Why not just come out and say it? Everyone who disagrees with you is irrational, motivated by one or other of the seven deadly sins, and probably possessed by the Devil. Is this some kind of exorcism?
Your characterisation of “true” scepticism (point 6) describes exactly what I have been doing in the thread above. I have asked on what basis Landowsky’s proposition is asserted, questioned the choice of assumptions, and asked how the conclusions follow from the assumptions. I’ve also asked for some clarification of part of the argument (though nobody has replied). Your turning the comparison between incredulity and scepticism into an incomprehensible alphabet soup serves only to obscure your own argument.
You haven’t begun to answer my criticisms of Landowsky.
I was fascinated by this exchange above in the thread:
TLITB said...
Lewandowsky concludes: “There is only one way to escape that uncertainty: Mitigation. Now”
Steve Bloom said...
TLITB, Ben Santer has noted that a good analogy here is beating up little old ladies. We'll have done enough to mitigate when the mitigation is complete.
Ben Santer’s known propensity for fantasising about meeting people in dark alleys and beating the crap out of them led me astray for a moment. I really thought that Santer and Bloom think that mitigation is like beating up old ladies, and that you have to go on doing it until it’s done.
Is he trying to say that emitting CO2 is like beating up old ladies, and we have to go on cutting emissions until there’s none left? Have I got that right? Is that what passes for a nice knock-down argument here in Wonderland?
TLITB, there's no need to necessarily express things in terms of some theoretical far distant equilibrium state. The argument applies just the same to shorter term changes: let's say we expect a warming of 0.2C/decade over the next 3 decades. This outcome would incur a particular cost (not excluding the possibility that it is a a "negative cost", ie net benefit). If instead we have uncertainty such the warming might be 0.15 or 0.25, then this uncertainty will increase the expected cost.
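Running those decadal numbers through the same quadratic damage function used earlier in the thread (purely as an illustration; any convex cost function gives the same direction):

```python
def cost(T):
    # Quadratic damage function from the post (% GDP loss).
    return 0.284 * T ** 2

decades = 3
certain = cost(0.2 * decades)          # warming known to be 0.6C in total
uncertain = 0.5 * cost(0.15 * decades) + 0.5 * cost(0.25 * decades)
# uncertain > certain: symmetric uncertainty about the trend raises the
# expected cost, exactly as in the equilibrium-sensitivity examples.
```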
Lots of economic analysis also considers the "wait and see" option, and all of it that I'm aware of finds that it's better to take some action now. Here's one that comes to mind:
http://www.sciencemag.org/content/306/5695/416.summary
geoffchambers,
Here's the first question you ask on this thread:
> TomFP was making a interesting point about primate behaviour. What are you doing?
Here's the second question you ask:
> Do commenters here really believe that formulating statements about climate change in terms of T and T+t and dt adds information ? To normal human beings it looks like theologians arguing in Latin in order to impress the mathematically challenged masses. It’s not working, is it?
Here's the third:
> Your dismissal of Ben’s critique of Landowsky and of the comments which follow rests on the assumption that Ben (and we in the peanut gallery) can’t follow such advanced mathematical reasoning. This assumption is wrong. We’re not challenging your equation, or your point about the average of T+t and T-t being greater than T for a concave function. We do wonder - at great length - why you do it.
Does this sound like scepticism as portrayed in #6?
I believe this simply shows incredulity toward people who take seriously a formal argument, and take the time to explain it.
This incredulity is expressed here:
> Some think the algebra is a cover for a secret plan for world domination; others think you’ve found the formula to turn base modelling into gold; others that you’re just deluded practitioners of post-modern numerology.
How one comes up with a statement like:
> [L's] argument is logically false and incoherent.
without challenging the premises or the inference of the argument is left as an exercise to the reader.
***
These questions have nothing to do with the argument. Reading back the thread, I find that your first argument against the argument itself is the thought experiment I mentioned in my first comment here. Please recall your first reaction:
> I didn’t “appeal to any thought experiment”.
After I showed you where we can read this thought experiment, you finally conceded that I can read. This was progress, and interesting considering the first questions you asked on the thread, and more so the editorial comments at the end of your latest comment about how it sucks when people who disagree with you look irrational.
You then followed with your interpretation of L's argument:
> Since his first argument about fat tails and “we expect it to be worse than we expect it to be” is based on a logical absurdity, all the rest of the argument here about concave cost functions is irrelevant.
I do believe that this argues from incredulity: it rewords L's argument in the most absurd way and then asks who on earth can seriously believe that.
Please recall the conclusion about your bra thought experiment:
> As a lifestyle choice it has its attractions. So does catastrophe-based environmentalism. As an example of rational thought, it stinks.
Again the incredulity, and again an editorial comment about rationality.
Now, could you really consider that it *does* look like you're arguing from incredulity?
***
I'm sorry I did not get your irony mark. I really thought you knew what you were writing. Perhaps we could then ask your question: what the hell do you think you're doing here?
neverending
The first question of mine was rhetorical, a result of being insulted by Annan. The second was an ironical observation on the mode of reasoning favoured here. The third wasn’t a question, but an attempt to correct the prevalent assumption that the critics of Landowsky can’t understand statistics... and so on
And on and on and on. All to attempt to prove that a commenter on a blog that criticised an Australian psychologist who had wandered into the realms of statistical musings on mitigation (a blog that had been much praised by J Curry, I’ve just discovered) is arguing from incredulity.
It’s incredible.
I can’t believe that someone of your obvious intelligence and erudition can waste their time trying to analyse the psychological sources of my disbelief. I came here to discuss Landowsky. You want to talk about my incredulity.
A few hours ago you said: “While I readily concede that precisely refuting this pile of angered talking points takes time, it is by no means impossible to do” as if you were prepared to keep up your Socratic sniping for ever. Now you ask me: “what the hell do you think you're doing here?”
Hand-wrestling with a hagfish in a lunatic aquarium.
Shall we just leave it?
Geoff,
Thank you for your last comment. And I mean it. As I did the last times I told you.
On the Internet, I try to mean what I say. We do not have all the cues we have when we have faces and gestures and voice. I do not always succeed, but I do try to make sure the conversation gets going anyway.
I do acknowledge that your reaction is natural: there are so many non-conversations going on on blogs that no one could dispute that fact. More so now that you are telling me that you feel victimized by the expression "appeal to incredulity". I am truly sorry you feel that way.
I do not know any other way to express what I see when I read so much disbelief. I try not to read into others' minds. I did not mean it in any other way than the usual way in argumentation theory. That's the effect that your sarcasms have on me.
Please consider that you're here asking why people can't understand Ben Pile, when he himself editorializes so much about the rationality of environmentalists, and when you deride the seriousness with which formal guys consider what you say is a platitude.
Both James and you agree about that. Both James and you disagree about what this implies.
This conversation would benefit from sticking to that single point.
But I do accept your offer to leave it at that.
Thank you for making me realize the distinction between incredulity and skepticism. I never thought about it this way. Even if it only applied to tone and manners, it's an important discovery for me.
Please consider that James is almost welcoming 5C: so he might not be as tainted as other irrational enviros...
Goodbye,
willard
This latest defense of AGW extremism, that those who point out how uncertain it is are somehow wrong, and that uncertainty justifies decisive, expensive action like that demanded by the AGW community, is a silly defense.
It is the unwarranted certainty of AGW believers that is the real problem. It is unwarranted certainty that has driven stupid ideas like massive subsidies for windmill and solar power. It is unwarranted certainty of doom that has scared people worldwide with stories of AGW. It is unwarranted certainty in AGW predictions which has led people to ignore evidence that in fact the climate is not doing anything dramatic or dangerous.
Frontiers, the effect of uncertainty on cost has nothing to do with CAGW, other than as an application.
Uncertainty has costs associated with it.
There are even textbooks on the topic. This one never even mentions global warming.
Grapple with that, then come back about defenses of climate extremism if you wish.
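The cost-of-uncertainty point is easy to check numerically. Here is a minimal sketch in Python, assuming the quadratic DICE-based cost function C(T) = 0.284 T² quoted in the post; it reproduces the post's expected-cost figures for symmetric two-point distributions of sensitivity:

```python
def cost(t):
    """Damage as % GDP loss for warming t, quadratic DICE-based form."""
    return 0.284 * t ** 2

def expected_cost(temps, probs):
    """Expected cost over a discrete distribution of sensitivities."""
    return sum(p * cost(t) for t, p in zip(temps, probs))

# Known sensitivity of 3C: cost of the certain outcome
print(round(cost(3.0), 1))  # 2.6 (% GDP)

# Symmetric uncertainty with the same mean of 3C: expected cost rises
for lo, hi in [(2, 4), (1, 5), (0, 6)]:
    print(round(expected_cost([lo, hi], [0.5, 0.5]), 1))
# 2.8, then 3.7, then 5.1 (% GDP)
```

By convexity, every one of these expected costs exceeds the 2.6% cost of the mean outcome, and the gap grows with the spread.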
Well, now that's interesting.
Here's what Ben Pile promised:
> You can go away, and you [sic.] posts will remain.
http://neverendingaudit.tumblr.com/post/25231421098
I did go away. All the thread after my first comment has been deleted.
At Judy's, WebHubTelescope recalled an interesting conversation at the Oil Drum:
http://www.theoildrum.com/node/9212
Following his appearance there, Ben Pile tweeted:
> And so, inevitably, @TheOilDrum removes my comments. Intellectual cowardice is not running out, even if the oil might be.
The uncertainty of Ben Pile's assertion seems to have dropped down a bit.
By chance I keep copies. I could repost the interesting bits of our exchange, if you're interested, James. For instance, there is an intriguing argument from ignorance that emerges at the end:
> [What if the “expected costs of adaptation and mitigation” are utterly wrong] is the important point, and one which you know (but our assailant [yours truly] doesn’t seem to have bothered trying to understand) is explored often here.
http://www.climate-resistance.org/2012/06/reinventing-precaution.html#comment-65131
Appealing to ignorance is not skepticism. We need a distinction other than incredibilism.
What if we did improve the world for nothing? What if nothing exists? What if only the comments that Ben Pile deleted exist?
Best,
willard
By the way, Frontiers, the thread you guys keep losing is this: "Since uncertainty costs money, there is no benefit to the skeptic/incredulous or ignorant mind to argue that there is more uncertainty than actually exists."
On the other hand, when you see an activist fanning the flames of uncertainty (it's pretty common to see them argue that the uncertainty in climate sensitivity is larger than reported), they are doing so because they are sensible enough to understand that this greater uncertainty works on behalf of their rhetoric.
willard, is it just your comments that went, or what? I haven't had much time to keep up over the last few days.
James,
More than 15 posts were deleted from a discussion between me and Ben Pile, and between Ben Pile and someone we could consider his sidekick. Only one other commenter contributed a comment, to introduce the appeal to ignorance.
In fact, the remark about "simple mediocrity" in the latest comment in the thread related to a discussion of comments that were written before it.
Mr. Pile was wise enough to delete all his comments referring to me. His ad hominems, however satisfying they were in the heat of the moment, would not have impressed the historians and philosophers looking to ponder the legacy of the climate resisters.
Human decision-making isn't rational. It may be easier and cheaper to fix things if it's agreed there's a high sensitivity than if there's a low one. Extreme crises (e.g. wars) usually make it easier to push through extreme social changes.
Supposing very strong evidence of a 5 degree sensitivity emerged tomorrow. My guess is that almost everyone would get behind a rapid transition to nuclear power in virtually all industrial countries. Even in post-Fukushima Japan, people might accept nuclear - with its attendant risks - if the alternative is imminent extreme climate change.
With lower sensitivity there's more room for argument. There are - or there seem to be - more options. People talk about investing in green technologies we haven't invented yet, and suggest plans based on the assumption that these technologies will of course work and be economic. That feels like the energy policy equivalent of climate skepticism (i.e. the belief that climate sensitivity is very low).
The bottom line is this: the hydrocarbons of the industrial revolution finally enabled humans to escape the Malthusian Trap. If we're going to stop using them and not return to Malthusian scarcity, we need replacements with comparable or better energy density. Unfortunately, many people who correctly oppose global warming denial also engage in energy density denial.