Comments on James' Empty Blog: "Once more unto the breach dear friends, once more..." (feed last updated 2021-02-27)

With regards to the OP title:

http://www.nearingzero.net/screen_res/nz089.jpg
— skanky, 2008-06-09 14:14

Yoram;

<I>"It is a procedure which, if practiced consistently, guarantees a low rate of mistakes (i.e., of ruling out the true value of an unknown parameter)."</I>

That makes sense. If a researcher generated 95% confidence intervals throughout their work, they expect to have excluded true values ~5% of the time.
— Lazar, 2008-06-06 22:57

> what is a sane way to interpret confidence intervals?

It is a procedure which, if practiced consistently, guarantees a low rate of mistakes (i.e., of ruling out the true value of an unknown parameter).

The important thing to note here is that the CI is the procedure, rather than the result of the procedure in a specific case.
About the result in a specific case nothing can be said.

A CI is therefore a commitment by the practitioner to do things in a certain way - a commitment that is undertaken before the data is observed.
— Yoram Gat, 2008-06-06 06:23

Crandles, if priors are personal beliefs, then constructing a prior by a <A HREF="http://www.iit.edu/~it/delphi.html" REL="nofollow">Delphi process</A> might be a good way to depersonalize it. Otherwise why is your choice better than mine?

James, I do exaggerate a bit, but the point about where to draw the line remains.
— EliRabett, 2008-06-03 23:02

Thank you.
— David B. Benson, 2008-06-03 20:07

Blogged <A HREF="http://julesandjames.blogspot.com/2007/12/chylek-on-sensitivity.html" REL="nofollow">here</A>. It is obviously wrong, and happens to have been published in the same special issue (guest editor: P Chylek) that spawned the <A HREF="http://julesandjames.blogspot.com/2008/05/commen-on-schwartz-final-version.html" REL="nofollow">Schwartz nonsense</A>.
I'm not aware of any attempt at a peer-reviewed comment on it, but it will certainly not influence the field.
— James Annan, 2008-06-03 02:27

What is your take on

<A HREF="http://www.agu.org/pubs/crossref/2007/2007JD008740.shtml" REL="nofollow">Limits on climate sensitivity derived from recent satellite and surface observations</A>

which states "We find that the climate sensitivity is reduced by at least a factor of 2 when direct and indirect effects of decreasing aerosols are included, compared to the case where the radiative forcing is ascribed only to increases in atmospheric concentrations of carbon dioxide."

I am assuming I don't read climatologese all that well...
— David B. Benson, 2008-06-03 02:15

Eli
>"Given that, is there only an arbitrary way of separating the two (1979, 1980, 1981, etc??).

If so prior construction becomes an art rather than an algorithm"

Priors are meant to be personal beliefs that differ from person to person. So even if two people agreed to use expert opinion to 1979 their priors would be different.

You are welcome to have a go at it using a different split.

Anyway yes it is more of an art of extracting your own beliefs than an algorithm.

Does this mean that you 'can easily argue that assigning uniform probability from 0 to 100 C is ok'?

Simple answer - No.

If I said that I believed there was a 99% chance that aliens would come and steal the moon tomorrow would this make that a reasonable expectation?
I think you would dismiss me as mad much more readily than you would accept that as a reasonable belief.

Where precisely to draw the line between reasonable beliefs and unreasonable ones may not be very clear. However if there is enough of a gap then the precise position does not need to be determined. I doubt you would disagree with me saying any belief that the odds are greater than 25% is an unreasonable belief (other than to suggest a lower percentage could be substituted).

Uniform probability from 0 to 100 C implies a 50% probability of sensitivity over 50C. That seems crazy to me. Could any intelligent and well-informed person in 1979 reasonably believe that the chance of sensitivity being over 50C is greater (let alone 5 times greater) than the chance of sensitivity being between 0 and 10C?
— crandles, 2008-06-02 17:01

"what is a sane way to interpret confidence intervals?"

I don't know. Well, I know the technically correct interpretation (eg an interval generated according to a random process such that p% of intervals so generated will contain the true parameter value), but I also know that almost everyone misinterprets them as probability intervals in practice. There is a huge discussion on the Wikipedia page about this (<A HREF="http://en.wikipedia.org/wiki/Talk:Confidence_interval#I_suggest_a_major_rewrite" REL="nofollow">here on down</A>). The technically correct interpretation seems rather useless in practice...and the term "confidence" seems to sometimes be used as a con-trick in full knowledge that the unwary will indeed interpret it incorrectly as "probability". So in general I am not a fan of them,
although they do have the advantage of simplicity.

Eli,

If you are going to persist in claiming that a uniform prior represents "ignorance", you are going to have to address the question of how one can be ignorant about x but knowledgeable about 1/x, or vice-versa. That doesn't correspond to any plausible definition of "ignorant" IMO (in either technical or common usage).
— James Annan, 2008-06-02 07:17

Crandles, my point is that the amount of information in the prior affects the result. Given that, is there only an arbitrary way of separating the two (1979, 1980, 1981, etc??).

If so prior construction becomes an art rather than an algorithm and one can easily argue that assigning uniform probability from 0 to 100 C is ok, even if the oceans boil (priors are supposed to be ignorant!). My preference would be to separate models and observational information, using the former to build the prior.
— EliRabett, 2008-06-01 17:05

James...
beginners question...
what is a sane way to interpret confidence intervals?

Eg. Statistical Analysis, Kachigan p. 141;

"In other words, we are 95% sure that the true mean weight of the ball-bearings, had we measured every single one of the day's production, would be somewhere between 149.21 and 150.39 grams. More technically, we are 95% sure that the <I>procedure</I> for creating the obtained confidence interval would produce an interval encompassing the population mean.
For the <I>actual</I> interval produced, the true mean either <I>is</I> or <I>is not</I> in it, so strictly speaking we cannot say there is a .95 probability that the mean falls in the interval. This is a moot point from a practical standpoint, but has importance in a theoretical context in which no quantitative probability-type statement is allowable for a specific interval. We will adhere to the more practical view that "95% sure" or "95% confident" are meaningful common sense statements with respect to specific intervals, and are as legitimate as statements which invoke "odds" or phrases such as "substantial assurance" to circumvent the probability issue."

... in other words, is "confidence" used as a hand-waving way of saying "probability", because saying "probability" would be wrong?
... if it is <I>likely</I> (but unstated) that a person intended to convey a probability when they wrote a confidence interval... should the interval be interpreted as a confused frequentist statement, or as bayesian with a uniform prior (regardless of what one thinks the individual "intended")?
... if the frequentist frame does not give a location for the population parameter, how can a frequentist frame be useful when interpreting confidence intervals?
— Lazar, 2008-06-01 14:42

Martin is going in the right direction. Consider just the 'endings' in Jared Diamond's <I>Collapse</I>, not to mention the (other) ones in books by historians, pre-historians and archaeologists.

It is rather akin to an avalanche, but alas, without the physical laws governing avalanche behavior.

Indeed just consider the current world food crisis brought about (mostly) by bad policies.
— David B. Benson, 2008-05-31 23:21

James: double counting, by implicitly using observational info anyway? Or by multiple models sharing physics, or even code, in sub-components?

James and Steven, CO2 doubling on its own very unlikely, I agree. But combined with peak oil and other ecological stresses for a 10B population?
— Martin Vermeer, 2008-05-31 11:32

Martin,

That's effectively what the original NAS report (Charney) did with the two main models around at the time (Hansen and GFDL). So it's certainly not an unreasonable idea! One still needs to think about what sort of tail to add outside the range of models, though, as it would not (IMO) be reasonable to claim a priori that S cannot lie outside that range. And using modern models brings up the question of double counting, which the 1979 NAS report avoids much more straightforwardly.

Steven, there is no reasonable scenario I can think of in which 2xCO2 causes the end of civilisation (unless society is so brittle that any minor disruption triggers a nuclear holocaust, in which case the next flu pandemic or peak oil will do the job first).
In general though, your question points towards issues of nonlinear <A HREF="http://en.wikipedia.org/wiki/Utility" REL="nofollow">utility</A> - this is a standard economic issue but one that we preferred not to touch on in this paper.
— James Annan, 2008-05-31 10:39

This comment has been removed by the author.
— Martin Vermeer, 2008-05-31 04:29

One thought that has crossed my mind: would the distribution of sensitivities from the 22 or so models included in the AR4 report constitute a valid prior? These models are supposed to be based on the physics only, untuned to observations.
— Martin Vermeer, 2008-05-31 04:27

James: great paper. I have one worry, though. If there's a chance of a 100% world GDP loss (which would represent human extinction or the collapse of civilization or whatever), then from a utilitarian point of view it won't do to take the expected value, as it seems like the effects of a 100% loss would be permanent in a way that other losses probably wouldn't be.

So my question to you is -- do you have any idea how to estimate the probability of a doubling of CO2 causing permanent collapse/extinction?
I'd put it at significantly less than 1% but I'm not sure how much less exactly.
— steven0461, 2008-05-30 15:56

Martin & James Annan --- Thank you.
— David B. Benson, 2008-05-29 20:25

Well, it seems that Chris has answered most points pretty well - thanks!

Yoram, it is not in dispute that one can generate an arbitrarily large range of possible posteriors by selecting extreme enough priors. The question is what sort of priors are reasonable, and no-one, not even the most extreme advocate of uniform priors, has ever suggested that 370C is a sensible value. But even in that case (lognormal in L) the posterior does not actually appear to be that unreasonable. This seems to me like a strong demonstration of robustness, not a show of arbitrariness and methodological weakness.

I don't know what you found to disagree with me about confidence intervals - the range you present in your later comment is precisely the 95% probability interval that Forster and Gregory presented (for this, they explicitly assumed a uniform prior in L, noting that this choice was unconventional) and your point about wider intervals extending to L=0 is precisely what I meant with my statement "However, some might go to infinity "and beyond", especially at the higher confidence levels."
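The "p% of intervals so generated will contain the true parameter value" reading of confidence intervals that recurs in this thread can be checked numerically. A minimal sketch (every number here - the true mean, sigma, and sample size - is invented purely for illustration):

```python
import random
import statistics

# Illustration of the "CI as procedure" reading: a 95% interval
# procedure, repeated many times, should contain the true parameter
# in about 95% of the repetitions.  All numbers here are invented.
random.seed(1)                     # deterministic for reproducibility
TRUE_MEAN, SIGMA, N = 150.0, 3.0, 25
Z95 = 1.96                         # two-sided 95% point, known sigma
trials, hits = 10_000, 0
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    m = statistics.mean(sample)
    half = Z95 * SIGMA / N ** 0.5  # half-width of the z-interval
    if m - half <= TRUE_MEAN <= m + half:
        hits += 1
print(hits / trials)               # close to 0.95
```

Note that this says nothing about any single realised interval, only about the long-run behaviour of the procedure - which is exactly the distinction Yoram and the Kachigan quote draw above.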
Of course a confidence interval will just be routinely misinterpreted as a probability interval anyway...

David, there could be a case for using Arrhenius' 6C, but note that this was based on a calculation that is known to have substantial inaccuracies (and does not need doubling and redoubling). The canonical 3C can be viewed as a more careful and credible version of the same calculation. But there is also the issue Chris emphasises of what information is contained in the prior versus likelihood - anyone arguing for a really wide and high prior based on ancient history will have to also consider how to take account of all that we have learnt in the meantime, not just the last couple of decades of some satellite observations that I used.
— James Annan, 2008-05-29 13:46

Yoram,

Ah yes, I misread your formula. What you do is precisely right.

Still, the centre of the normal distribution at 2.3 maps to 0.43, well away from the middle of the CI for S. For what it's worth.
— Martin Vermeer, 2008-05-29 12:46

Martin,

For confidence intervals you don't need to make assumptions about the distributions of the unknown parameters: they don't have any.

I used the FG measurement of L (with its associated Normal uncertainty) as it appears in the A&H paper to generate a CI for L.
One convenient property of CIs is that they allow easy transformation of the parameters. The CI for S = 1/L is derived as:

CI_S = { 1 / l : l \in CI_L }.
— Yoram Gat, 2008-05-29 05:59

Yoram, yes, if you assume that S is gaussian. More sensible IMHO is to assume L gaussian, and work from there (but then you have to first convert those numbers which are apparently stated for S :-( )

It will become asymmetric but that would be realistic.
— Martin Vermeer, 2008-05-29 05:15

It seems hard to avoid the conclusion that using a confidence interval is much more sensible than arguing about what counts as prior information.

A 95% CI for S is
S \in 1 / (2.3 +- 0.7 * 1.96) = [0.27, 1.08].
— Yoram Gat, 2008-05-29 05:05

David, I seem to remember that Arrhenius did include water vapour in his 6C estimate (and by the way found also the polar amplification in his original paper; fantastic).

As for the oceans, they don't affect sensitivity, only the time scale on which it becomes visible (as it is defined as <EM>equilibrium</EM> sensitivity).

crandles is right that if the prior is based on the state of knowledge in 1979, then it must rule out runaway (or even very large S).
— Martin Vermeer, 2008-05-29 04:05

>"Well yes, I would agree. For a prior this seems reasonable: I know one very Earth-like planet that has suffered runaway feedback. Actually two, counting Snowball Earth. Where do you philosophically draw the line between prior and observed?"

James has made clear in the paper that he has used a prior from expert opinion as at 1979 and updated with data entirely post 1979.

Now you might want to argue that to arrive at that prior you need a more ignorant prior and update it with knowledge to 1979. But this really isn't necessary - James is arguing that the precise details of his prior don't matter too much as long as something plausible is used. So the need to go back to an ignorant prior and expert knowledge to 1979 isn't necessary.

A sensitivity of over 90C would presumably lead to the oceans boiling. I suspect a sensitivity of 40C would be enough to restrict life to only near the poles several times in Earth's history and we would have known about that happening by 1979. This should be ruled out at the 2.5% level.

I don't see much wrong with a median of 3.7C but a 2.5% chance of S greater than 370C is crazy unless you are trying to get back to a prior with only knowledge to 1850 or earlier or something (James has indicated the split between data used to update and all other knowledge doesn't need to be timewise).

What is more important to ask is how well scientists in 1979 could have predicted the warming rate over the next 30 years for the greenhouse gas levels that have actually existed.
If they could have done very well, would this imply some double counting of the (predicted/actual) data to limit the remaining uncertainty?

Eli, why ask about 'minimum amount of information' when it is clear that all information must be used either in the prior or in the data used to update, else you end up with something that isn't a credible probability distribution?
— crandles, 2008-05-28 22:19
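The interval arithmetic in the exchange between Yoram and Martin above can be reproduced in a few lines. A minimal sketch using only the values quoted in the thread (L normal with mean 2.3 and standard deviation 0.7, two-sided 95% z = 1.96):

```python
# Endpoints of Yoram's 95% CI for the feedback parameter L (Normal,
# 2.3 +/- 0.7), pushed through the map S = 1/L.  Because the map is
# monotone decreasing, the endpoints swap, and the interval for S
# comes out asymmetric, as Martin notes.
mean_L, sd_L, z = 2.3, 0.7, 1.96
lo_L, hi_L = mean_L - z * sd_L, mean_L + z * sd_L
lo_S, hi_S = 1 / hi_L, 1 / lo_L          # CI_S = {1/l : l in CI_L}
print(round(lo_S, 2), round(hi_S, 2))    # the thread's [0.27, 1.08]
print(round(1 / mean_L, 2))              # 0.43: the centre of L maps
                                         # well off-middle of the S interval
```

This also makes Martin's side point concrete: 1/2.3 = 0.43 sits far from the midpoint of [0.27, 1.08], so the transformed interval is not symmetric about the transformed centre.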