Comments on James' Empty Blog: "Roger gets it right!" (comment feed by James Annan, last updated 2024-02-15)

EliRabett (2008-05-17 23:44):
As I recall, the climate models include high latitudes and not all the temperature reconstructions do, so indeed they are measuring different things.

William M. Connolley (2008-05-17 17:22):
Bi - the usual method for combining the IPCC models has been simply to average them. One of my swansongs (Connolley and Bracegirdle) was to look at weighting them by their veracity.

bi -- International Journal of Inactivism (2008-05-17 10:39):
Speaking of which, I think I now have a question...

James,

When combining models into an ensemble, do you simply give equal weight to each of the models, or do you adjust the mixture weights using e.g. Expectation-Maximization? Does this question even make sense? Thanks!

-- bi, International Journal of Inactivism (http://frankbi.wordpress.com/)

bi -- International Journal of Inactivism (2008-05-17 10:06):
Belette:

It's the same silly error as in Douglass et al. (2007).

The term "independent models" makes no sense in statistics. If there are two models modelling the same phenomenon, they're either the same, or they're mutually exclusive.

(You can combine models to form a new mixture model, which is again different from all the other models; but the analysis is totally different from pretending that the separate models are "independent" observations of some sort.)

bi -- International Journal of Inactivism (2008-05-17 05:37):
Belette: archived (http://www.webcitation.org/5XsJZDhPX).

-- bi, International Journal of Inactivism (http://frankbi.wordpress.com/)

William M. Connolley (2008-05-16 23:02):
Roger is digging himself deeper into his hole. Look at http://sciencepolicy.colorado.edu/prometheus/archives/prediction_and_forecasting/001431the_helpful_undergra.html, quick, before he takes it down again. He's confused the distribution from the models with various estimates of the obs trend.

Fangz (2008-05-16 22:16):
I don't see what the problem is, Tom C. It seems obvious that the less specific a set of predictions is, the more difficult it is to invalidate. So yes, consistency doesn't necessarily mean that your model is meaningful, especially over such short terms. But I don't see how it's conceptually flawed.

Tom C (2008-05-16 20:26):
James -

What you and Roger are arguing about is not worth arguing about. What is worth arguing about is the philosophy behind comparing real-world data to model predictions. I work in the chemical industry. If my boss asked me to model a process, I would not come back with an ensemble of models, some of which predict an increase in a byproduct and some of which predict a decrease, and then claim that the observed concentration of byproduct was "consistent with models". That is just bizarre reasoning, but, of course, such a strategy allows for perpetual CYAing.

The fallacy here is that you are taking models, which are inherently different from one another, pretending that they are multiple measurements of a variable that differ only due to random fluctuations, and then doing conventional statistics on the "distribution". This is all conceptually flawed.

Moreover, the wider the divergence of model results, the better the chance of "consistency" with real-world observations. That fact alone should signal the conceptual problem with the approach assumed in your argument with Roger.

Yoram Gat (2008-05-16 15:17):
> You'd think that being able to cherry pick outside of a 2 sigma envelope shouldn't be that hard.

He is only cherry picking one endpoint of the interval (the starting time - the end time is constrained to be the present). Therefore, unless he is picking very short periods, he doesn't have that much wiggle room.

I ran a simulation and got about a 30% chance of obtaining a p-value of 5% or less when cherry picking over forty years.

James Annan (2008-05-16 14:12):
Gavin confirmed via email that the actual distribution of trends from 7 years of model data is N(0.20, 0.24), which is obviously consistent with my estimate of N(0.19, 0.23) for 7.25 years. These are all pro-rated as 10-year trends for consistency.

Chuck - exactly. Even with cherry-picking, there is absolutely nothing unusual about the last few years. It is strikingly ordinary, which is pretty obvious from just looking at it (http://julesandjames.blogspot.com/2008/04/has-global-warming-stopped.html) and hardly needs a formal analysis. The only mildly surprising thing at all in the last 30 years was the 1998 El Nino, which is about 2.5 sd from the trend line (about a 1% event if we assume Gaussianity, although assuming Gaussianity for the extreme outliers is a pretty dodgy assumption anyway).

So, faced with this stupendously normal data which is worth absolutely nothing to the denialists, they actually make it worth less than nothing by producing desperately wrong analyses.

bi -- International Journal of Inactivism (2008-05-16 10:01):
"Clearly, there is a strong argument to be made"

In Pielkeworld, when "there is a strong argument to be made", it means that Pielke can't actually be bothered to (horrors!) actually make the argument. All he needs to do is draw some nice little diagrams and then invite you to feel his desired conclusion. Quo errat demonstrator.

And when you show him to be wrong, he'll just ignore you, or he'll go "you're right, but..." and then move on to the next bellyfeeling exercise. All the while pretending that his bizarre pronouncements haven't all been falsified or shown to be unfalsifiable.

-- bi, International Journal of Inactivism (http://frankbi.wordpress.com/)

C W Magee (2008-05-16 07:34):
You'd think that being able to cherry pick outside of a 2 sigma envelope shouldn't be that hard. One of 20 randomly selected intervals ought to disprove* these statistics. Make that one out of 40, since we want to fail on one particular side of the curve.

* "Disprove", in this case, means "demonstrate ignorance of probability".

Yoram Gat (2008-05-16 06:11):
I now realize that you got it right. It's -0.1 C/decade, so it's only -0.0725 C for the 7.25-year period. This pushes the z value down to 1.19 and the p-value up to 23.4%.

Yoram Gat (2008-05-16 05:59):
James, I think you made a mistake here.

If I understand your assumptions, they are:

1. A linear temperature trend of 0.19 C per decade.
2. Independent intervals, implying a variance that grows linearly in time: (0.21^2) C^2 per decade.

This implies that the distribution of the temperature change over 7.25 years has mean 0.725 * 0.19 = 0.138 C and std. dev. sqrt(0.725 * 0.21^2) = 0.179 C. The z-score of -0.1 is thus abs(-0.1 - 0.138) / 0.179 = 1.33, and the associated two-tailed p-value is 18.3%.

By the way, this calculation ignores the fact that Pielke cherry picks the starting point of the interval to maximize the z-value. This can be factored into the calculation by looking at the distribution of the maximum z-value over a period instead of the distribution of the z-value at a fixed point in time.
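[Editor's note: Yoram Gat's corrected figures are easy to reproduce. The sketch below follows his stated assumptions (a 0.19 C/decade trend, a 0.21 C/decade standard deviation for 10-year changes, variance growing linearly with time, and the observed -0.1 C/decade pro-rated to 7.25 years); `math.erf` stands in for a normal table.]

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Assumptions from Yoram Gat's comments (all temperatures in C):
trend_per_decade = 0.19   # assumed linear warming trend
sd_per_decade = 0.21      # sd of 10-year changes; variance linear in time
years = 7.25              # length of the cherry-picked period
obs_per_decade = -0.1     # observed trend over that period

mean = trend_per_decade * years / 10.0   # expected change over 7.25 yr, ~0.138
sd = sd_per_decade * sqrt(years / 10.0)  # sd of that change, ~0.179
obs = obs_per_decade * years / 10.0      # -0.0725 C over 7.25 years
z = abs(obs - mean) / sd
p = 2.0 * (1.0 - norm_cdf(z))            # two-tailed p-value
print(f"z = {z:.2f}, two-tailed p = {p:.1%}")
```

This prints z = 1.18 and p = 24.0%, agreeing with the 1.19 and 23.4% in the comment up to the rounding of intermediate values.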
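[Editor's note: Yoram Gat's "about 30% when cherry picking over forty years" claim can be checked with a small Monte Carlo, sketched here under the same random-walk-with-drift assumptions. Details he did not state (whether very short intervals count, annual versus monthly data) shift the exact fraction, so the result is only indicative.]

```python
import random
from math import sqrt

random.seed(42)

TREND = 0.019              # assumed trend, C per year (0.19 C/decade)
SD_YEAR = 0.21 / sqrt(10)  # per-year sd so that 10-year changes have sd 0.21
WINDOW = 40                # the cherry-picker may start anywhere in the last 40 years
TRIALS = 20000

hits = 0
for _ in range(TRIALS):
    change, worst_z = 0.0, 0.0
    for n in range(1, WINDOW + 1):
        # accumulate the change over the last n years (random walk with drift)
        change += random.gauss(TREND, SD_YEAR)
        z = abs(change - TREND * n) / (SD_YEAR * sqrt(n))
        worst_z = max(worst_z, z)
    # did some choice of start year give a two-tailed p-value of 5% or less?
    if worst_z >= 1.96:
        hits += 1

print(f"cherry-picked 'significance' in {hits / TRIALS:.0%} of trials")
```

Because the candidate intervals are nested partial sums of the same noise, the 40 z-values are strongly correlated, which is why the cherry-picker's success rate stays far below 40 independent tries would suggest.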
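[Editor's note: on bi's weighting question and Connolley's answer, the difference between the two combination schemes is just a choice of weights. A minimal sketch, with entirely made-up model trends and skill scores (Connolley and Bracegirdle's actual weighting was more involved than this):]

```python
# Hypothetical numbers for illustration only: trends (C/decade) from five
# models, plus invented skill scores from some validation exercise.
model_trends = [0.15, 0.22, 0.19, 0.27, 0.17]
skill = [0.9, 0.5, 0.8, 0.3, 0.7]   # higher = better match to observations

# The usual IPCC-style combination: a plain, equally weighted average.
equal_mean = sum(model_trends) / len(model_trends)

# A veracity-weighted alternative in the spirit of Connolley and Bracegirdle.
weighted_mean = sum(w * t for w, t in zip(skill, model_trends)) / sum(skill)

print(f"equal weights: {equal_mean:.3f}, skill weights: {weighted_mean:.3f}")
```

Equal weights give 0.200 here; the skill weighting pulls the mean toward the better-scoring models, giving 0.187. An EM-style mixture fit, as bi suggests, would instead learn the weights from data rather than fixing them in advance.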