OK, we've all had our fun, but perhaps it is time to put an end to it. There's obviously a simple conceptual misunderstanding underlying Roger's attempts at analysis, which some have spotted but others don't seem to have, so I will try to make it as clear as possible.
The models provided a distribution of predictions about the real-world trend over the 8 years 2000-2007 inclusive. However, we have only one realisation of the real-world trend, even though there are various observational analyses of it. The spread of observational analyses is dependent on observational error and their distribution is (one hopes) roughly centred on the specific instance of the true temperature trend over that one interval, whereas the spread of forecasts depends on the (much larger) natural variability of the system and this distribution is centred on the models' estimate of the underlying forced response. Of course these distributions aren't the same, even in mean let alone width. There is no way they could possibly be expected to be the same (excepting some implausible coincidences). So of course when Roger asks Megan if these distributions differ, it is easy to see that they do. But what is that supposed to show?
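To put some numbers on this, here is a minimal Monte Carlo sketch in Python. The values used for the forced trend, the natural variability and the observational error are purely illustrative assumptions, not the real ones; the point is only how the two spreads relate.

```python
# A minimal Monte Carlo sketch of the point above. The numbers are purely
# illustrative assumptions (not the real trends or uncertainties).
import numpy as np

rng = np.random.default_rng(0)

forced_trend = 0.2    # assumed underlying forced response (C/decade)
natural_sd   = 0.2    # assumed spread due to natural variability (C/decade)
obs_error_sd = 0.03   # assumed error of an observational analysis (C/decade)

# Ensemble forecast: each run samples the forced response plus its own
# realisation of natural variability, so the spread is wide.
forecasts = forced_trend + rng.normal(0.0, natural_sd, size=1000)

# The real world gives us ONE realisation of that variability...
true_trend = forced_trend + rng.normal(0.0, natural_sd)

# ...and the observational analyses scatter narrowly around that single value.
analyses = true_trend + rng.normal(0.0, obs_error_sd, size=5)

print("forecast spread (2sd):", 2 * forecasts.std())
print("analysis spread (2sd):", 2 * analyses.std())
print("true trend inside forecast 95% range?",
      np.percentile(forecasts, 2.5) <= true_trend <= np.percentile(forecasts, 97.5))
```

The two printed spreads differ by roughly an order of magnitude, as they must, yet the single true trend still falls inside the forecast's 95% range about 95% of the time - and that is the consistency check that actually matters.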
People tend to get unreasonably hot under the collar in discussions about climate science, so let's change the scenario to a less charged situation. Roger, please riddle me this:
I have an apple sitting in front of me, mass unknown. I use some complex numerical models to make a wild guess and estimate its mass at 100±50g (Gaussian, 2sd). I also have several weighing scales, all of which have independent Gaussian measuring errors of ±5g. I have two questions:
1. If I weigh the apple once, what range of observed weights X is consistent with my estimate of 100±50g?
2. If I weigh the apple 100 times with 100 different sets of scales (each set of scales having independent errors of the same magnitude), what range of observed weight distributions is consistent with my estimate for the apple's mass of 100±50g? Hint: the distribution of observed weights can be approximated by the Gaussian form X±5g for some X. I am asking what values for X, the mean of the set of observations, would be consistent (at the 95% level) with my estimate for the true mass. (A rough numerical sketch of both questions follows below.)
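For what it's worth, here is a back-of-envelope sketch of both answers, assuming (since the wording above doesn't say) that the ±5g scale error is also quoted at 2sd:

```python
# Back-of-envelope answers to both questions, assuming the +/-5g scale error
# is quoted at 2sd like the 100+/-50g estimate (the wording doesn't say).
import math

est_mean, est_sd = 100.0, 25.0   # prediction: 100 +/- 50 g at 2sd
scale_sd = 2.5                   # one scale: +/- 5 g at 2sd (assumption)

def consistent_range(n_weighings):
    """95% range of the mean observation X consistent with the prediction."""
    mean_obs_sd = scale_sd / math.sqrt(n_weighings)  # error of the mean of n readings
    combined_sd = math.hypot(est_sd, mean_obs_sd)    # errors add in quadrature
    return est_mean - 2 * combined_sd, est_mean + 2 * combined_sd

print("Q1 (one weighing):  X in roughly %.1f-%.1f g" % consistent_range(1))
print("Q2 (mean of 100):   X in roughly %.1f-%.1f g" % consistent_range(100))
```

Both ranges come out at roughly 50-150g: piling on observations shrinks the measurement error, but leaves the dominant prediction uncertainty untouched.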
You can also ask Megan for help, if you like - but if so, please show her my exact words rather than trying to "interpret" them for her as you "interpreted" the question about climate models and observations. You can reassure her that I'm not looking for precise answers to N decimal places to a tricky mathematical problem so much as an understanding of the conceptual difference between the uncertainty in a prediction and the uncertainty in the measurement of a single instance. It is not a trick question, merely a trivial one.
Or, dressing up the same issue in another format:
If the weather forecast for today says that the temperature should be 20±1C, and the thermometer in my garden says 19.4±0.1C, then I hope we would all agree that the observation is consistent with the forecast. Would that conclusion change if I had 10 thermometers, half of which said 19.4±0.1C and half 19.5±0.1C? Of course, in this case the distribution of observations is clearly seen to be markedly different from the distribution of the forecast. Nevertheless, the true temperature is just as predicted (within the forecast uncertainty). If there is anyone (not just Roger) who thinks that the mean observation of 19.45C is inconsistent with the forecast, please let me know what range of observed temperatures would be consistent.
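Again, a quick numerical check, treating the quoted ± values as 2sd intervals (an assumption on my part - the convention isn't stated above):

```python
# The same arithmetic for the thermometer example, treating the quoted
# +/- values as 2sd intervals (an assumption; the convention isn't stated).
import math

forecast_mean, forecast_sd = 20.0, 0.5   # forecast: 20 +/- 1 C at 2sd (assumed)
thermo_sd = 0.05                         # thermometer: +/- 0.1 C at 2sd (assumed)
n_thermometers = 10
obs_mean = 19.45                         # mean of the ten readings

combined_sd = math.hypot(forecast_sd, thermo_sd / math.sqrt(n_thermometers))
z = abs(obs_mean - forecast_mean) / combined_sd
print("discrepancy = %.2f sd -> consistent at 95%%? %s" % (z, z < 2))
```

The discrepancy comes out at about 1.1sd, comfortably inside the 95% range, and adding yet more thermometers only nudges the combined uncertainty back towards the forecast's own ±1C.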