After huge strings of comments, most of which can be found here, here and here, I'm still not sure that I've made much progress in explaining how Bayesian probability works, and how it differs from the frequentist view.
I'll have one last go before I give up....
A man picks an apple out of a barrel and gives it to me. I weigh it on a set of moderately accurate scales, which give a reading of 105±5g (as previously, this uncertainty is assumed to be Gaussian, with the quoted magnitude being 1 standard deviation). What is the probability that the mass of the apple is less than 100g? Less than 105g?
To a frequentist, these questions are simply ill-posed, and no answer is possible. We don't know the distribution of apple masses in the barrel from which this one was sampled.
To a Bayesian, there is an infinite range of possible priors that could in principle be used to describe the initial uncertainty in the apple's mass. One obvious choice would be to take it as a uniform distribution over a wide range (e.g. 0-200g). In that case, the posterior pdf for the apple's mass is the Gaussian distribution N(105,5), and the two probabilities requested are 16% and 50% respectively. Another plausible prior would be to use our prior knowledge of the apple being a common medium-sized fruit, and reckon that the overall distribution of apple masses is roughly Gaussian of the form N(125,20). In that case, the posterior pdf of the individual apple is marginally different from before, at N(106.2,4.85), and the two probabilities are 10% and 40% for the apple being less than 100g and 105g respectively.
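The calculation behind both answers is just the standard conjugate Gaussian update: precisions (inverse variances) add, and the posterior mean is the precision-weighted average of prior mean and measurement. A minimal sketch reproducing the quoted numbers, assuming only the stated priors and the 105±5g reading:

```python
import math

def norm_cdf(x, mu, sigma):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def gaussian_posterior(prior_mu, prior_sd, obs, obs_sd):
    """Conjugate update: Gaussian prior x Gaussian likelihood -> Gaussian posterior."""
    w_prior = 1.0 / prior_sd ** 2   # precisions add
    w_obs = 1.0 / obs_sd ** 2
    post_var = 1.0 / (w_prior + w_obs)
    post_mu = (prior_mu * w_prior + obs * w_obs) * post_var
    return post_mu, math.sqrt(post_var)

# Flat prior over 0-200g: posterior is essentially the measurement pdf N(105, 5)
print(norm_cdf(100, 105, 5))   # ~0.16
print(norm_cdf(105, 105, 5))   # 0.50

# Informative prior N(125, 20) combined with the 105 +- 5 measurement
mu, sd = gaussian_posterior(125, 20, 105, 5)
print(mu, sd)                  # ~106.2, ~4.85
print(norm_cdf(100, mu, sd))   # ~0.10
print(norm_cdf(105, mu, sd))   # ~0.40
```

(The flat-prior case ignores the truncation at 0 and 200g, which is utterly negligible five standard deviations away.)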
For each of those answers there is, of course, a perfectly straightforward frequentist interpretation: consider sampling apples from the U(0,200) distribution and weighing each; among those whose measured weight comes out at 105g, the distribution of true masses has the N(105,5) shape. And similarly for the Gaussian prior. So what's the difference? The only difference is that someone with a Bayesian approach to probability will be prepared to actually pick a prior (which might be one of the above, or something else), and give an answer. Someone who claims to take a frequentist approach to probability cannot do so. And indeed if a frequentist experiment is actually performed in which apples are repeatedly sampled from the barrel and weighed on scales with ±5g random error, then the resulting distribution of those which are measured to be 105g will not coincide exactly with the Bayesian's answer, unless he got very lucky and actually guessed the distribution of apples in the barrel correctly.
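That frequentist reading can be checked directly by simulation. The sketch below samples true masses from each prior, adds the ±5g Gaussian scale error, and inspects the true masses of the apples that happen to read close to 105g (the ±0.5g acceptance window is a hypothetical choice, just to get a usable number of hits):

```python
import random
import statistics

random.seed(1)

def conditional_true_masses(prior_draw, n=200_000, measured=105.0, tol=0.5):
    """Sample true masses from a prior, add N(0,5) scale noise, and keep
    the true masses of apples whose reading lands within tol of `measured`."""
    kept = []
    for _ in range(n):
        true_mass = prior_draw()
        reading = true_mass + random.gauss(0.0, 5.0)
        if abs(reading - measured) < tol:
            kept.append(true_mass)
    return kept

# Flat prior over 0-200g: conditional distribution should look like N(105, 5)
flat = conditional_true_masses(lambda: random.uniform(0.0, 200.0))
print(statistics.mean(flat), statistics.stdev(flat))

# Gaussian prior N(125, 20): conditional distribution shifts to ~N(106.2, 4.85)
gauss = conditional_true_masses(lambda: random.gauss(125.0, 20.0))
print(statistics.mean(gauss), statistics.stdev(gauss))
```

The two conditional distributions differ because the priors differ, which is exactly the point: the "frequentist" answer depends on which population the apple is imagined to have come from.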
Note, however, that in my two examples above, the answers are not really so different from each other. Unless the prior is something really pathological (like each apple in the barrel actually being less than 100g) then an apple which measures 105±5g is "likely" to be greater than 100g ("likely" being the IPCC definition of about 60-90%). So even though the Bayesian won't be exactly "correct", he'll probably be near enough right for his answer to be a useful basis for decision-making. And note that even if you decide you need a better answer, and try to take a more accurate measurement, you still can't get round the fact that you always need to choose a prior, which will affect the answer (if only marginally). In the real world, you might need to make a decision - shall I buy the apple or reject it as potentially undersized? - with the available data and no opportunity to improve your observation. Without a Bayesian approach to probability, you don't have the tools to address that question at all.
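The buy-or-reject decision itself can be sketched as a comparison of expected losses under the posterior. The costs below are made up purely for illustration (a ruined pie presumably costs more than a good apple passed over); the posterior numbers are the ones from the N(125,20) prior above:

```python
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Posterior for the apple under the N(125, 20) prior (from the post)
post_mu, post_sd = 106.2, 4.85
p_big_enough = 1.0 - norm_cdf(100, post_mu, post_sd)   # ~0.90

# Hypothetical losses, for illustration only
cost_undersized = 3.0      # bought it, pie ruined
cost_rejected_good = 1.0   # rejected a perfectly good apple

# Buy when the expected loss from buying is smaller than from rejecting
buy = (1 - p_big_enough) * cost_undersized < p_big_enough * cost_rejected_good
print(p_big_enough, buy)
```

With no posterior probability to plug in, there is simply nothing to compare, which is the point of the paragraph above.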
Here endeth the lesson.
19 comments:
I'd like to be there when you try out your Bayesian reasoning on the grocer.
James, I would also like to point out that all your "priors" were distributions, which have an intrinsic frequentist (or measure) interpretation. They would be theories, except for the fact that you permit yourself the freedom to make them up as you go along, seemingly arbitrarily.
And one last time:
You need to have a theory, in your example a knowledge about apples, to establish a reasonable "prior".
But once you have a theory, the distinction between Bayesian and frequentist is trivial (as you wrote yourself on CIPs blog).
And if you do not have a theory (could apples weigh 5000kg?) it would be a mistake to just assume a "prior", just so that you can continue with your analysis.
PS: No more comment(s) from me about this. Enough is enough.
So, I assume from this continued rejection of Bayesian ideas that if you had a recipe which called for a minimum of 100g of apple, you would simply have no idea how to estimate whether the apple was big enough, no matter how accurate your scales were. In fact, the whole question is ill-posed to you two, since we have no "theory".
I don't believe that you act like this in the real world. If you did, you would never be able to make any decisions at all.
Wolfgang,
You need to have a theory
Actually I need to make a decision (is the apple big enough, my apple pie needs a 100g apple to taste nice). I don't have a theory, nor do I have perfect measurements. I still need to make a decision. There is no null decision, no middle way between accepting the apple and rejecting it. There are no more measurements available. How would you decide?
Amazingly enough, James, billions of people who never heard of Bayes or Bayesian statistics make these decisions every day. I'm pretty sure I make decisions on apples just like the other 6.8 billion or so people. Frankly, I'm amazed that not all the Bayesians have starved to death.
CIP,
Amazingly enough, James, billions of people who never heard of Bayes or Bayesian statistics make these decisions every day
And if they want to make the decisions carefully, weighing up all the evidence, then Bayesian methods enable them to formalise the process that they are subconsciously approximating.
If they don't weigh up all the evidence carefully, do you think their decisions are likely to be better, or worse, than if they do? It is easy to find examples on the web where the careless and unthinking interpretation of data will lead to poor decision making, due to inadequate consideration of the prior.
I still don't think I've seen you suggest any method by which you would make a decision in the case described, or any like it. Don't be shy now. Don't just say "you can't do that". Tell me what you would actually do if you wanted to buy a >100g apple. Do you simply ignore the readout on the scales and just toss a coin instead?
If each apple has the same price, I might try a few on the scale before buying. Much more likely, I will pay and go, unless the apple looks rotten or something. I will never take into account the fact that the scale has a random error, partly because I know systematic errors are more likely, but mainly because I don't believe there is any plausible way I could improve my bargain by doing so. I will bet that a) you don't either, or b) you don't do the shopping, or c) you are loco.
Dear James,
you are free to use your subjective Bayesian tools not justified by any measurements or tested theories, but you can't expect that others will consider these subjective statements to be objective science.
As you say correctly, we can't make your reasoning without the Bayesian notions. The reason is that science prohibits to make any conclusions in these situations, except for charlatans who can always make them.
The strength of your belief in Muhammed or in the weight of an apple being below 100 grams are just subjective things about your brain. We can study your brain scientifically, much like when we do experiments with mice, and scientifically predict what you will say if someone asks you what the weight of an apple is.
But in that case, we study your brain, not the apple itself. Of course, when we study your brain scientifically, we again use the frequentist notions only, as opposed to unjustifiable priors and other dogmas - especially dogmas that are guaranteed to be wrong.
When a scientist is looking for the right answers, she may be uncertain in the middle and design subjective strategies how to deal with the uncertainty, but as long as she is uncertain, her knowledge is not settled science.
The numbers only become scientifically meaningful after sufficient "frequentist" statistics has been accumulated to trust them to within the accuracy at which they were measured. This is manifestly false for all of your Bayesian fairy-tales. Your Bayesian probabilities are always numbers that differ from 0 or 100 percent because you did not do your job right. Whenever you do these things right, all of your numbers will go either to 0 or 100 percent (e.g. the probability that the apple is below 100 grams), and any intermediate answer is rubbish.
The only way how intermediate probabilities may make sense is if they express the ratios of events in the frequentist counting.
Best
Lubos
Lumo,
When a scientist is looking for the right answers, she may be uncertain in the middle and design subjective strategies how to deal with the uncertainty, but as long as she is uncertain, her knowledge is not settled science.
Well that's a fine strawman you've built yourself there.
Who said anything about the estimate being "settled science"? Of course it is not "settled science" to say the apple has a 10% chance of weighing less than 100g. It is simply an up-front representation of our uncertainty about the weight of the apple. If the weight of the apple matters, then we either use such a partially-subjective estimate, or we just make an arbitrary decision that does not take account of all the information which we have.
I can't believe that you, or anyone else, would seriously suggest the second course of action as being preferable on either practical or theoretical grounds. Indeed I am confident that you pay attention to the weather forecast at least occasionally, despite the prediction being "not settled science". Presumably you think the only properly scientific weather forecast is "I'll tell you tomorrow" :-)
I note with amusement that you are simultaneously more hard-line and more realistic than your accolytes, firstly in asserting that "science prohibits to make any conclusions in these situations" and secondly in clearly acknowledging that it is in fact reasonable for a scientist to "design subjective strategies how to deal with the uncertainty". It's sad that you have such a limited vision of valid scientific behaviour, but I can only imagine that string theory's loss is the rest of the world's gain.
Dear James,
I see that you shifted the discussion once again. Now you want to know how to make decisions under uncertainty.
I will limit myself to this one comment.
In your example of the weather prediction, most people are able to make decisions, because they encountered the same or similar situation many times and they simply counted how many times the weather man was correct and how many times he was wrong. If most people are uncertain about the quality of weather predictions it is because they fail to count patiently and just make up some subjective uncertainty instead.
I see that you use the strategy of name calling quite frequently. I have seen this strategy several times before, usually when people were uncertain about their ideas.
But in my experience this strategy is not very effective and usually backfires.
Best,
Mr. Accolyte
James - If you think I'm a Lumo accolyte, your decision making procedures are based on some pretty faulty priors.
In your example of the weather prediction, most people are able to make decisions
But how did the "scientists" make the forecasts in the first place? According to Lubos, science prohibits to make any conclusions in these situations, except for charlatans who can always make them.
I think it's very sad that so many people have such a narrow view of science. IMO weather prediction is a shining example of science at its best, demonstrating huge benefits to society and tested on a daily basis. But to Lubos, a forecast that predicts a 70% chance of rain is merely a demonstration that the forecaster didn't do their job right:
Bayesian probabilities are always numbers that differ from 0 or 100 percent because you did not do your job right. Whenever you do these things right, all of your numbers will go either to 0 or 100 percent
WRT decision-making, I suppose if knowing the value of the unknown parameter in question has no possible implications for any future decisions, then there is no particular need to apply ideas such as bayesian probability. In fact there's no reason to even measure it in the first place. Most scientific research concerns things that matter to us, at least a bit.
>> In your example of the weather prediction
> But how did the "scientists" make the forecasts in the first place?
James, you really managed to come full circle 8-) Congratulations!
As we told you, a "probability of 70% chance of rain tomorrow" either means that the scientists ran different simulations and 7 out of 10 gave rain.
Or it means 70% of the area will experience rain tomorrow or it can mean: In previous cases (with the same or very similar initial conditions) it was raining in 7 cases out of 10 the next day.
You can just continue reading your previous posts and our comments.
You no longer need us to comment here 8-)
Best greetings,
Mr. Not-thinking-Wannabee-Acolyte
PS: acolyte is actually with one c only.
Wolfgang,
Where did they get their set of 10 initial states?
From a Bayesian prior (also informed by observations, of course).
Lumo put it very clearly in his last post, although I obviously disagree with what he considers to be doing your job "right":
Bayesian probabilities are always numbers that differ from 0 or 100 percent because you did not do your job right. Whenever you do these things right, all of your numbers will go either to 0 or 100 percent
It is unarguable that the uncertainty in a weather forecast is epistemic: if we knew the initial conditions better, and knew the behaviour of the atmosphere better, there is in principle no limit to how accurately we could forecast. The occurrence of rain tomorrow is not a random variable, merely an unknown one. We have no option but to use a Bayesian interpretation of probability here, and whether or not we make up an ensemble of model simulations is entirely a matter of practicality (it is only recently that the use of ensembles has become widespread). I could make an ensemble of hypothetical apples too, and see how many are <100g. But I have to choose them from a prior which necessarily has a subjective element. Has it not occurred to you that if there was such a thing as an objective answer, then different weather forecasters would never disagree?
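The "ensemble of hypothetical apples" can be made concrete by likelihood weighting: draw apples from a chosen prior, weight each one by how plausible it makes the 105±5g reading, and read off the weighted fraction below 100g. A sketch, using the two priors from the original post:

```python
import math
import random

random.seed(2)

def fraction_under(threshold, prior_draw, reading=105.0, scale_sd=5.0, n=100_000):
    """Likelihood-weighted ensemble: draw hypothetical apples from a prior,
    weight each by the Gaussian likelihood of the observed reading, and
    return the weighted fraction of ensemble members below `threshold`."""
    total = under = 0.0
    for _ in range(n):
        mass = prior_draw()
        # Normalising constant cancels in the ratio, so an unnormalised
        # Gaussian weight is enough
        w = math.exp(-0.5 * ((reading - mass) / scale_sd) ** 2)
        total += w
        if mass < threshold:
            under += w
    return under / total

# Weighted fraction of <100g apples under the two priors from the post
p_flat = fraction_under(100, lambda: random.uniform(0.0, 200.0))
p_gauss = fraction_under(100, lambda: random.gauss(125.0, 20.0))
print(p_flat, p_gauss)   # roughly 0.16 and 0.10
```

The two ensembles give different answers because the priors differ, and there is no prior-free way to generate the ensemble in the first place.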
But since you have already had this explained to you in great detail, and have apparently been unable to either understand or accept it, at this point I conclude that you are too stupid or stubborn to learn anything about the subject. Please go away and don't come back. Maybe some others passers-by have learnt something.
Is it too late for me to point out what all right-thinking people know to be true?
emacs.
There. That's better.
Or am I on the wrong thread?
vi!
Hee, hee!
But seriously, folks.
The cognitive gulf between Bayesians and frequentists is either a crack in the sidewalk or the Grand Canyon, depending on how it's approached -- suggesting to me that there are some other issues motivating the discussion.
And I won't even try to add value to that discussion until I have read all of E.T. Jaynes' Probability Theory: The Logic of Science.
Anything by Jaynes is good, and best of all, a lot of it is freely available through the intertubes - Google really is your friend.