NOTE: I’ve recently posted a (I hope) clearer and more carefully thought out update of the below thoughts; find that here: “Monty Hall Problem and Variations: Intuitive Solutions.” I’ve disabled comments on this post.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/- Ω -\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Part I: The Monty Hall Problem
The Monty Hall Problem (explained below) strikes most people as counterintuitive. The problem is often illuminated by restating it with 100 doors instead of 3 doors. This makes many people go, “Ah, now I get it,” and concede that their intuition must be wrong. Nevertheless, for many the 3-door scenario continues to be counterintuitive.
This leads many to ask, “Why don’t I understand the Monty Hall Problem?” Like this person at Quora: Why doesn’t the “Monty Hall problem” make sense to me? The usual response is to try to demonstrate to the person why the correct answer is correct—to try to get it to click. But, even when this works (sometimes it seems to), it doesn’t address why the problem’s solution feels so counterintuitive, nor why the standard wrong answer feels so right. I think I have an idea of what’s going on.
First, a summary of the problem.
Suppose you’re playing a game in which you’re faced with three closed doors, numbered 1, 2, and 3. You’re told by the game-master (who always tells the truth) that one of the doors conceals a car, and the other two doors each conceal a goat. You’re not told which door conceals which item. (The game-master need not know which door conceals which item, by the way, though the game goes more smoothly if she does. To be clear, though, it must be understood that the game-master will reveal a goat in all instances of the game.* See the End Note, however, for how the game-master’s knowing could affect how a player should guess.) The arrangement of goats and car will not be changed throughout the course of the game.
(*NOTE: See the Addendum at the end of this post for some comments about the significance of the game-master knowing where the car is, which I wrote following a discussion about that topic in the comments section. It also features yet further explanations of the basic Monty Hall problem. At some point, I’ll thoroughly revise this post to make it clearer and to give better explanations and diagrams etc. of the problem. I also think I now have a better sense of why people struggle with this, and how to deliver an intuitively satisfying explanation, as I’ve spent a lot more time thinking about probabilistic intuitions.)
You are now given the opportunity to guess which door conceals the car. If you guess correctly, you win the car. You pick a door; let’s say door #1. The game-master then opens one of the remaining closed doors; let’s say door #2. Doors #1 and #3 remain shut.
You’re now given the opportunity to stay with door #1, or switch to door #3. Once you switch, you will not be able to switch again. The question now is: Are you better off switching?
Most people say no. They believe there’s a 1/2 chance of winning if they stay or switch, so it makes no difference. But this is wrong. There’s actually a 1/3 chance of winning if you stay and a 2/3 chance of winning if you switch.
Here’s a basic explanation of why. (Below, I’ll distinguish between the probability of correct guesses on the one hand, and of what’s actually behind the doors on the other hand; but this will do for now.)
When you initially guessed door #1, there was a 2/3 chance that you chose a goat and a 1/3 chance that you chose the car. This means that there is a 2/3 chance that the car is behind one of the other two doors. That doesn’t change once door #2 is opened to reveal a goat. That is, there is still a 2/3 chance you chose a goat and a 1/3 chance that you chose the car. And there is still a 2/3 chance that, had you been able to choose both door #2 and door #3 (at the same time), you would have selected the door concealing the car. Now that door #2 has been eliminated, this just leaves door #3 as the viable car option among doors #2 and #3; so, there’s a 2/3 chance that the car is behind door #3. (This assumes a goat is always revealed. See Addendum for more on this.)
A simpler, and probably clearer, way to put this: If you run the game several times, 1/3 of the time, you’ll choose the car, and switching will lose; 2/3 of the time, you’ll choose a goat, and switching will win. So you’ll win twice as often by switching.
A common way to get people to accept this result is to restate the problem with 100 (or 1,000 or 1,000,000) doors. The Numberphile video on the Monty Hall problem gives a nice visual demonstration of this.
It’s also worth noting that real-world executions and computer simulations of this game demonstrate that switching does indeed win roughly 2/3 of the time. We can also theorize a hypothetical example as follows:
Suppose you play the game 60 times. Every time you play, a goat is revealed. Of those 60 games, you choose a goat 2/3 of the time. That’s 40 times. Every time you switch in those cases, you win. So, you win 40 times by switching. 1/3 of the time, you first choose a car. That’s 20 times. You lose every time you switch in those instances. So, you’ve won 40 times by switching, and you’ve lost 20 times by switching. You’ve won twice as often as you’ve lost. In other words, you’ve won 2/3 of the time (i.e., 40 out of 60 games), lost 1/3 of the time (i.e., 20 out of 60 games). Thus, the probability of winning by switching is 2/3.
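For readers who want to check this themselves, here’s a minimal Python simulation of the standard game (the function name, trial count, and seed are my own choices, not part of the original problem):

```python
import random

rng = random.Random(0)

def play_standard_game(switch):
    """One round in which the host always reveals a goat from the unpicked doors."""
    doors = ["goat", "goat", "car"]
    rng.shuffle(doors)
    pick = rng.randrange(3)
    # Host opens an unpicked goat door (lowest-numbered if two qualify;
    # which one he opens doesn't affect the overall switch win rate).
    opened = next(d for d in range(3) if d != pick and doors[d] == "goat")
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick] == "car"

trials = 100_000
win_rate = sum(play_standard_game(switch=True) for _ in range(trials)) / trials
print(f"win rate when always switching: {win_rate:.3f}")  # ≈ 0.667
```

Running it with `switch=False` instead gives roughly 0.333, the 1/3 stay rate.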
But the robustness of these demonstrations doesn’t explain the robust counterintuitiveness of the problem—a question that seemingly resides in the intersection of psychology and mathematics.
One way I’ve tried to build an intuition for it is to put it in modal terms. In the worlds where you have three doors to choose from, you’ll make a worse choice than you would in those worlds where you have only two doors to choose from. That is, you’re more likely to get it wrong when there are more doors. So, when one door is eliminated, you’re better off switching because you probably guessed wrong at the start.
I like this attempt at intuition-building, but it doesn’t dig deep enough into where we’re going wrong, and it doesn’t account for how we might respond to variations where switching isn’t 2/3 (see Addendum). The simple answer is that those who get it wrong aren’t conditioning on the fact that the game-master always reveals a goat. But I feel there’s something deeper going on here having to do with a few things, including the counterintuitive application of phrases like “2/3 of the time” when you’re only playing the game once.
I can imagine a response along the lines of (keeping in mind that, if the game-master chooses what door to open at random rather than always revealing a goat, switching indeed wins only 1/2 the time): “Why should it matter that Monty Hall always reveals the goat in some theoretical model involving several trials I’m not actually involved in? What if I only thought Monty Hall knew, but he didn’t and was only guessing? That would only change the theoretical answer I’m supposed to give. But it won’t change whether or not I’m going to win this game I’m playing right now! That seems to come down to either this door or that door concealing the car.”
Does it help to suggest imagining that we could view the history of Let’s Make a Deal as having been played by a single contestant, and when you play, you are simply the temporary avatar for that contestant? Perhaps not, as it’s certainly not the case that all those avatars share the winnings! Besides, and more importantly, the game is an independent event (beware the gambler’s fallacy). In other words, if the game is only ever played once in the history of its existence, the probability that switching wins in that game is 2/3, due to the game’s structure.
Some of the confusion here may stem from a misunderstanding of what probability is meant to do: provide theoretical models for making better decisions, sometimes with more obvious results than others (see the examples with 100+ doors).
I also think some of the difficulty here has to do with popular notions of probability—the intuition-shaping notions we grow up with—being about random external events that haven’t happened yet. That’s the aspect I’ll explore here, though admittedly I’m only skimming the surface.
Part II: What We’re Getting Wrong about the Monty Hall Problem
Consider a variation on the game. After door #2 is opened to reveal a goat, that door is left open (and the goat does not move). Doors #1 and #3 remain closed. Whatever is behind them is now shuffled vigorously and randomly (this happens concealed from your view). You’re now given the opportunity to switch. This time, it really is 1/2 to switch or stay. It makes no difference. Similarly, if you flip a coin to decide whether to switch or stay, you’ll win half the time—i.e., 2/3 of the time you’ll choose a goat, and 1/2 of that time you’ll switch and win; 1/3 of the time you’ll choose the car, and 1/2 of that time you’ll stay and win; that’s (2/3)(1/2) + (1/3)(1/2) = 1/2.
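The coin-flip arithmetic above can be checked the same way; this sketch (names and seed mine) simulates deciding whether to switch by a fair coin:

```python
import random

rng = random.Random(1)

def play_with_coin_flip():
    """Host reveals a goat as usual; the player then flips a fair coin
    to decide whether to switch or stay."""
    doors = ["goat", "goat", "car"]
    rng.shuffle(doors)
    pick = rng.randrange(3)
    opened = next(d for d in range(3) if d != pick and doors[d] == "goat")
    if rng.random() < 0.5:  # heads: switch
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick] == "car"

trials = 100_000
rate = sum(play_with_coin_flip() for _ in range(trials)) / trials
print(f"win rate with coin-flip strategy: {rate:.3f}")  # ≈ (2/3)(1/2) + (1/3)(1/2) = 0.5
```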
Imagine another variation. There is a lotto machine with three balls flying around in it. On two are the letters “G” (for “goat”), and on one there is the letter “C” for “car.” Your task is to guess which will (randomly) come out. You have a 1/3 chance of guessing C correctly. A G-ball comes out. You are asked to guess again. This time there is a 1/2 chance of guessing correctly, whether you guess C or G.
In both of these scenarios, our intuitions are correctly aligned with the actual probability. What is different about the Monty Hall Problem? In that problem, the events are already fixed. That is, “Where is the car?” is already a settled question. Let’s say that door #1 conceals the car. When you guess door #1, there is a 1/3 chance that your guess is correct, but there is a probability of 1—that’s a 100% chance—that the car is behind that door.
In other words, what is behind each door is already settled. There’s a probability of 1 that: the car is behind door #1, a particular goat is behind door #2, and the other goat is behind door #3. When you assign a probability, however, you don’t know any of this. So what you must assign a probability to is the likelihood of your guess being correct. One out of three times, your guess will reveal the car. And so on.
Think of it this way. Imagine I flip a fair coin, but I keep the result concealed in my hand. I look at the coin and I see it landed Heads. You must now guess what the likelihood is that Heads is facing upwards. The answer is already settled. I can think to myself, while you’re deliberating, that there is a probability of 1 that Heads is facing upwards. You, on the other hand, should assign 1/2 to that outcome. What you are really evaluating, however, is the likelihood of your guess being correct. Because the outcome is already settled. It’s no longer left to chance, no longer random, etc.
You might think the coin example is limited, given that this may also be what’s going on when we have not yet flipped the coin: There’s some already settled fact of the matter about how it will land, and we’re really evaluating the likelihood of a guess (or more rigorous prediction) of Heads or Tails being correct. Fair enough. This very well may be the best way to view probability in general: God is the game-master who chose the best of all possible worlds, and knows what will happen just as if it already has happened. We mortals are just guessing (some with more rigor and sensitivity than others).
Whether that is true, or whether randomness genuinely ensures that future events may go either way, the fact remains that in the first formulation I gave of the Monty Hall Problem, the arrangement of the car and goats is settled at the outset, and that doesn’t change after one or all of the doors has been opened.
In summary: In the Monty Hall Problem, you should be assigning probability to the likelihood that your guess is correct, not to the likelihood that there actually is this or that item behind the door, and especially not to the likelihood of a future event occurring in terms of where the car is (that event is already settled). When the game-master opens the door, this is not like flipping a coin; it’s more like opening a hand to reveal an already-flipped coin.
The future event in question, rather, is the correctness of our guess. (In other situations, it may be the correctness of a belief or of a rigorously computed prediction.) In this sense, probability turns out to always be about some future outcome; in this case, about a guess being correct, about winning a game, and so on.
This observation may not help us adjust our wrong intuitions about the problem. But perhaps it is on the right track for helping to explain what we’re doing wrong here and why the correct solution feels so wrong. In particular, I assume, to people for whom statistics and probability are not regular activities. On the other hand, it’s perplexing that the problem was a challenge for even the likes of mathematician Paul Erdős to wrap his head around, and that it garnered thousands of letters, including from math professors, “correcting” Marilyn vos Savant after her initial publication of the 2/3 answer in Parade magazine.
In the variations I gave of the Monty Hall Problem, our natural intuitions align with the probability rules. But this may just be a matter of luck: the proceedings happen to align with our intuited expectations, rather than our intuitions adjusting to a given situation. Presumably there’s an evolutionary advantage to this. Maybe one would need to examine that in order to really know why the Monty Hall Problem embarrasses our intuitions. I’ll leave that and many other deep follow-up questions—e.g., What would embodied mind/cognition researchers (such as George Lakoff and Rafael Núñez) have to say about this? How does the Monty Hall Problem relate to other counterintuitive probability results (from theoretically easy ones, like the Gambler’s Fallacy, to the maybe unresolvable Sleeping Beauty Problem)? In what ways is “Why don’t I get this?” a psychological question, and in what ways is it a mathematical one?—for later.
Have I really even addressed the question suggested by this writing’s title? Maybe I was too enthusiastic with the title, but hey: the layers of Why? go as far down as you like; I think I’ve addressed the first couple of layers here. Or maybe the Monty Hall problem is just stranger than those “in the know” tend to portray it to be, and so we should expect that anyone who’s really thinking closely about it should see some strangeness there. I certainly do.*
(*That in mind, I’ve written a follow-up post on the Monty Hall problem, this time zeroing in on the bizarre effects the game-master’s cognitive state—particularly at the moment of choosing which door to open—has on the probability of winning by switching: Three Strange Results in Probability: Cognitive States and the Principle of Indifference (Monty Hall, Flipping Coins, and Factory Boxes).)
END NOTE:
The Monty Hall Problem, like so much in probability, provides a theoretical model that cleans up our complex, messy world in order to provide us with a heuristic (or rule of thumb); in this case: always switch. We may bring the model into question. Consider this. A highly determined and observant viewer, Wanda, watches hundreds of episodes of Let’s Make a Deal (the TV show from which the problem originates, with host Monty Hall as game-master). Wanda has tracked, with highly sophisticated technology, Monty Hall’s behavior during the game. (Assume that it’s known that Hall knows what’s behind each door, he always reveals a goat, and he always offers the opportunity to switch.)
Wanda has noted that, in greater than 50% of cases when contestants should NOT have switched, some combination of the following occur: Hall’s vocal inflections become slightly longer and go higher in pitch; his pupils dilate; at least once, his eyebrows raise more than three times in a 30-second period. These things also sometimes, but less often, occur when the contestant should switch (e.g., when the contestant is a female wearing a red skirt).
When Wanda becomes a contestant, she tries to minimize the conditions that lead to false tells (e.g., she wears a brown skirt, a color to which Hall seems to be neutral). Should Wanda switch? It depends. The unconditioned probability that she chose a goat door is still 2/3. But she’ll need to update that probability conditional on Hall’s behavior. Any other contestant should switch. But Wanda has more information.
Finally, it’s interesting to consider an alternative case in which the winning door is kept from Monty Hall—for example, he is told which door to open through an earpiece that emits beeps: low for Door-1, midrange for Door-2, high for Door-3. And yet, Wanda might (though it’s far less likely) still find significant correlates between Hall’s behavior and winning choices; e.g., if his behavior is somehow (in)directly influenced by the behavior of those who do know. I can’t think of a plausible example, however, because those people would not know ahead of time whether a contestant would be best off switching.
(The point of the End Note example is to motivate the plausibility of situations in which one’s rational subjective probability that staying wins is greater than 1/3, whatever the reasonably assumed unconditioned probability may be.)
ADDENDUM:
See a discussion in the comments section about the significance of Monty Hall knowing where the car is. Commenters took issue with my saying he need not know. I removed the knowledge requirement from the game because I sometimes encounter people ascribing a quasi-mystical property to mental states, including in terms of the effects those states can have on probability. So, I’m careful about how I characterize the epistemic dimensions of probability.
That said, the discussion did force me to think more deeply about the problem and its implications for how we think about probability. There’s a tendency for us to solve a problem then move on to the next as though the solved problem is now trivial. But it strikes me that there’s potentially something extremely important going on when a concept that seems entirely counterintuitive snaps into focus—something to do with either getting a more properly focused view of the world, or perhaps simply getting a more properly focused view within a particular (useful) model of the world; this is an important distinction. Furthermore, what seems trivial to us about probability today would have seemed strange to brilliant mathematicians of the 17th century (including the likes of Leibniz and Newton).
Similarly, I often wonder what, say, Descartes would have thought of Edmund Gettier’s influential 1963 paper “Is Justified True Belief Knowledge?” I bet Descartes would have said, “no, belief is simply not justified in Gettier cases.” Most epistemologists today buy into Gettier cases (as do I; it seems so obvious!)—and part of their job is to get this and similar concepts to snap into focus for students by getting them to think about false barns and painted zebras and a real sheep concealed behind fake sheep (I love all of these examples, by the way; I’m fully convinced). I think we’re able to buy into it today due to a gradual decline in our (I believe justified) confidence, which was very high coming out of the Scientific Revolution, that everything about the world can be proved or uncovered through empirical investigation and math. Now that confidence is being replaced by probabilistic models (e.g., involving Bayesian credences), even, I’m told, in the world of magic (e.g., a spell might not promise a result, but it will increase its chances of happening).
Points of tension in perspectives on probability also seem to have to do with different understandings about what probability is meant to do, or is even capable of doing (help make a better decision in a given moment? state a fact about the world?). (For a wonderful philosophical survey on the history of the development of, and attitudes about, probability, see Ian Hacking’s 1975 [updated 2006] The Emergence of Probability.)
But these are questions for other articles (working on it). My point is that, despite understanding the answer to MH intuitively, I’m not prepared to view it as trivial or obvious.
Following the comments discussion, I made some minor edits to this post, and will add this clarifying note:
If Monty Hall randomly chooses which door to open (due to not knowing which to open or, say, by flipping a fair coin), then 1/3 of the time he’ll choose the car, thus ending the game; 1/3 of the time you’ll choose a goat and he’ll reveal a goat, and switching will win; 1/3 of the time you’ll choose the car and he’ll reveal a goat, and switching will lose; thus, switching wins as often in this version of the game as it loses (1/3 for each), so the probability is 1/2 that you’ll win by switching.
Or, put it this way: 2/3 of the time you’ll pick a goat; half of those (i.e., 1/3 of the time overall), Monty Hall reveals the car, ending the game; the other half, you switch and win; the other 1/3 of the time, you pick the car and switching loses.
That said, it seems to me that the Monty Hall problem encourages interesting questions about the relationship between assessing the probability of a particular trial, and relating a particular trial to a larger set of trials (including how we draw borders around several trials in order to create a set—i.e., how we determine membership criteria for admitting this or that given trial into a given set).
That said, in what I’ve been discussing here, I’m particularly interested in the set of trials in which a goat is revealed, irrespective of how the goat comes to be revealed—in 2/3 of those trials, switching wins. To be clear, this isn’t simply a matter of playing the game several times (without Monty Hall always revealing a goat), and then throwing out the trials in which he reveals the car. That will still result in switching winning only 1/2 the time. To see this, suppose you play 60 games. You’ll choose a goat 40 times. Half of those, Monty Hall voids the game by revealing the car; the other 20, you win by switching. You’ll choose a car 20 times, in which case switching always loses. You’ve won 20 by switching, and lost 20 by switching. Alternatively, you could construct a set of 60 trials in which a goat is revealed, for whatever reason. In that set, switching wins twice as often.
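This distinction can be made concrete in code. The sketch below (a simulation of my own, with an arbitrary seed) runs the random-host version and keeps only the trials in which a goat happens to be revealed; switching then wins only half the time, just as described:

```python
import random

rng = random.Random(2)

def random_host_round():
    """The host opens one of the two unpicked doors at random; he may
    accidentally reveal the car, voiding the game."""
    doors = ["goat", "goat", "car"]
    rng.shuffle(doors)
    pick = rng.randrange(3)
    opened = rng.choice([d for d in range(3) if d != pick])
    switch_to = next(d for d in range(3) if d not in (pick, opened))
    return doors[opened], doors[switch_to] == "car"

goat_revealed = switch_wins = 0
for _ in range(100_000):
    revealed, switch_would_win = random_host_round()
    if revealed == "goat":  # throw out the voided (car-reveal) games
        goat_revealed += 1
        switch_wins += switch_would_win
print(f"P(switch wins | goat revealed): {switch_wins / goat_revealed:.3f}")  # ≈ 0.5
```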
Interestingly, were I to play the game one time without knowing whether Monty Hall knows, once I see a goat revealed, I would intuitively consider myself to be in the set of trials in which I always see a goat. Maybe that’s not such a bad thing, as switching won’t decrease my chances of finding the car.
I would prefer that to the alternative of assessing the probability that Monty Hall knows, in which case I might turn to the principle of indifference and assign .5, which I would nudge up a tenth to .6, via Bayes’ theorem, on the evidence of his revealing a goat, and on the assumption that he intends to reveal the goat (and is making the choice of door cognitively, rather than, say, by flipping a coin): IF he intends to reveal a goat and P(Knows) = 1/2, THEN: P(Knows|Reveals Goat) = (P(Reveals Goat|Knows)×P(Knows))/P(Reveals Goat) = ((1)(1/2))/(5/6) = 3/5.
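That update can be verified with exact fractions (the 1/2 prior from the principle of indifference is, as stated above, an assumption of this scenario):

```python
from fractions import Fraction

p_knows = Fraction(1, 2)          # prior, via the principle of indifference
p_goat_if_knows = Fraction(1)     # a knowing host always reveals a goat
p_goat_if_not = Fraction(2, 3)    # random door among the two unpicked
p_goat = p_goat_if_knows * p_knows + p_goat_if_not * (1 - p_knows)  # 5/6
posterior = p_goat_if_knows * p_knows / p_goat
print(posterior)  # 3/5
```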
This gets tougher, however, when assuming ignorance both about Monty Hall’s intentions and how the door is chosen. Knowing more about his wishes and decision method can be helpful; notice that if the door is decided by guessing or even by, say, eeny meeny miney moe (which can be gamed), his wishes seem more important than if decided by coin toss, particularly if he chooses which decision method to use.
Finally, I might as well throw some of the other stuff discussed here into Bayes’ theorem. If you’re in the set of trials in which a goat is always revealed, once you see a goat revealed, where G = “You chose a goat” and R = “A goat is revealed”: P(G|R) = (P(R|G)×P(G))/P(R) = ((1)(2/3))/1 = 2/3. (In this set a goat is revealed every time, so P(R) = 1.) In other words, you started with a 2/3 probability of having chosen a goat, and that doesn’t change given a goat reveal.
In case he is choosing a door randomly, assuming even odds (e.g., flipping a coin), you get: P(G|R) = ((1/2)(2/3))/(2/3) = 1/2. (Here P(R) = (1/2)(2/3) + (1)(1/3) = 2/3, since a car-picker is guaranteed a goat reveal.) So you update from a 2/3 to a 1/2 chance of having chosen a goat, given the reveal of a goat.
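Both updates fit a single Bayes’-theorem helper (a sketch of my own, using exact fractions; the function name is mine):

```python
from fractions import Fraction

def p_chose_goat_given_goat_revealed(p_reveal_if_goat, p_reveal_if_car):
    """P(G|R) = P(R|G)*P(G) / P(R), with prior P(G) = 2/3."""
    p_g = Fraction(2, 3)
    p_r = p_reveal_if_goat * p_g + p_reveal_if_car * (1 - p_g)
    return p_reveal_if_goat * p_g / p_r

# Host always reveals a goat: P(R|G) = P(R|C) = 1
print(p_chose_goat_given_goat_revealed(Fraction(1), Fraction(1)))     # 2/3
# Host opens a random unpicked door: P(R|G) = 1/2, P(R|C) = 1
print(p_chose_goat_given_goat_revealed(Fraction(1, 2), Fraction(1)))  # 1/2
```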
At the end of the day, I think the question I’m finding myself more intrigued by here is how to reconcile a single, real-world instance of the game with a theoretical model in which I’m supposed to locate myself. That is, I am to ask: “As I play this single instance of the game, am I in the set of games in which Monty Hall always knows, the one in which he usually knows but happens to forget on occasion, the one in which he is guessing, the one in which he can choose not to open a door according to whimsy…?”

I have to object to two parts of your problem description. I noticed that you were very careful to make certain things, that are usually left implied, explicit. Like “The game-master does not lie” and “You are not told which door has which item.” And that’s where the objections come from.
In the description, you said “The game-master need not know which door has which item.” You also never said that he has to *INTENTIONALLY* open a door with a goat.
But he needs to know where the car is, and intentionally avoid it, for the problem to work as described. If either is not true, your breakdown of the cases is wrong. There would be a 1/3 chance he reveals the car. The remaining 2/3 of cases where he doesn’t divide into the 1/3 where your original choice has the car, and the 1/3 where the remaining door does.
In fact, the MHP is one of several well-known problems that are equivalent to one first introduced by Joseph Bertrand in 1889, called the Bertrand Box Problem. Today, most will call it the Bertrand Box “Paradox” because people quibble over whether 1/2, or 2/3, is the correct answer. But the paradox Bertrand referred to was how we can tell that 1/2 can’t be the right answer. I’ll illustrate with your version of the MHP.
After you pick door #1, Monty Hall has two doors he can open. If he were to open #2, most people’s intuitions say the probability #1 has the car changes to 1/2. But if that were true, it would also change to 1/2 if he were to open #3. And if it would change regardless of which door opens, HE DOESN’T HAVE TO OPEN ONE! All you need to know is that he can, which you do know, and then you would conclude that door #1 has a 1/2 chance. But that’s impossible; it must be 1/3.
We get the MHP wrong – or any of the problems in the Bertrand Box family – when we assume that what we have observed was predetermined. That was Bertrand’s point. The information we get when Monty Hall opens door #2 is not just that door #2 has a goat, it is that Monty Hall chose it because it had a goat. He could have chosen #3 if both #2 and #3 had goats, so we err when we dismiss all of those cases. We should eliminate all of the cases where the car was behind #3 (and #2 has a goat), but only half (see more below) of the ones where it is behind #1. This leaves twice as many where switching wins.
What Wanda’s research does is let her change that number “1/2” to something else. If she can estimate that Monty would open #2 with probability Q when the car is behind #1, the chances that switching wins are 1/(1+Q), and that staying wins are Q/(1+Q). So switching is always better, just not always twice as good.
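JeffJo’s 1/(1+Q) formula can be checked by simulation. In this sketch of mine, the contestant always picks door 1, and q is the host’s bias toward opening door 2 whenever the car is behind the contestant’s door:

```python
import random

rng = random.Random(3)

def switch_win_rate(q, trials=300_000):
    """Estimate P(switch wins | host opened door 2), when the host opens
    door 2 with probability q if the car is behind the contestant's door 1."""
    opened_2 = wins = 0
    for _ in range(trials):
        car = rng.choice([1, 2, 3])        # contestant always picks door 1
        if car == 1:
            opened = 2 if rng.random() < q else 3
        else:
            opened = 2 if car == 3 else 3  # forced: avoid the pick and the car
        if opened == 2:
            opened_2 += 1
            wins += (car == 3)
    return wins / opened_2

for q in (0.0, 0.5, 1.0):
    print(f"q={q}: simulated {switch_win_rate(q):.3f}, formula {1 / (1 + q):.3f}")
```

Note that q = 0.5 recovers the standard problem’s 2/3, and q = 1 gives the intuition-friendly 1/2.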
Another example: You know that Mr. Smith has two children, and that at least one is a boy. What are the chances he has a boy and a girl? (This is a variation of what was first asked by Martin Gardner in the May, 1959 Scientific American)
Incorrect solution #1: The gender of his two children must be independent, so the “other” child is a girl 50% of the time. This is wrong because no specific child was identified as “the boy,” so no child can be identified as “the other.”
Incorrect solution #2: There are four equally-likely combinations of two children when listed by age, BB, BG, GB, and GG. Since GG is eliminated, and two of the three that remain have a boy and a girl, the probability is 2/3. This is incorrect for the exact same reason that 1/2 is wrong for the MHP: we are assuming that the observation of a boy was predetermined. (This was pointed out by Martin Gardner in October, 1959. Nobody ever remembers that.)
Correct solution: Since it is possible to observe that a BG or GB family has at least one girl – in fact, it has to be just as likely as observing that there is at least one boy – we can only count half of those cases. The answer is 1/2, but not for the reasons in solution #1. (Or, apply the paradox: learning one gender in a way that is independent of set can’t change the probability that there is a boy and a girl.)
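The two readings of the boy-girl problem can be separated in a quick simulation (the sampling scheme and seed are my own):

```python
import random

rng = random.Random(4)
families = [(rng.choice("BG"), rng.choice("BG")) for _ in range(200_000)]

# Reading 1: condition on the set of all families containing at least one boy.
with_boy = [f for f in families if "B" in f]
p1 = sum("G" in f for f in with_boy) / len(with_boy)
print(f"filtered on 'has at least one boy': {p1:.3f}")  # ≈ 2/3

# Reading 2: one child, chosen at random, is observed to be a boy.
boy_seen = [f for f in families if rng.choice(f) == "B"]
p2 = sum("G" in f for f in boy_seen) / len(boy_seen)
print(f"a random child is observed to be a boy: {p2:.3f}")  # ≈ 0.5
```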
Hi JeffJo,
Thanks for your interesting and informative comments.
For better or worse, I remain unconvinced that MH needs to know where the car is or intend to open a door with a goat. I’ve often encountered that idea as an explanation for why the probability of having chosen the car does not change to 1/2 once a goat is revealed. I’ve also personally found the idea effective in helping people get their intuitions aligned with why it’s better to switch doors; particularly in scenarios where, out of 100 or 1000 doors, all that’s left are two closed doors: the one you chose, and the one MH has intentionally left closed. But ultimately I think this misses the point.
The point I meant to make is: Provided MH’s behavior is identical when he does or does not know where the car is, the (typical) contestant has the exact same data to work with, and thus should assign the same probability in either case. (I also point out that the game goes more smoothly when MH knows where the car is; but I’d like to be as clear as possible about when certain epistemic factors do or don’t affect probability outcomes, so I leave this bit about knowledge out of the account.)
For example, imagine you’re playing the game. You choose door #1. Earlier in the day, MH knew which door was going to hide the car. Since then, he has suffered a mild (and he hopes anomalous) bout of forgetfulness, thus spacing on which door conceals the car. To save face, he puts on a convincing show of confidence while secretly hoping he’ll reveal a goat rather than the car. He opens door #2, revealing a goat. At this point, you are in the same position you would have been in had MH known all along which door to open. I will take this even further and say that MH could have openly flipped a coin to choose between the two remaining doors; provided a goat is revealed, you are in the same position (probability-wise) as when MH knowingly opens that same door.
[LATER EDIT: This is beyond misleading. JeffJo is absolutely correct that, the way I’m putting this here, it would be 1/2. What I was struggling to get across was: so long as the goat is always revealed in every (theoretical) run of the game, we get the 2/3 answer—and this can happen in ways that don’t involve Monty Hall’s knowledge. This observation suggested a strangeness to me (as someone, that is, whose gut intuition is, incorrectly, that it should ALWAYS be 2/3) that I’ve better explored in subsequent posts, in part thanks to these discussions here. At any rate, a corrected example here would be a coin that’s somehow rigged to always reveal a goat, though I find that unsatisfyingly unmysterious. As for a momentary lapse of memory, we would need to convince ourselves that he always still chooses the door with a goat, for some reason; that idea is a tough sell and one I’ve sensibly come to reject since suggesting it.]
Whatever MH’s (presumed) knowledge, it remains that there’s a 1/3 chance you initially chose the door with a car, and a 2/3 chance one of the other doors conceals the car. Once you see that one of those other doors has a goat, you update to a 2/3 chance that the remaining door has the car, so it’s best to switch.
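For the standard game itself (where a goat is always revealed from an unchosen door), the 1/3 vs. 2/3 split is easy to check with a quick simulation. Here is a Python sketch; the function name is just illustrative:

```python
import random

def play_standard(switch, trials=100_000):
    """Standard rules: the host always opens an unchosen door hiding a goat."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)     # door hiding the car
        pick = random.randrange(3)    # contestant's first choice
        # Host opens some door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play_standard(switch=True))   # ~0.667
print(play_standard(switch=False))  # ~0.333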
I can, of course, imagine scenarios where assessment of MH’s knowledge matters. That was my point of the Wanda example. There, I simply meant to illustrate that there could be situations where someone—not a typical contestant—is getting extra information from MH’s gameplay behavior. If Wanda was justifiedly .9 confident that she was getting an unconscious “don’t switch” tell from MH, it would be better for her not to switch. I wouldn’t recognize the tell, however, so I would in theory be better off switching (though I’d likely lose the game in this case).
A simpler example where MH’s knowledge matters: There are two doors, one with a goat and one with a car. In this game, the doors are not opened until the end. MH announces that he chooses door #1. You believe with certainty that MH knows what’s behind each door, and that he always intentionally picks the goat. So, you assign a probability of 1 (rather than .5) to there being a car behind door #2. Interestingly, MH still might not actually know here (e.g., he could get Gettiered), yet it could all turn out in the end as you expected.
What the contestant must assume the rules of the game to be, whether or not they are the actual rules, are: (1) MH will open a door and offer a switch, (2) he cannot open the contestant’s door, and (3) he cannot open the car’s door. MH cannot follow rule #3 unless he knows where the car is (or you create some convoluted mechanism that amounts to the same thing).
If the contestant does not assume rule #3, then she must assume that (A) 1/3 of the time, MH will reveal the car, (B) 1/3 of the time the contestant picked the car, and (C) 1/3 of the time the car is behind the door that the contestant can switch to. To solve this, case (A) is removed because MH did not reveal the car in the game being played, and switching wins with probability 1/2.
It seems to me as simple as this: If you run several instances of the game, you’ll win roughly 2/3 of the time by switching once a goat is revealed. This will be true whether or not anyone knew in advance that the goat was behind the opened door. In other words, whether the goat is revealed knowingly or not, once you see a goat revealed, you’re better off switching.
It seems I could similarly argue: Suppose MH intends to open the door with a goat, but accidentally opens the door with the car. You now have a probability of 1 that you’ll pick the car by switching to that opened door. It clearly doesn’t at all matter what MH knew or intended here.
NOTE: In the above comment, I should have clarified: “If you run several instances of the game, you’ll win roughly 2/3 of the time by switching once a goat is revealed, provided a goat is always revealed.”
In the MHP the rules are specific, and they include the requirements that the host know where the car is, reveal a goat from a door not chosen, and offer the player a chance to switch his original choice for the remaining door. Since the probability that the player picked a goat is 2/3, and the probability that the host reveals a goat is 1, the chance of winning by switching is 2/3×1=2/3.
If the host did not know where the car is and happens to reveal a goat, then it is a different problem entirely. The chances of winning by switching and by staying are the same. This shouldn’t even be up for debate when it comes to suggestions that there is still a 2/3 chance to win by switching if a goat were revealed. The chance of winning by switching is 2/3×1/2=1/3, the same as staying, which is 1/3×1=1/3. Since the host reveals a goat 2/3 of the time and the player picks the car 1/3 of the time, the player has a car behind his door half of the time that a goat is revealed. In other words, the only players that can continue are the 1/3 that picked cars, and half of the 2/3 that picked goats, which is also 1/3.
“It seems I could similarly argue: Suppose MH intends to open the door with a goat, but accidentally opens the door with the car. You now have a probability of 1 that you’ll pick the car by switching to that opened door. It clearly doesn’t at all matter what MH knew or intended here.”
That clearly is not the case in which the host does not know where the car is and opens a door at random. If a car is revealed the contestant loses because he CANNOT continue since he can only stay with his original guess, or switch to the remaining door. To suggest otherwise we would then need to amend the standard MHP to include the option for the contestant to switch to the revealed goat as well. The 1/3 of the games that the host reveals the car are LOST, therefore in the 2/3 of the games that can continue the chances to win by staying and switching are the same, 50/50.
“It seems I could similarly argue: Suppose MH intends to open the door with a goat, but accidentally opens the door with the car. You now have a probability of 1 that you’ll pick the car by switching to that opened door. It clearly doesn’t at all matter what MH knew or intended here.”
That’s a moot point entirely. The player has the option to either stay with his guess or switch to the remaining door.
Dan, Jeffjo is entirely correct in his understanding of the MHP. I hope you don’t make a point of trying to teach others when it is you that has demonstrated a lack of understanding of the problem by way of your comments.
Mr. Wallace, you clearly do not understand why the host must know where the car is, a fundamental rule of the problem. If he opens a door without knowing where the car is, and what’s behind it happens to be a goat, then it’s 50/50.
And this part here—“Suppose MH intends to open the door with a goat, but accidentally opens the door with the car. You now have a probability of 1 that you’ll pick the car by switching to that opened door.”—is, in a word, laughable. You already do not understand one of the rules of the problem, yet you introduce one that is not even included in it.
Please, do everyone a service and remove your blog.
Thanks for the comments. I’m not going to remove the post, Mr. Cantor. If I’m getting things wrong (or am not stating them clearly), the post now has several comments pointing that out, which is nice!
That said, I understand the core argument against me to be: If MH doesn’t know where the car is (or, more precisely, is randomly opening one of the two remaining doors), he’ll sometimes reveal the car, and this will mess up the whole “win 2/3 of the time by switching” thing. That’s all fine and clear. In fact, to save anyone else the trouble of correcting me, I’ll explain this in my own words:
Suppose, over several runs of the game, MH flips a fair coin to choose which door to open. 1/3 of the time, he’ll reveal the car. That ends 1/3 of the games. In the other 2/3 of the games, MH reveals a goat. In half of those games, switching will win. So, over the course of these games, you’ll win 1/2 the time by switching.
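That coin-flip scenario is easy to check by simulation as well. A Python sketch (the function name is just illustrative), discarding the games in which the car happens to be revealed:

```python
import random

def ignorant_monty(trials=300_000):
    """Host opens one of the two unchosen doors at random; games where
    the car is revealed are thrown out, as in the coin-flip scenario."""
    switch_wins = kept = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == car:
            continue  # car revealed: this game doesn't count
        kept += 1
        final = next(d for d in range(3) if d != pick and d != opened)
        switch_wins += (final == car)
    return switch_wins / kept

print(ignorant_monty())  # ~0.5, not 2/3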
I realize now that in my above comment I should have added “provided a goat is always revealed” to: “If you run several instances of the game, you’ll win roughly 2/3 of the time by switching once a goat is revealed.” That was an error on my part! As I’ve said more than once, the game goes better if the host knows where the car is. I had in mind something along the lines of just those trials in which a goat is revealed for whatever reason (but never not revealed). I thought I made this clear enough by including the example with 1,000,000 doors: If MH opened 999,998 doors, even if done by pure luck, it would make sense to switch once it comes down to your door and that remaining closed door.
Now, back to my claims about it not mattering what MH knows or intends.
Even if MH usually—nearly always—knows where the car is, if on some random night MH forgets which door has the car and just hopes for the best and gets lucky enough to reveal a goat, the math still seems to say that, once I’ve seen that goat, I should update the probability to 2/3 that the remaining door has the car. This will even be true if, on some random night, MH mistakenly thought he knew where the car was and meant to reveal the car but accidentally reveals a goat. I realize that an argument can be made that this is a “different game” (i.e., is a trial in a different set of trials), and that technically the probability should be reworked to reflect the coin-flipping scenario I describe above; but I’m thinking in terms of what a person who assumes they are—indeed who is—in the real-world standard scenario should do, and, more importantly, I use this example to make a point about knowledge/intention.
Regarding my little argument about having a probability of 1 of picking the car were MH to accidentally reveal the car: It did feel a little snarky when I wrote it, and I thought about not including it, if only because a car being (accidentally) revealed definitely is not part of the standard MH scenario (which is what I’m ultimately interested in). My point, though, was just that the probability of the car’s being behind that door doesn’t depend on what MH knows. Perhaps that example fails to say anything interesting about the standard scenario, in which there is always a goat revealed. It strikes me as interesting, though, because of the obvious irrelevance of MH’s knowledge and intentions. And that really was my initial point, though I admit how I’ve presented it may reveal a struggle on my part to foster an intuitive connection between how many times something happens on average over several trials and its probability of happening in a single trial. I sometimes worry that people seem to imbue MH’s belief with a kind of magical property to affect the physical outcome of a single trial, and I want to be clear that belief has no such property.
That in mind, I can much better make my point about knowledge/intention with the following rule: “he always reveals a goat” (for whatever reason; it doesn’t HAVE to be because of any particular epistemological state on his part: he could ALWAYS intend to reveal the car despite pretending to intend the opposite, but he could be constantly foiled by his own ineptitude in this regard, thus always revealing a goat). JeffJo seems to grant this possibility when he writes: “MH cannot follow rule #3 unless he knows where the car is (or you create some convoluted mechanism that amounts to the same thing).” I take it that amounting to the same thing can involve things that have nothing to do with intention or knowledge.
All that said, it seems I need to be clearer about what sorts of points I’m trying to make with this stuff, and how I’m intending to make them etc.—so maybe I need to revise my blog post. This discussion has definitely helped me to better organize my thoughts.
Thank you for your reply, Mr. Wallace. What strikes me most in your comments is that you are very organized in your process of thought, and, rightly or wrongly, its end result ‘got there’ for better reasons than those of anyone else with whom I have disagreed. There is nothing more frustrating than dealing with someone that is adamant that the core MHP is 50/50 due to two doors left, therefore no formula or experiment to prove otherwise is valid enough for them.
With regards to this statement….”Even if MH usually—nearly always—knows where the car is, if on some random night MH forgets which door has the car and just hopes for the best and gets lucky enough to reveal a goat, the math still seems to say that, once I’ve seen that goat, I should update the probability to 2/3 that the remaining door has the car.” I am curious as to why you would suggest that. So let’s say he does know and calls out three members of the audience, Tom, Dick, and Harry to each have a door. Tom picks 1, Dick picks 2, and Harry picks 3. Dick cannot wait to see what is behind his door and pushes it open only to reveal a goat. Now, why would you think Harry would have twice the chance of having the car behind his door as compared to Tom?
Just to be more clear, when I used the statement “There is nothing more frustrating than dealing with someone that is adamant that the core MHP is 50/50 due to two doors left, therefore no formula or experiment to prove otherwise is valid enough for them,” it was referring to a type of argument and not to you specifically, because I know that you certainly wouldn’t think it was 50/50 in the core MHP.
Thank you again for your reply. To close it was an absolute pleasure, and enlightening to have this discussion with you. And I apologize for the instances in which I have come on too strong…I KNOW I did and it’s an extension of the passion I have with the MHP.
By coincidence your reference in your last sentence, of the repeating ‘9’ debate, someone else has said that (almost word for word) to me on another site. Luckily my interest in that topic is minimal for at my age if I were to partake in it there is a 2/3 chance that my ticker would fail, lol.
Thank you!!
Thanks Georg, I’m so glad it was a productive discussion in the end! (I know it was for me, and I’ll at some point revise the blog post to make my thoughts clearer.)
Feel free to stop by anytime to help me clarify my thoughts — I can always use the help :)
I hear ya’ about .9-repeating, ha!
Thank you for your invitation, and I certainly will drop in to see how things are going. You are a deep and curious thinker, and there is one problem I have on my mind that I simply cannot shake off. It’s like one of those ear-worms, songs that ring in your head over and over (usually by Barry Manilow, lol). I don’t know if it’s a genuine conundrum or paradox etc., but it involves probability with a deck of cards. I don’t want to mention its description to you unless you are interested in tackling it, and also this area should be reserved for MHP inquiries. So if you are interested, let me know which area of your site I can send it to.
Thanks in advance.
I’d be curious to know what the card deck problem is. I tend to enjoy anything that has even a whiff of paradox about it. You’re welcome to send it to me via my contact form, or even list it out here (and I could delete the post after, if it seems like it might distract from the MH topic); or if there’s a link to a description of the problem, feel free to share it here and I can go check that out. If I have any thoughts about it, I’ll make a blog post.
Thanks!
I will submit it here, and you may want to delete it after to allow focus on the MHP. The card problem begins like this: you and I are both seeking the Ace of Spades. The cards (a standard 52-card deck) are spread face down and we each take one. If you always reveal first, then you will have a 1/52 chance of being right. I will always have a 1/51 chance of being right when I reveal mine after your card is wrong, as my card cannot be the same as yours. This part I can understand, because if we play until we each get it right 5 times, I have a higher probability, but you will have revealed yours more often, so that part offsets. Now, for my self-inflicted paradox: you play by yourself in seeking out the Ace of Spades and take out TWO cards at the same time without looking at them immediately. There is (hopefully I’m correct here) a 2/52 chance, or a 1/26 chance, that one of them is the Ace of Spades. However, if the first card you look at is not the right card, and the second card you look at then has a 1/51 chance, how come this method has a slightly greater probability than your original chances of 2/52?
Hope I explained it clearly and maybe you can help me out, I must be approaching the problem incorrectly somewhere!!
Thank you, and regards,
Georg
Hi Georg,
Thanks for sharing. I’d like to make sure I understand the problem you’re noticing. To clarify…
In the original version of the game, where you and I play, there are three possible outcomes (A means Ace of Spades):
(1) A then ~A: probability is 1/52 (i.e., 1/52 x 51/51);
(2) ~A then A: probability is 1/52 (i.e., 51/52 x 1/51);
(3) ~A then ~A; probability is 50/52 (i.e., 51/52 x 50/51).
This makes sense. P(A) + P(~A) = 2/52 + 50/52 = 52/52.
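That arithmetic can be double-checked with exact fractions; a small Python sketch:

```python
from fractions import Fraction as F

p1 = F(1, 52) * F(51, 51)   # (1) A then ~A
p2 = F(51, 52) * F(1, 51)   # (2) ~A then A
p3 = F(51, 52) * F(50, 51)  # (3) ~A then ~A

assert p1 + p2 == F(2, 52)  # P(A in two pulls)
assert p1 + p2 + p3 == 1    # the three cases exhaust the game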
Seems to me this aligns with the solo version of the game. There is, just as above, a 2/52 probability that one of the two cards picked will be A. I think the problem you propose stems from thinking about the probability of a certain particular pull (e.g., if the first card pulled is ~A, which has a chance of 51/52, then the chance of the next card pulled being A is 1/51) rather than thinking about what’s implied by the probability of one of the first two pulls being A (whether pulled out separately or simultaneously).
If you see something more to this etc., let me know and I’ll think more about it.
Thanks!
Hi Georg,
Thanks again for your comments. Please note that at nearly the same moment you posted your question about Tom, Dick, and Harry, I was posting a slightly edited version of my initial reply to you. I was revising my reply mainly to address a problem with precisely the scenario you’re asking about (“Even if MH usually—nearly always…”).
As for Tom, Dick, and Harry: I agree with you.
Perhaps the edits I made to my earlier reply explain why I think it’s still ok to treat the “Even if MH…” scenario with 2/3, even if only on practical grounds. (I don’t expect you to go back and reread; but I wanted to mention that it’s been updated.) Perhaps I’m abusing probability in this example, for the sake of my point about epistemology. I’ll need to think more about this.
Thanks, I understand what you mean about the frustration of people insisting on 50/50 in the core MHP. Ultimately my post was to try to understand that insistence. I also understand that if I’m getting there for the wrong reasons, it hurts the cause (so to speak).
(I encounter a similar frustration with people who insist that .9-repeating cannot possibly equal 1. I’m fascinated by how a given individual’s intuition and intellect—including my own—come together with these things.)
I think there are three things I struggle with in the blog post:
(1) Why do most people have the strong intuition that the standard MHP case presents a .5 chance of winning by switching?
(2) What is the role of epistemology in probability?
(3) What is the relation of single trials to the large sets of trials of which those single trials are members?
(1) is meant to be the point of the blog post. (2) and (3) perhaps seep in because they are always on my mind when thinking about probability; but also perhaps because they have a role to play in (1) (at least concerning my intuitions about MHP). Maybe I need to be clearer about what I’m addressing as I go along, and set aside (2) and (3) except when explicitly addressing those things. I’ll give it more thought.
Thank you for your reply….so quickly too. I will reread your comment more times and hopefully it will sink in. To be honest I have more confidence in you about this one than I have in myself. I still have this feeling that the two cards you look at have the probabilities of 1/52 and 1/51 respectively yet ‘should’ be exactly 2/52. Now I’m starting to sound like those 50/50ers in the MHP that you and I fend off, lol.
While I sulk over my ill-confidence I have your reply saved to a text file so this and my query and your reply can be deleted anytime from this page leaving it again for the MHP comments.
In the interim thank you ever so much again!
Hi Georg, the struggle to align our intuitions with probability is a big part of what I love about it.
A few more thoughts about this problem: When pulling separately, there’s a 1/52 chance that the first card is A, and, if that doesn’t happen, a 1/51 chance of pulling A on the second pull. But it’s a 1/52 chance that the second card in the deck is A. Indeed, it’s a 1/52 chance that any given card in the deck is A. So there’s a 2/52 chance that one of any two given cards in the deck will be A. (And if you pull 52 cards, it’s a 52/52 chance that one of those is A. I explore the significance of this example at the end of this comment.)
The issue seems to be how to deal with the notion of “first” and “second” when the cards are pulled simultaneously. If you flip two cards over simultaneously, there is a 2/52 chance that one of them is A. But it’s interesting to consider whether there is a fact of the matter about what probability each card would have in terms of sequence, particularly when you look at both cards at the same time.
This doesn’t seem as obvious as when, say, flipping two quarters at the exact same time: there’s a fact of the matter that each quarter has a .5 chance coming up H; and the chance that both land H is .25. But with your card example, you could, say (assuming they are spread out in a row), look from left to right or from right to left (just as you could have separately flipped them over from left to right or right to left). It’s arbitrary. You could also pull out two cards and shuffle them up, then put them back into the empty spaces on the table they came from, or you could just flip those over without first putting them back. In those cases, I think you would do just as you do when flipping separately. The probability that the first one you look at is A is 1/52, and, upon crossing that out, the probability for the next is 1/51 (but keep in mind that before crossing something out, the probability of any given card being A is 1/52).
But when you see two at the same time, we lose the sense of the probability of the cards being in any sort of sequence, such as 1/52 x 51/51, because there’s no first (to cross out) and then second pull. What we can say, though, for example, is that there’s a 1/52 probability that the flipped card on the left is A, and a 1/52 probability that the flipped card on the right is A. Again, there’s a 1/52 chance that any given card in the (unflipped) deck is A. To demonstrate this, let’s just go through the first three cards, left to right (which is arbitrary; could go right to left, top to bottom, etc.):
(1) P(A is first): A, ~A, ~A: 1/52 x 51/51 x 50/50 = 1/52
(2) P(A is second): ~A, A, ~A: 51/52 x 1/51 x 50/50 = 1/52
(3) P(A is third): ~A, ~A, A: 51/52 x 50/51 x 1/50 = 1/52
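The same pattern can be verified for every position in the deck, not just the first three; a short Python sketch (the helper name is just illustrative):

```python
from fractions import Fraction as F

def p_ace_at(k, n=52):
    """P(the Ace of Spades turns up k-th), revealing cards in any fixed order."""
    p = F(1)
    for i in range(k - 1):           # the first k-1 cards revealed are all ~A
        p *= F(n - 1 - i, n - i)
    return p * F(1, n - (k - 1))     # the k-th card revealed is A

assert all(p_ace_at(k) == F(1, 52) for k in range(1, 53))
assert sum(p_ace_at(k) for k in range(1, 53)) == 1  # A must turn up somewhere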
You can also imagine a scenario where all the cards are face up. You close your eyes with your back to the deck. You turn around and put your face close to the cards so that when you open your eyes, you’ll see one card. Etc. This should be just like the example when you flip cards separately. I’d apply the same idea to flipping two over together, but looking at them separately. Something more interesting happens (it seems to me) intuitively when you look at two cards simultaneously, however; perhaps because we lose any sense of sequence, and sequence (e.g., we’ve pulled ~A and now have 51 cards left to choose from) helps us conceptualize the probability more easily. But, when two are pulled at once, the above numbers don’t change in terms of the probability being 1/52 that any given card is A. (What if you look at all 52 simultaneously?) Etc. etc. Hmmmm… thinking out loud here, of course, and not sure I’m being clear with my thoughts.
I’ll just close by saying that I think the confusion here is that when you pull (or look) in sequence, you are removing cards and changing the denominator. But this doesn’t change the fact that, in the first place (before anything’s been flipped etc.; once flipped/seen, you know with certainty whether the card is A and update your evidence/probability accordingly etc.—this can happen with two cards simultaneously), each card has a probability of 1/52 of being A.
If you pull all 52 cards at once, there’s a 52/52 chance that one of them is A, but you wouldn’t try to then say that there’s a 1/52 chance that the first is A, a 1/51 chance that the second is A, a 1/50 chance that the third is A… down to a 1/1 chance that the last one is A, and then try adding all that up (it’ll come out to greater than 1, of course). Rather, there’s a 1/52 chance that any given card is A; once you’ve eliminated 50 cards, each of the remaining two cards has a 1/2 chance; and once you’ve eliminated 51 cards, there’s a 1/1 chance that the last remaining card is A, etc.
I might think about (especially the intuition/subjective-related elements of) this more and make a little post about it (once I’ve finished up a few other drafts I’m working on). If so, I’ll link it from here. Thanks for sharing!
I’ve read your explanation slowly and carefully and some elements of it are starting to sink in. I think in time I will get that ‘clang’ but for now it’s still a work in progress for me.
I’m in the process of moving today and it may be a few days to really invest my concentration on what you have written, but you have laid it out for me to understand. I’m determined to for this really ‘bugs’ me and I want to have it resolved in my mind.
I have copied your explanation and added it to the text file as well so it’s safe to delete it from here.
Thank you for your interest in this problem and for giving me a clear and thorough explanation; you understand my confusion from my own point of view and are wording your explanation(s) to specifically address it. And it will be resolved!
Thank you ever so much!!
Hello Mr. Wallace and thank you again for your reply. I know you must be correct in your explanation yet I’m lost as ever. It’s like a black hole for this problem in particular. Another series of questions, totally related…..this might help me I don’t know…..
1. You are going to be given two chances for the Ace of Spades by pulling a single card from the 52 card pile, looking at it and if it’s wrong you put it back in the deck and pull a single card again. Is this a 2/52 chance that you will be correct once?
2. You have one chance to pull two cards from the 52 at the same time. Isn’t this ALSO a 2/52 chance but ‘better’ than the first option?
I’m lost…lol!!
Hi Georg,
For 1, I would calculate the chance of pulling A at least once in operation 1 by subtracting from 1 the chance of getting no As in two pulls: 1 – (51/52)^2, or about .0380917. (You could pull A twice, presuming you shuffle the card back in; if you don’t shuffle and just avoid the first card, it’s the same as not replacing it.) We could also calculate the chance of getting exactly one A (in two pulls with replacement), which could come on the first or second pull, but not both: (1/52)(51/52) + (51/52)(1/52), or about .0377219; notice that this is what we get if we subtract the chance of getting A twice, i.e., (1/52)^2, from the above probability of getting A at least once.
Operation 2 gives a 2/52 chance that one card will be A, or about .0384615, which is a little higher than the probabilities for operation 1.
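Those three numbers can be computed exactly; a Python sketch using exact fractions:

```python
from fractions import Fraction as F

# Operation 1: two single-card draws, with replacement, from one 52-card deck.
at_least_once = 1 - F(51, 52) ** 2        # about .0380917
exactly_once  = 2 * F(1, 52) * F(51, 52)  # about .0377219

# Operation 2: two cards pulled together (no replacement).
together = F(2, 52)                       # about .0384615

assert together > at_least_once > exactly_once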
Hope this is clear and I didn’t make any errors (typing on my phone).
Hi Mr. Wallace,
I should have described my first option as using two decks and pulling one card out from each. But I understand by your explanations how the probabilities are calculated now.
You’ve been most helpful (and patient) with me, and again thank you for all your replies. It was always a pleasant and resourceful visit to your page and I look forward to sometime in the future to be back.
Warmest regards,
Georg
Looking at the card question again, I thought of a simpler way to explain it. Just in case it’s helpful to anyone:
We spread out a standard, shuffled 52-card deck with the cards face down. Two cards are pulled in succession, without replacement. [1] What is the probability one of those is the Ace of Spades? Suppose we shuffle and repeat, but this time we flip two cards simultaneously. [2] What is the probability one of those is the Ace of Spades? Questions [1] and [2] should have the same answer.
Let A = “Ace of Spades”:
[1] This is asking the probability that either the first or second card pulled is A. We find this by adding the chance of each together: P(first pull is A) + P(second pull is A) = (1/52) + (51/52)(1/51) = 2/52.
[2] This is asking the probability that one of two cards in a simultaneous pull is A. We’ve pulled 2 out of 52 cards, each of which has an equal chance of being A, so it’s also 2/52.
So, it all works out as expected.
Whew! I just re-found this thread after many months off. I want to address Dan’s Oct. 25 comment “Why do most people have the strong intuition that the standard MHP case presents a .5 chance of winning by switching?” I’ll ignore the role of epistemology, but touch on single/multiple trials.
(Seemingly irrelevant aside: I live in Maryland, which is south of the Mason Dixon Line. Some will say that the Line is defined to be the border between Maryland and Pennsylvania, but that is wrong. Maryland is defined to be the area between the Potomac river and the line surveyed by Charles Mason and Jeremiah Dixon in the 1760s, to settle a border dispute. My point is that a definition works in one direction only, and that direction can become confused.)
Probability is not an absolute property. MH may know where the car is, which means that TO HIM the “probability” is 100% for that door, and 0% for the others. We don’t, so TO US it is 33.3% for each. Probability is a measure of our UNcertainty about a variable situation, and that depends on knowledge.
Probability is the measure of our confidence in the outcomes of a single trial. If we can repeat that situation many times over, with the same uncertainty content each time, we expect the frequency of each outcome to match the probability. Some will say that the probability is defined to be that frequency, but that is a backwards definition.
As a simplification, there are two rules that govern the values: they must be >=0, and they must sum to 100%. If our state of uncertainty doesn’t distinguish between a set of N disjoint possibilities, none can have a higher, or lower, probability than another. So those rules say each has a probability of 1/N to us. For example, when any of 3 doors could have the car, each has a 1/3 probability.
This is such a fundamental principle that we may not realize we actually have to satisfy requirements to apply it. In the Monty Hall problem, if we say the two remaining doors each have a 1/2 probability, we are implying there is no difference in our knowledge about those doors. But there is: MH was not allowed to open the door we chose, but he was allowed to open the door we can switch to. A simple analysis says that he had to open (100%) the door he did if we chose a goat, but had a 50% chance of opening it if we chose the car. So these cases have different uncertainties. This is what makes it a 100:50 = 2:1 chance, or 2/3 probability, that our door has a goat.
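That 100:50 analysis is just Bayes’ rule; here is a Python sketch of the update, labeling the doors so that we picked door 1 and MH opened door 2:

```python
from fractions import Fraction as F

# We pick door 1; MH then opens door 2, revealing a goat.
prior = {c: F(1, 3) for c in (1, 2, 3)}   # where the car might be
likelihood = {1: F(1, 2),  # car behind our door: MH opens 2 or 3 at random
              2: F(0),     # MH never opens the car's door
              3: F(1)}     # car behind door 3: MH is forced to open door 2

joint = {c: prior[c] * likelihood[c] for c in prior}
total = sum(joint.values())
posterior = {c: joint[c] / total for c in joint}

assert posterior[1] == F(1, 3)  # staying wins 1/3 of the time
assert posterior[3] == F(2, 3)  # switching wins 2/3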
The reason we fail to see the difference in our knowledge about the two doors is that it requires us to consider a possibility we know didn’t happen: MH opening the door that he didn’t.
Hi JeffJo, thanks for this excellent comment.
The (intuitive, learned, or real) tension between frequency and confidence (or subjectivity) is, for me, one of the most fascinating aspects of probability.
I’ve more recently boiled this fascination down in another post: Three Strange Results in Probability: Cognitive States and the Principle of Indifference (Monty Hall, Flipping Coins, and Factory Boxes)
If you have time one of these days, I’d love your thoughts on that post. (The coin example is the one I find most interesting there, by the way. And the Factory Boxes example, I don’t find strange, exactly–not in itself; rather, I think I’m especially interested in the question of what it suggests about indifference, if anything, given the extent to which we rely on it… maybe it just means we’ll fail to make the best available decision sometimes, even with a reasonable assessment of evidence… but not as often as we would fail otherwise.)