Part I: The Monty Hall Problem
The Monty Hall Problem (explained below) is one of those math results that strikes most people as not making intuitive sense. The problem is often illuminated by restating it with 100 doors instead of 3 doors. This makes many people go, “Ah, now I get it,” and concede that their intuition must be wrong. Nevertheless, for many of them the 3-door scenario continues to be counterintuitive.
This leads many to ask, “Why don’t I understand the Monty Hall Problem?” Like this person at Quora: Why doesn’t the “Monty Hall problem” make sense to me? The usual response is to try to demonstrate to the person why the correct answer is correct—to try to get it to click. But, even when this works (sometimes it seems to), it doesn’t address why the problem’s solution feels so counterintuitive, nor why the standard wrong answer feels so right. I think I have an idea of what’s going on.
First, a summary of the problem.
Suppose you’re playing a game in which you are faced with three closed doors. The doors are numbered 1, 2, and 3. You are told by the game-master (who always tells the truth) that behind one of the doors there is a car, and that behind each of the other two doors there is a goat. You are not told which door has which item. (The game-master need not know which door has which item, by the way, though the game goes better if she does. See the End Note, however, for how the game-master’s knowing could affect a player’s credence in her guess.) The arrangement of goats and car will not be changed throughout the course of the game.
You are now given the opportunity to guess which door conceals the car. If you guess correctly, you win the car. You pick a door, let’s say door #1. The game-master then opens one of the remaining closed doors, let’s say door #2. Doors #1 and #3 remain shut.
You are now given the opportunity to stay with door #1, or switch to door #3. Once you switch, you will not be able to switch again. The question now is: Are you better off switching?
Most people say no. They believe there is a 1/2 (or at least equal, i.e., 1/3 and 1/3) chance of winning if they stay or switch, so it makes no difference. But this is wrong—their intuition has misled them. There is actually a 1/3 chance of winning if you stay, and a 2/3 chance of winning if you switch.
Here’s a basic explanation of why. (Below, I’ll distinguish between the probability of correct guesses on the one hand, and that of what’s actually behind the doors on the other hand; but this will do for now.)
When you initially guessed door #1, there was a 2/3 chance that you chose a goat, and a 1/3 chance that you chose the car. This means that there is a 2/3 chance that the car is behind one of the other two doors (i.e., doors #2 and #3). That does not change once door #2 is opened to reveal a goat. That is, there is still a 2/3 chance that you chose a goat, and a 1/3 chance that you chose the car. And there is still a 2/3 chance that, had you been able to choose both door #2 and door #3 (at the same time), you would have selected the door concealing the car. Now that door #2 has been eliminated, this just leaves door #3 as the viable car option given doors #2 and #3; so, there’s a 2/3 chance that the car is behind door #3.
The usual way to get people to accept this result is to restate the problem with 100 (or 1,000 or 1,000,000) doors. Check out this Numberphile video to get a visual of this; it’ll also illustrate the above explanation:1
It’s also worth noting that real-world executions of this game demonstrate that switching does indeed win roughly 2/3 of the time. But showing that the math is correct doesn’t explain the robust counterintuitiveness of the problem—a question that seemingly resides in the intersection of psychology and mathematics.
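You can run such an execution yourself. Here is a minimal Monte Carlo sketch (not from the original post) of the game as stated above: the host always opens a goat door the player didn’t pick, and we tally wins for the stay and switch strategies.

```python
import random

def play(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    guess = random.choice(doors)
    # The host opens a door that hides a goat and was not picked by the player.
    opened = random.choice([d for d in doors if d != guess and d != car])
    if switch:
        # Switch to the one remaining closed door.
        guess = next(d for d in doors if d != guess and d != opened)
    return guess == car

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")  # ≈ 0.333 and ≈ 0.667
```

Over many trials, staying wins about a third of the time and switching about two thirds, matching the analysis above.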
One way I’ve tried to build an intuition for it is to put it in modal terms. (This is how I approached the problem when I first encountered it.) In the worlds where you have three doors to choose from, you’ll make a worse choice than you would in those worlds where you have only two doors to choose from. That is, you’re more likely to get it wrong when there are more doors. So, when one door is eliminated, you’re better off switching because you probably guessed wrong at the start.
I like this attempt at intuition-building, but it doesn’t dig deep enough into where we’re going wrong.
Part II: What We’re Getting Wrong about the Monty Hall Problem
Consider a variation on the game. After door #2 is opened to reveal a goat, that door is left open (and the goat does not move). Doors #1 and #3 remain closed. Whatever is behind them is now shuffled vigorously and randomly (this happens concealed from your view).
You are now given the opportunity to switch. This time, it really is 1/2 to switch or stay. It makes no difference.
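The shuffle variation can be simulated the same way (again, a sketch of my own, not from the original post): after the host opens a goat door, the contents of the two closed doors are randomly reshuffled before the player decides.

```python
import random

def play_shuffled(switch: bool) -> bool:
    """One round of the variation: the two closed doors are reshuffled before the final choice."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    guess = random.choice(doors)
    opened = random.choice([d for d in doors if d != guess and d != car])
    other = next(d for d in doors if d != guess and d != opened)
    # The car was behind one of the two closed doors; after the shuffle it is
    # equally likely to be behind either of them.
    car = random.choice([guess, other])
    if switch:
        guess = other
    return guess == car

trials = 100_000
stay = sum(play_shuffled(False) for _ in range(trials)) / trials
switch = sum(play_shuffled(True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {switch:.3f}")  # both ≈ 0.500
```

Here the shuffle destroys the information the host’s choice carried, so staying and switching each win about half the time.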
Imagine another variation. There is a lotto machine with three balls flying around in it. On two are the letters “G” (for “goat”), and on one there is the letter “C” for “car.” Your task is to guess which will (randomly) come out. You have a 1/3 chance of guessing C correctly. A G-ball comes out. You are asked to guess again. This time there is a 1/2 chance of guessing correctly, whether you guess C or G.
In both of these scenarios, our intuitions are correctly aligned with the actual probability. What is different about the Monty Hall Problem? In that problem, the events are already fixed. That is, “Where is the car?” is already a settled question. Let’s say that door #1 conceals the car. When you guess door #1, there is a 1/3 chance that your guess is correct, but there is a probability of 1—that’s a 100% chance—that the car is behind that door. (Imagine the game-master’s assistants behind the door: they have a probability of 1 of choosing which door has the car; there’s no guesswork involved.)
In other words, what is behind each door is already settled. There’s a probability of 1 that: the car is behind door #1, a particular goat is behind door #2, and the other goat is behind door #3. When you assign a probability, however, you don’t know any of this. So what you must assign a probability to is the likelihood of your guess being correct. One out of three times, your guess will reveal the car. And so on. This is the best you can do, given that you don’t know what’s actually behind the doors.
Think of it this way. Imagine I flip a fair coin, but I keep the result concealed in my hand. I look at the coin and I see it landed Heads. You must now guess what the likelihood is that Heads is facing upwards. The answer is already settled. I can think to myself, while you’re deliberating, that there is a probability of 1 that Heads is facing upwards. You, on the other hand, should assign 1/2 to that outcome. What you are really evaluating, however, is the likelihood of your guess being correct. Because the outcome is already settled. It’s no longer left to chance, no longer random, etc.
You might think the coin example is limited, given that this may also be what’s going on when we have not yet flipped the coin: There’s some already settled fact of the matter about how it will land, and we are really evaluating the likelihood of a guess (or more rigorous prediction) of Heads or Tails being correct. Fair enough. This very well may be the best way to view probability in general: God is the game-master who chose the best of all possible worlds, and knows what will happen just as if it already has happened. We mortals are just guessing (some with more rigor and sensitivity than others).
Whether that is true, or whether randomness genuinely ensures that future events may go either way, the fact remains that in the first formulation I gave of the Monty Hall Problem, the arrangement of the car and goats is settled at the outset, and that doesn’t change after one or all of the doors has been opened. Indeed, even if the game-master opens all three doors, there is still a 1/3 probability that your first guess was correct, even if that door reveals a goat. Just as once I’ve revealed the concealed coin, the probability that you guessed correctly is 1/2, whether or not your guess was correct.
Think of it another way. You would have been wrong to assign a probability of 1 to its being Heads, even if Heads turns out to be true (that is, even if there was a probability of 1 that Heads is what’s concealed in the hand); this is because, over many trials in similar circumstances, your guess will be correct only about 50% of the time.
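That frequency claim is easy to check with a quick sketch (mine, not the original author’s): in each trial the coin is already flipped and concealed before you guess, yet the guess is still correct only about half the time.

```python
import random

trials = 100_000
correct = 0
for _ in range(trials):
    outcome = random.choice(["H", "T"])  # the coin is already flipped and concealed
    guess = "H"                          # you always guess Heads
    correct += guess == outcome
rate = correct / trials
print(f"guess correct: {rate:.3f}")  # ≈ 0.500
```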
In summary: In the Monty Hall Problem, you should be assigning probability to the likelihood that your guess is correct, not to the likelihood that there actually is this or that item behind the door, and especially not to the likelihood of a future event occurring (as the event has already occurred and is thus already settled). When the game-master opens the door, this is not like flipping a coin; it’s more like opening a hand to reveal an already-flipped coin.
In other terms, the future event in question—the outcome to which we should be assigning probability—is not the spontaneous (or random) output of a car rather than of a goat, but instead the correctness of our guess. (In other situations, it may be the correctness of a belief or of a rigorously computed prediction.)
This observation may not help us adjust our wrong intuitions about the problem. But I do think it explains—or is on the right track for explaining—what we’re doing wrong here and why the correct solution feels so wrong (in particular, I assume, to people for whom statistics and probability are not regular activities).
In the variations I gave of the Monty Hall Problem, our natural intuitions align with the probability rules. But this may just be a matter of luck: the proceedings happen to align with our intuited expectations, rather than our intuitions adjusting to a given situation. Presumably there’s an evolutionary advantage to this. Maybe one would need to examine that in order to really know why the Monty Hall Problem embarrasses our intuitions. I’ll leave that and many other deep follow-up questions—e.g., What would embodied mind/cognition researchers (such as George Lakoff and Rafael Núñez) have to say about this? How does the Monty Hall Problem relate to other counterintuitive probability results (from easy ones, like the Gambler’s Fallacy, to the maybe unresolvable Sleeping Beauty Problem)? In what ways is “Why don’t I get this?” a psychological question, and in what ways is it a mathematical one?—for later.
(Have I answered the question suggested by this writing’s title? Maybe I was too enthusiastic with the title, but hey: the layers of Why? go as far down as you like; I think I’ve addressed the first couple of layers here.)
The Monty Hall Problem, like so much in probability, provides a theoretical model that cleans up our complex, messy world in order to provide us with a heuristic (or rule of thumb). The truth is, there are many ways to bring the model into question. Consider this. A highly determined and observant viewer, Wanda, watches hundreds of episodes of Let’s Make a Deal (the show from which the problem originates, with host Monty Hall as game-master). Wanda has tracked, with highly sophisticated technology, Monty Hall’s behavior during the door game. (Note that it’s known that Hall knows what’s behind each door, and he always offers the opportunity to switch; if he didn’t, this could change how we guess as well.)
Wanda has noted that, in greater than 50% of cases when contestants should NOT have switched, some combination of the following occurs: Hall’s vocal inflections grow slightly longer and rise in pitch; his pupils dilate; he raises his eyebrows more than three times within a 30-second period. These tells also sometimes occur when the contestant should switch (e.g., when the contestant is female and wearing a red skirt).
When Wanda becomes a contestant, she tries to minimize the conditions that lead to false tells (e.g., she wears a brown skirt, a color to which Hall seems to be neutral). Should Wanda switch? It depends. The unconditioned probability that she chose a goat door is still 2/3. But she’ll need to update the probability she assigns conditional on Hall’s behavior. Any other contestant should switch. But Wanda has more information.
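Wanda’s update is just Bayes’ rule. Here is a sketch with hypothetical numbers (the source gives none): suppose her data show Hall’s tell appears 60% of the time when the contestant should NOT switch, and only 10% of the time when the contestant should.

```python
# All likelihoods below are hypothetical, purely for illustration.
p_stay = 1 / 3              # prior: her first guess hides the car 1/3 of the time
p_switch = 2 / 3            # prior: the other closed door hides the car 2/3 of the time
p_tell_given_stay = 0.6     # assumed rate of the tell when staying would win
p_tell_given_switch = 0.1   # assumed rate of the tell when switching would win

# Bayes' rule: P(staying wins | tell observed)
numer = p_stay * p_tell_given_stay
denom = numer + p_switch * p_tell_given_switch
posterior_stay = numer / denom
print(f"P(staying wins | tell observed) = {posterior_stay:.2f}")  # 0.75
```

Under these assumed numbers, seeing the tell pushes Wanda’s credence that staying wins from 1/3 up to 0.75, so she should stay—while a contestant without her data should still switch.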
Finally, it’s interesting to consider an alternative case in which the winning door is kept from Monty Hall—for example, he is told which door to open through an earpiece that emits beeps: low for D1, midrange for D2, high for D3. And yet, Wanda might (though it’s far less likely) still find significant correlates between Hall’s behavior and winning choices; e.g., if his behavior is somehow (in)directly influenced by the behavior of those who do know. I can’t think of a plausible example, however, because those people would not know ahead of time whether a contestant would be best off switching.
(The point of the End Note example is to motivate the plausibility of situations in which one’s rational subjective probability that staying wins is greater than 1/3, whatever the reasonably assumed unconditioned probability may be.)