Why We Get the Monty Hall Problem Wrong(?)


I’d rather have the goat.

Part I: The Monty Hall Problem

The Monty Hall Problem (explained below) is one of those math results that strikes most people as not making intuitive sense. The problem is often illuminated by restating it with 100 doors instead of 3 doors. This makes many people go, “Ah, now I get it,” and concede that their intuition must be wrong. Nevertheless, for many of them the 3-door scenario continues to be counterintuitive.

This leads many to ask, “Why don’t I understand the Monty Hall Problem?” Like this person at Quora: Why doesn’t the “Monty Hall problem” make sense to me? The usual response is to try to demonstrate to the person why the correct answer is correct—to try to get it to click. But, even when this works (sometimes it seems to), it doesn’t address why the problem’s solution feels so counterintuitive, nor why the standard wrong answer feels so right. I think I have an idea of what’s going on.

First, a summary of the problem.

Suppose you’re playing a game in which you are faced with three closed doors. The doors are numbered 1, 2, and 3. You are told by the game-master (who does not lie and only speaks the truth) that behind one of the doors there is a car, and that behind each of the other two doors there is a goat. You are not told which door has which item. (The game-master need not know which door has which item, by the way, though the game goes better if she does. See the End Note, however, for how the game-master’s knowing could affect a player’s credence in her guess.) The arrangement of goats and car will not be changed throughout the course of the game.

You are now given the opportunity to guess which door conceals the car. If you guess correctly, you win the car. You pick a door, let’s say door #1. The game-master then opens one of the remaining closed doors, let’s say door #2. Doors #1 and #3 remain shut.

You are now given the opportunity to stay with door #1, or switch to door #3. Once you switch, you will not be able to switch again. The question now is: Are you better off switching?

Most people say no. They believe there is a 1/2 (or at least equal, i.e., 1/3 and 1/3) chance of winning if they stay or switch, so it makes no difference. But this is wrong—their intuition has misled them. There is actually a 1/3 chance of winning if you stay, and a 2/3 chance of winning if you switch.

Here’s a basic explanation of why. (Below, I’ll distinguish between the probability of correct guesses on the one hand and the probability of what’s actually behind the doors on the other; but this will do for now.)

When you initially guessed door #1, there was a 2/3 chance that you chose a goat, and a 1/3 chance that you chose the car. This means that there is a 2/3 chance that the car is behind one of the other two doors (i.e., doors #2 and #3). That does not change once door #2 is opened to reveal a goat. That is, there is still a 2/3 chance that you chose a goat, and a 1/3 chance that you chose the car. And there is still a 2/3 chance that, had you been able to choose both door #2 and door #3 (at the same time), you would have selected the door concealing the car. Now that door #2 has been eliminated, this just leaves door #3 as the viable car option given doors #2 and #3; so, there’s a 2/3 chance that the car is behind door #3.

The usual way to get people to accept this result is to restate the problem with 100 (or 1,000 or 1,000,000) doors. Check out this Numberphile video to get a visual of this; it’ll also illustrate the above explanation.[1]

It’s also worth noting that real-world executions of this game demonstrate that switching does indeed win roughly 2/3 of the time. But showing that the math is correct doesn’t explain the robust counterintuitiveness of the problem—a question that seemingly resides in the intersection of psychology and mathematics.
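
Those real-world executions are easy to reproduce with a quick Monte Carlo simulation. Here is a minimal sketch in Python (the door labels, trial count, and function name are my own choices for illustration, not anything from the show or the original problem statement):

```python
import random

def play(switch: bool) -> bool:
    """Play one round of the standard game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The game-master opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay wins:   {stay:.3f}")   # ~0.333
print(f"switch wins: {swap:.3f}")   # ~0.667
```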

One way I’ve tried to build an intuition for it is to put it in modal terms. (This is how I approached the problem when I first encountered it.) In the worlds where you have three doors to choose from, you’ll make a worse choice than you would in those worlds where you have only two doors to choose from. That is, you’re more likely to get it wrong when there are more doors. So, when one door is eliminated, you’re better off switching because you probably guessed wrong at the start.

I like this attempt at intuition-building, but it doesn’t dig deep enough into where we’re going wrong.

Part II: What We’re Getting Wrong about the Monty Hall Problem

Consider a variation on the game. After door #2 is opened to reveal a goat, that door is left open (and the goat does not move). Doors #1 and #3 remain closed. Whatever is behind them is now shuffled vigorously and randomly (this happens concealed from your view).

You are now given the opportunity to switch. This time, it really is 1/2 to switch or stay. It makes no difference.

Imagine another variation. There is a lotto machine with three balls flying around in it. Two of the balls bear the letter “G” (for “goat”), and one bears the letter “C” (for “car”). Your task is to guess which ball will (randomly) come out. You have a 1/3 chance of correctly guessing C. A G-ball comes out. You are asked to guess again. This time there is a 1/2 chance of guessing correctly, whether you guess C or G.
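
To check that the shuffled-doors variant really does come out at 1/2 either way, here is a minimal sketch in Python (my own toy setup; it assumes the vigorous shuffle leaves the car equally likely to be behind either closed door):

```python
import random

def play_shuffled(switch: bool) -> bool:
    """Variant: after the goat reveal, the contents of the two closed doors are reshuffled."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    opened = random.choice([d for d in doors if d != pick and d != car])  # a goat is revealed
    closed = [d for d in doors if d != opened]
    car = random.choice(closed)  # the concealed shuffle of the two closed doors
    if switch:
        pick = next(d for d in closed if d != pick)
    return pick == car

trials = 100_000
print(sum(play_shuffled(False) for _ in range(trials)) / trials)  # ~0.5 (stay)
print(sum(play_shuffled(True) for _ in range(trials)) / trials)   # ~0.5 (switch)
```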

In both of these scenarios, our intuitions are correctly aligned with the actual probability. What is different about the Monty Hall Problem? In that problem, the events are already fixed. That is, “Where is the car?” is already a settled question. Let’s say that door #1 conceals the car. When you guess door #1, there is a 1/3 chance that your guess is correct, but there is a probability of 1—that’s a 100% chance—that the car is behind that door. (Imagine the game-master’s assistants behind the door: they have a probability of 1 of choosing which door has the car; there’s no guesswork involved.)

In other words, what is behind each door is already settled. There’s a probability of 1 that: the car is behind door #1, a particular goat is behind door #2, and the other goat is behind door #3. When you assign a probability, however, you don’t know any of this. So what you must assign a probability to is the likelihood of your guess being correct. One out of three times, your guess will reveal the car. And so on. This is the best you can do, given that you don’t know what’s actually behind the doors.

Think of it this way. Imagine I flip a fair coin, but I keep the result concealed in my hand. I look at the coin and I see it landed Heads. You must now guess what the likelihood is that Heads is facing upwards. The answer is already settled. I can think to myself, while you’re deliberating, that there is a probability of 1 that Heads is facing upwards. You, on the other hand, should assign 1/2 to that outcome. What you are really evaluating, however, is the likelihood of your guess being correct. Because the outcome is already settled. It’s no longer left to chance, no longer random, etc.
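
A tiny simulation (my own sketch, with made-up variable names) makes the point concrete: in every trial the coin’s face is settled before you guess, yet your guesses come out correct only about half the time.

```python
import random

trials = 100_000
correct = 0
for _ in range(trials):
    settled_face = random.choice(["H", "T"])  # already flipped and concealed; I know the answer
    your_guess = random.choice(["H", "T"])    # you have no information about it
    correct += (your_guess == settled_face)
print(correct / trials)  # ~0.5 -- the chance your guess is right, not that the coin "will" land Heads
```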

You might think the coin example is limited, given that this may also be what’s going on when we have not yet flipped the coin: There’s some already settled fact of the matter about how it will land, and we are really evaluating the likelihood of a guess (or more rigorous prediction) of Heads or Tails being correct. Fair enough. This very well may be the best way to view probability in general: God is the game-master who chose the best of all possible worlds, and knows what will happen just as if it already has happened. We mortals are just guessing (some with more rigor and sensitivity than others).

Whether that is true, or whether randomness genuinely ensures that future events may go either way, the fact remains that in the first formulation I gave of the Monty Hall Problem, the arrangement of the car and goats is settled at the outset, and that doesn’t change after one or all of the doors have been opened. Indeed, even if the game-master opens all three doors, there is still a 1/3 probability that your first guess was correct, even if that door reveals a goat. Just as once I’ve revealed the concealed coin, the probability that you guessed correctly is 1/2, whether or not your guess was correct.

Think of it another way. You would have been wrong to assign a probability of 1 to its being Heads, even if Heads turns out to be true (which is to say, even though there was a probability of 1 that Heads is what was concealed in the hand); this is because, over many trials under similar circumstances, a guess like yours will be correct only about 50% of the time.

In summary: In the Monty Hall Problem, you should be assigning probability to the likelihood that your guess is correct, not to the likelihood that there actually is this or that item behind the door, and especially not to the likelihood of a future event occurring (as the event has already occurred and is thus already settled). When the game-master opens the door, this is not like flipping a coin; it’s more like opening a hand to reveal an already-flipped coin.

In other terms, the future event in question—the outcome to which we should be assigning probability—is not the spontaneous (or random) output of a car rather than of a goat, but instead the correctness of our guess. (In other situations, it may be the correctness of a belief or of a rigorously computed prediction.)

This observation may not help us adjust our wrong intuitions about the problem. But I do think it explains—or is on the right track for explaining—what we’re doing wrong here and why the correct solution feels so wrong (in particular, I assume, to people for whom statistics and probability are not regular activities).

In the variations I gave of the Monty Hall Problem, our natural intuitions align with the probability rules. But this may just be a matter of luck: the proceedings happen to align with our intuited expectations, rather than our intuitions adjusting to a given situation. Presumably there’s an evolutionary advantage to this. Maybe one would need to examine that in order to really know why the Monty Hall Problem embarrasses our intuitions. I’ll leave that and many other deep follow-up questions—e.g., What would embodied mind/cognition researchers (such as George Lakoff and Rafael Núñez) have to say about this? How does the Monty Hall Problem relate to other counterintuitive probability results (from easy ones, like the Gambler’s Fallacy, to the maybe unresolvable Sleeping Beauty Problem)? In what ways is “Why don’t I get this?” a psychological question, and in what ways is it a mathematical one?—for later.

(Have I answered the question suggested by this writing’s title? Maybe I was too enthusiastic with the title, but hey: the layers of Why? go as far down as you like; I think I’ve addressed the first couple of layers here.)

END NOTE:
The Monty Hall Problem, like so much in probability, provides a theoretical model that cleans up our complex, messy world in order to provide us with a heuristic (or rule of thumb). The truth is, there are many ways to bring the model into question. Consider this. A highly determined and observant viewer, Wanda, watches hundreds of episodes of Let’s Make a Deal (the show from which the problem originates, with host Monty Hall as game-master). Wanda has tracked, with highly sophisticated technology, Monty Hall’s behavior during the door game. (Note that it’s known that Hall knows what’s behind each door, and he always offers the opportunity to switch; if he didn’t, this could change how we guess as well.)

Wanda has noted that, in greater than 50% of cases when contestants should NOT have switched, some combination of the following occurs: Hall’s vocal inflections become slightly longer and go higher in pitch; his pupils dilate; at least once, he raises his eyebrows more than three times in a 30-second period. These things also sometimes occur when the contestant should switch (e.g., when the contestant is female and wearing a red skirt).

When Wanda becomes a contestant, she tries to minimize the conditions that lead to false tells (e.g., she wears a brown skirt, a color to which Hall seems to be neutral). Should Wanda switch? It depends. The unconditioned probability that she chose a goat door is still 2/3. But she’ll need to update the probability she assigns, conditional on Hall’s behavior. Any other contestant should switch. But Wanda has more information.
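
To make “update the probability she assigns” concrete, here is a minimal Bayes-rule sketch in Python; the tell reliabilities are numbers I’ve invented purely for illustration, not anything Wanda could actually read off the show:

```python
# Prior that Wanda's chosen door hides the car (i.e., that she should NOT switch):
p_car = 1 / 3

# Hypothetical estimates of how often the "don't switch" tell appears:
p_tell_given_car = 0.80    # invented: tell shows when her door has the car
p_tell_given_goat = 0.20   # invented: tell shows as a false positive

# Posterior that her door has the car, given that she observes the tell:
posterior = (p_tell_given_car * p_car) / (
    p_tell_given_car * p_car + p_tell_given_goat * (1 - p_car)
)
print(posterior)  # 0.666..., so with these made-up numbers staying is now the better bet
```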

Finally, it’s interesting to consider an alternative case in which the winning door is kept from Monty Hall—for example, he is told which door to open through an earpiece that emits beeps: low for D1, midrange for D2, high for D3. And yet, Wanda might (though it’s far less likely) still find significant correlates between Hall’s behavior and winning choices; e.g., if his behavior is somehow (in)directly influenced by the behavior of those who do know. I can’t think of a plausible example, however, because those people would not know ahead of time whether a contestant would be best off switching.

(The point of the End Note example is to motivate the plausibility of there being situations where one’s rational subjective probability that not switching wins is greater than 1/3, whatever the reasonably assumed unconditioned probability may be.)

Footnotes:

  1. And here we clearly see why the game goes better if the game-master knows where the car is. It would otherwise require a lot of luck for the game-master to open 98 doors without revealing the car. But even if that did happen, it would not change the probability from the player’s perspective.

Dan Jacob Wallace

3 Comments

  1. I have to object to two parts of your problem description. I noticed that you were very careful to make explicit certain things that are usually left implied. Like “The game-master does not lie” and “You are not told which door has which item.” And that’s where the objections come from.

    In the description, you said “The game-master need not know which door has which item.” You also never said that he has to *INTENTIONALLY* open a door with a goat.

    But he needs to know where the car is, and intentionally avoid it, for the problem to work as described. If either is not true, your breakdown of the cases is wrong. There would be a 1/3 chance he reveals the car. The remaining 2/3 of cases where he doesn’t divide into the 1/3 where your original choice has the car, and the 1/3 where the remaining door does.

    In fact, the MHP is one of several well-known problems that are equivalent to one first introduced by Joseph Bertrand in 1889, called the Bertrand Box Problem. Today, most will call it the Bertrand Box “Paradox” because people quibble over whether 1/2, or 2/3, is the correct answer. But the paradox Bertrand referred to was how we can tell that 1/2 can’t be the right answer. I’ll illustrate with your version of the MHP.

    After you pick door #1, Monty Hall has two doors he can open. If he were to open #2, most people’s intuitions say the probability #1 has the car changes to 1/2. But if that were true, it would also change to 1/2 if he were to open #3. And if it would change regardless of which door opens, HE DOESN’T HAVE TO OPEN ONE! All you need to know is that he can, which you do know, and then you would conclude that door #1 has a 1/2 chance. But that’s impossible; it must be 1/3.

    We get the MHP wrong – or any of the problems in the Bertrand Box family – when we assume that what we have observed was predetermined. That was Bertrand’s point. The information we get when Monty Hall opens door #2 is not just that door #2 has a goat, it is that Monty Hall chose it because it had a goat. He could have chosen #3 if both #2 and #3 had goats, so we err when we dismiss all of those cases. We should eliminate all of the cases where the car was behind #3 (and #2 has a goat), but only half (see more below) of the ones where it is behind #1. This leaves twice as many where switching wins.

    What Wanda’s research does is let her change that number “1/2” to something else. If she can estimate that Monty would open #2 with probability Q when the car is behind #1, the chances that switching wins are 1/(1+Q), and that staying wins are Q/(1+Q). So switching is always better, just not always twice as good.
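
    For concreteness, that formula falls out of a conditional-probability calculation along these lines (a sketch, assuming the contestant picked #1, writing Q for the chance MH opens #2 when the car is behind #1, and noting that he must open #2 when the car is behind #3):

    \[
    P(\text{car behind \#1} \mid \text{MH opens \#2})
      = \frac{\tfrac{1}{3}\,Q}{\tfrac{1}{3}\,Q + \tfrac{1}{3}\cdot 1}
      = \frac{Q}{1+Q},
    \qquad
    P(\text{car behind \#3} \mid \text{MH opens \#2}) = \frac{1}{1+Q}.
    \]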

    Another example: You know that Mr. Smith has two children, and that at least one is a boy. What are the chances he has a boy and a girl? (This is a variation of what was first asked by Martin Gardner in the May 1959 Scientific American.)

    Incorrect solution #1: The gender of his two children must be independent, so the “other” child is a girl 50% of the time. This is wrong because no specific child was identified as “the boy,” so no child can be identified as “the other.”

    Incorrect solution #2: There are four equally-likely combinations of two children when listed by age, BB, BG, GB, and GG. Since GG is eliminated, and two of the three that remain have a boy and a girl, the probability is 2/3. This is incorrect for the exact same reason that 1/2 is wrong for the MHP: we are assuming that the observation of a boy was predetermined. (This was pointed out by Martin Gardner in October, 1959. Nobody ever remembers that.)

    Correct solution: Since it is possible to observe that a BG or GB family has at least one girl – in fact, it has to be just as likely as observing that there is at least one boy – we can only count half of those cases. The answer is 1/2, but not for the reasons in solution #1. (Or, apply the paradox: learning one gender in a way that is independent of set can’t change the probability that there is a boy and a girl.)
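
    To spell that counting out (a sketch; the four terms run over BB, BG, GB, GG, each with prior 1/4, and in the mixed families a boy is observed half the time, per the assumption above):

    \[
    P(\text{a boy is observed}) = \tfrac{1}{4}\cdot 1 + \tfrac{1}{4}\cdot\tfrac{1}{2} + \tfrac{1}{4}\cdot\tfrac{1}{2} + \tfrac{1}{4}\cdot 0 = \tfrac{1}{2},
    \qquad
    P(\text{BG or GB} \mid \text{a boy is observed}) = \frac{\tfrac{1}{4}\cdot\tfrac{1}{2} + \tfrac{1}{4}\cdot\tfrac{1}{2}}{\tfrac{1}{2}} = \tfrac{1}{2}.
    \]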

  2. Hi JeffJo,

    Thanks for your interesting and informative comments.

    For better or worse, I remain unconvinced that MH needs to know where the car is or intend to open a door with a goat. I’ve often encountered that idea as an explanation for why the probability of having chosen the car does not change to 1/2 once a goat is revealed. I’ve also personally found the idea effective in helping people get their intuitions aligned with why it’s better to switch doors; particularly in scenarios where, out of 100 or 1000 doors, all that’s left are two closed doors: the one you chose, and the one MH has intentionally left closed. But ultimately I think this misses the point.

    The point I meant to make is: Provided MH’s behavior is identical when he does or does not know where the car is, the (typical) contestant has the exact same data to work with, and thus should assign the same probability in either case. (I also point out that the game goes more smoothly when MH knows where the car is; but I’d like to be as clear as possible about when certain epistemic factors do or don’t affect probability outcomes, so I leave this bit of knowledge out of the account.)

    For example, imagine you’re playing the game. You choose door #1. Earlier in the day, MH knew which door was going to hide the car. Since then, he has suffered a mild (and he hopes anomalous) bout of forgetfulness, thus spacing on which door conceals the car. To save face, he puts on a convincing show of confidence while secretly hoping he’ll reveal a goat rather than the car. He opens door #2, revealing a goat. At this point, you are in the same position you would have been in had MH known all along which door to open. I will take this even further and say that MH could have openly flipped a coin to choose between the two remaining doors; provided a goat is revealed, you are in the same position (probability-wise) as when MH knowingly opens that same door.

    Whatever MH’s (presumed) knowledge, it remains that there’s a 1/3 chance you initially chose the door with a car, and a 2/3 chance one of the other doors conceals the car. Once you see that one of those other doors has a goat, you update to a 2/3 chance that the remaining door has the car, so it’s best to switch.

    I can, of course, imagine scenarios where assessment of MH’s knowledge matters. That was the point of the Wanda example. There, I simply meant to illustrate that there could be situations where someone—not a typical contestant—is getting extra information from MH’s gameplay behavior. If Wanda were justified in being .9 confident that she was getting an unconscious “don’t switch” tell from MH, it would be better for her not to switch. I wouldn’t recognize the tell, however, so I would in theory be better off switching (though I’d likely lose the game in this case).

    A simpler example where MH’s knowledge matters: There are two doors, one with a goat and one with a car. In this game, the doors are not opened until the end. MH announces that he chooses door #1. You believe with certainty that MH knows what’s behind each door, and that he always intentionally picks the goat. So, you assign a probability of 1 (rather than .5) to there being a car behind door #2. Interestingly, MH still might not actually know here (e.g., he could get Gettiered), yet it could all turn out in the end as you expected.

  3. What the contestant must assume are the rules of the game, whether or not they are the actual rules, are (1) MH will open a door and offer a switch, (2) He cannot open the contestant’s door, and (3) He cannot open the car’s door. MH cannot follow rule #3 unless he knows where the car is (or you create some convoluted mechanism that amounts to the same thing).

    If the contestant does not assume rule #3, then she must assume that (A) 1/3 of the time, MH will reveal the car, (B) 1/3 of the time the contestant picked the car, and (C) 1/3 of the time the car is behind the door that the contestant can switch to. To solve this, case (A) is removed because MH did not reveal it in the game being played, and switching wins with probability 1/2.
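
    A quick simulation of those assumed rules bears this out (a sketch in Python; rounds in case (A), where the randomly opened door reveals the car, are discarded):

    ```python
    import random

    def ignorant_round():
        """MH opens a random non-chosen door; return None if he reveals the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        opened = random.choice([d for d in doors if d != pick])  # MH may reveal the car
        if opened == car:
            return None  # case (A): discarded, as in the game actually being played
        return pick == car  # True means staying would win

    results = [ignorant_round() for _ in range(100_000)]
    kept = [r for r in results if r is not None]
    print(sum(kept) / len(kept))  # ~0.5: staying and switching each win about half the time
    ```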
