NOTE: I’ve recently posted a (I hope) clearer and more carefully thought out update of the below thoughts; find that here: “Monty Hall Problem and Variations: Intuitive Solutions.” I’ve disabled comments on this post.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/- Ω -\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
Part I: The Monty Hall Problem
The Monty Hall Problem (explained below) strikes most people as counterintuitive. The problem is often illuminated by restating it with 100 doors instead of 3 doors. This makes many people go, “Ah, now I get it,” and concede that their intuition must be wrong. Nevertheless, for many the 3-door scenario continues to be counterintuitive.
This leads many to ask, “Why don’t I understand the Monty Hall Problem?” Like this person at Quora: Why doesn’t the “Monty Hall problem” make sense to me? The usual response is to try to demonstrate to the person why the correct answer is correct—to try to get it to click. But, even when this works (sometimes it seems to), it doesn’t address why the problem’s solution feels so counterintuitive, nor why the standard wrong answer feels so right. I think I have an idea of what’s going on.
First, a summary of the problem.
Suppose you’re playing a game in which you’re faced with three closed doors, numbered 1, 2, and 3. You’re told by the game-master (who does not lie and only speaks the truth) that one of the doors conceals a car, and the other two doors each conceal a goat. You’re not told which door conceals which item. (The game-master need not know which door conceals which item, by the way, though the game goes more smoothly if she does. To be clear, though, it must be understood that the game-master will reveal a goat in all instances of the game.* See the End Note, however, for how the game-master’s knowing could affect how a player should guess.) The arrangement of goats and car will not be changed throughout the course of the game.
(*NOTE: See the Addendum at the end of this post for some comments about the significance of the game-master knowing where the car is, which I wrote following a discussion about that topic in the comments section. It also features yet further explanations of the basic Monty Hall problem. At some point, I’ll thoroughly revise this post to make it clearer and to give better explanations and diagrams etc. of the problem. I also think I now have a better sense of why people struggle with this, and how to deliver an intuitively satisfying explanation, as I’ve spent a lot more time thinking about probabilistic intuitions.)
You are now given the opportunity to guess which door conceals the car. If you guess correctly, you win the car. You pick a door; let’s say door #1. The game-master then opens one of the remaining closed doors; let’s say door #2. Doors #1 and #3 remain shut.
You’re now given the opportunity to stay with door #1, or switch to door #3. Once you switch, you will not be able to switch again. The question now is: Are you better off switching?
Most people say no. They believe there’s a 1/2 chance of winning if they stay or switch, so it makes no difference. But this is wrong. There’s actually a 1/3 chance of winning if you stay and a 2/3 chance of winning if you switch.
Here’s a basic explanation of why. (Below, I’ll distinguish between the probability of correct guesses on the one hand, and of what’s actually behind the doors on the other hand; but this will do for now.)
When you initially guessed door #1, there was a 2/3 chance that you chose a goat and a 1/3 chance that you chose the car. This means that there is a 2/3 chance that the car is behind one of the other two doors. That doesn’t change once door #2 is opened to reveal a goat. That is, there is still a 2/3 chance you chose a goat and a 1/3 chance that you chose the car. And there is still a 2/3 chance that, had you been able to choose both door #2 and door #3 (at the same time), you would have selected the door concealing the car. Now that door #2 has been eliminated, this just leaves door #3 as the viable car option among doors #2 and #3; so, there’s a 2/3 chance that the car is behind door #3. (This assumes a goat is always revealed. See Addendum for more on this.)
A simpler, and probably clearer, way to put this: If you run the game several times, 1/3 of the time, you’ll choose the car, and switching will lose; 2/3 of the time, you’ll choose a goat, and switching will win. So you’ll win twice as often by switching.
A common way to get people to accept this result is to restate the problem with 100 (or 1,000 or 1,000,000) doors. Numberphile has a video that gives a good visual of this.
It’s also worth noting that real-world executions and computer simulations of this game demonstrate that switching does indeed win roughly 2/3 of the time. We can also theorize a hypothetical example as follows:
Suppose you play the game 60 times. Every time you play, a goat is revealed. Of those 60 games, you choose a goat 2/3 of the time. That’s 40 times. Every time you switch in those cases, you win. So, you win 40 times by switching. 1/3 of the time, you first choose a car. That’s 20 times. You lose every time you switch in those instances. So, you’ve won 40 times by switching, and you’ve lost 20 times by switching. You’ve won twice as often as you’ve lost. In other words, you’ve won 2/3 of the time (i.e., 40 out of 60 games), lost 1/3 of the time (i.e., 20 out of 60 games). Thus, the probability of winning by switching is 2/3.
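The worked example above can be checked with a quick Monte Carlo sketch (the function and variable names here are my own, chosen for illustration):

```python
import random

def play(switch, trials=100_000):
    """Simulate the standard Monty Hall game, in which the host always
    reveals a goat, and return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # ≈ 0.333
print(f"switch: {play(switch=True):.3f}")   # ≈ 0.667
```

With enough trials, the stay and switch frequencies settle near 1/3 and 2/3, matching the 60-game arithmetic above.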
But the robustness of these demonstrations doesn’t explain the robust counterintuitiveness of the problem—a question that seemingly resides in the intersection of psychology and mathematics.
One way I’ve tried to build an intuition for it is to put it in modal terms. In the worlds where you have three doors to choose from, you’ll make a worse choice than you would in those worlds where you have only two doors to choose from. That is, you’re more likely to get it wrong when there are more doors. So, when one door is eliminated, you’re better off switching because you probably guessed wrong at the start.
I like this attempt at intuition-building, but it doesn’t dig deep enough into where we’re going wrong, and it doesn’t account for how we might respond to variations where switching isn’t 2/3 (see Addendum). The simple answer is that those who get it wrong aren’t conditioning on the fact that the game-master always reveals a goat. But I feel there’s something deeper going on here having to do with a few things, including the counterintuitive application of phrases like “2/3 of the time” when you’re only playing the game once.
I can imagine a response along the lines of (keeping in mind that, if the game-master chooses what door to open at random rather than always revealing a goat, switching indeed wins only 1/2 the time): “Why should it matter that Monty Hall always reveals the goat in some theoretical model involving several trials I’m not actually involved in? What if I only thought Monty Hall knew, but he didn’t and was only guessing? That would only change the theoretical answer I’m supposed to give. But it won’t change whether or not I’m going to win this game I’m playing right now! That seems to come down to either this door or that door concealing the car.”
Does it help to suggest imagining that we could view the history of Let’s Make a Deal as having been played by a single contestant, and when you play, you are simply the temporary avatar for that contestant? Perhaps not, as it’s certainly not the case that all those avatars share the winnings! Besides, and more importantly, the game is an independent event (beware the gambler’s fallacy). In other words, if the game is only ever played once in the history of its existence, the probability that switching wins in that game is 2/3, due to the game’s structure.
Some of the confusion here may stem from a misunderstanding of what probability is meant to do: provide theoretical models for making better decisions, sometimes with more obvious results than others (see the examples with 100+ doors).
I also think some of the difficulty here has to do with popular notions of probability—the intuition-shaping notions we grow up with—being about random external events that haven’t happened yet. That’s the aspect I’ll explore here, though admittedly I’m only skimming the surface.
Part II: What We’re Getting Wrong about the Monty Hall Problem
Consider a variation on the game. After door #2 is opened to reveal a goat, that door is left open (and the goat does not move). Doors #1 and #3 remain closed. Whatever is behind them is now shuffled vigorously and randomly (this happens concealed from your view). You’re now given the opportunity to switch. This time, it really is 1/2 to switch or stay. It makes no difference. Similarly, if you flip a coin to decide whether to switch or stay, you’ll win half the time—i.e., 2/3 of the time you’ll choose a goat, and 1/2 of that time you’ll switch and win; 1/3 of the time you’ll choose the car, and 1/2 of that time you’ll stay and win; that’s (2/3)(1/2) + (1/3)(1/2) = 1/2.
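That arithmetic—(2/3)(1/2) + (1/3)(1/2) = 1/2—can be spot-checked with a short simulation (a sketch; the function name is mine):

```python
import random

def play_coinflip(trials=100_000):
    """Standard game (host always reveals a goat), but the player flips
    a fair coin to decide whether to stay or switch. Returns win rate."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Host reveals a goat from the unpicked doors.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if random.random() < 0.5:  # coin flip says: switch
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"{play_coinflip():.3f}")  # ≈ 0.500
```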
Imagine another variation. There is a lotto machine with three balls flying around in it. On two are the letters “G” (for “goat”), and on one there is the letter “C” for “car.” Your task is to guess which will (randomly) come out. You have a 1/3 chance of guessing C correctly. A G-ball comes out. You are asked to guess again. This time there is a 1/2 chance of guessing correctly, whether you guess C or G.
In both of these scenarios, our intuitions are correctly aligned with the actual probability. What is different about the Monty Hall Problem? In that problem, the events are already fixed. That is, “Where is the car?” is already a settled question. Let’s say that door #1 conceals the car. When you guess door #1, there is a 1/3 chance that your guess is correct, but there is a probability of 1—that’s a 100% chance—that the car is behind that door.
In other words, what is behind each door is already settled. There’s a probability of 1 that: the car is behind door #1, a particular goat is behind door #2, and the other goat is behind door #3. When you assign a probability, however, you don’t know any of this. So what you must assign a probability to is the likelihood of your guess being correct. One out of three times, your guess will reveal the car. And so on.
Think of it this way. Imagine I flip a fair coin, but I keep the result concealed in my hand. I look at the coin and I see it landed Heads. You must now guess what the likelihood is that Heads is facing upwards. The answer is already settled. I can think to myself, while you’re deliberating, that there is a probability of 1 that Heads is facing upwards. You, on the other hand, should assign 1/2 to that outcome. What you are really evaluating, however, is the likelihood of your guess being correct. Because the outcome is already settled. It’s no longer left to chance, no longer random, etc.
You might think the coin example is limited, given that this may also be what’s going on when we have not yet flipped the coin: There’s some already settled fact of the matter about how it will land, and we’re really evaluating the likelihood of a guess (or more rigorous prediction) of Heads or Tails being correct. Fair enough. This very well may be the best way to view probability in general: God is the game-master who chose the best of all possible worlds, and knows what will happen just as if it already has happened. We mortals are just guessing (some with more rigor and sensitivity than others).
Whether that is true, or whether randomness genuinely ensures that future events may go either way, the fact remains that in the first formulation I gave of the Monty Hall Problem, the arrangement of the car and goats is settled at the outset, and that doesn’t change after one or all of the doors has been opened.
In summary: In the Monty Hall Problem, you should be assigning probability to the likelihood that your guess is correct, not to the likelihood that there actually is this or that item behind the door, and especially not to the likelihood of a future event occurring in terms of where the car is (that event is already settled). When the game-master opens the door, this is not like flipping a coin; it’s more like opening a hand to reveal an already-flipped coin.
The future event in question, rather, is the correctness of our guess. (In other situations, it may be the correctness of a belief or of a rigorously computed prediction.) In this sense, probability turns out to always be about some future outcome; in this case, about a guess being correct, about winning a game, and so on.
This observation may not help us adjust our wrong intuitions about the problem. But perhaps it is on the right track for helping to explain what we’re doing wrong here and why the correct solution feels so wrong. In particular, I assume, to people for whom statistics and probability are not regular activities. On the other hand, it’s perplexing that the problem was a challenge for even the likes of mathematician Paul Erdős to wrap his head around, and that it garnered thousands of letters, including from math professors, “correcting” Marilyn vos Savant after her initial publication of the 2/3 answer in Parade magazine.
In the variations I gave of the Monty Hall Problem, our natural intuitions align with the probability rules. But this may just be a matter of luck: the proceedings happen to align with our intuited expectations, rather than our intuitions adjusting to a given situation. Presumably there’s an evolutionary advantage to this. Maybe one would need to examine that in order to really know why the Monty Hall Problem embarrasses our intuitions. I’ll leave that and many other deep follow-up questions—e.g., What would embodied mind/cognition researchers (such as George Lakoff and Rafael Núñez) have to say about this? How does the Monty Hall Problem relate to other counterintuitive probability results (from theoretically easy ones, like the Gambler’s Fallacy, to the maybe unresolvable Sleeping Beauty Problem)? In what ways is “Why don’t I get this?” a psychological question, and in what ways is it a mathematical one?—for later.
Have I really even addressed the question suggested by this writing’s title? Maybe I was too enthusiastic with the title, but hey: the layers of Why? go as far down as you like; I think I’ve addressed the first couple of layers here. Or maybe the Monty Hall problem is just stranger than those “in the know” tend to portray it to be, and so we should expect that anyone who’s really thinking closely about it should see some strangeness there. I certainly do.*
(*That in mind, I’ve written a follow-up post on the Monty Hall problem, this time zeroing in on the bizarre effects the game-master’s cognitive state—particularly at the moment of choosing which door to open—has on the probability of winning by switching: Three Strange Results in Probability: Cognitive States and the Principle of Indifference (Monty Hall, Flipping Coins, and Factory Boxes).)
The Monty Hall Problem, like so much in probability, provides a theoretical model that cleans up our complex, messy world in order to provide us with a heuristic (or rule of thumb); in this case: always switch. We may bring the model into question. Consider this. A highly determined and observant viewer, Wanda, watches hundreds of episodes of Let’s Make a Deal (the TV show from which the problem originates, with host Monty Hall as game-master). Wanda has tracked, with highly sophisticated technology, Monty Hall’s behavior during the game. (Assume that it’s known that Hall knows what’s behind each door, he always reveals a goat, and he always offers the opportunity to switch.)
Wanda has noted that, in greater than 50% of cases when contestants should NOT have switched, some combination of the following occur: Hall’s vocal inflections become slightly longer and go higher in pitch; his pupils dilate; at least once, his eyebrows raise more than three times in a 30-second period. These things also sometimes, but less often, occur when the contestant should switch (e.g., when the contestant is a female wearing a red skirt).
When Wanda becomes a contestant, she tries to minimize the conditions that lead to false tells (e.g., she wears a brown skirt; a color to which Hall seems to be neutral). Should Wanda switch? It depends. The unconditioned probability that she chose a goat door is still 2/3. But she’ll need to update that probability conditional on Hall’s behavior. Any other contestant should switch. But Wanda has more information.
Finally, it’s interesting to consider an alternative case in which the winning door is kept from Monty Hall—for example, he is told which door to open through an earpiece that emits beeps: low for Door-1, midrange for Door-2, high for Door-3. And yet, Wanda might (though it’s far less likely) still find significant correlates between Hall’s behavior and winning choices; e.g., if his behavior is somehow (in)directly influenced by the behavior of those who do know. I can’t think of a plausible example, however, because those people would not know ahead of time whether a contestant would be best off switching.
(The point of the End Note example is to motivate the plausibility of there being situations where one’s rational subjective probability that staying—not switching—wins is greater than 1/3, whatever the reasonably assumed unconditioned probability may be.)
See a discussion in the comments section about the significance of Monty Hall knowing where the car is. Commenters took issue with my saying he need not know. I removed the knowledge requirement from the game because I sometimes encounter people ascribing a quasi-mystical property to mental states, including in terms of the effects those states can have on probability. So, I’m careful about how I characterize the epistemic dimensions of probability.
That said, the discussion did force me to think more deeply about the problem and its implications for how we think about probability. There’s a tendency for us to solve a problem then move on to the next as though the solved problem is now trivial. But it strikes me that there’s potentially something extremely important going on when a concept that seems entirely counterintuitive snaps into focus—something to do with either getting a more properly focused view of the world, or perhaps simply getting a more properly focused view within a particular (useful) model of the world; this is an important distinction. Furthermore, what seems trivial to us about probability today would have seemed strange to brilliant mathematicians in the 15th and 16th centuries (including the likes of Leibniz and Newton).
Similarly, I often wonder what, say, Descartes would have thought of Edmund Gettier’s influential 1963 paper “Is Justified True Belief Knowledge?” I bet Descartes would have said, “no, belief is simply not justified in Gettier cases.” Most epistemologists today buy into Gettier cases (as do I; it seems so obvious!)—and part of their job is to get this and similar concepts to snap into focus for students by getting them to think about false barns and painted zebras and a real sheep concealed behind fake sheep (I love all of these examples, by the way; I’m fully convinced). I think we’re able to buy into it today due to a gradual decline in our (I believe justified) confidence, which was very high coming out of the Scientific Revolution, that everything about the world can be proved or uncovered through empirical investigation and math. Now that confidence is being replaced by probabilistic models (e.g., involving Bayesian credences), even, I’m told, in the world of magic (e.g., a spell might not promise a result, but it will increase its chances of happening).
Points of tension in perspectives on probability also seem to have to do with different understandings about what probability is meant to do, or is even capable of doing (help make a better decision in a given moment? state a fact about the world?). (For a wonderful philosophical survey on the history of the development of, and attitudes about, probability, see Ian Hacking’s 1975 [updated 2006] The Emergence of Probability.)
But these are questions for other articles (working on it). My point is that, despite understanding the answer to MH intuitively, I’m not prepared to view it as trivial or obvious.
Following the comments discussion, I made some minor edits to this post, and will add this clarifying note:
If Monty Hall randomly chooses which door to open (due to not knowing which to open or, say, by flipping a fair coin), then 1/3 of the time he’ll choose the car, thus ending the game; 1/3 of the time you’ll choose a goat and he’ll reveal a goat, and switching will win; 1/3 of the time you’ll choose the car and he’ll reveal a goat, and switching will lose; thus, switching wins as often in this version of the game as it loses (1/3 for each), so the probability is 1/2 that you’ll win by switching.
Or, put it this way: 2/3 of the time you’ll pick a goat; half of those (i.e., 1/3 of the time overall), Monty Hall reveals the car, ending the game; the other half, you switch and win; the other 1/3 of the time, you pick the car and switching loses.
That said, it seems to me that the Monty Hall problem encourages interesting questions about the relationship between assessing the probability of a particular trial, and relating a particular trial to a larger set of trials (including how we draw borders around several trials in order to create a set—i.e., how we determine membership criteria for admitting this or that given trial into a given set).
That said, in what I’ve been discussing here, I’m particularly interested in the set of trials in which a goat is revealed, irrespective of how the goat comes to be revealed—in 2/3 of those trials, switching wins. To be clear, this isn’t simply a matter of playing the game several times (without Monty Hall always revealing a goat), and then throwing out the trials in which he reveals the car. That will still result in switching winning only 1/2 the time. To see this, suppose you play 60 games. You’ll choose a goat 40 times. Half of those, Monty Hall voids the game by revealing the car; the other 20, you win by switching. You’ll choose a car 20 times, in which case switching always loses. You’ve won 20 by switching, and lost 20 by switching. Alternatively, you could construct a set of 60 trials in which a goat is revealed, for whatever reason. In that set, switching wins twice as often.
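The distinction above—filtering out the voided games versus belonging to an always-goat set—can be checked with a short simulation of an ignorant Monty (a sketch; the function name is mine): among the trials where a randomly choosing host happens to reveal a goat, switching wins only about half the time.

```python
import random

def ignorant_monty(trials=300_000):
    """Monty opens one of the two unpicked doors at random. Among the
    surviving trials where he happens to reveal a goat, return the
    fraction in which switching would have won."""
    goat_revealed = 0
    switch_wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        opened = random.choice([d for d in doors if d != pick])
        if opened == car:
            continue  # game voided: the car was revealed
        goat_revealed += 1
        switched = next(d for d in doors if d != pick and d != opened)
        switch_wins += (switched == car)
    return switch_wins / goat_revealed

print(f"{ignorant_monty():.3f}")  # ≈ 0.500
```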
Interestingly, were I to play the game one time without knowing whether Monty Hall knows, once I see a goat revealed, I would intuitively consider myself to be in the set of trials in which I always see a goat. Maybe that’s not such a bad thing, as switching won’t decrease my chances of finding the car.
I would prefer that to the alternative of assessing the probability that Monty Hall knows, in which case I might turn to the principle of indifference and assign .5, which I would nudge up a tenth to .6, via Bayes’ theorem, on the evidence of his revealing a goat, and on the assumption that he intends to reveal the goat (and is making the choice of door cognitively, rather than, say, by flipping a coin): IF he intends to reveal a goat and P(Knows) = 1/2, THEN P(Knows|Reveals Goat) = (P(Reveals Goat|Knows)×P(Knows))/P(Reveals Goat) = ((1)(1/2))/(5/6) = 3/5, where P(Reveals Goat) = (1)(1/2) + (2/3)(1/2) = 5/6.
This gets tougher, however, when assuming ignorance both about Monty Hall’s intentions and how the door is chosen. Knowing more about his wishes and decision method can be helpful; notice that if the door is decided by guessing or even by, say, eeny meeny miney moe (which can be gamed), his wishes seem more important than if decided by coin toss, particularly if he chooses which decision method to use.
Finally, I might as well throw some of the other stuff discussed here into Bayes’ theorem. If you’re in the set of trials in which a goat is always revealed, once you see a goat revealed, where G = “You chose a goat” and R = “A goat is revealed”: P(G|R) = (P(R|G)×P(G))/P(R) = ((1)(2/3))/(1) = 2/3, since in this set a goat is revealed with certainty, so P(R) = 1. In other words, you started with a 2/3 probability of having chosen a goat, and that doesn’t change given a goat reveal.
If he is instead choosing a door randomly, assuming even odds (e.g., flipping a coin), then P(R|G) = 1/2 and P(R) = (1/2)(2/3) + (1)(1/3) = 2/3, so: P(G|R) = ((1/2)(2/3))/(2/3) = 1/2. So you update from a 2/3 to a 1/2 chance of having chosen a goat, given the reveal of a goat.
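For what it’s worth, these Bayes computations can be verified with exact rational arithmetic (a sketch using Python’s fractions module; the variable names are mine):

```python
from fractions import Fraction

g = Fraction(2, 3)  # prior P(G): probability you chose a goat

# P(Knows | Reveals Goat), from an indifference prior P(Knows) = 1/2.
knows = Fraction(1, 2)
reveal = 1 * knows + Fraction(2, 3) * (1 - knows)  # P(Reveals Goat) = 5/6
knows_given_reveal = (1 * knows) / reveal
print(knows_given_reveal)  # 3/5

# Always-reveal set: P(R|G) = 1 and P(R) = 1, so the prior is unchanged.
g_given_r_always = (1 * g) / 1
print(g_given_r_always)  # 2/3

# Random-door host: P(R|G) = 1/2, P(R) = (1/2)(2/3) + (1)(1/3) = 2/3.
r_random = Fraction(1, 2) * g + 1 * (1 - g)
g_given_r_random = (Fraction(1, 2) * g) / r_random
print(g_given_r_random)  # 1/2
```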
At the end of the day, I think the question I’m finding myself more intrigued by here is how to reconcile a single, real-world instance of the game to a theoretical model in which I’m supposed to locate myself. That is, I am to ask: “As I play this single instance of the game, am I in the set of games in which Monty Hall always knows, the one in which he usually knows but happens to forget on occasion, the one in which he is guessing, the one in which he can choose not to open a door according to whimsy…?”