There’s a probability problem that lacks an obvious solution, despite appearing simple at first glance. It’s usually called the *Sleeping Beauty Problem*, but I’m uncomfortable with that formulation, as it strikes me as needlessly sexist: it usually revolves around a young woman who is put to sleep by researchers, awoken and questioned about the result of a coin flip, then given a mild memory-erasing procedure, then put back to sleep, etc. Maybe it’s not (always) sexist. In some versions, Sleeping Beauty is said to have consented. But anyone can be made to consent to anything in fiction. At any rate, there’s no harm in changing it, and in some ways doing so makes it easier to think about (in other ways, not so much).

What you see here is my attempt at thinking through this difficult problem, which I reformulate as the *Amnesiac’s Dilemma*. (Though I’ll refer to it as the *Sleeping Beauty Problem* as well; I just won’t use that story… much.) The upshot of the dilemma is that 1/2 and 1/3 both seem to be viable solutions (proponents of which have been called halfers and thirders, respectively). Rather than rule the problem indeterminate, we take it that there must be some fact of the matter about which solution is correct, given that the problem can be reasonably well-defined by a discrete, finite sample space in which the experiment may be repeated indefinitely.

To make sense of the problem, we need to be clear about its relevant features—for example, about what counts as a desired outcome. It may be that 1/2 is valid for one sort of outcome, while 1/3 is for another. In fact, I ultimately conclude here that, whenever asked “Heads or Tails?”, the Amnesiac (Henry, in this case) should have a credence of 1/2 that the coin landed Heads, even though he may (reasonably) simultaneously assign 1/3 credence to his situation being, say, {(First Question AND Heads)} (Henry’s analogue to Sleeping Beauty’s {(Monday AND Heads)}). Though I don’t take this result lightly (if pushed, I might lean towards 1/2 or agnosticism). This will make more sense (I hope!) by the end.

But I’m getting ahead of myself, as I haven’t yet summarized the problem. Rather than doing that right away, I will lead up to it with a series of simpler scenarios. If you’d like to get a head start, here are two brief videos that demonstrate the standard Sleeping Beauty scenario: www.youtube.com/watch?v=zL52lG6aNIY (a fairly simple overview); www.youtube.com/watch?v=5Cqbf86jTro (a more in-depth, though still brief, overview). If you’d like to go deeper, this Wikipedia entry will set you in the right direction: https://en.wikipedia.org/wiki/Sleeping_Beauty_problem. And of course you can check out these two frequently cited articles: Adam Elga, “Self-Locating Belief and the Sleeping Beauty Problem” (*Analysis*, 60(2): 143–147, 2000); David Lewis, “Sleeping Beauty: Reply to Elga” (*Analysis*, 61.3: 171–176, 2001).

Before getting started, some clarifying notes:

The math involved in showing the 1/2 and 1/3 results isn’t complicated. The real difficulty is deeper than that—philosophical, psychological, theoretical (i.e., regarding how our probabilistic models, which are cognitive and conceptual tools, map to the Real World). My interest is especially in the implications of the problem, but I won’t explore those much here. First, I’d like to get lost in the problem itself.^{1}

In the series of games I describe, leading up to and including the *Amnesiac’s Dilemma* (i.e., *Sleeping Beauty Problem*), the game-master (me) never lies and always tells the truth (at least when functioning as game-master). We assume that the player (you, unless otherwise specified) is *rational*—which here means a player who understands the rules, wants to win, tries to win, and does not (*cannot*) knowingly and simultaneously believe mutually contradictory propositions. (On this meaning, a “rational” person will endeavor to give the correct answer when aiming to win, and the wrong answer when aiming to lose.)

When I use the word *credence*, I am referring to the degree to which someone is confident, or believes, that something is true or false. This (subjective) confidence translates to a probability assignment. For example, if you say you’re going to flip a fair coin five seconds from now, I will assign 1/2 to its landing Heads, because this is what I understand to be the probability of a typical coin landing Heads. My confidence in Heads is split with my confidence in Tails. Suppose, however, that I’ve already seen the coin land Heads eight times in a row. I might then adjust my confidence in favor of Heads or Tails—perhaps because I suspect you of cheating or using an unfair coin, or because I’m prone to the so-called Gambler’s Fallacy (even while presuming that the objective probability of the coin’s landing Heads is still 1/2).

I’ll also note that when I talk about the probability of flipping a *fair* coin being 50–50, I’m talking about a fictitious, theoretical coin for the sake of exploring probability. Maybe no coin is really fair. Maybe a coin could not even be designed to genuinely result in a 50–50 tendency when flipped (consider: if you flip the coin the exact same way in the exact same conditions each time, it should land the same way 100% of the time; Persi Diaconis et al. have built a machine that can do this, and on top of that have demonstrated that under “random” [i.e., non-machine-guided, or what I call “ignorant”] flipping conditions we actually end up with a .51 probability favoring the coin’s starting position: Dynamical Bias in the Coin Toss^{2}). This should make intuitive sense. Given infinite flips in a world that doesn’t respect the Heads/Tails sample space, plenty of other things can happen to a coin besides landing Heads or Tails. Moving on…

Standard versions of this problem frame it as a question about the credence that a rational agent (i.e., someone who would assign 1/2 to a *fair* coin flip) would, or should, assign in the given scenario. I will address that, but am going to head in that direction from the perspective of winning.

**Game 1: The Easy Game**

I tell you, through an intercom in another room, that I’m going to vigorously flip a fair coin. If it lands Heads, I won’t ask you anything. If it lands Tails, I’ll ask you: “Heads or Tails?” (You can’t see me.)

In the event you are asked “Heads or Tails?”, you may answer as you wish. I will continue to flip the coin until Tails occurs.

There are many ways we can define what counts as winning. A simple formulation is: When asked “Heads or Tails?”, if the answer you utter is correct, you win.

This is a very easy game to win. Here’s one example of winning:

*Given that I just now asked you, “Heads or Tails?”:*

Probability that it landed Tails: 1^{3}

Credence you should assign: 1 to Tails, 0 to Heads

Probability of winning, given that you remain rational etc. throughout the game: 1

If we run this game 1000 times, you’ll always win. In general, I’ll assume that whatever is good for 1000 runs is what a rational player should do for any given single run.

**Game 2: A Harder, and Weirder, Game:**

I tell you, again through an intercom, that I’m going to vigorously flip a fair coin. After I flip it, no matter how it lands, I will ask you “Heads or Tails?” If it lands Heads, I will only ask you that question once. If it lands Tails, I will ask you twice. If asked twice, you must give the same answer both times you are asked. To win, you must answer correctly every time you’re asked. I will let you know when the game is over.

This time you’ll want to think about a strategy, if only due to the weirdness of the scenario. A reasonable intuition is that you have a 50–50 chance of winning by picking either Heads or Tails. But let’s look more closely, just to be sure.

Reviewing some outcomes might help. The following outcome is for a standard coin toss, without any of the above weird rules. Oh, I should point out that you have a life-long strategy for deciding coin tosses: when your credence is .5, you flip your own (fair) coin in order to decide what to guess.^{4}

*Given that the coin landed Heads:*

Probability that it landed Heads: 1

Credence you should assign: .5 to Heads, .5 to Tails

Probability that your guess is correct (i.e., of winning): .5

Now, would it make any difference given that you’ll be twice asked to answer (with the exact same response) should it land Tails? Maybe it depends on what counts as winning. Consider the two winning outcomes *with* the weird rules:

*Given that the coin lands Heads:*

Probability that it landed Heads: 1

Credence you should assign: .5 to Heads, .5 to Tails

Probability that your guess is correct (i.e., of winning): .5

Asked “Heads or Tails?”: You answer “Heads.”

You win.

*Given that the coin lands Tails:*

Probability that it landed Tails: 1

Credence you should assign (before being asked “Heads or Tails”): .5 to Heads, .5 to Tails

Probability that your guess is correct (i.e., of winning): .5

Asked “Heads or Tails?” (First Time): You answer “Tails.”

Asked “Heads or Tails?” (Second Time): You adjust your credence for Tails to 1, and you answer “Tails.”

You win.

Let’s consider all (rational) outcomes:

HEADS OUTCOMES:

(1) “Heads or Tails?”: You answer “Heads” (WIN)

(2) “Heads or Tails?”: You answer “Tails” (LOSE)

TAILS OUTCOMES:

(3) “Heads or Tails?”: You answer “Tails”; Asked again: “Tails” (WIN)

(4) “Heads or Tails?”: You answer “Heads”; Asked again: “Heads” (LOSE)

We see that Heads is correct one out of three times the question is asked, and Tails is correct two out of three times the question is asked. (E.g., if you run the game 1000 times, you’ll be asked 500 times under Heads conditions, and 1000 times under Tails conditions.) So, there’s a temptation to think that the probability of Heads winning is 1/3, and the probability of Tails winning is 2/3. It would seem, then, that the best strategy is to always call Tails. Put another way, since it seems that if you’re being asked “Heads or Tails?” it’s more likely to be a Tails run, it’s best to always say “Tails.” Right?

Of course not. You’ll win this game half the time whatever you call.
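Game 2’s symmetry is easy to check numerically. Here’s a minimal Monte Carlo sketch (the function name `play_game2` and the run count are my own inventions): a fixed call wins about half the runs either way, even though, counted per question asked, “Tails” is the correct utterance about two-thirds of the time.

```python
import random

def play_game2(call, rng):
    """One run of Game 2, answering `call` every time asked."""
    coin = rng.choice(["Heads", "Tails"])
    asks = 1 if coin == "Heads" else 2     # asked once on Heads, twice on Tails
    correct = sum(call == coin for _ in range(asks))
    return correct == asks, asks, correct  # win iff correct every time asked

rng = random.Random(42)
runs = 100_000
for call in ("Heads", "Tails"):
    wins = total_asks = total_correct = 0
    for _ in range(runs):
        win, asks, correct = play_game2(call, rng)
        wins += win
        total_asks += asks
        total_correct += correct
    print(f"{call}: win rate {wins / runs:.3f}, "
          f"per-question accuracy {total_correct / total_asks:.3f}")
```

Both policies print a win rate near .500, with per-question accuracies near .333 (Heads policy) and .667 (Tails policy).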

**Game 3: The Amnesiac’s Dilemma (AKA, The Sleeping Beauty Problem):**

This time, we bring in Henry, who, due to a common side-effect following a rare brand of conscious sedation, is experiencing temporary anterograde amnesia. For the next two hours, he can’t form new long-term memories (he’ll be fine by evening). He still has all his existing long-term memories, however, and he’s proficient with basic probability.

To help stimulate a quicker recovery, and because he enjoys brain-teasers, Henry is scheduled to play Game 3. It’s just like Game 2, but altered thanks to his amnesia. The special challenge here is that, when asked “Heads or Tails?”, he will not know whether he’s being asked for the first or second time. And so he is allowed to give a different answer each time he is asked. He knows the rules and has developed a strategy to account for the amnesia. He will always call Heads. Is this the best strategy? Let’s consider his options.

There are three basic strategies he can use to win: (1) Decide ahead of time to call Heads; (2) Decide ahead of time to call Tails; (3) Go with a random guess in the moment—for example, by flipping a (fair) coin in order to decide what to guess.

(3) seems like a bad idea. In the event that Tails comes up, Henry could end up answering “Tails” and then “Heads,” which means he’ll have a .25 chance of winning given that Tails has come up, because his random guesses will have to both be Tails. So, we’ll rule out (3).
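For what it’s worth, strategy (3)’s overall handicap is easy to compute: .5 × .5 (Heads, one lucky guess) + .5 × .25 (Tails, two lucky guesses) = .375. A quick simulation sketch (the setup names are mine) under the rules as stated:

```python
import random

rng = random.Random(7)
runs = 200_000
wins = 0
for _ in range(runs):
    coin_heads = rng.random() < 0.5            # the game-master's flip
    asks = 1 if coin_heads else 2
    # Strategy (3): an independent fair-coin guess for every question.
    guesses = [rng.random() < 0.5 for _ in range(asks)]
    wins += all(g == coin_heads for g in guesses)

print(wins / runs)  # close to .375
```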

Given this, and given what we observed in Game 2, should Henry assign a credence of 1/3 or 1/2 to the coin flip’s having landed Heads?

*Halfer Response:* Henry knows that a fair coin’s probability of landing Heads is .5, so this should not change just because of his amnesia. If Henry’s credence in the coin’s landing Heads was .5 before the flip happened, and he’s learned nothing new about the world, then he should maintain that credence no matter how many times he’s asked.

*Thirder Response:* Over several trials, Henry will be asked under Tails conditions 2/3 of the time. So, he should assign 1/3 credence to Heads (namely, 1/3 to Heads-First-Question), and 2/3 to Tails (i.e., 1/3 for Tails-First-Question PLUS 1/3 for Tails-Second-Question). This is appealing, but let’s look at the winning outcomes more closely for the halfer and thirder recommendations to Henry.

**HALFER:**

*Given that the coin lands Heads:*^{5}

Strategy: Always call Heads

Credence you should assign to Heads (whenever asked): .5

Probability of winning given this strategy: .5

Asked “Heads or Tails?”: You answer “Heads.”

You win.

*Given that the coin lands Tails:*

Strategy: Always call Tails

Credence you should assign to Tails (whenever asked): .5

Probability that your guess is correct (i.e., of winning): .5

Asked “Heads or Tails?” (First Time): You answer “Tails.” (Correct)

Asked “Heads or Tails?” (Second Time): You answer “Tails.” (Correct)

You win.

And so, we see that if you want to win, you can adopt a strategy of sticking to calling Heads or Tails. This gives you a .5 probability of winning, because the coin has a .5 chance of confirming either strategy. It makes no difference which you choose, just as in Game 2.

If you run this 1000 times, you’ll be asked the question 1500 times. 500 of those will be under Heads conditions, and 1000 under Tails. If you stick to a policy of calling Heads, you will win 500 times; if your policy is calling Tails, you will also win 500 times.

The *Amnesiac’s Dilemma* is about assigning credence, however, in the given instance. I’ve already listed that as .5. But let’s look more closely at this.

When asked 1500 times, if you use the Heads policy, you’ll answer the question correctly 1/3 of the time, and with the Tails policy you’ll answer correctly 2/3 of the time.

So far so good, but we need to interrogate this further. We might say that if Henry always has a .5 credence, then he should be fine with adopting strategy (3): Whenever asked “Heads or Tails?”, if he thinks it’s 50–50, why not just flip the coin? The thirder might say it’s because 2/3 of the time, he’ll be asked under Tails conditions. So, the very fact that (3) is (rightly) dismissed demonstrates that the credence is, or at least should be, 1/3 that it landed Heads (i.e., that Henry is being asked under Heads-First-Question conditions).

My response to this is that it is rational to have a credence of .5 that the fair coin landed Heads (indeed, it may be practically impossible not to, as belief is passive), as there really is a .5 probability that a “Heads” guess will be correct whenever asked. But if you’re asked several times following the same flip, there is a very good chance you’ll make a mistake if you answer randomly. This is easy to see. Similarly, Henry can have a sustained credence of .5 that it landed Heads, while understanding that it’s better to stick to one answer given that he’ll be asked more than once while amnesiac. In other words, whenever asked, Henry can reasonably think to himself, *My credence is .5 that the answer I’m about to give accurately describes the result of the coin flip*. This will always be true, even if asked a billion times.

In the end, it seems clear to me that Henry should assign a credence of .5 to Heads (whenever asked, conditional on being given no new information). He won’t win more often if he employs a Heads or Tails policy, but he will lose more often if he employs a “random guess” strategy.

Now, if he calls Tails and we ask him to guess whether it’s the First or Second time he’s being asked, he will split his credence between those options: It’s .25 credence for Tails-First-Question (TQ1), .25 for Tails-Second-Question (TQ2). This would mean that Heads-First-Question (HQ1) merits .5 credence. That’s a different game, but it does seem to have (troubling) implications for Henry’s game.

Suppose Henry is asked, “Which condition are you in: HQ1, TQ1, or TQ2?” If there are 1000 coin flips, he’ll be asked this question 1500 times. 500 of those times, HQ1 will be correct; 500 times, TQ1 will be correct; 500 times, TQ2 will be correct. So the credences would not seem to be .5, .25, .25 respectively, but 1/3 for each. That said, here are two examples of a thirder’s recommendation to Henry:
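Since the sample space is small, these frequencies can be tabulated exactly instead of simulated. A sketch using idealized counts (assuming exactly 500 Heads in 1000 flips, as above; the variable names are mine):

```python
from fractions import Fraction as F

runs = 1000
heads_runs = tails_runs = runs // 2

# Question instances: one per Heads run (HQ1), two per Tails run (TQ1, TQ2).
questions = {"HQ1": heads_runs, "TQ1": tails_runs, "TQ2": tails_runs}
total_q = sum(questions.values())
print(total_q)                                     # 1500 askings

shares = {k: F(v, total_q) for k, v in questions.items()}
print(shares)                                      # each scenario: 1/3

# Per-run win rates for the two fixed policies:
print(F(heads_runs, runs), F(tails_runs, runs))    # 1/2 and 1/2

# Per-question accuracy for the two fixed policies:
print(F(questions["HQ1"], total_q))                        # Heads policy: 1/3
print(F(questions["TQ1"] + questions["TQ2"], total_q))     # Tails policy: 2/3
```

The same table shows both results at once: per run, either policy wins half the time; per question instance, each of HQ1, TQ1, TQ2 occurs a third of the time.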

**THIRDER:**

*Given that the coin lands Heads:*

Strategy: Always call Heads

Credence you should assign to Heads (whenever asked): 1/3

Probability of winning given this strategy: .5

Asked “Heads or Tails?”: You answer “Heads.”

You win.

*Given that the coin lands Tails:*

Strategy: Always call Tails

Credence you should assign to Tails (whenever asked): 2/3

Probability that your guess is correct (i.e., of winning): .5

Asked “Heads or Tails?” (First Time): You answer “Tails.” (Correct)

Asked “Heads or Tails?” (Second Time): You answer “Tails.” (Correct)

You win.

There’s something a bit off about this, but not probabilistically. It has more to do, I think, with what it means to win. The point of saying Henry should assign 1/3 credence is to say that this is the probability he should assign to being correct whenever asked “Heads or Tails?” Let’s suppose Henry has forgotten his strategy but remembers the rules, and is thus forced to produce a guess in the moment. It seems he should indeed go with Tails. In fact, this could become a strategy that he works out each time he’s asked: “I don’t remember my strategy. Whenever that happens, I should call Tails since there’s a 2/3 chance this is a Tails condition… not to mention that I might need to call Tails twice.”

In other words, the thirder can reasonably claim that, when Henry is asked “Heads or Tails?”, with consideration for the answer to that question in that moment, rather than with consideration for winning the games I’ve described, 2/3 of the time the answer will be “Tails,” and so this is the credence Henry should assign. And yet, each time Henry is asked, he has a .5 chance of guessing correctly (provided, for example, that he flips a coin to decide how to answer; or provided we are running the game several times). And if he does use this “always Tails” strategy, he will, over several trials, be *consistently *correct about the result of the coin flip 1/2 the time. And he will answer the question “Heads or Tails?” correctly 2/3 of the time.

So far, it appears that Henry should assign 1/2 credence to the coin flip result, and 1/3 credence to whether it’s HQ1, TQ1, or TQ2. Perhaps this is okay. Henry has no evidence at all about the coin result or about which question he’s being asked. So perhaps these probabilities can comfortably exist independently.

Or perhaps not. Perhaps it’s at least quasi-irrational to hold both credences (in the way that it would be irrational to believe, say, *p* and *q*, where *q* implies *not-p*, while one is at least vaguely aware of this tension). To make this a little easier to think about, I’m going to name Q1 and Q2 *Monday* and *Tuesday*, respectively (borrowing from the *Sleeping Beauty* formulation of this problem).

If Henry assigns .5 to its being Tails, then perhaps he should adjust his credence accordingly. That is, if he calls Tails, then he’s betting that it could be Tuesday. But if he calls Heads, then he’s betting against its being Tuesday. So, to call Heads is to express *some* non-zero credence about its being Monday; certainly Henry cannot rationally say, “I believe it to not be Monday, and I call ‘Heads’.”

While this gives me pause, I’m not convinced it’s significant. I’ll explore a bit more closely:

When asked, “Heads or Tails?”:

–Credence in Heads is .5.

–Credence in Tails is .5.

We might also try to frame this by first assigning credence to whether it’s the Monday or Tuesday question:

When asked, “Heads or Tails?”

–Credence in Monday is 2/3.

–Credence in Tuesday is 1/3.

–How does this affect credence in the coin flip?

The above shows us three different scenarios in which Henry can find himself: {MH, MT, TT} (i.e., Monday-Heads, Monday-Tails, and Tuesday-Tails). All three seem identical to Henry, so each is equally likely. So, each gets a credence of 1/3. Two out of three of these are Monday conditions, and one out of three is a Heads condition.

The thirder position is looking strong, but it relies on credence in the coin flip’s landing Heads being .5; i.e., in 100 flips: 50 Heads and 50 Tails. “Heads or Tails?” is asked twice as often under Tails conditions precisely because the probability of landing Heads or Tails is 50–50. Henry knows this.

What I conclude from all this is that, at any point in the game, the credence in its being, say {Monday AND Heads} should be 1/3, and the credence in the coin’s having landed Heads is 1/2. This credence set facilitates winning the game 1/2 the time by choosing to always either call Heads or Tails in any given run of the game; while, over many runs, also answering correctly 2/3 of the time should you choose to always answer “Tails.”

Put another way:

–Credence in {Heads given that it’s Monday} should be 1/2; {Tails given that it’s Monday}, 1/2; {Heads given that it’s Tuesday}, 0; {Tails given that it’s Tuesday}, 1.

–Credence in {Monday given that it’s Heads} should be 1; {Monday given that it’s Tails}, 1/2; {Tuesday given that it’s Heads}, 0; {Tuesday given that it’s Tails}, 1/2.

–Credence in the coin’s having landed Heads should be 1/2; Tails, 1/2.

–Credence in its being Monday should be 2/3; Tuesday, 1/3.

–Credence in the answer to “Heads or Tails?” being “Heads” should be 1/3; “Tails,” 2/3.
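These assignments can be checked against one another mechanically by keeping the two sample spaces separate: coin results counted per run, self-locating scenarios counted per question instance. A bookkeeping sketch (variable names are mine) in which every conditional above falls out consistently:

```python
from fractions import Fraction as F

# Per-question-instance frequencies over self-locating scenarios:
# MH = Monday & Heads, MT = Monday & Tails, TT = Tuesday & Tails.
p = {"MH": F(1, 3), "MT": F(1, 3), "TT": F(1, 3)}

monday, tuesday = p["MH"] + p["MT"], p["TT"]
heads_q, tails_q = p["MH"], p["MT"] + p["TT"]

assert (monday, tuesday) == (F(2, 3), F(1, 3))
assert p["MH"] / monday == F(1, 2)     # Heads given Monday
assert p["MT"] / monday == F(1, 2)     # Tails given Monday
assert p["MH"] / heads_q == 1          # Monday given Heads
assert p["MT"] / tails_q == F(1, 2)    # Monday given Tails
assert p["TT"] / tails_q == F(1, 2)    # Tuesday given Tails
assert (heads_q, tails_q) == (F(1, 3), F(2, 3))  # correct-utterance frequencies

# Per-run frequencies over coin results (each run counted once): 50-50.
heads_run, tails_run = F(1, 2), F(1, 2)
assert heads_run + tails_run == 1
print("all credences mutually consistent")
```

Nothing contradicts anything else here; the apparent clash only arises if the per-question 1/3 and the per-run 1/2 are forced into a single sample space.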

All these things should be compatible. If not, it must be a problem with probability (or our probabilistic models), not with the world. Or perhaps it demonstrates a problem strictly with subjective probability: there is some threshold between having enough information to form some probability (e.g., a coin flip) and having too little (e.g., Does God exist? Are we living in a computer simulation?). This may recommend agnosticism, but sometimes we must develop some credence, like it or not. Again, belief is passive. I can’t help it: I believe I’m sitting here typing these words. So, we do our best, especially if we are aware of the limitations of our models. I find the present case particularly fascinating because it seems simple on the face of it, and because Henry can indeed rationally strategize in order to win 50% of the time.

In the end, I claim: Henry should assign 1/3 to its being any one of MH, MT, TT, and 1/2 to its being either of Heads or Tails. If pushed to choose one as the overall credence, I’d lean towards 1/2. If pushed harder, I’d say it’s indeterminate: Henry lacks sufficient information, so the rational thing to do is to say, “I don’t know, and so I’ll suspend judgment. But I’ve got a strategy that will assign 1/2 or 1/3 based on desired outcome and context, etc.”

Pushed again, I might give the following argument: I seem to believe equally and all at once in the truth of 1/2 and 1/3. I think myself rational (in the terms I describe above). So, it must not be irrational to say both are equally true. Indeed, from this I gather that both *are* equally true. But as I contemplate the thing, I shift from one to the other, as my perception might of a rabbit-duck, while sort of seeing both at the same time (to use a crude analogy). (Or does this give me reason to think neither is true? Suspending judgment doesn’t feel like an option: my mind is judging, and I can’t stop it.)

Pushed 753 more times, now exhausted, I’d give in to 1/2, as 1/2 seems to play a more significant role in this game than does 1/3 (due, I suppose, to the odds of a coin flip’s outcomes). Indeed, when I contemplate this rabbit-duck of a problem, it is 1/2 that jumps out most vividly, that seems most natural. To emphasize this, and to take one more stab at the inner workings of this problem^{6}, I conclude with one last game:

**Game 4: “This isn’t what I signed up for.” (–Henry)**

This time Henry is in the same amnesiac condition as in Game 3. But he will be asked “Heads or Tails?” either once or seven times. The flip is executed before the game. If it lands Heads, he will be asked once. If Tails, he’ll be asked seven times. To win, he must answer correctly whenever asked “Heads or Tails?” As with Game 3, if Henry chooses one answer to give any time he’s asked, he’ll win the game half the time over several runs. But there seems to be a complication here.

If he were asked, “Are you being asked Q1, or are you being asked some Q2–Q7 inclusive (i.e., not-Q1)?”, clearly his tactic should be to say not-Q1, anytime he’s asked. He’ll be correct 6/7 of the time.

Given that it will only be not-Q1 if the coin landed Tails, it seems he should also make it his strategy to always choose Tails. This is misleading, however, as whether he even makes it to Q2 is 50–50. In other words, there’s only a 50–50 chance it could be not-Q1. So, if asked whether he thinks it’s Q4, there’s a 50–50 chance he made it past Q1 (and if he makes it to Q2, then he makes it to Q4). This doesn’t make it 1/2 that it’s Q4, but it does mean that it is 1/2 that it’s not-Q1, and he can go from there (i.e., 1/12 [from .5/6] for any given Q2–Q7 inclusive). But the key here is that being in a Q2–Q7 scenario is 50–50.

And so we see that which question it is will be conditional on how the coin landed, but how the coin landed is not conditional on which question it is, and credence in guess success should not be updated unless he’s *told* which question it is. Otherwise, credence in the coin result remains 50–50. In other words, even though he’d be best off always saying not-Q1 any time he’s asked “Which Q*n*?”, he will not win the game more often by calling “Tails” whenever asked.

A separate question arises if he’s asked: “Is it Q1-H, Q1-T, Q2-T, Q3-T, Q4-T, Q5-T, Q6-T, or Q7-T?” He has a 1/8 chance of getting this correct. But it should have no bearing on his credence in how the coin landed, any more than if asked, “Is it {(Q1-H) AND (I ate a banana with breakfast today)}?” Henry has no evidence at all about what the answer is in terms of Q*n*, except that there are seven of them, and that there are two ways it can be Q1. And he has no evidence about how the coin landed, except that there was a 50–50 chance that it would land Heads. His credence will draw from this knowledge in accordance with the question being asked, and with his desired outcome (e.g., to win).

Look at it this way: In 100 runs of the game, there will be 50 Heads, 50 Tails. So the game-winning answer will be Heads 50 times and Tails 50 times. There will be 50 Q1-H and 50 Q1-T, and so 50 of each Q2–Q7, which is to say 350 Q*n*-T. This adds up to 400 scenarios. So, the probability of it being any given instance is 1/8 (50/400). This also means that the correct answer to “Heads or Tails?” will be Heads 50 out of 400 times, and Tails 350/400 times. This makes it tempting to think it best to always pick Tails. But remember that the game-winning answer is either Heads or Tails, and it will be Tails 50% of the time. That is, Henry will only get to Q2 (and thus Q3, Q4…) 50 out of the 100 runs.

So, the credence that the flip outcome, and thus the game-winning answer, is Tails should always be 1/2. The credence that it’s Q1 should be 1/2, and the credence in its being some Q*n* between Q2–Q7 (inclusive) should be 1/2. But the credence in its being Q3 in particular, should be 1/8.
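The Game 4 bookkeeping works the same way as before. A counting sketch (idealized frequencies, 50 Heads in 100 runs; names are mine): any particular asking scenario gets 1/8, “Tails” is the correct utterance in 7/8 of askings, yet each fixed policy wins exactly half the runs.

```python
from fractions import Fraction as F

runs, heads, tails = 100, 50, 50

# Question instances: Heads runs asked once (Q1-H); Tails runs asked seven times.
askings = {"Q1-H": heads}
askings.update({f"Q{i}-T": tails for i in range(1, 8)})    # Q1-T ... Q7-T

total = sum(askings.values())
print(total)                               # 400 askings in all

print(F(askings["Q3-T"], total))           # 1/8: any one particular scenario
tails_askings = sum(v for k, v in askings.items() if k.endswith("-T"))
print(F(tails_askings, total))             # 7/8 of askings: "Tails" is correct

# But counted per run, each fixed policy wins half the time:
print(F(heads, runs), F(tails, runs))      # 1/2 and 1/2
```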

Finally, it may seem that we need to balance the 1/8 credence that it’s Q3-T with the credence in Tails being 1/2. That is, by saying it’s 7/8 that it landed Tails, and 1/8 that it landed Heads. That’s the thirder position translated to Game 4. The problem with this is that the Q2–Q7 scenario only comes up half the time. In which case we might want to say that the credence should be 1/2 that it’s Q1-H, and 1/14 (i.e., .5/7) that it’s Q1-T. Nevertheless, it is 1/2 for Q1-H and 1/2 for T[Q1–Q7]. Again: in 100 flips, Henry will get 50 Q1-H scenarios, and 50 scenarios of T[Q1–Q7] (which also means 50 scenarios of Q2–Q7).

When asked “Heads or Tails?” should Henry take into consideration that he’s more likely to be in one of Q2–Q7, than in Q1? I say no. Because he’s not. He’s only in Q2–Q7 50% of the time. That said, if he hasn’t developed a strategy, but knows the rules of the game, it’s best to answer Tails, because the only instance in which he’ll be asked to repeat an answer is if the correct answer is Tails. But if he has a strategy, it doesn’t matter if he chooses Heads or Tails, because either will win 50% of the time.

**Conclusion**

There may be no good way to resolve this tension between credence in winning answers (which is equal to the credence in how the coin landed) and credence in any given Q*n*. But we do see that, in practice, we can work around it, depending on whether the goal is to correctly answer “Heads or Tails?” or to correctly answer “Which Q*n* is this?” If we think that it’s usually some Q2–Q7 inclusive, we must think that the correct answer to “Heads or Tails?” is usually Tails. But it’s only the chunk Q2–Q7 50% of the time, and Tails is the game-winning answer only 50% of the time.

These observations apply to Game 3 as well. Namely, over many runs of the game, Henry will be in an MH scenario half of the trials, and in a not-MH scenario the other half. Thus my leaning towards 1/2 if pushed. But if I think about it any more, I’ll circle back around to sympathizing with the thirder view.

That said, I’ll end here with a final note.

One way to think of probability is as a technology for generating models for organizing the future.^{7} All models are flawed, and here we see no exception. I think the Amnesiac’s Dilemma brings out some of those flaws in interesting ways.

As such flaws accumulate, one sees that probability, as a thing in itself, does not exist—it is not a concrete entity, but rather a conceptual tool that has uses and limitations. But to see how we tend to treat it as a concrete entity, consider this. When discussing any given event, especially those that are in theory repeatable (though no event is *really* repeatable), we often act as though there is some fact of the matter about what the probability is, in some absolute sense, in the moment that any individual instance of the event occurs. We often act as though, for example, the fact that a certain coin flipped one thousand times will tend towards 50–50 Heads–Tails has special meaning—perhaps even some sort of causal role to play—for any singular instance of flipping the coin. It means something over a thousand flips, but what does it *really* mean in a single flip? (I’ll explore this more in a short piece I’m working on in response to a thought experiment due to Taleb, involving a character named Fat Tony.)

Some people win the lottery, some 30-year-olds get lung cancer, and things with a probability of practically zero happen commonly.

This treatment of even frequency-based (aka, *ontic* or *aleatoric* or *objective* or *built-into-the-physical-world*) probability blurs the line between it and subjective (aka, *Bayesian* or *personal* or *epistemic* or *psychological*) probability, which is the space in which the tension I’m exploring here begins to express itself.

Having studied a fair amount of (though still not nearly enough; working on it…) probability mathematics, I’ve become more interested in the theoretical and historical foundations of probability. To that end, I’ve recently read Taleb’s aforementioned *The Black Swan*, and am now about halfway through Ian Hacking’s *The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference* (1975; 2006, 2nd edition). Fascinating stuff.

#### Footnotes:

- Nassim Taleb, in his excellent 2007 book *The Black Swan: The Impact of the Highly Improbable*, describes the Ludic Fallacy: what we do when we use easily conceivable and definable games as models for generalizing about uncertainty in the, by contrast, infinitely messier and more complex Real World. I agree with Taleb, which is why I’m quick to point out that our statistical and probabilistic models are just that: models we’ve constructed for the convenience of our limited human minds (and, increasingly, for computers constrained by computational capacity and scarcity of time). That said, much of what I love about scenarios such as the *Sleeping Beauty Problem* is that even seemingly simple games aren’t always so easy to make sense of.
- Persi Diaconis, Susan Holmes, and Richard Montgomery, “Dynamical Bias in the Coin Toss,” *SIAM Review*, Vol. 49, No. 2 (Jun. 2007), pp. 211–235.
- Given that the coin has already landed Tails, I consider the probability that it’s Tails to be settled: it’s 1. From your point of view, however, you’re assessing the likelihood of what happened in the past. Or, put differently, the likelihood of your guess (and thus your utterance, since you want to win) correctly corresponding to how the coin landed.
- I’ve added this because of the way I’m framing the outcomes, i.e., after the coin has landed. Suppose you have a tendency, or even a policy, of always guessing Heads. Before the coin lands you have a .5 chance of winning. But after the coin lands Heads, your probability of winning is 1. Over several trials you’ll win half the time whatever strategy you use, so I’m making sure the outcome of a given single trial reflects that.
- I’m leaving out the “Probability that it Landed Heads” here. There’s a variation of the Sleeping Beauty Problem in which she’s woken up on Monday before the coin has been flipped. The implications of the results are the same whether “Heads or Tails?” is asked before or after the flip, but to keep things simpler I’ll continue with the coin having been flipped before the game-player is asked.
- Though I suspect I’ll revisit it later, as I’d like to understand this problem better, in particular its implications; it strikes me as a serious problem, not as a mere party trick, but such problems in our math-y models tend to get ignored (or axiomatized out if they do seem serious).
- Statistics plays a similar role, but about the past.
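The footnote above about a standing policy of always guessing Heads can be checked with a short simulation (my own sketch, with a hypothetical function name): once a given flip lands, that trial’s outcome is settled at probability 0 or 1, yet across many trials the fixed policy wins about half the time.

```python
import random

def always_heads_win_rate(trials, seed=0):
    """Guess Heads on every trial; return the fraction of trials won.
    Each individual trial is settled (won or lost) the moment the coin
    lands; only the long run exhibits the 50% win rate."""
    rng = random.Random(seed)
    wins = sum(rng.choice("HT") == "H" for _ in range(trials))
    return wins / trials

print(always_heads_win_rate(100_000))
```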

But there is a simple way to resolve the problem. The roadblock with any betting argument, which you “discussed around” in this blog, is that probability theory doesn’t address how to handle a wager where the amount wagered (or the number of times the same amount is wagered) depends on the result of the wager. So remove that variability.

Note that it doesn’t matter whether, after Heads, you wake Rip van Winkle (I personally don’t find anything sexist in the reference to a well-known folk tale with aspects similar to the problem; but to balance the fact that you do, I’ll use a different one) on Monday or on Tuesday. All that matters is that he can’t know which day it is, and will sleep through the other.

It also doesn’t matter whether this single-waking option occurs after Heads or after Tails, as long as the question he is asked is about whichever coin result is the one under which he wakes only once.

So, use four volunteers and the same coin:

1) Always wake RvW1 on Monday, but on Tuesday only after Tails, as in the original Sleeping Beauty Problem.

2) Always wake RvW2 Tuesday, but on Monday only after Tails.

3) Always wake RvW3 on Monday, but on Tuesday only after Heads.

4) Always wake RvW4 Tuesday, but on Monday only after Heads.

On each day, you will wake exactly three of these men. Put them in a room together. Don’t tell them which they are (or equivalently, tell each, but don’t let them reveal their schedules to the others). Then ask each for his credence that he is the one, of these three, that will be woken only once during the experiment.

Each man’s question is essentially the same as the original Sleeping Beauty’s. Yet posed this way, each can clearly have only one answer: 1/3.
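The four-volunteer setup above can be verified mechanically. This is my own sketch (the schedule encoding is an assumption about how to formalize the comment, not part of it): for either coin result, each day’s room holds exactly three men, exactly one of whom is the man woken only once during the experiment.

```python
def awake(volunteer, day, coin):
    """True if the given volunteer (1-4) is awake on the given day
    ('Mon'/'Tue') for the given coin result ('H'/'T'), following the
    four schedules in the comment above."""
    schedules = {
        1: {"Mon": True,        "Tue": coin == "T"},  # RvW1
        2: {"Tue": True,        "Mon": coin == "T"},  # RvW2
        3: {"Mon": True,        "Tue": coin == "H"},  # RvW3
        4: {"Tue": True,        "Mon": coin == "H"},  # RvW4
    }
    return schedules[volunteer][day]

def woken_once(volunteer, coin):
    """True if this volunteer wakes on exactly one of the two days."""
    return sum(awake(volunteer, d, coin) for d in ("Mon", "Tue")) == 1

for coin in ("H", "T"):
    for day in ("Mon", "Tue"):
        room = [v for v in (1, 2, 3, 4) if awake(v, day, coin)]
        assert len(room) == 3                               # three men per room
        assert sum(woken_once(v, coin) for v in room) == 1  # one once-waker
```

Since every room of three always contains exactly one once-waker, and none of the three can tell which man he is, each man’s credence that he is that one comes out to 1/3.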

Hi JeffJo,

I intentionally left out mention of betting or wager amounts and instead put it in terms of winning. I’m not thinking of expected value, etc. so much as just “what general outcome do I desire?”

I really like the “four volunteers” example. It might convince me. I’ll keep thinking about it!

Well, I’d say that “winning” is just a form of a wager where you eliminate the values. But I disagree that Lewis, and to a lesser extent Elga, stick to actual mathematics. Lewis ignores causality altogether, and forces the math to adhere to his assertion that there is no new information. The problem is that he never defines what new information is. Elga tries to avoid defining it by looking at the information that exists in two overlapping subsets of the problem, and anchoring the solution on the intersection.

But there is an unorthodox way to define it. Since you seem open-minded, I’ll share it. A sample space can be defined as the complete set of disjoint outcomes that all have a positive probability. “New information” is anything that makes this definition no longer hold.

Usually that happens when the information eliminates some of the outcomes. You update the probabilities by dividing each that remains, by the sum of what remains.

The Sleeping Beauty Problem doesn’t eliminate anything, but it still invalidates the sample space {H,T}. The two outcomes T1 and T2, where the numeral indicates the day, represented the same outcome before the experiment started (and in Lewis’s treatment). But they are clearly disjoint occurrences to the volunteer who gets awakened. Lewis ignored this, and Elga isolated two ways of pairing T1 with one other outcome, without explaining why he could.

My point is that the “new” sample space is the set of now-disjoint outcomes {H1,T1,T2}, where each had a probability of 1/2 in the previous sample space. The same updating procedure applies; divide each by the sum 3/2.
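That renormalization step can be written out in a few lines (a sketch of the arithmetic just described, not code from the comment):

```python
# Before the awakening, the sample space {H, T} gave each outcome 1/2.
# After an awakening, the disjoint outcomes are H1, T1, T2, each still
# carrying its old weight of 1/2; renormalize by the sum, 3/2.
old_weights = {"H1": 0.5, "T1": 0.5, "T2": 0.5}
total = sum(old_weights.values())  # 3/2
credences = {k: w / total for k, w in old_weights.items()}
print(credences)  # each outcome: (1/2) / (3/2) = 1/3
```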

Thanks for these thought-provoking notes. I’ll need to revisit this problem at some point with your comments in mind.