Newcomb’s Problem: Of (At Least) Two Minds

Estimated read time (minus contemplative pauses): 10 min.

Newcomb’s problem, first publicized in Robert Nozick’s 1969 paper “Newcomb’s Problem and Two Principles of Choice,” is often presented as a thought experiment about decision making: about what a rational deliberating agent should do, what the problem implies for decision-theoretic principles, what it means for our notions of rationality, and so on. I’m fascinated, as others are, by the problem for what it might reveal about our intuitions concerning free will, mind-body interaction, and determinism, and for the meta-theoretical fact that it divides expert intuitions. It’s the latter stream of interests that motivates this writing.

Before going further, here’s Newcomb’s problem:

You’re offered two boxes, Box A and Box B. Box A is transparent and contains $1,000. Box B is opaque, so you can’t see its contents; it may or may not contain $1,000,000. You’re free to take both boxes, along with their contents. Alternatively, you may take only Box B, along with its contents. Here’s the catch. Yesterday, a highly skilled forecaster predicted whether you will take both boxes, or only Box B. If the prediction was that you’ll take both boxes, then Box B is now empty. If the prediction was that you’ll take only Box B, then Box B now contains $1,000,000. Do you take both boxes? Or just Box B?

Nozick, who credits the problem to physicist William Newcomb (with whom Nozick discussed the problem), notes that the chooser believes, “almost certainly,” that the prediction will be correct. In subsequent discussions of the problem by others, I’ve seen this represented as a 99% or 100% success rate for the predictor. I’ll touch on this difference below.
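
A quick, purely illustrative aside on that difference: if you treat the predictor’s reliability as a probability p that your choice matches the prediction, then a simple expected-value calculation (the sort of thing Me-3 tries, and fails, to bring up below) favors taking only Box B at 100%, at 99%, and indeed at anything above roughly 50.05%. Here’s a minimal sketch of that arithmetic; the payoffs come from the problem itself, but the “evidential” framing is just one reading among several, which is part of what the dialogue below is about:

```python
# Illustrative expected-value sketch for Newcomb's problem under a simple
# "evidential" reading: p is the probability that the prediction matches
# your actual choice. This is one framing among several, not a verdict.

PAYOFF_A = 1_000        # transparent Box A
PAYOFF_B = 1_000_000    # opaque Box B, filled only if one-boxing was predicted

def ev_one_box(p: float) -> float:
    # You get the million only if the predictor correctly foresaw one-boxing.
    return p * PAYOFF_B

def ev_two_box(p: float) -> float:
    # You always get the $1,000; the million shows up only if the predictor
    # was wrong (it expected one-boxing while you took both).
    return PAYOFF_A + (1 - p) * PAYOFF_B

for p in (1.0, 0.999, 0.99, 0.51):
    print(f"p = {p}: one-box EV = ${ev_one_box(p):,.0f}, two-box EV = ${ev_two_box(p):,.0f}")

# Break-even: p * 1,000,000 = 1,000 + (1 - p) * 1,000,000, i.e. p = 0.5005.
# Above about 50.05% reliability, this reading says take only Box B.
```

The two-boxer’s “dominance” reasoning, of course, refuses to let p do this work at all, since the contents are already fixed; that disagreement is the whole show below.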

Nozick also writes of the “large number of people, both friends and students” with whom he had shared the problem, noting that “to almost everyone it is perfectly clear and obvious what should be done… these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.” In a footnote, he wonders whether a psychologist might one day investigate the phenomenon. (Has that been done?) I’ve also heard—e.g., in this recent presentation by Jessica Collins—that people tend not to change their minds, whatever argument they hear.

I find myself to be agnostic, however. Maybe that’s not the way to put it. Of two distinct minds? Maybe three. Here’s how it could go were I in Newcomb’s scenario. Note that “LD” is short for “Laplace’s Demon.” I’ll also note that the final question asked here strikes me as the most interesting: What happens if you know what the forecaster predicted? For example, what if you yourself are a highly reliable predictor?

Me-1:  LD is all-knowing and never wrong. It will predict your choice. It has predicted your choice. Take only Box B and be done with it.

Me-2: Yeah, but LD has also already put the money in or not. It makes no difference what I do now. Suppose Box B contains a million dollars and I take only that box. It would have had those same contents had I taken both boxes.

Me-1: Yes, but LD would have predicted that and would have left Box B empty. So…

Me-2: But if I take both and Box B turns out to be empty, that box would’ve been empty had I taken just Box B.

Me-1: LD would have predicted that, too.

Me-2: But there’s already a fact of the matter! It’s settled! What if I choose no boxes or call a friend to take them both?

Me-1: Uh… I don’t know… that’s not in the rules…

Me-2: Maybe I should flip a coin! LD would have predicted I’d do that.

Me-1: Right.

Me-2: Ok. Heads for Box A, Tails for Box B. [Flips coin.] It’s Heads! But I don’t have to follow the coin.

Me-1: LD would have predicted that.

Me-2: But what Box B contains is already a settled fact. It’s done. I might as well take both.

Me-1: You’re poor. Remember all that debt you accumulated getting a philosophy degree in New York? And think of your family. Some of them could use help. Why risk it? To make a gesture of theoretical dedication? This isn’t a classroom. You can spare the $1,000 if you make a million. Would you take the risk if Box A only contained a penny?

Me-2: I think I’d just take Box B in that case.

Me-1: So why risk it for a thousand?

Me-2: What if LD were known to be only 99% reliable? Would your advice change?

Me-1: My advice? We make the same contribution to your executive function and have the same sway over your motor control, over the hands that will grab the boxes, and so forth.

Me-2: Sometimes… but today I think I’ve got the wheel. Oh! What if I decide to take just Box B, but at the moment of truth I slip and fall, accidentally grabbing both boxes for support, then kind of scuttling out of the room? Or, even better, what if an evil genius assumes control of my body and takes both boxes, while I mentally protest and remain committed to just taking Box B? What if severe cognitive dissonance about what to do amounts to essentially just that kind of outcome?!?

Me-1: Uh…

Me-2: Or what if I can’t make up my mind… is this problem about what I do behaviorally or cognitively?

Me-3: If I may, the expected value…

Me-1 & Me-2: Shut up, Me-3!

Me-1: To your first question, Me-2. At 99% reliability, LD is now fallible. I might say take both.

Me-2: 99.9%?

Me-1: I see where you’re going with this.

Me-2: I don’t.

Me-1: It’s also already settled whether you take both boxes. So just exercise your “free will” and “choose.”

Me-2: But “choosing a box” is a vague, arbitrary event conceptually, and is just another movement of particles as far as the universe is “concerned.” So, to be clear, all the little acts leading up to the choice are predetermined, not merely my choosing one or the other option in some general way; that is, LD can’t say, “I don’t know how it’ll happen, but in the end, you’ll take option X.” So, if LD’s prediction is correct, I could flip 11 coins and interpret them in some convoluted way, and it’ll turn out that all that’s just part of the chain of events leading to the choice in question, a choice LD has successfully predicted, along with all those intermediate steps. So! What about something less deterministic, like the state of a quantum event? That’s just leaving it to chance, and I’m risking a million bucks, but it would also mean that LD isn’t 100% accurate.

Me-1: The prediction is for the decision you’ll make.

Me-2: But not for predicting quantum behavior? But I can use that behavior to decide.

Me-1: Do you have access to something like that?

Me-2: I don’t. But someone does… or could. I could call someone who does.

Me-1: LD will have predicted you don’t have access and won’t call anyone who does. Besides, the problem stipulates a perfectly deterministic universe, or at least context.

Me-2: Does it? Is that a fair stipulation? If not, doesn’t the very possibility of using quantum states mean LD isn’t really all-knowing?

Me-1: Depending on what theory of quantum mechanics turns out to be true, your mind, whose workings LD apparently can predict, might influence…

Me-2: Ah, forget all that! But it reminds me, maybe it’s just about belief. I have to believe LD is accurate; I’m the one who’s nearly certain in Nozick’s article.

Me-1: That motivates the problem as a thought experiment in certain contexts (e.g., epistemological ones). It has nothing to do with what’s in that box right now.

Me-2: Now you take my side?

Me-1: No. I take Box B alone.

Me-2: I’m rejecting the 100% or even 99% accuracy. It makes no sense. The act itself is arbitrary. Mental events can’t be predicted with that level of accuracy.

Me-1: It’s just a thought experiment for thinking about theories surrounding decision making.

Me-2: But here I am in the actual scenario. I have to choose. I believe there must be a correct answer concerning the extent to which predictions can be reliable. But here’s the short of it. The boxes’ contents are settled. If LD predicted correctly, there was nothing I could have done differently, and I certainly can’t change that now.


Ultimately, I view this scenario as just another product of the long, misguided, and increasingly convoluted project of…

Me-1: The human quest for knowledge?

Me-2: I was going to say understanding human action by way of pegging rules onto rational behavior—whether as observations of intrinsic regularities in behavior, or as externally forced attempts to impose normativity. The human mind—the thing often referred to by behavioral and cognitive scientists as the “black box” whose contents can’t be seen (appropriate here!)—is messier than that.

But ok, “human quest for knowledge” works, too.

Anyway, I thus reject the intelligibility or coherence or whatever of the thought experiment, even though I find myself in a room with convincing evidence that I’m faced with that scenario.

Me-1: That’s not being charitable to the thought experiment. You’re supposed to give an answer to what you’d do if, and I’m stressing the “if” here, LD has (near-)perfect powers of prediction.

Me-4: [With an unearned air of pretense] As the resident concept architect, I beg to protest…

Me-1 & Me-2: Shut up, Me-4!

Me-2: But doesn’t choosing both boxes imply a rejection of those powers of prediction? This observation underscores the importance of knowing what I’m supposed to believe about those powers. Seems to me that 100% certainty in those powers leaves taking only Box B as the only coherent choice, while 99% certainty opens up the small possibility of LD being wrong.

ENDING, STREAM ONE:

Me-2: [Exhausted] Ugh… anyway… this would go much better if I knew how LD makes its predictions. Did it read my blog and interview my friends, or does it just rely on the accumulative and successive interaction of arbitrarily small physical events? If I’d maybe been given a chance to do a trial run where I try to fool it… maybe I’d crack it. Anyway, it’s settled, I’m taking both.

[Grabs both boxes. Pops open Box B] There’s something here!

Me-1: Is it a check?!?

Me-2: [Excitement quickly dampens…] It’s some papers… with a Post-it note. The note says: “I knew you’d take both. Kindly, LD.”

The papers look like a script. “Me-1: LD is all-knowing and never wrong. It will predict your choice. It has predicted your choice. Take only Box B and be done with it.” It’s a transcript of our conversation. It ends with, “There’s something here!” Ugh again!

Me-1: [Sarcastically] Well, at least you made the thousand, you know, because that transcript would have been in there either way.

Me-2: Shut up, Me-1.

ENDING, STREAM TWO:

Me-2: [Exhausted] Ugh… anyway… this would go much better if I knew how LD makes its predictions. If I’d maybe been given a chance to do a trial run where I try to fool it… Wait! I just remembered, I still have those x-ray specs I got yesterday at the retro shop in Williamsburg. Let’s see.

[Puts on x-ray specs.] Ahhh… Box B has something in it! But it’s not a check or cash. It’s just a little note.

Me-1: Are you sure?

Me-2: Yes.

[Takes both boxes. Opens Box B.] It says, “You cheated. But I still win, sort of. I knew you’d see into the box. At this moment, I’m nearly certain that if you see the money in Box B, you’ll take both boxes. And that if you see Box B is empty, you’ll take both. So I predict you’ll take both. Kindly, LD.”

Me-1: Yeah, “sort of.”

Me-2: Hmmm… right… I don’t think I buy this. Does it suggest a possible reversal of the problem, creating a paradox for LD? Suppose that, instead of seeing inside the box, I’m somehow able to learn what LD predicted I’d do?

Me-1: What if LD tried to predict for another predictor? Which I guess would be like it trying to predict what it itself would do?

Me-2: Yeah, if I had an LD-2 for predicting what LD-1 will do… I could have gone against LD-1’s prediction, and at least proved LD-1’s prediction wrong.

Me-1: Is that worth more to you than a million dollars?

Me-2: Maybe so. But suppose I learn what LD predicts, and LD predicts I’ll learn it. Suppose I also fully intend to do the opposite of whatever LD predicts?* Ok, I can’t process all that right now. At least I made a thousand dollars.

Me-1: That doesn’t offset the million you lost.

Me-2: I can’t lose what I never had. Besides, apparently I’m genetically predisposed to be a two-boxer, and I couldn’t change that yesterday, particularly since I didn’t know I’d be in this situation today.

Me-1: Maybe you could have, if you’d approached this problem rationally when you first learned about it three years ago. So, you did lose the million, because…

…THE END (sort of)

[*For more on this question, see my post: Laplace’s Demon Defeated by Human Consciousness. It occurs to me at this moment that it amounts to a kind of paradox to be solved by the likes of Newcomb’s (nearly?) infallible predictor.]
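
Since that footnote gestures at the paradox, here’s a toy sketch of why disclosing the prediction breaks the predictor: if I’m firmly committed to doing the opposite of whatever LD announces, there’s no announcement LD can make that comes out true. (This is only an illustration of the self-reference problem, assuming a committed contrarian and only the two options from the problem; it says nothing about how LD actually forms its predictions.)

```python
# Toy illustration of the disclosure paradox from the footnote: a committed
# contrarian agent falsifies any announced prediction. Purely illustrative.

OPTIONS = ("one-box", "two-box")

def contrarian_choice(announced_prediction: str) -> str:
    # The agent's policy: do whatever was NOT predicted.
    return "two-box" if announced_prediction == "one-box" else "one-box"

for prediction in OPTIONS:
    choice = contrarian_choice(prediction)
    print(f"LD announces {prediction!r}; agent chooses {choice!r}; "
          f"prediction correct? {prediction == choice}")

# Both lines print False: once the prediction is announced to this agent,
# no announced prediction can be correct, so an infallible, disclosing LD
# and a committed contrarian can't coexist.
```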






